The proliferation of large language models (LLMs), exemplified by systems such as ChatGPT, has driven a paradigm shift in educational technology, enabling closer human-AI collaboration. As the field advances, multimodal AI models have emerged that support interaction through varied channels, from text to imagery and audio-visual content, enriching educational interfaces and transforming how pedagogical content is generated, curated, and summarized. However, the rise of LLMs and their multimodal counterparts, Large Multimodal Models (LMMs), has not been without challenges: these systems are increasingly susceptible to adversarial manipulation, which can undermine the integrity of the educational process. This paper examines the security, privacy, compliance, and trustworthiness of LLMs and LMMs, offering a comprehensive survey of their vulnerabilities. We describe the principal adversarial tactics targeting these models and present contemporary mitigation strategies.