Large language models (LLMs) have demonstrated extraordinary capabilities across a wide range of tasks, such as text generation and summarization, language translation, and question answering. LLMs are now widely used tools in natural language processing (NLP), capable of analyzing complex linguistic patterns and producing contextually relevant responses. While offering significant advantages, these models are also vulnerable to security and privacy attacks, such as jailbreaking, data poisoning, and personally identifiable information (PII) leakage. This survey provides a thorough review of the security and privacy challenges of LLMs, along with the application-specific risks they pose in domains such as transportation, education, and healthcare. We assess the extent of LLM vulnerabilities, investigate emerging security and privacy attacks against LLMs, and review potential defense mechanisms. Additionally, the survey outlines existing research gaps and highlights future research directions.