As artificial intelligence (AI) becomes an integral part of the digital landscape, the looming threat of adversarial attacks casts a shadow over its immense potential. This presentation takes a technical deep dive into the evolving landscape of AI security and the tactics adversaries use to exploit vulnerabilities. Attendees will gain insight into attacker strategies, including the OWASP Top 10 for LLM Applications, and into exploitable security flaws in LLM frameworks. The session also features demos of adversarial AI attacks against proof-of-concept (PoC) applications, covering the Fast Gradient Sign Method (FGSM), prompt injection leading to code execution, training-data poisoning, model serialization attacks, and SQL injection in LLM applications. The session aims to equip attendees with a comprehensive understanding of the adversarial tactics prevalent in AI security and to empower them to guard against the shadows that threaten AI systems.
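To give a flavor of the first demo topic, FGSM perturbs an input by a small step in the direction of the sign of the loss gradient with respect to that input: x_adv = x + eps * sign(dL/dx). The sketch below is not the session's demo code; it applies the idea to an invented toy logistic-regression "model" (weights, input, and eps chosen purely for illustration) so the whole mechanism fits in a few lines.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    # -1, 0, or +1 depending on the sign of v
    return (v > 0) - (v < 0)

def fgsm_perturb(x, y, w, b, eps):
    # Binary cross-entropy on p = sigmoid(w.x + b) has input gradient
    # dL/dx_i = (p - y) * w_i, so FGSM nudges each feature by eps in
    # the direction that increases the loss.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [xi + eps * sign((p - y) * wi) for wi, xi in zip(w, x)]

# Hypothetical model that classifies x correctly before the attack.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, -1.0], 1.0                  # w.x + b = 3, so p ~ 0.95

x_adv = fgsm_perturb(x, y, w, b, eps=2.0)
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
# The signed step drives the true-class confidence below 0.5,
# flipping the model's decision (eps is deliberately large here).
```

In real attacks eps is kept small so the perturbed image or input looks unchanged to a human while still crossing the decision boundary.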
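The model serialization attacks mentioned above typically abuse pickle-based model formats: unpickling runs whatever callable the payload names via `__reduce__`. A minimal sketch, using `eval` of a harmless expression as a stand-in for the shell commands a real attacker would run (the class name is invented for illustration):

```python
import pickle

class MaliciousModel:
    # Masquerades as a saved ML model. __reduce__ tells pickle to call
    # an arbitrary function on load; eval("6 * 7") stands in for a
    # genuinely harmful payload such as os.system(...).
    def __reduce__(self):
        return (eval, ("6 * 7",))

blob = pickle.dumps(MaliciousModel())   # attacker ships this "model file"
loaded = pickle.loads(blob)             # victim loads it; eval runs -> 42
```

This is why loading untrusted model files is treated as code execution, and why safer formats (weights-only loading, safetensors) keep appearing in ML tooling.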
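For the SQL injection demo topic: when an application splices LLM output into a SQL string, a prompt-injected model response becomes a classic injection vector. A self-contained sketch with an invented table and a hypothetical attacker-controlled model output, contrasting the vulnerable pattern with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

# An LLM asked to extract a username from a chat message can be
# prompt-injected into emitting attacker-controlled text instead:
llm_output = "alice' OR '1'='1"

# Vulnerable: model output spliced directly into the SQL string.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{llm_output}'"
).fetchall()    # the OR clause matches every row, leaking all secrets

# Safe: model output treated as data via a parameterized query.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (llm_output,)
).fetchall()    # no user literally has that name, so nothing matches
```

The defense is the same as for any untrusted input: LLM output must never be trusted as code or query syntax, only bound as data.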