Explore offensive and defensive techniques for securing AI systems and large language models (LLMs). Topics include prompt injection, model extraction, adversarial input crafting, agent abuse, dataset poisoning, and other emerging threats to LLMs and ML pipelines. Focused on red teaming, exploit development, and practical risk assessment of AI-enabled applications.