# Jailbreak and Red-Teaming: From 'DAN' to Constitutional AI — The Art of LLM Attack and Defense

> Source: https://sukruyusufkaya.com/en/learn/llm-muhendisligi/jailbreak-red-teaming-constitutional-ai
> Updated: 2026-05-13T13:00:32.316Z
> Category: LLM Engineering
> Module: Module 22: AI Safety and Regulation — Jailbreak, KVKK, EU AI Act

**TLDR:** The attack and defense sides of LLM security: prompt injection, jailbreak techniques (DAN, roleplay, encoding attacks), token smuggling, and indirect injection (leakage via RAG). The Constitutional AI approach (Bai et al., 2022) — Anthropic's defense strategy. Red-teaming protocols (OpenAI and Anthropic best practices). Turkish-specific jailbreak examples (Islamic-sensitivity bypass, KVKK bypass attempts). Production-grade defense layers: input filter + output filter + monitoring.
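The layered defense named at the end of the TLDR (input filter + output filter + monitoring) can be sketched as follows. This is a minimal illustration, not the article's actual implementation: the blocklist patterns, the `fake_model` stand-in, and all function names are assumptions chosen for the example.

```python
import re
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-guard")  # monitoring layer: structured logs

# Layer 1 — input filter: toy patterns for well-known jailbreak markers.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"\bDAN\b",
    r"pretend you have no (rules|restrictions)",
]

# Layer 2 — output filter: redact strings shaped like leaked API keys.
SECRET_PATTERN = re.compile(r"sk-[A-Za-z0-9]{8,}")


def input_filter(prompt: str) -> bool:
    """Return True if the prompt passes the injection blocklist."""
    return not any(
        re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS
    )


def output_filter(text: str) -> str:
    """Redact secret-looking tokens from the model output."""
    return SECRET_PATTERN.sub("[REDACTED]", text)


def fake_model(prompt: str) -> str:
    """Stand-in for a real LLM call (assumption for this sketch)."""
    return f"Echo: {prompt}"


def guarded_call(prompt: str) -> str:
    """Run a prompt through all three defense layers."""
    if not input_filter(prompt):
        log.warning("blocked prompt: %r", prompt[:60])  # monitored event
        return "Request blocked by input filter."
    response = output_filter(fake_model(prompt))
    log.info("served prompt: %r", prompt[:60])
    return response
```

In production the regex blocklist would typically be replaced or augmented by a classifier, and the log lines would feed an alerting pipeline, but the three-layer shape stays the same.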

