KVKK + EU AI Act + ISO 42001 Compliance Guide: A Unified Framework for Turkish Enterprises
A unified compliance framework for AI systems covering Turkey's KVKK, the EU AI Act, and the international ISO 42001 standard. Includes a regulation-overlap matrix, EU AI Act risk levels, a 12-month implementation roadmap, a 47-item checklist, and sector-specific practices — a practical reference for C-level and compliance leaders.
One-line answer: KVKK + EU AI Act + ISO 42001 form a three-layered AI compliance framework; managing their overlapping obligations in a single management system beats running three separate programs on both cost and speed.
- Turkish enterprises operating AI systems are simultaneously subject to three frameworks: KVKK (Turkey), the EU AI Act (EU), and ISO 42001 (international voluntary standard) — one does not replace another.
- The three regulations overlap by roughly 60% — a single, unified compliance program can manage all three obligations together.
- The EU AI Act is risk-based: Prohibited, High Risk, Limited Risk, Minimal Risk. Every Turkish company serving the EU must classify its systems.
- ISO 42001 is voluntary but covers roughly 80% of EU AI Act high-risk obligations, making it the de facto starting point for C-level decision-makers.
- Full compliance typically takes 9-15 months; late starters will face heavy cost burdens during the 2026-2027 transition window.
1. Why Three Regulations at Once?
A company in Turkey building or operating AI systems is usually subject to three different regulatory frameworks at the same time:
- KVKK (Law No. 6698, Turkey, 2016): Covers every AI processing step involving personal data. Mandatory, with administrative fines for breach.
- EU AI Act (EU, 2024): Mandatory for those who place or use AI systems in the EU market. Fines may reach 7% of annual global turnover.
- ISO/IEC 42001 (International, 2023): Voluntary AI management-system standard. Certification is increasingly required in EU tenders.
Why Combine the Three?
Roughly 60% of the obligations across the three frameworks overlap. Data governance, risk assessment, documentation, human oversight, and recordkeeping are required by all three. Running them as three separate programs instead of one unified compliance architecture wastes both budget and operational capacity.
2. KVKK (Law No. 6698) and AI
- KVKK (Personal Data Protection Law)
- Turkey's primary law governing personal-data processing (Law No. 6698, 2016). Every AI training run, inference call, and data-storage step involving personal data is in scope; consent, purpose limitation, data minimization, and cross-border-transfer rules apply.
- Also known as: LPPD, Turkish GDPR
The AI Face of KVKK
KVKK is not AI-specific, but because AI systems usually process personal data, it is the first compliance layer of any AI project. Key obligations:
- Explicit consent or another legal basis. Without explicit consent from the data subject, another legal basis (contract performance, legitimate interest, etc.) must be relied on.
- Purpose limitation. Using data for AI training beyond the original purpose typically requires a new legal basis.
- Data minimization. Only necessary personal data may be processed; sending the entire customer chat history into an LLM prompt is usually a minimization violation.
- Cross-border transfer. Sending personal data to LLM providers abroad (OpenAI, Anthropic, Google) must be evaluated under Board decisions.
- Data controller obligations. VERBIS registration, privacy notice, responding to data-subject requests within 30 days.
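The data-minimization obligation above can be sketched as a redaction layer that strips personal data before a prompt leaves the company boundary. The patterns below are illustrative only; a production deployment would use a dedicated PII-detection service rather than hand-written regexes:

```python
import re

# Minimal sketch of a pre-prompt PII redaction layer.
# Patterns are illustrative; real systems need a proper PII-detection service.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "TC_ID": re.compile(r"\b\d{11}\b"),  # Turkish national ID: 11 contiguous digits
    "PHONE": re.compile(r"\b0?5\d{2}[ -]?\d{3}[ -]?\d{2}[ -]?\d{2}\b"),
}

def redact(text: str) -> str:
    """Replace detected personal data with typed placeholders before the
    text is sent to an external LLM provider (data minimization)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer (ayse@example.com, 0532 123 45 67, TC 12345678901) asks about a refund."
print(redact(prompt))
# → Customer ([EMAIL], [PHONE], TC [TC_ID]) asks about a refund.
```

Only the redacted text crosses the border; the original stays inside the controller's environment, which also narrows the cross-border-transfer question.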
KVKK Penalties
KVKK administrative fines were increased notably for 2025-2026; failure to inform, missing explicit consent, and data-security breaches can each trigger substantial fines. Board decisions are published and should be tracked as precedents.
3. EU AI Act: Risk-Based Classification
- EU AI Act (European Union Artificial Intelligence Act)
- The EU's law regulating AI systems by risk tier (Regulation (EU) 2024/1689). Adopted in 2024, it entered into force on 1 August 2024 and is being phased in between 2025 and 2027. It applies to anyone placing AI systems on the EU market, even if not established in the EU (extraterritorial effect).
- Also known as: AI Act, EU AI Law
Risk Tiers
The EU AI Act defines four risk categories, each with different obligations.
| Tier | Example Systems | Obligations | Frequency in Turkish Companies |
|---|---|---|---|
| Prohibited | Social scoring, manipulative AI, real-time biometric identification (limited exceptions) | Outright ban | Very low |
| High Risk | HR shortlisting, credit scoring, education assessment, critical infrastructure, biometrics | Risk management, quality management system, conformity assessment, human oversight, recordkeeping, user information | High - banking, health, HR SaaS |
| Limited Risk | Chatbots, deepfake generation, emotion recognition | Transparency (notifying users they are interacting with AI) | Very high |
| Minimal Risk | Spam filters, game AI, simple recommenders | None (voluntary codes of conduct) | Common |
General-Purpose AI (GPAI) Obligations
A separate set of obligations exists for foundation models. GPAI providers (OpenAI, Anthropic, Google, Mistral, Meta) are subject to technical documentation, copyright-compliance policy, and systemic-risk assessment duties.
Practical takeaway. As a Turkish company that is not a GPAI provider, these specific obligations do not bind you directly, but if you deploy GPAI-based systems, you must obtain and document your provider's compliance materials.
EU AI Act Application Timeline
The Act enters into force in phases:
- 2 February 2025: Prohibited systems and AI literacy obligation
- 2 August 2025: GPAI governance provisions, penalty regime
- 2 August 2026: High-risk system main obligations (the bulk)
- 2 August 2027: Specific high-risk categories (AI as product components)
4. ISO/IEC 42001: The AI Management System Standard
- ISO/IEC 42001:2023
- The first international standard for AI management systems (AIMS), published in December 2023. Positioned as the AI equivalent of ISO 27001. Voluntary, but certification is the strongest signal of enterprise AI maturity.
- Also known as: ISO 42001, AIMS
What ISO 42001 Covers
The standard provides a management-system framework for responsibly, auditably, and sustainably managing AI systems:
- AI policy and objectives
- Risk assessment and treatment plan
- AI lifecycle management (planning, development, deployment, monitoring, decommissioning)
- Data management
- Human oversight and control
- Third-party management
- Performance evaluation and continual improvement
- Communication and transparency
Why ISO 42001?
ISO 42001 is voluntary, yet offers three pragmatic benefits:
- About 80% of EU AI Act high-risk obligations are addressed within ISO 42001. One certification advances two compliance fronts.
- It is becoming a tender requirement. European public and enterprise tenders increasingly cite ISO 42001 as a preference or a requirement.
- A concrete signal in investor decks. It is currently the most widely recognized international certification attesting to AI governance maturity.
Relationship with ISO 27001
Companies already certified to ISO 27001 can add ISO 42001 at 30-40% lower cost, since most documentation, audit, and governance infrastructure is already in place.
5. The Three-Regulation Overlap Matrix (Original Contribution)
The most critical tool for a Turkish compliance manager is to see exactly where the three frameworks overlap. The matrix below compares the three across seven core compliance areas.
| Area | KVKK | EU AI Act | ISO 42001 |
|---|---|---|---|
| Data Governance | Mandatory (notice, consent, minimization) | Mandatory (high risk: quality management) | Mandatory (Clause 7) |
| Risk Assessment | PIA for high-risk processing | Mandatory (high risk) | Mandatory (Clause 6.1.2) |
| Human Oversight | For profiling decisions | Mandatory (high risk) | Mandatory (Clause 8.3) |
| Transparency | Privacy notice | AI interaction disclosure (limited risk+) | Mandatory (Clause 7.4) |
| Recordkeeping & Logs | Processing inventory | High risk: log retention | Mandatory (Clause 7.5) |
| Third-Party Management | Processor contracts | Supply-chain compliance | Mandatory (Clause 8.4) |
| Incident Management | 72-hour notification | Serious-incident reporting | Mandatory (Clause 10) |
Practical meaning. Across these seven areas, a single control set can satisfy the requirements of all three regulations. When designing your compliance program, build one program per area, not one program per regulation — that is the correct architecture.
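The "one program per area" idea can be made concrete as a control registry in which each control is tagged with the regulations it counts toward. The control names and mapping below are illustrative, not an official crosswalk:

```python
from dataclasses import dataclass, field

# Sketch of a "one control set, three regulations" registry mirroring the
# matrix above. Control names and mappings are illustrative examples.
@dataclass
class Control:
    name: str
    area: str
    satisfies: set = field(default_factory=set)  # {"KVKK", "EU_AI_ACT", "ISO_42001"}

CONTROLS = [
    Control("Processing inventory + audit logs", "Recordkeeping",
            {"KVKK", "EU_AI_ACT", "ISO_42001"}),
    Control("Human review of automated decisions", "Human Oversight",
            {"KVKK", "EU_AI_ACT", "ISO_42001"}),
    Control("Incident notification procedure", "Incident Management",
            {"KVKK", "EU_AI_ACT", "ISO_42001"}),
]

def coverage(regulation: str) -> list[str]:
    """Controls that count toward a given regulation."""
    return [c.name for c in CONTROLS if regulation in c.satisfies]

print(coverage("KVKK"))
```

Because most controls carry all three tags, a per-regulation audit becomes a filtered view of one registry rather than a separate program.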
6. Practical Guide to Risk Classification
Determining which EU AI Act risk category an AI system falls into is the first step of the compliance program. A practical decision matrix:
EU AI Act Risk-Tier Determination — 5 Steps
Practical classification of an AI system's risk tier.
1. Check the Prohibited List
Is the system among the prohibited practices under Article 5 (manipulative behavior, social scoring, real-time biometric identification, etc.)? If yes, the system cannot be placed on the EU market.
2. Check Annex III
Annex III lists the high-risk categories: biometrics, critical infrastructure, education, employment, public services, law enforcement, migration, justice, democratic processes. Is your system on this list?
3. Article 6(3) Exemption
For Annex III systems, Article 6(3) allows limited exemptions (narrow procedural or preparatory tasks, no material influence on human decisions); systems that perform profiling always remain high risk. A detailed, documented assessment is required.
4. Transparency Obligation
If not high risk, does the system (a) interact with a person, (b) perform emotion recognition or biometric categorization, or (c) generate deepfakes or other AI-generated content? If yes, the system is limited risk and carries a transparency obligation.
5. Minimal Risk Default
Systems not falling into any of the above are minimal-risk. No specific obligations apply beyond voluntary codes of conduct.
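The five steps above can be sketched as a triage function. The input flags are self-assessment answers, and the function is a planning aid, not a substitute for legal analysis:

```python
# Sketch of the 5-step EU AI Act triage as a decision function.
# Flags are self-assessment answers; this does not replace legal review.
def classify_risk(prohibited: bool, annex_iii: bool, art6_exempt: bool,
                  transparency_trigger: bool) -> str:
    if prohibited:
        return "PROHIBITED"   # Step 1: Article 5 practices
    if annex_iii and not art6_exempt:
        return "HIGH"         # Steps 2-3: Annex III without an exemption
    if transparency_trigger:
        return "LIMITED"      # Step 4: chatbot / deepfake / emotion recognition
    return "MINIMAL"          # Step 5: default tier

# Example: an HR CV-screening tool (Annex III - Employment, no exemption)
print(classify_risk(False, True, False, True))  # → HIGH
```

Note the ordering: a high-risk classification takes precedence even when a transparency trigger is also present, which matches the tiering logic of the Act.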
Most Common High-Risk Scenarios in Turkish Companies
- HR SaaS (CV screening, interview assessment): Annex III - Employment
- Credit-application scoring: Annex III - Access to essential services
- Education and exam assessment: Annex III - Education
- Biometric identification systems: Annex III - Biometrics
- Public-service application assessment: Annex III - Public services
7. 12-Month Implementation Roadmap
KVKK + EU AI Act + ISO 42001 12-Month Compliance Roadmap
A phased plan to build a three-layered compliance program from scratch.
Months 1-2: Inventory and Current State
AI system inventory (existing + planned), personal-data inventory (KVKK), gap analysis across the three regulations. Output: compliance posture report.
Months 2-3: Governance and Policy
AI Committee setup, AI policy, acceptable-use policy, ethical principles, RACI matrix. Update KVKK privacy notices to cover AI processing.
Months 3-5: Risk Assessment
EU AI Act risk classification per system, KVKK PIA, ISO 42001 risk treatment plan. Output: system-level risk files.
Months 4-7: Technical Controls
For high-risk systems: quality management system, eval harness, audit logs, observability, human-oversight mechanisms. Anonymization layer, data-residency options.
Months 6-9: Documentation
Technical documentation (EU AI Act Annex IV), user information notices, third-party agreements, training materials.
Months 9-11: Training and Operationalization
AI-literacy training for all AI-relevant personnel (EU AI Act Article 4 obligation), embedding compliance into day-to-day operations.
Months 11-12: Audit and Certification
Internal audit, external pre-audit if applicable. If targeting ISO 42001, plan the formal certification audit.
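The inventory and risk-file outputs of the roadmap can be kept as one record per AI system. The fields below are an illustrative minimum, not a prescribed schema:

```python
from dataclasses import dataclass

# Sketch of a per-system compliance record: the unit of work for the
# months 1-2 inventory and months 3-5 risk files. Fields are illustrative.
@dataclass
class AISystemRecord:
    name: str
    owner: str                       # accountable team (RACI "A")
    processes_personal_data: bool    # triggers the KVKK track
    eu_exposure: bool                # triggers the EU AI Act track
    risk_tier: str = "UNCLASSIFIED"  # PROHIBITED / HIGH / LIMITED / MINIMAL
    pia_done: bool = False           # KVKK privacy impact assessment
    annex_iv_docs: bool = False      # EU AI Act technical documentation

    def open_actions(self) -> list[str]:
        actions = []
        if self.eu_exposure and self.risk_tier == "UNCLASSIFIED":
            actions.append("Classify under EU AI Act")
        if self.processes_personal_data and not self.pia_done:
            actions.append("Run KVKK PIA")
        if self.risk_tier == "HIGH" and not self.annex_iv_docs:
            actions.append("Prepare Annex IV documentation")
        return actions

rec = AISystemRecord("cv-screening", "HR Product", True, True)
print(rec.open_actions())  # → ['Classify under EU AI Act', 'Run KVKK PIA']
```

Keeping one record per system makes the gap analysis (month 1) and the risk files (months 3-5) two views of the same data rather than two documents.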
8. Common Mistakes
8.1. "I don't sell in the EU, so the EU AI Act doesn't apply to me"
Wrong. Indirect EU market exposure (e.g., an EU customer of your SaaS, an EU subsidiary that performs AI processing) brings you into scope. The right question is: "Can my system affect a person in the EU?"
8.2. Leaving KVKK to the data team alone
KVKK compliance is not solely a data/IT matter; product, legal, sales, and customer service must collaborate. The "AI Committee" is precisely the structure to solve this.
8.3. Treating ISO 42001 as mandatory (or ignoring it)
ISO 42001 is voluntary, but because it satisfies ~80% of EU AI Act high-risk obligations in one stroke, it is a strategically strong choice. "I won't bother because it's not mandatory" creates a tender disadvantage against certified competitors.
8.4. Postponing AI literacy training
EU AI Act Article 4 — from 2 February 2025, you must provide adequate AI-literacy training to personnel who develop, use, or operate AI systems. This applies even to companies without a high-risk system.
8.5. Lack of third-party-model (GPAI) supplier management
Failing to obtain compliance documentation from GPAI providers such as OpenAI, Anthropic, or Google creates serious risk in production deployments. If contracts and compliance documentation are missing, EU AI Act responsibility can fall on you as the deployer.
8.6. Delaying eval harness and audit logs to "later"
Both the EU AI Act and ISO 42001 require continuous monitoring and recordkeeping. Without audit logs, compliance cannot be proven. This is a Day-1 investment; adding it later is 3-5x more expensive.
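A Day-1 audit log can start as small as a decorator that writes one structured record per inference call. This sketch logs prompt and output lengths rather than content (data minimization applies to logs too); the model name is hypothetical and an in-memory list stands in for append-only storage:

```python
import functools
import time
import uuid

# Minimal sketch of a Day-1 audit log for AI calls. In production the
# records would go to append-only storage; a list stands in here.
AUDIT_LOG: list[dict] = []

def audited(model_name: str):
    """Decorator that records one structured audit entry per call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(prompt: str, **kw):
            entry = {"id": str(uuid.uuid4()), "ts": time.time(),
                     "model": model_name,
                     "prompt_chars": len(prompt)}  # length, not content
            result = fn(prompt, **kw)
            entry["output_chars"] = len(result)
            AUDIT_LOG.append(entry)  # append-only record for later audits
            return result
        return inner
    return wrap

@audited("gpt-x")  # hypothetical model name
def answer(prompt: str) -> str:
    return "stub response"  # stand-in for a real model call

answer("What is our refund policy?")
print(len(AUDIT_LOG))  # → 1
```

Wrapping every model entry point this way from day one means compliance evidence accumulates automatically instead of being reconstructed later.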
9. Sector Notes
9.1. Banking and Finance
KVKK + BDDK + EU AI Act + ISO 42001 form a four-layer structure. BDDK's AI-relevant secondary regulation (cloud-services guideline, outsourcing) and data-residency requirements are critical. Large Turkish banks (Garanti BBVA, İş Bankası) process AI on-prem or in Turkey-region cloud.
9.2. Health
KVKK special-category provisions (health) + EU AI Act high-risk classification + medical-device regulation (MDR) apply together. Anonymization and cross-border-transfer constraints are among the strictest of any sector.
9.3. E-commerce
KVKK privacy notice + limited-risk transparency (chatbot disclosure) + GPAI supplier management are the primary compliance burdens. Profiling rules apply to recommender/segmentation systems involving customer personal data.
9.4. HR SaaS
CV screening, interview assessment, and performance scoring are high risk (Annex III - Employment). Full obligation set (quality management, human oversight, documentation) is required.
9.5. Public Sector
EU AI Act public-sector obligations (Article 26+) apply alongside the Digital Transformation Office's AI policy guidance in Turkey. Citizen data rights demand extra sensitivity.
10. Case Studies (Anonymized)
Case 1 — Turkish HR SaaS Startup, EU AI Act High-Risk Compliance
A Turkish HRTech startup planned to expand into the EU market with CV-screening and interview-assessment products. Classification: Annex III - Employment = High risk.
Intervention. Set up an AIMS under ISO 42001, prepared EU AI Act Annex IV technical documentation, implemented an explainability mechanism (XAI - decision rationale), and defined human-oversight processes.
Result. After 11 months, both EU AI Act high-risk compliance and ISO 42001 readiness were complete. Two large EU customers were won, adding roughly $1.2M in ARR.
Case 2 — Turkish Bank, KVKK + AI Governance Program
A Turkish bank lacked central AI governance; every team launched POCs independently.
Intervention. Established an AI Committee (CDO, CISO, KVKK officer, Risk, Internal Audit). KVKK PIA template, EU AI Act risk classification template, and ISO 42001 readiness plan were rolled out. All new AI projects now route through committee approval.
Result. After 8 months: a clean regulatory risk dashboard and 40% faster production rollouts thanks to more consistent processes.
Case 3 — Turkish E-Commerce Marketplace, GPAI Supplier Management
A Turkish marketplace ran 8 AI use-cases on OpenAI and Anthropic APIs. Supplier agreements lacked AI-specific clauses.
Intervention. Data Processing Agreement (DPA) updated with AI-specific clauses, PII filtering layer added (PII detection before every API call), monthly compliance report automated.
Result. KVKK risk score significantly reduced; EU customer DPIA pass rate reached 100%.
11. 47-Item Compliance Checklist (Summary)
The checklist is provided as a downloadable asset; the summary below allows a quick self-check.
Governance (7). AI Committee exists? · AI policy approved? · Acceptable-use policy published? · Ethical principles defined? · RACI matrix exists? · Incident/breach response procedure exists? · AI literacy training planned?
KVKK (10). VERBIS registration current? · Privacy notices cover AI processing? · Consent flow correct? · PIA procedure defined? · Data-minimization controls in place? · Cross-border transfer procedure defined? · Processor contracts include AI clauses? · Data-subject request process closed within 30 days? · Breach notification within 72 hours? · Data deletion/anonymization procedure defined?
EU AI Act (12). System inventory exists? · Risk classification complete? · Quality management system in place for high-risk? · Risk-management process operating? · Data-governance requirements met? · Technical documentation (Annex IV) ready? · Logging mechanism active? · Transparency and information obligations fulfilled? · Human oversight designed? · Accuracy/robustness/cybersecurity tests run? · Conformity assessment complete? · CE marking applied (for high-risk)?
ISO 42001 (10). AIMS scope defined? · AI policy aligned with ISO 42001? · Risk treatment plan documented? · Statement of Applicability ready? · Internal audit plan exists? · Management review process defined? · Corrective action process running? · Performance indicators defined and monitored? · Transparency obligations met? · Continual improvement process active?
Technical Infrastructure (8). Eval harness set up? · Audit log active across all AI systems? · Anonymization/PII detection layer in place? · Data residency determined and compliant? · Production observability (Langfuse, Helicone, etc.) active? · Model versioning and rollback process defined? · Explainability mechanisms (for high-risk) integrated? · Security tests (prompt injection, jailbreak) performed?
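For the quick self-check, the per-area item counts above can be tracked as a simple readiness score. The counts follow the summary; the scoring itself is an illustrative convenience:

```python
# Sketch: track the 47-item checklist as per-area counters and compute a
# quick readiness percentage. Item counts follow the summary above.
AREAS = {"Governance": 7, "KVKK": 10, "EU AI Act": 12,
         "ISO 42001": 10, "Technical Infrastructure": 8}

def readiness(completed: dict) -> float:
    """Percentage of the 47 checklist items marked complete."""
    total = sum(AREAS.values())  # 47
    done = sum(min(completed.get(area, 0), n) for area, n in AREAS.items())
    return round(100 * done / total, 1)

# Example: governance fully done, 8 of 10 KVKK items done
print(readiness({"Governance": 7, "KVKK": 8}))  # → 31.9
```

A per-area breakdown (rather than one global score) also shows which of the three regulatory tracks is lagging.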
12. Next Steps
To launch your company's three-layered AI compliance program or harden an existing one:
- Compliance gap analysis. Three-layer KVKK + EU AI Act + ISO 42001 gap assessment; output: prioritized action roadmap.
- AI Committee setup and governance workshop. Framework, RACI matrix, decision procedures clarified in a 2-day workshop.
- ISO 42001 readiness program. AIMS design, documentation, internal audit, and certification-audit preparation.
For details, please use the contact form on the site.
References
- KVKK, Law No. 6698 on the Protection of Personal Data (Republic of Turkiye, KVKK)
- EU AI Act, Regulation (EU) 2024/1689 (Official Journal of the EU)
- AI Act Explorer (Future of Life Institute)
- ISO/IEC 42001:2023, AI Management Systems (ISO/IEC)
- ISO/IEC 23894:2023, AI Risk Management (ISO/IEC)
- NIST AI Risk Management Framework (NIST)
- KVKK Board Decisions (Republic of Turkiye, KVKK)
- European Commission AI Act Guidelines (European Commission)
- OECD AI Principles (OECD)
- Turkey National AI Strategy 2021-2025 (Digital Transformation Office of the Presidency, Republic of Turkiye)
This is a living document; updated quarterly as regulatory texts and Board decisions evolve. The content is informational and does not constitute legal advice.