# KVKK + EU AI Act + ISO 42001 Compliance Guide: A Unified Framework for Turkish Enterprises

> Source: https://sukruyusufkaya.com/en/blog/kvkk-eu-ai-act-iso-42001-uyum
> Updated: 2026-05-13T19:57:53.022Z
> Type: blog
> Category: yapay-zeka
**TLDR:** A unified compliance framework for AI systems covering Turkey's KVKK, the EU AI Act, and the international ISO 42001 standard. Includes a regulation-overlap matrix, EU AI Act risk levels, a 12-month implementation roadmap, a 47-item checklist, and sector-specific practices — a practical reference for C-level and compliance leaders.

<callout-box data-variant="info" data-title="Important Legal Notice">

This article is informational and does not constitute legal advice. For compliance decisions specific to your organization, you must work with legal counsel specializing in KVKK and EU AI Act. The interpretations reflect texts and published guidance as of 2026; content is updated as regulatory texts evolve.

</callout-box>

<tldr data-summary="[&#34;Turkish enterprises operating AI systems are simultaneously subject to three frameworks: KVKK (Turkey), the EU AI Act (EU), and ISO 42001 (international voluntary standard) — one does not replace another.&#34;,&#34;The three regulations overlap by roughly 60% — a single, unified compliance program can manage all three obligations together.&#34;,&#34;The EU AI Act is risk-based: Prohibited, High Risk, Limited Risk, Minimal Risk. Every Turkish company serving the EU must classify its systems.&#34;,&#34;ISO 42001 is voluntary but covers ~80% of EU AI Act high-risk obligations, making it the de facto choice of C-level decision-makers.&#34;,&#34;Full compliance typically takes 9-15 months; late starters will face heavy cost burdens during the 2026-2027 transition window.&#34;]" data-one-line="KVKK + EU AI Act + ISO 42001 form a three-layered AI compliance framework; unifying their overlap in a single management system is superior in both cost and speed."></tldr>

## 1. Why Three Regulations at Once?

A company in Turkey building or operating AI systems is usually subject to three different regulatory frameworks at the same time:

- **KVKK (Law No. 6698, Turkey, 2016):** Covers every AI processing step involving personal data. Mandatory, with administrative fines for breach.
- **EU AI Act (EU, 2024):** Mandatory for those who place or use AI systems in the EU market. Fines may reach 7% of annual global turnover.
- **ISO/IEC 42001 (International, 2023):** Voluntary AI management-system standard. Certification is increasingly required in EU tenders.

<callout-box data-variant="warning" data-title="Common Misconception">

"I only care about KVKK because I'm a Turkish company" is **wrong** for any Turkish company that offers products/services to the EU market. The EU AI Act applies extraterritorially to anyone placing AI systems on the EU market, regardless of where they are established. SaaS companies with European customers, e-export manufacturers, and healthtech firms are directly within scope.

</callout-box>

### Why Combine the Three?

Roughly 60% of the obligations across the three frameworks **overlap**. Data governance, risk assessment, documentation, human oversight, and recordkeeping are required by all three. Running them as **three separate programs** instead of **one unified compliance architecture** is wasteful both in cost and operational efficiency.

## 2. KVKK (Law No. 6698) and AI

<definition-box data-term="KVKK (Personal Data Protection Law)" data-definition="Turkey's primary law governing personal-data processing (Law No. 6698, 2016). Every AI training run, inference call, and data-storage step involving personal data is in scope; consent, purpose limitation, data minimization, and cross-border-transfer rules apply." data-also="LPPD, Turkish GDPR" data-wikidata="Q56021829"></definition-box>

### The AI Face of KVKK

KVKK is not AI-specific, but because AI systems usually process personal data, it is the **first compliance layer** of any AI project. Key obligations:

- **Explicit consent or another legal basis.** Without explicit consent from the data subject, another legal basis (contract performance, legitimate interest, etc.) must be relied on.
- **Purpose limitation.** Using data for AI training beyond the original purpose typically requires a new legal basis.
- **Data minimization.** Only necessary personal data may be processed; sending the entire customer chat history into an LLM prompt is usually a minimization violation.
- **Cross-border transfer.** Sending personal data to LLM providers abroad (OpenAI, Anthropic, Google) must be evaluated under Board decisions.
- **Data controller obligations.** VERBIS registration, privacy notice, responding to data-subject requests within 30 days.

### KVKK Penalties

KVKK administrative fines have been notably increased in 2025-2026; failure to inform, missing explicit consent, and data-security breaches can produce very high penalties. Board decisions are publicly available and should be tracked as precedents.

## 3. EU AI Act: Risk-Based Classification

<definition-box data-term="EU AI Act (European Union Artificial Intelligence Act)" data-definition="The EU's law that regulates AI systems by risk tier (Regulation (EU) 2024/1689). Entered into force in March 2024 and is being phased in between 2025-2027. Applies to anyone placing AI systems on the EU market, even if not established in the EU (extraterritorial)." data-also="AI Act, EU AI Law" data-wikidata="Q123828984"></definition-box>

### Risk Tiers

The EU AI Act defines four risk categories, each with different obligations.

<comparison-table data-caption="EU AI Act Risk Tiers" data-headers="[&#34;Tier&#34;,&#34;Example Systems&#34;,&#34;Obligations&#34;,&#34;Frequency in Turkish Companies&#34;]" data-rows="[{&#34;feature&#34;:&#34;Prohibited&#34;,&#34;values&#34;:[&#34;Social scoring, manipulative AI, real-time biometric identification (limited exceptions)&#34;,&#34;Outright ban&#34;,&#34;Very low&#34;]},{&#34;feature&#34;:&#34;High Risk&#34;,&#34;values&#34;:[&#34;HR shortlisting, credit scoring, education assessment, critical infrastructure, biometrics&#34;,&#34;Risk management, quality management system, conformity assessment, human oversight, recordkeeping, user information&#34;,&#34;High - banking, health, HR SaaS&#34;]},{&#34;feature&#34;:&#34;Limited Risk&#34;,&#34;values&#34;:[&#34;Chatbots, deepfake generation, emotion recognition&#34;,&#34;Transparency (notifying users they are interacting with AI)&#34;,&#34;Very high&#34;]},{&#34;feature&#34;:&#34;Minimal Risk&#34;,&#34;values&#34;:[&#34;Spam filters, game AI, simple recommenders&#34;,&#34;None (voluntary codes of conduct)&#34;,&#34;Common&#34;]}]"></comparison-table>

### General-Purpose AI (GPAI) Obligations

A separate set of obligations exists for foundation models. GPAI providers (OpenAI, Anthropic, Google, Mistral, Meta) are subject to technical documentation, copyright-compliance policy, and systemic-risk assessment duties.

**Practical takeaway.** As a Turkish company that is not a GPAI provider, these specific obligations do not bind you directly, but **if you deploy GPAI-based systems**, you must obtain and document your provider's compliance materials.

### EU AI Act Application Timeline

The Act enters into force in phases:

- **2 February 2025:** Prohibited systems and AI literacy obligation
- **2 August 2025:** GPAI governance provisions, penalty regime
- **2 August 2026:** High-risk system main obligations (the bulk)
- **2 August 2027:** Specific high-risk categories (AI as product components)

<callout-box data-variant="warning" data-title="2026 Action Threshold">

August 2026 is the **full compliance date for high-risk AI systems**. If your system falls into the high-risk category and compliance work has not yet begun, the remaining time may not be enough for planning + execution. The risk assessment must be completed by end of Q2 2026.

</callout-box>

## 4. ISO/IEC 42001: The AI Management System Standard

<definition-box data-term="ISO/IEC 42001:2023" data-definition="The first international standard for AI management systems (AIMS), published in December 2023. Positioned as the AI equivalent of ISO 27001. Voluntary, but certification is the strongest signal of enterprise AI maturity." data-also="ISO 42001, AIMS"></definition-box>

### What ISO 42001 Covers

The standard provides a management-system framework for responsibly, auditably, and sustainably managing AI systems:

- AI policy and objectives
- Risk assessment and treatment plan
- AI lifecycle management (planning, development, deployment, monitoring, decommissioning)
- Data management
- Human oversight and control
- Third-party management
- Performance evaluation and continual improvement
- Communication and transparency

### Why ISO 42001?

ISO 42001 is voluntary, yet offers three pragmatic benefits:

1. **About 80% of EU AI Act high-risk obligations are addressed within ISO 42001.** One certification advances two compliance fronts.
2. **It is becoming a tender requirement.** A meaningful share of European Commission-related projects increasingly cite ISO 42001 as preference/requirement.
3. **A concrete signal in investor decks.** It is the only recognized international certificate that can attest to AI maturity.

### Relationship with ISO 27001

Companies already certified to ISO 27001 can add ISO 42001 at 30-40% lower cost, since most documentation, audit, and governance infrastructure is already in place.

## 5. The Three-Regulation Overlap Matrix (Original Contribution)

The most critical tool for a Turkish compliance manager is to see exactly **where** the three frameworks overlap. The matrix below compares the three across seven core compliance areas.

<comparison-table data-caption="Overlap Matrix: Seven Core Compliance Areas" data-headers="[&#34;Area&#34;,&#34;KVKK&#34;,&#34;EU AI Act&#34;,&#34;ISO 42001&#34;]" data-rows="[{&#34;feature&#34;:&#34;Data Governance&#34;,&#34;values&#34;:[&#34;Mandatory (notice, consent, minimization)&#34;,&#34;Mandatory (high risk: quality management)&#34;,&#34;Mandatory (Clause 7)&#34;]},{&#34;feature&#34;:&#34;Risk Assessment&#34;,&#34;values&#34;:[&#34;PIA for high-risk processing&#34;,&#34;Mandatory (high risk)&#34;,&#34;Mandatory (Clause 6.1.2)&#34;]},{&#34;feature&#34;:&#34;Human Oversight&#34;,&#34;values&#34;:[&#34;For profiling decisions&#34;,&#34;Mandatory (high risk)&#34;,&#34;Mandatory (Clause 8.3)&#34;]},{&#34;feature&#34;:&#34;Transparency&#34;,&#34;values&#34;:[&#34;Privacy notice&#34;,&#34;AI interaction disclosure (limited risk+)&#34;,&#34;Mandatory (Clause 7.4)&#34;]},{&#34;feature&#34;:&#34;Recordkeeping & Logs&#34;,&#34;values&#34;:[&#34;Processing inventory&#34;,&#34;High risk: log retention&#34;,&#34;Mandatory (Clause 7.5)&#34;]},{&#34;feature&#34;:&#34;Third-Party Management&#34;,&#34;values&#34;:[&#34;Processor contracts&#34;,&#34;Supply-chain compliance&#34;,&#34;Mandatory (Clause 8.4)&#34;]},{&#34;feature&#34;:&#34;Incident Management&#34;,&#34;values&#34;:[&#34;72-hour notification&#34;,&#34;Serious-incident reporting&#34;,&#34;Mandatory (Clause 10)&#34;]}]"></comparison-table>

**Practical meaning.** Across these seven areas, **a single control set** can satisfy the requirements of all three regulations. When designing your compliance program, build **one program per area, not one program per regulation** — that is the correct architecture.

## 6. Practical Guide to Risk Classification

Determining which EU AI Act risk category an AI system falls into is **the first step of the compliance program**. A practical decision matrix:

<howto-steps data-name="EU AI Act Risk-Tier Determination — 5 Steps" data-description="Practical classification of an AI system's risk tier." data-time="PT2H" data-steps="[{&#34;name&#34;:&#34;1. Check the Prohibited List&#34;,&#34;text&#34;:&#34;Is the system among the prohibited practices under Article 5 (manipulative behavior, social scoring, real-time biometric identification, etc.)? If yes, the system cannot be placed on the EU market.&#34;},{&#34;name&#34;:&#34;2. Check Annex III&#34;,&#34;text&#34;:&#34;Annex III lists the high-risk categories: biometrics, critical infrastructure, education, employment, public services, law enforcement, migration, justice, democratic processes. Is your system on this list?&#34;},{&#34;name&#34;:&#34;3. Article 6(2) Exemption&#34;,&#34;text&#34;:&#34;For Annex III systems, Article 6(2) allows limited exemptions: narrow/ancillary tasks, no influence on human decisions, no profiling. Detailed assessment required.&#34;},{&#34;name&#34;:&#34;4. Transparency Obligation&#34;,&#34;text&#34;:&#34;If not high risk, does the system (a) interact with a person, (b) perform emotion recognition / biometric categorization, or (c) generate deepfake / AI-generated content? If yes — limited risk - transparency obligation.&#34;},{&#34;name&#34;:&#34;5. Minimal Risk Default&#34;,&#34;text&#34;:&#34;Systems not falling into any of the above are minimal-risk. No specific obligations apply beyond voluntary codes of conduct.&#34;}]"></howto-steps>

### Most Common High-Risk Scenarios in Turkish Companies

- **HR SaaS (CV screening, interview assessment):** Annex III - Employment
- **Credit-application scoring:** Annex III - Access to essential services
- **Education and exam assessment:** Annex III - Education
- **Biometric identification systems:** Annex III - Biometrics
- **Public-service application assessment:** Annex III - Public services

## 7. 12-Month Implementation Roadmap

<howto-steps data-name="KVKK + EU AI Act + ISO 42001 12-Month Compliance Roadmap" data-description="A phased plan to build a three-layered compliance program from scratch." data-time="P12M" data-steps="[{&#34;name&#34;:&#34;Months 1-2: Inventory and Current State&#34;,&#34;text&#34;:&#34;AI system inventory (existing + planned), personal-data inventory (KVKK), gap analysis across the three regulations. Output: compliance posture report.&#34;},{&#34;name&#34;:&#34;Months 2-3: Governance and Policy&#34;,&#34;text&#34;:&#34;AI Committee setup, AI policy, acceptable-use policy, ethical principles, RACI matrix. Update KVKK privacy notices to cover AI processing.&#34;},{&#34;name&#34;:&#34;Months 3-5: Risk Assessment&#34;,&#34;text&#34;:&#34;EU AI Act risk classification per system, KVKK PIA, ISO 42001 risk treatment plan. Output: system-level risk files.&#34;},{&#34;name&#34;:&#34;Months 4-7: Technical Controls&#34;,&#34;text&#34;:&#34;For high-risk systems: quality management system, eval harness, audit logs, observability, human-oversight mechanisms. Anonymization layer, data-residency options.&#34;},{&#34;name&#34;:&#34;Months 6-9: Documentation&#34;,&#34;text&#34;:&#34;Technical documentation (EU AI Act Annex IV), user information notices, third-party agreements, training materials.&#34;},{&#34;name&#34;:&#34;Months 9-11: Training and Operationalization&#34;,&#34;text&#34;:&#34;AI-literacy training for all AI-relevant personnel (EU AI Act Article 4 obligation), embedding compliance into day-to-day operations.&#34;},{&#34;name&#34;:&#34;Months 11-12: Audit and Certification&#34;,&#34;text&#34;:&#34;Internal audit, external pre-audit if applicable. If targeting ISO 42001, plan the formal certification audit.&#34;}]"></howto-steps>

<stat-callout data-value="9-15 months" data-context="The typical time required for a mid-sized Turkish company to establish three-layer compliance (KVKK + EU AI Act + ISO 42001) from scratch is" data-outcome="9-15 months; late starters may not meet obligations triggered in Q3 2026." data-source="{&#34;label&#34;:&#34;Sector Practice Review&#34;,&#34;url&#34;:&#34;https://sukruyusufkaya.com/en/blog/kvkk-eu-ai-act-iso-42001-uyum&#34;,&#34;date&#34;:&#34;2025&#34;}"></stat-callout>

## 8. Common Mistakes

### 8.1. "I don't sell in the EU, so the EU AI Act doesn't apply to me"

Wrong. Indirect EU market exposure (e.g., an EU customer of your SaaS, an EU subsidiary that performs AI processing) brings you into scope. The right question is: "Can my system affect a person in the EU?"

### 8.2. Leaving KVKK to the data team alone

KVKK compliance is not solely a data/IT matter; product, legal, sales, and customer service must collaborate. The "AI Committee" is precisely the structure to solve this.

### 8.3. Treating ISO 42001 as mandatory (or ignoring it)

ISO 42001 is voluntary, but because it satisfies ~80% of EU AI Act high-risk obligations in one stroke, it is a strategically strong choice. "I won't bother because it's not mandatory" creates a tender disadvantage against certified competitors.

### 8.4. Postponing AI literacy training

EU AI Act Article 4 — **from 2 February 2025**, you must provide adequate AI-literacy training to personnel who develop, use, or operate AI systems. This applies even to companies without a high-risk system.

### 8.5. Lack of third-party-model (GPAI) supplier management

Failing to obtain compliance documents from GPAI providers like OpenAI, Anthropic, Google creates serious risk in production deployments. If contracts and compliance documentation are missing, the EU AI Act obligation reverts to you.

### 8.6. Delaying eval harness and audit logs to "later"

Both the EU AI Act and ISO 42001 require continuous monitoring and recordkeeping. Without audit logs, compliance cannot be proven. This is a **Day-1 investment**; adding it later is 3-5x more expensive.

## 9. Sector Notes

### 9.1. Banking and Finance

KVKK + BDDK + EU AI Act + ISO 42001 form a four-layer structure. BDDK's AI-relevant secondary regulation (cloud-services guideline, outsourcing) and data-residency requirements are critical. Large Turkish banks (Garanti BBVA, İş Bankası) process AI on-prem or in Turkey-region cloud.

### 9.2. Health

KVKK special-category provisions (health) + EU AI Act high-risk classification + medical-device regulation (MDR) apply together. Anonymization and cross-border-transfer constraints are among the strictest of any sector.

### 9.3. E-commerce

KVKK privacy notice + limited-risk transparency (chatbot disclosure) + GPAI supplier management are the primary compliance burdens. Profiling rules apply to recommender/segmentation systems involving customer personal data.

### 9.4. HR SaaS

CV screening, interview assessment, and performance scoring are **high risk** (Annex III - Employment). Full obligation set (quality management, human oversight, documentation) is required.

### 9.5. Public Sector

EU AI Act public-sector obligations (Article 26+) apply alongside the Digital Transformation Office's AI policy guidance in Turkey. Citizen data rights demand extra sensitivity.

## 10. Case Studies (Anonymized)

### Case 1 — Turkish HR SaaS Startup, EU AI Act High-Risk Compliance

A Turkish HRTech startup planned to expand into the EU market with CV-screening and interview-assessment products. Classification: **Annex III - Employment = High risk.**

**Intervention.** Set up an AIMS under ISO 42001, prepared EU AI Act Annex IV technical documentation, implemented an explainability mechanism (XAI - decision rationale), and defined human-oversight processes.

**Result.** After 11 months, both EU AI Act high-risk compliance and ISO 42001 readiness were completed. Two large EU customers won, adding ~$1.2M ARR.

### Case 2 — Turkish Bank, KVKK + AI Governance Program

A Turkish bank lacked central AI governance; every team launched POCs independently.

**Intervention.** Established an AI Committee (CDO, CISO, KVKK officer, Risk, Internal Audit). KVKK PIA template, EU AI Act risk classification template, and ISO 42001 readiness plan were rolled out. All new AI projects now route through committee approval.

**Result.** After 8 months: clean regulatory risk panel and 40% faster production rollout due to more consistent processes.

### Case 3 — Turkish E-Commerce Marketplace, GPAI Supplier Management

A Turkish marketplace ran 8 AI use-cases on OpenAI and Anthropic APIs. Supplier agreements lacked AI-specific clauses.

**Intervention.** Data Processing Agreement (DPA) updated with AI-specific clauses, PII filtering layer added (PII detection before every API call), monthly compliance report automated.

**Result.** KVKK risk score significantly reduced; EU customer DPIA pass rate reached 100%.

## 11. 47-Item Compliance Checklist (Summary)

The checklist is provided as a downloadable asset; the summary below allows a quick self-check.

**Governance (7).** AI Committee exists? · AI policy approved? · Acceptable-use policy published? · Ethical principles defined? · RACI matrix exists? · Incident/breach response procedure exists? · AI literacy training planned?

**KVKK (10).** VERBIS registration current? · Privacy notices cover AI processing? · Consent flow correct? · PIA procedure defined? · Data-minimization controls in place? · Cross-border transfer procedure defined? · Processor contracts include AI clauses? · Data-subject request process closed within 30 days? · Breach notification within 72 hours? · Data deletion/anonymization procedure defined?

**EU AI Act (12).** System inventory exists? · Risk classification complete? · Quality management system in place for high-risk? · Risk-management process operating? · Data-governance requirements met? · Technical documentation (Annex IV) ready? · Logging mechanism active? · Transparency and information obligations fulfilled? · Human oversight designed? · Accuracy/robustness/cybersecurity tests run? · Conformity assessment complete? · CE marking applied (for high-risk)?

**ISO 42001 (10).** AIMS scope defined? · AI policy aligned with ISO 42001? · Risk treatment plan documented? · Statement of Applicability ready? · Internal audit plan exists? · Management review process defined? · Corrective action process running? · Performance indicators defined and monitored? · Transparency obligations met? · Continual improvement process active?

**Technical Infrastructure (8).** Eval harness set up? · Audit log active across all AI systems? · Anonymization/PII detection layer in place? · Data residency determined and compliant? · Production observability (Langfuse, Helicone, etc.) active? · Model versioning and rollback process defined? · Explainability mechanisms (for high-risk) integrated? · Security tests (prompt injection, jailbreak) performed?

## 12. Frequently Asked Questions

<callout-box data-variant="answer" data-title="My company is in Turkey selling to the EU. Which regulations apply?">

Typically: **KVKK** (because you are established in Turkey and process data), **EU AI Act** (because you place an AI system on the EU market — extraterritorial), and voluntarily **ISO 42001** (which mirrors high-risk obligations and adds tender advantages). For precise scope, work with legal counsel.

</callout-box>

<callout-box data-variant="answer" data-title="Are EU AI Act penalties really that serious?">

Yes. Up to 7% of annual global turnover or €35M for prohibited systems; up to 3% or €15M for high-risk obligation breaches. Whichever is higher applies. SMEs have a tiered reduction but penalties remain high.

</callout-box>

<callout-box data-variant="answer" data-title="How long does ISO 42001 certification take and what does it cost?">

Preparation 6-9 months (faster if ISO 27001 already in place); formal certification audit 2-4 months. Total cost (consulting + audit + internal effort) typically ranges from ~300K to 900K TRY for a mid-sized company.

</callout-box>

<callout-box data-variant="answer" data-title="Is a KVKK PIA the same as an EU AI Act risk assessment?">

No, but they overlap significantly. KVKK PIA focuses on personal-data protection; EU AI Act risk assessment focuses on the AI system's effects on individuals/society (discrimination, safety, explainability). A single integrated process can run both in parallel.

</callout-box>

<callout-box data-variant="answer" data-title="I use OpenAI/Anthropic APIs — am I still responsible?">

Yes. The GPAI provider (OpenAI/Anthropic) bears GPAI-specific obligations, but **as the deployer**, you bear a substantial part of compliance. You must obtain contractual compliance documents and add controls for your specific use case.

</callout-box>

<callout-box data-variant="answer" data-title="I don't think we are high-risk — who confirms?">

EU AI Act conformity assessment is mandated for high-risk; minimal/limited risk allows self-assessment. The misclassification risk falls on you. For borderline cases, external expert assessment is advised; Commission Guidelines provide binding interpretation.

</callout-box>

<callout-box data-variant="answer" data-title="We only use ChatGPT internally — does compliance still apply?">

Yes, in limited scope. If employees send personal data to ChatGPT, KVKK privacy notice and data-minimization obligations apply; transfers to OpenAI fall under cross-border-transfer rules. Under the EU AI Act, internal use is usually minimal risk, but AI-literacy training is still mandatory. An acceptable-use policy is essential.

</callout-box>

<callout-box data-variant="answer" data-title="Who should be on the AI Committee?">

Typical members: CDO or AI lead (chair), CISO, KVKK officer / DPO, Legal Counsel, Internal Audit, Risk Management, product team representative. Monthly meeting at minimum, quarterly report to senior leadership.

</callout-box>

## 13. Next Steps

To launch your company's three-layered AI compliance program or harden an existing one:

1. **Compliance gap analysis.** Three-layer KVKK + EU AI Act + ISO 42001 gap assessment; output: prioritized action roadmap.
2. **AI Committee setup and governance workshop.** Framework, RACI matrix, decision procedures clarified in a 2-day workshop.
3. **ISO 42001 readiness program.** AIMS design, documentation, internal audit, and certification-audit preparation.

For details, please use the contact form on the site.

<references-list data-items="[{&#34;title&#34;:&#34;KVKK - Law No. 6698&#34;,&#34;url&#34;:&#34;https://www.kvkk.gov.tr/Icerik/2037/2016-674&#34;,&#34;author&#34;:&#34;Republic of Turkiye - KVKK&#34;,&#34;publishedAt&#34;:&#34;2016-04-07&#34;,&#34;publisher&#34;:&#34;Republic of Turkiye&#34;},{&#34;title&#34;:&#34;EU AI Act - Regulation (EU) 2024/1689&#34;,&#34;url&#34;:&#34;https://eur-lex.europa.eu/eli/reg/2024/1689/oj&#34;,&#34;author&#34;:&#34;European Union&#34;,&#34;publishedAt&#34;:&#34;2024-07-12&#34;,&#34;publisher&#34;:&#34;Official Journal of the EU&#34;},{&#34;title&#34;:&#34;AI Act Explorer&#34;,&#34;url&#34;:&#34;https://artificialintelligenceact.eu/&#34;,&#34;author&#34;:&#34;Future of Life Institute&#34;,&#34;publishedAt&#34;:&#34;2024&#34;,&#34;publisher&#34;:&#34;FLI&#34;},{&#34;title&#34;:&#34;ISO/IEC 42001:2023 AI Management Systems&#34;,&#34;url&#34;:&#34;https://www.iso.org/standard/81230.html&#34;,&#34;author&#34;:&#34;ISO/IEC&#34;,&#34;publishedAt&#34;:&#34;2023-12-18&#34;,&#34;publisher&#34;:&#34;ISO&#34;},{&#34;title&#34;:&#34;ISO/IEC 23894:2023 AI Risk Management&#34;,&#34;url&#34;:&#34;https://www.iso.org/standard/77304.html&#34;,&#34;author&#34;:&#34;ISO/IEC&#34;,&#34;publishedAt&#34;:&#34;2023-02&#34;,&#34;publisher&#34;:&#34;ISO&#34;},{&#34;title&#34;:&#34;NIST AI Risk Management Framework&#34;,&#34;url&#34;:&#34;https://www.nist.gov/itl/ai-risk-management-framework&#34;,&#34;author&#34;:&#34;NIST&#34;,&#34;publishedAt&#34;:&#34;2023-01-26&#34;,&#34;publisher&#34;:&#34;NIST&#34;},{&#34;title&#34;:&#34;KVKK Board Decisions&#34;,&#34;url&#34;:&#34;https://www.kvkk.gov.tr/Icerik/4/Karar&#34;,&#34;author&#34;:&#34;KVKK Board&#34;,&#34;publishedAt&#34;:&#34;2024-2025&#34;,&#34;publisher&#34;:&#34;Republic of Turkiye - KVKK&#34;},{&#34;title&#34;:&#34;European Commission AI Act Guidelines&#34;,&#34;url&#34;:&#34;https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai&#34;,&#34;author&#34;:&#34;European Commission&#34;,&#34;publishedAt&#34;:&#34;2024-2026&#34;,&#34;publisher&#34;:&#34;European Commission&#34;},{&#34;title&#34;:&#34;OECD AI Principles&#34;,&#34;url&#34;:&#34;https://oecd.ai/en/ai-principles&#34;,&#34;author&#34;:&#34;OECD&#34;,&#34;publishedAt&#34;:&#34;2019/2024&#34;,&#34;publisher&#34;:&#34;OECD&#34;},{&#34;title&#34;:&#34;Turkey National AI Strategy 2021-2025&#34;,&#34;url&#34;:&#34;https://cbddo.gov.tr/projeler/ulusal-yapay-zeka-stratejisi/&#34;,&#34;author&#34;:&#34;Digital Transformation Office of the Presidency&#34;,&#34;publishedAt&#34;:&#34;2021&#34;,&#34;publisher&#34;:&#34;Republic of Turkiye&#34;}]"></references-list>

---

This is a living document; updated **quarterly** as regulatory texts and Board decisions evolve. The content is informational and does not constitute legal advice.