# Enterprise AI Maturity Model 2026: A 7-Stage Framework for Turkish Companies

> Source: https://sukruyusufkaya.com/en/blog/kurumsal-ai-olgunluk-modeli-turkiye
> Updated: 2026-05-13T19:57:50.735Z
> Type: blog
> Category: yapay-zeka
**TLDR:** A 7-stage maturity model that structures the enterprise AI adoption journey in Turkey: definitions for each stage, scoring criteria across four dimensions (strategy, data, talent, governance), a 21-question self-assessment, and stage-transition patterns. A production-focused reference framework aligned with KVKK + EU AI Act + ISO 42001.

<tldr data-summary="[&#34;Enterprise AI maturity is not linear — companies face different problems across 7 distinct stages.&#34;,&#34;The 7 stages: (1) Awareness, (2) Experimentation, (3) Foundation, (4) Operationalization, (5) Scaling, (6) Integration, (7) Transformation.&#34;,&#34;Each stage is measured across four dimensions: strategy, data, talent, governance. Total score ranges from 4 (chaotic) to 28 (AI-native).&#34;,&#34;Most Turkish enterprises are stuck between Stage 2 (Experimentation) and Stage 3 (Foundation) — the structural reason is usually data infrastructure and KVKK compliance readiness.&#34;,&#34;Transitions between stages require platform investment, not more POCs; trying to scale without a data layer, eval harness, and LLMOps fails.&#34;]" data-one-line="An enterprise AI maturity model is a multi-dimensional assessment framework that measures a company's AI adoption journey and guides next investment decisions."></tldr>

## 1. What is an AI Maturity Model and Why Does it Matter?

Nearly every Turkish enterprise has run at least one AI experiment over the past 24 months: used ChatGPT for marketing copy, added a customer service chatbot, or built a RAG POC. Yet **more than 60% of those experiments have been shelved before reaching production**. The reason is usually not technological; it's **investment decisions that don't match the company's maturity level**. A company at Stage 2 trying to build the multi-agent systems of Stage 5 will, naturally, watch those projects collapse.

<definition-box data-term="Enterprise AI Maturity Model" data-definition="A multi-dimensional assessment framework that measures a company's AI adoption journey across strategic vision, data infrastructure, talent pool, and governance — placing the current state in a clear stage and guiding next investments. As maturity grows, AI's translation into business value grows exponentially." data-also="AI Maturity Assessment"></definition-box>

A maturity model solves three problems:

1. **Diagnosing the current state** — what stage is the company actually at? POC culture or platform culture?
2. **Validating the next step** — what specifically must be invested in to move to the next stage?
3. **Benchmarking** — where do you stand against sector averages, target positions, or your own past?

This article defines the 7-stage maturity model I have distilled from patterns observed across enterprise projects in Turkey over the past three years, covering each stage, its transition requirements, and the self-assessment criteria.

<stat-callout data-value="62%" data-context="Roughly two-thirds of enterprise AI projects in Turkey" data-outcome="stall at POC or pilot stage without reaching production. The primary cause: missing data infrastructure and LLMOps maturity." data-source="{&#34;label&#34;:&#34;McKinsey State of AI - Turkey View&#34;,&#34;url&#34;:&#34;https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai&#34;,&#34;date&#34;:&#34;2025&#34;}"></stat-callout>

## 2. Four Dimensions: How Do We Measure Maturity?

Maturity cannot be summarized in a single stage; it must be evaluated across four independent dimensions. A company can be at Stage 5 on strategy but stuck at Stage 2 on data — this **imbalance** is the most common cause of failure.

<comparison-table data-caption="Four Dimensions of Maturity and Their Measurement Criteria" data-headers="[&#34;Dimension&#34;,&#34;What it Measures&#34;,&#34;Critical Signals&#34;,&#34;Cost of Low Score&#34;]" data-rows="[{&#34;feature&#34;:&#34;Strategy&#34;,&#34;values&#34;:[&#34;Senior leadership alignment, AI vision, ROI expectations&#34;,&#34;Is there a board-level AI agenda? Are use-cases prioritized?&#34;,&#34;Scattered POCs, funding inconsistency&#34;]},{&#34;feature&#34;:&#34;Data&#34;,&#34;values&#34;:[&#34;Data quality, collection, labeling, vectorization, governance&#34;,&#34;Is there a single source of truth? Is embedding infrastructure set up?&#34;,&#34;Hallucination, model drift, rework&#34;]},{&#34;feature&#34;:&#34;Talent&#34;,&#34;values&#34;:[&#34;Team capacity, training program, cultural readiness&#34;,&#34;Number of AI-fluent developers, prompt-engineering capability, continuous-learning culture&#34;,&#34;External dependency, slow iteration, key-person risk&#34;]},{&#34;feature&#34;:&#34;Governance&#34;,&#34;values&#34;:[&#34;Ethics rules, compliance (KVKK, EU AI Act), risk management, observability&#34;,&#34;Is there an AI committee? Is the eval harness in place? Are audit logs flowing?&#34;,&#34;Regulatory penalty risk, brand damage, production incidents&#34;]}]"></comparison-table>

Each dimension is scored 1-7. **Total score = sum of dimensions**, ranging from 4 (most chaotic) to 28 (AI-native). The maturity stage is determined by **the lowest dimension** — because an AI system is only as reliable as its weakest link.
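The scoring model can be sketched in a few lines. This is an illustrative implementation, not an official tool; the class and method names are mine, and mapping the lowest 1-7 dimension score directly to the stage number is my reading of the weakest-link rule.

```python
from dataclasses import dataclass

@dataclass
class MaturityScore:
    """Four dimensions, each scored 1-7."""
    strategy: int
    data: int
    talent: int
    governance: int

    def total(self) -> int:
        # Sum of dimensions: 4 (most chaotic) to 28 (AI-native).
        return self.strategy + self.data + self.talent + self.governance

    def stage(self) -> int:
        # The weakest dimension caps the maturity stage:
        # an AI system is only as reliable as its weakest link.
        return min(self.strategy, self.data, self.talent, self.governance)

# Example: strong strategy, weak data -- the classic imbalance pattern.
score = MaturityScore(strategy=5, data=2, talent=4, governance=3)
print(score.total())  # 14
print(score.stage())  # 2
```

Note how the example company's total of 14 would look like mid-Stage 3, while the weakest-link rule correctly places it at Stage 2: this is exactly the imbalance the model is designed to expose.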

## 3. The Seven Stages: Definition, Signals, and Transition Thresholds

### Stage 1 — Awareness

**Definition.** No organized AI effort. Individual employees may use ChatGPT, but no enterprise vision, funding, or governance exists. Data is largely siloed; AI-fluent team members are rare.

**Signals.**

- AI appears on the board agenda weekly but no concrete budget exists.
- Employees use "personal" ChatGPT subscriptions to process work containing personal data.
- The KVKK compliance officer has not produced an AI risk assessment.

**What to do here.** 1-2 day executive workshop, draft AI usage policy, establish an "AI committee," map AI opportunities across existing processes.

**Threshold to Stage 2.** Board/executive-approved AI strategy and budget allocated for at least one pilot project.

### Stage 2 — Experimentation

**Definition.** Initial POCs are underway; typically a customer-service chatbot, content generation, or an internal productivity tool. Results look positive in the slide deck but fade when the move to production is attempted.

**Signals.**

- 3-5 parallel POCs; none have SLAs, monitoring, or rollback plans.
- Data team and AI team work in different silos.
- In SMEs: driven by the initiative of one senior employee.

<callout-box data-variant="warning" data-title="Stage 2 Trap">

About half of Stage 2 companies never move beyond it, because **they try to scale POCs without investing in infrastructure**. The path to production requires platform investment, not more POCs: vector DB, eval harness, observability, version management.

</callout-box>

**Threshold to Stage 3.** At least one POC enters production hardening with its own data/observability infrastructure.

### Stage 3 — Foundation

**Definition.** First serious platform investment: data lake/lakehouse, embedding pipeline, vector DB, prompt management, eval harness. The AI team takes a formal shape (usually 5-15 people). KVKK compliance becomes a process.

**Signals.**

- At least one use-case in production with a defined SLA.
- Embedding infrastructure (BGE-M3 or OpenAI text-embedding-3) deployed on-prem or in the cloud.
- Data governance policy in draft.

**Threshold to Stage 4.** Multiple use-cases running on a common platform and an LLMOps loop (model versioning, A/B, rollback) defined.

### Stage 4 — Operationalization

**Definition.** AI is no longer an experiment but a product. LLMOps processes are in place, the eval harness runs daily, and hallucination and cost metrics are tracked on dashboards. The governance layer (ethics committee, audit log) is active.

**Signals.**

- 3+ production use-cases, each with an owner (PRD exists).
- Monthly AI cost/value report presented to the board.
- An incident response runbook exists (e.g., hallucination spike or prompt injection event).

**Threshold to Stage 5.** AI investment producing net-positive ROI and a repeatable AI project method defined enterprise-wide.

### Stage 5 — Scaling

**Definition.** AI is active in multiple business units, not just one department. An enterprise "AI platform team" exists; all business units develop self-service AI use-cases on the platform. Data and embedding layers become reusable.

**Signals.**

- 10+ production AI use-cases.
- Self-service prompt/agent framework, common vector DB.
- AI Center of Excellence (CoE) emerging.

**Threshold to Stage 6.** AI participates in decision-making: no longer just an information service, but genuine decision support.

### Stage 6 — Integration

**Definition.** AI is woven into the organization's decision-making fabric. AI recommendations flow by default through core business processes: customer journey, supply chain, financial planning, HR. **Agentic AI** systems autonomously execute multi-step tasks.

**Signals.**

- AI recommendations influence 30%+ of product and ops decisions.
- Multi-agent workflows in production.
- Continuous model-improvement loop (human feedback → fine-tune → A/B → release).

**Threshold to Stage 7.** AI becomes an inseparable part of the business model; the company can no longer answer the question "what would we do without AI?"

### Stage 7 — Transformation

**Definition.** AI-native operating model. The product, service, or operations model cannot produce value without AI. AI capabilities are the core source of competitive advantage. New business models are discovered through AI capabilities.

**Signals.**

- A meaningful share of revenue comes from AI-driven products or services.
- Data and AI capabilities are a core component of market value (highlighted in investor decks).
- The industry treats your company's AI practice as the reference benchmark.

<comparison-table data-caption="7-Stage AI Maturity Model — Turkey View" data-headers="[&#34;Stage&#34;,&#34;Name&#34;,&#34;Typical Duration&#34;,&#34;Total Score Range&#34;,&#34;% of Turkish Companies&#34;]" data-rows="[{&#34;feature&#34;:&#34;1&#34;,&#34;values&#34;:[&#34;Awareness&#34;,&#34;0-6 months&#34;,&#34;4-7&#34;,&#34;18%&#34;]},{&#34;feature&#34;:&#34;2&#34;,&#34;values&#34;:[&#34;Experimentation&#34;,&#34;6-12 months&#34;,&#34;8-12&#34;,&#34;34%&#34;]},{&#34;feature&#34;:&#34;3&#34;,&#34;values&#34;:[&#34;Foundation&#34;,&#34;9-18 months&#34;,&#34;13-16&#34;,&#34;22%&#34;]},{&#34;feature&#34;:&#34;4&#34;,&#34;values&#34;:[&#34;Operationalization&#34;,&#34;12-24 months&#34;,&#34;17-20&#34;,&#34;14%&#34;]},{&#34;feature&#34;:&#34;5&#34;,&#34;values&#34;:[&#34;Scaling&#34;,&#34;18-36 months&#34;,&#34;21-23&#34;,&#34;8%&#34;]},{&#34;feature&#34;:&#34;6&#34;,&#34;values&#34;:[&#34;Integration&#34;,&#34;24-48 months&#34;,&#34;24-26&#34;,&#34;3%&#34;]},{&#34;feature&#34;:&#34;7&#34;,&#34;values&#34;:[&#34;Transformation&#34;,&#34;36+ months&#34;,&#34;27-28&#34;,&#34;1%&#34;]}]"></comparison-table>

## 4. Self-Assessment: A 21-Question Quick Check

Answer the 21 questions below with your senior leadership team. Each is scored 1-4 (1 = not at all, 4 = fully). Average the answers within each dimension, normalize to the 1-7 dimension scale, and sum the four dimension scores; the total maps to a stage as interpreted below.

### Strategy (5 questions)

1. Is the AI strategy approved at board level?
2. Is the AI use-case portfolio prioritized with ROI projections?
3. Is an annual AI investment budget defined?
4. Are AI initiatives owned by a specific leader (CDO, CAIO, CTO)?
5. Is the AI vision known and embraced by most employees?

### Data (5 questions)

1. Is a single source of truth defined and accessible?
2. Is a Turkish-capable embedding pipeline in place?
3. Is a vector database running in production?
4. Are KVKK-compliant anonymization processes defined?
5. Are data-quality metrics (gaps, inconsistencies, freshness) monitored?

### Talent (5 questions)

1. Do you have in-house AI/LLM engineers?
2. Is prompt-engineering capability measured with a development program?
3. Is there an annual AI training budget?
4. Has executive AI literacy been raised (workshops, etc.)?
5. Is vendor/expert governance defined for AI?

### Governance (6 questions)

1. Does the AI committee (ethics body) meet regularly?
2. Is an AI risk-assessment template (EU AI Act risk levels) in use?
3. Are audit logs/observability active across all production AI systems?
4. Are incident-response procedures defined for hallucination, prompt injection, jailbreak?
5. Are data-residency and cross-border-transfer controls in place?
6. Is ISO 42001 on the agenda (at least gap analysis done)?

**Score interpretation.**

- **4-7 / 28:** Stage 1 — Awareness
- **8-12 / 28:** Stage 2 — Experimentation
- **13-16 / 28:** Stage 3 — Foundation
- **17-20 / 28:** Stage 4 — Operationalization
- **21-23 / 28:** Stage 5 — Scaling
- **24-26 / 28:** Stage 6 — Integration
- **27-28 / 28:** Stage 7 — Transformation

<callout-box data-variant="tip" data-title="Imbalance Warning">

Score each dimension separately. If one dimension is 2+ points behind the others (e.g., Strategy 5 but Data 2), that dimension **is the bottleneck blocking your transition to the next stage**. Investment direction must be driven by the weakest dimension.

</callout-box>

## 5. Stage-Transition Roadmap

<howto-steps data-name="Strategic Steps for Stage Transitions" data-description="Structural requirements for moving from each stage to the next." data-time="P12M" data-steps="[{&#34;name&#34;:&#34;1 → 2: Executive Alignment&#34;,&#34;text&#34;:&#34;1-day executive AI workshop, AI strategy draft, pre-budget for 2-3 use-cases.&#34;},{&#34;name&#34;:&#34;2 → 3: Platform Investment&#34;,&#34;text&#34;:&#34;Embedding infrastructure, vector DB, prompt management, first eval harness. Formalize AI team.&#34;},{&#34;name&#34;:&#34;3 → 4: LLMOps Setup&#34;,&#34;text&#34;:&#34;Model versioning, observability (Langfuse, Helicone, Datadog AI), A/B testing, rollback procedures.&#34;},{&#34;name&#34;:&#34;4 → 5: Platform Architecture&#34;,&#34;text&#34;:&#34;Joint AI platform team, self-service framework, multi-tenant vector DB, CoE establishment.&#34;},{&#34;name&#34;:&#34;5 → 6: Decision Integration&#34;,&#34;text&#34;:&#34;Embed AI recommendations into business decisions, agent architectures, continuous model-improvement loop.&#34;},{&#34;name&#34;:&#34;6 → 7: AI-Native Transformation&#34;,&#34;text&#34;:&#34;Discover new product/business models, convert AI capabilities into competitive advantage.&#34;}]"></howto-steps>

## 6. Turkey-Specific Maturity Criteria

Global maturity models (Gartner, McKinsey, MIT-Sloan) are **incomplete in the Turkish context**. Three additional layers must be considered for local maturity assessment:

### 6.1. KVKK Compliance

Turkish companies must **start AI maturity with KVKK**. Sending an LLM prompt that includes customer chat history is "data processing" under KVKK; consent, purpose limitation, data minimization, and cross-border transfer rules apply.

**Stage 3+ requires.** An anonymization layer, EU- or Turkey-hosted vector DB option, AI processing clauses in contracts.

### 6.2. EU AI Act (For Companies Serving the EU)

Turkish companies that supply products/services to the EU are **subject to the EU AI Act**. Every use-case must be evaluated under the 4-tier risk classification (prohibited, high risk, limited risk, minimal risk). High-risk systems require risk management, documentation, human oversight, and conformity assessment.

**Stage 4+ requires.** An EU AI Act mapping matrix, risk-based controls, separate compliance certification for EU-serving business units.

### 6.3. ISO 42001 Readiness

Published in December 2023, **ISO/IEC 42001** is the first international standard for AI management systems. Positioned as the AI equivalent of ISO 27001, it is becoming the reference for enterprise AI readiness in Turkey.

**Stage 5+ requires.** Gap analysis, AI Management System (AIMS) definition, internal audit, certification readiness.

<callout-box data-variant="answer" data-title="Sector Note — Banking and Finance">

BDDK regulations and **data residency** add restrictions for Turkish banks regarding AI cloud processing. In these sectors, Stage 4+ almost always requires an **on-prem or Turkey-region cloud LLM** architecture. Garanti BBVA, İş Bankası, and Akbank's internal AI platforms have evolved in this direction.

</callout-box>

## 7. Common Mistakes per Stage

### Stage 1-2 Mistakes

- **The "ban ChatGPT" policy.** Forbidding employees from legitimate tools leads to shadow AI usage. Correct approach: controlled enterprise subscription + policy.
- **Marketing a POC as a product.** Slide success is not operational success.

### Stage 3-4 Mistakes

- **Skipping the platform layer to multiply use-cases.** Without embedding and eval infrastructure, every new use-case creates separate technical debt.
- **Postponing the eval harness.** If you cannot measure hallucination before humans notice, you are not in production.
- **Leaving KVKK to the last stage.** Adding compliance at Stage 4 costs 3-5x more than building it in from the start.

### Stage 5-6 Mistakes

- **Centralizing the AI CoE into a slow bottleneck.** A CoE that prevents business-unit self-service becomes the choke point.
- **Jumping to multi-agent systems too early.** You cannot solve multi-agent eval if single-agent eval is not solved.

### Stage 7 Mistake

- **Outsourcing AI talent dependency to vendors.** Strategic capability must live in-house; external help only for specialization.

## 8. Case Studies (Anonymized)

### Case 1 — A Turkish Bank, Stage 2 → 4 Transition

A Turkish bank started 2024 with 4 parallel POCs: customer-service chatbot, loan-application summarization, fraud detection, product recommendation. After seven months, only one reached production.

**Problem.** Each POC built its own prompt management, its own vector DB, its own observability stack — parallel investment.

**Solution.** A joint AI platform team was formed: single vector DB (Qdrant on-prem), unified prompt management (PromptLayer), single eval harness (Langfuse). All four use-cases reached production in the next 6 months at 40% of the original cost.

**Result.** Stage 2 → Stage 4 transition took 13 months; the most critical investment was the data and LLMOps platform.

### Case 2 — A Turkish E-commerce Marketplace, Stage 4 → 6 Transition

A Turkish e-commerce marketplace had 8 production use-cases by 2025 (recommendation, description generation, customer service, price optimization, etc.). The real leap came when AI was integrated into the **decision-making** process of the product team.

**Intervention.** AI recommendation reports added to weekly category-manager planning meetings; product-manager proposals pre-screened with AI.

**Result.** Recommendation quality improved by 18%, and the planning cycle dropped from 5 days to 2. The Stage 4 → Stage 6 transition was completed in 9 months.

## 9. ROI Expectations by Stage

<comparison-table data-caption="Annual AI ROI Expectations by Stage (Turkey, 2026)" data-headers="[&#34;Stage&#34;,&#34;Typical Net ROI&#34;,&#34;Payback Period&#34;,&#34;Primary Value Source&#34;]" data-rows="[{&#34;feature&#34;:&#34;1 Awareness&#34;,&#34;values&#34;:[&#34;—&#34;,&#34;—&#34;,&#34;None / negative&#34;]},{&#34;feature&#34;:&#34;2 Experimentation&#34;,&#34;values&#34;:[&#34;-10% to +5%&#34;,&#34;—&#34;,&#34;Learning, not POC value&#34;]},{&#34;feature&#34;:&#34;3 Foundation&#34;,&#34;values&#34;:[&#34;5-15%&#34;,&#34;18-24 months&#34;,&#34;First production use-cases&#34;]},{&#34;feature&#34;:&#34;4 Operationalization&#34;,&#34;values&#34;:[&#34;15-30%&#34;,&#34;12-18 months&#34;,&#34;Multi-use-case efficiency&#34;]},{&#34;feature&#34;:&#34;5 Scaling&#34;,&#34;values&#34;:[&#34;30-60%&#34;,&#34;9-12 months&#34;,&#34;Platform reuse&#34;]},{&#34;feature&#34;:&#34;6 Integration&#34;,&#34;values&#34;:[&#34;60-120%&#34;,&#34;6-9 months&#34;,&#34;Decision quality improvement&#34;]},{&#34;feature&#34;:&#34;7 Transformation&#34;,&#34;values&#34;:[&#34;120%+&#34;,&#34;Continuous&#34;,&#34;New business models&#34;]}]"></comparison-table>

## 10. Frequently Asked Questions

<callout-box data-variant="answer" data-title="How do I know what stage my company is at?">

Answer the **21 questions in Section 4** with your senior leadership team. Score each dimension separately; the lowest dimension determines your stage. If scores are scattered (e.g., Strategy 5 but Data 2), you have an imbalance and should address it first.

</callout-box>

<callout-box data-variant="answer" data-title="Can I skip stages?">

Practically, no. Every stage builds on the outputs of the previous one. A Stage 2 company cannot build Stage 5 multi-agent systems — it doesn't even have single-agent eval. Maturity stages are like **capacity layers**; if the layer below is cracked, what stacks on top collapses.

</callout-box>

<callout-box data-variant="answer" data-title="How many months to move through a stage?">

Typical transitions take 9-24 months. Accelerators: senior sponsorship, talent readiness, budget flexibility. Decelerators: regulatory approvals, legacy integration, cultural resistance.

</callout-box>

<callout-box data-variant="answer" data-title="How does KVKK compliance factor into the maturity score?">

KVKK compliance is the foundation of the **Governance dimension**. An AI system without a KVKK risk assessment can score no higher than Stage 2. For Stage 3 and above, KVKK processes must be **structured and auditable**.

</callout-box>

<callout-box data-variant="answer" data-title="Who runs the AI maturity assessment?">

Ideally a **hybrid of external expert + internal team**. The external party provides objective lens and sector benchmarks; the internal team provides detailed context. An annual AI maturity audit is recommended.

</callout-box>

<callout-box data-variant="answer" data-title="I'm at Stage 4, what next?">

Stage 4 is the "great leap" threshold. The next step is **platform architecture** — moving from individual use-cases to a shared AI platform. Establish an AI Center of Excellence (CoE) model; enable business units to develop self-service AI use-cases. This is the primary output of Stage 5.

</callout-box>

<callout-box data-variant="answer" data-title="When should ISO 42001 enter the agenda?">

Ideally a **gap analysis** is done between Stages 4-5. Certification can be a goal by the end of Stage 5. ISO 42001 can integrate with an existing ISO 27001 system, reducing cost.

</callout-box>

<callout-box data-variant="answer" data-title="Do sector differences change the maturity model?">

The framework stays the same; **dimension weights shift**. In finance and health, governance is more critical (40%+); e-commerce and retail emphasize data quality (35%+); B2B software companies need a stronger talent dimension (35%+). Adapt the weights to your sector.

</callout-box>
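Sector weighting can be applied as a weighted average over the four dimension scores. A sketch, with one caveat: the article only names the dominant dimension per sector, so the remaining weights below are my assumptions for illustration.

```python
# Illustrative weight profiles. Only the dominant weight per sector
# comes from the article; the rest are assumed, chosen to sum to 1.0.
SECTOR_WEIGHTS = {
    "finance":      {"strategy": 0.20, "data": 0.20, "talent": 0.20, "governance": 0.40},
    "ecommerce":    {"strategy": 0.20, "data": 0.35, "talent": 0.25, "governance": 0.20},
    "b2b_software": {"strategy": 0.20, "data": 0.25, "talent": 0.35, "governance": 0.20},
}

def weighted_maturity(dims: dict[str, float], sector: str) -> float:
    """Weighted average of the four 1-7 dimension scores, still on the 1-7 scale."""
    weights = SECTOR_WEIGHTS[sector]
    return sum(dims[name] * w for name, w in weights.items())
```

With the earlier imbalance example (`strategy=5, data=2, talent=4, governance=3`), the finance weighting yields 3.4: the heavy governance weight pulls the score down, which is exactly the intended effect for regulated sectors.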

## 11. Next Steps

Three practical actions to apply this framework in your company:

1. **Quick self-assessment.** Answer the 21 questions in Section 4 in a 90-minute session with your senior leadership team. Score by dimension and make **the lowest dimension** the investment priority for the next quarter.
2. **6-month transition plan.** Pick three steps from Section 5 to reach the next stage; calendar them within 6 months.
3. **External assessment.** Plan an annual AI maturity audit — the foundation of continuous improvement.

Reach out to diagnose your current stage together or build the transition plan for the next stage.

<references-list data-items="[{&#34;title&#34;:&#34;ISO/IEC 42001:2023 AI Management Systems&#34;,&#34;url&#34;:&#34;https://www.iso.org/standard/81230.html&#34;,&#34;author&#34;:&#34;ISO/IEC&#34;,&#34;publishedAt&#34;:&#34;2023-12-18&#34;,&#34;publisher&#34;:&#34;ISO&#34;},{&#34;title&#34;:&#34;EU Artificial Intelligence Act&#34;,&#34;url&#34;:&#34;https://artificialintelligenceact.eu/&#34;,&#34;author&#34;:&#34;European Commission&#34;,&#34;publishedAt&#34;:&#34;2024-03-13&#34;,&#34;publisher&#34;:&#34;EU&#34;},{&#34;title&#34;:&#34;NIST AI Risk Management Framework&#34;,&#34;url&#34;:&#34;https://www.nist.gov/itl/ai-risk-management-framework&#34;,&#34;author&#34;:&#34;NIST&#34;,&#34;publishedAt&#34;:&#34;2023-01-26&#34;,&#34;publisher&#34;:&#34;NIST&#34;},{&#34;title&#34;:&#34;McKinsey: The State of AI in 2025&#34;,&#34;url&#34;:&#34;https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai&#34;,&#34;author&#34;:&#34;McKinsey & Company&#34;,&#34;publishedAt&#34;:&#34;2025-06&#34;,&#34;publisher&#34;:&#34;McKinsey&#34;},{&#34;title&#34;:&#34;Gartner AI Maturity Model&#34;,&#34;url&#34;:&#34;https://www.gartner.com/en/information-technology/insights/artificial-intelligence&#34;,&#34;author&#34;:&#34;Gartner&#34;,&#34;publishedAt&#34;:&#34;2025&#34;,&#34;publisher&#34;:&#34;Gartner&#34;},{&#34;title&#34;:&#34;MIT Sloan: Winning with AI&#34;,&#34;url&#34;:&#34;https://sloanreview.mit.edu/projects/winning-with-ai/&#34;,&#34;author&#34;:&#34;Ransbotham, S. et al.&#34;,&#34;publishedAt&#34;:&#34;2020&#34;,&#34;publisher&#34;:&#34;MIT Sloan Management Review&#34;},{&#34;title&#34;:&#34;KVKK - Law No. 
6698&#34;,&#34;url&#34;:&#34;https://www.kvkk.gov.tr/&#34;,&#34;author&#34;:&#34;Republic of Turkiye - KVKK&#34;,&#34;publishedAt&#34;:&#34;2016-04-07&#34;,&#34;publisher&#34;:&#34;Republic of Turkiye&#34;},{&#34;title&#34;:&#34;Turkey National AI Strategy 2021-2025&#34;,&#34;url&#34;:&#34;https://cbddo.gov.tr/projeler/ulusal-yapay-zeka-stratejisi/&#34;,&#34;author&#34;:&#34;Digital Transformation Office of the Presidency&#34;,&#34;publishedAt&#34;:&#34;2021&#34;,&#34;publisher&#34;:&#34;Republic of Turkiye&#34;},{&#34;title&#34;:&#34;Stanford AI Index 2025&#34;,&#34;url&#34;:&#34;https://aiindex.stanford.edu/&#34;,&#34;author&#34;:&#34;Stanford HAI&#34;,&#34;publishedAt&#34;:&#34;2025-04&#34;,&#34;publisher&#34;:&#34;Stanford University&#34;}]"></references-list>

---

This is a living document; the enterprise AI ecosystem in Turkey evolves every quarter, so the model is **updated annually**.