
OpenAI GPT-4o-mini / GPT-4o / GPT-4.1 Fine-Tuning API: JSONL Schema + Cost + Dashboard

A full practical guide to the OpenAI fine-tuning API: JSONL format (chat messages), validation sets, hyperparameter overrides (epochs/lr/batch), and the upload/monitor/download-checkpoint flow. Cost telemetry: training tokens × $25/M (GPT-4o-mini); inference at roughly 1.5× the base price. Training runs on OpenAI's servers, so no local GPU is needed: a fine-tuning job on your own 1,000 Turkish examples typically finishes GPT-4o-mini in about 30 minutes.

Şükrü Yusuf KAYA
30-minute read
Intermediate
jsonl
// === OpenAI fine-tuning JSONL schema ===
// One chat conversation per line
{"messages":[{"role":"system","content":"Sen TR yardımcı asistansın."},{"role":"user","content":"İstanbul nüfusu?"},{"role":"assistant","content":"Yaklaşık 15 milyon."}]}
{"messages":[{"role":"system","content":"Sen TR yardımcı asistansın."},{"role":"user","content":"2+2 kaç?"},{"role":"assistant","content":"4."}]}
 
// Function calling support
{"messages":[{"role":"user","content":"Hava nasıl?"},{"role":"assistant","content":null,"function_call":{"name":"get_weather","arguments":"{\"city\":\"Istanbul\"}"}}],"functions":[{"name":"get_weather","parameters":{"type":"object","properties":{"city":{"type":"string"}}}}]}
 
// Validation: 50-100 samples, kept separate from the training set
OpenAI FT JSONL schema
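Before uploading, it pays to lint the file locally; OpenAI rejects malformed lines server-side anyway, but local checks give faster feedback. A minimal sketch (the helper name and the specific checks are our own, not official tooling):

```python
import json

VALID_ROLES = {"system", "user", "assistant"}

def validate_jsonl_line(line: str) -> list[str]:
    """Return a list of problems found in one fine-tuning JSONL line."""
    errors = []
    try:
        record = json.loads(line)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    messages = record.get("messages")
    if not isinstance(messages, list) or not messages:
        return ["missing or empty 'messages' list"]
    for i, msg in enumerate(messages):
        role = msg.get("role")
        if role not in VALID_ROLES:
            errors.append(f"message {i}: unknown role {role!r}")
        # content may be null only when a function call is present
        if msg.get("content") is None and "function_call" not in msg:
            errors.append(f"message {i}: empty content without function_call")
    if messages[-1].get("role") != "assistant":
        errors.append("last message should be from the assistant")
    return errors

line = '{"messages":[{"role":"user","content":"2+2?"},{"role":"assistant","content":"4."}]}'
print(validate_jsonl_line(line))  # → []
```

Run it over every line of `train.jsonl` and `val.jsonl`; an empty list means the line passed these basic checks.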
python
# === OpenAI fine-tuning flow ===
from openai import OpenAI
client = OpenAI()

# 1. Upload training and validation files
with open("train.jsonl", "rb") as f:
    train_file = client.files.create(file=f, purpose="fine-tune")

with open("val.jsonl", "rb") as f:
    val_file = client.files.create(file=f, purpose="fine-tune")

# 2. Create the fine-tuning job
job = client.fine_tuning.jobs.create(
    training_file=train_file.id,
    validation_file=val_file.id,
    model="gpt-4o-mini-2024-07-18",
    hyperparameters={
        "n_epochs": 3,
        "batch_size": "auto",
        "learning_rate_multiplier": "auto",
    },
    suffix="tr-cookbook-v1",
)
print(f"Job started: {job.id}")

# 3. Monitor until the job reaches a terminal state
import time
while True:
    j = client.fine_tuning.jobs.retrieve(job.id)
    print(f"Status: {j.status}, trained_tokens: {j.trained_tokens}")
    if j.status in ["succeeded", "failed", "cancelled"]:
        break
    time.sleep(60)

# 4. Use the fine-tuned model
response = client.chat.completions.create(
    model=j.fine_tuned_model,
    messages=[{"role": "user", "content": "Test query"}],
)
print(response.choices[0].message.content)
OpenAI FT complete flow
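Billing is driven by `trained_tokens`, so it helps to estimate it before submitting the job. A rough local sketch (our own helper; the ~4-characters-per-token heuristic is an assumption, and the billed count also includes role/formatting overhead, so exact counts require the tokenizer, e.g. tiktoken):

```python
import json

def estimate_training_tokens(jsonl_path: str, n_epochs: int = 3) -> int:
    """Rough estimate of billed training tokens: ~4 characters per token,
    summed over all message contents, multiplied by the epoch count."""
    total_chars = 0
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            for msg in json.loads(line)["messages"]:
                total_chars += len(msg.get("content") or "")
    return (total_chars // 4) * n_epochs
```

Comparing this estimate with the final `trained_tokens` reported by the job is a quick sanity check that the dataset was read the way you intended.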

1. OpenAI FT Cost Table (2026)

| Model | Training (per M tokens) | Inference input | Inference output |
| --- | --- | --- | --- |
| GPT-4o-mini | $25 | $0.30 | $1.20 |
| GPT-4o | $100 | $3.75 | $15.00 |
| GPT-4.1 | $90 | $3.00 | $12.00 |
| GPT-3.5-turbo (legacy) | $8 | $3.00 | $6.00 |
Example: 1,000 TR samples × 500 tokens average × 3 epochs = 1.5M training tokens
  • GPT-4o-mini: $37.50 (~₺1250)
  • GPT-4o: $150 (~₺5000)
The cookbook's rule of thumb: GPT-4o-mini is cheap enough for dev iteration; use GPT-4o when you need production quality.
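The arithmetic above, wrapped as a small helper (the function name is ours; the per-M-token prices are hard-coded from the table):

```python
# Training prices per million tokens, from the cost table above (USD).
TRAIN_PRICE_PER_M = {"gpt-4o-mini": 25.0, "gpt-4o": 100.0, "gpt-4.1": 90.0}

def training_cost_usd(n_samples: int, avg_tokens: int, n_epochs: int, model: str) -> float:
    """Estimated fine-tuning cost: total training tokens × per-token price."""
    tokens = n_samples * avg_tokens * n_epochs
    return tokens / 1_000_000 * TRAIN_PRICE_PER_M[model]

print(training_cost_usd(1000, 500, 3, "gpt-4o-mini"))  # → 37.5
print(training_cost_usd(1000, 500, 3, "gpt-4o"))       # → 150.0
```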
✅ Deliverables
  1) Prepare a 100-example TR JSONL dataset. 2) Start a GPT-4o-mini FT job. 3) Compare pre- and post-FT outputs. 4) Next lesson: 14.2, OpenAI o-series Reinforcement Fine-Tuning.
