# MLX-LM Apple Silicon: FT + Serve on M-Series Mac + Distributed MLX

> Source: https://sukruyusufkaya.com/en/learn/fine-tuning-cookbook/ftc-mlx-lm-apple-silicon-finetune-serve
> Updated: 2026-05-14T14:43:01.396Z
> Category: Fine-Tuning Cookbook (Model-by-Model)
> Module: Part XV — Serving Engineering

**TLDR:** Apple MLX (released 2023) is a unified-memory ML framework for Apple Silicon; MLX-LM builds on it for fine-tuning and inference with Llama, Qwen, and Gemma models. Rough capacity: 70B inference on an M3 Max (128 GB), 8B fine-tuning on an M2 Pro (32 GB). A cookbook supplement for Mac users.
