
Gökdeniz Gülmez

ML Researcher · Open-Source Engineer · Apple Silicon AI


Stuttgart — building the infrastructure that makes local AI actually work.


MLX Ecosystem Contributions

Acknowledged contributor across the core MLX stack: mlx · mlx-lm · mlx-examples · mlx-vlm

Training features added to mlx / mlx-lm: Full-weight fine-tuning · Muon optimizer · ReLU² activation · WandB reporting · Multi-optimizer support
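Of those features, ReLU² is simple enough to illustrate. The scalar sketch below shows only the math (the square of the rectified input); the MLX version operates elementwise on arrays:

```python
def relu_squared(x: float) -> float:
    """ReLU² activation: zero for negative inputs, x squared otherwise."""
    return max(x, 0.0) ** 2

print([relu_squared(v) for v in (-2.0, 0.0, 3.0)])  # [0.0, 0.0, 9.0]
```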

Highlighted model architectures ported to MLX-LM
| Model | Org |
|---|---|
| Mamba v1, v2, v3 | State Space |
| MiniCPM, MiniCPM3 | OpenBMB |
| Helium | Kyutai |
| GLM, GLM4, GLM5 | Z.ai / THUKEG |
| dots.llm1 | Rednote |
| Ernie 4.5 MoE | Baidu |
| Bailing MoE, Bailing Linear (Ling) | inclusionAI |
| Granite MoE | IBM |
| LongCat | Meituan |
| Nemotron H | NVIDIA |
| Apertus | Swiss-AI |
| OLMoE, OLMo 3 | AllenAI |
| Jamba | AI21 Labs |
| ...and more | See mlx-lm commit history |

Research

| Paper | Notes | Year |
|---|---|---|
| DynaMoE | Dynamic adaptive Mixture-of-Experts LLM architecture | 2026 |
| Gabliteration | Automated abliteration for any Transformers-compatible LLM | 2025 |

Projects

Status legend: actively maintained · maintained · low activity

mlx-lm-lora actively maintained — LoRA / QLoRA / full fine-tuning on Apple Silicon. 12+ training methods, DPO / GRPO / ORPO / PPO, Muon optimizer, WandB. The go-to fine-tuning toolkit for Apple's M-series Macs.
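For readers unfamiliar with LoRA, the core idea is small enough to sketch in plain Python. This is only an illustration of the math (a frozen weight matrix plus a scaled low-rank product), not mlx-lm-lora's actual code, which works on MLX arrays:

```python
def lora_forward(W, A, B, x, alpha, r):
    """Compute y = (W + (alpha / r) * B @ A) @ x with lists as matrices.

    W: d_out x d_in frozen weight; A: r x d_in; B: d_out x r.
    Only A and B are trained, so a layer's trainable parameters shrink
    from d_out * d_in down to r * (d_in + d_out).
    """
    scale = alpha / r
    d_out, d_in = len(W), len(W[0])
    # h = A @ x: project the input into the r-dimensional bottleneck.
    h = [sum(A[i][j] * x[j] for j in range(d_in)) for i in range(r)]
    # y = W @ x + scale * (B @ h): frozen path plus scaled low-rank update.
    return [
        sum(W[i][j] * x[j] for j in range(d_in))
        + scale * sum(B[i][k] * h[k] for k in range(r))
        for i in range(d_out)
    ]

# Toy example: identity W, rank-1 adapter, alpha = 2.
y = lora_forward(
    W=[[1.0, 0.0], [0.0, 1.0]],
    A=[[1.0, 1.0]],
    B=[[0.5], [0.5]],
    x=[1.0, 2.0],
    alpha=2.0,
    r=1,
)
# y == [4.0, 5.0]
```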

mlx-vlm (main trainer maintainer) actively maintained — Vision-language model training on MLX. Fully rewrote the training backend, added ORPO support.

MLX-Embeddings-LoRA actively maintained — Fine-tune embedding models for retrieval and semantic tasks on Apple Silicon.

MLX-Benchmark actively maintained — First CLI benchmark measuring LLM understanding of the MLX ecosystem and its APIs.

Moshi-FineTune-MLX actively maintained — LoRA and full fine-tuning for Moshi speech-to-speech models on Apple Silicon.

Local NotebookLM actively maintained — Fully local, PDF-grounded audio generation (up to 6 speakers). No API keys, no cloud. Companion native app included.

MLX-LM-LENS maintained — Interpretability and abliteration tooling for MLX language models.

MLX-KAN low activity — Kolmogorov-Arnold Networks, natively in MLX.

Gabliteration low activity — Companion repo to arXiv:2412.06527. Remove refusal directions from any HF Transformers model.
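The basic operation behind removing a refusal direction can be sketched as projecting that direction out of a weight matrix. This is a generic illustration of the linear algebra only, not the Gabliteration procedure itself (which also involves finding the direction from model activations):

```python
import math

def ablate_direction(W, v):
    """Remove the component of each row of W along direction v.

    Normalizes v to a unit vector u, then returns W' = W - (W u) u^T,
    so every row of W' is orthogonal to v.
    """
    norm = math.sqrt(sum(c * c for c in v))
    u = [c / norm for c in v]
    out = []
    for row in W:
        proj = sum(r * c for r, c in zip(row, u))  # scalar projection onto u
        out.append([r - proj * c for r, c in zip(row, u)])
    return out

W = [[1.0, 2.0], [3.0, 4.0]]
W2 = ablate_direction(W, [1.0, 0.0])
# Every row of W2 is orthogonal to [1, 0]: its first column is zeroed.
# W2 == [[0.0, 2.0], [0.0, 4.0]]
```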


Currently Building

J.O.S.I.E.-Home — Fully local real-time multimodal smart home assistant. Discrete diffusion LM, custom ChatML-style tokenizer (hardcoded vocab: rooms, devices, properties, value bins). No cloud dependency.

Josie-Linear — New Linear Dynamic Mixture-of-Experts LLM architecture.
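A hardcoded vocabulary like the one J.O.S.I.E.-Home uses could look something like the sketch below. All names here (rooms, devices, properties, bin count) are hypothetical placeholders for illustration, not the project's actual vocabulary; the point is that every entity gets one fixed token id, so the model never spells entity names out of subwords:

```python
# Hypothetical hardcoded vocabulary: special tokens, then rooms,
# devices, properties, and discretized value bins.
VOCAB = [
    "<pad>", "<bos>", "<eos>",
    "room:kitchen", "room:bedroom",
    "device:light", "device:thermostat",
    "prop:brightness", "prop:temperature",
    *[f"bin:{i}" for i in range(10)],
]
TOK2ID = {tok: i for i, tok in enumerate(VOCAB)}
ID2TOK = {i: tok for tok, i in TOK2ID.items()}

def encode(tokens):
    """Map known tokens to their fixed ids; unknown tokens raise KeyError."""
    return [TOK2ID[t] for t in tokens]

def decode(ids):
    return [ID2TOK[i] for i in ids]

ids = encode(["<bos>", "room:kitchen", "device:light",
              "prop:brightness", "bin:7", "<eos>"])
assert decode(ids)[1] == "room:kitchen"  # exact round-trip by construction
```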


If my work has saved you GPU bills or ended up in your pipeline, consider sponsoring. Everything here is free and maintained in my spare time.

Sponsor


Pinned

  1. mlx-lm-lora — Train large language models on MLX. (Python)

  2. mlx-lm-lora-example-notebooks — All official MLX-LM-LoRA example notebooks for training on Apple Silicon. (Jupyter Notebook)

  3. mlx-kan — KAN (Kolmogorov–Arnold Networks) in the MLX framework for Apple Silicon. (Python)

  4. mlx-embeddings-lora — Train embedding models on MLX. (Python)

  5. Local-NotebookLM — Google's NotebookLM, but local. (Python)

  6. gabliteration — Automated hyperparameter search for optimal Gabliteration configurations on large language models. (Python)