Models
20 registered models
| Status | Name | Description | Type | Endpoint | Added |
|---|---|---|---|---|---|
| | Julia Monarch | Julia model with no attention, trained using Monarch | OpenAI Compat | MonarchSLM | 2026-02-25 |
| | JuliaSLM | JuliaSLM, 5M parameters | OpenAI Compat | JuliaSLM | 2026-02-25 |
| | alpha-hf-v0-historic-chat | Just a barely chattable model | OpenAI Compat | alpha-v0-historic | 2026-02-24 |
| | JuliaFluxGPT | From scratch in Julia | OpenAI Compat | JuliaFluxGPT | 2026-02-23 |
| | randygpt-ds2-moe | 12 layers, 128-dim, 4 experts × 256d, top-2 routing; ~4.48M params. WIP, training | OpenAI Compat | randygpt-ds-moe | 2026-02-21 |
| | randygpt-ds2 | WIP; cleaner data | OpenAI Compat | randygpt-ds2 | 2026-02-20 |
| | fourth-ward | asdasd | OpenAI Compat | fourth-gpt | 2026-02-20 |
| | MicroGpt-GO | A 2-layer, 15M-parameter character-level GPT trained from scratch in Go | OpenAI Compat | microgpt | 2026-02-20 |
| | JuliaGPT2 | Fork of micro Julia | OpenAI Compat | JuliaGPT | 2026-02-20 |
| Official | hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M | unturf Community Ollama | OpenAI Compat | hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M | 2026-02-19 |
| Official | hermes | hermes | OpenAI Compat | adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic | 2026-02-19 |
| | randyGPT-ds | 2.78M params; 12 layers; 4 heads; embedding dim 128; 256-token context window; vocab size 1500 (BPE); 2750 training iters; best val loss 4.4178 | OpenAI Compat | randygpt-ds | 2026-02-19 |
| | randyGPT-s | Trained on ~103 MB of cleaned Project Gutenberg text (114 public-domain books) with BPE-1500 tokenization, AdamW, cosine LR decay, and ReduceLROnPlateau. Metal GPU via Candle on Apple Silicon | OpenAI Compat | randygpt-s | 2026-02-19 |
| | lfm25-1-2b-instruct | Fast edge inference: 239 tok/s decode on AMD CPU, 82 tok/s on mobile NPU; runs in under 1 GB of memory | OpenAI Compat | lfm25-1-2b-instruct | 2026-02-19 |
| | exaone-4-1-2b | Small reasoning model | OpenAI Compat | exaone-4-1-2b | 2026-02-19 |
| | ouroboros-1m-gemma-270m | Ouroboros-1M is a proof of concept that scales the tiny gemma-3-270m-it to support a 1M-token context window | OpenAI Compat | ouroboros-1m-gemma-270m | 2026-02-19 |
| | JuliaGPT | MicroGPT in Julia | OpenAI Compat | juliagpt | 2026-02-19 |
| | omega | omega is actually an agent | OpenAI Compat | omega | 2026-02-19 |
| Official | openai - gpt-4.1-mini | not retarded | OpenRouter | openai/gpt-4.1-mini | 2026-02-19 |
| | randygpt-8L-4H-128D | ~1.73M params, BPE-500; val loss ~3.99, ppl ~54 at training iter ~825 | Custom HTTP | randygpt.gnostr.cloud | 2026-02-19 |
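
Most entries above have Type "OpenAI Compat", meaning any OpenAI-style client can reach them by pointing its base URL at the registry's gateway and passing the Endpoint value as the model name. A minimal sketch in Python, assuming a hypothetical gateway URL (`https://models.example.com/v1`) and API key, neither of which comes from this registry; substitute your deployment's actual values:

```python
# Minimal sketch: query one of the OpenAI-compatible models listed above.
# ASSUMPTIONS: the base_url and api_key below are placeholders, not real
# values from this registry; the "model" field takes the Endpoint column.
from openai import OpenAI

client = OpenAI(
    base_url="https://models.example.com/v1",  # hypothetical gateway URL
    api_key="sk-...",                          # your registry API key
)

resp = client.chat.completions.create(
    model="randygpt-ds2",  # any Endpoint value whose Type is "OpenAI Compat"
    messages=[{"role": "user", "content": "Once upon a time"}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```

The row with Type "Custom HTTP" (randygpt.gnostr.cloud) presumably exposes its own request format rather than this API, and the OpenRouter row is routed through OpenRouter's service.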