Models

20 registered models

| Status | Name | Description | Type | Endpoint | Added |
|---|---|---|---|---|---|
| | Julia Monarch | Julia with no attention, trained using Monarch | OpenAI Compat | MonarchSLM | 2026-02-25 |
| | JuliaSLM | JuliaSLM 5M | OpenAI Compat | JuliaSLM | 2026-02-25 |
| | alpha-hf-v0-historic-chat | Just a barely chattable model | OpenAI Compat | alpha-v0-historic | 2026-02-24 |
| | JuliaFluxGPT | From scratch in Julia | OpenAI Compat | JuliaFluxGPT | 2026-02-23 |
| | randygpt-ds2-moe | 12 layers, 128d, 4 experts x 256d, top-2, ~4.48M params (WIP, training) | OpenAI Compat | randygpt-ds-moe | 2026-02-21 |
| | randygpt-ds2 | WIP, cleaner data | OpenAI Compat | randygpt-ds2 | 2026-02-20 |
| | fourth-ward | (no description) | OpenAI Compat | fourth-gpt | 2026-02-20 |
| | MicroGpt-GO | A 2-layer, 15M-parameter character-level GPT model trained from scratch in Go | OpenAI Compat | microgpt | 2026-02-20 |
| | JuliaGPT2 | Fork of micro Julia | OpenAI Compat | JuliaGPT | 2026-02-20 |
| Official | hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M | unturf Community Ollama | OpenAI Compat | hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M | 2026-02-19 |
| Official | hermes | Hermes | OpenAI Compat | adamo1139/Hermes-3-Llama-3.1-8B-FP8-Dynamic | 2026-02-19 |
| | randyGPT-ds | 2.78M parameters, 12 layers, 4 heads, embedding dim 128, 256-token context window, vocab 1500 (BPE), 2750 training iters, best val loss 4.4178 | OpenAI Compat | randygpt-ds | 2026-02-19 |
| | randyGPT-s | Trained on ~103 MB of cleaned Project Gutenberg text (114 public-domain books) with BPE-1500 tokenization, AdamW optimizer, cosine LR decay, and ReduceLROnPlateau; Metal GPU via Candle on Apple Silicon | OpenAI Compat | randygpt-s | 2026-02-19 |
| | lfm25-1-2b-instruct | Fast edge inference: 239 tok/s decode on an AMD CPU, 82 tok/s on a mobile NPU; runs in under 1 GB of memory | OpenAI Compat | lfm25-1-2b-instruct | 2026-02-19 |
| | exaone-4-1-2b | Small reasoning model | OpenAI Compat | exaone-4-1-2b | 2026-02-19 |
| | ouroboros-1m-gemma-270m | Ouroboros-1M is a proof of concept that scales the tiny gemma-3-270m-it to support a 1-million-token context window | OpenAI Compat | ouroboros-1m-gemma-270m | 2026-02-19 |
| | JuliaGPT | MicroGPT in Julia | OpenAI Compat | juliagpt | 2026-02-19 |
| | omega | Omega is actually an agent | OpenAI Compat | omega | 2026-02-19 |
| Official | openai - gpt-4.1-mini | Capable general-purpose model | OpenRouter | openai/gpt-4.1-mini | 2026-02-19 |
| | randygpt-8L-4H-128D ~1.73M bpe500 | Val loss ~3.99, ppl ~54 at iter ~825 | Custom HTTP | randygpt.gnostr.cloud | 2026-02-19 |