🎲 Galton Lab

Interactive Journey Through Probability as Flow
"What if probability didn't have to be calculated? What if it could flow?"

Welcome to the Interactive Demos

Galton Lab reimagines how neural networks compute probability. Instead of calculating distributions algebraically (softmax), we let probability emerge from geometric flow, like water finding its way downhill.

These interactive demos will take you from the physics of a Galton board to production-ready machine learning systems, showing how this simple idea scales to transformers, image classification, reinforcement learning, and beyond.
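The physical intuition is easy to see in code. Below is a minimal sketch (my own illustration, not code from this repository) of the classic Galton board: each ball hits one peg per row and bounces right with some probability, and the bin counts that emerge follow a binomial distribution without any distribution ever being written down algebraically.

```python
import random
from collections import Counter

def galton_board(rows=10, balls=10_000, p_right=0.5):
    """Simulate balls dropping through a triangular peg board.

    Each ball hits one peg per row and bounces right with
    probability p_right. Its final bin is the number of right
    bounces, so bin counts approximate a Binomial(rows, p_right)
    distribution -- the bell curve emerges from the flow itself.
    """
    bins = Counter()
    for _ in range(balls):
        position = sum(random.random() < p_right for _ in range(rows))
        bins[position] += 1
    return bins

counts = galton_board()  # mass concentrates in the middle bins
```

With unbiased pegs the board reproduces the binomial bell curve; the interactive demos then show what happens when the pegs themselves are learned.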

🎯

Foundation: Physics to Transformers

Start your journey from physical Galton boards through learned pegs, continuous flow, and integration with transformer architectures.
  • Physical Galton Board (Interactive)
  • Learned Bias Fields
  • ML Prediction with Adaptive Compute
  • Continuous SDF Flow on a Torus
  • Transformer Integration
  • The Softmax Problem & Solutions
Start the Foundation Journey →
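To make the "learned pegs" and "softmax problem" items above concrete, here is a hedged sketch of one way pegs could be parameterized (the `peg_biases` layout and sigmoid mapping are my assumptions for illustration, not the project's actual API): each peg gets a learnable score, and a single forward pass propagates probability mass down the board, yielding a valid categorical distribution by construction, with no exponentiate-and-normalize step.

```python
import math

def flow_distribution(peg_biases):
    """Propagate probability mass through a triangular peg board.

    peg_biases[r][i] is a raw (learnable) score for peg i in row r;
    a sigmoid maps it to the probability of bouncing right. The
    returned list of len(peg_biases) + 1 bin masses sums to 1 by
    construction, because each peg only splits mass, never creates it.
    """
    mass = [1.0]                          # all mass enters at the top peg
    for row in peg_biases:
        nxt = [0.0] * (len(mass) + 1)
        for i, m in enumerate(mass):
            p_right = 1.0 / (1.0 + math.exp(-row[i]))
            nxt[i] += m * (1.0 - p_right)   # share bounced left
            nxt[i + 1] += m * p_right       # share bounced right
        mass = nxt
    return mass

# Unbiased pegs (score 0 -> p_right = 0.5) recover binomial weights.
dist = flow_distribution([[0.0] * (r + 1) for r in range(4)])
```

Because every operation is a differentiable split of mass, gradient descent can reshape the bins by nudging peg scores, which is the sense in which the board is "learned" rather than computed via softmax.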
🌐

Beyond LLMs: Universal Applications

Discover how Galton flow applies to image classification, attention mechanisms, RL policies, and any domain where you need categorical distributions.
  • Image Classification (2D Flow Fields)
  • Attention as Geometric Routing
  • RL Policies via Action Flow
  • Interactive Visualizations
  • Links to Runnable Code Examples
  • The Universal Pattern
Explore Broader Applications →

Additional Resources

πŸ“– Documentation

Galton.md – The origin story

ODE Training Guide – Deep technical dive

πŸ’» Code Examples

examples/ – Runnable Python examples

Use Cases – 8+ application domains

πŸ”¬ Try It Yourself

GitHub Repository

Clone, train, and experiment with your own Galton samplers

πŸ“Š Research

experiments/ – Comparative studies

tests/ – Geometric invariants