MIND Language
The native programming language for intelligent systems, powering NIKOLA.
What is MIND?
MIND (Machine INtelligence Development) is a systems programming language designed specifically for AI and high-performance computing. NIKOLA is written entirely in MIND, leveraging its native tensor operations and zero-overhead abstractions.
Why MIND Instead of Python/Rust?
vs Python/PyTorch
- 65,000-125,000x faster compile
- 1,345-11,284x faster autodiff
- No GIL limitations
- Zero dependencies
vs Rust
- Native tensors
- Simpler syntax
- Built-in GPU kernels
- AI-first design
vs C++
- Memory safe
- Modern syntax
- Faster compile
- Cross-platform GPU
Key Features
Native Tensor Types
First-class tensor support with compile-time dimension checking:
// HalfKA NNUE architecture: 45056 → 1024 → 8 → 32 → 1
let ft_weights: tensor<i16, (45056, 1024)> = load_weights("ft.bin");
let features: tensor<i8, 45056> = extract_halfka(&position);
let accumulator = ft_weights.forward(&features); // GPU-accelerated
GPU Kernels
Write GPU code inline with automatic backend selection:
fn gpu_batch_evaluate(positions: &[Board], batch_size: usize) -> Vec<i32> {
    let mut results = vec![0i32; positions.len()];
    on(gpu0) {
        parallel for i in 0..positions.len() {
            let features = extract_halfka(&positions[i]);
            results[i] = network_forward(&features);
        }
    }
    results
}
Zero-Cost Abstractions
High-level code compiles to optimal machine code:
// This high-level code...
let best_move = moves
    .filter(|m| is_legal(m))
    .max_by(|m| evaluate(m));
// ...compiles to hand-optimized assembly
Cross-Platform Compilation
Single codebase compiles to CUDA, Metal, ROCm, WebGPU, and CPU backends without modification. The MIND compiler automatically generates optimized kernels for each target.
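As a rough mental model of single-source, multi-backend compilation (a hypothetical Rust sketch; `Backend`, `Kernel`, and `emit_kernel` are illustrative names, not part of the MIND toolchain), one backend-neutral kernel description can be lowered to target-specific source per backend:

```rust
// Hypothetical sketch: one kernel description, several codegen targets.
// All names here are illustrative, not MIND's actual compiler internals.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Backend { Cuda, Metal, Rocm, WebGpu, Cpu }

// A backend-neutral kernel description...
struct Kernel<'a> {
    name: &'a str,
    body: &'a str,
}

// ...lowered to target-specific source by a per-backend code generator.
fn emit_kernel(k: &Kernel, target: Backend) -> String {
    let prologue = match target {
        Backend::Cuda | Backend::Rocm => "__global__ void", // CUDA/HIP kernels
        Backend::Metal => "kernel void",                    // Metal Shading Language
        Backend::WebGpu => "@compute fn",                   // WGSL compute entry point
        Backend::Cpu => "fn",                               // plain function fallback
    };
    format!("{} {}() {{ {} }}", prologue, k.name, k.body)
}
```

The point of the sketch is only that the kernel author writes `Kernel` once; the per-target prologues (and, in a real compiler, the full lowering) are the compiler's job.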
MIND in NIKOLA
NIKOLA's 44 source files demonstrate MIND's strengths:
- GPU-Batched NNUE - Lazy SMP threads submit positions to GPU for batch evaluation (500M+ pos/sec)
- SPTT Hybrid Search - Dynamic switching between alpha-beta and GPU MCTS based on position
- GPU MCTS - Monte Carlo Tree Search with PUCT selection and virtual loss for parallelism
- Fortress Detection - CNN-based detection of defensive structures
- Move Generation - Magic bitboards with SIMD-optimized operations
- Distributed Hash - Lock-free transposition tables across cluster nodes
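To make the GPU MCTS item concrete, PUCT selection scores each child by its mean value plus an exploration bonus weighted by the policy prior. A minimal sketch in Rust (illustrative only; `Child` and `select_puct` are assumed names, not NIKOLA's actual code):

```rust
// Sketch of PUCT child selection as used in MCTS.
// score = Q(s, a) + c_puct * P(s, a) * sqrt(N_parent) / (1 + N(s, a))

struct Child {
    prior: f64,     // P(s, a) from the policy network
    visits: u32,    // N(s, a)
    value_sum: f64, // sum of backed-up values; Q = value_sum / visits
}

/// Returns the index of the child maximizing the PUCT score.
fn select_puct(children: &[Child], parent_visits: u32, c_puct: f64) -> usize {
    let sqrt_n = (parent_visits as f64).sqrt();
    let mut best = 0;
    let mut best_score = f64::NEG_INFINITY;
    for (i, c) in children.iter().enumerate() {
        // Unvisited children get Q = 0, so their prior drives exploration.
        let q = if c.visits == 0 { 0.0 } else { c.value_sum / c.visits as f64 };
        let u = c_puct * c.prior * sqrt_n / (1.0 + c.visits as f64);
        if q + u > best_score {
            best_score = q + u;
            best = i;
        }
    }
    best
}
```

The exploration term shrinks as a child accumulates visits, which is what lets many parallel workers (with virtual loss temporarily inflating `visits`) spread across different branches instead of piling onto one.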
Getting Started with MIND
Install the MIND toolchain:
Linux / macOS
curl -fsSL https://mindlang.dev/install.sh | bash
Windows
irm https://mindlang.dev/install.ps1 | iex