Democratizing AI infrastructure with a community-powered compute grid.
Access GPU compute at 30-50% lower cost than centralized cloud providers. Our distributed network eliminates data center overhead, passing savings directly to you.
No more waiting in queues for H100s. Launch jobs instantly across thousands of consumer and prosumer GPUs well suited to fine-tuning and inference.
From single-GPU prototypes to multi-shard distributed training. Seamlessly scale your workload across a heterogeneous grid of NVIDIA, AMD, and Intel hardware.
Six-step pipeline from job definition to verified delivery with transparent billing.
Choose the right trust level for your workload — from open community to dedicated enterprise capacity.
HU (Hugin Unit) is your simple billing unit: 1 HU ≈ 3,600 normalized GPU-seconds. Every job comes with a pre-estimate and a hard upper bound before it runs.
Owner Payout Rate: 1 HU = €0.153 net · 73% owner share · ~20% above RunPod / Vast.ai
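The HU arithmetic above can be sketched as a quick estimator. A minimal illustration assuming the published rates (1 HU = 3,600 normalized GPU-seconds, €0.153 net per HU); the helper names are hypothetical, not part of any official SDK.

```python
# Hypothetical helpers sketching the HU billing math described above.
# Assumes: 1 HU = 3,600 normalized GPU-seconds, owner payout €0.153 net/HU.

SECONDS_PER_HU = 3_600
OWNER_PAYOUT_EUR_PER_HU = 0.153  # net, after the 73% owner share

def hu_from_gpu_seconds(normalized_gpu_seconds: float) -> float:
    """Convert normalized GPU-seconds into Hugin Units (HU)."""
    return normalized_gpu_seconds / SECONDS_PER_HU

def owner_payout_eur(hu: float) -> float:
    """Net payout to a device owner for the given HU, in euros."""
    return hu * OWNER_PAYOUT_EUR_PER_HU

# A 2-hour job at full normalized utilization:
hu = hu_from_gpu_seconds(2 * 3_600)
print(f"{hu:.2f} HU → €{owner_payout_eur(hu):.3f} to the owner")  # 2.00 HU → €0.306
```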
| Device Class | ~HU/Hour | Examples |
|---|---|---|
| Smartphones (Idle) | 0.05 - 0.15 | iPhone 15 Pro, Pixel 8 |
| Electric Vehicles (EV) | 0.2 - 0.4 | Tesla MCU, Polestar 2 |
| 4-6GB Consumer | 0.4 - 0.6 | GTX 1650, RTX 3050 |
| 8-12GB Consumer | 0.6 - 0.9 | RTX 3060, RTX 4070 |
| 16-24GB Workstation | 1.6 - 2.8 | RTX 3090, RTX 4090 |
| 24-48GB+ Datacenter | 2.8 - 4.5 | A100, H100 |
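Combining the table with the payout rate above gives a rough net hourly-earnings range per device class. A minimal sketch, assuming the published €0.153 net per HU; the class names and HU/hour ranges are copied directly from the table.

```python
# Rough net hourly earnings per device class, combining the HU/hour
# table with the published owner payout of €0.153 net per HU.

OWNER_PAYOUT_EUR_PER_HU = 0.153

HU_PER_HOUR = {  # (low, high) HU/hour, taken from the table above
    "Smartphones (Idle)":     (0.05, 0.15),
    "Electric Vehicles (EV)": (0.2, 0.4),
    "4-6GB Consumer":         (0.4, 0.6),
    "8-12GB Consumer":        (0.6, 0.9),
    "16-24GB Workstation":    (1.6, 2.8),
    "24-48GB+ Datacenter":    (2.8, 4.5),
}

def hourly_earnings_eur(device_class: str) -> tuple[float, float]:
    """Return the (low, high) net €/hour range for a device class."""
    lo, hi = HU_PER_HOUR[device_class]
    return lo * OWNER_PAYOUT_EUR_PER_HU, hi * OWNER_PAYOUT_EUR_PER_HU

lo, hi = hourly_earnings_eur("16-24GB Workstation")
print(f"RTX 3090/4090 class: €{lo:.2f} - €{hi:.2f} per hour")
```

For example, an RTX 3090/4090-class card at 1.6-2.8 HU/hour works out to roughly €0.24-€0.43 net per hour at this rate.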