Tag: #gpu-cloud

2 articles tagged with "gpu-cloud"

#inference (4) · #model-hosting (4) · #serverless-gpu (3) · #cuda (2) · #grace-hopper (2) · #gpu (2) · #pricing (2) · #gpu-cloud (2) · #reinforcement-learning (1) · #fine-tuning (1) · #visual-generation (1) · #pipeline (1) · #mamba (1) · #qwen3.5 (1) · #linear-attention (1)
February 17, 2026 · 6 min read

Ionattention: Grace Hopper–Native Inference

How Cumulus built the fastest GPU inference runtime for NVIDIA GH200 — 7,167 tok/s on a single chip. Coherent CUDA graphs, eager KV writeback, and phantom-tile attention scheduling.

inference · gpu · grace-hopper · +6
February 9, 2026 · 3 min read

Why We Built a Cheaper, Faster GPU Cloud for AI Model Hosting

Cumulus Labs is building the cheapest serverless GPU cloud for AI model hosting. Here's why dedicated GPU instances waste money and how pay-per-second GPU inference changes the economics.

gpu-cloud · serverless-gpu · model-hosting · +1
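
As a back-of-the-envelope illustration of the pricing claim in the entry above, here is a minimal sketch of how pay-per-second billing compares with a dedicated instance at different utilization levels. All prices and the break-even point are hypothetical placeholders for illustration, not Cumulus's actual rates.

# Rough cost comparison: dedicated GPU instance vs. pay-per-second serverless.
# All dollar figures below are hypothetical, chosen only to show the shape of
# the trade-off described in the article teaser above.

DEDICATED_PER_HOUR = 4.00       # hypothetical dedicated instance rate, $/hour
SERVERLESS_PER_SECOND = 0.0020  # hypothetical serverless rate, $/busy second

HOURS_IN_MONTH = 730.0

def monthly_cost_dedicated() -> float:
    """A dedicated instance bills for every hour, busy or idle."""
    return DEDICATED_PER_HOUR * HOURS_IN_MONTH

def monthly_cost_serverless(utilization: float) -> float:
    """Pay-per-second billing charges only for seconds the GPU is busy."""
    busy_seconds = utilization * HOURS_IN_MONTH * 3600.0
    return SERVERLESS_PER_SECOND * busy_seconds

if __name__ == "__main__":
    for util in (0.05, 0.15, 0.50):
        print(f"utilization {util:4.0%}: "
              f"dedicated ${monthly_cost_dedicated():8.2f}/mo, "
              f"serverless ${monthly_cost_serverless(util):8.2f}/mo")

With these placeholder rates, a workload busy 5% of the time costs roughly $263/month serverless versus $2,920/month dedicated; the two only converge as utilization approaches the break-even point (about 56% here). The exact crossover depends entirely on the real per-hour and per-second prices.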


© 2026 Cumulus Compute Labs Corporation. All rights reserved.