
CoreWeave
Specialized cloud provider for AI training with H100/B200 at scale
About
CoreWeave is a cloud infrastructure provider specializing in large-scale AI training. The platform runs on Nvidia H100, H200, and B200 GPUs and has been adopted by leading companies, including OpenAI and Microsoft, for demanding AI workloads. Its GPU-first architecture is particularly well-suited to frontier-model training, which requires substantial computational resources at sustained scale.
Best for
- Frontier AI labs training large models
- Enterprises reserving GPU capacity
Similar programs in Infrastructure
Algolia
Managed search API with NeuralSearch AI for production apps
Anyscale
Ray-based distributed compute for AI and ML at scale
Baseten
Production ML model deployment with auto-scaling GPU inference
Chroma
Open-source embedding database for building AI applications