Select from our high-performance GPU options based on your computational needs.

Available GPU Types

Choose the GPU that best matches your workload requirements:

| GPU Model | Performance | Best For |
| --- | --- | --- |
| H100-SXM5-80GB | Highest performance for large-scale AI training | Large language models, complex AI research |
| H100-PCIe-NVLink-80GB | High performance with NVLink | Multi-GPU workloads, distributed training |
| H100-PCIe-80GB | High-performance PCIe interface | Single-GPU inference, model training |
| A100-SXM4-80GB-NVLink | Excellent for deep learning and HPC | Deep learning, scientific computing |
| A100-PCIe-80GB | High-performance PCIe A100 | ML training, data analytics |
| L40 | Balanced performance for diverse workloads | AI inference, graphics workloads |
| RTX-A6000 | Cost-effective for smaller workloads | Development, small-scale training |
| A40 | Versatile professional GPU | Professional graphics, AI development |
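For scripted deployments, the pairings in the table above can be captured as a small lookup. This is an illustrative sketch only: the workload keyword names and the L40 fallback are assumptions, not part of any official API.

```python
# Map a workload keyword to a recommended GPU model, following the
# "Best For" column of the table above. Keyword names are illustrative.
RECOMMENDED = {
    "llm-training": "H100-SXM5-80GB",
    "distributed-training": "H100-PCIe-NVLink-80GB",
    "inference": "H100-PCIe-80GB",
    "deep-learning": "A100-SXM4-80GB-NVLink",
    "analytics": "A100-PCIe-80GB",
    "graphics": "L40",
    "development": "RTX-A6000",
}

def recommend(workload: str) -> str:
    # Fall back to the L40 as a balanced default for unlisted workloads.
    return RECOMMENDED.get(workload, "L40")

print(recommend("development"))  # RTX-A6000
```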

GPU Instance Pricing

USD Pricing

| GPU Model | USD per Hour | Performance Level |
| --- | --- | --- |
| H100-SXM5-80GB | $2.69 | Highest |
| H100-PCIe-NVLink-80GB | $2.29 | Very High |
| H100-PCIe-80GB | $2.25 | Very High |
| A100-SXM4-80GB-NVLink | $1.59 | High |
| A100-PCIe-80GB | $1.55 | High |
| L40 | $1.19 | Medium-High |
| RTX-A6000 | $0.69 | Medium |
| A40 | $0.69 | Medium |

EUR Pricing

| GPU Model | EUR per Hour | Performance Level |
| --- | --- | --- |
| H100-SXM5-80GB | €2.49 | Highest |
| H100-PCIe-NVLink-80GB | €2.09 | Very High |
| H100-PCIe-80GB | €2.05 | Very High |
| A100-SXM4-80GB-NVLink | €1.45 | High |
| A100-PCIe-80GB | €1.39 | High |
| L40 | €1.09 | Medium-High |
| RTX-A6000 | €0.45 | Medium |
| A40 | €0.45 | Medium |

INR Pricing

| GPU Model | INR per Hour | Performance Level |
| --- | --- | --- |
| H100-SXM5-80GB | ₹239 | Highest |
| H100-PCIe-NVLink-80GB | ₹199 | Very High |
| H100-PCIe-80GB | ₹195 | Very High |
| A100-SXM4-80GB-NVLink | ₹135 | High |
| A100-PCIe-80GB | ₹135 | High |
| L40 | ₹99 | Medium-High |
| RTX-A6000 | ₹49 | Medium |
| A40 | ₹49 | Medium |
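To budget a run from the hourly rates above, a quick sketch can help. The USD rates below are copied from the pricing table; a real script should pull current prices, since published rates may change.

```python
# Estimate the total cost of a run from the USD per-hour rates
# listed in the pricing table above.
HOURLY_USD = {
    "H100-SXM5-80GB": 2.69,
    "H100-PCIe-NVLink-80GB": 2.29,
    "H100-PCIe-80GB": 2.25,
    "A100-SXM4-80GB-NVLink": 1.59,
    "A100-PCIe-80GB": 1.55,
    "L40": 1.19,
    "RTX-A6000": 0.69,
    "A40": 0.69,
}

def run_cost(gpu: str, gpu_count: int, hours: float) -> float:
    """Total cost in USD for `gpu_count` GPUs running for `hours`."""
    return HOURLY_USD[gpu] * gpu_count * hours

# A 72-hour training run on 4x H100-SXM5-80GB:
print(f"${run_cost('H100-SXM5-80GB', 4, 72):.2f}")  # $774.72
```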

GPU Detailed Specifications

NVIDIA H100 GPUs

H100-SXM5-80GB

  • Memory: 80GB HBM3
  • Architecture: Hopper
  • Best for: Large language models, transformer training
  • Multi-Instance GPU: Yes
  • NVLink: 900 GB/s

H100-PCIe-NVLink-80GB

  • Memory: 80GB HBM3
  • Interface: PCIe with NVLink
  • Best for: Multi-GPU distributed training
  • Interconnect: High-bandwidth NVLink

H100-PCIe-80GB

  • Memory: 80GB HBM3
  • Interface: PCIe 5.0
  • Best for: Single-GPU inference and training
  • Power Efficiency: Optimized for single-node workloads

GPU Count Selection

Choose the number of GPUs for your virtual machine:

Single GPU

Perfect for:

  • Development and testing
  • Small to medium model training
  • Inference workloads
  • Cost-effective computing

Multi-GPU

Ideal for:

  • Large model training
  • Distributed computing
  • High-throughput inference
  • Parallel processing workloads

Multi-GPU Configuration

  • Available counts are dynamically updated based on current stock
  • Higher GPU counts provide more computational power for parallel workloads
  • Multi-GPU setups automatically include NVLink for supported GPU types
  • GPU availability varies by region and is updated in real-time

NVLink is automatically included for multi-GPU configurations with compatible GPU types (H100 and A100 series with NVLink variants).
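For a rough sense of what NVLink bandwidth buys in multi-GPU training, the sketch below estimates the time for one gradient all-reduce using the 900 GB/s figure from the H100-SXM5 spec below. The formula is the standard ring all-reduce cost model; the model size and precision are illustrative assumptions, and real throughput also depends on latency and software overhead.

```python
def allreduce_seconds(buf_bytes: float, n_gpus: int, bw_bytes_per_s: float) -> float:
    """Ring all-reduce cost model: each GPU transfers 2*(n-1)/n of the
    buffer over the interconnect, ignoring latency and overlap."""
    if n_gpus < 2:
        return 0.0
    return 2 * (n_gpus - 1) / n_gpus * buf_bytes / bw_bytes_per_s

# Gradients of a 7B-parameter model in fp16 (2 bytes/param, an
# illustrative assumption) over 900 GB/s NVLink on 4 GPUs:
grad_bytes = 7e9 * 2
t = allreduce_seconds(grad_bytes, 4, 900e9)
print(f"{t * 1000:.1f} ms per all-reduce")
```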

GPU Selection Guide

By Performance Tier

H100 Series

  • Latest architecture with highest performance
  • 80GB memory for the largest models
  • Advanced Tensor Cores for AI workloads
  • Best for cutting-edge research and production

Best for: Large-scale AI training, advanced research, production LLM inference

Regional Availability

GPU availability and selection vary by region:

NORWAY-1

Europe Region

  • All GPU types available
  • Low latency for European users
  • GDPR compliant infrastructure

CANADA-1

North America Region

  • All GPU types available
  • High-speed connectivity
  • Optimized for North American users

US-1

United States Region

  • All GPU types available
  • US-based infrastructure
  • Low latency for US users

GPU availability is updated in real-time. If your preferred GPU type is not available in your selected region, try another region or check back later.

GPU Selection Tips

Performance Optimization

Memory Considerations

Key Factors:

  • Model size requirements
  • Batch size optimization
  • Dataset memory usage
  • Multi-model deployment
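To put the memory factors above in concrete terms, here is a rough per-GPU training-memory estimate using common rules of thumb (fp16 weights and gradients, Adam optimizer states). The byte counts are assumptions, not exact figures, and activations, which vary widely with batch size, are excluded.

```python
GB = 1024**3

def training_memory_gb(n_params: float) -> float:
    """Rough training memory for a model, before activations.
    Byte counts are common rules of thumb, not exact figures."""
    weights = 2 * n_params     # fp16 weights
    grads = 2 * n_params       # fp16 gradients
    optimizer = 12 * n_params  # Adam: fp32 master weights + 2 moments
    return (weights + grads + optimizer) / GB

# A 7B-parameter model against an 80GB card:
print(f"{training_memory_gb(7e9):.0f} GB before activations")  # 104 GB
```

At roughly 104 GB before activations, a 7B model already exceeds a single 80GB card for full fp16 + Adam training, which is one reason multi-GPU configurations or memory-saving techniques come into play.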

Compute Requirements

Key Factors:

  • Training time constraints
  • Inference latency needs
  • Parallel processing requirements
  • Throughput expectations

Getting Started

Step 1: Assess Your Needs

  • Determine your primary use case
  • Estimate memory requirements
  • Consider performance needs
  • Plan your budget

Step 2: Select GPU Type

  • Choose based on workload requirements
  • Consider regional availability
  • Review pricing for your currency
  • Start with single GPU and scale as needed

Step 3: Deploy and Test

  • Deploy VM with selected GPU
  • Test performance with your workload
  • Monitor resource utilization
  • Optimize configuration as needed

Step 4: Scale and Optimize

  • Add more GPUs if needed
  • Optimize software for multi-GPU
  • Monitor costs and performance
  • Adjust configuration based on results

Start with a single GPU to test your workload, then scale to multi-GPU configurations as needed. This approach helps optimize both performance and costs.
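The "monitor resource utilization" step above can be scripted. A minimal sketch, assuming you capture the output of `nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv,noheader,nounits` on the VM (the sample values below are made up):

```python
def parse_gpu_stats(csv_text: str) -> list[dict]:
    """Parse nvidia-smi CSV output (utilization %, memory used MiB,
    memory total MiB; one line per GPU) into per-GPU stats."""
    stats = []
    for line in csv_text.strip().splitlines():
        util, used, total = (float(x) for x in line.split(","))
        stats.append({"util_pct": util,
                      "mem_used_mib": used,
                      "mem_free_mib": total - used})
    return stats

# Sample output from a 2-GPU VM (illustrative numbers):
sample = "97, 71234, 81559\n95, 70980, 81559"
for i, gpu in enumerate(parse_gpu_stats(sample)):
    print(f"GPU {i}: {gpu['util_pct']:.0f}% util, "
          f"{gpu['mem_free_mib']:.0f} MiB free")
```

Sustained utilization well below 100%, or large amounts of free memory, are signals to step down to a cheaper GPU or reduce the GPU count.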