Select from our high-performance GPU options based on your computational needs.
Choose the GPU that best matches your workload requirements:
| GPU Model | Performance | Best For |
|---|---|---|
| H100-SXM5-80GB | Highest performance for large-scale AI training | Large language models, complex AI research |
| H100-PCIe-NVLink-80GB | High performance with NVLink | Multi-GPU workloads, distributed training |
| H100-PCIe-80GB | High-performance PCIe interface | Single-GPU inference, model training |
| A100-SXM4-80GB-NVLink | Excellent for deep learning and HPC | Deep learning, scientific computing |
| A100-PCIe-80GB | High-performance PCIe A100 | ML training, data analytics |
| L40 | Balanced performance for diverse workloads | AI inference, graphics workloads |
| RTX-A6000 | Cost-effective for smaller workloads | Development, small-scale training |
| A40 | Versatile professional GPU | Professional graphics, AI development |
| GPU Model | USD per Hour | Performance Level |
|---|---|---|
| H100-SXM5-80GB | $2.69 | Highest |
| H100-PCIe-NVLink-80GB | $2.29 | Very High |
| H100-PCIe-80GB | $2.25 | Very High |
| A100-SXM4-80GB-NVLink | $1.59 | High |
| A100-PCIe-80GB | $1.55 | High |
| L40 | $1.19 | Medium-High |
| RTX-A6000 | $0.69 | Medium |
| A40 | $0.69 | Medium |
| GPU Model | EUR per Hour | Performance Level |
|---|---|---|
| H100-SXM5-80GB | €2.49 | Highest |
| H100-PCIe-NVLink-80GB | €2.09 | Very High |
| H100-PCIe-80GB | €2.05 | Very High |
| A100-SXM4-80GB-NVLink | €1.45 | High |
| A100-PCIe-80GB | €1.39 | High |
| L40 | €1.09 | Medium-High |
| RTX-A6000 | €0.45 | Medium |
| A40 | €0.45 | Medium |
| GPU Model | INR per Hour | Performance Level |
|---|---|---|
| H100-SXM5-80GB | ₹239 | Highest |
| H100-PCIe-NVLink-80GB | ₹199 | Very High |
| H100-PCIe-80GB | ₹195 | Very High |
| A100-SXM4-80GB-NVLink | ₹135 | High |
| A100-PCIe-80GB | ₹135 | High |
| L40 | ₹99 | Medium-High |
| RTX-A6000 | ₹49 | Medium |
| A40 | ₹49 | Medium |
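To estimate what a configuration will cost, multiply the hourly rate by the GPU count and the expected runtime. A minimal sketch in Python; the `HOURLY_USD` mapping simply transcribes the USD rates from the table above:

```python
# Hourly USD rates transcribed from the pricing table above.
HOURLY_USD = {
    "H100-SXM5-80GB": 2.69,
    "H100-PCIe-NVLink-80GB": 2.29,
    "H100-PCIe-80GB": 2.25,
    "A100-SXM4-80GB-NVLink": 1.59,
    "A100-PCIe-80GB": 1.55,
    "L40": 1.19,
    "RTX-A6000": 0.69,
    "A40": 0.69,
}

def estimate_cost(gpu_model: str, gpu_count: int, hours: float) -> float:
    """Estimated USD cost: hourly rate x number of GPUs x hours."""
    return HOURLY_USD[gpu_model] * gpu_count * hours

# Example: 2x A100-PCIe-80GB for a 10-hour training run.
print(f"${estimate_cost('A100-PCIe-80GB', 2, 10):.2f}")  # $31.00
```

The same pattern works for the EUR and INR tables by swapping in the corresponding rates.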
Choose the number of GPUs for your virtual machine:

- Single GPU: well suited to development, inference, and small-scale training.
- Multiple GPUs: well suited to distributed training and other large-scale workloads.
NVLink is automatically included for multi-GPU configurations with compatible GPU types (H100 and A100 series with NVLink variants).
Large Language Models (LLMs)

Recommended GPUs: H100-SXM5-80GB, H100-PCIe-NVLink-80GB

Deep Learning Research

Recommended GPUs: A100-SXM4-80GB-NVLink, A100-PCIe-80GB

AI Inference

Recommended GPUs: L40, H100-PCIe-80GB

Development & Prototyping

Recommended GPUs: RTX-A6000, A40

Graphics & Visualization

Recommended GPUs: L40, A40
H100 Series
Best for: Large-scale AI training, advanced research, production LLM inference
A100 Series
Best for: Deep learning, scientific computing, multi-user environments
L40
Best for: AI inference, mixed workloads, development
RTX A6000 / A40
Best for: Development, small-scale training, visualization
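The "Best for" mappings above can be captured in a small lookup helper that suggests GPU models for a given workload. A sketch; the workload keys and function name are illustrative, and the mapping transcribes the recommendations in this guide:

```python
# Workload-to-GPU suggestions transcribed from the "Best for" notes above.
RECOMMENDATIONS = {
    "llm-training": ["H100-SXM5-80GB", "H100-PCIe-NVLink-80GB"],
    "deep-learning": ["A100-SXM4-80GB-NVLink", "A100-PCIe-80GB"],
    "inference": ["L40", "H100-PCIe-80GB"],
    "development": ["RTX-A6000", "A40"],
    "visualization": ["L40", "A40"],
}

def suggest_gpus(workload: str) -> list[str]:
    """Return suggested GPU models for a workload, or raise for unknown ones."""
    try:
        return RECOMMENDATIONS[workload]
    except KeyError:
        raise ValueError(
            f"Unknown workload {workload!r}; choose from {sorted(RECOMMENDATIONS)}"
        )

print(suggest_gpus("inference"))  # ['L40', 'H100-PCIe-80GB']
```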
GPU availability and selection vary by region:
Europe Region
North America Region
United States Region
GPU availability is updated in real-time. If your preferred GPU type is not available in your selected region, try another region or check back later.
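The fallback advice above (try another region when your preferred GPU is unavailable) can be sketched as a small helper. The availability snapshot here is purely illustrative, not a real API response:

```python
def pick_region(gpu_model, preferred_region, availability):
    """Return the preferred region if it offers the GPU, else any region that does."""
    if gpu_model in availability.get(preferred_region, ()):
        return preferred_region
    for region, gpus in availability.items():
        if gpu_model in gpus:
            return region
    return None  # not available anywhere right now; check back later

# Illustrative availability snapshot (not real data).
availability = {
    "Europe": {"H100-PCIe-80GB", "L40", "A40"},
    "North America": {"H100-SXM5-80GB", "A100-PCIe-80GB"},
    "United States": {"RTX-A6000", "L40"},
}

print(pick_region("H100-SXM5-80GB", "Europe", availability))  # North America
```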
Key Factors:
Development vs Production

Development: cost-effective GPUs such as the RTX-A6000 or A40 keep iteration inexpensive.

Production: H100 and A100 class GPUs deliver the performance needed for production training and inference.

Workload Matching

Training Workloads: H100 and A100 series GPUs, with NVLink variants for multi-GPU distributed training.

Inference Workloads: L40 or H100-PCIe-80GB, depending on model size and latency requirements.
1. Assess Your Needs
2. Select GPU Type
3. Deploy and Test
4. Scale and Optimize
Start with a single GPU to test your workload, then scale to multi-GPU configurations as needed. This approach helps optimize both performance and costs.