Barrack API Documentation

GPU Configuration

Select from our high-performance GPU options based on your computational needs, choosing the model that best matches your workload:
GPU Model | Performance | Best For
--- | --- | ---
H100-SXM5-80GB | Highest performance for large-scale AI training | Large language models, complex AI research
H100-PCIe-NVLink-80GB | High performance with NVLink | Multi-GPU workloads, distributed training
H100-PCIe-80GB | High-performance PCIe interface | Single-GPU inference, model training
A100-SXM4-80GB-NVLink | Excellent for deep learning and HPC | Deep learning, scientific computing
A100-PCIe-80GB | High-performance PCIe A100 | ML training, data analytics
L40 | Balanced performance for diverse workloads | AI inference, graphics workloads
RTX-A6000 | Cost-effective for smaller workloads | Development, small-scale training
A40 | Versatile professional GPU | Professional graphics, AI development

GPU Instance Pricing#

USD Pricing#

GPU Model | USD per Hour | Performance Level
--- | --- | ---
H100-SXM5-80GB | $2.69 | Highest
H100-PCIe-NVLink-80GB | $2.29 | Very High
H100-PCIe-80GB | $2.25 | Very High
A100-SXM4-80GB-NVLink | $1.59 | High
A100-PCIe-80GB | $1.55 | High
L40 | $1.19 | Medium-High
RTX-A6000 | $0.69 | Medium
A40 | $0.69 | Medium

EUR Pricing#

GPU Model | EUR per Hour | Performance Level
--- | --- | ---
H100-SXM5-80GB | €2.49 | Highest
H100-PCIe-NVLink-80GB | €2.09 | Very High
H100-PCIe-80GB | €2.05 | Very High
A100-SXM4-80GB-NVLink | €1.45 | High
A100-PCIe-80GB | €1.39 | High
L40 | €1.09 | Medium-High
RTX-A6000 | €0.45 | Medium
A40 | €0.45 | Medium

INR Pricing#

GPU Model | INR per Hour | Performance Level
--- | --- | ---
H100-SXM5-80GB | ₹239 | Highest
H100-PCIe-NVLink-80GB | ₹199 | Very High
H100-PCIe-80GB | ₹195 | Very High
A100-SXM4-80GB-NVLink | ₹135 | High
A100-PCIe-80GB | ₹135 | High
L40 | ₹99 | Medium-High
RTX-A6000 | ₹49 | Medium
A40 | ₹49 | Medium
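As a quick sanity check on the tables above, hourly rates multiply directly into a monthly estimate. The sketch below uses the USD figures from this page and assumes round-the-clock usage (~730 hours per month); actual billing follows the platform's own metering.

```python
# USD hourly rates copied from the pricing table above.
USD_PER_HOUR = {
    "H100-SXM5-80GB": 2.69,
    "H100-PCIe-NVLink-80GB": 2.29,
    "H100-PCIe-80GB": 2.25,
    "A100-SXM4-80GB-NVLink": 1.59,
    "A100-PCIe-80GB": 1.55,
    "L40": 1.19,
    "RTX-A6000": 0.69,
    "A40": 0.69,
}

def monthly_cost_usd(model: str, gpu_count: int = 1, hours: float = 730) -> float:
    """Estimate a month's cost for `gpu_count` GPUs running `hours` hours."""
    return round(USD_PER_HOUR[model] * gpu_count * hours, 2)

print(monthly_cost_usd("A100-PCIe-80GB"))              # single A100, full month
print(monthly_cost_usd("H100-SXM5-80GB", gpu_count=8))  # 8x H100 cluster, full month
```

Hibernating or stopping an instance changes what is billed (see VM States & Billing), so a full-month estimate is an upper bound for intermittent workloads.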

GPU Detailed Specifications#


NVIDIA H100 GPUs#

H100-SXM5-80GB
- Memory: 80GB HBM3
- Architecture: Hopper
- Best for: Large language models, transformer training
- Multi-Instance GPU: Yes
- NVLink: 900 GB/s

H100-PCIe-NVLink-80GB
- Memory: 80GB HBM3
- Interface: PCIe with NVLink
- Best for: Multi-GPU distributed training
- Interconnect: High-bandwidth NVLink

H100-PCIe-80GB
- Memory: 80GB HBM3
- Interface: PCIe 5.0
- Best for: Single-GPU inference and training
- Power Efficiency: Optimized for single-node workloads

GPU Count Selection#

Choose the number of GPUs for your virtual machine:

Single GPU. Perfect for:
- Development and testing
- Small to medium model training
- Inference workloads
- Cost-effective computing

Multi-GPU. Ideal for:
- Large model training
- Distributed computing
- High-throughput inference
- Parallel processing workloads

Multi-GPU Configuration#

- Available counts are dynamically updated based on current stock
- Higher GPU counts provide more computational power for parallel workloads
- Multi-GPU setups automatically include NVLink for supported GPU types
- GPU availability varies by region and is updated in real time

GPU Selection Guide#

By Use Case#

- Large Language Models (LLMs)
- Deep Learning Research
- AI Inference
- Development & Prototyping
- Graphics & Visualization

By Performance Tier#

- Maximum Performance: H100 Series. Latest Hopper architecture with the highest performance, 80GB memory for the largest models, and advanced Tensor Cores for AI workloads. Best for large-scale AI training, advanced research, and production LLM inference.
- High Performance
- Balanced Performance
- Cost-Effective
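The tier labels from the pricing tables can drive a simple chooser: pick the cheapest model whose tier meets your requirement. A sketch using the USD rates and tier labels from this page; the numeric ranking of the tier labels is an assumption of this example:

```python
# (model, usd_per_hour, tier) rows from the USD pricing table above.
CATALOG = [
    ("H100-SXM5-80GB", 2.69, "Highest"),
    ("H100-PCIe-NVLink-80GB", 2.29, "Very High"),
    ("H100-PCIe-80GB", 2.25, "Very High"),
    ("A100-SXM4-80GB-NVLink", 1.59, "High"),
    ("A100-PCIe-80GB", 1.55, "High"),
    ("L40", 1.19, "Medium-High"),
    ("RTX-A6000", 0.69, "Medium"),
    ("A40", 0.69, "Medium"),
]

# Assumed ordering of the tier labels, lowest to highest.
TIER_RANK = {"Medium": 0, "Medium-High": 1, "High": 2, "Very High": 3, "Highest": 4}

def cheapest_at_least(tier: str) -> str:
    """Cheapest model whose performance tier is at or above the requested one."""
    candidates = [(price, model) for model, price, t in CATALOG
                  if TIER_RANK[t] >= TIER_RANK[tier]]
    return min(candidates)[1]

print(cheapest_at_least("High"))     # cheapest High-or-better model
print(cheapest_at_least("Highest"))  # only the H100-SXM5 qualifies
```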

Regional Availability#

GPU availability and selection vary by region:

NORWAY-1 (Europe)
- All GPU types available
- Low latency for European users
- GDPR-compliant infrastructure

CANADA-1 (North America)
- All GPU types available
- High-speed connectivity
- Optimized for North American users

US-1 (United States)
- All GPU types available
- US-based infrastructure
- Low latency for US users

GPU Selection Tips#

Performance Optimization#

Memory Considerations. Key factors:
- Model size requirements
- Batch size optimization
- Dataset memory usage
- Multi-model deployment

Compute Requirements. Key factors:
- Training time constraints
- Inference latency needs
- Parallel processing requirements
- Throughput expectations
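Model size is usually the dominant memory factor. A rough rule of thumb (an approximation for planning, not a platform guarantee): weights take parameters × bytes-per-parameter, plus working overhead for activations and KV cache:

```python
def min_gpu_memory_gb(params_billions: float, bytes_per_param: int = 2,
                      overhead: float = 1.2) -> float:
    """Rough VRAM estimate for inference: fp16 weights (2 bytes/param)
    plus ~20% overhead for activations and KV cache."""
    weights_gb = params_billions * bytes_per_param  # 1B params * 1 byte = 1 GB
    return round(weights_gb * overhead, 1)

# A 7B-parameter model in fp16 fits comfortably on a single 80GB card;
# a 70B model in fp16 pushes into multi-GPU territory.
print(min_gpu_memory_gb(7))   # ~16.8 GB
print(min_gpu_memory_gb(70))  # ~168 GB
```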

Cost Optimization#

- Development vs Production: prototype on cost-effective GPUs (RTX-A6000, A40), then move production workloads to A100- or H100-class hardware.
- Workload Matching: pick the GPU tier your workload actually needs rather than defaulting to the fastest card.

Getting Started#

1. Assess Your Needs
   - Determine your primary use case
   - Estimate memory requirements
   - Consider performance needs
   - Plan your budget

2. Select GPU Type
   - Choose based on workload requirements
   - Consider regional availability
   - Review pricing for your currency
   - Start with a single GPU and scale as needed

3. Deploy and Test
   - Deploy a VM with your selected GPU
   - Test performance with your workload
   - Monitor resource utilization
   - Optimize the configuration as needed

4. Scale and Optimize
   - Add more GPUs if needed
   - Optimize software for multi-GPU use
   - Monitor costs and performance
   - Adjust the configuration based on results
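Step 3 ultimately reduces to a deployment request against the API. The sketch below only assembles and validates a payload; the field names are illustrative placeholders, not the platform's documented request schema, and only the region names come from this page:

```python
# Regions listed on this page. The payload field names below are
# hypothetical placeholders, not the documented API schema.
REGIONS = {"NORWAY-1", "CANADA-1", "US-1"}

def build_deployment_request(gpu_model: str, gpu_count: int, region: str) -> dict:
    """Validate inputs and assemble a deployment payload."""
    if gpu_count < 1:
        raise ValueError("gpu_count must be at least 1")
    if region not in REGIONS:
        raise ValueError(f"unknown region: {region}")
    return {"gpu_model": gpu_model, "gpu_count": gpu_count, "region": region}

req = build_deployment_request("A100-PCIe-80GB", 1, "NORWAY-1")
print(req)
```

Validating locally before sending keeps obviously malformed requests (zero GPUs, an unknown region) from ever reaching the API.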