Training Parameters
Hardware *
GTX 1660
RTX 2060
RTX 3060
RTX 3070
RTX 3080
RTX 3090
RTX 4090
Tesla T4
V100
A100 40GB
A100 80GB
H100
AMD MI250X
TPU v2
TPU v3
TPU v4
Laptop CPU
Desktop i7
Xeon Server CPU
Dual Xeon
GPU Count *
Training Hours *
Model Type *
Logistic Regression
Decision Tree
Random Forest
Small CNN
ResNet-50
EfficientNet
Transformer Base
BERT Base
BERT Large
GPT-2 Small
GPT-3 Scale (normalized)
Large LLM Fine-tuning
Diffusion Model
Deployment *
Local Desktop
On-Prem Server
AWS
Azure
GCP
Energy-Optimized Data Center
Hyperscale AI Cluster
Region *
India
China
Japan
Singapore
South Korea
Germany
France
UK
Norway
EU Average
USA
Canada
Brazil
Global Average
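The training inputs above (hardware, GPU count, training hours, deployment, region) are the ingredients of a standard engineering estimate: energy ≈ TDP × GPU count × hours × PUE, and emissions ≈ energy × grid carbon intensity. The sketch below shows that arithmetic; the TDP values, PUE factors, and grid intensities are illustrative assumptions, not the tool's actual lookup tables.

```python
# Sketch of the training-emissions estimate implied by the form fields above.
# All numeric values here are assumed placeholders, not the tool's real data.

TDP_WATTS = {"RTX 3090": 350, "A100 40GB": 400, "H100": 700}        # assumed specs
PUE = {"Local Desktop": 1.0, "AWS": 1.2, "Hyperscale AI Cluster": 1.1}  # assumed
GRID_KG_PER_KWH = {"USA": 0.38, "Germany": 0.36, "Norway": 0.02}    # assumed averages


def estimate_training_co2(hardware, gpu_count, hours, deployment, region):
    """Return estimated training emissions in kg CO2e."""
    power_kw = TDP_WATTS[hardware] / 1000            # watts -> kilowatts
    energy_kwh = power_kw * gpu_count * hours * PUE[deployment]
    return energy_kwh * GRID_KG_PER_KWH[region]


# Example: 8x A100 40GB for 24 hours on AWS in the USA.
print(round(estimate_training_co2("A100 40GB", 8, 24, "AWS", "USA"), 1))  # → 35.0
```

With these assumed figures the run draws 92.16 kWh and emits roughly 35 kg CO2e; switching the region to Norway in the same sketch cuts the result by more than an order of magnitude, which is why the Region field matters as much as the hardware choice.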
Inference Parameters
Enable Hardware Comparison
Mode
Comparison Hardware *
Estimate Carbon Impact
Saved Scenarios
Select a saved scenario...
Load
Delete
Export JSON
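The Export JSON action presumably serializes a saved scenario's form values to a portable file. A minimal sketch, assuming a flat schema whose field names mirror the form labels (the names and values here are hypothetical, not the tool's actual export format):

```python
import json

# Hypothetical saved scenario mirroring the form fields; the key names
# are assumptions for illustration, not the tool's real export schema.
scenario = {
    "name": "bert-base-finetune",
    "hardware": "A100 40GB",
    "gpu_count": 4,
    "training_hours": 12,
    "model_type": "BERT Base",
    "deployment": "AWS",
    "region": "USA",
}

# Pretty-print the export so it is diff-friendly and human-readable.
print(json.dumps(scenario, indent=2))
```

A flat key-value layout like this round-trips cleanly through `json.loads`, which is what makes Load/Delete of saved scenarios straightforward to implement.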
Disclaimer: Engineering-based estimation using public hardware specifications and national carbon intensity averages. Intended for sustainability awareness and benchmarking.