GPU Benchmark

pip install gpu-benchmark && gpu-benchmark

Run this command to benchmark your GPU and contribute to our global benchmark results.


How the Benchmark Works

1. Installation

pip install gpu-benchmark

A simple CLI tool that works on any CUDA-compatible NVIDIA GPU with Python 3.8+.
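
Before installing, you can sanity-check the environment with a few lines of Python (a minimal sketch; it assumes PyTorch is already installed in the environment):

# Pre-flight check (sketch, assumes PyTorch is installed):
# confirm Python 3.8+ and that a CUDA-capable GPU is visible.
import sys

import torch

assert sys.version_info >= (3, 8), "gpu-benchmark requires Python 3.8+"
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))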

2. Run the Test

gpu-benchmark

The benchmark runs for 5 minutes after loading the Stable Diffusion pipeline (the default) or the selected model.
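
Conceptually, the run is a timed loop that keeps generating images until the 5 minutes are up. A rough sketch of the idea (an illustration, not the tool's actual code; the model id and prompt below are placeholder assumptions):

# Illustrative 5-minute generation loop (sketch, not the gpu-benchmark source).
import time

import torch
from diffusers import StableDiffusionPipeline

# Model id and prompt are assumptions for illustration.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

DURATION_S = 5 * 60
prompt = "a photo of an astronaut riding a horse"

images_generated = 0
start = time.time()
while time.time() - start < DURATION_S:
    pipe(prompt)  # one full image-generation pass
    images_generated += 1

print("Images generated in 5 minutes:", images_generated)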

What We Measure

Images Generated

Number of images your GPU can generate in 5 minutes (model dependent).

Max Heat (°C)

Maximum GPU temperature reached during the benchmark.

Avg Heat (°C)

Average GPU temperature during the benchmark.
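
Temperature statistics like these can be collected by sampling NVML while the benchmark runs, for example through the pynvml bindings. A minimal sketch with one sample per second (not necessarily how gpu-benchmark itself samples):

# Track max and average GPU temperature over ~5 minutes (sketch).
import time

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

samples = []
for _ in range(300):  # ~5 minutes at one sample per second
    temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    samples.append(temp)
    time.sleep(1)

print("Max heat (°C):", max(samples))
print("Avg heat (°C):", sum(samples) / len(samples))
pynvml.nvmlShutdown()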

Country

Your location (detected automatically).

Rank

Your result as a percentage of the best result in the entire database for the selected model:

(num_images_generated / max_overall_score) * 100

GPU Model %

Your result as a percentage of the best result for your specific GPU model:

(num_images_generated / max_score_for_gpu_model) * 100
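
A quick worked example with made-up numbers: if you generated 80 images, the best overall result for the model is 200, and the best result for your GPU model is 100, then Rank is 40% and GPU Model % is 80%:

# Hypothetical numbers, for illustration only.
num_images_generated = 80
max_overall_score = 200        # best result in the whole database for this model
max_score_for_gpu_model = 100  # best result for your specific GPU model

rank = (num_images_generated / max_overall_score) * 100                 # 40.0
gpu_model_pct = (num_images_generated / max_score_for_gpu_model) * 100  # 80.0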

Power (W)

Average GPU power consumption during the benchmark.

Memory (GB)

Total available GPU VRAM.
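
Power draw and total VRAM can also be read through NVML. A small sketch (NVML reports instantaneous power in milliwatts and memory in bytes; an average power figure would come from repeated sampling, as with temperature):

# Read current power draw and total VRAM via NVML (sketch).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000             # milliwatts -> watts
memory_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1024**3  # bytes -> GB

print(f"Power (W): {power_w:.1f}")
print(f"Memory (GB): {memory_gb:.1f}")
pynvml.nvmlShutdown()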

Platform

The operating system or environment used for the benchmark.

Provider

The provider or host of the GPU (e.g., private, runpod, vast.ai).

Acceleration

The acceleration technology used (e.g., CUDA version).

PyTorch Version

The PyTorch version used for the benchmark.
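
Platform and version details like these can be gathered from the Python standard library and from PyTorch itself; roughly:

# Collect platform and software-version metadata (sketch).
import platform

import torch

print("Platform:", platform.system(), platform.release())
print("Acceleration: CUDA", torch.version.cuda)
print("PyTorch Version:", torch.__version__)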