Workload Archive
This page archives the workload logs and problem descriptions presented at JSSPP workshops.
- List of all workloads
- Parallel Workloads Archive [external archive]
Workload from a Kubernetes system (year 2025)
This workload comes from the 2026 paper "Opportunistic Resource Reclamation in Kubernetes: From Aggressive Resizing to Flash Jobs" by Viktória Spišaková, R. Stoyanov, D. Klusáček and L. Hejtmánek.
The CPU and GPU traces cover all workloads from the year 2025. The CPU workload trace focuses on CPU utilization but also includes memory statistics, indicates whether the workload allocated any kind of GPU, and provides extra metadata (categorization, duration).
The CPU workload trace does not include data for March 30, 2025, due to a cluster outage. The GPU workload trace consists of all workloads that allocated a whole GPU during 2025 and lasted at least 30 minutes.
Kubernetes workload [download log] (to appear).
CPU Workload Trace Format
One Pod per line, values separated by commas:
- pod_name
- namespace
- node
- cpu_avg (Over Pod’s lifetime)
- cpu_min (Over Pod’s lifetime)
- cpu_max (Over Pod’s lifetime)
- cpu_mode (Over Pod’s lifetime)
- cpu_request
- cpu_limit (-1.0 denotes no limit was set)
- mem_avg (Over Pod’s lifetime in Bytes)
- mem_min (Over Pod’s lifetime in Bytes)
- mem_max (Over Pod’s lifetime in Bytes)
- mem_request (In Bytes)
- mem_limit (-1.0 denotes no limit was set, In Bytes)
- gpu_avg (1.0 denotes a whole GPU was allocated, 0.0 denotes it was not)
- gpu_10mig_avg (1.0 denotes a 10GB GPU MIG part was allocated, 0.0 denotes it was not)
- gpu_20mig_avg (1.0 denotes a 20GB GPU MIG part was allocated, 0.0 denotes it was not)
- duration_hours
- start_date
- end_date
- days_seen
- real_req_ratio_mode (Mode of the ratio of real CPU utilization to CPU request over Pod’s lifetime)
- real_req_ratio_avg (Average of the ratio of real CPU utilization to CPU request over Pod’s lifetime)
- category (Workload type category)
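As a minimal sketch of how the CPU trace can be read, the snippet below parses one comma-separated line per Pod into a dictionary using the column order listed above. The sample line is hypothetical (not taken from the real trace), and the helper names are illustrative, not part of the archive.

```python
import csv

# Column order as listed above; the trace is assumed to have no header row.
CPU_COLUMNS = [
    "pod_name", "namespace", "node",
    "cpu_avg", "cpu_min", "cpu_max", "cpu_mode",
    "cpu_request", "cpu_limit",
    "mem_avg", "mem_min", "mem_max", "mem_request", "mem_limit",
    "gpu_avg", "gpu_10mig_avg", "gpu_20mig_avg",
    "duration_hours", "start_date", "end_date", "days_seen",
    "real_req_ratio_mode", "real_req_ratio_avg", "category",
]

# Fields that carry numeric values (dates and names stay as strings).
NUMERIC = {
    "cpu_avg", "cpu_min", "cpu_max", "cpu_mode", "cpu_request", "cpu_limit",
    "mem_avg", "mem_min", "mem_max", "mem_request", "mem_limit",
    "gpu_avg", "gpu_10mig_avg", "gpu_20mig_avg",
    "duration_hours", "days_seen",
    "real_req_ratio_mode", "real_req_ratio_avg",
}

def parse_cpu_trace(lines):
    """Yield one dict per Pod; -1.0 in cpu_limit/mem_limit means no limit was set."""
    for row in csv.reader(lines):
        rec = dict(zip(CPU_COLUMNS, row))
        for key in NUMERIC:
            rec[key] = float(rec[key])
        yield rec

# Hypothetical example line, only to show the expected field order:
sample = ("pod-001,ns-42,node-7,0.5,0.1,2.0,0.4,1.0,-1.0,"
          "1048576,524288,2097152,2097152,-1.0,0.0,1.0,0.0,"
          "12.5,2025-01-02,2025-01-02,1,0.5,0.5,batch")
rec = next(parse_cpu_trace([sample]))
```

For example, `rec["cpu_limit"]` is `-1.0` here, i.e. no CPU limit was set for this Pod.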
GPU Workload Trace Format (Allocating Whole GPU)
One Pod per line, values separated by commas:
- pod_name
- namespace
- category (Volatility category)
- runtime_min_mb (Minimum GPU memory usage over Pod’s lifetime in MB)
- runtime_mean_mb (Mean GPU memory usage over Pod’s lifetime in MB)
- runtime_p50_mb (P50 GPU memory usage over Pod’s lifetime in MB)
- runtime_p95_mb (P95 GPU memory usage over Pod’s lifetime in MB)
- runtime_p99_mb (P99 GPU memory usage over Pod’s lifetime in MB)
- runtime_max_mb (Max GPU memory usage over Pod’s lifetime in MB)
- runtime_std_mb (Standard deviation of GPU memory usage over Pod’s lifetime in MB)
- max_jump_raw_mb (Maximum increase in GPU memory usage over Pod’s lifetime in MB between two consecutive timestamps)
- max_jump_mono_mb (Maximum monotonic increase in GPU memory usage over Pod’s lifetime in MB)
- recommended_reserve_mb (Recommended GPU memory reserve in MB)
- workload_type (Workload type category)
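The GPU trace can be read the same way. The sketch below parses a line into a dict and derives a simple headroom figure from the observed peak and the recommended reserve; the 81920 MB GPU size is an assumption (an 80GB device), not something the trace specifies, and the sample line is hypothetical.

```python
import csv

# Column order as listed above; no header row is assumed.
GPU_COLUMNS = [
    "pod_name", "namespace", "category",
    "runtime_min_mb", "runtime_mean_mb", "runtime_p50_mb",
    "runtime_p95_mb", "runtime_p99_mb", "runtime_max_mb",
    "runtime_std_mb", "max_jump_raw_mb", "max_jump_mono_mb",
    "recommended_reserve_mb", "workload_type",
]

def parse_gpu_trace(lines):
    """Yield one dict per Pod; all *_mb fields become floats."""
    for row in csv.reader(lines):
        rec = dict(zip(GPU_COLUMNS, row))
        for key in GPU_COLUMNS[3:13]:  # runtime_min_mb .. recommended_reserve_mb
            rec[key] = float(rec[key])
        yield rec

def headroom_mb(rec, gpu_total_mb=81920.0):
    """GPU memory (MB) left after the Pod's observed peak usage plus the
    recommended reserve; gpu_total_mb is an assumed device size."""
    return max(0.0, gpu_total_mb - rec["runtime_max_mb"] - rec["recommended_reserve_mb"])

# Hypothetical example line, only to show the expected field order:
sample = "pod-9,ns-1,stable,100,2000,1800,3900,4100,4200,500,300,350,512,training"
rec = next(parse_gpu_trace([sample]))
```

Under these assumptions, `headroom_mb(rec)` gives `81920 - 4200 - 512 = 77208.0` MB that an opportunistic scheduler could in principle consider reclaimable on such a device.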
Anonymization
If the same namespace appears in both traces, it is mapped to the same anonymized value. Likewise, if the same Pod name appears in both traces, it is mapped to the same anonymized value regardless of the Pod’s namespace.
Usage and Acknowledgments
This workload trace has been kindly supplied by the e-INFRA CZ project (ID:90254), supported by the Ministry of Education, Youth, and Sports of the Czech Republic. If you use this log in your work, please use a similar acknowledgment. To acknowledge the authors' work, please consider citing the paper that introduced this log: Viktória Spišaková, R. Stoyanov, D. Klusáček and L. Hejtmánek, "Opportunistic Resource Reclamation in Kubernetes: From Aggressive Resizing to Flash Jobs". In Job Scheduling Strategies for Parallel Processing, Springer, 2026.
JSSPP 2025 - the Workshop on Job Scheduling Strategies for Parallel Processing. Contact email: jssppw@gmail.com