CM interface and automation recipes to analyze MLPerf Inference, Tiny and Training results. The goal is to make it easier for the community to visualize, compare and reproduce MLPerf results and add derived metrics such as Performance/Watt or Performance/$

mlcommons/cm4mlperf-results

MLPerf Benchmark Results in the MLCommons CMX format

This repository contains aggregated results of the MLPerf Inference, MLPerf Training and TinyMLPerf benchmarks in the compact MLCommons Collective Mind (CM) format for the Collective Knowledge Playground being developed by the MLCommons taskforce on automation and reproducibility.

The goal is to make it easier for the community to analyze MLPerf results, add derived metrics such as performance/Watt and constraints, generate graphs, prepare reports and link reproducibility reports.

How to import raw MLPerf results into the CMX format

Install the MLCommons CMX framework.

MLPerf inference benchmark results

Follow this README from the related CM automation script.

You can see aggregated results here.

TinyMLPerf benchmark results

Follow this README from the related CM automation script.

You can see aggregated results here.

MLPerf training benchmark results

Follow this README from the related CM automation script.

You can see aggregated results here.

How to update this repository with new results

Using your own Python script

You can use this repository to analyze, reuse, update and improve the compact MLPerf results by calculating and adding derived metrics (such as performance/Watt) or links to reproducibility reports that will be visible at the MLCommons CK playground.

Install the MLCommons CMX framework.

Pull the CMX repositories with automation recipes and MLPerf results:

cmx pull repo mlcommons@ck --dir=cmx4mlops/cmx4mlops
cmx pull repo mlcommons@cm4mlperf-results

Find CM entries with MLPerf inference v4.1 experiments from the command line:

cmx find experiment --tags=mlperf-inference,v4.1
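The comma-separated `--tags` flag selects entries that carry all of the requested tags. A minimal sketch of that matching idea (illustrative only, not the actual CM implementation):

```python
def matches(entry_tags, query):
    """Return True when every comma-separated tag in the query
    appears in the entry's tag list. Illustrative sketch only;
    not the real CM tag-matching code."""
    requested = query.split(',')
    return set(requested).issubset(set(entry_tags))

print(matches(['mlperf-inference', 'v4.1', 'open'], 'mlperf-inference,v4.1'))  # True
print(matches(['mlperf-inference', 'v3.1'], 'mlperf-inference,v4.1'))          # False
```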

Find CM entries with MLPerf inference v3.1 experiments from Python:

import cmind

# Query the CM 'experiment' automation for entries tagged
# with MLPerf inference v3.1
r = cmind.access({'action': 'find',
                  'automation': 'experiment,a0a2d123ef064bcb',
                  'tags': 'mlperf-inference,v3.1'})

# Stop on error (non-zero return code)
if r['return'] > 0:
    cmind.error(r)

# Print the local path of each matching experiment entry
for experiment in r['list']:
    print(experiment.path)
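Once you have the experiment paths, a derived metric such as performance/Watt can be computed from the stored result files. A minimal sketch under stated assumptions: the file name `result.json` and the keys `performance` and `power_watts` are hypothetical, not the actual CM experiment schema.

```python
import json
import os
import tempfile

def performance_per_watt(result):
    # 'performance' and 'power_watts' are hypothetical key names,
    # not the actual CM experiment schema
    return result['performance'] / result['power_watts']

# Illustrative: write a fake result file the way an experiment entry
# might store one, then read it back and derive the metric
with tempfile.TemporaryDirectory() as path:
    result_file = os.path.join(path, 'result.json')
    with open(result_file, 'w') as f:
        json.dump({'performance': 12000.0, 'power_watts': 300.0}, f)

    with open(result_file) as f:
        result = json.load(f)

    print(performance_per_watt(result))  # 40.0
```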

Using the CMX script

We created a sample CMX script in this repository that you can use and extend to add derived metrics:

cmx run script "process mlperf-inference results" --experiment_tags=mlperf-inference,v4.1

Copyright

2021-2025 MLCommons

License

Apache 2.0

Project coordinators

Grigori Fursin and Arjun Suresh.

Contact us

This project is maintained by the MLCommons taskforce on automation and reproducibility, cTuning foundation and cKnowledge.org.
