SPEC CPU 2006 v1.2
SPEC CPU(tm) 2006 is designed to provide a comparative measure of
compute-intensive performance across the widest practical range of hardware
using workloads developed from real user applications. Metrics for both integer
and floating point compute-intensive performance are provided. Full
documentation is available on the SPEC website: http://www.spec.org/cpu2006/.
In order to use this benchmark, SPEC CPU must be installed and the [spec_dir]/config
directory must be writable by the benchmark user. The runtime parameters
defined below essentially determine the 'runspec' arguments.
SPEC CPU2006 consists of a total of 29 individual benchmarks. 12 of these
benchmarks measure integer CPU performance, and the remaining 17
measure floating point performance. Aggregate scores are calculated when
the benchmark run is int (all integer benchmarks), fp (all floating point
benchmarks), or all (both integer and floating point benchmarks). These
aggregate scores are calculated as the geometric mean of the medians from
3 runs of each individual benchmark in the suite. Aggregate scores are
calculated based on tuning (base and/or peak) and whether the run is speed
(single copy) or rate (multiple copies).
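For example (illustrative numbers only): if the median ratios for the
benchmarks in a suite were 24.4, 18.0 and 30.1, the aggregate score would be
their geometric mean, computable as:
  # geometric mean = exp(mean of log(ratio)) - hypothetical ratios
  awk 'BEGIN { print exp((log(24.4)+log(18.0)+log(30.1))/3) }'   # => ~23.6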
A few notes on execution:
1. Benchmark execution will always use the runspec action 'validate', meaning:
   build (if needed), run, check for correct answers, and generate reports
2. check_version will always be 0
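For illustration only, a default run of the integer suite corresponds roughly
to a runspec invocation along these lines (abridged; the actual arguments
depend on the parameters below):
  runspec --config=default.cfg --action=validate --tune=base --size=ref \
          --iterations=3 int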
TESTING PARAMETERS
* benchmark the benchmark(s) to run - any of the benchmark
identifiers listed in config/spec-benchmarks.ini
may be specified. This argument can be repeated
to designate multiple benchmarks. You may specify
'int' for all SPECint benchmarks, 'fp' for all
SPECfp benchmarks and 'all' for all benchmarks.
Benchmarks may be referenced either by their
numeric or full identifier (e.g. --benchmark=400
or --benchmark=400.perlbench). Additionally, you
may designate benchmarks that should be removed
by prefixing them with a minus character
(e.g. --benchmark=all --benchmark=-429). May also
be specified using a single space or comma
separated value (e.g. --benchmark "all -429")
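                          Illustrative invocations (hypothetical):
                            # run 400.perlbench and 401.bzip2 only
                            ./run.sh --benchmark 400 --benchmark 401
                            # run all integer benchmarks except 403.gcc
                            ./run.sh --benchmark "int -403"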
DEFAULT: all
* collectd_rrd If set, collectd rrd stats will be captured from
--collectd_rrd_dir. To do so, when testing starts,
                          existing directories in --collectd_rrd_dir will
                          be renamed with a .bak suffix, and upon test completion
any directories not ending in .bak will be zipped
and saved along with other test artifacts (as
collectd-rrd.zip). User MUST have sudo privileges
to use this option
* collectd_rrd_dir Location where collectd rrd files are stored -
default is /var/lib/collectd/rrd
* comment optional comment to add to the log file
DEFAULT: none
* config name of a configuration file in [spec_dir]/config
to use for the run. The following macros will be
automatically set via the --define argument
capability of runspec (optional parameters will
only be present if specified by the user):
rate if this is a rate run, this
macro will be present defining
the number of copies
cpu_cache: level 2 cpu cache
(e.g. 4096 KB)
cpu_count: the number of CPU cores present
cpu_family: numeric CPU family identifier
cpu_model: numeric CPU model identifier
cpu_name: the CPU model name (e.g. Intel
Xeon 5570)
cpu_speed: the CPU speed in MHz
(e.g. 2933.436)
cpu_vendor: the CPU vendor
(e.g. GenuineIntel)
compute_service_id: the compute service ID
external_id: an external identifier for the
compute resource
instance_id: identifier for the compute
resource under test
(e.g. m1.xlarge)
ip_or_hostname: IP or hostname of the compute
resource
is32bit: set if the OS is 32 bit
is64bit: set if the OS is 64 bit
iteration_num: the test iteration number
(e.g. 2)
meta_*: any of the meta parameters
listed below
label: user defined label for the
compute resource
location: location of the compute
resource (e.g. CA, US)
memory_free: free memory in KB
memory_total: total memory in KB
numa: set only if the system under
test has numa support
os: the operating system name
(e.g. centos)
os_version: the operating system version
(e.g. 6.2)
provider_id: the provider identifier
(e.g. aws)
region: compute resource region
identifier (e.g. us-west)
run_id: the benchmark run ID
run_name: the name of the run (if
assigned by the user)
sse: the highest SSE flag supported
storage_config: storage config identifier
(e.g. ebs, ephemeral)
subregion: compute resource subregion
identifier (e.g. 1a)
test_id: a user defined test identifier
x64: set if the x64 parameter is
also set
if this parameter value identifies a remote file
(either an absolute or relative path on the
compute resource, or an external reference like
http://...) that file will be automatically copied
into the [spec_dir]/config directory - if not specified,
a default.cfg file should be present in the config
directory
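                          As a sketch only, a config file might consume these
                          macros as follows (hypothetical fragment; real config
                          files also define compilers, flags, etc.):
                            %ifdef %{numa}
                            submit = numactl --interleave=all $command
                            %endif
                            %ifdef %{rate}
                            # this is a rate run with %{rate} copies
                            %endif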
DEFAULT: none
* copies the number of copies to run concurrently. A higher
number of copies will generally produce a better
score (subject to resource availability for those
copies to run). This parameter value may be one of
the following:
cpu relative: a percentage relative to the
number of CPU cores present. For
example, if copies=50% and the
compute instance has 4 cores, 2
copies will be run - standard
rounding will be used
fixed: a simple numeric value
representing the number of copies
to run (e.g. copies=2)
memory relative: a memory to copies size ratio.
For example, if copies=2GB and
the compute instance has 16GB of
                                           memory, then 8 copies will be run -
                                           standard rounding will be used.
Either MB or GB suffix may be
used
mixed: a combination of the above 3 types
may be used, each value separated
by a forward slash /. For example,
if copies=100%/2GB, then the number
of copies will be the lesser of
either the number of CPU cores or
memory/2GB. Alternatively, if this
value is prefixed by a +, the
greater of the values will be
used (e.g. copies=+100%/2GB)
                          The generally recommended resources per copy are
                          2GB of memory for 64-bit binaries, 1GB of
                          memory for 32-bit binaries, 1 CPU core and 2-3GB
of free disk space. To specify a different number
of copies for 32-bit binaries versus 64-bit
binaries (based on the value of the x64 parameter
defined below), separate the values with a pipe,
and prefix the 64-bit specified value with x64:
(e.g. copies="x64:100%/2GB|100%/1GB")
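                          The following is an assumed sketch (not the harness
                          code) of how copies=100%/2GB might resolve:
                            cores=$(grep -c ^processor /proc/cpuinfo)
                            mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
                            by_cpu=$cores                      # 100% of cores
                            by_mem=$((mem_kb / (2*1024*1024))) # 1 per 2GB
                            # no '+' prefix: the lesser value wins
                            copies=$((by_cpu < by_mem ? by_cpu : by_mem))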
DEFAULT: x64:100%/1GB|100%/512MB (NULL for speed runs)
* define_* additional macros to define using the runspec
--define capability (these will be accessible in
the config file using the format %{macro_name}) -
any number of defines may be specified.
Conditional logic within the config file is
supported using the format:
%ifdef %{macro_name}
# do something
%else
# do something else
%endif
More information is available about the use of
macros on the SPEC website here:
http://www.spec.org/cpu2006/Docs/config.html#sectionI.D.2
For flags - do not set a value for this parameter
(e.g. -p define_smt translates to --define smt)
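                          For example (hypothetical macro names):
                            # flag macro - translates to 'runspec --define smt'
                            ./run.sh -p define_smt
                            # value macro - accessible as %{optlevel} in the
                            # config file
                            ./run.sh --define_optlevel O3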
DEFAULT: none
* delay Add a delay of the specified number of seconds
before and after each benchmark. The delay is not
counted toward the benchmark runtime.
DEFAULT: none
* failover_no_sse         When set to 1 in combination with the sse parameter,
benchmark execution will be re-attempted without
sse if runspec execution with sse results in an
error status code (runspec will be restarted
without the sse macro set)
DEFAULT: 0
* flagsurl Path to a flags file to use for the run - A flags
file provides information about how to interpret
and report on flags (e.g. -O5, -fast, etc.) that
are used in a config file. The flagsurl may be an
absolute or relative path in the file system, or
refer to an http accessible file
(e.g. $[top]/config/flags/Intel-ic12.0-linux64-revB.xml)
Alternatively, flagsurl can be defined in the
config file
DEFAULT: none
* huge_pages Whether or not to enable huge pages if
                          supported by the OS. Prior to runspec execution,
                          the harness checks whether /usr/lib64/libhugetlbfs.so
                          or /usr/lib/libhugetlbfs.so exists and whether free
                          huge pages are available per /proc/meminfo; if both
                          conditions are met, it sets the following
                          environment variables:
export HUGETLB_MORECORE=yes
export LD_PRELOAD=/usr/lib/libhugetlbfs.so
Note: In order to use huge pages, you must enable
them first using something along the lines of:
# first clear out existing huge pages
echo 0 > /proc/sys/vm/nr_hugepages
# create 500 2MB huge pages (1GB total) - 2MB is
# the default huge page size on RHEL6
echo 500 > /proc/sys/vm/nr_hugepages
# mount the huge pages
mkdir -p /libhugetlbfs
mount -t hugetlbfs hugetlbfs /libhugetlbfs
Note: CentOS 6+ supports transparent huge pages
(THP) by default. This parameter will likely have
little effect on systems where THP is already
enabled
DEFAULT: 0
* ignore_errors whether or not to ignore errors - if 0, benchmark
execution will stop if any errors occur
DEFAULT: 0
* iterations How many times to run each benchmark. This
parameter should only be changed if reportable=0
because reportable runs always use 3 iterations
DEFAULT: 3 (not used if reportable=1)
* max_copies May be used in conjunction with dynamic copies
calculation (see copies parameter above) in order
to set a hard limit on the number of copies
DEFAULT: none (no limit)
* nobuild                 If 1, do not build benchmark binaries, even if they
                          do not already exist (existing binaries are used)
DEFAULT: 1
* nocleanup Do not delete test files generated by SPEC
(i.e. [spec]/benchspec/CPU2006/[benchmark]/run/*)
DEFAULT: 0
* nonuma Do not set the 'numa' macro or invoke using
'numactl --interleave=all' even if numa is
supported
DEFAULT: 0
* nosse_macro             Optional macro to define when sse=optimal and no
                          SSE flag will be set
* output The output directory to use for writing test
artifacts. If not specified, the current working
directory will be used
* purge_output            Whether or not to remove run files (created in the
                          [spec_dir]/benchspec/CPU2006/*/run/ directories)
                          following benchmark completion
DEFAULT: 1
* rate Whether to execute a speed or a rate run. Per the
official documentation: One way is to measure how
fast the computer completes a single task; this is
a speed measure. Another way is to measure how many
tasks a computer can accomplish in a certain amount
of time; this is called a throughput, capacity or
rate measure. Automatically set if 'copies' > 1
DEFAULT: 1
* reportable              whether or not to designate the run as reportable;
                          only int, fp or all benchmark runs can be designated
                          as reportable. Per the official documentation: A
reportable execution runs all the benchmarks in a
suite with the test and train data sets as an
additional verification that the benchmark
binaries get correct results. The test and train
workloads are not timed. Then, the reference
workloads are run three times, so that median run
time can be determined for each benchmark.
DEFAULT: 0
* review Format results for review, meaning that additional
detail will be printed that normally would not be
present
DEFAULT: 0
* run_timeout The amount of time to allow each test iteration to
run
DEFAULT: 72 hours
* size Size of the input data to run: test, train or ref
DEFAULT: ref
* spec_dir Directory where SPEC CPU 2006 is installed. If not
specified, the benchmark run script will look up
the directory tree from both pwd and --output for
                          presence of a 'cpu2006' directory. If this fails, it will
check '/opt/cpu2006'
* sse Run with a specific SSE optimization flag - if not
specified, the most optimal SSE flag will be used
                          for the processor in use. The options available for
this parameter are:
optimal: choose the most optimal flag
none: do not use SSE optimizations
AVX: AVX, SSE4.2, SSE4.1, SSSE3, SSE3, SSE2
and SSE instructions
                          SSE4.2: SSE4.2, SSE4.1, SSSE3, SSE3, SSE2 and
SSE instructions
SSE4.1: SSE4.1, SSSE3, SSE3, SSE2 and SSE
instructions
SSSE3: SSSE3, SSE3, SSE2 and SSE instructions
SSE3: SSE3, SSE2 and SSE instructions
SSE2: SSE2 and SSE instructions
More information is available regarding SSE compiler
optimizations here: http://goo.gl/yevdH
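                          A minimal sketch of how the highest supported level
                          might be detected from CPU flags (Linux reports SSE3
                          as 'pni' in /proc/cpuinfo):
                            flags=$(grep -m1 '^flags' /proc/cpuinfo)
                            for l in avx sse4_2 sse4_1 ssse3 pni sse2; do
                              echo "$flags" | grep -qw "$l" && echo "$l" && break
                            done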
DEFAULT: optimal
* sse_max The max SSE flag to support in conjunction with
sse=optimal - if a processor supports greater than
this SSE level, sse_max will be used instead
DEFAULT: SSE4.2
* sse_min The minimum SSE flag to support in conjunction with
sse=optimal - if a processor does not at least
                          support this SSE level, sse optimization will not
be used
DEFAULT: SSSE3
* tune Tuning option: base, peak or all - reportable runs
must be either base or all
DEFAULT: base
* validate_disk_space     Whether or not to validate that there is sufficient
                          disk space available for a run - this calculation
                          is based on a minimum requirement of 2GB per copy.
                          If this space is not available, the run will fail
DEFAULT: 1
* verbose Show verbose output
* x64 Optional parameter that will be passed into
runspec using the macro --define x64 - this may be
                          used to designate whether a run utilizes 32-bit or
                          64-bit binaries - this parameter can also affect
the dynamic calculation of the 'copies' parameter
described above. Valid options are 0, 1 or 2
DEFAULT: 2 (64-bit binaries for 64-bit systems,
32-bit otherwise)
* x64_failover This flag will cause testing to be re-attempted
for the opposite x64 flag if current testing
fails (e.g. if initial testing is x64=1 and it
fails, then testing will be re-attempted with
x64=0). When used in conjunction with
failover_no_sse, sse failover will take precedence
followed by x64 failover
DEFAULT: 0
META PARAMETERS
If set, these parameters will be included in the results generated using
save.sh. Additionally, the parameters with a * suffix can be used to change the
values in the SPEC CPU 2006 config file using macros. When specified, each of
these parameters will be passed in to runspec using
--define [parameter_name]=[parameter_value] and will then be accessible in the
config file via the macro %{parameter_name}
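For example (hypothetical value), a run started with:
  ./run.sh --meta_hw_fpu Integrated
results in runspec being invoked with --define meta_hw_fpu=Integrated, and the
value can then be referenced in the config file as %{meta_hw_fpu}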
* meta_burst If set to 1, designates testing performed in burst
mode (e.g. Amazon EC2 t-series burst)
* meta_compute_service The name of the compute service this test pertains
to. May also be specified using the environment
variable bm_compute_service
* meta_compute_service_id The id of the compute service this test pertains
to. Added to saved results. May also be specified
using the environment variable bm_compute_service_id
* meta_cpu CPU descriptor - if not specified, it will be set
using the 'model name' attribute in /proc/cpuinfo
* meta_instance_id The compute service instance type this test pertains
to (e.g. c3.xlarge). May also be specified using
the environment variable bm_instance_id
* meta_hw_avail* Date that this hardware or instance type was made
available
* meta_hw_fpu* Floating point unit
* meta_hw_nthreadspercore* Number of hardware threads per core - DEFAULT 1
* meta_hw_other* Any other relevant information about the instance
type
* meta_hw_ocache*         Other hardware cache
* meta_hw_pcache* Hardware primary cache
* meta_hw_tcache* Hardware tertiary cache
* meta_hw_ncpuorder*      Valid number of processors orderable for this
                          model, including a unit (e.g. "2, 4, 6, or
                          8 chips")
* meta_license_num* The SPEC CPU 2006 license number
* meta_memory Memory descriptor - if not specified, the system
memory size will be used
* meta_notes_N* General notes - all of the meta_notes_* parameters
support up to 5 entries (N=1-5)
* meta_notes_base_N* Notes about base optimization options
* meta_notes_comp_N* Notes about compiler invocation
* meta_notes_os_N* Notes about operating system tuning and changes
* meta_notes_part_N* Notes about component parts (for kit-built systems)
* meta_notes_peak_N* Notes about peak optimization options
* meta_notes_plat_N* Notes about platform tuning and changes
* meta_notes_port_N* Notes about portability options
* meta_notes_submit_N* Notes about use of the submit option
* meta_os Operating system descriptor - if not specified,
it will be taken from the first line of /etc/issue
* meta_provider The name of the cloud provider this test pertains
to. May also be specified using the environment
variable bm_provider
* meta_provider_id The id of the cloud provider this test pertains
to. May also be specified using the environment
variable bm_provider_id
* meta_region The compute service region this test pertains to.
May also be specified using the environment
variable bm_region
* meta_resource_id        An optional benchmark resource identifier. May
also be specified using the environment variable
bm_resource_id
* meta_run_id             An optional benchmark run identifier. May also be
specified using the environment variable bm_run_id
* meta_storage_config Storage configuration descriptor. May also be
specified using the environment variable
bm_storage_config
* meta_sw_avail* Date that the OS image was made available
* meta_sw_other* Any other relevant information about the software
* meta_test_id Identifier for the test. May also be specified
using the environment variable bm_test_id
DEPENDENCIES
This benchmark has the following dependencies:
SPEC CPU 2006 This benchmark is licensed by spec.org. To use
this benchmark harness you must have it installed
and available in the 'spec_dir' directory
perl Used by SPEC CPU 2006
php-cli Test automation scripts (/usr/bin/php)
zip Used to compress test artifacts
TEST ARTIFACTS
This benchmark generates the following artifacts:
collectd-rrd.zip collectd RRD files (see --collectd_rrd)
specint2006.csv SPECint test results in CSV format
specint2006.gif GIF image referenced in the SPECint HTML report
specint2006.html HTML formatted SPECint test report
specint2006.pdf PDF formatted SPECint test report
specint2006.txt Text formatted SPECint test report
specfp2006.csv SPECfp test results in CSV format
specfp2006.gif GIF image referenced in the SPECfp HTML report
specfp2006.html HTML formatted SPECfp test report
specfp2006.pdf PDF formatted SPECfp test report
specfp2006.txt Text formatted SPECfp test report
SAVE SCHEMA
The following columns are included in CSV files/tables generated by save.sh.
Indexed MySQL/PostgreSQL columns are identified by *. Columns without
descriptions are documented as runtime parameters above. Data types and
indexing used are documented in save/schema/speccpu2006.json. Columns can be
removed using the save.sh --remove parameter
# Individual benchmark metrics. These provide the selected, min, max and
# (sample) standard deviation metrics for each benchmark as well as runtime and
# reftime (reftime only included for speed runs: rate=0). For rate runs
# (i.e. rate=1) the metrics represent the 'rate' metric - signifying that
# multiple copies of the benchmark were run in parallel (i.e. --copies > 1).
# Rate metrics essentially represent throughput. For speed runs these columns
# contain a 'ratio' metric derived from ([ref_time]/[base_run_time]). For speed
# runs, only 1 copy of the benchmark is run. These columns may be excluded using
# --remove benchmark_4*
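# Example with hypothetical numbers: for a speed run in which 400.perlbench has
# reftime=9770 and a median runtime=400, the reported ratio is 9770/400 = ~24.4
# (stored in the benchmark_400_perlbench column)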
benchmark_400_perlbench
benchmark_400_perlbench_max
benchmark_400_perlbench_min
benchmark_400_perlbench_reftime
benchmark_400_perlbench_runtime
benchmark_400_perlbench_stdev
benchmark_401_bzip2
benchmark_401_bzip2_max
benchmark_401_bzip2_min
benchmark_401_bzip2_reftime
benchmark_401_bzip2_runtime
benchmark_401_bzip2_stdev
benchmark_403_gcc
benchmark_403_gcc_max
benchmark_403_gcc_min
benchmark_403_gcc_reftime
benchmark_403_gcc_runtime
benchmark_403_gcc_stdev
benchmark_410_bwaves
benchmark_410_bwaves_max
benchmark_410_bwaves_min
benchmark_410_bwaves_reftime
benchmark_410_bwaves_runtime
benchmark_410_bwaves_stdev
benchmark_416_gamess
benchmark_416_gamess_max
benchmark_416_gamess_min
benchmark_416_gamess_reftime
benchmark_416_gamess_runtime
benchmark_416_gamess_stdev
benchmark_429_mcf
benchmark_429_mcf_max
benchmark_429_mcf_min
benchmark_429_mcf_reftime
benchmark_429_mcf_runtime
benchmark_429_mcf_stdev
benchmark_433_milc
benchmark_433_milc_max
benchmark_433_milc_min
benchmark_433_milc_reftime
benchmark_433_milc_runtime
benchmark_433_milc_stdev
benchmark_434_zeusmp
benchmark_434_zeusmp_max
benchmark_434_zeusmp_min
benchmark_434_zeusmp_reftime
benchmark_434_zeusmp_runtime
benchmark_434_zeusmp_stdev
benchmark_435_gromacs
benchmark_435_gromacs_max
benchmark_435_gromacs_min
benchmark_435_gromacs_reftime
benchmark_435_gromacs_runtime
benchmark_435_gromacs_stdev
benchmark_436_cactusadm
benchmark_436_cactusadm_max
benchmark_436_cactusadm_min
benchmark_436_cactusadm_reftime
benchmark_436_cactusadm_runtime
benchmark_436_cactusadm_stdev
benchmark_437_leslie3d
benchmark_437_leslie3d_max
benchmark_437_leslie3d_min
benchmark_437_leslie3d_reftime
benchmark_437_leslie3d_runtime
benchmark_437_leslie3d_stdev
benchmark_444_namd
benchmark_444_namd_max
benchmark_444_namd_min
benchmark_444_namd_reftime
benchmark_444_namd_runtime
benchmark_444_namd_stdev
benchmark_445_gobmk
benchmark_445_gobmk_max
benchmark_445_gobmk_min
benchmark_445_gobmk_reftime
benchmark_445_gobmk_runtime
benchmark_445_gobmk_stdev
benchmark_447_dealii
benchmark_447_dealii_max
benchmark_447_dealii_min
benchmark_447_dealii_reftime
benchmark_447_dealii_runtime
benchmark_447_dealii_stdev
benchmark_450_soplex
benchmark_450_soplex_max
benchmark_450_soplex_min
benchmark_450_soplex_reftime
benchmark_450_soplex_runtime
benchmark_450_soplex_stdev
benchmark_453_povray
benchmark_453_povray_max
benchmark_453_povray_min
benchmark_453_povray_reftime
benchmark_453_povray_runtime
benchmark_453_povray_stdev
benchmark_454_calculix
benchmark_454_calculix_max
benchmark_454_calculix_min
benchmark_454_calculix_reftime
benchmark_454_calculix_runtime
benchmark_454_calculix_stdev
benchmark_456_hmmer
benchmark_456_hmmer_max
benchmark_456_hmmer_min
benchmark_456_hmmer_reftime
benchmark_456_hmmer_runtime
benchmark_456_hmmer_stdev
benchmark_458_sjeng
benchmark_458_sjeng_max
benchmark_458_sjeng_min
benchmark_458_sjeng_reftime
benchmark_458_sjeng_runtime
benchmark_458_sjeng_stdev
benchmark_459_gemsfdtd
benchmark_459_gemsfdtd_max
benchmark_459_gemsfdtd_min
benchmark_459_gemsfdtd_reftime
benchmark_459_gemsfdtd_runtime
benchmark_459_gemsfdtd_stdev
benchmark_462_libquantum
benchmark_462_libquantum_max
benchmark_462_libquantum_min
benchmark_462_libquantum_reftime
benchmark_462_libquantum_runtime
benchmark_462_libquantum_stdev
benchmark_464_h264ref
benchmark_464_h264ref_max
benchmark_464_h264ref_min
benchmark_464_h264ref_reftime
benchmark_464_h264ref_runtime
benchmark_464_h264ref_stdev
benchmark_465_tonto
benchmark_465_tonto_max
benchmark_465_tonto_min
benchmark_465_tonto_reftime
benchmark_465_tonto_runtime
benchmark_465_tonto_stdev
benchmark_470_lbm
benchmark_470_lbm_max
benchmark_470_lbm_min
benchmark_470_lbm_reftime
benchmark_470_lbm_runtime
benchmark_470_lbm_stdev
benchmark_471_omnetpp
benchmark_471_omnetpp_max
benchmark_471_omnetpp_min
benchmark_471_omnetpp_reftime
benchmark_471_omnetpp_runtime
benchmark_471_omnetpp_stdev
benchmark_473_astar
benchmark_473_astar_max
benchmark_473_astar_min
benchmark_473_astar_reftime
benchmark_473_astar_runtime
benchmark_473_astar_stdev
benchmark_481_wrf
benchmark_481_wrf_max
benchmark_481_wrf_min
benchmark_481_wrf_reftime
benchmark_481_wrf_runtime
benchmark_481_wrf_stdev
benchmark_482_sphinx3
benchmark_482_sphinx3_max
benchmark_482_sphinx3_min
benchmark_482_sphinx3_reftime
benchmark_482_sphinx3_runtime
benchmark_482_sphinx3_stdev
benchmark_483_xalancbmk
benchmark_483_xalancbmk_max
benchmark_483_xalancbmk_min
benchmark_483_xalancbmk_reftime
benchmark_483_xalancbmk_runtime
benchmark_483_xalancbmk_stdev
benchmark_version: [benchmark version]
benchmarks: comma separated names of benchmarks run - or int, fp or all
collectd_rrd: [URL to zip file containing collectd rrd files]
comment
config: config file name
copies: number of copies (for rate runs only)
delay
failover_no_sse
flagsurl: flags file url
huge_pages
ignore_errors
iteration: [iteration number (used with incremental result directories)]
iterations: number of iterations - 3 is default (required for compliant runs)
max_copies
meta_burst
meta_compute_service
meta_compute_service_id*
meta_cpu: [CPU model info]
meta_cpu_cache: [CPU cache]
meta_cpu_cores: [# of CPU cores]
meta_cpu_speed: [CPU clock speed (MHz)]
meta_instance_id*
meta_hostname: [system under test (SUT) hostname]
meta_hw_avail
meta_hw_fpu
meta_hw_nthreadspercore
meta_hw_other
meta_hw_ocache
meta_hw_pcache
meta_hw_tcache
meta_hw_ncpuorder
meta_license_num
meta_memory
meta_memory_gb: [memory in gigabytes]
meta_memory_mb: [memory in megabytes]
meta_notes
meta_notes_base
meta_notes_comp
meta_notes_os
meta_notes_part
meta_notes_peak
meta_notes_plat
meta_notes_submit
meta_os_info: [operating system name and version]
meta_provider
meta_provider_id*
meta_region*
meta_resource_id
meta_run_id
meta_storage_config*
meta_sw_avail
meta_sw_other
meta_test_id*
nobuild
nonuma
numa: true if numa was supported and used (--define numa flag and numactl)
num_benchmarks: total number of individual benchmarks run
peak: true for a peak run, false for a base run
purge_output
rate: true for a rate run, false for a speed run
reportable
review
size: input data size - test, train or ref
spec_dir
specfp2006: peak, speed floating point score (only present if rate=0 and peak=1)
specfp_base2006: base, speed floating point score (only present if rate=0 and peak=0)
specfp_csv: [URL to the SPECfp CSV format report (if save.sh --store option used)]
specfp_gif: [URL to the SPECfp HTML report GIF image (if save.sh --store option used)]
specfp_html: [URL to the SPECfp HTML format report (if save.sh --store option used)]
specfp_pdf: [URL to the SPECfp PDF format report (if save.sh --store option used)]
specfp_rate2006: peak, rate floating point score (only present if rate=1 and peak=1)
specfp_rate_base2006: base, rate floating point score (only present if rate=1 and peak=0)
specfp_text: [URL to the SPECfp text format report (if save.sh --store option used)]
specint2006: peak, speed integer score (only present if rate=0 and peak=1)
specint_base2006: base, speed integer score (only present if rate=0 and peak=0)
specint_csv: [URL to the SPECint CSV format report (if save.sh --store option used)]
specint_gif: [URL to the SPECint HTML report GIF image (if save.sh --store option used)]
specint_html: [URL to the SPECint HTML format report (if save.sh --store option used)]
specint_pdf: [URL to the SPECint PDF format report (if save.sh --store option used)]
specint_rate2006: peak, rate integer score (only present if rate=1 and peak=1)
specint_rate_base2006: base, rate integer score (only present if rate=1 and peak=0)
specint_text: [URL to the SPECint text format report (if save.sh --store option used)]
sse: sse optimization used (if applicable)
sse_max
sse_min
test_started
test_stopped
tune: tune level - base, peak or all
valid: 1 if this was a valid run (0 if invalid)
validate_disk_space
x64: true if 64-bit binaries used, false if 32-bit
x64_failover
USAGE
# run 1 test iteration with some metadata
./run.sh --meta_compute_service_id aws:ec2 --meta_instance_id c3.xlarge --meta_region us-east-1 --meta_test_id aws-0914
# run with SPEC CPU 2006 installed in /usr/local/speccpu
./run.sh --spec_dir /usr/local/speccpu
# run for floating point benchmarks only
./run.sh --benchmark fp
# run for perlbench and bwaves only
./run.sh --benchmark 400 --benchmark 410
# save.sh saves results to CSV, MySQL, PostgreSQL, BigQuery or via HTTP
# callback. It can also save artifacts (e.g. the text and HTML reports) to S3, Azure Blob Storage
# or Google Cloud Storage
# save results to CSV files
./save.sh
# save results from a prior run whose output was written to ~/spec-testing
./save.sh ~/spec-testing
# save results to a PostgreSQL database
./save.sh --db postgresql --db_user dbuser --db_pswd dbpass --db_host db.mydomain.com --db_name benchmarks
# save results to BigQuery and artifacts (the report files) to S3
./save.sh --db bigquery --db_name benchmark_dataset --store s3 --store_key THISIH5TPISAEZIJFAKE --store_secret thisNoat1VCITCGggisOaJl3pxKmGu2HMKxxfake --store_container benchmarks1234