Benchmarking RISC-V SBC with SPEC CPU2017 (Multicore/intrate)
By Ali Tariq | February 2, 2024

We have already run and explained the CoreMark benchmark on RISC-V SBCs (VisionFive v1, VisionFive v2, etc.). However, CoreMark is an open-source benchmark that is better suited to microcontroller benchmarking. This page covers benchmarking RISC-V SBCs with SPEC CPU2017.

What is SPEC CPU2017

SPEC CPU2017 is a benchmark package. It contains SPEC's next-generation, industry-standardized, CPU-intensive suites for measuring and comparing compute-intensive performance, stressing a system's processor, memory subsystem, and compiler.

This is a proprietary benchmark created by SPEC (the Standard Performance Evaluation Corporation), though SPEC also offers some packages with non-profit pricing.

Tool Structure

SPEC CPU2017 provides various benchmark suites. Complete details of these suites are available at this link. I will give a top-level overview in the following bullet points.

  • SPEC CPU2017 is divided into specrate and specspeed tests
  • specrate measures the throughput (or work per unit time) of the device under test (DUT) by running multiple concurrent copies of each benchmark
  • specspeed measures the total time needed to run the benchmark, and the performance score is calculated from the time taken to execute it. A higher score means less time was needed to run the benchmark
  • specspeed and specrate are each further divided into integer and floating-point tests called intrate, fprate, intspeed, and fpspeed
  • Each of intrate, fprate, intspeed, and fpspeed is further divided into base and peak benchmarks
  • The only difference between base and peak benchmarks is the compiler options: options allowed under the base rules are a subset of those allowed under the peak rules
  • Valid base benchmark results are required for a valid SPEC CPU2017 test, whereas peak benchmark results are optional

Benchmark Environment

The specspeed benchmarks require 16GB of main memory to run, and the RISC-V SBCs I have at the time of writing this blog are limited to 8GB of main memory, so they are not run.

This blog will be limited to intrate benchmarks of SPEC CPU2017 with base and peak rules.

SPEC CPU2017 is distributed as an ISO image. The version I am using here is cpu2017-1.1.9.iso.

In the SPEC CPU SPECrate benchmarks, multiple copies of each benchmark run concurrently. Here, the number of copies is set to the number of cores of the corresponding device, and only the results for that copy count are included in the graphs below.

Installation

For installing and running SPEC CPU2017 in a Linux environment, you need to have gcc and gfortran installed on the system.
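
On a Debian-based distribution, for example, the prerequisites can be installed with something along the following lines (package names may differ on other distributions, and g++ is also needed if your config file builds the C++ benchmarks with GCC):

sudo apt install gcc g++ gfortran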

First, create a directory for mounting the iso image and then mount the iso image (this will require superuser permissions).

mkdir spec-cpu-mounted; sudo mount cpu2017-1.1.9.iso ./spec-cpu-mounted

Now navigate to the mounted directory and execute the install.sh file.

./install.sh

This will ask you to enter the directory where you wish to install. Once you give the directory's absolute path, it will prompt you to confirm. Just type 'yes' and hit enter.

Once that is done, the setup script will install the SPEC tools and benchmark sources into the specified directory; the benchmarks themselves are compiled later when runcpu is invoked with a config file.
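
Depending on how you invoke the tools, you may also need to set up the SPEC environment in each new shell by sourcing the shrc script that the installer places at the top of the installation directory, for example:

cd /path/to/cpu2017-install-dir
source ./shrc

Here /path/to/cpu2017-install-dir is a placeholder for the directory you chose during installation.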

Execution of test

Before executing the test, it is important to note that the benchmark requires a config file for the target architecture. Some config files are shipped with the benchmark; for architectures that do not have one, users have to create it themselves. At the time of writing this blog, a RISC-V config file was not present in the provided setup, but SPEC provides one on this link.

Once downloaded, a user can change a few things according to their needs. The settings I tweaked are listed below:

  • GCC version configuration (since I was using a version above 10)
  • Number of copies to run (set according to the number of cores); see the illustrative snippet after this list
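
As a rough illustration, the relevant edits look something like the following. The exact macro and section names depend on the config file revision you downloaded, so check the comments in the file itself:

# illustrative only; names follow SPEC's example GCC config files
%define gcc_dir /usr            # path to the GCC installation in use
intrate,fprate:
   copies = 4                   # concurrent copies; set to the core count of the device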

Once these things are set up, the test is ready to run. To run the intrate test, one can use the following command template in the root directory where SPEC CPU2017 is installed.

./bin/runcpu --config="$PATH_OF_CONFIG_FILE" $TESTNAME --reportable

The --reportable flag is required for a valid test result. For intrate, $TESTNAME can be set to `intrate`; to execute the base and peak tests separately, $TESTNAME can be set to `SPECrate2017_int_base` or `SPECrate2017_int_peak` respectively.
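
For example, assuming the downloaded RISC-V config file was saved as riscv-gcc.cfg in the installation's config directory (the filename here is only an assumption), a reportable intrate run would be:

./bin/runcpu --config=riscv-gcc.cfg intrate --reportable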

Results


Individual Test Results

StarFive VisionFive v1

The individual test results of StarFive VisionFive v1 are as follows:


StarFive VisionFive v2

The individual test results of the StarFive VisionFive v2 are as follows:


Raspberry Pi 4 Model B

The individual test results of the Raspberry Pi 4 Model B are as follows:


Intel i7-6500U

The individual test results of the Intel i7-6500U are as follows:



Final Score

Int_base results

The specrate results under base rules for each device are as follows:

StarFive VisionFive v1: 0.584021
StarFive VisionFive v2: 1.554537
Raspberry Pi 4 Model B: 3.130916
Intel i7-6500U: 8.326124

SPEC CPU2017 int_base


Int_peak results

The specrate results under peak rules for each device are as follows:

StarFive VisionFive v1: 0.585396
StarFive VisionFive v2: 1.546247
Raspberry Pi 4 Model B: 3.243703
Intel i7-6500U: 8.673959

SPEC CPU2017 int_peak

Conclusion

As can be seen in the graphs, the StarFive VisionFive v2 performs better than the StarFive VisionFive v1, which is expected given their specifications. However, RISC-V SBCs still have a lot of ground to cover in areas such as library, package, and hardware optimizations before they can provide the same performance as their ARM counterparts.
