Re: A proposal to enhance RISC-V HPM (Hardware Performance Monitor)


Hi Anup,

 

Fair enough.  Of course, your conclusion is totally correct: "we CAN EASILY set the HPM* ...," but the real question lies in point 1: are counter updates really that infrequent?  So far, nobody has provided any convincing evidence, so I would like to share our findings from an out-of-the-box experiment.

 

The system under test is a 4.17-based Linux branch which contains the andes_pmu patch, running on our A27 design on an FPGA.  The testing command was

 

time perf record -e cache-references ./whestone

 

where "cache-references" is an andes_pmu cache event and the target program is a normal whestone executable.  We tested three different settings,

 

Baseline: all HPM CSRs are written in S-mode using a plain csr_write.
SBI1: HPM counter CSRs are written via SBI calls.
SBI2: HPM counter and event CSRs are written via SBI calls.
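
To make the comparison concrete, here is a minimal sketch of the two write paths, in C with inline assembly.  This is not the actual andes_pmu or firmware code: the csr_write macro mirrors the kernel-style helper, the CSR number 0x5c0 is just a placeholder in the custom S-mode CSR space, and the SBI extension/function IDs are hypothetical; only the general SBI calling convention (arguments in a0/a1, function ID in a6, extension ID in a7, then ecall) is taken from the SBI spec.

/* Baseline: a plain CSR write executed in S-mode -- one csrw
 * instruction, no privilege transition.  The CSR number below is a
 * placeholder for whichever HPM counter/event CSR the platform makes
 * writable from S-mode. */
#define csr_write(csr, val) \
        asm volatile ("csrw " #csr ", %0" : : "rK"(val) : "memory")

static inline void hpm_write_direct(unsigned long val)
{
        csr_write(0x5c0, val);                /* placeholder CSR number */
}

/* SBI1/SBI2: the same write routed through the SBI -- an ecall that
 * traps into M-mode firmware, which performs the CSR write on our
 * behalf and returns with sret.  The extension/function IDs are
 * hypothetical placeholders, not part of any ratified SBI extension. */
static inline void hpm_write_via_sbi(unsigned long csr_num, unsigned long val)
{
        register unsigned long a0 asm("a0") = csr_num;    /* which counter/event */
        register unsigned long a1 asm("a1") = val;        /* value to write */
        register unsigned long a6 asm("a6") = 0;          /* placeholder function ID */
        register unsigned long a7 asm("a7") = 0x09000000; /* placeholder extension ID */

        asm volatile ("ecall"
                      : "+r"(a0), "+r"(a1)
                      : "r"(a6), "r"(a7)
                      : "memory");
}

The difference the numbers below try to quantify is simply that the baseline path costs a single instruction, while every SBI write pays for a full trap into M-mode and back.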

We ran each setting 20 times and averaged the "real" part of the output.  The results, in seconds:

Baseline: 23.131
SBI1: 24.840 (7.4% more time than baseline)
SBI2: 25.691 (11.1% more time than baseline)

Considering these results, does the write-enable bit count as a benefit now?
Also, it's perfectly fine that the QEMU virt platform doesn't support PMUs which allow S-mode to write HPM CSRs, so I don't think #279 answers #280 and #278.

Thanks,
Alan
