Re: MTIME update frequency
On Wed, Nov 17, 2021 at 2:30 PM Darius Rad <darius@...> wrote:
> CSRs need to have a consistent time at the observation point. The fastest way in most systems to "observe" values in CSRs is through the cache or through memory. So the difference between the two CSRs should not be observable to the extent that the following test fails:
Literally, such a "loosely synchronized" implementation would not be compliant with the RISC-V architecture and its tight synchronization requirement. As Darius noted (below), that is just one way to observe the clocks and, most importantly, the arch spec requirement is not defined in terms of an observation litmus test like the one above. The literal arch spec requirement is the actual requirement for arch compliance.
> That is perhaps a useful test, but not the only way to observe the clocks.
As Anup noted earlier in this thread: one could require a fixed 1 ns resolution (as ARM SBSA does), but RISC-V must also honor the architectural requirement that all time CSRs be synchronized to within one tick. And for multi-socket (or multi-die) systems it becomes even more challenging (arguably impossible) to synchronize time to within 1 ns across an SMP domain.
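The actual test body Darius alluded to was not included in the thread, but the cross-hart observation it describes can be modeled abstractly. Below is a hypothetical sketch (names `Hart`, `litmus`, and `SKEW_TICKS` are mine, not from the thread) that models two harts whose time CSRs carry fixed offsets from a reference clock, and checks the property in question: a timestamp written to memory by one hart should never appear to be from the "future" when another hart compares it against its own time CSR, beyond the architecturally permitted one-tick skew.

```python
# Hypothetical model of the cross-hart time-CSR observation test
# discussed above (the original test was not quoted in the thread).
# Each hart's time CSR is modeled as a reference clock plus a fixed
# per-hart offset; the RISC-V requirement bounds that offset skew.

SKEW_TICKS = 1  # architectural bound: time CSRs synchronized within 1 tick


class Hart:
    def __init__(self, offset):
        # Fixed skew of this hart's time CSR relative to a reference clock.
        self.offset = offset

    def read_time(self, ref_clock):
        return ref_clock + self.offset


def litmus(hart_a, hart_b, ref_clock):
    # Hart A samples its time CSR and stores the value to shared memory.
    shared = hart_a.read_time(ref_clock)
    # Hart B loads that value and samples its own time CSR; worst case,
    # this happens on the same reference tick as A's store.
    t_b = hart_b.read_time(ref_clock)
    # B must never see a stored timestamp more than SKEW_TICKS ahead
    # of its own clock.
    return t_b >= shared - SKEW_TICKS


# Two harts within the 1-tick bound: the test passes in both directions.
compliant = [Hart(0), Hart(1)]
assert all(litmus(a, b, 100) for a in compliant for b in compliant)

# A "loosely synchronized" hart (10 ticks ahead) makes the test fail.
assert not litmus(Hart(10), Hart(0), 100)
```

This only models the ordering property, not the microarchitecture; in a real test the store/load pair would go through actual shared memory and the reads would be `rdtime` instructions on real harts.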