On Wed, Nov 17, 2021 at 04:36:10PM -0500, Darius Rad wrote:
On Wed, Nov 17, 2021 at 04:37:21AM +0000, Anup Patel wrote:

So an implementation that supports a 100 MHz clock would need to update mtime by 10 on each tick to meet the 1 ns granularity. In the current spec, an implementation that supports a 10 MHz clock would need to update mtime by 10 on each tick to meet the 10 ns resolution. I am not sure incrementing by 1 vs. incrementing by 10 makes it much harder, since that was already required for a system that implements a 10 MHz clock.
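To make that arithmetic concrete, a minimal sketch in C (the names and constants here are illustrative, not from the spec):

    /* Per-tick mtime increment needed so that mtime advances in units of
     * the required granularity; values match the 100 MHz / 1 ns example. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t clock_hz       = 100000000;  /* 100 MHz driving clock      */
        uint64_t granularity_ns = 1;          /* required mtime granularity */

        uint64_t tick_period_ns = 1000000000ULL / clock_hz;        /* 10 ns */
        uint64_t increment      = tick_period_ns / granularity_ns; /* 10    */

        printf("mtime increment per tick: %llu\n",
               (unsigned long long)increment);
        return 0;
    }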
Before we go ahead and change the MTIME resolution requirement in the platform spec, I would like to highlight the following points (from past discussions) which led to mandating a fixed MTIME resolution in the platform spec.

Considering the requirement that all time CSRs be synchronized to within 1 tick: the CSRs need to present a consistent time at the observation point. The fastest way in most systems to "observe" the value of another hart's time CSR is through the cache or through memory. So the difference between the two CSRs should not be observable, in the sense that the following test must not fail:
Hart-X (sends a message to Hart-Y):
    Write a timestamp message to memory.
Hart-Y (receives the message):
    Read the timestamp message from memory (gets the data from cache or memory).
    Compare the timestamp to time (the timestamp should not be in the future).
If this test passes, then the fact that the time CSR in Hart-Y is 2 ticks behind or ahead of Hart-X's is not observable.
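A rough sketch of that test, assuming RV64 bare metal with both harts sharing memory (the mailbox variables and the ready-flag handshake are invented for illustration; rdtime reads the time CSR):

    #include <stdint.h>
    #include <stdatomic.h>

    static inline uint64_t read_time(void)
    {
        uint64_t t;
        __asm__ volatile ("rdtime %0" : "=r"(t));  /* read the time CSR */
        return t;
    }

    /* shared between the two harts */
    static _Atomic uint64_t mailbox;
    static _Atomic int      ready;

    void hart_x(void)                 /* sender */
    {
        uint64_t ts = read_time();
        atomic_store_explicit(&mailbox, ts, memory_order_relaxed);
        atomic_store_explicit(&ready, 1, memory_order_release);
    }

    int hart_y(void)                  /* receiver; returns 1 if test fails */
    {
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                         /* wait for the message */
        uint64_t ts  = atomic_load_explicit(&mailbox, memory_order_relaxed);
        uint64_t now = read_time();
        return ts > now;              /* timestamp must not be in the future */
    }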
Setting the granularity to 1 ns provides a path for RISC-V implementations that want the finer resolution to achieve this goal without penalizing implementations that do not.