Re: MTIME update frequency


Greg Favor

I took the opportunity to raise this with Andrew and Krste (who authored the text in question) and ask about the intended and proper reading of the following arch spec text:

The RDTIME pseudoinstruction reads the low XLEN bits of the time CSR, which counts wall-clock real time that has passed from an arbitrary start time in the past. The underlying 64-bit counter should never overflow in practice. The execution environment should provide a means of determining the period of the real-time counter (seconds/tick). The period must be constant. The real-time clocks of all harts in a single user application should be synchronized to within one tick of the real-time clock. The environment should provide a means to determine the accuracy of the clock.
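
For concreteness, here is a minimal sketch (not spec text) of how an RV64 hart could read the time CSR described above, using the usual GCC/Clang inline-asm idiom; on RV32 the upper half would be read via rdtimeh with the conventional re-read loop.

    /* Minimal sketch, assuming RV64 and a GCC/Clang toolchain: read the
       time CSR via the RDTIME pseudoinstruction.  On RV32 the high word
       is read separately with rdtimeh and re-checked for rollover. */
    #include <stdint.h>

    static inline uint64_t read_time_csr(void)
    {
        uint64_t t;
        __asm__ volatile ("rdtime %0" : "=r" (t));
        return t;
    }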

Before summarizing their response, I'll note that the presence of two discoverable properties (tick period and time accuracy) might suggest that these correspond to time resolution and update frequency (which would imply an inaccuracy equal to the update period).  Other readings of this text might suggest otherwise.

In short, the intended meaning and proper reading of the arch text is the most literal reading of the first several sentences: one tick equals one ulp, and the counter monotonically increments (i.e. by +1) once per tick period.  The resolution and the update period are one and the same.  Time CSRs across harts must be synchronized to within one ulp, i.e. one tick period.
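
As an illustration of that reading (a sketch only, with the tick frequency assumed to come from however the environment exposes it, e.g. the device tree's timebase-frequency property on Linux-class systems), converting a tick count to wall-clock time needs just that one period:

    /* Sketch: one tick == one ulp, so both the resolution and the update
       period are 1/timebase_freq_hz seconds.  Converts a raw tick count
       to nanoseconds without intermediate overflow at sane frequencies. */
    #include <stdint.h>

    static uint64_t ticks_to_ns(uint64_t ticks, uint64_t timebase_freq_hz)
    {
        uint64_t secs = ticks / timebase_freq_hz;
        uint64_t rem  = ticks % timebase_freq_hz;
        return secs * 1000000000ull + (rem * 1000000000ull) / timebase_freq_hz;
    }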

Secondly, the last sentence's statement about accuracy is meant to refer to the bounds on deviation from the nominal frequency of the RTC source.
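
In other words, the discoverable accuracy is a frequency tolerance (conventionally quoted in ppm), not an update granularity.  A toy example of the arithmetic, for illustration only:

    /* Illustration only: frequency error of the RTC source relative to
       nominal, in parts per million.  A +/-50 ppm oscillator, for example,
       may run fast or slow by up to 0.005%. */
    static double rtc_error_ppm(double actual_hz, double nominal_hz)
    {
        return (actual_hz - nominal_hz) / nominal_hz * 1e6;
    }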

Krste and Andrew will add a few words to the current text to make the intended literal meaning fully clear (and hence cut off any alternative, less literal readings), as well as to clarify what the accuracy sentence refers to.


Coming back around to the OS-A platform specs, the distinction between resolution and update frequency obviously needs to go away.  Given the need for flexibility across the range of OS-A compliant implementations, mandating any one tick period would be good for some and unacceptable for others.  I suggest mandating a maximum tick period of 100 ns (corresponding to a relatively low 10 MHz tick frequency for OS-A class designs).  That allows implementations to do anything from 100 ns down to 1 ns and even lower (if they are able to directly satisfy the synchronization requirement), and it puts a reasonable upper bound on how coarse and inaccurate time CSR readings can be.
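
The arithmetic behind the proposed bound, as a sketch (the 10 MHz figure is simply the reciprocal of the suggested 100 ns maximum period):

    /* Illustrative only: a maximum tick period of 100 ns is the same as
       requiring a tick frequency of at least 10 MHz, since
       period_ns = 1e9 / freq_hz. */
    #include <stdbool.h>
    #include <stdint.h>

    static bool meets_proposed_tick_bound(uint64_t timebase_freq_hz)
    {
        return timebase_freq_hz >= 10000000ull;   /* >= 10 MHz  <=>  <= 100 ns */
    }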

Greg

P.S. As a personal aside, I find it next to impossible in practice to distribute and/or explicitly synchronize time truly to within ~1 ns in medium to high core count implementations (particularly, but not only, because of physical design and process considerations).  (And that leaves aside multi-die/socket implementations.)  But the above doesn't stop anyone from figuring out how to actually do that.
