Re: MTIME update frequency


Ved Shanbhogue

On Sun, Nov 21, 2021 at 11:33:43AM -0800, Greg Favor wrote:
> Coming back around to the OS-A platform specs, the distinction between
> resolution and update frequency obviously needs to go away. Given the need
> for flexibility across the range of OS-A compliant implementations,
> mandating any one tick period will be good for some and unacceptable for
> others. I suggest mandating a max tick period of 100 ns (corresponding to
> a relatively low 10 MHz frequency for OS-A class designs). This allows
> implementations to do anything from 100 ns down to 1 ns and even lower (if
> they are able to directly satisfy the synchronization requirement). This
> also ensures a reasonable upper bound on the lack of resolution/accuracy in
> time CSR readings.
Thanks Greg. This would be helpful: it does not prevent an implementation from engineering such a solution, but it also does not force an implementation to do so. I think this makes sense.
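
For illustration only (not part of the original thread), a minimal C sketch of the arithmetic above: it reads the RV64 time CSR via rdtime and converts ticks to nanoseconds using an assumed timebase frequency. The 10 MHz constant is a placeholder matching the proposed 100 ns maximum tick period (period_ns = 1e9 / frequency_hz); a real system would discover the frequency from the device tree "timebase-frequency" property.

/* Minimal sketch, assuming an RV64 platform and a 10 MHz timebase
 * (placeholder; real systems read the timebase frequency from the
 * device tree "timebase-frequency" property). */
#include <stdint.h>
#include <stdio.h>

#define TIMEBASE_FREQ_HZ 10000000ULL   /* assumed: 10 MHz -> 100 ns tick */

static inline uint64_t read_time_csr(void)
{
    uint64_t t;
    __asm__ volatile ("rdtime %0" : "=r" (t));   /* read the time CSR */
    return t;
}

int main(void)
{
    uint64_t ticks = read_time_csr();
    uint64_t tick_ns = 1000000000ULL / TIMEBASE_FREQ_HZ;   /* 100 ns */
    printf("time = %llu ticks, ~%llu ns since timebase reset\n",
           (unsigned long long)ticks,
           (unsigned long long)(ticks * tick_ns));
    return 0;
}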

> P.S. As a personal aside, I find it next to impractical to distribute
> and/or explicitly synchronize time truly to within ~1 ns in medium to high
> core count implementations (particularly, but not only, because of physical
> design and process considerations). (And that leaves aside
> multi-die/socket implementations.) But the above doesn't stop anyone from
> figuring out how to actually do that.
Totally agree.

regards
ved
