Re: MTIME update frequency


Darius Rad

On Sun, Nov 21, 2021 at 11:33:43AM -0800, Greg Favor wrote:

> Coming back around to the OS-A platform specs, the distinction between
> resolution and update frequency obviously needs to go away. Given the need
> for flexibility across the range of OS-A compliant implementations,
> mandating any one tick period will be good for some and unacceptable for
> others. I suggest mandating a max tick period of 100 ns (corresponding to
> a relatively low 10 MHz frequency for OS-A class designs). This allows
> implementations to do anything from 100 ns down to 1 ns and even lower (if
> they are able to directly satisfy the synchronization requirement). This
> also ensures a reasonable upper bound on the lack of resolution/accuracy in
> time CSR readings.

What is the rationale for mandating any specific tick period in OS-A-ish
platforms?

The period can be determined from the device tree and/or ACPI, at least one
of which is required for OS-A platforms, so the argument that a fixed period
makes things easier is debatable.
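
For what it's worth, once the frequency has been discovered (e.g. from the
standard "timebase-frequency" device tree property, or the equivalent ACPI
structure), converting mtime ticks to wall-clock units works the same no
matter what the period is. A minimal sketch in C, with a hypothetical helper
name and the 10 MHz value used purely as an example:

#include <stdint.h>

/* Hypothetical helper: convert a raw mtime/time reading to nanoseconds,
 * given whatever frequency the platform reported via the
 * "timebase-frequency" device tree property or its ACPI equivalent,
 * e.g. 10000000 for a 100 ns tick. */
static uint64_t ticks_to_ns(uint64_t ticks, uint64_t timebase_freq)
{
    /* ns = ticks * 1e9 / freq; split the division to limit overflow
     * in the intermediate product for large tick counts. */
    uint64_t sec = ticks / timebase_freq;
    uint64_t rem = ticks % timebase_freq;
    return sec * 1000000000ULL + (rem * 1000000000ULL) / timebase_freq;
}

Software that already consumes the discovered frequency doesn't care whether
the period is 100 ns, 10 ns, or something in between.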

If the argument is that a fixed period is necessary for migration of
virtual machines using the hypervisor extension, then perhaps it should
only be a requirement when the hypervisor extension is also present.
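
To make the migration concern concrete: with the hypervisor extension, a
VS-mode read of the time CSR returns the host value plus htimedelta, which
is an additive offset only. An offset can hide a jump across migration, but
not a difference in tick frequency between the source and destination hosts;
that would require a rescale roughly like the sketch below (the function
name and the frequencies are made up for illustration), and the time CSR
path offers no such scaling short of trapping and emulating guest time
reads.

#include <stdint.h>

/* Hypothetical illustration.  Architecturally the guest simply sees
 *     guest_time = host_time + htimedelta;
 * Compensating for, say, a 10 MHz source host and a 25 MHz destination
 * host (made-up values) would additionally need something like: */
static uint64_t rescale_guest_ticks(uint64_t guest_ticks,
                                    uint64_t src_freq, uint64_t dst_freq)
{
    /* Split the conversion to limit overflow in the intermediate product. */
    uint64_t sec = guest_ticks / src_freq;
    uint64_t rem = guest_ticks % src_freq;
    return sec * dst_freq + (rem * dst_freq) / src_freq;
}

(Trapping guest time reads via hcounteren is possible, but it costs a trap
per read, which is presumably why a fixed period looks attractive here.)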

As Anup (I believe) mentioned, mandating a specific period will make some
existing platforms non-compliant that would otherwise be compliant. Maybe
this isn't a strong factor, but it was my understanding that one goal of the
first iteration of the platform specifications was to cover, to the extent
possible, existing platforms (hence the legacy PLIC and 8250 UART
requirements, as well).

It is also unfortunate that there is no provision for recommendations, where
this could be phrased not as a requirement but as guidance, with rationale
that would hopefully persuade platform implementers to comply.

// darius
