Re: MTIME update frequency


Ved Shanbhogue
 

On Wed, Nov 17, 2021 at 05:30:05PM -0500, Darius Rad wrote:
On Wed, Nov 17, 2021 at 03:50:17PM -0600, Vedvyas Shanbhogue wrote:
On Wed, Nov 17, 2021 at 04:36:10PM -0500, Darius Rad wrote:
On Wed, Nov 17, 2021 at 04:37:21AM +0000, Anup Patel wrote:
Before we go ahead and change the MTIME resolution requirement in the platform spec, I would like to highlight the following points (from past discussions) which led to mandating a fixed MTIME resolution in the platform spec:


1. The Linux kernel creates a clock source on top of the time CSR (a mirror of the MMIO MTIME) with the timebase-frequency discovered from DT. The generic time management in the Linux kernel requires nanosecond granularity, so each value read from the clock source is converted to nanoseconds using a mult and shift (i.e. nanoseconds = (time_csr * mult) >> shift). In other words, the Linux kernel always uses integer operations to convert the time CSR's X ns resolution to 1ns resolution, and this conversion will have some round-off error. We could have mandated a fixed 1ns resolution (just like ARM SBSA), but for RISC-V we also need to honour the architectural requirement that all time CSRs be synchronized within 1 tick (i.e. one resolution period), and for multi-socket (or multi-die) systems it becomes challenging to synchronize multiple MTIME counters within 1ns resolution. Considering these facts, it made sense to have a fixed 10ns resolution for MTIME while the update frequency could be lower than 100MHz. (@Greg Favor<mailto:gfavor@...>, please add if I missed anything)
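
To make the mult/shift conversion concrete, here is a rough C sketch of the idea (this is not the kernel's actual code; the 100 MHz timebase and the shift value are arbitrary choices for illustration, and overflow handling is ignored):

```c
#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* ns = (cycles * mult) >> shift; mult is chosen so that one second's worth
 * of timebase ticks scales to approximately NSEC_PER_SEC. */
static uint64_t cyc2ns(uint64_t cycles, uint32_t mult, uint32_t shift)
{
    return (cycles * mult) >> shift;
}

int main(void)
{
    uint64_t timebase_freq = 100000000ULL; /* assume 100 MHz, i.e. 10 ns per tick */
    uint32_t shift = 24;                   /* arbitrary shift for this sketch */
    uint32_t mult  = (uint32_t)((NSEC_PER_SEC << shift) / timebase_freq);

    /* 12345 ticks of a 10 ns timebase -> 123450 ns. The division above is exact
     * for 100 MHz; frequencies that do not divide 1e9 evenly pick up the
     * round-off error mentioned above. */
    printf("%llu ns\n", (unsigned long long)cyc2ns(12345, mult, shift));
    return 0;
}
```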
Considering the requirement that all time CSRs be synchronized to within 1
tick, setting a fixed resolution indirectly makes synchronization much more
difficult for implementations that have a lower update frequency. For such
implementations, since each update is more than 1 tick, it would be
necessary to ensure that all time CSRs always have exactly the same value,
which is considerably more difficult than within 1 tick.
So an implementation with a 100MHz clock would need to increment mtime by 10 on each update to meet the 1ns granularity. Under the current spec, an implementation with a 10MHz clock would need to increment mtime by 10 on each update to meet the 10ns resolution. I am not sure incrementing by 1 vs. incrementing by 10 makes it much harder, as it was already required for a system that implements a 10MHz clock.
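
To spell out that arithmetic, a small sketch (the helper and values below are made up for illustration, not taken from any spec or kernel code):

```c
#include <stdio.h>
#include <stdint.h>

/* How much MTIME must advance on each update to provide a given resolution. */
static uint64_t increment_per_update(uint64_t update_freq_hz, uint64_t resolution_ns)
{
    uint64_t update_period_ns = 1000000000ULL / update_freq_hz;
    return update_period_ns / resolution_ns;
}

int main(void)
{
    /* 100 MHz update rate with 1 ns resolution: +10 per update */
    printf("%llu\n", (unsigned long long)increment_per_update(100000000ULL, 1));
    /* 10 MHz update rate with 10 ns resolution: also +10 per update */
    printf("%llu\n", (unsigned long long)increment_per_update(10000000ULL, 10));
    return 0;
}
```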
The specification says "synchronized to within one tick".

If the time CSR is incremented one tick at a time, then, for every update,
as long as one update propagates to all harts before the next update, this
requirement is met. Some harts may show time T and some may show T+1, but
those are within one tick, which is acceptable.

Now suppose there is a system that increments 10 ticks per update. If the
time value in any hart lags, where some show T and some show T+10, this is
not within one tick, which is not acceptable.

Thus, systems that update one tick at a time have a full tick to propagate
each update to all harts. Systems that update multiple ticks at a time have
*no* time to propagate updates to all harts; every update must be
instantaneous on *all* harts.
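
One toy way to state that constraint (entirely made-up helper and values, just to illustrate the skew argument above):

```c
#include <stdio.h>
#include <stdint.h>

/* The architectural requirement: the time values visible on all harts may
 * differ by at most one tick at any instant. */
static int within_one_tick(const uint64_t *hart_time, int nharts)
{
    uint64_t min = hart_time[0], max = hart_time[0];
    for (int i = 1; i < nharts; i++) {
        if (hart_time[i] < min) min = hart_time[i];
        if (hart_time[i] > max) max = hart_time[i];
    }
    return (max - min) <= 1;
}

int main(void)
{
    /* Increment-by-1 system, one hart lags by one update: skew = 1 tick, OK */
    uint64_t a[2] = { 100, 101 };
    /* Increment-by-10 system, one hart lags by one update: skew = 10 ticks, not OK */
    uint64_t b[2] = { 100, 110 };
    printf("one-tick updates: %s\n", within_one_tick(a, 2) ? "ok" : "violation");
    printf("ten-tick updates: %s\n", within_one_tick(b, 2) ? "ok" : "violation");
    return 0;
}
```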
I agree. But that requirement is already there for a system that implements a 10 MHz clock and has to support 10ns resolution. If 10ns vs. 1ns is a real sticking point, could we have two resolution values supported by the platform specification, similar to the range of update frequencies supported? We can leave it to the end user whether they want to put these two classes of systems in one migration pool.

regards
ved
