On Wed, Nov 17, 2021 at 04:37:21AM +0000, Anup Patel wrote:
Before we go ahead and change the MTIME resolution requirement in the platform spec, I would like to highlight the following points (from past discussions) which led to mandating a fixed MTIME resolution in the platform spec:
1. The Linux kernel creates a clocksource on top of the time CSR (a mirror of the MMIO MTIME), with the timebase-frequency discovered from DT. The generic time management in the Linux kernel requires nanosecond granularity, so each value read from the clocksource is converted to nanoseconds using a mult and shift (i.e. nanoseconds = (time_csr * mult) >> shift). In other words, the Linux kernel always uses an integer operation to convert the time CSR's resolution to 1ns resolution, and this conversion will have some round-off error. We could have mandated a fixed 1ns resolution (just like ARM SBSA), but for RISC-V we also need to honour the architectural requirement that all time CSRs be synchronized within 1 tick (i.e. one resolution period), and for multi-socket (or multi-die) systems it becomes challenging to synchronize multiple MTIME counters within 1ns. Considering these facts, it made sense to have a fixed 10ns resolution for MTIME while allowing the update frequency to be lower than 100MHz. (@Greg Favor <gfavor@...>, please add if I missed anything)
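To illustrate the round-off behaviour described above, here is a minimal sketch of the mult/shift conversion (simplified from what the kernel's clocksource code does; the shift value of 24 is illustrative, the kernel computes it per clocksource):

```python
NSEC_PER_SEC = 10**9

def calc_mult(freq_hz, shift):
    # Choose mult so that ns ~= (cycles * mult) >> shift.
    # Integer division here is where the round-off error originates.
    return (NSEC_PER_SEC << shift) // freq_hz

def cycles_to_ns(cycles, mult, shift):
    # The hot-path conversion Linux performs on every clocksource read.
    return (cycles * mult) >> shift

shift = 24

# 100 MHz timebase (10 ns resolution): 10 ns/tick is exactly
# representable, so the conversion is exact.
mult_100mhz = calc_mult(100_000_000, shift)
assert cycles_to_ns(12_345, mult_100mhz, shift) == 123_450

# 24 MHz timebase: 41.666... ns/tick is not exactly representable,
# so one full second of ticks converts to 999_999_999 ns, 1 ns short.
mult_24mhz = calc_mult(24_000_000, shift)
assert cycles_to_ns(24_000_000, mult_24mhz, shift) == 999_999_999
```

A 10ns (100MHz) resolution is one of the frequencies for which this conversion is exact, which is part of why a fixed resolution simplifies timekeeping.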
Considering the requirement that all time CSRs be synchronized to within 1
tick, setting a fixed resolution indirectly makes synchronization much more
difficult for implementations that have a lower update frequency. For such
implementations, since each update advances the counter by more than 1 tick, it
would be necessary to ensure that all time CSRs always hold exactly the same
value, which is considerably harder than keeping them within 1 tick of each
other.
2. It is common in enterprise clouds to migrate a Guest/VM across different hosts. Considering the diversity in the RISC-V world, we have to support migrating a Guest/VM from host A to host B, where these hosts can be from different vendors. If host A and host B have different MTIME resolutions then Guest/VM migration will not work; this problem also applies to the ARM world and is another reason why ARM SBSA mandates a fixed timer resolution. The ARM world standardized its timer frequency quite late in the game, but the RISC-V world can avoid these problems by standardizing the MTIME resolution early. Alternatively, an SBI para-virt call could be used to detect a change in MTIME resolution, but this would mean additional code (along with barriers) in the path of reading the time CSR (as an example, look at the KVM para-virt clock used in the x86 world).
3. Requiring a fixed MTIME resolution in the platform spec would mean that some existing platforms won’t comply, but the platform spec is supposed to be forward looking. If we don’t standardize a fixed MTIME resolution now, it is likely that we will end up standardizing it in a future platform spec anyway.
Based on the above points, I still think mandating a fixed MTIME resolution is desirable for OS-A platforms, and this has nothing to do with how the timebase-frequency is discovered (i.e. DT vs. ACPI).
From: <tech-unixplatformspec@...> on behalf of Jonathan Behrens <behrensj@...>
Date: Wednesday, 17 November 2021 at 6:26 AM
To: Vedvyas Shanbhogue <ved@...>
Cc: Greg Favor <gfavor@...>, Heinrich Schuchardt <xypron.glpk@...>, tech-unixplatformspec <tech-unixplatformspec@...>
Subject: Re: [RISC-V] [tech-unixplatformspec] MTIME update frequency
Given that the device-tree mechanism is apparently already in place, it honestly probably wouldn't be a big deal to just formalize that and not require any particular mtime resolution. I still prefer the simplicity of always doing 10ns, but don't feel that strongly about it.
On Tue, Nov 16, 2021 at 3:13 PM Vedvyas Shanbhogue via lists.riscv.org <ved=rivosinc.com@...> wrote:
On Tue, Nov 16, 2021 at 11:31:44AM -0800, Greg Favor wrote:
On Tue, Nov 16, 2021 at 10:50 AM Jonathan Behrens <behrensj@...> wrote:
Linux discovers the timebase from device-tree and does not assume a fixed frequency:
To Ved's last post, it is the timebase resolution (and not update
frequency) that determines the conversion from time to ticks.
So the question is whether there should be a fixed resolution, so that
platform-compliant software can simply do a fixed absolute-time to
mtime/time conversion, and conversely how much (or how little) change to
Linux would be required to support a discoverable, variable conversion?
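To make the two sides of that question concrete, here is a minimal sketch of the absolute-time-to-ticks conversion in both models. The function names and the example timebase frequencies are illustrative, not from any spec:

```python
NSEC_PER_SEC = 10**9

# Fixed-resolution model: the platform spec mandates 10 ns per MTIME
# tick, so software can bake the constant into the conversion.
MTIME_RES_NS = 10

def ns_to_ticks_fixed(ns):
    # Constant divide; no discovery step needed.
    return ns // MTIME_RES_NS

# Discoverable model: software must first read timebase-frequency
# from DT/ACPI and carry it through every conversion.
def ns_to_ticks_discovered(ns, timebase_freq_hz):
    return ns * timebase_freq_hz // NSEC_PER_SEC

# Both agree when the discovered frequency happens to be 100 MHz...
assert ns_to_ticks_fixed(1_000_000) == 100_000
assert ns_to_ticks_discovered(1_000_000, 100_000_000) == 100_000

# ...but a different host frequency yields different tick values,
# which is the migration hazard raised earlier in the thread.
assert ns_to_ticks_discovered(1_000_000, 24_000_000) == 24_000
```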