Re: MTIME update frequency
Greg Favor
On Tue, Nov 16, 2021 at 10:50 AM Jonathan Behrens <behrensj@...> wrote:
The draft says "Platform must support a default ACLINT MTIME counter resolution of 10ns", which I interpret to mean that 100,000 ticks always corresponds to 1 ms. The point of allowing different frequencies is just that you can increase mtime by 1 every 10 ns, by 2 every 20 ns, or by 10 every 100 ns.
Yes. This gets to the heart of the difference between resolution and update frequency. For a given resolution, one is free to update (with +1 increments) at the update frequency corresponding to that resolution (i.e. 10 ns resolution updated at 100 MHz), or to update at a lower frequency (e.g. with +5 increments at 20 MHz). For that matter, one could do fractional updates at higher than 100 MHz (e.g. with +0.04 increments at 2.5 GHz, where the fractional part of 'time' does not appear in the architectural mtime and time registers).
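To make that concrete, here is a toy C sketch (mine, not from the spec) in which all three update strategies realize the same 10 ns resolution. The fractional case keeps its accumulator in fixed point, and only the integer part would be architecturally visible:

    #include <stdint.h>
    #include <stdio.h>

    #define FRAC_BITS 32   /* assumed width of the hidden fractional accumulator */

    int main(void) {
        uint64_t mtime_a = 0;   /* +1 per cycle at 100 MHz                 */
        uint64_t mtime_b = 0;   /* +5 per cycle at 20 MHz                  */
        uint64_t acc_c   = 0;   /* Q32.32: +1/25 tick per cycle at 2.5 GHz */
        /* 0.04 tick = 1/25 tick; round to the nearest fixed-point step.   */
        uint64_t inc_c   = ((1ULL << FRAC_BITS) + 25 / 2) / 25;

        /* Simulate 1 us of wall-clock time on each clock. */
        for (int i = 0; i < 100;  i++) mtime_a += 1;     /*  100 cycles, 10 ns each  */
        for (int i = 0; i < 20;   i++) mtime_b += 5;     /*   20 cycles, 50 ns each  */
        for (int i = 0; i < 2500; i++) acc_c   += inc_c; /* 2500 cycles, 0.4 ns each */

        uint64_t mtime_c = acc_c >> FRAC_BITS;  /* architectural view only */
        /* All three print 100: 1 us of elapsed time at 10 ns resolution.  */
        printf("%llu %llu %llu\n", (unsigned long long)mtime_a,
               (unsigned long long)mtime_b, (unsigned long long)mtime_c);
        return 0;
    }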
To Ved's last post: it is the timebase resolution (and not the update frequency) that determines the conversion from time to ticks.
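As a one-liner (my code, assuming the draft's fixed 10 ns resolution): the conversion depends only on the resolution, so it is identical no matter which of the above update strategies the hardware uses:

    #include <stdint.h>

    #define MTIME_RESOLUTION_NS 10ULL   /* per the draft platform spec */

    static inline uint64_t ns_to_mtime_ticks(uint64_t ns) {
        return ns / MTIME_RESOLUTION_NS;   /* 1 ms -> 100,000 ticks, always */
    }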
So the question is whether there should be a fixed resolution, so that platform-compliant software can simply do a fixed absolute-time-to-mtime/time conversion, and conversely, how much (or how little) change to Linux would be required to support a discoverable, variable conversion?
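For comparison, a hedged sketch of the discoverable alternative (illustrative names, not actual kernel code): software learns the timebase at boot, much as Linux today reads the "timebase-frequency" device-tree property, and derives the conversion at run time instead of baking in 10 ns:

    #include <stdint.h>

    static uint64_t timebase_hz;   /* discovered at boot, e.g. from the device tree */

    static inline uint64_t ns_to_ticks_discovered(uint64_t ns) {
        /* ticks = ns * f / 1e9. A real kernel would precompute a mult/shift
         * pair (cf. Linux's clocks_calc_mult_shift) to avoid the 64x64
         * multiply overflow and the per-call division. */
        return (ns * timebase_hz) / 1000000000ULL;
    }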
Greg