Re: MTIME update frequency


Jonathan Behrens <behrensj@...>

The draft says "Platform must support a default ACLINT MTIME counter resolution of 10ns", which I interpret to mean that 100,000 ticks always corresponds to 1 ms. Allowing different update frequencies just means the counter can advance by 1 every 10ns, by 2 every 20ns, or by 10 every 100ns.
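
For illustration, a minimal sketch of M-mode code under that interpretation. The MMIO addresses below are placeholders (they happen to match the QEMU virt CLINT layout); a real platform would take them from its device tree, and the single 64-bit accesses assume RV64:

    #include <stdint.h>

    /* Placeholder ACLINT addresses -- actual values come from the
     * platform's device tree, not from this sketch. */
    #define ACLINT_MTIME     ((volatile uint64_t *)0x0200BFF8UL)
    #define ACLINT_MTIMECMP0 ((volatile uint64_t *)0x02004000UL)

    /* Under a fixed 10ns resolution, 1 ms is always 100,000 ticks. */
    #define TICKS_PER_MS 100000ULL

    static void arm_timer_1ms(void)
    {
        /* 64-bit load/store assumes RV64; RV32 needs a hi/lo sequence. */
        *ACLINT_MTIMECMP0 = *ACLINT_MTIME + TICKS_PER_MS;
    }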

Jonathan


On Tue, Nov 16, 2021 at 1:10 PM Ved Shanbhogue <ved@...> wrote:
On Tue, Nov 16, 2021 at 12:49:11PM -0500, Jonathan Behrens wrote:
>Adding more configuration options increases complexity. Under the current
>draft, if software wants an interrupt 1ms in the future it can set mtimecmp
>to the value of mtime plus 100,000. If we make the resolution of mtime vary

I hope I am reading the right current draft. The current draft states:
"The ACLINT MTIME update frequency (i.e. hardware clock) must be between 10 MHz and 100 MHz, and updates must be strictly monotonic."

So a value of 100,000 could mean a delay anywhere from 10 ms (at 10 MHz) down to 1 ms (at 100 MHz). Per the current draft it would therefore be wrong for software to assume that 100,000 ticks implies 1 ms.
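
In other words, with a variable frequency, software has to scale delays by the discovered rate. A hedged sketch, assuming timebase_freq has been obtained from firmware or the device tree (as discussed below):

    #include <stdint.h>

    /* Discovered at boot, e.g. from the device tree's
     * timebase-frequency property; 10 MHz..100 MHz per the draft. */
    static uint64_t timebase_freq;

    /* Convert a delay in milliseconds to MTIME ticks: 10,000 per ms
     * at 10 MHz, 100,000 per ms at 100 MHz. */
    static uint64_t ms_to_ticks(uint64_t ms)
    {
        return (ms * timebase_freq) / 1000;
    }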

>between systems, then we have to do a bunch more specification and
>implementation work to pipe that information around. Based on Greg's
>message it sounds like that may be happening, but I also see the appeal of
>just picking the extremely simple option that works well enough for
>everyone's case (even if it isn't some people's top pick).
>

An enumeration of the MTIME frequency is therefore needed per the current draft. I believe the device-tree binding for RISC-V uses the timebase-frequency property under the cpus node:
https://github.com/riscv-non-isa/riscv-device-tree-doc/blob/master/bindings/riscv/cpus.txt
Is further specification needed?
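
For concreteness, a minimal libfdt sketch of how a kernel could pick that property up at boot. This is an assumption about consumer code, not part of the binding itself, and error handling is kept minimal:

    #include <libfdt.h>
    #include <stdint.h>

    /* Read /cpus/timebase-frequency from a flattened device tree.
     * Returns 0 if the node or property is missing. */
    static uint32_t read_timebase_freq(const void *fdt)
    {
        int len;
        int cpus = fdt_path_offset(fdt, "/cpus");
        if (cpus < 0)
            return 0;
        const fdt32_t *prop =
            fdt_getprop(fdt, cpus, "timebase-frequency", &len);
        if (!prop || len < (int)sizeof(fdt32_t))
            return 0;
        return fdt32_to_cpu(*prop);
    }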

regards
ved
