Re: Proposed deprecation of N extension

Gernot <gernot.heiser@...>

The threat model is any attack on buggy software, and the defence has been known for 45 years: Saltzer & Schroeder's Principle of Least Privilege. This means a highly modularised system with almost everything at user level, including device drivers.

While most security-/safety-critical systems are built that way, it's hard to get the model to perform. Capturing interrupts in the supervisor and then re-injecting them as a signal is a high-overhead solution, one that can be avoided entirely by delivering the interrupt directly to user-mode code.


On 6 Jun 2021, at 04:50, Nick Kossifidis <mick@...> wrote:

On 2021-06-05 05:29, Gernot wrote:

Hmm, I always thought RISC-V was trying to be a leader in security, not a follower
If we treat the N extension as a security-related mechanism, the threat model it tries to address is not clear. Anything that can be done with the N extension by delegating traps to U-mode apps through medeleg/sedeleg can also be done in software (the same software that needs to set medeleg/sedeleg, by the way, so we still rely on it). I don't see the difference, in terms of security, from e.g. delivering a signal to the app and calling its signal handler (I understand it from a performance point of view for some use cases). As for delegating external hw interrupts to U-mode through medeleg/sedeleg, that would only work if we don't have any memory protection mechanism in place (since M-mode/S-mode will be bypassed, it won't be possible to switch PMP rules / page tables when we get an interrupt or do uret), not to mention that any U-mode app can set utvec and take over the (global) interrupt handler. Userspace I/O (UIO) on Linux, for example, is a low-overhead software mechanism, already used e.g. by DPDK and various user-space drivers, which makes much more sense than this.

So unless I'm missing something, I don't see any security benefits from the current N extension, or how it gives RISC-V a security advantage. Intel's approach with user interrupts (an overly complicated mechanism IMHO) doesn't bypass memory protections, nor does it allow an app to take over. An interrupt is delivered directly to a user app only if an MSR managed by the OS has the expected value (if not, the interrupt stays pending until the OS switches to that app). We can discuss a similar mechanism on RISC-V; in fact we've talked about this on the TEE TG some time ago, before AIA started, and although we tried to find ways of using the N extension, we also reached the conclusion that in its current state we can't do much with it. Also, as Andrew said, we don't gain much hw-wise either: one can just implement the S-mode CSRs that are already ratified and get the same functionality. Let's drop it for now, define a threat model (one that may, for example, also include delivery of interrupts to enclaves, something Intel doesn't support), and come back to it.

