Re: Are pages allowed to cross PMA regions?


Andy Glew (Gmail) <andyglew@...>
 

I cannot say what the RISC-V rule is, but I can provide example use cases for similar issues from other architectures.

(1) Legacy MMIO map

(2) non-legacy MMIO maps with huge, ever-larger pages

(3) device vendors that wish to pack all of their device memory into a compact region

(4)  security issues

===

(1) For example, x86 has an extremely fragmented legacy MMIO map below 1 MB: some regions are 4K granular, some 16K, some 64K...; but OS vendors wanted to use a single large page, whether originally 4M/2M or eventually 1G etc., to map it, because big mappings reduced TLB pressure, and in particular because they might also want to access DRAM or ROM in that area efficiently. To deal with this, Intel x86 has the ability to "splinter" large TLB entries (2M/4M/1G/...) into smaller 4K entries, and has the ability to indicate that subregions of a large TLB entry are not present. E.g. one might have a 4M TLB entry marked such that only [1M,4M) is valid, and accesses in [0,1M) must look up splintered entries in the 4K TLBs.

This was done because many, if not most or all, Intel x86 implementations cache memory types (the things you get from PMAs) in the TLB.

 BTW more and more I wish that I had not decided to store memory types in the TLB,  since there was plenty of time to do an MTRR lookup on a cache miss.

I'm not saying that RISC-V has to do this; I'm just describing a use case.
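To make the splintering idea above concrete, here is a minimal C sketch. It is purely illustrative: the structure fields, the 1 MB subregion granularity, and the lookup_large helper are my own assumptions, not a real x86 or RISC-V hardware structure.

/* Hypothetical sketch of a "splintered" large-page TLB lookup: a 4M entry
 * carries a valid mask over 1M subregions; accesses to an invalid subregion
 * fall back to the splintered 4K entries. */
#include <stdbool.h>
#include <stdint.h>

#define SUBREGION_SHIFT 20          /* 1 MB subregions within a 4 MB entry */

struct large_tlb_entry {
    uint64_t va_base;               /* virtual base of the 4 MB mapping */
    uint64_t pa_base;               /* physical base of the 4 MB mapping */
    uint8_t  subregion_valid;       /* bit i set => [i MB, (i+1) MB) is valid */
};

/* Returns true and fills *pa if the large entry covers this access;
 * otherwise the hardware would consult the 4K TLB instead. */
static bool lookup_large(const struct large_tlb_entry *e,
                         uint64_t va, uint64_t *pa)
{
    uint64_t off = va - e->va_base;
    if (off >= (4ull << 20))
        return false;                        /* outside this 4 MB window */
    unsigned sub = (unsigned)(off >> SUBREGION_SHIFT);
    if (!(e->subregion_valid & (1u << sub)))
        return false;                        /* splintered: use the 4K TLB */
    *pa = e->pa_base + off;
    return true;
}

In the example above, the 4M entry would have subregion_valid = 0b1110, so that only [1M,4M) hits the large entry.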

(1'): If you allow such fragmentation of memory attributes, an implementation may choose to separate the TLBs used for translation from a protection lookaside buffer used for protection and memory attributes - call that a PLB, or perhaps an APLB, an attribute and protection lookaside buffer. TLB entries are quite big, since they require both physical and virtual addresses, whereas one may get away with only a few attribute bits per granule, e.g. a 4K granule, with many such granules sharing the same APLB entry. The implementation can hide the APLB, behaving as if there is nothing except TLBs that the OS needs to manage.
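For illustration only, a hypothetical APLB entry might look like the sketch below; the 2 MB entry coverage, the attribute encoding, and the aplb_lookup helper are my assumptions, chosen just to show why per-granule attribute bits are much cheaper than full TLB entries.

/* Hypothetical APLB entry: tagged by physical address only, holding a few
 * attribute bits per 4K granule, so one entry covers many granules. */
#include <stdint.h>

enum pma_attr { ATTR_CACHEABLE, ATTR_UNCACHEABLE, ATTR_WRITE_COMBINING, ATTR_VACANT };

#define APLB_REGION_BITS 21                              /* one entry covers 2 MB */
#define APLB_GRANULES    (1u << (APLB_REGION_BITS - 12)) /* 512 x 4K granules */

struct aplb_entry {
    uint64_t pa_base;                /* 2 MB aligned physical base */
    uint8_t  attr[APLB_GRANULES];    /* a few attribute bits per 4K granule */
};

/* Caller has already matched pa against [pa_base, pa_base + 2 MB). */
static enum pma_attr aplb_lookup(const struct aplb_entry *e, uint64_t pa)
{
    return (enum pma_attr)e->attr[(pa - e->pa_base) >> 12];
}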


(2) Although the RISC-V people may deprecate legacy memory map issues, the same issue arises even when the map is not legacy...

(3) Another use case is less legacy related: I/O device vendors sometimes want to constrain all of the physical memory addresses related to their devices to a single naturally aligned power-of-2 region, but I/O device vendors often have multiple different memory types for a single device. E.g. a GPU might want to have 1 GB or 16 GB of frame buffer memory, mapped something like write combining, and a far smaller amount of active MMIO memory. E.g. given a base address B which is a multiple of a gigabyte, the I/O device vendor might want [B,B+1G-16K) mapped write combining, optimized for the frame buffer, and [B+1G-16K,B+1G) mapped non-idempotent uncacheable.
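Worked through with the example's (hypothetical) numbers, the carve-up is just:

/* A 1 GB naturally aligned window at base B: all but the last 16 KB mapped
 * WC for the frame buffer, the last 16 KB mapped UC for MMIO registers. */
#include <stdint.h>

#define GIB      (1ull << 30)
#define MMIO_SZ  (16ull << 10)

struct dev_window {
    uint64_t wc_base, wc_end;        /* [B, B+1G-16K): write combining */
    uint64_t uc_base, uc_end;        /* [B+1G-16K, B+1G): non-idempotent UC */
};

static struct dev_window carve(uint64_t B)   /* B must be 1 GB aligned */
{
    struct dev_window w;
    w.wc_base = B;
    w.wc_end  = B + GIB - MMIO_SZ;
    w.uc_base = B + GIB - MMIO_SZ;
    w.uc_end  = B + GIB;
    return w;
}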

There is much less need for this nowadays, since PCI now allows I/O devices to declare a list of their memory requirements, e.g. 1G WC and 16K UC in the example above. PCI then allows the physical addresses associated with the I/O device to be changed, so that the WC memory from this device and others is nicely aligned, as is the MMIO UC. However, not everybody likes the idea of physical addresses being able to change. Moreover, bus bridges between different physical address widths may prefer not to waste physical address ranges.

(4) If you wish to legislate that virtual memory translations cannot cross PMA boundaries, the question is how you enforce it.

If the operating system or hypervisor that controls the virtual memory translations is the most privileged software in the system, you can probably do this, risking mainly accidental bugs.

However, quite a few secure systems have privilege domains that are more privileged than the operating system or hypervisor, but which do not want to manage the virtual memory translations; rather, they want to allow the operating system or hypervisor to control the page tables as much as possible, for performance reasons. But if there is then a correctness problem because the operating system or hypervisor has allowed a large page translation to cross PMA boundaries, it must at least be trapped, and possibly emulated if the enforcement is to be transparent.
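The check such a more-privileged layer would need is simple to state. A minimal sketch follows, assuming PMA regions are available to software as a table of base/size pairs; that table and the mapping_crosses_pma helper are my own illustration, not anything RISC-V defines.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct pma_region {
    uint64_t base;                   /* naturally aligned region base */
    uint64_t size;                   /* region size in bytes */
};

/* True if a leaf mapping of page_size bytes at physical address pa is not
 * wholly contained in a single PMA region (and so should be trapped). */
static bool mapping_crosses_pma(uint64_t pa, uint64_t page_size,
                                const struct pma_region *regions, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        const struct pma_region *r = &regions[i];
        if (pa >= r->base && pa < r->base + r->size)
            return pa + page_size > r->base + r->size;
    }
    return true;                     /* start falls in vacant space: also a problem */
}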


------ Original Message ------
From "andres.amaya via lists.riscv.org" <andres.amaya=codasip.com@...>
Date 8/12/2022 07:10:40
Subject [RISC-V] [tech-privileged] Are pages allowed to cross PMA regions?

Hello,

There is something unclear to me after reading the PMA section of the Privileged ISA manual (i.e. Section 3.6). Can a virtual page be mapped to addresses that cross PMA regions? For example, is it acceptable to map a 1GB page such that half its physical addresses have the (e.g.) cacheable attribute but the other half of physical addresses are uncacheable? You could think about this with every attribute: vacant, idempotent, etc.

This sounds odd, but the ISA does not explicitly allow or forbid it. Is it something that must be supported? If so, are there example use-cases?

Thanks for the help!
