
Re: PCIe requirements: Memory vs I/O

Josh Scheid
 

On Tue, Jun 15, 2021 at 9:44 AM Greg Favor <gfavor@...> wrote:
This sentence is fraught with use of a few ill-defined terms, e.g. "regular main memory" and "device scratchpad RAMs" - which for now maybe isn't worth trying to "fix".  I would suggest making a PR on the Priv spec with a proposed rewording.  For example (with a goal of not totally replacing the sentence):

Memory regions that do not fit into regular main memory, for example, device-related RAMs, may be categorized as main memory regions or I/O regions based on the desired attributes.


I can and will do that.  The point of raising this here is to explicitly confirm that the platform intent is to enable Memory PMA within, say, PCIe-managed regions.  With that confirmation now effectively clear, we can push on the priv spec.

-Josh


[PATCH 1/1] RAS features for OS-A platform server extension

Kumar Sankaran
 

Signed-off-by: Kumar Sankaran <ksankaran@...>
---
riscv-platform-spec.adoc | 42 ++++++++++++++++++++++++++--------------
1 file changed, 27 insertions(+), 15 deletions(-)

diff --git a/riscv-platform-spec.adoc b/riscv-platform-spec.adoc
index 4c356b8..d779452 100644
--- a/riscv-platform-spec.adoc
+++ b/riscv-platform-spec.adoc
@@ -19,18 +19,6 @@
// table of contents
toc::[]

-// document copyright and licensing information
-include::licensing.adoc[]
-
-// changelog for the document
-include::changelog.adoc[]
-
-// Introduction: describe the intent and purpose of the document
-include::introduction.adoc[]
-
-// Profiles: (NB: content from very first version)
-include::profiles.adoc[]
-
== Introduction
The platform specification defines a set of platforms that specify requirements
for interoperability between software and hardware. The platform policy
@@ -68,11 +56,13 @@ The M platform has the following extensions:
|SBI | Supervisor Binary Interface
|UEFI | Unified Extensible Firmware Interface
|ACPI | Advanced Configuration and Power Interface
+|APEI | ACPI Platform Error Interfaces
|SMBIOS | System Management BIOS
|DTS | Devicetree source file
|DTB | Devicetree binary
|RVA22 | RISC-V Application 2022
|EE | Execution Environment
+|OSPM | Operating System Power Management
|RV32GC | RISC-V 32-bit general purpose ISA described as RV32IMAFDC.
|RV64GC | RISC-V 64-bit general purpose ISA described as RV64IMAFDC.
|===
@@ -87,6 +77,7 @@ The M platform has the following extensions:
|link:[RVA22 Specification]
| TBD
|link:https://arm-software.github.io/ebbr/[EBBR Specification]
| v2.0.0-pre1
|link:https://uefi.org/sites/default/files/resources/ACPI_Spec_6_4_Jan22.pdf[ACPI
Specification] | v6.4
+|link:https://uefi.org/specs/ACPI/6.4/18_ACPI_Platform_Error_Interfaces/ACPI_PLatform_Error_Interfaces.html[APEI
Specification] | v6.4
|link:https://www.dmtf.org/sites/default/files/standards/documents/DSP0134_3.4.0.pdf[SMBIOS
Specification] | v3.4.0
|link:[Platform Policy]
| TBD
|===
@@ -504,6 +495,30 @@ delegate the virtual supervisor timer interrupt
to 'VS' mode.
* IOMMU

==== RAS
+All RAS features mentioned below are required for the OS-A platform server
+extension.
+
+* Main memory must be protected with SECDED-ECC +
+* All cache structures must be protected +
+** single-bit errors must be detected and corrected +
+** multi-bit errors can be detected and reported +
+* There must be memory-mapped RAS registers associated with these protected
+structures to log detected errors with information about the type and location
+of the error +
+* The platform must support the APEI specification to convey all error
+information to OSPM +
+* Correctable errors must be reported by hardware and either be corrected or
+recovered by hardware, transparent to system operation and to software +
+* Hardware must provide status of these correctable errors via RAS registers +
+* Uncorrectable errors must be reported by the hardware via RAS error
+registers for system software to take the needed corrective action +
+* Attempted use of corrupted (uncorrectable) data must result in a precise
+exception on that instruction with a distinguishing custom exception cause
+code +
+* Errors logged in RAS registers must be able to generate an interrupt request
+to the system interrupt controller that may be directed to either M-mode or
+S/HS-mode for firmware-first versus OS-first error reporting +
+* PCIe AER capability is required +

// M Platform
== M Platform
@@ -593,6 +608,3 @@ also implement PMP support.
When PMP is supported it is recommended to include at least 4 regions, although
if possible more should be supported to allow more flexibility. Hardware
implementations should aim for supporting at least 16 PMP regions.
-
-// acknowledge all of the contributors
-include::contributors.adoc[]
--
2.21.0
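To make the RAS register requirement above concrete, here is a minimal sketch of how system software might consume one memory-mapped error record. The struct layout, bit positions, and write-to-clear behavior are invented for illustration; an actual format would come from a RAS register specification, not from this patch.

#include <stdint.h>

/* Hypothetical layout of one memory-mapped RAS error record. */
typedef volatile struct {
    uint64_t status;   /* bit 0: record valid; bit 1: uncorrectable; bits 15:8: error type */
    uint64_t addr;     /* physical address of the error, if known */
    uint64_t info;     /* structure-specific syndrome/location data */
} ras_error_record_t;

#define RAS_STATUS_VALID (1ULL << 0)
#define RAS_STATUS_UE    (1ULL << 1)

/* Consume one record, as the handler for the RAS interrupt might. */
static void handle_ras_record(ras_error_record_t *rec)
{
    uint64_t status = rec->status;
    if (!(status & RAS_STATUS_VALID))
        return;
    if (status & RAS_STATUS_UE) {
        /* Uncorrectable: take corrective action, e.g. offline the
         * page containing rec->addr or kill the consuming task. */
    } else {
        /* Correctable: hardware already fixed the data; log only. */
    }
    rec->status = RAS_STATUS_VALID;  /* write-to-clear, assumed semantics */
}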


Re: [PATCH] Add direct memory access synchronize extension

Paul Walmsley
 

It would be ideal if the CMO group could focus on fast-tracking the Cache Block Maintenance Operations for Phase 1 and get opcodes assigned, and this part of the specification frozen.  The maintenance operations are mandatory for non-CPU-cache-coherent peripheral DMA to work correctly; that's why these should be completed first.   As far as I can tell, prefetch and zeroing are strictly optimizations, so it would be best if these could be delayed to a Phase 2 -- which could be developed in parallel while Phase 1 goes through the opcode committee, etc. 


Then the SBI sync extension should be superfluous. It would be ideal if we could avoid having multiple mechanisms for the same operations.


For this to work, though, the CMO group needs to move on the block maintenance instructions quickly. 
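To make the dependency concrete, a minimal sketch of the driver-side sequence that the maintenance operations enable for a non-coherent device; cbo_clean() and cbo_inval() are hypothetical wrappers for the not-yet-frozen cache-block maintenance instructions:

#include <stddef.h>

extern void cbo_clean(void *addr, size_t len);  /* write dirty lines back to memory */
extern void cbo_inval(void *addr, size_t len);  /* discard (stale) cached lines */

/* Before the device reads the buffer (e.g. a TX descriptor ring). */
void dma_to_device(void *buf, size_t len)
{
    cbo_clean(buf, len);   /* make the CPU's writes visible to the device */
    /* ... ring the doorbell / start the DMA ... */
}

/* After the device has written the buffer (e.g. an RX completion). */
void dma_from_device(void *buf, size_t len)
{
    /* ... wait for DMA completion ... */
    cbo_inval(buf, len);   /* drop stale cached copies before the CPU reads */
}

Prefetch and zeroing have no role in this sequence, which is the sense in which they are strictly optimizations.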



- Paul



On 6/15/21 4:33 PM, David Kruckemyer wrote:

Hi all,

My apologies as I just got wind of this discussion (I was unable to attend the last few CMO TG meetings due to travel). I think we should sync up on the CMO TG and SBI/platform efforts since there seems to be a bit of a disconnect.

Regarding the CMO TG goals, we have intended to get a basic subset of operations into the profile/platform specifications for this year. The "phase 1" status is listed here:


Though honestly, a bit of this is out of date already, so expect some clarification in the coming days (just need to do some terminology cleanup).

Please do not hesitate to reach out to me with any questions (or to post questions to the CMO TG mailing list: tech-cmo@... )

Cheers,
David


On Mon, Jun 7, 2021 at 2:35 AM Nick Kossifidis <mick@...> wrote:
On 2021-06-07 07:03, Anup Patel wrote:
>
> Let's have a simple SBI DMA sync extension in SBI v0.4 spec.
>
> The shared code pages between M-mode and S-mode will have its own
> challenges, and we will have to define more stuff in the SBI spec to support
> this (see above).
>

Totally agree with you. I just thought it'd be a good opportunity to
bring this up so that we can discuss it at some point; let's have
something that works and optimize it later on.

> It seems the CMO extension might freeze sooner than we think (others can
> comment on this). If the CMO extension is frozen by year end then we can
> trap-n-emulate CMO instructions instead of the SBI DMA sync extension. If
> it does not freeze by year end then we will have to go ahead with the
> SBI DMA sync extension as a stop-gap solution.
>

The CMO TG has a meeting today; I'll try to join and ask for updates
on this.







Re: PCIe requirements: Memory vs I/O

Greg Favor
 

This sentence is fraught with use of a few ill-defined terms, e.g. "regular main memory" and "device scratchpad RAMs" - which for now maybe isn't worth trying to "fix".  I would suggest making a PR on the Priv spec with a proposed rewording.  For example (with a goal of not totally replacing the sentence):

Memory regions that do not fit into regular main memory, for example, device-related RAMs, may be categorized as main memory regions or I/O regions based on the desired attributes.

Greg






On Tue, Jun 15, 2021 at 9:11 AM Josh Scheid <jscheid@...> wrote:
On Mon, Jun 14, 2021 at 7:10 PM Greg Favor <gfavor@...> wrote:
On Mon, Jun 14, 2021 at 3:56 PM Josh Scheid <jscheid@...> wrote:
The proposal allows for prefetchable BARs to be programmed as I/O or Memory.  This seems to conflict with the priv spec, which states:

"""
Memory regions that do not fit into regular main memory, for example, device scratchpad RAMs,
are categorized as I/O regions.
"""

This is for outbound traffic and, if one sets aside the word "I/O" in the proposed text saying "two I/O regions for mapping ..." (e.g. replacing "I/O" with "address"), then is there a conflict? 

The prefetchable BAR can be "mapped" by either a PMA "main memory" region or by a PMA "I/O" region.


The conflict is that the statement in the priv spec suggests that things like "device scratchpad RAMs", such as those that might be in PCIe land, "are" I/O, in that they are not Memory.  Moving that priv spec statement to be illustrative and non-normative may be a solution.  Perhaps it's not really meant to be a restriction, but then a more obvious I/O example than a "device scratchpad RAM" would be better, as well as making it non-normative.

-Josh




Re: [RFC PATCH 1/1] server extension: PCIe requirements

Abner Chang
 

Hi Mayuresh,
As I mentioned in the platform meeting, we missed the requirement for firmware. I added it in the section below; please rephrase it if you want.

Regards,
Abner

Mayuresh Chitale <mchitale@...> wrote on Thursday, June 10, 2021 at 2:27 AM:
This patch adds requirements for PCIe support for the server extension

Signed-off-by: Mayuresh Chitale <mchitale@...>
---
 riscv-platform-spec.adoc | 133 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 132 insertions(+), 1 deletion(-)

diff --git a/riscv-platform-spec.adoc b/riscv-platform-spec.adoc
index 4418788..9de487e 100644
--- a/riscv-platform-spec.adoc
+++ b/riscv-platform-spec.adoc
@@ -363,7 +363,138 @@ https://lists.riscv.org/g/tech-privileged/message/404[Sstc] extension.
 ** Platforms are required to delegate the supervisor timer interrupt to 'S'
 mode. If the 'H' extension is implemented then the platforms are required to
 delegate the virtual supervisor timer interrupt to 'VS' mode.
-* PCI-E
+
+===== PCIe
+Platforms are required to support PCIe
+footnote:[https://pcisig.com/specifications]. Following are the requirements:
+
+====== PCIe Config Space
+* Platforms shall support access to the PCIe config space via ECAM as described
+in the PCI Express Base specification.
+* The entire config space for a single PCIe domain should be accessible via a
+single ECAM I/O region.
+* Platform firmware should implement the MCFG table to allow the operating
+systems to discover the supported PCIe domains and map the ECAM I/O region for
+each domain.
+* ECAM I/O regions shall be configured as channel 0 I/O regions.
+
+====== PCIe Memory Space
+* PCIe Outbound region +
+Platforms are required to provide at least two I/O regions for mapping the
+memory requested by PCIe endpoints and PCIe bridges/switches through BARs.
+The first I/O region is required to be located below 4G physical address to
+map the memory requested by non-prefetchable BARs. This region shall be
+configured as channel 0 I/O region. The second I/O region is required to be
+located above 4G physical address to map the memory requested by prefetchable
+BARs. This region may be configured as an I/O region or as a memory region.
+
+* PCIe Inbound region +
+For security reasons, platforms are required to provide a mechanism to
+restrict the inbound accesses over PCIe to certain specific regions in
+the address space such as the DRAM.
+
+====== PCIe Interrupts
+* Platforms shall support both INTx and MSI/MSI-X interrupts.
+* Integration with AIA +
+TBD
+
+====== PCIe I/O coherency
+Following are the requirements:
+
+* Platforms shall provide a mechanism to control the `NoSnoop` bit for any
+outbound TLP.
+* If the host bridge/root port receives a TLP which does not have the
+`NoSnoop` bit set then hardware shall generate a snoop request.
+* If the host bridge/root port receives a TLP which has `NoSnoop` set then no
+hardware coherency is required. Software coherency may be required via CMOs.
+
+====== PCIe Topology
+Platforms are required to implement at least one of the following topologies and
+the components required in that topology.
+
+[ditaa]
+....
+
+            +----------+                             +----------+
+            |   CPU    |                             |   CPU    |
+            |          |                             |          |
+            +-----|----+                             +-----|----+
+                  |                                        |
+                  |                                        |
+    +-------------|------------+             +-------------|------------+
+    |        ROOT | COMPLEX    |             |        ROOT | COMPLEX    |
+    |                          |             |                          |
+    |      +------|-------+    |             |      +------|-------+    |
+    |      |  Host Bridge |    |             |      |  Host Bridge |    |
+    |      +------|-------+    |             |      +------|-------+    |
+    |             |            |             |             |            |
+    |             | BUS 0      |             |             | BUS 0      |
+    |     |-------|------|     |             |       +-----|-------+    |
+    |     |              |     |             |       | ROOT  PORT  |    |
+    |     |              |     |             |       +-----|-------+    |
+    | +---|---+      +---|---+ |             |             |            |
+    | | RCEIP |      | RCEC  | |             |             | PCIe Link  |
+    | +-------+      +-------+ |             |             |            |
+    |                          |             +-------------|------------+
+    +--------------------------+                           |
+                                                           |  BUS 1
+    RCEIP - Root complex integrated endpoint
+    RCEC - Root complex event collector
+....
+
+* Host Bridge +
+Following are the requirements for host bridges:
+
+** Any read or write access by a hart to an ECAM I/O region shall be converted
+by the host bridge into the corresponding PCIe config read or config write
+request.
+** Any read or write access by a hart to a PCIe outbound region shall be
+forwarded by the host bridge to a BAR or prefetch/non-prefetch memory window,
+if the address falls within the region claimed by the BAR or prefetch/
+non-prefetch memory window. Otherwise the host bridge shall return an error.
+
+** The host bridge shall return all 1s in the following cases:
+*** Config reads to non-existent functions and devices on the root bus.
+*** Config reads that receive an Unsupported Request response from functions
+and devices on the root bus.
+* Root ports +
+Following are the requirements for root ports.
+** Root ports shall appear to software as a PCI-PCI bridge.
+** Root ports shall implement all registers of the Type 1 header.
+** Root ports shall implement all capabilities specified in the PCI Express
+Base specification for a root port.
+** Root ports shall forward type 1 configuration access when the bus number in
+the TLP is greater than the root port's secondary bus number and less than or
+equal to the root port's subordinate bus number.
+** Root ports shall convert a type 1 configuration access to a type 0
+configuration access when the bus number in the TLP is equal to the root
+port's secondary bus number.
+** Root ports shall respond to any type 0 configuration access they receive.
+** Root ports shall forward memory accesses targeting their prefetch/non-prefetch
+memory windows to downstream components. If the address of the transaction does
+not fall within the regions claimed by the prefetch/non-prefetch memory windows
+then the root port shall generate an Unsupported Request.
+** The root port's requester ID or completer ID shall be formed using the BDF
+of the root port.
+** Root ports shall support CRS software visibility.
+** Root ports shall return all 1s in the following cases:
+*** Config reads to non-existent functions and devices on the secondary bus.
+*** Config reads that receive an Unsupported Request response from downstream
+components.
+*** Config reads when the root port's link is down.
+** The root port shall implement the AER capability.
+
+* RCEIP +
+All the requirements for RCEIP in the PCI Express Base specification shall be implemented.
+In addition the following requirements shall be met:
+** If RCEIP is implemented then RCEC shall be implemented as well. All
+requirements for RCEC specified in the PCI Express Base specification shall be
+implemented. The RCEC is required to terminate the AER and PME messages from RCEIP.
+** If both the topologies mentioned above are supported then RCEIP and RCEC
+shall be implemented in a separate PCIe domain and shall be addressable via a
+separate ECAM I/O region.
+
+====== PCIe peer-to-peer transactions +
+TBD

====== PCIe Device Firmware Requirement
A PCI expansion ROM code type 3 (UEFI) image must be provided by a PCIe device
for the OS-A server extension platform if that PCIe device is utilized during
the UEFI firmware boot process. The image stored in the PCI expansion ROM is a
UEFI driver that must be compliant with
https://uefi.org/specifications[UEFI specification 2.9], section 14.4.2, PCI
Option ROMs.


 ==== Secure Boot
 * TEE
--
2.17.1
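For reference, the ECAM mapping that the config-space requirements in this patch rely on is fixed by the PCI Express Base specification; a sketch of the address computation (function names are illustrative):

#include <stdint.h>

/* ECAM: bus[7:0] << 20 | device[4:0] << 15 | function[2:0] << 12 | reg[11:0],
 * offset from the per-domain base that firmware advertises via MCFG. */
static inline volatile uint32_t *ecam_reg(uintptr_t ecam_base, uint8_t bus,
                                          uint8_t dev, uint8_t fn, uint16_t off)
{
    return (volatile uint32_t *)(ecam_base +
                                 (((uintptr_t)bus << 20) |
                                  ((uintptr_t)(dev & 0x1f) << 15) |
                                  ((uintptr_t)(fn & 0x7) << 12) |
                                  (off & 0xffc)));
}

/* Example: vendor/device ID of bus 0, device 0, function 0. */
uint32_t read_id(uintptr_t ecam_base)
{
    return *ecam_reg(ecam_base, 0, 0, 0, 0x0);
}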







Re: [RFC PATCH 1/1] server extension: PCIe requirements

Greg Favor
 

On Mon, Jun 14, 2021 at 5:23 PM Josh Scheid <jscheid@...> wrote:

I understand that IOPMP is not an IOMMU, but to the extent that it is a general "bus master memory protection" widget, it can be used by M-mode to ensure simple things, such as that S-mode-SW-controlled PCIe initiators cannot access address regions not accessible by S-mode.

Yes, most likely IOPMP can be used to do this.
 
For example, the platform spec could avoid mentioning the IOPMP proposal, but state that the platform is required to have a mechanism to allow M-mode SW to control (including prevent) PCIe initiator access to regions of system address space.  While remaining open to custom implementations, it's clear on the functional intent.

That would be appropriate.  And, for example, one much simpler implementation approach (than the IOPMP proposals) would be to replicate a CPU PMP block as an "IO PMP" in front of each I/O device or group of devices.

That would allow M-mode software to only have to deal with one PMP software programming model across all masters in the system.
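To illustrate the "replicate a CPU PMP block" idea, a sketch of the NAPOT address match such an IO PMP would inherit from the priv spec's PMP encoding (the iopmp_entry_t shape and permission check are assumptions for the example):

#include <stdbool.h>
#include <stdint.h>

/* NAPOT per the priv spec: k trailing ones in pmpaddr encode a region of
 * 2^(k+3) bytes whose base is (pmpaddr with its low k+1 bits cleared) << 2. */
static bool napot_match(uint64_t pmpaddr, uint64_t addr)
{
    uint64_t mask = pmpaddr ^ (pmpaddr + 1);   /* low k+1 bits set */
    uint64_t base = (pmpaddr & ~mask) << 2;
    uint64_t size = (mask + 1) << 2;           /* 2^(k+3) bytes */
    return addr - base < size;
}

/* Hypothetical IO PMP entry: same pmpaddr/pmpcfg split as a CPU PMP. */
typedef struct {
    uint64_t pmpaddr;
    uint8_t  cfg;      /* bit 0: R, bit 1: W, as in pmpcfg */
} iopmp_entry_t;

bool iopmp_allows(const iopmp_entry_t *e, int n, uint64_t addr, bool is_write)
{
    for (int i = 0; i < n; i++)
        if (napot_match(e[i].pmpaddr, addr))
            return (e[i].cfg >> (is_write ? 1 : 0)) & 1;
    return false;  /* no matching region: deny the DMA access */
}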

Greg




Re: [RFC PATCH 1/1] server extension: PCIe requirements

Josh Scheid
 

On Mon, Jun 14, 2021 at 4:02 PM Greg Favor <gfavor@...> wrote:
On Mon, Jun 14, 2021 at 2:28 PM Josh Scheid <jscheid@...> wrote:
+For security reasons, platforms are required to provide a mechanism to
+restrict the inbound accesses over PCIe to certain specific regions in
+the address space such as the DRAM.

While a standard IOMMU is further off, is the current opinion that the IOPMP is not in a position to be required or suggested as an implementation of the above requirement?  If not, then it's hard to check for compliance.

I'm not sure if an IOPMP could be used for this particular purpose, but more generally IOPMP is being driven by embedded people and isn't consciously thinking about functionality requirements implied by H-style virtualization, or PCIe MSIs, or other PCIe features.  In this regard IOPMP is analogous to PLIC and CLIC - and not generally suitable for OS/A platforms (and presumably is well-suited for M platforms).

I understand that IOPMP is not an IOMMU, but to the extent that it is a general "bus master memory protection" widget, it can be used by M-mode to ensure simple things, such as that S-mode-SW-controlled PCIe initiators cannot access address regions not accessible by S-mode.  There's value in memory protection even without full virtualization support.  I'm questioning how vague the memory protection "requirement" should be to the extent that it ends up being usable and sufficient to provide a defined level of assurance.

For example, the platform spec could avoid mentioning the IOPMP proposal, but state that the platform is required to have a mechanism to allow M-mode SW to control (including prevent) PCIe initiator access to regions of system address space.  While remaining open to custom implementations, it's clear on the functional intent.

-Josh


Re: [RFC PATCH 1/1] server extension: PCIe requirements

Greg Favor
 

On Mon, Jun 14, 2021 at 2:28 PM Josh Scheid <jscheid@...> wrote:
+For security reasons, platforms are required to provide a mechanism to
+restrict the inbound accesses over PCIe to certain specific regions in
+the address space such as the DRAM.

While a standard IOMMU is further off, is the current opinion that the IOPMP is not in a position to be required or suggested as an implementation of the above requirement?  If not, then it's hard to check for compliance.

I'm not sure if an IOPMP could be used for this particular purpose, but more generally IOPMP is being driven by embedded people and isn't consciously thinking about functionality requirements implied by H-style virtualization, or PCIe MSIs, or other PCIe features.  In this regard IOPMP is analogous to PLIC and CLIC - and not generally suitable for OS/A platforms (and presumably is well-suited for M platforms).
 
Is this mechanism expected to be M-mode SW controlled, or is it also expected to be controlled by S-mode (either directly or via SBI)?

IOPMP has continued to evolve significantly, so I only casually/loosely watch what's going on.  But they don't appear to be thinking much yet about this aspect (unless they started in the past couple of weeks).  Although at a hardware level the flexibility is probably there for control by either since IOPMP registers will presumably be memory-mapped.

Greg


PCIe requirements: Memory vs I/O

Josh Scheid
 

The proposal allows for prefetchable BARs to be programmed as I/O or Memory.  This seems to conflict with the priv spec, which states:

"""
Memory regions that do not fit into regular main memory, for example, device scratchpad RAMs,
are categorized as I/O regions.
"""

I agree that it is useful to allow for Memory treatment of some address space in some PCIe devices.  So there should be an action to accommodate that by adjusting the wording in the priv spec.

-Josh
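As a toy model of the split at issue (names and fields invented; the priv spec defines the concepts, not this encoding), note the idempotency attribute: it is why prefetchable BARs invite Memory treatment, since prefetching presumes reads without side effects.

#include <stdbool.h>

typedef enum { PMA_MAIN_MEMORY, PMA_IO } pma_class_t;

typedef struct {
    pma_class_t cls;
    bool cacheable;    /* main memory may be cached */
    bool idempotent;   /* reads are side-effect free, so prefetch is safe */
} pma_attrs_t;

static pma_attrs_t pma_for(pma_class_t cls)
{
    if (cls == PMA_MAIN_MEMORY)
        return (pma_attrs_t){ cls, true, true };
    return (pma_attrs_t){ cls, false, false };  /* I/O: uncached, may have side effects */
}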


Re: [RFC PATCH 1/1] server extension: PCIe requirements

Josh Scheid
 

On Wed, Jun 9, 2021 at 11:27 AM Mayuresh Chitale <mchitale@...> wrote:
This patch adds requirements for PCIe support for the server extension

Signed-off-by: Mayuresh Chitale <mchitale@...>
---
 riscv-platform-spec.adoc | 133 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 132 insertions(+), 1 deletion(-)

diff --git a/riscv-platform-spec.adoc b/riscv-platform-spec.adoc
index 4418788..9de487e 100644
--- a/riscv-platform-spec.adoc
+++ b/riscv-platform-spec.adoc
@@ -363,7 +363,138 @@ https://lists.riscv.org/g/tech-privileged/message/404[Sstc] extension.
 ** Platforms are required to delegate the supervisor timer interrupt to 'S'
 mode. If the 'H' extension is implemented then the platforms are required to
 delegate the virtual supervisor timer interrupt to 'VS' mode.

Is this an M-mode SW requirement or a HW requirement that these interrupts are delegatable (writeable) in HW?

Why require the delegation by M-mode instead of allowing for M-mode to trap and pass down?  Is this just a performance benefit?

-* PCI-E
+
+===== PCIe
+Platforms are required to support PCIe
+footnote:[https://pcisig.com/specifications]. Following are the requirements:

Any particular baseline PCIe version and/or extensions?

+
+====== PCIe Config Space
+* Platforms shall support access to the PCIe config space via ECAM as described
+in the PCI Express Base specification.
+* The entire config space for a single PCIe domain should be accessible via a
+single ECAM I/O region.
+* Platform firmware should implement the MCFG table to allow the operating
+systems to discover the supported PCIe domains and map the ECAM I/O region for
+each domain.
+* ECAM I/O regions shall be configured as channel 0 I/O regions.
+
+====== PCIe Memory Space
+* PCIe Outbound region +
+Platforms are required to provide at least two I/O regions for mapping the
+memory requested by PCIe endpoints and PCIe bridges/switches through BARs.
+The first I/O region is required to be located below 4G physical address to
+map the memory requested by non-prefetchable BARs. This region shall be
+configured as channel 0 I/O region. The second I/O region is required to be
+located above 4G physical address to map the memory requested by prefetchable
+BARs.

Is there any guidance needed about the amount of total space available (below 4G), or whether space needs to be allocated for each domain?

I think that this is only necessary in the platform because of the current lack of an IOMMU requirement or standard. With an IOMMU, that component can be used to locate 32-bit BARs anywhere in the system address space. So at least keep in mind that the requirement can be dropped at that time.
 
This region may be configured as an I/O region or as a memory region.

Is an SBI call needed to support S-mode configuration?  What is the default expected to be if there is no SBI call or no call is made?

IIRC, some older versions of some HCI standards (USB, SATA?) only had device support for 32-bit addresses.  I mention this to check if the requirement is really just that non-prefetchable BARs need to be supported <4GB, or that it's also needed for other 32-bit BAR support.  Thus it may need to support prefetchable BARs located <4GB.
 
+
+* PCIe Inbound region +
+For security reasons, platforms are required to provide a mechanism to
+restrict the inbound accesses over PCIe to certain specific regions in
+the address space such as the DRAM.

While a standard IOMMU is further off, is the current opinion that the IOPMP is not in a position to be required or suggested as an implementation of the above requirement?  If not, then it's hard to check for compliance.

Is this mechanism expected to be M-mode SW controlled, or is it also expected to be controlled by S-mode (either directly or via SBI)?

+
+====== PCIe Interrupts
+* Platforms shall support both INTx and MSI/MSI-X interrupts.
+* Integration with AIA +
+TBD

While TBD, one question interesting to me is whether or not it matters if the PCI RC implements its own INTx-to-MSI bridge, or if an AIA APLIC is required for that.

+
+====== PCIe I/O coherency
+Following are the requirements:
+
+* Platforms shall provide a mechanism to control the `NoSnoop` bit for any
+outbound TLP.

Is it implicit here if this mechanism is provided to M-mode SW only, or also to S-mode?

+* If the host bridge/root port receives a TLP which does not have the
+`NoSnoop` bit set then hardware shall generate a snoop request.
+* If the host bridge/root port receives a TLP which has `NoSnoop` set then no
+hardware coherency is required. Software coherency may be required via CMOs.

I read this as primarily stating that inbound NoSnoop controls the "coherent" access attribute.  But why this instead of focusing on control of the "cacheable" vs "non-cacheable" attribute?  With the latter, it seems more apparent how harts would then manage coherency: by controlling accesses to use the same "cacheable" attribute.

+
+====== PCIe Topology
+Platforms are required to implement at least one of the following topologies and
+the components required in that topology.
+
+[ditaa]
+....
+
+            +----------+                             +----------+
+            |   CPU    |                             |   CPU    |
+            |          |                             |          |
+            +-----|----+                             +-----|----+
+                  |                                        |
+                  |                                        |
+    +-------------|------------+             +-------------|------------+
+    |        ROOT | COMPLEX    |             |        ROOT | COMPLEX    |
+    |                          |             |                          |
+    |      +------|-------+    |             |      +------|-------+    |
+    |      |  Host Bridge |    |             |      |  Host Bridge |    |
+    |      +------|-------+    |             |      +------|-------+    |
+    |             |            |             |             |            |
+    |             | BUS 0      |             |             | BUS 0      |
+    |     |-------|------|     |             |       +-----|-------+    |
+    |     |              |     |             |       | ROOT  PORT  |    |
+    |     |              |     |             |       +-----|-------+    |
+    | +---|---+      +---|---+ |             |             |            |
+    | | RCEIP |      | RCEC  | |             |             | PCIe Link  |
+    | +-------+      +-------+ |             |             |            |
+    |                          |             +-------------|------------+
+    +--------------------------+                           |
+                                                           |  BUS 1
+    RCEIP - Root complex integrated endpoint
+    RCEC - Root complex event collector
+....


Have we considered the option of requiring EPs to be behind virtual integrated RPs, instead of being RCiEPs?  This seems to bypass some of the unique limitations of RCiEPs, including the RCEC.

Do we need to ban or allow for impl-spec address mapping capabilities between PCI and system addresses?

Do we need say anything about peer-to-peer support, or requirements if a system enables it?  Including ACS?

Should the system mtimer counter also be the source for PCIe PTP?

-Josh
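The root-port config-routing rules in the patch under review reduce to a small decision function; a sketch with invented names:

#include <stdint.h>

typedef enum { RP_CONVERT_TYPE0, RP_FORWARD_TYPE1, RP_UNSUPPORTED_REQ } rp_route_t;

/* Decision for a type 1 config TLP arriving at a root port, restating the
 * secondary/subordinate bus-number rules from the patch. */
rp_route_t route_config(uint8_t tlp_bus, uint8_t secondary, uint8_t subordinate)
{
    if (tlp_bus == secondary)
        return RP_CONVERT_TYPE0;      /* target sits directly on this link */
    if (tlp_bus > secondary && tlp_bus <= subordinate)
        return RP_FORWARD_TYPE1;      /* target is further downstream */
    return RP_UNSUPPORTED_REQ;        /* outside this port's bus range */
}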


Re: Non-coherent I/O

Josh Scheid
 

On Mon, Jun 14, 2021 at 1:04 PM Greg Favor <gfavor@...> wrote:
I have already sent questions to Andrew to get the official view as to the intent of this aspect of the Priv spec and what is the proper way or perspective with which to be reading the ISA specs.  That then may result in the need for clarifying text to be added to the spec.  And once it is clear as to the scope and bounds of the ISA specs and what they require and allow, then it is left to profile and platform specs to specify tighter requirements.


Re I/O-related coherence and ordering, Daniel Lustig will readily acknowledge (and I'm quoting him) that "the I/O ordering model isn't currently defined as precisely as RVWMO".

And Krste will certainly say (i.e. has said) that RISC-V supports systems with coherent and non-coherent masters, and needs to standardize arch support for software management in such platforms asap.


While potentially a fine goal, it seems that making this happen in a manner that allows platform-compliant SW to be portable will take more than the Zicmobase work, at least in terms of "glue" specification to tie it all together. It's also possible that the goal of generally enabling non-coherent masters in RISC-V is outside the scope of the OS-A Platform work: in the short term things can be done to enable it in implementation-specific HW+SW systems, but allowing for implementation-portable SW (across platform-compliant implementations) will take longer.

-Josh




Re: Non-coherent I/O

mark
 

If this is an issue with the priv spec, please add it to the priv spec GitHub issues.

thanks
Mark

On Mon, Jun 14, 2021 at 10:44 AM Josh Scheid <jscheid@...> wrote:
Priv:
"""
Accesses by one hart to main memory regions are observable not only by other harts but also
by other devices with the capability to initiate requests in the main memory system (e.g., DMA
engines). Coherent main memory regions always have either the RVWMO or RVTSO memory
model. Incoherent main memory regions have an implementation-defined memory model.
"""

The above is the core normative piece discussing coherent initiators.

It's confusing because the "observable" statement in the first sentence is indirectly overridden by the consideration of incoherent main memory.

It may be enough to add wording in the platform spec that, for platforms that behave differently for NoSnoop=1 inbound TLPs (vs ignoring them and treating them as NoSnoop=0), the region of addresses accessed in that manner should be communicated as having an "incoherent" PMA generally in the system.

But it also implies that there's no standard memory model for incoherent memory.  Is the use of RVWMO+Zicmobase sufficient, or is more needed to describe a portable memory model in this case?
-Josh




Re: SBI v0.3-rc1 released

Anup Patel
 

We had quite a bit of discussion about SBI versioning in the past when we were drafting the SBI v0.2 specification. The conclusion of those discussions was:

  1. We certainly needed a version for the SBI implementation, hence the sbi_get_impl_version() call
  2. We certainly needed a version for the SBI specification itself, hence the sbi_get_spec_version() call
  3. Most of us were not sure whether we really needed a separate version for each SBI extension. Maybe the FIRMWARE and EXPERIMENTAL extensions might need their own versions, but we are still not sure. To tackle this, SBI v0.2 defined sbi_probe_extension() as “Returns 0 if the given SBI extension is not available or an extension-specific non-zero value if it is available”.

 

We have still not come across any SBI extension that will keep growing over time and need a separate version of its own. Also, an SBI extension can always use the SBI specification version to distinguish changes over time. For example, the SBI HSM suspend call is only available in the HSM extension for SBI v0.3 (or higher); it is not available for SBI v0.2 (or lower).
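For concreteness, a minimal S-mode sketch of the probe call under discussion, per the SBI v0.2 calling convention (base extension EID 0x10, probe_extension FID 3, error in a0, value in a1); the wrapper shape is illustrative:

#define SBI_EXT_BASE   0x10
#define SBI_FID_PROBE  3
#define SBI_EXT_HSM    0x48534D   /* "HSM" */

/* Returns 0 if unavailable, else the extension-specific non-zero value. */
static long sbi_probe_extension(long eid)
{
    register long a0 __asm__("a0") = eid;
    register long a6 __asm__("a6") = SBI_FID_PROBE;
    register long a7 __asm__("a7") = SBI_EXT_BASE;
    register long a1 __asm__("a1");
    __asm__ volatile ("ecall"
                      : "+r"(a0), "=r"(a1)
                      : "r"(a6), "r"(a7)
                      : "memory");
    return a0 ? 0 : a1;   /* a0 holds the SBI error code */
}

int hsm_available(void) { return sbi_probe_extension(SBI_EXT_HSM) != 0; }

Under the suggestion below, the a1 value returned here would carry a per-extension version rather than just a flag.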

 

Regards,

Anup

 

From: tech-unixplatformspec@... <tech-unixplatformspec@...> On Behalf Of Jonathan Behrens
Sent: 09 June 2021 19:35
To: Atish Patra <Atish.Patra@...>
Cc: tech-unixplatformspec@...; palmer@...; ksankaran@...; Anup Patel <Anup.Patel@...>
Subject: Re: [RISC-V] [tech-unixplatformspec] SBI v0.3-rc1 released

 

One thing that I'd like to see resolved for the 0.3 release is a precise specification for what sbi_probe_extension does. Right now the description says "Returns 0 if the given SBI extension ID (EID) is not available, or an  extension-specific non-zero value if it is available." However, every other extension listed in the spec fails to say what value should be returned if it is available.

 

I'd suggest that this function should indicate some sort of version number for each of the extensions, either just 1 to say that there haven't been multiple versions of any of the standard extensions or perhaps a value formatted like sbi_get_spec_version to encode more detailed information.

 

Jonathan

 

On Wed, Jun 9, 2021 at 2:40 AM Atish Patra via lists.riscv.org <atish.patra=wdc.com@...> wrote:


We have tagged the current SBI specification as a release candidate for
v0.3[1]. It is tagged as v0.3-rc1 and includes a few new extensions and
cosmetic changes across the entire specification.
Here is a detailed change log:

- New extensions:
 - SBI PMU extension
 - SBI System reset extension
- Updated extensions:
 - Hart Suspend function added to HSM extension
- Overall specification reorganization and style update
- Additional clarifications for HSM extension and introduction section
- Makefile support to build html & pdf versions of the specification

We don't expect any significant functional changes. We will wait for
any further feedback and release the official v0.3 in a month or so.

Thank you for your contributions!

[1] https://github.com/riscv/riscv-sbi-doc/releases/tag/v0.3.0-rc1

--
Regards,
Atish





Next Platform HSC Meeting on Mon Jun 14 2021 8AM PST

Kumar Sankaran
 

Hi All,
The next platform HSC meeting is scheduled on Mon Jun 14th at 8AM PST.

Here are the details:

Agenda and minutes kept on the github wiki:
https://github.com/riscv/riscv-platform-specs/wiki

Here are the slides:
https://docs.google.com/presentation/d/1VepCqjMSHw9bSN6VIHhGn6K4tQQ6meuG49LYCi7-ctw/edit#slide=id.gc525db7f82_0_267

Meeting info
Zoom meeting: https://zoom.us/j/2786028446
Passcode: 901897

Or iPhone one-tap :
US: +16465588656,,2786028466# or +16699006833,,2786028466# Or Telephone:
Dial(for higher quality, dial a number based on your current location):
US: +1 646 558 8656 or +1 669 900 6833
Meeting ID: 278 602 8446
International numbers available:
https://zoom.us/zoomconference?m=_R0jyyScMETN7-xDLLRkUFxRAP07A-_

Regards
Kumar


Slides from today's AIA meeting (10-06-2021)

Anup Patel
 

Hi All,

The slides from today's AIA meeting are here:
https://docs.google.com/presentation/d/1WHGm7ZpOkVlk_sAVYVU5UwBXt1cdH-8fM1s2vdpY6K4/edit?usp=sharing

Both AIA and ACLINT specifications are now on RISC-V GitHub:
https://github.com/riscv/riscv-aia
https://github.com/riscv/riscv-aclint

Regards,
Anup
