
Re: [RFC PATCH 1/1] server extension: PCIe requirements

Josh Scheid
 

On Mon, Jun 14, 2021 at 4:02 PM Greg Favor <gfavor@...> wrote:
On Mon, Jun 14, 2021 at 2:28 PM Josh Scheid <jscheid@...> wrote:
+For security reasons, platforms are required to provide a mechanism to
+restrict the inbound accesses over PCIe to certain specific regions in
+the address space such as the DRAM.

While a standard IOMMU is further off, is the current opinion that the IOPMP is not in a position to be required or suggested as an implementation of the above requirement?  If not, then it's hard to check for compliance.

I'm not sure if an IOPMP could be used for this particular purpose, but more generally IOPMP is being driven by embedded people and isn't consciously thinking about functionality requirements implied by H-style virtualization, or PCIe MSIs, or other PCIe features.  In this regard IOPMP is analogous to PLIC and CLIC - and not generally suitable for OS/A platforms (and presumably is well-suited for M platforms).

I understand that IOPMP is not an IOMMU, but to the extent that it is a general "bus master memory protection" widget, it can be used by M-mode to ensure simple things, such as that S-mode-SW-controlled PCIe initiators cannot access address regions not accessible by S-mode.  There's value in memory protection even without full virtualization support.  I'm questioning how vague the memory protection "requirement" can be while still being usable and sufficient to provide a defined level of assurance.

For example, the platform spec could avoid mentioning the IOPMP proposal, but state that the platform is required to have a mechanism allowing M-mode SW to control (including prevent) PCIe initiator access to regions of the system address space.  While remaining open to custom implementations, this makes the functional intent clear.
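
To make that functional intent concrete, here is a minimal sketch of what M-mode-only control could look like; the device, register offsets, and semantics below are hypothetical and invented purely for illustration (this is not the IOPMP programming model):

/* Hypothetical bus-master memory-protection widget programmed by M-mode
 * firmware; all register offsets and semantics are invented for
 * illustration only. */
#include <stdint.h>

#define MPROT_BASE  0x10080000UL  /* hypothetical MMIO base address */
#define MPROT_REG(off) (*(volatile uint64_t *)(MPROT_BASE + (off)))

/* Allow PCIe initiators to reach exactly one window of DRAM;
 * everything outside the window is denied by default. */
static void mprot_allow_window(uint64_t base, uint64_t size)
{
    MPROT_REG(0x00) = base;  /* window base (hypothetical register) */
    MPROT_REG(0x08) = size;  /* window size (hypothetical register) */
    MPROT_REG(0x10) = 1;     /* enable, default-deny (hypothetical) */
}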

-Josh


Re: [RFC PATCH 1/1] server extension: PCIe requirements

Greg Favor
 

On Mon, Jun 14, 2021 at 2:28 PM Josh Scheid <jscheid@...> wrote:
+For security reasons, platforms are required to provide a mechanism to
+restrict the inbound accesses over PCIe to certain specific regions in
+the address space such as the DRAM.

While a standard IOMMU is further off, is the current opinion that the IOPMP is not in a position to be required or suggested as an implementation of the above requirement?  If not, then it's hard to check for compliance.

I'm not sure if an IOPMP could be used for this particular purpose, but more generally IOPMP is being driven by embedded people and isn't consciously thinking about functionality requirements implied by H-style virtualization, or PCIe MSIs, or other PCIe features.  In this regard IOPMP is analogous to PLIC and CLIC - and not generally suitable for OS/A platforms (and presumably is well-suited for M platforms).
 
Is this mechanism expected to be M-mode SW controlled, or is it also expected to be controlled by S-mode (either directly or via SBI)?

IOPMP has continued to evolve significantly, so I only casually/loosely watch what's going on.  But they don't appear to be thinking much yet about this aspect (unless they started in the past couple of weeks).  Although at a hardware level the flexibility is probably there for control by either mode, since IOPMP registers will presumably be memory-mapped.

Greg


PCIe requirements: Memory vs I/O

Josh Scheid
 

The proposal allows the region mapped by prefetchable BARs to be configured as either I/O or Memory.  This seems to conflict with the priv spec, which states:

"""
Memory regions that do not fit into regular main memory, for example, device scratchpad RAMs,
are categorized as I/O regions.
"""

I agree that it is useful to allow Memory treatment of some address space in some PCIe devices.  So there should be an action item to adjust the wording in the priv spec to accommodate that.

-Josh


Re: [RFC PATCH 1/1] server extension: PCIe requirements

Josh Scheid
 

On Wed, Jun 9, 2021 at 11:27 AM Mayuresh Chitale <mchitale@...> wrote:
This patch adds requirements for PCIe support for the server extension

Signed-off-by: Mayuresh Chitale <mchitale@...>

Signed-off-by: Mayuresh Chitale <mchitale@...>
---
 riscv-platform-spec.adoc | 133 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 132 insertions(+), 1 deletion(-)

diff --git a/riscv-platform-spec.adoc b/riscv-platform-spec.adoc
index 4418788..9de487e 100644
--- a/riscv-platform-spec.adoc
+++ b/riscv-platform-spec.adoc
@@ -363,7 +363,138 @@ https://lists.riscv.org/g/tech-privileged/message/404[Sstc] extension.
 ** Platforms are required to delegate the supervisor timer interrupt to 'S'
 mode. If the 'H' extension is implemented then the platforms are required to
 delegate the virtual supervisor timer interrupt to 'VS' mode.

Is this an M-mode SW requirement or a HW requirement that these interrupts are delegatable (writeable) in HW?

Why require the delegation by M-mode instead of allowing for M-mode to trap and pass down?  Is this just a performance benefit?
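
For reference, if the intent is direct delegation, the setup is a one-time CSR write; a minimal sketch, assuming the delegation bits are writable in the implementation (mip/mideleg bit 5 = STIP, hip/hideleg bit 6 = VSTIP):

#define STIP   (1UL << 5)   /* supervisor timer interrupt */
#define VSTIP  (1UL << 6)   /* virtual supervisor timer interrupt */

/* M-mode firmware: route the supervisor timer interrupt to S-mode. */
static inline void m_delegate_stimer(void)
{
    __asm__ volatile ("csrs mideleg, %0" :: "r"(STIP));
}

/* HS-mode software (H extension present): route the VS timer
 * interrupt to VS-mode. */
static inline void hs_delegate_vstimer(void)
{
    __asm__ volatile ("csrs hideleg, %0" :: "r"(VSTIP));
}

The trap-and-pass-down alternative would instead leave the mideleg bit clear and have M-mode inject the interrupt on each trap, which is where the performance question comes in.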

-* PCI-E
+
+===== PCIe
+Platforms are required to support PCIe
+footnote:[https://pcisig.com/specifications].Following are the requirements:

Any particular baseline PCIe version and/or extensions?

+
+====== PCIe Config Space
+* Platforms shall support access to the PCIe config space via ECAM as described
+in the PCI Express Base specification.
+* The entire config space for a single PCIe domain should be accessible via a
+single ECAM I/O region.
+* Platform firmware should implement the MCFG table to allow the operating
+systems to discover the supported PCIe domains and map the ECAM I/O region for
+each domain.
+* ECAM I/O regions shall be configured as channel 0 I/O regions.
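
As an aside: since ECAM fixes the config-address layout, "accessible via a single ECAM I/O region" reduces to simple address arithmetic. A minimal sketch (the base address would come from MCFG; the value in the example is illustrative):

#include <stdint.h>

/* ECAM address layout: bus[27:20], device[19:15], function[14:12],
 * register offset[11:0], added to the region base from MCFG. */
static inline volatile uint32_t *ecam_cfg(uintptr_t ecam_base, unsigned bus,
                                          unsigned dev, unsigned fn,
                                          unsigned off)
{
    return (volatile uint32_t *)(ecam_base +
        ((uintptr_t)bus << 20) + ((uintptr_t)dev << 15) +
        ((uintptr_t)fn << 12) + (off & 0xffcu));
}

/* Example: vendor/device ID of bus 0, device 0, function 0:
 *   uint32_t id = *ecam_cfg(0x30000000UL, 0, 0, 0, 0);   (base illustrative)
 */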
+
+====== PCIe Memory Space
+* PCIe Outbound region +
+Platforms are required to provide atleast two I/O regions for mapping the
+memory requested by PCIe endpoints and PCIe bridges/switches through BARs.
+The first I/O region is required to be located below 4G physical address to
+map the memory requested by non-prefetchabe BARs. This region shall be
+configured as channel 0 I/O region. The second I/O region is required to be
+located above 4G physical address to map the memory requested by prefetchable
+BARs.

Is there any guidance needed about the amount of total space available (below 4G), or about whether space needs to be allocated for each domain?

I think this is only necessary in the platform because of the current lack of an IOMMU requirement or standard.  With an IOMMU, that component can be used to locate 32-bit BARs anywhere in the system address space.  So at least keep in mind that the requirement can be dropped at that point.
 
This region may be configured as I/O region or as memory region.

Is an SBI call needed to support S-mode configuration?  What is the default expected to be if there is no SBI call or no call is made?

IIRC, some older versions of some HCI standards (USB, SATA?) only had device support for 32-bit addresses.  I mention this to check whether the requirement is really just that non-prefetchable BARs need to be supported <4GB, or whether other 32-bit BAR support is also needed.  Thus the platform may also need to support prefetchable BARs located <4GB.
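
As an illustration of the 32-bit-initiator case: in Linux such a device's driver constrains DMA with the standard mask API, which the platform can only honor if suitable <4GB memory exists (a generic kernel sketch, nothing RISC-V-specific):

#include <linux/dma-mapping.h>

/* Driver probe fragment: a device that can only generate 32-bit DMA
 * addresses asks the DMA core to keep all its buffers below 4GB. */
static int demo_limit_dma_to_32bit(struct device *dev)
{
    return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
}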
 
+
+* PCIe Inbound region +
+For security reasons, platforms are required to provide a mechanism to
+restrict the inbound accesses over PCIe to certain specific regions in
+the address space such as the DRAM.

While a standard IOMMU is further off, is the current opinion that the IOPMP is not in a position to be required or suggested as an implementation of the above requirement?  If not, then it's hard to check for compliance.

Is this mechanism expected to be M-mode SW controlled, or is it also expected to be controlled by S-mode (either directly or via SBI)?

+
+====== PCIe Interrupts
+* Platforms shall support both INTx and MSI/MSI-x interrupts.
+* Integration with AIA +
+TBD

While TBD, one question interesting to me is whether or not it matters if the PCI RC implements its own INTx-to-MSI bridge, or if an AIA APLIC is required for that.

+
+====== PCIe I/O coherency
+Following are the requirements:
+
+* Platforms shall provide a mechanism to control the `NoSnoop` bit for any
+outbound TLP.

Is it implicit here if this mechanism is provided to M-mode SW only, or also to S-mode?

+* If the host bridge/root port receives a TLP which does not have `NoSnoop` bit
+set then hardware shall generate a snoop request.
+* If the host bridge/root port receives a TLP which has `NoSnoop` set then no
+hardware coherency is required. Software coherency may be required via CMOs.

I read this as primarily stating that inbound NoSnoop controls the "coherent" access attribute.  But why this instead of focusing on control of the "cacheable" vs "non-cacheable" attribute?  With the latter, it seems more apparent how harts would then manage coherency: by controlling accesses to use the same "cacheable" attribute.

+
+====== PCIe Topology
+Platforms are required to implement atleast one of the following topologies and
+the components required in that topology.
+
+[ditaa]
+....
+
+            +----------+                             +----------+
+            |   CPU    |                             |   CPU    |
+            |          |                             |          |
+            +-----|----+                             +-----|----+
+                  |                                        |
+                  |                                        |
+    +-------------|------------+             +-------------|------------+
+    |        ROOT | COMPLEX    |             |        ROOT | COMPLEX    |
+    |                          |             |                          |
+    |      +------|-------+    |             |      +------|-------+    |
+    |      |  Host Bridge |    |             |      |  Host Bridge |    |
+    |      +------|-------+    |             |      +------|-------+    |
+    |             |            |             |             |            |
+    |             | BUS 0      |             |             | BUS 0      |
+    |     |-------|------|     |             |       +-----|-------+    |
+    |     |              |     |             |       | ROOT  PORT  |    |
+    |     |              |     |             |       +-----|-------+    |
+    | +---|---+      +---|---+ |             |             |            |
+    | | RCEIP |      | RCEC  | |             |             | PCIe Link  |
+    | +-------+      +-------+ |             |             |            |
+    |                          |             +-------------|------------+
+    +--------------------------+                           |
+                                                           |  BUS 1
+    RCEIP - Root complex integrated endpoint
+    RCEC - Root complex event collector
+....


Have we considered the option of requiring EPs to be behind virtual integrated RPs, instead of being RCiEPs?  This seems to bypass some of the unique limitations of RCiEPs, including the RCEC.

Do we need to ban or allow for impl-spec address mapping capabilities between PCI and system addresses?

Do we need say anything about peer-to-peer support, or requirements if a system enables it?  Including ACS?

Should the system mtimer counter also be the source for PCIe PTP?

-Josh


Re: Non-coherent I/O

Josh Scheid
 

On Mon, Jun 14, 2021 at 1:04 PM Greg Favor <gfavor@...> wrote:
I have already sent questions to Andrew to get the official view as to the intent of this aspect of the Priv spec and what is the proper way or perspective with which to be reading the ISA specs.  That then may result in the need for clarifying text to be added to the spec.  And once it is clear as to the scope and bounds of the ISA specs and what they require and allow, then it is left to profile and platform specs to specify tighter requirements.


Re I/O-related coherence and ordering, Daniel Lustig will readily acknowledge (and I'm quoting him) that "the I/O ordering model isn't currently defined as precisely as RVWMO".

And Krste will certainly say (i.e. has said) that RISC-V supports systems with coherent and non-coherent masters, and needs to standardize arch support for software management in such platforms asap.


While potentially a fine goal, it seems that to make this happen in a manner that allows Platform-compliant SW to be portable, more needs to be done
beyond the Zicmobase work, at least in terms of "glue" specification to tie it all together. It's also possible that the goal of generally enabling
non-coherent masters in RISC-V is perhaps outside the scope of the OS-A Platform work: in the short term things can be done to enable it in
implementation-specific HW+SW systems, but allowing for implementation-portable SW (across platform-compliant implementations) will take longer.

-Josh


Re: Non-coherent I/O

Greg Favor
 

I have already sent questions to Andrew to get the official view as to the intent of this aspect of the Priv spec and what is the proper way or perspective with which to be reading the ISA specs.  That then may result in the need for clarifying text to be added to the spec.  And once it is clear as to the scope and bounds of the ISA specs and what they require and allow, then it is left to profile and platform specs to specify tighter requirements.


Re I/O-related coherence and ordering, Daniel Lustig will readily acknowledge (and I'm quoting him) that "the I/O ordering model isn't currently defined as precisely as RVWMO".

And Krste will certainly say (i.e. has said) that RISC-V supports systems with coherent and non-coherent masters, and needs to standardize arch support for software management in such platforms asap.


Greg


Re: Non-coherent I/O

mark
 

If this is an issue with the priv spec, please add it to the priv spec GitHub issues.

thanks
Mark

On Mon, Jun 14, 2021 at 10:44 AM Josh Scheid <jscheid@...> wrote:
Priv:
"""
Accesses by one hart to main memory regions are observable not only by other harts but also
by other devices with the capability to initiate requests in the main memory system (e.g., DMA
engines). Coherent main memory regions always have either the RVWMO or RVTSO memory
model. Incoherent main memory regions have an implementation-defined memory model.
"""

The above is the core normative text discussing coherent initiators.

It's confusing because the "observable" statement in the first sentence is indirectly overridden by the consideration of incoherent main memory.

It may be enough to add wording in the platform spec that, for platforms that behave differently for NoSnoop=1 inbound TLPs (vs ignoring the bit and treating them as NoSnoop=0), the region of addresses accessed in that manner should be communicated as having an "incoherent" PMA generally in the system.

But it also implies that there's no standard memory model for incoherent memory.  Is the use of RVWMO+Zicmobase sufficient, or is more needed to describe a portable memory model in this case?
-Josh


Non-coherent I/O

Josh Scheid
 

Priv:
"""
Accesses by one hart to main memory regions are observable not only by other harts but also
by other devices with the capability to initiate requests in the main memory system (e.g., DMA
engines). Coherent main memory regions always have either the RVWMO or RVTSO memory
model. Incoherent main memory regions have an implementation-defined memory model.
"""

The above is the core normative text discussing coherent initiators.

It's confusing because the "observable" statement in the first sentence is indirectly overridden by the consideration of incoherent main memory.

It may be enough to add wording in the platform spec that, for platforms that behave differently for NoSnoop=1 inbound TLPs (vs ignoring the bit and treating them as NoSnoop=0), the region of addresses accessed in that manner should be communicated as having an "incoherent" PMA generally in the system.

But it also implies that there's no standard memory model for incoherent memory.  Is the use of RVWMO+Zicmobase sufficient, or is more needed to describe a portable memory model in this case?
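
To make the software-coherency half concrete, here is a minimal sketch of preparing a buffer for a non-coherent (NoSnoop) initiator, assuming Zicbom-style CMO instructions land as proposed; the cache-block size and asm syntax are assumptions and may differ by platform and toolchain:

#include <stddef.h>
#include <stdint.h>

#define CBO_BLOCK 64  /* assumed block size; must be discovered per platform */

/* Write back every cache block covering [buf, buf+len) before handing
 * the buffer to a non-coherent DMA initiator. */
static void dma_writeback_range(void *buf, size_t len)
{
    uintptr_t p   = (uintptr_t)buf & ~(uintptr_t)(CBO_BLOCK - 1);
    uintptr_t end = (uintptr_t)buf + len;
    for (; p < end; p += CBO_BLOCK)
        __asm__ volatile ("cbo.flush (%0)" :: "r"(p) : "memory");
    /* Order the write-backs before the MMIO doorbell that starts the DMA. */
    __asm__ volatile ("fence w, o" ::: "memory");
}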
-Josh


Re: SBI v0.3-rc1 released

Anup Patel
 

We had quite a bit of discussion about SBI versioning in the past when we were drafting the SBI v0.2 specification. The conclusion of those discussions was:

  1. We certainly needed a version for the SBI implementation, hence the sbi_get_impl_version() call
  2. We certainly needed a version for the SBI specification itself, hence the sbi_get_spec_version() call
  3. Most of us were not sure whether we really needed a separate version for each SBI extension. Maybe the FIRMWARE and EXPERIMENTAL extensions might need their own versions, but we were still not sure. To tackle this, SBI v0.2 defined sbi_probe_extension() as “Returns 0 if the given SBI extension is not available or an extension-specific non-zero value if it is available”.

 

We have still not come across any SBI extension that keeps growing over time such that it needs its own separate version. Also, an SBI extension can always use the SBI specification version to distinguish changes over time. For example, the SBI HSM suspend call is only available in the HSM extension for SBI v0.3 (or higher); it is not available for SBI v0.2 (or lower).
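
For reference, the probe mechanism itself is cheap to use from S-mode; a minimal sketch of the binary interface (Base extension EID 0x10, probe function FID 3, per the SBI calling convention):

struct sbiret { long error; long value; };

/* EID in a7, FID in a6, argument in a0; error returns in a0, value in a1. */
static struct sbiret sbi_probe_extension(long ext_id)
{
    register long a0 __asm__("a0") = ext_id;
    register long a1 __asm__("a1");
    register long a6 __asm__("a6") = 3;      /* FID 3: sbi_probe_extension */
    register long a7 __asm__("a7") = 0x10;   /* EID 0x10: Base extension */
    __asm__ volatile ("ecall"
                      : "+r"(a0), "=r"(a1)
                      : "r"(a6), "r"(a7)
                      : "memory");
    return (struct sbiret){ .error = a0, .value = a1 };
}

/* e.g. sbi_probe_extension(0x48534D).value != 0 means HSM is present. */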

 

Regards,

Anup

 

From: tech-unixplatformspec@... <tech-unixplatformspec@...> On Behalf Of Jonathan Behrens
Sent: 09 June 2021 19:35
To: Atish Patra <Atish.Patra@...>
Cc: tech-unixplatformspec@...; palmer@...; ksankaran@...; Anup Patel <Anup.Patel@...>
Subject: Re: [RISC-V] [tech-unixplatformspec] SBI v0.3-rc1 released

 

One thing that I'd like to see resolved for the 0.3 release is a precise specification for what sbi_probe_extension does. Right now the description says "Returns 0 if the given SBI extension ID (EID) is not available, or an  extension-specific non-zero value if it is available." However, every other extension listed in the spec fails to say what value should be returned if it is available.

 

I'd suggest that this function should indicate some sort of version number for each of the extensions, either just 1 to say that there haven't been multiple versions of any of the standard extensions or perhaps a value formatted like sbi_get_spec_version to encode more detailed information.

 

Jonathan

 

On Wed, Jun 9, 2021 at 2:40 AM Atish Patra via lists.riscv.org <atish.patra=wdc.com@...> wrote:


We have tagged the current SBI specification as a release candidate for
v0.3[1]. It is tagged as v0.3-rc1, which includes a few new extensions and
cosmetic changes to the entire specification.
Here is a detailed change log:

- New extensions:
 - SBI PMU extension
 - SBI System reset extension
- Updated extensions:
 - Hart Suspend function added to HSM extension
- Overall specification reorganization and style update
- Additional clarifications for HSM extension and introduction section
- Makefile support to build html & pdf versions of the specification

We don't expect any significant functional changes. We will wait for
any further feedback and release the official v0.3 in a month or so.

Thank you for your contributions!

[1] https://github.com/riscv/riscv-sbi-doc/releases/tag/v0.3.0-rc1

--
Regards,
Atish





Next Platform HSC Meeting on Mon Jun 14 2021 8AM PST

Kumar Sankaran
 

Hi All,
The next platform HSC meeting is scheduled on Mon Jun 14th at 8AM PST.

Here are the details:

Agenda and minutes kept on the github wiki:
https://github.com/riscv/riscv-platform-specs/wiki

Here are the slides:
https://docs.google.com/presentation/d/1VepCqjMSHw9bSN6VIHhGn6K4tQQ6meuG49LYCi7-ctw/edit#slide=id.gc525db7f82_0_267

Meeting info
Zoom meeting: https://zoom.us/j/2786028446
Passcode: 901897

Or iPhone one-tap :
US: +16465588656,,2786028466# or +16699006833,,2786028466# Or Telephone:
Dial(for higher quality, dial a number based on your current location):
US: +1 646 558 8656 or +1 669 900 6833
Meeting ID: 278 602 8446
International numbers available:
https://zoom.us/zoomconference?m=_R0jyyScMETN7-xDLLRkUFxRAP07A-_

Regards
Kumar


Slides from today's AIA meeting (10-06-2021)

Anup Patel
 

Hi All,

The slides from today's AIA meeting are here:
https://docs.google.com/presentation/d/1WHGm7ZpOkVlk_sAVYVU5UwBXt1cdH-8fM1s2vdpY6K4/edit?usp=sharing

Both AIA and ACLINT specifications are now on RISC-V GitHub:
https://github.com/riscv/riscv-aia
https://github.com/riscv/riscv-aclint

Regards,
Anup


Re: [RFC PATCH 1/1] server extension: PCIe requirements

Mayuresh Chitale
 





On Thu, Jun 10, 2021 at 5:19 AM Bin Meng <bmeng.cn@...> wrote:
On Thu, Jun 10, 2021 at 2:27 AM Mayuresh Chitale
<mchitale@...> wrote:
>
> This patch adds requirements for PCIe support for the server extension
>
> Signed-off-by: Mayuresh Chitale <mchitale@...>
>
> Signed-off-by: Mayuresh Chitale <mchitale@...>

nits: 2 SoB here

Thanks. I will fix this and the typos below in the next version. 
> ---
>  riscv-platform-spec.adoc | 133 ++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 132 insertions(+), 1 deletion(-)
>
> diff --git a/riscv-platform-spec.adoc b/riscv-platform-spec.adoc
> index 4418788..9de487e 100644
> --- a/riscv-platform-spec.adoc
> +++ b/riscv-platform-spec.adoc
> @@ -363,7 +363,138 @@ https://lists.riscv.org/g/tech-privileged/message/404[Sstc] extension.
>  ** Platforms are required to delegate the supervisor timer interrupt to 'S'
>  mode. If the 'H' extension is implemented then the platforms are required to
>  delegate the virtual supervisor timer interrupt to 'VS' mode.
> -* PCI-E
> +
> +===== PCIe
> +Platforms are required to support PCIe
> +footnote:[https://pcisig.com/specifications].Following are the requirements:
> +
> +====== PCIe Config Space
> +* Platforms shall support access to the PCIe config space via ECAM as described
> +in the PCI Express Base specification.
> +* The entire config space for a single PCIe domain should be accessible via a
> +single ECAM I/O region.
> +* Platform firmware should implement the MCFG table to allow the operating

Is ACPI mandatory?

Yes, ACPI is mandatory for the server extension.

> +systems to discover the supported PCIe domains and map the ECAM I/O region for
> +each domain.
> +* ECAM I/O regions shall be configured as channel 0 I/O regions.
> +
> +====== PCIe Memory Space
> +* PCIe Outbound region +
> +Platforms are required to provide atleast two I/O regions for mapping the

at least

> +memory requested by PCIe endpoints and PCIe bridges/switches through BARs.
> +The first I/O region is required to be located below 4G physical address to
> +map the memory requested by non-prefetchabe BARs. This region shall be
> +configured as channel 0 I/O region. The second I/O region is required to be
> +located above 4G physical address to map the memory requested by prefetchable
> +BARs. This region may be configured as I/O region or as memory region.
> +
> +* PCIe Inbound region +
> +For security reasons, platforms are required to provide a mechanism to

Is this mechanism a standard one, or platform specific?

I am not sure if we have a standard mechanism yet.
 
> +restrict the inbound accesses over PCIe to certain specific regions in
> +the address space such as the DRAM.
> +
> +====== PCIe Interrupts
> +* Platforms shall support both INTx and MSI/MSI-x interrupts.
> +* Integration with AIA +
> +TBD
> +
> +====== PCIe I/O coherency
> +Following are the requirements:
> +
> +* Platforms shall provide a mechanism to control the `NoSnoop` bit for any
> +outbound TLP.
> +* If the host bridge/root port receives a TLP which does not have `NoSnoop` bit
> +set then hardware shall generate a snoop request.
> +* If the host bridge/root port receives a TLP which has `NoSnoop` set then no
> +hardware coherency is required. Software coherency may be required via CMOs.
> +
> +====== PCIe Topology
> +Platforms are required to implement atleast one of the following topologies and

at least

> +the components required in that topology.
> +
> +[ditaa]
> +....
> +
> +            +----------+                             +----------+
> +            |   CPU    |                             |   CPU    |
> +            |          |                             |          |
> +            +-----|----+                             +-----|----+
> +                  |                                        |
> +                  |                                        |
> +    +-------------|------------+             +-------------|------------+
> +    |        ROOT | COMPLEX    |             |        ROOT | COMPLEX    |
> +    |                          |             |                          |
> +    |      +------|-------+    |             |      +------|-------+    |
> +    |      |  Host Bridge |    |             |      |  Host Bridge |    |
> +    |      +------|-------+    |             |      +------|-------+    |
> +    |             |            |             |             |            |
> +    |             | BUS 0      |             |             | BUS 0      |
> +    |     |-------|------|     |             |       +-----|-------+    |
> +    |     |              |     |             |       | ROOT  PORT  |    |
> +    |     |              |     |             |       +-----|-------+    |
> +    | +---|---+      +---|---+ |             |             |            |
> +    | | RCEIP |      | RCEC  | |             |             | PCIe Link  |
> +    | +-------+      +-------+ |             |             |            |
> +    |                          |             +-------------|------------+
> +    +--------------------------+                           |
> +                                                           |  BUS 1
> +    RCEIP - Root complex integrated endpoint
> +    RCEC - Root complex event collector
> +....
> +
> +* Host Bridge +
> +Following are the requirements for host bridges:
> +
> +** Any read or write access by a hart to an ECAM I/O region shall be converted
> +by the host bridge into the corresponding PCIe config read or config write
> +request.
> +** Any read or write access by a hart to a PCIe outbound region shall be
> +forwarded by the host bridge to a BAR or prefetch/non-prefetch memory window,
> +if the address falls within the region claimed by the BAR or prefetch/
> +non-prefetch memory window. Otherwise the host bridge shall return an error.
> +
> +** Host bridge shall return all 1s in the following cases:
> +*** Config read to non existent functions and devices on root bus.
> +*** Config reads that receive Unsupported Request response from functions and
> +devices on the root bus.
> +* Root ports +
> +Following are the requirements for root ports.
> +** Root ports shall appear as PCI-PCI bridge to software.
> +** Root ports shall implememnt all registers of Type 1 header.

typo: implement

> +** Root ports shall implement all capabilities specified in the PCI Express
> +Base specification for a root port.
> +** Root ports shall forward type 1 configuration access when the bus number in
> +the TLP is greater than the root port's secondary bus number and less than or
> +equal to the root port's subordinate bus number.
> +** Root ports shall convert type 1 configuration access to a type 0
> +configuration acess when bus number in the TLP is equal to the root port's

typo: access

> +secondary bus number.
> +** Root ports shall respond to any type 0 configuration accesses it receives.
> +** Root ports shall forward memory accesses targeting its prefetch/non-prefetch
> +memory windows to downstream components. If address of the transaction does not
> +fall within the regions claimed by prefetch/non-prefetch memory windows then
> +the root port shall generate a Unsupported Request.
> +** Root port requester id or completer id shall be formed using the bdf of the
> +root port.
> +** The root ports shall support the CRS software visbility.

typo: visibility

> +** Root ports shall return all 1s in the following cases:
> +*** Config read to non existent functions and devices on seconday bus.

typo: secondary

> +*** Config reads that receive Unsupported Request from downstream components.
> +*** Config read when root port's link is down.
> +** The root port shall implement the AER capability.
> +
> +* RCEIP +
> +All the requirements for RCEIP in the PCI Express Base specification shall be implemented.
> +In addition the following requirements shall be met:
> +** If RCEIP is implemented then RCEC shall be implemented as well. All
> +requrirements for RCEC specified in the PCI Express Base specification shall be
> +implemented. RCEC is required to terminate the AER and PME messages from RCEIP.
> +** If both the topologies mentioned above are supported then RCEIP and RCEC
> +shall be implemented in a separate PCIe domain and shall be addressable via a
> +separate ECAM I/O region.
> +
> +====== PCIe peer to peer transactions +
> +TBD

Regards,
Bin


Re: [RFC PATCH 1/1] server extension: PCIe requirements

Bin Meng
 

On Thu, Jun 10, 2021 at 2:27 AM Mayuresh Chitale
<mchitale@...> wrote:

This patch adds requirements for PCIe support for the server extension

Signed-off-by: Mayuresh Chitale <mchitale@...>

Signed-off-by: Mayuresh Chitale <mchitale@...>
nits: 2 SoB here

---
riscv-platform-spec.adoc | 133 ++++++++++++++++++++++++++++++++++++++-
1 file changed, 132 insertions(+), 1 deletion(-)

diff --git a/riscv-platform-spec.adoc b/riscv-platform-spec.adoc
index 4418788..9de487e 100644
--- a/riscv-platform-spec.adoc
+++ b/riscv-platform-spec.adoc
@@ -363,7 +363,138 @@ https://lists.riscv.org/g/tech-privileged/message/404[Sstc] extension.
** Platforms are required to delegate the supervisor timer interrupt to 'S'
mode. If the 'H' extension is implemented then the platforms are required to
delegate the virtual supervisor timer interrupt to 'VS' mode.
-* PCI-E
+
+===== PCIe
+Platforms are required to support PCIe
+footnote:[https://pcisig.com/specifications].Following are the requirements:
+
+====== PCIe Config Space
+* Platforms shall support access to the PCIe config space via ECAM as described
+in the PCI Express Base specification.
+* The entire config space for a single PCIe domain should be accessible via a
+single ECAM I/O region.
+* Platform firmware should implement the MCFG table to allow the operating
Is ACPI mandatory?

+systems to discover the supported PCIe domains and map the ECAM I/O region for
+each domain.
+* ECAM I/O regions shall be configured as channel 0 I/O regions.
+
+====== PCIe Memory Space
+* PCIe Outbound region +
+Platforms are required to provide atleast two I/O regions for mapping the
at least

+memory requested by PCIe endpoints and PCIe bridges/switches through BARs.
+The first I/O region is required to be located below 4G physical address to
+map the memory requested by non-prefetchabe BARs. This region shall be
+configured as channel 0 I/O region. The second I/O region is required to be
+located above 4G physical address to map the memory requested by prefetchable
+BARs. This region may be configured as I/O region or as memory region.
+
+* PCIe Inbound region +
+For security reasons, platforms are required to provide a mechanism to
Is this mechanism a standard one, or platform specific?

+restrict the inbound accesses over PCIe to certain specific regions in
+the address space such as the DRAM.
+
+====== PCIe Interrupts
+* Platforms shall support both INTx and MSI/MSI-x interrupts.
+* Integration with AIA +
+TBD
+
+====== PCIe I/O coherency
+Following are the requirements:
+
+* Platforms shall provide a mechanism to control the `NoSnoop` bit for any
+outbound TLP.
+* If the host bridge/root port receives a TLP which does not have `NoSnoop` bit
+set then hardware shall generate a snoop request.
+* If the host bridge/root port receives a TLP which has `NoSnoop` set then no
+hardware coherency is required. Software coherency may be required via CMOs.
+
+====== PCIe Topology
+Platforms are required to implement atleast one of the following topologies and
at least

+the components required in that topology.
+
+[ditaa]
+....
+
+            +----------+                             +----------+
+            |   CPU    |                             |   CPU    |
+            |          |                             |          |
+            +-----|----+                             +-----|----+
+                  |                                        |
+                  |                                        |
+    +-------------|------------+             +-------------|------------+
+    |        ROOT | COMPLEX    |             |        ROOT | COMPLEX    |
+    |                          |             |                          |
+    |      +------|-------+    |             |      +------|-------+    |
+    |      |  Host Bridge |    |             |      |  Host Bridge |    |
+    |      +------|-------+    |             |      +------|-------+    |
+    |             |            |             |             |            |
+    |             | BUS 0      |             |             | BUS 0      |
+    |     |-------|------|     |             |       +-----|-------+    |
+    |     |              |     |             |       | ROOT  PORT  |    |
+    |     |              |     |             |       +-----|-------+    |
+    | +---|---+      +---|---+ |             |             |            |
+    | | RCEIP |      | RCEC  | |             |             | PCIe Link  |
+    | +-------+      +-------+ |             |             |            |
+    |                          |             +-------------|------------+
+    +--------------------------+                           |
+                                                           |  BUS 1
+    RCEIP - Root complex integrated endpoint
+    RCEC - Root complex event collector
+....
+
+* Host Bridge +
+Following are the requirements for host bridges:
+
+** Any read or write access by a hart to an ECAM I/O region shall be converted
+by the host bridge into the corresponding PCIe config read or config write
+request.
+** Any read or write access by a hart to a PCIe outbound region shall be
+forwarded by the host bridge to a BAR or prefetch/non-prefetch memory window,
+if the address falls within the region claimed by the BAR or prefetch/
+non-prefetch memory window. Otherwise the host bridge shall return an error.
+
+** Host bridge shall return all 1s in the following cases:
+*** Config read to non existent functions and devices on root bus.
+*** Config reads that receive Unsupported Request response from functions and
+devices on the root bus.
+* Root ports +
+Following are the requirements for root ports.
+** Root ports shall appear as PCI-PCI bridge to software.
+** Root ports shall implememnt all registers of Type 1 header.
typo: implement

+** Root ports shall implement all capabilities specified in the PCI Express
+Base specification for a root port.
+** Root ports shall forward type 1 configuration access when the bus number in
+the TLP is greater than the root port's secondary bus number and less than or
+equal to the root port's subordinate bus number.
+** Root ports shall convert type 1 configuration access to a type 0
+configuration acess when bus number in the TLP is equal to the root port's
typo: access

+secondary bus number.
+** Root ports shall respond to any type 0 configuration accesses it receives.
+** Root ports shall forward memory accesses targeting its prefetch/non-prefetch
+memory windows to downstream components. If address of the transaction does not
+fall within the regions claimed by prefetch/non-prefetch memory windows then
+the root port shall generate a Unsupported Request.
+** Root port requester id or completer id shall be formed using the bdf of the
+root port.
+** The root ports shall support the CRS software visbility.
typo: visibility

+** Root ports shall return all 1s in the following cases:
+*** Config read to non existent functions and devices on seconday bus.
typo: secondary

+*** Config reads that receive Unsupported Request from downstream components.
+*** Config read when root port's link is down.
+** The root port shall implement the AER capability.
+
+* RCEIP +
+All the requirements for RCEIP in the PCI Express Base specification shall be implemented.
+In addition the following requirements shall be met:
+** If RCEIP is implemented then RCEC shall be implemented as well. All
+requrirements for RCEC specified in the PCI Express Base specification shall be
+implemented. RCEC is required to terminate the AER and PME messages from RCEIP.
+** If both the topologies mentioned above are supported then RCEIP and RCEC
+shall be implemented in a separate PCIe domain and shall be addressable via a
+separate ECAM I/O region.
+
+====== PCIe peer to peer transactions +
+TBD
Regards,
Bin


[RFC PATCH 1/1] server extension: PCIe requirements

Mayuresh Chitale
 

This patch adds requirements for PCIe support for the server extension

Signed-off-by: Mayuresh Chitale <mchitale@...>

Signed-off-by: Mayuresh Chitale <mchitale@...>
---
riscv-platform-spec.adoc | 133 ++++++++++++++++++++++++++++++++++++++-
1 file changed, 132 insertions(+), 1 deletion(-)

diff --git a/riscv-platform-spec.adoc b/riscv-platform-spec.adoc
index 4418788..9de487e 100644
--- a/riscv-platform-spec.adoc
+++ b/riscv-platform-spec.adoc
@@ -363,7 +363,138 @@ https://lists.riscv.org/g/tech-privileged/message/404[Sstc] extension.
** Platforms are required to delegate the supervisor timer interrupt to 'S'
mode. If the 'H' extension is implemented then the platforms are required to
delegate the virtual supervisor timer interrupt to 'VS' mode.
-* PCI-E
+
+===== PCIe
+Platforms are required to support PCIe
+footnote:[https://pcisig.com/specifications].Following are the requirements:
+
+====== PCIe Config Space
+* Platforms shall support access to the PCIe config space via ECAM as described
+in the PCI Express Base specification.
+* The entire config space for a single PCIe domain should be accessible via a
+single ECAM I/O region.
+* Platform firmware should implement the MCFG table to allow the operating
+systems to discover the supported PCIe domains and map the ECAM I/O region for
+each domain.
+* ECAM I/O regions shall be configured as channel 0 I/O regions.
+
+====== PCIe Memory Space
+* PCIe Outbound region +
+Platforms are required to provide atleast two I/O regions for mapping the
+memory requested by PCIe endpoints and PCIe bridges/switches through BARs.
+The first I/O region is required to be located below 4G physical address to
+map the memory requested by non-prefetchabe BARs. This region shall be
+configured as channel 0 I/O region. The second I/O region is required to be
+located above 4G physical address to map the memory requested by prefetchable
+BARs. This region may be configured as I/O region or as memory region.
+
+* PCIe Inbound region +
+For security reasons, platforms are required to provide a mechanism to
+restrict the inbound accesses over PCIe to certain specific regions in
+the address space such as the DRAM.
+
+====== PCIe Interrupts
+* Platforms shall support both INTx and MSI/MSI-x interrupts.
+* Integration with AIA +
+TBD
+
+====== PCIe I/O coherency
+Following are the requirements:
+
+* Platforms shall provide a mechanism to control the `NoSnoop` bit for any
+outbound TLP.
+* If the host bridge/root port receives a TLP which does not have `NoSnoop` bit
+set then hardware shall generate a snoop request.
+* If the host bridge/root port receives a TLP which has `NoSnoop` set then no
+hardware coherency is required. Software coherency may be required via CMOs.
+
+====== PCIe Topology
+Platforms are required to implement atleast one of the following topologies and
+the components required in that topology.
+
+[ditaa]
+....
+
+            +----------+                             +----------+
+            |   CPU    |                             |   CPU    |
+            |          |                             |          |
+            +-----|----+                             +-----|----+
+                  |                                        |
+                  |                                        |
+    +-------------|------------+             +-------------|------------+
+    |        ROOT | COMPLEX    |             |        ROOT | COMPLEX    |
+    |                          |             |                          |
+    |      +------|-------+    |             |      +------|-------+    |
+    |      |  Host Bridge |    |             |      |  Host Bridge |    |
+    |      +------|-------+    |             |      +------|-------+    |
+    |             |            |             |             |            |
+    |             | BUS 0      |             |             | BUS 0      |
+    |     |-------|------|     |             |       +-----|-------+    |
+    |     |              |     |             |       | ROOT  PORT  |    |
+    |     |              |     |             |       +-----|-------+    |
+    | +---|---+      +---|---+ |             |             |            |
+    | | RCEIP |      | RCEC  | |             |             | PCIe Link  |
+    | +-------+      +-------+ |             |             |            |
+    |                          |             +-------------|------------+
+    +--------------------------+                           |
+                                                           |  BUS 1
+    RCEIP - Root complex integrated endpoint
+    RCEC - Root complex event collector
+....
+
+* Host Bridge +
+Following are the requirements for host bridges:
+
+** Any read or write access by a hart to an ECAM I/O region shall be converted
+by the host bridge into the corresponding PCIe config read or config write
+request.
+** Any read or write access by a hart to a PCIe outbound region shall be
+forwarded by the host bridge to a BAR or prefetch/non-prefetch memory window,
+if the address falls within the region claimed by the BAR or prefetch/
+non-prefetch memory window. Otherwise the host bridge shall return an error.
+
+** Host bridge shall return all 1s in the following cases:
+*** Config read to non existent functions and devices on root bus.
+*** Config reads that receive Unsupported Request response from functions and
+devices on the root bus.
+* Root ports +
+Following are the requirements for root ports.
+** Root ports shall appear as PCI-PCI bridge to software.
+** Root ports shall implememnt all registers of Type 1 header.
+** Root ports shall implement all capabilities specified in the PCI Express
+Base specification for a root port.
+** Root ports shall forward type 1 configuration access when the bus number in
+the TLP is greater than the root port's secondary bus number and less than or
+equal to the root port's subordinate bus number.
+** Root ports shall convert type 1 configuration access to a type 0
+configuration acess when bus number in the TLP is equal to the root port's
+secondary bus number.
+** Root ports shall respond to any type 0 configuration accesses it receives.
+** Root ports shall forward memory accesses targeting its prefetch/non-prefetch
+memory windows to downstream components. If address of the transaction does not
+fall within the regions claimed by prefetch/non-prefetch memory windows then
+the root port shall generate a Unsupported Request.
+** Root port requester id or completer id shall be formed using the bdf of the
+root port.
+** The root ports shall support the CRS software visbility.
+** Root ports shall return all 1s in the following cases:
+*** Config read to non existent functions and devices on seconday bus.
+*** Config reads that receive Unsupported Request from downstream components.
+*** Config read when root port's link is down.
+** The root port shall implement the AER capability.
+
+* RCEIP +
+All the requirements for RCEIP in the PCI Express Base specification shall be implemented.
+In addition the following requirements shall be met:
+** If RCEIP is implemented then RCEC shall be implemented as well. All
+requrirements for RCEC specified in the PCI Express Base specification shall be
+implemented. RCEC is required to terminate the AER and PME messages from RCEIP.
+** If both the topologies mentioned above are supported then RCEIP and RCEC
+shall be implemented in a separate PCIe domain and shall be addressable via a
+separate ECAM I/O region.
+
+====== PCIe peer to peer transactions +
+TBD

==== Secure Boot
* TEE
--
2.17.1
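
As an aside on the root-port config-routing rules in the patch above: they reduce to a comparison against the port's secondary/subordinate bus numbers. A minimal sketch of the decision:

enum cfg_route { ROUTE_TYPE0, ROUTE_FORWARD_TYPE1, ROUTE_UR };

/* Root-port handling of an inbound type 1 config TLP, per the rules in
 * the patch: sec/sub are the port's secondary/subordinate bus numbers. */
static enum cfg_route route_cfg_tlp(unsigned bus, unsigned sec, unsigned sub)
{
    if (bus == sec)
        return ROUTE_TYPE0;          /* convert type 1 -> type 0 */
    if (bus > sec && bus <= sub)
        return ROUTE_FORWARD_TYPE1;  /* forward downstream unchanged */
    return ROUTE_UR;                 /* otherwise terminate (Unsupported
                                        Request, per the base spec) */
}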


[RFC PATCH 0/1] System peripherals - PCIe

Mayuresh Chitale
 

This is an initial patch for PCIe requirements for the server extension. The
goal is to specify requirements for those PCIe elements which interact with
the system, such as PCIe config space, memory space, topology, interrupts, etc.

Mayuresh Chitale (1):
This patch adds requirements for PCIe support for the server extension

riscv-platform-spec.adoc | 135 ++++++++++++++++++++++++++++++++++++++-
1 file changed, 133 insertions(+), 2 deletions(-)

--
2.17.1


Re: SBI v0.3-rc1 released

Jonathan Behrens <behrensj@...>
 

One thing that I'd like to see resolved for the 0.3 release is a precise specification for what sbi_probe_extension does. Right now the description says "Returns 0 if the given SBI extension ID (EID) is not available, or an  extension-specific non-zero value if it is available." However, every other extension listed in the spec fails to say what value should be returned if it is available.

I'd suggest that this function should indicate some sort of version number for each of the extensions, either just 1 to say that there haven't been multiple versions of any of the standard extensions or perhaps a value formatted like sbi_get_spec_version to encode more detailed information.
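
As a sketch of that second option, reusing the sbi_get_spec_version layout (bit 31 zero, major version in bits [30:24], minor version in bits [23:0]):

/* Probe return value formatted like sbi_get_spec_version. */
#define SBI_EXT_VERSION(major, minor) \
    ((((long)(major) & 0x7fL) << 24) | ((long)(minor) & 0xffffffL))

#define SBI_EXT_VERSION_MAJOR(v)  (((v) >> 24) & 0x7f)
#define SBI_EXT_VERSION_MINOR(v)  ((v) & 0xffffff)

/* An extension at version 1.0 would probe as SBI_EXT_VERSION(1, 0),
 * which is non-zero and so still satisfies the current wording. */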

Jonathan


On Wed, Jun 9, 2021 at 2:40 AM Atish Patra via lists.riscv.org <atish.patra=wdc.com@...> wrote:

We have tagged the current SBI specification as a release candidate for
v0.3[1]. It is tagged as v0.3-rc1, which includes a few new extensions and
cosmetic changes to the entire specification.
Here is a detailed change log:

- New extensions:
 - SBI PMU extension
 - SBI System reset extension
- Updated extensions:
 - Hart Suspend function added to HSM extension
- Overall specification reorganization and style update
- Additional clarifications for HSM extension and introduction section
- Makefile support to build html & pdf versions of the specification

We don't expect any significant functional changes. We will wait for
any further feedback and release the official v0.3 in a month or so.

Thank you for your contributions!

[1] https://github.com/riscv/riscv-sbi-doc/releases/tag/v0.3.0-rc1

--
Regards,
Atish






SBI v0.3-rc1 released

atishp@...
 

We have tagged the current SBI specification as a release candidate for
v0.3[1]. It is tagged as v0.3-rc1, which includes a few new extensions and
cosmetic changes to the entire specification.
Here is a detailed change log:

- New extensions:
- SBI PMU extension
- SBI System reset extension
- Updated extensions:
- Hart Suspend function added to HSM extension
- Overall specification reorganization and style update
- Additional clarifications for HSM extension and introduction section
- Makefile support to build html & pdf versions of the specification

We don't expect any significant functional changes. We will wait for
any further feedback and release the official v0.3 in a month or so.

Thank you for your contributions!

[1] https://github.com/riscv/riscv-sbi-doc/releases/tag/v0.3.0-rc1

--
Regards,
Atish


Re: [PATCH v2] riscv-sbi.adoc: Clarify that an SBI extension shall not be partially implemented

atishp@...
 

On Tue, 2021-06-08 at 09:38 +0800, Bin Meng wrote:
Mention that an SBI extension shall not be partially implemented.

Signed-off-by: Bin Meng <bmeng.cn@...>

---

Changes in v2:
- %s/a SBI/an SBI
- reword the clarification

 riscv-sbi.adoc | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/riscv-sbi.adoc b/riscv-sbi.adoc
index 6b548a5..df90840 100644
--- a/riscv-sbi.adoc
+++ b/riscv-sbi.adoc
@@ -35,6 +35,7 @@ https://creativecommons.org/licenses/by/4.0/.
 * Improved documentation of SBI hart state managment extension
 * Added suspend function to SBI hart state managment extension
 * Added performance monitoring unit extension
+* Clarified that an SBI extension shall not be partially implemented
 
 === Version 0.2
 
@@ -52,6 +53,11 @@ abstraction for platform (or hypervisor) specific functionality. The design
 of the SBI follows the general RISC-V philosophy of having a small core along
 with a set of optional modular extensions.
 
+SBI extensions as whole are optional but they shall not be partially
+implemented. If sbi_probe_extension() signals that an extension is available,
+all functions conforming to the SBI version reported by sbi_get_spec_version()
+must be implemented in total.
+
Thanks.
Reviewed-by: Atish Patra <atish.patra@...>

 The higher privilege software providing SBI interface to the supervisor-mode
 software is referred to as an SBI implemenation or Supervisor Execution
 Environment (SEE). An SBI implementation (or SEE) can be platform runtime
--
Regards,
Atish


[PATCH v2] riscv-sbi.adoc: Clarify that an SBI extension shall not be partially implemented

Bin Meng
 

Mention that an SBI extension shall not be partially implemented.

Signed-off-by: Bin Meng <bmeng.cn@...>

---

Changes in v2:
- %s/a SBI/an SBI
- reword the clarification

riscv-sbi.adoc | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/riscv-sbi.adoc b/riscv-sbi.adoc
index 6b548a5..df90840 100644
--- a/riscv-sbi.adoc
+++ b/riscv-sbi.adoc
@@ -35,6 +35,7 @@ https://creativecommons.org/licenses/by/4.0/.
* Improved documentation of SBI hart state managment extension
* Added suspend function to SBI hart state managment extension
* Added performance monitoring unit extension
+* Clarified that an SBI extension shall not be partially implemented

=== Version 0.2

@@ -52,6 +53,11 @@ abstraction for platform (or hypervisor) specific functionality. The design
of the SBI follows the general RISC-V philosophy of having a small core along
with a set of optional modular extensions.

+SBI extensions as whole are optional but they shall not be partially
+implemented. If sbi_probe_extension() signals that an extension is available,
+all functions conforming to the SBI version reported by sbi_get_spec_version()
+must be implemented in total.
+
The higher privilege software providing SBI interface to the supervisor-mode
software is referred to as an SBI implemenation or Supervisor Execution
Environment (SEE). An SBI implementation (or SEE) can be platform runtime
--
2.25.1


Re: [PATCH] Clarify that a SBI extension cannot be partially implemented

Bin Meng
 

Hi Atish,

On Tue, Jun 8, 2021 at 2:09 AM Atish Patra <Atish.Patra@...> wrote:

On Fri, 2021-06-04 at 19:05 +0000, Atish Patra wrote:
On Fri, 2021-06-04 at 20:48 +0800, Bin Meng wrote:
Hi Heinrich,

On Fri, Jun 4, 2021 at 8:13 PM Heinrich Schuchardt <xypron.glpk@...> wrote:

On 6/4/21 11:57 AM, Bin Meng wrote:
Signed-off-by: Bin Meng <bmeng.cn@...>
---

riscv-sbi.adoc | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/riscv-sbi.adoc b/riscv-sbi.adoc
index 11c30c3..8696f97 100644
--- a/riscv-sbi.adoc
+++ b/riscv-sbi.adoc
@@ -34,6 +34,7 @@ https://creativecommons.org/licenses/by/4.0/.
* Improved SBI introduction secion
* Improved documentation of SBI hart state managment extension
* Added suspend function to SBI hart state managment extension
+* Clarified that a SBI extension cannot be partially implemented

=== Version 0.2

@@ -51,6 +52,11 @@ abstraction for platform (or hypervisor) specific functionality. The design
of the SBI follows the general RISC-V philosophy of having a small core along
with a set of optional modular extensions.

+SBI extensions as whole are optional but if a SBI <abc> extension compliant
%s/a SBI/an SBI/ (as you will pronounce SBI as as-bee-aye)
Sure. Will send a new patch to fix other places in the same file.


+with SBI v0.X spec is implemented then all functions of SBI <abc> extension
+as defined in SBI v0.X are assumed to be present. Basically, a SBI extension
Can we do away with all the placeholders?

How about:

"SBI extensions as whole are optional but they shall not be
partially
implemented: If sbi_probe_extension() signals that an extension
is
available, it must be implemented in total and conform to the SBI
version reported by sbi_get_spec_version()."
This one is more verbose but sounds better to me. May be we should
just
explicitly say that "all functions belonging to that extension must
be
implemented" similar to the below version.
Hi Bin,
Are you planning to send v2 for this patch, or can I modify the text and merge?
Okay, I will send v2.

Regards,
Bin
