
Profiles for RISC-V

Aaron Durbin
 

Hi All,


My apologies up front for the long email. I hope people find this useful as well as a starting point for a broader discussion of how all these pieces fit together within RISC-V. There are implications to the decisions we make on this front. I’m cross-posting to multiple lists for visibility. However, please direct all responses to the Profiles mailing list: https://lists.riscv.org/g/tech-profiles


I had volunteered to take a stab at describing the interaction between Profiles, Platforms, and OS-A SEE. As alluded to in the last list item, this particular write-up will focus on OS-A (application-level devices) and the A Profiles. In the current proposed specification, there can and will be Profiles defined for different target markets/applications. I hope I have captured the intent of what has been said in the meetings I have been in as well as what has been written previously. Please provide feedback wherever I am wrong in my understanding of the direction on these particular topics.


For Profiles, Platforms, and OS-A SEE, there is a dependency chain: a Profile serves as the base dependency, an OS-A SEE depends on a Profile, and a Platform, in turn, depends on an OS-A SEE. The OS-A SEE is about providing the specs binary operating systems rely on for booting, while a Platform targets particular manifestations of a RISC-V implementation for a particular target market (e.g. server). Each will, or can, have different versions of its requirements over time. One can think of a tuple of those specification versions culminating in distinct requirements for both hardware and software implementations.


Profiles are expected to be a roadmap of the future support expected in both software and hardware. In the current proposed specification, there are RVA20S64 and RVA22S64 variants. There’s also a proposal for an RVA23S64 Profile based on what is known today about in-flight extensions working through the system in 2022. Within the current proposed specification there are classifications of options/extensions: mandatory, supported optional, unsupported optional, and incompatible. Please read the proposed specification for details of the meanings. In short, the mandatory classification provides baseline expectations, and the incompatible classification sets an expectation that those options/extensions are *not* present.


The intention for a Profile is to “only describe ISA features, not a complete execution environment.” It should be noted, though it may be obvious to some, that a Profile provides expectations both for the instruction stream that can be emitted from a compiler and for other architectural state that does not impact the compiler’s instruction emission (e.g. runtime handling within the OS). The Profile expects that software can assume all of those mandatory requirements exist, whether in instruction support or at runtime.


The current cadence is a yearly Profile release that incorporates the latest ratified extensions. While extensions/options may start in the supported optional categorization, the trend is a progression to the mandatory classification for ratified extensions. (Note: the RVA23S64 proposal does have a case where an extension moves in the opposite direction.)


With the aforementioned yearly Profile release cadence setting the roadmap direction, one needs to figure out how to intercept a particular release from both a software and a hardware implementation standpoint. An OS-A SEE version would need to pick a Profile version. With that in mind, there is now a default target for the compiler to choose instructions from (the mandatory instructions). For the kernel itself adhering to a particular Profile, it’s not clear whether the kernel would refuse to boot if a particular mandatory option/extension was not available on a given implementation. Hardware implementers can take 2-3 years to deliver a product, so there is inherently a lag in intercepting a Profile on the hardware front; an RVA22 Profile may not see compatible hardware until 2024, for example. But for binary compatibility of operating systems, one needs to figure out how to align against a Profile version (by way of an OS-A SEE). An implication I observe from the progression of extensions/options to mandatory is a multiplier effect on the number of build targets. That can be solved by fixing on a particular (read: older) Profile version in time and handling newer extensions/options through runtime choices. That’s simplistic, but I’m not sure it’s necessarily the best path. We may need to decide which extensions we truly want to be mandatory as a fixed baseline. A biennial cadence was proposed for an OS-A SEE release, but as I noted there are implications from the transitive dependencies of the Profile progression. I do want to point out that the previous explanation was focused on binary compatibility. If users/operators can rebuild their software, they can certainly target newer Profiles with the knowledge that their particular piece of hardware supports them.
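
To make the "runtime choices for newer extensions/options" point concrete, here is a minimal C sketch of what a binary-distributed piece of software might do: build against an older-Profile baseline and only use a newer extension when a runtime probe says it is present. The platform_has_extension() helper, the Zbb example, and checksum_accelerated() are all illustrative; how a binary actually discovers extensions (device tree, SBI, a hwprobe-style interface) is exactly the kind of thing an OS-A SEE would need to define.

/*
 * Illustrative only: dispatch between a baseline (older-Profile) code path
 * and one built with a newer optional extension, based on a runtime probe.
 * The probe mechanism is deliberately left abstract.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical probe; a real implementation would query the SEE/kernel. */
extern bool platform_has_extension(const char *ext_name);

/* Stand-in for a variant compiled with a newer -march (e.g. with Zbb). */
extern uint64_t checksum_accelerated(const uint8_t *buf, size_t len);

static uint64_t checksum_baseline(const uint8_t *buf, size_t len)
{
        uint64_t sum = 0;

        for (size_t i = 0; i < len; i++)
                sum += buf[i];
        return sum;
}

uint64_t checksum(const uint8_t *buf, size_t len)
{
        static int use_accel = -1;

        /* Probe once; every later call takes the already-chosen path. */
        if (use_accel < 0)
                use_accel = platform_has_extension("zbb") ? 1 : 0;

        return use_accel ? checksum_accelerated(buf, len)
                         : checksum_baseline(buf, len);
}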


In short, we have multiple moving trains, and we need to align the various specifications at the right versions to foster the best possible outcome for the RISC-V ecosystem. There are implications both for hardware implementers bringing products to market and for software vendors doing the same.


Thanks for taking the time to read and digest. I hope it produces a fruitful discussion that allows us to come up with the best plan that achieves the goals of hardware implementers, software implementers, and users.


-Aaron



Call for Candidates - OS-A SEE TG

Aaron Durbin
 

All,

As per the policy governing chairs and vice chairs, we are holding a call for candidates for the positions of CHAIR and VICE-CHAIR for the OS-A SEE TG. To nominate yourself or another member of the community, please respond to this thread with a short biography of the candidate (less than 250 words) as well as a statement of intent (less than 250 words).

If both Chair and Vice-Chair are available, candidates may nominate for one or both roles.

A diversity of ideas and contributions, originating from a diverse community drawn from all walks of life, cultures, countries, and skin colors, is vital for building sustainable and healthy open source communities. Individuals from diverse backgrounds inject new and innovative ideas to advance an inclusive and welcoming ecosystem for all. RISC-V is committed to building diverse and inclusive communities.

This call for candidates will be open for 2 weeks, ending on May 26, 2022 (inclusive).

If you have any questions regarding this process please contact help@....

The currently approved charter by the Priv SW HC (née Software HC) can be found here.

Kind regards,
Aaron Durbin


RISC-V UEFI Protocol Specification - Public review completed

Sunil V L
 

Hi All,

I am pleased to inform you that the 45-day public review period for the
RISC-V UEFI Protocol Spec has ended. Feedback received during the review
period has been addressed.

We will go through the remaining process as per the "Ratification Ready" row in
https://docs.google.com/presentation/d/1nQ5uFb39KA6gvUi5SReWfIQSiRN7hp6z7ZPfctE4mKk/edit#slide=id.ga0a994c3c8_0_6

The latest spec for the final ratification stage is available at,
https://github.com/riscv-non-isa/riscv-uefi/releases/download/1.0.0/RISCV_UEFI_PROTOCOL-spec.pdf

Updated Status checklist can be found at:
https://docs.google.com/spreadsheets/d/1EHRXPGZnqUxiBoBHN9Vc7Svkoi2RPZITqyld5IAunDE/edit#gid=1029126936

Thanks!
Sunil


OS-A SEE Update

Aaron Durbin
 

Hi All,

I'm cross posting to tech-unixplatformspec@ and tech-os-a-see@ lists because there wasn't sufficient overlap in membership to get the proper visibility.


I hope people will respond with their feedback so we can move forward. I purposefully made things generic so that we can focus on the aspects of booting an OS as well as obtaining any necessary information at runtime. Hypervisor and non-hypervisor support is fully expected, in that the same OS could be booted with or without the hypervisor extension.

The one bit of feedback that I have heard is that SEE is a term reused from the privileged spec. While SEE is generic within the privileged spec, we're proposing OS-A-specific SEE expectations. As such, it was suggested we formally call it the SEEI (Supervisor Execution Environment Interface). The charter has not been updated to reflect that, but I think it's a good idea to provide specificity and clarity without colliding with terms already in use.

More on the intended approach of the OS-A SEE spec: it is my belief that to kickstart the ecosystem we need a common set of interfaces the OS can rely on for booting and runtime. However, it should not prescribe particular hardware support (unless absolutely necessary, which we need to discuss). The reason for this thinking is that SW has the ability to dynamically probe and bind drivers for particular HW support. We should take advantage of this to create a big umbrella that allows binary portability across RISC-V implementations. That is why the focus would largely be on interacting with the SEE at boot and runtime.
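
As a rough illustration of that probe-and-bind model (the driver names, the dt_node type, and the bind_node() helper below are all made up; real kernels have their own frameworks for this), the point is only that the OS matches what it discovers via DT/ACPI against the drivers it carries, rather than the SEE spec mandating specific hardware:

/*
 * Illustrative sketch of compatible-string driver binding as done with
 * Device Tree (ACPI uses IDs, but the idea is the same).
 */
#include <stddef.h>
#include <string.h>

struct dt_node;                          /* opaque devicetree node */

struct driver {
        const char *compatible;          /* e.g. "vendor,uart-xyz" */
        int (*probe)(struct dt_node *);
};

extern int uart_xyz_probe(struct dt_node *);
extern int rtc_abc_probe(struct dt_node *);

static const struct driver drivers[] = {
        { "vendor,uart-xyz", uart_xyz_probe },
        { "vendor,rtc-abc",  rtc_abc_probe  },
};

/*
 * Bind a discovered node to whichever driver claims its compatible string.
 * Unknown hardware is simply left unbound; it does not stop the boot.
 */
int bind_node(struct dt_node *node, const char *compatible)
{
        for (size_t i = 0; i < sizeof(drivers) / sizeof(drivers[0]); i++) {
                if (strcmp(drivers[i].compatible, compatible) == 0)
                        return drivers[i].probe(node);
        }
        return 0; /* no driver; ignore the device */
}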

Please let me know your thoughts.

-Aaron


Re: Watchdog Spec Questions

Greg Favor
 

On Tue, Apr 19, 2022 at 4:48 PM Phil McCoy <pnm@...> wrote:
Several modifications to the WDT spec were proposed, and seemingly agreed in this discussion thread.  Is there any plan to actually update the spec?  Should I try to learn enough git-foo to generate a pull request?

Any plans to try to get this ratified in the foreseeable future?

I wouldn't say there was true convergence on a near-final hardware WDT spec.  To start with, there have also been arguments to abstract any hardware away from what S/HS-mode sees and to instead have an OS/hypervisor make SBI calls to set up its watchdog and to "refresh" it, and it could receive some form of callback on a first-stage watchdog timeout.  This is based on the frequency of watchdog refreshes being relatively very low and nowhere close to being a performance overhead issue.

In any case, proper discussion and resolution of this issue should happen in the OS-A SEE TG (whether that results in a RISC-V hardware standard like in ARM architecture or in a RISC-V SBI standard extension).
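
For illustration only, a rough sketch of what such an SBI-mediated watchdog could look like from S-mode. No such SBI extension exists today: the extension ID and function IDs below are invented, and only the SBI calling convention (EID in a7, FID in a6, error/value returned in a0/a1) is real.

/* Hypothetical SBI watchdog wrappers; IDs are made up for illustration. */
#include <stdint.h>

#define SBI_EXT_WDT_HYPOTHETICAL  0x0A000000UL  /* made-up extension ID */
#define SBI_WDT_SET_TIMEOUT       0UL           /* made-up function IDs  */
#define SBI_WDT_REFRESH           1UL

struct sbiret {
        long error;
        long value;
};

static struct sbiret sbi_ecall1(unsigned long eid, unsigned long fid,
                                unsigned long arg0)
{
        register unsigned long a0 asm("a0") = arg0;
        register unsigned long a1 asm("a1") = 0;
        register unsigned long a6 asm("a6") = fid;
        register unsigned long a7 asm("a7") = eid;

        asm volatile("ecall"
                     : "+r"(a0), "+r"(a1)
                     : "r"(a6), "r"(a7)
                     : "memory");

        return (struct sbiret){ .error = (long)a0, .value = (long)a1 };
}

/* The OS arms the watchdog once, then "refreshes" it from a periodic task. */
static inline long wdt_set_timeout(unsigned long timeout_ms)
{
        return sbi_ecall1(SBI_EXT_WDT_HYPOTHETICAL, SBI_WDT_SET_TIMEOUT,
                          timeout_ms).error;
}

static inline long wdt_refresh(void)
{
        return sbi_ecall1(SBI_EXT_WDT_HYPOTHETICAL, SBI_WDT_REFRESH, 0).error;
}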

Greg


 


Re: Watchdog Spec Questions

Phil McCoy <pnm@...>
 

Several modifications to the WDT spec were proposed, and seemingly agreed in this discussion thread.  Is there any plan to actually update the spec?  Should I try to learn enough git-foo to generate a pull request?

Any plans to try to get this ratified in the foreseeable future?

I'm currently trying to finalize a WDT implementation spec, and would like to align with the official spec as much as possible.

Thanks,
Phil


Re: [RISC-V][tech-os-a-see] OS-A SEE Proposed Charter

Aaron Durbin
 



On Fri, Apr 15, 2022 at 6:56 AM <darius@...> wrote:
On Tue, Apr 12, 2022 at 09:43:43AM -0700, Aaron Durbin wrote:
> On Fri, Apr 8, 2022 at 3:45 AM <darius@...> wrote:
> > On Mon, Apr 04, 2022 at 08:49:28AM -0600, Aaron Durbin wrote:
> > > On Fri, Mar 25, 2022 at 7:25 AM <darius@...> wrote:
> > >
> > > >
> > > >   - The RISC-V Privileged Architecture treats the SEE as "concrete
> > > >     instances of components implementing the interfaces", whereas the
> > > >     proposed charter is somewhat more vague.  Obviously that
> > definition is
> > > >     what we are discussing here, but generally it should be clear
> > whether
> > > >     the SEE is (1) one or a set of interfaces, (2) a reference or
> > abstract
> > > >     implementation, or (3) any one of a number of specific
> > implementations.
> > > >
> > >
> > > It would be (3). I'll adjust to try to make that clearer. Feel free to
> > > provide specific suggestions as well to the prose.
> > >
> >
> > Maybe I am missing some nuance, but what I keep going back to is that it
> > seems the very thing you are describing is exactly the SBI as defined in the
> > RISC-V Privileged Architecture.  Unfortunate terminology aside, to me it
> > seems like the ideal term is SBI and is clearly described in the Privileged
> > Architecture.
> >
> > A specification typically defines the interfaces, as the concrete instance
> > is up to each implementation.  I guess I don't understand how we would
> > define the *environment* (SEE) if not by the *interface* (SBI), and if we
> > are defining the *interface*, is that not SBI (per the priv spec
> > definition)?
> >
> >
> I think I answered your original question incorrectly (or I misunderstood
> what (3) was suggesting. I was suggesting (3) could be achieved by
> adhering/using (1).
>
> W.r.t. to SBI vs SEE, I appreciate the attention to detail, Darius. You are
> correct that the current privilege spec calls these abstract concepts out
> in that specific way. FWIW, the HBI and HEE as currently detailed in the
> specification are not reality in how the H extension has been defined.
>

I haven't been following the H extension closely, and was not aware the
HBI/HEE terms have suffered the same fate.  This is one of those things
that makes it hard to convince people to take RISC-V seriously; when we
can't even be consistent with terms that we invent.

> How things have progressed in RISC-V has caused a naming collision. We
> have the abstract SBI concept from the current incarnation of the ISA
> manual, and we have the SBI specification. The latter does fully define the
> interfaces a supervisor implementation requires in the world of an OS-A
> rich OS. That includes other entities, e.g. UEFI, DeviceTree bindings,
> ACPI, etc in addition to some pieces of the SBI specification.
>
> So what should we call this? SBI is already overloaded. We could tack on
> an 'I' for interface and make it SEEI, but there isn't a pure 'SBI' name
> that one can take advantage of because that ship has already sailed.
>

I was hoping that since the SBI specification had not been ratified that it
could be renamed to be more consistent with prior usage of that term.  But
if, as you say, that ship has sailed, then I'll drop it.

SEEI seems reasonable, or perhaps SEI for Supervisor Environment Interface.
I don't have a strong opinion about the term that is used, other than it
*not* be SEE, so we don't perpetuate this bad habit of reusing specific
terms with explicitly different meanings.  And that we define the term
clearly at whatever point we start using it (i.e., in the charter, if the
charter uses it).

Both of those sound reasonable to me. I didn't modify the charter yet as I'd like to hear from others on this list with their opinions. I can make the change as required subsequently.
 

> > > >   - SBI is used in the RISC-V Privileged Architecture as the entirety
> > > >     of the interface between the SEE and the operating system.
> > > >     Separately, we have this RISC-V thing called SBI (of which OpenSBI
> > > >     is an implementation) which encapsulates some, but not all, of
> > > >     that functionality.  For example, memory mapped peripherals are
> > > >     not part of the OpenSBI type of SBI, but appear to be covered by
> > > >     the other style SBI.
> > > >
> > >
> > > It's an unfortunate reuse of terms that mean different things. It's also
> > > abstract in that current implementations/usage does not rely on SBI spec
> > as
> > > its only dependency in booting the kernel.
> > >
> >
> > I don't follow that second sentence, could you elaborate?
> >
>
> The kernels don't only use SBI, as in the SBI specification, to boot and
> run for an OS-A machine. As noted above other specs and dependencies exist
> beyond SBI when SBI means 'SBI specification'.
>

I think I understand what you are saying now, and I think that was roughly
the same point I was trying to make, in the context of advocating for a
single consistent use of the term SBI (for one usage only).  In any case, I
guess we're just accepting that the term SBI is a confusing mess.

> > > > The proposed charter seems to be unclear on this point.  It says "focus
> > > > will be on the interfaces between the SEE and the hosted environment",
> > > > which suggests the lower interface to me.  However, it also refers to
> > > > binary compatible operating systems, which suggests the upper
> > interface.
> > > >
> > >
> > > The target of SEE is for entities building binary compatible operating
> > > systems. i.e. One could take a distribution from any provider and boot
> > said
> > > distribution on a compatible RISC-V implementation. I adjusted the
> > verbiage
> > > some to hopefully make that clearer w.r.t. intent.
> > >
> >
> > Maybe I'm missing something, because I am envisioning that the SEE *is* the
> > firmware or hosted environment.  I don't understand what these interfaces
> > would be between.
> >
>
> SEE is the supervisor's environment and how the supervisor interacts w/ the
> firmware or hosted environment is what we are attempting to nail down and
> define. I don't think there are any interfaces between or if there are then
> that would be out of the scope of this endeavor.
>

Perhaps instead of "The focus will be on the interfaces between the SEE and
the hosted environment", use "The focus will be on the interfaces between
the operating system and the SEE (through which the operating system may,
for example, access services provided by the firmware or otherwise)".

I adjusted the language; however, I used 'its SEE' instead of 'the SEE' to hopefully convey that the kernel/OS is residing within the SEE for itself -- not some other entity.



// darius







Re: [RISC-V][tech-os-a-see] OS-A SEE Proposed Charter

Darius Rad
 

On Tue, Apr 12, 2022 at 09:43:43AM -0700, Aaron Durbin wrote:
On Fri, Apr 8, 2022 at 3:45 AM <darius@...> wrote:
On Mon, Apr 04, 2022 at 08:49:28AM -0600, Aaron Durbin wrote:
On Fri, Mar 25, 2022 at 7:25 AM <darius@...> wrote:


- The RISC-V Privileged Architecture treats the SEE as "concrete
instances of components implementing the interfaces", whereas the
proposed charter is somewhat more vague. Obviously that
definition is
what we are discussing here, but generally it should be clear
whether
the SEE is (1) one or a set of interfaces, (2) a reference or
abstract
implementation, or (3) any one of a number of specific
implementations.
It would be (3). I'll adjust to try to make that clearer. Feel free to
provide specific suggestions as well to the prose.
Maybe I am missing some nuance, but what I keep going back to is that it
seems the very thing you are describing is exactly the SBI as defined in the
RISC-V Privileged Architecture. Unfortunate terminology aside, to me it
seems like the ideal term is SBI and is clearly described in the Privileged
Architecture.

A specification typically defines the interfaces, as the concrete instance
is up to each implementation. I guess I don't understand how we would
define the *environment* (SEE) if not by the *interface* (SBI), and if we
are defining the *interface*, is that not SBI (per the priv spec
definition)?

I think I answered your original question incorrectly (or I misunderstood
what (3) was suggesting. I was suggesting (3) could be achieved by
adhering/using (1).

W.r.t. to SBI vs SEE, I appreciate the attention to detail, Darius. You are
correct that the current privilege spec calls these abstract concepts out
in that specific way. FWIW, the HBI and HEE as currently detailed in the
specification are not reality in how the H extension has been defined.
I haven't been following the H extension closely, and was not aware the
HBI/HEE terms have suffered the same fate. This is one of those things
that makes it hard to convince people to take RISC-V seriously; when we
can't even be consistent with terms that we invent.

How things have progressed in RISC-V has caused a naming collision. We
have the abstract SBI concept from the current incarnation of the ISA
manual, and we have the SBI specification. The latter does fully define the
interfaces a supervisor implementation requires in the world of an OS-A
rich OS. That includes other entities, e.g. UEFI, DeviceTree bindings,
ACPI, etc in addition to some pieces of the SBI specification.

So what should we call this? SBI is already overloaded. We could tack on
an 'I' for interface and make it SEEI, but there isn't a pure 'SBI' name
that one can take advantage of because that ship has already sailed.
I was hoping that since the SBI specification had not been ratified that it
could be renamed to be more consistent with prior usage of that term. But
if, as you say, that ship has sailed, then I'll drop it.

SEEI seems reasonable, or perhaps SEI for Supervisor Environment Interface.
I don't have a strong opinion about the term that is used, other than it
*not* be SEE, so we don't perpetuate this bad habit of reusing specific
terms with explicitly different meanings. And that we define the term
clearly at whatever point we start using it (i.e., in the charter, if the
charter uses it).

- SBI is used in the RISC-V Privileged Architecture as the entirety
of the interface between the SEE and the operating system.
Separately, we have this RISC-V thing called SBI (of which OpenSBI
is an implementation) which encapsulates some, but not all, of
that functionality. For example, memory mapped peripherals are
not part of the OpenSBI type of SBI, but appear to be covered by
the other style SBI.
It's an unfortunate reuse of terms that mean different things. It's also
abstract in that current implementations/usage does not rely on SBI spec
as
its only dependency in booting the kernel.
I don't follow that second sentence, could you elaborate?
The kernels don't only use SBI, as in the SBI specification, to boot and
run for an OS-A machine. As noted above other specs and dependencies exist
beyond SBI when SBI means 'SBI specification'.
I think I understand what you are saying now, and I think that was roughly
the same point I was trying to make, in the context of advocating for a
single consistent use of the term SBI (for one usage only). In any case, I
guess we're just accepting that the term SBI is a confusing mess.

The proposed charter seems to be unclear on this point. It says "focus
will be on the interfaces between the SEE and the hosted environment",
which suggests the lower interface to me. However, it also refers to
binary compatible operating systems, which suggests the upper
interface.
The target of SEE is for entities building binary compatible operating
systems. i.e. One could take a distribution from any provider and boot
said
distribution on a compatible RISC-V implementation. I adjusted the
verbiage
some to hopefully make that clearer w.r.t. intent.
Maybe I'm missing something, because I am envisioning that the SEE *is* the
firmware or hosted environment. I don't understand what these interfaces
would be between.
SEE is the supervisor's environment and how the supervisor interacts w/ the
firmware or hosted environment is what we are attempting to nail down and
define. I don't think there are any interfaces between or if there are then
that would be out of the scope of this endeavor.
Perhaps instead of "The focus will be on the interfaces between the SEE and
the hosted environment", use "The focus will be on the interfaces between
the operating system and the SEE (through which the operating system may,
for example, access services provided by the firmware or otherwise)".

// darius


[PATCH] pcie: Update 4.7.3.1

Mayuresh Chitale
 

Add requirement for preserving the PCIe ID routing as described in issue:
https://github.com/riscv/riscv-platform-specs/issues/81

Signed-off-by: Mayuresh Chitale <mchitale@...>
---
riscv-platform-spec.adoc | 2 ++
1 file changed, 2 insertions(+)

diff --git a/riscv-platform-spec.adoc b/riscv-platform-spec.adoc
index e06500d..289163c 100644
--- a/riscv-platform-spec.adoc
+++ b/riscv-platform-spec.adoc
@@ -825,6 +825,8 @@ supported PCIe domains and map the ECAM I/O region for each domain.
* Platform software must configure ECAM I/O regions such that the effective
memory attributes are that of a PMA I/O region (i.e. strongly-ordered,
non-cacheable, non-idempotent).
+* If the platform software (for e.g OS) re-enumerates the PCIe topology then it
+is required that the underlying fabric routing is always correctly preserved.

===== PCIe Memory Space
Platforms are required to map PCIe address space directly in the system address
--
2.17.1


Handoff between secure firmware and non-secure Firmware via HOB lists

Heinrich Schuchardt
 

Currently the SBI specification defines how to hand over device trees from the SEE to the S-mode firmware.

In the context of Trusted Firmware-A, a document has been developed describing what a more generic handover structure may look like, one that also encompasses ACPI tables and additional information such as TPM measurements.

https://developer.arm.com/documentation/den0135/a

As EDK II and U-Boot will probably adopt parsing this structure, it would make sense to discuss whether the same can be used in the RISC-V world too.
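
For readers unfamiliar with the document, the general shape is a list of tagged entries handed from one firmware stage to the next. The sketch below is only meant to convey that shape; the field names, widths, and alignment are not taken from DEN0135, which should be consulted for the authoritative layout.

/*
 * Illustrative only: a generic tagged handoff entry list.  NOT the actual
 * DEN0135 layout; fields and alignment here are invented for the example.
 */
#include <stddef.h>
#include <stdint.h>

struct handoff_entry {
        uint32_t tag;        /* identifies the payload: FDT, ACPI, TPM log, ... */
        uint32_t data_size;  /* size of the payload that follows */
        /* payload follows, padded to the list's alignment */
};

/* Walk the list and return the payload for a given tag, or NULL. */
static void *handoff_find(void *list, size_t list_size, uint32_t tag)
{
        uint8_t *p = list;
        uint8_t *end = p + list_size;

        while (p + sizeof(struct handoff_entry) <= end) {
                struct handoff_entry *e = (struct handoff_entry *)p;
                size_t total = sizeof(*e) + e->data_size;

                if (p + total > end)
                        break;
                if (e->tag == tag)
                        return e + 1;                  /* payload after header */
                p += (total + 7) & ~(size_t)7;         /* assume 8-byte alignment */
        }
        return NULL;
}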

Best regards

Heinrich


Next Platform HSC Meeting on Mon Apr 4th 2022 8AM PST

Kumar Sankaran
 

Hi All,
The next platform HSC meeting is scheduled on Mon Apr 4th 2022 at 8AM PST.

Here are the details:

Agenda and minutes kept on the github wiki:
https://github.com/riscv/riscv-platform-specs/wiki

Here are the slides:
https://docs.google.com/presentation/d/1yRfVWjIqKK0QvjAx-oaFYWjxATegQAqB1zxsOZhPbxM/edit#slide=id.g120b4f4f100_0_0

Meeting info
Zoom meeting: https://zoom.us/j/2786028446
Passcode: 901897

Or iPhone one-tap :
US: +16465588656,,2786028466# or +16699006833,,2786028466# Or Telephone:
Dial(for higher quality, dial a number based on your current location):
US: +1 646 558 8656 or +1 669 900 6833
Meeting ID: 278 602 8446
International numbers available:
https://zoom.us/zoomconference?m=_R0jyyScMETN7-xDLLRkUFxRAP07A-_

Regards
Kumar


[PATCH] Fix typos in introduction for RISCV_EFI_BOOT_PROTOCOL

Heinrich Schuchardt
 

UEFI talks of configuration tables, not firmware tables.

Add missing 'the', 'and'.

Enhance readability of sentence concerning ExitBootServices().

Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@...>
---
boot_protocol.adoc | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/boot_protocol.adoc b/boot_protocol.adoc
index 12846b6..5c56edd 100644
--- a/boot_protocol.adoc
+++ b/boot_protocol.adoc
@@ -1,7 +1,7 @@
[[boot_protocol]]
== RISCV_EFI_BOOT_PROTOCOL
Either Device Tree (DT) or Advanced Configuration and Power Interface (ACPI)
-firmware tables are used to convey the information about hardware to the
+configuration tables are used to convey the information about hardware to the
Operating Systems. Some of the information are known only at boot time and
needed very early before the Operating Systems/boot loaders can parse the
firmware tables.
@@ -9,16 +9,17 @@ firmware tables.
One example is the boot hartid on RISC-V systems. On non-UEFI systems, this is
typically passed as an argument to the kernel (in a0). However, UEFI systems need
to follow UEFI application calling conventions and hence it can not be passed in
-a0. There is an existing solution which uses /chosen node in DT based systems to
-pass this information. However, this solution doesn't work for ACPI based
+a0. There is an existing solution which uses the /chosen node in DT based systems
+to pass this information. However, this solution doesn't work for ACPI based
systems. Hence, a UEFI protocol is preferred for both DT and ACPI based systems.

This UEFI protocol for RISC-V systems provides early information to the
-bootloaders or Operating Systems. Firmwares like EDK2/u-boot need to implement
-this protocol on RISC-V UEFI systems.
+bootloaders or Operating Systems. Firmwares like EDK2 and u-boot need to
+implement this protocol on RISC-V UEFI systems.

-This protocol is typically used by the bootloaders before *ExitBootServices()*
-call and pass the information to the Operating Systems.
+This protocol is typically called by the bootloaders before invoking
+*ExitBootServices()*. They then pass the information to the Operating
+Systems.

The version of RISCV_EFI_BOOT_PROTOCOL specified by this specification is
0x00010000. All future revisions must be backwards compatible. If a new version
--
2.34.1


Re: Public review of RISC-V UEFI Protocol Specification

Anup Patel
 

(Resending for RISC-V ISA-DEV and RISC-V SW-DEV because previous email
was not received on these lists.)

On Wed, Mar 23, 2022 at 9:46 PM Sunil V L <sunilvl@...> wrote:

This is to announce the start of the public review period for
the RISC-V UEFI Protocol specification. This specification is
considered as frozen now as per the RISC-V International policies.

The review period begins today, Wednesday March 23, and ends on Friday
May 6 (inclusive).

The specification can be found here
https://github.com/riscv-non-isa/riscv-uefi/releases/download/1.0-rc3/RISCV_UEFI_PROTOCOL-spec.pdf

which was generated from the source available in the following GitHub
repo:
https://github.com/riscv-non-isa/riscv-uefi

The specification is also attached in this email.

To respond to the public review, please either reply to this email or
send comments to the platform mailing list[1] or add issues to the
GitHub repo[2]. We welcome all input and appreciate your time and
effort in helping us by reviewing the specification.

During the public review period, corrections, comments, and
suggestions, will be gathered for review by the Platform HSC members.
Any minor corrections and/or uncontroversial changes will be
incorporated into the specification. Any remaining issues or proposed changes
will be addressed in the public review summary report. If there are no
issues that require incompatible changes to the public review
specification, the platform HSC will recommend the updated
specifications be approved and ratified by the RISC-V Technical
Steering Committee and the RISC-V Board of Directors.

Thanks to all the contributors for all their hard work.

[1] tech-unixplatformspec@...
[2] https://github.com/riscv-non-isa/riscv-uefi/issues

Regards
Sunil






Public review of RISC-V UEFI Protocol Specification

Sunil V L
 

This is to announce the start of the public review period for
the RISC-V UEFI Protocol specification. This specification is
considered as frozen now as per the RISC-V International policies.

The review period begins today, Wednesday March 23, and ends on Friday
May 6 (inclusive).

The specification can be found here
https://github.com/riscv-non-isa/riscv-uefi/releases/download/1.0-rc3/RISCV_UEFI_PROTOCOL-spec.pdf

which was generated from the source available in the following GitHub
repo:
https://github.com/riscv-non-isa/riscv-uefi

The specification is also attached in this email.

To respond to the public review, please either reply to this email or
send comments to the platform mailing list[1] or add issues to the
GitHub repo[2]. We welcome all input and appreciate your time and
effort in helping us by reviewing the specification.

During the public review period, corrections, comments, and
suggestions, will be gathered for review by the Platform HSC members.
Any minor corrections and/or uncontroversial changes will be
incorporated into the specification. Any remaining issues or proposed changes
will be addressed in the public review summary report. If there are no
issues that require incompatible changes to the public review
specification, the platform HSC will recommend the updated
specifications be approved and ratified by the RISC-V Technical
Steering Committee and the RISC-V Board of Directors.

Thanks to all the contributors for all their hard work.

[1] tech-unixplatformspec@...
[2] https://github.com/riscv-non-isa/riscv-uefi/issues

Regards
Sunil


Next Platform HSC Meeting

Kumar Sankaran
 

Hi All,

Due to lack of a full agenda, I am canceling the next platform HSC meeting on Monday Mar 21st 2022. This way, people can use this time to attend other RISC-V meetings.

 

In terms of the discussion topics, the following is the status of the OS-A SEE TG from Aaron Durbin who is the acting Chair of this group.

    1. Proposed Charter agreement
    2. Charter approval
    3. Call for Chairs
    4. Flesh out spec.

 

I will update the above to the meeting minutes for Mar 21st 2022.

 

Agenda and minutes kept on the github wiki:

https://github.com/riscv/riscv-platform-specs/wiki

 

Regards

Kumar


OS-A SEE TG Infrastructure

Aaron Durbin
 

Hi All,

I wanted to point out that we have GitHub repositories and a mailing list for OS-A SEE (Supervisor Execution Environment) TG. Please join if you are interested.

GitHub Spec: https://github.com/riscv-non-isa/riscv-os-a-see

Nothing yet has been seeded from an OS-A SEE perspective in those repos. The OS-A SEE TG is still in the Inception phase. As such, we need to nail down a charter. My plan was to submit a preliminary charter to the admin repository as a starting point for us to work on. We will also need to call for chairs for the OS-A SEE TG. Those are the initial first steps, and then we can pull pieces from the existing OS-A platform spec (https://github.com/riscv/riscv-platform-specs/blob/main/riscv-platform-spec.adoc) that adhere to and follow the approved charter for the OS-A SEE TG to get the spec rolling. The OS-A Platform will then depend on the OS-A SEE spec (which transitively will depend on the RVA22S64 Profile).

For those that missed the memo, this TG is a part of a broader Platform reorg as detailed here: https://docs.google.com/presentation/d/1gldII0Gziyz2ajwgT8z5vPhw_HBuUmCJWHOiiQ4CAVs/edit#slide=id.g116d4f1a24e_0_678

I look forward to working with all of you. And, please feel free to use this thread to provide any feedback or thoughts on direction for the OS-A SEE TG.

-Aaron


Next Platform HSC Meeting on Mon Mar 7th 2022 8AM PST

Kumar Sankaran
 

Hi All,
The next platform HSC meeting is scheduled on Mon Mar 7th 2022 at 8AM PST.

Here are the details:

Agenda and minutes kept on the github wiki:
https://github.com/riscv/riscv-platform-specs/wiki

Here are the slides:
https://docs.google.com/presentation/d/1gldII0Gziyz2ajwgT8z5vPhw_HBuUmCJWHOiiQ4CAVs/edit#slide=id.g116d4f1a24e_0_685

Meeting info
Zoom meeting: https://zoom.us/j/2786028446
Passcode: 901897

Or iPhone one-tap :
US: +16465588656,,2786028466# or +16699006833,,2786028466# Or Telephone:
Dial(for higher quality, dial a number based on your current location):
US: +1 646 558 8656 or +1 669 900 6833
Meeting ID: 278 602 8446
International numbers available:
https://zoom.us/zoomconference?m=_R0jyyScMETN7-xDLLRkUFxRAP07A-_

Regards
Kumar


Re: Watchdog timer per hart?

Allen Baum
 

That's a bit looser a definition than I'd expect, but that explains your comments, certainly. Thx.

On Wed, Mar 2, 2022 at 5:14 PM Greg Favor <gfavor@...> wrote:
On Wed, Mar 2, 2022 at 4:54 PM Allen Baum <allen.baum@...> wrote:
Don't they even define whether restartability is required or not?

Since the suitable response to a first or second stage timeout is rather system-specific, ARM didn't try to ordain exactly where the timeout signals go and what happens as a result.  In SBSA they just described the general expected possibilities (which my previous remarks were based on).  But here's what a 2020 version of BSA says (which is roughly similar to SBSA but a bit narrower in the possibilities it describes):

The basic function of the Generic Watchdog is to count for a fixed period of time, during which it expects to be
refreshed by the system indicating normal operation. If a refresh occurs within the watch period, the period is
refreshed to the start. If the refresh does not occur then the watch period expires, and a signal is raised and a
second watch period is begun.

The initial signal is typically wired to an interrupt and alerts the system. The system can attempt to take
corrective action that includes refreshing the watchdog within the second watch period. If the refresh is
successful, the system returns to the previous normal operation. If it fails, then the second watch period
expires and a second signal is generated. The signal is fed to a higher agent as an interrupt or reset for it to
take executive action.

Greg
 

On Wed, Mar 2, 2022 at 4:00 PM Greg Favor <gfavor@...> wrote:
Even ARM SBSA allowed a lot of flexibility as to where the first-stage and second-stage timeout "signals" went (which ultimately then placed the handling in the hands of software somewhere).  In other words, SBSA didn't prescribe the details of the overall watchdog handling picture.

Greg

On Wed, Mar 2, 2022 at 2:35 PM Allen Baum <allen.baum@...> wrote:
Now we're starting to drill down appropriately. There is a wide range.
This is me thinking out loud and trying desperately to avoid the real work I should be doing:

 - A watchdog timer event can cause an interrupt (as opposed to a HW reset)
  -- maskable or non-maskable? 
  -- Using xTVEC to vector or a platform-defined vector? (e.g. the reset vector)
  -- A new cause type or reuse an existing one? (e.g. using the reset cause)
  -- restartable or non-restartable or both? (both implies - to me at least-  the 2 stage watchdog concept, "pulling the emergency cord")
      If the watchdog timer is restartable, either it must
        --- be maskable, or 
        --- implement something like the restartable-NMI spec to be able to save state.
   -- what does "pulling the emergency cord" do? e.g. 
       --- some kind of HW reset (we had a light reset at Intel that cleared as little as possible so that a post-mortem dump could identify what was going on)
       --- just vector to a SW handler (obviously this should depend on why the watchdog timer was activated, e.g. waiting for a HW event or SW event)


On Wed, Mar 2, 2022 at 12:41 PM Kumar Sankaran <ksankaran@...> wrote:
From a platform standpoint, the intent was to have a single platform
level watchdog that is shared across the entire platform. This
platform watchdog could be the 2-level watchdog as described below by
Greg. Whether S-mode software or M-mode software would handle the
tickling of this watchdog and handle timeouts is a subject for further
discussion.

On Wed, Mar 2, 2022 at 12:34 PM Greg Favor <gfavor@...> wrote:
>
> On Wed, Mar 2, 2022 at 12:23 PM Aaron Durbin <adurbin@...> wrote:
>>
>> Yes. Greg articulated what I was getting at better than I did. I apologize for muddying the waters. From a platform standpoint one system-level watchdog should suffice as it's typically the last resort of restarting a system prior to sending a tech out.
>
>
> One comment - for when any concrete discussion about having a system-level watchdog occurs:
>
> One can have a one-stage or a two-stage watchdog.  The former yanks the emergency cord on the system upon timeout.
>
> The latter (which is what ARM defined in SBSA and the subsequent BSA) interrupts the OS on the first timeout and gives it a chance to take remedial actions (and refresh the watchdog).  Then, if a second timeout occurs (without a refresh after the first timeout), the emergency cord is yanked.
>
> ARM also defined separate Secure and Non-Secure watchdogs (akin to what one might call S-mode and M-mode watchdogs).  The OS has its own watchdog to tickle and an emergency situation results in reboot of the OS (for example).  And the Secure Monitor has its own watchdog and an emergency situation results in reboot of the system (for example).
>
> Greg
>
>



--
Regards
Kumar






Re: Watchdog timer per hart?

Greg Favor
 

On Wed, Mar 2, 2022 at 4:54 PM Allen Baum <allen.baum@...> wrote:
Don't they even define whether restartability is required or not?

Since the suitable response to a first or second stage timeout is rather system-specific, ARM didn't try to ordain exactly where the timeout signals go and what happens as a result.  In SBSA they just described the general expected possibilities (which my previous remarks were based on).  But here's what a 2020 version of BSA says (which is roughly similar to SBSA but a bit narrower in the possibilities it describes):

The basic function of the Generic Watchdog is to count for a fixed period of time, during which it expects to be
refreshed by the system indicating normal operation. If a refresh occurs within the watch period, the period is
refreshed to the start. If the refresh does not occur then the watch period expires, and a signal is raised and a
second watch period is begun.

The initial signal is typically wired to an interrupt and alerts the system. The system can attempt to take
corrective action that includes refreshing the watchdog within the second watch period. If the refresh is
successful, the system returns to the previous normal operation. If it fails, then the second watch period
expires and a second signal is generated. The signal is fed to a higher agent as an interrupt or reset for it to
take executive action.
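
A toy model of the two-stage behaviour described above, purely to make the sequencing concrete (not a proposal for any RISC-V spec; the names and units are invented):

/* Toy model of the quoted "Generic Watchdog" two-stage behaviour: the first
 * expiry raises an alert-style signal, the second raises the "executive
 * action" signal (e.g. reset).  Purely illustrative. */
#include <stdbool.h>
#include <stdint.h>

enum wdt_stage { WDT_NORMAL, WDT_FIRST_TIMEOUT };

struct wdt {
        enum wdt_stage stage;
        uint64_t deadline;       /* absolute time of next expiry */
        uint64_t watch_period;
};

static void wdt_refresh(struct wdt *w, uint64_t now)
{
        /* A refresh within the watch period returns to normal operation. */
        w->stage = WDT_NORMAL;
        w->deadline = now + w->watch_period;
}

/* Call on every tick; returns true when the second-stage signal fires. */
static bool wdt_tick(struct wdt *w, uint64_t now,
                     void (*raise_first_signal)(void))
{
        if (now < w->deadline)
                return false;

        if (w->stage == WDT_NORMAL) {
                /* First watch period expired: alert the system and begin
                 * the second watch period. */
                w->stage = WDT_FIRST_TIMEOUT;
                w->deadline = now + w->watch_period;
                raise_first_signal();
                return false;
        }

        /* Second watch period also expired: executive action. */
        return true;
}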

Greg
 

On Wed, Mar 2, 2022 at 4:00 PM Greg Favor <gfavor@...> wrote:
Even ARM SBSA allowed a lot of flexibility as to where the first-stage and second-stage timeout "signals" went (which ultimately then placed the handling in the hands of software somewhere).  In other words, SBSA didn't prescribe the details of the overall watchdog handling picture.

Greg

On Wed, Mar 2, 2022 at 2:35 PM Allen Baum <allen.baum@...> wrote:
Now we're starting to drill down appropriately. There is a wide range.
This is me thinking out loud and trying desperately to avoid the real work I should be doing:

 - A watchdog timer event can cause an interrupt (as opposed to a HW reset)
  -- maskable or non-maskable? 
  -- Using xTVEC to vector or a platform-defined vector? (e.g. the reset vector)
  -- A new cause type or reuse an existing one? (e.g. using the reset cause)
  -- restartable or non-restartable or both? (both implies - to me at least-  the 2 stage watchdog concept, "pulling the emergency cord")
      If the watchdog timer is restartable, either it must
        --- be maskable, or 
        --- implement something like the restartable-NMI spec to be able to save state.
   -- what does "pulling the emergency cord" do? e.g. 
       --- some kind of HW reset (we had a light reset at Intel that cleared as little as possible so that a post-mortem dump could identify what was going on)
       --- just vector to a SW handler (obviously this should depend on why the watchdog timer was activated, e.g. waiting for a HW event or SW event)


On Wed, Mar 2, 2022 at 12:41 PM Kumar Sankaran <ksankaran@...> wrote:
From a platform standpoint, the intent was to have a single platform
level watchdog that is shared across the entire platform. This
platform watchdog could be the 2-level watchdog as described below by
Greg. Whether S-mode software or M-mode software would handle the
tickling of this watchdog and handle timeouts is a subject for further
discussion.

On Wed, Mar 2, 2022 at 12:34 PM Greg Favor <gfavor@...> wrote:
>
> On Wed, Mar 2, 2022 at 12:23 PM Aaron Durbin <adurbin@...> wrote:
>>
>> Yes. Greg articulated what I was getting at better than I did. I apologize for muddying the waters. From a platform standpoint one system-level watchdog should suffice as it's typically the last resort of restarting a system prior to sending a tech out.
>
>
> One comment - for when any concrete discussion about having a system-level watchdog occurs:
>
> One can have a one-stage or a two-stage watchdog.  The former yanks the emergency cord on the system upon timeout.
>
> The latter (which is what ARM defined in SBSA and the subsequent BSA) interrupts the OS on the first timeout and gives it a chance to take remedial actions (and refresh the watchdog).  Then, if a second timeout occurs (without a refresh after the first timeout), the emergency cord is yanked.
>
> ARM also defined separate Secure and Non-Secure watchdogs (akin to what one might call S-mode and M-mode watchdogs).  The OS has its own watchdog to tickle and an emergency situation results in reboot of the OS (for example).  And the Secure Monitor has its own watchdog and an emergency situation results in reboot of the system (for example).
>
> Greg
>
>



--
Regards
Kumar






Re: Watchdog timer per hart?

Allen Baum
 

Don't they even define whether restartability is required or not?

On Wed, Mar 2, 2022 at 4:00 PM Greg Favor <gfavor@...> wrote:
Even ARM SBSA allowed a lot of flexibility as to where the first-stage and second-stage timeout "signals" went (which ultimately then placed the handling in the hands of software somewhere).  In other words, SBSA didn't prescribe the details of the overall watchdog handling picture.

Greg

On Wed, Mar 2, 2022 at 2:35 PM Allen Baum <allen.baum@...> wrote:
Now we're starting to drill down appropriately. There is a wide range.
This is me thinking out loud and trying desperately to avoid the real work I should be doing:

 - A watchdog timer event can cause an interrupt (as opposed to a HW reset)
  -- maskable or non-maskable? 
  -- Using xTVEC to vector or a platform-defined vector? (e.g. the reset vector)
  -- A new cause type or reuse an existing one? (e.g. using the reset cause)
  -- restartable or non-restartable or both? (both implies - to me at least-  the 2 stage watchdog concept, "pulling the emergency cord")
      If the watchdog timer is restartable, either it must
        --- be maskable, or 
        --- implement something like the restartable-NMI spec to be able to save state.
   -- what does "pulling the emergency cord" do? e.g. 
       --- some kind of HW reset (we had a light reset at Intel that cleared as little as possible so that a post-mortem dump could identify what was going on)
       --- just vector to a SW handler (obviously this should depend on why the watchdog timer was activated, e.g. waiting for a HW event or SW event)


On Wed, Mar 2, 2022 at 12:41 PM Kumar Sankaran <ksankaran@...> wrote:
From a platform standpoint, the intent was to have a single platform
level watchdog that is shared across the entire platform. This
platform watchdog could be the 2-level watchdog as described below by
Greg. Whether S-mode software or M-mode software would handle the
tickling of this watchdog and handle timeouts is a subject for further
discussion.

On Wed, Mar 2, 2022 at 12:34 PM Greg Favor <gfavor@...> wrote:
>
> On Wed, Mar 2, 2022 at 12:23 PM Aaron Durbin <adurbin@...> wrote:
>>
>> Yes. Greg articulated what I was getting at better than I did. I apologize for muddying the waters. From a platform standpoint one system-level watchdog should suffice as it's typically the last resort of restarting a system prior to sending a tech out.
>
>
> One comment - for when any concrete discussion about having a system-level watchdog occurs:
>
> One can have a one-stage or a two-stage watchdog.  The former yanks the emergency cord on the system upon timeout.
>
> The latter (which is what ARM defined in SBSA and the subsequent BSA) interrupts the OS on the first timeout and gives it a chance to take remedial actions (and refresh the watchdog).  Then, if a second timeout occurs (without a refresh after the first timeout), the emergency cord is yanked.
>
> ARM also defined separate Secure and Non-Secure watchdogs (akin to what one might call S-mode and M-mode watchdogs).  The OS has its own watchdog to tickle and an emergency situation results in reboot of the OS (for example).  And the Secure Monitor has its own watchdog and an emergency situation results in reboot of the system (for example).
>
> Greg
>
>



--
Regards
Kumar




