[RISC-V] [tech-*] STRATEGIC FEATURE COEXISTENCE was: ([tech-fast-int] usefulness of PUSHINT/POPINT from [tech-code-size])
David Horner
These are all important considerations. However, what they have in common, when considered against Allen's question ("This discussion is bringing up an issue that needs wider discussion about extensions in general."), is that they are all tactical considerations within our current framework of instruction space allocation. What we will find is that these trade-off considerations reinforce the dilemma that Allen raises: how do we manage these conflicting "necessities/requirements" of different target environments?
As I have hinted already, we need not only tactical analysis of feature trade-offs in different domains but a strategic approach to supporting them.
The concern is nothing new. It has been raised, if only obliquely, many times before on the [Google] groups.riscv.org lists (-dev and -sw especially) and in lists.riscv.org TG threads.
The vector group, especially, has grappled with it in the context of the current V encoding being a subset of a [hypothetical] 64-bit encoding.
Specific proposals have been mentioned, but there was then no political will or, perhaps more fairly, no common perception of a compelling reason to work systematically to address it. The [then] common thinking was that the 48- and 64-bit instruction spaces would be used as the 32- and 16-bit spaces were exhausted, and everyone would be happy. Well, that naive hope has not materialized, and many are envisioning clashes that will hurt RISC-V progress, either fragmentation or stagnation, as tactical approaches and considerations are implemented or debated.
Previously, two major strategic approaches were hinted at, even if they were not outright proposed.
Hardware Support - this has been explicitly proposed in many flavours and is currently in the minds of many. The idea is a mode shift analogous to Arm's transition to Thumb and back, and to Intel's myriad operating modes: real, protected, virtual, long, and their disparate instantiations.
I agree that implementations should have considerable freedom in how they provide hardware-selectable functionality. However, a proposed framework to support that should be provided by riscv.org.
Recent discussion and document tweaks about misa (the Machine ISA register) suggest that this mechanism, though valuable, is inadequate as robust support for the explosion of features. An expanded framework will be necessary, perhaps along the lines of the two-level performance counter definitions.
The conflict of overlapping mappings of groups of instructions onto the same encoding space is not easily addressed by this mechanism, which leads us to:
Software Support:
The Generalized Proposal:
All future extensions are not mapped to a fixed exclusive universal encoding, but rather to an appropriately sized [based initially off the 32-bit isize] minor [22-bit], major [25-bit] or quadrant [30-bit] encoding, which is allocated to the appropriate instruction encoding at link/load time to match the hardware [or hardware dynamic configuration, as above]. This handles the green field encodings.
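As a minimal sketch of one way to read those payload sizes (assuming the standard RV32 base format: bits [1:0] = 11 marking a 32-bit instruction, bits [6:2] the major opcode, bits [14:12] a funct3 minor opcode), the bits left to a feature at each allocation level work out to 30, 25 and 22. The helper names here are mine, not from any spec:

    #include <stdint.h>

    /* quadrant [30-bit]: everything above the 2-bit length/quadrant field */
    static inline uint32_t quadrant_payload(uint32_t insn) {
        return insn >> 2;                                   /* 30 bits */
    }

    /* major [25-bit]: everything above the full 7-bit opcode (length + major) */
    static inline uint32_t major_payload(uint32_t insn) {
        return insn >> 7;                                   /* 25 bits */
    }

    /* minor [22-bit]: as major, but funct3 [14:12] is spent selecting the
     * minor opcode, leaving bits [31:15] and [11:7] for the feature */
    static inline uint32_t minor_payload(uint32_t insn) {
        return ((insn >> 15) << 5) | ((insn >> 7) & 0x1f);  /* 22 bits */
    }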
Each feature could have a default minor/major/quadrant encoding designation. Brown field can also be managed: simply, if the related co-encoded feature is present; with more complexity, and perhaps extensive opcode mapping, if blended into other features' encodings.
An implementation method would be to have a fixed exclusive universal prefix for each feature. Each instruction would then be emitted by the compiler as a [prefix]:[instruction with default encoding] pair. If the initial prefixes are also nops [most of which are currently designated as hints], then the code would be executable on machines that use the default mapping without any link/load intervention [at lower performance, granted].
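A minimal sketch of that pair mechanism, with both the prefix assignment and the rewrite logic hypothetical (no such prefixes are actually allocated): the prefix can be drawn from HINT space — ADDI with rd=x0 and a nonzero immediate executes as a nop — with the immediate carrying a feature identifier, and a link/load step can patch the following instruction's major opcode when the hardware maps the feature elsewhere:

    #include <stdint.h>

    /* Hypothetical nop-class feature prefix: ADDI x0, x0, id is a HINT,
     * so it executes as a nop on hardware using the default mapping. */
    #define FEATURE_PREFIX(id)  (0x00000013u | ((uint32_t)(id) << 20))

    /* Hypothetical link/load-time rewrite: move the prefixed instruction
     * from its default major opcode to the one this hardware decodes. */
    static void remap_insn(uint32_t *insn,
                           uint32_t default_major, uint32_t hw_major) {
        uint32_t full_opcode = (default_major << 2) | 0x3u;  /* [6:2]=major, [1:0]=11 */
        if ((*insn & 0x7fu) == full_opcode)
            *insn = (*insn & ~0x7fu) | (hw_major << 2) | 0x3u;
    }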
This approach is backward compatible for the other established extensions: most notably F, which consumes 7 major opcode spaces [and *only* 5 with Zfinx (Zifloat?)], and then AMO, which also consumes the majority of a major opcode.
This strategic change has a number of immediate and significant benefits:
1) Custom reserved major opcodes effectively become unreserved, as "standard" extensions can be mapped there also. The custom reserved nature will then only be the designated default allocation; "standard" extensions will not default to them.
2) As mentioned above, if the prefix is a nop then link/load support is not needed for direct execution support [only for efficiency].
3) The transition to higher-bit encodings can be simplified: as easily as the compiler emitting the designated prefix for that feature that encodes for 64-bit instructions.
So, two assigned fixed exclusive encodings per feature may be useful: one a 64-bit encoding and one a nop.
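Concretely, and again with invented identifiers and bit placements, the two fixed per-feature assignments might look like this: the nop-class 32-bit prefix from above, plus a prefix parcel in the 64-bit instruction space (the base spec's length encoding reserves bits [6:0] = 0111111 for 64-bit instructions):

    /* Hypothetical pair of fixed per-feature assignments: */
    #define PREFIX_NOP32(id)  (0x00000013u | ((uint32_t)(id) << 20)) /* ADDI x0,x0,id HINT */
    #define PREFIX_64BIT(id)  (0x0000003fu | ((uint32_t)(id) << 7))  /* first parcel of a 64-bit insn */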
These are meaningful and useful considerations. Rather, I hope that by having a framework for coexistence of features, those discussions can proceed in a more guided way, so that discoveries can be incorporated into a framework-centric corpus of understanding of the trade-offs and cooperative benefits of features/profiles.
On 2020-10-23 11:45 p.m., Robert Chyla wrote:
Are we talking about something that is effectively bank-switching the opcodes here? Something like that was proposed very early on, using a CSR (like misa maybe; the details are lost to me) to enable and disable them. The specific issue that brought it up was that if someone developed a custom extension, did a lot of work, and then some other extension came along that stepped on those opcodes, the implementation might want to use both of them. The author thought it was pretty obvious this kind of thing was going to happen. I don't think that exact scenario will, but running out of standard 32b opcodes with ratified extensions might. We're already starting to look at the long tail: extensions that are specialized to specific workloads, but highly advantageous to them. I'm guessing we will get to the point that these extensions will not have to coexist inside a single app, though, so a bank-switching approach (non-user mode at the least, perhaps not within an app at all) could potentially work, but it sounds ugly to make the tools understand the configuration.
On Sat, Oct 24, 2020 at 8:23 AM ds2horner <ds2horner@...> wrote:
David Horner
My take: This is analogous to ASCII (7-bit) and EBCDIC (8-bit) both competing in the 8-bit byte-addressable character space. Initial solutions were fragmentation, then code pages (selectable character sets). Eventually Unicode became the standard that allowed universal adoption and definition, and downsizing to domains that needed a specific 8-bit byte encoding/mapping: printers, ttys, etc. Just as the C extension initially relied on the linker/loader to do the code replacement, so too would the uni-op-code initially rely on the linker/loader. As the tool chain becomes more sophisticated, self-conforming software-to-hardware configuration (à la Linux) will be developed.
On 2020-10-24 11:23 a.m., ds2horner wrote:
David Horner
On 2020-10-26 12:48 a.m., Allen Baum wrote:
That is one approach. It is a consideration that has recently been mentioned wrt misa. I remember Luke Kenneth Casson Leighton <lkcl@...> was in on the discussions. A variety of CSR and related approaches were considered.
Exactly. Also, in lkcl's case, the "vectorization" extension of all opcodes is [was proposed] of this nature. Agreed. Thus the uni-op-code approach, which can co-exist with any of these strategies but provides a framework to manage them (just as ASCII and EBCDIC extensions are comparably managed).
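For completeness, a minimal sketch of what the CSR-based opcode bank switching discussed above might look like from software; the CSR address, bank values, and semantics are all invented for illustration (0x7c0 merely falls in the custom machine-mode CSR range; no such register is actually defined):

    #include <stdint.h>

    #define CSR_XBANKSEL 0x7c0   /* hypothetical opcode-bank select CSR */

    static inline void select_opcode_bank(uint32_t bank) {
        /* csrw is a real instruction; the CSR and its meaning are assumed */
        __asm__ volatile ("csrw %0, %1" : : "i"(CSR_XBANKSEL), "r"(bank));
    }

    /* Usage sketch: switch decoding to the vendor bank around a kernel
     * that uses the custom extension, then restore the standard bank. */
    void run_custom_kernel(void) {
        select_opcode_bank(1);   /* hypothetical: bank 1 decodes the custom extension */
        /* ... custom-extension instructions ... */
        select_opcode_bank(0);   /* hypothetical: bank 0 = standard decoding */
    }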