Re: GCC RISC-V Vector Intrinsic Instructions and #defines missing
Kito Cheng
On Sat, Apr 10, 2021 at 9:14 AM Jim Wilson <jimw@sifive.com> wrote:
Re: GCC RISC-V Vector Intrinsic Instructions and #defines missing
I would suggest filing an issue in the riscv/riscv-gnu-toolchain github tree. Put something like "vector" or "rvv" in the issue title to make it clear it is a vector-related issue. The gcc support is not being actively worked on at the moment; LLVM is the current focus for all vector compiler support. Eventually someone may start working on the gcc vector support again. Meanwhile, bugs filed against the gcc vector support may or may not be fixed.

Jim
GCC RISC-V Vector Intrinsic Instructions and #defines missing
Tony Cole
Hi all,
I’m still new to RISC-V and the Vector extensions, so forgive me if I’ve missed something or if the following have been fixed or noted before.
Also, am I sending this to the correct group for GCC RISC-V Vector intrinsics? If not, whom should I inform, and how?
I’m currently using: riscv32-unknown-elf-gcc (GCC) 10.1.0 (…/10.1.0-rvv-intrinsic-patch/bin/riscv32-unknown-elf-gcc --version)
These (and probably others) don’t exist in the GCC compiler RISC-V Vector intrinsics (the m8 versions):
vint32m1_t vwredsum_vs_i16m8_i32m1 (vint32m1_t dst, vint16m8_t vector, vint32m1_t scalar, size_t vl);
vint64m1_t vwredsum_vs_i32m8_i64m1 (vint64m1_t dst, vint32m8_t vector, vint64m1_t scalar, size_t vl);
They are listed in here: https://github.com/riscv/rvv-intrinsic-doc/blob/master/intrinsic_funcs/09_vector_reduction_functions.md
So, I’ve had to temporarily change to (the m4 versions):
vint32m1_t vwredsum_vs_i16m4_i32m1 (vint32m1_t dst, vint16m4_t vector, vint32m1_t scalar, size_t vl);
vint64m1_t vwredsum_vs_i32m4_i64m1 (vint64m1_t dst, vint32m4_t vector, vint64m1_t scalar, size_t vl);
to get it to compile and work.
This may have already been fixed? Please let me know.
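As a sanity check on the workaround, here is a plain-C scalar model of what the widening reduction sum computes: each int16 element is widened to int32 and summed into an accumulator seeded from the int32 scalar operand. This is only a sketch of the intrinsic's arithmetic (the helper name is made up), not the intrinsic itself:

```c
#include <stddef.h>
#include <stdint.h>

/* Scalar model of vwredsum_vs_i16m*_i32m1: widen each int16 element to
   int32, sum them, and add the int32 scalar operand (element 0 of the
   scalar vector operand).  The destination element wraps modulo 2^32,
   as the instruction does. */
int32_t wredsum_i16_to_i32(const int16_t *v, size_t vl, int32_t scalar)
{
    int32_t acc = scalar;
    for (size_t i = 0; i < vl; i++)
        acc += (int32_t)v[i];   /* widening add */
    return acc;
}
```

The m8 and m4 variants differ only in how many elements (vl) the hardware processes per call, so this scalar model covers both.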
Also,
I was expecting to find some #defines for the rounding modes in riscv-vector.h, something like:
/* Vector Fixed-Point Rounding Mode Register (vxrm) settings.
   Use with vwrite_csr(RVV_VXRM, RVV_VXRM_XXX). */
#define RVV_VXRM_RNU (0) /* Round-to-nearest-up (add 0.5 LSB) */
#define RVV_VXRM_RNE (1) /* Round-to-nearest-even */
#define RVV_VXRM_RDN (2) /* Round-down (truncate) */
#define RVV_VXRM_ROD (3) /* Round-to-odd (OR bits into LSB, aka "jam") */
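For reference, the four vxrm modes can be modelled in plain C following the spec's roundoff_unsigned(v, d) helper (shift right by d, rounding the discarded bits per mode). This is a sketch of my reading of the v-spec rounding rules, not toolchain code; the macro names are the proposed ones above:

```c
#include <stdint.h>

#define RVV_VXRM_RNU 0  /* round-to-nearest-up */
#define RVV_VXRM_RNE 1  /* round-to-nearest-even */
#define RVV_VXRM_RDN 2  /* round-down (truncate) */
#define RVV_VXRM_ROD 3  /* round-to-odd ("jam") */

/* Scalar model of roundoff_unsigned(v, d): result = (v >> d) + r,
   where the rounding increment r depends on vxrm and the bits of v. */
uint32_t roundoff_unsigned(uint32_t v, unsigned d, int vxrm)
{
    if (d == 0)
        return v;
    uint32_t lsb       = (v >> d) & 1;                       /* v[d]          */
    uint32_t guard     = (v >> (d - 1)) & 1;                 /* v[d-1]        */
    uint32_t sticky    = (v & ((1u << (d - 1)) - 1)) != 0;   /* v[d-2:0] != 0 */
    uint32_t discarded = (v & ((1u << d) - 1)) != 0;         /* v[d-1:0] != 0 */
    uint32_t r = 0;
    switch (vxrm) {
    case RVV_VXRM_RNU: r = guard;                break;
    case RVV_VXRM_RNE: r = guard & (sticky | lsb); break;
    case RVV_VXRM_RDN: r = 0;                    break;
    case RVV_VXRM_ROD: r = !lsb & discarded;     break;
    }
    return (v >> d) + r;
}
```

For example, shifting 10 (0b1010) right by 2 gives 2.5: RNU rounds up to 3, RNE rounds to the even value 2, RDN truncates to 2, and ROD jams the low bit to give 3.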
Tony Cole
CPU Consultant | RISC-V Cores, Bristol
E-mail: Tony.Cole@...
Company: Huawei Technologies R&D (UK) Ltd | Address: 290 Park Avenue, Aztec West, Almondsbury, Bristol, Avon, BS32 4SY, UK
This e-mail and its attachments contain confidential information from HUAWEI, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure,reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it ! 本邮件及其附件含有华为公司的保密信息,仅限于发送给上面 地址中列出的个人或群组。禁止任何其他人以任何形式使用(包括但不限于全部或部分地泄露、复制、或散发)本邮件中的信息。如果您错收了本邮件,请您立即电话或邮件通知发件人并删除本邮件
Possible RISC-V Vector Instructions missing
Tony Cole
Hi Vector Team,
I’m new to RISC-V and the Vector extensions, so forgive me if I’ve missed something.
I have searched the specs, emails and github issues, but not found anything on this:
While writing some vector code using the vector intrinsics, I noticed some instructions missing that I expected to see:
I noticed there is no saturated reverse-subtract version of vssub_vx, e.g. vsrsub_vx (or should it be vrssub_vx?), and hence no vsneg_v pseudo-instruction.
But there are the following integer/float reverse-subtract instructions: vrsub_vx and vfrsub_vf,
and their pseudo-instruction counterparts: vneg_v and vfneg_v.
For orthogonality there should be saturated versions of the above, but maybe there is not enough encoding space? Or possibly remove vrsub_vx and vfrsub_vf to gain encoding space?
Note: I wanted to use vsrsub_vx (to implement vsneg_v), but instead achieved the same result by loading a vector with zero and performing vssub_vv.
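For clarity, the per-element operation being emulated is a saturating negate. A scalar C sketch (helper name invented) shows why plain negation is not enough: negating the most negative value overflows, so the saturated form clamps it instead of wrapping:

```c
#include <stdint.h>

/* Scalar model of the missing "saturated negate" at SEW=8: this is
   what 0 -(sat) x computes per element, i.e. vssub from a zero
   vector.  -INT8_MIN (-(-128) = +128) does not fit in int8_t, so it
   saturates to INT8_MAX rather than wrapping back to -128. */
int8_t sat_neg_i8(int8_t x)
{
    return (x == INT8_MIN) ? INT8_MAX : (int8_t)-x;
}
```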
Tony Cole
CPU Consultant | RISC-V Cores, Bristol
E-mail: Tony.Cole@...
Company: Huawei Technologies R&D (UK) Ltd | Address: 290 Park Avenue, Aztec West, Almondsbury, Bristol, Avon, BS32 4SY, UK
No vector task group meeting tomorrow
I haven’t seen any burning issues come by, and am still trying to clean up the spec.
So unless someone has agenda items, I’m canceling the meeting tomorrow,

Krste
No vector TG meeting this week
I’m still working on spec cleanup and I don’t have any major outstanding issues to discuss, so I will cancel the TG meeting this week.
Please bring up any burning issues on this mailing list, Krste
Vector Task Group minutes from 2021/3/26 meeting
Date: 2021/03/26
Task Group: Vector Extension
Chair: Krste Asanovic
Vice-Chair: Roger Espasa
Number of Attendees: ~10
Current issues on github: https://github.com/riscv/riscv-v-spec

A short meeting discussing only:

#545 Vector AMO Encoding
The group discussed the issue with vector AMOs overlapping the desired space for future scalar AMOs. The group agreed with the proposal to leave vector AMOs out of the v1.0 specification to allow time to revisit the encoding taking into account the needs of scalar AMOs.
Vector Task Group meeting Friday March 26
We'll meet again in the usual slot.
The main discussion topic will be #545. Please read the issue thread on github. Summary: The proposal is to move vector AMOs from their current encoding to leave space for scalar subword AMOs, and to drop vector AMOs from the base application processor vector profile (they were already excluded from the Zve* subset profiles). We'll rework the vector AMO encoding, but not in the path for v1.0 ratification.

Krste
Vector Extension Task Group Minutes 2021/03/19
Date: 2021/03/19
Task Group: Vector Extension
Chair: Krste Asanovic
Vice-Chair: Roger Espasa
Number of Attendees: ~16
Current issues on github: https://github.com/riscv/riscv-v-spec

Issues discussed:

#640 Bound on VLMAX/VLEN
Previously, we'd discussed making the upper bound on VLMAX part of a profile, but the realization was that the bound cannot be later increased in a software-compatible way without adding a new instruction, so it is effectively part of the ISA spec. We discussed having the more general case of VLMAX being the bound, but the consensus was that having the bound be a function of VLEN (<=65536) was simpler to specify and had no great effect on the range of supported systems. The extension to add independent control of the data input size of vrgather, proposed in #655, was briefly discussed, but this will not be included in v1.0.

#651 Expanding tail-agnostic to allow result values (masks only, or data)
The discussion was around expanding the set of allowable tail-agnostic values to include the results of the computation. The consensus was to expand this for mask register writes (except loads), where only tail-agnostic behavior is required. But support was not as clear for data register writes, where tail-undisturbed behavior must be supported and where FP operations require masking off exception flags even for tail-agnostic. PoR is to expand mask register writes to allow results to be written in the tail, while continuing discussion on further relaxing for data register writes.

#457 Ordering in vector AMOs
Current vector AMOs have no facility to order writes to the same address, whereas indexed stores have an ordered option. Discussion was on a proposal to tie address-based ordering to the wd (write result data) bit. One concern was that this seemed to hamper some cases, including where software wanted the results but knew addresses were disjoint. Providing ordering only on the same address would likely require a slow implementation on an out-of-order machine where addresses can be produced out of order for different element groups. The decision was to maintain the PoR and consider post-v1.0 ways to support ordered vector AMOs.
Next Vector TG Meeting, Friday March 19
There are a few issues to discuss, so we’ll meet in the regular time slot on the calendar,
Krste
cancel Mar 12 Vector TG meeting
I'm cancelling the meeting again, as I still have not been able to clean up the spec. I realize it will be more efficient for folks to wait for a clean version for a complete read-through. Few issues are being reported/found, so I do not anticipate any substantive change.

One realization on issue #640 is that the VLEN limit (<=64Kib) has to be part of the ISA spec, not just a profile, to allow backwards compatibility. Details are on the github issue.

One update is that RIOS Lab has agreed to help with architecture tests and the SAIL model - thank you, RIOS!

Krste
cancel next Vector TG meeting, Friday March 5
I'm still working through spec cleanup.
The list and github have been quiet, and I have no new issues to raise, so I suggest we cancel this meeting and push out for a week.

Krste
Vector Task Group meeting minutes for 2021/2/19
Date: 2021/02/19
Task Group: Vector Extension
Chair: Krste Asanovic
Vice-Chair: Roger Espasa
Number of Attendees: ~23
Current issues on github: https://github.com/riscv/riscv-v-spec

# Next Meeting/Freezing
The schedule is to meet again in two weeks (Friday March 5). The plan is to have all pending updates and cleanups in the spec by that date, to be able to agree to freeze and move forward into public review (v1.0), which should happen soon after this meeting. Please continue to send PRs for any small typos and clarifications, and use the mailing list for larger issues.

Issues discussed:

#640 Bound on VLMAX
The major issue raised was that software would otherwise have to cope with indices that might not fit in 16b. The group agreed that profiles and/or platform specs can set the upper bound, with a current recommendation that all profiles limit VLMAX to 64K elements (VLEN=64Kib, or 8K bytes per vector register). The current ISA spec can support larger VLMAX already, but adding a vrgatherei32 instruction would be a useful addition (post-v1.0) if architectural vector regfiles >256KiB become common. It was also discussed that a privileged setting will be desired to modulate the visible VLEN to support thread migration, or just different application vector profiles with different VLMAX in general.
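For concreteness, VLMAX follows from VLEN, SEW and LMUL as VLMAX = LMUL * VLEN / SEW. A small C sketch (function name invented, not spec text) shows why the recommended VLEN <= 64Kib cap corresponds to the 64K-element bound at the extreme setting SEW=8, LMUL=8:

```c
#include <stdint.h>

/* VLMAX = LMUL * VLEN / SEW, all in bits/elements.  With VLEN capped
   at 65536 bits, the largest VLMAX (SEW=8, LMUL=8) is 65536 elements,
   which is why 16-bit vrgather indices run out exactly at the cap and
   a vrgatherei32 variant was floated for larger configurations. */
uint32_t vlmax(uint32_t vlen_bits, uint32_t sew_bits, uint32_t lmul)
{
    return lmul * vlen_bits / sew_bits;
}
```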
Re: Zfinx + Vector
Thanks Krste, I’ve put exactly that in the spec.
Tariq
From: tech-vector-ext@... [mailto:tech-vector-ext@...] On Behalf Of Krste Asanovic
Sent: 18 February 2021 19:09
To: Tariq Kurd <tariq.kurd@...>
Cc: tech-vector-ext@...
Subject: Re: [RISC-V] [tech-vector-ext] Zfinx + Vector
If you check over the vector instruction listing table, it’s all the instructions in funct3=OPFVF with an F in the operand column. Most of these are missing.
Krste
Vector task group meeting, Friday Feb 19
We’ll meet today in the usual slot; details are on the Google calendar.
The agenda is to discuss any issues found while reading over the v0.10 spec. The list and GitHub have been quite quiet, so this might be a short meeting.

Krste
Re: Zfinx + Vector
If you check over the vector instruction listing table, it’s all the instructions in funct3=OPFVF with an F in the operand column. Most of these are missing.
Krste
Zfinx + Vector
Hi everyone,
I’ve updated the Zfinx spec to show which V-extension instructions are affected.
https://github.com/riscv/riscv-zfinx/blob/master/Zfinx_spec.adoc#vector
Please review the list, and tell me of any impact on the vector spec which I’ve overlooked.
Thanks
Tariq
Tariq Kurd
Processor Design | RISC-V Cores, Bristol
E-mail: Tariq.Kurd@...
Company: Huawei Technologies R&D (UK) Ltd | Address: 290 Park Avenue, Aztec West, Almondsbury, Bristol, Avon, BS32 4SY, UK
Re: Vector TG minutes for 2020/12/18 meeting
Bill Huffman
For hardware with very long vector registers, the same effect might be accomplished by having a custom way to change VLMAX dynamically (across all harts, etc.). It would seem that would cover a larger set of useful cases for what Guy is thinking about - if I'm following him.
Bill
-----Original Message-----
From: tech-vector-ext@lists.riscv.org <tech-vector-ext@lists.riscv.org> On Behalf Of Krste Asanovic
Sent: Tuesday, February 16, 2021 3:21 PM
To: Guy Lemieux <guy.lemieux@gmail.com>
Cc: krste@berkeley.edu; Zalman Stern <zalman@google.com>; tech-vector-ext@lists.riscv.org
Subject: Re: [RISC-V] [tech-vector-ext] Vector TG minutes for 2020/12/18 meeting

On Tue, 16 Feb 2021 15:12:46 -0800, Guy Lemieux <guy.lemieux@gmail.com> said:
| in terms of overlap with that case — that case normally selects maximally sized AVL. the implied goals there are to make best use of vector register capacity and throughput.
| i’m suggesting a case where a minimally sized AVL is used, as chosen by the architect.
| this allows a programmer to optimize for minimum latency while still getting good throughput. in some cases, the full VLMAX state may still be used to hold data, but operations are chunked down to minimally sized AVL (eg for latency reasons).

I still don't see how hardware can set a <VLMAX value that will work well for any code in the loop. Your latency comment seems to imply an external observer sees the individual strips go by (e.g., in a DSP application where data comes in and goes out in chunks), as otherwise only the total time to finish the loop matters. In these situations, I also can't see having the microarchitecture pick the chunk size - usually the I/O latency constraint sets the chunk size, and the goal of vector execution is to execute the chunks as efficiently as possible.

Krste

| i’m not sure of the portability concerns. if an implementation is free to set VLMAX, and software must be written for any possible AVL that is returned, then it appears to me that deliberately returning a smaller implementation-defined AVL should still be portable.
| programming for min-latency isn’t common in HPC, but can be useful in real-time systems.
| g
| On Tue, Feb 16, 2021 at 3:01 PM <krste@berkeley.edu> wrote:
| There's a large overlap here with the (rd!=x0,rs1=x0) case that selects AVL=VLMAX. If migration is intended, then VLMAX should be same across harts.
| Machines with long temporal vector registers might benefit from using less than VLMAX, but this is highly dependent on specifics of the interaction of the microarchitecture and the scheduled application kernel (otherwise, the long vector registers were a waste of resources). I can't see how to do this portably beyond selecting VLMAX.
| Krste
| | Of course, the implementation-defined value must be fixed across all harts, so thread migration doesn't break software.
| | Guy
| | On Mon, Feb 15, 2021 at 11:30 PM <krste@berkeley.edu> wrote:
| | Replying to old thread to add rationale for current choice.
| | On Mon, 21 Dec 2020 13:52:07 -0800, Zalman Stern <zalman@google.com> said:
| | | Does it get easier if the specification is just the immediate value plus one?
| | No - this costs more gates on the critical path. Mapping 00000 => 32 is simpler in area and delay.
| | | I really don't understand how this encoding is particularly great for immediates, as many of the values are likely very rarely or even never used, and it seems like one can't get long enough values even for existing SIMD hardware in some data types. Compare to e.g.:
| | | (first_bit ? 3 : 1) << rest_of_the_bits
| | | or:
| | | map[] = { 1, 3, 5, 8 }; // Or maybe something else for 5 and 8
| | | map[first_two_bits] << rest_of_the_bits;
| | | I.e. get a lot of powers of two, multiples of three-vecs for graphics, maybe something else.
| | As a counter-example for this particular example, one code I looked at recently related to AR/VR used 9 as one dimension.
| | The challenge is agreeing on the best mapping from the 32 immediate encodings to the most commonly used AVL values.
| | More creative mappings do consume some incremental logic and path delay (as well as adding some complexity to the software toolchain). While they can provide small gains in some cases, this is offset by small losses in other cases (someone will want AVL=17 somewhere, and it's not clear that, say, AVL=40 is a substantially better use of encoding). There is no huge penalty if the immediate does not fit, at most an li instruction, which might be hoisted out of the loop.
| | The current v0.10 definition uses the obvious mapping of the immediate. Simplicity is a virtue, and any potential gains are small for AVL > 31, where most implementation costs are amortized over the longer vector and many implementations won't support longer lengths for a given datatype in any case.
| | Krste
| | | -Z-
| | | On Mon, Dec 21, 2020 at 10:47 AM Guy Lemieux <guy.lemieux@gmail.com> wrote:
| | | for vsetivli, with the uimm=00000 encoding, rather than setting vl to 32, how about setting it to some other meaning?
| | | one option is to set vl=VLMAX. i have some concerns about software using this safely (eg, if VLMAX turns out to be much larger than software anticipated, then it would fail; correcting this requires more instructions than just using the regular vsetvl/vsetvli would have used).
| | | another option is to allow an implementation-defined vl to be chosen by hardware; this could be anywhere between 1 and VLMAX. for example, implementations may just choose vl=32, or they may choose something else. it allows the CPU architect to devise a scheme that best fits the implementation. this may consider factors like the effective width of the execution engine, the pipeline depth (to reduce likelihood of stalls from dependent instructions), or that the vector register file is actually a multi-level memory hierarchy where some smaller values may operate with greater efficiency (lower power), or matching VL to the optimal memory system burst length. perhaps some guidance by the spec could be given here for the default scheme, eg whether the implementation optimizes for best performance or power (while still allowing implementations to modify this default via an implementation-defined CSR).
| | | software using a few extra cycles to check the returned vl against AVL should not be a big problem (the simplest solution being vsetvli followed by vsetivli)
| | | g
| | | On Fri, Dec 18, 2020 at 6:13 PM Krste Asanovic <krste@berkeley.edu> wrote:
| | | # vsetivli
| | | A new variant of vsetvl was proposed providing an immediate as the AVL in rs1[4:0]. The immediate encoding is the same as for CSR immediate instructions. The instruction would have bits 31:30 = 11 and bits 29:20 would be encoded the same as vsetvli.
| | | This would be used when AVL was statically known, and known to fit inside a vector register group. Compared with the existing PoR, it removes the need to load an immediate into a spare scalar register before executing vsetvli, and is useful for handling scalar values in a vector register (vl=1) and other cases where short fixed-sized vectors are the datatype (e.g., graphics).
| | | There was discussion on whether uimm=00000 should represent 32 or be reserved. 32 is more useful, but adds a little complexity to hardware.
| | | There was also discussion on whether the instruction should set vill if the selected AVL is not supported, or whether it should clip vl to VLMAX as with other instructions, or if the behavior should be reserved. The group generally favored writing vill to expose software errors.
Re: Vector TG minutes for 2020/12/18 meeting
On Tue, 16 Feb 2021 15:12:46 -0800, Guy Lemieux <guy.lemieux@gmail.com> said:
| in terms of overlap with that case — that case normally selects maximally sized AVL. the implied goals there are to make best use of vector register capacity and throughput.
| i’m suggesting a case where a minimally sized AVL is used, as chosen by the architect.
| this allows a programmer to optimize for minimum latency while still getting good throughput. in some cases, the full VLMAX state may still be used to hold data, but operations are chunked down to minimally sized AVL (eg for latency reasons).

I still don't see how hardware can set a <VLMAX value that will work well for any code in the loop. Your latency comment seems to imply an external observer sees the individual strips go by (e.g., in a DSP application where data comes in and goes out in chunks), as otherwise only the total time to finish the loop matters. In these situations, I also can't see having the microarchitecture pick the chunk size - usually the I/O latency constraint sets the chunk size, and the goal of vector execution is to execute the chunks as efficiently as possible.

Krste

| i’m not sure of the portability concerns. if an implementation is free to set VLMAX, and software must be written for any possible AVL that is returned, then it appears to me that deliberately returning a smaller implementation-defined AVL should still be portable.
| programming for min-latency isn’t common in HPC, but can be useful in real-time systems.
| g
| On Tue, Feb 16, 2021 at 3:01 PM <krste@berkeley.edu> wrote:
| There's a large overlap here with the (rd!=x0,rs1=x0) case that selects AVL=VLMAX. If migration is intended, then VLMAX should be same across harts.
| Machines with long temporal vector registers might benefit from using less than VLMAX, but this is highly dependent on specifics of the interaction of the microarchitecture and the scheduled application kernel (otherwise, the long vector registers were a waste of resources). I can't see how to do this portably beyond selecting VLMAX.
| Krste
| | Of course, the implementation-defined value must be fixed across all harts, so thread migration doesn't break software.
| | Guy
| | On Mon, Feb 15, 2021 at 11:30 PM <krste@berkeley.edu> wrote:
| | Replying to old thread to add rationale for current choice.
| | On Mon, 21 Dec 2020 13:52:07 -0800, Zalman Stern <zalman@google.com> said:
| | | Does it get easier if the specification is just the immediate value plus one?
| | No - this costs more gates on the critical path. Mapping 00000 => 32 is simpler in area and delay.
| | | I really don't understand how this encoding is particularly great for immediates, as many of the values are likely very rarely or even never used, and it seems like one can't get long enough values even for existing SIMD hardware in some data types. Compare to e.g.:
| | | (first_bit ? 3 : 1) << rest_of_the_bits
| | | or:
| | | map[] = { 1, 3, 5, 8 }; // Or maybe something else for 5 and 8
| | | map[first_two_bits] << rest_of_the_bits;
| | | I.e. get a lot of powers of two, multiples of three-vecs for graphics, maybe something else.
| | As a counter-example for this particular example, one code I looked at recently related to AR/VR used 9 as one dimension.
| | The challenge is agreeing on the best mapping from the 32 immediate encodings to the most commonly used AVL values.
| | More creative mappings do consume some incremental logic and path delay (as well as adding some complexity to the software toolchain). While they can provide small gains in some cases, this is offset by small losses in other cases (someone will want AVL=17 somewhere, and it's not clear that, say, AVL=40 is a substantially better use of encoding). There is no huge penalty if the immediate does not fit, at most an li instruction, which might be hoisted out of the loop.
| | The current v0.10 definition uses the obvious mapping of the immediate. Simplicity is a virtue, and any potential gains are small for AVL > 31, where most implementation costs are amortized over the longer vector and many implementations won't support longer lengths for a given datatype in any case.
| | Krste
| | | -Z-
| | | On Mon, Dec 21, 2020 at 10:47 AM Guy Lemieux <guy.lemieux@gmail.com> wrote:
| | | for vsetivli, with the uimm=00000 encoding, rather than setting vl to 32, how about setting it to some other meaning?
| | | one option is to set vl=VLMAX. i have some concerns about software using this safely (eg, if VLMAX turns out to be much larger than software anticipated, then it would fail; correcting this requires more instructions than just using the regular vsetvl/vsetvli would have used).
| | | another option is to allow an implementation-defined vl to be chosen by hardware; this could be anywhere between 1 and VLMAX. for example, implementations may just choose vl=32, or they may choose something else. it allows the CPU architect to devise a scheme that best fits the implementation. this may consider factors like the effective width of the execution engine, the pipeline depth (to reduce likelihood of stalls from dependent instructions), or that the vector register file is actually a multi-level memory hierarchy where some smaller values may operate with greater efficiency (lower power), or matching VL to the optimal memory system burst length. perhaps some guidance by the spec could be given here for the default scheme, eg whether the implementation optimizes for best performance or power (while still allowing implementations to modify this default via an implementation-defined CSR).
| | | software using a few extra cycles to check the returned vl against AVL should not be a big problem (the simplest solution being vsetvli followed by vsetivli)
| | | g
| | | On Fri, Dec 18, 2020 at 6:13 PM Krste Asanovic <krste@berkeley.edu> wrote:
| | | # vsetivli
| | | A new variant of vsetvl was proposed providing an immediate as the AVL in rs1[4:0]. The immediate encoding is the same as for CSR immediate instructions. The instruction would have bits 31:30 = 11 and bits 29:20 would be encoded the same as vsetvli.
| | | This would be used when AVL was statically known, and known to fit inside a vector register group. Compared with the existing PoR, it removes the need to load an immediate into a spare scalar register before executing vsetvli, and is useful for handling scalar values in a vector register (vl=1) and other cases where short fixed-sized vectors are the datatype (e.g., graphics).
| | | There was discussion on whether uimm=00000 should represent 32 or be reserved. 32 is more useful, but adds a little complexity to hardware.
| | | There was also discussion on whether the instruction should set vill if the selected AVL is not supported, or whether it should clip vl to VLMAX as with other instructions, or if the behavior should be reserved. The group generally favored writing vill to expose software errors.
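The "uimm=00000 => 32" option debated above amounts to a tiny decode step. A C sketch of that mapping (one of the options discussed, not necessarily the ratified behavior; function name invented):

```c
#include <stdint.h>

/* One vsetivli decode option discussed: treat uimm=00000 as AVL=32,
   otherwise AVL = uimm (1..31).  Mapping 0 => 32 is just "bit 5 = NOR
   of the five immediate bits", cheaper on the critical path than the
   full +1 adder an "AVL = uimm + 1" encoding would need. */
uint32_t decode_avl(uint32_t uimm5)
{
    return uimm5 ? uimm5 : 32;
}
```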
Re: Vector TG minutes for 2020/12/18 meeting
Guy Lemieux
in terms of overlap with that case — that case normally selects maximally sized AVL. the implied goals there are to make best use of vector register capacity and throughput.

i’m suggesting a case where a minimally sized AVL is used, as chosen by the architect. this allows a programmer to optimize for minimum latency while still getting good throughput. in some cases, the full VLMAX state may still be used to hold data, but operations are chunked down to minimally sized AVL (eg for latency reasons).

i’m not sure of the portability concerns. if an implementation is free to set VLMAX, and software must be written for any possible AVL that is returned, then it appears to me that deliberately returning a smaller implementation-defined AVL should still be portable.

programming for min-latency isn’t common in HPC, but can be useful in real-time systems.

g
On Tue, Feb 16, 2021 at 3:01 PM <krste@...> wrote: There's a large overlap here with the (rd!=x0,rs1=x0) case that