For common AI workloads such as DNNs, data communication between network layers puts huge pressure on the capacity and bandwidth of the memory hierarchy.
For instance, large dynamic activation or feature-map data must be buffered and communicated across multiple layers, and this data is often sparse (e.g., after ReLU).
A common technique is to use bit vectors as masks to "compress" the buffered data and "decompress" it for the following layer's computation.
We can see from the spec that "vcompress" has already been included; how about a "vdecompress"?
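To make the ask concrete, here is a minimal sketch (plain Python, hypothetical helper names, not RVV semantics verbatim) of the mask-based compress operation and the inverse decompress operation being requested:

```python
def compress(src, mask):
    # Pack the elements of src whose mask bit is set (vcompress-like).
    return [x for x, m in zip(src, mask) if m]

def decompress(packed, mask, fill=0):
    # Hypothetical inverse: scatter packed elements back to the masked
    # positions; unmasked positions receive the fill value.
    it = iter(packed)
    return [next(it) if m else fill for m in mask]

mask = [1, 0, 1, 1, 0]          # bit vector recording nonzero positions
activ = [3, 0, 5, 7, 0]         # sparse activations (zeros from ReLU)
packed = compress(activ, mask)      # -> [3, 5, 7]
restored = decompress(packed, mask) # -> [3, 0, 5, 7, 0]
```

The decompress direction can also be built from existing RVV instructions (roughly, a prefix-sum of the mask to form indices, then a masked gather), but a dedicated instruction is what is being asked about here.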