path: root/llvm/test/Transforms/LoopVectorize/ARM/mve-maskedldst.ll
* [ARM] Enable MVE masked loads and stores (David Green, 2019-12-09, 1 file changed: +1/-1)
With the extra optimisations we have done, these should now be fine to enable by default, which is what this patch does.

Differential Revision: https://reviews.llvm.org/D70968
* [DAGCombine][ARM] Enable extending masked loads (Sam Parker, 2019-10-17, 1 file changed: +139/-3)
Add a generic DAG combine for extending masked loads. This allows us to generate sext/zext masked loads which can access v4i8, v8i8 and v4i16 memory to produce v4i32, v8i16 and v4i32 respectively.

Differential Revision: https://reviews.llvm.org/D68337

llvm-svn: 375085
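As a rough illustration (hypothetical function name, typed-pointer syntax of that era, not taken from the test file itself), the combine targets IR of the following shape, where a narrow masked load feeding a sext or zext can be selected as a single extending masked load on MVE:

define <4 x i32> @masked_sextload_v4i8(<4 x i8>* %p, <4 x i1> %mask) {
entry:
  ; Narrow masked load of 4 x i8 with a zero passthru.
  %l = call <4 x i8> @llvm.masked.load.v4i8.p0v4i8(<4 x i8>* %p, i32 1, <4 x i1> %mask, <4 x i8> zeroinitializer)
  ; Sign-extend to 4 x i32; with this combine the extend can be folded
  ; into the masked load during selection instead of staying a separate node.
  %e = sext <4 x i8> %l to <4 x i32>
  ret <4 x i32> %e
}

declare <4 x i8> @llvm.masked.load.v4i8.p0v4i8(<4 x i8>*, i32, <4 x i1>, <4 x i8>)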
* [ARM] Masked loads and stores (David Green, 2019-09-15, 1 file changed: +40/-0)
Masked loads and stores fit naturally with MVE, the instructions being easily predicated. This adds lowering for the simple cases of masked loads and stores. It does not yet deal with widening/narrowing or pre/post increment, and so is currently behind an option.

The llvm masked load intrinsic accepts a "passthru" value, dictating the values used for the masked-off lanes. In MVE the instructions write 0 to the false-predicated lanes, so we need to match a passthru that isn't 0 (or undef) with a select instruction that pulls in the correct data after the load.

Differential Revision: https://reviews.llvm.org/D67186

llvm-svn: 371932
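A minimal IR sketch of the passthru handling described above (hypothetical function name, typed-pointer syntax of the time, not from the test file): a masked load whose passthru is neither zero nor undef is equivalent to a zero-passthru load followed by a select on the same mask, which is the shape the MVE lowering produces:

define <4 x i32> @masked_load_nonzero_passthru(<4 x i32>* %p, <4 x i1> %mask, <4 x i32> %passthru) {
entry:
  ; The predicated MVE load writes 0 to the disabled lanes, so the load
  ; itself behaves as if the passthru were zeroinitializer.
  %l = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %p, i32 4, <4 x i1> %mask, <4 x i32> zeroinitializer)
  ; A select on the same mask then pulls the real passthru values back
  ; into the masked-off lanes.
  %merged = select <4 x i1> %mask, <4 x i32> %l, <4 x i32> %passthru
  ret <4 x i32> %merged
}

declare <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>*, i32, <4 x i1>, <4 x i32>)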