path: root/lldb/packages/Python/lldbsuite/test/python_api/symbol-context/TestSymbolContext.py
author: Hal Finkel <hfinkel@anl.gov> 2016-04-26 02:00:36 +0000
committer: Hal Finkel <hfinkel@anl.gov> 2016-04-26 02:00:36 +0000
commit: 411d31ad72456ba88c0b0bee0faba2b774add65f (patch)
tree: 8e9ef065530d87813786258ac5e0e8989c1635a7 /lldb/packages/Python/lldbsuite/test/python_api/symbol-context/TestSymbolContext.py
parent: 0da4442f14d0be466090821bf85cca56e1a27da9 (diff)
[LoopVectorize] Don't consider conditional-load dereferenceability for marked parallel loops
I really thought we were doing this already, but we were not. Given this input:

    void Test(int *res, int *c, int *d, int *p) {
      for (int i = 0; i < 16; i++)
        res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
    }

we did not vectorize the loop. Even with "assume_safety", the check that prevents if-conversion of conditionally-executed loads (which protects against data-dependent dereferenceability) was not elided.

One subtlety: as implemented, the vectorizer will still prefer to use a masked-load intrinsic (given target support) over the speculated load. The choice here seems architecture-specific; the best option depends on how expensive the masked load is compared to a regular load. Ideally, using the masked load still reduces unnecessary memory traffic, and so should be preferred. If we'd rather do it the other way, flipping the order of the checks is easy.

The LangRef is updated to make explicit that llvm.mem.parallel_loop_access also implies that if-conversion is okay.

Differential Revision: http://reviews.llvm.org/D19512

llvm-svn: 267514
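For context, a minimal sketch of how a user would trigger this path from C: Clang's `#pragma clang loop vectorize(assume_safety)` asserts there are no loop-carried memory dependences, which attaches the parallel-loop metadata the commit refers to. The driver code around `Test` (the initialization values and the printout) is illustrative scaffolding, not part of the patch; the `c` parameter is unused, exactly as in the original example.

    #include <stdio.h>

    /* Sketch of the example loop from the commit message. The assume_safety
     * pragma tells the vectorizer to assume no loop-carried dependences;
     * with this patch, that marking also permits if-conversion of the
     * conditionally-executed load of d[i]. */
    void Test(int *res, int *c, int *d, int *p) {
    #pragma clang loop vectorize(assume_safety)
      for (int i = 0; i < 16; i++)
        res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
    }

    int main(void) {
      int res[16], c[16], d[16], p[16];
      for (int i = 0; i < 16; i++) {
        res[i] = i;    /* initial values 0..15 */
        d[i] = 100;
        p[i] = i % 2;  /* odd iterations take the add path */
        c[i] = 0;
      }
      Test(res, c, d, p);
      for (int i = 0; i < 16; i++)
        printf("res[%d] = %d\n", i, res[i]);
      return 0;
    }

With the pragma, and after this change, the conditional load of `d[i]` no longer blocks if-conversion, so the loop vectorizes (using a masked load where the target supports one, a speculated load otherwise).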
Diffstat (limited to 'lldb/packages/Python/lldbsuite/test/python_api/symbol-context/TestSymbolContext.py')
0 files changed, 0 insertions, 0 deletions