| author | Nikita Popov <nikita.ppv@gmail.com> | 2018-12-07 21:16:58 +0000 |
|---|---|---|
| committer | Nikita Popov <nikita.ppv@gmail.com> | 2018-12-07 21:16:58 +0000 |
| commit | 94b8e2ea4ec9246434181e152558cbc2c1c3c7d8 (patch) | |
| tree | b217ad259cdcfc0dece662c7c9fa4dbe06baf8ca /llvm/lib/ObjectYAML/CodeViewYAMLDebugSections.cpp | |
| parent | 4ca00df57189d95b282cfc6296a51bc1058e670a (diff) | |
| download | bcm5719-llvm-94b8e2ea4ec9246434181e152558cbc2c1c3c7d8.tar.gz bcm5719-llvm-94b8e2ea4ec9246434181e152558cbc2c1c3c7d8.zip | |
[MemCpyOpt] memset->memcpy forwarding with undef tail
Currently memcpyopt optimizes cases like

    memset(a, byte, N);
    memcpy(b, a, M);

to

    memset(a, byte, N);
    memset(b, byte, M);

if M <= N. Often this allows further simplifications down the line,
which drop the first memset entirely.
This patch extends this optimization for the case where M > N, but we
know that the bytes a[N..M] are undef due to alloca/lifetime.start.
This situation arises relatively often for Rust code, because Rust does
not initialize trailing structure padding and loves to insert redundant
memcpys. This also fixes https://bugs.llvm.org/show_bug.cgi?id=39844.
For the implementation, I'm reusing a bit of code from a similar existing
optimization (direct memcpy of undef). I've also added memset support to
MemDepAnalysis GetLocation -- alternatively, getPointerDependencyFrom
could be used, but it seems to make more sense to add this to GetLocation
and thus make the computation cacheable.
Differential Revision: https://reviews.llvm.org/D55120
llvm-svn: 348645
Diffstat (limited to 'llvm/lib/ObjectYAML/CodeViewYAMLDebugSections.cpp')
0 files changed, 0 insertions, 0 deletions

