This fixes one formatting goof I left in my previous commit and *many*
other inconsistencies.
I'm planning to make substantial changes here and so wanted to get to
a clean baseline.
llvm-svn: 289379
make some readability improvements.
Both the check file and the input file have to be fully buffered so that
their whitespace can be normalized. But previously this was done in a stack
SmallString and then copied into a heap-allocated MemoryBuffer. That
seems pretty wasteful, especially for something like FileCheck where
there are only ever two such entities.
This just rearranges the code so that we can keep the canonicalized
buffers on the stack of the main function, using reasonably large stack
buffers to reduce allocation. A rough estimate suggests that about
80% of LLVM's .ll and .s files will fit into a 4k buffer, so this should
completely avoid heap allocation for the buffer in those cases. My
system's malloc is fast enough that the allocations don't directly show
up in timings. However, on some very slow test cases, avoiding the copy
into the heap-allocated buffer saves 1% - 2%.
This also splits the code that checks the input out into a helper, much
like the code that builds the checks, as that made the code much more
readable to me. Nitpicks and suggestions welcome. This has really
exposed a *bunch* of stuff that could be cleaned up, though, so I'm
probably going to go and spring-clean all of this code as I have more
changes coming to speed things up.
llvm-svn: 289378
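A rough sketch of the shape this enables, assuming a hypothetical
CanonicalizeWhitespace helper (FileCheck's real canonicalization differs in
its details):

    #include "llvm/ADT/SmallString.h"
    #include "llvm/ADT/StringRef.h"

    // Collapse horizontal whitespace runs from Src into the caller-provided
    // buffer and return a StringRef into that buffer. No MemoryBuffer, and
    // no heap traffic at all when the result fits the inline storage.
    static llvm::StringRef
    CanonicalizeWhitespace(llvm::StringRef Src,
                           llvm::SmallVectorImpl<char> &Buf) {
      for (char C : Src) {
        if (C == ' ' || C == '\t') {
          if (!Buf.empty() && Buf.back() != ' ')
            Buf.push_back(' ');
        } else {
          Buf.push_back(C);
        }
      }
      return llvm::StringRef(Buf.data(), Buf.size());
    }

    int main() {
      // 4K inline buffers on main's stack: by the estimate above, ~80% of
      // LLVM's .ll and .s files canonicalize without any heap allocation.
      llvm::SmallString<4096> CheckFileBuffer, InputFileBuffer;
      // ... read each file, then canonicalize into the stack buffers:
      // llvm::StringRef CheckFile =
      //     CanonicalizeWhitespace(CheckSrc, CheckFileBuffer);
      // llvm::StringRef Input =
      //     CanonicalizeWhitespace(InputSrc, InputFileBuffer);
      (void)CheckFileBuffer; (void)InputFileBuffer;
      return 0;
    }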
SimplifyDemandedVectorElts.
This teaches SimplifyDemandedElts that the FMA can be removed if the lower
element of the result isn't used. It also teaches it that if the upper
elements of the first operand aren't used, we can simplify them.
llvm-svn: 289377
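As a hedged source-level illustration of the first fold (hypothetical
function name; compile with -mfma, and the subscript on __m128 uses the
Clang/GCC vector-subscript extension):

    #include <immintrin.h>

    // _mm_fmadd_ss writes only lane 0; lanes 1-3 of the result pass through
    // from 'a'. If only an upper lane is demanded, the FMA itself is dead.
    float demand_only_upper_lane(__m128 a, __m128 b, __m128 c) {
      __m128 r = _mm_fmadd_ss(a, b, c);
      return r[1];  // demands lane 1 only, so this folds to a[1]
    }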
malloc_zones[0]
In certain OS versions, it was possible for libmalloc to displace the
sanitizer zone from being the default zone (i.e. from being
malloc_zones[0]). This patch introduces a failsafe that makes sure we
always stay the default zone. There is no testcase for this because it
doesn't reproduce under normal circumstances.
Differential Revision: https://reviews.llvm.org/D27083
llvm-svn: 289376
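A hedged sketch of such a failsafe, not the actual patch, assuming
libmalloc's documented zone-registration behavior (malloc_zone_register
appends to the zone list; malloc_zone_unregister moves the last zone into
the vacated slot); the function name is hypothetical:

    #include <malloc/malloc.h>

    extern "C" malloc_zone_t **malloc_zones;  // exported by libmalloc

    static void ensure_sanitizer_zone_is_default(malloc_zone_t *sanitizer_zone) {
      if (malloc_zones[0] == sanitizer_zone)
        return;  // still the default zone, nothing to do
      malloc_zone_t *usurper = malloc_zones[0];
      // Re-register our zone so it becomes the last zone in the list...
      malloc_zone_unregister(sanitizer_zone);
      malloc_zone_register(sanitizer_zone);
      // ...then unregister the usurper: the last zone (ours) moves into
      // slot 0, making us the default again. Finally re-add the usurper.
      malloc_zone_unregister(usurper);
      malloc_zone_register(usurper);
    }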
We currently have an interceptor for malloc_create_zone, which returns a
new zone that redirects all zone requests to our sanitizer zone. However,
calling malloc_destroy_zone on that zone causes libmalloc to print warning
messages, because the zone is not registered in its list of zones. This
patch handles this case and adds a testcase for it.
Differential Revision: https://reviews.llvm.org/D27083
llvm-svn: 289375
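A hedged sketch of the shape of the fix (the helpers are hypothetical
stand-ins for the sanitizer's bookkeeping, and the real patch uses the
sanitizer interceptor machinery rather than a plain function):

    #include <malloc/malloc.h>
    #include <cstdlib>

    // Hypothetical helpers standing in for the sanitizer's bookkeeping.
    static bool is_our_wrapper_zone(malloc_zone_t *) { return false; /* elided */ }
    static void destroy_wrapper_zone(malloc_zone_t *zone) { std::free(zone); }

    // The wrapper zone handed out by the malloc_create_zone interceptor is
    // never registered with libmalloc, so it must be torn down here rather
    // than passed to the real malloc_destroy_zone, which would print
    // warnings about an unregistered zone.
    void intercepted_malloc_destroy_zone(malloc_zone_t *zone) {
      if (is_our_wrapper_zone(zone)) {
        destroy_wrapper_zone(zone);
        return;
      }
      malloc_destroy_zone(zone);  // a genuine zone: defer to libmalloc
    }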
I forgot to add the new files before committing.
llvm-svn: 289374
iteration.
Instead, load the byte at the needle length, compare it directly, and
save it to use in the lookup table of lengths we can skip forward.
I also added an annotation to expect that the comparison fails so that
the loop gets laid out contiguously without the call to memcpy (and the
substantial register shuffling that the ABI requires of that call).
Finally, because this behaves especially badly with a needle length of
one (by calling memcmp with a zero length), special-case that to call
memchr directly, which is what we should have been doing anyway.
This was motivated by the fact that there are a large number of test
cases in 'check-llvm' where FileCheck's performance is dominated by
calls to StringRef::find (in a release, no-asserts build). I'm working
on patches to generally improve matters there, but this alone was worth
a 12.5% improvement in one test case where FileCheck spent 92% of its
time in this routine.
I experimented a bunch with different minor variations on this theme,
for example setting the pointer *at* the last byte and indexing
backwards for the call to memcmp. That didn't improve anything on this
version and seemed more complex. I also tried other things to make the
loop flow more nicely and none worked. =/ It is a bit unfortunate, the
generated code here remains pretty gross, but I don't see any obvious
ways to improve it. At this point, most of my ideas would be really
elaborate:
1) While the remainder of the string is long enough, we could load
a 16-byte or 32-byte vector at the address of the last byte and use
palignr to rotate that and check the first 15 or 31 bytes at the
front of the next segment, essentially pre-loading the first several
bytes of the next iteration so we could quickly detect a mismatch in
those bytes without an additional memory access. The downsides would be
the code complexity, the need for a fallback loop, and a likely
misaligned vector load. Plus it would make the common case of the last
byte not matching somewhat slower (it would need some extraction from
a vector).
2) While we have space, we could do an aligned load of a 16- or 32-byte
vector that *contains* the end byte, and use any preceding bytes for
a more precise "no" test, while any subsequent bytes could be
saved for the next iteration. This removes any unaligned-load penalty
but still requires us to pay the overhead of vector extraction for
the cases where we didn't need to do anything other than load and
compare the last byte.
3) Try to walk from the last byte in a way that is more friendly to
the cache and/or memory prefetcher, considering we have to poke the
last byte anyway.
No idea if any of these are really worth pursuing though. They all seem
somewhat unlikely to yield big wins in practice and to be a lot of work
and complexity. So I settled here, which at least seems like a strict
improvement over the previous version.
llvm-svn: 289373
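Roughly, the loop described above looks like the following simplified
sketch (a hypothetical free function, not LLVM's actual StringRef::find;
the assert stands in for the real code's handling of very long needles):

    #include <cassert>
    #include <cstring>
    #include <string>

    // Probe the byte at the end of the candidate window first, use it to
    // index a bad-character skip table, and only fall back to memcmp when
    // that byte actually matches.
    size_t find_needle(const char *Haystack, size_t Size,
                       const char *Needle, size_t N) {
      if (N == 0)
        return 0;
      if (N > Size)
        return std::string::npos;
      if (N == 1) {
        // A one-byte needle would make the loop call memcmp with length
        // zero; special-case it to memchr instead.
        const void *P = std::memchr(Haystack, Needle[0], Size);
        return P ? static_cast<const char *>(P) - Haystack
                 : std::string::npos;
      }
      assert(N <= 255 && "sketch keeps skips in a byte-sized table");

      // BadCharSkip[c]: how far the window may advance when byte c sits at
      // its end. Bytes absent from the needle allow the maximal skip of N.
      unsigned char BadCharSkip[256];
      std::memset(BadCharSkip, N, 256);
      for (size_t i = 0; i != N - 1; ++i)
        BadCharSkip[(unsigned char)Needle[i]] = N - 1 - i;

      const char *Cur = Haystack, *Stop = Haystack + (Size - N);
      do {
        unsigned char Last = Cur[N - 1];  // the byte at the needle length
        // Expect a mismatch so the hot loop stays free of the memcmp call
        // (and the register shuffling its ABI requires).
        if (__builtin_expect(Last == (unsigned char)Needle[N - 1], 0) &&
            std::memcmp(Cur, Needle, N - 1) == 0)
          return Cur - Haystack;
        Cur += BadCharSkip[Last];  // skip ahead using the lookup table
      } while (Cur <= Stop);
      return std::string::npos;
    }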
scalar FMA intrinsics.
These intrinsics don't read the upper bits of their second and third
inputs, so we can try to simplify them.
llvm-svn: 289372
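For instance (hypothetical function name; compile with -mfma), the upper
lanes of the second and third arguments never reach the result, so the
optimizer may treat them as undemanded:

    #include <immintrin.h>

    // dst[0] = a[0]*b[0] + c[0]; dst[1..3] = a[1..3]. Lanes 1-3 of b and c
    // are never read, so whatever computes them can be simplified away.
    __m128 fmadd_ss_demo(__m128 a, __m128 b, __m128 c) {
      return _mm_fmadd_ss(a, b, c);
    }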
scalar cmp intrinsics with masking and rounding.
These intrinsics don't read the upper elements of their first and second
inputs. They are slightly different from the SSE versions, which do use
the upper bits of their first input as passthru bits since the result
goes to an XMM register; for AVX-512 the result goes to a mask register
instead.
llvm-svn: 289371
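Illustration (hypothetical function name; compile with -mavx512f): the
AVX-512 scalar compare writes a mask register, so there is no XMM passthru
and only lane 0 of each input matters:

    #include <immintrin.h>

    // Compares lane 0 of a and b; the __mmask8 result lives in a
    // k-register, unlike SSE cmpss, whose XMM result carries the upper
    // bits of its first input.
    __mmask8 cmp_lt_lane0(__m128 a, __m128 b) {
      return _mm_cmp_ss_mask(a, b, _CMP_LT_OS);
    }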
elements for scalar add,div,mul,sub,max,min intrinsics with masking and rounding.
These intrinsics don't read the upper bits of their second input, and the
third input is the passthru for masking, which also uses only the lower
element.
llvm-svn: 289370
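Illustration (hypothetical function name; compile with -mavx512f): only
lane 0 of b is read, and the passthru src contributes only lane 0 as well:

    #include <immintrin.h>

    // dst[0] = k[0] ? a[0]+b[0] : src[0]; dst[1..3] = a[1..3]. The upper
    // lanes of b and src are never read, so they are not demanded.
    __m128 masked_add_ss(__m128 src, __mmask8 k, __m128 a, __m128 b) {
      return _mm_mask_add_ss(src, k, a, b);
    }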
This adds CodeGen tests for the AVR C calling convention.
llvm-svn: 289369
llvm-svn: 289368
llvm-svn: 289367
llvm-svn: 289366
llvm-svn: 289365
llvm-svn: 289363
llvm-svn: 289362
Collect missing includes that cannot be fetched otherwise (e.g. when
using headermaps).
rdar://problem/27913709
llvm-svn: 289361
Include headermaps (.hmap files) in the .cache directory and
add VFS entries. All headermaps are known after HeaderSearch
setup, so collect them right after.
rdar://problem/27913709
llvm-svn: 289360
llvm-svn: 289359
These swap tests were swapping non-POCS, non-equal allocators, which
is undefined behavior. This patch changes the tests to use allocators
which compare equal. In order to test that the allocators were not
swapped, I added an "id" field to test_allocator which does not
participate in equality but does propagate across copies/swaps.
This patch is based on D26623, which was submitted by STL.
llvm-svn: 289358
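A minimal sketch of that allocator shape (illustrative; not libc++'s
actual test_allocator):

    #include <cstddef>
    #include <new>

    // All instances compare equal, so swapping two non-POCS containers
    // that use this allocator is well-defined. The 'id' does not
    // participate in equality but is copied along, letting a test observe
    // whether the allocators themselves were exchanged.
    template <class T>
    struct IdAllocator {
      using value_type = T;
      int id = 0;
      IdAllocator() = default;
      explicit IdAllocator(int Id) : id(Id) {}
      template <class U>
      IdAllocator(const IdAllocator<U> &Other) : id(Other.id) {}
      T *allocate(std::size_t n) {
        return static_cast<T *>(::operator new(n * sizeof(T)));
      }
      void deallocate(T *p, std::size_t) { ::operator delete(p); }
    };
    template <class T, class U>
    bool operator==(const IdAllocator<T> &, const IdAllocator<U> &) {
      return true;
    }
    template <class T, class U>
    bool operator!=(const IdAllocator<T> &, const IdAllocator<U> &) {
      return false;
    }

After swapping two containers built with different ids, a conforming
non-POCS swap should leave each container's allocator id where it was.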
llvm-svn: 289357
llvm-svn: 289356
llvm-svn: 289355
to match 128 and 256-bit.
llvm-svn: 289354
llvm-svn: 289353
llvm-svn: 289352
versions without masking so wrap it with select.
This will allow the backend to constant fold these to generic shuffle
vectors, as it does for 128-bit and 256-bit, without having to worry
about handling masking.
llvm-svn: 289351
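An illustrative, simplified version of what the header wrapping looks
like (hypothetical function, not a real intrinsic definition; compile
with -mavx512vl): the masked operation becomes the unmasked shuffle
builtin wrapped in a per-lane select, so the shuffle half can be constant
folded independently of the mask.

    #include <immintrin.h>

    static __inline__ __m256
    mask_permutevar_demo(__m256 __W, __mmask8 __U, __m256i __I, __m256 __A) {
      // Unmasked shuffle builtin, then select between it and the passthru.
      return (__m256)__builtin_ia32_selectps_256(
          (__mmask8)__U,
          (__v8sf)__builtin_ia32_permvarsf256((__v8sf)__A, (__v8si)__I),
          (__v8sf)__W);
    }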
being able to constant fold them in InstCombineCalls like we do for 128/256-bit.
llvm-svn: 289350
llvm-svn: 289349
shufflevector if the indices are constant.
llvm-svn: 289348
moneypunct_byname and numpunct_byname.
Summary:
The underlying C locales provide the `thousands_sep` and `decimal_point`
as strings, possibly with more than one character. We currently don't
handle this case even for `wchar_t`.
This patch properly converts the multibyte string to a wide character for
`moneypunct_byname<wchar_t>`. For the `moneypunct_byname<char>` case we
attempt to narrow the wide character, and if that fails we also attempt
to translate it to some reasonable value. For example we translate U+00A0
(no-break space) into U+0020 (regular space). If none of these
conversions succeed, then we simply allow the base class to provide a
fallback value.
Reviewers: mclow.lists, EricWF
Subscribers: vangyzen, george.burgess.iv, cfe-commits
Differential Revision: https://reviews.llvm.org/D24218
llvm-svn: 289347
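A hedged sketch of the narrowing strategy (illustrative, not libc++'s
code; the ',' fallback here is made up to keep the sketch self-contained):

    #include <cstring>
    #include <cwchar>

    // Decode the locale's multibyte separator to one wide character, then
    // try to narrow it, translating known-unnarrowable values to something
    // reasonable before giving up on a fallback.
    char narrow_thousands_sep(const char *mbs) {
      wchar_t wc = L'\0';
      std::mbstate_t st = std::mbstate_t();
      std::size_t r = std::mbrtowc(&wc, mbs, std::strlen(mbs), &st);
      if (r == (std::size_t)-1 || r == (std::size_t)-2)
        return ',';           // undecodable: use a fallback value
      if (wc == L'\u00A0')    // U+00A0 no-break space...
        return ' ';           // ...translated to a regular space
      if (wc > 0 && wc <= 0x7F)
        return (char)wc;      // narrows cleanly to ASCII
      return ',';             // otherwise fall back
    }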
This should've been removed in r289323.
llvm-svn: 289346
version without masking so wrap it with select.
This will allow the backend to constant fold these to generic shuffle
vectors, as it does for 128-bit and 256-bit, without having to worry
about handling masking.
llvm-svn: 289345
able to constant fold it in InstCombineCalls like we do for 128/256-bit.
llvm-svn: 289344
These are currently limited to integer types, but we should
be able to extend to splat vectors and possibly general vectors.
llvm-svn: 289343
llvm-svn: 289342
LowerHorizontalByteSum
llvm-svn: 289341
select around the unmasked avx1 intrinsics.
llvm-svn: 289340
if/else chain. This should buy a little more time against the MSVC
nesting limit mentioned in PR31034.
The handlers for stores all return at the end of their block, so they
can be picked off early.
llvm-svn: 289339
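The restructuring pattern, as a hedged generic sketch (hypothetical
opcodes and handlers, not the actual LLVM code):

    // Store handlers all end in 'return', so they can be hoisted out of
    // the chain as early returns; only the remaining cases still count
    // toward the compiler's if/else nesting limit.
    enum Op { Store8, Store16, Load8, Load16, Other };
    void handleStore8() {}
    void handleStore16() {}
    void handleLoad8() {}
    void handleLoad16() {}
    void handleOther() {}

    void dispatch(Op O) {
      if (O == Store8)  { handleStore8();  return; }  // picked off early
      if (O == Store16) { handleStore16(); return; }  // picked off early
      // The original if/else chain continues, now shorter:
      if (O == Load8)
        handleLoad8();
      else if (O == Load16)
        handleLoad16();
      else
        handleOther();
    }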
select and the avx unmasked builtins.
llvm-svn: 289338
This was failing when trying to fold immediates into operand 1 of a
phi, which only has one statically known operand.
llvm-svn: 289337
actually used.
Also fix the ZeroVector's type - I've no idea how this hasn't caused
problems...
llvm-svn: 289336
vcvttps2uqq when AVX512DQ and AVX512VL are available.
llvm-svn: 289335
llvm-svn: 289334
single boolean flag passed to a helper function. Just check the opcode and create the flag.
llvm-svn: 289333
llvm-svn: 289332
llvm-svn: 289331
Adding type records to the TPI stream is too time consuming.
It is reported that linking chrome_child.dll took 5 minutes.
llvm-svn: 289330
from 'large element' scalar/vector to 'small element' vector.
Extension to D27129, which already supported bitcasts from 'small
element' vector to 'large element' scalar/vector types.
llvm-svn: 289329