...
* [FileCheck] Run clang-format over this code. NFC. (Chandler Carruth, 2016-12-11, 1 file, -118/+108)
  This fixes one formatting goof I left in my previous commit and *many* other inconsistencies. I'm planning to make substantial changes here and so wanted to get to a clean baseline.
  llvm-svn: 289379
* Refactor FileCheck some to reduce memory allocation and copying. Also make some readability improvements. (Chandler Carruth, 2016-12-11, 1 file, -87/+90)
  Both the check file and input file have to be fully buffered to normalize their whitespace. But previously this would be done in a stack SmallString and then copied into a heap-allocated MemoryBuffer. That seems pretty wasteful, especially for something like FileCheck where there are only ever two such entities. This just rearranges the code so that we can keep the canonicalized buffers on the stack of the main function, using reasonably large stack buffers to reduce allocation. A rough estimate seems to show that about 80% of LLVM's .ll and .s files will fit into a 4k buffer, so this should completely avoid heap allocation for the buffer in those cases.
  My system's malloc is fast enough that the allocations don't directly show up in timings. However, on some very slow test cases, this saves 1% - 2% by avoiding the copy into the heap-allocated buffer.
  This also splits out the code which checks the input into a helper, much like the code to build the checks, as that made the code much more readable to me. Nit picks and suggestions welcome. It has really exposed a *bunch* of stuff that could be cleaned up, though, so I'm probably going to go and spring clean all of this code as I have more changes coming to speed things up.
  llvm-svn: 289378
* [X86][InstCombine] Add support for scalar FMA intrinsics to SimplifyDemandedVectorElts. (Craig Topper, 2016-12-11, 2 files, -0/+221)
  This teaches SimplifyDemandedElts that the FMA can be removed if the lower element isn't used. It also teaches it that if upper elements of the first operand aren't used then we can simplify them.
  llvm-svn: 289377
* [sanitizer] Make sure libmalloc doesn't remove the sanitizer zone from malloc_zones[0] (Kuba Mracek, 2016-12-11, 1 file, -0/+23)
  On certain OS versions, it was possible for libmalloc to displace the sanitizer zone from being the default zone (i.e. from being in malloc_zones[0]). This patch introduces a failsafe that makes sure we always stay the default zone. No testcase for this, because it doesn't reproduce under normal circumstances.
  Differential Revision: https://reviews.llvm.org/D27083
  llvm-svn: 289376
* [sanitizer] Handle malloc_destroy_zone() on Darwin (Kuba Mracek, 2016-12-11, 2 files, -0/+34)
  We currently have an interceptor for malloc_create_zone, which returns a new zone that redirects all zone requests to our sanitizer zone. However, calling malloc_destroy_zone on that zone will cause libmalloc to print out some warning messages, because the zone is not registered in its list of zones. This patch handles this and adds a testcase for it.
  Differential Revision: https://reviews.llvm.org/D27083
  llvm-svn: 289375
* [X86][InstCombine] Add the test cases for r289370, r289371, and r289372. (Craig Topper, 2016-12-11, 2 files, -0/+444)
  I forgot to add the new files before committing.
  llvm-svn: 289374
* Tweak the core loop in StringRef::find to avoid calling memcmp on every iteration. (Chandler Carruth, 2016-12-11, 1 file, -6/+12)
  Instead, load the byte at the needle length, compare it directly, and save it to use in the lookup table of lengths we can skip forward. I also added an annotation to expect that the comparison fails so that the loop gets laid out contiguously without the call to memcmp (and the substantial register shuffling that the ABI requires of that call). Finally, because this behaves especially badly with a needle length of one (by calling memcmp with a zero length), special-case that to directly call memchr, which is what we should have been doing anyways.
  This was motivated by the fact that there are a large number of test cases in 'check-llvm' where FileCheck's performance is dominated by calls to StringRef::find (in a release, no-asserts build). I'm working on patches to generally improve matters there, but this alone was worth a 12.5% improvement in one test case where FileCheck spent 92% of its time in this routine.
  I experimented a bunch with different minor variations on this theme, for example setting the pointer *at* the last byte and indexing backwards for the call to memcmp. That didn't improve anything on this version and seemed more complex. I also tried other things to make the loop flow more nicely and none worked. =/
  It is a bit unfortunate that the generated code here remains pretty gross, but I don't see any obvious ways to improve it. At this point, most of my ideas would be really elaborate:
  1) While the remainder of the string is long enough, we could load a 16-byte or 32-byte vector at the address of the last byte and use palignr to rotate that and check the first 15 or 31 bytes at the front of the next segment, essentially pre-loading the first several bytes of the next iteration so we could quickly detect a mismatch in those bytes without an additional memory access. The downside would be the code complexity, having a fallback loop, and a likely misaligned vector load. Plus it would make the common case of the last byte not matching somewhat slower (needing some extraction from a vector).
  2) While we have space, we could do an aligned load of a 16- or 32-byte vector that *contains* the end byte, and use any preceding bytes for a more precise "no" test, saving any subsequent bytes for the next iteration. This removes any unaligned-load penalty, but still requires us to pay the overhead of vector extraction in the cases where we didn't need to do anything other than load and compare the last byte.
  3) Try to walk from the last byte in a way that is more friendly to the cache and/or memory pre-fetcher, considering we have to poke the last byte anyways.
  No idea if any of these are really worth pursuing though. They all seem somewhat unlikely to yield big wins in practice and to be a lot of work and complexity. So I settled here, which at least seems like a strict improvement over the previous version.
  llvm-svn: 289373
* [X86][InstCombine] Teach InstCombineCalls to simplify demanded elements for scalar FMA intrinsics. (Craig Topper, 2016-12-11, 1 file, -0/+8)
  These intrinsics don't read the upper bits of their second and third inputs, so we can try to simplify them.
  llvm-svn: 289372
* [AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded elements for scalar cmp intrinsics with masking and rounding. (Craig Topper, 2016-12-11, 1 file, -1/+3)
  These intrinsics don't read the upper elements of their first and second inputs. They are slightly different from the SSE versions, which do use the upper bits of the first operand as passthru bits since the result goes to an XMM register. For AVX-512 the result goes to a mask register instead.
  llvm-svn: 289371
* [AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded elements for scalar add,div,mul,sub,max,min intrinsics with masking and rounding. (Craig Topper, 2016-12-11, 1 file, -0/+31)
  These intrinsics don't read the upper bits of their second input. And the third input is the passthru for masking, and that only uses the lower element as well.
  llvm-svn: 289370
* [AVR] Add calling convention CodeGen tests (Dylan McKay, 2016-12-11, 3 files, -0/+167)
  This adds CodeGen tests for the AVR C calling convention.
  llvm-svn: 289369
* [libFuzzer] don't depend on time in a test (Kostya Serebryany, 2016-12-11, 1 file, -1/+1)
  llvm-svn: 289368
* Actually re-disable -Wsign-compare (Eric Fiselier, 2016-12-11, 1 file, -1/+1)
  llvm-svn: 289367
* Re-disable -Wsign-compare for now. I didn't catch all occurrences (Eric Fiselier, 2016-12-11, 1 file, -0/+1)
  llvm-svn: 289366
* Fix signed comparison warning (Eric Fiselier, 2016-12-11, 1 file, -2/+2)
  llvm-svn: 289365
* Enable the -Wsign-compare warning to better support MSVC (Eric Fiselier, 2016-12-11, 67 files, -289/+321)
  llvm-svn: 289363
* [AVR] Add a test to validate a simple 'blinking led' program (Dylan McKay, 2016-12-11, 1 file, -0/+125)
  llvm-svn: 289362
* [CrashReproducer] Setup a module collector callback for HeaderInclude (Bruno Cardoso Lopes, 2016-12-11, 2 files, -0/+27)
  Collect missing includes that cannot be fetched otherwise (e.g. when using headermaps).
  rdar://problem/27913709
  llvm-svn: 289361
* [CrashReproducer] Collect headermap files (Bruno Cardoso Lopes, 2016-12-11, 4 files, -1/+71)
  Include headermaps (.hmap files) in the .cache directory and add VFS entries. All headermaps are known after HeaderSearch setup; collect them right after.
  rdar://problem/27913709
  llvm-svn: 289360
* Fix copy/paste errors introduced in r289358 (Eric Fiselier, 2016-12-11, 2 files, -16/+16)
  llvm-svn: 289359
* Fix undefined behavior in container swap tests. (Eric Fiselier, 2016-12-11, 18 files, -195/+204)
  These swap tests were swapping non-POCS non-equal allocators, which is undefined behavior. This patch changes the tests to use allocators which compare equal. In order to test that the allocators were not swapped, I added an "id" field to test_allocator which does not participate in equality but does propagate across copies/swaps.
  This patch is based off of D26623, which was submitted by STL.
  llvm-svn: 289358
* Fix yet another dynamic exception spec (Eric Fiselier, 2016-12-11, 1 file, -2/+2)
  llvm-svn: 289357
* Fix more uses of dynamic exception specifications in C++17 (Eric Fiselier, 2016-12-11, 21 files, -85/+110)
  llvm-svn: 289356
* Fix count_new.hpp to work w/o dynamic exception specifications (Eric Fiselier, 2016-12-11, 1 file, -4/+20)
  llvm-svn: 289355
* [AVX-512][InstCombine] Add 512-bit vpermilvar intrinsics to InstCombineCalls to match 128 and 256-bit. (Craig Topper, 2016-12-11, 2 files, -10/+82)
  llvm-svn: 289354
* Workaround the removal of dynamic exception specifications in C++17 (Eric Fiselier, 2016-12-11, 1 file, -1/+5)
  llvm-svn: 289353
* [X86] Fix a comment to say 'an FMA' instead of 'a FMA'. NFC (Craig Topper, 2016-12-11, 1 file, -1/+1)
  llvm-svn: 289352
* [AVX-512] Remove masking from 512-bit vpermil builtins. The backend now has versions without masking, so wrap it with select. (Craig Topper, 2016-12-11, 3 files, -42/+32)
  This will allow the backend to constant fold these to generic shuffle vectors like 128-bit and 256-bit without having to worry about handling masking.
  llvm-svn: 289351
* [X86] Remove masking from 512-bit VPERMIL intrinsics in preparation for being able to constant fold them in InstCombineCalls like we do for 128/256-bit. (Craig Topper, 2016-12-11, 5 files, -55/+159)
  llvm-svn: 289350
* [AVR] Fix a signed vs unsigned compiler warning (Dylan McKay, 2016-12-11, 1 file, -1/+1)
  llvm-svn: 289349
* [X86][InstCombine] Teach InstCombineCalls to turn pshufb intrinsic into a shufflevector if the indices are constant. (Craig Topper, 2016-12-11, 2 files, -2/+152)
  llvm-svn: 289348
* [libc++] Fix support for multibyte thousands_sep and decimal_point in moneypunct_byname and numpunct_byname. (Eric Fiselier, 2016-12-11, 5 files, -70/+140)
  Summary: The underlying C locales provide the `thousands_sep` and `decimal_point` as strings, possibly with more than one character. We currently don't handle this case even for `wchar_t`. This patch properly converts the multibyte string to a wide character for `moneypunct_byname<wchar_t>`. For the `moneypunct_byname<char>` case we attempt to narrow the wide character, and if that fails we also attempt to translate it to some reasonable value. For example we translate U+00A0 (non-breaking space) into U+0020 (regular space). If none of these conversions succeed then we simply allow the base class to provide a fallback value.
  Reviewers: mclow.lists, EricWF
  Subscribers: vangyzen, george.burgess.iv, cfe-commits
  Differential Revision: https://reviews.llvm.org/D24218
  llvm-svn: 289347
* [AVR] Remove incorrect comment (Dylan McKay, 2016-12-10, 1 file, -2/+0)
  This should've been removed in r289323.
  llvm-svn: 289346
* [AVX-512] Remove masking from 512-bit pshufb builtin. The backend now has a version without masking, so wrap it with select. (Craig Topper, 2016-12-10, 3 files, -20/+16)
  This will allow the backend to constant fold these to generic shuffle vectors like 128-bit and 256-bit without having to worry about handling masking.
  llvm-svn: 289345
* [X86] Remove masking from 512-bit PSHUFB intrinsics in preparation for being able to constant fold it in InstCombineCalls like we do for 128/256-bit. (Craig Topper, 2016-12-10, 6 files, -32/+77)
  llvm-svn: 289344
* [InstCombine] add helper for shift-by-shift folds; NFCI (Sanjay Patel, 2016-12-10, 1 file, -150/+162)
  These are currently limited to integer types, but we should be able to extend to splat vectors and possibly general vectors.
  llvm-svn: 289343
* [X86][SSE] Add tests for sign extended vXi64 multiplication (Simon Pilgrim, 2016-12-10, 1 file, -0/+198)
  llvm-svn: 289342
* [X86][SSE] Ensure UNPCK inputs are a consistent value type in LowerHorizontalByteSum (Simon Pilgrim, 2016-12-10, 1 file, -2/+3)
  llvm-svn: 289341
* [AVX-512] Remove 128/256 masked vpermil intrinsics and autoupgrade to a select around the unmasked avx1 intrinsics. (Craig Topper, 2016-12-10, 5 files, -112/+102)
  llvm-svn: 289340
* [X86][IR] Move the autoupgrading of store intrinsics out of the main nested if/else chain. (Craig Topper, 2016-12-10, 1 file, -90/+102)
  This should buy a little more time against the MSVC limit mentioned in PR31034. The handlers for stores all return at the end of their block, so they can be picked off early.
  llvm-svn: 289339
* [AVX-512] Remove 128/256-bit masked vpermilvar builtins and replace with select and the avx unmasked builtins. (Craig Topper, 2016-12-10, 3 files, -62/+48)
  llvm-svn: 289338
* AMDGPU: Fix asan errors when folding operands (Matt Arsenault, 2016-12-10, 1 file, -2/+2)
  This was failing when trying to fold immediates into operand 1 of a phi, which only has one statically known operand.
  llvm-svn: 289337
* [X86][SSE] Move ZeroVector creation into the shuffle pattern case where it's actually used. (Simon Pilgrim, 2016-12-10, 1 file, -2/+2)
  Also fix the ZeroVector's type - I've no idea how this hasn't caused problems...
  llvm-svn: 289336
* [AVX-512] Add support for lowering (v2i64 (fp_to_sint (v2f32))) to vcvttps2uqq when AVX512DQ and AVX512VL are available. (Craig Topper, 2016-12-10, 3 files, -24/+103)
  llvm-svn: 289335
* [X86] Clarify indentation. NFC (Craig Topper, 2016-12-10, 1 file, -1/+1)
  llvm-svn: 289334
* [X86] Combine LowerFP_TO_SINT and LowerFP_TO_UINT. They only differ by a single boolean flag passed to a helper function. Just check the opcode and create the flag. (Craig Topper, 2016-12-10, 2 files, -24/+7)
  llvm-svn: 289333
* [InstSimplify] improve function name; NFC (Sanjay Patel, 2016-12-10, 1 file, -4/+6)
  llvm-svn: 289332
* [mips] Eliminate else-after-return. NFC (Simon Atanasyan, 2016-12-10, 1 file, -4/+3)
  llvm-svn: 289331
* Create a TPI stream only when /debugpdb is given. (Rui Ueyama, 2016-12-10, 5 files, -2/+7)
  Adding type records to the TPI stream is too time-consuming. It is reported that linking chrome_child.dll took 5 minutes.
  llvm-svn: 289330
* [SelectionDAG] Add ability for computeKnownBits to peek through bitcasts from 'large element' scalar/vector to 'small element' vector. (Simon Pilgrim, 2016-12-10, 3 files, -14/+28)
  Extension to D27129, which already supported bitcasts from 'small element' vector to 'large element' scalar/vector types.
  llvm-svn: 289329