path: root/llvm/test/Instrumentation/MemorySanitizer
Commit log, newest first. Each entry shows the commit subject (author, date; files changed, -lines removed/+lines added), followed by the commit message.
* [msan] Instrument x86.pclmulqdq* intrinsics. (Evgenii Stepanov, 2020-01-27; 1 file, -0/+72)
  Summary: These instructions ignore parts of the input vectors which makes the default MSan handling too strict and causes false positive reports.
  Reviewers: vitalybuka, RKSimon, thakis
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D73374
  (cherry picked from commit 1df8549b26892198ddf77dfd627eb9f979d45b7e)
* Migrate function attribute "no-frame-pointer-elim"="false" to "frame-pointer"="none" as cleanups after D56351 (Fangrui Song, 2019-12-24; 1 file, -1/+1)
* [msan] Remove more attributes from sanitized functions. (Evgenii Stepanov, 2019-10-28; 1 file, -0/+47)
  Summary: MSan instrumentation adds stores and loads even to pure readonly/writeonly functions. It is removing some of those attributes from instrumented functions and call targets, but apparently not enough. Remove writeonly, argmemonly and speculatable in addition to readonly / readnone.
  Reviewers: pcc, vitalybuka
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D69541
* Insert module constructors in a module pass (Vitaly Buka, 2019-10-11; 1 file, -7/+6)
  Summary: If we insert them from function pass some analysis may be missing or invalid. Fixes PR42877.
  Reviewers: eugenis, leonardchan
  Reviewed By: leonardchan
  Subscribers: hiraditya, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D68832
  > llvm-svn: 374481
  Signed-off-by: Vitaly Buka <vitalybuka@google.com>
  llvm-svn: 374527
* Revert 374481 "[tsan,msan] Insert module constructors in a module pass" (Nico Weber, 2019-10-11; 1 file, -6/+7)
  CodeGen/sanitizer-module-constructor.c fails on mac and windows, see e.g. http://lab.llvm.org:8011/builders/clang-x64-windows-msvc/builds/11424
  llvm-svn: 374503
* [tsan,msan] Insert module constructors in a module pass (Vitaly Buka, 2019-10-10; 1 file, -7/+6)
  Summary: If we insert them from function pass some analysis may be missing or invalid. Fixes PR42877.
  Reviewers: eugenis, leonardchan
  Reviewed By: leonardchan
  Subscribers: hiraditya, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D68832
  llvm-svn: 374481
* Handle llvm.launder.invariant.group in msan. (Evgeniy Stepanov, 2019-10-02; 2 files, -0/+59)
  Summary: [MSan] handle llvm.launder.invariant.group. Msan used to give false-positives in

    class Foo {
     public:
      virtual ~Foo() {};
    };
    // Return true iff *x is set.
    bool f1(void **x, bool flag);
    Foo* f() {
      void *p;
      bool found;
      found = f1(&p,flag);
      if (found) {
        // p is always set here.
        return static_cast<Foo*>(p); // False positive here.
      }
      return nullptr;
    }

  Patch by Ilya Tokar.
  Reviewers: #sanitizers, eugenis
  Reviewed By: #sanitizers, eugenis
  Subscribers: eugenis, Prazek, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D68236
  llvm-svn: 373515
* MSan: handle callbr instructions (Alexander Potapenko, 2019-07-03; 1 file, -0/+31)
  Summary: Handling callbr is very similar to handling an inline assembly call: MSan must check the instruction's inputs. callbr doesn't (yet) have outputs, so there's nothing to unpoison, and conservative assembly handling doesn't apply either. Fixes PR42479.
  Reviewers: eugenis
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D64072
  llvm-svn: 365008
* [MSAN] Add unary FNeg visitor to the MemorySanitizer (Cameron McInally, 2019-06-05; 1 file, -0/+16)
  Differential Revision: https://reviews.llvm.org/D62909
  llvm-svn: 362664
* [IR] Disallow llvm.global_ctors and llvm.global_dtors of the 2-field form in textual format (Fangrui Song, 2019-05-15; 1 file, -18/+0)
  The 3-field form was introduced by D3499 in 2014 and the legacy 2-field form was planned to be removed in LLVM 4.0. For the textual format, this patch migrates the existing 2-field form to use the 3-field form and deletes the compatibility code. test/Verifier/global-ctors-2.ll checks we have a friendly error message. For bitcode, lib/IR/AutoUpgrade UpgradeGlobalVariables will upgrade the 2-field form (add i8* null as the third field).
  Reviewed By: rnk, dexonsmith
  Differential Revision: https://reviews.llvm.org/D61547
  llvm-svn: 360742
* MSan: handle llvm.lifetime.start intrinsic (Alexander Potapenko, 2019-04-30; 1 file, -0/+129)
  Summary: When a variable goes into scope several times within a single function or when two variables from different scopes share a stack slot it may be incorrect to poison such scoped locals at the beginning of the function. In the former case it may lead to false negatives (see https://github.com/google/sanitizers/issues/590), in the latter - to incorrect reports (because only one origin remains on the stack). If Clang emits lifetime intrinsics for such scoped variables we insert code poisoning them after each call to llvm.lifetime.start(). If for a certain intrinsic we fail to find a corresponding alloca, we fall back to poisoning allocas for the whole function, as it's now impossible to tell which alloca was missed. The new instrumentation may slow down hot loops containing local variables with lifetime intrinsics, so we allow disabling it with -mllvm -msan-handle-lifetime-intrinsics=false.
  Reviewers: eugenis, pcc
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D60617
  llvm-svn: 359536
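  A minimal sketch of a test exercising the escape hatch mentioned above; the RUN line, the function, and the CHECK are illustrative assumptions (only the -msan-handle-lifetime-intrinsics flag itself comes from the commit):

    ; Illustrative only: run MSan with lifetime-based poisoning disabled.
    ; RUN: opt < %s -passes=msan -msan-handle-lifetime-intrinsics=false -S | FileCheck %s
    define void @scoped() sanitize_memory {
      %x = alloca i32
      ret void
    }
    ; CHECK-LABEL: define void @scoped(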
* IR: Support parsing numeric block ids, and emit them in textual output. (James Y Knight, 2019-03-22; 5 files, -52/+52)
  Just as llvm IR supports explicitly specifying numeric value ids for instructions, and emits them by default in textual output, now do the same for blocks. This is a slightly incompatible change in the textual IR format.
  Previously, llvm would parse numeric labels as string names. E.g.

    define void @f() {
      br label %"55"
    55:
      ret void
    }

  defined a label *named* "55", even without needing to be quoted, while the reference required quoting. Now, if you intend a block label which looks like a value number to be a name, you must quote it in the definition too (e.g. `"55":`).
  Previously, llvm would print nameless blocks only as a comment, and would omit it if there was no predecessor. This could cause confusion for readers of the IR, just as unnamed instructions did prior to the addition of "%5 = " syntax, back in 2008 (PR2480). Now, it will always print a label for an unnamed block, with the exception of the entry block. (IMO it may be better to print it for the entry-block as well. However, that requires updating many more tests.)
  Thus, the following is supported, and is the canonical printing:

    define i32 @f(i32, i32) {
      %3 = add i32 %0, %1
      br label %4
    4:
      ret i32 %3
    }

  New test cases covering this behavior are added, and other tests updated as required.
  Differential Revision: https://reviews.llvm.org/D58548
  llvm-svn: 356789
* [msan] Instrument x86 BMI intrinsics. (Evgeniy Stepanov, 2019-03-04; 1 file, -0/+147)
  Summary: They simply shuffle bits. MSan needs to do the same with shadow bits, after making sure that the shuffle mask is fully initialized.
  Reviewers: pcc, vitalybuka
  Subscribers: hiraditya, #sanitizers, llvm-commits
  Tags: #sanitizers, #llvm
  Differential Revision: https://reviews.llvm.org/D58858
  llvm-svn: 355348
* [NewPM][MSan] Add Options Handling (Philip Pfaffe, 2019-02-04; 2 files, -0/+4)
  Summary: This patch enables passing options to msan via the passes pipeline, e.g., -passes=msan<recover;kernel;track-origins=4>.
  Reviewers: chandlerc, fedor.sergeev, leonardchan
  Subscribers: hiraditya, bollu, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D57640
  llvm-svn: 353090
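  A quick sketch of how a test could exercise the pass-pipeline syntax above; the specific parameter values and the function are assumptions for illustration (only the msan<recover;kernel;track-origins=N> spelling is from the commit):

    ; Illustrative only: pass MSan options through the new pass manager pipeline.
    ; RUN: opt < %s -passes='msan<track-origins=2;recover>' -S | FileCheck %s
    define void @g() sanitize_memory {
      ret void
    }
    ; CHECK-LABEL: define void @g(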
* [MSan] Apply the ctor creation scheme of TSan (Philip Pfaffe, 2019-01-16; 3 files, -2/+24)
  Summary: To avoid adding an extern function to the global ctors list, apply the changes of D56538 also to MSan.
  Reviewers: chandlerc, vitalybuka, fedor.sergeev, leonardchan
  Subscribers: hiraditya, bollu, llvm-commits
  Differential Revision: https://reviews.llvm.org/D56734
  llvm-svn: 351322
* [NewPM] Port Msan (Philip Pfaffe, 2019-01-03; 42 files, -21/+142)
  Summary: Keeping msan a function pass requires replacing the module level initialization: That means, don't define a ctor function which calls __msan_init, instead just declare the init function at the first access, and add that to the global ctors list.
  Changes:
  - Pull the actual sanitizer and the wrapper pass apart.
  - Add a newpm msan pass. The function pass inserts calls to runtime library functions, for which it inserts declarations as necessary.
  - Update tests.
  Caveats:
  - There is one test that I dropped, because it specifically tested the definition of the ctor.
  Reviewers: chandlerc, fedor.sergeev, leonardchan, vitalybuka
  Subscribers: sdardis, nemanjai, javed.absar, hiraditya, kbarton, bollu, atanasyan, jsji
  Differential Revision: https://reviews.llvm.org/D55647
  llvm-svn: 350305
* [MSan] Handle llvm.is.constant intrinsic (Alexander Potapenko, 2018-12-31; 1 file, -0/+21)
  MSan used to report false positives in the case the argument of the llvm.is.constant intrinsic was uninitialized. In fact checking this argument is unnecessary, as the intrinsic is only used at compile time, and its value doesn't depend on the value of the argument.
  llvm-svn: 350173
* [X86] Change 'simple nonmem' intrinsic test to not use PADDSW (Simon Pilgrim, 2018-12-20; 1 file, -5/+5)
  Those intrinsics will be autoupgraded soon to @llvm.sadd.sat generics (D55894), so to keep an x86-specific case I'm replacing it with @llvm.x86.sse2.pmulhu.w
  llvm-svn: 349739
* [MSan] Don't emit __msan_instrument_asm_load() calls (Alexander Potapenko, 2018-12-20; 1 file, -6/+0)
  LLVM treats void* pointers passed to assembly routines as pointers to sized types. We used to emit calls to __msan_instrument_asm_load() for every such void*, which sometimes led to false positives. A less error-prone (and truly "conservative") approach is to unpoison only assembly output arguments.
  llvm-svn: 349734
* [KMSAN] Enable -msan-handle-asm-conservative by default (Alexander Potapenko, 2018-12-03; 2 files, -32/+72)
  This change enables conservative assembly instrumentation in KMSAN builds by default. It's still possible to disable it with -msan-handle-asm-conservative=0 if something breaks. It's now impossible to enable conservative instrumentation for userspace builds, but it's not used anyway.
  llvm-svn: 348112
* [MSan] another take at instrumenting inline assembly - now with calls (Alexander Potapenko, 2018-10-31; 2 files, -13/+251)
  Turns out it's not always possible to figure out whether an asm() statement argument points to a valid memory region. One example would be per-CPU objects in the Linux kernel, for which the addresses are calculated using the FS register and a small offset in the .data..percpu section. To avoid pulling all sorts of checks into the instrumentation, we replace actual checking/unpoisoning code with calls to msan_instrument_asm_load(ptr, size) and msan_instrument_asm_store(ptr, size) functions in the runtime. This patch doesn't implement the runtime hooks in compiler-rt, as there's been no demand in assembly instrumentation for userspace apps so far.
  llvm-svn: 345702
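  A rough sketch of the instrumentation shape described above, in IR; the function, operand names, and the exact placement of the hook relative to the asm call are illustrative assumptions, while the (ptr, size) hook signature is the one named in the commit:

    ; Illustrative only: a pointer passed to inline asm is reported to the runtime hook.
    define void @asm_store(i32* %p) sanitize_memory {
      %p.i8 = bitcast i32* %p to i8*
      call void @__msan_instrument_asm_store(i8* %p.i8, i64 4) ; placement is illustrative
      call void asm sideeffect "movl $$0, ($0)", "r,~{memory}"(i32* %p)
      ret void
    }
    declare void @__msan_instrument_asm_store(i8*, i64)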
* [MSan] Add KMSAN instrumentation to MSan pass (Alexander Potapenko, 2018-09-07; 3 files, -6/+439)
  Introduce the -msan-kernel flag, which enables the kernel instrumentation. The main differences between KMSAN and MSan instrumentations are:
  - KMSAN implies msan-track-origins=2, msan-keep-going=true;
  - there're no explicit accesses to shadow and origin memory. Shadow and origin values for a particular X-byte memory location are read and written via pointers returned by __msan_metadata_ptr_for_load_X(u8 *addr) and __msan_store_shadow_origin_X(u8 *addr, uptr shadow, uptr origin);
  - TLS variables are stored in a single struct in per-task storage. A call to a function returning that struct is inserted into every instrumented function before the entry block;
  - __msan_warning() takes a 32-bit origin parameter;
  - local variables are poisoned with __msan_poison_alloca() upon function entry and unpoisoned with __msan_unpoison_alloca() before leaving the function;
  - the pass doesn't declare any global variables or add global constructors to the translation unit.
  llvm-svn: 341637
* [MSan] store origins for variadic function parameters in __msan_va_arg_origin_tls (Alexander Potapenko, 2018-09-06; 1 file, -0/+117)
  Add the __msan_va_arg_origin_tls TLS array to keep the origins for variadic function parameters. Change the instrumentation pass to store parameter origins in this array. This is a reland of r341528.
  test/msan/vararg.cc doesn't work on Mips, PPC and AArch64 (because this patch doesn't touch them), XFAIL these arches. Also turned out Clang crashed on i80 vararg arguments because of incorrect origin type returned by getOriginPtrForVAArgument() - fixed it and added a test.
  llvm-svn: 341554
* [MSan] revert r341528 to unbreak the bots (Alexander Potapenko, 2018-09-06; 1 file, -110/+0)
  llvm-svn: 341541
* [MSan] store origins for variadic function parameters in __msan_va_arg_origin_tls (Alexander Potapenko, 2018-09-06; 1 file, -0/+110)
  Add the __msan_va_arg_origin_tls TLS array to keep the origins for variadic function parameters. Change the instrumentation pass to store parameter origins in this array.
  llvm-svn: 341528
* [MSan] Make sure variadic function arguments do not overflow __msan_va_arg_tls (Alexander Potapenko, 2018-09-06; 6 files, -0/+169)
  Turns out that calling a variadic function with too many (e.g. >100 i64's) arguments overflows __msan_va_arg_tls, which leads to smashing other TLS data with function argument shadow values. getShadow() already checks for kParamTLSSize and returns clean shadow if the argument does not fit, so just skip storing argument shadow for such arguments.
  llvm-svn: 341525
* [MSan] Shrink the register save area for non-SSE builds (Alexander Potapenko, 2018-08-10; 1 file, -0/+20)
  If code is compiled for X86 without SSE support, the register save area doesn't contain FPU registers, so `AMD64FpEndOffset` should be equal to `AMD64GpEndOffset`.
  llvm-svn: 339414
* [MSan] run materializeChecks() before materializeStores() (Alexander Potapenko, 2018-07-20; 1 file, -0/+3)
  When pointer checking is enabled, it's important that every pointer is checked before its value is used. For stores MSan used to generate code that calculates shadow/origin addresses from a pointer before checking it. For userspace this isn't a problem, because the shadow calculation code is quite simple and the compiler is able to move it after the check on -O2. But for KMSAN getShadowOriginPtr() creates a runtime call, so we want the check to be performed strictly before that call. Swapping materializeChecks() and materializeStores() resolves the issue: both functions insert code before the given IR location, so the new insertion order guarantees that the code calculating shadow address is between the address check and the memory access.
  llvm-svn: 337571
* [FileCheck] Add -allow-deprecated-dag-overlap to failing llvm tests (Joel E. Denny, 2018-07-11; 1 file, -2/+2)
  See https://reviews.llvm.org/D47106 for details.
  Reviewed By: probinson
  Differential Revision: https://reviews.llvm.org/D47171
  This commit drops that patch's changes to:
    llvm/test/CodeGen/NVPTX/f16x2-instructions.ll
    llvm/test/CodeGen/NVPTX/param-load-store.ll
  For some reason, the dos line endings there prevent me from committing via the monorepo. A follow-up commit (not via the monorepo) will finish the patch.
  llvm-svn: 336843
* [msan] Don't check divisor shadow in fdiv. (Evgeniy Stepanov, 2018-05-18; 1 file, -0/+15)
  Summary: Floating point division by zero or even undef does not have undefined behavior and may occur due to optimizations. Fixes https://bugs.llvm.org/show_bug.cgi?id=37523.
  Reviewers: kcc
  Subscribers: hiraditya, llvm-commits
  Differential Revision: https://reviews.llvm.org/D47085
  llvm-svn: 332761
* [msan] Instrument masked.store, masked.load intrinsics. (Evgeniy Stepanov, 2018-05-15; 1 file, -0/+124)
  Summary: Instrument masked store/load intrinsics.
  Reviewers: kcc
  Subscribers: hiraditya, llvm-commits
  Differential Revision: https://reviews.llvm.org/D46785
  llvm-svn: 332402
* [X86] Remove and autoupgrade cvtsi2ss/cvtsi2sd intrinsics to match what clang has used for a very long time. (Craig Topper, 2018-05-12; 1 file, -22/+0)
  llvm-svn: 332186
* [DebugInfo] Add DILabel metadata and intrinsic llvm.dbg.label. (Shiva Chen, 2018-05-09; 1 file, -1/+1)
  In order to set breakpoints on labels and list source code around labels, we need collect debug information for labels, i.e., label name, the function label belong, line number in the file, and the address label located. In order to keep these information in LLVM IR and to allow backend to generate debug information correctly. We create a new kind of metadata for labels, DILabel. The format of DILabel is

    !DILabel(scope: !1, name: "foo", file: !2, line: 3)

  We hope to keep debug information as much as possible even the code is optimized. So, we create a new kind of intrinsic for label metadata to avoid the metadata is eliminated with basic block. The intrinsic will keep existing if we keep it from optimized out. The format of the intrinsic is

    llvm.dbg.label(metadata !1)

  It has only one argument, that is the DILabel metadata. The intrinsic will follow the label immediately. Backend could get the label metadata through the intrinsic's parameter. We also create DIBuilder API for labels to be used by Frontend. Frontend could use createLabel() to allocate DILabel objects, and use insertLabel() to insert llvm.dbg.label intrinsic in LLVM IR.
  Differential Revision: https://reviews.llvm.org/D45024
  Patch by Hsiangkai Wang.
  llvm-svn: 331841
* [x86] Revert r330322 (& r330323): Lowering x86 adds/addus/subs/subus intrinsics (Chandler Carruth, 2018-04-26; 1 file, -5/+5)
  The LLVM commit introduces a crash in LLVM's instruction selection. I filed http://llvm.org/PR37260 with the test case.
  llvm-svn: 330997
* Lowering x86 adds/addus/subs/subus intrinsics (llvm part) (Alexander Ivchenko, 2018-04-19; 1 file, -5/+5)
  This is the patch that lowers x86 intrinsics to native IR in order to enable optimizations. The patch also includes folding of previously missing saturation patterns so that IR emits the same machine instructions as the intrinsics.
  Patch by tkrupa
  Differential Revision: https://reviews.llvm.org/D44785
  llvm-svn: 330322
* MSan: introduce the conservative assembly handling mode. (Alexander Potapenko, 2018-04-03; 1 file, -0/+83)
  The default assembly handling mode may introduce false positives in the cases when MSan doesn't understand that the assembly call initializes the memory pointed to by one of its arguments. We introduce the conservative mode, which initializes the first |sizeof(type)| bytes for every |type*| pointer passed into the assembly statement.
  llvm-svn: 329054
* Add msan custom mapping options. (Evgeniy Stepanov, 2018-03-29; 1 file, -0/+43)
  Similarly to https://reviews.llvm.org/D18865 this adds options to provide custom mapping for msan. As discussed in http://lists.llvm.org/pipermail/llvm-dev/2018-February/121339.html
  Patch by vit9696(at)avp.su.
  Differential Revision: https://reviews.llvm.org/D44926
  llvm-svn: 328830
* Remove alignment argument from memcpy/memmove/memset in favour of alignment attributes (Step 1) (Daniel Neilson, 2018-01-19; 9 files, -27/+27)
  Summary: This is a resurrection of work first proposed and discussed in Aug 2015 (http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html) and initially landed (but then backed out) in Nov 2015 (http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html).
  The @llvm.memcpy/memmove/memset intrinsics currently have an explicit argument which is required to be a constant integer. It represents the alignment of the dest (and source), and so must be the minimum of the actual alignment of the two. This change is the first in a series that allows source and dest to each have their own alignments by using the alignment attribute on their arguments. In this change we:
  1) Remove the alignment argument.
  2) Add alignment attributes to the source & dest arguments. We, temporarily, require that the alignments for source & dest be equal.
  For example, code which used to read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 100, i32 4, i1 false)
  will now read
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 4 %dest, i8* align 4 %src, i32 100, i1 false)
  Downstream users may have to update their lit tests that check for @llvm.memcpy/memmove/memset call/declaration patterns. The following extended sed script may help with updating the majority of your tests, but it does not catch all possible patterns so some manual checking and updating will be required.
    s~declare void @llvm\.mem(set|cpy|move)\.p([^(]*)\((.*), i32, i1\)~declare void @llvm.mem\1.p\2(\3, i1)~g
    s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* \3, i8 \4, i8 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* \3, i8 \4, i16 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* \3, i8 \4, i32 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* \3, i8 \4, i64 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* \3, i8 \4, i128 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* align \6 \3, i8 \4, i8 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* align \6 \3, i8 \4, i16 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* align \6 \3, i8 \4, i32 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* align \6 \3, i8 \4, i64 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* align \6 \3, i8 \4, i128 \5, i1 \7)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* \4, i8\5* \6, i8 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* \4, i8\5* \6, i16 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* \4, i8\5* \6, i32 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* \4, i8\5* \6, i64 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* \4, i8\5* \6, i128 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* align \8 \4, i8\5* align \8 \6, i8 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* align \8 \4, i8\5* align \8 \6, i16 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* align \8 \4, i8\5* align \8 \6, i32 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* align \8 \4, i8\5* align \8 \6, i64 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* align \8 \4, i8\5* align \8 \6, i128 \7, i1 \9)~g
  The remaining changes in the series will:
  Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing source and dest alignments.
  Step 3) Update Clang to use the new IRBuilder API.
  Step 4) Update Polly to use the new IRBuilder API.
  Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API, and those that use MemIntrinsicInst::[get|set]Alignment() to use getDestAlignment() and getSourceAlignment() instead.
  Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the MemIntrinsicInst::[get|set]Alignment() methods.
  Reviewers: pete, hfinkel, lhames, reames, bollu
  Reviewed By: reames
  Subscribers: niosHD, reames, jholewinski, qcolombet, jfb, sanjoy, arsenm, dschuff, dylanmckay, mehdi_amini, sdardis, nemanjai, david2050, nhaehnle, javed.absar, sbc100, jgravelle-google, eraman, aheejin, kbarton, JDevlieghere, asb, rbar, johnrusso, simoncook, jordy.potman.lists, apazos, sabuasal, llvm-commits
  Differential Revision: https://reviews.llvm.org/D41675
  llvm-svn: 322965
* [MSan] Move the access address check before the shadow access for that address (Alexander Potapenko, 2017-11-23; 1 file, -0/+22)
  MSan used to insert the shadow check of the store pointer operand _after_ the shadow of the value operand has been written. This happens to work in the userspace, as the whole shadow range is always mapped. However in the kernel the shadow page may not exist, so the bug may cause a crash. This patch moves the address check in front of the shadow access.
  llvm-svn: 318901
* [msan] Don't sanitize "nosanitize" instructions (Vitaly Buka, 2017-11-20; 2 files, -16/+48)
  Reviewers: eugenis
  Subscribers: hiraditya, llvm-commits
  Differential Revision: https://reviews.llvm.org/D40205
  llvm-svn: 318708
* Update some code.google.com links (Hans Wennborg, 2017-11-13; 1 file, -2/+2)
  llvm-svn: 318115
* [MSan] Disable sanitization for __sanitizer_dtor_callback. (Matt Morehouse, 2017-09-20; 1 file, -0/+16)
  Summary: Eliminate unnecessary instrumentation at __sanitizer_dtor_callback call sites. Fixes https://github.com/google/sanitizers/issues/861.
  Reviewers: eugenis, kcc
  Reviewed By: eugenis
  Subscribers: vitalybuka, llvm-commits, cfe-commits, hiraditya
  Differential Revision: https://reviews.llvm.org/D38063
  llvm-svn: 313831
* Add element-atomic mem intrinsic canary tests for Memory Sanitizer. (Daniel Neilson, 2017-07-18; 1 file, -0/+35)
  Summary: Add canary tests to verify that MSAN currently does nothing with the element atomic memory intrinsics for memcpy, memmove, and memset. Placeholder tests that will fail once element atomic @llvm.mem[cpy|move|set] intrinsics have been added to the MemIntrinsic class hierarchy. These will act as a reminder to verify that MSAN handles these intrinsics properly once they have been added to that class hierarchy.
  Reviewers: reames
  Reviewed By: reames
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D35510
  llvm-svn: 308251
* [msan] Only check shadow memory for operands that are sized. (Evgeniy Stepanov, 2017-07-11; 1 file, -0/+22)
  Fixes PR33347: https://bugs.llvm.org/show_bug.cgi?id=33347.
  Differential Revision: https://reviews.llvm.org/D35160
  Patch by Matt Morehouse.
  llvm-svn: 307684
* fix trivial typos, NFC (Hiroshi Inoue, 2017-07-01; 1 file, -2/+2)
  llvm-svn: 306952
* [X86] Replace 'REQUIRES: x86' in tests with 'REQUIRES: x86-registered-target' which seems to be the correct way to make them run on an x86 build. (Craig Topper, 2017-06-04; 7 files, -7/+7)
  llvm-svn: 304682
* Update three tests I missed in r302979 and r302990 (Justin Bogner, 2017-05-18; 1 file, -0/+1)
  llvm-svn: 303319
* MSan: Mark MemorySanitizer tests that use x86 intrinsics as REQUIRES: x86 (Justin Bogner, 2017-05-13; 7 files, -64/+73)
  Tests that use target intrinsics are inherently target specific. Mark them as such.
  llvm-svn: 302990
* [msan] Fix PR32842 (Alexander Potapenko, 2017-05-11; 1 file, -0/+20)
  It turned out that MSan was incorrectly calculating the shadow for int comparisons: it was done by truncating the result of (Shadow1 OR Shadow2) to i1, effectively rendering all bits except LSB useless. This approach doesn't work e.g. in the case where the values being compared are even (i.e. have the LSB of the shadow equal to zero). Instead, if CreateShadowCast() has to cast a bigger int to i1, we replace the truncation with an ICMP to 0.
  This patch doesn't affect the code generated for SPEC 2006 binaries, i.e. there's no performance impact. For the test case reported in PR32842 MSan with the patch generates a slightly more efficient code:

    orq %rcx, %rax
    jne .LBB0_6

  instead of:

    orl %ecx, %eax
    testb $1, %al
    jne .LBB0_6

  llvm-svn: 302787
* Add address space mangling to lifetime intrinsics (Matt Arsenault, 2017-04-10; 5 files, -20/+20)
  In preparation for allowing allocas to have non-0 addrspace.
  llvm-svn: 299876