path: root/llvm/test/Instrumentation/AddressSanitizer/basic.ll
* [NewPM] Second attempt at porting ASan (Leonard Chan, 2019-02-13; 1 file, +4/-0)

  This is the second attempt to port ASan to the new PM after D52739. This takes
  the initialization required by ASan from the Module by moving it into a
  separate class with its own analysis that the new PM ASan can use.

  Changes:
  - Split AddressSanitizer into 2 passes: one for the instrumentation on the
    function, and one for the pass itself, which creates an instance of the
    first during its run. The same is done for AddressSanitizerModule.
  - Add new PM AddressSanitizer and AddressSanitizerModule.
  - Add legacy and new PM analyses for reading the data needed to initialize
    ASan.
  - Removed the DominatorTree dependency from ASan since it was unused.
  - Move GlobalsMetadata and ShadowMapping out of the anonymous namespace since
    the new PM analysis holds these 2 classes and will need to expose them.

  Differential Revision: https://reviews.llvm.org/D56470

  llvm-svn: 353985
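  As a concrete illustration, a hedged sketch of the new-PM RUN lines such a
  port typically adds to a .ll test (the exact pass names registered in
  PassRegistry.def are assumptions):

    ; RUN: opt < %s -passes='asan' -S | FileCheck %s
    ; RUN: opt < %s -passes='asan-module' -S | FileCheck %s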
* Revert "[PassManager/Sanitizer] Enable usage of ported AddressSanitizer ↵Leonard Chan2018-10-261-2/+0
| | | | | | | | | | | | | passes with -fsanitize=address" This reverts commit 8d6af840396f2da2e4ed6aab669214ae25443204 and commit b78d19c287b6e4a9abc9fb0545de9a3106d38d3d which causes slower build times by initializing the AddressSanitizer on every function run. The corresponding revisions are https://reviews.llvm.org/D52814 and https://reviews.llvm.org/D52739. llvm-svn: 345433
* [PassManager/Sanitizer] Port of AddressSanitizer pass from legacy to new
  PassManager (Leonard Chan, 2018-10-11; 1 file, +2/-0)

  This patch ports the pass from the legacy pass manager to the new one to take
  advantage of the benefits of the new PM. This involved moving a lot of the
  declarations for AddressSanitizer to a header so that it can be publicly used
  via PassRegistry.def, which I believe contains all the passes managed by the
  new PM. This patch essentially decouples the instrumentation from the legacy
  PM such that it can be used by both legacy and new PM infrastructure.

  Differential Revision: https://reviews.llvm.org/D52739

  llvm-svn: 344274
* Remove alignment argument from memcpy/memmove/memset in favour of alignment
  attributes (Step 1) (Daniel Neilson, 2018-01-19; 1 file, +6/-6)

  Summary:
  This is a resurrection of work first proposed and discussed in Aug 2015:
    http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
  and initially landed (but then backed out) in Nov 2015:
    http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

  The @llvm.memcpy/memmove/memset intrinsics currently have an explicit
  argument which is required to be a constant integer. It represents the
  alignment of the dest (and source), and so must be the minimum of the actual
  alignment of the two.

  This change is the first in a series that allows source and dest to each
  have their own alignments by using the alignment attribute on their
  arguments. In this change we:
  1) Remove the alignment argument.
  2) Add alignment attributes to the source & dest arguments. We, temporarily,
     require that the alignments for source & dest be equal.

  For example, code which used to read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 100, i32 4, i1 false)
  will now read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 4 %dest, i8* align 4 %src, i32 100, i1 false)

  Downstream users may have to update their lit tests that check for
  @llvm.memcpy/memmove/memset call/declaration patterns. The following extended
  sed script may help with updating the majority of your tests, but it does not
  catch all possible patterns, so some manual checking and updating will be
  required.

  s~declare void @llvm\.mem(set|cpy|move)\.p([^(]*)\((.*), i32, i1\)~declare void @llvm.mem\1.p\2(\3, i1)~g
  s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* \3, i8 \4, i8 \5, i1 \6)~g
  s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* \3, i8 \4, i16 \5, i1 \6)~g
  s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* \3, i8 \4, i32 \5, i1 \6)~g
  s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* \3, i8 \4, i64 \5, i1 \6)~g
  s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* \3, i8 \4, i128 \5, i1 \6)~g
  s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* align \6 \3, i8 \4, i8 \5, i1 \7)~g
  s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* align \6 \3, i8 \4, i16 \5, i1 \7)~g
  s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* align \6 \3, i8 \4, i32 \5, i1 \7)~g
  s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* align \6 \3, i8 \4, i64 \5, i1 \7)~g
  s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* align \6 \3, i8 \4, i128 \5, i1 \7)~g
  s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* \4, i8\5* \6, i8 \7, i1 \8)~g
  s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* \4, i8\5* \6, i16 \7, i1 \8)~g
  s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* \4, i8\5* \6, i32 \7, i1 \8)~g
  s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* \4, i8\5* \6, i64 \7, i1 \8)~g
  s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* \4, i8\5* \6, i128 \7, i1 \8)~g
  s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* align \8 \4, i8\5* align \8 \6, i8 \7, i1 \9)~g
  s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* align \8 \4, i8\5* align \8 \6, i16 \7, i1 \9)~g
  s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* align \8 \4, i8\5* align \8 \6, i32 \7, i1 \9)~g
  s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* align \8 \4, i8\5* align \8 \6, i64 \7, i1 \9)~g
  s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* align \8 \4, i8\5* align \8 \6, i128 \7, i1 \9)~g

  The remaining changes in the series will:
  Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with
          differing source and dest alignments.
  Step 3) Update Clang to use the new IRBuilder API.
  Step 4) Update Polly to use the new IRBuilder API.
  Step 5) Update LLVM passes that create memcpy/memmove calls to use the new
          IRBuilder API, and those that use MemIntrinsicInst::[get|set]Alignment()
          to use getDestAlignment() and getSourceAlignment() instead.
  Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
          MemIntrinsicInst::[get|set]Alignment() methods.

  Reviewers: pete, hfinkel, lhames, reames, bollu

  Reviewed By: reames

  Subscribers: niosHD, reames, jholewinski, qcolombet, jfb, sanjoy, arsenm,
  dschuff, dylanmckay, mehdi_amini, sdardis, nemanjai, david2050, nhaehnle,
  javed.absar, sbc100, jgravelle-google, eraman, aheejin, kbarton,
  JDevlieghere, asb, rbar, johnrusso, simoncook, jordy.potman.lists, apazos,
  sabuasal, llvm-commits

  Differential Revision: https://reviews.llvm.org/D41675

  llvm-svn: 322965
* ASAN: Provide reliable debug info for local variables at -O0. (Adrian
  Prantl, 2017-12-11; 1 file, +1/-0)

  The function stack poisoner conditionally stores local variables either in
  an alloca or in malloc'ed memory, which has the unfortunate side effect that
  the actual address of the variable is only materialized when the variable is
  accessed. This means that those variables are mostly invisible to the
  debugger even when compiling without optimizations.

  This patch stores the address of the local stack base into an alloca, which
  can be referred to by the debug info and is available throughout the
  function. This adds one extra pointer-sized alloca to each stack frame (but
  mem2reg can optimize it away again when optimizations are enabled, yielding
  roughly the same debug info quality as before in optimized code).

  rdar://problem/30433661

  Differential Revision: https://reviews.llvm.org/D41034

  llvm-svn: 320415
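  For illustration, a minimal sketch of the pattern described above (the
  variable name and surrounding IR are assumptions, not taken from the patch):

    ; the computed stack base is spilled to an alloca so that llvm.dbg.declare
    ; can refer to it throughout the function, even at -O0
    %asan_local_stack_base = alloca i64
    store i64 %stack_base, i64* %asan_local_stack_base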
* [asan] Test ASan instrumentation for shadow scale value of 5 (Walter Lee,
  2017-11-17; 1 file, +10/-5)

  Add additional RUN clauses to test for -asan-mapping-scale=5 in selected
  tests, with special CHECK statements where needed.

  Differential Revision: https://reviews.llvm.org/D39775

  llvm-svn: 318493
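  For illustration, the shadow-address computation that the new RUN clauses
  exercise (a sketch; with a scale of 5, one shadow byte covers 32 application
  bytes instead of 8):

    ; default mapping scale of 3:
    %shadow = lshr i64 %addr, 3
    ; with -asan-mapping-scale=5:
    %shadow = lshr i64 %addr, 5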
* Add element-atomic mem intrinsic canary tests for Address Sanitizer.
  (Daniel Neilson, 2017-07-18; 1 file, +20/-0)

  Summary:
  Add canary tests to verify that ASAN currently does nothing with the element
  atomic memory intrinsics for memcpy, memmove, and memset.

  These are placeholder tests that will fail once the element atomic
  @llvm.mem[cpy|move|set] intrinsics have been added to the MemIntrinsic class
  hierarchy. They will act as a reminder to verify that ASAN handles these
  intrinsics properly once they have been added to that class hierarchy.

  Reviewers: reames

  Reviewed By: reames

  Subscribers: llvm-commits

  Differential Revision: https://reviews.llvm.org/D35505

  llvm-svn: 308248
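  For illustration, a hedged sketch of one of the element-atomic intrinsics
  these canary tests cover (the trailing i32 is the element size; the operand
  values are assumptions):

    call void @llvm.memcpy.element.unordered.atomic.p0i8.p0i8.i64(
        i8* align 1 %dst, i8* align 1 %src, i64 16, i32 1)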
* AddressSanitizer: don't track swifterror memory addresses (Arnold
  Schwaighofer, 2017-02-15; 1 file, +26/-0)

  They are register promoted by ISel and so it makes no sense to treat them as
  memory.

  Inserting calls to the thread sanitizer would also generate invalid IR. You
  would hit:

    "swifterror value can only be loaded and stored from, or as a swifterror
    argument!"

  llvm-svn: 295230
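  For illustration, a hedged sketch of a swifterror slot that the pass now
  skips (typed-pointer syntax; operand details are assumptions):

    %err = alloca swifterror i8*, align 8   ; register-promoted by ISel
    store i8* null, i8** %err               ; left uninstrumented by ASan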
* Revert "Change memcpy/memset/memmove to have dest and source alignments."Pete Cooper2015-11-191-6/+6
| | | | | | | | | | This reverts commit r253511. This likely broke the bots in http://lab.llvm.org:8011/builders/clang-ppc64-elf-linux2/builds/20202 http://bb.pgr.jp/builders/clang-3stage-i686-linux/builds/3787 llvm-svn: 253543
* Change memcpy/memset/memmove to have dest and source alignments. (Pete
  Cooper, 2015-11-18; 1 file, +6/-6)

  Note, this was reviewed (and more details are in)
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

  These intrinsics currently have an explicit alignment argument, which is
  required to be a constant integer. It represents the alignment of the source
  and dest, and so must be the minimum of those.

  This change allows source and dest to each have their own alignments by
  using the alignment attribute on their arguments. The alignment argument
  itself is removed.

  There are a few places in the code for which the code needs to be checked by
  an expert as to whether using only src/dest alignment is safe. For those
  places, they currently take the minimum of src/dest alignments, which
  matches the current behaviour.

  For example, code which used to read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 500, i32 8, i1 false)
  will now read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 8 %dest, i8* align 8 %src, i32 500, i1 false)

  For out-of-tree owners, I was able to strip alignment from calls using sed
  by replacing:
    (call.*llvm\.memset.*)i32\ [0-9]*\,\ i1 false\)
  with:
    $1i1 false)
  and similarly for memmove and memcpy. I then added back in alignment to test
  cases which needed it.

  A similar commit will be made to clang, which actually has many differences
  in alignment as now IRBuilder can generate different source/dest alignments
  on calls.

  In IRBuilder itself, a new argument was added. Instead of calling:
    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, /* isVolatile */ false)
  you now call:
    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, SrcAlign, /* isVolatile */ false)

  There is a temporary class (IntegerAlignment) which takes the source
  alignment and rejects implicit conversion from bool. This is to prevent
  isVolatile here from passing its default parameter to the source alignment.

  Note, changes in future can now be made to codegen. I didn't change anything
  here, but this change should enable better memcpy code sequences.

  Reviewed by Hal Finkel.

  llvm-svn: 253511
* [asan] Rename the ABI versioning symbol to '__asan_version_mismatch_check'
  instead of abusing '__asan_init' (Kuba Brecka, 2015-07-23; 1 file, +1/-1)

  We currently version __asan_init, and when the ABI version doesn't match,
  the linker gives an "undefined reference to '__asan_init_v5'" message. From
  this, it might not be obvious that it's actually a version mismatch error.
  This patch makes the error message much clearer by changing the name of the
  undefined symbol to be __asan_version_mismatch_check_xxx (followed by the
  version string). We obviously don't want the initializer to be named like
  that, so it's a separate symbol that is used only for the purpose of version
  checking.

  Reviewed at http://reviews.llvm.org/D11004

  llvm-svn: 243003
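  For illustration, a hedged sketch of the module constructor after this
  change (the exact version suffix is an assumption and changes with the ABI):

    define internal void @asan.module_ctor() {
      call void @__asan_init()
      call void @__asan_version_mismatch_check_v6()
      ret void
    }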
* ASan: Use `createSanitizerCtor` to create ctor, and call `__asan_init`
  (Ismail Pazarbasi, 2015-05-07; 1 file, +4/-0)

  Reviewers: kcc, samsonov

  Subscribers: llvm-commits

  Differential Revision: http://reviews.llvm.org/D8778

  llvm-svn: 236777
* [opaque pointer type] Add textual IR support for explicit type parameter to
  load instruction (David Blaikie, 2015-02-27; 1 file, +7/-7)

  Essentially the same as the GEP change in r230786.

  A similar migration script can be used to update test cases, though a few
  more test case improvements/changes were required this time around
  (r229269-r229278):

  import fileinput
  import sys
  import re

  pat = re.compile(r"((?:=|:|^)\s*load (?:atomic )?(?:volatile )?(.*?))(| addrspace\(\d+\) *)\*($| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$)")

  for line in sys.stdin:
    sys.stdout.write(re.sub(pat, r"\1, \2\3*\4", line))

  Reviewers: rafael, dexonsmith, grosser

  Differential Revision: http://reviews.llvm.org/D7649

  llvm-svn: 230794
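  For illustration, the syntax change this applies to the loads in this test:

    ; before: the result type was implied by the pointer operand
    %v = load i32* %p
    ; after: the loaded type is spelled out explicitly
    %v = load i32, i32* %p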
* IR: Make metadata typeless in assembly (Duncan P. N. Exon Smith,
  2014-12-15; 1 file, +1/-1)

  Now that Metadata is typeless, reflect that in the assembly. These are the
  matching assembly changes for the metadata/value split in r223802.

  - Only use the `metadata` type when referencing metadata from a call
    intrinsic -- i.e., only when it's used as a Value.
  - Stop pretending that ValueAsMetadata is wrapped in an MDNode when
    referencing it from call intrinsics.

  So, assembly like this:

    define @foo(i32 %v) {
      call void @llvm.foo(metadata !{i32 %v}, metadata !0)
      call void @llvm.foo(metadata !{i32 7}, metadata !0)
      call void @llvm.foo(metadata !1, metadata !0)
      call void @llvm.foo(metadata !3, metadata !0)
      call void @llvm.foo(metadata !{metadata !3}, metadata !0)
      ret void, !bar !2
    }
    !0 = metadata !{metadata !2}
    !1 = metadata !{i32* @global}
    !2 = metadata !{metadata !3}
    !3 = metadata !{}

  turns into this:

    define @foo(i32 %v) {
      call void @llvm.foo(metadata i32 %v, metadata !0)
      call void @llvm.foo(metadata i32 7, metadata !0)
      call void @llvm.foo(metadata i32* @global, metadata !0)
      call void @llvm.foo(metadata !3, metadata !0)
      call void @llvm.foo(metadata !{!3}, metadata !0)
      ret void, !bar !2
    }
    !0 = !{!2}
    !1 = !{i32* @global}
    !2 = !{!3}
    !3 = !{}

  I wrote an upgrade script that handled almost all of the tests in llvm and
  many of the tests in cfe (even handling many CHECK lines). I've attached it
  (or will attach it in a moment if you're speedy) to PR21532 to help everyone
  update their out-of-tree testcases.

  This is part of PR21532.

  llvm-svn: 224257
* [asan] Assign a low branch weight to ASan's slow path, patch by Jonas
  Wagner (Kostya Serebryany, 2014-09-02; 1 file, +3/-1)

  This speeds up asan (at least on SPEC) by 1%-5% or more. Also fix lint in
  dfsan.

  llvm-svn: 216972
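  For illustration, a hedged sketch of the profile metadata this attaches to
  the shadow check (current syntax; the exact weight values are assumptions):

    br i1 %shadow_poisoned, label %slow_path, label %cont, !prof !0
    ...
    !0 = !{!"branch_weights", i32 1, i32 100000}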
* CHECK-LABEL-ize one test (Alexey Samsonov, 2014-07-16; 1 file, +7/-7)

  llvm-svn: 213177
* [asan] properly instrument memory accesses that have small alignment
  (smaller than min(8,size)) by making two checks instead of one (Kostya
  Serebryany, 2014-05-23; 1 file, +14/-2)

  This may slow down some cases, e.g. long long on 32-bit or wide loads
  produced after loop unrolling. The benefit is higher sensitivity.

  llvm-svn: 209508
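  For illustration, a hedged sketch of the two-check scheme (shadow
  computation elided; __asan_report_load_n(addr, size) is the runtime report
  entry point also used for unusual sizes, see the 2013-02-19 entry below):

    ; an 8-byte load with only 4-byte alignment: the first and the last byte
    ; of the access get two independent 1-byte checks
    %first = ptrtoint i64* %p to i64
    %last  = add i64 %first, 7
    ; a poisoned shadow byte on either check reports with the real size:
    ;   call void @__asan_report_load_n(i64 %first, i64 8)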
* [asan] add llvm-ish test for memset/etc instrumentation (Kostya Serebryany,
  2014-04-21; 1 file, +17/-0)

  llvm-svn: 206747
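  For illustration, a hedged sketch of what the instrumentation does with a
  memset intrinsic, which is rewritten into a call to the ASan runtime (IR of
  that era; operand details are assumptions):

    ; before
    call void @llvm.memset.p0i8.i64(i8* %p, i8 0, i64 10, i32 1, i1 false)
    ; after
    call i8* @__asan_memset(i8* %p, i32 0, i64 10)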
* [ASan] Add -asan-module to the ASan .ll tests. (Alexander Potapenko,
  2014-03-20; 1 file, +1/-1)

  After the -asan pass had been split into -asan (function-level) and
  -asan-module (module-level), some of the tests silently stopped working,
  because they didn't instrument the globals anymore. We've decided to have
  every test use both passes, irrespective of the presence of globals in it.

  llvm-svn: 204335
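  For illustration, the resulting RUN-line shape in the legacy-PM tests (a
  sketch; before this change the line named only -asan):

    ; RUN: opt < %s -asan -asan-module -S | FileCheck %s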
* [asan] rewrite asan's stack frame layout (Kostya Serebryany, 2013-12-06;
  1 file, +0/-19)

  Summary:
  Rewrite asan's stack frame layout. First, most of the stack layout logic is
  moved into a separate file to make it more testable and (potentially) useful
  for other projects. Second, make the frames more compact by using adaptive
  redzones (smaller for small objects, larger for large objects). Third, try
  to minimize gaps due to large alignments (this is hypothetical, since today
  we don't see many stack vars aligned by more than 32).

  The frames indeed become more compact, but I'll still need to run more
  benchmarks before committing; I am asking for review now to get early
  feedback. This change will be accompanied by a trivial change in compiler-rt
  tests to match the new frame sizes.

  Reviewers: samsonov, dvyukov

  Reviewed By: samsonov

  CC: llvm-commits

  Differential Revision: http://llvm-reviews.chandlerc.com/D2324

  llvm-svn: 196568
* [asan] workaround for PR16277: don't instrument AllocaInst with alignment
  more than the redzone size (Kostya Serebryany, 2013-06-26; 1 file, +19/-0)

  llvm-svn: 184928
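  For illustration, the kind of alloca the workaround leaves uninstrumented
  (a sketch, assuming the default 32-byte redzone):

    %a = alloca i32, align 64   ; alignment exceeds the redzone size: skipped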
* [asan] don't instrument functions with available_externally linkage (Kostya
  Serebryany, 2013-03-18; 1 file, +12/-0)

  This saves a bit of compile time and reduces the number of redundant global
  strings generated by asan
  (https://code.google.com/p/address-sanitizer/issues/detail?id=167).

  llvm-svn: 177250
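  For illustration, a function the pass now skips (a sketch; the body exists
  only as an inlining copy, so instrumenting it would duplicate work done in
  the defining TU):

    define available_externally void @inline_candidate() sanitize_address {
      ret void
    }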
* Unify clang/llvm attributes for asan/tsan/msan (LLVM part) (Kostya
  Serebryany, 2013-02-26; 1 file, +6/-6)

  These are two related changes (one in llvm, one in clang).

  LLVM:
  - rename address_safety => sanitize_address (the enum value is the same, so
    we preserve binary compatibility with old bitcode)
  - rename thread_safety => sanitize_thread
  - rename no_uninitialized_checks => sanitize_memory

  CLANG:
  - add __attribute__((no_sanitize_address)) as a synonym for
    __attribute__((no_address_safety_analysis))
  - add __attribute__((no_sanitize_thread))
  - add __attribute__((no_sanitize_memory))
  - for S in address, thread, memory: if -fsanitize=S is present and
    __attribute__((no_sanitize_S)) is not set, set the llvm attribute
    sanitize_S

  llvm-svn: 176075
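  For illustration, how the renamed attribute appears on an instrumented
  function (a sketch):

    define void @instrumented() sanitize_address {
      ret void
    }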
* [asan] instrument memory accesses with unusual sizes (Kostya Serebryany,
  2013-02-19; 1 file, +31/-1)

  This patch makes asan instrument memory accesses with unusual sizes (e.g. 5
  bytes or 10 bytes), e.g. long double or packed structures. Instrumentation
  is done with two 1-byte checks (first and last bytes), and if an error is
  found, __asan_report_load_n(addr, real_size) or
  __asan_report_store_n(addr, real_size) is called. Also, call these two new
  functions in memset/memcpy instrumentation.

  The asan-rt part will follow.

  llvm-svn: 175507
* [asan] revert r175266 as it breaks code with packed structures; supporting
  long double will require a more general solution (Kostya Serebryany,
  2013-02-18; 1 file, +1/-1)

  llvm-svn: 175442
* [asan] support long double on 64-bit. See
  https://code.google.com/p/address-sanitizer/issues/detail?id=151 (Kostya
  Serebryany, 2013-02-15; 1 file, +9/-0)

  llvm-svn: 175266
* [asan] fix tests for the new ABI (Kostya Serebryany, 2013-02-12; 1 file,
  +2/-2)

  llvm-svn: 174959
* [asan] make sure asan erases old unused allocas after it created a new one
  (Kostya Serebryany, 2012-10-19; 1 file, +20/-0)

  This became important after the recent move from ModulePass to FunctionPass,
  because no cleanup happens after the asan pass any more.

  llvm-svn: 166267
* [asan] insert crash basic blocks inline as opposed to inserting them at the
  end of the function (Kostya Serebryany, 2012-08-14; 1 file, +8/-10)

  This doesn't seem to fix or break anything, but is considered to be more
  friendly to downstream passes (test change).

  llvm-svn: 161871
* [asan] make sure that the crash callbacks do not get merged (Chandler's
  idea: insert an empty InlineAsm) (Kostya Serebryany, 2012-07-20; 1 file,
  +19/-18)

  Change the order in which the new BBs are inserted: the slow-path BB is
  inserted between old BBs, the crash BB is inserted at the end. Don't create
  an empty BB (introduced by recent commits). Update the test. The
  experimental code that does manual crash-callback merging will most likely
  be deleted later.

  llvm-svn: 160544
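  For illustration, a hedged sketch of the empty-InlineAsm trick (the
  sideeffect marker is what keeps otherwise-identical crash blocks from being
  merged; exact placement is an assumption):

    call void asm sideeffect "", ""()
    call void @__asan_report_load8(i64 %addr)
    unreachable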
* [asan] refactor instrumentation to allow merging the crash callbacks (not
  fully implemented yet, no functionality change except the BB order) (Kostya
  Serebryany, 2012-07-16; 1 file, +8/-6)

  llvm-svn: 160284
* Revert r160254 temporarily. (Chandler Carruth, 2012-07-16; 1 file, +12/-10)

  It turns out that ASan relied on the at-the-end block insertion order to
  (purely by happenstance) disable some LLVM optimizations, which in turn
  start firing when the ordering is made more "normal". These optimizations
  then merge many of the instrumentation reporting calls, which breaks the
  return-address-based error reporting in ASan. We're looking at several
  different options for fixing this.

  llvm-svn: 160256
* Teach AddressSanitizer to create basic blocks in a more natural order.
  (Chandler Carruth, 2012-07-16; 1 file, +10/-12)

  This is particularly useful to the backend code generators, which try to
  process things in the incoming function order. Also, clean up some uses of
  IRBuilder to be a bit simpler and clearer.

  llvm-svn: 160254
* Add a basic test for AddressSanitizer. This is just a bare-bones
  functionality test. (Chandler Carruth, 2012-07-16; 1 file, +70/-0)

  In general, unless the functionality is substantially separated, we should
  lump more basic testing into this file. The test-running infrastructure
  likes having a few test files with more comprehensive testing within them.

  llvm-svn: 160253
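  For illustration, a hedged sketch of the core pattern such a test checks for
  on an 8-byte access (current IR syntax; the shadow offset is
  platform-dependent and assumed here):

    %0 = ptrtoint i64* %p to i64
    %1 = lshr i64 %0, 3
    %2 = add i64 %1, 2147450880   ; 0x7fff8000, the typical x86-64 offset
    %3 = inttoptr i64 %2 to i8*
    %s = load i8, i8* %3
    %poisoned = icmp ne i8 %s, 0
    br i1 %poisoned, label %report, label %cont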