path: root/compiler-rt/lib/tsan/rtl/tsan_rtl.cc
Commit history (each entry: commit message, author, date, files changed, lines -deleted/+added)
...
* tsan: revert r262037 (Dmitry Vyukov, 2016-02-26; 1 file, -4/+2)
  Broke aarch64 and darwin bots. llvm-svn: 262046
* tsan: split thread into logical and physical state (Dmitry Vyukov, 2016-02-26; 1 file, -2/+4)
  Currently ThreadState holds both logical state (required for the
  race-detection algorithm, user-visible) and physical state (various caches,
  most notably the malloc cache). Move the physical state into a new Processor
  entity. Besides being the right thing from an abstraction point of view,
  this solves several problems:
  1. Cache everything at the P level in Go. Currently we cache on a mix of
     goroutine and OS-thread levels, which unnecessarily increases memory
     consumption.
  2. Properly handle free operations in Go. Frees are issued by the GC, which
     has no goroutine context, so previously we could do nothing more than
     clear shadow. For example, we leaked sync objects and heap block
     descriptors.
  3. This will allow us to get rid of libc malloc in Go (we now have a
     Processor context for the internal allocator cache), which in turn will
     allow us to get rid of the dependency on libc entirely.
  4. Potentially we can make Processor per-CPU in C++ mode instead of
     per-thread, which would reduce resource consumption.
  The distinction between Thread and Processor is currently used only by Go;
  C++ creates a Processor per OS thread, which is equivalent to the current
  scheme. llvm-svn: 262037
* [tsan] Use re-exec method to enable interceptors on older versions of OS X (Kuba Brecka, 2015-12-03; 1 file, -0/+3)
  In AddressSanitizer, we have the MaybeReexec method to detect when we're
  running without DYLD_INSERT_LIBRARIES (in which case interceptors don't
  work) and re-execute with the environment variable set. On OS X 10.11+,
  this is no longer necessary, but to support ThreadSanitizer on older
  versions of OS X, let's use the same method as well. This patch moves the
  implementation from `asan/` into `sanitizer_common/`.
  Differential Revision: http://reviews.llvm.org/D15123 llvm-svn: 254600
* [tsan] Fix weakly imported functions on OS X (Kuba Brecka, 2015-11-30; 1 file, -4/+4)
  On OS X, for weak functions (that users can override by providing their own
  implementation in the main binary), we need
  `extern "C" SANITIZER_INTERFACE_ATTRIBUTE SANITIZER_WEAK_ATTRIBUTE NOINLINE`.
  Fixes a broken test case on OS X, java_symbolization.cc, which uses the
  weak function __tsan_symbolize_external.
  Differential Revision: http://reviews.llvm.org/D14907 llvm-svn: 254298
* [compiler-rt] [tsan] Unify aarch64 mapping (Adhemerval Zanella, 2015-11-26; 1 file, -6/+5)
  This patch unifies the 39- and 42-bit support for AArch64 by using an
  external memory read to check the runtime-detected VMA and select the
  better mapping and transformation. Although slower, this allows the same
  instrumented binary to be independent of the kernel. Along with this
  change, the patch also fixes some 42-bit failures with ASLR disabled by
  increasing the upper high app memory threshold, and fixes the 42-bit
  madvise value when large pages are not set. llvm-svn: 254151
* [tsan] Alternative ThreadState storage for OS X (Kuba Brecka, 2015-11-05; 1 file, -1/+1)
  This implements a "poor man's TLV" to be used for TSan's ThreadState on
  OS X. Based on the fact that `pthread_self()` is always available and
  reliable and returns a valid pointer to memory, we use the shadow memory of
  this pointer as thread-local storage. No user code should ever read/write
  this internal libpthread structure, so it's safe to use its shadow for this
  purpose. We lazily allocate the ThreadState object and store the pointer
  there.
  Differential Revision: http://reviews.llvm.org/D14288 llvm-svn: 252159
* [tsan] Use malloc zone interceptors on OS X, part 2 (Kuba Brecka, 2015-11-05; 1 file, -0/+1)
  TSan needs to use a custom malloc zone on OS X, which is already
  implemented in ASan. This patch uses the sanitizer_common implementation in
  `sanitizer_malloc_mac.inc` for TSan as well.
  Reviewed at http://reviews.llvm.org/D14330 llvm-svn: 252155
* [sanitizer] Move CheckVMASize after flag initialization (Adhemerval Zanella, 2015-09-15; 1 file, -3/+1)
  llvm-svn: 247684
* [compiler-rt] [sanitizers] Add VMA size check at runtime (Adhemerval Zanella, 2015-09-11; 1 file, -0/+3)
  This patch adds a runtime check to asan, dfsan, msan, and tsan for
  architectures that support multiple VMA sizes (like aarch64). Currently the
  check only prints a warning indicating the VMA the binary was built for
  versus the one detected at runtime. llvm-svn: 247413
* tsan: speed up race deduplication (Dmitry Vyukov, 2015-09-03; 1 file, -0/+2)
  Race deduplication code proved to be a performance bottleneck in the past
  when suppressions/annotations are used, or when some races are simply left
  unaddressed. And we still get user complaints about this:
  https://groups.google.com/forum/#!topic/thread-sanitizer/hB0WyiTI4e4
  ReportRace already has several layers of caching for racy pcs/addresses to
  make deduplication faster. However, ReportRace still takes global mutexes
  (ThreadRegistry and ReportMutex) during deduplication and also calls
  mmap/munmap (which take a process-wide semaphore in the kernel); this makes
  deduplication non-scalable.
  This patch moves race deduplication outside of the global mutexes and also
  removes all mmap/munmap calls. As a result, race_stress.cc with 100 threads
  and 10000 iterations becomes 30x faster:
  before: real 0m21.673s  user 0m5.932s   sys 0m34.885s
  after:  real 0m0.720s   user 0m23.646s  sys 0m1.254s
  http://reviews.llvm.org/D12554 llvm-svn: 246758
* [Sanitizers] Unify the semantics and usage of the "exitcode" runtime flag across all sanitizers (Alexey Samsonov, 2015-08-21; 1 file, -1/+1)
  Summary: Merge the "exitcode" flag from ASan, LSan, TSan and "exit_code"
  from MSan into one entity. Additionally, make sure sanitizer_common now
  uses the value of common_flags()->exitcode when dying on error, so that
  this flag will automatically work for other sanitizers (UBSan and DFSan)
  as well.
  User-visible changes:
  * The "exit_code" MSan runtime flag is now deprecated. If explicitly
    specified, this flag takes precedence over "exitcode". Users are
    encouraged to migrate to the new version.
  * __asan_set_error_exit_code() and __msan_set_exit_code() are removed.
    With few exceptions, we don't support changing runtime flags during
    program execution - we can't make them thread-safe. Users should instead
    use __sanitizer_set_death_callback() and call _exit() with the proper
    exit code from the callback.
  * Plugin tools (LSan and UBSan) now inherit the exit code of the parent
    tool. In particular, this means that ASan would now crash the program
    with exit code "1" instead of "23" if it detects leaks.
  Reviewers: kcc, eugenis
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D12120 llvm-svn: 245734
* [sanitizer] Implement include_if_exists with process name substitution (Evgeniy Stepanov, 2015-07-21; 1 file, -1/+1)
  include_if_exists=/path/to/sanitizer/options reads flags from the file if
  it is present. "%b" in the include file path (for both variants of the
  flag) is replaced with the basename of the main executable.
  llvm-svn: 242853
* [ASan] Make binary name reader cross-platform (Yury Gribov, 2015-06-04; 1 file, -0/+1)
  Differential Revision: http://reviews.llvm.org/D10213 llvm-svn: 239020
* Add descriptive names to sanitizer entries in /proc/self/maps. Helps debugging. (Evgeniy Stepanov, 2015-05-29; 1 file, -7/+10)
  This is done by creating a named shared memory region, unlinking it and
  setting up a private (i.e. copy-on-write) mapping of that instead of a
  regular anonymous mapping. I've experimented with regular (sparse) files,
  but they can not be scaled to the size of the MSan shadow mapping, at least
  on Linux/x86_64 and ext3 fs.
  Controlled by a common flag, decorate_proc_maps, disabled by default.
  This patch has a few shortcomings:
  * not all mappings are annotated, especially in TSan.
  * our handling of memset() of shadow via mmap() puts small anonymous
    mappings inside larger named mappings, which looks ugly and can, in
    theory, hit the mapping number limit.
  llvm-svn: 238621
* Allow UBSan+MSan and UBSan+TSan combinations (Clang part) (Alexey Samsonov, 2015-04-28; 1 file, -0/+4)
  Embed the UBSan runtime into the TSan and MSan runtimes in the same way as
  we do in ASan. Extend the UBSan test suite to also run tests for these
  combinations. llvm-svn: 235954
* Use WriteToFile instead of internal_write in non-POSIX code (Timur Iskhodzhanov, 2015-04-09; 1 file, -1/+1)
  llvm-svn: 234487
* [Sanitizers] Make OpenFile more portable (Timur Iskhodzhanov, 2015-04-08; 1 file, -3/+3)
  llvm-svn: 234410
* [libsanitizer] Fix OpenFile() usage in TSan and DFSan (Alexander Potapenko, 2015-03-23; 1 file, -1/+1)
  This is a follow-up for r232936. llvm-svn: 232937
* [Tsan] Do not sanitize memcpy() during thread initialization on FreeBSD (Viktor Kutuzov, 2015-03-16; 1 file, -1/+1)
  Differential Revision: http://reviews.llvm.org/D8324 llvm-svn: 232381
* [TSan][MIPS] Adding support for MIPS64 (Mohit K. Bhakkad, 2015-02-20; 1 file, -1/+8)
  Patch by Sagar Thakur.
  Reviewers: dvyukov, samsonov, petarj, kcc, dsanders.
  Subscribers: mohit.bhakkad, Anand.Takale, llvm-commits.
  Differential Revision: http://reviews.llvm.org/D6291 llvm-svn: 229972
* [TSan] Provide default values for compile definitions (Alexey Samsonov, 2015-02-17; 1 file, -1/+1)
  Provide defaults for TSAN_COLLECT_STATS and TSAN_NO_HISTORY. Replace #ifdef
  directives with #if. This fixes a bug introduced in r229112, where building
  the TSan runtime with -DTSAN_COLLECT_STATS=0 would still enable stats
  collection and reporting. llvm-svn: 229581
* tsan: remove everything related to rss/background thread in Go mode (Dmitry Vyukov, 2015-02-16; 1 file, -8/+2)
  In Go mode the background thread is not started (internal_thread_start is
  empty), so there is no sense in having this code compiled in. This also
  removes the dependency on sanitizer_linux_libcdep.cc, which is good:
  ideally the Go runtime does not depend on libc at all. llvm-svn: 229396
* tsan: fix shadow memory mapping on windows (Dmitry Vyukov, 2015-02-16; 1 file, -2/+2)
  llvm-svn: 229391
* tsan: fix build (Dmitry Vyukov, 2015-02-14; 1 file, -6/+9)
  Revision 229127 introduced a bug: a zero value is not OK for trace headers,
  because stack0 needs a constructor call. Instead, unmap the unused part of
  the trace after all ctors have been executed. llvm-svn: 229263
* tsan: don't initialize trace header in release mode (Dmitry Vyukov, 2015-02-13; 1 file, -0/+6)
  We are going to use only a small part of the trace with the default value
  of history_size. However, the constructor writes to the whole trace. It
  writes mostly zeros, so freshly mmapped memory will do. The only non-zero
  field is the mutex type, used for debugging. Reduces per-goroutine overhead
  by 8K. https://code.google.com/p/thread-sanitizer/issues/detail?id=89
  llvm-svn: 229127
* tsan: remove stats from ThreadState ifndef TSAN_COLLECT_STATS (Dmitry Vyukov, 2015-02-13; 1 file, -0/+3)
  Issue 89: uses a lot of memory for each goroutine.
  https://code.google.com/p/thread-sanitizer/issues/detail?id=89
  llvm-svn: 229112
* tsan: don't unroll memory access loop in debug mode (Dmitry Vyukov, 2015-01-21; 1 file, -0/+11)
  The MemoryAccess function consumes ~4K of stack in debug mode, in
  significant part due to the unrolled loop. And gtest gives only 4K of stack
  to death-test threads, which causes stack overflows in debug mode.
  llvm-svn: 226644
* [asan] Allow changing verbosity in activation flags (Evgeniy Stepanov, 2015-01-20; 1 file, -2/+1)
  This change removes some debug output in asan_flags.cc that was reading the
  verbosity level before all the flags were parsed. llvm-svn: 226566
* tsan: remove TSAN_SHADOW_COUNT (Dmitry Vyukov, 2015-01-19; 1 file, -45/+7)
  TSAN_SHADOW_COUNT is defined to 4 in all environments. Other values of
  TSAN_SHADOW_COUNT were never tested and were broken by recent changes to
  shadow mapping. Remove it, as there is no reason to fix or maintain it.
  llvm-svn: 226466
* Remove TSAN_DEBUG in favor of SANITIZER_DEBUG (Alexey Samsonov, 2015-01-03; 1 file, -2/+2)
  llvm-svn: 225111
* tsan: disable flaky debug check (Dmitry Vyukov, 2014-12-18; 1 file, -2/+3)
  See the comment for details. llvm-svn: 224507
* [tsan] remove TSAN_GO in favor of SANITIZER_GO (Kostya Serebryany, 2014-12-09; 1 file, -26/+26)
  llvm-svn: 223732
* Replace InternalScopedBuffer<char> with InternalScopedString where applicable (Alexey Samsonov, 2014-12-02; 1 file, -3/+2)
  Summary: No functionality change.
  Test Plan: make check-all
  Reviewers: kcc
  Reviewed By: kcc
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D6472 llvm-svn: 223164
* [TSan] Use StackTrace from sanitizer_common where applicable (Alexey Samsonov, 2014-11-03; 1 file, -3/+3)
  Summary: This change removes the `__tsan::StackTrace` class. There are now
  three alternatives:
  1. Lightweight `__sanitizer::StackTrace`, which doesn't own a buffer of
     PCs. It is used in functions that need stack traces in read-only mode,
     and helps to prevent unnecessary allocations/copies (e.g. for
     StackTraces fetched from StackDepot).
  2. `__sanitizer::BufferedStackTrace`, which stores a buffer of PCs in a
     constant array. It is used in TraceHeader (non-Go version).
  3. `__tsan::VarSizeStackTrace`, which owns a buffer of PCs, dynamically
     allocated via TSan's internal allocator.
  Test Plan: compiler-rt test suite
  Reviewers: dvyukov, kcc
  Reviewed By: kcc
  Subscribers: llvm-commits, kcc
  Differential Revision: http://reviews.llvm.org/D6004 llvm-svn: 221194
* Change StackDepot interface to use StackTrace more extensively (Alexey Samsonov, 2014-10-26; 1 file, -2/+2)
  llvm-svn: 220637
* tsan: support mmap(MAP_32BIT) (Dmitry Vyukov, 2014-10-24; 1 file, -2/+27)
  Allow user memory in the first TB of the address space. This also enables
  non-PIE binaries and FreeBSD.
  Fixes issue: https://code.google.com/p/thread-sanitizer/issues/detail?id=5
  llvm-svn: 220571
* [TSan] Use common flags in the same way as all the other sanitizers (Alexey Samsonov, 2014-09-10; 1 file, -18/+12)
  llvm-svn: 217559
* [TSan] Initialize flags as early as possible. Disable back coredump, accidentally enabled in r215479. Add a test. (Alexey Samsonov, 2014-08-15; 1 file, -4/+4)
  llvm-svn: 215763
* tsan: fix unaligned memory access routine (Dmitry Vyukov, 2014-08-13; 1 file, -3/+3)
  It was pessimistically handling an aligned 8-byte access as two 4-byte
  accesses. llvm-svn: 215546
* [Sanitizer] Simplify Symbolizer creation interface (Alexey Samsonov, 2014-07-26; 1 file, -2/+1)
  Get rid of Symbolizer::Init(path_to_external) in favor of thread-safe
  Symbolizer::GetOrInit(), and use the latter version everywhere. Implicitly
  depend on the value of the external_symbolizer_path runtime flag instead of
  passing it around manually. No functionality change. llvm-svn: 214005
* tsan: query RSS every 100ms (Dmitry Vyukov, 2014-07-25; 1 file, -3/+1)
  Now that it has become faster, it's OK to query it every 100ms again.
  llvm-svn: 213943
* tsan: fix CurrentStackId (Dmitry Vyukov, 2014-06-06; 1 file, -14/+28)
  FuncEnter adds a FuncEnter entry to the trace that nobody removes later.
  llvm-svn: 210359
* tsan: minor optimizations for Go runtime (Dmitry Vyukov, 2014-06-06; 1 file, -1/+2)
  llvm-svn: 210351
* tsan: fix out-of-bounds access in Go runtime (Dmitry Vyukov, 2014-06-06; 1 file, -5/+3)
  FuncEntry can resize the shadow stack, while "thr->shadow_stack_pos[0] = pc"
  writes out of bounds. llvm-svn: 210349
* tsan: fix mapping of meta shadow for Go (Dmitry Vyukov, 2014-06-06; 1 file, -8/+18)
  Go maps heap and data+bss; these regions are not adjacent. data+bss is
  mapped first. llvm-svn: 210348
* tsan: optimize memory access functions (Dmitry Vyukov, 2014-05-30; 1 file, -15/+137)
  The optimization is two-fold.
  First, the algorithm now uses SSE instructions to handle all 4 shadow slots
  at once. This makes processing faster.
  Second, if the shadow already contains the same access, we do not store the
  event into the trace. This increases the effective trace size, that is,
  tsan can remember up to 10x more previous memory accesses.
  Performance impact:
  Before:
  [ OK ] DISABLED_BENCH.Mop8Read (2461 ms)
  [ OK ] DISABLED_BENCH.Mop8Write (1836 ms)
  After:
  [ OK ] DISABLED_BENCH.Mop8Read (1204 ms)
  [ OK ] DISABLED_BENCH.Mop8Write (976 ms)
  But this measures only the fast path. On large real applications the
  speedup is ~20%.
  Trace size impact:
  On app1: memory accesses: 1163265870, including same: 791312905 (68%)
  On app2: memory accesses: 166875345, including same: 150449689 (90%)
  90% of filtered events means that the trace size is effectively 10x larger.
  llvm-svn: 209897
* tsan: write memory profile in one line (which is much more readable) (Dmitry Vyukov, 2014-05-29; 1 file, -4/+1)
  For example:
  RSS 420 MB: shadow:35 meta:231 file:2 mmap:129 trace:19 heap:0 other:0 nthr=1/31
  RSS 365 MB: shadow:3 meta:231 file:2 mmap:106 trace:19 heap:0 other:0 nthr=1/31
  RSS 429 MB: shadow:23 meta:234 file:2 mmap:143 trace:19 heap:6 other:0 nthr=1/31
  RSS 509 MB: shadow:78 meta:241 file:2 mmap:147 trace:19 heap:19 other:0 nthr=1/31
  llvm-svn: 209813
* tsan: allow to write memory profile to stdout/stderr (Dmitry Vyukov, 2014-05-29; 1 file, -9/+14)
  llvm-svn: 209811
* tsan: refactor storage of meta information for heap blocks and sync objects (Dmitry Vyukov, 2014-05-29; 1 file, -1/+21)
  The new storage (MetaMap) is based on direct shadow (instead of a hashmap +
  per-block lists). This solves a number of problems:
  - eliminates quadratic behavior in SyncTab::GetAndLock
    (https://code.google.com/p/thread-sanitizer/issues/detail?id=26)
  - eliminates contention in SyncTab
  - eliminates contention in the internal allocator during allocation of
    sync objects
  - removes a bunch of ad-hoc code in the java interface
  - reduces java shadow from 2x to 1/2x
  - allows memorizing heap block meta info for Java and Go
  - allows cleaning up sync object meta info for Go, which in turn enabled
    the deadlock detector for Go
  llvm-svn: 209810
* [tsan] Fix tsango build (Evgeniy Stepanov, 2014-05-27; 1 file, -0/+4)
  llvm-svn: 209658