Commit messages

Like r367463, but for tsan/rtl.
llvm-svn: 367564
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
MutexUnlock uses ReleaseStore on s->clock, which is the right thing to do.
However, MutexReadOrWriteUnlock for writers uses Release on s->clock.
Make MutexReadOrWriteUnlock also use ReleaseStore for consistency and performance.
Unfortunately, I don't think any test can detect this, since it only potentially
affects performance.
llvm-svn: 335322
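For context, Release conceptually merges the releasing thread's vector clock into the sync object's clock, while ReleaseStore overwrites it; the overwrite is valid when the releaser's clock already dominates the sync clock, which is the case for a write-lock holder. A minimal sketch of the distinction, using a simplified vector clock rather than tsan's real ThreadClock/SyncClock code:

  // Simplified illustration of Release vs ReleaseStore on a vector clock.
  // This is NOT tsan's implementation; names and types are invented.
  #include <algorithm>
  #include <cstdint>
  #include <vector>

  using VectorClock = std::vector<uint64_t>;  // one epoch per thread id

  // Release: element-wise max; preserves whatever earlier releasers stored.
  void Release(VectorClock &sync, const VectorClock &thread) {
    if (sync.size() < thread.size()) sync.resize(thread.size(), 0);
    for (size_t i = 0; i < thread.size(); i++)
      sync[i] = std::max(sync[i], thread[i]);
  }

  // ReleaseStore: plain overwrite. Cheaper, and equivalent to Release when
  // the caller's clock already includes the sync clock, as it does for the
  // writer performing a write-unlock.
  void ReleaseStore(VectorClock &sync, const VectorClock &thread) {
    sync = thread;
  }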
1. Allow suppressing by the current stack.
We generally allow suppressing by all main stacks;
the current stack is probably the one people want
to use to suppress such reports.
2. Fix restoration of the last lock stack.
We trimmed the shadow value by storing it in a u32.
This happened to work for the test that provoked
the report on the main thread, but it breaks
for locks in any other thread.
llvm-svn: 331023
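The second fix is easy to illustrate: tsan shadow values are 64 bits wide, so stashing one in a u32 silently drops the high bits. A hypothetical reduction of the bug (not the actual report-restoration code; the bit layout here is invented):

  // Hypothetical illustration of the u32 truncation bug.
  #include <cstdint>
  #include <cstdio>

  int main() {
    uint64_t shadow = 0x0000001200000042ull;            // high bits carry part of the state
    uint32_t trimmed = static_cast<uint32_t>(shadow);   // bug: keeps low 32 bits only
    std::printf("full=0x%llx trimmed=0x%x\n",
                static_cast<unsigned long long>(shadow), trimmed);
    // Restoring the last lock stack from `trimmed` only works if the
    // interesting bits happen to fit in 32 bits, which held for the main
    // thread in the original test but not for locks in other threads.
    return 0;
  }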
__tsan_mutex_linker_init behavior
Add a new flag, __tsan_mutex_not_static, which has the opposite sense
of __tsan_mutex_linker_init. When the new __tsan_mutex_not_static flag
is passed to __tsan_mutex_destroy, tsan ignores the destruction unless
the mutex was also created with the __tsan_mutex_not_static flag.
This is useful for constructors that otherwise would set
__tsan_mutex_linker_init but cannot, because they are declared constexpr.
Google has a custom mutex with two constructors, a "linker initialized"
constructor that relies on zero-initialization and sets
__tsan_mutex_linker_init, and a normal one which sets no tsan flags.
The "linker initialized" constructor is morally constexpr, but we can't
declare it constexpr because of the need to call into tsan as a side effect.
With this new flag, the normal c'tor can set __tsan_mutex_not_static,
the "linker initialized" constructor can rely on tsan's lazy initialization,
and __tsan_mutex_destroy can still handle both cases correctly.
Author: Greg Falcon (gfalcon)
Reviewed in: https://reviews.llvm.org/D39095
llvm-svn: 316209
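A hedged sketch of the pattern described above. The Mutex class and the LinkerInitialized tag are hypothetical; only the __tsan_mutex_create/__tsan_mutex_destroy calls and the __tsan_mutex_not_static flag come from <sanitizer/tsan_interface.h>:

  #include <sanitizer/tsan_interface.h>

  enum LinkerInitialized { LINKER_INITIALIZED };

  class Mutex {
   public:
    // Normal constructor: announce the mutex and mark it as not static.
    Mutex() { __tsan_mutex_create(this, __tsan_mutex_not_static); }

    // "Linker initialized" constructor: morally constexpr, relies on
    // zero-initialization and tsan's lazy initialization, so it cannot
    // (and does not need to) call into tsan.
    constexpr explicit Mutex(LinkerInitialized) {}

    // Destruction is ignored by tsan unless the object was created with
    // __tsan_mutex_not_static, so both constructors are handled correctly.
    ~Mutex() { __tsan_mutex_destroy(this, __tsan_mutex_not_static); }

    // Lock/Unlock annotations omitted here; see the pre/post lock sketch
    // further down the log.
  };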
Pass ClockCache to ThreadClock::set and introduce ThreadClock::ResetCached.
For now both are unused, but they will reduce future diffs.
llvm-svn: 307784
For a linker init mutex with lazy flag setup
(no __tsan_mutex_create call), it is possible that
no lock/unlock happened before the destroy call.
When destroy runs, we still don't know that
it is a linker init mutex, so we emulate a memory write.
This in turn can lead to false positives, since the mutex
is in fact linker initialized.
Support the linker init flag in the destroy annotation to resolve this.
llvm-svn: 301795
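For example, a wrapper (hypothetical) around a zero-initialized, lazily annotated mutex that never called __tsan_mutex_create can pass the flag at destruction so tsan skips the emulated memory write:

  #include <sanitizer/tsan_interface.h>

  // Hypothetical destroy hook for a "linker initialized" (zero-initialized)
  // mutex that never called __tsan_mutex_create. Passing
  // __tsan_mutex_linker_init tells tsan the mutex is linker initialized even
  // though it saw no create and possibly no lock/unlock, so no memory write
  // is emulated and no false positive results.
  void DestroyLinkerInitMutex(void *mu) {
    __tsan_mutex_destroy(mu, __tsan_mutex_linker_init);
  }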
There are several problems with the current annotations (AnnotateRWLockCreate and friends):
- they don't fully support deadlock detection (we need a hook _before_ mutex lock)
- they don't support insertion of random artificial delays to perturb execution (again we need a hook _before_ mutex lock)
- they don't support setting extended mutex attributes like read/write reentrancy (only "linker init" was bolted on)
- they don't support setting mutex attributes if a mutex doesn't have a "constructor" (e.g. static, Java, Go mutexes)
- they don't ignore synchronization inside of lock/unlock operations, which leads to slowdowns and false negatives
The new annotations solve all of the above problems. See tsan_interface.h for the interface specification and comments.
Reviewed in https://reviews.llvm.org/D31093
llvm-svn: 298809
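A minimal sketch of how a user mutex might adopt the new entry points; MyMutex is hypothetical, while the __tsan_mutex_* functions and flags are the ones declared in <sanitizer/tsan_interface.h>:

  #include <sanitizer/tsan_interface.h>
  #include <atomic>

  // Hypothetical spinlock annotated with the new interface. The pre_lock hook
  // runs before acquisition (enabling deadlock detection and artificial
  // delays), post_lock runs after it, and pre/post_unlock bracket the release.
  class MyMutex {
   public:
    MyMutex() { __tsan_mutex_create(this, 0); }
    ~MyMutex() { __tsan_mutex_destroy(this, 0); }

    void Lock() {
      __tsan_mutex_pre_lock(this, 0);
      while (locked_.exchange(true, std::memory_order_acquire)) {}
      __tsan_mutex_post_lock(this, 0, /*recursion=*/0);
    }

    void Unlock() {
      __tsan_mutex_pre_unlock(this, 0);
      locked_.store(false, std::memory_order_release);
      __tsan_mutex_post_unlock(this, 0);
    }

   private:
    std::atomic<bool> locked_{false};
  };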
Currently we either define SANITIZER_GO for Go or don't define it at all for C++.
This works fine with the preprocessor (ifdef/ifndef/defined), but does not work
for C++ if statements (e.g. if (SANITIZER_GO) {...}). It is also different
from the majority of SANITIZER_FOO macros, which are always defined to either 0 or 1.
Always define SANITIZER_GO to either 0 or 1.
This allows using SANITIZER_GO in expressions and in flag default values.
Also remove kGoMode and kCppMode, which were meant to be used in expressions
but are not defined in sanitizer_common code, so SANITIZER_GO has become the prevalent form.
Also convert some preprocessor checks to C++ if's or ternary expressions.
The majority of this change was done mechanically with:
sed "s#ifdef SANITIZER_GO#if SANITIZER_GO#g"
sed "s#ifndef SANITIZER_GO#if \!SANITIZER_GO#g"
sed "s#defined(SANITIZER_GO)#SANITIZER_GO#g"
llvm-svn: 285443
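A small, hypothetical illustration of the difference: once the macro is always defined to 0 or 1, it can drive ordinary expressions and flag defaults, not only preprocessor conditionals.

  // Hypothetical example; not code from sanitizer_common.
  #ifndef SANITIZER_GO
  #define SANITIZER_GO 0  // always defined: 0 for C++, 1 for the Go runtime build
  #endif

  // Preprocessor use still works, now as a value test rather than an ifdef:
  #if SANITIZER_GO
  const char *kRuntimeName = "tsan-go";
  #else
  const char *kRuntimeName = "tsan-cpp";
  #endif

  // ...and the same macro can appear in plain C++ code and flag defaults:
  bool UseLibcMalloc() { return !SANITIZER_GO; }
  int DefaultFlushIntervalMs() { return SANITIZER_GO ? 0 : 1000; }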
Creating sync objects on acquire is pointless:
an acquire on a just-created sync object is a no-op.
llvm-svn: 273862
The new annotation was added a while ago, but was not actually used.
Use the annotation to detect linker-initialized mutexes instead
of the broken IsGlobalVar which has both false positives and false
negatives. Remove IsGlobalVar mess.
llvm-svn: 271663
The current interface assumes that Go calls ProcWire/ProcUnwire
to establish the association between thread and proc.
With the wisdom of hindsight, this interface does not work
very well. I had to sprinkle the Go scheduler with wire/unwire
calls, and any mistake leads to hard-to-debug crashes.
This is not something one wants to maintain.
Fortunately, there is a simpler solution. We can ask the Go
runtime what the current Processor is, and that
question is very easy to answer on the Go side.
Switch to that interface.
llvm-svn: 267703
This is a reincarnation of http://reviews.llvm.org/D17648 with the bug fix pointed out by Adhemerval (zatrazz).
Currently ThreadState holds both logical state (required for the race-detection algorithm, user-visible)
and physical state (various caches, most notably the malloc cache). Move the physical state into a new
Processor entity. Besides just being the right thing from an abstraction point of view, this solves several
problems:
1. Cache everything at the P level in Go. Currently we cache on a mix of goroutine and OS thread levels.
This unnecessarily increases memory consumption.
2. Properly handle free operations in Go. Frees are issued by the GC, which doesn't have goroutine context.
As a result, we could not do anything more than clear the shadow. For example, we leaked
sync objects and heap block descriptors.
3. This will allow getting rid of libc malloc in Go (we now have a Processor context for the internal allocator cache).
This in turn will allow getting rid of the dependency on libc entirely.
4. Potentially, we can make the Processor per-CPU in C++ mode instead of per-thread, which will
reduce resource consumption.
The distinction between Thread and Processor is currently used only by Go; C++ creates a Processor per OS thread,
which is equivalent to the current scheme.
llvm-svn: 267678
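Very roughly, the split looks like the sketch below. This is a heavily simplified conceptual outline, not the actual tsan declarations, and the field names are invented:

  // Conceptual sketch only; the real ThreadState/Processor carry far more state.
  struct Processor {
    // Physical state: caches that belong to "whoever is running right now",
    // not to a particular goroutine or thread.
    void *alloc_cache;   // malloc cache
    void *clock_cache;   // cache of vector-clock blocks
    void *sync_cache;    // cache of sync-object descriptors
  };

  struct ThreadState {
    // Logical state: what the race-detection algorithm needs per thread
    // (user-visible identity, vector clock, trace position, ...).
    int tid;
    void *clock;
    // The Processor currently attached to this thread. In C++ it is created
    // per OS thread; in Go it corresponds to the runtime's P.
    Processor *proc;
  };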
This patch adds a new TSan report type, ReportTypeMutexInvalidAccess, which is triggered when pthread_mutex_lock or pthread_mutex_unlock returns EINVAL (this means the mutex is invalid, uninitialized or already destroyed).
Differential Revision: http://reviews.llvm.org/D18132
llvm-svn: 263641
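A small, deliberately buggy program that can trigger the new report. Locking a destroyed mutex is undefined behavior, so whether pthread_mutex_lock actually returns EINVAL is implementation-dependent (glibc commonly does):

  // Build with: clang++ -fsanitize=thread bad_mutex.cpp -o bad_mutex
  #include <pthread.h>

  int main() {
    pthread_mutex_t m;
    pthread_mutex_init(&m, nullptr);
    pthread_mutex_destroy(&m);
    // If the implementation returns EINVAL here, tsan now reports an
    // invalid mutex access instead of staying silent.
    pthread_mutex_lock(&m);
    return 0;
  }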
Broke aarch64 and darwin bots.
llvm-svn: 262046
Currently ThreadState holds both logical state (required for the race-detection algorithm, user-visible)
and physical state (various caches, most notably the malloc cache). Move the physical state into a new
Processor entity. Besides just being the right thing from an abstraction point of view, this solves several
problems:
1. Cache everything at the P level in Go. Currently we cache on a mix of goroutine and OS thread levels.
This unnecessarily increases memory consumption.
2. Properly handle free operations in Go. Frees are issued by the GC, which doesn't have goroutine context.
As a result, we could not do anything more than clear the shadow. For example, we leaked
sync objects and heap block descriptors.
3. This will allow getting rid of libc malloc in Go (we now have a Processor context for the internal allocator cache).
This in turn will allow getting rid of the dependency on libc entirely.
4. Potentially, we can make the Processor per-CPU in C++ mode instead of per-thread, which will
reduce resource consumption.
The distinction between Thread and Processor is currently used only by Go; C++ creates a Processor per OS thread,
which is equivalent to the current scheme.
llvm-svn: 262037
https://github.com/google/sanitizers/issues/594
llvm-svn: 246592
The patch was generated using the clang-tidy misc-use-override check.
This command was used:
tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py \
-checks='-*,misc-use-override' -header-filter='llvm|clang' -j=32 -fix \
-format
llvm-svn: 234680
llvm-svn: 223732
Summary:
This change removes the `__tsan::StackTrace` class. There are
now three alternatives:
1. Lightweight `__sanitizer::StackTrace`, which doesn't own a buffer
of PCs. It is used in functions that need stack traces in read-only
mode, and helps to prevent unnecessary allocations/copies (e.g.
for StackTraces fetched from StackDepot).
2. `__sanitizer::BufferedStackTrace`, which stores a buffer of PCs in
a constant-size array. It is used in TraceHeader (non-Go version).
3. `__tsan::VarSizeStackTrace`, which owns a buffer of PCs dynamically
allocated via the TSan internal allocator.
Test Plan: compiler-rt test suite
Reviewers: dvyukov, kcc
Reviewed By: kcc
Subscribers: llvm-commits, kcc
Differential Revision: http://reviews.llvm.org/D6004
llvm-svn: 221194
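In outline (a simplified sketch; the real declarations live in sanitizer_stacktrace.h and tsan_stack_trace.h and differ in detail):

  // Simplified outline of the three alternatives; not the actual declarations.
  typedef unsigned long uptr;

  // 1. Non-owning view: a pointer plus a size. Cheap to pass around and
  //    suitable for read-only uses such as traces fetched from StackDepot.
  struct StackTrace {
    const uptr *trace;
    unsigned size;
    StackTrace() : trace(nullptr), size(0) {}
  };

  // 2. Owns a fixed-capacity buffer embedded in the object (used where the
  //    maximum depth is known up front, e.g. TraceHeader in the non-Go build).
  struct BufferedStackTrace : StackTrace {
    static const unsigned kMaxDepth = 256;  // illustrative constant
    uptr buffer[kMaxDepth];
    BufferedStackTrace() { trace = buffer; }
  };

  // 3. Owns a dynamically sized buffer; the real class allocates it from
  //    tsan's internal allocator, sketched here with new/delete for brevity.
  struct VarSizeStackTrace : StackTrace {
    void Init(const uptr *pcs, unsigned cnt) {
      uptr *buf = new uptr[cnt];
      for (unsigned i = 0; i < cnt; i++) buf[i] = pcs[i];
      trace = buf;
      size = cnt;
    }
    ~VarSizeStackTrace() { delete[] trace; }
  };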
introduce a BufferedStackTrace class, which owns this array.
Summary:
This change splits the __sanitizer::StackTrace class into a lightweight
__sanitizer::StackTrace, which doesn't own the array of PCs, and BufferedStackTrace,
which owns it. This would allow us to simplify the interface of StackDepot,
and eventually merge __sanitizer::StackTrace with __tsan::StackTrace.
Test Plan: regression test suite.
Reviewers: kcc, dvyukov
Reviewed By: dvyukov
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5985
llvm-svn: 220635
llvm-svn: 217559
Vector clocks are the most actively allocated objects in the tsan runtime.
The current internal allocator is not scalable enough to handle allocation
of clocks (its caches are too small). This change transforms
clocks into a 2-level array of 512-byte blocks. Since all blocks are of
the same size, it's possible to cache them more efficiently in per-thread caches.
llvm-svn: 214912
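The storage scheme is roughly the following conceptual sketch (names and constants are invented; the real code lives in tsan_clock.cc): the clock becomes a small table of pointers to fixed-size 512-byte blocks, so growth always allocates uniform blocks that per-thread caches can hand out and take back cheaply.

  // Conceptual sketch of a 2-level vector clock built from 512-byte blocks.
  #include <cstdint>

  const unsigned kBlockSize = 512;                                 // bytes
  const unsigned kEpochsPerBlock = kBlockSize / sizeof(uint64_t);  // 64 epochs

  struct ClockBlock {
    uint64_t epoch[kEpochsPerBlock];
  };

  struct TwoLevelClock {
    ClockBlock *blocks[128];  // first level: up to 128 * 64 = 8192 thread slots
    unsigned size;            // number of thread slots in use

    TwoLevelClock() : blocks(), size(0) {}

    uint64_t Get(unsigned tid) const {
      const ClockBlock *b = blocks[tid / kEpochsPerBlock];
      return b ? b->epoch[tid % kEpochsPerBlock] : 0;
    }

    void Set(unsigned tid, uint64_t epoch) {
      ClockBlock *&b = blocks[tid / kEpochsPerBlock];
      if (!b) {
        // All blocks have the same size, so a per-thread free list can
        // recycle them without touching the central allocator.
        b = new ClockBlock();
      }
      b->epoch[tid % kEpochsPerBlock] = epoch;
      if (tid >= size) size = tid + 1;
    }
  };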
(https://code.google.com/p/thread-sanitizer/issues/detail?id=67)
llvm-svn: 212529
deadlocks when reporting bad unlock
llvm-svn: 212526
In Go it's legal to unlock from a different goroutine.
llvm-svn: 210358
llvm-svn: 210353
llvm-svn: 210301
The new storage (MetaMap) is based on direct shadow (instead of a hashmap + per-block lists).
This solves a number of problems:
- eliminates quadratic behaviour in SyncTab::GetAndLock (https://code.google.com/p/thread-sanitizer/issues/detail?id=26)
- eliminates contention in SyncTab
- eliminates contention in internal allocator during allocation of sync objects
- removes a bunch of ad-hoc code in java interface
- reduces java shadow from 2x to 1/2x
- allows memorizing heap block meta info for Java and Go
- allows cleaning up sync object meta info for Go
- which in turn enabled the deadlock detector for Go
llvm-svn: 209810
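The "direct shadow" idea, very roughly (invented names and constants; the real MetaMap lives in tsan_sync.h/.cc): each fixed-size granule of application memory maps by pure address arithmetic to a meta cell that holds an index of its sync object or heap block descriptor, so lookup is O(1) with no hashing and no list walking.

  // Conceptual sketch of metadata kept in a direct shadow mapping.
  // All names, bases and sizes are illustrative, not tsan's actual layout.
  #include <cstdint>

  const uintptr_t kAppMemBeg     = 0x7d0000000000ull;  // illustrative base
  const uintptr_t kMetaShadowBeg = 0x300000000000ull;  // illustrative base
  const uintptr_t kGranule       = 8;  // bytes of app memory per meta cell

  // Each app granule has one 32-bit meta cell: 0 means "no metadata",
  // otherwise it indexes a table of sync objects / heap block descriptors.
  inline uint32_t *MetaCellFor(uintptr_t app_addr) {
    uintptr_t offset = (app_addr - kAppMemBeg) / kGranule;
    return reinterpret_cast<uint32_t *>(kMetaShadowBeg) + offset;
  }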
The refactoring makes suppressions more flexible
and allows suppressing based on an arbitrary number of stacks.
In particular it fixes:
https://code.google.com/p/thread-sanitizer/issues/detail?id=64
"Make it possible to suppress deadlock reports by any stack (not just first)"
llvm-svn: 209757
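For example, a deadlock report can now be matched by a frame that appears in any of the involved stacks, not just the first one; the function name below is a placeholder:

  # tsan suppressions file, e.g. passed via TSAN_OPTIONS=suppressions=tsan.supp
  # "MyLockWrapper" stands for any frame in any stack of the deadlock report.
  deadlock:MyLockWrapper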
Fixes issue https://code.google.com/p/thread-sanitizer/issues/detail?id=45
llvm-svn: 207218
llvm-svn: 207211
llvm-svn: 207209
llvm-svn: 207208
+ fixes crashes due to races on symbolizer, see
https://code.google.com/p/thread-sanitizer/issues/detail?id=55
llvm-svn: 207206
+ fixes crashes due to races on symbolizer, see:
https://code.google.com/p/thread-sanitizer/issues/detail?id=55
llvm-svn: 207204
Make vector clock operations O(1) for several important classes of use cases.
See comments for details.
Below are stats from a large server app; 77% of all clock operations are handled as O(1).
Clock acquire : 25983645
  empty clock : 6288080
  fast from release-store : 14917504
  contains my tid : 4515743
  repeated (fast) : 2141428
  full (slow) : 2636633
  acquired something : 1426863
Clock release : 2544216
  resize : 6241
  fast1 : 197693
  fast2 : 1016293
  fast3 : 2007
  full (slow) : 1797488
  was acquired : 709227
  clear tail : 1
  last overflow : 0
Clock release store : 3446946
  resize : 200516
  fast : 469265
  slow : 2977681
  clear tail : 0
Clock acquire-release : 820028
llvm-svn: 204656
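One of the fast paths, as a conceptual sketch (invented types and fields, not the real ThreadClock/SyncClock): if the sync clock was last written by a single release-store, the acquirer compares one (tid, epoch) pair against its own clock, and if it has already synchronized with that release the whole acquire is a no-op.

  // Conceptual sketch of the "fast from release-store" acquire path.
  #include <cstdint>
  #include <vector>

  struct SyncClock {
    // If the last operation on this clock was a ReleaseStore, remember who
    // performed it and at which epoch.
    int release_store_tid = -1;
    uint64_t release_store_epoch = 0;
    std::vector<uint64_t> clock;  // full vector clock (slow path)
  };

  struct ThreadClock {
    std::vector<uint64_t> clock;  // this thread's view: one epoch per tid

    void Acquire(const SyncClock &src) {
      if (src.clock.empty()) return;  // fast path 1: empty clock
      if (src.release_store_tid >= 0 &&
          static_cast<size_t>(src.release_store_tid) < clock.size() &&
          clock[src.release_store_tid] >= src.release_store_epoch)
        return;  // fast path 2: already synchronized with that release-store
      // Slow path: full element-wise max over the vector clock.
      if (clock.size() < src.clock.size()) clock.resize(src.clock.size(), 0);
      for (size_t i = 0; i < src.clock.size(); i++)
        if (clock[i] < src.clock[i]) clock[i] = src.clock[i];
    }
  };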
llvm-svn: 204461
llvm-svn: 204454
llvm-svn: 204327
llvm-svn: 204240
llvm-svn: 204228
more robust in case we failed to extract a stack trace for one of the locks
llvm-svn: 204225
lock-order-inversion with N locks (i.e. print stack traces for both lock acquisitions in every edge in the graph). More improvements to follow
llvm-svn: 204042
llvm-svn: 204037
llvm-svn: 204035
come)
llvm-svn: 204034
onLockBefore and onLockAfter hooks
llvm-svn: 203796
intercept pthread_cond (it is required to properly track state of mutexes)
detect cycles in mutex graph
llvm-svn: 202975
Introduce a DDetector interface between the tool and the DD itself.
It will help to experiment with other DD implementations,
as well as to reuse the DD in other tools.
llvm-svn: 202485
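Roughly, the boundary looks like the following conceptual sketch; the method and type names are invented and do not exactly match the interface that ended up in sanitizer_common:

  // Conceptual sketch of a tool / deadlock-detector boundary.
  struct DDCallback {            // per-thread context supplied by the tool
    virtual unsigned UniqueTid() = 0;
    virtual ~DDCallback() {}
  };

  struct DDetector {             // implemented by the deadlock detector
    virtual void MutexInit(DDCallback *cb, void *mtx) = 0;
    virtual void MutexBeforeLock(DDCallback *cb, void *mtx, bool wlock) = 0;
    virtual void MutexAfterLock(DDCallback *cb, void *mtx, bool wlock) = 0;
    virtual void MutexBeforeUnlock(DDCallback *cb, void *mtx, bool wlock) = 0;
    virtual void MutexDestroy(DDCallback *cb, void *mtx) = 0;
    virtual ~DDetector() {}
  };
  // The tool calls these hooks from its mutex interceptors, so a different
  // detector implementation can be swapped in, or the detector reused by
  // other tools.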