preparation
This doesn't pass 'ninja check-llvm' for me. Lots of tests, including
the ones updated, fail with crashes and other explosions.
llvm-svn: 229952
This patch consists of a single pass whose only purpose is to visit previously inserted gc.statepoints which do not yet have gc.relocates, and insert them. This can be used either immediately after IR generation to perform 'early safepoint insertion', or late in the pass order to perform 'late insertion'.
This patch is setting the stage for work to continue in tree. In particular, there are known naming and style violations in the current patch. I'll try to get those resolved over the next week or so. As I touch each area to make style changes, I need to make sure we have adequate testing in place. As part of the cleanup, I will be cleaning up a collection of test cases we have out of tree and submitting them upstream. The tests included in this change are very basic and mostly to provide examples of usage.
The pass has several main subproblems it needs to address:
- First, it has to identify any live pointers. In the current code, the use of address spaces to distinguish pointers to GC-managed objects is hard-coded, but this will become parameterizable in the near future. Note that the current change doesn't actually contain a useful liveness analysis; it was separated into a follow-up change since the code wasn't ready to be shared. Instead, the current implementation just considers any dominating def of the appropriate pointer type to be live.
- Second, it has to identify base pointers for each live pointer. This is a fairly straightforward data-flow algorithm.
- Third, the information from the previous steps is used to actually introduce the rewrites. Rather than trying to do this by hand, we simply repurpose the code behind Mem2Reg to do this for us.
llvm-svn: 229945
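
A rough sketch of the pass's overall shape, based only on the description above (the helper names and the address-space-1 convention are assumptions, not the in-tree code):

    #include "llvm/IR/DerivedTypes.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/IntrinsicInst.h"

    using namespace llvm;

    // Hard-coded convention from the description above: GC-managed heap
    // pointers live in address space 1 (to become parameterizable later).
    static bool isGCPointer(const Value *V) {
      auto *PTy = dyn_cast<PointerType>(V->getType());
      return PTy && PTy->getAddressSpace() == 1;
    }

    static void rewriteStatepointsForGC(Function &F) {
      for (BasicBlock &BB : F)
        for (Instruction &I : BB) {
          auto *II = dyn_cast<IntrinsicInst>(&I);
          if (!II ||
              II->getIntrinsicID() != Intrinsic::experimental_gc_statepoint)
            continue;
          // 1. Identify live GC pointers (placeholder liveness: any
          //    dominating def of appropriate pointer type).
          // 2. Compute a base pointer for each live pointer (data flow).
          // 3. Insert gc.relocates and rewrite uses, reusing the Mem2Reg
          //    machinery rather than rewriting by hand.
        }
    }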
Today a simple function that only catches exceptions and doesn't run
destructor cleanups ends up containing a dead call to _Unwind_Resume
(PR20300). We can't remove these dead resume instructions during normal
optimization because inlining might introduce additional landingpads
that do have cleanups to run. Instead we can do this during EH
preparation, which is guaranteed to run after inlining.
Fixes PR20300.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D7744
llvm-svn: 229944
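
A minimal sketch of the idea (illustrative only; the actual change does more bookkeeping): a resume whose landingpad is catch-only cannot be reached once inlining is done, so EH preparation may replace it with 'unreachable'.

    #include "llvm/IR/Function.h"
    #include "llvm/IR/Instructions.h"

    using namespace llvm;

    static bool pruneDeadResumes(Function &F) {
      bool Changed = false;
      for (BasicBlock &BB : F) {
        auto *RI = dyn_cast<ResumeInst>(BB.getTerminator());
        if (!RI)
          continue;
        // A landingpad marked 'cleanup' may legitimately resume unwinding;
        // a catch-only landingpad cannot, so its resume is dead.
        auto *LP = dyn_cast<LandingPadInst>(RI->getValue());
        if (!LP || LP->isCleanup())
          continue;
        new UnreachableInst(F.getContext(), RI); // insert before the resume
        RI->eraseFromParent();
        Changed = true;
      }
      return Changed;
    }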
intrinsics."
The instructions were being generated on architectures that don't support AVX-512.
This reverts commit r229837.
llvm-svn: 229942
llvm-svn: 229941
Patch by Laszlo Szekeres
llvm-svn: 229940
operator agree on type.
llvm-svn: 229938
conservatively be proven to not decrement the retain's RCIdentity.
I also cleaned up the code to make it more understandable for mere mortals.
<rdar://problem/19853758>
llvm-svn: 229937
This is different from CanAlterRefCount in that CanDecrementRefCount attempts to prove specifically whether or not an instruction can decrement a reference count, rather than answering the more general question of whether it can decrement or increment one.
llvm-svn: 229936
When trying to match the current schema with the new debug info
hierarchy, I downgraded `SizeInBits`, `AlignInBits` and `OffsetInBits`
to 32 bits (oops!). Caught this while testing my upgrade script to move
the hierarchy into place. Bump it back up to 64 bits and update tests.
llvm-svn: 229933
This re-applies r223862, r224198, r224203, and r224754, which were
reverted in r228129 because they exposed Clang misalignment problems
when self-hosting.
The combine caused the crashes because we turned ISD::LOAD/STORE nodes
to ARMISD::VLD1/VST1_UPD nodes. When selecting addressing modes, we
were very lax for the former, and only emitted the alignment operand
(as in "[r1:128]") when it was larger than the standard alignment of
the memory type.
However, for ARMISD nodes, we just used the MMO alignment, no matter
what. In our case, we turned ISD nodes to ARMISD nodes, and this
caused the alignment operands to start being emitted.
And that's how we exposed alignment problems that were ignored before
(but I believe would have been caught with SCTLR.A==1?).
To fix this, we can just mirror the hack done for ISD nodes: only
take into account the MMO alignment when the access is overaligned.
Original commit message:
We used to only combine intrinsics, and turn them into VLD1_UPD/VST1_UPD
when the base pointer is incremented after the load/store.
We can do the same thing for generic load/stores.
Note that we can only combine the first load/store+adds pair in
a sequence (as might be generated for a v16f32 load for instance),
because other combines turn the base pointer addition chain (each
computing the address of the next load, from the address of the last
load) into independent additions (common base pointer + this load's
offset).
rdar://19717869, rdar://14062261.
llvm-svn: 229932
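
The rule described above, sketched (not the exact selection code; the helper is hypothetical): emit an explicit alignment operand only when the MMO alignment exceeds the memory type's natural alignment.

    #include "llvm/CodeGen/SelectionDAGNodes.h"

    using namespace llvm;

    // Returns the alignment to encode in the instruction (in bits), or 0
    // to use the standard alignment of the memory type -- i.e. omit the
    // "[r1:128]"-style operand unless the access is overaligned.
    static unsigned getAlignmentOperand(const MemSDNode *N) {
      unsigned MMOAlign = N->getMemOperand()->getAlignment(); // bytes
      unsigned Natural = N->getMemoryVT().getStoreSize();     // bytes
      return MMOAlign > Natural ? MMOAlign * 8 : 0;
    }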
during SetupMachineFunction. This is also the single use of MII
and it'll be changing to TargetInstrInfo (which is MachineFunction
based) in the next commit here.
llvm-svn: 229931
In preparation for a future patch:
- rename isLoad to isLoadOp: the former is confusing, and can be taken
to refer to the fact that the node is an ISD::LOAD. (it isn't, yet.)
- change formatting here and there.
- add some comments.
- const-ify bools.
llvm-svn: 229929
AsmPrinterDwarf since the information is on the MCRegisterInfo
via the MCContext and MMI that we already have on the AsmPrinter.
llvm-svn: 229928
Add missing `nullptr` from `MDSubroutineType`'s operands for
`MDCompositeTypeBase::getIdentifier()` (and add tests for all the other
unused fields). This highlights just how crazy it is that
`MDSubroutineType` inherits from `MDCompositeTypeBase`.
llvm-svn: 229926
The former lets us use SmallVectors. Do so in ARM and AArch64.
llvm-svn: 229925
TargetOptions.
llvm-svn: 229917
asm support in the asm printer. If we can get a subtarget from
the machine function then we should do so; otherwise we can
go ahead and create a default one since we're at the module
level.
llvm-svn: 229916
HexagonMCInstrInfo and eliminating HexagonMCInst class.
llvm-svn: 229914
For compatibility with GNU as. Binutils documents this as
'.uleb128 expressions'. Subtle, isn't it?
llvm-svn: 229911
llvm-svn: 229908
llvm-svn: 229907
This is much better than the previous approach of just using
short-circuiting booleans, from:
1. A "naive" efficiency perspective: we do not have to rely on the
compiler to change the short-circuiting boolean operations into a
switch.
2. An understandability perspective: the implicit behavior of negative
predicates is made explicit.
3. A maintainability perspective: the covered-switch warning makes it
easy to know where to update code when adding new ARCInstKinds.
llvm-svn: 229906
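
The pattern, shown with a miniature stand-in enum (the real ARCInstKind has many more enumerators; only the covered-switch style is the point here):

    #include "llvm/Support/ErrorHandling.h"

    enum class ARCInstKind { Retain, Release, Autorelease, User, None };

    static bool canDecrementRefCount(ARCInstKind K) {
      switch (K) {
      case ARCInstKind::Retain:      // can only increment
      case ARCInstKind::Autorelease: // defers the decrement
      case ARCInstKind::None:
        return false;
      case ARCInstKind::Release:
      case ARCInstKind::User:        // unknown code: assume the worst
        return true;
      }
      // No default case: with a fully covered switch, the compiler flags
      // every predicate that needs updating when a new kind is added.
      llvm_unreachable("Unhandled ARCInstKind!");
    }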
I also renamed ObjCARCUtil.cpp -> ARCInstKind.cpp. That file only contained
items related to ARCInstKind anyway.
llvm-svn: 229905
Older versions of the TargetConditionals header always defined TARGET_OS_IPHONE to something (0 or 1), so we need to test not only that it is defined but also that its value is 1.
This resolves PR22631.
llvm-svn: 229904
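
The shape of such a check (a sketch, not necessarily the exact lines changed):

    #if defined(__APPLE__)
    #include <TargetConditionals.h>
    // Older TargetConditionals.h always defines TARGET_OS_IPHONE (as 0 on
    // OS X), so testing defined(TARGET_OS_IPHONE) alone is not enough; the
    // value has to be checked as well.
    #if defined(TARGET_OS_IPHONE) && TARGET_OS_IPHONE == 1
    // iOS-only code path goes here.
    #endif
    #endif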
HexagonMCInstrInfo.
llvm-svn: 229903
As expected, this required a few more const-correctness fixes.
Based on Hal's feedback on D7684.
llvm-svn: 229899
The LoopInfo in combination with depth_first is used to enumerate the
loops.
Right now -analyze is not yet complete. It only prints the result of
the analysis, the report, and the run-time checks. Printing the unsafe
dependences will require a bit more reshuffling, which I'd like to do in
a follow-on to this patchset. Unsafe dependences are currently checked
via -debug-only=loop-accesses in the new test.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
llvm-svn: 229898
The only difference between these two is that VectorizerReport adds a
vectorizer-specific prefix to its messages. When LAA is used in the
vectorizer context the prefix is added when we promote the
LoopAccessReport into a VectorizerReport via one of the constructors.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
llvm-svn: 229897
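
In simplified form (stand-in types; the real classes carry more state, and the exact prefix text is illustrative), the promotion looks like:

    #include <string>
    #include <utility>

    struct LoopAccessReport {
      std::string Message;
      explicit LoopAccessReport(std::string M) : Message(std::move(M)) {}
    };

    struct VectorizerReport : LoopAccessReport {
      // Promoting a LoopAccessReport into a VectorizerReport is where the
      // vectorizer-specific prefix gets attached.
      explicit VectorizerReport(const LoopAccessReport &R)
          : LoopAccessReport("loop not vectorized: " + R.Message) {}
    };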
When I split out LoopAccessReport from this, I needed to create some
temps, so constness becomes necessary.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
llvm-svn: 229896
This allows the analysis to be attempted on any loop. This feature
will be used with -analyze. (LV only requests the analysis on loops
that have already satisfied these tests.)
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
llvm-svn: 229895
Also add pass name as an argument to VectorizationReport::emitAnalysis.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
llvm-svn: 229894
This is a function pass that runs the analysis on demand. The analysis
can be initiated by querying the loop access info via LAA::getInfo. It
either returns the cached info or runs the analysis.
Symbolic stride information continues to reside outside of this analysis
pass. We may move it inside later but it's not a priority for me right
now. The idea is that Loop Distribution won't support run-time stride
checking at least initially.
This means that when querying the analysis, symbolic stride information
can optionally be provided. A change in whether stride information is
used invalidates the cache entry and reruns the analysis. Note that if
the loop does not have any symbolic strides, the entry should be
preserved across Loop Distribution and LV.
Since currently the only user of the pass is LV, I just check that the
symbolic stride information didn't change when using a cached result.
On the LV side, LoopVectorizationLegality requests the info object
corresponding to the loop from the analysis pass. A large chunk of the
diff is due to LAI becoming a pointer from a reference.
A test will be added as part of the -analyze patch.
Also tested that with AVX, we generate identical assembly output for the
testsuite (including the external testsuite) before and after.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
llvm-svn: 229893
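
A self-contained miniature of the caching scheme (all types are stand-ins; only the shape follows the description above):

    #include <map>
    #include <memory>
    #include <set>

    struct Loop; // stand-in for llvm::Loop; used only as a cache key

    using StrideSet = std::set<const void *>;

    struct LoopAccessInfo {
      StrideSet StridesUsed; // symbolic strides the analysis relied on
      // ... dependence and run-time-check results would live here ...
    };

    struct LoopAccessAnalysis {
      std::map<Loop *, std::unique_ptr<LoopAccessInfo>> Cache;

      // Return cached info, re-running the analysis if the symbolic
      // stride information in use has changed since the entry was made.
      const LoopAccessInfo &getInfo(Loop *L, const StrideSet &Strides) {
        std::unique_ptr<LoopAccessInfo> &Entry = Cache[L];
        if (!Entry || Entry->StridesUsed != Strides)
          Entry = analyze(L, Strides);
        return *Entry;
      }

      std::unique_ptr<LoopAccessInfo> analyze(Loop *, const StrideSet &S) {
        auto Info = std::make_unique<LoopAccessInfo>();
        Info->StridesUsed = S;
        return Info;
      }
    };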
LAA will be an on-demand analysis pass, so we need to cache the result
of the analysis. canVectorizeMemory is renamed to analyzeLoop which
computes the result. canVectorizeMemory becomes the query function for
the cached result.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
llvm-svn: 229892
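
The split, in miniature (member names follow the commit message; the body is a placeholder):

    struct LoopAccessInfo {
      bool CanVecMem = false;

      // Runs the (expensive) analysis once and records the verdict.
      void analyzeLoop() {
        CanVecMem = true; // placeholder for the real dependence checks
      }

      // Pure query: returns the cached result, computing nothing.
      bool canVectorizeMemory() const { return CanVecMem; }
    };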
The transformation passes will query this and then emit the messages as
part of their own report. LV, currently the only user, is modified to do
just that.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
llvm-svn: 229891
As LAA is becoming a pass, we can no longer pass the params to its
constructor. This changes the command line flags to have external
storage. These can now be accessed both from LV and LAA.
VectorizerParams is moved out of LoopAccessInfo in order to shorten the
code to access it.
This commit also includes the fix (D7731) that breaks the dependence
cycle between the analysis and vectorizer libraries.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
llvm-svn: 229890
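
The external-storage idiom in question (flag name, default, and description are illustrative): cl::opt can write through cl::location into a plain global that both libraries can reference.

    #include "llvm/Support/CommandLine.h"

    namespace llvm {
    // Plain global in the analysis library, visible to both LAA and LV.
    unsigned RuntimeMemoryCheckThreshold;
    }

    static llvm::cl::opt<unsigned, true> // 'true' selects external storage
        MemCheckThreshold("runtime-memory-check-threshold", llvm::cl::Hidden,
                          llvm::cl::location(llvm::RuntimeMemoryCheckThreshold),
                          llvm::cl::init(8),
                          llvm::cl::desc("Max number of run-time memory checks"));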
This reverts commit r229651.
I'd like to ultimately revert r229650 but this reformat stands in the
way. I'll reformat the affected files once the loop-access pass is
fully committed.
llvm-svn: 229889
functions detached from HexagonMCInst.
llvm-svn: 229885
out of the asm printer.
llvm-svn: 229883
parameter and a tiny main() in a separate file
llvm-svn: 229882
This is true in clang, and lets us remove the problematic code that
waits around for the original file and then times out if it doesn't get
created in short order. This caused any 'dead' lock file or legitimate
time out to cause a cascade of timeouts in any processes waiting on the
same lock (even if they only just showed up).
llvm-svn: 229881
llvm-svn: 229880
llvm-svn: 229872
llvm-svn: 229871
Patch by Raoux, Thomas F!
llvm-svn: 229864
llvm-svn: 229861
systematic lowering of v8i16.
This required a slight strategy shift to prefer unpack lowerings in more
places. While this isn't a cut-and-dried win in every case, it is in the
overwhelming majority. There are only a few places where the old
lowering would probably be a touch faster, and then only by a small
margin.
In some cases, this is yet another significant improvement.
llvm-svn: 229859
addition to lowering to trees rooted in an unpack.
This saves shuffles and/or registers in various ways, lets us
handle another class of v4i32 shuffles pre SSE4.1 without domain
crosses, etc.
llvm-svn: 229856
terribly complex partial blend logic.
This code path was one of the more complex and bug prone when it first
went in and it hasn't fared much better. Ultimately, with the simpler
basis for unpack lowering and support for bit-math blending, this is
completely obsolete. In the worst case without this we generate
different but equivalent instructions. However, in many cases we
generate much better code. This is especially true when blends or pshufb
is available.
This does expose one (minor) weakness of the unpack lowering that I'll
try to address.
In case you were wondering, this is actually a big part of what I've
been trying to pull off in the recent string of commits.
llvm-svn: 229853
needed, and significantly improve the SSSE3 path.
This makes the new strategy much more clear. If we can blend, we just go
with that. If we can't blend, we try to permute into an unpack so
that we handle cases where the unpack doing the blend also simplifies
the shuffle. If that fails and we've got SSSE3, we now call into
factored-out pshufb lowering code so that we leverage the fact that
pshufb can set up a blend for us while shuffling. This generates great
code, especially because we *know* we don't have a fast blend at this
point. Finally, we fall back on decomposing into permutes and blends
because we do at least have a bit-math-based blend if we need to use
that.
This pretty significantly improves some of the v8i16 code paths. We
never need to form pshufb for the single-input shuffles because we have
effective target-specific combines to form it there, but we were missing
its effectiveness in the blends.
llvm-svn: 229851
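
The cascade described above, reduced to an illustrative decision function (stand-in names; the real logic lives in the X86 shuffle lowering):

    enum class Strategy { Blend, PermuteIntoUnpack, PSHUFB, DecomposeAndBlend };

    static Strategy pickV8I16Strategy(bool CanBlend, bool CanUnpack,
                                      bool HasSSSE3) {
      if (CanBlend)
        return Strategy::Blend;             // fast blend available: done
      if (CanUnpack)
        return Strategy::PermuteIntoUnpack; // the unpack performs the blend
      if (HasSSSE3)
        return Strategy::PSHUFB;            // shuffles and sets up a blend
      // We *know* there is no fast blend at this point, but a bit-math
      // blend always works as a last resort.
      return Strategy::DecomposeAndBlend;
    }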