Fold `add nuw` and `uadd.with.overflow` with constants if the
addition does not overflow.
Part of https://bugs.llvm.org/show_bug.cgi?id=38146.
Patch by Dan Robertson.
Differential Revision: https://reviews.llvm.org/D59471
llvm-svn: 356584
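A rough sketch of the intended shape of this fold (the values and constants are illustrative, not taken from the patch's tests):
%a = add nuw i8 %x, 10
%r = call { i8, i1 } @llvm.uadd.with.overflow.i8(i8 %a, i8 20)
=>
%r = call { i8, i1 } @llvm.uadd.with.overflow.i8(i8 %x, i8 30)
This relies on 10 + 20 not overflowing i8.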
llvm-svn: 356536
Teach instcombine to propagate demanded elements through a masked load or masked gather instruction. This is in the broader context of improving vector pointer instcombine under https://reviews.llvm.org/D57140.
Differential Revision: https://reviews.llvm.org/D57372
llvm-svn: 356510
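As an illustrative sketch (types and names are assumptions, not from the patch), if only one lane of a masked load is demanded, the unused lanes of the mask and passthru can be simplified:
%v = call <4 x float> @llvm.masked.load.v4f32.p0v4f32(<4 x float>* %p, i32 4, <4 x i1> %mask, <4 x float> %passthru)
%e = extractelement <4 x float> %v, i32 0
Only lane 0 of %v is demanded here, so the demanded-elements logic can ignore (and potentially simplify away) the other lanes of %mask and %passthru.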
Combine 2 fcmps that are checking for nan-ness:
and (fcmp ord X, 0), (and (fcmp ord Y, 0), Z) --> and (fcmp ord X, Y), Z
or (fcmp uno X, 0), (or (fcmp uno Y, 0), Z) --> or (fcmp uno X, Y), Z
This is an exact match for a minimal reassociation pattern.
If we want to handle this more generally, that should go in
the reassociate pass, which would allow removing this code.
This should fix:
https://bugs.llvm.org/show_bug.cgi?id=41069
llvm-svn: 356471
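A minimal IR sketch of the 'and' form (value names are illustrative):
%ox = fcmp ord float %x, 0.0
%oy = fcmp ord float %y, 0.0
%oyz = and i1 %oy, %z
%r = and i1 %ox, %oyz
=>
%oxy = fcmp ord float %x, %y
%r = and i1 %oxy, %z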
transforms
Follow-up to:
rL356338
rL356369
We can calculate an arbitrary vector constant minus the bitwidth, so there's
no need to limit this transform to scalars and splats.
llvm-svn: 356372
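For example (a sketch assuming fshl is the canonical direction for constant shift amounts; the non-splat constants are made up):
%r = call <2 x i32> @llvm.fshr.v2i32(<2 x i32> %x, <2 x i32> %y, <2 x i32> <i32 3, i32 5>)
=>
%r = call <2 x i32> @llvm.fshl.v2i32(<2 x i32> %x, <2 x i32> %y, <2 x i32> <i32 29, i32 27>)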
Follow-up to:
rL356338
Rotates are a special case of funnel shift where the 2 input operands
are the same value, but that does not need to be a restriction for the
canonicalization when the shift amount is a constant.
llvm-svn: 356369
This was noted as a backend problem:
https://bugs.llvm.org/show_bug.cgi?id=41057
...and subsequently fixed for x86:
rL356121
But we should canonicalize these in IR for the benefit of all
targets and to improve IR analyses such as CSE.
llvm-svn: 356338
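An illustrative sketch of the rotate case, assuming the left-rotate (fshl) form is the canonical one chosen here:
%r = call i32 @llvm.fshr.i32(i32 %x, i32 %x, i32 8)
=>
%r = call i32 @llvm.fshl.i32(i32 %x, i32 %x, i32 24)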
A change of two parts:
1) A generic enhancement for all callers of SimplifyDemandedVectorElts (SDVE) to exploit the fact that if all lanes are undef, the result is undef.
2) A GEP-specific piece to strengthen/fix the vector index undef element handling, and to call into the generic infrastructure when visiting the GEP.
The result is that we replace a vector gep that has at least one undef in each lane with undef. We can also do the same for vector intrinsics. Once the masked.load patch (D57372) has landed, I'll update to include call tests as well.
Differential Revision: https://reviews.llvm.org/D57468
llvm-svn: 356293
Before r355981, this was under LLVM_DEBUG. I don't think the assert is
quite right, but this really should be a verifier check. Instcombine
should not be asserting on this sort of thing.
llvm-svn: 356219
bitwidth
The shift argument is defined to be modulo the bitwidth, so if that argument
is a constant, we can always reduce the constant to its minimal form to allow
better CSE and other follow-on transforms.
We need to be careful to ignore constant expressions here, or we will likely
infinite loop. I'm adding a general vector constant query for that case.
Differential Revision: https://reviews.llvm.org/D59374
llvm-svn: 356192
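For example (a sketch; the shift amount of the funnel-shift intrinsics is defined modulo the bitwidth, so 33 on i32 is equivalent to 1):
%r = call i32 @llvm.fshl.i32(i32 %x, i32 %y, i32 33)
=>
%r = call i32 @llvm.fshl.i32(i32 %x, i32 %y, i32 1)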
This indicates an intrinsic parameter is required to be a constant,
and should not be replaced with a non-constant value.
Add the attribute to all AMDGPU and generic intrinsics that comments
indicate it should apply to. I scanned other target intrinsics, but I
don't see any obvious comments indicating which arguments are intended
to be only immediates.
This breaks one questionable testcase for the autoupgrade. I'm unclear
on whether the autoupgrade is supposed to really handle declarations
which were never valid. The verifier fails because the attributes now
refer to a parameter past the end of the argument list.
llvm-svn: 355981
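A hypothetical declaration sketching where the attribute appears (the intrinsic name is invented for illustration):
declare void @llvm.example.intrinsic(i32, i32 immarg)
A call site must then pass a constant for the immarg parameter, e.g. call void @llvm.example.intrinsic(i32 %x, i32 7), and that operand must not be replaced with a non-constant value.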
Fold `add nsw` and `sadd.with.overflow` with constants if the addition
does not overflow.
Part of https://bugs.llvm.org/show_bug.cgi?id=38146.
Patch by Dan Robertson.
Differential Revision: https://reviews.llvm.org/D58881
llvm-svn: 355530
Fixes PR40838.
llvm-svn: 355301
Follow-up to rL355221.
This isn't specifically called for within PR14613,
but we'll get there eventually if it's not already
requested in some other bug report.
https://rise4fun.com/Alive/5b0
Name: smax
Pre: WillNotOverflowSignedSub(C1,C0)
%a = add nsw i8 %x, C0
%cond = icmp sgt i8 %a, C1
%r = select i1 %cond, i8 %a, i8 C1
=>
%c2 = icmp sgt i8 %x, C1-C0
%u2 = select i1 %c2, i8 %x, i8 C1-C0
%r = add nsw i8 %u2, C0
Name: smin
Pre: WillNotOverflowSignedSub(C1,C0)
%a = add nsw i32 %x, C0
%cond = icmp slt i32 %a, C1
%r = select i1 %cond, i32 %a, i32 C1
=>
%c2 = icmp slt i32 %x, C1-C0
%u2 = select i1 %c2, i32 %x, i32 C1-C0
%r = add nsw i32 %u2, C0
llvm-svn: 355272
I'm assuming that the NaN propagation logic for InstructionSimplify's handling of fadd and fsub is correct, and applying the same to atomicrmw.
Differential Revision: https://reviews.llvm.org/D58836
llvm-svn: 355222
In the motivating cases from PR14613:
https://bugs.llvm.org/show_bug.cgi?id=14613
...moving the add enables us to narrow the
min/max, which eliminates zext/trunc, which
enables significantly better vectorization.
But that bug is still not completely fixed.
https://rise4fun.com/Alive/5KQ
Name: umax
Pre: C1 u>= C0
%a = add nuw i8 %x, C0
%cond = icmp ugt i8 %a, C1
%r = select i1 %cond, i8 %a, i8 C1
=>
%c2 = icmp ugt i8 %x, C1-C0
%u2 = select i1 %c2, i8 %x, i8 C1-C0
%r = add nuw i8 %u2, C0
Name: umin
Pre: C1 u>= C0
%a = add nuw i32 %x, C0
%cond = icmp ult i32 %a, C1
%r = select i1 %cond, i32 %a, i32 C1
=>
%c2 = icmp ult i32 %x, C1-C0
%u2 = select i1 %c2, i32 %x, i32 C1-C0
%r = add nuw i32 %u2, C0
llvm-svn: 355221
An idempotent atomicrmw is one that does not change memory in the process of execution. We have already added handling for the various integer operations; this patch extends the same handling to floating point operations which were recently added to IR.
Note: At the moment, we canonicalize idempotent fsub to fadd when ordering requirements prevent us from using a load. As discussed in the review, I will be replacing this with canonicalizing both floating point ops to integer ops in the near future.
Differential Revision: https://reviews.llvm.org/D58251
llvm-svn: 355210
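A sketch of the fsub-to-fadd canonicalization mentioned above, assuming a release ordering (which blocks the load transform); the pointer and type are illustrative:
%old = atomicrmw fsub float* %p, float 0.0 release
=>
%old = atomicrmw fadd float* %p, float -0.0 release
Both forms leave the value in memory unchanged, since x - 0.0 == x and x + (-0.0) == x.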
This is part of a transform that may be done in the backend:
D13757
...but it should always be beneficial to fold this sooner in IR
for all targets.
https://rise4fun.com/Alive/vaiW
Name: sext add nsw
%add = add nsw i8 %i, C0
%ext = sext i8 %add to i32
%r = add i32 %ext, C1
=>
%s = sext i8 %i to i32
%r = add i32 %s, sext(C0)+C1
Name: zext add nuw
%add = add nuw i8 %i, C0
%ext = zext i8 %add to i16
%r = add i16 %ext, C1
=>
%s = zext i8 %i to i16
%r = add i16 %s, zext(C0)+C1
llvm-svn: 355118
Summary:
The description of KnownBits::zext() and
KnownBits::zextOrTrunc() has confusingly claimed that
the operation is equivalent to zero extending the
value we're tracking. That has not been true; instead,
the user has been forced to explicitly set the extended
bits as known zero afterwards.
This patch adds a second argument to KnownBits::zext()
and KnownBits::zextOrTrunc() to control if the extended
bits should be considered as known zero or as unknown.
Reviewers: craig.topper, RKSimon
Reviewed By: RKSimon
Subscribers: javed.absar, hiraditya, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58650
llvm-svn: 355099
Yet another pattern variation suggested by:
https://bugs.llvm.org/show_bug.cgi?id=14613
There are 8 more potential commuted patterns here on top of the
8 that were already handled (rL354221, rL354276, rL354393).
We have the obvious commute of the 'add' + commute of the cmp
predicate/operands (ugt/ult) + commute of the select operands:
Name: base
%notx = xor i32 %x, -1
%a = add i32 %notx, %y
%c = icmp ult i32 %x, %y
%r = select i1 %c, i32 -1, i32 %a
=>
%c2 = icmp ult i32 %a, %y
%r = select i1 %c2, i32 -1, i32 %a
Name: ugt
%notx = xor i32 %x, -1
%a = add i32 %notx, %y
%c = icmp ugt i32 %y, %x
%r = select i1 %c, i32 -1, i32 %a
=>
%c2 = icmp ult i32 %a, %y
%r = select i1 %c2, i32 -1, i32 %a
Name: commute select
%notx = xor i32 %x, -1
%a = add i32 %notx, %y
%c = icmp ult i32 %y, %x
%r = select i1 %c, i32 %a, i32 -1
=>
%c2 = icmp ult i32 %a, %y
%r = select i1 %c2, i32 -1, i32 %a
Name: ugt + commute select
%notx = xor i32 %x, -1
%a = add i32 %notx, %y
%c = icmp ugt i32 %x, %y
%r = select i1 %c, i32 %a, i32 -1
=>
%c2 = icmp ult i32 %a, %y
%r = select i1 %c2, i32 -1, i32 %a
https://rise4fun.com/Alive/den
llvm-svn: 354887
add A, sext(B) --> sub A, zext(B)
We have to choose 1 of these forms, so I'm opting for the
zext because that's easier for value tracking.
The backend should be prepared for this change after:
D57401
rL353433
This is also a preliminary step towards reducing the amount
of bit hackery that we do in IR to optimize icmp/select.
That should wait until a later optimization stage.
The seeming regression in the fuzzer test was discussed in:
D58359
We were only managing that fold in instcombine by luck, and
other passes should be able to deal with that better anyway.
llvm-svn: 354748
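A minimal sketch, assuming B is an i1 value (so sext(B) is 0 or -1, i.e. the negation of zext(B)); names are illustrative:
%s = sext i1 %b to i32
%r = add i32 %a, %s
=>
%z = zext i1 %b to i32
%r = sub i32 %a, %z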
We want to use the sum in the icmp to allow matching with
m_UAddWithOverflow and eliminate the 'not'. This is discussed
in D51929 and is another step towards solving PR14613:
https://bugs.llvm.org/show_bug.cgi?id=14613
Name: uaddsat, -1 fval
%notx = xor i32 %x, -1
%a = add i32 %x, %y
%c = icmp ugt i32 %notx, %y
%r = select i1 %c, i32 %a, i32 -1
=>
%a = add i32 %x, %y
%c2 = icmp ugt i32 %y, %a
%r = select i1 %c2, i32 -1, i32 %a
Name: uaddsat, -1 fval + ult
%notx = xor i32 %x, -1
%a = add i32 %x, %y
%c = icmp ult i32 %y, %notx
%r = select i1 %c, i32 %a, i32 -1
=>
%a = add i32 %x, %y
%c2 = icmp ugt i32 %y, %a
%r = select i1 %c2, i32 -1, i32 %a
https://rise4fun.com/Alive/nTp
llvm-svn: 354393
This is no-functional-change-intended, but that was also
true when it was part of rL354276, and I managed to lose
2 predicates for the fold with constant...causing much bot
distress. So this time I'm adding a couple of negative tests
to avoid that.
llvm-svn: 354384
This reverts commit 079b610c29b4a428b3ae7b64dbac0378facf6632.
Bots are failing after this change on a stage 2 compile of clang.
llvm-svn: 354277
We want to use the sum in the icmp to allow matching with
m_UAddWithOverflow and eliminate the 'not'. This is discussed
in D51929 and is another step towards solving PR14613:
https://bugs.llvm.org/show_bug.cgi?id=14613
Name: uaddsat, -1 fval
%notx = xor i32 %x, -1
%a = add i32 %x, %y
%c = icmp ugt i32 %notx, %y
%r = select i1 %c, i32 %a, i32 -1
=>
%a = add i32 %x, %y
%c2 = icmp ugt i32 %y, %a
%r = select i1 %c2, i32 -1, i32 %a
Name: uaddsat, -1 fval + ult
%notx = xor i32 %x, -1
%a = add i32 %x, %y
%c = icmp ult i32 %y, %notx
%r = select i1 %c, i32 %a, i32 -1
=>
%a = add i32 %x, %y
%c2 = icmp ugt i32 %y, %a
%r = select i1 %c2, i32 -1, i32 %a
https://rise4fun.com/Alive/nTp
llvm-svn: 354276
We want to use the sum in the icmp to allow matching with
m_UAddWithOverflow and eliminate the 'not'. This is discussed
in D51929 and is another step towards solving PR14613:
https://bugs.llvm.org/show_bug.cgi?id=14613
Name: not op
%notx = xor i32 %x, -1
%a = add i32 %x, %y
%c = icmp ult i32 %notx, %y
%r = select i1 %c, i32 -1, i32 %a
=>
%a = add i32 %x, %y
%c2 = icmp ult i32 %a, %y
%r = select i1 %c2, i32 -1, i32 %a
Name: not op ugt
%notx = xor i32 %x, -1
%a = add i32 %x, %y
%c = icmp ugt i32 %y, %notx
%r = select i1 %c, i32 -1, i32 %a
=>
%a = add i32 %x, %y
%c2 = icmp ult i32 %a, %y
%r = select i1 %c2, i32 -1, i32 %a
https://rise4fun.com/Alive/niom
(The matching here is still incomplete.)
llvm-svn: 354224
We want to use the sum in the icmp to allow matching with
m_UAddWithOverflow and eliminate the 'not'. This is discussed
in D51929 and is another step towards solving PR14613:
https://bugs.llvm.org/show_bug.cgi?id=14613
(The matching here is incomplete. Trying to take minimal steps
to make sure we don't induce infinite looping from existing
canonicalizations of the 'select'.)
llvm-svn: 354221
This better addresses comments from https://reviews.llvm.org/D58290.
llvm-svn: 354171
Implement two more transforms of atomicrmw:
1) We can convert an atomicrmw which produces a known value in memory into an xchg instead.
2) We can convert an atomicrmw xchg without users into a store for some orderings.
Differential Revision: https://reviews.llvm.org/D58290
llvm-svn: 354170
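An illustrative sketch of both steps (a monotonic ordering is chosen so the store conversion is legal; not taken from the patch's tests):
%old = atomicrmw and i32* %p, i32 0 monotonic   ; memory always ends up as 0
=>
%old = atomicrmw xchg i32* %p, i32 0 monotonic
and, if %old has no users:
store atomic i32 0, i32* %p monotonic, align 4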
https://bugs.llvm.org/show_bug.cgi?id=40734
llvm-svn: 354144
llvm-svn: 354059
For "idempotent" atomicrmw instructions which we can't simply turn into a load, canonicalize the operation and constant. This reduces the matching needed elsewhere in the optimizer, but doesn't directly impact codegen.
For any architecture where OR/zero is not a good default choice, you can extend the AtomicExpand lowerIdempotentRMWIntoFencedLoad mechanism. I reviewed X86 to make sure this works well, but I haven't audited other backends.
Differential Revision: https://reviews.llvm.org/D58244
llvm-svn: 354058
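For example (a sketch; the message above names OR/zero as the default canonical form, and a release ordering is used since that can't become a plain load):
%old = atomicrmw add i32* %p, i32 0 release
=>
%old = atomicrmw or i32* %p, i32 0 release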
Expand on Quentin's r353471 patch which converts some atomicrmws into loads. Handle the remaining operation types, and fix a slight bug. Atomic loads are required to have alignment. Since this was within the InstCombine fixed point, somewhere else in InstCombine was adding alignment before the verifier saw it, but still, we should fix it.
Terminology-wise, I'm using the "idempotent" naming that is used for the same operations in AtomicExpand and X86ISelLoweringInfo. Once this lands, I'll add similar tests for AtomicExpand, and move the pattern match function to a common location. In the review, there was seemingly consensus that "idempotent" was slightly incorrect for this context. Once we settle on a better name, I'll update all uses at once.
Differential Revision: https://reviews.llvm.org/D58242
llvm-svn: 354046
When instcombine sinks an instruction between two basic blocks, it sinks any
dbg.value users in the source block with it, to prevent debug use-before-free.
However we can do better by attempting to salvage the debug users, which would
avoid moving where the variable location changes. If we successfully salvage,
still sink a (cloned) dbg.value with the sunk instruction, as the sunk
instruction is more likely to be "live" later in the compilation process.
If we can't salvage dbg.value users of a sunk instruction, mark the dbg.values
in the original block as being undef. This terminates any earlier variable
location range, and represents the fact that we've optimized out the variable
location for a portion of the program.
Differential Revision: https://reviews.llvm.org/D56788
llvm-svn: 353936
This bug seems to be harmless in release builds, but will cause an error in UBSAN
builds or an assertion failure in debug builds.
When it gets to this opcode comparison, it assumes both of the operands are BinaryOperators,
but the prior m_LogicalShift will also match a ConstantExpr. The cast<BinaryOperator> will
assert in a debug build, or reading an invalid value for BinaryOp from memory with
((BinaryOperator*)constantExpr)->getOpcode() will cause an error in a UBSAN build.
The test I added will fail without this change in debug/UBSAN builds, but not in release.
Patch by: @AndrewScheidecker (Andrew Scheidecker)
Differential Revision: https://reviews.llvm.org/D58049
llvm-svn: 353736
llvm-svn: 353630
For some specific cases of bitcast A->B->A with intervening PHI nodes, the InstCombiner::optimizeBitCastFromPhi transformation creates extra PHI nodes which are actually copies of an already created PHI, or in other words, they are redundant. These extra PHI nodes can lead to extra move instructions generated after the DeSSA transformation. This happens when several conditions are met:
- SROA kicks in and creates a new alloca;
- there is a simple assignment L = R, which falls under the 'canonicalize loads' handling done by combineLoadToOperationType (this transformation is enabled by default). Exactly this transformation is the reason the bitcasts are generated;
- the alloca is then used in an A->B->A + PHI chain;
- there is loop unrolling.
As a result, optimizeBitCastFromPhi creates as many PHI nodes for each new SROA alloca as the loop unrolling factor. All of these new extra PHI nodes except one are redundant and should not be created. Moreover, the idea of optimizeBitCastFromPhi is to get rid of the cast (when possible), but that doesn't happen under these conditions.
The proposed fix is to do the cast replacement for the whole calculated/accumulated PHI closure, not only for the one cast that is passed as an argument to optimizeBitCastFromPhi. This helps accomplish several things: 1) avoid extra PHI nodes, since all casts that may trigger the optimizeBitCastFromPhi transformation will be replaced; 2) replace the bitcasts; and 3) create more opportunities to remove dead code that appears after the replacement.
A new test case shows that it's possible to get rid of all bitcasts completely and get quite good code reduction.
Author: Igor Tsimbalist <igor.v.tsimbalist@intel.com>
Reviewed By: Carrot
Differential Revision: https://reviews.llvm.org/D57053
llvm-svn: 353595
This patch accompanies the RFC posted here:
http://lists.llvm.org/pipermail/llvm-dev/2018-October/127239.html
This patch adds a new CallBr IR instruction to support asm-goto
inline assembly (as in GCC), which is used by the Linux kernel.
This instruction is both a call instruction and a terminator
instruction with multiple successors. Only inline assembly
usage is supported today.
This also adds a new INLINEASM_BR opcode to SelectionDAG and
MachineIR to represent an INLINEASM block that is also
considered a terminator instruction.
There will likely be more bug fixes and optimizations to follow
this, but we felt it had reached a point where we would like to
switch to an incremental development model.
Patch by Craig Topper, Alexander Ivchenko, Mikhail Dvoretckii
Differential Revision: https://reviews.llvm.org/D53765
llvm-svn: 353563
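A rough sketch of the textual IR form (the asm string, constraint, and names are invented for illustration and may not match real uses):
define void @func() {
entry:
  callbr void asm sideeffect "jmp ${0:l}", "X"(i8* blockaddress(@func, %on_error))
          to label %fallthrough [label %on_error]
fallthrough:
  ret void
on_error:
  ret void
}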
This commit teaches InstCombine how to replace an atomicrmw
operation with a simple atomic load.
For a given `atomicrmw <op>`, this is possible when:
1. The ordering of that operation is compatible with a load (i.e.,
anything that doesn't have a release semantic).
2. <op> does not modify the value being stored
Differential Revision: https://reviews.llvm.org/D57854
llvm-svn: 353471
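A sketch of the simplest case (an add of zero with a monotonic ordering, which satisfies both conditions above; names are illustrative):
%old = atomicrmw add i32* %p, i32 0 monotonic
=>
%old = load atomic i32, i32* %p monotonic, align 4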
This fixes a class of bugs introduced by D44367,
which transforms various cases of icmp (bitcast ([su]itofp X)), Y to icmp X, Y.
If the bitcast is between vector types with a different number of elements,
the current code will produce bad IR along the lines of: icmp <N x i32> ..., <M x i32> <...>.
This patch suppresses the transform if the bitcast changes the number of vector elements.
Patch by: @AndrewScheidecker (Andrew Scheidecker)
Differential Revision: https://reviews.llvm.org/D57871
llvm-svn: 353467
llvm-svn: 353462
We should canonicalize to one of these forms,
and compare-with-zero could be more conducive
to follow-on transforms. This also leads to
generally better codegen as shown in PR40611:
https://bugs.llvm.org/show_bug.cgi?id=40611
llvm-svn: 353313
As discussed in D53037, this can lead to worse codegen, and we
don't generally expect the backend to be able to optimize
arbitrary shuffles. If there's only one use of the 1st shuffle,
that means it's getting removed, so that should always be
safe.
llvm-svn: 353235
Summary:
The fix added in r352904 is not quite correct, or rather misleading:
1. When the texfailctrl (TFC) argument was non-constant, the fix assumed
non-TFE/LWE, which is incorrect.
2. Regardless, this code path cannot even be hit for correct
TFE/LWE-enabled calls, because those return a struct. Added
a test case for those for completeness.
Change-Id: I92d314dbc67a2670f6d7adaab765ef45f56a49cf
Reviewers: hliao, dstuttard, arsenm
Subscribers: kzhuravl, jvesely, wdng, yaxunl, tpr, t-tye, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57681
llvm-svn: 353097
Differential Revision: https://reviews.llvm.org/D57174
llvm-svn: 352914
This cleans up all GetElementPtr creation in LLVM to explicitly pass a
value type rather than deriving it from the pointer's element-type.
Differential Revision: https://reviews.llvm.org/D57173
llvm-svn: 352913
This cleans up all LoadInst creation in LLVM to explicitly pass the
value type rather than deriving it from the pointer's element-type.
Differential Revision: https://reviews.llvm.org/D57172
llvm-svn: 352911
This cleans up all CallInst creation in LLVM to explicitly pass a
function type rather than deriving it from the pointer's element-type.
Differential Revision: https://reviews.llvm.org/D57170
llvm-svn: 352909
- If that operand is not ConstantInt, skip enabling TFE/LWE.
Differential Revision: https://reviews.llvm.org/D57539
llvm-svn: 352904
An unused variable problem was introduced with rL352870
and stubbed out with rL352871, but we can make a better
fix by actually using the local variable in code rather
than just the assert.
llvm-svn: 352873