fix: add a flag to MapValue and friends which indicates whether
any module-level mappings are being made. In the common case of
inlining, no module-level mappings are needed, so MapValue doesn't
need to examine non-function-local metadata, which can be very
expensive in the case of a large module with really deep metadata
(e.g. a large C++ program compiled with -g).
This flag is a little awkward; perhaps eventually it can be moved
into the ClonedCodeInfo class.
llvm-svn: 112190
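
As a hedged illustration of the flag's intent (the boolean parameter and its
name are assumptions for this sketch, not a verbatim quote of the API of that
era), a caller that knows it is only doing function-local cloning can opt out
of the expensive metadata walk:

#include "llvm/Transforms/Utils/ValueMapper.h"
using namespace llvm;

// Function-local remapping only: passing false for the module-level flag
// (name assumed) lets MapValue skip non-function-local metadata entirely.
Value *mapLocalValue(const Value *V, ValueToValueMapTy &VMap) {
  return MapValue(V, VMap, /*ModuleLevelChanges=*/false);
}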
|
|
|
llvm-svn: 109506

llvm-svn: 106728
|
|
|
|
|
The ValueMapper used by the various cloning utilities maps MDNodes as well.
llvm-svn: 106706
|
|
|
|
|
Do not use "ValueMap" as a name for a local variable or an argument.
llvm-svn: 106698
|
|
|
|
|
|
the newly created allocas may be used by inlined calls, so these
need to have their tail call flags cleared. Fixes PR7272.
llvm-svn: 105255
|
|
|
|
|
|
reflect that it includes all inlined calls now, not just
devirtualized ones.
llvm-svn: 102824
|
that can have a big effect :). The first is to enable the
iterative SCC passmanager juice that kicks in when the
SCC passmgr detects that a function pass has devirtualized
a call. In this case, it will rerun all the passes it
manages on the SCC, up to the iteration count limit (4). This
is useful because a function pass may devirtualize a call, and
we want the inliner to inline it, or pruneeh to infer stuff
about it, etc.
The second patch is to add *all* call sites to the
DevirtualizedCalls list the inliner uses. This list is
about to get renamed, but the gist is that the
inliner now reconsiders *all* inlined call sites as candidates
for further inlining. The intuition is that in cases
like this:
f() { g(1); } g(int x) { h(x); }
we analyze this bottom-up, and may decide that it isn't
profitable to inline H into G. Next, we decide that it is
profitable to inline G into F, and do so, which means that F
now calls H. Even though the call from G -> H may not have been
profitable to inline, the call from F -> H may be (in this case
because a constant argument allows folding, etc.).
In my spot checks, this doesn't have a big impact on code. For
example, the LLC output for 252.eon grew by 0.02% (from
317252 to 317308) and 176.gcc actually shrunk by 0.3% (from 1525612
to 1520964 bytes). 252.eon never iterated in the SCC passmgr;
176.gcc iterated at most once.
llvm-svn: 102823
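
To make the f/g/h intuition concrete, here is a hedged source-level sketch
(the function bodies are invented for illustration):

int expensive(int x);                  // stand-in for code too big to inline
int h(int x) { return x == 1 ? 0 : expensive(x); }
int g(int x) { return h(x); }          // h(x) with unknown x: not profitable
int f()      { return g(1); }          // after inlining g: f() { return h(1); }
                                       // h(1) folds to 0, so inlining now pays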
|
This fixes a bug where calls inlined into an invoke would get
changed into an invoke but the array would keep pointing to
the (now dead) call. The improved inliner behavior is still
disabled for now.
llvm-svn: 102196
|
that appear in the SCC as a result of inlining as candidates
for inlining. Change this so that it *does* consider call
sites that change from being indirect to being direct as a
result of inlining. This allows it to completely
"devirtualize" the testcase.
llvm-svn: 102146
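
A hedged sketch of the indirect-to-direct pattern this now catches (names
invented for illustration):

void work();                           // the eventual direct callee
void apply(void (*fp)()) { fp(); }     // indirect call through fp
void run() { apply(work); }            // inlining apply into run turns fp()
                                       // into the direct call work()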
|
arguments are handled with a new InlineFunctionInfo class. This
makes it easier to extend InlineFunction to return more info in the
future.
llvm-svn: 102137
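
A rough sketch of the shape such a bundle takes (the field set here is an
assumption pieced together from neighboring log entries, not the verbatim
header):

#include "llvm/ADT/SmallVector.h"

namespace llvm {
class CallGraph;
class AllocaInst;

// Bundles InlineFunction's optional inputs and outputs, so new result
// fields can be added later without touching the function signature.
struct InlineFunctionInfo {
  CallGraph *CG = nullptr;                    // call graph to update, if any
  SmallVector<AllocaInst *, 4> StaticAllocas; // allocas copied into the caller
};
} // namespace llvm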
|
define void @f3(void (i8*)* %__f) ssp {
entry:
  call void %__f(i8* undef)
  unreachable
}
define void @f4(i8* %this) ssp align 2 {
entry:
  call void @f3(void (i8*)* @f2) ssp
  ret void
}
The inliner is turning the indirect call to %__f into a direct
call to F2. Make the call graph more precise when this happens.
The inliner doesn't revisit call sites introduced by inlining,
so there isn't an easy way to test for this, but a more precise
callgraph is a good thing.
llvm-svn: 102131
|
|
|
llvm-svn: 102119
|
|
|
|
|
|
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.
llvm-svn: 101579
|
with a fix for self-hosting
rotate CallInst operands, i.e. move callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph construction,
smaller compiler binary
llvm-svn: 101465
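
A sketch of what the rotation means for operand access (illustrative shapes,
not the verbatim LLVM source; the include path is the modern spelling):

#include "llvm/IR/Instructions.h"
using namespace llvm;

// Layout change: before [callee, arg0, ..., argN-1]
//                after  [arg0, ..., argN-1, callee]
Value *calleeOf(CallInst *CI) {
  return CI->getOperand(CI->getNumOperands() - 1); // callee now at the back
}
Value *argOf(CallInst *CI, unsigned i) {
  return CI->getOperand(i);                        // args start at index 0
}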
|
|
|
llvm-svn: 101434
|
with a fix
rotate CallInst operands, i.e. move callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph construction,
smaller compiler binary
llvm-svn: 101397
|
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph construction,
smaller compiler binary
llvm-svn: 101364
|
|
|
|
|
|
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100304
|
|
|
llvm-svn: 100199
|
|
|
|
|
|
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100191
|
|
|
llvm-svn: 99948
|
memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
An update of LangRef will occur in a subsequent checkin.
llvm-svn: 99928
|
|
|
llvm-svn: 99451
|
for the noinline attribute, and make the inliner refuse to
inline a call site when the call site is marked noinline even
if the callee isn't. This fixes PR6682.
llvm-svn: 99341
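
A hedged sketch of the resulting check, in today's API spelling (the helper
name and exact structure are assumed for illustration, not the 2010 code):

#include "llvm/IR/Function.h"
#include "llvm/IR/InstrTypes.h"
using namespace llvm;

// Refuse to inline when either the call site or the callee is noinline.
static bool mayInline(const CallBase &CB) {
  if (CB.isNoInline())                             // call-site attribute
    return false;
  if (const Function *F = CB.getCalledFunction())
    if (F->hasFnAttribute(Attribute::NoInline))    // callee attribute
      return false;
  return true;
}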
|
Intrinsic::dbg_stoppoint
Intrinsic::dbg_region_start
Intrinsic::dbg_region_end
Intrinsic::dbg_func_start
AutoUpgrade simply ignores these intrinsics now.
llvm-svn: 92557
|
|
|
llvm-svn: 86748
|
with multiple return values it inserts a PHI to merge them all together.
However, if the return values are all the same, it ends up with a pointless
PHI, and this pointless PHI happens to block SRoA in at least a silly C++
example written by Doug, and probably others. This
fixes rdar://7339069.
llvm-svn: 85206
|
|
|
|
|
updating the callgraph when introducing a call.
llvm-svn: 84310
|
|
|
|
|
|
where the element is of a basic builtin type. For example, to get
an i8* use getInt8PtrTy.
llvm-svn: 83379
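
For instance (modern include path shown; in this commit's era the header was
llvm/DerivedTypes.h, and the accessor belongs to typed-pointer-era LLVM):

#include "llvm/IR/DerivedTypes.h"
using namespace llvm;

// One call instead of PointerType::getUnqual(Type::getInt8Ty(Ctx)).
PointerType *int8Ptr(LLVMContext &Ctx) {
  return Type::getInt8PtrTy(Ctx);
}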
|
|
|
|
|
update all the callers.
llvm-svn: 82889
|
|
|
llvm-svn: 81138
|
|
|
|
|
callgraph. This is now dead because RAUW does the job.
llvm-svn: 80703
|
for sanity. This didn't turn up any bugs.
Change CallGraphNode to maintain its "callsite" information in the
call edges list as a WeakVH instead of as an Instruction*. This fixes
a broad class of dangling-pointer bugs and gives CallGraph a number
of useful invariants again. This fixes the class of problem indicated
by PR4029 and PR3601.
llvm-svn: 80663
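
The core of the change, sketched in simplified form (type names follow the
shape of CallGraphNode's edge list; treat the details as illustrative):

#include "llvm/IR/ValueHandle.h"  // era path: llvm/Support/ValueHandle.h
#include <utility>
#include <vector>

namespace llvm {
class CallGraphNode;

// Each edge keeps its call site in a WeakVH, which nulls itself when the
// instruction is deleted, instead of a raw Instruction* that could dangle.
typedef std::pair<WeakVH, CallGraphNode *> CallRecord;
typedef std::vector<CallRecord> CalledFunctionsVector;
} // namespace llvm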
|
|
|
|
|
Use MDNodes to encode debug info in llvm IR.
llvm-svn: 80406
|
|
|
|
|
the list of static allocas that it inlined.
llvm-svn: 80203
|
|
|
llvm-svn: 80202
|
|
|
|
|
and other code cleanups. No functionality change.
llvm-svn: 80199
|
|
|
llvm-svn: 80073
|
llvm.dbg.... global variables, to encode debugging information in llvm IR. This is
mostly a mechanical change that tests metadata support very well.
This change speeds up llvm-gcc by more than 6% at "-O0 -g" (measured by
compiling InstructionCombining.cpp!)
llvm-svn: 79977
|
|
|
llvm-svn: 78948

llvm-svn: 77635

llvm-svn: 77516

llvm-svn: 77494
|
|
|
|
|
thanks to contexts-on-types. More to come.
llvm-svn: 77011
|
|
|
llvm-svn: 76702
|
|
|
|
|
AllocaInst and MallocInst.
llvm-svn: 75863
|
|
|
llvm-svn: 75703
|
|
|
|
|
the [I|F]CmpInst constructors. Who knew!?
llvm-svn: 75200