path: root/llvm/lib
Commit message (Author, Age; Files changed, Lines -deleted/+added; llvm-svn revision)
* Move pattern check outside of the if-then statement. This prevents us from fiddling with constants unless we have to. (Bill Wendling, 2008-12-01; 1 file, -10/+12; llvm-svn: 60340)
* Rename some variables and increment BI only once at the start of the loop instead of throughout it. (Chris Lattner, 2008-12-01; 1 file, -38/+30; llvm-svn: 60339)
* Pull the predMap densemap out of the inner loop of performPRE so that it isn't reallocated all the time. This is a tiny speedup for GVN: 3.90->3.88s. (Chris Lattner, 2008-12-01; 1 file, -2/+4; llvm-svn: 60338)
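  The change above is an instance of a general pattern: hoist a scratch container out of a hot loop and clear() it each iteration so its storage is reused rather than reallocated. A small standalone C++ sketch of the idea (illustrative code, not the actual performPRE loop):

    #include <vector>

    // Hoisting a scratch buffer out of a hot loop: clear() resets the size
    // but keeps the previously allocated capacity, so later iterations reuse
    // the same storage instead of hitting the allocator every time.
    void sumPositivesPerRow(const std::vector<std::vector<int>> &Rows,
                            std::vector<long> &Out) {
      std::vector<int> Scratch;            // constructed once, outside the loop
      for (const std::vector<int> &Row : Rows) {
        Scratch.clear();                   // size -> 0, capacity preserved
        for (int V : Row)
          if (V > 0)
            Scratch.push_back(V);          // rarely allocates after warm-up
        long Sum = 0;
        for (int V : Scratch)
          Sum += V;
        Out.push_back(Sum);
      }
    }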
* Switch a couple more calls to use array_pod_sort. (Chris Lattner, 2008-12-01; 2 files, -3/+5; llvm-svn: 60337)
* Introduce a new array_pod_sort function and switch LSR to use it instead of std::sort. This shrinks the release-asserts LSR.o file by 1100 bytes of code on my system. We should start using array_pod_sort where possible. (Chris Lattner, 2008-12-01; 1 file, -1/+1; llvm-svn: 60335)
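  The code-size win comes from routing call sites through a single out-of-line libc qsort instead of instantiating a full std::sort for each element type. A minimal standalone sketch of that idea with hypothetical names (pod_sort is illustrative, not LLVM's actual array_pod_sort signature):

    #include <cstdlib>
    #include <type_traits>

    // Compare two elements through operator< so qsort produces the same
    // ordering std::sort would for simple value types.
    template <typename T>
    static int pod_sort_comparator(const void *LHS, const void *RHS) {
      const T &L = *static_cast<const T *>(LHS);
      const T &R = *static_cast<const T *>(RHS);
      if (L < R) return -1;
      if (R < L) return 1;
      return 0;
    }

    // Sort a contiguous range of trivially copyable elements by forwarding
    // to libc qsort; only the tiny comparator is instantiated per type, so
    // many call sites share one sort implementation.
    template <typename T>
    void pod_sort(T *Start, T *End) {
      static_assert(std::is_trivially_copyable<T>::value,
                    "qsort moves elements with memcpy");
      if (End - Start > 1)
        std::qsort(Start, End - Start, sizeof(T), pod_sort_comparator<T>);
    }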
* Eliminate use of SetVector for the DeadInsts set; just use a SmallVector. This is a lot cheaper and conceptually simpler. (Chris Lattner, 2008-12-01; 1 file, -17/+31; llvm-svn: 60332)
* DeleteTriviallyDeadInstructions is always passed the DeadInsts ivar, just use it directly. (Chris Lattner, 2008-12-01; 1 file, -10/+9; llvm-svn: 60330)
* Simplify DeleteTriviallyDeadInstructions again; unlike my previous buggy rewrite, this notifies ScalarEvolution of a pending instruction about to be removed and then erases it, instead of erasing it and then notifying. (Chris Lattner, 2008-12-01; 1 file, -20/+13; llvm-svn: 60329)
* Simplify these patterns using m_Specific. No need to grep for xor in the testcase ("or" is a substring). (Chris Lattner, 2008-12-01; 1 file, -16/+6; llvm-svn: 60328)
* Teach jump threading to clean up after itself, DCE'ing and constant-folding the new instructions it simplifies. Because we're threading jumps on edges with constants coming in from PHIs, we are inherently exposing a lot more constants to the new block. Folding them and deleting dead conditions allows the cost model in jump threading to be more accurate as it iterates. (Chris Lattner, 2008-12-01; 1 file, -1/+24; llvm-svn: 60327)
* The PreVerifier pass preserves everything. In practice, this prevents the passmgr from adding yet another domtree invocation for Verifier if there is already one live. (Chris Lattner, 2008-12-01; 1 file, -0/+4; llvm-svn: 60326)
* Change instcombine to use FoldPHIArgGEPIntoPHI to fold two operand PHIs instead of using FoldPHIArgBinOpIntoPHI. In addition to being more obvious, this also fixes a problem where instcombine wouldn't merge two phis that had different variable indices. This prevented instcombine from factoring big chunks of code in 403.gcc. For example:

      insn_cuid.exit:
      -  %tmp336 = load i32** @uid_cuid, align 4
      -  %tmp337 = getelementptr %struct.rtx_def* %insn_addr.0.ph.i, i32 0, i32 3
      -  %tmp338 = bitcast [1 x %struct.rtunion]* %tmp337 to i32*
      -  %tmp339 = load i32* %tmp338, align 4
      -  %tmp340 = getelementptr i32* %tmp336, i32 %tmp339
         br label %bb62
      bb61:
      -  %tmp341 = load i32** @uid_cuid, align 4
      -  %tmp342 = getelementptr %struct.rtx_def* %insn, i32 0, i32 3
      -  %tmp343 = bitcast [1 x %struct.rtunion]* %tmp342 to i32*
      -  %tmp344 = load i32* %tmp343, align 4
      -  %tmp345 = getelementptr i32* %tmp341, i32 %tmp344
         br label %bb62
      bb62:
      -  %iftmp.62.0.in = phi i32* [ %tmp345, %bb61 ], [ %tmp340, %insn_cuid.exit ]
      +  %insn.pn2 = phi %struct.rtx_def* [ %insn, %bb61 ], [ %insn_addr.0.ph.i, %insn_cuid.exit ]
      +  %tmp344.pn.in.in = getelementptr %struct.rtx_def* %insn.pn2, i32 0, i32 3
      +  %tmp344.pn.in = bitcast [1 x %struct.rtunion]* %tmp344.pn.in.in to i32*
      +  %tmp341.pn = load i32** @uid_cuid
      +  %tmp344.pn = load i32* %tmp344.pn.in
      +  %iftmp.62.0.in = getelementptr i32* %tmp341.pn, i32 %tmp344.pn
         %iftmp.62.0 = load i32* %iftmp.62.0.in

  (Chris Lattner, 2008-12-01; 1 file, -17/+5; llvm-svn: 60325)
* Teach instcombine to merge GEPs through PHIs. This is really important because it is sinking the loads using the GEPs, but not the GEPs themselves. This triggers 647 times on 403.gcc and makes the .s file much much nicer. For example, before:

            je      LBB1_87 ## bb78
    LBB1_62:        ## bb77
            leal    84(%esi), %eax
    LBB1_63:        ## bb79
            movl    (%eax), %eax
            ...
    LBB1_87:        ## bb78
            movl    $0, 4(%esp)
            movl    %esi, (%esp)
            call    L_make_decl_rtl$stub
            jmp     LBB1_62 ## bb77

  after:

            jne     LBB1_63 ## bb79
    LBB1_62:        ## bb78
            movl    $0, 4(%esp)
            movl    %esi, (%esp)
            call    L_make_decl_rtl$stub
    LBB1_63:        ## bb79
            movl    84(%esi), %eax

  The input code was (and the GEPs are merged and the PHI is now eliminated by instcombine):

      br i1 %tmp233, label %bb78, label %bb77
    bb77:
      %tmp234 = getelementptr %struct.tree_node* %t_addr.3, i32 0, i32 0, i32 22
      br label %bb79
    bb78:
      call void @make_decl_rtl(%struct.tree_node* %t_addr.3, i8* null) nounwind
      %tmp235 = getelementptr %struct.tree_node* %t_addr.3, i32 0, i32 0, i32 22
      br label %bb79
    bb79:
      %iftmp.12.0.in = phi %struct.rtx_def** [ %tmp235, %bb78 ], [ %tmp234, %bb77 ]
      %iftmp.12.0 = load %struct.rtx_def** %iftmp.12.0.in

  (Chris Lattner, 2008-12-01; 1 file, -16/+95; llvm-svn: 60322)
* Make GVN be more intelligent about redundant load elimination: when finding dependent loads/stores, realize that they are the same if alias analysis says they must alias, instead of relying on the pointers to be exactly equal. This makes load elimination more aggressive. For example, on 403.gcc, we had:

    <     68 gvn    - Number of instructions PRE'd
    < 152718 gvn    - Number of instructions deleted
    <  49699 gvn    - Number of loads deleted
    <   6153 memdep - Number of dirty cached non-local responses
    < 169336 memdep - Number of fully cached non-local responses
    < 162428 memdep - Number of uncached non-local responses

  now we have:

    >     64 gvn    - Number of instructions PRE'd
    > 153623 gvn    - Number of instructions deleted
    >  49856 gvn    - Number of loads deleted
    >   5022 memdep - Number of dirty cached non-local responses
    > 159030 memdep - Number of fully cached non-local responses
    > 162443 memdep - Number of uncached non-local responses

  That's an extra 157 loads deleted and an extra 905 other instructions nuked. This slows down GVN very slightly, from 3.91 to 3.96s. (Chris Lattner, 2008-12-01; 1 file, -2/+19; llvm-svn: 60314)
* Reimplement the non-local dependency data structure in terms of a sorted vector instead of a densemap. This shrinks the memory usage of this thing substantially (the high water mark) as well as making operations like scanning it faster. This speeds up memdep slightly; gvn goes from 3.9376 to 3.9118s on 403.gcc. This also splits out the statistics for the cached non-local case to differentiate between the dirty and clean cached cases. Here are the stats for 403.gcc:

      6153 memdep - Number of dirty cached non-local responses
    169336 memdep - Number of fully cached non-local responses
    162428 memdep - Number of uncached non-local responses

  yay for caching :) (Chris Lattner, 2008-12-01; 2 files, -72/+116; llvm-svn: 60313)
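  The data-structure trade-off described above is a common one: a sorted vector of (key, value) pairs stores entries contiguously with no per-bucket overhead, scans quickly, and still gives O(log n) lookups via binary search, at the cost of O(n) insertion. A generic standalone C++ sketch of the idea (not the actual memdep data structure):

    #include <algorithm>
    #include <utility>
    #include <vector>

    template <typename Key, typename Value>
    class SortedVectorMap {
      std::vector<std::pair<Key, Value>> Entries; // kept sorted by Key

      typename std::vector<std::pair<Key, Value>>::iterator findPos(const Key &K) {
        return std::lower_bound(Entries.begin(), Entries.end(), K,
                                [](const std::pair<Key, Value> &E, const Key &K2) {
                                  return E.first < K2;
                                });
      }

    public:
      // Binary search over contiguous storage; returns null if K is absent.
      Value *lookup(const Key &K) {
        auto It = findPos(K);
        return (It != Entries.end() && It->first == K) ? &It->second : nullptr;
      }

      // O(n) insert keeps the vector sorted; cheap when the map stays small
      // or is rebuilt in batches.
      void insert(const Key &K, Value V) {
        auto It = findPos(K);
        if (It != Entries.end() && It->first == K)
          It->second = std::move(V);
        else
          Entries.insert(It, std::make_pair(K, std::move(V)));
      }
    };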
* Implement the ((A|B)&1)|(B&-2) -> (A&1)|B transformation. This also takes care of permutations of this pattern. (Bill Wendling, 2008-12-01; 2 files, -6/+67; llvm-svn: 60312)
* Cache analyses in ivars and add some useful DEBUG output. This speeds up GVN from 4.0386s to 3.9376s. (Chris Lattner, 2008-12-01; 1 file, -37/+30; llvm-svn: 60310)
* Improve indentation, do cheap checks before expensive ones, and remove some FIXMEs. This speeds up GVN very slightly on 403.gcc (4.06->4.03s). (Chris Lattner, 2008-11-30; 1 file, -52/+51; llvm-svn: 60309)
* Eliminate the DepResultTy abstraction. It is now completely redundant with MemDepResult, and MemDepResult has a nicer interface. (Chris Lattner, 2008-11-30; 1 file, -48/+43; llvm-svn: 60308)
* Minor cleanup: use getTrue and getFalse where appropriate. No functional change. (Eli Friedman, 2008-11-30; 1 file, -20/+20; llvm-svn: 60307)
* Some minor cleanups to instcombine; no functionality change. Note that the FoldOpIntoPhi call is dead because it's impossible for the first operand of a subtraction to be both a ConstantInt and a PHINode. (Eli Friedman, 2008-11-30; 1 file, -56/+17; llvm-svn: 60306)
* Cache TargetData/AliasAnalysis in the pass instead of calling getAnalysis<>. getAnalysis<> is apparently extremely expensive. Doing this speeds up GVN on 403.gcc by 16%! (Chris Lattner, 2008-11-30; 1 file, -18/+18; llvm-svn: 60304)
* Add instruction combining for ((A&~B)|(~A&B)) -> A^B and all permutations. (Bill Wendling, 2008-11-30; 2 files, -6/+23; llvm-svn: 60291)
* Implement the (A&((~A)|B)) -> A&B transformation in the instruction combiner. This takes care of all permutations of this pattern. (Bill Wendling, 2008-11-30; 2 files, -10/+19; llvm-svn: 60290)
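  The three bitwise rewrites above (llvm-svn 60312, 60291, and 60290) are pure Boolean identities, so they can be sanity-checked exhaustively at a small bit width; since they operate bit by bit, holding for every 8-bit pair implies they hold at any width. A standalone C++ check of the algebra (verification code, not the instcombine implementation):

    #include <cassert>
    #include <cstdint>

    int main() {
      for (unsigned a = 0; a < 256; ++a) {
        for (unsigned b = 0; b < 256; ++b) {
          uint8_t A = static_cast<uint8_t>(a);
          uint8_t B = static_cast<uint8_t>(b);
          // ((A|B)&1)|(B&-2) -> (A&1)|B   (-2 is 0xFE at 8 bits)
          assert(uint8_t(((A | B) & 1) | (B & 0xFE)) == uint8_t((A & 1) | B));
          // (A&~B)|(~A&B) -> A^B
          assert(uint8_t((A & ~B) | (~A & B)) == uint8_t(A ^ B));
          // A&(~A|B) -> A&B
          assert(uint8_t(A & (~A | B)) == uint8_t(A & B));
        }
      }
      return 0;
    }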
* Forgot one remaining call to getSExtValue(). (Bill Wendling, 2008-11-30; 1 file, -1/+1; llvm-svn: 60289)
* getSExtValue() doesn't work for ConstantInts with bitwidth > 64 bits. Use APInt calls everywhere instead. This fixes PR3144. (Bill Wendling, 2008-11-30; 1 file, -5/+5; llvm-svn: 60288)
* Optimize memmove and memset into the LLVM builtins. Note that these only show up in code from front-ends besides llvm-gcc, like clang. (Eli Friedman, 2008-11-30; 1 file, -4/+57; llvm-svn: 60287)
* A couple small cleanups, plus a new potential optimization. (Eli Friedman, 2008-11-30; 1 file, -3/+29; llvm-svn: 60286)
* Moving potential optimizations out of PR2330 into lib/Target/README.txt. Hopefully this isn't too much stuff to dump into this file. (Eli Friedman, 2008-11-30; 1 file, -0/+262; llvm-svn: 60285)
* Followup to r60283: optimize arbitrary-width signed divisions as well as unsigned divisions. Same caveats as before. (Eli Friedman, 2008-11-30; 1 file, -71/+34; llvm-svn: 60284)
* Fix for PR2164: allow transforming arbitrary-width unsigned divides into multiplies. Some more cleverness would be nice, though: it would be good to do this transformation on illegal types as well, and we would prefer a narrower constant when possible so that we can use a narrower multiply, which can be cheaper. (Eli Friedman, 2008-11-30; 1 file, -95/+65; llvm-svn: 60283)
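  The transformation in the two division commits above is the classic divide-by-constant strength reduction: replace a division by a fixed divisor with a multiply by a precomputed reciprocal plus a shift. A standalone C++ illustration for the unsigned 32-bit case with divisor 10; the magic constant 0xCCCCCCCD (ceil(2^35/10)) and shift of 35 are the standard textbook values for this divisor, not taken from the commit itself:

    #include <cassert>
    #include <cstdint>

    // For every 32-bit x, x / 10 == (x * 0xCCCCCCCD) >> 35 when the product
    // is computed in 64 bits.
    static uint32_t div10(uint32_t x) {
      return static_cast<uint32_t>((static_cast<uint64_t>(x) * 0xCCCCCCCDull) >> 35);
    }

    int main() {
      assert(div10(0) == 0);
      assert(div10(9) == 0);
      assert(div10(10) == 1);
      assert(div10(0xFFFFFFFFu) == 0xFFFFFFFFu / 10);
      // Strided sweep over the full 32-bit range as a broader spot check.
      for (uint64_t x = 0; x <= 0xFFFFFFFFull; x += 4097)
        assert(div10(static_cast<uint32_t>(x)) == static_cast<uint32_t>(x) / 10);
      return 0;
    }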
* Don't make TwoToExp signed by default. (Bill Wendling, 2008-11-30; 1 file, -2/+1; llvm-svn: 60279)
* From Hacker's Delight: "For signed integers, the determination of overflow of x*y is not so simple. If x and y have the same sign, then overflow occurs iff xy > 2**31 - 1. If they have opposite signs, then overflow occurs iff xy < -2**31." In this case, x == -1. (Bill Wendling, 2008-11-30; 1 file, -8/+10; llvm-svn: 60278)
* APIntify a test which is potentially unsafe otherwise, and fix the nearby FIXME. I'm not sure what the right way to fix the Cell test was; if the approach I used isn't okay, please let me know. (Eli Friedman, 2008-11-30; 1 file, -3/+10; llvm-svn: 60277)
* Instcombine was illegally transforming -X/C into X/-C when either X or C overflowed on negation. This commit checks to make sure that neither C nor X overflows. This requires that the RHS of X (a subtract instruction) be a constant integer. (Bill Wendling, 2008-11-30; 1 file, -3/+20; llvm-svn: 60275)
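  The hazard is that negating the most negative value of a signed type wraps back to itself, so -X/C and X/-C can produce different results. A standalone C++ illustration, modeling the wrap-around negation of LLVM IR's 'sub 0, X' with unsigned arithmetic to avoid signed-overflow undefined behavior in C++ (this demonstrates the bug's cause, not the fix in the commit):

    #include <cassert>
    #include <cstdint>
    #include <limits>

    // Two's-complement negation with wrap-around, like LLVM IR's 'sub 0, X';
    // the conversion back to int32_t gives the usual two's-complement result.
    static int32_t wrapNeg(int32_t V) {
      return static_cast<int32_t>(0u - static_cast<uint32_t>(V));
    }

    int main() {
      const int32_t X = std::numeric_limits<int32_t>::min(); // -X wraps back to X
      const int32_t C = 2;
      int32_t Original    = wrapNeg(X) / C;  // (-X)/C evaluates INT_MIN / 2
      int32_t Transformed = X / wrapNeg(C);  // X/(-C) evaluates INT_MIN / -2
      assert(Original == std::numeric_limits<int32_t>::min() / 2);
      assert(Transformed == -(std::numeric_limits<int32_t>::min() / 2));
      assert(Original != Transformed);       // the rewrite changed the value
      return 0;
    }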
* Two changes: first, make getDependency remove QueryInst from a dirty record's ReverseLocalDeps when we update it; this fixes a regression test failure from my last commit. Second, for each non-local cached information structure, keep a bit that indicates whether it is dirty or not. This saves us a scan over the whole thing in the common case when it isn't dirty. (Chris Lattner, 2008-11-30; 1 file, -18/+26; llvm-svn: 60274)
* Introduce a typedef; no functionality change. (Chris Lattner, 2008-11-30; 1 file, -16/+15; llvm-svn: 60272)
* Change NonLocalDeps to be a densemap of pointers to densemaps instead of containing them by value. This increases the density (!) of NonLocalDeps as well as making the reallocation case faster. This speeds up gvn on 403.gcc by 2% and makes room for future improvements. I'm not super thrilled with having to explicitly manage the new/delete of the map, but it is necessary for the next change. (Chris Lattner, 2008-11-30; 1 file, -30/+52; llvm-svn: 60271)
* Calls never depend on allocations. (Chris Lattner, 2008-11-30; 1 file, -12/+5; llvm-svn: 60268)
* Fix a FIXME by making memdep's handling of allocations more logical. If we see that a load depends on the allocation of its memory with no intervening stores, we now return a 'None' dependency instead of 'Normal'. This tweaks GVN to do its optimization with the new result. (Chris Lattner, 2008-11-30; 2 files, -35/+22; llvm-svn: 60267)
* Implement a FIXME by introducing a new getDependencyFromInternal method that returns its result as a DepResultTy instead of as a MemDepResult. This reduces conversion back and forth. (Chris Lattner, 2008-11-30; 1 file, -24/+19; llvm-svn: 60266)
* Move the getNonLocalDependency method to a more logical place in the file; no functionality change. (Chris Lattner, 2008-11-30; 1 file, -90/+89; llvm-svn: 60265)
* Remove an old FIXME, and resolve another FIXME by adding liberal comments about what this class does. (Chris Lattner, 2008-11-30; 1 file, -2/+0; llvm-svn: 60264)
* Remove a bit of incorrect code that tried to be tricky about speeding up dependencies. The basic situation was this: consider if we had:

    store1
    ...
    store2
    ...
    store3

  where memdep thinks that store3 depends on store2 and store2 depends on store1. The problem happens when we delete store2: the code in question was updating the dep info for store3 to be store1. This is a spiffy optimization, but it is not safe at all, because aliasing isn't transitive. This bug isn't exposed today with DSE because DSE will only zap store2 if it is identical to store3, and in this case it is safe to update it to depend on store1. However, memcpyopt is not so fortunate, which is presumably why the "dropInstruction" code used to exist. Since this doesn't actually provide a speedup in practice, just rip the code out. (Chris Lattner, 2008-11-30; 1 file, -49/+24; llvm-svn: 60263)
* Eliminate the dropInstruction method, which is not needed any more. Fix a subtle iterator invalidation bug I introduced in the last commit. (Chris Lattner, 2008-11-29; 2 files, -82/+35; llvm-svn: 60258)
* Implement some FIXMEs: when deleting an instruction with an entry in the non-local deps map, don't reset entries referencing that instruction to [dirty, null]; instead, set them to [dirty, next], where next is the instruction after the deleted one. Use this information in the non-local deps code to avoid rescanning entire blocks. This speeds up GVN slightly by avoiding pointless work. On 403.gcc this makes GVN 1.5% faster. (Chris Lattner, 2008-11-29; 1 file, -14/+62; llvm-svn: 60256)
* Change MemDep::getNonLocalDependency to return its results as a SmallVector instead of a DenseMap. This speeds up GVN by 5% on 403.gcc. (Chris Lattner, 2008-11-29; 2 files, -12/+11; llvm-svn: 60255)
* Move MemoryDependenceAnalysis::verifyRemoved to the end of the file; no functionality/code change. (Chris Lattner, 2008-11-29; 1 file, -32/+32; llvm-svn: 60254)
* Reimplement getNonLocalDependency with a simpler worklist formulation that is faster and doesn't require nonLazyHelper. Much less code. (Chris Lattner, 2008-11-29; 2 files, -140/+94; llvm-svn: 60253)
* Fix a thinko that manifested as a crash on clamav last night. (Chris Lattner, 2008-11-29; 1 file, -2/+2; llvm-svn: 60251)