| author | David Blaikie <dblaikie@gmail.com> | 2015-02-27 19:29:02 +0000 |
|---|---|---|
| committer | David Blaikie <dblaikie@gmail.com> | 2015-02-27 19:29:02 +0000 |
| commit | 79e6c74981f4755ed55b38175d8cd34ec91395b1 (patch) | |
| tree | 3e3d41d853795c46029a07c3fb78b1e2f7668185 /llvm/test/Analysis/ScalarEvolution | |
| parent | bad3ff207f68e69f36b9a1f90a29f22341e505bb (diff) | |
[opaque pointer type] Add textual IR support for explicit type parameter to getelementptr instruction
One of several parallel first steps to remove the target type of pointers,
replacing them with a single opaque pointer type.
This adds an explicit type parameter to the gep instruction so that when the
first parameter becomes an opaque pointer type, the type to gep through is
still available to the instructions.
* This doesn't modify gep operators, only instructions (operators will be
handled separately)
* Textual IR changes only. Bitcode (including upgrade) and changing the
in-memory representation will be in separate changes.
* geps of vectors are transformed as:
    getelementptr <4 x float*> %x, ...
    -> getelementptr float, <4 x float*> %x, ...
  Then, once the opaque pointer type is introduced, this will ultimately look
  like:
    getelementptr float, <4 x ptr> %x
  with the unambiguous interpretation that it is a vector of pointers to float.
* address spaces remain on the pointer, not the type:
    getelementptr float addrspace(1)* %x
    -> getelementptr float, float addrspace(1)* %x
  Then, eventually:
    getelementptr float, ptr addrspace(1) %x
Importantly, the massive amount of test case churn has been automated by
some crappy python code. I had to manually update a few test cases that
wouldn't fit the script's model (r228970, r229196, r229197, r229198). The
python script just massages stdin and writes the result to stdout; I
then wrapped that in a shell script to handle replacing files, then
used the usual find+xargs to migrate all the files.
update.py:
import fileinput
import sys
import re

ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
normrep = re.compile(r"(^.*?[^%\w]getelementptr )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")

def conv(match, line):
    if not match:
        return line
    line = match.groups()[0]
    if len(match.groups()[5]) == 0:
        line += match.groups()[2]
    line += match.groups()[3]
    line += ", "
    line += match.groups()[1]
    line += "\n"
    return line

for line in sys.stdin:
    if line.find("getelementptr ") == line.find("getelementptr inbounds"):
        if line.find("getelementptr inbounds") != line.find("getelementptr inbounds ("):
            line = conv(re.match(ibrep, line), line)
    elif line.find("getelementptr ") != line.find("getelementptr ("):
        line = conv(re.match(normrep, line), line)
    sys.stdout.write(line)
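As a sanity check, the rewrite logic above can be exercised on a few sample lines. The regexes and conv() below are copied verbatim from update.py; the sample IR lines are hypothetical inputs, not taken from any test file:

```python
# Standalone check of update.py's rewrite; ibrep/normrep/conv are copied
# verbatim from the script, the sample lines are made-up illustrations.
import re

ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
normrep = re.compile(r"(^.*?[^%\w]getelementptr )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")

def conv(match, line):
    if not match:
        return line
    line = match.groups()[0]
    if len(match.groups()[5]) == 0:
        line += match.groups()[2]
    line += match.groups()[3]
    line += ", "
    line += match.groups()[1]
    line += "\n"
    return line

def rewrite(line):
    # Same dispatch as the script's main loop, minus the stdin plumbing.
    if line.find("getelementptr ") == line.find("getelementptr inbounds"):
        if line.find("getelementptr inbounds") != line.find("getelementptr inbounds ("):
            line = conv(re.match(ibrep, line), line)
    elif line.find("getelementptr ") != line.find("getelementptr ("):
        line = conv(re.match(normrep, line), line)
    return line

print(rewrite("  %a = getelementptr i32* %p, i32 %i\n"), end="")
# ->  %a = getelementptr i32, i32* %p, i32 %i
print(rewrite("  %b = getelementptr inbounds double* %d, i64 %n\n"), end="")
# ->  %b = getelementptr inbounds double, double* %d, i64 %n
print(rewrite("  getelementptr <4 x float*> %x, i32 0\n"), end="")
# ->  getelementptr float, <4 x float*> %x, i32 0
```

Note the vector case: group 6 (the closing '>') being non-empty suppresses re-adding the vector prefix, so only the element type is prepended, matching the transformation described above.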
apply.sh:
for name in "$@"
do
  python3 `dirname "$0"`/update.py < "$name" > "$name.tmp" && mv "$name.tmp" "$name"
  rm -f "$name.tmp"
done
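The wrapper's replace-then-clean-up behavior can be seen in isolation with a stand-in filter; here 'tr' is a hypothetical substitute for update.py and demo.txt is a made-up sample file:

```shell
# Dry run of the apply.sh pattern with 'tr' standing in for update.py.
# The && means the original is only replaced when the filter succeeds;
# rm -f then removes any leftover .tmp from a failed run.
printf 'abc\n' > demo.txt
for name in demo.txt
do
  tr 'a' 'A' < "$name" > "$name.tmp" && mv "$name.tmp" "$name"
  rm -f "$name.tmp"
done
cat demo.txt
```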
The actual commands:
From llvm/src:
find test/ -name *.ll | xargs ./apply.sh
From llvm/src/tools/clang:
find test/ -name *.mm -o -name *.m -o -name *.cpp -o -name *.c | xargs -I '{}' ../../apply.sh "{}"
From llvm/src/tools/polly:
find test/ -name *.ll | xargs ./apply.sh
After that, I ran check-all (with llvm, clang, clang-tools-extra, lld,
compiler-rt, and polly all checked out).
The extra 'rm' in the apply.sh script covers a few files in clang's test
suite that contain unusual Unicode on which the python script threw
exceptions. None of those files needed to be migrated, so it seemed
sufficient to ignore those cases.
Reviewers: rafael, dexonsmith, grosser
Differential Revision: http://reviews.llvm.org/D7636
llvm-svn: 230786
Diffstat (limited to 'llvm/test/Analysis/ScalarEvolution')
31 files changed, 115 insertions, 115 deletions
diff --git a/llvm/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll b/llvm/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll
index b5eb9fc4878..7380da3ae7f 100644
--- a/llvm/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll
+++ b/llvm/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll
@@ -11,7 +11,7 @@ entry:
 bb: ; preds = %bb, %entry
  %i.01.0 = phi i32 [ 100, %entry ], [ %tmp4, %bb ] ; <i32> [#uses=2]
- %tmp1 = getelementptr [101 x i32]* @array, i32 0, i32 %i.01.0 ; <i32*> [#uses=1]
+ %tmp1 = getelementptr [101 x i32], [101 x i32]* @array, i32 0, i32 %i.01.0 ; <i32*> [#uses=1]
  store i32 %x, i32* %tmp1
  %tmp4 = add i32 %i.01.0, -1 ; <i32> [#uses=2]
  %tmp7 = icmp sgt i32 %tmp4, -1 ; <i1> [#uses=1]
diff --git a/llvm/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll b/llvm/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll
index dcf8fc9dbdb..6896e7a4728 100644
--- a/llvm/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll
+++ b/llvm/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll
@@ -19,7 +19,7 @@ bb: ; preds = %bb1, %bb.nph
  load i32* %srcptr, align 4 ; <i32>:1 [#uses=2]
  and i32 %1, 255 ; <i32>:2 [#uses=1]
  and i32 %1, -256 ; <i32>:3 [#uses=1]
- getelementptr [256 x i8]* @lut, i32 0, i32 %2 ; <i8*>:4 [#uses=1]
+ getelementptr [256 x i8], [256 x i8]* @lut, i32 0, i32 %2 ; <i8*>:4 [#uses=1]
  load i8* %4, align 1 ; <i8>:5 [#uses=1]
  zext i8 %5 to i32 ; <i32>:6 [#uses=1]
  or i32 %6, %3 ; <i32>:7 [#uses=1]
diff --git a/llvm/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll b/llvm/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll
index 7a7a64001a6..1d4a27ccc86 100644
--- a/llvm/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll
+++ b/llvm/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll
@@ -9,9 +9,9 @@ bb1.thread:
 bb1: ; preds = %bb1, %bb1.thread
  %indvar = phi i32 [ 0, %bb1.thread ], [ %indvar.next, %bb1 ] ; <i32> [#uses=4]
  %i.0.reg2mem.0 = sub i32 255, %indvar ; <i32> [#uses=2]
- %0 = getelementptr i32* %alp, i32 %i.0.reg2mem.0 ; <i32*> [#uses=1]
+ %0 = getelementptr i32, i32* %alp, i32 %i.0.reg2mem.0 ; <i32*> [#uses=1]
  %1 = load i32* %0, align 4 ; <i32> [#uses=1]
- %2 = getelementptr i32* %lam, i32 %i.0.reg2mem.0 ; <i32*> [#uses=1]
+ %2 = getelementptr i32, i32* %lam, i32 %i.0.reg2mem.0 ; <i32*> [#uses=1]
  store i32 %1, i32* %2, align 4
  %3 = sub i32 254, %indvar ; <i32> [#uses=1]
  %4 = icmp slt i32 %3, 0 ; <i1> [#uses=1]
diff --git a/llvm/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll b/llvm/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll
index 5d1502da179..4f6b90b39f6 100644
--- a/llvm/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll
+++ b/llvm/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll
@@ -11,18 +11,18 @@ target datalayout = "E-p:64:64:64-a0:0:8-f32:32:32-f64:64:64-i1:8:8-i8:8:8-i16:1
 define void @_Z3foov() nounwind {
 entry:
  %x = alloca %struct.NonPod, align 8 ; <%struct.NonPod*> [#uses=2]
- %0 = getelementptr %struct.NonPod* %x, i32 0, i32 0 ; <[2 x %struct.Foo]*> [#uses=1]
- %1 = getelementptr [2 x %struct.Foo]* %0, i32 1, i32 0 ; <%struct.Foo*> [#uses=1]
+ %0 = getelementptr %struct.NonPod, %struct.NonPod* %x, i32 0, i32 0 ; <[2 x %struct.Foo]*> [#uses=1]
+ %1 = getelementptr [2 x %struct.Foo], [2 x %struct.Foo]* %0, i32 1, i32 0 ; <%struct.Foo*> [#uses=1]
  br label %bb1.i
 bb1.i: ; preds = %bb2.i, %entry
  %.0.i = phi %struct.Foo* [ %1, %entry ], [ %4, %bb2.i ] ; <%struct.Foo*> [#uses=2]
- %2 = getelementptr %struct.NonPod* %x, i32 0, i32 0, i32 0 ; <%struct.Foo*> [#uses=1]
+ %2 = getelementptr %struct.NonPod, %struct.NonPod* %x, i32 0, i32 0, i32 0 ; <%struct.Foo*> [#uses=1]
  %3 = icmp eq %struct.Foo* %.0.i, %2 ; <i1> [#uses=1]
  br i1 %3, label %_ZN6NonPodD1Ev.exit, label %bb2.i
 bb2.i: ; preds = %bb1.i
- %4 = getelementptr %struct.Foo* %.0.i, i32 -1 ; <%struct.Foo*> [#uses=1]
+ %4 = getelementptr %struct.Foo, %struct.Foo* %.0.i, i32 -1 ; <%struct.Foo*> [#uses=1]
  br label %bb1.i
 _ZN6NonPodD1Ev.exit: ; preds = %bb1.i
diff --git a/llvm/test/Analysis/ScalarEvolution/2012-03-26-LoadConstant.ll b/llvm/test/Analysis/ScalarEvolution/2012-03-26-LoadConstant.ll
index 5746d1c5900..8c6c9b6d1eb 100644
--- a/llvm/test/Analysis/ScalarEvolution/2012-03-26-LoadConstant.ll
+++ b/llvm/test/Analysis/ScalarEvolution/2012-03-26-LoadConstant.ll
@@ -25,7 +25,7 @@ for.cond: ; preds = %for.body, %lbl_818
 for.body: ; preds = %for.cond
  %idxprom = sext i32 %0 to i64
- %arrayidx = getelementptr inbounds [0 x i32]* getelementptr inbounds ([1 x [0 x i32]]* @g_244, i32 0, i64 0), i32 0, i64 %idxprom
+ %arrayidx = getelementptr inbounds [0 x i32], [0 x i32]* getelementptr inbounds ([1 x [0 x i32]]* @g_244, i32 0, i64 0), i32 0, i64 %idxprom
  %1 = load i32* %arrayidx, align 1
  store i32 %1, i32* @func_21_l_773, align 4
  store i32 1, i32* @g_814, align 4
diff --git a/llvm/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll b/llvm/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll
index 2cb8c5bf46f..f7ef0ea9e48 100644
--- a/llvm/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll
+++ b/llvm/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll
@@ -10,7 +10,7 @@ entry:
  br label %bb3
 bb: ; preds = %bb3
- %tmp = getelementptr [1000 x i32]* @A, i32 0, i32 %i.0 ; <i32*> [#uses=1]
+ %tmp = getelementptr [1000 x i32], [1000 x i32]* @A, i32 0, i32 %i.0 ; <i32*> [#uses=1]
  store i32 123, i32* %tmp
  %tmp2 = add i32 %i.0, 1 ; <i32> [#uses=1]
  br label %bb3
diff --git a/llvm/test/Analysis/ScalarEvolution/avoid-smax-0.ll b/llvm/test/Analysis/ScalarEvolution/avoid-smax-0.ll
index 8abb43074c5..e921544f9b4 100644
--- a/llvm/test/Analysis/ScalarEvolution/avoid-smax-0.ll
+++ b/llvm/test/Analysis/ScalarEvolution/avoid-smax-0.ll
@@ -20,10 +20,10 @@ bb3.preheader:
 bb3:
  %i.0 = phi i32 [ %7, %bb3 ], [ 0, %bb3.preheader ]
- getelementptr i32* %p, i32 %i.0
+ getelementptr i32, i32* %p, i32 %i.0
  load i32* %3, align 4
  add i32 %4, 1
- getelementptr i32* %p, i32 %i.0
+ getelementptr i32, i32* %p, i32 %i.0
  store i32 %5, i32* %6, align 4
  add i32 %i.0, 1
  icmp slt i32 %7, %n
diff --git a/llvm/test/Analysis/ScalarEvolution/avoid-smax-1.ll b/llvm/test/Analysis/ScalarEvolution/avoid-smax-1.ll
index d9b83a929aa..685a106c296 100644
--- a/llvm/test/Analysis/ScalarEvolution/avoid-smax-1.ll
+++ b/llvm/test/Analysis/ScalarEvolution/avoid-smax-1.ll
@@ -35,9 +35,9 @@ bb6: ; preds = %bb7, %bb.nph7
  %7 = add i32 %x.06, %4 ; <i32> [#uses=1]
  %8 = shl i32 %x.06, 1 ; <i32> [#uses=1]
  %9 = add i32 %6, %8 ; <i32> [#uses=1]
- %10 = getelementptr i8* %r, i32 %9 ; <i8*> [#uses=1]
+ %10 = getelementptr i8, i8* %r, i32 %9 ; <i8*> [#uses=1]
  %11 = load i8* %10, align 1 ; <i8> [#uses=1]
- %12 = getelementptr i8* %j, i32 %7 ; <i8*> [#uses=1]
+ %12 = getelementptr i8, i8* %j, i32 %7 ; <i8*> [#uses=1]
  store i8 %11, i8* %12, align 1
  %13 = add i32 %x.06, 1 ; <i32> [#uses=2]
  br label %bb7
@@ -102,18 +102,18 @@ bb14: ; preds = %bb15, %bb.nph3
  %x.12 = phi i32 [ %40, %bb15 ], [ 0, %bb.nph3 ] ; <i32> [#uses=5]
  %29 = shl i32 %x.12, 2 ; <i32> [#uses=1]
  %30 = add i32 %29, %25 ; <i32> [#uses=1]
- %31 = getelementptr i8* %r, i32 %30 ; <i8*> [#uses=1]
+ %31 = getelementptr i8, i8* %r, i32 %30 ; <i8*> [#uses=1]
  %32 = load i8* %31, align 1 ; <i8> [#uses=1]
  %.sum = add i32 %26, %x.12 ; <i32> [#uses=1]
- %33 = getelementptr i8* %j, i32 %.sum ; <i8*> [#uses=1]
+ %33 = getelementptr i8, i8* %j, i32 %.sum ; <i8*> [#uses=1]
  store i8 %32, i8* %33, align 1
  %34 = shl i32 %x.12, 2 ; <i32> [#uses=1]
  %35 = or i32 %34, 2 ; <i32> [#uses=1]
  %36 = add i32 %35, %25 ; <i32> [#uses=1]
- %37 = getelementptr i8* %r, i32 %36 ; <i8*> [#uses=1]
+ %37 = getelementptr i8, i8* %r, i32 %36 ; <i8*> [#uses=1]
  %38 = load i8* %37, align 1 ; <i8> [#uses=1]
  %.sum6 = add i32 %27, %x.12 ; <i32> [#uses=1]
- %39 = getelementptr i8* %j, i32 %.sum6 ; <i8*> [#uses=1]
+ %39 = getelementptr i8, i8* %j, i32 %.sum6 ; <i8*> [#uses=1]
  store i8 %38, i8* %39, align 1
  %40 = add i32 %x.12, 1 ; <i32> [#uses=2]
  br label %bb15
@@ -168,10 +168,10 @@ bb23: ; preds = %bb24, %bb.nph
  %y.21 = phi i32 [ %57, %bb24 ], [ 0, %bb.nph ] ; <i32> [#uses=3]
  %53 = mul i32 %y.21, %50 ; <i32> [#uses=1]
  %.sum1 = add i32 %53, %51 ; <i32> [#uses=1]
- %54 = getelementptr i8* %r, i32 %.sum1 ; <i8*> [#uses=1]
+ %54 = getelementptr i8, i8* %r, i32 %.sum1 ; <i8*> [#uses=1]
  %55 = mul i32 %y.21, %w ; <i32> [#uses=1]
  %.sum5 = add i32 %55, %.sum3 ; <i32> [#uses=1]
- %56 = getelementptr i8* %j, i32 %.sum5 ; <i8*> [#uses=1]
+ %56 = getelementptr i8, i8* %j, i32 %.sum5 ; <i8*> [#uses=1]
  tail call void @llvm.memcpy.p0i8.p0i8.i32(i8* %56, i8* %54, i32 %w, i32 1, i1 false)
  %57 = add i32 %y.21, 1 ; <i32> [#uses=2]
  br label %bb24
@@ -186,7 +186,7 @@ bb24.bb26_crit_edge: ; preds = %bb24
 bb26: ; preds = %bb24.bb26_crit_edge, %bb22
  %59 = mul i32 %x, %w ; <i32> [#uses=1]
  %.sum4 = add i32 %.sum3, %59 ; <i32> [#uses=1]
- %60 = getelementptr i8* %j, i32 %.sum4 ; <i8*> [#uses=1]
+ %60 = getelementptr i8, i8* %j, i32 %.sum4 ; <i8*> [#uses=1]
  %61 = mul i32 %x, %w ; <i32> [#uses=1]
  %62 = sdiv i32 %61, 2 ; <i32> [#uses=1]
  tail call void @llvm.memset.p0i8.i32(i8* %60, i8 -128, i32 %62, i32 1, i1 false)
@@ -204,9 +204,9 @@ bb.nph11: ; preds = %bb29
 bb30: ; preds = %bb31, %bb.nph11
  %y.310 = phi i32 [ %70, %bb31 ], [ 0, %bb.nph11 ] ; <i32> [#uses=3]
  %66 = mul i32 %y.310, %64 ; <i32> [#uses=1]
- %67 = getelementptr i8* %r, i32 %66 ; <i8*> [#uses=1]
+ %67 = getelementptr i8, i8* %r, i32 %66 ; <i8*> [#uses=1]
  %68 = mul i32 %y.310, %w ; <i32> [#uses=1]
- %69 = getelementptr i8* %j, i32 %68 ; <i8*> [#uses=1]
+ %69 = getelementptr i8, i8* %j, i32 %68 ; <i8*> [#uses=1]
  tail call void @llvm.memcpy.p0i8.p0i8.i32(i8* %69, i8* %67, i32 %w, i32 1, i1 false)
  %70 = add i32 %y.310, 1 ; <i32> [#uses=2]
  br label %bb31
@@ -220,7 +220,7 @@ bb31.bb33_crit_edge: ; preds = %bb31
 bb33: ; preds = %bb31.bb33_crit_edge, %bb29
  %72 = mul i32 %x, %w ; <i32> [#uses=1]
- %73 = getelementptr i8* %j, i32 %72 ; <i8*> [#uses=1]
+ %73 = getelementptr i8, i8* %j, i32 %72 ; <i8*> [#uses=1]
  %74 = mul i32 %x, %w ; <i32> [#uses=1]
  %75 = sdiv i32 %74, 2 ; <i32> [#uses=1]
  tail call void @llvm.memset.p0i8.i32(i8* %73, i8 -128, i32 %75, i32 1, i1 false)
diff --git a/llvm/test/Analysis/ScalarEvolution/load.ll b/llvm/test/Analysis/ScalarEvolution/load.ll
index 2c753f5befc..8b460a806cb 100644
--- a/llvm/test/Analysis/ScalarEvolution/load.ll
+++ b/llvm/test/Analysis/ScalarEvolution/load.ll
@@ -16,10 +16,10 @@ for.body: ; preds = %entry, %for.body
  %sum.04 = phi i32 [ 0, %entry ], [ %add2, %for.body ]
 ; CHECK: --> %sum.04{{ *}}Exits: 2450
  %i.03 = phi i32 [ 0, %entry ], [ %inc, %for.body ]
- %arrayidx = getelementptr inbounds [50 x i32]* @arr1, i32 0, i32 %i.03
+ %arrayidx = getelementptr inbounds [50 x i32], [50 x i32]* @arr1, i32 0, i32 %i.03
  %0 = load i32* %arrayidx, align 4
 ; CHECK: --> %0{{ *}}Exits: 50
- %arrayidx1 = getelementptr inbounds [50 x i32]* @arr2, i32 0, i32 %i.03
+ %arrayidx1 = getelementptr inbounds [50 x i32], [50 x i32]* @arr2, i32 0, i32 %i.03
  %1 = load i32* %arrayidx1, align 4
 ; CHECK: --> %1{{ *}}Exits: 0
  %add = add i32 %0, %sum.04
@@ -51,10 +51,10 @@ for.body: ; preds = %entry, %for.body
 ; CHECK: --> %sum.02{{ *}}Exits: 10
  %n.01 = phi %struct.ListNode* [ bitcast ({ %struct.ListNode*, i32, [4 x i8] }* @node5 to %struct.ListNode*), %entry ], [ %1, %for.body ]
 ; CHECK: --> %n.01{{ *}}Exits: @node1
- %i = getelementptr inbounds %struct.ListNode* %n.01, i64 0, i32 1
+ %i = getelementptr inbounds %struct.ListNode, %struct.ListNode* %n.01, i64 0, i32 1
  %0 = load i32* %i, align 4
  %add = add nsw i32 %0, %sum.02
- %next = getelementptr inbounds %struct.ListNode* %n.01, i64 0, i32 0
+ %next = getelementptr inbounds %struct.ListNode, %struct.ListNode* %n.01, i64 0, i32 0
  %1 = load %struct.ListNode** %next, align 8
 ; CHECK: --> %1{{ *}}Exits: 0
  %cmp = icmp eq %struct.ListNode* %1, null
diff --git a/llvm/test/Analysis/ScalarEvolution/max-trip-count-address-space.ll b/llvm/test/Analysis/ScalarEvolution/max-trip-count-address-space.ll
index aa5254c758b..1bdb6f2ec45 100644
--- a/llvm/test/Analysis/ScalarEvolution/max-trip-count-address-space.ll
+++ b/llvm/test/Analysis/ScalarEvolution/max-trip-count-address-space.ll
@@ -21,7 +21,7 @@ bb: ; preds = %bb1, %bb.nph
  %p.01 = phi i8 [ %4, %bb1 ], [ -1, %bb.nph ] ; <i8> [#uses=2]
  %1 = sext i8 %p.01 to i32 ; <i32> [#uses=1]
  %2 = sext i32 %i.02 to i64 ; <i64> [#uses=1]
- %3 = getelementptr i32 addrspace(1)* %d, i64 %2 ; <i32*> [#uses=1]
+ %3 = getelementptr i32, i32 addrspace(1)* %d, i64 %2 ; <i32*> [#uses=1]
  store i32 %1, i32 addrspace(1)* %3, align 4
  %4 = add i8 %p.01, 1 ; <i8> [#uses=1]
  %5 = add i32 %i.02, 1 ; <i32> [#uses=2]
@@ -50,7 +50,7 @@ for.body.lr.ph: ; preds = %entry
 for.body: ; preds = %for.body, %for.body.lr.ph
  %indvar = phi i64 [ %indvar.next, %for.body ], [ 0, %for.body.lr.ph ]
- %arrayidx = getelementptr i8 addrspace(1)* %a, i64 %indvar
+ %arrayidx = getelementptr i8, i8 addrspace(1)* %a, i64 %indvar
  store i8 0, i8 addrspace(1)* %arrayidx, align 1
  %indvar.next = add i64 %indvar, 1
  %exitcond = icmp ne i64 %indvar.next, %tmp
diff --git a/llvm/test/Analysis/ScalarEvolution/max-trip-count.ll b/llvm/test/Analysis/ScalarEvolution/max-trip-count.ll
index 31f06a46ad0..4faedde8757 100644
--- a/llvm/test/Analysis/ScalarEvolution/max-trip-count.ll
+++ b/llvm/test/Analysis/ScalarEvolution/max-trip-count.ll
@@ -17,7 +17,7 @@ bb: ; preds = %bb1, %bb.nph
  %p.01 = phi i8 [ %4, %bb1 ], [ -1, %bb.nph ] ; <i8> [#uses=2]
  %1 = sext i8 %p.01 to i32 ; <i32> [#uses=1]
  %2 = sext i32 %i.02 to i64 ; <i64> [#uses=1]
- %3 = getelementptr i32* %d, i64 %2 ; <i32*> [#uses=1]
+ %3 = getelementptr i32, i32* %d, i64 %2 ; <i32*> [#uses=1]
  store i32 %1, i32* %3, align 4
  %4 = add i8 %p.01, 1 ; <i8> [#uses=1]
  %5 = add i32 %i.02, 1 ; <i32> [#uses=2]
@@ -82,7 +82,7 @@ for.body.lr.ph: ; preds = %entry
 for.body: ; preds = %for.body, %for.body.lr.ph
  %indvar = phi i64 [ %indvar.next, %for.body ], [ 0, %for.body.lr.ph ]
- %arrayidx = getelementptr i8* %a, i64 %indvar
+ %arrayidx = getelementptr i8, i8* %a, i64 %indvar
  store i8 0, i8* %arrayidx, align 1
  %indvar.next = add i64 %indvar, 1
  %exitcond = icmp ne i64 %indvar.next, %tmp
diff --git a/llvm/test/Analysis/ScalarEvolution/min-max-exprs.ll b/llvm/test/Analysis/ScalarEvolution/min-max-exprs.ll
index 3e0a35dd829..b9ede6f7e44 100644
--- a/llvm/test/Analysis/ScalarEvolution/min-max-exprs.ll
+++ b/llvm/test/Analysis/ScalarEvolution/min-max-exprs.ll
@@ -34,7 +34,7 @@ bb2: ; preds = %bb1
 ; min(N, i+3)
 ; CHECK: select i1 %tmp4, i64 %tmp5, i64 %tmp6
 ; CHECK-NEXT: --> (-1 + (-1 * ((-1 + (-1 * (sext i32 {3,+,1}<nw><%bb1> to i64))) smax (-1 + (-1 * (sext i32 %N to i64))))))
- %tmp11 = getelementptr inbounds i32* %A, i64 %tmp9
+ %tmp11 = getelementptr inbounds i32, i32* %A, i64 %tmp9
  %tmp12 = load i32* %tmp11, align 4
  %tmp13 = shl nsw i32 %tmp12, 1
  %tmp14 = icmp sge i32 3, %i.0
@@ -43,7 +43,7 @@ bb2: ; preds = %bb1
 ; max(0, i - 3)
 ; CHECK: select i1 %tmp14, i64 0, i64 %tmp17
 ; CHECK-NEXT: --> (-3 + (3 smax {0,+,1}<nuw><nsw><%bb1>))
- %tmp21 = getelementptr inbounds i32* %A, i64 %tmp19
+ %tmp21 = getelementptr inbounds i32, i32* %A, i64 %tmp19
  store i32 %tmp13, i32* %tmp21, align 4
  %tmp23 = add nuw nsw i32 %i.0, 1
  br label %bb1
diff --git a/llvm/test/Analysis/ScalarEvolution/nsw-offset-assume.ll b/llvm/test/Analysis/ScalarEvolution/nsw-offset-assume.ll
index 29cf6585779..246f9ad1abc 100644
--- a/llvm/test/Analysis/ScalarEvolution/nsw-offset-assume.ll
+++ b/llvm/test/Analysis/ScalarEvolution/nsw-offset-assume.ll
@@ -24,13 +24,13 @@ bb: ; preds = %bb.nph, %bb1
 ; CHECK: --> {0,+,2}<nuw><nsw><%bb>
  %1 = sext i32 %i.01 to i64 ; <i64> [#uses=1]
-; CHECK: %2 = getelementptr inbounds double* %d, i64 %1
+; CHECK: %2 = getelementptr inbounds double, double* %d, i64 %1
 ; CHECK: --> {%d,+,16}<nsw><%bb>
- %2 = getelementptr inbounds double* %d, i64 %1 ; <double*> [#uses=1]
+ %2 = getelementptr inbounds double, double* %d, i64 %1 ; <double*> [#uses=1]
  %3 = load double* %2, align 8 ; <double> [#uses=1]
  %4 = sext i32 %i.01 to i64 ; <i64> [#uses=1]
- %5 = getelementptr inbounds double* %q, i64 %4 ; <double*> [#uses=1]
+ %5 = getelementptr inbounds double, double* %q, i64 %4 ; <double*> [#uses=1]
  %6 = load double* %5, align 8 ; <double> [#uses=1]
  %7 = or i32 %i.01, 1 ; <i32> [#uses=1]
@@ -38,9 +38,9 @@ bb: ; preds = %bb.nph, %bb1
 ; CHECK: --> {1,+,2}<nuw><nsw><%bb>
  %8 = sext i32 %7 to i64 ; <i64> [#uses=1]
-; CHECK: %9 = getelementptr inbounds double* %q, i64 %8
+; CHECK: %9 = getelementptr inbounds double, double* %q, i64 %8
 ; CHECK: {(8 + %q),+,16}<nsw><%bb>
- %9 = getelementptr inbounds double* %q, i64 %8 ; <double*> [#uses=1]
+ %9 = getelementptr inbounds double, double* %q, i64 %8 ; <double*> [#uses=1]
 ; Artificially repeat the above three instructions, this time using
 ; add nsw instead of or.
@@ -50,16 +50,16 @@ bb: ; preds = %bb.nph, %bb1
 ; CHECK: --> {1,+,2}<nuw><nsw><%bb>
  %t8 = sext i32 %t7 to i64 ; <i64> [#uses=1]
-; CHECK: %t9 = getelementptr inbounds double* %q, i64 %t8
+; CHECK: %t9 = getelementptr inbounds double, double* %q, i64 %t8
 ; CHECK: {(8 + %q),+,16}<nsw><%bb>
- %t9 = getelementptr inbounds double* %q, i64 %t8 ; <double*> [#uses=1]
+ %t9 = getelementptr inbounds double, double* %q, i64 %t8 ; <double*> [#uses=1]
  %10 = load double* %9, align 8 ; <double> [#uses=1]
  %11 = fadd double %6, %10 ; <double> [#uses=1]
  %12 = fadd double %11, 3.200000e+00 ; <double> [#uses=1]
  %13 = fmul double %3, %12 ; <double> [#uses=1]
  %14 = sext i32 %i.01 to i64 ; <i64> [#uses=1]
- %15 = getelementptr inbounds double* %d, i64 %14 ; <double*> [#uses=1]
+ %15 = getelementptr inbounds double, double* %d, i64 %14 ; <double*> [#uses=1]
  store double %13, double* %15, align 8
  %16 = add nsw i32 %i.01, 2 ; <i32> [#uses=2]
  br label %bb1
diff --git a/llvm/test/Analysis/ScalarEvolution/nsw-offset.ll b/llvm/test/Analysis/ScalarEvolution/nsw-offset.ll
index 88cdcf23d9e..7b8de519429 100644
--- a/llvm/test/Analysis/ScalarEvolution/nsw-offset.ll
+++ b/llvm/test/Analysis/ScalarEvolution/nsw-offset.ll
@@ -22,13 +22,13 @@ bb: ; preds = %bb.nph, %bb1
 ; CHECK: --> {0,+,2}<nuw><nsw><%bb>
  %1 = sext i32 %i.01 to i64 ; <i64> [#uses=1]
-; CHECK: %2 = getelementptr inbounds double* %d, i64 %1
+; CHECK: %2 = getelementptr inbounds double, double* %d, i64 %1
 ; CHECK: --> {%d,+,16}<nsw><%bb>
- %2 = getelementptr inbounds double* %d, i64 %1 ; <double*> [#uses=1]
+ %2 = getelementptr inbounds double, double* %d, i64 %1 ; <double*> [#uses=1]
  %3 = load double* %2, align 8 ; <double> [#uses=1]
  %4 = sext i32 %i.01 to i64 ; <i64> [#uses=1]
- %5 = getelementptr inbounds double* %q, i64 %4 ; <double*> [#uses=1]
+ %5 = getelementptr inbounds double, double* %q, i64 %4 ; <double*> [#uses=1]
  %6 = load double* %5, align 8 ; <double> [#uses=1]
  %7 = or i32 %i.01, 1 ; <i32> [#uses=1]
@@ -36,9 +36,9 @@ bb: ; preds = %bb.nph, %bb1
 ; CHECK: --> {1,+,2}<nuw><nsw><%bb>
  %8 = sext i32 %7 to i64 ; <i64> [#uses=1]
-; CHECK: %9 = getelementptr inbounds double* %q, i64 %8
+; CHECK: %9 = getelementptr inbounds double, double* %q, i64 %8
 ; CHECK: {(8 + %q),+,16}<nsw><%bb>
- %9 = getelementptr inbounds double* %q, i64 %8 ; <double*> [#uses=1]
+ %9 = getelementptr inbounds double, double* %q, i64 %8 ; <double*> [#uses=1]
 ; Artificially repeat the above three instructions, this time using
 ; add nsw instead of or.
@@ -48,16 +48,16 @@ bb: ; preds = %bb.nph, %bb1
 ; CHECK: --> {1,+,2}<nuw><nsw><%bb>
  %t8 = sext i32 %t7 to i64 ; <i64> [#uses=1]
-; CHECK: %t9 = getelementptr inbounds double* %q, i64 %t8
+; CHECK: %t9 = getelementptr inbounds double, double* %q, i64 %t8
 ; CHECK: {(8 + %q),+,16}<nsw><%bb>
- %t9 = getelementptr inbounds double* %q, i64 %t8 ; <double*> [#uses=1]
+ %t9 = getelementptr inbounds double, double* %q, i64 %t8 ; <double*> [#uses=1]
  %10 = load double* %9, align 8 ; <double> [#uses=1]
  %11 = fadd double %6, %10 ; <double> [#uses=1]
  %12 = fadd double %11, 3.200000e+00 ; <double> [#uses=1]
  %13 = fmul double %3, %12 ; <double> [#uses=1]
  %14 = sext i32 %i.01 to i64 ; <i64> [#uses=1]
- %15 = getelementptr inbounds double* %d, i64 %14 ; <double*> [#uses=1]
+ %15 = getelementptr inbounds double, double* %d, i64 %14 ; <double*> [#uses=1]
  store double %13, double* %15, align 8
  %16 = add nsw i32 %i.01, 2 ; <i32> [#uses=2]
  br label %bb1
diff --git a/llvm/test/Analysis/ScalarEvolution/nsw.ll b/llvm/test/Analysis/ScalarEvolution/nsw.ll
index d776a5a5da7..024b2804c06 100644
--- a/llvm/test/Analysis/ScalarEvolution/nsw.ll
+++ b/llvm/test/Analysis/ScalarEvolution/nsw.ll
@@ -19,11 +19,11 @@ bb: ; preds = %bb1, %bb.nph
 ; CHECK: %i.01
 ; CHECK-NEXT: --> {0,+,1}<nuw><nsw><%bb>
  %tmp2 = sext i32 %i.01 to i64 ; <i64> [#uses=1]
- %tmp3 = getelementptr double* %p, i64 %tmp2 ; <double*> [#uses=1]
+ %tmp3 = getelementptr double, double* %p, i64 %tmp2 ; <double*> [#uses=1]
  %tmp4 = load double* %tmp3, align 8 ; <double> [#uses=1]
  %tmp5 = fmul double %tmp4, 9.200000e+00 ; <double> [#uses=1]
  %tmp6 = sext i32 %i.01 to i64 ; <i64> [#uses=1]
- %tmp7 = getelementptr double* %p, i64 %tmp6 ; <double*> [#uses=1]
+ %tmp7 = getelementptr double, double* %p, i64 %tmp6 ; <double*> [#uses=1]
 ; CHECK: %tmp7
 ; CHECK-NEXT: --> {%p,+,8}<%bb>
  store double %tmp5, double* %tmp7, align 8
@@ -36,7 +36,7 @@ bb1: ; preds = %bb
  %phitmp = sext i32 %tmp8 to i64 ; <i64> [#uses=1]
 ; CHECK: %phitmp
 ; CHECK-NEXT: --> {1,+,1}<nuw><nsw><%bb>
- %tmp9 = getelementptr double* %p, i64 %phitmp ; <double*> [#uses=1]
+ %tmp9 = getelementptr double, double* %p, i64 %phitmp ; <double*> [#uses=1]
 ; CHECK: %tmp9
 ; CHECK-NEXT: --> {(8 + %p),+,8}<%bb>
  %tmp10 = load double* %tmp9, align 8 ; <double> [#uses=1]
@@ -64,7 +64,7 @@ for.body.i.i: ; preds = %for.body.i.i, %for.
 ; CHECK: %__first.addr.02.i.i
 ; CHECK-NEXT: --> {%begin,+,4}<nuw><%for.body.i.i>
  store i32 0, i32* %__first.addr.02.i.i, align 4
- %ptrincdec.i.i = getelementptr inbounds i32* %__first.addr.02.i.i, i64 1
+ %ptrincdec.i.i = getelementptr inbounds i32, i32* %__first.addr.02.i.i, i64 1
 ; CHECK: %ptrincdec.i.i
 ; CHECK-NEXT: --> {(4 + %begin),+,4}<nuw><%for.body.i.i>
  %cmp.i.i = icmp eq i32* %ptrincdec.i.i, %end
@@ -90,10 +90,10 @@ for.body.i.i: ; preds = %entry, %for.body.i.
  %tmp = add nsw i64 %indvar.i.i, 1
 ; CHECK: %tmp =
 ; CHECK: {1,+,1}<nuw><nsw><%for.body.i.i>
- %ptrincdec.i.i = getelementptr inbounds i32* %begin, i64 %tmp
+ %ptrincdec.i.i = getelementptr inbounds i32, i32* %begin, i64 %tmp
 ; CHECK: %ptrincdec.i.i =
 ; CHECK: {(4 + %begin),+,4}<nsw><%for.body.i.i>
- %__first.addr.08.i.i = getelementptr inbounds i32* %begin, i64 %indvar.i.i
+ %__first.addr.08.i.i = getelementptr inbounds i32, i32* %begin, i64 %indvar.i.i
 ; CHECK: %__first.addr.08.i.i
 ; CHECK: {%begin,+,4}<nsw><%for.body.i.i>
  store i32 0, i32* %__first.addr.08.i.i, align 4
@@ -127,14 +127,14 @@ exit:
 ; CHECK: --> {(4 + %arg),+,4}<nuw><%bb1> Exits: (8 + %arg)<nsw>
 define i32 @PR12375(i32* readnone %arg) {
 bb:
- %tmp = getelementptr inbounds i32* %arg, i64 2
+ %tmp = getelementptr inbounds i32, i32* %arg, i64 2
  br label %bb1
 bb1: ; preds = %bb1, %bb
  %tmp2 = phi i32* [ %arg, %bb ], [ %tmp5, %bb1 ]
  %tmp3 = phi i32 [ 0, %bb ], [ %tmp4, %bb1 ]
  %tmp4 = add nsw i32 %tmp3, 1
- %tmp5 = getelementptr inbounds i32* %tmp2, i64 1
+ %tmp5 = getelementptr inbounds i32, i32* %tmp2, i64 1
  %tmp6 = icmp ult i32* %tmp5, %tmp
  br i1 %tmp6, label %bb1, label %bb7
@@ -151,7 +151,7 @@ bb:
 bb2: ; preds = %bb2, %bb
  %tmp = phi i32* [ %arg, %bb ], [ %tmp4, %bb2 ]
  %tmp3 = icmp ult i32* %tmp, %arg1
- %tmp4 = getelementptr inbounds i32* %tmp, i64 1
+ %tmp4 = getelementptr inbounds i32, i32* %tmp, i64 1
  br i1 %tmp3, label %bb2, label %bb5
 bb5: ; preds = %bb2
diff --git a/llvm/test/Analysis/ScalarEvolution/pr22674.ll b/llvm/test/Analysis/ScalarEvolution/pr22674.ll
index 7defcb97754..6b7a143f11e 100644
--- a/llvm/test/Analysis/ScalarEvolution/pr22674.ll
+++ b/llvm/test/Analysis/ScalarEvolution/pr22674.ll
@@ -44,11 +44,11 @@ cond.false: ; preds = %for.end, %for.inc,
  unreachable
 _ZNK4llvm12AttributeSet3endEj.exit: ; preds = %for.end
- %second.i.i.i = getelementptr inbounds %"struct.std::pair.241.2040.3839.6152.6923.7694.8465.9493.10007.10264.18507"* undef, i32 %I.099.lcssa129, i32 1
+ %second.i.i.i = getelementptr inbounds %"struct.std::pair.241.2040.3839.6152.6923.7694.8465.9493.10007.10264.18507", %"struct.std::pair.241.2040.3839.6152.6923.7694.8465.9493.10007.10264.18507"* undef, i32 %I.099.lcssa129, i32 1
  %0 = load %"class.llvm::AttributeSetNode.230.2029.3828.6141.6912.7683.8454.9482.9996.10253.18506"** %second.i.i.i, align 4, !tbaa !2
- %NumAttrs.i.i.i = getelementptr inbounds %"class.llvm::AttributeSetNode.230.2029.3828.6141.6912.7683.8454.9482.9996.10253.18506"* %0, i32 0, i32 1
+ %NumAttrs.i.i.i = getelementptr inbounds %"class.llvm::AttributeSetNode.230.2029.3828.6141.6912.7683.8454.9482.9996.10253.18506", %"class.llvm::AttributeSetNode.230.2029.3828.6141.6912.7683.8454.9482.9996.10253.18506"* %0, i32 0, i32 1
  %1 = load i32* %NumAttrs.i.i.i, align 4, !tbaa !8
- %add.ptr.i.i.i55 = getelementptr inbounds %"class.llvm::Attribute.222.2021.3820.6133.6904.7675.8446.9474.9988.10245.18509"* undef, i32 %1
+ %add.ptr.i.i.i55 = getelementptr inbounds %"class.llvm::Attribute.222.2021.3820.6133.6904.7675.8446.9474.9988.10245.18509", %"class.llvm::Attribute.222.2021.3820.6133.6904.7675.8446.9474.9988.10245.18509"* undef, i32 %1
  br i1 undef, label %return, label %for.body11
 for.cond9: ; preds = %_ZNK4llvm9Attribute13getKindAsEnumEv.exit
@@ -70,7 +70,7 @@ _ZNK4llvm9Attribute15isEnumAttributeEv.exit: ; preds = %for.body11
  ]
 _ZNK4llvm9Attribute13getKindAsEnumEv.exit: ; preds = %_ZNK4llvm9Attribute15isEnumAttributeEv.exit, %_ZNK4llvm9Attribute15isEnumAttributeEv.exit
- %incdec.ptr = getelementptr inbounds %"class.llvm::Attribute.222.2021.3820.6133.6904.7675.8446.9474.9988.10245.18509"* %I5.096, i32 1
+ %incdec.ptr = getelementptr inbounds %"class.llvm::Attribute.222.2021.3820.6133.6904.7675.8446.9474.9988.10245.18509", %"class.llvm::Attribute.222.2021.3820.6133.6904.7675.8446.9474.9988.10245.18509"* %I5.096, i32 1
  br i1 undef, label %for.cond9, label %return
 cond.false21: ; preds = %_ZNK4llvm9Attribute15isEnumAttributeEv.exit, %for.body11
diff --git a/llvm/test/Analysis/ScalarEvolution/scev-aa.ll b/llvm/test/Analysis/ScalarEvolution/scev-aa.ll
index a0abbb787b0..9a3b9cd228e 100644
--- a/llvm/test/Analysis/ScalarEvolution/scev-aa.ll
+++ b/llvm/test/Analysis/ScalarEvolution/scev-aa.ll
@@ -19,9 +19,9 @@ entry:
 bb:
  %i = phi i64 [ 0, %entry ], [ %i.next, %bb ]
- %pi = getelementptr double* %p, i64 %i
+ %pi = getelementptr double, double* %p, i64 %i
  %i.next = add i64 %i, 1
- %pi.next = getelementptr double* %p, i64 %i.next
+ %pi.next = getelementptr double, double* %p, i64 %i.next
  %x = load double* %pi
  %y = load double* %pi.next
  %z = fmul double %x, %y
@@ -58,9 +58,9 @@ bb:
  %i.next = add i64 %i, 1
  %e = add i64 %i, %j
- %pi.j = getelementptr double* %p, i64 %e
+ %pi.j = getelementptr double, double* %p, i64 %e
  %f = add i64 %i.next, %j
- %pi.next.j = getelementptr double* %p, i64 %f
+ %pi.next.j = getelementptr double, double* %p, i64 %f
  %x = load double* %pi.j
  %y = load double* %pi.next.j
  %z = fmul double %x, %y
@@ -68,7 +68,7 @@ bb:
  %o = add i64 %j, 91
  %g = add i64 %i, %o
- %pi.j.next = getelementptr double* %p, i64 %g
+ %pi.j.next = getelementptr double, double* %p, i64 %g
  %a = load double* %pi.j.next
  %b = fmul double %x, %a
  store double %b, double* %pi.j.next
@@ -115,9 +115,9 @@ bb:
  %i.next = add i64 %i, 1
  %e = add i64 %i, %j
- %pi.j = getelementptr double* %p, i64 %e
+ %pi.j = getelementptr double, double* %p, i64 %e
  %f = add i64 %i.next, %j
- %pi.next.j = getelementptr double* %p, i64 %f
+ %pi.next.j = getelementptr double, double* %p, i64 %f
  %x = load double* %pi.j
  %y = load double* %pi.next.j
  %z = fmul double %x, %y
@@ -125,7 +125,7 @@ bb:
  %o = add i64 %j, %n
  %g = add i64 %i, %o
- %pi.j.next = getelementptr double* %p, i64 %g
+ %pi.j.next = getelementptr double, double* %p, i64 %g
  %a = load double* %pi.j.next
  %b = fmul double %x, %a
  store double %b, double* %pi.j.next
@@ -161,12 +161,12 @@ return:
 define void @foo() {
 entry:
  %A = alloca %struct.A
- %B = getelementptr %struct.A* %A, i32 0, i32 0
+ %B = getelementptr %struct.A, %struct.A* %A, i32 0, i32 0
  %Q = bitcast %struct.B* %B to %struct.A*
- %Z = getelementptr %struct.A* %Q, i32 0, i32 1
- %C = getelementptr %struct.B* %B, i32 1
+ %Z = getelementptr %struct.A, %struct.A* %Q, i32 0, i32 1
+ %C = getelementptr %struct.B, %struct.B* %B, i32 1
  %X = bitcast %struct.B* %C to i32*
- %Y = getelementptr %struct.A* %A, i32 0, i32 1
+ %Y = getelementptr %struct.A, %struct.A* %A, i32 0, i32 1
  ret void
 }
@@ -181,12 +181,12 @@ entry:
 define void @bar() {
  %M = alloca %struct.A
- %N = getelementptr %struct.A* %M, i32 0, i32 0
+ %N = getelementptr %struct.A, %struct.A* %M, i32 0, i32 0
  %O = bitcast %struct.B* %N to %struct.A*
- %P = getelementptr %struct.A* %O, i32 0, i32 1
- %R = getelementptr %struct.B* %N, i32 1
+ %P = getelementptr %struct.A, %struct.A* %O, i32 0, i32 1
+ %R = getelementptr %struct.B, %struct.B* %N, i32 1
  %W = bitcast %struct.B* %R to i32*
- %V = getelementptr %struct.A* %M, i32 0, i32 1
+ %V = getelementptr %struct.A, %struct.A* %M, i32 0, i32 1
  ret void
 }
@@ -200,7 +200,7 @@ entry:
 for.body: ; preds = %entry, %for.body
  %i = phi i64 [ %inc, %for.body ], [ 0, %entry ] ; <i64> [#uses=2]
  %inc = add nsw i64 %i, 1 ; <i64> [#uses=2]
- %arrayidx = getelementptr inbounds i64* %p, i64 %inc
+ %arrayidx = getelementptr inbounds i64, i64* %p, i64 %inc
  store i64 0, i64* %arrayidx
  %tmp6 = load i64* %p ; <i64> [#uses=1]
  %cmp = icmp slt i64 %inc, %tmp6 ; <i1> [#uses=1]
diff --git a/llvm/test/Analysis/ScalarEvolution/sext-inreg.ll b/llvm/test/Analysis/ScalarEvolution/sext-inreg.ll
index 8b3d641943d..8f1d5bdbeba 100644
--- a/llvm/test/Analysis/ScalarEvolution/sext-inreg.ll
+++ b/llvm/test/Analysis/ScalarEvolution/sext-inreg.ll
@@ -16,7 +16,7 @@ bb: ; preds = %bb, %entry
  %t2 = ashr i64 %t1, 7 ; <i32> [#uses=1]
  %s1 = shl i64 %i.01, 5 ; <i32> [#uses=1]
  %s2 = ashr i64 %s1, 5 ; <i32> [#uses=1]
- %t3 = getelementptr i64* %x, i64 %i.01 ; <i64*> [#uses=1]
+ %t3 = getelementptr i64, i64* %x, i64 %i.01 ; <i64*> [#uses=1]
  store i64 0, i64* %t3, align 1
  %indvar.next = add i64 %i.01, 199 ; <i32> [#uses=2]
  %exitcond = icmp eq i64 %indvar.next, %n ; <i1> [#uses=1]
diff --git a/llvm/test/Analysis/ScalarEvolution/sext-iv-0.ll b/llvm/test/Analysis/ScalarEvolution/sext-iv-0.ll
index d5d32689e17..f5d54556d24 100644
--- a/llvm/test/Analysis/ScalarEvolution/sext-iv-0.ll
+++ b/llvm/test/Analysis/ScalarEvolution/sext-iv-0.ll
@@ -23,11 +23,11 @@ bb1: ; preds = %bb1, %bb1.thread
  %2 = sext i9 %1 to i64 ; <i64> [#uses=1]
 ; CHECK: %2
 ; CHECK-NEXT: --> {-128,+,1}<nsw><%bb1> Exits: 127
- %3 = getelementptr double* %x, i64 %2 ; <double*> [#uses=1]
+ %3 = getelementptr double, double* %x, i64 %2 ; <double*> [#uses=1]
  %4 = load double* %3, align 8 ; <double> [#uses=1]
  %5 = fmul double %4, 3.900000e+00 ; <double> [#uses=1]
  %6 = sext i8 %0 to i64 ; <i64> [#uses=1]
- %7 = getelementptr double* %x, i64 %6 ; <double*> [#uses=1]
+ %7 = getelementptr double, double* %x, i64 %6 ; <double*> [#uses=1]
  store double %5, double* %7, align 8
  %8 = add i64 %i.0.reg2mem.0, 1 ; <i64> [#uses=2]
  %9 = icmp sgt i64 %8, 127 ; <i1> [#uses=1]
diff --git a/llvm/test/Analysis/ScalarEvolution/sext-iv-1.ll b/llvm/test/Analysis/ScalarEvolution/sext-iv-1.ll
index a6f70dbff9a..07f055e4d17 100644
--- a/llvm/test/Analysis/ScalarEvolution/sext-iv-1.ll
+++ b/llvm/test/Analysis/ScalarEvolution/sext-iv-1.ll
@@ -23,11 +23,11 @@ bb1: ; preds = %bb1, %bb1.thread
  %0 = trunc i64 %i.0.reg2mem.0 to i7 ; <i8> [#uses=1]
  %1 = trunc i64 %i.0.reg2mem.0 to i9 ; <i8> [#uses=1]
  %2 = sext i9 %1 to i64 ; <i64> [#uses=1]
- %3 = getelementptr double* %x, i64 %2 ; <double*> [#uses=1]
+ %3 = getelementptr double, double* %x, i64 %2 ; <double*> [#uses=1]
  %4 = load double* %3, align 8 ; <double> [#uses=1]
  %5 = fmul double %4, 3.900000e+00 ; <double> [#uses=1]
  %6 = sext i7 %0 to i64 ; <i64> [#uses=1]
- %7 = getelementptr double* %x, i64 %6 ; <double*> [#uses=1]
+ %7 = getelementptr double, double* %x, i64 %6 ; <double*> [#uses=1]
  store double %5, double* %7, align 8
  %8 = add i64 %i.0.reg2mem.0, 1 ; <i64> [#uses=2]
  %9 = icmp sgt i64 %8, 127 ; <i1> [#uses=1]
@@ -46,11 +46,11 @@ bb1: ; preds = %bb1, %bb1.thread
  %0 = trunc i64 %i.0.reg2mem.0 to i8 ; <i8> [#uses=1]
  %1 = trunc i64 %i.0.reg2mem.0 to i9 ; <i8> [#uses=1]
  %2 = sext i9 %1 to i64 ; <i64> [#uses=1]
- %3 = getelementptr double* %x, i64 %2 ; <double*> [#uses=1]
+ %3 = getelementptr double, double* %x, i64 %2 ; <double*> [#uses=1]
  %4 = load double* %3, align 8 ; <double> [#uses=1]
  %5 = fmul double %4, 3.900000e+00 ; <double> [#uses=1]
  %6 = sext i8 %0 to i64 ; <i64> [#uses=1]
- %7 = getelementptr double* %x, i64 %6 ; <double*> [#uses=1]
+ %7 = getelementptr double, double* %x, i64 %6 ; <double*> [#uses=1]
  store double %5, double* %7, align 8
  %8 = add i64 %i.0.reg2mem.0, 1 ; <i64> [#uses=2]
  %9 = icmp sgt i64 %8, 128 ; <i1> [#uses=1]
@@ -69,11 +69,11 @@ bb1: ; preds = %bb1, %bb1.thread
  %0 = trunc i64 %i.0.reg2mem.0 to i8 ; <i8> [#uses=1]
  %1 = trunc i64 %i.0.reg2mem.0 to i9 ; <i8> [#uses=1]
  %2 = sext i9 %1 to i64 ; <i64> [#uses=1]
- %3 = getelementptr double* %x, i64 %2 ; <double*> [#uses=1]
+ %3 = getelementptr double, double* %x, i64 %2 ; <double*> [#uses=1]
  %4 = load double* %3, align 8 ; <double>
[#uses=1] %5 = fmul double %4, 3.900000e+00 ; <double> [#uses=1] %6 = sext i8 %0 to i64 ; <i64> [#uses=1] - %7 = getelementptr double* %x, i64 %6 ; <double*> [#uses=1] + %7 = getelementptr double, double* %x, i64 %6 ; <double*> [#uses=1] store double %5, double* %7, align 8 %8 = add i64 %i.0.reg2mem.0, 1 ; <i64> [#uses=2] %9 = icmp sgt i64 %8, 127 ; <i1> [#uses=1] @@ -92,11 +92,11 @@ bb1: ; preds = %bb1, %bb1.thread %0 = trunc i64 %i.0.reg2mem.0 to i8 ; <i8> [#uses=1] %1 = trunc i64 %i.0.reg2mem.0 to i9 ; <i8> [#uses=1] %2 = sext i9 %1 to i64 ; <i64> [#uses=1] - %3 = getelementptr double* %x, i64 %2 ; <double*> [#uses=1] + %3 = getelementptr double, double* %x, i64 %2 ; <double*> [#uses=1] %4 = load double* %3, align 8 ; <double> [#uses=1] %5 = fmul double %4, 3.900000e+00 ; <double> [#uses=1] %6 = sext i8 %0 to i64 ; <i64> [#uses=1] - %7 = getelementptr double* %x, i64 %6 ; <double*> [#uses=1] + %7 = getelementptr double, double* %x, i64 %6 ; <double*> [#uses=1] store double %5, double* %7, align 8 %8 = add i64 %i.0.reg2mem.0, -1 ; <i64> [#uses=2] %9 = icmp sgt i64 %8, 127 ; <i1> [#uses=1] diff --git a/llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll b/llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll index 97e252c1fb3..e580cc18d98 100644 --- a/llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll +++ b/llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll @@ -32,7 +32,7 @@ bb1: ; preds = %bb2, %bb.nph %tmp4 = mul i32 %tmp3, %i.02 ; <i32> [#uses=1] %tmp5 = sext i32 %i.02 to i64 ; <i64> [#uses=1] %tmp6 = sext i32 %j.01 to i64 ; <i64> [#uses=1] - %tmp7 = getelementptr [32 x [256 x i32]]* @table, i64 0, i64 %tmp5, i64 %tmp6 ; <i32*> [#uses=1] + %tmp7 = getelementptr [32 x [256 x i32]], [32 x [256 x i32]]* @table, i64 0, i64 %tmp5, i64 %tmp6 ; <i32*> [#uses=1] store i32 %tmp4, i32* %tmp7, align 4 %tmp8 = add i32 %j.01, 1 ; <i32> [#uses=2] br label %bb2 diff --git a/llvm/test/Analysis/ScalarEvolution/sle.ll b/llvm/test/Analysis/ScalarEvolution/sle.ll index f38f6b63dce..c31f750cddb 
100644 --- a/llvm/test/Analysis/ScalarEvolution/sle.ll +++ b/llvm/test/Analysis/ScalarEvolution/sle.ll @@ -14,7 +14,7 @@ entry: for.body: ; preds = %for.body, %entry %i = phi i64 [ %i.next, %for.body ], [ 0, %entry ] ; <i64> [#uses=2] - %arrayidx = getelementptr double* %p, i64 %i ; <double*> [#uses=2] + %arrayidx = getelementptr double, double* %p, i64 %i ; <double*> [#uses=2] %t4 = load double* %arrayidx ; <double> [#uses=1] %mul = fmul double %t4, 2.200000e+00 ; <double> [#uses=1] store double %mul, double* %arrayidx diff --git a/llvm/test/Analysis/ScalarEvolution/trip-count.ll b/llvm/test/Analysis/ScalarEvolution/trip-count.ll index f89125aeb29..f705be60fad 100644 --- a/llvm/test/Analysis/ScalarEvolution/trip-count.ll +++ b/llvm/test/Analysis/ScalarEvolution/trip-count.ll @@ -10,7 +10,7 @@ entry: br label %bb3 bb: ; preds = %bb3 - %tmp = getelementptr [1000 x i32]* @A, i32 0, i32 %i.0 ; <i32*> [#uses=1] + %tmp = getelementptr [1000 x i32], [1000 x i32]* @A, i32 0, i32 %i.0 ; <i32*> [#uses=1] store i32 123, i32* %tmp %tmp2 = add i32 %i.0, 1 ; <i32> [#uses=1] br label %bb3 diff --git a/llvm/test/Analysis/ScalarEvolution/trip-count11.ll b/llvm/test/Analysis/ScalarEvolution/trip-count11.ll index e14af08e33f..3faa95176e7 100644 --- a/llvm/test/Analysis/ScalarEvolution/trip-count11.ll +++ b/llvm/test/Analysis/ScalarEvolution/trip-count11.ll @@ -20,7 +20,7 @@ for.cond: ; preds = %for.inc, %entry for.inc: ; preds = %for.cond %idxprom = sext i32 %i.0 to i64 - %arrayidx = getelementptr inbounds [8 x i32]* @foo.a, i64 0, i64 %idxprom + %arrayidx = getelementptr inbounds [8 x i32], [8 x i32]* @foo.a, i64 0, i64 %idxprom %0 = load i32* %arrayidx, align 4 %add = add nsw i32 %sum.0, %0 %inc = add nsw i32 %i.0, 1 @@ -43,7 +43,7 @@ for.cond: ; preds = %for.inc, %entry for.inc: ; preds = %for.cond %idxprom = sext i32 %i.0 to i64 - %arrayidx = getelementptr inbounds [8 x i32] addrspace(1)* @foo.a_as1, i64 0, i64 %idxprom + %arrayidx = getelementptr inbounds [8 x i32], [8 x i32] 
addrspace(1)* @foo.a_as1, i64 0, i64 %idxprom %0 = load i32 addrspace(1)* %arrayidx, align 4 %add = add nsw i32 %sum.0, %0 %inc = add nsw i32 %i.0, 1 diff --git a/llvm/test/Analysis/ScalarEvolution/trip-count12.ll b/llvm/test/Analysis/ScalarEvolution/trip-count12.ll index 8f960e1c1c7..3fd16b23df6 100644 --- a/llvm/test/Analysis/ScalarEvolution/trip-count12.ll +++ b/llvm/test/Analysis/ScalarEvolution/trip-count12.ll @@ -16,7 +16,7 @@ for.body: ; preds = %for.body, %for.body %p.addr.05 = phi i16* [ %incdec.ptr, %for.body ], [ %p, %for.body.preheader ] %len.addr.04 = phi i32 [ %sub, %for.body ], [ %len, %for.body.preheader ] %res.03 = phi i32 [ %add, %for.body ], [ 0, %for.body.preheader ] - %incdec.ptr = getelementptr inbounds i16* %p.addr.05, i32 1 + %incdec.ptr = getelementptr inbounds i16, i16* %p.addr.05, i32 1 %0 = load i16* %p.addr.05, align 2 %conv = zext i16 %0 to i32 %add = add i32 %conv, %res.03 diff --git a/llvm/test/Analysis/ScalarEvolution/trip-count2.ll b/llvm/test/Analysis/ScalarEvolution/trip-count2.ll index e76488abfca..d988eff7cfa 100644 --- a/llvm/test/Analysis/ScalarEvolution/trip-count2.ll +++ b/llvm/test/Analysis/ScalarEvolution/trip-count2.ll @@ -10,7 +10,7 @@ entry: br label %bb3 bb: ; preds = %bb3 - %tmp = getelementptr [1000 x i32]* @A, i32 0, i32 %i.0 ; <i32*> [#uses=1] + %tmp = getelementptr [1000 x i32], [1000 x i32]* @A, i32 0, i32 %i.0 ; <i32*> [#uses=1] store i32 123, i32* %tmp %tmp4 = mul i32 %i.0, 4 ; <i32> [#uses=1] %tmp5 = or i32 %tmp4, 1 diff --git a/llvm/test/Analysis/ScalarEvolution/trip-count3.ll b/llvm/test/Analysis/ScalarEvolution/trip-count3.ll index 850e035e7c6..cce0182d649 100644 --- a/llvm/test/Analysis/ScalarEvolution/trip-count3.ll +++ b/llvm/test/Analysis/ScalarEvolution/trip-count3.ll @@ -48,10 +48,10 @@ sha_update.exit.exitStub: ; preds = %bb3.i ret void bb2.i: ; preds = %bb3.i - %1 = getelementptr %struct.SHA_INFO* %sha_info, i64 0, i32 3 + %1 = getelementptr %struct.SHA_INFO, %struct.SHA_INFO* %sha_info, i64 0, i32 
3 %2 = bitcast [16 x i32]* %1 to i8* call void @llvm.memcpy.p0i8.p0i8.i64(i8* %2, i8* %buffer_addr.0.i, i64 64, i32 1, i1 false) - %3 = getelementptr %struct.SHA_INFO* %sha_info, i64 0, i32 3, i64 0 + %3 = getelementptr %struct.SHA_INFO, %struct.SHA_INFO* %sha_info, i64 0, i32 3, i64 0 %4 = bitcast i32* %3 to i8* br label %codeRepl @@ -61,7 +61,7 @@ codeRepl: ; preds = %bb2.i byte_reverse.exit.i: ; preds = %codeRepl call fastcc void @sha_transform(%struct.SHA_INFO* %sha_info) nounwind - %5 = getelementptr i8* %buffer_addr.0.i, i64 64 + %5 = getelementptr i8, i8* %buffer_addr.0.i, i64 64 %6 = add i32 %count_addr.0.i, -64 br label %bb3.i diff --git a/llvm/test/Analysis/ScalarEvolution/trip-count4.ll b/llvm/test/Analysis/ScalarEvolution/trip-count4.ll index b7184a48fe8..6c1ed89989b 100644 --- a/llvm/test/Analysis/ScalarEvolution/trip-count4.ll +++ b/llvm/test/Analysis/ScalarEvolution/trip-count4.ll @@ -12,7 +12,7 @@ loop: ; preds = %loop, %entry %indvar = phi i64 [ %n, %entry ], [ %indvar.next, %loop ] ; <i64> [#uses=4] %s0 = shl i64 %indvar, 8 ; <i64> [#uses=1] %indvar.i8 = ashr i64 %s0, 8 ; <i64> [#uses=1] - %t0 = getelementptr double* %d, i64 %indvar.i8 ; <double*> [#uses=2] + %t0 = getelementptr double, double* %d, i64 %indvar.i8 ; <double*> [#uses=2] %t1 = load double* %t0 ; <double> [#uses=1] %t2 = fmul double %t1, 1.000000e-01 ; <double> [#uses=1] store double %t2, double* %t0 diff --git a/llvm/test/Analysis/ScalarEvolution/trip-count5.ll b/llvm/test/Analysis/ScalarEvolution/trip-count5.ll index 68a1ae14a7a..564a75a7458 100644 --- a/llvm/test/Analysis/ScalarEvolution/trip-count5.ll +++ b/llvm/test/Analysis/ScalarEvolution/trip-count5.ll @@ -21,12 +21,12 @@ bb: ; preds = %bb1, %bb.nph %hiPart.035 = phi i32 [ %tmp12, %bb1 ], [ 0, %bb.nph ] ; <i32> [#uses=2] %peakCount.034 = phi float [ %tmp19, %bb1 ], [ %tmp3, %bb.nph ] ; <float> [#uses=1] %tmp6 = sext i32 %hiPart.035 to i64 ; <i64> [#uses=1] - %tmp7 = getelementptr float* %pTmp1, i64 %tmp6 ; <float*> [#uses=1] + 
%tmp7 = getelementptr float, float* %pTmp1, i64 %tmp6 ; <float*> [#uses=1] %tmp8 = load float* %tmp7, align 4 ; <float> [#uses=1] %tmp10 = fadd float %tmp8, %distERBhi.036 ; <float> [#uses=3] %tmp12 = add i32 %hiPart.035, 1 ; <i32> [#uses=3] %tmp15 = sext i32 %tmp12 to i64 ; <i64> [#uses=1] - %tmp16 = getelementptr float* %peakWeight, i64 %tmp15 ; <float*> [#uses=1] + %tmp16 = getelementptr float, float* %peakWeight, i64 %tmp15 ; <float*> [#uses=1] %tmp17 = load float* %tmp16, align 4 ; <float> [#uses=1] %tmp19 = fadd float %tmp17, %peakCount.034 ; <float> [#uses=2] br label %bb1 diff --git a/llvm/test/Analysis/ScalarEvolution/trip-count6.ll b/llvm/test/Analysis/ScalarEvolution/trip-count6.ll index 0f394a09d15..9cba1101a6f 100644 --- a/llvm/test/Analysis/ScalarEvolution/trip-count6.ll +++ b/llvm/test/Analysis/ScalarEvolution/trip-count6.ll @@ -12,7 +12,7 @@ entry: bb: ; preds = %bb4, %entry %mode.0 = phi i8 [ 0, %entry ], [ %indvar.next, %bb4 ] ; <i8> [#uses=4] zext i8 %mode.0 to i32 ; <i32>:1 [#uses=1] - getelementptr [4 x i32]* @mode_table, i32 0, i32 %1 ; <i32*>:2 [#uses=1] + getelementptr [4 x i32], [4 x i32]* @mode_table, i32 0, i32 %1 ; <i32*>:2 [#uses=1] load i32* %2, align 4 ; <i32>:3 [#uses=1] icmp eq i32 %3, %0 ; <i1>:4 [#uses=1] br i1 %4, label %bb1, label %bb2 diff --git a/llvm/test/Analysis/ScalarEvolution/trip-count7.ll b/llvm/test/Analysis/ScalarEvolution/trip-count7.ll index d01a18a468f..a4eb72f0737 100644 --- a/llvm/test/Analysis/ScalarEvolution/trip-count7.ll +++ b/llvm/test/Analysis/ScalarEvolution/trip-count7.ll @@ -72,7 +72,7 @@ bb.i: ; preds = %bb7.i %tmp = add i32 %j.0.i, 1 ; <i32> [#uses=5] store i32 0, i32* %q, align 4 %tmp1 = sext i32 %tmp to i64 ; <i64> [#uses=1] - %tmp2 = getelementptr [9 x i32]* %a, i64 0, i64 %tmp1 ; <i32*> [#uses=1] + %tmp2 = getelementptr [9 x i32], [9 x i32]* %a, i64 0, i64 %tmp1 ; <i32*> [#uses=1] %tmp3 = load i32* %tmp2, align 4 ; <i32> [#uses=1] %tmp4 = icmp eq i32 %tmp3, 0 ; <i1> [#uses=1] br i1 %tmp4, label 
%bb.i.bb7.i.backedge_crit_edge, label %bb1.i @@ -80,7 +80,7 @@ bb.i: ; preds = %bb7.i bb1.i: ; preds = %bb.i %tmp5 = add i32 %j.0.i, 2 ; <i32> [#uses=1] %tmp6 = sext i32 %tmp5 to i64 ; <i64> [#uses=1] - %tmp7 = getelementptr [17 x i32]* %b, i64 0, i64 %tmp6 ; <i32*> [#uses=1] + %tmp7 = getelementptr [17 x i32], [17 x i32]* %b, i64 0, i64 %tmp6 ; <i32*> [#uses=1] %tmp8 = load i32* %tmp7, align 4 ; <i32> [#uses=1] %tmp9 = icmp eq i32 %tmp8, 0 ; <i1> [#uses=1] br i1 %tmp9, label %bb1.i.bb7.i.backedge_crit_edge, label %bb2.i @@ -88,24 +88,24 @@ bb1.i: ; preds = %bb.i bb2.i: ; preds = %bb1.i %tmp10 = sub i32 7, %j.0.i ; <i32> [#uses=1] %tmp11 = sext i32 %tmp10 to i64 ; <i64> [#uses=1] - %tmp12 = getelementptr [15 x i32]* %c, i64 0, i64 %tmp11 ; <i32*> [#uses=1] + %tmp12 = getelementptr [15 x i32], [15 x i32]* %c, i64 0, i64 %tmp11 ; <i32*> [#uses=1] %tmp13 = load i32* %tmp12, align 4 ; <i32> [#uses=1] %tmp14 = icmp eq i32 %tmp13, 0 ; <i1> [#uses=1] br i1 %tmp14, label %bb2.i.bb7.i.backedge_crit_edge, label %bb3.i bb3.i: ; preds = %bb2.i - %tmp15 = getelementptr [9 x i32]* %x1, i64 0, i64 1 ; <i32*> [#uses=1] + %tmp15 = getelementptr [9 x i32], [9 x i32]* %x1, i64 0, i64 1 ; <i32*> [#uses=1] store i32 %tmp, i32* %tmp15, align 4 %tmp16 = sext i32 %tmp to i64 ; <i64> [#uses=1] - %tmp17 = getelementptr [9 x i32]* %a, i64 0, i64 %tmp16 ; <i32*> [#uses=1] + %tmp17 = getelementptr [9 x i32], [9 x i32]* %a, i64 0, i64 %tmp16 ; <i32*> [#uses=1] store i32 0, i32* %tmp17, align 4 %tmp18 = add i32 %j.0.i, 2 ; <i32> [#uses=1] %tmp19 = sext i32 %tmp18 to i64 ; <i64> [#uses=1] - %tmp20 = getelementptr [17 x i32]* %b, i64 0, i64 %tmp19 ; <i32*> [#uses=1] + %tmp20 = getelementptr [17 x i32], [17 x i32]* %b, i64 0, i64 %tmp19 ; <i32*> [#uses=1] store i32 0, i32* %tmp20, align 4 %tmp21 = sub i32 7, %j.0.i ; <i32> [#uses=1] %tmp22 = sext i32 %tmp21 to i64 ; <i64> [#uses=1] - %tmp23 = getelementptr [15 x i32]* %c, i64 0, i64 %tmp22 ; <i32*> [#uses=1] + %tmp23 = getelementptr [15 x i32], [15 
x i32]* %c, i64 0, i64 %tmp22 ; <i32*> [#uses=1] store i32 0, i32* %tmp23, align 4 call void @Try(i32 2, i32* %q, i32* %b9, i32* %a10, i32* %c11, i32* %x1.sub) nounwind %tmp24 = load i32* %q, align 4 ; <i32> [#uses=1] @@ -114,15 +114,15 @@ bb3.i: ; preds = %bb2.i bb5.i: ; preds = %bb3.i %tmp26 = sext i32 %tmp to i64 ; <i64> [#uses=1] - %tmp27 = getelementptr [9 x i32]* %a, i64 0, i64 %tmp26 ; <i32*> [#uses=1] + %tmp27 = getelementptr [9 x i32], [9 x i32]* %a, i64 0, i64 %tmp26 ; <i32*> [#uses=1] store i32 1, i32* %tmp27, align 4 %tmp28 = add i32 %j.0.i, 2 ; <i32> [#uses=1] %tmp29 = sext i32 %tmp28 to i64 ; <i64> [#uses=1] - %tmp30 = getelementptr [17 x i32]* %b, i64 0, i64 %tmp29 ; <i32*> [#uses=1] + %tmp30 = getelementptr [17 x i32], [17 x i32]* %b, i64 0, i64 %tmp29 ; <i32*> [#uses=1] store i32 1, i32* %tmp30, align 4 %tmp31 = sub i32 7, %j.0.i ; <i32> [#uses=1] %tmp32 = sext i32 %tmp31 to i64 ; <i64> [#uses=1] - %tmp33 = getelementptr [15 x i32]* %c, i64 0, i64 %tmp32 ; <i32*> [#uses=1] + %tmp33 = getelementptr [15 x i32], [15 x i32]* %c, i64 0, i64 %tmp32 ; <i32*> [#uses=1] store i32 1, i32* %tmp33, align 4 br label %bb7.i.backedge |
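The commit message says this churn was produced by a small Python script that massages stdin and writes the result to stdout. That script is not part of this commit, so the following is only a hypothetical sketch of the core rewrite it would have to perform on hunks like the ones above: duplicate a gep's pointee type as a new explicit first argument. It handles only the simple textual shapes seen in these tests (scalar, named struct, and array pointee types, optional `inbounds`, optional `addrspace` on the pointer); vector-of-pointer geps and gep constant expressions, which the commit notes are handled separately, are deliberately left untouched.

```python
import re

# Match "getelementptr [inbounds] <ty>[ addrspace(N)]*" and capture the
# pieces so the pointee type <ty> can be duplicated in front of the pointer
# type. The type alternation covers only the shapes in these test hunks:
#   arrays like [32 x [256 x i32]], named structs like %struct.SHA_INFO,
#   and scalars like i64 / double / float.
GEP_RE = re.compile(
    r"(getelementptr\s+(?:inbounds\s+)?)"  # opcode, optional 'inbounds'
    r"(\[[^*]*\]|%[\w.]+|\w+)"             # pointee type to duplicate
    r"((?:\s+addrspace\(\d+\))?\*)"        # addrspace stays on the pointer
)

def add_explicit_type(line: str) -> str:
    # "getelementptr double* %p, ..." -> "getelementptr double, double* %p, ..."
    # Lines without a matching gep pass through unchanged.
    return GEP_RE.sub(r"\1\2, \2\3", line)
```

The real wrapper then fed each test file through the function line by line (stdin to stdout) and replaced the file with the result; the function name and the exact regex are illustrative, not the script the author actually ran.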