path: root/llvm/test/CodeGen/AArch64/extern-weak.ll
Commit message | Author | Age | Files | Lines
* [AArch64] Add Tiny Code Model for AArch64 | David Green | 2018-08-22 | 1 | -0/+9
  This adds the plumbing for the Tiny code model for the AArch64 backend. This, instead of loading addresses through the normal ADRP;ADD pair used in the Small model, uses a single ADR. The 21 bit range of an ADR means that the code and its statically defined symbols need to be within 1MB of each other. This makes it mostly interesting for embedded applications where we want to fit as much as we can in as small a space as possible.

  Differential Revision: https://reviews.llvm.org/D49673

  llvm-svn: 340397
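  As a rough illustration of the difference described above (not taken from extern-weak.ll; the symbol and function names are invented and the assembly in the comments is only an expectation):

      ; Build with, e.g.:
      ;   llc -mtriple=aarch64 -code-model=small example.ll
      ;   llc -mtriple=aarch64 -code-model=tiny  example.ll
      @var = global i32 0

      define i32 @load_var() {
      ; Small model, rough expectation (page address plus low 12 bits):
      ;   adrp x8, var
      ;   ldr  w0, [x8, :lo12:var]
      ; Tiny model, rough expectation (single PC-relative ADR, +/-1MB range):
      ;   adr  x8, var
      ;   ldr  w0, [x8]
        %v = load i32, i32* @var
        ret i32 %v
      }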
* [llvm] Remove redundant check-prefix=CHECK from tests. NFC. | Mandeep Singh Grang | 2017-07-17 | 1 | -1/+1
  Reviewers: t.p.northover, oren_ben_simhon, niravd, mcrosier

  Reviewed By: oren_ben_simhon, mcrosier

  Subscribers: nhaehnle, javed.absar, llvm-commits

  Differential Revision: https://reviews.llvm.org/D35466

  llvm-svn: 308193
* [AArch64] Generate literals by the little end | Evandro Menezes | 2017-01-18 | 1 | -9/+9
  ARM seems to prefer that long literals be formed from their little end in order to promote the fusion of the instruction pairs MOV/MOVK and MOVK/MOVK on Cortex A57 and others (v. "Cortex A57 Software Optimisation Guide", section 4.14).

  Differential revision: https://reviews.llvm.org/D28697

  llvm-svn: 292422
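  A rough illustration of "little end first" materialisation (an invented example, not from the test; the exact instruction spelling may differ):

      define i64 @big_literal() {
      ; 0x1234567890ABCDEF, rough expectation after this change:
      ;   movz x0, #0xcdef                 ; bits [15:0] first (the little end)
      ;   movk x0, #0x90ab, lsl #16
      ;   movk x0, #0x5678, lsl #32
      ;   movk x0, #0x1234, lsl #48        ; bits [63:48] last
      ; so that adjacent MOV/MOVK and MOVK/MOVK pairs can be fused.
        ret i64 1311768467294899695
      }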
* Delete AArch64II::MO_CONSTPOOL. | Rafael Espindola | 2016-05-31 | 1 | -14/+1
  A constant pool holding the address of a variable is equivalent to a GOT entry. It produces exactly the same instruction sequence as a GOT use and, unlike a GOT use, it is not uniqued by the linker.

  llvm-svn: 271311
* [opaque pointer type] Add textual IR support for explicit type parameter to getelementptr instruction | David Blaikie | 2015-02-27 | 1 | -1/+1
  One of several parallel first steps to remove the target type of pointers, replacing them with a single opaque pointer type. This adds an explicit type parameter to the gep instruction so that when the first parameter becomes an opaque pointer type, the type to gep through is still available to the instructions.

  * This doesn't modify gep operators, only instructions (operators will be handled separately)

  * Textual IR changes only. Bitcode (including upgrade) and changing the in-memory representation will be in separate changes.

  * geps of vectors are transformed as:
      getelementptr <4 x float*> %x, ...
      ->getelementptr float, <4 x float*> %x, ...
    Then, once the opaque pointer type is introduced, this will ultimately look like:
      getelementptr float, <4 x ptr> %x
    with the unambiguous interpretation that it is a vector of pointers to float.

  * address spaces remain on the pointer, not the type:
      getelementptr float addrspace(1)* %x
      ->getelementptr float, float addrspace(1)* %x
    Then, eventually:
      getelementptr float, ptr addrspace(1) %x

  Importantly, the massive amount of test case churn has been automated by same crappy python code. I had to manually update a few test cases that wouldn't fit the script's model (r228970, r229196, r229197, r229198).

  The python script just massages stdin and writes the result to stdout, I then wrapped that in a shell script to handle replacing files, then using the usual find+xargs to migrate all the files.

  update.py:

      import fileinput
      import sys
      import re

      ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
      normrep = re.compile( r"(^.*?[^%\w]getelementptr )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")

      def conv(match, line):
        if not match:
          return line
        line = match.groups()[0]
        if len(match.groups()[5]) == 0:
          line += match.groups()[2]
        line += match.groups()[3]
        line += ", "
        line += match.groups()[1]
        line += "\n"
        return line

      for line in sys.stdin:
        if line.find("getelementptr ") == line.find("getelementptr inbounds"):
          if line.find("getelementptr inbounds") != line.find("getelementptr inbounds ("):
            line = conv(re.match(ibrep, line), line)
        elif line.find("getelementptr ") != line.find("getelementptr ("):
          line = conv(re.match(normrep, line), line)
        sys.stdout.write(line)

  apply.sh:

      for name in "$@"
      do
        python3 `dirname "$0"`/update.py < "$name" > "$name.tmp" && mv "$name.tmp" "$name"
        rm -f "$name.tmp"
      done

  The actual commands:

  From llvm/src:
      find test/ -name *.ll | xargs ./apply.sh
  From llvm/src/tools/clang:
      find test/ -name *.mm -o -name *.m -o -name *.cpp -o -name *.c | xargs -I '{}' ../../apply.sh "{}"
  From llvm/src/tools/polly:
      find test/ -name *.ll | xargs ./apply.sh

  After that, check-all (with llvm, clang, clang-tools-extra, lld, compiler-rt, and polly all checked out).

  The extra 'rm' in the apply.sh script is due to a few files in clang's test suite using interesting unicode stuff that my python script was throwing exceptions on. None of those files needed to be migrated, so it seemed sufficient to ignore those cases.

  Reviewers: rafael, dexonsmith, grosser

  Differential Revision: http://reviews.llvm.org/D7636

  llvm-svn: 230786
* [AArch 64] Use a constant pool load for weak symbol references when using static relocation model and small code model | Asiri Rathnayake | 2014-09-10 | 1 | -2/+17
  Summary: currently we generate GOT-based relocations for weak symbol references regardless of the underlying relocation model. This should be changed so that in the static relocation model we use a constant pool load instead.

  Patch from: Keith Walker

  Reviewers: Renato Golin, Tim Northover

  llvm-svn: 217503
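  A sketch of the behaviour this describes (invented names; the constant-pool label and exact sequences are assumptions, and a later entry above removes MO_CONSTPOOL again):

      @weak_var = extern_weak global i32

      define i32* @addr_of_weak_var() {
      ; Static relocation model + small code model, per this commit (rough expectation):
      ;   adrp x0, .LCPI0_0                  ; page of a constant-pool slot
      ;   ldr  x0, [x0, :lo12:.LCPI0_0]      ; slot holds the address of weak_var (0 if undefined)
      ; Other relocation models keep the GOT-based form:
      ;   adrp x0, :got:weak_var
      ;   ldr  x0, [x0, :got_lo12:weak_var]
        ret i32* @weak_var
      }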
* AArch64/ARM64: move ARM64 into AArch64's place | Tim Northover | 2014-05-24 | 1 | -9/+9
  This commit starts with a "git mv ARM64 AArch64" and continues out from there, renaming the C++ classes, intrinsics, and other target-local objects for consistency. "ARM64" test directories are also moved, and tests that began their life in ARM64 use an arm64 triple, those from AArch64 use an aarch64 triple. Both should be equivalent though.

  This finishes the AArch64 merge, and everyone should feel free to continue committing as normal now.

  llvm-svn: 209577
* AArch64/ARM64: remove AArch64 from tree prior to renaming ARM64. | Tim Northover | 2014-05-24 | 1 | -11/+0
  I'm doing this in two phases for a better "git blame" record. This commit removes the previous AArch64 backend and redirects all functionality to ARM64. It also deduplicates test-lines and removes orphaned AArch64 tests.

  The next step will be "git mv ARM64 AArch64" and rewire most of the tests.

  Hopefully LLVM is still functional, though it would be even better if no-one ever had to care because the rename happens straight afterwards.

  llvm-svn: 209576
* AArch64/ARM64: add more arm64 lines to AArch64 regression tests | Tim Northover | 2014-04-15 | 1 | -14/+27
  llvm-svn: 206282
* Convert CodeGen/*/*.ll tests to use the new CHECK-LABEL for easier debugging | Stephen Lin | 2013-07-13 | 1 | -1/+1
  No functionality change and all tests pass after conversion. This was done with the following sed invocation to catch label lines demarking function boundaries:

      sed -i '' "s/^;\( *\)\([A-Z0-9_]*\):\( *\)test\([A-Za-z0-9_-]*\):\( *\)$/;\1\2-LABEL:\3test\4:\5/g" test/CodeGen/*/*.ll

  which was written conservatively to avoid false positives rather than false negatives. I scanned through all the changes and everything looks correct.

  llvm-svn: 186258
* AArch64: implement large code model access to global variables. | Tim Northover | 2013-05-04 | 1 | -0/+19
  The MOVZ/MOVK instruction sequence may not be the most efficient (a literal-pool load could be better) but adding that would require reinstating the ConstantIslands pass. For now the sequence is correct, and that's enough.

  Beware, as of commit GNU ld does not appear to support the relocations needed for this. Its primary purpose (for now) will be to support JITed code, since in that case there is no guarantee of where your code will end up in memory relative to external symbols it references.

  llvm-svn: 181117
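  A sketch of large-code-model access to a global (invented names; the relocation spellings are approximate, and the MOVZ/MOVK order shown is the current little-end-first order introduced by the 2017 entry above rather than the original 2013 order):

      @g = external global i32

      define i32 @load_g_large() {
      ; llc -mtriple=aarch64 -code-model=large, rough expectation:
      ;   movz x8, #:abs_g0_nc:g            ; bits [15:0]
      ;   movk x8, #:abs_g1_nc:g            ; bits [31:16]
      ;   movk x8, #:abs_g2_nc:g            ; bits [47:32]
      ;   movk x8, #:abs_g3:g               ; bits [63:48]
      ;   ldr  w0, [x8]
        %v = load i32, i32* @g
        ret i32 %v
      }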
* AArch64: be more careful resorting to inefficient addressing for weak vars. | Tim Northover | 2013-02-28 | 1 | -0/+8
  If an otherwise weak var is actually defined in this unit, it can't be undefined at runtime so we can use normal global variable sequences (ADRP/ADD) to access it.

  llvm-svn: 176259
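  A sketch of the distinction (invented names; the assembly in the comments is an expectation, not the test's actual CHECK lines):

      @defined_weak = weak global i32 0          ; defined in this unit: may be overridden, never absent
      @maybe_absent = extern_weak global i32     ; declaration only: may resolve to null at link time

      define i32* @addr_defined() {
      ; Rough expectation: the normal direct sequence is safe here.
      ;   adrp x0, defined_weak
      ;   add  x0, x0, :lo12:defined_weak
        ret i32* @defined_weak
      }

      define i32* @addr_maybe_absent() {
      ; Rough expectation: an indirect load, so the linker can substitute 0.
      ;   adrp x0, :got:maybe_absent
      ;   ldr  x0, [x0, :got_lo12:maybe_absent]
        ret i32* @maybe_absent
      }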
* AArch64: don't drop GlobalAddress offset when handling extern_weak decls. | Tim Northover | 2013-02-28 | 1 | -0/+13
  llvm-svn: 176258
* AArch64: remove ConstantIsland pass & put literals in separate section. | Tim Northover | 2013-02-15 | 1 | -2/+3
  This implements the review suggestion to simplify the AArch64 backend. If we later discover that we *really* need the extra complexity of the ConstantIslands pass for performance reasons it can be resurrected.

  llvm-svn: 175258
* Implement external weak (ELF) symbols on AArch64 | Tim Northover | 2013-02-06 | 1 | -0/+13
  Weakly defined symbols should evaluate to 0 if they're undefined at link-time. This is impossible to do with the usual address generation patterns, so we should use a literal pool entry to materialise the address.

  llvm-svn: 174518
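  A sketch of the semantics being implemented (invented names; IR only, to show why the address must come from a relocated word rather than an ADRP/ADD pair, which can never produce 0):

      @maybe_present = extern_weak global i32

      define i32 @use_if_present() {
        ; For extern_weak this compare is not folded away: if the symbol is
        ; undefined at link time, its address must be exactly 0.
        %present = icmp ne i32* @maybe_present, null
        br i1 %present, label %yes, label %no
      yes:
        %v = load i32, i32* @maybe_present
        ret i32 %v
      no:
        ret i32 -1
      }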