path: root/llvm/docs/LangRef.rst
Commit message (Author, Date, Files, Lines changed)
* It should also be legal to pass a swifterror parameter to a call as a swifterror argument  (Arnold Schwaighofer, 2016-09-10; 1 file, -4/+5)
  rdar://28233388 llvm-svn: 281147
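  A minimal sketch of the pattern this makes legal; the callee name and error type are illustrative, not from the patch:

      %swift.error = type opaque

      declare float @callee(float, %swift.error** swifterror)

      define float @caller(float %x, %swift.error** swifterror %err) {
        ; The caller's own swifterror parameter is passed straight through as the
        ; callee's swifterror argument.
        %r = call float @callee(float %x, %swift.error** swifterror %err)
        ret float %r
      }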
* Remove restriction that 'sret' must be on the first parameter  (Reid Kleckner, 2016-09-08; 1 file, -4/+3)
  On Windows, it is often applied to the second parameter, and the x86 backend is prepared to deal with
  sret appearing on any parameter. Other backends assume it only appears on parameter zero, but those are
  target-specific requirements, and not an IR-level rule. llvm-svn: 280951
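  A sketch of the now-permitted Windows pattern, with illustrative type names (2016-era sret syntax, before the attribute took a type argument):

      %class.C  = type { i32 }
      %struct.S = type { [16 x i8] }

      ; sret on the second parameter, as produced for a C++ instance method that
      ; returns a struct indirectly under the MS ABI.
      declare x86_thiscallcc void @method(%class.C* %this, %struct.S* sret %out)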
* [LangRef] Clarify !invariant.load semantics.  (Geoff Berry, 2016-08-31; 1 file, -6/+4)
  Based on discussion on llvm-dev. llvm-svn: 280262
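  For reference, a minimal sketch of the metadata being clarified:

      ; The program asserts the memory %p points to is never changed while it is
      ; dereferenceable; the attached node is empty.
      %v = load i32, i32* %p, !invariant.load !0

      !0 = !{}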
* docs: mention that clobbering output regs in inline asm is illegal.  (Peter Zotov, 2016-08-30; 1 file, -0/+3)
  I've found this out the hard way; LLVM will not normally catch this error (unless -verify-machineinstrs
  is passed), and under certain very specific circumstances (such as register scavenger running under
  pressure) this would result in an opaque crash in codegen. llvm-svn: 280071
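  A sketch of the kind of constraint string being declared illegal; the x86 register is chosen purely for illustration:

      ; Invalid: the output is tied to %eax, yet %eax is also listed as a clobber
      ; of the same asm statement.
      %v = call i32 asm "movl $$42, $0", "={eax},~{eax}"()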
* Revert "Revert "Invariant start/end intrinsics overloaded for address space""Mehdi Amini2016-08-131-2/+4
| | | | | | This reverts commit 32fc6488e48eafc0ca1bac1bd9cbf0008224d530. llvm-svn: 278609
* Revert "Invariant start/end intrinsics overloaded for address space"Mehdi Amini2016-08-131-4/+2
| | | | | | This reverts commit r276447. llvm-svn: 278608
* [LangRef] Fix formatting (no semantic change)  (Sanjoy Das, 2016-08-10; 1 file, -5/+4)
  llvm-svn: 278294
* Revert "[X86] Support the "ms-hotpatch" attribute."Charles Davis2016-08-081-19/+1
| | | | | | | | This reverts commit r278048. Something changed between the last time I built this--it takes awhile on my ridiculously slow and ancient computer--and now that broke this. llvm-svn: 278053
* [X86] Support the "ms-hotpatch" attribute.  (Charles Davis, 2016-08-08; 1 file, -1/+19)
  Summary: Based on two patches by Michael Mueller.

  This is a target attribute that causes a function marked with it to be emitted as "hotpatchable". This
  particular mechanism was originally devised by Microsoft for patching their binaries (which they are
  constantly updating to stay ahead of crackers, script kiddies, and other ne'er-do-wells on the Internet),
  but is now commonly abused by Windows programs to hook API functions.

  This mechanism is target-specific. For x86, a two-byte no-op instruction is emitted at the function's
  entry point; the entry point must be immediately preceded by 64 (32-bit) or 128 (64-bit) bytes of
  padding. This padding is where the patch code is written. The two byte no-op is then overwritten with a
  short jump into this code. The no-op is usually a `movl %edi, %edi` instruction; this is used as a magic
  value indicating that this is a hotpatchable function.

  Reviewers: majnemer, sanjoy, rnk
  Subscribers: dberris, llvm-commits
  Differential Revision: https://reviews.llvm.org/D19908
  llvm-svn: 278048
* [IR] Introduce a non-integral pointer type  (Sanjoy Das, 2016-07-28; 1 file, -0/+24)
  Summary: This change adds a `ni` specifier in the `datalayout` string to denote pointers in some given
  address spaces as "non-integral", and adds some typing rules around these special pointers.

  Reviewers: majnemer, chandlerc, atrick, dberlin, eli.friedman, tstellarAMD, arsenm
  Subscribers: arsenm, mcrosier, llvm-commits
  Differential Revision: https://reviews.llvm.org/D22488
  llvm-svn: 277085
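  A rough sketch of the new specifier; everything in the string other than the `ni` component is illustrative:

      ; Pointers in address spaces 1 and 5 are non-integral.
      target datalayout = "e-m:e-p:64:64-ni:1:5"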
* fix some typos in the doc  (Sylvestre Ledru, 2016-07-28; 1 file, -1/+1)
  llvm-svn: 276968
* docs: Add reference to type metadata to langref.  (Peter Collingbourne, 2016-07-26; 1 file, -0/+4)
  llvm-svn: 276817
* Invariant start/end intrinsics overloaded for address space  (Anna Thomas, 2016-07-22; 1 file, -2/+4)
  Summary: The llvm.invariant.start and llvm.invariant.end intrinsics currently support specifying
  invariant memory objects only in the default address space. With this change, these intrinsics are
  overloaded for any address space for memory objects and we can use these llvm invariant intrinsics in
  non-default address spaces.

  Example: llvm.invariant.start.p1i8(i64 4, i8 addrspace(1)* %ptr)

  This overloaded intrinsic is needed for representing final or invariant memory in managed languages.

  Reviewers: apilipenko, reames
  Subscribers: llvm-commits
  llvm-svn: 276447
* Revert "Invariant start/end intrinsics overloaded for address space"Anna Thomas2016-07-211-4/+2
| | | | | | This reverts commit r276316. llvm-svn: 276320
* Invariant start/end intrinsics overloaded for address space  (Anna Thomas, 2016-07-21; 1 file, -2/+4)
  Summary: The llvm.invariant.start and llvm.invariant.end intrinsics currently support specifying
  invariant memory objects only in the default address space. With this change, these intrinsics are
  overloaded for any address space for memory objects and we can use these llvm invariant intrinsics in
  non-default address spaces.

  Example: llvm.invariant.start.p1i8(i64 4, i8 addrspace(1)* %ptr)

  This overloaded intrinsic is needed for representing final or invariant memory in managed languages.

  Reviewers: tstellarAMD, reames, apilipenko
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D22519
  llvm-svn: 276316
* [docs] Fixing Sphinx warnings to unclog the buildbot  (Renato Golin, 2016-07-20; 1 file, -63/+63)
  Lots of blocks had "llvm" or "nasm" syntax types but either weren't following the syntax, or the syntax
  has changed (and Sphinx hasn't kept up), or the type doesn't even exist (nasm?). Other documents had
  :options: that were invalid. I only removed those that had warnings, and left the ones that didn't, in
  order to follow the principle of least surprise.

  It has been like this for ages, but the buildbot is now failing on errors. It may take a while to
  upgrade the buildbot's Sphinx, if that's even possible, but that shouldn't stop us from getting docs
  updates (which seem down for quite a while). Also, we're not losing any syntax highlighting, since when
  it doesn't parse, it doesn't colour. I.e. those blocks are not being highlighted anyway.

  I'm trying to get all docs in one go, so that it's easy to revert later if we do fix it, or at least
  easy to know what's left to fix. llvm-svn: 276109
* PR28516: Fix LangRef description of call and invoke to match IR changes for typeless pointers  (David Blaikie, 2016-07-13; 1 file, -14/+16)
  llvm-svn: 275283
* Update the LangRef description of the 'returned' attribute  (Hal Finkel, 2016-07-10; 1 file, -6/+7)
  The description of the 'returned' attribute says that it is only used when code-generating the caller.
  I'd like to make the optimizer smarter about looking through functions with returned arguments
  (generally, but motivated by my llvm.noalias work). As David pointed out in the review of D22202, the
  LangRef should be updated to make its expanded uses clearer.

  Differential Revision: http://reviews.llvm.org/D22205
  llvm-svn: 275026
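  For context, a minimal sketch of the attribute in question; the function name is illustrative:

      ; The callee promises to return its first argument, so the optimizer may
      ; substitute %p for the call's result in the caller.
      declare i8* @passthrough(i8* returned %p)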
* NVPTX: Replace uses of cuda.syncthreads with nvvm.barrier0  (Justin Bogner, 2016-07-06; 1 file, -1/+1)
  Everywhere where cuda.syncthreads or __syncthreads is used, use the properly namespaced nvvm.barrier0
  instead. llvm-svn: 274664
* Add writeonly IR attribute  (Nicolai Haehnle, 2016-07-04; 1 file, -0/+7)
  Summary: This complements the earlier addition of IntrWriteMem and IntrWriteArgMem LLVM intrinsic
  properties, see D18291.

  Also start using the attribute for memset, memcpy, and memmove intrinsics, and remove their
  special-casing in BasicAliasAnalysis.

  Reviewers: reames, joker.eph
  Subscribers: joker.eph, llvm-commits
  Differential Revision: http://reviews.llvm.org/D18714
  llvm-svn: 274485
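  A brief sketch of the new attribute on an ordinary argument; the function name is made up:

      ; The callee may write through %out but never reads the memory it points to.
      declare void @fill(i8* nocapture writeonly %out, i64 %len)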
* fix some various typos in the doc  (Sylvestre Ledru, 2016-07-02; 1 file, -2/+2)
  llvm-svn: 274449
* Support arbitrary addrspace pointers in masked load/store intrinsics  (Artur Pilipenko, 2016-06-28; 1 file, -10/+10)
  This is a resubmission of the 263158 change after fixing the existing problem with intrinsics mangling
  (see the "LTO and intrinsics mangling" llvm-dev thread for details).

  This patch fixes the problem which occurs when loop-vectorize tries to use @llvm.masked.load/store
  intrinsic for a non-default addrspace pointer. It fails with "Calling a function with a bad signature!"
  assertion in CallInst constructor because it tries to pass a non-default addrspace pointer to the
  pointer argument which has default addrspace. The fix is to add pointer type as another overloaded type
  to @llvm.masked.load/store intrinsics.

  Reviewed By: reames
  Differential Revision: http://reviews.llvm.org/D17270
  llvm-svn: 274043
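  A hedged sketch of the overloaded form; the mangled suffix is reconstructed from the usual naming scheme and may not match the in-tree spelling exactly:

      ; Masked load from a pointer in address space 1; the pointer type is now
      ; part of the intrinsic's overloaded name.
      declare <8 x i32> @llvm.masked.load.v8i32.p1v8i32(<8 x i32> addrspace(1)*, i32, <8 x i1>, <8 x i32>)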
* Verifier: Reject non-float !fpmath  (Matt Arsenault, 2016-06-27; 1 file, -2/+2)
  Code already assumes this is float. getFPAccuracy() crashes on any other type. llvm-svn: 273912
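  For reference, a minimal sketch of well-formed !fpmath metadata; the accuracy operand must be a float:

      ; Allow the division to be computed with at most 2.5 ULPs of error.
      %r = fdiv float %a, %b, !fpmath !0

      !0 = !{float 2.500000e+00}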
* Revert -r273892 "Support arbitrary addrspace pointers in masked load/store intrinsics" since some of the clang tests don't expect to see the updated signatures.  (Artur Pilipenko, 2016-06-27; 1 file, -10/+10)
  llvm-svn: 273895
* Support arbitrary addrspace pointers in masked load/store intrinsics  (Artur Pilipenko, 2016-06-27; 1 file, -10/+10)
  This is a resubmission of the 263158 change after fixing the existing problem with intrinsics mangling
  (see the "LTO and intrinsics mangling" llvm-dev thread for details).

  This patch fixes the problem which occurs when loop-vectorize tries to use @llvm.masked.load/store
  intrinsic for a non-default addrspace pointer. It fails with "Calling a function with a bad signature!"
  assertion in CallInst constructor because it tries to pass a non-default addrspace pointer to the
  pointer argument which has default addrspace. The fix is to add pointer type as another overloaded type
  to @llvm.masked.load/store intrinsics.

  Reviewed By: reames
  Differential Revision: http://reviews.llvm.org/D17270
  llvm-svn: 273892
* IR: Introduce llvm.type.checked.load intrinsic.  (Peter Collingbourne, 2016-06-25; 1 file, -0/+54)
  This intrinsic safely loads a function pointer from a virtual table pointer using type metadata. This
  intrinsic is used to implement control flow integrity in conjunction with virtual call optimization.
  The virtual call optimization pass will optimize away llvm.type.checked.load intrinsics associated with
  devirtualized calls, thereby removing the type check in cases where it is not needed to enforce the
  control flow integrity constraint.

  This patch also introduces the capability to copy type metadata between global variables, and teaches
  the virtual call optimization pass to do so.

  Differential Revision: http://reviews.llvm.org/D21121
  llvm-svn: 273756
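  A minimal sketch of a call to the intrinsic; the vtable offset and the type identifier string are illustrative:

      declare {i8*, i1} @llvm.type.checked.load(i8*, i32, metadata)

      ; Load the function pointer at byte offset 8 from %vtable, checked against
      ; the type identifier "_ZTS1A".
      %pair = call {i8*, i1} @llvm.type.checked.load(i8* %vtable, i32 8, metadata !"_ZTS1A")
      %fptr = extractvalue {i8*, i1} %pair, 0
      %ok   = extractvalue {i8*, i1} %pair, 1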
* IR: New representation for CFI and virtual call optimization pass metadata.  (Peter Collingbourne, 2016-06-24; 1 file, -12/+6)
  The bitset metadata currently used in LLVM has a few problems:

  1. It has the wrong name. The name "bitset" refers to an implementation detail of one use of the
     metadata (i.e. its original use case, CFI). This makes it harder to understand, as the name makes no
     sense in the context of virtual call optimization.

  2. It is represented using a global named metadata node, rather than being directly associated with a
     global. This makes it harder to manipulate the metadata when rebuilding global variables, summarise
     it as part of ThinLTO and drop unused metadata when associated globals are dropped. For this reason,
     CFI does not currently work correctly when both CFI and vcall opt are enabled, as vcall opt needs to
     rebuild vtable globals, and fails to associate metadata with the rebuilt globals. As I understand it,
     the same problem could also affect ASan, which rebuilds globals with a red zone.

  This patch solves both of those problems in the following way:

  1. Rename the metadata to "type metadata". This new name reflects how the metadata is currently being
     used (i.e. to represent type information for CFI and vtable opt). The new name is reflected in the
     name for the associated intrinsic (llvm.type.test) and pass (LowerTypeTests).

  2. Attach metadata directly to the globals that it pertains to, rather than using the "llvm.bitsets"
     global metadata node as we are doing now. This is done using the newly introduced capability to
     attach metadata to global variables (r271348 and r271358).

  See also: http://lists.llvm.org/pipermail/llvm-dev/2016-June/100462.html
  Differential Revision: http://reviews.llvm.org/D21053
  llvm-svn: 273729
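  A rough sketch of the new attachment alongside the intrinsic that consumes it; the virtual function, vtable, and type identifier names are illustrative:

      declare void @vfunc(i8*)

      ; Type metadata is now attached directly to the global it describes.
      @vtable = constant [1 x i8*] [i8* bitcast (void (i8*)* @vfunc to i8*)], !type !0

      !0 = !{i64 0, !"_ZTS1A"}

      ; CFI checks then test a pointer against the type identifier:
      declare i1 @llvm.type.test(i8*, metadata)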
* LangRef: Note expectations when loading with extra alignment  (Matt Arsenault, 2016-06-16; 1 file, -2/+14)
  llvm-svn: 272914
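  For reference, a minimal sketch of a load carrying a larger-than-natural alignment:

      ; The program asserts that %p is 16-byte aligned, even though i32 itself
      ; only requires 4-byte alignment.
      %v = load i32, i32* %p, align 16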
* IR: Introduce local_unnamed_addr attribute.  (Peter Collingbourne, 2016-06-14; 1 file, -13/+23)
  If a local_unnamed_addr attribute is attached to a global, the address is known to be insignificant
  within the module. It is distinct from the existing unnamed_addr attribute in that it only describes a
  local property of the module rather than a global property of the symbol.

  This attribute is intended to be used by the code generator and LTO to allow the linker to decide
  whether the global needs to be in the symbol table. It is possible to exclude a global from the symbol
  table if three things are true:

  - This attribute is present on every instance of the global (which means that the normal rule that the
    global must have a unique address can be broken without being observable by the program by performing
    comparisons against the global's address)
  - The global has linkonce_odr linkage (which means that each linkage unit must have its own copy of the
    global if it requires one, and the copy in each linkage unit must be the same)
  - It is a constant or a function (which means that the program cannot observe that the unique-address
    rule has been broken by writing to the global)

  Although this attribute could in principle be computed from the module contents, LTO clients (i.e.
  linkers) will normally need to be able to compute this property as part of symbol resolution, and it
  would be inefficient to materialize every module just to compute it.

  See:
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160509/356401.html
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160516/356738.html
  for earlier discussion.

  Part of the fix for PR27553.

  Differential Revision: http://reviews.llvm.org/D20348
  llvm-svn: 272709
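  A minimal sketch of a global that meets the three conditions above; the name and value are illustrative:

      ; A linkonce_odr constant whose address is insignificant within the module,
      ; so the linker may omit it from the symbol table.
      @kTable = linkonce_odr local_unnamed_addr constant i32 42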
* [SystemZ] Enable index register memory constraints for inline ASM  (Ulrich Weigand, 2016-06-13; 1 file, -4/+8)
  This enables use of the 'R' and 'T' memory constraints for inline ASM operands on SystemZ, which allow
  an index register as well as an immediate displacement. This patch includes corresponding documentation
  and test case updates.

  As with the last patch of this kind, I moved the 'm' constraint to the most general case, which is now
  'T' (base + 20-bit signed displacement + index register).

  Author: colpell
  Differential Revision: http://reviews.llvm.org/D21239
  llvm-svn: 272547
* Update call site attribute documentation  (Matt Arsenault, 2016-06-10; 1 file, -2/+2)
  convergent is also accepted. llvm-svn: 272353
* [SystemZ] Enable long displacement constraints for inline ASM operands  (Ulrich Weigand, 2016-06-09; 1 file, -2/+4)
  This enables use of the 'S' constraint for inline ASM operands on SystemZ, which allows for a memory
  reference with a signed 20-bit immediate displacement. This patch includes corresponding documentation
  and test case updates.

  I've changed the 'T' constraint to match the new behavior for 'S', as 'T' also uses a long displacement
  (though index constraints are still not implemented). I also changed 'm' to match the behavior for 'S'
  as this will allow for a wider range of displacements for 'm', though correct me if that's not the
  right decision.

  Author: colpell
  Differential Revision: http://reviews.llvm.org/D21097
  llvm-svn: 272266
* [IR] Disallow loading and storing unsized types  (Sanjoy Das, 2016-06-01; 1 file, -13/+14)
  Summary: It isn't clear what the operational meaning of loading or storing an unsized type is, since it
  cannot be lowered into something meaningful. Since there does not seem to be any practical need for it
  either, make such loads and stores illegal IR.

  Reviewers: majnemer, chandlerc
  Subscribers: mcrosier, llvm-commits
  Differential Revision: http://reviews.llvm.org/D20846
  llvm-svn: 271402
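  A sketch of the kind of IR this change makes invalid; the opaque type name is made up:

      %unsized = type opaque

      ; Now rejected: %unsized has no known size, so the load cannot be lowered.
      %v = load %unsized, %unsized* %p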
* Add support for metadata attachments for global variables.  (Peter Collingbourne, 2016-05-31; 1 file, -2/+3)
  This patch adds an IR, assembly and bitcode representation for metadata attachments for globals. Future
  patches will port existing features to use these new attachments.

  Differential Revision: http://reviews.llvm.org/D20074
  llvm-svn: 271348
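  A minimal sketch of the new assembly form; the metadata kind !foo is a placeholder:

      @g = global i32 0, !foo !0

      !0 = !{!"arbitrary payload"}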
* [Docs] CodeGen has supported vector icmp/fcmp for a long time.  (Ahmed Bougacha, 2016-05-31; 1 file, -6/+0)
  The IR support is already well-documented. llvm-svn: 271315
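  For reference, a minimal sketch of a vector comparison:

      ; Element-wise signed comparison producing a vector of i1.
      %mask = icmp slt <4 x i32> %a, %b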
* [CaptureTracking] Volatile operations capture their memory location  (David Majnemer, 2016-05-26; 1 file, -1/+2)
  The memory location that corresponds to a volatile operation is very special. They are observed by the
  machine in ways which we cannot reason about.

  Differential Revision: http://reviews.llvm.org/D20555
  llvm-svn: 270879
* Fail early on unknown appending linkage variables.  (Rafael Espindola, 2016-05-16; 1 file, -0/+5)
  In practice only a few well known appending linkage variables work. Currently if codegen sees an
  unknown appending linkage variable it will just print it as a regular global. That is wrong as the
  symbol in the produced object file has different semantics than the one provided by the appending
  linkage.

  This just errors early instead of producing a broken .o.

  llvm-svn: 269706
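  For context, a rough sketch of one of the well-known appending-linkage variables; the constructor name is illustrative:

      define void @ctor() {
        ret void
      }

      @llvm.global_ctors = appending global [1 x { i32, void ()*, i8* }]
                           [{ i32, void ()*, i8* } { i32 65535, void ()* @ctor, i8* null }]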
* [Docs] clarify semantics of x.with.overflow intrinsics  (John Regehr, 2016-05-12; 1 file, -1/+20)
  Differential Revision: http://reviews.llvm.org/D20151
  llvm-svn: 269346
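  For reference, a minimal sketch of one of these intrinsics in use:

      declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)

      %res  = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
      %sum  = extractvalue { i32, i1 } %res, 0   ; wrapped result
      %ovfl = extractvalue { i32, i1 } %res, 1   ; true if signed overflow occurred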
* All llvm.deoptimize declarations must use the same calling convention  (Sanjoy Das, 2016-05-12; 1 file, -0/+3)
  This new verifier rule lets us unambiguously pick a calling convention when creating a new declaration
  for `@llvm.experimental.deoptimize.<ty>`. It is also congruent with our lowering strategy -- since all
  calls to `@llvm.experimental.deoptimize` are lowered to calls to `__llvm_deoptimize`, it is reasonable
  to enforce a unique calling convention.

  Some of the tests that were breaking this verifier rule have had to be split up into different .ll
  files. The inliner was violating this rule as well, and has been fixed to avoid producing invalid IR.

  llvm-svn: 269261
* Make "@name =" mandatory for globals in .ll files.Rafael Espindola2016-05-101-1/+1
| | | | | | | | | | | | | | | | | | | | | | | An oddity of the .ll syntax is that the "@var = " in @var = global i32 42 is optional. Writing just global i32 42 is equivalent to @0 = global i32 42 This means that there is a pretty big First set at the top level. The current implementation maintains it manually. I was trying to refactor it, but then started wondering why keep it a all. I personally find the above syntax confusing. It looks like something is missing. This patch removes the feature and simplifies the parser. llvm-svn: 269096
* [LowerGuardIntrinsics] Keep track of !make.implicit metadata  (Sanjoy Das, 2016-04-30; 1 file, -1/+6)
  If a guard call being lowered by LowerGuardIntrinsics has the `!make.implicit` metadata attached, then
  reattach the metadata to the branch in the resulting expanded form of the intrinsic. This allows us to
  implement null checks as guards and still get the benefit of implicit null checks.

  llvm-svn: 268148
* Fixed sphinx warning from r267672  (Adam Nemet, 2016-04-27; 1 file, -1/+1)
  llvm-svn: 267675
* [LoopDist] Add llvm.loop.distribute.enable loop metadata  (Adam Nemet, 2016-04-27; 1 file, -0/+21)
  Summary: D19403 adds a new pragma for loop distribution. This change adds support for the corresponding
  metadata that the pragma is translated to by the FE.

  As part of this I had to rethink the flag -enable-loop-distribute. My goal was to be backward
  compatible with the existing behavior:

    A1. pass is off by default from the optimization pipeline unless -enable-loop-distribute is specified
    A2. pass is on when invoked directly from opt (e.g. for unit-testing)

  The new pragma/metadata overrides these defaults, so the new behavior is:

    B1. A1 + enable distribution for individual loops with the pragma/metadata
    B2. A2 + disable distribution for individual loops with the pragma/metadata

  The default value of whether the pass is on or off comes from the initiator of the pass. From the
  PassManagerBuilder the default is off, from opt it's on. I moved -enable-loop-distribute under the
  pass. If the flag is specified it overrides the default from above. Then the pragma/metadata can
  further modify this per loop. As a side-effect, we can now also use -enable-loop-distribute=0 from opt
  to emulate the default from the optimization pipeline.

  So to be precise this is the new behavior:

    C1. pass is off by default from the optimization pipeline unless -enable-loop-distribute or the
        pragma/metadata enables it
    C2. pass is on when invoked directly from opt unless -enable-loop-distribute=0 or the pragma/metadata
        disables it

  Reviewers: hfinkel
  Subscribers: joker.eph, mzolotukhin, llvm-commits
  Differential Revision: http://reviews.llvm.org/D19431
  llvm-svn: 267672
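  A minimal sketch of the loop metadata the pragma lowers to, attached to the loop's latch branch; block and value names are illustrative:

      br i1 %exit.cond, label %exit, label %loop.header, !llvm.loop !0

      !0 = distinct !{!0, !1}
      !1 = !{!"llvm.loop.distribute.enable", i1 true}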
* [Docs] Try to clarify the concept of domains for noalias scope  (Adam Nemet, 2016-04-27; 1 file, -1/+9)
  Summary: This tries to anchor down the concept of domains a bit better. I had trouble initially
  relating this to anything. Also talking to David Majnemer on IRC suggested that I wasn't the only one.

  Reviewers: hfinkel
  Subscribers: llvm-commits, majnemer
  Differential Revision: http://reviews.llvm.org/D18799
  llvm-svn: 267647
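  For reference, a rough sketch of how scopes and domains relate in the metadata; all names are illustrative:

      ; One domain, and two scopes that belong to it.
      !0 = !{!0, !"my.domain"}
      !1 = !{!1, !0, !"scope.a"}
      !2 = !{!2, !0, !"scope.b"}

      ; Scope lists, as attached to memory accesses via !alias.scope / !noalias.
      !3 = !{!1}
      !4 = !{!2}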
* [LoopVectorize] Don't consider conditional-load dereferenceability for marked parallel loops  (Hal Finkel, 2016-04-26; 1 file, -1/+2)
  I really thought we were doing this already, but we were not. Given this input:

      void Test(int *res, int *c, int *d, int *p) {
        for (int i = 0; i < 16; i++)
          res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
      }

  we did not vectorize the loop. Even with "assume_safety" the check that we don't if-convert
  conditionally-executed loads (to protect against data-dependent dereferenceability) was not elided.

  One subtlety: As implemented, it will still prefer to use a masked-load intrinsic (given target
  support) over the speculated load. The choice here seems architecture specific; the best option depends
  on how expensive the masked load is compared to a regular load. Ideally, using the masked load still
  reduces unnecessary memory traffic, and so should be preferred. If we'd rather do it the other way,
  flipping the order of the checks is easy.

  The LangRef is updated to make explicit that llvm.mem.parallel_loop_access also implies that if
  conversion is okay.

  Differential Revision: http://reviews.llvm.org/D19512
  llvm-svn: 267514
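  For reference, a minimal sketch of that metadata on an access inside a marked parallel loop; names are illustrative:

      %v = load i32, i32* %addr, align 4, !llvm.mem.parallel_loop_access !0

      !0 = distinct !{!0}   ; the loop's identifier metadata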
* DebugInfo: Remove MDString-based type references  (Duncan P. N. Exon Smith, 2016-04-23; 1 file, -11/+15)
  Eliminate DITypeIdentifierMap and make DITypeRef a thin wrapper around DIType*. It is no longer legal
  to refer to a DICompositeType by its 'identifier:', and DIBuilder no longer retains all types with an
  'identifier:' automatically.

  Aside from the bitcode upgrade, this is mainly removing logic to resolve an MDString-based reference to
  an actual DIType. The commits leading up to this have made the implicit type map in DICompileUnit's
  'retainedTypes:' field superfluous.

  This does not remove DITypeRef, DIScopeRef, DINodeRef, and DITypeRefArray, or stop using them in
  DI-related metadata. Although as of this commit they aren't serving a useful purpose, there are patches
  under review to reuse them for CodeView support.

  The tests in LLVM were updated with deref-typerefs.sh, which is attached to the thread "[RFC]
  Lazy-loading of debug info metadata":
  http://lists.llvm.org/pipermail/llvm-dev/2016-April/098318.html

  llvm-svn: 267296
* Introduce llvm.load.relative intrinsic.  (Peter Collingbourne, 2016-04-22; 1 file, -0/+25)
  This intrinsic takes two arguments, ``%ptr`` and ``%offset``. It loads a 32-bit value from the address
  ``%ptr + %offset``, adds ``%ptr`` to that value and returns it.

  The constant folder specifically recognizes the form of this intrinsic and the constant initializers it
  may load from; if a loaded constant initializer is known to have the form ``i32 trunc(x - %ptr)``, the
  intrinsic call is folded to ``x``. LLVM provides that the calculation of such a constant initializer
  will not overflow at link time under the medium code model if ``x`` is an ``unnamed_addr`` function.
  However, it does not provide this guarantee for a constant initializer folded into a function body.
  This intrinsic can be used to avoid the possibility of overflows when loading from such a constant.

  Differential Revision: http://reviews.llvm.org/D18367
  llvm-svn: 267223
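  A minimal sketch of the intrinsic in use; the overload suffix follows the offset type, and the variable names are illustrative:

      declare i8* @llvm.load.relative.i32(i8*, i32)

      ; Load the 32-bit relative offset stored at %ptr + 4 and return %ptr plus
      ; that offset.
      %target = call i8* @llvm.load.relative.i32(i8* %ptr, i32 4)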
* Document source_filename in LangRef.  (Teresa Johnson, 2016-04-22; 1 file, -0/+20)
  Summary: Add documentation for new LLVM IR source_filename identifier.

  Reviewers: joker.eph, majnemer
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D18857
  llvm-svn: 267150
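  For reference, a minimal sketch of the new top-level identifier; the path is a placeholder:

      source_filename = "path/to/original.c"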
* [AArch64] [ARM] Make a target-independent llvm.thread.pointer intrinsic.  (Marcin Koscielnicki, 2016-04-19; 1 file, -0/+27)
  Both AArch64 and ARM support llvm.<arch>.thread.pointer intrinsics that just return the thread pointer.
  I have a pending patch that does the same for SystemZ (D19054), and there are many more targets that
  could benefit from one.

  This patch merges the ARM and AArch64 intrinsics into a single target independent one that will also be
  used by subsequent targets.

  Differential Revision: http://reviews.llvm.org/D19098
  llvm-svn: 266818
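  A minimal sketch of the target-independent form (2016-era typed-pointer syntax):

      declare i8* @llvm.thread.pointer()

      %tp = call i8* @llvm.thread.pointer()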
* [SSP, 2/2] Create llvm.stackguard() intrinsic and lower it to LOAD_STACK_GUARD  (Tim Shen, 2016-04-19; 1 file, -0/+35)
  With this change, ideally IR passes can always generate an llvm.stackguard call to get the stack guard;
  but for now there are still IR-form stack guard customizations around (see getIRStackGuard()). Future
  SSP customization should go through LOAD_STACK_GUARD.

  There is a behavior change: stack guard values are not CSEed anymore, since we should never reuse the
  value in case it has been spilled (and corrupted). See ssp-guard-spill.ll. This also causes changes in
  stack size and codegen in X86 and AArch64 test cases.

  Ideally we'd like to know if the guard created in llvm.stackprotector() gets spilled or not. If the
  value is spilled, discard the value and reload the stack guard; otherwise reuse the value. This can be
  done by teaching the register allocator to know how to rematerialize LOAD_STACK_GUARD and force a
  rematerialization (which seems hard), or by checking for spilling in expandPostRAPseudo. It only makes
  sense when the stack guard is a global variable, which requires more instructions to load. Anyway, this
  seems to go out of the scope of the current patch.

  llvm-svn: 266806
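  A minimal sketch of the new intrinsic (2016-era typed-pointer syntax):

      declare i8* @llvm.stackguard()

      ; Returns the loaded stack guard value for the current function.
      %guard = call i8* @llvm.stackguard()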