path: root/llvm/lib/Target/PowerPC/PPCRegisterInfo.cpp
* Remove getEHExceptionRegister and getEHHandlerRegister (Rafael Espindola, 2013-10-07; 1 file, -8/+0)
  They haven't been used for a long time. Patch by MathOnNapkins. llvm-svn: 192099
* [PowerPC] Add handling for conversions to fast-isel (Bill Schmidt, 2013-08-30; 1 file, -0/+2)
  Yet another chunk of fast-isel code. This one handles various conversions involving floating-point. (It also includes some miscellaneous handling throughout the back end for LWA_32 and LWAX_32 that should have been part of the load-store patch.) llvm-svn: 189677
* Use function attributes to indicate that we don't want to realign the stack (Bill Wendling, 2013-08-01; 1 file, -1/+1)
  Function attributes are the future! So just query whether we want to realign the stack directly from the function instead of through a random target options structure. llvm-svn: 187618
* PPC: Support dynamic allocas with large alignment (Hal Finkel, 2013-07-18; 1 file, -26/+48)
  Support for dynamic stack alignments in the PPC backend has been unfinished, in part because it depends on dynamic stack realignment (which I only just recently implemented fully). Now we can also support dynamic allocas with higher than the default target stack alignment (16 bytes).

  In order to round up the requested size to the maximum requested alignment, we need an additional register to hold the rounded-up size. We're already using one scavenged register to hold the previous stack-pointer value (which needs to be stored with the signal-safe stdux update), and so when we have dynamic allocas and a large alignment, we allocate two emergency spill slots for the scavenger. llvm-svn: 186562
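  The rounding step described above is standard power-of-two bit arithmetic; a minimal sketch (hypothetical helper name, assuming MaxAlign is a power of two):

    // Round Size up to the next multiple of MaxAlign (a power of two).
    static uint64_t roundUpToMaxAlign(uint64_t Size, uint64_t MaxAlign) {
      return (Size + MaxAlign - 1) & ~(MaxAlign - 1);
    }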
* PPC: Add base-pointer support to builtin setjmp/longjmp (Hal Finkel, 2013-07-17; 1 file, -16/+12)
  First, this changes the base-pointer implementation to remove an unnecessary complication (and one that is incompatible with how builtin SjLj is implemented): instead of using r31 as the base pointer when it is not needed as a frame pointer, now the base pointer will always be r30 when needed.

  Second, we introduce another pseudo register, BP, which is used just like the FP pseudo register to refer to the base register before we know for certain what register it will be.

  Third, we now save BP into the jmp_buf, and restore r30 from that slot in longjmp. If the function that called setjmp did not use a base pointer, then r30 will be overwritten by the setjmp-calling-function's restore code. FP restoration (which is restored into r31) works the same way. llvm-svn: 186545
* PPC: Implement base pointer and stack realignment (Hal Finkel, 2013-07-17; 1 file, -11/+70)
  This builds on some frame-lowering code that has existed since 2005 (r24224) but was disabled in 2008 (r48188) because it needed base pointer support to function correctly. This implementation follows the strategy suggested by Dale Johannesen in r48188, where the following comment was added:

    This does not currently work, because the delta between old and new
    stack pointers is added to offsets that reference incoming parameters
    after the prolog is generated, and the code that does that doesn't
    handle a variable delta. You don't want to do that anyway; a better
    approach is to reserve another register that retains to the incoming
    stack pointer, and reference parameters relative to that.

  And now we do exactly that. If we don't need a frame pointer, then we use r31 as a base pointer. If we do need a frame pointer, then we use r30 as a base pointer. The base pointer retains the value of the stack pointer before it was decremented in the prologue. We then use the base pointer to resolve all negative frame indices. The basic scheme follows that for base pointers in the X86 backend.

  We use a base pointer when we need to dynamically realign the incoming stack pointer. This currently applies only to static objects (dynamic allocas with large alignments, and base-pointer support in SjLj lowering will come in future commits). llvm-svn: 186478
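  A hedged sketch of the register choice this commit describes (hypothetical helper; note that the commit just above in this log subsequently changes the scheme so that the base pointer is always r30):

    // Base-pointer selection as of this commit: r31 doubles as the base
    // pointer when no frame pointer is needed; otherwise r30 is used.
    static unsigned selectBaseRegister(bool NeedsFP) {
      return NeedsFP ? PPC::R30 : PPC::R31;
    }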
* [PowerPC] Use mtocrf when available (Ulrich Weigand, 2013-07-03; 1 file, -1/+1)
  Just as with mfocrf, it is also preferable to use mtocrf instead of mtcrf when only a single CR register is to be written. Current code however always emits mtcrf.

  This probably does not matter when using an external assembler, since the GNU assembler will in fact automatically replace mtcrf with mtocrf when possible. It does create inefficient code with the integrated assembler, however.

  To fix this, this patch adds MTOCRF/MTOCRF8 instruction patterns and uses those instead of MTCRF/MTCRF8 everywhere. Just as done in the MFOCRF patch committed as 185556, these patterns will be converted back to MTCRF if MTOCRF is not available on the machine.

  As a side effect, this allows us to modify the MTCRF pattern to accept the full range of mask operands for the benefit of the asm parser. llvm-svn: 185561
* [PowerPC] Always use mfocrf if available (Ulrich Weigand, 2013-07-03; 1 file, -2/+2)
  When accessing just a single CR register, it is always preferable to use mfocrf instead of mfcr, if the former is available on the CPU. Current code makes that distinction in many, but not all, places where a single CR register value is retrieved. One missing location is PPCRegisterInfo::lowerCRSpilling.

  To fix this and make this simpler in the future, this patch changes the bulk of the back-end to always assume mfocrf is available and simply generate it when needed. On machines that actually do not support mfocrf, the instruction is replaced by mfcr at the very end, in EmitInstruction.

  This has the additional benefit that we no longer need the MFCRpseud hack, since before EmitInstruction we always have a MFOCRF instruction pattern, which already models data flow as required. The patch also adds the MFOCRF8 version of the instruction, which was missing so far.

  Except for the PPCRegisterInfo::lowerCRSpilling case, no change in generated code is intended. llvm-svn: 185556
* Cleanup PPC Altivec registers in CSR lists and improve VRSAVE handling (Hal Finkel, 2013-07-02; 1 file, -17/+35)
  There are a couple of (small) related changes here:

  1. The printed name of the VRSAVE register has been changed from VRsave to vrsave in order to match the name accepted by GNU binutils.

  2. Support for parsing vrsave has been added to the asm parser (it seems that there was no test case specifically covering this code, so I've added one).

  3. The list of Altivec registers, which was common to all calling conventions, has been separated out. This allows us to define the base CSR lists, and then lists for each ABI with Altivec included. This allows SjLj, for example, to work correctly on non-Altivec targets without using unnatural definitions of the NoRegs CSR list.

  4. VRSAVE is now always reserved on non-Darwin targets and all Altivec registers are reserved when Altivec is disabled.

  With these changes, it is now possible to compile a function containing __builtin_unwind_init() on Linux/PPC64 with debugging information. This did not work previously because GNU binutils assumes that all .cfi_offset offsets will be 8-byte aligned on PPC64 (and errors out if you provide a non-8-byte-aligned offset). This is not true for the vrsave register, however; because this register is used only on Darwin, GCC does not bother printing a .cfi_offset entry for it (even though there is a slot in the stack frame for it as specified by the ABI). This change allows us to do the same: we will also not print .cfi_offset directives for vrsave. llvm-svn: 185409
* DebugInfo: remove target-specific Frame Index handling for DBG_VALUE MachineInstrs (David Blaikie, 2013-06-16; 1 file, -3/+3)
  Frame index handling is now target-agnostic, so delete the target hooks for creation & asm printing of target-specific addressing in DBG_VALUEs and any related functions. llvm-svn: 184067
* Don't cache the instruction and register info from the TargetMachine, because the internals of TargetMachine could change (Bill Wendling, 2013-06-07; 1 file, -4/+12)
  No functionality change intended. llvm-svn: 183494
* [PowerPC] Use true offset value in "memrix" machine operands (Ulrich Weigand, 2013-05-16; 1 file, -25/+5)
  This is the second part of the change to always return "true" offset values from getPreIndexedAddressParts, tackling the case of "memrix" type operands.

  This is about instructions like LD/STD that only have a 14-bit field to encode immediate offsets, which are implicitly extended by two zero bits by the machine, so that in effect we can access 16-bit offsets as long as they are a multiple of 4.

  The PowerPC back end currently handles such instructions by carrying the 14-bit value (as it will get encoded into the actual machine instructions) in the machine operand fields for such instructions. This means that those values are in fact not the true offset, but rather the offset divided by 4 (and then truncated to an unsigned 14-bit value). Like in the case fixed in r182012, this makes common code operations on such offset values not work as expected. Furthermore, there doesn't really appear to be any strong reason why we should encode machine operands this way.

  This patch therefore changes the encoding of "memrix" type machine operands to simply contain the "true" offset value as a signed immediate value, while enforcing the rules that it must fit in a 16-bit signed value and must also be a multiple of 4. This change must be made simultaneously in all places that access machine operands of this type. However, just about all those changes make the code simpler; in many cases we can now just share the same code for memri and memrix operands. llvm-svn: 182032
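  A minimal sketch of the invariant the new encoding enforces (hypothetical helper name; isInt<> lives in llvm/Support/MathExtras.h):

    // A "true" memrix offset must fit in a signed 16-bit immediate and be
    // a multiple of 4, since the hardware field stores Offset >> 2.
    static bool isValidMemRIXOffset(int64_t Offset) {
      return isInt<16>(Offset) && (Offset & 3) == 0;
    }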
* Implement PPC counter loops as a late IR-level pass (Hal Finkel, 2013-05-15; 1 file, -0/+5)
  The old PPCCTRLoops pass, like the Hexagon pass version from which it was derived, could only handle some simple loops in canonical form. We cannot directly adapt the new Hexagon hardware loops pass, however, because the Hexagon pass contains a fundamental assumption that non-constant-trip-count loops will contain a guard, and this is not always true (the result being that incorrect negative counts can be generated).

  With this commit, we replace the pass with a late IR-level pass which makes use of SE to calculate the backedge-taken counts and safely generate the loop-count expressions (including any necessary max() parts). This IR-level pass inserts custom intrinsics that are lowered into the desired decrement-and-branch instructions.

  The most fragile part of this new implementation is that interfering uses of the counter register must be detected on the IR level (and, on PPC, this also includes any indirect branches in addition to function calls). Also, to make all of this work, we need a variant of the mtctr instruction that is marked as having side effects. Without this, machine-code level CSE, DCE, etc. illegally transform the resulting code. Hopefully, this can be improved in the future.

  This new pass is smaller than the original (and much smaller than the new Hexagon hardware loops pass), and can handle many additional cases correctly. In addition, the preheader-creation code has been copied from LoopSimplify, and after we decide on where it belongs, this code will be refactored so that it can be explicitly shared (making this implementation even smaller).

  The new test-case files ctrloop-{le,lt,ne}.ll have been adapted from tests for the new Hexagon pass. There are a few classes of loops that this pass does not transform (noted by FIXMEs in the files), but these deficiencies can be addressed within the SE infrastructure (thus helping many other passes as well). llvm-svn: 181927
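  A rough sketch of the ScalarEvolution step the message describes (variable names and error handling are assumptions; the custom PPC intrinsics themselves are not shown):

    // Ask SE for the backedge-taken count and expand trip count = BE + 1
    // into the preheader; the result feeds the decrement-and-branch
    // intrinsic that is later lowered to mtctr/bdnz.
    const SCEV *BECount = SE->getBackedgeTakenCount(L);
    if (isa<SCEVCouldNotCompute>(BECount))
      return false;  // this loop cannot be converted
    const SCEV *TripCount =
        SE->getAddExpr(BECount, SE->getConstant(BECount->getType(), 1));
    SCEVExpander Expander(*SE, "ctrloop");
    Value *Count = Expander.expandCodeFor(TripCount, TripCount->getType(),
                                          Preheader->getTerminator());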
* Use virtual base registers on PPC (Hal Finkel, 2013-04-09; 1 file, -15/+151)
  On PowerPC, non-vector loads and stores have r+i forms; however, in functions with large stack frames these were not being used to access slots far from the stack pointer because such slots were out of range for the signed 16-bit immediate offset field. This increases register pressure because we need a separate register for each offset (when the r+r form is used). By enabling virtual base registers, we can deal with large stack frames without unduly increasing register pressure. llvm-svn: 179105
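  The range check at the heart of this is whether an offset still fits the signed 16-bit displacement of the r+i forms; a hedged sketch (hypothetical helper; DS-form "memrix" accesses additionally require a multiple of 4):

    // A slot is directly addressable with an r+i load/store only if its
    // offset fits the signed 16-bit displacement field.
    static bool fitsRPlusImm(int64_t Offset, bool IsMemRIX) {
      return isInt<16>(Offset) && (!IsMemRIX || (Offset & 3) == 0);
    }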
* Implement PPCInstrInfo::FoldImmediate (Hal Finkel, 2013-04-06; 1 file, -0/+2)
  There are certain PPC instructions into which we can fold a zero immediate operand. We can detect such cases by looking at the register class required by the using operand (so long as it is not otherwise constrained). llvm-svn: 178961
* Use ImmToIdxMap.count in PPCRegisterInfo (Hal Finkel, 2013-04-01; 1 file, -2/+1)
  Code improvement suggested by Jakob (in review of r178450). No functionality change intended. llvm-svn: 178473
* Cleanup ImmToIdxMap and noImmForm in PPCRegisterInfo (Hal Finkel, 2013-03-31; 1 file, -18/+4)
  ImmToIdxMap should be a DenseMap (not a std::map) because there is no ordering requirement. Also, we don't need a separate list of instructions for noImmForm in eliminateFrameIndex, because this list is essentially the complement of the keys in ImmToIdxMap. No functionality change intended. llvm-svn: 178450
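  A sketch of the simplification the message implies (names taken from the message; the exact surrounding code is assumed):

    // ImmToIdxMap maps an immediate-form opcode to its indexed (r+r)
    // form, so "has no immediate form" is simply "has no map entry" and
    // the hand-maintained noImmForm instruction list can go away.
    bool noImmForm = !ImmToIdxMap.count(OpC);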
* Add the PPC lfiwax instruction (Hal Finkel, 2013-03-31; 1 file, -0/+1)
  This instruction is available on modern PPC64 CPUs, and is now used to improve the SINT_TO_FP lowering (by eliminating the need for the separate sign extension instruction and decreasing the amount of needed stack space). llvm-svn: 178446
* Cleanup PPC(64) i32 -> float/double conversion (Hal Finkel, 2013-03-31; 1 file, -2/+1)
  The existing SINT_TO_FP code for i32 -> float/double conversion was disabled because it relied on broken EXTSW_32/STD_32 instruction definitions. The original intent had been to enable these 64-bit instructions to be used on CPUs that support them even in 32-bit mode. Unfortunately, this form of lying to the infrastructure was buggy (as explained in the FIXME comment) and had therefore been disabled.

  This re-enables this functionality, using regular DAG nodes, but only when compiling in 64-bit mode. The old STD_32/EXTSW_32 definitions (which were dead) are removed. llvm-svn: 178438
* Cleanup PPC CR-spill kill flags and 32- vs. 64-bit instructions (Hal Finkel, 2013-03-28; 1 file, -2/+2)
  There were a few places where kill flags were not being set correctly, and where 32-bit instruction variants were being used with 64-bit registers. After r178180, this code was being triggered, causing llc to assert. llvm-svn: 178220
* Allocate r0 on PPC (Hal Finkel, 2013-03-27; 1 file, -2/+0)
  The R0 register can now be allocated because instructions that cannot use R0 as a GPR have been appropriately marked. llvm-svn: 178123
* Don't spill PPC VRSAVE on non-Darwin (even in SjLj) (Hal Finkel, 2013-03-27; 1 file, -0/+2)
  As Bill Schmidt pointed out to me, only on Darwin do we need to spill/restore VRSAVE in the SjLj code. For non-Darwin, don't spill/restore VRSAVE (and I've added some asserts to make sure that we're not).

  As it turns out, we're not currently handling the Darwin case correctly (I've added a FIXME in the test case). I've tried adding various implied register definitions/uses to force the spill without success, so I'll need to address this later. llvm-svn: 178096
* PPC: Use HWEncoding and TRI->getEncodingValue (Hal Finkel, 2013-03-26; 1 file, -2/+2)
  As pointed out by Jakob, we don't need to maintain a separate register-numbering table. Instead we should let TableGen generate the table for us from the information (already present) in PPCRegisterInfo.td. TRI->getEncodingValue is now used to access register-encoding values. No functionality change intended. llvm-svn: 178067
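  For illustration, a usage sketch of the accessor mentioned above (the register chosen is arbitrary):

    // The binary encoding now comes from HWEncoding in PPCRegisterInfo.td,
    // queried through TargetRegisterInfo rather than a hand-written table.
    unsigned Encoding = TRI->getEncodingValue(PPC::R5);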
* Use multiple virtual registers in PPC CR spilling (Hal Finkel, 2013-03-26; 1 file, -25/+28)
  Now that the register scavenger can support multiple spill slots, and PEI can use virtual-register-based scavenging for multiple simultaneous registers, we can use a virtual register for the transfer register in the CR spilling code.

  This should eliminate the last place (outside of the prologue/epilogue) where we depend on the unconditional availability of the r0 register. We will soon be able to allocate it (in a somewhat restricted sense) as a GPR. llvm-svn: 178060
* Update PPCRegisterInfo's use of virtual registers to be SSA (Hal Finkel, 2013-03-26; 1 file, -3/+5)
  PPC's use of PEI's virtual-register-based scavenging functionality had redefined the virtual registers (it was non-SSA). Now that PEI supports dealing with instructions with multiple virtual registers, this can be cleaned up to use multiple virtual registers and keep SSA form. No functionality change intended. llvm-svn: 178059
* Cleanup some unused reg. scavenger parameters in PPCRegisterInfo (Hal Finkel, 2013-03-23; 1 file, -23/+10)
  These spilling functions will eventually make use of the register scavenger; however, they'll do so by taking advantage of PEI's virtual-register-based delayed scavenging mechanism. As a result, these function parameters will not be used, and can be removed. No functionality change intended. llvm-svn: 177827
* Remove the G8RC_NOX0_and_GPRC_NOR0 PPC register class (Hal Finkel, 2013-03-21; 1 file, -0/+1)
  As Jakob pointed out in his review of r177423, having a shared ZERO register between the 32- and 64-bit register classes causes this odd G8RC_NOX0_and_GPRC_NOR0 class to be created. As recommended, this adds a ZERO8 register which differentiates the 32- and 64-bit zeros. No functionality change intended. llvm-svn: 177683
* Implement builtin_{setjmp/longjmp} on PPC (Hal Finkel, 2013-03-21; 1 file, -0/+12)
  This implements SJLJ lowering on PPC, making the Clang functions __builtin_{setjmp/longjmp} functional on PPC platforms. The implementation strategy is similar to that on X86, with the exception that a branch-and-link variant is used to get the right jump address. Credit goes to Bill Schmidt for suggesting the use of the unconditional bcl form (instead of the regular bl instruction) to limit return-address-cache pollution.

  Benchmarking the speed at -O3 of:

    static jmp_buf env_sigill;

    void foo() {
      __builtin_longjmp(env_sigill,1);
    }

    main() {
      ...
      for (int i = 0; i < c; ++i) {
        if (__builtin_setjmp(env_sigill)) {
          goto done;
        } else {
          foo();
        }
    done:;
      }
      ...
    }

  vs. the same code using the libc setjmp/longjmp functions on a P7 shows that this builtin implementation is ~4x faster with Altivec enabled and ~7.25x faster with Altivec disabled.

  This comparison is somewhat unfair because the libc version must also save/restore the VSX registers which we don't yet support. llvm-svn: 177666
* Add support for spilling VRSAVE on PPC (Hal Finkel, 2013-03-21; 1 file, -1/+66)
  Although there is only one Altivec VRSAVE register, it is a member of a register class, and we need the ability to spill it. Because this register is normally callee-preserved and handled by special code this has never before been necessary. However, this capability will be required by a forthcoming commit adding SjLj support. llvm-svn: 177654
* Correct PPC FRAMEADDR lowering using a pseudo-register (Hal Finkel, 2013-03-21; 1 file, -0/+5)
  The old code used to lower FRAMEADDR tried to replicate the logic in the real frame-lowering code that determines whether or not the frame pointer (r31) will be used. When it seemed as though the frame pointer would not be used, the stack pointer (r1) was used instead. Unfortunately, because the stack size is not yet known, this does not work.

  Instead, this change introduces new always-reserved pseudo-registers (FP and FP8) that are replaced during prologue insertion with the real frame-pointer register (either r1 or r31). It is important that this intrinsic always return a valid frame address because it is used by Clang to store the frame address as part of code generation for __builtin_setjmp. llvm-svn: 177653
* Prepare to make r0 an allocatable register on PPC (Hal Finkel, 2013-03-19; 1 file, -0/+12)
  Currently the PPC r0 register is unconditionally reserved. There are two reasons for this:

  1. r0 is treated specially (as the constant 0) by certain instructions, and so cannot be used with those instructions as a regular register.

  2. r0 is used as a temporary register in the CR-register spilling process (where, under some circumstances, we require two GPRs).

  This change addresses the first reason by introducing a restricted register class (without r0) for use by those instructions that treat r0 specially. These register classes have a new pseudo-register, ZERO, which represents the r0-as-0 use. This has the side benefit of making the existing target code simpler (and easier to understand), and will make it clear to the register allocator that uses of r0 as 0 don't conflict with real uses of the r0 register.

  Once the CR spilling code is improved, we'll be able to allocate r0.

  Adding these extra register classes, for some reason unclear to me, causes requests to the target to copy 32-bit registers to 64-bit registers. The resulting code seems correct (and causes no test-suite failures), and the new test case covers this new kind of asymmetric copy.

  As r0 is still reserved, no functionality change intended. llvm-svn: 177423
* Don't reserve R31 on PPC64 unless the frame pointer is needed (Hal Finkel, 2013-03-19; 1 file, -4/+3)
  llvm-svn: 177379
* Improve PPC VR (Altivec) register spilling (Hal Finkel, 2013-03-17; 1 file, -3/+24)
  This change cleans up two issues with Altivec register spilling:

  1. The spilling code was inefficient (using two instructions, an add and a load, when just one would do).

  2. The code assumed that r0 would always be available (true for now, but this will change).

  The new code handles VR spilling just like GPR spills but forced into r+r mode. As a result, when any VR spills are present, we must now always allocate the register-scavenger spill slot. llvm-svn: 177231
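  A hedged sketch of the r+r-mode store this implies (opcode choice and register names are illustrative, not the literal new code):

    // Materialize the slot offset into a scavenged GPR, then store the
    // vector register using the indexed (r+r) form.
    BuildMI(MBB, II, dl, TII.get(PPC::LI), TmpReg).addImm(Offset);
    BuildMI(MBB, II, dl, TII.get(PPC::STVX))
        .addReg(VReg).addReg(FrameReg).addReg(TmpReg);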
* Remove PPC avoidWriteAfterWrite callback (Hal Finkel, 2013-03-16; 1 file, -14/+0)
  As a follow-up to r158719, remove PPCRegisterInfo::avoidWriteAfterWrite. Jakob pointed out in response to r158719 that this callback is currently unused and so this has no effect (and the speedups that I thought I had observed as a result of implementing this function must have been noise). llvm-svn: 177228
* Use frame-index scavenging for PPC register spilling (Hal Finkel, 2013-03-14; 1 file, -24/+2)
  Make requiresFrameIndexScavenging return true, and create virtual registers in the spilling code instead of using the register scavenger directly. This makes the target-level code simpler, and importantly, delays the scavenging until after callee-saved register processing (which will be important for later changes).

  Also cleans up trackLivenessAfterRegAlloc (makes it inline in the header with the other related functions). This makes it clear that it always returns true. No functionality change intended. llvm-svn: 177107
* Don't reserve R2 on Darwin/PPC (Hal Finkel, 2013-03-12; 1 file, -16/+2)
  Now that only the register-scavenger version of the CR spilling code remains, we no longer need the Darwin R2 hack. Darwin can use R0 as a spare register in any case where the System V ABI uses it (R0 is special architecturally, and so is reserved under all common ABIs). A few test cases needed to be updated to reflect the register-allocation changes. llvm-svn: 176868
* PPC should always use the register scavenger for CR spilling (Hal Finkel, 2013-03-12; 1 file, -58/+17)
  This removes the -disable-ppc[32|64]-regscavenger options; the code that uses the register scavenger has been working well (and has been the default) for some time, and we don't need options to enable the old (broken) CR spilling code. llvm-svn: 176865
* Fix PR14364 (Bill Schmidt, 2013-02-24; 1 file, -17/+7)
  This removes a const_cast hack from PPCRegisterInfo::hasReservedSpillSlot(). The proper place to save the frame index for the CR spill slot is in the PPCFunctionInfo object, not the PPCRegisterInfo object.

  No new test cases, as this just reimplements existing functionality. Existing tests such as test/CodeGen/PowerPC/crsave.ll are sufficient. llvm-svn: 175998
* Move the eliminateCallFramePseudoInstr method from TargetRegisterInfo to TargetFrameLowering (Eli Bendersky, 2013-02-21; 1 file, -39/+0)
  This is where it belongs. Incidentally, this allows us to delete some duplicated (and slightly different!) code in TRI.

  There are potentially other layering problems that can be cleaned up as a result, or in a similar manner. The refactoring was OK'd by Anton Korobeynikov on llvmdev.

  Note: this touches the target interfaces, so out-of-tree targets may be affected. llvm-svn: 175788
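  For reference, a sketch of the relocated hook as it might appear on the PPC side (signature assumed from the generic TargetFrameLowering interface of this era; body illustrative only):

    // Now overridden in PPCFrameLowering rather than PPCRegisterInfo:
    // remove the ADJCALLSTACKDOWN/ADJCALLSTACKUP pseudos around calls.
    void PPCFrameLowering::eliminateCallFramePseudoInstr(
        MachineFunction &MF, MachineBasicBlock &MBB,
        MachineBasicBlock::iterator I) const {
      MBB.erase(I);  // minimal form; the real code may also adjust SP
    }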
* [PEI] Pass the frame index operand number to the eliminateFrameIndex function (Chad Rosier, 2013-01-31; 1 file, -13/+7)
  Each target implementation was needlessly recomputing the index. Part of rdar://13076458. llvm-svn: 174083
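  The resulting override signature on PPC looks like this (sketch based on the generic TargetRegisterInfo interface after this change):

    // FIOperandNum is now passed in by PEI instead of being recomputed
    // by scanning the instruction's operands in each target.
    void eliminateFrameIndex(MachineBasicBlock::iterator II, int SPAdj,
                             unsigned FIOperandNum,
                             RegScavenger *RS = NULL) const;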
* Move all of the header files which are involved in modelling the LLVM IR into their new header subdirectory: include/llvm/IR (Chandler Carruth, 2013-01-02; 1 file, -4/+4)
  This matches the directory structure of lib, and begins to correct a long-standing point of file layout clutter in LLVM.

  There are still more header files to move here, but I wanted to handle them in separate commits to make tracking what files make sense at each layer easier. The only really questionable files here are the target intrinsic tablegen files. But that's a battle I'd rather not fight today.

  I've updated both CMake and Makefile build systems (I think, and my tests think, but I may have missed something). I've also re-sorted the includes throughout the project. I'll be committing updates to Clang, DragonEgg, and Polly momentarily. llvm-svn: 171366
* Remove the Function::getFnAttributes method in favor of using the AttributeSet directly (Bill Wendling, 2012-12-30; 1 file, -1/+2)
  This is in preparation for removing the use of the 'Attribute' class as a collection of attributes. That will shift to the AttributeSet class instead. llvm-svn: 171253
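  A before/after sketch of the query pattern this change points toward (the exact API of this in-flux period is an assumption; the attribute is chosen arbitrarily):

    // Before (removed convenience accessor):
    //   F->getFnAttributes().hasAttribute(Attribute::NoImplicitFloat)
    // After: go through the AttributeSet directly.
    bool NoImplicitFP = F->getAttributes().hasAttribute(
        AttributeSet::FunctionIndex, Attribute::NoImplicitFloat);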
* Rename the 'Attributes' class to 'Attribute'. It's going to represent a single attribute in the future (Bill Wendling, 2012-12-19; 1 file, -1/+1)
  llvm-svn: 170502
* Use the new script to sort the includes of every file under lib (Chandler Carruth, 2012-12-03; 1 file, -13/+13)
  Sooooo many of these had incorrect or strange main module includes. I have manually inspected all of these, and fixed the main module include to be the nearest plausible thing I could find. If you own or care about any of these source files, I encourage you to take some time and check that these edits were sensible. I can't have broken anything (I strictly added headers, and reordered them, never removed), but they may not be the headers you'd really like to identify as containing the API being implemented.

  Many forward declarations and missing includes were added to header files to allow them to parse cleanly when included first. The main module rule does in fact have its merits. =] llvm-svn: 169131
* Using const cast to alleviate a warning (Joe Abbey, 2012-11-16; 1 file, -1/+2)
  A PR is being filed to address some code issues here. llvm-svn: 168185
* Revert the majority of the next patch in the address space series (Chandler Carruth, 2012-11-01; 1 file, -1/+1)
  r165941: Resubmit the changes to llvm core to update the functions to support different pointer sizes on a per address space basis.

  Despite this commit log, this change primarily changed stuff outside of VMCore, and those changes do not carry any tests for correctness (or even plausibility), and we have consistently found questionable or flat-out incorrect cases in these changes. Most of them are probably correct, but we need to devise a system that makes it more clear when we have handled the address space concerns correctly, and ideally each pass that gets updated would receive an accompanying test case that exercises that pass specifically w.r.t. alternate address spaces.

  However, from this commit, I have retained the new C API entry points. Those were an orthogonal change that probably should have been split apart, but they seem entirely good.

  In several places the changes were very obvious cleanups with no actual multiple address space code added; these I have not reverted when I spotted them. In a few other places there were merge conflicts due to a cleaner solution being implemented later, often not using address spaces at all. In those cases, I've preserved the new code which isn't address space dependent.

  This is part of my ongoing effort to clean out the partial address space code, which carries high risk and low test coverage, and is not likely to be finished before the 3.2 release looms closer. Duncan and I would both like to see the above issues addressed before we return to these changes. llvm-svn: 167222
* Resubmit the changes to llvm core to update the functions to support different pointer sizes on a per address space basis (Micah Villmow, 2012-10-15; 1 file, -1/+1)
  llvm-svn: 165941
* Revert 165732 for further review (Micah Villmow, 2012-10-11; 1 file, -1/+1)
  llvm-svn: 165747
* Add in the first iteration of support for llvm/clang/lldb to allow variable per address space pointer sizes to be optimized correctly (Micah Villmow, 2012-10-11; 1 file, -1/+1)
  llvm-svn: 165726
* Create enums for the different attributes (Bill Wendling, 2012-10-09; 1 file, -1/+1)
  We use the enums to query whether an Attributes object has that attribute. The opaque layer is responsible for knowing where that specific attribute is stored. llvm-svn: 165488