path: root/llvm/lib
Each entry below: commit message (author, date, files changed, lines -removed/+added), followed by the commit message body.
* [MSAN] Add unary FNeg visitor to the MemorySanitizer (Cameron McInally, 2019-06-05, 1 file, -0/+2)
    Differential Revision: https://reviews.llvm.org/D62909
    llvm-svn: 362664
* Allow target to handle STRICT floating-point nodes (Ulrich Weigand, 2019-06-05, 20 files, -207/+332)
    The ISD::STRICT_ nodes used to implement the constrained floating-point intrinsics are
    currently never passed to the target back-end, which makes it impossible to handle them
    correctly (e.g. mark instructions as depending on a floating-point status and control
    register, or mark instructions as possibly trapping).

    This patch allows the target to use setOperationAction to switch the action on
    ISD::STRICT_ nodes to Legal. If this is done, the SelectionDAG common code will stop
    converting the STRICT nodes to regular floating-point nodes, but instead pass the STRICT
    nodes to the target using normal SelectionDAG matching rules.

    To avoid having the back-end duplicate all the floating-point instruction patterns to
    handle both strict and non-strict variants, we make the MI codegen explicitly aware of the
    floating-point exceptions by introducing two new concepts:
    - A new MCID flag "mayRaiseFPException" that the target should set on any instruction
      that can possibly raise an FP exception according to the architecture definition.
    - A new MI flag FPExcept that CodeGen/SelectionDAG will set on any MI instruction
      resulting from expansion of any constrained FP intrinsic.
    Any MI instruction that is *both* marked as mayRaiseFPException *and* FPExcept then needs
    to be considered as raising exceptions by MI-level codegen (e.g. scheduling).

    Setting those two new flags is straightforward. The mayRaiseFPException flag is simply set
    via TableGen by marking all relevant instruction patterns in the .td files. The FPExcept
    flag is set in SDNodeFlags when creating the STRICT_ nodes in the SelectionDAG, and gets
    inherited in the MachineSDNode nodes created from it during instruction selection. The
    flag is then transferred to an MIFlag when creating the MI from the MachineSDNode. This is
    handled just like fast-math flags such as no-nans are handled today.

    This patch includes both the common code changes required to implement the new features
    and the SystemZ implementation.

    Reviewed By: andrew.w.kaylor
    Differential Revision: https://reviews.llvm.org/D55506
    llvm-svn: 362663
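    As a rough illustration of the opt-in mechanism described above, here is a minimal sketch
    of the setOperationAction calls a back-end could add; "FooTargetLowering" and the chosen
    node/type combinations are hypothetical and not taken from the patch:

        // Hypothetical target lowering constructor opting in to STRICT nodes.
        // Marking a STRICT_ node Legal tells the SelectionDAG common code to keep it
        // instead of mutating it back into the regular floating-point node.
        FooTargetLowering::FooTargetLowering(const TargetMachine &TM)
            : TargetLowering(TM) {
          setOperationAction(ISD::STRICT_FADD,  MVT::f32, Legal);
          setOperationAction(ISD::STRICT_FADD,  MVT::f64, Legal);
          setOperationAction(ISD::STRICT_FSQRT, MVT::f64, Legal);
        }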
* Revert "[AArch64][GlobalISel] Optimize G_FCMP + G_SELECT pairs when G_SELECT ↵Petr Hosek2019-06-051-96/+8
| | | | | | | | is fp" This reverts commit r362435 as this triggers ICE, see PR42129 for details. llvm-svn: 362662
* AMDGPU: Invert frame index offset interpretation (Matt Arsenault, 2019-06-05, 9 files, -209/+218)
    Since the beginning, the offset of a frame index has been consistently interpreted
    backwards: it was treated as an offset from the scratch wave offset register, as if that
    were the frame register. The correct interpretation is the offset from the SP on entry to
    the function, before the prolog. Frame index elimination should then select either SP or
    another register as an FP.

    Treat the scratch wave offset on kernel entry as the pre-incremented SP. Rely more heavily
    on the standard hasFP and frame pointer elimination logic, and clean up the private
    reservation code. This saves a copy in most callee functions.

    The kernel prolog emission code is still kind of a mess relying on checking the uses of
    physical registers, which I would prefer to eliminate.

    Currently selection directly emits MUBUF instructions, which require using a reference to
    some register. Use the register chosen for SP, and then ignore this later. This should
    probably be cleaned up to use pseudos that don't refer to any specific base register until
    frame index elimination.

    Add a workaround for shaders using large numbers of SGPRs. I'm not sure these cases were
    ever working correctly, since as far as I can tell the logic for figuring out which SGPR
    is the scratch wave offset doesn't match up with the shader input initialization in the
    shader programming guide.

    llvm-svn: 362661
* [CallSite removal] Refactoring llvm::InlineFunction APIs (Mircea Trofin, 2019-06-05, 1 file, -8/+2)
    Summary: This change only unifies the previous API pair accepting CallInst and InvokeInst,
    thus making it easier to refactor inliner pass code to CallBase. The implementation of the
    unified API still relies on the CallSite implementation.

    Reviewers: eraman, chandlerc, jdoerfert
    Reviewed By: jdoerfert
    Subscribers: jdoerfert, hiraditya, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D62283
    llvm-svn: 362656
* [InstCombine] simplify code for bitcast of insertelement; NFC (Sanjay Patel, 2019-06-05, 1 file, -5/+4)
    llvm-svn: 362655
* NewGVN: Handle addrspacecast (Matt Arsenault, 2019-06-05, 1 file, -2/+3)
    The AllConstant check needs to be moved out of the if/else if chain to avoid a test
    regression. The "there is no SimplifyZExt" comment puzzles me, since there is
    SimplifyCastInst. Additionally, the Simplify* calls seem to not see the operand as
    constant, so this needs to be tried if the simplify failed.
    llvm-svn: 362653
* [X86] Fix mistake that marked VADDSSrrb_Int/VADDSDrrb_Int/VMULSSrrb_Int/VMULSDrrb_Int as commutable (Craig Topper, 2019-06-05, 1 file, -1/+1)
    One of the sources controls the pass-through value for the upper bits of the result, so we
    can't really commute it.

    In practice this problem isn't a functional issue because we would only try to commute
    this instruction in order to fold a load. But we can't do embedded rounding and fold a
    load at the same time, so the load fold would never succeed, and I don't think we would
    ever commute, or at least we would not keep the commuted version.
    llvm-svn: 362647
* [LOOPINFO] Extend Loop object to add utilities to get the loop bounds, step, and loop induction variable (Whitney Tsang, 2019-06-05, 1 file, -0/+214)
    Summary: This PR extends the loop object with more utilities to get loop bounds, step, and
    loop induction variable. There already exist passes which try to obtain the loop induction
    variable in their own pass, e.g. loop interchange. It would be useful to have a common
    place to get this information.

    /// Example:
    /// for (int i = lb; i < ub; i += step)
    ///   <loop body>
    /// --- pseudo LLVMIR ---
    /// beforeloop:
    ///   guardcmp = (lb < ub)
    ///   if (guardcmp) goto preheader; else goto afterloop
    /// preheader:
    /// loop:
    ///   i1 = phi[{lb, preheader}, {i2, latch}]
    ///   <loop body>
    ///   i2 = i1 + step
    /// latch:
    ///   cmp = (i2 < ub)
    ///   if (cmp) goto loop
    /// exit:
    /// afterloop:
    ///
    /// getBounds
    ///   getInitialIVValue     --> lb
    ///   getStepInst           --> i2 = i1 + step
    ///   getStepValue          --> step
    ///   getFinalIVValue       --> ub
    ///   getCanonicalPredicate --> '<'
    ///   getDirection          --> Increasing
    /// getInductionVariable          --> i1
    /// getAuxiliaryInductionVariable --> {i1}
    /// isCanonical                   --> false

    Reviewers: kbarton, hfinkel, dmgreen, Meinersbur, jdoerfert, syzaara, fhahn
    Reviewed By: kbarton
    Subscribers: tvvikram, bmahjour, etiotto, fhahn, jsji, hiraditya, llvm-commits
    Tag: LLVM
    Differential Revision: https://reviews.llvm.org/D60565
    llvm-svn: 362644
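    A brief usage sketch of the new utilities; the member names follow the commit description
    above, while the surrounding helper function and exact return types are assumptions made
    for illustration:

        #include "llvm/Analysis/LoopInfo.h"
        #include "llvm/Analysis/ScalarEvolution.h"
        #include "llvm/Support/raw_ostream.h"

        using namespace llvm;

        // Assumed helper: query the loop bounds and induction variable added in D60565.
        static void inspectLoop(Loop &L, ScalarEvolution &SE) {
          if (Optional<Loop::LoopBounds> Bounds = L.getBounds(SE)) {
            Value &Init = Bounds->getInitialIVValue();  // "lb" in the example above
            Value &Final = Bounds->getFinalIVValue();   // "ub"
            Value *Step = Bounds->getStepValue();       // "step"
            (void)Init; (void)Final; (void)Step;
          }
          if (PHINode *IV = L.getInductionVariable(SE))
            IV->print(errs());                          // "i1" in the example above
        }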
* InstCombine: correctly change byval type attribute alongside call args (Tim Northover, 2019-06-05, 1 file, -4/+20)
    When the byval attribute has a type, it must match the pointee type of any parameter; but
    InstCombine was not updating the attribute when folding casts of various kinds away.
    llvm-svn: 362643
* IR: make getParamByValType Just Work. NFC (Tim Northover, 2019-06-05, 5 files, -5/+11)
    Most parts of LLVM don't care whether the byval type is derived from an explicit Attribute
    or from the parameter's pointee type, so it makes sense for the main access function to
    just return the right value. The very few users who do care (only BitcodeReader so far)
    can find out how it's specified by accessing the Attribute directly.
    llvm-svn: 362642
* AMDGPU: Remove amdgpu-max-work-group-size attribute (Matt Arsenault, 2019-06-05, 1 file, -10/+1)
    This has been deprecated for a long time, and mesa recently switched to
    amdgpu-flat-work-group-size.
    llvm-svn: 362641
* AMDGPU: Fix using 2 different enums for same operand flags (Matt Arsenault, 2019-06-05, 3 files, -11/+8)
    These enums are really for the same namespace of flags set on arbitrary MachineOperands,
    so merge them to avoid value collisions.
    llvm-svn: 362640
* [WebAssembly] Limit PIC support to the Emscripten target (Dan Gohman, 2019-06-05, 1 file, -2/+11)
    PIC support currently only works with Emscripten, so disable it for other targets.

    This is the PIC portion of https://reviews.llvm.org/D62542.

    Reviewed By: dschuff, sbc100
    llvm-svn: 362638
* [X86] Add the vector integer min/max instructions to isAssociativeAndCommutative (Craig Topper, 2019-06-05, 1 file, -0/+84)
    As far as I know these should be freely reassociatable just like the floating point
    MAXC/MINC instructions.

    The *reduce* test changes are largely regressions and caused by the "generic" CPU we
    default to not having a scheduler model.

    The machine-combiner-int-vec.ll test shows the positive benefits of this change.

    Differential Revision: https://reviews.llvm.org/D62787
    llvm-svn: 362629
* Fix shadow local variable warning. NFCI (Simon Pilgrim, 2019-06-05, 1 file, -6/+6)
    llvm-svn: 362622
* [x86] split more 256-bit stores of concatenated vectors (Sanjay Patel, 2019-06-05, 1 file, -3/+4)
    As suggested in D62498 - collectConcatOps() matches both concat_vectors and
    insert_subvector patterns, and we see more test improvements by using the more general
    match.
    llvm-svn: 362620
* [X86][AVX] Generalize split256BitStore to splitVectorStore. NFCI (Simon Pilgrim, 2019-06-05, 1 file, -12/+17)
    Enables us to use this to split 512-bit vectors in future patches.
    llvm-svn: 362617
* Revert "Title: [LOOPINFO] Extend Loop object to add utilities to get the loop"Whitney Tsang2019-06-051-215/+0
| | | | | | This reverts commit d34797dfc26c61cea19f45669a13ea572172ba34. llvm-svn: 362615
* [SLP] Fix regression in broadcasts caused by operand reordering patch D59973 (Dinar Temirbulatov, 2019-06-05, 1 file, -5/+35)
    This patch fixes a regression caused by the operand reordering refactoring patch
    https://reviews.llvm.org/D59973. The fix changes the strategy to Splat instead of Opcode,
    if broadcast opportunities are found. Please see the lit test for some examples.

    Committed on behalf of @vporpo (Vasileios Porpodas)

    Differential Revision: https://reviews.llvm.org/D62427
    llvm-svn: 362613
* [LoopUtils][SLPVectorizer] clean up management of fast-math-flags (Sanjay Patel, 2019-06-05, 3 files, -35/+34)
    Instead of passing around fast-math-flags as a parameter, we can set those using an
    IRBuilder guard object. This is no-functional-change-intended.

    The motivation is to eventually fix the vectorizers to use and set the correct
    fast-math-flags for reductions. Examples of that not behaving as expected are:
    https://bugs.llvm.org/show_bug.cgi?id=23116 (should be able to reduce with less than 'fast')
    https://bugs.llvm.org/show_bug.cgi?id=35538 (possible miscompile for -0.0)
    D61802 (should be able to reduce with IR-level FMF)

    Differential Revision: https://reviews.llvm.org/D62272
    llvm-svn: 362612
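    For reference, a minimal sketch of the IRBuilder guard idiom mentioned above; the helper
    function and the FAdd it creates are placeholders, and only the guard usage reflects the
    commit:

        #include "llvm/IR/IRBuilder.h"

        using namespace llvm;

        // Assumed context: Builder already has an insertion point, and FMF holds the
        // flags we want applied to the instructions created in this scope.
        static Value *createFAddWithFlags(IRBuilder<> &Builder, FastMathFlags FMF,
                                          Value *LHS, Value *RHS) {
          // The guard saves the builder's current fast-math flags and restores them
          // when it goes out of scope, keeping the override local to this function.
          IRBuilder<>::FastMathFlagGuard Guard(Builder);
          Builder.setFastMathFlags(FMF);
          return Builder.CreateFAdd(LHS, RHS);
        }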
* [LoopInfo] Fix unused variable warning. NFC (Benjamin Kramer, 2019-06-05, 1 file, -2/+1)
    llvm-svn: 362610
* Title: [LOOPINFO] Extend Loop object to add utilities to get the loop bounds, step, and loop induction variable (Whitney Tsang, 2019-06-05, 1 file, -0/+216)
    Summary: This PR extends the loop object with more utilities to get loop bounds, step, and
    loop induction variable. There already exist passes which try to obtain the loop induction
    variable in their own pass, e.g. loop interchange. It would be useful to have a common
    place to get this information.

    (This is the original commit that was reverted in r362615 and resubmitted as r362644; its
    message is identical to that of r362644 above, including the pseudo-IR example and the
    list of new accessors.)

    Reviewers: kbarton, hfinkel, dmgreen, Meinersbur, jdoerfert, syzaara, fhahn
    Reviewed By: kbarton
    Subscribers: tvvikram, bmahjour, etiotto, fhahn, jsji, hiraditya, llvm-commits
    Tag: LLVM
    Differential Revision: https://reviews.llvm.org/D60565
    llvm-svn: 362609
* [MIPS GlobalISel] Select fcmp (Petar Avramovic, 2019-06-05, 3 files, -0/+94)
    Select floating point compare for MIPS32.
    Differential Revision: https://reviews.llvm.org/D62721
    llvm-svn: 362603
* [ARM] Allow "-march=foo+fp" to vary with foo (Sjoerd Meijer, 2019-06-05, 1 file, -8/+71)
    This is the LLVM part of this change; the Clang part contains the full description in its
    commit message.
    Differential Revision: https://reviews.llvm.org/D60697
    llvm-svn: 362600
* [X86][AVX] combineX86ShuffleChain - combine shuffle(extractsubvector(x),extractsubvector(y)) (Simon Pilgrim, 2019-06-05, 1 file, -3/+10)
    We already handle the case where we combine shuffle(extractsubvector(x),extractsubvector(x));
    this relaxes the requirement to permit different sources as long as they have the same
    value type.

    This causes a couple of cases where the VPERMV3 binary shuffles occur at a wider width
    than before, which I intend to improve in future commits - but as only the subvector's
    mask indices are defined, these will broadcast so we don't see any increase in constant
    size.
    llvm-svn: 362599
* [TargetLowering] SimplifyDemandedBits - pull out shift value type. NFCI (Simon Pilgrim, 2019-06-05, 1 file, -1/+2)
    Will be used more in an upcoming patch.
    llvm-svn: 362595
* [IPO] Disabled 'default only' switch statements to fix MSVC warnings (Simon Pilgrim, 2019-06-05, 1 file, -8/+8)
    @jdoerfert Looks like these are placeholders for incoming abstract attributes patches so
    I've just commented the code out, even though this is usually frowned upon.
    llvm-svn: 362592
* Include what you use in PPCFrameLowering.h (Dmitri Gribenko, 2019-06-05, 1 file, -1/+0)
    llvm-svn: 362590
* Resubmit "[CorrelatedValuePropagation] Fix prof branch_weights metadata ↵Yevgeny Rouban2019-06-051-56/+61
| | | | | | | | | | | | | | | | | handling for SwitchInst" This reverts commit 5b32f60ec31ce136edac6f693538aeb6039f4ad0. The fix is in commit 4f9e68148bd0dada2d6997625432385918ac2e2c. This patch fixes the CorrelatedValuePropagation pass to keep prof branch_weights metadata of SwitchInst consistent. It makes use of SwitchInstProfUpdateWrapper. New tests are added. Reviewed By: nikic Differential Revision: https://reviews.llvm.org/D62126 llvm-svn: 362583
* Suppress false-positive GCC -Wreturn-type warning (Michael Liao, 2019-06-05, 1 file, -0/+1)
    llvm-svn: 362582
* [Attributor] Pass infrastructure and fixpoint framework (Johannes Doerfert, 2019-06-05, 7 files, -1/+544)
    NOTE: No attributes are derived yet. This patch will not go in alone but only with others
    that derive attributes. The framework is split out for review purposes.

    This commit introduces the Attributor pass infrastructure and fixpoint iteration
    framework. Further patches will introduce abstract attributes into this framework.

    In a nutshell, the Attributor will update instances of abstract attributes until a
    fixpoint, or a "timeout", is reached. Communication between the Attributor and the
    abstract attributes that are derived is restricted to the AbstractState and
    AbstractAttribute interfaces.

    Please see the file comment in Attributor.h for detailed information, including design
    decisions and the typical use case. Also consider the class documentation for Attributor,
    AbstractState, and AbstractAttribute.

    Reviewers: chandlerc, homerdin, hfinkel, fedor.sergeev, sanjoy, spatel, nlopes, nicholas, reames
    Subscribers: mehdi_amini, mgorny, hiraditya, bollu, steven_wu, dexonsmith, dang, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D59918
    llvm-svn: 362578
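    To make the fixpoint-iteration idea concrete, here is a deliberately simplified sketch;
    the class and function names are hypothetical stand-ins and do not reflect the actual
    Attributor interface:

        #include <vector>

        // Hypothetical abstract attribute: each instance can refine its own state.
        struct AbstractAttributeSketch {
          // Returns true if the internal state changed in this round.
          virtual bool update() = 0;
          virtual ~AbstractAttributeSketch() = default;
        };

        // Re-run all attributes until nothing changes or an iteration cap is hit
        // (the "timeout" mentioned in the commit message).
        inline void runToFixpoint(std::vector<AbstractAttributeSketch *> &AAs,
                                  unsigned MaxIterations = 32) {
          for (unsigned It = 0; It < MaxIterations; ++It) {
            bool AnyChange = false;
            for (AbstractAttributeSketch *AA : AAs)
              AnyChange |= AA->update();
            if (!AnyChange)
              break; // fixpoint reached
          }
        }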
* [PowerPC] Collapse RLDICL/RLDICR into RLDIC when possible (Nemanja Ivanovic, 2019-06-05, 1 file, -0/+52)
    Generally speaking, we lower to an optimal rotate sequence for nodes visible in the SDAG.
    However, there are instances where the two rotates are not visible at ISEL time - most
    notably those in a very common sequence when lowering switch statements to jump tables.

    A common situation is a switch on a 32-bit integer. This value has to have the upper 32
    bits cleared and because jump table offsets are word offsets, the value needs to be
    shifted left by 2 bits. We currently emit the clear and the left shift as two separate
    instructions, but this is not needed as we can lower it to a single RLDIC.

    This patch just cleans that up.

    Differential revision: https://reviews.llvm.org/D60402
    llvm-svn: 362576
* [llvm-objdump/llvm-readobj/obj2yaml/yaml2obj] Support DT_PPC_GOT and DT_PPC_OPT (Fangrui Song, 2019-06-05, 1 file, -0/+9)
    In glibc, DT_PPC_GOT indicates that PowerPC32 Secure PLT ABI is used. I plan to use it in
    D62464. DT_PPC_OPT currently indicates if a TLSDESC inspired TLS optimization is enabled.

    Reviewed By: grimar, jhenderson, rupprecht
    Differential Revision: https://reviews.llvm.org/D62851
    llvm-svn: 362569
* Initial support for IBM MASS vector library (Nemanja Ivanovic, 2019-06-05, 1 file, -0/+10)
    This is the LLVM portion of patch https://reviews.llvm.org/D59881. The clang portion is to
    follow.
    llvm-svn: 362568
* [X86] Cleanup convertIntLogicToFPLogic a little. NFCI (Craig Topper, 2019-06-05, 1 file, -23/+24)
    - Use early returns to reduce indentation.
    - Replace multiple ifs with a switch.
    - Replace an assert with an llvm_unreachable default in the switch.
    - Check that the FP type we're going to use for the X86ISD::FAND/FOR/FXOR is legal rather
      than checking that the integer type matches the width of a legal scalar fp type. This
      all runs after legalization so it shouldn't really matter, but making sure we're using a
      valid type in the X86ISD node is really what's important.
    llvm-svn: 362565
* [Scalarizer] Add UnaryOperator visitor to scalarization pass (Cameron McInally, 2019-06-04, 1 file, -0/+38)
    Differential Revision: https://reviews.llvm.org/D62858
    llvm-svn: 362558
* [AArch64][GlobalISel] Make extloads to i64 legal (Amara Emerson, 2019-06-04, 1 file, -0/+3)
    Although we had the support in the prelegalizer combiner to generate the G_SEXTLOAD or
    G_ZEXTLOAD ops, the legalizer definitions for arm64 had them as lowering back to separate
    ops.
    llvm-svn: 362553
* [WebAssembly] Fix ISel crash on sext_inreg/extract type mismatch (Thomas Lively, 2019-06-04, 1 file, -2/+26)
    Summary: Adjusts the index and adds a bitcast around the vector operand of
    EXTRACT_VECTOR_ELT so that its lane type matches the source type of its parent sext_inreg.
    Without this bitcast the ISel patterns do not match and ISel fails.

    Reviewers: aheejin
    Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D62646
    llvm-svn: 362547
* [SelectionDAG][FIX] Allow "returned" arguments to be bit-casted (Johannes Doerfert, 2019-06-04, 1 file, -2/+5)
    Summary: An argument that is returned by a function but bit-casted before can still be
    annotated as "returned". Make sure we do not crash for this case.

    Reviewers: sunfish, stephenwlin, niravd, arsenm
    Subscribers: wdng, hiraditya, bollu, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D59917
    llvm-svn: 362546
* Introduce Value::stripPointerCastsSameRepresentation (Johannes Doerfert, 2019-06-04, 2 files, -2/+13)
    This patch allows current users of Value::stripPointerCasts() to force the result of the
    function to have the same representation as the value it was called on. This is useful in
    various cases, e.g., (non-)null checks.

    In this patch only a single call site was adjusted, to fix an existing misuse that could
    cause nonnull to be attached where it may be wrong. Uses in attribute deduction and other
    areas, e.g., D60047, are to be expected.

    For a discussion on this topic, please see [0].

    [0] http://lists.llvm.org/pipermail/llvm-dev/2018-December/128423.html

    Reviewers: hfinkel, arsenm, reames
    Subscribers: wdng, hiraditya, bollu, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D61607
    llvm-svn: 362545
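    A small sketch of the intended difference; the behavior noted in the comments is inferred
    from the description above (in particular, that casts such as addrspacecast may change a
    pointer's representation) and is an assumption, not a statement of the exact API contract:

        #include "llvm/IR/Value.h"

        using namespace llvm;

        // Assumed example: P was produced by an address-space cast, e.g.
        //   %p = addrspacecast i8* %q to i8 addrspace(1)*
        // stripPointerCasts() may look through that cast, while the new
        // stripPointerCastsSameRepresentation() is expected to stop at casts
        // that can change the pointer's bit representation.
        static void compareStripping(const Value *P) {
          const Value *Stripped = P->stripPointerCasts();
          const Value *SameRep = P->stripPointerCastsSameRepresentation();
          (void)Stripped;
          (void)SameRep;
        }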
* llvm-undname: Correctly demangle vararg parameters (Nico Weber, 2019-06-04, 2 files, -5/+10)
    FunctionSignatureNode already had an IsVariadic field, but it wasn't used anywhere yet.
    Set it and use it.
    llvm-svn: 362541
* llvm-undname: More coverage-related cleanups (Nico Weber, 2019-06-04, 1 file, -11/+9)
    - The loop in demangleFunctionParameterList() only exits on Error, @, and Z. All 3 cases
      were handled, so the rest of the function is DEMANGLE_UNREACHABLE.
    - The loop in demangleTemplateParameterList() always returns on Error, so there's no need
      to check for that in the loop header and after the loop.
    - Add test cases for invalid function parameter manglings.
    - Add a (redundant) test case for a simple template parameter list mangling.
    - Add a test case pointing out that varargs functions aren't demangled correctly.
    llvm-svn: 362540
* Revert r362472 as it is breaking PPC build bots (Nemanja Ivanovic, 2019-06-04, 1 file, -179/+0)
    The patch https://reviews.llvm.org/rL362472 broke PPC LNT buildbots. Reverting it to bring
    the bots back to green.
    llvm-svn: 362539
* [Utils] Clean another duplicated util method (Alina Sbirlea, 2019-06-04, 3 files, -62/+13)
    Summary: Following the cleanup in D48202, method foldBlockIntoPredecessor has the same
    behavior as MergeBlockIntoPredecessor. Replace its uses with MergeBlockIntoPredecessor.
    Remove foldBlockIntoPredecessor.

    Reviewers: chandlerc, dmgreen
    Subscribers: jlebar, javed.absar, zzheng, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D62751
    llvm-svn: 362538
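    A minimal usage sketch for the surviving helper; the wrapper function is invented for
    illustration, and the optional updater arguments (dominator tree, loop info, etc.) are
    left at their defaults:

        #include "llvm/Transforms/Utils/BasicBlockUtils.h"

        using namespace llvm;

        // Fold BB into its single predecessor if the CFG allows it; returns true on success.
        static bool tryMergeIntoPredecessor(BasicBlock *BB) {
          return MergeBlockIntoPredecessor(BB);
        }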
* llvm-undname: Add test coverage for demangleInitFiniStub() (Nico Weber, 2019-06-04, 1 file, -2/+2)
    llvm-svn: 362536
* [X86] Mutate fceil/ffloor/ftrunc/fnearbyint/frint into X86ISD::RNDSCALE during PreProcessIselDAG to cut down on pattern permutations (Craig Topper, 2019-06-04, 3 files, -357/+82)
    We already need to have patterns for X86ISD::RNDSCALE to support software intrinsics, but
    we currently have 5 sets of patterns for the 5 rounding operations. For each of these
    patterns we have to support 3 vector widths, 2 element sizes, sse/vex/evex encodings, load
    folding, and broadcast load folding. This results in a fair amount of bytes in the isel
    table.

    This patch adds code to PreProcessIselDAG to morph the fceil/ffloor/ftrunc/fnearbyint/frint
    to X86ISD::RNDSCALE. This way we can remove everything but the intrinsic pattern while
    still allowing the operations to be considered Legal for DAGCombine and Legalization. This
    shrinks the DAGISel by somewhere between 9K and 10K.

    There is one complication to this: the STRICT versions of these nodes are currently
    mutated to their non-strict equivalents at isel time when the node is visited. This won't
    be true in the future since that loses the chain ordering information. For now I've also
    added support for the non-STRICT nodes to Select so we can change the STRICT versions
    there after they've been mutated to their non-STRICT versions. We'll probably need a
    STRICT version of RNDSCALE or something to handle this in the future, which will take us
    back to needing 2 sets of patterns for strict and non-strict, but that's still better than
    the 11 or 12 sets of patterns we'd need otherwise.

    We can probably do something similar for scalar, but I haven't looked at it yet.

    Differential Revision: https://reviews.llvm.org/D62757
    llvm-svn: 362535
* [X86] Fold single-use variable into assert. NFC (Benjamin Kramer, 2019-06-04, 1 file, -2/+2)
    Avoids an unused variable warning in Release builds.
    llvm-svn: 362534
* [DAGCombiner][X86] Fold (not (neg X)) -> (add X, -1) (Craig Topper, 2019-06-04, 1 file, -0/+10)
    This is a special case of a more general transform, (not (sub Y, X)) -> (add X, ~Y).
    InstCombine knows the general form. I've restricted this to the special case to fix the
    motivating case PR42118.

    I tried handling any case where Y was constant, but got some changes on some Mips tests
    that I couldn't quickly prove were beneficial.

    Fixes PR42118

    Differential Revision: https://reviews.llvm.org/D62828
    llvm-svn: 362533
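    For intuition, a tiny self-contained check of the two's-complement identity behind the
    special case, ~(-x) == x - 1; this is a sanity check added for illustration, not code from
    the patch:

        #include <cassert>
        #include <cstdint>

        int main() {
          for (uint32_t X : {0u, 1u, 7u, 0x80000000u, 0xFFFFFFFFu}) {
            // not(neg X) == add X, -1 in two's complement:
            // -X == ~X + 1, so ~(-X) == ~(~X + 1) == X - 1.
            assert(~(0u - X) == X - 1u);
          }
          return 0;
        }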
* [MACHO] Replaced calls to getStruct with getStructOrErr in functions returning Error or Expected or similar (Alex Brachet, 2019-06-04, 1 file, -33/+88)
    llvm-svn: 362526