path: root/llvm/include
Commit message (Author, Date, Files changed, Lines -/+)
...
* [DebugInfo][Support] Replace DWARFDataExtractor size function (James Henderson, 2020-01-13, 2 files, -4/+5)
    This patch adds a new size function to the base DataExtractor class, which removes the need for the DWARFDataExtractor size function. It is unclear why DWARFDataExtractor's size function returned zero in some circumstances (i.e. when it is constructed without a section, and with a different data source instead), so that behaviour has changed. The old behaviour could cause an assertion in the debug line parser, as the size did not reflect the actual data available, and could be lower than the current offset being parsed.
    Reviewed by: dblaikie
    Differential Revision: https://reviews.llvm.org/D72337
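    A minimal sketch (usage assumed, not taken from the patch) of how a parser can use the new base-class size() query to guard against reading past the available data:

        #include "llvm/Support/DataExtractor.h"
        using namespace llvm;

        // size() now reports the full length of the underlying data, regardless
        // of how the extractor was constructed.
        bool offsetInBounds(const DataExtractor &DE, uint64_t Offset) {
          return Offset < DE.size();
        }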
* This option allows selecting the TLS size in the local exec TLS model, (KAWASHIMA Takahiro, 2020-01-13, 2 files, -0/+8)
    which is the default TLS model for non-PIC objects. This allows large/many thread-local variables, or compact/fast code, in an executable. The specification is the same as that of GCC; for example, the code model option precedes the TLS size option. TLS access models other than local-exec are not changed, which means support for the large code model exists only in the local-exec TLS model.
    Patch By KAWASHIMA Takahiro (kawashima-fj <t-kawashima@fujitsu.com>)
    Reviewers: dmgreen, mstorsjo, t.p.northover, peter.smith, ostannard
    Reviewed By: peter.smith
    Committed by: peter.smith
    Differential Revision: https://reviews.llvm.org/D71688
* [NFC] Update loop.decrement.reg intrinsic comment (Sam Parker, 2020-01-13, 1 file, -1/+3)
    Note that the intrinsic is now understood by SCEV and that other optimisations can treat it as a sub.
* [Disassembler] Delete the VStream parameter of MCDisassembler::getInstruction() (Fangrui Song, 2020-01-11, 1 file, -4/+0)
    The argument is llvm::nulls() everywhere except llvm::errs() in llvm-objdump in -DLLVM_ENABLE_ASSERTIONS=On builds. It is used by no target but X86 in -DLLVM_ENABLE_ASSERTIONS=On builds. If we ever need to add verbose logging to disassemblers, we can record the log with a member function instead of passing it around as an argument.
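    A hedged sketch (the call-site shape is assumed from the description, not taken from the patch) of decoding one instruction after the parameter removal, with only the comment stream remaining:

        #include "llvm/ADT/ArrayRef.h"
        #include "llvm/MC/MCDisassembler/MCDisassembler.h"
        #include "llvm/MC/MCInst.h"
        #include "llvm/Support/raw_ostream.h"
        using namespace llvm;

        MCDisassembler::DecodeStatus decodeOne(const MCDisassembler &Dis,
                                               ArrayRef<uint8_t> Bytes,
                                               uint64_t Address, MCInst &Inst,
                                               uint64_t &Size) {
          // The VStream argument is gone; only the comment stream is passed.
          return Dis.getInstruction(Inst, Size, Bytes, Address, nulls());
        }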
* [Support] Optionally call signal handlers when a function wrapped by the CrashRecoveryContext fails (Alexandre Ganea, 2020-01-11, 2 files, -0/+17)
    This patch allows for handling a failure inside a CrashRecoveryContext in the same way as the global exception/signal handler. A failure will have the same side effects, such as cleanup of temporary files, printing the callstack, calling relevant signal handlers, and finally returning an exception code. This is an optional feature, disabled by default.
    This is a support patch for D69825.
    Differential Revision: https://reviews.llvm.org/D70568
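    A minimal sketch of the CrashRecoveryContext pattern this change extends; the wrapper function is illustrative, and the name of the opt-in switch for the new signal-handler behaviour is deliberately not shown (see the review for it):

        #include "llvm/Support/CrashRecoveryContext.h"
        using namespace llvm;

        bool runGuarded() {
          CrashRecoveryContext::Enable();   // install the recovery machinery once
          CrashRecoveryContext CRC;
          // On a crash inside the lambda, RunSafely returns false instead of
          // terminating the whole process.
          return CRC.RunSafely([] {
            // potentially crashing work goes here
          });
        }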
* [TargetLowering][ARM][Mips][WebAssembly] Remove the ordered FP compare from RuntimeLibcalls.def and all associated usages (Craig Topper, 2020-01-10, 1 file, -4/+0)
    Summary: This always just used the same libcall as unordered, but the comparison predicate was different. This change appears to have been made when targets were given the ability to override the predicates. Before that they were hardcoded into the type legalizer. At that time we never inverted predicates, and we handled ugt/ult/uge/ule compares by emitting an unordered check ORed with an ogt/olt/oge/ole check. So only ordered needed an inverted predicate. Later ugt/ult/uge/ule were optimized to only call a single libcall and invert the compare. This patch removes the ordered entries and just uses the inverting logic that is now present. This removes some odd things in both the Mips and WebAssembly code.
    Reviewers: efriedma, ABataev, uweigand, cameron.mcinally, kpn
    Reviewed By: efriedma
    Subscribers: dschuff, sdardis, sbc100, arichardson, jgravelle-google, kristof.beyls, hiraditya, aheejin, sunfish, atanasyan, Petar.Avramovic, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D72536
* [LockFileManager] Make default waitForUnlock timeout a parameter, NFC (Vedant Kumar, 2020-01-10, 1 file, -1/+3)
    Patch by Xi Ge!
* [AArch64] Add isAuthenticated predicate to MCInstrDesc (Vedant Kumar, 2020-01-10, 2 files, -0/+11)
    Add a predicate to MCInstrDesc that allows tools to determine whether an instruction authenticates a pointer. This can be used by diagnostic tools to hint at pointer authentication failures.
    Differential Revision: https://reviews.llvm.org/D70329
    rdar://55089604
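    A hedged sketch (usage assumed) of how a diagnostic tool could consume the new predicate when walking decoded instructions:

        #include "llvm/MC/MCInst.h"
        #include "llvm/MC/MCInstrInfo.h"
        using namespace llvm;

        // Returns true if the decoded instruction authenticates a pointer, so a
        // fault here can be flagged as a likely pointer-authentication failure.
        bool mayAuthenticatePointer(const MCInstrInfo &MCII, const MCInst &Inst) {
          return MCII.get(Inst.getOpcode()).isAuthenticated();
        }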
* [CMake] Fix modules build after DWARFLinker reorganization (Jonas Devlieghere, 2020-01-10, 1 file, -0/+7)
    Create a dedicated module for the DWARFLinker and make it depend on intrinsics gen.
* [AArch64] Add function attribute "patchable-function-entry" to add NOPs at function entry (Fangrui Song, 2020-01-10, 1 file, -0/+3)
    The Linux kernel uses -fpatchable-function-entry to implement DYNAMIC_FTRACE_WITH_REGS for arm64 and parisc. GCC 8 implemented -fpatchable-function-entry, which can be seen as a generalized form of -mnop-mcount. The N,M form (function entry points before the Mth NOP) is currently only used by parisc.
    This patch adds N,0 support to AArch64 codegen. N is represented as the function attribute "patchable-function-entry". We will use a different function attribute for M, if we decide to implement it. The patch reuses the existing patchable-function pass, and TargetOpcode::PATCHABLE_FUNCTION_ENTER which is currently used by XRay.
    When the integrated assembler is used, __patchable_function_entries will be created for each text section with the SHF_LINK_ORDER flag to prevent --gc-sections (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93197) and COMDAT (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93195) issues. Retrospectively, __patchable_function_entries should use a PC-relative relocation type to avoid the SHF_WRITE flag and dynamic relocations.
    "patchable-function-entry"'s interaction with Branch Target Identification is still unclear (see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92424 for GCC discussions).
    Reviewed By: peter.smith
    Differential Revision: https://reviews.llvm.org/D72215
* [FPEnv] Invert sense of MIFlag::FPExcept flag (Ulrich Weigand, 2020-01-10, 1 file, -4/+4)
    In D71841 we inverted the sense of the SDNode-level flag to ensure all nodes default to potentially raising FP exceptions unless otherwise specified -- i.e. if we forget to propagate the flag somewhere, the effect is now only lost performance, not incorrect code.
    However, the related flag at the MI level still defaults to nodes not raising FP exceptions unless otherwise specified. To be fully on the (conservatively) safe side, we should invert that flag as well.
    This patch does so by replacing MIFlag::FPExcept with MIFlag::NoFPExcept. (Note that this does also introduce an incompatible change in the MIR format.)
    Reviewed By: craig.topper
    Differential Revision: https://reviews.llvm.org/D72466
* [FPEnv] Generate constrained FP comparisons from clang (Ulrich Weigand, 2020-01-10, 1 file, -0/+41)
    Update the IRBuilder to generate constrained FP comparisons in CreateFCmp when IsFPConstrained is true, similar to the other places in the IRBuilder. Also, add a new CreateFCmpS to emit signaling FP comparisons, and use it in clang where comparisons are supposed to be signaling (currently, only when emitting code for the <, <=, >, >= operators).
    Note that there is currently no way to add fast-math flags to a constrained FP comparison, since this is implemented as an intrinsic call that returns a boolean type, and FMF are only allowed for calls returning a floating-point type. However, given the discussion around https://bugs.llvm.org/show_bug.cgi?id=42179, it seems that FCmp itself really shouldn't have any FMF either, so this is probably OK.
    Reviewed by: craig.topper
    Differential Revision: https://reviews.llvm.org/D71467
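    A hedged sketch of the IRBuilder behaviour described above; the wrapper function and value names are illustrative, only setIsFPConstrained and CreateFCmpS come from the API:

        #include "llvm/IR/IRBuilder.h"
        using namespace llvm;

        Value *emitSignalingLessThan(IRBuilder<> &Builder, Value *L, Value *R) {
          // With constrained FP enabled, the builder emits the constrained
          // comparison intrinsic instead of a plain fcmp instruction.
          Builder.setIsFPConstrained(true);
          // Signaling compare, as clang now emits for <, <=, >, >=.
          return Builder.CreateFCmpS(CmpInst::FCMP_OLT, L, R);
        }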
* [MIR] Fix cyclic dependency of MIR formatter (Peng Guo, 2020-01-10, 3 files, -9/+9)
    Summary: Move MIR formatter pointer from TargetMachine to TargetInstrInfo to avoid cyclic dependency between target & codegen.
    Reviewers: dsanders, bkramer, arsenm
    Subscribers: wdng, hiraditya, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D72485
* [ThinLTO] Pass CodeGenOpts like UnrollLoops/VectorizeLoop/VectorizeSLP down to the pass builder in ltobackend (Wei Mi, 2020-01-09, 1 file, -0/+4)
    Currently CodeGenOpts like UnrollLoops/VectorizeLoop/VectorizeSLP in clang are not passed down to the pass builder in ltobackend when the new pass manager is used. This is inconsistent with the behavior when the new pass manager is used and ThinLTO is not used. Such inconsistency causes the SLP vectorization pass not to be enabled in ltobackend for O3 + ThinLTO right now. This patch fixes that.
    Differential Revision: https://reviews.llvm.org/D72386
* TableGen/GlobalISel: Fix pattern matching of immarg literals (Matt Arsenault, 2020-01-09, 1 file, -4/+9)
    For arguments that are not expected to be materialized with G_CONSTANT, this was emitting predicates which could never match. It was first adding a meaningless LLT check, which would always fail due to the operand not being a register. Infer the cases where a literal should check for an immediate operand instead of a register. This avoids needing to invent a special way of representing timm literal values.
    Also handle immediate arguments in GIM_CheckLiteralInt. The comments stated it handled isImm() and isCImm(), but that wasn't really true.
    This unblocks work on the selection of all of the complicated AMDGPU intrinsics in future commits.
* TableGen/GlobalISel: Add way for SDNodeXForm to work on timm (Matt Arsenault, 2020-01-09, 3 files, -2/+35)
    The current implementation assumes there is an instruction associated with the transform, but this is not the case for timm/TargetConstant/immarg values. These transforms should directly operate on a specific MachineOperand in the source instruction. TableGen would assert if you attempted to define an equivalent GISDNodeXFormEquiv using timm when it failed to find the instruction matcher.
    Specially recognize SDNodeXForms on timm, and pass the operand index to the render function. Ideally this would be a separate render function type that looks like void renderFoo(MachineInstrBuilder, const MachineOperand&), but this proved to be somewhat mechanically painful. Add an optional operand index which will only be passed if the transform should only look at the one source operand.
    Theoretically it would also be possible to only ever pass the MachineOperand, and the existing renderers would check the parent. I think that would be somewhat ugly for the standard usage which may want to inspect other operands, and I also think MachineOperand should eventually not carry a pointer to the parent instruction.
    Use it in one sample pattern. This isn't a great example, since the transform exists to satisfy DAG type constraints. This could also be avoided by just changing the MachineInstr's arbitrary choice of operand type from i16 to i32. Other patterns have nontrivial uses, but this serves as the simplest example.
    One flaw this still has is if you try to use an SDNodeXForm defined for imm, but the source pattern uses timm, you still see the "Failed to lookup instruction" assert. However, there is now a way to avoid it.
* GlobalISel: Handle llvm.read_register (Matt Arsenault, 2020-01-09, 3 files, -2/+27)
    Compared to the attempt in bdcc6d3d2638b3a2c99ab3b9bfaa9c02e584993a, this uses intermediate generic instructions.
* CodeGen: Use LLT instead of EVT in getRegisterByName (Matt Arsenault, 2020-01-09, 1 file, -1/+1)
    Only PPC seems to be using it, and only checks some simple cases and doesn't distinguish between FP. Just switch to using LLT to simplify use from GlobalISel.
* GlobalISel: Move getLLTForMVT/getMVTForLLT (Matt Arsenault, 2020-01-09, 2 files, -5/+9)
    As an intermediate step, some TLI functions can be converted to using LLT instead of MVT. Move this somewhere out of GlobalISel so DAG functions can use these.
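    A hedged sketch of round-tripping between the two type representations with the moved helpers; only the function names come from the commit, and the header locations are assumptions:

        #include "llvm/CodeGen/LowLevelType.h"      // assumed new home of the helpers
        #include "llvm/Support/MachineValueType.h"  // assumed location of MVT
        using namespace llvm;

        bool roundTripsCleanly(MVT VT) {
          LLT Ty = getLLTForMVT(VT);      // SelectionDAG type -> GlobalISel type
          return getMVTForLLT(Ty) == VT;  // and back again
        }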
* TableGen/GlobalISel: Address fixme (Matt Arsenault, 2020-01-09, 1 file, -0/+5)
    Don't call computeAvailableFunctionFeatures for every instruction.
* [ms] [X86] Use "P" modifier on all branch-target operands in inline X86 assembly. (Eric Astor, 2020-01-09, 2 files, -5/+9)
    Summary: Extend D71677 to apply to all branch-target operands, rather than special-casing call instructions. Also add a regression test for llvm.org/PR44272, since this finishes fixing it.
    Reviewers: thakis, rnk
    Reviewed By: thakis
    Subscribers: merge_guards_bot, hiraditya, cfe-commits, llvm-commits
    Tags: #clang, #llvm
    Differential Revision: https://reviews.llvm.org/D72417
* [Support][NFC] Add a comment about the semantics of MF_HUGE_HINT flag (Bruno Ricci, 2020-01-09, 1 file, -0/+11)
* [NFCI][LoopUnrollAndJam] Changing LoopUnrollAndJamPass to a function pass. (Whitney Tsang, 2020-01-09, 1 file, -5/+2)
    Summary: This patch changes LoopUnrollAndJamPass to a function pass, and keeps the loop traversal order the same as defined in FunctionToLoopPassAdaptor (LoopPassManager.h). The next patch will change the loop traversal to outer-to-inner order, so more loops can be transformed.
    Discussion in llvm-dev mailing list: https://groups.google.com/forum/#!topic/llvm-dev/LF4rUjkVI2g
    Reviewer: dmgreen, jdoerfert, Meinersbur, kbarton, bmahjour, etiotto
    Reviewed By: dmgreen
    Subscribers: hiraditya, zzheng, llvm-commits
    Tag: LLVM
    Differential Revision: https://reviews.llvm.org/D72230
* [ARM,MVE] Add missing IntrNoMem flag on IR intrinsics. (Simon Tatham, 2020-01-09, 1 file, -14/+13)
    A lot of the IR-level intrinsics we've been defining for MVE recently accidentally had `props = []` instead of `props = [IntrNoMem]`, so that optimization would have been overcautious about reordering them.
    All the affected cases were due to instantiating the multiclasses `MVEPredicated` and `MVEMXPredicated` without filling in the `props` parameter, because I //thought// I remembered having set the defaults in those multiclasses to `[IntrNoMem]`. In fact I hadn't done that. Now I have.
    (The IR intrinsics that //do// read and write memory are all explicitly marked as `[IntrReadMem]` or `[IntrWriteMem]` already, so they will override these defaults.)
* [VE] Target stub for NEC SX-Aurora (Kazushi (Jam) Marukawa, 2020-01-09, 1 file, -1/+7)
    Summary: This patch registers the 've' target: the NEC SX-Aurora TSUBASA Vector Engine.
    Reviewed By: arsenm
    Differential Revision: https://reviews.llvm.org/D69103
* [LoopUtils][NFC] Minor refactoring in getLoopEstimatedTripCount. (Evgeniy Brevnov, 2020-01-09, 1 file, -0/+5)
* [APFloat] Fix checked error assert failures (Ehud Katz, 2020-01-09, 1 file, -1/+2)
    `APFloat::convertFromString` returns an `Expected` result, which must be "checked" if the LLVM_ENABLE_ABI_BREAKING_CHECKS preprocessor flag is set. To mark an `Expected` result as "checked" we must consume the `Error` within. In many cases, we are only interested in knowing whether an error occurred, without the need to examine the error info. This is achieved, easily, with the `errorToBool()` API.
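    A minimal sketch of the `errorToBool()` pattern described above; the wrapper function is assumed for illustration:

        #include "llvm/ADT/APFloat.h"
        #include "llvm/ADT/StringRef.h"
        #include "llvm/Support/Error.h"
        using namespace llvm;

        bool parseDouble(StringRef Str, APFloat &Result) {
          Result = APFloat(APFloat::IEEEdouble());
          Expected<APFloat::opStatus> StatusOrErr =
              Result.convertFromString(Str, APFloat::rmNearestTiesToEven);
          // We only care whether parsing failed; errorToBool() consumes the Error,
          // so the Expected<> counts as checked even with ABI-breaking checks on.
          return !errorToBool(StatusOrErr.takeError());
        }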
* Revert "Revert "[MIR] Target specific MIR formating and parsing""Daniel Sanders2020-01-087-3/+106
| | | | | | | There was an unguarded dereference of MF in a function that permitted nullptr. Fixed This reverts commit 71d64f72f934631aa2f12b9542c23f74f256f494.
* Revert "[MIR] Target specific MIR formating and parsing"Nico Weber2020-01-087-106/+3
| | | | | This reverts commit 3ef05d85be8c3666ebfa3ad986eb334da5195a47. It broke check-llvm on many bots, see comments on D69836.
* [MIR] Target specific MIR formatting and parsing (Peng Guo, 2020-01-08, 7 files, -3/+106)
    Summary: Added MIRFormatter for target-specific MIR formatting and parsing with immediate and custom pseudo source values. A target machine can subclass MIRFormatter and implement custom logic for printing and parsing immediate and custom pseudo source values for better readability.
    * A target-specific immediate mnemonic needs to start with "." followed by an identifier string. When the MIR parser sees such an immediate it will call the target-specific parsing function.
    * A custom pseudo source value needs to start with `custom` followed by a double-quoted string. The MIR parser will pass the quoted string to the target-specific PSV parsing function.
    * MIRFormatter has 2 helper functions to facilitate LLVM value printing and parsing for custom PSVs if they refer to LLVM values.
    Patch by Peng Guo
    Reviewers: dsanders, arsenm
    Reviewed By: dsanders
    Subscribers: wdng, jvesely, nhaehnle, hiraditya, jfb, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D69836
* Revert "[MIR] Target specific MIR formating and parsing"Daniel Sanders2020-01-087-106/+3
| | | | | | Forgot to credit Peng in the commit message. This reverts commit be841f89d0014b1e0246a4feae941b2f74abd908.
* [MIR] Target specific MIR formatting and parsing (Peng Guo, 2020-01-08, 7 files, -3/+106)
    Summary: Added MIRFormatter for target-specific MIR formatting and parsing with immediate and custom pseudo source values. A target machine can subclass MIRFormatter and implement custom logic for printing and parsing immediate and custom pseudo source values for better readability.
    * A target-specific immediate mnemonic needs to start with "." followed by an identifier string. When the MIR parser sees such an immediate it will call the target-specific parsing function.
    * A custom pseudo source value needs to start with `custom` followed by a double-quoted string. The MIR parser will pass the quoted string to the target-specific PSV parsing function.
    * MIRFormatter has 2 helper functions to facilitate LLVM value printing and parsing for custom PSVs if they refer to LLVM values.
    Reviewers: dsanders, arsenm
    Reviewed By: dsanders
    Subscribers: wdng, jvesely, nhaehnle, hiraditya, jfb, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D69836
* [Attributor][FIX] Carefully change invokes to calls (after manifest) (Johannes Doerfert, 2020-01-08, 1 file, -0/+10)
    Before, we manually inserted unreachable early, but that could lead to broken PHI nodes. Now we use the existing late modification functionality.
* [Attributor][FIX] Avoid dangling value pointers during code modification (Johannes Doerfert, 2020-01-08, 2 files, -1/+20)
    When we replace instructions with unreachable we delete instructions. We now avoid dangling pointers to those deleted instructions in the `ToBeChangedToUnreachableInsts` set. Other modification collections might need to be updated in the future as well.
* [PowerPC]: Add powerpcspe target triple subarch component (Justin Hibbits, 2020-01-08, 1 file, -1/+3)
    Summary: This allows '-target powerpcspe-unknown-linux-gnu' or 'powerpcspe-unknown-freebsd' to be used, instead of '-target powerpc-unknown-linux-gnu -mspe'.
    Reviewed By: dim
    Differential Revision: https://reviews.llvm.org/D72014
* Recommit "[MachineVerifier] Improve verification of live-in lists."Jonas Paulsson2020-01-081-0/+3
| | | | | | | | | | | | | | | | | | MachineVerifier::visitMachineFunctionAfter() is extended to check the live-through case for live-in lists. This is only done for registers without aliases and that are neither allocatable or reserved, such as the SystemZ::CC register. The MachineVerifier earlier only catched the case of a live-in use without an entry in the live-in list (as "using an undefined physical register"). A comment in LivePhysRegs.h has been added stating a guarantee that addLiveOuts() can be trusted for a full register both before and after register allocation. Review: Quentin Colombet Differential Revision: https://reviews.llvm.org/D68267
* Revert "Merge memtag instructions with adjacent stack slots."Evgenii Stepanov2020-01-081-7/+0
| | | | | | | | | | | | *** Bad machine code: Tied use must be a register *** - function: stg_alloca17 - basic block: %bb.0 entry (0x20076710580) - instruction: early-clobber %0:gpr64common, early-clobber %1:gpr64sp = STGloop 272, %stack.0.a :: (store 272 into %ir.a, align 16) - operand 3: %stack.0.a http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-win/builds/21481/steps/test-check-all/logs/stdio This reverts commit b675a7628ce6a21b1e4a71c079a67badfb8b073d.
* Revert "[JumpThreading] Thread jumps through two basic blocks"Kazu Hirata2020-01-081-5/+0
| | | | | | | | It looks like my patch breaks the sanitizer-windows build: http://lab.llvm.org:8011/builders/sanitizer-windows/builds/56324 This reverts commit ead815924e6ebeaf02c31c37ebf7a560b5fdf67b.
* Merge memtag instructions with adjacent stack slots. (Evgenii Stepanov, 2020-01-08, 1 file, -0/+7)
    Summary: Detect a run of memory tagging instructions for adjacent stack frame slots, and replace them with a shorter instruction sequence:
    * replace STG + STG with ST2G
    * replace STGloop + STGloop with STGloop
    This code needs to run when stack slot offsets are already known, but before FrameIndex operands in STG instructions are eliminated; that's the reason for the new hook in PrologueEpilogue.
    This change modifies STGloop and STZGloop pseudos to take the size as an immediate integer operand, and a base address as a FI operand when possible. This is needed to simplify recognizing an STGloop instruction as operating on a stack slot post-regalloc.
    This improves memtag code size by ~0.25%, and it looks like an additional ~0.1% is possible by rearranging the stack frame such that consecutive STG instructions reference adjacent slots (patch pending).
    Reviewers: pcc, ostannard
    Subscribers: hiraditya, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D70286
* [BranchAlign] Compiler support for suppressing branch align (Philip Reames, 2020-01-08, 2 files, -0/+14)
    As discussed heavily in the original review (D70157), there's a need for the compiler to be able to selectively suppress padding (either nop or prefix) to respect assumptions about the meaning of labels and instructions in generated code. Rather than wait for syntax to be finalized - which appears to be a very slow process - this patch focuses on the compiler use case and *only* worries about the integrated assembler. To my knowledge, this covers all cases mentioned to date for clang/JIT support.
    For testing purposes, I wired it up so that if the integrated assembler was using autopadding for branch alignment (e.g. enabled at the command line), then the textual assembly output would contain a comment for each location where padding was enabled or disabled. This seemed like the least painful choice overall.
    Note that the result of this patch effectively disables the jcc errata mitigation for many constructs (statepoints, implicit null checks, xray, etc...), which is non-ideal. It is at least *correct* and should allow us to enable the mitigation for the compiler. Once that's done, and a few other items are worked through, we probably want to come back to this and explore a bundling-based approach instead, so that we can pad instructions while keeping labels in the right place.
    Differential Revision: https://reviews.llvm.org/D72303
* [JumpThreading] Thread jumps through two basic blocks (Kazu Hirata, 2020-01-08, 1 file, -0/+5)
    Summary: This patch teaches JumpThreading.cpp to thread through two basic blocks like:
      bb3:
        %var = phi i32* [ null, %bb1 ], [ @a, %bb2 ]
        %tobool = icmp eq i32 %cond, 0
        br i1 %tobool, label %bb4, label ...
      bb4:
        %cmp = icmp eq i32* %var, null
        br i1 %cmp, label bb5, label bb6
    by duplicating basic blocks like bb3 above. Once we duplicate bb3 as bb3.dup and redirect edge bb2->bb3 to bb2->bb3.dup, we have:
      bb3:
        %var = phi i32* [ @a, %bb2 ]
        %tobool = icmp eq i32 %cond, 0
        br i1 %tobool, label %bb4, label ...
      bb3.dup:
        %var = phi i32* [ null, %bb1 ]
        %tobool = icmp eq i32 %cond, 0
        br i1 %tobool, label %bb4, label ...
      bb4:
        %cmp = icmp eq i32* %var, null
        br i1 %cmp, label bb5, label bb6
    Then the existing code in JumpThreading.cpp can thread edge bb3.dup->bb4 through bb4 and eventually create bb3.dup->bb5.
    Reviewers: wmi
    Subscribers: hiraditya, jfb, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D70247
* [ARM,MVE] Intrinsics for variable shift instructions. (Simon Tatham, 2020-01-08, 1 file, -0/+7)
    This batch of intrinsics fills in all the shift instructions that take a variable shift distance in a register, instead of an immediate. Some of these instructions take a single shift distance in a scalar register and apply it to all lanes; others take a vector of per-lane distances.
    These instructions are all basically one family, varying in whether they saturate out-of-range values, and whether they round when bits are shifted off the bottom. I've implemented them at the IR level by a much smaller family of IR intrinsics, which take flag parameters to indicate saturating and/or rounding (along with the usual one to specify signed/unsigned integers).
    An oddity is that all of them are //left// shift instructions – but if you pass a negative shift count, they'll shift right. So the vector shift distances are always vectors of //signed// integers, regardless of whether you're considering the other input vector to be of signed or unsigned. Also, even the simplest `vshlq` instruction in this family (neither saturating nor rounding) has to be implemented as an IR intrinsic, because the ordinary LLVM IR `shl` operation would consider an out-of-range shift count to be undefined behavior.
    Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard
    Reviewed By: dmgreen
    Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
    Tags: #clang, #llvm
    Differential Revision: https://reviews.llvm.org/D72329
* [ARM,MVE] Intrinsics for partial-overwrite imm shifts. (Simon Tatham, 2020-01-08, 1 file, -0/+11)
    This batch of intrinsics covers two sets of immediate shift instructions, which have in common that they only overwrite part of their output register and so they need an extra input giving its previous value.
    The VSLI and VSRI instructions shift each lane of the input vector left or right just as if they were normal immediate VSHL/VSHR, but then they only overwrite the output bits that correspond to actual shifted bits of the input. So VSLI will leave the low n bits of each output lane unchanged, and VSRI the same with the top n bits.
    The V[Q][R]SHR[U]N family are all narrowing shifts: they take an input vector of 2n-bit integers, shift each lane right by a constant, and then narrow the shifted result to only n bits. So they only overwrite half of the n-bit lanes in the output register, and the B/T suffix indicates whether it's the bottom or top half of each 2n-bit lane.
    I've implemented the whole of the latter family using a single IR intrinsic `vshrn`, which takes a lot of i32 parameters indicating which instruction it expands to (by specifying signedness of the input and output types, whether it saturates and/or rounds, etc).
    Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard
    Reviewed By: dmgreen
    Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
    Tags: #clang, #llvm
    Differential Revision: https://reviews.llvm.org/D72328
* [Intrinsic] Add fixed point division intrinsics. (Bevin Hansson, 2020-01-08, 4 files, -1/+27)
    Summary: This patch adds intrinsics and ISelDAG nodes for signed and unsigned fixed-point division:
    * llvm.sdiv.fix.*
    * llvm.udiv.fix.*
    These intrinsics perform scaled division on two integers or vectors of integers. They are required for the implementation of the Embedded-C fixed-point arithmetic in Clang.
    Patch by: ebevhan
    Reviewers: bjope, leonardchan, efriedma, craig.topper
    Reviewed By: craig.topper
    Subscribers: Ka-Ka, ilya, hiraditya, jdoerfert, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D70007
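    A hedged sketch (the emitting helper is assumed for illustration) of generating the new signed fixed-point division intrinsic through IRBuilder:

        #include "llvm/IR/IRBuilder.h"
        #include "llvm/IR/Intrinsics.h"
        using namespace llvm;

        // Emits llvm.sdiv.fix.<ty> with a constant scale, i.e. a scaled division
        // where both operands carry Scale fractional bits.
        Value *emitFixedPointSDiv(IRBuilder<> &B, Value *LHS, Value *RHS,
                                  unsigned Scale) {
          return B.CreateIntrinsic(Intrinsic::sdiv_fix, {LHS->getType()},
                                   {LHS, RHS, B.getInt32(Scale)});
        }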
* [NFC] Move InPQueue into arguments of releaseNode (Qiu Chaofan, 2020-01-08, 1 file, -5/+13)
    This patch moves `InPQueue` into function arguments instead of template arguments of `releaseNode`, which is a cleaner approach.
    Differential Revision: https://reviews.llvm.org/D72125
* [Dsymutil][Debuginfo][NFC] Reland: Refactor dsymutil to separate DWARF optimizing part. #2. (Alexey Lapshin, 2020-01-08, 4 files, -1/+586)
    Summary: This patch relands D71271. The problem with D71271 is that it has a cyclic dependency: CodeGen->AsmPrinter->DebugInfoDWARF->CodeGen. To avoid the cyclic dependency, this patch puts the implementation of DWARFOptimizer into a separate library: lib/DWARFLinker. Thus the difference between this patch and D71271 is that DWARFOptimizer is renamed to DWARFLinker and its files are put into lib/DWARFLinker.
    Reviewers: JDevlieghere, friss, dblaikie, aprantl
    Reviewed By: JDevlieghere
    Subscribers: thegameg, merge_guards_bot, probinson, mgorny, hiraditya, llvm-commits
    Tags: #llvm, #debug-info
    Differential Revision: https://reviews.llvm.org/D71839
* AArch64: add missing Apple CPU names and use them by default. (Tim Northover, 2020-01-08, 1 file, -0/+18)
    Apple's CPUs are called A7-A13 in official communication, occasionally with weird suffixes which we probably don't need to care about. This adds each one and describes its features. It also switches the default CPU to the canonical name for Cyclone, but leaves legacy support in so that existing bitcode still compiles.
* [X86] Adding fp128 support for strict fcmp (Wang, Pengfei, 2020-01-08, 1 file, -0/+6)
    Summary: Adding fp128 support for strict fcmp.
    Reviewers: craig.topper, LiuChen3, andrew.w.kaylor, RKSimon, uweigand
    Subscribers: hiraditya, llvm-commits, LuoYuanke
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D71897
* [SCEV] get more accurate range for AddExpr with wrap flag. (czhengsz, 2020-01-07, 1 file, -0/+1)
    Reviewed By: nikic
    Differential Revision: https://reviews.llvm.org/D64869
* [AArch64][GlobalISel] Fold a chain of two G_PTR_ADDs of constant offsets. (Amara Emerson, 2020-01-07, 2 files, -1/+17)
    E.g.
      %addr1 = G_PTR_ADD %base, G_CONSTANT 20
      %addr2 = G_PTR_ADD %addr1, G_CONSTANT 8
    -->
      %addr2 = G_PTR_ADD %base, G_CONSTANT 28
    Differential Revision: https://reviews.llvm.org/D72351