* Don't update NoTrappingFPMath and FPDenormalMode in resetTargetOptions (Oliver Stannard, 2019-07-19, 2 files, -12/+1)

  We'd like to remove this whole function, because these are properties of functions, not the target as a whole. These two are easy to remove because they are only used for emitting ARM build attributes, which expects them to represent the defaults for the whole module, not just the last function generated.

  This is needed to get correct build attributes when using IPRA on ARM, because IPRA causes resetTargetOptions to get called before ARMAsmPrinter::emitAttributes.

  Differential revision: https://reviews.llvm.org/D64929

  llvm-svn: 366562

* [lldb][NFC] Tablegenify target (Raphael Isemann, 2019-07-19, 3 files, -66/+168)

  llvm-svn: 366561

* [NFC] Remove indent after r366433 (Stefan Granitz, 2019-07-19, 1 file, -120/+118)

  llvm-svn: 366560

* Revert "Revert r366458, r366467 and r366468" (Kadir Cetinkaya, 2019-07-19, 15 files, -225/+359)

  This reverts commit 9c377105da0be7c2c9a3c70035ce674c71b846af.

  [clangd][BackgroundIndexLoader] Directly store DependentTU while loading shard

  Summary: We were deferring the population of the DependentTU field in LoadedShard until BackgroundIndexLoader was consumed. This actually triggers a use after free since the shards FileToTU was pointing at could've been moved while consuming the Loader.

  Reviewers: sammccall

  Subscribers: ilya-biryukov, MaskRay, jkorous, arphaman, cfe-commits

  Tags: #clang

  Differential Revision: https://reviews.llvm.org/D64980

  llvm-svn: 366559
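
  Illustrative sketch (not part of the commit, and not clangd's actual data structures): a self-contained C++ program showing the general hazard described above — a shard records a pointer into a map owned by the loader, and that pointer ends up referring into storage that dies when the loader is consumed. The type and field names below are made up for the example; the fix amounts to storing the value while loading instead of deferring.

    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    struct LoadedShard {
      const std::string *DependentTU = nullptr; // deferred: points into loader-owned state
      std::string OwnedTU;                      // eager copy: what the fix amounts to
    };

    struct Loader {
      std::map<std::string, std::string> FileToTU;
      std::vector<LoadedShard> Shards;
    };

    // "Consuming" the loader: the shards escape, but the loader (and its map)
    // are destroyed when this function returns.
    std::vector<LoadedShard> consume(Loader L) { return std::move(L.Shards); }

    int main() {
      Loader L;
      L.FileToTU["foo.h"] = "foo.cpp";

      LoadedShard S;
      S.DependentTU = &L.FileToTU["foo.h"]; // buggy: pointer into the loader's map
      S.OwnedTU = L.FileToTU["foo.h"];      // safe: value stored while loading
      L.Shards.push_back(S);

      std::vector<LoadedShard> Shards = consume(std::move(L));
      // Shards[0].DependentTU now dangles (the consumed loader's map is gone);
      // Shards[0].OwnedTU is still valid, so only that one is used here.
      std::cout << Shards[0].OwnedTU << "\n";
    }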

* [llvm-readelf] - A fix for: "--hash-symbols asserts for 64-bit ELFs" (George Rimar, 2019-07-19, 2 files, -4/+84)

  Fixes https://bugs.llvm.org/show_bug.cgi?id=42622.
  (The --hash-symbols switch is currently broken for 64-bit ELF files, due to r352630.)

  Differential revision: https://reviews.llvm.org/D64788

  llvm-svn: 366558

* [IPRA] Don't rely on non-exact function definitions (Oliver Stannard, 2019-07-19, 2 files, -1/+49)

  If a function definition is not exact, then the linker could select a differently-compiled version of it, which could use different registers.

  https://reviews.llvm.org/D64909

  llvm-svn: 366557

* [ARM] Add <saturate> operand to SQRSHRL and UQRSHLL (Mikhail Maltsev, 2019-07-19, 8 files, -17/+91)

  Summary: According to the new Armv8-M specification https://static.docs.arm.com/ddi0553/bh/DDI0553B_h_armv8m_arm.pdf the instructions SQRSHRL and UQRSHLL now have an additional immediate operand <saturate>. The new assembly syntax is:

    SQRSHRL<c> RdaLo, RdaHi, #<saturate>, Rm
    UQRSHLL<c> RdaLo, RdaHi, #<saturate>, Rm

  where <saturate> can be either 64 (the existing behavior) or 48, in which case the result is saturated to 48 bits.

  The new operand is encoded as follows:

    #64  Encoded as sat = 0
    #48  Encoded as sat = 1

  sat is bit 7 of the instruction bit pattern.

  This patch adds a new assembler operand class MveSaturateOperand which implements parsing and encoding. Decoding is implemented in DecodeMVEOverlappingLongShift.

  Reviewers: ostannard, simon_tatham, t.p.northover, samparker, dmgreen, SjoerdMeijer

  Reviewed By: simon_tatham

  Subscribers: javed.absar, kristof.beyls, hiraditya, pbarrio, llvm-commits

  Tags: #llvm

  Differential Revision: https://reviews.llvm.org/D64810

  llvm-svn: 366555

* Revert r366458, r366467 and r366468 (Azharuddin Mohammed, 2019-07-19, 15 files, -367/+225)

  r366458 is causing test failures. r366467 and r366468 had to be reverted as they were causing conflicts while reverting r366458.

  r366468 [clangd] Remove dead code from BackgroundIndex
  r366467 [clangd] BackgroundIndex stores shards to the closest project
  r366458 [clangd] Refactor background-index shard loading

  llvm-svn: 366551

* [OpenCL] Define CLK_NULL_EVENT without cast (Sven van Haastregt, 2019-07-19, 2 files, -2/+3)

  Defining CLK_NULL_EVENT with a `(void*)` cast has the (unintended?) side effect that the address space will be fixed (as generic in OpenCL 2.0 mode). The consequence is that any target-specific address space for the clk_event_t type will not be applied.

  It is not clear why the void pointer cast was needed in the first place, and it seems we can do without it.

  Differential Revision: https://reviews.llvm.org/D63876

  llvm-svn: 366546

* [clangd] Handle windows line endings in QueryDriver (Kadir Cetinkaya, 2019-07-19, 2 files, -3/+5)

  Summary: The previous patch did not fix the end mark. D64789 fixes the second case of https://github.com/clangd/clangd/issues/93

  Patch by @lh123 !

  Reviewers: sammccall, kadircet

  Reviewed By: kadircet

  Subscribers: MaskRay, ilya-biryukov, jkorous, arphaman, cfe-commits

  Tags: #clang-tools-extra, #clang

  Differential Revision: https://reviews.llvm.org/D64970

  llvm-svn: 366545
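
  Not the clangd code — just a minimal, self-contained C++ sketch of the kind of handling this fix needs: when the driver's output is produced on Windows, each line ends in "\r\n", so marker comparisons only work if the trailing '\r' is stripped. The BEGIN/END marker strings below are placeholders, not the real markers.

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    std::vector<std::string> splitLines(const std::string &Output) {
      std::vector<std::string> Lines;
      std::istringstream In(Output);
      std::string Line;
      while (std::getline(In, Line)) {      // splits on '\n' only
        if (!Line.empty() && Line.back() == '\r')
          Line.pop_back();                  // tolerate CRLF line endings
        Lines.push_back(Line);
      }
      return Lines;
    }

    int main() {
      // With the '\r' left in place, a comparison like Line == "END" would never match.
      for (const std::string &L : splitLines("BEGIN\r\n/usr/include\r\nEND\r\n"))
        std::cout << '[' << L << "]\n";
    }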

* [sanitizers] Use covering ObjectFormatType switches (Hubert Tong, 2019-07-19, 2 files, -3/+12)

  Summary: This patch removes the `default` case from some switches on `llvm::Triple::ObjectFormatType`, and cases for the missing enumerators (`UnknownObjectFormat`, `Wasm`, and `XCOFF`) are then added.

  For `UnknownObjectFormat`, the effect of the action for the `default` case is maintained; otherwise, where `llvm_unreachable` is called, `report_fatal_error` is used instead.

  Where the `default` case returns a default value, `report_fatal_error` is used for XCOFF as a placeholder. For `Wasm`, the effect of the action for the `default` case is maintained.

  The code is structured to avoid strongly implying that the `Wasm` case is present for any reason other than to make the switch cover all `ObjectFormatType` enumerator values.

  Reviewers: sfertile, jasonliu, daltenty

  Reviewed By: sfertile

  Subscribers: hiraditya, aheejin, sunfish, llvm-commits, cfe-commits

  Tags: #clang, #llvm

  Differential Revision: https://reviews.llvm.org/D64222

  llvm-svn: 366544
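
  A shape-only sketch, not taken from the patch: a local enum stands in for llvm::Triple::ObjectFormatType so the example compiles on its own, and the section names and the reportFatalError helper are invented placeholders. The point is the pattern the commit describes — with no `default:` case, adding a new enumerator upstream becomes a -Wswitch warning here instead of silently falling into a default path.

    #include <cstdio>
    #include <cstdlib>

    // Stand-in for llvm::Triple::ObjectFormatType (same enumerators as of 2019).
    enum class ObjectFormatType { UnknownObjectFormat, COFF, ELF, MachO, Wasm, XCOFF };

    [[noreturn]] static void reportFatalError(const char *Msg) {
      std::fprintf(stderr, "fatal error: %s\n", Msg);
      std::exit(1);
    }

    static const char *coverageSectionName(ObjectFormatType OF) {
      switch (OF) { // covering switch: every enumerator listed, no default:
      case ObjectFormatType::UnknownObjectFormat:
      case ObjectFormatType::ELF:
        return "placeholder_elf_section";   // keeps the old default behaviour
      case ObjectFormatType::COFF:
        return "placeholder_coff_section";
      case ObjectFormatType::MachO:
        return "placeholder_macho_section";
      case ObjectFormatType::Wasm:
      case ObjectFormatType::XCOFF:
        reportFatalError("object format not yet supported here");
      }
      reportFatalError("fully covered switch"); // unreachable while the switch stays covering
    }

    int main() { std::puts(coverageSectionName(ObjectFormatType::ELF)); }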

* [AMDGPU] Simplify the exclusive scan used for optimized atomics (Jay Foad, 2019-07-19, 2 files, -12/+8)

  Summary: Change the scan algorithm to use only power-of-two shifts (1, 2, 4, 8, 16, 32) instead of starting off shifting by 1, 2 and 3 and then doing a 3-way ADD, because:

  1. It simplifies the compiler a little.
  2. It minimizes vgpr pressure because each instruction is now of the form vn = vn + vn << c.
  3. It is more friendly to the DPP combiner, which currently can't combine into an ADD3 instruction.

  Because of #2 and #3 the end result is improved from this:

    v_add_u32_dpp v4, v3, v3 row_shr:1 row_mask:0xf bank_mask:0xf bound_ctrl:0
    v_mov_b32_dpp v5, v3 row_shr:2 row_mask:0xf bank_mask:0xf
    v_mov_b32_dpp v1, v3 row_shr:3 row_mask:0xf bank_mask:0xf
    v_add3_u32 v1, v4, v5, v1
    s_nop 1
    v_add_u32_dpp v1, v1, v1 row_shr:4 row_mask:0xf bank_mask:0xe
    s_nop 1
    v_add_u32_dpp v1, v1, v1 row_shr:8 row_mask:0xf bank_mask:0xc
    s_nop 1
    v_add_u32_dpp v1, v1, v1 row_bcast:15 row_mask:0xa bank_mask:0xf
    s_nop 1
    v_add_u32_dpp v1, v1, v1 row_bcast:31 row_mask:0xc bank_mask:0xf

  To this:

    v_add_u32_dpp v1, v1, v1 row_shr:1 row_mask:0xf bank_mask:0xf bound_ctrl:0
    s_nop 1
    v_add_u32_dpp v1, v1, v1 row_shr:2 row_mask:0xf bank_mask:0xf bound_ctrl:0
    s_nop 1
    v_add_u32_dpp v1, v1, v1 row_shr:4 row_mask:0xf bank_mask:0xe
    s_nop 1
    v_add_u32_dpp v1, v1, v1 row_shr:8 row_mask:0xf bank_mask:0xc
    s_nop 1
    v_add_u32_dpp v1, v1, v1 row_bcast:15 row_mask:0xa bank_mask:0xf
    s_nop 1
    v_add_u32_dpp v1, v1, v1 row_bcast:31 row_mask:0xc bank_mask:0xf

  I.e. two fewer computational instructions, one extra nop where we could schedule something else.

  Reviewers: arsenm, sheredom, critson, rampitec, vpykhtin

  Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

  Tags: #llvm

  Differential Revision: https://reviews.llvm.org/D64411

  llvm-svn: 366543
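
  A conceptual scalar analogue, not the backend change itself: a 64-lane wavefront modelled as a plain array, scanned with only power-of-two offsets so that every step has the vn = vn + (vn shifted by c) shape the commit describes. The DPP row/bank masking details are deliberately left out.

    #include <array>
    #include <cstdio>

    int main() {
      std::array<unsigned, 64> V;
      for (unsigned I = 0; I < V.size(); ++I)
        V[I] = 1;                              // each "lane" contributes 1

      // Inclusive prefix sum using offsets 1, 2, 4, 8, 16, 32 only.
      for (unsigned C = 1; C < V.size(); C *= 2)
        for (unsigned I = V.size(); I-- > C;)  // walk downwards so V[I - C] is the
          V[I] += V[I - C];                    // value from the previous round

      std::printf("lane 63 holds the full reduction: %u\n", V[63]); // prints 64
    }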

* [Loop Peeling] Enable peeling of multiple exits by default. (Serguei Katkov, 2019-07-19, 1 file, -1/+1)

  Enable, by default, loop peeling with multiple exits where all non-latch exits end up with deopt.

  Reviewers: reames, fhahn

  Reviewed By: reames

  Subscribers: xbolva00, hiraditya, zzheng, llvm-commits

  Differential Revision: https://reviews.llvm.org/D64619

  llvm-svn: 366542

* [clangd] cleanup: unify the implementation of checking whether a location is inside the main file (Haojian Wu, 2019-07-19, 12 files, -21/+54)

  Summary: We have variant implementations in the codebase; this patch unifies them.

  Reviewers: ilya-biryukov, kadircet

  Subscribers: MaskRay, jkorous, arphaman, cfe-commits

  Tags: #clang

  Differential Revision: https://reviews.llvm.org/D64915

  llvm-svn: 366541

* [InstCombine] Dropping redundant masking before left-shift [5/5] (PR42563) (Roman Lebedev, 2019-07-19, 2 files, -12/+13)

  Summary: If we have some pattern that leaves only some low bits set, and then performs a left-shift of those bits, and if none of the bits that remain after the final shift are modified by the mask, we can omit the mask.

  There are many variants of this pattern:
    f. `((x << MaskShAmt) a>> MaskShAmt) << ShiftShAmt`
  All these patterns can be simplified to just:
    `x << ShiftShAmt`
  iff:
    f. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)

  Normally, the inner pattern is sign-extend, but for our purposes it's no different from the other patterns.

  alive proofs:
  f: https://rise4fun.com/Alive/7U3

  For now let's start with patterns where both shift amounts are variable, with a trivial constant "offset" between them, since I believe this is both the simplest to handle and, I think, the most common. But again, there are likely other variants where we could use ValueTracking/ConstantRange to handle more cases.

  https://bugs.llvm.org/show_bug.cgi?id=42563

  Differential Revision: https://reviews.llvm.org/D64524

  llvm-svn: 366540

* [InstCombine] Dropping redundant masking before left-shift [4/5] (PR42563) (Roman Lebedev, 2019-07-19, 2 files, -11/+14)

  Summary: If we have some pattern that leaves only some low bits set, and then performs a left-shift of those bits, and if none of the bits that remain after the final shift are modified by the mask, we can omit the mask.

  There are many variants of this pattern:
    e. `((x << MaskShAmt) l>> MaskShAmt) << ShiftShAmt`
  All these patterns can be simplified to just:
    `x << ShiftShAmt`
  iff:
    e. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)

  alive proofs:
  e: https://rise4fun.com/Alive/0FT

  For now let's start with patterns where both shift amounts are variable, with a trivial constant "offset" between them, since I believe this is both the simplest to handle and, I think, the most common. But again, there are likely other variants where we could use ValueTracking/ConstantRange to handle more cases.

  https://bugs.llvm.org/show_bug.cgi?id=42563

  Differential Revision: https://reviews.llvm.org/D64521

  llvm-svn: 366539

* [InstCombine] Dropping redundant masking before left-shift [3/5] (PR42563) (Roman Lebedev, 2019-07-19, 2 files, -12/+16)

  Summary: If we have some pattern that leaves only some low bits set, and then performs a left-shift of those bits, and if none of the bits that remain after the final shift are modified by the mask, we can omit the mask.

  There are many variants of this pattern:
    d. `(x & ((-1 << MaskShAmt) >> MaskShAmt)) << ShiftShAmt`
  All these patterns can be simplified to just:
    `x << ShiftShAmt`
  iff:
    d. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)

  alive proofs:
  d: https://rise4fun.com/Alive/I5Y

  For now let's start with patterns where both shift amounts are variable, with a trivial constant "offset" between them, since I believe this is both the simplest to handle and, I think, the most common. But again, there are likely other variants where we could use ValueTracking/ConstantRange to handle more cases.

  https://bugs.llvm.org/show_bug.cgi?id=42563

  Differential Revision: https://reviews.llvm.org/D64519

  llvm-svn: 366538

* [InstCombine] Dropping redundant masking before left-shift [2/5] (PR42563) (Roman Lebedev, 2019-07-19, 2 files, -26/+42)

  Summary: If we have some pattern that leaves only some low bits set, and then performs a left-shift of those bits, and if none of the bits that remain after the final shift are modified by the mask, we can omit the mask.

  There are many variants of this pattern:
    c. `(x & (-1 >> MaskShAmt)) << ShiftShAmt`
  All these patterns can be simplified to just:
    `x << ShiftShAmt`
  iff:
    c. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)

  alive proofs:
  c: https://rise4fun.com/Alive/RgJh

  For now let's start with patterns where both shift amounts are variable, with a trivial constant "offset" between them, since I believe this is both the simplest to handle and, I think, the most common. But again, there are likely other variants where we could use ValueTracking/ConstantRange to handle more cases.

  https://bugs.llvm.org/show_bug.cgi?id=42563

  Differential Revision: https://reviews.llvm.org/D64517

  llvm-svn: 366537

* [InstCombine] Dropping redundant masking before left-shift [1/5] (PR42563) (Roman Lebedev, 2019-07-19, 2 files, -13/+16)

  Summary: If we have some pattern that leaves only some low bits set, and then performs a left-shift of those bits, and if none of the bits that remain after the final shift are modified by the mask, we can omit the mask.

  There are many variants of this pattern:
    b. `(x & (~(-1 << maskNbits))) << shiftNbits`
  All these patterns can be simplified to just:
    `x << ShiftShAmt`
  iff:
    b. `(MaskShAmt+ShiftShAmt) u>= bitwidth(x)`

  alive proof:
  b: https://rise4fun.com/Alive/y8M

  For now let's start with patterns where both shift amounts are variable, with a trivial constant "offset" between them, since I believe this is both the simplest to handle and, I think, the most common. But again, there are likely other variants where we could use ValueTracking/ConstantRange to handle more cases.

  https://bugs.llvm.org/show_bug.cgi?id=42563

  Differential Revision: https://reviews.llvm.org/D64514

  llvm-svn: 366536

* [InstCombine] Dropping redundant masking before left-shift [0/5] (PR42563) (Roman Lebedev, 2019-07-19, 2 files, -11/+61)

  Summary: If we have some pattern that leaves only some low bits set, and then performs a left-shift of those bits, and if none of the bits that remain after the final shift are modified by the mask, we can omit the mask.

  There are many variants of this pattern:
    a. `(x & ((1 << MaskShAmt) - 1)) << ShiftShAmt`
  All these patterns can be simplified to just:
    `x << ShiftShAmt`
  iff:
    a. `(MaskShAmt+ShiftShAmt) u>= bitwidth(x)`

  alive proof:
  a: https://rise4fun.com/Alive/wi9

  Indeed, not all of these patterns are canonical. But since this fold will only produce a single instruction, I'm really interested in handling even non-canonical patterns, since I have this general kind of pattern in hotpaths, and it is not totally outlandish for bit-twiddling code.

  For now let's start with patterns where both shift amounts are variable, with a trivial constant "offset" between them, since I believe this is both the simplest to handle and, I think, the most common. But again, there are likely other variants where we could use ValueTracking/ConstantRange to handle more cases.

  https://bugs.llvm.org/show_bug.cgi?id=42563

  Reviewers: spatel, nikic, huihuiz, xbolva00

  Reviewed By: xbolva00

  Subscribers: efriedma, hiraditya, llvm-commits

  Tags: #llvm

  Differential Revision: https://reviews.llvm.org/D64512

  llvm-svn: 366535
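
  A source-level illustration of pattern (a), not the InstCombine code: for 32-bit values, whenever maskNbits + shiftNbits >= 32 every bit cleared by the mask is shifted out anyway, so the masked form collapses to a plain shift. The function names are invented, and the loop bounds keep both shift amounts in range so the C++ expressions stay well defined.

    #include <cassert>
    #include <cstdint>

    // Pattern (a): mask off the low MaskNbits, then shift left.
    static uint32_t maskedThenShifted(uint32_t X, uint32_t MaskNbits, uint32_t ShiftNbits) {
      return (X & ((1u << MaskNbits) - 1u)) << ShiftNbits;
    }

    // The simplified form the fold produces.
    static uint32_t justShifted(uint32_t X, uint32_t ShiftNbits) {
      return X << ShiftNbits;
    }

    int main() {
      for (uint32_t Mask = 1; Mask < 32; ++Mask)
        for (uint32_t Shift = 32 - Mask; Shift < 32; ++Shift)   // Mask + Shift >= 32
          for (uint32_t X : {0u, 1u, 0x12345678u, 0xffffffffu})
            assert(maskedThenShifted(X, Mask, Shift) == justShifted(X, Shift));
      return 0;
    }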

* [ELF][test] Fix aarch64-condb-reloc.s (Fangrui Song, 2019-07-19, 1 file, -1/+1)

  llvm-svn: 366534

* [NFC] Fix an indentation issue in llvm/Support/TargetRegistry.h (Hubert Tong, 2019-07-19, 1 file, -2/+2)

  llvm-svn: 366533

* [ELF][AArch64] Improve some aarch64-*.s tests (Fangrui Song, 2019-07-19, 13 files, -244/+188)

  * Delete aarch64-tls-static.s: it is covered by aarch64-tlsdesc.c
  * Add --no-show-raw-insn to llvm-objdump -d tests
  * When linking an executable with %t.so, the path %t.so will be recorded in the DT_NEEDED entry if %t.so doesn't have DT_SONAME. The DT_NEEDED has varying lengths on different systems. Add -soname to make tests more robust. This issue will become outstanding if we allow overlapping PT_LOAD (D64930).

  llvm-svn: 366532

* [DebugInfo] Some fields do not need relocations even when relaxation is enabled. (Hsiangkai Wang, 2019-07-19, 4 files, -15/+32)

  In debug frame information, some fields, e.g., Length in CIE/FDE and Offset in FDE, are attributes that describe the structure of the CIE/FDE. They are not related to the relaxed code. However, these attributes are symbol differences, so in the current design they are filled as zero and LLVM generates relocations for them.

  We only need to generate relocations for symbols in executable sections. So, if the symbols are not located in executable sections, we still evaluate their values under relaxation.

  Differential Revision: https://reviews.llvm.org/D61584

  llvm-svn: 366531

* unbreak links (Chris Lattner, 2019-07-19, 11 files, -11/+11)

  llvm-svn: 366530

* replace the old kaleidoscope tutorial files with orphaned pages that forward to the new copy (Chris Lattner, 2019-07-19, 13 files, -5638/+47)

  llvm-svn: 366529

* Point to the dusted off version of the kaleidoscope tutorial. (Chris Lattner, 2019-07-19, 2 files, -2/+4)

  llvm-svn: 366528

* [test] [llvm-objcopy] Fix broken test case (Alex Brachet, 2019-07-19, 1 file, -8/+15)

  Summary: The test case added in D62718 did not work unless the user was root, because write bits were not set for the output file. This change uses only permissions with user write (0200) to ensure tests pass regardless of the user's permissions.

  Reviewers: jhenderson, rupprecht, MaskRay, espindola, alexshap

  Reviewed By: MaskRay

  Subscribers: emaste, arichardson, jakehehrlich, llvm-commits

  Tags: #llvm

  Differential Revision: https://reviews.llvm.org/D64302

  llvm-svn: 366527

* [NFC][PowerPC] Modify the test case add_cmp.ll (Kang Zhang, 2019-07-19, 1 file, -32/+12)

  llvm-svn: 366526

* [libFuzzer] Set Android specific ALL_FUZZER_SUPPORTED_ARCH (Yi Kong, 2019-07-19, 1 file, -0/+2)

  Build libFuzzer for all Android supported architectures.

  llvm-svn: 366525

* [DebugInfo] Generate fixups when emitting DWARF .debug_frame/.eh_frame. (Hsiangkai Wang, 2019-07-19, 16 files, -88/+234)

  It is necessary to generate fixups in .debug_frame or .eh_frame when relaxation is enabled, because the address delta may be changed after relaxation.

  There is an opcode with 6 bits of data in the debug frame encoding, so we also need 6-bit fixup types.

  Differential Revision: https://reviews.llvm.org/D58335

  llvm-svn: 366524

* Use the MachineBasicBlock symbol for a callbr target (Bill Wendling, 2019-07-19, 3 files, -10/+34)

  Summary: Inline asm doesn't use labels when compiled as an object file. Therefore, we shouldn't create one for the (potential) callbr destination. Instead, use the symbol for the MachineBasicBlock.

  Reviewers: nickdesaulniers, craig.topper

  Reviewed By: nickdesaulniers

  Subscribers: xbolva00, llvm-commits

  Tags: #llvm

  Differential Revision: https://reviews.llvm.org/D64888

  llvm-svn: 366523
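
  For context only — a minimal example (x86, Clang/GCC `asm goto` extension) of the kind of source that produces a callbr in LLVM IR, where the inline asm may branch to a C label. It is not taken from the patch or its tests; the function and label names are made up.

    #include <cstdio>

    static int isNonZero(int X) {
      asm goto("test %0, %0\n\t"
               "jz %l[zero]"   // may branch to the C label below
               :               // asm goto takes no outputs here
               : "r"(X)
               : "cc"
               : zero);        // the callbr's indirect destination
      return 1;                // fall-through path
    zero:
      return 0;
    }

    int main() { std::printf("%d %d\n", isNonZero(5), isNonZero(0)); }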

* [Target] Fix formatting and whitespace (NFC) (Jonas Devlieghere, 2019-07-19, 2 files, -204/+191)

  llvm-svn: 366522

* [Target] Return an llvm::Expected from GetEntryPointAddress (NFC) (Jonas Devlieghere, 2019-07-19, 3 files, -36/+33)

  Instead of taking a status and potentially returning an invalid address, return an expected which is guaranteed to contain a valid address.

  llvm-svn: 366521
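
  A small sketch of the pattern being adopted, not the lldb patch itself: return llvm::Expected<T> rather than a possibly-invalid value plus a status out-parameter. It assumes an LLVM development tree to build against, and the function name, address, and error text are made up for the example.

    #include "llvm/Support/Error.h"
    #include <cstdint>
    #include <cstdio>

    static llvm::Expected<uint64_t> getEntryPointAddress(bool HaveEntryPoint) {
      if (!HaveEntryPoint)
        return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                       "object file has no entry point");
      return UINT64_C(0x100003f80); // placeholder address
    }

    int main() {
      // Success path: the Expected is guaranteed to hold a usable value.
      if (llvm::Expected<uint64_t> Addr = getEntryPointAddress(true))
        std::printf("entry point: 0x%llx\n", (unsigned long long)*Addr);

      // Failure path: the caller must consume the error explicitly.
      if (llvm::Expected<uint64_t> Addr = getEntryPointAddress(false))
        std::printf("entry point: 0x%llx\n", (unsigned long long)*Addr);
      else
        std::printf("error: %s\n", llvm::toString(Addr.takeError()).c_str());
    }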

* check for interrupt from fgets on Windows (Nathan Lanza, 2019-07-19, 1 file, -0/+3)

  Windows does not have the error EINTR when a blocking syscall is interrupted by a signal. The ReadFile API that fgets is implemented with instead uses ERROR_OPERATION_ABORTED. Check for that after fgets.

  llvm-svn: 366520

* [NFC] Remove instances of unused ClangASTContext header (Alex Langford, 2019-07-19, 3 files, -3/+2)

  llvm-svn: 366519

* Fix formatting of inline argument comments. NFC. (Sam Clegg, 2019-07-19, 1 file, -7/+6)

  Also, remove the final arg from ItaniumCXXABI in the PNaCl case since it's not needed.

  Differential Revision: https://reviews.llvm.org/D64955

  llvm-svn: 366518

* [Commands] Remove unused header from CommandObjectFrame (Alex Langford, 2019-07-19, 1 file, -1/+0)

  llvm-svn: 366517

* [GlobalISel] Translate calls to memcpy et al to G_INTRINSIC_W_SIDE_EFFECTs and legalize later (Amara Emerson, 2019-07-19, 15 files, -123/+277)

  I plan on adding memcpy optimizations in the GlobalISel pipeline, but we can't do that unless we delay lowering to actual function calls. This patch changes the translator to generate G_INTRINSIC_W_SIDE_EFFECTS for these functions, and then has each target specify, using the new custom legalizer for intrinsics hook, that it wants them expanded to a libcall.

  Differential Revision: https://reviews.llvm.org/D64895

  llvm-svn: 366516

* [cmake] Fix typo where a variable was checked for Apple instead of Darwin (Nathan Lanza, 2019-07-19, 1 file, -1/+1)

  Subscribers: mgorny, llvm-commits

  Tags: #llvm

  Differential Revision: https://reviews.llvm.org/D64965

  llvm-svn: 366515

* [cmake] Convert the NATIVE llvm build process to be project agnostic (Nathan Lanza, 2019-07-19, 2 files, -28/+32)

  lldb recently added a tablegen tool. In order to properly cross-compile lldb standalone, there needs to be a mechanism to generate the native lldb build, analogous to what's done for the NATIVE llvm build. Thus, we can simply modify this setup to allow for any project to be used.

  llvm-svn: 366514

* [cmake] Update NATIVE build variables to account for standalone changes (Nathan Lanza, 2019-07-18, 1 file, -5/+4)

  Summary: LLDB_PATH_TO_{CLANG,LLVM}_BUILD were removed and replaced with {LLVM,Clang}_DIR. Adjust the NATIVE build to account for this.

  Subscribers: mgorny

  Differential Revision: https://reviews.llvm.org/D64959

  llvm-svn: 366513

* Reapply [llvm-lipo] Implement -create (with hardcoded alignments) (Shoaib Meenai, 2019-07-18, 8 files, -18/+528)

  This reapplies r366142 with a fix for the failing Windows test.

  Original commit message:

  Creates a universal binary output file from the input files. Currently uses a hard-coded value for alignment. We want to get the create functionality approved before implementing the alignment function.

  Patch by Anusha Basana <anusha.basana@gmail.com>

  Differential Revision: https://reviews.llvm.org/D64102

  llvm-svn: 366512

* Update the SimpleJIT class in the clang-interpreter example to use ORCv2. (Lang Hames, 2019-07-18, 1 file, -44/+53)

  This will remove the ORCv1 deprecation warnings.

  llvm-svn: 366511

* Update polly test for SCEV change. (Eli Friedman, 2019-07-18, 1 file, -1/+1)

  r366419 adds nsw to more SCEV expressions, which allows polly to make more aggressive assumptions about the input expressions.

  llvm-svn: 366510

* [clang-scan-deps] Dependency directives source minimizer: handle #pragma once (Alex Lorenz, 2019-07-18, 3 files, -1/+40)

  We should re-emit `#pragma once` to ensure the preprocessor will still honor it when running on minimized sources.

  Differential Revision: https://reviews.llvm.org/D64945

  llvm-svn: 366509
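
  A toy illustration only — nothing like the real minimizer, which lexes properly rather than splitting lines: it shows why a directive-only "minimized" source must keep `#pragma once`, since dropping it would let the preprocessor re-enter the header when scanning the minimized output. The helper name and sample input are invented.

    #include <iostream>
    #include <sstream>
    #include <string>

    // Keep only lines that start (after whitespace) with '#'.
    static std::string minimizeToDirectives(const std::string &Source) {
      std::istringstream In(Source);
      std::string Line, Out;
      while (std::getline(In, Line)) {
        std::string::size_type First = Line.find_first_not_of(" \t");
        if (First != std::string::npos && Line[First] == '#')
          Out += Line + "\n";             // #pragma once survives, as do #includes
      }
      return Out;
    }

    int main() {
      std::cout << minimizeToDirectives("#pragma once\n"
                                        "#include \"foo.h\"\n"
                                        "int unrelatedCode() { return 1; }\n");
      // Prints the two directive lines; the function body is dropped.
    }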

* Remember to sort the Xcode project!!! (Jim Ingham, 2019-07-18, 1 file, -2680/+2680)

  llvm-svn: 366508

* Add an expectedFailure test for type finding. (Jim Ingham, 2019-07-18, 3 files, -0/+83)

  When two .c files define a type of the same name, lldb just picks one and uses it regardless of context. That is not correct. When stopped in a frame in one of the .c files that define this type, it should use that local definition.

  This commit just adds a test that checks for the correct behavior. It is currently xfailed.

  llvm-svn: 366507

* The switch to table-genning command options broke the Xcode project (Jim Ingham, 2019-07-18, 1 file, -2680/+2705)

  This gets it a little closer to working, but I still have to figure out how to generate the lldb tablegen backend from the Xcode project.

  llvm-svn: 366506

* [AMDGPU] Drop Reg32 and use regular AsmName (Stanislav Mekhanoshin, 2019-07-18, 3 files, -25/+21)

  This allows us to reduce the generated AMDGPUGenAsmWriter.inc by ~100Kb.

  Differential Revision: https://reviews.llvm.org/D64952

  llvm-svn: 366505