llvm-svn: 356867
immediate into instructions under optsize."
Looking back over how the one use optimization works, I don't think this is the right way to fix this.
llvm-svn: 356866
Enable SSE41 ZERO_EXTEND_VECTOR_INREG shuffle combines - for the PMOVZX(PSHUFD(V)) -> UNPCKH(V,0) pattern we reduce the number of shuffles (a port5 bottleneck on Intel) at the expense of creating a zero (pxor v,v) and an extra register move. This is a good trade-off as these are pretty cheap and in most cases don't increase register pressure.
This also exposed a missed opportunity to combine to ZERO_EXTEND_VECTOR_INREG with folded loads, even when we're in the float domain.
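For reference, a minimal C++ intrinsics sketch of the equivalence (illustrative only - this is not the DAG combine code, and the helper names are made up): both forms zero-extend the upper four i16 lanes of a vector to i32.
#include <smmintrin.h> // SSE4.1
// PMOVZX(PSHUFD(V)): move the high 64 bits down, then zero-extend i16 -> i32.
__m128i zext_high_i16_shuffle(__m128i V) {
  return _mm_cvtepu16_epi32(_mm_shuffle_epi32(V, 0xEE));
}
// UNPCKH(V, 0): interleave the high i16 lanes with zero, giving the same result.
__m128i zext_high_i16_unpck(__m128i V) {
  return _mm_unpackhi_epi16(V, _mm_setzero_si128());
}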
llvm-svn: 356864
Class `RegionInfo` was `SortUnitInfo` before, so the variables were
named `SUI`. Now the class name is `RegionInfo`, so this renames `SUI`
to `RI`, matching the class name.
llvm-svn: 356861
on DAG combine.
An i16 bswap can be implemented with an i16 rotate by 8. We previously emitted
a shift and OR sequence that DAG combine should be able to turn back into a
rotate, but we might as well go there directly. If rotate isn't legal,
LegalizeDAG should further legalize it to either the opposite rotate or the
shift and OR pattern.
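As a minimal illustration (not from the patch), the identity relied on is that swapping the two bytes of a 16-bit value is the same as rotating it by 8:
#include <cassert>
#include <cstdint>
// rotl(X, 8) == rotr(X, 8) == bswap(X) for a 16-bit value.
static uint16_t bswap16_via_rotate(uint16_t X) {
  return static_cast<uint16_t>((X << 8) | (X >> 8));
}
int main() {
  assert(bswap16_via_rotate(0x1234) == 0x3412);
  return 0;
}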
I don't know of any way to make the existing reliance on DAG combine fail, so I
don't know of a way to add new tests for this that wouldn't already have passed.
llvm-svn: 356860
Just enable this for AVX for now as SSE41 introduces extra register moves for the PMOVZX(PSHUFD(V)) -> UNPCKH(V,0) pattern (but otherwise helps reduce port5 usage on Intel targets).
Only AVX support is required for PR40685 as the issue is due to 8i8->8i32 zext shuffle leftovers.
llvm-svn: 356858
This is extracted from D59696 as suggested in the review. It is
preparation for making the DominatorTree a member variable.
llvm-svn: 356857
This is yet another step towards solving PR14613:
https://bugs.llvm.org/show_bug.cgi?id=14613
uaddsat X, Y --> (X >u (X + Y)) ? -1 : X + Y
usubsat X, Y --> (X >u Y) ? X - Y : 0
We can't count on a sane vector ISA, so override the default (umin/umax)
expansion of unsigned add/sub saturate in cases where we do not have umin/umax.
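A scalar C++ sketch of the two expansions above (illustrative only - this is not the DAG expansion code, and the function names are made up):
#include <cstdint>
// uaddsat X, Y --> (X >u (X + Y)) ? -1 : X + Y
uint32_t uaddsat(uint32_t X, uint32_t Y) {
  uint32_t Sum = X + Y;              // wraps on overflow
  return X > Sum ? UINT32_MAX : Sum; // -1 is all-ones in the unsigned domain
}
// usubsat X, Y --> (X >u Y) ? X - Y : 0
uint32_t usubsat(uint32_t X, uint32_t Y) {
  return X > Y ? X - Y : 0;
}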
Differential Revision: https://reviews.llvm.org/D59006
llvm-svn: 356855
Remove the I.getOperand() calls from inside shouldReorderOperands - reorderInputsAccordingToOpcode should handle the creation of the operand lists and shouldReorderOperands should just check to see whether the i'th element should be commuted.
llvm-svn: 356854
This adds ConstantRange::getFull(BitWidth) and
ConstantRange::getEmpty(BitWidth) named constructors as more readable
alternatives to the current ConstantRange(BitWidth, /* full */ false)
and similar. Additionally, private getFull() and getEmpty() member
functions are added which return a full/empty range with the same bit
width -- these are commonly needed inside ConstantRange.cpp.
The IsFullSet argument in the ConstantRange(BitWidth, IsFullSet)
constructor is now mandatory for the few usages that still make use of it.
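A minimal usage sketch of the named constructors (the demo function itself is hypothetical):
#include "llvm/IR/ConstantRange.h"
using namespace llvm;
// The full 32-bit range trivially contains the empty one.
bool demoNamedConstructors() {
  ConstantRange Full = ConstantRange::getFull(/*BitWidth=*/32);   // [0, 2^32)
  ConstantRange Empty = ConstantRange::getEmpty(/*BitWidth=*/32); // no elements
  return Full.contains(Empty);
}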
Differential Revision: https://reviews.llvm.org/D59716
llvm-svn: 356852
llvm-svn: 356841
llvm-svn: 356840
llvm-svn: 356838
llvm-svn: 356836
directly. NFCI.
llvm-svn: 356832
Using an unsigned range to stay NFC, but a signed range would really
be more useful here.
llvm-svn: 356831
llvm-svn: 356830
[Symbolizer] Add getModuleSectionIndexForAddress() helper routine
The https://reviews.llvm.org/D58194 patch changed the symbolizer interface.
In particular, it now requires not only an Address but also a SectionIndex.
Note object::SectionedAddress parameter:
Expected<DILineInfo> symbolizeCode(const std::string &ModuleName,
                                   object::SectionedAddress ModuleOffset,
                                   StringRef DWPName = "");
There are callers of the symbolizer which do not know the particular section index.
This patch creates a getModuleSectionIndexForAddress() routine which detects the
section index for the specified address. Thus, if the caller sets
ModuleOffset.SectionIndex to object::SectionedAddress::UndefSection, the
symbolizer detects the section index using getModuleSectionIndexForAddress().
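A minimal caller-side sketch (the symbolizeAt wrapper is hypothetical, not part of the patch):
#include "llvm/DebugInfo/Symbolize/Symbolize.h"
#include "llvm/Object/ObjectFile.h"
using namespace llvm;
using namespace llvm::symbolize;
// A caller that does not know the section passes UndefSection and lets the
// symbolizer find the section index itself.
Expected<DILineInfo> symbolizeAt(LLVMSymbolizer &Symbolizer,
                                 const std::string &ModuleName,
                                 uint64_t Address) {
  object::SectionedAddress ModuleOffset;
  ModuleOffset.Address = Address;
  ModuleOffset.SectionIndex = object::SectionedAddress::UndefSection;
  return Symbolizer.symbolizeCode(ModuleName, ModuleOffset);
}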
Differential Revision: https://reviews.llvm.org/D58848
llvm-svn: 356829
As a follow-up to the newpm -time-passes fix (D59366), now adding similar
functionality to legacy time-passes.
Enhancing llvm::reportAndResetTimings to accept an optional stream
for reporting output. By default it still reports into the stream created
by CreateInfoOutputFile (-info-output-file).
Also fixing it to actually reset after printing, as declared.
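A minimal usage sketch (the header location is assumed and dumpPassTimes is a made-up name):
#include "llvm/IR/LegacyPassManager.h" // assumed to declare reportAndResetTimings
#include "llvm/Support/raw_ostream.h"
// Send the legacy time-passes report to stderr instead of the default
// -info-output-file stream; timers are reset afterwards.
void dumpPassTimes() {
  llvm::reportAndResetTimings(&llvm::errs());
}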
Reviewed By: philip.pfaffe
Differential Revision: https://reviews.llvm.org/D59416
llvm-svn: 356824
Try using a move constructor instead.
llvm-svn: 356823
Add basic infrastructure for reading and writing TBD files (versions 1 - 3).
The TextAPI library is not used by anything yet (besides the unit tests). Tool
support will be added in a separate commit.
The TBD format is currently documented in the implementation file (TextStub.cpp).
https://reviews.llvm.org/D53945
Update: This contains changes to fix issues discovered by the bots:
- Add parentheses to silence warnings.
- Rename variables.
- Use PlatformType from BinaryFormat.
- Try switching from a vector to an array to appease the bots.
- Replace the tuple with a struct to work around an explicit constructor bug.
- Fix an issue where we were leaking the YAML document if there was a parsing error.
Updated the license information in all files.
llvm-svn: 356820
buildTree() and later in vectorizeTree().
This is a refactoring patch that removes the redundancy of performing operand reordering twice, once in buildTree() and later in vectorizeTree().
To achieve this we need to keep track of the operands within the TreeEntry struct while building the tree; later, in vectorizeTree(), we simply access them from the TreeEntry in the right order.
This patch is the first in a series of patches that will allow for better operand reordering across chains of instructions (e.g., a chain of ADDs), as presented here: https://www.youtube.com/watch?v=gIEn34LvyNo
Patch by: @vporpo (Vasileios Porpodas)
Differential Revision: https://reviews.llvm.org/D59059
llvm-svn: 356814
of range C1. NFCI.
llvm-svn: 356810
In r322972/r323136, the iteration here was changed to catch cases at the
beginning of a basic block... but we accidentally deleted an important
safety check. Restore that check to the way it was.
Fixes https://bugs.llvm.org/show_bug.cgi?id=41116
Differential Revision: https://reviews.llvm.org/D59680
llvm-svn: 356809
possible if popcnt instruction is not available
On 32-bit targets without popcnt, we currently expand 64-bit popcnt to sequences of arithmetic and logic ops for each 32-bit half and then add the 32-bit halves together. If we have xmm registers we can use those to implement the operation instead. This results in fewer instructions than doing two separate 32-bit popcnt sequences.
This mitigates some of PR41151 for the i64 on i686 case when we have SSE2.
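For reference, a scalar C++ sketch of the per-half expansion being improved on (illustrative only - not the compiler's exact output):
#include <cstdint>
// Classic arithmetic/logic popcount of one 32-bit half.
static uint32_t popcnt32(uint32_t V) {
  V = V - ((V >> 1) & 0x55555555u);
  V = (V & 0x33333333u) + ((V >> 2) & 0x33333333u);
  V = (V + (V >> 4)) & 0x0F0F0F0Fu;
  return (V * 0x01010101u) >> 24;
}
// 64-bit popcount as the sum of two 32-bit popcounts.
uint32_t popcnt64_via_halves(uint64_t V) {
  return popcnt32(static_cast<uint32_t>(V)) +
         popcnt32(static_cast<uint32_t>(V >> 32));
}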
Differential Revision: https://reviews.llvm.org/D59662
llvm-svn: 356808
We used a lock cmpxchg8b to do i64 atomic loads. But if we have SSE2 we can do better and use a plain movq to do the load instead.
I tried to just use an f64 atomic load and add isel patterns to MOVSD (which the domain fixing pass can turn into MOVQ), but the atomic_load SDNode in TargetSelectionDAG.td requires the type to be integer.
So I've emitted VZEXT_LOAD instead, which should be selected by isel to a MOVQ. Hopefully we don't need a specific atomic flavor of this. I kept the memory operand from the original AtomicSDNode; I wasn't sure whether I might need to set the MOVolatile flag.
I've left some FIXMEs for improvements we can do without SSE2.
Differential Revision: https://reviews.llvm.org/D59679
llvm-svn: 356807
Summary:
Between building the pair map and querying it there are a few places that
erase and create Values. It's rare, but the address of these newly created
Values is occasionally the same as that of a just-erased Value that we already
have in the pair map. These coincidences should be accounted for to avoid
non-determinism.
Thanks to Roman Tereshin for the test case.
Reviewers: rtereshin, bogner
Reviewed By: rtereshin
Subscribers: mgrang, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59401
llvm-svn: 356803
Add Exynos M5 support and test cases.
llvm-svn: 356793
This doesn't have any practical effect at the moment, as far as I know,
because high registers aren't allocatable in Thumb1 mode. But it might
matter in the future.
Differential Revision: https://reviews.llvm.org/D59675
llvm-svn: 356791
Just as llvm IR supports explicitly specifying numeric value ids
for instructions, and emits them by default in textual output, now do
the same for blocks.
This is a slightly incompatible change in the textual IR format.
Previously, llvm would parse numeric labels as string names. E.g.
define void @f() {
  br label %"55"
55:
  ret void
}
defined a label *named* "55", even without needing to be quoted, while
the reference required quoting. Now, if you intend a block label which
looks like a value number to be a name, you must quote it in the
definition too (e.g. `"55":`).
Previously, llvm would print nameless blocks only as a comment, and
would omit it if there was no predecessor. This could cause confusion
for readers of the IR, just as unnamed instructions did prior to the
addition of "%5 = " syntax, back in 2008 (PR2480).
Now, it will always print a label for an unnamed block, with the
exception of the entry block. (IMO it may be better to print it for
the entry-block as well. However, that requires updating many more
tests.)
Thus, the following is supported, and is the canonical printing:
define i32 @f(i32, i32) {
  %3 = add i32 %0, %1
  br label %4
4:
  ret i32 %3
}
New test cases covering this behavior are added, and other tests
updated as required.
Differential Revision: https://reviews.llvm.org/D58548
llvm-svn: 356789
computeOverflowForSignedAdd()
We're already computing the known bits of the operands here. If the
known bits of the operands can determine the sign bit of the result,
we'll already catch this in signedAddMayOverflow(). The only other way
(as the comment already indicates) that we'll get new information from
computing known bits on the whole add is if there's an assumption on it.
As such, we change the code to only compute known bits from assumptions,
instead of computing full known bits on the add (which would unnecessarily
recompute the known bits of the operands as well).
Differential Revision: https://reviews.llvm.org/D59473
llvm-svn: 356785
(PR41203)
llvm-svn: 356784
Summary:
Adding contained caching to AliasAnalysis. BasicAA is currently the only one using it.
AA changes:
- This patch is pulling the caches from BasicAAResults to AAResults, meaning the getModRefInfo call benefits from the IsCapturedCache as well when in "batch mode".
- All AAResultBase implementations add the QueryInfo member to all APIs. AAResults APIs maintain wrapper APIs such that all alias()/getModRefInfo call sites are unchanged.
- AA now provides a BatchAAResults type as a wrapper to AAResults. It keeps the AAResults instance and a QueryInfo instantiated to batch mode. It delegates all work to the AAResults instance with the batched QueryInfo. More API wrappers may be needed in BatchAAResults; only the minimum needed is currently added.
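A minimal batch-mode sketch (illustrative only - the queryInBatch helper is made up; per the note above, BatchAAResults only exposes a minimal set of wrappers at this point):
#include "llvm/Analysis/AliasAnalysis.h"
using namespace llvm;
// Several alias queries sharing one cache via batch mode.
AliasResult queryInBatch(AAResults &AA, const MemoryLocation &LocA,
                         const MemoryLocation &LocB) {
  BatchAAResults BatchAA(AA);       // caches live as long as BatchAA does
  (void)BatchAA.alias(LocA, LocB);  // first query populates the cache
  return BatchAA.alias(LocA, LocB); // repeated query hits the cache
}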
MemorySSA changes:
- All walkers are now templated on the AA used (AliasAnalysis=AAResults or BatchAAResults).
- At build time, we optimize uses; now we create a local walker (lives only as long as OptimizeUses does) using BatchAAResults.
- All Walkers have an internal AA and only use that now, never the AA in MemorySSA. The Walkers receive the AA they will use when built.
- The walker we use for queries after the build is instantiated on AliasAnalysis and is built after building MemorySSA and setting AA.
- All static methods doing walking are now templated on AliasAnalysisType if they are used both during build and after. If used only during build, the method now only takes a BatchAAResults. If used only after build, the method now takes an AliasAnalysis.
Subscribers: sanjoy, arsenm, jvesely, nhaehnle, jlebar, george.burgess.iv, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59315
llvm-svn: 356783
Summary:
In C++, the behavior of casting a double value that is beyond the range of
single-precision floating point to a float value is undefined. This change
replaces such a cast with APFloat::convert to convert the value, which is
consistent with how we convert a double value to a half value.
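A minimal sketch of the conversion (illustrative only - not the patched code):
#include "llvm/ADT/APFloat.h"
using namespace llvm;
// Round a double to single precision without relying on the C++ cast, which
// is undefined when the value is out of float's range.
float doubleToFloatViaAPFloat(double D) {
  APFloat F(D);
  bool LosesInfo = false;
  F.convert(APFloat::IEEEsingle(), APFloat::rmNearestTiesToEven, &LosesInfo);
  return F.convertToFloat();
}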
Reviewers: sanjoy
Subscribers: lebedev.ri, sanjoy, jlebar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59500
llvm-svn: 356781
intrinsics
This helps to avoid the situation where RA spots that only 3 of the 4 elements
of the v4f32 result of a load are used, and immediately reallocates the 4th
register for something else, requiring a stall waiting for the load.
Differential Revision: https://reviews.llvm.org/D58906
Change-Id: I947661edfd5715f62361a02b100f14aeeada29aa
llvm-svn: 356768
Some image ops return three or five dwords. Previously, we modeled that
with a 4 or 8 dword register class. The register allocator could
cleverly spot that some subregs were dead and allocate something else
there, but that caused a de-optimization: waitcnt insertion would then
think that the result was used immediately.
This commit allows such an image op to have a three or five dword
result, avoiding the above de-optimization.
Differential Revision: https://reviews.llvm.org/D58905
Change-Id: I3651211bbd7ed22721ee7b9fefd7bcc60a809d8b
llvm-svn: 356757
Now that we have vec3 MVTs, this commit implements dwordx3 variants of the
buffer intrinsics.
On gfx6, a dwordx3 buffer load intrinsic is implemented as a dwordx4
instruction, and a dwordx3 buffer store intrinsic is not supported.
We need to support the dwordx3 load intrinsic because it is generated by
subtarget-unaware code in InstCombine.
Differential Revision: https://reviews.llvm.org/D58904
Change-Id: I016729d8557b98a52f529638ae97c340a5922a4e
llvm-svn: 356755
Summary:
This patch adds the ability to read a yaml form of a minidump file and
write it out as binary. Apart from the minidump header and the stream
directory, only three basic stream kinds are supported:
- Text: This kind is used for streams which contain textual data. This
is typically the contents of a /proc file on linux (e.g.
/proc/PID/maps). In this case, we just put the raw stream contents
into the yaml.
- SystemInfo: This stream contains various bits of information about the
host system in binary form. We expose the data in a structured form.
- Raw: This kind is used as a fallback when we don't have any special
knowledge about the stream. In this case, we just print the stream
contents in hex.
For this code to be really useful, more stream kinds will need to be
added (particularly for things like lists of memory regions and loaded
modules). However, these can be added incrementally.
Reviewers: jhenderson, zturner, clayborg, aprantl
Subscribers: mgorny, lemo, llvm-commits, lldb-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59482
llvm-svn: 356753
The RISC-V ISA defines RV32E as an alternative "base" instruction set
encoding that differs from RV32I by having only 16 rather than 32 registers.
This patch adds basic definitions for RV32E as well as MC layer support
(assembling, disassembling) and tests. The only supported ABI on RV32E is
ILP32E.
Add a new RISCVFeatures::validate() helper to RISCVUtils which can be called
from codegen or MC layer libraries to validate the combination of TargetTriple
and FeatureBitSet. Other targets have similar checks (e.g. erroring if SPE is
enabled on PPC64 or oddspreg + o32 ABI on Mips), but they either duplicate the
checks (Mips), or fail to check for both codegen and MC codepaths (PPC).
Codegen for the ILP32E ABI support and RV32E codegen are left for a future
patch/patches.
Differential Revision: https://reviews.llvm.org/D59470
llvm-svn: 356744
This patch optimizes the emission of a sequence of SELECTs with the same
condition, avoiding the insertion of unnecessary control flow. Such a sequence
often occurs when a SELECT of values wider than XLEN is legalized into two
SELECTs with legal types. We have identified several use cases where the
SELECTs could be interleaved with other instructions. Therefore, we extend the
sequence to include non-SELECT instructions if we are able to detect that the
non-SELECT instructions do not impact the optimization.
This patch supersedes https://reviews.llvm.org/D59096, which attempted to
address this issue by introducing a new SelectionDAG node. Hat tip to Eli
Friedman for his feedback on how to best handle this issue.
Differential Revision: https://reviews.llvm.org/D59355
Patch by Luís Marques.
llvm-svn: 356741
Indicates in the TargetLowering interface that conversions from CC logic to
bitwise logic are allowed. Adds tests that show the benefit when optimization
opportunities are detected. Also adds tests that show that, when the optimization
is not applied, correct code is generated (but opportunities for other
optimizations remain).
Differential Revision: https://reviews.llvm.org/D59596
Patch by Luís Marques.
llvm-svn: 356740
The spec says that symbol table index 0 both designates the first entry in the table
and serves as the undefined symbol index, and that it should have a zero value.
Hence the first symbol table entry has no name and has to have st_name == 0.
(http://refspecs.linuxbase.org/elf/gabi4+/ch4.symtab.html)
Currently, we do not emit a zero value for the first symbol table entry.
That happens because we add empty strings to the string table builder, which
adds a zero byte for each such case:
(https://github.com/llvm-mirror/llvm/blob/master/lib/MC/StringTableBuilder.cpp#L185)
After the string optimization is performed, it might return non-zero indexes for the
empty strings requested.
The patch fixes this issue for the case above and for other sections with no names.
Differential revision: https://reviews.llvm.org/D59496
llvm-svn: 356739
They are not used by anything yet, but a subsequent commit will start
using them for image ops that return 5 dwords.
Differential Revision: https://reviews.llvm.org/D58903
Change-Id: I63e1904081e39a6d66e4eb96d51df25ad399d271
llvm-svn: 356735
relocation entries
Summary:
getRelocatedValue may compute an incorrect value for SHT_RELA-typed relocation entries.
// DWARFDataExtractor.cpp
uint64_t DWARFDataExtractor::getRelocatedValue(uint32_t Size, uint32_t *Off,
...
// This formula is correct for REL, but may be incorrect for RELA if the value
// stored in the location (getUnsigned(Off, Size)) is not zero.
return getUnsigned(Off, Size) + Rel->Value;
In this patch, we
* refactor these visit* functions to include a new parameter `uint64_t A`.
Since these visit* functions are no longer used as visitors, rename them to resolve*.
+ REL: A is used as the addend. A is the value stored in the location where the
relocation applies: getUnsigned(Off, Size)
+ RELA: The addend encoded in RelocationRef is used, e.g. getELFAddend(R)
* and add another set of supports* functions to check if a given relocation type is handled.
DWARFObjInMemory uses them to fail early.
Reviewers: echristo, dblaikie
Reviewed By: echristo
Subscribers: mgorny, aprantl, aheejin, fedor.sergeev, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57939
llvm-svn: 356729
Currently, the type id for a derived type is computed incorrectly.
For example,
type #1: int
type #2: ptr to #1
For a global variable "int *a", type #1 will be attributed to variable "a".
This is due to a bug which assigns the type id of the basetype of
that derived type as the derived type's type id. This happens
to "const", "volatile", "restrict", "typedef" and "pointer" types.
This patch fixes this bug, fixes existing test cases, and adds
a new one focusing on pointers plus other derived types.
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356727
This is the result of discussions on the list about how to deal with intrinsics
which require codegen to disambiguate them via only the integer/fp overloads.
It causes problems for GlobalISel as some of that information is lost during
translation, while with other operations like IR instructions the information is
encoded into the instruction opcode.
This patch changes clang to emit the new faddp intrinsic if the vector operands
to the builtin have FP element types. LLVM IR AutoUpgrade has been taught to
upgrade existing calls to aarch64.neon.addp with fp vector arguments, and
we remove the workarounds introduced for GlobalISel in r355865.
This is a more permanent solution to PR40968.
Differential Revision: https://reviews.llvm.org/D59655
llvm-svn: 356722
LoadInst->getPointerOperandType()->getElementType(). NFCI
For the future day when pointers don't have element types, we should just use the type of the load result instead.
llvm-svn: 356721
Currently, this fails with many tools, e.g.
$ clang -fembed-bitcode-marker -c -o test.o test.c
$ nm test.o
nm: test.o The file was not recognized as a valid object file
-fembed-bitcode-marker creates a LLVM,bitcode section consisting of a single
byte. When reading the object file, IRObjectFile::findBitcodeInObject succeeds,
causing SymbolicFile::createSymbolicFile to try to read the "bitcode" rather
than using the outer Mach-O data - which then fails.
Fix this by making findBitcodeInObject return an error if the section size <= 1.
Patched by: Nicholas Allegra
Differential Revision: https://reviews.llvm.org/D44373
llvm-svn: 356718
llvm-svn: 356717
This was creating a copy of the register the pseudo itself was
def'ing, leaving a copy of an undefined register. I'm not sure how
the verifier is not catching this, but this avoids asserting in a
future change to RegAllocFast.
llvm-svn: 356716