Reviewers: javed.absar, trentxintong, courbet
Reviewed By: trentxintong
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D52433
llvm-svn: 342919
We can handle patterns where the elements have different sizes, so this is a refactoring ahead of trying to add another blob within these clauses.
llvm-svn: 342918
avoid libm dependencies when possible.
Summary:
The complex division builtins (div?c3) use logb methods from libm to scale numbers during division and avoid rounding issues. However, because these come from libm, anyone who uses --rtlib=compiler-rt also has to link -lm. Implement logb* methods for standard IEEE 754 floats so we can avoid -lm on those platforms, falling back to the old behavior (using either logb() or `__builtin_logb()`) when not supported.
These new methods are defined internally as `__compiler_rt_logb` so as not to conflict with the libm definitions in any way.
This fixes just the libm methods mentioned in PR32279 and PR28652. libc is still required, although that seems to not be an issue.
Note: this is proposed as an alternative to just adding -lm: D49330.
Reviewers: efriedma, compnerd, scanon, echristo
Reviewed By: echristo
Subscribers: jsji, echristo, nemanjai, dberris, mgorny, kbarton, delcypher, llvm-commits, #sanitizers
Differential Revision: https://reviews.llvm.org/D49514
llvm-svn: 342917
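For reference, the core of a logb-style helper for a normal, finite, non-zero IEEE 754 double is just exponent-field extraction. This is a minimal sketch of that idea, not the actual compiler-rt implementation, which also has to handle denormals, zero, infinity and NaN:

```cpp
#include <cstdint>
#include <cstring>

// Sketch only: returns the unbiased exponent of a normal, finite, non-zero
// IEEE 754 double, i.e. what logb() reports for such values. The real
// __compiler_rt_logb additionally handles denormals, zero, infinity and NaN.
static double sketch_logb(double x) {
  std::uint64_t bits;
  std::memcpy(&bits, &x, sizeof(bits));                // bitwise view of x
  int biased = static_cast<int>((bits >> 52) & 0x7ff); // 11 exponent bits
  return static_cast<double>(biased - 1023);           // remove the bias
}
```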
The uops are slightly different from the register variant, so it requires a +1 uop tweak.
llvm-svn: 342916
Summary:
The target-select-so-path test might hang on some platforms. The reason for that behavior was incorrect usage of the FileCheck and lldb-mi processes. Instead of redirecting lldb-mi's output to FileCheck, we should run an lldb-mi session, finish the session, collect its output, and then pass it to FileCheck.
Also, this patch adds a timer to the test to prevent
it from hanging in the future.
Reviewers: tatyana-krasnukha, aprantl, teemperor
Reviewed By: tatyana-krasnukha, teemperor
Subscribers: apolyakov, aprantl, teemperor, ki.stfu, abidh, lldb-commits
Differential Revision: https://reviews.llvm.org/D52139
llvm-svn: 342915
After r341022, we check the 64bit feature more strictly in the X86Subtarget constructor when a 64-bit triple is used. If we don't infer this feature for autodetected CPUs, we might incorrectly report an error when the CPU name wasn't autodetected as a CPU that supports 64-bit.
llvm-svn: 342914
llvm-svn: 342913
llvm-svn: 342912
Added
__builtin_vsx_scalar_extract_expq
__builtin_vsx_scalar_insert_exp_qp
Builtins should behave the same way as in GCC.
Differential Revision: https://reviews.llvm.org/D48184
llvm-svn: 342911
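A hedged usage sketch of the two builtins on a Power9-class target; the prototypes are assumed to follow the GCC-compatible form implied by the commit (quad-precision operands, exponent carried in a 64-bit integer) and may differ in detail:

```cpp
#if defined(__powerpc64__) && defined(__FLOAT128__)
// Assumed prototypes: extract returns the biased exponent of a __float128 in
// a 64-bit integer (xsxexpqp); insert combines a significand with a new
// exponent (xsiexpqp). Verify against BuiltinsPPC.def before relying on this.
unsigned long long get_exponent(__float128 v) {
  return __builtin_vsx_scalar_extract_expq(v);
}

__float128 with_exponent(__float128 significand, unsigned long long exp) {
  return __builtin_vsx_scalar_insert_exp_qp(significand, exp);
}
#endif
```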
Added
__builtin_vsx_scalar_extract_expq
__builtin_vsx_scalar_insert_exp_qp
Builtins should behave the same way as in GCC.
Differential Revision: https://reviews.llvm.org/D48185
llvm-svn: 342910
Shifting into the sign bit is technically undefined behavior. No known
compiler exploits it though.
llvm-svn: 342909
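The undefined behavior in question is the classic `1 << 31` on a signed 32-bit integer. A minimal illustration of the well-defined alternative (not the code this commit touches) is to do the shift on an unsigned type:

```cpp
#include <cstdint>

// 1 << 31 on a signed int shifts a bit into the sign position, which the
// standards leave undefined (or at best implementation-defined). Shifting an
// unsigned value of the same width is always well defined.
constexpr std::uint32_t kSignBit = std::uint32_t{1} << 31;
```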
llvm-svn: 342908
llvm-svn: 342907
llvm-svn: 342906
llvm-svn: 342905
We're missing quite a bit of data for these instructions; removing the overrides makes this obvious, and the inconsistent reg/mem variants are a concern as well.
Also, we have Divider resources (HWDivider etc.), but they aren't actually used consistently.
llvm-svn: 342904
llvm-svn: 342903
The 'width' of a vector usually refers to the bit-width.
https://bugs.llvm.org/show_bug.cgi?id=39016
shows a case where we could extend this fold to handle a bitcasted vector whose number of elements is not equal to that of the resulting value.
llvm-svn: 342902
- The builtins used do not compile for pre-Armv8.3-A targets with GCC
llvm-svn: 342901
Tune `MaxInterleaveFactor` and `LdStMultipleTiming` and remove
`PartialUpdateClearance` for the Exynos processors.
llvm-svn: 342900
Enable crypto and literals fusion for the Exynos processors.
llvm-svn: 342899
A simple MOVS rd, imm8 can materialize [-128, 127] in signed i8 type or
[0, 255] in unsigned i8 type on Thumb1.
Differential Revision: https://reviews.llvm.org/D52257
llvm-svn: 342898
Update expected completions to match output generated by clang-7.0.
Differential Revision: https://reviews.llvm.org/D50171
llvm-svn: 342897
Implementing -print-before-all/-print-after-all/-filter-print-func support
through PassInstrumentation callbacks.
- PrintIR routines implement printing callbacks.
- StandardInstrumentations class provides a central place to manage all
the "standard" in-tree pass instrumentations. Currently it registers
PrintIR callbacks.
Reviewers: chandlerc, paquette, philip.pfaffe
Differential Revision: https://reviews.llvm.org/D50923
llvm-svn: 342896
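Roughly, an IR-printing instrumentation hangs a callback off PassInstrumentationCallbacks, which is what StandardInstrumentations wires up for -print-before-all. The sketch below assumes the registerBeforePassCallback interface described in the review; exact callback names and signatures have changed across LLVM versions:

```cpp
#include "llvm/ADT/Any.h"
#include "llvm/IR/PassInstrumentation.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

// Sketch: print a banner before every pass. The real PrintIR callbacks also
// unwrap the `Any` (Module/Function/Loop/...) and print the IR itself, and
// honor -filter-print-func; that part is omitted here.
void registerPrintBeforeAll(PassInstrumentationCallbacks &PIC) {
  PIC.registerBeforePassCallback([](StringRef PassID, Any /*IR*/) {
    errs() << "*** IR Dump Before " << PassID << " ***\n";
    return true; // returning true means "do not skip the pass"
  });
}
```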
- When return address signing is enabled, the LR may be signed on function entry
- When an exception is thrown, the return address is inspected and used to unwind the call stack
- Before this happens, the return address must be correctly authenticated to avoid causing an abort by dereferencing the signed pointer
Differential Revision: https://reviews.llvm.org/D51432
llvm-svn: 342895
/debug:{none,full,fastlink,ghash,symtab}
Implement last-argument precedence when multiple /debug arguments are passed on the command line, to match expected link.exe behavior.
Support /debug:none, and emit a warning for /debug:fastlink with an automatic fallback to /debug:full.
Emit an error if the last /debug: option is unknown.
Emit a warning if the last /debugtype: option is unknown.
https://reviews.llvm.org/D50404
llvm-svn: 342894
`-fconstant-cfstrings`."
Seems to be causing buildbot failures, need to look into it.
llvm-svn: 342893
Split WriteIMul by size and also by IMUL multiply-by-imm and multiply-by-reg cases.
This removes all the scheduler overrides for gpr multiplies and stops WriteMULH being ignored for BMI2 MULX instructions.
llvm-svn: 342892
- The assembler accepts VSTM/VLDM with register lists (specifically double-register lists) containing more than 16 registers
- The Arm architecture reference manual says this instruction must not contain more than 16 registers when the registers are doubleword registers
- This addresses one of the concerns in https://bugs.llvm.org/show_bug.cgi?id=38389
Differential Revision: https://reviews.llvm.org/D52082
llvm-svn: 342891
Accidentally forgot to update it, big sorry.
llvm-svn: 342890
llvm-svn: 342889
are too painful. NFC
llvm-svn: 342888
llvm-svn: 342887
This is a preliminary step towards solving PR14613:
https://bugs.llvm.org/show_bug.cgi?id=14613
If we have an 'add' instruction that sets flags, we can use that to eliminate an
explicit compare instruction or some other instruction (cmn) that sets flags for
use in the later select.
As shown in the unchanged tests that use 'icmp ugt %x, %a', we're effectively
reversing an IR icmp canonicalization that replaces a variable operand with a
constant:
https://rise4fun.com/Alive/V1Q
But we're not using 'uaddo' in those cases via DAG transforms. This happens in
CGP after D8889 without checking target lowering to see if the op is supported.
So AArch already shows 'uaddo' codegen for the i8/i16/i32/i64 test variants with
"using_cmp_sum" in the title. That's the pattern that CGP matches as an unsigned
saturated add and converts to uaddo without checking target capabilities.
This patch is gated by isOperationLegalOrCustom(ISD::UADDO, VT), so we only see AArch diffs for i32/i64 in the tests with "using_cmp_notval" in the title
(unlike x86 which sees improvements for all sizes because all sizes are 'custom').
But the AArch code (like x86) looks better when translated to 'uaddo' in all cases.
So someone that is involved with AArch may want to set i8/i16 to 'custom' for UADDO,
so this patch will fire on those tests.
Another possibility given the existing behavior: we could remove the legal-or-custom
check altogether because we're assuming that a UADDO sequence is canonical/optimal
before we ever reach here. But that seems like a bug to me. If the target doesn't
have an add-with-flags op, then it's not likely that we'll get optimal DAG combining
using a UADDO node. This is similar to the justification for why we don't canonicalize IR to the overflow math intrinsic sibling (llvm.uadd.with.overflow) for UADDO in the first place.
Differential Revision: https://reviews.llvm.org/D51929
llvm-svn: 342886
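For context, the "using_cmp_sum" shape that CGP already matches as an unsigned saturated add looks like this at the source level (illustrative only; the tests themselves operate on IR):

```cpp
#include <cstdint>
#include <limits>

// Unsigned saturating add: the add produces the flags, the compare checks the
// sum against an operand, and the select picks UINT32_MAX on overflow. With
// ISD::UADDO legal or custom, the compare can reuse the add's overflow bit.
std::uint32_t sat_add(std::uint32_t x, std::uint32_t a) {
  std::uint32_t sum = x + a;
  return sum < x ? std::numeric_limits<std::uint32_t>::max() : sum;
}
```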
dialects. Remove OpenCL special case."
Discussed on cfe-commits (Week-of-Mon-20180820), this change leads to
the generation of invalid IR for OpenCL without giving an error.
Therefore, the conclusion was to revert.
llvm-svn: 342885
r337288 tried to fix the result of icmp i1 when its input is not sanitized by falling back to DAGISel. While it now produces the correct result for bit 0, the other bits can still hold arbitrary values, which is not supported by MipsFastISel branch lowering. This patch fixes the issue by falling back to DAGISel in this case.
Patch by Dragan Mladjenovic.
Differential Revision: https://reviews.llvm.org/D52045
llvm-svn: 342884
[Clang][CodeGen][ObjC]: Fix non-bridged CoreFoundation builds on ELF targets
that use `-fconstant-cfstrings`. The original changes from the differential for a similar patch for PE/COFF (https://reviews.llvm.org/D44491) did not check for an edge case where the global could be a constant, which surfaced as an issue when building for ELF because of different linkage semantics.
This patch addresses several issues with crashes related to CF builds on ELF
as well as improves data layout by ensuring string literals that back
the actual CFConstStrings end up in .rodata in line with Mach-O.
Change itself tested with CoreFoundation on Linux x86_64 but should be valid
for BSD-like systems as well that use ELF as the native object format.
Differential Revision: https://reviews.llvm.org/D52344
llvm-svn: 342883
GCC uses the operand modifier 'x' in inline asm for VSX registers.
Without this modifier, instructions which use VSX numbering for their
operands are printed as VMX registers. This patch adds support for the
operand modifier 'x'.
Differential Revision: https://reviews.llvm.org/D52244
llvm-svn: 342882
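An illustrative (GCC-style) use of the modifier; the "wa" constraint and the xvadddp instruction are chosen only to show where %x matters, so that a VSX instruction is printed with a VSX register number rather than a VMX name:

```cpp
#ifdef __VSX__
typedef double v2df __attribute__((vector_size(16)));

// Without the 'x' in %x0/%x1/%x2 the operands could be printed as VMX
// registers (v0-v31), which is wrong for an instruction using VSX numbering.
v2df add_pd(v2df a, v2df b) {
  v2df res;
  __asm__("xvadddp %x0, %x1, %x2" : "=wa"(res) : "wa"(a), "wa"(b));
  return res;
}
#endif
```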
LSan can be enabled by itself or as part of the address sanitizer.
Rather than checking the enabled sanitizers for both, just set the LSan
env options whenever a sanitizer is enabled.
llvm-svn: 342881
It would be best to introduce ISD::BitFieldExtract,
because clearly more than one backend faces the same problem.
But for now let's solve this in the x86-specific DAG combine.
https://bugs.llvm.org/show_bug.cgi?id=38938
llvm-svn: 342880
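The bit-field extract shape in question is "shift the field down, then mask the low bits". A minimal source-level illustration (placeholder function, assumes len < 64) of what a BEXTR-style combine targets:

```cpp
#include <cstdint>

// Generic bit-field extract: shift the field to bit 0, then keep `len` bits.
// On x86 with BMI this whole expression can lower to a single BEXTR.
std::uint64_t extract_field(std::uint64_t x, unsigned start, unsigned len) {
  return (x >> start) & ((std::uint64_t{1} << len) - 1); // assumes len < 64
}
```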
If the alignment is at least 4, this should report true.
Something still seems off with how < 4-byte types are
handled here though.
Fixing this seems to change how some combines arrive at their results, but somehow doesn't change the net result.
llvm-svn: 342879
llvm-svn: 342878
llvm-svn: 342877
Check for definedness of the NDEBUG macro rather than its value,
to be consistent with other uses.
llvm-svn: 342876
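A small sketch of the distinction; the EXPENSIVE_SELF_CHECKS name is purely illustrative:

```cpp
// Consistent with the rest of the tree: test whether NDEBUG is *defined*,
// not what it expands to. `#if NDEBUG` misbehaves when NDEBUG is defined to
// an empty token or to 0, whereas the definedness check does not.
#ifndef NDEBUG
#define EXPENSIVE_SELF_CHECKS 1
#else
#define EXPENSIVE_SELF_CHECKS 0
#endif
```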
Summary:
NativeProcessProtocol is an abstract class, but it still contains a
significant amount of code. Some of that code is tested via tests of
specific derived classes, but these tests don't run everywhere, as they
are OS and arch-specific. They are also relatively high-level, which
means some functionalities (particularly the failure cases) are
hard/impossible to test.
In this approach, I replace the abstract methods with mocks, which
allows me to inject failures into the lowest levels of breakpoint
setting code and test the class behavior in this situation.
Reviewers: zturner, teemperor
Subscribers: mgorny, lldb-commits
Differential Revision: https://reviews.llvm.org/D52152
llvm-svn: 342875
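A rough sketch of the mocking approach; the interface and method below are invented for illustration (they are not the real NativeProcessProtocol API), and only the gmock/gtest machinery is assumed:

```cpp
#include "gmock/gmock.h"
#include "gtest/gtest.h"
#include <cstddef>
#include <cstdint>

// Hypothetical low-level interface standing in for the abstract methods that
// the real tests mock out, so failures can be injected from below.
struct MemoryInterface {
  virtual ~MemoryInterface() = default;
  virtual bool WriteMemory(std::uint64_t addr, const void *buf, std::size_t size) = 0;
};

struct MockMemoryInterface : MemoryInterface {
  MOCK_METHOD3(WriteMemory, bool(std::uint64_t, const void *, std::size_t));
};

TEST(BreakpointSetting, SurfacesFailedTrapWrite) {
  MockMemoryInterface mem;
  // Inject a failure for the write that would install the trap opcode and
  // check that the caller sees it instead of silently succeeding.
  EXPECT_CALL(mem, WriteMemory(::testing::_, ::testing::_, ::testing::_))
      .WillOnce(::testing::Return(false));
  EXPECT_FALSE(mem.WriteMemory(0x1000, nullptr, 4));
}
```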
A sequence of VMUL and VADD instructions always gives the same or better
performance than a fused VMLA instruction on the Cortex-M4 and Cortex-M33.
Executing the VMUL and VADD back-to-back requires the same number of cycles, but
having separate instructions allows scheduling to avoid the hazard between
these 2 instructions.
Differential Revision: https://reviews.llvm.org/D52289
llvm-svn: 342874
This caused miscompilation of WebRTC for Android: PR39060.
> We've had the pass enabled downstream for a couple of weeks and it
> seems to be okay, so enable it by default.
>
> Differential Revision: https://reviews.llvm.org/D51920
llvm-svn: 342873
- The load/store optimizer is currently merging multiple loads/stores into VLDM/VSTM with more than 16 doubleword registers
- This is an UNPREDICTABLE instruction and shouldn't be done
- It looks like the limit on how many registers can be included in a merge got dropped at some point, so I am reintroducing it in this patch
- This fixes https://bugs.llvm.org/show_bug.cgi?id=38389
Differential Revision: https://reviews.llvm.org/D52085
llvm-svn: 342872
The DeadArgElim pass marks unused function arguments as 'undef' without updating existing dbg.values referring to them. As a consequence, the debug info metadata in the final executable was wrong.
Patch by Djordje Todorovic.
Differential Revision: https://reviews.llvm.org/D51968
llvm-svn: 342871
Originally committed in rL342210 but reverted in rL342260 because it was causing issues in vectorized code: I had forgotten to ensure that we're operating on scalar values.
Original commit message:
On failing to find sequences that can be converted into dual macs, try to find sequential 16-bit loads that are used by muls, which we can then implement with smultb, smulbt or smultt together with a wide load.
Differential Revision: https://reviews.llvm.org/D51983
llvm-svn: 342870
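The shape of scalar code the pass now recognizes, sketched at the source level (illustrative function; the pass itself works on IR): two adjacent 16-bit loads feeding multiplies, which can be served by one 32-bit load plus top/bottom multiplies such as SMULBB/SMULBT/SMULTT:

```cpp
#include <cstdint>

// Two sequential halfword loads, each feeding a multiply. The adjacent
// halfwords a[0]/a[1] can be fetched with a single 32-bit load and consumed
// by bottom/top multiplies on Arm DSP-capable cores.
std::int32_t mul_pair(const std::int16_t *a, std::int16_t x, std::int16_t y) {
  std::int32_t lo = static_cast<std::int32_t>(a[0]) * x; // bottom halfword
  std::int32_t hi = static_cast<std::int32_t>(a[1]) * y; // top halfword
  return lo + hi;
}
```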