path: root/llvm/lib/Target/BPF/BPFISelLowering.h
Commit log for this file (commit message, author, date, files changed, lines changed):
* [BPF] disable ReduceLoadWidth during SelectionDag phase (Yonghong Song, 2020-02-10, 1 file, -0/+13)

  The compiler may transform the following code

      ctx = ctx + reloc_offset
      ... (*(u32 *)ctx) & 0x8000 ...

  to

      ctx = ctx + reloc_offset
      ... (*(u8 *)(ctx + 1)) & 0x80 ...

  where reloc_offset will be replaced with a constant during the
  AsmPrinter phase. The transformed code will be rejected by the kernel
  verifier, as it does not allow access patterns of the style

      *(type *)((ctx + non_zero_offset1) + non_zero_offset2)

  It is hard at SelectionDag phase to identify whether a load is related
  to the context or not; sometimes interprocedural analysis may be
  needed. So let us simply prevent such an optimization from happening.

  Differential Revision: https://reviews.llvm.org/D73997
  (cherry picked from commit d96c1bbaa03574daf759e5e9a6c75047c5e3af64)

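  A minimal sketch of the kind of override this describes, assuming the
  TargetLowering::shouldReduceLoadWidth hook of that era (illustrative,
  not the actual patch):

      // Refuse load narrowing outright: narrowing a context load can
      // introduce an extra non-zero offset that the verifier rejects.
      bool shouldReduceLoadWidth(SDNode *Load, ISD::LoadExtType ExtTy,
                                 EVT NewVT) const override {
        return false;
      }
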
* [TargetLowering] Change getOptimalMemOpType to take a function attribute list (Sjoerd Meijer, 2019-04-30, 1 file, -1/+1)

  The MachineFunction wasn't used in getOptimalMemOpType, but more
  importantly, this allows reuse of findOptimalMemOpLowering, which
  calls getOptimalMemOpType.

  This is the groundwork for the changes in D59766 and D59787, which
  allow implementation of TTI::getMemcpyCost.

  Differential Revision: https://reviews.llvm.org/D59785

  llvm-svn: 359537

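  A hedged sketch of what a target override looks like after this
  change: the MachineFunction parameter is replaced by an AttributeList.
  The parameter list is approximated from the TargetLowering API of that
  era, and the body is purely illustrative:

      EVT getOptimalMemOpType(uint64_t Size, unsigned DstAlign,
                              unsigned SrcAlign, bool IsMemset,
                              bool ZeroMemset, bool MemcpyStrSrc,
                              const AttributeList &FuncAttributes) const override {
        // Pick the widest scalar type that still fits the copy size.
        return Size >= 8 ? MVT::i64 : MVT::i32;
      }
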
* [BPF] add code-gen support for JMP32 instructions (Jiong Wang, 2019-02-07, 1 file, -0/+2)

  JMP32 instructions have been added to the eBPF ISA. They are 32-bit
  variants of the existing BPF conditional jump instructions, but the
  comparison happens on the low 32-bit sub-register only, so some
  unnecessary extensions can be saved.

  JMP32 instructions will only be available for -mcpu=v3. The host probe
  hook has been updated accordingly.

  JMP32 instructions will only be enabled in code-gen when -mattr=+alu32
  is enabled, meaning the program is compiled in sub-register mode.

  For encoding, JMP32 is a new instruction class using the reserved eBPF
  class number 0x6.

  This patch has been tested by compiling and running kernel bpf
  selftests with JMP32 enabled.

  Acked-by: Yonghong Song <yhs@fb.com>
  Signed-off-by: Jiong Wang <jiong.wang@netronome.com>

  llvm-svn: 353384

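  A hypothetical source-level illustration: compiled with -mcpu=v3 and
  -mattr=+alu32, the 32-bit comparison below can be selected as a single
  JMP32 instruction instead of extending both operands to 64 bits and
  using a 64-bit conditional jump (the function name and body are made
  up for illustration):

      // 'a < b' on 32-bit operands maps onto a JMP32-class conditional
      // jump; no zero-extension of a or b is required first.
      int pick(unsigned int a, unsigned int b) {
        return a < b ? 1 : 2;
      }
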
* Update the file headers across all of the LLVM projects in the monorepo to reflect the new license (Chandler Carruth, 2019-01-19, 1 file, -4/+3)

  We understand that people may be surprised that we're moving the
  header entirely to discuss the new license. We checked this carefully
  with the Foundation's lawyer and we believe this is the correct
  approach.

  Essentially, all code in the project is now made available by the LLVM
  project under our new license, so you will see that the license
  headers include that license only. Some of our contributors have
  contributed code under our old license, and accordingly, we have
  retained a copy of our old license notice in the top-level files in
  each project and repository.

  llvm-svn: 351636

* bpf: new option -bpf-expand-memcpy-in-order to expand memcpy in order (Yonghong Song, 2018-07-25, 1 file, -1/+7)

  Some BPF JIT backends would want to optimize memcpy in their own
  architecture-specific way. However, at the moment there is no way for
  JIT backends to see memcpy semantics reliably, because the LLVM BPF
  backend expands memcpy into load/store sequences and could possibly
  schedule them apart from each other. So BPF JIT backends inside the
  kernel can't reliably recognize memcpy semantics by peepholing the BPF
  sequence.

  This patch introduces a new intrinsic-expand infrastructure for
  memcpy. To get a stable, in-order load/store sequence from memcpy, we
  first lower memcpy into a BPF::MEMCPY node, which is then expanded
  into in-order load/store sequences in the expandPostRAPseudo pass,
  which runs after instruction scheduling. This way, kernel JIT backends
  can reliably recognize memcpy by scanning the BPF sequence.

  This new memcpy expand infrastructure is gated by a new option:

      -bpf-expand-memcpy-in-order

  Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
  Signed-off-by: Yonghong Song <yhs@fb.com>

  llvm-svn: 337977

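  A rough sketch of the expansion point described above, assuming the
  BPF::MEMCPY pseudo opcode from this patch and the standard
  expandPostRAPseudo hook; expandMEMCPY is a helper assumed here for
  illustration, and the real logic lives elsewhere in the backend:

      bool BPFInstrInfo::expandPostRAPseudo(MachineInstr &MI) const {
        if (MI.getOpcode() == BPF::MEMCPY) {
          // Scheduling has already run, so the load/store pairs emitted
          // here stay strictly in order. The helper also erases the
          // pseudo once the expansion is in place.
          expandMEMCPY(MI);
          return true;
        }
        return false;
      }
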
* bpf: Support i32 in getScalarShiftAmountTy method (Yonghong Song, 2018-02-23, 1 file, -0/+2)

  The getScalarShiftAmountTy method should be implemented for the eBPF
  backend to make sure the shift amount still gets the correct type once
  32-bit subregister support is enabled.

  Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
  Reviewed-by: Yonghong Song <yhs@fb.com>

  llvm-svn: 325986

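  A minimal sketch, assuming the getScalarShiftAmountTy signature of
  that era; the body mirrors the stated intent, and any gating on the
  alu32 feature is omitted here:

      MVT getScalarShiftAmountTy(const DataLayout &DL, EVT VT) const override {
        // With 32-bit subregisters, an i32 shift keeps an i32 amount;
        // everything else uses the 64-bit default.
        return (VT == MVT::i32) ? MVT::i32 : MVT::i64;
      }
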
* bpf: Support condition comparison on i32 (Yonghong Song, 2018-02-23, 1 file, -0/+6)

  We need to support condition comparison on i32. All these comparisons
  are supposed to be combined into BPF_J* instructions, which only
  support i64.

  For ISD::BR_CC we need to promote it to i64 first, then do custom
  lowering.

  For ISD::SETCC, just expand to SELECT_CC as has been done for i64.

  For ISD::SELECT_CC, we also want custom lowering for i32. However,
  after 32-bit subregister support is enabled, it is possible that the
  comparison operands are i32 while the selected values are i64, or the
  comparison operands are i64 while the selected values are i32. We need
  to define extra instruction patterns and support them in the custom
  instruction inserter.

  Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
  Reviewed-by: Yonghong Song <yhs@fb.com>

  llvm-svn: 325985

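  In setOperationAction terms, the choices above might look roughly like
  this (a sketch of the intent, not the exact patch):

      // i32 comparisons must end up in the 64-bit BPF_J* instructions.
      setOperationAction(ISD::BR_CC,     MVT::i32, Promote); // widen to i64 first
      setOperationAction(ISD::SETCC,     MVT::i32, Expand);  // turn into SELECT_CC
      setOperationAction(ISD::SELECT_CC, MVT::i32, Custom);  // handle mixed widths
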
* bpf: New calling convention for 32-bit subregisters (Yonghong Song, 2018-02-23, 1 file, -0/+2)

  This patch adds new calling conventions to allow GPR32RegClass as a
  valid register class for arguments and return types. The new calling
  convention will only be chosen when -mattr=+alu32 is specified.

  Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
  Reviewed-by: Yonghong Song <yhs@fb.com>

  llvm-svn: 325983

* Fix a bunch more layering of CodeGen headers that are in Target (David Blaikie, 2017-11-17, 1 file, -1/+1)

  All these headers already depend on CodeGen headers, so moving them
  into CodeGen fixes the layering (since CodeGen depends on Target, not
  the other way around).

  llvm-svn: 318490

* bpf: add inline-asm support (Yonghong Song, 2017-09-18, 1 file, -0/+4)

  Signed-off-by: Yonghong Song <yhs@fb.com>
  Acked-by: Alexei Starovoitov <ast@kernel.org>

  llvm-svn: 313593

* bpf: add variants of -mcpu=# and support for additional jmp insns (Yonghong Song, 2017-08-23, 1 file, -0/+5)

  -mcpu=# will support:
    . generic: the default insn set
    . v1: insn set version 1, the same as generic
    . v2: insn set version 2, version 1 + additional jmp insns
    . probe: the compiler will probe the underlying kernel to decide
      the proper version of the insn set.

  We did not use -mcpu=native since llc/llvm will interpret -mcpu=native
  as the underlying hardware architecture regardless of the -march
  value.

  Currently, only x86_64 supports -mcpu=probe. Other architectures will
  silently revert to "generic".

  Also added -mcpu=help to print available cpu parameters. llvm will
  print out the information only if there is at least one cpu and at
  least one feature; an unused dummy feature was added to enable the
  printout.

  Examples for usage:

      $ llc -march=bpf -mcpu=v1 -filetype=asm t.ll
      $ llc -march=bpf -mcpu=v2 -filetype=asm t.ll
      $ llc -march=bpf -mcpu=generic -filetype=asm t.ll
      $ llc -march=bpf -mcpu=probe -filetype=asm t.ll
      $ llc -march=bpf -mcpu=v3 -filetype=asm t.ll
      'v3' is not a recognized processor for this target (ignoring processor)
      ...
      $ llc -march=bpf -mcpu=help -filetype=asm t.ll
      Available CPUs for this target:
        generic - Select the generic processor.
        probe   - Select the probe processor.
        v1      - Select the v1 processor.
        v2      - Select the v2 processor.
      Available features for this target:
        dummy   - unused feature.
      Use +feature to enable a feature, or -feature to disable it.
      For example, llc -mcpu=mycpu -mattr=+feature1,-feature2
      ...

  Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  Signed-off-by: Yonghong Song <yhs@fb.com>
  Acked-by: Alexei Starovoitov <ast@kernel.org>

  llvm-svn: 311522

* [bpf] disallow global_addr+off folding (Alexei Starovoitov, 2017-05-26, 1 file, -0/+4)

  Wrong assembly code is generated for a simple program with clang. If
  clang only produces IR and llc is used for IR lowering and
  optimization, correct assembly code is generated.

  The main reason is that clang feeds the default Reloc::Static to llvm
  while llc feeds no RelocMode, in which case the BPF backend picks
  Reloc::PIC_ mode. This leads to different IR lowering behavior: clang
  permits global_addr+off folding while llc doesn't.

  This patch introduces an isOffsetFoldingLegal function into the BPF
  backend that always returns false. This makes clang and llc behave
  the same during lowering.

  Bug https://bugs.llvm.org//show_bug.cgi?id=33183 has a more detailed
  explanation.

  Signed-off-by: Yonghong Song <yhs@fb.com>
  Signed-off-by: Alexei Starovoitov <ast@kernel.org>

  llvm-svn: 304043

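  The override this describes is small; a sketch, given that the commit
  states it unconditionally returns false:

      // Never fold a global address with a constant offset, so clang
      // and llc produce the same lowering regardless of reloc model.
      bool BPFTargetLowering::isOffsetFoldingLegal(
          const GlobalAddressSDNode *GA) const {
        return false;
      }
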
* CodeGen: Use MachineInstr& in TargetLowering, NFC (Duncan P. N. Exon Smith, 2016-06-30, 1 file, -1/+1)

  This is a mechanical change to make the TargetLowering API take
  MachineInstr& (instead of MachineInstr*), since the argument is
  expected to be a valid MachineInstr. In one case, a parameter was
  changed from MachineInstr* to MachineBasicBlock::iterator, since it
  was used as an insertion point.

  As a side effect, this removes a bunch of implicit conversions from
  MachineInstr* to MachineBasicBlock::iterator, a necessary step toward
  fixing PR26753.

  llvm-svn: 274287

* Pass DebugLoc and SDLoc by const ref. (Benjamin Kramer, 2016-06-12, 1 file, -4/+4)

  This used to be free, but copying and moving DebugLocs became
  expensive after the metadata rewrite. Passing by reference eliminates
  a ton of track/untrack operations. No functionality change intended.

  llvm-svn: 272512

* [BPF] Remove exit-on-error flag in test (PR27766) (Diana Picus, 2016-05-23, 1 file, -0/+3)

  The exit-on-error flag on the many_args1.ll test is needed to avoid an
  unreachable in BPFTargetLowering::LowerCall. We can also avoid it by
  ignoring any superfluous arguments to the call (i.e. any arguments
  after the first 5).

  Fixes PR27766.

  Differential Revision: http://reviews.llvm.org/D20471

  v2 of r270419

  llvm-svn: 270440

* Reverts "[BPF] Remove exit-on-error flag in test (PR27766)" (Renato Golin, 2016-05-23, 1 file, -3/+0)

  This patch reverts r270419 because it broke a lot of buildbots, mostly
  Windows. We'd like help in investigating the issues, but for now, it
  should stay out.

  llvm-svn: 270433

* [BPF] Remove exit-on-error flag in test (PR27766) (Diana Picus, 2016-05-23, 1 file, -0/+3)

  The exit-on-error flag on the many_args1.ll test is needed to avoid an
  unreachable in BPFTargetLowering::LowerCall. We can also avoid it by
  ignoring any superfluous arguments to the call (i.e. any arguments
  after the first 5).

  Fixes PR27766.

  llvm-svn: 270419

* Revert r240137 (Fixed/added namespace ending comments using clang-tidy. NFC) (Alexander Kornienko, 2015-06-23, 1 file, -1/+1)

  Apparently, the style needs to be agreed upon first.

  llvm-svn: 240390

* Fixed/added namespace ending comments using clang-tidy. NFC (Alexander Kornienko, 2015-06-19, 1 file, -1/+1)

  The patch is generated using this command:

      tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
        -checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
        llvm/lib/

  Thanks to Eugene Kosov for the original patch!

  llvm-svn: 240137

* Change getTargetNodeName() to produce compiler warnings for missing cases, fix them (Matthias Braun, 2015-05-07, 1 file, -1/+1)

  llvm-svn: 236775

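  The usual pattern behind this kind of change: switch over the target
  node enum with no default case, so the compiler warns when a new node
  is added but not named. A generic sketch with a couple of hypothetical
  node names, not BPF's actual list:

      const char *BPFTargetLowering::getTargetNodeName(unsigned Opcode) const {
        switch ((BPFISD::NodeType)Opcode) {
        case BPFISD::FIRST_NUMBER:
          break; // not a real node
        case BPFISD::RET_FLAG:
          return "BPFISD::RET_FLAG";
        case BPFISD::CALL:
          return "BPFISD::CALL";
        // ... one case per node; a missing case now triggers a warning.
        }
        return nullptr;
      }
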
* bpf: fix build (Alexei Starovoitov, 2015-02-28, 1 file, -0/+1)

  Complete the plumbing of passing TargetRegisterInfo through
  computeRegisterProperties started by r230583.

  llvm-svn: 230858

* Remove an argument-less call to getSubtargetImpl from TargetLoweringBase. (Eric Christopher, 2015-02-26, 1 file, -1/+1)

  This required plumbing a TargetRegisterInfo through
  computeRegisterProperties and into findRepresentativeClass, which uses
  it for register class iteration. This required passing a subtarget
  into a few target-specific initializations of TargetLowering.

  llvm-svn: 230583

* BPF backend (Alexei Starovoitov, 2015-01-24, 1 file, -0/+89)

  Summary:

  V8->V9:
  - cleanup tests

  V7->V8:
  - addressed feedback from David:
    - switched to range-based 'for' loops
    - fixed formatting of tests

  V6->V7:
  - rebased and adjusted AsmPrinter args
  - CamelCased .td, fixed formatting, cleaned up names, removed unused
    patterns
  - diffstat: 3 files changed, 203 insertions(+), 227 deletions(-)

  V5->V6:
  - addressed feedback from Chandler:
    - reinstated full verbose standard banner in all files
    - fixed variables that were not in CamelCase
    - fixed names of #ifdef in header files
    - removed redundant braces in if/else chains with single statements
    - fixed comments
    - removed trailing empty line
    - dropped debug annotations from tests
  - diffstat of these changes: 46 files changed, 456 insertions(+),
    469 deletions(-)

  V4->V5:
  - fix setLoadExtAction() interface
  - clang-formatted all where it made sense

  V3->V4:
  - added CODE_OWNERS entry for BPF backend

  V2->V3:
  - fix metadata in tests

  V1->V2:
  - addressed feedback from Tom and Matt
  - removed top-level change to configure (now everything is via
    'experimental-backend')
  - reworked error reporting via DiagnosticInfo (similar to R600)
  - added a few more tests
  - added cmake build
  - added Triple::bpf
  - tested on linux and darwin

  V1 cover letter:
  ---------------------
  Recently linux gained a "universal in-kernel virtual machine", which
  is called eBPF or extended BPF. The name comes from "Berkeley Packet
  Filter", since the new instruction set is based on it. This patch adds
  a new backend that emits the extended BPF instruction set.

  The concept and development are covered by the following articles:
    http://lwn.net/Articles/599755/
    http://lwn.net/Articles/575531/
    http://lwn.net/Articles/603983/
    http://lwn.net/Articles/606089/
    http://lwn.net/Articles/612878/

  One of the use cases: a dtrace/systemtap alternative.

  bpf syscall manpage:
    https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=b4fc1a460f3017e958e6a8ea560ea0afd91bf6fe

  Instruction set description and differences vs classic BPF:
    http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/networking/filter.txt

  Short summary of the instruction set:
  - 64-bit registers:
      R0      - return value from in-kernel function, and exit value
                for the BPF program
      R1 - R5 - arguments from the BPF program to the in-kernel function
      R6 - R9 - callee-saved registers that the in-kernel function will
                preserve
      R10     - read-only frame pointer to access the stack
  - two-operand instructions like +, -, *, mov, load/store
  - implicit prologue/epilogue (invisible stack pointer)
  - no floating point, no simd

  Short history of extended BPF in the kernel: interpreter in 3.15,
  x64 JIT in 3.16, arm64 JIT, verifier, bpf syscall in 3.18, more to
  come in the future.

  It's a very small and simple backend. There is no support for global
  variables, arbitrary function calls, floating point, varargs,
  exceptions, indirect jumps, arbitrary pointer arithmetic, alloca, etc.
  From the C front-end point of view it's very restricted. This is done
  on purpose, since the kernel rejects all programs that it cannot prove
  safe. It rejects programs with loops and with memory accesses via
  arbitrary pointers. When the kernel accepts a program, it is
  guaranteed that the program will terminate and will not crash the
  kernel.

  This patch implements all the 'must have' bits. There are several
  things on the TODO list, so this is not the end of development. Most
  of the code is boilerplate, copy-pasted from other backends. The only
  odd things are the lack of < and <= instructions, specialized
  load_byte intrinsics, and 'compare and goto' as a single instruction.

  The current instruction set is fixed, but more instructions can be
  added in the future.

  Signed-off-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>

  Subscribers: majnemer, chandlerc, echristo, joerg, pete, rengolin,
  kristof.beyls, arsenm, t.p.northover, tstellarAMD, aemerson,
  llvm-commits

  Differential Revision: http://reviews.llvm.org/D6494

  llvm-svn: 227008