path: root/clang/lib/CodeGen/CGOpenMPRuntimeNVPTX.cpp
Commit message (Author, Date, Files changed, Lines -/+)
* [OpenMP] Remove implicit data sharing code gen that aims to use device shared memory (Gheorghe-Teodor Bercea, 2018-03-07, 1 file, -186/+26)
  Summary: Remove this scheme for now since it will be covered by another more generic scheme using global memory. This code will be worked into an optimization for the generic data sharing scheme. Removing this completely and then adding it via future patches will make all future data sharing patches cleaner.
  Reviewers: ABataev, carlo.bertolli, caomhin
  Reviewed By: ABataev
  Subscribers: jholewinski, guansong, cfe-commits
  Differential Revision: https://reviews.llvm.org/D43625
  llvm-svn: 326948
* Pass a GlobalDecl to SetInternalFunctionAttributes. NFC. (Rafael Espindola, 2018-02-28, 1 file, -6/+6)
  This just reduces the noise in a followup patch. Part of D43900.
  llvm-svn: 326385
* [OpenMP] Extend NVPTX SPMD implementation of combined constructs (Carlo Bertolli, 2018-02-28, 1 file, -21/+12)
  This patch extends the SPMD implementation to all target constructs and guards this implementation under a new flag.
  Differential Revision: https://reviews.llvm.org/D43852
  llvm-svn: 326368
* Recommit rL323952: [DebugInfo] Enable debug information for C99 VLA types. (Sander de Smalen, 2018-02-03, 1 file, -1/+1)
  Fixed build issue when building with g++-4.8 (specialization after instantiation).
  llvm-svn: 324173
* Reverting patch rL323952 due to build errors that I haven't encountered in local builds. (Sander de Smalen, 2018-02-01, 1 file, -1/+1)
  llvm-svn: 323956
* [DebugInfo] Enable debug information for C99 VLA types (Sander de Smalen, 2018-02-01, 1 file, -1/+1)
  Summary: This patch enables debugging of C99 VLA types by generating more precise LLVM debug metadata, using the extended DISubrange 'count' field that takes a DIVariable. This should implement Bug 30553: Debug info generated for arrays is not what GDB expects (not as good as GCC's), https://bugs.llvm.org/show_bug.cgi?id=30553
  Reviewers: echristo, aprantl, dexonsmith, clayborg, pcc, kristof.beyls, dblaikie
  Reviewed By: aprantl
  Subscribers: jholewinski, schweitz, davide, fhahn, JDevlieghere, cfe-commits
  Differential Revision: https://reviews.llvm.org/D41698
  llvm-svn: 323952
* [OPENMP] Remove more empty SourceLocations() from the code. (Alexey Bataev, 2018-01-23, 1 file, -18/+15)
  Removed more empty SourceLocations() from the OpenMP code and replaced them with the correct locations for better debug info emission.
  llvm-svn: 323232
* [OPENMP] Replace calls of getAssociatedStmt(). (Alexey Bataev, 2018-01-12, 1 file, -1/+1)
  getAssociatedStmt() returns the outermost captured statement for the OpenMP directive and may return an incorrect region for combined constructs. Reworked the code to reduce the number of calls of getAssociatedStmt() and used the getInnermostCapturedStmt() and getCapturedStmt() functions instead. For firstprivate variables, this may lead to generation of extra allocas for private copies even when the variable is passed by value into the outlined function and could be used directly as the private copy.
  llvm-svn: 322393
* [OPENMP] Fix debug info for outlined functions in NVPTX + add more tests. (Alexey Bataev, 2018-01-08, 1 file, -4/+4)
  Fixed the names of emitted outlined functions for the NVPTX target and added extra tests for the debug info.
  llvm-svn: 322022
* [OPENMP] Fix casting in NVPTX support library. (Alexey Bataev, 2018-01-04, 1 file, -37/+49)
  If the reduction requires a shuffle in the NVPTX codegen, we may need to cast the reduced value to the integer type. This casting was implemented incorrectly and could cause a compiler crash. The patch fixes this problem.
  llvm-svn: 321818
* [OPENMP] Add debug info for generated functions. (Alexey Bataev, 2018-01-04, 1 file, -71/+81)
  Most of the functions generated for OpenMP were emitted with debug info disabled. The patch fixes this for a better user experience.
  llvm-svn: 321816
* [OpenMP] Further adjustments of nvptx runtime functions (Jonas Hahnfeld, 2017-12-27, 1 file, -11/+20)
  Pass in default value of 1, similar to previous commit r318836.
  Differential Revision: https://reviews.llvm.org/D41012
  llvm-svn: 321486
* [OpenMP] Add function attribute for triggering data sharing. (Gheorghe-Teodor Bercea, 2017-12-12, 1 file, -0/+2)
  Summary: The backend should only emit data sharing code for the cases where it is needed. A new function attribute is used by Clang to enable data sharing only for the cases where OpenMP semantics require it and there are variables that need to be shared.
  Reviewers: hfinkel, Hahnfeld, ABataev, carlo.bertolli, caomhin
  Reviewed By: ABataev
  Subscribers: cfe-commits, jholewinski
  Differential Revision: https://reviews.llvm.org/D41123
  llvm-svn: 320527
* [OPENMP] Do not mark captured variables as artificial in debug info. (Alexey Bataev, 2017-11-22, 1 file, -3/+11)
  Captured variables should not be marked as artificial parameters in outlined functions in debug info.
  llvm-svn: 318843
* [OpenMP] Adjust arguments of nvptx runtime functions (Jonas Hahnfeld, 2017-11-22, 1 file, -12/+20)
  In the future the compiler will analyze whether the OpenMP runtime needs to be (fully) initialized and avoid that overhead if possible. The functions already take an argument to transfer that information to the runtime, so pass in the default value 1. (This is needed for binary compatibility with libomptarget-nvptx currently being upstreamed.)
  Differential Revision: https://reviews.llvm.org/D40354
  llvm-svn: 318836
* [OpenMP] Add implicit data sharing support when offloading to NVIDIA GPUs using OpenMP device offloading (Gheorghe-Teodor Bercea, 2017-11-21, 1 file, -23/+171)
  Summary: This patch is part of the development effort to add support in the current OpenMP GPU offloading implementation for implicitly sharing variables between a target region executed by the team master thread and the worker threads within that team.
  This patch is the first of three required for successfully performing the implicit sharing of master thread variables with the worker threads within a team. The remaining two patches are:
  - Patch D38978 to the LLVM NVPTX backend, which ensures the lowering of shared variables to a device memory which allows the sharing of references;
  - Patch (coming soon) to the libomptarget runtime library, which ensures that a list of references to shared variables is properly maintained.
  A simple code snippet which illustrates an implicit data sharing situation is as follows:
  ```
  #pragma omp target
  {
    // master thread only
    int v;
    #pragma omp parallel
    {
      // worker threads
      // use v
    }
  }
  ```
  Variable v is implicitly shared from the team master thread, which executes the code in between the target and parallel directives. The worker threads must operate on the latest version of v, including any updates performed by the master. The code generated in this patch relies on the LLVM NVPTX patch (mentioned above), which prevents v from being lowered to the thread-local memory of the master thread, which would otherwise make the reference to this variable un-shareable with the workers. This ensures that the code generated by this patch is correct. Since the parallel region is outlined, the passing of arguments to the outlined regions must preserve the original order of arguments. The runtime therefore maintains a list of references to shared variables, ensuring that they are passed in the correct order. The passing of arguments to the outlined parallel function is performed in a separate function which the data sharing infrastructure constructs in this patch. The function is inlined when optimizations are enabled.
  Reviewers: hfinkel, carlo.bertolli, arpith-jacob, Hahnfeld, ABataev, caomhin
  Reviewed By: ABataev
  Subscribers: cfe-commits, jholewinski
  Differential Revision: https://reviews.llvm.org/D38976
  llvm-svn: 318773
* [clang] Remove redundant return [NFC] (Mandeep Singh Grang, 2017-11-13, 1 file, -1/+0)
  Reviewers: rsmith, sfantao, mcrosier
  Reviewed By: mcrosier
  Subscribers: jholewinski, cfe-commits
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D39915
  llvm-svn: 318074
* [OPENMP] Codegen for `#pragma omp target parallel for simd`. (Alexey Bataev, 2017-11-09, 1 file, -0/+1)
  Added codegen for `#pragma omp target parallel for simd` and its clauses.
  llvm-svn: 317813
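  For context, a purely illustrative use of the combined construct this commit adds codegen for (the function and clause choices below are assumptions, not taken from the commit):
  ```
  // Illustrative only: a combined 'target parallel for simd' construct.
  void scale(float *a, int n, float s) {
    #pragma omp target parallel for simd map(tofrom : a[0 : n])
    for (int i = 0; i < n; ++i)
      a[i] *= s;
  }
  ```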
* [OPENMP] Codegen for `#pragma omp target parallel for`. (Alexey Bataev, 2017-11-08, 1 file, -0/+1)
  llvm-svn: 317719
* Convert clang::LangAS to a strongly typed enum (Alexander Richardson, 2017-10-15, 1 file, -2/+2)
  Summary: Convert clang::LangAS to a strongly typed enum. Currently both clang AST address spaces and target-specific address spaces are represented as unsigned, which can lead to subtle errors if the wrong type is passed. It is especially confusing in the CodeGen files, as it is not possible to see what kind of address space should be passed to a function without looking at the implementation.
  I originally made this change for our LLVM fork for the CHERI architecture, where we make extensive use of address spaces to differentiate between capabilities and pointers. When merging the upstream changes I usually run into some test failures or runtime crashes because the wrong kind of address space is passed to a function. By converting the LangAS enum to a C++11 scoped enum we can catch these errors at compile time. Additionally, it is now obvious from the function signature which kind of address space it expects.
  I found the following errors while writing this patch:
  - ItaniumRecordLayoutBuilder::LayoutField was passing a clang AST address space to TargetInfo::getPointer{Width,Align}()
  - TypePrinter::printAttributedAfter() prints the numeric value of the clang AST address space instead of the target address space. However, this code is not used so I kept the current behaviour
  - initializeForBlockHeader() in CGBlocks.cpp was passing LangAS::opencl_generic to TargetInfo::getPointer{Width,Align}()
  - CodeGenFunction::EmitBlockLiteral() was passing an AST address space to TargetInfo::getPointerWidth()
  - CGOpenMPRuntimeNVPTX::translateParameter() passed a target address space to Qualifiers::addAddressSpace()
  - CGOpenMPRuntimeNVPTX::getParameterAddress() was using llvm::Type::getPointerTo() with an AST address space
  - clang_getAddressSpace() returns either a LangAS or a target address space. As this is exposed to C I have kept the current behaviour and added a comment stating that it is probably not correct.
  Other than this, the patch should not cause any functional changes.
  Reviewers: yaxunl, pcc, bader
  Reviewed By: yaxunl, bader
  Subscribers: jlebar, jholewinski, nhaehnle, Anastasia, cfe-commits
  Differential Revision: https://reviews.llvm.org/D38816
  llvm-svn: 315871
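  As a side note, a minimal sketch (assumed names, not clang's actual declarations) of why a scoped enum catches this kind of mix-up at compile time:
  ```
  #include <cstdint>

  // Minimal sketch with made-up enumerators; not clang's actual code.
  enum class LangAS : uint32_t { Default, opencl_global, opencl_local };

  // Pretend this expects a *target* address space encoded as plain unsigned.
  uint32_t getPointerWidth(uint32_t TargetAS) { return TargetAS == 0 ? 64 : 32; }

  void example(LangAS AS) {
    // getPointerWidth(AS);                      // error: no implicit conversion
    getPointerWidth(static_cast<uint32_t>(AS));  // the required cast makes the mix-up visible
  }
  ```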
* [OPENMP] Fix types for the target specific parameters in debug mode. (Alexey Bataev, 2017-09-13, 1 file, -3/+3)
  Incorrect types were used for target-specific parameters in debug mode; the original pointer types should be used rather than the pointee types.
  llvm-svn: 313186
* [OPENMP] Fix compiler crash on argument translate for NVPTX. (Alexey Bataev, 2017-08-15, 1 file, -0/+5)
  When translating arguments for the NVPTX target, it was not taken into account that a function may have a variable number of arguments. The patch fixes this problem.
  llvm-svn: 310920
* [OPENMP] Generalization of calls of the outlined functions. (Alexey Bataev, 2017-08-14, 1 file, -25/+21)
  General improvement of the outlined function calls.
  llvm-svn: 310840
* [OPENMP][DEBUG] Set proper address space info if required by target. (Alexey Bataev, 2017-08-08, 1 file, -0/+78)
  Arguments passed to the outlined function must have correct address space information for proper debug info support. The patch sets the global address space for arguments that are mapped and passed by reference. Also, cuda-gdb does not handle reference types correctly, so reference arguments are represented as pointers.
  llvm-svn: 310387
* Revert "[OPENMP][DEBUG] Set proper address space info if required by target."Alexey Bataev2017-08-081-78/+0
| | | | | | This reverts commit r310377. llvm-svn: 310379
* [OPENMP][DEBUG] Set proper address space info if required by target. (Alexey Bataev, 2017-08-08, 1 file, -0/+78)
  Arguments passed to the outlined function must have correct address space information for proper debug info support. The patch sets the global address space for arguments that are mapped and passed by reference. Also, cuda-gdb does not handle reference types correctly, so reference arguments are represented as pointers.
  llvm-svn: 310377
* Revert "[OPENMP][DEBUG] Set proper address space info if required by target."Alexey Bataev2017-08-081-78/+0
| | | | | | This reverts commit r310360. llvm-svn: 310364
* [OPENMP][DEBUG] Set proper address space info if required by target. (Alexey Bataev, 2017-08-08, 1 file, -0/+78)
  Arguments passed to the outlined function must have correct address space information for proper debug info support. The patch sets the global address space for arguments that are mapped and passed by reference. Also, cuda-gdb does not handle reference types correctly, so reference arguments are represented as pointers.
  llvm-svn: 310360
* Revert "[OPENMP][DEBUG] Set proper address space info if required by target."Alexey Bataev2017-08-041-79/+0
| | | | | | This reverts commit r310104. llvm-svn: 310135
* [OPENMP][DEBUG] Set proper address space info if required by target. (Alexey Bataev, 2017-08-04, 1 file, -0/+79)
  Arguments passed to the outlined function must have correct address space information for proper debug info support. The patch sets the global address space for arguments that are mapped and passed by reference. Also, cuda-gdb does not handle reference types correctly, so reference arguments are represented as pointers.
  llvm-svn: 310104
* [OPENMP] Unify generation of outlined function calls. (Alexey Bataev, 2017-08-04, 1 file, -5/+5)
  llvm-svn: 310098
* [DebugInfo] Add kind of ImplicitParamDecl for emission of FlagObjectPointer. (Alexey Bataev, 2017-06-09, 1 file, -31/+18)
  Summary: If the first parameter of the function is an ImplicitParamDecl, codegen automatically marks it as an implicit argument with a `this` or `self` pointer. Added an internal kind to ImplicitParamDecl to separate 'this', 'self', 'vtt' and other implicit parameters from other kinds of parameters.
  Reviewers: rjmccall, aaron.ballman
  Subscribers: cfe-commits
  Differential Revision: https://reviews.llvm.org/D33735
  llvm-svn: 305075
* IRGen: Add optnone attribute on function during O0 (Mehdi Amini, 2017-05-29, 1 file, -0/+1)
  Amongst other things, this will help LTO to correctly handle/honor files compiled with O0, helping debug failures. It also seems in line with how we handle other options, like how -fnoinline adds the appropriate attribute as well.
  Differential Revision: https://reviews.llvm.org/D28404
  llvm-svn: 304127
* Make helper functions static. NFC. (Benjamin Kramer, 2017-05-26, 1 file, -7/+7)
  llvm-svn: 304028
* [OpenMP] Teams reduction on the NVPTX device. (Arpith Chacko Jacob, 2017-02-16, 1 file, -7/+443)
  This patch implements codegen for the reduction clause on any teams construct for elementary data types. It builds on parallel reductions on the GPU. Subsequently, the team master writes to a unique location in a global memory scratchpad. The last team to do so loads and reduces this array to calculate the final result. This patch emits two helper functions that are used by the OpenMP runtime on the GPU to perform reductions across teams.
  Patch by Tian Jin in collaboration with Arpith Jacob
  Reviewers: ABataev
  Differential Revision: https://reviews.llvm.org/D29879
  llvm-svn: 295335
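  For illustration only (not taken from the patch), a source-level construct that exercises this cross-team reduction path might look like:
  ```
  // Illustrative only: a teams-level reduction whose device codegen relies on
  // the cross-team scratchpad scheme described in the commit above.
  double sum_array(const double *a, int n) {
    double sum = 0.0;
    #pragma omp target teams distribute parallel for reduction(+ : sum) map(to : a[0 : n]) map(tofrom : sum)
    for (int i = 0; i < n; ++i)
      sum += a[i];
    return sum;
  }
  ```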
* [OpenMP] Parallel reduction on the NVPTX device. (Arpith Chacko Jacob, 2017-02-16, 1 file, -0/+1009)
  This patch implements codegen for the reduction clause on any parallel construct for elementary data types. An efficient implementation requires hierarchical reduction within a warp and a threadblock. It is complicated by the fact that variables declared in the stack of a CUDA thread cannot be shared with other threads.
  The patch creates a struct to hold reduction variables and a number of helper functions. The OpenMP runtime on the GPU implements reduction algorithms that use these helper functions to perform reductions within a team. Variables are shared between CUDA threads using shuffle intrinsics.
  An implementation of reductions on the NVPTX device is substantially different to that of CPUs. However, this patch is written so that there are minimal changes to the rest of OpenMP codegen. The implemented design allows the compiler and runtime to be decoupled, i.e., the runtime does not need to know of the reduction operation(s), the type of the reduction variable(s), or the number of reductions. The design also allows reuse of host codegen, with appropriate specialization for the NVPTX device.
  While the patch does introduce a number of abstractions, the expected use case calls for inlining of the GPU OpenMP runtime. After inlining and optimizations in LLVM, these abstractions are unwound and performance of OpenMP reductions is comparable to CUDA-canonical code.
  Patch by Tian Jin in collaboration with Arpith Jacob
  Reviewers: ABataev
  Differential Revision: https://reviews.llvm.org/D29758
  llvm-svn: 295333
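  Again purely for illustration (the function and clauses below are assumptions, not from the patch), a parallel-level reduction inside a target region that this codegen handles:
  ```
  // Illustrative only: a parallel reduction in a target region; on NVPTX the
  // emitted helpers reduce across the threads of a team as described above.
  int dot(const int *x, const int *y, int n) {
    int result = 0;
    #pragma omp target map(to : x[0 : n], y[0 : n]) map(tofrom : result)
    #pragma omp parallel for reduction(+ : result)
    for (int i = 0; i < n; ++i)
      result += x[i] * y[i];
    return result;
  }
  ```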
* Revert r295319 while investigating buildbot failure. (Arpith Chacko Jacob, 2017-02-16, 1 file, -1008/+0)
  llvm-svn: 295323
* [OpenMP] Parallel reduction on the NVPTX device. (Arpith Chacko Jacob, 2017-02-16, 1 file, -0/+1008)
  This patch implements codegen for the reduction clause on any parallel construct for elementary data types. An efficient implementation requires hierarchical reduction within a warp and a threadblock. It is complicated by the fact that variables declared in the stack of a CUDA thread cannot be shared with other threads.
  The patch creates a struct to hold reduction variables and a number of helper functions. The OpenMP runtime on the GPU implements reduction algorithms that use these helper functions to perform reductions within a team. Variables are shared between CUDA threads using shuffle intrinsics.
  An implementation of reductions on the NVPTX device is substantially different to that of CPUs. However, this patch is written so that there are minimal changes to the rest of OpenMP codegen. The implemented design allows the compiler and runtime to be decoupled, i.e., the runtime does not need to know of the reduction operation(s), the type of the reduction variable(s), or the number of reductions. The design also allows reuse of host codegen, with appropriate specialization for the NVPTX device.
  While the patch does introduce a number of abstractions, the expected use case calls for inlining of the GPU OpenMP runtime. After inlining and optimizations in LLVM, these abstractions are unwound and performance of OpenMP reductions is comparable to CUDA-canonical code.
  Patch by Tian Jin in collaboration with Arpith Jacob
  Reviewers: ABataev
  Differential Revision: https://reviews.llvm.org/D29758
  llvm-svn: 295319
* [OpenMP] Codegen support for 'target teams' on the NVPTX device. (Arpith Chacko Jacob, 2017-01-26, 1 file, -0/+1)
  This is a simple patch to teach OpenMP codegen to emit the construct in Generic mode.
  Reviewers: ABataev
  Differential Revision: https://reviews.llvm.org/D29143
  llvm-svn: 293183
* [OpenMP] Support for the proc_bind-clause on 'target parallel' on the NVPTX device. (Arpith Chacko Jacob, 2017-01-25, 1 file, -0/+11)
  This patch adds support for the proc_bind clause on the Spmd construct 'target parallel' on the NVPTX device. Since the parallel region is created upon kernel launch, this clause can be safely ignored on the NVPTX device at codegen time for level 0 parallelism.
  Reviewers: ABataev
  Differential Revision: https://reviews.llvm.org/D29128
  llvm-svn: 293069
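  A hypothetical usage of the clause in question (illustrative, not from the patch):
  ```
  // Illustrative only: proc_bind on the combined 'target parallel' construct.
  // Per the commit above, the clause is ignored on the NVPTX device for
  // level-0 parallelism because the parallel region exists at kernel launch.
  void work(int *a, int n) {
    #pragma omp target parallel proc_bind(spread) map(tofrom : a[0 : n])
    {
      // parallel work on the device
    }
  }
  ```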
* Reverting commit because an NVPTX patch sneaked in. Break up into two patches. (Arpith Chacko Jacob, 2017-01-25, 1 file, -1/+0)
  llvm-svn: 293003
* [OpenMP] Codegen support for 'target teams' on the host. (Arpith Chacko Jacob, 2017-01-25, 1 file, -0/+1)
  This patch adds support for codegen of 'target teams' on the host. This combined directive has two captured statements, one for the 'teams' region, and the other for the 'parallel'. This target teams region is offloaded using the __tgt_target_teams() call. The patch sets the number of teams as an argument to this call.
  Reviewers: ABataev
  Differential Revision: https://reviews.llvm.org/D29084
  llvm-svn: 293001
* [OpenMP] Support for the num_threads-clause on 'target parallel' on the NVPTX device. (Arpith Chacko Jacob, 2017-01-25, 1 file, -0/+11)
  This patch adds support for the Spmd construct 'target parallel' on the NVPTX device. This involves ignoring the num_threads clause on the device since the number of threads in this combined construct is already set on the host through the call to __tgt_target_teams().
  Reviewers: ABataev
  Differential Revision: https://reviews.llvm.org/D29083
  llvm-svn: 292999
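  Illustrative usage of the clause described above (the function and values are assumptions, not from the patch):
  ```
  #include <omp.h>

  // Illustrative only: num_threads on 'target parallel'. Per the commit above,
  // the clause is ignored on the NVPTX device because the thread count is
  // already fixed on the host via the __tgt_target_teams() call.
  void zero_fill(int *a, int n) {
    #pragma omp target parallel num_threads(128) map(from : a[0 : n])
    {
      int tid = omp_get_thread_num();
      int nth = omp_get_num_threads();
      for (int i = tid; i < n; i += nth)  // each thread fills a strided share
        a[i] = 0;
    }
  }
  ```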
* [OpenMP] Codegen for the 'target parallel' directive on the NVPTX device. (Arpith Chacko Jacob, 2017-01-18, 1 file, -9/+198)
  This patch adds codegen for the 'target parallel' directive on the NVPTX device. We term offload OpenMP directives such as 'target parallel' and 'target teams distribute parallel for' as SPMD constructs. SPMD constructs, in contrast to Generic ones like the plain 'target', can never contain a serial region.
  SPMD constructs can be handled more efficiently on the GPU and do not require the Warp Loop of the Generic codegen scheme. This patch adds SPMD codegen support for 'target parallel' on the NVPTX device and can be reused for other SPMD constructs.
  Reviewers: ABataev
  Differential Revision: https://reviews.llvm.org/D28755
  llvm-svn: 292428
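  To make the SPMD/Generic distinction concrete, an illustrative pair of regions (assumed code, not from the patch):
  ```
  // Illustrative only. The first function uses an SPMD construct: every device
  // thread executes the parallel region, so there is no serial part. The second
  // uses a Generic 'target' region: the statement before the inner 'parallel'
  // runs serially on the team master and needs the Generic codegen scheme.
  void spmd_form(int *a, int n) {
    #pragma omp target parallel map(tofrom : a[0 : n])
    {
      #pragma omp for
      for (int i = 0; i < n; ++i)
        a[i] += 1;
    }
  }

  void generic_form(int *a, int n) {
    #pragma omp target map(tofrom : a[0 : n])
    {
      int offset = a[0];  // serial: executed by the team master only
      #pragma omp parallel for
      for (int i = 0; i < n; ++i)
        a[i] += offset;
    }
  }
  ```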
* [OpenMP] Codegen support for 'target parallel' on the host. (Arpith Chacko Jacob, 2017-01-18, 1 file, -15/+13)
  This patch adds support for codegen of 'target parallel' on the host. It is also the first combined directive that requires two or more captured statements. Support for this functionality is included in the patch.
  A combined directive such as 'target parallel' has two captured statements, one for the 'target' and the other for the 'parallel' region. Two captured statements are required because each has different implicit parameters (see SemaOpenMP.cpp). For example, the 'parallel' has 'global_tid' and 'bound_tid' while the 'target' does not. The patch adds support for handling multiple captured statements based on the combined directive.
  When codegen'ing the 'target parallel' directive, the 'target' outlined function is created using the outer captured statement and the 'parallel' outlined function is created using the inner captured statement.
  Reviewers: ABataev
  Differential Revision: https://reviews.llvm.org/D28753
  llvm-svn: 292419
* Revert r292374 to debug Windows buildbot failure. (Arpith Chacko Jacob, 2017-01-18, 1 file, -13/+15)
  llvm-svn: 292400
* [OpenMP] Codegen support for 'target parallel' on the host. (Arpith Chacko Jacob, 2017-01-18, 1 file, -15/+13)
  This patch adds support for codegen of 'target parallel' on the host. It is also the first combined directive that requires two or more captured statements. Support for this functionality is included in the patch.
  A combined directive such as 'target parallel' has two captured statements, one for the 'target' and the other for the 'parallel' region. Two captured statements are required because each has different implicit parameters (see SemaOpenMP.cpp). For example, the 'parallel' has 'global_tid' and 'bound_tid' while the 'target' does not. The patch adds support for handling multiple captured statements based on the combined directive.
  When codegen'ing the 'target parallel' directive, the 'target' outlined function is created using the outer captured statement and the 'parallel' outlined function is created using the inner captured statement.
  Reviewers: ABataev
  Differential Revision: https://reviews.llvm.org/D28753
  llvm-svn: 292374
* Remove unused lambda captures. NFC (Malcolm Parsons, 2017-01-13, 1 file, -4/+3)
  llvm-svn: 291939
* [OpenMP] Basic support for a parallel directive in a target region on an NVPTX device (Arpith Chacko Jacob, 2017-01-10, 1 file, -7/+216)
  Summary: This patch introduces support for the execution of parallel constructs in a target region on the NVPTX device. Parallel regions must be in the lexical scope of the target directive. The master thread in the master warp signals parallel work for worker threads in worker warps on encountering a parallel region.
  Note: The patch does not yet support capture of arguments in a parallel region so the test cases are simple.
  Reviewers: ABataev
  Differential Revision: https://reviews.llvm.org/D28145
  llvm-svn: 291565
* [OpenMP] Add fields for flags in the offload entry descriptor. (Samuel Antao, 2017-01-05, 1 file, -1/+1)
  Summary: This patch adds two fields to the offload entry descriptor. One field is meant to signal Ctors/Dtors and `link` global variables, and the other is reserved for runtime library use. Currently, these fields are only filled with zeros in the current code generation, but that will change when `declare target` is added.
  The reason we are adding these fields now is to make the code generation consistent with the runtime library proposal under review in https://reviews.llvm.org/D14031.
  Reviewers: ABataev, hfinkel, carlo.bertolli, kkwli0, arpith-jacob, Hahnfeld
  Subscribers: cfe-commits, caomhin, jholewinski
  Differential Revision: https://reviews.llvm.org/D28298
  llvm-svn: 291124
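  As a rough sketch of the shape such a descriptor takes (field names and layout here are assumptions for illustration; the authoritative definition is in the runtime proposal referenced above):
  ```
  #include <cstddef>
  #include <cstdint>

  // Sketch only; not the actual declaration used by the offloading runtime.
  struct OffloadEntrySketch {
    void *Address;         // address of the global variable or entry point
    char *Name;            // unique name of the entry
    std::size_t Size;      // size of the global variable (0 for functions)
    std::int32_t Flags;    // new field: signals Ctors/Dtors and 'link' globals
    std::int32_t Reserved; // new field: reserved for runtime library use
  };
  ```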