path: root/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp
Commit message | Author | Date | Files | Lines
* Revert "[X86] Use push-pop for materializing small constants under 'minsize'"David Majnemer2016-01-051-8/+2
| | | | | | | | | | | | The red zone consists of 128 bytes beyond the stack pointer so that the allocation of objects in leaf functions doesn't require decrementing rsp. In r255656, we introduced an optimization that would cheaply materialize certain constants via push/pop. Push decrements the stack pointer and stores it's result at what is now the top of the stack. However, this means that using push/pop would encroach on the red zone. PR26023 gives an example where this corrupts an object in the red zone. llvm-svn: 256808
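    An illustrative sketch of the conflict (not from the commit; the constant and registers are arbitrary):
        pushq $42        # decrements %rsp by 8 and writes 42 at the new stack top,
                         # i.e. into the first 8 bytes of the red zone
        popq  %rax       # loads the constant and restores %rsp
        movl  $42, %eax  # the plain mov form never touches %rsp or the red zone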
* [X86] Use push-pop for materializing small constants under 'minsize' | Hans Wennborg | 2015-12-17 | 1 | -2/+8
    Use the 3-byte (4 with REX prefix) push-pop sequence for materializing small constants. This is smaller than using a mov (5, 6 or 7 bytes depending on size and REX prefix), but it's likely to be slower, so only used for 'minsize'. This is a follow-up to r255656.
    Differential Revision: http://reviews.llvm.org/D15549
    llvm-svn: 255936
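    A rough size comparison (illustrative only; encodings assume a small constant and a legacy register, so no REX prefix):
        pushq $42        # 6a 2a             2 bytes
        popq  %rax       # 58                1 byte   (3 bytes total)
        movl  $42, %eax  # b8 2a 00 00 00    5 bytes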
* add a SelectionDAG method to check if no common bits are set in two nodes; NFCI | Sanjay Patel | 2015-11-09 | 1 | -17/+5
    This was suggested in:
    http://reviews.llvm.org/D13956
    and is a follow-on to:
    http://reviews.llvm.org/rL252515
    http://reviews.llvm.org/rL252519
    This lets us remove logically equivalent/duplicated code from DAGCombiner and X86ISelDAGToDAG. A corresponding function for IR instructions already exists in ValueTracking.
    llvm-svn: 252539
* [x86] try harder to match bitwise 'or' into an LEA | Sanjay Patel | 2015-11-09 | 1 | -11/+21
    The motivation for this patch starts with the epic fail example in PR18007: https://llvm.org/bugs/show_bug.cgi?id=18007 ...unfortunately, this patch makes no difference for that case, but it solves some simpler cases. We'll get there some day. :)
    The current 'or' matching code was using computeKnownBits() via isBaseWithConstantOffset() -> MaskedValueIsZero(), but that's an unnecessarily limited use. We can do more by copying the logic in ValueTracking's haveNoCommonBitsSet(), so we can treat the 'or' as if it was an 'add'.
    There's a TODO comment here because we should lift the bit-checking logic into a helper function, so it's not duplicated in DAGCombiner.
    An example of the better LEA matching:
        leal (%rdi,%rdi), %eax
        andl $1, %esi
        orl  %esi, %eax
    Becomes:
        andl $1, %esi
        leal (%rsi,%rdi,2), %eax
    Differential Revision: http://reviews.llvm.org/D13956
    llvm-svn: 252515
* [x86] move recursive add match for LEA to helper function; NFCI | Sanjay Patel | 2015-10-21 | 1 | -30/+37
    llvm-svn: 250926
* Do not use `dyn_cast<X>` after `isa<X>` (NFC) | Mehdi Amini | 2015-10-21 | 1 | -1/+1
    From: Mehdi Amini <mehdi.amini@apple.com>
    llvm-svn: 250883
* X86: Remove implicit ilist iterator conversions, NFC | Duncan P. N. Exon Smith | 2015-10-19 | 1 | -2/+2
    llvm-svn: 250741
* function names should start with a lower case letter; NFC | Sanjay Patel | 2015-10-13 | 1 | -102/+102
    llvm-svn: 250174
* don't repeat function/class/variable names in comments; NFC | Sanjay Patel | 2015-10-13 | 1 | -60/+46
    llvm-svn: 250162
* fix typos; NFC | Sanjay Patel | 2015-10-12 | 1 | -3/+2
    llvm-svn: 250059
* x32. Fixes jmp %reg in x32 | JF Bastien | 2015-08-19 | 1 | -0/+21
    x32 has 32-bit pointers; x86-64 can't jmp %r32. This patch addresses this issue by explicitly zero-extending brind's target to 64 bits.
    Author: jpp
    Reviewers: jfb, dschuff, pavel.v.chupin
    Subscribers: llvm-commits
    Differential Revision: http://reviews.llvm.org/D12112
    llvm-svn: 245452
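    An illustrative sketch of the idea (not from the commit; the register choice is arbitrary):
        # 64-bit mode has no 'jmp *%eax', so an x32 (32-bit) pointer must be
        # zero-extended into a 64-bit register before the indirect branch:
        movl %eax, %eax    # writing a 32-bit register implicitly zeroes the upper 32 bits
        jmpq *%rax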
* Revert "[X86] Widen the 'AND' mask if doing so shrinks the encoding size"Tobias Grosser2015-08-191-61/+2
| | | | | | | This reverts commit 245169 which miscompiles MultiSource/Applications/siod from LNT. llvm-svn: 245432
* [X86] Widen the 'AND' mask if doing so shrinks the encoding size | David Majnemer | 2015-08-16 | 1 | -2/+61
    We can set additional bits in a mask given that we know the other operand of an AND already has some bits set to zero. This can be more efficient if doing so allows us to use an instruction which implicitly sign extends the immediate.
    This fixes PR24085.
    Differential Revision: http://reviews.llvm.org/D11289
    llvm-svn: 245169
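    For illustration (not from the commit; the mask value and register are arbitrary): if the upper bits of %ecx are already known to be zero, the mask 0x0ffffff0 can be widened to 0xfffffff0 (-16), which fits the sign-extended 8-bit immediate form:
        andl $0x0ffffff0, %ecx   # 81 e1 f0 ff ff 0f   (6 bytes)
        andl $-16, %ecx          # 83 e1 f0            (3 bytes)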
* [X86] Allow merging of immediates within a basic block for code size savings | Michael Kuperstein | 2015-08-11 | 1 | -0/+76
    First step in preventing immediates that occur more than once within a single basic block from being pulled into their users, in order to prevent unnecessarily large instruction encodings. Currently enabled only when optimizing for size.
    Patch by: zia.ansari@intel.com
    Differential Revision: http://reviews.llvm.org/D11363
    llvm-svn: 244601
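    A sketch of the idea (illustrative only; the constant, registers and users are arbitrary): materialize a repeated wide immediate once and reuse the register, instead of encoding the 4-byte immediate in every user:
        movl $0x12345678, %eax
        addl %eax, (%rdi)
        addl %eax, (%rsi)
        # rather than:
        addl $0x12345678, (%rdi)
        addl $0x12345678, (%rsi)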
* fix minsize detection: minsize attribute implies optimizing for size | Sanjay Patel | 2015-08-10 | 1 | -2/+1
    llvm-svn: 244460
* wrap OptSize and MinSize attributes for easier and consistent access (NFCI) | Sanjay Patel | 2015-08-04 | 1 | -0/+1
    Create wrapper methods in the Function class for the OptimizeForSize and MinSize attributes. We want to hide the logic of "or'ing" them together when optimizing just for size (-Os).
    Currently, we are not consistent about this and rely on a front-end to always set OptimizeForSize (-Os) if MinSize (-Oz) is on. Thus, there are 18 FIXME changes here that should be added as follow-on patches with regression tests.
    This patch is NFC-intended: it just replaces existing direct accesses of the attributes by the equivalent wrapper call.
    Differential Revision: http://reviews.llvm.org/D11734
    llvm-svn: 243994
* Make TargetLowering::getPointerTy() take DataLayout as an argument | Mehdi Amini | 2015-07-09 | 1 | -4/+7
    Summary: This change is part of a series of commits dedicated to having a single DataLayout during compilation by always using the one owned by the module.
    Reviewers: echristo
    Subscribers: jholewinski, ted, yaron.keren, rafael, llvm-commits
    Differential Revision: http://reviews.llvm.org/D11028
    From: Mehdi Amini <mehdi.amini@apple.com>
    llvm-svn: 241775
* Rename llvm.frameescape and llvm.framerecover to localescape and localrecover | Reid Kleckner | 2015-07-07 | 1 | -1/+1
    Summary: Initially, these intrinsics seemed like part of a family of "frame" related intrinsics, but now I think that's more confusing than helpful. Initially, the LangRef specified that this would create a new kind of allocation that would be allocated at a fixed offset from the frame pointer (EBP/RBP). We ended up dropping that design, and leaving the stack frame layout alone.
    These intrinsics are really about sharing local stack allocations, not frame pointers. I intend to go further and add an `llvm.localaddress()` intrinsic that returns whatever register (EBP, ESI, ESP, RBX) is being used to address locals, which should not be confused with the frame pointer.
    Naming suggestions at this point are welcome, I'm happy to re-run sed.
    Reviewers: majnemer, nicholas
    Subscribers: llvm-commits
    Differential Revision: http://reviews.llvm.org/D11011
    llvm-svn: 241633
* Revert r240137 (Fixed/added namespace ending comments using clang-tidy. NFC) | Alexander Kornienko | 2015-06-23 | 1 | -2/+2
    Apparently, the style needs to be agreed upon first.
    llvm-svn: 240390
* Avoid a Symbol -> Name -> Symbol conversion. | Rafael Espindola | 2015-06-22 | 1 | -14/+27
    Before this we were producing a TargetExternalSymbol from a MCSymbol. That meant extracting the symbol name and fetching the symbol again down the pipeline. This patch adds a DAG.getMCSymbol that lets the MCSymbol pass unchanged on the DAG.
    Doing so removes the need for MO_NOPREFIX and fixes the root cause of pr23900, allowing r240130 to be committed again.
    llvm-svn: 240300
* Fixed/added namespace ending comments using clang-tidy. NFC | Alexander Kornienko | 2015-06-19 | 1 | -2/+2
    The patch is generated using this command:
        tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
          -checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
          llvm/lib/
    Thanks to Eugene Kosov for the original patch!
    llvm-svn: 240137
* [x86] Distinguish the 'o', 'v', 'X', and 'i' inline assembly memory constraints. | Daniel Sanders | 2015-05-16 | 1 | -1/+7
    Summary: But still handle them the same way since I don't know how they differ on this target.
    Of these, 'o' and 'v' are not tested but were already implemented.
    I'm not sure why 'i' is required for X86 since it's supposed to be an immediate constraint rather than a memory constraint. A test asserts without it so I've included it for now.
    No functional change intended.
    Reviewers: nadav
    Subscribers: llvm-commits
    Differential Revision: http://reviews.llvm.org/D8254
    llvm-svn: 237517
* [X86] Fix assertion while DAG combining offsets and ExternalSymbols | Reid Kleckner | 2015-05-04 | 1 | -1/+4
    ExternalSymbol nodes do not contain offsets, unlike GlobalValue nodes.
    llvm-svn: 236471
* Silence unused warning in non-assert builds. | Daniel Jasper | 2015-04-30 | 1 | -2/+3
    llvm-svn: 236213
* Masked gather and scatter - added DAGCombine visitors and AVX-512 instruction selection patterns. | Elena Demikhovsky | 2015-04-30 | 1 | -0/+38
    All other patches, including tests will follow.
    http://reviews.llvm.org/D7665
    llvm-svn: 236211
* [X86] Avoid mangling frameescape labels | Reid Kleckner | 2015-04-29 | 1 | -0/+2
    x86 Windows uses the '_' prefix for all global symbols, and this was mistakenly being applied to frameescape labels, which are not externally visible global symbols. They use the private global prefix 'L'.
    The *right* way to fix this is probably to stop masquerading this label as an ExternalSymbol and create a new SDNode type. These labels are not "external", and we know they will be resolved by assembly time. Having a custom SDNode type would allow us to do better X86 address mode matching, so it's probably worth doing eventually.
    llvm-svn: 236123
* Reapply r235977 "[DebugInfo] Add debug locations to constant SD nodes" | Sergey Dmitrouk | 2015-04-28 | 1 | -43/+53
    [DebugInfo] Add debug locations to constant SD nodes
    This adds debug location to constant nodes of Selection DAG and updates all places that create constants to pass debug locations (see PR13269).
    Can't guarantee that all locations are correct, but in a lot of cases the choice is obvious, so most of them should be. At least all tests pass.
    Tests for these changes do not cover everything, instead just check it for SDNodes, ARM and AArch64 where it's easy to get incorrect locations on constants.
    This is not a complete fix, as FastISel contains a workaround for wrong debug locations, which drops locations from instructions on processing constants, but there isn't currently a way to use debug locations from constants there as llvm::Constant doesn't cache it (yet). Although this is a bit of a different issue, not directly related to these changes.
    Differential Revision: http://reviews.llvm.org/D9084
    llvm-svn: 235989
* Revert "[DebugInfo] Add debug locations to constant SD nodes"Daniel Jasper2015-04-281-53/+43
| | | | | | | This breaks a test: http://bb.pgr.jp/builders/cmake-llvm-x86_64-linux/builds/23870 llvm-svn: 235987
* [DebugInfo] Add debug locations to constant SD nodes | Sergey Dmitrouk | 2015-04-28 | 1 | -43/+53
    This adds debug location to constant nodes of Selection DAG and updates all places that create constants to pass debug locations (see PR13269).
    Can't guarantee that all locations are correct, but in a lot of cases the choice is obvious, so most of them should be. At least all tests pass.
    Tests for these changes do not cover everything, instead just check it for SDNodes, ARM and AArch64 where it's easy to get incorrect locations on constants.
    This is not a complete fix, as FastISel contains a workaround for wrong debug locations, which drops locations from instructions on processing constants, but there isn't currently a way to use debug locations from constants there as llvm::Constant doesn't cache it (yet). Although this is a bit of a different issue, not directly related to these changes.
    Differential Revision: http://reviews.llvm.org/D9084
    llvm-svn: 235977
* [X86] Don't accidentally select shll $1, %eax when shrinking an immediate. | Benjamin Kramer | 2015-04-01 | 1 | -1/+6
    addl has higher throughput and this was needlessly picking a suboptimal encoding, causing PR23098.
    I wish there was a way of doing this without further duplicating tbl-generated patterns, but so far I haven't found one.
    llvm-svn: 233832
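    The two forms in question, for illustration (register choice is arbitrary):
        shll $1, %eax      # shift-by-one form the immediate-shrinking logic could pick
        addl %eax, %eax    # equivalent result, preferred for its higher throughput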
* Recommit r232027 with PR22883 fixed: Add infrastructure for support of multiple memory constraints. | Daniel Sanders | 2015-03-13 | 1 | -6/+6
    The operand flag word for ISD::INLINEASM nodes now contains a 15-bit memory constraint ID when the operand kind is Kind_Mem. This constraint ID is a numeric equivalent to the constraint code string and is converted with a target specific hook in TargetLowering.
    This patch maps all memory constraints to InlineAsm::Constraint_m so there is no functional change at this point. It just proves that using these previously unused bits in the encoding of the flag word doesn't break anything.
    The next patch will make each target preserve the current mapping of everything to Constraint_m for itself while changing the target independent implementation of the hook to return Constraint_Unknown appropriately. Each target will then be adapted in separate patches to use appropriate Constraint_* values.
    PR22883 was caused by the matching operands copying the whole of the operand flags for the matched operand. This included the constraint id which needed to be replaced with the operand number. This has been fixed with a conversion function. Following on from this, matching operands also used the operand number as the constraint id. This has been fixed by looking up the matched operand and taking it from there.
    llvm-svn: 232165
* Revert "r232027 - Add infrastructure for support of multiple memory constraints"Hal Finkel2015-03-121-6/+6
| | | | | | | | | | | | | | | | | | | | | | | | This (r232027) has caused PR22883; so it seems those bits might be used by something else after all. Reverting until we can figure out what else to do. Original commit message: The operand flag word for ISD::INLINEASM nodes now contains a 15-bit memory constraint ID when the operand kind is Kind_Mem. This constraint ID is a numeric equivalent to the constraint code string and is converted with a target specific hook in TargetLowering. This patch maps all memory constraints to InlineAsm::Constraint_m so there is no functional change at this point. It just proves that using these previously unused bits in the encoding of the flag word doesn't break anything. The next patch will make each target preserve the current mapping of everything to Constraint_m for itself while changing the target independent implementation of the hook to return Constraint_Unknown appropriately. Each target will then be adapted in separate patches to use appropriate Constraint_* values. llvm-svn: 232093
* Add infrastructure for support of multiple memory constraints. | Daniel Sanders | 2015-03-12 | 1 | -6/+6
    Summary: The operand flag word for ISD::INLINEASM nodes now contains a 15-bit memory constraint ID when the operand kind is Kind_Mem. This constraint ID is a numeric equivalent to the constraint code string and is converted with a target specific hook in TargetLowering.
    This patch maps all memory constraints to InlineAsm::Constraint_m so there is no functional change at this point. It just proves that using these previously unused bits in the encoding of the flag word doesn't break anything.
    The next patch will make each target preserve the current mapping of everything to Constraint_m for itself while changing the target independent implementation of the hook to return Constraint_Unknown appropriately. Each target will then be adapted in separate patches to use appropriate Constraint_* values.
    Reviewers: hfinkel
    Reviewed By: hfinkel
    Subscribers: hfinkel, jholewinski, llvm-commits
    Differential Revision: http://reviews.llvm.org/D8171
    llvm-svn: 232027
* X86: Optimize address mode matching for FRAME_ALLOC_RECOVER nodes | David Majnemer | 2015-03-05 | 1 | -0/+9
    We know that the absolute symbol will be less than 2GB and thus will always fit.
    llvm-svn: 231389
* X86: Call __main using the SelectionDAG | David Majnemer | 2015-02-21 | 1 | -9/+13
    Synthesizing a call directly using the MI layer would confuse the frame lowering code. This is problematic as frame lowering is highly sensitive to the particularities of calls, etc.
    llvm-svn: 230129
* Prefer SmallVector::append/insert over push_back loops. | Benjamin Kramer | 2015-02-17 | 1 | -5/+2
    Same functionality, but hoists the vector growth out of the loop.
    llvm-svn: 229500
* X86: Canonicalize access to function attributes, NFC | Duncan P. N. Exon Smith | 2015-02-14 | 1 | -2/+1
    Canonicalize access to function attributes to use the simpler API.
        getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
          => getFnAttribute(Kind)
        getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
          => hasFnAttribute(Kind)
    llvm-svn: 229214
* MathExtras: Bring Count(Trailing|Leading)Ones and CountPopulation in line with countTrailingZeros | Benjamin Kramer | 2015-02-12 | 1 | -1/+1
    Update all callers.
    llvm-svn: 228930
* AVX-512: Fixed the "test" operation for i1 type | Elena Demikhovsky | 2015-02-12 | 1 | -19/+2
    Using KORTESTW for comparing an i1 value with zero was wrong since the instruction tests 16 bits. KORTESTW may be used with KSHIFTL+KSHIFTR that clear the 15 upper bits. I removed the (X86cmp i1, 0) pattern and zero-extend i1 to i8 and then use TESTB.
    There are some cases where i1 is in the mask register and the upper bits are already zeroed. Then KORTESTW is the better solution, but it is a subject for optimization. Meanwhile, I'm fixing the correctness issue.
    llvm-svn: 228916
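    Rough illustration of the difference (not taken from the commit; register choices are arbitrary and this is not necessarily the exact sequence the compiler emits):
        kortestw %k0, %k0    # ORs and tests all 16 bits of the mask, so stray upper bits
                             # can give the wrong answer for a single i1
        kmovw    %k0, %eax   # copy the mask to a GPR instead,
        testb    %al, %al    # and test only the low byte holding the zero-extended i1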
* Reuse a bunch of cached subtargets and remove getSubtarget calls without a Function argument. | Eric Christopher | 2015-02-02 | 1 | -6/+4
    llvm-svn: 227814
* [X86] Make isel select the shorter form of jump instructions instead of the long form. | Craig Topper | 2015-01-06 | 1 | -2/+2
    The assembler backend will relax to the long form if necessary. This removes a swap from long form to short form in the MCInstLowering code. Selecting the long form used to be required by the old JIT.
    llvm-svn: 225242
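    For illustration (not from the commit; the condition and label are arbitrary), the two encodings of a conditional jump:
        jne .Ltarget    # short form:  75 rel8      (2 bytes)
        jne .Ltarget    # near form:   0f 85 rel32  (6 bytes)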
* [X86] Clean up whitespace as well as minor coding style | Michael Liao | 2014-12-04 | 1 | -1/+1
    llvm-svn: 223339
* [X86] Lower VSELECT into SHRUNKBLEND when we shrink the bits used into the condition to match a blend. | Quentin Colombet | 2014-11-06 | 1 | -0/+10
    This prevents optimizations that work on VSELECT from performing invalid transformations. Indeed, the optimized condition does not match the vector boolean content that is expected and bad things may happen.
    This patch yields the exact same code on the whole test-suite + specs (-O3 and -O3 -march=core-avx2), it improves one test case (vector-blend.ll) and fixes a bug reduced in vselect-avx.ll.
    <rdar://problem/18819506>
    llvm-svn: 221429
* [X86] 8bit divrem: Improve codegen for AH register extraction. | Ahmed Bougacha | 2014-11-03 | 1 | -25/+38
    For 8-bit divrems where the remainder is used, we used to generate:
        divb   %sil
        shrw   $8, %ax
        movzbl %al, %eax
    That was to avoid an H-reg access, which is problematic mainly because it isn't possible in REX-prefixed instructions.
    This patch optimizes that to:
        divb   %sil
        movzbl %ah, %eax
    To do that, we explicitly extend AH, and extract the L-subreg in the resulting register. The extension is done using the NOREX variants of MOVZX. To support signed operations, MOVSX_NOREX is also added. Further, this introduces a new SDNode type, [us]divrem_ext_hreg, which is then lowered to a sequence containing a single zext (rather than 2).
    Differential Revision: http://reviews.llvm.org/D6064
    llvm-svn: 221176
* [X86] Improve mul w/ overflow codegen, to MUL8+SETO. | Ahmed Bougacha | 2014-10-23 | 1 | -0/+19
    Currently, @llvm.smul.with.overflow.i8 expands to 9 instructions, where 3 are really needed.
    This adds X86ISD::UMUL8/SMUL8 SD nodes, and custom lowers them to MUL8/IMUL8 + SETO.
    i8 is a special case because there are no two/three-operand variants of (I)MUL8, so the first operand and return value need to go in AL/AX.
    Also, we can't write patterns for these instructions: TableGen refuses patterns where output operands don't match SDNode results. In this case, instructions where the output operand is an implicitly defined register.
    A related special case (and FIXME) exists for MUL8 (X86InstrArith.td):
        // FIXME: Used for 8-bit mul, ignore result upper 8 bits.
        // This probably ought to be moved to a def : Pat<> if the
        // syntax can be accepted.
        [(set AL, (mul AL, GR8:$src)), (implicit EFLAGS)]
    Ideally, these go away with UMUL8, but we still need to improve TableGen support of implicit operands in patterns.
    Before this change:
        movsbl %sil, %eax
        movsbl %dil, %ecx
        imull  %eax, %ecx
        movb   %cl, %al
        sarb   $7, %al
        movzbl %al, %eax
        movzbl %ch, %esi
        cmpl   %eax, %esi
        setne  %al
    After:
        movb   %dil, %al
        imulb  %sil
        seto   %al
    Also, remove a made-redundant testcase for PR19858, and enable more FastISel ALU-overflow tests for SelectionDAG too.
    Differential Revision: http://reviews.llvm.org/D5809
    llvm-svn: 220516
* [X86] Don't transform atomic-load-add into an inc/dec when inc/dec is slow | Robin Morisset | 2014-10-08 | 1 | -3/+4
    llvm-svn: 219357
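    The two lowerings in question, for illustration (not from the commit; the memory operand is arbitrary):
        lock incl (%rdi)        # inc/dec form; avoided on subtargets that mark inc/dec as slow
        lock addl $1, (%rdi)    # add/sub form preferred there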
* Cache TargetLowering on SelectionDAGISel and update previous calls to getTargetLowering() with the cached variable. | Eric Christopher | 2014-10-08 | 1 | -7/+6
    llvm-svn: 219284
* [X86] Fix a bug with fetch_add(INT32_MIN) | Robin Morisset | 2014-10-07 | 1 | -0/+9
    Summary: Fix pr21099.
    The pseudocode of what we were doing (spread through two functions) was:
        if (operand.doesNotFitIn32Bits())
          Opc.initializeWithFoo();
        if (operand < 0)
          operand = -operand;
        if (operand.doesFitIn8Bits())
          Opc.initializeWithBar();
        else if (operand.doesFitIn32Bits())
          Opc.initializeWithBlah();
        doStuff(Opc);
    So for operand == INT32_MIN, Opc was never initialized because the operand changes from fitting in 32 bits to not fitting, causing the various bugs/error messages noted by pr21099.
    This patch adds an extra test at the beginning for this case, and an llvm_unreachable to give a better error message if the operand ends up not fitting in 32 bits at the end.
    Test Plan: new test + make check
    Reviewers: jfb
    Subscribers: llvm-commits
    Differential Revision: http://reviews.llvm.org/D5655
    llvm-svn: 219257
* [ISel] Keep matching state consistent when folding during X86 address match | Adam Nemet | 2014-10-03 | 1 | -0/+7
    In the X86 backend, matching an address is initiated by the 'addr' complex pattern and its friends. During this process we may reassociate and-of-shift into shift-of-and (FoldMaskedShiftToScaledMask) to allow folding of the shift into the scale of the address. However as demonstrated by the testcase, this can trigger CSE of not only the shift and the AND which the code is prepared for but also the underlying load node. In the testcase this node is sitting in the RecordedNode and MatchScope data structures of the matcher and becomes a deleted node upon CSE. Returning from the complex pattern function, we try to access it again, hitting an assert because the node is no longer a load even though this was checked before.
    Now obviously changing the DAG this late is bending the rules but I think it makes sense somewhat. Outside of addresses we prefer and-of-shift because it may lead to smaller immediates (FoldMaskAndShiftToScale is an even better example because it creates a non-canonical node). We currently don't recognize addresses during DAGCombiner where arguably this canonicalization should be performed. On the other hand, having this in the matcher allows us to cover all the cases where an address can be used in an instruction.
    I've also talked a little bit to Dan Gohman on llvm-dev who added the RAUW for the new shift node in FoldMaskedShiftToScaledMask. This RAUW is responsible for initiating the recursive CSE on users (http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-September/076903.html) but it is not strictly necessary since the shift is hooked into the visited user. Of course it's safer to keep the DAG consistent at all times (e.g. for accurate number of uses, etc.).
    So rather than changing the fundamentals, I've decided to continue along the previous patches and detect the CSE. This patch installs a very targeted DAGUpdateListener for the duration of a complex-pattern match and updates the matching state accordingly. (Previous patches used HandleSDNode to detect the CSE but that's not practical here). The listener is only installed on X86.
    I tested that there is no measurable overhead due to this while running through the spec2k BC files with llc. The only thing we pay for is the creation of the listener. The callback never ever triggers in spec2k since this is a corner case.
    Fixes rdar://problem/18206171
    llvm-svn: 219009
* [X86] Improve comment | Adam Nemet | 2014-09-16 | 1 | -3/+4
    llvm-svn: 217885