path: root/llvm/test/Analysis
Commit message (Author, Date, Files changed, Lines -deleted/+added)
...
* [SCEV] Don't expand Wrap predicate using inttoptr in ni addrspaces (Keno Fischer, 2018-07-26, 2 files, -8/+62)
    Summary: In non-integral address spaces, we're not allowed to introduce
    inttoptr/ptrtoint instructions. Instead, we need to expand any pointer
    arithmetic as geps on the base pointer. Luckily this is a common task for
    SCEV, so all we have to do here is hook up the corresponding helper
    function and add a test case.

    Fixes PR38290

    Reviewers: sanjoy
    Differential Revision: https://reviews.llvm.org/D49832
    llvm-svn: 338073
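    For illustration (not from the commit message), a minimal sketch of the
    two expansion styles, with hypothetical values in a hypothetical
    non-integral addrspace(4):

        ; Disallowed in a non-integral address space:
        %i    = ptrtoint i8 addrspace(4)* %base to i64
        %a    = add i64 %i, %offset
        %ptr1 = inttoptr i64 %a to i8 addrspace(4)*

        ; What the SCEV expander emits instead - pointer arithmetic as a
        ; gep on the base pointer:
        %ptr2 = getelementptr i8, i8 addrspace(4)* %base, i64 %offset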
* Fix llvm::ComputeNumSignBits with some operations and llvm.assume (Stanislav Mekhanoshin, 2018-07-25, 1 file, -0/+109)
    Currently ComputeNumSignBits takes an early exit while processing some
    of the operations (add, sub, mul, and select). This prevents the
    function from using the AssumptionCacheTracker when one is passed.

    Differential Revision: https://reviews.llvm.org/D49759
    llvm-svn: 337936
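    For illustration (not from the commit message), a hypothetical case the
    fix enables - the recursion into the add operands can now consult the
    assumption cache:

        declare void @llvm.assume(i1)

        define i32 @f(i32 %x) {
        entry:
          ; Assume %x is in [-64, 63], i.e. it fits in 7 bits and so has
          ; 26 sign bits as an i32.
          %lo = icmp sge i32 %x, -64
          call void @llvm.assume(i1 %lo)
          %hi = icmp sle i32 %x, 63
          call void @llvm.assume(i1 %hi)
          ; Before the fix, ComputeNumSignBits exited early on the add and
          ; never reached the assumes.
          %a = add i32 %x, 1
          ret i32 %a
        }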
* [SCEV] Add zext(C + x + ...) -> D + zext(C-D + x + ...)<nuw><nsw> transform (Roman Tereshin, 2018-07-24, 1 file, -0/+81)
    if the top level addition in (D + (C-D + x + ...)) could be proven to
    not wrap, where the choice of D also maximizes the number of trailing
    zeroes of (C-D + x + ...), ensuring homogeneous behaviour of the
    transformation and better canonicalization of such expressions.

    This enables better canonicalization of expressions like

        1 + zext(5 + 20 * %x + 24 * %y)  and
            zext(6 + 20 * %x + 24 * %y)

    which both get transformed to

        2 + zext(4 + 20 * %x + 24 * %y)

    This pattern is common in address arithmetic, and the transformation
    makes it easier for passes like LoadStoreVectorizer to prove that 2 or
    more memory accesses are consecutive and to optimize (vectorize) them.

    Reviewed By: mzolotukhin
    Differential Revision: https://reviews.llvm.org/D48853
    llvm-svn: 337859
* [SCEV] Fix buggy behavior in getAddExpr with truncs (Max Kazantsev, 2018-07-19, 1 file, -0/+34)
    SCEV tries to constant-fold arguments of trunc operands in SCEVAddExpr,
    and when it does that, it passes wrong flags into the recursion. It is
    only valid to pass flags that are proved for the narrow type into a
    computation in the wider type if we can prove that the trunc instruction
    doesn't actually change the value. If it did lose some meaningful bits,
    we may end up proving wrong no-wrap flags for the sum of the arguments
    of the trunc. In the provided test we end up with `nuw` where it
    shouldn't be because of this bug. The solution is to conservatively pass
    `SCEV::FlagAnyWrap`, which is always a valid thing to do.

    Reviewed By: sanjoy
    Differential Revision: https://reviews.llvm.org/D49471
    llvm-svn: 337435
* [NFC] Make a test more neat (Max Kazantsev, 2018-07-18, 1 file, -2/+5)
    llvm-svn: 337379
* Re-apply "[SCEV] Strengthen StrengthenNoWrapFlags (reapply r334428)."Tim Shen2018-07-1316-38/+42
| | | | llvm-svn: 337075
* DivergenceAnalysis: added debug output (Tim Renouf, 2018-07-13, 1 file, -1/+4)
    Summary: This commit does two things:
    1. modified the existing DivergenceAnalysis::dump() so it dumps the
       whole function with added DIVERGENT: annotations;
    2. added code to do that dump if the appropriate -debug-only option
       is on.

    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D47700
    Change-Id: Id97b605aab1fc6f5a11a20c58a99bbe8c565bf83
    llvm-svn: 336998
* Simplify recursive launder.invariant.group and strip (Piotr Padlewski, 2018-07-12, 1 file, -6/+4)
    Summary: This patch is crucial for proving equality of laundered/stripped
    pointers, e.g.:

        bool foo(A *a) {
            return a == std::launder(a);
        }

    Clang with -fstrict-vtable-pointers will emit something like:

        define dso_local zeroext i1 @_Z3fooP1A(%struct.A* %a) {
        entry:
          %c = bitcast %struct.A* %a to i8*
          %call = tail call i8* @llvm.launder.invariant.group.p0i8(i8* %c)
          %0 = bitcast %struct.A* %a to i8*
          %1 = tail call i8* @llvm.strip.invariant.group.p0i8(i8* %0)
          %2 = tail call i8* @llvm.strip.invariant.group.p0i8(i8* %call)
          %cmp = icmp eq i8* %1, %2
          ret i1 %cmp
        }

    Because %2 can be replaced with @llvm.strip.invariant.group(%0), and %2
    and %1 will produce the same value (because strip is readnone), we can
    replace the compare with true.

    Reviewers: rsmith, hfinkel, majnemer, amharc, kuhar
    Subscribers: llvm-commits, hiraditya
    Differential Revision: https://reviews.llvm.org/D47423
    llvm-svn: 336963
* [TargetTransformInfo] Add pow2 analysis for scalar constants (Simon Pilgrim, 2018-07-11, 2 files, -144/+144)
    Add ConstantInt analysis to getOperandInfo so we get more realistic
    div/rem expansion costs, comparable to the vector costs.
    llvm-svn: 336827
* llvm: Add support for "-fno-delete-null-pointer-checks" (Manoj Gupta, 2018-07-09, 1 file, -0/+32)
    Summary: Support for this option is needed for building the Linux
    kernel. This is a very frequently requested feature by kernel
    developers. More details: https://lkml.org/lkml/2018/4/4/601

    GCC option description for -fdelete-null-pointer-checks: assume that
    programs cannot safely dereference null pointers, and that no code or
    data element resides at address zero. -fno-delete-null-pointer-checks
    is the inverse of this, implying that null pointer dereferencing is not
    undefined.

    This feature is implemented in LLVM IR in this CL as the function
    attribute "null-pointer-is-valid"="true" (under review at D47894). The
    CL updates several passes that assumed null pointer dereferencing is
    undefined to not optimize when the "null-pointer-is-valid"="true"
    attribute is present.

    Reviewers: t.p.northover, efriedma, jyknight, chandlerc, rnk, srhines,
    void, george.burgess.iv
    Reviewed By: efriedma, george.burgess.iv
    Subscribers: eraman, haicheng, george.burgess.iv, drinkcat, theraven,
    reames, sanjoy, xbolva00, llvm-commits
    Differential Revision: https://reviews.llvm.org/D47895
    llvm-svn: 336613
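    A sketch of the attribute in IR (the function is hypothetical; the
    attribute string is the one named above):

        ; With the attribute, dereferencing null is not assumed to be
        ; undefined behavior, so passes must keep this load.
        define i32 @read_address_zero() #0 {
          %v = load i32, i32* null
          ret i32 %v
        }

        attributes #0 = { "null-pointer-is-valid"="true" }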
* [CostModel][X86] Add SREM/UREM general and constant costs (PR38056) (Simon Pilgrim, 2018-07-07, 1 file, -216/+748)
    We penalize general SDIV/UDIV costs but don't do the same for
    SREM/UREM. This patch makes general vector SREM/UREM 20x as costly as
    scalar, the same approach as we use for SDIV/UDIV. The patch also
    extends the existing SDIV/UDIV constant costs to SREM/UREM - at the
    moment this means the additional cost of a MUL+SUB (see D48975).

    Differential Revision: https://reviews.llvm.org/D48980
    llvm-svn: 336486
* Revert "[SCEV] Strengthen StrengthenNoWrapFlags (reapply r334428)."Tim Shen2018-07-0616-42/+38
| | | | | | This reverts commit r336140. Our tests shows that LSR assert fails with it. llvm-svn: 336473
* Revert "[InstCombine] Delay foldICmpUsingKnownBits until simple transforms ↵Max Kazantsev2018-07-061-1/+1
| | | | | | are done" llvm-svn: 336410
* [CostModel][X86] Add UDIV/UREM by pow2 costs (Simon Pilgrim, 2018-07-05, 2 files, -174/+421)
    Normally InstCombine would have simplified these to SRL/AND
    instructions, but we may still see these during SLP vectorization etc.
    llvm-svn: 336371
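    As a reminder, the InstCombine simplification referred to above (a
    sketch with hypothetical values):

        %d = udiv i32 %x, 16   ; -> lshr i32 %x, 4
        %r = urem i32 %x, 16   ; -> and  i32 %x, 15

    SLP may query the cost model on the udiv/urem forms before InstCombine
    has rewritten them.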
* [InstCombine] Delay foldICmpUsingKnownBits until simple transforms are done (Max Kazantsev, 2018-07-03, 1 file, -1/+1)
    This patch changes the order of transforms in InstCombineCompares to
    avoid performing transforms based on ranges, which produce complex bit
    arithmetic, before simpler things (like folding with constants) are
    done. See PR37636 for the motivating example.

    Differential Revision: https://reviews.llvm.org/D48584
    Reviewed By: spatel, lebedev.ri
    llvm-svn: 336172
* [SCEV] Strengthen StrengthenNoWrapFlags (reapply r334428). (Tim Shen, 2018-07-02, 16 files, -38/+42)
    Summary: Comment on Transforms/LoopVersioning/incorrect-phi.ll: with
    the change, SCEV is able to prove that the loop doesn't wrap-self (due
    to zext i16 to i64), disabling the entire loop versioning pass. Removed
    the zext and just use i64.

    Reviewers: sanjoy
    Subscribers: jlebar, hiraditya, javed.absar, bixia, llvm-commits
    Differential Revision: https://reviews.llvm.org/D48409
    llvm-svn: 336140
* [CostModel][X86] Add cost tests for fp rounding intrinsics (Simon Pilgrim, 2018-07-02, 1 file, -0/+519)
    Add cost tests for fp ceil, floor, nearbyint, rint and trunc.
    llvm-svn: 336122
* Implement strip.invariant.group (Piotr Padlewski, 2018-07-02, 1 file, -2/+19)
    Summary: This patch introduces a new intrinsic - strip.invariant.group -
    that was described in the RFC: Devirtualization v2.

    Reviewers: rsmith, hfinkel, nlopes, sanjoy, amharc, kuhar
    Subscribers: arsenm, nhaehnle, JDevlieghere, hiraditya, xbolva00,
    llvm-commits
    Differential Revision: https://reviews.llvm.org/D47103
    Co-authored-by: Krzysztof Pszeniczny <krzysztof.pszeniczny@gmail.com>
    llvm-svn: 336073
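    A minimal usage sketch (the wrapper function is hypothetical; the
    intrinsic signature follows the example elsewhere in this log):

        define i8* @strip(i8* %p) {
        entry:
          ; readnone: equal inputs always produce equal results, which is
          ; what makes comparisons of stripped pointers foldable.
          %q = call i8* @llvm.strip.invariant.group.p0i8(i8* %p)
          ret i8* %q
        }

        declare i8* @llvm.strip.invariant.group.p0i8(i8*)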
* Fix overconfident assert in ScalarEvolution::isImpliedViaMerge (Roman Shirokiy, 2018-06-29, 1 file, -0/+127)
    We can have an AddRec whose loop has many predecessors. This changes an
    assert to an early return.

    Differential Revision: https://reviews.llvm.org/D48766
    llvm-svn: 335965
* Add a PhiValuesAnalysis pass to calculate the underlying values of phis (John Brawn, 2018-06-28, 3 files, -0/+502)
    This pass is being added in order to make the information available to
    BasicAA, which can't do caching of this information itself, but
    possibly this information may be useful for other passes. Incorporates
    code based on Daniel Berlin's implementation of Tarjan's algorithm.

    Differential Revision: https://reviews.llvm.org/D47893
    llvm-svn: 335857
* [AArch64] Add custom lowering for v4i8 trunc store (Adhemerval Zanella, 2018-06-27, 1 file, -1/+1)
    This patch adds a custom trunc store lowering for v4i8 vector types.
    Since there is no v.4b register, the v4i8 is promoted to v4i16 (v.4h)
    and the default action for v4i8 is to extract each element and issue 4
    byte stores. A better strategy is to extend the promoted v4i16 to v8i16
    (with undef elements) and extract and store the word lane that
    represents the v4i8 subvector.

    The construction:

        define void @foo(<4 x i16> %x, i8* nocapture %p) {
          %0 = trunc <4 x i16> %x to <4 x i8>
          %1 = bitcast i8* %p to <4 x i8>*
          store <4 x i8> %0, <4 x i8>* %1, align 4, !tbaa !2
          ret void
        }

    can be optimized from:

        umov w8, v0.h[3]
        umov w9, v0.h[2]
        umov w10, v0.h[1]
        umov w11, v0.h[0]
        strb w8, [x0, #3]
        strb w9, [x0, #2]
        strb w10, [x0, #1]
        strb w11, [x0]
        ret

    to:

        xtn v0.8b, v0.8h
        str s0, [x0]
        ret

    The patch also adjusts the memory cost for autovectorization, so the C
    code:

        void foo (const int *src, int width, unsigned char *dst)
        {
            for (int i = 0; i < width; i++)
                *dst++ = *src++;
        }

    can be vectorized to:

        .LBB0_4: // %vector.body
                 // =>This Inner Loop Header: Depth=1
            ldr  q0, [x0], #16
            subs x12, x12, #4 // =4
            xtn  v0.4h, v0.4s
            xtn  v0.8b, v0.8h
            st1  { v0.s }[0], [x2], #4
            b.ne .LBB0_4

    instead of byte operations.
    llvm-svn: 335735
* [DA] Delinearise AddRecs if we can prove they don't wrap (David Green, 2018-06-25, 1 file, -1/+35)
    We can prove that some delinearized subscripts do not wrap around to
    become negative from the fact that they come from inbounds geps of
    load/store locations. This helps improve the delinearisation in cases
    where we can't prove that they are non-negative from SCEV alone.

    Differential Revision: https://reviews.llvm.org/D48481
    llvm-svn: 335481
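    For illustration (not from the commit message), the shape of access
    this helps with, using hypothetical names:

        ; From C like "A[i][j] = 0" with A an n x m array of i32:
        %row = mul nsw i64 %i, %m
        %idx = add nsw i64 %row, %j
        %ptr = getelementptr inbounds i32, i32* %A, i64 %idx
        store i32 0, i32* %ptr
        ; Delinearization recovers subscripts {%i, %j}; the inbounds gep
        ; on a store location is what lets DA conclude %j doesn't wrap
        ; around to a negative value.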
* [CostModel][AArch64] Add some initial costs for SK_Select and SK_PermuteSingleSrc (Simon Pilgrim, 2018-06-22, 1 file, -6/+6)
    AArch64 was only setting costs for SK_Transpose, which meant that many
    of the simpler shuffles (e.g. SK_Select and SK_PermuteSingleSrc for
    larger vector elements) were being severely overestimated by the
    default shuffle expansion.

    This patch adds costs to help improve SLP performance and avoid a
    regression in reductions introduced by D48174. I'm not very
    knowledgeable about AArch64 shuffle lowering, so I've kept the extra
    costs to a minimum - someone who knows this code can add extra costs
    which should improve vectorization a lot more.

    Differential Revision: https://reviews.llvm.org/D48172
    llvm-svn: 335329
* [SCEV] Re-apply r335197 (with Polly fixes). (Tim Shen, 2018-06-21, 1 file, -0/+42)
    Summary: This initiates a discussion on changing Polly accordingly
    while re-applying r335197 (D48338). I have never worked on Polly. The
    proposed change to param_div_div_div_2.ll is not educated, but just
    patterns that match the output. All LLVM files are already reviewed in
    D48338.

    Reviewers: jdoerfert, bollu, efriedma
    Subscribers: jlebar, sanjoy, hiraditya, llvm-commits, bixia
    Differential Revision: https://reviews.llvm.org/D48453
    llvm-svn: 335292
* AMDGPU: Convert test cases to the dimension-aware intrinsics (Nicolai Haehnle, 2018-06-21, 1 file, -39/+39)
    Summary: Also explicitly port over some tests in llvm.amdgcn.image.*
    that were missing. Some tests are removed because they no longer apply
    (i.e. explicitly testing building an address vector via insertelement).
    This is in preparation for the eventual removal of the old-style
    intrinsics.

    Some additional notes:
    - constant-address-space-32bit.ll: change some GCN-NEXT to GCN because
      the instruction schedule was subtly altered
    - insert_vector_elt.ll: the old test didn't actually test anything,
      because %tmp1 was not used; remove the load, because it doesn't work
      (because of the amdgpu_ps calling convention? In any case, it's
      orthogonal to what the test claims to be testing.)

    Change-Id: Idfa99b6512ad139e755e82b8b89548ab08f0afcf
    Reviewers: arsenm, rampitec
    Subscribers: MatzeB, qcolombet, kzhuravl, wdng, yaxunl, dstuttard, tpr,
    t-tye, javed.absar, llvm-commits
    Differential Revision: https://reviews.llvm.org/D48018
    llvm-svn: 335229
* [DA] Enable -da-delinearize by default (David Green, 2018-06-21, 14 files, -31/+578)
    This enables -da-delinearize in Dependence Analysis for delinearizing
    array accesses into multiple dimensions. This can help to increase the
    power of Dependence Analysis on multi-dimensional arrays and prevent
    having to fall back to the slower and less accurate MIV tests. It adds
    static checks on the bounds of the arrays to ensure that one dimension
    doesn't overflow into another, and brings our code in line with our
    tests.

    Differential Revision: https://reviews.llvm.org/D45872
    llvm-svn: 335217
* [X86][AVX] Reduce v4f64/v4i64 shuffle costs (PR37882) (Simon Pilgrim, 2018-06-21, 3 files, -61/+47)
    These were being overly cautious for costs of one/two op general
    shuffles - VSHUFPD doesn't have to replicate the same shuffle in both
    lanes like VSHUFPS does.
    llvm-svn: 335216
* Revert "[SCEV] Improve zext(A /u B) and zext(A % B)"Tim Shen2018-06-211-42/+0
| | | | | | This reverts commit r335197, as some bots are not happy. llvm-svn: 335198
* [SCEV] Improve zext(A /u B) and zext(A % B) (Tim Shen, 2018-06-21, 1 file, -0/+42)
    Summary: Try to match udiv and urem patterns, and sink the zext down to
    the leaves. I'm not entirely sure why some unrelated tests change, but
    the added <nsw>s seem right.

    Reviewers: sanjoy
    Subscribers: jlebar, hiraditya, bixia, llvm-commits
    Differential Revision: https://reviews.llvm.org/D48338
    llvm-svn: 335197
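    The general shape of the rewrite (a sketch; the precise matched
    patterns are in D48338). For unsigned division and remainder of values
    narrower than the result type, zero-extension commutes with the
    operation:

        zext(A /u B)  -->  zext(A) /u zext(B)
        zext(A % B)   -->  zext(A) % zext(B)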
* [NFC][SCEV] Add tests related to bit masking (PR37793) (Roman Lebedev, 2018-06-20, 6 files, -0/+551)
    Summary: Related to https://bugs.llvm.org/show_bug.cgi?id=37793,
    https://reviews.llvm.org/D46760#1127287

    We'd like to do this canonicalization: https://rise4fun.com/Alive/Gmc
    But it is currently restricted by rL155136 / rL155362, which says:

        // This is a constant shift of a constant shift. Be careful about
        // hiding shl instructions behind bit masks. They are used to
        // represent multiplies by a constant, and it is important that
        // simple arithmetic expressions are still recognizable by scalar
        // evolution.
        //
        // The transforms applied to shl are very similar to the transforms
        // applied to mul by constant. We can be more aggressive about
        // optimizing right shifts.
        //
        // Combinations of right and left shifts will still be optimized in
        // DAGCombine where scalar evolution no longer applies.

    I think these tests show that for *constants*, SCEV has no issues with
    that canonicalization.

    Reviewers: mkazantsev, spatel, efriedma, sanjoy
    Reviewed By: mkazantsev
    Subscribers: sanjoy, javed.absar, llvm-commits, stoklund, bixia
    Differential Revision: https://reviews.llvm.org/D48229
    llvm-svn: 335101
* Revert "[SCEV] Add nuw/nsw to mul ops in StrengthenNoWrapFlags"Sanjoy Das2018-06-1914-35/+23
| | | | | | | | | | | | | | This reverts r334428. It incorrectly marks some multiplications as nuw. Tim Shen is working on a proper fix. Original commit message: [SCEV] Add nuw/nsw to mul ops in StrengthenNoWrapFlags where safe. Summary: Previously we would add them for adds, but not multiplies. llvm-svn: 335016
* [MSSA] Print more optimization information (George Burgess IV, 2018-06-14, 1 file, -4/+4)
    In particular, when asked to print a MemoryAccess, we'll now print
    where defs are optimized to, and we'll print optimized access types.
    This patch also introduces an operator<< to make printing AliasResults
    easier.

    Patch by Juneyoung Lee!
    Differential Revision: https://reviews.llvm.org/D47860
    llvm-svn: 334760
* [SCEV] Simplify zext/trunc idiom that appears when handling bitmasks. (Justin Lebar, 2018-06-14, 2 files, -2/+16)
    Summary: Specifically, we transform

        zext(2^K * (trunc X to iN)) to iM
          ->  (2^K * (zext(trunc X to i{N-K}) to iM))<nuw>

    This is helpful because pulling the 2^K out of the zext allows further
    optimizations.

    Reviewers: sanjoy
    Subscribers: hiraditya, llvm-commits, timshen
    Differential Revision: https://reviews.llvm.org/D48158
    llvm-svn: 334737
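    A concrete instance of the formula, with K=2, N=8, M=32 (values chosen
    for illustration):

        zext(4 * (trunc %x to i8)) to i32
          -->  (4 * (zext (trunc %x to i6) to i32))<nuw>

        ; 4 * (x mod 256) mod 256 == 4 * (x mod 64), and 4 * (x mod 64)
        ; is below 256, so the widened multiply cannot wrap.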
* [SCEV] Simplify trunc-of-add/mul to add/mul-of-trunc under more circumstances. (Justin Lebar, 2018-06-14, 5 files, -8/+34)
    Summary: Previously we would do this simplification only if it did not
    introduce any new truncs (excepting new truncs which replace other cast
    ops). This change weakens this condition: if the number of truncs stays
    the same, but we're able to transform trunc(X + Y) to X + trunc(Y),
    that's still simpler, and it may open up additional transformations.

    While we're here, also clean up some duplicated code.

    Reviewers: sanjoy
    Subscribers: hiraditya, llvm-commits
    Differential Revision: https://reviews.llvm.org/D48160
    llvm-svn: 334736
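    A sketch of the weakened condition with hypothetical operands (%a is
    i32, %b is i64) - the trunc count stays at one, but it moves past the
    add:

        trunc ((zext %a to i64) + %b) to i32  -->  %a + (trunc %b to i32)

        ; Truncation distributes over addition, and trunc(zext %a) folds
        ; back to %a, so no extra trunc is introduced.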
* [CostModel][AArch64] Add cost tests for ALTERNATE/SELECT style shuffle masks (Simon Pilgrim, 2018-06-14, 1 file, -0/+95)
    Precursor to fixing a regression with the SLP vectorizer for supporting
    SELECT shuffles (vs the current ALTERNATE).
    llvm-svn: 334714
* [CostModel] Recognise REVERSE shuffle mask if the elements come from the second src (Simon Pilgrim, 2018-06-14, 1 file, -7/+7)
    llvm-svn: 334698
* [CostModel][X86] Test showing failure to recognise REVERSE shuffle mask if the elements come from the second src (Simon Pilgrim, 2018-06-13, 1 file, -0/+47)
    llvm-svn: 334623
* [CostModel] Recognise BROADCAST shuffle mask if the elements come from the second src (Simon Pilgrim, 2018-06-13, 1 file, -7/+7)
    llvm-svn: 334620
* [CostModel][X86] Test showing failure to recognise BROADCAST shuffle mask if the elements come from the second src (Simon Pilgrim, 2018-06-13, 1 file, -0/+47)
    llvm-svn: 334616
* [SimplifyIndVars] Ignore dead users (Max Kazantsev, 2018-06-13, 2 files, -0/+7)
    IndVarSimplify sometimes makes transforms based on users that are
    trivially dead. In particular, if DCE wasn't run before it, there may
    be a dead sext/zext in the loop that will trigger widening transforms,
    however it makes no sense to do it. This patch teaches IndVarSimplify
    to ignore the most trivial cases of that.

    Differential Revision: https://reviews.llvm.org/D47974
    Reviewed By: sanjoy
    llvm-svn: 334567
* [CostModel] Replace ShuffleKind::SK_Alternate with ShuffleKind::SK_Select (PR33744) (Simon Pilgrim, 2018-06-12, 1 file, -128/+96)
    As discussed on PR33744, this patch relaxes ShuffleKind::SK_Alternate,
    which requires shuffle masks to only match an alternating pattern from
    its 2 sources:

        e.g. v4f32: <0,5,2,7> or <4,1,6,3>

    This seems far too restrictive, as most SIMD hardware implements it
    using a general blend/bit-select instruction, so it is replaced with
    SK_Select, permitting elements from either source as long as they are
    inline:

        e.g. v4f32: <0,5,2,7>, <4,1,6,3>, <0,1,6,7>, <4,1,2,3> etc.

    This initial patch just updates the name and the cost model shuffle
    mask analysis; later patch reviews will update SLP to better utilise
    this - it still limits itself to SK_Alternate style patterns.

    Differential Revision: https://reviews.llvm.org/D47985
    llvm-svn: 334513
* [CostModel] Treat Identity shuffle masks as zero cost (Simon Pilgrim, 2018-06-12, 2 files, -92/+56)
    As discussed on D47985, identity shuffle masks should probably be free.
    I've limited this to the case where the input and output types all
    match - but we could probably accept all cases.

    Differential Revision: https://reviews.llvm.org/D47986
    llvm-svn: 334506
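    For reference, the shape of an identity mask that is now free (types
    chosen for illustration):

        ; Lane i reads element i of the first source and the types match:
        ; a no-op shuffle.
        %id = shufflevector <4 x float> %a, <4 x float> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 3>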
* [CostModel][X86] Add extra Identity shuffle mask cost tests (D47986) (Simon Pilgrim, 2018-06-12, 1 file, -0/+59)
    llvm-svn: 334486
* Fix incorrect CHECK-LABEL (Tim Shen, 2018-06-11, 1 file, -1/+1)
    llvm-svn: 334434
* [SCEV] Add transform zext((A * B * ...)<nuw>) --> (zext(A) * zext(B) * ...)<nuw>. (Justin Lebar, 2018-06-11, 1 file, -0/+31)
    Reviewers: sanjoy
    Subscribers: hiraditya, llvm-commits
    Differential Revision: https://reviews.llvm.org/D48041
    llvm-svn: 334429
* [SCEV] Add nuw/nsw to mul ops in StrengthenNoWrapFlags where safe. (Justin Lebar, 2018-06-11, 13 files, -21/+36)
    Summary: Previously we would add them for adds, but not multiplies.

    Reviewers: sanjoy
    Subscribers: llvm-commits, hiraditya
    Differential Revision: https://reviews.llvm.org/D48038
    llvm-svn: 334428
* [SCEV] Canonicalize "A /u C1 /u C2" to "A /u (C1*C2)".Tim Shen2018-06-111-0/+28
| | | | | | | | | | | | Summary: FWIW InstCombine already folds this. Also avoid the case where C1*C2 overflows. Reviewers: sunfish, sanjoy Subscribers: hiraditya, bixia, llvm-commits Differential Revision: https://reviews.llvm.org/D47965 llvm-svn: 334425
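    A concrete instance with hypothetical constants, including the
    overflow guard:

        (%a /u 4) /u 8  -->  %a /u 32    ; 4 * 8 == 32 fits the type

        ; Not applied when C1*C2 overflows: for i8, (%a /u 32) /u 16 stays
        ; as-is, since 32 * 16 == 512 does not fit in i8.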
* [CostModel][X86] Add 'select' style shuffle costs tests (PR33744) (Simon Pilgrim, 2018-06-09, 1 file, -2/+324)
    llvm-svn: 334351
* [SCEV] Look through zero-extends in howFarToZero (Krzysztof Parzyszek, 2018-06-08, 1 file, -0/+45)
    An expression like

        (zext i2 {(trunc i32 (1 + %B) to i2),+,1}<%while.body> to i32)

    will become zero exactly when the nested value becomes zero in its
    type. Strip injective operations from the input value in howFarToZero
    to make the value simpler.

    Differential Revision: https://reviews.llvm.org/D47951
    llvm-svn: 334318
* [BPI] Apply invoke heuristic before loop branch heuristic (Artur Pilipenko, 2018-06-08, 2 files, -0/+67)
    Currently the loop branch heuristic is applied before the invoke
    heuristic, which makes us overestimate the probability of the unwind
    destination of invokes inside loops. This in turn makes us grossly
    underestimate the frequencies of loops with invokes.

    Reviewed By: skatkov, vsk
    Differential Revision: https://reviews.llvm.org/D47371
    llvm-svn: 334285