| author | Craig Topper <craig.topper@intel.com> | 2019-07-31 22:43:08 +0000 |
|---|---|---|
| committer | Craig Topper <craig.topper@intel.com> | 2019-07-31 22:43:08 +0000 |
| commit | b51dc64063e6ef01f457a3e6e8453126302bde10 (patch) | |
| tree | 425ebfd3adb41fc46e97a9d1681db88244c81a19 /llvm/lib | |
| parent | c724215a70038c02efa06695719e4a384ba2ace2 (diff) | |
| download | bcm5719-llvm-b51dc64063e6ef01f457a3e6e8453126302bde10.tar.gz bcm5719-llvm-b51dc64063e6ef01f457a3e6e8453126302bde10.zip | |
[X86] Add DAG combine to fold any_extend_vector_inreg+truncstore to an extractelement+store
We have custom code that ignores the normal promoting type legalization on less-than-128-bit vector types like v4i8 in order to emit pavgb, paddusb, and psubusb, since we don't have equivalent instructions on a larger element type like v4i32. If such an operation appears before a store, we can be left with an any_extend_vector_inreg followed by a truncstore after type legalization. When the truncstore isn't legal, it will normally be decomposed into shuffles and a non-truncating store; a later combine then removes the any_extend_vector_inreg and shuffle, leaving just the store. On AVX-512 the truncstore is legal, so we don't decompose it, and we had no combine to fix it.
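Schematically, for a v4i8 truncating store the combine rewrites the DAG roughly as follows (a pseudocode sketch in SelectionDAG-style notation; node numbering and the `...` producer are illustrative):

```
; Before (AVX-512, where the v4i32 -> v4i8 truncstore is legal):
t0: v16i8 = ...
t1: v4i32 = any_extend_vector_inreg t0
t2: ch    = truncstore<v4i8> t1, ptr

; After the combine (32-bit store case):
t0: v16i8 = ...
t1: v4i32 = bitcast t0                  ; reinterpret the 128-bit input
t2: i32   = extract_vector_elt t1, 0    ; low 32 bits hold the v4i8 payload
t3: ch    = store t2, ptr               ; plain non-truncating scalar store
```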
This patch adds a new DAG combine to detect this case and emit either an extract_store for 64-bit stores or an extractelement+store for 32-bit and 16-bit stores. This makes the AVX-512 codegen match the AVX2 codegen in these situations. I'm restricting this to when -x86-experimental-vector-widening-legalization is false: when we're widening, we're unlikely to create this any_extend_vector_inreg+truncstore combination, so we should be able to remove this code when we flip the default. I would like to flip the default soon, but I need to investigate some performance regressions it's causing in our branch that I wasn't seeing on trunk.
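For context, the kind of source code whose custom lowering produces the any_extend_vector_inreg+truncstore pattern is a rounded average of a handful of unsigned bytes (a v4i8 pavgb candidate). The function name and scalar loop below are illustrative, not taken from the patch or its tests:

```c
#include <stdint.h>

/* Rounded average of four unsigned bytes -- the sub-128-bit vector
 * operation (v4i8) that X86's custom lowering turns into pavgb.
 * The 4-byte store of `out` is where the truncstore appears. */
void avg_v4i8(const uint8_t *a, const uint8_t *b, uint8_t *out) {
  for (int i = 0; i < 4; ++i)
    out[i] = (uint8_t)((a[i] + b[i] + 1) >> 1); /* matches pavgb rounding */
}
```

When the vectorizer recognizes this idiom on a 4-element loop, the result is a v4i8 value that must be stored as 32 bits, which is exactly the 32-bit extractelement+store case handled by the new combine.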
Differential Revision: https://reviews.llvm.org/D65538
llvm-svn: 367488
Diffstat (limited to 'llvm/lib')
| -rw-r--r-- | llvm/lib/Target/X86/X86ISelLowering.cpp | 35 |
1 file changed, 35 insertions, 0 deletions
```diff
diff --git a/llvm/lib/Target/X86/X86ISelLowering.cpp b/llvm/lib/Target/X86/X86ISelLowering.cpp
index 159ce60af45..4331cb9c231 100644
--- a/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -40179,6 +40179,41 @@ static SDValue combineStore(SDNode *N, SelectionDAG &DAG,
                          MVT::v16i8, St->getMemOperand());
   }
 
+  // Look for a truncating store to a less than 128 bit vector that has been
+  // truncated from an any_extend_inreg from a 128 bit vector with the same
+  // element size. We can use a 64/32/16-bit extractelement and store that.
+  // Disabling this when widening legalization is in effect since the trunc
+  // store would have been unlikely to be created in that case. Only doing this
+  // when truncstore is legal since it would otherwise be decomposed below and
+  // then combined away.
+  if (St->isTruncatingStore() && TLI.isTruncStoreLegal(VT, StVT) &&
+      StoredVal.getOpcode() == ISD::ANY_EXTEND_VECTOR_INREG &&
+      StoredVal.getValueType().is128BitVector() &&
+      !ExperimentalVectorWideningLegalization) {
+    EVT OrigVT = StoredVal.getOperand(0).getValueType();
+    if (OrigVT.is128BitVector() &&
+        OrigVT.getVectorElementType() == StVT.getVectorElementType()) {
+      unsigned StoreSize = StVT.getSizeInBits();
+      assert((128 % StoreSize == 0) && "Unexpected store size!");
+      MVT IntVT = MVT::getIntegerVT(StoreSize);
+      MVT CastVT = MVT::getVectorVT(IntVT, 128 / StoreSize);
+      StoredVal = DAG.getBitcast(CastVT, StoredVal.getOperand(0));
+      // Use extract_store for the 64-bit case to support 32-bit targets.
+      if (IntVT == MVT::i64) {
+        SDVTList Tys = DAG.getVTList(MVT::Other);
+        SDValue Ops[] = {St->getChain(), StoredVal, St->getBasePtr()};
+        return DAG.getMemIntrinsicNode(X86ISD::VEXTRACT_STORE, dl, Tys, Ops,
+                                       IntVT, St->getMemOperand());
+      }
+
+      // Otherwise just use an extract and store.
+      StoredVal = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, IntVT, StoredVal,
+                              DAG.getIntPtrConstant(0, dl));
+      return DAG.getStore(St->getChain(), dl, StoredVal, St->getBasePtr(),
+                          St->getMemOperand());
+    }
+  }
+
   // Optimize trunc store (of multiple scalars) to shuffle and store.
   // First, pack all of the elements in one place. Next, store to memory
   // in fewer chunks.
```

