path: root/mlir/lib/Dialect/QuantOps/Utils
Commit history (each entry: commit message, author, date, files changed, lines -removed/+added):
* Adjust License.txt file to use the LLVM license (Mehdi Amini, 2019-12-23, 3 files, -39/+12)
  PiperOrigin-RevId: 286906740
* NFC: Cleanup non-conforming usages of namespaces. (River Riddle, 2019-12-18, 2 files, -33/+27)
  * Fixes use of anonymous namespace for static methods.
  * Uses explicit qualifiers (mlir::) instead of wrapping the definition with the namespace.
  PiperOrigin-RevId: 286222654
* Fix minor spelling tweaks (NFC) (Kazuaki Ishizaki, 2019-10-20, 1 file, -1/+1)
  Closes tensorflow/mlir#177
  PiperOrigin-RevId: 275692653
* Quantize attribute values by per-axis quantization parameters (Feng Liu, 2019-09-19, 2 files, -8/+55)
  A new converter with per-axis quantization parameters is added to quantize a dense elements attribute. For each slice along the quantization axis, it creates a uniform quantized value converter with that slice's scale and zero point and quantizes the values in the slice. The current implementation doesn't handle sparse elements attributes.
  PiperOrigin-RevId: 270121986
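The per-slice behavior described in the commit above can be illustrated with a minimal standalone sketch. This is not the MLIR converter itself: the function name, the flattened [numSlices x sliceSize] buffer layout, and the int8 storage range are assumptions chosen for illustration.

```cpp
// Illustrative per-axis quantization sketch (not the MLIR converter's API).
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Quantize `values`, laid out as [numSlices x sliceSize] with axis 0 as the
// quantization axis. Each slice gets its own scale and zero point.
std::vector<int8_t> quantizePerAxis(const std::vector<float> &values,
                                    std::size_t numSlices, std::size_t sliceSize,
                                    const std::vector<double> &scales,
                                    const std::vector<int32_t> &zeroPoints,
                                    int32_t qmin = -128, int32_t qmax = 127) {
  std::vector<int8_t> result(values.size());
  for (std::size_t slice = 0; slice < numSlices; ++slice) {
    // Per-axis: the scale / zero point vary from slice to slice.
    double scale = scales[slice];
    int32_t zeroPoint = zeroPoints[slice];
    for (std::size_t i = 0; i < sliceSize; ++i) {
      std::size_t idx = slice * sliceSize + i;
      int32_t q =
          static_cast<int32_t>(std::round(values[idx] / scale)) + zeroPoint;
      result[idx] = static_cast<int8_t>(std::clamp(q, qmin, qmax));
    }
  }
  return result;
}
```

The per-slice scale and zero point are what distinguish per-axis (per-channel) quantization from the per-tensor case, where a single pair is shared by all elements.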
* Remove the constraint that min / max should straddle zero (Feng Liu, 2019-09-10, 1 file, -12/+13)
  Since we nudge the zero point to make sure the nudged zero point is in the range [qmin, qmax], the constraint that rmin / rmax should straddle zero isn't necessary. This also matches the documentation of TensorFlow's FakeQuantWithMinMaxArgs op, where min and max don't need to straddle zero: https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_args
  PiperOrigin-RevId: 268296285
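A minimal sketch of the nudging idea referenced above, assuming the usual fake-quant formulation; the function name and the double/int64_t signature are illustrative and not the actual MLIR utility.

```cpp
// Sketch of zero-point nudging under the standard fake-quant scheme
// (assumption for illustration, not the MLIR implementation).
#include <cmath>
#include <cstdint>
#include <utility>

// Returns {scale, nudgedZeroPoint} for mapping the real range [rmin, rmax]
// onto the storage range [qmin, qmax].
std::pair<double, int64_t> nudgeScaleAndZeroPoint(double rmin, double rmax,
                                                  int64_t qmin, int64_t qmax) {
  double scale = (rmax - rmin) / static_cast<double>(qmax - qmin);
  // The ideal zero point qmin - rmin / scale may be fractional or lie outside
  // [qmin, qmax] when the real range does not straddle zero; rounding and
  // clamping ("nudging") keeps it representable, which is why rmin / rmax
  // no longer need to straddle zero.
  double zeroPointFromMin = static_cast<double>(qmin) - rmin / scale;
  if (zeroPointFromMin < static_cast<double>(qmin))
    return {scale, qmin};
  if (zeroPointFromMin > static_cast<double>(qmax))
    return {scale, qmax};
  return {scale, static_cast<int64_t>(std::round(zeroPointFromMin))};
}
```

For example, with rmin = 1.0, rmax = 5.0 and an int8 range of [-128, 127], the ideal zero point falls below qmin and is clamped to -128, so a real range that lies entirely above zero is still handled.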
* Convert ConstFakeQuantPerAxis to qcast and dcast pair (Feng Liu, 2019-09-10, 1 file, -3/+2)
  This also adds a test of fakeQuantAttrsToType for per-channel fake quant.
  PiperOrigin-RevId: 268260032
* [NFC] Rename ExpressedToUniformQuantizedType to ExpressedToQuantizedType (Feng Liu, 2019-09-09, 1 file, -8/+7)
  PiperOrigin-RevId: 268090906
* Convert per channel fake quant attributes to type (Feng Liu, 2019-09-09, 1 file, -36/+102)
  For per channel fake quant attributes, the returned type should be UniformQuantizedPerAxisType. Currently, this method isn't under test because we haven't added the quant_ConstFakeQuantPerAxis op and the convert method.
  PiperOrigin-RevId: 268084017
* Add tests to verify 0.0 is quantized correctly (Feng Liu, 2019-08-29, 1 file, -1/+1)
  We should consider both signed and narrow_range cases.
  PiperOrigin-RevId: 266167366
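As a brief worked check of why this is expected to hold, assuming the usual affine mapping q(x) = round(x / scale) + zeroPoint: q(0.0) = round(0.0 / scale) + zeroPoint = zeroPoint, and the nudged zero point always lies in [qmin, qmax], so 0.0 is represented exactly in both the signed and narrow_range configurations.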
* Fix the equality check of two floating point values (Feng Liu, 2019-08-28, 1 file, -3/+5)
  PiperOrigin-RevId: 266022088
* Refactor DenseElementsAttr::getValues methods to return full ranges for splats. (River Riddle, 2019-08-11, 1 file, -3/+8)
  The current implementation only returns one element for the splat case, which often comes as a surprise and leads to subtle/confusing bugs. The new behavior iterates over the full range of elements, as defined by the shaped type, by providing the splat value for each iterator index.
  PiperOrigin-RevId: 262756780
* Add an "is_signed" attribute to the quant_ConstFakeQuant op (Feng Liu, 2019-07-19, 1 file, -3/+9)
  Some TensorFlow simulated quantize ops, such as QuantizeAndDequantizeV2Op, have an attribute for the sign of the quantization, so a new attribute is added so that quant_ConstFakeQuant can represent it. The method for converting these attributes to a QuantizedType is updated to handle this new argument.
  PiperOrigin-RevId: 258810290
* Support signed and unsigned quantization types (Feng Liu, 2019-07-16, 1 file, -8/+13)
  This patch adds a new argument to the fakeQuantAttrsToType utility method so it can convert min/max to quantized types with storage types of different signedness.
  PiperOrigin-RevId: 258382538
* Move the emitError/Warning/Remark utility methods out of MLIRContext and into the mlir namespace. (River Riddle, 2019-06-25, 1 file, -2/+2)
  Now that Locations are attributes, they have direct access to the MLIR context. This allows for simplifying error emission by removing unnecessary context lookups.
  PiperOrigin-RevId: 255112791
* Simplify usages of SplatElementsAttr now that it inherits from DenseElementsAttr. (River Riddle, 2019-06-19, 1 file, -37/+1)
  PiperOrigin-RevId: 253910543
* Refactor DenseElementsAttr to support auto-splatting the dense data on construction. (River Riddle, 2019-06-19, 1 file, -1/+1)
  This essentially means that we always auto-detect splat data and only store the minimum amount of data necessary. Support for parsing dense splats, and for removing SplatElementsAttr (now that it is redundant), will come in follow-up CLs.
  PiperOrigin-RevId: 252720561
* Rename VectorOrTensorType to ShapedType (Geoffrey Martin-Noble, 2019-05-20, 2 files, -8/+8)
  This is in preparation for making it also support, and be a parent class of, MemRefType. MemRefs have similar shape/rank/element semantics, and it would be useful to be able to use these same utilities for them. This CL should not change any semantics; it only changes variables, types, string literals, and comments. In follow-up CLs I will prepare all callers to handle MemRef types or remove their dependence on ShapedType. Discussion/rationale in https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/cHLoyfGu8y8
  PiperOrigin-RevId: 248476449
* Move Quantization -> Dialect/QuantOps, FxpMathOps -> Dialect/FxpMathOps. (Stella Laurenzo, 2019-05-20, 3 files, -0/+364)
  Adding the additional layer of directory was discussed offline and matches the Target/ tree. The names match the de facto convention we seem to be following, where the C++ namespace is ^(.+)Ops/$ matched against the directory name. This is in preparation for patching the Quantizer into this tree, which would have been confusing without moving the Quantization dialect to its more proper home. It is left to others to move other dialects if desired.
  Tested: ninja check-mlir
  PiperOrigin-RevId: 248171982