PiperOrigin-RevId: 286906740
* Fixes use of anonymous namespaces for static methods.
* Uses explicit qualifiers (mlir::) instead of wrapping the definitions in the namespace.
PiperOrigin-RevId: 286222654
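To make the convention above concrete, here is a minimal hedged sketch (the names are hypothetical, not code from this commit): file-local helpers are marked `static`, and definitions of entities declared in the mlir namespace use an explicit `mlir::` qualifier instead of reopening the namespace around the definition.

```cpp
// Hypothetical illustration of the convention; not code from this commit.

// File-local helper: marked static rather than placed in an anonymous
// namespace.
static int clampToByte(int value) {
  return value < 0 ? 0 : (value > 255 ? 255 : value);
}

// Assume this class is declared in a header inside namespace mlir.
namespace mlir {
struct ExampleQuantizer {
  int quantize(int value) const;
};
} // namespace mlir

// The definition uses an explicit mlir:: qualifier instead of wrapping it
// in `namespace mlir { ... }`.
int mlir::ExampleQuantizer::quantize(int value) const {
  return clampToByte(value);
}
```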
Closes tensorflow/mlir#177
PiperOrigin-RevId: 275692653
A new converter with per-axis quantization parameters is added to quantize a
dense elements attribute. For each slice along the quantization axis, it
creates a uniform quantized value converter with that slice's scale and zero
point and quantizes the values in the slice.
The current implementation doesn't handle sparse elements attributes.
PiperOrigin-RevId: 270121986
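A standalone sketch of the per-axis scheme described above, using plain C++ containers rather than MLIR attribute types (the function and parameter names are illustrative): each slice along the quantization axis is quantized with its own scale and zero point.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Quantize a row-major [numSlices x sliceSize] buffer along axis 0:
// slice s uses scales[s] and zeroPoints[s].
std::vector<int8_t> quantizePerAxis(const std::vector<float> &values,
                                    size_t numSlices, size_t sliceSize,
                                    const std::vector<double> &scales,
                                    const std::vector<int64_t> &zeroPoints) {
  std::vector<int8_t> out(values.size());
  for (size_t s = 0; s < numSlices; ++s) {
    double scale = scales[s];
    int64_t zeroPoint = zeroPoints[s];
    for (size_t i = 0; i < sliceSize; ++i) {
      size_t idx = s * sliceSize + i;
      long q = std::lround(values[idx] / scale) + zeroPoint;
      out[idx] = static_cast<int8_t>(std::clamp<long>(q, -128, 127));
    }
  }
  return out;
}
```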
Since we nudge the zero point to make sure the nudged zero points
fall in the range [qmin, qmax], the constraint that rmin / rmax must
straddle zero isn't necessary.
This also matches the documentation of TensorFlow's FakeQuantWithMinMaxArgs op,
where min and max don't need to straddle zero:
https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_args
PiperOrigin-RevId: 268296285
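A standalone sketch of the zero-point nudging this commit relies on (illustrative names, plain C++ rather than the actual utility): the raw zero point derived from [rmin, rmax] is rounded and clamped into [qmin, qmax], which is why the real range no longer has to straddle zero.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct QuantParams {
  double scale;
  int64_t zeroPoint;
};

// Compute scale and a nudged zero point for a real range [rmin, rmax] mapped
// onto the storage range [qmin, qmax]. Assumes rmax > rmin and qmax > qmin.
QuantParams nudgeQuantParams(double rmin, double rmax, int64_t qmin,
                             int64_t qmax) {
  double scale = (rmax - rmin) / static_cast<double>(qmax - qmin);
  // Raw (possibly out-of-range, non-integral) zero point mapping 0.0 to the
  // storage scale.
  double zeroPointFromMin = qmin - rmin / scale;
  // Nudge: round and clamp into [qmin, qmax] so the zero point is always
  // representable, even when [rmin, rmax] does not straddle zero.
  int64_t nudged = static_cast<int64_t>(std::llround(zeroPointFromMin));
  nudged = std::clamp(nudged, qmin, qmax);
  return {scale, nudged};
}
```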
This also adds a test for fakeQuantAttrsToType with per-channel fake quant.
PiperOrigin-RevId: 268260032
PiperOrigin-RevId: 268090906
For per-channel fake quant attributes, the returned type should be
UniformQuantizedPerAxisType. Currently, this method isn't under test because we
haven't added the quant_ConstFakeQuantPerAxis op and the convert method.
PiperOrigin-RevId: 268084017
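For orientation, a toy data-structure sketch (not MLIR's actual type classes) of why a per-channel fake quant needs a per-axis result type: per-tensor quantization carries a single scale/zero-point pair, while per-axis quantization carries one pair per slice along the quantized dimension.

```cpp
#include <cstdint>
#include <vector>

// Per-tensor: a single (scale, zero point) pair covers the whole value.
struct ToyUniformParams {
  double scale;
  int64_t zeroPoint;
};

// Per-axis: one (scale, zero point) pair per slice along quantizedDimension.
// This is the shape of information a per-channel fake quant produces, hence
// a per-axis quantized type as the conversion result.
struct ToyUniformPerAxisParams {
  std::vector<double> scales;
  std::vector<int64_t> zeroPoints;
  int32_t quantizedDimension;
};
```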
We should consider both signed and narrow_range cases.
PiperOrigin-RevId: 266167366
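A standalone sketch of how the storage bounds typically depend on those two flags (illustrative code following the usual fake-quant convention, not a quote of the utility itself):

```cpp
#include <cstdint>
#include <utility>

// Storage range [qmin, qmax] for a given bit width n:
//   signed,   narrow_range=false : [-2^(n-1),     2^(n-1) - 1]
//   signed,   narrow_range=true  : [-2^(n-1) + 1, 2^(n-1) - 1]
//   unsigned, narrow_range=false : [0,            2^n - 1]
//   unsigned, narrow_range=true  : [1,            2^n - 1]
std::pair<int64_t, int64_t> storageRange(unsigned numBits, bool isSigned,
                                         bool narrowRange) {
  int64_t qmin, qmax;
  if (isSigned) {
    qmin = -(int64_t(1) << (numBits - 1));
    qmax = (int64_t(1) << (numBits - 1)) - 1;
  } else {
    qmin = 0;
    qmax = (int64_t(1) << numBits) - 1;
  }
  if (narrowRange)
    ++qmin; // drop the lowest storage value
  return {qmin, qmax};
}
```

For 8 bits this gives [-128, 127] signed, [-127, 127] signed narrow-range, [0, 255] unsigned, and [1, 255] unsigned narrow-range.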
PiperOrigin-RevId: 266022088
The current implementation only returns one element for the splat case, which often comes as a surprise, leading to subtle and confusing bugs. The new behavior iterates over the full range of elements, as defined by the shaped type, providing the splat value at each iterator index.
PiperOrigin-RevId: 262756780
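A toy sketch of the new iteration contract (not the DenseElementsAttr implementation itself): even when only a single splat value is stored, iteration spans the full element count implied by the shaped type and yields that value at every index.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy stand-in for a dense attribute that may hold a splat.
struct ToyDenseValues {
  std::vector<float> storage; // a single element when splat
  int64_t numElements;        // element count implied by the shaped type
  bool isSplat;

  float getValue(int64_t index) const {
    return isSplat ? storage.front() : storage[static_cast<size_t>(index)];
  }
};

// Iteration covers the full range of elements even for a splat, rather than
// visiting the single stored value once.
float sumAllElements(const ToyDenseValues &values) {
  float total = 0.0f;
  for (int64_t i = 0; i < values.numElements; ++i)
    total += values.getValue(i);
  return total;
}
```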
Some TensorFlow simulated quantize ops, such as QuantizeAndDequantizeV2Op, have
an attribute for the sign of the quantization, so a new attribute is added so
that quant_ConstFakeQuant can represent it.
The method for converting these attributes to a QuantizedType is updated to
handle this new argument.
PiperOrigin-RevId: 258810290
This patch adds a new argument to the fakeQuantAttrsToType utility method, so
it can be used to convert min/max to quantized types with different signed
storage types.
PiperOrigin-RevId: 258382538
into the mlir namespace.
Now that Locations are attributes, they have direct access to the MLIR context. This allows for simplifying error emission by removing unnecessary context lookups.
PiperOrigin-RevId: 255112791
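A hedged sketch against current MLIR headers (not this commit's code) of the simplification being described: since a Location carries its MLIRContext, a diagnostic can be emitted from the location alone, with no separate context argument threaded through.

```cpp
#include "mlir/IR/Diagnostics.h"
#include "mlir/IR/Location.h"

// emitError only needs the location; the MLIRContext is reachable from the
// location itself when required.
mlir::LogicalResult reportUnsupported(mlir::Location loc) {
  return mlir::emitError(loc, "illustrative error message");
}
```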
DenseElementsAttr.
PiperOrigin-RevId: 253910543
construction. This essentially means that we always auto-detect splat data and only store the minimum amount of data necessary. Support for parsing dense splats, and removal of SplatElementsAttr (now that it is redundant), will come in follow-up CLs.
PiperOrigin-RevId: 252720561
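A toy sketch of the construction-time splat detection described above (illustrative code, not the DenseElementsAttr implementation): if every provided element is identical, only that single value is kept.

```cpp
#include <vector>

// Toy dense storage that auto-detects splats at construction time and keeps
// only the minimum amount of data necessary.
class ToyDenseStorage {
public:
  explicit ToyDenseStorage(const std::vector<float> &values) {
    splat = !values.empty();
    for (float v : values)
      splat = splat && (v == values.front());
    if (splat)
      data.assign(1, values.front()); // splat: a single value is stored
    else
      data = values;                  // general case: store every element
  }

  bool isSplat() const { return splat; }

private:
  std::vector<float> data;
  bool splat = false;
};
```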
This is in preparation for making it also support, or be a parent class of, MemRefType. MemRefs have similar shape/rank/element semantics, and it would be useful to be able to use these same utilities for them.
This CL should not change any semantics and only change variables, types, string literals, and comments. In follow-up CLs I will prepare all callers to handle MemRef types or remove their dependence on ShapedType.
Discussion/Rationale in https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/cHLoyfGu8y8
--
PiperOrigin-RevId: 248476449
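As a rough illustration of the shared semantics mentioned above (a toy struct, not MLIR's actual class hierarchy), these are the kinds of shape/rank/element queries that make a common ShapedType-style base useful for both tensors and memrefs:

```cpp
#include <cstdint>
#include <vector>

// The queries a ShapedType-style base factors out: tensors and memrefs both
// have an element type, a rank, and a shape (dynamic dimensions elided here).
struct ShapedLikeInfo {
  std::vector<int64_t> shape; // e.g. {4, 8}

  int64_t getRank() const { return static_cast<int64_t>(shape.size()); }
  int64_t getNumElements() const {
    int64_t count = 1;
    for (int64_t dim : shape)
      count *= dim;
    return count;
  }
};
```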
Adding the additional directory layer was discussed offline and matches the Target/ tree. The names match the de facto convention we seem to be following, where the C++ namespace is ^(.+)Ops/$ matched against the directory name.
This is in preparation for patching the Quantizer into this tree, which would have been confusing without moving the Quantization dialect to its more proper home. It is left to others to move other dialects if desired.
Tested:
ninja check-mlir
--
PiperOrigin-RevId: 248171982