| author | wenxu <wenxu@ucloud.cn> | 2020-01-07 17:16:06 +0800 |
| --- | --- | --- |
| committer | Saeed Mahameed <saeedm@mellanox.com> | 2020-01-22 22:28:29 -0800 |
| commit | 6d65bc64e232896251daba7c43933f0f35859bc3 (patch) | |
| tree | 883079010ce80daee36f31e9aff7b8de8605ba10 /drivers/net/ethernet/mellanox | |
| parent | e15cf98ee8a76472144a19a24ca73d26fefa5237 (diff) | |
net/mlx5e: Add mlx5e_flower_parse_meta support
In flowtable offload, all the devices in a flowtable share the same
flow_block, so an offloaded rule is installed on every device in the
flowtable. This is not correct.
It is not a problem when there are only two devices in the flowtable:
a rule whose ingress and egress are on the same device can simply be
rejected by the driver. But with more than two devices in the
flowtable, wrong rules end up installed in hardware.
For example, with three devices in an offloaded flowtable (dev_a,
dev_b and dev_c) and a rule with ingress from dev_a and egress to dev_b:
 - The rule is installed on dev_a.
 - The rule fails to install on dev_b, because its ingress and egress
   are on the same device.
 - The rule is also installed on dev_c. This is not correct.
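Why the same rule reaches every device: each netdev bound to the
flowtable registers its own callback on the shared flow_block, and rule
setup walks the whole callback list. The sketch below only illustrates
that assumption; it is a simplification, not the actual flow offload
core code. The structures and typedef (struct flow_block, struct
flow_block_cb, flow_setup_cb_t) come from include/net/flow_offload.h,
while sketch_setup_on_all_devs() is a made-up name.

```c
/*
 * Illustrative sketch only (not the actual flow offload core): how a
 * rule offered on a shared flow_block is handed to every bound driver.
 */
#include <linux/list.h>
#include <linux/netdevice.h>
#include <net/flow_offload.h>

static int sketch_setup_on_all_devs(struct flow_block *block,
				    struct flow_cls_offload *cls_flower)
{
	struct flow_block_cb *block_cb;
	int err;

	/* Every device in the flowtable added its own callback to the
	 * shared block, so the same rule is offered to every driver,
	 * including devices that are neither its ingress nor its egress.
	 */
	list_for_each_entry(block_cb, &block->cb_list, list) {
		err = block_cb->cb(TC_SETUP_CLSFLOWER, cls_flower,
				   block_cb->cb_priv);
		if (err)
			return err;
	}

	return 0;
}
```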
Flowtable offload avoids this case by restricting the ingress device
with FLOW_DISSECTOR_KEY_META, so the mlx5e driver should also support
parsing FLOW_DISSECTOR_KEY_META.
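The restriction works by having the offload front end fill the META key
with an exact-match ingress ifindex; the driver side (the diff below)
then rejects anything that is not a full-mask match on its own device.
What follows is a minimal sketch of the front-end half, assuming only
struct flow_dissector_key_meta from include/net/flow_dissector.h;
sketch_restrict_to_ingress() is an invented helper for illustration,
not the upstream flowtable offload code.

```c
/*
 * Hedged sketch (not upstream code): restricting a rule to its ingress
 * device via FLOW_DISSECTOR_KEY_META.
 */
#include <linux/netdevice.h>
#include <net/flow_dissector.h>

static void sketch_restrict_to_ingress(struct flow_dissector_key_meta *key,
				       struct flow_dissector_key_meta *mask,
				       const struct net_device *ingress_dev)
{
	/* Exact match on the ingress ifindex; the all-ones mask is what
	 * mlx5e_flower_parse_meta() in the diff below insists on.
	 */
	key->ingress_ifindex = ingress_dev->ifindex;
	mask->ingress_ifindex = 0xffffffff;
}
```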
Signed-off-by: wenxu <wenxu@ucloud.cn>
Acked-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Diffstat (limited to 'drivers/net/ethernet/mellanox')
-rw-r--r-- | drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 39 |
1 file changed, 39 insertions, 0 deletions
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 26f559b453dc..4f184c770a45 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1810,6 +1810,40 @@ static void *get_match_headers_value(u32 flags,
 			    outer_headers);
 }
 
+static int mlx5e_flower_parse_meta(struct net_device *filter_dev,
+				   struct flow_cls_offload *f)
+{
+	struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+	struct netlink_ext_ack *extack = f->common.extack;
+	struct net_device *ingress_dev;
+	struct flow_match_meta match;
+
+	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_META))
+		return 0;
+
+	flow_rule_match_meta(rule, &match);
+	if (match.mask->ingress_ifindex != 0xFFFFFFFF) {
+		NL_SET_ERR_MSG_MOD(extack, "Unsupported ingress ifindex mask");
+		return -EINVAL;
+	}
+
+	ingress_dev = __dev_get_by_index(dev_net(filter_dev),
+					 match.key->ingress_ifindex);
+	if (!ingress_dev) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Can't find the ingress port to match on");
+		return -EINVAL;
+	}
+
+	if (ingress_dev != filter_dev) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Can't match on the ingress filter port");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int __parse_cls_flower(struct mlx5e_priv *priv,
 			      struct mlx5_flow_spec *spec,
 			      struct flow_cls_offload *f,
@@ -1830,6 +1864,7 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 	u16 addr_type = 0;
 	u8 ip_proto = 0;
 	u8 *match_level;
+	int err;
 
 	match_level = outer_match_level;
 
@@ -1873,6 +1908,10 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 		       spec);
 	}
 
+	err = mlx5e_flower_parse_meta(filter_dev, f);
+	if (err)
+		return err;
+
 	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
 		struct flow_match_basic match;