netfilter: nft_set_rbtree: use read spinlock to avoid datapath contention
Author:     Sasha Levin <sashal@kernel.org>
AuthorDate: Fri, 22 Sep 2023 17:01:15 +0000 (19:01 +0200)
Commit:     Salvatore Bonaccorso <carnil@debian.org>
CommitDate: Fri, 29 Sep 2023 04:25:15 +0000 (05:25 +0100)
commit 96b33300fba880ec0eafcf3d82486f3463b4b6da upstream.

rbtree GC does not modify the data structure; instead it collects expired
elements and enqueues a GC transaction. Use a read spinlock instead
to avoid datapath contention while the GC worker is running.

Fixes: f6c383b8c31a ("netfilter: nf_tables: adapt set backend to use GC transaction API")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Gbp-Pq: Topic bugfix/all
Gbp-Pq: Name netfilter-nft_set_rbtree-use-read-spinlock-to-avoid-.patch

net/netfilter/nft_set_rbtree.c

index 535076b4de53dc224acb1eaf831dfe62b90b07b2..cc32e19b4041a96a181adba17d5aa838a31bd084 100644 (file)
@@ -624,8 +624,7 @@ static void nft_rbtree_gc(struct work_struct *work)
        if (!gc)
                goto done;
 
-       write_lock_bh(&priv->lock);
-       write_seqcount_begin(&priv->count);
+       read_lock_bh(&priv->lock);
        for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) {
 
                /* Ruleset has been updated, try later. */
@@ -672,8 +671,7 @@ dead_elem:
                nft_trans_gc_elem_add(gc, rbe);
        }
 try_later:
-       write_seqcount_end(&priv->count);
-       write_unlock_bh(&priv->lock);
+       read_unlock_bh(&priv->lock);
 
        if (gc)
                nft_trans_gc_queue_async_done(gc);
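
For illustration only, below is a minimal userspace C sketch of the locking
pattern this patch adopts: the datapath lookup and the GC scan both take the
read side of an rwlock (pthread_rwlock_t standing in here for the kernel's
rwlock_t with read_lock_bh()/write_lock_bh()), while only a path that
actually mutates the tree takes the write side. All types and function names
below are hypothetical and not taken from nft_set_rbtree.c; a linked list
stands in for the rbtree to keep the sketch short.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

struct elem {
	struct elem *next;
	time_t expires;
};

static pthread_rwlock_t tree_lock = PTHREAD_RWLOCK_INITIALIZER;
static struct elem *head;

/* Datapath lookup: read side only, may run concurrently with the GC scan. */
static bool lookup(time_t now)
{
	bool found = false;

	pthread_rwlock_rdlock(&tree_lock);
	for (struct elem *e = head; e; e = e->next) {
		if (e->expires > now) {
			found = true;
			break;
		}
	}
	pthread_rwlock_unlock(&tree_lock);
	return found;
}

/*
 * GC scan: only reads the structure, so the read lock suffices; expired
 * elements are merely queued for later removal, mirroring how the patched
 * worker enqueues them on an async GC transaction instead of unlinking
 * them in place.
 */
static void gc_scan(time_t now)
{
	pthread_rwlock_rdlock(&tree_lock);
	for (struct elem *e = head; e; e = e->next) {
		if (e->expires <= now)
			printf("queue expired elem %p for async removal\n",
			       (void *)e);
	}
	pthread_rwlock_unlock(&tree_lock);
}

/* Actual removal still mutates the tree, so it takes the write lock. */
static void remove_expired(time_t now)
{
	pthread_rwlock_wrlock(&tree_lock);
	for (struct elem **pp = &head; *pp; ) {
		if ((*pp)->expires <= now)
			*pp = (*pp)->next; /* unlink; stack-allocated here */
		else
			pp = &(*pp)->next;
	}
	pthread_rwlock_unlock(&tree_lock);
}

int main(void)
{
	time_t now = time(NULL);
	struct elem live = { .next = NULL, .expires = now + 60 };
	struct elem dead = { .next = &live, .expires = now - 1 };

	head = &dead;
	gc_scan(now);        /* queues "dead" without touching the tree */
	remove_expired(now); /* unlinks "dead" under the write lock */
	printf("lookup after gc: %s\n", lookup(now) ? "hit" : "miss");
	return 0;
}

This also explains why the diff drops the write_seqcount_begin()/end() pair:
the seqcount exists so lockless readers can detect a concurrent writer, and
since the GC scan no longer modifies the tree, it has nothing to signal.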