Structural Watermarking to Deep Neural Networks via Network Channel Pruning. (arXiv:2107.08688v1 [cs.CR])

In order to protect the intellectual property (IP) of deep neural networks
(DNNs), many existing DNN watermarking techniques either embed watermarks
directly into the DNN parameters or insert backdoor watermarks by fine-tuning
the DNN parameters; such techniques, however, cannot withstand attacks
that remove watermarks by altering the DNN parameters. In this paper, we
bypass such attacks by introducing a structural watermarking scheme that
utilizes channel pruning to embed the watermark into the host DNN architecture
instead of crafting the DNN parameters. To be specific, during watermark
embedding, we prune the internal channels of the host DNN with the channel
pruning rates controlled by the watermark. During watermark extraction, the
watermark is retrieved by identifying the channel pruning rates from the
architecture of the target DNN model. Because channel pruning inherently
preserves model quality, the performance of the DNN model on its original task
is retained during watermark embedding. Experimental results show that the
proposed scheme allows the embedded watermark to be reliably recovered and
provides a high watermark capacity without sacrificing the usability of the
DNN model. The scheme is also shown to be robust against common transforms and
attacks designed to defeat conventional watermarking approaches.
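The core idea of embedding bits into pruning rates can be illustrated with a toy sketch. This is not the paper's exact algorithm: here a hypothetical codebook maps each 2-bit watermark symbol to a per-layer pruning rate, and extraction recovers the bits by comparing the pruned model's channel counts against the known original architecture and quantizing the observed rate to the nearest codebook entry.

```python
# Toy sketch (assumed scheme, not the paper's exact method): encode the
# watermark as per-layer channel pruning rates from a small codebook, and
# decode it by reading pruning rates back from the pruned architecture.

CODEBOOK = {0.1: (0, 0), 0.2: (0, 1), 0.3: (1, 0), 0.4: (1, 1)}  # rate -> 2 bits
RATES = sorted(CODEBOOK)

def embed(orig_channels, bits):
    """Pick a pruning rate per layer that encodes 2 watermark bits,
    and return the resulting per-layer channel counts after pruning."""
    assert len(bits) == 2 * len(orig_channels)
    pruned = []
    for i, c in enumerate(orig_channels):
        pair = tuple(bits[2 * i: 2 * i + 2])
        rate = next(r for r, b in CODEBOOK.items() if b == pair)
        pruned.append(round(c * (1 - rate)))  # channels kept after pruning
    return pruned

def extract(orig_channels, pruned_channels):
    """Recover the watermark bits from the observed pruning rates."""
    bits = []
    for c, p in zip(orig_channels, pruned_channels):
        rate = 1 - p / c
        nearest = min(RATES, key=lambda r: abs(r - rate))  # absorbs rounding error
        bits.extend(CODEBOOK[nearest])
    return bits

# Example: embed 6 bits into a 3-layer architecture and read them back.
orig = [64, 128, 256]
message = [0, 1, 1, 0, 1, 1]
pruned = embed(orig, message)
assert extract(orig, pruned) == message
```

Because the watermark lives in the channel counts rather than the weight values, parameter-level attacks such as fine-tuning or weight perturbation leave it intact; only a structural change to the architecture could disturb it.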