Differential privacy (DP) allows the quantification of privacy loss when the
data of individuals is subjected to algorithmic processing such as machine
learning, as well as the provision of objective privacy guarantees. However,
while techniques such as individual Rényi DP (RDP) allow for granular,
per-person privacy accounting, few works have investigated the impact of each
input feature on the individual’s privacy loss. Here we extend the view of
individual RDP by introducing a new concept we call partial sensitivity, which
leverages symbolic automatic differentiation to determine the influence of each
input feature on the gradient norm of a function. We experimentally evaluate
our approach on queries over private databases, where we obtain a feature-level
contribution of private attributes to the DP guarantee of individuals.
Furthermore, we explore our findings in the context of neural network training
on synthetic data by investigating the partial sensitivity of input pixels on
an image classification task.
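As a minimal illustration of the quantity described above, the sketch below computes the derivative of a per-example gradient norm with respect to each input feature. It is written in Python with JAX for brevity and uses tracing-based automatic differentiation, whereas the approach described above relies on symbolic automatic differentiation; the toy loss function, parameter values, and record are hypothetical, not taken from the paper.

```python
import jax
import jax.numpy as jnp

def loss(theta, x):
    # Hypothetical per-example loss for a single individual's record x;
    # any differentiable function of (parameters, input) works here.
    return (jnp.dot(theta, x) - 1.0) ** 2

def grad_norm(theta, x):
    # L2 norm of the parameter gradient for one record: the quantity
    # whose per-feature influence partial sensitivity measures.
    g = jax.grad(loss, argnums=0)(theta, x)
    return jnp.linalg.norm(g)

# Partial sensitivity (sketch): differentiate the gradient norm with
# respect to the input, yielding one value per input feature.
partial_sensitivity = jax.grad(grad_norm, argnums=1)

theta = jnp.array([0.5, -0.3, 0.8])   # toy parameters
x = jnp.array([1.0, 2.0, 0.5])        # toy private record
print(partial_sensitivity(theta, x))  # per-feature influence on the norm
```

In this reading, features with a large (absolute) value contribute more to the individual's gradient norm, and hence to their DP guarantee, than features with a value near zero.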
