Provable Gradient Editing of Deep Neural Networks
Tao, Zhe and Thakur, Aditya V. Advances in Neural Information Processing Systems: Annual Conference on Neural Information Processing Systems (NeurIPS), 2025
In explainable AI, DNN gradients are used to interpret predictions; in safety-critical control systems, gradients can encode safety constraints; in scientific-computing applications, gradients can encode physical invariants. While recent work on provable editing of DNNs has focused on input-output constraints, the problem of enforcing hard constraints on DNN gradients remains unaddressed. We present ProGrad, the first efficient approach for editing the parameters of a DNN to provably enforce hard constraints on its gradients.
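To make the notion of a hard gradient constraint concrete, below is a minimal PyTorch sketch that checks a hypothetical monotonicity constraint, df/dx_0 >= 0, on a small network's input gradients at sample points. The network, the points, and the constraint are illustrative assumptions, not from the paper; this is only the verification side of the problem, whereas ProGrad's contribution is editing the parameters so that such a check provably passes.

import torch

torch.manual_seed(0)
# A small illustrative network; ProGrad itself is not implemented here.
model = torch.nn.Sequential(
    torch.nn.Linear(2, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1)
)

# Points at which the (hypothetical) constraint must hold.
points = torch.randn(4, 2, requires_grad=True)

# Input gradients d f(x) / d x for each point, via one backward pass
# (the rows are independent, so summing the outputs is sound).
(grads,) = torch.autograd.grad(model(points).sum(), points)

# Hard constraint: the output is monotone in feature 0, i.e. df/dx_0 >= 0.
print("constraint holds at all points:", bool((grads[:, 0] >= 0).all()))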
@inproceedings{NeurIPS2025,
  author    = {Tao, Zhe and Thakur, Aditya V.},
  title     = {Provable Gradient Editing of Deep Neural Networks},
  booktitle = {Advances in Neural Information Processing Systems: Annual Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2025},
  note      = {To appear}
}