Research


Secure Machine Learning

 
 
Most machine learning (ML) systems are designed under the assumption that test inputs will come from a particular pre-defined training distribution. However, most real-world ML applications, such as autonomous driving and biometric authentication, are likely to receive inputs beyond the training distribution. This is where open-world machine learning becomes necessary: it not only learns the task on the training distribution but also learns to reject out-of-distribution inputs. Although open-world machine learning is the closest analog of real-world ML, there exists almost no rigorous security analysis of it. We are developing novel adaptive adversarial attacks and defenses to move closer to secure open-world machine learning.
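To make the rejection idea concrete, below is a minimal illustrative sketch (not our method) of one common baseline for out-of-distribution detection: thresholding the classifier's maximum softmax probability. The model, the threshold value, and the single-input assumption are all placeholders chosen for illustration.

import torch
import torch.nn.functional as F

# Hypothetical rejection threshold; a real system would calibrate this
# on held-out in-distribution and out-of-distribution data.
REJECT_THRESHOLD = 0.7

def predict_or_reject(model: torch.nn.Module, x: torch.Tensor):
    """Return the predicted class for a single input x, or None if it
    looks out-of-distribution (low maximum softmax confidence)."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)
        confidence, label = probs.max(dim=-1)
    if confidence.item() < REJECT_THRESHOLD:
        return None  # treat as out-of-distribution and reject
    return label.item()

An adaptive adversary, of course, can craft inputs that are misclassified with high confidence, which is exactly why simple confidence thresholds need rigorous security analysis.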
However, embedding security in deep neural networks comes at the cost of an increase in their size, and most state-of-the-art networks already have millions of parameters. This is a major drawback in safety-critical but computationally resource-constrained applications of machine learning. Our goal is to bridge this gap by developing novel solutions for reducing the number of parameters in neural networks without compromising their robustness to adversarial attacks. We are currently focusing on network-pruning-based approaches to achieve this goal.
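As a rough illustration of what network pruning involves, the sketch below removes the smallest-magnitude weights using PyTorch's built-in pruning utilities. The 50% sparsity level and the restriction to linear and convolutional layers are assumptions for the example, not a description of our approach.

import torch
import torch.nn.utils.prune as prune

def magnitude_prune(model: torch.nn.Module, amount: float = 0.5) -> torch.nn.Module:
    """Zero out the globally smallest-magnitude weights in linear/conv layers."""
    parameters_to_prune = [
        (module, "weight")
        for module in model.modules()
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d))
    ]
    prune.global_unstructured(
        parameters_to_prune,
        pruning_method=prune.L1Unstructured,
        amount=amount,
    )
    # Fold the pruning masks back into the weight tensors.
    for module, name in parameters_to_prune:
        prune.remove(module, name)
    return model

The open question our research targets is how such parameter reduction interacts with adversarial robustness, since naive pruning can degrade a network's resistance to attacks.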