A team of North Carolina State University researchers recently published a paper highlighting the vulnerability of machine learning (ML) models to side-channel attacks. Specifically, the team used power-based side-channel attacks to extract the secret weights of a Binarized Neural Network (BNN) in a highly parallelized hardware implementation.
“Physical side-channel leaks in neural networks call for a new line of side-channel analysis research because it opens up a new avenue of designing countermeasures tailored for deep learning inference engines,” the researchers wrote. “On a SAKURA-X FPGA board, [our] experiments show that the first-order DPA attacks on [an] unprotected implementation can succeed with only 200 traces.”
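To make the attack style concrete, here is a minimal, hypothetical sketch of first-order DPA (differential power analysis) using a difference-of-means test. It does not reproduce the paper's setup; it simulates a single binarized weight whose inference leaks power proportional to the product of the input and the secret weight, plus noise, and shows how an attacker comparing trace partitions under each weight hypothesis can recover the secret from roughly 200 traces. All names and the leakage model are illustrative assumptions.

```python
import random

random.seed(42)

# Illustrative leakage model (an assumption, not the paper's):
# each inference leaks power proportional to x * w, where x is a
# random +/-1 input and w is the secret binarized weight, plus noise.
SECRET_W = -1
N_TRACES = 200  # the paper reports success with 200 traces

inputs = [random.choice([-1, 1]) for _ in range(N_TRACES)]
traces = [x * SECRET_W + random.gauss(0, 1.0) for x in inputs]

def dpa_guess(inputs, traces):
    """First-order DPA via difference of means: for each weight
    hypothesis, partition the traces by the predicted leak and keep
    the hypothesis that yields the larger mean difference."""
    best_hyp, best_score = None, float("-inf")
    for hyp in (-1, 1):
        high = [t for x, t in zip(inputs, traces) if x * hyp > 0]
        low = [t for x, t in zip(inputs, traces) if x * hyp < 0]
        score = sum(high) / len(high) - sum(low) / len(low)
        if score > best_score:
            best_hyp, best_score = hyp, score
    return best_hyp

print(dpa_guess(inputs, traces))  # recovers the secret weight
```

A real attack targets thousands of weights in parallel hardware and must deal with aligned trace acquisition and countermeasures, but the core statistical test is the same partition-and-compare step shown here.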
According to Jeremy Hsu of IEEE Spectrum, machine learning algorithms that enable smart home devices and smart cars to automatically recognize various types of images or sounds, such as spoken words or music, are among the artificial intelligence (AI) systems “most vulnerable” to such attacks.