Post by account_disabled on Feb 11, 2024 22:19:59 GMT -8
Most deep learning models store parameter values as 32-bit (4-byte) floating-point numbers. According to the researchers' tests, an attacker can hide up to 3 bytes of malware in each parameter without significantly affecting its value. To infect a deep learning model, the attacker splits the malware into 3-byte chunks and embeds each chunk into a parameter. To deliver the malware to a target, the attacker can publish the infected neural network on one of the many online locations that host deep learning models, such as GitHub or TorchHub. A more sophisticated variant is a supply-chain attack, in which the infected model is distributed through automatic updates to software installed on the target device. Once the infected model reaches the victim, a companion program extracts the payload and executes it.
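The 3-bytes-per-parameter scheme described above can be sketched in a few lines of numpy. This is a minimal illustration, not the researchers' actual tool: the function names are hypothetical, and a real attack would operate on an actual model's weight tensors rather than a bare array. Each little-endian float32 occupies 4 bytes; the sketch overwrites the low 3 and leaves the high (sign/exponent) byte untouched.

```python
import numpy as np

def embed_payload(params: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the low 3 bytes of each little-endian float32
    parameter, leaving the high (sign/exponent) byte untouched."""
    assert params.dtype == np.float32
    assert len(payload) <= 3 * params.size, "payload larger than capacity"
    buf = bytearray(params.tobytes())
    for i in range(0, len(payload), 3):
        chunk = payload[i:i + 3]
        base = (i // 3) * 4            # each float32 occupies 4 bytes
        buf[base:base + len(chunk)] = chunk
    return np.frombuffer(bytes(buf), dtype=np.float32).reshape(params.shape)

def extract_payload(params: np.ndarray, length: int) -> bytes:
    """Recover `length` bytes previously hidden with embed_payload."""
    assert params.dtype == np.float32
    raw = params.tobytes()
    chunks = (raw[j * 4:j * 4 + 3] for j in range((length + 2) // 3))
    return b"".join(chunks)[:length]
```

The round-trip works because the bit patterns are copied verbatim; only the perturbation of each parameter's low mantissa bits (and, at most, the exponent's lowest bit) affects the model's behavior.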
Hiding malware in convolutional neural networks
To verify the feasibility of EvilModel, the researchers tested it on several convolutional neural networks (CNNs). CNNs make a good testbed because they are quite large, often comprising dozens of layers and millions of parameters. Their architectures are also diverse, mixing different layer types (fully connected, convolutional) and different regularization techniques (batch normalization, skip connections, pooling, etc.), which makes it possible to evaluate the impact of embedding malware in different configurations. In addition, CNNs are widely used in computer-vision applications, which makes them a prime target for bad actors. Many pre-trained CNNs are ready to be integrated into applications without any changes, and many enterprises use them without necessarily knowing in depth how they work.
The researchers first tried embedding malware into AlexNet, a popular CNN that helped renew interest in deep learning in 2012. They embedded 26.8 megabytes of malware into the model while keeping its accuracy within 1 percent of the clean version.
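A back-of-envelope calculation shows why a 26.8 MB payload fits comfortably. Assuming roughly 61 million parameters for AlexNet (a commonly cited figure, not stated in the post itself), the 3-bytes-per-parameter scheme gives far more capacity than the payload needs:

```python
# Capacity estimate: 3 bytes hidden per 32-bit parameter.
n_params = 61_000_000          # approximate AlexNet parameter count (assumed)
capacity_bytes = 3 * n_params
capacity_mb = capacity_bytes / (1024 ** 2)
payload_mb = 26.8              # payload size reported in the tests
print(f"capacity = {capacity_mb:.1f} MB, payload = {payload_mb} MB")
```

In other words, the reported payload uses only a small fraction of the theoretical hiding capacity, which is consistent with the observation that accuracy degrades only once much more data is embedded.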
When they increased the volume of malware data, accuracy started to decrease significantly.
Securing the machine learning pipeline
Since malware scanners cannot detect malicious payloads embedded in deep learning models, the only countermeasure is to destroy the malware itself. The payload maintains its integrity only as long as its bytes remain intact. Therefore, if the recipient of the neural network does not freeze the infected layers but retrains the model, the parameter values change and the embedded data is destroyed. Even one epoch of training can be enough to destroy any malware embedded in a deep learning model. However, most developers use pre-trained models as-is, retraining only when they want to adapt them for another application. This means that, alongside data poisoning and other security issues, malware-infected neural networks are a real threat to the future of deep learning.
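The fragility of the payload under retraining can be demonstrated with a small simulation (again a numpy sketch with hypothetical names, not the researchers' code): a tiny additive perturbation of every weight, standing in for a single gradient update, is enough to scramble the hidden bytes.

```python
import numpy as np

rng = np.random.default_rng(0)
params = rng.standard_normal(1000).astype(np.float32)

# Embed a payload, 3 bytes per float32 parameter (low bytes only).
payload = b"MALWARE-PAYLOAD"
buf = bytearray(params.tobytes())
for i in range(0, len(payload), 3):
    chunk = payload[i:i + 3]
    buf[(i // 3) * 4:(i // 3) * 4 + len(chunk)] = chunk
infected = np.frombuffer(bytes(buf), dtype=np.float32)

def read_back(p: np.ndarray, n: int) -> bytes:
    """Collect the low 3 bytes of each float32 until n bytes are read."""
    raw = p.tobytes()
    return b"".join(raw[j * 4:j * 4 + 3] for j in range((n + 2) // 3))[:n]

# One simulated fine-tuning step: a small additive update to every weight.
updated = infected + rng.normal(scale=1e-4, size=infected.shape).astype(np.float32)

print(read_back(infected, len(payload)))   # payload intact before the update
print(read_back(updated, len(payload)))    # scrambled after the update
```

Because a typical weight update is far larger than the spacing between adjacent float32 values, the low mantissa bytes carrying the payload are almost certain to change, which is exactly why even one epoch of training destroys the embedded malware.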