Reputation: 11
I am trying to generate adversarial images using the FastGradientMethod attack in the ART library on a YOLOv5 object detection model. However, I am running into an error when attempting to generate the adversarial image using the fgm.generate() method. Specifically, I receive the following error message:
Traceback (most recent call last):
  File "C:\Users\ben\OneDrive\pc\work\yolov5\FSGM.py", line 36, in <module>
    adversarial_image_fgm = fgm.generate(image_np)
  File "C:\Users\ben\OneDrive\pc\work\yolov5.venv\lib\site-packages\art\attacks\evasion\fast_gradient.py", line 312, in generate
    y_array = self.estimator.predict(x, batch_size=self.batch_size)
  File "C:\Users\ben\OneDrive\pc\work\yolov5.venv\lib\site-packages\art\estimators\object_detection\pytorch_object_detector.py", line 375, in predict
    predictions = self._model(image_tensor_list)
  File "C:\Users\ben\OneDrive\pc\work\yolov5.venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\ben\OneDrive\pc\work\yolov5.venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\ben/.cache\torch\hub\ultralytics_yolov5_master\models\common.py", line 689, in forward
    im = im.transpose((1, 2, 0))  # reverse dataloader .transpose(2, 0, 1)
TypeError: transpose() received an invalid combination of arguments - got (tuple), but expected one of:
 * (int dim0, int dim1)
 * (name dim0, name dim1)
import os
import torch
import torchvision.transforms as transforms
from torchvision.transforms import ToPILImage
import numpy as np
from PIL import Image
from art.estimators.object_detection import PyTorchObjectDetector
from art.attacks.evasion import FastGradientMethod, ProjectedGradientDescent
model = torch.hub.load('ultralytics/yolov5','yolov5n')
# Initialize ART object detection estimator
estimator = PyTorchObjectDetector(model=model)
test_image_folder = 'data/KITTI/test/images'
output_folder = 'adversarial_images'
# Get list of image file names
image_filenames = os.listdir(test_image_folder)
image_filenames = [filename for filename in image_filenames if filename.endswith('.png')]
# Define the transformations to be applied to the images
# Loop over images and generate adversarial images
for filename in image_filenames:
    # Load image
    image_path = os.path.join(test_image_folder, filename)
    image = Image.open(image_path).convert('RGB')
    image_np = np.array(image)
    # Create the FastGradientMethod attack object
    fgm = FastGradientMethod(estimator=estimator, eps=0.01, targeted=False)
    # Generate adversarial image using the FastGradientMethod attack
    adversarial_image_fgm = fgm.generate(image_np)
    # Create the ProjectedGradientDescent attack object
    pgd = ProjectedGradientDescent(estimator=estimator, eps=0.01, eps_step=0.005, max_iter=100, targeted=False)
    # Generate adversarial image using the ProjectedGradientDescent attack
    adversarial_image_pgd = pgd.generate(x=image_np)
    # Convert the adversarial images to numpy arrays
    adversarial_image_fgm = adversarial_image_fgm.squeeze(0).permute(1, 2, 0).numpy()
    adversarial_image_pgd = adversarial_image_pgd.squeeze(0).permute(1, 2, 0).numpy()
    # Convert pixel values from [0,1] to [0,255] and cast to uint8
    adversarial_image_fgm = np.uint8(adversarial_image_fgm * 255)
    adversarial_image_pgd = np.uint8(adversarial_image_pgd * 255)
    # Create output folder if it doesn't exist
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)
    # Save the adversarial images in the output folder
    Image.fromarray(adversarial_image_fgm).save(os.path.join(output_folder, 'fgm_' + filename))
    Image.fromarray(adversarial_image_pgd).save(os.path.join(output_folder, 'pgd_' + filename))
The images are from the KITTI dataset in .jpg format.
I have tried transposing the image or resizing it, but it always throws the same error (a sketch of one such attempt is below). Could this be an incompatibility between the PyTorch YOLOv5 model and the ART library?
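For reference, one of those attempts looked roughly like this (an illustrative sketch, not the exact code; variable names match the loop above, and it fails with the same traceback):

import numpy as np

# Illustrative preprocessing attempt: float CHW batch in [0, 1] before the attack
image_np = np.asarray(image, dtype=np.float32) / 255.0   # HWC, values in [0, 1]
image_np = image_np.transpose(2, 0, 1)                   # HWC -> CHW
image_np = np.expand_dims(image_np, axis=0)              # add a batch dimension -> NCHW
adversarial_image_fgm = fgm.generate(x=image_np)         # still raises the same TypeError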
Upvotes: 1
Views: 977
Reputation: 614
This is quite old at this point, but I stumbled across a very similar error when setting up a sequence of data augmentations (using the albumentations library) for a PyTorch project, so maybe this response will help someone else.
In my case, the error ultimately stemmed from the image being a torch.Tensor instead of a numpy.ndarray. Both of these have transpose methods, but the call signatures differ, and the resulting stack trace looks similar to your issue.
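Here is a minimal sketch (my own, not from the original post) of the signature difference behind this kind of error:

import numpy as np
import torch

arr = np.zeros((3, 5, 8), dtype=np.float32)
arr.transpose((1, 2, 0))   # fine: numpy.ndarray.transpose accepts a tuple of axes

t = torch.zeros(3, 5, 8)
t.transpose(1, 2)          # fine: torch.Tensor.transpose swaps exactly two dims
t.permute(1, 2, 0)         # the tensor equivalent of the numpy call above
t.transpose((1, 2, 0))     # TypeError: got (tuple), but expected (int dim0, int dim1)

So when library code calls transpose with a tuple of axes, it is implicitly assuming a numpy array; handing it a torch.Tensor produces exactly this TypeError.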
Practical example (which fails!):
import albumentations as alb
from albumentations.pytorch import ToTensorV2
import numpy as np
# albumentations expects "channel-last" format
x = np.random.randint(0, 255, size=(5,8,3)).astype(np.float32)
# Note that applying the `ToTensorV2` transform results in an error
transforms = alb.Compose([
    ToTensorV2(),
    alb.Transpose(p=1.0)
])
img = transforms(image=x) # <-- fail!
This fails with:
TypeError: transpose() received an invalid combination of arguments - got (list), but expected one of:
* (int dim0, int dim1)
* (name dim0, name dim1)
If you swap the order of those (Transpose first, then ToTensorV2), then it works as expected.
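For completeness, a sketch of the working order (mirroring the failing snippet above):

import albumentations as alb
from albumentations.pytorch import ToTensorV2
import numpy as np

x = np.random.randint(0, 255, size=(5, 8, 3)).astype(np.float32)

# Transpose runs on the channel-last ndarray first, then ToTensorV2 converts to a tensor
transforms = alb.Compose([
    alb.Transpose(p=1.0),
    ToTensorV2()
])

img = transforms(image=x)["image"]
print(img.shape)   # torch.Size([3, 8, 5]): H and W swapped, then moved to channel-first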
When I went to the albumentations source, the function states that it expects a numpy.ndarray. It's also worth noting that transpose is called with [1, 0, 2], which implies the function expects a channel-last format. Hence, even if the snippet above worked, placing ToTensorV2 first converts to channel-FIRST format, and a [1, 0, 2] permutation of those axes would be weird/unexpected given what a transpose operation is normally meant to do.
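To make the layout point concrete, here is a small illustration (my own) of what a [1, 0, 2] transpose does to each layout:

import numpy as np

hwc = np.zeros((5, 8, 3))   # channel-last, what albumentations expects
chw = np.zeros((3, 5, 8))   # channel-first, what ToTensorV2 produces

print(hwc.transpose((1, 0, 2)).shape)   # (8, 5, 3): a normal height/width swap
print(chw.transpose((1, 0, 2)).shape)   # (5, 3, 8): channels land in the middle, not a meaningful transpose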
Upvotes: 0