anaandreea

Reputation: 1

Export Detectron2 Model

I am trying to export a model from the panoptic-deeplab project that uses detectron2. I want to export it as a .pt file so that I can load it in LibTorch later on.
I want to predict the panoptic segmentation of a single image using the DefaultPredictor, trace the model with torch.jit.trace, and then save it.
I know there is a deploy example available in the detectron2 repo, but unfortunately it is an example that runs inference with a Mask R-CNN model in TorchScript format.
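From what I can tell, that example wraps the model with detectron2.export.TracingAdapter so that torch.jit.trace can cope with detectron2's dict-based inputs and outputs. My rough understanding of the pattern is the sketch below (untested for panoptic segmentation; image and model are placeholders for my own tensor and built model):

import torch
from detectron2.export import TracingAdapter

# image: a (C, H, W) float32 tensor, prepared the same way as in DefaultPredictor
inputs = [{"image": image}]
# TracingAdapter flattens detectron2's dict inputs/outputs into plain tensors
adapter = TracingAdapter(model, inputs)
traced = torch.jit.trace(adapter, adapter.flattened_inputs)
torch.jit.save(traced, "model.pt")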
If anyone knows what I might have done wrong in my approach, has advice on what I should modify in my code, or can tell me how I should export a model that does panoptic segmentation, I would appreciate it.
Let me know if additional information is needed.
Here is the relevant code I have so far:

import torch
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.data import MetadataCatalog
from detectron2.data import transforms as T
from detectron2.modeling import build_model

class DefaultPredictor:
    def __init__(self, cfg):
        self.cfg = cfg.clone()  # cfg can be modified by model
        self.model = build_model(self.cfg)
        self.model.eval()
        if len(cfg.DATASETS.TEST):
            self.metadata = MetadataCatalog.get(cfg.DATASETS.TEST[0])
    
        checkpointer = DetectionCheckpointer(self.model)
        checkpointer.load(cfg.MODEL.WEIGHTS)
    
        self.aug = T.ResizeShortestEdge(
            [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST
        )
    
        self.input_format = cfg.INPUT.FORMAT
        assert self.input_format in ["RGB", "BGR"], self.input_format
    
    def __call__(self, original_image):
        
        with torch.no_grad(): 
            if self.input_format == "RGB":
                original_image = original_image[:, :, ::-1]
            height, width = original_image.shape[:2]
            image = self.aug.get_transform(original_image).apply_image(original_image)
            image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1)).unsqueeze(0)  # NOTE: I add a batch dimension here; the stock predictor keeps (C, H, W)
            print(image)
            print(image.shape)
            image = image.to(self.cfg.MODEL.DEVICE)  # .to() is not in-place, so the result has to be assigned
            inputs = {"image": image, "height": height, "width": width}
            
            predictions = self.model([inputs])[0]
            self.model = self.model.to(self.cfg.MODEL.DEVICE)
    
            traced_model = torch.jit.trace(self.model, image, strict=False)
            torch.jit.save(traced_model, "/home/model.pt")
            return predictions
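
For reference, this is roughly how I call the predictor in my export_model.py (simplified; the image path is a placeholder):

from detectron2.data.detection_utils import read_image

img = read_image("input.jpg", format="BGR")  # loads as an HWC uint8 array in BGR order
predictor = DefaultPredictor(cfg)
predictions = predictor(img)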

As the configuration file I am using panoptic_fpn_R_50_inference_acc_test.yaml, which can be found in the quick_schedules configs of the detectron2 project.
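Roughly, I build the config like this (the config path points into my local detectron2 checkout, so treat it as a placeholder):

from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("detectron2/configs/quick_schedules/panoptic_fpn_R_50_inference_acc_test.yaml")
predictor = DefaultPredictor(cfg)  # the modified predictor from above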
However, I get this error:
  File "/home/panoptic-deeplab/tools_d2/export_model.py", line 236, in <module>
    main()  # pragma: no cover
  File "/home/panoptic-deeplab/tools_d2/export_model.py", line 219, in main
    predictions = predictor(img)
  File "/home/.local/lib/python3.10/site-packages/detectron2/engine/defaults.py", line 327, in __call__
    predictions = self.model([inputs])[0]
  File "/home/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/.local/lib/python3.10/site-packages/detectron2/modeling/meta_arch/panoptic_fpn.py", line 115, in forward
    return self.inference(batched_inputs)
  File "/home/.local/lib/python3.10/site-packages/detectron2/modeling/meta_arch/panoptic_fpn.py", line 154, in inference
    features = self.backbone(images.tensor)
  File "/home/anamudura/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/.local/lib/python3.10/site-packages/detectron2/modeling/backbone/fpn.py", line 139, in forward
    bottom_up_features = self.bottom_up(x)
  File "/home/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/.local/lib/python3.10/site-packages/detectron2/modeling/backbone/resnet.py", line 443, in forward
    assert x.dim() == 4, f"ResNet takes an input of shape (N, C, H, W). Got {x.shape} instead!"
AssertionError: ResNet takes an input of shape (N, C, H, W). Got torch.Size([1, 1, 3, 800, 1280]) instead!
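
My suspicion is that the torch.Size([1, 1, 3, 800, 1280]) comes from my unsqueeze(0): the model already stacks the list of input dicts into a batch, so the image tensor should probably stay 3-D. For comparison, the stock DefaultPredictor.__call__ in detectron2/engine/defaults.py prepares the input roughly like this:

# simplified from detectron2's own DefaultPredictor.__call__
height, width = original_image.shape[:2]
image = self.aug.get_transform(original_image).apply_image(original_image)
image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))  # stays (C, H, W)
inputs = {"image": image, "height": height, "width": width}
predictions = self.model([inputs])[0]  # the model adds the batch dimension itself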

Upvotes: 0

Views: 133

Answers (0)
