Reputation: 341
I get an error on the line x_stats = dec(z).float():
import torch
import torch.nn.functional as F
import torchvision.transforms as T
from dall_e import unmap_pixels
from IPython.display import display, display_markdown

z_logits = enc(x)
z = torch.argmax(z_logits, axis=1)
z = F.one_hot(z, num_classes=enc.vocab_size).permute(0, 3, 1, 2).float()
x_stats = dec(z).float()  # <-- this is the line that fails
x_rec = unmap_pixels(torch.sigmoid(x_stats[:, :3]))
x_rec = T.ToPILImage(mode='RGB')(x_rec[0])
display_markdown('Reconstructed image:')
display(x_rec)
I tried downgrading and reinstalling the torch package, but that didn't help. My package version is torch==1.11.0
Full traceback:
AttributeError Traceback (most recent call last)
/Users/hanpham/github/DALL-E/notebooks/usage.ipynb Cell 4' in <cell line: 7>()
4 z = torch.argmax(z_logits, axis=1)
5 z = F.one_hot(z, num_classes=enc.vocab_size).permute(0, 3, 1, 2).float()
----> 7 x_stats = dec(z).float()
8 x_rec = unmap_pixels(torch.sigmoid(x_stats[:, :3]))
9 x_rec = T.ToPILImage(mode='RGB')(x_rec[0])
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/lib/python3.9/site-packages/dall_e/decoder.py:94, in Decoder.forward(self, x)
91 if x.dtype != torch.float32:
92 raise ValueError('input must have dtype torch.float32')
---> 94 return self.blocks(x)
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/container.py:141, in Sequential.forward(self, input)
139 def forward(self, input):
140 for module in self:
--> 141 input = module(input)
142 return input
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/container.py:141, in Sequential.forward(self, input)
139 def forward(self, input):
140 for module in self:
--> 141 input = module(input)
142 return input
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/upsampling.py:154, in Upsample.forward(self, input)
152 def forward(self, input: Tensor) -> Tensor:
153 return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners,
--> 154 recompute_scale_factor=self.recompute_scale_factor)
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1185, in Module.__getattr__(self, name)
1183 if name in modules:
1184 return modules[name]
-> 1185 raise AttributeError("'{}' object has no attribute '{}'".format(
1186 type(self).__name__, name))
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
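The traceback bottoms out in Upsample.forward, so the Upsample modules that were unpickled with the model seem to be missing an attribute that torch 1.11 expects. A quick way to confirm this (a sketch; dec is the decoder loaded earlier in the notebook):

import torch.nn as nn

# List every Upsample module that lacks the attribute torch 1.11's forward reads.
missing = [name for name, m in dec.named_modules()
           if isinstance(m, nn.Upsample) and not hasattr(m, 'recompute_scale_factor')]
print(missing)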
Upvotes: 8
Views: 20539
Reputation: 9
Comment out the recompute_scale_factor=self.recompute_scale_factor part in the source code, at /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/upsampling.py, line 154, in Upsample.forward(self, input):
152 def forward(self, input: Tensor) -> Tensor:
153     return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners)
154     # recompute_scale_factor=self.recompute_scale_factor)
Upvotes: -1
Reputation: 1174
Downgrade to an older, matching torch/torchvision pair:
pip install torchvision==0.10.1
pip install torch==1.9.1
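For what it's worth, torchvision 0.10.1 is the build that pairs with torch 1.9.1, so you can also pin both in a single command and let pip resolve them together:
pip install torch==1.9.1 torchvision==0.10.1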
Upvotes: 10
Reputation: 53
Also getting this error with torch 1.11.0. Would love to hear how people solve it.
Looks like it's an issue with 1.11.0: https://github.com/openai/DALL-E/issues/54
Edit: Following these instructions solved it for me: https://github.com/openai/DALL-E/issues/54#issuecomment-1092826376
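A runtime patch along those lines avoids editing torch's source: set the missing attribute on the unpickled model itself. A minimal sketch, assuming dec is the already-loaded DALL-E decoder:

import torch.nn as nn

# Models pickled with older torch lack the attribute that torch 1.11's
# Upsample.forward reads, so give every Upsample module the new default.
for m in dec.modules():
    if isinstance(m, nn.Upsample) and not hasattr(m, 'recompute_scale_factor'):
        m.recompute_scale_factor = None  # None is the torch >= 1.11 default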
Upvotes: 0
Reputation: 2425
I think your issue might be along the lines of https://github.com/ultralytics/yolov5/issues/6948.
I'm not familiar with pytorch, but the suggestions there were to either:
wait for the next version (not really that great a suggestion, sorry), or
comment out the code as pointed out in https://github.com/ultralytics/yolov5/issues/6948#issuecomment-1075528897, that is:
In /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/upsampling.py, lines 153-154:
Change:
return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners,
                     recompute_scale_factor=self.recompute_scale_factor)
To:
return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners)
# recompute_scale_factor=self.recompute_scale_factor)
or
return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners,
                     # recompute_scale_factor=self.recompute_scale_factor
                     )
In my opinion, as a workaround, you could comment the line out until a new version comes out, at which point you can undo the change and upgrade.
While I agree this is an 'answer', it isn't the perfect answer. My apologies.
Upvotes: 1