Reputation: 1432
This is the error I got today at filmaster.com (http://filmaster.com):
PicklingError: Can't pickle <class 'decimal.Decimal'>: it's not the same object as decimal.Decimal
What exactly does that mean? It does not seem to make a lot of sense... It seems to be connected with Django caching. You can see the whole traceback here:
Traceback (most recent call last):
  File "/home/filmaster/django-trunk/django/core/handlers/base.py", line 92, in get_response
    response = callback(request, *callback_args, **callback_kwargs)
  File "/home/filmaster/film20/film20/core/film_views.py", line 193, in show_film
    workflow.set_data_for_authenticated_user()
  File "/home/filmaster/film20/film20/core/film_views.py", line 518, in set_data_for_authenticated_user
    object_id = self.the_film.parent.id)
  File "/home/filmaster/film20/film20/core/film_helper.py", line 179, in get_others_ratings
    set_cache(CACHE_OTHERS_RATINGS, str(object_id) + "_" + str(user_id), userratings)
  File "/home/filmaster/film20/film20/utils/cache_helper.py", line 80, in set_cache
    return cache.set(CACHE_MIDDLEWARE_KEY_PREFIX + full_path, result, get_time(cache_string))
  File "/home/filmaster/django-trunk/django/core/cache/backends/memcached.py", line 37, in set
    self._cache.set(smart_str(key), value, timeout or self.default_timeout)
  File "/usr/lib/python2.5/site-packages/cmemcache.py", line 128, in set
    val, flags = self._convert(val)
  File "/usr/lib/python2.5/site-packages/cmemcache.py", line 112, in _convert
    val = pickle.dumps(val, 2)
PicklingError: Can't pickle <class 'decimal.Decimal'>: it's not the same object as decimal.Decimal
And the source code for Filmaster can be downloaded from here: bitbucket.org/filmaster/filmaster-test
Any help will be greatly appreciated.
Upvotes: 92
Views: 113436
Reputation: 134
I had the same problem; mine came from a decorator, more specifically @lru_cache. I'd recommend heading to Pickle and decorated classes (PicklingError: not the same object).
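For illustration, here is a minimal sketch (the class name Foo is made up, not from the linked answer) of how decorating a class with @lru_cache can trigger the error: the decorator rebinds the module-level name to the cache wrapper, while instances created through it still reference the original class object, so pickle's lookup by name no longer matches.
import pickle
from functools import lru_cache

@lru_cache(maxsize=None)   # after decoration, the name Foo refers to the
class Foo:                 # lru_cache wrapper, not to the class itself
    pass

obj = Foo()                # the wrapper calls the real class and returns an instance
pickle.dumps(obj)          # PicklingError: Can't pickle <class '__main__.Foo'>:
                           #   it's not the same object as __main__.Foo
Leaving the class itself undecorated (and caching a factory function instead) keeps the module-level name pointing at the real class, and the error goes away.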
Upvotes: 0
Reputation: 938
I got this error when using a factory pattern with a decorator to produce my objects:
class MyFactory:
    _constructors = {}

    @classmethod
    def register(cls, other):
        cls._constructors[other.__name__] = other

    @classmethod
    def make_from(cls, specification):
        name = specification["name"]
        kwargs = specification["kwargs"]
        return cls._constructors[name](**kwargs)

@MyFactory.register
class SomeClass:
    def __init__(self, foo=None):
        self._foo = foo

my_obj = MyFactory.make_from({"name": "SomeClass", "kwargs": {"foo": 3}})
print(type(my_obj))
This works as expected and yields <class '__main__.SomeClass'>
However, I implemented the register decorator incorrectly; I should have written it like so:
def register(cls, other):
    cls._constructors[other.__name__] = other
    return other
The key that I was missing was to return the original class from the decorator, and this error was the symptom. When a class is decorated like this, the decorator's return value is what actually gets bound to the class name at module level, which in my case was None. I didn't notice this at first because the factory had cached the class, and all of my code was using the factory to generate these objects. Since pickle looks the class up through sys.modules directly, the error only popped up when I tried to pickle one of the objects from the broken factory.
Upvotes: 0
Reputation: 1125
Building on these two answers: if you get an error of the form PicklingError: Can't pickle <class 'foo.Bar'>: it's not the same object as foo.Bar, try replacing Bar with foo.Bar.
You can use this snippet to try and debug where things go wrong:
from foo import Bar
import foo
print(isinstance(foo.Bar(), foo.Bar)) # True
print(isinstance(Bar(), foo.Bar)) # Sometimes True, sometimes False
Upvotes: 0
Reputation: 73
This miraculous function solved the mentioned error for me, but it then turned into another error, 'permission denied', which came out of the blue. However, I guess it might help someone find a solution, so I am posting the function anyway:
import tempfile
import time
from tensorflow.keras.models import save_model, load_model, Model

# Hotfix function
def make_keras_picklable():
    def __getstate__(self):
        # Serialize the model to an HDF5 file and store its bytes in the pickle state
        model_str = ""
        with tempfile.NamedTemporaryFile(suffix='.hdf5', delete=True) as fd:
            save_model(self, fd.name, overwrite=True)
            model_str = fd.read()
        d = {'model_str': model_str}
        return d

    def __setstate__(self, state):
        # Write the stored bytes back to a temporary file and reload the model from it
        with tempfile.NamedTemporaryFile(suffix='.hdf5', delete=True) as fd:
            fd.write(state['model_str'])
            fd.flush()
            model = load_model(fd.name)
        self.__dict__ = model.__dict__

    cls = Model
    cls.__getstate__ = __getstate__
    cls.__setstate__ = __setstate__

# Run the function
make_keras_picklable()

### create then save your model here ###
Upvotes: 0
Reputation: 1
I had the same error in Spyder. It turned out to be simple in my case: I had defined a class named "Class" in a file that was also named "Class". I changed the name of the class in the definition to "Class_obj". pickle.dump(Class_obj, fileh) works, but pickle.dump(Class, fileh) does not when it is saved in a file named "Class".
Upvotes: 0
Reputation: 541
I had a problem that no one has mentioned yet. I have a package with an __init__.py file that does, among other things:
from .mymodule import cls
Then my top-level code says:
import mypkg
obj = mypkg.cls()
The problem with this is that in my top-level code the type appears to be mypkg.cls, but it is actually mypkg.mymodule.cls. Using the full path:
obj = mypkg.mymodule.cls()
avoids the error.
Upvotes: 0
Reputation: 31
Due to the reputation restrictions I cannot comment, but the answer of Salim Fahedy and following the debugging path set me up to identify a cause for this error, even when using dill instead of pickle:
Under the hood, dill also relies on some of pickle's functions, and in pickle._Pickler.save_global() there is an import happening. To me it seems that this is more of a "hack" than a real solution, as this method fails as soon as the class of the instance you are trying to pickle is not imported from the lowest level of the package the class is in. Sorry for the bad explanation; maybe examples are more suitable:
The following example would fail:
from oemof import solph
...
(some code here, giving you the object 'es')
...
model = solph.Model(es)
pickle.dump(model, open('file.pickle', 'wb'))
It fails because, while you can use solph.Model, the class actually is oemof.solph.models.Model, for example. save_global() resolves that (or some function before it that passes it on to save_global()), but then imports Model from oemof.solph.models and throws an error, because that is not the same as the original from oemof import solph import (or something like that, I'm not 100% sure about the inner workings).
The following example would work:
from oemof.solph.models import Model
...
(some code here, giving you the object 'es')
...
model = Model(es)
pickle.dump(model, open('file.pickle', 'wb'))
It works because now the Model object is imported from the same place that pickle._Pickler.save_global() imports the comparison object (obj2) from.
Long story short: When pickling an object, make sure to import the class from the lowest possible level.
Addition: This also seems to apply to objects stored in the attributes of the class instance you want to pickle. If, for example, model had an attribute es that itself is an object of the class oemof.solph.energysystems.EnergySystem, we would need to import it as:
from oemof.solph.energysystems import EnergySystem
es = EnergySystem()
Upvotes: 3
Reputation: 468
I had the same problem while debugging in Spyder. Everything worked normally when I ran the program, but as soon as I started debugging I hit the PicklingError. However, once I chose the option Execute in dedicated console in Run configuration per file (shortcut: Ctrl+F6), everything worked as expected. I do not know exactly why this makes a difference.
Note: In my script I have many imports like
from PyQt5.QtWidgets import *
from PyQt5.Qt import *
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
import os, sys, re, math
My basic understanding was that, because of the star (*) imports, I was getting this PicklingError.
Upvotes: 1
Reputation: 675
My issue was that I had a function with the same name defined twice in a file, so I guess pickle was confused about which one it was trying to pickle.
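As a minimal sketch of that situation (the function name process is made up): if anything still holds a reference to the first definition, pickling that reference fails because the module-level name now points at the second definition.
import pickle

def process(x):
    return x + 1

first = process        # keep a reference to the first definition

def process(x):        # the same name defined again later in the file
    return x * 2

pickle.dumps(first)    # PicklingError: Can't pickle <function process at 0x...>:
                       #   it's not the same object as __main__.process
Renaming one of the two definitions removes the ambiguity.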
Upvotes: 0
Reputation: 1460
I will demonstrate the problem with simple Python classes in Python 2.7:
In [13]: class A: pass
In [14]: class B: pass
In [15]: A
Out[15]: <class __main__.A at 0x7f4089235738>
In [16]: B
Out[16]: <class __main__.B at 0x7f408939eb48>
In [17]: A.__name__ = "B"
In [18]: pickle.dumps(A)
---------------------------------------------------------------------------
PicklingError: Can't pickle <class __main__.B at 0x7f4089235738>: it's not the same object as __main__.B
This error is shown because we are trying to dump A, but since we changed its name to refer to another object, "B", pickle is confused about which object to dump: class A or class B. Apparently the pickle developers are very smart and have already put a check on this behavior.
Solution: Check if the object you are trying to dump has conflicting name with another object.
I have demonstrated debugging for the case presented above with ipython and ipdb below:
PicklingError: Can't pickle <class __main__.B at 0x7f4089235738>: it's not the same object as __main__.B
In [19]: debug
> /<path to pickle dir>/pickle.py(789)save_global()
787 raise PicklingError(
788 "Can't pickle %r: it's not the same object as %s.%s" %
--> 789 (obj, module, name))
790
791 if self.proto >= 2:
ipdb> pp (obj, module, name) **<------------- you are trying to dump obj which is class A from the pickle.dumps(A) call.**
(<class __main__.B at 0x7f4089235738>, '__main__', 'B')
ipdb> getattr(sys.modules[module], name) **<------------- this is the conflicting definition in the module (__main__ here) with same name ('B' here).**
<class __main__.B at 0x7f408939eb48>
I hope this saves some headaches! Adios!!
Upvotes: 29
Reputation: 3096
I can't explain why this is failing either, but my own solution to fix this was to change all my code from doing
from point import Point
to
import point
This one change alone made it work. I'd love to know why... Hope this helps.
Upvotes: 15
Reputation: 2817
There can be issues starting a process with multiprocessing by calling __init__. Here's a demo:
import multiprocessing as mp

class SubProcClass:
    def __init__(self, pipe, startloop=False):
        self.pipe = pipe
        if startloop:
            self.do_loop()

    def do_loop(self):
        while True:
            req = self.pipe.recv()
            self.pipe.send(req * req)

class ProcessInitTest:
    def __init__(self, spawn=False):
        if spawn:
            mp.set_start_method('spawn')
        (self.msg_pipe_child, self.msg_pipe_parent) = mp.Pipe(duplex=True)

    def start_process(self):
        subproc = SubProcClass(self.msg_pipe_child)
        self.trig_proc = mp.Process(target=subproc.do_loop, args=())
        self.trig_proc.daemon = True
        self.trig_proc.start()

    def start_process_fail(self):
        self.trig_proc = mp.Process(target=SubProcClass.__init__, args=(self.msg_pipe_child,))
        self.trig_proc.daemon = True
        self.trig_proc.start()

    def do_square(self, num):
        # Note: this is a synchronous usage of mp,
        # which doesn't make sense. But this is just for demo
        self.msg_pipe_parent.send(num)
        msg = self.msg_pipe_parent.recv()
        print('{}^2 = {}'.format(num, msg))
Now, with the above code, if we run this:
if __name__ == '__main__':
    t = ProcessInitTest(spawn=True)
    t.start_process_fail()
    for i in range(1000):
        t.do_square(i)
We get this error:
Traceback (most recent call last):
File "start_class_process1.py", line 40, in <module>
t.start_process_fail()
File "start_class_process1.py", line 29, in start_process_fail
self.trig_proc.start()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/context.py", line 212, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/context.py", line 274, in _Popen
return Popen(process_obj)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/popen_spawn_posix.py", line 33, in __init__
super().__init__(process_obj)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/popen_fork.py", line 21, in __init__
self._launch(process_obj)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/popen_spawn_posix.py", line 48, in _launch
reduction.dump(process_obj, fp)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/reduction.py", line 59, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function SubProcClass.__init__ at 0x10073e510>: it's not the same object as __main__.__init__
And if we change it to use fork instead of spawn:
if __name__ == '__main__':
    t = ProcessInitTest(spawn=False)
    t.start_process_fail()
    for i in range(1000):
        t.do_square(i)
We get this error:
Process Process-1:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/process.py", line 254, in _bootstrap
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
TypeError: __init__() missing 1 required positional argument: 'pipe'
But if we call the start_process method, which doesn't call __init__ in the mp.Process target, like this:
if __name__ == '__main__':
    t = ProcessInitTest(spawn=False)
    t.start_process()
    for i in range(1000):
        t.do_square(i)
It works as expected (whether we use spawn or fork).
Upvotes: 10
Reputation: 8345
I got this error when running in a Jupyter notebook. I think the problem was that I was using %load_ext autoreload with %autoreload 2. Restarting my kernel and rerunning solved the problem.
Upvotes: 128
Reputation: 4315
Did you somehow reload(decimal), or monkeypatch the decimal module to change the Decimal class? These are the two things most likely to produce such a problem.
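For the monkeypatching case, here is a minimal sketch (FakeDecimal is a made-up stand-in for whatever rebinds decimal.Decimal) that reproduces the exact message from the question:
import decimal
import pickle

d = decimal.Decimal("1.5")

class FakeDecimal(decimal.Decimal):   # monkeypatch: rebind decimal.Decimal
    pass                              # to a different class object

decimal.Decimal = FakeDecimal

pickle.dumps(d)   # PicklingError: Can't pickle <class 'decimal.Decimal'>:
                  #   it's not the same object as decimal.Decimal
reload(decimal) on Python 2 had a similar effect, because the pure-Python decimal module re-created the Decimal class each time it was reloaded.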
Upvotes: 9
Reputation: 23028
One oddity of pickle is that the way you import a class before you pickle one of its instances can subtly change the pickled object. Pickle requires you to have imported the object identically both before you pickle it and before you unpickle it.
So for example:
from a.b import c
C = c()
pickler.dump(C)
will make a subtly different object (sometimes) to:
from a import b
C = b.c()
pickler.dump(C)
Try fiddling with your imports; it might correct the problem.
Upvotes: 39