Reputation: 13778
This only happens when using jieba as the tokenizer.
My code:
from sklearn.feature_extraction.text import TfidfVectorizer
import jieba
import pickle
data = ["十二届全国政协副秘书长黄小祥被免职撤委员资格-人事任免-时政频道-中工网", "银联持卡人境外可获紧急现金支援-财经网", "国家煤矿安全监察局关于印发《国家煤矿安全监察局领导同志工作分工》的通知", "扎克伯格净资产增至431亿美元 成第九大富豪 -科技频道-和讯网", "供电局领导注意了", "廊坊进口电源失电,全城大面积停电,请不要再打95598,今晚预计无法恢复送电!http://tieba.baidu.com/p/3077856046", "小区楼道因公摊电费争议被停电 供电部门:会彻查", "如何帮助员工“理解”战略-哈佛商业评论", "荣威950 1.8T正式上市 售17.98-20.98万_凤凰汽车_凤凰网", " 怀化电业局,你摊上事了,你摊上大事了!视频已经曝光,速度围观! "]
jieba_tokenizer = lambda x: jieba.cut(x)
vect = TfidfVectorizer(tokenizer=jieba_tokenizer, min_df=3, max_df=0.95)
X_train_features = vect.fit_transform(data)
m = pickle.dumps(vect)
The error:
TypeError Traceback (most recent call last)
<ipython-input-44-556c978e0043> in <module>()
----> 1 pickle.dumps(vect)
C:\Python27\lib\copy_reg.pyc in _reduce_ex(self, proto)
68 else:
69 if base is self.__class__:
---> 70 raise TypeError, "can't pickle %s objects" % base.__name__
71 state = base(self)
72 args = (self.__class__, base, state)
TypeError: can't pickle function objects
Chinese text needs jieba
as the tokenizer, but I have no idea how to pickle that vectorizer.
Upvotes: 1
Views: 3401
Reputation: 381
For the record, the dill package pickles lambdas transparently, and has the same API as pickle.
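A minimal sketch, assuming dill is installed (e.g. via pip install dill) and vect is the fitted vectorizer from the question:

import dill

# dill serializes the lambda tokenizer by value, so this succeeds
# where pickle.dumps raised TypeError
m = dill.dumps(vect)
vect2 = dill.loads(m)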
Upvotes: 2
Reputation: 36545
I think the problem is the use of lambda; it should work if you instead use the syntax
def jieba_tokenizer(x):
    return jieba.cut(x)
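For completeness, a minimal sketch of the question's code with the named function. pickle serializes top-level functions by reference (their module and qualified name), so dumps succeeds where the lambda failed:

import pickle
import jieba
from sklearn.feature_extraction.text import TfidfVectorizer

def jieba_tokenizer(x):
    return jieba.cut(x)

vect = TfidfVectorizer(tokenizer=jieba_tokenizer, min_df=3, max_df=0.95)
X_train_features = vect.fit_transform(data)  # data as defined in the question
m = pickle.dumps(vect)  # no TypeError: the function is pickled by name

Note that loading this pickle later requires jieba_tokenizer to be defined (or importable) in the loading process, since pickle stores only a reference to the function, not its body.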
Upvotes: 1