Moonwalker

Reputation: 2232

Clustering of sparse matrix in python and scipy

I'm trying to cluster some data with Python and SciPy, but the following code does not work, for a reason I do not understand:

from scipy.sparse import dok_matrix

# Co-occurrence counts: matrix[id1, id2] = number of publications
# that authors auth1 and auth2 share.
matrix = dok_matrix((en, en), dtype=int)

for pub in pubs:
    authors = pub.split(";")
    for auth1 in authors:
        for auth2 in authors:
            if auth1 == auth2: continue
            id1 = e2id[auth1]
            id2 = e2id[auth2]
            matrix[id1, id2] += 1

from scipy.cluster.vq import vq, kmeans2, whiten
result = kmeans2(matrix, 30)
print result

It says:

Traceback (most recent call last):
  File "cluster.py", line 40, in <module>
    result = kmeans2(matrix, 30)
  File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 683, in kmeans2
    clusters = init(data, k)
  File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 576, in _krandinit
    return init_rankn(data)
  File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 563, in init_rankn
    mu  = np.mean(data, 0)
  File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 2374, in mean
    return mean(axis, dtype, out)
TypeError: mean() takes at most 2 arguments (4 given)

When I use kmeans instead of kmeans2, I get the following error:

Traceback (most recent call last):
  File "cluster.py", line 40, in <module>
    result = kmeans(matrix, 30)
  File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 507, in kmeans
    guess = take(obs, randint(0, No, k), 0)
  File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 103, in take
    return take(indices, axis, out, mode)
TypeError: take() takes at most 3 arguments (5 given)

I think I have these problems because I'm using sparse matrices, but my matrices are too big to fit in memory otherwise. Is there a way to use the standard clustering algorithms from scipy with sparse matrices? Or do I have to re-implement them myself?

I created a new version of my code to work with a vector space representation:

el = len(experts)
pl = len(pubs)
print el, pl

from scipy.sparse import dok_matrix

# Publication-author incidence matrix: P[p_id, id1] = 1 if the author
# with index id1 appears on publication p_id.
P = dok_matrix((pl, el), dtype=int)

p_id = 0
for pub in pubs:
    authors = pub.split(";")
    for auth1 in authors:
        if len(auth1) < 2: continue
        id1 = e2id[auth1]
        P[p_id, id1] = 1
    p_id += 1  # advance to the next publication's row

from scipy.cluster.vq import kmeans, kmeans2, whiten
result = kmeans2(P, 30)
print result

But I'm still getting the error:

TypeError: mean() takes at most 2 arguments (4 given)

What am I doing wrong?

Upvotes: 1

Views: 5391

Answers (3)

ericmjl

Reputation: 14684

Alternatively, if you're looking to cluster graphs, I'd take a look at NetworkX. That might be a useful tool for you. The reason I suggest this is that it looks like the data you're working with forms a network of authors. Hence, with NetworkX, you can put in an adjacency matrix and find out which authors are clustered together.

For a further elaboration on this, you can see a question that I had asked earlier for inspiration here.
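As a minimal sketch of the idea (the adjacency matrix below is made up, and greedy modularity maximization is just one of several community-detection routines NetworkX offers):

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical co-authorship adjacency matrix: adj[i, j] is the number
# of publications that authors i and j share.
adj = np.array([
    [0, 2, 1, 0],
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
])

G = nx.from_numpy_array(adj)  # weighted, undirected co-authorship graph

# Connected components give a coarse grouping of authors;
# modularity-based community detection refines it.
components = [set(c) for c in nx.connected_components(G)]
communities = [set(c) for c in greedy_modularity_communities(G, weight="weight")]
print(components)
print(communities)
```

Because the graph is built straight from the adjacency matrix, there is no need for a vector-space embedding of the authors at all.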

Upvotes: 0

ericmjl

Reputation: 14684

May I suggest "Affinity Propagation" from scikit-learn? On the work I've been doing with it, I find that it has generally been able to find the 'naturally' occurring clusters within my data set. The input to the algorithm is an affinity matrix, or similarity matrix, built from any arbitrary similarity measure.

I don't have a good handle on the kind of data you have on hand, so I can't speak to the exact suitability of this method to your data set, but it may be worth a try, perhaps?
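As an illustration (the similarity matrix here is made up, with two obvious groups), scikit-learn's AffinityPropagation can consume a precomputed affinity matrix directly:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Hypothetical similarity matrix for 6 items: high similarity within
# each group of three, low similarity between the groups.
S = np.array([
    [10.0, 9.0, 8.0, 1.0, 1.0, 1.0],
    [ 9.0, 10.0, 8.0, 1.0, 1.0, 1.0],
    [ 8.0, 8.0, 10.0, 1.0, 1.0, 1.0],
    [ 1.0, 1.0, 1.0, 10.0, 9.0, 8.0],
    [ 1.0, 1.0, 1.0, 9.0, 10.0, 8.0],
    [ 1.0, 1.0, 1.0, 8.0, 8.0, 10.0],
])

# affinity="precomputed" tells scikit-learn that S is already a
# similarity matrix, not a set of raw feature vectors.
ap = AffinityPropagation(affinity="precomputed", random_state=0)
labels = ap.fit_predict(S)
print(labels)
```

Note that, unlike k-means, you don't have to choose the number of clusters up front; the algorithm infers it from the data and the preference values.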

Upvotes: 2

Has QUIT--Anony-Mousse

Reputation: 77454

K-means cannot be run on distance matrices.

It needs a vector space to compute means in, that is why it is called k-means. If you want to use a distance matrix, you need to look into purely distance based algorithms such as DBSCAN and OPTICS (both on Wikipedia).
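For instance, scikit-learn's DBSCAN implementation accepts a precomputed distance matrix, so it never needs a vector space or a mean (the distance matrix below is made up for illustration):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical pairwise distance matrix for 5 points: the first three
# points are close to each other, the last two are close to each other.
D = np.array([
    [0.0, 0.1, 0.2, 5.0, 5.1],
    [0.1, 0.0, 0.1, 5.0, 5.1],
    [0.2, 0.1, 0.0, 5.2, 5.3],
    [5.0, 5.0, 5.2, 0.0, 0.1],
    [5.1, 5.1, 5.3, 0.1, 0.0],
])

# metric="precomputed" makes DBSCAN consume the distance matrix
# directly; no means are ever computed.
db = DBSCAN(eps=0.5, min_samples=2, metric="precomputed")
labels = db.fit_predict(D)
print(labels)  # two clusters: points 0-2 and points 3-4
```

The same `metric="precomputed"` idea carries over to OPTICS in scikit-learn.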

Upvotes: 5
