user3378649

Reputation: 5354

Finding elements inside every cluster in scikit DBSCAN?

I am trying to explore scikit-learn's DBSCAN. There is something I want to know: how can I find the points in every cluster?

This code is an example from the scikit-learn website:

import numpy as np

from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.datasets.samples_generator import make_blobs
from sklearn.preprocessing import StandardScaler


##############################################################################
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=750, centers=centers, cluster_std=0.4,
                            random_state=0)

X = StandardScaler().fit_transform(X)

##############################################################################
# Compute DBSCAN
db = DBSCAN(eps=0.3, min_samples=10).fit(X)
core_samples = db.core_sample_indices_
labels = db.labels_

# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)

print('Estimated number of clusters: %d' % n_clusters_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f"
      % metrics.adjusted_rand_score(labels_true, labels))
print("Adjusted Mutual Information: %0.3f"
      % metrics.adjusted_mutual_info_score(labels_true, labels))
print("Silhouette Coefficient: %0.3f"


         % metrics.silhouette_score(X, labels))
        ##############################################################################
    #Modification I am doing 
    print labels 
    print labels[0]

unique_labels = set(labels)

for k in unique_labels:
    class_members = [index[0] for index in np.argwhere(labels == k)]
    #cluster_core_samples = [index for index in core_samples if labels[index] == k]

    print class_members[0]

    for index in class_members:
        x = X[index]
        print x

It seems that I need to find a way to reverse-engineer

StandardScaler().fit_transform(X)

The scikit-learn implementation of DBSCAN is presented in DBSCAN Code - DBSCAN Test Unit.

I'd like to print the three clusters and the points that belong to each cluster.
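For reference, the cluster membership can be read directly off labels with NumPy boolean indexing; a minimal sketch, assuming the X and labels arrays from the code above (skipping noise points is my addition):

import numpy as np

# group the scaled points by their DBSCAN cluster label
for k in set(labels):
    if k == -1:
        continue  # DBSCAN marks noise points with the label -1
    members = X[labels == k]
    print("Cluster %d contains %d points" % (k, len(members)))
    print(members)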

UPDATE

When I try to run the inverse_transform() function, I get an error at this line:

File "/Users/macbook/anaconda/lib/python2.7/site-packages/sklearn/preprocessing/data.py", line 384, in inverse_transform

You can find the code here: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/data.py

        if self.with_std:
            X *= self.std_
        if self.with_mean:
            X += self.mean_

This is where I get the error. Any ideas on how to solve this problem?

Upvotes: 2

Views: 3640

Answers (1)

lejlot

Reputation: 66805

It seems that I need to find an algorithm to reverse engineering

StandardScaler().fit_transform(X)

Data transformers in sklearn are "reversible" (as long as they are not lossy); you just need to store your scaler object:

s = StandardScaler()
X = s.fit_transform(X)

and then, if you want to retrieve the unscaled version:

X = s.inverse_transform(X)
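Putting this together with the clustering from the question, a minimal sketch of printing each cluster's points in their original (unscaled) coordinates; it reuses X, eps, and min_samples from the question's code, and the names X_scaled / X_original are mine:

from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

s = StandardScaler()
X_scaled = s.fit_transform(X)   # X is the raw data from make_blobs

db = DBSCAN(eps=0.3, min_samples=10).fit(X_scaled)
labels = db.labels_

# map the scaled points back to the original coordinate system
X_original = s.inverse_transform(X_scaled)

for k in set(labels):
    if k == -1:
        continue  # -1 is DBSCAN's label for noise
    print("Cluster %d:" % k)
    print(X_original[labels == k])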

Regarding the comment:

StandardScaler transforms data both ways just fine.

>>> from sklearn.preprocessing import StandardScaler
>>> import numpy as np
>>> x = np.array( [[1.0,2.0],[0.0,-4.0]])
>>> s = StandardScaler()
>>> x
array([[ 1.,  2.],
       [ 0., -4.]])
>>> a=s.fit_transform(x)
>>> a
array([[ 1.,  1.],
       [-1., -1.]])
>>> s.inverse_transform(a)
array([[ 1.,  2.],
       [ 0., -4.]])
>>> 

Upvotes: 2
