Reputation: 2117
Being new to unsupervised methods, I need a push in the right direction with some semi-simple code to run through some data as a case study. The data I'm working with only has 300 or so observations, but I also want to learn how to apply clustering to very large sets that behave similarly.
I have a two-feature data set and I'd like to run DBSCAN or something similar using Euclidean distance (if this is the correct clustering approach).
As an example, the data looks like this:
I can tell just by eye that clustering this way might not be the best method, as the distribution looks irregular.
What method should I use to begin understanding distributions like these, especially when the set is very large (hundreds of thousands of observations)?
Upvotes: 2
Views: 4826
Reputation: 16573
For most machine learning tasks, scikit-learn is your friend. For DBSCAN, it provides sklearn.cluster.DBSCAN. From the scikit-learn docs:
>>> from sklearn.cluster import DBSCAN
>>> import numpy as np
>>> X = np.array([[1, 2], [2, 2], [2, 3],
...               [8, 7], [8, 8], [25, 80]])
>>> clustering = DBSCAN(eps=3, min_samples=2).fit(X)
>>> clustering.labels_
array([ 0, 0, 0, 1, 1, -1])
>>> clustering
DBSCAN(algorithm='auto', eps=3, leaf_size=30, metric='euclidean',
metric_params=None, min_samples=2, n_jobs=None, p=None)
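In the labels_ output above, -1 marks points DBSCAN treats as noise. For your own two-feature data, the main knob is eps; a common heuristic (not from the original answer) is to sort each point's distance to its min_samples-th nearest neighbor and pick eps near the "elbow" of that curve. Here is a minimal sketch, using random stand-in data since your set isn't shown, with min_samples=5 as an illustrative choice:

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

# Stand-in for your data: ~300 observations with 2 features.
rng = np.random.RandomState(0)
X = rng.normal(size=(300, 2))

min_samples = 5  # illustrative starting point for 2-D data

# Heuristic for eps: sort each point's distance to its
# min_samples-th nearest neighbor and look for the "elbow".
nn = NearestNeighbors(n_neighbors=min_samples).fit(X)
distances, _ = nn.kneighbors(X)
k_distances = np.sort(distances[:, -1])

# Taking a high percentile is just a rough starting value;
# plotting k_distances and eyeballing the elbow is more reliable.
eps = k_distances[int(0.95 * len(k_distances))]

labels = DBSCAN(eps=eps, min_samples=min_samples, metric='euclidean').fit_predict(X)
# labels == -1 marks points DBSCAN considers noise.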
You also have other clustering algorithms available to you through scikit-learn; you can see all of them in the clustering section of the user guide (https://scikit-learn.org/stable/modules/clustering.html).
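For the very large case in the question (hundreds of thousands of observations), one scalable option, offered here as a hedged suggestion rather than as part of the DBSCAN docs, is sklearn.cluster.MiniBatchKMeans, which fits on small random batches; note it assumes roughly convex clusters, unlike DBSCAN. A rough sketch on synthetic stand-in data:

import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Synthetic stand-in for a large two-feature data set.
rng = np.random.RandomState(0)
X_big = rng.normal(size=(200_000, 2))

# MiniBatchKMeans updates centroids from small random batches,
# so it scales to sets that are too big for plain KMeans.
# n_clusters=3 is purely illustrative here.
mbk = MiniBatchKMeans(n_clusters=3, batch_size=1024, random_state=0)
labels = mbk.fit_predict(X_big)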
Upvotes: 4