Reputation: 1181
I am using caffe, a deep neural network library, to generate image features for image-based retrieval. The particular network I am using produces a 4096-dimensional feature vector.
I am using LSHash to generate hash buckets from the features. When I do a brute-force comparison of all available features, sorting images by Euclidean distance, I find the features represent image similarity well. When I use LSHash, however, I find that similar features rarely land in the same bucket.
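For reference, a minimal sketch of the setup described above, assuming the lshash Python package; the feature file name and the hash_size value are placeholders, not values from my actual code:

    import numpy as np
    from lshash import LSHash

    # features: an (N, 4096) array of caffe descriptors, one row per image.
    # 'features.npy' and hash_size=10 are illustrative placeholders.
    features = np.load('features.npy')
    query = features[0]

    # Brute-force ranking by euclidean distance (this works well):
    order = np.argsort(np.linalg.norm(features - query, axis=1))

    # LSH bucketing of the same vectors (similar images rarely collide):
    lsh = LSHash(hash_size=10, input_dim=4096)
    for i, vec in enumerate(features):
        lsh.index(vec, extra_data=i)
    candidates = lsh.query(query, num_results=10, distance_func='euclidean')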
Are the source features too high-dimensional for use with LSH? Are there other ways to reduce the dimensionality of the image features before hashing them?
Upvotes: 1
Views: 210
Reputation: 114966
If you are looking for a learned dimensionality reduction, you can simply add another "InnerProduct" layer on top of your net with a lower output dimension. To train only this layer, without altering the rest of the weights, set the lr_mult values of all the other layers to zero, thus training (a.k.a. "fine-tuning") only the new dimensionality-reduction layer.
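A rough sketch of what this might look like in the train prototxt; the layer names (fc6/fc7/fc_reduce) and num_output: 128 are placeholder assumptions, not taken from your net:

    # Existing layers get lr_mult: 0 so their weights stay frozen, e.g.:
    layer {
      name: "fc7"
      type: "InnerProduct"
      bottom: "fc6"
      top: "fc7"
      param { lr_mult: 0 }   # freeze weights
      param { lr_mult: 0 }   # freeze bias
      inner_product_param { num_output: 4096 }
    }
    # New layer -- the only one with a non-zero learning rate:
    layer {
      name: "fc_reduce"
      type: "InnerProduct"
      bottom: "fc7"
      top: "fc_reduce"
      param { lr_mult: 1 }
      param { lr_mult: 2 }   # common caffe convention: bias learns at 2x
      inner_product_param {
        num_output: 128      # the reduced feature dimension
        weight_filler { type: "xavier" }
      }
    }

After fine-tuning, you would extract the fc_reduce blob instead of the original 4096-dimensional feature and hash that.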
Upvotes: 0