Funzo

Reputation: 1290

How to compute the cosine_similarity in PyTorch for all rows in a matrix with respect to all rows in another matrix

In PyTorch, given two matrices, how would I compute the cosine similarity of all rows in one with all rows in the other?

For example

Given the input =

matrix_1 = [a b] 
           [c d] 
matrix_2 = [e f] 
           [g h]

I would like the output to be

output =

 [cosine_sim([a b] [e f])  cosine_sim([a b] [g h])]
 [cosine_sim([c d] [e f])  cosine_sim([c d] [g h])] 

At the moment I am using torch.nn.functional.cosine_similarity(matrix_1, matrix_2), which returns the cosine similarity of each row only with the corresponding row in the other matrix.

In my example I have only 2 rows, but I would like a solution which works for many rows. I would even like to handle the case where the number of rows in each matrix is different.

I realize that I could use expand; however, I want to do it without such a large memory footprint.
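For concreteness, here is a minimal sketch of the row-wise behaviour I am currently getting (the random tensors are just placeholders):

import torch
import torch.nn.functional as F

matrix_1 = torch.randn(2, 2)
matrix_2 = torch.randn(2, 2)

# Current behaviour: row i of matrix_1 vs row i of matrix_2 -> shape (2,)
row_wise = F.cosine_similarity(matrix_1, matrix_2, dim=1)

# Desired: every row of matrix_1 vs every row of matrix_2 -> shape (2, 2)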

Upvotes: 30

Views: 43759

Answers (7)

Theis Jendal

Reputation: 111

Same as Zhang Yu's answer, but using clamp instead of max and without creating a new tensor. I did a small test with timeit, which indicated that clamp was faster.

import torch

def sim_matrix(a, b, eps=1e-8):
    """
    Added eps for numerical stability.
    """
    a_n, b_n = a.norm(dim=1)[:, None], b.norm(dim=1)[:, None]
    a_norm = a / torch.clamp(a_n, min=eps)
    b_norm = b / torch.clamp(b_n, min=eps)
    sim_mt = torch.mm(a_norm, b_norm.transpose(0, 1))
    return sim_mt
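A rough sketch of that timing comparison (the shapes and iteration counts here are arbitrary, and results will vary by hardware):

import timeit
import torch

a = torch.randn(1000, 128)
n = a.norm(dim=1)[:, None]

# Time the two clamping strategies on identical inputs.
t_clamp = timeit.timeit(lambda: torch.clamp(n, min=1e-8), number=10000)
t_max = timeit.timeit(lambda: torch.max(n, 1e-8 * torch.ones_like(n)), number=10000)
print(f"clamp: {t_clamp:.3f}s  max: {t_max:.3f}s")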

Upvotes: 7

user11011176

Reputation: 11

In my opinion, this is the most concise way to write batch-wise cosine similarity between two matrices (collections of vectors).
Assume a and b are torch tensors with shapes (A, k) and (B, k):

import torch

def batch_cos_sim(a, b, eps=1e-8):
    numer = torch.mm(a, b.T)
    denom = torch.mm(a.norm(dim=1)[:, None], b.norm(dim=1)[:, None].T)
    denom = torch.where(denom < eps, torch.full_like(denom, eps), denom)
    return numer / denom
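As a quick sanity check, the diagonal of the pairwise matrix should match PyTorch's row-wise F.cosine_similarity when A == B (a hypothetical usage with random tensors):

import torch
import torch.nn.functional as F

a = torch.randn(4, 8)
b = torch.randn(4, 8)

full = batch_cos_sim(a, b)                   # (4, 4) pairwise matrix
row_wise = F.cosine_similarity(a, b, dim=1)  # (4,) corresponding rows only
print(torch.allclose(full.diagonal(), row_wise, atol=1e-6))  # True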

Upvotes: 1

Richie Bendall

Reputation: 9192

You can expand the two input batches, apply the row-wise cosine similarity operation, then reshape the result back into a matrix:

Non-cloning equivalents of torch.repeat_interleave and torch.repeat are used.

from torch.nn import functional as F

def cosine_distance_matrix(x, y):
    return F.cosine_similarity(
        x.view(x.size(0), 1, x.size(1)).expand(x.size(0), y.size(0), x.size(1)).contiguous().view(-1, x.size(1)),
        y.expand(x.size(0), y.size(0), y.size(1)).flatten(end_dim=1),
    ).view(x.size(0), y.size(0))

cosine_distance_matrix(x, y)

Here's a version that works with any pairwise distance function:

def distance_matrix(x, y, distance_function):
    return distance_function(
        x.view(x.size(0), 1, x.size(1)).expand(x.size(0), y.size(0), x.size(1)).contiguous().view(-1, x.size(1)),
        y.expand(x.size(0), y.size(0), y.size(1)).flatten(end_dim=1),
    ).view(x.size(0), y.size(0))
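For instance, it works with PyTorch's built-in row-wise F.pairwise_distance; any function that maps two (N, k) batches to N values will do (the shapes below are arbitrary):

import torch
from torch.nn import functional as F

x = torch.randn(3, 5)
y = torch.randn(4, 5)

print(distance_matrix(x, y, F.pairwise_distance).shape)   # torch.Size([3, 4])
print(distance_matrix(x, y, F.cosine_similarity).shape)   # torch.Size([3, 4])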

Upvotes: 0

Minstein

Reputation: 582

You could use TorchMetrics' pairwise_cosine_similarity (from torchmetrics.functional) to calculate the cosine similarity of two matrices with different shapes. Refer to https://torchmetrics.readthedocs.io/en/stable/pairwise/cosine_similarity.html

>>> import torch
>>> from torchmetrics.functional import pairwise_cosine_similarity
>>> x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)
>>> y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)
>>> pairwise_cosine_similarity(x, y)
tensor([[0.5547, 0.8682],
        [0.5145, 0.8437],
        [0.5300, 0.8533]])
>>> pairwise_cosine_similarity(x)
tensor([[0.0000, 0.9989, 0.9996],
        [0.9989, 0.0000, 0.9998],
        [0.9996, 0.9998, 0.0000]])
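Note that with a single input the diagonal is zeroed out by default; if I recall the API correctly, passing zero_diagonal=False keeps the self-similarities of 1.0 on the diagonal:

>>> pairwise_cosine_similarity(x, zero_diagonal=False)
tensor([[1.0000, 0.9989, 0.9996],
        [0.9989, 1.0000, 0.9998],
        [0.9996, 0.9998, 1.0000]])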

Upvotes: 3

lawson

Reputation: 1

There is no need to use a loop to calculate the similarity between the row/column vectors of a matrix. Here is an example.

import torch as t

a = t.randn(2, 4)
print(a)

# Step 1: compute the length (L2 norm) of each row vector
len_a = t.sqrt(t.sum(a**2, dim=-1))
print(len_a)

b = len_a.unsqueeze(1).expand(-1, 2)
c = len_a.expand(2, -1)
# print(b)
# print(c)

# Step 2: compute the dot products between all pairs of rows
x = a @ a.T
print(x)

# Step 3: divide by the products of the norms to get the final result
res = x / (b * c)
print(res)
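The two expand calls can also be avoided entirely by broadcasting an outer product of the norms (a small variation on the same idea):

import torch as t

a = t.randn(2, 4)
len_a = a.norm(dim=-1)  # row norms, shape (2,)
res = (a @ a.T) / (len_a[:, None] * len_a[None, :])
print(res)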

Upvotes: 0

Zhang Yu

Reputation: 669

Adding eps for numerical stability, based on benjaminplanche's answer:

import torch

def sim_matrix(a, b, eps=1e-8):
    """
    Added eps for numerical stability.
    """
    a_n, b_n = a.norm(dim=1)[:, None], b.norm(dim=1)[:, None]
    a_norm = a / torch.max(a_n, eps * torch.ones_like(a_n))
    b_norm = b / torch.max(b_n, eps * torch.ones_like(b_n))
    sim_mt = torch.mm(a_norm, b_norm.transpose(0, 1))
    return sim_mt

Upvotes: 25

benjaminplanche

Reputation: 15129

By manually computing the similarity and playing with matrix multiplication + transposition:

import torch
from scipy import spatial
import numpy as np

a = torch.randn(2, 2)
b = torch.randn(3, 2) # different row number, for the fun

# Given that cos_sim(u, v) = dot(u, v) / (norm(u) * norm(v))
#                          = dot(u / norm(u), v / norm(v))
# We first normalize the rows, before computing their dot products via transposition:
a_norm = a / a.norm(dim=1)[:, None]
b_norm = b / b.norm(dim=1)[:, None]
res = torch.mm(a_norm, b_norm.transpose(0,1))
print(res)
#  0.9978 -0.9986 -0.9985
# -0.8629  0.9172  0.9172

# -------
# Let's verify with numpy/scipy if our computations are correct:
a_n = a.numpy()
b_n = b.numpy()
res_n = np.zeros((2, 3))
for i in range(2):
    for j in range(3):
        # cos_sim(u, v) = 1 - cos_dist(u, v)
        res_n[i, j] = 1 - spatial.distance.cosine(a_n[i], b_n[j])
print(res_n)
# [[ 0.9978022  -0.99855876 -0.99854881]
#  [-0.86285472  0.91716063  0.9172349 ]]
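Equivalently, the normalization step can be delegated to torch.nn.functional.normalize, which also guards against zero rows via its eps argument (a minimal variant of the same computation):

import torch
import torch.nn.functional as F

a = torch.randn(2, 2)
b = torch.randn(3, 2)

# Normalize rows to unit length, then take all pairwise dot products.
res = F.normalize(a, dim=1) @ F.normalize(b, dim=1).T
print(res)  # (2, 3) similarity matrix, as above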

Upvotes: 37
