fedvasu

Reputation: 1252

Root mean square in NumPy, and complications of matrices vs. arrays

Can anyone direct me to the section of the NumPy manual with a function for root mean square calculations? (I know this can be accomplished using np.mean and np.abs, but isn't there a built-in? If not, why? Just curious, no offense.)

Also, can anyone explain the complications of matrices vs. arrays, at least in the following case?

U is a matrix (T-by-N) and Ue is another matrix (T-by-N). I define k as a NumPy array in the following fashion:

k = np.array(U[ind,:])

Note that U[ind,:] is still a matrix.

When I print k, or type k in IPython, it displays the following:

k = array([[ 2.,  3., ...,  9.]])

You see the double square brackets, which make it two-dimensional and give it shape (1, N).

But I can't assign it to an array defined in this way:

l = np.zeros(N)

which has shape (N,).

l[:] = k[:]

This fails with an error that the matrix dimensions are incompatible, since k has shape (1, N) while l has shape (N,).

Is there a way to accomplish the vector assignment I intend? Please don't tell me to do l = k; that defeats the purpose (I then get different errors in the program, and I know the reasons; I can attach the piece of code if needed).

Writing a loop is the dumb way, which is what I'm using for the time being; a sketch follows.
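For concreteness, here is a minimal sketch of the situation (N and ind are just placeholder values):

import numpy as np

N = 5
U = np.matrix(np.arange(float(N * N)).reshape(N, N))
ind = 1

k = np.array(U[ind, :])   # the matrix row stays 2-D: k.shape == (1, N)
l = np.zeros(N)           # l.shape == (N,)

# the loop workaround mentioned above:
for i in range(N):
    l[i] = k[0, i]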

I hope I was able to explain the problems I'm facing.

Regards

Upvotes: 23

Views: 102493

Answers (7)

Teque5

Reputation: 420

If you have complex vectors and are using PyTorch, the vector norm is the fastest approach on both CPU and GPU:

import torch

batch_size, length = 512, 4096
batch = torch.randn(batch_size, length, dtype=torch.complex64)
# torch.sqrt needs a floating-point tensor, so cast length first
scale = 1 / torch.sqrt(torch.tensor(float(length)))
# L2 norm of each row; norm * scale = norm / sqrt(length) = RMS
rms_power = batch.norm(p=2, dim=-1, keepdim=True)
# divide each row by its RMS to normalize it to unit RMS
batch_rms = batch / (rms_power * scale)
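As a quick sanity check (a sketch reusing batch_rms from above), each row should now have unit RMS:

check = batch_rms.abs().pow(2).mean(dim=-1).sqrt()
print(check.mean())   # ~1.0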

Using batched vdot as in goodboy's approach is 60% slower than the above. Using a naïve method similar to deprecated's approach is 85% slower than the above.

Upvotes: 0

goodboy
goodboy

Reputation: 191

For RMS, the fastest expression I have found for small x.size (~1024) and real x is:

import numpy as np

def rms(x):
    # x.dot(x) is the sum of squares, computed without an intermediate array
    return np.sqrt(x.dot(x) / x.size)

This seems to be around twice as fast as the linalg.norm version (ipython %timeit on a really old laptop).

If you want complex arrays handled correctly, this also works:

def rms(x):
    # np.vdot conjugates its first argument, so this sums |x|**2
    return np.sqrt(np.vdot(x, x) / x.size)

However, this version is nearly as slow as the norm version and only works for flat arrays.
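A minimal usage sketch (assuming the real-valued rms above):

import numpy as np

x = np.random.randn(1024)
print(rms(x))   # close to 1.0 for standard normal samples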

Upvotes: 13

deprecated
deprecated

Reputation: 2215

For the RMS, I think this is the clearest:

from numpy import mean, sqrt, square, arange
a = arange(10) # For example
rms = sqrt(mean(square(a)))

The code reads like you say it: "root-mean-square".
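For this example it evaluates to sqrt(285/10) ≈ 5.339.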

Upvotes: 90

dashesy
dashesy

Reputation: 2645

I use this for RMS; it's all NumPy, and it takes an optional axis argument like other NumPy functions:

import numpy as np

rms = lambda V, axis=None: np.sqrt(np.mean(np.square(V), axis))
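For example (a small sketch of the axis behavior):

A = np.arange(6, dtype=float).reshape(2, 3)
rms(A)           # scalar: RMS over all six elements
rms(A, axis=0)   # shape (3,): per-column RMS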

Upvotes: 1

Ben
Ben

Reputation: 9733

I don't know why it's not built in. I like

from numpy import sqrt, mean

def rms(x, axis=None):
    return sqrt(mean(x**2, axis=axis))

If you have NaNs in your data, you can do

from numpy import nanmean

def nanrms(x, axis=None):
    return sqrt(nanmean(x**2, axis=axis))
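A quick sketch of the difference (x here is just example data):

import numpy as np

x = np.array([1.0, 2.0, np.nan, 4.0])
print(rms(x))      # nan, because the plain mean propagates NaN
print(nanrms(x))   # sqrt((1 + 4 + 16) / 3) ≈ 2.646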

Upvotes: 8

Xingzhong
Xingzhong

Reputation: 173

For the RMS, how about

from numpy import sqrt
from numpy.linalg import norm

norm(V) / sqrt(V.size)
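Since norm(V) is sqrt(sum(V**2)), dividing by sqrt(V.size) gives exactly sqrt(mean(V**2)), the same RMS as the other answers.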

Upvotes: 8

highBandWidth

Reputation: 17316

Try this:

import numpy as np

N = 4                  # example size, since N isn't defined in the question snippet
U = np.zeros((N, N))   # note: a plain ndarray, not np.matrix
ind = 1
k = np.zeros(N)
k[:] = U[ind, :]       # a row of a 2-D ndarray is already 1-D, so this works
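If U really is a np.matrix, as in the question, two standard NumPy one-liners avoid the copy loop (a sketch, using the question's U and ind):

k = np.asarray(U[ind, :]).ravel()   # convert the (1, N) row to an ndarray, flattened to (N,)

k = U[ind, :].A1                    # matrix.A1 is the flattened 1-D ndarray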

Upvotes: 5
