Reputation: 969
I am aware that numpy arrays are pointer arrays, and I know that it is possible to define pointers in Python. But I am wondering: if I set a variable equal to an element of a numpy vector, is it still a pointer, or is it dereferenced (copied by value)? Is there a way I can find out or test this?
Example
import scipy
vec = scipy.randn(10)
vecptr = vec # vecptr is a pointer to vec
vecval = scipy.copy(vec) # vecval is not a pointer.
var = vec[3] # is var pointer or is it copied by value ???
print(type(var)) # returns numpy.float64. Does this mean it's a 1x1 numpy vec and therefore a pointer?
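One test I can think of (just a rough sketch, using plain numpy instead of the scipy wrappers, and numpy's shares_memory helper) is to check whether the extracted value shares memory with the original array:
import numpy as np

vec = np.random.randn(10)
var = vec[3]     # plain indexing, like above
sub = vec[3:4]   # one-element slice, for comparison

# True means the object still uses vec's memory, False means it was copied
print(np.shares_memory(vec, var))
print(np.shares_memory(vec, sub))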
The reason I ask is that I really want to know whether the code below will double up my memory. I am trying to give more meaningful variable names to the elements of a vector that is returned:
v = self.viewCoefs[sz][sv][sa]
gw = v[0]
G0 = v[1]
G1 = v[2]
G2 = v[3]
alpha0 = v[4]
alpha1 = v[5]
alpha2 = v[6]
beta0 = v[7]
beta1 = v[8]
beta2 = v[9]
beta3 = v[10]
gamma0 = v[11]
gamma1 = v[12]
gamma2 = v[13]
gamma3 = v[14]
gamma4 = v[15]
delta0 = v[16]
delta1 = v[17]
delta2 = v[18]
delta3 = v[19]
delta4 = v[20]
delta5 = v[21]
zeta_prime_0 = v[22]
zeta_prime_1 = v[23]
zeta_prime_2 = v[24]
Gamma_prime_0 = v[25]
Gamma_prime_1 = v[26]
Gamma_prime_2 = v[27]
Gamma_prime_3 = v[28]
Because I have lots of expressions like these to follow:
p0 = alpha0 + alpha1*scipy.log(bfrac) + alpha2*scipy.log(bfrac)**2
p1 = beta0 + beta1*scipy.log(bfrac) + beta2*scipy.log(bfrac)**2 + beta3*scipy.log(bfrac)**3
p2 = gamma0 + gamma1*scipy.log(bfrac) + gamma2*scipy.log(bfrac)**2 + gamma3*scipy.log(bfrac)**3 + gamma4*scipy.log(bfrac)**4
p3 = delta0 + delta1*scipy.log(bfrac) + delta2*scipy.log(bfrac)**2 + delta3*scipy.log(bfrac)**3 + delta4*scipy.log(bfrac)**4 + delta5*scipy.log(bfrac)**5
subSurfRrs = g*(p0*u + p1*u**2 + p2*u**3 + p3*u**4)
## and lots more
So I would like meaningful variable names without doubling my memory footprint.
Okay, if I got it right, the solution that does NOT double up my memory is:
v = self.viewCoefs[sz][sv][sa]
gw = v[0:1]
G0 = v[1:2]
G1 = v[2:3]
G2 = v[3:4]
alpha0 = v[4:5]
alpha1 = v[5:6]
alpha2 = v[6:7]
beta0 = v[7:8]
beta1 = v[8:9]
beta2 = v[9:10]
beta3 = v[10:11]
## etc
p0 = alpha0[0] + alpha1[0]*scipy.log(bfrac) + alpha2[0]*scipy.log(bfrac)**2
p1 = beta0[0] + beta1[0]*scipy.log(bfrac) + beta2[0]*scipy.log(bfrac)**2 + beta3[0]*scipy.log(bfrac)**3
## etc
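And I suppose I can sanity-check that these one-element slices really are views (a quick sketch with a random stand-in for the real coefficient vector):
import numpy as np

v = np.random.randn(27)   # stand-in for self.viewCoefs[sz][sv][sa]
alpha0 = v[4:5]           # one-element slice
print(alpha0.base is v)   # True: the slice is a view onto v's data
print(type(v[4]))         # numpy.float64: plain indexing gives a copied scalar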
Upvotes: 4
Views: 379
Reputation: 25823
Views are very useful, and using them well can save quite a bit of memory, but in your case I don't think views are appropriate. While a view does reuse the underlying data, I would not call it a pointer. Each view is a unique ndarray object, meaning it has its own properties, for example its shape:
In [4]: a = np.arange(7)
In [5]: b = a[1:5]
In [6]: b.shape = (2,2)
In [7]: b
Out[7]:
array([[1, 2],
       [3, 4]])
In [8]: a.shape
Out[8]: (7,)
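The underlying data is still shared, though; continuing the same session, writing through the reshaped view changes a as well:
In [9]: b[0, 0] = 99
In [10]: a
Out[10]: array([ 0, 99,  2,  3,  4,  5,  6])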
So when you do b = a[0:1], you're creating a brand new ndarray object to hold one int/float/... or whatever. If you want to have meaningful names for each element of your array, you're probably not going to get much more efficient than:
v = self.viewCoefs[sz][sv][sa]
gw = v[0]
G0 = v[1]
G1 = v[2]
G2 = v[3]
alpha0 = v[4]
## etc
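The memory cost of extracting the elements like this is tiny: each name is bound to a standalone numpy.float64 scalar holding 8 bytes of data plus a small fixed object overhead, so even with dozens of coefficient names you are nowhere near doubling your footprint. For example, with a random stand-in for the coefficient vector:
In [11]: v = np.random.randn(27)   # stand-in for self.viewCoefs[sz][sv][sa]
In [12]: v[0].nbytes               # bytes of data in one extracted scalar
Out[12]: 8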
That being said, you should try to see if there is a better way to vectorize your code, meaning write it as operations on whole arrays instead of operations on individual elements. For example, you might write something like:
coefs = np.zeros((4, 6))                      # row k holds the coefficients of p_k
lt = np.tril_indices(4, 2, 6)                 # 3 + 4 + 5 + 6 = 18 slots, filled row by row
coefs[lt] = self.viewCoefs[sz][sv][sa][4:22]  # assuming alpha0..delta5 sit at indices 4-21
p = (coefs * scipy.log(bfrac)**np.arange(6)).sum(-1)
subSurfRrs = g*(p*u**np.arange(1, 5)).sum()
Vectorized code can be much faster with numpy. In this case we also exploit numpy's broadcasting, which I found very confusing until I got to know it a little better and realized how useful it can be.
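As a minimal illustration of the broadcasting used above: a scalar raised to an array of exponents gives an array of powers, and a 1-d row multiplied against a 2-d array gets stretched across its rows:
In [13]: 2.0 ** np.arange(4)
Out[13]: array([ 1.,  2.,  4.,  8.])
In [14]: np.ones((3, 4)) * np.arange(4)
Out[14]:
array([[ 0.,  1.,  2.,  3.],
       [ 0.,  1.,  2.,  3.],
       [ 0.,  1.,  2.,  3.]])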
Upvotes: 1
Reputation: 68682
You almost have it, but here is how to create a view of a single element:
In [1]: import numpy as np
In [23]: v = np.arange(10)
In [24]: a = v[3:4]
In [25]: a[0] = 100
In [26]: v
Out[26]: array([ 0, 1, 2, 100, 4, 5, 6, 7, 8, 9])
Here a is a view of the fourth element of v, so when you change a you change the corresponding position in v.
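By contrast, plain scalar indexing hands you a copy of the value, so later changes to v are not reflected; that is what happens with var = vec[3] in your question. Continuing the session:
In [27]: b = v[3]   # plain indexing: a standalone numpy scalar, not a view
In [28]: v[3] = 7
In [29]: b          # b keeps the old value, so it was copied
Out[29]: 100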
Upvotes: 4