Reputation: 103
x = array([0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. ])
nbd = array([array([1], dtype=int64), array([0, 2], dtype=int64), array([1, 3], dtype=int64),
array([2, 4], dtype=int64), array([3, 5], dtype=int64), array([4, 6], dtype=int64),
array([5, 7], dtype=int64), array([6, 8], dtype=int64), array([7, 9], dtype=int64),
array([ 8, 10], dtype=int64), array([9], dtype=int64)], dtype=object)
nbd is an array of arrays holding indices into x. I am looking for an array of arrays 'x_nbd' that holds the x values at the nbd indices, i.e.,
x_nbd = array([array([0.5]), array([0., 1.]), array([0.5, 1.5]), array([1., 2.]),
array([1.5, 2.5]), array([2., 3.]), array([2.5, 3.5]),
array([3., 4.]), array([3.5, 4.5]), array([4., 5.]), array([4.5])],
dtype=object)
I have tried x_nbd = x[nbd], but with no luck. I found a way, but it is very slow for large data, as it runs each element through a loop:
x_nbd = np.array([np.array(x[e]) for e in nbd])
Moreover, it shows "VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray."
Is there a faster way to get this in Python, or how can I actually achieve it using NumPy?
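A minimal sketch of one way to avoid that warning, assuming the x and nbd defined above and import numpy as np: pre-allocate an object array and fill it in one assignment, so NumPy never has to infer a dtype from arrays of different lengths.
x_nbd = np.empty(len(nbd), dtype=object)   # 1d container of Python objects
x_nbd[:] = [x[i] for i in nbd]             # each x[i] fancy-indexes x with one neighbour-index array
The fill is still a Python-level loop, but it produces exactly the object array shown in the expected output, without the deprecation warning.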
Upvotes: 0
Views: 440
Reputation: 231615
Forget about the unnecessary array wrappers. The simple list comprehension is the right way to go from a list of indices to a list of arrays. There's nothing "multidimensional" about this.
In [84]: array=np.array
...: int64=np.int64
...: x = array([0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. ])
...:
...: nbd = [array([1], dtype=int64), array([0, 2], dtype=int64), array([1, 3], dtype=int64),
...: array([2, 4], dtype=int64), array([3, 5], dtype=int64), array([4, 6], dtype=int64),
...: array([5, 7], dtype=int64), array([6, 8], dtype=int64), array([7, 9], dtype=int64),
...: array([ 8, 10], dtype=int64), array([9], dtype=int64)]
In [85]: [x[i] for i in nbd]
Out[85]:
[array([0.5]),
array([0., 1.]),
array([0.5, 1.5]),
array([1., 2.]),
array([1.5, 2.5]),
array([2., 3.]),
array([2.5, 3.5]),
array([3., 4.]),
array([3.5, 4.5]),
array([4., 5.]),
array([4.5])]
Now, if the indexing arrays were all the same size, you could make a 2d array:
In [86]: np.stack(nbd[1:-1])
Out[86]:
array([[ 0, 2],
[ 1, 3],
[ 2, 4],
[ 3, 5],
[ 4, 6],
[ 5, 7],
[ 6, 8],
[ 7, 9],
[ 8, 10]])
In [87]: x[np.stack(nbd[1:-1])]
Out[87]:
array([[0. , 1. ],
[0.5, 1.5],
[1. , 2. ],
[1.5, 2.5],
[2. , 3. ],
[2.5, 3.5],
[3. , 4. ],
[3.5, 4.5],
[4. , 5. ]])
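If the full ragged result (endpoints included) is still wanted, one possible sketch, assuming the x and nbd from the session above (the name inner is only for illustration), is to vectorise the interior rows and patch in the two endpoints:
inner = x[np.stack(nbd[1:-1])]        # (9, 2) array of interior neighbour values
x_nbd = np.empty(len(nbd), dtype=object)
x_nbd[0] = x[nbd[0]]                  # array([0.5])
x_nbd[-1] = x[nbd[-1]]                # array([4.5])
x_nbd[1:-1] = list(inner)             # rows of the vectorised 2d result
This keeps the bulk of the work in a single 2d fancy-indexing call and only treats the two single-neighbour endpoints specially.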
Upvotes: 1
Reputation: 1383
You could try
x_nbd = array([x[nbd[i]] for i in range(1, len(x) - 1)])
I noticed you have an array([0.5]) at the start of your expected output, which I assume is a typo, but you could easily prepend this element without changing the time complexity.
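A sketch of how that prepending (plus the matching append for the last element) might look, assuming numpy is imported as np and using interior and pieces as illustrative names:
interior = [x[nbd[i]] for i in range(1, len(x) - 1)]          # the two-neighbour rows
pieces = [np.array([x[1]])] + interior + [np.array([x[-2]])]  # add array([0.5]) and array([4.5])
x_nbd = np.empty(len(pieces), dtype=object)
x_nbd[:] = pieces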
Upvotes: 0