Georg Sievelson

Reputation: 300

Multi-dimensional diff/gradient in Julia

I am looking for an efficient way to compute the derivatives of a multidimensional array in Julia. To be precise, I would like to have an equivalent of numpy.gradient in Julia. However, Julia's built-in diff works only on vectors and matrices.

It is straightforward to extend Julia's diff so it works on 3-dimensional arrays, e.g. with

function diff3D(A::Array, dim::Integer)
    if dim == 1
        [A[i+1,j,k] - A[i,j,k] for i=1:size(A,1)-1, j=1:size(A,2), k=1:size(A,3)]
    elseif dim == 2
        [A[i,j+1,k] - A[i,j,k] for i=1:size(A,1), j=1:size(A,2)-1, k=1:size(A,3)]
    elseif dim == 3
        [A[i,j,k+1] - A[i,j,k] for i=1:size(A,1), j=1:size(A,2), k=1:size(A,3)-1]
    else
        throw(ArgumentError("dimension dim must be 1, 2, or 3, got $dim"))
    end
end

which would work with e.g.

a = [i*j*k for i in 1:10, j in 1:10, k in 1:20]

However, this approach cannot be extended to arrays of arbitrary dimension, and the boundaries are not taken into account, so the result cannot have the same dimensions as the original array (whereas numpy.gradient returns an array of the same shape as its input).

I have some ideas to implement an analogue of numpy's gradient in Julia, but I fear they would be extremely slow and ugly, hence my questions: is there a canonical way to do this in Julia that I missed? And if there is none, what would be optimal?
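For what it's worth, one way to get the numpy.gradient behaviour described above (same-size output, boundaries handled with one-sided differences) is a sketch like the following. It assumes unit grid spacing and at least two points along the chosen dimension; `gradient1` is an illustrative name, not a library function, and it uses `selectdim`, which only exists in Julia 1.x (i.e. it postdates this question):

```julia
# Sketch of a numpy.gradient-style derivative along one dimension:
# central differences in the interior, one-sided differences at the
# boundaries, so the output has the same size as the input.
# Assumes unit grid spacing and size(A, d) >= 2 (Julia 1.x syntax).
function gradient1(A::AbstractArray{T,N}, d::Integer) where {T,N}
    1 <= d <= N || throw(ArgumentError("d must be between 1 and $N, got $d"))
    n = size(A, d)
    G = similar(A, float(T))
    # interior points: (A[i+1] - A[i-1]) / 2 along dimension d
    selectdim(G, d, 2:n-1) .= (selectdim(A, d, 3:n) .- selectdim(A, d, 1:n-2)) ./ 2
    # boundaries: forward/backward one-sided differences
    selectdim(G, d, 1) .= selectdim(A, d, 2) .- selectdim(A, d, 1)
    selectdim(G, d, n) .= selectdim(A, d, n) .- selectdim(A, d, n-1)
    return G
end
```

Calling `gradient1(a, d)` for each `d in 1:ndims(a)` would then give the full gradient, analogous to what numpy.gradient returns.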

Thanks.

Upvotes: 4

Views: 3315

Answers (2)

digbyterrell

Reputation: 3619

An even simpler way to do it:

mydiff(A::AbstractArray, dim) = mapslices(diff, A, dim)

Not sure how this would compare in terms of speed though.

Edit: Maybe slightly slower, but this is a more general approach to extending functions to higher-dimensional arrays:

julia> using BenchmarkTools

julia> function mydiff{T,N}(A::Array{T,N}, dim::Int)
           @assert dim <= N
           idxs_1 = [1:size(A,i) for i in 1:N]
           idxs_2 = copy(idxs_1)
           idxs_1[dim] = 1:(size(A,dim)-1)
           idxs_2[dim] = 2:size(A,dim)
           return A[idxs_2...] - A[idxs_1...]
       end
mydiff (generic function with 1 method)

julia> X = randn(500,500,500);

julia> @benchmark mydiff($X,3)
BenchmarkTools.Trial: 
  samples:          3
  evals/sample:     1
  time tolerance:   5.00%
  memory tolerance: 1.00%
  memory estimate:  2.79 gb
  allocs estimate:  22
  minimum time:     2.05 s (15.64% GC)
  median time:      2.15 s (14.62% GC)
  mean time:        2.16 s (11.05% GC)
  maximum time:     2.29 s (3.61% GC)

julia> @benchmark mapslices(diff,$X,3)
BenchmarkTools.Trial: 
  samples:          2
  evals/sample:     1
  time tolerance:   5.00%
  memory tolerance: 1.00%
  memory estimate:  1.99 gb
  allocs estimate:  3750056
  minimum time:     2.52 s (7.90% GC)
  median time:      2.61 s (9.17% GC)
  mean time:        2.61 s (9.17% GC)
  maximum time:     2.70 s (10.37% GC)
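Both versions above allocate the full result in one go; a comparably terse alternative is a broadcast over two shifted views, which avoids the per-slice function calls that mapslices makes. This is only a sketch (`viewdiff` is a made-up name), and it relies on `selectdim`, which is available in Julia 1.x but not in the 0.x versions benchmarked here:

```julia
# Difference along dimension d via two shifted views; the broadcast
# subtracts neighbouring slices without materializing them first.
viewdiff(A::AbstractArray, d::Integer) =
    selectdim(A, d, 2:size(A, d)) .- selectdim(A, d, 1:size(A, d)-1)
```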

Upvotes: 1

IainDunning

Reputation: 11664

I'm not too familiar with diff, but from what I understand about what it's doing, I've made an n-dimensional implementation that uses Julia features like parametric types and splatting:

function mydiff{T,N}(A::Array{T,N}, dim::Int)
    @assert dim <= N
    idxs_1 = [1:size(A,i) for i in 1:N]
    idxs_2 = copy(idxs_1)
    idxs_1[dim] = 1:(size(A,dim)-1)
    idxs_2[dim] = 2:size(A,dim)
    return A[idxs_2...] - A[idxs_1...]
end

with some sanity checks:

A = rand(3,3)
@assert diff(A,1) == mydiff(A,1)  # Base diff vs my impl.
@assert diff(A,2) == mydiff(A,2)  # Base diff vs my impl.

A = rand(3,3,3)
@assert diff3D(A,3) == mydiff(A,3)  # Your impl. vs my impl.

Note that there are more magical ways to do this, like using code generation to make specialized methods up to some finite dimension, but I think that's probably not needed to get good-enough performance.
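In later Julia versions the dimension-generic loop that such code generation would produce can be written directly with CartesianIndices, which the compiler specializes per array dimensionality. A sketch under that assumption (`loopdiff` is a hypothetical name; this is 1.x syntax, so it won't run on the 0.x versions these answers target):

```julia
# Dimension-generic diff: iterate the output with CartesianIndices and
# subtract each element from its neighbour one step along `dim`.
function loopdiff(A::AbstractArray{T,N}, dim::Integer) where {T,N}
    sz = ntuple(i -> i == dim ? size(A, i) - 1 : size(A, i), N)
    B = similar(A, sz)
    # unit step along dimension `dim`, zero along all others
    step = CartesianIndex(ntuple(i -> i == dim ? 1 : 0, N))
    @inbounds for I in CartesianIndices(B)
        B[I] = A[I + step] - A[I]
    end
    return B
end
```

Unlike the slicing version, this allocates only the result array, with no intermediate copies of the two shifted blocks.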

Upvotes: 4
