sour_proton

Reputation: 69

Where are these allocations coming from and how does declaring the parameters' types prevent them?

So I'm learning Julia by solving ProjectEuler problems and I came up with this code for problem 27:

function isPrime(num)
    num < 2 && return false
    for i in 2:trunc(Int, √num)
        if num % i == 0
            return false
        end
    end
    return true
end

function quadratic(n, a, b)
    return n^2 + a*n + b
end

function consecutivePrimes(a, b)
    count = 0
    tryNum = 0
    while true
        currResult = quadratic(tryNum, a, b)
        !isPrime(currResult) && (return count)
        count += 1
        tryNum += 1
    end
end

function longestConsecutivePrimes(modA, modB)
    longest = (0,0,0)
    for a in -modA:modA, b in -modB:modB
        consecutive = consecutivePrimes(a, b)
        longest[1] < consecutive && (longest = (consecutive, a, b))
    end
    return longest
end

A, B = 999, 1000
@time size, a, b = longestConsecutivePrimes(A, B)
println("size: $size, a: $a, b: $b")
println("a*b = $(a*b)")

Which yields

  0.106938 seconds (36.28 k allocations: 2.239 MiB, 38.39% compilation time)
size: 71, a: -61, b: 971
a*b = -59231

Now, after annotating the parameters of the function longestConsecutivePrimes as

function longestConsecutivePrimes(modA::Int64, modB::Int64)

The script, when run in a separate terminal independent of the one before, yields

  0.064875 seconds (1 allocation: 16 bytes)
size: 71, a: -61, b: 971
a*b = -59231

Which is twice as fast and makes just one allocation.

Each script run was made in the Linux terminal, not in the Julia REPL, closing the terminal between runs so that one run doesn't "profit" from the other's compilation.

Where were the allocations coming from and how does declaring the parameters' types prevent them?

I feel like understanding this will help me improve previous and future Julia scripts I write.

Upvotes: 2

Views: 98

Answers (2)

Sundar R

Reputation: 14695

Here, the allocations also go away when A and B stop being global variables, so the difference likely comes from type inference being more difficult (i.e. the allocations come from the type-inference code itself).
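As a minimal illustration (a sketch, not the question's code): a function that reads a non-constant global works with values of unknown type, while the same loop over an argument specializes cleanly. The names here are made up for the example.

```julia
# A non-constant global: its type may change at any time, so functions
# reading it must handle values of unknown type (and tend to allocate).
limit = 10_000

function sum_with_global()
    s = 0
    for i in 1:limit   # `limit` is an untyped global here
        s += i
    end
    return s
end

# Same loop, but the bound is a local argument with an inferable type.
function sum_with_arg(n)
    s = 0
    for i in 1:n
        s += i
    end
    return s
end
```

Running `@code_warntype sum_with_global()` flags the loop as operating on `Any`, whereas `sum_with_arg` infers concrete types throughout; marking the global `const limit = 10_000` has much the same effect as passing it in as an argument.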

"Is it considered bad practice to leave the end of my script as is, outside a function? Would it be more Julian to wrap it in a function that prints the result and the benchmarks?"

Of particular importance here is to avoid (non-constant) global variables. It's fine to have simple print statements (or @benchmark lines with setup code) in the global scope, but anything more complicated will likely need non-constant variables and hence should be put into a function as the manual recommends. (Also see the "Annotate values taken from untyped locations" section in the same manual page, which is what your second script is doing.)
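A minimal sketch of that structure (using a stand-in computation and an illustrative `main` name, not the question's solver):

```julia
# Definitions stay at top level.
square_sum(n) = sum(i^2 for i in 1:n)

# The "script" part goes into a function, so A is a local variable
# whose type the compiler knows.
function main()
    A = 1000
    @time result = square_sum(A)
    println("sum of squares up to $A = $result")
    return result
end

main()
```

The same pattern applies to the question's script: keep the function definitions at top level and move the `A, B = 999, 1000` line plus the timed call and print statements into such a function.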


Edit: I'll leave this in in case it's useful to anyone else in the future, but this explanation doesn't apply in OP's case as mentioned in the comments.

The first time you @time'd the function, since you were calling it with two Int arguments, Julia automatically compiled a specialized version of the function for modA and modB being Ints. The @time macro included the allocations and time taken for this compilation step in its result as well (as you can see from the "38.39% compilation time" part of the output).

When you changed the function to have type annotations and ran it again, Julia didn't have to compile anything, because a compiled version of longestConsecutivePrimes that takes two Int arguments already existed. So in this run, only the execution of the function itself was timed, and only its own allocations were reported.

So adding the type annotations didn't change the allocations or the running time. What it did accomplish, however, was adding a new "method" to the longestConsecutivePrimes function - one with more specialized arguments. From that point on, any call to the function with purely Int arguments would use your second definition, whereas calls with other types - UInt8 or Int128 or even String, for example - would use your first definition, which is effectively defined for the Any type. In this case, of course, both definitions happen to be identical. But in general, this way of type-specializing functions is core to Julia's multiple dispatch paradigm.
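The dispatch behavior described above can be seen with a toy function (the name `f` is illustrative):

```julia
f(x) = "generic: $x"        # fallback method, applies to Any
f(x::Int) = "integer: $x"   # more specific method; wins for Int arguments

f(3)                # the Int method is chosen
f(3.0)              # falls back to the generic method
f("hi")             # so does any other argument type
length(methods(f))  # 2: both definitions coexist as methods of one function
```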

Upvotes: 2

Oscar Smith

Reputation: 6388

You were timing compilation. If you run the untyped function again, you'll see that it runs without extra allocation.
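One way to see this (a sketch with an illustrative function): time the same call twice in one session. The first @time includes compilation; the second measures only the run itself.

```julia
g(n) = sum(1:n)

@time g(10^6)   # first call: compilation time and allocations included
@time g(10^6)   # second call: already compiled, so only the run is measured
```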

Upvotes: 1
