the_spectator

Reputation: 1623

Better way of writing nested for loops in Elixir

Please suggest a better (more idiomatic Elixir) way of writing the following C code in Elixir.

int some_num = 0;

for(int i = 0; i < 100; i++){
  for(int j = 0; j < 1000; j++){
    for(int k = 0; k < 10000; k++){
      some_num += 1;
    }
  }
}
printf("%d", some_num);

And can it be implemented in a way that benefits from Elixir's concurrency?

EDIT: A bit of background: I am new to Elixir and still learning. The main motive of the question was to write more idiomatic Elixir code rather than to apply concurrency.

Upvotes: 2

Views: 1556

Answers (2)

GavinBrelstaff

Reputation: 3069

Below is a working example of how you could achieve concurrency for your rather toy problem of performing 1,000,000,000 increment operations, in case you are curious about how it can be done.

The code below spawns 100 Elixir processes, corresponding to your outer loop. The inner code (the two nested loops, written in more idiomatic form using Enum.reduce) thus runs concurrently, as far as the VM allows. The result of each process is sent to a dedicated receiver process that counts down from 100 each time it receives a new result. Each subtotal gets added to the grand total, which is printed out once 100 subtotals have been received.

To test: save the code as a file nested.ex, compile it in the Elixir shell using c "nested.ex", then run it in that shell with Main.main. You should see the following output:

iex(4)> Main.main    
:ok
total = 1000000000

with the :ok coming a few seconds before the total. You should also see high multi-core CPU usage.

defmodule Main do
  def main do
    # One receiver process collects the 100 subtotals.
    pid = spawn(fn -> Receiver.loop(100, 0) end)
    # One worker process per iteration of the outer loop.
    1..100 |> Enum.each(fn _ -> spawn(fn -> Nested.run(pid) end) end)
  end
end

#################################
defmodule Nested do
  def run(pid) do
    # The two inner loops as nested reduces: 1_000 * 10_000 increments.
    sub_total =
      Enum.reduce(1..1_000, 0, fn _, acc_n ->
        acc_n + Enum.reduce(1..10_000, 0, fn _, acc_m -> acc_m + 1 end)
      end)

    # Report the subtotal; the worker process then terminates normally.
    send(pid, sub_total)
  end
end

#################################
defmodule Receiver do
  # All 100 subtotals received: print the grand total and finish.
  def loop(0, total) do
    IO.puts("total = #{total}")
  end

  # Count down to zero, accumulating subtotals as they arrive.
  def loop(count_down, total) do
    receive do
      sub_total ->
        loop(count_down - 1, sub_total + total)
    end
  end
end
#################################

Parallelism across machines, beyond the pure concurrency above, can be obtained by judiciously converting from plain spawn to Node.spawn (see the docs).
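A rough sketch of what that conversion might look like, assuming the nodes are already named and connected (e.g. via Node.connect/1) and that this module plus nested.ex are compiled on every node so Nested and Receiver exist there. DistMain and the round-robin choice of target node are my own illustration, not part of the answer above:

defmodule DistMain do
  def main do
    # This node plus any connected peers.
    nodes = [node() | Node.list()]
    # The collector stays on the local node; remote workers can still send
    # to it, since pids are location-transparent in distributed Erlang.
    pid = spawn(fn -> Receiver.loop(100, 0) end)

    1..100
    |> Enum.each(fn i ->
      # Round-robin the 100 workers over the available nodes.
      target = Enum.at(nodes, rem(i, length(nodes)))
      Node.spawn(target, fn -> Nested.run(pid) end)
    end)
  end
end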

Informal speed test: testing on my Win10 PC, which reports:

Erlang/OTP 20 [erts-9.0] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:10]
Interactive Elixir (1.8.2) ..

The code I give here computes the result in 16 s, whereas that of @Hauleth takes over 10 minutes, since his seems to get allocated just a single core while mine gets all 4.
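For anyone who wants to reproduce a rough number for the sequential version, a minimal sketch using Erlang's :timer.tc/1 (my own addition, not part of the original measurement):

# Times the purely sequential nested reduce; :timer.tc/1 returns
# {elapsed_microseconds, result}.
{micros, total} =
  :timer.tc(fn ->
    Enum.reduce(1..100, 0, fn _, acc ->
      acc +
        Enum.reduce(1..1_000, 0, fn _, acc2 ->
          acc2 + Enum.reduce(1..10_000, 0, fn _, acc3 -> acc3 + 1 end)
        end)
    end)
  end)

IO.puts("total = #{total} in #{div(micros, 1000)} ms")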

Upvotes: 1

Hauleth

Reputation: 23586

The simplest way to implement exactly what you have written is to use the for macro:

sum =
  for _i <- 1..100,
      _j <- 1..1_000,
      _k <- 1..10_000,
      reduce: 0 do
    acc -> acc + 1
  end

EDIT:

The :reduce option is available in newer versions of Elixir (1.8+). In older versions you can use nested Enum.reduce/3:

Enum.reduce(1..100, 0, fn _, acc ->
  acc + Enum.reduce(1..1_000, 0, fn _, acc ->
    acc + Enum.reduce(1..10_000, 0, fn _, acc ->
      acc + 1
    end)
  end)
end)

About the second part of the question: no, this loop would not gain much from concurrency, and if it changed the timings at all, it would only make them slower. In this particular case it could be written as sum = 100 * 1_000 * 10_000, which could be faster, as it can easily be optimized by the compiler to the constant 1_000_000_000 (IIRC the Erlang compiler cannot optimize the given loop down to a constant).

TL;DR: such an obvious loop cannot be improved by concurrency, and in the general case it is hard to say whether processes (aka parallelization) will help or not. It is also very important to remember that parallel != concurrent, so you will gain no speedup from running N Erlang processes on a machine with N-1 schedulers (by default, the number of CPUs).
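To see how many schedulers (and hence how much potential parallelism) your VM actually has, a quick check you can run in iex; this snippet is my own small addition:

# Defaults to the number of logical CPUs; CPU-bound work cannot run on
# more cores than this, no matter how many processes you spawn.
IO.puts("schedulers online: #{System.schedulers_online()}")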

Upvotes: 10
