Andrew No

Reputation: 535

Java Random(seed) get random at index?

Explanation below, here is my current code.

Random seeded = new Random(seed);

for (int j = 0; j < 3; j++)
{
    for (int i = 0; i < 10; i++)
        System.out.println(i + " : " + seeded.nextInt(100));
    System.out.println("{}");
    seeded = new Random(seed);
}

That outputs the following

0 : 91
1 : 76
2 : 3
3 : 56
4 : 92
5 : 98
6 : 92
7 : 46
8 : 74
9 : 60
{}
0 : 91
1 : 76
2 : 3
3 : 56
4 : 92
5 : 98
6 : 92
7 : 46
8 : 74
9 : 60
{}
0 : 91
1 : 76
2 : 3
3 : 56
4 : 92
5 : 98
6 : 92
7 : 46
8 : 74
9 : 60
{}

My goal is exactly what is above. I want to make a class that extends Random and includes a getIntAtIndex method. For example, I would want to use the class as follows.

IndexedRandom random = new IndexedRandom(seed);
int iAt30 = random.getIntAtIndex(30);

Currently, to do this, I have the following code.

public int getIntAtIndex(int index)
{
    Random temp = new Random(seed);
    // skip the first `index` values; the next call then yields the value at `index`
    for (int i = 0; i < index; i++)
        temp.nextInt();

    return temp.nextInt();
}

I am simply cycling through the Random the required number of times until it reaches the desired position. However, if I select a large index, like 3,000,000, I would prefer this method be optimized. Is there any way to do that? The method only takes ~0.34 seconds to run on seemingly any device, PC or cell phone; I only asked this question because it seems like a waste of almost 3,000,000 calculations, even if it is not really hard on the system. Why not optimize if it is possible?

Upvotes: 3

Views: 428

Answers (2)

Gerald Mücke

Reputation: 11132

The simple answer is: no. Random is a pseudo-random number generator whose every value is derived from the previous internal state, so it only produces the same results when advanced the same way.

But there are some alternatives to your implementation. Before you optimize, though, you should verify that you actually require optimization, because this smells a bit like premature optimization to me. Why do you want this method optimized? Have you already identified a performance bottleneck to be solved? How often is it invoked? What is the tradeoff for optimizing? And so on.

So, assuming you have very good reasons for optimizing that method, here are some alternatives:

  • Depending on your memory constraints, you can pre-populate a complete array of the random numbers and keep it in memory. For 3M ints this would require ~12 MB. That is probably the most efficient option in terms of speed. If only lower values are requested frequently, you could pre-compute only the first 1000 values, keep them in memory, and compute everything above that on demand. This approach is good if values below 1000 are requested more frequently than higher ones.
  • In case you never generate all 3M values, and the 3M only defines the index range, another approach is to keep already computed values in a map so you can look them up instead of recomputing them. This approach works well if the requested indexes are fairly random but the probability of requesting the same index more than once is high.
  • Another way is to keep the Random instances of previously generated numbers in a map and start computing from the Random whose index is closest below the requested one. For example

    • getIntAtIndex(10) -> no Random for index < 10, compute for 10, map.put(10, temp)
    • getIntAtIndex(5) -> no Random for index < 5, compute for 5, map.put(5,temp)
    • getIntAtIndex(7) -> Random temp = map.remove(5), compute for 7, map.put(7,temp)
    • getIntAtIndex(12) -> Random temp = map.remove(10), compute for 12, map.put(12,temp)

But I wouldn't be surprised if this approach turned out slower than simply recomputing, especially for small indexes, because you have to search the keys on every request.
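That third alternative can be sketched with a TreeMap, whose floorEntry lookup finds the cached Random closest below the requested index. The class and method names here are my own, and this is only a sketch of the idea, not a tuned implementation:

```java
import java.util.Map;
import java.util.Random;
import java.util.TreeMap;

public class CachedIndexedRandom {
    private final long seed;
    // maps the last index drawn to the Random positioned just past that index
    private final TreeMap<Integer, Random> states = new TreeMap<>();

    public CachedIndexedRandom(long seed) { this.seed = seed; }

    public int getIntAtIndex(int index) {
        Map.Entry<Integer, Random> nearest = states.floorEntry(index - 1);
        Random temp;
        int start;
        if (nearest == null) {               // nothing cached below: start over
            temp = new Random(seed);
            start = 0;
        } else {                             // resume just past the cached position
            start = nearest.getKey() + 1;
            temp = states.remove(nearest.getKey());
        }
        int value = 0;
        for (int i = start; i <= index; i++) // advance only the remaining steps
            value = temp.nextInt();
        states.put(index, temp);             // cache the advanced state
        return value;
    }

    public static void main(String[] args) {
        CachedIndexedRandom cached = new CachedIndexedRandom(42L);
        int first = cached.getIntAtIndex(10);  // computed from scratch
        int second = cached.getIntAtIndex(12); // only two further nextInt() calls
        Random check = new Random(42L);
        int v = 0;
        for (int i = 0; i <= 12; i++) {
            v = check.nextInt();
            if (i == 10 && v != first) throw new AssertionError("index 10 mismatch");
        }
        if (v != second) throw new AssertionError("index 12 mismatch");
        System.out.println("cached results match a plain Random");
    }
}
```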

Actually, the implementation of Random is based on basic arithmetic operators, so it should be rather efficient and easy for the compiler to optimize. Before taking any action, I strongly recommend analyzing whether there really is a bottleneck.


Edit

I ran the test with various combinations of index and iteration count, and the result is that optimization is the minimum you should do: it took 100 s to calculate the value for index 100,000 on my 5-year-old Core Duo. The execution time increases linearly with the index, and the impact is much bigger than expected.

This makes all the optimizations listed above completely obsolete.

Assume the use case is that you want reproducible but pseudo-random values for a given input value, different per execution of your program but constant while it is running.

So I suggest a different approach: use the index value as the seed, and add randomness by varying the number of iterations used to calculate the result. The following snippet produces different series for 100 program executions. If you need more randomness, increase the modulo used to calculate the per-run offset, but be aware that the execution time grows linearly with it, so maybe 100 is good enough.

// per-run offset in [0, 100), constant while the program runs
private long seed = System.currentTimeMillis() % 100;

public int getIntAtIndex(int index)
{
    Random temp = new Random(index); // the index itself acts as the seed
    for (int i = 0; i < seed; i++)   // burn a run-specific number of values
        temp.nextInt();
    return temp.nextInt();
}

If gaming is your use case, you should instead initialize one Random with a given seed and compute all required values by reusing that same Random object for the "start parameters" of your game, i.e. if you need 100 int parameters, call nextInt() 100 times.
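That last suggestion amounts to a single shared Random instance. A minimal sketch (the class name and the startParams array are made up for illustration):

```java
import java.util.Arrays;
import java.util.Random;

public class GameSetup {
    public static void main(String[] args) {
        long seed = 12345L;               // e.g. a level seed shared between players
        Random random = new Random(seed); // one Random reused for all parameters

        int[] startParams = new int[100];
        for (int i = 0; i < startParams.length; i++)
            startParams[i] = random.nextInt(); // 100 parameters -> 100 nextInt() calls

        // the same seed reproduces exactly the same parameters
        Random replay = new Random(seed);
        for (int param : startParams)
            if (param != replay.nextInt())
                throw new AssertionError("parameters are not reproducible");

        System.out.println("first parameters: "
                + Arrays.toString(Arrays.copyOf(startParams, 3)));
    }
}
```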

Upvotes: 1

user2357112

Reputation: 281842

java.util.Random is specified to use the following recurrence to advance the seed:

(seed * 0x5DEECE66DL + 0xBL) & ((1L << 48) - 1)

This is a linear congruential generator, and as such, we can advance it N steps in only O(log(N)) time.
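As a sanity check, stepping that recurrence by hand reproduces nextInt(): per the java.util.Random documentation, the constructor scrambles the seed with `seed ^ 0x5DEECE66DL`, and nextInt() returns the top 32 bits of the updated state:

```java
import java.util.Random;

public class RecurrenceCheck {
    public static void main(String[] args) {
        final long a = 0x5DEECE66DL, b = 0xBL;
        final long mask = (1L << 48) - 1;      // m = 2^48

        long seed = 12345L;
        long state = (seed ^ a) & mask;        // the constructor's seed scramble
        Random random = new Random(seed);

        for (int i = 0; i < 5; i++) {
            state = (state * a + b) & mask;    // the specified recurrence
            int mine = (int) (state >>> 16);   // nextInt() = top 32 bits of the state
            if (mine != random.nextInt())
                throw new AssertionError("mismatch at step " + i);
        }
        System.out.println("recurrence matches java.util.Random");
    }
}
```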

Let a = 0x5DEECE66DL, b = 0xBL, and m = 1L << 48. The value of the seed after N steps has the following closed form (not actual Java):

new_seed = (a^N * seed mod m) + (b * ((a^N - 1) mod (a-1)m)/(a-1) mod m)

There are two reasons I didn't try to write this as a valid Java statement. The first is that we need modular exponentiation to evaluate this efficiently, and the second is that we need 80-bit arithmetic to compute (a^N - 1) mod (a-1)m. Both of these problems can be solved using Java's BigInteger class, which provides modular exponentiation and arbitrary-precision integers. Alternatively, we can use a variation on the modular exponentiation algorithm to calculate ((a^N - 1) mod (a-1)m)/(a-1) as (a^(N-1) + a^(N-2) + ... + a + 1) mod m within the bounds of 64-bit arithmetic, saving us from having to go through the overhead of BigInteger.
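One way to realize that 64-bit variation is repeated squaring of the affine map x -> a·x + b, which yields the N-step transform in O(log N) multiplications; the wraparound of Java's long arithmetic is harmless here because 2^48 divides 2^64. The class name is mine, and this only covers the raw nextInt(), not the bounded nextInt(n):

```java
import java.util.Random;

public class SkipAheadRandom {
    private static final long A = 0x5DEECE66DL, B = 0xBL;
    private static final long MASK = (1L << 48) - 1;

    // Advance the 48-bit state n steps in O(log n) multiplications by composing
    // the affine map x -> A*x + B with itself via repeated squaring.
    static long advance(long state, long n) {
        long curA = A, curB = B;   // transform for the current power-of-two step count
        long accA = 1L, accB = 0L; // identity transform
        while (n > 0) {
            if ((n & 1) == 1) {    // fold this power of two into the result
                accB = (curA * accB + curB) & MASK;
                accA = (curA * accA) & MASK;
            }
            curB = ((curA + 1) * curB) & MASK; // square: x -> curA^2*x + (curA+1)*curB
            curA = (curA * curA) & MASK;
            n >>>= 1;
        }
        return (accA * state + accB) & MASK;
    }

    // Value of the index-th nextInt() for a Random constructed with `seed`.
    static int intAtIndex(long seed, long index) {
        long state = (seed ^ A) & MASK;    // the constructor scrambles the seed
        state = advance(state, index + 1); // index 0 is produced on the first step
        return (int) (state >>> 16);       // next(32): top 32 bits of the state
    }

    public static void main(String[] args) {
        long seed = 987654321L;
        Random slow = new Random(seed);
        int expected = 0;
        for (int i = 0; i <= 3_000; i++)
            expected = slow.nextInt();
        if (intAtIndex(seed, 3_000) != expected)
            throw new AssertionError("skip-ahead disagrees with java.util.Random");
        System.out.println("value at index 3000: " + expected);
    }
}
```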

Nayuki has a working demo of an LCG with the ability to skip forward and backward, based on BigInteger. You could use it with the parameters specified for java.util.Random to implement a java.util.Random equivalent with skip-ahead capability, or you could write it yourself.

Upvotes: 5
