As others say, the secure RNG can have limited throughput. To mitigate this you can either stretch that secure randomness by seeding a CPRNG, or you can try to optimise your usage of the bitstream.
To shuffle a pack of cards, for example, you need only about 226 bits, but a naive algorithm (calling `nextInt(n)` for each card) will likely use 1600 or 3200 bits, wasting 85% of your entropy and making you seven times more susceptible to delays.
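Those figures are easy to check. Below is a rough back-of-the-envelope calculation (a sketch only; the class name `EntropyBudget` is just for illustration, and the 32-bit/64-bit cost per call mirrors the assumption behind the 1600/3200 figures above):

import java.util.stream.IntStream;

public class EntropyBudget {
    public static void main(String[] args) {
        // Entropy actually needed for a fair shuffle: log2(52!) bits.
        double needed = IntStream.rangeClosed(2, 52)
                .mapToDouble(k -> Math.log(k) / Math.log(2))
                .sum();
        // Naive consumption: one full 32-bit (or 64-bit) word per nextInt(n)
        // call, one call per card.
        double naive32 = 52 * 32;
        double naive64 = 52 * 64;
        System.out.printf("needed   ~ %.1f bits%n", needed);                    // ~225.6
        System.out.printf("naive    ~ %.0f or %.0f bits%n", naive32, naive64);  // 1664 or 3328
        System.out.printf("wasted   ~ %.0f%%%n", 100 * (1 - needed / naive32)); // ~86%
        System.out.printf("overdraw x%.1f%n", naive32 / needed);                // ~7.4
    }
}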
For this situation I think the Doctor Jacques method would be appropriate.
To go with that, here's some performance analysis against progressively more costly entropy sources (also contains code):
Bit recycling for scaling random number generators
I would lean towards efficient usage rather than stretching, because I think it would be a lot easier to prove the fairness of an efficient consumer of a trustworthy entropy stream than to prove the fairness of any drawing method with a well-seeded PRNG.
EDIT2: I don't really know Java, but I put this together:
public class MySecureRandom extends java.security.SecureRandom {
    // Invariant: r is uniformly distributed on [0, m). Leftover randomness is
    // recycled between calls instead of being thrown away.
    private long m = 1;
    private long r = 0;

    @Override
    public final int nextInt(int n) {
        while (true) {
            // Top up the pool 32 bits at a time, keeping m below 2^63.
            if (m < 0x80000000L) {
                m <<= 32;
                r <<= 32;
                r += (long) next(32) - Integer.MIN_VALUE; // shift the signed int into [0, 2^32)
            }
            long q = m / n;
            if (r < n * q) {
                // Accept: emit r mod n and keep the quotient as leftover randomness.
                int x = (int) (r % n);
                m = q;
                r /= n;
                return x;
            }
            // Reject the top sliver [n*q, m) and retry with the remainder.
            m -= n * q;
            r -= n * q;
        }
    }
}
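For the card-shuffle case itself, an instance can be handed straight to `Collections.shuffle`, which performs a Fisher-Yates pass and calls `nextInt(bound)` once per position, so it exercises exactly the method above (a minimal sketch; the `ShuffleDemo` class name is just for illustration):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ShuffleDemo {
    public static void main(String[] args) {
        // Build a 52-card deck as simple indices 0..51.
        List<Integer> deck = new ArrayList<>();
        for (int card = 0; card < 52; card++) {
            deck.add(card);
        }
        // Collections.shuffle calls nextInt(bound) once per position,
        // so it draws from the bit-recycling generator above.
        Collections.shuffle(deck, new MySecureRandom());
        System.out.println(deck);
    }
}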
This does away with the greedy default uniform [0,n-1] generator and replaces it with a modified Doctor Jacques version. Timing a card-shuffle range of values shows almost a 6x speed-up over the default `SecureRandom.nextInt(n)`.
My previous version of this code (only a 2x speed-up) assumed that `SecureRandom.next(b)` was efficient, but it turns out that call was discarding entropy and dragging the whole loop down. This version manages its own 32-bit chunking.
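If you want to reproduce that timing comparison, a rough harness along these lines should do. It is only a sketch, not a rigorous benchmark, and the deck size, round count and warm-up figures are arbitrary choices:

import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ShuffleTiming {
    // Shuffle a 52-card deck `rounds` times with the given generator and
    // return the elapsed time in nanoseconds.
    static long timeShuffles(Random rng, int rounds) {
        List<Integer> deck = new ArrayList<>();
        for (int card = 0; card < 52; card++) {
            deck.add(card);
        }
        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            Collections.shuffle(deck, rng);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int rounds = 100_000;
        // Warm up both generators (and the JIT) before timing.
        timeShuffles(new SecureRandom(), 1_000);
        timeShuffles(new MySecureRandom(), 1_000);
        System.out.printf("stock SecureRandom: %d ms%n",
                timeShuffles(new SecureRandom(), rounds) / 1_000_000);
        System.out.printf("MySecureRandom:     %d ms%n",
                timeShuffles(new MySecureRandom(), rounds) / 1_000_000);
    }
}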