I will try integrating the Random123 library into Chai. The Supercomputing 2011 paper Parallel Random Numbers: As Easy as 1, 2, 3 makes a convincing case. There’s also a thread about merging this code into Boost. The license is generous and reasonable too.
To be honest, I really don’t know much about randomness. If I tried to reinvent this wheel, I would likely be fooled by randomness. (Excuse the pun, I could not resist.)
My past experience with machine learning (somewhat vicarious at that) never involved Monte Carlo methods. There was no sampling-based quadrature. Error was calculated by iterating over every member of a cluster. The learning part was finding those clusters of similar things. The clusters implied distributions based on historical data.
To give a sense of my ignorance, my understanding of random numbers was limited to: the truly random (from nature, i.e. hardware generated); the cryptographically secure pseudo-random (deterministically computed in software but indistinguishable from truly random with today’s technology); and the cryptographically insecure pseudo-random. I did not know of quasi-random numbers (low discrepancy, even space-filling).
This is ironic as I still remember Martin Billik’s Hilbert space class as an undergraduate. He was big on measure theory. At the time I took his class, I still believed I was a real math guy.
So the right thing to do is work around my ignorance by relying on the work of people who know far more than I do. This also happens to minimize development time and (potentially) spread good karma to the people behind Random123.