|Posted: May 12, 2016, 12:32 pm - IP Logged|
I prefer making best guesses, which I don't consider predictions. Using math to predict random does not
work IMHO; if it did, a lottery game would not last very long. Even when given 50/50 data choices, I find
they often do worse than expected.
NNs rely on mathematically deduced patterns, of which there are many, but these patterns are a product
of randomness, and the next value is not tied to a game's history. If a NN starts hitting, it's a sure sign that
something is wrong with the RNG.
Thanks for sharing your thoughts, but I disagree in a way. You're correct that predicting randomness with math can be very complex. With lottery, however, my subjective opinion is that we're playing against nothing but a game being MANIPULATED by a human being to create a random factor - not RANDOM as a whole. For instance, in a 5/36 game, if the numbers really were random, you would not find patterns or 3-of-5 repeats every so often. Life itself is not random. Time is not random. Distances are not random. Dimensional spaces are not random. Speed is not random, as you can set it to your specific pace based on the force you apply.
As calculus teaches:
Calculus is about looking at curves we have no way of understanding and instead finding lines we do have some hope of understanding. It's the same with lottery patterns. By analyzing large amounts of data, you will run into a chaos of curves - and not giving up on the research is the key to finding that exact slope of the tangent line which will potentially give a winning number. :-)
Here's proof of my concept. I was finally able to train a network on real lottery data to learn to predict the next winning number - but of course, so far only with past winning numbers.
Example: X equals yesterday's drawn numbers and Y equals today's drawn numbers, provided that the numbers exist. I'm still in the process of learning how to predict numbers without providing the last Y example.
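To make the X/Y setup concrete, here's a minimal sketch in Java of how training pairs could be built from a draw history: each day's numbers become the input and the next day's numbers become the target. The class name and the sample draws are made up for illustration; this is not the poster's actual code.

```java
public class TrainingPairs {
    public static void main(String[] args) {
        // Each row is one day's 5/36 draw, oldest first (made-up data).
        int[][] history = {
            {11, 15, 29, 30, 35},
            { 4, 10, 25, 30, 34},
            {30, 18, 20, 22, 23}
        };
        // X[i] is "yesterday's" draw, Y[i] is "today's" draw.
        int n = history.length - 1;
        int[][] X = new int[n][];
        int[][] Y = new int[n][];
        for (int i = 0; i < n; i++) {
            X[i] = history[i];     // input: day i
            Y[i] = history[i + 1]; // target: day i + 1
        }
        System.out.println(java.util.Arrays.toString(X[0]));
        System.out.println(java.util.Arrays.toString(Y[0]));
    }
}
```

With three days of history this yields two (X, Y) pairs; the last Y in the data is what you'd have to do without when predicting a draw that hasn't happened yet.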
4 layers in total
2 hidden layers of 100 neurons each
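For readers who want to picture that layout, here's a rough Java sketch of a single forward pass through a net of that shape (5 inputs, two hidden layers of 100 sigmoid neurons, 5 outputs) with randomly initialized weights. It only illustrates the architecture - the sigmoid activation, weight scale, and class names are my assumptions, and there's no training loop here.

```java
import java.util.Random;

public class ForwardPass {
    // One fully connected layer with sigmoid activation.
    static double[] layer(double[] in, double[][] w) {
        double[] out = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            double sum = 0;
            for (int j = 0; j < in.length; j++) sum += w[i][j] * in[j];
            out[i] = 1.0 / (1.0 + Math.exp(-sum)); // sigmoid squashes to (0, 1)
        }
        return out;
    }

    static double[][] randomWeights(int rows, int cols, Random rnd) {
        double[][] w = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                w[i][j] = rnd.nextGaussian() * 0.1; // small random init
        return w;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        double[] x = {0.11, 0.15, 0.29, 0.30, 0.35}; // one scaled draw
        // 5 -> 100 -> 100 -> 5: four layers total, two hidden of 100 each
        double[] h1 = layer(x,  randomWeights(100, 5, rnd));
        double[] h2 = layer(h1, randomWeights(100, 100, rnd));
        double[] y  = layer(h2, randomWeights(5, 100, rnd));
        System.out.println(y.length); // one output per ball
    }
}
```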
Iteration 0 Error: 7.456628
Iteration 2000 Error: 3.619826
Iteration 4000 Error: 0.913117
Iteration 6000 Error: 0.003957
Iteration 8000 Error: 0.000681
Iteration 10000 Error: 0.000198
Iteration 12000 Error: 0.000059
Iteration 14000 Error: 0.000019
Iteration 16000 Error: 0.000006
Iteration 18000 Error: 0.000002
Iteration 20000 Error: 0.000001
Input: [[ 0.5 12. 21. 22. 24. ]
[ 0.11 0.15 0.29 0.3 0.35]
[ 0.4 0.1 0.25 0.3 0.34]
[ 0.3 0.18 0.2 0.22 0.23]
[ 0.1 0.8 0.1 0.14 0.27]
[ 0.9 0.11 0.13 0.23 0.32]
[ 0.6 0.11 0.16 0.2 0.36]]
Output: [[ 0.11009014 0.14992358 0.29003687 0.30004684 0.34999857]
[ 0.40039511 0.09986544 0.25012274 0.30010834 0.34000936]
[ 0.30035255 0.17970315 0.20005376 0.22011255 0.229985 ]
[ 0.10014598 0.79975052 0.10005339 0.14006107 0.26999491]
[ 0.90013675 0.10987874 0.13004681 0.23007653 0.32002067]
[ 0.60040263 0.10984396 0.16003662 0.20005118 0.36000606]
[ 0.30037689 0.1497681 0.20013523 0.23010225 0.25999885]]
And all of this was done using Java.
Note that the input and output values were converted to decimals. But the concept is shown: I have successfully trained the neural network to learn the next number fed to it as an example...
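Judging from the printouts, the decimal conversion looks like the ball numbers divided by 100 (e.g. 29 becomes 0.29). That /100 factor is my guess from the data, not something stated above; a sketch of that scaling and its inverse:

```java
public class Scale {
    // Encode a ball number into the (0, 1) range the network works with.
    static double encode(int ball) { return ball / 100.0; }

    // Decode a network output back into a ball number.
    static int decode(double v) { return (int) Math.round(v * 100.0); }

    public static void main(String[] args) {
        int[] draw = {11, 15, 29, 30, 35};
        for (int b : draw) {
            double enc = encode(b);
            System.out.println(b + " -> " + enc + " -> " + decode(enc));
        }
    }
}
```

The round trip matters because the network's outputs (like 0.29003687 above) are only approximately the encoded targets, so rounding after rescaling recovers the intended ball number.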