
Lottery prediction using Python's NumPy (Someone pitch in?)

Topic closed. 17 replies. Last post 7 months ago by lottoburg.


United States
Member #168877
September 20, 2015
43 Posts
Offline
Posted: March 10, 2016, 11:01 am - IP Logged

Hi,

As of late I have been learning a lot about machine learning, basically trying to understand how computers learn and possibly use this method to gain an "achievement" with the lottery!!? lol - So far I wanted to present what I have figured out in terms of coding, using Python to predict a 5-dimensional vector, i.e. a pick 5 game. What I have is very simple - however, complex if you're not familiar with programming and machine learning. I thought I'd share my code and see if there's anyone here who would like to contribute to this project and make it complete.

So far, the following has been set up:

One and only one training sample for the input X (multiple samples/batches would be needed to predict better), and likewise only one sample for the target y.

There are 5 inputs, 5 outputs and 4 hidden layers.

Randomly initialized weights w(l)jk on each neuron connection (five weight matrices in total).

The sigmoid function is applied at each activation layer.

And sigmoid prime, the derivative of the sigmoid, used to measure the error gradient for the output y-hat.

 

import numpy as np

#Sample training input: one draw as input X, the following draw as target y
X = np.array([[3, 5, 20, 23, 26]], dtype=float)
y = np.array([[3, 20, 25, 28, 30]], dtype=float)

#Normalize both by the maximum ball number, 36, so inputs and targets share a scale
X = X/36.0
y = y/36.0

class Neural_Network(object):
    def __init__(self):
        #define hyperparameters: 5 inputs, four hidden layers of 7 units, 5 outputs
        self.inputLayerSize = 5
        self.outputLayerSize = 5
        self.hiddenLayerSize_1 = 7
        self.hiddenLayerSize_2 = 7
        self.hiddenLayerSize_3 = 7
        self.hiddenLayerSize_4 = 7

        #weights (parameters), randomly initialized
        self.W1 = np.random.randn(self.inputLayerSize, self.hiddenLayerSize_1)
        self.W2 = np.random.randn(self.hiddenLayerSize_1, self.hiddenLayerSize_2)
        self.W3 = np.random.randn(self.hiddenLayerSize_2, self.hiddenLayerSize_3)
        self.W4 = np.random.randn(self.hiddenLayerSize_3, self.hiddenLayerSize_4)
        self.W5 = np.random.randn(self.hiddenLayerSize_4, self.outputLayerSize)

    def forward(self, X):
        #propagate inputs through the network, layer by layer
        self.z2 = np.dot(X, self.W1)
        self.a2 = self.sigmoid(self.z2)
        self.z3 = np.dot(self.a2, self.W2)
        self.a3 = self.sigmoid(self.z3)
        self.z4 = np.dot(self.a3, self.W3)
        self.a4 = self.sigmoid(self.z4)
        self.z5 = np.dot(self.a4, self.W4)
        self.a5 = self.sigmoid(self.z5)
        self.z6 = np.dot(self.a5, self.W5)
        yHat = self.sigmoid(self.z6)
        return yHat

    def sigmoid(self, z):
        #apply the sigmoid activation function to a scalar, vector or matrix
        return 1/(1 + np.exp(-z))

    def sigmoidPrime(self, z):
        #derivative of the sigmoid function
        return np.exp(-z)/((1 + np.exp(-z))**2)

NN = Neural_Network()
yHat = NN.forward(X)
print(yHat)
print(y)

 

Still, what's missing:

Backpropagation,

Computing the derivative of the cost function with respect to each weight,

Numerical gradient checking,

Training the network,

Testing and checking for overfitting.

Anyone interested in contributing is welcome. This is software for getting a visual sense of how neural networks perform predictions in the background.
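For example, to actually read a prediction off the forward pass above, one possible decoding (an assumption on my part, not settled code) is to undo the /36 normalization and round:

#One possible way to read yHat as ball numbers (an assumption):
#undo the /36 normalization and round. Note that nothing forces the
#five outputs to be distinct or sorted.
predicted = np.rint(yHat*36).astype(int)
print(predicted)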


    United States
    Member #168877
    September 20, 2015
    43 Posts
    Offline
    Posted: March 10, 2016, 6:57 pm - IP Logged

Adding backprop and the cost function to the above code; these methods go inside the Neural_Network class:

     

#inside the Neural_Network class:
    def costFunction(self, X, y):
        #compute the cost for given X, y, using the weights already stored in the class
        self.yHat = self.forward(X)
        J = 0.5*np.sum((y - self.yHat)**2)
        return J

    def costFunctionPrime(self, X, y):
        #compute the derivative of the cost with respect to each W for a given X and y
        self.yHat = self.forward(X)

        #output-layer error, then propagate it backwards through each weight matrix
        delta6 = np.multiply(-(y - self.yHat), self.sigmoidPrime(self.z6))
        dJdW5 = np.dot(self.a5.T, delta6)

        delta5 = np.dot(delta6, self.W5.T)*self.sigmoidPrime(self.z5)
        dJdW4 = np.dot(self.a4.T, delta5)

        delta4 = np.dot(delta5, self.W4.T)*self.sigmoidPrime(self.z4)
        dJdW3 = np.dot(self.a3.T, delta4)

        delta3 = np.dot(delta4, self.W3.T)*self.sigmoidPrime(self.z3)
        dJdW2 = np.dot(self.a2.T, delta3)

        delta2 = np.dot(delta3, self.W2.T)*self.sigmoidPrime(self.z2)
        dJdW1 = np.dot(X.T, delta2)

        return dJdW1, dJdW2, dJdW3, dJdW4, dJdW5

NN = Neural_Network()
cost1 = NN.costFunction(X, y)
dJdW1, dJdW2, dJdW3, dJdW4, dJdW5 = NN.costFunctionPrime(X, y)

print(dJdW1, dJdW2, dJdW3, dJdW4, dJdW5)
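Until proper training is added, a minimal sketch of gradient descent using costFunctionPrime might look like this; the learning rate and iteration count are arbitrary, untuned choices:

#Minimal gradient-descent sketch (arbitrary, untuned values)
lr = 0.1
for i in range(1000):
    grads = NN.costFunctionPrime(X, y)
    for W, dJdW in zip((NN.W1, NN.W2, NN.W3, NN.W4, NN.W5), grads):
        W -= lr*dJdW  #in-place update of each weight matrix
    if i % 100 == 0:
        print(i, NN.costFunction(X, y))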

SergeM
      Economy class
      Belgium
      Member #123700
      February 27, 2012
      4035 Posts
      Offline
      Posted: March 11, 2016, 7:26 pm - IP Logged

       You use random. Are you on qpicks?


        United States
        Member #168877
        September 20, 2015
        43 Posts
        Offline
        Posted: March 12, 2016, 10:07 pm - IP Logged

No. Random is simply used to initialize the values of the weights in order to run gradient descent. Without random values, it is very hard to choose your own weight values - mathematically speaking, searching through all possible values even for a 3-dimensional weight vector could take longer than the universe has existed.
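One concrete illustration of why random starting weights help (the shapes below match the code above): if every weight starts at the same value, every hidden unit computes exactly the same activation, so gradient descent can never tell the units apart.

#Identical initial weights make all hidden units identical, so random
#initialization is needed to break the symmetry.
import numpy as np
X = np.array([[0.1, 0.2, 0.5, 0.6, 0.7]])
W_same = np.ones((5, 7))        #every connection has the same weight
W_rand = np.random.randn(5, 7)  #random initialization
print(1/(1 + np.exp(-np.dot(X, W_same))))  #all 7 activations equal
print(1/(1 + np.exp(-np.dot(X, W_rand))))  #activations differ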


          United States
          Member #168877
          September 20, 2015
          43 Posts
          Offline
          Posted: March 12, 2016, 10:11 pm - IP Logged

Adding numerical gradient checking to the above code. I am adding code as I test it; this works for me:

           

#helper functions, inside the Neural_Network class:
    def getParams(self):
        #get all W's unrolled into a single vector
        params = np.concatenate((self.W1.ravel(), self.W2.ravel(), self.W3.ravel(),
                                 self.W4.ravel(), self.W5.ravel()))
        return params

    def setParams(self, params):
        #set W1 through W5 using a single parameter vector
        W1_start = 0
        W1_end = self.inputLayerSize*self.hiddenLayerSize_1
        self.W1 = np.reshape(params[W1_start:W1_end],
                             (self.inputLayerSize, self.hiddenLayerSize_1))

        W2_end = W1_end + self.hiddenLayerSize_1*self.hiddenLayerSize_2
        self.W2 = np.reshape(params[W1_end:W2_end],
                             (self.hiddenLayerSize_1, self.hiddenLayerSize_2))

        W3_end = W2_end + self.hiddenLayerSize_2*self.hiddenLayerSize_3
        self.W3 = np.reshape(params[W2_end:W3_end],
                             (self.hiddenLayerSize_2, self.hiddenLayerSize_3))

        W4_end = W3_end + self.hiddenLayerSize_3*self.hiddenLayerSize_4
        self.W4 = np.reshape(params[W3_end:W4_end],
                             (self.hiddenLayerSize_3, self.hiddenLayerSize_4))

        W5_end = W4_end + self.hiddenLayerSize_4*self.outputLayerSize
        self.W5 = np.reshape(params[W4_end:W5_end],
                             (self.hiddenLayerSize_4, self.outputLayerSize))

    def computeGradients(self, X, y):
        #backprop gradients, unrolled into a single vector for comparison
        dJdW1, dJdW2, dJdW3, dJdW4, dJdW5 = self.costFunctionPrime(X, y)
        return np.concatenate((dJdW1.ravel(), dJdW2.ravel(), dJdW3.ravel(),
                               dJdW4.ravel(), dJdW5.ravel()))

#standalone function, outside the class:
def computeNumericalGradient(N, X, y):
    paramsInitial = N.getParams()
    numgrad = np.zeros(paramsInitial.shape)
    perturb = np.zeros(paramsInitial.shape)
    e = 1e-4

    for p in range(len(paramsInitial)):
        #set the perturbation vector
        perturb[p] = e
        N.setParams(paramsInitial + perturb)
        loss2 = N.costFunction(X, y)

        N.setParams(paramsInitial - perturb)
        loss1 = N.costFunction(X, y)

        #compute the numerical gradient (central difference)
        numgrad[p] = (loss2 - loss1)/(2*e)

        #return the value we changed to zero
        perturb[p] = 0

    #restore the parameters to their original values
    N.setParams(paramsInitial)

    return numgrad

NN = Neural_Network()
numgrad = computeNumericalGradient(NN, X, y)
grad = NN.computeGradients(X, y)

print(numgrad)
print(grad)

           

Note that this code must be kept in the proper format (the methods indented inside the class) in order to work. If you need the exact format, let me know.
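If backprop is correct, numgrad and grad printed above should agree almost exactly. One standard way to check this (my addition to the code above) is the relative norm of their difference:

#Relative difference between analytic and numerical gradients;
#values around 1e-8 or smaller mean backprop is implemented correctly.
print(np.linalg.norm(grad - numgrad)/np.linalg.norm(grad + numgrad))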

SergeM
            Economy class
            Belgium
            Member #123700
            February 27, 2012
            4035 Posts
            Offline
            Posted: March 13, 2016, 5:57 pm - IP Logged

(Quoting the reply above about random weight initialization.)

You can do it manually.


              United States
              Member #168877
              September 20, 2015
              43 Posts
              Offline
              Posted: April 8, 2016, 5:55 pm - IP Logged

Just want to show visually the hidden layers that machine learning tries to ascertain in a regular 5-dimensional game, or pick 5.

[images missing]

As opposed to an easier 3-dimensional matrix:

[image missing]


                United States
                Member #168877
                September 20, 2015
                43 Posts
                Offline
                Posted: April 8, 2016, 6:05 pm - IP Logged

The above Python code is as follows, written in mathematical terms [original images missing; equations follow directly from the code]:

z2 = X·W1,   a2 = σ(z2)
z3 = a2·W2,  a3 = σ(z3)
z4 = a3·W3,  a4 = σ(z4)
z5 = a4·W4,  a5 = σ(z5)
z6 = a5·W5,  ŷ = σ(z6)

where σ(z) = 1/(1 + e^(-z)) and the cost is J = ½ Σ (y - ŷ)².


                    South Carolina
                    United States
                    Member #18322
                    July 9, 2005
                    1704 Posts
                    Offline
                    Posted: April 8, 2016, 6:14 pm - IP Logged

(Quoting the original post in full.)

You appear to be a mathematically gifted person, but now, how does all of this genius relate to the lottery??? How do you choose which lottery numbers to play, based on all of your calculations??


                      United States
                      Member #168877
                      September 20, 2015
                      43 Posts
                      Offline
                      Posted: April 8, 2016, 6:43 pm - IP Logged

Hi,

The main reason for this post is to hopefully find help (from someone) in finishing the algorithm, so it can find patterns in previous lottery draws in order to predict future numbers.

The algorithm is still not complete. But to answer your question, the goal is to find hidden variables in order to compute predictions with zero error, using artificial intelligence.

In choosing the numbers, I have found that numbers drawn prior to the day of play provide higher chances of giving a true prediction of that day's winning numbers.

In reality, the idea is for machine learning - a computer - to predict the next winning numbers in a given game.
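As a sketch of that idea (the draw values below are made up for illustration), training pairs can be built from a draw history so that each draw is the input and the following draw is the target:

#Hypothetical draw history, oldest first; the numbers are placeholders.
import numpy as np
history = [
    [3, 5, 20, 23, 26],
    [3, 20, 25, 28, 30],
    [1, 9, 14, 22, 33],
    [7, 11, 18, 27, 35],
]
draws = np.array(history, dtype=float)/36.0  #normalize like the main code
X_train = draws[:-1]  #each row is one draw
y_train = draws[1:]   #target: the draw that followed it
print(X_train.shape, y_train.shape)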

MoneyMike$
                          Ny
                          United States
                          Member #167314
                          July 5, 2015
                          1805 Posts
                          Offline
                          Posted: April 10, 2016, 1:25 pm - IP Logged

(Quoting the reply above about finding patterns in previous draws.)

This is a great topic. I do believe it may be possible to make accurate predictions on a game like the pick 3 or win 4 (ideally), and take 5; any Mega Millions or Powerball is a different beast, but it may be possible to conquer. From my experiences I have come to the conclusion that the lottery results stem from the first day the game was ever played. Now the correlations from then and now are something almost out of this universe, but it is possible to create an advantage without even going back to day 1. It is something to keep in mind as we all play these games, and stop looking for answers that mostly mean nothing. The game is based on calculated risks, and some people seem not to believe in an educated guess (a hypothesis). Yes, you may get lucky here and there, but educated guesses every draw are by far more powerful for succeeding in the game. The lottery ball machine shown on TV and our QP generator are two different entities, so it kills me when people say more people win on QPs than any other way in the game - it is pure luck that the two come together. Brother, honestly, cracking the code of a game like win 4 may be easier than math. We can talk more about this... tell me your take.

                          Creativity..

                          " What's more likely to happen will happen.. "

                          Million dollar operation 


RL-RANDOMLOGIC

                            United States
                            Member #59354
                            March 13, 2008
                            3964 Posts
                            Offline
                            Posted: April 11, 2016, 12:26 am - IP Logged

(Quoting the original post in full.)

I suggest you work on stocks or sports betting, as NNs don't work on randomness. I have built many and tried just about every prediction method. Sure, you can build something that will give you the best numbers to play based on the history of the game; just don't expect the best numbers to show in the next game, or the one after that, or even the game after that. Using a NN to come to a solution looks attractive until the drawing. At best it might get a couple of numbers, just enough to keep you chasing your tail, so to say. Check the math forum, as there is an ongoing NN topic with a download.

RL

Working on my Ph.D. "University of Hard Knocks"

I will consider the opinion that my winnings are a product of chance if you are willing to consider they are not. Many great discoveries come while searching for something else.

                            USAF https://en.wikipedia.org/wiki/Prime_Base_Engineer_Emergency_Force

Trump / 2016 & 2020

RJOh
                              mid-Ohio
                              United States
                              Member #9
                              March 24, 2001
                              19824 Posts
                              Offline
                              Posted: May 7, 2016, 1:26 pm - IP Logged

(Quoting RL-RANDOMLOGIC's reply above.)

                              "just don't expect the best numbers to show in the next game"

                              Isn't that what trying to predict winners is all about?

                               * you don't need to buy more tickets, just buy a winning ticket * 
                                 