
Neural Net Lottery Picker

677 replies. Last post 7 days ago by dr san.

Page 7 of 46
riscknight
Athens
Greece
Member #133234
September 24, 2012
188 Posts
Offline
Posted: August 26, 2015, 8:14 pm - IP Logged

That's 3 on 1 of 8 lines (not profitable). I'm looking for settings that will give me at least 4 numbers on one line in 30 lines or less. Only playing on paper, not playing live.

That's 3 on 1 of 8 lines (not profitable).

It is not, but you are only playing 8 lines; try those 8 lines for at least 10-14 drawings. *Then* change the program's settings, play again with the new settings for another 10-14 drawings, and compare the results. Do the same with the 30 lines.

To be honest with you though, even if it is not profitable (can't argue with that), getting 3 correct numbers in 8 lines is quite admirable for a program like this one.

 

6/49 dis(assembly)

    MillionsWanted

    Norway
    Member #9517
    December 10, 2004
    1272 Posts
    Offline
    Posted: August 27, 2015, 1:45 am - IP Logged

    I played 20 lines with 300 neurons and 6 lines with 500 neurons on last Saturday's lottery draw. Won nothing.

    I played 30 lines with 500 neurons on the last Viking Lotto draw (yesterday) and won nothing.

    I will go back to 300 neurons on Viking Lotto and see if I start winning again.

    I think I've won enough already to prove there's more than luck involved. If there really is a pattern in the lottery, NNLP will find it.

    I have also tested the program on Eurojackpot, playing a couple of lines, but without luck. Playing Eurojackpot is five times more expensive than Viking Lotto, so I'll do a backtest later instead; that costs less.


      United States
      Member #168172
      August 19, 2015
      18 Posts
      Offline
      Posted: August 27, 2015, 11:23 am - IP Logged

      I finally tried it live and the first try was pretty impressive. Playing 5 lines with 40 neurons and 50 draws as input yielded the following:

      LN1:2/5

      LN2:3/5

      LN3:2/5

      LN4:1/5

      LN5:2/5

      Profit:$13.00 USD

      There's definitely something here. Even when there are no winners, there is always at least one winning number on most of the lines produced in my trials.

      The two most important parameters are the ALPHA and Hidden Layer Neurons settings. Load at least thirty to forty draws and back-test against 1 draw. Adjust the two parameters I mentioned above until you get mostly 1 or 2 winning numbers per line across at least 20 lines in your back-testing. Those are the settings you want to keep. Unfortunately, for those who want magic settings, there are none. You will need to adjust the parameters for your own game. It will take some time to fine-tune them, but in the end it will be well worth it.
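
      To make that tuning loop concrete, here is a minimal C# sketch of the back-testing procedure described above. The TrainAndPredict method is only a placeholder (NNLP's internal training code is not public, so random picks keep the sketch runnable); the part that matters is how the held-out draw is scored and how the best Alpha / Hidden Neurons pair is kept. On a real run you would replace TrainAndPredict with the program's own output for each settings pair.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class BacktestTuning
{
    static void Main()
    {
        // Load 30-40 historical draws (one "n-n-n-n-n-n" line per draw, most recent last)
        // and hold the final draw out for back-testing.
        List<int[]> history = File.ReadAllLines("history.txt")
            .Where(l => l.Trim().Length > 0)
            .Select(l => l.Trim().Split('-').Select(int.Parse).ToArray())
            .ToList();
        int[] heldOutDraw = history.Last();
        List<int[]> trainingDraws = history.Take(history.Count - 1).ToList();

        int bestScore = -1;
        double bestAlpha = 0;
        int bestNeurons = 0;

        // Try a grid of the two key settings and keep the pair that gives the most
        // lines with 1 or 2 winning numbers against the held-out draw.
        foreach (double alpha in new[] { 0.001, 0.005, 0.01, 0.05 })
        foreach (int neurons in new[] { 40, 100, 300, 500 })
        {
            List<int[]> lines = TrainAndPredict(trainingDraws, alpha, neurons);
            int goodLines = lines.Count(line =>
            {
                int hits = line.Intersect(heldOutDraw).Count();
                return hits == 1 || hits == 2;
            });
            if (goodLines > bestScore)
            {
                bestScore = goodLines;
                bestAlpha = alpha;
                bestNeurons = neurons;
            }
        }

        Console.WriteLine($"Keep Alpha={bestAlpha}, Hidden Neurons={bestNeurons} " +
                          $"({bestScore}/20 lines with 1-2 hits)");
    }

    // Placeholder for the network: given past draws and settings, return 20 candidate lines.
    // NNLP's real prediction step would go here.
    static List<int[]> TrainAndPredict(List<int[]> history, double alpha, int hiddenNeurons)
    {
        var rng = new Random(hiddenNeurons);
        return Enumerable.Range(0, 20)
            .Select(i => Enumerable.Range(1, 49).OrderBy(n => rng.Next()).Take(6).ToArray())
            .ToList();
    }
}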

      Good Luck

        Krakow
        Poland
        Member #86302
        February 2, 2010
        858 Posts
        Offline
        Posted: August 28, 2015, 6:53 am - IP Logged

        Hi guys,

        I haven't loaded my own game yet, as I don't have it in the proper format, so I used UK results to do some backtesting. It looks nice and promising.

        I did the backtest for these 5 draws:

        4-8-10-21-41-49
        6-15-17-20-24-25
        6-22-28-30-39-44
        5-8-12-28-39-41
        1-16-27-42-46-47

         

        The results are below:

        08-14-20-28-33-41-;27,0292331838992;0,735244628459117;1,20393388888417;12:44:21.9978788;Alpha:0,007976| Noise:0,25|Threshold Error:0,01|Hidden Neurons:101|Test Data Range:50 | Fixed_Seed_Input_Weights: 1
        07-14-20-29-34-41-;23,362587000095;0,683558146759951;1,17878855016416;12:44:22.6549163;Alpha:0,007976| Noise:0,25|Threshold Error:0,01|Hidden Neurons:101|Test Data Range:50 | Fixed_Seed_Input_Weights: 1
        08-14-21-28-33-44-;22,8539688429666;0,676076457850243;1,23318178764682;12:44:23.3119539;Alpha:0,007976| Noise:0,25|Threshold Error:0,01|Hidden Neurons:101|Test Data Range:50 | Fixed_Seed_Input_Weights: 1
        08-15-21-28-34-41-;32,9932256073855;0,8123204491749;1,22913373891144;12:44:26.9191602;Alpha:0,007976| Noise:0,25|Threshold Error:0,01|Hidden Neurons:101|Test Data Range:50 | Fixed_Seed_Input_Weights: 1
        06-14-21-27-33-41-;32,6854220788955;0,808522381618413;1,1156014321201;12:44:27.2601798;Alpha:0,007976| Noise:0,25|Threshold Error:0,01|Hidden Neurons:101|Test Data Range:50 | Fixed_Seed_Input_Weights: 1
        08-13-21-28-35-41-;22,6268941674496;0,672709360235899;1,19282685371649;12:44:27.5911987;Alpha:0,007976| Noise:0,25|Threshold Error:0,01|Hidden Neurons:101|Test Data Range:50 | Fixed_Seed_Input_Weights: 1
        07-13-21-28-34-42-;28,9299658339941;0,760657161065274;1,21272832339235;12:44:32.1914618;Alpha:0,007976| Noise:0,25|Threshold Error:0,01|Hidden Neurons:101|Test Data Range:50 | Fixed_Seed_Input_Weights: 1
        07-13-21-28-33-42-;23,3824865197406;0,68384920150192;1,17924726976141;12:44:32.5334814;Alpha:0,007976| Noise:0,25|Threshold Error:0,01|Hidden Neurons:101|Test Data Range:50 | Fixed_Seed_Input_Weights: 1
        06-14-21-28-33-41-;23,1192405002388;0,679988830794136;1,15809220950457;12:44:34.8356130;Alpha:0,007976| Noise:0,25|Threshold Error:0,01|Hidden Neurons:101|Test Data Range:50 | Fixed_Seed_Input_Weights: 1
        08-13-21-28-34-42-;22,8702182621652;0,676316763982162;1,18364044535099;12:44:35.1656319;Alpha:0,007976| Noise:0,25|Threshold Error:0,01|Hidden Neurons:101|Test Data Range:50 | Fixed_Seed_Input_Weights: 1

        As you can see, there are a couple of 3-of-6s on those lines. The settings are shown above.
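
        For anyone scoring these lines by hand: everything before the first ';' is the predicted line, and the remaining ';'-separated fields are diagnostics plus the settings used. Assuming that layout (it is simply what the paste shows, not documented anywhere), here is a short C# sketch that counts hits against one of the back-tested draws:

using System;
using System.Linq;

class ScoreOutputLine
{
    static void Main()
    {
        // The first of the five back-tested draws listed above.
        int[] winning = { 4, 8, 10, 21, 41, 49 };

        // One NNLP output line (truncated after the first diagnostic fields for brevity).
        string output = "08-14-20-28-33-41-;27,0292331838992;0,735244628459117";

        // The predicted numbers are everything before the first ';', split on '-'.
        int[] predicted = output.Split(';')[0]
            .Split(new[] { '-' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(int.Parse)
            .ToArray();

        int hits = predicted.Intersect(winning).Count();
        Console.WriteLine($"{string.Join("-", predicted)} matches {hits} of 6 winning numbers.");
    }
}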

         

        Adam


          United States
          Member #168172
          August 19, 2015
          18 Posts
          Offline
          Posted: August 29, 2015, 12:54 am - IP Logged

          Tonight's result:

          Five lines played:

          LN1:2/5

          LN2:1/5

          LN3:0/5

          LN4:2/5

          LN5:3/5

          Profit:$12.00 USD

           

          It's kinda cool to win a few bucks for a change. This program has put some fun back into the game for me. Thanks again stoopendaal for great programming.

            stoopendaal

            Netherlands
            Member #3476
            January 24, 2004
            212 Posts
            Offline
            Posted: August 29, 2015, 7:27 am - IP Logged

            I refined the neural network algorithm by adding a momentum factor (as an optional parameter) to it.

            I changed the parameter name 'Alpha' to 'Learning Rate' because that is the standard name for this parameter.

            Momentum factor meaning:

             

            1. This can improve the learning rate in some situations, by helping to smooth out unusual conditions in the training set.
            2. Momentum simply adds a fraction m of the previous weight update to the current one. The momentum parameter is used to prevent the system from converging to a local minimum or saddle point. A high momentum parameter can also help to increase the speed of convergence of the system. However, setting the momentum parameter too high can create a risk of overshooting the minimum, which can cause the system to become unstable. A momentum coefficient that is too low cannot reliably avoid local minima, and can also slow down the training of the system.

            Learning Rate meaning (source: http://www.cheshireeng.com/Neuralyst/nnbg.htm):

            The learning rate, LR, applies a greater or lesser portion of the respective adjustment to the old weight. If the factor is set to a large value, then the neural network may learn more quickly, but if there is a large variability in the input set then the network may not learn very well or at all. In real terms, setting the learning rate to a large value is analogous to giving a child a spanking, but that is inappropriate and counter-productive to learning if the offense is so simple as forgetting to tie their shoelaces. Usually, it is better to set the factor to a small value and edge it upward if the learning rate seems slow.
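
            Put as code, the update step these two parameters control looks roughly like the classic back-propagation rule below. This is a generic sketch, not NNLP's actual source: each weight moves against its error gradient scaled by the learning rate, plus a fraction (the momentum factor m) of its previous change.

using System;

class MomentumUpdate
{
    // Generic gradient-descent weight update with momentum (a sketch, not NNLP's internal code).
    static void UpdateWeights(double[] weights, double[] gradients, double[] previousDeltas,
                              double learningRate, double momentum)
    {
        for (int i = 0; i < weights.Length; i++)
        {
            // New step = learning-rate portion of the current gradient
            //          + momentum (m) portion of the previous step.
            double delta = -learningRate * gradients[i] + momentum * previousDeltas[i];
            weights[i] += delta;
            previousDeltas[i] = delta;   // remembered for the next iteration
        }
    }

    static void Main()
    {
        double[] weights = { 0.5, -0.3 };
        double[] previousDeltas = new double[2];

        // Two updates with the same gradient: the second step is larger because of momentum.
        UpdateWeights(weights, new double[] { 0.2, -0.1 }, previousDeltas, learningRate: 0.05, momentum: 0.9);
        UpdateWeights(weights, new double[] { 0.2, -0.1 }, previousDeltas, learningRate: 0.05, momentum: 0.9);

        Console.WriteLine(string.Join(", ", weights));
    }
}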

             

            Version 2.0 is now available for download. 

            Note: I still can't get rid of the pesky 'false positives' some of the anti-virus tools generate. Even when the application is not obfuscated, some anti-virus tools still give false positives! Luckily the more reputable virus scanners like AVG, Kaspersky and McAfee didn't 'detect' anything (https://www.virustotal.com).

             

            Good luck!

              MillionsWanted

              Norway
              Member #9517
              December 10, 2004
              1272 Posts
              Offline
              Posted: August 29, 2015, 7:34 am - IP Logged

              Quote (stoopendaal): [the Version 2.0 announcement above]

              Hi. I sent the false positives to Bitdefender so they can put them on a whitelist.

              I recommend you do this with every new version you release, and not just with Bitdefender.

              Edit: I tried to download your latest version. Same virus message. I reported it to Bitdefender as a false positive.

                bootleg233
                Tn
                United States
                Member #54963
                September 4, 2007
                1164 Posts
                Online
                Posted: August 29, 2015, 7:34 am - IP Logged

                Quote (stoopendaal): [the Version 2.0 announcement above]

                Yes, stoopendaal, if you EVER get those false positives and everything else some virus scanners find fixed, so that some of us can run it without jumping through hoops, I would love to try it!! But I guess I will just keep watching how others are doing with it until Norton and the others actually let it run!! It looks like some are figuring out settings for it, so if it ever gets to run on my machine then maybe I will be able to as well!! Maybe putting it to run online someplace would work, lol. Anyway, good luck all!!!  Da Boot.....................

                WHEN IT FEELS THE WHOLE WORLD SUCKS!

                RELAX.........IT'S ONLY GRAVITY Big Smile

                I think I can I think I can!!!!

                  Krakow
                  Poland
                  Member #86302
                  February 2, 2010
                  858 Posts
                  Offline
                  Posted: August 29, 2015, 7:58 am - IP Logged

                   Quote (stoopendaal): [the Version 2.0 announcement above]

                  Hi,

                   Which link should be used to download version 2.0?

                  Thanks.

                   

                  Adam

                    stoopendaal

                    Netherlands
                    Member #3476
                    January 24, 2004
                    212 Posts
                    Offline
                    Posted: August 29, 2015, 9:07 am - IP Logged

                    I finally got rid of the false positives some anti-virus tools generated (according to https://www.virustotal.com).

                    The solution was switching off some compiler options in Visual Studio and recompiling the executable Banana

                     

                    You can download version 2.0 here (the same links posted earlier in this thread):

                     http://intelbet.somee.com/tools/nnlp.rar (zipped file)  or  http://intelbet.somee.com/tools/nnlp.exe

                      Krakow
                      Poland
                      Member #86302
                      February 2, 2010
                      858 Posts
                      Offline
                      Posted: August 29, 2015, 9:09 am - IP Logged

                       Quote (stoopendaal): [the version 2.0 download announcement above]

                      Thanks a lot.

                      Adam

                         MillionsWanted

                        Norway
                        Member #9517
                        December 10, 2004
                        1272 Posts
                        Offline
                        Posted: August 29, 2015, 9:23 am - IP Logged

                         Quote (stoopendaal): [the version 2.0 download announcement above]

                        That's great!

                           MillionsWanted

                          Norway
                          Member #9517
                          December 10, 2004
                          1272 Posts
                          Offline
                          Posted: August 29, 2015, 9:34 am - IP Logged

                           Your next goal should be to speed up the training. Increase calculation speed by 100x.

                           Ever heard of CUDA? Everyone with an NVIDIA card in their PC has a hidden supercomputer that goes unused.

                           The same goes for ATI/AMD Radeon, but they use a different system.

                          Read more here:

                          CUDA

                          NVIDIA cuDNN – GPU Accelerated Deep Learning

                            Krakow
                            Poland
                            Member #86302
                            February 2, 2010
                            858 Posts
                            Offline
                            Posted: August 30, 2015, 5:31 am - IP Logged

                            stoopendaal,

                            I have no idea why I get the error message that the file format is incorrect. What's wrong with it? It looks like this (a clean-up sketch follows the list below):

                             

                            8-11-15-17-44-45 
                            1-7-13-14-36-37 
                            3-4-14-15-42-45 
                            1-14-23-31-33-35 
                            6-16-21-30-47-48 
                            1-4-23-33-48-49 
                            4-22-28-33-45-47 
                            14-23-33-40-41-46 
                            8-16-17-18-38-46
                            15-21-29-34-48-49 
                            4-25-26-30-32-35 
                            3-14-16-28-39-40 
                            4-9-17-37-40-45 
                            6-22-24-33-34-47 
                            8-11-12-13-20-29
                            3-13-32-34-41-45 
                            17-28-34-36-43-47 
                            14-17-27-30-36-45
                            4-14-28-32-33-37 
                            9-19-32-35-40-43 
                            5-13-14-28-42-45
                            9-10-42-45-47-49 
                            15-22-24-26-30-35 
                            6-9-15-19-20-21
                            10-22-36-38-44-48 
                            6-8-23-32-33-40
                            13-24-25-33-37-41 
                            2-15-18-38-40-46
                            5-22-27-32-38-40
                            3-9-27-30-43-47 
                            7-21-25-35-38-40
                            9-11-13-18-29-49 
                            5-6-11-17-23-28
                            2-3-16-24-28-33
                            4-6-21-26-33-36
                            2-13-17-32-35-44 
                            3-7-12-17-18-21 
                            4-20-22-30-36-39
                            2-7-14-15-35-40 
                            1-8-9-21-31-40
                            3-15-29-32-34-44 
                            8-10-14-17-28-32 
                            1-2-4-40-47-49
                            8-9-11-18-27-46 
                            2-4-19-29-43-44 
                            13-28-31-37-41-46 
                            3-6-14-23-32-43
                            20-23-28-30-43-46 
                            11-15-17-24-41-49 
                            4-7-18-25-26-47
                            7-8-10-12-19-23 
                            6-9-28-32-33-42 
                            1-2-24-31-35-39 
                            2-3-20-21-36-43 
                            11-21-23-30-37-39 
                            3-10-12-14-20-41 
                            3-20-23-24-31-48 
                            10-21-34-37-43-46 
                            1-6-26-29-38-49
                            1-4-17-19-25-46
                            1-16-18-24-31-32
                            2-4-14-16-31-37
                            9-11-28-29-32-44
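
One thing that stands out in the paste is that many lines end with a trailing space, and copy-pasted files can also mix line endings; either could plausibly trip a strict parser (that is only a guess, the real cause of the error is not confirmed). Below is a small C# sketch that normalizes a 6/49 history file to plain n-n-n-n-n-n lines before loading it into the program:

using System;
using System.IO;
using System.Linq;

class CleanDrawFile
{
    static void Main(string[] args)
    {
        string inPath = args.Length > 0 ? args[0] : "draws.txt";
        string outPath = args.Length > 1 ? args[1] : "draws_clean.txt";

        string[] cleaned = File.ReadAllLines(inPath)
            .Select(line => line.Trim())                    // drop leading/trailing spaces
            .Where(line => line.Length > 0)                 // skip blank lines
            .Select(line =>
            {
                int[] numbers = line.Split('-').Select(p => int.Parse(p.Trim())).ToArray();
                if (numbers.Length != 6 || numbers.Any(n => n < 1 || n > 49))
                    throw new FormatException($"Bad draw line: '{line}'");
                return string.Join("-", numbers);           // re-emit as plain n-n-n-n-n-n
            })
            .ToArray();

        File.WriteAllLines(outPath, cleaned);               // written with standard line endings
        Console.WriteLine($"{cleaned.Length} draws written to {outPath}");
    }
}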

                              stoopendaal

                              Netherlands
                              Member #3476
                              January 24, 2004
                              212 Posts
                              Offline
                              Posted: August 30, 2015, 12:03 pm - IP Logged

                              Hi MillionsWanted,

                              No, I had never heard of CUDA. It's an interesting technique/platform for parallel programming.

                              I found a CUDA wrapper (http://managedcuda.codeplex.com/) for C# projects, so first I have to study how to implement this technique in NNLP.

                              Thanks for the information!

                                 