
Artificial Intelligence

Topic closed. 73 replies. Last post 7 years ago by martor854.


United States
Member #93947
July 10, 2010
2180 Posts
Offline
Posted: August 23, 2010, 1:17 am - IP Logged

Here is the best neural net I found; you can set it up for Cash Five. Out of 3 sets you get two numbers in one set.

Many times you get one number in each set.

The problem is to find the correct setting for the AI.

Here it is: http://stonisa.info/hobby.php

Go to number one

"# Lotto Artificial Intelligence
Excellent software that uses Artificial Neural Network to try to predict numbers to be drawn in the current round using history of previous draws. The prediction can be used to manually or pseudo-randomly carefully choose numbers to make winning combinations.
current version: 1.2.1.1 "

Be careful using this software!  I could design a neural network that would predict winners with astonishing accuracy, provided you only tested it on the Draws used to Train it.  It would revert to a mediocre 50/50 picker the moment you turned it loose on Draws it never saw before.

Be sure to FORWARD TEST the parameters you develop "on paper" BEFORE you throw any real money at it.
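For anyone who wants to see what that kind of forward test looks like, here is a minimal Python sketch. Everything in it is a placeholder (a simulated 5-of-43 draw history and a toy "hot numbers" picker), not the software linked above; the point is only the mechanics of scoring picks against draws the model never saw.

```python
# Hypothetical forward-test sketch: tune/train on older draws only,
# then score the picks against draws the model has never seen.
import random

def pick_numbers(history, k=5, pool=43):
    """Toy 'model': pick the k most frequent balls in the training history."""
    counts = {n: 0 for n in range(1, pool + 1)}
    for draw in history:
        for n in draw:
            counts[n] += 1
    return set(sorted(counts, key=counts.get, reverse=True)[:k])

random.seed(1)
# Placeholder history: 300 simulated 5-of-43 draws (stand-in for real results).
draws = [sorted(random.sample(range(1, 44), 5)) for _ in range(300)]

train, test = draws[:-90], draws[-90:]   # hold out roughly the last 3 months
picks = pick_numbers(train)

hits = [len(picks & set(d)) for d in test]
print("average numbers matched per unseen draw:", sum(hits) / len(hits))
```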

    Thread Starter

    France
    Member #92435
    June 7, 2010
    43 Posts
    Offline
    Posted: August 25, 2010, 11:02 am - IP Logged

I've tested this software; the only thing is that you must go through the painstaking work of trying and testing results with different NN parameters, like the learning rate.

I think an interesting thing would be to analyze parameters besides past drawing data,

and that those parameters should be adapted to your local lottery.

In fact, as you say: the combination of statistics with AI.

Well, that's what I would like to investigate anyhow.
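One way to take some of the pain out of that parameter hunt is to script the sweep. The sketch below is hypothetical: train_and_score() is a stand-in for whatever fit-and-evaluate step the software actually performs, and the candidate learning rates are arbitrary; the only real point is that every candidate setting gets scored on draws held out from training.

```python
# Hypothetical parameter sweep: every candidate learning rate is scored
# only on draws held out from training, never on the draws it was fit to.
import random

def train_and_score(learning_rate, train_draws, holdout_draws):
    """Placeholder for 'fit the net with this learning rate, then count
    how many numbers it matches on the held-out draws'."""
    random.seed(hash(learning_rate) % 1000)   # stand-in for a real fit
    return random.uniform(0.5, 1.5)           # fake average hits per draw

random.seed(0)
draws = [sorted(random.sample(range(1, 44), 5)) for _ in range(300)]
train_draws, holdout_draws = draws[:-90], draws[-90:]

results = {lr: train_and_score(lr, train_draws, holdout_draws)
           for lr in (0.001, 0.01, 0.05, 0.1, 0.3)}
best = max(results, key=results.get)
print("best learning rate on held-out draws:", best, "->", round(results[best], 3))
```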

    Imagination can take you anywhere

    http://www.artificialintelligencelotto.com 


      United States
      Member #93947
      July 10, 2010
      2180 Posts
      Offline
      Posted: August 25, 2010, 3:32 pm - IP Logged

      "I've tested this software, the only thing is that you must go through the painstaking work to try and test results with differents NN parameters like the learning rate,"

      As I suggested above, you could optimize parameters like the learning rate on one set of results, not including, say, the last 3 months.  If the software works, it should work for at least a few Draws into the test period, the beginning of the 3 month period it has never seen before.

      You said, "I've tested this software,..."  What did you find?


        United States
        Member #19982
        August 9, 2005
        226 Posts
        Offline
        Posted: August 26, 2010, 11:18 am - IP Logged

        "I've tested this software, the only thing is that you must go through the painstaking work to try and test results with differents NN parameters like the learning rate,"

        As I suggested above, you could optimize parameters like the learning rate on one set of results, not including, say, the last 3 months.  If the software works, it should work for at least a few Draws into the test period, the beginning of the 3 month period it has never seen before.

        You said, "I've tested this software,..."  What did you find?

If you're running things upon sequences, the back draw period will vary. Commercial software out there is not designed to work with several different types of sequences.

Quantity of the digits over periods will not get you very far.

        HAWK

        *We may see something that isn’t there because of what we expect to see

        Or conversely, we may not see something because we don’t expect to see it.*


          United States
          Member #93947
          July 10, 2010
          2180 Posts
          Offline
          Posted: August 26, 2010, 11:35 pm - IP Logged

          HAWK,

          Could you please explain further?  I don't understand what you mean here.  The link to my post that you were commenting on is here:

          http://www.lotterypost.com/thread/215821/1754681

          Thanks,

          --Jimmy4164


            United States
            Member #93947
            July 10, 2010
            2180 Posts
            Offline
            Posted: August 26, 2010, 11:38 pm - IP Logged

            How is the backtesting going?

            http://www.lotterypost.com/thread/215821/1751417


              United States
              Member #43694
              July 23, 2006
              184 Posts
              Offline
              Posted: August 27, 2010, 12:00 am - IP Logged

              IN ONE WEEK I WON 2 TIMES WITH 2 NUMBERS AND ONE TIME WITH 3 NUMBERS.


                United States
                Member #83701
                December 13, 2009
                225 Posts
                Offline
                Posted: August 27, 2010, 12:30 am - IP Logged

                Hi,

I would like to know if somebody is seriously studying or using artificial intelligence for lotto. I know there are some old posts about it and some explanations, but has somebody gotten good results using artificial intelligence software or algorithms?

It's important to me because I am currently working on this kind of software, using backpropagation, and it would be very helpful to get some feedback from your experience.

                Thank you ;-)

When you talk about back propagation, you're talking about neural networks and connectionism. That's not the only kind of artificial intelligence; the other main branch is heuristics, which is closer to traditional programming.

Connectionism is mathematically very simple: it's a matrix multiplication, and back propagation is adjusting the elements of the multiplier matrix up or down according to what the impact of that multiplier should have been. This results in a single-layer neural net, which is actually two layers. Basically, the input is a vector which gets multiplied by a multiplier matrix to produce the output vector.
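As a concrete (and deliberately tiny) illustration of that description, here is a NumPy sketch of a single multiplier matrix trained with a simple delta-rule update. The sizes and the toy input/target are made up; it is not any particular lottery encoding.

```python
# Minimal sketch of the "single layer" net described above: the output is
# just a multiplier matrix times the input vector, and training nudges each
# weight in proportion to its share of the output error (the delta rule).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 6, 8
W = rng.normal(scale=0.1, size=(n_out, n_in))   # the multiplier matrix

def forward(x):
    return W @ x

def train_step(x, target, lr=0.1):
    global W
    error = target - forward(x)
    W += lr * np.outer(error, x)                 # adjust each multiplier up/down

# One toy example: learn to map a fixed input pattern to a fixed target.
x = np.array([1., 0., 0., 1., 0., 1.])
target = np.zeros(8); target[3] = 1.0
for _ in range(50):
    train_step(x, target)
print(np.round(forward(x), 2))
```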

The choice of representation is critical. For example, if you were drawing three binary numbers (i.e., three draws from pools of two balls), you could have each element of the input and output vectors represent one of the two possibilities in each draw, giving each vector six elements with each pair representing a draw; or you could give each vector three elements, each asserted or not asserted according to the 1 or 0 state of that digit's result; or you could have eight elements, each representing a composite result, i.e. 000, 001, 010, 011, 100, 101, 110, and 111. The latter allows the neural network to consider the most possibilities, and the threshold requirement to identify the solution is simply the element with the strongest signal. Historical data would be represented by additional elements in the input vector, with the data shifted down the line.
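To make the three representations concrete, here is a small NumPy sketch encoding one hypothetical outcome, (1, 0, 1), each of the three ways described above.

```python
# The three representations described above for a draw of three binary digits,
# e.g. the outcome (1, 0, 1):
import numpy as np

draw = (1, 0, 1)

# (a) three elements, each carrying the 0/1 value of one digit
three = np.array(draw, dtype=float)

# (b) six elements, a pair per digit: [is-0, is-1] for each of the three digits
six = np.concatenate([[1 - d, d] for d in draw]).astype(float)

# (c) eight elements, one per composite outcome 000..111; the strongest
#     signal in this vector directly names the predicted combination
eight = np.zeros(8)
eight[draw[0] * 4 + draw[1] * 2 + draw[2]] = 1.0

print(three)   # [1. 0. 1.]
print(six)     # [0. 1. 1. 0. 0. 1.]
print(eight)   # [0. 0. 0. 0. 0. 1. 0. 0.]
```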

These are just the earliest known neural networks and are not considered the state of the art. Most neural nets today are three-layer networks: an input vector, a multiplier matrix, an intermediate hypothesis vector, and another multiplier matrix producing an output vector. The middle layer is seeded with half ones and half zeros, either arbitrarily, at random, or as deemed necessary. Each asserted value in this middle layer allows the neural net to form one hypothesis to consider. Back propagation is done the same way but traces through both multipliers, hence adjusting the elements of both.
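Here is a minimal NumPy sketch of that arrangement: two multiplier matrices with a hidden "hypothesis" layer between them, and backpropagation adjusting both. The layer sizes, the sigmoid activation, and the single toy training pair are illustrative choices, not anything prescribed by the post.

```python
# Sketch of the "three layer" arrangement described above: two multiplier
# matrices with a hidden (hypothesis) layer between them, and backpropagation
# adjusting both matrices. Sizes and data are arbitrary for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 8, 6, 8
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, lr=0.5):
    global W1, W2
    h = sigmoid(W1 @ x)                      # hidden "hypothesis" layer
    y = sigmoid(W2 @ h)                      # output layer
    delta_out = (y - target) * y * (1 - y)   # output error term
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    W2 -= lr * np.outer(delta_out, h)        # both multipliers get adjusted
    W1 -= lr * np.outer(delta_hid, x)
    return y

x = np.zeros(8); x[2] = 1.0                  # toy: previous composite outcome
target = np.zeros(8); target[5] = 1.0        # toy: next composite outcome
for _ in range(2000):
    y = train_step(x, target)
print("strongest output element:", int(np.argmax(y)))
```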

In the late 80's, I proposed a compiler that took standard single-flow computer languages and mapped them into an arbitrary number of layers, much in the same way that a TTL circuit would be built. The unused portion of each layer would be given a small probability of becoming asserted during each back propagation, so that over time additional hypotheses could form in each layer. The idea was to mimic the human learning process, where a few simple rules of thumb are learned by rote to use as a crutch and over time we abandon them for the expertise we develop, and of course to leverage existing conventional programming solutions to teach a neural net. The size of the net would be unbearably large and hence difficult to model in the computers of the day, as well as impossible to build into hardware; plus, as an undergrad, my ideas weren't given much attention, though I have noticed that the new LRT trains that particular city now buys use principles that I outlined in a paper once. I have discussed my ideas with some fairly well known computer scientists in the decades since then with good acceptance, but we haven't done anything further; some of the platforms it would have been suited to no longer exist today.

You don't see a lot of work on connectionism anymore, and I'm not sure why. It's probably because the field was dominated by some very unusual characters at Thinking Machines, the company I worked for in the mid-90's, and of course that Manhattan Project-style company is no more. I was there at the tail end, so I never did see the conference room filled with Rubik's cubes and Lego blocks.


                  United States
                  Member #93947
                  July 10, 2010
                  2180 Posts
                  Offline
                  Posted: August 27, 2010, 12:31 am - IP Logged

                  IN ONE WEEK I WON 2 TIMES WITH 2 NUMBERS AND ONE TIME WITH 3 NUMBERS.

                  In the PA Pick-5 that would have earned about $15.  At $3 per day for 7 days, that's a $21 outlay.

                  That's better than the expected $10.50!   (21 - 15) is a $6 loss, but I think of it as the cost of 21 chances on a nice Jackpot!

Keep testing; one week is not enough to draw any conclusions.
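Spelling out the arithmetic in that reply (the $3/day cost, the roughly 50% expected payout, and the ~$15 in prizes are the figures assumed in the post, not official prize-table values):

```python
# The arithmetic behind the reply above -- all figures are the poster's
# estimates for a PA Pick-5 style game, used purely for illustration.
cost_per_day = 3
days = 7
winnings = 15                    # two 2-number hits plus one 3-number hit, as estimated

outlay = cost_per_day * days     # 21
expected_return = 0.5 * outlay   # assumes roughly a 50% payout -> 10.50
print("outlay:", outlay)
print("winnings:", winnings, "-> net", winnings - outlay)
print("typical expected return:", expected_return)
```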


                    United States
                    Member #93947
                    July 10, 2010
                    2180 Posts
                    Offline
                    Posted: August 27, 2010, 1:13 am - IP Logged

I'm currently tinkering with several backpropagation models attempting to predict the current day's high and low at the open (and after the gap) for several ETFs, for daytrading purposes. The inputs, among other things, include fairly recent volume and price data for gold, foreign market action, and the issue of interest. I anticipated possible connections with cause-and-effect relationships among these variables, and have found some. However, notwithstanding the possibility of ball machine wear or larceny in the heart of at least one programmer at the lottery IT shop, I find it very hard to envision a scenario where these possible non-random effects would be sufficient to overcome the overwhelming effects of the majority of the random inputs. As I'm sure you are aware, backpropagation algorithms are difficult to control, and overtraining could easily produce parameters that look great applied to the Training Set but miserable when applied to subsequent Test Sets.
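The overtraining worry is easy to demonstrate even without a neural net. In this toy sketch (nothing to do with the actual ETF model), a plain least-squares fit of purely random features to purely random targets looks respectable on the data it was fit to and falls apart on data it never saw, which is exactly the trap a heavily parameterized backpropagation model can fall into.

```python
# Illustration of the overfitting worry above: fit purely random "history"
# features to purely random targets. The fit flatters itself on the data it
# was trained on and collapses on data it never saw.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_features = 60, 60, 40
X_train = rng.normal(size=(n_train, n_features))
y_train = rng.normal(size=n_train)           # nothing to learn: pure noise
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.normal(size=n_test)

w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def r2(X, y):
    resid = y - X @ w
    return 1 - resid.var() / y.var()

print("R^2 on training data :", round(r2(X_train, y_train), 2))   # fits noise it memorized
print("R^2 on unseen data   :", round(r2(X_test, y_test), 2))     # no real predictive power
```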


                      United States
                      Member #83701
                      December 13, 2009
                      225 Posts
                      Offline
                      Posted: August 27, 2010, 1:45 am - IP Logged

I find that backpropagation often becomes tricky when the information representation carries a value rather than just the boolean of whether it exists or not. This is because it's essentially tweaking a constant that's multiplied by that value, hence becoming a gain on a signal. The consequence is that you wind up with an analog computer and not a neural net that deals with concepts. That was why I gave the example of the three digits being represented by three elements, six elements, or eight elements, with my suggestion that perhaps eight would be better, as it allows more relationships to be made and removes controversy over the threshold for representation. It's just a personal theory, but I think data representation is overlooked.

It sounds like you have concepts like volume or price represented by one element each. You may wish to define how much resolution you want placed on the value; if, say, 8-bit resolution is suitable, use 256 elements to represent it, and then your representation threshold decision is simply that one and only one element is asserted.
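A sketch of that 8-bit, 256-element representation (the value range and the example price are arbitrary stand-ins):

```python
# Sketch of the 8-bit / 256-element representation suggested above: quantize a
# continuous value (say, a normalized price) into one of 256 bins and assert
# exactly one element.
import numpy as np

def one_hot_256(value, lo, hi):
    """Map value in [lo, hi] to a 256-element vector with a single 1."""
    frac = (value - lo) / (hi - lo)
    idx = min(255, max(0, int(frac * 256)))
    v = np.zeros(256)
    v[idx] = 1.0
    return v

price = 43.27                      # hypothetical input value
vec = one_hot_256(price, lo=40.0, hi=50.0)
print("asserted element:", int(np.argmax(vec)), "of", vec.size)
```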

                      Most people just wind up with a limited analog computer without the integrator and differentiator feedback and not a neural net.

                      I think the world lost a lot of the ability to conceptualize analog systems when we all went digital to the point that we can't recognize analog systems anymore.

As to training sets: the advice has always been to divide your historical data into two, train on one half, and verify on the other, but that assumes you're looking for a static neural net to freeze when you have good results rather than one that's constantly learning and evolving. It may be necessary to think about proportional training, where the increments by which the adjustments are made are reduced progressively as an inverse exponential; the idea is that new concepts never overwhelm or negate previous concepts, at least not individually.
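One simple way to read "proportional training" is a step size that decays as more examples are seen, e.g. as an inverse exponential. The constants below are arbitrary; the sketch is only meant to show the shape of the schedule.

```python
# Decaying adjustment size: later examples nudge the weights less and less,
# so new data refines the net without wiping out what was already learned.
import numpy as np

def learning_rate(step, lr0=0.5, tau=200.0):
    return lr0 * np.exp(-step / tau)

for step in (0, 100, 500, 1000):
    print(step, round(learning_rate(step), 4))
# A weight update would then look like: W += learning_rate(step) * gradient_term
```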

Keep in mind that I was the renegade without a PhD at Thinking Machines who confounded all the PhDs who were convinced that I had one. The company had a policy of hiring only PhDs whenever possible for certain positions. I got hired from their client for saving their ass at the client site.

                      Also keep in mind, I tend to write my code from scratch so what I talk about may not be possible on whatever development platform you are using.


                        United States
                        Member #83701
                        December 13, 2009
                        225 Posts
                        Offline
                        Posted: August 27, 2010, 1:47 am - IP Logged

                        For investments, I suggest Shannon and Thorp.   It's not neural nets but it's brilliant.


                          United States
                          Member #83701
                          December 13, 2009
                          225 Posts
                          Offline
                          Posted: August 27, 2010, 1:54 am - IP Logged

The opens are set by designated market makers for each stock, with access to information at the brokerage houses that you don't have. They are set to sweep the buy/sell orders already placed in the system. It would be difficult for a neural net to predict the opens, as it is essentially estimating an intermediate latent demand for transactions that can never be known. Then again, perhaps that's why a neural net would be used, when all the inputs cannot be quantified.


                            United States
                            Member #93947
                            July 10, 2010
                            2180 Posts
                            Offline
                            Posted: August 27, 2010, 2:04 am - IP Logged

                            I've been sigmoiding all my inputs to [0...1], but your idea of segmenting that range may have merit for certain variables.  We're getting into the "Fuzzy Logic" realm here.  I also tend to train on 75% of the input and test on 25%, somewhat arbitrarily.  Once a viable model is attained, I like to retrain as often as possible with up to date input.
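For what "sigmoiding the inputs" and the 75/25 split might look like in code (the raw feature values and the centering/scale constants here are arbitrary placeholders, not the actual ETF features):

```python
# Sketch of squashing raw inputs into (0, 1) with a logistic function, plus a
# 75/25 train/test split, as described above.
import numpy as np

def squash(x, center=0.0, scale=1.0):
    """Logistic squashing of a raw feature into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(x - center) / scale))

raw = np.array([-3.0, -0.5, 0.0, 1.2, 4.0])     # stand-ins for price/volume features
print(np.round(squash(raw), 3))

samples = np.arange(100)                         # placeholder dataset rows
cut = int(0.75 * len(samples))
train_rows, test_rows = samples[:cut], samples[cut:]
print(len(train_rows), "training rows,", len(test_rows), "test rows")
```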

Must turn in now, but I really would like you to address the issue of the reasonableness of backpropagation on essentially random inputs, and overtraining, as I mentioned in my last post.

                            A brief perusal of your system of equations problem reminded me of Non-Linear Programming (NLP) approaches I've read about.  Wikipedia is a source for that.

                            --Later (Tomorrow!)


                              United States
                              Member #93947
                              July 10, 2010
                              2180 Posts
                              Offline
                              Posted: August 27, 2010, 2:20 am - IP Logged

The opens are set by designated market makers for each stock, with access to information at the brokerage houses that you don't have. They are set to sweep the buy/sell orders already placed in the system. It would be difficult for a neural net to predict the opens, as it is essentially estimating an intermediate latent demand for transactions that can never be known. Then again, perhaps that's why a neural net would be used, when all the inputs cannot be quantified.

Made the mistake of peeking after logout!

                              My High/Low predictor model uses the (Open minus Yesterday's Close), so I KNOW the Open.

                              --Later