
Artificial Intelligence

Topic closed. 73 replies. Last post 7 years ago by martor854.

Page 5 of 5

Member #83701 (United States) | Posted: August 27, 2010, 10:06 am

Made the mistake of peeking after logout!

My High/Low predictor model uses the (Open minus Yesterday's Close), so I KNOW the Open.

--Later

Sorry, I thought you were trying to predict the open. The day's open minus yesterday's close is an interesting index, as it's an indicator of latent demand. The market maker would have selected the open so that it sweeps through any orders, stops, and limits placed, so as to maximize his brokerage's commissions. This would give you the initial day's momentum, which may very well define the day's high or low. Another interesting indicator to factor in would be the product of the previous day's average minus the close (I was going to say open minus close, but as I've mentioned, the open has an artificial element, hence an average may be more appropriate) and the volume, to see if there's an overall consumer-based momentum.
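A minimal sketch of the two indicators described above (Python; the bar fields are illustrative, and taking the day's "average" as the (high + low + close) / 3 typical price is my assumption, since the post doesn't pin it down):

    # Sketch: the gap and "consumer momentum" indicators described above.
    # Each bar is assumed to be a dict with open/high/low/close/volume floats.

    def gap(today, yesterday):
        """Today's open minus yesterday's close: a rough read on latent demand."""
        return today["open"] - yesterday["close"]

    def consumer_momentum(yesterday):
        """(Previous day's average - close) * volume; the average sidesteps
        the artificial element in the open."""
        avg = (yesterday["high"] + yesterday["low"] + yesterday["close"]) / 3.0
        return (avg - yesterday["close"]) * yesterday["volume"]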

It's kind of interesting that some writers, such as Alex Doulis ("Take Your Money and Run"), say that the stock market has become too efficient and that you profit best from inefficiencies, hence bond markets and hedge funds are more attractive. Yet the efficiencies are also making it more possible to harvest volatility, as per Shannon et al.
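"Harvesting volatility as per Shannon" usually refers to Shannon's rebalancing scheme: hold a fixed stock/cash split and rebalance after every move. A toy Python simulation, with made-up prices (the halve-or-double walk is purely for illustration):

    import random

    def shannons_demon(prices, weight=0.5):
        """Rebalance to a fixed stock/cash split after every price move."""
        cash, shares = 1.0 - weight, weight / prices[0]
        for p in prices[1:]:
            total = cash + shares * p
            shares = weight * total / p      # rebalance back to the target split
            cash = (1.0 - weight) * total
        return cash + shares * prices[-1]

    # A stock that halves or doubles at random has a flat median path,
    # yet the rebalanced portfolio's growth rate is positive.
    random.seed(1)
    prices = [1.0]
    for _ in range(200):
        prices.append(prices[-1] * random.choice([0.5, 2.0]))
    print(shannons_demon(prices))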


Member #93947 (United States) | Posted: August 27, 2010, 5:17 pm

    "Another interesting indicator to factor in would be the product of the previous day's average minus close (I was going to say open minus close but as I've mentioned, the open has an artificial element hence perhaps an average would be more appropriate) and the volume to see if there's an overall consumer based momentum."

Interesting idea to avoid the Open ambiguity. When I'm optimizing 2 or more parameters, I assume the network software is sorting out their products with at least one hidden layer. I look at V, ΔV, C, and ΔC, among other things, so if I include (Avg - C), backpropagation should find the meaningful products. (Right?)
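As a sanity check on that assumption, a minimal numpy sketch (sizes, learning rate, and epoch count are arbitrary) of a one-hidden-layer net trained by plain backpropagation to recover the product of two raw inputs:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (1000, 2))        # two raw inputs
    z = X[:, 0] * X[:, 1]                    # target: their product

    W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)   # 8 tanh hidden units
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

    lr = 0.1
    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)             # forward pass
        err = (h @ W2 + b2).ravel() - z      # squared-error residual
        dh = (err[:, None] * W2.T) * (1 - h ** 2)   # backprop through tanh
        W2 -= lr * (h.T @ err[:, None]) / len(X)
        b2 -= lr * err.mean(keepdims=True)
        W1 -= lr * (X.T @ dh) / len(X)
        b1 -= lr * dh.mean(axis=0)

    print(np.mean(err ** 2))   # should fall well below var(z), about 1/9 here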

BTW, does this help with your system of equations?

http://www.matrixlab-examples.com/nonlinear-programming.html

Will check out Shannon & Thorp.

Member #83701 (United States) | Posted: August 27, 2010, 6:49 pm

      "Another interesting indicator to factor in would be the product of the previous day's average minus close (I was going to say open minus close but as I've mentioned, the open has an artificial element hence perhaps an average would be more appropriate) and the volume to see if there's an overall consumer based momentum."

      Interesting idea to avoid the Open ambiguity.  When I'm optimizing 2 or more parameters I assume the network software is sorting out their products with at least one hidden layer.  I look at V, [delta]V, C, and [delta]C, among other things, so if I include (Avg - C), Backpropagation should find the meaningful products.  (Right?)

      BTW, does this help with your system of equations?

      http://www.matrixlab-examples.com/nonlinear-programming.html

      Will check out Shannon & Thorp.

That's the interesting thing about neural nets: we never really know what they've found. At various times, people have tried to work backwards to determine what logic the neural net has developed, but of course, it could be anything.

With regard to non-linear programming: I was really hoping to avoid computing a 38-dimensional gradient for a min/max Newton-method solution. Just the mention of 38 dimensions turns my head into a pretzel. It may all be moot anyway; although the method I was thinking of is mathematically sound, it still lacks sufficient causality to be useful. It was an interesting thought experiment (an attempt to read Bingo-style scratchers without scratching, by treating them as a cryptographic simple-substitution code). The results are as I expected: the cards read as possible loser and possible winner, at various probability levels, but without a causality link those probabilities are meaningless. I'll post a summary of the experiment in the scratchers section, entitled Enigma Machine for Scratchers.
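For context, the dreaded step above is Newton's method for stationary points: solve H(x) d = grad f(x) and update x = x - d, which means forming and inverting a 38x38 Hessian in the case described. A two-dimensional numpy toy (the function is my own choosing):

    import numpy as np

    def grad(x):   # gradient of f(x, y) = (x - 1)^2 + 2 * (y + 0.5)^2
        return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])

    def hess(x):   # Hessian of the same f; constant for a quadratic
        return np.array([[2.0, 0.0], [0.0, 4.0]])

    x = np.zeros(2)
    for _ in range(10):
        x = x - np.linalg.solve(hess(x), grad(x))   # Newton step

    print(x)   # the stationary point (1, -0.5); one step suffices for a quadratic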


Member #93947 (United States) | Posted: August 27, 2010, 11:21 pm

But I'm still wondering about this question, which we both forgot about last night!

"...but really would like you to address the issue of the reasonableness of Backpropagation of essentially random inputs, and overtraining..."

Member #83701 (United States) | Posted: August 28, 2010, 2:53 pm

I would say that you have to assume that what has been learned from previous iterations has been of some value, so the danger is in having that value negated by noise in subsequent iterations. Of course, noise should cancel out, but that isn't a given. The assumption that previous backpropagation iterations have value you don't want to lose leads to the concept that each iteration should adjust the values by a fraction of the previous iteration's step. Intuitively, I would suggest that the increments have an inverse exponential relationship to the number of iterations. This reflects an initial steep learning curve and an eventual plateau. Where there are hidden layers, perhaps the assertion of an additional hidden-layer element should be complemented by resetting the backpropagation increments to a certain value, perhaps 0.75 or 0.5. If you have a random mutation effect in the hidden layers, where with each iteration there's a small chance that a new concept may be introduced, then it would be logical for the increments to reset when such a mutation occurs.

There may be a rationale for backpropagation of known random inputs: to try to average out the effect of any random components in the actual training data. In that case, the backpropagation increments should probably be a fixed value, such that a one-standard-deviation run of random increments in one direction does not exceed the scale of the increments used with real data, since again you don't want the randomness to overwhelm any real relationships developed. Introducing known random noise may allow the neural net to literally "think outside the box" and pursue another local minimum/maximum solution. I would prefer the occasional random assertion of an element in the hidden layers over feeding in intentionally random inputs.
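A minimal sketch of the increment schedule being proposed, as I read it (the decay constant, reset value, and mutation probability are placeholders of my choosing):

    import math
    import random

    def increment_schedule(n_iters, base=1.0, decay=0.01,
                           reset_value=0.5, mutation_prob=0.001):
        """Step size decays exponentially with iterations since the last
        'mutation', i.e. the random assertion of a new hidden-layer element,
        at which point it resets to reset_value."""
        scale, since_reset = base, 0
        for _ in range(n_iters):
            if random.random() < mutation_prob:
                scale, since_reset = reset_value, 0   # new concept: reset
            yield scale * math.exp(-decay * since_reset)
            since_reset += 1

    steps = list(increment_schedule(10000))   # steep start, plateau, rare resets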


Member #93947 (United States) | Posted: August 28, 2010, 3:53 pm

This doesn't agree with my experience trying to find predictive value in data with, at best, marginal non-random elements. In such cases, eliciting any useful information from the inputs required reducing the number of variables and/or hidden layers, rather than increasing them. Otherwise, the result was "curve fitting" extraordinaire!

"There may be a rationale for backpropagation of known random inputs: to try to average out the effect of any random components in the actual training data."

This statement implies that the sought-after, non-random elements are the rule rather than the exception. In the current case under inspection, based on my analysis, I must disagree.
Member #83701 (United States) | Posted: August 28, 2010, 10:40 pm

Well, I wasn't thinking of increasing the number of hidden layers, but of the very infrequent additional hidden-layer element being asserted, which has the effect of allowing the neural net to consider another avenue of thought. Of course, such an assertion has a destabilizing effect, and an excessive number of asserted elements in the hidden layer would mean excessive curve fitting. Ideally, you want the minimum number of hidden-layer elements asserted. I've always wondered about that, since many three-layer neural nets have 50% of the hidden-layer elements asserted by default.

If the noise-to-signal ratio is too high, a neural net may not be able to help; you'll need some non-random events to look for.
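A rough sketch of asserting one extra hidden element, read here as growing the weight matrices by one unit (in the spirit of constructive, cascade-correlation-style networks; the post doesn't specify a mechanism):

    import numpy as np

    rng = np.random.default_rng(0)

    def grow_hidden_layer(W1, b1, W2, init_scale=0.01):
        """Add one hidden unit: a new column in W1/b1 and a new row in W2.
        Tiny initial weights keep the assertion from destabilizing the net."""
        W1 = np.hstack([W1, rng.normal(0, init_scale, (W1.shape[0], 1))])
        b1 = np.append(b1, 0.0)
        W2 = np.vstack([W2, rng.normal(0, init_scale, (1, W2.shape[1]))])
        return W1, b1, W2

    # Start minimal (2 inputs, 1 hidden unit, 1 output) and grow on demand.
    W1, b1, W2 = rng.normal(size=(2, 1)), np.zeros(1), rng.normal(size=(1, 1))
    W1, b1, W2 = grow_hidden_layer(W1, b1, W2)
    print(W1.shape, W2.shape)   # (2, 2) (2, 1)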

How goes the day trading? Do you hedge your bets? Do they allow shorting warrants in the States? Without warrants, how would you hedge? If you don't negate volatility with a hedge, are you trying to harvest volatility? What's your strategy for harvesting volatility? How do you minimize your transaction costs?


Member #93947 (United States) | Posted: August 29, 2010, 12:30 pm

In the case of inputs from lottery results, I believe the noise-to-signal ratio is VERY high. TOO high!

For day trading, my vehicle of choice is QQQQ for long and PSQ for short. I use a simple approach: no warrants, futures, or options. I'll occasionally pair PSQ and QQQQ as a short-term hedge, but generally I rely on close scrutiny of my [one] trade, my indicators, and the "kill" switch! Even Kelly's criterion allows for quite liberal percentages of capital in this market.
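For reference, the Kelly sizing mentioned above, for a simple win/lose bet (the probability and payoff numbers are placeholders):

    def kelly_fraction(p_win, win_frac, loss_frac):
        """f* = p/a - q/b for a bet returning +b per unit staked with
        probability p and -a with probability q = 1 - p."""
        q = 1.0 - p_win
        return p_win / loss_frac - q / win_frac

    # A 55% chance of +1% vs. a 45% chance of -1% per trade:
    print(kelly_fraction(0.55, 0.01, 0.01))   # 10.0 -- hence "quite liberal" sizing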

I'm an adherent of Trading in the Zone by Mark Douglas, and Design, Testing, and Optimization of Trading Systems by Robert Pardo.

Member #34524 (Tahiti, Polynesia) | Posted: September 17, 2010, 6:40 pm

Hi,

I wonder how lotto can be characterized. Is it a chaotic or a random state-space vector, or both? Is it linear or nonlinear, stationary or non-stationary, deterministic or not, periodic or aperiodic? Would the characterization change if I use the draws in drawing order rather than in sorted order?

It is probably chaotic, as the Lyapunov exponent and Shannon entropy are positive. I don't know about the other parameters.
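A minimal sketch of the Shannon-entropy half of that measurement, as a plug-in estimate from observed draw frequencies (Python; the sample draws are made up):

    import math
    from collections import Counter

    def shannon_entropy(draws):
        """Plug-in entropy estimate, in bits, from observed frequencies."""
        n = len(draws)
        counts = Counter(draws)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    # A fair 49-ball lottery should approach log2(49), about 5.61 bits per ball.
    print(shannon_entropy([7, 12, 33, 7, 41, 12, 5, 33, 49, 21]))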

Defining the characteristics of each lotto is important in choosing the right tools for analysis.

Does anyone know about this subject?

Thank you very much.


Member #93947 (United States) | Posted: September 18, 2010, 10:37 pm

bob790,

Most of your questions above are outside my areas of expertise. I am familiar with Shannon's analyses as they apply to changes in stock prices. However, believing that in the present the probability of a past event is 1.0, I fail to see how his work would have much value applied to the classes of chaos/randomness produced by lottery ping-pong-ball machines or by random number generators producing sequences with extremely long periods.

John Conover relies heavily on Shannon's work.

http://www.johncon.com/ntropix/

The blogger and posters here may share your interests. (Look for "Shannon".)

http://www.lotterypost.com/blog/70396

Also, the originator of this thread may be fishing in these waters, so to speak.

http://www.lotterypost.com/thread/214856/1664499

--Jimmy4164

Member #34524 (Tahiti, Polynesia) | Posted: September 19, 2010, 12:49 am

Thanks for replying Jimmy

Bob

Member #97833 (Edinburgh, United Kingdom) | Posted: September 24, 2010, 11:58 pm

Hi,

I'm the former martor54. I don't know why I can't log in as martor54 on the site.

Could you please shed some light on the matter? I can see you're good, very good. Any nudge in the right direction will be appreciated. Thanks.


Member #93947 (United States) | Posted: September 25, 2010, 10:20 am

martor[8]54,

This entry in LotteryPost Help may provide a solution to your login problem.

http://www.lotterypost.com/help/62704

Jimmy4164

Member #97833 (Edinburgh, United Kingdom) | Posted: September 25, 2010, 8:36 pm

Hi jimmy4164,

Thanks for your reply. I haven't made any changes to my browser and/or security software. I do not know why I can't log in as martor54. I follow the procedure, I get "Log in successful", but my user name is not there. However, I can do that "no problemo" under my slightly changed user name. I'll have to stick with that.

Best regards,

martor854