Quote: Originally posted by apagogeas on July 18, 2004
This is the correct example, but I have to correct some of the comments you made.
You say "...for a very easy to see example of "typical" win rates.". I wonder what is NOT considered a typical rate, then. I show you that I produce a 5/6 hit in 5 out of 30 draws, which performs 3.1 times better than luck. I also show you that the 0/6, 1/6, 2/6 hits have been reduced below luck rates (which is welcome, of course). Is it still a typical rate??? For god's sake!
...
You say : "It would be useful for players to see how well (or poorly) software does for pure prediction. Tests have been proposed in the past, but they haven't attracted much response from software vendors."
I agree on this, but keep in mind that it is not so easy to do such tests. There are several reasons why it is impractical, especially for programs that require user input. It is better to see what a program can do for you with your own hands.
No problem at all with what you wrote, and no criticism intended.
On the first part:
You wrote:
> I wonder what is NOT considered a typical rate, then. I show you that I produce a 5/6 hit in 5 out of 30 draws, which performs 3.1 times better than luck. I also show you that the 0/6, 1/6, 2/6 hits have been reduced below luck rates (which is welcome, of course). Is it still a typical rate??? For god's sake!
OK, but we are talking about two different things. I thought I expressed it in two earlier messages.
You can show a "better than typical" win rate in the last 30 draws for one game, the Florida 6/53. There is no criticism of that.
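As an aside, the "luck" baseline being compared against can be computed directly from the hypergeometric distribution for a 6/53 game. A minimal Python sketch (the single-ticket assumption and the 30-draw window are illustrative only, not a reconstruction of apagogeas's actual test):

```python
from math import comb

# Hypergeometric baseline for a 6/53 game (e.g. Florida Lotto):
# probability that one 6-number ticket matches exactly k of the 6 drawn.
POOL, PICK = 53, 6

def p_match(k, pool=POOL, pick=PICK):
    """P(exactly k matches) for a single ticket in one draw."""
    return comb(pick, k) * comb(pool - pick, pick - k) / comb(pool, pick)

def expected_count(k, draws, tickets_per_draw=1):
    """Expected number of k-match results by pure chance."""
    return draws * tickets_per_draw * p_match(k)

for k in range(PICK + 1):
    print(f"{k}/6: p = {p_match(k):.3e}, "
          f"expected over 30 draws = {expected_count(k, 30):.5f}")
```

Dividing an observed hit distribution by these expected counts gives a "times better than luck" figure; doing that over many games and future draws is exactly the kind of comparison being asked for below.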
What's next? How does it apply to, say, a California or British Columbia or U.K. player for the next 30 draws? Or for any other game and any other future draws?
What's important is to show the same thing in many draws for many games. That's the only thing that matters to other players who might want to use the system.
Try the _next_ 30 draws for 10 other games (for example), and the data from those games will be more relevant.
That means: It will be more relevant for showing "better than typical" chances for players elsewhere, outside of Florida.
That's all I meant. Not to take anything away from your data. Well done, in your one example.
Make it provable to other players, in other games, and do it consistently. Then you will show that you have something.
- - - - - -
On the second part:
You wrote:
> I agree on this, but keep in mind that it is not so easy to do such tests. There are several reasons why it is impractical, especially for programs that require user input. It is better to see what a program can do for you with your own hands.
I agree totally with you, but I had emphasized it differently.
Yes. It may be difficult to do some tests. But there is no published evidence that major software vendors have done _any_ tests of forecasting. I mean, "any" tests done and released to the public, including bona fide data.
That doesn't stop them from advertising, and suggesting big wins in that advertising.
That is the point I tried to convey.
Agreed: Lottery software requires user input. I already suggested that lottery software is best thought of as an "idea machine", not a "prediction machine".
I was contrasting these two things against each other.
(1) Software Advertisements. ... Advertisements seem to promote "big wins", without ever mentioning the need for user interpretation of the data, and for user inputs in selecting the next draw's numbers.
(2) No Testing. ... The reluctance of software vendors to join into any kind of real "prediction" trial.
That's all.
Look at it from this viewpoint.
Major lottery software programs have been sold by the hundreds of thousands. A software house advertises a big win by one of its users. But what is never discussed is this: there were hundreds, maybe thousands, of players using that same software, looking at exactly the same screens, for exactly the same drawing.
... The software vendor advertises the one big win to promote new sales.
... The software vendor never joins in a "forecast" test to show that it does (or does not) do any better than "typical" expectation, given a large population of players.
... In fact, when major software vendors have been invited publicly to join such a "forecast" test, they have simply ignored the invitation. That means, they ignored multiple requests, not even responding to direct e-mails. (cf. archives of rec.gambling.lottery)
Clearly I am not associating your system with the above. Not in any way. We were talking about two different things. Or maybe, we've been talking about two pieces of the same thing.
- - - - -
Summary:
Prediction/Forecasting systems need to be tested in a reasonably large set of lottery games, using a reasonably large history of draws.
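For what it's worth, such a test is straightforward to outline: walk forward through each game's draw history, predict every draw using only the draws before it, and compare the resulting hit distribution against the hypergeometric chance baseline. A hypothetical Python sketch (the 6/49 parameters, the simulated history, and the deliberately naive "replay the last draw" predictor are all placeholders, not any vendor's method):

```python
import random
from math import comb

POOL, PICK = 49, 6  # illustrative 6/49 game; any game's parameters work

def backtest(predict, history):
    """Walk forward through `history`: predict each draw from only the
    draws before it, and tally how many numbers each prediction matched."""
    hits = [0] * (PICK + 1)
    for i in range(1, len(history)):
        ticket = predict(history[:i])
        hits[len(set(ticket) & set(history[i]))] += 1
    return hits

def chance_baseline(n_draws):
    """Expected hit counts for a random ticket (hypergeometric)."""
    return [n_draws * comb(PICK, k) * comb(POOL - PICK, PICK - k)
            / comb(POOL, PICK) for k in range(PICK + 1)]

if __name__ == "__main__":
    rng = random.Random(0)
    history = [rng.sample(range(1, POOL + 1), PICK) for _ in range(500)]
    naive = lambda past: past[-1]  # placeholder predictor: replay last draw
    print("observed:", backtest(naive, history))
    print("expected:", [round(x, 2) for x in chance_baseline(len(history) - 1)])
```

Run over real histories from many games, a predictor that genuinely beats the baseline would show observed counts consistently above the expected ones at the high match levels; against simulated random draws, as here, the naive predictor should track the baseline.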
The odds are the odds. Once we've read the advertisements with their win claims and bought a product, the advertisements do nothing more for us.
Something in the software has to "work", and it's a pity each one of us has to try to discover it for ourselves.
Just to be honest about it. Not to make a big deal out of it.
Joe Roberts
CDEX