I hope it will be inspiring for other lottery players. The list also contains a few terms related to gambling, such as "combinadic".
"Classical" views: http://en.wikipedia.org/wiki/Classical_definition_of_probability http://en.wikipedia.org/wiki/Frequency_probability
Alternative points of view: http://en.wikipedia.org/wiki/Probability_interpretations
http://en.wikipedia.org/wiki/Empirical_probability "Empirical probability, also known as relative frequency, or experimental probability, is the ratio of the number of favorable outcomes to the total number of trials, not in a sample space but in an actual sequence of experiments. In a more general sense, empirical probability estimates probabilities from experience and observation. The phrase a posteriori probability has also been used as an alternative to empirical probability or relative frequency."
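The ratio in that definition is easy to see in simulation. A minimal sketch (the function name and the die example are mine, not from the article): estimate a probability as favorable outcomes divided by total trials.

```python
import random

def empirical_probability(trial, trials=100_000, seed=0):
    """Estimate P(event) as (favorable outcomes) / (total trials).
    `trial` takes an RNG and returns True on a favorable outcome."""
    rng = random.Random(seed)
    favorable = sum(trial(rng) for _ in range(trials))
    return favorable / trials

# Example: probability that a fair die shows a six (true value 1/6).
p_six = empirical_probability(lambda rng: rng.randint(1, 6) == 6)
```

With 100,000 trials the estimate should land close to 1/6 ≈ 0.167, illustrating how the empirical ratio converges on the classical value.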
Bayesian probability: http://en.wikipedia.org/wiki/Bayesian_probability "According to the Bayesian probability calculus, the probability of a hypothesis given the data (the posterior) is proportional to the product of the likelihood times the prior probability (often just called the prior). The likelihood brings in the effect of the data, while the prior specifies the belief in the hypothesis before the data was observed."
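The quoted rule, posterior ∝ likelihood × prior, can be sketched for a discrete set of hypotheses (the coin example and function name here are illustrative, not from the article):

```python
def posterior(priors, likelihoods):
    """Posterior over hypotheses: multiply prior by likelihood
    term-by-term, then renormalize so the result sums to 1."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two hypotheses about a coin: fair (P(heads) = 0.5) vs. biased
# (P(heads) = 0.9), equally believed beforehand. After one head:
post = posterior([0.5, 0.5], [0.5, 0.9])
```

The data (one observed head) shifts belief toward the biased hypothesis, exactly as the likelihood term is described above.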
http://en.wikipedia.org/wiki/Propensity_probability "Theorists who adopt this interpretation think of probability as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind, or to yield a long run relative frequency of such an outcome."
http://en.wikipedia.org/wiki/Algorithmically_random_sequence "An algorithmically random sequence (or random sequence) is an infinite sequence of binary digits that appears random to any algorithm."
Please, see http://en.wikipedia.org/wiki/Algorithmically_random_sequence#Relative_randomness
http://en.wikipedia.org/wiki/Kolmogorov_complexity "In algorithmic information theory (a subfield of computer science), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov-Chaitin complexity, stochastic complexity, algorithmic entropy, or program-size complexity) of an object such as a piece of text is a measure of the computational resources needed to specify the object."
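Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude, computable stand-in for the idea: a patterned sequence has a short description, a patternless one does not. A rough illustration using zlib (this proxy is my assumption for illustration, not a claim from the article):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude upper bound in the
    spirit of Kolmogorov complexity (which itself is uncomputable)."""
    return len(zlib.compress(data, 9))

regular = b"01" * 500          # short description: "01 repeated 500 times"
random.seed(1)
noisy = bytes(random.getrandbits(8) for _ in range(1000))  # no obvious pattern
```

The highly patterned string compresses to a few dozen bytes, while the noisy string of the same length barely compresses at all.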
http://en.wikipedia.org/wiki/Pseudorandomness "A pseudorandom process is a process that appears random but is not. Pseudorandom sequences typically exhibit statistical randomness while being generated by an entirely deterministic causal process."
http://en.wikipedia.org/wiki/Statistical_randomness "A numeric sequence is said to be statistically random when it contains no recognizable patterns or regularities; sequences such as the results of an ideal die roll, or the digits of Pi exhibit statistical randomness. Statistical randomness does not necessarily imply "true" randomness, i.e., objective unpredictability."
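The two quotes above fit together neatly in code: a seeded generator is entirely deterministic (pseudorandom), yet its output passes simple statistical checks. A minimal sketch using a chi-square frequency test on simulated die rolls (test choice and names are mine):

```python
import random

def chi_square_uniform(counts):
    """Chi-square statistic of observed counts against a uniform
    expectation: sum of (observed - expected)^2 / expected."""
    n = sum(counts)
    expected = n / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# Fully deterministic (same seed -> same rolls), yet statistically random.
rng = random.Random(42)
rolls = [rng.randint(1, 6) for _ in range(6000)]
counts = [rolls.count(face) for face in range(1, 7)]
stat = chi_square_uniform(counts)
# For 5 degrees of freedom, values far above ~15 (the 99th percentile)
# would cast doubt on uniformity; typical values sit near 5.
```

Passing such a test says nothing about "true" randomness, which is exactly the distinction the statistical-randomness article draws.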
http://en.wikipedia.org/wiki/Ramsey_theory "Ramsey theory, named after Frank P. Ramsey, is a branch of mathematics that studies the conditions under which order must appear. Problems in Ramsey theory typically ask a question of the form: how many elements of some structure must there be to guarantee that a particular property will hold?"
http://www.ciphersbyritter.com/RES/RANDTEST.HTM Excellent randomness literature survey compiled by Terry Ritter. Highly recommended.
See also unpublished PhD dissertation by Michiel van Lambalgen on randomness: http://staff.science.uva.nl/~michiell/docs/fFDiss.pdf
This document is very interesting because it examines children's intuitions: "Children’s understanding of randomness as a model" http://www.bsrlm.org.uk/IPs/ip28-3/BSRLM-IP-28-3-09.pdf Definitely worth reading!
"PEOPLE’S INTUITIONS ABOUT RANDOMNESS AND PROBABILITY" http://tinyurl.com/mvz8m5 "A recent empirical study indicates that students in introductory statistics class are generally confused about the different notions of probability (Albert, 2003). Clearly, continuing to teach only the frequentist conception cannot reduce the confusion. This implies to students either that there is only one “correct” conception of probability or that the frequentist and Bayesian conceptions are competitive, which should not be the case (Vranas, 2001). Moreover, an exclusive focus on frequentist notions may conflict with the students’ intuitions and representations about probability (see e.g., Hawkins and Kapadia, 1984). In any case, as emphasized by Konold (1991, p. 144), “the teacher cannot, by decree, enforce a normative view.”"
http://en.wikipedia.org/wiki/Calculus_of_predispositions "According to Aron Katsenelinboigen, calculus of predispositions is another method of computing probability. Both methods may lead to the same results and, thus, can be interchangeable. However, it is not always possible to interchange them since computing via frequencies requires availability of statistics, possibility to gather the data as well as having the knowledge of the extent to which one can interlink the system’s constituent elements. Also, no statistics can be obtained on unique events and, naturally, in such cases the calculus of predispositions becomes the only option."
http://en.wikipedia.org/wiki/Outlier "In statistics, an outlier is an observation that is numerically distant from the rest of the data. They can occur by chance in any distribution, but they are often indicative either of measurement error or that the population has a heavy-tailed distribution. In the former case one wishes to discard them or use statistics that are robust to outliers, while in the latter case they indicate that the distribution has high kurtosis and that one should be very cautious in using tool or intuitions that assume a normal distribution. A frequent cause of outliers is a mixture of two distributions, which may be two distinct sub-populations, or may indicate "correct trial" versus "measurement error"; this is modeled by a mixture model."
http://en.wikipedia.org/wiki/Imprecise_probability "The notion of Imprecise probability is used as a generic term to cover all mathematical models which measure chance or uncertainty without sharp numerical probabilities. It includes both qualitative (comparative probability, partial preference orderings, ...) and quantitative modes (interval probabilities, possibility theory, belief functions, upper and lower previsions, upper and lower probabilities, ...). Imprecise probability models are needed in inference problems where the relevant information is scarce, vague or conflicting, and in decision problems where preferences may also be incomplete. Imprecise probability theory aims not to replace, but to complement and enlarge, the classical Bayesian approach to probability theory, by providing it with tools to work with weaker information states."
http://en.wikipedia.org/wiki/Combinadic "In mathematics, a combinadic is an ordered integer partition, or composition. Combinadics provide a lexicographical index for combinations. Applications for combinadics include software testing, sampling, quality control, and the analysis of gambling games."
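The "lexicographical index for combinations" is concrete enough to code. A minimal sketch (function name `lex_rank` is mine), assuming combinations are sorted tuples drawn from {0, ..., n-1}: count how many combinations precede the given one in lexicographic order.

```python
from math import comb

def lex_rank(combo, n):
    """Lexicographic index of a sorted k-combination of {0, ..., n-1}.
    For each position, count combinations that start the same way but
    take a smaller element here."""
    k = len(combo)
    rank, prev = 0, -1
    for i, c in enumerate(combo):
        for j in range(prev + 1, c):
            rank += comb(n - 1 - j, k - 1 - i)  # completions of a smaller prefix
        prev = c
    return rank

# The six 2-subsets of {0,1,2,3} in lex order: (0,1)...(2,3).
first, last = lex_rank((0, 1), 4), lex_rank((2, 3), 4)
```

For a lottery player, such an index maps each possible ticket to a single integer and back, which is why combinadics appear in the analysis of gambling games.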
http://en.wikipedia.org/wiki/Factoradic "In combinatorics, factoradic is a specially constructed number system. Factoradics provide a lexicographical index for permutations."
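The permutation counterpart of the previous idea: each factoradic digit counts, at each position, how many smaller elements still follow, weighted by a factorial. A minimal sketch (function name is mine), assuming permutations of 0..n-1:

```python
from math import factorial

def permutation_rank(perm):
    """Lexicographic index of a permutation of 0..n-1 via its
    factoradic digits: digit i = number of later elements smaller
    than perm[i], with place value (n-1-i)!."""
    n = len(perm)
    rank = 0
    for i in range(n):
        smaller = sum(1 for j in range(i + 1, n) if perm[j] < perm[i])
        rank += smaller * factorial(n - 1 - i)
    return rank

# Permutations of (0,1,2) in lex order: 012, 021, 102, 120, 201, 210.
r = permutation_rank((2, 0, 1))
```

Here (2, 0, 1) has factoradic digits (2, 0, 0), so its index is 2·2! + 0·1! + 0·0! = 4.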
http://www.designinference.com/documents/2002.09.rndmnsbydes.pdf "Randomness by Design", William A. Dembski
Informative and inspiring materials from Alan Hajek: http://philrsss.anu.edu.au/people-defaults/alanh/ I recommend "Fifteen Arguments Against Hypothetical Frequentism"
Another interesting web page that summarizes different views on probability: http://www.geocities.com/potential_continuity/physicalprobability.html
"From Algorithmic to Subjective Randomness" Thomas L. Griffiths, Joshua B. Tenenbaum http://cocosci.berkeley.edu/tom/papers/algrand.pdf
"Probability, algorithmic complexity, and subjective randomness" http://web.mit.edu/cocosci/Papers/complex.pdf
"Interpretations of Probability" http://plato.stanford.edu/entries/probability-interpret/
Terence Tao: "Structure and randomness in combinatorics" (http://arxiv.org/abs/0707.4269) "Combinatorics, like computer science, often has to deal with large objects of unspecified (or unusable) structure. One powerful way to deal with such an arbitrary object is to decompose it into more usable components. In particular, it has proven profitable to decompose such objects into a structured component, a pseudo-random component, and a small component (i.e. an error term); in many cases it is the structured component which then dominates."