Timed Entry into RPS game using Entropy sees gains of > 50% (Win/Total ratio)


First, a little background: what is entropy?

According to the Wikipedia entry (ref. 1), in information theory entropy is a measure of the uncertainty associated with a random variable. (There are other kinds of entropy, such as the thermodynamic one, but they do not apply in our context.)

The concept was introduced by Claude E. Shannon in his 1948 paper "A Mathematical Theory of Communication". (ref. 2)

A very short way to describe information entropy (H) is as a measure of uncertainty: maximum certainty gives H = 0.0 (the outcome is fully determined), maximum uncertainty gives the largest possible H (log2 3 ≈ 1.585 bits for three equally likely moves, or 1.0 when normalized), and anything in between represents the degree of (un)certainty in the information content. For example, if the observed moves are all Rock, then p(R) = 1 and H = 1 · log(1/1) = 0; if R, P and S are equally likely, then H = 3 · (1/3) · log2(3) = log2(3), the maximum.

For the purpose of using it in a random-strategy game (i.e. RPS), we want to measure the value of using H to time a profitable entry into the game: we enter when uncertainty is at its lowest (H = 0.0), i.e. when the opponent's recent moves are most predictable (see graph 1).

These single events (H = 0.0) were measured to be few (in a span of 100 game iterations), and surprisingly they result in a Win/Total ratio upwards of 50% — a substantial gain when measured against a player that places "blind" bets.

Below is the application's (ref. 3) console output for runs using two strategies: no strategy (-), and a strategy (+) employing the entropy measurement with a threshold of 0, that is, bets are made if and only if the entropy H = 0.0.

Each run (5 in total) consists of 525 game iterations. Notice that the player using the entropy strategy places significantly fewer bets than the player using no strategy, who bets on every single game occurrence. Results are consistent and show not only gains but also that no single run resulted in a net loss: wins exceed losses in every run.
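The timed-entry rule can be sketched as follows. This is a minimal illustration, assuming a sliding window of the opponent's last 5 moves and the [0, 0] threshold shown in the console output; the function names are hypothetical and not the actual rps_game.cpp API:

```cpp
#include <cmath>
#include <deque>
#include <map>

// Entropy (in bits) of the empirical move distribution inside the window.
double window_entropy(const std::deque<char>& window) {
    std::map<char, int> counts;
    for (char m : window) ++counts[m];
    double h = 0.0;
    for (const auto& kv : counts) {
        double p = static_cast<double>(kv.second) / window.size();
        h += p * std::log2(1.0 / p);
    }
    return h;
}

// Bet only when the windowed entropy falls inside [h_min, h_max].
// With the thresholds used here (min = max = 0), that means betting
// only when the last 5 observed moves are all identical.
bool should_bet(const std::deque<char>& window,
                double h_min = 0.0, double h_max = 0.0) {
    if (window.size() < 5) return false;  // sampling time window: 5
    double h = window_entropy(window);
    return h >= h_min && h <= h_max;
}
```

With threshold min = max = 0 the agent sits out every round until the opponent's recent moves are fully uniform, which is why it plays only a few dozen of the 525 iterations in each run below.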

Strategy without(-)
Total Game Iterations: 525 Super Agent Played: 525 times. Wins: 179 Losses: 170 Ties: 176 Ratio Win/Total: 0.340952

Strategy with   (+) Entropy (sampling time window: 5, Threshold min: 0, max: 0)
Total Game Iterations: 525 Super Agent Played: 49 times. Wins: 35 Losses: 14 Ties: 0 Ratio Win/Total: 0.714286                 [GAIN of 2 x]

Press any key to continue . . .


Strategy without(-)
Total Game Iterations: 525 Super Agent Played: 525 times. Wins: 167 Losses: 176 Ties: 182 Ratio Win/Total: 0.318095

Strategy with   (+) Entropy (sampling time window: 5, Threshold min: 0, max: 0)
Total Game Iterations: 525 Super Agent Played: 56 times. Wins: 39 Losses: 15 Ties: 2 Ratio Win/Total: 0.696429                  [GAIN of 2 x]

Press any key to continue . . .

 

Strategy without(-)
Total Game Iterations: 525 Super Agent Played: 525 times. Wins: 194 Losses: 150 Ties: 181 Ratio Win/Total: 0.369524

Strategy with   (+) Entropy (sampling time window: 5, Threshold min: 0, max: 0)
Total Game Iterations: 525 Super Agent Played: 39 times. Wins: 23 Losses: 16 Ties: 0 Ratio Win/Total: 0.589744                  [GAIN]

Press any key to continue . . .


Strategy without(-)
Total Game Iterations: 525 Super Agent Played: 525 times. Wins: 172 Losses: 182 Ties: 171 Ratio Win/Total: 0.327619

Strategy with   (+) Entropy (sampling time window: 5, Threshold min: 0, max: 0)
Total Game Iterations: 525 Super Agent Played: 52 times. Wins: 34 Losses: 17 Ties: 1 Ratio Win/Total: 0.653846                  [GAIN]

Press any key to continue . . .

Strategy without(-)
Total Game Iterations: 525 Super Agent Played: 525 times. Wins: 169 Losses: 161 Ties: 195 Ratio Win/Total: 0.321905

Strategy with   (+) Entropy (sampling time window: 5, Threshold min: 0, max: 0)
Total Game Iterations: 525 Super Agent Played: 21 times. Wins: 13 Losses: 8 Ties: 0 Ratio Win/Total: 0.619048                  [GAIN]

Press any key to continue . . .

Graph 1 showcases a sample run of the Super Agent making bets against Player ONE, using Player ONE's total entropy derived from the complete sequence of his bets.

The pseudocode used to calculate his total entropy is: H := sum over v in {R, P, S} of p[v] * log(1 / p[v]), where p[v] is the observed probability of move v (terms with p[v] = 0 contribute 0).
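A runnable version of that pseudocode might look like this — a sketch in the spirit of the application's C++ source, not the actual code from ref. 3:

```cpp
#include <cmath>
#include <string>

// Shannon entropy (in bits) of the empirical R/P/S distribution over a
// player's full move history, e.g. "RPSRRP".  Implements
//   H = sum over v in {R, P, S} of p[v] * log2(1 / p[v]),
// skipping terms with p[v] = 0, since p * log2(1/p) -> 0 as p -> 0.
double total_entropy(const std::string& history) {
    if (history.empty()) return 0.0;
    int counts[3] = {0, 0, 0};  // occurrences of R, P, S
    for (char m : history) {
        if (m == 'R') ++counts[0];
        else if (m == 'P') ++counts[1];
        else if (m == 'S') ++counts[2];
    }
    double h = 0.0;
    for (int c : counts) {
        if (c == 0) continue;
        double p = static_cast<double>(c) / history.size();
        h += p * std::log2(1.0 / p);
    }
    return h;  // 0.0 for a constant history, log2(3) when balanced
}
```

A constant history such as "RRRRR" yields H = 0.0 (the Super Agent's entry signal), while a balanced history such as "RPS" yields the maximum, log2(3) ≈ 1.585 bits.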

 

References:

1. Entropy From Wikipedia, the free encyclopedia

http://en.wikipedia.org/wiki/Entropy_(Information_theory)

2. Claude E. Shannon "A Mathematical Theory of Communication"

http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf

3. Application Source code (Entropy methods added):

https://members.lotterypost.com/edge/programs/src/rps_game.h

https://members.lotterypost.com/edge/programs/src/rps_game.cpp

Entry #25
