
Algorithms

A process or set of rules to be followed in calculations or other problem-solving operations.

Computer science algorithms to live by can shed light on how to improve or exploit everyday decision-making processes.

Decisions

The "37% rule" is an algorithm for making the best choice within a set amount of time (or a fixed pool of options). Someone allots the first 37% of their search to looking without committing, then commits to the very next option that beats everything they saw during that looking phase.

The explore/exploit trade-off refers to the need to balance the tried and tested with the new and risky. The payoff of this algorithm depends entirely on how much time you have to make decisions. People are more likely to visit their favorite restaurant on their last night in town than risk something new.

Developed in 1952 by mathematician Herbert Robbins, the "Win-Stay, Lose-Shift" algorithm uses slot machines as a metaphor. Choose a machine at random and play it until you lose, then switch to another machine. This method was proven to perform reliably better than choosing machines at random.
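A quick simulation shows the idea. The payout probabilities and pull counts below are illustrative assumptions, not figures from the source.

```python
import random

def win_stay_lose_shift(payout_probs, pulls, seed=0):
    """Play one machine until it loses, then shift to the next.
    payout_probs: true win probability of each machine (unknown to the player)."""
    rng = random.Random(seed)
    machine = rng.randrange(len(payout_probs))
    wins = 0
    for _ in range(pulls):
        if rng.random() < payout_probs[machine]:
            wins += 1                                    # win: stay put
        else:
            machine = (machine + 1) % len(payout_probs)  # lose: shift
    return wins

def random_play(payout_probs, pulls, seed=0):
    """Baseline: pick a machine uniformly at random on every pull."""
    rng = random.Random(seed)
    return sum(rng.random() < payout_probs[rng.randrange(len(payout_probs))]
               for _ in range(pulls))

probs = [0.6, 0.3, 0.1]
print(win_stay_lose_shift(probs, 5000), random_play(probs, 5000))
```

Because staying on a winner means spending more pulls on the better machines, Win-Stay, Lose-Shift consistently collects more wins than pure chance.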

A psychology study found that, given choices, people often "over-explore" rather than exploit a win. Given 15 opportunities to choose which slot machine would win, 47% used Win-Stay, Lose-Shift strategies, and 22% chose machines randomly instead of staying with a machine that paid out.

Hollywood is a prime example of the exploit tactic. The number of movie sequels has steadily increased over the last decade. In both 2013 and 2014, seven of the Top 10 films were either sequels or prequels. The trend is likely to change if new movie ideas draw more box office dollars.

The A/B test is similar to the two-slot-machine scenario in that you stick with the option that performs best. More than 90% of Google's roughly $50 billion in annual revenue comes from paid advertisements, which means that explore/exploit algorithms power a large portion of the internet.

The Gittins Index scores each option under the assumption that you have an indefinite amount of time ahead of you, but that a payoff is worth a little less the longer you wait for it. Because untested options carry more upside, the index says to choose a slot machine with a one-to-one win/loss record (50%) over a machine that has won nine out of 18 times, even though both rates are identical.

"Upper Confidence Bound" algorithms offer more room for discovery than the "Win-Stay, Lose-Shift" method. This algorithm values each option not by its average so far but by the best it "could be" given the information available, so unknowns get the benefit of the doubt. A restaurant you have never visited earns an optimistic estimate precisely because nothing is known about it yet.
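A minimal sketch of the standard UCB1 variant, assuming made-up payout probabilities; the bonus term shrinks as a machine accumulates data, so well-known options lose their mystery.

```python
import math
import random

def ucb1(payout_probs, pulls, seed=0):
    """UCB1: always play the machine with the highest optimistic estimate,
    i.e. observed win rate plus an exploration bonus that shrinks with data."""
    rng = random.Random(seed)
    plays = [0] * len(payout_probs)   # times each machine was played
    wins = [0] * len(payout_probs)    # wins observed on each machine
    total_wins = 0
    for t in range(1, pulls + 1):
        if 0 in plays:                # try every machine once first
            i = plays.index(0)
        else:
            i = max(range(len(plays)),
                    key=lambda j: wins[j] / plays[j]
                    + math.sqrt(2 * math.log(t) / plays[j]))
        win = rng.random() < payout_probs[i]
        plays[i] += 1
        wins[i] += win
        total_wins += win
    return total_wins

print(ucb1([0.7, 0.4, 0.1], 5000))
```

Over time the optimistic bonuses fade and nearly all pulls end up on the best machine, so the total approaches what playing only the 0.7 machine would earn.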

The "Shortest Processing Time" algorithm requires that you complete the quickest tasks first. To weigh priorities, divide each task's importance by how long it will take and work in order of that ratio: only put a task that takes twice as long ahead of another if it is at least twice as important.
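The weighted version of that rule can be written as a one-line sort. The task names, importance scores, and durations here are invented for the example.

```python
def schedule(tasks):
    """Weighted Shortest Processing Time: sort by importance per hour,
    densest tasks first. tasks: list of (name, importance, hours)."""
    return sorted(tasks, key=lambda t: t[1] / t[2], reverse=True)

todo = [("report", 4, 8),    # important but long: density 0.5
        ("email", 1, 0.5),   # trivial but quick: density 2.0
        ("review", 2, 2)]    # density 1.0
for name, importance, hours in schedule(todo):
    print(f"{name}: {importance / hours:.2f} importance per hour")
```

Note that the report is the most important task yet comes last: it is eight times longer than the email but only four times as important.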

Laplace's Law calculates the odds that something will occur with only small amounts of data. Count how many times that result has happened, add one, then divide by the number of opportunities plus two. For example: your softball team has won two of its first six games this season. (2 + 1) / (6 + 2) = 3/8, or a 37.5% chance you win the next game.
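The rule is a one-line function; here it is applied to the softball figures above.

```python
def laplace(successes, attempts):
    """Laplace's rule of succession: (successes + 1) / (attempts + 2)."""
    return (successes + 1) / (attempts + 2)

print(laplace(2, 6))  # two wins in six games -> 3/8 = 0.375
```

A useful property: with no data at all it returns 1/2, a neutral guess, and as evidence accumulates the estimate converges toward the raw win rate.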

The Copernican Principle allows you to predict how long something will last without knowing much of anything about it. The estimate is that it will go on about as long as it has gone on so far. Based on this principle, as of 2021 Google could reasonably be expected to last until 2044 (23 years since its 1998 founding, plus 23 more).
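The estimate reduces to simple arithmetic, shown here with the Google figures from the paragraph above.

```python
def copernican_end(start_year, current_year):
    """Copernican estimate: a thing will last about as long again
    as it has already lasted."""
    return current_year + (current_year - start_year)

print(copernican_end(1998, 2021))  # Google, viewed from 2021 -> 2044
```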

"Power-law distribution" considers that, in life, most things fall below the mean and a few rise far above it. Two-thirds of the US population makes less than the mean income, but the top 1% make almost ten times the mean. Few movies make "Titanic"-level money at the box office, but some do.

The "Nash Equilibrium" describes the point in a two-player game where each player's strategy is the best response to the other's, so neither wants to change unilaterally. This creates stability. In Rock-Paper-Scissors, the equilibrium is for each player to choose among the three options with equal 1/3-1/3-1/3 probability; once both do, neither can gain by deviating.
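The equilibrium claim can be checked directly: against an opponent who mixes 1/3-1/3-1/3, every reply earns the same expected payoff (zero), so there is nothing to gain by switching. This is a verification sketch, not code from the source.

```python
from fractions import Fraction

# Payoff to the row player in Rock-Paper-Scissors
# (rows/columns ordered rock, paper, scissors).
PAYOFF = [[0, -1,  1],
          [1,  0, -1],
          [-1, 1,  0]]

uniform = [Fraction(1, 3)] * 3  # the 1/3-1/3-1/3 mixed strategy

def expected_payoff(strategy, opponent):
    """Expected payoff of one mixed strategy against another."""
    return sum(strategy[i] * opponent[j] * PAYOFF[i][j]
               for i in range(3) for j in range(3))

# Every pure reply to the uniform mix earns exactly 0, so no
# unilateral deviation helps: the uniform mix is an equilibrium.
for move, name in enumerate(["rock", "paper", "scissors"]):
    pure = [Fraction(int(i == move)) for i in range(3)]
    print(name, expected_payoff(pure, uniform))
```

Using exact fractions avoids floating-point noise, so the three payoffs come out as exactly zero rather than merely close to it.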

Human brains have a nearly infinite capacity for memories, but we have a finite amount of time to access them. This results in the "forgetting curve." In a study by Hermann Ebbinghaus, he could recall nonsense syllables 60% of the time just after reading them, but recall declined to 20% after 800 hours.

Ebbinghaus' "forgetting curve" was shown to closely match how often words are used in society. The recurrence of words found in headlines of The New York Times declined at a rate of 15% over 100 days, implying that human brains naturally tune their processes to the world around us.

The stock market "flash crash" of May 6, 2010 was caused by an "information cascade": one person does something different, and others follow suit, assuming the first person knows something they don't. This behavior causes people to panic buy or sell, and to exhibit mob behavior.

Sociologist Barry Glassner noted that murders in the United States declined by 20% throughout the 1990s, and yet mentions of gun violence on American news increased by 600%. An information cascade is driven more by public information than by private information.

When authors Brian Christian and Tom Griffiths scheduled interviews for their book, they found that experts were more likely to accept a narrow, predetermined window than a wide-open one. It is less challenging to accommodate constraints than to weigh an open-ended set of options.

Believe it or not, randomness is part of life's algorithm, too. Nobel prize-winner Salvador Luria realized that random mutations could produce viral resistance by watching his friend win the jackpot on a slot machine.

The best-laid plans are often the simplest. Use a thick marker when brainstorming: the limited space forces simplification and keeps the focus on the big picture.

Schema