Does a .300 hitter actually hit .300? A small sample size can make a seemingly incredible stat not all that impressive, but when does a sample stop being a small sample? Mike Richmond looks to uncover the answer to that question using a random-number generator.
If a baseball player hits .300, does that mean that he’s a .300 hitter? It may sound like a silly question, but if we examine it closely, it can lead us to some interesting conclusions. Let’s make a simple mathematical model of the batter and see what it tells us; in particular, how large a sample do we need to draw reliable conclusions? For example, Boston Red Sox fans have been frustrated by Jackie Bradley Jr.’s continued inability to hit major-league pitching. Is his sub-.200 batting average a true reflection of his skill level, or has he just been unlucky?
Batting as a roll of the die
In real life, athletes are subject to a host of influences: major injuries, minor injuries, sleepless nights, sick kids, food poisoning – the list goes on and on. Major league batters face a number of factors which are peculiar to the game of baseball: night games and day games, right-handers or southpaws, bases empty or runners in scoring position. On any given night, in any given inning, the sum of all these influences may give the batter a small boost, or drag him down. Trying to separate his innate ability from the vagaries of any one plate appearance would be… difficult.
So, let’s make things simple. Instead of considering a real player, we will examine the performance of an ideal system:
Not a die, exactly, but a random-number generator running on a computer. To simulate one at-bat by a player with a “true” .270 batting average, we’ll generate a random number between 0 and 1. If the value is less than or equal to .270, we’ll say that the batter got a hit; otherwise, we’ll say he made an out.
Since discussions of statistical interpretation sometimes involve at-bats, and other times involve games played, we’ll say that our random player gets exactly 4 at-bats (AB) per game played. Moreover, we’ll ignore the possibility of walks, errors, and other complications. He has four chances per game, each of which results in either a hit or an out.
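The two rules above — a hit when the random draw falls at or below the "true" average, and exactly four at-bats per game — can be sketched in a few lines of Python. (The .270 average and four ABs per game come from our setup; the function names are just for illustration.)

```python
import random

TRUE_AVG = 0.270   # "true" batting average of our ideal player
ABS_PER_GAME = 4   # every game is exactly four at-bats, no walks or errors

def at_bat(true_avg):
    """One plate appearance: a hit if the random draw is at or below true_avg."""
    return random.random() <= true_avg

def play_game(true_avg, abs_per_game=ABS_PER_GAME):
    """Return the number of hits in one game of abs_per_game at-bats."""
    return sum(at_bat(true_avg) for _ in range(abs_per_game))

hits = play_game(TRUE_AVG)
print(f"Box score line: {hits}-{ABS_PER_GAME}")
```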
Two representative groups of batters
To make things more interesting, let’s watch the performance of two different ideal batters. The “average” player will have a true batting average of .270, while the “good” player will have a true batting average of .330. It ought to be easy to distinguish the good player from the average one – right?
Of course, since both the real game and this simple simulation contain a large element of randomness, it could be misleading to follow the results of just a single example of each type. Let’s create a large group of identical copies of each player: 1,000 “average” batters and 1,000 “good” batters. With such a large group of players, we are very likely to see examples of the best of good luck and the worst of bad luck.
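Building the two rosters takes only a small extension of the same idea. (The 1,000 players per group and the .270/.330 averages are from the setup above; the seeds and function name are my own choices, included only to make the runs repeatable.)

```python
import random

def simulate_group(true_avg, n_players=1000, n_abs=4, seed=None):
    """Simulate n_players identical batters, each taking n_abs at-bats.

    Returns a list of hit totals, one per player.
    """
    rng = random.Random(seed)
    return [sum(rng.random() <= true_avg for _ in range(n_abs))
            for _ in range(n_players)]

average_group = simulate_group(0.270, seed=1)   # 1,000 "average" batters
good_group    = simulate_group(0.330, seed=2)   # 1,000 "good" batters
```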
After one game…
Our ideal batters step up to the plate four times in this single game. What does the box score tell us the next day?
Well, there are only five possibilities for each player: going 0-4, 1-4, 2-4, 3-4 or 4-4. We can make a table showing the number of batters which fell into each category:
(Table: the number of batters in each category, from 0-4 through 4-4, with the corresponding batting average, for the Red and Blue groups.)
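We don't even need the simulation to predict the shape of this table: the number of hits in four independent tries follows a binomial distribution, so the expected fraction of batters in each category can be computed directly. A short sketch (the simulated counts in the table will scatter around these fractions):

```python
from math import comb

def category_probs(true_avg, n_abs=4):
    """Exact binomial probability of each possible hit total in n_abs at-bats."""
    return [comb(n_abs, k) * true_avg**k * (1 - true_avg)**(n_abs - k)
            for k in range(n_abs + 1)]

for k, (p_avg, p_good) in enumerate(zip(category_probs(0.270),
                                        category_probs(0.330))):
    print(f"{k}-4:  average {p_avg:.3f}   good {p_good:.3f}")
```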
We can also display the results graphically, using a histogram:
Can you tell which color – red or blue – represents the “average” players and which the “good” ones? It isn’t clear to me. Clearly, we need a larger sample.
After five games (or about one week)
After each player has 20 ABs, the distribution becomes considerably less broad:
It’s obvious now that the blue symbols represent the “average” players, and the red bars the “good” players, right?
But – wait a minute. The “average” players should have a .270 batting average and the “good” players a .330 average. That’s a difference of .060. So why do the two groups have so much overlap?
The answer is that this dataset is still too small to make definitive statements about the players' intrinsic ability. Twenty trips to the plate is so few that even a couple of lucky bounces can make the difference between a nice .300 (6-20) and an eye-catching .400 (8-20).
One way to quantify the range of values in each group is to compute the mean and the standard deviation of its batting averages. I’d rather not delve into the mathematical details in this article, but these quantities are relatively quick and easy to calculate, used widely by people in a range of communities, and, to a rough approximation, give us a simple handle on the range of the distributions:
(Table: the mean and standard deviation of the batting averages within the Red and Blue groups.)
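For readers who want to reproduce numbers like these, here is one way to compute the mean and standard deviation of a simulated group after 20 ABs. (The function name and seed are arbitrary; with 1,000 players, the standard deviation should land near the theoretical sqrt(p(1-p)/20), which is about .105 for a .330 hitter.)

```python
import random
from statistics import mean, stdev

def batting_averages(true_avg, n_abs, n_players=1000, seed=0):
    """Batting average of each simulated player after n_abs at-bats."""
    rng = random.Random(seed)
    return [sum(rng.random() <= true_avg for _ in range(n_abs)) / n_abs
            for _ in range(n_players)]

good = batting_averages(0.330, n_abs=20)
print(f"mean {mean(good):.3f}, stdev {stdev(good):.3f}")
```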
If we assume that the outcomes follow a normal distribution (yes, yes, this isn't exactly true; if you have been yelling “binomial” at the screen, give yourself a cookie), then about two-thirds of all the batters will fall within one standard deviation of the mean, and 95 percent of all batters within two standard deviations:
             Red Group      Blue Group
+/- 1 stdev  .229 – .439    .171 – .367
+/- 2 stdev  .123 – .544    .073 – .465
It’s quicker and easier to see the results on a graph. The colored bands on the figure below show the range from two standard deviations below the mean of each group to two standard deviations above the mean. Only five percent or so of each group will fall outside the region of the bands.
No matter how you look at it, though, it’s hard to tell the difference between these two groups of batters. There’s still much more overlap than separation. We need a bigger sample!
After twenty games (or a bit less than one month)
Let’s give each player almost a month of playing time:
The two groups are starting to pull away from each other, but the bulk of players in each still fall within the 95-percent region of the other.
After eighty games (or half a season)
As I write this article, it’s July 1, and the Red Sox have played 79 games – just about half the season. Surely a player who had participated in every game so far would have had a chance to show his true worth by now – right?
Alas, it’s still not possible to separate with certainty the batters with a “true” batting average of .270 from those with .330. For example, roughly 10 percent of the “average” players have batting averages over .300, while about 12 percent of the “good” players are sitting below .300.
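To see how often each group lands on the “wrong” side of a .300 cutoff after half a season, we can simply count. (The .300 cutoff, the function name, and the seeds are my own illustration; the 320 ABs come from 80 games at four at-bats each.)

```python
import random

def fraction(true_avg, n_abs, predicate, n_players=1000, seed=0):
    """Fraction of simulated players whose batting average satisfies predicate."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_players):
        hits = sum(rng.random() <= true_avg for _ in range(n_abs))
        if predicate(hits / n_abs):
            count += 1
    return count / n_players

n_abs = 80 * 4  # 80 games at 4 at-bats each
hot_average = fraction(0.270, n_abs, lambda ba: ba > 0.300)
cold_good   = fraction(0.330, n_abs, lambda ba: ba < 0.300, seed=1)
print(f"'average' players hitting over .300: {hot_average:.0%}")
print(f"'good' players hitting under .300:   {cold_good:.0%}")
```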
After 162 games
Suppose we give each player a full season to perform. Is that enough?
For some purposes, yes, this is enough. The two distributions still touch, and there’s just a little bit of overlap between the colored bands which contain 95 percent of each group… but if one is willing to overlook occasional outliers, one can make a pretty good guess at the “true” batting average of a player after this much time has passed.
A brief statistical summary
It might be useful for future reference to summarize what we’ve seen so far: just how confident can one be in the batting average of a player after some number of games? As long as the “true” value lies somewhere in the range between about .250 and .350, the following table (updated July 8) should be a decent guide:
(Table: the number of games played, the corresponding number of ABs, and the margin within which 95 percent of batting averages will fall around the “true” value.)
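Entries like these can be approximated from the binomial standard error: since p(1 − p) is nearly constant for averages between .250 and .350, about 95 percent of players fall within roughly two standard deviations, 2·sqrt(p(1 − p)/AB), of the true value. A sketch assuming p = .300 and four ABs per game (these are my computed approximations, not the article's exact table):

```python
from math import sqrt

def margin_95(true_avg, n_abs):
    """Approximate 95% half-width: two standard deviations of a binomial proportion."""
    return 2 * sqrt(true_avg * (1 - true_avg) / n_abs)

for games in (1, 5, 20, 80, 162):
    n_abs = games * 4
    print(f"{games:3d} games ({n_abs:3d} ABs): +/- {margin_95(0.300, n_abs):.3f}")
```

Even a full 162-game season leaves a margin of about ±.036, which is why the colored bands in the final figure still touch.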
In our next article, we’ll put this model to the test, by comparing its predictions to the actual performances of several MLB hitters – Jackie Bradley Jr. among them. Stay tuned!