The Picks
On April 14, StatScope identified three games with meaningful edges according to our 9-factor model. The Los Angeles Dodgers carried a 79% projected win probability at home against the New York Mets, supported by a dominant lineup wOBA and superior starting pitcher quality. The Pittsburgh Pirates checked in at 74.7% against the Washington Nationals, driven by recent form and home-field advantage. Finally, the Milwaukee Brewers graded out at 72.1% against the Toronto Blue Jays, anchored by bullpen strength and schedule positioning.
Combined, the three picks framed a straightforward expected-value scenario: if you could replay these exact matchups 100 times each, our model suggested that betting the favorite would win approximately 75% of those bets. But baseball, of course, does not repeat. Each game is singular.
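The arithmetic behind that combined figure is just the average of the three quoted probabilities. A minimal sketch, using only the numbers stated above:

```python
# Expected wins across the three April 14 picks, using the model
# probabilities quoted above. Illustrative arithmetic only.
probs = {
    "Dodgers over Mets": 0.790,
    "Pirates over Nationals": 0.747,
    "Brewers over Blue Jays": 0.721,
}

expected_wins = sum(probs.values())      # about 2.26 wins expected out of 3
avg_prob = expected_wins / len(probs)    # about 75.3% average win probability

print(f"expected wins: {expected_wins:.2f} of {len(probs)}")
print(f"average win probability: {avg_prob:.1%}")
```

Note that even at a 75% average, the single most likely number of wins out of three is two, not three.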
What Went Right: The Dodgers Win
The Dodgers delivered a textbook result. Los Angeles' lineup wOBA of .325 (well above the league average of .315) paired with a Mets offense in decline created the offensive mismatch our model weighted heavily. The starting pitcher advantage (Dodgers starter FIP of 2.85 versus the Mets' 3.90) gave Los Angeles the early-inning edge. Home-field advantage (modeled at +6.6 percentage points) provided the final tilt.
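How individual edges like these stack into a single win probability is model-specific, and StatScope's actual 9-factor weighting is not reproduced here. As a purely illustrative sketch, a logistic combination of factor gaps might look like the following; the weights are invented (back-fitted so the output lands near the quoted 79%) and are not the model's real coefficients:

```python
import math

# Hypothetical illustration: combine factor edges into a win probability
# with a logistic function. Weights are invented for this sketch and are
# NOT StatScope's actual 9-factor coefficients.
def win_probability(features, weights, intercept=0.0):
    z = intercept + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Dodgers-Mets inputs quoted in the writeup, expressed as gaps.
features = {
    "woba_gap": 0.325 - 0.315,   # lineup wOBA minus league average
    "fip_gap": 3.90 - 2.85,      # opposing starter FIP minus own starter's
    "home": 1.0,                 # home-field indicator
}
weights = {"woba_gap": 20.0, "fip_gap": 0.9, "home": 0.18}  # invented

print(f"{win_probability(features, weights):.1%}")
```

With these made-up weights the sketch reproduces roughly the 79% figure quoted above, but the point is the shape of the calculation, not the numbers.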
Result: Dodgers won 2-1 in a tight game. This was a correct pick driven by sound reasoning: the process held up, and this time the outcome matched it. This is what we want to see in the track record: not every pick winning, but wins clustering around high-probability scenarios and losses clustering around lower ones.
What Went Wrong: The Pirates and Brewers
The Pirates' loss to Washington (5-4) illustrates baseball's inherent variance. Our model weighted the Pirates' recent 10-game form (5.2 runs scored per game, 3.8 allowed) and home advantage heavily; both are genuine statistical edges. Yet a single game is not a large sample. A bullpen collapse, one bad inning, a couple of timely hits: these are noise to the model's inputs, but they are how individual games are actually decided.
Similarly, the Brewers fell to Toronto 9-7 despite favorable fundamentals. Strong bullpen depth and recent offensive form suggested Milwaukee should win more often than not. Yet Toronto's lineup connected early, and the Brewers could not mount enough of a counterattack. The model's inputs remained correct; the outcome did not follow.
The Bigger Picture
This mixed result (one win, two losses on three picks averaging roughly 75% win probability) is exactly what a calibrated model should produce. Over a large sample (50+ games), picks with a 75% average probability should win approximately 75% of the time. Over three games, we expect noise. Some weeks the model hits 3-for-3; others 1-for-3 or 0-for-3. That volatility is expected.
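That volatility can be made concrete. Treating the three picks as independent events at roughly 75% each, the binomial distribution says a 1-for-3 week (or worse) should happen about one week in six:

```python
from math import comb

p = 0.75  # approximate average win probability of the three picks
n = 3

# Binomial probability of exactly k wins in n independent picks.
def pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

for k in range(n + 1):
    print(f"{k}-for-{n}: {pmf(k):.1%}")

# Chance of a week as bad as this one, or worse.
print(f"1-for-3 or worse: {pmf(0) + pmf(1):.1%}")  # about 15.6%
```

Independence between games is itself an assumption here, but it is a reasonable one for matchups involving six different teams.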
What matters is whether the model's probabilities are honest. If we claim 75% confidence and win 75% of the time over a full season, the model is calibrated. If we claim 75% and win 60%, we are overconfident. Our /track page logs every pick and tracks calibration continuously, so you can verify these claims yourself.
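The calibration check itself is simple arithmetic over a pick log: compare the average claimed probability with the realized win rate. A minimal sketch, with a hypothetical log format (the real record lives on the /track page), seeded with this week's three picks:

```python
# Sketch of a calibration check over a pick log. Entries are
# (claimed win probability, did the pick win?). The log format is
# hypothetical; the real record is on the /track page.
picks = [
    (0.790, True),
    (0.747, False),
    (0.721, False),
    # a full season of picks would go here
]

claimed = sum(p for p, _ in picks) / len(picks)
actual = sum(won for _, won in picks) / len(picks)

print(f"claimed: {claimed:.1%}, actual: {actual:.1%}")
print(f"calibration gap: {claimed - actual:+.1%}")
```

Over three picks the gap is dominated by noise, which is the point of the paragraph above; the comparison only becomes meaningful once the log covers a full season.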
The process of weighting recent form, starter quality, lineup strength, park factors, and regression to the mean is designed to be correct more often than not. But baseball remains fundamentally uncertain. Process beats results in the long run, but in any given week, results can deceive.