2022 Midterms Review: How My Forecasts Stacked Up With Actual Results


Where Things Stand After The Midterms:

The midterms did not go the way most people, including experts and the media, thought they would. The red wave that many expected, given the historical precedent of the President's party losing a large number of seats in both houses of Congress, did not materialize. Republicans did win a slim majority in the House of Representatives, but Democrats will hold control of the Senate with 51 seats, thanks to a Pennsylvania flip and a Georgia hold. Below you can see how accurate (or not) my forecasts were, both generally and in specific races.

U.S. Senate:

In terms of the U.S. Senate, both my forecasts and the simulations using forecast and polling average data correctly predicted a Democratic majority. However, my forecast projected a 52-48 majority, while the actual result, counting Georgia, is 51 seats for Democrats. The forecast incorrectly had Wisconsin electing Lt. Gov. Mandela Barnes over incumbent Sen. Ron Johnson. It also predicted that Dem. Tim Ryan would defeat Rep. J.D. Vance in Ohio, but in reality Vance won by about 6.5 points. This was balanced out in part by another incorrect projection in Pennsylvania: my forecast gave Rep. Mehmet Oz a 4.9-point advantage, but in reality Dem. John Fetterman won, flipping the seat for Democrats. All in all, three races were called incorrectly, shown below:

  • Ohio - Forecast Prediction: Dem. Tim Ryan +2.4 - Result: Rep. J.D. Vance +6.5
  • Pennsylvania - Forecast Prediction: Rep. Mehmet Oz +4.9 - Result: Dem. John Fetterman +4.9
  • Wisconsin - Forecast Prediction: Dem. Mandela Barnes +5.6 - Result: Rep. Ron Johnson +1

My Senate Forecast vs The Pros:

Of course, the obvious way to check my accuracy is to look at the actual results, but we should also compare my performance to that of the experts. For this, we will compare my forecasts to those of Nate Silver's FiveThirtyEight.

  • HenryDRiley - Overall Accuracy: 91.4% (32/35 races correct) - Senate Makeup: Predicted a 52-48 Democratic Senate
  • FiveThirtyEight - Overall Accuracy: 91.4% (32/35 races correct) - Senate Makeup: Most likely predicted outcome was a 51-49 Republican Senate

We can see that my model performed equally with FiveThirtyEight's in overall accuracy, predicting the same number of races correctly. However, my model outperformed FiveThirtyEight in terms of the forecasted makeup of the Senate. They projected that a Republican majority was most likely, whereas I had a 52-48 Democratic Senate. In reality, it will be a 51-seat Democratic majority, thanks to Raphael Warnock holding Georgia's seat and a Democratic flip in Pennsylvania.

U.S. House Of Representatives:

For the U.S. House, my forecasts correctly predicted a Republican majority once the competitive races I forecasted were added to the "safe" seats already assumed for each party. Of the 59 races I created forecasts for, eighteen were called incorrectly, as seen below:

  • Alaska's At Large District - Forecast Prediction: Republican - Result: Dem. M. Peltola +10
  • Arizona's 6th District - Forecast Prediction: Dem. K. Engel - Result: Rep. J. Ciscomani +1.4
  • California's 13th District - Forecast Prediction: Dem. A. Gray - Result: Rep. J. Duarte +0.4
  • Colorado's 8th District - Forecast Prediction: Rep. B. Kirkmeyer - Result: Dem. Y. Caraveo +0.7
  • Iowa's 3rd District - Forecast Prediction: Dem. C. Axne - Result: Rep. Z. Nunn +0.7
  • North Carolina's 13th District - Forecast Prediction: Rep. B. Hines - Result: Dem. W. Nickel +3.2
  • New Jersey's 7th District - Forecast Prediction: Dem. T. Malinowski - Result: Rep. T. Kean +4.2
  • New Mexico's 2nd District - Forecast Prediction: Rep. Y. Herrell - Result: Dem. G. Vasquez +0.7
  • New York's 3rd District - Forecast Prediction: Dem. R. Zimmerman - Result: Rep. G. Santos +8.2
  • New York's 4th District - Forecast Prediction: Dem. L. Gillen - Result: Rep. A. D'Esposito +3.8
  • New York's 17th District - Forecast Prediction: Dem. S. Maloney - Result: Rep. M. Lawler +0.9
  • New York's 19th District - Forecast Prediction: Dem. J. Riley - Result: Rep. M. Molinaro +2.2
  • Ohio's 1st District - Forecast Prediction: Rep. S. Chabot - Result: Dem. G. Landsman +5.8
  • Ohio's 13th District - Forecast Prediction: Rep. M. Gesiotto Gilbert - Result: Dem. E. Sykes +5.2
  • Oregon's 5th District - Forecast Prediction: Dem. J. McLeod-Skinner - Result: Rep. L. Chavez-DeRemer +2.2
  • Oregon's 6th District - Forecast Prediction: Rep. M. Erickson - Result: Dem. A. Salinas +2.5
  • Tennessee's 5th District - Forecast Prediction: Dem. H. Campbell - Result: Rep. A. Ogles +13.5
  • Virginia's 2nd District - Forecast Prediction: Dem. E. Luria - Result: Rep. J. Kiggans +3.4

My House Forecast vs The Pros:

Let's see how my forecasts compared to FiveThirtyEight's.

  • HenryDRiley - Overall Accuracy: 69.5% (41/59 competitive races correct) - House Makeup: Predicted a narrow Republican House
  • FiveThirtyEight - Overall Accuracy: 78% (46/59 competitive races correct) - House Makeup: Most likely predicted outcome was a 230-205 Republican House

We can see that this time FiveThirtyEight outperformed my model in overall accuracy, predicting five additional races correctly. However, my model again outperformed FiveThirtyEight in terms of the projected size of the House majority. They projected that a Republican majority of 230 seats was most likely, whereas I had around 220-225. In reality, it ended up being a 222-seat Republican House.

Gubernatorial Races:

Of the 36 gubernatorial races, three were called incorrectly:

  • Arizona - Forecast Prediction: Rep. Kari Lake - Result: Dem. Katie Hobbs +0.6
  • Nevada - Forecast Prediction: Dem. Steve Sisolak - Result: Rep. Joe Lombardo +1.5
  • Wisconsin - Forecast Prediction: Rep. Tim Michels - Result: Dem. Tony Evers +3.3

My Gubernatorial Forecast vs The Pros:

Let's see how my forecasts compared to FiveThirtyEight's.

  • HenryDRiley - Overall Accuracy: 92% (33/36 races correct)
  • FiveThirtyEight - Overall Accuracy: 94.4% (34/36 races correct)

The takeaway here is that FiveThirtyEight outperformed my governor model, but only by one race: Nevada. Otherwise, we both incorrectly projected Wisconsin and Arizona.

Possible Reasons For Low Performance:

Of course, the Senate and Governor forecasts could've performed better, but realistically speaking they did quite well, and in some senses better than the experts. However, the House forecast was drastically worse compared to the other two. Why is that? One major possibility lies in how the models pick winners. The Governor and Senate models were upgraded to "v2," which calculated probabilities in a more fair and competitive way, basing predictions purely on average quantitative data rather than just categorical leaders.

In plain English, the House model (v1) looked at the winner in each data category, say fundraising, and whoever won more categories was automatically projected as the winner. The actual probability number was an average of all of the winning percentages, regardless of which candidate they belonged to. But that doesn't make sense. Under v1, if Candidate A wins three categories with 52, 60, and 55 percent, and Candidate B wins the fourth category with 90 percent, then the prediction would be Candidate A with an average probability of 64.25 percent (the average of 52, 60, 55, and 90). Under v2, Candidate A's probability would be a true average of their own percentages across all categories: 52, 60, 55, and the 10 percent implied by Candidate B's 90 in the fourth category, which works out to 44.25 percent. Candidate B's probability would be 55.75 percent. This could literally change the predicted winner either way, and it makes a huge difference. It also means that the probabilities will be much closer to even and therefore more realistic. A minimal code sketch of both approaches follows below.

To see whether using v2 would improve the forecasts, I am currently working on plugging the House data into the v2 system to compare results. I will post them once I am done.
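To make the v1/v2 difference concrete, here is a minimal Python sketch of the two winner-picking approaches as described above. The category names, input format, and function names are illustrative assumptions for this post, not my actual model code:

```python
# Illustrative sketch of the v1 vs. v2 winner-picking logic described above.
# Category names and data layout are assumptions, not the real model's.

def v1_predict(cand_a: dict, cand_b: dict) -> tuple[str, float]:
    """v1: whoever wins more categories is the projected winner; the
    probability is the average of the winning percentage in every
    category, regardless of which candidate that percentage belongs to."""
    a_wins = sum(1 for cat in cand_a if cand_a[cat] > cand_b[cat])
    b_wins = len(cand_a) - a_wins
    winner = "A" if a_wins > b_wins else "B"
    winning_pcts = [max(cand_a[cat], cand_b[cat]) for cat in cand_a]
    return winner, sum(winning_pcts) / len(winning_pcts)

def v2_predict(cand_a: dict, cand_b: dict) -> tuple[str, float]:
    """v2: each candidate's probability is the true average of their own
    percentage across all categories; the higher average wins."""
    a_avg = sum(cand_a.values()) / len(cand_a)
    b_avg = sum(cand_b.values()) / len(cand_b)
    return ("A", a_avg) if a_avg > b_avg else ("B", b_avg)

# The worked example from the text: Candidate A wins three categories
# (52, 60, 55); Candidate B wins the fourth with 90 (so A has 10 there).
a = {"fundraising": 52, "polling": 60, "partisan_lean": 55, "incumbency": 10}
b = {"fundraising": 48, "polling": 40, "partisan_lean": 45, "incumbency": 90}

print(v1_predict(a, b))  # ('A', 64.25) -- v1 projects Candidate A
print(v2_predict(a, b))  # ('B', 55.75) -- v2 flips the call to Candidate B
```

Note how the same four categories produce opposite calls: v1 rewards winning more categories, however narrowly, while v2 lets one lopsided category outweigh three narrow ones.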