


What We Learned From The 2016 Recount

The 2016 presidential election was notable for many reasons, not the least of which was that the integrity of vote counting was called into question. The overall accuracy of the count was challenged by both the winning candidate, Republican Donald Trump, and Green Party candidate Jill Stein. Questions raised by Stein in particular also generated a brief flurry of interest in the subject of post-election audits, a piece of election arcana that rarely sees the light of day in mainstream public discourse.

Stein's challenge to the accuracy of the vote count contained a direct attack on the use of computers to tally votes. Without a hand recount of the vote, she claimed, there would be no way of knowing whether computerized voting equipment had in fact accurately counted the vote, or worse, had been maliciously hacked.

Stein tried to force recounts in three battleground states—Michigan, Pennsylvania, and Wisconsin—that had given narrow and surprising majorities to Trump over the Democratic nominee, Hillary Clinton. In the end, she was successful only in Wisconsin. That recount produced revised vote totals for the two major party candidates that were nearly identical to the originally canvassed tally. Most likely because of this, the Wisconsin recount quickly vanished from the public mind, to become yet another footnote to an extraordinary election season.

However, there is more to be learned from a recount than whether the right winner was declared or the vote margin was right. A focus on changing vote margins masks richer information that emerges during a recount, information that can help quantify the degree to which errors were made in the original count, including errors that had no bearing on the outcome.

In this article, we take advantage of the fact that Wisconsin has had two statewide recounts in recent years—after the 2016 presidential election and the 2011 state supreme court election—and that the state has kept detailed records about how the vote count changed from canvass to recount. In addition to detailed vote-count statistics, the state also employed a mix of voting technologies, which allows for a comparison of initial-count error rates, broken down by whether the votes were originally counted by hand, by scanner, or by electronic voting machine.

We reach the following conclusions in this article.

  • 1. The most common way of comparing the recounted victory margin with the election night victory margin significantly understates the number of errors made in the original count of ballots.

  • 2. At least 0.59% of the ballots originally counted in the 2016 presidential election in Wisconsin were miscounted, compared to 0.21% in the state supreme court election that was recounted in 2011. The difference in these two error rates is due almost entirely to the miscounting of minor party and write-in ballots in 2016.

  • 3. Scanning paper ballots produces a more accurate election night count than hand-counting ballots.

  • 4. Differences in vote counts between election night and the recount are largely due to administrative factors, such as transcription errors, rather than the accuracy of the vote-tallying methods per se.

These conclusions are supported by the evidence generated by the 2011 and 2016 Wisconsin recounts. They are consistent with the prior empirical literature on recounts, and thus should provide guidance as academics and the media empirically analyze recounts in the future.

The remainder of this article proceeds as follows. First, we discuss the general issue of recounts as a method for measuring the accuracy of original vote counts. Second, we review the specific case of Wisconsin and the two recent recounted elections that provide the empirical fodder for this article. Third, we delve more deeply into these two recounts, exploring the relationship between voting technology and vote-count accuracy.

Recounts as a Method for Measuring Accuracy of the Vote Count

Recounts provide an opportunity to gauge the accuracy of election night vote counting,1 although few scholars have availed themselves of this opportunity.2 The overall logic is simple. If there is a recount, the recounted results are taken to be the correct vote count, or the "ground truth" of the election. The difference between this more careful count and the tally conducted in the hectic hours immediately following the close of the polls is a measure of how accurately the ballots were counted in the first instance.

Although there might not be such a thing as a perfectly correct vote count, recounts are designed to improve on the process used to count ballots on Election Day, and thus it is reasonable to consider the recounted tally as ground truth. The recount is done more slowly and deliberately; is conducted under the watchful eyes of election officials, journalists, and election observers from the campaigns; and is focused on getting a single contest right rather than processing the entire election. In this setting, previously undetected errors and miscounts often become revealed and remedied. For example, the 2016 recount in Wisconsin identified a few communities where tabulating machines failed to read some ballots because voters used the wrong kinds of pens to mark their scanned paper ballots.3

Taking the recounted vote as ground truth, then, the election night deviation from the recounted vote can be assumed to measure counting error. This error, however, can be measured in different ways and at different levels of aggregation.

The most accurate assessment of counting errors would compare how each individual ballot was interpreted both on election night and in the recount. Indeed, the most rigorous post-election auditing techniques, such as risk-limiting audits, require scrutiny of individual ballots, even if they are selected based on sampling techniques (Lindeman and Stark 2012). The best measure of absolute vote-counting error would simply be the percentage of ballots that were interpreted differently on election night and in the recount.

Practically speaking, knowledge of how individual ballots were interpreted during the election night count is rarely preserved, and thus the most accurate measure of vote-count accuracy is generally unavailable to researchers and the public.4 As a result, the comparison of the election night and the recount tallies must take place at some higher level of aggregation, usually the precinct (called the "ward" or "reporting unit" in the case of Wisconsin). As we discuss below, it may further be necessary to aggregate above the precinct level, such as at the local or state level, as the method used to calculate error can interact with the level of aggregation in ways that might be unanticipated.

There are two ways to measure the amount of vote-counting error revealed by the recount: (1) net error and (2) absolute error. The two methods are illustrated in Table 1 by means of hypothetical precinct-level vote totals for three candidates named Brown, Garcia, and Lee.

Table 1a. Calculation of Vote-Count Error Using Hypothetical Vote Totals: Precinct Aggregation

Election night Recount Net difference Absolute difference
Muni. Precinct Brown Garcia Lee Row total Brown Garcia Lee Row total Brown Garcia Lee Row total Brown Garcia Lee Row total
 A 1 377 207 4 588 379 202 5 586 2 −5 1 −2 2 5 1 8
 A 2 300 169 4 473 303 166 3 472 3 −3 −1 −1 3 3 1 7
 A 3 85 42 2 129 83 42 2 127 −2 0 0 −2 2 0 0 2
Municipality column total 762 418 10 1,190 765 410 10 1,185 3 −8 0 −5 7 8 2 17
 B 1 478 303 10 791 481 300 11 792 3 −3 1 1 3 3 1 7
 B 2 334 169 1 504 331 172 1 504 −3 3 0 0 3 3 0 6
 B 3 312 224 3 539 310 223 5 538 −2 −1 2 −1 2 1 2 5
Municipality column total 1,124 696 14 1,834 1,122 695 17 1,834 −2 −1 3 0 8 7 3 18
State column total 1,886 1,114 24 3,024 1,887 1,105 27 3,019 1 −9 3 −5 15 15 5 35

Table 1b. Calculation of Vote-Count Error Using Hypothetical Vote Totals: Municipality Aggregation

Election night Recount Net difference Absolute difference
Muni. Brown Garcia Lee Row total Brown Garcia Lee Row total Brown Garcia Lee Row total Brown Garcia Lee Row total
 A 762 418 10 1,190 765 410 10 1,185 3 −8 0 −5 3 8 0 11
 B 1,124 696 14 1,834 1,122 695 17 1,834 −2 −1 3 0 2 1 3 6
State column total 1,886 1,114 24 3,024 1,887 1,105 27 3,019 1 −9 3 −5 5 9 3 17

Table 1a displays hypothetical returns from a state with two municipalities, each of which has three precincts. The election night returns are reported first, then the recounted returns, for the three candidates on the ballot. The final two sets of columns report, first, the net difference in returns for each candidate, and then the absolute value of those differences. For instance, in Municipality A, Precinct 1 (Precinct A-1), Garcia received five fewer votes in the recount than she received on election night, which results in a net difference of −5 votes, but an absolute difference of +5.

Because net errors may be positive or negative, the values will frequently cancel out. As a result, the magnitude of the sum of the net differences is significantly less than the summed absolute differences. This can happen both within precincts—note Precinct B-2, where three votes switched from Brown to Garcia, leaving no net change in total votes—and between precincts—note that Lee's gain of 1 vote in Precinct A-1 is offset by her loss of 1 vote in Precinct A-2. The net and absolute error calculations generally diverge as one includes more jurisdictions. With the results reported at the precinct level in Table 1a, the sum of all net differences is −5 and the sum of all absolute differences is 35. The error rates thus appear to be smaller using the net calculation (5 out of 3,019, or 0.17%) than the absolute calculation (35 out of 3,019, or 1.2%).

Table 1b shows what happens when we aggregate the vote totals up to the municipality level. Here, the statewide candidate vote totals are the same as before, as is the net difference. However, the sum of absolute differences is now significantly lower than before, 17 rather than 35 (a rate of 0.56%). Because of the associative property of addition, aggregating the net differences to the municipality level does not affect the final calculation of total net differences. The same associative property does not apply to summing absolute values.
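The two measures, and the effect of aggregation on them, can be reproduced from the hypothetical returns in Table 1. The following is a minimal Python sketch of the calculation (an illustration of ours, not code used in any recount):

```python
# Hypothetical returns from Table 1: keys are (municipality, precinct),
# values are votes for (Brown, Garcia, Lee).
election_night = {
    ("A", 1): (377, 207, 4), ("A", 2): (300, 169, 4), ("A", 3): (85, 42, 2),
    ("B", 1): (478, 303, 10), ("B", 2): (334, 169, 1), ("B", 3): (312, 224, 3),
}
recount = {
    ("A", 1): (379, 202, 5), ("A", 2): (303, 166, 3), ("A", 3): (83, 42, 2),
    ("B", 1): (481, 300, 11), ("B", 2): (331, 172, 1), ("B", 3): (310, 223, 5),
}

def error_sums(level):
    """Sum net and absolute candidate-level differences, after first
    aggregating the returns to `level` ('precinct' or 'municipality')."""
    totals = {}
    for unit, en_votes in election_night.items():
        key = unit if level == "precinct" else unit[0]
        slot = totals.setdefault(key, [[0, 0, 0], [0, 0, 0]])
        rc_votes = recount[unit]
        for i in range(3):
            slot[0][i] += en_votes[i]
            slot[1][i] += rc_votes[i]
    net = sum(rc - en for en_row, rc_row in totals.values()
              for en, rc in zip(en_row, rc_row))
    absolute = sum(abs(rc - en) for en_row, rc_row in totals.values()
                   for en, rc in zip(en_row, rc_row))
    return net, absolute

print(error_sums("precinct"))      # (-5, 35): net unchanged, absolute large
print(error_sums("municipality"))  # (-5, 17): absolute shrinks when aggregated
```

The net sum is identical at both levels, while the absolute sum falls from 35 to 17 once offsetting precinct errors are merged, exactly the pattern described above.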

This example illustrates two important properties of error rate measures using differences in election returns that we will employ for the rest of this article. First, the absolute difference measure retains much more information about the actual amount of error in the system than the net difference measure.5 Second, the absolute error will generally fall as vote totals are aggregated at increasingly higher levels.6 Lower levels of aggregation are better at revealing errors that would otherwise be hidden.

These two properties have important implications for our analysis. We focus on the absolute error because it preserves more information. We will also report absolute errors at the smallest unit of aggregation possible, given the availability of data. When we report statewide totals, we will do so by summing across all individual reporting units, which are generally wards or aggregations of wards. We will only aggregate further out of necessity. For instance, in comparing recount errors at the local level between 2011 and 2016, it will be necessary first to aggregate to the municipality level, because ward boundaries and reporting units changed between these two years.

Recounts in Wisconsin

Wisconsin provides a valuable context for studying the accuracy of elections. In recent years, there have been two statewide recounts that allow us to compare the original vote totals for each candidate with vote totals as corrected by the recount, numbers that we assume to be more accurate measures of the real intent of voters. The two elections took place under different circumstances, with different numbers of candidates and much different levels of voter participation. This variety provides useful leverage and generalizability for our analysis.

The first recount happened after a nonpartisan state supreme court election on April 5, 2011, between sitting Justice David Prosser and challenger JoAnne Kloppenburg. Initial results following the statewide canvass showed Prosser as the winner, 752,323 to 745,007, a margin of 7,316 votes or 0.49% of the total votes cast and counted.7 Because the margin fell under the 0.5% threshold set by law, Kloppenburg was able to request a statewide recount without having to pay a fee. Her petition was motivated in part by concern about errors in Waukesha County, where her statewide election night lead vanished after the county clerk discovered that thousands of votes were unrecorded in the initial tally.8 An agreement reached between the state and the candidates mandated hand recounts in parts of 31 counties that used Optech Eagle scanners.9 After several weeks of recounting by county boards of canvassers, the state reported 752,694 for Prosser and 745,690 for Kloppenburg, a difference of 7,004 votes or 0.46%. The margin between the two candidates changed by only 0.03 percentage points between the election night count and the recount.

The second recount followed the November 2016 presidential election. The initial canvassed results produced a win for Donald Trump, awarding him 1,404,440 votes compared to 1,381,823 for Hillary Clinton, a difference of 22,617 or 0.76% of votes cast and counted. There were also significant votes for minor party candidates Gary Johnson, Jill Stein, and several write-in candidates. Stein requested a recount, motivated in part by Trump's assertions throughout the campaign about "rigging" of the elections, and by concerns about voting machines and the effects of a new voter identification requirement. Because the margin between the top two candidates was above the threshold for a state-financed recount, the Stein campaign raised the roughly $2 million needed.10

The recount took approximately two weeks, with most counties recounting ballots by hand, a small number re-tabulating ballots on optical scan machines, and others using a mix of the two methods.11 Following the recount, the state certified the results as 1,405,284 votes for Trump and 1,382,536 for Clinton, for a difference of 22,748 or 0.76% of the vote total. In other words, the recount revealed no change in the winner's vote margin, measured to two decimal places.

Our review of press accounts and minutes of the county election boards makes it clear that the recount was more complicated than a simple re-running of the tabulation process. First, the state-mandated recount procedures required counties to review other administrative practices in the wards and reporting units, and to balance the number of ballots in hand with turnout data from poll books. Discrepancies discovered here could lead to a change in the vote count even before the votes were re-tallied. For example, if it was decided that disputed absentee ballots had been improperly included in the vote count, then a number of ballots equal to the number of improperly included ballots had to be randomly removed from the counting. This is called a "drawdown" of absentee ballots. A similar process is followed if the number of ballots found in the ballot box exceeds the number of voters accounted for on the poll list.12
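Mechanically, a drawdown is simply random sampling without replacement from the ballots in hand. A minimal sketch, using hypothetical ballot labels (this illustrates the statistical operation only, not Wisconsin's official procedure):

```python
import random

def draw_down(ballots, n_improper, seed=None):
    """Randomly remove as many ballots as were improperly included,
    returning the ballots that remain in the count."""
    rng = random.Random(seed)
    removed = set(rng.sample(ballots, n_improper))
    return [b for b in ballots if b not in removed]

# Example: six ballots in the box, two disputed absentees ruled improper.
remaining = draw_down(["b1", "b2", "b3", "b4", "b5", "b6"], 2, seed=0)
print(len(remaining))  # 4 ballots remain in the count
```

The random draw is what makes the remedy neutral: because the improper ballots cannot be identified individually, removing a random subset of the same size leaves the expected vote shares unchanged.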

From the perspective of judging the accuracy of the original count, one of the most consequential sets of decisions that local officials reviewed during the recount was how carefully write-in ballots had been counted and recorded on election night. It is easy to imagine how the interaction of Wisconsin law and the functioning of voting technologies, especially optical scanners, would make it likely that many write-in votes are not counted on election night. Wisconsin election law states that there is "no requirement for a voter to make an X or other mark, fill in an oval, or connect an arrow in order to cast a write-in vote."13 What this means practically is that poll workers must visually inspect each ballot to ensure that all write-in votes have been accounted for. In the case of scanned ballots specifically, inspectors cannot rely on ballots with write-in votes being diverted to the auxiliary ballot box because the voter had filled in the oval next to the write-in line; even ballots that had not been diverted must be examined to see if they contain a write-in vote without the corresponding oval or arrow being marked.14

Even aside from the issue of accurately accounting for all write-in votes on election night, municipal and county officials make clerical errors on election night that are then corrected in the recount. Unfortunately, the only systematic data we have to compare the election night and recounted tallies comes from the election returns published by the state election commission.15 When there are differences between the two tallies, we cannot reliably distinguish between discrepancies caused by machine errors and clerical errors.16

Our analysis of the 2011 and 2016 recounts begins with data provided by the Wisconsin Elections Commission that reported the original vote totals for each candidate and the final vote totals after the recounts.17 The data are provided at the level of a "reporting unit." In many municipalities, the reporting unit is the same as a ward (or what would be called a precinct in most states), but municipalities can combine multiple wards into a single reporting unit.18 The state had 3,636 reporting units in 2016, with an average of 818 votes cast in each. This is the lowest level of aggregation available to us. Aggregation to the municipality level is necessary to merge the 2011 and 2016 data, because of the many changes to reporting units between the two elections. Even the number of municipalities changed slightly over time; there were 1,879 in 2011 and 1,886 in 2016.19

Table 2 presents absolute error rates from both elections computed at four increasingly large levels of analysis: reporting unit, municipality, county, and state. It also reports net error rates, which are unchanged by the level of aggregation. The table confirms our previous observations that absolute error rates are generally greater in magnitude than the corresponding net error rates, and that absolute error rates generally decline as the level of aggregation increases. As we aggregate upward from reporting units to municipalities, to counties, and then to the state, the estimated absolute error rates in both elections decrease.

Table 2. Errors in Two Statewide Recounts in Wisconsin

Level of aggregation 2011 Supreme Court 2016 president
Absolute error
Reporting unit 3,181 17,681
(0.21%) (0.59%)
Municipality 2,309 15,343
(0.15%) (0.52%)
County 1,617 12,871
(0.11%) (0.43%)
State 1,223 6,901
(0.082%) (0.23%)
Net error
1,233 397
(0.082%) (0.013%)

At the reporting unit level, the lowest level of aggregation available, the absolute error was 0.21% in 2011 and 0.59% in 2016. That is, a conservative estimate is that about one out of every 475 ballots in 2011 and one out of every 170 ballots in 2016 was miscounted in the election night tabulation. Because the statistics were not calculated at the individual ballot level, this number represents a lower bound on the true error rate.

Although individual ballot data are unavailable, we can use regression to extrapolate from these four levels of aggregation (state, county, municipality, and reporting unit) to estimate the true individual ballot-level absolute error rates. The technique we apply is simply to regress the absolute error rates for each level of aggregation on the logged average number of voters at each level of aggregation. The dependent variables are taken from Table 2, and are simply the absolute error rates for the four levels of analysis. The independent variables are the corresponding average numbers of votes at each level of aggregation, transformed to natural logarithms.20

The results of the regressions are reported in Table 3. The associated graphs are shown in Figure 1. The correlation between the aggregate error rates and the logarithm of the average number of voters represented at each succeeding level of aggregation is very high. The R2 for the 2011 regression is .81, and .99 for 2016. To estimate the true individual ballot-level error rate for each year, we extrapolate the regression line to the point where the number of voters equals 1. Because the logarithm of 1 is zero, the estimated value we are looking for is the intercept of the regression. In 2011, the estimated individual ballot-level error rate is 0.262%, compared to 0.857% in 2016.21 These rough extrapolations translate to errors in one out of every 382 ballots in 2011 and one of every 117 ballots in 2016. As both Figures 1a and 1b illustrate, the 95% confidence intervals of the prediction are quite large. Furthermore, because this is an out-of-sample prediction, the true prediction error is likely greater than shown and calculated here.22

FIG. 1. Relation of absolute error rate to the number of voters at different levels of aggregation in Wisconsin.

Table 3. Regression Results Used to Predict Absolute Error Rate of Individual Ballot Recounts

2011 2016
Natural log of the average number of voters −0.0134 −0.0417
(0.0046) (0.0036)
Intercept 0.262 0.857
(0.045) (0.037)
N 4 4
R2 .81 .99
Adj. R2 .72 .98
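The 2016 extrapolation can be approximated from the rounded error rates in Table 2 together with the unit counts reported above (3,636 reporting units, 1,886 municipalities, 72 counties, and 2,975,753 total votes). The sketch below is ours, not the authors' code; small differences from Table 3's coefficients reflect the rounding of these inputs:

```python
import math

# 2016 absolute error rates (%) at four aggregation levels (Table 2)
rates = [0.59, 0.52, 0.43, 0.23]
# Average votes per unit: reporting unit, municipality, county, state
total_votes = 2_975_753
avg_votes = [total_votes / 3_636, total_votes / 1_886,
             total_votes / 72, total_votes / 1]

# Ordinary least squares of error rate on ln(average votes)
x = [math.log(v) for v in avg_votes]
n = len(x)
mx, my = sum(x) / n, sum(rates) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, rates)) \
        / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx  # predicted error rate where ln(votes) = 0

print(round(slope, 4), round(intercept, 3))  # slope ≈ -0.041, intercept ≈ 0.85
```

Because ln(1) = 0, the intercept is the predicted error rate for a "unit" of one voter, i.e., the individual ballot-level estimate, close to the 0.857% reported in Table 3.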

The higher absolute error rate reported in Table 2 for 2016 compared to 2011 might suggest that votes were less accurately counted than five years earlier, an alarming conclusion if we believe that election systems have improved over time. However, the structures of the two elections were so different that we are reluctant to draw this conclusion.23 Most notably, the 2016 election featured many more candidates—those officially listed on the ballot, candidates qualified as registered write-ins, and the scattering vote24—which provided more opportunities for error to be introduced into the counting than in 2011.

Further examination of the recount patterns makes it clear that the absolute error rate in 2016 was driven largely by write-in candidates. This is evident in Table 4, which shows the absolute error (calculated at the reporting unit level before summing) for the seven candidates on the ballot, plus quantities associated with the nine registered write-in candidates and the scattering vote.25

Table 4. Absolute Error Rates by Candidate in 2016

Candidate Original votes Absolute difference Absolute rate
Trump 1,404,440 2,236 0.159%
Clinton 1,381,823 2,227 0.161%
Johnson 106,585 291 0.273%
Stein 31,006 160 0.516%
Castle 12,156 88 0.724%
Moorehead 1,769 15 0.848%
De La Fuente 1,514 54 3.567%
Registered write-in 10,458 2,818 26.946%
Scattering 26,002 9,724 37.397%
Total 2,975,753 17,613 0.592%

This breakdown shows that the absolute error rates associated with Trump and Clinton were 0.159% and 0.161%, respectively, both of which are slightly lower than the absolute error rates in 2011. The absolute error rates for the minor party candidates on the ballot were higher, ranging from 0.273% for Johnson to 3.567% for De La Fuente.26 Finally, the absolute error rates for the write-in candidates were in a league of their own: almost 27% for the registered write-in candidates and over 37% for the scattering vote.27

This large contribution of write-ins to the absolute error rate led to our further investigation of the recount statistics, which revealed that many counties simply did not count write-in ballots in 2016, either on election night, in the recount, or both.

This is illustrated in Figure 2, which displays four scatterplots reporting the county-level percentage of the vote attributed to four types of candidates on the Wisconsin presidential ballot: major party candidates (Trump and Clinton), minor party candidates (the five other candidates listed on the ballot), registered write-in candidates (the nine candidates certified by the state to receive write-ins), and the scattering vote (all other write-in candidates). Circles in the scatterplots are sized proportional to the number of votes counted in each county on election night.

FIG. 2. Comparison of votes received by categories of candidates in Wisconsin on election night and in the recount, 2016. Note: Major candidates = Trump and Clinton; minor candidates = other candidates printed on the statewide ballot (Castle, Johnson, Stein, Moorehead, and De La Fuente); write-in candidates = candidates officially registered to receive write-in votes (Fox, McMullin, Maturen, Schoenke, Keniston, Kotlikoff, Hoefling, Maldonado, and Soltysik); scattering votes = all other write-in candidates. Size of circles is proportional to the number of votes counted in each county on election night.

The percentages of votes attributed to the two major candidates and the five minor candidates are quite similar between the election night count and the recount, but with some instructive differences. With one exception, the minor party vote share remained virtually unchanged in the recount. In almost every county, the vote share of the registered write-in candidates increased in the recount.28 This is entirely consistent with the discussion above related to the counting of write-in votes on election night. What most likely explains the nearly uniform increase in registered write-in votes statewide is that in the recount, all ballots were scrutinized. This uncovered a number of ballots that contained write-in votes but lacked marks in the oval next to the write-in line.

Finally, the vote share of the scattering vote varied considerably among almost half the counties. There were substantial inconsistencies in the counting of the scattering vote. Fifteen of Wisconsin's 72 counties reported zero scattering votes in both counts, three counties reported a positive scattering vote on election night but zero in the recount, and six reported zero scattering votes on election night and a positive number of scattering votes in the recount. These patterns strike us as unlikely reflections of the actual distribution of scattering votes across the state. Most likely, the scattering vote that did exist was not counted at all in some counties (or at least not reported on the tally sheets). In counties where there was at least some counting of the scattering vote, there was considerable variability in how thoroughly the municipalities and reporting units accounted for the scattering vote during the two tallies.

The ways municipalities and counties handled write-ins, both in the original count and the recount, had a significant impact on the overall error rate, as calculated by comparing the original canvass with the recount. It is not hard to see why write-ins have such outsized influence on counting errors. Write-ins present specific challenges for both voters and election officials.29 For scanned paper ballots, accurately counting write-in votes depends on poll workers carefully examining every ballot by hand, either to catch write-ins that did not have the corresponding oval marked, or to record write-ins correctly when ballots are counted by hand. Compared to a vote for a candidate printed on the ballot, write-in ballots provide more ways for a ballot to be inaccurately counted.

Write-in candidates contributed only about one percent of total votes cast but accounted for roughly half the absolute errors in the original vote tally. If only Trump and Clinton had been on the ballot in 2016, with no write-ins allowed, the absolute error rate would have been just 0.16%, slightly less than in the 2011 recount, even if we include the write-in vote from 2011. If no write-ins had been allowed at all in 2016, and we only consider the seven candidates printed on the statewide ballot, the absolute error rate would have been 0.17%—which is also below the 2011 error rate. The addition of the registered write-in candidates raises the error rate to 0.27%, which is slightly higher than 2011. Finally, when we add the scattering vote, the error rate more than doubles, to 0.59%.
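These counterfactual rates can be recomputed directly from the per-candidate entries in Table 4. A quick Python check (ours; it uses only the numbers printed in the table):

```python
# Per-candidate 2016 totals from Table 4: (original votes, absolute difference)
table4 = {
    "Trump": (1_404_440, 2_236), "Clinton": (1_381_823, 2_227),
    "Johnson": (106_585, 291), "Stein": (31_006, 160),
    "Castle": (12_156, 88), "Moorehead": (1_769, 15),
    "De La Fuente": (1_514, 54),
    "Registered write-in": (10_458, 2_818), "Scattering": (26_002, 9_724),
}

def abs_rate(candidates):
    """Absolute error rate (%) for the given subset of candidates."""
    votes = sum(table4[c][0] for c in candidates)
    errors = sum(table4[c][1] for c in candidates)
    return 100 * errors / votes

on_ballot = ["Trump", "Clinton", "Johnson", "Stein",
             "Castle", "Moorehead", "De La Fuente"]
print(round(abs_rate(["Trump", "Clinton"]), 2))                 # 0.16
print(round(abs_rate(on_ballot), 2))                            # 0.17
print(round(abs_rate(on_ballot + ["Registered write-in"]), 2))  # 0.27
print(round(abs_rate(list(table4)), 2))                         # 0.59
```

Each line adds one more tier of candidates to the calculation, showing how the two write-in categories drive the jump from 0.17% to 0.59%.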

All of this suggests that simply using the pure absolute difference between the election night count and the recount complicates the idea of using the recount as ground truth. At the very least, the 15 counties that failed to count unregistered write-ins in either round of counting, plus the three counties that had counted the scattering vote on election night but not in the recount, should be excluded from any analysis that uses write-in votes as the basis for calculating vote count errors.

On top of that, the scattering-vote graph in Figure 2 also suggests that some municipalities and reporting units did not recount the scattering vote at all, even though other municipalities and reporting units in the same county did. In other cases, the reduction in the scattering vote may be the result of the correction of other errors. A good example is Waukesha County, which saw the total number of scattering votes reduced from 4,319 to 2,534, a drop that is due to double-reporting of write-in votes during the original count.30

For all these reasons, it appears that the best apples-to-apples comparison of error rates in Wisconsin focuses on the candidates printed on the ballot, excluding write-in candidates, both registered and unregistered.

With this in mind, Table 5 recalculates error rates for 2011 and 2016, this time using only the votes cast for candidates printed on the ballot. Focusing only on the candidates printed on the ballot in each election, the absolute error rates revealed by each recount are comparable at each level of aggregation across the two years. Putting the two elections on a more common footing by comparing only the votes for listed candidates shows that the error rate did not increase over time. In fact, if we compare only the error rates for the top two candidates in each election, we find a drop in the error rate between 2011 and 2016.

Table 5. Errors in Wisconsin Recounts, Using Only Candidates Printed on the Ballot

Level of aggregation 2011 Supreme Court 2016 president
Absolute error
Reporting unit 2,762 5,071
(0.18%) (0.17%)
Municipality 1,978 4,093
(0.13%) (0.14%)
County 1,354 2,581
(0.090%) (0.087%)
State 1,054 1,731
(0.070%) (0.058%)
Net error
1,054 1,707
(0.070%) (0.057%)

We conclude this section by comparing error rates in 2011 and 2016 at the municipality level. The comparison within municipalities over time is useful because it reveals whether errors are largely idiosyncratic, and thus display little continuity from one election to the next, or whether they are endemic to particular jurisdictions, and thus display significant continuity over time. Figure 3 graphs the absolute error rates in 2016 against those in 2011, where the circle sizes are again weighted by the number of votes cast in 2016. Figure 3a shows the error rates including write-in votes; Figure 3b shows the error rates calculated using only the candidates on the ballot. To aid legibility, which might be impeded by a small number of extreme outliers, the error rates have been transformed by taking cube roots.

FIG. 3. Scatterplot of absolute error rates in Wisconsin in 2011 and 2016. Note: The error rates along the vertical and horizontal scales have been transformed by a cube root.

The overall correlation between the absolute 2011 and 2016 error rates is a mere .058 when we include write-in votes and .059 when we exclude them.31 Figure 3 suggests one reason why the error rates are so weakly connected across the two elections: the modal error is 0%. Including write-ins, municipalities reporting no errors accounted for 63% of observations in 2011 and 38% of observations in 2016; excluding write-ins, these percentages rise to 66% and 56%, respectively. However, the large number of zeros is not responsible for the low correlation between 2011 and 2016; eliminating the municipalities with 0% error rates in either year produces a similarly low over-time correlation of .097 including write-ins and .14 excluding them.
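A vote-weighted correlation of this kind can be computed as in the following sketch. The municipal rates and weights here are invented for illustration; the paper's actual data are not reproduced:

```python
# Sketch: vote-weighted correlation of municipal error rates across two
# elections, and the effect of dropping zero-error municipalities.
# All numbers are invented for illustration, not the Wisconsin data.
import numpy as np

rate_2011 = np.array([0.0, 0.0, 0.1, 0.4, 0.2, 0.3, 0.0])   # % error, 2011
rate_2016 = np.array([0.0, 0.3, 0.2, 0.1, 0.3, 0.2, 0.0])   # % error, 2016
weights   = np.array([500, 800, 300, 1200, 700, 600, 400])  # ballots cast, 2016

def weighted_corr(x, y, w):
    """Pearson correlation with observation weights."""
    cov = np.cov(x, y, aweights=w)
    return cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

print(weighted_corr(rate_2011, rate_2016, weights))

# As in the text, re-estimate after dropping municipalities with a 0%
# error rate in either year:
nonzero = (rate_2011 > 0) & (rate_2016 > 0)
print(weighted_corr(rate_2011[nonzero], rate_2016[nonzero], weights[nonzero]))
```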

It therefore seems that errors in vote tabulation are not tied to particular communities over time but rather vary in somewhat unpredictable ways from one election to the next. This lack of relationship over time stands in contrast to polling place "incidents," which show a substantial amount of continuity in their prevalence in particular communities over time in Wisconsin (Burden et al. 2017). These two facts are not as incongruous as they initially seem, in part because many incidents are in fact "benign" or even successful resolutions of potential problems such as spoiled ballots. In these instances, the remedy by a poll worker on the "front end" helps to avoid a tabulation problem on the "back end."

Voting Technology and Wisconsin Recounts

A significant controversy surrounding the 2016 recount in Wisconsin was the claim that vote counts produced using computerized equipment—both ballot scanners and direct recording electronic (DRE) devices—are inherently suspect and prone to error.32 If this is true, then it was especially important to recount Wisconsin's votes, because the margin of victory was tight and so many of Wisconsin's ballots had been counted on equipment that relied on computers to do the tabulation.

The criticism of computerized vote-tallying equipment as being unreliable, or at least less reliable than hand-counting paper ballots, is open to empirical test in states such as Wisconsin that rely on a mix of voting technologies to count the ballots. The most obvious test to conduct is whether ballots originally cast on paper and counted by scanner showed more discrepancies between the election night tally and the recount than ballots that were originally cast on paper and counted by hand. This is the cleanest test because paper ballots are verifiable by the voter, so the only material difference is the method of tabulation.

Including DREs in the comparison creates an ambiguous test because it is impossible for the voter to independently verify whether the votes he or she cast on the touchscreen were in fact recorded faithfully by the DRE's internal memory. A "hand" recount of DRE votes in Wisconsin means reviewing the paper tape that is produced for each voter. Because these records are not subject to interpretation about voter intent, if there is a difference between election night and recount tallies in reporting units that use DREs, it is likely due to procedural issues related to the treatment of absentee ballots in the reporting unit or to transcription errors, not to differences in how the DREs reported the outcomes from one time to the next.

The 2011 recount is about as clean a test as possible, since the recount was conducted entirely by hand, regardless of how the ballots were originally cast. The one significant departure from a clean test in 2011 is that a minor fraction of ballots were cast on DREs. We nonetheless report discrepancy statistics separately for ballots originally cast on DREs, because they help quantify vote-counting errors due to purely clerical mistakes.

The 2016 recount does not provide as clean a test as 2011. Although most jurisdictions recounted all their ballots by hand, even those that had been originally counted with scanners, some recounted optically scanned ballots by running them through the scanners again. Unfortunately, state records from the 2016 recount do not always clearly delineate which reporting units were recounted by hand and which were recounted by scanner. For the most part, counties reported that all ballots in their jurisdiction were recounted either by hand (51 of Wisconsin's 72 counties) or by scanner (nine counties). However, twelve counties reported that they employed a mix of optical scan and hand recounts, without specifying which municipalities used which recount methods.

We have scrutinized the minutes of the county election boards, with an eye toward discerning whether it was possible to determine the recount methods used across specific reporting units or municipalities in these twelve counties. On the whole, we were unsuccessful in producing a clean coding of the precise use of recount methods within these counties. Therefore, we treat these twelve "mixed" counties separately from the counties that were either 100% hand or scanner recounts.

Furthermore, state records are not always clear about which method was used to count ballots on election night. A pre-election Wisconsin Elections Commission report on the voting technologies used by each municipality in 2016 is sometimes at odds with a post-election report that identifies the equipment used by each reporting unit.33 In light of this disagreement between sources, we choose the post-election report, because it provides fine-grained information about how many ballots were counted by each type of voting technology at the level of the reporting unit, whereas the pre-election report only provides information about voting technologies at the municipality level.

Turning first to 2011, state records indicate that 81.3% of ballots were originally counted by scanners, 10.9% were counted by DREs, and 7.8% were counted by hand. On the whole, one technology type dominated each reporting unit, but even within reporting units there was some heterogeneity of usage. This is illustrated in Figure 4a, which shows the distribution of ballots counted by the three main voting technologies in each reporting unit in each year.

FIG. 4. Usage of voting technology in Wisconsin in 2011 and 2016. DRE, direct recording electronic device.

The spikes at 0% and 100% (indicating that all ballots in a reporting unit were cast via a single method) make it possible to classify most reporting units into a category—predominantly scanner, DRE, paper, or ballot-marking device (for 2016). We classify a unit into one of these categories if at least 90% of its ballots were counted using the associated technology. If no technology was used to count more than 90% of a reporting unit's votes, it was assigned to an "other" category.
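The 90% classification rule can be sketched as a short function; the ballot counts in the usage examples are invented for illustration:

```python
# Sketch of the 90% classification rule described above: a reporting unit
# is assigned to a technology category only if that technology counted at
# least 90% of its ballots; otherwise it falls into "other."
def dominant_technology(ballots_by_tech, threshold=0.90):
    """Classify a reporting unit by its dominant counting technology."""
    total = sum(ballots_by_tech.values())
    for tech, n in ballots_by_tech.items():
        if n / total >= threshold:
            return tech
    return "other"

print(dominant_technology({"scanner": 980, "hand": 20}))   # scanner (98%)
print(dominant_technology({"scanner": 600, "DRE": 400}))   # other (no tech >= 90%)
print(dominant_technology({"hand": 450, "scanner": 50}))   # hand (exactly 90%)
```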

Table 6 reports the results of this analysis for 2011, the election in which the recount was done entirely by hand. In 2011, ballots originally counted by hand and ballots in reporting units with a mix of technology use (the "other" category) had the largest mean absolute error, at 0.276% and 0.278% respectively, whereas ballots originally counted on DREs34 and on scanned paper had the lowest error (0.128% and 0.152%).35

Table 6. Net Counting Errors by Dominant Voting Technology in Wisconsin in 2011 (Absolute Errors Measured Based Only on Candidates Printed on Ballot)

Technology            Mean absolute error    Number of reporting units    Number of ballots
DRE                   0.128%                 270                          49,283
Hand-counted paper    0.276%                 179                          66,705
Scanned paper         0.152%                 1,911                        1,050,670
Other                 0.278%                 1,084                        332,222
Total                 0.184%                 3,444                        1,498,880

We now turn our attention to 2016. Here we report the mean absolute error for all reporting units (Table 7a), and then separate the results for counties with full hand recounts (Table 7b) and for counties with at least partial machine recounts (Table 7c). Doing this allows for a more apples-to-apples comparison with the 2011 election among those counties that used hand recounts in 2016. Starting with the counties with pure hand recounts, we see that reporting units that used DREs and scanned paper for the election night tally had the smallest mean absolute errors (0.113% and 0.122%), much lower rates than in reporting units that used hand-counted paper (0.243%) or a mix of technologies (0.423%).36 Although the presence of the "other" reporting units complicates things in Table 7c, the results for the hand recount in Table 7b reinforce our findings from the 2011 election.

Table 7a. Net Counting Errors by Dominant Voting Technology in Wisconsin in 2016 (Absolute Errors Measured Based Only on Candidates Printed on Ballot): All Counties

Technology              Mean absolute error    Number of reporting units    Number of ballots
Ballot-marking device   0.000%                 3                            790
DRE                     0.160%                 185                          54,859
Hand-counted paper      0.183%                 194                          169,635
Scanned paper           0.132%                 2,145                        2,205,278
Other                   0.344%                 1,109                        508,731
Total                   0.173%                 3,636                        2,939,293

Table 7b. Net Counting Errors by Dominant Voting Technology in Wisconsin in 2016 (Absolute Errors Measured Based Only on Candidates Printed on Ballot): Counties with Hand Recounts

Technology              Mean absolute error    Number of reporting units    Number of ballots
Ballot-marking device   0.000%                 3                            790
DRE                     0.113%                 139                          44,080
Hand-counted paper      0.243%                 153                          114,348
Scanned paper           0.122%                 1,056                        1,065,495
Other                   0.423%                 757                          328,359
Total                   0.191%                 2,108                        1,553,072

Table 7c. Net Counting Errors by Dominant Voting Technology in Wisconsin in 2016 (Absolute Errors Measured Based Only on Candidates Printed on Ballot): Counties with Machine Recounts (Including Mixed Counties)

Technology              Mean absolute error    Number of reporting units    Number of ballots
Ballot-marking device   NA                     0                            0
DRE                     0.350%                 46                           10,779
Hand-counted paper      0.060%                 41                           55,287
Scanned paper           0.142%                 1,089                        1,139,873
Other                   0.201%                 352                          180,372
Total                   0.148%                 1,528                        1,386,311

As we demonstrated earlier, the major complicating factor in considering the election night–recount comparison is the matter of registered write-in votes and scattering votes.37 As Figure 1 showed, registered write-in votes and scattering votes fared differently in the 2016 recount. On the one hand, nearly every county had more registered write-in votes in the recount than on election night. Indeed, the recount reported 1,928 more votes (12,386 versus 10,458) for the registered write-in candidates than were counted on election night. On the other hand, some counties had more scattering votes in the recount than on election night, some had fewer in the recount, and some reported no scattering votes in either tally.

In the end, the recount reported 22,764 scattering write-in votes, compared to 26,002 counted on election night, for a reduction of 3,238. These differences amount to net error rates of 18.4% and −12.5% for the registered and scattering write-in votes, respectively, compared to the net error rate of 0.057% for candidates who were printed on the ballot (as reported in Table 5). While a negative error rate might seem nonsensical at first, it results from the "uncounting" of ballots in the recount that had been included in the election night count. When we calculate the absolute error rates for the registered and scattering write-in candidates alone, the rates are 27.6% and 37.4%, respectively, compared to the absolute error rate of 0.17% for the candidates printed on the ballot. These error rates for all of the write-in candidates, whether registered or not, are between two and three orders of magnitude greater than the error rates for candidates printed on the ballot.
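The net error rates quoted above can be recomputed directly from the vote totals given in the text, with election night as the baseline:

```python
# Recomputing the write-in net error rates quoted above from the totals
# in the text: net error = (recount - election night) / election night.
registered_night, registered_recount = 10_458, 12_386
scattering_night, scattering_recount = 26_002, 22_764

net_registered = (registered_recount - registered_night) / registered_night
net_scattering = (scattering_recount - scattering_night) / scattering_night

print(f"{net_registered:.1%}")  # 18.4%: registered write-ins gained votes
print(f"{net_scattering:.1%}")  # -12.5%: scattering votes were "uncounted"
```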

An important question is whether some voting technologies were more prone to write-in counting errors than others. To answer this question, we calculated the counting error for write-in candidates, breaking down the percentages according to the technology used to count ballots on election night. Here, we look only at ballots that were recounted by hand, although the conclusions remain the same if we examine all ballots, regardless of how they were recounted.

Table 8 reports the results of this test. Although there is some variation in the error rates by voting technology, all technologies showed substantial counting errors, both among the registered write-in candidates and the scattering vote. Because all write-in votes had to ultimately be tallied by hand, both on election night and in the recount, it seems most likely that these large counting errors are due to choices made by local election officials about how diligently to pursue these hand counts.

Table 8a. Counting Errors among Write-In Votes in Wisconsin in 2016: Registered Write-In Candidates

Technology              Mean absolute error    Mean net error    Number of reporting units    Number of ballots
Ballot-marking device   0.0%                   0.0%              3                            2
DRE                     72.1%                  0.0%              185                          122
Hand-counted paper      45.5%                  7.5%              194                          683
Scanned paper           35.2%                  11.8%             2,145                        8,287
Other                   128.4%                 49.7%             1,109                        1,364
Total                   26.9%                  16.3%             3,636                        10,458

Table 8b. Counting Errors among Write-In Votes in Wisconsin in 2016: Scattering Vote

Technology              Mean absolute error    Mean net error    Number of reporting units    Number of ballots
Ballot-marking device   75.0%                  75.0%             3                            4
DRE                     55.7%                  −27.7%            185                          307
Hand-counted paper      29.5%                  −10.6%            194                          1,675
Scanned paper           37.1%                  −15.2%            2,145                        21,977
Other                   44.4%                  17.8%             1,109                        2,039
Total                   37.4%                  −12.5%            3,636                        26,002

Conclusion

Recounts are significant in elections for many reasons. Most plainly, they provide an opportunity to double-check the counting that was done in the days immediately following the election, to either confirm or overturn the initial verdict of the election officials who administered the election. Using the proper measurement strategy, recounts also provide a glimpse into the accuracy of initial vote counts. When recounts are held in jurisdictions that use different methods to count ballots on election night, they can also provide additional insight into the relative accuracy of the different tabulation methods that are used to adjudicate winners and losers in most elections.

Using the mean absolute deviation metric, we found that at least 0.21% of ballots were counted differently when they were recounted in 2011 and 0.59% in 2016. Using linear regression to extrapolate these error rates to the level of the individual ballot, we found that these error rates could have been as large as 0.26% in 2011 and 0.86% in 2016. Stated another way, these latter statistics are equivalent to one ballot in 385 in 2011 and one ballot in 116 in 2016.
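The "one ballot in N" restatement is simply the reciprocal of the error rate, as this minimal sketch confirms:

```python
# Converting a percent error rate into the "one ballot in N" figures
# quoted in the text: N is the reciprocal of the rate.
def one_in(rate_percent):
    """Convert a percent error rate into a 'one ballot in N' figure."""
    return round(1 / (rate_percent / 100))

print(one_in(0.26))  # 385 -> one ballot in 385 (2011, extrapolated)
print(one_in(0.86))  # 116 -> one ballot in 116 (2016, extrapolated)
print(one_in(0.59))  # 169 -> one ballot in 169 (2016, aggregate minimum)
```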

The error rates in 2016 were so much greater than in 2011 because the structure of the ballots was so different. The 2011 supreme court election was a competitive, simple two-candidate affair with little interest in minor-party or write-in candidates. The resulting ballot was simple and easy to count. The 2016 presidential election was a complicated, closely contested matter with two major-party candidates, five minor-party candidates, nine registered write-in candidates, and countless idiosyncratic candidates who received a handful of votes. The resulting ballot was long. A significant number of voters who chose to write in a candidate could easily make a mistake marking their ballots. Counting the ballots, especially the write-in votes, was tedious and prone to error, especially in the context of a busy presidential election night. Confining our attention to the candidates actually printed on the 2016 presidential ballot, the absolute error rate was nearly identical to 2011.

We suspect that many informed observers of election administration would be surprised to learn that ballots were so frequently miscounted, especially in a state that has a reputation for well-run elections. However, in comparison to the only other study of this type, conducted by Ansolabehere and Reeves (2012), who examined New Hampshire recount data, the error rates we observed in Wisconsin were small. Examining elections from 2000 to 2004, Ansolabehere and Reeves found an average absolute error rate of 1.98% among ballots initially counted by hand and a rate of 0.95% among optically scanned ballots.

In reaching our conclusion about the magnitude of the counting error revealed by the Wisconsin recount, we have had to be attentive to the issue of measurement. We have shown that the most common statistic reported by the press to describe the difference in the vote count, which simply compares the original canvass to the recount, can dramatically underestimate the magnitude of the errors made in counting votes on election night. This measure, the net error rate, tends to cancel out errors, so that even if the number of miscounted ballots is relatively large, the net error rate can look small. Based on a comparison of the absolute and net errors reported in Table 5, which focuses on errors made in counting votes for candidates printed on the ballot, the magnitude of the absolute error was approximately three times that of the net error.38 If individual ballot data were available to compute the true absolute error rate (or we rely on extrapolation from a regression analysis to estimate it), the net error rate would be even further from the actual accuracy rate.

It has been suggested to us that the net error rate is all that should matter to the public and students of election administration, because the purpose of elections is to choose leaders based on their popular support. If vote-counting errors tend to balance out across candidates, then the problem of vote-counting inaccuracies is modest. We disagree. Like all areas of democratic accountability, the legitimacy of elections rests on a public demonstration that the electoral process was managed competently. We do not believe that the goal of election administration should be to make sure that mistakes balance out, but rather that mistakes be minimized, and that the few mistakes that remain not systematically reward one candidate above the other. To that end, election administration should strive both to minimize absolute error and to have net error equal zero.

We also propose in this article a simple regression method that allows us to simulate the size of the absolute error we would observe if we were able to calculate the absolute error on a ballot-by-ballot basis, rather than having to rely on aggregate election returns. As ballot-based auditing techniques, such as risk-limiting audits, become more common, it will be possible to test the accuracy of this method directly.

We end with the ostensible topic that prompted the recount in the first place—skepticism about the accuracy of vote counts conducted with the assistance of computers. We find, as did Ansolabehere and Reeves, that vote counts originally conducted by computerized scanners were, on average, more accurate than votes that were originally tallied by hand. This finding should not be surprising, either to people who have administered elections or to those who have a grasp of the extension of automation into the workplace. Computers tend to be more accurate than humans in performing long, slow, repetitive tasks. The demanding election night environment only drives a bigger wedge between human and machine performance.

The fact that the average scanner is more accurate than the average human in counting ballots on election night is not an argument against checking the work of computers. Quite the opposite. The statistics presented in Table 2, for instance, show that, at a minimum, 0.59% of all ballots counted for president on election night in 2016 contained a counting error, which works out to one ballot out of every 169 cast. The regression technique described in this article suggests that if information had been retained about how each ballot was originally interpreted, 0.85% of ballots, or one out of every 117 cast, would have been shown to be in error. That ballot-counting errors can be so high in a state such as Wisconsin, which has a reputation for running clean elections, calls for greater attention to be paid to the initial vote count, and to the criteria used by vote counters in interpreting ballots. However, the analysis that compares error rates of optically scanned ballots with those of hand-counted ballots reveals the need for ballot-level audits of hand-counted ballots, as well.

References

  • Alvarez, R. Michael, Atkeson, Lonna Rae, and Hall, Thad E. 2013. Evaluating Elections: A Handbook of Methods and Standards. New York, NY: Cambridge University Press.
  • Ansolabehere, Stephen and Reeves, Andrew. 2012. "Using Recounts to Measure the Accuracy of Vote Tabulations: Evidence from New Hampshire Elections 1946–2002." In Confirming Elections: Creating Confidence and Integrity through Election Auditing, eds. R. Michael Alvarez, Lonna Rae Atkeson, and Thad E. Hall. New York: Palgrave.
  • Atkeson, Lonna Rae, Alvarez, R. Michael, and Hall, Thad E. 2009. "The New Mexico 2006 Post Election Audit Report." University of New Mexico. <https://polisci.unm.edu/common/c-sved/papers/the-2006-post-election-audit-report.pdf>.
  • Burden, Barry C., Canon, David T., Mayer, Kenneth R., Moynihan, Donald P., and Neiheisel, Jacob R. 2017. "What Happens at the Polling Place: Using Administrative Data to Look Inside Elections." Public Administration Review 77:354–364.
  • Herron, Michael C. and Wand, Jonathan. 2007. "Assessing Partisan Bias in Voting Technology: The Case of the 2004 New Hampshire Recount." Electoral Studies 26:247–61.
  • Lindeman, Mark and Stark, Philip B. 2012. "A Gentle Introduction to Risk-Limiting Audits." IEEE Security and Privacy 10:42–49.
  • Tufte, Edward R. 1974. Data Analysis for Politics and Policy. Englewood Cliffs, N.J.: Prentice-Hall.

1 We use the term "election night vote count" as a synonym for the vote count that is produced for the original canvass of votes.

2 The one scholarly paper we know of that has used recounts in such a way is Ansolabehere and Reeves (2012), which provides the methodological basis for this article. Also see Herron and Wand (2007), Atkeson, Alvarez, and Hall (2009), and Alvarez, Atkeson, and Hall (2013).

3 The community where this affected the most votes was the City of Marinette, in Marinette County, which discovered that hundreds of absentee ballots had been marked in ways that "would cause issues during scan—i.e., red ink, ball point pen, incomplete connecting arrows, crease in ballot, etc." See Marinette County Board of Canvass, Marinette County Unapproved Recount Minutes, pp. 38–53, available at <http://elections.wi.gov/sites/default/files/recount_2016/marinette_county_unapproved_recount_minutes_pdf_85823.pdf>.

4 This is changing, as vendors develop digital scanning technologies that preserve together both the image of each ballot and a record of how each ballot was interpreted—both on election night and in a recount, if it occurs.

5 In theory, it is possible to have a recount that reveals every ballot having been incorrectly counted (absolute error rate of 100%) and yet for the recounted vote totals to match the election night vote totals. This would be true, for instance, if the candidate names had been erroneously matched up with the locations of marks on a ballot, but the total number of votes cast in the election night count equaled the total number of votes in the recount. With all the votes reallocated to a different candidate, the errors would balance out evenly, even though all the ballots were counted incorrectly.

6 To be more precise, the amount of absolute error must remain the same or decrease with each additional level of aggregation. If all the net errors at the lowest level of aggregation—the individual ballot, in this case—are non-negative, then aggregation will have no effect on the calculation of the total amount of absolute error. The divergence of net and absolute error at greater levels of aggregation depends on the mix of positive and negative errors at the lower levels.

7 There were 1,550 "scattered" votes for other candidates.

8 Returns from an entire municipality (the City of Brookfield) were uncounted because of a data entry error. Jason Stein, Laurel Walker, and Bill Glauber, "Corrected Brookfield Tally Puts Prosser Ahead After 7,500 Vote Gain," Milwaukee Journal-Sentinel, April 7, 2011.

9 Wisconsin Elections Commission, "2011 Supreme Court Statewide Recount Data," <http://elections.wi.gov/node/1719>.

10 A state law adopted in 2015 lowered the threshold for a "free" recount to 0.25% of votes cast and counted. The 2016 margin did not fall below that threshold (or even the previous threshold of 0.5%), so Stein was required to reimburse state and local election officials for expenses related to the recount.

11 Wisconsin Board of Elections, "Presidential Recount County Cost Estimate and Recount Method," <http://elections.wi.gov/sites/default/files/story/presidential_recount_county_cost_estimate_and_reco_16238.pdf>.

12 Wisconsin Elections Commission, Election Day Manual for Wisconsin Election Officials, July 2016, p. 101, <http://elections.wi.gov/sites/default/files/publication/65/election_day_manual_july_2016_pdf_12281.pdf>.

13 Ibid., p. 107.

14 Write-in votes cast on DREs also require close attention in order to be reported accurately. The DRE used in Wisconsin is the AVC Edge. A voter wishing to vote for a write-in candidate touches a "write-in" button, which brings up a keyboard for the voter to indicate his or her choice. At the end of the voting day, the results tape indicates the number of write-in votes for each office. A separate write-in report lists all the write-in candidates for each race.

15 We explored using the minutes of county election boards as a data source for distinguishing specific reasons why the recounted tally did not match the election night tally. Although the information contained in these minutes is invaluable for developing a general understanding of the practical details of the recount process, it is not systematic enough to be used for the purposes discussed here.

16 So far as we know, the only state that allows the public to distinguish different reasons why the election night tally might differ from the recount, or even the official canvass, is Virginia, which posts a change log on its website that documents the source of every deviation between the election night tally and the official election returns. See Virginia Department of Elections, "Changes to Unofficial Results Activity," <https://www.elections.virginia.gov/resultsreports/dataproject/ChangesUnofficialResults.html>.

17 The 2011 data comparing the original count and the recount were downloaded from <http://elections.wi.gov/sites/default/files/COUNTY_BY_COUNTY_FOR_SPRING_2011_ELECTION_AND_RECOUNT.xls>. The 2016 data comparing the original count and the recount were downloaded from <http://elections.wi.gov/sites/default/files/Ward%20by%20Ward%20Original%20and%20Recount%20President%20of%20the%20United%20States.xlsx>.

18 Wisconsin municipalities with populations under 35,000 can combine individual wards into aggregate reporting units. See Wisconsin Statutes § 5.15(6)(b).

19 We lose only seven observations in the merging process, all from municipalities that appeared in the 2016 vote data but were not in the 2011 data.

20 These average values in 2011 were 416 for the reporting unit, 930 for the municipality, 20,818 for the county, and 1,498,880 for the state. The corresponding numbers for 2016 were 818, 1,840, 41,330, and 2,975,753.

21 The 95% confidence intervals of these predictions are 0.216% and 0.172% for 2011 and 2016, respectively.

22 The extrapolation technique employed here is valid only if the relationship between the absolute error rate and the average number of voters continues to be linear beyond the bounds of the values of the independent variables. See Tufte (1974, pp. 32–33). Whether this assumption really holds in practice awaits the availability of data from ballot-level post-election audits, which should become more common with the spread of risk-limiting audits and the adoption of digital ballot scanners that retain information about how each ballot was interpreted by the scanner.

23 In addition, the comparison of the two years in Table 2 demonstrates that the net error rate can even go down between two elections while the absolute error rate goes up.

24 The scattering vote consists of write-in votes cast for candidates but not reported on an individual candidate basis. In Wisconsin, the only write-in votes that are reported on an individual candidate basis are those for registered write-in candidates. See Wisconsin Elections Commission, "Reporting 'Scattering' Votes," <http://elections.wi.gov/node/3283>. Also see Wisconsin Statutes §§ 7.50(2)(d) and 7.50(2)(em).

25 Nine write-in candidates were eligible to receive votes and have their tallies individually reported. Among them, conservative Evan McMullin drew the largest number. The 2016 election saw an exceptionally high number of write-in votes in Wisconsin. See Matt DeFour, "More Write-Ins This Year in Wisconsin Than All Previous Presidential Elections Combined," Wisconsin State Journal, December 2, 2016.

26 The other five tickets were led by Darrell Castle (Constitution), Gary Johnson (Libertarian), Jill Stein (Wisconsin Green), Monica Moorehead (Workers World), and Rocky Roque De La Fuente (American Delta).

27 These estimates count all the qualified write-in candidates and all the scattering vote as two candidates. Thus, the absolute error rate for these two sets of candidates is likely an under-estimate, owing to the fact that we have calculated these rates after aggregating across numerous candidates.

28 The one notable exception here was Menominee County, Wisconsin's least populous county, which is coextensive with the Menominee Indian Reservation. It stands apart on the right side of the first scatterplot and the left side of the second scatterplot. In the election night vote count, the county's sole reporting unit recorded zero votes for any candidate other than Trump and Clinton. In the recount, two votes were removed from Trump's count and one from Clinton's. In addition, Castle (3 votes), Johnson (11), and Stein (17) were credited with votes. No write-in votes were recorded in the recount.

29 The document "Reporting 'Scattering' Votes," cited above, notes uncertainty about how recent changes to Wisconsin election law impact the counting of write-in ballots, and states its purpose as ensuring "that all counties are reporting scattering votes uniformly. … " Clearly, the state elections board has achieved limited success in this regard.

30 See Waukesha County Board of Canvass, "Waukesha County Recount Summary and General BOC Meeting Minutes," p. 23, available at <http://elections.wi.gov/sites/default/files/recount_2016/waukesha_county_recount_summary_and_general_boc_me_13166.pdf>.

31 The observations were weighted by total ballots cast in 2016. The correlations after transforming the rates by taking cube roots are higher, but still a meager .073 and .10, respectively.
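The weighted correlation described in this note can be sketched in a few lines of pure Python. This is only an illustration of the method, not a reproduction of the article's analysis; the error rates and ballot weights below are invented.

```python
import math

def weighted_corr(x, y, w):
    """Pearson correlation of x and y, weighting each observation by w
    (here, total ballots cast), as described in note 31."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / math.sqrt(vx * vy)

def cube_root(r):
    """Cube-root transform, which pulls in the long right tail of error rates."""
    return math.copysign(abs(r) ** (1.0 / 3.0), r)

# Invented county-level error rates (%) for 2011 and 2016, and ballot weights.
rates_2011 = [0.05, 0.20, 0.10, 0.40]
rates_2016 = [0.10, 0.05, 0.30, 0.15]
ballots    = [1200, 800, 3000, 500]

r_raw  = weighted_corr(rates_2011, rates_2016, ballots)
r_cube = weighted_corr([cube_root(r) for r in rates_2011],
                       [cube_root(r) for r in rates_2016], ballots)
```

A correlation this small on either scale indicates that a county's error rate in one recount carries almost no information about its rate in the other.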

32 J. Alex Halderman, "Want to Know if the Election Was Hacked? Look at the Ballots," medium.com, November 23, 2016, <https://medium.com/@jhalderm/want-to-know-if-the-election-was-hacked-look-at-the-ballots-c61a6113b0ba#.cph8nrhce>.

33 Compare, for instance, the report of municipality voting equipment for 2016 (<https://web.archive.org/web/20170113061148/http://elections.wi.gov/sites/default/files/page/179/voting_eq_list_12_2016_xlsx_16214.xlsx>) with the 2016 post-election report (<https://web.archive.org/web/20170315013617/http://elections.wi.gov/sites/default/files/publication/2016_presidential_and_general_election_el_190_2017_83144.xlsx>).

34 As noted above, the difference between election night and the recount for DREs is due either to clerical errors, such as transcription mistakes, or to differences in how absentee ballots were counted.

35 An analysis of variance rejects the null hypothesis that the four counting methods had equal error rates at very high levels of certainty (F = 11.40, p < .00005).
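The test in this note is a one-way ANOVA over per-reporting-unit error rates grouped by counting method. A minimal pure-Python sketch follows; the four groups of rates are made up for illustration, and the article's F of 11.40 comes from its own data, not from this sketch.

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic: mean-square variation between groups
    divided by mean-square variation within groups."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Made-up error rates (%) for reporting units under four counting methods.
methods = [
    [0.30, 0.45, 0.25, 0.50],   # hand count
    [0.20, 0.18, 0.22, 0.25],   # central-count scanner
    [0.10, 0.15, 0.12, 0.08],   # precinct scanner
    [0.05, 0.07, 0.06, 0.04],   # DRE
]
F = one_way_anova_F(methods)    # a large F means the group means differ
```

A large F statistic (with its associated small p-value) says the spread among the four method means is too big to attribute to within-method noise.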

36 A ballot-marking device (BMD) is a hybrid voting technology, which uses a touchscreen to receive a voter's choices, but then produces a paper ballot to be scanned. BMDs were grouped with DREs in the 2011 recount, but were reported as a separate category in 2016. We leave aside the two reporting units that used ballot-marking devices, because of the small numbers.

37 Recall that the scattering vote consists of write-in votes cast for unregistered candidates and is not reported separately by candidate.

38 For example, in 2011, the absolute error was 0.18%, compared to a net error of 0.070%. In this case, the absolute error was 2.6 times greater than the net error.
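The distinction behind these two figures can be made concrete with a small sketch; the per-unit deviations below are invented for illustration.

```python
# Absolute vs. net error, as contrasted in note 38. Each entry is a
# hypothetical (original count - recount) deviation for one reporting unit.
deviations = [+3, -2, +1, -4, +2]

net_error = abs(sum(deviations))                   # opposite-sign errors cancel
absolute_error = sum(abs(d) for d in deviations)   # every error counts

print(net_error)       # 0  -- offsetting mistakes vanish from the margin
print(absolute_error)  # 12 -- the full amount of miscounting
```

This is why a recount that barely moves the margin can still reveal a substantial amount of error in the original count.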

Source: https://www.liebertpub.com/doi/10.1089/elj.2017.0440
