State Integrity Investigation methodology FAQ

  • Where can I get the full State Integrity Investigation dataset?
  • What does a state’s Corruption Risk Report Card measure?
  • How did you score the report cards?
  • How did you determine the letter grades?
  • Are the rankings and report cards tied to a specific date?
  • How did you decide what to measure?
  • Who gathered the information on each state?
  • What method did you use to score each indicator?
  • How do you vet the information to be sure it is accurate?
  • What was the job of the peer reviewers?
  • What is the “enforcement gap” that you calculated for each state?
  • How confident are you in the Corruption Risk Report Card scores?

Download the Methodology White Paper.

Read the list of peer reviewers for state data.

State Integrity Investigation methodology

What does a state’s Corruption Risk Report Card measure?

The report card measures the strength of state laws and practices intended to ensure open, transparent government and prevent corruption. We see this as measuring the risk of corruption in each state. The report card does not try to measure the level of actual corruption. Actual corruption is the result of the actions and integrity of individual public officials and the governance culture in a state.

The Corruption Risk Report Card for each state examines three concepts: 

  1. The existence of public integrity mechanisms, including laws and institutions, that promote public accountability and limit corruption. 
  2. The effectiveness of those mechanisms, such as their insulation from political interference, their level of staffing, and their ability to impose penalties.
  3. The access that citizens have to those mechanisms, such as access to public records at reasonable cost and within a reasonable time.

How did you score the report cards?

The Investigation researched a list of 330 statements about the laws and practices that promote open, accountable state government and deter corruption. We call these statements Corruption Risk Indicators, and they are organized into 14 areas of state government oversight. Reporters scored each state on how well it lived up to these statements. Here are a few examples of the indicators we used on the issue of lobbying disclosure:

  • In law, lobbyists are required to file a registration form.
  • In law, lobbyists are required to file a spending report.
  • In practice, citizens can access lobbying disclosure documents at a reasonable cost.

Reporters had precise criteria for scoring each of these indicators. You can see the scoring criteria, the reporters’ notes, and the sources they used by opening a state’s report card and drilling down through a category and subcategory to an individual indicator.

The indicators assess the existence and effectiveness of, and citizen access to, key governance and anti-corruption mechanisms in the fifty states. They aim to diagnose the strengths and weaknesses of the medicine applied against corruption in each state – openness, transparency, and accountability – rather than the disease of corruption itself. They examine issues such as information transparency; political financing at the state level; conflicts of interest in the executive, legislative, and judicial branches; fiscal and budgetary management; the state civil service and its management; state pension fund transparency; ethics commissions; and redistricting.
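
To make that structure concrete, here is a minimal sketch of what one indicator record might look like if modeled in code. The schema, field names, and types are our own illustration, not the project’s actual data format:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """One Corruption Risk Indicator, as described above (schema is illustrative)."""
    statement: str               # e.g. "In law, lobbyists are required to file a registration form."
    category: str                # one of the 14 areas of state government oversight
    subcategory: str             # grouping within the category
    indicator_type: str          # "in law" or "in practice"
    score: int | None = None     # 0-100, assigned by the state reporter
    sources: list[str] = field(default_factory=list)  # interviews, links, laws cited
    notes: str = ""              # reporter comments capturing nuance
```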

How did you determine the letter grades?

The State Integrity Index places states into the following 11 performance "tiers" according to a state's overall aggregated score: 

  • 90 and above:  A
  • 87 – 89:  B+
  • 84 – 86:  B
  • 80 – 83:  B-
  • 77 – 79:  C+
  • 74 – 76:  C
  • 70 – 73:  C-
  • 67 – 69:  D+
  • 64 – 66:  D
  • 60 – 63:  D-
  • 59 and below:  F
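
As a minimal sketch, the tier lookup could be implemented as below, assuming overall scores are rounded to whole numbers before grading (the function name is ours; the cutoffs are taken directly from the table above):

```python
def score_to_grade(score: float) -> str:
    """Map an overall 0-100 score to a letter grade using the 11 tiers above."""
    tiers = [
        (90, "A"), (87, "B+"), (84, "B"), (80, "B-"),
        (77, "C+"), (74, "C"), (70, "C-"),
        (67, "D+"), (64, "D"), (60, "D-"),
    ]
    for cutoff, grade in tiers:
        if score >= cutoff:
            return grade
    return "F"  # 59 and below
```

Under these cutoffs, for example, an overall score of 81 earns a B-.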

To create a state’s score and grade in a category, we averaged the indicator scores within each subcategory, then averaged the subcategory scores into a category score. To create a state’s overall score and grade, we averaged the category scores.

Because some aspects of governance and anti-corruption mechanisms are harder to measure definitively, some categories require a more complex matrix of indicator questions than others. The categories are nonetheless equally weighted, even though some are derived from a lengthier set of indicators than others. Similarly, the subcategories are equally weighted within their parent category. 
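
The aggregation described above might look like the following sketch; the data layout, names, and example values are our own assumptions, not the project’s actual computation:

```python
from statistics import mean

def overall_score(categories: dict[str, dict[str, list[float]]]) -> float:
    """Average indicators into subcategories, subcategories into categories,
    and categories into an overall score. Each level is an unweighted
    average, so every category counts equally regardless of how many
    indicators feed into it.
    """
    category_scores = []
    for subcategories in categories.values():
        subcategory_scores = [mean(scores) for scores in subcategories.values()]
        category_scores.append(mean(subcategory_scores))
    return mean(category_scores)

# Hypothetical example: two categories with differing numbers of indicators
# still contribute equally to the overall score.
example = {
    "Category A": {"Sub 1": [100, 100], "Sub 2": [75]},  # category score: 87.5
    "Category B": {"Sub 1": [50, 75, 100]},              # category score: 75.0
}
print(overall_score(example))  # 81.25, a B- under the tiers above
```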

Are the rankings and report cards tied to a specific date?

Yes. The reporting and research to score the indicators were conducted during the summer of 2011, with a formal cut-off date of September 15, 2011. Recent developments and reforms are not reflected in the report cards. However, in some state report cards, the reporter and peer reviewer notes mention recent developments and laws that were set to take effect after September 15, 2011.

How did you decide what to measure?

To identify the project’s Corruption Risk Indicators, staff from Global Integrity and the Center for Public Integrity contacted nearly 100 state-level organizations working in the areas of good government and public sector reform around the country. We asked them a simple question: what issue areas mattered most in their state when it came to the risk of significant corruption occurring in the public sector? The outcome was a list of questions, rooted in the reality of US state government, that these stakeholders identified as most important for assessing the core risks of corruption in their states.

Global Integrity and the Center for Public Integrity then supplemented the list with indicators that the two organizations had previously fielded in similar projects and hypothesized were relevant to this project’s aims. (Specifically, these indicators were drawn from the Center’s States of Disclosure project and Global Integrity’s Global Integrity Report and Local Integrity Initiative.)

Who gathered the information on each state?

The State Integrity Investigation mobilized a highly qualified network of state reporters to generate quantitative data and qualitative reporting on the health of the anti-corruption framework at the state level. To score each state, reporters combined extensive desk research with thousands of original interviews with experts from state government, the private sector, and local civil society and good government organizations. Click here to see a list of the reporters.

What method did you use to score each indicator?

There are two types of indicators in the State Integrity Investigation: "in law" and "in practice" indicators.  All indicators, regardless of type, are scored on the same scale of 0 to 100 with zero being the worst score and 100 best. 

"In law" indicators provide an objective assessment of whether certain legal codes, fundamental rights, government institutions, and regulations exist. These “in law” indicators are scored with a simple "yes" or "no" with "yes" receiving a 100 score and "no" receiving a zero. 

"In practice" indicators address issues such as implementation, effectiveness, enforcement, and citizen access. As these usually require a more nuanced assessment, the "in practice" indicators are scored along a scale of zero to 100 with possible scores at 0, 25, 50, 75 and 100. Scoring criteria are defined for each of the 100, 50, and 0 scores with 25 and 75 deliberately left undefined to serve as in-between scoring options. In only a few cases, the “in practice” indicators are scored with “yes” or “no.”

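As a sketch, the two scoring scales can be expressed as simple sets; the structure and names below are ours, not the project’s:

```python
# Allowed score values for each indicator type, per the rules described above.
IN_LAW_SCORES = {0, 100}                   # a simple yes (100) or no (0)
IN_PRACTICE_SCORES = {0, 25, 50, 75, 100}  # 25 and 75 are in-between options

def is_valid_score(indicator_type: str, score: int) -> bool:
    """Check that a score is legal for its indicator type."""
    allowed = IN_LAW_SCORES if indicator_type == "in law" else IN_PRACTICE_SCORES
    return score in allowed
```
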
How do you vet the information to be sure it is accurate?

Editors worked with the state reporters to ensure that the data were sourced appropriately and scored against the established criteria. The data were then blindly reviewed by a peer reviewer for each state, who was asked to flag indicators that appeared inaccurate, inconsistent, biased, or otherwise deserving of correction. Project managers at Global Integrity and the Center for Public Integrity worked for more than half a year with the reporters and peer reviewers to resolve questions and debates around each of the 16,500 indicators compiled during the course of the reporting.

Reporters were required to provide multiple references to substantiate each of their scores. This could be an interview conducted with a knowledgeable individual, a website link to a relevant report, or the name of a specific law or institution, depending on the particular indicator. Reporters had the opportunity to include additional comments to support their score. Their comments help capture the nuances of a particular situation, namely the "Yes, but…" phenomenon which is often the reality in undertaking this type of research. 

What was the job of the peer reviewers?

We hired at least one expert reviewer in each state as a quality control mechanism to ensure our data was as accurate and balanced as possible. We individually contracted and carefully vetted these reviewers, selected for their independence and expertise, and asked them to review the state data for each Corruption Risk Indicator.

This was a blind review, meaning that reporters did not know the names of the reviewers. We wanted the reviewers to be candid in their comments and not worried about criticizing a specific person. Reviewer comments were used to interpret, and in some cases adjust, scores and reporting that reviewers identified as containing errors, bias, or out-of-date information. Any score adjustments followed specific rules and generally required solid references to facts. 

For the Corruption Risk Indicators in this project, reviewers were asked to consider the following: 

  • Is the particular indicator scored by the reporter factually accurate? 
  • Are there any significant events or developments that were not addressed? 
  • Does the indicator offer a fair and balanced view of the anti-corruption environment? 
  • Is the scoring consistent within the entire set or sub-set of Corruption Risk Indicators? 
  • Is the scoring controversial or widely accepted? Is controversial scoring sufficiently sourced? 
  • Are the sources used reliable and reputable? 

Reviewers were offered one of four standardized choices in responding to a given indicator, using the above guidance to evaluate each data point: 

  1. "Yes, I agree with the score and have no comments to add." 
  2. "Yes, I agree with the score but wish to add a comment, clarification, or suggest another reference." Reviewers then provided their comment or additional reference in a separate text box which is published alongside an indicator’s score. 
  3. "No, I do not agree with the score." In this third case, reviewers were asked to explain and defend their criticism of the score and suggest an appropriate alternative score or reference. 
  4. "I am not qualified to respond to this indicator."
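
If the review workflow were modeled in code, these four standardized choices map naturally onto an enumeration. This is purely illustrative; the names are ours:

```python
from enum import Enum, auto

class ReviewerResponse(Enum):
    """The four standardized peer-review responses described above."""
    AGREE = auto()               # 1. agrees with the score, no comments
    AGREE_WITH_COMMENT = auto()  # 2. agrees, but adds a comment or reference
    DISAGREE = auto()            # 3. disagrees; must defend an alternative score
    NOT_QUALIFIED = auto()       # 4. declines to assess this indicator
```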

What is the “enforcement gap” that you calculated for each state?

The enforcement gap refers to the difference between the state’s legal framework for good governance and anti-corruption and the actual implementation and enforcement of that framework. We see this gap as the difference between having good laws and policies on the books and whether the state government actually enforces the laws and policies. We calculated this enforcement gap for each state.

We used the same aggregation and averaging technique described above for determining scores and letter grades, except that we first removed either all “in law” or all “in practice” indicators from the data set. For example, to generate the “legal framework” score, we first removed all “in practice” indicators from the state’s data set, then averaged the remaining scores in each subcategory, averaged the subcategories into categories, and averaged the categories to get the overall state score. Once the legal framework and actual implementation scores had been calculated, we simply subtracted the implementation score from the legal score to generate the enforcement gap for the state. 
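
A sketch of that calculation, reusing the same bottom-up averaging. The data layout and names are our assumptions, and subcategories with no indicators of a given type are simply skipped here (the text above does not spell out how such cases were handled):

```python
from statistics import mean

def enforcement_gap(data: dict[str, dict[str, list[tuple[str, float]]]]) -> float:
    """Legal-framework score minus implementation score for one state.

    `data` maps category -> subcategory -> list of (indicator_type, score)
    pairs, where indicator_type is "in law" or "in practice".
    """
    def overall(keep: str) -> float:
        category_scores = []
        for subcategories in data.values():
            sub_scores = []
            for pairs in subcategories.values():
                scores = [s for t, s in pairs if t == keep]
                if scores:  # skip subcategories with no indicators of this type
                    sub_scores.append(mean(scores))
            if sub_scores:
                category_scores.append(mean(sub_scores))
        return mean(category_scores)

    # A positive gap means the laws on the books outscore actual enforcement.
    return overall("in law") - overall("in practice")
```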

How confident are you in the Corruption Risk Report Card scores?

The project partners take full and final responsibility for the scores for each state. These scores were generated following an elaborate and collaborative review process that included balancing information from several (sometimes conflicting) sources while being guided by the master scoring criteria.

Following the peer review process, the partner organizations’ staff identified specific data points where peer reviewers had flagged problematic scores. The staff then engaged the state reporters in a discussion of the issue in question and ultimately decided on appropriate changes, when necessary, to the original data based on the reporters’ feedback. 

While the State Integrity Investigation partner organizations created a rigorous process for vetting the data to produce credible information, we welcome feedback on the accuracy of our data. If you wish to comment on or question specific indicator scores, please send a note detailing your concerns to nkusnetz@publicintegrity.org.
