Report on AI’s Anti-Corruption Capacities Released

The U4 research centre has issued a condensed analytical note entitled Artificial intelligence in anti-corruption – a timely update on AI technology.

The U4 experts focus the publication on three key topics: (1) the use of AI to combat corruption in “classic” areas where the technology has already delivered tangible results, (2) areas where AI has been introduced but has failed to produce positive results, and (3) promising emerging areas for the use of AI.

1. The “classic” areas where AI helps counter corruption more effectively include monitoring of public procurement, anti-fraud and anti-money laundering efforts, and big data analysis. AI’s success in these areas is largely due to its capacity to process and connect unstructured information at scale and to the flexibility of unsupervised learning.

As regards public procurement, for instance, AI makes it possible to:

  • Identify new indicators of corruption (“red flags”), at least in theory, and refine existing ones;
  • Consider a much broader set of data inputs when analysing and monitoring procurement, so as to detect more complex patterns of collusion and conflicts of interest;
  • Analyse and monitor procurement at a scale and scope that would hardly be feasible manually.

The basis for such analysis is big data collected under initiatives such as DIGIWHIST (information on 17 million tenders across the European Union) or ProACT (data from at least 42 domestic procurement systems).
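To make the “red flag” idea more concrete, the following is a minimal, purely illustrative sketch: it combines a few rule-based indicators with an unsupervised anomaly score over a handful of invented tender records. The column names, thresholds and tools (pandas, scikit-learn) are assumptions chosen for illustration and are not drawn from the U4 note, DIGIWHIST or ProACT.

```python
# Illustrative sketch only: toy procurement records and toy thresholds.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Invented tender records (all values are made up for illustration).
tenders = pd.DataFrame({
    "tender_id":         ["T1", "T2", "T3", "T4", "T5"],
    "n_bidders":         [5, 1, 4, 1, 6],                 # single-bid tenders are a classic red flag
    "advert_days":       [35, 7, 30, 5, 40],              # unusually short advertisement periods
    "award_vs_estimate": [0.97, 1.40, 1.02, 1.35, 0.96],  # award price relative to the cost estimate
})

# Rule-based red flags mirroring "classic" indicators.
tenders["flag_single_bid"] = tenders["n_bidders"] == 1
tenders["flag_short_advert"] = tenders["advert_days"] < 10
tenders["flag_overpriced"] = tenders["award_vs_estimate"] > 1.2

# Unsupervised anomaly score over the same features, to surface patterns
# that a fixed rule set might miss (the "unsupervised learning" angle).
features = tenders[["n_bidders", "advert_days", "award_vs_estimate"]]
model = IsolationForest(random_state=0).fit(features)
tenders["anomaly_score"] = -model.score_samples(features)  # higher = more anomalous

print(tenders.sort_values("anomaly_score", ascending=False))
```

Real systems of this kind operate over millions of records and far richer features; the sketch only shows how rule-based flags and unsupervised scoring can complement each other.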

As regards fraud detection and anti-money laundering, AI helps turn episodic spot checks into more efficient, comprehensive and real-time monitoring. The report provides the example of a global beverage conglomerate that consolidated more than a dozen internal enterprise resource management systems with a number of external data streams. The result was a consolidated, AI-supported supplier vetting function that reduced costs by more than 90%.

In terms of processing big data, AI allows for more effective detection and analysis of relevant information. In Peru, for example, investigators use AI to screen a growing volume of reported suspicious financial transactions. The HSBC financial conglomerate doubled its detection rate of confirmed bad transactions and cut transaction processing time from more than a month to just a few days.

2. At the same time, attempts to use AI to take public decisions, with the aim of reducing the corruption-generating discretion of officials, have been unsuccessful. AI systems were found to produce many incorrect or disputable decisions: the authors cite such notorious cases as the unfounded denial of unemployment benefits to thousands of rightful claimants in Michigan, the wrongful withdrawal of social benefits in Serbia, and the clawing back of child benefit allowances from Dutch parents. This shortcoming is rooted in the way AI works (the decision-making of machine-learning models, especially complex ones, is a “black box” to external users) and in the proprietary ownership of AI models and data. Because AI decisions cannot be fully explained, they violate a basic principle of administrative justice.

Another challenge lies in biases in training data and AI’s marked tendency to “hallucinate” (to make things up and present misleading information as fact). Even AI systems that specialise in legal issues have been found to hallucinate in up to 30% of cases, for example by referencing legal clauses that do not exist.

All these issues suggest that AI can provide decision support, but that keeping a human as the ultimate decision-maker is a prerequisite both for achieving just and accurate outcomes and for establishing clear lines of accountability when things go wrong.

3. The authors of the report identify remote sensing and inclusive participation as new anti-corruption spaces for AI.

Over the last ten years, AI systems have become far better at extracting information from images and recognising complex patterns. This opens up novel opportunities to harness AI against a number of illicit activities closely intertwined with corruption, such as illicit logging.

Forest Foresight developed an AI-based technology able to predict illicit deforestation months before it occurs; in a trial implementation in Gabon, it helped park rangers carry out 34 enforcement actions and stopped an illicit gold mine. Additionally, AI-based remote sensing has been employed to:

  • Triple the productivity of rangers detecting snares in Cambodia;
  • Attribute responsibility for remote oil spills in the Mediterranean;
  • Reveal fake suppliers in procurement in Brazil;
  • Detect illegal bitcoin miners in Iran;
  • Track illicit fishing activities around the world;
  • Uncover misreporting of economic activity in China;
  • Monitor methane emissions in the USA, etc.

The U4 experts consider equally promising the use of AI to protect against “policy capture” and to promote inclusive participation of citizens in public decision-making.

The growth of remote feedback systems has made them almost impossible to manage manually. For example, the US Consumer Financial Protection Bureau receives more than 1 million comments annually, and the public consultation for a new Chilean constitution drew more than 280,000 individual comments. In such cases, governments encounter two main challenges:

  • Finding the small number of comments with very high informational value among a flood of low-quality submissions;
  • Identifying, consolidating and numerically weighing similar concerns and opinions to get an overall picture of where public sentiment lies.

AI’s ability to categorise and summarise very high volumes of natural language text offers ways to address these problems and make such processes meaningful and effective. For example, integrating AI into the Consul Democracy software platform, used by more than 300 cities and organisations around the world to gather civic initiatives, has considerably increased its effectiveness. Its early iteration, pioneered in Madrid, attracted more than 26,000 proposals, but manual summarisation meant that only two proposals reached the threshold to be considered by the city council; a new, AI-supported version of the platform allows for much better search and consolidation of similar proposals.
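As a purely illustrative sketch of what such consolidation involves, the snippet below groups a handful of invented comments by textual similarity and counts how many fall into each group. The example data, the choice of TF-IDF vectors with k-means clustering, and the cluster count are assumptions for illustration; the report does not specify which techniques Consul Democracy or any other platform uses.

```python
# Illustrative sketch only: clustering invented public comments by similarity.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "Please add more bus routes to the northern districts.",
    "We need better public transport in the north of the city.",
    "The park near the river should have more lighting at night.",
    "Street lighting in the riverside park is insufficient and unsafe.",
    "Publish the full procurement documentation for the new bridge contract.",
]

# Represent each comment as a TF-IDF vector and group similar comments together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Large clusters point to widely shared concerns; singleton clusters are
# candidates for closer reading as potentially high-value submissions.
for cluster in sorted(set(labels)):
    members = [c for c, lab in zip(comments, labels) if lab == cluster]
    print(f"Cluster {cluster} ({len(members)} comments): {members[0]}")
```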


Beyond analysing the areas where AI can be applied to counter corruption, the authors of the publication invite those responsible for developing and implementing relevant decisions to take into consideration a number of related specifics and challenges:

1) The need to address digital divide issues. Access to AI, and to the underlying data it uses, is not distributed evenly across country, gender, ethnic or socio-economic lines. For example:

  • Only 22% of AI professionals are women;
  • Countries with a high proportion of marginalised communities may have low representation in the digital realm;
  • Certain groups of individuals can have disproportionately high visibility in relation to specific adverse events (for example, greater prevalence in crime statistics due to heavier police attention).

This can lead to AI systems that produce more erroneous and/or biased results: for example, AI-enabled decision-support systems for hiring might reproduce gender disparities when relying on legacy data that is skewed towards hiring and promoting male candidates (a toy, purely synthetic sketch of this effect follows after this list). It is important to take these specifics into consideration when employing AI in analysis and/or decision-making.

2) The possibility of using AI to connect disparate disclosure, transparency and open data initiatives. In Armenia, for example, AI is being deployed to scrutinise officials’ asset disclosures more effectively, and in Czechia AI helps identify long chains of political connections using finance and open ownership data. The separate pools of data are valuable in themselves, but, put together, they take anti-corruption analysis and monitoring to a new level.

3) The importance of investing in targeted training data and open ownership models. The limited availability of unbiased data is one of the major constraints on fully harnessing the potential of AI, including its application to anti-corruption. Targeted support to build specific, openly accessible training datasets, in close consultation with the professional anti-corruption community, might help unlock this potential.

4) The need to build resources and infrastructure for challenging unfair AI outcomes. Given the scale at which AI systems operate, it is inevitable that, even in optimal conditions, AI will produce a large absolute number of false positives – that is, individuals erroneously accused of fraud, denied social benefits they are entitled to, and so on. Affected individuals, especially those from socially disadvantaged groups, should be supported by building practical capabilities to have decisions reconsidered, file effective complaints and, if needed, launch court cases. Investing in analytical tools and assessment frameworks that help identify when and how AI fails to deliver or produces biased and otherwise erroneous outcomes can also be helpful.

5) The importance of building AI capacity in the broader anti-corruption community, including an understanding of its limitations (biases and “hallucinations”) and the skills to formulate queries to AI systems. Creating targeted anti-corruption products requires experts able to adapt existing AI systems and train them on relevant open data. Support from public bodies, civil society and donors is also important here: to consolidate resources, build technical capacity, retain a pragmatic perspective in assessing AI’s effectiveness, and identify where advanced technologies are needed and where simple solutions are sufficient.
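As mentioned under point 1, here is a toy, entirely synthetic sketch of how a model trained on skewed legacy hiring decisions can reproduce that skew. The data-generating process, feature names and use of a logistic regression are assumptions made for illustration and are not taken from the U4 note.

```python
# Illustrative sketch only: all data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Candidates are equally qualified on average, but the historical hiring
# outcomes were tilted towards male candidates (gender: 1 = male, 0 = female).
qualification = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)
hired = (qualification + 1.0 * gender + rng.normal(scale=0.5, size=n)) > 0.8

# A model fitted on these legacy outcomes learns gender as a "useful" predictor.
X = np.column_stack([qualification, gender])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical qualification, different gender:
same_skill = np.array([[0.5, 1], [0.5, 0]])
print(model.predict_proba(same_skill)[:, 1])  # higher predicted hiring probability for the male candidate
```

The point is not the specific numbers but that, without deliberate correction, the skew in the historical data carries straight into the model’s recommendations.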

Tags
ICT