The fight against AI bias

February 26, 2021

Bias in AI applications – prevalence and risk

Artificial intelligence has become part of the fabric of everyday life. From chatbots and smart recommendation functions in video and audio services to driver assistance systems and disease diagnosis, its use is constantly expanding. And in the corporate context, too, AI is now far more than just a passing trend. According to a recent Bitkom study, one in two German companies with more than 100 employees is convinced that AI-supported technology is the key to securing long-term competitiveness.

SEE ALSO: Computer vision in the fight against the COVID-19 pandemic

However, although the technology brings extensive benefits, its use also poses risks, and the same study paints this picture as well. Two factors that inhibit the rapid adoption of AI are its complexity and a lack of expert knowledge. Where these turn the system into a black box, there is an increased risk that the negative effects of prejudiced algorithms go undiscovered, or are discovered very late. This is a serious problem, because once implemented, an AI bias is very difficult to correct. In addition, companies that deploy flawed AI often suffer severe reputational damage.

AI has no emotions – it simply executes orders

The bad news first: every AI is biased. There is a simple reason for this: it is developed by humans, and human actions are inherently biased. The biases of developers are automatically incorporated into the programming, whether intentionally or (most often) unintentionally. But how exactly does bias occur in practice? This question needs to be addressed from two different angles: How is AI developed? And why?

The “how” describes the development process and includes two aspects: the algorithm itself and the database it is trained on. During the initial development of the algorithm, programmers should always keep in mind that the choice of target output should not be made lightly. Many companies, of course, use algorithms to make processes more efficient and maximize profits. But programming an AI to pursue this goal alone soon creates a “maximization bias”, where the program pursues cost efficiency without any regard for potential negative consequences – a scenario that can endanger not only people’s livelihoods but even their lives, especially in the healthcare or financial sectors.
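Here is a minimal, purely illustrative sketch of such a “maximization bias” in Python. Everything in it is assumed for the example – synthetic scores, made-up payoffs, a hypothetical protected attribute – but it shows how a threshold tuned only for profit is silent about who bears the consequences:

```python
# "Maximization bias" in miniature: a profit-optimal approval threshold
# chosen with no regard for how approvals distribute across groups.
# All data and payoffs below are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                   # hypothetical protected attribute
score = rng.normal(0.5 + 0.1 * group, 0.2, n)   # credit score, skewed by group
repays = rng.random(n) < score                  # simulated repayment outcome

def profit(threshold):
    approved = score >= threshold
    # +100 per repaid loan, -500 per default (made-up payoffs)
    return np.where(repays[approved], 100, -500).sum()

best = max(np.linspace(0.1, 0.9, 81), key=profit)

# The profit-optimal cutoff says nothing about group impact:
for g in (0, 1):
    rate = np.mean(score[group == g] >= best)
    print(f"group {g}: approval rate {rate:.2%}")
```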

The predetermined goal of the algorithm should not be underestimated. Much more decisive, however, is the input, i.e., the variables as well as the proxies the algorithm uses. The latter in particular often pose a challenge to developers, because proxies are by definition not exactly quantifiable and rest on assumptions. They can therefore become a risk if developers do not think their system through to the end. This, in turn, can quickly lead to technologically produced, unconscious distortions that are very difficult to detect later.
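A simple first line of defense is to measure what a proxy actually encodes before relying on it. The following sketch uses hypothetical column names (commute_distance as an assumed proxy for a reliability score) to check both the intended signal and any unintended correlation with a sensitive attribute:

```python
# Sanity check for a proxy variable (hypothetical column names):
# measure how strongly the proxy tracks the quantity it is meant to
# approximate, and whether it also tracks a sensitive attribute.
import pandas as pd

df = pd.read_csv("applicants.csv")  # hypothetical dataset

# Intended signal: does the proxy approximate what we hope it does?
print(df["commute_distance"].corr(df["reliability_score"]))

# Unintended signal: does it also encode a protected attribute?
print(df["commute_distance"].corr(df["is_minority"].astype(float)))
```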

Such distortions often have their origin in the assumption that they do not exist. Suppose a company wants to develop a program to help its HR department recruit the best possible candidates. The company is strongly committed to diversity, which is why the development team is instructed not to use the variables “gender” or “ethnic background”. And yet, the system ends up suggesting mainly white men. This is because the historical company data typically used to train such algorithms is rarely uniformly distributed, and poorly chosen proxies still allow the system to identify, for example, women with a migrant background as such. This is the “why” mentioned above: the algorithm fails to reach the desired result because the use case and the training case do not match.
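One way to catch this in practice is a leakage test: drop the protected columns, then check whether the remaining features can still reconstruct them. The sketch below assumes a hypothetical historical_hires.csv whose remaining features are already numerically encoded; a classifier that beats chance by a wide margin signals that proxies are at work:

```python
# Leakage test (hypothetical file and column names): can the features
# that remain after dropping the protected columns still predict them?
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("historical_hires.csv")
protected = (df["gender"] == "female").astype(int)  # assumed encoding
features = df.drop(columns=["gender", "ethnic_background", "hired"])

# An AUC far above 0.5 means the protected attribute is recoverable
# from the supposedly neutral features, i.e. proxies are present.
auc = cross_val_score(LogisticRegression(max_iter=1000),
                      features, protected, cv=5, scoring="roc_auc").mean()
print(f"protected attribute recoverable with AUC ~ {auc:.2f}")
```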

Why many companies are losing the battle against AI bias

The example above clearly shows that it is usually not the algorithms themselves that are to blame. A heterogeneous team of developers is, of course, an advantage, because diverse perspectives reduce the impact of the prejudices a homogeneous group would share. In most cases, however, the main problem is the quantity as well as the quality of the data with which algorithms are trained. To loosely adapt a Clint Eastwood line: “What you put in is what you get out.” If the quality of the input is poor, you cannot expect the output to be great. Unfortunately, companies usually work with whatever data is available to them, for example customer data. But this data often exists neither in sufficient quantity nor with all the required attributes, and the much-needed variance in the data is usually missing. And even if extensive data sets are available, this does not mean that they can be used: if data is not available in a cleansed and suitable format, even the best data set in the world will not provide correct, unbiased results.
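A short data audit surfaces exactly these gaps before any training begins. The following pandas sketch (file and column names are hypothetical) checks volume, missing attributes, variance, and duplicates:

```python
# Minimal pre-training data audit (hypothetical names).
import pandas as pd

df = pd.read_csv("customer_data.csv")

print(len(df))                                     # enough rows at all?
print(df.isna().mean().sort_values())              # which attributes are missing?
print(df["region"].value_counts(normalize=True))   # variance / coverage per region
print(df.duplicated().sum())                       # duplicates to remove later
```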

Key aspects in the fight against bias

A good strategy for training data is therefore crucial in the fight against bias. It starts with the merging of the data, the so-called ETL process (Extract, Transform, Load). Care must be taken to ensure that the data extracted from ERP or CRM systems is transformed into a uniform format and transferred to the test environment together with other external data in the same format. The second step is cleaning and labeling – in other words, correctly labeling and storing the annotations required for the training purpose. In addition, unusable third-party data and duplicated content must be removed. This step also involves defining the input and output variables. After that, it is a matter of fine-tuning: data sets that lack sufficient diversity ought to be further enriched or, if that is not possible, weighted in order to minimize bias.
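The sketch below compresses these steps into a few lines of Python. File names, columns, and the churn target are all hypothetical; the point is the order of operations – extract and transform to one schema, load into staging, clean and deduplicate, define the variables, then weight under-represented classes when enrichment is not an option:

```python
# ETL + cleaning + weighting, compressed (hypothetical names throughout).
import numpy as np
import pandas as pd
from sklearn.utils.class_weight import compute_class_weight

# 1. Extract + Transform: bring ERP and CRM exports to a uniform format.
erp = pd.read_csv("erp_export.csv").rename(columns={"cust_id": "customer_id"})
crm = pd.read_json("crm_export.json")
staging = pd.concat([erp, crm], ignore_index=True)  # Load into staging

# 2. Clean: remove unusable third-party data and duplicated content.
staging = staging[staging["source"] != "third_party_unverified"]
staging = staging.drop_duplicates(subset="customer_id")

# 2b. Define input and output variables for the training purpose.
X = staging[["age", "region", "tenure_months"]]
y = staging["churned"]

# 3. Fine-tune: weight classes when further enrichment is not possible.
weights = compute_class_weight("balanced", classes=np.unique(y), y=y)
print(dict(zip(np.unique(y), weights)))
```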

SEE ALSO: Introducing software fuzzing – part of AI and ML in DevOps

Finding the right balance

When preparing the data, especially when labeling and defining the variables, companies should take care to strike the right balance. As described above, biases can never be completely eliminated, neither in technical systems nor in the people who create them. To minimize bias in target definition, statistical bias, or the reproduction of historically entrenched biases, it is therefore advisable to have as diverse a team as possible for input preparation, development, and testing of the output. In this way, companies come closest to “ground truth,” i.e. a match between the AI output and real-world conditions.
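A simple way to test the output against ground truth is to evaluate it per group rather than only in aggregate. The sketch below (synthetic arrays, assumed purely for illustration) makes the idea concrete: a gap between group-wise accuracies is a first indicator that the output diverges from real-world conditions for part of the population:

```python
# Per-group comparison of model output against ground truth
# (synthetic data, for illustration only).
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # ground-truth labels
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])   # model output
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute

for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: accuracy {acc:.2f}")
```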
