Machine learning gone wrong: Why should de-biasing be a priority?

December 28, 2018

The prevalence of machine learning today is undeniable. It is employed in a variety of areas and ways, from predicting a person's behavior, outlook, and preferences to customized selling in businesses. However, given today's accelerating pace of change and increasing complexity, there have been many high-profile examples of machine learning models not working as intended.

Recent years have seen many instances illustrating the disastrous consequences of biases creeping into machine learning models. Models trained on biased data, in turn, reflected those biases. In one case, a bot trained on data from Twitter feeds started spewing racial slurs and had to be shut down. In another, a widely used search engine had trouble distinguishing pictures of people of color from gorillas.

When machine learning models don't work as expected, the problem usually lies with the training data and the training method. Biases can enter machine learning models at various stages of the pipeline, such as data collection, data preparation, modeling, training, and evaluation. Some of the most common biases in ML are sampling, performance, confirmation, and anchoring bias.

Sampling bias can lead to models trained on data that is not fully representative of future cases. Performance bias exaggerates how a model's predictive power, its consistency across data segments, and its generalizability are perceived. Confirmation bias causes data to be sought, interpreted, emphasized, and recalled in ways that confirm pre-conceived notions. Anchoring bias results in over-reliance on the first piece of information examined.
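To make sampling bias concrete, here is a minimal sketch; the group names, sizes, and rates are invented for illustration and are not taken from the article. It shows how a sample that under-represents one group skews even a simple population estimate:

```python
# Illustrative sketch of sampling bias: a sample that under-represents one
# group gives a distorted estimate of the overall positive rate.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 90% group A (positive rate 0.5),
# 10% group B (positive rate 0.1).
group = rng.choice(["A", "B"], size=100_000, p=[0.9, 0.1])
label = np.where(group == "A",
                 rng.random(group.size) < 0.5,
                 rng.random(group.size) < 0.1)

true_rate = label.mean()

# Biased sampling: members of group B are 10x less likely to be sampled.
weights = np.where(group == "A", 1.0, 0.1)
idx = rng.choice(group.size, size=5_000, replace=False,
                 p=weights / weights.sum())
sampled_rate = label[idx].mean()

print(f"population positive rate: {true_rate:.3f}")
print(f"biased-sample estimate:   {sampled_rate:.3f}")  # visibly shifted upward
```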

So, is there a way to eliminate the bias in machine learning?

Some of the steps that can be taken to ensure a fair ML design are:

Bring together a social scientist with a data scientist: These two kinds of scientists speak completely different languages. For a data scientist, the term bias has a very specific technical meaning: it refers to a systematic error in a model's estimates. Similarly, the concept of "discriminatory potential" refers to a model's ability to accurately distinguish categories of data, and in data science a larger "discriminatory potential" can be the primary aim. In contrast, when social scientists talk about discrimination, they are more likely to mean questions of equity, and so they are more likely to supply a humanistic perspective on fairness and discrimination.

Exercise caution during annotation: Structured class labels for unstructured data like text and images are typically created by human annotators, and those labels are then used to train ML models. For example, annotators label pictures that show people, or classify texts as expressing positive or negative sentiment.
Human annotation has gradually turned into a business of its own, with various platforms emerging at the intersection of crowd-sourcing and the gig economy. Although the quality of annotation is sufficient for many tasks, human-generated annotation is inherently vulnerable to culturally ingrained biases.
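The article does not name a specific check, but one common way to surface inconsistent or culturally skewed labels is to measure inter-annotator agreement. A minimal sketch, assuming two hypothetical annotators labeling the same items and using scikit-learn's cohen_kappa_score:

```python
# Hedged sketch: low agreement between annotators is a signal to review the
# labeling guidelines and the items where they disagree.
from sklearn.metrics import cohen_kappa_score

# Illustrative sentiment labels from two hypothetical annotators.
annotator_1 = ["pos", "neg", "pos", "pos", "neg", "neg", "pos", "neg"]
annotator_2 = ["pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg"]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.2f}")  # values well below 1.0 warrant a review
```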

Mix fairness measures into traditional machine learning metrics: The performance of ML classification models is typically measured with traditional metrics that focus on class-level performance, overall model generalizability, and overall performance. There is always room to complement these with measures that promote fairness and expose bias in ML models. Such performance metrics are necessary for situational awareness, because, as the saying goes, "if it can't be gauged in some way, it can't be improved."
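As a minimal sketch of what such combined reporting could look like, the function below computes accuracy alongside two widely used fairness measures, demographic parity difference and equal opportunity gap. It assumes binary labels, binary predictions, and a binary sensitive attribute; the function and metric names are illustrative, not taken from the article:

```python
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Accuracy plus two group-fairness gaps for a binary classifier."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accuracy = (y_true == y_pred).mean()

    # Positive prediction rate per group (demographic parity).
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()

    # True positive rate per group (equal opportunity).
    tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()

    return {
        "accuracy": accuracy,
        "demographic_parity_diff": abs(rate_a - rate_b),
        "equal_opportunity_gap": abs(tpr_a - tpr_b),
    }

# Tiny illustrative example.
print(fairness_report(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 1, 0, 0, 0, 1, 0],
    group=[0, 0, 0, 0, 1, 1, 1, 1],
))
```

Reporting these gaps next to accuracy makes it harder for a model that performs well overall but poorly for one group to slip through unnoticed.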


During sampling, balance representativeness with critical-mass constraints: The traditional approach when sampling data is to make sure the sample statistically represents the cases the model will encounter in the future. The main concern with a purely representative sample is that it gives too little weight to minority cases, those that are statistically less common. This may seem intuitive and acceptable, but problems arise when certain demographic groups are statistical minorities in the dataset. ML models are rewarded for finding patterns that generalize to large groups, so if a group is not adequately represented in the data, the model will largely ignore what it could learn about that group.
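One possible way to implement such a critical-mass constraint is sketched below: sample each group roughly in proportion to its share of the data, but never below a minimum count. The function, column names, and parameters are assumptions for illustration:

```python
import pandas as pd

def sample_with_floor(df, group_col, n_total, min_per_group, seed=0):
    """Roughly proportional sample with a guaranteed minimum per group."""
    parts = []
    for _, g in df.groupby(group_col):
        # Proportional share, but never below the critical-mass floor
        # (and never more rows than the group actually has).
        n = max(int(round(n_total * len(g) / len(df))), min_per_group)
        parts.append(g.sample(n=min(n, len(g)), random_state=seed))
    return pd.concat(parts).sample(frac=1.0, random_state=seed)  # shuffle

# Usage with an illustrative dataframe containing a 'demographic' column:
# train_sample = sample_with_floor(df, "demographic", n_total=10_000,
#                                  min_per_group=500)
```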

Prioritize de-biasing: Even with all the previous steps, it is important to make de-biasing an integral part of developing and training the model. There are many ways to go about it. One is to completely strip the training data of any demographic signals, both implicit and explicit. Another is to incorporate fairness measures directly into the model's training objective.
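A minimal sketch of the first approach is shown below: drop explicit demographic columns and reweight instances so that labels and group membership become statistically independent in the training data. The column names and the reweighing scheme are assumptions for illustration, not the article's prescription:

```python
import pandas as pd

SENSITIVE = ["gender", "ethnicity", "age_band"]  # assumed column names

def strip_and_reweight(df, label_col, group_col):
    """Remove explicit demographic columns and compute balancing weights."""
    X = df.drop(columns=SENSITIVE + [label_col])  # explicit signals removed
    y = df[label_col]

    # Instances in under-represented (group, label) cells receive
    # proportionally more weight: expected vs. observed joint frequency.
    p_group = df[group_col].value_counts(normalize=True)
    p_label = y.value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    weights = df.apply(
        lambda row: p_group[row[group_col]] * p_label[row[label_col]]
        / p_joint[(row[group_col], row[label_col])],
        axis=1,
    )
    # The weights can be passed as sample_weight to most scikit-learn
    # estimators, e.g. model.fit(X, y, sample_weight=weights).
    return X, y, weights
```

Stripping explicit columns alone is rarely sufficient, since proxies for demographic signals often remain, which is why the reweighting or an in-training fairness measure is used alongside it.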

By making fairness a tenet of the machine learning development process, we will not only build fairer models but also design better ones.


Source: JAXenter