Ensuring the reliability of machine learning models is not an easy task, but it is essential as the use of these models becomes ubiquitous in society, writes Dr Adrian Byrne.

The importance of ensuring that artificial intelligence (AI) and machine learning are trustworthy, ethical and responsible is becoming increasingly acute as the world seeks to maximize the benefits of this increasingly sophisticated technology for society while minimizing its potential harms.

While AI is a broad concept related to the ability of computers to simulate human thinking and behavior, machine learning refers to computational algorithms that learn from data without explicit programming. Simply put, machine learning enables systems to identify patterns, make decisions, and improve through experience and data.

Machine learning models, or algorithms, provide the basis for automated decision-making systems that help businesses run operations and reduce costs. This has led to an explosion of applications in sectors such as healthcare, medicine, marketing, cybersecurity and finance, where machine learning is now used by banks to determine whether applicants should be considered for a loan.

While these models promise to provide the basis for a fairer and more equitable society, the algorithms are by no means infallible. They can degrade over time, discriminate against individuals and groups, and are open to abuse and attack.

The terms trustworthy, ethical and responsible, used so often in relation to AI, should now extend to defining the kind of machine learning we are prepared to tolerate in society. Machine learning must encompass accuracy, fairness, privacy and security, and the responsibility falls on its custodians, gatekeepers and developers in the field to ensure that everyone is protected.

Ensuring the reliability of machine learning models is not an easy task, and it is certainly not one that any single discipline can tackle on its own. It is now widely accepted that a more holistic approach is needed to study and promote trustworthiness in AI, with input from a wide range of experts in mathematics, philosophy, law, psychology, sociology and business.

A two-day seminar, held in person in the Swiss city of Zurich earlier this month, brought together international researchers working on algorithmic justice. The purpose of this seminar, which I attended, was to foster dialogue among these scholars in the context of legal and social frameworks, especially in light of the European Union’s efforts to promote ethical AI.

The seminar covered a wide range of topics that deserve consideration in relation to trustworthy machine learning.

Utility versus fairness

There is always a trade-off between utility, from the point of view of the decision-maker, and fairness, from the point of view of the person subject to that decision.

On the one hand, the decision-maker creates and controls the machine learning decision system to advance the goals of the business or organization. The model's predictions are used to reduce uncertainty and assess utility for the decision-maker. Broader concepts of social justice and equality are typically not part of the decision-maker's utility function.

On the other hand, the decision subject benefits from or is harmed by the decision based on the model's predictions. While the decision subject understands that the outcome may not be favorable, they at least expect to be treated fairly in the process.

The question arises: to what extent must the decision-maker's utility be traded off against fairness to the decision subject?
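
This tension can be made concrete with a small sketch. The scenario below is entirely hypothetical (the score distributions, groups and thresholds are invented for illustration): a lender picks an approval threshold, and we measure the side effect on the approval-rate gap between two groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented credit scores for two applicant groups, where group B's scores
# are shifted downwards (e.g. by historical bias in the training data).
scores_a = rng.normal(0.60, 0.15, 5_000)
scores_b = rng.normal(0.50, 0.15, 5_000)

for threshold in (0.40, 0.50, 0.60):
    rate_a = (scores_a >= threshold).mean()
    rate_b = (scores_b >= threshold).mean()
    approval = (rate_a + rate_b) / 2   # crude stand-in for lender utility
    gap = abs(rate_a - rate_b)         # demographic-parity difference
    print(f"threshold={threshold:.2f}  approval={approval:.2f}  group gap={gap:.2f}")
```

Even in this toy setting, the group gap never disappears; whichever threshold the decision-maker prefers, choosing it means choosing a point on the trade-off.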

Different models produce different biases

Different machine learning systems can produce different results. Decision-support systems use risk-prediction models that may drive selection processes with discriminatory outcomes.

Digital marketplaces use matchmaking machine learning models, which may lack transparency as to how buyers are matched with sellers. Public-facing websites use search-engine machine learning models, which may carry implicit bias in the content they suggest, based on assumptions made about the user.

Machine learning focuses more on outcomes than on procedure: the approach is largely about collecting data and minimizing the gap between the predicted outcome and the actual outcome.

The measure of this gap, known as the 'loss function', usually pushes developers to minimize individual prediction errors while ignoring group-level prediction errors, leading to one-sided learning objectives.
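
A minimal sketch of how this happens, using invented data: a single average loss can look healthy even while one group's error rate is several times the other's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented labels and group membership (0 = majority, 1 = minority).
y_true = rng.integers(0, 2, 1_000)
group = rng.integers(0, 2, 1_000)

# Simulate a model that errs on 10% of majority cases but 35% of minority cases.
flip = rng.random(1_000) < np.where(group == 1, 0.35, 0.10)
y_pred = np.where(flip, 1 - y_true, y_true)

errors = (y_pred != y_true).astype(float)
print("overall error: ", errors.mean())              # all a single loss reports
print("majority error:", errors[group == 0].mean())
print("minority error:", errors[group == 1].mean())  # far worse, but hidden
```

Breaking the loss out by group, as above, is the cheap first diagnostic; group-aware training objectives go a step further.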

Further bias can be introduced through the data used to train the machine learning model.

Poor data selection can also lead to problems related to under-representation or over-representation of particular groups, while what constitutes bias varies from person to person.
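
A basic representation check catches the crudest version of this problem. The dataset, column name and population shares below are invented for illustration:

```python
import pandas as pd

# Invented training set: 800 records from group A, 200 from group B,
# drawn from a population that is split 50/50.
train = pd.DataFrame({"group": ["A"] * 800 + ["B"] * 200})
population_share = {"A": 0.50, "B": 0.50}

sample_share = train["group"].value_counts(normalize=True)
for g, pop in population_share.items():
    print(f"group {g}: {sample_share[g]:.0%} of sample vs {pop:.0%} of population")
# Group B is under-represented by a factor of 2.5 here - worth knowing
# before any fairness-sensitive model is trained on this data.
```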

Indeed, model features are chosen by human judgment, so these biases can produce one-sided representations of the reality the machine learns, and those biases can in turn feed unfair decisions that affect the lives of individuals and groups.

For example, Uber uses accumulated driver data to calculate in real time the probability that a driver will receive another fare after a drop-off, as well as the potential value of that next fare and the time it will take to arrive.

This type of information embedded in a machine learning model can end up discriminating against passengers based in an economically deprived area compared with those in a wealthier one.

Situation testing

The third and final area covers the investigation of discrimination, which factors in sensitive information and requires counterfactual reasoning for meaningful situation testing.

The example given at the seminar concerned a female academic in her mid-40s who applied for promotion and was passed over in favor of a male colleague with similar education and experience. The female academic did not accept the promotion panel's decision as fair and impartial, and so proceeded to appeal it.

This example highlights the effort required of the aggrieved individual in appealing an opaque decision. It also shows how that individual must disclose their own sensitive information in order to build a case against an unfair decision.

Even after committing all their energy, data and reasoning, can they successfully prove their case without access to the data and the machine learning model that helped produce the decision?
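
With access to the model, situation testing becomes mechanical. The sketch below assumes a hypothetical scoring model and invented feature names; it simply holds every attribute fixed except the sensitive one and compares the outputs:

```python
# Counterfactual situation testing: vary only the sensitive attribute
# and compare the model's outputs. `model`, the feature names and the
# applicant profile are all hypothetical stand-ins.

def situation_test(model, applicant: dict, sensitive_key: str, alternatives):
    """Score the same applicant under each value of the sensitive attribute."""
    return {
        value: model.predict({**applicant, sensitive_key: value})
        for value in alternatives
    }

# Example usage (commented out because the model is hypothetical):
# scores = situation_test(
#     promotion_model,
#     {"experience_years": 15, "publications": 40, "sex": "F"},
#     "sex", ["F", "M"],
# )
# A large gap between scores["F"] and scores["M"] would be evidence that
# the sensitive attribute, not merit, is driving the decision.
```

Without that access, the appellant is left trying to reconstruct this comparison from the outside, which is precisely the asymmetry the seminar discussion highlighted.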

Discrimination is illegal, but uncertainty about how people make decisions often makes it difficult for the legal system to know if someone has actively discriminated. From this perspective, incorporating machine learning models into the decision-making process can improve our ability to detect discrimination.

This is because processes involving algorithms can provide crucial forms of transparency that are otherwise not available. However, for that to be the case, we must make it so.

We need to ensure that the use of machine learning models makes it easier to examine and question the entire decision-making process, making it easier to know if discrimination has occurred.

This use of algorithms should make trade-offs between competing values more transparent. Trustworthy machine learning, therefore, is not necessarily all about regulation; done properly, it can improve human decision-making for the betterment of all of us.

By Dr Adrian Byrne

Dr Adrian Byrne is a Marie Skłodowska-Curie Career-Fit Plus fellow at CeADAR, Ireland's center for applied AI and machine learning.
