
Removing class bias in algorithms?

Gender, race and algorithms

As a Chinese woman, I have found that the voice recognition software at my utility provider has never understood my surname: after I have repeated ‘Lui’ three times, it informs me that there is a problem. My ‘unusual’ surname and/or my gender may have caused the difficulty.

Whilst my experience is not in itself evidence of race or gender bias, Rachael Tatman’s 2016 research shows that Google’s speech recognition does contain gender bias: it performs more consistently on male voices than on female ones, a phenomenon commonly known as the ‘white guy’ syndrome in algorithms.
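To make the measurement concrete, the sketch below (my own illustration with invented transcripts, not Tatman’s code or data) shows how such a disparity is typically quantified: compute the recogniser’s word error rate separately for each speaker group and compare the averages.

```python
# A minimal sketch, with invented transcripts, of measuring a speech
# recogniser's word error rate (WER) per speaker group.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        for j in range(len(hyp) + 1):
            if i == 0:
                dp[i][j] = j
            elif j == 0:
                dp[i][j] = i
            else:
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Hypothetical (speaker_group, true transcript, recogniser output) triples.
samples = [
    ("male",   "my surname is lui", "my surname is lui"),
    ("male",   "please check my account", "please check my account"),
    ("female", "my surname is lui", "my surname is louie"),
    ("female", "please check my account", "please check me account"),
]

for group in ("male", "female"):
    rates = [wer(ref, hyp) for g, ref, hyp in samples if g == group]
    print(f"{group}: mean WER = {sum(rates) / len(rates):.2f}")
```

A consistently higher mean error rate for one group, on a sufficiently large and balanced sample, is the kind of evidence Tatman’s study relied on.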

Scientists have realised that machine learning programmes can discriminate on the basis of the protected characteristics of race and gender. Men are shown more advertisements for high-paying jobs in Google search results than women. Google’s predictive text suggests that the Chinese are ‘rude’, ‘cruel’ and ‘horrible’ when one types the phrase ‘Chinese are…’.

Decisions and algorithms

Complex algorithms and a lack of transparency make lending decisions difficult to analyse. Rather than focusing on the opaque decision-making process inside the machine’s ‘black box’, three Google researchers, Hardt, Price and Srebro, analysed the decisions the machines actually made.

In 2016, they devised a test for discrimination that examines the data going into a programme alongside the decisions that come out of it. Their approach is called ‘Equality of Opportunity in Supervised Learning’. Using a loan-granting case study, their methodology rejects two earlier notions of fairness, namely ‘fairness through unawareness’ and ‘demographic parity’. In their place, Hardt, Price and Srebro submit that people who can repay a loan should be granted one regardless of any sensitive attributes: the equal opportunity principle.
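To illustrate the distinction, the short Python sketch below (my own illustration with made-up figures, not the researchers’ code or data) contrasts the two criteria on a toy set of loan decisions: demographic parity compares overall approval rates across groups, whereas equal opportunity compares approval rates only among customers who would in fact repay.

```python
# A minimal sketch, with made-up data, contrasting two fairness criteria on
# toy loan decisions. 'group' is the sensitive attribute, 'would_repay' is
# whether the customer can in fact repay, 'approved' is the model's decision.
from collections import defaultdict

records = [
    ("A", True,  True), ("A", True,  True),  ("A", False, True),  ("A", False, False),
    ("B", True,  True), ("B", True,  False), ("B", False, False), ("B", False, False),
]

all_decisions = defaultdict(list)      # every decision per group
repayer_decisions = defaultdict(list)  # decisions for customers who would repay

for group, would_repay, approved in records:
    all_decisions[group].append(approved)
    if would_repay:
        repayer_decisions[group].append(approved)

def rate(decisions):
    return sum(decisions) / len(decisions)

for g in sorted(all_decisions):
    # Demographic parity compares overall approval rates across groups.
    # Equal opportunity compares approval rates among those who would repay.
    print(f"group {g}: approval rate = {rate(all_decisions[g]):.2f}, "
          f"approval rate among repayers = {rate(repayer_decisions[g]):.2f}")
```

On these figures neither criterion is satisfied; under the equal opportunity principle, only the second comparison, the approval rate among customers who would repay, needs to be equalised across groups.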

Hardt, Price and Srebro’s research helps detect discriminatory processes. However, the use of black box predictors attracted criticism from Sharkey, an Emeritus Professor of robotics and artificial intelligence at the University of Sheffield. Sharkey argues that black box predictors are well suited to construction projects, such as planning the best way to lay an oil pipeline. When it comes to decisions that affect human lives, however, he believes they are unsuitable because they lack transparency. Srebro acknowledges that this can be a problem in some cases, although he believes black box predictors can be suitable when the individual stakes are lower.

Class and algorithms

In banking, I would argue that Hardt, Price and Srebro’s methodology needs to be extended to cover class bias, and that black box predictors need to be transparent to protect consumers. In the period before the most recent financial crisis, some customers signed contracts without fully understanding them because they felt they had few options. This is largely a problem of access to the financial market, which has not been equal in some cases: customers on lower salaries have been charged higher prices.

There is evidence in the UK that customers with lower salaries pay more for banking services and products because lower credit scores restrict their access to credit. This ‘poverty premium’ is illustrated by the account of a former employee and whistle-blower at one of the big four UK banks.

Putting customers first

The whistle-blower’s main grievance concerns an unfair sales segmentation policy. His former employer divided customers into three classes: very wealthy customers belonged to the ‘Mayfair’ category, the middle category was called the ‘mass affluent’, and the bottom category the ‘mass market’. The whistle-blower argues that this segregation was unfair for the following reasons.

First, the ‘mass market’ customers were the most vulnerable and needed the most protection, yet they were offered the least advice: they had “five lines of advice when the wealthy people had twelve lines of advice”. Secondly, the staff advising the ‘mass market’ class were less qualified, and these customers’ access to financial products was restricted.

The practical effect was that ‘mass market’ customers paid more for the same financial products than ‘mass affluent’ customers. Sales advisors serving the ‘mass affluent’ earned the most, since they were well qualified and sold the products carrying the highest commissions, such as insurance bonds, which generated commission of 5-7%.

This segmented sales policy essentially discriminated against ‘mass market’ customers and ignored the principle of ‘Putting Customers First’ in the bank’s Code of Conduct.

This incident highlights the unequal access to information and credit that class bias produces. The lack of timely information received by consumers is a key factor in information asymmetry.

In the financial sector, the information problems facing consumers include complex products, financial illiteracy and opaque pricing.

These problems are further aggravated when financial institutions do not disclose sufficient information about complex financial products. Unless lenders explain such products to customers in a simple manner, disclosure per se will not assist.

Hardt, Price and Srebro include customers’ incomes in their case study of predicting whether customers will default on loans. However, the consequential factors of restricted access to credit and of information asymmetry arising from the lack of advice given to poorer customers must also be considered. Unless the banking culture changes, machine bias will remain.
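By way of illustration only (the income-band attribute and all figures below are hypothetical, not drawn from their case study), such an audit could be extended to class by comparing, for each income band, both access to credit among customers who would repay and the price charged to those who are approved, capturing the ‘poverty premium’ alongside restricted access.

```python
# A minimal sketch, with hypothetical figures, extending the audit to class:
# compare access to credit for customers who would repay, and the APR offered
# to those who are approved, across income bands.
from statistics import mean

# (income_band, would_repay, approved, apr_offered_if_approved)
customers = [
    ("low",  True,  True,  19.9), ("low",  True,  False, None),
    ("low",  True,  True,  24.9), ("low",  False, False, None),
    ("high", True,  True,   6.9), ("high", True,  True,   7.9),
    ("high", True,  True,   6.4), ("high", False, True,   9.9),
]

for band in ("low", "high"):
    repayers = [c for c in customers if c[0] == band and c[1]]
    approved = [c for c in customers if c[0] == band and c[2]]
    access = mean(int(c[2]) for c in repayers)   # equal opportunity across class
    price = mean(c[3] for c in approved)         # the 'poverty premium' in pricing
    print(f"{band}-income: approval rate among repayers = {access:.2f}, "
          f"mean APR for approved customers = {price:.1f}%")
```

On these invented figures, lower-income customers who could repay are approved less often and, when approved, pay a higher price: precisely the combination of restricted access and poverty premium described above.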

Treat customers fairly

The Financial Conduct Authority sets out in Principle 6 of PRIN 2.1 of the FCA Handbook that: ‘A firm must pay due regard to the interests of its customers and treat them fairly’. This principle has to be taken seriously if class bias is to be removed. After all, artificial intelligence reflects the values of its creators. Mathematical models in artificial intelligence can help identify problems such as class bias.

Nevertheless, a multi-disciplinary approach to artificial intelligence is required to remove such bias. Legislation is often reactive and lags behind technology; it is also prone to the cycle of over-regulation, deregulation and re-regulation. As such, soft law, in the form of self-regulation by industry participants, should complement hard legislation. Banking and finance experts need to work with regulators to eliminate class bias.

 

Dr Alison Lui is a Senior Lecturer at Liverpool John Moores University (LJMU). Dr Lui obtained her LL.B (European Legal Studies) from the University of Bristol, holds an LL.M (Corporate and Commercial Law) from the London School of Economics and a doctorate from the University of Liverpool. She qualified as a Solicitor and practised commercial law before joining LJMU. She teaches a number of business-related modules on the LL.B, LL.M and LPC programmes.

Alison’s research interests lie predominantly in financial regulation, ethics, and artificial intelligence in financial regulation and corporate governance. She has published articles and book chapters. Her monograph “Financial stability and prudential regulation: A comparison between the regulators and central banks of the United Kingdom, the United States, Canada, Australia and Germany” was published by Routledge in September 2016. She has also appeared on radio programmes.

Alison has won a number of awards to date. These include a Winston Churchill Fellowship, a Max Planck Society Fellowship, an Academic Fellowship with the Honourable Society of the Inner Temple, London and a Fellowship with the Royal Society of Arts.
