How often does the media depict the relentless rise of technology as a danger to our health, our children and our security? More recently, commentators have started to identify the ways in which technology discriminates against users because of their race, disability, gender or sexual orientation. For example, Sara Wachter-Boettcher, in her book “Technically Wrong: Sexist Apps, Biased Algorithms and Other Threats of Toxic Tech”, outlines many chilling examples of discriminatory technology. In this article, we look at the ways in which service providers and employers could be liable under the Equality Act 2010 when they use technology that operates in a discriminatory way. We also examine some practical ways in which organisations can manage their exposure, and consider the evolving debate about the extent to which robotics can impact on human rights.
Algorithms are sets of steps created by programmers. They usually perform repetitive and tedious tasks in lieu of human actors. For example, when LinkedIn informs a user that someone in her network is also connected to five of her contacts, it is an algorithm – and not a human – that has quickly compared the two networks to find common contacts.
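The kind of comparison described above can be sketched as a simple set intersection. All names and data structures here are hypothetical illustrations, not LinkedIn's actual code:

```python
def common_contacts(network_a, network_b):
    """Return the contacts that appear in both networks."""
    # A set intersection does in microseconds what would be tedious
    # for a human: checking every pair of names for a match.
    return set(network_a) & set(network_b)

# Hypothetical users and networks:
alice = {"Priya", "Tom", "Jamal", "Sofia"}
bob = {"Tom", "Sofia", "Chen"}
print(sorted(common_contacts(alice, bob)))  # ['Sofia', 'Tom']
```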
However, algorithms are code written by humans for human purposes, and algorithms can discriminate on the grounds of protected characteristics when they become tainted by the unconscious assumptions and attitudes of their creators.
One such algorithm might have been used by Etsy, an online retailer for unique gifts. It contacted users on Valentine’s Day with a view to encouraging purchases from its site. It appears to have used an algorithm that assumed female users of its website were in a relationship with a man: one customer, Maggie Delano, received the message “Move over, Cupid! We’ve got what he wants. Shop Valentine’s Day gifts for him”. The problem was that Maggie Delano is a lesbian, and any Valentine’s gift she might buy would most likely be for a woman. With a single line of code, Etsy had alienated its homosexual client base. Indeed, all homosexual clients were at risk of being offended by this ill-considered message, and as such there was arguably direct discrimination on the grounds of sexual orientation. Etsy is a service provider under s.29 of the Equality Act 2010 (EA 2010) and as such is subject to s.13 in the same way as an employer. It follows that a claim could theoretically have been brought by an individual in the same position as Maggie Delano.
Another algorithm, also highlighted by Sara Wachter-Boettcher, was utilised by a chain of gyms in Britain called PureGym. In 2015, Louise Selby, a paediatrician, was unable to use her gym swipe card to access the locker rooms. It transpired that the gym was using third-party software which used a member’s title to determine which changing room (male or female) they could access. The software contained an algorithm in which the title “Doctor” was coded as “male”. As a female doctor, she was not permitted to enter the women’s changing rooms. The gym is a service provider under s.29 EA 2010 and accordingly must not subject its members to direct sex discrimination contrary to s.13.
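The flaw described above can be reduced to a single assumption in a lookup table. The following is a hypothetical reconstruction for illustration only, not the vendor's actual code:

```python
# Hypothetical title-to-gender mapping of the kind described above.
# The discriminatory assumption lives in one line: "Doctor" -> "male".
TITLE_TO_GENDER = {
    "Mr": "male",
    "Mrs": "female",
    "Miss": "female",
    "Ms": "female",
    "Doctor": "male",  # assumes all doctors are men
}

def changing_room(title):
    """Route a member to a changing room based solely on their title."""
    return TITLE_TO_GENDER.get(title, "unknown")

# A female doctor is routed to the men's changing room:
print(changing_room("Doctor"))  # male
```

The point is how little code it takes: one mistaken entry, invisible to the purchaser of the software, produces direct sex discrimination at the door of the changing room.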
Service providers need to be aware that it is irrelevant to the question of liability that the gym did not know about, and did not intend, the discrimination against women. There is no equivalent to s.109 EA 2010 in relation to service providers. That is, whilst employers can run the “employer’s defence” under s.109 EA 2010 if they can show that they took all reasonable steps to prevent the discrimination arising, no such defence applies to service providers. They are fixed with the discriminatory consequences of the technology they use.
However, the problem does not stop there: in most cases the service provider will not have written the relevant code itself but will have bought it from an outside source. At the point of purchase, the service provider is in a difficult position. Algorithms are often closely guarded secrets, or so complex that any discriminatory assumptions might not be immediately apparent to a purchaser of the software.
Service providers need to manage their exposure. The least they can do is carefully quiz their technology providers to ensure that products have been “equality proofed”. In our view, they should also insist that the undertaking providing the code indemnifies them against any discriminatory effects it may have. Discrimination claims against a service provider are not merely annoying: they have the capacity to destroy goodwill and a reputation built up over many years. Thinking about this in advance is at least partial insurance against that kind of damage.
There is plenty of scope for technology to lead to harassment. Numerous illustrations are contained in Sara Wachter-Boettcher’s book, for example:
- In August 2016, Snapchat introduced a face-morphing filter which was “inspired by anime”. In fact, the filter turned its users’ faces into offensive caricatures of Asian stereotypes.
- In 2017, nearly all smartphone assistants had default female voices, e.g. Apple’s Siri, Google Now and Microsoft’s Cortana. This echoes the dangerous gender stereotype that women, rather than men, are expected to be helpful and subservient.
- Google Photos introduced a feature which tagged photos with descriptors, for example, “graduation”. In 2015, a black user noticed that over 50 photos depicting her and a black friend were tagged “gorillas”. Of course, Google Photos had not been programmed to tag some black people as “gorillas” but this was the conclusion which the learning algorithm at the heart of the technology had independently reached.
In Great Britain, users who are offended by this type of technology might be able to bring harassment claims under s.26 EA 2010 against service providers. Compensation for injury to feelings in discrimination claims against service providers is often low. It is obvious that a claim brought by a large group of people affected by any such harassment could lead to considerable financial exposure as well as creating a PR disaster. So again precautions at the outset are called for.
Indirect discrimination is usually less of a reputational disaster, but it can be serious. We are clear that the creators of apps (and the service providers who purchase them) could also unwittingly expose themselves to indirect discrimination claims by failing to think inclusively about their client base.
In 2015, research revealed that, of the top 50 “endless runner” games in the iTunes store which used gendered characters, fewer than half offered female characters. In contrast, only one game did not offer a male character. Why on earth is that?
Whilst there is no necessary connection between a person’s gender and the gender of the character that they would choose within a virtual environment, some research has shown that the majority of users (especially women) will choose an avatar that mirrors their gender identity. It follows that the absence of female avatars will place female users at a particular disadvantage as per s.19 EA 2010 and could lead to indirect sex discrimination claims. No doubt a similar analysis could be applied to race.
Another problem area is in relation to names. Many services require users to enter their real names. In order to decrease the likelihood of people using false names, algorithms have been developed to “test” entries. This creates barriers for people who have names that are deemed “invalid” by algorithms which have been constructed so as to recognise mostly “western” names.
An example highlighted by Sara Wachter-Boettcher is Facebook and a would-be user called Shane Creepingbear, who is a member of the Kiowa tribe of Oklahoma. When he tried to register in 2014, he was informed that his name violated Facebook’s policy. Again, the algorithm used by Facebook at that point could have formed the basis of an indirect discrimination claim under s.19 EA 2010.
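One way such a filter could misfire is sketched below. The blocklist and the logic are entirely hypothetical – this is not Facebook's actual algorithm – but they illustrate how rules tuned to “western” naming conventions can reject real names:

```python
import re

# Hypothetical "real name" filter. The blocklist of words assumed to
# signal a fake name is an invented example of a western-centric rule.
SUSPICIOUS_WORDS = {"bear", "wolf", "creeping"}

def looks_real(name):
    """Return True if the name passes the (flawed) validity check."""
    parts = re.split(r"[\s-]+", name.lower())
    # Any name part containing a blocklisted word is flagged as fake.
    return not any(word in part for part in parts for word in SUSPICIOUS_WORDS)

print(looks_real("Shane Creepingbear"))  # False: a real Kiowa name rejected
print(looks_real("John Smith"))          # True
```

A rule like this applies equally to everyone, yet puts people with non-western names at a particular disadvantage – the classic shape of an indirect discrimination claim.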
Companies will only be able to avoid these risks by thinking broadly about who will use their products and testing products rigorously, with a view to avoiding discrimination, before launching them.
Duty to make reasonable adjustments
We are accustomed to thinking about the duty to make reasonable adjustments in the context of technology. A common example is the feature on many taxi apps whereby a user can ask for a wheelchair-adapted car. But there are more subtle ways in which technology can discriminate against disabled users by making assumptions about customer behaviour. Smart weighing scales are an interesting case in point. Sara Wachter-Boettcher writes about a set of scales which tracks basic data about the user, which is then stored and used to create personalised “motivational” messages like “Congratulations! You’ve hit a new low weight”. The difficulty, as Wachter-Boettcher points out, is that these scales assumed users would have only one goal – weight loss. A user recovering from an eating disorder or in the throes of a degenerative disease would likely find these messages counterproductive. Similarly, a user who succeeds in putting weight on receives an insensitive message like “Your hard work will pay off [name]! Don’t be discouraged by last week’s results. We believe in you! Let’s set a weight goal to help inspire you to shed those extra pounds”. A simple adjustment, like being able to choose your goal, would avoid the risk of the manufacturer being in breach of the duty to make reasonable adjustments under s.20 EA 2010.
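The contrast between the flawed design and the simple adjustment can be sketched as follows. The message texts and function are hypothetical illustrations, not the product's actual code:

```python
def message(previous_kg, current_kg, goal="lose"):
    """Generate a 'motivational' message after a weigh-in.

    The original flaw: goal defaults to "lose" and, without the
    adjustment of a user-selected goal, every user is treated as
    though weight loss were their aim.
    """
    if goal == "lose":
        if current_kg < previous_kg:
            return "Congratulations! You've hit a new low weight."
        return "Don't be discouraged. Let's shed those extra pounds!"
    if goal == "gain":
        if current_kg > previous_kg:
            return "Great progress towards your goal weight!"
        return "Keep going - you're doing fine."
    return f"This week: {current_kg} kg."  # neutral, goal-free message

# Without the adjustment, a user recovering from an eating disorder
# who gains weight gets a counterproductive weight-loss message:
print(message(52, 53))                 # loss-framed message
print(message(52, 53, goal="gain"))    # goal-aware message
```

The reasonable adjustment is a single extra parameter; the legal risk lies in hard-coding the assumption instead.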
Discouraging diversity by replicating the past
Technology could also have a worrying impact on diversity as artificial intelligence becomes more prevalent. As already alluded to, some technology is based on recognising patterns and “learning” from existing historical data. Word2vec is a neural network which analyses data so as to understand the semantic relationships between words. The problem is that some of that data will be shaped by historic and continuing direct or indirect discrimination. Research showed, for example, that the system perceived a relationship between being male and being a computer programmer, whereas women were associated with staying at home. Similarly, architects were deemed male and interior designers female.
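The way researchers surface this bias is analogy arithmetic over word vectors: subtract “man”, add “woman”, and see which word is nearest to the result. The sketch below uses tiny made-up vectors purely for illustration; real word2vec embeddings are learned from large text corpora and have hundreds of dimensions:

```python
import numpy as np

# Toy vectors invented for illustration; the geometry mimics the
# reported bias (programmer ~ male, homemaker/nurse ~ female).
vecs = {
    "man":        np.array([1.0, 0.0, 0.2]),
    "woman":      np.array([-1.0, 0.0, 0.2]),
    "programmer": np.array([0.9, 0.8, 0.1]),
    "engineer":   np.array([0.8, 0.7, 0.2]),
    "homemaker":  np.array([-0.9, 0.8, 0.1]),
    "nurse":      np.array([-0.8, 0.7, 0.2]),
}

def nearest(target, exclude):
    """Return the vocabulary word closest (by cosine) to the target vector."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cos(vecs[w], target))

# "man is to programmer as woman is to ...?"
query = vecs["programmer"] - vecs["man"] + vecs["woman"]
print(nearest(query, exclude={"programmer", "man", "woman"}))  # homemaker
```

The embedding has simply reproduced the gendered associations in its training text; nothing in the algorithm itself knows, or cares, that the pattern it has learned is discriminatory.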
Sara Wachter-Boettcher points to a company which decided, in 2016, to utilise this type of software to facilitate recruitment decisions. One way in which the software could be used was to rate CVs so as to identify “matches” between potential employees and existing successful employees. The dangers should have been obvious. This type of software is more likely to identify new employees who have similar experiences, backgrounds and interests as the current workforce. Any inbuilt stereotyping will mean that new recruits are far more likely to be the same gender and race as existing employees.
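The risk described above can be made concrete with a deliberately crude sketch. The scoring rule and the traits are hypothetical, not any vendor's actual product, but they show why similarity-based screening replicates the existing workforce:

```python
from collections import Counter

def similarity(candidate_traits, workforce):
    """Score a candidate by how often their traits appear in the workforce.

    A higher score means a "better match" - which simply rewards
    resembling the people already employed.
    """
    counts = Counter(t for employee in workforce for t in employee)
    return sum(counts[t] for t in candidate_traits)

# Hypothetical, homogeneous existing workforce:
workforce = [{"male", "rugby", "oxbridge"},
             {"male", "rugby", "coding"}]

print(similarity({"male", "rugby"}, workforce))      # 4: strong "match"
print(similarity({"female", "netball"}, workforce))  # 0: screened out
```

Any protected characteristic that correlates with the current workforce's make-up is silently baked into the score – the statistical fingerprint of an indirect discrimination claim.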
In such a scenario, an applicant who was rejected because they were “different” to existing employees might be able to bring an indirect discrimination or even perhaps a direct claim under the EA 2010. Equally, statistics showing that a workforce lacks diversity might be used by other claimants to boost allegations of discrimination.
Human rights perspective
Finally, it is important not to overlook the potential human rights implications of the rise in technology. We are all familiar with press stories explaining how robotics will help employers to plug gaps in the labour market. The use of robotic carers for older and vulnerable people appears to be gaining particular momentum.
There is a positive side to increased automation, as assistive devices and robots can compensate for physical weaknesses by enabling people to bathe, shop and be mobile. Tracking devices can also promote autonomy by allowing people to be remotely monitored. Some human rights instruments have gone as far as enshrining a right to assistive technology. For example, the Convention on the Rights of Persons with Disabilities states that assistive technology is essential to improve mobility.
However, there are possible negative consequences, as identified recently by the UN’s Independent Expert on the enjoyment of all human rights by older people in her report. For example, consent to use assistive technologies might not be adequately sought from older people, especially as there is still a prevalent ageist assumption that older people do not understand technology. Overreliance on technology could lead to infantilisation, segregation and isolation. There is also a risk that artificial intelligence might replicate prejudice and discrimination. The report echoes the concerns identified by Sara Wachter-Boettcher when it states that:
“There is some evidence that artificial intelligence could reproduce and amplify human bias and as a result automated machines could discriminate against some people. Biased datasets and algorithms may be used in judicial decision-making, medical diagnoses and other areas that have an impact on older person’s lives. Auditing machine-made decisions, and their compliance with human rights standards, is therefore considered necessary to avoid discriminatory treatment” (paragraph 61)
This all indicates that service providers, in the rush to create technological solutions to pressing social needs, must always carefully assess products with discrimination and human rights in mind. In the right circumstances, claimants can rely on human rights instruments in litigation against service providers.
Technology is the next frontier for discrimination lawyers. There is infinite scope for novel legal arguments about the application of the EA 2010 and applicable human rights instruments. Whilst most high-value litigation has been confined to the employment field, it is possible that claims will become more prevalent in the goods, facilities and services sphere. Where technology is discriminatory, the sheer number of possible claimants may mean that group actions become common, and very expensive.
 “Technically Wrong: Sexist Apps, Biased Algorithms and Other Threats of Toxic Tech”, Sara Wachter-Boettcher, pages 32-33.
 For further analysis of this type of direct discrimination claim, see the recent blog by Dee Masters at http://www.cloisters.com/blogs/identifying-direct-discrimination-in-proxy-cases-after-r-on-the-application-of-coll-v-secretary-of-state-for-justice.
 “Technically Wrong: Sexist Apps, Biased Algorithms and Other Threats of Toxic Tech”, Sara Wachter-Boettcher, page 6.
 Ibid, page 7.
 Ibid, pages 37-38.
 Ibid, pages 129-132.
 This is an algorithm which analyses historical data in order to identify patterns and then make judgements or conclusions.
 In Paulley v FirstGroup Plc [2017] UKSC 4, a disabled man who was refused access to a bus was awarded only £5,500.00 for injury to feelings. He brought a claim for a failure to make reasonable adjustments rather than harassment, but the case is still indicative of the low level of compensation available in goods, facilities and services cases where there is no financial loss. Robin Allen QC represented Mr Paulley.
 These are games where the objective is to keep virtual characters running as long as possible.
 “Technically Wrong: Sexist Apps, Biased Algorithms and Other Threats of Toxic Tech”, Sara Wachter-Boettcher, page 35.
 An avatar is an image that represents the user.
 Rosa Mikeal Martey, Jennifer Stromer-Galley, Jaime Banks, Jingsi Wu, Mia Consalvo, “The strategic female: gender-switching and player behavior in online games”, Information, Communication & Society, 2014; 17(3): 286. This research revealed that, within a particular virtual environment, 23% of users who identified as men chose opposite-sex avatars, whereas only 7% of women gender-switched.
 “Technically Wrong: Sexist Apps, Biased Algorithms and Other Threats of Toxic Tech”, Sara Wachter-Boettcher, pages 54-55.
 Ibid, pages 138-139.
 Ibid, pages 138-139.
 Ibid, pages 139-140.
 “States Parties shall take effective measures to ensure personal mobility with the greatest possible independence for persons with disabilities, including by … (b) Facilitating access by persons with disabilities to quality mobility aids, devices, assistive technologies and forms of live assistance and intermediaries, including by making them available at affordable cost …” (Article 20).
 “Report of the Independent Expert on the enjoyment of all human rights by older people”, 21 July 2017, available here: https://ec.europa.eu/eip/ageing/library/report-un-independent-expert-enjoyment-all-human-rights-older-people_en