
Using LLMs for more equal opportunities (great pros, sobering cons)

Oct 26, 2023


Aurore Geraud

“Technology is neither good nor bad; nor is it neutral.” — Melvin Kranzberg

As commentators urge legislators to regulate the use of Large Language Models (LLMs) and reflect on their ethical challenges, everyday tech users are deluged with promises about the technology’s benefits. McKinsey predicts LLMs will boost labour productivity growth by 0.1 to 0.6 percentage points annually through 2040.

It’s natural to speculate about how LLMs might help mitigate the challenges of day-to-day life and society, notably when it comes to designing new opportunities. This is something we’ve seen time and time again across various industrial and technological revolutions: the respective rises of the gig economy, influencer economy, and virtual economies created observable income opportunities, at least at the outset, for those able to engage with them.

What could LLMs contribute to a more perfect society?

Accessible communication—removing barriers to inclusion for people with difficulty perceiving, comprehending, or interacting with traditional forms of communication—could be a good use case for them, and for generative AI more broadly.

A 2018 report found that 52 percent of people with dyslexia had experienced discrimination during job interviews or selection processes. In December 2022, an independent contractor used OpenAI’s GPT-3 to overcome dyslexia and write emails to clients on their behalf. More than a helpful spell-check tool, the technology, the contractor said, helped them sign their first “big” contract, with a client unaware of the contractor’s extra technological advantage.

Work like this is more than asking a tool to construct a form letter. It’s about transposing information coherently and adapting it to one’s interlocutor (making prompt engineering an increasingly important skill). This could make LLMs immensely useful as an assistive technology. Some neurodiverse individuals, a group whose unemployment rate is sometimes estimated at 75 percent, are using ChatGPT to better articulate ideas and thoughts to others in educational and professional settings.
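As a loose illustration of what such prompt-driven rewriting can look like in practice, here is a minimal sketch using OpenAI’s Python SDK. The model name, prompt wording, and draft are hypothetical choices for illustration, not the contractor’s actual setup.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A rough draft the writer might struggle to polish on their own.
draft = "hi, i can start teh project monday, pls confirm budget + timeline thx"

# Ask the model to rewrite the draft as a polished client email,
# preserving the author's intent while fixing spelling and tone.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {
            "role": "system",
            "content": "Rewrite the user's draft as a clear, professional "
                       "email to a client. Keep the meaning unchanged.",
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)

The point here is less the API call than the adaptation: the same draft could be re-targeted at a colleague, a regulator, or a lay audience simply by changing the system prompt.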

We could also imagine LLMs bridging the gap between different cultures and languages, facilitating communication that is comfortable and comprehensible to both parties. This is especially relevant in education, where it could mean greater access to higher learning ... if we set aside, for a moment, the digital divide and other barriers to entry, like hardware availability or access to undisturbed study space and time.

The non-profit Climate Cardinals used ChatGPT to translate climate documentation for Taiwan’s Ministry of Education, demonstrating the tool’s potential for building climate literacy amongst children learning English.

These positive societal effects are limited by the accuracy and contextual relevance of a given translation. In many cases, otherwise “good” translations can be detrimental to “minority” populations. We’ve seen repeated instances of AI-driven “anti-bias” tools filtering out candidates of colour in hiring situations. Refugees seeking asylum have been misquoted by AI tools, compromising their immigration chances. Perhaps for reasons like these, OpenAI’s current usage policy discourages using its tools in “high-risk government decision-making”, including migration and asylum work.

All this seems ironic, given that the current fashion positions LLMs as great “equalisers,” notably when it comes to making unbiased hiring decisions. The general idea is that the tool can ignore gender, race, and sexual orientation data in résumés, instead comparing candidates on the basis of experience, qualifications, and other more “objective” factors.
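The redaction step at the heart of that idea is simple to sketch. Below is a minimal, hedged illustration in Python; the field names and data are hypothetical, and real résumé pipelines are far messier.

# A minimal sketch of demographic redaction before candidate comparison.
# Field names are hypothetical; real resume data is far less tidy.

SENSITIVE_FIELDS = {"name", "gender", "race", "sexual_orientation", "age"}

def redact(candidate: dict) -> dict:
    """Drop fields that directly encode protected characteristics."""
    return {k: v for k, v in candidate.items() if k not in SENSITIVE_FIELDS}

candidate = {
    "name": "A. Example",
    "gender": "F",
    "experience_years": 7,
    "qualifications": ["BSc Computer Science", "PMP"],
}

print(redact(candidate))
# -> {'experience_years': 7, 'qualifications': ['BSc Computer Science', 'PMP']}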

Unfortunately, the “objective” factors that survive redaction are often dramatically shaped by the demographic attributes that were removed. At the moment, examples of bias in the use of these tools are uncomfortably frequent: job adverts written by ChatGPT appear 40 percent more biased than those written by humans.

Corrections for these issues will necessarily be intersectional and will take time. To reduce bias, it’s worth considering what types of data are used to train AI models.

Meta’s AI draws partly from a dataset called Books3, which includes over 191,000 books, mostly from the Western canon. It stands to reason that the resulting generative AI models may suffer from limited perspective. The books were also used without copyright permission—another common AI issue, raising questions about how to avoid disenfranchising creators.

We also require appropriate regulations and policy guidelines at different levels of use. When drafting them, it’s important to consider not only the need for protection, but the need for empowerment in interactions with these technologies.

For better or worse, LLMs are already out there, integrating themselves into almost every industry and evolving at a dizzying rate. Policies don’t evolve nearly as quickly.

For more information on how technological advances impact social mobility, check out our report on social mobility in the digital age.

Illustration by Mariah Quintanilla.

Aurore Geraud

Senior Researcher

Aurore Geraud joined L’Atelier in 2012 as a journalist and managing editor, before becoming an analyst with the company’s consulting and acceleration units. Today she uses her investigative skills to research emerging technologies and their unforeseen opportunities. Originally from France, Aurore is an alumna of Sorbonne Nouvelle & University of Paris 8 Vincennes-Saint-Denis.
