
Using LLMs for more equal opportunities (great pros, sobering cons)

Insight

26 Oct 2023


Aurore Geraud

“Technology is neither good nor bad; nor is it neutral.” — Melvin Kranzberg

As commentators urge legislators to regulate the use of Large Language Models (LLMs) and reflect on their ethical challenges, everyday tech users are deluged with promises about the technology’s benefits. McKinsey predicts LLMs will boost labour productivity by 0.1 to 0.6 percent annually through 2040.

It’s natural to speculate on how LLMs might help mitigate the challenges of day-to-day life and society—notably when it comes to designing new opportunities. This is something we’ve seen time and time again across various industrial and technological revolutions: the respective rises of the gig economy, influencer economy, and virtual economies created observable income opportunities—at least at the outset—for those able to engage with them.

What could LLMs contribute to a more perfect society?

Accessible communication—removing barriers to inclusion for people with difficulty perceiving, comprehending, or interacting with traditional forms of communication—could be a good use case for them, and for generative AI more broadly.

A 2018 report found that 52 percent of people with dyslexia had experienced discrimination during job interviews or selection processes. In December 2022, an independent contractor used OpenAI’s GPT-3 to overcome dyslexia and write emails to clients. The contractor credited the technology with being more than a helpful spell-check tool: it helped them sign their first “big” contract, with the client unaware of the contractor’s extra technological advantage.

Work like this is more than asking a tool to construct a form letter. It’s about transposing information coherently and adapting it to one’s interlocutor (which makes prompt engineering an increasingly important skill). This could make LLMs immensely useful as assistive technology. Some neurodiverse individuals, a group whose unemployment rate is sometimes estimated at 75 percent, are using ChatGPT to better articulate ideas and thoughts to others in educational and professional settings.
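To make that concrete, here is a minimal sketch of what such audience-adaptation prompting can look like in practice. It assumes the openai Python package (v1+) and an API key in the environment; the model name, system prompt, and adapt_draft helper are illustrative choices, not a prescription.

```python
# A minimal sketch of audience-adaptation prompting (illustrative, not prescriptive).
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically


def adapt_draft(draft: str, audience: str) -> str:
    """Rewrite `draft` so its tone and structure suit `audience`,
    without changing what it actually says."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a writing assistant. Rewrite the user's draft so it is "
                    f"clear, well structured, and appropriate for {audience}. "
                    "Preserve the original meaning and every factual detail."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    rough = "need the invoice paid, its 3 weeks late, pls sort asap"
    print(adapt_draft(rough, "a corporate client you want to keep"))
```

The interesting work happens in the system prompt: naming the audience and insisting that meaning be preserved is exactly the “adapting to one’s interlocutor” described above, done once and then reused.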

We could also imagine LLMs bridging the gap between different cultures and languages, facilitating communication that is comfortable and comprehensible to both parties. This is especially relevant in education, where it could result in greater access to higher learning ... if we forget, for a moment, the digital divide and other barriers to entry, like hardware availability or access to undisturbed study space and time.

The non-profit Climate Cardinals used ChatGPT to translate climate documentation for Taiwan’s Ministry of Education, demonstrating the tool’s potential for building climate literacy amongst children learning English.

These positive societal effects are limited by the accuracy and contextual relevance of a given translation. In many cases, otherwise “good” translations can be detrimental to “minority” populations. We’ve seen repeated instances of AI-driven “anti-bias” tools filtering out candidates of colour in hiring situations. Refugees seeking asylum have been misquoted by AI tools, compromising their immigration chances. Perhaps for reasons like these, OpenAI’s current usage policy discourages using its tools in “high-risk government decision-making,” including migration and asylum work.

All this seems ironic, given that the current fashion positions LLMs as great “equalisers,” notably when it comes to making unbiased hiring decisions. The general idea is that the tool can ignore gender, race, and sexual orientation data in résumés, instead comparing candidates on the basis of experience, qualifications, and other more “objective” factors.

Unfortunately, the latter are often dramatically shaped by the former, and examples of bias in the use of these tools remain uncomfortably frequent: job adverts written by ChatGPT appear 40 percent more biased than those written by humans.

Corrections for these issues will necessarily be intersectional, and they will take time. To reduce bias, it’s worth considering what types of data are used to train AI models.

Meta’s AI draws partly from a dataset called Books3, which includes over 191,000 books, mostly from the Western canon. It stands to reason that the resulting generative AI models may suffer from limited perspective. The books were also used without copyright permission—another common AI issue, raising questions about how to avoid disenfranchising creators.

We also need appropriate regulations and policy guidelines at different levels of use. When drafting them, it’s important to consider not only the need for protection, but also the need for empowerment in interactions with these technologies.

For better or worse, LLMs are already out there, integrating themselves into almost every industry and evolving at a dizzying rate. Policies don’t evolve nearly as quickly.

Data on publications and patents courtesy of The Lens. Investment data: Pitchbook. For more information on how technological advances impact social mobility, check out our report on social mobility in the digital age.


Illustration by Mariah Quintanilla.
