ChatGPT, Bard & Where Caution Should Lie


AI chatbots like ChatGPT and Bard have garnered a ton of attention in recent months, and for good reason. Many have been impressed by their ability to produce thorough, thought-out answers to complex queries in real time. Others are concerned that they will be used to cheat in academic settings, or that they will unknowingly steal artwork and writing from human creators.

And yes, there are several reasons to be concerned, and not all of them involve plagiarism or intellectual property.

But understanding the technology’s drawbacks requires understanding how it works. And its drawbacks seem to lie in how, and where, these programs get the information they provide to their users.

What Are They?

In a nutshell, Bard and ChatGPT use large language models to process the seemingly infinite silos of information that exist online and answer questions or prompts of varying difficulty. AltexSoft explains that ChatGPT uses an OpenAI-created Generative Pre-trained Transformer (GPT), which allows it to engage in dialogue with users and produce answers in a conversational manner. Bard, Google’s creation and a competitor to ChatGPT, employs a deep neural network (the AI’s brain) to process language, drawing on a large language model called LaMDA (Language Model for Dialogue Applications) to detect patterns in language and understand context.
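
To make that concrete, here’s a minimal sketch of the core idea, using the small, openly available GPT-2 model from the Hugging Face transformers library as a stand-in (the models behind ChatGPT and Bard are far larger, and this is not how they are actually served): the model simply predicts a likely next token, over and over, until it has built a conversational reply.

```python
# A rough sketch of next-token generation, using the open GPT-2 model
# as a stand-in for the much larger models behind ChatGPT and Bard.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Q: What is a large language model?\nA:"

# The model extends the prompt one predicted token at a time,
# up to 40 new tokens, sampling from the patterns it learned in training.
result = generator(prompt, max_new_tokens=40, do_sample=True)

print(result[0]["generated_text"])
```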

“GPT-3 has hundreds of billions of parameters and was trained by reading huge swaths of text on the internet, from Wikipedia articles to Reddit posts. So, when someone shows the model examples of a new task, it has likely already seen something very similar because its training dataset included text from billions of websites. It repeats patterns it has seen during training, rather than learning to perform new tasks.”

Massachusetts Institute of Technology

Large language models use machine learning, a process that doesn’t require direct human intervention, to do just that: learn. They are an application of natural language processing (NLP), a field of computer science that AI programs use to translate human language into something computers can work with. OpenAI’s programs, predictive text, and online search all use this kind of technology.
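
As a small illustration of that translation step, here’s a hedged sketch of how a tokenizer (GPT-2’s, used here purely as an example) turns a sentence into the numeric IDs a neural network actually computes with:

```python
# A minimal sketch of the "translation" step NLP performs: a tokenizer
# maps human-readable text to the numeric token IDs a model operates on.
# Requires: pip install transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

ids = tokenizer.encode("Large language models learn patterns in text.")
print(ids)                                    # a list of integer token IDs
print(tokenizer.convert_ids_to_tokens(ids))   # the word pieces behind each ID
```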

Possible Issues

It would be remiss to gloss over how these programs are making the Internet and day-to-day life more accessible to users with disabilities. However, concerns have been raised about how these technologies could be used to perpetuate discrimination in the workplace.

The main concern is that the technology could be used to screen out job applicants from diverse backgrounds (see: Levi’s consideration of AI-generated models to “expand diversity” in its advertisements). Various companies have been criticized for their lack of diversity, and these programs are dependent on the data that already exists, data that can be rife with misinformation or disinformation. And as we already know, marginalized groups are often underrepresented or misrepresented in online content and in online spaces. If used in this manner, these programs would perpetuate the status quo instead of challenging it.

“The concern is not about what ChatGPT can do. It’s about what its default settings are. It’s about how ChatGPT is configured to treat some forms of writing as normal, typical and expected. And it’s about how ChatGPT requires a special request to generate non-normative forms of writing.”

Collin Bjork, Communication Lecturer at Massey University

In fact, research released in March found that job listings generated by ChatGPT were almost twice as biased as human-written ads. The study determined that the program was “particularly biased” against neurodivergent people, those with physical disabilities, and people from marginalized ethnic groups.

Further, Poynter, a nonprofit research and journalism organization, cautioned in February that ChatGPT was able to create a fabricated newspaper in minutes, complete with AI-generated photos of its staff. The nonexistent publication went on to produce false stories and writer bios.

So not only can this program surface misinformation, it can create its own. Great.

There’s also the concern that AI could take over jobs from just about everyone else, as discussed in Lillie’s piece on the Writers Guild strike, with potential SAG-AFTRA strikes on the horizon as well.

The Takeaway

To expect a program, albeit a useful one, to fix all of society’s ills is misguided. It may be a starting point, but it’s certainly not one that should be operated unchecked or without caution.

If anything, the emergence of programs like ChatGPT and Bard acts as a mirror, showing us how much progress still needs to be made when it comes to online content and how to make it more accessible for everyone.
