Do We Really Need More Powerful Language Models?
Large language models are becoming increasingly popular. However, their development also presents certain challenges. GPT models aren't the only approach.
Today, people rarely question the assumption that bigger models mean better models. Every new GPT release by OpenAI generates tremendous interest in traditional media and social media alike. But do we really need ever more powerful language models (foundation models) to help us with daily tasks?
For this article, I talked with Ivan Smetannikov, Data Science Team Lead at Serokell and Ph.D. in Computer Science. He explains why ChatGPT is often a massive waste of time and resources, and describes alternative approaches to building NLP models that can deliver the same results.
What Is Unique About ChatGPT?
ChatGPT is a foundation model designed to process and generate responses in natural human language. Built on top of GPT-3.5, it can produce coherent sequences of words and sentences, retain context from earlier in a conversation, and learn from it. It also uses self-supervised learning, which helps it correct its mistakes. It generally shows better results than most generative models across various tasks. At least, this is what OpenAI wants you to think.
They suggest that ChatGPT is universally good at writing essays, drafting a rental agreement, and summarizing a children’s story. And the universality of their solution is captivating.
Percy Liang, Stanford professor of AI and one of the leading experts on generative AI, claims that this adaptability makes transformer models very attractive:
Once they undergo their initial training stage, self-supervised models can be further fine-tuned on a wide range of smaller, more specific downstream tasks. Their potential influence is, therefore, vast. ― Techmonitor
However, this is where the benefits of the chatbot end. ChatGPT suffers from several serious shortcomings, from hallucinations to security vulnerabilities. And the reason lies in its architecture.
Why Is ChatGPT Worrisome?
Here are some reasons that make me question whether generative machine learning models such as ChatGPT are as great as everyone thinks.
Consumes Enormous Resources
To answer any request, ChatGPT needs a lot of power. One user has calculated that ChatGPT may have consumed as much electricity as 175,000 people in January 2023. When you ask a simple question such as “What is a foundation model?” the chatbot consumes more energy than you do in a month. For comparison, one Google search equals turning on a 60W light bulb for 17 seconds. That seems like a massive waste to me, especially since the same results can often be achieved by much simpler means (which we will discuss later in this article).
Moreover, all that electricity translates into an enormous carbon footprint. The further development and use of foundation models could significantly affect the future of our planet.
Questionable Performance
We have seen how many resources are spent to keep the chatbot running. But maybe it’s worth it because the model learns to understand human language.
Well, not really. ChatGPT is based on stochastic algorithms: all it does is predict the probability of a word being the next one in a sequence. It does not actually understand human language or conversation.
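To make the point concrete, here is a toy sketch of next-word prediction. This is not ChatGPT’s actual architecture (which is a far larger transformer network); it is a minimal bigram model in Python over a made-up corpus. But it illustrates the same underlying principle: the model only estimates which word is likely to follow, with no notion of meaning or truth.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Estimate P(next word | current word) from raw bigram counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "cat" is the most probable word after "the" in this corpus --
# not because the model knows anything about cats, but because
# that sequence happened to occur most often in the data.
print(next_word_probs("the"))
```

Scaling this idea up to billions of parameters produces fluent text, but the mechanism remains probabilistic pattern completion, which is exactly why confident-sounding falsehoods can come out.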
GPT and other large language models are aesthetic instruments rather than epistemological ones. Imagine a weird, unholy synthesizer whose buttons sample textual information, style, and semantics. Such a thing is compelling not because it offers answers in the form of text, but because it makes it possible to play text — all the text, almost — like an instrument. ― The Atlantic
This explains why it often generates false answers, also called “hallucinations”: it invents facts, quotes, and sources. Moreover, by default it isn’t even aware that an answer is false; unless you specifically ask, it doesn’t tell you when it’s unsure. To me, then, the resource-to-accuracy payoff is questionable. Is it worth spending so much energy and money on something that continually lets you down?
Power Concentrated in a Few Hands
Finally, a big problem with the chatbot model is that it concentrates a lot of power in a few hands.
Only a handful of companies worldwide, such as Google, Facebook, and OpenAI, can afford to deploy foundation models. Most researchers and policymakers who investigate how foundation models function, and how they should function, simply don’t have access to these resources.
Today startups (OpenAI, Anthropic, AI21 Labs, etc.) are much more well-resourced than academia and can, therefore still afford to train the largest foundation models (e.g., OpenAI’s GPT-3). [...] The fundamental centralizing nature of foundation models means that the barrier to entry for developing them will continue to rise so that even startups, despite their agility, will find it difficult to compete, a trend that is reflected in the development of search engines. ― Kira Radinsky for Harvard Business Review
Moreover, companies like OpenAI don’t disclose their technology. GPT-3 itself was never released; only API access was granted to select users. The datasets aren’t released either. The community has no control or say in how foundation models are created.
What Can Be Done Instead?
An important thing to understand is that whatever you can achieve with an LLM, you can often obtain with a simpler, narrow AI. It won’t be as universal, but most real-world business tasks don’t require general AI. Almost always, building a bot on simpler models and supporting it yourself will be easier and cheaper than buying tokens from the prominent providers.
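As an illustration of how far a narrow solution can go, here is a hypothetical sketch of a rule-based intent classifier for a support bot. The intent names and keyword lists are invented for this example; a real system would be tuned to the actual business domain (or use a small trained classifier). But even this toy runs in-house, costs nothing per request, and never sends user data to a third party.

```python
# Hypothetical keyword lists -- invented for illustration,
# not taken from any real product.
INTENT_KEYWORDS = {
    "refund": {"refund", "money", "charge", "charged"},
    "shipping": {"shipping", "delivery", "package", "track"},
    "account": {"password", "login", "account", "email"},
}

def classify(message: str) -> str:
    """Pick the intent whose keyword set overlaps the message most."""
    words = set(message.lower().split())
    scores = {intent: len(words & kws)
              for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Fall back to "unknown" when nothing matched at all.
    return best if scores[best] > 0 else "unknown"

print(classify("I was charged twice, I want a refund"))
```

For many routing and triage tasks, a transparent model like this (or a lightweight trained classifier on top of it) covers the bulk of traffic, and you can always escalate genuinely ambiguous messages to a human.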
Moreover, this way you won’t have to entrust sensitive data to third parties when you know nothing about how they protect it (recent scandals over data leaks from ChatGPT don’t inspire confidence either). ChatGPT and similar solutions are great for building an MVP to test a hypothesis; in the long run, however, it makes more sense to swap its components for in-house solutions that you have developed and tailored to your needs.