Concerns for businesses using LLMs
Integrating LLMs into your business may not be a quick fix.
Large language models (LLMs) seem to be expensive, energy-hogging toys at this point. Some companies—most notably Microsoft—think integrating LLMs like ChatGPT into everyday business is a great idea. But I’m not so sure.
Below are some concerns I have for businesses going all in on LLMs.
It’s well known that LLMs make stuff up (AKA they hallucinate).
What’s the root of these hallucinations? Will an LLM hallucinate with your business’s proprietary data? Does the amount of data processed by the LLM affect its likelihood of hallucinating? If so, what is that threshold? How much time do you expect employees to spend validating the LLM’s claims? Is that cheaper than having a human do the work in the first place? Who, outside of AI developers, wants to babysit an LLM all day?
LLMs are terrible at math
Most business reports are math heavy. Many people use Microsoft Excel almost exclusively for calculations. But LLMs struggle with basic math. (I shared a simple example on LinkedIn recently.)(1)
How can anyone trust an LLM to create crucial reports that may heavily rely on math? How can we know that the LLM understands these numbers?
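One way to frame the trust problem: any figure an LLM puts in a report has to be recomputed from the source data before anyone relies on it. Here’s a minimal sketch of that kind of check, with made-up numbers and a hypothetical LLM-reported total:

```python
# Hypothetical example: recompute a figure from source data instead of
# trusting the number an LLM wrote into a report.
# All values below are invented for illustration.

quarterly_sales = [18_250.75, 21_940.10, 19_875.32, 24_310.88]

llm_reported_total = 84_377.50  # what the LLM's report claims (hypothetical)

actual_total = round(sum(quarterly_sales), 2)  # 84377.05

if actual_total != llm_reported_total:
    print(f"Mismatch: LLM says {llm_reported_total}, data says {actual_total}")
```

The catch, of course, is that if someone has to write and run this check for every number, the LLM hasn’t saved much work.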
How can you train an LLM to your company’s style?
LLMs are kind of like supercharged search engines. You put in a prompt (kind of like a search term) and you get a well-written answer. But what you get isn’t perfect, even if it’s 100% accurate.
LLMs tend to be verbose and give way more information than needed (which also makes their claims harder to validate).
Every industry has its jargon, and individual companies may even have unique jargon.
How do these non-AI companies train LLMs for their needs and wants? How expensive is this training? How much time will it take?
Jake LaCaze doesn’t hate the idea of using AI where it works and is appropriate. But a career in oil and gas with a brief stint in marketing has made him wary of any hype.