AI chatbot letdown: Hype hits rocky reality
Published Date: 3/27/2024
Source: axios.com

Grumbles about generative AI's shortcomings are coalescing into a "trough of disillusionment" after a year and a half of hype about ChatGPT and other bots.

Why it matters: AI is still changing the world — but improving and integrating the technology is raising harder and more complex questions than first envisioned, and no chatbot has the magic answers.
Driving the news: The hurdles are everything from embarrassing errors, such as extra fingers or Black founding fathers in generated images, to significant concerns about intellectual property infringement, cost, environmental impact and other issues.

  • Some leading startups of generative AI's first wave are falling by the wayside. Last week, Inflection AI's leadership and top researchers decamped for Microsoft, while on Friday, Stability AI's CEO and co-founder resigned as the firm faced a talent and financial crunch.

A year ago, every board was pressuring its CEO to find ways to adopt generative AI as quickly as possible.

  • Now, many are finding that even promising early experiments have proven tough to scale and that what appeared to be "good enough" results often aren't.

What they're saying: Gary Marcus, a scientist who penned a blog post last year titled "What if generative AI turned out to be a dud?" tells Axios that, outside of a few areas such as coding, companies have found generative AI isn't the panacea they once imagined.

  • "Almost everybody seemed to come back with a report like, 'This is super cool, but I can't actually get it to work reliably enough to roll out to our customers,'" Marcus said.

AI ethics expert Rumman Chowdhury tells Axios that the challenges are numerous and significant.

  • "No one wants to build a product on a model that makes things up," says Chowdhury, CEO and co-founder of AI consulting firm Humane Intelligence.
  • "The core problem is that GenAI models are not information retrieval systems," she says. "They are synthesizing systems, with no ability to discern from the data it's trained on unless significant guardrails are put in place."
  • And, even when such issues are addressed, Chowdhury says the technology remains a "party trick" unless and until work is done to mitigate bias and discrimination.

Yes, but: This isn't the end of the road for generative AI, by any means. Every major new technology — even, or especially, a world-changing one — goes through this phase.

  • The "trough of disillusionment" was first named and defined by consulting firm Gartner in 1995 as part of its theory of hype cycles in tech.
  • Usable speech recognition was famously always five years away from reality until it finally arrived, and today is remarkably good even under less-than-ideal conditions.
  • VR has famously entered several troughs, and its entry into the mainstream remains an open question.

The other side: Much of the industry remains very optimistic, envisioning years of sustained investment in ever more gigantic models requiring ever more enormous data centers powered by ever more advanced chips.

  • The continued enthusiasm despite the setbacks was palpable at Nvidia's GTC conference in Silicon Valley last week, according to Chetan Sharma, a longtime telecom industry consultant who was at the event.
  • Researchers at the biggest tech companies and leaders from other industries touted promising work on how generative AI can aid lofty goals, such as curing cancer.
  • Some tasks, like customer service and employee training, are seeing meaningful improvement from today's generative AI, while there's an emerging consensus that benefits in other areas will require better models and more refined data sets.
  • "I think we are in that kind of mushy phase," Sharma told Axios.

Between the lines: Don't forget how new generative AI is compared with other branches of artificial intelligence, which took decades to produce significant benefits.

  • Computing costs have been coming down, with models often getting significant price cuts coupled with performance improvements just months after initial release.

Zoom in: OpenAI CEO Sam Altman has offered few details on the next generation of OpenAI's underlying GPT engine, but has hinted it will offer the same order-of-magnitude improvement in general reasoning that the jump from GPT-3 to GPT-4 brought.

  • "The thing that matters most is just that it gets smarter," Altman told me in an onstage interview at the World Economic Forum in January. "GPT-2 couldn't do very much. GPT-3 could do more. GPT-4 could do a lot more. GPT-5 will be able to do a lot lot more."
  • "The thing that matters most is not that it can, you know, have this new modality, or it can solve this new problem. It is that generalized intelligence keeps increasing."

Yes, but: Marcus remains skeptical.

  • "It's easy to say, 'Oh, we're just a few months away,'" Marcus said. "I don't think that we are in this particular case. Not because I don't think AI is or AGI is impossible, but just because [this] particular technology has a lot of problems."

My thought bubble: I remain pretty optimistic about the long-term future of generative AI and think it will help with a lot of tasks, both creative and routine.

  • But the underlying models need improvement, and they will also have to be refined on customized data sets.
  • The companies creating the models will need to figure out how to compensate the people whose work is in those data sets.
  • Applying genAI to many problems will require developing better interfaces for human-AI collaboration.
  • Adding all this will take additional time and money.