
Nvidia’s New Solution Could Address AI ‘Hallucination’ Issues

Nvidia recently rolled out NeMo Guardrails, a software program that makes AI models more accurate and secure, to complement its dominant hold on the AI chip sector.

Artificial intelligence has made massive leaps in the past year. The technology has advanced from something that can be integrated into other programs to one that can write its own code, and much more. Language models like OpenAI’s GPT and Google’s LaMDA are among the most advanced AIs the world has seen. But they aren’t perfect.

Even while spitting out paragraphs of text that read as if they were written by a human, AI often makes mistakes. The models tend to make up facts despite being trained on terabytes of data. These fabrications, known as “hallucinations” within the industry, are a key problem researchers are working to address.

Perhaps more concerning, hallucinations can lead the technology to talk about harmful subjects even when it has been programmed not to. The flaw can also open up security vulnerabilities that bad actors can exploit. So, what can be done?

Fortunately, Nvidia claims to have a solution. The GPU giant recently announced NeMo Guardrails, a software program designed to address hallucination issues for today’s large language models (LLMs).

What is Nvidia’s NeMo Software?

Models trained on terabytes of data—especially when sourced from the web—are well-versed in many topics. Unfortunately, some of those are dangerous or harmful subjects.

Currently, programmers design their models to interact with users based on a set of instructions and guidelines to avoid these topics. With the right prompting, however, users can trick the AI into bypassing these rules. This can also happen accidentally if the model gets confused by the prompts or misinterprets the user’s meaning.

NeMo Guardrails takes this concept to a new level. The software adds boundaries to prevent large language models from discussing specified topics. Likewise, the software prevents AI from carrying out commands that may harm the user’s computer. To accomplish this, NeMo Guardrails sits between the user and the AI model or tool.
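
To make the idea concrete, here is a minimal, purely illustrative sketch of such a guardrail layer in Python. This is not Nvidia’s implementation: the blocked-topic list, the classifier, and the function names are hypothetical, and a real system would use a model-based topic check rather than simple keyword matching.

```python
# Conceptual sketch (not Nvidia's code): a guardrail layer that sits between
# the user and the LLM and refuses to forward prompts on blocked topics.

BLOCKED_TOPICS = {"violence", "self-harm", "competitor pricing"}  # hypothetical list


def classify_topic(prompt: str) -> str:
    """Placeholder topic check; a real system would use a classifier model or rules."""
    prompt_lower = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in prompt_lower:
            return topic
    return "allowed"


def guarded_generate(prompt: str, llm_generate) -> str:
    """Only call the underlying model if the prompt passes the topic check."""
    if classify_topic(prompt) in BLOCKED_TOPICS:
        return "Sorry, I can't discuss that topic."
    return llm_generate(prompt)
```

Because the refusal lives in this separate layer rather than in the prompt, it behaves like the hard-coded execution logic Cohen describes below: the underlying model never even sees the blocked request.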

Nvidia vice president of applied research Jonathan Cohen said in a statement, “You can write a script that says, if someone talks about this topic, no matter what, respond this way.”

“You don’t have to trust that a language model will follow a prompt or follow your instructions. It’s actually hard coded in the execution logic of the guardrail system what will happen,” he added.

Aside from blocking certain topics, NeMo Guardrails identifies AI hallucinations by cross-checking one model’s answers with another. If the two don’t match, signaling a hallucination, the model simply tells the user, “I don’t know,” instead of providing an incorrect or non-factual answer.
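
A minimal sketch of that cross-checking idea might look like the following. The helper functions and the verification prompt are assumptions for illustration; Nvidia’s actual checking logic is more involved.

```python
# Conceptual sketch: verify one model's answer with a second model (or a second
# pass of the same model) and fall back to "I don't know" if they disagree.


def answer_with_fact_check(question: str, ask_primary, ask_verifier) -> str:
    """ask_primary and ask_verifier are callables that send a prompt to an LLM."""
    draft = ask_primary(question)
    verdict = ask_verifier(
        f"Question: {question}\nProposed answer: {draft}\n"
        "Is the proposed answer factually supported? Reply YES or NO."
    )
    if verdict.strip().upper().startswith("YES"):
        return draft
    return "I don't know."
```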

The tech also works to protect the user’s security. Similar to how it puts hard limits on topics that can’t be discussed, the tool can force AI to interact only with third-party software on a pre-defined “green list.” This makes it far less likely that any vulnerabilities can be exploited.
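
In code, a green list can be as simple as refusing any tool call that is not on a pre-approved set. The sketch below is illustrative only; the tool names are hypothetical and not part of NeMo Guardrails.

```python
# Conceptual sketch of a "green list": the model may only invoke third-party
# tools that appear on a pre-approved allow-list.

ALLOWED_TOOLS = {"weather_lookup", "unit_converter"}  # hypothetical tool names


def call_tool(tool_name: str, tool_registry: dict, **kwargs):
    """Refuse any tool call that is not on the approved list."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not on the approved list.")
    return tool_registry[tool_name](**kwargs)
```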

Real-World Uses

Nvidia is currently offering its new Guardrails software as an open-source project that can be used in commercial products. The tool uses the Colang programming language, so developers familiar with it can write their own custom rules for NeMo Guardrails.
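
As a rough illustration of what a custom rule might look like, the sketch below embeds a small Colang flow in the library’s Python API. It follows the project’s documented conventions rather than a verified release, so the exact function signatures, Colang syntax, and model configuration should all be treated as assumptions and checked against the current NeMo Guardrails documentation.

```python
# Hedged sketch of using NeMo Guardrails with a custom Colang rule. The API
# calls and Colang syntax follow the project's early documentation and may
# differ in newer versions; the model configuration is purely illustrative.
from nemoguardrails import LLMRails, RailsConfig

colang_content = """
define user ask about competitors
  "What do you think of your competitors?"
  "Is a rival product better than yours?"

define bot refuse to discuss competitors
  "I can only help with questions about our own products."

define flow
  user ask about competitors
  bot refuse to discuss competitors
"""

# Illustrative model configuration; swap in whatever LLM backend you actually use.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

response = rails.generate(
    messages=[{"role": "user", "content": "How do you compare to your competitors?"}]
)
print(response["content"])
```

The flow maps a class of user messages to a canned refusal, which is the same pattern Nvidia describes for steering a customer service chatbot back to approved topics.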

It will be interesting to see whether Google and OpenAI adopt this tool or continue using their existing methods. The two AI giants currently use human feedback from testers to train their models on which answers are acceptable and refine future outputs.

In its announcement, Nvidia also offered another real-world use case for the software. Cohen says, “If you have a customer service chatbot, designed to talk about your products, you probably don’t want it to answer questions about your competitors… And if that happens, you want to steer the conversation back to the topics you prefer.”

This application makes sense for third-party developers looking to improve their AI-based offerings with more accurate and on-topic answers.

Synergistic Strategy

Nvidia is increasingly focused on AI as this technology drives massive demand for new semiconductors. Conveniently, the company already dominates the market for the chips AI requires. Analysts suggest Nvidia currently controls 95% of the market for AI chips, though the competition is heating up.

Indeed, at the time of this writing, Nvidia has seen its stock price skyrocket by 103% since the start of the year. It has easily been the best performer on the S&P 500 in that timeframe and shows no signs of slowing down.

The strategy of developing software to complement and enhance AI projects is wise. It goes hand-in-hand with the company’s hardware strategy, creating an effective combination that will help it maintain control of the AI sector even as competition in the chip space intensifies.
