Lawmakers, AI Leaders Meet to Discuss Need for Government Regulation

OpenAI CEO Sam Altman and other AI industry leaders met with lawmakers in a Senate committee hearing to discuss what regulations for the technology might look like.

It’s no secret the popularity of artificial intelligence (AI) has exploded in the past year. That’s one reason why OpenAI CEO Sam Altman and other industry leaders recently went in front of a Senate tech and privacy committee. As the technology’s influence grows, leading U.S. tech companies argue, so does the need for regulation.  

During the hearing, Altman said of AI, “We believe that we can and must work together to manage the potential downsides so that we can all enjoy the tremendous upsides.”

Indeed, AI has many potential downsides, and they grow more pressing each day. From training data that reinforces negative stereotypes and perceptions to “hallucinations” that produce misleading outputs, AI is far from perfect.

The need for regulation is clear. But what should AI regulation look like? This is the question both industry leaders and lawmakers are now asking themselves, and the clock is ticking.

How to Regulate AI

Those familiar with recent Senate tech hearings would have noticed a key difference in this one. While Altman, Christina Montgomery of IBM, and Gary Marcus, a professor emeritus at New York University, sat before the Senate committee, the mood in the room was amicable. Clearly, both sides are on the same page as far as wanting regulation.

The hours-long discussion was an important step in the right direction. However, a solution remains far from reality. The fact that many lawmakers on the committee hadn’t heard of ChatGPT, arguably the most popular AI application today, speaks volumes about their ability to successfully regulate the technology.

Even so, some senators began brainstorming regulation ideas with OpenAI’s Altman. Senator Richard Blumenthal of Connecticut said, “Should we consider independent testing labs to provide scorecard and nutrition labels or the equivalent? Packaging that indicates to people whether or not the content can be trusted…?”

Indeed, the data an AI model is trained on has a significant impact on its eventual output. As the saying goes, what you put in is what you get out. A resume-screening tool trained on years of previous employment data might show bias toward male employees. After all, this is what the data most often reflects. Models trained on data from social media sites could end up creating output filled with the same racist and sexist hate speech so often seen online.  

Ultimately, AI is a technology too vast for most people to understand. But ensuring the technology is transparent goes a long way. When users are aware of potential biases and companies using AI tools know what’s happening “beneath the hood,” outcomes are more likely to be positive.  

Wolf in Sheep’s Clothing

Having the very companies that develop this technology champion its regulation may seem odd. However, this is often the case in the tech world. Consider the recent push for better privacy laws, for instance. Tech giants like Apple, Microsoft, Google, and Meta are at the forefront of these discussions.

Of course, their involvement isn’t as wholesome as it appears. Rather, once the government puts specific laws into place, companies no longer need to self-regulate to the same degree. They can simply comply with the laws, even if only to the minimum legal extent. Then, if something goes wrong, they can shift the blame onto the government that wrote poor rules rather than shouldering it themselves.

While not all companies act with this mindset, it’s worth remembering that having regulations in place is extremely advantageous for large tech companies. Those at the forefront of AI are no different.  

The Public Will Adapt

Despite the fears currently surrounding AI, Altman compared the technology to the early days of Photoshop. He says, “When Photoshop came onto the scene a long time ago, for a while people were really quite fooled by photoshopped images and then pretty quickly developed an understanding that images were photoshopped.”  

Of various AI applications, he adds, “This will be like that, but on steroids.”  

Already, users have started adapting to AI tools as part of daily life. As people get more used to working with AI and engaging with its outputs, telling real from fake should become easier.

Hopefully, this comes to pass. Even so, regulation of the AI industry is still needed, and achieving it will require close collaboration between lawmakers and industry experts to navigate the immense scope of this technology.
