02.27.24

Should you be concerned about AI chatbots?

The recent flourishing of artificial intelligence (AI) tools and chatbots means different things to different people. Some use them as learning aids, asking questions about scientific and historical topics. Others prompt them to write lyrics or make up stories. Still others use them to write papers.

But a student plagiarizing a paper is relatively harmless compared to what someone with malicious intent could do with these systems. Or at least, that's what most of the American public believes.

If you have concerns about the meteoric rise of AI, you're not alone. Rest assured, though, that those apprehensions aren't going unnoticed: tech leaders and government officials are taking steps to keep AI in check.

Here, we'll discuss the common concerns Americans have reported, current issues with AI tools and what various technology companies and federal groups are doing about them.

Widespread concern persists

A study published in May 2023 by the Anti-Defamation League examined Americans' attitudes toward AI, including chatbots. Respondents' most prominent concerns about bad actors harnessing AI were:

  • 84% expressed concern about AI being used for criminal purposes.
  • 84% also thought AI could spread misleading or false information.
  • 77% believed bad actors might exploit AI to radicalize people into extremism.
  • 75% said those with ill intentions could use it to spread hate and harassment.
  • 74% were concerned about bias.
  • 70% thought AI would worsen extremism, hate and/or antisemitism.

How AI chatbots work remains a mystery

No one, including the companies that create AI tools such as ChatGPT, Bard and Bing Chat, fully understands how the technology works. For example, when Kevin Roose of the New York Times tried to have a philosophical conversation with Bing Chat, he largely succeeded. Microsoft Chief Technology Officer (CTO) Kevin Scott admitted, with some surprise, that he didn't understand why the AI responded the way it did.

These AI tools aren't programmed with explicit rules; their behavior emerges from training on vast amounts of data, which means even their creators can't fully predict how they'll respond to new prompts.

You can’t control something you don’t understand

Therein lies one of the major issues surrounding AI and AI chatbots: while companies like OpenAI, Google and Microsoft race to make their offerings more intelligent, more creative and harder to distinguish from humans, the concern that these companies could lose control of their own technology has, until recently, taken a back seat.

This hasn't gone unnoticed. In March 2023, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4.

Two months later, in May 2023, the U.S. Senate Judiciary Committee held a hearing with Sam Altman, the CEO of OpenAI, to discuss how, if at all, AI should be regulated. Altman argued that technology like OpenAI's should be subject to safety requirements, with incentives for other technology companies to abide by them. A related topic was AI's economic impact: the technology can boost productivity, but it also risks replacing jobs typically performed by humans.

Altman encouraged the U.S. government to stay flexible with regulation, since rules will likely need frequent updating to keep pace with new advances and concerns.

Government is ready to regulate AI

AI offers many benefits, but it carries an undeniable risk of being used for malicious purposes. There are reasons to be optimistic, however: widespread public and federal concern is prompting those in power to act before a serious incident occurs.

And if you’re one of the few who aren’t concerned about AI, ask yourself: How can you be sure an AI didn’t write this article?