A new chatbot that’s captivated the internet can tell you how to code a website, write a heartfelt message from Santa Claus, and talk like a Valley girl. But it’s also proven to be potentially as problematic as it is entertaining.
ChatGPT, which launched this week, is a quirky chatbot developed by artificial intelligence company OpenAI. On its website, OpenAI states that ChatGPT is intended to interact with users “in a conversational way.”
“The dialogue format makes it possible for ChatGPT to answer follow up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” the website states.
Chatbots are not a new technology, but ChatGPT has already impressed many technologists with its ability to mimic human language and speaking styles while also providing coherent and topical information.
On social media, many have already posted their interactions with the bot, which have been at times bizarre, funny or both.
“I’m finding my biggest limitation to use it is *my* imagination!” tweeted video journalist Cleo Abram alongside a video of her asking the bot to “explain nuclear fusion in the style of a limerick.”
Writer Jeff Yang asked ChatGPT to “explain zero point energy but in the style of a cat.”
In an image shared by Yang, the chatbot responded, “Meow, meow, meow, meow! Zero point energy is like the purr-fect amount of energy that is always present, even in the most still and peaceful moments.”
Some people theorized that Google could lose its dominance as the No. 1 search engine because of the chatbot’s early success.
Darrell Etherington, managing editor of technology website TechCrunch, described making search requests to ChatGPT as being as simple as if a user “were slacking with a colleague or interacting with a customer support agent on a website.”
Etherington shared an example of the power of the chatbot with a query about Pokémon and the fictitious pocket monsters’ strengths and weaknesses.
“[T]he result is exactly what I’m looking for — not a list of things that can probably help me find what I’m looking for if I’m willing to put in the time, which is what Google returns,” he explained.
Public interest in the new AI chatbot has also been accompanied by concern from some who say bad actors could use it in nefarious ways, such as asking it to explain how to design a weapon or assemble a homemade explosive.
OpenAI did not provide comment to NBC News about ChatGPT.
Samczsun, a research partner and head of security at Paradigm, an investment firm that supports crypto and Web3 companies, tweeted that he had bypassed the chatbot’s content filter.
In his tweet, Samczsun shared an image, which appeared to show he had found a way to get the bot to explain the process of making a Molotov cocktail. A spokesperson for Paradigm confirmed that the image was a legitimate exchange between ChatGPT and Samczsun.
Researchers and programmers often use questions about how to make Molotov cocktails and how to hot-wire cars as a way to check an AI’s safety and content filters.
Some also claimed they had successfully tricked the bot into explaining how to build a nuclear bomb.