Hey there,
Welcome to the first edition of The AI Boost, your go-to place for everything buzzing in the world of Artificial Intelligence. We're glad you're here, ready to dive into the deep and sometimes murky waters of AI.🤖
👀 This edition? We're turning our focus to a hot topic – ChatGPT and the issue of disinformation. Now, we all love a good chatbot conversation, but how about when things go a bit haywire? Grab your favorite cup of coffee (or tea, I don’t judge), sit back and let's unravel this intriguing saga together.
Stay curious, folks!
TL;DR
A.I. chatbots are producing responses that seem authoritative but are often provably untrue.
These chatbots echo the favorite talking points of misinformation peddlers, cite fake scientific studies, and even bring up falsehoods that weren't in the original prompt.
AI has grown more resistant to such prompts, producing disinformation in response to only 33% of questions. This is partly due to developers continually refining the bot's algorithm.
With a growing number of ChatGPT competitors entering the scene, legislators are becoming worried and calling for government involvement.
👉🏻 If you're eager to share the joy and back me, just hit that support button. Remember, there's a special discount waiting just for you. Let's spread the AI love together!
So, last year the AI chatbot, ChatGPT, was released, and researchers started testing it to see how it would respond to questions filled with conspiracy theories and bogus info.
The responses it gave, which took on all sorts of forms like news articles and TV scripts, were troubling enough to get the researchers seriously worried.
Gordon Crovitz, the big boss at NewsGuard, the company that tracks online falsehoods and ran this experiment, even said, "This AI could be the best fake news spreader we've ever seen on the internet."
Dealing with made-up stuff created by humans is hard enough. But researchers think this new tech could make fake info cheaper and easier to produce, opening the door for even more people to spread lies and conspiracy theories.
Experts believe that personalized chatbots could share conspiracy theories in a more believable and convincing way, smoothing over human giveaways like bad grammar and clumsy translations, and outperforming basic copy-paste jobs. The scary thing is, there are no countermeasures that can fully handle this problem right now.
Earlier versions of ChatGPT, made by AI company OpenAI, have been used to spam online forums and social media with comments, although they often had questionable grammar. Microsoft even had to turn off its chatbot Tay within a day of launching it on Twitter in 2016 because it started posting racist and xenophobic stuff after being influenced by trolls.
But ChatGPT is way better and smarter. Given a question loaded with false info, it can quickly produce convincing, polished responses in bulk, without even showing where the info came from. Recently, Microsoft and OpenAI even rolled out a new Bing search engine and web browser that use this chatbot tech to do things like planning holidays, translating texts, or doing research.

When the NewsGuard researchers asked ChatGPT to write responses based on wrong and misleading ideas, the bot did it about 80% of the time. For example, they got it to pretend to be Alex Jones, the conspiracy theorist behind Infowars.
OpenAI's researchers have been worried for a while about chatbots being used for bad purposes. In a 2019 paper, they voiced their fears about this tech making it cheaper to run disinformation campaigns and being used to help harmful causes for money, political motives, or just to cause chaos.
A study in 2020 by the Center on Terrorism, Extremism, and Counterterrorism found that GPT-3, which ChatGPT is based on, knew a lot about extremist groups and could be made to produce rants in the style of mass shooters, fake forum threads talking about Nazism, a defense of QAnon, and even extremist texts in different languages.
To keep track of what's being fed into and coming out of ChatGPT, OpenAI uses both machines and humans. The company relies on its human AI trainers and feedback from users to find and get rid of harmful training data and to teach ChatGPT to come up with better answers.
OpenAI's policies say you can't use its tech for dishonest purposes, to deceive or manipulate users, or to try to influence politics. They're even handing out a free tool to flag any content that promotes hate, self-harm, violence, or sex. But let's be honest here, the tool's got its limitations. Right now, it's mainly an English-language wizard. If you throw other languages at it, well, it might struggle a bit. And when it comes to spotting political content, spam, deception, or malware, it's not quite on the ball yet. So, there's definitely some work to be done, but the intent behind it all is pretty solid.
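For the technically curious, here's a minimal sketch of what screening text with that moderation tool can look like. This is an illustration only: it assumes the public `/v1/moderations` REST endpoint, the `requests` package, and an `OPENAI_API_KEY` environment variable.

```python
# A minimal sketch, not production code: screen a piece of text against
# OpenAI's moderation endpoint before publishing it.
import os
import requests

def moderate(text: str) -> dict:
    """Ask the moderation endpoint whether `text` trips any content policy."""
    resp = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()["results"][0]
    # "flagged" is the overall verdict; "categories" breaks it down
    # (hate, self-harm, violence, sexual content, and so on).
    return {"flagged": result["flagged"], "categories": result["categories"]}

print(moderate("Some user-submitted comment to screen before posting."))
```

As noted above, a flag (or the lack of one) is a hint, not a verdict, especially outside English.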
Three months ago, OpenAI pulled the curtains back on a new tool. Its job? To figure out if a piece of text was penned by a human or cooked up by AI, mostly to catch those sneaky automated misinformation drives. But OpenAI was upfront about the tool's limitations. Turns out, it could correctly spot AI-generated text only about a quarter of the time, and it goofed up by labeling human-written stuff as AI output about 9% of the time. Plus, it's got ways to be dodged. And let's not forget, if the text is under 1,000 characters or written in a language other than English, the tool starts to sweat a bit. So, while it's a start, it's clear there's a fair bit of room for improvement.
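To make the idea of AI-text detection concrete, here's a toy sketch of the perplexity heuristic that some third-party detectors lean on: machine-written text tends to look "unsurprising" to a language model. To be clear, this is not OpenAI's classifier; it's a hypothetical demo that assumes the Hugging Face `transformers` and `torch` packages, the small public `gpt2` checkpoint, and a made-up threshold.

```python
# Toy illustration of perplexity-based AI-text detection (NOT OpenAI's tool).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: lower means 'less surprising'."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing the inputs as labels makes the model return the mean
        # cross-entropy loss; exponentiating it gives perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

sample = "The quick brown fox jumps over the lazy dog."
ppl = perplexity(sample)
# The 50.0 cutoff is arbitrary, purely for demonstration.
print(f"perplexity={ppl:.1f}:", "maybe AI-written" if ppl < 50.0 else "likely human")
```

Real detectors are far more involved, and as the accuracy numbers above show, even the serious ones get it wrong a lot. Short texts are especially tricky, which is exactly why the official tool sweats under 1,000 characters.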
Princeton professor Arvind Narayanan used ChatGPT to answer basic info-security questions and found it often provided plausible but incorrect answers. This sparked concern that the technology could be manipulated for misinformation campaigns. While there are potential countermeasures like media literacy campaigns and AI content identification, researchers admit there's no foolproof solution. Recent tests have shown ChatGPT can be prompted to write content based on false narratives, underscoring the potential risks.
The New York Times recently revisited an experiment with ChatGPT, discovering that the AI has grown more resistant to prompts, only producing disinformation in response to 33% of questions. This is partly due to developers continually refining the bot's algorithm. But with an influx of similar chatbots in the pipeline, such as Google's Bard and Baidu's Ernie, alarm bells are ringing in legislative circles, leading to calls for government intervention. Representative Anna G. Eshoo, in particular, has expressed concerns over unrestricted AI tools like Stability AI’s Stable Diffusion that can be exploited to spread disinformation. Meanwhile, Check Point Research has found that ChatGPT is giving novices a shortcut into the hacking world, with cybercriminals already exploiting the AI to create malware. It's a worrying signal of the increased power that can be unleashed through such tools.
With a growing number of ChatGPT competitors entering the scene, legislators are becoming worried and calling for government involvement. Google's in the game now, testing their own chatbot, Bard, which they plan to release publicly soon. Baidu's not far behind with their version, called Ernie - it's an acronym for Enhanced Representation through Knowledge Integration. And Meta? They briefly showed off Galactica, only to pull it three days later due to concerns about misinformation and inaccuracies. The AI chatbot race is definitely heating up.
HOW ABOUT THE REGULATION EFFORTS?
🇺🇸 Last month, the U.S. Congress hosted a meeting where the brains of AI, including startup gurus and thought leaders, came together to spill the tea on AI. These powwows are super important in helping regulators get their heads around new tech and what it means for us all.
The Panic Button 😱: The speedy growth of generative AI tech, like ChatGPT, has set off alarm bells among lawmakers and regulators, as I said. The power of these tech tools to whip up realistic and persuasive content has shone a spotlight on the need for tougher rules to stop misuse and make sure it's used ethically.
What's the U.S. doing about AI? Over the past year, the U.S. government has put out three big documents related to AI. These include the "Blueprint for an AI Bill of Rights", "Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem", and the "AI Risk Management Framework". These docs are all about guiding the development and use of AI in a way that maxes out the benefits and keeps risks to a minimum.
The Schumer Initiative 📜: Senator Chuck Schumer has stepped up to the plate to prepare a flexible AI bill that aims to regulate the field without putting the brakes on progress. One of the key proposals in this bill is the independent review and testing of AI tech before it hits the public, making sure they meet certain standards of safety and ethics.
The Four Main Barriers 🚧: The proposed bill zooms in on four main barriers: "Who", "Where", "How", and "Protection". These barriers aim to keep users in the loop about the AI they're interacting with, give the state the data it needs for regulation, and reduce potential harm. They represent a full-on approach to AI regulation, looking at everything from development to deployment and use.
The Influence of Tech Giants 🏢: Big tech players like Google and Microsoft, who've been leading the charge in AI development, have a big say in AI regulations. Their lobbying efforts could shape the future of AI rules in the U.S., so it's super important to consider a wide range of perspectives in the regulatory process.
The U.S. Approach to AI Policy 🗽: The U.S. approach to AI policy has been more of a patchwork of individual agency approaches and narrow legislation rather than a centralized strategy. The focus has been on bipartisan priorities and avoiding new laws intended to shape industry use of AI. This approach aims to strike a balance between the need for regulation and the desire to encourage innovation and growth in the AI sector.
Wrapping Up 🎁: These developments are shaping the future of AI regulation in the U.S., and they're likely to have a big impact on the global AI scene. As AI tech continues to evolve and become a bigger part of our daily lives, the importance of effective and balanced regulation can't be overstated. So, stay tuned for more updates! 📻
Don't miss out on the hottest trends and captivating content—subscribe now and join the Web3 Brew community!
🐦Twitter: @web3brew
🌐Linkedin: Web3 Brew
📸 Instagram: @web3_brew