
Google Launches SynthID Detector – A Revolutionary AI Detection Tool. Is This the Beginning of Responsible AI Development?


Key Takeaways.

Google has launched SynthID Detector, a powerful tool that can detect AI-generated content. It works by identifying the watermarks that SynthID embeds in content produced by Google AI tools such as Imagen, Gemini, and Lyria. The detector is in testing and is currently available only to those who join a waiting list.

  • SynthID Detector’s tech architecture is open-source and anyone can build on it.
  • Google has launched SynthID Detector, a tool that can recognize content created with Google's suite of AI tools.

    SynthID is a cutting-edge watermarking tool that Google launched in 2023. The technology adds an invisible watermark to AI-generated content.

    SynthID, initially launched for AI-generated images, has since been expanded to cover text, audio, and video created with tools such as Imagen, Gemini, Lyria, and Veo.

    The detector uses these SynthID watermarks to identify AI content. When you upload audio, video, or an image to the tool, it checks for the watermark; if it finds one, it highlights the parts of the content most likely to contain it.
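Google has not published SynthID's actual watermarking algorithm, so the following is only a toy sketch of the general idea: embed an invisible, machine-detectable signal in content and check for it later. This naive least-significant-bit scheme on raw pixel values is purely illustrative (real systems survive compression, cropping, and editing; this one would not):

```python
# Toy illustration of invisible watermarking: hide a known bit pattern
# in the least-significant bits of pixel values, then detect it later.
# This is NOT SynthID's algorithm -- just the core embed/detect idea.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed(pixels, mark=WATERMARK):
    """Overwrite each pixel's lowest bit with the repeating signature."""
    return [(p & ~1) | mark[i % len(mark)] for i, p in enumerate(pixels)]

def detect(pixels, mark=WATERMARK):
    """Return the fraction of low bits that match the signature."""
    hits = sum((p & 1) == mark[i % len(mark)] for i, p in enumerate(pixels))
    return hits / len(pixels)

original = [200, 13, 77, 158, 91, 34, 250, 6] * 4  # fake grayscale pixels
watermarked = embed(original)
print(detect(watermarked))  # 1.0 -> watermark present
print(detect(original))     # 0.375 -> pattern absent (chance-level match)
```

The "highlight the most likely part" behavior the article describes would correspond to running a detector like this over local regions and flagging those with high match scores.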

    However, it’s important to note that SynthID Detector is currently in testing. Google has released a sign-up form for journalists, researchers, and media professionals who want access.

    Google has also partnered with NVIDIA to watermark videos generated with NVIDIA's Cosmos AI model. More importantly, Google announced a partnership with GetReal Security, a pioneer in detecting fake media that has raised $17.5 million in equity funding.

    It’s likely that we’ll see more such partnerships from Google, which means the scope of SynthID Detector will keep expanding. Eventually, you may be able to detect not only Google-generated AI content but content from other AI platforms as well.

    The Need for SynthID Detector

    Despite all the benefits artificial intelligence has brought us, it is also a powerful tool for criminals. We have seen hundreds of incidents where innocent people have been scammed or threatened with AI-generated content.

    As an example, on May 13th, Sandra Rogers of Lackawanna County was found guilty of possessing a firearm and AI-generated child sex abuse images. In another incident, a 17-year-old extorted 19 victims using AI-created sexually explicit deepfakes, threatening to leak the images.

    In China, a man was allegedly scammed out of $622,000 by an AI-generated voice impersonating his best friend. Similar scams are common in the US, and even in countries such as India that don’t have the most advanced AI technology.

    AI is being used not only to commit crimes against civilians but also to cause political unrest. For example, a consultant was fined $6M after using fake robocalls during the US Presidential elections. He used AI to imitate Joe Biden’s voice and urged New Hampshire voters not to vote in their state’s Democratic Primary.

    In 2022, Ukrainian broadcaster Ukraine 24, whose website was allegedly hacked, displayed a fake video of Ukrainian President Zelensky. The AI-generated video showed Zelensky ‘laying down his arms’ and surrendering to Russia. There are many such cases on the internet, and new ones appear almost every day. AI is increasingly weaponized against institutions and governments to cause social and political unrest.

    Image Credit – Statista

    Therefore, a tool like SynthID Detector can be a beacon of hope to combat such perpetrators. News houses, publications, and regulators can run a suspected image or content through the detector to verify a story before running it for millions to view.

    More importantly, tools like SynthID will also go a long way in instilling some semblance of fear among criminals, who will know that they can be busted anytime.

    And What About the Legal Grey Area of AI Usage?

    Besides the above outright illegal use of AI, there’s also a moral dilemma attached to increasing AI use. Educators are specifically worried about the use of LLMs and text-generating AI models in schools, colleges, and universities.

    Instead of putting in the hard yards, students now just punch in a couple of prompts to generate detailed, human-like articles and assignments. Research at the University of Pennsylvania formed two groups of students: one with access to ChatGPT and another without any such LLM tools.

    During practice, the students who used ChatGPT solved 48% more mathematical problems correctly. However, when a test was later conducted, those same students solved 17% fewer problems than those who hadn’t used the tool.

    This shows that the use of LLM models isn’t really contributing to learning and academic development. They’re, instead, tools to simply ‘complete tasks,’ which is slowly robbing us of our ability to think.

    Another study, ‘AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking,’ shows that people in the 17–25 age group have the highest AI usage but also the lowest critical-reasoning scores. Coincidence? We don’t believe so.

    It’s clear that the use of AI isn’t helping to develop young minds. Instead, it has become a crutch for those who want to cut corners.

    This is a moral dilemma, because using AI tools in education or for any other purpose is not illegal. According to many, it’s more a conscious decision to let go of our own critical reasoning, which is part of what makes us human.

    Current AI Detectors Are Worthless

    Since AI is replacing critical thinking and being used to outsource student work, it is understandable that educational institutions have turned to AI detectors to spot AI-generated content in student submissions.

    These AI detectors, however, are no more accurate than blind people telling you where to go. We apologize if we have stepped on anyone’s toes! We forgot our stick.

    Christopher Penn, an AI expert, published a LinkedIn post titled “AI Detectors Are a Joke.” He fed the US Declaration of Independence into a ‘market-leading’ AI detector and guess what? According to the tool, our forefathers wrote the Declaration of Independence using 97% AI. Time travel?

    The inaccurate results from these detectors stem from their use of parameters such as perplexity and burstiness to analyze texts. Consequently, if you write an article that sounds somewhat robotic, lacks vocabulary variety, and features similar line lengths, these ‘AI detectors’ may classify your work as that of an AI language model.
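To make those two parameters concrete, here is a toy sketch (our own illustration, not any vendor's actual detector): perplexity can be crudely approximated with a unigram model, and burstiness measured as the spread of sentence lengths. Real detectors use large language models for perplexity, but the failure mode is the same: repetitive, evenly-paced human writing scores "AI-like."

```python
import math
import re
from collections import Counter

def unigram_perplexity(text):
    """Crude perplexity under a unigram model fit on the text itself.
    Low values = repetitive, predictable wording -- one signal naive
    AI detectors key on. (Real detectors use large language models.)"""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    n = len(words)
    log_prob = sum(c * math.log(c / n) for c in counts.values())
    return math.exp(-log_prob / n)

def burstiness(text):
    """Standard deviation of sentence lengths in words. Human prose
    tends to vary sentence length more than generated text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    return (sum((l - mean) ** 2 for l in lengths) / len(lengths)) ** 0.5

robotic = "The tool is good. The tool is fast. The tool is safe."
varied = "Wow! The tool, despite its flaws, impressed everyone who tried it last week. It works."
print(burstiness(robotic))  # 0.0 -> uniform sentence lengths, 'AI-like'
print(burstiness(varied))   # ~4.97 -> varied lengths, 'human-like'
```

A human who happens to write in short, uniform sentences lands on the "robotic" side of both metrics, which is exactly why the Declaration of Independence anecdote above is possible.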

    Bottom line, these tools are not reliable, which is possibly why OpenAI discontinued its AI detection tool in mid-2023, citing accuracy issues. However, the sad part is that a large part of the system, including universities, still relies on these tools to make major decisions such as student expulsions and suspensions.

    This is exactly why we need a better and more reliable tool to call out AI-generated content. Enter SynthID Detector.

    SynthID Detector Is Open-Source

    Possibly the biggest piece of positive news with regard to Google’s SynthID Detector announcement is that the tool has been kept open source. This will allow other companies and creators to build on the existing architecture and incorporate AI watermark detection in their own artificial intelligence models.
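As one hedged illustration of what "building on" watermark detection could look like: published LLM-watermarking research (the "green list" family of schemes, which is not necessarily SynthID's own algorithm) has the generator prefer tokens from a secret, hash-keyed subset, so a detector only needs to count how often a text lands in that subset:

```python
import hashlib
import string

def is_green(prev_token, token):
    """A keyed hash assigns roughly half of all (prev, next) token pairs
    to a secret 'green list'. A watermarking generator biases sampling
    toward green tokens; ordinary text hits green ~50% of the time."""
    digest = hashlib.sha256(f"secret-key|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens):
    """Detector side: share of transitions that hit the green list.
    Values far above 0.5 suggest the watermark is present."""
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

VOCAB = list(string.ascii_lowercase)  # stand-in for a real token vocabulary

def watermarked_sample(start, length):
    """Toy 'generator' that always emits a green continuation if one exists."""
    tokens = [start]
    for _ in range(length):
        nxt = next((t for t in VOCAB if is_green(tokens[-1], t)), VOCAB[0])
        tokens.append(nxt)
    return tokens

marked = watermarked_sample("a", 30)
plain = list("the quick brown fox jumps")
print(green_fraction(marked))  # ~1.0 -> watermark detected
print(green_fraction(plain))   # roughly chance level for unmarked text
```

The appeal of this style of detection, and plausibly of Google open-sourcing its detector, is that the check is cheap and statistical: any party holding the key can verify content without access to the model that generated it.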

    Remember, SynthID Detector currently only works for Google’s AI tools, which is just a small part of the whole artificial intelligence market. So, if someone generates a text using ChatGPT, there’s still no reliable way to tell if it was AI-generated.

    Maybe that’s why Google has kept the detector open-source, hoping that other developers would take a cue from it.

    All in all, it’s commendable that Google hasn’t gatekept this essential development. Other companies concerned about the increasing misuse of their AI models should follow suit and contribute to the greater good of making AI safe for society.

    Krishi is a seasoned tech journalist with over four years of experience writing about PC hardware, consumer technology, and artificial intelligence.  Clarity and accessibility are at the core of Krishi’s writing style. He believes technology writing should empower readers—not confuse them—and he’s committed to ensuring his content is always easy to understand without sacrificing accuracy or depth. Over the years, Krishi has contributed to some of the most reputable names in the industry, including Techopedia, TechRadar, and Tom’s Guide. A man of many talents, Krishi has also proven his mettle as a crypto writer, tackling complex topics with both ease and zeal. His work spans various formats—from in-depth explainers and news coverage to feature pieces and buying guides.  Behind the scenes, Krishi operates from a dual-monitor setup (including a 29-inch LG UltraWide) that’s always buzzing with news feeds, technical documentation, and research notes, as well as the occasional gaming sessions that keep him fresh.  Krishi thrives on staying current, always ready to dive into the latest announcements, industry shifts, and their far-reaching impacts.  When he’s not deep into research on the latest PC hardware news, Krishi would love to chat with you about day trading and the financial markets—oh! And cricket, as well.



