AI – artificial intelligence – has expanded rapidly in just the past few years into nearly every industry, from science and biomedicine to education, gaming, and entertainment. That integration will continue to shape these fields for years, even decades, to come. Whether you see it in a positive or negative light, AI is here to stay.
But with this vast integration into nearly every profession and field, it's important to understand not only the impacts and complexities of that integration, but also exactly when and where AI has interacted with, and possibly altered, one's own work or someone else's. As such, AI detector tools – often powered by the same large language models ("LLMs") that power generative AI itself – are becoming an increasingly important part of checking and maintaining the reputability and accuracy of AI output.
This is especially important in fields like education, where plagiarism has been a growing problem among students for decades. The International Center for Academic Integrity, a collaboration among colleges, universities, and other academic institutions, reported in 2020 on survey data covering 70,000 high school students from 24 US high schools. According to those findings, 58% of the students admitted to committing plagiarism.
With the proliferation of generative AI writing and chat tools, most notably ChatGPT, the temptation to simply have an AI write one's academic assignments or research has skyrocketed: the barrier to entry is as low as asking the model to write an essay.
This is not to discount or diminish the usefulness of such generative AI tools; their applications for improving and enhancing one's work are vast. But these powerful generative tools require equally powerful checks to maintain their reputability.
Similarly, as generative writing and chat AI tools have grown, so too have generative image tools, with generative video AI improving quickly behind them. With generative image tools, the ability to alter existing images, or create entirely new ones from a simple written prompt, has become a convenient and cost-effective way for clients in many fields to produce the images their work requires.
But much like generative writing, generative image tools can be abused. A common example is so-called "bot" accounts: automated social media accounts used for purposes ranging from spam to artificial engagement ("boosting"), and sometimes for more nefarious ends, such as targeted harassment of one or more individuals. In the past, such bot accounts were often created in bulk, with boilerplate descriptions and bios, along with a generic stock photo of a person for the profile picture, typically taken from free or even paid stock photo platforms.
With recent improvements to generative AI image tools, however, bad actors no longer need to rely on stock photo platforms for passable profile pictures, which could often be easily reverse-image searched via most search engines. They can now simply generate a new, completely nonexistent individual from a short text prompt.
Since the image depicts someone who doesn't exist, a reverse-image search returns no results, and these images have become increasingly detailed and convincing. As a result, tools and platforms that can detect such fake images, along with AI-generated text, are more important than ever: not only to prevent academic issues like plagiarism, but also to protect individuals and industries from abuse and misuse, and to help the general public recognize when and where such AI-generated material is being used.
For instance, as recently as two years ago, AI-generated images were, for the most part, easily detectable by the average person's naked eye. Early AI image generation had glaring issues, most notably with human fingers and distorted facial expressions. Fast-forward to the present, however, and platforms such as Portrait Pal offer professional, convincing AI-generated portraits of oneself or other individuals. As a result, it has become substantially harder for anyone not intimately familiar with AI models or tools to recognize AI-generated images of people.
Likewise, these same image generation tools continue to improve at creating convincing images of landscapes, cities, buildings, and even plants and animals, blurring the line between reality and artificiality even further. As such, AI detector tools have become just as important as AI generative tools; in fact, one could think of this relationship as a form of checks and balances.
A number of services are now available that scan a body of text or an image and determine whether it was created by a human (that is, actually typed, written, or photographed) or generated by one of the many AI models. These detectors often utilize the same LLMs that power the generative tools in the first place, giving them the unique advantage of access to the same vast datasets that produced the AI images or text. It's important to view these detection services not as an impediment or detriment to generative AI or LLMs, but as a complement to them.
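At their simplest, text detectors look for statistical fingerprints of machine writing. As a toy illustration only – not how any production detector actually works – the sketch below flags text whose sentence lengths vary unusually little, a weak "burstiness" signal sometimes discussed in AI-text detection. The threshold value and both function names here are hypothetical, chosen purely for demonstration:

```python
import re
import statistics

def burstiness_score(text: str) -> float:
    """Standard deviation of sentence lengths, measured in words.

    Human writing tends to mix short and long sentences; low
    variance is one (weak) signal that a passage may be machine
    generated. Real detectors use far richer features and LLMs.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.stdev(lengths)

def looks_generated(text: str, threshold: float = 3.0) -> bool:
    """Flag text whose sentence lengths barely vary.

    The threshold is an arbitrary cutoff for illustration, not a
    calibrated value from any real detection service.
    """
    return burstiness_score(text) < threshold
```

For example, three identical-length sentences score zero variance and are flagged, while a passage mixing one-word and sixteen-word sentences is not. A real service would combine many such signals – vocabulary distribution, token probabilities under an LLM, image artifacts – rather than rely on any single heuristic.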
Navigating this brave new world of AI-infused media and communication can seem a daunting, even unnerving, prospect. But as with the many industries and innovations before it that stirred similar feelings of uncertainty, those feelings were quelled as new tools and processes for keeping them in check were developed. So too with AI: we now have those tools in the quickly developing field of AI detection.