CheckMag | AI-generated content in tech media: Good or bad? Have your say
Unless you've been living under a rock, you've heard about all the different generative AI systems and how they're going to steal your jobs and replace you in your families. A more optimistic, and perhaps generous, outlook sees models like ChatGPT and DALL-E as tools to help creatives get their jobs done more easily and quickly.
A number of websites, both in tech and beyond, have dived head-first into AI-generated content, with occasionally disastrous results. CNET was famously subject to fierce public and internal backlash when it published dozens of AI-generated news stories, more than half of which later required edits for factual accuracy, while BuzzFeed reportedly plans to make AI an essential part of its content strategy in the coming years.
Since models like OpenAI's ChatGPT have become easier and cheaper to use than ever, hundreds of sites have popped up that use AI to peddle misinformation, not to mention the low-quality content that, honestly, everyone is sick of seeing pollute the web. AI-generated misinformation has become such a severe problem that organisations like NewsGuard have built dedicated AI misinformation trackers.
While NotebookCheck still maintains an editorial policy against publishing wholly AI-generated content, regular readers might have noticed that a handful of NotebookCheck articles have recently been published with header images created by DALL-E 3. The idea is that our team of news writers and editors can create fresh, engaging header images quickly in order to focus on what matters most — quality reporting.
AI image generation is especially useful when reporting on topics like leaks and rumours. When we review a device like the new Lenovo Legion Pro 7 16, our review team meticulously captures studio-quality images, but that isn't possible for a news piece covering a device of which no images have yet surfaced.
However, it's important to remember that the debates around AI image generation are hardly settled, and there are many ethical, legal, and moral issues worth discussing.
Some AI detractors adamantly believe that AI art is akin to theft, because models like DALL-E and Stable Diffusion are trained by effectively scraping the internet for art to use as training data, often without remunerating or crediting the original artists. Others argue that this training process is no different from the way humans see and take inspiration from the art around them.
While AI can certainly speed up the creation of both written content and images, these unresolved questions, along with the public perception of AI image generation, complicate any verdict on whether AI is a force for good or a blight on online content.
We'd like to hear your opinion, so please weigh in in the comments or take the StrawPoll survey embedded below. We will write a follow-up piece based on the results and comments we receive.
If you're into AI or machine learning, check out the NVIDIA Jetson Orin Nano Developer Kit on Amazon.