OpenAI’s splashy ChatGPT rollout has generated untold amounts of text, both directly and indirectly. While much of what’s been written so far has been about creative work, which some fear will be completely upended by ChatGPT, CCI’s copy chief, Jennifer L. Gaskin, looks at how generative AI tools will change the corporate integrity landscape.
Like any good journalist, I’m not one to miss out on a trend, though a look at my wardrobe would probably lead you to believe otherwise. Be that as it may, I have contributed to the newswriting trope du jour by engaging ChatGPT to do my work for me. And … I’m not sure what all the fuss is about — at least, not yet.
Since it was released late last year, people have used ChatGPT, OpenAI’s chatbot, to write everything from e-books to movie scripts to dating-app messaging, all with varying levels of quality. Some have used the platform for memes, while others fear it will put them out of work. (I used OpenAI’s Dall-E 2 to generate the artwork for this post.)
We and other outlets have covered artificial intelligence and machine learning in depth over the past several years, including efforts in the U.S. and around the world to regulate such technologies, as well as the risk of bias in AI-powered tools.
While AI is nothing new, the intense popularity of ChatGPT (it’s the fastest-growing consumer app in history) does seem to speak to a new level of AI sophistication. This isn’t Siri or Alexa, and it’s certainly not the frustrating phone tree that might have you screaming that you just want to talk to a person.
With so much of my job description focused on creative work, this ramp-up in sophistication made me uneasy, to say the least, so I wanted to see for myself if I have anything to worry about. I started with a simple prompt: “Why is SOX compliance important?” ChatGPT’s answer is convincingly coherent, if not exactly nuanced or clever.
That’s because while the answer appears advanced, that’s all it is — an appearance. It has the general cadence of an article you might read on a website like this but little of the topic expertise we try to serve up.
“These sorts of chatbot models are just predicting the next word over and over again,” said Liam Dugan, a doctoral student at Penn who has researched humans’ aptitude for spotting machine-generated text. “These models don’t have a plan and, therefore, it’s very easy for them to kind of get sidetracked and generate things that are totally irrelevant to the task at hand and kind of go off on tangents and lose the thread.”
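To make Dugan’s point concrete, here is a minimal sketch of what “predicting the next word over and over again” looks like in practice. It’s my illustration, not anything from OpenAI: it uses the small, open-source GPT-2 model via Hugging Face’s transformers library as a stand-in, since ChatGPT’s underlying model can’t be downloaded and run locally. The loop simply scores every possible next token, appends the most likely one and repeats; nothing in it plans where the answer is going.

```python
# Illustrative sketch only: a bare-bones next-word-prediction loop using the
# open-source GPT-2 model via Hugging Face's transformers library. This is a
# stand-in for how models like ChatGPT generate text, not OpenAI's actual code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Why is SOX compliance important?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# The model has no outline or plan. At each step it scores every token in its
# vocabulary; we append the single most likely one and ask again.
for _ in range(40):
    with torch.no_grad():
        logits = model(input_ids).logits      # shape: (1, sequence_length, vocab_size)
    next_id = logits[0, -1].argmax()          # greedy pick of the most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Production chatbots sample from that probability distribution rather than always taking the top token, but the mechanism is the same: one token at a time, with no overall plan, which is exactly how a model “loses the thread.”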
Asking the chatbot about the compliance and risk management issues inherent in the use of technologies like itself resulted in a similar, though considerably longer, result — a simulacrum of an informational article about the topic (see the full text at the bottom of the page, including the mid-sentence cut-off).

AI writers are an editing nightmare
Many publishers already have integrated bespoke AI and machine learning tools into their editorial processes, including Forbes’ Bertie content management system, which is said to help its writers work more quickly by doing things like suggesting headlines and proposing topics.
And BuzzFeed recently announced that it would use OpenAI’s technology to create quizzes and other content, news that was met with a 200% surge in the publisher’s stock price, no doubt a signal to other companies that shareholders will welcome AI with open arms.
But there already are big, flashing red lights warning against unmanaged use of generative AI like ChatGPT, at least when it comes to media organizations. Early this year, CNET had to issue multiple corrections, including five to a single AI-written post about interest rates, and the site copped to using AI only after other media outlets began digging.
For technology and compliance lawyer Jonathan Armstrong, a partner at London-based Cordery, quality is among the most obvious issues with ChatGPT and other AI models.
“There are some people who seem to think that AI is all-knowing and always right,” Armstrong said. “That’s simply not the case — for example, many of us have seen the advice that it’s OK to dispose of batteries in the ocean to recharge electric eels. Clearly that’s not true, and at least the error is so obvious that it’s recognisable, but some errors are more subtle and promoting something as a ‘knowledge engine’ brings its own concerns when the data just isn’t true.”
Microsoft made a splash in announcing it would use OpenAI’s technology in its Bing search engine, with New York Times columnist Kevin Roose so impressed by its capabilities that he found himself preferring Bing to Google. But after a lengthy and bizarre conversation with the chatbot one night, Roose changed his mind.
“I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors,” Roose wrote. “Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”
CNET says a human editor fact-checks each of its bot-generated articles, but errors have still made it to publication, and it’s easy to understand why: the volume of machine-written copy creates too much work for those editors and dilutes their line-of-defense role.
For any media or publishing company, accuracy should be the primary goal; otherwise, it risks a fatal erosion of readers’ trust. If you don’t trust that what we print is correct, you will stop reading. It seems clear that eliminating human reporters and writers in favor of bot-written content and expecting editors to simply mop up after them isn’t the answer for publications facing flagging readership and cost-cutting edicts.

The horse is out of the barn
Content creation is far from AI’s only function, or even its most popular. After all, ChatGPT was made for, well, chatting, and companies have long used automation and algorithm-based tools to offload repetitive tasks and those that machines simply do better than humans.
Given investors’ love of AI, partly because it can make goods and services cheaper to produce, generative AI tools like the ones OpenAI makes are all but certain to keep proliferating across job functions and industries.
And the technology is advancing apace. OpenAI is widely expected to release GPT-4 this year, which may help resolve some of ChatGPT’s problems with producing incorrect information and smooth out the rough edges of its AI-speak.
If AI gets better at hiding, that just means humans have to become better detectives, Dugan said.
“As models get better and better, it becomes harder and harder for us to detect their errors because their errors become less frequent,” he said. “Models from, let’s say five years ago, would make grammatical errors or other sorts of errors like that, and [today’s] models almost never make those errors. And so I think that it’s important for people to get some literacy in what sorts of errors models make and continue to update as the models get better and better.”
But models are already advanced enough to do real damage, and AI’s biggest threat, says Sandy Fliderman, founder of financial services firm Industry FinTech, isn’t in text-generating tools like ChatGPT; it’s in nefarious uses of AI-powered deepfake technology, which can convincingly mimic a person’s voice or even create a video of them.
Microsoft’s recently introduced text-to-speech AI model, VALL-E, can synthesize audio of a person based on just a three-second clip, and celebrities are frequent targets of generative audio technology like ElevenLabs’ Prime Voice AI. Fliderman expects such technology to continue improving to the point that it cannot be distinguished from a real voice speaking in real time.
“We will no longer be able to use the phone or recordings to verify a person,” Fliderman said. “This will affect business security. But, it will also affect our legal system — will a jury ever be able to convict someone because a witness says they recognized a defendant’s voice? We are nearing the end of an era where a person’s voice is unique to them and can be used to recognize them. With this loss, so comes the inherent privacy associated with it.”
Writer and technologist David B. Auerbach argues in his new book, “Meganets,” that humans have already lost control of the enormous digital networks we interact with on a daily basis, so it stands to reason that we’re behind the eight-ball when it comes to deepfakes.
“The ability to generate convincing video and audio is growing by the day, and we will soon be at the point where it will not be obvious at all whether a given piece of video is genuine or not,” Auerbach said. “Combined with the text-generation capacities of ChatGPT, creating convincing real-time interactions with fake simulations of actual people is very plausible. Authentication mechanisms will need to be set up for people to verify that it is them instead of a simulation. Such technology exists but the infrastructure will need to be built quickly.”

A corporate integrity minefield
Alt-right provocateur Jack Posobiec tweeted an unconvincing deepfake that purported to show President Joe Biden announcing plans to reinstate the military draft. As of March 6, Posobiec’s tweet was still up and had been retweeted about 4,000 times despite the video being obviously fake. Of course, given his track record, nobody would expect Posobiec to care about accuracy, and his role as a shitposter means he doesn’t have to hew to ethical, compliance or regulatory standards.
And here’s where corporate integrity professionals can look to AI’s rollout into media and publishing for guidance: AI isn’t going to take your job; it’s going to make your work harder.
While creative types like me could reasonably be concerned that our jobs might be phased out as AI grows more intelligent, can the same be said for folks who work in corporate integrity? There’s a good chance that if you’re working in such a role at a company right now, a portion of your work involves advanced algorithms and maybe even some form of AI. But as these tools continue to become more sophisticated and as regulations around their use proliferate, businesses will figure out how to integrate them. And chances are that will mean more work — not less.
From issues of transparency to use of protected intellectual property to conversations over explainability, compliance, risk and governance teams will likely have their hands full, as will the hundreds of technology providers in the industry.
“With the proliferation of AI models, companies will need to devise preventive strategies to deter Web crawling of copyrighted content and IP,” said Paresh Chiney, a partner at global advisory firm StoneTurn. “In addition to that, they may need to invest in tools and methods to compare suspicious AI-generated content in any form (e.g., music, video, text, images) with their copyrighted, original works to identify potential infringement.”
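Chiney’s first recommendation can start small. As a purely illustrative sketch (my example, not something StoneTurn prescribes), a publisher might refuse requests from crawlers known to feed AI training sets at the application layer; CCBot, for instance, is Common Crawl’s documented user agent, and the blocklist below is an assumption a real site would maintain and expand. Determined scrapers can spoof their user agent, so this is deterrence, not protection.

```python
# Illustrative sketch: refuse known AI-training crawlers by user agent.
# The blocklist is an example a publisher would maintain and expand; spoofed
# user agents will slip through, so treat this as deterrence, not protection.
from flask import Flask, abort, request

app = Flask(__name__)

# Example entry; CCBot is Common Crawl's documented crawler user agent.
BLOCKED_CRAWLERS = ("CCBot",)

@app.before_request
def refuse_ai_crawlers():
    user_agent = request.headers.get("User-Agent", "")
    if any(bot in user_agent for bot in BLOCKED_CRAWLERS):
        abort(403)  # do not serve copyrighted content to these crawlers

@app.route("/")
def article():
    return "Copyrighted editorial content."

if __name__ == "__main__":
    app.run()
```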
Lawmakers around the U.S. and the world already have begun to flex their legislative muscle regarding AI: the chatbot Replika was banned in Italy because of concerns over child safety, and a New York City law regulating the use of AI in hiring is expected to go into effect in April after it was delayed so city officials could make some tweaks.
For its part, ChatGPT seems to believe it can’t bear any legal responsibility for people misusing the platform or for unauthorized dissemination of IP, but Armstrong isn’t so sure about that.
“The terms and conditions for using the chatbot might seek to limit liability, but they won’t bind a third party,” he said. “So, for example, if the chatbot reprints the lyrics to a Jay-Z song, Jay-Z hasn’t given consent and hasn’t signed up to an agreement with the chatbot developer, it’s likely that Jay-Z can sue the developer for using his IP.”
He added: “AI doesn’t add a protective shield — if something is an IP infringement in the ‘real’ world, it might well be an infringement in the ‘virtual’ world, too.”
Plus, AI-powered tools are already being used by fraudsters and scammers, and Fliderman expects this phenomenon to grow as the technology becomes more advanced.
“Unfortunately, we will start to hear many stories about people convinced they are speaking to a coworker, boss, spouse or someone they trust asking them to send money somewhere,” Fliderman said. “This type of scam isn’t always focused on sending a wire or ACH somewhere; often it tricks the victim into buying gift cards and sharing the card ID number for redemption. Also, unfortunately, with new technology, the bad actors will create new schemes that we haven’t yet considered, making it difficult to educate people in advance.”
In October, the White House released an outline for a possible AI Bill of Rights, covering areas like data privacy and explainability. It’s unclear whether that document will result in any actual federal lawmaking (a similar document was written into pending legislation in California this year), and even if it does, the jury is still out on whether it’ll make a difference.
“The AI Bill of Rights and $3 will get you a cup of coffee,” Auerbach said. “The combination of vague principles like ‘unsafe systems’ and underdefined concepts like ‘discrimination’ put this document far behind the curve in terms of both realism and efficacy. A toothless ‘blueprint’ that doesn’t cite a single actual example or precedent is not going to advance the conversation.”

Can AI be used ethically?
We strive to do our reporting and content curation ethically, which means, in part, that every editorial decision we make can be explained. And if my reporting includes an error, I can be held accountable for that because I am the one who gathered the facts.
With generative AI tools, such explanations are difficult and, depending on what inputs the model has received, could even be impossible. But, as ethics consultant and CCI columnist Lisa Schor Babin said, transparency is the key to ethical AI.
“Both transparency and explainability are essential to building trust in the ethical use of AI. If we do it right, we can still carve out space for and continue to have trust in genuine, human connection and emotion.”
For companies, that often means finding the right vendor or supplier, Armstrong said.
“If you’re buying AI … and the developer won’t tell you the basis of the decision-making, you’re probably going to have to give that particular application a wide berth,” he said.
Dugan isn’t so sure explainability is possible for advanced AI, at least not yet.
“The whole idea of explaining how an AI makes its predictions and explaining what an AI actually is doing under the hood is both incredibly difficult and, in my opinion, incredibly understudied work,” Dugan said. “You can ask ChatGPT, ‘What did you do to make this article?’ and it can give you a response. There’s no way that that was actually what it did. They could just lie. And they frequently do so.”
Full transparency here: I wasn’t great at telling AI-generated content from text written by a human, using a tool Dugan and his fellow researchers created. But I did get better the more I tried, and that’s important, both for general consumers and business AI users.
“We’re just at the beginning of an era where this sort of text is going to be everywhere,” Dugan said. “It’s so easy to generate, and if you know the common signs of generative text, or if you keep up to date with the latest models and know what sorts of errors they make, you’re going to become a lot better at having those alarm bells go off when you see text that makes those errors. And maybe you think twice before believing or taking at face value text that has those sorts of errors.”
AI isn’t a toaster; you can’t look inside the guts of the machine and know exactly what’s going on. A 100% safe, ethical application of advanced AI, particularly a broad one, just doesn’t seem realistic. But putting up guardrails, whether that means helping people spot machine-generated text or limiting the technology’s use, is a good start.
“Some lack of transparency is baked into the very technology: The nature of machine learning prevents you from knowing exactly why an AI responded the way that it did,” Auerbach said. “All you can do is ‘train’ or ‘nudge’ it in better directions. But ethical usage can be attained by informing users and the public about the technology’s capabilities and limitations, and by not stepping into areas designed to exploit people’s emotional and addictive weaknesses.”
(Click here to read the ChatGPT-generated article on AI in compliance and risk management.)