The prevalence of AI in hiring — on both sides of the equation — points to a deep shift in the very nature of work itself, says Ask an Ethicist columnist Vera Cherepanova. In determining whether a candidate is cheating or simply resourceful, employers should remember their values.
“I’m a head of talent acquisition at a tech firm. Lately, we’ve noticed that some candidates clearly use AI to pass technical interviews — sometimes giving perfect, but oddly generic, answers. I’m torn. On one hand, they’re gaming the system. On the other, they’re using available tools — just like we all use spellcheck or calculators. Should we reject them for dishonesty or hire them for resourcefulness?” — JK
You’re not alone in grappling with this question. As AI tools become more powerful and accessible, the line is blurring between “cheating” and “leveraging tech,” between enablement and deceit. And that blur points to a deeper shift, one that’s not just about tools but about trust, values and the meaning of work itself.
Consider the manifesto of Cluely, a controversial startup that openly declares itself a tool to help users “cheat” on job interviews, exams and sales calls. Fresh off a $15 million Series A funding round, Cluely claims that cheating is not an ethical failing but the next phase of optimization:
“And yes, the world will call it cheating. But so was the calculator. So was spellcheck. So was Google. Every time technology makes us smarter, the world panics. Then it adapts. Then it forgets. And suddenly, it’s normal.”
Strategically provocative, the comparison of AI cheating tools to calculators or spellcheck reframes dishonesty as technological progress and implies that ethics itself can be rebranded as inefficiency. This puts new pressure on governance structures to draw boundaries around “acceptable automation.” What happens when the business model requires controversy? And how will we define integrity in an age when the line between automation and deception is blurred?
But let’s come back to your dilemma. If your interviews are designed to test unassisted, real-time thinking and candidates are using AI to bypass that, then yes, they’re violating the spirit of the evaluation, even if they’re technically not breaking any rules. In education, we’d call that cheating. A student who uses AI to write essays, when writing is the exact skill the class is meant to develop, is a problem.
But in the workplace, and especially in hiring, the waters are murkier. We need to draw a clear distinction between using AI to prepare and using AI to perform. When a candidate uses AI to prepare for an interview, is that any different from hiring a coach or working through a book of practice problems? But what if they use AI during the interview, generating code in real time or even whispering prompts to an earpiece? That’s not preparation, and it’s not simply using tools; it’s misrepresenting ability and effort.
Still, before we cast judgment, let’s zoom out.
Cluely’s manifesto is not just marketing; it reflects an emergent worldview: The economic nihilism of the younger generation is a coping mechanism and an entrepreneurial strategy. Today’s economy doesn’t reward loyalty, mastery or meaning; it rewards optimization, virality and arbitrage. Job applicants apply for dozens of roles, go through layers of automated screening and are rewarded for mastering tests, not for originality or for depth. Employers routinely use AI to screen résumés, scan video interviews and automate rejection emails. When the system feels impersonal, gameable and cold, is it any wonder that candidates respond in kind?
In that context, “cheating” becomes not a breach of trust but a rational response and adaptation to a reality where most outputs feel fake and reward systems are opaque. There is a generational reckoning, and maybe what’s most chilling is that the “cheating economy” doesn’t feel fringe or dystopian. It feels like a business case for the not-too-distant future. So, what should ethical frameworks look like in a world where cheating is rebranded as optimization and employees view rules as outdated rituals?
Let’s be clear: Just because something is understandable doesn’t mean it’s acceptable. Cheating with AI in job interviews may be clever. It may even be widespread. But neither makes it right, and it does nothing to ease the crisis of trust between employees and employers. On the contrary, it accelerates the transactional spiral and deepens disloyalty on both sides of the “psychological contract.”
At the same time, if your interview process is easy to game with a chatbot, maybe you’re looking for the wrong thing. What are you really testing for? Knowledge or character? Independence or collaboration? The ability to memorize syntax or the judgment to use tools wisely? Maybe the system needs rethinking.
Here’s a rule of thumb you might adopt internally:
- If a candidate uses AI to prepare smarter — great.
- If a candidate uses AI to perform dishonestly — that’s a red flag.
- If your process makes the second easy and the first invisible — maybe the real work isn’t punishing “cheaters” but redesigning the assessments.
Ultimately, ethical leadership isn’t about banning AI outright. It’s about clarifying your values, setting expectations and building processes that reflect the world we’re actually in — not the one we wish we still had.
Readers respond
The previous question came from a team leader in a high-performing organization grappling with a hidden ethical risk: a culture so invested in being “nice” that team members hesitate to hold each other accountable. The dilemma revolved around whether a workplace built on kindness can unintentionally suppress moral feedback — and how to preserve trust while making space for ethical friction and constructive challenge.
In his response, guest ethicist Brennan Jacoby noted: “To move forward on this issue, work to normalize an understanding of ethics and accountability across your team that is aligned with the trusting and supportive culture you want to protect. To do this, explore with your team how they think about ethics and accountability.
“Unfortunately, it is common for ethics to come with connotations of shame and guilt. To say that you think the team is dealing with an ethical dilemma, therefore, might be heard by your team as a criticism. It doesn’t have to be this way, though.
“If you find that your team has a view of ethics that is closely correlated with shame, then try introducing a more accurate understanding of ethics as the practice of navigating morally gray areas of life.” Read the full question and answer here.
This article gets close to something I’ve seen a lot but rarely hear named so clearly: when a team gets so “good” at being good that ethics becomes off-limits. We conflate being supportive with being agreeable. We start thinking that raising a tough ethical concern is somehow disloyal. But that’s not safety; it’s silence. The article does a nice job reframing ethics as tension, not shame. Still, it feels like we’re trying to tidy ethics up a little too quickly. Neo-Socratic ethics wouldn’t just say “good people face tough choices.” It would ask: What counts as a tough choice here? Who decided that? And what values are we not willing to question? It’s not about catching someone doing something wrong. It’s about keeping the conversation alive when it’d be easier to just nod along. What questions are we no longer allowed to ask in this “good” culture? — Mike Cardus
In the original question, Anonymous says, “Trust and psychological safety are strong. … (But) we’re not talking about each other’s morally grey behaviors for fear of being seen as too critical.” This means psychological safety is not high. It’s a misunderstanding of what psychological safety is. — Andy Currie