Alex Nichiporchik, CEO of Hello Neighbor publisher TinyBuild, raised eyebrows during a talk in which he described potential uses for AI tools in the workplace, including monitoring employee Slack messages and meeting transcripts to help identify “potential problem players” – a discussion he has since insisted was “hypothetical”.
Nichiporchik (as reported by WhyNowGaming) was speaking at this week’s Develop: Brighton conference, in a talk titled ‘AI in Gamedev: Is My Job Safe?’, which promised an in-depth look at how TinyBuild had “adopted AI into daily practices to exponentially increase efficiency.”
One part of the presentation, focusing on “AI for HR”, has proven particularly controversial since news of its contents began to spread across the internet. Here, Nichiporchik explained how AI could be used by HR to detect burnout (later described as synonymous with “toxicity”) among employees by first identifying “potentially problematic team members”, then gathering their Slack messages and automatic transcriptions from tools like Google Meet and Zoom, and running them through ChatGPT, in a process he calls “I, Me Analysis.”
“There’s a direct correlation between the number of times someone uses ‘I’ or ‘me’ in a meeting,” Nichiporchik said, “versus the amount of words they use overall, and the likelihood that the person will burn out.”
According to Nichiporchik, by identifying employees who “talk a lot about themselves”, who “absorb too much time in meetings” so that “nothing gets done”, and who receive negative feedback in 360-degree peer reviews, it is then possible to “identify someone who is on the verge of burnout, who could be the reason why the colleagues who work with this person are burning out, and you may be able to identify them and remedy the situation quickly.”
This is where the exact scope of Nichiporchik’s somewhat dystopian vision begins to blur. WhyNowGaming quotes the CEO as saying that TinyBuild had experimented with the technology retroactively on workers who had already left the company, but was now beginning to use it proactively, pointing to a “first case last week, where a studio head wasn’t in a good place, no one told us. If we had waited a month, we probably wouldn’t have a studio.”
In a statement later provided to the website, however, Nichiporchik contradicted WhyNowGaming’s account, insisting that “the HR part of my presentation was hypothetical, hence the Black Mirror reference” and that TinyBuild “does not monitor employees and does not use AI to identify problematic ones.”
“I could have been clearer, viewed out of context,” Nichiporchik said in a statement provided to the publication following its report. “We don’t monitor employees or use AI to identify problematic ones. The presentation explored how AI tools can be used, and some are getting into scary territory. I wanted to explore how they can be used for good.”
The takeaway, however, seems quite simple. Regardless of Nichiporchik’s intentions and TinyBuild’s internal practices, there will undoubtedly be other CEOs contemplating uses of AI at least as harmful as those he described. AI is a hot topic right now, and as Eurogamer’s Chris Tapsell recently discovered, it’s one that people across all disciplines of the games industry have strong, and decidedly mixed, feelings about – but clearly not an issue that will go away.