Since OpenAI launched ChatGPT, a virtual chatbot that answers a variety of questions, in November 2022, companies have struggled to keep use of the tool under control in the workplace.
Many companies are concerned about their data being leaked. Not only could OpenAI’s algorithms unintentionally be trained on confidential information, but corporate secrets could also be exposed to competitors through their own requests, says Simon Johnson, head of the Global Economics and Management Group at the Sloan School of Management at the Massachusetts Institute of Technology in the US.
However, many workers love the technology, have access to it, and even rely on it.
“These are practical tools that make life easier, like content aggregation. Instead of digging through multiple sources to find an obscure regulatory policy, ChatGPT can deliver a useful first draft in moments,” says Brian Hancock, a partner at Washington-based McKinsey & Company.
“They can also help with technical tasks, such as programming, and perform routine tasks that ease the cognitive load and schedules of employees.”
Business consultant Matt and a colleague were among the first in their workplace to discover ChatGPT, just weeks after its launch. He says the chatbot changed his workdays overnight. “It was like discovering a trick in a video game,” says Matt, who lives in Berlin.
“I asked it a really technical question from my doctoral thesis, and it provided an answer that no one could have found without consulting people with very specific expertise. I knew it would be a game-changer.”
Everyday tasks in his fast-paced work environment, such as researching scientific topics, gathering sources, and producing comprehensive client presentations, suddenly became easy.
The only problem: he and his colleague had to keep their use of ChatGPT a closely guarded secret. They managed to access the tool discreetly, especially on work-from-home days.
“We had a huge competitive advantage over our colleagues – our output was much faster and they couldn’t work out how we did it,” he says. “Our manager was very impressed and talked about our performance with senior management.”
Whether the technology is outright banned, strongly frowned upon, or simply giving them a secret edge, some employees are looking for ways to keep using generative AI tools discreetly.
Keeping quiet about the technology is increasingly common among employees: in a February 2023 survey by the professional social network Fishbowl, 68% of the 5,067 respondents who used AI at work said they had not disclosed that use to their bosses.
Even in workplaces where it is not prohibited, employees may still want to keep their use of AI hidden from their colleagues, or at least closely guarded.
“We don’t yet have established standards around AI,” Johnson says. “At first, it might feel like you’re admitting that you’re not good at your job if a machine is doing many of your tasks. It’s natural for people to want to hide it.”
As a result, forums have begun to emerge where workers share strategies for staying under the radar.
In communities like Reddit, many people are looking for ways to quietly get around workplace bans, whether through high-tech workarounds (integrating ChatGPT into an app disguised as a workplace tool) or low-tech ways of hiding their usage (adding a privacy screen, or discreetly accessing the technology on their phones).
A growing number of workers who have come to rely on AI may have to start looking for ways to keep using it. According to an August 2023 BlackBerry survey of 2,000 IT decision-makers worldwide, 75% are currently considering or implementing bans on ChatGPT and other generative AI applications in the workplace, and 61% say such measures should be long-term or permanent.
While such bans could help companies keep sensitive information out of the wrong hands, Hancock says keeping generative AI away from workers, especially in the long term, could backfire.
“AI tools are poised to become part of the employee experience, so restricting access to them without providing insight into when and how they will be adopted — such as after introducing guardrails — can lead to frustration,” he says.
“This may prompt people to consider working somewhere with access to the tools they need.”
As for Matt, he found a solution to stay one step ahead: he and a colleague secretly started using the research platform Perplexity.
Like ChatGPT, it is a generative AI tool that returns detailed written responses to simple prompts in an instant.
Matt prefers Perplexity over ChatGPT: it provides real-time information and quickly cites verifiable sources, which is ideal when his presentations require in-depth, up-to-date knowledge.
He checks it hundreds of times a day on his laptop, often when working from home, and uses it more than Google.
Matt hopes he can keep using his latest AI tools in secret for as long as possible.
For him, it’s worth the minor inconvenience of occasionally having to dim his laptop screen in the office — and not sharing resources with his team. “I prefer to maintain a competitive advantage,” he says.