Is using generative AI at work cheating?
- Diverse employee reactions and data privacy risks emerge from generative AI use at work, underscoring the need for clear guidelines.
- A lack of generative AI policies exposes companies to security risks and missed productivity opportunities.
- Adobe’s new AI Assistant in Acrobat exemplifies the potential of generative AI to improve workplace efficiency.
Generative AI brings transformative potential, coupled with unprecedented challenges. From automating mundane tasks to supporting complex decision-making, generative AI stands at the frontier of a new era of corporate efficiency and innovation. But its rapid integration into daily work processes is not without complications.
Recent findings from Veritas Technologies reveal that the integration of generative AI into workplaces is causing mixed reactions among employees, leading to internal divisions and heightened risks of disclosing confidential information.
A striking 80% of office workers in Singapore have admitted to using generative AI tools, such as ChatGPT and Bard, for work purposes. This includes potentially hazardous actions like entering their companies’ customer, employee, and financial data into these tools. On the other hand, around 24% not only refrain from using these tools but also believe colleagues who do should face salary deductions. Meanwhile, almost half (49%) argue that generative AI users should share their knowledge with the entire team, advocating for equitable skill distribution.
The need for guidelines on generative AI use
Despite differing views on using generative AI at work, an overwhelming 95% of respondents agree on the need for clear usage guidelines and policies. Yet only 43% report that their employers have established any mandatory protocols regarding its use.
Andy Ng, the vice president and managing director for the Asia South and Pacific region at Veritas, commented that the benefits of generative AI are undeniable, but critical questions associated with its use, such as ethical and cybersecurity concerns, remain unresolved.
“Without guidance on how or if to utilize generative AI, some employees are using it in ways that put their organizations at risk, even as others are reluctant to use it at all,” says Ng. “To harness the full potential of generative AI, organizations can put guardrails with effective generative AI guidelines and policies to minimize concerns related to data security and data privacy.”
The absence of explicit policies and guidelines for generative AI is a growing concern for organizations. Over a third (36%) of office workers acknowledge inputting sensitive data, such as customer and employee details and financial information, into generative AI tools. This risky behavior, likely driven by the fact that 53% of them do not realize such tools could leak sensitive data, poses significant compliance and privacy challenges.
This gap in generative AI policy is also causing missed opportunities for organizations. While 58% of workers in Singapore use generative AI tools weekly, a notable 20% don’t use them at all. The lack of employer guidance, reported by 62% of respondents, results in two primary issues: it potentially creates workplace divisions, as 56% feel that those using generative AI might have an unfair advantage, and it prevents many from enhancing their productivity through these tools. The reported benefits of using generative AI include quicker information access (63%), increased productivity (46%), automation of routine tasks (46%), idea generation (45%), and advice on workplace challenges (26%).
Adobe’s AI Assistant
In a related development highlighting the expanding reach of generative AI, Adobe recently announced the integration of generative AI into its Acrobat PDF management software. The new feature, named AI Assistant in Acrobat, is poised to reshape how users work with digital documents.
As reported by The Verge, the AI Assistant, now available in beta for paying Acrobat users, is designed as a conversational engine to make navigating and understanding information in lengthy documents more efficient. This tool can summarize files, answer questions, and offer recommendations based on a document’s content, letting users interact more intuitively with their documents.
This innovation is particularly relevant for workplace efficiency and data handling. For example, the AI Assistant can significantly reduce the time spent managing extensive text documents, assisting with tasks such as quickly locating research information or condensing lengthy reports into concise summaries for emails, meetings, and presentations. The tool is compatible with various document formats, including Word and PowerPoint, and adheres to Adobe’s stringent data security protocols. It does not store or use customer document data for training purposes, addressing some privacy concerns prevalent in the workplace.
However, the updated Adobe Acrobat has not been without complaints since its release. One user expressed frustration on X, saying, “Adobe Acrobat is becoming increasingly unstable/glitchy. How can a $250bn company not get a PDF reader perfectly right?” They elaborated on the issues: “Highlight text in a PDF and it glitches out and brings in the text from the previous page. Have a few too many tabs open and get a note about ‘too much memory’ and a blank page.”
Another user vented on X about the updated Adobe Acrobat interface in Google Chrome: “It takes forever to load, it lags, it has a bunch of tools literally no one asked for. I just want to look at a pdf, why did they make it this complicated??”
Despite these user complaints, the introduction of Adobe’s AI Assistant in Acrobat underscores the potential of generative AI tools to enhance productivity and streamline corporate workflows. It also highlights the importance of developing comprehensive guidelines and policies for generative AI use as organizations adopt these technologies to stay competitive and efficient. Adobe’s move to incorporate generative AI into its widely used software could set a precedent for how other companies integrate such technology in a user-friendly and secure manner.
The demand for clear generative AI policies at work
A significant majority of employees in Singapore—over 80%—express a desire for clear guidelines, policies, and training on generative AI usage within their organizations. The primary reasons include understanding proper tool usage (70%), risk mitigation (51%), and fostering fairness in the workplace (30%).
Ng emphasizes a fundamental principle: “One thing is certain: if you fail to plan, plan to fail. Without establishing any proper guidelines on the use of generative AI, organizations could face regulatory compliance violations.”
He further adds, “To enjoy the benefits without increasing risk, it is critical for organizations to develop, implement and clearly communicate guidelines and policies on the appropriate use of generative AI, along with putting the right data compliance and governance tools in place for ongoing enforcement.”