Ethical AI: The renewed importance of safeguarding data and customer privacy in Generative AI applications
A recent study from the IMF found almost 40 per cent of global employment is now exposed to AI in some way, be it through spotting patterns in data or generating text or image-based content. As the reach of this technology expands, and more organisations employ it to boost productivity, so does the amount of data that algorithms consume. Of course, with great amounts of data comes great responsibility, and the spotlight is on ethical considerations surrounding data use and privacy concerns.
The conversation around data misuse extends beyond generative AI. Consumers are arguably savvier about whom they give their information to and the permissions they grant. This is a consequence of organisational data misuse in the past – individuals are fed up with spam texts and calls. Significant data breaches also frequently make the mainstream news, and word quickly spreads, tarnishing brand reputations.
In recent years, data regulations have tightened to help protect consumers and their information. However, we are only at the start of this journey with AI. While laws are being introduced elsewhere in the world to regulate the technology, like the EU's AI Act, the Australian government has yet to reach that stage. That said, in September, Canberra agreed to amend the Privacy Act to give individuals the right to greater transparency over how their personal data might be used in AI. The government has been put under pressure by business groups to prevent AI from causing harm and, in June 2023, a paper was published exploring potential regulatory frameworks. For the moment, however, the onus is primarily on individual organisations to handle their AI technologies responsibly. This includes where the initial training data is sourced and how user data is stored.
Using untrustworthy public data to train algorithms does have consequences. These include so-called 'hallucinations', where the AI generates incorrect information presented in a manner that appears accurate. Toxicity can also be an issue, where results contain inappropriate language or biases that can be offensive or discriminatory. Air Canada was recently ordered to pay damages to a passenger for misleading advice given by its customer service chatbot, which resulted in the passenger paying nearly double for their plane tickets.
On the other hand, if an organisation uses its own customer data for AI system training, it faces a distinct set of risks. Improper handling can result in the violation of data protection regulations, leading to heavy fines or other legal action. In December 2023, researchers at Google managed to trick ChatGPT into revealing some of its training material, and OpenAI is currently facing a number of lawsuits in relation to the data used to train its chatbot. In January, a data leak revealed that the Midjourney AI image generator was trained on the works of over 16,000 artists without authorisation, which could lead to significant legal action.
Many core business technologies, like contact centres, utilise large volumes of data, and these are often one of the first targets in a digital transformation. Continuous modernisation of CX is essential to meet the rising expectations of customers. AI instils new levels of intelligence in the platforms used by organisations, for example, anticipating customer needs, making tailored recommendations and delivering more personalised services.
Organisations need to evaluate platforms that have processes in place to safeguard data and privacy, especially if leveraging AI. So-called 'green flags' include compliance with the Notifiable Data Breaches (NDB) scheme and the PCI Data Security Standard (PCI-DSS). Building consumer trust and confidence in how their sensitive data and transaction history are leveraged and stored is essential. Adherence to relevant governance means organisations are reducing the risk of fraud and security breaches by improving data security and bolstering authentication methods, to name just a couple of necessary measures.
It can be easy to get into hot water when embarking on a new venture without expert guidance, and AI journeys are no exception. Partnering with a reputable organisation that understands how the technology best fits in a business can be the difference between success and failure. With Nexon's expertise, organisations have successfully leveraged a range of AI-powered solutions, from Agent Assist and Co-Pilot tools that streamline customer support workflows, to Predictive Web Engagement strategies that deliver personalised digital experiences and increase sales.
Nexon has forged a strategic partnership with Genesys, a global cloud leader in AI-powered experience orchestration, which prioritises ethical data sourcing and customer privacy. Genesys is committed to understanding and reducing bias in generative AI models, which it uses in its software to automatically summarise conversations for support agents and auto-generate email content for leads and prospects. This is achieved through ‘privacy by design’ principles enacted from the inception of its AI development, an emphasis on transparency into how the technology is applied and the use of tools to find and mitigate possible bias.
Genesys envisions a future where ethical considerations play a central role in all AI applications. Genesys AI brings together Conversational, Predictive and Generative AI into a single foundation, enabling capabilities that make CX and EX smarter and more efficient, and delivering meaningful, personalised conversations – digital and voice – between people and brands.
The company’s customer-centric approach ensures that its cloud platform and AI solutions meet ongoing needs and adhere to strict data, privacy and security protocols.
As AI elements are introduced, they are tested rigorously to ensure they do not violate the protections that its cloud platform promises. Unlike other solutions, Genesys AI was built securely from its inception. Genesys gives users control over AI use, providing insight into its impact on experiences and enabling continual optimisation for better outcomes. It also offers a thorough exploration of the transformative potential of AI and how to responsibly leverage its capabilities for unparalleled customer experiences. You can read more on this subject in the white paper 'Generative AI 101'.
Genesys has named Nexon a Partner of the Year twice in a row, thanks to its proven experience and expertise in delivering integrated digital CX solutions. This partnership solidifies the two companies' collaborative efforts to provide organisations with innovative AI-driven solutions while upholding the highest standards of data ethics and customer privacy. Through this strategic alliance, organisations can navigate the complexities of AI technology, harnessing its transformative potential and driving growth and customer satisfaction responsibly and sustainably.
Contact Nexon today to discover how its AI expertise can drive superior customer interactions and streamline your business operations.