3 Steps to Successfully Automate Copilot for Microsoft 365 Implementation
https://techwireasia.com/05/2024/how-do-i-use-copilot-ai-best-in-my-business/ | Wed, 01 May 2024

Consolidating data and workflows around Microsoft 365 and its AI core, Copilot, means companies benefit from the information they use and gather naturally.

Written in collaboration with Janine Morris, Senior Solution Engineer, AvePoint

Microsoft Copilot for Microsoft 365 is the generative AI that promises to revolutionise organisational efficiency: surfacing strategic insights, finding information, assisting with content curation, even summarising and planning your day at the click of a button. What that picture leaves out is that deploying the technology isn't as simple as flicking a switch and 'going live' with Copilot for Microsoft 365. While the competitive advantage Copilot for Microsoft 365 offers is considerable, protecting your organisation and its information assets requires more than activating the licences.

This article examines the preparation organisations should undertake to mitigate potential pitfalls before integrating Copilot for Microsoft 365.

Despite the surge in interest over the last 12 months, generative AI that can converse with everyday users is a relatively young discipline. Just a few years ago, AI was confined to academic research institutions, the subject of peer-reviewed papers and the research conference circuit. Organisations like OpenAI brought the technology to everyday users, and workplaces have since realised the possibilities that context-based GenAI offers, igniting interest in Copilot for Microsoft 365.


Copilot for Microsoft 365 allows users and workgroups to automate their day, collaborate more efficiently, be more productive, and make better-informed decisions using data collated from across the organisation, all based on information reserves that grow continually.

Alongside the benefits generative AI brings, organisations embracing the technology must address significant concerns around security and data governance.

Here we delve into three steps crucial for the successful implementation of generative AI:

Step One: Prepare the Environment and Consolidate Your Data

“Make Microsoft 365 the core of organisational information “

Making Microsoft 365 the core repository for business data, and therefore business intelligence, is the most operationally logical choice for the majority of organisations. Most IT professionals understand that information becomes far more valuable when it is at users' fingertips in the platforms they use most, rather than kept in isolated silos. Migrating data into M365 allows the smart algorithms of Copilot for Microsoft 365 to access relevant information, enhancing the AI's understanding of how your organisation works.

Fortunately, the AvePoint Confidence Platform helps organisations achieve this consolidation of content through migration, a process which goes beyond a straightforward lift-and-shift.

AvePoint Fly empowers organisations to move email and content from on-premises systems and collaboration platforms like Google Workspace, Box, Dropbox and Slack to M365. It discovers and maps existing applications and content, creating migration schedules that minimise operational disruption and downtime. With data from multiple legacy tenants in one place, businesses can start capitalising on their digital resources immediately.

Step Two: Identify and Organise

“Strengthen your data to enable strong Copilot for M365 results”

AvePoint's Insights and Policies provide the framework to identify high-risk content and automatically build efficient, compliant workflows to remediate potential breaches. Reviewing and strengthening information security establishes a solid foundation: robust cyber protections, identification of sensitive and overshared content, and classification of information according to sensible privilege rules. This is a vital preparatory stage to protect your intellectual property (IP) and ensure your information remains secure and accessible only to the right audience.
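AvePoint's detection logic is proprietary, but the general shape of rule-based sensitive-content flagging is easy to illustrate. The following minimal Python sketch uses hypothetical regex patterns standing in for the far richer detection (checksums, machine learning, contextual analysis) a real platform applies:

```python
import re

# Hypothetical patterns for common sensitive-data formats. A production
# classifier validates matches (e.g. Luhn checksums for card numbers)
# and weighs context; this sketch only shows the flagging step.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "tax_file_number": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in the text."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

doc = "Contact jane@example.com; card 4111 1111 1111 1111."
print(flag_sensitive(doc))  # ['credit_card', 'email']
```

Items flagged this way can then be routed into a remediation workflow, for example revoking a sharing link or applying a sensitivity label.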

AvePoint Opus streamlines the classification and organisation of information within M365 (amongst other repositories), ensuring a standard approach to manage content that minimises user intervention. The aim is to ensure information remains accessible and supports compliance with relevant standards and legislation while being available to those who need it.

Simultaneously, organisations should reassess privilege hierarchies and security rules, given the large investments in solutions like Teams, Groups, OneDrive, SharePoint, Power Automate and so on. Resolving and revising access rights accrued over the years bolsters security and internal operational efficiency through simplification and consolidation.


Step Three: Continuous Data Management

“Ensuring relevant resources”

Tools such as Copilot for Microsoft 365 increase their value to the organisation over time, as they continually improve and refine their capabilities with the accumulation of information.

This is, however, contingent on the accuracy of your information. Keeping information that is no longer relevant and serves no purpose can degrade the accuracy of machine learning's findings: data may describe older products or abandoned work practices that no longer reflect the organisation's current environment.

It is nevertheless important to properly store and archive all data, both for compliance and as a source of contextual information for business intelligence with Copilot. Here, AvePoint Opus has you covered, keeping data accessible and secure.

Addressing 'data hygiene' regularly is imperative, yet few organisations have the resources to manage every aspect of their information assets manually. AvePoint automates content lifecycle management, archiving inactive or ROT (redundant, obsolete, trivial) content and disposing of content that has exceeded its regulatory lifecycle. Addressing data hygiene also offers a mechanism for reducing the storage costs associated with M365.
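The decision at the heart of such lifecycle automation is simple to picture. Below is a minimal, hypothetical sketch of the archive-or-dispose logic applied at scale; the thresholds and record shape are illustrative assumptions, not AvePoint's actual rules:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical content record; a real lifecycle tool reads this
# metadata directly from the repository.
@dataclass
class Item:
    name: str
    last_accessed: datetime
    retention_expired: bool  # past its regulatory lifecycle?

INACTIVITY_THRESHOLD = timedelta(days=365)

def lifecycle_action(item: Item, now: datetime) -> str:
    """Decide what a content-lifecycle policy would do with an item."""
    if item.retention_expired:
        return "dispose"  # regulatory lifecycle exceeded
    if now - item.last_accessed > INACTIVITY_THRESHOLD:
        return "archive"  # inactive / ROT content
    return "keep"

now = datetime.now(timezone.utc)
items = [
    Item("q3-forecast.xlsx", now - timedelta(days=30), False),
    Item("2016-campaign.pptx", now - timedelta(days=2000), False),
    Item("expired-contract.pdf", now - timedelta(days=10), True),
]
for item in items:
    print(item.name, "->", lifecycle_action(item, now))
```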

As outlined in this article, shadow IT and siloed information can significantly impair generative AI-driven content curation. Simply having access to Copilot for M365 as part of an M365 licence does not guarantee quality generated content. Without careful consideration of the organisation's information structures, classification methods and content lifecycle management, the potential power of generative AI tools goes unrealised.

For organisations committed to realising the maximum benefits of Copilot for M365, we recommend seeking specialised guidance to become “AI ready”. As an early adopter of machine learning technologies, AvePoint has assisted hundreds of organisations across various customer segments to unlock the full potential of generative AI.

With AvePoint's expert guidance, you can navigate the Copilot for Microsoft 365 journey confidently, leveraging technology that is specifically designed to deliver optimal results.

To find out more about Copilot for M365 and how it and AvePoint can transform your organisation's approach to data-driven operations, contact AvePoint for a demo.

Trustworthy AI – the Promise of Enterprise-Friendly Generative Machine Learning with Dell and NVIDIA
https://techwireasia.com/04/2024/safe-ai-ml-trustworthy-artificial-intelligence-not-compromise-intellectual-property-ip/ | Mon, 29 Apr 2024

Dell, with its PowerEdge range, and NVIDIA offer generative artificial intelligence that works without fear of compromise or hallucination, protecting your IP from misuse and gamification.

Any early adoption of an emerging technology that promises a huge market advantage comes with risks. Generative AI promises organizations the potential for significant market differentiation, dramatic cost reduction, and a slew of other pluses, such as improved CX, but its safe implementation is by no means a given. Its dangers include potential resource overrun, customer-facing misfires, and significant PR fallout.

Recent mainstream media coverage of Air Canada's AI chatbot misstep and a New York lawyer's submission of hallucinated case law shows that, at least in the public's perception, running effective AI instances leaves a great deal to be desired.

Perhaps the disparity between the technology's potential and its real-world worst-case outcomes is down to the nature of decision-making in large organizations. In fact, an Innovation Catalysts study published this year found that 81% of business decision makers believe there are reasons to exclude the IT department from strategic business decision making. The IT function has a responsibility to investigate and advocate for technology's benefits, but there may be broader enterprise concerns that need to be addressed by all stakeholders, including IT.

The advantages of deploying AI in workflows for data processing, creativity, and operations are well-known, although every use case varies according to the organization and its approach. But harnessing the technology in production where results are based on local data means considering safeguards around intellectual property and legal & compliance issues, plus the need to embed transparency into the solution. This transparency is important to satisfy regulatory authorities concerned over issues like data processing and sovereignty, as well as customers’ and service users’ concerns about privacy and data practice.

Trustworthy generative AI describes a set of smart services and practices that ensure operation that is safe, legally compliant, and transparent. Building those elements is not a simple undertaking: it adds significantly to the normal overheads associated with machine learning (compute and storage) and brings extra processes like query and response validation, bias monitoring, data sanitization, and provenance checking.


Some vendors offer pre-trained models that can form part of the basis of a GenAI solution. But until now, no solution on the market has combined data security, use and development guardrails, manageability, and vendor support: the elements that are mandatory to transform what is essentially experimental (and therefore risky) into a reliable production-ready platform. That is what Dell and NVIDIA can now offer.

The AI industry is doing its best to address the needs of larger organizations concerned about the potential misfirings a premature rollout of GenAI could create. NIST's Artificial Intelligence Safety Institute Consortium (AISIC), for example, was established to foster safe and trustworthy artificial intelligence and comprises more than 200 member bodies. It produces empirically backed standards for AI measurement and policy, giving organizations leveraging GenAI guides to safe and legal deployment.

NVIDIA, a key member of AISIC, now offers NeMo Guardrails, which is designed to support enterprise data security and governance standards, acting as a two-way arbiter between user queries and AI responses.
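For developers, the open-source NeMo Guardrails toolkit exposes that arbiter through a small Python API. Here is a minimal sketch; the contents of the ./config directory (the rails definitions and model settings) are assumed, and their schema is documented by NVIDIA:

```python
from nemoguardrails import LLMRails, RailsConfig

# Load rail definitions (topical, safety, and security policies) from a
# config directory -- its contents are an assumption in this sketch.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# The rails sit between the user's query and the model's answer,
# checking both directions against the configured policies.
response = rails.generate(messages=[
    {"role": "user", "content": "Summarise our Q3 revenue figures."}
])
print(response["content"])
```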

In enterprise use cases, working with internal data also brings challenges with regard to an organization's intellectual property. Without proper safeguards, any GenAI instance represents a potential danger to an organization's ongoing viability. It's to meet that challenge and those detailed above that Dell and NVIDIA have partnered on a GenAI system with topical, safety, and security guardrails: the closest thing to a production-ready, drop-in GenAI solution currently on the market.

Dell Technologies’ Generative AI Solutions encompass best-of-breed infrastructure designed to greatly simplify the adoption of generative AI for organizations that need the power of machine learning technologies to leverage the value of their digital assets without compromising their ethos, data, customers, or third parties.

Based around the Dell PowerEdge XE9680 GPU-accelerated server, it’s designed for generative AI training, model customization and large-scale inferencing. It comes with NVIDIA AI Enterprise software, which allows rapid deployment of production-ready models in local, hybrid, and remote computing topologies.

The Dell Generative AI Solution range is highly scalable: hardware can be expanded according to need, with eight NVIDIA H100 or A100 GPUs fully interconnected via NVLink. The air-cooled 6U devices support any variation of local and remote deployment at a lower TCO than equivalent processing power from other vendors.

With NVIDIA AI Workbench, developers get easy GPU environment setup and the freedom to work, manage, and collaborate across workstation and data center platforms, regardless of skill level.

The combination of hardware and software designed from the ground up for generative AI development and deployment comes with guardrails, data governance, and security baked in. Together, the two mean that organizations can deploy powerful AI-based applications safely and responsibly.


Building trustworthy generative AI means greater buy-in from business decision-leaders outside IT, as many of their rightly-held concerns around the technology are addressed: transparent development and use, safeguarded IP and customer-facing responses, statutory compliance, and best-in-class operating costs.

To find out more about how the Dell Generative AI Solution portfolio takes machine learning to a fully-viable production setting, contact your nearest representative.

Dell Technologies: https://www.dell.com/en-sg/dt/solutions/artificial-intelligence/generative-ai.htm

NVIDIA: https://www.nvidia.com/en-sg/ai-data-science/generative-ai/

Strategies for Democratizing GenAI
https://techwireasia.com/04/2024/the-new-dell-poweredge-xe9680-server-gpu-ai-genai-ml-best-advanced-hardware-for-tensorflow/ | Thu, 25 Apr 2024

Powered by the AMD Instinct MI300X GPU accelerator, this new AI-focused server makes light work of large training data sets and promotes open, open-source, democratic AI.

Generative AI is perhaps the most groundbreaking change in technology since the birth of the internet; rarely has software had as much impact on the business world as machine learning algorithms have. Always at the frontier of tech exploration, the APJ region is already pushing the boundaries of what AI can offer: in media, business intelligence, data processing, marketing, engineering and a hundred other areas.

In one of the largest and widest-reaching surveys on IT in recent years, the Innovation Catalysts study quizzed over 6,000 respondents globally, who gave their answers to a range of queries around innovation, AI, and ML and how their organizations were responding to new technology. (The full survey from Dell Technologies is available here.) The majority of IT professionals (85%) agreed with the proposition that AI and GenAI will significantly transform their industry. And 76% of respondents reported that their organization is already providing intelligent technology in the form of AI optimization software that improves their work experiences.

Given the obvious benefits of AI to all business functions, access to the compute power and tools required for advanced intelligent algorithms is a prerequisite for today's businesses. Yet access to the most advanced AI capabilities can be challenging, especially for businesses looking to innovate and differentiate themselves. The dominance of a few major technology providers has led to a proprietary approach, where the latest AI innovations are tightly controlled and not easily accessible to a broader range of organizations. That proprietary nature presents obstacles for businesses, especially in the APJ region: they want to use the latest and greatest AI capabilities, but the lack of openness and compatibility between different AI systems gets in their way.


The need for openness

Developers need open standards to create new uses for AI because they give the flexibility to deploy solutions on-premises, in the cloud, and on edge devices: wherever, in fact, the business's needs dictate. In parallel is the need for compute engines optimized for different devices, capable of delivering AI performance at the point of consumption. Open standards in software and hardware enable interoperability, giving customers the freedom to leverage the AI tools and infrastructure that best suit their unique needs and workflows. This empowers businesses to innovate with generative AI on their own terms, without being limited by a single vendor's ecosystem.

What the technology industry must pursue, therefore, is the democratization of GenAI, a goal realized through open ecosystems and silicon diversity. Organizations pursuing this strategy of flexibility and choice will gain a significant strategic edge over competitors who rely primarily on public cloud services for their AI workloads, and access to the latest hardware optimized for generative AI amplifies that advantage further.


Powerful hardware

For organizations developing custom AI models and processing large bodies of data, the latest hardware designed from the ground up to be optimized for GenAI significantly lowers TCO, meaning leeway for research and experimentation even within tight budgets.

Better hardware also means projects reach production quicker, and end-users get faster results and an overall better experience. The new Dell PowerEdge XE9680 Server is designed for today’s GenAI workloads. It offers up to eight AMD Instinct MI300X accelerators and provides 1.5TB of coherent GPU accelerator memory per server (the highest ratio in the GPU market currently). That means a lower DC footprint, yet with an increased inference capacity, so very large training datasets can be ingested quickly.

Open software

Hardware power and capabilities unlock an organization’s freedom to innovate without cost overrun, but the software running on it has to offer compatibility with existing AI frameworks, libraries, and models for true portability and compatibility.

Without openness, that portability across platforms is impossible, and AI can't be considered democratized. The AMD Instinct MI300X accelerators in the Dell XE9680 server, ready to ship in May, offer over 21 petaflops of FP16 performance, yet out of the box run the common data science standards PyTorch and TensorFlow, and natively support JAX, Open Neural Network Exchange (ONNX), and OpenAI Triton inside the AMD ROCm software stack.

ROCm consists of a collection of drivers, development tools, and APIs that enable GPU programming from low-level kernel to end-user applications, and brings together hardware and software optimized for GenAI, large models and fast time-to-market for a business’s AI projects.
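In practice, ROCm builds of PyTorch expose the same torch.cuda device API that CUDA-targeted code already uses, with HIP underneath, so most existing training and inference code runs unchanged. A minimal sanity check, assuming a ROCm build of PyTorch on Instinct hardware:

```python
import torch

# On a ROCm build of PyTorch, the familiar torch.cuda API is backed by
# HIP, so CUDA-targeted code generally runs unchanged on Instinct GPUs.
print("ROCm/HIP version:", torch.version.hip)  # None on CUDA-only builds

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # FP16 matrix multiply: the precision behind the quoted petaflops figure
    a = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")
    b = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")
    print((a @ b).shape)  # torch.Size([4096, 4096])
```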

AMD’s ROCm is optimized for Instinct MI300X accelerators and is a freely available open software stack that’s capable of evolution and adaptation according to a business’s evolving needs.

An integral part of democratized GenAI is, of course, the open-source ethos. OSS (open-source software) drives quality and excellence, with thousands of users and developers refining and improving the code, allowing increased innovation.


Better together

Open source also equates to open flexibility: developers can create GenAI-based products and services that operate on a range of devices, with upstream support available from the hundreds of open-source projects that dominate the GenAI world.

The ultimate flexibility possible today is provided by the combination of the Dell PowerEdge XE9680, the 3rd Gen AMD CDNA (Compute DNA) architecture of AMD's Instinct MI300X accelerators, and ROCm 6 software. A firm foundation and open portability give businesses and organizations the tools and infrastructure needed to innovate at this critical juncture in technology's evolution.

GenAI’s transformative powers offer APJ businesses a unique opportunity to develop the next generation of AI-powered software outside the constraints and deliberate roadblocks placed by big tech’s policies of separation and compartmentalization. The horizons can open with just a straightforward deployment configuration tailored to their needs and running on optimized hardware.

To find out more about the Dell PowerEdge XE9680, head to these pages according to your geography: Australia, New Zealand, Singapore, or India. Plus, you can head here to read more about the AMD Instinct MI300X accelerator at the heart of Dell’s next-gen AI-focused hardware.

The criticality of endpoint management in cybersecurity and operations
https://techwireasia.com/04/2024/endpoint-management-systems-the-best-and-how-to-achieve-safety/ | Wed, 24 Apr 2024

Endpoint security and management are the foundation of a safer, more reliable network in 2024. We consider some steps organisations can take to secure their endpoints.

Most events that lead to data loss, corruption, or theft happen on the devices we use to get a day's work done. In computing terminology, those devices are called endpoints, a definition that extends to any computing device capable of connecting to, and communicating with, an organisation's network.

Many endpoint devices are commonly recognisable: the smartphone in your back pocket, the desktop computer or laptop. However, endpoints can also include servers – powerful computers that provide digital services to users, such as file storage, data retrieval, or commonly used applications. When all an organisation’s endpoints are added up, they can number hundreds of thousands in large enterprises.


Often, even discovering the existence of every endpoint is challenging, a situation made more complicated by the COVID-19 pandemic and the continuing habit of working remotely. Endpoints suddenly included computers in people's homes and personal laptops used during periods of lockdown.

Within just a couple of years from 2020, the number of endpoints using a company's network rose sharply, and the number of cybersecurity incidents involving endpoints rose in step. The average cost of a security breach also rose, from $7.1m to $8.94m [PDF].

The higher number of endpoints in today's businesses also means more devices with at least the capability to delete, corrupt or compromise valuable data. Managing endpoints, therefore, means ensuring that devices stay safe, whether from the actions of bad actors, from misuse, or from operator mistakes.

It's clear, therefore, that managing and securing these devices needs to be at the forefront of any organisation's cybersecurity and device management priorities. A properly managed and monitored endpoint fleet gives IT teams a clear definition of the devices they're responsible for and a head start on tracking down and responding to incidents caused by attackers or so-called internal threats. It also shows which devices are at greater risk of compromise, telling teams which endpoints need updating, patching or replacing, and with what priority.

Putting in place a rigorous endpoint management system delivers some of the best ROI of any security investment, and should be the foundation of a range of measures designed to protect the organisation's users, digital assets and intellectual property.

Best practices in endpoint management are discussed in detail in “The Endpoint Defense Playbook: Locking Down Devices with NinjaOne“, which includes advice on how large fleet management tasks can be automated. But for the purposes of this article, let’s consider some steps that any company can take to close off many of the ways that endpoints put their owners’ digital assets at risk.

Audit
Before an IT team can know what it needs to monitor, manage and protect, it has to know what devices appear on the network. An audit is the unambiguous first step, although auditing has to be an ongoing process: endpoints change day to day as the organisation evolves and devices cycle over time, so a real-time network map is required.

Secure access
Users, like endpoints, have to be able to prove who they are, and be granted privileges to operate on the company’s network. Passwords, two-factor authentication and single sign-on (SSO) are methods by which employees show they have the rights to be present on the network.

Zero-trust
Zero-trust is a security posture that dictates users and endpoints have no privileges whatsoever on a network by default. Policies then grant access to applications, services, and devices on a per-case basis. Where no policy applies, the system reverts to zero trust: no access.
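As a sketch of the principle, default-deny access control reduces to a single rule: no matching policy means no access. The policy shapes and attributes below are illustrative assumptions only:

```python
# Default-deny (zero-trust) policy evaluation, minimally sketched.
POLICIES = [
    {"role": "finance", "resource": "erp", "max_patch_age_days": 30},
    {"role": "engineer", "resource": "git", "max_patch_age_days": 14},
]

def is_allowed(role: str, resource: str, patch_age_days: int) -> bool:
    """Grant access only when an explicit policy matches; otherwise deny."""
    for p in POLICIES:
        if (p["role"] == role and p["resource"] == resource
                and patch_age_days <= p["max_patch_age_days"]):
            return True
    return False  # no matching policy -> zero trust, no access

print(is_allowed("finance", "erp", 10))   # True
print(is_allowed("finance", "git", 10))   # False: no policy for this pair
print(is_allowed("engineer", "git", 60))  # False: device too far out of date
```

Note the patch-age attribute: the same mechanism can enforce the update discipline discussed under 'Patch & update' below.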

Encrypt
Encryption means that any data exchanged inside the network or with the outside world is obfuscated and therefore resistant to eavesdropping. Data at rest should also be encrypted, so physical theft of, for instance, storage drives will not yield readable data to third parties.
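As a minimal illustration of encryption at rest, here is a sketch using the widely used Python cryptography package; key management, the genuinely hard part in practice, is out of scope:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # store in a KMS/HSM, never beside the data
f = Fernet(key)

ciphertext = f.encrypt(b"customer ledger, Q3")
print(ciphertext)             # unreadable without the key
print(f.decrypt(ciphertext))  # b'customer ledger, Q3'
```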

BYOD policies
Since the emergence of the modern smartphone in the mid-2000s, users have often preferred the convenience of at least occasional use of their own devices. BYOD (bring your own device) policies can determine which device types are allowed and stipulate which software versions may run and operate on the network. Enacted policies prevent insecure operating systems and software from running on users' devices and ensure a maximum level of security among what is an unpredictable population of endpoints.

Proactive scanning
Endpoint detection and response (EDR) systems scan endpoints and log activity to flag anomalous behaviour to users or to systems administrators. Alerts can tell IT staff when action has to be taken to address apparent threats or to surface unusual patterns of behaviour that need further investigation.


Patch & update
Software vendors constantly update their code to keep it as safe as possible from malicious activity. Endpoints should run the latest versions of all software (including the operating system) so that no device carries a known attack vector. Zero-trust policies apply here too: endpoints that are not fully up to date can be denied access or given limited privileges by default, as in the policy sketch above.

Remediation planning
Despite all preventative measures, every network will suffer some security or misuse issues. It is essential that IT teams have coherent plans to follow when a data breach or corruption is suspected. Remediation planning also requires recovery procedures to be practised, so teams know the steps to take in the event of an incident.

Next steps
Endpoint management and security are mutually supportive processes that together form the basis for strong IT security and data loss prevention. In very small companies, it’s possible to manually implement endpoint management on a per-device basis. But in the majority of cases, an endpoint management software platform is necessary to oversee and, where possible, automate management policies.

Creating a strong and safe IT environment for any organisation is essential for a business to operate in 2024, and it’s a subject that requires a great deal of attention. You can read in more detail about the best practices to implement endpoint management in “The Endpoint Defense Playbook: Locking Down Devices with NinjaOne“, which is available to download now.

Ethical AI: The renewed importance of safeguarding data and customer privacy in Generative AI applications
https://techwireasia.com/04/2024/ethical-considerations-in-ai-data-privacy/ | Tue, 09 Apr 2024

A recent study from the IMF found almost 40 per cent of global employment is now exposed to AI in some way, be it through spotting patterns in data or generating text or image-based content. As the realm of this technology expands, and more organisations employ it to boost productivity, so does the amount of data that algorithms consume. Of course, with great amounts of data comes great responsibility, and the spotlight is on ethical considerations surrounding data use and privacy.


The conversation around data misuse extends further than generative AI. Consumers are arguably savvier about whom they give their information to and the permissions they grant. This is a consequence of organisational data misuse in the past – individuals are fed up with spam texts and calls. Significant data breaches also frequently make the mainstream news, and word quickly spreads, tarnishing brand reputations.

In recent years, data regulations have tightened to help protect consumers and their information. However, we are only at the start of this journey with AI. While laws are being introduced elsewhere in the world to regulate the technology, like the EU's AI Act, the Australian government has yet to reach that stage. That said, in September, Canberra agreed to amend the Privacy Act to give individuals the right to greater transparency over how their personal data might be used in AI. The government has been put under pressure by business groups to prevent AI causing harm and, in June 2023, a paper was published exploring potential regulatory frameworks. For the moment, however, the onus is primarily on individual organisations to handle their AI technologies responsibly, including where the initial training data is sourced and how user data is stored.

Using untrustworthy public data to train algorithms does have consequences. These include so-called 'hallucinations', where the AI generates incorrect information presented in a manner that appears accurate. Toxicity can also be an issue, where results contain inappropriate language or biases that can be offensive or discriminatory. Air Canada was recently ordered to pay damages to a passenger for misleading advice given by its customer service chatbot, which resulted in the passenger paying nearly double for their plane tickets.

On the other hand, if an organisation uses its own customer data for AI system training, it faces a distinct set of risks. Improper handling can result in the violation of data protection regulations, leading to heavy fines or other legal action. In December 2023, researchers at Google managed to trick ChatGPT into revealing some of its training material, and OpenAI is currently facing a number of lawsuits in relation to the data used to train its chatbot. In January, another data breach exposed that the Midjourney AI image generator was trained on the works of over 16,000 artists without authorisation, which could lead to significant legal action.


Many core business technologies, like contact centres, utilise large volumes of data, and these are often one of the first targets in a digital transformation. Continuous modernisation of CX is essential to meet the rising expectations of customers. AI instils new levels of intelligence in the platforms used by organisations, for example, anticipating customer needs, making tailored recommendations and delivering more personalised services.

Organisations need to evaluate platforms that have processes in place to safeguard data and privacy, especially if leveraging AI. So-called 'green flags' include compliance with the Notifiable Data Breaches (NDB) scheme and the PCI Data Security Standard (PCI DSS). Enabling consumer trust and confidence in how their sensitive data and transaction history are leveraged and stored is essential. Adherence to relevant governance means organisations reduce the risk of fraud and security breaches by improving data security and bolstering authentication methods, to name just a couple of necessary measures.

It can be easy to get into hot water when embarking on a new venture without expert guidance, and AI journeys are no exception. Partnering with a reputable organisation which understands how the technology best fits in a business can be the difference between success and failure. With Nexon's expertise, organisations have successfully leveraged a range of AI-powered solutions, from Agent Assist and Co-Pilot tools that streamline customer support workflows, to Predictive Web Engagement strategies that deliver personalised digital experiences and increase sales.

Nexon has forged a strategic partnership with Genesys, a global cloud leader in AI-powered experience orchestration, which prioritises ethical data sourcing and customer privacy. Genesys is committed to understanding and reducing bias in generative AI models, which it uses in its software to automatically summarise conversations for support agents and auto-generate email content for leads and prospects. This is achieved through ‘privacy by design’ principles enacted from the inception of its AI development, an emphasis on transparency into how the technology is applied and the use of tools to find and mitigate possible bias.

Genesys envisions a future where ethical considerations play a central role in all AI applications. Genesys AI brings together Conversational, Predictive and Generative AI into a single foundation, enabling capabilities that make CX and EX smarter and more efficient and delivering meaningful, personalised conversations (digital and voice) between people and brands.

The company’s customer-centric approach ensures that its cloud platform and AI solutions meet ongoing needs and adhere to strict data, privacy and security protocols.


As AI elements are introduced, they are tested rigorously to ensure they do not violate the protections that the Genesys cloud platform promises. Unlike other solutions, Genesys AI was built securely from its inception, and Genesys provides users with control over AI use, insight into its impact on experiences, and the means to continually optimise for better outcomes. For a thorough exploration of the transformative potential of AI and how to responsibly leverage its capabilities for unparalleled customer experiences, read the white paper 'Generative AI 101'.

Genesys has named Nexon a Partner of the Year twice in a row, thanks to its proven experience and expertise in delivering integrated digital CX solutions. This partnership solidifies the two companies' collaborative efforts to provide organisations with innovative AI-driven solutions while upholding the highest standards of data ethics and customer privacy. Through this strategic alliance, organisations can navigate the complexities of AI technology, harnessing its transformative potential and driving growth and customer satisfaction responsibly and sustainably.

Contact Nexon today to discover how its AI expertise can drive superior customer interactions and streamline your business operations.

Insurance everywhere all at once: the digital transformation of the APAC insurance industry
https://techwireasia.com/04/2024/insurance-everywhere-all-at-once-the-digital-transformation-of-the-apac-insurance-industry/ | Mon, 08 Apr 2024

Explore the revolution in APAC insurance with insights on digitalization, AI, and emerging trends.

Insurance has never been a stagnant industry, but the current era is proving to be one of unprecedented change. With the rise of digitalization, changing customer expectations, and the emergence of new business models like embedded insurance, the insurance landscape is evolving at an accelerated pace. As consumers demand seamless experiences and personalized products, insurers must urgently address their technology infrastructure and adopt an open technology strategy. This means embracing cloud technology, leveraging AI and data analytics, and forming strategic partnerships to stay competitive. The stakes are high, and the time to act is now: failure to do so could mean irrelevance and loss of market share in an industry that is rapidly transforming.

The current state of the insurance industry

TechWireAsia spoke to Nikola Djokic, the Managing Director of Insurance at SAP Fioneer, about the current state of the insurance industry and the challenges it faces. “Insurance is undergoing a revolution,” he said. “The rise of the insurtech and access to data has allowed non-insurance brands to enter the market and offer insurance as part of their offering, adding value to their customers and generating new revenue. Rather than a separate vertical industry, insurance is now taking a role in several ecosystems. This all represents significant new market opportunities for insurtechs, new players and incumbents alike, but the change is rapid, and traditional insurers need to adapt quickly to take advantage of and benefit from the new world order.”

Digitalization has traditionally been hampered in insurance due to legacy systems. Often built over decades, they have created data silos and operational inefficiencies that hinder the adoption of modern technology. Insurers have struggled to integrate new digital solutions seamlessly into their existing infrastructure, leading to fragmented customer experiences and slow response times. Moreover, the risk-averse nature of the insurance industry has contributed to a reluctance to invest in digital transformation initiatives. Insurers have been cautious about migrating sensitive data to the cloud and adopting emerging technologies like AI and machine learning due to concerns about data security, regulatory compliance, and the potential for disruption to established business processes.

Mr Djokic said: “Decisions need to run from the user interface through the middle office to the back office and back again, and these have typically been disconnected. The process of assessing a customer for a policy, or a claim for a payment, traditionally required (and in many cases still requires) a lot of manual intervention.”

Third-party data has been available to facilitate these assessments, but it is rarely integrated into the core insurance solution, making it challenging to meet customer expectations for digital immediacy. “This has allowed new players – neo-insurers – unencumbered by legacy systems or processes to leapfrog ahead in niche areas,” said Mr Djokic.

Insurance penetration in Asia, at around two percent in developed markets and one percent in emerging markets, remains strikingly low given the region's vast population of over four billion people. TechWireAsia caught up with Chirag Shah, the Managing Director of JAPAC Digital and Core Insurance at SAP Fioneer, to understand the growth potential of the industry in APAC.

He said: “The Asia Pacific insurance market is experiencing shifts driven by post-COVID-19 customer perceptions, particularly in healthcare. Rising awareness of the protection gap has led to increased demand for health and life insurance products, especially in emerging markets, where insurance penetration and density are lower compared to developed markets.

“Insurers must navigate challenges such as mobility, cybersecurity, and climate change while enhancing value creation within existing operations like claims and underwriting. Challenges include slowing growth, low penetration, and rising combined ratios, particularly in emerging markets.”

Insurance everywhere


“‘Insurance everywhere’ alludes to embedded insurance,” said Mr Djokic. “Delivering insurance at the point it’s needed, as part of a purchase process, circumventing the need for a consumer or business purchaser to undertake a separate set of steps to insure their car, home, electronic item, or holiday.” By making insurance products more accessible and convenient, insurers can reach a broader audience and meet the evolving needs of modern consumers. Accessibility also opens up new opportunities for insurers to partner with other industries and platforms, expanding their reach and market presence.

Mr Djokic added: “[It] has the potential to increase the level of insurance generally, which is good news not just for the industry but society as a whole, as it becomes more protected. But it also means that non-insurance companies can take market share from the traditional players, unless those players turn the situation to their advantage, and become the ones offering insurance solutions to new industries.”

Personalization with data and AI

Traditional insurance practices rely on limited data and broad assumptions, often leading to unfair assessments of risk based on general demographics. This has sparked frustration among consumers who feel penalized for careful behavior while subsidizing riskier individuals. However, emerging technologies like telematics and IoT devices are beginning to change this dynamic by allowing personalized assessments and rewards for behaviors like safe driving and healthy lifestyles.

“We have seen examples of health insurance companies monitoring exercise levels with fitness trackers and dropping premiums accordingly,” said Mr Djokic. “There is now more data accessible to the insurer to contribute to the risk assessment, be it social media or online behavioral data, credit scores or – in the case of embedded insurance – data held or gathered by the non-insurance company.

“For the first time, we’re witnessing a ‘win-win’ in the industry, where data helps the insurer reduce their risks and pass this on in the form of reduced premiums to the customer. As consumers become accustomed to this level of tailoring, it will be essential for insurers to offer personalized insurance to stay competitive.”
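As a sketch of the principle Mr Djokic describes, behaviour-based pricing reduces a normalised telematics or activity score to a capped premium discount. The figures below are entirely hypothetical:

```python
# An illustrative behaviour-based pricing rule: a telematics driving
# score or fitness-tracker activity level, normalised to [0, 1], nudges
# the premium down within a capped band. All numbers are invented.
BASE_PREMIUM = 1200.00  # annual premium
MAX_DISCOUNT = 0.25     # never discount more than 25%

def personalised_premium(base: float, behaviour_score: float) -> float:
    discount = min(MAX_DISCOUNT, 0.3 * behaviour_score)
    return round(base * (1 - discount), 2)

print(personalised_premium(BASE_PREMIUM, 0.9))  # safe driver: 900.0
print(personalised_premium(BASE_PREMIUM, 0.2))  # 1128.0
```

In production, the score itself would come from a continuous stream of telematics or wearable data, re-evaluated as behaviour changes rather than only at annual renewal.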

Increased data availability is enhanced by AI, particularly machine learning, enabling dynamic risk assessments and tailored policy generation. Predictive analysis and risk scenario modeling help insurers proactively cover emerging risks like climate change and technological advancements. Automated policy drafting and scenario simulation improve efficiency and ensure comprehensive coverage tailored to specific customer needs.

"AI can interpret data accurately and immediately to deliver real-time claims processing and payment while mitigating risks," said Mr Djokic. "It delivers the speed consumers and businesses now expect while protecting the insurer."

Mr Shah added: “Insurers in Japan and Korea may leverage AI and data analytics uniquely to cater to their distinct demographics and technological landscapes. Japan’s aging population may drive insurers to develop AI solutions for personalized services and risk management tailored to older demographics. Korea’s advanced technological infrastructure may facilitate the adoption of AI-driven underwriting and pricing models to enhance customer experiences and operational efficiency.”

The future with SAP Fioneer

On the future of insurance in APAC, Mr Shah said: “Despite challenges, Asia remains an attractive insurance market, with emerging markets expected to see higher premium growth in the next two years driven by rising economic growth, increasing risk awareness post-pandemic, and digitalization of distribution channels.

“Digitally embedded insurance is expected to grow significantly by 2030, driven by increasing digital penetration and partnerships with digital ecosystems.”

Mr Djokic says that the key to taking advantage of the new opportunities in insurance is connectivity – the ability to connect a core insurance solution to, for example, a new data source or user interface. “The secret to that is open technology,” he said.

For example, SAP Fioneer’s Engagement Hub is a tool for insurers to connect in an evolving ecosystem. With bi-directional communication, the Hub links a core insurance system with diverse digital channels, letting insurers craft tailored insurance solutions and adapt to market demands. The Cloud for Insurance cloud-native platform boasts fully managed services, ecosystem integration, and an intuitive user experience, allowing users to scale, innovate, and adapt quickly.

For more information on how insurers can embrace open technology and navigate the transformative changes in the industry, download the ‘Insurance Everywhere All at Once’ whitepaper from SAP Fioneer today.

How to perform like the best in spend management
https://techwireasia.com/04/2024/best-spend-management-platform-for-businesses-of-any-size/ | Fri, 05 Apr 2024

The Coupa financial platform and its private AI model hold the key to bringing business spend under control, with primary and secondary benefits that grant fast ROI.

Enacting any transformational change in an organisation’s spend processes is a good deal more involved than simply installing a piece of software that automates away manual processes currently done by finance personnel. That said, achieving a reduction in the cost associated with every spend item is a genuine win, but doing so falls short of the potential benefits that can be won by a fuller overhaul of spend management. That involves a reappraisal of the people, systems and processes involved, a reappraisal that marks the difference between an improvement and a transformation.

The top performers, those able to consistently verify and track spending right across a large business, are the ones that set the standard, and they are the organisations appearing in the top quartile of the Coupa Business Spend Management Benchmark Report [PDF]. Unlike many business-focused survey papers, the report is based on real-world data and metrics from companies and organisations undergoing a process of transformation involving continuous monitoring and iterative improvement.


We spoke to Michael Odom, Value Consultant at Coupa Software, about the report and its implications. The published findings give organisations that need to overhaul their spend processes useful guidance on "what a spend management program should be delivering, regardless of the technology you're using," Michael explained. "We see the Coupa Community achieving the amazing results in the benchmark report, and we feel it really sets the bar for finance and procurement teams to see what the right combination of people, process, and platform can bring to their company."

“When you look at a transformation, you need to make sure that you have the right people in place with the right mindsets to action, and a process that fits in and is right for the business. And the technology ultimately supports both of those areas, it really is a three part equation that you need to get right,” he said.

When a business seeks to change, the technology part of the potential solution is often wrongly thought to provide both the catalyst for and the means to change a company, a misconception that’s common across business functions and not one specifically confined to finance and accounting. Just deploying technology often produces a faster version of the existing undesirable results.

The Coupa report presents 20 KPIs against which CFOs can measure performance in groupings such as ESG, source-to-contract, procurement, supplier management, invoicing, expenses, and payments. By analysing each area and ensuring the resilience of the three P’s (process, people, platform), companies can improve their spend performance. The best performers constantly strive to improve each metric, and thanks to data-based spend management systems, they have empirical information against which they can track progress.

At their most basic, spend management platforms can help prevent mistakes and flag potentially fraudulent spending thanks to rules baked into the software. But traditional algorithms’ yes/no basis for rules may not be appropriate.

“One set of patterns we see as potentially suspicious may be fraudulent activity for one [business] and for another, maybe it’s okay for them. Another great example would be split purchase orders. If I have approval up to $10,000, and I need to go and spend $30,000, it would probably be wrong if I went and raised three requisitions for $9999. Technically, I’m below my limit, […] maybe following the letter of the rules and the policies in place within the system, but I’m definitely violating the spirit of it,” Michael said.
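The split-order pattern Michael describes is straightforward to detect once requisitions are grouped by requester, supplier and day. The sketch below illustrates that pattern only; it is not Coupa's actual detection logic, and the record shapes are assumptions:

```python
from collections import defaultdict

APPROVAL_LIMIT = 10_000

# Hypothetical requisition records, echoing the example above.
requisitions = [
    {"requester": "alice", "supplier": "acme", "day": "2024-04-01", "amount": 9_999},
    {"requester": "alice", "supplier": "acme", "day": "2024-04-01", "amount": 9_999},
    {"requester": "alice", "supplier": "acme", "day": "2024-04-01", "amount": 9_999},
    {"requester": "bob", "supplier": "acme", "day": "2024-04-01", "amount": 4_000},
]

def flag_split_orders(reqs, limit):
    """Flag groups where every line is under the approval limit but the
    group total exceeds it: the letter of the rules, not the spirit."""
    groups = defaultdict(list)
    for r in reqs:
        groups[(r["requester"], r["supplier"], r["day"])].append(r["amount"])
    return [key for key, amounts in groups.items()
            if all(a < limit for a in amounts) and sum(amounts) > limit]

print(flag_split_orders(requisitions, APPROVAL_LIMIT))
# [('alice', 'acme', '2024-04-01')]
```

A learning system goes further, baselining what is normal for each business so the same pattern can be suspicious in one company and routine in another.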

The way to solve for particular use cases is to use cognitive software that learns from and adapts to each company’s patterns of spend, a dedicated AI that trains on day-to-day practice. That’s the premise of Spend Guard, a feature of the Coupa platform that’s helping the best-performing businesses who feature in the Benchmark Report.

“It basically acts as several FTEs (full time employees), when it comes to actually doing audit processes. It’s very difficult for a human to go through and look at all of the activity and all the data that’s flowing through a system – some organisations’ spend could be billions of dollars on an annual basis. It’s just impossible to go through and look at tens of thousands of orders unless you hire a lot of people. And even then, it’s a manual and imperfect solution to a problem that can instead be addressed with AI-based pattern recognition and automation. Spend Guard provides that type of audit capability.”


Spending doesn't have to be fraudulent to cost an organisation. Issues as simple as clerical errors require staff to manually check records and occasionally chase around the company asking colleagues for clarification. Michael recounted the story of an Australian company interested in the Coupa platform for reducing costs of that nature. "They identified $800,000 of duplicate invoices. No one was doing anything wrong, per se. It's just that invoices had been unknowingly raised multiple times. Staff went to enter the invoice and the platform said 'these invoices are in the system already.' So they added characters at the end of the invoice filename. Those invoices went through and were paid, and so [the company] had to claw that back. No one was actively trying to commit fraud, but a mistake happened and wasn't caught until Spend Guard was turned on and detected it. And that's just one example of where it had been going wrong."
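A naive version of the check that catches this failure mode is sketched below; Spend Guard's matching is proprietary, so this only illustrates the problem in the anecdote, where characters appended to an invoice reference defeat an exact-match test:

```python
import re
from collections import defaultdict

def normalise(ref: str) -> str:
    """Strip punctuation and any trailing run of letters from a
    reference, so 'INV-1042' and 'INV-1042x' collide."""
    ref = re.sub(r"[^A-Za-z0-9]", "", ref).upper()
    return re.sub(r"(?<=\d)[A-Z]+$", "", ref)

# Hypothetical invoice records.
invoices = [
    {"supplier": "acme", "amount": 8200.00, "ref": "INV-1042"},
    {"supplier": "acme", "amount": 8200.00, "ref": "INV-1042x"},  # re-keyed
    {"supplier": "acme", "amount": 530.00, "ref": "INV-1043"},
]

seen = defaultdict(list)
for inv in invoices:
    key = (inv["supplier"], inv["amount"], normalise(inv["ref"]))
    seen[key].append(inv["ref"])

for refs in seen.values():
    if len(refs) > 1:
        print("possible duplicates:", refs)  # ['INV-1042', 'INV-1042x']
```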

By resolving inefficiencies in processes and directly saving wasted spend, companies not only see immediate benefits, but the business at large can also improve its performance at a deeper level. “CFOs see activities that are taking place across operations and supply chain, even into areas like HR and IT. Being able to provide information back to those lines of business is important. There’s so much data available through procurement and finance […] for other parts of the business, it’s critical information whether you’re looking at sustainability, or operational efficiency.”

In our next article on spend management, we’ll look at aspects of spending like procurement and ESG and how tying together all the threads of spending can help organisations of any size achieve the type of efficiency metrics exhibited by the best performers listed in the Coupa Business Spend Management Benchmark Report. In the meantime, you can access the (non-gated) report here and learn more about the Coupa spend management platform here. Watch this space.

Empowering Automation with Private AI
https://techwireasia.com/04/2024/the-best-automation-platform-thats-easy-to-use/ | Thu, 04 Apr 2024

The Appian platform brings automation to the workforce, making massive efficiency savings using private AI algorithms.

It's an undeniable fact of life that most organisations today run on, and are reliant on, software. The difference between success and failure can often be the speed at which companies adapt the ways they work, how they present their offerings, and the technology and software behind both.

That’s where problems can stem from, too: most organisations use many different applications, and too many everyday tasks involve editing and moving data from application to application. Software automation can’t easily be achieved because every company’s workflow is unique and draws on data from specific sources.

Of course, there's always the option to create software from scratch designed to automate workflows, but that's typically slow and expensive. Off-the-shelf automation solutions also tend to carve the company's processes into stone, something that goes against the need for fast adaptability and flexibility in dynamic business markets.


A low-code automation solution offers any organisation a way through the impasse. It allows individual line-of-business experts to create the efficient systems needed to do their work more effectively and offer customers the experiences they demand.

Answering the data questions

At the core of the modern business are the troves of information that grow daily as the company operates. In most cases, data gets stored in discrete silos defined by the software applications used in the different parts of the business (and it’s worth noting that each business function might use a single common application in very different ways).

Half the battle in automation is locating, adapting, and amalgamating information from different sources to be used in joined-up workflows. At a low level, software platforms can communicate with one another and exchange data via APIs (application programming interfaces), but negotiating APIs is usually done at the level of software code, a skill outside most people's wheelhouse. API connections are also fragile, tending to break when one of the platforms they belong to is updated or patched.

It’s essential, therefore, that any low-code automation platform can auto-negotiate with the low-level interfaces each application presents, creating a reliable connection to each instance of siloed data. In doing so, the automation platform gives its users an up-to-date, unified data fabric.
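
A rough sketch of that connector idea follows, with entirely invented class and field names: each silo sits behind an adapter that presents its records in one shared shape, so workflow builders query a single fabric instead of negotiating each API individually.

```python
# A minimal sketch, not any vendor's implementation, of the connector pattern:
# adapters normalise each silo's records into one common schema.
from abc import ABC, abstractmethod

class Connector(ABC):
    """Adapter that presents one silo's records in a shared shape."""
    @abstractmethod
    def records(self) -> list[dict]: ...

class CrmConnector(Connector):
    def records(self) -> list[dict]:
        # In practice this would call the CRM's API; hard-coded for the sketch.
        return [{"source": "crm", "entity": "customer", "name": "Acme Pte Ltd"}]

class HrConnector(Connector):
    def records(self) -> list[dict]:
        return [{"source": "hr", "entity": "employee", "name": "J. Tan"}]

class DataFabric:
    """Unified view over every registered connector."""
    def __init__(self, connectors: list[Connector]):
        self.connectors = connectors

    def query(self, entity: str) -> list[dict]:
        return [r for c in self.connectors for r in c.records()
                if r["entity"] == entity]

fabric = DataFabric([CrmConnector(), HrConnector()])
print(fabric.query("customer"))  # one query spanning multiple silos
```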

Sewing with the data fabric

The technology buzzphrase of the last 18 months has been AI, which is highly relevant here but needs breaking down in this context. Platforms like ChatGPT work off public data ingested over many months. What’s much more useful in a business setting is to give an AI access to the organisation’s private data, so the algorithms can produce answers that are relevant to business operations. The same dedicated algorithms will learn the company’s workflows too, suggesting connections and optimisations to human operators and speeding up the process of building automations.
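
As a schematic illustration of that pattern (no particular vendor’s pipeline), the sketch below uses naive keyword overlap where a production system would use embeddings, but the flow is the same: find the relevant private documents, then ground the model’s prompt in them.

```python
# A schematic "private AI" retrieval step: answers are grounded in the
# organisation's own documents rather than public training data.
def retrieve(query: str, documents: dict[str, str], top_n: int = 2) -> list[str]:
    """Rank private documents by word overlap with the query (naive on purpose)."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc_id: len(terms & set(documents[doc_id].lower().split())),
        reverse=True,
    )
    return scored[:top_n]

private_docs = {  # hypothetical internal content, never sent to a public model
    "po-policy": "purchase orders above 10000 require finance approval",
    "leave-faq": "annual leave requests are approved by the line manager",
}

context_ids = retrieve("who approves a purchase order", private_docs)
prompt = "Answer using only this context:\n" + "\n".join(
    private_docs[doc_id] for doc_id in context_ids
)
print(prompt)  # this grounded prompt is what would go to the model
```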

Even without the help of an in-house AI, a data fabric offers so-called ‘citizen automators’ the full range of information available to them, drawn from even the furthest reaches of the business’s data resources.

Armed with this and tools to build automated processes, the people at the coal face can create optimised workflows that benefit them, their organisation, and the organisation’s users or customers.

Messages from the coal face

“Using low-code, businesses can deploy solutions 10 to 20 times faster than traditional coding methods,” Luke Thomas told us. He’s the Area Vice President, Asia Pacific & Japan, at Appian, a company whose low-code automation platform is transforming how private companies and public institutions run their organisations.

The public sector is, of course, particularly sensitive to data regulation and security. Appian’s platform carefully trains its AIs with built-in guardrails that protect information from being used in the wrong contexts. It’s possible, therefore, to have a line-of-business expert build an application (or series of automations) that can access a large system like a company-wide ERP but only be able to see and use data appropriate to the task and within preset constraints.
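
A toy illustration of that principle, not Appian’s actual guardrail mechanism: each automation is issued a scope defined by IT, and every record it pulls from the ERP is trimmed to the rows and fields that scope permits before the app, or its AI, ever sees the data.

```python
# Hypothetical ERP records and preset scopes; field names are illustrative.
ERP_RECORDS = [
    {"dept": "procurement", "vendor": "Acme", "amount": 4250, "salary_band": "B"},
    {"dept": "hr", "vendor": None, "amount": 0, "salary_band": "D"},
]

SCOPES = {  # constraints set by IT, not by the app builder
    "procurement-app": {"rows": lambda r: r["dept"] == "procurement",
                        "fields": {"dept", "vendor", "amount"}},
}

def read_erp(scope_name: str) -> list[dict]:
    """Return only the rows and fields the named scope permits."""
    scope = SCOPES[scope_name]
    return [{k: v for k, v in record.items() if k in scope["fields"]}
            for record in ERP_RECORDS if scope["rows"](record)]

# The procurement automation sees spend data but never HR salary bands.
print(read_erp("procurement-app"))
```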

The benefits of automating workflows become quickly apparent, especially when news of the efficiency gains spreads through the organisation. “A recent partnership we’re really proud of is working with the Office of Public Prosecutions in Victoria on a new case management system that is expected to save around 66,000 hours per year through process automation,” Mr Thomas said.

It’s often in the customer-facing applications that process automation can differentiate one business from another. Lenders’ mortgage insurance company Helia automated its claims management workflows with the Appian AI Process Platform and reduced claim processing time from two days to less than 10 minutes. The dramatic improvement in customer experience gave the company a market advantage and massively reduced its internal costs for each claim.

And while the buzz around artificial intelligence continues in the mainstream media, it’s the data sets AI works with that will likely change how organisations operate. “I’d suggest that the real value in AI will eventually lie with those who own unique and original data, not necessarily those who create AI technologies,” Mr Thomas told us. Because work processes and data resources are specific to each organisation, a dedicated AI has the potential to drive efficiency and thereby reduce cost, as well as create the kind of agility that safeguards a business’s future.

Head over to the company’s site to learn more about low-code automation and the Appian AI Process Platform.

Data ownership and control at the heart of tomorrow’s CX https://techwireasia.com/03/2024/why-zero-party-data-should-be-used-to-create-personalised-experiences/ Tue, 19 Mar 2024 05:25:21 +0000 https://techwireasia.com/?p=238493 Zero- to third-party data’s uses can create great CX or destroy all elements of trust between an organisation and its customers. With Affinidi’s Glenn Gore.

Concerns about the quantity and type of data that organisations hold are having increasingly adverse effects on customer experiences. On the one hand, brands’ access to information about their customers allows them to personalise every touchpoint for an individual. Yet, on the flip side, consumers can be alarmed that a company knows too much about them and has access to information they didn’t knowingly disclose. Here, the relationship between brand and consumer is not balanced, a situation that breeds distrust. The consumer, customer, or prospect may simply walk away.

To understand how this situation arises, we need to distinguish between data types: first- and second-party data, for example. Plus, we should examine the concept of zero-party data. To help us demarcate data types and explore the implications of the relationship between customer experience and data, we spoke to Glenn Gore, CEO of Affinidi. (Read about the Affinidi Trust Network here and here for background.)

Defining data

Zero-party data is preference-based or intent-based and is held by the individual to represent the different online versions of themselves. Those different versions could be categorised, for instance, as an individual who is, depending on the context, an employee, a gamer, a charity worker, and a fitness fanatic.

This is the type of information that may help determine broad preferences for interaction with companies and brands. For example, someone who identifies as female in their zero-party data could be shown a women’s clothing line by default when they land on a clothing website.
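
In code, zero-party data is simply a profile the individual has volunteered. A minimal sketch of that default-view logic, with illustrative field names:

```python
# A hypothetical zero-party profile: everything here was self-declared by the
# visitor, so the site infers nothing behind their back.
zero_party_profile = {
    "context": "shopper",          # which "version of themselves" is active
    "declared_preferences": {"clothing_section": "womens"},
}

def landing_section(profile: dict) -> str:
    """Pick the default storefront view from declared preferences only."""
    prefs = profile.get("declared_preferences", {})
    return prefs.get("clothing_section", "all")  # neutral fallback if undeclared

print(landing_section(zero_party_profile))  # -> "womens"
```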

First-party data is the information an organisation gathers when an individual interacts with it. That could be a list of foodstuffs bought at a store. What’s interesting, Mr Gore told us, is that zero- and first-party data are sometimes contradictory.

He said: “I say that I don’t want to eat sugary products; that’s zero-party data. But my shopping history says that’s an outright lie because I buy chocolate and fruit juice all the time! So now you can start seeing something really fascinating.”

In that context, a brand could show a message at checkout offering alternative, low-sugar products. That might lower their revenues, assuming diet alternatives are cheaper, but it would be a better customer experience and a net gain for the relationship.
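
A small sketch of that reconciliation, with invented product lists: the declared zero-party intent is checked against first-party purchase history, and conflicts surface as a helpful suggestion rather than an upsell.

```python
# Zero-party intent vs first-party behaviour; all data here is illustrative.
declared = {"avoid": {"sugar"}}                # what the customer says
basket = ["chocolate", "fruit juice", "kale"]  # what the customer buys

SUGARY = {"chocolate", "fruit juice"}
LOW_SUGAR_SWAPS = {"chocolate": "85% dark chocolate",
                   "fruit juice": "sparkling water"}

if "sugar" in declared["avoid"]:
    for item in basket:
        if item in SUGARY:
            # Surface the gentler alternative at checkout instead of upselling.
            print(f"You said you're avoiding sugar: swap {item} "
                  f"for {LOW_SUGAR_SWAPS[item]}?")
```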

Second-party data is information that’s shared, with approval, between the first party and another. “Let’s say I’ve engaged with a nutritionist and I’ve decided to help with the nutritional accuracy,” said Mr Gore. “I share what I buy at the supermarket. So, that data from the grocery store, which is first-party data, is shared with my new nutritionist.

“The difference here is that it’s with my consent, my knowledge. Nothing else is going to be shipped. The grocery store is not going to share the videotapes of how long I stood staring at the chocolates even though I didn’t buy any.”
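
That consent boundary is straightforward to express in code. In the minimal sketch below, with hypothetical data categories, grants are per-recipient and per-category, so the purchase history travels to the nutritionist while the dwell-time footage goes nowhere.

```python
# First-party data held by the store; category names are illustrative.
first_party_data = {
    "purchases": ["kale", "chocolate"],
    "cctv_dwell_times": {"chocolate_aisle_seconds": 240},  # collected, never shared
}

consents = [  # each grant names a recipient and the categories they may see
    {"recipient": "nutritionist", "categories": {"purchases"}},
]

def share_with(recipient: str) -> dict:
    """Release only the categories this recipient has been granted."""
    granted: set[str] = set()
    for consent in consents:
        if consent["recipient"] == recipient:
            granted |= consent["categories"]
    return {k: v for k, v in first_party_data.items() if k in granted}

print(share_with("nutritionist"))  # {'purchases': ['kale', 'chocolate']}
print(share_with("ad_broker"))     # {} : no consent, no data
```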

Third-party data is the type of information that is collected and often sold and is “kind of the one that gets everyone in trouble.”

Mr Gore said: “This is where data that’s been collected about myself is aggregated with lots of other data sets combined and then sold without my consent, without my knowledge. To stay with that same example, my nutritionist says, ‘Well done, Glenn, you’re buying kale, you’re eating lots of healthy things. But I see that you’re not going to the gym?’ How do you know I’m not going to the gym? I never gave you access to my gym membership! I’m not going to be very happy about that. That’s the invasion of privacy that occurs.”

As consumer awareness of data privacy grows and increasingly strict data governance laws fall into place, third-party data begins to look less attractive, not only as a concept for the individuals it describes but as a potential destroyer of trust and, therefore, of customer experience. It is also a burden of responsibility for the organisations that hold it, since it represents an attractive target for bad actors and legislators alike.

Consenting data exchange

The key to better customer experiences, and ones that are truly personalised, is the combination of zero and first-party data, which combines intent with action. Then, multiple second-party data instances form a network of consensual data sharing, building mutual trust between the consumer and other organisations.

Mr Gore sees the future of what we now call the ‘data economy’ as one where consumers can join or create their own versions of trust networks, parties with whom they consensually share and receive value in return.

The Affinidi Trust Network is the system that Affinidi is building, comprising a “duality of innovation, the two sides of the same coin.” Developers can already build the components of the Trust Network into vendors’ and service-creators’ offerings. For end-users, the arbiters of their own data, Mr Gore envisages services that will help with the minutiae of zero-party data interactions.

“They will be custodial hub managers of your data,” he said. “These custodial holders who manage how you represent and manage yourself will help you do this on your behalf. That app will be driven by a personal AI capable of sifting the many digital interactions that take place online for each user every day and remove much of the detail of personal data management which is cumbersome.

“You don’t want to wake up every morning with an app saying, ‘We just found another 60 pieces of information about yourself out there on the internet. Do you mind just cataloging those 60?’ Personal AIs will help you with cataloging on your behalf.

“The worst they may do is to ask about instances where there’s some conflict resolution needed. For example, ‘I’ve automatically organised these 180 different things for you, but these two look like they’re in conflict’, or ‘I know that you might be in the process of changing how you think about this. Can you just help guide me?'”
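
A toy sketch of that triage behaviour, with invented claim records: consistent items are filed automatically, and only fields with contradictory values are escalated for a human decision.

```python
# Hypothetical claims a personal AI has discovered about its user online.
discovered = [
    {"field": "dietary_pref", "value": "low sugar", "source": "health app"},
    {"field": "dietary_pref", "value": "buys chocolate weekly", "source": "loyalty card"},
    {"field": "home_city", "value": "Singapore", "source": "delivery app"},
]

def triage(claims: list[dict]) -> tuple[list[dict], list[list[dict]]]:
    """Auto-catalogue single-valued fields; group multi-valued ones for review."""
    by_field: dict[str, list[dict]] = {}
    for claim in claims:
        by_field.setdefault(claim["field"], []).append(claim)
    filed = [group[0] for group in by_field.values() if len(group) == 1]
    conflicts = [group for group in by_field.values() if len(group) > 1]
    return filed, conflicts

filed, conflicts = triage(discovered)
print(f"Auto-filed {len(filed)} item(s); {len(conflicts)} conflict(s) need you.")
```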

As personal data privacy issues accelerate and big tech companies work actively to discourage privacy-focused tools – Google’s intended ban on Chrome ad-blockers later this year is a fine example – solutions like the Affinidi Trust Network and the concept of Holistic Identity make increasing sense.

Consumers don’t have to subscribe to every aspect of Rana Foroohar’s ‘Don’t Be Evil’ to feel that information about them is being misused. That’s already apparent in so-called customer experience platforms whose personalised interactions are too all-knowing. Representations of prospects and customers derived from bought, aggregated third-party data produce ‘personalisation’ that’s inaccurate, because every individual presents multiple versions of themselves online according to context.

Allowing individual users to consensually share relevant information with trusted organisations and brands is the way to build a relationship and establish trust. Those are the relationships that will endure and will produce long-term results for commercial entities. The move to consensual (and profitable) provision of customer experiences begins with becoming part of the Affinidi Trust Network, and you can read more here.

How Project Morpheus will change business application production https://techwireasia.com/03/2024/how-can-we-use-ai-in-low-code-development-project-morpheus/ Fri, 08 Mar 2024 00:13:08 +0000 https://techwireasia.com/?p=238415 Slated for mid-2024, Project Morpheus brings NLP and AI to the OutSystems low-code platform, delivering speed, security and efficiency from data layer to GUI.

For anyone who’s never attended a hackathon, perhaps the biggest surprise is what comes out. The product of 48 sleepless hours of collective work is often barely more than an MVP of the designated goal. That’s not the point of a hackathon, of course, as it’s a learning experience for participants, first and foremost. Plus, at the end of the day, it’s fun.

While not quite the hothouse of a hackathon, many companies’ development teams work in a similar way to develop new apps and extend the function of existing ones. The timescales are longer, but the overall aims are much the same: to get from a business-oriented goal to a product that brings value to the organisation.

Users of low-code platforms have several advantages over companies operating a traditional software development team. Developers gain a suite of AI-powered, business-focused tools and helpers that massively improve productivity. Plus, low-code extends the development user base to include line-of-business experts.

Just as large language models (LLMs) like Copilot accelerate those working in specific lower-level coding languages, artificial intelligence (AI) can act as a massive multiplier in low-code environments: increasing developer productivity, shortening time-to-production, allowing faster iteration on code, strengthening inherent security, and sanity-checking architecture and code for scalability and interoperability.

At present, IT departments suffer from backlogs because current investment is tied up in transformation for innovation and productivity. Resource scarcity, urgent business needs, and essential ongoing operations leave little capacity for tasks considered lower priority. The result is that digital initiatives in process optimisation and collaboration are neglected, hampering financial performance and stifling the spirit of innovation.

Every organisation’s IT lead is still under pressure to get new features and apps out to end-users, where they can generate value for the business. The OutSystems no-code Morpheus environment helps create departmental applications quickly through a unique AI-powered visual approach, providing live suggestions and visualisations that reduce guesswork and keep projects on track for faster completion.

Working from a corpus of information comprising the full context of the existing IT stack, enhanced with anonymised metadata from many third-party business types and applications, the platform can raise development productivity dramatically and sustainably. In every aspect of software development, the AI will provide suggestions and solutions based on sound business logic for features and UI elements, via pop-up options or natural language input.

With wizard-driven, step-by-step prompts, connections between data sources are established and maintained by the OutSystems platform, and user-facing AI will make it simpler to shape an app’s logic and presentation. Common connections between business applications in daily use become a usable resource in a development environment that is already a huge aid for developers and citizen developers, combining a GUI-driven interface with editable, detailed logic just a click away. In that context, it’s simple to add new views on data drawn from across the entirety of the information resources the business already owns.
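
Purely as an illustration of the concept, and emphatically not OutSystems’ actual engine, the sketch below substitutes a lookup table for the generative AI that Morpheus would ground in the IT stack’s metadata: a natural-language request resolves to a declarative widget spec plus the data binding behind it.

```python
# A stand-in for NL-to-app-element resolution; every name here is invented.
REQUEST = "show open purchase orders by supplier"

INTENT_CATALOGUE = {  # a real system would use an LLM grounded in stack metadata
    "purchase orders": {"source": "erp.purchase_orders", "filter": "status = 'open'"},
}

def to_app_spec(request: str) -> dict:
    """Map a natural-language request to a declarative widget specification."""
    for phrase, binding in INTENT_CATALOGUE.items():
        if phrase in request:
            return {"widget": "table", "group_by": "supplier", **binding}
    return {"widget": "text", "value": "Could you rephrase that?"}

print(to_app_spec(REQUEST))
```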

Project Morpheus is a new no-code development experience powered by generative AI that enables citizen developers to create departmental applications within IT’s rules and guidelines. By combining end-user business knowledge with a platform that quickly realises data-driven concepts, the business gains powerful, directed applications that directly optimise processes and cut wasted resources.

The promise of low-code has always been the empowerment of so-called citizen developers. The next chapter of OutSystems brings that a step closer: natural language interactions will likely mean that any business stakeholder can create applications. “Traditional” developers and citizen developers can use the same environment, shortening the time between an emerging business need and a solution that addresses it.

If we take a massive jump in developer productivity as a given, there are two further high-level benefits to using OutSystems. The first is the ability to engage business functions early in any development; as discussed, this can be a relatively simple yet eminently usable MVP for development teams. Secondly, the next phase of the OutSystems platform, Project Morpheus, will engender an innovation mindset in all parts of the business. Previously de-prioritised apps with the potential to revolutionise every corner of the business can now become finished projects quickly and simply.

Enthusiasm for software and technology is infectious, even among an organisation’s technophobic employees. Saving time, energy and labour generates massive momentum for creativity in all parts of the business. Harnessing the creativity of line-of-business experts has always been challenging when software development is done with traditional tools in traditional environments. But with OutSystems and its next iteration, Project Morpheus, a creative, problem-solving company will easily outperform its competition.

Learn more about how AI will change practical, enterprise-grade application development here and the OutSystems low-code platform here.
