Red Hat’s revolution: Speeding generative AI adoption in hybrid clouds
- Built on Red Hat OpenShift, the AI-focused portfolio helps businesses bring real-world AI applications into production.
- It also provides a strong foundation for an ecosystem of specialized partners, including IBM’s next-generation AI platform, watsonx.ai.
Generative AI, the technology behind popular programs like OpenAI’s ChatGPT and DALL-E, is creating quite a stir in the tech industry. These programs use generative AI to instantly create a wide array of content, such as computer code, essays, emails, social media captions, images, and poems, captivating audiences everywhere.
With generative AI technology gaining traction, tech giants like Red Hat are integrating this innovation into their services. Red Hat has introduced new features for OpenShift AI, leveraging the power of open-source technology to provide a scalable foundation for IT operations while fostering an ecosystem for data scientists and developers.
OpenShift AI powers the generative AI services of IBM’s watsonx.ai platform, helping enterprises deliver scalable, intelligent applications and services. As large language models (LLMs) such as GPT-4 and LLaMA become more mainstream, industries are exploring their potential. Customers can fine-tune these models with domain-specific data, improving their accuracy for different use cases.
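To make that fine-tuning step concrete, here is a minimal sketch of adapting a small open foundation model to domain-specific text with the Hugging Face Transformers library. The base model (distilgpt2), the corpus file domain_corpus.txt, and the training settings are illustrative assumptions, not anything prescribed by OpenShift AI or watsonx.ai.

```python
# Minimal fine-tuning sketch using Hugging Face Transformers.
# Base model, corpus file, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "distilgpt2"                      # assumed small open base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token      # GPT-2 family ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Assumed: a plain-text file of in-domain documents, one example per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_data = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-tuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=train_data,
    # Causal LM objective: labels are the input tokens shifted by one.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("domain-tuned-model")
```

In practice the same pattern scales up: a larger base model, a curated in-domain dataset, and parameter-efficient techniques such as LoRA when full fine-tuning is too costly.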
The emergence of generative AI: A tech trend to watch
According to Matt Hicks, President and CEO of Red Hat, the progression from machine learning and deep learning to generative AI is exciting because the technology doesn’t require labeled data, which allows quicker experimentation with sequential data. It also offers the opportunity to start with broad foundation models and then train them incrementally on specific data.
“I think the outlook is that we’re going to see a lot of innovation based on open source. We aim to position ourselves well to capture that and apply it to enterprises,” said Hicks. “As for incorporating this in our own products, we’ve been utilizing these techniques in offerings like Red Hat Insights for quite some time. I believe, like most enterprises, the generative AI approach will allow us to experiment more, train with data in more efficient ways, and hopefully see more adoption in areas where we’ve already been applying similar techniques. However, now it will move faster and offer a more exciting experience.”
The preliminary stages of AI model training demand considerable infrastructure, necessitating specific platforms and tools even before the model serving, tuning, and management processes begin. Without a platform capable of satisfying these needs, organizations often find their AI/ML utilization capabilities restricted.
OpenShift AI overcomes these hurdles by delivering a uniform infrastructure across the entire AI pipeline – from training and deployment to inference – thereby unleashing the full power of AI.
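As a sketch of the inference end of that pipeline, the example below sends a request to a model served behind a KServe-style v2 REST endpoint, the kind of interface OpenShift AI’s model-serving stack can expose. The route, model name, token, and payload are placeholders, not values from a real deployment.

```python
# Minimal sketch: sending an inference request to a served model over a
# KServe v2 ("open inference protocol") REST API.
# The route, model name, and token below are placeholders.
import requests

route = "https://my-model-route.apps.example.com"   # assumed OpenShift route to the model server
model_name = "domain-tuned-model"                    # assumed name of the deployed model
token = "REPLACE_WITH_TOKEN"                         # assumed bearer token for a protected route

payload = {
    "inputs": [
        {
            "name": "text",
            "shape": [1],
            "datatype": "BYTES",
            "data": ["Summarize the following support ticket: ..."],
        }
    ]
}

response = requests.post(
    f"{route}/v2/models/{model_name}/infer",
    json=payload,
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
response.raise_for_status()

# Generated output comes back in the "outputs" section of the response body.
print(response.json()["outputs"][0]["data"])
```

Once a model is deployed this way, applications consume it through an ordinary authenticated HTTP call, regardless of which cloud or data center the cluster runs in.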
Chris Wright, Chief Technology Officer and Senior Vice President of Global Engineering at Red Hat, observes that while it’s engaging to interact with tools like ChatGPT or Bard, the responses, although well-phrased, aren’t always accurate.
“In the enterprise setting, we’re seeing the use of foundation models and focused transfer learning to train very specific datasets, perhaps even an enterprise’s own data, to assist businesses in advancing,” Wright said. “Our focus is on integrating these tools into our platform. As such, we developed Ansible Lightspeed in partnership with IBM, an expert in generative AI and domain-specific AI techniques. This partnership has enabled us to use natural language to generate Ansible Playbooks.”
Red Hat OpenShift AI underpins IBM’s advanced AI offerings, including Watson Code Assistant, bringing domain-specific AI to IT and development teams. Ansible Lightspeed lets users of all skill levels create Ansible Playbooks from AI-generated recommendations, automating tasks with plain-English prompts.
Challenges of AI integration: Understanding and overcoming obstacles
While AI generates a lot of excitement, its adoption is still approached cautiously. Hicks suggests that adopting AI requires investment as well as efficiency gains in existing environments. As organizations venture deeper into AI, establishing trust in core models, especially those trained on extensive datasets, becomes crucial for understanding the underlying data and the resulting outputs.
“We often highlight to our teams the challenges of using these models for coding recommendations, especially given the realm of software licenses and copyright laws,” Hicks emphasized. “It’s crucial to operate within these boundaries – we can’t simply adopt successful code into our products without careful consideration.”
In this context, Hicks emphasizes the importance of understanding the origin of the data used for training the models and the source of the recommendations, a process demonstrated in their collaboration with IBM on Ansible. Lastly, he insists on the importance of governance – understanding the reasons behind specific outputs, the nature of the input data, and how it might change as models are incrementally trained with new data over time.
Hicks concludes by expressing his belief that these challenges are not insurmountable. On the contrary, he views it as inevitable that new practices will be developed in these areas. “Open source will likely drive much of this innovation, just as it has with code. It will shape the world of models, data, and governance. But to move AI into production and mission-critical areas, these components will be necessary,” he concluded.
Paving the way for simplified hybrid cloud management
In addition to AI, Red Hat has expanded the management features of Red Hat Insights for Red Hat Enterprise Linux, aiming to simplify enterprise Linux in hybrid cloud settings. This aligns with Red Hat’s goal of making its leading Linux platform accessible, manageable, and maintainable anywhere.
According to Red Hat’s research, integrating predictive analytics with these enhanced management capabilities can make IT issue detection up to 90% faster and resolution nearly 66% faster across hybrid clouds. Red Hat Insights also helps offset uneven skill levels within IT teams, keeping systems online and free of critical flaws.
Accessible via console.redhat.com, the improved Red Hat Insights unifies the management of Red Hat Enterprise Linux deployments across hybrid clouds in a single, user-friendly interface, using predictive analytics to identify potential bugs, misconfigurations, and security vulnerabilities.