Intel, Red Hat, and others join hands for Open Platform for Enterprise AI

The Open Platform for Enterprise AI will spearhead the development of open, robust, multi-provider, and composable GenAI systems that are flexible, scalable, and enterprise-grade.


In a move that could redefine how generative AI is used by enterprises, removing the present ambiguity over its ability to scale and interoperate across business systems, the LF AI & Data Foundation has announced the launch of the Open Platform for Enterprise AI (OPEA) in collaboration with several technology companies.

The objective is to spearhead the development of open, robust, multi-provider, and composable GenAI systems that are flexible, scalable, and enterprise-grade. Technology bigwigs supporting the initiative include Intel, VMware, Red Hat, SAS, Cloudera, MariaDB Foundation, Anyscale, and DataStax. The LF AI & Data Foundation is inviting, and expecting, more members to join.

“OPEA will unlock new possibilities in AI by creating a detailed, composable framework that stands at the forefront of technology stacks,” Ibrahim Haddad, executive director at LF AI & Data said in a statement, highlighting OPEA’s focus on open model development, standardized modular pipelines, and support for various compilers and toolchains. “This initiative is a testament to our mission to drive open source innovation and collaboration within the AI and data communities under a neutral and open governance model,” Haddad said.

“Open, multi-provider AI systems like OPEA offer exciting opportunities for driving innovation and value within our organization's AI strategy,” said Saurabh Gugnani, global head of cyberdefense and application security at the Dutch compliance firm, TMF Group. “By leveraging these initiatives, we can access a diverse ecosystem of AI technologies, tools, and expertise from multiple providers. With access to a wide range of AI technologies and solutions, we can stay at the forefront of innovation. We can explore and adopt the latest advancements in AI, including new algorithms, models, and techniques, to enhance our products and services.”

This is an interesting development; we have seen in the past how open source platforms have given many enterprises the freedom to develop their own highly specialized solutions, said Faisal Kawoosa, chief analyst and founder of technology research firm Techarc. “In GenAI also we expect such a phase to begin. For instance, where a legal tech company can develop specialized GenAI solutions for the legal fraternity that will give in-depth and credible information around legal matters.”

Challenges OPEA aims to address

Currently, most GenAI systems respond to queries and perform tasks based only on the data they are trained on, raising questions about their ability to scale and interoperate across business systems. A lack of standardization and regulation poses a further challenge for GenAI deployment in enterprises.

“OPEA intends to address this issue by collaborating with the industry to standardize components, including frameworks, architecture blueprints, and reference solutions that showcase performance, interoperability, trustworthiness, and enterprise-grade readiness,” LF AI & Data Foundation said.

In recent times, the Retrieval-Augmented Generation (RAG) model has been gaining traction in enterprise AI for its ability to extract significant value from existing data repositories, since its knowledge base can extend beyond the data the model was trained on.

“RAG is the most important approach to allow LLM to access the right, relevant data pipelines to improve the AI quality and user experience. This is a major bottleneck due to the closed and difficult-to-integrate proprietary data pipelines, especially in enterprise space,” said Neil Shah, VP of research & partner at Counterpoint Research. “So, it’s great to see LF and key industry stakeholders come together to reduce the complexities of data retrieval and design a more open, flexible, and modular approach via OPEA.”

Standardization and openness of such frameworks are key to the adoption of GenAI in enterprises, Shah pointed out.
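For readers unfamiliar with the pattern Shah describes, RAG can be sketched in a few lines: retrieve the documents most relevant to a query from an enterprise data store, then prepend them to the prompt so the model can answer from data it was never trained on. The corpus, scoring function, and prompt format below are illustrative stand-ins, not part of OPEA; production systems use learned embeddings and vector databases rather than this toy word-overlap score.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase and split text into a set of word tokens."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a crude stand-in
    for cosine similarity over embeddings) and return the top k."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble the augmented prompt the LLM would receive."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Illustrative in-memory "data repository".
corpus = [
    "OPEA standardizes composable GenAI pipelines for enterprises.",
    "Quarterly revenue grew in the retail segment.",
    "RAG retrieves relevant enterprise data before generation.",
]
prompt = build_prompt("How does RAG help enterprise GenAI?", corpus)
```

The standardization OPEA targets sits around the `retrieve` step: today each vendor's data pipeline exposes retrieval differently, which is the integration bottleneck Shah points to.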

Intel, a critical partner of LF AI & Data in this initiative, underscored the importance of OPEA in addressing critical pain points of RAG adoption and scaling. “Intel is at the forefront of incubating open source development to build trusted, scalable open infrastructure that enables heterogeneity and provides a platform for developer innovation,” Melissa Evers, VP of Software Engineering Group and GM of Strategy to Execution at Intel said in a statement. “It will also define a platform for the next phases of developer innovation that harnesses the potential value generative AI can bring to enterprises and all our lives.”

TMF Group’s Gugnani said the OPEA initiative by LF AI & Data and other industry giants would address enterprises’ key challenges: enhancing flexibility and scalability, fostering collaboration, offering cutting-edge technologies, and driving cost-effectiveness.

Copyright © 2024 IDG Communications, Inc.