Building Next-Gen LLMs

Contextual AI, a startup founded by Hugging Face and Meta AI researchers, has raised $20 million in seed funding to develop enterprise-specific large language models using "Retrieval Augmented Generation" to optimize data integration and reasoning.

Soham Sharma

12/16/2023 · 2 min read

With the emergence of AI as a powerful and disruptive technology, Contextual AI emphasizes the importance of having multiple players in the field rather than leaving the technology solely in the hands of a few large tech corporations. The company aims to build on open-source software and contribute back to the community to help democratize the technology.

The $20 million seed funding will enable Contextual AI to kickstart its journey and expand its team; the company is actively hiring for various roles in the Bay Area. With a mission to create purpose-built LLMs for enterprises, Contextual AI invites anyone interested in learning more about its work to reach out.

As the journey of AI is only just beginning, the founders remain focused on the future and the potential impact their work can have. They express gratitude for the support from their investors and their team of talented engineers and scientists. Contextual AI’s vision is to empower knowledge workers with accurate, efficient, and secure LLMs that can handle vast private datasets, enabling companies to trust and benefit from the technology.

Contextual AI, a startup founded by former researchers from Hugging Face and Meta AI, has announced its emergence from stealth mode with $20 million in seed funding. Led by Bain Capital Ventures, the funding round also included participation from Lightspeed, Greycroft, SV Angel, and notable angel investors. The company aims to build the next generation of large language models (LLMs) specifically designed for enterprise use.

The current generation of LLMs has already demonstrated its potential, but these models are not yet suitable for enterprise work, as they suffer from several shortcomings. Contextual AI intends to address these limitations and build AI that works for enterprise environments. The company’s founders, Douwe Kiela and Amanpreet Singh, have extensive experience in AI research, having previously worked on projects related to multimodal frameworks, model evaluation, and representation learning.

Contextual AI plans to focus on solving key challenges faced by enterprises when deploying AI solutions. These challenges include data privacy, customizability, hallucination (LLMs generating false information), compliance, staleness (outdated information), latency, and cost. The company aims to develop LLMs that are safer, more trustworthy, customizable, and efficient than existing models.

The founders propose a new approach called “Retrieval Augmented Generation” (RAG), which involves combining a generative model with a retrieval-based data store. This approach enables the adaptation of models to specific use cases and custom data, mitigating many of the issues associated with current LLMs. The company believes that decoupling the memory from the generative capacity of LLMs and optimizing different modules for data integration, reasoning, speech, and perception will unlock the true potential of these models for enterprise applications.
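The RAG pattern described above can be illustrated with a minimal sketch: a retriever pulls the most relevant documents from an external data store, and that text is prepended to the prompt before generation. Everything here is hypothetical and simplified for illustration — a toy word-overlap retriever and a stub in place of a real language model; production systems use dense vector search and a trained LLM, and this is not Contextual AI's implementation.

```python
def retrieve(query, store, k=2):
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        store,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt):
    """Stand-in for an LLM call; a real system would invoke a model here."""
    return f"[model output conditioned on: {prompt!r}]"

def rag_answer(query, store):
    # Memory is decoupled from generation: updating the store updates
    # the system's knowledge without retraining the model, which is what
    # mitigates staleness and lets the model ground answers in private data.
    context = "\n".join(retrieve(query, store))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

docs = [
    "Quarterly revenue grew 12% year over year.",
    "The security policy requires data encryption at rest.",
    "Office hours are 9am to 5pm on weekdays.",
]
print(rag_answer("What does the security policy require?", docs))
```

Because the generative step only sees retrieved text, the same model can be pointed at different enterprise data stores, which is how RAG addresses the customizability and data-privacy concerns listed above.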