Arcee Is Going Small To Win Big in the Long Run

Small Language Models will be a pillar of enterprise AI, offering vast improvements in performance, cost and privacy. Arcee makes them easy to train, deploy and optimize.

Businesses are facing a painful dilemma: In order to get the most out of generative AI, they must unlock their proprietary data for model training. However, for very good reason, many enterprises neither can nor will allow this data to be exposed to third parties, such as the dominant LLM providers. Further, most companies don’t have the resources or expertise to train and maintain their own LLMs, let alone deploy applications on top of them.

It’s not just highly regulated industries like banking or healthcare that experience this conundrum. As we recently wrote, all businesses “need models to be aligned with their goals and values.” They have to protect their secret sauce, whether that’s hours of recorded sales calls, sales decks, customer service interactions, medical records, sensitive financial data, or anything in between. The risk of exposing this data to closed-source LLMs that may train on it and reveal it to the public or to competitors is simply unacceptable.

That’s where Arcee AI comes into the picture. Today, we’re excited to announce our investment in Arcee AI’s pioneering vision. Using two techniques it has developed, Model Merging and Spectrum, Arcee AI has turned training and deploying “Small Language Models” into a straightforward process, offering vast improvements in performance at a staggering level of efficiency, all within a secure end-to-end platform where the customer maintains full ownership of their models.

Arcee AI’s Model Merging approach allows customers to train an open source LLM on their data, then blend or “merge” that model with another open source LLM. The merged model has the brains of both input LLMs, including the domain-specific customer data, but the size and inference cost of just one input model. The merge doesn’t require any additional training, which means no GPU usage. Yet the performance of the merged model is superior to that of either input model. Model Merging is truly a technology where 1+1 equals 3.
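To make the core idea concrete, here is a minimal sketch of the simplest form of merging: linearly averaging the weights of two fine-tuned checkpoints that share the same base architecture. This is an illustration of the principle only, not Arcee’s or MergeKit’s actual algorithm (which supports far more sophisticated merge methods); the model names and the 50/50 blend ratio are assumptions for the example.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical checkpoints: both must share the same base architecture.
MODEL_A = "your-org/base-model-finetuned-on-domain-data"   # assumption
MODEL_B = "your-org/base-model-finetuned-on-general-chat"  # assumption
ALPHA = 0.5  # blend ratio; 0.5 is an even average of the two parents

model_a = AutoModelForCausalLM.from_pretrained(MODEL_A, torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained(MODEL_B, torch_dtype=torch.float16)

# Average every parameter tensor in place on model_a.
# No gradients and no training run -- just tensor arithmetic.
with torch.no_grad():
    params_b = dict(model_b.named_parameters())
    for name, param_a in model_a.named_parameters():
        param_a.copy_(ALPHA * param_a + (1.0 - ALPHA) * params_b[name])

# model_a now holds the merged weights: one model's inference cost,
# with knowledge drawn from both parents.
model_a.save_pretrained("./merged-model")
```

The resulting checkpoint is the same size as either parent, which is why the merge adds nothing to inference cost.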

The other approach Arcee has developed is called Spectrum, which makes LLM training more efficient by identifying the layer modules that contribute most to learning, training only those, and freezing the rest. Spectrum makes training roughly 42% more efficient and mitigates the problem of LLMs “forgetting” previously learned knowledge, with no hit to performance.
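As a rough illustration of the selective-training idea, the sketch below freezes an entire model and then re-enables gradients only for a chosen subset of decoder layers. Spectrum uses its own analysis to decide which modules are worth training; the crude layer-index selection here is purely an illustrative assumption, as are the model name and layer choices.

```python
import torch
from transformers import AutoModelForCausalLM

MODEL = "your-org/base-model"                 # assumption: any causal LM checkpoint
TRAINABLE_LAYERS = {"28", "29", "30", "31"}   # assumption: last few decoder blocks

model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

# Freeze everything, then unfreeze only the targeted layers.
for name, param in model.named_parameters():
    param.requires_grad = False
    # e.g. "model.layers.30.self_attn.q_proj.weight" -> layer index "30"
    parts = name.split(".")
    if "layers" in parts:
        layer_idx = parts[parts.index("layers") + 1]
        if layer_idx in TRAINABLE_LAYERS:
            param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Training {trainable / total:.1%} of parameters; the rest stay frozen.")

# The model can now go into a standard training loop: only the unfrozen
# layers receive gradient updates, cutting compute and memory, while the
# frozen layers keep the knowledge they already encode.
```

Because most parameters never receive updates, the optimizer state and gradient memory shrink along with the compute bill, which is where the efficiency gains come from.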

The Small Language Models (SLMs) resulting from these training methods greatly outperform closed-source models in the enterprise. One financial services customer saw a 23% boost in benchmarks and a 96% reduction in costs. Another insurance customer boosted performance by 83% and cut costs by 89%. These are incredible results for a company still at the beginning of its journey, and much of the credit goes to how well equipped the Arcee AI founders are for this task.

Arcee’s Special Team

CEO Mark McQuade and CRO Brian Benedict were both early commercial hires at Hugging Face and have brought with them a deep understanding of the state of AI research and of the importance of developing solutions that are production-ready for the enterprise. Their third co-founder, CTO Jacob Solawetz, comes from the YC company Roboflow (where he also worked with Mark), which works on computer vision, a field of AI that likewise requires custom models for maximum performance.

This team’s roots in AI research led them to one of their most important early hires: Chief of Frontier Research Charles Goddard, a former NASA and Apple engineer who created MergeKit, the leading open source model merging library that is at the heart of Arcee’s significant early traction in the marketplace.

Arcee’s talent runs end to end, and they’ve been working hard to release their new solution, Arcee Cloud, available today. This SaaS interface for Arcee’s platform makes Model Merging and Spectrum SLM training available with just a few clicks, so even less technical users can build and deploy SLMs, supporting our conviction that specialized software will win in AI.

Arcee’s solutions are designed to empower every organization, across industries, to merge, train, and scale their own models without limit. We at Emergence were compelled by their approach of both giving people what they need now and resourcing them to rapidly grow and innovate on their own; it’s a meaningful use of AI that has staying power and limitless potential.

Arcee fits perfectly into another of our convictions, that companies focused on delivering fast, cost-effective flexibility and control to enterprise customers are the ones that will unlock the most value. The infrastructure layer of AI is still in its infancy, and Arcee has truly raced ahead of the pack to deliver clear value to any business working with AI.

When we first met Mark, we had an hours-long conversation about his vision and roadmap for Arcee. We were deeply compelled by it, and knew that his outlook matched one of our core values: winning big by focusing on the long run. We look forward to partnering with Arcee AI as they work to increase the utility, accessibility and performance of generative AI, so that businesses of all sizes can better serve their customers and achieve their goals.