Cerebras Systems, a pioneer in high-performance AI compute, has launched a groundbreaking solution that is set to revolutionize AI inference. On August 27, 2024, the company announced the launch of Cerebras Inference, the fastest AI inference service in the world. With performance metrics that dwarf those of traditional GPU-based systems, Cerebras Inference delivers 20 times the speed at a fraction of the cost, setting a new benchmark in AI computing.
Unprecedented Speed and Cost Efficiency
Cerebras Inference is designed to deliver exceptional performance across a range of AI models, particularly in the rapidly evolving segment of large language models (LLMs). For example, it processes 1,800 tokens per second for the Llama 3.1 8B model and 450 tokens per second for the Llama 3.1 70B model. This performance is not only 20 times faster than that of NVIDIA GPU-based solutions but also comes at a significantly lower cost. Cerebras offers the service starting at just 10 cents per million tokens for the Llama 3.1 8B model and 60 cents per million tokens for the Llama 3.1 70B model, representing a 100x improvement in price-performance compared to existing GPU-based offerings.
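To put those figures in concrete terms, the short sketch below works out generation time and cost for a sample workload using only the throughput and per-token pricing quoted above; the model labels are informal shorthand for this illustration, not API identifiers.

```python
# Back-of-the-envelope math based on the figures quoted above:
# Llama 3.1 8B: 1,800 tokens/s at $0.10 per million tokens
# Llama 3.1 70B: 450 tokens/s at $0.60 per million tokens

MODELS = {
    "llama-3.1-8b": {"tokens_per_sec": 1_800, "usd_per_million_tokens": 0.10},
    "llama-3.1-70b": {"tokens_per_sec": 450, "usd_per_million_tokens": 0.60},
}

def estimate(model: str, output_tokens: int) -> tuple[float, float]:
    """Return (seconds to generate, cost in USD) for a given output size."""
    spec = MODELS[model]
    seconds = output_tokens / spec["tokens_per_sec"]
    cost = output_tokens / 1_000_000 * spec["usd_per_million_tokens"]
    return seconds, cost

if __name__ == "__main__":
    for model in MODELS:
        secs, usd = estimate(model, output_tokens=100_000)
        print(f"{model}: 100k output tokens in ~{secs:.0f}s for ~${usd:.2f}")
```

At these rates, 100,000 output tokens on the 8B model take under a minute and cost about one cent, which is the price-performance gap the announcement emphasizes.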
Maintaining Accuracy While Pushing the Boundaries of Speed
One of the most impressive aspects of Cerebras Inference is its ability to maintain state-of-the-art accuracy while delivering unmatched speed. Unlike other approaches that sacrifice precision for speed, Cerebras' solution stays within the 16-bit domain for the entirety of the inference run. This ensures that the performance gains do not come at the expense of the quality of AI model outputs, a critical factor for developers focused on precision.
Micah Hill-Smith, Co-Founder and CEO of Artificial Analysis, highlighted the significance of this achievement: “Cerebras is delivering speeds an order of magnitude faster than GPU-based solutions for Meta’s Llama 3.1 8B and 70B AI models. We are measuring speeds above 1,800 output tokens per second on Llama 3.1 8B, and above 446 output tokens per second on Llama 3.1 70B – a new record in these benchmarks.”
The Growing Importance of AI Inference
AI inference is the fastest-growing segment of AI compute, accounting for roughly 40% of the total AI hardware market. The advent of high-speed AI inference, such as that offered by Cerebras, is comparable to the arrival of broadband internet: it unlocks new opportunities and heralds a new era for AI applications. With Cerebras Inference, developers can now build next-generation AI applications that require complex, real-time performance, such as AI agents and intelligent systems.
Andrew Ng, Founder of DeepLearning.AI, underscored the importance of speed in AI development: “DeepLearning.AI has multiple agentic workflows that require prompting an LLM repeatedly to get a result. Cerebras has built an impressively fast inference capability which will be very helpful to such workloads.”
Broad Industry Support and Strategic Partnerships
Cerebras has garnered strong support from industry leaders and has formed strategic partnerships to accelerate the development of AI applications. Kim Branson, SVP of AI/ML at GlaxoSmithKline, an early Cerebras customer, emphasized the transformative potential of this technology: “Speed and scale change everything.”
Other companies, such as LiveKit, Perplexity, and Meter, have also expressed enthusiasm for the impact Cerebras Inference will have on their operations. These companies are leveraging the power of Cerebras' compute capabilities to create more responsive, human-like AI experiences, improve user interaction in search engines, and enhance network management systems.
Cerebras Inference: Tiers and Accessibility
Cerebras Inference is available across three competitively priced tiers: Free, Developer, and Enterprise. The Free Tier provides free API access with generous usage limits, making it accessible to a broad range of users. The Developer Tier offers a flexible, serverless deployment option, with Llama 3.1 models priced at 10 cents and 60 cents per million tokens. The Enterprise Tier caters to organizations with sustained workloads, offering fine-tuned models, custom service level agreements, and dedicated support, with pricing available upon request.
Powering Cerebras Inference: The Wafer Scale Engine 3 (WSE-3)
At the heart of Cerebras Inference is the Cerebras CS-3 system, powered by the industry-leading Wafer Scale Engine 3 (WSE-3). This AI processor is unmatched in its size and speed, offering 7,000 times more memory bandwidth than NVIDIA's H100. The WSE-3's massive scale enables it to handle many concurrent users, ensuring blistering speeds without compromising performance. This architecture allows Cerebras to sidestep the trade-offs that typically plague GPU-based systems, delivering best-in-class performance for AI workloads.
Seamless Integration and Developer-Friendly API
Cerebras Inference is designed with developers in mind. It features an API that is fully compatible with the OpenAI Chat Completions API, allowing for easy migration with minimal code changes. This developer-friendly approach ensures that integrating Cerebras Inference into existing workflows is as seamless as possible, enabling rapid deployment of high-performance AI applications; a minimal migration sketch follows.
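As a rough sketch of what such a migration could look like, the snippet below points the standard OpenAI Python client at a Cerebras-style endpoint. The base URL, environment variable, and model identifier are illustrative assumptions rather than values confirmed by this announcement; the exact endpoint and model names come from Cerebras' own documentation.

```python
import os

from openai import OpenAI  # standard OpenAI Python SDK

# Assumed endpoint, key variable, and model name for illustration only;
# check Cerebras' documentation for the real values.
client = OpenAI(
    base_url="https://api.cerebras.ai/v1",   # assumed OpenAI-compatible endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],  # assumed environment variable
)

response = client.chat.completions.create(
    model="llama3.1-8b",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Summarize wafer-scale inference in one sentence."}
    ],
)

print(response.choices[0].message.content)
```

Because the request and response shapes match the Chat Completions API, existing application code typically only needs the client constructor arguments and model name changed.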
Cerebras Systems: Driving Innovation Across Industries
Cerebras Systems is not only a leader in AI computing but also a key player across a range of industries, including healthcare, energy, government, scientific computing, and financial services. The company's solutions have been instrumental in driving breakthroughs at institutions such as the National Laboratories, Aleph Alpha, the Mayo Clinic, and GlaxoSmithKline.
By providing unmatched speed, scalability, and accuracy, Cerebras is enabling organizations across these sectors to tackle some of the most challenging problems in AI and beyond. Whether it is accelerating drug discovery in healthcare or expanding computational capabilities in scientific research, Cerebras is at the forefront of innovation.
Conclusion: A New Era for AI Inference
Cerebras Systems is setting a new standard for AI inference with the launch of Cerebras Inference. By offering 20 times the speed of traditional GPU-based systems at a fraction of the cost, Cerebras is not only making AI more accessible but also paving the way for the next generation of AI applications. With its cutting-edge technology, strategic partnerships, and commitment to innovation, Cerebras is poised to lead the AI industry into a new era of unprecedented performance and scalability.
For more information on Cerebras Systems and to try Cerebras Inference, visit www.cerebras.ai.