ServiceNow is accelerating enterprise AI with a new reasoning model built in partnership with NVIDIA, enabling AI agents that respond in real time, handle complex workflows and scale across functions such as IT, HR and customer service for teams worldwide.

Unveiled at ServiceNow’s Knowledge 2025, the Apriel Nemotron 15B model is compact, cost-efficient, and tuned for action. It’s designed to drive the next step forward in enterprise large language models (LLMs).
Smaller Model, Bigger Impact
Apriel Nemotron 15B is engineered for reasoning — drawing inferences, weighing goals and navigating rules in real time. Its smaller size compared to some of the latest general-purpose LLMs (which can run to more than a trillion parameters) means it delivers faster responses and lower inference costs, while still packing enterprise-grade intelligence.
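For readers who want a feel for what "smaller" means in practice, here is a minimal inference sketch using the Hugging Face Transformers library. The model identifier and the example prompt are assumptions for illustration only; substitute the checkpoint ServiceNow actually publishes.

```python
# Minimal sketch: running a compact ~15B reasoning model locally with
# Hugging Face Transformers. The model ID below is an assumption for
# illustration, not a confirmed release name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ServiceNow-AI/Apriel-Nemotron-15b"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # half precision keeps a 15B model within a single large GPU
    device_map="auto",
)

# A hypothetical IT-support prompt that exercises multi-step reasoning.
messages = [
    {
        "role": "user",
        "content": "An employee reports a locked account and an expired VPN "
                   "certificate. Which issue should IT resolve first, and why?",
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

At roughly 15 billion parameters, a checkpoint of this size fits in bfloat16 on a single high-memory GPU, which is the practical reason a compact model translates into faster responses and lower inference costs than trillion-parameter alternatives.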
Developed with Advanced Technology
Apriel Nemotron 15B was developed with NVIDIA NeMo, the open NVIDIA Llama Nemotron Post-Training Dataset, and ServiceNow domain-specific data. Its post-training took place on NVIDIA DGX Cloud hosted on AWS, tapping high-performance infrastructure to accelerate development. The result is an AI model optimized for speed, efficiency, and scalability — key ingredients for powering AI agents that can support thousands of concurrent enterprise workflows.
A Closed Loop for Continuous Learning
Beyond the model itself, ServiceNow and NVIDIA are introducing a new data flywheel architecture that integrates ServiceNow’s Workflow Data Fabric with NVIDIA NeMo microservices, including NeMo Customizer and NeMo Evaluator. The setup creates a closed loop that uses workflow data to personalize responses and improve accuracy over time, while guardrails ensure customers stay in control of how their data is used in a secure and compliant manner.
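The sketch below illustrates the shape of that closed loop. Every function is a hypothetical stub standing in for the components named above (Workflow Data Fabric on the collection side, NeMo Customizer for post-training, NeMo Evaluator for scoring); none of these calls are actual APIs.

```python
# Conceptual sketch of one data-flywheel iteration: collect approved workflow
# data, post-train a candidate model, and promote it only if evaluation
# improves. All functions are hypothetical stubs, not real ServiceNow or
# NVIDIA NeMo APIs.
import random
from dataclasses import dataclass


@dataclass
class Model:
    name: str
    accuracy: float  # stand-in for an Evaluator benchmark score


def fetch_approved_workflow_records() -> list:
    """Stub: pull workflow interactions that guardrails mark as eligible for training."""
    return [{"prompt": "Reset my VPN certificate", "resolution": "Issued a new cert via the portal"}]


def customize(base: Model, records: list) -> Model:
    """Stub: post-train the base model on curated workflow data (NeMo Customizer's role)."""
    return Model(name=base.name + "+tuned", accuracy=base.accuracy + random.uniform(-0.01, 0.03))


def evaluate(model: Model) -> float:
    """Stub: score the model on enterprise benchmarks (NeMo Evaluator's role)."""
    return model.accuracy


def flywheel_step(current: Model) -> Model:
    records = fetch_approved_workflow_records()
    candidate = customize(current, records)
    # Close the loop: promote the candidate only if it beats the deployed model.
    return candidate if evaluate(candidate) > evaluate(current) else current


deployed = Model(name="apriel-nemotron-15b", accuracy=0.80)
deployed = flywheel_step(deployed)
print(deployed)
```

The key design point is the evaluation gate: workflow data continuously feeds candidate updates, but only candidates that measurably improve accuracy replace the model customers are running.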
From Complexity to Clarity: Real-World Impact
ServiceNow demonstrated how these agentic models are being deployed in real enterprise scenarios, including at AstraZeneca, where AI agents will help employees resolve issues and make decisions with greater speed and precision, giving 90,000 hours back to employees.
“The Apriel Nemotron 15B model — developed by two of the most advanced enterprise AI companies — features purpose-built reasoning to power the next generation of intelligent AI agents,” said Jon Sigler, executive vice president of Platform and AI at ServiceNow. “This achieves what generic models can’t, combining real-time enterprise data, workflow context and advanced reasoning to help AI agents drive real productivity.”
“Together with ServiceNow, we’ve built an efficient, enterprise-ready model to fuel a new class of intelligent AI agents that can reason to boost team productivity,” added Kari Briski, vice president of generative AI software at NVIDIA. “By using the NVIDIA Llama Nemotron Post-Training Dataset and ServiceNow domain-specific data, Apriel Nemotron 15B delivers advanced reasoning capabilities in a smaller size, making it faster, more accurate and cost-effective to run.”
Scaling the AI Agent Era
The collaboration marks a significant shift in enterprise AI strategy, moving from static models to intelligent systems that evolve. It is also another milestone in the partnership between ServiceNow and NVIDIA, pushing agentic AI forward across industries.
For businesses, this means faster resolution times, greater productivity, and more responsive digital experiences. For technology leaders, it’s a model that fits today’s performance and cost requirements — and can scale as needs grow.
Availability
Apriel Nemotron 15B is now available. The model will support ServiceNow’s Now LLM services and will become a key engine behind the company’s agentic AI offerings.
Learn more about the launch and how NVIDIA and ServiceNow are shaping the future of enterprise AI at Knowledge 2025.
