How AI helped Domino’s improve pizza delivery

Sharing a container environment and an Nvidia GPU server has enabled Domino's data scientists to create more complex and accurate models that improve store and delivery operations


When the words artificial intelligence (AI) and machine learning (ML) come up, people often think of research-heavy fields such as space exploration and biomedicine. The fact is, every industry should be looking at AI and ML, including retail. We are now in the customer service era, and small differences in service can make a big difference in market share.

This past week Nvidia held a virtual version of its annual GPU Technology Conference (GTC), which has become a showcase for real-life AI/ML use cases. Historically, the show was highly technical, but over the years it has evolved into an event where companies demonstrate how they use advanced technologies to transform their businesses.

Domino’s is using AI and ML to improve store and online operations

Domino’s is a familiar retail business that presented how it’s using AI and ML. The company has come up with a successful recipe for changing the way it operates. The secret ingredient is Nvidia’s technology, which the leading pizza chain is using to improve store and online operations, provide a better customer experience, and route orders more efficiently.

As a result, Domino’s is seeing happier customers and more tips for its drivers. But that’s only a small piece of the multifaceted pie. So, what does it take to get pizza from a Domino’s store to someone’s house? The answer is quite complex.

Nvidia DGX-1 server enabled Domino’s to accelerate its AI and ML initiatives

The data science team at Domino’s put the company’s speed and efficiency to the test using Nvidia’s DGX-1 server, an integrated hardware and software system for deep learning research. For those not familiar with the DGX line, Nvidia has created a series of turnkey appliances businesses can drop in and start using immediately. The alternative is to cobble together hardware, software, and AI platforms and then tune the entire system correctly, which can take weeks.

The Domino’s team created a delivery prediction model that forecasts when an order will be ready, using attributes of the order and what is happening in the store at that moment, such as the number of employees, managers, and customers present. The model was trained on a dataset of five million orders, which isn’t massive by today’s standards but is large enough to produce accurate models. All future orders are fed back into the system to further increase model accuracy.
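Fragoso didn’t share Domino’s code, but a minimal sketch of this kind of readiness model, using scikit-learn on synthetic data with hypothetical feature names, might look something like this:

```python
# A minimal sketch of an order-readiness prediction model.
# Feature names and data are hypothetical; Domino's actual
# model, features, and framework are not public.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000  # small stand-in for the five-million-order dataset

orders = pd.DataFrame({
    "items_in_order":     rng.integers(1, 8, n),
    "employees_on_shift": rng.integers(2, 12, n),
    "managers_on_shift":  rng.integers(1, 3, n),
    "customers_in_store": rng.integers(0, 20, n),
    "orders_in_queue":    rng.integers(0, 15, n),
})
# Synthetic target: minutes until the order is ready.
minutes_to_ready = (
    5
    + 2.0 * orders["items_in_order"]
    + 1.5 * orders["orders_in_queue"]
    - 0.8 * orders["employees_on_shift"]
    + rng.normal(0, 2, n)
)

X_train, X_test, y_train, y_test = train_test_split(
    orders, minutes_to_ready, test_size=0.2, random_state=0
)

model = GradientBoostingRegressor()
model.fit(X_train, y_train)
preds = model.predict(X_test)
print(f"MAE: {mean_absolute_error(y_test, preds):.2f} minutes")
```

The feedback loop the company describes amounts to appending completed orders to this training set before each retraining run.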

Desktops and laptops don’t cut it with AI and ML

Domino’s previously trained its models on GPU-enabled laptops and desktops, where training took more than 16 hours. The long turnaround made it extremely difficult to improve the model, said Domino’s data science and AI manager Zachary Fragoso during a presentation at the virtual GTC 2020.

The extra compute power of the DGX-1 enabled Domino’s data scientists to train more complex models in less time. The system reduced the training time to under an hour and increased accuracy for order forecasts from 75 percent to 95 percent. The test demonstrated how Domino’s could boost productivity by training models faster, Fragoso said.
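The talk didn’t name Domino’s training framework. Assuming PyTorch for illustration, the mechanical part of moving a training step from a laptop to a DGX-class GPU is small, even when the speedup isn’t:

```python
# Hypothetical sketch: the same training step runs on CPU or GPU.
# PyTorch is an assumption; the GTC talk didn't name Domino's framework.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# A fake batch of order features and readiness times.
features = torch.randn(1024, 5, device=device)
targets = torch.randn(1024, 1, device=device)

optimizer.zero_grad()
loss = loss_fn(model(features), targets)
loss.backward()
optimizer.step()
print(f"trained one step on {device}, loss={loss.item():.4f}")
```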

Resource sharing is another benefit of the DGX-1

Domino’s uncovered another benefit in the process: resource sharing. Each individual GPU in the DGX-1 has 32GB of memory, large enough that Domino’s data scientists could use a fraction of the GPUs and run multiple tests simultaneously. With eight such GPUs at their fingertips, the data scientists found themselves sharing resources and knowledge, as well as collaborating across teams.
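Exactly how the team carves up the eight GPUs wasn’t spelled out. One common pattern, sketched here with PyTorch as an assumption, is to pin each job to a subset of devices and optionally cap its memory share:

```python
# Hypothetical sketch of sharing a multi-GPU box between jobs:
# pin each job to a subset of GPUs before any CUDA context is created.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"  # this job sees only GPUs 2 and 3

import torch  # import after setting the env var so the mask takes effect

# On an eight-GPU DGX-1 this reports 2; the rest stay free for colleagues.
print(torch.cuda.device_count())

# Optionally cap this process's share of a visible GPU's 32GB,
# so two experiments can coexist on the same card.
if torch.cuda.is_available():
    torch.cuda.set_per_process_memory_fraction(0.5, device=0)
```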

In the past, sharing work across teams—including code reviews and quality assurance testing—was challenging, since data scientists worked in their own local environments. Now that data scientists are working with a common DGX-1 server, they’re easily able to share Docker containers that are fully customizable and reproducible. This gives the data scientists a large resource pool to work with and access to resources when needed, so they’re not sitting idle. The Docker solution that Domino’s integrated with DGX-1 also makes it easier to reproduce code across different environments because all the data is contained within the Docker image.
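The talk didn’t show Domino’s container tooling. As one plausible sketch, the Docker SDK for Python can launch a GPU-backed container from a pinned image, so every data scientist runs in the same environment (the image tag and command below are placeholders):

```python
# Hypothetical sketch: launching a reproducible GPU container with the
# Docker SDK for Python. The image tag and command are placeholders;
# Domino's actual images weren't shown in the talk.
import docker
from docker.types import DeviceRequest

client = docker.from_env()

output = client.containers.run(
    image="nvcr.io/nvidia/pytorch:20.03-py3",  # pinned image => same env for everyone
    command='python -c "import torch; print(torch.cuda.device_count())"',
    device_requests=[DeviceRequest(count=2, capabilities=[["gpu"]])],  # two GPUs
    remove=True,
)
print(output.decode())
```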

Domino’s recently purchased a second DGX-1 and started adding the Kubernetes container management system to the mix. With Kubernetes managed by an optimization engine, Domino’s can dynamically allocate resources to all its data scientists and launch containers faster. According to Fragoso, even data scientists who aren’t familiar with Linux can point and click to launch Docker containers.
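Under Kubernetes, GPU allocation is expressed as a resource limit on the container, and the scheduler finds a free card. Here is a minimal sketch with the official Kubernetes Python client, using placeholder names, since Domino’s actual manifests weren’t shown:

```python
# Hypothetical sketch: requesting a GPU for a data scientist's container
# via the Kubernetes Python client. Names and image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="notebook-alice"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="workspace",
                image="nvcr.io/nvidia/pytorch:20.03-py3",
                resources=client.V1ResourceRequirements(
                    # the Nvidia device plugin schedules a free GPU
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="data-science", body=pod)
```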

On the deployment side, Domino’s created an inferencing stack, which includes a Kubernetes cluster and four Nvidia GPUs. This way, data scientists can interact with and build their models using the same Docker container framework they use on the DGX-1.
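The serving layer wasn’t detailed either. A bare-bones sketch of a containerized inference endpoint for the readiness model, with FastAPI as a stand-in for whatever Domino’s actually runs:

```python
# Hypothetical sketch of an inference endpoint for the readiness model.
# FastAPI is a stand-in; Domino's actual serving stack wasn't detailed.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    items_in_order: int
    employees_on_shift: int
    managers_on_shift: int
    customers_in_store: int
    orders_in_queue: int

@app.post("/predict")
def predict(order: Order) -> dict:
    # In production this would call the trained model; a fixed
    # heuristic keeps the sketch self-contained.
    minutes = 5 + 2.0 * order.items_in_order + 1.5 * order.orders_in_queue
    return {"minutes_to_ready": minutes}

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```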

Domino’s also brought in a machine learning operations (MLOps) platform called Datatron, which sits on top of the Kubernetes cluster with the GPUs and assists Domino’s with ML-specific functions. Datatron monitors model performance in real time, so data scientists can be notified if a model requires retraining.
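Datatron’s interface wasn’t shown, but the underlying idea of real-time performance monitoring is generic: track a rolling error metric on completed deliveries and flag the model when it degrades past a threshold. A sketch of that idea, explicitly not Datatron’s API:

```python
# Generic sketch of real-time model-performance monitoring; this is NOT
# Datatron's API, just the underlying idea: track a rolling error and
# flag the model for retraining when it degrades past a threshold.
from collections import deque

class PerformanceMonitor:
    def __init__(self, window: int = 500, max_mae_minutes: float = 4.0):
        self.errors = deque(maxlen=window)
        self.max_mae_minutes = max_mae_minutes

    def record(self, predicted: float, actual: float) -> None:
        self.errors.append(abs(predicted - actual))

    def needs_retraining(self) -> bool:
        if len(self.errors) < self.errors.maxlen:
            return False  # wait for a full window before judging
        return sum(self.errors) / len(self.errors) > self.max_mae_minutes

monitor = PerformanceMonitor()
monitor.record(predicted=22.0, actual=31.5)  # called as deliveries complete
if monitor.needs_retraining():
    print("alert: readiness model drifting, schedule retraining")
```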

AI and ML are rapidly moving into the realm of IT departments

Bringing the inference stack in-house allows Domino’s to have all the benefits that the cloud providers offer for hosting ML models, while keeping all data and resources on premises. It has changed the way the data scientists deploy models, giving them much more control over the deployment process, Fragoso explained in his presentation.

Fragoso concluded with advice for other companies looking to bring these technologies in-house: “Think about how your data scientists will work together and collaborate. In our case, the DGX-1 and our data scientists are interacting in a common workspace. It was something that our team didn’t really consider when we first acquired this product and has been a real value for us.”

Historically, data scientists operated as an independent silo within companies. More and more, IT organizations are being asked to take on the task of providing the right technology for AI and ML initiatives. Data scientists are expensive resources for most companies, and having them sit around waiting for models to finish is akin to tossing good pizza out the window. The right infrastructure, such as the DGX server line, lets companies speed up processing so data scientists work more and wait less.
