Kubernetes at the edge with Akri

Microsoft’s latest Kubernetes extension adds device discovery and management.


Kubernetes has become the Swiss Army knife of distributed computing. Its role as an orchestrator that’s extensible and relatively easy to configure is key to managing applications at scale, ensuring your code runs with the required balance of resources. With a growing ecosystem of extensions, it works across everything, from the smallest devices to hyperscale clouds. In conjunction with service mesh, it provides a way of architecting and operating applications that separates them from both hardware and systems layers.

Microsoft has been thinking a lot about what you can do with Kubernetes, using its Deis Labs team to incubate new Kubernetes tools and features, and often working in the open through the Cloud Native Computing Foundation. Some of that work has provided frameworks for building and testing Kubernetes code. Other projects have focused on alternative ways to scale and manage Kubernetes installs, from virtual kubelets that take advantage of hyperscale clouds to KEDA (Kubernetes event-driven autoscaling), which uses events as well as resource demand to drive scaling.

The latest addition to the Deis Labs family of Kubernetes tools is Akri, a way of managing microcontroller-class devices from Kubernetes. The name Akri is both an acronym (A Kubernetes Resource Interface) and the Greek word for “edge.” Many Kubernetes projects base their names on Greek words since that’s where “Kubernetes” comes from, so Microsoft’s choice isn’t unusual.

Kubernetes on the edge

Akri shouldn’t be confused with Microsoft’s other edge Kubernetes project, Krustlet. Both work with edge devices, but they’re targeted at different classes of hardware. A Krustlet needs a WebAssembly runtime with WASI (WebAssembly System Interface) support to host code, so you’re looking at higher-end ARM-based devices such as the Raspberry Pi 4 compute module or low-end Intel processors. Akri doesn’t require code to be running on edge hardware; all it needs is a set of interfaces that allow it to identify and locate what it calls “leaf devices.”

Focusing on managing edge hardware like this makes sense. You’re not going to want to run code on all your devices. In many cases, you won’t have the capability, as you’ll be using off-the-shelf devices like cameras and IP-connected sensors, where all the code is provided by the vendor and you only have basic management capabilities. Those devices are important parts of your applications, offering APIs or streams of data. You need to be able to determine what’s available, whether they’re running, and what state they are in.

Kubernetes’ built-in device plug-in framework is key to how Akri works. Originally designed to support server hardware such as GPUs or FPGAs (field-programmable gate arrays), the device plug-in framework is an abstraction layer that makes hardware visible to your Kubernetes controllers and schedulers. Although it was designed to work with relatively few static assets, since you’re unlikely to add new capabilities to servers in a data center, Microsoft has extended it to support devices on the edge of your network.

That change fits with many of Microsoft’s recent Kubernetes announcements, which include bringing its managed Kubernetes platform from Azure down to Azure Stack HCI hardware in on-premises data centers. The Azure team has come to the obvious conclusion that Kubernetes needs to be everywhere, and that it’s as important for distributed applications as the operating system used to be for client/server.

Using Akri in your clusters

For your applications, understanding that Kubernetes has a big role to play in your data center and on your edge hardware is an important philosophical shift. Code gets built once and runs across diverse environments under a common application management layer. The resulting development and operations model makes it easier to manage, deploy, and run complex workloads across all kinds of server and cloud. Adding Akri to that mix changes the role of devices in your network from discrete hardware to application elements that are named, discovered, and used.

Akri is a standard Kubernetes extension, using two custom resource definitions, a controller, and a way of implementing device plug-ins. Once Akri is installed, you write a definition for your device in YAML, which is stored in the cluster’s etcd configuration database. The device plug-in then acts as an agent, finding hardware that matches your definitions using supported discovery protocols. Finally, the Akri controller manages each discovered device through a broker running in a Kubernetes pod, which exposes the device and its APIs to your application code.
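A device definition of this kind might look something like the following sketch, which is based on the shape of the project’s early udev samples; the API version and field names may have changed since, and the broker image here is a hypothetical placeholder, not an Akri-provided container:

```yaml
# Sketch of an Akri Configuration custom resource for udev-discovered
# cameras. Field names follow the project's early samples and may have
# changed; the broker image is a made-up example.
apiVersion: akri.sh/v0
kind: Configuration
metadata:
  name: akri-udev-video
spec:
  protocol:
    udev:
      # Match any Video4Linux device node (/dev/video0, /dev/video1, ...)
      udevRules:
        - 'KERNEL=="video[0-9]*"'
  # Pod spec for the broker that exposes each discovered camera
  brokerPodSpec:
    containers:
      - name: custom-broker
        image: "ghcr.io/example/video-broker:latest"  # hypothetical image
  # How many nodes may use each discovered device at once
  capacity: 1
```

When the agent finds a matching device node, the controller schedules a broker pod from this spec on the node that can see it.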

Akri installs from a Helm chart in a Deis Labs repository; once installed and configured, it can be used to define and manage services. For example, using devices with udev support, you can quickly discover a pool of attached cameras and use them as part of a streaming video application. If a device is exposed in /dev for your cluster, it’s usable. You can connect to remote devices with the appropriate drivers and network services.
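In practice the installation is a short Helm session. This is a sketch based on the project’s early documentation; the repository URL and chart name may have moved since:

```shell
# Add the Akri Helm repository (early Deis Labs location; may have
# moved since) and install the chart into the current cluster.
helm repo add akri-helm-charts https://deislabs.github.io/akri/
helm repo update
helm install akri akri-helm-charts/akri
```

After this, applying a Configuration resource is all that’s needed to start discovery.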

Discovering devices

Currently two discovery protocols are supported: udev, for local Linux devices, and ONVIF (Open Network Video Interface Forum), for network video devices. Udev is the Linux device manager, which exposes currently attached devices as nodes in /dev. Microsoft notes that Akri is extensible and wants the community to add more protocols and more devices. You may need to develop your own discovery protocols for devices where there’s no standard, which offers an opportunity for Akri to become a development hub for both de facto and de jure device discovery.

Microsoft has published a roadmap of the protocols it plans to support. These include the OPC UA industrial automation protocol, the Bluetooth and LoRaWAN wireless protocols, and a basic IP address scanning capability.

Since udev is part of Linux, it’s a widely available, flexible mechanism that lets applications use rules to manage and exclude devices. For example, your application can be configured to work only with devices from a specific vendor, or only with devices that output certain image types or have specific capabilities. Sample configuration files in Microsoft’s Akri documentation and tutorials will help you get started.
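The rule syntax itself is standard udev. The entries below illustrate the kind of filtering described above; the vendor ID is a made-up example, and you’d check Akri’s own samples for the exact keys your devices expose:

```yaml
# Illustrative udevRules entries for an Akri Configuration.
# The match syntax is standard udev; the vendor ID is hypothetical.
udevRules:
  # Only Video4Linux nodes from one specific USB vendor
  - 'KERNEL=="video[0-9]*", ENV{ID_VENDOR_ID}=="0bda"'
  # Only the primary capture node of each video4linux device
  - 'SUBSYSTEM=="video4linux", ATTR{index}=="0"'
```

You can inspect the attributes a device actually exposes with `udevadm info --attribute-walk` on the node, then write rules against them.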

Once a device is detected, it’s attached to an Akri broker pod, ready for use with your application. You can tune the number of broker pods deployed, adding redundant pods that keep a service running if a broker crashes. A set of environment variables in the Helm chart manages the connection specification, for example controlling resolution and frame rates. You can change these to configure brokers to work with your choice of services.
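Expressed in a Configuration’s spec, that tuning might look like the sketch below. The `capacity` field and pod `env` mechanism are standard parts of the design, but the variable names here are hypothetical examples, not settings defined by Akri itself:

```yaml
# Tuning sketch: broker redundancy and connection settings.
# capacity and brokerPodSpec follow Akri's early Configuration shape;
# the env variable names are made-up examples for a video broker.
spec:
  capacity: 2          # let two nodes schedule brokers per device
  brokerPodSpec:
    containers:
      - name: custom-broker
        image: "ghcr.io/example/video-broker:latest"  # hypothetical
        env:
          - name: RESOLUTION_WIDTH     # illustrative tuning variables
            value: "1280"
          - name: RESOLUTION_HEIGHT
            value: "720"
          - name: FRAMES_PER_SECOND
            value: "10"
```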

Run Akri where you want, how you want

You don’t need a full install of Kubernetes to run Akri; it’s able to run on both K3s and MicroK8s. Either is suitable for small-scale edge controllers and application hosts, building clusters on a small number of nodes and on low-powered devices like Raspberry Pis. With Ubuntu now supporting the Pi 4, it’s easy to get a MicroK8s host running at low cost and with low power requirements. (I’d recommend running off SSD rather than an SD card though, to improve system reliability.)
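Getting such a host ready takes only a few commands. The following assumes a Raspberry Pi running Ubuntu with snap available; addon names reflect MicroK8s at the time of writing:

```shell
# Install MicroK8s on Ubuntu and wait for the node to come up.
sudo snap install microk8s --classic
sudo microk8s status --wait-ready
# Enable cluster DNS and Helm support, needed before installing Akri.
sudo microk8s enable dns helm3
```

From there, the Akri Helm chart installs just as it would on a full Kubernetes cluster.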

The initial release of Akri is limited, but it shows a lot of promise. Microsoft is launching early to build a community around the project, intending to hand it over to a foundation as soon as possible. With many possible protocols to support, it’s clear that Microsoft can’t deliver them all, so by offering an open-source platform the community has an opportunity to scratch its own itches and provide tools that help everyone. With a lot of interest in Kubernetes at the edge and in both K3s and MicroK8s, it’ll be interesting to watch how Akri evolves during the next few years. For now it looks like Microsoft is delivering the right tool at the right time.

Copyright © 2020 IDG Communications, Inc.