Azure Functions grows up

Azure’s serverless platform adds support for warm starts, longer-running functions, virtual network connections, and Azure Active Directory

For a technology that was born on Azure, beginning life as a derivative of Azure's original WebJobs platform, Azure Functions has come a long way. It's not just a cloud service, as it can be hosted in containers and run anywhere you've got Docker support: from your own dev hardware, to IoT devices at the edge of your network, to on-premises servers, and even to other cloud platforms. And Azure Functions is still evolving, adding new features and new hosting models.

One key new feature is a premium plan for Azure-hosted Functions. If you're using Azure Functions to process a lot of data, it's an essential upgrade. Not only do functions get to run longer, up to 25 minutes, they're now able to use more compute and more memory: a maximum of 4 cores and 12GB. With Functions-based apps able to consume more resources, there's a need to manage budgets more effectively, especially if you're using them as part of an event-driven programming model, with services such as Azure Event Grid triggering Functions.

Giving Azure Functions a warm start

A new billing model is part of the premium plan. Here you can set a minimum plan size, ensuring that one or more Azure Functions hosts are always warmed up and ready to go. At the other end of the scale, you can set a maximum plan size, limiting the number of instances that can run at once and helping keep your bills predictable.
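As a rough sketch, that sizing can be set when you create a premium plan from the Azure CLI; the resource names here are hypothetical, with --min-instances setting the floor and --max-burst the ceiling:

```bash
# Create an Elastic Premium plan (EP1 is the smallest premium SKU).
# --min-instances keeps one instance always warm; --max-burst caps
# scale-out so costs stay predictable.
az functionapp plan create \
  --resource-group my-rg \
  --name my-premium-plan \
  --location westus2 \
  --sku EP1 \
  --min-instances 1 \
  --max-burst 10
```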

Controlling Azure Functions cold starts is more important than you might think, especially when you need to manage application latency. Ideally your app would respond to changes in demand instantly, adding instances as load increases and removing them as it falls. It's a nice idea, but even though Azure Functions launches quickly, it still takes time to load a container and make all the appropriate service connections. That's why it's important to manage your cold-start times: in most cases your code will lag demand, and instances will take longer to shut down than you might want.

If you’re using a premium plan, you can reduce the risk of lag by having a prewarmed instance ready to go. This sets a floor for your compute requirements, so you’ll add any additional Function instances on top of the one you already have running. This should allow you to stay ahead of demand, though you will pay for that extra prewarmed instance.

Networking Azure Functions

Azure Functions can now take advantage of Azure's software-defined networking tools, with different plan types offering different networking options. The basic consumption plan, which is purely event-driven, only lets you control inbound IP addresses. That's a reasonable compromise, as you'll want to control your event sources, and any outbound connections from a consumption-plan Function will be to other Azure resources, so you can manage networking through those services' own controls. More complex applications will use the premium plan, which adds support for virtual network connections, including to Azure ExpressRoute, and for outbound service endpoints. You can use these to manage connections to on-premises event sources and endpoints.
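For inbound control, recent versions of the Azure CLI can add access-restriction rules; the example below (with hypothetical names and addresses) allows traffic only from a known event source's range:

```bash
# Allow inbound traffic from one address range; once a rule exists,
# all other inbound traffic is denied by default.
az functionapp config access-restriction add \
  --resource-group my-rg \
  --name my-function-app \
  --rule-name allow-event-source \
  --action Allow \
  --ip-address 203.0.113.0/24 \
  --priority 100
```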

Azure supports using Functions within the App Service Environment, much like WebJobs. You get more control over your instances, and you can build on the App Service networking tools to manage networking in more detail, including working with Azure virtual network resources. This way you can use Functions as part of a virtual infrastructure, deploying them as an integral component of your applications and applying controls to all the elements of your application.

It's not only Azure that can scale Functions for you: Microsoft recently announced KEDA (Kubernetes-based event-driven autoscaling). By adding event-driven scaling to Kubernetes applications, KEDA gives you a new way to scale Functions running outside of Azure. Inside Azure you can use services like Event Grid to launch Functions on demand; if you're running container-hosted Functions on your own infrastructure or another public cloud, you can run them in Kubernetes, using KEDA to launch new instances as required.
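A sketch of how that's wired up: a KEDA ScaledObject tells Kubernetes to scale a Functions deployment based on, say, the length of a storage queue. The deployment and queue names below are hypothetical, and the exact schema depends on your KEDA version:

```yaml
# Scale the orders-function deployment from the length of an Azure
# storage queue; "connection" names an environment variable holding
# the storage connection string.
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: orders-scaler
  labels:
    deploymentName: orders-function
spec:
  scaleTargetRef:
    deploymentName: orders-function
  triggers:
    - type: azure-queue
      metadata:
        queueName: incoming-orders
        connection: AzureWebJobsStorage
```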

Improving Azure Function code: security and testability

Infrastructure is important, but Functions are at heart code. Microsoft currently supports many common languages in Azure Functions 2.x: .NET Core for C# and F#, Node.js 8 and 10 for JavaScript (and TypeScript), and Java 8. Currently in preview are PowerShell and Python, bringing common scripting tools to Azure Functions and turning it into a platform for event-driven system management for Windows and Linux hosts running on Azure.
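In C#, a minimal HTTP-triggered function on the version 2 runtime looks something like this (the function and parameter names are illustrative):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class Hello
{
    // Responds to HTTP GET requests; the runtime binds the request
    // and a logger for us.
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Request received");
        string name = req.Query["name"];
        return new OkObjectResult($"Hello, {name ?? "world"}");
    }
}
```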

One issue facing Azure Functions developers is managing the secrets needed to work with Azure and other APIs. With older versions of Azure Functions you needed to place keys and other connection details in your Functions' app settings. That kept them out of your code, but it wasn't the most efficient way to manage secrets, and it risked leaks. Now Azure Functions can use managed identities to handle access tokens, integrating with Azure Active Directory to control access to services with direct authentication using OAuth. Alternatively, you can use a Function's identity to access secrets stored in a Key Vault. Both approaches mean you're no longer managing secrets alongside your code, ensuring that they're encrypted when not in use.
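The Key Vault route might look like the following sketch, using the 2019-era Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault packages; the vault URL and secret name are hypothetical:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

public static class Secrets
{
    // Fetch a secret using the function app's managed identity;
    // Azure AD issues the token, so no credentials appear in code
    // or in app settings.
    public static async Task<string> GetApiKeyAsync()
    {
        var tokenProvider = new AzureServiceTokenProvider();
        var keyVault = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(
                tokenProvider.KeyVaultTokenCallback));

        var secret = await keyVault.GetSecretAsync(
            "https://my-vault.vault.azure.net/secrets/my-api-key");
        return secret.Value;
    }
}
```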

The change to .NET Core in Azure Functions version 2 has made it easier to build and manage your apps, allowing you to use more microservice design patterns. One of these is dependency injection, which makes it easier to test code and use mocks during development. Building on the same dependency injection features as .NET Core, Azure Functions can now work with existing unit testing frameworks, injecting test implementations as required and letting you fold Functions into your full CI/CD (continuous integration and continuous delivery) pipeline.
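A sketch of the pattern: a FunctionsStartup class registers services, the runtime injects them into function classes through their constructors, and a unit test can register a mock in place of the real implementation. The IOrderStore service here is hypothetical:

```csharp
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyApp.Startup))]

namespace MyApp
{
    // A hypothetical dependency; tests can substitute a mock.
    public interface IOrderStore
    {
        void Save(string order);
    }

    public class TableOrderStore : IOrderStore
    {
        public void Save(string order) { /* persist the order */ }
    }

    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddSingleton<IOrderStore, TableOrderStore>();
        }
    }

    // Functions become instance classes, receiving dependencies
    // through their constructors.
    public class OrderFunctions
    {
        private readonly IOrderStore _store;

        public OrderFunctions(IOrderStore store) => _store = store;

        [FunctionName("SaveOrder")]
        public void Run([QueueTrigger("orders")] string order) =>
            _store.Save(order);
    }
}
```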

Message-driven and event-driven design patterns are a key way to build and deliver distributed applications. By integrating Functions with message queues and publish-and-subscribe architectures, we can start to construct scalable microservices that serve as input and output stages. It's a model that makes sense for everything from IoT sensor data processing to managing event flows in scale-out Kubernetes, and one that looks set to keep evolving.
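As a simple sketch of that pattern, a queue-triggered C# function can act as one stage in a pipeline, reading messages from one queue and writing results to another; the queue names are hypothetical:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessOrders
{
    // Each message on incoming-orders triggers the function; its
    // output binding writes the result to processed-orders, which
    // can feed the next stage in the pipeline.
    [FunctionName("ProcessOrders")]
    public static void Run(
        [QueueTrigger("incoming-orders")] string order,
        [Queue("processed-orders")] out string processed,
        ILogger log)
    {
        log.LogInformation($"Processing order: {order}");
        processed = order.ToUpperInvariant();
    }
}
```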
