Kubernetes with .NET – Part 1: Overview

This post is the first in a series that explains how to use .NET to integrate with Kubernetes. We’re not going to focus on how to build Docker containers, or how to run those containers on Kubernetes. This is about writing .NET code that interacts with the Kubernetes cluster itself, either to enhance how the app behaves by making it infrastructure aware, or to extend the functionality of Kubernetes using .NET.

Before we begin, let’s break down what Kubernetes is at its core. It’s best to think of Kubernetes as being made up of two main components:

  • A distributed database backed by etcd and exposed via a RESTful API – aka kube-apiserver. The API provides an efficient way to query the current state of a set of resources, pub/sub semantics for being notified of changes to a query (watches), and operations for modifying the state of resources.
  • A collection of components called “controllers” that are background workers responsible for reconciling the desired state of some resource with its actual state. Those that come bundled with Kubernetes run inside the kube-controller-manager.

Seen from this core architectural view, the Kubernetes system is quite simple and elegant:

  1. We store the desired state of some resource in the database.
  2. Controllers monitor changes to the desired state of resources via a “live query” (a watch) and execute actions to bring the actual state of the system in line with the desired state.
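
To make that loop concrete, here is a minimal C# sketch of the reconciliation pattern a controller follows. Everything in it (DesiredState, IResourceStore, the instance methods) is a hypothetical placeholder rather than a Kubernetes API; it only illustrates the shape of the loop.

```csharp
using System.Threading;
using System.Threading.Tasks;

// Hypothetical types: they stand in for "desired state stored in the API server"
// and "actions against real infrastructure", purely to illustrate the pattern.
public record DesiredState(int Replicas);

public interface IResourceStore
{
    // The "live query": completes whenever the desired state changes.
    Task<DesiredState> WatchForNextChangeAsync(CancellationToken ct);
}

public class ReplicaController
{
    private readonly IResourceStore _store;

    public ReplicaController(IResourceStore store) => _store = store;

    public async Task RunAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            // 1. Wait for a change to the desired state recorded in the database.
            DesiredState desired = await _store.WatchForNextChangeAsync(ct);

            // 2. Observe the actual state of the world.
            int actual = await CountRunningInstancesAsync();

            // 3. Reconcile: act until actual state matches desired state.
            for (int i = actual; i < desired.Replicas; i++) await StartInstanceAsync();
            for (int i = actual; i > desired.Replicas; i--) await StopInstanceAsync();
        }
    }

    // Placeholders standing in for real infrastructure calls.
    private Task<int> CountRunningInstancesAsync() => Task.FromResult(0);
    private Task StartInstanceAsync() => Task.CompletedTask;
    private Task StopInstanceAsync() => Task.CompletedTask;
}
```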

When most people think Kubernetes, they think container orchestrator, but that is just what it does out of the box. The default Kubernetes installation comes with a database populated with object types relevant for orchestrating containers, and bundled with a collection of controllers whose purpose is to orchestrate containers. But the Kubernetes API server is extensible – you can define new object types in it, just like you would create a table in a relational database, and you can build and run new controllers that provide new pieces of infrastructure automation.

When I first realized this, it was like a lightbulb going off in my head. I no longer saw Kubernetes as a container orchestrator, but as a generic platform to build infrastructure automation. Some things you could do using these building blocks:

  • Orchestrate provisioning and deployment of VMs on public clouds
  • Deployment of new Kubernetes clusters
  • CI/CD lifecycle
  • … and so much more

Kubernetes-aware code

One question you may be asking yourself is why you would ever want to write apps that are Kubernetes aware. Most applications that run in Kubernetes are written with the general idea that they are horizontally scalable and running in a container. Such an application generally has no knowledge of the infrastructure it runs on, and for the most part it should not care.

Having said that, applications do not run in a vacuum – they most often have external dependencies filled by some kind of service. This may be the SQL Server that stores your data, a Redis cache, an external authentication/authorization provider such as IdentityServer4, or other microservices. We take on these dependencies in our code because they establish an easy-to-use, robust way to solve specific software architecture problems. Your applications are already infrastructure aware to some degree, as you need drivers/SDKs to use those services and you need connection information to reach them. So if our application were aware of the Kubernetes infrastructure, what kind of problems could we solve? The Kubernetes API server stores a lot of interesting information which can be used to determine the infrastructure’s desired state and, in many cases, its last known actual state. Let’s explore a few ideas of what we can do with this information:

  • Runtime configuration management – we can store our configuration in Kubernetes as a ConfigMap. We can plug this into the standard .NET Core configuration infrastructure as just another config provider, and reconfigure our app at runtime if that configuration ever changes. This is not possible with environment variables, which are fixed once the process has started.
  • Service discovery – we may want to discover new services dynamically in our app and change its behavior based on what it finds. I know somebody is already thinking “but there’s built-in service discovery in Kubernetes”. What is there is the ability to resolve a service address based on a well-known DNS scheme – this is not service discovery but service lookup. That is fundamentally different from querying to see what kinds of apps are running, determining their capabilities, and integrating them into our app. An example would be a portal app dynamically building up its own navigation menu out of the microservices it discovers (see the sketch after this list). This architectural need is currently filled by services like Consul and Spring Cloud Discovery. We can easily do the same just by querying Kubernetes for pods that carry specific labels/annotations describing the capabilities of those apps.
  • Authorization server – IdentityServer4 is often used to create a customized federated authentication/authorization service for our apps. Out of the box, you’re required to provide your own persistence store by implementing its interfaces. We could store the configuration for OAuth2 clients, users, and scopes as native Kubernetes resources. If you like this idea, I’ve explored it in a working POC that you can play with here.
  • Creating custom SDLC workflows – manual steps in the SDLC can be automated. For example, a business user changing the value of a Jira item could trigger a deployment to a given environment, or the creation of a new deployment could result in a message on Slack. While there are many third-party solutions for these kinds of problems, you’ll often wish you had more control over them.
  • Automating custom infrastructure – maybe you’re looking after systems that don’t fit into containers and have no Kubernetes integration, but do offer an API. We can create our own controllers that manage the lifecycle of those systems by storing their desired state in Kubernetes and reconciling changes by talking to those APIs.
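
As a taste of the service-discovery idea above, here is a hedged sketch using the official .NET Kubernetes client (introduced later in this post). The label app.kubernetes.io/part-of=portal and the portal.example.com/nav annotation are conventions invented for this example; nothing in Kubernetes mandates them.

```csharp
using System;
using k8s;

// Assumes pods advertise their capabilities through labels/annotations of our
// own choosing; these particular keys are made up for the sketch.
var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
var client = new Kubernetes(config);

// Newer versions of the client group this call as client.CoreV1.ListNamespacedPodAsync(...).
var pods = await client.ListNamespacedPodAsync(
    "default", labelSelector: "app.kubernetes.io/part-of=portal");

foreach (var pod in pods.Items)
{
    var annotations = pod.Metadata.Annotations;
    if (annotations != null && annotations.TryGetValue("portal.example.com/nav", out var navEntry))
    {
        Console.WriteLine($"{pod.Metadata.Name} ({pod.Status.PodIP}) contributes nav entry: {navEntry}");
    }
}
```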

So a question you may be asking yourself is WHY you would want to do this on top of Kubernetes when there are other options available. After all, other systems have APIs that can do the same thing. The key value of modeling infrastructure management on top of Kubernetes is consistency. Kubernetes provides a standardized, best-practice way of describing, monitoring, and automating infrastructure, which improves interoperability, reuse, and knowledge sharing.

The other important difference is HOW behavior is modeled. Most traditional integration APIs expose a surface that accepts “command”-like messages and takes some action as a result. These systems start acting on the command immediately, and once processing completes, the workflow is finished. In contrast, the Kubernetes API model embraces the idea that, beyond validating the correctness of the submitted request, the API’s only role is to record the desired state. Any installed behavior actors (aka controllers) monitor that state and asynchronously work to reconcile the actual state with the target state. If the actual state and desired state ever diverge in the future, the reconciliation logic kicks in and works to bring the system back to the desired state. This architecture makes the system not only highly robust but also loosely coupled and extensible.
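
To make that contrast concrete, here is a hedged sketch of “recording desired state” with the official .NET client: the call returns as soon as the Deployment record is stored, and the deployment controller creates and maintains the pods asynchronously afterwards. The names and image are placeholders.

```csharp
using System.Collections.Generic;
using k8s;
using k8s.Models;

var client = new Kubernetes(KubernetesClientConfiguration.BuildConfigFromConfigFile());

// Declare the desired state: "three replicas of this container should exist".
var desired = new V1Deployment
{
    ApiVersion = "apps/v1",
    Kind = "Deployment",
    Metadata = new V1ObjectMeta { Name = "web" },
    Spec = new V1DeploymentSpec
    {
        Replicas = 3,
        Selector = new V1LabelSelector
        {
            MatchLabels = new Dictionary<string, string> { ["app"] = "web" }
        },
        Template = new V1PodTemplateSpec
        {
            Metadata = new V1ObjectMeta { Labels = new Dictionary<string, string> { ["app"] = "web" } },
            Spec = new V1PodSpec
            {
                Containers = new List<V1Container> { new V1Container { Name = "web", Image = "nginx:1.25" } }
            }
        }
    }
};

// This call only records the desired state; the deployment controller reconciles it
// asynchronously. Newer client versions group it as client.AppsV1.CreateNamespacedDeploymentAsync(...).
await client.CreateNamespacedDeploymentAsync(desired, "default");
```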

Working with Kubernetes API Server

Official CLI – kubectl

The most common way of interacting with the Kubernetes API server is through the official Kubernetes CLI, kubectl, which you’re probably already familiar with. The CLI stores API targets and authentication settings in a config file, which you can view via kubectl config view. You can query the state of resources via the kubectl get <plural-name> command and use various switches to further narrow down the query, but under the covers it is just talking to the Kubernetes API server via REST. Modifying the state of a resource is usually done via kubectl apply, which submits a YAML-serialized version of the “row” we’re creating or modifying. The apply command is actually a smart wrapper around “create-or-update”, and will issue HTTP POST/PUT/PATCH as necessary.
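
To show there is no magic here, the sketch below issues roughly the same request that kubectl get pods makes, using nothing but HttpClient. The server address and token are placeholders, and certificate handling is omitted; the point is only that the API is plain REST.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

// Placeholders: in a real cluster these would come from your kubeconfig or a
// mounted service-account token.
var apiServer = "https://my-cluster.example.com:6443";
var token = Environment.GetEnvironmentVariable("K8S_TOKEN");

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);

// Rough equivalent of `kubectl get pods -n default` (minus output formatting):
// a plain GET against the pods collection in the "default" namespace.
var json = await http.GetStringAsync($"{apiServer}/api/v1/namespaces/default/pods");
Console.WriteLine(json);
```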

Kubernetes Client

The official .NET Kubernetes Client is one of the main libraries currently available for interacting with the Kubernetes API server. Much of it is auto-generated from the Kubernetes API server’s OpenAPI (aka Swagger) document. The client library handles authentication, lets you use strongly typed objects to interact with the Kubernetes API, and supports streaming requests (watch queries). Since everything is auto-generated, the library stays up to date with official Kubernetes releases. The downside of auto-generation is that every single API operation appears on a single Kubernetes client object, which ends up being very large. The library makes it easy to get started, as it can use your current kubectl context to target and authenticate against the cluster during local development. Take a look at the samples to get an idea of how to get started.
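
As a hedged sketch adapted from the client’s watch sample, here is what a streaming watch query looks like. Exact method grouping varies between client versions; newer releases expose these calls under properties like client.CoreV1.

```csharp
using System;
using System.Threading;
using k8s;
using k8s.Models;

// Uses your current kubectl context for the endpoint and credentials.
var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
var client = new Kubernetes(config);

// Start a watch query: the response is a long-lived stream of change events.
var podListResponse = client.ListNamespacedPodWithHttpMessagesAsync("default", watch: true);

using (podListResponse.Watch<V1Pod, V1PodList>((eventType, pod) =>
{
    // Called for every ADDED / MODIFIED / DELETED event on pods in "default".
    Console.WriteLine($"{eventType}: {pod.Metadata.Name}");
}))
{
    Console.WriteLine("Watching pods; press Ctrl+C to stop.");
    var stop = new ManualResetEventSlim(false);
    Console.CancelKeyPress += (_, _) => stop.Set();
    stop.Wait();
}
```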

KubeClient

This is an alternative to the official client. The project takes a fundamentally different approach: only the models are auto-generated, while the client itself (the methods that call the API server) is handwritten. This allows the APIs to be nicely organized, more compact, and in some ways better structured. On the flip side, this is also the project’s Achilles’ heel, as it risks falling behind on Kubernetes features. It does not expose an API call for every resource, and the small number of maintainers combined with the high maintenance burden means the project carries an inherent risk of becoming stale.

I’ll be honest with you: since Kubernetes is written in Go, the most mature Kubernetes SDK is the Go one. Having said that, clients for other language ecosystems are under active development, allowing you to integrate with and extend Kubernetes using your favorite stack… which is, of course, .NET, right?

In the next part of the series, we’ll take a deep dive into how to interact with the Kubernetes API server.
