
Fun with knative (part 3)

Apologies for the gap in posts; I’ve been working on some new tech around the Knative stuff that is absolutely brilliant. I will write a blog post on it as soon as I can, as it is a game-changer.

In fact, I’ll give you a quick overview before finishing off the Loom demo stuff: Cloud Events. Put simply, there is a new way of writing event-based applications/functions in OpenShift/Kubernetes, built on an abstracted and simplified event model. It was designed to let devs build disconnected applications with ease: the framework provides a very simple event model, just a Cloud Event message type and a payload, plus the ability to set up namespace-specific brokers, either in-memory or linked to Strimzi/Kafka. This is wired into Knative services and the forthcoming ‘functions’, which are triggered by the arrival and routing of a typed Cloud Event.

What makes it so compelling is, literally, the ease of it. Once you have installed OpenShift Serverless, for example, you simply create a broker using:

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default

You then write your functions/Knative services to receive a Cloud Event, for example using the brilliant ‘Funqy’ Quarkus libraries, and then hook your app into the broker using a ‘trigger’, such as:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: trigger-hit
spec:
  broker: default
  filter:
    attributes:
      type: hit
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: battleships--hit

Note the simplicity: you just specify the broker name and the Cloud Event type as the filter, and Knative serverless does all the wiring for you.
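
If you prefer the command line, the kn client can do the broker and trigger wiring too; a rough equivalent of the two YAML snippets above (assuming the eventing bits are installed and using the same names) would be something like:

# create the namespace-scoped broker (same as the Broker YAML above)
kn broker create default

# route 'hit' Cloud Events to the battleships--hit Knative service
kn trigger create trigger-hit \
  --broker default \
  --filter type=hit \
  --sink ksvc:battleships--hit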

If anyone is interested, I have written a quick emitter app that uses the node.js Cloud Event SDK to push events to a broker, so you can test and see the stuff in action – have a peek at https://github.com/utherp0/cloudeventemitter. You simply provide the broker address (format shown in the repo), the Cloud Event type and the payload, and et voilà. Brilliant stuff; I will write a deeper blog soon going into the use of a Kafka channel for the broker instead, and multiple-stage event routing (where a function emits a Cloud Event back to the broker and other functions are triggered by it).
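
If you don’t fancy node.js, you can also poke a broker directly with curl from inside the cluster using the Cloud Events binary HTTP mode; a minimal sketch (the namespace, event type and payload here are just examples, and the broker ingress address is the usual in-cluster form) looks like this:

# POST a Cloud Event to the in-cluster broker ingress (binary HTTP mode)
curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/myproject/default" \
  -H "Ce-Id: 1234" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: hit" \
  -H "Ce-Source: curl-test" \
  -H "Content-Type: application/json" \
  -d '{"message": "direct hit"}'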

But I appear to have digressed, *again*, so let’s finish off the Loom demo stuff just to show the basics and cool side of knative serving.

If you remember from the first post, the concept of knative serving allows you to simply define an application that is autoscaled down to 0 when it is not being used (a simplification; what actually happens is that the application sits in an inactivity loop, and when a pre-set time expires the system downscales it to 0 Pods – this timer is reset, and the application scaled back up, when traffic arrives at the ingress point).
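
You can watch this behaviour from the command line as well as the topology view; a quick sketch (the service name is just an example, the demo’s own services appear below) would be:

# grab the public URL of the knative service
URL=$(oc get ksvc link1 -o jsonpath='{.status.url}')

# hit the service; a Pod spins up to handle the request
curl "$URL"

# then watch the Pod sit out its inactivity window and scale back to zero
oc get pods -w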

I wanted a visually fun demo to show this behaviour in action and also highlight another important feature of knative services: the concept of ‘revisions’. A revision is another version of the deployment of the application. It has the same application name, and traffic into the application group is load balanced according to a configurable percentage, but the nice thing is that each of the revisions behaves, from a knative perspective, independently.

An example of this, using the Loom demo, is that I spin up four knative services. Each service has three revisions; in actuality they are the same image/container with differing environment variables. As with applications in OpenShift, a version of an application is determined by the image from which it is created and also by the configuration of the way in which it is deployed; deploying an application in OpenShift and then changing the environment variables exposed to the deployment creates another iteration, or version, of the deployment. The same logic applies to revisions: in the case of the Loom demo I deploy a knative service and then create revisions for it by altering the environment variables.

So, for the Loom demo I have a simple RESTful endpoint application that returns a colour. This colour can be overridden by an environment variable. The demo itself is deployed by a good old-fashioned shell script (bad Me, I really should use/learn Ansible) and it’s worth understanding the way in which this script works.

The full source of the demo is available at https://github.com/utherp0/knativechain along with instructions for setting it up – the previous blog post covered the humdrum bit of setting up and configuring the Operators, so do that (not that exciting) bit first before you deploy the demo.

The script is also interesting in that it hand-crafts the applications, and it’s useful to understand those steps, as the key to exploiting this kind of technology is knowing just what is going on under the bonnet. So…

(from the setup.sh in the scripts directory of the repo….)

oc create -f ../yaml/link1-is.yaml

The first thing we do is create the image stream in OpenShift. This is an object that represents the image used for the application – in this case we will be building the image from source, but in order to do that we need a placeholder/object definition to refer to; this is expressed in the *-is.yaml files and shown below for reference:

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: link1
spec:
  lookupPolicy:
    local: false

Nice and simple because I don’t like complexity – it complicates things…..

The script then sets up the build configs for the applications – these are the ‘cookie cutters’ for building the endpoints we will use, and each one looks like this:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: link1
spec:
  output:
    to:
      kind: ImageStreamTag
      name: link1:latest
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    contextDir: /apps/link1
    git:
      uri: https://github.com/utherp0/knativechain
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:12-ubi7
        namespace: openshift
    type: Source

Now this is great and I love these build configs to death, but don’t get attached to them; one of the forthcoming features in OpenShift is a new and simplified, more Kubernetes-ish way of doing this kind of build. But for now we have build configs; this object definition defines the way in which the application is constructed. Note that the source for the application comes from the same repo, is built on top of the UBI (Universal Base Image) for RHEL7, and uses the node.js version 12 framework. Also note that the output of the build config ties, in this case, into the image stream defined by the first object definition.
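
For reference, you can get a broadly equivalent image stream and build config in one go with ‘oc new-build’ rather than hand-crafting the YAML; a sketch using the same repo, context directory and builder image would be:

# creates both the link1 image stream and a link1 build config,
# building from the nodejs:12-ubi7 builder image in the openshift namespace
oc new-build openshift/nodejs:12-ubi7~https://github.com/utherp0/knativechain \
  --context-dir=apps/link1 \
  --name=link1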

These are all pushed into the OpenShift cluster using the ‘oc create -f’ command, which literally throws the object definition at the cluster using the context of your login (the script assumes you have logged into OpenShift in advance, but it does create the project/namespace in which all the components exist).
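
For completeness, the sequence boils down to something like this (the cluster URL is a placeholder; the project name is the one the service YAML later refers to, and the script creates it for you):

# log in first; the script assumes an authenticated session
oc login https://api.mycluster.example.com:6443

# create the project/namespace the demo lives in
oc new-project chaintest

# then throw the object definitions at the cluster
oc create -f ../yaml/link1-is.yaml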

Once we have defined the four image streams and four build configs, one each for the four instances of the application/knative service we want to use, we start doing the fun stuff.

oc start-build link1

The script now gets OpenShift to kick off builds for each of the defined build configs we have instantiated. It’s that easy; once we have a build config defined we can run it as many times as we want (and because, as in this case, the source is being drawn from a git repo we can simply repeat the build when we change and commit code).
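
If you want to watch the builds churn away, you can follow the build logs or just poll the build list, for example:

# kick off a build and stream its log to the terminal
oc start-build link1 --follow

# or check on the state of all four builds at once
oc get builds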

This process produces the four instances of the application and delivers the resulting images to the four image streams we have defined. This is the foundation of the demo, as everything from now on is knative service based and uses these images. So far we have just done the basics and gone from an empty project to four ready-to-deploy images.

The script then uses the ‘oc create -f’ approach to set up the four knative services. There are actually two ways to create knative services (not counting the UI for now): knative has its own command line, kn, which does all things knative-service related, but for the sake of the demo and simplicity I wanted an easy-to-digest set of YAML showing the bits and pieces you need for the service, an example of which is shown below:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:  
  name: link1
spec:
  template:
    metadata:
      name: link1-v1    
    spec:
      containerConcurrency: 0
      containers:
      - image: image-registry.openshift-image-registry.svc:5000/chaintest/link1
        name: user-container
        readinessProbe:
          successThreshold: 1
          tcpSocket:
            port: 0
        resources: {}
      timeoutSeconds: 10
  traffic:
  - latestRevision: true
    percent: 100

Also, there is a little, err, ‘omission’ with the ‘kn’ client currently, around the setting of the initial timeout, which I have raised an RFE on; I’ll explain that in detail in a second, after I explain what this object is creating.

What this does is create the wiring around a knative service, in this case defining the template (which includes the container image we have just built – the image-registry URL is the internal location of the image registry in OpenShift). It also sets the default traffic for the knative service, in this case saying that 100% of the traffic into this service will go to the latest revision.
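
For comparison, the ‘kn’ way of creating much the same service (minus that pesky timeout, which is exactly the problem described next) would look something like this:

# roughly equivalent to the YAML above, but with no way (currently)
# to set the initial timeoutSeconds at creation time
kn service create link1 \
  --image image-registry.openshift-image-registry.svc:5000/chaintest/link1 \
  --revision-name link1-v1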

As part of the demo I wanted the timeout for the knative services to be low; by default it is set to 30 seconds after traffic ingress. The problem with using ‘kn’ to create the service is that it does not (at the moment) let you set the timeout as part of the configuration you give a service. You can see what the problem is from my perspective, with an automated build: if I used ‘kn’ to create the service it would be defined with the 30-second timeout, and I would have to change the timeout *post* creation, which would, ta dah, create a *second* revision (knative services are effectively immutable once created, by design). Hence using YAML instead, where I can set the timeout up front.

When the script has created all four of the knative services it will, if you watch the topology page, spin up the Pods as part of the creation process. This is by design; you want your application to be responsive and also to know that it has deployed correctly. I am constantly surprised when demoing it how often a customer will ask, ‘if knative serving services are only deployed on traffic, how come they spin up when you initially deploy?’.

Then the fun bit – the script uses the ‘kn’ command to create the different revisions for each of the four services. As mentioned before, it uses an environment variable value to differentiate between the versions, so we use the ‘kn’ client to create labelled new versions of the service thus:

kn service update link1 --revision-name=v2 --env COLOUR=purple

Our four services are called link1 through link4, the script provides a new colour through the environment variable COLOUR and labels the revision accordingly (in this case v2).
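
The script does this for each of the four services in turn; a condensed sketch of that loop (the colours here are placeholders rather than the exact ones the script uses) looks like:

# add a v2 and a v3 revision to every service, each with its own colour
for svc in link1 link2 link3 link4; do
  kn service update $svc --revision-name=v2 --env COLOUR=purple
  kn service update $svc --revision-name=v3 --env COLOUR=orange
done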

So, when the script finishes that part we have four knative services, each with three revisions (v1, v2 and v3). And now we do the magic in terms of traffic ingress using:

kn service update link1 --traffic link1-v1=40,link1-v2=30,link1-v3=30

This is the cool bit – we have now assigned, in this case, 40% of the link1 service’s traffic to the v1 revision, 30% to the v2 revision and 30% to the v3 revision. With just those two commands we have created three different versions, or revisions, of each of the services and split the traffic between them.
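
You can sanity-check the result with the kn client, for example:

# list all revisions and their traffic share
kn revision list

# or look at a single service in detail, including its traffic split
kn service describe link1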

The script then sets up the demo app, which is a node.js-generated front end providing a webpage that randomly calls the four services to get a colour to render. When the page calls service link1 through link4, each of them returns one of three responses depending on the traffic split.

But each of these three responses comes from a *separate* revision, and therefore a separate Pod, which adheres to the timeout scale-down rules of the knative framework. So effectively you end up with 12 endpoint Pods spinning down and up depending on where the traffic lands.

Yeah, it’s a bit pithy, but the concepts are superb and underpin the next generation of efficient application design – these applications are web-based stateless endpoints that, put together, provide a composite output (the loom).

As I said at the start of this little foray into knative, I’ve been looking at the next step from here the last month or so, and I’ll be writing a blog post very shortly on that – Cloud Events. They make designing the next generation of stateless applications even easier but I’ll hold back from enthusing too much right now….

So, get yourself access to an OpenShift cluster and have a play with this demo; it’s reasonably simple, but I’ve tried to encapsulate all the bits you need to start thinking about using knative serving in anger…

By utherp0

Amateur thinker, professional developer
