

Why interceptors?

At Metosin, we have been thinking about interceptors in Clojure and ClojureScript. We’ve thought about them so much, in fact, that we made our own interceptor library for Clojure, called Sieppari¹. To understand why, let’s take a look at the pros and cons of using interceptors instead of middleware.

Interceptors are a Clojure pattern, pioneered by the Pedestal framework, that replaces the middleware pattern used by Ring. In Pedestal they are used for handling HTTP requests, but they can be used for handling all kinds of requests. For example, in re-frame they’re used to handle web frontend events such as button clicks.

At Metosin, we’ve used them in a bunch of projects and we’re developing Sieppari to be used with reitit, our (latest) HTTP routing library.

Let’s work this out

In Ring, an HTTP request handler is a function that takes a request map and returns a response map. Something like this:

(defn my-handler [request]
  {:status 200,
   :headers {"Content-Type" "text/plain"},
   :body "hello!!"})

To enhance the behavior of the handler in a reusable way, you can wrap it with a higher-order function that takes the handler as a parameter. This can be used to implement features like content encoding and decoding, and authentication.

For example, here’s a debugging middleware that prints the incoming request map and the outgoing response map:

(defn print-middleware [handler]
  (fn [request]
    (prn :REQUEST request)
    (let [response (handler request)]
      (prn :RESPONSE response)
      response)))

The good thing about middleware is that they’re simple to implement: it’s just a Clojure function and you can use standard constructs such as try-catch. They’re fast, too.
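
Wiring it up is plain function composition. As a quick sketch (the request map here is just an example):

(def app (print-middleware my-handler))

(app {:request-method :get, :uri "/hello"})
;; prints the request and the response, then returns the response map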

The problems start when you try to handle asynchronous operations. Ring specifies async handlers as functions that take, in addition to the request, callbacks for sending a response and for raising an exception. We have to write a separate version of our debugging middleware for asynchronous handlers.

(defn debug-middleware [handler]
  (fn [request respond raise]
    (prn :REQUEST request)
    (handler request
             (fn [response]
               (prn :RESPONSE response)
               (respond response))
             raise)))

Using the same code for synchronous and asynchronous handlers is tricky, and error handling gets difficult.

Interceptors offer a solution: you split the middleware into two phases², :enter and :leave (or :before and :after, as they’re called in re-frame). :enter is called before the handler executes, :leave is called afterwards. Both phases get a context map as a parameter and return an updated context map. The request is stored under the key :request and the handler’s response is put under :response.

(def debug-interceptor
  {:enter
   (fn [{:keys [request] :as context}]
     (prn :REQUEST request)
     context)
   :leave
   (fn [{:keys [response] :as context}]
     (prn :RESPONSE response)
     context)})

Middleware can be composed by nesting function calls. With interceptors that doesn’t work, so you need an executor that takes a chain of interceptors (called a queue) and executes them in order.
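
To make the execution model concrete, here’s a minimal sketch of a synchronous executor. This is not Sieppari’s actual implementation, just an illustration of the idea; the handler is modeled as a final interceptor whose :enter puts a :response into the context.

(defn execute [interceptors request]
  (let [run-enter (fn [ctx {:keys [enter]}] (if enter (enter ctx) ctx))
        run-leave (fn [ctx {:keys [leave]}] (if leave (leave ctx) ctx))
        entered   (reduce run-enter {:request request} interceptors)
        left      (reduce run-leave entered (reverse interceptors))]
    (:response left)))

(execute [debug-interceptor
          {:enter (fn [ctx] (assoc ctx :response (my-handler (:request ctx))))}]
         {:request-method :get, :uri "/hello"})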

A cool thing you can now do: if your interceptor returns an asynchronous result (a deferred or a core.async channel, for example), the executor can wait for it, and if the interceptor returns a synchronous result, the executor can act on it directly. This allows you to use the same interceptors for synchronous and asynchronous operations. The downside is that the executor is bound to be slower than nested function calls.
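
As a sketch, an interceptor whose :enter does its work on a core.async channel could look like this; an async-aware executor (Sieppari accepts core.async channels as return values, for example) parks on the channel and continues with whatever context map it yields. The user lookup here is made up for illustration.

(require '[clojure.core.async :as async])

(def async-user-interceptor
  {:enter
   (fn [context]
     (async/go
       ;; pretend this came from an asynchronous database call
       (assoc-in context [:request :user] {:id 1, :name "someone"})))})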

Another downside is that constructs like try-catch and with-open no longer work across phases. To allow proper error handling, interceptors have an optional :error phase that gets called if any of the inner interceptors throws an exception.
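
As a sketch, an interceptor that turns exceptions into a 500 response could look like the following. The exact calling convention varies: Pedestal passes the exception as a second argument as shown here, while Sieppari exposes it in the context map, so treat the shape as illustrative.

(def error-interceptor
  {:error
   (fn [context exception]
     (prn :ERROR exception)
     (assoc context :response {:status 500
                               :headers {"Content-Type" "text/plain"}
                               :body "Internal server error"}))})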

The queue as data

Middleware do not have to call the handler. For example, an authorization middleware may decide that a request is not authorized and instead of calling the handler, it returns an error response.

Interceptors go further: the remaining queue and the stack of already-entered interceptors are exposed in the context map, and you can manipulate them. If your authorization interceptor wants to return early, you can (assoc context :queue []). Another example: you can have a routing interceptor that pushes route-specific interceptors and a handler onto the queue.
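
As a sketch of the early-return case, an authorization interceptor could look roughly like this, assuming an executor that keeps the remaining queue in the context under :queue. With the queue emptied there is nothing left to :enter, so the executor unwinds the :leave stack and returns the response we set here.

(def require-auth-interceptor
  {:enter
   (fn [{:keys [request] :as context}]
     (if (get-in request [:headers "authorization"])
       context
       (-> context
           (assoc :queue [])
           (assoc :response {:status 401
                             :headers {"Content-Type" "text/plain"}
                             :body "Unauthorized"}))))})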

Finally, since your interceptor chain is now data instead of middleware-nesting code, you can do fancy tricks like dynamically reordering interceptors based on the dependencies between them. angel-interceptor is an implementation of this and Sieppari supports it as well. I’m a bit skeptical about whether there are real use cases for this, but it’s there if you need it.

Summary


  1. It’s not yet ready for production. At the time of writing, the latest release is 0.0.0-alpha5.
  2. You can do this without interceptors, of course. See e.g. how the session middleware is implemented in the Ring codebase.


I made a backpack

For a long time, I used a Haglöfs Tight Evo XL as my everyday backpack. It wasn’t optimal: it’s a bit too large for my everyday needs but a bit too small for extended trips, and it’s not too stylish, either. However, I didn’t want to spend money on a new backpack. Luckily there is a simple solution: make your own gear.

My main inspiration was the DIY IKEA backpack which is made out of IKEA shopping bags. We didn’t have any spare IKEA bags, but there was a worn-out Clas Ohlson bag which is made of similar material. My girlfriend’s colleague gave me a piece of parachute cutting waste and a broken Haglöfs backpack for scavenging webbing and buckles. I was set for materials.

I came up with a list of design constraints:

I realized that I had no idea what I was doing. In this kind of situation, my usual solution is to copy from others. Thus I started by taking measurements from a Kånken and modifying them to accommodate a 15” laptop.

I wanted a top-loading pack, and zippers seemed expensive and tricky to sew, so I opted for a drawcord closure with a lid on top. I also added open pockets to the front and the sides. I don’t like limp packs, so I added an internal pocket for a framesheet. I drew up a pattern and found out that there was just enough fabric for one bag. Perfect!

At first I thought that I would make the shoulder straps out of the webbing scavenged from the broken backpack. That didn’t work because there wasn’t enough of it. Then I realized that I could just use the old backpack’s shoulder straps as-is and get comfy straps for free.

It took me one Saturday to make most of the pack and then a couple of evenings to finish it. I did end up buying a cordlock, some cord, and a piece of cardboard to be used as a framesheet. Total budget: about 6 €.

I made the pack in May and I’ve been using it almost daily ever since. I’m really happy with how well it turned out. The size is just right and I like the looks. The side pockets are great for the U-lock and, as a bonus feature, the framesheet pocket is great for transporting stacks of paper.

I’ve had to fix it a couple of times. I didn’t know how to attach the shoulder straps properly and as a result, they keep coming off. Unfortunately it’s hard to fix without taking the pack apart. The material wasn’t as sturdy as I thought, either: it should have been folded over for reinforcement in the places with the most stress.

Anyway, making a backpack was fun, easy, and rewarding. I’ve made a pair of pants before, but making clothes is hard because they need to actually fit you. With bags, the exact fit hardly matters.

I’m not much of a maker, but it’s great to make something concrete every now and then. It also made me appreciate the high quality of factory-made backpacks more – they might be worth the money after all.


Fully automated releases

Have you ever contributed a patch to an open-source project, got it merged, and then waited months for a new release that would include your patch? Me too, reader, I’ve been there.

Moreover, I’ve been the maintainer who hasn’t gotten around to cutting a release. Cutting releases is a chore. Usually it’s a fragile, multi-step process that is not especially fun.

As programmers, what is our answer to fragile, multi-step processes? We automate them.

How to do it

When creating a release, there are a few steps where human input and human judgement are needed.

  1. When to create a release?
  2. How much should the version number be incremented?
  3. What to write in the change log?

There’s an automation-enabling answer to the first question that is familiar to many developers from their work environment: embrace continuous delivery and continuous deployment. Each pull request should leave the project in a state where it can be released. Then you can automatically create a release every time you merge a pull request.

You still need a human to answer the second and third questions, at least if you have a conventional versioning scheme. To move the burden away from the maintainer, you can ask the contributors to fill in this information during the contribution process.

I know of two actual implementations of this: the Hypothesis continuous release process and semantic-release. Hypothesis asks you to include a special file in your pull requests that looks like this:

RELEASE_TYPE: minor

This release adds a function for printing the text "Hello, world!".

semantic-release relies on specially-formatted commit messages:

feat(core): add function for printing "Hello, world!"

Here feat means this commit adds a new feature, implying a minor release if you’re following Semantic Versioning.

In both cases, after a pull request has been successfully merged, the CI server will read this information, update the change log, increment the version number, and push a new release to the package repository. As a contributor, this means that if your patch gets merged, it will be released.
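
To make that CI step concrete, here’s a rough Clojure sketch of what such a job might do, assuming a Hypothesis-style RELEASE.md and a VERSION file in the repository root. The file names and the final deploy step are made up for illustration and are not taken from either project.

(ns release
  (:require [clojure.string :as str]))

;; Map the declared release type to the version component to bump.
(def bump-index {"major" 0, "minor" 1, "patch" 2})

(defn bump-version [version release-type]
  ;; e.g. "1.2.3" with "minor" => "1.3.0"
  (let [parts (mapv #(Long/parseLong %) (str/split version #"\."))
        i     (bump-index release-type)]
    (str/join "." (map-indexed (fn [j n]
                                 (cond (< j i) n
                                       (= j i) (inc n)
                                       :else   0))
                               parts))))

(defn -main [& _]
  (let [release      (slurp "RELEASE.md")
        release-type (second (re-find #"RELEASE_TYPE: (\w+)" release))
        notes        (str/trim (str/replace release #"RELEASE_TYPE: \w+\n?" ""))
        new-version  (bump-version (str/trim (slurp "VERSION")) release-type)]
    (spit "VERSION" new-version)
    (spit "CHANGELOG.md" (str "## " new-version "\n\n" notes "\n\n"
                              (slurp "CHANGELOG.md")))
    ;; ...then tag the commit and deploy to the package repository...
    (println "Released version" new-version)))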

What about Clojure?

I’m not aware of anyone doing this in the Clojure community¹, but I believe it would be beneficial. There are a lot of small projects that would get contributions, but the maintainers are not around to merge and release them. Automated releases would make the work of the existing maintainers easier and would also make it simpler to onboard new maintainers.

I have implemented a proof-of-concept version of the Hypothesis process for cache-metrics, a small library of mine, but I haven’t yet dared to introduce it to any ”real” libraries. Many actively developed projects would need to change their ways of working as you couldn’t just merge or commit random things to master.

I hope this post acts as a starting point for a discussion. What do you think?


  1. cljsjs/packages kind of does it, but it is a package repository and not a library.

