An Introduction to HTTP Signing

June 12, 2019 • ☕️ 4 min read


In the world of APIs, we’re exchanging data that we care about. The data may contain sensitive information, or maybe you just want to trust that it hasn’t been tampered with. Mechanisms like TLS give us some sense of security in that you’re interacting over an encrypted transport, but perhaps you need more assurance that incoming data hasn’t changed, or that the client app is who it claims to be. This is especially important for things like Personally Identifiable Information or financial data. We can get this added assurance using a recently drafted standard called HTTP Signatures. Using this standard, as it’s currently drafted, you can verify that the data for an HTTP request hasn’t been altered in transport, and you can use that information to gain confidence in who sent the request to your server.

“HTTP Signatures” is currently in the draft stage of becoming an IETF standard. Fair warning: its draft status means it’s still subject to change, and it may never make its way to being a full standard. The concepts and general idea, though, I think are well worth considering. Even if it never becomes a standard, you could apply these concepts to your own API and provide libraries for clients to use.

Signing your HTTP requests is important in high-stakes areas precisely because TLS (and SSL) have had known vulnerabilities over the years. You should absolutely use the latest TLS when possible, because it’s still a good first step. If you’re concerned about the integrity of the data entering your system, though, you should consider extra avenues for validating incoming data. Specifically, HTTP request signing can help you ensure that data hasn’t changed during the transport of the request from client to server. Let’s get into some specifics of the standard to better understand.

Request signing parameters live either within the `Signature` header or the `Authorization` header (prefixed with “Signature ”). For the purposes of this post I’ll refer to the `Signature` header. Within that header there are four parameters we care about: `keyId`, `algorithm`, `headers`, and `signature`.


The `keyId` is used to identify the client connecting to our server. This would be used, in your server, to locate the credentials that will be used to sign the data. Perhaps you have a `clients` table with an identifier (matching the `keyId` that would be provided) and a set of credentials that are used to create the signature.
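As a sketch of that lookup, the server side might be little more than a table keyed by `keyId`. The names and values below are illustrative assumptions, not part of the draft:

```python
# Hypothetical lookup table standing in for a `clients` database table;
# the keyId from the Signature header selects the signing credentials.
CLIENTS = {
    "chat-app": {"secret": b"my-shared-secret"},
}

def get_credentials(key_id):
    """Return the signing credentials registered for this keyId."""
    client = CLIENTS.get(key_id)
    if client is None:
        raise KeyError(f"unknown keyId: {key_id}")
    return client["secret"]
```

Rejecting unknown `keyId`s up front also gives you a natural place to reject requests from clients you’ve revoked.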




The `algorithm` signifies which signing algorithm was used for the request, so that both server and client are on the same page as to how the data will be verified. A signing algorithm uses cryptographic primitives to generate a digest from the data we want to ensure hasn’t been tampered with.




The `headers` parameter is optional at this point in the standard’s draft, though I personally think you should always list them. Listing them ensures that both client and server agree on which headers are signed and in what order, and protects you from issues due to unanticipated middleware, or something of that nature, rendering your request verification invalid. Each header name must be lower-cased, and names are separated by a single space. A special identifier, “(request-target)”, is generated by concatenating the lowercased HTTP method, a space, and the request path (for more on this see Section 2.3.1).

Example: assuming we wanted the route information and the date header verified:

headers="(request-target) date"
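A header list like that maps to a newline-joined string of `name: value` lines, with `(request-target)` expanded to the lowercased method and path. Here’s a minimal Python sketch; the function name and argument shapes are my own, not from the draft:

```python
def signing_string(method, path, headers, header_list):
    """Join the listed headers, in order, into the string to be signed."""
    lines = []
    for name in header_list.split(" "):
        if name == "(request-target)":
            # Pseudo-header: lowercased method, a space, then the path.
            lines.append(f"(request-target): {method.lower()} {path}")
        else:
            lines.append(f"{name}: {headers[name]}")
    return "\n".join(lines)
```

For example, `signing_string("POST", "/foo", {"date": "Thu, 05 Jan 2014 21:31:40 GMT"}, "(request-target) date")` yields two lines joined by a newline.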


The `signature` parameter contains the actual message digest: the signed data that we will validate on the server as having not been tampered with. The signature body (the not-yet-signed string) is created by collecting the name and value of each of the `headers` specified and joining them, in the order specified, with newlines. Here’s some pseudo-code to illustrate what happens to get the digest from the body:

algorithmFunction = getAlgorithm(algorithm, getCredentials(keyId))
digest = algorithmFunction(signatureBody)
signature = base64Encode(digest)

That is to say, we get a signing function (using our `algorithm` parameter), use the credentials located via the `keyId` to sign the signature body, then Base64-encode the resulting digest.
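For a symmetric algorithm like `hmac-sha256`, those steps come down to a few lines of Python. The secret and signature body here are placeholders of my own, not the draft’s test values:

```python
import base64
import hashlib
import hmac

secret = b"my-shared-secret"  # credentials located via keyId (placeholder)
signature_body = (
    "(request-target): post /foo?param=value&pet=dog\n"
    "date: Thu, 05 Jan 2014 21:31:40 GMT"
)

# Sign the body with HMAC-SHA256, then Base64-encode the digest.
digest = hmac.new(secret, signature_body.encode("utf-8"), hashlib.sha256).digest()
signature = base64.b64encode(digest).decode("ascii")
```

The resulting `signature` string is what goes into the header’s `signature` parameter.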

Example: if we were to use the RSA keys provided in the draft’s test samples and specify `headers="(request-target) date"`, with a request like:

POST /foo?param=value&pet=dog HTTP/1.1
Host: example.com
Date: Thu, 05 Jan 2014 21:31:40 GMT
Content-Type: application/json
Digest: SHA-256=X48E9qOokqqrvdts8nOJRJN3OWDUoyWxBf7kbu9DBPE=
Content-Length: 18

Our `signature` parameter would look like: 


Ultimately, a Signature header would look something like:

Signature: keyId="chat-app",algorithm="hmac-sha256",headers="(request-target) date",signature="ATp0r26dbMIxOopqw0OfABDT7CKMIoENumuruOtarj8n/97Q3htHFYpH8yOSQk3Z5zh8UxUym6FYTb5+A0Nz3NRsXJibnYi7brE/4tx5But9kkFGzG+xpUmimN4c3TMN7OFH//+r8hBf7BT9/GmHDUVZT2JzWGLZES2xDOUuMtA="
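On the server, the first step is pulling those parameters back out of the header. A quick-and-dirty sketch; a real implementation should handle escaping and malformed input more carefully:

```python
import re

def parse_signature_header(value):
    """Split comma-separated key="value" pairs into a dict."""
    return dict(re.findall(r'(\w+)="([^"]*)"', value))

params = parse_signature_header(
    'keyId="chat-app",algorithm="hmac-sha256",'
    'headers="(request-target) date",signature="dGVzdA=="'
)
# params["keyId"] == "chat-app", params["algorithm"] == "hmac-sha256", etc.
```

With the parameters in hand, you can look up the credentials by `keyId` and rebuild the signature body from the listed headers.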


Verification is fairly straightforward if you look at the pseudo-code above for creating the signature digest. If you’re using a symmetric signing mechanism, like HMAC-SHA256, you simply build the signature the same way the client did and compare the two values; if they match, the signature is verified. For asymmetric mechanisms, like RSA, it’s a bit more involved: the client will have signed the data with their private key, so when the request reaches the server you’ll need to Base64-decode the signature and then verify it using the corresponding public key.
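The symmetric case can be sketched like so. Note the constant-time comparison, which avoids leaking information about the expected signature through timing; the function name is my own:

```python
import base64
import hashlib
import hmac

def verify_hmac_sha256(secret, signature_body, presented_signature):
    """Rebuild the signature server-side and compare in constant time."""
    expected = base64.b64encode(
        hmac.new(secret, signature_body.encode("utf-8"), hashlib.sha256).digest()
    ).decode("ascii")
    return hmac.compare_digest(expected, presented_signature)
```

A plain `==` comparison would work functionally, but `hmac.compare_digest` is the safer habit for comparing secrets.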


HTTP Signatures aren’t a terribly complex concept, but that doesn’t mean you should have to write the whole system yourself every time you’d like to include it in your API workflow. We decided to open source a set of libraries for multiple languages to help encourage people to try the almost-standard out.

Please try them out, submit issues and pull requests, or simply tell the world to check it out; we’d love to spread the word.

So What?

We think that a final check on the integrity of request/response data is important, especially in scenarios where user data is heavily involved. This mechanism provides a standardized but flexible framework to work within. At Highrise we’ve been using this system successfully between some of our internal services to verify both the origin and the integrity of data. Once the system is in place it’s little more than an afterthought, and it’s simple to monitor for failures. In a world full of security issues, let’s take as much care as we can in protecting our users and ourselves.

Thanks for reading! I work at Highrise building a simpler CRM. If you liked this and want to hear more about software, products, and the like, let me know on Twitter!