Mention the word cloud, and most developers today will associate it with the infrastructure abstractions provided by AWS, GCP or Azure. However, the original sales pitch for cloud computing was not about infrastructure, but about convenient access to business applications. Salesforce figured out roughly two decades ago that companies wanted access to business applications, but to get that access they had to suffer through provisioning, securing and servicing hosting equipment. Salesforce started selling convenience, taking most of the hassle away and charging a subscription for it.

Software-as-a-Service opened the flood gates, and a line of aaS-es followed. Want to run a website, but don’t want the hassle of setting up hardware? Subscribe to the convenience of virtual machines in the cloud. Infrastructure became a service.

Companies like AWS and Google had so many customers that they could mine metadata about the tasks clients ran on their infrastructure, and offer the most common ones even more conveniently. Want a virtual machine so you can run a database? Why not just subscribe to a database service? Low-level operating system abstractions quickly became a commodity. We could rent cloud file storage, queues and logs. The whole platform was a service.

After that, the major remaining use case for virtual machines was to run code that talks to all those platform services. Cloud aggregators figured out a way to make that more convenient as well. Want to run entire fleets of virtual processors to meet flexible customer demand? Why not just subscribe to code execution engines, and not worry about scaling up and down? Functions were a service.

Continuing the trend, cloud operators are now mining metadata about platform usage, trying to sell even more convenience at a higher price. Want an EMR cluster and a database to create sales dashboards? Why not just subscribe to a dashboards service and not worry about databases, indexes and map-reduce any more? By moving up the stack, providers are now starting to offer business-application components, coming full circle back to the original SaaS, but with an interesting twist.

SaaS, but programmable

The two most famous original SaaS cases were Salesforce CRM and Google Analytics. Following that trend, Amazon Pinpoint evolved into a mix of end-user analytics and a bare-bones CRM. The end-user interface is rudimentary, far behind what you’d expect from a modern CRM or analytics package, but what makes Pinpoint special is its API. Unlike the old business-application SaaS, this one is easily programmable. Another AWS product, QuickSight, provides nice reports and data visualisation. A third, Cognito, is a programmable username/password database. Rather than fully-fledged business SaaS systems, they are all easily composable components. Mix them together, and you can make incredibly powerful stuff quickly.
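As an illustration of how programmable these components are, here is roughly what recording a custom analytics event in Pinpoint looks like from Python with boto3. This is a minimal sketch: the application ID and event names are placeholders, and the user ID is reused as the Pinpoint endpoint ID purely for brevity.

```python
# A rough sketch of recording a custom analytics event in Pinpoint with boto3.
# The application ID and event names are placeholders.
import time
import boto3

pinpoint = boto3.client("pinpoint")

def record_signup_event(application_id, user_id):
    """Report a 'user.signup' event against the endpoint that represents this user."""
    pinpoint.put_events(
        ApplicationId=application_id,
        EventsRequest={
            "BatchItem": {
                user_id: {  # the endpoint ID; here we simply reuse the user ID
                    "Endpoint": {"ChannelType": "EMAIL"},
                    "Events": {
                        "signup-1": {
                            "EventType": "user.signup",
                            "Timestamp": time.strftime(
                                "%Y-%m-%dT%H:%M:%SZ", time.gmtime()
                            ),
                        }
                    },
                }
            }
        },
    )
```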

For example, by mashing together these services, two colleagues and I created a replacement for the error-logging service Sentry in just two days. Of course, we’ve not replicated everything they offer to everyone, but we built enough to satisfy our needs and ensure GDPR compliance. Usually, a two-day hackathon ends up with a proof-of-concept that doesn’t scale and isn’t secure, but in this case scalability and security came out of the box – that’s one of the benefits of linking together AWS services.

Cloud providers are making these kinds of mashups even easier, both for their own apps and for third-party content. AWS has the Serverless Application Repository, and Microsoft runs the Azure serverless community library. Both these services are in their infancy, but they already let users very quickly set up complex combinations of cloud apps from blueprints created by other users and third-party integrators. Although both repositories only host open-source community software for now, it’s not difficult to imagine these early efforts turning into service app-stores, where you will be able to provision your own copy of a cloud service, control the data and customise workflows, while the service provider still charges for it.
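To give a sense of how little effort a blueprint takes, here is a rough sketch of provisioning your own copy of a published application from the Serverless Application Repository with boto3. The application ARN, stack name and capabilities are placeholders; real applications document their own required parameters.

```python
# A sketch of deploying a published app from the AWS Serverless Application
# Repository into your own account. All identifiers below are placeholders.
import boto3

sar = boto3.client("serverlessrepo")
cloudformation = boto3.client("cloudformation")

# Turn the published application into a CloudFormation change set...
change_set = sar.create_cloud_formation_change_set(
    ApplicationId="arn:aws:serverlessrepo:us-east-1:123456789012:applications/example-app",
    StackName="my-copy-of-example-app",
    Capabilities=["CAPABILITY_IAM"],
)

# ...wait for it to finish creating, then execute it to stand up the stack.
cloudformation.get_waiter("change_set_create_complete").wait(
    ChangeSetName=change_set["ChangeSetId"]
)
cloudformation.execute_change_set(ChangeSetName=change_set["ChangeSetId"])
```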

Mashing up a new flow on top of business-level third-party services is not particularly new. IFTTT and Zapier have been doing that for a long time. What makes the new type of SaaS interesting is how it integrates with client code. In parallel with the new generation of business-level services, cloud providers evolved a new way to deploy and integrate them.

Glue together your own SaaS

Retro-futurists talk about how we’ve gone back to the mainframe age, using computers mostly as dumb terminals to access services “in the cloud”. And this is not because end-user devices are unfit for complex work – most people today carry phones more powerful than the supercomputers of the previous century – but because cloud services are so convenient and interconnected. Instead of a dumb terminal to a single mainframe, our phones are terminals to a whole array of modern mainframes that all talk to each other.

From the very first SaaS success stories, the whole ecosystem relied on integration. Analytics-as-a-service only makes sense if something else is submitting events. Back when all the major SaaS players operated their own infrastructure, communication between them was quite limited. Mostly, it relied on some kind of remote API, but there is only so much you can do by calling into a system from the outside. The problems start when a vendor system needs to call you. When a payment processor gets a charge-back request from a bank, or a user authenticates against a third-party service, the vendor somehow needs to let your application know about the new data. A successful SaaS needs to talk to hundreds of thousands of different systems, and it can’t possibly make allowances for each individual variant. This brings communication down to the greatest common divisor of the internet: HTTP. And that’s how we got webhooks.

There’s no doubt that webhooks are hugely successful in linking disparate systems together. After all, Slack is a seven billion dollar business built mostly on managing webhooks. But developing and working with webhooks is incredibly messy. HTTP is a good document protocol, but not really an application protocol. It’s difficult to differentiate network timeouts from task timeouts. Errors might come in the format of the provider protocol, or they might just be HTTP low-level errors. Authentication is tricky, because it relies on shared secrets. When a payment provider sends you a message, it’s often accompanied by a token that needs to be sent back to them for validation, because it’s easy to spoof requests. In case of problems, it’s difficult to know if it’s OK to retry or not. Was the token processed and discarded before the network error, or can we still use it? All these problems have workarounds, of course, but essentially webhooks are designed for a happy-day scenario, and building robust webhook services that are fault-tolerant is a pain. When writing a webhook integration, I often spend more code protecting against communication and HTTP protocol issues than dealing with actual business application concerns.
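To make that concrete, here is a sketch of the protective plumbing a typical webhook endpoint ends up needing, written with Flask for a hypothetical provider that signs requests with a shared secret. The header names, payload fields and in-memory de-duplication set are all made up for illustration; every provider does this slightly differently.

```python
# A sketch of a defensively written webhook receiver. Most of the code deals
# with the protocol, not with the business event we actually care about.
import hashlib
import hmac

from flask import Flask, abort, request

app = Flask(__name__)
SHARED_SECRET = b"replace-with-the-secret-from-the-provider"
seen_deliveries = set()  # in production this would live in a database

def handle_business_event(payload):
    """The actual business logic; everything else here is protocol cruft."""
    print("charge-back received for", payload.get("charge_id"))

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_data()

    # Verify the signature, because anyone on the internet can POST to this URL.
    signature = request.headers.get("X-Signature", "")
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        abort(401)

    # De-duplicate, because the provider may retry after a network timeout.
    delivery_id = request.headers.get("X-Delivery-Id", "")
    if delivery_id in seen_deliveries:
        return "", 200
    seen_deliveries.add(delivery_id)

    handle_business_event(request.get_json())
    return "", 200
```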

Beyond webhooks

About a year ago, AWS started dealing with inter-component integrations slightly differently. With Zapier and IFTTT, it’s really difficult to make safe assumptions about where the code is running. On the other hand, if you’re using AWS business services and they need to call back into your code that also runs on AWS, there isn’t much point in using a webhook. Cognito lets users customise workflows by adding Lambda functions to execute before or after authentication, during signups, and before or after generating a token. Pinpoint can forward incoming events into a data stream, where they can be easily processed with Lambda functions. CloudFront, the content-delivery network on AWS, lets you customise requests or responses by running a Lambda function in the pipeline.
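For example, a Cognito pre sign-up trigger is just a Lambda function that receives the sign-up attributes and can veto or auto-confirm the registration. Here is a minimal sketch, assuming you want to restrict sign-ups to a company domain; the domain itself is a placeholder.

```python
# A sketch of a Cognito "pre sign-up" Lambda trigger that only allows sign-ups
# from a company domain and auto-confirms them. The domain is a placeholder.
ALLOWED_DOMAIN = "@example.com"

def handler(event, context):
    email = event["request"]["userAttributes"].get("email", "")
    if not email.endswith(ALLOWED_DOMAIN):
        # Raising an exception rejects the sign-up, and Cognito reports the
        # message back to the client that attempted to register.
        raise Exception("Sign-ups are restricted to company accounts")

    # Skip the confirmation-code step for addresses we already trust.
    event["response"]["autoConfirmUser"] = True
    return event
```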

By using Lambda functions, not webhooks, AWS can make a lot of assumptions about the execution environment and fault tolerance. In case of an error, they know if your code exploded, if the execution environment blew up, or if the task just timed out. They know when it’s safe to retry, and when there is not much point trying again. Fault tolerance is baked into the execution environment, and the integration code does not need to worry about it. Likewise, each Lambda function runs under certain credentials, and it’s very easy to limit who can call it and how, so there’s no need to validate and verify incoming messages every time. When writing Lambda integrations instead of webhooks, all the cruft is gone. We only need to worry about the actual business activity. AWS deploys and hosts it as well.
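For contrast with the webhook sketch above, here is roughly what the same kind of integration looks like as a Lambda function consuming a Pinpoint event stream. This assumes the stream is delivered through Kinesis, as Pinpoint does by default; the event fields follow Pinpoint’s documented stream format, but treat the sketch as illustrative rather than definitive.

```python
# A sketch of a Lambda function processing Pinpoint events from a Kinesis
# stream. No signature checks, no retry bookkeeping: the platform handles
# authentication, retries and batching, leaving only the business logic.
import base64
import json

def handler(event, context):
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("event_type") == "_session.start":
            print("user opened the app:", payload["client"]["client_id"])
    return "ok"
```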

Moving from SaaS over IaaS to PaaS, by using FaaS, we got to BaDaaS: Business action deployment as a service. Service providers offer application components, and the environment in which you can deploy your own customisations for them, so you need to worry only about the specific business actions that make your work unique.

If both the service and the consumer code are running in the same cloud, it just makes a lot more sense to use the FaaS platform of that cloud rather than webhooks. Third-party service providers are embracing this trend as well. For example, Twilio recently introduced “Twilio Functions”, a hosted execution environment that lets your code run inside Twilio instead of sending you messages with webhooks. Netlify also introduced a similar thing, unsurprisingly called Netlify Functions, allowing you to give “backend superpowers to your frontend code”. Check out the function API for both these services, and you’ll see that it resembles the AWS Lambda event interface. That’s because they just run Lambdas under the hood. These services charge a bit more on top of AWS costs for running your code, but this allows you to save time when writing and deploying integrations. It’s a win-win-win for everyone, from AWS to Twilio to you.

I recently started work on a video editing automation app, and was able to skip a few months of work by integrating user authorisation, analytics, user engagement and reporting components from AWS. Doing it with cloud functions made the whole process very smooth.

Push for FaaS integrations

If the service provider runs their code on AWS, and you would run your part on AWS as well, there really is no benefit to using webhooks any more. So, if you are starting a new app, look for FaaS integrations first. Hopefully, other cloud providers will pick up on this trend, and we’ll be able to customise many more products using their cloud FaaS platforms.

On the other hand, if you work for a service provider, consider offering your product BaDaaS-style: let clients give you business code to deploy and run within your workflow. If you’re on the AWS, Google or Microsoft clouds, just reuse their FaaS service, charge a bit more on top of it, and everyone in the chain will be much happier.
