Today's Solution Landscape Pt 2
This is the second in a multi-part series that looks at how serverless and native cloud fit within larger software and IT industry trends.
In this post we're continuing to look at today's solution landscape as a convergence of three long-running trends. The first, which we looked at in the previous post, is the evolution of reuse-in-the-large, and the lessons learned in terms of service architecture, middleware, tools, and standards. Now, in this post, we're going to look at the two other trends, which are the shift to SAAS products, and the emergence of serverless and cloud native technologies.
The Shift to SAAS Products
Except for some specialized cases where security or perhaps network latency constraints still mandate an on-premises deployment, nearly all major software products are now software-as-a-service (SAAS) offerings. Sometimes on-premises versions of products are still available. But in every case that I've seen, the newest features are released first, and sometimes only, on the SAAS versions.
For product vendors there are several factors driving this shift. With customer-installed software, vendors often end up having to support older versions of their products. With hosted SAAS products, vendors can get away with supporting only the latest, or, for single-tenanted scenarios, comparatively recent versions. As the host, they also enjoy full control of their deployments, even those running single-tenanted, which simplifies support considerably.
Customers prefer the hosted model as well, for the same reasons that they often prefer to lease buildings rather than own them. Consuming things as a service has financial advantages and makes it easier to scale usage to need. For small-to-medium-sized businesses, the SAAS model also offers a direct path to product adoption, without a need to install software of any kind. This enables freemium or trial models as marketing strategies for vendors, which can help turn casual users into paying customers.
All these benefits help to explain why the hosted model wins over the customer-installed model. But SAAS products are also nearly always hosted on one of the big cloud providers. Why this is so hinges, I think, on two additional factors worth highlighting. First, the tools and services in the cloud are so widely used that they are, at this point, de facto standards. For the big cloud providers, I think this alone is enough, over time, to pull everyone, product vendors and enterprise customers alike, into their orbit.
But on top of this factor, the big cloud providers also represent a singularity of place analogous to what drives concentration around urban centers in the physical world. For product vendors it simply makes sense to offer their services in the same place where all the services they depend on, and where all their customers, are also located. Yes, the internet is global, and theoretically, services running anywhere can be interconnected. But in practice it's easier to connect services securely and reliably within the cloud than it is to provision equivalent connectivity outside of it.
As a result of all these factors there's now a growing ecosystem of systems and services available, in the cloud, ready to be integrated into solutions. There are general purpose products like CRM, call center, or document management (territory that's often claimed by the cloud vendors themselves, e.g. Dynamics CRM or SharePoint in Azure).
There are also technical services for things like e-signature, identity, or low-code application platforms, as well as AI-based services for things like OCR or language translation. On top of this, there are many industry-vertical products available as multi-tenanted or single-tenanted deployments, again, all in the cloud.
We can illustrate how game changing this is by looking at how one might build a university student system today. As I wrote in the previous post, about twenty years ago I helped get a project started to rewrite a custom student system. As noted, that system did not incorporate any substantial off-the-shelf functionality. But if you needed to build a custom student system like this today, what would it look like?
The answer, I think, is that such a system built today would delegate large chunks of functionality to other systems and services. Today, there are candidates for specialized services that are well-established in the marketplace and represent best-of-breed in their spaces. In education, this might apply to feature areas like course content management or student assessment.
Having helped to design and build two major student information systems myself, I would also note that a lot of their functionality is about managing profile and contact information, recording interactions with students and parents, and defining extra student attributes that then need to be tracked and reported on. These are all things that might be well managed by an integrated CRM application that, alongside the required APIs, also supports low-code data and screen customizations.
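To make this concrete, here is a minimal sketch of what that kind of delegation can look like in code: a student system recording an interaction through a CRM's REST API. The base URL, endpoint path, field names, and token handling are all hypothetical; any real CRM product defines its own.

```typescript
// Hypothetical sketch: recording a student interaction in an integrated CRM.
// The base URL, path, and field names are illustrative, not a real CRM's API.
const CRM_BASE_URL = "https://crm.example.edu/api/v1";

interface StudentInteraction {
  studentId: string;
  channel: "phone" | "email" | "in-person";
  summary: string;
  occurredAt: string; // ISO 8601 timestamp
}

async function recordInteraction(token: string, interaction: StudentInteraction): Promise<void> {
  // The CRM's REST API is assumed to accept OAuth 2.0 bearer tokens.
  const response = await fetch(`${CRM_BASE_URL}/interactions`, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(interaction),
  });
  if (!response.ok) {
    throw new Error(`CRM rejected the interaction: ${response.status}`);
  }
}
```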
So, to summarize, what role does the shift to SAAS product offerings play in today's solution landscape?
There was a time when planning for new solutions began with a binary, build vs buy analysis. The shift to SAAS product offerings, available in the cloud, nearly all of which support modern reuse mechanisms like REST APIs, web hooks, and identity services, means shifting to a new paradigm.
Early-stage planning and design, I would suggest, is no longer about binary build vs buy decisions. It's more about identifying those specialized services that are the best-of-breed within their domains and then assessing their compatibility, in technical and system model terms, for incorporation within a larger, integrated solution.
The Emergence of Serverless
Technology choices are another part of early-stage solution planning. We've now covered the evolution of reuse and the growing ecosystem of SAAS systems and services in the cloud. The question that remains then, is what technology platform can one use to run modern services, applications, and integrations in the cloud?
It's worth stepping back for a moment and itemizing some of the goals one might have for modern technology platforms. I think that, broadly speaking, with a modern platform you want:
- To work with high-level service and integration abstractions.
- To be able to define, deploy, and update resources using declarative infrastructure-as-code (a brief sketch follows this list).
- To have the underlying server hosts managed, and services allocated to them, automatically.
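As a minimal illustration of the second goal, here is what declaring a queue and a function might look like using the AWS CDK in TypeScript. The CDK is just one possible tool (Terraform, Pulumi, or Bicep would serve equally well), and the resource names and asset paths are illustrative only.

```typescript
import { App, Duration, Stack, StackProps } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as sqs from "aws-cdk-lib/aws-sqs";
import { SqsEventSource } from "aws-cdk-lib/aws-lambda-event-sources";
import { Construct } from "constructs";

// A queue of transcript requests and a function that processes them. The
// platform wires the two together and handles hosting and scaling.
class TranscriptStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const queue = new sqs.Queue(this, "TranscriptRequestQueue", {
      visibilityTimeout: Duration.seconds(60),
    });

    const processor = new lambda.Function(this, "TranscriptProcessor", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("dist/transcript-processor"), // path is illustrative
    });

    processor.addEventSource(new SqsEventSource(queue));
  }
}

const app = new App();
new TranscriptStack(app, "TranscriptStack");
```

Deploying this definition provisions everything it declares; there are no servers, networking, or scaling policies to specify.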
There are two categories of platform technologies that broadly satisfy these goals. First, there are the open standard platforms like DAPR or Kubernetes, which let you declaratively define abstracted services, and which will then run these services on a pool of managed server resources. The advantage of these open platforms is that they can run on a variety of infrastructures, both on-premises and in the cloud.
Alternatively, there are the proprietary, serverless cloud platforms offered by the big cloud providers. Not everyone will agree, but on balance, I think that the benefits that the serverless platforms bring outweigh the (limited) risk of vendor lock-in. Let's first outline these benefits, and then look at the measures you need to take to mitigate against this lock-in.
First, the serverless services are all inherently native to their cloud ecosystems, and as such work seamlessly with the cloud deployment and monitoring tools. These services also, not surprisingly, work seamlessly with each other, although, as we'll cover below, there are times when you want to interoperate these services through open standards.
They also, with the exception of serverless container hosting, entirely abstract away their underlying infrastructure. One of our goals from above is the decoupling of logical services from the physical infrastructure on which they are deployed. With serverless you don't have to provision, manage, or monitor any infrastructure, whether that be servers, networking, or autoscaling services.
Another, related advantage with serverless technologies is their support for billing based on usage rather than capacity. This makes it possible to deploy as many low-volume test or development environments for solutions as are needed, at nearly zero cost. When you combine this with policy-based guardrails for lab accounts, you free teams to tinker and experiment with prototype deployments without a need for management overhead. Iterations to solutions during development can be much more rapid when you're not running on shared infrastructure.
The build-out of serverless services has taken time, but at this point there is comprehensive coverage for the kinds of service and integration features we've outlined in this series (a short sketch combining a few of these follows the list below). This includes services for:
- Managed APIs with OAuth 2.0 and OpenAPI support.
- Identity with support for SAML- and OIDC-based federation.
- Compute in both function-as-a-service and serverless container hosting form.
- SQL and NoSQL serverless databases.
- Middleware-style messaging, routing, and orchestration services.
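To give a flavour of how these pieces compose, here is a hypothetical sketch that combines three of them: an identity service (a user pool), a managed API in front, and a function-as-a-service handler behind it, again using the AWS CDK in TypeScript. The API shape, resource names, and asset path are illustrative.

```typescript
import { Stack, StackProps } from "aws-cdk-lib";
import * as apigateway from "aws-cdk-lib/aws-apigateway";
import * as cognito from "aws-cdk-lib/aws-cognito";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

// A hypothetical "students" API: identity, a managed API, and a function.
export class StudentsApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const userPool = new cognito.UserPool(this, "StudentUserPool");

    const handler = new lambda.Function(this, "ListStudentsFn", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("dist/list-students"), // path is illustrative
    });

    const api = new apigateway.RestApi(this, "StudentsApi");
    const authorizer = new apigateway.CognitoUserPoolsAuthorizer(this, "Authorizer", {
      cognitoUserPools: [userPool],
    });

    // Callers must present a token issued by the user pool.
    api.root.addResource("students").addMethod(
      "GET",
      new apigateway.LambdaIntegration(handler),
      { authorizer, authorizationType: apigateway.AuthorizationType.COGNITO },
    );
  }
}
```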
As we noted, the flip side to the benefits of seamlessness and interoperability that you get with proprietary serverless technologies is the potential for vendor lock-in. The good news is that it isn't difficult to design your solutions and code for portability, giving them reasonable migration paths to alternative platforms.
To start, some of these services do support open standards. Managed APIs, for example, support OpenAPI definitions, as well as OAuth 2.0 for authorization. Identity services generally support some degree of OIDC- or SAML-based federation. So both of these categories of service can potentially be migrated from one cloud platform to another, albeit with work involved in rewriting their deployment templates. If the compute layer of a serverless solution runs on OCI-compliant container hosts, then this too can be migrated, again, with deployment template work, to another cloud platform.
One might think that the code for serverless functions, which are entirely proprietary, cannot be migrated. But that is not actually the case either. As we'll cover in the next post, serverless functions can be written as lightweight protocol wrappers, containing only the code needed to marshal inputs and outputs and call portable service components.
Structuring code in this way facilitates migrating backend service code from functions to containers (or the reverse). Moreover, well-factored backend service code, in which business logic, context, and data access dependencies are all implemented as components with abstract interfaces, is also reasonably portable (with limited re-writing) across cloud platforms.
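A small sketch may help here. In the example below, names like EnrolmentService and EnrolmentRepository are purely illustrative: the business logic lives in a portable component behind an abstract interface, and the serverless function handler only marshals the incoming event and the outgoing response. The same component could sit behind an Express route in a container, with only the wrapper changing.

```typescript
// A sketch of the "protocol wrapper" idea: the serverless function contains
// only marshalling code; the service component stays portable.
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Portable service component: no cloud SDKs, no HTTP types.
export interface EnrolmentRepository {
  findByStudentId(studentId: string): Promise<string[]>;
}

export class EnrolmentService {
  constructor(private readonly repository: EnrolmentRepository) {}

  async listCourses(studentId: string): Promise<string[]> {
    if (!studentId) {
      throw new Error("studentId is required");
    }
    return this.repository.findByStudentId(studentId);
  }
}

// Thin function wrapper: unpack the event, call the component, shape the response.
const service = new EnrolmentService({
  // Placeholder implementation; a real deployment would inject a concrete repository.
  findByStudentId: async () => [],
});

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  const studentId = event.pathParameters?.studentId ?? "";
  const courses = await service.listCourses(studentId);
  return { statusCode: 200, body: JSON.stringify({ courses }) };
};
```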
To summarize: serverless platforms satisfy all our platform goals; they are now feature complete; their services work seamlessly with each other and with their native tools; and you can mitigate against lock-in, preserving migration paths to alternative platforms. I think this list alone tips the technology choice to serverless, but there is one more, decisive factor.
These service platforms have to operate within a larger context. You might define a suite of services to run in Kubernetes, for example, but that Kubernetes host will have to run somewhere, and integrate with the larger world. With serverless, you can build modern services for APIs, identity, and integration, and stay within that world, enjoying the benefits we've noted.
But when you need to integrate with other systems and services, you'll probably have to enter the IAAS world of network connectivity, at a minimum. With serverless solutions, any additional connectivity or IAAS resources are generally defined using the same tools and supporting services. So while working with the same tools and techniques, like declarative infrastructure-as-code, you get not only the serverless platform, but also a larger, global cloud platform.
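For example, if a serverless function needs private connectivity to reach a system that is only accessible over a VPN, the network resources can be declared in the same stack, with the same tooling. The sketch below (AWS CDK again, with illustrative names) adds a VPC and attaches a function to it.

```typescript
import { Stack, StackProps } from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

// Network connectivity declared alongside the serverless resources it serves.
export class IntegrationStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, "IntegrationVpc", { maxAzs: 2 });

    new lambda.Function(this, "LegacySyncFn", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("dist/legacy-sync"), // path is illustrative
      vpc,
      vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
    });
  }
}
```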
What this all Means
We've now outlined the three trends that I think define today's solution landscape. New solutions facilitate integration by incorporating modern API standards and micro-service architectures. These solutions can be expected to run in the cloud, where they'll have access to, and integrate with, the growing ecosystem of SAAS systems and services. And the emergence of serverless provides a natural cloud native platform on which these solutions can be built.
I think this matters because it represents the shortest path to success in terms of user experiences, software capabilities, and, most importantly, adaptability. If you're an enterprise organization or a software product company that faces real competitive pressure, and you need to be able to adapt and respond to change, then I think there's huge value in embracing this landscape.
In the enterprise, if you haven't already, I think it's worth getting proficient with modern integration standards and some of the aforementioned serverless technologies. A world in which all your systems are running in the cloud is on the horizon. You may not be at that point yet, but I think you will be, and when you get there, having these proficiencies will give you better options.
For product companies, I think the key thing is to see your software less as a standalone product and more as a suite of building blocks to be incorporated within larger solutions. That may ultimately mean re-architecting your software to offer more integration points. It definitely means providing the best possible API and web hook coverage, with the associated documentation and sample code to make these easy to work with.
Next
With these calls to action in mind, the two posts that follow in this series will expand on the serverless part of this solution landscape. The first explores some of the technology and architecture strategies that are enabled by serverless, while the second outlines some key elements you may want to consider when defining adoption roadmaps for serverless.