
How to overcome the challenges of Serverless

Eoin Shanaghy

· AWS and Serverless

I see the same thing every time a new way of building software comes along. As the new paradigm starts gaining traction, there seems to be an inevitable phase of polarized debate. Early adopters talk with passion and enthusiasm, lauding every advantage while detractors pick out every area where the new way compares poorly to older, established methods.

And so it is with Serverless, the most dramatic and exciting development I have seen yet in how we build software. We aim to be objective and scientific in evaluating new methods. Instead, we seem to be overruled by a stronger compulsion. This compulsion could be our passionate and creative side, excited by what we can do with new tools. Alternatively, our comfort and confidence with existing methods can lead us to an almost dogmatic stance, fearful or skeptical of something that looks different from common patterns and conventions.

In order to make the best decisions for our business and our customers, we must move away from this kind of black-and-white thinking. Being equally aware of the benefits and challenges allows us to make clear and confident decisions. Every new wave of technology has the following set of challenges.

  1. There is insufficient tooling in the early days
  2. There are fewer case studies and sources of evidence that the new way works well
  3. Common patterns and strategies for working in the new way are yet to be established
  4. The number of vendors offering support for the new way is small

Reading through this list, you will see that challenges tend to be temporary. Given enough time, they will disappear. With the Serverless Era, we are already in year five. It’s difficult to put a start date on it but many would put it at November 2014, when AWS Lambda was announced at AWS re:Invent. I believe that many of these challenges have been addressed. There is still room for further improvement but, having developed many Serverless solutions already, I now have a strong sense of confidence that building with Serverless today is, in the majority of cases, an excellent choice for our customers.

Here are some of the specific challenges that everyone building with Serverless should be aware of. In some cases, it might mean that Serverless is not the right choice for you. Mostly, it is simply important to be aware of these so you can design a solution that mitigates them.

Vendor Lock-In

Lock-in is the challenge I hear most in relation to Serverless. It has been brought up ever since the dawn of the cloud. It’s important to understand and measure what lock-in means to you, since it will depend on how you go about building your Serverless system. Ask the following questions.

  1. Where does lock-in apply? The Functions-as-a-Service (FaaS) part of Serverless is likely to be the easiest part to port to another vendor, since it’s just code. Lock-in applies more to the vendor-specific managed services, like DynamoDB, Kinesis and Aurora in the case of AWS. Forrest Brazeal has an excellent article on the lock-in of AWS IAM, which is referenced at the end of this article.
  2. What is the cost of switching to another vendor?
  3. What are the costs and missed opportunities of avoiding a Serverless approach for fear of lock-in? Compare this to the cost of switching vendor and the probability of having to switch.

Complexity

There is a paradox with the idea of microservices when it comes to complexity. Serverless is, in many ways, an extension of the microservices approach, so the same paradox applies. By splitting your system into clear, simple services that do one thing well, you achieve simplicity in each service. You then find that complexity emerges in the overall architecture of what becomes a large, distributed system.

Embrace this complexity! It is an inescapable fact that the predominant skills in developing large, enterprise systems will be around distributed system architecture, service integration, message distribution and data storage/retrieval. Serverless done right means significantly less code, particularly for mundane, undifferentiated logic that is not core to your business.

Coding time should shift towards Infrastructure-as-Code (IaC), ensuring your distributed system is well architected, defined and repeatable. This infrastructure should include enough monitoring, alerting and reporting to give you clarity and insight. AWS X-Ray is a good example, one of several excellent tracing solutions that give you a map of the messages passing through the system.
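As a concrete illustration of IaC with monitoring built in, here is a minimal AWS SAM template sketch (the function name, paths and runtime are illustrative): a single declaration both creates the Lambda function and, via `Tracing: Active`, enables the X-Ray tracing discussed above.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  OrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      CodeUri: src/
      Tracing: Active            # send traces to AWS X-Ray
      Events:
        GetOrders:
          Type: Api
          Properties:
            Path: /orders
            Method: get
```

Because the whole stack is declared in one file, deploying it again to a fresh account reproduces the same architecture, which is exactly the repeatability IaC is meant to provide.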

Figure 1 AWS X-Ray (Source https://aws.amazon.com/blogs/aws/aws-x-ray-update-general-availability-including-lambda-integration/)

Tooling

Tooling is of critical importance to building software quickly and without unnecessary friction. It can be divided into two areas – tooling for development and tooling for running the platform. Developer tooling is for working with code, testing and running it locally (on a developer’s own computer). Runtime tooling covers deploying, monitoring, troubleshooting and managing the robustness, performance and security of the production system on a daily basis.

There are many tools for dealing with the development side and plenty more to come. 2018 saw huge improvement in vendor tooling. The progress made by AWS on SAM, CDK and Amplify means we now have many superb options. The more established Serverless Framework has continually improved and now has healthy competition that is better for everyone. There is still plenty to be done to improve support for local development.

Figure 2 AWS CDK makes building Serverless solutions easier for developers (Source: https://aws.amazon.com/blogs/developer/aws-cdk-developer-preview/)

When it comes to running your platform, we have also seen improvement from cloud vendors in monitoring, deployment and security. In addition, we are seeing many third-party offerings such as IOpipe, Lumigo, Protego and Thundra.

Data Access When You Need It

In a non-serverless world, you typically have a virtual or physical machine with a disk attached. High-speed, local disk access is really useful for storing frequently-accessed data. It comes at a cost, since this data has to be kept synchronized with the true source of data. Still, in a Serverless world, we end up with a gap when it comes to accessing large volumes of data.

Frequently, we see Serverless solutions that use functions (AWS Lambda, for example) to pull data from storage (S3), operate on that data and store a computed result. The time to read and write data far exceeds the computation time. This is a significant disadvantage, since you are paying for compute time when you are not really computing. This is in addition to the performance disadvantage.
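The cost effect of this pattern is easy to quantify: Lambda bills for the whole invocation, so time spent waiting on storage is paid for at the same rate as real computation. A tiny sketch, with made-up numbers purely for illustration:

```python
def compute_share(read_ms: float, compute_ms: float, write_ms: float) -> float:
    """Fraction of the billed function duration spent actually computing.

    Lambda bills for the full invocation, so read and write time to
    storage (e.g. S3) counts towards cost just like computation does.
    """
    billed_ms = read_ms + compute_ms + write_ms
    return compute_ms / billed_ms


# Illustrative (invented) numbers: pulling a large object from S3 and
# writing the result back dwarfs the computation itself.
share = compute_share(read_ms=1800, compute_ms=200, write_ms=500)
# share == 0.08: only 8% of the paid-for duration is real work
```

When that fraction is small, the strategies below – pushing work into the data store or pre-aggregating – pay off quickly.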

Right now, the solution here is to limit the data you pull into functions. You can do this by leveraging the compute and aggregation support in the data store itself – SQL in the case of a relational database or S3 Select in the case of S3. You can also use dedicated data processing services instead of rolling your own in a function. This can be a MapReduce or Spark implementation, for example. Another option is to pre-compute aggregations on data as it streams into your platform.
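The last option – pre-computing aggregations as data arrives – can be sketched in a few lines of Python. The class and field names here are illustrative; in practice the `ingest` method would be fed by a stream consumer (Kinesis, for example) and the summaries persisted, so that queries read a small pre-computed result instead of pulling raw data into a function.

```python
from collections import defaultdict


class RunningAggregator:
    """Maintain per-key totals and counts as events stream in, so
    later queries touch a small summary rather than the raw data."""

    def __init__(self) -> None:
        self._totals: dict[str, float] = defaultdict(float)
        self._counts: dict[str, int] = defaultdict(int)

    def ingest(self, event: dict) -> None:
        key = event["product_id"]
        self._totals[key] += event["amount"]
        self._counts[key] += 1

    def summary(self, key: str) -> dict:
        return {"total": self._totals[key], "count": self._counts[key]}


agg = RunningAggregator()
for event in [{"product_id": "p1", "amount": 10.0},
              {"product_id": "p1", "amount": 5.0},
              {"product_id": "p2", "amount": 2.5}]:
    agg.ingest(event)
# agg.summary("p1") -> {"total": 15.0, "count": 2}
```

The aggregation work is spread across many small, cheap invocations as events arrive, instead of one long, storage-bound function run at query time.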

Cloud vendors will address this challenge in new ways. I have a strong feeling that AWS will ultimately implement Lambda-in-a-Bucket support, allowing us to execute functions close to the data in S3 and avoiding the data transfer overhead. This would be a progression from Lambda@Edge and Lambda support for IoT devices.

Skill Shift

“Wanted: Serverless Developer with 10 years’ experience”. I can see the job adverts now! In reality, there is and always has been a skills gap in software development. The real skills are staying curious, always learning, challenging your established thinking, adapting to new methods and asking questions.

Serverless development promises us another leap forward in doing much more, much faster. It’s easy to jump in to Serverless with no experience and get your hands dirty. It’s also possible to spend time going down the wrong path and learning the hard way. A certain amount of this is part of the learning experience. If you don’t want it to get out of hand, ask for help. There is a great Serverless community out there and plenty of people with experience in building production Serverless applications. It’s a shame to see companies struggle on, trying to do everything themselves and attempting to solve every project delay by hiring or simply pushing deadlines out.
