By Devon Hillard
AWS Lambda is a platform that makes it easy to run Serverless applications on Amazon’s infrastructure.  AWS Lambda handles things like spin-up, spin-down, and auto-scaling, and makes it easy to integrate with other Amazon services.  It supports several runtimes, including NodeJS, Python, Java, and more.
 
Recently I have been working on designing and building a service, and ancillary tools, to help manage some complex application clusters on AWS.  It’s essentially a collection of Web Services and will be called rarely.  As such, it made some sense to build it with a Serverless approach so that there wouldn’t be infrastructure running (and costing) 24x7 for a service that might only actually be used for 30 minutes a month.  There is also some relational data needed to manage persistent state.  So AWS Lambda coupled with AWS Aurora Serverless seemed like an ideal approach.  Lambda Web Service functions that run only when called, and a database that spins up only when needed!  Perfect!
 
I used Serverless.com (sls) to manage the deployment to Lambda and to set up the API Gateway.  I am quite impressed with Serverless, and wouldn’t hesitate to use it again for Lambda applications.
Being a Long Time Java Guy (LTJG), I picked Spring as my framework for a few reasons.  I’ve worked with Spring before and appreciate the ease Spring Boot brings to building a new application.  I also knew there were Spring plugins for all the AWS services I was going to need to interact with, as well as Serverless and Lambda plugins!
 
I created my Spring Boot project quickly enough.  I had to do a little bit of work to change from Maven to Gradle (personal preference), and then to get Serverless to play nicely with Gradle-style artifacts.  Once I had a web service that built and deployed, I started adding plugins and stripping out unneeded cruft (such as the embedded Tomcat runtime).
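For anyone wiring this up themselves, the Lambda entry point ends up being a thin handler wrapped around the Spring Boot application.  Here’s a rough sketch of what that wiring typically looks like with the aws-serverless-java-container library (a sketch only; the “Application” main class name is a placeholder, and your exact plugin setup may differ):

```java
// Sketch of the Lambda entry point: a thin RequestStreamHandler that proxies
// API Gateway requests into the Spring Boot application.
// "Application" stands in for your @SpringBootApplication main class.
import com.amazonaws.serverless.exceptions.ContainerInitializationException;
import com.amazonaws.serverless.proxy.model.AwsProxyRequest;
import com.amazonaws.serverless.proxy.model.AwsProxyResponse;
import com.amazonaws.serverless.proxy.spring.SpringBootLambdaContainerHandler;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamLambdaHandler implements RequestStreamHandler {

    private static final SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            // Boots the Spring context once per Lambda container -- this is the cold start cost
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
        } catch (ContainerInitializationException e) {
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
        // Translates the API Gateway proxy request into a servlet request for Spring
        handler.proxyStream(input, output, context);
    }
}
```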
 
Architecturally, I wrestled with the approach a bit.  I have 20 years of “old school” monolithic application architecture and development under my belt, but I’m trying to embrace more current ideas like statelessness and micro-services.  I’d mapped out 10 web services that had to be exposed, plus a few internal “common services” that would provide functionality or integration wrapping needed by multiple client-facing services.  However, with only a single small development team, and with us controlling all the tooling that would be calling these web services, breaking this into 13 Spring Boot applications with 13 git repos, 13 build scripts, 13 artifacts, etc. seemed like a large amount of overhead for us to take on, with little to no benefit.
 
As such, I went with a hybrid approach: a single git repo, with a single Spring Boot application, one build script, and one artifact.  Then I used SLS to deploy that artifact to 13 AWS Lambdas, each with a different web service endpoint.  A monolithic build with a micro-service deployment.
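To make the hybrid layout concrete, here’s a sketch of how it looks from the Spring side: every web service is just another controller in the one codebase, and SLS points each Lambda at the matching API Gateway path.  The endpoint and class names below are made up for illustration, not my real services:

```java
// Sketch: all of the web services live in one Spring Boot codebase and one artifact.
// Each deployed Lambda only receives traffic for its own API Gateway path,
// so each instance effectively serves a single controller.
// The endpoint names are illustrative, not the real service names.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/cluster-status")
class ClusterStatusController {

    @GetMapping
    public String status() {
        return "{\"status\":\"ok\"}";
    }
}

@RestController
@RequestMapping("/cluster-scale")
class ClusterScaleController {

    @PostMapping
    public String scale() {
        return "{\"accepted\":true}";
    }
}
```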
 
For a while, everything worked great.  Cold starts sucked, but I’d done some optimization of the Spring framework to speed up boot time, and I’d configured the maximum RAM for the Lambda instances, which in turn provides a bit more CPU power.  Plus, none of these calls had strict response-time requirements and none were user-facing.  And after the cold start, Spring is FAST!  Responses, including my network latency to AWS, were about 5 ms once the application was running.
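The boot-time optimization was nothing exotic: skip what you don’t need and defer what you can.  Something along these lines (a sketch only, assuming Spring Boot 2.2+ for the lazy-initialization flag; the same setting is also available as the spring.main.lazy-initialization property):

```java
// Sketch of the kind of startup tuning that helps on Lambda:
// skip the banner and defer bean creation until first use.
// setLazyInitialization requires Spring Boot 2.2 or later.
import org.springframework.boot.Banner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(Application.class);
        app.setBannerMode(Banner.Mode.OFF);   // no banner output at startup
        app.setLazyInitialization(true);      // create beans on first use instead of at boot
        app.run(args);
    }
}
```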
 
But then we hit a wall.  The Spring Boot application on Lambda might take 10-20 seconds to boot from a cold start (versus 4-5 seconds locally).  That’s a wide range, and it fluctuated outside my control, but on its own it was workable.  Then, when we started integrating with data in the Aurora Serverless database, things broke.  Aurora Serverless ALSO has a slow cold start before it’s ready to return data.  Once we plugged in the database, the combination of the Lambda cold start plus the Aurora cold start often exceeded 30 seconds.  Aside from being just plain SLOW, that crossed the maximum timeout on the AWS API Gateway, and this 30 second timeout cannot be increased.  When warm, the application and DB responded VERY quickly, but when cold (and given the infrequent use of these services, many requests hit a cold application stack), the majority of requests failed completely due to the API Gateway timeout.
 
I spent most of a day trying various optimization tricks, brainstorming and researching hacks and workarounds, and even evaluating building retry capabilities into every client service.  But at the end of the day, we’d simply hit a wall.
 
Luckily, it is easy enough to take the Spring application and deploy it statefully, running on Tomcat on EC2 or via Elastic Beanstalk, and to swap Aurora Serverless for Aurora or MariaDB on RDS.  That keeps most of the coding work in place and lets us continue to leverage Spring’s helpful plugins.  And the hosting costs are tiny, even running 24x7.
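Code-wise, the main change for the Tomcat deployment is the standard Spring Boot servlet initializer so the same application can be packaged as a WAR.  Roughly (again, “Application” is a stand-in for the real main class):

```java
// Sketch: standard Spring Boot servlet initializer so the same application
// can be packaged as a WAR and dropped onto Tomcat (EC2 or Elastic Beanstalk).
// "Application" stands in for the @SpringBootApplication main class.
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;

public class ServletInitializer extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(Application.class);
    }
}
```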
 
Another advantage of this more “old school” approach is that I can run the Spring app locally and do quick iterations of development and testing on my laptop, without having to push each code change to the AWS Lambda platform and wait for CloudFormation to run every time.  I know you can, in theory, run Lambda applications locally for testing, but I was never able to get it to work properly for Spring applications expecting an API Gateway request.  I believe AWS Lambda works much better with NodeJS or other languages.  Java definitely feels like a second-class citizen with AWS Lambda, and even with Serverless to some extent.
 
While Lambda does support Spring, I strongly recommend against using AWS Lambda to run your Spring applications.  The cold boot issue is a deal breaker, even before you add in the Aurora Serverless delays.  Even if you have a higher-traffic application, you’ll still hit cold starts whenever it scales up.  If you want Serverless on Lambda, pick NodeJS or Python (if you have the necessary expertise).  If you want Spring, run it on a server!