
Your serverless project is growing: on overcoming CloudFormation stack limits, shared API Gateway endpoints, and custom domains

Hitting the resource limit in your AWS CloudFormation template? I'll show you the method that worked for me!



As most of you probably know, AWS CloudFormation has its limits, and one in particular is pretty annoying: the maximum number of resources you can declare in a single CloudFormation template. At the time of writing, it is limited to 200. As someone who ran into this problem, I am going to show you the method that worked for me.

Furthermore, I will cover the sharing of API Gateway endpoints and custom domains as well.

CloudFormation limits equal Serverless Framework limits

It won't be a surprise if I tell you that, while using the Serverless Framework, you have to deal with the CloudFormation machinery lying under the hood. To be honest, I don't treat this as a drawback, but keep in mind that getting to know its constraints takes time, especially during development. Each time you type sls deploy with its options, you're creating a new CloudFormation stack or updating an existing one.

After the stack is updated, the number of resources created within that stack is displayed in the resource summary. Note that it doesn't count just functions but all the resources you've additionally added, like IAM roles, database tables, S3 buckets, SQS queues, and much more that is a mandatory part of your serverless projects.

The AWS Console also provides this information in the CloudFormation dashboard (number of resources after the service split):

Unfortunately, I have to confess that I missed this point during development and reached almost 200 resources (somewhere around 190). The real obstacle was neither the CF limit itself nor the level I had reached, though. What I found confusing was figuring out how to break the service into multiple logical services while keeping one common API Gateway.

NOTE: By default, each Serverless project generates a new API Gateway.

What’s available on the market?

My first thought when I encountered this problem was: let's find something out-of-the-box. The most reasonable tool seemed to be the serverless-plugin-split-stacks plugin. If I had been starting the project from scratch, it would hopefully have saved me time and reduced my worries about limits. However, this was not the case, as I already had about 40 Lambda functions working, some additional AWS services, and new parts of the microservice in my mind, waiting in line. If you take a glance at the plugin's description, you'll notice it reads: "It is a good idea to select the best strategy for your needs from the start because the only reliable method of changing strategy, later on, is to recreate the deployment from scratch". No way, not on a Saturday.

By the way, if you're starting a new serverless project, consider that plugin. By default, three types of split are available to you: Per Lambda, Per Type, and Per Lambda Group.
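For reference, enabling the plugin takes only a few lines in serverless.yml. This is a sketch based on the plugin's documented options (pick exactly one strategy):

```yaml
plugins:
  - serverless-plugin-split-stacks

custom:
  splitStacks:
    perFunction: false      # "Per Lambda" strategy
    perType: true           # "Per Type" strategy
    perGroupFunction: false # "Per Lambda Group" strategy
```

The plugin then migrates resources into nested CloudFormation stacks according to the chosen strategy, which is exactly why switching strategies later requires redeploying from scratch.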

Let’s jump out of a single serverless.yml

Before making a change, I had one main microservice located in one directory (see below). My idea was to keep one microservice but extract the different logical parts within it. Just to make it clear, I am not talking about business logic but about a bunch of modules, each a large group of Lambda functions, working within one service.

My initial directory:

├── service-a
│   ├── files
│   ├── functions
│   ├── helpers
│   ├── node_modules
│   ├── package-lock.json
│   ├── package.json
│   ├── requirements.txt
│   ├── resources
│   ├── serverless.yml
│   └── tests

It is worth highlighting that you can follow this pattern if your application has many nested paths (presented below with service-a and service-b) and your goal is to split them into smaller services. Although the two services have been deployed via different serverless.yml files, both "a" and "b" reference the same parent path /posts.

Keep in mind that CloudFormation will throw an error if we try to generate an existing path resource. More on how to deal with that in the next paragraph.


After the split, I ended up with directories like the ones shown below, each having its own functions and sharing some AWS resources.

├── service-a-module-1
│   ├── files
│   ├── functions
│   ├── helpers
│   ├── node_modules
│   ├── package-lock.json
│   ├── package.json
│   ├── requirements.txt
│   ├── resources
│   ├── serverless.yml
│   └── tests
└── service-a-module-2
    ├── functions
    ├── helpers
    ├── package-lock.json
    ├── package.json
    ├── requirements.txt
    ├── resources
    ├── serverless.yml
    └── tests

We’ve conquered the stack limits. Let’s talk about sharing API Gateway endpoints and custom domains!

Custom domain sharing

Long story short, if you create multiple API services via separate Serverless Framework files – serverless.yml – each of them will get its own unique API endpoint: one for service-a and a different one for service-b.

You can assign different base paths to your services, so that one path points to one service while another points to the second one. But if you try to split up your service-a itself, you'll face the challenge of sharing the custom domain across the resulting modules.

* service-a-api ⇒ GET /service-a/{bookingId}

* service-a-api ⇒ POST /service-a

* service-a-api ⇒ PUT /service-a/{bookingId}

* service-b-api ⇒ POST /service-b

So, what’s the issue, you may ask. Let me explain.

Generally, each path part is a separate API Gateway resource object, and a path part is a child resource of the preceding one. So, the aforementioned path part /service-a is basically a child resource of /, and /service-a/{bookingId} is a child resource of /service-a. Going further, we would like service-b-api to expose the /service-b path. This would also be a child resource of /. However, / is created in the service-a service, so we need to find a way to share that resource across services. Since custom domain sharing wasn't exactly my issue, let me give you a solution to this problem in the next section.

API Gateway endpoint sharing

As my case was about sharing the same API endpoint among logic modules, I started with the Serverless Framework documentation. Since it offers only a brief explanation, without detailed examples, I decided to search further. I divided my problem into separate parts and focused on the endpoint I had created via the initial service-a directory.

Reading the "Custom domain sharing" part, notice the info about child resources and their dependencies on the preceding parts. At this point I'd like to add one thing: the root resource and all child resources have their own "ids", visible in the AWS Console dashboard:

/ pointing to bqdplee0re

/v1 pointing to i2315j

Having this knowledge, I knew I had to find a way to keep two different microservice modules (defined in separate serverless.yml files) pointing to one common endpoint. I made some attempts based on the docs, but each time I tried to deploy service-a-module-2 via sls deploy, I got an error notifying me that I was trying to generate an existing path resource, /v1.

Finally, I reached my goal. Just look how simple it was.

  • Create API Gateway endpoint in the first service module
  • Create API Gateway PathPart resource in the first service module
  • Share root and, if needed, child path parts
  • Import outputs in the second module
  • If you’re sharing child path, use restApiResources in the module you’re sharing to
  • Configure your paths in Lambda functions/Step Functions in both modules

service-a-module-1 serverless.yml config snippets:
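The original snippets didn't make it into this version of the post, so here is a minimal sketch of the module's basic config. The Python runtime, region, and the api_ver custom variable are illustrative assumptions:

```yaml
# service-a-module-1/serverless.yml (sketch; names are illustrative)
service: service-a-module-1

provider:
  name: aws
  runtime: python3.9
  region: eu-west-1

custom:
  api_ver: v1   # reused as the shared /v1 path part below
```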

Additionally, I had to define, in external resource files, the resource for the child API Gateway path (/v1). On top of that, the output values I wanted to export (the ApiGatewayId, the root path id referencing /, and the child path id referencing /v1) had to be shared with service-a-module-2:
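A minimal sketch of what such a resources file could look like. ApiGatewayRestApi is the logical id the Serverless Framework gives the REST API it generates; the export names are illustrative:

```yaml
# resources for service-a-module-1 (sketch; export names are illustrative)
resources:
  Resources:
    # Explicit /v1 child path under the API root
    ApiGatewayResourceV1:
      Type: AWS::ApiGateway::Resource
      Properties:
        RestApiId:
          Ref: ApiGatewayRestApi
        ParentId:
          Fn::GetAtt: [ApiGatewayRestApi, RootResourceId]
        PathPart: ${self:custom.api_ver}
  Outputs:
    ApiGatewayId:
      Value:
        Ref: ApiGatewayRestApi
      Export:
        Name: service-a-module-1-ApiGatewayId
    ApiGatewayRootResourceId:
      Value:
        Fn::GetAtt: [ApiGatewayRestApi, RootResourceId]
      Export:
        Name: service-a-module-1-RootResourceId
    ApiGatewayV1ResourceId:
      Value:
        Ref: ApiGatewayResourceV1
      Export:
        Name: service-a-module-1-V1ResourceId
```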

The child path is defined by PathPart: ${self:custom.api_ver}.

service-a-module-2 serverless.yml config snippets:
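A sketch of the importing side, using the provider.apiGateway settings the Serverless Framework documents for sharing a REST API. The import names are illustrative and must match whatever service-a-module-1 actually exports:

```yaml
# service-a-module-2/serverless.yml (sketch; export names are illustrative)
provider:
  name: aws
  region: eu-west-1   # must match service-a-module-1 (CloudFormation exports are regional)
  apiGateway:
    restApiId:
      Fn::ImportValue: service-a-module-1-ApiGatewayId
    restApiRootResourceId:
      Fn::ImportValue: service-a-module-1-RootResourceId
    # Map the already-existing child path to its resource id so this stack
    # does not try to recreate /v1
    restApiResources:
      /v1:
        Fn::ImportValue: service-a-module-1-V1ResourceId
```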

NOTE: It has to be defined in the same region as service-a-module-1 because of CloudFormation outputs imports.

As you’ve noticed, service-a-module-1 has a function ready to be invoked via path:
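The original snippet was lost; a minimal sketch, assuming a hypothetical booking function under the shared /v1 base path:

```yaml
# service-a-module-1: function attached under /v1 (illustrative names)
functions:
  getBooking:
    handler: functions/get_booking.handler
    events:
      - http:
          path: ${self:custom.api_ver}/booking/{bookingId}
          method: get
```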

Whereas service-a-module-2 has a path defined within a different serverless.yml file:
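Again a sketch with illustrative names, showing a second module attaching its own route under the same /v1 parent it imported:

```yaml
# service-a-module-2: a different serverless.yml, same /v1 parent (illustrative)
functions:
  createPackage:
    handler: functions/create_package.handler
    events:
      - http:
          path: v1/package
          method: post
```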

Before moving to the summary, let me explain one thing. It will be useful for those of you whose API versioning approach differs from embedding the version in the URL. This topic provokes philosophical debates, and people often lose sight of the real goal: developing software for business goals, thinking about the API, and making it easily consumable. We decided to use route versioning because, during development, our stage variable may in some cases contain a different Lambda alias. Then, based on the resource (like /v1/package), the env variable, and the Lambda function name with its alias, API Gateway chooses the corresponding function to invoke.

Lessons learned

  • CloudFormation stack limits may be inconvenient during serverless project development. But if you anticipate the scale of resources per microservice, this shouldn't bother you at all.
  • Endpoint sharing and custom domain sharing can be implemented in the same way. I’ve described the former but the pattern is the same for the latter as well.
  • There is one drawback: service-a-module-2 is dependent on service-a-module-1 API Gateway resource.

Technology Stack

AWS CloudFormation
Amazon API Gateway