This version contains breaking changes, and we apologize for that. AWS and the serverless landscape in general are changing rapidly. We are doing our best to keep up and offer a better user experience on top of everything without introducing breaking changes, but this release contains a few.
## Breaking - Unique Function Names Required

All of your functions must have unique names project-wide. For example, `users/show` should become `users/usersShow`. Serverless will throw an error on initialization if your function names are not unique.

**How to upgrade:** Change your function names to be unique.
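As a minimal sketch, a renamed function's `s-function.json` might carry the new project-wide-unique name like this (the function name and handler shown are illustrative, not required values):

```json
{
  "name": "usersShow",
  "handler": "handler.handler"
}
```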
## Breaking - Removed Components While Keeping and Improving Their Functionality

We've removed the concept of Components and instead improved our nested-folders functionality to give you complete control over how you structure your project. As a result, there's no `s-component.json` anymore, and the runtime is managed at the function level, so your `s-function.json` should now have a `runtime` property with the value `nodejs` or `python2.7`. That also means you can create a function in the root of your project directly with `sls function create myFunc`, or nest as deeply as you want with `sls function create functions/subfolder/myFunc`. If you have settings in `s-component.json`, they will no longer apply. We recommend using Project Templates to store any `s-component.json` settings instead.
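Since the runtime now lives at the function level, an `s-function.json` fragment would declare it directly (the function name here is illustrative):

```json
{
  "name": "myFunc",
  "runtime": "nodejs"
}
```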
This change affects the default name we give to your functions when they are deployed. It will cause issues when you upgrade from v0.4 to v0.5, as your functions will be renamed when deployed on AWS.

**How to upgrade:** Redeploy your functions. We also recommend taking advantage of the `customName` property in `s-function.json` if you aren't already. The new default deployed function name is simply `project-function`.
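For illustration, pinning a deployed name with `customName` might look like this (the value shown is an assumption; pick whatever naming scheme you need):

```json
{
  "name": "usersShow",
  "customName": "myproject-users-show"
}
```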
## Flexible Dependency Management via Magic Handlers

Now that you have complete control over how you organize your functions, we've come up with the concept of "magic handlers" to give you more control over what gets deployed along with your function and how you handle function dependencies. We're referring to the `handler` property of the `s-function.json` file. Depending on which parent folder this handler path is relative to, your lambda function will be zipped up from that parent folder and deployed to AWS.

By default the handler is set to `handler.handler`. That means it's relative to the function folder, so only the function folder will be deployed to Lambda. If, however, you want to include the parent subfolder of a function, change the handler to something like `myFunc/handler.handler`. As you can see, the handler path now includes the function folder, which means the path is relative to the parent subfolder, so in that case the parent subfolder will be deployed along with your function. If you have a `lib` folder in that parent subfolder that is required by your function, it'll be deployed with your function. This also gives you the ability to handle npm dependencies however you like: if you have a `package.json` and `node_modules` in that parent subfolder, they'll be included in the deployed Lambda. The more parent folders you include in the handler path, the higher you go in the file tree.
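As a sketch, the folder and file names below are illustrative; the arrows mark which folder gets zipped for each handler value:

```
functions/
  subfolder/            <- deployment root when handler is "myFunc/handler.handler"
    package.json
    node_modules/
    lib/
    myFunc/             <- deployment root when handler is "handler.handler"
      s-function.json
      handler.js
```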
## Breaking - Multiple s-resources-cf.json Files

We are removing the ability to have resources nested in your function subfolders. We received a lot of feedback that resources should be broken up into separate, smaller resource stacks instead of one resource stack defined by separate resource files. This follows microservice patterns more closely, with resource separation per microservice, and we are considering implementing it in the near future.

**How to upgrade:** Merge your `s-resources-cf.json` files into a single file kept at the root of your project.
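When merging, copy the `Resources` entries from each nested file into the one root file so the result is a single CloudFormation template. A minimal sketch (the resource names are assumptions; use your own):

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "UsersTopic": { "Type": "AWS::SNS::Topic" },
    "EmailsQueue": { "Type": "AWS::SQS::Queue" }
  }
}
```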
## Breaking - Removal of Project Buckets, Changing Environment Variable Handling

Project buckets were more useful when we started Serverless/JAWS several months ago. However, AWS continues to improve Lambda, making project buckets less relevant. As a result, we've decided to remove them to reduce complexity. This will break how your project handles environment variables, since they used to be stored in project buckets.

**How to upgrade:** You will need to transfer your environment variables from S3 to your `s-function.json` files. Environment variables are now stored in an `environment` object in your `s-function.json` file, like this:

```json
"environment": {
  "MY_ENV_VAR": "VALUE"
}
```

However, we recommend not putting environment variable values in the `environment` object directly, and instead using Serverless Project Variables as values. This allows your environment variables to differ per stage and per stage+region within your project. It also isolates all sensitive data to your `_meta` folder, which you should now always `.gitignore`.
```json
"environment": {
  "MY_ENV_VAR": "${env_myenvvar}" // Kept in _meta/variables; changes depending on stage/region
}
```
Over the next few days, we will introduce a plugin that helps developers working on the same project sync their `_meta` folder across teams, securely via S3. Until then, make sure you `.gitignore` the `_meta` folder in your existing project.
## Serverless-Helpers-JS No Longer Needed

The `serverless-helpers-js` module is no longer needed. Environment variables are automatically inlined into your function both when it runs locally AND when it is deployed. This is done by swapping your handler with another one on FunctionRun and FunctionDeploy. That handler is titled `_serverless_handler`, and it contains your environment variables.
## Added Support for Multiple AWS Accounts for Your Stages

You can now use a different AWS account/profile for each of your stages, making it easier to work with big teams. When you create a new stage, you will be prompted for which profile to use with that stage. You can also create a new profile at that time.
## Added API Gateway Custom Authorizers Support

API Gateway custom authorizers let you call a Lambda function containing authorization logic before calling the Lambda function containing your application logic.

We've added support for this new API Gateway feature into the Framework and improved the user experience. Here is how it works:

The function that you want to act as an authorizer (i.e., run the authorization logic) must have an `authorizer` property in its `s-function.json` with these properties:

```json
"name": "auth",
"authorizer": {
  "type": "TOKEN",
  "identitySource": "method.request.header.Authorization"
},
```

Endpoints that require that custom authorizer must contain these properties:

```json
"authorizationType": "CUSTOM",
"authorizerFunction": "auth", // Name of the function in your project that does authorization
```
## Function Deploy Is Now 2x Faster

Previously, we backed up your Lambda functions to the project S3 bucket whenever you deployed a function, which slowed down deployment. We did this to get extra space in case you reached the 1.5 GB Lambda storage limit, which is easy to hit with versioning. But since AWS announced it's increasing the quota from 1.5 GB to 75 GB, we decided it's time to ditch this functionality and keep FunctionDeploy fast.
## Added 5 New Actions

- Function Remove (`sls function remove`): Removes a function from your AWS account along with any endpoints and events tied to it. For more info about this new action, check out the v0.5 docs.
- Endpoint Remove (`sls endpoint remove`): Removes an endpoint from your AWS account. For more info, check out the v0.5 docs.
- Event Remove (`sls event remove`): Removes an event from your AWS account. For more info, check out the v0.5 docs.
- Function Rollback (`sls function rollback`): Rolls your function back to a previously deployed version. For more info, check out the v0.5 docs.
- Resources Diff (`sls resources diff`): Outputs the differences between your deployed resources and the resources currently defined locally in your project. For more info, check out the v0.5 docs.
## Introducing Runtimes. Plus, You Can Add Your Own Runtimes

All runtime logic is now isolated in runtime-specific classes, available on the Serverless instance in the `classes` property. You can add your own runtime through a plugin by simply adding a new runtime class to the main Serverless instance. We've already created a custom Babel runtime, which you can reference here: https://github.com/serverless/serverless-runtime-babel. When you install this plugin, you will have a `babel` runtime option. It supports scaffolding Babel functions, running them locally, and deploying them with optional optimizations.
## Removed Function/Endpoint/Event Paths Concept

To make our CLI easier to use, we've removed the concept of Function/Endpoint/Event paths. Instead, the names of these assets should be unique project-wide. This way, you can simply run `sls function deploy myFunc` instead of `sls function deploy path/to/myFunc`. Names are what we use to identify functions, endpoints, and events.

Although endpoints don't have a name property, an endpoint's name is simply the endpoint path and the endpoint method joined by a `~` symbol (e.g., `user/create~GET`). This combination is always unique project-wide, and we call it the Endpoint Name. It's also worth noting that the only time you'll be required to enter a function path is with `sls function create`, because in that case you're specifying not just the function name but also the location inside your project where the function should be created.
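The naming convention above can be sketched as a tiny helper (this is just an illustration of the convention, not a framework API):

```javascript
// Derive the project-wide unique Endpoint Name from a path and a method.
function endpointName(path, method) {
  return path + '~' + method;
}

console.log(endpointName('user/create', 'GET')); // → "user/create~GET"
```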
## Added VPC Support

We added VPC support immediately after AWS announced VPC support for Lambda. There's a new property in the `s-function.json` file called `vpc` that looks like this:

```json
"vpc": {
  "securityGroupIds": [],
  "subnetIds": []
}
```

This is where you configure the VPC settings for this Lambda. For more info, check out this issue.
## Improved Event Sources Configuration

We've improved event-source support for Scheduled and S3 events. You can now specify input parameters for scheduled Lambdas by adding a new property to the `config` object called `input`, which is just a JSON object to be passed to your Lambda. For S3 events, you can now specify `filterRules` in the `config` object to filter the events that trigger your Lambda. It should look like this:

```json
"name": "myS3Event",
"type": "s3",
"config": {
  "bucket": "${testEventBucket}",
  "bucketEvents": ["s3:ObjectCreated:*"],
  "filterRules": [
    {
      "name": "prefix | suffix",
      "value": "STRING_VALUE"
    }
  ]
}
```
## Breaking - Changed Plugin Loading

The way plugins load has changed slightly. Instead of passing a `ServerlessClass` to the plugin, we pass in the entire `Serverless` instance. This allows plugin authors to manipulate the classes Serverless offers, for complete extensibility.

**How to upgrade:** Some plugins will not work until they are updated. The Optimizer and S3-Client plugins have already been upgraded. Please review the improved section on Plugin Creation in our documentation to quickly update your plugin: http://docs.serverless.com/v0.5.0/docs/plugins
## Breaking - Powerful New Classes and Methods for Plugin Authors

We've completely rewritten our codebase and made it more object-oriented by creating powerful classes for each of our assets. There are now Serverless, Project, Function, Endpoint, Event, Stage, Region, Resources, Variables, Templates, and ProviderAws classes, each of which contains powerful methods that make it easy to add custom functionality to the framework. Note that we removed the State class; you now access the project through the Project class. For help, please check out the improved section on Plugin Creation in our documentation: http://docs.serverless.com/v0.5.0/docs/plugins

**How to upgrade:** Some plugins will not work until they are updated to use the new classes. Please review the improved section on Plugin Creation in our documentation to quickly update your plugin: http://docs.serverless.com/v0.5.0/docs/plugins