I am using serverless to deploy my API on AWS.
Serverless allows you to deploy a single function:
sls deploy -f <function name>
But it doesn't allow you to remove a single function:
sls remove   # this removes all functions
Is there any way to remove a single function without impacting the other functions?
@justin.m.chase suggested:
Simply remove the function from serverless.yml, then run a full deploy:
sls deploy
The function is removed (Lambda + API Gateway). Perfecto!
I know it's a bit old, but the deploy pain of Serverless is still a thing.
I recently developed a CLI for building microservices in AWS, taking advantage of the AWS SAM CLI (hence the name: Rocketsam).
The CLI caches builds per function (no more full deploys of the microservice when only one function's code has changed).
It also has additional features such as splitting the template file per function, sharing code across functions, fetching logs, and more :)
https://www.npmjs.com/package/rocketsam
Currently the CLI supports building functions in Python 3.6 only, but it can easily be extended in the future depending on demand.
As Peter Pham said, remove the function from serverless.yml and do a full deploy:
sls deploy
If you try to delete the function manually in AWS, it causes a lot of headaches.
I know this question is over a year old and has been closed, but the correct way to remove a single function is to specify it by name, which you almost had:
sls remove -f <function name>
I recently (2 days ago) upgraded the Node runtime engine on our Cloud Functions instance from Node 10 to 12. (Not sure that is a factor, but it is a recent change.)
Since the upgrade I have been using the Cloud Functions project without trouble. Today is the first time I have done a deploy SINCE the deployment that changed the Node engine. After I did the deploy, ALL of the runtime environment variables were deleted except one labeled FIREBASE_CONFIG.
As a test, I added another test environment variable via the Cloud Functions console UI. I refreshed the page to ensure the variable was there. Then, I ran another deploy, using this command:
firebase use {project_name} && firebase deploy --only functions:{function_name}
After the deploy completed, I refreshed the environment variables page and found that the test variable I had created was now missing.
I'm quite stumped. Any ideas? Thank you!
It is true that the Firebase CLI manages environment configuration and does not allow us to set the runtime environment variables during deployment. This has been explained in other posts as well, like this one.
I guess you are aware of the difference between the Cloud Functions Runtime Variables and the Firebase Environment configuration, so I will just leave it here as a friendly reminder.
Regarding the actual issue (new deployments erasing previously set Cloud Functions runtime variables), I believe this must be something they have already fixed, because I tested with version 9.10.2 of the Firebase CLI and could not replicate the issue on my end.
I recommend checking the CLI version that you have (firebase --version) and, if you still experience the same issue, provide us with the steps you took.
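For reference, here is a rough sketch of the commands involved in each mechanism (the function and variable names below are just placeholders):
firebase --version   # check which CLI version you are running

# Firebase environment configuration (managed by the Firebase CLI, read via functions.config())
firebase functions:config:set someservice.key="VALUE"
firebase functions:config:get

# Cloud Functions runtime environment variables (the ones shown in the Cloud Console UI)
# are managed through gcloud rather than the Firebase CLI
gcloud functions deploy FUNCTION_NAME --update-env-vars MY_VAR=some-value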
I am a beginner to cloud. I have a GCP account with multiple projects in it, and I have a Cloud Function. Right now I am deploying the same function again and again, individually for each project, from the console. Is there any way I can deploy one Cloud Function to all projects by just looping over the project IDs, using Terraform or any other platform?
You can define your function, and everything else that repeats in each project, in a module and then use this module in each project definition. To do it you'll need to explicitly define each of your projects in the Terraform configuration. It might be worth doing if you can take advantage of other Terraform features, e.g. state tracking, keeping infrastructure as code, transparency, and reusability, and if you expect infrastructure complexity to grow without wanting everything to become confusing.
Otherwise, if you are not going to do anything complex, and all you need is to deploy the same function over multiple projects with nothing more complicated planned for the foreseeable future, then Bash scripting with the gcloud CLI is your Swiss Army knife. You can check this as a reference: https://cloud.google.com/functions/docs/quickstart
Assuming you have your function code in Google Cloud Source Repositories, and you just want to deploy the same code in all projects, you can create a simple BASH script to do so.
First, you need to recover all the projects you have:
gcloud projects list --format 'value(projectId)'
Then, for each project, deploy the function (I'm assuming Node.js 12 and an HTTP trigger, but edit at your convenience):
for project in $(gcloud projects list --format 'value(projectId)'); do
  gcloud functions deploy <FUNCTION_NAME> \
    --source https://source.developers.google.com/projects/<PROJECT_ID>/repos/<REPOSITORY_ID>/ \
    --runtime nodejs12 \
    --trigger-http \
    --project $project
done
To do anything fancier, check the other answer from wisp.
We have a very simple use case--we want to share code with all of our lambdas and we don't want to use webpack.
We can't put relative paths in our package.json files in the lambda folders because when you do sam build twice, it DELETES the shared code and I have no idea why.
Answer requirements:
Be able to debug locally
Be able to run unit tests on business logic (without having to be ran in an AWS sandbox)
Be able to run tests in sam local start-api
Be able to debug the code in the container via sam local invoke
sam build works
sam deploy works
Runs in AWS Lambda in the cloud
TL;DR
Put your shared code in a layer
When referencing shared code in the Lambda layer, use a ternary operator in the require(). Check an environment variable that is only set when running in the AWS environment. In this case, we added a short AWS variable in the SAM template; you could instead use one of the environment variables that AWS automatically defines, but they will not be as short. This enables you to debug locally outside of the AWS stack, allowing very fast unit tests of the business logic.
let math = require(process.env.AWS ? '/opt/nodejs/common' : '../../layers/layer1/nodejs/common');
let tuc = require(process.env.AWS ? 'temp-units-conv' : '../../layers/layer1/nodejs/node_modules/temp-units-conv');
You shouldn't need to use the ternary operator like that except within the Lambda folder code.
Here's a working example that we thought we'd post so that others will have a much easier time of it than we did.
It is our opinion that AWS should make this much easier.
https://github.com/blmille1/aws-sam-layers-template.git
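For reference, the requirements listed in the question above map onto the standard SAM CLI commands; a rough sketch of the loop we run (the function name and event file are placeholders):
sam build                                                # build the functions and layers defined in template.yaml
sam local invoke MyFunction --event events/event.json    # debug a single function in the local container
sam local start-api                                      # exercise the HTTP routes locally
sam deploy --guided                                      # deploy to AWS (interactive the first time)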
Gotchas
The following gotcha has been avoided in this solution. I am mentioning it because it looked like a straightforward solution, and it took a lot of time before I finally abandoned it.
It is very tempting to add a folder reference to the Lambda function's package.json:
// ...
"dependencies": {
  "common": "file:../../layers/layer1/nodejs/common"
},
// ...
If you do that, it will work for the first sam build. However, the second time you run sam build, your shared code folder and all of its subdirectories will be DELETED. This is because when sam builds, it creates an .aws-sam folder. If that folder already exists, it performs an npm cleanup, and I think that is what triggers the deletion of the shared code.
I am aware of gcloud functions deploy hello --entry-point helloworld --runtime python37 --trigger-http, which would deploy only the hello function.
But I have multiple functions in my project.
Is there a single command to deploy all functions, like Firebase's firebase deploy --only functions -P default?
Right now it is not possible to deploy multiple functions with a single command. There is already an open issue requesting the same, but it is quite old.
Besides tracking the previous issue, you could also file a new issue requesting this feature.
However, I've found two related questions on SO with similar issues, in which the solution was to create a small script to do this:
First is a .sh script
Second is a .py script
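For illustration, a minimal shell loop along those lines might look like this (the function names, runtime, and trigger below are placeholders to adapt):
# deploy every function in the list from the current source directory
for fn in hello goodbye; do
  gcloud functions deploy "$fn" \
    --entry-point "$fn" \
    --runtime python37 \
    --trigger-http
done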
Hope this helps you!
I am creating an Alexa skill and I am using AWS Lambda to handle the intents. I found several tutorials online and decided to use Node.js with the alexa-sdk. After installing the alexa-sdk with npm, the zipped archive takes up ~6 MB of disk space. If I upload it to Amazon, it tells me:
The deployment package of your Lambda function "..." is too large to enable inline code editing. However, you can still invoke your function right now.
My index.js has a size of < 4 KB, but the dependencies are large. If I want to change something, I have to zip everything together (index.js and the node_modules folder with the dependencies), upload it to Amazon, and wait until it is processed, because online editing isn't available anymore. So every single change to index.js wastes more than a minute of my time zipping and uploading. Is there a way to use the alexa-sdk dependency (and other dependencies) without uploading the same code again every time I change something? Is there a way to use the online editing function even though I am using large dependencies? I just want to edit index.js.
If the size of your Lambda function's zipped deployment packages exceeds 3MB, you will not be able to use the inline code editing feature in the Lambda console. You can still use the console to invoke your Lambda function.
It's mentioned here under AWS Lambda Deployment Limits.
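This doesn't get around the inline-editor limit, but one way to shorten the zip-and-upload cycle is to push the package from the command line with the AWS CLI. A rough sketch (the function name is a placeholder, and this assumes the AWS CLI is already configured):
# re-zip the handler plus its dependencies and update the function without going through the console
zip -r function.zip index.js node_modules
aws lambda update-function-code \
  --function-name myAlexaSkillHandler \
  --zip-file fileb://function.zip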
ASK-CLI
The ASK Command Line Interface lets you manage your Alexa skills and the related AWS Lambda functions from your local machine. Once you set it up, you can make the necessary changes to your Lambda code or skill and use the deploy command to deploy the skill. The optional target lets you deploy only the associated Lambda code.
ask deploy [--no-wait] [-t| --target <target>] [--force] [-p| --profile <profile>] [--debug]
More info about the ASK CLI here and more about the deploy command here.
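For example, once the CLI is configured, redeploying only the Lambda code looks roughly like this (target name as documented for ASK CLI v1, so adjust if you are on a newer version):
ask deploy --target lambda   # deploys just the AWS Lambda function associated with the skill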