I am new to the serverless framework. Well, at least to the latest version, which depends heavily on CloudFormation.
I installed the framework globally on my computer using:
npm install -g serverless
I then created a service using:
serverless create --template aws-nodejs --path myService
Finally, I ran:
serverless deploy
Everything seems to deploy normally; no errors show up in the terminal.
I can see the CloudFormation files in a newly created, dedicated S3 bucket.
However, I cannot find the default hello Lambda function in the AWS Lambda console.
What am I missing? Are the CloudFormation files not supposed to create Lambda functions upon deployment?
The default hello Lambda function is not listed in the AWS Lambda console because your function was uploaded to the default region (us-east-1), while the Lambda console is displaying the functions of another region.
To set the correct region for your functions, you can use the region field of the serverless.yml file.
Make sure the region property sits directly under the provider section, indented consistently (two or four spaces), like this:
provider:
  region: eu-west-1
Alternatively, you can specify the region at deployment time, like so:
sls deploy --region eu-west-1
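If you want to verify where the function actually ended up, you can list the functions per region with the AWS CLI, for example:
aws lambda list-functions --region eu-west-1
aws lambda list-functions --region us-east-1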
Duh, I had made a super silly mistake:
I did not properly set the AWS region
So I was looking for the Lambda function in the wrong region: of course it could not be found!
Before deploying, make sure you set the correct region
UPDATE Well actually, I had set the region in serverless.yml by providing:
region: eu-west-1
However, for some reason the default region was not overwritten, and the function was deployed to the wrong region. Odd, that.
Anyway, one easy way around this issue is to provide the region at deployment time:
sls deploy --region eu-west-1
Related
Our clients are already registered in our development environment, and management is asking us to create the production environment without losing any of the already registered user data.
We are trying to deploy the production environment to ap-southeast-2, while our development environment is already in eu-west-1.
I have made the necessary changes for the deployment to happen in these two regions, but the problem is that we are creating the Cognito and S3 resources using a CloudFormation template.
We want to use the same S3 buckets and Cognito resources across these two regions, but when I deploy to ap-southeast-2 (production), the stack creation fails because the S3 bucket already exists.
Is it possible to reuse the same S3 bucket and Cognito resources across regions and stages? I want the Serverless Framework to check whether these resources exist in the region I choose (in this case eu-west-1). We can't create new buckets because we are at the 100-bucket limit!
Here is how we are creating the S3 buckets. We are using the Serverless Framework with Node.js.
Resources:
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          -
            AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
            AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
            MaxAge: 3000

# Print out the name of the bucket that is created
Outputs:
  AttachmentsBucketName:
    Value:
      Ref: AttachmentsBucket
I want the serverless framework to check if these resources exists at the region I choose
This is not how Infrastructure as Code (IaC) works. Neither CloudFormation nor Terraform has any built-in tools to "check" whether a resource exists. The IaC perspective is: if it's in a template, then only the given template/stack can manage it. There is nothing in between, like "it may or may not exist".
Having said that, there are ways to re-architect and work around this. The most common are:
Since the bucket is a common resource, it should be deployed separately from the rest of your stacks, and its name should be passed as an input to the dependent stacks (see the sketch after this list).
Develop a custom resource in the form of a Lambda function. The function would use the AWS SDK to check for the existence of your buckets and return that information to your stack for further use.
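A minimal sketch of the first approach, assuming the bucket is created once in a separate "shared resources" stack (all names below are hypothetical):
# serverless.yml of a dependent service
custom:
  # Name of the bucket created once in the shared stack, passed in here
  # instead of being declared under resources for every stage/region
  attachmentsBucketName: my-shared-attachments-bucket

provider:
  name: aws
  runtime: nodejs12.x
  environment:
    # The function code reads the bucket name from this variable
    ATTACHMENTS_BUCKET: ${self:custom.attachmentsBucketName}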
We have an ElasticSearch domain created in one of our AWS accounts.
We are trying to use AWS cli command to "describe" this domain.
aws es describe-elasticsearch-domain --domain-name <domain-name>
But we receive this error:
An error occurred (ResourceNotFoundException) when calling the
DescribeElasticsearchDomain operation: Domain not found:
We then used the list-domain-names command:
aws es list-domain-names
But received an empty response:
{
    "DomainNames": []
}
We double-checked the account info and credentials in the .aws folder; we are pointing to the correct AWS account and are able to view other resources in that account, except ElasticSearch.
Any help is appreciated.
It's not a permissions issue. It may be a profile issue (the command could be running against another account), but most likely your Elasticsearch cluster is in a different region from the one you set with aws configure.
All you need to do is pass the region to the aws command:
aws es list-domain-names --region DOMAIN_REGION
or
aws es list-domain-names --region us-west-1
The exception says the resource was not found in the default region, i.e. the one specified via aws configure in the AWS CLI.
aws es describe-elasticsearch-domain --domain-name yourdomain --region DOMAIN_REGION
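To check which default region your CLI profile is currently using, you can run the standard AWS CLI command:
aws configure get region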
I'm looking to set up a local Docker instance of AWS Secrets Manager.
I've been scouring the web for an image or anything of the sort that I can use. I can only find documentation for AWS ECS secrets management.
Does anyone have any experience with setting up AWS Secrets Manager for local testing through Docker? Thanks!
Good question!
You could run localstack [1] inside a Docker container. It mocks some of the AWS services for testing purposes. AWS Secrets Manager is supported at http://localhost:4584 by default (newer LocalStack versions expose all services on the single edge port 4566 instead).
There are some useful blog posts covering localstack. [2][3]
However, I could not find any blog post covering AWS Secrets Manager on localstack. I guess you have to try it out yourself.
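For completeness, a minimal way to start LocalStack in Docker (a sketch, assuming current LocalStack defaults; older versions used per-service ports such as 4584 instead of the single edge port 4566):
docker run --rm -it -p 4566:4566 -e SERVICES=secretsmanager localstack/localstack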
References
[1] https://github.com/localstack/localstack
[2] https://medium.com/@andyalky/developing-aws-apps-locally-with-localstack-7f3d64663ce4
[3] https://medium.com/pareture/localstack-for-local-aws-dev-22775e483e3d
You can set up a local AWS Secrets Manager inside LocalStack using the following command:
aws --endpoint-url=http://localhost:4566 secretsmanager create-secret --name my_secret --secret-string '[{"my_uname":"username","my_pwd":"password"}]'
Output:
{
    "ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:my_secret-denusf",
    "Name": "my_secret",
    "VersionId": "e168cdf1-5c94-493d-bafd-791779a7515d"
}
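You can then read the secret back through the same endpoint to verify it was stored (a standard AWS CLI call):
aws --endpoint-url=http://localhost:4566 secretsmanager get-secret-value --secret-id my_secret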
Do Lambda functions have a repository? If yes, how can I check the repository details? If not, how is the code deployed without a repository?
There's no "hidden" or "shared" repository that can be used between different Lambda functions; every file your function needs must be included in the zip file you upload.
Read a more detailed explanation here.
Lambda functions do not have repositories. You can create a deployment package yourself or write your code directly in the Lambda console, in which case the console creates the deployment package for you and uploads it, creating your Lambda function.
If your custom code requires only the AWS SDK library, then you can use the inline editor in the AWS Lambda console. Using the console, you can edit and upload your code to AWS Lambda. The console will zip up your code with the relevant configuration information into a deployment package that the Lambda service can run.
You can also test your code in the console by manually invoking it using sample event data.
In the case of the deployment package, you may either upload it directly or first upload the .zip file to an Amazon S3 bucket in the same AWS region where you want to create the Lambda function, and then specify the bucket name and object key name when you create the function using the console or the AWS CLI.
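For illustration, a typical packaging flow with the AWS CLI might look like this (a sketch only; the function name, handler, and role ARN are hypothetical):
# Zip the handler and its dependencies into a deployment package
zip -r function.zip index.js node_modules

# Create the function from the package (the role ARN is a placeholder)
aws lambda create-function \
  --function-name my-function \
  --runtime nodejs12.x \
  --handler index.handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/my-lambda-execution-role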
I have a local git repo and I am trying to do continuous integration and deployment using Codeship. https://documentation.codeship.com
I have GitHub hooked up to the continuous integration and it seems to work fine.
I have an AWS account and a bucket there, with my access keys and permissions in place.
When I run the deploy script, it fails with an error saying the AWS credentials cannot be located.
How can I fix the error?
I had this very issue when using aws-cli and relying on the following files to hold AWS credentials and config for the default profile:
~/.aws/credentials
~/.aws/config
I suspect there is an issue with this technique, as reported in the GitHub issue "Unable to locate credentials".
I ended up using codeship project's Environment Variables for the following:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Now, this is not ideal. However, my AWS IAM user has very limited access, scoped to the specific task of uploading to the bucket being used for the deployment.
Alternatively, depending on your needs, you could also check out the Codeship Pro platform; it allows you to have an encrypted file with environment variables that are decrypted at runtime, during your build.
On both the Basic and Pro platforms, if you want or need to use credentials in a file, you can store the credentials in environment variables (as suggested by Nigel) and then echo them into the file as part of your test setup.
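A minimal sketch of that setup step, assuming the two environment variables above are set in the Codeship project:
# Recreate the default AWS credentials file from environment variables
mkdir -p ~/.aws
cat > ~/.aws/credentials <<EOF
[default]
aws_access_key_id = ${AWS_ACCESS_KEY_ID}
aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}
EOF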