An error occurred (MissingAuthenticationTokenException) when calling the UpdateFunctionCode operation Lambda AWS - node.js

I have a function in my Lambda named my-s3-function. I need to add this dependency to my Lambda Node.js function. I have followed this part to update the script with the dependency included (though I didn't follow the step where I need to zip the folder using zip -r function.zip .; instead, I zipped the folder by right-clicking it on my PC).
The zip file's structured like this inside:
|node_modules
|<folders>
|<folders>
|<folders>
... // the list goes on
|index.js
|package-lock.json
Upon running the command aws lambda update-function-code --function-name my-s3-function --zip-file fileb://function.zip in the terminal, I get the following response:
An error occurred (MissingAuthenticationTokenException) when calling the UpdateFunctionCode operation: Missing Authentication Token
What should I do to resolve this?

Based on the comments, this was resolved by configuring the credentials as described in the documentation.
First try exporting the credentials as described in Environment variables to configure the AWS CLI. Once you are sure your credentials are correct, you can follow the Configuration and credential file settings documentation.
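For example, exporting them in a bash-style shell and retrying the call (a minimal sketch using AWS's documented example keys - substitute your own values and region):

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1
aws lambda update-function-code --function-name my-s3-function --zip-file fileb://function.zip

If the call succeeds with the variables exported, the problem was the credential configuration rather than the zip file.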

Related

aws CLI: get-job-output erroring with either [Errno 9] Bad file descriptor or [Errno 2] No such file or directory

I'm having some problems with retrieving job output from an AWS glacier vault.
I initiated a job (aws glacier initiate-job), the job is indicated as complete via aws glacier, and I then tried to retrieve the job output:
aws glacier get-job-output --account-id - --vault-name <myvaultname> --job-id <jobid> output.json
However, I receive an error: [Errno 2] No such file or directory: 'output.json'
Thinking that perhaps the file needed to be created first, I created it beforehand (which really doesn't make sense), but then received the [Errno 9] Bad file descriptor error instead.
I'm currently using the following version of the AWS CLI:
aws-cli/2.4.10 Python/3.8.8 Windows/10 exe/AMD64 prompt/off
I tried using the aws CLI from both an Administrative and non-Administrative command prompt with the same result. Any ideas on making this work?
From a related reported issue, you can try running this command in a DOS window:
copy "c:\Program Files\Amazon\AWSCLI\botocore\vendored\requests\cacert.pem" "c:\Program Files\Amazon\AWSCLI\certifi"
It seems to be a certificate error.
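With the certificate in place, retrying the original command should write the job output to output.json (same placeholders as in the question):

aws glacier get-job-output --account-id - --vault-name <myvaultname> --job-id <jobid> output.json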

Amplify Init Error - ✖ Root stack creation failed init failed TypeError: Cannot redefine property: default

Using amplify init, right after choosing which profile to use, I get this error and am not sure why:
✖ Root stack creation failed
init failed
TypeError: Cannot redefine property: default
I tried making a different user the default in my credentials file and then picking the default profile in the amplify init step - same error.
I tried saying I didn't want to use a profile and instead putting in my access key id and secret key in manually, also didn't work.
Found the solution here in a GitHub issue!
Relevant quote - "I found the source of my problem... My ~/.aws/config file contained entries called [default] and [profile default], which causes the symptom."
So I removed the [default] and just left my [profile default] and then the amplify init went through normally!
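In other words, a ~/.aws/config that looks like the first sketch below triggers the error (the region value is just an example):

[default]
region = us-east-1

[profile default]
region = us-east-1

and after removing the duplicate it becomes simply:

[profile default]
region = us-east-1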
Amplify expects you to have an existing user with AdministratorAccess. This should be confirmed before running amplify init; when you run amplify init you will be prompted about whether to use the default AWS profile or not. In that case, you might have to create the user yourself, attach a policy to the user, and paste both the access and secret keys into the respective prompts in the console. But if you follow the steps to create a user with amplify configure, it is very easy.
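For reference, the guided flow is just (prompts paraphrased from memory, so treat them as approximate):

amplify configure

which opens the AWS console so you can sign in, asks for a region and a username, creates that IAM user with administrator access, and then prompts you to paste the accessKeyId and secretAccessKey and to pick a profile name to store them under.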

gcloud functions call giving an error for a background function

I am testing a Node.js app using mocha and assert.
I got the sample code from this link.
I deployed the helloBackground function both locally and to gcloud successfully,
then tried to execute the mocha test case.
I also tried all the ways to call gcloud functions described here.
Then I executed the command below in CMD:
functions call helloBackground --data '{\"name\": \"John\"}'
This should return "Hello John!" in the command prompt,
but instead I receive this error:
Error: TypeError: Cannot read property 'name' of undefined
Please let me know how to pass proper data in CMD to test.
Thank you in advance.
Looking at your error message, it is most likely caused by the trigger argument you used when you deployed the app. The helloBackground function is a Background Function, and instead of --trigger-http, you should use a background function trigger.
For example: $ gcloud functions deploy helloBackground --runtime nodejs6 --trigger-resource your_bucket_name --trigger-event google.storage.object.finalize
You would need to create an empty .txt file in the same directory as your app and upload it to Cloud Storage:
$ gsutil cp test.txt gs://[the name of your cloud storage bucket]
And you can run the app again.
You will find more explanation of the types of functions here,
and you can follow this well-documented tutorial on Cloud Storage here.
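For reference, a background function receives (data, context) rather than the (req, res) of an HTTP function; the linked sample looks roughly like this (reproduced from memory, so treat it as a sketch - older nodejs6 samples used an (event, callback) signature instead):

exports.helloBackground = (data, context) => {
  // `data` is the event payload; for a storage trigger it describes the
  // uploaded object, so data.name would be the file name.
  return `Hello ${data.name}!`;
};

Invoking it through the wrong trigger leaves data undefined, which is exactly what produces the "Cannot read property 'name' of undefined" error above.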

Excluding part of a npm package from a claudia.js build

I'm currently using claudia.js to deploy serverless functions to AWS Lambda. However, due to size limitations I run into the following error:
RequestEntityTooLargeException: Request must be smaller than 69905067 bytes for the CreateFunction operation
To resolve this, I'm trying to exclude a subfolder of an npm package, as it's not needed, but I'm unsure how to do this during the claudia build process.
Specifically, I'd want to exclude an example subfolder > node_modules/packet/subfolder/*
I've messed around with various configurations of .gitignore and .npmignore but with little success. Any help would be amazing!
Instead of doing that, you can simply use the --use-s3-bucket option with Claudia.js, and the 50 MB limit will be increased to 250 MB (uncompressed).
Try running the following command:
claudia update --use-s3-bucket BUCKET_NAME --region YOUR_REGION
where BUCKET_NAME is the name of a deployment helper bucket in the same region (YOUR_REGION).
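If the bucket doesn't exist yet, you can create it first with the AWS CLI (BUCKET_NAME and YOUR_REGION are the same placeholders as above):

aws s3 mb s3://BUCKET_NAME --region YOUR_REGION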

Is there a way to avoid storing the AWS_SECRET_KEY on the .ebextensions?

I'm deploying a Django based project on AWS Elastic Beanstalk.
I have been following the Amazon example, where I add my credentials (ACCESS_KEY/SECRET) to my app.config under the .ebextensions directory.
The same config file has:
container_commands:
  01_syncdb:
    command: "django-admin.py migrate --noinput"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
    leader_only: true
The problem is that this forces me to store my credentials under version control, and I would like to avoid that.
I tried removing the credentials and then adding them with eb setenv, but the problem is that the two Django commands require these settings to be set in the environment.
I'm using the v3 cli:
eb create -db -c foo bar --profile foobar
where foobar is the name of the profile under ~/.aws/credentials, and where I want to keep my secret credentials.
What are the best security practices for AWS credentials when using EB?
One solution is to keep the AWS credentials, but create a policy that ONLY allows them to POST objects on the one bucket used for /static.
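Such a policy could look roughly like this (a sketch; my-static-bucket stands in for your actual static bucket name):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-static-bucket/*"
    }
  ]
}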
I ended up removing the collectstatic step from the config file and simply taking care of uploading static files on the build side.
After that, all credentials can be removed and all other boto commands will grab the credentials from the security role on the EC2 instance.
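For example, granting the instance role read access to S3 could be as simple as attaching a managed policy (a sketch assuming the default Elastic Beanstalk instance profile role name, aws-elasticbeanstalk-ec2-role):

aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess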
