Elastic Beanstalk SQSD Error on worker start - node.js

I've deployed a node.js worker. However, whenever I try to start it, it turns red and this error is shown:
ERROR Instance: i-6eef007a Module: AWSEBAutoScalingGroup ConfigSet: null Command failed on instance. Return code: 1 Output: Error occurred during build: Command 01-start-sqsd failed .
I don't know if it's related, but sometimes I get this error on the screen:
IamInstanceProfile: The environment does not have an IAM instance profile associated with it. To improve deployment speed please associate an IAM instance profile with the environment.
I've already granted permissions for SQS and set the key and secret. I don't know what else to do.
Log attached.
Thank you very much.

You need to have an IAM role with the appropriate permissions to create an Elastic Beanstalk worker environment.
The IAM role should have the following permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "QueueAccess",
      "Action": [
        "sqs:ChangeMessageVisibility",
        "sqs:DeleteMessage",
        "sqs:ReceiveMessage"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "MetricsAccess",
      "Action": [
        "cloudwatch:PutMetricData"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Detailed documentation: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.iam.roles.aeb.html#AWSHowTo.iam.policies.actions.worker
For debugging, you can SSH into the instance and look at /var/log/aws-sqsd/default.log to see the logs. If you want to avoid SSHing into the instance, you can also snapshot logs from the AWS Console as shown here.
You can read more about worker role environments here.
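As for the IamInstanceProfile message: it means no instance profile is associated with the environment at all. Assuming your role uses the default name aws-elasticbeanstalk-ec2-role (adjust the names to your own), a sketch of wrapping the role in an instance profile, which you can then associate with the environment in its configuration:
aws iam create-instance-profile --instance-profile-name aws-elasticbeanstalk-ec2-role
aws iam add-role-to-instance-profile \
  --instance-profile-name aws-elasticbeanstalk-ec2-role \
  --role-name aws-elasticbeanstalk-ec2-role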

Related

Access denied CopyObjectCommand nodejs

To rename a file in a bucket, I copy the file to the new name and delete the old one. But while migrating from the old aws-sdk to the new S3 client, I started getting an access denied error on the copy object command. I have triple-checked the permissions on the account accessing the objects and nothing seems wrong to me. I have tried applying all the permissions, but sadly with the same results. My permissions look like this:
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion",
"s3:GetObjectTagging",
"s3:PutObjectVersionTagging",
"s3:ListBucket",
"s3:PutObjectTagging",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::bucket/*",
"arn:aws:s3:::bucket"
]
The parameters I give to the command look like this:
{
  "Bucket": "bucket",
  "CopySource": "pictures/1014/2.png",
  "Key": "pictures/1014/1.png"
}
And the output of the command is a 403 AccessDenied. The same S3 client gets used to do normal puts and gets on the same bucket, no problem there. Thanks for helping.
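One thing worth double-checking: per the CopyObject API, CopySource is expected to be "source-bucket/source-key", not the key alone; with a bare key, S3 treats the first path segment (pictures) as the bucket name, which can come back as a 403. A minimal sketch of the copy-then-delete rename with the v3 client, with the region assumed and the bucket/keys taken from the question:
import { S3Client, CopyObjectCommand, DeleteObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "eu-west-1" }); // region is a placeholder

// Rename = copy to the new key, then delete the old key.
async function renameObject(bucket: string, oldKey: string, newKey: string): Promise<void> {
  await s3.send(new CopyObjectCommand({
    Bucket: bucket,                    // destination bucket
    CopySource: `${bucket}/${oldKey}`, // source bucket AND key, not the key alone
    Key: newKey,
  }));
  await s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: oldKey }));
}

renameObject("bucket", "pictures/1014/2.png", "pictures/1014/1.png").catch(console.error);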

How do I access npm log files in GKE?

I'm running different nodejs microservices on Google Kubernetes Services.
Sometimes these services crash and according to Cloud Logging, I can find detailed information in a logging file. For example, the logging message says
{
  "textPayload": "npm ERR! /root/.npm/_logs/2021-10-27T11_26_28_534Z-debug.log\n",
  "insertId": "zoqxk8wvkuofhslm",
  "resource": {
    "type": "k8s_container",
    "labels": {
      "pod_name": "client-depl-7f679c6b49-5d9tz",
      "container_name": "client",
      "namespace_name": "production",
      "cluster_name": "cluster-1",
      "location": "europe-west3-a",
      "project_id": "XXX"
    }
  },
  "timestamp": "2021-10-27T11:26:28.701252670Z",
  "severity": "ERROR",
  "labels": {
    "k8s-pod/app": "client",
    "k8s-pod/skaffold_dev/run-id": "b5518659-05d6-4c08-9b55-9d58fdd5807f",
    "k8s-pod/pod-template-hash": "7f679c6b49",
    "compute.googleapis.com/resource_name": "gke-cluster-1-pool-1-8bfc60b2-ag86",
    "k8s-pod/app_kubernetes_io/managed-by": "skaffold"
  },
  "logName": "projects/xxx-productive/logs/stderr",
  "receiveTimestamp": "xxx"
}
Where do I find these logs on Google Cloud Platform?
---------------- Edit 2021.10.28 ---------------------------
I should clarify that I am already using the logs explorer. This is what I see there:
The logs show 7 consecutive error entries about npm failing. The last two entries indicate that there is more information in a log file, "/root/.npm/_logs/2021-10-27T11_26_28_534Z-debug.log".
Does this log file have more info about the failure, or is all the info already in these 7 error log entries?
Thanks
kubectl logs <your_pod>
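If the container already crashed and was restarted, the previous container's output (which includes those npm ERR! lines) can still be pulled with the --previous flag:
kubectl logs <your_pod> --previous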
You can use GCP Logs Explorer
Assuming you have already enabled Logging and Monitoring, you can view logs as follows:
a. Go to the Logs explorer in the Cloud Console.
b. Click Resource. Under ALL_RESOURCE_TYPES, select Kubernetes Container.
c. Under CLUSTER_NAME, select the name of your user cluster.
d. Under NAMESPACE_NAME, select default.
e. Click Add and then click Run Query.
f. Under Query results, you can see log entries from the monitoring-example Deployment. For example:
{
  "textPayload": "2020/11/14 01:24:24 Starting to listen on :9090\n",
  "insertId": "1oa4vhg3qfxidt",
  "resource": {
    "type": "k8s_container",
    "labels": {
      "pod_name": "monitoring-example-7685d96496-xqfsf",
      "cluster_name": ...,
      "namespace_name": "default",
      "project_id": ...,
      "location": "us-west1",
      "container_name": "prometheus-example-exporter"
    }
  },
  "timestamp": "2020-11-14T01:24:24.358600252Z",
  "labels": {
    "k8s-pod/pod-template-hash": "7685d96496",
    "k8s-pod/app": "monitoring-example"
  },
  "logName": "projects/.../logs/stdout",
  "receiveTimestamp": "2020-11-14T01:24:39.562864735Z"
}
How about logging into the pod while it is alive:
kubectl exec -it your-pod -- sh
Then wait for it to crash and watch the crash file in real time, while the pod has not been restarted yet :)
How to log in to a GCP Pod:
From the Google Cloud Platform main menu go to Kubernetes Engine -> Workloads
Click on the workload you're interested in:
Find the Managed Pods section and click on the Pod you want to access:
Click on KUBECTL -> Exec -> [name of workload/namespace]
A terminal should appear at the bottom of the browser page, giving you a shell inside the pod. You can look around for your log file from in there.
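From that shell you can read the file directly (for example, cat /root/.npm/_logs/<timestamp>-debug.log). While the container is still alive you can also copy the file out from your own machine; a sketch using the pod and namespace from the log entry in the question (kubectl cp needs tar available inside the container):
kubectl cp production/client-depl-7f679c6b49-5d9tz:/root/.npm/_logs/2021-10-27T11_26_28_534Z-debug.log ./npm-debug.log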

Access denied when using aws cli but allowed in web console

My IAM account has "admin" privileges, at least supposedly: as far as I can tell, I can perform all operations in the web console.
Recently I downloaded the aws-cli and quickly configured it by supplying access keys, a default region, and an output format. I then tried to issue some commands and found that most of them, but not all, hit permission issues. For example:
$ aws --version
aws-cli/1.16.243 Python/3.7.4 Windows/10 botocore/1.12.233
$ aws s3 ls s3://test-bucket
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
$ aws ec2 describe-instances
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
$ aws iam get-user
{
  "User": {
    "Path": "/",
    "UserName": "xxx#xxx.xxx",
    "UserId": "xxxxx",
    "Arn": "arn:aws:iam::nnnnnnnnnn:user/xxx#xxx.xxx",
    "CreateDate": "2019-08-21T17:09:25Z",
    "PasswordLastUsed": "2019-09-21T16:11:34Z"
  }
}
It appears to me that the CLI, which is authenticated using an access key, has a different permission set from the web console, which is authenticated using MFA.
Why are permissions inconsistent between the CLI and the GUI? How can I make them consistent?
It turns out the following statement in one of my policies blocked CLI access because no MFA was present.
{
  "Condition": {
    "BoolIfExists": {
      "aws:MultiFactorAuthPresent": "false"
    }
  },
  "Resource": "*",
  "Effect": "Deny",
  "NotAction": [
    "iam:CreateVirtualMFADevice",
    "iam:EnableMFADevice",
    "iam:GetUser",
    "iam:ListMFADevices",
    "iam:ListVirtualMFADevices",
    "iam:ResyncMFADevice",
    "sts:GetSessionToken"
  ],
  "Sid": "DenyAllExceptListedIfNoMFA"
}
If you replace BoolIfExists with Bool, it should work: your CLI requests would then not be denied for not using MFA.
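Concretely, the deny statement's condition becomes:
"Condition": {
  "Bool": {
    "aws:MultiFactorAuthPresent": "false"
  }
}
With plain Bool, a request whose context carries no MFA key at all (such as a CLI call with long-term access keys) does not match the condition, so the Deny no longer applies to it.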
Opposite of https://aws.amazon.com/premiumsupport/knowledge-center/mfa-iam-user-aws-cli/
To remain really secure, check this good explanation: MFA token for AWS CLI
In a few steps:
Get a temporary session token (valid for up to 36 hours).
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user --token-code code-from-token
{
  "Credentials": {
    "SecretAccessKey": "secret-access-key",
    "SessionToken": "temporary-session-token",
    "Expiration": "expiration-date-time",
    "AccessKeyId": "access-key-id"
  }
}
Save these values in an mfa profile configuration (e.g. in ~/.aws/credentials):
[mfa]
aws_access_key_id = example-access-key-as-in-returned-output
aws_secret_access_key = example-secret-access-key-as-in-returned-output
aws_session_token = example-session-Token-as-in-returned-output
Call the CLI with that profile, for example:
aws s3 ls --profile mfa
PS: Don't set up the cron job as suggested there; it goes against the security.
I had this same issue and I fixed it by adding my user to a new group with administrator access in IAM.
To do this, go to IAM > Users, click on your user and then [Add permissions].
In the next screen, click [Create group] and then pick administrator access.

AWS Codebuild fails while downloading source. Message: Access Denied

I created a CodeBuild project that uses a Docker image for Node 8. The purpose of this CodeBuild project is to do unit testing. It takes an input artifact from CodeCommit, and in the buildspec.yml it runs a test command.
This is my (simple) buildspec file:
version: 0.2
phases:
  install:
    commands:
      - echo "install phase started"
      - npm install
      - echo "install phase ended"
  pre_build:
    commands:
      - echo "pre_build aka test phase started"
      - echo "mocha unit test"
      - npm test
      - echo "mocha unit test ended"
  build:
    commands:
      - echo "build phase started"
      - echo "build complete"
The build is failing at the DOWNLOAD_SOURCE phase with the following:
PHASE - DOWNLOAD_SOURCE
Start time 2 minutes ago
End time 2 minutes ago
Message Access Denied
The only entries in the build logs are the following:
[Container] 2018/01/12 11:30:22 Waiting for agent ping
[Container] 2018/01/12 11:30:22 Waiting for DOWNLOAD_SOURCE
Thanks in advance.
Screenshot of the CodeBuild policies.
I found a fix. It was a problem with my permissions. I added this to make it work.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": [
        "arn:aws:logs:eu-west-1:723698621383:log-group:/aws/codebuild/project",
        "arn:aws:logs:eu-west-1:723698621383:log-group:/aws/codebuild/project:*"
      ],
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ]
    },
    {
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::codepipeline-eu-west-1-*"
      ],
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameters"
      ],
      "Resource": "arn:aws:ssm:eu-west-1:723698621383:parameter/CodeBuild/*"
    }
  ]
}
I had the same error: a permissions issue accessing the S3 bucket URL. Originally I used the auto-generated codepipeline-us-west-2-* bucket name with this policy:
{
  "Effect": "Allow",
  "Resource": [
    "arn:aws:s3:::codepipeline-us-west-2-*"
  ],
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:GetObjectVersion",
    "s3:GetBucketAcl",
    "s3:GetBucketLocation"
  ]
}
After changing to my own bucket name, the policy had to be updated to:
{
  "Effect": "Allow",
  "Resource": [
    "arn:aws:s3:::project-name-files/*"
  ],
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:GetObjectVersion",
    "s3:GetBucketAcl",
    "s3:GetBucketLocation"
  ]
}
I had a similar error and will post my fix in case it helps anyone else. I was using CodePipeline and had two separate builds happening. Build #1 would complete its build, and its output artifact was to be the input artifact for Build #2. Build #2 was failing in the DOWNLOAD_SOURCE phase with the following error:
AccessDenied: Access Denied status code: 403
The problem was that in my buildspec for Build #1, I didn't have the artifacts defined. After calling out the artifact files/folders in Build #1, Build #2 was able to download the source without issue.
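For example, Build #1's buildspec needed an artifacts section along these lines (the glob below is a placeholder; list whatever files Build #2 actually consumes):
artifacts:
  files:
    - '**/*'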
I was experiencing the same symptoms but my issue was due to the default encryption on the S3 bucket as described in this post.
So everything in S3 is encrypted at rest. When you don't specify how you want objects encrypted, S3 encrypts them with the default KMS key, and other accounts won't be able to access objects in the bucket because they don't have that KMS key for decryption. To get around this, create your own KMS key and use it to encrypt (in this case, let CodeBuild use the KMS key you created), then allow roles in other accounts to use this key by configuring AssumeRole permissions. From what I have seen, most S3 access denials happen because the caller cannot decrypt the objects, and this is specified in Troubleshoot S3 403 Access Denied: encrypted objects will also cause a 403 Access Denied.
In my case, the keys being used were mismatched, which was causing the decryption failure.
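As a sketch of the permission side, the role doing the download needs to be allowed to use that key; something like the following statement in its policy (the region, account ID, and key ID are placeholders), alongside a key policy that admits the role:
{
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:DescribeKey"
  ],
  "Resource": "arn:aws:kms:us-west-2:111122223333:key/your-key-id"
}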
I faced the same issue.
My source was an S3 folder. The fix involved putting a / at the end of the source path; it seems that without the /, CodeBuild treats the path as an object key.
Hope this helps someone save time.
In my case I fixed the issue this way: when creating the build project configuration, there is a step where you have to provide a service role and role name. There are two options for that step: 1) create a new one, or 2) choose an existing one. I created a new one. After that I faced the issue the author described. After some research I added these policies to that role in the IAM module and the issue went away.
AWSCodeDeployRoleForECS (AWS managed policy)
AWSCodeDeployRole (AWS managed policy)
AWSCodeDeployRoleForCloudFormation (AWS managed policy)
AWSCloudFormationFullAccess (AWS managed policy)
AWSCodeDeployRoleForLambda (AWS managed policy)

EC2 instance role gets 'Unknown' error when attempting aws s3 cp KMS encrypted file

I've got an ASG that assigns an IAM Role to each of the instances that join it. Therefore, each instance has the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables baked in, which are used upon instantiation to download and decrypt credentials that are stored in an S3 bucket and encrypted using KMS keys.
So I'll have the following components:
An S3 bucket called top-secret.myapp.com
All objects in this bucket are encrypted using a KMS key called My-KMS-Key
An IAM instance role with inline policies attached granting it the ability to interact with both the bucket and the KMS key used to encrypt/decrypt the contents of the bucket (see below)
A user data script that installs the aws-cli upon instantiation and then goes about attempting to download and decrypt an object from the top-secret.myapp.com bucket.
The User Data Script
Upon instantiation, any given instance runs the following script:
#!/bin/bash
apt-get update
apt-get -y install python-pip
apt-get -y install awscli
cd /home/ubuntu
# Download the encrypted secrets file from S3
aws s3 cp s3://top-secret.myapp.com/secrets.sh . --region us-east-1
chmod +x secrets.sh
# Source it to load the secrets into the environment
. secrets.sh
# Securely delete the file afterwards
shred -u -z -n 27 secrets.sh
IAM Role Policies
The IAM role for my ASG instances has three policies attached inline:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::top-secret.myapp.com"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": [
        "arn:aws:s3:::top-secret.myapp.com/secrets.sh"
      ]
    }
  ]
}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:*"
      ],
      "Resource": "arn:aws:kms:us-east-1:UUID-OF-MY-SECRET-KEY-HERE"
    }
  ]
}
The first policy is essentially a full-root-access policy with no restrictions. Or so I thought, but it doesn't work. So I figured I might need to explicitly apply policies that allow interaction with S3 encryption and/or KMS, which makes sense.
So I added the second policy that allows the IAM instance role to list the top-secret.myapp.com bucket, and LIST and GET the secrets.sh object within the bucket. But this produced the error illustrated below.
The (Unknown) Error I'm Getting
download failed: s3://top-secret.myapp.com/secrets.sh to ./secrets.sh
A client error (Unknown) occurred when calling the GetObject operation: Unknown
Anyone have any idea what could be causing this error?
Note: This method for transferring encrypted secrets from S3 and decrypting them on-instance works fine when using the standard Amazon S3 service master key.
For me, the issue was two-fold:
If you're using server-side encryption via KMS, you need to supply the --sse aws:kms flag to the aws s3 cp [...] command.
I was installing an out-of-date version of the awscli (version 1.2.9) via apt, and that version didn't recognize the --sse aws:kms flag.
Running apt-get remove awscli and installing via pip install awscli gave me version 1.10.51, which worked.
EDIT:
If you're using a different KMS key than the default master key for your account, you will need to also add the following flag:
--sse-kms-key-id [YOUR KMS KEY ID]
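Putting both flags together, a download like the one in the question's user data script would look something like:
aws s3 cp s3://top-secret.myapp.com/secrets.sh . --region us-east-1 --sse aws:kms --sse-kms-key-id [YOUR KMS KEY ID]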
