I was testing, in an Azure VM, a web application that is normally deployed on an AWS Linux VM.
The (Java) application accesses AWS S3 for some of its storage features and lists the objects in an S3 bucket.
When the application runs in the Azure VM, the list comes back empty.
Suspecting connectivity issues, I installed the AWS CLI on the Azure VM, configured keys, and ran:
$ aws s3 ls
This resulted in
Could not connect to the endpoint URL: "https://s3.us-east.amazonaws.com/"
This confirmed my suspicions.
Checking the application's stack trace for what is essentially its "listObjects" request shows:
Request: http://azuredev.gpo.epacube.com/dps/job/listprojects raised com.amazonaws.services.s3.model.AmazonS3Exception: AWS authentication requires a valid Date or x-amz-date header (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: A2494E7A540B5B20), S3 Extended Request ID: 6+Nv1AtCTe0xz3i7Ra5lrmdEdxiIfXgxYapY9KbomblhYL4Q85L3iTLchpQcwRnixyE5El0WKwM=
The exact same code works when run on CentOS on AWS, but fails when run on Ubuntu 13.04 on Azure.
Why might I be getting the invalid date error?
How do I modify the Azure VM setup so that the AWS S3 connections succeed?
Your region is wrong.
https://s3.us-east.amazonaws.com/ is not a valid endpoint. You have probably configured the region as us-east when it should be us-east-1.
I could reproduce the problem by specifying an incorrect region:
This works:
$ aws s3 ls --region us-east-1
This doesn't work:
$ aws s3 ls --region us-east
Could not connect to the endpoint URL: "https://s3.us-east.amazonaws.com/"
For a full list of endpoints, see: Regions and Endpoints
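If the bad value is coming from the CLI configuration rather than from the application, you can correct and verify it in place (this assumes the default profile):
$ aws configure set region us-east-1
$ aws configure get region
us-east-1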
It turned out to be a combination of JDK 8 and an older version of joda-time pulled in by the aws-sdk-java dependency. Upgrading the joda-time dependency to version 2.8.1 fixed this.
Found here
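If your build is Maven-based (an assumption; the question does not say which build tool is used), you can check which joda-time version actually ends up on the classpath before and after pinning it:
$ mvn dependency:tree -Dincludes=joda-time:joda-time
The transitive version pulled in by aws-sdk-java shows up there; declaring joda-time 2.8.1 explicitly in your own build file overrides it.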
Related
I have two VMs. One is located in our lab, the other is an AWS EC2 instance. I ran aws configure and set my access key and ID on both of them.
Any aws command works fine from my lab VM, but not from inside the AWS EC2 instance.
Initially I suspected some misconfiguration in keys and profile precedence, so I ran aws sts get-caller-identity on both VMs; both return the exact same value, so that's not the issue.
I also tried upgrading/downgrading the AWS CLI on both machines, but it did not help. Not sure how to move forward.
[root@XXX-C13-202 ~]# aws sts get-caller-identity
{
"Account": "6766XXXXXXX",
"UserId": "AIDAIDXXXXXXXX",
"Arn": "arn:aws:iam::6766XXXXXXX:user/r*****m"
}
[root@XXX-C13-202 ~]# aws ec2 describe-instances
This Works!!
ubuntu@ip-10-51-23-131:~$ /usr/local/bin/aws sts get-caller-identity
{
"UserId": "AIDAIDXXXXXXXX",
"Account": "6766XXXXXXX",
"Arn": "arn:aws:iam::6766XXXXXXX:user/r*****m"
}
ubuntu@ip-10-51-23-131:~$ /usr/local/bin/aws ec2 describe-instances
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
This fails
AWS cli version in EC2 - aws-cli/2.9.19 Python/3.9.11 Linux/3.13.0-92-generic exe/x86_64.ubuntu.14 prompt/off
I tried different versions of the AWS CLI and verified the credentials are set properly on both VMs.
I also tried the steps suggested in https://aws.amazon.com/premiumsupport/knowledge-center/troubleshoot-aws-cli-commands-ec2/, which covers exactly the issue I was facing, but they did not solve the problem.
My conclusion is that the VPC configuration is restricting access; since the VPC is managed at the corporate level, I could not modify it to verify this hypothesis.
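If you can at least read the network configuration (the same restriction may well block this call too), a sketch like the following would show whether an interface endpoint with a restrictive policy sits in front of the EC2 API in that VPC; the VPC ID is a placeholder:
$ aws ec2 describe-vpc-endpoints \
    --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
    --query 'VpcEndpoints[].[ServiceName,VpcEndpointId,PolicyDocument]'
A deny in that endpoint policy (or in a service control policy) would explain DescribeInstances failing only from inside the VPC while the same credentials work elsewhere.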
We have a Django app deployed on Elastic Beanstalk and added a feature that accesses an Oracle DB. cx_Oracle requires the Oracle client library (Instant Client), and we would like to keep the .zip for the library as a private object in our S3 bucket; a public object is not an option. We also want to avoid depending on an Oracle download link with wget. I am struggling to write a .config file in the .ebextensions directory that will pull the .zip from S3 and install it any time the app is deployed. How can we set up the config to install it on deployment?
OS: Amazon Linux AMI 1
Sure, pulling private files from S3 during deployment is common practice.
The EB instances need IAM permission (via the instance profile) to access your S3 bucket and download files.
The config in .ebextensions can look something like this:
container_commands:
  install:
    command: |
      #!/bin/bash -xe
      aws s3 cp s3://bucket-name/your-file local-filename
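For the Instant Client specifically, the same command block can also unpack the archive and register the libraries. A minimal sketch of the script body, where the bucket name, object key, and instantclient directory are all placeholders to adjust:
#!/bin/bash -xe
# Placeholders: bucket, key, and instantclient version directory
aws s3 cp s3://my-private-bucket/instantclient-basiclite-linux.x64.zip /tmp/instantclient.zip
mkdir -p /opt/oracle
unzip -o /tmp/instantclient.zip -d /opt/oracle
# cx_Oracle needs the client libraries on the dynamic loader path
echo '/opt/oracle/instantclient_19_8' > /etc/ld.so.conf.d/oracle-instantclient.conf
ldconfig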
Just a friendly suggestion: EB is fine to start with, but if your app goes to production you will run into some limitations (for example, you cannot force certain ports to stay closed), and there may be better options for hosting your app (ECS, EKS, etc.).
I'm trying to add an ECR registry in Anchore, which is set up in Kubernetes. I created an anchore-cli pod and tried to execute the command below:
anchore-cli registry add \
    1234567890.dkr.ecr.us-east-1.amazonaws.com \
    awsauto \
    awsauto \
    --registry-type=awsecr
and I got the following output,
Error: 'awsauto' is not enabled in service configuration
HTTP Code: 406
Detail: {'error_codes': []}
I configured an IAM role via the service account, with the AmazonEC2ContainerRegistryReadOnly policy attached. Can someone help me with this?
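One thing worth checking, going by the error text rather than the IAM side: as far as I remember, Anchore Engine only accepts awsauto credentials if the engine's config.yaml has the allow_awsecr_iam_auto option enabled; it is a service-side setting, not an anchore-cli flag, so verify the exact key against your chart's values. Something along these lines (the deployment name and config path are guesses based on a typical Helm layout) would let you inspect it in the cluster:
$ kubectl exec deploy/anchore-anchore-engine-api -- grep -i allow_awsecr_iam_auto /config/config.yaml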
I am trying to execute boto3 functionality from my local machine via Python fabric3 scripts.
Configuration on the local machine:
installed Python 3.5 and fabric3
a script using fabric3 to create an AWS RDS snapshot
SSH auth stored via ssh-add ~/.ssh/ec2.pem
Configuration on the AWS EC2 instance:
created ~/.aws/config and ~/.aws/credentials and stored the required settings:
a. region and output in ~/.aws/config
b. aws_access_key_id and aws_secret_access_key in ~/.aws/credentials
RDS is open to the EC2 instance only.
Observation:
While executing the fabric script from the local machine, it raises botocore.exceptions.NoRegionError: You must specify a region.
If I provide the region name via boto3.client(region_name=''), it then raises
botocore.exceptions.NoCredentialsError: Unable to locate credentials
meaning boto3 does not pick up the ~/.aws/config and ~/.aws/credentials files.
1. Does Python Fabric pick up the credentials and config from ~/.aws? I don't want to pass the credentials via the fabric script.
2. What is the standard way to achieve fabric-based deployment on AWS EC2?
For the time being I pass the required aws_access_key_id and aws_secret_access_key when creating boto3.client(), but the problem remains: why does the boto3 client not pick up the ~/.aws/config and ~/.aws/credentials files when triggered via a Python Fabric script?
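One thing worth checking (an assumption about how the script is run): Fabric executes your Python code, and therefore boto3, on the local machine; only run()/sudo() calls go to the remote host over SSH. So boto3 looks for ~/.aws/config and ~/.aws/credentials on the local machine, not on the EC2 instance. A minimal sketch of setting that up locally, with the region and task name as placeholders:
$ aws configure                        # creates ~/.aws/config and ~/.aws/credentials locally
$ export AWS_DEFAULT_REGION=us-east-1  # or put the region in ~/.aws/config
$ fab create_rds_snapshot              # hypothetical fabric task name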
I'm running a Laravel app with code like this in one of my controller functions:
$s3 = Storage::disk('s3');
$s3->put( $request->file('file')->getClientOriginalName(), file_get_contents($request->file('file')) );
I believe Laravel utilizes Flysystem behind the scenes to connect to s3. When trying to execute this piece of code I get an error like this:
The Laravel docs aren't giving me much insight into how or why this problem is occurring. Any idea what is going on here?
EDIT: After going through a few other Stack Overflow threads:
fopen fails with getaddrinfo failed
file_get_contents(): php_network_getaddresses: getaddrinfo failed: Name or service not known
it seems as if the issue may be more related to my server's DNS. I'm running Ubuntu 14.04 on a Linode instance and use Nginx as my web server.
Your S3 configuration seems to be wrong, as the host it tries to use, s3.us-standard.amazonaws.com, cannot be resolved on my machine either. You should verify that you have configured the right bucket and region.
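To confirm that the problem is the endpoint name itself rather than the server's DNS, you can try resolving both the configured host and a known-good S3 host from that box:
$ getent hosts s3.us-standard.amazonaws.com    # fails: this endpoint does not exist
$ getent hosts s3.amazonaws.com                # should resolve if DNS is healthy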
Check that your S3 API endpoints are correct.
To eliminate permission (role/credential) and related setup errors, try doing a put-object using the AWS CLI s3api from that server:
aws s3api put-object --bucket example-bucket --key dir-1/big-video-file.mp4 --body /path/to/big-video-file.mp4