AWS S3 file upload failing via Laravel - Linux

I'm running a Laravel app with a code like this in one of my controller functions:
$s3 = Storage::disk('s3');
$s3->put($request->file('file')->getClientOriginalName(), file_get_contents($request->file('file')));
I believe Laravel utilizes Flysystem behind the scenes to connect to s3. When trying to execute this piece of code I get an error like this:
The Laravel docs aren't giving me much insight into how or why this problem is occurring. Any idea what is going on here?
EDIT: After going through a few other Stack Overflow threads:
fopen fails with getaddrinfo failed
file_get_contents(): php_network_getaddresses: getaddrinfo failed: Name or service not known
it seems as if the issue may be more related to my server's DNS? I'm on Ubuntu 14.04 on a Linode instance and use Nginx as my web server.

Your S3 configuration seems to be wrong: the host it tries to use, s3.us-standard.amazonaws.com, cannot be resolved on my machine either (us-standard is not a valid region code). You should verify that you have configured the right bucket and region.
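As a quick sanity check for the DNS theory, you can try resolving both the generic S3 endpoint and a real regional one directly on the Linode box (the region below is only an example):
# does the generic S3 endpoint resolve? (rules out a broken resolver on the server)
getent hosts s3.amazonaws.com
# a real regional endpoint for comparison; s3.us-standard.amazonaws.com is not one
getent hosts s3.eu-west-1.amazonaws.com
# see which nameservers the server is actually using
cat /etc/resolv.conf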

Check that your S3 API endpoints are correct.
To rule out permission (role/credential) and related setup errors, try doing a put-object using the AWS CLI's s3api from that server, with a local file as the body:
aws s3api put-object --bucket example-bucket --key dir-1/big-video-file.mp4 --body ./big-video-file.mp4

Related

Is there a way to access a private .zip S3 object with a Django app's .ebextensions config file deployed on Elastic Beanstalk

We have a Django app deployed on Elastic Beanstalk and added a feature that accesses an Oracle DB. cx_Oracle requires the Oracle client library (Instant Client), and we would like to have the .zip for the library available as a private object in our S3 bucket; a public object is not an option. We also want to avoid depending on an Oracle download link with wget. I am struggling to write a .config file in the .ebextensions directory that will install the .zip from S3 every time the app is deployed. How can I set up the config to install on deployment?
os: Amazon Linux AMI 1
Sure, that is a common practice for getting your private files from S3.
You need IAM permission (on the EB instances) to access your S3 bucket and download files.
The config in .ebextensions can look something like this:
container_commands:
  install:
    command: |
      #!/bin/bash -xe
      aws s3 cp s3://bucket-name/your-file local-filename
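If the goal is to actually install the Instant Client on every deployment (not just download the zip), the shell block under command: | could be extended roughly like this; the bucket name, object key, and extracted directory name are illustrative assumptions, not the asker's real values:
#!/bin/bash -xe
aws s3 cp s3://your-bucket/instantclient-basiclite-linux.x64.zip /tmp/instantclient.zip
mkdir -p /opt/oracle
unzip -o /tmp/instantclient.zip -d /opt/oracle
# the extracted folder name depends on the zip you uploaded
echo /opt/oracle/instantclient_19_8 > /etc/ld.so.conf.d/oracle-instantclient.conf
ldconfig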
Just as a friendly suggestion: EB is fine to start with, but if your app goes to production you will run into some limitations (for example, you cannot force certain ports to stay closed), and there may be better options for hosting your app (ECS, EKS, etc.).

AWS ElasticBeanstalk Amazon Linux 2 .platform folder not copying NGINX conf

I've been moving over to ElasticBeanstalk using Amazon Linux 2 and I'm having a problem overwriting the default nginx.conf file. I'm following the AL2 docs for the reverse proxy.
They say, "To override the Elastic Beanstalk default nginx configuration completely, include a configuration in your source bundle at .platform/nginx/nginx.conf:"
My app's folder structure
When I run my deploy though, I get the error
CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment: Elastic Beanstalk ignored your '.ebextensions/nginx' configuration directory. To include these configuration files, move them to '.platform/nginx'.","timestamp":1598554657,"severity":"WARN"},{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1598554682,"severity":"ERROR"}]}]}
The main part of the error is
"Elastic Beanstalk ignored your '.ebextensions/nginx' configuration directory. To include these configuration files, move them to '.platform/nginx'.""
Which I'm confused about because this is where I've put the file/folder.
I've tried completely removing the .ebextensions folder and got the same error.
I've tried starting from a completely fresh Beanstalk environment and still got that error. I don't understand how Beanstalk is handling this.
Based on the comments: the issue was caused by the nginx config file existing in duplicate locations. This happened because the default nginx path was deleted from .ebextensions while EB kept re-creating it.
Since this looks like a bug, an AWS support ticket was created.
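For reference, a rough way to check what actually landed on the instance and whether anything nginx-related is still sitting under .ebextensions (paths are the Amazon Linux 2 defaults):
# after connecting with eb ssh, inspect the deployed source bundle
sudo ls -R /var/app/current/.platform/nginx
sudo ls -R /var/app/current/.ebextensions 2>/dev/null
# eb-engine.log records which config files were picked up or ignored
sudo less /var/log/eb-engine.log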

How to deploy shield with Kibana on Bluemix

I am trying to deploy Kibana on Bluemix PaaS. Because Kibana is a Node.js application, it can be deployed as such on Bluemix. All I have to do is:
Provide a simple manifest.yml file that details the app name and a couple of other things
Provide a Procfile that has just one line as web: bin/kibana --port=$PORT
Thus, I can run Kibana on Bluemix. Note that this is pushed via Cloud Foundry.
Also, I was able to install the marvel and sense plugins for Kibana.
Now, I installed the shield plugin. This plugin requires an ssl key and an ssl cert file to run. The path to these files must be provided in the kibana.yml file.
After installation, I tested the shield plugin natively and it worked just fine.
Here is the layout of the directory structure:
bin(d)
config(d)
installedPlugins(d)
node_modules(d)
sslFiles(d)
manifest.yml
Procfile
(d) represents directories. The sslFiles folder contains the ssl key and ssl cert files.
Before I could push to Bluemix, I knew that the paths to the SSL files would have to be relative to the app in Bluemix. Thus, in the kibana.yml file, I specified them as:
kibana.ssl.key: app/sslFiles/kibana.key
kibana.ssl.cert: app/sslFiles/kibana.cert
I did this because, in Bluemix, I could see the following directory structure:
app(d)
    bin(d)
    config(d)
    installedPlugins(d)
    node_modules(d)
    sslFiles(d)
    manifest.yml
    Procfile
Indentation represents containment. So, I pushed the app to Bluemix using Cloud Foundry, but now I get a 502 Bad Gateway: Registered endpoint failed to handle the request error. I tried changing the paths to sslFiles/kibana.key, but then I got a "cannot find path sslFiles/kibana.key" staging error.
What is responsible for my 502 error? Is it the path to the sslFiles? If so, how can I properly provide the paths?

Azure VM cannot connect to AWS s3

In an Azure VM, I was testing a web application that is normally deployed on an AWS Linux VM.
The (Java) application accesses AWS S3 for some storage features and lists objects in an S3 bucket.
Running the application in the Azure VM, the list came back empty.
Suspecting connectivity issues, I installed the AWS CLI on the Azure VM, configured keys, and ran:
$ aws s3 ls
This resulted in
Could not connect to the endpoint URL: "https://s3.us-east.amazonaws.com/"
This confirmed my suspicions.
Checking the application's stack trace for what is essentially its listObjects request shows:
Request: http://azuredev.gpo.epacube.com/dps/job/listprojects raised com.amazonaws.services.s3.model.AmazonS3Exception: AWS authentication requires a valid Date or x-amz-date header (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: A2494E7A540B5B20), S3 Extended Request ID: 6+Nv1AtCTe0xz3i7Ra5lrmdEdxiIfXgxYapY9KbomblhYL4Q85L3iTLchpQcwRnixyE5El0WKwM=
The exact same code works when run from CentOS on AWS, but fails when run on Ubuntu 13.04 on Azure.
Why might I be getting the invalid date error?
How do I modify the Azure VM setup so the AWS s3 connections succeed?
Your region is wrong.
https://s3.us-east.amazonaws.com/ is not a valid endpoint. You possibly configured a region as us-east when it should be us-east-1.
I could reproduce the problem by specifying an incorrect endpoint:
This works:
$ aws s3 ls --region us-east-1
This doesn't work:
$ aws s3 ls --region us-east
Could not connect to the endpoint URL: "https://s3.us-east.amazonaws.com/"
For a full list of endpoints, see: Regions and Endpoints
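If you are not sure which region the bucket actually lives in, s3api can tell you (bucket name is a placeholder):
# returns the bucket's region; a null/empty LocationConstraint means us-east-1
aws s3api get-bucket-location --bucket example-bucket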
Turns out this was a mix of JDK 8 and an older version of the aws-sdk-java dependency on joda-time. Upgrading the joda-time dependency to version 2.8.1 fixed it.
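If the build uses Maven, one way to confirm which joda-time version is actually being resolved before and after the upgrade (assuming a standard Maven project):
# show where joda-time comes from in the dependency tree
mvn dependency:tree -Dincludes=joda-time:joda-time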

How do I reset credentials on AWS Elastic Beanstalk

I accidentally typed in the wrong aws-access-id and secret key after running eb init when going through the tutorial Deploying an Express Application to Elastic Beanstalk
Now I am getting the following error "Error: "my-mistyped-key" not a valid key=value pair (missing equal-sign) in Authorization header..."
What is the best way to reset my credentials so that I can run "eb init" again?
Go to ~/.aws/config and change your credentials there.
On Windows, you can find the config file to delete at C:\Users\You\.aws\ (you will have to enable viewing hidden files).
If you have the AWS CLI installed, type aws configure and you can re-enter your credentials.
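For context, the EB CLI normally writes the keys entered during eb init into your AWS config files, so fixing them there (or re-running aws configure) and then running eb init again is usually enough; the locations below assume a default Linux/macOS setup:
# eb init typically stores its keys under a dedicated profile
cat ~/.aws/config        # look for a [profile eb-cli] section
cat ~/.aws/credentials   # aws_access_key_id / aws_secret_access_key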
