How do I reset credentials on AWS Elastic Beanstalk - node.js

I accidentally typed in the wrong aws-access-id and secret key after running eb init while going through the tutorial "Deploying an Express Application to Elastic Beanstalk".
Now I am getting the following error "Error: "my-mistyped-key" not a valid key=value pair (missing equal-sign) in Authorization header..."
What is the best way to reset my credentials so that I can run "eb init" again?

Go to ~/.aws/config and change your credentials there.

On Windows, you can find the config file to delete at C:\Users\You\.aws\. You will have to enable viewing hidden files.

If you have the AWS CLI installed, type aws configure and you can re-enter your credentials.
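To make the fix concrete, here is a sketch of what the corrected INI-style file looks like. It uses a scratch directory and AWS's documented example key values, not real credentials:

```shell
# Write an INI-style credentials file like the ones eb/aws read from ~/.aws/
# (scratch path and AWS's documented placeholder values, for illustration only)
mkdir -p /tmp/aws-demo
cat > /tmp/aws-demo/credentials <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
# Correcting these two values (or deleting the file and re-running
# `aws configure` / `eb init`) resets the stored credentials.
cat /tmp/aws-demo/credentials
```

Editing the file by hand and re-running aws configure both end up changing these same two entries.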

Related

jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection Message [Auth fail]

I am learning to use Jenkins to deploy a .Net 5.0 application on an AWS EC2 server. This is the first time I am using a Linux server and Jenkins for .Net (I'm a lifelong Windows guy), and I am facing an error while trying to publish my artifacts over SSH to the web server.
My setup:
Jenkins server is an AWS EC2 Linux AMI server.
Web Server is also an AWS EC2 Linux AMI server.
My Jenkins is correctly installed and working. I am able to build and run unit test cases without any issues.
For Deploy, I am using 'Publish Over SSH' plugin, and I have followed all steps to configure this plugin as mentioned here https://plugins.jenkins.io/publish-over-ssh/.
However, when I try to 'Test Configuration', I get the below error:
Failed to connect or change directory
jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection. Message: [Failed to connect session for config [WebServer]. Message [Auth fail]]
I did a ping test from Jenkins server to Web Server, and it is a success.
I'm using the .pem key in the 'Key' section of 'Publish over SSH'. This key is the same key I use to SSH into the web server.
The below link suggests many different solutions, but none is working in my case.
Jenkins Publish over ssh authentification failed with private key
I was looking at the below link which describes the same problem,
Jenkins publish over SSH failed to change to remote directory
However, in my case I have kept 'Remote Directory' empty. I don't know if I have to specify any directory here. Anyway, I tried creating a new directory under the home directory of user ec2-user as '/home/ec2-user/publish' and then used this path as the Remote Directory, but it still didn't work.
Screenshot of my settings in Jenkins:
I would appreciate it if anyone could point me in the right direction or highlight any mistake I'm making with my configuration.
In my case the following steps solved the problem.
The solution is based on Ubuntu 22.04.
Add two lines to /etc/ssh/sshd_config:
PubkeyAuthentication yes
PubkeyAcceptedKeyTypes +ssh-rsa
Restart the sshd service:
sudo service sshd restart
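The steps above can be sketched end to end. The commands below run against a scratch copy of the file so they are safe to try anywhere; on a real server you would edit /etc/ssh/sshd_config itself with sudo:

```shell
# Append the two options to a scratch copy of sshd_config
# (on a real server: edit /etc/ssh/sshd_config with sudo instead)
cp /etc/ssh/sshd_config /tmp/sshd_config 2>/dev/null || touch /tmp/sshd_config
printf '%s\n' 'PubkeyAuthentication yes' 'PubkeyAcceptedKeyTypes +ssh-rsa' >> /tmp/sshd_config
tail -n 2 /tmp/sshd_config
# Then, on the real server: sudo service sshd restart
```

The PubkeyAcceptedKeyTypes line matters on newer OpenSSH versions, which disable ssh-rsa signatures by default; older .pem keys are often RSA keys, which is why adding it back fixes the "Auth fail".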
You might consider the following:
a. From the screenshot you’ve provided, it seems that you have checked the Use password authentication, or use different key option which will require you to add your key and password (inputs from these fields will be used in connecting to your server via SSH connection). If you use the same SSH key and passphrase/password on all of your servers, you can uncheck/untick that box and just use the config you have specified above.
b. You might also check if port 22 of your web server allows inbound traffic from the security group where your Jenkins server/EC2 instance is running. See reference here.
c. Also, make sure that the remote directory you have specified exists, otherwise the connection may fail.
Here's the sample config

SSH'ing to Linux Client using AWS command line in Jenkins

I need to SSH on to my Linux box from Jenkins using AWS cli. To do so, AWS documentation states I need to use my pem key:
ssh -i /path/my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com
However, Jenkins does not have access to where I have the pem file stored and moving it is not an option.
I have generated a sshagent in Jenkins using my pem file, but cannot find any documentation or examples that show how replacing the path to pem file with my sshagent would work.
Does anyone have any idea what the syntax is, or could someone point me in the direction of some documentation on this?
You have mixed two separate things:
To SSH you certainly need the .pem key, but not to execute the AWS CLI. Use the ssh command shown in your question to SSH from Jenkins to the EC2 instance.
Instead of doing the above, you can update /home/ec2-user/.ssh/authorized_keys on the EC2 instance with the public key of the jenkins user.
For executing AWS CLI commands, you instead need to use Access Credentials.
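The authorized_keys approach in the answer above could be sketched as follows. The paths here are local stand-ins so the commands are runnable anywhere; the commented-out command shows what it would look like against the real instance:

```shell
# Local stand-ins for the remote files (hypothetical paths, fake key)
mkdir -p /tmp/demo/.ssh
echo 'ssh-rsa AAAAB3...examplekey... jenkins@build' > /tmp/demo/jenkins_id_rsa.pub
cat /tmp/demo/jenkins_id_rsa.pub >> /tmp/demo/.ssh/authorized_keys
chmod 600 /tmp/demo/.ssh/authorized_keys
# Against the real instance, this would look something like:
#   ssh -i /path/my-key-pair.pem ec2-user@<host> \
#     'cat >> ~/.ssh/authorized_keys' < ~jenkins/.ssh/id_rsa.pub
```

Once the jenkins user's public key is in authorized_keys, Jenkins jobs can ssh to the instance without referencing the .pem file at all.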

Configuring jhipster-registry v3.0.3 with Git repo

I am trying to use the pre-packaged v3.0.3 war file for the jhipster-registry. I am launching it with these command-line properties in an attempt to point it to my Git repo for configuration info:
jhipster-registry-3.0.3.war --spring.profiles.active=prod,cust1 \
--spring.cloud.config.server.git.uri=http://myserver/url/MyConfig \
--spring.cloud.config.server.git.username=user \
--spring.cloud.config.server.git.password=pass
It starts, but I always get this error:
Your JWT secret key is not set up, you will not be able to log into the JHipster
I've tried many combinations of how to setup the Git repo. I'm using the sample application.yml file from https://github.com/jhipster/jhipster-registry-sample-config
Does the jhipster-registry itself not read any configuration files from Git?
If I want to configure the jhipster-registry properties, should I keep overriding things on the command-line, or put a yml file somewhere? It isn't clear to me the proper way to configure it when it is a pre-built war file and has embedded bootstrap/application yml files.
Is there a way to turn on debug logging so I can see what is going on?
This is because your JWT token isn't configured in your Git repository.
Have a look at our sample Git repository.
The Registry will send this token to all configured applications, and thus will be able to connect to them.
Otherwise, it shows a warning as it knows this will be an issue later.
Please note that this is a difference from the "classical" Eureka and Spring Cloud Config servers, which are not secured by default.
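For reference, the JWT secret lives under JHipster's own property namespace in the application.yml served from the Git repository. A minimal sketch of the entry (verify the exact property path and value against the jhipster-registry-sample-config repository linked above):

```yaml
# application.yml in the config Git repo (sketch; check against the
# jhipster-registry-sample-config repository for the authoritative layout)
jhipster:
    security:
        authentication:
            jwt:
                secret: my-secret-token-to-change-in-production
```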

cat: ssh-rsa: No such file or directory when adding Codeship SSH key to AWS

I'm trying to add the Codeship SSH key to AWS with cat ssh-rsa [SSH_KEY] >> .ssh/authorized_keys, but I've encountered the following error: cat: ssh-rsa: No such file or directory. I'm following http://www.eq8.eu/blogs/19-setting-up-simple-wordpress-deployment-with-codeship-to-aws-ec2
Please let me know how to solve it, because I'm now trying to deploy a Node.js application to AWS with Codeship. Or is there another way I can deploy a Node.js application to AWS with Codeship?
You can find the public key for your Codeship project on the project's General settings page. You can then take this key and add it to the .ssh/authorized_keys file on the EC2 instance(s) you want to deploy to.
See https://documentation.codeship.com/general/projects/project-ssh-key/ for the documentation article on this topic.
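As a side note on the error itself: cat treats ssh-rsa as a filename to open, so the key text has to be echoed into the file rather than passed to cat. A sketch with a placeholder key and a scratch path:

```shell
# echo writes the literal key string, whereas `cat ssh-rsa ...`
# tries to open a file named 'ssh-rsa' (hence the error)
mkdir -p /tmp/demo2/.ssh
echo 'ssh-rsa AAAAB3...yourkeyhere... codeship' >> /tmp/demo2/.ssh/authorized_keys
tail -n 1 /tmp/demo2/.ssh/authorized_keys
```

On the real instance the target would be ~/.ssh/authorized_keys for the user you deploy as.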

AWS S3 file upload failing via Laravel

I'm running a Laravel app with a code like this in one of my controller functions:
$s3 = Storage::disk('s3');
$s3->put( $request->file('file')->getClientOriginalName(), file_get_contents($request->file('file')) );
I believe Laravel utilizes Flysystem behind the scenes to connect to s3. When trying to execute this piece of code I get an error like this:
The Laravel docs aren't giving me much insight into how or why this problem is occurring. Any idea what is going on here?
EDIT: After going through a few other Stack Overflow threads:
fopen fails with getaddrinfo failed
file_get_contents(): php_network_getaddresses: getaddrinfo failed: Name or service not known
it seems as if the issue may be more related to my server's DNS? I'm on Ubuntu 14.04 on a Linode instance, and I use Nginx as my web server.
Your S3 configuration seems to be wrong, as the host it tries to use, s3.us-standard.amazonaws.com, cannot be resolved on my machine either. You should verify that you have configured the right bucket and region.
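For context, the bucket and region Flysystem uses come from the s3 disk entry in config/filesystems.php. A sketch of the relevant block (the env variable names vary between Laravel versions, so treat them as placeholders):

```php
// config/filesystems.php, inside the 'disks' array
// (sketch; env variable names are placeholders)
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),  // must match the bucket's actual region
    'bucket' => env('AWS_BUCKET'),
],
```

A wrong or invalid region here produces a hostname the SDK cannot resolve, which matches the getaddrinfo failure described in the question.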
Check that your S3 API endpoints are correct.
To eliminate permission (role/credential) and related setup errors, try doing a put-object using the AWS CLI s3api from that server:
aws s3api put-object --bucket example-bucket --key dir-1/big-video-file.mp4 --body e:\media\videos\f-sharp-3-data-services.mp4
