Sending email via AWS SES - Node.js

I have 2 servers, A and B, both hosted in AWS, and my app is built with Node.js.
The same copy of the application runs on both servers. Sending email works from server A but not from server B.
I have a file called emailconfig.json containing the accessKeyId, secretAccessKey, and region, which I load as the config.
Is it possible that the same config can't be used on another server in AWS to send email?
Code:
router.post('/sendmail', function(req, res, next) {
  // load aws config
  console.log("I am here 1");
  aws.config.loadFromPath('\emailconfig.json');
  console.log("I am here 2");
For some reason, I can't see the second log on server B, but it appears on server A.
Any help is highly appreciated. Thanks in advance.

I think server B's IAM permissions are different from server A's. Since each EC2 instance can have its own IAM role, specify the IAM role that server A uses as the IAM role for server B as well.
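To verify this, here is a minimal diagnostic sketch (AWS SDK v2, reusing the same config file as in the question); run it on both servers and compare the printed ARNs:

const aws = require('aws-sdk');

// Log which identity this server actually resolves. If the ARNs differ
// between A and B, the two servers are using different credentials/roles,
// and hence different permissions.
aws.config.loadFromPath('./emailconfig.json');
new aws.STS().getCallerIdentity({}, function (err, data) {
  if (err) console.log(err, err.stack);
  else console.log('Running as:', data.Arn);
});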

Related

Send GET request from Amplify service to EC2 machine

(I am a real newbie, so I am sorry if I am using the terms incorrectly.)
Hey guys!
I am trying to deploy my website. I put my front-end files in an Amplify app, which provides me with an HTTPS URL.
My goal is to load my backend code onto an EC2 Ubuntu machine and run it via pm2.
I have trouble understanding how to do it. I am writing the backend code in Node.js and I am using the Express framework.
When I develop locally, it all runs perfectly.
My backend code:
const AWS = require('aws-sdk'); // needed for the DynamoDB client below

app.get('/db', (req, res) => {
  let ddb = new AWS.DynamoDB({ apiVersion: "2012-08-10" });
  const params = {
    TableName: "blablabla",
  };
  let itemObj = [];
  ddb.scan(params, function (err, data) {
    if (err) {
      console.log("Error", err);
      res.status(500).json(err);
    } else {
      console.log("Success", data);
      // collect all items, then send a single response
      // (res.json must only be called once per request)
      data.Items.forEach(function (element) {
        itemObj.push(element);
      });
      res.status(200).json(itemObj);
    }
  });
});
Related front-end code:
function getData(username) {
  var xmlhttp = new XMLHttpRequest();
  var url = "http://localhost/db";
  xmlhttp.onreadystatechange = function() {
    if (this.readyState == 4 && this.status == 200) { // request completed
      var result = JSON.parse(this.responseText);
      // blablabla
    }
  };
  xmlhttp.open("GET", url, true);
  xmlhttp.send();
}
When I use the localhost URL and run the server from my computer (npm start server..), I do get the data I am looking for on the Amplify site.
But when I use the Elastic IP address of the EC2 machine, I get an error: "was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint. This request has been blocked."
Is there any way to allow those kinds of requests?
Am I even using the correct IP of the EC2 machine?
It seems to me that if EC2 provided me an HTTPS address it would work fine; am I right, or does it have nothing to do with that?
Thanks in advance.
It works on your local machine because you don't have an SSL certificate on localhost, so your frontend is not loaded over a secure connection. When you run the frontend from Amplify, you're connecting to the Amplify domain name via SSL (I expect the URL is something like https://master.randomalphanumericstring.amplifyapp.com). So your browser complains when that page tries to make an insecure connection to your EC2 instance.
You can work around this by changing your browser settings to allow mixed content. In Chrome, it's Settings->Site Settings->Insecure Content->Add site. But that's just a workaround for development, obviously that won't work for production.
You can't make an HTTPS request to an IP address. SSL certificates must be associated with a domain name. It's also not a very robust solution to have your backend depend on a specific IP address. You have a few options to address this:
Generate an SSL certificate and install it on your EC2 instance. You can't use AWS Certificate Manager with EC2, so you'd need to obtain a certificate from Let's Encrypt or some other source. Self-signed won't work; it has to be trusted by the browser. And of course you need a registered domain for that (see the sketch after this list).
Add an application load balancer with a secure listener and a certificate issued through ACM that directs requests to your EC2 instance. Again, you'll need to have a registered domain that you can use with the certificate.
Deploy your backend through Amplify. This will provide an API endpoint with a secure connection in the amazonaws.com domain.
There are many other ways to create an app backend with a secure connection, but those should get you started.
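For the first option, here is a minimal sketch of serving the Express backend over HTTPS. The certificate paths are placeholders; they assume a Let's Encrypt layout for a hypothetical example.com domain:

const fs = require('fs');
const https = require('https');
const express = require('express');

const app = express();
// ... your /db route from above goes here ...

// Placeholder paths: point these at the key/cert issued for your domain.
const options = {
  key: fs.readFileSync('/etc/letsencrypt/live/example.com/privkey.pem'),
  cert: fs.readFileSync('/etc/letsencrypt/live/example.com/fullchain.pem'),
};

// Port 443 typically requires elevated privileges.
https.createServer(options, app).listen(443, () => {
  console.log('HTTPS backend listening on 443');
});

The front end would then request https://yourdomain/db instead of the raw IP.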

Postman not reaching AWS EKS API endpoint

I'm trying to figure out how to get Postman to work with EKS. I have a simple Node.js app.
const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('hello world'));
app.listen(3000, () => {
  console.log('My REST API running on port 3000!');
});
Here's everything I've done so far:
I created a Docker container and successfully pushed it to ECR.
I also tested the container by running it locally; I was able to reach it and get the hello world response, so the container seems fine.
I created an EKS cluster running the container and have the API server endpoint,
but when I try to make a call with Postman, I get an error (the response is 403 Forbidden).
I even tried adding the access key and secret of an IAM user that has access to EKS, but I get the same error.
When I configured the cluster, I set the endpoint to public, so I don't understand why Postman can't reach the API endpoint.
I also added a number of EKS-related permissions to the IAM user I'm using in Postman; I wasn't sure which one was correct, so I added all of them. I also put that user's security credentials in Postman.
What am I missing? I appreciate the help!
Actually, your Postman is reaching the AWS EKS API endpoint, but you are getting an authentication/authorization error: 403 Forbidden. I see an OpenID Connect provider URL in the API config, so I would expect OIDC authentication rather than an access key/secret key. Check the AWS EKS documentation or contact AWS support.
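If you do want to call the cluster endpoint directly from Postman, the usual approach (an assumption about your setup) is to generate a bearer token with the AWS CLI, e.g. aws eks get-token --cluster-name <your-cluster>, and send it as an Authorization: Bearer <token> header; the IAM identity also has to be mapped inside the cluster's aws-auth ConfigMap.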

Stop EC2 Instance

I have a Node.js application running on EC2. After a certain operation, I want to stop the EC2 instance.
I am using this function to stop it:
const AWS = require('aws-sdk');

const stopInstance = () => {
  // set the region
  AWS.config.update({
    accessKeyId: "MY ACCESS KEY",
    secretAccesskey: "SECRET KEY",
    region: "us-east-1"
  })
  // create an ec2 object
  const ec2 = new AWS.EC2();
  // setup instance params
  const params = {
    InstanceIds: [
      'i-XXXXXXXX'
    ]
  };
  ec2.stopInstances(params, function(err, data) {
    if (err) {
      console.log(err, err.stack); // an error occurred
    } else {
      console.log(data); // successful response
    }
  });
}
When I run it from EC2, it gives this error:
UnauthorizedOperation: You are not authorized to perform this operation.
But when I run the same code, with the same key and secret, from my local machine, it works perfectly.
Permissions I have (screenshot omitted):
This will come down to the permissions of the IAM identity being passed into the script.
Firstly, this error message indicates that an IAM user/role was successfully used in the request but lacked the required permissions, so invalid credentials can be ruled out.
Assuming a key and secret are being passed in successfully (they look hard-coded), you would be looking at further restrictions within the policy (such as a Principal or Condition element).
If the key and secret are not hard-coded but passed in as environment variables, do some debugging to output the string values and validate that they are what you expect. If they are not getting passed into the SDK, it may be falling back to an instance role that is attached.
As a point of improvement: when interacting with the AWS SDK/CLI from within AWS (i.e. on an EC2 instance), you should generally use an IAM role rather than an IAM user, as this leaves fewer long-lived API credentials to manage and rotate. An IAM role rotates temporary credentials for you every few hours.
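As a sketch of that improvement (assuming an instance role with ec2:StopInstances permission is attached, so no keys appear in the code):

const AWS = require('aws-sdk');

// With an IAM role attached to the instance, the SDK resolves temporary
// credentials automatically; only the region needs to be configured.
AWS.config.update({ region: "us-east-1" });

const ec2 = new AWS.EC2();
ec2.stopInstances({ InstanceIds: ['i-XXXXXXXX'] }, function (err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data);
});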
If the same credentials are working on the local machine, then it's probably not a permissions issue. But just to further isolate the problem, you can run the STS GetCallerIdentity call to check which credentials are actually being used:
https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html
If this does not help, create a new user, give it full admin access, and try its credentials. This will confirm whether or not we are facing a permissions issue.

Google Cloud Vision reverse image search fails on Azure App Service because GOOGLE_APPLICATION_CREDENTIALS file cannot be found

I am attempting to perform a Google reverse image search using Google Cloud Vision on an Azure app service web app.
I have generated a googleCred.json, which the Google client libraries use in order to construct API requests. Google expects it to be available from an environment variable named GOOGLE_APPLICATION_CREDENTIALS.
The Azure App Service that runs the web app has application settings that mimic environment variables for the Google client libraries; following the documentation, I have successfully set the variable.
Furthermore, the googleCred.json file has been uploaded to the App Service, following the documentation for uploading files via FTP and FileZilla.
Also, the file permissions are as open as they can be.
However, when I access the web app in the cloud, I get the following error message:
Error reading credential file from location D:\site\wwwroot\Statics\googleCred.json: Could not find a part of the path 'D:\site\wwwroot\Statics\googleCred.json'. Please check the value of the Environment Variable GOOGLE_APPLICATION_CREDENTIALS
What am I doing wrong? How can I successfully use the Google Cloud Vision API on an Azure web app?
This error message is usually thrown when the application is not being authenticated correctly, for reasons such as missing files, invalid credential paths, or incorrect environment variable assignments.
Based on this, I recommend that you validate that the credential file and file path are being correctly assigned. You can also follow the "Obtaining and providing service account credentials manually" guide to explicitly specify your service account file directly in your code; this way you set it permanently and can verify that you are passing the service credentials correctly.
Example of passing the path to the service account key in code:
// Imports the Google Cloud client library.
const Storage = require('@google-cloud/storage');

// Instantiates a client. Explicitly use service account credentials by
// specifying the private key file. All clients in google-cloud-node have this
// helper; see https://github.com/GoogleCloudPlatform/google-cloud-node/blob/master/docs/authentication.md
const storage = new Storage({
  keyFilename: '/path/to/keyfile.json'
});

// Makes an authenticated API request.
storage
  .getBuckets()
  .then((results) => {
    const buckets = results[0];
    console.log('Buckets:');
    buckets.forEach((bucket) => {
      console.log(bucket.name);
    });
  })
  .catch((err) => {
    console.error('ERROR:', err);
  });
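The same pattern should work for the Vision client that the question actually uses; a sketch, assuming the @google-cloud/vision package (the paths are placeholders):

// Pass the key file directly to the Vision client instead of relying on
// the GOOGLE_APPLICATION_CREDENTIALS environment variable.
const vision = require('@google-cloud/vision');
const client = new vision.ImageAnnotatorClient({
  keyFilename: '/path/to/googleCred.json'
});

// Example request: run web detection (the reverse-image-search feature).
client.webDetection('./image-to-search.jpg')
  .then(([result]) => console.log(result.webDetection))
  .catch((err) => console.error('ERROR:', err));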
I'm writing here since I can't comment, but at a quick glance: is the "D:" in the path necessary? I assume you uploaded the file to the app service, so try this value for the path: "\site\wwwroot\Statics\googleCred.json".
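Relatedly, a small path-resolution sketch; it assumes the standard Windows App Service layout, where the HOME environment variable points at the app's home directory (typically D:\home):

const path = require('path');

// Build the key file path from the App Service home directory rather
// than hard-coding a drive letter (HOME is assumed to be set by Azure).
const credPath = path.join(process.env.HOME || '.', 'site', 'wwwroot', 'Statics', 'googleCred.json');
process.env.GOOGLE_APPLICATION_CREDENTIALS = credPath;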

Deploy bot without hosting on Azure

I have created a bot using the Microsoft Bot Framework (Node.js); it works fine on my local system, and it also works fine when deployed with Azure Functions.
But I'm trying to deploy it a different way:
1) I want to register my bot on Azure,
2) but host it somewhere else (an SSL-certified server), i.e. not on Azure (I don't want to use Azure Functions for the bot).
I have followed the steps given in some articles.
My App.js looks like this (screenshot omitted).
But I'm getting an error when I try to test with "Test in Web Chat".
Can anyone please help me figure out what I'm doing wrong here?
Thanks
I'm not familiar with AWS, but to make a restify server support HTTPS, you can try the following:
const restify = require('restify');
const fs = require('fs');

const https_options = {
  key: fs.readFileSync('./localhost_3978.key'), // in the current folder
  certificate: fs.readFileSync('./localhost_3978.cert')
};

const server = restify.createServer(https_options);
Create the restify server with your SSL certificate and key files before wiring up your bot application.
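For completeness, here is a sketch of what typically follows (assuming the usual botbuilder v3 setup from such articles; connector stands for a ChatConnector configured with your app ID and password):

// Expose the bot endpoint on the HTTPS server created above.
server.listen(process.env.PORT || 3978, function () {
  console.log('%s listening at %s', server.name, server.url);
});
server.post('/api/messages', connector.listen());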
