Private and public IP using AWS API Gateway + Lambda + Node.js

I am trying to get the user's private IP and public IP in an AWS environment. Based on this answer (https://stackoverflow.com/a/46021715/4283738) there should be an X-Forwarded-For header containing comma-separated IPs, and this AWS forum thread suggests the same (https://forums.aws.amazon.com/thread.jspa?threadID=200198).
But after deploying my API via API Gateway + Lambda + Node.js v8 and logging the JSON of the event and context arguments of the Node.js handler for debugging (https://y0gh8upq9d.execute-api.ap-south-1.amazonaws.com/prod), I am not getting the private IPs.
The Lambda function is:

const AWS = require('aws-sdk');

exports.handler = function (event, context, callback) {
    callback(null, {
        "statusCode": 200,
        "body": JSON.stringify({ event, context })
    });
};
API Gateway Details
GET - Integration Request
Integration type -- Lambda Function
Use Lambda Proxy integration -- True
Function API: https://y0gh8upq9d.execute-api.ap-south-1.amazonaws.com/prod

Case 1: You cannot get the user's private IP, for security reasons. If the user's network uses NAT or PAT (Network Address Translation or Port Address Translation) behind the scenes, the NAT device records the private IP in its translation table and forwards the request using the public IP (you could say the router's ID).
Case 2: If by private IP you mean the case where multiple users share the same public network (Wi-Fi etc.), then again there are two IPs: a public one that is common to all users, and, inside that public network, a private one that is unique to each user.
For example, say there is a Wi-Fi network with public IP 1.1.1.1 and two users, A and B. Since they share the same Wi-Fi, the router has only one IP (public and common to all), but inside this router A and B have different IPs, such as 192.168.1.1 and 192.168.1.2, which can be called private.
In both cases, you will get only the public IP (at position 0 in the X-Forwarded-For header).
With a Lambda proxy integration, you can read the X-Forwarded-For header from event.headers (or event.multiValueHeaders for the multi-value form).
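A minimal sketch of pulling the client's public IP out of that header in a handler like the one above (the fallback to event.requestContext.identity.sourceIp applies to REST API proxy integrations):

exports.handler = async (event) => {
    // X-Forwarded-For is "client, proxy1, proxy2, ..."; the client comes first.
    const xff = event.headers &&
        (event.headers['X-Forwarded-For'] || event.headers['x-forwarded-for']);
    const clientIp = xff
        ? xff.split(',')[0].trim()
        : event.requestContext.identity.sourceIp;
    return {
        statusCode: 200,
        body: JSON.stringify({ clientIp }),
    };
};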
If you could access both, what would be the benefit of having a private and a public IP?
To reach a private subnet in an AWS VPC you likewise have to go through NAT, and the client never learns the actual IP, for security reasons. I suggest re-reviewing your requirements.

I don't know what has you stuck here; correct me if I'm wrong.
From Wiki:
The X-Forwarded-For (XFF) HTTP header field is a common method for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer.
I set X-Forwarded-For in the header and tested with Postman:
https://imgur.com/a/8QZEdyH

The "X-Forwarded-For" header shows the public ip of the user.
Thats all you get.
Internal IPs are not visible.
Only the "public ip" which is indicated in the header.

Related

Using places API from Firebase Cloud Functions with restricted API Key

I recently restricted an API key to accept requests only from specific websites, but I use the same API key from Firebase Cloud Functions for Places search.
What URL do I add for the Cloud Functions request?
The URL is built like this (searchString, apiKey, latitude and longitude are defined elsewhere):

const axios = require('axios');

const urlFindPlaceFromText = "https://maps.googleapis.com/maps/api/place/findplacefromtext";
const fields = "formatted_address,name,geometry,icon,rating,price_level,place_id";
// locationbias expects "point:lat,lng"
const location = `point:${latitude},${longitude}`;
const url = `${urlFindPlaceFromText}/json?input=${searchString}&inputtype=textquery&language=en&fields=${fields}&locationbias=${location}&key=${apiKey}`;
const placesRequest = await axios.get(url);
Response:

data: {
    candidates: [],
    error_message: 'API keys with referer restrictions cannot be used with this API.',
    status: 'REQUEST_DENIED'
}
The error you are seeing indicates that you are making the API call server-side. Because you have placed a referrer restriction on your API key, it is limited to being used from a browser with the web service APIs.
As mentioned in the comments above, you may create a separate key to use server-side. You can change your restriction from a browser restriction to a server restriction by using IP addresses to restrict access instead of browser referrers.
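A minimal sketch of what the server-side call could look like with such a separate key, assuming the server-restricted key is stored in an environment variable (SERVER_PLACES_KEY is an illustrative name):

const axios = require('axios');

async function findPlace(searchString) {
    // Server-restricted key, kept out of the browser bundle
    const serverKey = process.env.SERVER_PLACES_KEY;
    const url = 'https://maps.googleapis.com/maps/api/place/findplacefromtext/json';
    const response = await axios.get(url, {
        params: {
            input: searchString,
            inputtype: 'textquery',
            language: 'en',
            key: serverKey,
        },
    });
    return response.data;
}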
Check this APIs FAQ on switching a key to a server-restricted key.
Also check these similar examples for more information:
Key restrictions by IP address not working
How to get IP address from client
Cloud function secure HTTPS endpoint with API key
API keys with referrer restrictions cannot be used with this API error

Send GET request from Amplify service to EC2 machine

(I am a real newbie, so I am sorry if I use the terms incorrectly.)
Hey guys!
I am trying to deploy my website. I put my front-end files in an Amplify app, which provides me with an HTTPS URL.
My goal is to load my backend code onto an EC2 Ubuntu machine and run it via pm2.
I am having trouble understanding how to do it. I am writing the backend in Node.js using the Express framework.
When I develop locally, it all runs perfectly.
My backend code:

const AWS = require('aws-sdk');

app.get('/db', (req, res) => {
    const ddb = new AWS.DynamoDB({ apiVersion: "2012-08-10" });
    const params = {
        TableName: "blablabla",
    };
    ddb.scan(params, function (err, data) {
        if (err) {
            console.log("Error", err);
            res.status(500).json({ error: "scan failed" });
        } else {
            console.log("Success", data);
            // Respond once with all items; calling res.json inside a
            // forEach would try to send multiple responses.
            res.status(200).json(data.Items);
        }
    });
});
Related front-end code:

function getData(username) {
    var xmlhttp = new XMLHttpRequest();
    var url = "http://localhost/db";
    xmlhttp.onreadystatechange = function () {
        if (this.readyState == 4 && this.status == 200) { // request completed
            var result = JSON.parse(this.responseText);
            // blablabla
        }
    };
    xmlhttp.open("GET", url, true);
    xmlhttp.send();
}
When I use the localhost URL and run the server from my computer (npm start server..), I do get the data I am looking for on the Amplify site.
But when I use the Elastic IP address of the EC2 machine I get an error: "was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint. This request has been blocked."
Is there any way to allow those kinds of requests?
Am I even using the correct IP of the EC2 machine?
It seems to me that if EC2 provided me an HTTPS address it would work fine; am I right, or does that have nothing to do with it?
Thanks in advance.
It works from your local machine because there is no SSL certificate on localhost, so your frontend is not loaded over a secure connection there. When you run the frontend from Amplify, you're loading the page from the Amplify domain over SSL (I expect the URL is something like https://master.randomalphanumericstring.amplifyapp.com). So your browser complains when that page tries to make an insecure connection to your EC2 instance.
You can work around this by changing your browser settings to allow mixed content. In Chrome, it's Settings -> Site Settings -> Insecure Content -> Add site. But that's just a workaround for development; obviously it won't work for production.
You can't make an HTTPS request to a bare IP address: SSL certificates must be associated with a domain name. It's also not very robust to have your backend depend on a specific IP address. You have a few options to address this:
Generate an SSL certificate and install it on your EC2 instance (a minimal sketch follows after this list). You can't use AWS Certificate Manager with EC2, so you'd need to obtain a certificate from Let's Encrypt or some other source. Self-signed won't work; it has to be trusted by the browser. And of course you need a registered domain for that.
Add an application load balancer with a secure listener and a certificate issued through ACM that directs requests to your EC2 instance. Again, you'll need to have a registered domain that you can use with the certificate.
Deploy your backend through Amplify. This will provide an API endpoint with a secure connection in the amazonaws.com domain.
There are many other ways to create an app backend with a secure connection, but those should get you started.
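As a sketch of the first option: an Express app can serve HTTPS directly once a trusted certificate exists for your domain (the Let's Encrypt paths and example.com below are illustrative):

const fs = require('fs');
const https = require('https');
const express = require('express');

const app = express();
// ... existing routes, e.g. app.get('/db', ...)

https.createServer({
    key: fs.readFileSync('/etc/letsencrypt/live/example.com/privkey.pem'),
    cert: fs.readFileSync('/etc/letsencrypt/live/example.com/fullchain.pem'),
}, app).listen(443);

The frontend would then call https://example.com/db instead of the Elastic IP.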

Getting timeout from Mailchimp Transactional when running from Lambda function

I'm trying to send emails through Mailchimp Transactional/Mandrill using Node and Serverless Framework.
I can send emails fine locally (using serverless-offline); however, when I deploy the function to our staging environment, it gives a timeout error when trying to connect to the API.
My code is:

const mailchimp = require('@mailchimp/mailchimp_transactional')(MAILCHIMP_TRANSACTIONAL_KEY);

async function sendEmail(addressee, subject, body) {
    const message = {
        from_email: 'ouremail@example.com',
        subject,
        text: body,
        to: [
            {
                email: addressee,
                type: 'to',
            },
        ],
    };
    const response = await mailchimp.messages.send({ message });
    return response;
}
My Lambda is set at a 60 second timeout, and the error I'm getting back from Mailchimp is:
Response from Mailchimp: Error: timeout of 30000ms exceeded
It seems to me that either Mailchimp is somehow blocking traffic from the Lambda IP, or AWS is not letting traffic out to connect to the mail API.
I've tried switching to use fetch calls to the API directly instead of using the npm module, and still get back a similar error (although weirdly in html format):
Mailchimp send email failed: "<html><body><h1>408 Request Time-out</h1>\nYour browser didn't send a complete request in time.\n</body></html>\n\n"
Are there any AWS permissions I've missed, or Mailchimp Transactional/Mandrill configs I've overlooked?
I was having the identical issue with Mailchimp's Marketing API and solved it by routing traffic through a NAT Gateway. Doing this allows Lambda functions that are inside a VPC to reach external services.
The short version of how I was able to do this:
Create a new subnet within your VPC
Create a new route table for the new subnet you just created and make sure that the new subnet is utilizing this new route table
Create a new NAT Gateway
Have the new route table point all outbound traffic (0.0.0.0/0) to that NAT Gateway
Have the subnet associated with the NAT Gateway point all outbound traffic to an Internet Gateway (this is generally already created when AWS populates the default VPC)
You can find out more at this link: https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
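For reference, the same steps can be scripted with the aws-sdk module already used in the Lambda code; the VPC, subnet and Elastic IP allocation IDs below are placeholders, and the NAT Gateway takes a while to become available before the route works:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

async function setUpNatRouting() {
    // 1. New private subnet for the Lambda functions (CIDR is illustrative)
    const { Subnet } = await ec2.createSubnet({
        VpcId: 'vpc-XXXXXXXX', CidrBlock: '10.0.2.0/24',
    }).promise();
    // 2. New route table, associated with that subnet
    const { RouteTable } = await ec2.createRouteTable({ VpcId: 'vpc-XXXXXXXX' }).promise();
    await ec2.associateRouteTable({
        RouteTableId: RouteTable.RouteTableId, SubnetId: Subnet.SubnetId,
    }).promise();
    // 3. NAT Gateway in an existing *public* subnet, using an Elastic IP allocation
    const { NatGateway } = await ec2.createNatGateway({
        SubnetId: 'subnet-PUBLIC', AllocationId: 'eipalloc-XXXXXXXX',
    }).promise();
    // 4. Point all outbound traffic from the private subnet at the NAT Gateway
    await ec2.createRoute({
        RouteTableId: RouteTable.RouteTableId,
        DestinationCidrBlock: '0.0.0.0/0',
        NatGatewayId: NatGateway.NatGatewayId,
    }).promise();
}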

API Gateway - ALB: Hostname/IP doesn't match certificate's altnames

My setup currently looks like:
API Gateway --- ALB --- ECS Cluster --- NodeJS Applications
     |
     -- Lambda
I also have a custom domain name set on API Gateway. (UPDATE: I used the default API Gateway link and got the same problem, so I don't think this is a custom domain issue.)
When one service in the ECS cluster calls another service via API Gateway, I get:
Hostname/IP doesn't match certificate's altnames: "Host: someid.ap-southeast-1.elb.amazonaws.com. is not in the cert's altnames: DNS:*.execute-api.ap-southeast-1.amazonaws.com"
Why is this?
UPDATE
I noticed that when I start a local server that calls the API Gateway I get a similar error:
{
"error": "Hostname/IP doesn't match certificate's altnames: \"Host: localhost. is not in the cert's altnames: DNS:*.execute-api.ap-southeast-1.amazonaws.com\""
}
And if I try to disable the HTTPS check:

const https = require('https');

const response = await axios({
    method: req.method,
    url,
    baseURL,
    params: req.params,
    query: req.query,
    data: body || req.body,
    headers: req.headers,
    httpsAgent: new https.Agent({
        rejectUnauthorized: false // <<=== HERE!
    })
})
I get this instead ...
{
"message": "Forbidden"
}
When I call the underlying API Gateway URL directly in Postman it works... Somehow it reminds me of CORS, as if the server is blocking my server (either localhost or ECS/ELB) from accessing my API Gateway.
It may be quite confusing, so here is a summary of what I tried:
In the existing setup, services inside ECS may call one another via API Gateway. When that happens, it fails with the HTTPS error above.
To resolve it, I set rejectUnauthorized: false, but API Gateway returns HTTP 403.
When running on localhost, the error is similar.
I tried calling the ELB instead of API Gateway, and it works...
There are various workarounds that introduce security implications instead of providing a proper solution. To fix it, you need to add a CNAME entry for someid.ap-southeast-1.elb.amazonaws.com to DNS (this entry might already exist) and also add it to an SSL certificate, as described in the AWS documentation for Adding an Alternate Domain Name. This can be done with the CloudFront console and ACM. The point is that the current certificate only covers *.execute-api.ap-southeast-1.amazonaws.com, so that alternate (internal!) host name will never match it; this is much more an infrastructure problem than a code problem.
Reviewing it once again: instead of extending the SSL certificate of the public-facing interface, a better solution might be to use a separate SSL certificate for the communication between the API Gateway and the ALB, according to this guide; even self-signed is possible in this case, because that certificate would never be accessed by any external client.
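If you go with a self-signed certificate for that internal hop, the calling service can pin it explicitly instead of disabling verification entirely; a minimal sketch, assuming the certificate is available at an illustrative path:

const fs = require('fs');
const https = require('https');
const axios = require('axios');

// Trust exactly our internal certificate, rather than setting
// rejectUnauthorized: false for every host.
const internalAgent = new https.Agent({
    ca: fs.readFileSync('/etc/ssl/internal-alb-cert.pem'),
});

const response = await axios.get(url, { httpsAgent: internalAgent });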
Concerning the HTTP 403, the docs read:
You configured an AWS WAF web access control list (web ACL) to monitor requests to your Application Load Balancer and it blocked a request.
I hope this helps with setting up end-to-end encryption: only the public-facing interface of the API Gateway needs a CA-issued certificate; for whatever internal communication, self-signed should suffice.
This article is about the difference between ELB and ALB; it may be worth considering whether the most suitable load balancer was chosen for the given scenario. If no content-based routing is required, cutting down on useless complexity might be helpful, and it would eliminate the need to define the routing rules, which you should also review if you stick with the ALB. The question only shows the basic scenario and some failing code, but not the routing rules.

How to make an Express route privately accessible

I'm running a Node.js/Express application as the backend for my current project. The application uses passport-jwt to secure some routes with a JWT Authorization header; let's call one such route secure-route. Now I'm running a second application which needs to access secure-route without the necessary Authorization header. That Authorization header is normally generated by a login route after the user has authenticated successfully.
The problem is that I don't want to provide a (fake, non-expiring) JWT Authorization header. The second application/server should access the first with a more appropriate authorization strategy, such as basic auth.
I thought about making secure-route private in another router module so I could reach it by rerouting.
So how can I make an Express route privately accessible? Or is there a solution for authenticating a backend/server without affecting the current authentication strategy?
EDIT: both backends run on a serverless infrastructure on AWS.
Assuming this second application you mention runs either on the same server or on another server in the same network, you can do the following:
Create a new web server on a non-standard port that is not accessible from the general internet (just a few lines of code with Express).
Run that new web server in the same Node.js process your existing server (the one with secure-route) runs in.
In that new server, create a route for the private access. In that private route, do not implement any access control.
Put the code for the route into a separately callable function.
When the new server's route gets hit, call the same function that implements the secure route in the other server.
Verify that there is no access to the second server's port from the internet (firewall settings). A minimal sketch follows this list.
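A minimal sketch of that two-server setup, assuming the jwt strategy is already configured in your app (secureRouteHandler and the ports are illustrative names):

const express = require('express');
const passport = require('passport');

// Shared implementation, callable from both servers
function secureRouteHandler(req, res) {
    res.json({ secret: 'data' });
}

// Public server: route protected by the existing JWT strategy
const publicApp = express();
publicApp.get('/secure-route',
    passport.authenticate('jwt', { session: false }),
    secureRouteHandler);
publicApp.listen(3000);

// Internal server: same handler, no auth, bound to localhost only
// so it is unreachable from the general internet
const internalApp = express();
internalApp.get('/secure-route', secureRouteHandler);
internalApp.listen(3001, '127.0.0.1');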
You could also keep your one existing server and route, and allow access without the Authorization header only when the request comes from the specific IP address where your other app runs.
If you can't use anything about the server's network topology to securely identify your second app when it makes a request, then you have to create a secret credential for it and use that (akin to an admin password or admin certificate), or switch to an architecture where you can use the network topology to identify the second app.
You could make a middleware and use it like this:

/* Starting point of the project (e.g. app.js) */
const express = require('express');
let CONFIG = require('./config');
let middleware = require('./middleware');
let app = express();
app.use(middleware.testFunction);
require('./route')(app);

/* middleware.js */
'use strict';
let middleware = {
    testFunction: function (req, res, next) {
        // Decide, based on req.url, whether the request targets a public
        // or a private route. For public URLs just let the request pass;
        // for private routes check the additional header parameters and
        // set the condition to true only if they are valid, otherwise
        // respond with an error saying access to the private part of
        // the application is not allowed.
        let condition = '';
        if (condition) {
            next();
        } else {
            res.send('error');
        }
    }
};
module.exports = middleware;
By designing a middleware like this, you separate into its own module the logic of private versus public routes and the conditions under which a route is one or the other. It is a little difficult to grasp at first, but it is better to filter public and private routes up front than to check later; this way, we can differentiate private and public routes on the very first hit.
