Why are there multiple POST requests to the /FotorShopSurpport API on my AWS Elastic Beanstalk Node.js server? - node.js

I see multiple POST requests throughout my logs:
POST /FotorShopSurpport/fetchModulesByAppkey
POST /FotorShopSurpport/fetchRecommendResource
POST /FotorShopSurpport/batchResourcePkgNumByType
I don't have any API matching that route, nor am I calling these APIs from my server. I recently created this server and no one else even knows the link to it.
Is this something Elastic Beanstalk doing? Or is it totally different?
I have several other servers on Elastic Beanstalk, and this is the first time I have seen requests like these in any of their logs.

I found some access logs containing "FotorShopSurpport" via Google; the requests are meant for store.fotor.com.
$ host store.fotor.com
store.fotor.com is an alias for elb-store-376424179.us-west-2.elb.amazonaws.com.
elb-store-376424179.us-west-2.elb.amazonaws.com has address 52.34.194.249
elb-store-376424179.us-west-2.elb.amazonaws.com has address 35.160.57.75
Some client trying to reach store.fotor.com is using the wrong IP, perhaps because of overly aggressive DNS caching; ELB keeps changing its IPs. I have seen such requests in my own access logs too. Make sure your web server is configured to serve only requests for your own hostnames.
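Not part of the answer itself, but a minimal sketch of that last suggestion, assuming an Express app: reject any request whose Host header is not one of your own hostnames, so stray traffic aimed at someone else's site never reaches your routes. The hostnames below are placeholders.

// Hypothetical example: restrict serving to your own hostnames.
const express = require('express');
const app = express();

// Replace with the hostnames you actually serve (placeholder values).
const allowedHosts = new Set([
  'api.example.com',
  'myapp.us-west-2.elasticbeanstalk.com',
]);

app.use((req, res, next) => {
  const host = (req.headers.host || '').split(':')[0]; // strip any port
  if (!allowedHosts.has(host)) {
    // 421 Misdirected Request (a plain 404 also works).
    return res.status(421).send('Misdirected Request');
  }
  next();
});

app.get('/', (req, res) => res.send('ok'));
app.listen(process.env.PORT || 8080);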

Related

Problem running Express app over HTTPS on aws

I have an ExpressJS backend and I want to run it over HTTPS on AWS (so I don't get a 'mixed content' error when connecting from my frontend, which runs over HTTPS). It runs great over HTTP, but over HTTPS it doesn't work.
I asked this question before and got answers like 'use nginx' or 'use a load balancer', but unfortunately I don't know much about this stuff, as I'm not very experienced with all the AWS variations and options. Are there any tutorials I can follow step by step, or any easy way to serve my backend over HTTPS without complexity?
any easy way to serve my backend over HTTPS without complexity?
The easiest way (not to be confused with the cheapest way) is to change your EB environment to a load-balanced one. You can do this in the EB console's configuration settings.
This change will create an Application Load Balancer for your app and place it in front of your instance. Once the ALB is running, you can follow this AWS guide:
How can I configure HTTPS for my Elastic Beanstalk environment?
In that guide, only the section Terminate HTTPS on the load balancer is relevant.
Depending on the nature of your application (whether it is fully dynamic or more on the static side), you could also consider Using Elastic Beanstalk with Amazon CloudFront instead of an ALB. CloudFront can also easily be set up to use HTTPS between clients and CloudFront, but the catch is that traffic between CloudFront and your EB instance would travel over the internet unencrypted (HTTP). You could make that leg HTTPS as well, but that requires further changes and configuration that don't fall into the category of "easy ways".
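For reference, a small sketch of what the Node side can look like once HTTPS terminates on the ALB, assuming an Express app: the instance keeps serving plain HTTP, and the X-Forwarded-Proto header added by the load balancer is used to redirect clients that arrived over HTTP. This is an illustration, not part of the AWS guide above.

// Hypothetical example: app behind an ALB that terminates HTTPS.
const express = require('express');
const app = express();

// Trust the load balancer so req.protocol reflects X-Forwarded-Proto.
app.enable('trust proxy');

app.use((req, res, next) => {
  if (req.protocol !== 'https') {
    // Client reached the ALB over plain HTTP: send them to HTTPS.
    return res.redirect(301, 'https://' + req.get('host') + req.originalUrl);
  }
  next();
});

app.get('/', (req, res) => res.send('served via the HTTPS listener on the ALB'));
app.listen(process.env.PORT || 8080);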

I run a timer for each HTTP request I get; I want to add a load balancer, but the second request may not be sent to the same backend server

I run a timer per user API request (over HTTP).
I want to grow horizontally (by adding servers), but if I put servers behind a load balancer the user may not be sent to the same backend server for the second request, and my timing function wouldn't work.
If I could use cookies it would be easy with sticky sessions.
I can recognize the user via a parameter in the URL, but I would prefer not to create my own load-balancing scheme using Nginx or similar solutions.
If that helps:
- App is in nodejs
- Hosted at DigitalOcean.
Anyone struck by a great idea?
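No answer is recorded here, but one possible direction, sketched below under the assumption of an Express app with a shared Redis instance, is to key the timer on the URL parameter the asker already has, so the second request can land on any backend and still find the start time. Route names, keys, and environment variables are hypothetical.

// Hypothetical example: shared timer state in Redis instead of sticky sessions.
const express = require('express');
const { createClient } = require('redis'); // node-redis v4+
const app = express();
const redis = createClient({ url: process.env.REDIS_URL });

app.get('/start/:userId', async (req, res) => {
  // Record the start time under the user's key, with a 1-hour expiry.
  await redis.set(`timer:${req.params.userId}`, String(Date.now()), { EX: 3600 });
  res.send('timer started');
});

app.get('/stop/:userId', async (req, res) => {
  const started = await redis.get(`timer:${req.params.userId}`);
  if (!started) return res.status(404).send('no timer running');
  // Any backend behind the load balancer can compute the elapsed time.
  res.json({ elapsedMs: Date.now() - Number(started) });
});

redis.connect().then(() => app.listen(process.env.PORT || 3000));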

Identify Internal Request Nginx + NodeJS Express

I have a NodeJS API using the Express framework.
I use Nginx for load balancing between my NodeJS instances, and PM2 to spawn the instances.
I noticed in the logs that Nginx makes some "dummy"/internal requests, probably to check whether an instance is up ("heartbeat requests" might be the appropriate name for them).
My question is: what is the right way to identify these "dummy"/internal requests in my API?
I'm fairly certain that nginx only uses passive health checks for upstream servers. In other words – because all HTTP requests are assumed to result in a response, nginx says "If I send this server a bunch of requests and don't get responses for them, I'll consider the server to be unhealthy".
Can you share some access logs of the requests you're seeing?
As far as I know, nginx does not send any requests to upstream servers that are not ultimately initiated by a client.
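To help narrow that down, here is a possible logging middleware, assuming an Express app (the fields logged are just illustrative choices): it records enough of each incoming request to tell where the traffic is actually coming from. When nginx proxies a request and is configured with proxy_set_header X-Forwarded-For, that header will show the original client.

// Hypothetical example: log request origin details to identify unknown traffic.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  console.log(JSON.stringify({
    time: new Date().toISOString(),
    remote: req.socket.remoteAddress, // nginx's address when proxied
    forwardedFor: req.headers['x-forwarded-for'] || null,
    method: req.method,
    url: req.originalUrl,
    userAgent: req.headers['user-agent'] || null,
  }));
  next();
});

app.get('/health', (req, res) => res.send('ok'));
app.listen(process.env.PORT || 3000);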

How to setup an API server with no domain name

I've been struggling to understand this because I don't quite know what to search for. Basically, I'm working on a simple Node server that just works as an API to be consumed by a mobile application. I'm planning to deploy it to DigitalOcean, but since I don't need a domain name (I don't have a website), how will I send HTTP requests to the server? My guess is something involving the droplet's IP, but that doesn't seem quite right.
Just send requests to the droplet's IP address and port from your mobile app, e.g. GET http://54.54.32.23:3000/user-newsfeeds/15
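A minimal sketch of that suggestion, assuming an Express server; the route and IP are just the example values from the answer above:

// Hypothetical example: API reachable by droplet IP and port, no domain name.
const express = require('express');
const app = express();

app.get('/user-newsfeeds/:id', (req, res) => {
  res.json({ userId: req.params.id, items: [] });
});

// Bind to 0.0.0.0 so the server accepts connections on the droplet's public IP.
app.listen(3000, '0.0.0.0', () => console.log('listening on port 3000'));

// From the mobile app (or curl), something like:
//   GET http://54.54.32.23:3000/user-newsfeeds/15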

How to get full request URL in Node.js running on Heroku

I'm running a Node application on Heroku and I would like to find the full request URL through which my application is being requested. In particular, I want to know whether it was accessed over HTTP or HTTPS, so that I can redirect clients connecting over HTTP to the same URL over HTTPS instead.
Since the application runs behind proxies, the protocol and host portions I can read from the request are those of the Node process itself, as forwarded by Heroku's infrastructure.
Hints appreciated!
BTW, my app uses requestjs, in case that is relevant
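No answer is shown here, but one common approach, sketched below assuming an Express app, is to enable Express's 'trust proxy' setting so req.protocol reflects the X-Forwarded-Proto header that Heroku's router sets, then rebuild the full URL from protocol, host, and path.

// Hypothetical example: reconstruct the full URL and redirect HTTP to HTTPS on Heroku.
const express = require('express');
const app = express();

// Trust Heroku's router so req.protocol uses X-Forwarded-Proto.
app.enable('trust proxy');

app.use((req, res, next) => {
  const fullUrl = req.protocol + '://' + req.get('host') + req.originalUrl;
  if (req.protocol === 'http') {
    return res.redirect(301, fullUrl.replace(/^http:/, 'https:'));
  }
  console.log('requested via', fullUrl);
  next();
});

app.get('/', (req, res) => res.send('hello over HTTPS'));
app.listen(process.env.PORT || 5000);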
