I am relying on the Steam API for my website to work. At first everything worked perfectly fine, but now that the site is gaining popularity I keep getting 429 responses as the API hits its rate limit. I am hosting the site on an EC2 instance using pm2 and nginx. Is there a way to avoid hitting these rate limits?
I have already changed the code so it doesn't fetch from the API unless needed, but the problem persists. Is there something I can do with nginx to avoid this issue? I am hosting the API on port 3005. I was thinking I could set up several API instances on different ports, but this seems really tedious. An alternative would be to host the API on a different EC2 instance, but I ran into session problems doing it that way.
server {
    root /home/ubuntu/apps/norskins-app/client/build;
    index index.html index.htm index.nginx-debian.html;
    server_name mywebsite.com;

    location / {
        try_files $uri /index.html;
    }

    location /api {
        proxy_pass http://localhost:3005;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    # ...plus a bunch of other directives set up by Certbot
}
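For example, would something like nginx's built-in proxy_cache in front of /api be a sensible way to take pressure off the Steam API? A rough sketch of what I have in mind (the cache zone name, path and timings below are placeholders, not values from my setup):

# In the http {} block, e.g. /etc/nginx/nginx.conf:
proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:10m
                 max_size=100m inactive=10m use_temp_path=off;

# Replacing the /api location above:
location /api {
    proxy_cache api_cache;
    proxy_cache_valid 200 60s;                               # reuse successful responses for a minute
    proxy_cache_use_stale updating error timeout http_429;   # serve stale data instead of hammering the API
    add_header X-Cache-Status $upstream_cache_status;        # shows hits/misses while testing

    proxy_pass http://localhost:3005;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}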
I have the same question: I cannot load more than 3 inventories per minute.
Related
I've created a RESTful API with Node.js and I'm planning to use Sapper/Svelte for the front end. In the end these will be separate apps, and I want to run them on the same server under the same domain. Is this approach reasonable? If it is, what should my nginx configuration file look like? If not, what should my approach be?
This is my conf for the API:
server {
    server_name domain.name;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    ...
}
Following best practice, your API will live under BASE/api/. That allows you to host the backend and the frontend on the same server:
server {
    server_name domain.name;

    location /api/ {   # Backend
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        ...
    }

    location / {       # Frontend
        root /app-path/;
        index index.html;
        try_files $uri $uri/ /index.html;
        ...
    }
}
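One detail worth knowing with this layout (standard nginx proxy_pass behaviour, not something from the config above): whether the backend sees the /api/ prefix depends on whether proxy_pass ends with a URI. Compare:

# The backend receives the full path, e.g. /api/users
location /api/ {
    proxy_pass http://localhost:5000;
}

# The /api/ prefix is replaced, so the backend receives /users
location /api/ {
    proxy_pass http://localhost:5000/;
}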
Since this is your first Svelte/Sapper project, I would keep things separate and see if you can get Svelte talking to the API through nginx first. Decouple things and ship the Svelte app on GitLab Pages or whatever other CI destination you prefer.
If it comes time to run with Sapper, my advice remains the same: have it hit your API externally to keep your projects clear and distinct. You launched the API before the front end, and that's fine; I don't see why your nginx config needs to know where the front end will run, or what entwining them would gain you.
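If the front end does end up on a different origin (GitLab Pages, for instance), the API's nginx block will also need CORS headers. A minimal sketch, assuming the same localhost:5000 backend and a placeholder front-end origin:

location / {
    proxy_pass http://localhost:5000;

    # Allow the (placeholder) front-end origin to call the API from the browser
    add_header Access-Control-Allow-Origin "https://your-frontend.example" always;
    add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;

    # Answer CORS preflight requests directly in nginx
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin "https://your-frontend.example" always;
        add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
        add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;
        return 204;
    }
}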
I have an ASP.NET Core 2 Web API project and a CentOS 7 Linux server.
I run the project on the server as a service.
It is running 24/7 on the server.
If I run "wget http://localhost:5000/api/users --no-check-certificate" from the Linux terminal, the users JSON is downloaded on the server, so there is no problem there.
But I can't access the API from my local computer.
If I enter "http://[SERVER_IP]:[PORT]/api/users" in a web browser, it returns a 502 Bad Gateway status code.
How can I fix it?
/etc/nginx/nginx.conf
http {
    ...
    server {
        listen 12900;

        location / {
            proxy_pass http://195.201.150.228:5000;
        }
    }
}
I solved the problem. My server runs the Vesta control panel, so I must not put the server block in /etc/nginx/nginx.conf. I created a user and a subdomain named api.hocamnerede.com and put the server block in /home/Hocamnerede/conf/web/api.hocamnerede.com.nginx.conf.
/home/Hocamnerede/conf/web/api.hocamnerede.com.nginx.conf
server {
    listen 195.201.150.228:80;
    server_name api.hocamnerede.com www.api.hocamnerede.com;
    root /home/Hocamnerede/web/api.hocamnerede.com/public_html;
    index index.php index.html index.htm;
    access_log /var/log/nginx/domains/api.hocamnerede.com.log combined;
    access_log /var/log/nginx/domains/api.hocamnerede.com.bytes bytes;
    error_log /var/log/nginx/domains/api.hocamnerede.com.error.log error;

    location /api {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Another possible cause of a 502 Bad Gateway (and of "request blocked by CORS policy" errors) in ASP.NET Core is a deadlock in the database; that too can surface as a 502 in the browser console.
Database deadlock: we read/write data asynchronously, the database gets locked under a large volume of transactions, and the service never gets a response.
Solution: increase the timeout in the web.config file.
Solution: either keep clearing all the sessions on the database side, or convert the async insert/select/update methods to synchronous calls on the service side.
Side effect of synchronous calls: application performance will suffer.
We have a Node.js app that currently uses socket.io (with namespaces). The app is used as a dashboard for a specific financial market: each instance subscribes to that market's data and serves a dashboard for it. Initially we ran 3 separate instances configured for 3 separate markets on the server, each binding to its own port.
Since we plan to add more markets, it makes sense to put a reverse proxy in front so a single port (with a separate URI per market) can be used. However, setting up nginx has been a nightmare for various reasons.
(a) Each market's instance can be at a different development stage and hence can have different static files. Managing all the static files via nginx seems painful. What can be done to leave handling of the static files to the app itself?
(b) socket.io communication fails. Looking at the network traffic, it keeps getting 404 Not Found errors when trying to connect to the socket.io server. Not sure why it is connecting via http://localhost/server.io/ instead of ws://localhost/server.io/. Can somebody point us to a similar example? Is there anything else that needs to be taken care of?
In our case we have been trying the following inside nginx sites-available/default:
location /app/ {
    proxy_pass http://localhost:3000/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # kill cache
    add_header Last-Modified $date_gmt;
    add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    if_modified_since off;
    expires off;
    etag off;
}
Using nginx as a reverse proxy should not give you a hard time. The great thing about nginx is that you can have multiple projects on the same server under different domains.
Here is an example of nginx with multiple projects:
server {
    listen 80;
    server_name yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        # Remember to set these headers, otherwise the socket might not work.
        proxy_set_header X-Real-Ip $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name subdomain.yourdomain.com;

    location / {
        proxy_pass http://localhost:3001;
    }
}
I'm not sure why your socket should fail. Perhaps the mistake is that you try to define the route on the client side. Try having the JavaScript like this:
var socket = io();
or if your socket runs on one of your other applications:
var socket = io('http://yourdomain.com');
And remember that your changes should be added to sites-enabled instead of sites-available.
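If you would rather keep the /app/ path prefix from the question instead of switching to subdomains, a dedicated location for the socket.io handshake path usually gets rid of the 404s. A sketch, assuming the app still listens on port 3000:

location /app/socket.io/ {
    proxy_pass http://localhost:3000/socket.io/;   # hand the default /socket.io/ path to the app
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}

The socket.io client then has to be told to use /app/socket.io as its path, otherwise it keeps polling the default /socket.io/ URL and gets 404s.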
So right now I am serving my backend app on mysite:4300 and my static site on mysite:80. This works, but it causes a few problems. One is SSL: I would have to get two signed certificates, at least I think so.
Another is CORS. It's not a big issue, since my Express app is configured to allow CORS, but I would like to serve everything under one origin.
Here is how my nginx config looks.
I created it at /etc/nginx/conf.d/mysite.com.conf:
server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_pass http://localhost:3100;   # Node.js port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
So basically the above lets me serve my Node.js app (running with forever.js) from port 3100 on port 80.
This obviously hijacks my static site, so I was wondering how I could configure it to serve the app on mysite.com/myApp instead.
My Question
How do I configure nginx to proxy not to mysite.com:80 but to mysite.com:80/myApp, so I can keep serving my static website on mysite.com:80?
Do I need to rethink how I am using the proxy, or is there a configuration method I can use?
P.S. I am following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-host-multiple-node-js-applications-on-a-single-vps-with-nginx-forever-and-crontab
Maybe I need to configure DNS records, or create a subdomain?
Solution: I ended up using a subdomain, but I think it's pretty much the same concept.
Thanks to @Peter Lyons I have this server block:
server {
    listen 80;
    server_name app.mysite.com;

    location / {
        root /var/www/mySite/public_html;
        proxy_pass http://localhost:3100;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location @app {
        proxy_pass http://localhost:3100;
    }
}
It wasn't working until I added the root and the location @app block; now it works fine on port 80, so there's no more port number exposed in the URL. Hopefully I can set up SSL for this now!
I think I am also going to try serving it as mysite.com/myApp to test it; that might be handy in the future.
P.S. I may still avoid using the subdomain, because it is still considered cross-origin (see: Are AJAX calls to a sub-domain considered Cross Site Scripting?).
I may want to allow my app to communicate with my site, and avoiding CORS might make that easier. That is, if mysite.com/myApp is not considered cross-origin either. We will see.
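For reference, the path-prefix variant I want to test would look roughly like this (my own untested sketch; it assumes the Express app can live behind a stripped /myApp/ prefix):

location /myApp/ {
    proxy_pass http://localhost:3100/;   # trailing slash strips the /myApp/ prefix before proxying
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
}

location / {
    root /var/www/mySite/public_html;    # the static site keeps the root
    index index.html;
}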
Try: proxy_pass http://localhost:3100/myApp$uri;, which I think should do what you want.
When I want nginx to serve static files, I use try_files like this:
location / {
    root /path/to/my/app/wwwroot;
    try_files $uri $uri.html $uri/index.html @app;
}

location @app {
    proxy_pass http://localhost:3100;
}
I have built a small Node.js app using Express. Everything works fine on my local machine, and the app runs well when I point my browser at http://localhost:3000.
Now I am planning to host this app on a domain, let's say http://example.org, which is already running on nginx and serving PHP code.
How do I make my Express app run properly at example.org/nodeapp?
Currently it treats example.org as the base URL of my app and therefore throws a 404, because it looks for /nodeapp among my routes. Ideally it should treat example.org/nodeapp as the base URL.
In my server block config I have the following:
listen 80;
server_name your-domain.com;

location /nodeapp {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
You could have nginx rewrite the url by adding something like rewrite ^/nodeapp/(.*) /$1 break;
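Put together with the location block from the question, that would look something like this (a sketch; the app may still need to generate links that include the /nodeapp prefix):

location /nodeapp {
    rewrite ^/nodeapp/(.*) /$1 break;   # strip the /nodeapp prefix before proxying
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}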