Linux: redirect a localhost port to another host's port

I need to redirect localhost:8080 to http://url:8080/.
Some background:
I am using Docker swarm stack services. One service (MAPS) runs a simple HTTP server on port 8080 that serves a list of XML files, and another service (WAS) runs WebSphere Application Server, which has a connector that uses these files. To be more precise, the connector reads a file, maps.xml, that lists the URLs of the other files as http://localhost:8080/<file-name>.xml.
I know Docker lets services reach each other by service name and port, so I can run curl http://MAPS:8080/ from inside my WAS service and it outputs my list of XML files.
However, this will not always be true: the prod team may change the port number they publish, or they might update the maps.xml file and forget to change localhost:8080 to MAPS:8080.
Is there a way to make any call to localhost:8080 get redirected to another URL, preferably via a configuration file? It also needs to be lightweight, since the WAS service is already quite heavy and I can't make it much larger to deploy.
Solutions I tried:
iptables: Installed it in the WAS service container, but when I tried to use it, it complained that my kernel was outdated
tinyproxy: Tried setting it up as a reverse proxy, but I couldn't make it work
ncat with inetd: Tried this approach as well, but it also didn't work
I am NO expert so please excuse any noob mistakes I made. And thanks in advance!

It is generally not a good idea to redirect localhost to another location as it might disrupt your local environment in surprising ways. Many packages depend on localhost being localhost :-)
It is, however, possible to add MAPS to your hosts file (/etc/hosts), giving it the address of the MAPS service.
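A minimal sketch of that idea, assuming the MAPS service's overlay address is 10.0.0.5 (that address is invented for the example), is one extra line in /etc/hosts:

    10.0.0.5    MAPS

If the requirement really is that localhost:8080 reach MAPS, a small TCP forwarder inside the WAS container is another lightweight option; socat is not mentioned in this thread, so treat the line below as an assumption rather than the accepted fix:

    # forward local port 8080 to the MAPS service on the overlay network
    socat TCP-LISTEN:8080,fork,reuseaddr TCP:MAPS:8080 &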

Related

Throttling/Restricting localtunnel-server traffic

We've developed a server software and for ease of use for end-users, we are using the localtunnel-server app on one of our linux servers to get around the need for port forwarding and messing around with firewalls.
The problem is that it seems to tunnel "all" traffic on port 80. However, we are afraid this could be abused. We would like to restrict the traffic somehow, and I wanted to know whether that is even possible.
For example, let's say our app uses the "/myapp" virtual directory on the localhost website. So if a request is supposed to go to http://localhost/myapp/index.html then the traffic gets tunneled to http://mytunnel.myserver.com/myapp/index.html
The problem is, if there are other sites running on localhost, http://localhost/someotherapp also gets through. We'd like to block URLs that don't match a given format or don't contain a keyword such as "/myapp".
Is that even possible? And if so, any guidance on how to achieve this, would be greatly appreciated.
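One way this kind of path filtering is commonly handled is to put a reverse proxy such as nginx in front of the tunneled service and only pass through the paths you expect. The snippet below is only a sketch under that assumption; the listen port and the upstream address are invented for the example:

    server {
        listen 80;
        # only requests under /myapp are forwarded to the tunneled app
        location /myapp/ {
            proxy_pass http://127.0.0.1:3000;   # assumed internal app port
        }
        # everything else is rejected
        location / {
            return 403;
        }
    }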

Single domain on multiple servers

I have a domain with multiple active users and several applications hosted on it.
Domain: www.domain.com and running on server IP: XXX.XXX.XXX.1
I want to run www.domain.com/business on server IP: XXX.XXX.XXX.2
and similarly to run www.domain.com/hosting on server IP: XXX.XXX.XXX.3
It is very similar to Google scenario:
www.google.com runs on XXX.XXX.173.1 - XXX.XXX.185.1
www.google.com/+dinesh on XXX.XXX.186.1 -XXX.XXX.187.1
I have seen a lot of articles on managing DNS and virtual entries, but I have been unable to find the correct answer.
Another way to do this is to make the host portions slightly different, i.e.:
business.domain.com/business
hosting.domain.com/hosting
You would then use these links where you are currently putting www.domain.com/business and www.domain.com/hosting. It's then a simple matter to have those different hostnames point at different addresses.
In general, it's not possible to have URLs with the same host point to different IP addresses on the basis of the stuff after the hostname. I cannot seem to verify your Google example (from where I'm looking, they both go to the same set of addresses). If you've more information on how you determined those addresses, please post that and maybe something else can be suggested.
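In DNS terms, the subdomain approach above just means separate A records; the sketch below reuses the placeholder addresses from the question, so replace them with your real server IPs:

    business.domain.com.   IN  A   XXX.XXX.XXX.2
    hosting.domain.com.    IN  A   XXX.XXX.XXX.3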
You can manage it through load balancing rather than running it on different servers.
Use a reverse proxy in front of the application servers.
Consider using nginx or Apache httpd.
These can be configured to route (technically, proxy) requests to the desired app servers by inspecting the context path in the URL.
If you choose to use nginx, see this post on how to configure it for such a use case, and the nginx configuration documentation for additional details.
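A rough sketch of that path-based proxying in nginx, reusing the placeholder addresses from the question (the backend port 8080 is an assumption; substitute the real IPs and ports):

    server {
        listen 80;
        server_name www.domain.com;

        # default application stays on the first server
        location / {
            proxy_pass http://XXX.XXX.XXX.1:8080;
        }
        # /business is proxied to the second server
        location /business/ {
            proxy_pass http://XXX.XXX.XXX.2:8080;
        }
        # /hosting is proxied to the third server
        location /hosting/ {
            proxy_pass http://XXX.XXX.XXX.3:8080;
        }
    }

nginx picks the most specific matching location block, so /business and /hosting requests bypass the default backend.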

Why does ec2 instance not display my website? Using nodejs

I am running an m1.micro instance on AWS, using CentOS. I downloaded Yeoman, git and npm, and all of the dependencies are present. I am trying to run a MEAN stack on this server: Mongo, Express, Angular and Node. However, when I visit my public DNS, my site gives me this error: "Oops! Google Chrome could not connect to ec2-54-191-0-63.us-west-2.compute.amazonaws.com". In my admin control panel I can see my instance status, and it says it is running. I understand that if I had used Apache, the page that displays would live in the /var/www/html directory. So how do I get a directory similar to Apache's, to display my HTML files, or whatever I would like the public to see? I have my security groups configured to allow inbound SSH (port 22) and HTTP (port 80) from everyone.
Yeoman set up a nice app folder for me, but for some reason it does not display. I thought maybe I was missing a server.js, but adding one did not seem to fix anything. Any advice? Thanks!
Make sure you are matching the port all the way through: your browser's URL:port, the EC2 routing rules, and your Node.js settings. It looks like you might be listening on a port higher than 80 on the server.
As you mentioned in your comment, if you want to listen on a port below 1024 you will need to run the command as a privileged user.
I didn't run node as root on my AWS server, so it was not serving the nicely built app that Yeoman made for me.
http://www.stackoverflow.com/questions/9164915 was where I realized my mistake. I am new to the Linux OS, so I am still learning. :)
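To illustrate the port-matching point, a minimal Node server bound explicitly to port 80 might look like the sketch below; the file name and response text are invented for the example, and binding to 80 requires a privileged user, which is exactly the mistake described above:

    // server.js - listen on port 80 so the browser's default port matches
    var http = require('http');

    http.createServer(function (req, res) {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('Hello from EC2\n');
    }).listen(80, function () {
        console.log('Listening on port 80');
    });

Run it with sudo node server.js, or keep Node on a high port such as 8080 and open that port in the security group instead.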

How to use Node js in conjunction with Webmin

I have a server running Webmin (different domains pointing to different apps/directories). Currently I can have my PHP app running from a directory, and all I need to do to make it live is get Webmin to point that domain at that specific directory.
Can I do the same with a Node.js app? If not, how can I use Node and Webmin on the same box?
I know you didn't say this specifically, but assuming you're hosting the other web stuff through, say, Apache, you would need to leverage that, but you can probably get the effect you want. Basically, it sounds like you want to be able to use "host header" separation for services, rather than having a separate IP address for, say, Apache and Node.js to each use.
So, if you let Apache bind to the main port you're using (80/443/both), then you would run Node and have it configured to listen on a different port (say 8080, as in the example you left in another comment). You can then use mod_proxy in Apache and have it route requests with certain domain names to Node. Here's a more concrete example of this, but really the idea is not specific to Node. It can apply to any other process that wants to respond to HTTP requests on your server (or even on a different server).
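A hedged sketch of what that mod_proxy virtual host could look like (node.example.com and port 8080 are assumptions drawn from the discussion, not something Webmin writes for you; mod_proxy and mod_proxy_http need to be enabled):

    <VirtualHost *:80>
        ServerName node.example.com

        # hand every request for this hostname to the Node app on port 8080
        ProxyPreserveHost On
        ProxyPass        /  http://127.0.0.1:8080/
        ProxyPassReverse /  http://127.0.0.1:8080/
    </VirtualHost>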

Node.js introduction

Please pardon my ignorance of Node.js. I have started reading about Node.js and have formed some impressions which might be wrong, so I need them clarified.
When we use the createServer() method, does it create a virtual server? Not sure whether the term "virtual" is appropriate, but it's the best way I can describe it :)
I am confused about how I should deploy my application, which has Node.js + other custom JS files as part of it. If I deploy my application on the main server, does that mean I have two servers?
Thanks for bearing with me.
I will try to answer that:
Q1:
createServer basically creates a server which listens on the specified port for requests. So yes, you can call it a virtual server which constantly listens for requests on that port.
Q2:
Yes, you can say that you now have two servers.
For example: your server initially had Apache, which listens on port 80 (you can access it as http://example.com/; the browser uses port 80 by default),
and then you also start the Node service listening on some other port, for example port 8456 (you can access it as http://example.com:8456/, which goes to port 8456).
So yes, you can say there are two servers.
EDIT
Q: So what would be the difference if the page is served by the physical server and the virtual server created by node.js?
Physical Server and Node Server are 2 different things and there is no way a single request is going to both the servers.
For eg:
I use apache server to host my website running on PHP. It serves all the html contents of my website (which involves connecting to mysql for data).
Some of the requests could be:
http://example.com/reports.php
http://example.com/search.php
At the other end I might be using a Node.js server for a totally different purpose. For example, I might use it for an API which returns JSON/XML. I can use this API myself for some dynamic content by making AJAX calls with JavaScript or simple curl calls from PHP. Or I might also make this API available to the public.
Some of the requests could be:
http://example.com:8456/getList?apikey=&param1=&param2=
My choice of a Node.js server for the API would be for its ability to handle concurrent requests; since its file operations are asynchronous, it will be much faster than PHP for this.
In this case I have a website which is not just running on PHP but is a combination of two different technologies (PHP on Apache and Node.js), and hence the two servers are totally different: they run on the same machine but each has its own execution space.
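As a small sketch of that kind of API endpoint (port 8456 comes from the earlier example; the path, parameters and response fields are invented):

    // api.js - Node.js JSON API running next to Apache, on port 8456
    var http = require('http');
    var url = require('url');

    http.createServer(function (req, res) {
        var parsed = url.parse(req.url, true);   // true -> also parse the query string
        if (parsed.pathname === '/getList') {
            res.writeHead(200, { 'Content-Type': 'application/json' });
            res.end(JSON.stringify({
                apikey: parsed.query.apikey || null,
                items: []                        // placeholder payload
            }));
        } else {
            res.writeHead(404, { 'Content-Type': 'application/json' });
            res.end(JSON.stringify({ error: 'not found' }));
        }
    }).listen(8456);

A PHP page could then fetch http://example.com:8456/getList?apikey=... with curl or an AJAX call, while Apache keeps serving the normal pages on port 80.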
Third Question:
So what would be the difference if the page is served by the physical server and the virtual server created by node.js?
If I might add, it's a virtual server in the sense that Apache is also a virtual HTTP server listening on whatever port. Of course Apache has a lot more modules, plugins and configuration to it, whereas Node's is lighter (kind of like WEBrick for Rails), non-blocking and agile to build on. Then again, Apache is more stable. In other words, it's a decision about software; both sit on the server listening on a particular port set by you.
That said, there are deployment methods that let you place a Node application behind software such as nginx (another piece of server-side software) or HAProxy (a load balancer with a lot of power), so really it's all up to how you choose to configure it.
Maybe I'm getting too far from your question, but I hope this helps!
Also, you should give the answer to the other guy, he came first ;)
