On my main server, I fetch data from an external/separate Redis server, which is accessed through an API (https://localhost:7000/api/?token=****), and that works. However, the token and the API are not secure, and since I want the Redis server to stay separate, this technique isn't suited for my case.
In my case I want to have 2 independent servers A and B.
A should load data from B without using an API or URL call... Instead it should use a port (e.g. //server:123), so that server B can only be accessed from A.
I want this approach to work for both development and production. AWS has "Server Groups" I believe, but that's production only...
So is there a way to create this kind of connection with Node.js? I also want to know whether this is only possible with an already running server, since I don't have one yet.
Note: In case you are wondering, I use redis to store private keys for encryption, so I need a secure, separate server which can be controlled independently
It is not very clear what you're trying to do since accessing data from another server without using an API does not really make sense. Anything you do to access it is some type of API.
If you want to make it so that only server A can access server B, then you have a number of choices to make that secure:
1) Require authentication whenever server B is accessed and make it so that only server A has those authentication credentials.
2) Assuming server A and server B are in your same server infrastructure, put the server B API on a port that is not available to the outside world, but is only available from within your server infrastructure (this usually involves picking a port that your firewall to the outside is blocking access to).
3) On server B, only accept connections to its API from the specific IP address of server A.
You can even implement more than one of these options at once. For example, it's not uncommon to use 1) and 2) together.
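As a very rough sketch of what 1) and 3) might look like on server B, assuming Express (the header name, port and IP addresses below are made up for illustration, not a fixed convention):
const express = require('express');
const app = express();

const SHARED_SECRET = process.env.API_SECRET;   // only server A is given this value
const ALLOWED_IPS = ['10.0.0.5'];               // server A's internal IP (example)

app.use((req, res, next) => {
  const callerIp = req.socket.remoteAddress;    // may look like '::ffff:10.0.0.5'
  const token = req.get('x-api-token');         // hypothetical header name
  const ipOk = ALLOWED_IPS.some((ip) => callerIp && callerIp.endsWith(ip));
  if (!ipOk || !SHARED_SECRET || token !== SHARED_SECRET) {
    return res.status(403).send('Forbidden');
  }
  next();
});

app.get('/keys/:id', (req, res) => {
  // ...look the value up in redis here and return it...
  res.json({ id: req.params.id });
});

// In the spirit of 2): listen only on an internal interface/port that the
// outside firewall blocks, so the API is never reachable from the internet.
app.listen(4000, '10.0.0.6');
Server A would then send the same x-api-token header with every request it makes to B.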
Stunnel is built for exactly that! Roughly speaking it's a VPN, but for individual ports rather than whole machines. It's a bit involved: you will have to deal with certificates and a couple of other things (configuring both servers...), but once it's done it's a breeze to launch and reuse (just run a file). Give it a try!
Also see this link: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-ssl-tunnel-using-stunnel-on-ubuntu
You should also consider adding an iptables rule on the database server to allow access from your application server only.
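Once the tunnel is in place, the application side barely changes: the Node.js client just talks to the local end of the tunnel. A minimal sketch, assuming the `redis` npm package (v4+) and a tunnel listening on 127.0.0.1:6379:
// The app only ever talks to the local end of the tunnel; stunnel carries the
// traffic to the remote Redis box over TLS. (Key name and value are placeholders.)
const { createClient } = require('redis');

const client = createClient({ url: 'redis://127.0.0.1:6379' });
client.on('error', (err) => console.error('Redis error', err));

async function main() {
  await client.connect();
  await client.set('private-key:some-id', 'value-goes-here');
  console.log(await client.get('private-key:some-id'));
  await client.quit();
}

main();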
Edit:
Keep in mind that Redis was designed to be used in a trusted environment. This means that the security layer will not be Redis itself but third-party software that you'll need to set up.
For dev purposes there is no need to make this bulletproof. And even if you wanted to, it's quite hard to do, because the security of your app mainly depends on the infrastructure of the company that will host it.
That being said, if you want to secure a Redis instance in a localhost environment, an iptables rule allowing only localhost to access port 6379 will be sufficient.
The other thing that could compromise the security of your Redis DB is the app itself. An important aspect of this is to validate EVERYTHING; that should be a good start.
Finally if you want to dive a bit deeper take a look at this link
https://www.digitalocean.com/community/tutorials/how-to-secure-your-redis-installation-on-ubuntu-14-04
Hope this helps!
My team and I are working on a digital signage platform.
We have ~2000 Raspberry Pis around the world connected to a Node.js server using Socket.IO. The Raspberry Pis initiate the connection.
We would like to be able to scale our application horizontally across multiple servers, but we have a problem that we can't figure out.
Basically, the application stores the sockets of the connected Raspberry Pis in an array.
We have an external program that calls the API on the server; the server then searches for which sockets will be "impacted" by the API call and sends them the information.
After a lot of searching, we assume that we have to store the sockets (or their IDs) elsewhere (Redis?) to make the application stateless. Then any server can respond to an API call and look the sockets up in a central place.
Unfortunately, we can’t find any detailed example on how to do that.
Can you please help us ?
Thanks
(You can't store sockets from multiple server instances in a shared datastore like redis: they only make sense in the context of the server where they were initiated).
You will need a cluster of node.js servers to handle this. There are various ways to make a cluster. They all involve directing incoming connections from your RPis to a "generic" hostname, for example server.example.com. Behind that server.example.com hostname will be multiple node.js servers.
Each incoming connection from each RPi connects to just one of those multiple servers. (You know this, I believe.) This means one node.js server in your cluster "owns" each individual RPi.
(Telling you how to rig up a cluster of node.js servers is beyond the scope of this answer. Hints: round-robin DNS or a reverse-proxy nginx front end.)
Then, you want to route -- to fan out -- the incoming data from each API call to each server in the cluster, so the server can route it to the RPis it owns.
Here's a good way to handle that:
Set up a redis cache or other shared data store. It can be very small.
When each node.js server starts, have it register itself as active. That is, have it place its own specific address for handling API calls into the shared data store. The specific address is probably of the form 12.34.56.78:3000: that is, an IP address and port.
Have each server update that address every so often, once a minute or so, to show it is still alive.
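A rough sketch of that registration/heartbeat step, assuming the `redis` npm package (v4+); the key naming scheme, address and intervals are just examples:
// Each node.js server registers itself in redis under a per-instance key with a
// TTL, and refreshes it periodically as a heartbeat so stale entries expire.
const { createClient } = require('redis');
const os = require('os');

const redis = createClient({ url: 'redis://127.0.0.1:6379' });
const MY_ADDRESS = process.env.MY_ADDRESS || '12.34.56.78:3000'; // this instance's API address
const KEY = `servers:${os.hostname()}:${process.pid}`;

async function start() {
  await redis.connect();
  const register = () =>
    redis.set(KEY, MY_ADDRESS, { EX: 90 })                 // expires if we stop refreshing
      .catch((err) => console.error('heartbeat failed', err));
  await register();
  setInterval(register, 60 * 1000);                        // refresh once a minute

  process.on('SIGTERM', async () => {                      // unregister on shutdown if possible
    await redis.del(KEY);
    process.exit(0);
  });
}

start();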
When an API call arrives at server.example.com, it will come to a more-or-less randomly chosen node.js server instance.
Get that server to read the list of server addresses from the redis cache
Get that server to repeat the API call to all servers except itself. Add a parameter like repeated=yes to the repeated API calls.
Then, each server looks at its list of connected sockets and does what your application requires.
On server shutdown, have the server unregister itself -- remove its address from redis -- if possible.
In other words, build a way of fanning out the API calls to all active node.js servers in your cluster.
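And a sketch of the fan-out itself, assuming Node 18+ (for the global fetch), the registration keys from the sketch above, and a hypothetical /notify endpoint:
// Look up every registered server address in redis and repeat the API call to all
// of them except ourselves, adding repeated=yes so they don't fan it out again.
async function fanOut(redis, myAddress, path, body) {
  const keys = await redis.keys('servers:*');              // fine while the cluster is small
  const addresses = await Promise.all(keys.map((k) => redis.get(k)));

  await Promise.all(
    addresses
      .filter((addr) => addr && addr !== myAddress)
      .map((addr) =>
        fetch(`http://${addr}${path}?repeated=yes`, {
          method: 'POST',
          headers: { 'content-type': 'application/json' },
          body: JSON.stringify(body),
        }).catch((err) => console.error(`fan-out to ${addr} failed`, err))
      )
  );
}

// Example use inside an Express handler (names are hypothetical):
// app.post('/notify', async (req, res) => {
//   if (req.query.repeated !== 'yes') {
//     await fanOut(redisClient, MY_ADDRESS, '/notify', req.body);
//   }
//   sendToMyOwnSockets(req.body);   // push to the RPis this instance owns
//   res.sendStatus(200);
// });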
If this must scale up to a very large number (more than a hundred or so) of node.js servers, or to many hundreds of API calls a minute, you probably should investigate using message queuing software.
SECURE YOUR REDIS server from random cybercreeps on the internet.
I want to host a web app with node.js on a Linux virtual machine using the HTTP module.
As the app will be visualising sensitive data I want to ensure it can only be accessed from PCs on the same LAN.
My understanding is that using the HTTP module a web server is created that's initially only accessible by other PCs on the same LAN. I've seen that, either by tunnelling or port forwarding, a node.js server can be exposed if desired.
Question
Are there any other important considerations/ways the server could be accessed externally?
Is there a particular way I can setup a node.js server to be confident that it's only accessible to local traffic?
It really depends what you are protecting against.
For example, somebody on your LAN could port forward your service using something like ngrok. There are a few things you can check for:
In this case the header x-forwarded-for is set. So, to protect against this you can check for this header on the incoming request, and if set you can reject the request.
The host header is also set and will indicate how the client referred to your service - if it is as you expect (maybe a direct local LAN address such as 192.168.0.xxx:3000) then all is OK, if not (I ran ngrok on a local service and got something of the form xxxxxxxx.ngrok.io) then reject it.
Of course a malicious somebody could create their own server to redirect requests. The only defence there is to put in usernames and passwords or similar. At least you then know who is (allegedly) accessing your service and can do something about it.
However, if you are not trying to protect against a malicious internal actor, then you should be good as you are - I can't think of any way (unless there is a security hole in your LAN) for your service to be made public without somebody actively setting that up.
My last suggestion would be to use something like express rather than the http module by itself. It really does make life a lot simpler. I use it a lot for just this kind of simple internal server.
Thought I'd add a quick example. I've tested this with ngrok and it blocks access via the public address but works fine via localhost. Change the host test to whatever local address (or addresses) you want to serve this service from.
const express = require('express');
const app = express();

// Reject any request that arrives through a proxy/tunnel (x-forwarded-for is set)
// or whose Host header is not the local address we expect to be reached on.
app.use((req, res, next) => {
  if (req.headers.host !== 'localhost:3000' || req.headers['x-forwarded-for']) {
    res.status(403).send('Invalid access!');
  } else next();
});

app.get('/', (req, res) => res.send('Hello World!'));

app.listen(3000, () => {
  console.log('Service started. Try it at http://localhost:3000/');
});
I would prefer using nginx as a proxy here and rely on nginx' configuration to accept traffic from local LAN to the node.js web server. If this is not possible, a local firewall would be the best tool for the job.
I'm making a Node.js application that will act as a server for other sites in different countries, and the data being transmitted will be business-related. I would like to know how I can send this data safely/securely.
I am currently using socket.io for my main (Master) server; at the other sites there are (Slave) servers that handle the data from the master server.
I have got this working in a local environment but want to deploy it to the other sites.
I have tried to Google this to see if anyone else has done it, but only came across socket.io sessions, and I don't know whether those fit (Server->Server) connections.
Any help or experience would be appreciated.
For server-server communication where you control both ends of the communication you can use WebSocket over HTTPS, you can use TCP over SSH tunnel or any other encrypted tunnel. You can use a PubSub service, a queue service etc. There are a lot of ways you can do it. Just make sure that the communication is encrypted either natively by the protocols you use or with VPN or tunnels that connect your servers in remote locations.
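For instance, here is a minimal sketch of an encrypted, mutually authenticated TCP channel using Node's built-in tls module; the certificate file names, host name and port are placeholders, and this is just one of the options mentioned above:
// master.js -- a TLS server that only accepts clients presenting a certificate
// signed by our own CA.
const tls = require('tls');
const fs = require('fs');

const server = tls.createServer(
  {
    key: fs.readFileSync('master-key.pem'),
    cert: fs.readFileSync('master-cert.pem'),
    ca: [fs.readFileSync('ca-cert.pem')],
    requestCert: true,          // require a client certificate...
    rejectUnauthorized: true,   // ...and reject clients our CA didn't sign
  },
  (socket) => {
    socket.setEncoding('utf8');
    socket.on('data', (msg) => console.log('from slave:', msg));
    socket.write(JSON.stringify({ type: 'hello' }));
  }
);

server.listen(8443);

// slave.js -- connects out to the master over TLS:
// const socket = tls.connect(8443, 'master.example.com', {
//   key: fs.readFileSync('slave-key.pem'),
//   cert: fs.readFileSync('slave-cert.pem'),
//   ca: [fs.readFileSync('ca-cert.pem')],
// }, () => socket.write(JSON.stringify({ type: 'business-data', payload: {} })));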
Socket.io is usually used as a replacement for WebSocket where there is no native support in the browser. It is rarely used for server to server communication. See this answer for more details:
Differences between socket.io and websockets
If you want a higher level framework with focus on real-time data then see ActionHero:
https://www.actionherojs.com/
For other options of sending real-time data between servers you can use some shared resource like a Redis database or some pub/sub service like Faye or Kafka, or a queue service like ZeroMQ or RabbitMQ. This is what is usually done to make things like that work across multiple instances of the server or multiple locations. You could also use a CouchDB changes feed, or a similar feature of RethinkDB to make sure that all of your instances get all the data as soon as it is posted by any one of them. See:
http://docs.couchdb.org/en/2.0.0/api/database/changes.html
https://rethinkdb.com/docs/changefeeds/javascript/
https://redis.io/topics/pubsub
https://faye.jcoglan.com/
https://kafka.apache.org/
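To make the Redis pub/sub option above a bit more concrete, here is a minimal sketch using the `redis` npm package (v4+); the channel name is invented, and the Redis connection itself would still need to be protected with one of the tunnels or VPNs mentioned below:
// Two (or more) servers subscribe to the same channel; any of them can publish.
// node-redis v4 needs a dedicated connection for subscribing.
const { createClient } = require('redis');

async function main() {
  const publisher = createClient({ url: 'redis://127.0.0.1:6379' });
  const subscriber = publisher.duplicate();
  await Promise.all([publisher.connect(), subscriber.connect()]);

  await subscriber.subscribe('business-data', (message) => {
    console.log('received:', JSON.parse(message));
  });

  // Publish a message that every subscribed server will receive.
  await publisher.publish('business-data', JSON.stringify({ site: 'A', total: 42 }));
}

main();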
Everything that uses HTTP is easy to encrypt with HTTPS. Everything else can be encrypted with a tunnel or VPN.
Good tools that can add encryption for protocols that are not encrypted themselves (like e.g. the Redis protocol) are:
http://www.tarsnap.com/spiped.html
https://www.stunnel.org/index.html
https://openvpn.net/
https://forwardhq.com/help/ssh-tunneling-how-to
See also:
https://en.wikipedia.org/wiki/Tunneling_protocol
Note that some hosting services may give you preconfigured tunnels or internal network interfaces that pass data encrypted between your servers located in different data centers of that provider. Some providers give you tools and tutorials to do that easily as well.
A beginner question, but I've looked through many questions on this site and haven't found a simple, straightforward answer:
I'm setting up a Linux server running Ubuntu to store a MySQL database.
It's important this server is as secure as possible; as far as I'm aware my main concerns should be incoming DoS/DDoS attacks and unauthorized access to the server itself.
The database server only receives incoming data from one specific IP (101.432.XX.XX), on port 3000. I only want this server to be able to receive incoming requests from this IP, as well as prevent the server from making any outgoing requests.
I'd like to know:
What is the best way to prevent my database server from making outgoing requests, and to ensure it receives incoming requests solely from 101.432.XX.XX? Would closing all ports except 3000 be helpful in achieving this?
Are there any other additions to the linux environment that can boost security?
I've taken very basic steps to secure my phpmyadmin portal (linked to the MySQL database), such as restricting access to solely my personal IP address.
To access the database server requires the SSH key (which itself is password protected).
A famous man once said "security is a process, not a product."
So you have a DB server that should ONLY listen to one other server for DB connections, and you have the specific IP for that one other server. There are several layers of restriction you can put in place to accomplish this:
1) Firewall
If your MySQL server is fortunate enough to be behind a firewall, you should be able to block out all connections by default and allow only certain connections on certain ports. I'm not sure how you've set up your db server, or whether the other server that wants to access it is on the same LAN or not or whether both machines are just virtual machines. It all depends on where your server is running and what kind of firewall you have, if any.
I often set up servers on Amazon Web Services. They offer security groups that allow you to block all ports by default and then allow access on specific ports from specific IP blocks using CIDR notation. I.e., you grant access in port/IP combination pairs. To let your one server get through, you might allow access on port 3000 to IP address 101.432.xx.xx.
The details will vary depending on your architecture and service provider.
2) IPTables
Linux machines can run a local firewall (i.e., a process that runs on each of your servers itself) called iptables. This is some powerful stuff, and it's easy to lock yourself out of your own server with it. There's a brief post here on SO, but you have to be careful. Keep in mind that you need to permit access on port 22 for all of your servers so that you can log in to them. If you can't connect on port 22, you'll never be able to log in using ssh again. I always try to take a snapshot of a machine before tinkering with iptables lest I permanently lock myself out.
There is a bit of info here about iptables and MySQL also.
3) MySQL cnf file
MySQL has some configuration options that can limit any db connections to localhost only - i.e., you can prevent any remote machines from connecting. I don't know offhand if any of these options can limit the remote machines by IP address, but it's worth a look.
4) MySQL access control via GRANT, etc.
MySQL allows you very fine-grained control over who can access what in your system. Ideally, you would grant access to information or functions only on a need-to-know basis. In practice, this can be a hassle, but if security is what you want, you'll go the extra mile.
To answer your questions:
1) YES, you should definitely try and limit access to your DB server's MySQL port 3000 -- and also port 22 which is what you use to connect via SSH.
2) Aside from the ones mentioned above, your limiting of phpMyAdmin to only your IP address sounds really smart -- but make sure you don't lock yourself out accidentally. I would also strongly suggest that you disable password access for ssh connections, forcing the use of key pairs instead. You can find lots of examples on Google.
What is the best way to prevent my database server from making outgoing requests, and to ensure it receives incoming requests solely from 101.432.XX.XX? Would closing all ports except 3000 be helpful in achieving this?
If you don't have access to a separate firewall, I would use iptables; there are a number of managers available for it. So yes. Remember that if you are using iptables, make sure you have a way of accessing the server OOB (short for out of band, which means accessing it in such a way that if you make a mistake in iptables, you can still access it via console/remote hands/IPMI, etc.).
Next up, when creating users, you should only allow that subnet range plus user/pass authentication.
Are there any other additions to the linux environment that can boost security? I've taken very basic steps to secure my phpmyadmin portal (linked to the MySQL database), such as restricting access to solely my personal IP address.
Ubuntu ships with something called AppArmor. I would investigate that. That can be helpful to prevent some shenanigans. An alternative is SELinux.
Further, take more steps with phpmyadmin. That is your weakest link in the security tool chain we are building.
To access the database server requires the SSH key (which itself is password protected).
If security is a concern, I would NOT use SSH-key-style access. Instead, I would use MySQL's native support for SSL certificate authentication. Here is how to configure it with phpMyAdmin.
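For completeness, if the client were a Node.js app rather than phpMyAdmin, the same idea would look roughly like this with the `mysql2` package (a sketch; the host, user, database and certificate file names are placeholders):
// Connect to MySQL over SSL with a client certificate. The MySQL user would be
// created with something like REQUIRE X509, so the certificate is required in
// addition to the normal credentials.
const fs = require('fs');
const mysql = require('mysql2/promise');

async function main() {
  const conn = await mysql.createConnection({
    host: 'db.internal.example',          // placeholder
    port: 3000,
    user: 'appuser',
    password: process.env.DB_PASSWORD,
    database: 'appdb',
    ssl: {
      ca: fs.readFileSync('ca-cert.pem'),
      cert: fs.readFileSync('client-cert.pem'),
      key: fs.readFileSync('client-key.pem'),
    },
  });
  const [rows] = await conn.query('SELECT 1 AS ok');
  console.log(rows);
  await conn.end();
}

main();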
Please pardon my ignorance on node.js. I have started reading on node.js and have some perception which might be wrong. So needed it to clarify
When we use the createServer() method, does it create a virtual server? Not sure whether the term "virtual" is appropriate, but it's the best I can describe it :)
I am confused about how I should deploy my application, which has node.js + other custom js files as part of it. If I deploy my application on the main server, does that mean I have two servers?
Thanks for bearing with me.
I will try to answer that:
Q1:
createServer basically creates a server inside your Node.js process which listens on the specified port for requests. So yes, you can call it a virtual server which constantly listens for requests on that port.
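For example, a minimal sketch using Node's built-in http module (the port is arbitrary):
// createServer() returns a server object inside your Node.js process;
// listen() makes it accept requests on the given port.
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node\n');
});

server.listen(8456, () => {
  console.log('Listening on http://localhost:8456/');
});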
Q2:
Yes, you can say that you now have 2 servers.
For example, your server initially had Apache, which listens on port 80 (you can access it as http://example.com/; it looks for port 80 by default),
and then you also start the Node service listening on some other port, e.g. port 8456 (you can access it as http://example.com:8456/, which will look for port 8456).
So yes, you can say there are two servers.
EDIT
Q: So what would be the difference if the page is served by the physical server and the virtual server created by node.js?
The physical server and the Node server are 2 different things, and there is no way a single request goes to both servers.
For example:
I use an Apache server to host my website running on PHP. It serves all the HTML content of my website (which involves connecting to MySQL for data).
Some of the requests could be:
http://example.com/reports.php
http://example.com/search.php
At the other end I might be using a Node.js server for a totally different purpose. For example, I might use it for an API which returns JSON/XML. I can use this API myself for some dynamic content by making AJAX calls with JavaScript or simple cURL commands from PHP. Or I might also make this API available to the public.
Some of the requests could be:
http://example.com:8456/getList?apikey=&param1=&param2=
My choice of a Node.js server for an API would be because of its ability to handle concurrent requests, and since its file operations are asynchronous it can be much faster than PHP.
In this case I have a website which is not only running on PHP but is a combination of 2 different technologies (PHP on Apache and Node.js), and hence the 2 servers are totally different, running on the same machine but with their own execution space.
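As a rough sketch of what such an API endpoint could look like on the Node.js side (the route, parameters and API-key check are invented to mirror the example URL above):
// A small JSON API on port 8456, living alongside Apache on port 80.
const http = require('http');

http.createServer((req, res) => {
  const url = new URL(req.url, 'http://example.com:8456');
  if (url.pathname === '/getList') {
    if (url.searchParams.get('apikey') !== 'my-secret-key') {   // placeholder key check
      res.writeHead(401, { 'Content-Type': 'application/json' });
      return res.end(JSON.stringify({ error: 'invalid apikey' }));
    }
    res.writeHead(200, { 'Content-Type': 'application/json' });
    return res.end(JSON.stringify({
      param1: url.searchParams.get('param1'),
      param2: url.searchParams.get('param2'),
      items: [],                                                // real data would come from a DB
    }));
  }
  res.writeHead(404, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ error: 'not found' }));
}).listen(8456);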
Third Question:
So what would be the difference if the page is served by the physical server and the virtual server created by node.js?
If I might add, it's a virtual server in the sense that Apache is a virtual HTTP server listening on whatever port. Of course Apache has a lot more modules, plugins and configuration to it, whereas Node's is lighter (kind of like WEBrick for Rails), non-blocking and agile to build on. Then again Apache is more stable... in other words, it's a decision of software, both sitting on the server listening on a particular port set by you.
That said, there are deployment methods that allow you to place a node application behind software such as nginx (another piece of server-side software) or HAProxy (load balancing with a lot of power), so really it's all up to how you choose to configure it.
Maybe I'm getting too far from your question, but I hope this helps!
Also, You should give the answer to the other guy, he came first ;)