NodeJS TLS/TCP server in need of an external firewall - node.js

Problem:
I have an AWS EC2 instance running FreeBSD. In there, I'm running a NodeJS TLS/TCP server. I'd like to create a set of rules (in my NodeJS application) to be able to individually block IP addresses programmatically based on a few logical conditions.
I'd like to run an external (not on the same machine/instance) firewall or load-balancer that I can control from NodeJS programmatically, such that when certain conditions are met, I can block a specific remote address (IP) before it reaches the NodeJS instance.
Things I've tried:
I have initially looked into nginx as an option, running it on a second instance and placing my NodeJS server behind it, but after skimming through the NGINX Cookbook: Advanced Recipes for High Performance Load Balancing, I've learned that only NGINX Plus (the paid version) allows for remote/API control & customization. While I believe that paying $3500/license is not too much (considering all of NGINX Plus's features), I simply can not afford to buy it at this point in time; in addition, the only features I'd be using (at this point) would be the remote API control and the IP address blocking.
My second thought was to go with the AWS ELB (Elastic Load Balancer) by integrating AWS' SDK into my project. That sounded feasible; unfortunately, after reading a few forum threads and part of their documentation (unless I'm mistaken), it seems the two features I need are not available on the AWS ELB. AWS seems to offer an entirely different service called WAF that I honestly don't understand very well (both as a service and from a feature standpoint).
I have also (briefly) looked into CloudFlare, as it was recommended in one of the posts here on Stack Overflow, though I can't really tell if their firewall would allow this level of (remote) control.
Question:
What are my options? What would you recommend I do?

I think Nginx provides this kind of functionality; please refer to the link.
If you want to block an IP in front of your Node TCP server, you can simply edit the nginx config file and deny that IP address.
Frankly speaking, if I were you I would use AWS WAF, but if you don't want to use it, you can simply handle this in Node.js itself.
In Node.js you could have a global variable where you store all blocked IP addresses, and upon connection you check whether the connecting host's IP is in that blocked-IP collection. However, a problem occurs when the machine or application is restarted: you will lose all information about the blocked IPs. As a solution to that, you can set up a Redis DB (it is a key-value database, but there are also other data types) and store the blocked IPs there. Since Redis keeps its data in RAM, all interaction with the DB is practically instant, and because Redis also makes a backup on the hard drive, it syncs from that backup whenever the machine or node is restarted and continues to work in RAM with the old data.
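As a rough sketch of that approach (the certificate paths, port and blockIp() trigger are placeholders, and the Redis persistence is only indicated in a comment), a Node.js TLS server could keep blocked addresses in an in-memory Set and drop matching connections like this:

// Minimal sketch: a TLS server that drops connections from blocked IPs.
const tls = require('tls');
const fs = require('fs');

const blockedIps = new Set();              // in-memory block list (lost on restart)

function blockIp(ip) {
  blockedIps.add(ip);
  // Optionally also persist the address in Redis here so the list survives restarts.
}

const server = tls.createServer({
  key: fs.readFileSync('server-key.pem'),  // placeholder paths
  cert: fs.readFileSync('server-cert.pem'),
}, (socket) => {
  if (blockedIps.has(socket.remoteAddress)) {
    socket.destroy();                      // refuse blocked clients immediately
    return;
  }
  socket.on('data', (data) => {
    // ...application logic; call blockIp(socket.remoteAddress)
    // when your conditions for blocking are met.
  });
});

server.listen(8000);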

Related

How to host Node.Js server and PostgreSQL database from my computer?

I want to host my own server and database on my computer, I don't want to pay monthly for services.
I developed a node.js app and it's using a postgresql database. I have a domain with an angular app and the app needs to use data from the server.
Can someone tell me how I can do this and which OS would be the best?
Thanks!
You have to do a few things for that to work.
First, your Angular app needs to be able to connect with your home server so it either needs a static IP address accessible from the outside, a dynamic IP with dynamic DNS, or a VPN.
Your server needs to properly support CORS so that your Angular app can connect with it. The browser will send OPTIONS preflight requests that your server needs to handle properly.
Make sure that your server is always on, the internet connection is reliable, the power is reliable and that your services are properly restarted on reboot.
Make sure that your server is always up to date with security patches, is configured properly and doesn't use any unneeded software and services.
For (1) you have a lot of options and it all depends on whether you have a static or dynamic IP address, whether it is accessible from the internet etc., which you didn't include in your question.
For (2) it depends on which Node framework you use for your server-side application, which you didn't include in your question. You need to set up CORS in the way that is specific to the framework you use - for example, with Express it could look like the sketch below.
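This is only a hypothetical example, since the question doesn't say which framework is used; with Express and the cors middleware, allowing a frontend served from another origin might look roughly like this:

// Hypothetical: enabling CORS in an Express app so a frontend served from a
// different origin (here https://example.com) can call this API.
const express = require('express');
const cors = require('cors');

const app = express();

app.use(cors({
  origin: 'https://example.com',               // domain the Angular app is served from
  methods: ['GET', 'POST', 'PUT', 'DELETE'],
}));
// The cors middleware also answers the OPTIONS preflight requests mentioned above.

app.get('/api/data', (req, res) => {
  res.json({ ok: true });
});

app.listen(3000);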
(3) is hard in a home environment, but it's important because during any downtime your users will not be able to use your application.
(4) is critical in a home environment because if anyone breaks into your server, they'll have access to your home network, which may have very different consequences than breaking into a data center.
Another option would be to use a cheap VPS provider like Digital Ocean where you can get a server for $5 a month (or 2 months for free with this link), which may be less hassle than setting up your own server - for which you have to pay for electricity, manage the hardware, monitor the connectivity etc.
If you choose a VPS then (1) is taken care of for you - you get your own static IP address accessible from the world, (3) is taken care of for you completely, (4) is relatively easy to do, and the biggest issue is making sure that CORS works as it should - but here you can host your API on the same domain as your frontend and then you don't need to worry about CORS at all.
If you get a VPS then you can host your frontend Angular app from the same server so that it doesn't even have to cost you more.

Setting up a web server for access outside of subnetwork (Node.js, Nginx maybe, Ubuntu server)

A little bit of context. I have developed a webapp on Node.js (and a glamorous set of extensions). It has been approved for testing with true users at my company and I am supposed to deploy it now. The problem is that I basically have no idea how to attack this. I have so many questions.
For the moment I have created a virtual machine on the local server. I have installed Ubuntu Server onto it and I have an intuition about how to deploy the app in this part (I suppose following the same steps as when I started to work on this project). I do not know, however, if I can have remote access to this virtual machine from outside of my network. I also don't know if additional configuration on Ubuntu's side is needed to make such an idea work (for example: in the installation there was a part about proxies that at the moment I decided to ignore).
From the few documents I have read about it since I was assigned this, a solution may lie in using nginx. The logic behind it, if I am not mistaken (and please correct me if I am), is that nginx can accept the HTTP requests (through port 80, which is normally open for access on most machines) and forward them to a specific port on the machine (the app I have developed).
To begin with, what resources would I need to start this off? Would I need a domain name? Is it necessary? Do I need a different virtual server to link the apps or can they be on the same machine?
If you have additional comments or tips for someone that is learning to do this kind of thing, please do.
For remote access, you will need a couple of things. First of all, you will need to make sure that your virtual machine is on a bridged adapter. I'm not sure what virtual machine software you are on, or I'd give you more detail on how to do this. Second, you will need to make sure that your router has port 80 (or whatever port you chose to use) set up via port forwarding so that requests coming in map to the server (a request comes to the router on the port, and the router must then know where to send those requests). Finally, if you want to use a port other than port 80, you should be able to configure this in the Node.js configuration. This may also be configurable in the router so that requests coming in on port 80 are mapped to, say, 8080, but given that this is a company, it's probably easier to reconfigure the Node.js server than have it set up special mapping.
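As a minimal sketch of that last point (assuming the port is passed in a PORT environment variable, which is just a convention chosen here), the Node.js server could pick up its listening port like this:

// Minimal sketch: an HTTP server whose listening port is configurable, so it
// can bind to 8080 (or whatever port the router forwards to) instead of the
// privileged port 80.
const http = require('http');

const PORT = process.env.PORT || 8080;   // assumed environment variable

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from the forwarded port\n');
}).listen(PORT, () => {
  console.log(`Listening on port ${PORT}`);
});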
This comes from personal experience with hosting web servers at home. Corporate routers should need similar configuration, unless each system has a public IP address on the internet, which is unlikely.

maintaining DNS resolution for connecting clients when using fresh app instances with fresh IPs

I have an Ansible playbook for spinning up and building a brand new GNU/Linux box and installing vsftpd.
I have a client which needs to send a nightly file over SFTP. I have instructed them to send it to ftp.example.com.
I need to be able to very quickly run the playbook against any infrastructure provider (such as DigitalOcean, AWS, Rackspace etc.) and, without any change on the client end, still receive the nightly file upload even if (as will be the case) the IP of the server has changed. So, one night the server may be on a DigitalOcean box in New York, the next on an AWS box in Ireland.
Now, obviously I could use a DNS name-server provider who has a good API to code against and reset the A record as a stage of the playbook run (something like the sketch below). However, this will likely mean that until the client's DNS cache is flushed they will still be seeing ftp.example.com as the previous server.
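To make that step concrete, here is a rough sketch using DigitalOcean's DNS API (the domain, record ID and token are placeholders, and the exact request shape should be checked against their current docs):

// Hypothetical sketch of "reset the A record from the playbook" using
// DigitalOcean's DNS API. Domain, record ID and token are placeholders.
const DO_TOKEN = process.env.DO_TOKEN;
const DOMAIN = 'example.com';
const RECORD_ID = 12345678;               // existing A record for ftp.example.com

async function updateARecord(newIp) {
  const res = await fetch(
    `https://api.digitalocean.com/v2/domains/${DOMAIN}/records/${RECORD_ID}`,
    {
      method: 'PUT',
      headers: {
        Authorization: `Bearer ${DO_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ data: newIp, ttl: 60 }),   // low TTL to limit caching
    }
  );
  if (!res.ok) throw new Error(`DNS update failed: ${res.status}`);
}

updateARecord('203.0.113.10').catch(console.error);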
So, how can I guarantee that this will work without any interaction on the part of the client?
Many thanks
In terms of DNS, you can't. Even if you set a low TTL on the DNS records (time to live/expiry), many DNS services cache short TTLs for up to 72 hours.
My recommendation is not to have an infrastructure that requires constant changes in IP.
What may be a better solution is to use a distributed service like BitTorrentSync.
You could also host your own BIND server and ask your client to point his DNS at your BIND server, avoiding any third party DNS.
The real solution would be to stick with a persistent instance. If money is an issue, you can suspend your instance in Amazon AWS when it isn't needed, saving on compute costs.

Windows Active Directory Domain setup remotely through univention using samba4

I have a slight problem. A bit of the back story: recently I've been trying to test out Univention, which is a Linux distribution with the goal of being able to replace Microsoft Active Directory.
I tested it locally and all went reasonably well after a few minor issues. I then decided to test it remotely, as the company wants to allow remote users to access this, so I used myhyve.com to host it and it's now been set up successfully and works reasonably well.
However, my main problem is DNS-based: when trying to connect to the domain, the only way Windows will recognize it is by editing the network adapter and setting the IPv4 DNS server address to the IP address of the server hosting the Univention Active Directory replacement. Although this does allow everything to work, it's not ideal, and DNS lookups on the internet take considerably longer. I was wondering if anyone had any ideas, or has done something similar, encountered this problem before and knows a workaround. I want to avoid setting up a VPN if possible.
After initially registering the computer on the domain I am able to remove the DNS server address and just use a couple of amendments to the HOSTS file to keep it running, but this still sometimes leads to issues connecting to the domain controller and is not ideal. Any ideas and suggestions would be gratefully received.
Michael
For the HOSTS entries, the most likely issue is that there are several service records a computer in the domain needs. I'm not sure whether these can be provided via the HOSTS file or not, but you'll definitely have authentication issues if they are missing. To see the records your domain is using, issue the following command on the UCS system:
/usr/share/univention-samba4/scripts/check_essential_samba4_dns_records.sh
For the slow resolution of the DNS records there are several points where you could start looking. My first test would be whether or not you are using a forwarder for the web DNS requests and whether or not that forwarder has decent speed. To check if you are using one, type:
ucr search dns/forwarder
If you get a valid IP for any of the UCR variables dns/forwarder1, dns/forwarder2 or dns/forwarder3, you are forwarding your DNS requests to a different server. If all of them are empty or not valid IPs, then your server is doing the resolution itself.
Not using a forwarder is often slow, as the DNS server's caching is optimized for AD operations, like round-robin load balancing. Likewise, a number of ISPs require you to use a forwarder to minimize DNS traffic. You can simply define a forwarder using ucr; I use Google's IPv4 resolver in this example:
ucr set dns/forwarder1='8.8.8.8'
The other scenario might be a slow forwarder. To check this, try to query the forwarder directly using the following command:
dig univention.com @$(ucr get dns/forwarder1)
If it takes long, then there is nothing the UCS server can do; you'll simply have to set a different forwarder with the ucr command above.
If neither of the above helps, the next step would be to check whether there are error messages for the named daemon in the syslog file. Normally these come when you are trying to manually remove software or if the firewall configuration got changed.
Kevin
Sponsored post, as I work for Univention North America, Inc.

Sync mongodb instances

What is the best solution to sync a MongoDB instance on a local server with a dynamic IP (set by the ISP) with a MongoDB instance on a public server (e.g. Amazon AWS)? Can I do that from Node.js?
You can do this in a number of ways, but first to address the public/dynamic IP issue you will want to either use a hostname --> IP address mapping that you maintain (/etc/hosts or your own DNS servers) or look into one of the dynamic DNS solutions.
Once you have the changing IP address problem solved, the question is how to keep the systems in sync. The most obvious way is to have the two nodes in a replica set - if your connection is reliable enough this might work, though you will probably want to put an arbiter locally or remotely on whichever side of the connection you want to take writes on when the connection is flaky (in a 2-node set, if either node is down then both are secondary and cannot take writes). A possible configuration is sketched below.
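As a hypothetical illustration (the hostnames are placeholders), the replica set could be initiated from the mongo shell with the arbiter placed on whichever side should keep accepting writes:

// Hypothetical sketch (mongo shell): a replica set with two data-bearing
// members and an arbiter. The data node that can still reach the arbiter
// keeps (or can elect) a primary when the link between the two sites is down.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "local.example.com:27017" },                      // local instance
    { _id: 1, host: "aws.example.com:27017" },                        // public instance
    { _id: 2, host: "arbiter.example.com:27017", arbiterOnly: true }
  ]
});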
Another option is to use the mongo connector which lets you sync to arbitrary destinations, including another MongoDB instance.
That project will give you a pretty good idea of what you need to do (in Python) to provide such a syncing service. You would need to write something similar in Node.js to achieve a proper sync: essentially you will need to tail the oplog on one host and apply it to the other on a regular basis, depending on your requirements - roughly along the lines of the sketch below.
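As a rough, hypothetical sketch of that idea (connection strings and the namespace are placeholders, the source must be running as a replica set so that an oplog exists, and only inserts are handled here):

// Tailing the oplog of the source instance with the official Node.js MongoDB
// driver and replaying inserts on the destination.
const { MongoClient } = require('mongodb');

async function tailOplog() {
  const source = await MongoClient.connect('mongodb://local.example.com:27017');
  const dest = await MongoClient.connect('mongodb://aws.example.com:27017');

  // The oplog lives in the "local" database of the source instance.
  const oplog = source.db('local').collection('oplog.rs');

  // A tailable, awaitData cursor keeps yielding new entries as they are written.
  const cursor = oplog.find(
    { ns: 'mydb.mycollection' },
    { tailable: true, awaitData: true }
  );

  for await (const entry of cursor) {
    if (entry.op === 'i') {                  // 'i' = insert
      await dest.db('mydb').collection('mycollection').insertOne(entry.o);
    }
    // Updates ('u') and deletes ('d') would need similar handling here.
  }
}

tailOplog().catch(console.error);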
