I'm currently using Compose.io to host my MongoDB - however it costs $31/month, my DB isn't very big, and I don't really use any of its specific features.
I've decided to create a droplet on DigitalOcean and then use their one click install for MongoDB.
With Compose.io, I simply use a connection URL like mongodb://USERNAME:PASSWORD@aws-xxxx.com:xxx/myDB along with an SSL certificate.
However, with DigitalOcean it looks like SSH'ing into the droplet and then connecting is the best approach (rather than creating an open-access bind_url).
So I want to ask:
Is this SSH process very intensive/time-consuming? That is, would it simply SSH once and then remain connected until the Node app (website) was closed?
I'm thinking of using the tunnel-ssh package (npm install tunnel-ssh). Is this recommended?
Any tips/advice/security notes would be appreciated.
Thanks.
Compose definitely offers a lot of security features that would take quite a bit of configuration to replicate. If this is a production database I would consider $31/month a good value. But speaking directly to your questions:
OpenSSH can be configured to keep the tunnel alive. The keep-alive settings can be set in both the client and server configuration files (ServerAliveInterval on the client side, ClientAliveInterval on the server side).
Keep SSH session alive
OpenSSH is very efficient and doesn't impose much overhead; resource-wise it's not a concern. An SSH2 client implemented in pure JavaScript is not going to perform as well as the OpenSSH binary, so I wouldn't use 'tunnel-ssh' without a convincing reason.
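For illustration, here's a minimal sketch of forwarding a port with the OpenSSH binary from a Node app and connecting the MongoDB driver through it. The user, host, ports and database name are placeholders, the crude setTimeout stands in for proper readiness detection, and newer driver versions use promises instead of callbacks:

// Forward a local port to the droplet's mongod over SSH, then connect to it.
const { spawn } = require('child_process');
const { MongoClient } = require('mongodb');

const tunnel = spawn('ssh', [
  '-N',                              // no remote command, just forward
  '-o', 'ServerAliveInterval=60',    // keep the tunnel alive
  '-o', 'ExitOnForwardFailure=yes',
  '-L', '27018:127.0.0.1:27017',     // local 27018 -> remote 27017
  'mongouser@your-droplet-ip'        // placeholder user/host
]);

setTimeout(() => {                   // crude wait for the tunnel to come up
  MongoClient.connect('mongodb://localhost:27018/myDB', (err, client) => {
    if (err) throw err;
    console.log('Connected to MongoDB through the SSH tunnel');
    // ... use client.db('myDB') here ...
  });
}, 2000);

process.on('exit', () => tunnel.kill());  // close the tunnel with the app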
If you store your key with your application, then when somebody roots your application server they will also have your key. So make sure the user you tunnel with has reduced privileges on the server - just what they need to access MongoDB and no more.
You might also consider just running your application and MongoDB on the same droplet. Don't expose MongoDB to the network. I wouldn't recommend this for production, but it's fine for low-key scenarios. Keep in mind, if someone roots your server or application they will also have full access to the DB. Make sure you have a backup strategy.
Related
I have programmed an application that users can use to process genome data. This application relies on a 10GB database file that users have to download in order to run the application. At the moment I have stored this file on Google Drive, but the download bandwidth is limited, so if a number of users download the file on a certain day, it will not work for others and they will get errors running the application.
My solution would be to host the file on our research server, create a user that only has access rights to this folder and nothing else, and make the file downloadable from the server via scp within the application (which is open source) through that user.
My question now is: is this safe to do, or could people potentially hack into our server? If this method is a security risk, what would be a better way to provide this file?
Thank you in advance!
Aloha
You can set up something like the free Seafile (https://www.seafile.com/en/home/), or ask your admin to set it up for you. It's pretty secure - essentially a self-hosted Google Drive with 2FA authentication.
Another nice and easy tool is Filebrowser on github (https://github.com/filebrowser/filebrowser)
I would not really advise giving people shell/scp access inside your network.
And hosting anything inside a company network is in general not the wisest idea; there is always a risk involved.
I would set up a Seafile/Filebrowser solution on a cheap rented server outside your network and upload the file there. Or, if you have a spare small PC, set it up in a DMZ - a zone with special access restrictions inside your company network.
You want to use SSH (scp) as the transport and authentication method for file hosting. It's possible to keep this safe with some caution. For example, GitHub uses SSH for transport when providing git access over the git+ssh protocol.
Now for the caution part, if you haven't done it before, it's not a trivial task.
The proper way to achieve this would be to set up an isolated SSH server in a chroot environment, and set up an SSH user on this isolated SSH instance only (not a system user added by e.g. useradd). Then you can add only the files that are absolutely necessary to the chroot, and provide SSH access to users.
(Nowadays you might want to consider using Linux filesystem namespaces, if applicable, to replace chroot, but I'm not sure on this.)
As for other options, setting up a simple Nginx server for static file hosting might be a lot easier, provided you have some understanding of HTTP and TLS. There is plenty of writing on the Internet about this.
Either way, if you are going to expose your server to the Internet or intranet, you need to take care of firewalling. Consider learning about nftables, firewalld, or the like, if you haven't already.
SSH is reasonably safe. Always keep software up-to-date.
Set up an SFTP-only user with a chrooted directory. In /etc/ssh/sshd_config:
Match User MyUser
    ChrootDirectory /var/ssh/chroot
    ForceCommand internal-sftp
    AllowTcpForwarding no
    PermitTunnel no
    X11Forwarding no
This user will not get a shell (because of internal-sftp), and cannot see files outside of /var/ssh/chroot.
Use a key or certificate on the client side, in addition to the password.
A good description of the key-based setup process:
https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server
Your solution is moderately safe.
A better solution is to put it on a server accessible via SFTP, behind a password, but also to encrypt the file: this way you introduce a second layer of protection.
On a Linux server you should be able to use a tool like gpg to encrypt your file.
Next, you share the decryption key with your partners over a secure channel, e.g. end-to-end encrypted messaging software.
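As a rough sketch of the encryption step, assuming gpg is installed on the server (file names are placeholders, and the same thing can of course be done with a one-line shell command):

// Symmetrically encrypt the data file with gpg before publishing it.
// Shell equivalent: gpg --symmetric --cipher-algo AES256 genome.db
const { execFileSync } = require('child_process');

execFileSync('gpg', [
  '--symmetric',                 // passphrase-based encryption
  '--cipher-algo', 'AES256',
  '--output', 'genome.db.gpg',   // placeholder output file
  'genome.db'                    // placeholder input file
], { stdio: 'inherit' });        // let gpg prompt for the passphrase

// Share the passphrase with users over a separate, end-to-end
// encrypted channel, never alongside the file itself.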
Problem:
I have an AWS EC2 instance running FreeBSD. On it, I'm running a Node.js TLS/TCP server. I'd like to create a set of rules (in my Node.js application) to be able to individually block IP addresses programmatically based on a few logical conditions.
I'd like to run an external (not on the same machine/instance) firewall or load balancer that I can control programmatically from Node.js, such that when certain conditions are met, I can block a specific remote address (IP) before it reaches the Node.js instance.
Things I've tried:
I initially looked into nginx as an option, running it on a second instance and placing my Node.js server behind it, but after skimming through the NGINX Cookbook: Advanced Recipes for High Performance Load Balancing, I learned that only NGINX Plus (the paid version) allows remote/API control and customization. While I believe that $3500/license is not too much (considering all of NGINX Plus' features), I simply cannot afford it at this point in time; in addition, the only features I'd be using (at this point) would be the remote API control and IP address blocking.
My second thought was to go with AWS ELB (Elastic Load Balancer) by integrating the AWS SDK into my project. That sounded feasible; unfortunately, after reading a few forum threads and part of the documentation (unless I'm mistaken), it seems the two features I need are not available on ELB. AWS does offer an entirely different service called WAF that I honestly don't understand very well (both as a service and from a feature standpoint).
I have also (briefly) looked into CloudFlare, as it was recommended in one of the posts here on Stack Overflow, though I can't really tell if their firewall would allow this level of (remote) control.
Question:
What are my options? What would you recommend I do?
I think Nginx provides this kind of functionality; please refer to the link.
If you want to block an IP in front of your Node TCP server, you can just edit the nginx config file and deny that IP address.
Frankly speaking, if I were you I would use AWS WAF, but if you don't want to use it, you can do it in Node.js itself.
In Node.js you would keep a global variable (an array or set) storing all blocked IP addresses, and upon each connection check whether the connecting host's IP is in that block list. The problem is that when the machine or application restarts, you lose all information about blocked IPs. As a solution, you can set up Redis (a key-value database, though it supports other data types too) and store the blocked IPs there. Since Redis lives in RAM, all interaction with it is practically instant, and Redis periodically backs the data up to disk, so after a restart it restores the old data and continues working in RAM.
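To make that concrete, here's a minimal sketch of the in-memory approach using a plain Set (a Redis-backed version would replace the Set with SADD/SISMEMBER calls so the list survives restarts); the port and the blocking rule are placeholders:

// Minimal TCP server that refuses connections from blocked addresses.
const net = require('net');

const blockedIPs = new Set();            // in-memory block list
// In a real setup this would be loaded from / mirrored to Redis.

const server = net.createServer((socket) => {
  const ip = socket.remoteAddress;
  if (blockedIPs.has(ip)) {
    socket.destroy();                    // drop blocked clients immediately
    return;
  }
  socket.on('data', (chunk) => {
    // ... your protocol logic; when a rule trips, block the address:
    if (looksMalicious(chunk)) {
      blockedIPs.add(ip);
      socket.destroy();
    }
  });
});

function looksMalicious(chunk) {         // placeholder rule
  return chunk.length > 64 * 1024;
}

server.listen(3000);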
Why are there dedicated web services just for MongoDB? With LAMP I would just install everything on my EC2 instance. Now that I'm deploying a MEAN stack, should I separate MongoDB and my Node server? I'm confused. I don't see any limitation in mixing node with mongod on one single instance, and I could use tools like MongoLab as well.
Ultimately it depends on how much load you expect your application to have and whether or not you care about redundancy.
With mongo and node you can install everything on one instance. When you start scaling, the first separation is to split the application from the database. Often it's easier to set things up that way from the start, especially if you know you will have the load to require it.
I know that Ruby on Rails has this feature, and the Rails Tutorial specifically encourages it. However, I have not found such a thing in Node.js. If I want to run SQLite3 on my machine so I can have easy-to-use database access, but Postgres in production on Heroku, how would I do this in Node.js? I can't seem to find any tutorials on it.
Thank you!
EDIT: I meant to include Node.JS + Express.
It's possible of course, but be aware that this is probably a bad idea: http://12factor.net/dev-prod-parity
If you don't want to go through the hassle of setting up postgres locally, you could instead use a free postgres plan on Heroku and connect to it from your local machine:
DATABASE_URL=url node server.js
A .env file can make this easier:
https://devcenter.heroku.com/articles/heroku-local#copy-heroku-config-vars-to-your-local-env-file
To switch between production and development DBs, you can use different ports for running your application locally and on Heroku.
Since Heroku by default serves the application on port 80, you use some other port while running your app locally.
This lets you figure out at runtime whether your application is running locally or in production, and you can switch the databases accordingly.
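A small sketch of that idea; in practice checking NODE_ENV (which Heroku's Node buildpack sets to production) is a bit more direct than checking ports, and the connection strings below are placeholders:

// Pick a database depending on where the app is running.
const port = process.env.PORT || 3000;            // Heroku supplies PORT
const isProduction = process.env.NODE_ENV === 'production';

const dbUrl = isProduction
  ? process.env.DATABASE_URL                      // e.g. Heroku Postgres URL
  : 'sqlite://./dev.sqlite3';                     // placeholder local DB

console.log(`Listening on ${port}, using ${dbUrl}`);
// ... hand dbUrl to whichever driver/ORM you use ...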
You could use something like jugglingdb to do this:
JugglingDB(3) is a cross-db ORM for nodejs, providing a common interface to access the most popular database formats. Currently supported are: mysql, sqlite3, postgres, couchdb, mongodb, redis, neo4j and js-memory-storage (yep, a self-written engine for test usage only). You can add your favorite database adapter; check out one of the existing adapters to learn how - it's super easy, I guarantee.
JugglingDB also works on the client side (using the WebService and Memory adapters), which allows you to write rich client-side apps that talk to the server using a JSON API.
I personally haven't used it, but having a common API to access all your database instances would make it super simple to use one locally and one in production - you could wire up some environment detection without too much trouble and have it automatically select the target DB depending on the environment it's running in.
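For instance, a rough sketch of how that environment-based selection might look with jugglingdb, based only on the project description quoted above; the adapter option names here are illustrative, so check the adapters' READMEs for the exact settings:

// Choose the adapter per environment: sqlite3 locally, postgres in production.
var Schema = require('jugglingdb').Schema;

var schema = process.env.NODE_ENV === 'production'
  ? new Schema('postgres', { database: 'myapp', host: 'localhost' })  // illustrative settings
  : new Schema('sqlite3',  { database: './dev.sqlite3' });

// Models are defined once against the common interface.
var User = schema.define('User', {
  name:  String,
  email: String
});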
I've built an application that runs on node.js, which allows retrieving some data through a REST API.
I want to put it online on a personal computer (Windows), but I have no idea how to install a server and what I need to make my application available online.
Can someone explain the steps to do it? I know that online services like Heroku exist, but I want to do it myself.
Thank you
This question looks small, but it's actually huge. I started writing this as a basic guide, and it ended up really being quite a lengthy answer, so I split it into pieces. Overall hope this helps!
Using a VPS
You don't want to serve a website from your personal computer, because any time your computer is off, the website will be down. You don't want that kind of responsibility on your computer, so much of the time people choose to essentially rent server space from companies whose sole purpose is to give you space/bandwidth on a simple computer that is always on. These are often called VPSs (virtual private servers).
So the first step I'd recommend is to grab a VPS for yourself. Digital Ocean is a great service that you can get a solid server from for $5/month; I would recommend starting there. There are bunches of other companies you can get VPSs from if you prefer, probably the most popular alternative being Linode.
Once you've got yourself a VPS, log in to it using ssh. Usually it will look something like this:
ssh root@000.000.0000
...with the number at the end being the IP address of your server. Most VPSs are some flavor of linux, so being familiar with the linux command line interface is important. Once you're all set up on your server, you'll want to do a few things. This is what I usually do, in order:
Install vim
For me, vim is the easiest way to edit files through the command line. This certainly might not be the case for everyone - some people prefer emacs, and some nano, which is a lot simpler. If you are interested in learning about vim, there are loads of tutorials around the 'net. If getting into vim isn't your thing, I'd recommend using nano instead wherever I mention it from here on.
To get it installed, we can use apt, the package manager on Ubuntu, the flavor of linux I'll use in this answer since it's a popular one for servers and is the default on Digital Ocean. Just run apt-get update to make sure the package lists are up to date, then apt-get install vim to put in vim.
Add your ssh key
Add your ssh key to ~/.ssh/authorized_keys so that you don't need a password to log in. If you are unfamiliar with ssh keys, they are basically a pair of cryptographic keys that you can use to avoid needing to authorize with a password every time. By adding your public key to the ~/.ssh/authorized_keys file, you are essentially telling the server "this is my computer, so you don't need to ask me for a password to log in". Github has a great guide on how to generate keys. Once this is done, you can open the file with vim, get into insert mode, and paste the public key in from your local machine. Save and quit and you're set.
Install node.js
If you are trying to run a node app, you will of course need to have node! Installing node on linux is a bit different because the node installer I'm sure you used locally is graphical, whereas here you only have the command line. Luckily, it's not much more difficult with this set of instructions, which you can follow exactly. Make sure you do not just do the default apt-get install nodejs, as this will install an old version. Take the couple of steps after the second paragraph to add the PPA and get a newer version.
Deploy your app
Ok, so you have a machine that has node and theoretically could run your app. This is good news. Now we need to actually get the app onto the machine. There are a few ways you can do this. If you have ruby installed locally, you can use capistrano, a popular deployment solution. A lighter-weight approach that I often prefer is deploy, although I don't think that will work on Windows. You can also just use GitHub or Bitbucket - push your app to a remote repo, then clone it down from your VPS (make sure to apt-get install git and set up your username first; if it's a private repo you'll probably have to generate and add a key to get access to pull it down). However you manage to do it, get the files transferred.
Test your app
On your VPS, cd into wherever your app was put and run it. Make sure everything is working ok, and hit http://YOUR_IP:PORT - just your IP address, followed by the port number your app is running on after the colon. You should be able to see your app. If not, check back in the terminal; it may have crashed. Sometimes you find flukes when you are setting things up on a different system. If your app uses a database, you might need to get this configured too. You can google "ubuntu setup database name" and find some tutorials - Digital Ocean has a pretty solid library of these types of tutorials themselves.
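If you just want to confirm the box can serve something before debugging your real app, a throwaway server like this works (the port is arbitrary):

// test-server.js - minimal check that the VPS is reachable from outside.
const http = require('http');

const PORT = 1234; // arbitrary test port

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from the droplet!\n');
}).listen(PORT, () => {
  console.log(`Test server listening on port ${PORT}`);
});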
Install nginx
Nginx is a great way to serve multiple apps on one machine, and to handle domain names and such. I wrote an article on how to set up nginx that you can check out to learn the basics and get it installed. Once this is done, you can link up your app with a proxy_pass. Rather than try_files, which is what the article uses to serve static files, just drop in a proxy_pass statement pointing to the port your app is running on, and nginx will direct traffic right through to your app. Here's an example, if you had your app running on port 1234 and your domain name was example.com:
server {
    server_name example.com;
    location / {
        proxy_pass http://localhost:1234;
    }
}
This will just take traffic coming into the box from example.com and pass it to your app, which is awesome.
Get your domain in order
I have to assume you don't want to require people to use an IP address to access your app, and you want a domain name. Go grab one from wherever, and once you have it you need to edit the DNS records. I've found that it's easiest to use dnsimple for this, as not every domain registrar has solid DNS record handling, and you can keep all your DNS management in one place. Now, just put an A record on the root of your domain, pointing it to your VPS's IP address. After giving it a couple of minutes for the records to propagate, a hit to that domain should go directly to your server - fantastic.
Now is the time to check through and make sure that your app is running properly and that your nginx configuration is correct (and that you have reloaded nginx). Make sure that in your configuration, the server_name mirrors the domain you set to point at your VPS. Make sure the port in the proxy_pass is the same as your app is running on. Once this has been confirmed, go to the domain, and if you did it right, your app will come up. Whoo!
Run it on a production server
Great, so we got our app running and it's online on the internet for the public to enjoy. Just about time to sit back and let everyone throw money at you, a common occurrence whenever you get a site shipped. But don't recline too quickly, because the last thing we need to do is make sure this app stays up and continues running even if something goes wrong or you log out of your VPS, so you don't always have to keep a terminal window open running the app. For this, we can use what some call production process managers -- tools made specifically to ensure that your app runs in the background and stays running all the time. Luckily, node has a few of these open source, my favorite being pm2. Check out this page, read the getting started instructions, install pm2 on your machine, and run your app. The process might look something like this:
npm install pm2 -g
cd path_to_my_app
pm2 start app.js
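If you'd rather keep those settings in a file, pm2 can also start from an ecosystem config; the name and paths below are placeholders (see the pm2 docs for the full option list):

// ecosystem.config.js - start with: pm2 start ecosystem.config.js
module.exports = {
  apps: [{
    name: 'my-site',        // placeholder process name
    script: './app.js',
    env: {
      NODE_ENV: 'production',
      PORT: 1234            // the same port nginx proxies to
    }
  }]
};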
Since you ran it on the same port, your nginx configuration should remain the same, and your app should still be up if you visit the domain.
Phew, that was a lengthy process. Probably more than you expected - makes sense why something like heroku exists. So is this really worth it, running and maintaining the site yourself? I'd argue yes, and I host every one of the sites and apps I run like this. Here's why:
learning: I learn tons about how things work this way, and get much better at sysops.
cost: You can host like 20 sites on a single $5 Digital Ocean box. Hosting is pennies.
control: Heroku sometimes goes down and it sucks because all you can do is wait for them to get it back up. If my site goes down, it's my fault and I can find out why and fix it.
I'm sure this answer was more than you ever expected to get here, but hope this helps! Getting from dev to sysops is a journey and can sometimes get really frustrating, but I promise once you have a good handle on things, it feels great and really helps your skills a lot.
Finally, I want to note that this is without a doubt an opinionated guide. There are tons of other tools and other ways to do these things -- the workflow I have here is just the way I prefer to do things. By all means feel free to tinker and suit the workflow to your needs once you have it under your belt! There are also lots of other details that could be added in here about setting up different databases, improving your deploy/restart flow, and securing your box a little more thoroughly. Would love to hear any feedback and add any of these pieces in if you or others are interested.
Google Cloud Platform has resources for Node developers. There is a tutorial that shows you how to deploy a simple Node.js application to Google App Engine Managed VMs. Details of the pricing are here.
Amazon Web Services (AWS) also has a similar service. Here is the tutorial. The AWS Free Tier is designed to let you get hands-on experience with AWS at no charge for 12 months after you sign up. You can investigate AWS as a platform for your Node.js application. Check it here.