Heroku seems great, but most non-trivial applications require authentication, conventional authentication schemes require an SSL connection, and it's impossible to get https://your_app_name.com (you can only get https://your_app_name.heroku.com).
So if you're using Heroku, is it that:
1. You don't mind directing users to another domain (seems pretty bad)
2. You don't mind foregoing SSL for authentication (seems really bad)
3. Your app doesn't require authentication
This is now a moot point. According to the documentation (http://docs.heroku.com/ssl, see http://addons.heroku.com/ for pricing), Heroku now allows custom domains to have SSL through their SSL Endpoint addon.
https://devcenter.heroku.com/articles/ssl-endpoint
Heroku also just announced support for SNI. This will allow them to attach SSL to any domain hosted on Heroku's service. It is still in beta but should be rolled out to everyone soon. Heroku continues to improve their security offerings.
Hey, it's James from Heroku. The inability to use SSL with a custom domain is a problem shared by all multi-tenant platforms, due to a fundamental issue with the SSL protocol. A solution is in the works, and we'll post details as soon as we've finalized the plan.
I'm using Twitter's OAuth for authentication on my apps (via twitter-auth).
Generic OpenID or even Facebook Connect would work just as well, as each of these handles the sensitive bits of authentication on somebody else's server.
Authlogic is an authentication gem that has plugins for each of these methods.
However, SSL is now fully supported on Heroku, if you're willing to pay a price that reflects the difficulty in getting SSL to work in a multi-tenant environment.
You can use a custom domain name in Heroku, though this is not included in the free account. Heroku also makes it dead simple to deploy Ruby on Rails apps. Deploying a Ruby on Rails application on a cheap hosting provider that gives you limited shell access, if any, can be a nightmare. Not to mention Heroku's servers come preconfigured to optimize Ruby on Rails code; likewise, scaling up is just a matter of sliding a slider in the user interface.
With Heroku you can use custom domain names (in the free version too).
Scaling is easy, very easy, and they are making it better and better (I'm testing memcached and it works like a charm; delayed job, the backup system, and the git integration are great too).
The only problem for me, as you wrote, is the SSL...
Related
I have a single back-end running Node/Express providing API endpoints, and two static (React) front-ends. The front-ends interact with the users and communicate with the back-end.
I need to use HTTPS throughout once we enter the production stage.
The front-ends will require their own domain names.
I've been thinking about the simplest way to have these configured and have come up with Option 1 (see diagram): the Node.js API server running on one VPS, and, as the front-ends are static sites, these can be loaded on separate servers (UPDATE: meant to say hosting providers) and hence get their own domains. As an option, and I'm unsure if it's needed, add Cloudflare in front of the front-ends to provide a layer of security.
This will allow front-ends to have separate domain names.
As this is a start-up project and I doubt it will see a large number of visitors, I'm wondering if the above is over-engineered and unnecessarily complicated.
So I'm considering Option 2: hosting the back-end API app and the two front-ends on the same Linux VPS. As the front-ends are static, I added them into the Node.js app's public folder. This allowed me to access the front-ends as http://serverIP:8080/siteA.
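Roughly what that looks like (a minimal sketch; the folder names and the health route are made up for illustration):

    // Option 2 sketch: one Express app serving the API plus both
    // static front-ends out of its public folder.
    const express = require('express');
    const path = require('path');

    const app = express();

    // Static React builds, reachable at /siteA and /siteB
    app.use('/siteA', express.static(path.join(__dirname, 'public/siteA')));
    app.use('/siteB', express.static(path.join(__dirname, 'public/siteB')));

    // API endpoints live alongside them
    app.get('/api/health', (req, res) => res.json({ ok: true }));

    app.listen(8080);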
As I want to access the front-ends as http://siteA.com, I'm assuming I require a reverse proxy (nginx).
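Something like this minimal nginx sketch, I imagine (domain names, paths, and ports are placeholders):

    # Route siteA.com to the static build, api.siteA.com to the Node app.
    server {
        listen 80;
        server_name siteA.com www.siteA.com;
        root /var/www/siteA;            # static React build
        try_files $uri /index.html;     # client-side routing fallback
    }

    server {
        listen 80;
        server_name api.siteA.com;
        location / {
            proxy_pass http://127.0.0.1:3000;   # Node API, not exposed directly
            proxy_set_header Host $host;
        }
    }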
The questions to help me decide between the two options are:
For a start-up operation, and given the above scenario, which option is best?
I understand that Node.js requires a port number to work regardless. For the API I don't mind having a port number (as it's not applicable to end users, i.e. http://10.20.30.40:3000), but the two front-ends require their own domain names (www.siteA.com, www.siteB.com). Will I therefore need to employ a reverse proxy (nginx) regardless of whether they are static sites or not?
I'm concerned that someone could attack the API endpoints directly (http://10.20.30.40:3000). In this case, is it true that Option 2 is safer than Option 1, in that, because all sites are hosted on the same VPS, the API can easily be secured so it is not exposed to the outside world, blocking malicious direct API calls?
My developer once told me that Option 1 is best, as nginx adds unneeded complication, but I'm not sure what he meant; hence my confusion. To be honest, I don't think he wanted to add nginx to the server.
I'm looking for high-level guidance to get me on the right track. Thanks.
This is, as you have also suspected, unnecessarily complex, and incorrect in some cases. Here's a better (and widely used across the industry) design. I strongly recommend dropping the whole VM approach and going for a shared computing unit, unless you are using that machine for some other computation and utilizing it that way is saving your company a lot of money; I strongly doubt this is the case. Otherwise, you're just creating problems for yourself.
One of the most common mistakes you can make when using Node.js is to host the static content through the public folder (for serious projects). Don't. Use a CDN instead: you'll get better telemetry (depending on the CDN), redundancy, faster delivery, etc. If you aren't expecting high volumes of traffic and the performance of delivering that static content isn't outrageously important at the moment, you can even go for a regular hosting server. I've done this with Namecheap and GoDaddy before.
Use shared, or dedicated depending on size, Node.js hosting for your app, and use CI/CD to deploy it. You can use CNAMEs to map whatever domain name you want your app on (e.g. https://something.com) to the cloud-hosting provider's URL for your app. I've used multiple things: Azure, Heroku, and Namecheap for the apps, and primarily Azure DevOps to manage the CI/CD pipeline, although Jenkins is super popular as well. I'd recommend Heroku, since it provides a super easy setup.
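For example, a DNS record along these lines (hypothetical names) is all the mapping involved:

    www.something.com.  CNAME  your-app-name.herokuapp.com.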
When designing any API on HTTP, you should assume people will call your API directly. See this answer for more details: "How to prevent non-browser clients from sending requests to my server". I'm not suggesting you put something like Cloudflare in front; you may be overthinking it. Look into your traffic first and get it when you need it. As long as you have the right authentication/authorization mechanism in place, security of the API shouldn't be a big problem on these platforms, and if you deploy on one of them, you won't have to deal with ports either. Unless you reach absolutely massive scale, it will definitely be cheaper for you to operate with high reliability this way.
You won't need to deal with nginx anymore.
I am kind of old school, and the first programming language for the web I saw was PHP, which everybody uses with Apache. At that time, I also knew ASP, which was used along with Microsoft IIS, and, later, ASP.NET, which runs over IIS as well.
Time passed; I went off to the ERP world and, when I came back (a few months ago), I got to know Golang and Node.js, and to my surprise they have their own web servers.
I can see many advantages in the built-in web servers, but every application needs to rewrite its own web server rules (I faced that recently when I needed to set up an HTTPS server using Express.js).
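For reference, this is roughly what I ended up with (a minimal sketch; the certificate paths are placeholders):

    // Minimal HTTPS setup with Express; certificate paths are placeholders.
    const https = require('https');
    const fs = require('fs');
    const express = require('express');

    const app = express();
    app.get('/', (req, res) => res.send('hello over TLS'));

    https.createServer({
      key: fs.readFileSync('/path/to/privkey.pem'),
      cert: fs.readFileSync('/path/to/fullchain.pem'),
    }, app).listen(443);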
After some hard work to understand all the nuances of the HTTP protocol, I asked myself: what if I am doing it the wrong way? What if all the permissive rules that I created in my dev server go to production? Maybe this is a useless concern, but maybe I am creating a fragile server that could be exploited by even a naive hacker.
With a server like Apache it is harder to misuse security rules, because there are explicit settings for development and production environments. If the rules are hardcoded (as they are in Node or Go), an unaware developer can ship development rules to production, and nobody will see it before the damage is done.
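For instance, the kind of hardcoded rule I'm worried about (a made-up example):

    // A made-up example of the risk: a permissive development setting
    // that silently survives into production unless explicitly gated.
    const https = require('https');

    // Convenient against a local self-signed certificate...
    const devAgent = new https.Agent({ rejectUnauthorized: false });

    // ...but nothing forces you to flip it back; a safer pattern is
    // to gate it on the environment explicitly:
    const agent = new https.Agent({
      rejectUnauthorized: process.env.NODE_ENV === 'production',
    });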
Any thoughts?
A web server focuses on speed and computing capacity. No matter how good Java or PHP web stacks are, or how many established companies use them, as long as a new language such as Go provides faster speed and better capacity, more programmers will go for it.
By the way, running a web server in Go is really easy: it builds fast and runs light, and goroutines help the web server serve millions of client requests, which the older web languages can hardly do.
You can still use nginx or Apache in front of your Golang gateway for many reasons, including TLS termination.
But for service-to-service communication it might be nice to talk to services directly, and the Golang HTTP web server is fast. It also supports HTTP/2 out of the box. Go leverages its goroutines to reduce OS overhead when handling many requests at once.
Node.js and Golang do not have their own web servers; they just have library packages that implement HTTP protocols and open ports to provide services.
Like Spring Web.
Nginx/IIS/Apache are true servers; a web server is just one component of them.
I think Spring covers the full range of application scenarios, including gateway, security, routing, packaging, runtime management, and so on.
But when we have several different language platforms, we need nginx, Apache, Spring Gateway, Zuul, or others to route between them.
I currently have a website where I need to use Node.js; however, I am not able to use Node.js because the web host does not support it. What is the best way to go about hosting a server without having to completely change hosts?
[…] without having to completely change hosts?
If your current hosting provider doesn't support nodejs and you want to use nodejs, then you have to change hosting provider. Sorry.
I can recommend Google Compute Engine. You can create a virtual machine, e.g. running Fedora, access it via SSH, and install what you need, e.g. Apache, Node.js, etc.
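For example, assuming Fedora's package names, the setup would be roughly:

    sudo dnf install -y nodejs httpd   # httpd is the Apache package on Fedora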
If you're not comfortable with that, you should go for a managed hosting solution instead. It will probably be a little more expensive, and you'll have less flexibility in what programs you can use (since you share your virtual machine with other customers and can't make changes to the system yourself), but on the upside, most of the setup is done for you. There are many providers to choose from; google "managed hosting with nodejs" for an overview. I have used 1and1 before and was mostly happy with it. As you can see here, they have Node.js installed on their servers.
Your question makes hardly any sense... but
Heroku is really great for Node.js app hosting
Is ngrok a safe tool to use? I was reading a tutorial which recommended using ngrok to test API responses, and also for outside services that need to connect to my endpoints.
There is no source code available for version 2.0, even though it started as an open-source project in 2014. I am suspicious of any code that opens a tunnel to my localhost from the cloud. Pretty scary stuff, especially without source code!
It opens up a tunnel to your dev machine, which is partially secured by obscurity (a hard-to-guess subdomain), and can be further secured by requiring a password. But you're still opening yourself up to ngrok itself, and the company is completely opaque (no address, no employees, no business name, no LinkedIn presence; all I can find is that it has 1-10 employees and is private; I'm not even sure what country it's based in). On top of that, the code is not open-sourced. There's no reason to think they're not legit, but there's not a lot of information available to build trust.
You may be able to use ngrok and other local tunnel services with more security by encrypting the traffic. See https://security.stackexchange.com/questions/177280/end-to-end-encryption-for-localtunnel-ngrok-setup/177357#177357 for more information.
I found a good rating, but vacuous information, here:
http://www.scamadviser.com/is-ngrok.com-a-fake-site.html
The kicker for me is
https://developer.atlassian.com/blog/2015/05/secure-localhost-tunnels-with-ngrok/
where the Atlassian folks recommend it highly.
I think I am going to use it.
If anyone is concerned about compromising their development environment, you can use Docker. There are many ngrok/Docker projects, but here is the one I chose: https://github.com/gtriggiano/ngrok-tunnel
For macOS, use "TARGET_HOST=docker.for.mac.localhost".
They now offer a service where you locally run only ssh, no need to run any of their code on your machine.
You run something like ssh -R 80:localhost:8501 tunnel.us.ngrok.com http. This connects to one of their hosts and forwards connections they receive back to your machine and the service you run on localhost:8501.
This seems secure to me; the only thing is that you don't know what information they collect or who is connecting to your exposed service. They print all connections, but it's their binary that does this, and someone might well be listening in without you noticing. You can check connections on your end, but you cannot be sure who it is that connects.
Ngrok is a convenient and highly secure utility for creating tunnels to locally hosted applications via a reverse proxy; that is, a utility for publishing locally hosted applications on the web. Simply put, it gives any locally hosted application a publicly accessible web URL, whether a Spring Boot or Node.js based web application, a webhook for a chat application, etc.
I have a WSS installation that's behind basic authentication/SSL (it's hosted at a public web host). I'm creating a sister site in ASP.NET, and am considering just passing the credentials through and allowing users to log into the new system provided there is no 401 Unauthorized error returned.
Both are internet-facing applications that will be used by about 20-50 people.
What am I missing? I've never heard this recommended before, but I don't see why it wouldn't work.
I can't see any major problems with that. You'll obviously want to make sure both servers are using SSL if you've got to send credentials over the Internet, but other than that it sounds like an elegant way to share credentials between applications.
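The idea, sketched in Node for brevity (the host and credentials are placeholders; the equivalent request is easy to make from ASP.NET):

    // Hypothetical sketch: validate a login by replaying the credentials
    // against the WSS site over HTTPS and checking for a 401.
    const https = require('https');

    function checkCredentials(user, pass, callback) {
      const auth = Buffer.from(`${user}:${pass}`).toString('base64');
      https.get({
        host: 'wss.example.com',    // placeholder for the WSS host
        path: '/',
        headers: { Authorization: `Basic ${auth}` },
      }, (res) => callback(res.statusCode !== 401));
    }

    // Treat anything other than 401 as a valid login.
    checkCredentials('alice', 'secret', (ok) => console.log('authorized:', ok));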