Why do all these new languages have their own web server? - node.js

I am kind of old school; the first web programming language I saw was PHP, and everybody used it with Apache. At that time I also knew ASP, which was used along with Microsoft IIS, and later ASP.NET, which also runs on IIS.
Time passed, I went off to the ERP world and, when I came back (a few months ago), I got to know Golang and Node.js and, to my surprise, they have their own built-in web servers.
I can see many advantages in the built-in web servers, but every application has to implement its own web server rules (I faced that recently when I needed to set up an HTTPS server using Express.js).
After some hard work to understand all the nuances of the HTTP protocol, I asked myself: what if I am doing it the wrong way? What if all the permissive rules I created on my dev server make it to production? Maybe this is a pointless concern, but maybe I am creating a fragile server that even an unsophisticated attacker could exploit.
With a server like Apache it is harder to misuse security rules, because the settings for development and production environments are explicit. If the rules are hardcoded (as they are in Node or Go), an unaware developer can ship development rules to production and nobody will notice until something goes wrong.
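For context, this is roughly the kind of thing I ended up writing (a rough sketch; the certificate paths, the port, and the NODE_ENV check are my own choices, not anything Express requires):

```javascript
// Minimal sketch of an HTTPS Express server with an environment-gated
// "permissive" rule. Paths, port, and the NODE_ENV convention are my own
// choices, not anything Express mandates.
const fs = require('fs');
const https = require('https');
const express = require('express');

const app = express();

// Example of a rule that is convenient in development but risky in production:
// a wide-open CORS header. Gating it on NODE_ENV keeps it out of production,
// but nothing forces a developer to do this.
if (process.env.NODE_ENV !== 'production') {
  app.use((req, res, next) => {
    res.setHeader('Access-Control-Allow-Origin', '*');
    next();
  });
}

app.get('/', (req, res) => res.send('hello'));

// TLS key and certificate paths are placeholders.
const options = {
  key: fs.readFileSync('/etc/ssl/private/server.key'),
  cert: fs.readFileSync('/etc/ssl/certs/server.crt'),
};

https.createServer(options, app).listen(443);
```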
Any thoughts?

A web server is mostly about speed and processing capacity. No matter how good the Java or PHP web stacks are, or how many established companies use them, as long as a newer language such as Go offers better speed and capacity, more programmers will go for it.
By the way, running a web server in Go is genuinely easy: it builds fast and runs lightly, and goroutines help the server handle millions of client requests, something the older web languages can hardly do.

You can still use nginx or Apache in front of your Golang gateway for many reasons, including TLS termination.
But for service-to-service communication it may be nicer to talk to the services directly, and the Golang HTTP server is fast. It also supports HTTP/2 out of the box. Go leverages its "goroutines" to reduce OS overhead when handling many requests at once.

Strictly speaking, Node.js and Golang do not have their own web servers; these are just library packages that implement the HTTP protocol and open ports to provide services.
Much like Spring Web.
Nginx/IIS/Apache are true servers; the web server is just one component of them.
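To make that concrete, in Node the "web server" is literally just a call into the standard http module that opens a port (a minimal sketch; the port and response are arbitrary):

```javascript
// The "web server" in Node is just the standard http module opening a port.
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('served straight from a library call\n');
}).listen(8080);
```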
I think Spring is meant to cover the full range of application scenarios, including gateway, security, routing, packaging, runtime management, and so on.
But when we have several different language platforms, we need nginx, Apache, Spring Cloud Gateway, Zuul, or something similar to route between them.

Related

Architecting back-end and front-end solution (Node.js)

I have a single back-end running Node/Express providing API endpoints, and two static (React) front-ends. The front-ends interact with the users and communicate with the back-end.
I need to use HTTPS throughout once we enter the production stage.
The front-ends will require their own domain names.
I've been thinking about the simplest way to configure these and have come up with Option 1 (see diagram): the Node.js API server runs on one VPS and, as the front-ends are static sites, they can be loaded on separate servers (UPDATE: meant to say hosting providers) and hence get their own domains. As an option, and I'm unsure if it's needed, add Cloudflare in front of the front-ends to provide a layer of security.
This will allow the front-ends to have separate domain names.
As this is a start-up project and I doubt there will be a large number of visitors, I'm wondering if the above is over-engineered and unnecessarily complicated.
So I'm considering Option 2: hosting the back-end API app and the two front-ends on the same Linux VPS. As the front-ends are static, I added them to the Node.js public folder. This allowed me to access the front-ends at http://serverIP:8080/siteA.
As I want to access a front-end at http://siteA.com, I'm assuming I need a reverse proxy (nginx).
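For reference, my Option 2 setup looks roughly like this (a sketch; the folder names, routes, and port are just what I happen to be using):

```javascript
// Option 2 sketch: one Express app serves the API plus both static front-ends.
// Folder names, routes, and the port are placeholders for my actual setup.
const path = require('path');
const express = require('express');

const app = express();

// Static front-ends served out of the public folder
app.use('/siteA', express.static(path.join(__dirname, 'public/siteA')));
app.use('/siteB', express.static(path.join(__dirname, 'public/siteB')));

// API endpoints
app.get('/api/health', (req, res) => res.json({ ok: true }));

app.listen(8080);
```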
The questions to help me decide between the two options are:
For a start-up operation, and given the above scenario, which option is best?
I understand that Node.js needs to listen on a port regardless. For the API I don't mind having a port number (as it's not visible to end users, i.e. http://10.20.30.40:3000), but the two front-ends require their own domain names (www.siteA.com, www.siteB.com), so will I need a reverse proxy (nginx) regardless of whether they are static sites or not?
I'm concerned that someone could attack the API endpoints (http://10.20.30.40:3000). In this case, is it true that Option 2 is safer than Option 1, in that I could potentially block malicious direct API calls, since all sites are hosted on the same VPS and the API can easily be secured and not exposed to the outside world?
My developer once told me that Option 1 is best because nginx adds unneeded complication, but I'm not sure what he meant (hence my confusion); to be honest, I don't think he wanted to add nginx to the server.
I'm looking for high-level guidance to get me on the right track. Thanks.
This is, as you also suspected, unnecessarily complex, and incorrect in some cases. Here's a better (and widely used across the industry) design. I strongly recommend dropping the whole VM approach and going for a shared computing unit, unless you are using that machine for some other computation and utilizing it that way saves your company a lot of money; I strongly doubt that is the case. Otherwise, you're just creating problems for yourself.
One of the most common mistakes you can make when using Node.js (on serious projects) is hosting the static content through the public folder. Don't. Use a CDN instead: depending on the CDN you'll get better telemetry, redundancy, faster delivery, etc. If you aren't expecting high volumes of traffic and the performance of delivering that static content isn't critical at the moment, you can even go with a regular hosting server. I've done this with Namecheap and GoDaddy before.
Use direct Node.js hosting for your app (shared, or dedicated depending on size) and use CI/CD to deploy it. You can use CNAME records to map whatever domain name you want for your app (e.g. https://something.com) to the URL your cloud-hosting provider gives the app. I've used several services (Azure, Heroku, Namecheap) for the apps, and primarily Azure DevOps to manage the CI/CD pipeline, although Jenkins is super popular as well. I'd recommend Heroku, since it provides a very easy setup.
When designing any API over HTTP, you should assume people will call it directly. See this answer for more details: How to prevent non-browser clients from sending requests to my server. I'm not suggesting you add something like Cloudflare right away; you may be overthinking it. Look at your traffic first and get it when you need it. As long as you have the right authentication/authorization mechanism in place, API security shouldn't be a big problem on these platforms. If you deploy on one of these platforms, you won't have to deal with ports either. Unless you reach absolutely massive scale, it will definitely be cheaper for you to operate with high reliability this way.
You won't need to deal with nginx anymore.
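On the authentication point, here is a minimal sketch of the kind of check meant above (the header name and token comparison are placeholders, not a recommendation of any particular scheme):

```javascript
// Bare-bones auth gate on every API route. The header name and the token
// comparison are placeholders; a real app would use JWT, sessions, OAuth, etc.
const express = require('express');
const app = express();

// Placeholder validation; swap in a real scheme.
const isValidToken = (token) => Boolean(token) && token === process.env.API_TOKEN;

app.use('/api', (req, res, next) => {
  if (!isValidToken(req.get('Authorization'))) {
    return res.status(401).json({ error: 'unauthorized' });
  }
  next();
});

app.get('/api/data', (req, res) => res.json({ ok: true }));

app.listen(3000);
```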

Website with node.js, hosting architecture decision

I am planning to launch my first website. It is a small HTML5+CSS+JS website with a back end running Node.js that serves data stored in MongoDB. I would like to know which is the best solution, mostly with regard to security:
Web hosting (SSL and Cloudflare) + a VPS serving on port 3000 (with SSL, Cloudflare, and Node.js holding the sensitive data: users, passwords, and a local MongoDB).
Everything on the same VPS.
Any other approach you can suggest.
The thing is that in the first approach there are two elements in the architecture, so if someone wants to hack it I suppose it's more difficult. On the other hand, in the second approach, if the VPS is hacked then everything is hacked and they could access the passwords and the MongoDB database. I am quite obsessed with security, as it is my first website, and I don't know what measures to take to protect my VPS (Node.js and MongoDB).
Furthermore, I would like to know, in terms of efficiency, which would be the best solution for, say, a 10 MB website with 1,000 visits a day.
Regardless of how many actual servers you decide to deploy on, I'd strongly suggest not serving your site directly from Node.js. Instead, proxy it through a more robust HTTP server such as Apache, Nginx, or even lighttpd, for the very simple reason that the http module in Node.js was never meant to protect against worms, hacking attempts, and various other malware.
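Whichever proxy you pick, one detail worth getting right on the Node side (a sketch; the port is arbitrary) is to bind the app to the loopback interface only, so the only way in from outside is through the proxy:

```javascript
// Bind the Node app to loopback only, so it is reachable solely through the
// reverse proxy (Apache/Nginx/lighttpd) running on the same machine.
// The port is arbitrary.
const http = require('http');

http.createServer((req, res) => {
  res.end('only the local reverse proxy can reach me\n');
}).listen(3000, '127.0.0.1');
```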
I've written web servers from scratch myself and have noticed that, in general, you'll get your first hacking attempt within the first hour of putting your server online. You'll get around a dozen or so hacking attempts per day on the slowest days, and it goes up from there. These attempts are so common that most server software no longer logs them in access logs and simply blocks them.
From my own personal experience I estimate that around 5% to 10% of my bandwidth is consumed by failed hacking/infection attempts. That is when I'm not being actively attacked.
Security through obscurity is not good security, especially since Node's http module is not very obscure in the first place and someone is bound to find an exploitable weakness in it one of these days.
Apart from security, you also waste fewer CPU cycles rejecting these hacking attempts in Apache or Nginx than in Node.js, since you don't need to run any JavaScript code to handle them.
You can make the choice between the two architectures moot. Both architectures are hackable, and your data will be exposed.
If security is paramount, check out Mylar - it's a platform that protects data confidentiality even when an attacker gets full access to servers. Mylar stores only encrypted data on the server, and decrypts data only in users' browsers.
It runs on top of Meteor, which in turn runs on top of Node.js and uses MongoDB, so if your web app is small, it should be easy to port the code. Meteor also stores passwords using bcrypt, one of the best regarded password-hashing algorithms available today.

Confusion about the support of Node.js by hosting providers

I'm working for a small company on something like a new PHP environment for future projects. I'd like to cram in as much modernization and automation as possible (while I can).
The thing is, I always come across solutions that require Node.js (Grunt, Autoprefixer, ...). None of our customers' hosting providers supports Node.js (not even our own managed server). Most of the time I don't even have shell access.
I come across npm this and npm that so often, almost as if it were some always-available quasi-standard. Do I have some misunderstanding here, or is this simply only usable by people hosting their projects on their own servers? Am I just out of luck if I have to support a wide range of (sometimes questionable) shared hosting providers?
Comparing most PHP applications and most Node.js applications is apples and oranges.
Most PHP applications are fairly self-contained and intended to be used with web servers and a mostly stock PHP configuration. Most Node.js applications have a ton of NPM dependencies that need to be installed, and while HTTP is used to connect between the web server and the Node.js application, it isn't always clear what port that will be on. Plus, the Node.js application may require extra configuration, command line parameters, etc. Some hosting for Node.js is smart enough to look at the package.json file (Elastic Beanstalk for example) and figure out how to start your Node.js application.
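As a rough illustration (the names and versions here are placeholders), a Node-aware host typically just runs the start script from a package.json like this:

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.18.0"
  }
}
```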
These days you will find PHP going the same way. A lot of software is built with Composer packages that must be set up and installed, and you won't find many folks getting that working on shared hosting either. Many Node.js applications have nothing to do with the web or web servers. That is increasingly becoming the case with PHP as well, but you won't find shared hosting for those kinds of PHP applications either.
Basically, you're looking at two entirely different ecosystems.
I think your company needs to realize that you're sacrificing an awful lot just to stay compatible with cheap, crappy shared hosting. These days you can get a $5/mo. VPS to run whatever you want, and that's often the same price as your shared hosting. Why waste time and resources building a substandard application if you can pay $10 more a year and do what you want/need to do?
Use the technologies that you need to get the job done. If what you can do works fine in a normal PHP web application framework, then use that. If you need to build a persistent server application and feel that Node.js is right for you, use that.

Node.js common practices

I've been reading up on a few node tutorials but there are a couple of best/common practices that I would like to ask about for those out there that have built real node apps before.
Which user do you run the Node application as on your Linux box? None of the tutorials I've read mentions adding a node user and group, so I'm curious whether they just neglect to mention it or do something else.
Where do you keep your projects? '/home/'? '/var/'?
Do you typically put something in front of your node app? Such as nginx or haproxy?
Do you run other resources, such as storage (Redis, Mongo, MySQL, ...), message queues, etc., on the same machine or on separate machines?
I am guessing this question is mostly about setting up your online server and not your local development machine.
In the irc channel somebody answered the same question and said that he uses a separate user for each application. So I am guessing that this is a good common practice.
I mostly do /home/user/apps
I see a lot of nginx examples, so I am guessing that is what most people use. I have a server with Varnish in front of a Node.js application; that works well and was easy to set up. There are some pure Node.js solutions, but for something as important as your reverse proxy I would go for something a little more battle-tested.
To answer this correctly you probably have to ask yourself: what are my resources? Can I afford many small servers? How important is the application? Will you lose money if the app goes down?
If you run a full stack on, let's say, one VPS per app, then a problem with that VPS affects only one of your apps.
In terms of maintenance, having one database server for multiple apps might seem attractive: if you need to update your database to patch a security hole, you only need to do it in one place. On the other hand, you now have a single point of failure for all the apps depending on that database server.
I personally went for many full-stack servers, and I am learning how to automate deployment and maintenance. Tools like Puppet and Chef seem to be really helpful for this.
I have only owned my own Linux servers for the last 3 months and have been a Linux user for 1.5 years, so before setting up a server park based on these answers, make sure you do some additional research.
Here's what I think:
Using a separate user for each app is the way I do it.
I keep the app in /home/user/ to make sure that only that user (and root, of course) has access to it.
Some time ago I created my own reverse proxy in Node.js based on the node-http-proxy module. If you don't need a reverse proxy, then there's no point in putting anything in front of Node. It may even harm the app; for example, at the time of writing nginx could not proxy upstream connections over HTTP/1.1.
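For what it's worth, a minimal version of that kind of proxy looks roughly like this with the current http-proxy module (hostnames and backend ports are placeholders):

```javascript
// Minimal host-based reverse proxy using the http-proxy (node-http-proxy)
// module. Hostnames and backend ports are placeholders.
const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({});

const targets = {
  'sitea.com': 'http://127.0.0.1:3001',
  'siteb.com': 'http://127.0.0.1:3002',
};

http.createServer((req, res) => {
  const host = (req.headers.host || '').split(':')[0].toLowerCase();
  const target = targets[host];
  if (!target) {
    res.writeHead(502);
    return res.end('unknown host\n');
  }
  proxy.web(req, res, { target }, () => {
    res.writeHead(502);
    res.end('backend unavailable\n');
  });
}).listen(80);
```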
I run all resources on the same machine. Only when I actually need to distribute my app across separate machines do I start thinking about it; there's no need to optimize prematurely. The app's code is a different matter, though.
Visit the following links:
nettuts
nodetuts
lynda nodejs tutorials
Best practice seems to be to use the same user/group as you would for Apache or a similar web server.
On Debian, that is www-data:www-data
However, that can be problematic with some applications that might require higher permissions. For example, I've been trying to write something similar to Webmin using Node and this requires root permissions (or at least adm group) for a number of tasks.
On Debian, I use /var/nodejs (I use /var/www for "normal" web applications such as PHP)
One of the reasons I'm still reluctant to use Node (apart from the appalling lack of good quality documentation) is the need to assign multiple IP Ports when running multiple applications. I think that for any reasonably sized production environment you would use virtual servers to partition out the Node server processes.
One thing that Node developers seem to often forget is that, in many enterprise environments, IP ports are very tightly controlled. Getting a new port opened through the firewall is a very painful and time-consuming task.
The other thing to remember if you are using a reverse proxy is that web apps often misbehave when run behind a proxy, especially when mapping a virtual folder (e.g. https://extdomain/folder -> http://localhost:1234), so you need to keep testing.
I'm just running a single VPS for my own systems. However, for a production app, you would need to understand the requirements. Production apps would be very likely to need multiple servers if only for resilience and scalability.

How to create a dynamic website without IIS

I want to create a dynamic website without IIS. The area where I work does not allow anything to be installed on the server. They have a Windows-based server and I would like to create a dynamic website on it. IIS is not allowed, and server-side languages like ASP.NET and PHP are not allowed either. They did not say anything about the client side. Is it possible to do?
In short, the general answer to your question "Is it possible?" would be: no, it's not. And if you still find a way, it's not going to be worth the effort.
For one thing, even without programming languages like ASP.NET or PHP, you still need a web server such as IIS to serve static content. There are of course alternatives to IIS specifically, but no web server at all means no serving web sites at all.
If you were given the opportunity to serve static content, you could possibly produce a website that is dynamic at least on a per-visit basis using client-side scripting and cookies, but the things you could make that site do would be very limited, and with nothing more than static content being served there is no saving things between sessions, nor any way of affecting the server side of the application.
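For example, with nothing but static hosting you can still get this far using client-side JavaScript and a cookie (a sketch; the cookie name and the greeting element are made up):

```javascript
// Per-visit "dynamic" behaviour with only client-side JS and a cookie.
// The cookie name and the #greeting element are made-up examples.
const match = document.cookie.match(/(?:^|; )visits=(\d+)/);
const visits = (match ? Number(match[1]) : 0) + 1;

document.cookie = 'visits=' + visits + '; max-age=' + 60 * 60 * 24 * 365;

document.getElementById('greeting').textContent =
  visits === 1 ? 'Welcome!' : 'Welcome back, this is visit number ' + visits;
```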
You have to ask yourself why you need to serve this website. Is it something your company would benefit from? If so, could you convince the IT department to set up an environment to serve it? Are there any other alternatives? And, perhaps most importantly: there are lots of free or almost-free web hosting solutions out there. Why not just use one of them?
There are many excellent reasons why you would want to create a dynamic website without using a web server. Here are a couple:
You are creating a website as a means of presenting a dataset with hyperlinks that you want to be able to archive on read-only media and ignore for 10 years or more (as you can do with books), and still be able to read (IIS is very poor at backwards compatibility).
You need to present your data to people who have no access to servers or the internet and have no idea how to turn their PC into a web server (there are many millions of such people in the developing world)
Yes, it's challenging, but if you want something to be readable by anyone, anywhere, anytime, and all you can count on is a web browser, there's no other option.
By saying you want to do it without IIS, I'm assuming you mean without Apache as well (since you say no server-side languages are allowed).
It depends what you mean by 'dynamic'. Essentially you'll be limited to:
JavaScript, which means you can manipulate information and elements already on the page.
iframes, which would let you load external pages into elements on the page. These could be dynamic, and if they were on the same server you could manipulate them as well; if they come from an external server, you won't have control over them from that page.
If you are able to set up an HTTP proxy, you can use JavaScript together with a service like CouchOne. You will need the proxy because browsers restrict cross-origin AJAX calls.

Resources