Hosting multiple .NET Core applications on Ubuntu using Nginx - linux

I created a sample Web API in .NET Core, registered it in the default file in Nginx, and was able to access it from outside.
The API URL looked like https://<>/api/values.
Now I want to add more configuration to host additional Web APIs on different port numbers. The problem is how the default file will differentiate between multiple APIs, since the base URL, i.e. localhost/<>, is the same for all of them.

You need to create server blocks. Each server block will listen for and respond to a different app. You can host as many apps as you want on a single Ubuntu machine using Nginx this way.
This guide will be very helpful; it describes the entire process of creating server blocks for your Nginx server.
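A minimal sketch of two such server blocks (the listen ports 8080/8081 and the Kestrel ports 5000/5001 below are assumptions for illustration, not values from the question):

    # /etc/nginx/sites-available/apis -- hypothetical file; adjust ports to your apps
    server {
        listen 8080;                                   # first API: http://<host>:8080/api/values
        location / {
            proxy_pass         http://localhost:5000;  # Kestrel port of the first .NET Core app
            proxy_http_version 1.1;
            proxy_set_header   Upgrade $http_upgrade;
            proxy_set_header   Connection keep-alive;
            proxy_set_header   Host $host;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    server {
        listen 8081;                                   # second API: http://<host>:8081/api/values
        location / {
            proxy_pass         http://localhost:5001;  # Kestrel port of the second .NET Core app
            proxy_http_version 1.1;
            proxy_set_header   Host $host;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

Enable the file with a symlink into sites-enabled, then run nginx -t and reload Nginx. If you would rather keep a single listen port, you can instead differentiate the apps by location prefixes (e.g. /app1/ and /app2/) inside one server block.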

Related

Deploy Node.js on Tomcat

I have developed an application using Angular, Node/Express and MySQL. I have deployed my Angular application on the Tomcat server, which is connected to some 10 PCs. However, I want to deploy my backend, i.e. Node.js/Express.js, on the same server as well, since my app is totally dependent on the backend. How can I do that? I read online that one cannot deploy Node.js on Tomcat as they are separate servers. Do I have to install Node.js/MySQL separately on the same server? Isn't there any security threat linked to hosting the front-end and the back-end on the same server machine?
I would really appreciate if someone could clear my mind regarding this.
It's true that Tomcat and Node.js are separate web server programs, and you cannot run one within the other. You can run them both on the same machine, but they must use different ports.
You can use a reverse proxy server (Nginx) to project the illusion to your end users that your Tomcat and Node.js apps run on the same server and port. Explaining how to do that in full is beyond the scope of an SO answer, but a minimal sketch is given below.
You can share a database server (MySQL) between the Java applications running on Tomcat and the JavaScript application running on Node.js.
There is no inherent security risk in hosting your front-end and back-end code on the same origin server. In fact, there are security advantages, because you can set up restrictive CORS rules.
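As a minimal sketch of that reverse-proxy idea (the ports below - Tomcat on 8080, Node/Express on 3000 - the hostname, and the /api prefix are assumptions), Nginx can listen on port 80 and route by path so both apps appear to share one origin:

    server {
        listen 80;
        server_name example.com;            # hypothetical hostname

        # Angular front end served by Tomcat
        location / {
            proxy_pass       http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # Node/Express API under the same origin; Express routes are assumed
        # to be mounted under /api so the path can be passed through unchanged
        location /api/ {
            proxy_pass       http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }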

Deploy a MEAN stack application to an existing server

I have a Ubuntu Server on DigitalOcean which hosts a website, and a Windows Server on AWS which hosts another website.
I just built a MEAN.js stack app on my Mac, and I plan to deploy it to production.
It seems that most of the existing threads discuss using a new dedicated server. For example, this thread is about deploying on a new AWS EC2 instance; this video is about deploying on a new Windows Azure server; this one is about creating a new droplet on DigitalOcean.
My question is, is it possible to use an existing server (which hosts other websites), rather than creating a new server? If yes, will there be any difference in terms of performance?
My question is, is it possible to use an existing server (which hosts other websites), rather than creating a new server?
Yes. Both Windows and Ubuntu allow you to deploy multiple applications on the same instance.
For Ubuntu you can read this post, which will help you serve multiple apps.
That example uses Nginx, but you can also follow it and run your apps without a web server like Apache or Nginx in front. If you need subdomains, I would suggest using Apache virtual hosts with the reverse proxy module and pm2 (a sketch of the equivalent Nginx setup follows below).
For Windows and IIS I would suggest using iisnode; you can find a lot of articles on Google about how to configure it.
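For the subdomain case mentioned above, a minimal Nginx sketch (the hostname mean.example.com and port 3000 are assumptions; the MEAN app is assumed to be kept running by pm2, e.g. pm2 start server.js):

    server {
        listen 80;
        server_name mean.example.com;       # new app on a subdomain, next to the existing site's block

        location / {
            proxy_pass         http://127.0.0.1:3000;
            proxy_http_version 1.1;
            proxy_set_header   Upgrade $http_upgrade;    # keep WebSocket support for the MEAN app
            proxy_set_header   Connection "upgrade";
            proxy_set_header   Host $host;
        }
    }

The existing website keeps its own server block unchanged; only the new subdomain is routed to the Node process.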
will there be any difference in terms of performance?
It depends on your applications. If you are already serving applications that handle heavy traffic and need CPU and memory, I would not suggest running multiple apps on the same instance; but if you are going to run simple web apps, you can easily use the same instance.
Hope this answer will help you!

Configure my web server to expose the separate folders as separate web servers?

How can I configure my web server to expose the 5 separate folders as 5 different web servers?
It depends on what HTTP daemon you are using, but with Apache you probably want to look at using Name-Based Virtual Hosts - http://www.tecmint.com/apache-ip-based-and-name-based-virtual-hosting/
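The linked article covers Apache; the same name-based idea expressed as Nginx server blocks (hypothetical hostnames and folder paths - repeat one block per folder) would look roughly like this:

    server {
        listen 80;
        server_name site1.example.com;      # hypothetical name for folder 1
        root  /var/www/folder1;
        index index.html;
    }

    server {
        listen 80;
        server_name site2.example.com;      # hypothetical name for folder 2
        root  /var/www/folder2;
        index index.html;
    }

    # ...and so on for the remaining folders, one server block each.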

Best practices for shared image folder on a Linux cluster?

I'm building a web app that will scale into a Linux cluster with Tomcat and Nginx. There will be one Nginx web server load-balancing multiple Tomcat app servers, with a database server behind them, all running on CentOS 6.
The app involves users uploading photos. I plan to keep all the images on the file system of the front Nginx box and store pointers to them in the database. This way Nginx can serve them at full speed without involving the app servers.
The app resizes the images in the browser before uploading, so file sizes will not be too extreme.
What is the most efficient/reliable way of writing the images from the app servers to the Nginx front-end server? I can think of several ways I could do it, but I suspect some kind of network file system would be best.
What are current best practices?
Assuming you do not use a CMS (Content Management System), you could use the following options:
If you have only one front-end web server, the suggestion would be to store the files locally on the web server in a local Unix filesystem.
If you have multiple web servers, you could store the files on a SAN or NAS shared network device. This way you would not need to synchronize the files across the servers. Make sure that the shared resource is redundant; otherwise, if it goes down, your site will be down.
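If you go the shared-storage route, the front-end Nginx box could serve the uploads straight from the mount. A minimal sketch to place inside the existing server block (the /mnt/shared/images mount point and the /images/ URL prefix are assumptions):

    location /images/ {
        alias      /mnt/shared/images/;   # assumed NFS/SAN mount point shared with the app servers
        expires    30d;                   # let browsers cache the uploaded photos
        access_log off;                   # uploads are high-volume, keep logs lean
    }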

Confusion in bootstrapping my application in Backbone.js

I have a working application built with CodeIgniter and Phil Sturgeon's REST API, using Backbone.js, Underscore.js and Require.js.
I need to use MongoDB and Node.js in the backend, and I have built a working REST API for them. Now I am clueless as to how to migrate my whole project to work with this API. I use XAMPP on Windows to serve Apache, so since I no longer need XAMPP, how do I determine the structure of the file system?
What files will go there? How do I bootstrap my application?
Node.js comes with an application server.
You need to run your application server at a certain port (e.g. localhost:3000).
There are multiple ways to deal with this:
The application server is also the web server.
That means running your application server on port 80. That's not really an optimal solution, as application servers are not ideal for serving static assets. I think there are some security issues too, but you would need to read up on that.
Set up a web server and forward requests to the application server.
Here you will set up a web server. Apache is a web server, but Nginx is optimal for Node.js applications. So you run your web server on port 80 and forward all requests to your application server port (e.g. 3000). You set up Nginx so that for static assets (images, JavaScript, CSS, etc.) it doesn't bother the application server but serves the files directly from the file system (a sketch follows this list).
Set up a proxy server in Node.js.
Take a look at Bounce. Not sure how it performs in a production setup, but the reviews are good. However, it suffers from slowness in serving static assets.
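A minimal sketch of the second option (the hostname, document root and port 3000 below are assumptions): Nginx serves the static Backbone/Require.js assets itself and forwards everything else to the Node application server.

    server {
        listen 80;
        server_name example.com;              # hypothetical hostname

        root /var/www/myapp/public;           # assumed folder holding JS, CSS and images

        # Static assets are served straight from disk, never reaching Node
        location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
            expires   7d;
            try_files $uri =404;
        }

        # Everything else goes to the Node.js application server
        location / {
            proxy_pass       http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }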

Resources