Node.js REST API - node.js

I'm learning how to build a REST API with Node.js, but I'm wondering what the next step is to put the API on a real server (once it's working on localhost)? It's not easy to find out how to proceed on the web... Any suggestions?

You have 2 options.
Self Hosting
Host it on your own machine.
You can use a service like ngrok or PageKite. They are super simple to try and free to experiment with.
They will generate a URL for you, and then you can access your server through it.
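For instance, with the ngrok approach the flow looks roughly like this. It's a minimal sketch; the `ngrok` npm wrapper, the port and the route are my assumptions, and you can just as well run the ngrok CLI (`ngrok http 3000`) against a server that is already running (ngrok may also ask for a free account/authtoken).

```js
// Minimal sketch: a local Express API exposed through an ngrok tunnel.
// Requires: npm install express ngrok
const express = require("express");
const ngrok = require("ngrok");

const app = express();
app.get("/api/hello", (req, res) => res.json({ message: "hello" }));

app.listen(3000, async () => {
  // Open a tunnel to localhost:3000; ngrok returns a temporary public URL.
  const url = await ngrok.connect(3000);
  console.log(`REST API reachable at ${url}/api/hello`);
});
```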
You can also do it the hard way:
In your router settings, forward the port you are using to your machine.
Also in your router settings, make your machine's internal IP static.
Create a firewall exception for that port.
With that done, you can reach your server through your public IP address (see the sketch below).
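If you go the port-forwarding route, the one code-level detail that matters is binding the server to all interfaces rather than just 127.0.0.1. A minimal sketch, where Express and port 3000 are assumptions; use whatever framework and port you actually forwarded:

```js
// Minimal sketch: listen on 0.0.0.0 so traffic forwarded by the router can reach the process.
// Requires: npm install express
const express = require("express");
const app = express();

app.get("/api/ping", (req, res) => res.json({ ok: true }));

// 0.0.0.0 = every network interface, not just localhost.
app.listen(3000, "0.0.0.0", () => {
  console.log("Listening on port 3000; reachable at http://<your-public-IP>:3000 once the router forwards the port here");
});
```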
Cloud Hosting
VPS (Cheaper, but not as scalable)
You can rent a Virtual Private Server, which is a virtual machine running the OS of your preference (Linux is more common). Then you can just upload your code and start your server. Some even offer free domains for you to reach the server.
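On a VPS you usually also want the process to restart after crashes and reboots. One common way to do that (my suggestion, not something prescribed above) is a process manager such as pm2, configured with an ecosystem file like this sketch; the app name and entry script are placeholders:

```js
// ecosystem.config.js — minimal pm2 process file for the API on a VPS.
// Install pm2 globally (npm install -g pm2), then run: pm2 start ecosystem.config.js
module.exports = {
  apps: [
    {
      name: "my-rest-api",   // hypothetical app name
      script: "./server.js", // your server's entry point
      instances: 1,          // raise this (or use "max") to use more CPU cores
      autorestart: true,     // restart the process if it crashes
      env: {
        NODE_ENV: "production",
        PORT: 3000,
      },
    },
  ],
};
```

Running `pm2 startup` and `pm2 save` afterwards registers the process with the OS so the API also comes back after a reboot.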
There are many hosting platforms to choose from.
Cloud hosting platforms (More expensive, highly scalable)
Platforms like AWS or DigitalOcean offer virtual machines that can scale out as far as you need.
AWS offers the EC2 service and DigitalOcean offers Droplets.
There are LOTS of resources online on how to do it.
aws ec2
DigitalOcean
VPS
Paste this into Google: how to deploy nodejs vps OR digitalocean OR aws OR ec2
And be happy

Related

How to deploy a MERN app to production, without using Firebase or Heroku or AWS

So people, I'm planning to build a website with the MERN stack and host it from my local machine. How do I do that without using AWS or Firebase?
How can I use my machine as the backend and database (Express, Node and MongoDB hosted on localhost) for the React frontend (hosted at www.someurl.com)?
PS: I have already created a site using Firebase and Firestore as the backend.
https://t-heros.web.app/
Thanks in advance.
One thing is that you need to keep your local machine running 24/7 for your app to work throughout the day. As for your question: you can expose your localhost to the public by using a static IP address and the applicable routing/port forwarding on your router, with appropriate firewall rules. All of this takes effort, and a static IP address may have to be purchased from your ISP. With both of these in place, your React app can reach the backend on your machine for its operations.
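On the code side, the main things the locally hosted Express backend needs are to accept cross-origin requests from the deployed frontend and to listen on all interfaces. A minimal sketch; the `cors` package, the port and the route are my assumptions, while www.someurl.com comes from the question:

```js
// Minimal sketch of the locally hosted Express backend that the remote React app calls.
// Requires: npm install express cors
const express = require("express");
const cors = require("cors");

const app = express();

// Only accept cross-origin requests from the deployed React frontend.
app.use(cors({ origin: "https://www.someurl.com" }));
app.use(express.json());

app.get("/api/items", (req, res) => {
  // ...query your locally hosted MongoDB here...
  res.json([]);
});

// Listen on all interfaces so the router's port forwarding can reach this process.
app.listen(3000, "0.0.0.0", () => console.log("API listening on port 3000"));
```

Note that if the frontend is served over HTTPS, browsers will block calls to a plain-HTTP backend as mixed content, so the exposed endpoint would also need TLS.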

ReactJS, Express application self hosting on internal dedicated IP server

I know we can host our ReactJS application on Amazon, Microsoft Azure, Heroku etc.
But what are the important steps and security precautions required to set it up on an internal hosting server?
What are the pros and cons of a Linux- or Windows-based server? (Also, which versions are relevant?)
How do I set up SSL on a local hosting server? What are the options?
What security precautions need to be taken?
An internet line with a dedicated IP from the ISP can be connected, but do I need any security hardware in the middle of the network?
How do I set up/connect a purchased domain name (www.mydomain.com) to an internal hosting server?
How do I assign multiple IPs to an internal hosting server, so that if one server or one network fails, the other keeps working under the purchased domain name?
How do I log visitor IP access at the hardware level to keep the server secure?
How do I set up an internal code version control system (using any local version control system and also GitHub), so that if a deployment fails or causes trouble, we can restore an older code version?
How do I set up a mail server to send and receive emails, and how can we set up different email accounts on the local hosting server?
I just had a look at the following link, which contains most of the details related to the most common server setup practices. Hopefully this will answer the questions related to the server environment setup.
https://www.digitalocean.com/community/tutorials/5-common-server-setups-for-your-web-application
In the Related Articles section (at the end of the above article) there is a lot of information on setting up a Node.js application on Ubuntu, etc. Hopefully the discussion there will clarify the concepts in more depth, e.g. How To Deploy a Node.js and MongoDB Application with Rancher on Ubuntu 16.04.
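As a small code-level illustration of one item from the list above, here is how visitor IPs can be logged when the Node.js app sits behind a reverse proxy (the layout most of those tutorials use). The sketch assumes a single proxy hop and the Express + morgan packages, which are my choices rather than anything from the linked article:

```js
// Minimal sketch: log real visitor IPs behind a reverse proxy such as Nginx.
// Requires: npm install express morgan
const express = require("express");
const morgan = require("morgan");

const app = express();

// Trust the first proxy hop so req.ip comes from the X-Forwarded-For header set by the proxy.
app.set("trust proxy", 1);

// Apache-style access log lines: remote IP, method, path, status code, etc.
app.use(morgan("combined"));

app.get("/", (req, res) => res.send(`Your IP appears as ${req.ip}`));

app.listen(3000, () => console.log("App listening behind the reverse proxy on port 3000"));
```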

Data not displaying from website hosted on AWS

I am facing a very weird problem. Please help.
I have developed a website using the MEAN stack and it is hosted on an AWS EC2 instance.
If I access the website from my laptop, I can see the data (from MongoDB installed on the server). But when I access the website from another laptop or a mobile phone (using a browser), all the tables come up blank without any data.
I don't get why it works on my laptop, as there is no relation between the AWS instance and my machine, except that I use the AWS console/dashboard from my machine.
Thanks.
Please check whether, while developing the website, you set your laptop's hosts file to resolve to the IP of the AWS EC2 instance where your website is hosted.
Also check the EC2 instance's security group to see whether you opened the instance's HTTP port only to your own IP address.
These two are the most likely causes of the issue you describe.
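If the security group turns out to be the problem, opening HTTP to everyone can be done in the EC2 console, or programmatically as in this sketch (AWS SDK for JavaScript v3; the security group ID and region are placeholders):

```js
// Minimal sketch: allow HTTP from anywhere instead of just one IP address.
// Requires: npm install @aws-sdk/client-ec2
const {
  EC2Client,
  AuthorizeSecurityGroupIngressCommand,
} = require("@aws-sdk/client-ec2");

async function openHttpToAll() {
  const client = new EC2Client({ region: "us-east-1" }); // placeholder region
  await client.send(
    new AuthorizeSecurityGroupIngressCommand({
      GroupId: "sg-0123456789abcdef0", // placeholder security group ID
      IpPermissions: [
        {
          IpProtocol: "tcp",
          FromPort: 80,
          ToPort: 80,
          IpRanges: [{ CidrIp: "0.0.0.0/0", Description: "Allow HTTP from anywhere" }],
        },
      ],
    })
  );
  console.log("Port 80 is now open to 0.0.0.0/0");
}

openHttpToAll().catch(console.error);
```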

Making locally hosted server accessible ONLY by AWS hosted instances

Our system has 3 main components:
A set of microservices running in AWS that together comprise a webapp.
A very large monolithic application that is hosted within our network, comprises several other webapps, and exposes a public API that is consumed by the AWS instances.
A locally hosted (and very large) database.
This all works well in production.
We also have a testing version of the monolith that is inaccessible externally.
I would like to be able to spin up any number of copies of the AWS environment for testing or demo purposes that can access the testing version of the monolith. However, because it's a test system, it needs to remain inaccessible to the public. I know how to achieve this with AWS easily enough (security groups etc.), but how can I secure the monolith so it can be accessed ONLY by any number of dynamically created instances running in AWS (given that the IP addresses are dynamic and can therefore not be whitelisted)?
The only idea I have right now is to use an access token, but I'm not sure how secure that is.
Edit - My microservices are each running on an EC2 instance.
Assuming you are running your microservices on EC2, if you want API calls from your application servers running in AWS to come from a known IP (or IPs), then this can be accomplished by using a NAT instance or a proxy. This way, even though your application servers are dynamic, the apparent source of the requests is not.
For a NAT, you would run your EC2 instances in a private subnet and configure them to send all of their Internet traffic out over the NAT instance, which will have a constant IP. Using a proxy server or a fleet of proxy servers can be accomplished in much the same way, but would require your microservice applications to be configured to use it (see the sketch below).
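As a rough illustration of the proxy variant, a microservice could route its calls to the monolith through a fixed-IP forward proxy like this (the proxy host, the monolith URL and the https-proxy-agent package are my assumptions):

```js
// Minimal sketch: send requests to the monolith through a fixed-IP forward proxy,
// so the monolith only has to whitelist the proxy's address.
// Requires: npm install https-proxy-agent
const https = require("https");
const { HttpsProxyAgent } = require("https-proxy-agent");

// The proxy (or NAT) instance with the constant, whitelistable IP.
const agent = new HttpsProxyAgent("http://proxy.internal.example:3128");

https
  .get("https://monolith.example.com/api/health", { agent }, (res) => {
    console.log("Monolith answered with status", res.statusCode);
    res.resume(); // drain the response
  })
  .on("error", (err) => console.error("Request failed:", err.message));
```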
The better approach would be to simply not send the traffic between your microservices and the monolith over the public Internet at all.
This can be accomplished by establishing a VPN from your company network to your VPC. Alternatively, you could establish a Direct Connect to bridge the networks.
Side note, if your microservices are actually running in AWS Lambda then this answer does not apply.

Choosing shared Linux AMI machine image for AWS

I know next to nothing about server management and just got started with Amazon Web Services.
I want to deploy a Linux server which runs Apache, MySQL, phpMyAdmin as well as email capabilities (account mgmt and webmail interface) and backup capabilities. I want to administer the server with a nice web user interface like cPanel, doing things like file management, email account management, access to phpMyAdmin.
Therefore I thought about deploying a shared Linux AMI, instead of building and configuring the server myself. I want to make my life easy, that is, deploying something pre-existing which is easy to manage (web user interface) since I haven't got time to learn all about server management right now.
I found this list of images. Which one of these would fit my requirements?
This is an inappropriate use case for EC2. As Amazon's CTO Werner Vogels said a few months ago, "an EC2 instance is not a server, it's a building block." EC2 is used to provide computing resources to an application that spans multiple, loosely coupled services. It's not a drop-in replacement for a standard VPS.
That's not to say that a lot of people aren't using EC2 instances as servers. However, these are often the same people who bitterly complain about excessive downtime on AWS without realizing that it's mostly their own fault. An application must be designed to be deployed in a cloud-based environment when it's built on an IaaS platform like AWS. If your application is not aware of auto-scaling groups and other high-availability features, then traditional dedicated hosting will be cheaper, less complex, and more durable than AWS.
I am aware of AMIs for Webmin, but not for cPanel. Here is the link:
https://www.virtualmin.com/documentation/aws/virtualmin_gpl_ami
I would echo the comments made by @jamieb, however, in that this is really not a good use case for EC2. You are limited to a single Elastic IP per instance, so you have no ability to do IP-based virtual hosting as you would with a typical VPS.
