I want to run e2e tests like Protractor (or other Selenium tests) against my development server. Is it possible to switch to a different test database for the duration of the tests? I am loading fixtures before each test runs.
What are good practices for this kind of testing with Node.js and MongoDB on the backend, particularly concerning database setup?
Thank you in advance.
The easiest way to do it IMHO would be to spin up another instance of your application with different configuration, namely connecting to a different database and listening on a different port. Then you can point Selenium to it. In theory the front end of the application should be port-agnostic; if that presents a problem, however, nginx can be of great help.
Let's say you want it on port 3333 under the domain test.myapp. Here is a sample nginx configuration:
server {
    listen 80;
    server_name test.myapp;

    location / {
        proxy_pass http://localhost:3333;
        proxy_set_header Host $host;
        proxy_buffering off;
    }
}
Of course, you will want another server block defined for your current development server. Simply rinse and repeat.
Usually the configuration in a Node.js application is chosen based on the value of the environment variable NODE_ENV. You can pass it like so when you run your app (I am assuming a Linux server here):
$ NODE_ENV=test node app.js
Then inside your application you can easily access it:
var env = process.env.NODE_ENV;
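For example, a minimal sketch of picking a Mongo connection string from NODE_ENV (the database names below are assumptions, not from the question):

```javascript
// Map each environment to its own database; names are illustrative.
var dbUrls = {
  development: "mongodb://localhost/myapp_dev",
  test: "mongodb://localhost/myapp_test"
};

var env = process.env.NODE_ENV || "development";
// Fall back to the development database if NODE_ENV has an unexpected value.
var dbUrl = dbUrls[env] || dbUrls.development;
```

Your fixtures then load into the test database without ever touching development data.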
I hope it helps.
Mocha now accepts a --config file, which you can use to point the tests at a different database. I use the same database server (a single server can host multiple databases), which keeps the setup simple and lightweight for a developer.
https://www.w3resource.com/mocha/command-line-usage.php
Looking at the following scenario, I want to know if this can be considered a good practice in terms of architecture:
A React application uses the NextJS framework to render on the server. Since the application's data changes often, it uses Server-side Rendering ("SSR" or "Dynamic Rendering"). In code, it fetches data in the getServerSideProps() function, meaning this runs on the server for every request.
On the same server instance, an API application runs in parallel (it could be a NodeJS API, a Python Flask app, ...). This app is responsible for querying a database, preparing models, and applying any transformations to the data. The API can be accessed from the outside.
My question is: how can NextJS communicate with the API app internally? Is it a correct approach for it to send requests via a localhost port? If not, is there an alternative that doesn't require NextJS to send external HTTP requests back to the same server it runs on?
One of the key requirements is that each app must remain independent. They are currently running on the same server, but they come from different code-repositories and each has its own SDLC and release process. They have their own domain (URL) and in the future they might live on different server instances.
I know that in NextJS you can use libraries such as Prisma to query a database, but that is not an option: data modeling is managed by the API and I want to avoid duplicating effort. Also, once the NextJS app is rendered on the client side, React will keep calling the API app via normal HTTP requests, to keep the front-end experience dynamic.
This is a very common scenario when a frontend application runs independently from the backend. A reverse proxy usually helps here.
Here is a simple way I would suggest to achieve it (and also one of the best ways):
Use different ports for your frontend and backend applications.
Make all API routes start with a specific prefix such as /api, and make sure your frontend routes do not start with /api.
Use a web server (I mostly use Nginx, which helps with virtual hosts, reverse proxying, load balancing, and serving static content).
So in your Nginx config file, add/update the following location blocks:
## for the backend or API, assuming the backend runs on port 3000 on the same server
location /api {
    proxy_pass http://localhost:3000;

    ## The following lines are required if you are using WebSocket;
    ## I usually add them even when not using WebSocket
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}

## for the frontend, assuming it runs on port 5000 on the same server
location / {
    proxy_pass http://localhost:5000;
}
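With that in place, the NextJS server can call the API over the loopback interface during server-side rendering. A sketch (the INTERNAL_API_URL variable and the /api/items path are assumptions for illustration):

```javascript
// Build an internal URL for server-side fetches; falls back to
// localhost:3000, where the API is assumed to be listening.
function internalApiUrl(path) {
  const base = process.env.INTERNAL_API_URL || "http://localhost:3000";
  return new URL(path, base).toString();
}

// Assumed usage inside a Next.js page:
// export async function getServerSideProps() {
//   const res = await fetch(internalApiUrl("/api/items"));
//   return { props: { items: await res.json() } };
// }
```

Because the call goes to localhost, it never leaves the machine, while browser-side requests still go through Nginx on the public URL.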
Sorry if this is the wrong forum for this question, but I am stuck and need some advice. I have a shared hosting service and a cloud-based hosting server with Node.js installed. I want to host my website as normal, but I also want to add real-time chat and location tracking using Node.js. I am confused by what I am reading in several places: Node.js is itself a server, but is it not designed to host websites? Do I have to run two different servers, one for the website and one for Node.js? When I set up the cloud server with a Node.js script running, I can no longer access the webpages.
What is the best way for me to achieve this? I am just going round in circles. Also, is there a way I can set up a server on my PC to run and test both of these together beforehand, so I can see what is needed and get it working? That would stop me from ordering servers I don't need.
Many thanks for any help or advice.
Node can serve webpages using a framework like Express, but it can conflict with another web server program (Apache, etc.) if both run on the same port. One solution is to serve your webpages through your web server on port 80 (or 443 for HTTPS) and run your Node server on a different port in order to send information back and forth.
There are a number of ways you can achieve this but here is one popular approach.
You can use NGINX as your front facing web server and proxy the requests to your backend Node service.
In NGINX, for example, you will configure your upstream service as follows:
upstream lucyservice {
    server 127.0.0.1:8000;
    keepalive 64;
}
The 8000 you see above is just an example; you may be running your Node service on a different port.
Further in your config (in the server config section) you will proxy the requests to your service as follows:
location / {
    proxy_pass http://lucyservice;
}
Your Node service can run under a process manager like forever or pm2. You can have multiple Node services running in a cluster, depending on how many processors your machine has.
So to recap: your front-facing web server handles all traffic on port 80 (HTTP) and/or 443 (HTTPS) and proxies the requests to your Node service running on whatever port(s) you define. All of this can happen on one single server, or on multiple servers if you need or desire.
I'm trying to set up a virtual host on my local machine for my Node (Express) project, but I can't figure out how to avoid the port number.
This is what I entered in my /etc/hosts file:
192.168.151.207 www.potato.com
192.168.151.207 www.tomato.com
I can access the site at www.potato.com:3000, but I want it to be simply www.potato.com.
I have been Googling for the last few days, but almost all the solutions say to use Nginx as a reverse proxy. I also read somewhere that if I use Nginx I can't use sockets, and sockets are something I have to use in the next phase of the project.
Any help is heartily appreciated.
Did you try the virtualhost npm package?
It makes your HTTP server hostname-aware very simply.
You define the handler for each server name, and it returns the final handler to be passed to your HTTP server.
It works fine with Express.
You only need nginx (or another proxy solution; there are also Node.js modules you could integrate with your application) if you want to serve each virtual host with a different application, because two applications cannot listen on the same port.
Here is the answer to my question. I use only Nginx, set up as a reverse proxy.
First, in my /etc/hosts file, I add the domain I want to use:
127.0.0.1 tomato.com
This means that whenever I open "tomato.com", the browser resolves it to 127.0.0.1. But my Express server is running on 127.0.0.1:3000, so we need to point port 80 on 127.0.0.1 to 127.0.0.1:3000. We can configure this with Nginx. The configuration below, in /etc/nginx/sites-available/tomato.conf, does this:
server {
    listen 80;
    server_name tomato.com;

    location / {
        proxy_pass http://127.0.0.1:3000/;
    }
}
For more detail, check this post from DigitalOcean.
I have a very simple Node.js app which listens on a particular host IP address and port number.
Right now I maintain this hostname and port number inside functions in a js file (called config.js).
When I create the server, I call those functions, and they return the hardcoded hostname and port values, which I then use to create the server and listen on it.
So if the hostname or port changes, I just go to config.js and change it.
However, I feel this is not good.
What is the best practice for maintaining the host and port, and how is this generally done for large Node apps? Looking for some information on this.
Thanks & Warm Regards
Musaffir
There is an npm package called config that you can use to store configuration:
https://www.npmjs.com/package/config. It is a widely used package for managing configuration across multiple environments.
You can use it to define multiple configurations selected by an environment variable, and in a dockerized environment you can use Docker-specific environment variables via custom-environment-variables.js to override default.js.
For more details
https://github.com/lorenwest/node-config/wiki/
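A stripped-down sketch of the same idea, without the package (the environment names and default values here are assumptions):

```javascript
// Pick a base config by environment, then let environment variables override it.
const configs = {
  development: { host: "127.0.0.1", port: 3000 },
  test: { host: "127.0.0.1", port: 3333 },
};

function loadConfig(env) {
  const base = configs[env] || configs.development;
  return {
    host: process.env.HOST || base.host,
    port: Number(process.env.PORT) || base.port,
  };
}
```

The config package does this (and much more) for you, but the precedence order, with environment variables winning over files, is the key design idea.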
I think you should put your hostname and port in environment variables.
For development, you can use https://www.npmjs.com/package/dotenv.
e.g.
HOST_NAME=xxxx
PORT=3000
In your code you can then get the values via process.env.HOST_NAME and process.env.PORT.
A simple, easy way would be to maintain it in the package.json file, like:
"host": "11.12.13.114",
"port":"3333",
You can require package.json in the js file where you want the values:
const pkg = require("./package.json"); // "package" is reserved in strict mode, so use another name
console.log(pkg.host); // 11.12.13.114
“If #nginx isn’t sitting in front of your node server, you’re probably doing it wrong.”
— Bryan Hughes via Twitter
For a short while now, I have been making applications with Node.js and, as Mr. Hughes advises, serving them with Nginx as a reverse proxy. But I am not sure I am doing it correctly, because I can still access my Node.js application over the internet without going through the Nginx server.
At its core, the typical application is quite simple. It is served with ExpressJS like this:
var express = require("express");
var app = express();
// ...
app.listen(3000);
and Nginx is configured as a reverse-proxy like so:
# ...
location / {
proxy_pass http://127.0.0.1:3000;
}
# ...
And this works wonderfully! However, I have noticed a behaviour that I am not sure is desirable, and that may defeat a large part of the purpose of using Nginx as a reverse proxy in the first place:
Assuming example.org is a domain name pointing to my server, I can navigate to http://www.example.org:3000 and interact with my application from anywhere, without touching the Nginx server.
The typical end user would never have any reason to navigate to http://<whatever-the-server-host-name-or-IP-may-be>:<the-port-the-application-is-being-served-on>, so this would never affect them. My concern, though, is that it may have security implications that a not-so-end-user could take advantage of.
Should the application be accessible directly even though Nginx is being used as a reverse-proxy?
How can you configure the application so that it is only available to the local machine / network / Nginx server?
It is best for the app not to be reachable directly (IMHO).
You can specify the hostname it accepts connections on:
app.listen(3000, 'localhost');