What is the proper way to store code/functions that are used by both the frontend and backend? - node.js

My frontend Reactjs app is stored in one repository.
My backend Node.js app is stored in another repository.
There are some functions used by both. Where should I store those functions so that both repositories can access them?

You can create a library that exports all of the functions you'll be needing, then publish it to NPM and add it to the dependencies of both projects' package.json. With NPM you can set your packages as private, too, in case you don't want your code/package to be publicly available.
The starting point would be to create a directory with all the functions you need, export them all in an index.js, and run npm init to create a package.json for your new project. You'll be guided through naming it and assigning a version number, then publish with npm publish (you may need to create an account and run npm login first). Then in your frontend and backend projects you simply npm install <your-package> like any other npm package.
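In practice, the workflow is roughly this (a sketch; my-functions is just the name chosen during npm init, matching the example below):

mkdir my-functions && cd my-functions
npm init          # answer the prompts for name, version, etc.
npm login         # only needed the first time
npm publish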
Your project directory may be as simple as...
myFunctions.js
index.js
package.json
myFunctions.js:
export const functionA = () => {
  return "a"
}

export const functionB = () => {
  return "b"
}
index.js:
export * from './myFunctions.js'
package.json (can be created with npm init; note the added "type": "module", needed because the example uses ES module syntax):
{
  "name": "my-functions",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "type": "module",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
Then in the directory run npm publish, and in your other projects you can run npm install my-functions.
And finally, in your other projects:
import { functionA } from 'my-functions';
// ...
functionA() // returns "a"

Creating a separate NPM package for your helper functions can certainly be a good solution, but I find them somewhat annoying to maintain across different repositories. I tend to try and avoid them.
There are certainly some functions in your application that do have a purpose on both the front- and backend, but I would encourage you to look at these carefully to see if that logic can be the responsibility of one or the other (backend or frontend).
For example: if you have a function that parses a date and formats it in a very specific way for your app, that function can live solely in the backend, which passes the already converted value to the frontend - avoiding the burden of maintaining it in two places or in a separate package that then needs to be updated in two repositories.
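A minimal sketch of that idea, assuming an Express backend (formatAppDate and the /api/order route are made up for illustration):

const express = require('express');
const app = express();

// The formatting rule lives in exactly one place (hypothetical helper):
function formatAppDate(date) {
  return date.toISOString().slice(0, 10); // e.g. "2024-01-31"
}

app.get('/api/order/:id', (req, res) => {
  // ...load the order, then return the date already formatted
  res.json({ id: req.params.id, createdAt: formatAppDate(new Date()) });
});

app.listen(3000);

The React client can render createdAt as-is, so the rule never has to exist in two codebases.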
Sometimes there's just no getting around it though, but I found that in most cases I can split them accordingly.

Related

Is there any way to open an nw.js app from a web browser?

I have a desktop application, packaged using node-webkit JS. Is there any way to open this app via its IP address from another computer's browser? I set node-remote to http://localhost:3000 in package.json, but it's not working when I use Chrome and open the IP. There are some errors like nw is not defined, etc. Please tell me if this can work or not. Thanks
I don't know NW.js, but if I understand correctly, you want to access localhost on your computer from another computer.
You cannot use the IP to access it because of NAT, but fortunately you can do that through a third computer (one that is not behind NAT). To do that, use localtunnel.
For more info, see the localtunnel documentation.
Do you think you could paste what your package.json looks like? I've done what you are talking about. Here is what my file structure looks like:
![my file structure](https://i.imgur.com/L3M6lvx.png)
The package.json that is in my project folder:
![my package.json](https://i.imgur.com/uZV7mzr.png)
The first thing I did was install my dependencies into my project folder so that I don't get a "command not recognized" error. I did that by going to my project folder and typing:
npm init -y
npm install nw@0.50.1-sdk nwjs-builder -D
This creates a fresh package.json and adds the modules to the file as dependencies. Then I went into my src folder and created another package.json. I set the "main" field to my index.html.
Going back to the package.json in my root project folder, we add to the "scripts" field:
"scripts": {
  "dev": "nw src/ --remote-debugging-port=9222"
}
(you can make dev whatever you want)
Once you have that set up, all you need to do is run npm run dev and your app will open up. Head over to Chrome, go to localhost:9222, and you should be set.
It is possible to create an app that can run in a regular browser, and also in NW.js with added features when it runs inside NW.js. You would need to basically wrap anything in if statements, like
if (window.nw) {
  let fs = window.nw.require('fs');
  let file = fs.readFileSync('./whatever.txt');
  console.log(String(file));
}
You could then create two different npm scripts. One to just run a local web server and one to run it and launch NW.js.
{
  "main": "http://localhost:4467",
  "node-remote": "http://localhost:4467",
  "node-main": "server.js",
  "scripts": {
    "start": "concurrently \"npm run serve\" \"wait-on http://localhost:4467 && nw .\"",
    "serve": "node server.js"
  },
  "dependencies": {
    "express": "latest"
  },
  "devDependencies": {
    "concurrently": "latest",
    "wait-on": "latest"
  }
}
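For completeness, a minimal server.js to pair with that package.json could look something like this (a sketch; it assumes index.html and assets sit in the project root):

// server.js - serves the app on the port package.json points at
const express = require('express');
const app = express();

app.use(express.static(__dirname)); // index.html and assets

app.listen(4467, () => console.log('Listening on http://localhost:4467'));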
Example: https://github.com/nwutils/nw-local-server-example

How Do I Build For A UAT Environment Using React?

According to the React docs you can have development, test and production envs.
The value of NODE_ENV is set automatically to development (when using npm start), test (when using npm test) or production (when using npm run build). Thus, from the point of view of create-react-app, there are only three environments.
I need to change root rest api urls based on how I am deployed.
e.g.
development: baseURL = 'http://localhost:3004';
test: baseURL = 'http://localhost:8080';
uat: baseURL = 'http://uat.api.azure.com:8080';
production: baseURL = 'http://my.cool.api.com';
How do I configure a UAT environment for react if it only caters for dev, test and prod?
What would my javascript, package.json and build commands look like to switch these values automatically?
Like John Ruddell wrote in the comments, we should still use NODE_ENV=production in a staging environment to keep it as close to prod as possible. But that doesn't help with our problem here.
The reason why NODE_ENV can't be used reliably is that most Node modules use NODE_ENV to adjust and optimize with sane defaults, like Express, React, Next, etc. Next even completely changes its features depending on the commonly used values development, test and production.
So the solution is to create our own variable, and how to do that depends on the project we're working on.
Additional environments with Create React App (CRA)
The documentation says:
Note: You must create custom environment variables beginning with REACT_APP_. Any other variables except NODE_ENV will be ignored to avoid accidentally exposing a private key on the machine that could have the same name.
It was discussed in an issue where Ian Schmitz says:
Instead you can create your own variable like REACT_APP_SERVER_URL which can have default values in dev and prod through the .env file if you'd like, then simply set that environment variable when building your app for staging like REACT_APP_SERVER_URL=... npm run build.
A common package that I use is cross-env so that anyone can run our npm scripts on any platform.
"scripts": {
"build:uat": "cross-env REACT_APP_SERVER_URL='http://uat.api.azure.com:8080' npm run build"
Any other JS project
If we're not bound to CRA, or have ejected, we can easily define any number of environment configurations in a similar fashion.
Personally, I like dotenv-extended which offers validation for required variables and default values.
Similarly, in the package.json file:
"scripts": {
"build:uat": "cross-env APP_ENV=UAT npm run build"
Then, in an entry-point node script (one of the first scripts loaded, e.g. required in a babel config):
const dotEnv = require('dotenv-extended');
// Import environment values from a .env.* file
const envFile = dotEnv.load({
  path: `.env.${process.env.APP_ENV || 'local'}`,
  defaults: 'build/env/.env.defaults',
  schema: 'build/env/.env.schema',
  errorOnMissing: true,
  silent: false,
});
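As an illustration, the .env.UAT file loaded by the snippet above would be a plain KEY=value list (SERVER_URL is a made-up variable name):

# .env.UAT - picked up when APP_ENV=UAT
SERVER_URL=http://uat.api.azure.com:8080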
Then, as an example, a babel configuration file could use these like this:
const env = require('./build/env');
module.exports = {
  plugins: [
    ['transform-define', env],
  ],
};
Runtime configuration
John Ruddell also mentioned that one can detect at runtime the domain the app is running off of.
function getApiUrl() {
  const { href } = window.location;
  // UAT
  if (href.indexOf('https://my-uat-env.example.com') !== -1) {
    return 'http://uat.api.azure.com:8080';
  }
  // PROD
  if (href.indexOf('https://example.com') !== -1) {
    return 'http://my.cool.api.com';
  }
  // Defaults to local
  return 'http://localhost:3004';
}
This is quick and simple, and works without changing the build/CI/CD pipeline at all. It does have some downsides, though:
- All the configuration is "leaked" into the final build.
- It won't benefit from dead-code removal at minification time when using something like babel-plugin-transform-define or Webpack's DefinePlugin, resulting in a slightly bigger file size.
- The values won't be available at compile time.
- It's trickier if using Server-Side Rendering (though not impossible).
To have multiple environments in a React.js application, you can use the env-cmd package from npm.
After that, create one .env file per environment.
For example, if you want to set up dev, staging, and prod environments, you can write your scripts like this:
"start:dev": "env-cmd -f dev.env npm start", // dev env
"build:beta": "env-cmd -f stag.env npm run build", // beta env
"build": "react-scripts build", // prod env using .env file

Sharing code between Firebase Functions and React

I'm using Firebase functions with a React application. I have some non-trivial code that I don't want to duplicate, so I want to share it between the deployed functions and my React client. I've got this working locally in my React client (though I haven't tried deploying) - but I can't deploy my functions.
The first thing I tried was npm link. This worked locally, but the functions won't deploy (which makes sense, since this leaves no dependency in your package.json). Then I tried npm install ../shared/ - this looked promising because it did leave a dependency in package.json with a file: prefix - but Firebase still won't deploy with this (error below).
My project directory structure looks like this:
/ProjectDir
  firebase.json
  package.json (for the react app)
  /src
    * (react source files)
  /functions
    package.json (for firebase functions)
    index.js
  /shared
    package.json (for the shared module)
    index.js
My shared module package.json (extraneous details omitted):
{
  "name": "myshared",
  "scripts": {},
  "dependencies": {},
  "devDependencies": {},
  "engines": {
    "node": "8"
  },
  "private": true,
  "version": "0.0.1"
}
My firebase functions package.json (extraneous details omitted):
{
  "name": "functions",
  "scripts": {},
  "dependencies": {
    "myshared": "file:../shared"
  },
  "devDependencies": {},
  "engines": {
    "node": "8"
  },
  "private": true
}
When I try to deploy with:
firebase deploy --only functions
It's telling me it can't load the module:
Function failed on loading user code. Error message: Code in file index.js can't be loaded.
Did you list all required modules in the package.json dependencies?
And I don't think the issue is how I export/import my code - but just in case:
The export:
exports.myFunc = () => { some code };
The import (functions/index.js):
const { myFunc } = require('myshared');
And in my react code:
import { myFunc } from 'myshared';
So far the searching I've done hasn't yielded anything that works. Someone did mention entering the shared module path in firebase.json, but I couldn't find any details (including in the firebase docs) that show what that would look like. Thanks for any tips to get this going.
I found a solution. I'm not sure if it's the only or even the best solution, but it seems to work for this scenario and is easy. As Doug noted above, Firebase doesn't want to upload anything not in the functions directory. The solution was to simply make my shared module a subdirectory under functions (i.e. ./functions/shared/index.js). I can then import it into my functions like a normal js file. However, my shared folder also has a package.json, for use as a dependency of the react app. I install it using:
npm install ./functions/shared
This creates a dependency in my react app, which seems to resolve correctly. I've created a production build without errors. I haven't deployed the react app yet, but I don't think this would be an issue.
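For reference, the react app's package.json then ends up with an entry along these lines (path matching the structure above):

"dependencies": {
  "myshared": "file:functions/shared"
}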
Another solution is to create a symlink. In a terminal, under /ProjectDir, execute:
ln -s ../shared functions/shared
cd functions
npm i ./shared

Prevent `npm publish` when run directly

I am not sure whether it is possible or not.
Is it possible to prevent publishing when npm publish is run directly, and make it accessible only via scripts?
The user must be denied when npm publish is executed directly, i.e. the user must only be able to publish via a script, npm run <script>
or
is there a way to tell npm to only publish <folder>/ or to look for a tarball when publishing?
If I mark it private I won't be able to publish at all. My main intention was to prevent accidental publishes.
The npm team gave a simple workaround, which is awesome.
package.json
{
  "scripts": {
    "prepublishOnly": "node prepublish.js",
    "release": "RELEASE_MODE=true npm publish"
  }
}
prepublish.js
const RELEASE_MODE = !!(process.env.RELEASE_MODE)

if (!RELEASE_MODE) {
  console.log('Run `npm run release` to publish the package')
  process.exit(1) // which terminates the publish process
}
Mark the package as private:
If you set "private": true in your package.json, then npm will refuse to publish it.
This is a way to prevent accidental publication of private repositories. If you would like to ensure that a given package is only ever published to a specific registry (for example, an internal registry), then use the publishConfig dictionary described below to override the registry config param at publish-time.
{
  "name": "some",
  "version": "1.0.0",
  "private": true
}
If you are trying to force something to happen before publishing, leverage the prepublish or prepublishOnly npm-script.
Yes, we can prevent accidental publishes by setting "private": true in package.json.
You can also have a dedicated script for publishing.
In your package.json
{
  "scripts": {
    "publish:mypackages": "npm publish folder1/file1.tgz --registry http://custom-registry..."
  }
}
Now in cmd: npm run publish:mypackages
This publishes the given tarball to the registry you specified.
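In case you're wondering where the tarball comes from: npm pack builds one from a package directory, which you can then publish explicitly. A sketch (names and paths are examples):

npm pack ./my-package          # writes my-package-1.0.0.tgz to the current directory
npm publish my-package-1.0.0.tgz --registry http://custom-registry...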

How do I install npm packages on Google Cloud Functions?

I'm trying to create a simple function that:
fetches a JSON file from a public URL
does a little number crunching and spits out an answer.
I figured that Google Cloud Functions would be the perfect fit since I can code in JS and don't have to worry about server deployment, etc.
I've never really used nodejs/npm so maybe this is the issue, but I tried reading online and they just mention
npm install package-name
I'm not sure where I can do this on the Google Cloud Functions page.
I'm currently using the inline editor and I have the following:
var fetch = require('node-fetch');

exports.test2 = (req, res) => {
  fetch("myURLgoes here").then(function (response) {
    res.status(200).send('Success:' + response);
  });
};
I get the following error:
Error: function crashed. Details:
fetch is not defined
From the Google Cloud Platform console, go to your Cloud Functions.
You should have two files when creating or editing a function:
- index.js: where you define your functions
- package.json: where you declare your dependencies
Your package.json at the start is something like this:
{
  "name": "sample-http",
  "version": "0.0.1"
}
Add to your package.json all the modules you'd like installed by npm install, as below:
{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "@material-ui/core": "^4.1.1"
  }
}
You can find the latest version of a package on www.npmjs.com.
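For the question's code specifically, the missing module is node-fetch, so the dependencies block would look something like this (the version is just an example; note that node-fetch v3 is ESM-only, so v2 matches the require() style used above):

{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "node-fetch": "^2.6.0"
  }
}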
You can run the npm command from the Google Cloud Shell (which you can access from the Google Cloud Console).
