The project has a single node_modules in the root dir. The Jest config has separate projects settings for the backend and frontend subfolders.
I want to mock node modules using the __mocks__ folder. It works if I put __mocks__ inside "backend/src", but how can I have a single __mocks__ folder for both frontend and backend?
jest.config:
export default {
  projects: [
    {
      displayName: 'Backend',
      rootDir: './backend',
      roots: ['<rootDir>/src'],
    },
  ],
};
I just had to add a second entry to roots:
roots: ['<rootDir>/src', '<rootDir>/../__mocks__']
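Put together, a full config sketch might look like this (written in CommonJS form here; the Frontend entry's displayName and rootDir are assumptions, adjust them to your layout). Jest searches every directory listed in roots, so both projects can point at the shared top-level __mocks__ folder:

```javascript
// jest.config.js — a sketch; the Frontend entry's names/paths are assumptions.
// Jest searches every directory in `roots`, so both projects can share the
// top-level __mocks__ folder that sits next to backend/ and frontend/.
const config = {
  projects: [
    {
      displayName: 'Backend',
      rootDir: './backend',
      roots: ['<rootDir>/src', '<rootDir>/../__mocks__'],
    },
    {
      displayName: 'Frontend',
      rootDir: './frontend',
      roots: ['<rootDir>/src', '<rootDir>/../__mocks__'],
    },
  ],
};

module.exports = config;
```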
I have an express application with three branches: master, staging, and development. I want to run all three branches simultaneously, each on a different port, i.e. master on 3000, staging on 3001, development on 3002.
I'm hoping to achieve this with PM2, but haven't been able to get this to work yet. I was trying with an ecosystem.config.yml like the one below, which successfully runs the application on ports 3000-3002 and injects the corresponding environment variables, but runs the code of the active branch on all three ports.
Is it possible to configure PM2 to run different git branches on different ports? Maybe with the PM2 deploy command and associated configuration somehow?
module.exports = {
  apps: [
    {
      name: "api",
      script: "app.js",
      watch: ".",
      env: {
        NODE_ENV: "production",
        PORT: "3000",
      },
    },
    {
      name: "api-staging",
      script: "app.js",
      watch: ".",
      env: {
        NODE_ENV: "staging",
        PORT: "3001",
      },
    },
    {
      name: "api-dev",
      script: "app.js",
      watch: ".",
      env: {
        NODE_ENV: "development",
        PORT: "3002",
      },
    },
  ],
};
Answering my own question here. The only way I've found to easily do this is to make a local copy of each branch of my repo in its own directory using git clone --branch <branchname> --single-branch <remote-repo-url>, e.g. git clone --branch development --single-branch git@github.com:myuser/my-api.git api-dev to clone the development branch of my repo into a local directory /api-dev.
Then in each local directory, which contains a single branch of my repo, I create an ecosystem.config.js file with configuration appropriate for that branch, e.g.
module.exports = {
  apps: [
    {
      name: "api-dev",
      script: "app.js",
      watch: ".",
      env: {
        NODE_ENV: "development",
        PORT: "3002",
      },
    },
  ],
};
Then I can run pm2 start ecosystem.config.js in each of my local repositories and run each branch on its own port.
Does anyone know a simpler way to do this?
I want to deploy AWS Lambda functions with Node 8.10 and Ruby 2.5 runtimes from one serverless.yml file.
I set up the following folder structure, with /node and /ruby holding my respective handlers.
/nodeRubyLambdas
  /node
    handler.js
    package.json, package-lock.json, /node_modules
  /ruby
    rubyRijndaelEncryption.rb
    Gemfile, Gemfile.lock, /vendor
  serverless.yml
  webpack.config.js
  package.json (for serverless-webpack)
Here is my serverless.yml
service: nodeRubyLambdas
plugins:
  - serverless-webpack
  - serverless-offline
custom:
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
provider:
  name: aws
  stage: dev
  region: us-west-2
  iamRoleStatements:
    - Effect: Allow
      Action:
        - lambda:InvokeFunction
      Resource: "*"
package:
  individually: true
functions:
  nodeMain:
    handler: node/handler.main
    runtime: nodejs8.10
    events:
      - http:
          path: main
          method: get
    package:
      individually: true
  rubyEncryption:
    handler: ruby/rubyRijndaelEncryption.lambda_handler
    runtime: ruby2.5
    environment:
      RIJNDAEL_PASSWORD: 'a string'
    package:
      individually: true
My webpack configuration: (This is the base example, I just added the bit to ignore ruby files when I got my first error.)
const slsw = require("serverless-webpack");
const nodeExternals = require("webpack-node-externals");

module.exports = {
  entry: slsw.lib.entries,
  target: "node",
  // Generate sourcemaps for proper error messages
  devtool: 'source-map',
  // Since 'aws-sdk' is not compatible with webpack,
  // we exclude all node dependencies
  externals: [nodeExternals()],
  mode: slsw.lib.webpack.isLocal ? "development" : "production",
  optimization: {
    // We do not want to minimize our code.
    minimize: false
  },
  performance: {
    // Turn off size warnings for entry points
    hints: false
  },
  // Run babel on all .js files and skip those in node_modules
  module: {
    rules: [
      {
        test: /\.js$/,
        loader: "babel-loader",
        include: __dirname,
        exclude: [/node_modules/, /\.rb$/]
      }
    ]
  }
};
Fail #0:
[Webpack Compilation error] Module parse failed
Fail #1:
Basically, webpack assumes all functions are .js and tries to package them as such. Based on this suggestion, I forced the entry point in my webpack config to be my handler.js:
module.exports = {
  entry: "./node/handler.js",
  target: "node",
  ...
This packages ONLY the Node Lambda. An empty placeholder for the Ruby Lambda is created on AWS.
Fail #2:
I commented out webpack from serverless.yml and added include and exclude statements in the functions package options.
functions:
  nodeMain:
    package:
      individually: true
      include:
        - node/**
        - handler.js
      exclude:
        - ruby/**
        - rubyLambda/**
  rubyEncryption:
    package:
      individually: true
      include:
        - vendor/**
        - rubyRijndaelEncryption.rb
      exclude:
        - Gemfile
        - Gemfile.lock
        - node/**
This gets an [ENOENT: no such file or directory] for node/node_modules/@babel/core/node_modules/.bin/parser. This file is not there, but I don't understand why it is looking for it, since webpack is not being called.
Sort of success?:
I was able to get the Lambdas to deploy if I commented out webpack and used
serverless deploy function -f <function name here>
to deploy the Ruby Lambda and then uncommented webpack and used the same thing to deploy the Node Lambda.
I'm convinced that there's a better way to get them to deploy; Have I missed something in my setup? Is there another option I haven't tried?
P.S. I did see this pull request https://github.com/serverless-heaven/serverless-webpack/pull/256, but it seems to be abandoned since 2017.
serverless-webpack is not designed for non-JS runtimes. It hijacks serverless packaging and deploys ONLY the webpack output.
Here are your options:
Don't use serverless-webpack and simply use serverless' built-in packaging.
You can use webpack directly (not serverless-webpack), and change your build process to compile using webpack first and then let serverless deploy the output folder.
P.S. The package.individually property is a root-level property in your serverless.yml. It shouldn't be in provider or in your function definitions.
For those who may be looking for options for multiple-runtimes other than serverless-webpack, I ended up switching to this plugin: https://www.npmjs.com/package/serverless-plugin-include-dependencies.
It works with my runtimes (Ruby and Node) and lets you use package.individually with package.include/exclude at the root and function level if the plugin misses something.
I have a nodejs app in a subdirectory called node-server of a monorepo project.
I was able to leverage heroku-buildpack-monorepo project to allow me to deploy just the node-server directory to Heroku when I directly do a push like this:
git push heroku master
This will run the monorepo buildpack, which will copy the subfolder to a staging directory, wipe out the entire root structure, then replace the root directory with the contents of the staging directory. Then it goes on to autodetect my nodejs buildpack and everything works great.
But now, I'm trying to set up Heroku CI to automatically detect and run my tests and stage my app when I commit directly to Github:
git push origin mybranch
I've set up my app.json file as follows:
{
  "environments": {
    "test": {
      "buildpacks": [
        {
          "url": "https://github.com/lstoll/heroku-buildpack-monorepo"
        },
        {
          "url": "https://github.com/heroku/heroku-buildpack-nodejs"
        }
      ],
      "env": {
        "APP_BASE": "node-server"
      }
    }
  }
}
The problem is that Heroku sets up a read-only file system when it stages the code for test execution, making the monorepo buildpack unusable:
-----> Fetching https://github.com/heroku/heroku-buildpack-nodejs buildpack...
buildpack downloaded
-----> Monorepo app detected
rm: cannot remove '/app': Read-only file system
FAILED to copy directory into place
Is there any other way to get Heroku CI to work with a monorepo such that it can detect the buildpack and execute tests in a subdirectory?
I have a Docker app with the following containers:
node - the source code of the project; it serves up the HTML page situated in the public folder.
webpack - watches files in the node container and updates the public folder (in the node container) when the code changes.
database
this is the webpack/node container setup
web:
  container_name: web
  build: .
  env_file: .env
  volumes:
    - .:/usr/src/app
    - node_modules:/usr/src/app/node_modules
  command: npm start
  environment:
    - NODE_ENV=development
  ports:
    - "8000:8000"
webpack:
  container_name: webpack
  build: ./webpack/
  depends_on:
    - web
  volumes_from:
    - web
  working_dir: /usr/src/app
  command: webpack --watch
So currently, the webpack container monitors and updates the public folder, and I have to manually refresh the browser to see my changes.
I'm now trying to incorporate webpack-dev-server to enable automatic refresh in the browser.
These are my changes to the webpack config file:
module.exports = {
  entry: [
    'webpack/hot/dev-server',
    'webpack-dev-server/client?http://localhost:8080',
    './client/index.js'
  ],
  ....
  devServer: {
    hot: true,
    proxy: {
      '*': 'http://localhost:8000'
    }
  }
}
and the new docker-compose entry for webpack:
webpack:
  container_name: webpack
  build: ./webpack/
  depends_on:
    - web
  volumes_from:
    - web
  working_dir: /usr/src/app
  command: webpack-dev-server --hot --inline
  ports:
    - "8080:8080"
I seem to be getting an error when running the app:
Invalid configuration object. Webpack has been initialised using a configuration object that does not match the API schema.
webpack | - configuration.entry should be one of these:
webpack | object { <key>: non-empty string | [non-empty string] } | non-empty string | [non-empty string] | function
webpack | The entry point(s) of the compilation.
webpack | Details:
webpack | * configuration.entry should be an object.
webpack | * configuration.entry should be a string.
webpack | * configuration.entry should NOT have duplicate items (items ## 1 and 2 are identical) ({
webpack | "keyword": "uniqueItems",
webpack | "dataPath": ".entry",
webpack | "schemaPath": "#/definitions/common.nonEmptyArrayOfUniqueStringValues/uniqueItems",
webpack | "params": {
webpack | "i": 2,
webpack | "j": 1
webpack | },
webpack | "message": "should NOT have duplicate items (items ## 1 and 2 are identical)",
webpack | "schema": true,
webpack | "parentSchema": {
webpack | "items": {
webpack | "minLength": 1,
webpack | "type": "string"
webpack | },
webpack | "minItems": 1,
webpack | "type": "array",
webpack | "uniqueItems": true
webpack | },
webpack | "data": [
webpack | "/usr/src/app/node_modules/webpack-dev-server/client/index.js?http://localhost:8080",
webpack | "webpack/hot/dev-server",
webpack | "webpack/hot/dev-server",
webpack | "webpack-dev-server/client?http://localhost:8080",
webpack | "./client/index.js"
webpack | ]
webpack | }).
webpack | [non-empty string]
webpack | * configuration.entry should be an instance of function
webpack | function returning an entry object or a promise..
As you can see, my entry object doesn't have any duplicate items.
Is there something additional I should be doing? Anything I missed?
webpack-dev-server should basically proxy all requests to the node server.
I couldn't make webpack's or webpack-dev-server's watch (--watch) mode work even after mounting my project folder into the container.
To fix this you need to understand how webpack detects file changes within a directory.
It uses one of two pieces of OS-level support for watching file changes: inotify (Linux) or FSEvents (macOS). Standard Docker images usually don't have these preinstalled (especially inotify on Linux), so you have to install one in your Dockerfile.
Look for the inotify-tools package in your distro's package manager and install it; fortunately Alpine, Debian, and CentOS all have it.
Docker & webpack-dev-server can be fully operational without any middleware or plugins; proper configuration is all it takes:
devServer: {
  port: 80, // use any port suitable for your configuration
  host: '0.0.0.0', // to accept connections from outside the container
  watchOptions: {
    aggregateTimeout: 500, // delay before reloading
    poll: 1000 // enable polling since fsevents are not supported in docker
  }
}
Use this config only if your docker container does not support fsevents.
For a more performance-efficient way, check out HosseinAgha's answer #42445288: Enabling webpack hot-reload in a docker application.
Try doing this:
Add watchOptions.poll = true in your webpack config:
watchOptions: {
  poll: true
},
Configure host in the devServer config:
host: "0.0.0.0",
Hot Module Reload is the coolest development mode, and a tricky one to set up with Docker. To bring it to life, you'll need to follow these 8 steps:
For Webpack 5, install these npm packages in particular:
npm install webpack webpack-cli webpack-dev-server --save-dev --save-exact
Write this command into the 'scripts' section of your 'package.json' file:
"dev": "webpack serve --mode development --host 0.0.0.0 --config webpack.config.js"
Add this property to 'webpack.config.js' file (it'll enable webpack's hot module reloading)
devServer: {
  port: 8080,
  hot: "only",
  static: {
    directory: path.join(__dirname, './'),
    serveIndex: true,
  },
},
Add this code to the very bottom of your 'index.js' (or whatever), which is the entry point to your app:
if (module.hot) {
  module.hot.accept()
}
Expose ports in 'docker-compose.yml' to see the app at http://localhost:8080
ports:
  - 8080:8080
Sync your app's /src directory with the 'src' directory within the container. To do this, use volumes in 'docker-compose.yml'. In my case, the directory 'client' is where all my frontend React files sit, including 'package.json', 'webpack.config.js' & the Dockerfile, while 'docker-compose.yml' is placed one level up.
volumes:
  - ./client/src:/client/src
Inside the volume group you should also block syncing of the 'node_modules' directory ('.dockerignore' is of no help here).
volumes:
  ...
  - /client/node_modules
Fire this whole bundling from Docker's CMD instruction, not from a RUN command, inside your Dockerfile.
WORKDIR /client
CMD ["npm", "run", "dev"]
P.S. If you use the Webpack dev server, you don't need other web servers like Nginx in your development setup. Another important thing to keep in mind is that the Webpack dev server does not recompile files from the '/src' folder into a '/dist' one; it performs the compilation in memory.
Had this trouble on a Windows device. Solved it by setting WATCHPACK_POLLING to true in the environment section of Docker Compose.
frontend:
  environment:
    - WATCHPACK_POLLING=true
I had the same problem. It was my fault rather than webpack's or Docker's. You have to check that the WORKDIR in your Dockerfile is the target of your bind mount in docker-compose.yml.
Dockerfile
FROM node
...
WORKDIR /myapp
...
In your docker-compose.yml:
web:
  ....
  volumes:
    - ./:/myapp
It should work if you configure the reloading on your webpack.config.js
I'm trying to build a webapp using Node.js. It compiles and runs fine on my local machine, but when I try to host it on Azure, webpack seems to cause a problem.
//webpack.config.js
var config = {
  entry: './main.js',
  output: {
    path: '/',
    filename: 'index.js',
  },
  devServer: {
    // inline: true,
    // port: 8080
  },
  module: {
    loaders: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          presets: ['es2015', 'react']
        }
      }
    ]
  }
}

module.exports = config;
This is the file hierarchy:
This is the sources tab in Chrome DevTools on my local machine. I notice here that index.js gets compiled as specified in the config file.
Then I just place the source on the server using git. This is the response I get from the server:
This is the sources tab for the hosting server.
I suspect it could be because of a difference in how the directories are interpreted on my local machine versus the host. I'm using macOS.
Basically, you need to compile your webpack application before deploying it to the Azure production server. However, you can also leverage a Custom Deployment Script to install Node.js modules and run custom scripts that build your webpack application on Azure Web Apps during the Azure deployment task. For detailed steps, please check out this post on StackOverflow.