I have a problem with volume mounting in Docker. I simply want to save pictures and return them to the front-end.
This is my Dockerfile:
FROM node:boron
WORKDIR /app
COPY . .
RUN npm install --production
RUN mkdir -p /app/public
VOLUME ["/app/public"]
# install imagemagick at build time; node:boron is Debian-based, so apt-get rather than yum,
# and a CMD would never run during the build anyway (only the last CMD takes effect at runtime)
RUN apt-get update && apt-get install -y imagemagick
# note: this shell form does NOT forward SIGINT/SIGTERM to node; the exec form
# CMD ["node", "server.js"] would
CMD node server.js
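For anyone reproducing this locally, a quick way to see where the saved images land is to run the image with an explicit named volume (a sketch; the volume name and host port are hypothetical), then list the volume's contents from a throwaway container:
$ docker run -d -p 3000:3000 -v thurst-public:/app/public thurst-back-end:latest
$ docker run --rm -v thurst-public:/data node:boron ls /data/images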
I'm deploying with skyliner.io.
Inspecting my image, I get:
[
{
"Id": "sha256:598085445f82a8324f41842a7ac4f93a55b009d93bfaf07e7ce7b8a4bc5918d9",
"RepoTags": [
"thurst-back-end:latest"
],
"RepoDigests": [],
"Parent": "",
"Comment": "",
"Created": "2017-01-09T16:05:50.958866532Z",
"Container": "85457fb45353305715ea72297187fd6b88a019aa369426428c536a6a80450206",
"ContainerConfig": {
"Hostname": "45f28166fed1",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_VERSION=6.9.4"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) CMD [\"/bin/sh\" \"-c\" \"node server.js\"]"
],
"ArgsEscaped": true,
"Image": "sha256:64249ddf0e9111ef191b1fb02d1af3ae2c7735f0509169a8e5fa6bc980a463ba",
"Volumes": {
"/app/public": {}
},
"WorkingDir": "/app",
"Entrypoint": null,
"OnBuild": [],
"Labels": {}
},
"DockerVersion": "1.11.2",
"Author": "",
"Config": {
"Hostname": "45f28166fed1",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_VERSION=6.9.4"
],
"Cmd": [
"/bin/sh",
"-c",
"node server.js"
],
"ArgsEscaped": true,
"Image": "sha256:64249ddf0e9111ef191b1fb02d1af3ae2c7735f0509169a8e5fa6bc980a463ba",
"Volumes": {
"/app/public": {}
},
"WorkingDir": "/app",
"Entrypoint": null,
"OnBuild": [],
"Labels": {}
},
"Architecture": "amd64",
"Os": "linux",
"Size": 700375224,
"VirtualSize": 700375224,
"GraphDriver": {
"Name": "overlay",
"Data": {
"RootDir": "/var/lib/docker/overlay/739c2f7ee799c2ec0e75beb02c24c084aa9545fa6f1680b6a65062bf5d6133e8/root"
}
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:b6ca02dfe5e62c58dacb1dec16eb42ed35761c15562485f9da9364bb7c90b9b3",
"sha256:60a0858edcd5aad240966e33389850e4328de4cfb5282977eddda56bffc7f95f",
"sha256:53c779688d06353f7ba4fd7ce1d43ce146ad0278ebead0feea1846383c730024",
"sha256:0a5e2b2ddeaa749d95730bad9be3e3a472ff6f80544da0082a99ba569df34ff3",
"sha256:fa18e5ffd316beb0c4c929ea1fff8d559a73a366f30f1004bb06af3e9f800696",
"sha256:604c78617f347c58e4ce0021f47928b7df3d799ea7c5e9367fa5a800e473dc06",
"sha256:6a73c39a0ab65b5e2da69b9013fc7f50c8bf5be27c0cf5fb3b642a247a8993ca",
"sha256:b7ce32b271bee3f3c614232448a4308cdfc4a2bf6f8db1436f51cb74ae5c15dc",
"sha256:a276062d9f56b85bf34797301d74b761970c3e6ce0ccd3525f4535e675a0974e",
"sha256:2f616e13f894a3a5c4dc33cbbcce345c51a704d56a70396cacdfb2e96e2ff9df",
"sha256:c6dfd7a877dba2837cc46e906cde9aa6e1cc5f89c9c65cefa81f130d59e2c7ac"
]
}
}
]
The next command I ran to understand the problem:
$ docker volume ls
DRIVER VOLUME NAME
local 2fe327f9a9d82d7ddad72e8d9dcda76e3212653e100c24453de9edbbf60fbe53
And also:
$ docker volume inspect 2fe327f9a9d82d7ddad72e8d9dcda76e3212653e100c24453de9edbbf60fbe53
[
{
"Name": "2fe327f9a9d82d7ddad72e8d9dcda76e3212653e100c24453de9edbbf60fbe53",
"Driver": "local",
"Mountpoint": "/var/lib/docker/volumes/2fe327f9a9d82d7ddad72e8d9dcda76e3212653e100c24453de9edbbf60fbe53/_data",
"Labels": null
}
]
When I run the project outside a container, everything works: files are saved to /public/images/:id/:id-user.jpg.
But when I run the project in Docker, the files end up in /var/lib/docker/overlay/0a2bdfae85072dce01e470eb71f1199ab23d90eb6f9e573d6a65e06d3d387cce/upper/app/public/images.
Not sure I understand this correctly, but could it be because your app writes to the path /public?
You say that when you run outside a container you get /public/images/..., but your volume is /app/public, which is a different path, and hence you end up writing into the container's filesystem rather than the volume.
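A minimal sketch of that fix (assuming the app's working directory is /app, as in the Dockerfile; the imagePath helper is hypothetical): resolve the save path against the app root so writes land inside the mounted /app/public volume rather than a top-level /public:
const path = require('path');

// __dirname is /app inside the container, so this resolves to
// /app/public/images/... (the declared volume) instead of /public/images/...
function imagePath(id) {
  return path.join(__dirname, 'public', 'images', String(id), `${id}-user.jpg`);
}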
Related
I have jest tests in my angular project.
I have a package.json file specifying the version of jest I would like to use to run the test. The file includes:
"#types/jest": "^24.0.18",
"jest": "^24.9.0",
"jest-preset-angular": "^7.1.1",
The jest config also includes:
"setupFilesAfterEnv": [
"<rootDir>/setup-jest.ts"
],
This is where the issue occurs. When trying to run jest, I get the following message:
● Validation Warning:
Unknown option "setupFilesAfterEnv" with value ["<rootDir>/setup-jest.ts"] was found.
This is probably a typing mistake. Fixing it will remove this message.
Configuration Documentation:
https://jestjs.io/docs/configuration.html
I had a look at jest -h and found a flag which gives me the setup of the jest environment.
jest --showConfig
This, however, shows that I am running Jest at version
"version": "23.6.0"
So my question lies here: how come, after I do an npm i, the Jest version trying to run the tests is different/old?
I tried installing jest-cli with the -g flag and the --save-dev flag.
I'm also trying to run the tests in VS Code, if that's any help.
Please help.
Thank you in advance.
Full log of npx jest --showConfig
● Validation Warning:
Unknown option "setupFilesAfterEnv" with value ["<rootDir>/setup-jest.ts"] was found.
This is probably a typing mistake. Fixing it will remove this message.
Configuration Documentation:
https://jestjs.io/docs/configuration.html
{
"configs": [
{
"automock": false,
"browser": false,
"cache": true,
"cacheDirectory": "/var/folders/bs/wrvrgl6132df8l5ndxv40m3m0000gn/T/jest_dx",
"clearMocks": false,
"coveragePathIgnorePatterns": [
"/node_modules/",
"setup-jest.ts"
],
"detectLeaks": false,
"detectOpenHandles": false,
"errorOnDeprecated": false,
"filter": null,
"forceCoverageMatch": [],
"globals": {
"ts-jest": {
"tsConfig": "<rootDir>/tsconfig.spec.json",
"stringifyContentPathRegex": "\\.html$",
"astTransformers": [
"jest-preset-angular/InlineHtmlStripStylesTransformer"
]
}
},
"haste": {
"providesModuleNodeModules": []
},
"moduleDirectories": [
"node_modules"
],
"moduleFileExtensions": [
"ts",
"html",
"js",
"json"
],
"moduleNameMapper": [
[
"#app/(.*)",
"/Users/name/Projects/project/src/app/$1"
],
...
],
"modulePathIgnorePatterns": [],
"name": "6caa4...",
"prettierPath": "/Users/name/Projects/project/node_modules/prettier/index.js",
"resetMocks": false,
"resetModules": false,
"resolver": null,
"restoreMocks": false,
"rootDir": "/Users/name/Projects/project",
"roots": [
"/Users/name/Projects/project"
],
"runner": "jest-runner",
"setupFiles": [],
"setupTestFrameworkScriptFile": null,
"skipFilter": false,
"snapshotSerializers": [],
"testEnvironment": "/Users/name/Projects/project/node_modules/jest-environment-jsdom-thirteen/build/index.js",
"testEnvironmentOptions": {},
"testLocationInResults": false,
"testMatch": [
"**/__tests__/**/*.js?(x)",
"**/?(*.)+(spec|test).js?(x)"
],
"testRegex": "",
"testRunner": "/Users/name/node_modules/jest-jasmine2/build/index.js",
"testURL": "http://localhost",
"timers": "real",
"transform": [
[
"^.+\\.(ts|js|html)$",
"/Users/name/Projects/project/node_modules/ts-jest/dist/index.js"
]
],
"watchPathIgnorePatterns": []
}
],
"globalConfig": {
"bail": false,
"changedFilesWithAncestor": false,
"collectCoverage": true,
"collectCoverageFrom": null,
"coverageDirectory": "/Users/name/Projects/project/coverage",
"coverageReporters": [
"json",
"text",
"lcov",
"clover"
],
"coverageThreshold": null,
"detectLeaks": false,
"detectOpenHandles": false,
"errorOnDeprecated": false,
"expand": false,
"filter": null,
"globalSetup": null,
"globalTeardown": null,
"listTests": false,
"maxWorkers": 7,
"noStackTrace": false,
"nonFlagArgs": [],
"notify": false,
"notifyMode": "always",
"passWithNoTests": false,
"projects": null,
"rootDir": "/Users/name/Projects/project",
"runTestsByPath": false,
"skipFilter": false,
"testFailureExitCode": 1,
"testPathPattern": "",
"testResultsProcessor": null,
"updateSnapshot": "new",
"useStderr": false,
"verbose": null,
"watch": false,
"watchman": true
},
"version": "23.6.0"
}
Showing the npm config output here too:
; cli configs
metrics-registry = "http://.../.../npm-group/"
scope = ""
user-agent = "npm/6.9.0 node/v10.15.3 darwin x64"
; project config /Users/user/Projects/project/.npmrc
registry = "http://.../.../npm-group/"
; node bin location = /Users/user/.nvm/versions/node/v10.15.3/bin/node
; cwd = /Users/user/Projects/project
; HOME = /Users/user
; "npm config ls -l" to show all defaults.
I had the same issue; after a long search, I tried this:
type jest
Which gave me the location:
/usr/local/bin/jest
Renaming this file (or deleting it) solved the problem (note that running jest directly will now give "command not found").
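If you want to guarantee that the project-local Jest always wins regardless of what is on your PATH, one option (a sketch, assuming a standard npm setup) is to run it through an npm script, since npm puts node_modules/.bin first when running scripts:
{
  "scripts": {
    "test": "jest"
  }
}
Then npm test (or npx jest) resolves the Jest 24 installed in the project rather than a stale global 23.6.0 binary.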
I am in the process of deploying my .BNA file to Fabric. I have been testing and prototyping it on the Bluemix playground successfully; however, when I try to install the network application to Fabric I get this error:
> Error: Error trying install business network.
> Error: No valid responses from any peers.
> Response from attempted peer comms was an error:
> Error: 14 UNAVAILABLE: Connect Failed
Command failed
**These are the steps I took**
1. Launch your Fabric network
> ./startFabric.sh
2. Create the peer admin card
> ./createPeerAdminCard.sh
3. Install the network application to fabric
> composer network install -a dist/bna.bna -c PeerAdmin@hlfv1
**This step is where I get the error**
✖ Installing business network. This may take a minute...
Error: Error trying install business network. Error: No valid responses from any peers.
Response from attempted peer comms was an error: Error: 14 UNAVAILABLE: Connect Failed
Command failed
**Details of my env**
Node Version: v8.11.3
Docker version: 18.03
Composer version: v0.19.12
Docker PS:
[Docker PS Screen shot][1]
[1]: https://i.stack.imgur.com/HQGBf.png
Any help is really appreciated.
UPDATE
Connection.json for hlfv1
{
"name": "hlfv1",
"x-type": "hlfv1",
"x-commitTimeout": 300,
"version": "1.0.0",
"client": {
"organization": "Org1",
"connection": {
"timeout": {
"peer": {
"endorser": "300",
"eventHub": "300",
"eventReg": "300"
},
"orderer": "300"
}
}
},
"channels": {
"composerchannel": {
"orderers": [
"orderer.example.com"
],
"peers": {
"peer0.org1.example.com": {}
}
}
},
"organizations": {
"Org1": {
"mspid": "Org1MSP",
"peers": [
"peer0.org1.example.com"
],
"certificateAuthorities": [
"ca.org1.example.com"
]
}
},
"orderers": {
"orderer.example.com": {
"url": "grpc://localhost:7050"
}
},
"peers": {
"peer0.org1.example.com": {
"url": "grpc://localhost:7051",
"eventUrl": "grpc://localhost:7053"
}
},
"certificateAuthorities": {
"ca.org1.example.com": {
"url": "http://localhost:7054",
"caName": "ca.org1.example.com"
}
}
}
Hlfv11 vs HLFv1
I noticed when I look in the fabric-scripts that there are two directories, hlfv11 and hlfv1.
[screenshot of the fabric-tools directory]
When I run startFabric.sh, I get a line saying that Fabric assumes it is "hlfv11" instead of "hlfv1".
[screenshot of the startFabric.sh output]
Any help would be appreciated.
docker inspect peer0.org1.example.com
[
{
"Id": "6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac",
"Created": "2018-07-20T22:49:51.238208735Z",
"Path": "peer",
"Args": [
"node",
"start"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 7506,
"ExitCode": 0,
"Error": "",
"StartedAt": "2018-07-20T22:49:51.543106588Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:b023f9be07714e495e6d41849d7e916434e85580754423ece145866468ad29a9",
"ResolvConfPath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/resolv.conf",
"HostnamePath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/hostname",
"HostsPath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/hosts",
"LogPath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac-json.log",
"Name": "/peer0.org1.example.com",
"RestartCount": 0,
"Driver": "aufs",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/var/run:/host/var/run:rw",
"/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users:rw",
"/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer:/etc/hyperledger/configtx:rw",
"/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/peer/msp:rw"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "composer_default",
"PortBindings": {
"7051/tcp": [
{
"HostIp": "",
"HostPort": "7051"
}
],
"7053/tcp": [
{
"HostIp": "",
"HostPort": "7053"
}
]
},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Data": null,
"Name": "aufs"
},
"Mounts": [
{
"Type": "bind",
"Source": "/var/run",
"Destination": "/host/var/run",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/users",
"Destination": "/etc/hyperledger/msp/users",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer",
"Destination": "/etc/hyperledger/configtx",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp",
"Destination": "/etc/hyperledger/peer/msp",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "6caa83b2a8a5",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"7051/tcp": {},
"7053/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"CORE_LOGGING_LEVEL=debug",
"CORE_CHAINCODE_LOGGING_LEVEL=DEBUG",
"CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock",
"CORE_PEER_ID=peer0.org1.example.com",
"CORE_PEER_ADDRESS=peer0.org1.example.com:7051",
"CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=composer_default",
"CORE_PEER_LOCALMSPID=Org1MSP",
"CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/msp",
"CORE_LEDGER_STATE_STATEDATABASE=CouchDB",
"CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"FABRIC_CFG_PATH=/etc/hyperledger/fabric"
],
"Cmd": [
"peer",
"node",
"start"
],
"Image": "hyperledger/fabric-peer:x86_64-1.1.0",
"Volumes": {
"/etc/hyperledger/configtx": {},
"/etc/hyperledger/msp/users": {},
"/etc/hyperledger/peer/msp": {},
"/host/var/run": {}
},
"WorkingDir": "/opt/gopath/src/github.com/hyperledger/fabric",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "d44983248579bb25822020f82382fba01b891c3338b2fe91bb17ac3936126c69",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "composer",
"com.docker.compose.service": "peer0.org1.example.com",
"com.docker.compose.version": "1.21.1",
"org.hyperledger.fabric.base.version": "0.4.6",
"org.hyperledger.fabric.version": "1.1.0"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "5645c1988100b53fa9a8c2d13adc40c43f3995cb808b3eda28771176033b26b4",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"7051/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "7051"
}
],
"7053/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "7053"
}
]
},
"SandboxKey": "/var/run/docker/netns/5645c1988100",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"composer_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"peer0.org1.example.com",
"6caa83b2a8a5"
],
"NetworkID": "d4f496b7b3aeae87d1b1461523bc8620ac34b54d9b3b9f8d31c6cfa7be4da024",
"EndpointID": "a19687702d04e166dc0291dc9ce1130caf5eccf484ece4fd988c13cc2660c8fb",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.5",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:13:00:05",
"DriverOpts": null
}
}
}
}
]
Fixed: I needed to reinstall Hyperledger Fabric, Composer, node, npm, and Docker, and to run unset ${!DOCKER*}; there seemed to be a Docker environment issue.
This error is usually seen when the CLI cannot connect to the Fabric using the addresses specified in the PeerAdmin's connection.json file. Did you download the latest fabric-tools as shown here prior to this?
Sometimes if there is a proxy involved (on a corporate network), there can be some routing failures.
See the answer here, which may help you: Hyperledger composer network install
ERROR 14 means that Composer can't reach the peers. Your issue is here:
"peers": {
"peer0.org1.example.com": {}
}
you need to write something like:
"peers": {
"peer0.org1.example.com": {
"url": "grpc://localhost:7051",
"eventUrl": "grpc://localhost:7053"
}
}
FIXED:
I uninstalled Docker, node, and npm, reinstalled everything, and made sure to run unset ${!DOCKER*} when first installing Docker for Mac OS.
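For reference, the environment reset mentioned in both fixes looks like this in bash (a sketch; run it in the same shell you use for the composer commands). The first line lists any DOCKER_* variables currently set, e.g. ones left behind by docker-machine that can point the CLI at the wrong daemon; the second unsets them all at once via bash prefix expansion:
$ env | grep '^DOCKER'
$ unset ${!DOCKER*}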
I'm struggling to keep my Node.js container running on ECS. It runs fine when I run it locally with docker-compose, but on ECS it runs for 2-3 minutes and handles a few connections (2-3 health checks from the load balancer), then shuts down, and I can't work out why.
My Dockerfile -
FROM node:6.10
RUN npm install -g nodemon \
&& npm install forever-monitor \
winston \
express-winston
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
Then in my package.json -
{
...
"main": "forever.js",
"dependencies": {
"mongodb": "~2.0",
"abbajs": ">=0.1.4",
"express": ">=4.15.2"
}
...
}
In my docker-compose.yml I run with nodemon -
node:
...
command: nodemon
In my CloudWatch logs I can see everything start -
14:20:24 npm info lifecycle my_app@1.0.0~start: my_app@1.0.0
Then I see the health check requests (all HTTP 200s), then a bit later it all wraps up -
14:23:00 npm info lifecycle mapov_reporting@1.0.0~poststart: mapov_reporting@1.0.0
14:23:00 npm info ok
I've tried wrapping my start.js script in forever-monitor, but that doesn't seem to be making any difference.
UPDATE
My ECS task definition -
{
"requiresAttributes": [
{
"value": null,
"name": "com.amazonaws.ecs.capability.ecr-auth",
"targetId": null,
"targetType": null
},
{
"value": null,
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs",
"targetId": null,
"targetType": null
},
{
"value": null,
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19",
"targetId": null,
"targetType": null
}
],
"taskDefinitionArn": "arn:aws:ecs:us-east-1:562155596068:task-definition/node:12",
"networkMode": "bridge",
"status": "ACTIVE",
"revision": 12,
"taskRoleArn": null,
"containerDefinitions": [
{
"volumesFrom": [],
"memory": 128,
"extraHosts": null,
"dnsServers": null,
"disableNetworking": null,
"dnsSearchDomains": null,
"portMappings": [
{
"hostPort": 0,
"containerPort": 3000,
"protocol": "tcp"
}
],
"hostname": null,
"essential": true,
"entryPoint": null,
"mountPoints": [],
"name": "node",
"ulimits": null,
"dockerSecurityOptions": null,
"environment": [
{
"name": "awslogs-group",
"value": "node_logs"
},
{
"name": "awslogs-region",
"value": "us-east-1"
},
{
"name": "NODE_ENV",
"value": "production"
}
],
"links": null,
"workingDirectory": null,
"readonlyRootFilesystem": null,
"image": "562155596068.dkr.ecr.us-east-1.amazonaws.com/node:06b5a3700df163c8563865c2f23947c2685edd7b",
"command": null,
"user": null,
"dockerLabels": null,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "node_logs",
"awslogs-region": "us-east-1"
}
},
"cpu": 1,
"privileged": null,
"memoryReservation": null
}
],
"placementConstraints": [],
"volumes": [],
"family": "node"
}
Tasks are all stopped with the status Task failed ELB health checks in (target-group .... Health checks pass 2 or 3 times before they start failing, and there's no record of anything other than an HTTP 200 in the logs.
I was using an old version of the mongo driver ~2.0, and keeping connections to more than one db. When I upgraded the driver, the issue went away.
"dependencies": {
"mongodb": ">=2.2"
}
I can only assume that there was a bug in the driver.
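For anyone debugging a similar silent shutdown, a minimal sketch (the event names are standard Node; the log messages are mine) that records every way the process can go down, which helps distinguish a driver-initiated exit from an ECS kill:
// log every path by which the process might be going down
process.on('exit', (code) => console.log(`process exiting with code ${code}`));
process.on('SIGTERM', () => console.log('received SIGTERM (docker stop / ECS?)'));
process.on('unhandledRejection', (err) => console.error('unhandled rejection:', err));
process.on('uncaughtException', (err) => {
  console.error('uncaught exception:', err);
  process.exit(1);
});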
My question is how to use node-schedule to run a cron job on only one of two instances of a Node server.
Currently it runs on both instances, but I want it to be executed on only one.
So how can you make a cluster run a task only once?
Thanks in advance.
{
"apps": [
{
"name": "Example",
"script": "boot/app/app.js",
"watch": false,
"exec_mode": "cluster_mode",
"instances": 2,
"merge_logs": true,
"cwd": "/srv/www.example.com/server",
"env": {
"NODE_ENV": "development",
.......
.......
}
}
]
}
You can use an environment variable provided by PM2 itself, called NODE_APP_INSTANCE, which requires PM2 2.5 or later.
The NODE_APP_INSTANCE environment variable is used to tell the processes apart. For example, if you want to run a cron job on only one process, you can check whether process.env.NODE_APP_INSTANCE === '0' (environment variables are strings, so compare against the string '0'), since no two processes ever get the same number.
More info in the official PM2 docs here.
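A minimal sketch of that check with node-schedule (the cron expression is just an example):
const schedule = require('node-schedule');

// NODE_APP_INSTANCE is a string; instance "0" always exists in cluster mode
if (process.env.NODE_APP_INSTANCE === '0') {
  schedule.scheduleJob('*/5 * * * *', () => {
    // runs every 5 minutes on exactly one of the two instances
  });
}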
You should use environment variables.
In your code, check this env var:
if (process.env.WITH_SCHEDULE) {
  // register your scheduled jobs here; only the instance started
  // with WITH_SCHEDULE set will run them
}
When you start your instances, you will set WITH_SCHEDULE only for one instance.
Example pm2.json:
{
"apps": [
{
"name": "Example",
"script": "boot/app/app.js",
"args": [],
"error_file": "/srv/www.example.com/logs/error.log",
"out_file": "/srv/www.example.com/logs/info.log",
"ignore_watch": [
"node_modules"
],
"watch": false,
"cwd": "/srv/www.example.com/server",
"env": {
"NODE_ENV": "production",
"WITH_SCHEDULE": "1",
"HOST": "127.0.0.1",
"PORT": "9030"
}
},
{
"name": "Example",
"script": "boot/app/app.js",
"args": [],
"error_file": "/srv/www.example.com/logs/error.log",
"out_file": "/srv/www.example.com/logs/info.log",
"ignore_watch": [
"node_modules"
],
"watch": false,
"cwd": "/srv/www.example.com/server",
"env": {
"NODE_ENV": "production",
"HOST": "127.0.0.1",
"PORT": "9030"
}
}
]
}
Browsersync is working fine with a PHP/Symfony 3 project with the following command:
browser-sync start --proxy http://localhost:8000 --files "web/css/**/*.css"
The browser will open at http://localhost:3000 and if I change something in web/css I can see the updated stylesheets without a full page reload. So far so good.
However it doesn't work with the following bs-config.js:
module.exports = {
"files": [
"web/css/**/*.css"
],
"server": false,
"proxy": "http://localhost:8000"
};
And the command:
browser-sync start
The browser will not load, changes aren't detected, and reloading doesn't work. What am I missing?
Try this:
1. Create bs-config.js with: browser-sync init
2. Open the file and edit it like this:
module.exports = {
"ui": {
"port": 3001,
"weinre": {
"port": 8080
}
},
"files": "web/css/**/*.css",
"watchOptions": {},
"server": false,
"proxy": "http://localhost:8000",
"port": 3000,
"middleware": false,
"serveStatic": [],
"ghostMode": {
"clicks": true,
"scroll": true,
"forms": {
"submit": true,
"inputs": true,
"toggles": true
}
},
"logLevel": "info",
"logPrefix": "BS",
"logConnections": false,
"logFileChanges": true,
"logSnippet": true,
"rewriteRules": [],
"open": "local",
"browser": "default",
"cors": false,
"xip": false,
"hostnameSuffix": false,
"reloadOnRestart": false,
"notify": true,
"scrollProportionally": true,
"scrollThrottle": 0,
"scrollRestoreTechnique": "window.name",
"scrollElements": [],
"scrollElementMapping": [],
"reloadDelay": 0,
"reloadDebounce": 0,
"reloadThrottle": 0,
"plugins": [],
"injectChanges": true,
"startPath": null,
"minify": true,
"host": null,
"localOnly": false,
"codeSync": true,
"timestamps": true,
"clientEvents": [
"scroll",
"scroll:element",
"input:text",
"input:toggles",
"form:submit",
"form:reset",
"click"
],
"socket": {
"socketIoOptions": {
"log": false
},
"socketIoClientConfig": {
"reconnectionAttempts": 50
},
"path": "/browser-sync/socket.io",
"clientPath": "/browser-sync",
"namespace": "/browser-sync",
"clients": {
"heartbeatTimeout": 5000
}
},
"tagNames": {
"less": "link",
"scss": "link",
"css": "link",
"jpg": "img",
"jpeg": "img",
"png": "img",
"svg": "img",
"gif": "img",
"js": "script"
}};
3. Run the PHP server: php -S localhost:8000
4. Start Browsersync, pointing it at the config file: browser-sync start --config bs-config.js
Note the --config flag: plain browser-sync start does not load bs-config.js automatically, which is why your original setup wasn't being applied.
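If you'd rather not remember the flag, a sketch (assuming npm) of wrapping the command in a package.json script:
{
  "scripts": {
    "serve": "browser-sync start --config bs-config.js"
  }
}
Then npm run serve starts Browsersync with the config applied.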