Fabric chaincode container (nodejs) cannot access npm - hyperledger-fabric

I appreciate help in this matter.
I have the latest images (2.2.0, CA 1.4.8), but I get the following error when installing chaincode on the first peer:
failed to invoke chaincode lifecycle, error: timeout expired while executing transaction
I'm working behind a proxy, using a VPN.
I tried increasing the timeouts in the Docker config for all peers:
CORE_CHAINCODE_DEPLOYTIMEOUT=300s
CORE_CHAINCODE_STARTUPTIMEOUT=300s
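For reference, these live in each peer's service definition in the compose file, e.g. (service name and image tag are illustrative):
peer0.org1.example.com:
  image: hyperledger/fabric-peer:2.2.0
  environment:
    - CORE_CHAINCODE_DEPLOYTIMEOUT=300s
    - CORE_CHAINCODE_STARTUPTIMEOUT=300s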
The process works perfectly up to that point (channel created, peers joined the channel). The chaincode can be installed manually with npm install.
I couldn't find an answer to this anywhere. Can someone provide guidance?
UPDATE: It seems that the chaincode container gets bootstrapped (and is even assigned a random name), but gets stuck at:
+ INPUT_DIR=/chaincode/input
+ OUTPUT_DIR=/chaincode/output
+ cp -R /chaincode/input/src/. /chaincode/output
+ cd /chaincode/output
+ '[' -f package-lock.json -o -f npm-shrinkwrap.json ]
+ npm install --production
I believe it is the proxy blocking npm.
I tried to solve this with:
npm config set proxy proxy
npm config set https-proxy proxy
npm set maxsockets 3
After days of struggling, I've found a solution:
- I had to build a custom fabric-nodeenv image that contains the environment variables to set up the npm proxy, as in "node chaincode instantiate behind proxy". After that, I set up the following env vars in docker.yaml:
- CORE_CHAINCODE_NODE_RUNTIME=my_custom_image
- CORE_CHAINCODE_PULL=true
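For reference, a minimal sketch of such a custom image (the base tag and the proxy address are placeholders; adapt them to your environment):
# hypothetical: extend the stock nodeenv image and bake in npm proxy settings
FROM hyperledger/fabric-nodeenv:2.2
# placeholder corporate proxy; replace host and port with your own
ENV HTTP_PROXY=http://proxy.example.com:8080 \
    HTTPS_PROXY=http://proxy.example.com:8080
RUN npm config set proxy $HTTP_PROXY \
    && npm config set https-proxy $HTTPS_PROXY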

Useful! For users in China, npmjs can sometimes be unreachable, so you can build your own image that points at a mirror registry.
For example, for version 2.3.1: download the v2.3.1 tag from https://github.com/hyperledger/fabric-chaincode-node/tree/v2.3.1. Then enter docker/fabric-nodeenv, modify the Dockerfile, and add a line that switches the npm registry source to a mirror:
RUN npm config set registry http://registry.npm.taobao.org
The entire file is as follows:
ARG NODE_VER=12.16.1
FROM node:${NODE_VER}-alpine
RUN npm config set registry http://registry.npm.taobao.org
RUN apk add --no-cache \
    make \
    python \
    g++;
RUN mkdir -p /chaincode/input \
    && mkdir -p /chaincode/output \
    && mkdir -p /usr/local/src;
ADD build.sh start.sh /chaincode/
Then build the Docker image, using the command:
docker image build -t whatever/fabric-nodeenv:2.3 .
Wait a few minutes for the build to finish; running docker images afterwards will show the newly created image.
Finally, in the peer's configuration file, add to the peer environment:
- CORE_CHAINCODE_NODE_RUNTIME=whatever/fabric-nodeenv:2.3
Hope it can help others!

Related

npm run publish doesn't do anything when I push repository from master branch to gh-pages

While trying to run the publish.sh file in VS Code:
#!/usr/bin/env sh
# abort on errors
set -e
# build
npm run docs:build
# navigate into the build output directory
cd docs/.vuepress/dist
# if you are deploying to a custom domain
# echo 'www.example.com' > CNAME
git init
git add -A
git commit -m 'deploy'
# if you are deploying to https://<USERNAME>.github.io
# git push -f git@github.com:boldak/<USERNAME>.github.io.git master
# if you are deploying to https://<USERNAME>.github.io/<REPO>
git push -f https://github.com/shunichkaaaa/edu_db_labs_IO-12_Group-4 master:gh-pages
cd -
The problem I'm facing is that npm run doesn't do ANYTHING. It neither returns an error nor creates the second branch on GitHub as it should.
[screenshot: console output]
As advised on one of the websites, I tried setting npm's ignore-scripts value to false:
npm config set ignore-scripts false
But it was already set to false.
I can host my service locally while using npm run docs:dev.
Switching from VS Code terminal to Windows Command Prompt just repeated the previous problem.
So, any opinions on this?
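For anyone debugging the same thing, two quick checks (a sketch; assumes publish.sh sits in the project root):
# list the scripts npm actually sees - confirms docs:build exists in package.json
npm run
# re-run the deploy script with shell tracing to see exactly where it stops
sh -x publish.sh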

Elastalert2 rules folder config not working

I'm using ElastAlert 2 to get notifications from error logs in Slack.
We need to receive alerts for all of our service logs through dozens of rules.
We build ElastAlert 2 with Docker and deploy it on Argo CD.
But there is a problem: the rules_folder config does not work.
There is a rules_folder entry in config.yaml:
rules_folder: /home/elastalert/rules
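For context, a minimal config.yaml sketch around that entry (the host and intervals are placeholders, not my real values):
rules_folder: /home/elastalert/rules
run_every:
  minutes: 1
buffer_time:
  minutes: 15
es_host: elasticsearch.example.com
es_port: 9200
writeback_index: elastalert_status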
and this is the example Dockerfile:
FROM python:3.9.13-slim
# installation
RUN pip3 install --upgrade pip \
&& pip3 install cryptography elastalert2
ENV LANG="en_US.UTF-8"
# add configuration and alarm
RUN mkdir -p /home/elastalert
WORKDIR /home/elastalert
ADD ./config.yaml /home/elastalert
COPY ./rules /home/elastalert/rules
and this is the run command:
command: [ "/bin/sh", "-c" ]
args:
  - >-
    echo "Finda Elastalert is started!!" &&
    elastalert-create-index &&
    elastalert --verbose --config config.yaml
...
but an error occurs like:
[error screenshot]
I think the rule files cannot be imported via args. In other words, it seems that rules_folder does not apply.
If I specify a specific rule file in the start command, it works well. For example:
elastalert --verbose --config config.yaml --rule ./rules/example/example.yaml
However, it can only execute one rule.
We have dozens of rules.
What's the problem?
Solved: don't store empty YAML files in your rules/ subdirectory.
The problem was that I had commented out everything in all the YAML files except the test rule used for an operation test.
After renaming those commented-out YAML files to another extension, such as .text, ElastAlert now recognizes and runs all the rules.
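As a quick check, something like this (a sketch; assumes the rules live under rules/ and are named *.yaml) finds files with no effective content, i.e. every line blank or commented out, and moves them aside:
# rename any rule file that contains no non-blank, non-comment lines
find rules -name '*.yaml' | while read -r f; do
  grep -qvE '^[[:space:]]*(#|$)' "$f" || mv "$f" "${f%.yaml}.text"
done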

Kubernetes exit code 243

I'm facing an issue with the deployment of my Node.js application in a Kubernetes container.
The container is stuck crash-looping with the error "Back-off restarting failed container", and as error code I get "Reason: Error - exit code: 243".
I did a describe of the pod and found nothing except the "Back-off restarting failed container".
If someone could help, that would be great. Thanks.
I'm not sure why this worked, but it seems to be something with using npm run... to start the node service. I experimented with changing my Dockerfile from launching the container using:
CMD npm run start
to just running the node command directly, using exactly what npm would have been running:
CMD node ...
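For example, if the start script in package.json is just node server.js (a hypothetical entry point), the two variants look like this:
# before: npm is PID 1 and wraps the actual node process
CMD ["npm", "run", "start"]
# after: run the node process directly (entry file is an assumption)
CMD ["node", "server.js"]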
EDIT:
In our environment it was an access problem. To get NPM working, we had to chown all the directories:
COPY --chown=${uid}:${gid} --from=builder /app .
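A fuller sketch of that stage, assuming a multi-stage build with a builder stage and a non-root runtime user (the uid/gid values and entry file are illustrative):
FROM node:16-alpine
WORKDIR /app
# illustrative ids; match whatever user your pod actually runs as
ARG uid=1000
ARG gid=1000
# give the runtime user ownership so npm/node can read and write the app dir
COPY --chown=${uid}:${gid} --from=builder /app .
USER ${uid}:${gid}
CMD ["node", "server.js"]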

Running Nuxt.js in Docker Container build by Paketo.io / Cloud Native Buildpacks

I want to Containerize my Nuxt.js application. I could write my own Dockerfile (as mentioned in the Nuxt.js Google Cloud Run docs for example), but as Cloud Native Buildpacks are here to free us from the need to write those over and over again I wanted to simply use Paketo.io to build a Container from my Nuxt.js app.
I ran
pack build microservice-ui-nuxt-js --path . --builder paketobuildpacks/builder:base
and a Container was created successfully. Here's the full log:
$ pack build microservice-ui-nuxt-js --path . --builder paketobuildpacks/builder:base
base: Pulling from paketobuildpacks/builder
Digest: sha256:3e2ee17348bd901e7e0748e0e1ddccdf8a602b624e418927145b5f84ca26f264
Status: Image is up to date for paketobuildpacks/builder:base
base-cnb: Pulling from paketobuildpacks/run
Digest: sha256:b6b1612ab2dfa294514fff2750e8d724287f81e89d5e91209dbdd562ed7f7daf
Status: Image is up to date for paketobuildpacks/run:base-cnb
===> DETECTING
4 of 7 buildpacks participating
paketo-buildpacks/ca-certificates 2.2.0
paketo-buildpacks/node-engine 0.4.0
paketo-buildpacks/npm-install 0.3.0
paketo-buildpacks/npm-start 0.2.0
===> ANALYZING
Previous image with name "microservice-ui-nuxt-js" not found
===> RESTORING
===> BUILDING
Paketo CA Certificates Buildpack 2.2.0
https://github.com/paketo-buildpacks/ca-certificates
Launch Helper: Contributing to layer
Creating /layers/paketo-buildpacks_ca-certificates/helper/exec.d/ca-certificates-helper
Paketo Node Engine Buildpack 0.4.0
Resolving Node Engine version
Candidate version sources (in priority order):
-> ""
<unknown> -> ""
Selected Node Engine version (using ): 14.17.0
Executing build process
Installing Node Engine 14.17.0
Completed in 5.795s
Configuring build environment
NODE_ENV -> "production"
NODE_HOME -> "/layers/paketo-buildpacks_node-engine/node"
NODE_VERBOSE -> "false"
Configuring launch environment
NODE_ENV -> "production"
NODE_HOME -> "/layers/paketo-buildpacks_node-engine/node"
NODE_VERBOSE -> "false"
Writing profile.d/0_memory_available.sh
Calculates available memory based on container limits at launch time.
Made available in the MEMORY_AVAILABLE environment variable.
Paketo NPM Install Buildpack 0.3.0
Resolving installation process
Process inputs:
node_modules -> "Not found"
npm-cache -> "Not found"
package-lock.json -> "Found"
Selected NPM build process: 'npm ci'
Executing build process
Running 'npm ci --unsafe-perm --cache /layers/paketo-buildpacks_npm-install/npm-cache'
Completed in 14.988s
Configuring launch environment
NPM_CONFIG_LOGLEVEL -> "error"
Configuring environment shared by build and launch
PATH -> "$PATH:/layers/paketo-buildpacks_npm-install/modules/node_modules/.bin"
Paketo NPM Start Buildpack 0.2.0
Assigning launch processes
web: nuxt start
===> EXPORTING
Adding layer 'paketo-buildpacks/ca-certificates:helper'
Adding layer 'paketo-buildpacks/node-engine:node'
Adding layer 'paketo-buildpacks/npm-install:modules'
Adding layer 'paketo-buildpacks/npm-install:npm-cache'
Adding 1/1 app layer(s)
Adding layer 'launcher'
Adding layer 'config'
Adding layer 'process-types'
Adding label 'io.buildpacks.lifecycle.metadata'
Adding label 'io.buildpacks.build.metadata'
Adding label 'io.buildpacks.project.metadata'
Setting default process type 'web'
Saving microservice-ui-nuxt-js...
*** Images (5eb36ba20094):
microservice-ui-nuxt-js
Adding cache layer 'paketo-buildpacks/node-engine:node'
Adding cache layer 'paketo-buildpacks/npm-install:modules'
Adding cache layer 'paketo-buildpacks/npm-install:npm-cache'
Successfully built image microservice-ui-nuxt-js
Now running
docker run --rm -i --tty -p 3000:3000 microservice-ui-nuxt-js
I hoped to see my app in the browser at http://localhost:3000. But no luck! My app doesn't seem to be fully running, although the console output looks good.
What am I missing?
I read about the HOST variable in this post, which is what the whole problem is about! And then I also found this answer, since I now knew what to look for. The Nuxt.js configuration docs state it as well:
By default, the Nuxt.js development server host is localhost which is
only accessible from within the host machine. In order to view your
app on another device you need to modify the host.
And the crucial config is mentioned also:
Host '0.0.0.0' is designated to tell Nuxt.js to resolve a host
address, which is accessible to connections outside of the host
machine (e.g. LAN)
So all we have to do is define the Docker environment variable --env "HOST=0.0.0.0" and run the Paketo-built container like this:
docker run --rm -i --tty --env "HOST=0.0.0.0" -p 3000:3000 microservice-ui-nuxt-js
Now the browser should also show our app at http://localhost:3000.
You can try it yourself using the GitHub Container Registry published image of the example project:
docker run --rm -i --tty --env "HOST=0.0.0.0" -p 3000:3000 ghcr.io/jonashackt/microservice-ui-nuxt-js:latest

rsync files from inside a docker container?

We are using Docker for the build/deploy of a NodeJS app. We have a test container that is built by Jenkins, and executes our unit tests. The Dockerfile looks like this:
FROM node:boron
# <snip> some misc unimportant config here
# Run the tests
ENTRYPOINT npm test
I would like to modify this step so that we run npm run test:cov, which runs the unit tests + generates a coverage report HTML file. I've modified the Dockerfile to say:
# Run the tests + generate coverage
ENTRYPOINT npm run test:cov
... which works. Yay!
...But now I'm unsure how to rsync the coverage report (generated by the above command inside the container) to a remote server.
In Jenkins, the above config is invoked this way:
docker run -t test --rm
which, again, runs the above tests and exits the container.
How can I add some additional steps after the entrypoint command executes, to (for example) rsync some results out to a remote server?
I am not a "node" expert, so bear with me on the details.
First of all, you may consider if you need a separate Dockerfile for running the tests. Ideally, you'd want your image to be built, then tested, without modifying the actual image.
Building a test-image that uses your NodeJS app as a base image (FROM my-nodejs-image) could do the trick, but may not be needed if all you have to do is run a different command / entrypoint on the image.
Secondly; stateful data (the coverage report falls into that category) should not be stored inside the container (i.e., not stored on the container's filesystem). You want your containers to be ephemeral, and anything that should live beyond the container's lifecycle (anything that should be preserved after the container itself is gone), should be stored outside of the container; either in a "volume", or in a bind-mounted directory.
Let me start with the "separate Dockerfile" point. Let's say, your NodeJS application Dockerfile looks like this;
FROM node:boron
COPY package.json /usr/src/app/
RUN npm install && npm cache clean
COPY . /usr/src/app
CMD [ "npm", "start" ]
You build your image, and tag it, for example, with the commit it was built from;
docker build -t myapp:$GIT_COMMIT .
Once the image was built successfully, you want to test it. Probably a quick test to verify it actually "runs". Many ways to do that, perhaps something like;
docker run \
    -d \
    --rm \
    --network=test-network \
    --name test-${GIT_COMMIT} \
    myapp:$GIT_COMMIT
And a container to test it actually does something;
docker run --rm --network=test-network my-test-image curl test-${GIT_COMMIT}
Once tested (and the temporary container removed), you can run your coverage tests, however, instead of writing the coverage report inside the container, write it to a volume or bind-mount. You can override the command to run in the container with docker run;
mkdir -p /coverage-reports/${GIT_COMMIT}
docker run \
    --rm \
    --name test-${GIT_COMMIT} \
    -v /coverage-reports/${GIT_COMMIT}:/usr/src/app/coverage \
    myapp:$GIT_COMMIT npm run test:cov
The commands above;
Create a unique local directory to store the test artifacts (the coverage report)
Run the image you built (and tagged myapp:$GIT_COMMIT)
Bind-mount /coverage-reports/${GIT_COMMIT} into the container at /usr/src/app/coverage
Run the coverage tests (which will write to /usr/src/app/coverage if I'm not mistaken - again, not a Node expert)
Remove the container once it exits
After the container exits, the coverage report is stored in /coverage-reports/${GIT_COMMIT} on the host. You can use your regular tools to rsync those where you want.
As an alternative, you can use a volume plugin to write the results to (e.g.) an s3 bucket, which saves you from having to rsync the results.
Once tests are successful, you can docker tag the image to bump your application's version (e.g. docker tag myapp:$GIT_COMMIT myapp:1.0.12345), docker push it to your registry, and deploy the new version.
Make a script to execute as the entrypoint, and put the commands in the script. Any args you pass when calling docker run get passed to the script.
The docs have an example of the postgres image's script. You can build off that.
Docker Entrypoint Docs
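For example, a minimal entrypoint sketch (assuming rsync is installed in the image, the coverage report lands in /usr/src/app/coverage, and the target host and path are placeholders):
#!/bin/sh
set -e
# run the unit tests and generate the coverage report
npm run test:cov
# ship the report off the ephemeral container; host and path are placeholders
rsync -az /usr/src/app/coverage/ user@reports.example.com:/var/coverage/
Then COPY the script into the image and point the ENTRYPOINT at it.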
