I am new to blockchain development. To make the development and implementation process easier I am using the IBM extension, which provides a tutorial for assembling all of the infrastructure. I was able to finish the entire tutorial without any problems, and at this point I have:
A smart contract developed in TypeScript
An API in Node.js that inserts some assets
In this local environment everything works great: I can make requests from Postman, the Node.js app listens on port 8089, and the requests (GET, POST, PUT, DELETE) work correctly in all cases.
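For example, a local request looks something like this (the route names are placeholders for my actual endpoints):
curl http://localhost:8089/api/assets
curl -X POST http://localhost:8089/api/assets -H "Content-Type: application/json" -d '{"id": "asset1", "value": "100"}'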
The problem comes when I create a Dockerfile for my nodejs project, which has the following structure
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 8089
CMD [ "node", "server.js" ]
The image launches successfully in Docker, but when I try to make a request to the container running the Node.js API I get the following error, which I can see in the container's logs:
error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Endorser- name: org1peer-api.127-0-0-1.nip.io:8080, url:grpc://org1peer-api.127-0-0-1.nip.io:8080, connected:false, connectAttempted:true}
error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server org1peer-api.127-0-0-1.nip.io:8080 url:grpc://org1peer-api.127-0-0-1.nip.io:8080 timeout:3000
I am not sure whether this is because the container cannot connect to my Hyperledger Fabric network deployed with the IBM extension, or because I am not configuring the ports correctly.
Finally, here is the connection.json file generated by the Hyperledger Fabric IBM extension, which I am using to connect from the API to the chaincode:
{
"certificateAuthorities": {
"org1ca-api.127-0-0-1.nip.io:8080": {
"url": "http://org1ca-api.127-0-0-1.nip.io:8080"
}
},
"client": {
"connection": {
"timeout": {
"orderer": "300",
"peer": {
"endorser": "300"
}
}
},
"organization": "Org1"
},
"display_name": "Org1 Gateway",
"id": "org1gateway",
"name": "Org1 Gateway",
"organizations": {
"Org1": {
"certificateAuthorities": [
"org1ca-api.127-0-0-1.nip.io:8080"
],
"mspid": "Org1MSP",
"peers": [
"org1peer-api.127-0-0-1.nip.io:8080"
]
}
},
"peers": {
"org1peer-api.127-0-0-1.nip.io:8080": {
"grpcOptions": {
"grpc.default_authority": "org1peer-api.127-0-0-1.nip.io:8080",
"grpc.ssl_target_name_override": "org1peer-api.127-0-0-1.nip.io:8080"
},
"url": "grpc://org1peer-api.127-0-0-1.nip.io:8080"
}
},
"type": "gateway",
"version": "1.0"
}
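For context, connecting from the API with the fabric-network SDK looks roughly like the following (a simplified sketch; the wallet label, channel name and chaincode name are placeholders for my actual values):
const fs = require('fs');
const { Gateway, Wallets } = require('fabric-network');

async function queryAssets() {
  // Load the connection profile shown above
  const ccp = JSON.parse(fs.readFileSync('./connection.json', 'utf8'));
  // File-system wallet created when the user was enrolled
  const wallet = await Wallets.newFileSystemWallet('./wallet');

  const gateway = new Gateway();
  await gateway.connect(ccp, {
    wallet,
    identity: 'appUser', // placeholder identity label
    discovery: { enabled: true, asLocalhost: true }
  });

  const network = await gateway.getNetwork('mychannel'); // placeholder channel
  const contract = network.getContract('asset-contract'); // placeholder chaincode
  const result = await contract.evaluateTransaction('queryAllAssets');
  console.log(result.toString());
  gateway.disconnect();
}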
Was the blockchain network still running when you created the Docker image? If not, the registered user in the wallet will have become stale and will no longer be valid for connecting to the network. It's been a while since I last used the IBM extension, so I don't know whether it can stop the network as well as tear it down entirely, but do check that the client credentials are up to date as a possible reason for not being able to connect to the network.
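As a sketch of what refreshing the credentials can look like (the CA name comes from the connection profile above; the enrollment ID, secret and wallet label below are placeholders):
const fs = require('fs');
const FabricCAServices = require('fabric-ca-client');
const { Wallets } = require('fabric-network');

async function reenroll() {
  const ccp = JSON.parse(fs.readFileSync('./connection.json', 'utf8'));
  const caUrl = ccp.certificateAuthorities['org1ca-api.127-0-0-1.nip.io:8080'].url;
  const ca = new FabricCAServices(caUrl);

  // Placeholder enrollment ID/secret for a user registered with the CA
  const enrollment = await ca.enroll({ enrollmentID: 'admin', enrollmentSecret: 'adminpw' });

  const wallet = await Wallets.newFileSystemWallet('./wallet');
  await wallet.put('admin', {
    credentials: {
      certificate: enrollment.certificate,
      privateKey: enrollment.key.toBytes()
    },
    mspId: 'Org1MSP',
    type: 'X.509'
  });
}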
I am trying to debug a TypeScript NestJS application in --watch mode (so it restarts when files change) in Visual Studio Code, using the Docker extension. The code is mounted into the container through a volume.
It almost works perfectly: the container launches correctly and the debugger can attach. However, there is one problem I can't seem to work out:
As soon as a file is changed, the watcher picks it up and I see the following in docker logs -f for the container:
[...]
[10:12:59 AM] File change detected. Starting incremental compilation...
[10:12:59 AM] Found 0 errors. Watching for file changes.
Debugger listening on ws://0.0.0.0:9229/af60f5e3-394d-4df3-a565-8d15898348bf
For help, see: https://nodejs.org/en/docs/inspector
user@system:~$
# (at this point the docker logs command stops and the docker is gone)
At that point VS Code ends the debugging session, the container stops (or vice versa?), and I have to restart it manually.
If I launch the exact same docker command (copy/pasted from the VS Code terminal window) manually, it does not stop when a file is changed. This is the command it generates:
docker run -dt --name "core-dev" -e "DEBUG=*" -e "NODE_ENV=development" --label "com.microsoft.created-by=visual-studio-code" -v "/home/user/projects/core:/usr/src/app" -p "4000:4000" -p "9229:9229" --workdir=/usr/src/app "node:14-buster" yarn run start:dev --debug 0.0.0.0:9229
I did try to look with strace what happens and this is what I see on the node process when I change any file:
strace: Process 28315 attached
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=40, si_uid=0, si_status=SIGTERM, si_utime=79, si_stime=9} ---
+++ killed by SIGKILL +++
The "killed by SIGKILL" line does not happen when the container is run manually; it only happens when it's started from VS Code while debugging.
Hopefully someone has an idea where I'm going wrong.
Here are the relevant configs:
launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "Docker Node.js Launch",
"type": "docker",
"request": "launch",
"preLaunchTask": "docker-run: debug",
"platform": "node"
}
]
}
tasks.json
{
"version": "2.0.0",
"tasks": [
{
"type": "docker-run",
"label": "docker-run: debug",
"dockerRun": {
"customOptions": "--workdir=/usr/src/app",
"image": "node:14-buster",
"command": "yarn run start:dev --debug 0.0.0.0:9229",
"ports": [{
"hostPort": 4000,
"containerPort": 4000
}],
"volumes": [
{
"localPath": "${workspaceFolder}",
"containerPath": "/usr/src/app"
}
],
"env": {
"DEBUG": "*",
"NODE_ENV": "development"
}
},
"node": {
"enableDebugging": true,
}
}
]
}
Here is a hello world repo: https://github.com/strikernl/nestjs-docker-hello-world
So here's what I found out. When you change code, Node's debugger process is restarted, and VS Code kills the Docker container when it loses its connection to the debugger.
There is a nice feature which restarts debugger sessions on code changes (see this link), but the problem is that it is for "type": "node" launch configurations, while yours is "type": "docker". Of its node options, only autoAttachChildProcesses seems promising, but it doesn't solve the problem (I've checked).
So my suggestion is:
Create a docker-compose.yml file, which will start the container instead of VSCode.
Edit your launch.json so that it attaches to Node in the container and restarts the debugger session on changes.
Remove/rework tasks.json, as it is not needed in its current state.
docker-compose.yml:
version: "3.0"
services:
node:
image: node:14-buster
working_dir: /usr/src/app
command: yarn run start:dev --debug 0.0.0.0:9229
ports:
- 4000:4000
- 9229:9229
volumes:
- ${PWD}:/usr/src/app
environment:
DEBUG: "*"
NODE_ENV: "development"
launch.json:
{
"version": "0.2.0",
"configurations": [
{
"name": "Attach to node",
"type": "node",
"request": "attach",
"restart": true,
"port": 9229
}
]
}
Save the docker-compose.yml in your project root and use docker-compose up to start the container (you may need to install Compose first: https://docs.docker.com/compose/). Once it's working, start the debugger as usual.
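For example (assuming the compose file above sits in the project root):
docker-compose up
# then, in VS Code, launch the "Attach to node" configuration from the Run and Debug view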
I was following this tutorial to deploy Ghost to Google App Engine
https://cloud.google.com/community/tutorials/ghost-on-app-engine-part-1-deploying
However, the approach of installing Ghost as an NPM Module has been deprecated.
This tutorial introduced a method of installing Ghost with a Dockerfile. https://vanlatum.dev/ghost-appengine/
I'm trying to deploy Ghost to Google App Engine by utilizing this Dockerfile, and connect to my Google Cloud SQL database.
However, I'm getting this error:
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error:
[2019-10-03 21:10:46] ERROR connect ENOENT /cloudsql/ghost
connect ENOENT /cloudsql/ghost
"Unknown database error"
Error ID:
500
Error Code:
ENOENT
----------------------------------------
DatabaseError: connect ENOENT /cloudsql/ghost
at DatabaseError.KnexMigrateError (/var/lib/ghost/versions/2.31.1/node_modules/knex-migrator/lib/errors.js:7:26)
The first tutorial mentions needing to run a migration before starting Ghost to prevent this issue, so I've tried adding these lines to my Dockerfile:
RUN npm install knex-migrator --no-save
RUN NODE_ENV=production node_modules/knex-migrator init --mgpath node_modules/ghost
But then I get the following error:
/bin/sh: 1: node_modules/knex-migrator: Permission denied
The command '/bin/sh -c NODE_ENV=production node_modules/knex-migrator init --mgpath node_modules/ghost' returned a non-zero code: 126
How can I configure my Dockerfile to migrate the database before running Ghost to ensure it can connect to the Cloud SQL database?
Files:
Dockerfile
FROM ghost
COPY config.production.json /var/lib/ghost/config.production.json
WORKDIR /var/lib/ghost
COPY credentials.json /var/lib/ghost/credentials.json
RUN npm install ghost-gcs --no-save
WORKDIR /var/lib/ghost/content/adapters/storage/ghost-gcs/
ADD https://raw.githubusercontent.com/thomas-vl/ghost-gcs/master/export.js index.js
WORKDIR /var/lib/ghost
config.production.json
{
"url": "https://redactedurl.appspot.com",
"fileStorage": false,
"mail": {},
"database": {
"client": "mysql",
"connection": {
"socketPath": "/cloudsql/ghost",
"user": "redacted",
"password": "redacted",
"database": "ghost",
"charset": "utf8"
},
"debug": false
},
"server": {
"host": "0.0.0.0",
"port": "2368"
},
"paths": {
"contentPath": "content/"
},
"logging": {
"level": "info",
"rotation": {
"enabled": true
},
"transports": ["file", "stdout"]
},
"storage": {
"active": "ghost-gcs",
"ghost-gcs": {
"key": "credentials.json",
"bucket": "redactedurl"
}
}
}
app.yaml
runtime: custom
service: blog
env: flex
manual_scaling:
instances: 1
env_variables:
MYSQL_USER: redacted
MYSQL_PASSWORD: redacted
MYSQL_DATABASE: ghost
INSTANCE_CONNECTION_NAME: redacted:us-central1:ghost
beta_settings:
cloud_sql_instances: redacted:us-central1:ghost
skip_files:
- ^(.*/)?#.*#$
- ^(.*/)?.*~$
- ^(.*/)?.*\.py[co]$
- ^(.*/)?.*/RCS/.*$
- ^(.*/)?\..*$
- ^(.*/)?.*\.ts$
- ^(.*/)?config\.development\.json$
According to the Connecting from App Engine page, you need to update your path to /cloudsql/INSTANCE_CONNECTION_NAME (so /cloudsql/redacted:us-central1:ghost).
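For example, the database section of config.production.json would then look something like this (user, password and instance name redacted as in the question):
"database": {
  "client": "mysql",
  "connection": {
    "socketPath": "/cloudsql/redacted:us-central1:ghost",
    "user": "redacted",
    "password": "redacted",
    "database": "ghost",
    "charset": "utf8"
  },
  "debug": false
}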
According to the GitHub readme (https://github.com/SeleniumHQ/docker-selenium), the Chrome standalone image needs the option "-v /dev/shm:/dev/shm", but I am struggling to find in the documentation how to do this correctly.
The full docker run command looks like this:
docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome:3.12.0-cobalt
My reason for needing this is that I have tests that fail specifically because this option is not enabled.
Currently my azure command looks like this:
az container create --resource-group ${resourceGroup} --name ${containerName} --image selenium/standalone-chrome:3.12.0-cobalt --dns-name-label ${dnsNameLabel} --ports 4444
I have been trying to play around with the --azure-file-volume options with no success. Any help is greatly appreciated.
Edit:
Until this is figured out I have decided to use Azure VMs, using a VM image that has Docker installed and starts up the docker-selenium container. It is not quite as fast or as pretty to script, but it gets the job done without running into the limitation on the options you can pass when starting the container. For anyone who decides to go this route, here is my cloud-init code for the VM:
#cloud-config
package_upgrade: true
package_reboot_if_required: true
runcmd:
- apt-get update
- curl -fsSL https://get.docker.com/ | sh
- curl -fsSL https://get.docker.com/gpg | sudo apt-key add -
- sudo docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome:3.12.0-cobalt
While there isn't a way to do this with the Azure CLI, you can use an Azure Resource Manager template deployment.
Create a deployment template file, for example:
selenium-aci-standalone-example.json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"dnsNameLabel": {
"type": "String"
}
},
"resources": [
{
"apiVersion": "2018-06-01",
"location": "[resourceGroup().location]",
"name": "[parameters('dnsNameLabel')]",
"properties": {
"containers": [
{
"name": "standalone-chrome",
"properties": {
"image": "selenium/standalone-chrome",
"ports": [{ "port": "4444", "protocol": "TCP" }],
"resources": { "requests": { "cpu": "1.0", "memoryInGb": "1.5" } },
"volumeMounts": [{"name": "devshm", "mountPath": "/dev/shm"}]
}
}
],
"ipAddress": {
"ports": [{ "port": "4444", "protocol": "TCP" }],
"type": "Public",
"dnsNameLabel": "[parameters('dnsNameLabel')]"
},
"osType": "Linux",
"volumes": [
{
"name": "devshm",
"emptyDir": {}
}
]
},
"type": "Microsoft.ContainerInstance/containerGroups"
}
]
}
Then you can execute the deployment with Azure CLI:
az group create -n selenium-standalone-rg -l westus2
az group deployment create -g selenium-standalone-rg --template-file .\selenium-aci-standalone-example.json --parameters dnsNameLabel=test-standalone-selenium-chrome
Mounting an emptyDir to /dev/shm on the node containers solved this issue for us running Selenium Grid with Azure Container Instances. It seems that it's not possible to directly control the size of the volume - and I couldn't find information about the size of an emptyDir volume in ACI documentation - but all the broken pipe errors went away in our test runs after we added the volume configuration to our ARM template.
How can I use electron-builder's auto-update feature with Amazon S3 in my electron app?
Maybe someone who has already implemented it, can give more details than those which are provided in the electron-builder documentation?
Yeah, I agree with you; I've been through this recently.
Even if I'm late, I'll try to share as much as I know for others!
In my case, I'm using electron-builder to package my Electron/Angular app.
To use electron-builder, I suggest you create a file called electron-builder.json at the project root.
Here's the content of mine:
{
"productName": "project-name",
"appId": "org.project.project-name",
"artifactName": "${productName}-setup-${version}.${ext}", // this will be the output artifact name
"directories": {
"output": "builds/" // The output directory...
},
"files": [ //included/excluded files
"dist/",
"node_modules/",
"package.json",
"**/*",
"!**/*.ts",
"!*.code-workspace",
"!package-lock.json",
"!src/",
"!e2e/",
"!hooks/",
"!angular.json",
"!_config.yml",
"!karma.conf.js",
"!tsconfig.json",
"!tslint.json"
],
"publish" : {
"provider": "generic",
"url": "https://project-release.s3.amazonaws.com",
"path": "bucket-path"
},
"nsis": {
"oneClick": false,
"allowToChangeInstallationDirectory": true
},
"mac": {
"icon": "src/favicon.ico"
},
"win": {
"icon": "src/favicon.ico"
},
"linux": {
"icon": "src/favicon.png"
}
}
As you can see, you need to add the publish config if you want to publish the app automatically to S3 with electron-builder. The thing I don't like about that is that all artifacts and files end up in the same folder. In my case, as you can see in the package.json below, I decided to package it manually with electron-builder build -p never. This basically says never publish it, but I needed it because without the publish config it would not generate the latest.yml file. I'm using GitLab CI to generate the artifacts, and then I use a script to publish them to S3, but you can use the -p always option if you want.
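For reference, such a publish script can be as simple as a couple of AWS CLI copies (a sketch; the artifact name is illustrative, the bucket must match the publish url above, and AWS credentials need to be configured in CI):
aws s3 cp builds/project-setup-1.0.0.exe s3://project-release/
aws s3 cp builds/latest.yml s3://project-release/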
Electron-builder needs the latest.yml file, because this is how it knows whether the artifact on S3 is more recent.
Example latest.yml content:
version: 1.0.1
files:
- url: project-setup-1.0.0.exe
sha512: blablablablablablablabla==
size: 72014605
path: project-setup-1.0.0.exe
sha512: blablablablablabla==
releaseDate: '2019-03-10T22:18:19.735Z'
One other important thing to mention is that electron-builder will try to fetch content at the URL you provided in the electron-builder.json publish config, like so:
https://project-release.s3.amazonaws.com/latest.yml
https://project-release.s3.amazonaws.com/project-setup-1.0.0.exe
These are the files that get uploaded by default.
For that, you need to make your S3 bucket public so everyone with the app can fetch the newest versions.
Here's the policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectVersion",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::your-bucket-name/*",
"arn:aws:s3:::your-bucket-name"
]
}
]
}
Replace your-bucket-name with the name of your bucket.
Second, to package the app, I added these scripts to package.json ("build:prod" is for Angular only):
"scripts": {
"build:prod": "npm run build -- -c production",
"package:linux": "npm run build:prod && electron-builder build --linux -p never",
"package:windows": "npm run build:prod && electron-builder build --windows -p never",
"package:mac": "npm run build:prod && electron-builder build --mac -p never",
},
Finally, there's a really well-written article here that works with GitLab CI.
I might have forgotten some parts, so feel free to ask any questions!
Here is the documentation for S3 autoUpdater in electron-builder
https://www.electron.build/configuration/publish#s3options
You put your configuration inside the package.json build section, for example:
{
"name": "ps-documentation",
"description": "Provides a design pattern for Precisão Sistemas",
"build":{
"publish": {
"provider": "s3",
"bucket": "your-bucket-name"
}
}
}
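On the application side, the update check is typically triggered from the main process with electron-updater (a minimal sketch, assuming electron-updater is installed and the app was packaged with the publish config above):
const { app } = require('electron');
const { autoUpdater } = require('electron-updater');

app.on('ready', () => {
  // Checks the S3 bucket (via the publish config) for a newer version,
  // downloads it in the background and notifies the user when it is ready.
  autoUpdater.checkForUpdatesAndNotify();
});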
I'm trying to deploy a multi-container Docker application to Elastic Beanstalk. There are two containers, one for the supervisor+uwsgi+Django application and one for the JavaScript frontend.
Using docker-compose, it works fine locally.
My docker-compose file:
version: '2'
services:
frontend:
image: node:latest
working_dir: "/frontend"
ports:
- "3000:3000"
volumes:
- ./frontend:/frontend
- static-content:/frontend/build
command: bash -c "yarn install && yarn build"
web:
build: web/
working_dir: "/app"
volumes:
- ./web/app:/app
- static-content:/home/docker/volatile/static
command: bash -c "pip3 install -r requirements.txt && python3 manage.py migrate && supervisord -n"
ports:
- "80:80"
- "8000:8000"
depends_on:
- db
- frontend
volumes:
static-content:
The image for the Node.js container is the official Docker one.
For the "web" container I use the following Dockerfile:
FROM ubuntu:16.04
# Install required packages and remove the apt packages cache when done.
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y \
python3 \
python3-dev \
python3-setuptools \
python3-pip \
nginx \
supervisor \
sqlite3 && \
pip3 install -U pip setuptools && \
rm -rf /var/lib/apt/lists/*
# install uwsgi now because it takes a little while
RUN pip3 install uwsgi
# setup all the configfiles
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
COPY nginx-app.conf /etc/nginx/sites-available/default
COPY supervisor-app.conf /etc/supervisor/conf.d/
EXPOSE 80
EXPOSE 8000
However, AWS uses its own "compose" settings, defined in Dockerrun.aws.json, which has a different syntax, so I had to adapt it.
First, I used the container-transform tool to generate the file from my docker-compose file.
Then I had to make some adjustments; for example, the AWS file doesn't seem to have a "workdir" property, so I had to change things accordingly.
I also published my image to AWS Elastic Container Registry.
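For reference, pushing an image to ECR generally looks like this (the account ID, region and repository name below are placeholders, using the current AWS CLI):
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t web ./web
docker tag web:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest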
The Dockerrun.aws.json file became the following:
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"command": [
"bash",
"-c",
"yarn install --cwd /frontend && yarn build --cwd /frontend"
],
"essential": true,
"image": "node:latest",
"memory": 128,
"mountPoints": [
{
"containerPath": "/frontend",
"sourceVolume": "_Frontend"
},
{
"containerPath": "/frontend/build",
"sourceVolume": "Static-Content"
}
],
"name": "frontend",
"portMappings": [
{
"containerPort": 3000,
"hostPort": 3000
}
]
},
{
"command": [
"bash",
"-c",
"pip3 install -r /app/requirements.txt && supervisord -n"
],
"essential": true,
"image": "<my-ecr-image-path>",
"memory": 128,
"mountPoints": [
{
"containerPath": "/app",
"sourceVolume": "_WebApp"
},
{
"containerPath": "/home/docker/volatile/static",
"sourceVolume": "Static-Content"
},
{
"containerPath": "/var/log/supervisor",
"sourceVolume": "_SupervisorLog"
}
],
"name": "web",
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
},
{
"containerPort": 8000,
"hostPort": 8000
}
],
"links": [
"frontend"
]
}
],
"family": "",
"volumes": [
{
"host": {
"sourcePath": "/var/app/current/frontend"
},
"name": "_Frontend"
},
{
"host": {
"sourcePath": "static-content"
},
"name": "Static-Content"
},
{
"host": {
"sourcePath": "/var/app/current/web/app"
},
"name": "_WebApp"
},
{
"host": {
"sourcePath": "/var/log/supervisor"
},
"name": "_SupervisorLog"
}
]
}
But then, after deploying, I see this in the logs:
> ------------------------------------- /var/log/containers/frontend-xxxxxx-stdouterr.log
>
> ------------------------------------- yarn install v1.3.2
> [1/4] Resolving packages...
> [2/4] Fetching packages...
> info There appears to be trouble with your network connection.
> Retrying...
> info There appears to be trouble with your network connection.
> Retrying...
> info There appears to be trouble with your network connection.
> Retrying...
> info There appears to be trouble with your network connection.
> Retrying...
> info There appears to be trouble with your network connection.
> Retrying...
> info There appears to be trouble with your network connection.
> Retrying...
> info There appears to be trouble with your network connection.
> Retrying...
> error An unexpected error occurred:
> "https://registry.yarnpkg.com/aws-sdk/-/aws-sdk-2.179.0.tgz:
> ESOCKETTIMEDOUT".
I have tried to increase the timeout for yarn... but the error still happens.
I also can't execute bash in the container (it gets stuck forever), or any other command (e.g. to try to reproduce the yarn issue).
And the _SupervisorLog volume doesn't seem to be mapping correctly; the folder is empty, so I can't understand exactly what is happening or reproduce the error properly.
If I try to go to the URL, sometimes I get a Bad Gateway and sometimes I don't even get that.
If I try to go to the path where the "frontend" should load, I get a "forbidden" error.
Just to clarify, this all works fine when I run the containers locally with docker-compose.
I have only started using Docker recently, so feel free to point out any other issues you might find in my files.