My Go path currently points to /usr/local/go, but this is not where I prefer to install projects.
What is the preferred way to set up the Go path so that I can execute go but build projects from a completely different directory? Thanks
In Fabric you install the chaincode from inside the cli container, so it doesn't matter where your GOPATH is mapped on your host machine.
Map your chaincode path to the container's GOPATH:
volumes:
- ./chaincode/:/opt/gopath/src/github.com/
And then install and instantiate it.
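From inside the cli container, the install and instantiate steps might look like this (a minimal sketch; the chaincode name mycc, version 1.0, channel mychannel, and orderer address are assumptions for illustration):
docker exec -it cli bash
# inside the cli container; the path resolves under /opt/gopath/src/github.com/ per the mapping above
peer chaincode install -n mycc -v 1.0 -p github.com/mychaincode
peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n mycc -v 1.0 -c '{"Args":["init"]}'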
I am developing a Node.js app using Windows 10 WSL with a remote container in Visual Studio Code.
What are the best practices for Dockerfile and docker-compose.yml at this time?
Since we are in the development phase, we don't want to COPY or ADD the program source code in the Dockerfile (it's not practical to recreate the image every time we change one line).
I use Docker Compose to bind-mount the folder with the source code on the Windows side as a volume, but in that case the source code folder and the files created from the Docker container all end up with root permission.
In the Docker container, Node.js runs as the unprivileged node user.
For the above reasons, Node.js does not have write permission to the folders you bind.
Please let me know how to solve this problem.
I found a way to specify a UID or GID, but I could not do so because I am binding from Windows.
You can optionally mount the Node code using NFS in Docker Compose.
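A minimal sketch of what that could look like in a compose file (the NFS server address 192.168.1.100 and the export path /exported/src are assumptions; adjust them to your share):
version: "3.7"
services:
  app:
    image: node:14
    volumes:
      - node_src:/usr/src/app
volumes:
  node_src:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.100,rw,nolock,soft"
      device: ":/exported/src"
Because the files then live on the NFS share rather than on a Windows bind mount, the node user's write access is governed by the share's ownership instead of the Windows drive.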
I have an App Service for my PHP 8.0 application. Azure will only allow me to use Linux, which is fine, but I'm having trouble installing Composer globally.
Because only files inside /home are persisted, I'm not sure where to place the resulting composer.phar file so that it is included in the PATH. I can't find any relevant documentation; the only relevant discussion I could find was this: https://learn.microsoft.com/en-us/answers/questions/3638/installing-composer-on-azure-app-service.html but it still didn't help.
Could anyone tell me either where to put composer.phar or whether there's a way to edit my path to point towards /home/composer.phar?
Thanks!
I found the right way.
Echoing PATH told me that /home/site/wwwroot was included in it, so all that's needed is to move the composer file into wwwroot, doing something like mv composer.phar /home/site/wwwroot/composer.
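End to end, the steps might look like this (a sketch; the installer line is the standard Composer download, and the final check assumes wwwroot is on the PATH as described above):
# download the installer and produce composer.phar
curl -sS https://getcomposer.org/installer | php
# move it into a directory already on the PATH and make it executable
mv composer.phar /home/site/wwwroot/composer
chmod +x /home/site/wwwroot/composer
composer --version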
I'm following this link to set up multiple organizations for a business network.
I have cloned the git code and tried to set up my first network by following the instructions here.
I have followed each and every step until I reach the generate command, ./byfn.sh -m generate, which exits saying cryptogen tool not found, as can be seen below.
text#blockchain-2:~/fabric-tools/fabric-samples/first-network$ ./byfn.sh -m generate
Generating certs and genesis block for with channel 'mychannel' and CLI timeout of '10000' seconds and CLI delay of '3' seconds
Continue (y/n)? y
proceeding ...
cryptogen tool not found. exiting
How do I fix this issue and proceed further?
Assuming you already have Docker and Docker Compose installed, you will also need to install some platform-specific binaries (including cryptogen). If you have already installed these binaries, then adding the folder containing them to your PATH will help.
The two documents below from the Hyperledger Fabric docs will help with the prerequisites, but don't run the byfn.sh script from there; run it from the clone of the sstone repo, with the parameters specified in the Multi-Org Composer Tutorial.
Here are the two Fabric pre-req docs:
Cryptogen and other binaries and
Fabric Pre-reqs
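Once the binaries are downloaded, putting them on the PATH is a one-liner (a sketch assuming the default fabric-samples layout, where bin sits alongside first-network; adjust the path if yours differs):
# run from the fabric-samples directory
export PATH=$PWD/bin:$PATH
which cryptogen   # should now resolve to .../fabric-samples/bin/cryptogen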
Make sure your bin and first-network folders are in the same directory, because cryptogen lives in bin:
/learn/hyperledger/fabric-samples$ ls
balance-transfer chaincode-docker-devmode first-network README.md
basic-network config high-throughput scripts
bin fabcar LICENSE
chaincode fabric-ca MAINTAINERS.md
I have a Dockerfile to run errbot, and I'm looking for a way to script the plugin installation. The documentation only seems to list the manual !repos install ... method.
Is there any way for automatic plugin installation from git repo?
Yes, you can simply use the BOT_EXTRA_PLUGIN_DIR config parameter and put any plugins you want to preload there.
https://github.com/errbotio/errbot/blob/master/errbot/config-template.py#L85
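In a Dockerfile that could look something like this (a sketch; the /srv/plugins directory and the config.py location are assumptions, adjust them to your image layout):
# copy your plugin sources into the image and point errbot at them
COPY plugins/ /srv/plugins/
RUN echo "BOT_EXTRA_PLUGIN_DIR = '/srv/plugins'" >> /errbot/config.py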
If you've initialized Docker swarm mode (even if just on one Docker host), you can use Docker configs to pull files into a container at run time rather than baking them into the Docker image.
For your specific use case, check out the Docker image https://github.com/swarmstack/errbot-docker which does exactly what you are looking for.
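For example, a config created from your errbot config.py can be attached to a swarm service like this (a sketch; the config name errbot_config, the target path, and the image name are assumptions):
docker config create errbot_config ./config.py
docker service create --name errbot \
  --config source=errbot_config,target=/errbot/config.py \
  your-errbot-image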
I'm quite new to Docker, and I'm facing a problem I have no idea how to solve.
I have a Jenkins (Docker) image running and everything was fine. A few days ago I created a job to run my Node.js tests every time a pull request is made. One of the job's build steps is to run npm install, and the job is constantly failing with this error:
tar (child): bzip2: Cannot exec: No such file or directory
So I know that I have to install bzip2 inside the Jenkins container, but how do I do that? I've already tried to run docker run jenkins bash -c "sudo apt-get bzip2" but I got: bash: sudo: command not found.
With that said, how can I do that?
Thanks in advance.
The answer to this lies in the philosophy of Docker containers. Docker containers are, and should be, immutable. So this is what you can try to fix the issue:
1. Treat your base image, i.e. jenkins, as the starting point.
2. Log in to this base image and install bzip2.
3. Commit these changes; this should result in a new image.
4. Now use the image from step 3 to install any other package, like npm.
5. Commit the above image again.
Note: To execute commands in a more controlled way, I always prefer to use something like this:
docker exec -it jenkins bash
In a nutshell, the answer to both of your current issues lies in the fact that images are immutable, so the way to make a change that persists is to commit it and use the newly created image for further changes. I hope this helps.
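Put together, the commit-based approach from the steps above might look like this (a sketch; the container name jenkins and the new image tag are assumptions):
# open a root shell in the running container (the jenkins user can't install packages)
docker exec -u root -it jenkins bash
apt-get update && apt-get install -y bzip2
exit
# persist the change as a new image
docker commit jenkins jenkins-with-bzip2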
There are lots of issues here, but the biggest one is that you need to build your images with the tools you need rather than installing them inside a running container. As techtrainer mentions, images are immutable and don't change (at least from your running container), and containers are disposable (so any changes you make inside them are lost when you restart them, unless your data is stored outside the container in a volume).
I do disagree with techtrainer on making your changes in a container and committing them to an image with docker commit. This will work, but it's a hand-built method that is very error prone and not easily reproduced. Instead, you should leverage a Dockerfile and use docker build. You can either modify the jenkins image you're using by directly modifying its Dockerfile, or you can create a child image that is FROM jenkins:latest.
When modifying this image, note that the Jenkins image is configured to run as the user "jenkins", so you'll need to switch to root to perform your application installs. The sudo app is not included in most images, but external to the container you can run docker commands as any user; from the CLI, that's as easy as docker run -u root .... And inside your Dockerfile, you just need a USER root at the top and then USER jenkins at the end.
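A minimal child-image Dockerfile along those lines (a sketch; FROM jenkins:latest follows the suggestion above, and bzip2 is the package the question needs):
FROM jenkins:latest
USER root
# install the missing tool, then clean up the apt cache to keep the image small
RUN apt-get update && apt-get install -y bzip2 && rm -rf /var/lib/apt/lists/*
USER jenkins
Build it with something like docker build -t jenkins-bzip2 . and run that image in place of the stock one.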
One last piece of advice: don't run your builds directly on the Jenkins container; instead, run agents with the build tools you need, which you can upgrade independently of the Jenkins container. That approach is much more flexible, allows multiple environments with only the tools needed for each, and if you scale it up you can use a plugin to spin up agents on demand, so you could have hundreds of possible agents while only running a handful of them concurrently.