I am trying to run the Hyperledger Fabric sample test network on Windows 11

When executing "/network.sh up", I got this:
-bash: /network.sh: no such file or directory
How can I fix this? I need to get the network up.

You say you are trying to run /network.sh. This will try to run a network.sh script in the root directory of your filesystem. More likely you meant to type ./network.sh, which will run a network.sh script in your current working directory (. is interpreted by the shell as the current directory).
You give no details of how your environment is set up, so make sure you are following the guidance on running on Windows provided by the Fabric documentation:
https://hyperledger-fabric.readthedocs.io/en/latest/prereqs.html#windows
Essentially, the recommendation is to run on Linux within a Windows Subsystem for Linux (WSL) virtual machine. I personally run Docker within the Linux environment too, and don't use Docker Desktop for Windows. Either approach should work.
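For illustration, a minimal sequence inside a WSL terminal; the fabric-samples/test-network location is an assumption based on where the Fabric install script typically places the samples:
cd ~/fabric-samples/test-network   # adjust if you cloned the samples elsewhere (assumed path)
./network.sh up                    # note the leading "./" - runs the script from the current directory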

Related

Where does '~' expand to when mounted in Docker with Windows Subsystem for Linux?

I have a docker container I wrote that sets up AWS profiles for me. In Linux it works great, on WSL it partially works.
When I run the container I am mounting the ~/.aws directory, checking if the profiles exist and if they don't exist I create them. If they do exist I don't do anything.
In Linux I can run this container and then continue to use aws-cli with no problems.
In Windows Subsystem for Linux, when I run the container the first time around, it will create the profiles for me. If I choose to run the container again, it sees that the profiles already exist, so it does nothing. This tells me the file exists somewhere, but I can't use aws-cli because the file doesn't exist at ~/.aws.
So my question is: where is ~/.aws in WSL when mounted to a Docker container? I've attempted to do a find on the entire filesystem in WSL and that returns nothing. I've also tried changing the mount path to /root/.aws and I run into the same conditions.
EDIT:
I still don't know the answer to my question above. But if anyone comes across this question I did find a work around.
I've updated Docker Desktop to allow mounting the entire C: drive. Then I just changed my docker run command to mount c:/.aws instead of ~/.aws, so my command looks like -v c:/.aws:/root/.aws. After that I added this environment variable in WSL: export AWS_SHARED_CREDENTIALS_FILE="/mnt/c/.aws/credentials", and now the AWS CLI picks up on my profile changes.
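Spelled out as commands, that workaround looks like this; the image name awsprofileprocessor:latest is borrowed from the answer further down and is otherwise a placeholder:
docker run -it -v c:/.aws:/root/.aws awsprofileprocessor:latest   # mount from the C: drive instead of ~
export AWS_SHARED_CREDENTIALS_FILE="/mnt/c/.aws/credentials"      # point the AWS CLI in WSL at the same files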
The shell always expands ~ to the value of the HOME environment variable. If that variable is not set, it expands to nothing. If you want to find where ~/.aws is located, you can write something like echo ~/.aws and the shell will expand it for you.
The only exception is that ~user expands to the home directory of the named user; the HOME environment variable is not consulted there.
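A quick way to see both behaviours from any shell (~root is just an example of the ~user form):
echo ~        # prints the value of $HOME
echo ~root    # prints root's home directory; $HOME is not consulted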
You have to remember that in your setup the Docker engine (Docker for Windows) is installed on Windows; it is inside the Windows environment that the docker command is 'launched'. So when you say use ~/.aws, it looks in the Windows filesystem for this location.
In Windows, ~ is a valid directory name (try mkdir ~ from a cmd prompt), so when you say map ~/.aws I'm unsure what actually gets created; maybe try searching your C: drive for a folder called ~. There is no ~ shortcut in Windows for the home folder, and if there were, which home would it be? The home of the logged-in Windows user, or the home inside WSL?
To make this work in WSL you need to pass ~/.aws to wslpath like this:
➜ echo $(wslpath ~/.aws)
/mnt/c/home/damo/.aws
But this location is the path according to WSL, not Windows. You need to run it twice, with the -w flag the second time:
➜ echo $(wslpath -w $(wslpath ~/.aws))
C:\home\damo\.aws
which would make your final docker command look like this:
docker run -it -v $(wslpath -w $(wslpath ~/.aws)):/root/.aws awsprofileprocessor:latest
With this you will now be telling Docker for Windows the Windows path to use for the mount.
Please let me know if this works for you; I'm interested in how it turns out.

Run nodejs in sandbox with virtual filesystem

I am working on a project for an online Python compiler. When a user sends a Python script, the server will execute it. What I want to do is create a sandbox with a virtual filesystem, execute that script inside it, and keep that sandbox isolated from the real server's filesystem, while Node.js is still able to control the stdin and stdout of that sandbox.
How to make it possible?
Docker is a great way to sandbox things.
You can run
docker run -i --network none python:3
from your Node.js server (the -i flag keeps stdin open so you can pipe the script in). Look at other switches of docker run to plug as many security holes as possible.
The shtick is, you run the docker command from your Node.js server and pass the user's Python code via stdin.
Now, if your Node.js server is on one machine and the sandbox should run on another machine, you tell docker to connect to the other machine using the DOCKER_HOST environment variable.
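As a minimal sketch of that stdin approach from a shell, with the file name user_code.py and the resource limits being assumptions rather than part of this answer:
# pipe the user's code into an isolated, resource-limited, throwaway container
docker run -i --rm --network none --memory 128m --cpus 0.5 python:3 python - < user_code.py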
Docker containers wrap up the software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries — basically anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.
This might be worth reading: https://instabug.com/blog/the-difference-between-virtual-machines-and-containers/

xcopy, net use not working from Linux machine

I am trying to copy files from a Windows server to a network shared folder via VPN. Here is my code from the batch file; this works fine without any issues.
net use \\servername\test_folder password /user:user_name
xcopy C:\Apache\htdocs\arul\xias \\servername\\test_folder
But when I try to run this from a Linux machine it does not work. This Linux machine is also connected to the network shared folder via VPN. So I tried the below on the Linux machine in a .sh file:
net use \\servername\test_folder password /user:user_name
cp C:\Apache\htdocs\arul\xias \\servername\\test_folder
I am getting errors like "net: command not found" and "cp: -r not specified".
How can I achieve this from a Linux machine?
The commands net use and xcopy are specific to Windows and will never work on Linux.
You should use SMB-specific tools instead, such as mount.cifs (from the cifs-utils package) or smbclient; of course, the kernel must support CIFS.
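A sketch of the Linux equivalent, assuming the cifs-utils package is installed; the mount point /mnt/test_folder and the local source path are placeholders, while the server, share, and credentials come from the question:
sudo mkdir -p /mnt/test_folder                  # create a mount point for the share
sudo mount -t cifs //servername/test_folder /mnt/test_folder -o username=user_name,password=password
cp -r /path/to/xias /mnt/test_folder/           # now a normal recursive copy works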

Running Matlab code on a cluster

I have a university account for the university's cluster, but I don't know how I can use it to run my Matlab code. Could anyone help? I connect to the cluster by typing the command below in the terminal of my laptop:
ssh myusername@192.168.194.222
Then it asks me to type my password. After that, the text below appears:
Welcome to gav 9.1.1 (3.12.60-ql-generic-9.1-74) based on Ubuntu 14.04.5 LTS
Last login: Sun Apr 16 10:45:49 2017 from 192.168.41.213
gav:~ >
How can I run my code after these steps? Could anyone help me?
It looks like you have a Linux shell, so you can run your script (for instance yourScript.m)
> matlab -nojvm -nodisplay -nosplash < yourScript.m
(see also https://uk.mathworks.com/help/matlab/ref/matlablinux.html)
As far as I know, there are two possibilities:
Conventional Matlab is installed on the Cluster
The Matlab Distributed Computing server is installed on the cluster
Conventional Matlab is installed on the Cluster
You execute Matlab on the cluster as you would on your local computer. I guess that you work on Windows on your local computer, given that you quote a simple shell prompt in your question ;) All right, all right, bad psychic skillz ;) see edit below.
What you see is the cluster awaiting a program name to execute. This is called the "Shell". Google "Linux shell tutorial" or start with this tutorial to get information about how to operate a Linux system without a graphical desktop.
Try to start matlab by simply typing matlab after the text you've seen. If it works, you see Matlab's welcome message and the Matlab prompt as you would see it in Matlab's command window on your local PC.
Bonus: you can try to execute Matlab on the cluster but see a graphical interface by replacing your ssh call with ssh -X myusername@192.168.194.222, i.e. by adding an additional -X.
Upload your Matlab scripts to the cluster, for example by using WinSCP (tutorial)
Execute your Matlab functions like you would locally by navigating into the correct folder and typing the function name.
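For non-interactive runs, a sketch reusing the yourScript.m name from the first answer; the -r "...; exit" pattern is a common idiom, not something quoted from these answers:
scp yourScript.m myusername@192.168.194.222:~/                                      # upload the script
ssh myusername@192.168.194.222 'matlab -nodisplay -nosplash -r "yourScript; exit"'  # run it and quit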
EDIT: As you use Linux, you may use gio mount ssh://myusername@192.168.194.222 to access your home folder on the cluster via your file manager. If that fails, try gvfs-mount ssh://myusername@192.168.194.222 (the old name of the tool). The packages gvfs-backends and gvfs-fuse (I assume that you use Ubuntu; other distributions may have different package names) must be installed for this; use your package manager to install them if you get an error like "command not found".
Distributed Computing Server
This provides a set of Matlab "workers" which are sent tasks from your local computer. You use your local Matlab installation to connect to the Distributed Computing Server. Start with the Matlab help pages for the Distributed Computing Server.

Remote development - Edit on Windows & Build on Linux

I am looking for a solution for a remote development environment as follows:
Editor - Windows Source Insight / Visual Studio
Source control - Clearcase
Build server - Linux
The above can't be modified.
In my current setup, I can view and edit the sources on Windows using a Windows ClearCase client.
My problem is mainly the build (and the later on, the debug) process.
I need to invoke 'make' from Windows on a specific Clearcase view on the Linux Server.
I can log in to the Linux server in a separate process using SSH and run 'make', but it is a cumbersome procedure.
I am also unable to view the 'make' results and double-click them to go to the specific warning/error.
Is there any way to remotely bind a Windows command/batch to a Linux environment?
Perhaps through SSH?
Thank you for any suggestion you might have.
The usual solution is rather a pull strategy (where your build server fetches information on Linux, rather than trying to pilot everything from Windows).
If you follow the SSH path, be aware of technote swg21351507:
Linux SSH connection hangs when attempting to exit after starting ClearCase.
This can affect the use of scripts to start/stop ClearCase remotely using SSH.
Cause
This is due to the OpenSSH server design, which will not close the console until all processes/jobs executed by the user are completed.
Refer to this SSH FAQ for further details, regarding background jobs.
Resolving the problem
Redirect the ClearCase start script to either /dev/null or to a log file.
Example:
/usr/atria/etc/clearcase start < /dev/null >& /dev/null
/usr/atria/etc/clearcase start < /tmp/ccstart >& /tmp/ccstart
Try sshfs. I don't know if there is an sshfs client for Windows. If not, you can try NFS, or even Samba. Those definitely work on Windows and Linux.
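To illustrate both halves of that idea; the user, host, view path, and mount point below are placeholders, not values from the question:
sshfs builduser@linuxserver:/views/myview /mnt/myview                       # mount the remote view locally over SSH
ssh builduser@linuxserver 'cd /views/myview && make' 2>&1 | tee build.log   # or trigger the build directly and keep the output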
I just came across this and wanted to answer, even if the original poster has surely resolved their issue. This could be quite easily resolved by installing a Jenkins instance on the build machine. You could kick off the build from the web interface and have it pull the files from ClearCase and tell you the results.
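For illustration, such a Jenkins job could also be kicked off from the Windows side without opening the web interface; the host, job name, and credentials here are placeholders, and an API token is assumed to be configured:
curl -X POST "http://buildserver:8080/job/my-build/build" --user myuser:myapitoken   # trigger a parameterless build via Jenkins' REST API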
