Why do I get a "tar: Error opening archive: Unrecognized archive format" error when installing hyperledger fabric - hyperledger-fabric

Running macOS Big Sur 11.5.2 (Apple M1 chip).
When I run the command "curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/main/scripts/bootstrap.sh | bash -s", I get:
tar: Error opening archive: Unrecognized archive format
==> There was an error downloading the binary file.
------> 2.3.3 platform specific fabric binary is not available to download <----

It looks like that script builds tar file names to download based on your local machine architecture:
https://github.com/hyperledger/fabric/blob/f9027a48cb6028243de75240b6a85f90834edaab/scripts/bootstrap.sh#L12
ARCH=$(echo "$(uname -s|tr '[:upper:]' '[:lower:]'|sed 's/mingw64_nt.*/windows/')-$(uname -m | sed 's/x86_64/amd64/g')")
https://github.com/hyperledger/fabric/blob/f9027a48cb6028243de75240b6a85f90834edaab/scripts/bootstrap.sh#L162-L163
BINARY_FILE=hyperledger-fabric-${ARCH}-${VERSION}.tar.gz
CA_BINARY_FILE=hyperledger-fabric-ca-${ARCH}-${CA_VERSION}.tar.gz
And then tries to download them from GitHub:
https://github.com/hyperledger/fabric/blob/f9027a48cb6028243de75240b6a85f90834edaab/scripts/bootstrap.sh#L90
download "${BINARY_FILE}" "https://github.com/hyperledger/fabric/releases/download/v${VERSION}/${BINARY_FILE}"
For me, on an older Intel-based Mac, this results in a download URL like https://github.com/hyperledger/fabric/releases/download/v2.3.3/hyperledger-fabric-darwin-amd64-2.3.3.tar.gz, which does exist. On an M1-based Mac, uname -m reports arm64, so the script most likely builds a darwin-arm64 tar file URL that does not exist. You may be able to tweak the script so it downloads the same binaries as my Intel-based Mac and get things working under Rosetta.
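A rough sketch of that tweak (untested on an M1 Mac; it assumes Rosetta can run the amd64 binaries):
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/main/scripts/bootstrap.sh -o bootstrap.sh
# edit the ARCH= line near the top of bootstrap.sh to hard-code the Intel value:
#   ARCH=darwin-amd64
bash bootstrap.sh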

I had a similar problem with the M1 chip. The short version of the solution is to 'make' your own binaries and copy them over.
Detailed steps follow:
Step 1 : Clone fabric and fabric-ca
git clone https://github.com/hyperledger/fabric
git clone https://github.com/hyperledger/fabric-ca
Step 2: Change directory to 'fabric' and make the native binaries
make native
Step 3: Move the binaries to their respective folders
mv build/bin $destination_folder/
mv sampleconfig $destination_folder/config
Step 4: Repeat with fabric-ca
cd ../fabric-ca
make fabric-ca-client
mv bin/fabric-ca-client $destination_folder/bin
Step 5: Navigate to your $destination_folder/ and run things as usual.
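Putting the steps together, a rough end-to-end sketch (it assumes Go and the other Fabric build prerequisites are already installed; $destination_folder is any directory you choose):
destination_folder=$HOME/fabric-install   # pick any target directory
mkdir -p "$destination_folder"
git clone https://github.com/hyperledger/fabric
git clone https://github.com/hyperledger/fabric-ca
cd fabric
make native                               # builds the binaries into build/bin
mv build/bin "$destination_folder"/
mv sampleconfig "$destination_folder"/config
cd ../fabric-ca
make fabric-ca-client                     # builds bin/fabric-ca-client
mv bin/fabric-ca-client "$destination_folder"/bin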
Note: I learned about the lack of M1 support, and that I would have to 'make' my own native files, from hyperledger.org. Full credit for this answer goes to Brett Logan (https://jira.hyperledger.org/browse/FAB-18389?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel&showAll=true).

Related

Tessera - Unable to install from binary distribution

I'm having difficulty installing Tessera from a binary distribution (https://docs.tessera.consensys.net/en/stable/HowTo/Get-started/Install/Distribution/). I have extracted the .tar file from the GitHub link; however, the tessera binary in tessera/bin only ever displays the output of the tessera help command. Despite my running various other commands, such as tessera version and tessera -keygen, it still only displays the help output, and I am unsure why that is the case.
My script:
C:\Program Files> tar xvf tessera-dist-22.1.7.tar
(To extract the tar file from the GitHub repo under docs)
C:\Program Files> cd tessera-22.1.7/bin/ && tessera help
(Shows help)
C:\Program Files> cd tessera-22.1.7/bin/ && tessera version
(Shows help again)
Update: I managed to get the Tessera functions to work properly by running the Tessera image in Docker Desktop.
I used docker run quorumengineering/tessera:latest in Command Prompt / Windows PowerShell, and docker cp container-id:/path "C:\Program Files" to copy my private keys from the Docker container into my local Windows file system.
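For reference, a rough sketch of that workaround (the container name is illustrative, and /path stands for wherever the keys live inside the container, which depends on your Tessera configuration):
docker run --name tessera-tmp quorumengineering/tessera:latest
docker cp tessera-tmp:/path "C:\Program Files"
docker rm tessera-tmp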

Linux and Calamares - how they fit together

I'm working on building an Arch Linux ISO to produce my own custom Linux system. I read about the Calamares installer, which seems very good for building your own customized installer. I prepared my ISO and also my Calamares files, but I don't understand how they fit together. I tried several searches but couldn't find an answer.
For example, assuming that I have my ISO files ready under the /home/user/archiso/airootfs directory (airootfs will be the root of my future system), I started by compiling and installing Calamares, following several docs; in my case:
$ git clone https://github.com/calamares/calamares.git
$ mkdir -p /home/user/calamares/build
$ cd calamares/build
$ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DWITH_PYTHONQT=ON -DAppStreamQt_DIR=... ..
$ make
$ sudo make install
and I get the Calamares modules in calamares/build/src/modules and the branding in calamares/build/src/branding, and I customized these files according to my needs.
Now that I have Calamares and my custom ISO ready, how, in general, can the airootfs be tied to the Calamares installer, so that when I run the ISO on a new machine the Calamares installer starts automatically at login?
I tried to edit the build/src/modules/unpackfs/unpackfs.conf file by adding:
unpack:
    - source: /home/user/archiso_build/iso/arch/x86_64/airootfs.sfs
      sourcefs: ext4
but I didn't get the expected behavior, because when I build the ISO with mkarchiso and mount it in a new VM:
the Calamares installer does not start automatically (maybe there is a specific command that I should store somewhere);
when I start Calamares manually, I get errors in the GUI: users#users module not found, bootloader#bootloader module not found, and so on.
How do these two entities communicate with each other during the creation of the ISO by mkarchiso?
Source: https://github.com/calamares/calamares/wiki/Develop-Guide
Thanks

Is it possible to generate linux .rpm packages from flutter linux app?

As far as I know, Flutter for Linux only targets the snap packaging format.
Is it possible to generate .rpm and .deb (cross-distribution Linux) software packages from the Flutter build?
Kindly post any help on how to package a Flutter Linux app as an RPM package.
Building RPMs and DEBs is doable, but it's a pretty involved process. I will try to outline the basic process for RPMs as best I can. The process of making a DEB is mostly the same with a few differences; I will stick to RPMs for now.
The main pain point is that to build packages you need specific tools which are only available on the respective distros. So if you want to do this cross-platform (generate an RPM on an Ubuntu machine, for example), we need to use Docker.
Create a Dockerfile in which we install the rpm-build package, which contains all the tools needed to build RPMs.
FROM centos:7
RUN yum install -y -q rpm-build
Build this Dockerfile and remember the image name; we will need it later.
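For example (the image name rpmbuilder is illustrative; run this in the directory containing the Dockerfile):
docker build -t rpmbuilder .
rpmbuilder then stands in for {name of image} in the commands below.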
Execute the following command: mkdir -p build/{BUILD,RPMS,SOURCES,SPECS,SRPMS}. This creates the directory structure required by rpmbuild.
Create a .spec file, which is the config file for the rpmbuild command, and place it in the build/SPECS directory. The contents of this file are very specific to what the package has to do. RPMs are very flexible and can do lots of things, ranging from just copying files to running complex bash scripts on the target machine to perform compilation and complex installations. Here are some guides which I found useful: package guide, fedora guide, and redhat guide.
Download the files you want to package (often they are distributed as tarballs) and place them in the build/SOURCES directory.
Now we can execute the following command: docker run --rm -v $(pwd)/build:/rpmbuild {name of image} /bin/bash -c "cd /rpmbuild && rpmbuild --define '_topdir /rpmbuild' -ba SPECS/flutter.spec"
I will break the command down.
docker run --rm -v $(pwd)/build:/rpmbuild {name of image} - we start a container from the image we created earlier and mount the build dir, in which our .spec and .tar.gz are located, so the container can see them. --rm cleans up the container after we are done, since we don't need it after this one command.
/bin/bash -c - this is a trick, since we need to execute two commands inside the Docker container; if we don't do this, our shell will think the && applies after the docker command instead of being passed to the container.
"cd /rpmbuild && rpmbuild --define '_topdir /rpmbuild' -ba SPECS/flutter.spec" - move to the mounted build directory and build the RPM package. The -ba option tells rpmbuild to build both the binary and source packages, in case you want the source package as well.
If all went well, you should now have an .rpm file in the build/RPMS directory and a source package in the build/SRPMS directory.
For DEB the process is almost the same, except you need a Debian or Ubuntu Docker image, you use the dpkg-deb command to build, and you need a control file instead of a .spec file (same purpose, different format).
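As a rough sketch of that DEB flow (the package name, paths, and control fields here are illustrative, not taken from the steps above):
mkdir -p my-app_1.0.0/DEBIAN
cat > my-app_1.0.0/DEBIAN/control <<'EOF'
Package: my-app
Version: 1.0.0
Architecture: amd64
Maintainer: Your Name <you@example.com>
Description: My application
EOF
# place the files to be installed under my-app_1.0.0/usr/..., then:
dpkg-deb --build my-app_1.0.0   # produces my-app_1.0.0.deb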
I will also go over some of the key parts of building RPM packages for Flutter applications, in case the previous answer was ambiguous.
I have already written an article going into the details, but here I will only highlight the key parts. You can find the article here.
Also, for simplicity of explanation, I will use cool-app as an example throughout this post.
1- Run flutter build linux to get the Linux build inside the build folder in your project root directory.
2- Copy your bundle folder somewhere else and rename it there according to the app-name-version convention, e.g. cool-app-1.0.0.
3- Create a .desktop file, e.g. cool-app.desktop, which will look like the following:
[Desktop Entry]
Name=My Cool App
Comment=A cool app that does everything
Exec=/usr/bin/cool-app/cool-app
Icon=/usr/bin/cool-app/data/flutter_assets/assets/icon.svg
Terminal=false
Type=Application
Categories=Utility;
Change the properties accordingly and place the .desktop file inside the cool-app-1.0.0 directory.
4- Run mkdir -p rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS} in your home directory.
5- Copy the cool-app-1.0.0 directory to ~/rpmbuild/SOURCES.
Due to a problem that, at the time of writing, still persists, the rpath of the lib*_plugin.so files points to the user's build tree path, which results in an rpmbuild failure.
6- Run patchelf --print-rpath * inside cool-app-1.0.0/lib and check the output. If the output contains a path from your home directory, run patchelf --set-rpath '$ORIGIN' * to fix the rpaths, then check again; now all paths must be $ORIGIN. (This was the way I was able to fix it; I'm not sure if it's the best solution.)
You can check this GitHub issue for further information.
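If your patchelf build rejects multiple file arguments (older releases only accepted one, so the wildcard form above is an assumption about your version), the equivalent per-file loop is:
for so in lib*_plugin.so; do
    patchelf --set-rpath '$ORIGIN' "$so"
done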
7- cd into ~/rpmbuild/SOURCES and run tar --create --gzip --file cool-app-1.0.0.tar.gz cool-app-1.0.0 to create the tar.gz file (without --gzip, tar would create an uncompressed archive despite the .tar.gz name).
8- Create a file named cool-app.spec inside the ~/rpmbuild/SPECS directory.
Sample cool-app.spec file :
Name: cool-app
Version: 1.0.0
Release: 1%{?dist}
Summary: Very cool app
BuildArch: x86_64
URL: https://github.com/CoolDev/cool-app
License: GPLv3
Source0: %{name}-%{version}.tar.gz
Requires: bash
%description
A very cool app that does everything
%prep
%setup -q
%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/%{_bindir}/%{name}
cp %{name} $RPM_BUILD_ROOT/%{_bindir}/%{name}
cp -R lib $RPM_BUILD_ROOT/%{_bindir}/%{name}
cp -R data $RPM_BUILD_ROOT/%{_bindir}/%{name}
desktop-file-install %{name}.desktop
%clean
rm -rf $RPM_BUILD_ROOT
%files
%{_bindir}/%{name}
/usr/share/applications/
You can use the same template and only change Name, Version, Release, BuildArch, URL, License, Summary and %description, and you will most likely be fine.
I went over the spec file in more detail in my article mentioned at the beginning of this post.
9- Run rpmbuild -bb cool-app.spec from the ~/rpmbuild/SPECS directory to get your RPM file inside the ~/rpmbuild/RPMS directory.
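To sanity-check the result, something like this should list the packaged files (the exact file name depends on the Release and dist tag from the spec above):
rpm -qlp ~/rpmbuild/RPMS/x86_64/cool-app-1.0.0-1*.x86_64.rpm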

Cannot run the command to install Code Composer

Ubuntu Version: 19.10
Code Composer Version: CCS10.1.0.00010_linux-x64
I'm trying to install Code Composer Studio on Linux, but the README inside the Code Composer archive says this:
Code Composer Studio for Linux README
-----------------------------------------
Please note that use of anti-virus software or a firewall may cause issues
during installation. McAfee software in particular has been especially
troublesome. If possible, disable anti-virus and firewall software during
installation. If not, use of the Offline Installer is strongly recommended.
If you are using an Offline CCS Installer, the tar.gz file in which it is
distributed must first be extracted:
1. tar xfz CCSxx.x.x.xxxxx_linux-x64.tar.gz
2. cd CCSxx.x.x.xxxxx_linux
If you are using the Online/Web installer you need to extract the respective tar file
1. tar xfz CCSxx.x.x.xxxxx_web_linux-x64.tar.gz
Once extraction has successfully completed, proceed with the installation
steps below.
To install Code Composer Studio:
1. Execute ccs_setup_xx.x.x.xxxxx.run
2. Choose where you want to install.
3. Select the components you wish to install.
I extracted the file using the command
tar xfz CCS10.1.0.00010_linux-x64.tar.gz -C ../Documents/CodeComposer
I navigated to the CodeComposer directory and used ls to list the files, and I saw the file that the README says to run.
So I ran the command
ccs_setup_10.1.0.00010.run
and this appeared:
zsh: command not found: ccs_setup_10.1.0.00010.run
Can you do ./ccs_setup_10.1.0.00010.run instead?
ccs_setup... is not on your PATH, so you need to give it an explicit relative path so the shell knows where to look.
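A sketch of what that looks like (the extraction path follows the question; the chmod is only needed if the execute bit was lost during extraction):
cd ~/Documents/CodeComposer
chmod +x ccs_setup_10.1.0.00010.run
./ccs_setup_10.1.0.00010.run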

Docker - /bin/sh: <file> not found - bad ELF interpreter - how to add 32bit lib support to a docker image

UPDATE – Old question title:
Docker - How to execute unzipped/unpacked/extracted binary files during docker build (add files to docker build context)
--
I've been trying (for half a day :P) to execute a binary extracted during a docker build.
My Dockerfile contains roughly:
...
COPY setup /tmp/setup
RUN \
unzip -q /tmp/setup/x/y.zip -d /tmp/setup/a/b
...
Within directory b is a binary file, imcl.
The error I was getting was:
/bin/sh: 1: /tmp/setup/a/b/imcl: not found
What was confusing was that listing directory b (inside the Dockerfile, during the build) right before trying to execute the binary showed the correct file in place:
RUN ls -la /tmp/setup/a/b/imcl
-rwxr-xr-x 1 root root 63050 Aug 9 2012 imcl
RUN file /tmp/setup/a/b/imcl
ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
Being a Unix noob, at first I thought it was a permission issue (the root of the host being different from the root of the container, or something) but, after checking, the UID was 0 for both, so it got even weirder.
Docker asks you not to use sudo, so I tried su combinations:
su - -c "/tmp/setup/a/b/imcl"
su - root -c "/tmp/setup/a/b/imcl"
Both of these returned:
stdin: is not a tty
-su: /tmp/setup/a/b: No such file or directory
Well heck, I even went and defied Docker recommendations and changed my base image from debian:jessie to the bloatish ubuntu:14.04 so I could try with sudo :D
Guess how that turned out?
sudo: unable to execute /tmp/setup/a/b/imcl: No such file or directory
Randomly googling, I happened upon a piece of the Docker docs which I believed was the reason for all this head-bashing:
"Note: docker build will return a no such file or directory error if the file or directory does not exist in the uploaded context. This may happen if there is no context, or if you specify a file that is elsewhere on the Host system. The context is limited to the current directory (and its children) for security reasons, and to ensure repeatable builds on remote Docker hosts. This is also the reason why ADD ../file will not work."
So my question is:
Is there a workaround to this?
Is there a way to add extracted files to docker build context during a build (within the dockerfile)?
Oh, and the machine I'm building this on is not connected to the internet...
I guess what I'm asking is similar to this (though I see no answer there):
How to include files outside of Docker's build context?
So am I out of luck?
Do I need to unzip with a shell script before sending the build context to the Docker daemon, so that all files are present exactly as they should be used during the build?
UPDATE:
Meh, the build context actually wasn't the problem. I tested this and was able to execute unpacked binary files during docker build.
My problem is actually this one:
CentOS 64 bit bad ELF interpreter
Using debian:jessie and ubuntu:14.04 as base images only gave the No such file or directory error, but trying centos:7 and fedora:23 gave a better error message:
/bin/sh: /tmp/setup/a/b/imcl: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
So that led me to the conclusion that the actual problem is running a 32-bit application on a 64-bit system.
Now the solution would be simple if I had internet access and repos enabled:
apt-get install ia32-libs
Or
yum install glibc.i686
However, I don't... :[
So the question now becomes:
What would be the best way to achieve the same result without repos or an internet connection?
According to IBM, the precise libraries I need are gtk2.i686 and libXtst.i686, and possibly libstdc++:
[root@localhost]# yum install gtk2.i686
[root@localhost]# yum install libXtst.i686
[root@localhost]# yum install compat-libstdc++
UPDATE:
So the question now becomes:
What would be the best way to achieve the same result without repos or an internet connection?
You could use various non-official 32-bit images available on Docker Hub; search for debian32, ubuntu32, fedora32, etc.
If you can't trust them, you can build such an image yourself, and you can find instructions on Docker Hub too, e.g.:
on f69m/ubuntu32 home page, there is a link to GitHub repo used to generate images;
on hugodby/fedora32 home page, there is an example of commands used to build the image;
and so on.
Alternatively, you can prepare your own image based on some official image and add 32-bit packages to it.
Say, you can use a Dockerfile like this:
FROM debian:wheezy
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y ia32-libs
...and use the produced image as a base (with the FROM directive) for the images you're building without internet access.
You can even create an automated build on Docker Hub that will rebuild your image automatically whenever your Dockerfile (hosted, say, on GitHub) or the mainline image (debian in the example above) changes.
No matter how you obtained an image with 32-bit support (an existing non-official image or one you built yourself), you can then store it in a tar archive using the docker save command and later import it using the docker load command.
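A sketch of that transfer (the image and file names are illustrative):
docker save -o ubuntu32.tar my/ubuntu32-image
# copy ubuntu32.tar to the offline machine, then:
docker load -i ubuntu32.tar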
You're in luck! You can do this using the ADD instruction. The docs say:
If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory... When a directory is copied or unpacked, it has the same behavior as tar -x: the result is the union of:
1. Whatever existed at the destination path and
2. The contents of the source tree, with conflicts resolved in favor of "2." on a file-by-file basis.
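A minimal sketch of applying this to the question's layout (note that ADD auto-extracts tar archives but not .zip files, so the archive would first have to be repacked as a tarball on a machine that has it):
# on the host: unzip -q y.zip -d y && tar -czf y.tar.gz -C y .
# then, in the Dockerfile:
ADD setup/x/y.tar.gz /tmp/setup/a/b/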
