What are binaries in hyperledger? - hyperledger-fabric

I could not find any info on what a binary is in hyperledger-fabric.
This is used here
I already searched online but got nothing.
Can someone please explain?

Binary is a pretty standard computer term. It refers to an executable file on a computer system. It's called a binary because it is a file of machine-code bytes rather than human-readable text.

If you have downloaded the samples, the binaries can be found in the bin directory, which will be up one level from the sample blockchain projects. There you will see all of the programs that are called when building and running a network.
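For example, a quick look (assuming the standard fabric-samples layout; the exact set of binaries varies by Fabric release):

    $ ls fabric-samples/bin
    configtxgen  configtxlator  cryptogen  orderer  peer
    $ file fabric-samples/bin/peer
    fabric-samples/bin/peer: ELF 64-bit LSB executable, x86-64, ...

configtxgen, cryptogen and friends are exactly the programs the sample scripts call when building and running a network.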

Related

Query Windows Active Directory using Python 2.7 from Windows OR Linux device

I need to write some code that will run on both Windows and Linux, which I can use to query a Windows Active Directory domain for users or computers. The queries will be relatively small. I'm used to using DSQUERY for such things on Windows, but I need the solution to be cross-platform and written in Python 2.7.
I've seen some examples on the web but everything I've read refers to installing LDAP code on Linux to make it work (which really isn't an option for me since I won't own the hosts the code will be running on).
I also found the PYAD library, but as far as I can tell it relies on being on a Windows box and having PYWIN32 installed.
Ideally I'd like one piece of code that can run on either platform.
I'm not really looking for code examples per se (though if you want to drop some, that's fine); I really just need a lead.
Thanks in advance
Unfortunately, LDAP is going to be your best bet. From Protocols and Interfaces to Active Directory, it states:
Core protocol that is supported by Active Directory, as described in
RFC 2251 (LDAPv3) and RFC 1777 (LDAPv2).
The python-ldap API works across multiple platforms.
You can see all protocols that Active Directory supports here.
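For what it's worth, a minimal sketch using python-ldap (Python 2.7 syntax, as in the question; the server, credentials and search base are placeholders for your own domain):

    import ldap

    conn = ldap.initialize("ldap://dc1.example.com")
    conn.set_option(ldap.OPT_REFERRALS, 0)  # AD emits referrals; don't chase them
    conn.simple_bind_s("user@example.com", "password")

    # Find a user by logon name and fetch a couple of attributes
    results = conn.search_s("dc=example,dc=com", ldap.SCOPE_SUBTREE,
                            "(&(objectClass=user)(sAMAccountName=jdoe))",
                            ["cn", "mail"])
    for dn, attrs in results:
        if dn:  # skip referral entries, which come back with dn=None
            print dn, attrs
    conn.unbind_s()

The same code runs on Windows and Linux, provided python-ldap itself is installed.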

Is it sensible to build an application with static linking on linux?

I need to build an application that runs on an embedded, vendor-supplied version of Linux. According to the documentation it has libc version 2.8.90. I have built a simple C++ application on a desktop and copied the binary across to the hardware, along with copies of the libraries it is linked against. To remove any potential conflicts from linking against different versions of libraries, I considered linking statically. After some research I found the following question and answers, and reading through them gave me the impression that static linking is not a good thing to do. What I could not find there (or anywhere else so far) was a simple explanation of why it seems to be frowned upon. To me (pretty much a novice to Linux) it looks like a way of solving my problem of bundling my executable as a single package and running it on my hardware, but it is clearly considered a bad idea. Can someone please explain why?
Obviously I am aware that it would bloat my binary, but I am not worried about that. I am also aware of the licensing issues, but I am not particularly concerned with that aspect; this is not a commercial application, so I do not think it applies to me.
The advantages are, as you expect, a single binary that works without having to install the other dependencies and which you can easily move around.
The disadvantages are the size and the need to recompile the entire application if there's an update (e.g. a security fix) to the linked library and perhaps licensing issues (as you've noted).
Tradeoffs. If it solves your problem, go for it.
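If you do go for it, the mechanics are simple (standard GCC flags; whether a static variant of every library you need actually exists depends on your toolchain):

    # Fully static binary:
    g++ -static -o myapp main.cpp

    # Or keep glibc dynamic but link the C++ runtime statically:
    g++ -static-libstdc++ -static-libgcc -o myapp main.cpp

    # Check what the result still needs at runtime:
    ldd myapp   # a fully static binary prints "not a dynamic executable"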

TrueCrypt alternative with API

I am searching for a TrueCrypt alternative that has an API to programmatically access the files. Does anyone know a solution?
The API should support the listing, creating, changing and deleting of files.
DiskCryptor does not have an API, but it is GPL.
If I may, I believe what you are asking for is an abstract file system library. I understand that you want to load a TrueCrypt or similar container and list its contents. When it is opened, such a container is just raw bytes representing sectors. On top of the encryption, such an API would see only raw sectors, and it would have to make sense of them with a corresponding sector-level API.
You can look at the problem another way: how would you write a program, such as zip, that can present this kind of information about a zip file, a very common container if you will?
So the API you are looking for would need to achieve three things (sketched just below):
Understand the container's encryption scheme (possibly multiple versions of it)
Understand the sector format of the embedded filesystem
Provide a user-friendly API.
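To make the layering concrete, here is a rough stub sketch (every name in it is invented for illustration; it describes no real library):

    class EncryptedContainer(object):
        """Layer 1: understands the container's encryption scheme."""
        def __init__(self, path, password):
            self.path, self.password = path, password
        def read_sector(self, n):
            raise NotImplementedError("decrypt raw sector n (e.g. XTS is keyed by sector number)")

    class SectorFilesystem(object):
        """Layer 2: interprets decrypted sectors as a filesystem (FAT, NTFS, ...)."""
        def __init__(self, container):
            self.container = container
        def list_files(self, directory="/"):
            raise NotImplementedError("walk directory structures rebuilt from sectors")

    class FileApi(object):
        """Layer 3: the friendly list/create/change/delete interface the question asks for."""
        def __init__(self, fs):
            self.fs = fs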
I asked myself the same questions a while ago, scoured the net for answers, and this answer is the sum of what I have found so far. I hope you find it a valid answer, even if it's not actionable.
Not yet, anyways ;)
Our SolFS OS Edition might be what you are looking for if you plan to create new software. It's available for Windows, Mac OS X, Linux and FreeBSD.
Java Filesystem Provider with integrated encryption: https://github.com/cryptomator/cryptofs

Packaging proprietary software for Linux

I'm doing cross-platform development and I want to build a nice, self-contained (!) package for Linux. I know that that's not the way it's usually done, but the application requires all data in one place, so I'm installing it into /opt, like many other proprietary software packages do. I will eventually provide deb and rpm packages, but it will only be .tar.gz for now. The user should extract it somewhere and it should work. I'd rather not have an installer.
First my questions, then the details:
How do other people package proprietary software for Linux?
Are there tools for packaging software including shared libraries?
Now for some details: This is my project's (I call it foo for this purpose) layout:
foo (binary)
config.ini
data
Now in the package, there will be two additional elements:
libs
foo.sh
libs will contain all the shared libraries the project requires, and foo.sh is a script that sets LD_LIBRARY_PATH to include libs. Therefore, the user will execute foo.sh and the program should start.
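A minimal sketch of such a foo.sh (assuming the layout above):

    #!/bin/sh
    # Resolve the directory the script lives in, so the package is relocatable
    DIR="$(cd "$(dirname "$0")" && pwd)"
    # Prepend the bundled libs, then replace this shell with the real binary
    export LD_LIBRARY_PATH="$DIR/libs${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$DIR/foo" "$@"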
I have a shell script that packages the software in the following steps (sketched below):
Create empty directory and copy foo.sh to it
Invoke the build process and make install into the new directory
Copy shared libs from the filesystem
Package everything as .tar.gz
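Roughly, as a sketch (the dependency name and version are placeholders):

    #!/bin/sh
    VERSION=1.0.0                                # duplicated from the source, see below
    PKG=foo-$VERSION
    rm -rf "$PKG" && mkdir -p "$PKG/libs"
    cp foo.sh "$PKG/"
    make install DESTDIR="$PWD/$PKG"             # or the CMake equivalent
    cp /usr/lib/libsomething.so.1 "$PKG/libs/"   # hypothetical shared lib
    tar czf "$PKG.tar.gz" "$PKG"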
What do you think of this? There are some problems with this approach:
I have to hard code all dependencies twice (once in CMake, once in the packaging script)
I have to define the version number twice (once in the source code, once in the packaging script)
How do you do it?
Edit:
Another question that just came up: how do you determine which libraries your software depends on? I did an ldd foo, but there's an awful lot of them. I looked at how World of Goo packages look, and they ship only very few libraries. How can I make assumptions about which libraries will be present on a user's system and which won't? Just install all targeted distributions into a virtual machine and see what's required?
Generic issues
Packaging your stuff (together with its dependent libs) into /opt is indeed how proprietary (and even open-source) software is commonly packaged. It's the recommended practice of the Linux Foundation (see my answer to the other question for links).
External libs may be either compiled from scratch and embedded into your build process as a separate step (especially if you modify them), or fetched from packages of some distributions. The second approach is easier, but the first one allows more flexibility.
Note that it's not necessary to include really low-level libraries (such as glibc or Xorg) in your package. They are better left to system vendors to tune, and you may just assume they exist. Moreover, there's the Linux Standard Base, which documents the most important libraries; these libraries exist almost everywhere and can be trusted.
Note also that if you compile under a newer system, users of older systems most likely won't be able to run your binary, while the reverse is not true. So, for better compatibility, it can be useful to build your package on a system a couple of years older than current ones.
I have only outlined some generic points, but I believe the Linux Developers Network website contains more information about packaging and portability.
Packaging
Judging by what I saw in the open-source distribution projects, your script does it the same way distribution vendors package software. Their scripts automatically patch sources, mimic installation of software and package the resultant folders into DEBs and RPMs.
A tar.gz could, of course, also work, but creating an RPM, for example, is not complex, so don't miss such an easy opportunity to make life so much easier for your users.
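For a sense of scale, a bare-bones illustrative foo.spec (a real one also needs %build and %install sections matching your build):

    Name:      foo
    Version:   1.0.0
    Release:   1
    Summary:   Self-contained build of foo
    License:   Proprietary

    %description
    Installs foo, its data and bundled libraries under /opt/foo.

    %files
    /opt/foo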
Answering your questions,
Yes, you have to hard-code dependencies twice.
The thing is that when you hard-code them in CMake, you specify them in different terms than in a packaging script: CMake refers to shared libraries and header files, while a packaging script refers to packages.
There's no cross-distribution one-to-one relationship between package names and shared libs or headers; it varies from distribution to distribution. Therefore it has to be specified twice.
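For example, one and the same dependency in the two vocabularies (real names, but package names differ per distribution):

    # CMake (build time): link against the shared library, headers must be present
    target_link_libraries(foo PRIVATE z)

    # Debian packaging (run time):  Depends: zlib1g
    # Fedora packaging (run time):  Requires: zlib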
But the package can easily be re-packed by distribution vendors, especially if you strive to pack all dependent libs into it (so there are fewer external dependencies to port). Also, a tool that can port packages from one distribution to another will appear soon (I'll update my answer when it's released).
Yes, you have to specify your version twice.
But the thing is that you may organize your packaging process in such a way that package and software versions never get out-of-sync. Just make the packaging script check out from your repository (or download from your website) exactly the same version that the script will write to package specifications.
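For example, a sketch that derives the version from the repository so the two can never drift (assuming version tags in git and a hypothetical foo.spec.in template):

    VERSION=$(git describe --tags)                  # e.g. v1.0.0
    sed "s/@VERSION@/$VERSION/" foo.spec.in > foo.spec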
Analyzing Dependencies
To analyze dependencies of your software, you may use our open-source, free Linux Application Checker tool. It will report the list of libraries it depends on, show distributions your software is compatible with, and help your application be more portable across distributions. It turns out that sometimes more cross-distribution compatibility can be achieved by little effort, and you don't have to lock yourself into support of just a few selected distributions.
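For a quick first pass without any extra tooling, you can also list only the direct dependencies of your binary with standard binutils commands (ldd, by contrast, prints the transitive closure, which is why its output is so long):

    $ objdump -p foo | grep NEEDED
    # or
    $ readelf -d foo | grep NEEDED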
Think long and hard (or ask your product development department) which distributions / architectures you need to support.
Make sure that they fully understand the testing implications.
I expect you will come up with a very short list of supported distributions and architectures.
It really depends on which customers are paying for Linux support. Most people use Red Hat Enterprise Linux (on servers) or CentOS (which is indistinguishable from it from a technical perspective).
If you only need to support Redhat, you only need to support RPM, job's a good'un.

Linux lib / include organization for cross-compiled libraries?

We are cross-compiling an application for an embedded Linux target under desktop Linux. For testing and other purposes we are using statically linked libraries with our application. The testing library we are using is CMockery.
My question is: Where should the static libraries and include files for CMockery live, given that we are cross-compiling?
If we weren't cross-compiling, things would go in /usr/local/lib.
Some suggestions from our team have been:
/opt/google/lib and /opt/google/include
/opt/embeddedLinuxDistro/usr/local/share/google/lib (and include)
/usr/local/arch/lib (and include)
Any pointers appreciated!
Note: After writing this answer, my summary would be:
Keep anything that is non-standard to the Linux distro you're using separate. In fact, keep files for different projects separate even if they share libraries. This will make it much easier to move your files to another machine, to set up multiple complete builds for testing, and, most importantly, to recreate the build from scratch.
The decision is really subjective.
Do you just need one copy of the library for all users?
Does it rarely change?
If your build machine caught fire and you had no backups of that machine, how quickly and easily could you re-build your environment of libraries and cross-compilers?
I ask these questions, because if the library changes often or different users may need different versions, you're better off having it be portable. That is, you can specify in your build where to find the files.
Of your team's suggestions, I would lean towards a path that contains a reference to your project. This will make it easier a year from now (when someone asks you to set up another build machine) to reproduce everything.
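For example, a project-scoped layout might look like this (all names purely illustrative):

    /opt/myproject/arm-sysroot/usr/include/cmockery/
    /opt/myproject/arm-sysroot/usr/lib/libcmockery.a

    # and the cross build points at it explicitly:
    arm-linux-gnueabi-gcc test.c \
        -I/opt/myproject/arm-sysroot/usr/include \
        -L/opt/myproject/arm-sysroot/usr/lib -lcmockery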
Lastly, I wouldn't worry about trying to adhere to "standard" library locations, because you're not creating and managing a Linux distribution. Furthermore, most people don't really know anything beyond "/usr/lib" and "/usr/local/lib", and even the people who know those don't know the difference.
Do what's best for your project no matter what that may be.
