How to distribute Python 3 code that depends on external libraries - python-3.x

I wrote a small script in Python 3 that uses numpy, matplotlib, and other libraries, developed with PyCharm CE on my Linux machine.
I used PyCharm to write the code and create the virtual env.
The script works only inside PyCharm because of the dependencies.
A friend of mine wants to use my script on a Windows machine, and I'm not even sure he has Python installed.
How can I run my script outside PyCharm, or how can I activate the virtual env created by PyCharm to run the script?
And how can I create a package or something similar so that my friend, or anyone else, can freely use the script?
Thanks

One way of going about it is to ask your friend to install Python 3.x and pip on his system. Meanwhile, you create a requirements.txt that lists the libraries that need to be installed and their versions, in this format:
dj-database-url==0.5.0
Django==2.2.5
pytz==2019.2
sqlparse==0.3.0
psycopg2>=2.7,<3.0
Then ask your friend to run pip install -r <path to requirements.txt>. This will install all the required libraries, and if there are no OS-specific dependencies, the project should run fine.
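For example, a minimal sketch of both sides (the script name is a placeholder, and the venv path depends on where PyCharm created it):
# On your machine: activate the virtual env PyCharm created, then freeze the dependencies
source /path/to/project/venv/bin/activate   # venv location depends on your PyCharm setup
pip freeze > requirements.txt

# On your friend's machine, after installing Python 3.x and pip:
pip install -r requirements.txt
python my_script.py                         # placeholder script name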
Another way, better suited to bigger projects or ones with OS-level dependencies, is to use a containerization tool such as Docker. Containerization lets you run projects on other machines even when they depend on packages or environments that are only installed on your machine.
For example: imagine I created a Python-based application that depends on multiple packages on my Debian machine. I can build a Docker image using python3.x as the base and install the required packages inside the image at build time; it is fairly simple to do. After that I can push the image to Docker Hub, which is a registry for storing Docker images. Keep in mind that images stored there are publicly available; if that worries you, you can use a private registry such as AWS ECR instead. Once I have pushed the image, anyone with access to it can pull it and spin up a container. A container is an instance of an image that can run the applications/scripts/anything the image was built to do. To spin up containers, they will need Docker installed on their machine.
This way you can share your project and have it run on anyone's machine with as little hassle as possible. They will not need anything other than Docker installed on their machine, and unlike virtual machines, Docker containers are not heavy on your machine.
In your case, using Docker you can build an image (much like an ISO image) with python3.x as the base, install all the required packages such as numpy, matplotlib and the other libraries, copy the scripts the project needs into the image, and push it to Docker Hub or a private registry of your choice. Then you can give your friend access to the image. Your friend will need Docker for Windows installed on his machine in order to spin up a container from the image you provide. That container will run your script, since you installed all the required dependencies into it while building the image.
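A minimal sketch of what such a Dockerfile could look like (the script name, base tag, and image name below are placeholders, not part of the original answer):
# Sketch only -- file names and tags are hypothetical
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY my_script.py .
CMD ["python", "my_script.py"]
You would then build and push it with something like docker build -t <your-dockerhub-user>/my-script . followed by docker push <your-dockerhub-user>/my-script, and your friend runs docker run <your-dockerhub-user>/my-script once Docker for Windows is installed.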
For more info on Docker: https://www.docker.com/

Related

How can I know which Debian libraries Electron needs to run?

What I'm doing
I'm building an Electron-based kiosk app using Balena to run on a Raspberry Pi 4. Balena requires a Dockerfile to build the container that will run my app. In that Dockerfile, I must make sure I install all the libraries needed by Electron. The image I'm using is based on Debian Buster (the default image Balena uses).
What I know
I've found two working examples on GitHub, similar to what I'm trying to do, where I can see which libraries are installed:
https://github.com/Ciantic/balena-electron-example (list of installed libraries)
https://github.com/balena-io/balena-electronjs (list of installed libraries)
And also two files inside the Electron repo that mention required libraries:
https://github.com/electron/electron/blob/77049545050673949b2844f17b3731196947956a/build/install-build-deps.sh#L189-L231
https://github.com/electron/electron/blob/d5ab63b1ead93dcb4e3099fccd4670fe9258ca9c/docs/development/build-instructions-linux.md
What's confusing me
Each list of libraries in the above files is different from the others, and I don't know which one I should follow. Also, the build instructions for Linux don't have any list specific to Debian.
My question
How can I know for sure exactly which libraries I need to install in my Debian-based container so that Electron can run?
Ideally, an answer would include a Dockerfile showing how to install the required libraries.
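For reference, a sketch of what such a Dockerfile might look like; the base image and package list are assumptions drawn from the examples linked above (on Balena you would swap in the Balena-provided Debian Buster base), so the set may need trimming or extending for your Electron version:
# Sketch only -- not a verified minimal dependency list for Electron
FROM debian:buster
RUN apt-get update && apt-get install -y --no-install-recommends \
        libgtk-3-0 libnss3 libxss1 libasound2 libgbm1 \
        libx11-xcb1 libxtst6 libatk-bridge2.0-0 libdrm2 \
    && rm -rf /var/lib/apt/lists/*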

Hyperledger Fabric setup

I would like to set up Hyperledger Fabric on an Ubuntu machine with Docker (docker-compose up). Is it possible to run the chaincode and Node.js code from another system (a Mac), as I already have Go and Node.js ready on the Mac?
Please help me with this query.
You can use the same environment on different systems; that is the main reason to choose Docker and docker-compose.
Just follow the steps, and make sure the versions of the tools match.
To run on another system, you simply build the image of your current Hyperledger setup on the current system (Ubuntu) and use that image on the other system (Mac).
Yes, you can totally do that. Use this example: https://github.com/hyperledger/fabric-sdk-node/tree/master/examples/balance-transfer
Run docker-compose on your Ubuntu machine. Update the app's config.json and /app/network-config.json with the Ubuntu machine's IP and make sure the required ports are open.
Run the app on your Mac.
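A rough sketch of that split, assuming the balance-transfer example layout linked above (paths, ports, and file names are taken from that example and may differ in your setup):
# On the Ubuntu machine: bring up the Fabric network from the example
docker-compose -f artifacts/docker-compose.yaml up -d

# On the Mac: in config.json and network-config.json, replace localhost with
# the Ubuntu machine's IP (e.g. grpc://<ubuntu-ip>:7051 for a peer), then:
npm install
node app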

Should we install a Docker image from Docker Hub at the beginning?

I want to introduce Docker into my development environment.
I wanted to create a Docker image from an existing Linux machine, but I could not find an official method in the Docker documentation.
https://docs.docker.com/learn/
(I know there are some ways on the Internet to create a Docker image, like converting an .iso file to a .tar.gz file. However, they're not official.)
After that, I pulled a Debian image from Docker Hub with the 'docker pull' command.
However, I could not find the exact Debian version I wanted.
So, to get an OS with the correct kernel version and the correct Debian version, should I customize the image after pulling it from Docker Hub?
Is there an official way to create a Docker image from an existing Linux machine?
Sounds like you should be looking at HashiCorp's Packer; it allows you to build your own Docker base images from whatever base you wish.
https://www.packer.io/docs/builders/docker.html
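For illustration, a minimal Packer template using its Docker builder; the base image, package, and repository name are placeholders, and you should check the linked docs for the options your Packer version supports:
{
  "builders": [
    { "type": "docker", "image": "debian:9", "commit": true }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["apt-get update", "apt-get install -y --no-install-recommends curl"]
    }
  ],
  "post-processors": [
    { "type": "docker-tag", "repository": "myuser/my-debian", "tag": "9" }
  ]
}
Running packer build on this template produces a local image tagged myuser/my-debian:9 that you can customize further or push to a registry.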

Which commands of the defined Linux Distribution are available in a Docker container?

I'm new to Docker and understand that the Linux kernel is shared between the host OS and the containers. But I don't really understand how deeply Docker emulates a specific Linux distribution. Let's say we have a simple Dockerfile like this:
FROM ubuntu:16.10
RUN apt-get update && apt-get install -y nginx
It will give me a Docker container with nginx installed in an Ubuntu 16.10 environment, so I should be able to use apt-get as Ubuntu's default package manager. But how deep does this go? Can I assume that typical commands of that distribution, like lsb_release, are available as they would be in a full VM with Ubuntu 16.10 installed?
The reason behind my question is that Linux distributions differ. I need to know which commands are available, for example when I run a container with Ubuntu 16.10 like the one above on a host with a different distribution installed (like Red Hat, CentOS, etc.).
An Ubuntu image in Docker is about 150 MB, so I assume it does not include all the tools of a real installation. But how can I know which ones I can rely on being there?
Base OS images for Docker are deliberately stripped down, and for Ubuntu they are removing more commands with each new release. The image is meant as the base for a dedicated application to run; you wouldn't typically connect to the container and run commands inside it, and a smaller image is easier to move around and has a smaller attack surface.
There isn't a list of commands in each image version that I know of, you'll only know by building your image. But when images are tagged you can assume a future minor update will not break downstream images - a good argument for explicitly specifying a tag in your Dockerfile.
E.g., this Dockerfile builds correctly:
FROM ubuntu:trusty
RUN ping -c 1 127.0.0.1
This one fails:
FROM ubuntu:xenial
RUN ping -c 1 127.0.0.1
That's because ping was removed from the image for the xenial release. If you just used FROM ubuntu then the same Dockerfile would have built correctly when trusty was the latest tag and then failed when it was replaced by xenial.
A container presents you with the same software environment as the non-containerized distribution. It may not have (in fact, probably does not have) all the same packages installed by default, but you can install whatever you need using the appropriate package manager. The availability of software in the container has nothing to do with the distribution running on your host (the Ubuntu image will be the same regardless of whether you are running Docker under CentOS, Fedora, Ubuntu, Arch, etc.).
If you require certain commands to be available, just ensure that they are installed in your Dockerfile.
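For example, to make the two commands mentioned in the question available in a Xenial-based image (lsb-release and iputils-ping are the standard Ubuntu package names):
FROM ubuntu:xenial
# lsb_release is shipped in the lsb-release package, ping in iputils-ping
RUN apt-get update && apt-get install -y --no-install-recommends \
        lsb-release iputils-ping \
    && rm -rf /var/lib/apt/lists/*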
One of the few things that works differently inside a container is that there is typically no service management process running (like init or systemd or whatever), so you cannot start services the same way you can on the host without a little bit of work.

Using containers for Linux applications?

I am experimenting with multiple versions of QEMU.
This involves downloading different versions and variants of source code, and running the usual: configure, make and make install.
The problem is I can't install multiple versions simultaneously because they use the same install script. I need to uninstall (make uninstall) before I install another one. This only works if I have kept the makefile of the installed binaries.
I think what I would like to do is something similar to Python's virtualenv. A standalone Linux user(?) environment for each application that I can easily remove.
Is there such a thing? Or is my approach completely flawed?
I think the best approach for such cases is a Docker container. Docker is a container-based virtualization technology in which you can build a customized Linux-based environment and host your application inside it. That means you have containerized your application, and it is ready to be distributed and run easily.
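As a hypothetical sketch, each QEMU version could get its own image; the version number, target list, and dependency set below are assumptions and may need adjusting for the release you are building:
# One image per QEMU version -- version and package list are assumptions
FROM debian:buster
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential pkg-config python3 wget ca-certificates xz-utils \
        libglib2.0-dev libpixman-1-dev zlib1g-dev \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /build
RUN wget https://download.qemu.org/qemu-4.2.0.tar.xz \
    && tar xf qemu-4.2.0.tar.xz \
    && cd qemu-4.2.0 \
    && ./configure --target-list=x86_64-softmmu \
    && make -j"$(nproc)" \
    && make install
Each version then lives in its own image (e.g. docker build -t qemu:4.2.0 .), and removing one is just docker rmi qemu:4.2.0.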
