PyCharm offers to synchronize imported packages (e.g. openpyxl) with repositories.
Is it good practice to sync these (even though they are imported third-party packages)?
Thanks
The answer is no.
The virtual environment itself need not be replicated: the packages and their versions can be written out with the pip freeze command into a text file called requirements.txt, and that file is what should be shared.
Others can then use this file to install the same libraries.
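A minimal sketch of that workflow (the .venv path is just an example):

    pip freeze > requirements.txt                 # record the exact package versions in use
    # ...then, on another machine or in a fresh checkout:
    python -m venv .venv
    .venv/bin/pip install -r requirements.txt     # recreate the same set of packages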
In a Node.js project, I'm using Go for a critical part that Node isn't well suited to handle. I want to split the Go code into a sockets package and a main package, with sockets containing the structs/interfaces the main package needs to run. The problem I'm having is that, from what I can gather from Go's documentation, I can only use external packages like sockets remotely from github/gopkg. I don't want to split the project into one repository containing the Go code and another containing the Node code. How can I make the sockets package available for main to import locally, while still being able to rebuild the binaries for the two packages whenever their source code changes?
Edit: importing the packages is no longer an issue, but rebuilding the packages when their source changes still remains a problem.
The same thing happened to my team, and we ended up using the vendor directory; it makes it pretty easy to manage all the external packages. That way, whoever checks out your repo will have all the packages inside vendor.
Understanding and using the vendor folder
Please also refer to this page; there are lots of other options out there too:
Golang Package Management Tools
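As a rough sketch of how this can look with a modern Go toolchain (the module path, directory names, and output path below are all hypothetical):

    # repo/
    #   package.json        <- Node side of the project
    #   go/
    #     go.mod            <- "module example.com/myproject" (hypothetical path)
    #     sockets/          <- shared structs/interfaces
    #     main/             <- imports example.com/myproject/sockets
    #     vendor/           <- external dependencies, created by "go mod vendor"
    cd go
    go mod vendor                      # copy external packages into go/vendor
    go build -o ../bin/app ./main      # rebuilds main and sockets whenever their sources change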
I read a bit about NixOS and have been trying it out these last few days, because I got the impression that it would let me configure a Linux system with just one file.
While using it, I installed a bunch of packages with nix-env, so they didn't end up in configuration.nix, but I could simply uninstall them later and add them to configuration.nix by hand. Is there something like npm i -g <package> that would install a package globally so it ends up in configuration.nix and could simply be copied to another machine?
Also, I installed things like zsh and atom, and they each have an entirely different approach to configuration and customization (shell script, JavaScript, Less, etc.).
Is there a way for Nix/NixOS to track the package-specific config too?
Does this already happen and I just don't see it? For example, does the Nix expression of a package know where the package will store its config, and so on?
I mean, it's nice that I can add these packages to the main config and get the same software installed when using it on another PC, but I still see myself writing quite a lot of configuration for the installed packages as well.
If you want packages installed through configuration.nix, then the easiest way to accomplish that is to add them to the environment.systemPackages attribute. Packages listed in there will be available automatically to all users on the machine. As far as I know, there is no shell command available to automate the maintenance of that attribute, though. The only way to manage that list is by editing configuration.nix and manually adding the packages you'd like to have installed.
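As a sketch, the workflow is to edit that attribute and then rebuild (the package names here are just examples):

    # In /etc/nixos/configuration.nix:
    #   environment.systemPackages = with pkgs; [ git htop zsh atom ];
    # Apply the new configuration:
    sudo nixos-rebuild switch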
Nix does not manage package-specific configuration files. As you probably know, NixOS provides such a mechanism for files in /etc, but a similar mechanism to manage config files in $HOME etc. does not exist. The PR https://github.com/NixOS/nixpkgs/pull/9250 on Github contains a concrete proposal to add this capability to Nix, but it hasn't been merged yet because it requires some changes that are controversial.
Nix does not currently offer ways of managing user-specific configuration or language-specific package managers. AFAICT that's because it is very complex and opinionated territory compared to generating configs for sshd etc.
There are, however, Nix-based projects providing solutions to at least some parts of your question. For managing user configuration (zsh etc.), have a look at Home Manager.
I am facing a peculiar problem. Here at our high school we have about 10 donated computers (all the same type: same CPU, same memory, etc.) which are now running Debian after a reinstall. I am trying to teach the pupils some Haskell, which I have learned a little myself. The kids are interested. One problem is that our country is in the third world and the internet is very slow and costly. I installed the basic ghc and ghci on all machines using deb packages (found with apt-rdepends), after downloading all of the deb files once on a single machine over a limited-time free internet connection. It took more than 10 hours to download all the missing GHC deb files.
I want to know if such a trick is possible for cabal. I will download all the required tar or other files once, on one computer, using the costly and slow internet, but I do not want to spend all my money downloading from the internet again on all 10 computers.
I want to show the kids the diagrams and gloss packages, as they are enjoyable and fun.
I am inspired by this gentleman Smith
How should I do this? Is there a way to do this for other packages in general, not just diagrams and gloss?
Thank you and sorry for my bad English.
By default, cabal caches each package it downloads to ~/.cabal/packages (and prefers its cache to re-fetching the package unless you explicitly request a re-fetch). So it should be simple enough to just copy that directory between computers.
This would still require you to build all the packages on each machine. If you would prefer to skip even that step, you could consider directly copying GHC's package database around to each of the machines. This is a bit more delicate, but could save quite some time/power.
The global package database (where you should be installing packages that you want to be shared between users) is in /usr/local/lib/ghc-$version by default, and you should be able to copy that directory around to all your computers as well. You can check that you have installed the packages you want into the global database using ghc-pkg list, which will list all the package/version combos installed, separating them by whether they are installed in the global or user package database.
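A rough sketch of the cache-copying approach, assuming the default ~/.cabal/packages location (adjust paths and package names as needed):

    # On the machine with internet access:
    cabal update
    cabal install gloss diagrams            # fills ~/.cabal/packages while building
    tar czf cabal-cache.tar.gz -C ~ .cabal/packages
    # Carry the tarball to each classroom machine (USB stick etc.), then:
    tar xzf cabal-cache.tar.gz -C ~
    cabal install gloss diagrams            # should resolve from the local cache, not the network
    ghc-pkg list                            # check which packages are in the global vs. user database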
In the past I have done this to get GHC and Cabal working on a machine behind a firewall that "cabal install" couldn't see through.
You can use "wget" to download the latest version of every Hackage package. (Or you might try doing something similar with Stack, but I haven't tried that). Also download https://hackage.haskell.org/packages/index.tar.gz, which is the index file.
Install GHC, cabal and cabal-install, then find the cabal-install configuration file and point it at a local repository containing the index.tar.gz file and the archives of the packages you downloaded. Then hopefully you should find that "cabal install" works from the local repository.
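A sketch of that setup (the /srv/hackage path is hypothetical, and the exact config syntax depends on your cabal-install version, so treat this as an outline rather than a recipe):

    # On the connected machine, mirror the index and the packages you need:
    wget https://hackage.haskell.org/packages/index.tar.gz
    # ...plus each package tarball, following the pattern
    # https://hackage.haskell.org/package/<pkg>-<version>/<pkg>-<version>.tar.gz
    # Copy everything into e.g. /srv/hackage on each school machine, point the
    # cabal-install config (typically ~/.cabal/config) at that directory as a
    # local repository, and then run:
    cabal update
    cabal install gloss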
I built ZeroMQ and Sodium from source and have them installed properly on my development machine, which is just a Pi 2. I have one other machine that I want to make sure these get installed to properly. Is there a proper way to do this other than just copying .a and .so files around?
So, there are different ways of handling this particular issue.
If you're installing all your built-from-source packages into a dedicated tree (maybe /usr/local, or /opt/mypackages) then simply copying files around is a fine solution, using something like rsync. Particularly since you only have two machines, anything more complicated may not be worth the effort.
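A minimal sketch, assuming the libraries were installed under /usr/local and the second machine is reachable as "otherpi" (a hypothetical hostname; you may need root on the receiving side):

    rsync -av /usr/local/lib/ otherpi:/usr/local/lib/
    rsync -av /usr/local/include/ otherpi:/usr/local/include/
    ssh otherpi sudo ldconfig      # rebuild the linker cache so the new .so files are found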
If you're trying to install ZeroMQ and Sodium alongside system-managed files (in, e.g., /usr/lib and /usr/bin)... don't do that. That is, don't try to mix "things installed by packages" with "things installed from source", because that way lies sadness and doom.
That said, a more manageable way of distributing these files would be to build custom packages and then set up a local apt repository, so that you can just apt install the packages on your systems. There are various guides out there for doing this if you want to go down that route. It's a good skill to have in general, especially if you ever want to share your tools with someone else (because it makes it easy for them to install any necessary dependencies).
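A rough sketch of a trivial flat apt repository, assuming you have already built .deb packages for ZeroMQ and libsodium (e.g. with dpkg-buildpackage) and that the directory is copied to or shared with each machine; the package names are placeholders:

    mkdir -p /srv/local-apt
    cp *.deb /srv/local-apt/
    ( cd /srv/local-apt && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz )
    # On each machine that can see that directory:
    echo "deb [trusted=yes] file:/srv/local-apt ./" | sudo tee /etc/apt/sources.list.d/local.list
    sudo apt update
    sudo apt install libzmq-local libsodium-local     # placeholder package names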
I'm developing an application using Python 3. What is the best practice to use third party libraries for development process and end-user distribution? Note that I'm working within these constraints:
Developers in the team should have the exact same version of the libraries.
An ideal solution would work on both Windows and Linux.
I would like to avoid making the user install software before using our own; that is, they shouldn't have to install product A and product B before using ours.
You could use setuptools to create egg files for your libraries, assuming they aren't available in egg form already. You could then bundle the eggs alongside your software, which would need to either install them, or ensure that they were on the import path.
This has some complexities, e.g. if your libraries have C extensions, then your eggs become platform-specific, but in my experience this is the most widely accepted means of 'bundling' stuff in Python.
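A minimal sketch of the egg route (file names and paths are illustrative):

    # Assuming each library has a standard setuptools setup.py:
    python setup.py bdist_egg          # produces dist/<name>-<version>-pyX.Y.egg
    # Ship the resulting .egg files next to your application; at startup either
    # install them or put them on the import path, e.g. (hypothetical filename):
    #   import sys; sys.path.insert(0, "libs/somelib-1.0-py2.7.egg")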
I have to say that this remains one of Python's weaknesses, though; the third-party ecosystem is certainly aimed at developers rather than end-users.
There are no best practices, but there are a few different tracks people follow. With regard to commercial product distribution there are the following:
Manage Your Own Package Server
With regard to your development process, it is typical to have your dev boxes update from a local package server. That allows you to "freeze" the dependency list (i.e. just stop getting upstream updates) so that everyone is on the same version. You can update at particular times and have the developers update as well, keeping everyone in lockstep.
For customer installs you usually write an install script. You can collect all the packages and install your libs, as well as the other dependencies, at the same time. There can be issues with trying to install a new Python, or even any standard library, because the customer may already depend on a different version. Usually you can install into a sandbox to separate your packages from the system's packages. This is more of a problem on Linux than on Windows.
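One common sandboxing approach on the customer's machine is a dedicated virtual environment, so nothing touches the system Python (the paths and entry point here are hypothetical):

    python3 -m venv /opt/myapp/venv
    /opt/myapp/venv/bin/pip install -r requirements.txt    # your libs plus third-party deps
    /opt/myapp/venv/bin/python -m myapp                     # hypothetical entry point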
Toolchain
The other option is to create a toolchain for each supported OS. A toolchain is all the dependencies (up to, but not including, base OS libs like glibc). This toolchain gets packaged up and distributed to both developers AND customers. Best practices for a toolchain are (a rough layout sketch follows the list):
Rename the executable to prevent confusion (e.g. python -> pkg_python).
Don't install into .../bin directories, to prevent accidental usage (e.g. on Linux you can install under .../libexec; /opt is also used, although personally I detest it).
Install your libs in the correct location under lib/python/site-packages so you don't have to use PYTHONPATH.
Distribute the source .py files for the executables so the install script can relocate them appropriately.
The package format should be an OS-native package (Red Hat -> RPM, Debian -> DEB, Windows -> MSI).
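A sketch of what such a toolchain layout might look like (all names are placeholders):

    # /opt/pkg-toolchain/
    #   libexec/pkg_python               <- renamed interpreter, kept off the default PATH
    #   lib/python/site-packages/        <- bundled third-party libraries, no PYTHONPATH needed
    #   share/pkg/app/*.py               <- source .py files, relocated by the install script
    # Packaged as an RPM, DEB, or MSI depending on the target OS.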
For developers, use pip with a requirements file.
For end users, specify the requirements in setup.py.
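A minimal sketch of that split (package names and versions are only examples):

    # requirements.txt -- exact pins for developers:
    #   openpyxl==3.1.2
    #   requests==2.31.0
    pip install -r requirements.txt
    # setup.py -- looser ranges for end users, e.g.
    #   install_requires=["openpyxl>=3.1", "requests>=2.28,<3"]
    pip install .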