Passing files from pre-commit to flake8 when using a custom entry

I am using pre-commit to invoke flake8 with a plug-in flake8-requirements.
The plug-in currently requires flake8 to be invoked in the package root, which, inconveniently, is not the repo root. Per this comment in a pre-commit issue, I have accordingly modified my pre-commit config to this:
- repo: local
  hooks:
  - id: flake8
    name: flake8 src package
    alias: flake8-src
    files: ^src/
    types: [python]
    language: system
    entry: bash -c "cd src && flake8"
This works properly. Unfortunately, the src package is large, and flake8 takes a few seconds to run. So, now pre-commit runs are not snappy.
How can one tweak the entry such that the files from pre-commit (passed as positional args) are passed to flake8?
Update: or am I wrong, and this works already as intended?
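One hedged possibility (untested with flake8-requirements) is to forward pre-commit's positional file arguments through bash -c, stripping the src/ prefix so the paths remain valid after the cd:

```shell
# pre-commit appends the matched files as positional arguments to the
# entry. With `bash -c`, the first word after the script becomes $0,
# so a placeholder `--` is needed before the real arguments, e.g.:
#
#   entry: bash -c 'cd src && flake8 "${@#src/}"' --
#
# "${@#src/}" strips the leading src/ from each path so the names are
# still valid after the cd. Demonstrating the expansion itself:
set -- src/app.py src/util.py   # stand-ins for pre-commit's file list
echo "${@#src/}"                # prints: app.py util.py
```

Note that this only helps if the plug-in can work on a subset of files; a whole-codebase check will still scan everything regardless.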

flake8-requirements seems like not the greatest plugin -- it relies on needing to have access to your entire codebase at once so you can't really get beyond the "it's going to be slow" and have flake8-requirements at the same time.
personally I would split the flake8-requirements check out to a separate check and probably not run it as part of pre-commit (because it is so slow)
also, I noticed you're not using the official flake8 configuration and instead ~reinventing the wheel with a repo: local hook. as such you've unintentionally written a fork bomb :)
disclaimer: I'm the current flake8 maintainer and I created pre-commit

Building the latest version of process as a dependency

In order to be able to cancel a process on Windows, I need to make use of this fix for the process package, which has still not been released. I've tried adding the latest version from GitHub as a dependency in my stack.yaml file:
packages:
- '.'
- location:
    git: https://github.com/haskell/process.git
    commit: 2fb7e739771f4a899a12b45f8b392e4874616b89
  extra-dep: true
But the stack build command fails:
Process exited with code: ExitFailure 1
Logs have been written to: C:\Users\nadalesagutde\Documents\github\capitanbatata\sandbox\racing-turtles\.stack-work\logs\process-1.6.1.0.log
Configuring process-1.6.1.0...
Warning: The 'build-type' is 'Configure' but there is no 'configure' script.
You probably need to run 'autoreconf -i' to generate it.
setup.exe: configure script not found.
The README of process states that autoreconf -i must be run first, but I don't know how to tell stack to do this. Do I need some extra configuration in my stack.yaml file?
Looks like the package's git repo does not include the "configure" script, which is needed to use the package directly. The reason things work when downloading from hackage is that the source distribution does include the configure script. Frustrating! I think this is an atypical design decision for a package that uses configure. I've opened this stack issue: https://github.com/commercialhaskell/stack/issues/3534
Suggested workaround is to clone the repo as a submodule and run autoreconf -i manually.
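That workaround might look like this in stack.yaml, assuming the submodule is checked out under a hypothetical vendor/process directory:

```yaml
packages:
- '.'
# Local checkout (e.g. a git submodule) in which `autoreconf -i`
# has already been run, so the configure script exists:
- location: ./vendor/process
  extra-dep: true
```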

Custom git command autocompletion

I have implemented a custom git command by writing a shell script located in /usr/local/bin. It works fine, but I would like the script to autocomplete branches in the command line, just like git checkout [TAB][TAB]. How could this be done?
EDIT:
Just adding some context: git allows you to very easily add your own commands by creating a script named git-$subcommandName. In my case the command is git-install; the script facilitates checking out a branch, then building, packaging, and installing the source code.
Figured it out. I needed to download git bash completion (here), create a new file in /etc/bash_completion.d with the following contents:
source ~/.git-completion.bash
_git_install ()
{
    __gitcomp_nl "$(__git_refs)"
}
and then exec bash to reload completion scripts.
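For context, the dispatch mechanism itself can be illustrated with a minimal, hypothetical git-install stub (paths and output are made up for the demonstration):

```shell
# Git turns any executable named git-<name> on PATH into `git <name>`.
mkdir -p /tmp/demo-bin
cat > /tmp/demo-bin/git-install <<'EOF'
#!/bin/sh
echo "would check out, build and install: $1"
EOF
chmod +x /tmp/demo-bin/git-install

# Git finds git-install on PATH and passes the remaining arguments:
PATH="/tmp/demo-bin:$PATH" git install feature-branch
# prints: would check out, build and install: feature-branch
```

The completion function _git_install above hooks into the same naming convention: git's completion script looks for a function named _git_<subcommand>.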

How to get cabal and nix work together

As far as I understand, Nix is an alternative to cabal sandboxes.
I finally managed to install Nix, but I still don't understand how it can replace a sandbox.
I understand you don't need cabal when using Nix and the wrapped version of GHC; however, if you want
to publish a package you'll need at some point to package it with cabal. Therefore, you need to be able to write and test your cabal configuration within Nix. How do you do that?
Ideally, I would like an environment similar to a cabal sandbox but "contained" within Nix. Is that possible? In fact, what I would really like is the equivalent of nested sandboxes, as I usually work on projects made of multiple packages.
Update about my current workflow
At the moment I work on 2 or 3 independent projects (P1, P2, P3), each composed of 2 or 3 cabal packages; say for P1: L11 and L12 (libraries)
and E11 (executables). E11 depends on L12, which depends on L11. I mainly split the executables from the libraries because they are private and kept in a private git repo.
In theory, each project could have its own sandbox (shared between its packages). I tried that (a common sandbox for L11, L12 and E11), but it quickly became annoying: if you modify L11, you can't rebuild it because E11 depends on it, so I had to uninstall E11 first to recompile L11.
That might not be exactly the case, but I ran into a similar problem.
This would be fine if I only occasionally modified L11, but in practice I change it more than E11.
As the shared sandbox didn't work, I went back to one sandbox per package. It works, but is less than ideal.
The main problem is that if I modify L11, I need to compile it twice (once in L11, and then again in E11). Also, each time I start a new sandbox, as everybody knows, I need to wait a while for every package to be downloaded and recompiled.
So by using Nix, I'm hoping to be able to set up separate cabal "environments" per project, which would solve all the issues above.
Hope this is clearer.
I do all my development using Nix and cabal these days, and I can happily say that they work in harmony very well. My current workflow is very new, in that it relies on features in nixpkgs that have only just reached the master branch. As such, the first thing you'll need to do is clone nixpkgs from Github:
cd ~
git clone git://github.com/nixos/nixpkgs
(In the future this won't be necessary, but right now it is).
Single Project Usage
Now that we have a nixpkgs clone, we can start using the haskellng package set. haskellng is a rewrite of how we package things in Nix, and is of interest to us for being more predictable (package names match Hackage package names) and more configurable. First, we'll install the cabal2nix tool, which can automate some things for us, and we'll also install cabal-install to provide the cabal executable:
nix-env -f ~/nixpkgs -i -A haskellngPackages.cabal2nix -A haskellngPackages.cabal-install
From this point, it's all pretty much clear sailing.
If you're starting a new project, you can just call cabal init in a new directory, as you would normally. When you're ready to build, you can turn this .cabal file into a development environment:
cabal init
# answer the questions
cabal2nix --shell my-project.cabal > shell.nix
This gives you a shell.nix file, which can be used with nix-shell. You don't need to use this very often though - the only time you'll usually use it is with cabal configure:
nix-shell -I ~ --command 'cabal configure'
cabal configure caches absolute paths to everything, so now when you want to build you just use cabal build as normal:
cabal build
Whenever your .cabal file changes you'll need to regenerate shell.nix - just run the command above, and then cabal configure afterwards.
Multiple Project Usage
The approach scales nicely to multiple projects, but it requires a little more manual work to "glue" everything together. To demonstrate how this works, let's consider my socket-io library. This library depends on engine-io, and I usually develop both at the same time.
The first step to Nix-ifying this project is to generate default.nix expressions along side each individual .cabal file:
cabal2nix engine-io/engine-io.cabal > engine-io/default.nix
cabal2nix socket-io/socket-io.cabal > socket-io/default.nix
These default.nix expressions are functions, so we can't do much right now. To call the functions, we write our own shell.nix file that explains how to combine everything. For engine-io/shell.nix, we don't have to do anything particularly clever:
with (import <nixpkgs> {}).pkgs;
(haskellngPackages.callPackage ./. {}).env
For socket-io, we need to depend on engine-io:
with (import <nixpkgs> {}).pkgs;
let modifiedHaskellPackages = haskellngPackages.override {
      overrides = self: super: {
        engine-io = self.callPackage ../engine-io {};
        socket-io = self.callPackage ./. {};
      };
    };
in modifiedHaskellPackages.socket-io.env
Now we have shell.nix in each environment, so we can use cabal configure as before.
The key observation here is that whenever engine-io changes, we need to reconfigure socket-io to detect these changes. This is as simple as running
cd socket-io; nix-shell -I ~ --command 'cabal configure'
Nix will notice that ../engine-io has changed, and rebuild it before running cabal configure.

Git ignores commit-msg hook

Recently I migrated from openSUSE to CentOS, and since then Git has started to ignore my custom commit-msg hook. It simply doesn't execute it. (I checked by adding a small piece of code to the add_ChangeId function.)
The hook generates a Change-Id hash for every commit.
Git version: 1.8.1.2
The file is located in .git/hooks/
For debugging purposes I even set 0777 permissions on the whole .git directory.
Here is the full text of the commit-msg file: http://pastebin.com/zmYNi0ED
timoras, you are gold. I tried to execute the script using sh .git/hooks/scriptname and it worked, but when I tried to call it as .git/hooks/scriptname, the shell said I don't have permission to execute it.
After that I looked at fstab and found out that I had forgotten to add the exec flag for the partition where this file is located.
Now everything works.
Once more, thanks timoras!
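The symptom above is easy to reproduce: a script that cannot be executed directly (no exec bit, or a noexec mount) still runs fine when passed to sh explicitly. A minimal sketch with an illustrative path:

```shell
cat > /tmp/commit-msg-demo <<'EOF'
#!/bin/sh
echo hook-ran
EOF

# Without execute permission, the interpreter can still run it...
chmod -x /tmp/commit-msg-demo
sh /tmp/commit-msg-demo                       # prints: hook-ran

# ...but direct invocation (which is what git does with hooks) fails:
/tmp/commit-msg-demo 2>/dev/null || echo "cannot execute directly"
```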

Using Jenkins BUILD NUMBER in RPM spec file

Name: My Software
Version: 1.0.5
Release: 1
Summary: This is my software
Not sure if anyone has tried this before or if it is easy, but:
A spec file has two unique indicators for its version:
Version (which specifies software version)
Release (which specifies the package number: if you build an RPM, it's broken, and you build another one, you bump the Release number)
I'm wondering whether anyone has tried, or knows how, to use the Jenkins $BUILD_NUMBER variable to dynamically set the Release number, thereby increasing the Release number every time a new successful build completes...?
It's been a long time... and thankfully I have no rpm based systems so I can't test this.
You can pass parameters to rpmbuild on the commandline
rpmbuild --define "version ${BUILD_NUMBER}"
It would be helpful to post snippets of the spec and the script you're using to build the rpm.
You don't want your build script editing the spec file, which I'm assuming it pulls down from source control.
I've been using the Jenkins build number as the 'release' and packaging via fpm.
Couple fpm with some globals provided by Jenkins
# $BUILD_ID - The current build id, such as "2005-08-22_23-59-59" (YYYY-MM-DD_hh-mm-ss)
# $BUILD_NUMBER - The current build number, such as "153"
# $BUILD_TAG - String of jenkins-${JOB_NAME}-${BUILD_NUMBER}. Convenient to put into a resource file, a jar file, etc for easier identification.
There are some nebulous variables in the example command below, but $BUILD_NUMBER is what I'm using for the release here (fpm calls it iteration instead).
fpm_out=$(fpm -a all -n $real_pkg_name -v $version -t rpm -s dir --iteration $BUILD_NUMBER ./*)
In my Jenkins setup, I've decided to bypass the build number for RPM release numbering completely. Instead, I use a home-made script that generates and keeps track of the various releases being produced.
In my spec file:
Version: %{_iv_pkg_version}
Release: %{_iv_pkg_release}%{?dist}
And in the Jenkins build script:
# Just initialising some variables, and retrieving the release number.
package="$JOB_NAME"

# We use setuptools, so we can query the package version like so.
# Use other means to suit your needs.
pkg_version="$(python setup.py --version)"
pkg_release="$(rpm-release-number.py "$package" "$pkg_version")"

# Creating the src.rpm (ignore the spec file variables)
rpmbuild --define "_iv_pkg_version $pkg_version" \
         --define "_iv_pkg_release $pkg_release" \
         -bs "path/to/my/file.spec"

# Use mock to build the package in a clean chroot
mock -r epel-6-x86_64 --define "_iv_pkg_version $pkg_version" \
     --define "_iv_pkg_release $pkg_release" \
     "path/to/my/file.src.rpm"
rpm-release-number.py is a simple script that maintains a file-based database (in JSON format, for easy maintenance). It can handle being run concurrently, so no worries there, but it won't work if you have build slaves (as far as I can tell; I don't use them, so I can't test). You can find the source code and documentation here.
The result is that I get the following package versioning scheme:
# Build the same version 3 times
foo-1.1-1
foo-1.1-2
foo-1.1-3
# Increment the version number, and build twice
foo-1.2-1
foo-1.2-2
PS: Note that the Jenkins build script is just an example, the logic behind creating the rpmbuild directory structure and retrieving the .src.rpm and .spec file names is a bit more complicated.
Taking into account that the spec file could be third-party, I prefer to do a pre-build sed patch of the Release field:
sed -i 's/^Release:\(\s*\)\(.*\)$/Release:\1%{?_build_num:%{_build_num}.}%{expand:\2}/g' ./path/to/spec
rpmbuild --define "_build_num $BUILD_NUM" -ba ./path/to/spec
Here the %{expand:...} macro is used to handle macro-defined release numbers, like the ones in Mageia specs:
Release: %mkrel 1
The resulting field will be:
Release: %{?_build_num:%{_build_num}.}%{expand:%mkrel 1}
Conditional expansion of the _build_num macro keeps the spec usable for a local build, i.e. when the SRPM is also prepared by the build system. But it could be reduced to:
sed -i 's/^Release:\(\s*\)\(.*\)$/Release:\1'$BUILD_NUM'.%{expand:\2}/g' ./path/to/spec
rpmbuild -ba ./path/to/spec
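The sed transformation itself can be checked in isolation; the sketch below runs it on a sample spec line (file path illustrative):

```shell
# A one-line spec fragment using a macro-defined release, as in Mageia:
printf 'Release: %%mkrel 1\n' > /tmp/demo.spec

# The Release-patching sed from the answer above:
sed -i 's/^Release:\(\s*\)\(.*\)$/Release:\1%{?_build_num:%{_build_num}.}%{expand:\2}/g' /tmp/demo.spec

cat /tmp/demo.spec
# prints: Release: %{?_build_num:%{_build_num}.}%{expand:%mkrel 1}
```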
