Poetry add dependency from requirements.txt fails: NoSuchOptionException The "-r" option does not exist - dependency-management

I want to add Poetry to an existing project. To add the dependencies from requirements.txt, I ran the following command:
poetry add $( cat requirements.txt )
In either case, I get the same error, shown below:
(.venv) bash-3.2$ poetry add $( cat requirements.txt )
Stack trace:
11 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/console_application.py:123 in run
io = io_factory(
10 ~/.poetry/lib/poetry/console/config/application_config.py:221 in create_io
resolved_command = application.resolve_command(args)
9 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/console_application.py:110 in resolve_command
return self._config.command_resolver.resolve(args, self)
8 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/resolver/default_resolver.py:34 in resolve
return self.create_resolved_command(result)
7 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/resolver/default_resolver.py:166 in create_resolved_command
if not result.is_parsable():
6 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/resolver/resolve_result.py:43 in is_parsable
self._parse()
5 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/resolver/resolve_result.py:49 in _parse
self._parsed_args = self._command.parse(self._raw_args)
4 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/api/command/command.py:113 in parse
return self._config.args_parser.parse(args, self._args_format, lenient)
3 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/args/default_args_parser.py:53 in parse
self._parse(args, _fmt, lenient)
2 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/args/default_args_parser.py:103 in _parse
self._parse_short_option(token, tokens, fmt, lenient)
1 ~/.poetry/lib/poetry/_vendor/py3.8/clikit/args/default_args_parser.py:272 in _parse_short_option
self._add_short_option(name, None, tokens, fmt, lenient)
NoSuchOptionException
The "-r" option does not exist.
at ~/.poetry/lib/poetry/_vendor/py3.8/clikit/args/default_args_parser.py:349 in _add_short_option
345│ def _add_short_option(
346│ self, name, value, tokens, fmt, lenient
347│ ): # type: (str, Optional[str], List[str], ArgsFormat, bool) -> None
348│ if not fmt.has_option(name):
→ 349│ raise NoSuchOptionException(name)
350│
351│ self._add_long_option(
352│ fmt.get_option(name).long_name, value, tokens, fmt, lenient
353│ )
For reference, here is my requirements.txt:
# This file is autogenerated by pip-compile with python 3.8
# To update, run:
#
# pip-compile requirements.in
#
click==8.0.3
# via -r requirements.in
gevent==21.8.0
# via grequests
greenlet==1.1.2
# via gevent
grequests==0.6.0
# via -r requirements.in
idna==3.3
# via requests
iniconfig==1.1.1
# via pytest

It looks like Poetry was reading the comments in requirements.txt. The fix is simple: just ignore the comment lines and run the following:
cat requirements.txt | grep -v '#' | xargs poetry add
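The grep -v '#' drops every line containing a '#', which covers the "# via ..." comment lines pip-compile writes. If a requirements file also carries inline comments after a specifier, a small Python variant can strip those too; a minimal sketch, assuming poetry is on your PATH:
import subprocess

# Strip full-line and inline comments plus blank lines, then hand the
# remaining specifiers to a single `poetry add` invocation.
with open("requirements.txt") as f:
    deps = [line.split("#", 1)[0].strip() for line in f]
deps = [d for d in deps if d]

subprocess.run(["poetry", "add", *deps], check=True)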


sed-style multi-address in Python?

I have an app that parses multiple Cisco show tech files. These files contain the output of multiple router commands in a structured way; let me show you a snippet of a show tech output:
`show clock`
20:20:50.771 UTC Wed Sep 07 2022
Time source is NTP
`show callhome`
callhome disabled
Callhome Information:
<SNIPPET>
`show module`
Mod Ports Module-Type Model Status
--- ----- ------------------------------------- --------------------- ---------
1 52 16x10G + 32x10/25G + 4x100G Module N9K-X96136YC-R ok
2 52 16x10G + 32x10/25G + 4x100G Module N9K-X96136YC-R ok
3 52 16x10G + 32x10/25G + 4x100G Module N9K-X96136YC-R ok
4 52 16x10G + 32x10/25G + 4x100G Module N9K-X96136YC-R ok
21 0 Fabric Module N9K-C9504-FM-R ok
22 0 Fabric Module N9K-C9504-FM-R ok
23 0 Fabric Module N9K-C9504-FM-R ok
<SNIPPET>
My app currently uses both sed and Python scripts to parse these files. I use sed to look through the show tech file for a specific command's output, and once I find it, I stop sed. This way I don't need to read the whole file (these can be very big files). This is a snippet of my sed script:
sed -E -n '/`show running-config`|`show running`|`show running config`/{
p
:loop
n
p
/`show/q
b loop
}' $1/$file
As you can see, I am using a multi-address range in sed. My question, specifically, is: how can I achieve something similar in Python? I have tried multiple combinations of the DOTALL and MULTILINE flags, but I can't get the result I'm expecting; for example, I can get a match for the command I'm looking for, but the Python regex won't stop until the end of the file after the first match.
I am looking for something like this:
sed -n '/`show clock`/,/`show/p'
I would like the regex match to stop parsing the file and print the results immediately after seeing `show` again. I hope that makes sense; thank you all for reading and for your help.
You can use nested loops.
import re
def process_file(filename):
    with open(filename) as f:
        for line in f:
            if re.search(r'`show running-config`|`show running`|`show running config`', line):
                print(line)
                for line1 in f:
                    print(line1)
                    if re.search(r'`show', line1):
                        return
The inner for loop will start from the next line after the one processed by the outer loop.
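This works because both loops share the same file iterator: advancing the inner loop also advances the outer one. A minimal standalone demonstration of the mechanic, with made-up data:
lines = iter(['a', 'b', 'c', 'd'])
for x in lines:
    print('outer:', x)
    for y in lines:
        print('inner:', y)
        if y == 'c':
            break
# prints: outer: a, inner: b, inner: c, outer: d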
You can also do it with a single loop using a flag variable.
import re
def process_file(filename):
    in_show = False
    with open(filename) as f:
        for line in f:
            if re.search(r'`show running-config`|`show running`|`show running config`', line):
                in_show = True
                print(line)
                continue  # don't test the start line against the stop pattern
            if in_show:
                print(line)
                if re.search(r'`show', line):
                    return
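Either version can then be pointed at a show tech file; the filename here is just an example:
process_file('show_tech.txt')  # prints the matched `show running-config` block, then stops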

How to format this code so that flake8 is happy?

This code was created by black:
def test_schema_org_script_from_list():
assert (
schema_org_script_from_list([1, 2])
== '<script type="application/ld+json">1</script>\n<script type="application/ld+json">2</script>'
)
But now flake8 complains:
tests/test_utils.py:59:9: W503 line break before binary operator
tests/test_utils.py:59:101: E501 line too long (105 > 100 characters)
How can I format above lines and make flake8 happy?
I use this .pre-commit-config.yaml
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: 'https://github.com/pre-commit/pre-commit-hooks'
rev: v3.2.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-added-large-files
- repo: 'https://gitlab.com/pycqa/flake8'
rev: 3.8.4
hooks:
- id: flake8
- repo: 'https://github.com/pre-commit/mirrors-isort'
rev: v5.7.0
hooks:
- id: isort
tox.ini:
[flake8]
max-line-length = 100
exclude = .git,*/migrations/*,node_modules,migrate
# W504 line break after binary operator
ignore = W504
(I think it is a bit strange that flake8 reads config from a file which belongs to a different tool).
From your configuration, you've set ignore = W504.
ignore isn't the option you want, as it resets the default ignore list (bringing in a bunch of things, including W503).
If you remove ignore =, both W504 and W503 are in the default ignore list, so they won't be reported.
As for your E501 (line too long), you can either add extend-ignore = E501 or set max-line-length appropriately.
For black, this is the suggested configuration:
[flake8]
max-line-length = 88
extend-ignore = E203
Note that there are cases where black cannot make a line short enough (as you're seeing), both with long strings and with long variable names.
Disclaimer: I'm the current flake8 maintainer.
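Putting that advice together, a tox.ini along these lines should satisfy both tools; this is only a sketch, and whether you keep max-line-length = 100 or adopt black's 88 is your call:
[flake8]
max-line-length = 88
extend-ignore = E203, E501
exclude = .git,*/migrations/*,node_modules,migrate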

Compiling Rocket in Bazel

I'm attempting to get a working prototype of the following scenario:
Language: Rust (rustc 1.45.0-nightly (ad4bc3323 2020-06-01))
Framework: Rocket v0.4.4
Build Tool: Bazel
Platform: Mac OS X / Darwin x64
Running bazel build //web-api yields the error below. Based on looking at the Cargo.lock file, I believe it is because Rocket's dependency on the hyper library specifies a dependency on the log 0.3.9 library. For whatever reason it is not using the more recent log 0.4.x. That said, I don't know why it's pulling in this library, since if I build manually it works fine.
ERROR: /private/var/tmp/_bazel_nathanielford/2a39169ea9f6eb02fe788b12f9eae88f/external/raze__log__0_3_9/BUILD.bazel:27:1: error executing shell command: '/bin/bash -c CARGO_MANIFEST_DIR=$(pwd)/external/raze__log__0_3_9 external/rust_darwin_x86_64/bin/rustc "$@" --remap-path-prefix="$(pwd)"=__bazel_redacted_pwd external/raze__log__0_3_9/src/lib.rs -...' failed (Exit 1) bash failed: error executing command /bin/bash -c 'CARGO_MANIFEST_DIR=$(pwd)/external/raze__log__0_3_9 external/rust_darwin_x86_64/bin/rustc "$@" --remap-path-prefix="$(pwd)"=__bazel_redacted_pwd' '' external/raze__log__0_3_9/src/lib.rs ... (remaining 24 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
error[E0425]: cannot find function `set_logger` in crate `log`
--> external/raze__log__0_3_9/src/lib.rs:731:16
|
731 | match log::set_logger(&ADAPTOR) {
| ^^^^^^^^^^ not found in `log`
|
help: consider importing this function
|
204 | use set_logger;
|
The following is my directory structure:
/
|-WORKSPACE
|-BUILD # Empty
|-web-api/
| |-BUILD
| |-src/
| | |-main.rs
| |-cargo/
| |-Cargo.toml
| |-Cargo.lock
| |-BUILD.bazel
| |-remote/
| |-... (Cargo-raze files)
To set up cargo-raze I did the following, following the instructions from the GitHub page:
$ cd web-api/cargo
$ cargo generate-lockfile
$ cargo vendor --versioned-dirs --locked
$ cargo raze
(The generate-lockfile step creates the Cargo.lock file, and cargo raze creates the BUILD.bazel file and all the contents of the remote subdirectory.)
And then to execute the bazel build I go back to the root and run bazel build //web-api, which produces the above error.
This is my WORKSPACE file:
workspace(name = "rocket-bazel")
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "io_bazel_rules_rust",
sha256 = "f21c67fc2fef9d57fa3c81fde1defd9e57d451883388c0a469ec1c470fd30dcb",
strip_prefix = "rules_rust-master",
urls = [
"https://github.com/bazelbuild/rules_rust/archive/master.tar.gz"
],
)
http_archive(
name = "bazel_skylib",
sha256 = "9a737999532daca978a158f94e77e9af6a6a169709c0cee274f0a4c3359519bd",
strip_prefix = "bazel-skylib-1.0.0",
url = "https://github.com/bazelbuild/bazel-skylib/archive/1.0.0.tar.gz",
)
load("#io_bazel_rules_rust//rust:repositories.bzl", "rust_repositories")
rust_repositories(version="nightly", iso_date="2020-06-02")
load("#io_bazel_rules_rust//:workspace.bzl", "bazel_version")
bazel_version(name = "bazel_version")
load("//web-api/cargo:crates.bzl", "raze_fetch_remote_crates")
raze_fetch_remote_crates()
This is my web-api/BUILD file:
load("#io_bazel_rules_rust//rust:rust.bzl", "rust_binary")
rust_binary(
name = "web-api",
srcs = ["src/main.rs"],
deps = [
"//web-api/cargo:rocket",
],
)
I've run out of ideas as to what to try. I can get this to compile without Bazel, just using Rust (though obviously the files are in slightly different places). I can get it to compile inside a Docker container. I just can't get Bazel (necessarily with cargo-raze, in either vendor or remote mode) to build it successfully: I assume there is some mismatch in the compile target or the nightly build that is not being set properly, but I'm not sure how to diagnose or get past it.
Here is a link to a repository with the files/structure I tried.
I had a similar issue when I made a minimal Bazel workspace with Rust and the log crate together with the env_logger crate. I ran into a similar error when compiling without features = ["std"]. I then tried to enable that feature in Cargo.toml on the log dependency, without success.
My solution is that in Cargo.toml under [raze] I added:
default_gen_buildrs = true
I could trace it down to this: when the default_gen_buildrs flag is not set, the generated BUILD.bazel file for the log crate has no cargo_build_script definition, nor this:
crate_features = [
"std",
],
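For reference, this is roughly where that flag lives in Cargo.toml; the other [raze] keys here are illustrative assumptions for a remote-genmode setup, not values taken from the question:
[raze]
workspace_path = "//web-api/cargo"
genmode = "Remote"
default_gen_buildrs = true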

Read the final value of the MACHINEOVERRIDES variable

While porting the layer for my SoM from the pyro branch to the dunfell one, I encountered some problems related to the COMPATIBLE_MACHINE list in my recipes (BitBake says there is no recipe compatible with my machine).
To investigate this further, I tried to read the actual value of the MACHINEOVERRIDES variable using the bitbake -e command. However, I don't think the list I see is complete, because in the bitbake -e output there are other variable expansions that contribute to the value of the general OVERRIDES variable.
This is the output of the bitbake -e | grep OVERRIDES command run from my build environment:
# $DISTROOVERRIDES [3 operations]
DISTROOVERRIDES="fslc"
# $FILESOVERRIDES [2 operations]
# "${TRANSLATED_TARGET_ARCH}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}"
# [doc] "A subset of OVERRIDES used by the OpenEmbedded build system for creating FILESPATH."
# "${TRANSLATED_TARGET_ARCH}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}"
FILESOVERRIDES="arm:isiot:armv7ve:use-mainline-bsp:isiot-geamx6ul:fslc"
# $MACHINEOVERRIDES [14 operations]
# "PRISTINE_MACHINEOVERRIDES"
MACHINEOVERRIDES="isiot:armv7ve:use-mainline-bsp:isiot-geamx6ul"
# $MACHINEOVERRIDES_EXTENDER_FILTER_OUT
MACHINEOVERRIDES_EXTENDER_FILTER_OUT=" imx mx6 mx6q mx6dl mx6sx mx6sl mx6sll mx6ul mx6ull mx7 mx7d mx7ulp mx8 mx8qm mx8mm mx8mn mx8mq mx8qxp "
# $MACHINEOVERRIDES_EXTENDER_FILTER_OUT_use-mainline-bsp
MACHINEOVERRIDES_EXTENDER_FILTER_OUT_use-mainline-bsp=" imx mx6 mx6q mx6dl mx6sx mx6sl mx6sll mx6ul mx6ull mx7 mx7d mx7ulp mx8 mx8qm mx8mm mx8mn mx8mq mx8qxp "
# $MACHINEOVERRIDES_EXTENDER_mx25
MACHINEOVERRIDES_EXTENDER_mx25="use-mainline-bsp"
# $MACHINEOVERRIDES_EXTENDER_mx6dl
MACHINEOVERRIDES_EXTENDER_mx6dl="imxfbdev:imxpxp:imxipu:imxvpu:imxgpu:imxgpu2d:imxgpu3d:imxepdc"
# $MACHINEOVERRIDES_EXTENDER_mx6q
MACHINEOVERRIDES_EXTENDER_mx6q="imxfbdev:imxipu:imxvpu:imxgpu:imxgpu2d:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx6sl
MACHINEOVERRIDES_EXTENDER_mx6sl="imxfbdev:imxpxp:imxgpu:imxgpu2d:imxepdc"
# $MACHINEOVERRIDES_EXTENDER_mx6sll
MACHINEOVERRIDES_EXTENDER_mx6sll="imxfbdev:imxpxp:imxepdc"
# $MACHINEOVERRIDES_EXTENDER_mx6sx
MACHINEOVERRIDES_EXTENDER_mx6sx="imxfbdev:imxpxp:imxgpu:imxgpu2d:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx6ul
MACHINEOVERRIDES_EXTENDER_mx6ul="imxfbdev:imxpxp"
# $MACHINEOVERRIDES_EXTENDER_mx6ull
MACHINEOVERRIDES_EXTENDER_mx6ull="imxfbdev:imxpxp:imxepdc"
# $MACHINEOVERRIDES_EXTENDER_mx7d
MACHINEOVERRIDES_EXTENDER_mx7d="imxfbdev:imxpxp:imxepdc"
# $MACHINEOVERRIDES_EXTENDER_mx7ulp
MACHINEOVERRIDES_EXTENDER_mx7ulp="imxfbdev:imxpxp:imxgpu:imxgpu2d:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx8mm
MACHINEOVERRIDES_EXTENDER_mx8mm="imxdrm:imxvpu:imxgpu:imxgpu2d:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx8mn
MACHINEOVERRIDES_EXTENDER_mx8mn="imxdrm:imxgpu:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx8mq
MACHINEOVERRIDES_EXTENDER_mx8mq="imxdrm:imxvpu:imxgpu:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx8qm
MACHINEOVERRIDES_EXTENDER_mx8qm="imxdrm:imxdpu:imxgpu:imxgpu2d:imxgpu3d"
# $MACHINEOVERRIDES_EXTENDER_mx8qxp
MACHINEOVERRIDES_EXTENDER_mx8qxp="imxdrm:imxdpu:imxgpu:imxgpu2d:imxgpu3d"
# $OVERRIDES [2 operations]
# "${TARGET_OS}:${TRANSLATED_TARGET_ARCH}:pn-${PN}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}:${CLASSOVERRIDE}${LIBCOVERRIDE}:forcevariable"
# [doc] "BitBake uses OVERRIDES to control what variables are overridden after BitBake parses recipes and configuration files."
# "${TARGET_OS}:${TRANSLATED_TARGET_ARCH}:pn-${PN}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}:${CLASSOVERRIDE}${LIBCOVERRIDE}:forcevariable"
OVERRIDES="linux-gnueabi:arm:pn-defaultpkgname:isiot:armv7ve:use-mainline-bsp:isiot-geamx6ul:fslc:class-target:libc-glibc:forcevariable"
# $PRISTINE_MACHINEOVERRIDES [13 operations]
# rename from MACHINEOVERRIDES machine-overrides-extender.bbclass:49 [machine_overrides_extender_handler]
PRISTINE_MACHINEOVERRIDES="mx6:mx6ul:isiot:armv7ve:imx:use-mainline-bsp:isiot-geamx6ul"
# $SRC_URI_OVERRIDES_PACKAGE_ARCH
overrides = d.getVar('OVERRIDES').split(':')
msg = 'Recipe %s has PN of "%s" which is in OVERRIDES, this can result in unexpected behaviour.' % (d.getVar("FILE"), pn)
compat_machines = (d.getVar('MACHINEOVERRIDES') or "").split(":")
# unless the package sets SRC_URI_OVERRIDES_PACKAGE_ARCH=0
override = d.getVar('SRC_URI_OVERRIDES_PACKAGE_ARCH')
overrides = (":" + (d.getVar("FILESOVERRIDES") or "")).split(":")
overrides = localdata.getVar("OVERRIDES", False) + ":virtclass-multilib-" + multilib
localdata.setVar("OVERRIDES", overrides)
overrides = d.getVar("OVERRIDES").split(":")
machine_overrides = (d.getVar('PRISTINE_MACHINEOVERRIDES') or '').split(':')
machine_overrides_filter_out += (d.getVar('MACHINEOVERRIDES_EXTENDER_FILTER_OUT_%s' % override) or '').split()
extender = d.getVar('MACHINEOVERRIDES_EXTENDER_%s' % override)
# so we can reprocess OVERRIDES if/as/when needed.
d.renameVar("MACHINEOVERRIDES", "PRISTINE_MACHINEOVERRIDES")
d.setVar("MACHINEOVERRIDES", "${#machine_overrides_extender(d)}")
localdata.setVar('OVERRIDES', pkg)
localdata.setVar('OVERRIDES', localdata.getVar('OVERRIDES') + ':' + pkg)
overrides = localdata.getVar("OVERRIDES", False) + ":virtclass-multilib-" + item
localdata.setVar("OVERRIDES", overrides)
localdata.setVar('OVERRIDES', d.getVar("OVERRIDES", False) + ":" + pkg)
Is there a way to get the value of MACHINEOVERRIDES and OVERRIDES variables after those operations occur?
Thanks.
bitbake -e already shows you the variables after the operations.
MACHINEOVERRIDES="isiot:armv7ve:use-mainline-bsp:isiot-geamx6ul" and
OVERRIDES="linux-gnueabi:arm:pn-defaultpkgname:isiot:armv7ve:use-mainline-bsp:isiot-geamx6ul:fslc:class-target:libc-glibc:forcevariable"
bitbake -e first shows how many operations act on a variable, e.g. # $MACHINEOVERRIDES [14 operations], and where they come from, and only after that the final expanded value.
You could use bitbake -e | less to browse the output instead of grepping it.
You can simply use:
bitbake -e <package_name> | grep ^MACHINEOVERRIDES=
bitbake -e <package_name> | grep ^OVERRIDES=
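Run against any recipe (the target name below is just an example), each command prints the single final assignment, matching the values quoted above:
$ bitbake -e core-image-minimal | grep ^MACHINEOVERRIDES=
MACHINEOVERRIDES="isiot:armv7ve:use-mainline-bsp:isiot-geamx6ul"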

Snakemake refuses to unpack input function when rule A is a dependency of rule B, but accepts it when rule A is the final rule

I have a Snakemake workflow for a metagenomics project. At one point in the workflow, I map DNA sequencing reads (either single-end or paired-end) to metagenome assemblies made by the same workflow. Following the Snakemake manual, I made an input function to map both single-end and paired-end reads with one rule, like so:
import os.path
def get_binning_reads(wildcards):
    pathpe = ("data/sequencing_binning_signals/" + wildcards.binningsignal + ".trimmed_paired.R1.fastq.gz")
    pathse = ("data/sequencing_binning_signals/" + wildcards.binningsignal + ".trimmed.fastq.gz")
    if os.path.isfile(pathpe) == True:
        return {'reads': expand("data/sequencing_binning_signals/{binningsignal}.trimmed_paired.R{PE}.fastq.gz", PE=[1,2], binningsignal=wildcards.binningsignal)}
    elif os.path.isfile(pathse) == True:
        return {'reads': expand("data/sequencing_binning_signals/{binningsignal}.trimmed.fastq.gz", binningsignal=wildcards.binningsignal)}
rule backmap_bwa_mem:
    input:
        unpack(get_binning_reads),
        index=expand("data/assembly_{{assemblytype}}/{{hostcode}}/scaffolds_bwa_index/scaffolds.{ext}", ext=['bwt','pac','ann','sa','amb'])
    params:
        lambda w: expand("data/assembly_{assemblytype}/{hostcode}/scaffolds_bwa_index/scaffolds", assemblytype=w.assemblytype, hostcode=w.hostcode)
    output:
        "data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.bam"
    threads: 100
    log:
        stdout="logs/bwa_backmap_samtools_{assemblytype}_{hostcode}.stdout",
        samstderr="logs/bwa_backmap_samtools_{assemblytype}_{hostcode}.stdout",
        stderr="logs/bwa_backmap_{assemblytype}_{hostcode}.stderr"
    shell:
        "bwa mem -t {threads} {params} {input.reads} 2> {log.stderr} | samtools view -@ 12 -b -o {output} 2> {log.samstderr} > {log.stdout}"
When I make an arbitrary 'all' rule like this, the workflow runs successfully:
rule allbackmapped:
    input:
        expand("data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.bam", binningsignal=BINNINGSIGNALS, assemblytype=ASSEMBLYTYPES, hostcode=HOSTCODES)
However, when the files created by this rule are required for subsequent rules like so:
rule backmap_samtools_sort:
    input:
        "data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.bam"
    output:
        "data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.sorted.bam"
    threads: 6
    resources:
        mem_mb=5000
    shell:
        "samtools sort -@ {threads} -m {resources.mem_mb}M -o {output} {input}"
rule allsorted:
    input:
        expand("data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.sorted.bam", binningsignal=BINNINGSIGNALS, assemblytype=ASSEMBLYTYPES, hostcode=HOSTCODES)
The workflow closes with this error
WorkflowError in line 416 of /stor/azolla_metagenome/Azolla_genus_metagenome/Snakefile:
Can only use unpack() on list and dict
To me, this error suggests that the input function for the former rule is faulty. That, however, seems not to be the case, since it ran successfully when no subsequent processing was queued.
The entire project is hosted on GitHub, including the full Snakefile and a GitHub issue about this problem.
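One detail worth checking, offered as a guess rather than a confirmed fix: get_binning_reads falls through and returns None when neither file exists yet, and unpack() accepts only a list or a dict, which matches the error text. A defensive sketch that always returns a dict:
import os.path

def get_binning_reads(wildcards):
    base = "data/sequencing_binning_signals/" + wildcards.binningsignal
    if os.path.isfile(base + ".trimmed_paired.R1.fastq.gz"):
        return {'reads': expand("data/sequencing_binning_signals/{binningsignal}.trimmed_paired.R{PE}.fastq.gz", PE=[1, 2], binningsignal=wildcards.binningsignal)}
    # Fall back to the single-end path even if the file is not on disk yet,
    # so the function never returns None while Snakemake resolves the DAG.
    return {'reads': expand("data/sequencing_binning_signals/{binningsignal}.trimmed.fastq.gz", binningsignal=wildcards.binningsignal)}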
