pre-commit InvalidConfigError - pre-commit-hook

When I try to commit my changes to the .pre-commit-config.yaml, I get the following error:
An error has occurred: InvalidConfigError:
==> File .pre-commit-config.yaml
=====> while parsing a block mapping
in "<unicode string>", line 33, column 3
did not find expected key
in "<unicode string>", line 34, column 3
Check the log at /Users/name/.cache/pre-commit/pre-commit.log
The lines 33+ are:
- repo: local
  - id: pytest
    name: Run tests (pytest)
    entry: pytest -x
    language: system
    types: [python]
    pass_filenames: false

I had missed adding the "hooks" key to the file; with it in place, it works:
- repo: local
  hooks: # <- this was missing
  - id: pytest
    name: Run tests (pytest)
    entry: pytest -x
    language: system
    types: [python]
    pass_filenames: false
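For reference, this snippet lives under the top-level repos: key, so the complete minimal file would look roughly like this (same hook definition, just shown in context):

repos:
- repo: local
  hooks:
  - id: pytest
    name: Run tests (pytest)
    entry: pytest -x
    language: system
    types: [python]
    pass_filenames: false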

How to pass values to a variable in a list, to pick one by one, using the command "ansible-playbook test.yml --extra-var variable1=ntpd,crond,sysconf"

I tried to pass a value manually to a specific variable using the command
ansible-playbook test.yml --extra-var variable1=ntpd
Can you please help me understand how to pass values to a variable in a list, so the playbook can pick them one by one?
I've tried the following, but with no luck:
ansible-playbook test.yml --extra-var "variable1=ntpd,crond,sysconf"
ansible-playbook test.yml -e '{"variable1":["ntpd", "crond"]}'
The playbook should pick the first value ntpd, then the second value crond, and so on.
You may have a look at the following minimal example playbook:
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:

    - name: Show provided content and type
      debug:
        msg:
          - "var1 contains {{ var1 }}"
          - "var1 is of type {{ var1 | type_debug }}"
which is called with
ansible-playbook extra.yml --extra-vars="var1=a,b,c"
resulting in the output:
TASK [Show provided content and type] ******
ok: [localhost] =>
  msg:
  - var1 contains a,b,c
  - var1 is of type unicode
I understand that you would like to get a list out of it for further processing. Since it is a string of comma-separated values, this is quite simple to achieve: just split the string on the separator.
    - name: Create list and loop over elements
      debug:
        msg: "{{ item }}"
      loop: "{{ var1 | split(',') }}"
will result in the output:
TASK [Create list and loop over elements] ******
ok: [localhost] => (item=a) =>
  msg: a
ok: [localhost] => (item=b) =>
  msg: b
ok: [localhost] => (item=c) =>
  msg: c
As the example shows package names, the use case might be to install or validate packages that are provided as a list of comma-separated values (pkgcsv).
For example, for the yum module – Manages packages with the yum package manager – and because of the Notes:
"When used with a loop: each package will be processed individually, it is much more efficient to pass the list directly to the name option."
one should rather proceed with
- name: Install a list of packages with a list variable
  ansible.builtin.yum:
    name: "{{ pkgcsv | split(',') }}"
or, since the module has the capability to handle it, one could also leave the CSV just as it is:
- name: Install a list of packages with a list variable
  ansible.builtin.yum:
    name: "{{ pkgcsv }}"
Further Reading
As this could be helpful for others or in future cases:
Is there any way to validate whether a file is passed as extra vars in ansible-playbook?
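Along the lines of the linked validation question, a minimal guard task could look like the sketch below (assuming the var1 name from the example above; ansible.builtin.assert is the standard module for this kind of check):

- name: Fail early when the expected extra var is missing
  ansible.builtin.assert:
    that:
      - var1 is defined
    fail_msg: "Pass the value via --extra-vars 'var1=...'"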

How to format this code so that flake8 is happy?

This code was created by black:
def test_schema_org_script_from_list():
    assert (
        schema_org_script_from_list([1, 2])
        == '<script type="application/ld+json">1</script>\n<script type="application/ld+json">2</script>'
    )
But now flake8 complains:
tests/test_utils.py:59:9: W503 line break before binary operator
tests/test_utils.py:59:101: E501 line too long (105 > 100 characters)
How can I format the above lines and make flake8 happy?
I use this .pre-commit-config.yaml
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
  - repo: 'https://github.com/pre-commit/pre-commit-hooks'
    rev: v3.2.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
  - repo: 'https://gitlab.com/pycqa/flake8'
    rev: 3.8.4
    hooks:
      - id: flake8
  - repo: 'https://github.com/pre-commit/mirrors-isort'
    rev: v5.7.0
    hooks:
      - id: isort
tox.ini:
[flake8]
max-line-length = 100
exclude = .git,*/migrations/*,node_modules,migrate
# W504 line break after binary operator
ignore = W504
(I think it is a bit strange that flake8 reads config from a file which belongs to a different tool).
from your configuration, you've set ignore = W504.
ignore isn't the option you want, as it resets the default ignore list (bringing in a bunch of things, including W503).
if you remove ignore =, both W504 and W503 are in the default ignore list, so they won't be reported.
as for your E501 (line too long), you can either set extend-ignore = E501 or set max-line-length appropriately.
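Applied to the tox.ini above, that would look roughly like this (a sketch keeping the 100-character limit):

[flake8]
max-line-length = 100
exclude = .git,*/migrations/*,node_modules,migrate
# no ignore= needed: W503 and W504 are already in flake8's default ignore list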
for black, this is the suggested configuration:
[flake8]
max-line-length = 88
extend-ignore = E203
note that there are cases where black cannot make a line short enough (as you're seeing) -- both from long strings and from long variable names
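If you would rather keep the length check for this particular test, one workaround (not from the answer above, just a common pattern) is to break the long literal into adjacent string pieces, which Python joins at compile time:

def test_schema_org_script_from_list():
    # Adjacent string literals are concatenated, so each source line stays short
    # and black will leave this formatting alone.
    expected = (
        '<script type="application/ld+json">1</script>\n'
        '<script type="application/ld+json">2</script>'
    )
    assert schema_org_script_from_list([1, 2]) == expected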
disclaimer: I'm the current flake8 maintainer

No value for argument in function call

I am very new to Python and am working through the Dagster hello tutorial.
I have set up the following from the tutorial:
import csv

from dagster import execute_pipeline, execute_solid, pipeline, solid


@solid
def hello_cereal(context):
    # Assuming the dataset is in the same directory as this file
    dataset_path = 'cereal.csv'
    with open(dataset_path, 'r') as fd:
        # Read the rows in using the standard csv library
        cereals = [row for row in csv.DictReader(fd)]
    context.log.info(
        'Found {n_cereals} cereals'.format(n_cereals=len(cereals))
    )
    return cereals


@pipeline
def hello_cereal_pipeline():
    hello_cereal()
However pylint shows a "no value for parameter" message.
What have I missed?
When I try to execute the pipeline I get the following:
D:\python\dag>dagster pipeline execute -f hello_cereal.py -n hello_cereal_pipeline
2019-11-25 14:47:09 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - PIPELINE_START - Started execution of pipeline "hello_cereal_pipeline".
2019-11-25 14:47:09 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - ENGINE_EVENT - Executing steps in process (pid: 11684)
 event_specific_data = {"metadata_entries": [["pid", null, ["11684"]], ["step_keys", null, ["{'hello_cereal.compute'}"]]]}
2019-11-25 14:47:09 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - STEP_START - Started execution of step "hello_cereal.compute".
 solid = "hello_cereal"
 solid_definition = "hello_cereal"
 step_key = "hello_cereal.compute"
2019-11-25 14:47:10 - dagster - ERROR - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - STEP_FAILURE - Execution of step "hello_cereal.compute" failed.
 cls_name = "FileNotFoundError"
 solid = "hello_cereal"
 solid_definition = "hello_cereal"
 step_key = "hello_cereal.compute"
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\errors.py", line 114, in user_code_error_boundary
    yield
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\engine\engine_inprocess.py", line 621, in _user_event_sequence_for_step_compute_fn
    for event in gen:
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\execution\plan\compute.py", line 75, in _execute_core_compute
    for step_output in _yield_compute_results(compute_context, inputs, compute_fn):
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\execution\plan\compute.py", line 52, in _yield_compute_results
    for event in user_event_sequence:
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\definitions\decorators.py", line 418, in compute
    result = fn(context, **kwargs)
  File "hello_cereal.py", line 10, in hello_cereal
    with open(dataset_path, 'r') as fd:
2019-11-25 14:47:10 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - ENGINE_EVENT - Finished steps in process (pid: 11684) in 183ms
 event_specific_data = {"metadata_entries": [["pid", null, ["11684"]], ["step_keys", null, ["{'hello_cereal.compute'}"]]]}
2019-11-25 14:47:10 - dagster - ERROR - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - PIPELINE_FAILURE - Execution of pipeline "hello_cereal_pipeline" failed.
[Update]
From Rahul's comment I realised I had not copied the whole example. When I corrected that, I got a FileNotFoundError.
To answer the original question about why you are receiving a "no value for parameter" pylint message:
This is because the pipeline function body calls the solid without any arguments, while the @solid functions have parameters defined. This is intentional on dagster's part and can be ignored by adding the following line either at the beginning of the module, or to the right of the line that triggers the pylint message. Note that putting the comment at the beginning of the module tells pylint to ignore every instance of the warning in the module, whereas putting the comment in-line tells pylint to ignore only that instance of the warning.
# pylint: disable=no-value-for-parameter
Lastly, you could also put a similar ignore statement in a .pylintrc file, but I'd advise against that, as it would be project-global and you could miss true issues.
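For example, the in-line form applied to the tutorial code above would look like this sketch:

@pipeline
def hello_cereal_pipeline():
    hello_cereal()  # pylint: disable=no-value-for-parameter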
hope this helps a bit!
Please check whether the dataset (CSV file) you are using is in the same directory as your code file. That may be why you are getting the FileNotFoundError.
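If the file does sit next to the script, building the path from __file__ is a common way to stop the current working directory from mattering (a sketch using only the standard library):

import os

# Resolve cereal.csv relative to this source file, not the directory
# the process happened to be started from.
dataset_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'cereal.csv')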

Upgraded Puppet 3 to 5: if I store a value in Hiera as "D:\Temp\", it is read back as "D:\\Temp\\"

In the YAML file it is written as temp: D:\Temp\
While running, it is picked up as D:\\\\Temp\\\\
Debug message: Found key: "temp" value: "D:\\Temp\\"

Building lists based on matched attributes (ansible)

Trying to build a list of servers that match an attribute (in this case an ec2_tag) to schedule specific servers for specific tasks.
I'm trying to match against selectattr with:
servers: "{{ hostvars[inventory_hostname]|selectattr('ec2_tag_Role', 'match', 'cassandra_db_seed_node') | map(attribute='inventory_hostname') |list}}"
Though I'm getting what looks like a type error from Ansible:
fatal: [X.X.X.X]: FAILED! => {"failed": true, "msg": "Unexpected templating type error occurred on ({{ hostvars[inventory_hostname]|selectattr('ec2_tag_Role', 'match', 'cassandra_db_seed_node') | map(attribute='inventory_hostname') |list}}): expected string or buffer"}
What am I missing here?
When you build a complex filter chain, use the debug module to print intermediate results, and add filters one by one to achieve the desired result.
In your example, there is a mistake at the very first step: hostvars[inventory_hostname] is a dict of facts for your current host only, so there is nothing to select elements from.
You need a list of hostvars' values, because selectattr is applied to a list, not a dict.
But in Ansible hostvars is a special variable and not actually a dict, so you can't just call .values() on it without jumping through some hoops.
Try the following code:
- hosts: all
  tasks:
    - name: a kind of typecast for hostvars
      set_fact:
        hostvars_dict: "{{ hostvars }}"

    - debug:
        msg: "{{ hostvars_dict.values() | selectattr('ec2_tag_Role','match','cassandra_db_seed_node') | map(attribute='inventory_hostname') | list }}"
You can use the group_by module to create ad-hoc groups depending on the hostvar:
- group_by:
    key: 'ec2_tag_role_{{ ec2_tag_Role }}'
This will create groups called ec2_tag_role_*, which means that later on you can run a play against any of these groups:
- hosts: ec2_tag_role_cassandra_db_seed_node
  tasks:
    - name: Your tasks...
