This code was formatted by black:
def test_schema_org_script_from_list():
    assert (
        schema_org_script_from_list([1, 2])
        == '<script type="application/ld+json">1</script>\n<script type="application/ld+json">2</script>'
    )
But now flake8 complains:
tests/test_utils.py:59:9: W503 line break before binary operator
tests/test_utils.py:59:101: E501 line too long (105 > 100 characters)
How can I format the above lines and make flake8 happy?
I use this .pre-commit-config.yaml:
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
  - repo: 'https://github.com/pre-commit/pre-commit-hooks'
    rev: v3.2.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
  - repo: 'https://gitlab.com/pycqa/flake8'
    rev: 3.8.4
    hooks:
      - id: flake8
  - repo: 'https://github.com/pre-commit/mirrors-isort'
    rev: v5.7.0
    hooks:
      - id: isort
tox.ini:
[flake8]
max-line-length = 100
exclude = .git,*/migrations/*,node_modules,migrate
# W504 line break after binary operator
ignore = W504
(I think it is a bit strange that flake8 reads config from a file which belongs to a different tool).
From your configuration, you've set ignore = W504.
ignore isn't the option you want, as it resets the default ignore list, bringing in a bunch of checks that would otherwise be suppressed (including W503).
If you remove ignore =, both W504 and W503 are in the default ignore list, so they won't be reported.
As for your E501 (line too long), you can either set extend-ignore = E501 or set max-line-length appropriately.
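Concretely, a minimal sketch of the tox.ini section with those fixes applied (keeping your 100-character limit; adjust to taste):
[flake8]
max-line-length = 100
exclude = .git,*/migrations/*,node_modules,migrate
# no ignore= line: W503/W504 are already in the default ignore list
extend-ignore = E501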
For black, this is the suggested configuration:
[flake8]
max-line-length = 88
extend-ignore = E203
Note that there are cases where black cannot make a line short enough (as you're seeing) -- both from long strings and from long variable names.
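For a single line like that, one option (a sketch; # noqa is plain flake8, nothing black-specific) is a per-line suppression:
assert (
    schema_org_script_from_list([1, 2])
    == '<script type="application/ld+json">1</script>\n<script type="application/ld+json">2</script>'  # noqa: E501
)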
Disclaimer: I'm the current flake8 maintainer.
As in the title: I have copied the code as-is from Google's website, but it is not working:
import io
from google.cloud import vision_v1


def sample_batch_annotate_files(file_path="path/to/your/document.pdf"):
    """Perform batch file annotation."""
    client = vision_v1.ImageAnnotatorClient()

    # Supported mime_type: application/pdf, image/tiff, image/gif
    mime_type = "application/pdf"
    with io.open(file_path, "rb") as f:
        content = f.read()
    input_config = {"mime_type": mime_type, "content": content}
    features = [{"type_": vision_v1.Feature.Type.DOCUMENT_TEXT_DETECTION}]

    # The service can process up to 5 pages per document file. Here we specify
    # the first, second, and last page of the document to be processed.
    pages = [1, 2, -1]
    requests = [{"input_config": input_config, "features": features, "pages": pages}]
    response = client.batch_annotate_files(requests=requests)
    for image_response in response.responses[0].responses:
        print(u"Full text: {}".format(image_response.full_text_annotation.text))
        for page in image_response.full_text_annotation.pages:
            for block in page.blocks:
                print(u"\nBlock confidence: {}".format(block.confidence))
                for par in block.paragraphs:
                    print(u"\tParagraph confidence: {}".format(par.confidence))
                    for word in par.words:
                        print(u"\t\tWord confidence: {}".format(word.confidence))
Yet it is not working and giving me the following error:
File "---", line 16, in sample_batch_annotate_files
features = [{"type_": vision_v1.Feature.Type.DOCUMENT_TEXT_DETECTION}]
AttributeError: module 'google.cloud.vision_v1' has no attribute 'Feature'
I am using a conda environment; this is the .yml. As I read in other posts, I have installed google-api-python-client and google-cloud-vision as recommended. Might it be related to the google-cloud-vision version being 1.0.1 when it should be 3.5.1? How do I update it? I installed it with: conda install -c conda-forge google-cloud-vision
channels:
- conda-forge
- defaults
dependencies:
- aiohttp=3.8.1=py39hb82d6ee_1
- aiosignal=1.2.0=pyhd8ed1ab_0
- async-timeout=4.0.2=pyhd8ed1ab_0
- attrs=22.1.0=pyh71513ae_1
- brotlipy=0.7.0=py39hb82d6ee_1004
- ca-certificates=2022.9.24=h5b45459_0
- cachetools=5.2.0=pyhd8ed1ab_0
- certifi=2022.9.24=pyhd8ed1ab_0
- cffi=1.15.1=py39h0878f49_0
- charset-normalizer=2.1.1=pyhd8ed1ab_0
- cryptography=37.0.4=py39h7bc7c5c_0
- frozenlist=1.3.1=py39hb82d6ee_0
- google-api-core=2.10.1=pyhd8ed1ab_0
- google-api-core-grpc=2.10.1=hd8ed1ab_0
- google-api-python-client=2.64.0=pyhd8ed1ab_0
- google-auth=2.12.0=pyh1a96a4e_0
- google-auth-httplib2=0.1.0=pyhd8ed1ab_1
- google-cloud-core=2.3.2=pyhd8ed1ab_0
- google-cloud-storage=2.5.0=pyh6c4a22f_0
- google-cloud-vision=1.0.1=pyhd8ed1ab_0
- google-crc32c=1.1.2=py39h3fc79e4_3
- google-resumable-media=2.4.0=pyhd8ed1ab_0
- googleapis-common-protos=1.56.4=py39h35db3c3_0
- grpcio=1.46.0=py39hb76b349_1
- grpcio-status=1.41.1=pyhd3eb1b0_0
- httplib2=0.20.4=pyhd8ed1ab_0
- idna=3.4=pyhd8ed1ab_0
- libcrc32c=1.1.2=h0e60522_0
- libprotobuf=3.20.1=h7755175_1
- libzlib=1.2.12=h8ffe710_2
- multidict=6.0.2=py39hb82d6ee_1
- openssl=1.1.1q=h8ffe710_0
- pip=22.2.2=py39haa95532_0
- protobuf=3.20.1=py39hcbf5309_0
- pyasn1=0.4.8=py_0
- pyasn1-modules=0.2.7=py_0
- pycparser=2.21=pyhd8ed1ab_0
- pyopenssl=22.0.0=pyhd8ed1ab_1
- pyparsing=3.0.9=pyhd8ed1ab_0
- pysocks=1.7.1=pyh0701188_6
- python=3.9.13=h6244533_1
- python_abi=3.9=2_cp39
- pyu2f=0.1.5=pyhd8ed1ab_0
- requests=2.28.1=pyhd8ed1ab_1
- rsa=4.9=pyhd8ed1ab_0
- setuptools=63.4.1=py39haa95532_0
- six=1.16.0=pyh6c4a22f_0
- sqlite=3.39.3=h2bbff1b_0
- typing-extensions=4.4.0=hd8ed1ab_0
- typing_extensions=4.4.0=pyha770c72_0
- tzdata=2022c=h04d1e81_0
- uritemplate=4.1.1=pyhd8ed1ab_0
- urllib3=1.26.11=pyhd8ed1ab_0
- vc=14.2=h21ff451_1
- vs2015_runtime=14.27.29016=h5e58377_2
- wheel=0.37.1=pyhd3eb1b0_0
- win_inet_pton=1.1.0=py39hcbf5309_4
- wincertstore=0.2=py39haa95532_2
- yarl=1.7.2=py39hb82d6ee_2
- zlib=1.2.12=h8ffe710_2
I manually installed google-cloud-vision by downloading the tar.bz2 from Anaconda:
activate [env]
conda install path-to-tar
By doing that I had my google-cloud-vision package updated to the version I chose, but then I had an issue with proto: the error said that it could not find the module ('import proto').
What I did was update google-api-python-client: conda install -c conda-forge google-api-python-client
and that solved the issue.
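For reference, if you do stay on the 1.x client, the failing line has to use the old API surface instead; a sketch based on the 1.x samples (the enum lives under vision_v1.enums and the field is named type rather than type_):
from google.cloud import vision_v1

# 1.x clients expose the enum under vision_v1.enums and expect "type"
features = [{"type": vision_v1.enums.Feature.Type.DOCUMENT_TEXT_DETECTION}]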
When I try to commit my changes to .pre-commit-config.yaml, I get the following error:
An error has occurred: InvalidConfigError:
==> File .pre-commit-config.yaml
=====> while parsing a block mapping
in "<unicode string>", line 33, column 3
did not find expected key
in "<unicode string>", line 34, column 3
Check the log at /Users/name/.cache/pre-commit/pre-commit.log
The lines 33+ are:
- repo: local
  - id: pytest
    name: Run tests (pytest)
    entry: pytest -x
    language: system
    types: [python]
    pass_filenames: false
I had missed adding "hooks" to the file; now it works:
- repo: local
  hooks: # <- this was missing
    - id: pytest
      name: Run tests (pytest)
      entry: pytest -x
      language: system
      types: [python]
      pass_filenames: false
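As a quick sanity check, pre-commit can also validate the file without committing (a subcommand in recent versions; older releases ship it as a separate pre-commit-validate-config script):
pre-commit validate-config .pre-commit-config.yaml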
I am updating the following template.yaml file in Python 3:
alpha:
  alpha_1:
  alpha_2:
beta:
  beta_1:
  beta_2:
    -  beta_2a:
       beta_2b:
gamma:
Using ruamel.yaml I am able to fill in the blank values correctly:
import ruamel.yaml.util

file_name = 'template.yaml'
config, ind, bsi = ruamel.yaml.util.load_yaml_guess_indent(open(file_name))
and by updating each element I am able to arrive at:
alpha:
  alpha_1: "val_alpha1"
  alpha_2: "val_alpha2"
beta:
  beta_1: "val_beta1"
  beta_2:
    -  beta_2a: "val_beta2a"
       beta_2b: "val_beta2b"
gamma: "val_gamma"
Here is the issue: I may need additional child elements in the beta_2 node, like this:
alpha:
  alpha_1: "val_alpha1"
  alpha_2: "val_alpha2"
beta:
  beta_1: "val_beta1"
  beta_2:
    -  beta_2a: "val_beta2a"
       beta_2b: "val_beta2b"
    -  beta_2c: "val_beta2c"
       beta_2d: "val_beta2d"
gamma: "val_gamma"
I do not know in advance whether I will need more branches like the above, and changing the template each time is not an option.
My attempts with update() or appending a dict were unsuccessful. How can I get the desired result?
My attempt:
entry = config["beta"]["beta_2"]
entry[0]["beta_2a"] = "val_beta2a"
entry[0]["beta_2b"] = "val_beta2b"
entry[0].update = {"beta_2c": "val_beta2a", "beta_2d": "val_beta2d"}
In this case, the program does not display any changes in the results, meaning that the last line with update did not work at all.
Your indent is five for the list, with a two-space offset for the indicator (-), so there is no real need to try to analyse the indent unless some other program changes it.
The value for beta_2 is a list; to get what you want, you need to append a dictionary to that list:
import sys
from pathlib import Path

import ruamel.yaml
from ruamel.yaml.scalarstring import DoubleQuotedScalarString as DQ

file_name = Path('template.yaml')

yaml = ruamel.yaml.YAML()
yaml.indent(sequence=5, offset=2)
config = yaml.load(file_name)

config['alpha'].update(dict(alpha_1=DQ('val_alpha1'), alpha_2=DQ('val_alpha2')))
config['beta'].update(dict(beta_1=DQ('val_beta1')))
config['gamma'] = DQ('val_gamma')
entry = config["beta"]["beta_2"]
entry[0]["beta_2a"] = DQ("val_beta2a")
entry[0]["beta_2b"] = DQ("val_beta2b")
entry.append(dict(beta_2c=DQ("val_beta2c"), beta_2d=DQ("val_beta2d")))

yaml.dump(config, sys.stdout)
which gives:
alpha:
  alpha_1: "val_alpha1"
  alpha_2: "val_alpha2"
beta:
  beta_1: "val_beta1"
  beta_2:
    -  beta_2a: "val_beta2a"
       beta_2b: "val_beta2b"
    -  beta_2c: "val_beta2c"
       beta_2d: "val_beta2d"
gamma: "val_gamma"
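If you want the result written back to template.yaml rather than printed to stdout, YAML.dump also accepts the Path directly:
# overwrite the template file with the updated config
yaml.dump(config, file_name)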
I am very new to Python and am working through the Dagster hello tutorial.
I have set up the following from the tutorial:
import csv

from dagster import execute_pipeline, execute_solid, pipeline, solid


@solid
def hello_cereal(context):
    # Assuming the dataset is in the same directory as this file
    dataset_path = 'cereal.csv'
    with open(dataset_path, 'r') as fd:
        # Read the rows in using the standard csv library
        cereals = [row for row in csv.DictReader(fd)]
    context.log.info(
        'Found {n_cereals} cereals'.format(n_cereals=len(cereals))
    )
    return cereals


@pipeline
def hello_cereal_pipeline():
    hello_cereal()
However, pylint shows a "no value for parameter" message. What have I missed?
When I try to execute the pipeline I get the following:
D:\python\dag>dagster pipeline execute -f hello_cereal.py -n hello_cereal_pipeline
2019-11-25 14:47:09 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - PIPELINE_START - Started execution of pipeline "hello_cereal_pipeline".
2019-11-25 14:47:09 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - ENGINE_EVENT - Executing steps in process (pid: 11684)
 event_specific_data = {"metadata_entries": [["pid", null, ["11684"]], ["step_keys", null, ["{'hello_cereal.compute'}"]]]}
2019-11-25 14:47:09 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - STEP_START - Started execution of step "hello_cereal.compute".
 solid = "hello_cereal"
 solid_definition = "hello_cereal"
 step_key = "hello_cereal.compute"
2019-11-25 14:47:10 - dagster - ERROR - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - STEP_FAILURE - Execution of step "hello_cereal.compute" failed.
 cls_name = "FileNotFoundError"
 solid = "hello_cereal"
 solid_definition = "hello_cereal"
 step_key = "hello_cereal.compute"
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\errors.py", line 114, in user_code_error_boundary
    yield
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\engine\engine_inprocess.py", line 621, in _user_event_sequence_for_step_compute_fn
    for event in gen:
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\execution\plan\compute.py", line 75, in _execute_core_compute
    for step_output in _yield_compute_results(compute_context, inputs, compute_fn):
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\execution\plan\compute.py", line 52, in _yield_compute_results
    for event in user_event_sequence:
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\definitions\decorators.py", line 418, in compute
    result = fn(context, **kwargs)
  File "hello_cereal.py", line 10, in hello_cereal
    with open(dataset_path, 'r') as fd:
2019-11-25 14:47:10 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - ENGINE_EVENT - Finished steps in process (pid: 11684) in 183ms
 event_specific_data = {"metadata_entries": [["pid", null, ["11684"]], ["step_keys", null, ["{'hello_cereal.compute'}"]]]}
2019-11-25 14:47:10 - dagster - ERROR - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - PIPELINE_FAILURE - Execution of pipeline "hello_cereal_pipeline" failed.
[Update]
From Rahul's comment I realised I had not copied the whole example.
When I corrected that, I got a FileNotFoundError.
To answer the original question about why you are receiving a "no value for parameter" pylint message:
This is because the pipeline function body calls the solid without any arguments, while the @solid functions have parameters defined. This is intentional on dagster's part and can be ignored by adding the following line either at the beginning of the module, or to the right of the line that triggers the pylint message. Note that putting the comment at the beginning of the module tells pylint to ignore every instance of the warning in the module, whereas putting it in-line tells pylint to ignore only that instance.
# pylint: disable=no-value-for-parameter
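For example, applied in-line to the tutorial's pipeline function:
@pipeline
def hello_cereal_pipeline():
    hello_cereal()  # pylint: disable=no-value-for-parameter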
Lastly, you could also put a similar ignore statement in a .pylintrc file, but I'd advise against that, as it would be project-global and you could miss real issues.
Hope this helps a bit!
Please check whether the dataset (csv file) you are using is in the same directory as your code file; more precisely, since dataset_path is relative, it is resolved against the directory you run the command from. That may be why you are getting the FileNotFoundError.
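If you want the sample to run no matter where you launch it from, one sketch (not part of the tutorial) is to build the path from the script's own location:
import os

# resolve cereal.csv next to this script instead of relying on the
# current working directory
dataset_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'cereal.csv')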
How do I rewrite this YAML so it is more structured, and then reference it in Puppet using the hiera function?
Currently, I am working with a hieradata syntax that looks very flat and hard to read.
service::proxy::behind_reverse_proxy: true
service::proxy::proxy_timeout: 300
service::proxy::serverlist:
  - host1.fqdn
  - host2.fqdn
And I grabbed these in a params.pp file, for example:
$behind_reverse_proxy = hiera('service::proxy::behind_reverse_proxy', 'False')
$serverlist = hiera('service::proxy::serverlist')
I thought I could rewrite the YAML like so in an effort to make it more readable...
service::proxy:
  behind_reverse_proxy: true
  proxy_timeout: 300
  serverlist:
    - host1.fqdn
    - host2.fqdn
And I updated the params.pp file according to the Hiera key.subkey syntax for interacting with structured data:
$behind_reverse_proxy = hiera('service::proxy.behind_reverse_proxy', 'False')
$serverlist = hiera('service::proxy.serverlist')
However, upon puppet agent -t, that resulted in:
Error 400 on SERVER: Could not find data item service::proxy.serverlist in any Hiera data file and no default supplied
I think these are relevant:
[user@server ~]$ facter -y | grep 'version'
facterversion: 2.4.4
puppetversion: 3.8.2
Following up on my comment about how you can access your restructured data:
service::proxy:
  behind_reverse_proxy: true
  proxy_timeout: 300
  serverlist:
    - host1.fqdn
    - host2.fqdn
In your manifest, instead of this ...
$behind_reverse_proxy = hiera('service::proxy.behind_reverse_proxy', 'False')
$serverlist = hiera('service::proxy.serverlist')
... you might do this:
$proxy_info = merge(
  { 'behind_reverse_proxy' => false, 'serverlist' => [] },
  hiera('service::proxy', {})
)
$behind_reverse_proxy = $proxy_info['behind_reverse_proxy']
$serverlist = $proxy_info['serverlist']
The merge() function is not built-in, but rather comes from Puppet's (formerly PuppetLabs's) widely-used stdlib module. There's a good chance that you are already using that module elsewhere, but even if not, it may be well worth your while to introduce it to your stack.
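Note that on newer Puppet (4.5+, so not the 3.8.2 shown in the question) the built-in lookup() function understands dot notation for digging into structured values, so a sketch like this works without merge():
# Puppet >= 4.5: dig into the hash with dot notation and a typed default
$serverlist = lookup('service::proxy.serverlist', Array, 'first', [])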
I've never used Hiera, but I think the problem is that you have a sequence (array) when you wanted a mapping (hash).
In the below YAML, the value of the service::proxy key is a sequence with three elements, each of which is a mapping with one key:
service::proxy:
  - behind_reverse_proxy: true
  - proxy_timeout: 300
  - serverlist:
      - host1.fqdn
      - host2.fqdn
What you probably wanted, though, was for service::proxy to be a mapping with three keys:
service::proxy:
  behind_reverse_proxy: true
  proxy_timeout: 300
  serverlist:
    - host1.fqdn
    - host2.fqdn
The examples in the Hiera docs you linked to seem to support this.