No value for argument in function call - python-3.x

I am very new to Python and am working through the Dagster hello tutorial. I have set up the following from the tutorial:
import csv

from dagster import execute_pipeline, execute_solid, pipeline, solid


@solid
def hello_cereal(context):
    # Assuming the dataset is in the same directory as this file
    dataset_path = 'cereal.csv'
    with open(dataset_path, 'r') as fd:
        # Read the rows in using the standard csv library
        cereals = [row for row in csv.DictReader(fd)]

    context.log.info(
        'Found {n_cereals} cereals'.format(n_cereals=len(cereals))
    )

    return cereals


@pipeline
def hello_cereal_pipeline():
    hello_cereal()
However, pylint shows a "no value for parameter" message. What have I missed?
When I try to execute the pipeline I get the following:
D:\python\dag>dagster pipeline execute -f hello_cereal.py -n hello_cereal_pipeline

2019-11-25 14:47:09 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - PIPELINE_START - Started execution of pipeline "hello_cereal_pipeline".
2019-11-25 14:47:09 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - ENGINE_EVENT - Executing steps in process (pid: 11684)
 event_specific_data = {"metadata_entries": [["pid", null, ["11684"]], ["step_keys", null, ["{'hello_cereal.compute'}"]]]}
2019-11-25 14:47:09 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - STEP_START - Started execution of step "hello_cereal.compute".
 solid = "hello_cereal"
 solid_definition = "hello_cereal"
 step_key = "hello_cereal.compute"
2019-11-25 14:47:10 - dagster - ERROR - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - STEP_FAILURE - Execution of step "hello_cereal.compute" failed.
 cls_name = "FileNotFoundError"
 solid = "hello_cereal"
 solid_definition = "hello_cereal"
 step_key = "hello_cereal.compute"
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\errors.py", line 114, in user_code_error_boundary
    yield
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\engine\engine_inprocess.py", line 621, in _user_event_sequence_for_step_compute_fn
    for event in gen:
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\execution\plan\compute.py", line 75, in _execute_core_compute
    for step_output in _yield_compute_results(compute_context, inputs, compute_fn):
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\execution\plan\compute.py", line 52, in _yield_compute_results
    for event in user_event_sequence:
  File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\definitions\decorators.py", line 418, in compute
    result = fn(context, **kwargs)
  File "hello_cereal.py", line 10, in hello_cereal
    with open(dataset_path, 'r') as fd:
2019-11-25 14:47:10 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - ENGINE_EVENT - Finished steps in process (pid: 11684) in 183ms
 event_specific_data = {"metadata_entries": [["pid", null, ["11684"]], ["step_keys", null, ["{'hello_cereal.compute'}"]]]}
2019-11-25 14:47:10 - dagster - ERROR - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - PIPELINE_FAILURE - Execution of pipeline "hello_cereal_pipeline" failed.
[Update]
From Rahul's comment I realised I had not copied the whole example. When I corrected that, I got a FileNotFoundError.

To answer the original question about why you are receiving a "no value for parameter" pylint message:
This is because the call inside the pipeline function passes no arguments, while the @solid-decorated function has parameters defined. This is intentional on Dagster's part (Dagster supplies them itself when the pipeline is executed), and the warning can be ignored by adding the following line either at the beginning of the module or to the right of the line with the pylint message. Note that putting the comment below at the beginning of the module tells pylint to ignore every instance of the warning in the module, whereas putting the comment in-line tells pylint to ignore only that instance.
# pylint: disable=no-value-for-parameter
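For example, the in-line form would look like this (a minimal sketch based on the pipeline above):

@pipeline
def hello_cereal_pipeline():
    hello_cereal()  # pylint: disable=no-value-for-parameter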
Lastly, you could also put a similar ignore statement in a .pylintrc file, but I'd advise against that, as it would apply project-wide and you could miss true issues.
hope this helps a bit!

Please check whether the dataset (the csv file) you are using is in the same directory as your code file. That may be why you are getting the FileNotFoundError.
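Also worth noting: a bare relative path like 'cereal.csv' is resolved against the process's current working directory, not the script's location, so open() can fail even when the file sits right next to the code. One way to make the lookup independent of where dagster is launched from (a sketch, not part of the original answer) is to build the path from __file__:

import csv
import os

from dagster import pipeline, solid


@solid
def hello_cereal(context):
    # Resolve cereal.csv relative to this module instead of the
    # process's current working directory.
    dataset_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'cereal.csv')
    with open(dataset_path, 'r') as fd:
        cereals = [row for row in csv.DictReader(fd)]
    context.log.info('Found {n} cereals'.format(n=len(cereals)))
    return cereals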

Related

Encountered an internal AutoML error- ClientException: Message: No objects to concatenate

I am trying to implement hierarchical time series forecasting with Azure AutoML pipelines. I followed this notebook for the implementation:
https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb
The training pipeline worked when I ran it on a compute instance, but when I run the same pipeline on a compute cluster it breaks at the hts-proportion-calculation step.
This is the error I am getting,
system error:
Encountered an internal AutoML error. Error Message/Code: ClientException. Additional Info: ClientException:
      Message: No objects to concatenate
      InnerException: None
ErrorResponse
{
    "error": {
        "message": "No objects to concatenate"
    }
}
logs:
Loading arguments for scenario proportions-calculation
adding argument --input-medatadata
adding argument --hts-graph
adding argument --enable-event-logger
Input arguments dict is {'--input-medatadata': '/mnt/azureml/cr/j/85509be625484b6caa3c1d97b7ab2e33/cap/data-capability/wd/INPUT_automl_training_workspaceblobstore/azureml/17ca5ae7-7269-4246-888f-e781071e3f5c/automl_training', '--hts-graph': '/mnt/azureml/cr/j/85509be625484b6caa3c1d97b7ab2e33/cap/data-capability/wd/INPUT_hts_graph_workspaceblobstore/azureml/a2c1b15a-c895-41e8-b6a6-1ca37ebe9e77/hts_graph', '--enable-event-logger': None}
Unknown file to proceed outputs.txt
processing: outputs.txt with type None.
Cleaning up all outstanding Run operations, waiting 300.0 seconds
3 items cleaning up...
Cleanup took 0.001676321029663086 seconds
Traceback (most recent call last):
  File "proportions_calculation_wrapper.py", line 47, in <module>
    runtime_wrapper.run()
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_many_models/automl_pipeline_step_wrapper.py", line 63, in run
    self._run()
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 44, in _run
    proportions_calculation(self.arguments_dict, self.event_logger, script_run=self.step_run)
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 173, in proportions_calculation
    proportion_files_list, forecasting_parameters.time_column_name, graph.label_column_name
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 92, in calculate_time_agg_sum_for_all_files
    df = pd.concat(pool.map(concat_func, files_batches), ignore_index=True)
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/core/reshape/concat.py", line 304, in concat
    sort=sort,
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/core/reshape/concat.py", line 351, in __init__
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate
Please let me know how I can resolve this issue.
This error occurred because the iteration timeout was not less than the experiment timeout; the system error and logs are somewhat misleading. The logs point at pandas ("No objects to concatenate") via this line:
df = pd.concat(pool.map(concat_func, files_batches), ignore_index=True)
The error can be avoided by setting iteration_timeout_minutes to a value strictly less than the experiment timeout. I had set iteration_timeout_minutes=60, which, with experiment_timeout_hours=1 (i.e. 60 minutes), is not less than the experiment timeout, and that caused the error. The settings below work:
automl_settings = AutoMLConfig(
    task="forecasting",
    primary_metric="normalized_root_mean_squared_error",
    experiment_timeout_hours=1,
    label_column_name=label_column_name,
    track_child_runs=False,
    forecasting_parameters=forecasting_parameters,
    pipeline_fetch_max_batch_size=15,
    model_explainability=model_explainability,
    n_cross_validations="auto",  # Feel free to set to a small integer (>=2) if runtime is an issue.
    cv_step_size="auto",
    # The following settings are specific to this sample and should be adjusted according to your own needs.
    iteration_timeout_minutes=10,
    iterations=15,
)
We are able to run the sample successfully using the compute cluster as given below.
from azureml.core.compute import ComputeTarget, AmlCompute

# Name your cluster
compute_name = "hts-compute"

if compute_name in ws.compute_targets:
    compute_target = ws.compute_targets[compute_name]
    if compute_target and type(compute_target) is AmlCompute:
        print("Found compute target: " + compute_name)
else:
    print("Creating a new compute target...")
    provisioning_config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_D16S_V3", max_nodes=20
    )
    # Create the compute target
    compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
    # Can poll for a minimum number of nodes and for a specific timeout.
    # If no min node count is provided it will use the scale settings for the cluster
    compute_target.wait_for_completion(
        show_output=True, min_node_count=None, timeout_in_minutes=20
    )
# For a more detailed view of current cluster status, use the 'status' property
print(compute_target.status.serialize())

Failed build Yocto Gatesgarth "extensible SDK" (eSDK) - populate_sdk_ext fail

I'm working with Yocto "Gatesgarth" on a custom board based on i.MX6ULL.
I'm facing some problems generating the extensible SDK (eSDK); generation of the normal SDK completes correctly.
Some details below.
Details of system:
Board based on NXP i.MX6ULL
Yocto version "Gatesgarth 3.2.4 (May 2021)"
BB_VERSION = "1.48.0",
NATIVELSBSTRING = "ubuntu-18.04"
DISTRO_VERSION = "5.10-gatesgarth"
meta-qt5 is present
Build environment based on Docker Container
Environment Variable:
File: conf/local.conf
SDKMACHINE ?= 'x86_64'
File: test-image-mx6ull.bb
inherit core-image
inherit populate_sdk_qt5
inherit populate_sdk_ext
SDK_EXT_TYPE = "minimal"
SDK_INCLUDE_TOOLCHAIN = "1"
SDK_INCLUDE_PKGDATA = "0"
SDK_INCLUDE_NATIVESDK = "1"
The command executed is :
bitbake test-image-mx6ull -c populate_sdk_ext
Output:
ERROR: test-image-mx6ull-1.0-r0 do_populate_sdk_ext: Error executing a python function in exec_python_func() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
     0001:
 *** 0002:do_populate_sdk_ext(d)
     0003:
File: '/yocto/sources/poky/meta/classes/populate_sdk_ext.bbclass', lineno: 720, function: do_populate_sdk_ext
     0716:            bb.fatal('The extensible SDK can currently only be built for the same architecture as the machine being built on - SDK_ARCH is set to %s (likely via setting SDKMACHINE) which is different from the architecture of the build machine (%s). Unable to continue.' % (d.getVar('SDK_ARCH'), d.getVar('BUILD_ARCH')))
     0717:
     0718:    d.setVar('SDK_INSTALL_TARGETS', get_sdk_install_targets(d))
     0719:    if d.getVar('SDK_INCLUDE_BUILDTOOLS') == '1':
 *** 0720:        buildtools_fn = get_current_buildtools(d)
     0721:    else:
     0722:        buildtools_fn = None
     0723:    d.setVar('SDK_REQUIRED_UTILITIES', get_sdk_required_utilities(buildtools_fn, d))
     0724:    d.setVar('SDK_BUILDTOOLS_INSTALLER', buildtools_fn)
File: '/yocto/sources/poky/meta/classes/populate_sdk_ext.bbclass', lineno: 556, function: get_current_buildtools
     0552:    import glob
     0553:    btfiles = glob.glob(os.path.join(d.getVar('SDK_DEPLOY'), '*-buildtools-nativesdk-standalone-*.sh'))
     0554:    btfiles.sort(key=os.path.getctime)
     0555:    print("MY-DEBUG - btfiles = {} - SDK_DEPLOY = {}".format(btfiles, d.getVar('SDK_DEPLOY')))
 *** 0556:    return os.path.basename(btfiles[-1])
     0557:
     0558:def get_sdk_required_utilities(buildtools_fn, d):
     0559:    """Find required utilities that aren't provided by the buildtools"""
     0560:    sanity_required_utilities = (d.getVar('SANITY_REQUIRED_UTILITIES') or '').split()
Exception: IndexError: list index out of range
DEBUG: Python function do_populate_sdk_ext finished
MY-DEBUG - btfiles = [] - SDK_DEPLOY = /yocto/build-mX6ull/tmp/deploy/sdk
Question:
In line 553 the array btfiles should be filled, but it is empty, so line 556 raises the exception.
I have no idea what is wrong, what I have forgotten, or which Yocto environment variables need to be set up for this to work correctly.
I had a similar issue where I couldn't populate the eSDK; it turned out to be all about the GLIBC version. Kindly update your GLIBC version.
In my case I had to update the GLIBC version to 2.33 in the "yocto-uninative.inc" file. It worked for me!
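For context, that file lives in poky at meta/conf/distro/include/yocto-uninative.inc and pins the uninative tarball the build uses. A rough sketch of the kind of edit involved is below; the release directory and checksum are placeholders and must be taken from the matching uninative release, so don't copy them verbatim:

UNINATIVE_MAXGLIBCVERSION = "2.33"

UNINATIVE_URL ?= "http://downloads.yoctoproject.org/releases/uninative/<release>/"
# Placeholder -- copy the real sha256 lines (one per host architecture) from
# the yocto-uninative.inc that ships with the newer uninative release.
UNINATIVE_CHECKSUM[x86_64] ?= "<sha256 of the x86_64 uninative tarball>"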

pre-commit InvalidConfigError

When I want to commit my changes in the .pre-commit-config.yaml, I get the following error:
An error has occurred: InvalidConfigError:
==> File .pre-commit-config.yaml
=====> while parsing a block mapping
in "<unicode string>", line 33, column 3
did not find expected key
in "<unicode string>", line 34, column 3
Check the log at /Users/name/.cache/pre-commit/pre-commit.log
The lines 33+ are:
- repo: local
  - id: pytest
    name: Run tests (pytest)
    entry: pytest -x
    language: system
    types: [python]
    pass_filenames: false
I missed adding "hooks" to the file; now it works:
- repo: local
  hooks: # <- this was missing
  - id: pytest
    name: Run tests (pytest)
    entry: pytest -x
    language: system
    types: [python]
    pass_filenames: false

How to format this code so that flake8 is happy?

This code was created by black:
def test_schema_org_script_from_list():
    assert (
        schema_org_script_from_list([1, 2])
        == '<script type="application/ld+json">1</script>\n<script type="application/ld+json">2</script>'
    )
But now flake8 complains:
tests/test_utils.py:59:9: W503 line break before binary operator
tests/test_utils.py:59:101: E501 line too long (105 > 100 characters)
How can I format above lines and make flake8 happy?
I use this .pre-commit-config.yaml
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
  - repo: 'https://github.com/pre-commit/pre-commit-hooks'
    rev: v3.2.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
  - repo: 'https://gitlab.com/pycqa/flake8'
    rev: 3.8.4
    hooks:
      - id: flake8
  - repo: 'https://github.com/pre-commit/mirrors-isort'
    rev: v5.7.0
    hooks:
      - id: isort
tox.ini:
[flake8]
max-line-length = 100
exclude = .git,*/migrations/*,node_modules,migrate
# W504 line break after binary operator
ignore = W504
(I think it is a bit strange that flake8 reads config from a file which belongs to a different tool).
From your configuration, you've set ignore = W504.
ignore isn't the option you want, as it resets the default ignore list (bringing back a bunch of warnings, including W503). If you remove ignore =, both W504 and W503 are in the default ignore list, so neither will be reported.
As for your E501 (line too long), you can either add extend-ignore = E501 or set max-line-length appropriately.
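If you'd rather shorten the offending line than relax the limits, implicit string concatenation keeps each physical line under the cap and removes the line break before the == operator (a sketch, not part of the original answer):

def test_schema_org_script_from_list():
    # Adjacent string literals are joined at compile time, so the
    # expected value is identical to the original single long string.
    expected = (
        '<script type="application/ld+json">1</script>\n'
        '<script type="application/ld+json">2</script>'
    )
    assert schema_org_script_from_list([1, 2]) == expected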
For black, this is the suggested configuration:
[flake8]
max-line-length = 88
extend-ignore = E203
Note that there are cases where black cannot make a line short enough (as you're seeing), both from long strings and from long variable names.
Disclaimer: I'm the current flake8 maintainer.

Bazel Error After Upgrading Nodejs Rules - ERROR: defs.bzl has been removed from build_bazel_rules_nodejs

After upgrading build_bazel_rules_nodejs from 0.42.2 to 1.0.1 I get this error:
ERROR: /home/flolu/.cache/bazel/_bazel_flolu/698f7adad10ea020bcdb85216703ce08/external/build_bazel_rules_nodejs/defs.bzl:19:5: Traceback (most recent call last):
  File "/home/flolu/Desktop/minimal-bazel-monorepo/services/server/src/BUILD", line 76
    nodejs_image(name = "server", <2 more arguments>)
  File "/home/flolu/.cache/bazel/_bazel_flolu/698f7adad10ea020bcdb85216703ce08/external/io_bazel_rules_docker/nodejs/image.bzl", line 112, in nodejs_image
    nodejs_binary(name = binary, <2 more arguments>)
  File "/home/flolu/.cache/bazel/_bazel_flolu/698f7adad10ea020bcdb85216703ce08/external/build_bazel_rules_nodejs/defs.bzl", line 19, in nodejs_binary
    fail(<1 more arguments>)
ERROR: defs.bzl has been removed from build_bazel_rules_nodejs
Please update your load statements to use index.bzl instead.
See https://github.com/bazelbuild/rules_nodejs/wiki#migrating-off-build_bazel_rules_nodejsdefsbzl for help.
ERROR: error loading package 'services/server/src': Package 'services/server/src' contains errors
INFO: Elapsed time: 0.119s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (1 packages loaded)
FAILED: Build did NOT complete successfully (1 packages loaded)
Line 76 in the error refers to this part of the BUILD file:
load("#io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "server",
data = [":lib"],
entry_point = ":index.ts",
)
But my BUILD file doesn't reference defs.bzl anywhere, so I am confused by the error.
In detail, I have upgraded from
http_archive(
    name = "build_bazel_rules_nodejs",
    sha256 = "16fc00ab0d1e538e88f084272316c0693a2e9007d64f45529b82f6230aedb073",
    urls = ["https://github.com/bazelbuild/rules_nodejs/releases/download/0.42.2/rules_nodejs-0.42.2.tar.gz"],
)
to
http_archive(
    name = "build_bazel_rules_nodejs",
    sha256 = "e1a0d6eb40ec89f61a13a028e7113aa3630247253bcb1406281b627e44395145",
    urls = ["https://github.com/bazelbuild/rules_nodejs/releases/download/1.0.1/rules_nodejs-1.0.1.tar.gz"],
)
You can recreate the error by cloning this repo: https://github.com/flolude/minimal-bazel-monorepo/tree/48add7ddcad4d25e361e1c7f7f257cf916a797b2 and running
bazel test //services/server/src:test
There are some breaking changes between those versions of build_bazel_rules_nodejs. Namely, load statements like this:
load("@build_bazel_rules_nodejs//:defs.bzl", <whatever>)
need to become this:
load("@build_bazel_rules_nodejs//:index.bzl", <whatever>)
You also need to update your io_bazel_rules_docker to at least v0.13.0; judging by the release notes, that is the version compatible with rules_nodejs 1.0.1: https://github.com/bazelbuild/rules_docker/releases/
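A minimal sketch of the corresponding WORKSPACE entry (the sha256 is deliberately a placeholder; take the real checksum and archive URL from the v0.13.0 release page):

http_archive(
    name = "io_bazel_rules_docker",
    # Placeholder -- use the sha256 published with the v0.13.0 release.
    sha256 = "<sha256 from the release page>",
    urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.13.0/rules_docker-v0.13.0.tar.gz"],
)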
