AWS SAM template error: 'collections.OrderedDict' object has no attribute 'startswith' - node.js

I am getting this error while using a SAM template to deploy resources. Below is the script:
- sam package --template-file test.json --s3-bucket $s3_bucket --s3-prefix packages/my_folder/ --output-template-file samtemplate.yml
I get this error even after rolling back to the previous working state:
File "/usr/local/lib/python3.8/site-packages/samcli/lib/providers/sam_stack_provider.py", line 250, in
    return any([url.startswith(prefix) for prefix in ["s3://", "http://", "https://"]])
AttributeError: 'collections.OrderedDict' object has no attribute 'startswith'
After adding some debug messages, I got this error:
2021-04-22 06:42:32,820 | Unable to resolve property S3bucketname: OrderedDict([('Fn::Select', ['0', OrderedDict([('Fn::Split', ['/', OrderedDict([('Ref', 'TemplateS3BucketName')])])])])]). Leaving as is.
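The debug line suggests the failing property is built from intrinsic functions, which `sam package` cannot resolve: it expects a plain string starting with s3://, http:// or https://, so the unresolved OrderedDict triggers the AttributeError. A minimal illustration of the failing pattern (the resource name is hypothetical; only TemplateS3BucketName comes from the log):

```json
{
  "MyNestedStack": {
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
      "TemplateURL": {
        "Fn::Select": ["0", {"Fn::Split": ["/", {"Ref": "TemplateS3BucketName"}]}]
      }
    }
  }
}
```

Replacing the intrinsic-function expression with a literal S3 URL (or resolving it before packaging) avoids passing an OrderedDict where a string is expected.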


How to format the file path in an MLTable for Azure Machine Learning uploaded during a pipeline job?

How is the path to a (.csv) file to be expressed in an MLTable file that is created in a local folder but then uploaded as part of a pipeline job?
I'm following the Jupyter notebook automl-forecasting-task-energy-demand-advance from the azureml-examples repo (article and notebook). This example has an MLTable file (shown further below) referencing a .csv file with a relative path. Then in the pipeline the MLTable is uploaded to be accessible to a remote compute (a few things are omitted for brevity):
my_training_data_input = Input(
    type=AssetTypes.MLTABLE, path="./data/training-mltable-folder"
)
compute = AmlCompute(
    name=compute_name, size="STANDARD_D2_V2", min_instances=0, max_instances=4
)
forecasting_job = automl.forecasting(
    compute=compute_name,  # name of the compute target we created above
    # name="dpv2-forecasting-job-02",
    experiment_name=exp_name,
    training_data=my_training_data_input,
    # validation_data=my_validation_data_input,
    target_column_name="demand",
    primary_metric="NormalizedRootMeanSquaredError",
    n_cross_validations="auto",
    enable_model_explainability=True,
    tags={"my_custom_tag": "My custom value"},
)
returned_job = ml_client.jobs.create_or_update(
    forecasting_job
)
ml_client.jobs.stream(returned_job.name)
But running this gives the following error message:
Encountered user error while fetching data from Dataset. Error: UserErrorException:
Message: MLTable yaml schema is invalid:
Error Code: Validation
Validation Error Code: Invalid MLTable
Validation Target: MLTableToDataflow
Error Message: Failed to convert a MLTable to dataflow
uri path is not a valid datastore uri path
| session_id=857bd9a1-097b-4df6-aa1c-8871f89580d8
InnerException None
ErrorResponse
{
  "error": {
    "code": "UserError",
    "message": "MLTable yaml schema is invalid: \nError Code: Validation\nValidation Error Code: Invalid MLTable\nValidation Target: MLTableToDataflow\nError Message: Failed to convert a MLTable to dataflow\nuri path is not a valid datastore uri path\n| session_id=857bd9a1-097b-4df6-aa1c-8871f89580d8"
  }
}
The MLTable file looks like this:
paths:
  - file: ./nyc_energy_training_clean.csv
transformations:
  - read_delimited:
      delimiter: ','
      encoding: 'ascii'
  - convert_column_types:
      - columns: demand
        column_type: float
      - columns: precip
        column_type: float
      - columns: temp
        column_type: float
How am I supposed to run this? Thanks in advance!
For a remote path you can use the approach below; here is the documentation for creating data assets.
It's important to note that the path specified in the MLTable file must be a valid path in the cloud, not just a valid path on your local machine.
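As a sketch of what a cloud-resolvable MLTable path could look like (the datastore name workspaceblobstore is the workspace default, but both it and the blob path are assumptions here, not taken from the question):

```yaml
paths:
  - file: azureml://datastores/workspaceblobstore/paths/data/nyc_energy_training_clean.csv
```

The relative `./nyc_energy_training_clean.csv` form only resolves on the machine that holds the folder; an azureml:// datastore URI resolves from the remote compute as well.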

arangoimport: edge attribute missing or invalid

ArangoDB Version: 3.8
Storage Engine: RocksDB
Deployment Mode: Single Server
Deployment Strategy: Manual Start
Operating System: Ubuntu 20.04
Total RAM in your machine: 32GB
Disks in use: SSD
Used Package: Ubuntu .deb
Affected feature: arangoimport
(base) raphy@pc:~$ arangodb
2021-11-04T09:34:45+01:00 |INFO| Starting arangodb version 0.15.3, build 814f8be component=arangodb
2021-11-04T09:34:45+01:00 |INFO| Using storage engine 'rocksdb' component=arangodb
2021-11-04T09:34:45+01:00 |INFO| Serving as master with ID 'ef664d42' on :8528... component=arangodb
2021-11-04T09:34:45+01:00 |INFO| Waiting for 3 servers to show up.
component=arangodb
2021-11-04T09:34:45+01:00 |INFO| Use the following commands to start other servers: component=arangodb
arangodb --starter.data-dir=./db2 --starter.join 127.0.0.1
arangodb --starter.data-dir=./db3 --starter.join 127.0.0.1
2021-11-04T09:34:45+01:00 |INFO| ArangoDB Starter listening on 0.0.0.0:8528 (:8528) component=arangodb
I'm trying to import data in this way:
(base) raphy@pc:~$ arangoimport --server.database "ConceptNet" --collection "rel_type" "./ConceptNet/conceptnet.jsonl"
But I get these errors:
Connected to ArangoDB 'http+tcp://127.0.0.1:8529, version: 3.8.2, database: 'ConceptNet', username: 'root'
----------------------------------------
database: ConceptNet
collection: rel_type
create: no
create database: no
source filename: ./ConceptNet/conceptnet.jsonl
file type: json
threads: 2
on duplicate: error
connect timeout: 5
request timeout: 1200
----------------------------------------
Starting JSON import...
2021-11-04T14:49:48Z [165643] INFO [9ddf3] {general} processed 1945 bytes (3%) of input file
2021-11-04T14:49:48Z [165643] WARNING [e5a29] {general} at position 0: creating document failed with error 'edge attribute missing or invalid', offending document: {"_from":"pm","_to":"am","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/fr","process":"/s/process/wikiparsec/2"}}
2021-11-04T14:49:48Z [165643] WARNING [e5a29] {general} at position 1: creating document failed with error 'edge attribute missing or invalid', offending document: {"_from":"red","_to":"amber","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
2021-11-04T14:49:48Z [165643] WARNING [e5a29] {general} at position 2: creating document failed with error 'edge attribute missing or invalid', offending document: {"_from":"proprium","_to":"apelativum","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
2021-11-04T14:49:48Z [165643] WARNING [e5a29] {general} at position 3: creating document failed with error 'edge attribute missing or invalid', offending document: {"_from":"s","_to":"beze\t","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
2021-11-04T14:49:48Z [165643] WARNING [e5a29] {general} at position 4: creating document failed with error 'edge attribute missing or invalid', offending document: {"_from":"euphoria","_to":"bad_trip","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
2021-11-04T14:49:48Z [165643] WARNING [e5a29] {general} at position 5: creating document failed with error 'edge attribute missing or invalid', offending document: {"_from":"gooder","_to":"badder","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
2021-11-04T14:49:48Z [165643] WARNING [e5a29] {general} at position 6: creating document failed with error 'edge attribute missing or invalid', offending document: {"_from":"goodest","_to":"baddest","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
2021-11-04T14:49:48Z [165643] WARNING [e5a29] {general} at position 7: creating document failed with error 'edge attribute missing or invalid', offending document: {"_from":"goodie","_to":"baddie","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2","contributor":"/s/resource/wiktionary/fr"}}
2021-11-04T14:49:48Z [165643] WARNING [e5a29] {general} at position 8: creating document failed with error 'edge attribute missing or invalid', offending document: {"_from":"windy","_to":"calm","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
2021-11-04T14:49:48Z [165643] WARNING [e5a29] {general} at position 9: creating document failed with error 'edge attribute missing or invalid', offending document: {"_from":"anger","_to":"calm_down","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/fr","process":"/s/process/wikiparsec/2"}}
2021-11-04T14:49:48Z [165643] WARNING [e5a29] {general} at position 10: creating document failed with error 'edge attribute missing or invalid', offending document: {"_from":"get_angry","_to":"calm_down","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/fr","process":"/s/process/wikiparsec/2"}}
created: 0
warnings/errors: 11
updated/replaced: 0
ignored: 0
This is the jsonl file I'm trying to import:
conceptnet.jsonl:
{"_from":"pm","_to":"am","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/fr","process":"/s/process/wikiparsec/2"}}
{"_from":"red","_to":"amber","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
{"_from":"proprium","_to":"apelativum","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
{"_from":"s","_to":"beze\t","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
{"_from":"euphoria","_to":"bad_trip","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
{"_from":"gooder","_to":"badder","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
{"_from":"goodest","_to":"baddest","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
{"_from":"goodie","_to":"baddie","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2","contributor":"/s/resource>
{"_from":"windy","_to":"calm","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
{"_from":"anger","_to":"calm_down","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/fr","process":"/s/process/wikiparsec/2"}}
{"_from":"get_angry","_to":"calm_down","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/fr","process":"/s/process/wikiparsec/2"}}
I tried to modify the line in the jsonl file as follows:
{"_from":"pm","_to":"am","rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/fr","process":"/s/process/wikiparsec/2"}
But I still get this error:
(base) raphy@pc:~$ arangoimport --server.database "ConceptNet" --collection "rel_type" "./ConceptNet/conceptnet.jsonl"
Please specify a password:
Connected to ArangoDB 'http+tcp://127.0.0.1:8529, version: 3.8.2, database: 'ConceptNet', username: 'root'
----------------------------------------
database: ConceptNet
collection: rel_type
create: no
create database: no
source filename: ./ConceptNet/conceptnet.jsonl
file type: json
threads: 2
on duplicate: error
connect timeout: 5
request timeout: 1200
----------------------------------------
Starting JSON import...
2021-11-04T18:48:55Z [37684] WARNING [e5a29] {general} at position 0: creating document failed with error 'edge attribute missing or invalid', offending document: {"_from":"pm","_to":"am","rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/fr","process":"/s/process/wikiparsec/2"}
What am I doing wrong or missing? How can I solve the problem?
I found that saving the documents in the jsonl file as follows solves the problem:
conceptnet.jsonl:
{"_from":"conceptnet/pm","_to":"conceptnet/am","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/fr","process":"/s/process/wikiparsec/2"}}
{"_from":"conceptnet/red","_to":"conceptnet/amber","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
{"_from":"conceptnet/proprium","_to":"conceptnet/apelativum","rel":{"rel_type":"Antonym","language":"en","license":"-sa/4.0","sources":"/s/resource/wiktionary/en","process":"/s/process/wikiparsec/2"}}
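The fix works because ArangoDB edge documents require _from and _to to be full document handles of the form "collection/_key". Rather than editing the file by hand, the rewrite can be sketched in Python (the vertex collection name "conceptnet" is taken from the fixed file above; the helper name is mine):

```python
import json

def fix_edges(lines, vertex_collection="conceptnet"):
    """Prefix bare _from/_to values with a vertex collection name so they
    become full document handles ("collection/_key"), as ArangoDB requires
    for edge documents."""
    fixed = []
    for line in lines:
        doc = json.loads(line)
        for attr in ("_from", "_to"):
            if "/" not in doc[attr]:
                doc[attr] = f"{vertex_collection}/{doc[attr]}"
        fixed.append(json.dumps(doc))
    return fixed

sample = ['{"_from":"pm","_to":"am","rel":{"rel_type":"Antonym"}}']
fixed = fix_edges(sample)
print(fixed[0])
```

Alternatively, arangoimport can add the prefixes itself via its --from-collection-prefix and --to-collection-prefix options.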

Input format for Tensorflow models on GCP AI Platform

I have uploaded a model to GCP AI Platform Models. It's a simple multi-step Keras model with 5 features, trained on 168 lagged values. When I try to test the model, I get this strange error message:
"error": "Prediction failed: Error during model execution: <_MultiThreadedRendezvous of RPC that terminated with:\n\tstatus = StatusCode.FAILED_PRECONDITION\n\tdetails = \"Error while reading resource variable dense_7/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/dense_7/bias)\n\t [[{{node model_2/dense_7/BiasAdd/ReadVariableOp}}]]\"\n\tdebug_error_string = \"{\"created\":\"#1618946146.138507164\",\"description\":\"Error received from peer ipv4:127.0.0.1:8081\",\"file\":\"src/core/lib/surface/call.cc\",\"file_line\":1061,\"grpc_message\":\"Error while reading resource variable dense_7/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/dense_7/bias)\\n\\t [[{{node model_2/dense_7/BiasAdd/ReadVariableOp}}]]\",\"grpc_status\":9}\"\n>"
The input is in the following format: a list of shape (1, 168, 5). See the example below:
{
  "instances":
    [[[ 3.10978284e-01,  2.94650396e-01,  8.83664149e-01,
        1.60210423e+00, -1.47402699e+00],
      [ 3.10978284e-01,  2.94650396e-01,  5.23466315e-01,
        1.60210423e+00, -1.47402699e+00],
      [ 8.68576328e-01,  7.78699823e-01,  2.83334426e-01,
        1.60210423e+00, -1.47402699e+00]]]
}
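The shape of that request body can be sketched in plain Python (the zeros are placeholders standing in for real scaled feature values, not data from the question):

```python
import json

# One window of 168 timesteps, each with 5 features: shape (1, 168, 5).
window = [[[0.0] * 5 for _ in range(168)]]
payload = {"instances": window}
body = json.dumps(payload)
```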

AWS Lambda Layers - module 'dicttoxml' has no attribute 'dicttoxml'

I have an AWS Lambda function based on Python 3.7 and am trying to use the module dicttoxml via AWS layers. My Python code is as below:
import json
import dicttoxml

def lambda_handler(event, context):
    xml = dicttoxml.dicttoxml({"name": "Foo"})
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
On my local machine it works perfectly fine, but Lambda gives the error below:
{
  "errorMessage": "module 'dicttoxml' has no attribute 'dicttoxml'",
  "errorType": "AttributeError",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 4, in lambda_handler\n    xml = dicttoxml.dicttoxml({\"name\": \"Ankur\"})\n"
  ]
}
The directory structure of dicttoxml layer is as below:
dicttoxml.zip > python > dicttoxml > dicttoxml.py
I feel puzzled, what is wrong here?
I created a custom layer with dicttoxml and can confirm that it works.
The technique uses the Docker-based approach described in the recent AWS article:
How do I create a Lambda layer using a simulated Lambda environment with Docker?
Thus for this question, I verified it as follows:
Create empty folder, e.g. mylayer.
Go to the folder and create a requirements.txt file:
echo dicttoxml > ./requirements.txt
Run the following docker command:
docker run -v "$PWD":/var/task "lambci/lambda:build-python3.7" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.7/site-packages/; exit"
Create layer as zip:
zip -9 -r mylayer.zip python
Create lambda layer based on mylayer.zip in the AWS Console. Don't forget to specify Compatible runtimes to python3.7.
Test the layer in lambda using the following lambda function:
import dicttoxml

def lambda_handler(event, context):
    print(dir(dicttoxml))
The function executes correctly:
['LOG', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '__version__', 'collections', 'convert', 'convert_bool', 'convert_dict', 'convert_kv', 'convert_list', 'convert_none', 'default_item_func', 'dicttoxml', 'escape_xml', 'get_unique_id', 'get_xml_type', 'ids', 'key_is_valid_xml', 'logging', 'long', 'make_attrstring', 'make_id', 'make_valid_xml_name', 'numbers', 'parseString', 'randint', 'set_debug', 'unicode', 'unicode_literals', 'unicode_me', 'version', 'wrap_cdata']
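One plausible cause of the original error (an assumption, not verified against the asker's archive) is the extra nesting level: with dicttoxml.zip > python > dicttoxml > dicttoxml.py, `import dicttoxml` resolves to the package directory rather than the module file, and the submodule dicttoxml.dicttoxml is not imported automatically. A local sketch reproducing the same AttributeError:

```python
import os
import sys
import tempfile

# Recreate the layer's layout: a dicttoxml/ package containing dicttoxml.py.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "dicttoxml")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "dicttoxml.py"), "w") as f:
    f.write("def dicttoxml(d):\n    return b'<xml/>'\n")

sys.path.insert(0, root)
import dicttoxml  # imports the package directory, not dicttoxml/dicttoxml.py

err = None
try:
    dicttoxml.dicttoxml({"name": "Foo"})  # submodule, not a function
except AttributeError as exc:
    err = exc
print(err)
```

The Docker-based build above avoids this by letting pip lay out the files, so the module lands directly under site-packages.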

Unable to build local AMLS environment with private wheel

I am trying to write a small program using the AzureML Python SDK (v1.0.85) to register an Environment in AMLS and use that definition to construct a local Conda environment when experiments are being run (for a pre-trained model). The code works fine for simple scenarios where all dependencies are loaded from Conda / public PyPI, but when I introduce a private dependency (e.g. a utils library) I get an InternalServerError with the message "Error getting recipe specifications".
The code I am using to register the environment is (after having authenticated to Azure and connected to our workspace):
environment_name = config['environment']['name']
py_version = "3.7"
conda_packages = ["pip"]
pip_packages = ["azureml-defaults"]
private_packages = ["./env-wheels/utils-0.0.3-py3-none-any.whl"]

print(f"Creating environment with name {environment_name}")
environment = Environment(name=environment_name)
conda_deps = CondaDependencies()

print(f"Adding Python version: {py_version}")
conda_deps.set_python_version(py_version)

for conda_pkg in conda_packages:
    print(f"Adding Conda dependency: {conda_pkg}")
    conda_deps.add_conda_package(conda_pkg)

for pip_pkg in pip_packages:
    print(f"Adding Pip dependency: {pip_pkg}")
    conda_deps.add_pip_package(pip_pkg)

for private_pkg in private_packages:
    print(f"Uploading private wheel from {private_pkg}")
    private_pkg_url = Environment.add_private_pip_wheel(workspace=ws, file_path=Path(private_pkg).absolute(), exist_ok=True)
    print(f"Adding private Pip dependency: {private_pkg_url}")
    conda_deps.add_pip_package(private_pkg_url)

environment.python.conda_dependencies = conda_deps
environment.register(workspace=ws)
And the code I am using to create the local Conda environment is:
amls_environment = Environment.get(ws, name=environment_name, version=environment_version)
print(f"Building environment...")
amls_environment.build_local(workspace=ws)
The exact error message being returned when build_local(...) is called is:
Traceback (most recent call last):
  File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\core\environment.py", line 814, in build_local
    raise error
  File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\core\environment.py", line 807, in build_local
    recipe = environment_client._get_recipe_for_build(name=self.name, version=self.version, **payload)
  File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\_restclient\environment_client.py", line 171, in _get_recipe_for_build
    raise Exception(message)
Exception: Error getting recipe specifications. Code: 500
: {
  "error": {
    "code": "ServiceError",
    "message": "InternalServerError",
    "detailsUri": null,
    "target": null,
    "details": [],
    "innerError": null,
    "debugInfo": null
  },
  "correlation": {
    "operation": "15043e1469e85a4c96a3c18c45a2af67",
    "request": "19231be75a2b8192"
  },
  "environment": "westeurope",
  "location": "westeurope",
  "time": "2020-02-28T09:38:47.8900715+00:00"
}

Process finished with exit code 1
Has anyone seen this error before or able to provide some guidance around what the issue may be?
The issue was our firewall blocking the required requests between AMLS and the storage container (presumably to fetch the environment definitions / private wheels).
We resolved this by updating the firewall with appropriate ALLOW rules for the AMLS service to contact and read from the attached storage container.
Assuming that you'd like to run the script on a remote compute, my suggestion would be to pass the environment you just retrieved to a RunConfiguration, then pass that to a ScriptRunConfig, Estimator, or PythonScriptStep:
from azureml.core import Environment, ScriptRunConfig
from azureml.core.runconfig import DEFAULT_CPU_IMAGE

src = ScriptRunConfig(source_directory=project_folder, script='train.py')
# Set compute target to the one created in the previous step
src.run_config.target = cpu_cluster.name
# Set environment
amls_environment = Environment.get(ws, name=environment_name, version=environment_version)
src.run_config.environment = amls_environment
run = experiment.submit(config=src)
run
Check out the rest of the notebook here.
If you're looking for a local run this notebook might help.
