I'm creating a stack instance using the Python boto3 SDK. According to the documentation I should be able to use ParameterOverrides, but I'm getting the following error:
botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in input: "ParameterOverrides", must be one of: StackSetName, Accounts, Regions, OperationPreferences, OperationId
Environment:
aws-cli/1.11.172 Python/2.7.14 botocore/1.7.30
Imports used:
import boto3
import botocore
Following is the code:
try:
    stackset_instance_response = stackset_client.create_stack_instances(
        StackSetName=cloudtrail_stackset_name,
        Accounts=[
            account_id
        ],
        Regions=[
            stack_region
        ],
        OperationPreferences={
            'RegionOrder': [
                stack_region
            ],
            'FailureToleranceCount': 0,
            'MaxConcurrentCount': 1
        },
        ParameterOverrides=[
            {
                'ParameterKey': 'CloudtrailBucket',
                'ParameterValue': 'test-bucket'
            },
            {
                'ParameterKey': 'Environment',
                'ParameterValue': 'SANDBOX'
            },
            {
                'ParameterKey': 'IsCloudTrailEnabled',
                'ParameterValue': 'NO'
            }
        ]
    )
    print("Stackset create Response : " + str(stackset_instance_response))
    operation_id = stackset_instance_response['OperationId']
    print(operation_id)
except botocore.exceptions.ClientError as e:
    print("Stackset creation error : " + str(e))
I'm not sure what I'm doing wrong; any help would be greatly appreciated.
Thank you.
1.8.0 is the first version of botocore that has ParameterOverrides defined:
https://github.com/boto/botocore/blob/1.8.0/botocore/data/cloudformation/2010-05-15/service-2.json#L1087-L1090
1.7.30 doesn't have that defined. https://github.com/boto/botocore/blob/1.7.30/botocore/data/cloudformation/2010-05-15/service-2.json
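As a quick check, you can print the installed botocore version at runtime before calling the API (a minimal sketch; how you upgrade depends on your environment):
import botocore

# ParameterOverrides is only defined in botocore >= 1.8.0
print(botocore.__version__)

# if this prints an older version, upgrade both packages, for example:
#   pip install --upgrade boto3 botocore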
I have this Python code to register a Google Cloud Storage (GCS) snapshot repository in Elasticsearch:
import requests
from grabconfig import grabconfig

(HOSTS, ign) = grabconfig()

reqHeaders = {'content-type': 'application/json'}

for h in HOSTS:
    url = f'http://{h}:9200'
    r = requests.put(f'{url}/_snapshot/prod_backup2',
                     '''{ "type" : "gcs" }, { "settings" : { "client" : "secondary", "bucket" : "prod_backup2" },
                     { "compress" : "true" }}''',
                     headers=reqHeaders)
    print(r)
    print(r.json())

    r2 = requests.get(f'{url}/_cat/snapshots')
    print(r2)
    print(r2.json())
The configuration file I am using is the prod.py one:
HOSTS = ['10.x.x.x']
BACKUP_REPO = ['prod_backup2']
But when I run the code, I always get this error:
<Response [500]>
{'error': {'root_cause': [{'type': 'repository_exception', 'reason': '[prod_backup2] repository type [gcs] does not exist'}], 'type': 'repository_exception', 'reason': '[prod_backup2] repository type [gcs] does not exist'}, 'status': 500}
I think I found it: the gcs plugin was not installed on the server I was targeting.
That's supposed to be fixed by Monday, so I'm on to the next task.
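For anyone hitting the same 500, a quick way to confirm whether the repository-gcs plugin is installed is to list each node's plugins over the same HTTP API (a minimal sketch reusing the HOSTS list from the question):
import requests
from grabconfig import grabconfig

(HOSTS, ign) = grabconfig()

for h in HOSTS:
    # every node must list repository-gcs here before the snapshot registration can work
    r = requests.get(f'http://{h}:9200/_cat/plugins')
    print(h, r.text)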
I am trying to access a Snowflake datasource using the "great_expectations" library.
The following is what I tried so far:
from ruamel import yaml
import great_expectations as ge
from great_expectations.core.batch import BatchRequest, RuntimeBatchRequest

context = ge.get_context()

datasource_config = {
    "name": "my_snowflake_datasource",
    "class_name": "Datasource",
    "execution_engine": {
        "class_name": "SqlAlchemyExecutionEngine",
        "connection_string": "snowflake://myusername:mypass@myaccount/myDB/myschema?warehouse=mywh&role=myadmin",
    },
    "data_connectors": {
        "default_runtime_data_connector_name": {
            "class_name": "RuntimeDataConnector",
            "batch_identifiers": ["default_identifier_name"],
        },
        "default_inferred_data_connector_name": {
            "class_name": "InferredAssetSqlDataConnector",
            "include_schema_name": True,
        },
    },
}
print(context.test_yaml_config(yaml.dump(datasource_config)))
I initialized great_expectations before executing the above code:
great_expectations init
but I am getting the error below:
great_expectations.exceptions.exceptions.DatasourceInitializationError: Cannot initialize datasource my_snowflake_datasource, error: 'NoneType' object has no attribute 'create_engine'
What am I doing wrong?
Your configuration seems to be ok, corresponding to the example here.
If you look at the traceback you should notice that the error propagates starting at the file great_expectations/execution_engine/sqlalchemy_execution_engine.py in your virtual environment.
The actual line where the error occurs is:
self.engine = sa.create_engine(connection_string, **kwargs)
And if you search for that sa, you find at the top of that file:
try:
    import sqlalchemy as sa
    make_url = import_make_url()
except ImportError:
    sa = None
So sqlalchemy is not installed, and you don't get it automatically in your environment when you install great_expectations. The thing to do is to install snowflake-sqlalchemy, since you want to use SQLAlchemy's Snowflake dialect (an assumption based on your connection_string):
/your/virtualenv/bin/python -m pip install snowflake-sqlalchemy
After that you should no longer get an error; it looks like test_yaml_config then just waits for the connection to time out.
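As a quick sanity check after installing, both imports below should succeed (a minimal sketch; snowflake.sqlalchemy is the module provided by the snowflake-sqlalchemy package):
import sqlalchemy as sa
from snowflake.sqlalchemy import URL  # fails if snowflake-sqlalchemy is not installed

print(sa.__version__)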
What worries me greatly is the documented use of a deprecated API of ruamel.yaml.
The function ruamel.yaml.dump is going to be removed in the near future, and you
should use the .dump() method of a ruamel.yaml.YAML() instance.
You should use the following code instead:
import sys
from ruamel.yaml import YAML
import great_expectations as ge

context = ge.get_context()

datasource_config = {
    "name": "my_snowflake_datasource",
    "class_name": "Datasource",
    "execution_engine": {
        "class_name": "SqlAlchemyExecutionEngine",
        "connection_string": "snowflake://myusername:mypass@myaccount/myDB/myschema?warehouse=mywh&role=myadmin",
    },
    "data_connectors": {
        "default_runtime_data_connector_name": {
            "class_name": "RuntimeDataConnector",
            "batch_identifiers": ["default_identifier_name"],
        },
        "default_inferred_data_connector_name": {
            "class_name": "InferredAssetSqlDataConnector",
            "include_schema_name": True,
        },
    },
}

yaml = YAML()
yaml.dump(datasource_config, sys.stdout, transform=context.test_yaml_config)
I'll make a PR for great-expectations to update their documentation/use of ruamel.yaml.
I have a Windows machine to which I want to add a VM extension using the Azure Python SDK. I send the following request:
{'location': 'westus',
 'tags': None,
 'publisher': 'Microsoft.Compute',
 'virtual_machine_extension_type': 'CustomScriptExtension',
 'type_handler_version': '1.4',
 'settings': '{"file_uris": ["https://mysite.azurescripts.net/ps_enable_winrm_http.ps1"], "command_to_execute": "powershell -ExecutionPolicy Unrestricted -file ps_enable_winrm_http.ps1"}'}
but it fails with the following exception:
configure virtual_machine '946b4246-a604-4b01-9e6a-09ed64a93bdb' failed with this error :
VM has reported a failure when processing extension '13da0dc5-09c0-4e56-a35d-fdbc42432e11'.
Error message: "Invalid handler configuration. Exiting.
Error Message: Expecting state 'Element'.. Encountered 'Text' with name '', namespace ''. "
More information on troubleshooting is available at https://aka.ms/VMExtensionCSEWindowsTroubleshoot
Here is a simple code snippet that I use:
vm_extension_name = "{0}".format(uuid4())
vm_extension_params = {
    'location': location_val,
    'tags': tags_val,
    'publisher': 'Microsoft.Compute',
    'virtual_machine_extension_type': 'CustomScriptExtension',
    'type_handler_version': type_handler_version,
    'auto_upgrade_minor_version': True,
    'settings': json.dumps({
        'fileUris': file_uris,
        'commandToExecute': command_to_execute
    })
}
logger.info("sending {0}".format(vm_extension_params))
Any ideas? Should I send something differently, or am I missing something in the above request that causes the issue? Thanks for the help in advance.
When we use the Python SDK to install the Custom Script Extension, we should create a VirtualMachineExtension object, and its settings parameter should be an object (a dict). But you define it as a str: please update it by removing the json.dumps() call (the quotes) so the dict is passed directly. For more details, please refer to the documentation.
For example:
from azure.mgmt.compute import ComputeManagementClient
from azure.common.credentials import ServicePrincipalCredentials

AZURE_TENANT_ID = ''
AZURE_CLIENT_ID = ''
AZURE_CLIENT_SECRET = ''
AZURE_SUBSCRIPTION_ID = ''

credentials = ServicePrincipalCredentials(client_id=AZURE_CLIENT_ID, secret=AZURE_CLIENT_SECRET, tenant=AZURE_TENANT_ID)
compute_client = ComputeManagementClient(credentials, AZURE_SUBSCRIPTION_ID)

resource_group_name = 'stan'
vm_name = 'win2016'

params_create = {
    'location': 'CentralUS',
    'tags': None,
    'publisher': 'Microsoft.Compute',
    'virtual_machine_extension_type': 'CustomScriptExtension',
    'type_handler_version': '1.4',
    'settings': {
        'fileUris': ['https://***/test/test.ps1'],
        'commandToExecute': 'powershell -ExecutionPolicy Unrestricted -File test.ps1'
    }
}

ext_poller = compute_client.virtual_machine_extensions.create_or_update(
    resource_group_name,
    vm_name,
    'test',
    params_create,
)
ext = ext_poller.result()
print(ext)
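If you want to verify the result beyond printing the extension, you can read it back after the poller completes and check its provisioning state (a short sketch reusing the client and names from the example above):
ext = compute_client.virtual_machine_extensions.get(resource_group_name, vm_name, 'test')
print(ext.provisioning_state)  # should be 'Succeeded' once the script has run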
I'm trying to code a dictionary-based logging configuration and have been stumped by a ValueError that occurs when I run the program. I've stripped it down to the essentials and the problem remains. I've read the 3.5 docs, the logging HOWTO, the Logging Cookbook, etc., but unfortunately the solution has not presented itself. Any help would be appreciated.
Also, I'm only 3 weeks into Python, so I may just be out of my depth at this point. Here's the code...
import logging.config

log_config = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose_formatter': {
            'format': '%(levelname)s: %(name)s: %(asctime)s.%(msecs).03d : '
                      '%(message)s: %(process)s: %(processName)s',
            'datefmt': '%Y-%m-%d %H:%M:%S'
        },
        'precise_formatter': {
            'format': '%(levelname)s: %(name)s: %(asctime)s.%(msecs).03d : '
                      '%(message)s',
            'datefmt': '%Y-%m-%d %H:%M:%S'
        },
        'brief_formatter': {
            'format': '%(levelname)s: %(message)s'
        }
    },
    'handlers': {
        'con_handler': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'precise_formatter',
            'stream': 'ext://sys.stdout'
        },
        'file_handler': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logger.log',
            'maxBytes': 1048576,
            'backupCount': 4,
            'level': 'DEBUG',
            'formatter': 'precise_formatter',
            'encoding': 'utf8'
        }
    },
    'loggers': {
        'level': 'DEBUG',
        'handlers': ['con_handler', 'file_handler']
    }
}

logging.config.dictConfig(log_config)
logger = logging.getLogger(__name__)
logger.critical('This should always be seen!')
When run, I receive the following:
ValueError was unhandled by user code
Message: Unable to configure logger 'handlers': 'ConvertingList' object has no attribute 'get'
or sometimes this...
ValueError was unhandled by user code
Message: Unable to configure logger 'level': 'str' object has no attribute 'get'
I suspect that the different errors may have to do with the sometimes changing order of the dictionary?
Change the loggers section to:
'loggers': {
    '': {
        'level': 'DEBUG',
        'handlers': ['con_handler', 'file_handler']
    }
}
The '' (empty string) refers to the root logger. You can add more loggers for different components:
'loggers': {
    '': {
        'level': 'DEBUG',
        'handlers': ['con_handler', 'file_handler']
    },
    'bottle': {  # I only want error level from bottle :)
        'level': 'ERROR',
        'handlers': ['con_handler', 'file_handler']
    }
}
To configure the root logger, use a root key in your log_config dictionary.
root - this will be the configuration for the root logger.
Source: Dictionary Schema Details
Following this description your config should look something like this:
log_config = {
    ...
    'handlers': {
        'con_handler': ...,
        'file_handler': ...
    },
    'loggers': {
        'other_logger': ...
    },
    'root': {
        'level': 'DEBUG',
        'handlers': ['con_handler', 'file_handler']
    }
}
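Putting it together, a minimal runnable version of the corrected configuration looks like this (a sketch trimmed to just the console handler; the file handler from the question can be added back unchanged):
import logging
import logging.config

log_config = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'brief_formatter': {'format': '%(levelname)s: %(message)s'}
    },
    'handlers': {
        'con_handler': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'brief_formatter',
            'stream': 'ext://sys.stdout'
        }
    },
    'root': {  # root logger configuration lives at the top level, not under 'loggers'
        'level': 'DEBUG',
        'handlers': ['con_handler']
    }
}

logging.config.dictConfig(log_config)
logging.getLogger(__name__).critical('This should always be seen!')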
The following Groovy script fails when run from the command line:
@Grab("org.apache.poi:poi:3.9")
println "test"
Error:
unexpected token: println # line 2, column 1.
println "test"
^
1 error
Removing the Grab, it works!
Anything I missed?
$>groovy -v
Groovy Version: 2.1.7 JVM: 1.7.0_25 Vendor: Oracle Corporation OS: Linux
Annotations can only be applied to certain targets. See SO: Why can't I do a method call after a @Grab declaration in a Groovy script?
@Grab("org.apache.poi:poi:3.9")
dummy = null
println "test"
Alternatively you can use grab as a method call:
import static groovy.grape.Grape.grab
grab(group: "org.apache.poi", module: "poi", version: "3.9")
println "test"
For more information refer to Groovy Language Documentation > Dependency management with Grape.
File 'Grabber.groovy':
package org.taste

import groovy.grape.Grape

// List<List[]> artifacts => [[<group>,<module>,<version>,[<classifier>]],..]
static def grab(List<List[]> artifacts) {
    ClassLoader classLoader = new groovy.lang.GroovyClassLoader()
    def eal = Grape.getEnableAutoDownload()
    artifacts.each { artifact ->
        Map param = [
            classLoader: classLoader,
            group      : artifact.get(0),
            module     : artifact.get(1),
            version    : artifact.get(2),
            classifier : (artifact.size() < 4) ? null : artifact.get(3)
        ]
        println param
        Grape.grab(param)
    }
    Grape.setEnableAutoDownload(eal)
}
Usage:
package org.taste

import org.taste.Grabber

Grabber.grab([
    ["org.codehaus.groovy.modules.http-builder", "http-builder", '0.7.1'],
    ["org.postgresql", "postgresql", '42.3.1', null],
    ["com.oracle.database.jdbc", "ojdbc8", '12.2.0.1', null]
])