The code below is my attempt at passing a session parameter to Snowflake through Python. This is part of an existing codebase which runs in AWS Glue, and the only part of the following that doesn't work is the session_parameters.
I'm trying to understand how to add session parameters from within this code. Any help in understanding what is going on here is appreciated.
sf_credentials = json.loads(CACHE["SNOWFLAKE_CREDENTIALS"])
CACHE["sf_options"] = {
    "sfURL": "{}.snowflakecomputing.com".format(sf_credentials["account"]),
    "sfUser": sf_credentials["user"],
    "sfPassword": sf_credentials["password"],
    "sfRole": sf_credentials["role"],
    "sfDatabase": sf_credentials["database"],
    "sfSchema": sf_credentials["schema"],
    "sfWarehouse": sf_credentials["warehouse"],
    "session_parameters": {
        "QUERY_TAG": "Something",
    },
}
In AWS CloudWatch, I can see that the parameter was sent with the other options. In Snowflake, however, the parameter was never set.
I can add more detail where necessary; I just wasn't sure what details are needed.
It turns out that there is no need to specify that a given parameter is a session parameter when you are using the Spark Connector. So instead:
sf_credentials = json.loads(CACHE["SNOWFLAKE_CREDENTIALS"])
CACHE["sf_options"] = {
    "sfURL": "{}.snowflakecomputing.com".format(sf_credentials["account"]),
    "sfUser": sf_credentials["user"],
    "sfPassword": sf_credentials["password"],
    "sfRole": sf_credentials["role"],
    "sfDatabase": sf_credentials["database"],
    "sfSchema": sf_credentials["schema"],
    "sfWarehouse": sf_credentials["warehouse"],
    "QUERY_TAG": "Something",
}
This works perfectly. I found this in the Snowflake documentation for using the Spark Connector, in the section on setting session parameters.
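For contrast: if you are using the plain Snowflake Python connector rather than the Spark connector, a nested session_parameters dict is accepted there. A minimal sketch, assuming placeholder connection values:

import snowflake.connector

# session_parameters is a documented argument of snowflake.connector.connect();
# the account/user/password values below are placeholders.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    session_parameters={
        "QUERY_TAG": "Something",
    },
)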
I am trying to list all Parameters along with their tags, without listing the values of the parameters.
My initial approach was to call describe_parameters and then loop through the key names and perform list_tags. While doing so, I found out that ARNs are needed to perform list_tags, and these are not returned by describe_parameters.
Is there a way to get the parameters along with their tags without actually getting the parameter values?
You can do this with the Resource Groups Tagging API, provided the parameters are already tagged. Here's a basic example, without pagination.
import boto3

profile = "your_profile_name"
region = "us-east-1"

session = boto3.session.Session(profile_name=profile, region_name=region)
client = session.client('resourcegroupstaggingapi')

# Returns all tagged SSM resources (parameters included) with their tags.
response = client.get_resources(
    ResourceTypeFilters=[
        'ssm',
    ],
)
print(response)
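If you have more than one page of results, the same call is available through a paginator; a minimal sketch of the paginated variant:

# get_resources supports pagination; iterate over all pages of tagged
# SSM resources and print each ARN with its tags.
paginator = client.get_paginator('get_resources')
for page in paginator.paginate(ResourceTypeFilters=['ssm']):
    for resource in page['ResourceTagMappingList']:
        print(resource['ResourceARN'], resource['Tags'])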
If you want to discover untagged parameters, this won't work. A better option would be to set up AWS Config rules to highlight these issues, without you having to manage searching for them.
I am looking for an example of how to start execution of a Step Function from API Gateway using Terraform and the aws_apigatewayv2_integration resource. I am using an HTTP API (I have only found an older example for REST APIs on Stack Overflow).
Currently I have this:
resource "aws_apigatewayv2_integration" "workflow_proxy_integration" {
api_id = aws_apigatewayv2_api.default.id
credentials_arn = aws_iam_role.api_gateway_step_functions.arn
integration_type = "AWS_PROXY"
integration_subtype = "StepFunctions-StartExecution"
description = "The integration which will start the Step Functions workflow."
payload_format_version = "1.0"
request_parameters = {
StateMachineArn = aws_sfn_state_machine.default.arn
}
}
Right now, my State Machine receives an empty input ("input": {}). When I try to add input to the request_parameters section, I get this error:
Error: error updating API Gateway v2 integration: BadRequestException: Parameter: input does not fit schema for Operation: StepFunctions-StartExecution.
I spent over an hour looking for a solution to a similar problem I was having with request_parameters. AWS's documentation currently uses camelCase for the keys in all of its examples (stateMachineArn, input, etc.), which made this difficult to research.
You'll want to use PascalCase for your keys, similar to how you already did for StateMachineArn. So instead of input, you'll use Input.
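Applied to the integration from the question, that would look something like the sketch below. The Input mapping is an assumption: "$request.body" forwards the HTTP request body as the execution input, but you can map whatever you need there.

resource "aws_apigatewayv2_integration" "workflow_proxy_integration" {
  api_id                 = aws_apigatewayv2_api.default.id
  credentials_arn        = aws_iam_role.api_gateway_step_functions.arn
  integration_type       = "AWS_PROXY"
  integration_subtype    = "StepFunctions-StartExecution"
  description            = "The integration which will start the Step Functions workflow."
  payload_format_version = "1.0"

  request_parameters = {
    StateMachineArn = aws_sfn_state_machine.default.arn
    # PascalCase key; "input" (camelCase) is rejected by the schema.
    Input           = "$request.body"
  }
}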
I am attempting to set the database/document context programmatically via the Python API. My steps are as follows:
session = BaseXClient.Session("localhost", 1984, "admin", "admin")
query = session.query("//node")
query.context("doc('dbname')") # **NOT SURE HOW TO SET THE DB TO USE**
query.execute()
I already know that I can simply use the session object as follows and it works fine:
session.execute("xquery doc('dbname')//node/child")
But I am looking for a way to OPEN a database within the scope of the program call, separate from the query string. I am not able to find documentation on how to explicitly set the database prior to executing the query using the context object. I have looked at the source code for the Python BaseXClient, and there is a context method for the Query() instance that is not well documented. I am attempting to use this to set the database, without much luck.
The context you have supplied is just a string; it is not evaluated. In a client-server context, it is difficult to see how one could pass in a database here.
I think your alternatives are to use the execute command to open a database before running the query, which will set the context, e.g.
var q = session.execute("open mydatabase", log.print)
var q = session.query("count(*)")
or to use the query bind command to pass parameters:
var q = session.query("declare variable $db external; count(collection($db))")
q.bind("db", "mydatabase","",log.print);
q.execute(log.print);
Sorry, these examples use JavaScript and my BaseX Node client, as I am not familiar with the Python API, but I am sure the same approach applies there.
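For what it's worth, here is a rough (untested) Python translation of the same two approaches, assuming the standard BaseXClient module that ships with BaseX:

from BaseXClient import BaseXClient

session = BaseXClient.Session("localhost", 1984, "admin", "admin")
try:
    # Approach 1: open the database first; this sets the query context.
    session.execute("open mydatabase")
    query = session.query("count(//node)")
    print(query.execute())
    query.close()

    # Approach 2: bind the database name as an external variable.
    query = session.query("declare variable $db external; count(collection($db))")
    query.bind("db", "mydatabase")
    print(query.execute())
    query.close()
finally:
    session.close()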
I am building a REST API which connects to a Neo4j instance. I am using the koa-neo4j library as the basis (https://github.com/assister-ai/koa-neo4j-starter-kit). I am a beginner with all these technologies, but thanks to some help from this forum I have the basic functionality working. For example, the code below allows me to create a new node with the label "metric" and set the name and dateAdded properties.
URL:
/metric?metricName=Test&dateAdded=2/21/2017
index.js
app.defineAPI({
    method: 'POST',
    route: '/api/v1/imm/metric',
    cypherQueryFile: './src/api/v1/imm/metric/createMetric.cyp'
});
createMetric.cyp"
CREATE (n:metric {
  name: $metricName,
  dateAdded: $dateAdded
})
RETURN ID(n) AS id
However, I am struggling to see how to approach more complicated examples. How can I handle situations where I don't know beforehand how many properties will be added when creating a new node, or where I want to create multiple nodes in a single POST? Ideally, I would like to be able to pass something like JSON as part of the POST body containing all of the nodes, labels, and properties that I want to create. Is something like this possible? I tried using the Cypher query below and passing a JSON string in the POST body, but it didn't work.
UNWIND $props AS properties
CREATE (n:metric)
SET n = properties
RETURN n
Would I be better off switching to the Neo4j REST API instead of the Bolt protocol and the koa-neo4j framework? From my research I thought it was better to use Bolt, but I want to have a REST API as the middle layer between my front end and back end, so I am willing to change over if this will be easier in the long term.
Thanks for the help!
Your Cypher syntax is bad in a couple of ways.
UNWIND only accepts a collection as its argument, not a string.
SET n = properties is only legal if properties is a map, not a string.
This query should work for creating a single node (assuming that $props is a map containing all the properties you want to store with the newly created node):
CREATE (n:metric $props)
RETURN n
If you want to create multiple nodes, then this query (essentially the same as yours) should work (but only if $prop_collection is a collection of maps):
UNWIND $prop_collection AS props
CREATE (n:metric)
SET n = props
RETURN n
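In that case, the POST body that binds $prop_collection would be a JSON array of maps, something like the following (the property names here are just examples):

{
  "prop_collection": [
    {"name": "metric1", "dateAdded": "2/21/2017"},
    {"name": "metric2", "dateAdded": "2/22/2017"}
  ]
}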
I too have faced difficulties when trying to pass complex types as arguments to Neo4j. This has to do with type conversions between JavaScript and Cypher over Bolt, and there is not much one can do except file an issue in the official Neo4j JavaScript driver repo. koa-neo4j uses the official driver under the hood.
One way to go about such scenarios in koa-neo4j is to use JavaScript to manipulate the arguments before sending them to Cypher:
https://github.com/assister-ai/koa-neo4j#preprocess-lifecycle
It is also possible to further manipulate the results of a Cypher query using the postProcess lifecycle hook:
https://github.com/assister-ai/koa-neo4j#postprocess-lifecycle
I need to pass request parameters to a specified Zeppelin paragraph and have them available to the Spark context. To be honest, this is proving a real nightmare. I can write some JS in the %angular interpreter to retrieve the query parameters, but as z.angularBind("myparam", "value") currently only works in the Spark interpreter (Scala), I can't use this.
My next thought was to retrieve the Paragraph and/or Notebook object; I'm thinking it must have a reference somewhere to the URL that invoked it. However, all you can easily get from the InterpreterContext is the paragraphId/noteId.
Anyone point me in the right direction?
You can pass parameters through dynamic forms. Create the parameters as dynamic forms in your notebook. To pass values for the dynamic forms, use the following:
{
  "params": {
    "formLabel1": "value1",
    "formLabel2": "value2"
  }
}
Doc: https://zeppelin.apache.org/docs/0.7.2/rest-api/rest-notebook.html#run-a-paragraph-synchronously
Note that you can pass params only when you want to run a single paragraph.
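For reference, a minimal sketch of that REST call from Python; the note and paragraph IDs are placeholders, and the endpoint is the run-paragraph-synchronously one from the doc linked above:

import requests

zeppelin_url = "http://localhost:8080"
note_id = "2A94M5J1Z"                        # placeholder note ID
paragraph_id = "20150210-015259_1403135953"  # placeholder paragraph ID

# POST /api/notebook/run/{noteId}/{paragraphId} runs a single paragraph
# synchronously, passing dynamic-form values in "params".
resp = requests.post(
    "{}/api/notebook/run/{}/{}".format(zeppelin_url, note_id, paragraph_id),
    json={"params": {"formLabel1": "value1", "formLabel2": "value2"}},
)
print(resp.json())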