How to send (set) a parameter dictionary to a zeep client - python-3.x

Sorry if this question has already been asked.
I am trying to send some parameters as a dict, but I am getting results of None or "node not found".
In the parameters I am sending the NodeName (GOAFB) and want to change its NodeDetail as specified in param.
This node name is available at that address; I verified it with the get method.
Below is the code I tried:
from zeep import Client
from zeep.transports import Transport
from requests import Session
from requests.auth import HTTPBasicAuth
from zeep.wsse.username import UsernameToken
import json
wsdl = "http://10.2.1.8/ws/17.0/Bhaul.asmx?wsdl"
session = Session()
client = Client(wsdl, transport=Transport(session=session), wsse=UsernameToken('admin', 'password'))
param = {
    "Ib440ConfigSet": {
        "NodeName": "GOAFB",
        "NodeDetail": {
            "Custom": [
                {
                    "Name": "Circle",
                    "Value": "KOLKATA"
                },
                {
                    "Name": "SGW",
                    "Value": "1010"
                }
            ]
        }
    }
}
dd = client.service.Ib440ConfigGet("GOAFB")
client.service.Ib440ConfigSet(*param)
Please help me get this working.

In order to send a dict as keyword arguments, you need to unpack it with a double asterisk.
So this should work for you:
client.service.Ib440ConfigSet(**param)
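For illustration, here is a minimal sketch of the difference between single- and double-asterisk unpacking in plain Python (config_set is a hypothetical stand-in for the zeep service operation):

# config_set is a hypothetical stand-in for client.service.Ib440ConfigSet
def config_set(Ib440ConfigSet=None):
    return Ib440ConfigSet

param = {"Ib440ConfigSet": {"NodeName": "GOAFB"}}

# *param unpacks only the dict's keys, i.e. config_set("Ib440ConfigSet"),
# which passes the key string as a positional argument.
# **param unpacks key/value pairs as keyword arguments:
result = config_set(**param)  # config_set(Ib440ConfigSet={"NodeName": "GOAFB"})
print(result)                 # {'NodeName': 'GOAFB'}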

Related

Unable to write JSON data into InfluxDB 2.x using python client

I'm trying to insert simple JSON data into InfluxDB 2.x using the Python client, influxdb_client.
The code creates the measurement and all the fields and tags, but it does not insert the data.
This is how I'm connecting to InfluxDB:
from datetime import datetime
from influxdb_client import InfluxDBClient as FluxDBClient
from influxdb_client import Point, WriteOptions
from influxdb_client.client.write_api import SYNCHRONOUS
api_token = "gC4vfGRHOW_OMHsXtLBBDnY_heI76GkqQgZlouE8hNYZOVUVLHvhoo79Q-5b0Tvj82rmTofXjywx_tBoTXmT8w=="
org = "0897e8ca6b394328"
client = FluxDBClient(url="http://localhost:8086/", token=api_token, org=org, debug=True, timeout=50000)
Here is the code to insert data:
points = [{
    "measurement": "temp",
    "time": datetime.now(),
    "tags": {
        "weather_station": "LAHORE",
        "module_detail_id": 1,
    },
    "fields": {
        "Rad1h": 1.2,
        "TTT": 100,
        "N": 30,
        "VV": 20,
        "power": 987,
    }
}]
options = WriteOptions(batch_size=1, flush_interval=8, jitter_interval=0, retry_interval=1000)
_write_client = client.write_api(write_options=options)
_write_client.write("Test", "0897e8ca6b394328", points)
_write_client.__del__()
client.__del__()
When I run the script, I get a 204 response, which means the write request executed successfully. But when I query the bucket, there are no records.
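One thing worth checking, as a hedged sketch rather than a confirmed fix: the Python client treats a naive datetime as UTC, and InfluxDB stores timestamps in UTC, so datetime.now() in a timezone ahead of UTC writes points into the future, outside the default query range of the UI. A minimal synchronous write with an explicit UTC timestamp, reusing api_token and org from above:

from datetime import datetime, timezone
from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086/", token=api_token, org=org)
write_api = client.write_api(write_options=SYNCHRONOUS)  # blocking write, no batching

point = {
    "measurement": "temp",
    "time": datetime.now(timezone.utc),  # timezone-aware UTC timestamp
    "tags": {"weather_station": "LAHORE"},
    "fields": {"power": 987.0},
}
write_api.write(bucket="Test", org=org, record=point)
client.close()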

"Has an invalid foreign key" error in Django TestCase

I have a Purchase model and a Transaction model; Transaction has a ForeignKey to Purchase. When I run the tests, the first one, test_payment_request, succeeds, but the second one, test_payment_transaction_state, fails with the following error:
django.db.utils.IntegrityError: The row in table 'transactions_transactionmodel' with primary key '0664aefce71447699d8ca9e7677ba4cc' has an invalid foreign key: transactions_transactionmodel.purchase_id contains a value 'ba7dc5ac0e1c4b9eb009e772f405f5db' that does not have a corresponding value in purchases_purchasemodel.id.
This is my test:
import datetime
import socket
from django.test import TestCase
from .payment import PaymentTransactions
from apps.purchases.models import PurchaseModel


class PaymentTransactionsTestCase(TestCase):
    def setUp(self):
        self.purchase = {
            "purchase": PurchaseModel(
                total_value=124236,
                products=[
                    {
                        "name": "Aretes",
                        "value": "6490"
                    },
                    {
                        "name": "Manilla",
                        "value": "6.000"
                    }
                ],
                purchase_date=datetime.datetime.utcnow()
            ),
            "value": 124236,
            "client_ip": socket.gethostbyname(socket.gethostname())
        }

    def test_payment_request(self):
        error, payment, transaction = PaymentTransactions().\
            payment_transaction_request(**self.purchase)
        self.assertFalse(error)
        self.assertTrue(payment)
        self.assertIn("tpaga_payment_url", payment)
        self.assertIn("token", payment)
        self.assertEquals(transaction.token, payment["token"])
        print("paso prueba 1")

    def test_payment_transaction_state(self):
        purchase = {
            "purchase": PurchaseModel(
                total_value=124236,
                products=[
                    {
                        "name": "Aretes",
                        "value": "6490"
                    },
                    {
                        "name": "Manilla",
                        "value": "6.000"
                    }
                ],
                purchase_date=datetime.datetime.utcnow()
            ),
            "value": 124236,
            "client_ip": socket.gethostbyname(socket.gethostname())
        }
        error, payment, transaction = PaymentTransactions().\
            payment_transaction_request(**purchase)
        self.assertFalse(error)
        error, transaction_created = PaymentTransactions().\
            payment_transaction_state(transaction.id)
        self.assertFalse(error)
        self.assertEquals(transaction_created.state, transaction.state)
But I don't know what is happening; if someone knows, please explain it to me.
Change models.SOMETHING to models.CASCADE in the ForeignKey field of your Transaction model.
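For reference, a minimal sketch of what that looks like on the transaction side, assuming the model and field names implied by the error message (transactions_transactionmodel.purchase_id):

from django.db import models
from apps.purchases.models import PurchaseModel


class TransactionModel(models.Model):
    # on_delete is required on every ForeignKey; models.CASCADE deletes
    # the transactions when the referenced purchase is deleted.
    purchase = models.ForeignKey(PurchaseModel, on_delete=models.CASCADE)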

Unable to execute mtermvectors elasticsearch query from AWS EMR cluster using Spark

I am trying to execute this Elasticsearch query via Spark:
POST /aa6/_mtermvectors
{
    "ids": [
        "ABC",
        "XYA",
        "RTE"
    ],
    "parameters": {
        "fields": [
            "attribute"
        ],
        "term_statistics": true,
        "offsets": false,
        "payloads": false,
        "positions": false
    }
}
The code that I have written in Zeppelin is:
def createString(): String = {
    return s"""_mtermvectors {
        "ids": [
            "ABC",
            "XYA",
            "RTE"
        ],
        "parameters": {
            "fields": [
                "attribute"
            ],
            "term_statistics": true,
            "offsets": false,
            "payloads": false,
            "positions": false
        }
    }"""
}
import org.elasticsearch.spark._
sc.esRDD("aa6", "?q="+createString).count
I get the error:
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: org.elasticsearch.hadoop.rest.EsHadoopRemoteException: parse_exception: parse_exception: Encountered " <RANGE_GOOP> "["RTE","XYA","ABC" "" at line 1, column 22.
Was expecting:
"TO" ...
{"query":{"query_string":{"query":"_mtermvectors {\"ids\": [\"RTE\",\"ABC\",\"XYA\"], \"parameters\": {\"fields\": [\"attribute\"], \"term_statistics\": true, \"offsets\": false, \"payloads\": false, \"positions\": false } }"}}}
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:477)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:434)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:408)
This is probably something simple, but I am unable to find a way to set the request body when making the Spark call.
I'm not sure, but I don't think this is currently supported by the es-Spark package. You can check this link to see what options are available via the sparkContext for esRDD.
What you could do instead is use the High Level REST Client of Elasticsearch, collect the details into a List or Seq or a file, and then load that into a Spark RDD.
It is a roundabout way, but unfortunately I suppose it is the only one. To help, I have created the snippet below so you at least have the required data from Elasticsearch for the above query.
import org.apache.http.HttpHost
import org.elasticsearch.client.RequestOptions
import org.elasticsearch.client.RestClient
import org.elasticsearch.client.RestHighLevelClient
import org.elasticsearch.client.core.MultiTermVectorsRequest
import org.elasticsearch.client.core.TermVectorsRequest
import org.elasticsearch.client.core.TermVectorsResponse

object SampleSparkES {

    /**
     * Main class where the program starts
     */
    def main(args: Array[String]) = {
        val termVectorsResponse = elasticRestClient
        println(termVectorsResponse.size)
    }

    /**
     * Scala client code to retrieve the response of mtermvectors
     */
    def elasticRestClient: java.util.List[TermVectorsResponse] = {
        val client = new RestHighLevelClient(
            RestClient.builder(
                new HttpHost("localhost", 9200, "http")))

        // template request: the index and the fields to collect term vectors for
        val tvRequestTemplate = new TermVectorsRequest("aa6", "ids")
        tvRequestTemplate.setFields("attribute")

        // set the document ids you want the term vector information for
        val ids = Array("1", "2", "3")
        val request = new MultiTermVectorsRequest(ids, tvRequestTemplate)

        val response = client.mtermvectors(request, RequestOptions.DEFAULT)

        // get the responses
        val termVectorsResponse = response.getTermVectorsResponses

        // close the RestHighLevelClient
        client.close()

        // return the List[TermVectorsResponse]
        termVectorsResponse
    }
}
As an example, you can get the sumDocFreq of the first document in the following manner:
println(termVectorsResponse.iterator.next.getTermVectorsList.iterator.next.getFieldStatistics.getSumDocFreq)
All you would need now is a way to convert the collection into a Seq that can be loaded into an RDD.

About Amazon SageMaker Ground Truth saving to S3 after labelling

I am setting up an automatic data labelling pipeline for my colleague.
First, I define the Ground Truth request via the API (bucket, manifests, etc.).
Second, I create the labelling job, and all files are uploaded to S3 immediately.
After that my colleague receives an email saying the data is ready to label; he then labels it and submits.
Up to this point everything is quick. But when I check the SageMaker labelling job dashboard, it shows the task is in progress, and it takes a very long time before it reports completed or failed. I don't know why. Yesterday it saved the results at 4 am, taking around 6 hours. But if I create the labelling job on the website instead of sending API requests, it saves the results quickly.
Can anyone explain this? Or do I need to set up time sync or some other configuration?
This is my config:
{
    "InputConfig": {
        "DataSource": {
            "S3DataSource": {
                "ManifestS3Uri": "s3://{bucket_name}/{JOB_ID}/{manifest_name}-{JOB_ID}.manifest"
            }
        },
        "DataAttributes": {
            "ContentClassifiers": [
                "FreeOfPersonallyIdentifiableInformation",
                "FreeOfAdultContent"
            ]
        }
    },
    "OutputConfig": {
        "S3OutputPath": "s3://{bucket_name}/{JOB_ID}/output-{manifest_name}/"
    },
    "HumanTaskConfig": {
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": "arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClass"
        },
        "PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClass",
        "NumberOfHumanWorkersPerDataObject": 2,
        "TaskDescription": "Dear Annotator, please label it according to instructions. Thank you!",
        "TaskKeywords": [
            "text",
            "label"
        ],
        "TaskTimeLimitInSeconds": 600,
        "TaskTitle": "Label Text",
        "UiConfig": {
            "UiTemplateS3Uri": "s3://{bucket_name}/instructions.template"
        },
        "WorkteamArn": "work team arn"
    },
    "LabelingJobName": "Label",
    "RoleArn": "my role arn",
    "LabelAttributeName": "category",
    "LabelCategoryConfigS3Uri": "s3://{bucket_name}/labels.json"
}
I think my Lambda functions were wrong; when I change to the AWS ARNs (preHuman and Annotation), everything works fine.
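For what it's worth, the built-in Ground Truth functions use a PRE- prefix for pre-labelling and an ACS- prefix for annotation consolidation, and the config above points AnnotationConsolidationLambdaArn at PRE-TextMultiClass. A sketch of the corrected fragment as a Python dict (the ACS-TextMultiClass ARN is my assumption based on that naming convention):

# Hypothetical corrected HumanTaskConfig fragment, e.g. for boto3's
# create_labeling_job; the ACS-TextMultiClass ARN is an assumption.
human_task_config_fragment = {
    "PreHumanTaskLambdaArn":
        "arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClass",
    "AnnotationConsolidationConfig": {
        "AnnotationConsolidationLambdaArn":
            "arn:aws:lambda:us-east-2:266458841044:function:ACS-TextMultiClass"
    },
}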
This is my afterLabeling Lambda:
import json
import boto3
from urllib.parse import urlparse


def lambda_handler(event, context):
    consolidated_labels = []
    parsed_url = urlparse(event['payload']['s3Uri'])
    s3 = boto3.client('s3')
    textFile = s3.get_object(Bucket=parsed_url.netloc, Key=parsed_url.path[1:])
    filecont = textFile['Body'].read()
    annotations = json.loads(filecont)
    for dataset in annotations:
        for annotation in dataset['annotations']:
            new_annotation = json.loads(annotation['annotationData']['content'])
            label = {
                'datasetObjectId': dataset['datasetObjectId'],
                'consolidatedAnnotation': {
                    'content': {
                        event['labelAttributeName']: {
                            'workerId': annotation['workerId'],
                            'result': new_annotation,
                            'labeledContent': dataset['dataObject']
                        }
                    }
                }
            }
            consolidated_labels.append(label)
    return consolidated_labels
Does anyone know the reason?

Get the JSON representation of your Neo4j objects

I want to get data from this array of JSON objects:
[
    {
        "outgoing_relationships": "http://myserver:7474/db/data/node/4/relationships/out",
        "data": {
            "family": "3",
            "batch": "/var/www/utils/batches/d32740d8-b4ad-49c7-8ec8-0d54fcb7d239.resync",
            "name": "rahul",
            "command": "add",
            "type": "document"
        },
        "traverse": "http://myserver:7474/db/data/node/4/traverse/{returnType}",
        "all_typed_relationships": "http://myserver:7474/db/data/node/4/relationships/all/{-list|&|types}",
        "property": "http://myserver:7474/db/data/node/4/properties/{key}",
        "self": "http://myserver:7474/db/data/node/4",
        "properties": "http://myserver:7474/db/data/node/4/properties",
        "outgoing_typed_relationships": "http://myserver:7474/db/data/node/4/relationships/out/{-list|&|types}",
        "incoming_relationships": "http://myserver:7474/db/data/node/4/relationships/in",
        "extensions": {},
        "create_relationship": "http://myserver:7474/db/data/node/4/relationships",
        "paged_traverse": "http://myserver:7474/db/data/node/4/paged/traverse/{returnType}{?pageSize,leaseTime}",
        "all_relationships": "http://myserver:7474/db/data/node/4/relationships/all",
        "incoming_typed_relationships": "http://myserver:7474/db/data/node/4/relationships/in/{-list|&|types}"
    }
]
What I tried is:
def messages = []
for (i in families) {
    messages?.add(i)
}
How can I get families.data.name into the messages array?
Here is what I tried:
def messages = []
for (i in families) {
    def map = new groovy.json.JsonSlurper().parseText(i)
    def msg = map*.data.name
    messages?.add(i)
}
return messages
And I get this error:
javax.script.ScriptException: groovy.lang.MissingMethodException: No signature of method: groovy.json.JsonSlurper.parseText() is applicable for argument types: (com.tinkerpop.blueprints.pgm.impls.neo4j.Neo4jVertex) values: [v[4]]
Possible solutions: parseText(java.lang.String), parse(java.io.Reader)
Alternatively, use Groovy's native JSON parsing:
def families = new groovy.json.JsonSlurper().parseText( jsonAsString )
def messages = families*.data.name
Since you edited the question to give us the information we needed, you can try:
def messages = []
families.each { i ->
    def map = new groovy.json.JsonSlurper().parseText( i.toString() )
    messages.addAll( map*.data.name )
}
messages
Though it should be said that the toString() method of com.tinkerpop.blueprints.pgm.impls.neo4j.Neo4jVertex makes no guarantee of returning valid JSON... You should probably use the getProperty( name ) method of Neo4jVertex rather than relying on a side effect of toString().
What are you doing to generate the first bit of text (which you state is JSON but make no mention of how it is created)?
Use JSON-lib.
GJson.enhanceClasses()
def families = json_string as JSONArray
def messages = families.collect {it.data.name}
If you are using Groovy 1.8, you don't need JSON-lib anymore as a JsonSlurper is included in the GDK.
import groovy.json.JsonSlurper
def families = new JsonSlurper().parseText(json_string)
def messages = families.collect { it.data.name }
