Unable to write JSON data into InfluxDB 2.x using python client - python-3.x

I'm trying to insert simple JSON data into InfluxDB 2.x using the Python client influxdb_client.
The code creates the measurement and all the fields and tags, but it does not insert the data.
This is how I'm connecting to InfluxDB:
from datetime import datetime
from influxdb_client import InfluxDBClient as FluxDBClient
from influxdb_client import Point, WriteOptions
from influxdb_client.client.write_api import SYNCHRONOUS
api_token = "gC4vfGRHOW_OMHsXtLBBDnY_heI76GkqQgZlouE8hNYZOVUVLHvhoo79Q-5b0Tvj82rmTofXjywx_tBoTXmT8w=="
org = "0897e8ca6b394328"
client = FluxDBClient(url="http://localhost:8086/", token=api_token, org=org, debug=True, timeout=50000)
Here is the code to insert data:
points = [{
    "measurement": "temp",
    "time": datetime.now(),
    "tags": {
        "weather_station": "LAHORE",
        "module_detail_id": 1,
    },
    "fields": {
        "Rad1h": 1.2,
        "TTT": 100,
        "N": 30,
        "VV": 20,
        "power": 987,
    }
}]
options = WriteOptions(batch_size=1, flush_interval=8, jitter_interval=0, retry_interval=1000)
_write_client = client.write_api(write_options=options)
_write_client.write("Test", "0897e8ca6b394328", points)
_write_client.__del__()
client.__del__()
When I run the script, I get a 204 response, which means the write request executed successfully. But I'm getting no records, as you can see in the image below:
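For reference, a minimal sketch of the same write using the SYNCHRONOUS write mode instead of batching options (assuming the same api_token, org, bucket "Test", and points list as above); a synchronous write blocks until the server accepts the point, which makes it easier to rule out buffering or flush timing:

from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086/", token=api_token, org=org)
write_api = client.write_api(write_options=SYNCHRONOUS)
# record accepts the same list-of-dicts point structure used above
write_api.write(bucket="Test", org=org, record=points)
client.close()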

Related

Parsing JSON from POST request using Python and FastAPI

I'm trying to parse JSON that comes in a POST request from the user to my API, which is built with FastAPI. I have a nested JSON that can contain multiple values, and my problem is: how can I parse it and get the values from this nested JSON?
The JSON is like this:
{
    "recipe": [
        {
            "ingredient": "rice",
            "quantity": 1.5
        },
        {
            "ingredient": "carrot",
            "quantity": 1.8
        },
        {
            "ingredient": "beans",
            "quantity": 1.8
        }
    ]
}
I need to grab all the values from the recipe list separately, as I have a database with these ingredients and will query all of them and do some calculations with the quantities given by the user. But I don't even know how I can parse this JSON from the POST request.
I already have two Pydantic classes validating these values, like:
from typing import List
from pydantic import BaseModel

class Ingredientes(BaseModel):
    ingredient: str
    quantity: float

class Receita(BaseModel):
    receita: List[Ingredientes] = []
Edit 1: I tried to handle it in the function that receives this POST request, but it didn't work:
@app.post('/calcul', status_code=200)
def calculate_table(receita: Receita):
    receipe = receita["receita"]
    for ingredient in receipe:
        return f'{ingredient["ingredient"]}: {ingredient["quantity"]}'
Edit 2: Fixed the issue with the code below (thanks MatsLindh):
@app.post('/calcul', status_code=200)
def calculate_table(receita: Receita):
    receipe = receita.receita
    for ingredient in receipe:
        return f'{ingredient.ingredient}: {ingredient.quantity}'
To parse a JSON-formatted string (from the POST request, for example) into a dictionary, use the loads function from the json module, like this:
import json
s = """{
"recipe": [
{
"ingredient": "rice",
"quantity": 1.5
},
{
"ingredient": "carrot",
"quantity": 1.8
},
{
"ingredient": "beans",
"quantity": 1.8
}
]
}
"""
recipe = json.loads(s)["recipe"]
for ingredient in recipe:
print(f"{ingredient['ingredient']}: {ingredient['quantity']}")

How to send (set) a parameter dictionary to a zeep client

Sorry if this question has already been asked.
I am trying to send some parameters as a dict, but I am getting results of none or node not found.
In the parameter I am sending the NodeName (GOAFB) and want to change its NodeDetails as mentioned in param.
This node name is available at the address; I checked with the get method (shown in the snapshot).
Below is the code I tried:
from zeep import Client
from zeep.transports import Transport
from requests import Session
from requests.auth import HTTPBasicAuth
from zeep.wsse.username import UsernameToken
import json
wsdl = "http://10.2.1.8/ws/17.0/Bhaul.asmx?wsdl"
session = Session()
client = Client(wsdl, transport=Transport(session=session),wsse=UsernameToken('admin','password'))
param = {
    "Ib440ConfigSet": {
        "NodeName": "GOAFB",
        "NodeDetail": {
            "Custom": [
                {
                    "Name": "Circle",
                    "Value": "KOLKATA"
                },
                {
                    "Name": "SGW",
                    "Value": "1010"
                }
            ]
        }
    }
}
dd=client.service.Ib440ConfigGet("GOAFB")
client.service.Ib440ConfigSet(*param)
The snapshot below contains the results:
Please advise on how to make this work.
In order to pass a dict as keyword arguments, we need to unpack it with a double asterisk (**).
So this should work for you:
client.service.Ib440ConfigSet(**param)
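For illustration, a plain-Python sketch (config_set here is a hypothetical stand-in for the zeep operation) of what the single and double asterisk actually pass:

def config_set(Ib440ConfigSet=None):
    print(Ib440ConfigSet)

param = {"Ib440ConfigSet": {"NodeName": "GOAFB"}}

config_set(*param)   # unpacks only the keys: the string "Ib440ConfigSet" is passed positionally
config_set(**param)  # unpacks key/value pairs: Ib440ConfigSet={"NodeName": "GOAFB"}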

Deleting Field using Pymongo

I don't have enough reputation to comment and hence I have to ask this question again.
I have tried the different ways to delete my dynamically changing date column mentioned here, but nothing worked for me: How to remove a field completely from a MongoDB document?
Environment details - OS: Windows 10, pymongo: 3.10.1, MongoDB Compass app: 4.4, Python: 3.6
I am trying to delete the column "2020/08/24" (this date will be dynamic in my case). My data looks like this:
[{
    "_id": {
        "$oid": "5f4e4dda1031d5b55a3adc70"
    },
    "Site": "ABCD",
    "2020/08/24": "1",
    "2020/08/25": "1.0"
}, {
    "_id": {
        "$oid": "5f4e4dda1031d5b55a3adc71"
    },
    "Site": "EFGH",
    "2020/08/24": "1",
    "2020/08/25": "0.0"
}]
Commands that don't throw any error but also don't delete the column/field "2020/08/24":
col_name = "2020/08/24"
db.collection.update_many({}, {"$unset": {f"{col_name}":1}})
db.collection.update({}, {"$unset": {f"{col_name}":1}}, False, True)
db.collection.update_many({}, query =[{ '$unset': [col_name] }])
I always run into an error while trying to use multi:True with the update option.
The exact code that I am using is:
import pymongo
def connect_mongo(host, port, db):
    conn = pymongo.MongoClient(host, port)
    return conn[db]

def close_mongo(host, port):
    client = pymongo.MongoClient(host, port)
    client.close()

def delete_mongo_field(db, collection, col_name, host, port):
    """Delete column/field from a collection"""
    db = connect_mongo(host, port, db)
    db.collection.update_many({}, {"$unset": {f"{col_name}": 1}})
    #db.collection.update_many({}, {'$unset': {f'{col_name}': ''}})
    close_mongo(host, port)

col_to_delete = "2020/08/30"
delete_mongo_field(mydb, mycollection, col_to_delete, 'localhost', 27017)
The following code worked with Python 3.8, PyMongo 3.11, and MongoDB v4.2.8.
col_name = '2020/08/24'
result = collection.update_many( { }, { '$unset': { col_name: '' } } )
print(result.matched_count, result.modified_count)
The two documents in the post were updated and the field with the name "2020/08/24" was removed. NOTE: A MongoDB collection's document can have a field name with the / character (see Documents - Field Names).
[EDIT ADD]
The following delete_mongo_field function worked for me updating the documents correctly by removing the supplied field name:
def delete_mongo_field(db, collection, col_name, host, port):
    db = connect_mongo(host, port, db)
    result = db[collection].update_many({}, {'$unset': {col_name: 1}})  # you can also use '' instead of 1
    print(result.modified_count)
On a separate note, you might want to consider changing your data model to store the dates as values rather than keys, and also to consider storing them as native date objects, e.g.
import datetime
import pytz
db.testcollection.insert_many([
    {
        "Site": "ABCD",
        "Dates": [
            {
                "Date": datetime.datetime(2020, 8, 24, 0, 0, tzinfo=pytz.UTC),
                "Value": "1"
            },
            {
                "Date": datetime.datetime(2020, 8, 25, 0, 0, tzinfo=pytz.UTC),
                "Value": "1.0"
            }
        ]
    },
    {
        "Site": "EFGH",
        "Dates": [
            {
                "Date": datetime.datetime(2020, 8, 24, 0, 0, tzinfo=pytz.UTC),
                "Value": "1"
            },
            {
                "Date": datetime.datetime(2020, 8, 25, 0, 0, tzinfo=pytz.UTC),
                "Value": "0.1"
            }
        ]
    }
])
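With that nested model, removing one date from every document becomes a $pull on the Dates array instead of an $unset of a dynamically named field. A minimal sketch, assuming the db handle and imports from the snippet above:

# remove the array entry for one date from every document
target_date = datetime.datetime(2020, 8, 24, 0, 0, tzinfo=pytz.UTC)
result = db.testcollection.update_many({}, {'$pull': {'Dates': {'Date': target_date}}})
print(result.modified_count)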
But back to your question ... the first example works fine for me. Can you try the sample code below and see if you get different results?
from pymongo import MongoClient
import pprint
db = MongoClient()['testdatabase']
db.testcollection.insert_many([{
    "Site": "ABCD",
    "2020/08/24": "1",
    "2020/08/25": "1.0"
}, {
    "Site": "EFGH",
    "2020/08/24": "1",
    "2020/08/25": "0.0"
}])
pprint.pprint(list(db.testcollection.find({}, {'_id': 0})))
col_name = "2020/08/24"
db.testcollection.update_many({}, {"$unset": {f"{col_name}": 1}})
pprint.pprint(list(db.testcollection.find({}, {'_id': 0})))
Result:
[{'2020/08/24': '1', '2020/08/25': '1.0', 'Site': 'ABCD'},
{'2020/08/24': '1', '2020/08/25': '0.0', 'Site': 'EFGH'}]
[{'2020/08/25': '1.0', 'Site': 'ABCD'}, {'2020/08/25': '0.0', 'Site': 'EFGH'}]

Unable to execute mtermvectors elasticsearch query from AWS EMR cluster using Spark

I am trying to execute this Elasticsearch query via Spark:
POST /aa6/_mtermvectors
{
    "ids": [
        "ABC",
        "XYA",
        "RTE"
    ],
    "parameters": {
        "fields": [
            "attribute"
        ],
        "term_statistics": true,
        "offsets": false,
        "payloads": false,
        "positions": false
    }
}
The code that I have written in Zeppelin is:
def createString(): String = {
    return s"""_mtermvectors {
      "ids": [
        "ABC",
        "XYA",
        "RTE"
      ],
      "parameters": {
        "fields": [
          "attribute"
        ],
        "term_statistics": true,
        "offsets": false,
        "payloads": false,
        "positions": false
      }
    }"""
}
import org.elasticsearch.spark._
sc.esRDD("aa6", "?q="+createString).count
I get the error:
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: org.elasticsearch.hadoop.rest.EsHadoopRemoteException: parse_exception: parse_exception: Encountered " <RANGE_GOOP> "["RTE","XYA","ABC" "" at line 1, column 22.
Was expecting:
"TO" ...
{"query":{"query_string":{"query":"_mtermvectors {\"ids\": [\"RTE\",\"ABC\",\"XYA\"], \"parameters\": {\"fields\": [\"attribute\"], \"term_statistics\": true, \"offsets\": false, \"payloads\": false, \"positions\": false } }"}}}
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:477)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:434)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:408)
This is probably something simple, but I am unable to find a way to set the request body when making the Spark call.
I'm not sure, but I don't think this is currently supported by the es-Spark package. You can check this link to see what options are available via the sparkContext esRDD method.
What you could do instead is make use of Elasticsearch's High Level REST Client, collect the details into a List or Seq (or a file), and then load that into a Spark RDD.
It is a roundabout way, but unfortunately I suppose that is the only option. In case it helps, I have created the snippet below so you at least have the required data from Elasticsearch for the above query.
import org.apache.http.HttpHost
import org.elasticsearch.client.RequestOptions
import org.elasticsearch.client.RestClient
import org.elasticsearch.client.RestHighLevelClient
import org.elasticsearch.client.core.MultiTermVectorsRequest
import org.elasticsearch.client.core.TermVectorsRequest
import org.elasticsearch.client.core.TermVectorsResponse
object SampleSparkES {

  /**
   * Main class where the program starts
   */
  def main(args: Array[String]) = {
    val termVectorsResponse = elasticRestClient
    println(termVectorsResponse.size)
  }

  /**
   * Scala client code to retrieve the response of mtermvectors
   */
  def elasticRestClient: java.util.List[TermVectorsResponse] = {
    val client = new RestHighLevelClient(
      RestClient.builder(
        new HttpHost("localhost", 9200, "http")))

    val tvRequestTemplate = new TermVectorsRequest("aa6", "ids")
    tvRequestTemplate.setFields("attribute")

    // Set the document ids you want for collecting the term vector information
    val ids = Array("1", "2", "3")
    val request = new MultiTermVectorsRequest(ids, tvRequestTemplate)
    val response = client.mtermvectors(request, RequestOptions.DEFAULT)

    // get the response
    val termVectorsResponse = response.getTermVectorsResponses

    // close the RestHighLevelClient
    client.close()

    // return List[TermVectorsResponse]
    termVectorsResponse
  }
}
As an example, you can get the sumDocFreq of the first document in the following manner:
println(termVectorsResponse.iterator.next.getTermVectorsList.iterator.next.getFieldStatistics.getSumDocFreq)
All you would need now is to find a way to convert the collection into a Seq that can be loaded into an RDD.

How to create kubernetes cluster in google cloud platform in python using google-cloud-container module

I'm trying to create a Kubernetes cluster on Google Cloud Platform with Python (3.7) using the google-cloud-container module.
I created a Kubernetes cluster through the Google Cloud Platform console and was able to successfully retrieve details for that cluster using the google-cloud-container Python module.
Now I'm trying to create a Kubernetes cluster through this module. I created a JSON file with the required key values and passed it as a parameter, but I'm getting errors. I would appreciate sample code for creating a Kubernetes cluster on Google Cloud Platform. Thank you in advance.
from google.oauth2 import service_account
from google.cloud import container_v1
class GoogleCloudKubernetesClient(object):
    def __init__(self, file, project_id, project_name, zone, cluster_id):
        credentials = service_account.Credentials.from_service_account_file(
            filename=file)
        self.client = container_v1.ClusterManagerClient(credentials=credentials)
        self.project_id = project_id
        self.zone = zone

    def create_cluster(self, cluster):
        print(cluster)
        response = self.client.create_cluster(self.project_id, self.zone, cluster=cluster)
        print(f"response for cluster creation: {response}")

def main():
    cluster_data = {
        "name": "test_cluster",
        "masterAuth": {
            "username": "admin",
            "clientCertificateConfig": {
                "issueClientCertificate": True
            }
        },
        "loggingService": "logging.googleapis.com",
        "monitoringService": "monitoring.googleapis.com",
        "network": "projects/abhinav-215/global/networks/default",
        "addonsConfig": {
            "httpLoadBalancing": {},
            "horizontalPodAutoscaling": {},
            "kubernetesDashboard": {
                "disabled": True
            },
            "istioConfig": {
                "disabled": True
            }
        },
        "subnetwork": "projects/abhinav-215/regions/us-west1/subnetworks/default",
        "nodePools": [
            {
                "name": "test-pool",
                "config": {
                    "machineType": "n1-standard-1",
                    "diskSizeGb": 100,
                    "oauthScopes": [
                        "https://www.googleapis.com/auth/cloud-platform"
                    ],
                    "imageType": "COS",
                    "labels": {
                        "App": "web"
                    },
                    "serviceAccount": "abhinav#abhinav-215.iam.gserviceaccount.com",
                    "diskType": "pd-standard"
                },
                "initialNodeCount": 3,
                "autoscaling": {},
                "management": {
                    "autoUpgrade": True,
                    "autoRepair": True
                },
                "version": "1.11.8-gke.6"
            }
        ],
        "locations": [
            "us-west1-a",
            "us-west1-b",
            "us-west1-c"
        ],
        "resourceLabels": {
            "stage": "dev"
        },
        "networkPolicy": {},
        "ipAllocationPolicy": {},
        "masterAuthorizedNetworksConfig": {},
        "maintenancePolicy": {
            "window": {
                "dailyMaintenanceWindow": {
                    "startTime": "02:00"
                }
            }
        },
        "privateClusterConfig": {},
        "databaseEncryption": {
            "state": "DECRYPTED"
        },
        "initialClusterVersion": "1.11.8-gke.6",
        "location": "us-west1-a"
    }
    kube = GoogleCloudKubernetesClient(file='/opt/key.json', project_id='abhinav-215', zone='us-west1-a')
    kube.create_cluster(cluster_data)

if __name__ == '__main__':
    main()
Actual Output:
Traceback (most recent call last):
  File "/opt/matilda_linux/matilda_linux_logtest/matilda_discovery/matilda_discovery/test/google_auth.py", line 118, in <module>
    main()
  File "/opt/matilda_linux/matilda_linux_logtest/matilda_discovery/matilda_discovery/test/google_auth.py", line 113, in main
    kube.create_cluster(cluster_data)
  File "/opt/matilda_linux/matilda_linux_logtest/matilda_discovery/matilda_discovery/test/google_auth.py", line 31, in create_cluster
    response = self.client.create_cluster(self.project_id, self.zone, cluster=cluster)
  File "/opt/matilda_discovery/venv/lib/python3.6/site-packages/google/cloud/container_v1/gapic/cluster_manager_client.py", line 407, in create_cluster
    project_id=project_id, zone=zone, cluster=cluster, parent=parent
ValueError: Protocol message Cluster has no "masterAuth" field.
Kind of a late answer, but I had the same problem and figured it out, so it is worth writing up for future viewers.
You should not write the field names in cluster_data as they appear in the REST API.
Instead, you should translate them to how they would look by Python convention, with words separated by underscores instead of written in camelCase.
Thus, instead of writing masterAuth, you should write master_auth. Make similar changes to the rest of your fields, and then the script should work.
P.S. You aren't using the project_name and cluster_id params in GoogleCloudKubernetesClient.__init__. I'm not sure what they are for, but you should probably remove them.
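For illustration, a partial sketch of how the start of cluster_data would look after the renaming (only the leading fields from the question are shown; the rest are translated the same way):

cluster_data = {
    "name": "test_cluster",
    "master_auth": {
        "username": "admin",
        "client_certificate_config": {
            "issue_client_certificate": True
        }
    },
    "logging_service": "logging.googleapis.com",
    "monitoring_service": "monitoring.googleapis.com",
    "initial_cluster_version": "1.11.8-gke.6",
    # ... addons_config, node_pools, ip_allocation_policy, etc. follow the same
    # camelCase -> snake_case translation
}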
The module is still using the basic REST API format to create the cluster. You can also use the GUI to choose all the options you want for your cluster, then press the REST hyperlink at the bottom of the page; this will provide you with the REST format required to build the cluster you want.
The error you are getting is because you have a blank (or unspecified) field that must be specified. Some of the fields listed in the API have default values that you don't need to set, while others are required.
