How can I use the Python client library for InfluxDB for queries? (InfluxDB OSS v1.8.10) - python-3.x

I learned how to install the client library and write to InfluxDB databases in InfluxDB OSS v1.8 from this link:
https://docs.influxdata.com/influxdb/v1.8/tools/api_client_libraries/#install-and-use-the-python-client-library
but I can't figure out how to use it for querying. My concern is that this version of InfluxDB doesn't seem to use buckets, and the explanation on GitHub:
https://github.com/influxdata/influxdb-client-python
only explains how to write and query with buckets. It includes the code that makes querying with v1.8 possible, but it doesn't explain how to use it. If anyone has any tips or resources that could help, please let me know!

The Python Client libraries were developed using the InfluxDB Cloud and InfluxDB 2.0 APIs. Some of these APIs were backported to InfluxDB 1.8, but as you have seen, there are a few differences.
With InfluxDB 1.8, there are two key things to keep in mind:
The "bucket" parameter is the database you wish to read from. For querying, all you need to do is specify the database name. When writing data, it can also take the retention policy via a string like database/retention_policy (e.g. testing/autogen).
To query, your server will need Flux support enabled, which is disabled by default; see the flux-enabled configuration option to turn it on. If you are new to Flux, check out the Flux basics documentation for how to build queries.
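For reference, enabling Flux in 1.8 is a configuration change. This is a minimal sketch assuming the stock influxdb.conf; restart influxd afterwards for it to take effect:
[http]
  flux-enabled = true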
Below is a brief example of using the Python client library to write a point and then query it back using a Flux query:
from influxdb_client import InfluxDBClient, Point

username = ''
password = ''
database = 'testing'

with InfluxDBClient(url='http://localhost:8086', token=f'{username}:{password}', org='-') as client:
    # Write a single point; the "bucket" here is simply the 1.8 database name
    with client.write_api() as writer:
        point = Point("mem").tag("host", "host1").field("used_percent", 25.43234543)
        writer.write(bucket=database, record=point)

    # Query the point back with Flux
    querier = client.query_api()
    tables = querier.query(f'from(bucket: "{database}") |> range(start: -1h)')
    for record in tables[0].records:
        print(f'{record.get_measurement()} {record.get_field()}={record.get_value()} {record.get_time()}')
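If you want to write into a specific retention policy, the same write call accepts the database/retention_policy form described above. For example, assuming the default autogen policy and reusing the writer, database, and point from the snippet above:
writer.write(bucket=f'{database}/autogen', record=point)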
Hope that helps!

Related

node and express - how to use fake data

It's been a while since I used Node and Express, and I was sure that this was possible, but I'm having trouble figuring it out now.
I have a simple Postgres database with Sequelize. I am building a back end and don't have a populated database yet. I want to be able to provide fake data for building the front end and for testing. Is there a way to populate a database when the Node server is started? Maybe by reading a JSON file into the database?
I know that I could point to this fake data using a setting in the environment file, but I don't see how to read in the data on startup. Is there a way to create a local database, read in the data, and point to that?
You can use the faker-factory package; I think it can solve your problem.
https://www.npmjs.com/package/faker-factory
FakerJs provides that solution.
import { faker } from '@faker-js/faker';
const randomName = faker.name.findName(); // newer faker versions use faker.person.fullName()
const randomEmail = faker.internet.email();
With the above, you can run a loop (a for loop, to be specific) to create the desired data you may need for your project.
Also, check out the free web APIs that provide fake or real data to work with.

How to connect Google Datastore from a script in Python 3

We want to do some things with the data that is in Google Datastore. We already have a database, and we would like to use Python 3 to handle the data and make queries from a script on our development machines. What would be the easiest way to accomplish this?
From the Official Documentation:
You will need to install the Cloud Datastore client library for Python:
pip install --upgrade google-cloud-datastore
Set up authentication by creating a service account and setting an environment variable. It is easier to follow with the official documentation, so please take a look at it for more details. You can perform this step using either the GCP console or the command line.
Then you will be able to connect to your Cloud Datastore client and use it, as in the example below:
# Imports the Google Cloud client library
from google.cloud import datastore
# Instantiates a client
datastore_client = datastore.Client()
# The kind for the new entity
kind = 'Task'
# The name/ID for the new entity
name = 'sampletask1'
# The Cloud Datastore key for the new entity
task_key = datastore_client.key(kind, name)
# Prepares the new entity
task = datastore.Entity(key=task_key)
task['description'] = 'Buy milk'
# Saves the entity
datastore_client.put(task)
print('Saved {}: {}'.format(task.key.name, task['description']))
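Since the question is specifically about querying, here is a brief sketch of reading that entity back with a query. It reuses the datastore_client and the 'Task' kind from the example above, and the property filter is optional:
# Build a query over all 'Task' entities
query = datastore_client.query(kind='Task')
# Narrow it down to entities whose description matches
query.add_filter('description', '=', 'Buy milk')
# Run the query and print the results
for task in query.fetch():
    print('{}: {}'.format(task.key.name, task['description']))
Note that newer releases of google-cloud-datastore prefer passing a PropertyFilter object to add_filter; the positional form above matches the older documentation.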
As @JohnHanley mentioned, you will find a good example in the Bookshelf app tutorial, which uses Cloud Datastore to store its persistent data and metadata for books.
You can create a service account, download the credentials as JSON, and then set an environment variable called GOOGLE_APPLICATION_CREDENTIALS pointing to the JSON file. You can see the details at the link below.
https://googleapis.dev/python/google-api-core/latest/auth.html
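As a short sketch, either approach below points the client at the downloaded key (the file path here is hypothetical):
import os
from google.cloud import datastore

# Option 1: set the environment variable before creating the client
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/to/service-account.json'
client = datastore.Client()

# Option 2: load the service account key explicitly
client = datastore.Client.from_service_account_json('/path/to/service-account.json')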

DocumentDB Data migration Tool, can't migrate from db to db

I'm using the DocumentDB Data Migration Tool to migrate a DocumentDB database to a newly created DocumentDB database. The connection string verification says it is OK.
It doesn't work: no data is transferred (= 0), but no failure is written in the log file either (Failed = 0).
Here is what I have done.
I've tried many things, such as:
migrate / transfer a collection to a JSON file
migrate to a partitioned / non-partitioned DocumentDB database
for the target indexing policy, I've taken the source indexing policy (the JSON taken from Azure, in the DocumentDB collection settings).
...
Actually nothing is working, but I have no error logs. Maybe it is a problem with the DocumentDB version?
Thanks in advance for your help.
After debugging the solution from the tool's repo, I figured out that the tool fails silently if you mistype the database's name, like I did.
DocumentDBClient just returns an empty async enumerator.
var database = await TryGetDatabase(databaseName, cancellation);
if (database == null)
return EmptyAsyncEnumerator<IReadOnlyDictionary<string, object>>.Instance;
I can import from an Azure Cosmos DB DocumentDB API collection using DocumentDB Data Migration tool.
Besides, based on my test, if the name of the collection that we specify for the Source DocumentDB does not exist, no data will be transferred and no error log is written.
Import result
Please make sure the source collection that you specified exists. If possible, you can also try to create a new collection, import data from this new collection, and check whether the data can be transferred.
I've faced the same problem, and after some investigation found that the internal document structure had changed. Therefore, after migration with the tool, the documents are present but couldn't be found with Data Explorer (though with Query Explorer, using select *, they are visible).
I've migrated the collection through the Mongo API using MongoChef.
@fguigui: To help troubleshoot this, could you please re-run the same data migration operation using the command-line option? Just launch dt.exe from the same folder as the Data Migration Tool to see the required syntax. Then, after you launch it with the required parameters, please paste the output here and I'll take a look at what's broken.

Groovy Couchbase help needed

I am a beginner in Groovy and Couchbase. I have used the Groovy console to script some basic Groovy, and the Couchbase console UI to work with documents in Couchbase. Now I want to combine them: I want to work with documents in Couchbase using a Groovy script.
Where can I find a suitable tutorial? Example code for a Groovy-Couchbase connection and operations would also help a lot.
(I couldn't find anything in my Google searches, so I had to turn to my fellow experts on Stack Overflow.)
Thank you so much! :-)
All you need is the Java client.
@Grab('com.couchbase.client:java-client:2.2.6')
import com.couchbase.client.java.CouchbaseCluster
// Connect to localhost
def cluster = CouchbaseCluster.create()
// Open the default bucket and the "beer-sample" one
def defaultBucket = cluster.openBucket()
def beerSampleBucket = cluster.openBucket("beer-sample")
// Disconnect and clear all allocated resources
cluster.disconnect()
The Java client documentation is here: http://developer.couchbase.com/documentation/server/4.0/sdks/java-2.2/java-intro.html

azure table storage query

When I POST data and query a table with the database as the dev storage (emulator), it works.
When I POST data to a table with the data in the Azure database (with an account), it works.
When I GET data from a table with the data in the Azure database (with an account), it does not work.
In both cases the code is the same, except for the key and account credentials.
Is there anything else I should do to query?
var query = azure.TableQuery
.select().from('dummytable').where('PartitionKey eq ?', key);
Can anyone suggest why the query is not working?
Should there be anything else that needs to be done?
From Storage Explorer it works; I am able to see the entities.
Only from the program am I not able to get the response, but in the same program the "PUT" operation is working.
The same was happening to me. I upgraded the azure npm package from 0.6.1 to 0.6.7 and now it works; hope this helps.
I would look at the value in your partition key. There are some values that aren't on the list of invalid characters but that Azure still has issues with. For example, before SDK 1.7 you could safely insert a % in a key, but if you queried for it specifically it wouldn't work. To test whether this is the problem, try running your query without the filter and make sure your row is returned.
After reading the MSDN mailing lists, I upgraded the azure npm package to the latest version, 0.6.7, and it works. It looks to have been an issue with the azure package.
