Is there a more efficient way to interact with ItemPaged objects from azure-data-tables SDK function query_entities? - python-3.x

The quickest method I have found is to convert the ItemPaged object to a list using list() and then manipulate/extract the results with a pandas DataFrame. However, if I have millions of results the process can be quite time-consuming, especially if I only want every nth result over a certain time-frame. Typically I have to query the entire time-frame and then loop over the results again to keep only every nth element. Does anyone know a more efficient way to use query_entities, or how to more efficiently return every nth item from ItemPaged (or, more explicitly, from table.query_entities)? A portion of my code is below:
connection_string = "connection string here"
service = TableServiceClient.from_connection_string(conn_str=connection_string)
table_string = ""
table = service.get_table_client(table_string)
entities = table.query_entities(filter, select, etc.)
results = pd.DataFrame(list(entities))

After reproducing this on my end, one way to achieve your requirement is to use get_entity() instead of query_entities(). Below is the complete code that worked for me.
entity = tableClient.get_entity(partition_key='<PARTITION_KEY>', row_key='<ROW_KEY>')
print("Results using get_entity :")
print(format(entity))
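get_entity() only fetches a single, already-known entity, though. If the goal is to thin a large query down to every nth row without first materializing everything with list(), one option (a minimal sketch, not part of the original answer; the filter string and table name are placeholders) is to step through the ItemPaged iterator lazily with itertools.islice, so only every nth entity is kept in memory:
from itertools import islice

import pandas as pd
from azure.data.tables import TableServiceClient

connection_string = "connection string here"
service = TableServiceClient.from_connection_string(conn_str=connection_string)
table = service.get_table_client("mytable")  # placeholder table name

# Placeholder OData filter; adjust to your partition/time-frame.
entities = table.query_entities(
    query_filter="PartitionKey eq 'sensor1'",
    select=["value"],
)

n = 10  # keep every nth entity
# Every page is still downloaded, but the list handed to pandas only holds
# roughly 1/n of the entities, and no second pass over a full list is needed.
results = pd.DataFrame(list(islice(entities, 0, None, n)))
This does not reduce the number of REST calls (the service still returns every matching row), but it avoids building and re-looping over a multi-million-element list on the client.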

Related

Transforming large array of objects to csv using json2csv

I need to transform a large array of JSON (that can have over 100k positions) into a CSV.
This array is created directly in the application, it's not the result of an uploaded file.
Looking at the documentation, I've thought of using parser, but it says that:
For that reason is rarely a good reason to use it until your data is very small or your application doesn't do anything else.
Because the data is not small and my app will do other things than creating the csv, I don't think it'll be the best approach but I may be misunderstanding the documentation.
Is it possible to use the other options (async parser or transform) with already created data (and not a stream of data)?
FYI: It's a nest application but I'm using this node.js lib.
Update: I've tried it with an array with over 300k positions, and it went smoothly.
Why do you need any external modules?
Converting JSON into a javascript array of javascript objects is a piece of cake with the native JSON.parse() function.
let jsontxt = await fs.readFile('mythings.json', 'utf8');
let mythings = JSON.parse(jsontxt);
if (!Array.isArray(mythings)) throw "Oooops, stranger things happen!"
And, then, converting a javascript array into a CSV is very straightforward.
The most obvious and absurd case is just mapping every element of the array into a string that is the JSON representation of the object element. You end up with a useless CSV with a single column containing every element of your original array. And then joining the resulting strings array into a single string, separated by newlines \n. It's good for nothing but, heck, it's a CSV!
let csvtxt = mythings.map(JSON.stringify).join("\n");
await fs.writeFile("mythings.csv",csvtxt,"utf8");
Now, you can feel that you are almost there. Replace the useless mapping function with your own
let csvtxt = mythings.map(mapElementToColumns).join("\n");
and choose a good mapping between the fields of the objects of your array, and the columns of your csv.
function mapElementToColumns(element) {
  return `${JSON.stringify(element.id)},${JSON.stringify(element.name)},${JSON.stringify(element.value)}`;
}
or, in a more thorough way
function mapElementToColumns(fieldNames) {
  return function (element) {
    let fields = fieldNames.map(n => element[n] ? JSON.stringify(element[n]) : '""');
    return fields.join(',');
  }
}
that you may invoke in your map
mythings.map(mapElementToColumns(["id","name","value"])).join("\n");
Finally, you might decide to use an automated "all fields in all objects" approach, which requires that all the objects in the original array maintain a similar fields schema.
You extract all the fields of the first object of the array, and use them as the header row of the csv and as the template for extracting the rest of the elements.
let fieldnames = Object.keys(mythings[0]);
and then use this field names array as parameter of your map function
let csvtxt= mythings.map(mapElementToColumns(fieldnames)).join("\n");
and, also, prepending them as the CSV header
csvtxt = fieldnames.join(',') + "\n" + csvtxt;
Putting all the pieces together...
import fs from 'fs/promises';

function mapElementToColumns(fieldNames) {
  return function (element) {
    let fields = fieldNames.map(n => element[n] ? JSON.stringify(element[n]) : '""');
    return fields.join(',');
  }
}

let jsontxt = await fs.readFile('mythings.json', 'utf8');
let mythings = JSON.parse(jsontxt);
if (!Array.isArray(mythings)) throw "Oooops, stranger things happen!";
let fieldnames = Object.keys(mythings[0]);
let csvtxt = mythings.map(mapElementToColumns(fieldnames)).join("\n");
csvtxt = fieldnames.join(',') + "\n" + csvtxt;
await fs.writeFile("mythings.csv", csvtxt, "utf8");
And that's it. Pretty neat, uh?

Timeseries differencing - ArangoDB (AQL or Python)

I have a collection which holds documents, with each document having a data observation and the time that the data was captured.
e.g.
{
  _key: ....,
  "data": 26,
  "timecaptured": 1643488638.946702
}
where timecaptured for now is a utc timestamp.
What I want to do is get the duration between consecutive observations. With SQL I could do this with LAG, for example, but with ArangoDB and AQL I am struggling to see how to do this in the database: effectively, the difference in timestamps between two documents in time order. I have a lot of data and I don't really want to pull it all into pandas.
Any help really appreciated.
Although the solution provided by CodeManX works, I prefer a different one:
FOR d IN docs
  SORT d.timecaptured
  WINDOW { preceding: 1 } AGGREGATE s = SUM(d.timecaptured), cnt = COUNT(1)
  LET timediff = cnt == 1 ? null : d.timecaptured - (s - d.timecaptured)
  RETURN timediff
We simply calculate the sum of the previous and the current document's timecaptured. Subtracting the current document's timecaptured from that sum gives us the previous document's timecaptured, and subtracting that from the current value gives the requested difference (e.g. with timestamps 10 and 14 in the window, s = 24, 24 - 14 = 10, and 14 - 10 = 4).
I only use the COUNT to return null for the first document (which has no predecessor). If you are fine with having a difference of zero for the first document, you can simply remove it.
However, neither approach is very straightforward or obvious. I have put adding an APPEND aggregate function, which could be used in WINDOW and COLLECT operations, on my TODO list.
The WINDOW function doesn't give you direct access to the data in the sliding window but here is a rather clever workaround:
FOR doc IN collection
  SORT doc.timecaptured
  WINDOW { preceding: 1 }
    AGGREGATE d = UNIQUE(KEEP(doc, "_key", "timecaptured"))
  LET timediff = doc.timecaptured - d[0].timecaptured
  RETURN MERGE(doc, {timediff})
The UNIQUE() function is available for window aggregations and can be used to get at the desired data (previous document). Aggregating full documents might be inefficient, so a projection should do, but remember that UNIQUE() will remove duplicate values. A document _key is unique within a collection, so we can add it to the projection to make sure that UNIQUE() doesn't remove anything.
The time difference is calculated by subtracting the previous document's timecaptured value from the current document's one. In the case of the first record, d[0] is actually equal to the current document and the difference ends up being 0, which I think is sensible. You could also write d[-1].timecaptured - d[0].timecaptured to achieve the same. d[1].timecaptured - d[0].timecaptured, on the other hand, will give you the inverted timestamp for the first record because d[1] is null (no previous document) and evaluates to 0.
There is one risk: UNIQUE() may alter the order of the documents. You could use a subquery to sort by timecaptured again:
LET timediff = doc.timecaptured - (
FOR dd IN d SORT dd.timecaptured LIMIT 1 RETURN dd.timecaptured
)[0]
But it's not great for performance to use a subquery. Instead, you can use the aggregation variable d to access both documents and calculate the absolute value of the subtraction so that the order doesn't matter:
LET timediff = ABS(d[-1].timecaptured - d[0].timecaptured)
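If you then want to consume the result from Python without pulling the raw documents into pandas, a minimal sketch with the python-arango driver could stream the differences computed by the AQL above; the host, credentials, database and collection names below are placeholders:
from arango import ArangoClient  # python-arango driver

client = ArangoClient(hosts="http://localhost:8529")  # placeholder host
db = client.db("mydb", username="root", password="")  # placeholder credentials

aql = """
FOR doc IN collection
  SORT doc.timecaptured
  WINDOW { preceding: 1 }
    AGGREGATE d = UNIQUE(KEEP(doc, "_key", "timecaptured"))
  LET timediff = doc.timecaptured - d[0].timecaptured
  RETURN { _key: doc._key, timediff: timediff }
"""

# batch_size keeps memory bounded; the cursor fetches further batches lazily.
cursor = db.aql.execute(aql, batch_size=1000)
for row in cursor:
    print(row["_key"], row["timediff"])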

Insert values into API request dynamically?

I have an API request I'm writing to query OpenWeatherMap's API to get weather data. I am using a city_id number to submit a request for a unique place in the world. A successful API query looks like this:
r = requests.get('http://api.openweathermap.org/data/2.5/group?APPID=333de4e909a5ffe9bfa46f0f89cad105&id=4456703&units=imperial')
The key part of this is 4456703, which is a unique city_ID
I want the user to choose a few cities, which then I'll look through a JSON file for the city_ID, then supply the city_ID to the API request.
I can add multiple city_ID's by hard coding. I can also add city_IDs as variables. But what I can't figure out is if users choose a random number of cities (could be up to 20), how can I insert this into the API request. I've tried adding lists and tuples via several iterations of something like...
#assume the user already chose 3 cities, city_ids are below
city_ids = [763942, 539671, 334596]
r = requests.get(f'http://api.openweathermap.org/data/2.5/group?APPID=333de4e909a5ffe9bfa46f0f89cad105&id={city_ids}&units=imperial')
Maybe a list is not the right data type to use?
Successful code would look something like...
r = requests.get(f'http://api.openweathermap.org/data/2.5/group?APPID=333de4e909a5ffe9bfa46f0f89cad105&id={city_id1},{city_id2},{city_id3}&units=imperial')
Except, as I stated previously, the user could choose 3 cities or 10 so that part would have to be updated dynamically.
You can use str.join() and a list comprehension to combine all the IDs in the list into a single comma-separated string, then format that into the API URL as follows:
city_ids_list = [763942, 539671, 334596]
city_ids_string = ','.join([str(city) for city in city_ids_list]) # Would output "763942,539671,334596"
r = requests.get('http://api.openweathermap.org/data/2.5/group?APPID=333de4e909a5ffe9bfa46f0f89cad105&id={city_ids}&units=imperial'.format(city_ids=city_ids_string))
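The same join also works with the f-string the question was already using; a small sketch with a placeholder API key:
import requests

city_ids = [763942, 539671, 334596]  # chosen by the user; can be any length
ids_param = ",".join(str(c) for c in city_ids)  # "763942,539671,334596"
api_key = "<YOUR_API_KEY>"  # placeholder
url = (
    "http://api.openweathermap.org/data/2.5/group"
    f"?APPID={api_key}&id={ids_param}&units=imperial"
)
r = requests.get(url)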
hope it helps,
good luck

Get the label of a MathJax equation

How can I get the label of an equation? I'm attempting to reprocess an equation with a label, but I have to delete the label from MathJax.Extension["TeX/AMSmath"].labels first, for which the label must be known...
I know I can scan through the source text for the label with MathJax.Hub.getAllJax("mathDiv")[0].SourceElement().find("\\label{") (...), but this seems needlessly complicated. Is there a better way?
There's no built-in API for this.
If you don't need to keep labels, then the reset in the comment above is probably the best way to go about it:
MathJax.Extension["TeX/AMSmath"].labels = {}
A quick and dirty way to get the IDs is to leverage the fact that they end up in the output. So you can just get all the IDs in the output, e.g.,
const math = MathJax.Hub.getAllJax()[0];
const nodesWithIds = document.getElementById(math.root.inputID).previousSibling.querySelectorAll('[id]');
const ids = [];
for (const node of nodesWithIds) ids.push(node.id);
A cleaner and perhaps conceptually easier way would be to leverage MathML (which is essentially the internal format): the \label{} always ends up on an mlabeledtr. The trouble is that you'd have to re-parse that, e.g.,
const temp = document.createElement('span');
temp.innerHTML = math.root.toMathML();
const nodesWithIds = temp.querySelectorAll('mlabeledtr [id]');
const ids = [];
for (const node of nodesWithIds) ids.push(node.id);
This will make sure the array only has relevant IDs in it (and the contents of the nodes should correspond to \label{}).
I suppose with helper libraries it might be easier to dive into the math.root object directly and look for IDs recursively (in its data key).

Filtering Haystack (SOLR) results by django_id

With Django/Haystack/SOLR, I'd like to be able to restrict the result of a search to those records within a particular range of django_ids. Getting these IDs is not a problem, but trying to filter by them produces some unexpected effects. The code looks like this (extraneous code trimmed for clarity):
def view_results(request, arg):
    # django_ids list is first calculated using arg...
    sqs = SearchQuerySet().facet('example_facet')  # STEP_1
    sqs = sqs.filter(django_id__in=django_ids)  # STEP_2
    view = search_view_factory(
        view_class=SearchView,
        template='search/search-results.html',
        searchqueryset=sqs,
        form_class=FacetedSearchForm
    )
    return view(request)
At the point marked STEP_1 I get all the database records. At STEP_2 the records are successfully narrowed down to the number I'd expect for that list of django_ids. The problem comes when the search results are displayed in cases where the user has specified a search term in the form. Rather than returning all records from STEP_2 which match the term, I get all records from STEP_2 plus all from STEP_1 which match the term.
Presumably, therefore, I need to override one or more of the methods of SearchView in haystack/views.py, but which? Can anyone suggest a means of achieving what is required here?
After a bit more thought, I found a way around this. In the code above, the problem was occurring in the view = search_view_factory... line, so I needed to create my own SearchView class and override the get_results(self) method in order to apply the filtering after the search has been run with the user's search terms. The result is code along these lines:
class MySearchView(SearchView):
    def get_results(self):
        search = self.form.search()
        # The ID I need for the database search is at the end of the URL,
        # but this may have some search parameters on and need cleaning up.
        view_id = self.request.path.split("/")[-1]
        view_query = MyView.objects.filter(id=view_id.split("&")[0])
        # At this point the django_ids of the required objects can be found.
        if len(view_query) > 0:
            view_item = view_query[0]
            django_ids = []
            for thing in view_item.things.all():
                django_ids.append(thing.id)
            search = search.filter_and(django_id__in=django_ids)
        return search
Using search.filter_and rather than search.filter at the end was another thing which turned out to be essential, but which didn't do what I needed when the filtering was being performed before getting to the SearchView.
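For completeness, a hedged sketch of how the custom view could be wired up in urls.py, assuming an older Django/Haystack project laid out like the question (module paths and URL pattern are placeholders):
# urls.py
from django.conf.urls import url
from haystack.forms import FacetedSearchForm
from haystack.views import search_view_factory

from .views import MySearchView  # wherever MySearchView is defined

urlpatterns = [
    url(r'^search/', search_view_factory(
        view_class=MySearchView,
        template='search/search-results.html',
        form_class=FacetedSearchForm,
    )),
]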
