How to use the function transaction.retrieve() to get receipt data? - bixby

I am developing a transaction workflow capsule, and I use the function transaction.retrieve() to get order data from the platform, but it returns only part of the order data.
MyReceipt is a structure that stores the order information. It is defined like this:
structure (MyReceipt) {
  description (order info)
  // properties
  features { activity }
}
And it is used as the output concept of the Commit action, like this:
action (CommitRequest) {
  type (Commit)
  description ()
  collect {
    // MyRequest
  }
  output (MyReceipt)
}
I try to get the data like this:
transaction.retrieve("bixby.MyCapsule.MyReceipt")
It is supposed to return all the MyReceipt data, but it returns only part of it. Is this the right way to get all the orders, or is there another way to get all the receipt data?
I have also found sample code that uses it like this to get the last receipt record:
transaction.retrieve("bixby.MyCapsule.MyReceipt", "ALL", 1)
but it doesn't explain what the two parameters "ALL" and 1 stand for, and I would like more details about how to use this function.
Could you please tell me how to use transaction.retrieve() (or another function) to get all of the historical receipt data, and how I can inspect someone's transaction data when I am trying to find the cause of an issue?

Copying the answer from dogethis. (Thanks, man! You did the hard work, I took the credit.)
We have the documentation ready online here.
Basically, "ALL" is the default and retrieves transaction data in every state, and 1 means only one record. The API page was not there before, so thanks for letting us know.
I think it's the 1 that is causing you not to get all the records, but note there is also a limit of 20...
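As a rough sketch, from an action's JavaScript file the call could look like this (the concept name is taken from your question; the function name and the require are placeholders, so adjust them to your capsule):
var transaction = require('transaction')

module.exports.function = function findAllReceipts () {
  // "ALL" (the default) covers every transaction state; the third
  // argument is the maximum number of records, capped at 20.
  return transaction.retrieve("bixby.MyCapsule.MyReceipt", "ALL", 20)
}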
Have fun with Bixby!

Related

Firebase how to read latest children node?

I'm new to Firebase and I'm still trying to learn how to get the latest child node in this RTDB. My NodeMCU will send new data periodically, so I'm trying to get the latest node when it is added, along with the value of that node. Can you provide some sample code so I can understand better? And if possible, please explain like I'm five. Thank you and have a good day.
From what I understand, you have an Arduino module that is going to be constantly introducing data into your database.
What you want is to be able to read the value shown in the image as MQ7 every time a new value is added.
If this is the case, there are different ways to obtain it.
The first and most common one would be to use the Firebase child_added event. With this event you can handle the data entered every time there is an addition to the database reference.
Using this event you would have a set of all the values entered at your reference, and with each addition this set would be updated automatically (in real time).
Taking your image as an example, the query code would be something like this (JS):
dbRef.child("Sensor MQ7").on("child_added", (snap) => {
for (i in snap.val()) {
const value_MQ7 = snap.child(i).child("MQ7").val()
// Do what you want with the value
console.log(value_MQ7)
}
})
If you don't want to keep that set with all the values entered at your reference, the best option would be a query that returns only the value you are requesting, that is, one that returns the MQ7 value of the last object entered under your Sensor MQ7 reference.
The query code would be something like this (JS):
const query = dbRef.child("Sensor MQ7").orderByKey().limitToLast(1);
query.get().then((snap) => {
  for (const i in snap.val()) {
    // Do what you want with the value
    const value_MQ7 = snap.child(i).child("MQ7").val()
    console.log(value_MQ7)
  }
})

Using Logstash Aggregate Filter plugin to process data which may or may not be sequenced

Hello all!
I am trying to use the Aggregate filter plugin of Logstash v7.7 to correlate and combine data from two different CSV file inputs that represent API data calls. The idea is to produce a record showing a combined picture. As you might expect, the data may or may not arrive in the right sequence.
Here is an example:
/data/incoming/source_1/*.csv
StartTime, AckTime, Operation, RefData1, RefData2, OpSpecificData1
231313232,44343545,Register,ref-data-1a,ref-data-2a,op-specific-data-1
979898999,75758383,Register,ref-data-1b,ref-data-2b,op-specific-data-2
354656466,98554321,Cancel,ref-data-1c,ref-data-2c,op-specific-data-2
/data/incoming/source_2/*.csv
FinishTime,Operation,RefData1, RefData2, FinishSpecificData
67657657575,Cancel,ref-data-1c,ref-data-2c,FinishSpecific-Data-1
68445590877,Register,ref-data-1a,ref-data-2a,FinishSpecific-Data-2
55443444313,Register,ref-data-1a,ref-data-2a,FinishSpecific-Data-2
I have a single pipeline that is receiving both these CSVs, and I am able to process and write them as individual records to a single index. However, the idea is to combine records from the two sources into one record, each representing a superset of Operation-related information.
Unfortunately, despite several attempts I have been unable to figure out how to achieve this via the Aggregate filter plugin. My primary question is whether this is a suitable use of this specific plugin. And if so, any suggestions would be welcome!
At the moment, I have this:
input {
  file {
    path => ['/data/incoming/source_1/*.csv']
    tags => ["source1"]
  }
  file {
    path => ['/data/incoming/source_2/*.csv']
    tags => ["source2"]
  }
}
filter {
  # use the tags to do some source 1 and 2 related massaging, calculations, etc
  aggregate {
    task_id => "%{Operation}_%{RefData1}_%{RefData2}"
    code => "
      map['source_files'] ||= []
      map['source_files'] << { 'source_file' => event.get('path') }
    "
    push_map_as_event_on_timeout => true
    timeout => 600 # assuming this is the most far apart they will arrive
  }
  ...
}
output {
  elasticsearch { ... }
}
And other such variations. However, I keep getting individual records written to the index and am unable to get one combined record. Again, as you can see from the data set, there's no guarantee of the sequencing of records - so I am wondering if the filter is the right tool for the job to begin with? :-\
Or is it just me not being able to use it right! ;-)
In either case, any inputs/comments/suggestions welcome. Thanks!
PS: This message is being cross-posted from the Elastic forums. I am providing a link there just in case some answers pop up there too.
The answer is to use Elasticsearch in upsert mode. Please see the specifics here.
First, I recommend making sure the information reaches you in order so that the filter can handle it better; secondly, you could set these options in your pipeline.yml: pipeline.workers: 1 and pipeline.ordered: true, thus guaranteeing the order of processing.
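For reference, a minimal sketch of what an upsert-style output could look like (the hosts, index name, and document_id pattern are assumptions based on the correlation key discussed above):
output {
  elasticsearch {
    hosts         => ["localhost:9200"]
    index         => "operations"
    # Route both sources to the same document via the correlation key,
    # creating the document on first sight and updating it afterwards.
    document_id   => "%{Operation}_%{RefData1}_%{RefData2}"
    action        => "update"
    doc_as_upsert => true
  }
}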

WooCommerce Subscriptions: how to determine the last correctly paid order for a given subscription

Is there any already-programmed method to get the last correctly-paid order for a given subscription?
$subscription->get_last_order() will return the last associated order, regardless of whether that order was correctly paid or not.
$subscription->get_related_orders() will return the whole list of orders, and the list can include pending-payment or failed orders.
I think if you wrap/trigger $subscription->get_last_order() with the woocommerce_subscription_payment_complete action (https://docs.woocommerce.com/document/subscriptions/develop/action-reference/), you would essentially achieve that objective. That hook fires for both initial subscription orders and renewal orders, and will ensure the $last_order is paid for. Something like this:
add_action( 'woocommerce_subscription_payment_complete', 'set_last_order' );
function set_last_order( $subscription ) {
    $last_order = $subscription->get_last_order( 'all', 'any' );
    // If you want to be able to reference that $last_order at any time
    // then you could just save/update that order ID to post meta so
    // that you can grab it any time outside of the action.
}
I know that seems a little clunky, but it's the best way I can think of. The only other option that comes to mind would be to loop through $subscription->get_related_orders(), checking is_paid() from high IDs to low IDs and grabbing the first paid one from there; a rough sketch of that follows.
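Something along these lines (untested, and it assumes get_related_orders( 'all', 'any' ) returns order objects keyed by order ID, so treat it as illustrative only):
function get_last_paid_order( $subscription ) {
    $orders = $subscription->get_related_orders( 'all', 'any' );
    // Walk the orders from highest (newest) ID to lowest.
    krsort( $orders );
    foreach ( $orders as $order ) {
        if ( $order->is_paid() ) {
            return $order;
        }
    }
    return false;
}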

Referencing external doc in CouchDB view

I am scraping a 90K-record database using JSON-RPC and I am trying to put in some basic error checking. I want to start by scraping the database twice using two different settings and adding a prefix to the second scrape. This way I can check that the two settings are not producing different records (due to dropped updates, etc.). I wanted to implement the comparison using a view which compares each document from the first scrape with its twin produced by the second scrape and then emits the names of records that differ.
However, I cannot quite figure out how to pull another doc into the view; everything I have read only discusses referencing external docs via the emit() function, which is too late to permit me to compare them. In the example below, the lookup() function would grab the referenced document.
Is this just not possible?
function(doc) {
  if (doc._id.slice(0,1) !== '$' && doc._id.slice(0,1) !== '_') {
    var otherDoc = lookup('$test' + doc._id);
    if (otherDoc) {
      var keys = Object.keys(doc);
      var same = true;
      keys.forEach(function(key) {
        if ((key.slice(0,1) !== '_') && (key.slice(0,1) !== '$') && (key !== 'expires')) {
          if (!Object.equal(otherDoc[key], doc[key])) {
            same = false;
          }
        }
      });
      if (!same) {
        emit(doc._id, 1);
      }
    }
  }
}
Context
You are correct that this is not possible in CouchDB. The whole point of the map function is that it must be idempotent, otherwise you lose all the other nice benefits of a pre-calculated index.
This is why you cannot access external resources in the map function, whether they be other records or the clock. Any time you run a map you must always get the same result if you put the same record into it. Since there are no relationships between records in CouchDB, you cannot promise that this is possible.
Solution
However, you can still achieve your end goal, just by different means. Some possibilities:
Assuming there is some meaningful numeric value in each doc, you could use a view to take the sum of all those values and group them by which import you did ({key: <batch id>, value: <meaningful number>}). Then compare the two numbers in your client or the browser to see if they match.
A brute force approach would be to use a view to pair the docs that should match. Each doc is on a different row, but they're grouped by a common field. Then iterate through the entire index comparing the pairs. This would certainly be the quickest to code and doesn't depend on your application or data.
Implement a validation function to enforce a schema on your data. Just be warned that this will reduce your write throughput since each written record will be piped out of Erlang and into the JS engine. Also, this is only applicable if you're worried about properly formed records instead of their precise content, which might not be the case.
Instead of having your different batch jobs create different docs, have them place the data into the same doc. The structure might look like this: { "_id": "something meaningful", "batch_one": { ..data.. }, "batch_two": { ..data.. } } Then your validation function could compare them, or you could create a view that indexes all the docs that don't match (a sketch follows below). It all depends on where in your pipeline you want to do the error checking and correction.
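A minimal sketch of that last idea, assuming the combined-document structure shown above (the naive JSON.stringify comparison only works if both batches serialize their keys in the same order):
function (doc) {
  // Only consider docs that carry both batches.
  if (doc.batch_one && doc.batch_two) {
    // Flag the doc when the two scrapes disagree.
    if (JSON.stringify(doc.batch_one) !== JSON.stringify(doc.batch_two)) {
      emit(doc._id, 1);
    }
  }
}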
Personally I like the last option better, but only if you don't plan to use the database as-is in production. I.e., you wouldn't want to carry around all that extra data in each record.
Hope that helps.
Cheers.

How to remove the data after user has logged out?

I was following along with the example Lending Library app in the book "Getting Started with Meteor.js" (Packt Publishing). It is running at:
http://matloob.lendlib.meteor.com
It works fine, but when a user logs out while one category is open and its items are being displayed, that category and its items remain on the page while the rest are filtered out. On refreshing the page, the remaining category is also filtered out.
The publish function is:
Meteor.publish("Categories", function () {
Meteor.flush(); // I added this so it will flush out the remaining data, but :(
return lists.find({owner: this.userId}, {fields: {Category: 1}});
});
It is hard to point out the exact vulnerability without seeing more code, but this is what I could find out: even when not logged in as a user, one can set the session variable current_list to an id to get the corresponding list document:
Session.set("current_list",'ZLREaTCPiC6E7ece3')
So I assume that somewhere in your code you publish the details of a list given its id.
At least this would explain why the list remains even after logging out when a category is selected (which in turn means current_list holds an id).
Possibly publishing the list is done using Deps.autorun since the list is immediately published once the session variable is changed.
Maybe you can find that piece of code and post it, or just change it so that it also includes a check that the user is the owner of that list or category, along these lines:
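A sketch of such an ownership check (the publication name and its argument are assumptions, since that part of your code isn't shown):
Meteor.publish("listDetails", function (listId) {
  // Only return the list if it belongs to the currently logged-in user;
  // when the user logs out, this.userId becomes null and nothing matches.
  return lists.find({ _id: listId, owner: this.userId });
});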
Consider using the user-status package to listen for users logging out and doing some cleanup on the server as a result:
https://github.com/mizzao/meteor-user-status
Specifically, you can use the following callback:
UserStatus.on "sessionLogout", (advice) ->
  console.log(advice.userId + " with session " + advice.sessionId + " logged out")
