Chainlink node CRON job. How does it get paid?

These are the docs:
https://docs.chain.link/docs/jobs/types/cron/
type = "cron"
schemaVersion = 1
schedule = "CRON_TZ=UTC * */20 * * * *"
externalJobID = "0EEC7E1D-D0D2-476C-A1A8-72DFB6633F01"
observationSource = """
fetch [type="http" method=GET url="https://chain.link/ETH-USD"]
parse [type="jsonparse" path="data,price"]
multiply [type="multiply" times=100]
fetch -> parse -> multiply
"""
But what I am wondering is how the job connects to the oracle contract. How does it connect with the user contract to get paid? Where and how do we send the data once the job completes at the specified interval?
Does the job start when it is posted on the node side, or does the clock start once a user contract calls it?
Any help would be much appreciated. I am trying to run through the types of jobs to familiarize myself with the capabilities of a Chainlink node.

A cron job executes based on a cron-defined schedule. This means it's triggered by a condition that the Chainlink node itself evaluates, not externally via a smart contract. Because of this, the node isn't paid in LINK tokens for processing the request the way it is for API calls: it's initiating a request itself, as opposed to receiving a request (and payment) from on-chain. A node can't initiate a job on its own and then expect to receive payment from a consuming contract. If you require such functionality, you can try adding some logic to the called function to withdraw some LINK, but be careful about who can call that function.
If you want to send data on-chain to a smart contract once the job is completed, you need to manually define an ethtx task at the end of the cron job.
Here's an extended version of your job that sends the result back to a function called someFunction on the contract deployed at address 0x6495C9684Cc5702522A87adFd29517857FC99f45 (the to address in the ethtx task). The data can be in any format, as long as the ABI of the function matches what's being encoded in the job: i.e., if the function expects a bytes param, you need to ensure you're encoding a bytes param; if it expects a uint param, then you need to encode a uint param. In this example, a bytes32 parameter is used.
type = "cron"
schemaVersion = 1
name = "GET > bytes32 (cron)"
schedule = "CRON_TZ=UTC #every 1m"
observationSource = """
fetch [type="http" method=GET url="https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=USD"]
parse [type="jsonparse" path="USD"]
multiply [type="multiply" times=100]
encode_response [type="ethabiencode"
                 abi="(uint256 data)"
                 data="{\\"data\\": $(multiply) }"]
encode_tx       [type="ethabiencode"
                 abi="someFunction(bytes32 data)"
                 data="{ \\"data\\": $(encode_response) }"]
submit_tx       [type="ethtx"
                 to="0x6495C9684Cc5702522A87adFd29517857FC99f45"
                 data="$(encode_tx)"]
fetch -> parse -> multiply -> encode_response -> encode_tx -> submit_tx
"""
And here's the consuming contract for the cron job above:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;

contract Cron {
    bytes32 public currentPrice;

    function someFunction(bytes32 _price) public {
        currentPrice = _price;
    }
}
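As an aside on the ABI-matching point: encoding a single uint256 always yields exactly 32 bytes, which is why the output of encode_response can be passed along as a bytes32. A minimal sketch in Python, assuming the eth_abi package (v4+; older releases name the function encode_abi):
from eth_abi import encode  # pip install eth-abi

# ABI-encode a single uint256, as the encode_response task does.
price = 167512  # e.g. an ETH/USD price multiplied by 100
encoded = encode(["uint256"], [price])

print(len(encoded))                     # 32 -- one uint256 encodes to exactly 32 bytes
print(encoded.hex())                    # zero-padded big-endian hex
print(int.from_bytes(encoded, "big"))   # 167512 -- round-trips back to the integer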
To answer your other question: the job is active as soon as it's created on the node, and it will start evaluating the triggering conditions to activate a run based on the schedule defined.

Related

Why does transact-and-wait behave differently when wrapped within a function?

Within my project I intend to send large volumes of transactions, so for simplicity I am building a wrapper that executes the following two calls together: contractName.functions.functionName(params).transact() and w3.eth.wait_for_transaction_receipt(tx_hash). However, when I wrap them in the function transact_and_wait below, the transactions do not get executed!
Implementation of Transact and wait
def transact_and_wait(contract_function, transaction_params={"gas": 100000}):
    # Send the transaction
    if transaction_params != {"gas": 100000}:
        transaction_params["gas"] = 100000
    transaction_hash = contract_function.transact(transaction_params)
    # Wait for the transaction to be mined
    transaction_receipt = w3.eth.wait_for_transaction_receipt(transaction_hash)
    return transaction_receipt
Where it is called via: transact_and_wait(contractName.functions.functionName(account.address))
For example, this should set a role for a user (represented by the value 1).
However, when I then call print(contractName.functions.stateVariable(account.address).call()), it returns 0.
If I do the same process as above, but not within a function:
tx_hash = contractName.functions.functionName(account.address).transact({"gas": 100000})
transaction_receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
then I can call the same getter: print(contractName.functions.stateVariable(account.address).call())
It returns 1.
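One thing worth checking (a debugging sketch, not part of the original question): the receipt returned by the wrapper already tells you whether the transaction was mined successfully or reverted. If status is 0 while gasUsed equals the 100000 gas limit, the call likely ran out of gas, which would explain the getter still returning 0:
# Assumes w3, contractName and account are set up as in the question.
receipt = transact_and_wait(contractName.functions.functionName(account.address))

print(receipt.status)   # 1 = success, 0 = mined but reverted
print(receipt.gasUsed)  # equal to the 100000 gas limit suggests an out-of-gas revert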

How to get details of per function app in Application Insights

I'm running a Python v3 function app and it contains multiple functions with different bindings (Cosmos, Blob, HTTP, etc.). I'm trying to get details about this function app in Application Insights, like the number of requests, exceptions raised during execution, the number of requests per function app and per function, etc.
I'm able to run queries and get a few details like request counts. Now I'm trying to map request details with other tables like exceptions and requests, but I'm not able to map them and drill down to a particular function.
For example, suppose I have 10 functions in a function app and they run one after another, each based on the output of the previous function. Say the flow fails at some function. I want to know at which step/function my function app failed, the details of the error, and the successful and unsuccessful flow completions of the function app.
Below are some of the queries I have used for monitoring purposes.
Requests on the first function, to get the total request count for the function app:
requests
| where timestamp > ago(1d)
| where operation_Name =~ "function name"
| summarize RequestsCount=sum(itemCount) by cloud_RoleName,bin(timestamp,1d)
Requests and average duration per function:
requests
| summarize RequestsCount=sum(itemCount), AverageDuration=avg(duration) by operation_Name
| order by RequestsCount desc
You can check the exceptions per function with:
exceptions
| extend OperationName = iff(operation_Name == "","[No operation name]",operation_Name)
| summarize Count = count() by cloud_RoleName, OperationName, type, method
To join with requests:
requests
| where timestamp > ago(24h) and success == false
| join kind= inner (
exceptions
| where timestamp > ago(24h)
) on operation_Id
| project exceptionType = type, failedMethod = method, requestName = name, requestDuration = duration, success
Keep in mind that if you catch an error yourself, the result of the function will count as a success.
You could also work with custom error logs in your functions, where you create a JSON object that ends up in the message column of the traces table. You can then query further:
traces
| where message contains "the error i am searching for"
| extend json = parse_json(message)
| project
timestamp,
errorSource = json.error_source,
step = json.step,
errors = json.errors,
url = json.url
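If you want to pull the same numbers programmatically rather than in the portal, one option (an assumption on my part, not something from the queries above) is the azure-monitor-query package, pointed at the Log Analytics workspace that backs your Application Insights resource:
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder id -- use the Log Analytics workspace linked to the function app.
WORKSPACE_ID = "<your-workspace-id>"

client = LogsQueryClient(DefaultAzureCredential())

# Same query as above: request count and average duration per function.
query = """
requests
| summarize RequestsCount=sum(itemCount), AverageDuration=avg(duration) by operation_Name
| order by RequestsCount desc
"""

response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(row)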

Change Feed Processor Lib does not honour ChangeFeedProcessorOptions FeedPollDelay / CheckPointFrequency

I am following this sample code (https://github.com/Azure/azure-documentdb-changefeedprocessor-dotnet#example) to register an observer to process change feed in cosmos db collection.
I am creating new documents in the cosmos db collection using a utility (say create 400 documents within a for loop).
I am using a FeedPollDelay of 30 seconds, but it doesn't seem to be honoured by the CFP lib: the ProcessChangesAsync method gets invoked repeatedly even before the feed poll delay interval expires.
In the first batch around 60 docs are retrieved, in the second batch around 20 docs, and in the third batch around 100 docs.
DocumentCollectionInfo feedCollectionInfo = new DocumentCollectionInfo()
{
    DatabaseName = databaseName,
    CollectionName = monitoredCollectionName,
    Uri = new Uri(uri),
    MasterKey = masterKey
};

DocumentCollectionInfo leaseCollectionInfo = new DocumentCollectionInfo()
{
    DatabaseName = databaseName,
    CollectionName = leaseCollectionName,
    Uri = new Uri(uri),
    MasterKey = masterKey
};

ChangeFeedProcessorOptions feedProcessorOptions = new ChangeFeedProcessorOptions()
{
    FeedPollDelay = TimeSpan.FromSeconds(30)
    //LeasePrefix = Guid.NewGuid().ToString(),
    //MaxItemCount = 100
};

ChangeFeedProcessorBuilder builder = new ChangeFeedProcessorBuilder();
processor = await builder
    .WithHostName(hostName)
    .WithFeedCollection(feedCollectionInfo)
    .WithLeaseCollection(leaseCollectionInfo)
    .WithProcessorOptions(feedProcessorOptions)
    .WithObserver<LiveWorkItemChangeFeedObserver>()
    .BuildAsync();

await processor.StartAsync();
Receiving 60 docs in the first batch is fine. But I am expecting the second batch to be invoked with the remaining 340 docs in a single batch after the feed poll delay (30 seconds) expires. Instead, the ProcessChangesAsync method gets triggered frequently and this option is not being honoured.
FeedPollDelay is used when the Change Feed Processor reads the Change Feed and finds no new changes, not in between each batch (see the sketch after the flow below).
Example flow:
CFP polls for changes, finds X.
ProcessChangesAsync is called with X
After ProcessChangesAsync finishes, CFP immediately polls for changes, finds Y.
ProcessChangesAsync is called with Y.
After ProcessChangesAsync finishes, CFP immediately polls for changes, finds nothing, waits FeedPollDelay.
CFP polls for changes, finds Z.
ProcessChangesAsync is called with Z
After ProcessChangesAsync finishes, CFP immediately polls for changes, finds nothing, waits FeedPollDelay.
Etc….
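In other words, the processor's loop behaves roughly like this sketch (plain Python pseudocode of the behaviour described above, not the library's actual implementation):
import time

def change_feed_loop(read_changes, process_changes, feed_poll_delay=30.0):
    """FeedPollDelay only applies when a poll finds no new changes."""
    while True:
        changes = read_changes()      # poll the change feed
        if changes:
            process_changes(changes)  # the ProcessChangesAsync analogue
            # no delay here: poll again immediately after a non-empty batch
        else:
            time.sleep(feed_poll_delay)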

Dump series back into InfluxDB after querying with replaced field value

Scenario
I want to send data to an MQTT Broker (Cloud) by querying measurements from InfluxDB.
I have a field in the schema called status. It can be either 1 or 0. status=0 indicates that the series has not been sent to the cloud. If I get an acknowledgement from the MQTT broker, I wish to rewrite the series back into the database with status=1.
As mentioned in the InfluxDB FAQ regarding duplicate data: if a point has the same timestamp as an existing point but a different field value, the updated field value will be shown.
In order to test this I created the following:
CREATE DATABASE dummy
USE dummy
INSERT meas_1,type=t1 status=0,value=123 1536157064275338300
query:
SELECT * FROM meas_1
provides
time                 status  type  value
1536157064275338300  0       t1    123
now if I want to overwrite the series I do the following:
INSERT meas_1,type=t1 status=1,value=123 1536157064275338300
which will overwrite the series
time                 status  type  value
1536157064275338300  1       t1    123
(Note: this kind of overwrite is currently not possible for tags in InfluxDB; changing a tag value creates a new series instead.)
Usage
Query some information using the client with "status"=0.
Restructure JSON to be sent to the cloud
Send the information to cloud
If successful then write the output from Step 1. back into the DB but with status=1.
I am using the InfluxDBClient Python3 to create the Application (MQTT + InfluxDB)
Within the write_points API there is a parameter called batch_size which requires an int as input.
I am not sure how I can use this for the application that I want. Can someone guide me with this, or with the schema of the DB, so that I can upload actual and non-redundant information to the cloud?
The batch_size is actually the length of the list of measurements that needs to be passed to write_points.
Steps
Create a client and query the measurement (here, we query gps information):
client = InfluxDBClient(database='dummy')
op = client.query('SELECT * FROM gps WHERE "status"=0', epoch='ns')
Turn the ResultSet into a list:
batch = list(op.get_points('gps'))
Create an empty list for the updated points:
updated_batch = []
Parse through each measurement and change the status flag to 1. (Note: numeric field values in InfluxDB are floats by default.)
for each in batch:
    new_mes = {
        'measurement': 'gps',
        'tags': {
            'type': 'gps'
        },
        'time': each['time'],
        'fields': {
            'lat': float(each['lat']),
            'lon': float(each['lon']),
            'alt': float(each['alt']),
            'status': float(1)
        }
    }
    updated_batch.append(new_mes)
Finally, dump the points back via the client, with batch_size set to the length of updated_batch:
client.write_points(updated_batch, batch_size=len(updated_batch))
This overwrites the series because it contains the same timestamps, now with the status field set to 1.
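Putting the steps together, a sketch of the whole round-trip as one helper (the function name mark_as_sent is hypothetical; the field names match the gps example above):
from influxdb import InfluxDBClient

def mark_as_sent(client, measurement='gps', tag_type='gps'):
    """Hypothetical helper: flip status from 0 to 1 for every unsent point."""
    op = client.query(
        'SELECT * FROM {m} WHERE "status" = 0'.format(m=measurement),
        epoch='ns',
    )
    batch = list(op.get_points(measurement))
    updated_batch = [
        {
            'measurement': measurement,
            'tags': {'type': tag_type},
            'time': each['time'],
            'fields': {
                'lat': float(each['lat']),
                'lon': float(each['lon']),
                'alt': float(each['alt']),
                'status': float(1),  # same timestamp, new field value => overwrite
            },
        }
        for each in batch
    ]
    if updated_batch:
        client.write_points(updated_batch, batch_size=len(updated_batch))
    return len(updated_batch)

# usage
client = InfluxDBClient(database='dummy')
print(mark_as_sent(client))  # number of points flipped to status=1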

Akka: How to ensure that message has been received?

I have an actor Dispenser. What it does is it
dispenses some objects by request
listens for newly arriving ones
Code follows
class Dispenser extends Actor {
  override def receive: Receive = {
    case Get =>
      context.sender ! getObj()
    case x: SomeType =>
      addObj(x)
  }
}
In real processing it doesn't matter whether 1 ms or even a few seconds pass from when a new object is sent until the dispenser starts to dispense it, so there's no code tracking that.
But now I'm writing a test for the dispenser, and I want to be sure that it first receives the new object and only then receives the Get request.
Here's the test code I came up with:
val dispenser = system.actorOf(Props.create(classOf[Dispenser]))
dispenser ! obj
Thread.sleep(100)
val task = dispenser ? Get()
val result = Await.result(task, timeout)
check(result)
It satisfies one important requirement: it doesn't change the original code. But it is
at least 100 ms slow, even on very high-performance boxes
unstable, failing sometimes, because 100 ms (or any other constant) doesn't provide any guarantees.
And the question is how to write a test that satisfies the requirement and doesn't have the cons above (nor any other obvious ones).
You can take out the Thread.sleep(..) and your test will be fine. Akka guarantees the ordering you need.
With the code
dispenser ! obj
val task = dispenser ? Get()
dispenser will process obj before Get deterministically, because:
The same thread enqueues obj and then Get, so they arrive in the actor's mailbox in the correct order.
Actors process messages sequentially, one at a time, so the two messages will be processed in the order they were queued in the mailbox.
(..if there's nothing else going on that's not in your sample code - routers, async processing in getObj or addObj, stashing, ..)
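The guarantee boils down to a FIFO mailbox drained by one logical consumer. A language-agnostic sketch of that model (in Python here, purely to illustrate the ordering argument, not Akka itself):
from queue import Queue

mailbox = Queue()  # the actor's mailbox: strictly FIFO

# The same caller thread enqueues obj, then Get...
mailbox.put("obj")
mailbox.put("Get")

# ...and the actor drains it one message at a time, in order.
while not mailbox.empty():
    print(mailbox.get())  # prints "obj" first, then "Get"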
The Akka FSM module is really handy for testing the underlying state and behavior of an actor, and it does not require changing the implementation specifically for tests.
By using TestFSMRef one can get the actor's current state and data:
val testActor = TestFSMRef(<actors constructor or Props>)
testActor.stateName shouldBe <state name>
testActor.stateData shouldBe <state data>
http://doc.akka.io/docs/akka/2.4.1/scala/fsm.html
