Insights + Evaluating the gRPC Message Flow in Hyperledger Fabric

I want to examine the gRPC Message Flow from invoking a smart contract until a block is created:
Specifically, I want to examine the following steps (the message stream used), whose parts I later find assembled into a whole block (if I understand it correctly, these parts are only put together into a block at the end, plus some additions):
Invoke call of a chaincode, e.g. changing the value of "a" to "10" using the CLI:
1. CLI sends Proposal to Endorser -> [SignedProposal with Signature, Proposal:(Header+Payload)]
2. Endorser sends Proposal Response back to CLI -> [ProposalResponse with its Endorsement, PropRespPayload]
3. CLI packs endorsements into Transaction + sends them to orderer for block creation
4. Block is created by the orderer and the signatures are validated.
What is the fastest way to capture these messages?
What I did:
Way 1 (not good, rather laborious): modify the code in binaries like "peer" where gRPC is handled and rebuild the images:
My problem is that I am able to build and modify binaries like the peer executable (which is used in the images and started inside a Docker container), but in the end I want to use them with the sample projects like first-network, where I can invoke a transaction and log, with my own code, what goes over gRPC. What I could do here, and what is very time-consuming, is to rebuild all the images, then adapt all the sample files to the new environment and implement these parts, but I think there has to be a faster way of evaluating the message flow (with output of the full gRPC message stream, decoded and encoded).
Way 2 (I think the best approach for now): log what gRPC is sending with Wireshark and try to decode it.
I have not discovered a faster way yet (I am new to Go and gRPC), but this does not work for all parts because of incomplete or fragmented messages, I'm afraid. For some parts (verifying signatures) it is necessary that I have the marshaled version of certain objects. That is actually what I need most, but for that I need to understand the gRPC content of the Wireshark captures.
Do you have any suggestions for me? Would you rather go on with Way 1 or Way 2? Or am I taking an overly complicated approach?
Is there a faster way? I need the unmarshaled parts, but also the marshaled content of some objects, and I have the proto files (assuming they are the correct ones for the traffic I captured in Wireshark while an invoke was taking place).

You can build your own gRPC client by following these simple steps:
First, you need to create a signed proposal. To see how, you can take the endorser_test.go file as a starting point.
To send the signed proposal to a peer for endorsement, create an EndorserClient and make a ProcessProposal gRPC call, which returns the response from the endorsing peer.
After that, collect all the endorsements from the peers and build a signed envelope.
To build a signed envelope, txutils.go is a helpful reference.
To send the signed envelope to the orderers, create an AtomicBroadcastClient and broadcast the envelope with its Send gRPC call (see the sketch below).
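For concreteness, here is a minimal sketch of that plumbing using Node's @grpc/grpc-js and @grpc/proto-loader. The .proto paths, endpoints, and placeholder byte strings are assumptions; actually building and signing the proposal and envelope is the part that endorser_test.go and txutils.go illustrate.

```ts
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

// Load the Fabric service definitions (point includeDirs at a checkout of fabric-protos).
const packageDefinition = protoLoader.loadSync(["peer/peer.proto", "orderer/ab.proto"], {
  includeDirs: ["./fabric-protos"],
  keepCase: true,
});
const fabric = grpc.loadPackageDefinition(packageDefinition) as any;

// Placeholders: these byte strings must be built and signed as described above.
const signedProposal = { proposal_bytes: Buffer.alloc(0), signature: Buffer.alloc(0) };
const signedEnvelope = { payload: Buffer.alloc(0), signature: Buffer.alloc(0) };

// Steps 1-2: send the SignedProposal to the endorsing peer and log the raw response.
const endorser = new fabric.protos.Endorser("localhost:7051", grpc.credentials.createInsecure());
endorser.ProcessProposal(signedProposal, (err: Error | null, resp: unknown) => {
  console.log("ProposalResponse:", err ?? resp);

  // Steps 3-4: pack the endorsements into a signed Envelope, then broadcast it
  // to the orderer, which orders it into a block.
  const broadcaster = new fabric.orderer.AtomicBroadcast("localhost:7050", grpc.credentials.createInsecure());
  const stream = broadcaster.Broadcast();
  stream.on("data", (status: unknown) => console.log("Broadcast status:", status));
  stream.write(signedEnvelope);
  stream.end();
});
```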

This seems closely related to the question you posted a month or so ago.
As you point out, if you wish to do things like validate signatures, you will need the marshaled form of the messages, but if you wish to inspect the messages, you will need to unmarshal them.
I would think that option 1 (modifying the code to dump the information you need) is still the most useful, as you can perform whatever serialization, persistence, or analysis you need inside the code itself. If you instead capture these data structures to disk via something like Wireshark, you will then need to track them, parse them, etc., which seems like more work to me.
If you have marshaled messages on disk, you can try a tool like configtxlator to unmarshal them into a friendlier JSON format, provided you have tracked the appropriate message type, though this still seems more difficult to me than simply injecting code.
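As a sketch of that configtxlator route from Node, assuming `configtxlator start` is running locally on its default port 7059 and that the captured bytes really are a marshaled common.Block (the file name and message type here are placeholders):

```ts
import { readFile } from "node:fs/promises";

// POST the raw marshaled message to configtxlator's decode endpoint and print the JSON.
const raw = await readFile("captured_block.pb");
const res = await fetch("http://127.0.0.1:7059/protolator/decode/common.Block", {
  method: "POST",
  body: raw,
});
console.log(JSON.stringify(await res.json(), null, 2));
```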


Tracking Discord with GA4

Bots are amazing, unless you're Google Analytics
After many months of learning to host my own Discord bot, I finally figured it out! I now have a Node server running on my localhost that sends and receives data from my Discord server; it works great. I can do all kinds of things I want with my Discord bot.
Given that I work with analytics every day, one project I want to figure out is how to send data to Google Analytics (specifically GA4) from this Node server.
NOTE: I have had success in sending data to my Universal Analytics property. However, as awesome as it was to finally see pageviews coming in, it was equally heartbreaking to recall that Google will be getting rid of Universal Analytics in July of this year.
I have tried the following options:
GET/POST requests to the collect endpoint
This option presented itself as impossible from the get-go. In order to send a request to the collection endpoint, a client_id must be sent along with the request itself. And this client_id is something that must be generated using Google's client id algorithm. So, I can't just make one up.
If you consider this option possible, please let me know why.
Install googleapis npm package
At first, I thought I could just install the googleapis package and be ready to go, but that idea fell on its face immediately too. I can't send data to GA with this package; I can only read data.
Find and install a GTM npm package
There are GTM npm packages out there, but I quickly found out that they all require there to be a window object, which is something my node server would not have because it isn't a browser.
How I did this for Universal Analytics
My biggest goal is to do this without using Python, Java, C++ or any other low-level languages, because that route would require me to learn new languages. Surely it's possible with NodeJS alone... no?
I eventually stumbled upon the idea of actually hosting a webpage as some sort of pseudo-proxy that would send data from the page to GA when accessed by something like a page scraper. It was simple. I created an HTML file that has Google Tag Manager installed on it, and all I had to do was use the puppeteer npm package.
It isn't perfect, but it works and I can use Google Tag Manager to handle and manipulate input, which is wonderful.
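For reference, a rough sketch of that pseudo-proxy idea with puppeteer — the page URL, dataLayer event name, and payload fields below are made up; the real page just needs the GTM snippet installed so the pushed event can be turned into a hit:

```ts
import puppeteer from "puppeteer";

// Open the locally hosted tracker page and push the bot's data into its dataLayer,
// letting Google Tag Manager turn it into an analytics hit.
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto("http://localhost:3000/tracker.html", { waitUntil: "networkidle0" });

await page.evaluate((payload) => {
  (window as any).dataLayer.push({ event: "discord_message", ...payload });
}, { channel: "general", author: "someUser" });

await browser.close();
```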
Unfortunately, this same method will not work for GA4, because GA4 automatically excludes all identified bot traffic, and there is no way to turn that setting off. It is a very useful feature for GA4, giving it quite a bit more integrity than UA, and I'm not trying to get around that fact, but it is now the bane of my entire goal.
https://support.google.com/analytics/answer/9888366?hl=en
Where to go from here?
I'm nearly at my wits' end figuring this one out. So, either an npm package exists out there that I haven't found yet, or this is a futile project.
Does anyone have any experience in sending data from NodeJS to GA4? (or even GTM?) How did you do it?
...and this client_id is something that must be generated using Google's client id algorithm. So, I can't just make one up...
Why, of course you can. GA4 generates it pretty much the same way UA does. You don't need anything from Google to do it.
Besides, instead of just mimicking requests to the collect endpoint, you may want to go the Measurement Protocol route right away: https://developers.google.com/analytics/devguides/collection/protocol/ga4 The links #dockeryZ gave work perfectly fine. Maybe try opening them in incognito, or in a different browser? Maybe you have a plugin blocking analytics URLs.
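A minimal sketch of that Measurement Protocol route from Node — the measurement ID, API secret (created under Admin > Data Streams > Measurement Protocol API secrets), and event name are placeholders, and the client_id is just a string you generate and reuse per user:

```ts
import { randomUUID } from "node:crypto";

const measurementId = "G-XXXXXXX";      // your GA4 measurement ID
const apiSecret = "YOUR_API_SECRET";    // Measurement Protocol API secret for that stream
const clientId = randomUUID();          // persist this per Discord user to keep sessions together

const res = await fetch(
  `https://www.google-analytics.com/mp/collect?measurement_id=${measurementId}&api_secret=${apiSecret}`,
  {
    method: "POST",
    body: JSON.stringify({
      client_id: clientId,
      events: [{ name: "discord_message", params: { channel: "general" } }],
    }),
  }
);
console.log(res.status); // always 2xx; use /debug/mp/collect to actually validate the payload
```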
Moreover, you don't really need to reinvent the wheel. Node already has a few packages for sending events to GA4; here's one that looks good: https://www.npmjs.com/package/ga4-mp?activeTab=readme
Or you can just use gtag directly to send events. I see a lot of people doing it even on the front end: https://www.npmjs.com/package/ga-gtag Gtag has a whole API not described there. Here's more on gtag: https://developers.google.com/tag-platform/gtagjs/reference Note how the library allows you to set the client id there.
The only caveat is that you'll have to track client ids and session ids manually. Shouldn't be too bad, though. Oh, and you will have to redefine the concept of a pageview, I guess. The obvious choice is whenever someone posts in a channel different from the previous post in the session. Still, this will have to be defined in the code.
Don't worry about Google's bot traffic detection. It's really primitive. Just make sure your user agent doesn't scream "bot"; make something better up.

Smart Contract execution reverted

I'm building an interaction with some smart contracts in JavaScript, following the Uniswap documentation as well as ethers.js; here is my code:
Gist Github
The console shows the transaction as successful,
Console Logs
But the transaction gets reverted without a clear error message,
Transaction hash
Is there any way to track down where my code is wrong? I triple-checked the paths and all the rest; as I'm still learning about smart contract interaction, maybe there's something I'm missing.
Thanks in advance,
Triple checked:
Smart contract addresses
Method syntax (following the Uniswap documentation)
Contract ABIs

Pubsubhubbub library for NodeJs

I have a system where various RSS feeds are added. I want to follow their content and be notified when new content is added to the feeds, without having to check them one by one.
I found out there is a PubSubHubbub protocol and that publishers can use various hubs which implement this protocol in their feeds. This is how I found out about Superfeedr, and I'm trying to work with their XMPP API. I installed their Node.js library and made a few subscribe tests that worked fine.
Is it possible to use the node superfeedr module to subscribe to a feed that doesn't use Superfeedr? For example, I found one that has:
link rel='hub' href='http://pubsubhubbub.appspot.com/'
Do I have to handle each hub separately, or can I just send them the same requests based on the protocol?
Alex, I created Superfeedr.
Yes, of course it is possible to subscribe to a feed that doesn't use Superfeedr. Superfeedr acts as a default hub. You can add any feed, and you should get notifications for it. The only difference is that you may see delays. We poll feeds every 15 minutes, so, unless there are strong caches, you should see messages no later than 15 minutes after they've been published.
2 and 3 are probably not relevant given 1. However, I believe there are a couple of other PubSubHubbub libraries, but they all require that your endpoint be outside the firewall... and all of them will only work for feeds that use the PubSubHubbub protocol. Even though your application will use each hub separately, the code should be the same, so that's transparent for you.
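For illustration, the subscription request itself looks the same for any hub; only the hub URL changes. A sketch (the callback URL is an assumption — it must be a publicly reachable endpoint that echoes the hub.challenge the hub sends back during verification):

```ts
// Subscribe to a feed at whichever hub its <link rel="hub"> advertises.
const res = await fetch("http://pubsubhubbub.appspot.com/", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: new URLSearchParams({
    "hub.mode": "subscribe",
    "hub.topic": "https://example.com/feed.xml",
    "hub.callback": "https://your-app.example.com/push/callback",
    "hub.verify": "async",
  }),
});
console.log(res.status); // 202 Accepted when the hub queues the verification
```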
I hope this helps.

I need to validate and send feedback to a participating server. How can I call channel.write() from within Netty handler code?

I am writing a Netty-based notifying server, which takes in many hundreds of message buffers per second from a server (A) through RPC and then sends them on to an HTTP real-time server after checking their validity (the validation consists of checking a tag ID and its value). If the validation is not successful, the system needs to send error feedback back to server (A) with an error code.
I intend to write the validation logic inside a handler, but how do I make the handler send the feedback if a message is found to be invalid?
Can I also include database code in a handler, so I can persist the validation-specific details to a database? Will adding this DB code hurt Netty's performance? If yes, what is a better way to run database insert code from inside a handler?
Can anyone guide me? Can I write the DB code inside an Executor?
Please excuse me if these are too basic questions; I am still in the learning phase.
Let me try to answer the questions.
1) I think it does not matter whether you want to send an ERROR or a SUCCESS response. Just use Channel.write(..) to write it and have an encoder in the pipeline that handles encoding it to a ChannelBuffer. There is no difference here.
2) You should add an ExecutionHandler in front of your handler to make sure your DB calls do not block the I/O thread. See [1].
[1] http://netty.io/docs/stable/api/org/jboss/netty/handler/execution/ExecutionHandler.html

Simple SIP-based client interaction... any ideas?

I am trying to do the following:
I want a SIP User Agent to perform the following steps on receiving an inbound call (call set up request).
1) Read the caller ID from the SIP request and log the details to a file
2) Drop the call (terminate the call without picking it up)
I have not been able to find a high-level API that will let me script this interaction. I have taken a look at JAIN SIP, but it seems to be a very low-level API and I imagine it will require a lot of work to get the above interaction coded up and working. Can anyone suggest an appropriate API to implement the above?
NOTE: I have tried Voxeo.com and their CCXML-based apps are great, but their pricing is aimed at big companies, so Voxeo is not an option.
There are quite a few open-source SIP stacks around; two examples of many are PJSIP and sipsorcery (as a disclaimer, I do some dev work on the latter). Which one suits will depend on your language and preferences. There are also lots of SIP tools around that may be a more efficient approach for you, such as SIPp.
Apart from those options, and given your very simple requirements, you could probably get away with 20 or 30 lines of code that listen on a UDP socket, parse the incoming INVITE to extract the From header, and then send back a rejection response by changing the top line of the request to turn it into a response and returning it to where it came from.
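As a rough illustration of that approach — the answer is language-agnostic, so the port, status code, and naive header handling below are assumptions, and a real endpoint would also have to deal with the ACK and retransmissions — here is a sketch in Node/TypeScript:

```ts
import dgram from "node:dgram";

const socket = dgram.createSocket("udp4");

socket.on("message", (msg, rinfo) => {
  const text = msg.toString("utf8");
  if (!text.startsWith("INVITE ")) return;

  const lines = text.split("\r\n");
  const header = (name: string) =>
    lines.find((l) => l.toLowerCase().startsWith(name.toLowerCase() + ":")) ?? "";

  // Step 1: log the caller ID from the From header.
  console.log(new Date().toISOString(), "rejected call,", header("From"));

  // Step 2: turn the request into a response by swapping the top line,
  // copying the mandatory headers, and sending it back where it came from.
  const response = [
    "SIP/2.0 603 Decline",
    header("Via"),
    header("From"),
    header("To") + ";tag=" + Math.random().toString(36).slice(2, 10),
    header("Call-ID"),
    header("CSeq"),
    "Content-Length: 0",
    "",
    "",
  ].join("\r\n");

  socket.send(response, rinfo.port, rinfo.address);
});

socket.bind(5060); // standard SIP UDP port
```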
If you're using C, try eXosip; you could easily do whatever you want.
Here
It's clear that JAIN SIP could be quite painful (actually it is all the configuration; otherwise the API is quite high-level for manipulating messages), but you can take the jain-sip-presence-proxy, remove almost everything from its INVITE handler, and build your own message.
If you're using Java, you can use peers, which provides a high-level API in the package net.sourceforge.peers.sip.core.useragent. The entry point is the UserAgent class; take a look at the gui package if you want to see how it is used. Traces are written to log files, so you can track calls.
ivrworx, but it can handle only one scenario at a time.
The Asterisk PBX can act as a simple SIP client and do just that; however, if you want to integrate something into your own solution, take a look at: http://sipsimpleclient.org/projects/sipsimpleclient/wiki/SipMiddlewareApi
