If I call the transaction endpoint from the API gateway:
https://testnet-gateway.elrond.com/transaction/89a15e2ea521764d21ac2de83064dd7c1848f83dff4dcbad0518fdf41a70d889
I get the following data:
"data":"RVNEVE5GVFRyYW5zZmVyQDRkNDU1ODQ2NDE1MjRkMmQ2MjM5MzM2NTM2MzBAMDEyMzRmQDA1OWUxZDQ2YTljM2I4OWNhMkAwMDAwMDAwMDAwMDAwMDAwMDUwMDRmNzllYzQ0YmIxMzM3MmI1YWM5ZDk5NmQ3NDkxMjBmNDc2NDI3NjI3Y2ViQDYzNmY2ZDcwNmY3NTZlNjQ1MjY1Nzc2MTcyNjQ3Mw=="
What does it represent?
That "data" field is encoded in base64 and it represents the Input Data field available in explorer:
https://testnet-explorer.elrond.com/transactions/89a15e2ea521764d21ac2de83064dd7c1848f83dff4dcbad0518fdf41a70d889
ESDTNFTTransfer@4d45584641524d2d623933653630@01234f@059e1d46a9c3b89ca2@000000000000000005004f79ec44bb13372b5ac9d996d749120f476427627ceb@636f6d706f756e6452657761726473
It can be decoded using an online base64 decoder.
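The same decoding can also be done locally with Node's Buffer class instead of an online tool (a minimal sketch; the value is the "data" field from the response above, truncated here):
// Decode the base64-encoded "data" field returned by the gateway (value truncated)
const data = 'RVNEVE5GVFRyYW5zZmVyQDRkNDU1ODQ2...';
console.log(Buffer.from(data, 'base64').toString('utf-8'));
// -> ESDTNFTTransfer@4d45584641524d2d623933653630@01234f@...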
I am reading binary data (a JPEG image) using an API (Web Activity) and I want to store it as varbinary or base64 in Azure SQL Server.
It looks like there is no way to base64-encode binary data using Azure Data Factory. Is that correct?
So I am trying to pass it as a byte[] using a varbinary parameter. The parameter of the stored procedure looks like this:
@Photo varbinary(max) NULL
The parameter in the stored procedure action in ADF looks like this:
But this also does not seem to work, because the pipeline fails with this error:
The value of the property 'Value' is invalid for the stored procedure parameter 'Photo'.
Is it possible to store that image using this approach? And if not, how can it be achieved (using ADF and a Stored Procedure)?
Just to be safe, are you missing a '@' before the activity?
I can't see it in the picture.
Peace
I have successfully deployed and verified an ERC721 smart contract on BSC's testnet. Also successfully minted and awarded new ERC721 tokens. Next up is transferring tokens between wallets. So far so good, except that I'd like to add transfer comments to the transfer transactions.
My contract supports the standard safeTransferFrom(senderWallet, receiverWallet, tokenId, data) function, and I can see the data (i.e., the transfer comment) being sent out. But it doesn't appear when I view the successful transaction in the BSC testnet explorer.
Here is an example transaction --> https://testnet.bscscan.com/tx/0x1f3bf69da66cff66bbeeb6ce6f7505be8a78729685162811cb29c9dc30a347d6. Decoding the data in the BSC testnet explorer, I can see the trailing data in hex form, and it translates back to readable text when I convert it. See the trailing value starting with 205363... below. Here is a screen shot of the hex converting back to the intended text value.
Function: safeTransferFrom(address from, address to, uint256 tokenId, bytes _data)
MethodID: 0xb88d4fde
[0]: 0000000000000000000000008175f3b00af0b775136b918a78298aaf4e1ea137
[1]: 000000000000000000000000ba3662af7c0cecd20cd97ef8072c30f4449b16b1
[2]: 0000000000000000000000000000000000000000000000000000000000000005
[3]: 0000000000000000000000000000000000000000000000000000000000000080
[4]: 0000000000000000000000000000000000000000000000000000000000000020
[5]: 5363686564756c656420736572766963696e6700000000000000000000000000
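For example, converting that trailing hex value back to text in Node (a quick sketch using the hex shown in [5] above, without its zero padding):
// "Scheduled servicing" encoded as hex, taken from parameter [5] above
const hex = '5363686564756c656420736572766963696e67';
console.log(Buffer.from(hex, 'hex').toString('utf-8'));   // -> "Scheduled servicing"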
Here is my code that is used to encode the web3 method call.
const soygaToken = new web3.eth.Contract(soygaABI, contractAddress);
// Pad the comment to 32 characters with NULs and turn it into a UTF-8 byte buffer
var byteComments = Buffer.from(comments.padEnd(32, "\0"), 'utf-8');
// ABI-encode the safeTransferFrom call, passing the comment bytes as the `_data` argument
var myData = soygaToken.methods.safeTransferFrom(senderAddress, recipientAddress, tokenId, byteComments).encodeABI();
Any ideas as to what's causing this data to be missing from the transaction when I look at the BSC testnet explorer? Reviewing the ERC721 specs (https://ethereum.org/en/developers/docs/standards/tokens/erc-721/), the data parameter should be a 32-byte value, which it appears I'm passing along.
Bounced this off the core Nethereum developer. He verified the user data is present, but it's likely just an issue where the Etherscan web client isn't decoding it. So the user data should be accessible.
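A hedged sketch of one way to verify this locally, assuming web3.js 1.x and the same web3 instance as in the question: fetch the raw transaction and ABI-decode the input yourself instead of relying on the explorer.
// (run inside an async function) fetch the raw transaction and decode its input
const tx = await web3.eth.getTransaction('0x1f3bf69da66cff66bbeeb6ce6f7505be8a78729685162811cb29c9dc30a347d6');
const args = web3.eth.abi.decodeParameters(
  ['address', 'address', 'uint256', 'bytes'],
  '0x' + tx.input.slice(10)   // strip the 4-byte method ID (0xb88d4fde)
);
// Convert the `bytes` parameter back to text and drop the NUL padding
console.log(Buffer.from(args[3].slice(2), 'hex').toString('utf-8').replace(/\0+$/, ''));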
I have an API created in LoopBack 4 which retrieves data from a database in PostgreSQL 13 encoded with UTF8. Visiting the API explorer (localhost:3000/explorer) and executing the GET requests, I notice that even when the database fields contain characters like accented letters and ñ's, the retrieved JSON only shows blanks in the positions where those characters should have appeared. For example, if the database has a field with a word like 'piña', the JSON returns 'pi a'.
When I try a POST request, inserting a field like 'ramírez' (note the í), the field is shown in the database as 'ramφrez', and when I execute a GET of that entry, the JSON now has the correct value, 'ramírez'.
How can I fix that?
I'd recommend using the Buffer class:
var encodedString = Buffer.from('string', 'utf-8');
This way you will be able to return anything you want. In Node.js the Buffer class is already included, so you don't need to install any dependencies.
If that doesn't give you what you need, you can change the 'utf-8' part to a different encoding.
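If the strings were decoded with the wrong charset somewhere along the way, a Buffer round-trip can sometimes repair them. This is only a hypothetical illustration of the technique, assuming UTF-8 bytes were misread as latin1; your exact symptom may need a different fix.
// Hypothetical: re-interpret a value whose UTF-8 bytes were decoded as latin1
const garbled = 'piÃ±a';                                   // example of a mis-decoded value
const repaired = Buffer.from(garbled, 'latin1').toString('utf-8');
console.log(repaired);                                     // -> "piña"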
I have a Node.js application running with Express and MongoDB. I have a use case where my device sends data to my server. It might be JSON data or a comma-separated string (only a string, not a CSV file). I need to check which type of data is coming in and convert it to JSON if the request body is a string. When I try to display the type of the data being sent to the server, it shows "object" even when I give a string as input. The operation succeeds, but the data is not inserted into the database. Can anyone help me resolve this issue?
Sample payload (request.body) would be:
"heat:22,humidity:36,deviceId:sb-0001"
Expected response is:
{
"heat": "22",
"humidity": "36",
"deviceId": "sb-0001"
}
@Aravind Actually, typeof returns "string" if the operand is a string. So please check whether the string is actually arriving in the body, because if it is null then typeof will return "object".
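A quick illustration of that behavior:
console.log(typeof "heat:22,humidity:36,deviceId:sb-0001");   // "string"
console.log(typeof null);                                      // "object"
console.log(typeof { heat: "22" });                            // "object"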
I need to check which type of data is coming and manipulate that to JSON ...
HTTP is built on the media-type concept, which defines the syntax used in payloads of that type as well as the semantics of the elements used within such a payload. You might take a look at how HTML defines forms to get a sense of what a media-type specification should include. Through content negotiation, client and server agree on a common media type, and thus a representation format supported by both, for exchanging payloads. This simply increases interoperability between the participants, as they exchange data in well-defined, hopefully standardized, representation formats that both understand and support.
Through Accept headers a receiver can state preferences about which types to receive, including a weighting scheme to indicate that a certain representation format is preferred over another but that the recipient is also fine with the other one, while a Content-Type header indicates the actual representation format being sent.
RFC 4180 defines text/csv for CSV-based representations and RFC 8259 specifies application/json for JSON payloads. As the sender hopefully knows what kind of document it sends to the receiver, you can use this information to distinguish the payload on the receiver side. Note, however, that according to Fielding, true REST APIs must use representation formats that support hypertext-driven interaction flows to allow clients to take further actions on the payload received without having to consult some external documentation. Both JSON and CSV by default don't support such an interaction model, which is captured by the term HATEOAS. For your simple scenario, though, content-type negotiation might be sufficient to solve your needs.
In terms of processing CSV and converting the data to JSON, this is simply a matter of splitting the CSV string on the delimiter symbol (, in the sample) into key-value pairs, then splitting the keys and values further on the : symbol and adding them to a new JSON object, as sketched below. There is also a csvtojson library available on NPM that you might use in such a case.
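A minimal sketch of that approach (route and helper names are illustrative), accepting either JSON or the plain comma-separated string and normalizing both to an object:
const express = require('express');
const app = express();

app.use(express.json());                      // parses application/json bodies
app.use(express.text({ type: 'text/*' }));    // keeps plain-text bodies as strings

// "heat:22,humidity:36,deviceId:sb-0001" -> { heat: "22", humidity: "36", deviceId: "sb-0001" }
function csvStringToJson(payload) {
  return Object.fromEntries(
    payload.split(',').map(pair => pair.split(':').map(part => part.trim()))
  );
}

app.post('/readings', (req, res) => {
  const data = typeof req.body === 'string' ? csvStringToJson(req.body) : req.body;
  // ...insert `data` into MongoDB here...
  res.json(data);
});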
I have what seems to be a pretty simple test of uploading a large file to Azure Blob storage.
The response I get is 400:
400 (Block ID is invalid. Block ID must be base64 encoded.)
The URL I am uploading to is: https://xxxx.blob.core.windows.net/tmp/af620cd8-.....&comp=blocklist
with the body:
<?xml version="1.0" encoding="utf-8"?>
<BlockList>
<Latest>BLOCK0</Latest>
<Latest>BLOCK1</Latest>
</BlockList>
This comes after a couple of successful block uploads:
https://xxxx.blob.core.windows.net/tmp/af620cd8-02e0-fee2....&blockid=BLOCK0
etc.
It doesn't seem like anything here requires Base64 encoding and the block IDs are of the same exact size (something mentioned in another post). Is there anything else I could try here?
The complete code is here:
https://github.com/mikebz/azureupload
and the specific front end file is here:
https://github.com/mikebz/azureupload/blob/master/formfileupload/templates/chunked.html
The block IDs must be base64-encoded, and because you're not doing that, you're getting this error.
From the Put Block REST API documentation:
blockid: Required. A valid Base64 string value that identifies the block. Prior to encoding, the string must be less than or equal to 64 bytes in size. For a given blob, the length of the value specified for the blockid parameter must be the same size for each block. Note that the Base64 string must be URL-encoded.
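A small sketch of how the block IDs could be built in JavaScript before the Put Block calls (baseUrl is a placeholder; every ID is padded to the same length before encoding):
// Build a fixed-length block ID and base64-encode it
function blockId(index) {
  return Buffer.from('BLOCK' + String(index).padStart(6, '0'), 'utf-8').toString('base64');
}

const id = blockId(0);                                   // "QkxPQ0swMDAwMDA="
const putBlockUrl = baseUrl + '&comp=block&blockid=' + encodeURIComponent(id);
// In the Put Block List body, use the un-escaped base64 value:
//   <Latest>QkxPQ0swMDAwMDA=</Latest>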