Trouble using etherblob for a task - python-3.x

I have been tasked with identifying the first block in which a user (who has been actively uploading compressed archives of data to the Ethereum Rinkeby blockchain) uploaded a blob of data between 2021-01-19 and 2021-01-21.
I came up with the following command: $etherblob -network rinkeby 7919090 7936368 --encrypted. Where should I run this command in order to get the correct output?
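This does not answer where to run etherblob itself, but as a rough cross-check of the underlying task, here is a hedged, untested sketch using web3.py (an assumption; the Infura endpoint and project ID are placeholders, and filtering on the uploader's address via tx["from"] is left out). It scans the same Rinkeby block range and reports the first block whose transactions carry embedded input data.
from web3 import Web3

# Placeholder provider: any Rinkeby JSON-RPC endpoint will do.
w3 = Web3(Web3.HTTPProvider("https://rinkeby.infura.io/v3/<your-project-id>"))

def first_block_with_data(start=7919090, end=7936368):
    for block_number in range(start, end + 1):
        block = w3.eth.get_block(block_number, full_transactions=True)
        for tx in block.transactions:
            payload = tx["input"]
            payload_hex = payload.hex() if isinstance(payload, (bytes, bytearray)) else payload
            if payload_hex not in ("", "0x"):  # transaction carries embedded data
                return block_number, tx["hash"].hex()
    return None

print(first_block_with_data())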

Related

How to abort uploading a stream to google storage in Node.js

Interaction with Cloud Storage is performed using the official Node.js Client library.
Output of an external executable (ffmpeg) through fluent-ffmpeg is piped to a writable stream of a Google Cloud Storage object using [createWriteStream].(https://googleapis.dev/nodejs/storage/latest/File.html#createWriteStream).
The executable (ffmpeg) can exit with an error; in that case the file is created on Cloud Storage with 0 length.
I want to abort the upload when the command fails, to avoid finalizing an empty storage object.
What is the proper way of aborting the upload stream?
Current code (just an excerpt):
ffmpeg()
  .input(sourceFile.createReadStream())        // read the source object from Cloud Storage
  .output(destinationFile.createWriteStream()) // pipe ffmpeg output into the upload stream
  .run();
The files are instances of File (https://cloud.google.com/nodejs/docs/reference/storage/latest/storage/file).

Azure Media Services -- Create Live Output and Streaming Locator with Python SDK

I am working on a project that uses the Azure Media Services Python SDK (v3). I have the following code which creates a live output and a streaming locator once the associated live event is running:
# Step 2: create a live output (used to reference the manifest file)
live_outputs = self.__media_services.live_outputs
config_data_live_output = LiveOutput(asset_name=live_output_name, archive_window_length=timedelta(minutes=30))
output = live_outputs.create(StreamHandlerAzureMS.RESOUCE_GROUP_NAME, StreamHandlerAzureMS.ACCOUNT_NAME, live_event_name, live_output_name, config_data_live_output)
# Step 3: get a streaming locator (the ID of the locator is used in the URL)
locators = self.__media_services.streaming_locators
config_data_streaming_locator = StreamingLocator(asset_name=locator_name)
locator = locators.create(StreamHandlerAzureMS.RESOUCE_GROUP_NAME, StreamHandlerAzureMS.ACCOUNT_NAME, locator_name, config_data_streaming_locator)
self.__media_services is an object of type AzureMediaServices. When I run the code above, I receive the following exception:
azure.mgmt.media.models._models_py3.ApiErrorException: (ResourceNotFound) Live Output asset was not found.
Question: Why is Azure Media Services throwing this error with an operation that creates a resource? How can I resolve this issue?
Note that I have managed to authenticate the SDK to Azure Media Services using a service principal and that I can successfully push video to the live event using ffmpeg.
I suggest that you take a quick look at the flow of a live event in this tutorial, which unfortunately is in .NET. We are still working on updating Python samples.
https://learn.microsoft.com/en-us/azure/media-services/latest/stream-live-tutorial-with-api
But it should help with the issue. The first problem I see is that you likely did not create the Asset for the Live Output to record into.
You can think of Live Outputs as "tape recorder" machines and the Assets as the tapes. They are locations in your storage account that the tape recorder writes to.
So after you have the Live Event running, you can have up to 3 of these "tape recorders" operating and writing to 3 different "tapes" (Assets) in storage.
1. Create an empty Asset.
2. Create a Live Output and point it to that Asset.
3. Get the streaming locator for that Asset, so you can watch the tape. Notice that you are creating the streaming locator on the Asset you created in step 1. Think of it as "I want to watch this tape" and not "I want to watch this tape recorder".
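A minimal Python sketch of those three calls against the same azure-mgmt-media client as in the question; asset_name and the "Predefined_ClearStreamingOnly" streaming policy are illustrative assumptions, not values from the original code.
from datetime import timedelta
from azure.mgmt.media.models import Asset, LiveOutput, StreamingLocator

client = media_services_client  # the AzureMediaServices client (self.__media_services in the question)
rg = StreamHandlerAzureMS.RESOUCE_GROUP_NAME
account = StreamHandlerAzureMS.ACCOUNT_NAME
asset_name = "<asset-name>"     # placeholder for the new "tape"

# Step 1: create the empty Asset the Live Output will record into
client.assets.create_or_update(rg, account, asset_name, Asset())

# Step 2: create the Live Output (the "tape recorder") and point it at that Asset
client.live_outputs.create(rg, account, live_event_name, live_output_name,
                           LiveOutput(asset_name=asset_name,
                                      archive_window_length=timedelta(minutes=30)))

# Step 3: create the streaming locator on the same Asset ("watch this tape")
client.streaming_locators.create(rg, account, locator_name,
                                 StreamingLocator(asset_name=asset_name,
                                                  streaming_policy_name="Predefined_ClearStreamingOnly"))
Note that both the LiveOutput and the StreamingLocator reference the same asset_name, rather than live_output_name and locator_name as in the question's code.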

Lambda which reads jpg/vector files from S3 and processes them using graphicsmagick

We have a Lambda which reads jpg/vector files from S3 and processes them using GraphicsMagick.
This Lambda was working fine until today. But since this morning we have been getting errors while processing vector images using GraphicsMagick.
"Error: Command failed: identify: unable to load module /usr/lib64/ImageMagick-6.7.8/modules-Q16/coders/ps.la': file not found # error/module.c/OpenModule/1278.
identify: no decode delegate for this image format/tmp/magick-E-IdkwuE' # error/constitute.c/ReadImage/544."
The above error is occurring for certain .eps files (vector) while using the identify function of the gm module.
Could you please share your insights on this?
Please let us know whether any backend changes to the ImageMagick module have recently gone through on the AWS end that might have affected this Lambda.
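For what it's worth, here is a hedged diagnostic sketch (an assumption, not the original Lambda): it shells out to the ImageMagick identify binary available in the runtime and reports whether the PS/EPS coders are registered, since the error above points to a missing ps.la coder module.
import subprocess

def lambda_handler(event, context):
    # List the formats the bundled ImageMagick build supports; if the PS/EPS
    # coders are missing, identify cannot decode .eps files.
    result = subprocess.run(["identify", "-list", "format"],
                            capture_output=True, text=True)
    eps_lines = [line for line in result.stdout.splitlines()
                 if line.strip().startswith(("PS", "EPS"))]
    return {"eps_support": eps_lines, "stderr": result.stderr}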

AzCopy blob download throwing errors on local machine

I am running the following command while learning how to use AzCopy.
azcopy /Source:https://storeaccountname.blob.core.windows.net/container /Dest:C:\container\ /SourceKey:Key /Pattern:"tdx" /S /V
Some files are downloaded, but most files result in an error like the following. I have no idea why this is happening and wondered if somebody has encountered this and knows the cause and the fix.
[2016/05/31 21:27:13][ERROR] tdx/logs/site-visit/archive/1463557944558/visit-1463557420000: Failed to open file C:\container\tdx\logs\site-visit\archive\1463557944558\visit-1463557420000: Access to the path 'C:\container\tdx\logs\site-visit\archive\1463557944558\visit-1463557420000' is denied..
My ultimate goal was to create backups of the blobs in a container of one storage account to the container of another storage account. So I am starting out with basics which seem to fail.
Here is a list of folder names from an example path pulled from Azure Portal:
storeaccountname > Blob service > container > app-logs > hdfs > logs
application_1461803569410_0008
application_1461803569410_0009
application_1461803569410_0010
application_1461803569410_0011
application_1461803569410_0025
application_1461803569410_0027
application_1461803569410_0029
application_1461803569410_0031
application_1461803569410_0033
application_1461803569410_0035
application_1461803569410_0037
application_1461803569410_0039
application_1461803569410_0041
application_1461803569410_0043
application_1461803569410_0045
There is an error in the log for each one of these folders that looks like this:
[2016/05/31 21:29:18.830-05:00][VERBOSE] Transfer FAILED: app-logs/hdfs/logs/application_1461803569410_0008 => app-logs\hdfs\logs\application_1461803569410_0008.
[2016/05/31 21:29:18.834-05:00][ERROR] app-logs/hdfs/logs/application_1461803569410_0008: Failed to open file C:\container\app-logs\hdfs\logs\application_1461803569410_0008: Access to the path 'C:\container\app-logs\hdfs\logs\application_1461803569410_0008' is denied..
The folder application_1461803569410_0008 contains two files. Those two files were successfully downloaded. From the logs:
[2016/05/31 21:29:19.041-05:00][VERBOSE] Finished transfer: app-logs/hdfs/logs/application_1461803569410_0008/10.2.0.5_30050 => app-logs\hdfs\logs\application_1461803569410_0008\10.2.0.5_30050
[2016/05/31 21:29:19.084-05:00][VERBOSE] Finished transfer: app-logs/hdfs/logs/application_1461803569410_0008/10.2.0.4_30050 => app-logs\hdfs\logs\application_1461803569410_0008\10.2.0.4_30050
So it appears that the problem is related to copying folders, which are themselves blobs, but I can't be certain yet.
There are several known issues when using AzCopy, such as the one below, which will cause an error:
If there are two blobs named "a" and "a/b" under a storage container, copying the blobs under that container with /S will fail. Windows will not allow the creation of a folder named "a" and a file named "a" under the same parent folder.
Refer to https://blogs.msdn.microsoft.com/windowsazurestorage/2012/12/03/azcopy-uploadingdownloading-files-for-windows-azure-blobs/ and scroll down to the bottom for the details under Known Issues.
In my container con2, there is a folder named abc.pdf and also a file named abc.pdf; when executing the AzCopy download command with /S, it prompts an error message.
Please check whether your container has folders with the same name as a file.
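As a rough way to automate that check, here is a hedged sketch (assuming the azure-storage-blob v12 Python package and a placeholder connection string, neither of which appears in the question) that flags blob names which also act as a virtual "folder" prefix of another blob:
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string("<connection-string>", "container")
names = {blob.name for blob in container.list_blobs()}

# A blob whose name is also the prefix of "<name>/..." collides with the local
# folder of the same name when AzCopy writes the download to disk.
conflicts = sorted(n for n in names
                   if any(other.startswith(n + "/") for other in names))
print("blobs that collide with a folder of the same name:", conflicts)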

Azure .Net SDK Error : FsOpenStream failed with error 0x83090aa2

We are trying to download a file present in Data Lake Store. I have been following the below tutorial, which uses the Azure .NET SDK.
https://azure.microsoft.com/en-us/documentation/articles/data-lake-analytics-get-started-net-sdk/
As we already have the file present in Azure Data Lake Store, I just added the code to download the file:
FileCreateOpenAndAppendResponse beginOpenResponse = _dataLakeStoreFileSystemClient.FileSystem.BeginOpen("/XXXX/XXXX/test.csv", DataLakeStoreAccountName, new FileOpenParameters());
FileOpenResponse openResponse = _dataLakeStoreFileSystemClient.FileSystem.Open(beginOpenResponse.Location);
But it's failing with the below error
{"RemoteException":{"exception":"RuntimeException","message":"FsOpenStream
failed with error 0x83090aa2 ().
[83271af3c3a14973ad7814e7d9d201f6]","javaClassName":"java.lang.RuntimeException"}}
While debugging, we inspected the beginOpenResponse.Location used in the second line of code. It seems to be the correct value, as below:
https://XXXXXXXX.azuredatalakestore.net/webhdfs/v1/XXXX/XXX/test.csv?op=OPEN&api-version=2015-10-01-preview&read=true
The error does not provide much information to track down the problem.
I agree that the store errors are currently non-printable. We are working on improving this.
According to my store developer, 0x83090aa2 means the access check failed. Can you please check that you have access to the storage account and that the path is correct?
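If it helps, here is a hedged sketch of an independent access/path check using the azure-datalake-store Python package (an assumption, not the .NET SDK from the question); the tenant, client, and secret values are placeholders for the same service principal:
from azure.datalake.store import core, lib

token = lib.auth(tenant_id="<tenant-id>",
                 client_id="<client-id>",
                 client_secret="<client-secret>")
adls = core.AzureDLFileSystem(token, store_name="<DataLakeStoreAccountName>")

print(adls.info("/XXXX/XXXX/test.csv"))            # fails if the path is wrong
with adls.open("/XXXX/XXXX/test.csv", "rb") as f:  # fails if access is denied
    print(f.read(100))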
