I am trying to get log files out of the camera in the Python console. Where would you say the camera's log files are? Following the ONVIF documentation, I tried using this code:
log=cam.devicemgmt.GetSystemLog({'Logtype': 'System'})
print(log)
Errors:
The SystemLogType type doesn't accept collections as value
GetSystemLog() got an unexpected keyword argument 'LogType'.
I have connected to the camera using ONVIF in the Python console; now I need to retrieve the log files from the camera using ONVIF.
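Given the two errors (the service wrapper rejects keyword arguments, and the SystemLogType enum rejects a dict), one variant worth trying is passing the bare enum string positionally. A minimal sketch, assuming the python-onvif package and hypothetical connection details:

from onvif import ONVIFCamera

# Hypothetical host, port, and credentials
cam = ONVIFCamera('192.168.0.100', 80, 'admin', 'password')

# LogType is an enum in the device management WSDL ('System' or 'Access'),
# so pass the plain string instead of wrapping it in a dict
log = cam.devicemgmt.GetSystemLog('System')
print(log)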
I am working on a project that uses the Azure Media Services Python SDK (v3). I have the following code which creates a live output and a streaming locator once the associated live event is running:
from datetime import timedelta
from azure.mgmt.media.models import LiveOutput, StreamingLocator

# Step 2: create a live output (used to reference the manifest file)
live_outputs = self.__media_services.live_outputs
config_data_live_output = LiveOutput(asset_name=live_output_name, archive_window_length=timedelta(minutes=30))
output = live_outputs.create(StreamHandlerAzureMS.RESOUCE_GROUP_NAME, StreamHandlerAzureMS.ACCOUNT_NAME, live_event_name, live_output_name, config_data_live_output)

# Step 3: get a streaming locator (the ID of the locator is used in the URL)
locators = self.__media_services.streaming_locators
config_data_streaming_locator = StreamingLocator(asset_name=locator_name)
locator = locators.create(StreamHandlerAzureMS.RESOUCE_GROUP_NAME, StreamHandlerAzureMS.ACCOUNT_NAME, locator_name, config_data_streaming_locator)
self.__media_services is an object of type AzureMediaServices. When I run the code above, I receive the following exception:
azure.mgmt.media.models._models_py3.ApiErrorException: (ResourceNotFound) Live Output asset was not found.
Question: Why is Azure Media Services throwing this error with an operation that creates a resource? How can I resolve this issue?
Note that I have managed to authenticate the SDK to Azure Media Services using a service principal and that I can successfully push video to the live event using ffmpeg.
I suggest that you take a quick look at the flow of a live event in this tutorial, which unfortunately is in .NET; we are still working on updating the Python samples.
https://learn.microsoft.com/en-us/azure/media-services/latest/stream-live-tutorial-with-api
But it should help with the issue. The first thing I see is that you likely did not create the Asset for the Live Output to record into.
You can think of Live Outputs as "tape recorder" machines, and the Assets as the tapes. They are locations in your storage account that the tape recorder is going to write to.
So after you have the Live Event running, you can have up to 3 of these "tape recorders" operating and writing to 3 different "tapes" (Assets) in storage.
1. Create an empty Asset.
2. Create a Live Output and point it at that Asset.
3. Get the streaming locator for that Asset, so you can watch the tape. Notice that you are creating the streaming locator on the Asset you created in step 1. Think of it as "I want to watch this tape", not "I want to watch this tape recorder".
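In code, that flow might look roughly like the following (a sketch against the v3 azure-mgmt-media SDK, reusing the names from the question; the asset name and the Predefined_ClearStreamingOnly streaming policy are my assumptions):

from datetime import timedelta
from azure.mgmt.media.models import Asset, LiveOutput, StreamingLocator

asset_name = "liveArchiveAsset"  # hypothetical name for the "tape"

# Step 1: create the empty Asset first, so the Live Output has a tape to write to
media_services.assets.create_or_update(
    resource_group_name, account_name, asset_name, Asset())

# Step 2: create the Live Output (the "tape recorder") pointing at that Asset
media_services.live_outputs.create(
    resource_group_name, account_name, live_event_name, live_output_name,
    LiveOutput(asset_name=asset_name, archive_window_length=timedelta(minutes=30)))

# Step 3: create the streaming locator on the same Asset ("watch the tape")
media_services.streaming_locators.create(
    resource_group_name, account_name, locator_name,
    StreamingLocator(asset_name=asset_name, streaming_policy_name="Predefined_ClearStreamingOnly"))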
We have a Lambda that reads JPG/vector files from S3 and processes them using GraphicsMagick. This Lambda was working fine until today, but since this morning we have been getting errors while processing vector images with GraphicsMagick.
"Error: Command failed: identify: unable to load module /usr/lib64/ImageMagick-6.7.8/modules-Q16/coders/ps.la': file not found # error/module.c/OpenModule/1278.
identify: no decode delegate for this image format/tmp/magick-E-IdkwuE' # error/constitute.c/ReadImage/544."
The above error occurs for certain .eps (vector) files when using the identify function of the gm module.
Could you please share your insights on this? Please let us know whether any backend changes to the ImageMagick module have recently gone through on the AWS side that might have affected this Lambda.
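For what it's worth, the failure should be reproducible outside the Lambda handler by invoking identify directly (a sketch; sample.eps is a hypothetical file that previously processed fine):

import subprocess

# Run identify directly on an EPS that used to work; a missing PS coder
# module reproduces the same "no decode delegate" error text
result = subprocess.run(["identify", "sample.eps"],
                        capture_output=True, text=True)
print(result.returncode)
print(result.stderr or result.stdout)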
I've been trying to register a device instance on Google Assistant SDK on my Raspberry Pi 3.
Here is the command I ran:
googlesamples-assistant-devicetool --project-id RASPI-ED53D register --model RASPI-ED53D-LIGHT-7NBZNA --type OUTLET --manufacturer SUPERCONN --product-name LIGHT --device 0001 --client-type LIBRARY
Output/Error
Creating new device model
Error: Failed to register model: 400
Could not create the device model. Check that the request contains the required field project_id with a valid format in the request payload. See https://developers.google.com/assistant/sdk/reference/device-registration/model-and-instance-schemas for more information.
The syntax is correct according to the Google documentation.
Any ideas?
The schema lists action.devices.types.OUTLET, not OUTLET.
When I list my devices, they have similar types specified, e.g.:
Device Type: action.devices.types.SPEAKER
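Applied to the command in the question, that would mean passing the fully-qualified type (assuming the devicetool forwards --type into the model payload unchanged):

googlesamples-assistant-devicetool --project-id RASPI-ED53D register --model RASPI-ED53D-LIGHT-7NBZNA --type action.devices.types.OUTLET --manufacturer SUPERCONN --product-name LIGHT --device 0001 --client-type LIBRARY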
I am trying to use the probe function in Node.js to discover an ONVIF camera, and it doesn't work. When I look in Wireshark, I can't find any broadcast message being sent from my computer. Here is the code:
var onvif = require('onvif');

onvif.Discovery.on('device', function(cam) {
    // this function will be called as soon as an NVT responds
    logger.info("SensorAutoDiscover find camera: " + cam.hostname);
});

onvif.Discovery.probe();
Yesterday I changed the SOAP message for the onvif.Discovery.probe method because a lot of cameras won't respond to the old message. Please try the new version from GitHub: npm install https://github.com/agsh/onvif.git and let me know your results.
I am trying to read Application Insights export data using Stream Analytics. Here is what my blob looks like.
In my Stream Analytics job I reference this blob and try to read the files using the download sample data functionality, but I do not get any data. I am also setting the path prefix pattern as
democenteralinsightscollector_5cc2f280d52d47bdbe186e87d8037fc0/Requests/{date}/{time}
The following links will help you with the process: http://azure.microsoft.com/en-us/documentation/videos/export-to-power-bi-from-application-insights/
https://azure.microsoft.com/en-us/documentation/articles/app-insights-export-power-bi/
Actually, I was trying to do the same and was able to get Stream Analytics to read the Application Insights blob export, but where this fails is that the JSON emitted by Application Insights has entries such as:
"anonAcquisitionDate":"0001-01-01T00:00:00Z","authAcquisitionDate":"0001-01-01T00:00:00Z"
which cause the Stream Analytics input to fail.
The Stream Analytics operation logs show the following:
First Occurred: 04/30/2015 03:06:22 | Resource Name: appinsightsevents | Message: Failed to convert values to match the column type for column anonAcquisitionDate
So, basically, Stream Analytics cannot process the input.
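One possible workaround (my own sketch, not something from this thread) is to pre-process the exported blobs and null out the 0001-01-01T00:00:00Z placeholders before Stream Analytics reads them:

import json

PLACEHOLDER = "0001-01-01T00:00:00Z"

def clean(value):
    # Recursively replace Application Insights' min-DateTime placeholders,
    # which Stream Analytics fails to convert, with nulls
    if isinstance(value, dict):
        return {k: clean(v) for k, v in value.items()}
    if isinstance(value, list):
        return [clean(v) for v in value]
    return None if value == PLACEHOLDER else value

# Hypothetical: the export writes one JSON document per line
with open("export.blob") as f:
    cleaned = [clean(json.loads(line)) for line in f]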