Add Timestamp to File Name in Flash Media Server - flash-media-server

Is there some way to dynamically name files published in Flash Media Server?
Several clients in an application will be publishing to FMS. They may start and stop recording several times, and I would like to append a timestamp (format: yy-mm-dd-hh-mm-ss) to the file name in main.asc.
For example, the following files might be created by clients 1 and 2 using the ns.publish(myClientName) command:
client1's first recording: client1_2011-01-01-22-47-01.flv
client1's second recording: client1_2011-01-01-22-54-55.flv
client2's first recording: client2_2011-01-01-22-59-34.flv
client1's third recording: client1_2011-01-01-22-04-12.flv
I don't want to use ns.publish(myClientName, "append"); there needs to be a separate file for each publish session.
The best I can come up with is to use File.creationTime and File.renameTo() in application.onUnpublish() to add the timestamp when publishing has ended, but that wouldn't be tolerant of an unexpected server outage.
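Roughly, what I have in mind is that main.asc could hand each client a timestamped name to publish under, so every session gets its own file from the start. A sketch of the idea follows; getRecordingName and the pad helper are made-up names, and see the edit below regarding the Date object in FMS:
// main.asc -- sketch only: the server hands out a timestamped stream name on request
function pad(n) { return (n < 10 ? "0" : "") + n; }

Client.prototype.getRecordingName = function(prefix) {
    var d = new Date();
    // yyyy-mm-dd-hh-mm-ss, assuming the usual Date getters work in this FMS build
    return prefix + "_" + d.getFullYear() + "-" + pad(d.getMonth() + 1) + "-" + pad(d.getDate())
        + "-" + pad(d.getHours()) + "-" + pad(d.getMinutes()) + "-" + pad(d.getSeconds());
};

// Client side (ActionScript 3): ask the server for the name, then record under it
nc.call("getRecordingName", new Responder(function(name) {
    ns.publish(name, "record");
}), "client1");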
Edit: Unknown to me, and in conflict with the documentation, the Date object in Flash Media Server's server-side ActionScript is not the one we know and love: it exposes no enumerable properties. For example,
var currentTime = new Date();
trace("CurrentTime: " +currentTime.time);
prints
CurrentTime: undefined
Running
for (var prop in currentTime)
trace(prop);
prints nothing.
I was surprised and frustrated after an hour or so to learn this. Hope it helps someone.

currentTime.valueOf() does work, though: it returns the timestamp as milliseconds since the epoch.
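So, if the formatted getters turn out not to be usable in a given FMS build, the raw epoch value from valueOf() is still enough to make each recording name unique and sortable, for example:
var now = new Date();
trace("epoch ms: " + now.valueOf()); // e.g. 1293922021000 for 2011-01-01 22:47:01 UTC
// The raw epoch value still makes a unique, sortable suffix:
var streamName = "client1_" + now.valueOf();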

Related

skipping to specific currentTime with pixi-sound

I'm using pixi-sound.js and want to be able to skip to a specific point in the audio file. I've achieved this before using HTML5 audio by updating the currentTime, but I'm not sure where to access this with pixi-sound. There are at least two currentTime values, as well as 'progress', in the object, but changing those doesn't cause a skip.
var sound = PIXI.sound._sounds['track01'];
var currenttime = sound.media.context.audioContext.currentTime;
I would have thought this would be a common use case, but I can't find any reference to it in the documentation. Any ideas much appreciated.
In PixiJS Sound, you can pass an options object as an argument to the play method of a Sound instance, and the value of options.start is the "start time offset in seconds".
References: @pixi/sound v4.2.0 source, pixi-sound v3.0.5 source
In your code, if you want to play the sound from a 10-second offset, you can try the following:
var sound = PIXI.sound._sounds['track01'];
sound.play({ start: 10 }); // play from 10 seconds offset
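If you also want playback to stop at a particular point, the same options object appears to accept an end offset in seconds (based on the PlayOptions definitions in the sources linked above; worth verifying against your version):
sound.play({ start: 10, end: 20 }); // play only the 10s-20s slice of the track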

Google Cast Video Player becomes unresponsive after network error

I am working on a Chromecast custom receiver app, built on top of the sample app provided by Google (sampleplayer.CastPlayer).
The app manages a playlist, and I would like the player to move on to the next item in the list after a video fails to play for whatever reason.
I am running into a situation where, after a video fails to load because of a network error, the player becomes unresponsive. In the onError_() handler, my custom code does this:
var queueLoadRequest = ...
var mediaManager = ...
setTimeout(function() { mediaManager.queueLoad(queueLoadRequest); }, 5000);
...the player does receive the LOAD event according to the receiver logs, but nothing happens on the screen: the player's status remains IDLE and mediaManager.getMediaQueue().getItems() remains undefined. I get the same result when trying to use the client controller to load a different video.
I have tried to recover with mediaManager.resetMediaElement() and player.reset() in the onError_ handler, but no luck.
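The stripped-down handler looks roughly like this (field names such as mediaManager_ and player_ and the buildNextQueueLoadRequest_ helper are my own/hypothetical; this is the failing pattern, not a known-good fix):
sampleplayer.CastPlayer.prototype.onError_ = function(error) {
  var self = this;
  var queueLoadRequest = self.buildNextQueueLoadRequest_(); // hypothetical helper that builds the request
  self.mediaManager_.resetMediaElement(); // try to clear the failed media element
  self.player_.reset();
  setTimeout(function() {
    // LOAD shows up in the receiver logs, but the player stays IDLE
    self.mediaManager_.queueLoad(queueLoadRequest);
  }, 5000);
};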
For reference, here is a screenshot of the logs (filtered for errors only) leading up to the player becoming unresponsive. Note that I am not interested in fixing the original error; what I need to figure out is how to recover from it:
My custom code is most likely responsible for the issue; however, after spending many hours stripping the custom code to a bare minimum in an effort to isolate the responsible bit of code, I have not made any progress. I am not looking for a fix but rather for some guidance in troubleshooting the root cause: what could possibly cause the player to become unresponsive? Or, alternatively, how can one recover from an unresponsive player?

NAudio - Does a new instance of OffsetSampleProvider have to be created for each playback

As explained here, OffsetSampleProvider can be used in order to play a specific portion of an audio file. Like this:
AudioFileReader AudioReader = new AudioFileReader("x.wav");
OffsetSampleProvider OffsetProvider = new OffsetSampleProvider(AudioReader);
OffsetProvider.SkipOver = TimeSpan.FromSeconds(5);
OffsetProvider.Take = TimeSpan.FromSeconds(8);
myWaveOut.Init(OffsetProvider);
myWaveOut.Play();
The above example will play the audio for 8 seconds, starting at second 5. However, if I want to play it again, it will not play unless I set the Position property of the AudioFileReader back to 0 and create a new instance of OffsetSampleProvider from it. So I would like to know if I'm missing something, or whether this is the way OffsetSampleProvider is meant to be used (and if so, do I have to free any resources related to it).
You could copy the code for OffsetSampleProvider and add a Reset method to it. I'd also avoid using SkipOver for performance reasons and just set the CurrentTime of the AudioFileReader to 5 seconds directly before you play.

WifiLock under-locked my_lock

I'm trying to download an offline map pack. I'm reverse engineering the example project from the Skobbler support website, but when I try to start a download the download manager crashes.
My use case: show a list of available countries (within the EUR continent), let the user select a single one, and download it at that time. So far I have a list where those options are available, but upon selecting an item (and starting the download) it crashes.
For the sake of the question I commented out some things.
Relevant code:
// Get the information about where to obtain the files from
SKPackageURLInfo urlInfo = SKPackageManager.getInstance().getURLInfoForPackageWithCode(pack.packageCode);
// Steps: SKM, ZIP, TXG
List<SKToolsFileDownloadStep> downloadSteps = new ArrayList<>();
downloadSteps.add(new SKToolsFileDownloadStep(urlInfo.getMapURL(), pack.file, pack.skmsize)); // SKM
//downloadSteps.add(); // TODO ZIP
//downloadSteps.add()); // TODO TXG
List<SKToolsDownloadItem> downloadItems = new ArrayList<>(1);
downloadItems.add(new SKToolsDownloadItem(pack.packageCode, downloadSteps, SKToolsDownloadItem.QUEUED, true, true));
mDownloadManager.startDownload(downloadItems); // This is where the crash is
I am noticing a running download, since the onDownloadProgress() is getting triggered (callback from the manager). However the SKToolsDownloadItem that it takes as a parameter says that the stepIndex starts at 0. I don't know how this can be, since I manually put that at (byte) 0, just like the example does.
Also, the logs throw a warning on SingleClientConnManager, telling me:
Invalid use of SingleClientConnManager: connection still allocated.
This is code that gets called from within the manager somewhere. I am thinking there are some vital setup steps missing from the documentation and the example project.

How can I view an image from Azure Blob Storage, rather than download it?

Ok, so I am using Node.js and Azure Blob Storage to handle some file uploads.
When a person uploads an image I would like to show them a thumbnail of the image. The upload works great and I have it stored in my blob.
I used this fine link (Generating Azure Shared Access Signatures with BlobService.getBlobURL() in Azure SDK for Node.js) to help me write this code, which creates a shared access temporary URL.
process.env['AZURE_STORAGE_ACCOUNT'] = "[MY_ACCOUNT_NAME]";
process.env['AZURE_STORAGE_ACCESS_KEY'] = "[MY_ACCESS_KEY]";
var azure = require('azure');
var blobs = azure.createBlobService();
var tempUrl = blobs.getBlobUrl('[CONTAINER_NAME]', "[BLOB_NAME]", { AccessPolicy: {
    Start: Date.now(),
    Expiry: azure.date.minutesFromNow(60),
    Permissions: azure.Constants.BlobConstants.SharedAccessPermissions.READ
}});
This creates a url just fine.
Something like this: https://[ACCOUNT_NAME].blob.core.windows.net:443/[CONTAINER_NAME]/[BLOB_NAME]?st=2013-12-13T17%3A33%3A40Z&se=2013-12-13T18%3A33%3A40Z&sr=b&sp=r&sig=Tplf5%2Bg%2FsDQpRafrtVZ7j0X31wPgZShlwjq2TX22mYM%3D
The problem is that when I take the temp url and plug it into my browser it will only download the image rather than view it (in this case it is a simple jpg file).
This translates to my code: I can't seem to view it in an <img> tag...
The link is right and downloads the right file...
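For completeness, this is roughly how I'm using the URL on the page (simplified):
// Render the blob inline using the SAS URL (the same URL that downloads fine when pasted into the browser)
var img = document.createElement('img');
img.src = tempUrl;
document.body.appendChild(img);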
Is there something I need to do to view the image rather than download it?
Thanks,
David
UPDATE
Ok, so I found this article:
http://social.msdn.microsoft.com/Forums/windowsapps/en-US/b8759195-f490-420b-a587-2bb614366ad2/embedding-images-from-blob-storage-in-ssrs-report-does-not-work
Basically it told me that I wasn't setting the content type when uploading, so the browser didn't know what to do with the file.
I used code from here: http://www.snip2code.com/Snippet/8974/NodeJS-Photo-Upload-with-Azure-Storage/
This allowed me to upload it correctly and it now views properly in the browser.
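In other words, the fix was to pass an explicit content type when uploading the blob. Roughly like the following, though the exact option name may differ between SDK versions (newer azure-storage releases use contentSettings.contentType), so treat it as an assumption to verify against the linked snippet:
var azure = require('azure');
var blobService = azure.createBlobService();

// Upload with an explicit content type so the browser renders the image inline
// instead of downloading it (option name assumed for this SDK version).
blobService.createBlockBlobFromFile('[CONTAINER_NAME]', '[BLOB_NAME]', 'uploads/photo.jpg',
    { contentType: 'image/jpeg' },
    function (error) {
        if (error) { console.error(error); }
    });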
The issue I am having now is that when I put the tempUrl into an img tag I get this error:
Failed to load resource: the server responded with a status of 403 (Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.)
This is the exact same link that works just fine if I paste it into my browser... why can't I show it in an image tag?
UPDATE 2
Ok, so as a stupid test I put in a 7-second delay between when my page loads and when the img tag gets its source from the temp URL. This seems to fix the problem (most of the time), but it is obviously a crappy solution even when it works...
At least this verifies that, because it works sometimes, my markup is at least correct.
I can't, for the life of me, figure out why a delay would make a bit of difference...
Thoughts?
UPDATE 3
Ok, based on a comment below, I have tried to set my start time to about 20 minutes in the past.
var start = moment().add(-20, 'm').format('ddd MMM DD YYYY HH:mm:ss');
var tempUrl = blobs.getBlobUrl(Container, Filename, { AccessPolicy: {
    Start: start,
    Expiry: azure.date.minutesFromNow(60),
    Permissions: azure.Constants.BlobConstants.SharedAccessPermissions.READ
}});
I made my start variable the same format as the azure.date.minutesFromNow. It looks like this: Fri Dec 13 2013 14:53:58
When I do this I am not able to see the image even in the browser, much less in the img tag. Not sure if I am doing something wrong there...
UPDATE 4 - ANSWERED
Thanks to the glorious @MikeWo, I have the solution. The correct code is below:
var tempUrl = blobs.getBlobUrl('[CONTAINER_NAME]', "[BLOB_NAME]", { AccessPolicy: {
    Start: azure.date.minutesFromNow(-5),
    Expiry: azure.date.minutesFromNow(45),
    Permissions: azure.Constants.BlobConstants.SharedAccessPermissions.READ
}});
Mike was correct in that there seemed to be some sort of clock disconnect between the server and my localhost, so I needed to set the start time in the past. In update 3 I was doing that, but Mike noticed that Azure does not allow the start and end time to be more than 60 minutes apart... so in update 3 I was using -20 for the start and +60 for the end, which is 80 minutes.
The new, successful way above makes the total window 50 minutes, and it works without any delay at all.
Thanks for taking the time Mike!
Short version: There is a bit of time drift that occurs in distributed systems, including in Azure. In the code that creates the SAS, instead of using a start time of Date.now(), set the start time to a minute or two in the past. Then you should be able to remove the delay.
Long version: The clock on the machine creating the signature and stamping Date.now() might be a few seconds faster than the machines in BLOB storage. When the request to the URL is made immediately, the BLOB service hasn't hit the "start time" of the SAS yet and thus throws the 403. So, by setting the start time a few seconds in the past, or even to the start of the current day if you want to cover a massive clock drift, you build in handling of the clock drift.
UPDATE: After some trial and error: make sure an ad hoc SAS is not longer than an hour. Setting the start time a few minutes in the past and the expiration 60 minutes in the future was too big a window. Set the start a little in the past and the expiration not quite an hour after that.
