HLS.js / Wowza / CloudFront: browser does not download media after encoder restart
my workflow is:
Using ffmpeg I send an RTMP stream to WOWZA App1.
App1 sends the stream to an internal 2nd app(App2).
App2 transcodes, packetizes to HLS, and is the origin for the CloudFront distribution.
CloudFront serves the streams to users.
The player on the client side is based on HLS.js.
To prepare for different scenarios I forced App2 to restart during a test transmission. In this case App1 still receives the stream from ffmpeg and keeps trying to send it to App2; once App2 is ready again the link is re-established and App1 resumes sending the stream to App2, but there is no video on the client side.
Before the restart, chunklist.m3u8 lists chunks up to the 17th: media-u3510ez40_17.ts.
Then, while App2 is restarting, chunklist.m3u8 does not exist and CloudFront returns a 404 error.
And then, when App2 is back, chunklist.m3u8 lists a new set of chunks starting at 1 with a new id: media-u1ofkjj9w_1.ts.
The problem is that there is no video, and network traffic shows that the browser does not download the newly listed chunks.
chunklist.m3u8 keeps adding new chunks, but the browser does not download any of them... until the 18th chunk appears... and the video restarts.
I tried many times and the problem is always the same: before the restart the last chunk has number N, and after the restart there is no video until chunk N+1 is reached, even though the ids are different.
I don't know whether this issue is in Wowza, CloudFront, or the HLS.js player :/
chunklist.m3u8 Before Restart:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:9
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-DISCONTINUITY-SEQUENCE:0
#EXTINF:8.333,
media-u3510ez40_1.ts
#EXTINF:8.333,
media-u3510ez40_2.ts
#EXTINF:8.334,
.
.
.
media-u3510ez40_16.ts
#EXTINF:8.333,
media-u3510ez40_17.ts
chunklist.m3u8 After Restart:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:17
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-DISCONTINUITY-SEQUENCE:0
#EXTINF:16.396,
media-u1ofkjj9w_1.ts
#EXTINF:8.333,
media-u1ofkjj9w_2.ts
.
.
.
media-u1ofkjj9w_16.ts
#EXTINF:8.333,
media-u1ofkjj9w_17.ts
#EXTINF:8.333,
media-u1ofkjj9w_18.ts
You need to set the cupertinoCalculateChunkIDBasedOnTimecode property to true in the Add Custom Property section of the Wowza Streaming Engine application. See: https://www.wowza.com/docs/how-to-configure-apple-hls-packetization-cupertinostreaming
Note that this helps in the case of encoders that send synchronized timecodes to Wowza. For encoders that don't send synchronized timecodes, I suggest implementing absolute timecodes regardless of whether the encoder sends them. This will help the application recover at chunk N+1 after the restart.
The page below will help you configure it properly:
http://thewowza.guru/how-to-set-stream-timecodes-to-absolute-time/
About the session-id change: when using Wowza as the origin for a CloudFront distribution, you need to enable httpOriginMode and disable httpRandomizeMediaName to make it work properly. The Wowza doc below will help you set it up properly: https://www.wowza.com/docs/how-to-configure-a-wowza-server-as-an-http-caching-origin
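If you prefer editing the config file rather than the Manager UI, a sketch of what the custom property looks like, assuming it goes in the HLS packetizer's Properties container of the application's Application.xml (the exact container is described in the Wowza doc linked above):

```xml
<!-- Application.xml: custom property for the HLS (cupertino) packetizer -->
<Property>
    <Name>cupertinoCalculateChunkIDBasedOnTimecode</Name>
    <Value>true</Value>
    <Type>Boolean</Type>
</Property>
```

Restart the application after adding the property so it takes effect.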
Related
Make a broadcast like OBS in Node.js to stream live video
Hi guys, I want to make an app like OBS to stream video to a live video web service (Twitch, YouTube) with the RTMP protocol in Node.js. The website gives me these two tokens to go live: Stream URL: rtmp://rtmp.cdn.asset.aparat.com:443/event and Stream Key: ea1d40ff5d9d91d39ca569d7989fb1913?s=2f4ecd8367346828. I know how to stream from OBS, but I want to stream music from a server 24 hours a day. I'd be happy if you could help me with libraries or examples; if other languages can do this job, please tell me. Thank you.
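A common pattern is to build the full ingest URL from the Stream URL and Stream Key and hand it to ffmpeg (spawned from Node.js, or run directly). A minimal sketch; the ingestUrl helper name is mine, and the values are the ones from the question:

```javascript
// An RTMP ingest endpoint is usually "<Stream URL>/<Stream Key>".
function ingestUrl(streamUrl, streamKey) {
  // Avoid a double slash if the stream URL already ends with one
  return streamUrl.replace(/\/+$/, '') + '/' + streamKey;
}

const url = ingestUrl(
  'rtmp://rtmp.cdn.asset.aparat.com:443/event',
  'ea1d40ff5d9d91d39ca569d7989fb1913?s=2f4ecd8367346828'
);

// One way to stream music 24/7 is to loop an audio file with ffmpeg, e.g.:
//   ffmpeg -re -stream_loop -1 -i music.mp3 -c:a aac -f flv "<url>"
// and spawn that command from Node.js with child_process.spawn.
console.log(url);
```

Note that most RTMP services expect a video track as well; you may need to add a static image or test pattern input to the ffmpeg command depending on the service.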
How to list call recordings in Nexmo?
I'm working on a voice assistant with Nexmo. I use Node-RED to build the NCCO object, including a record node. In the tutorials provided by Nexmo, e.g. Build Your Own Voicemail With Node-RED and the Nexmo Voice API, they directly download the recording to the local machine. In my case, I don't want to immediately download the audio file via Node-RED, but rather let Nexmo store my audios and download them all together later through e.g. a Python script. The docs say: "NOTE: After your recording is complete, it is stored by Nexmo for 30 days before being automatically deleted". Unfortunately, I can't find any reference about where the audios are stored in my Nexmo account, or how to list all recordings/recording URLs of a Nexmo application. Thank you for any help. Nina
Currently there’s no way to get the recordings as a list from Nexmo. What you could do instead is capture the API response from the recording webhook and log it. Then later on, when you’re ready to download them, read it back into a get recording node.
If you connect a debug node to the /recording webhook, you can see the structure of the message object:

payload: object
  start_time: "2020-03-04T13:06:40Z"
  recording_url: "https://api.nexmo.com/v1/files/516f74a8-abcd-4270-b553-2582650a2e5a"
  size: 26478
  recording_uuid: "dbd3cb68-0a3a-4c89-bc2d-7c38abd2c497"
  end_time: "2020-03-04T13:06:47Z"
  conversation_uuid: "CON-93ef5eef-gg92-49ae-9e01-f3c782390dd9"
  timestamp: "2020-03-04T13:06:47.733Z"

You’ll need the recording_url to download a recording, but it’s good to keep the conversation_uuid handy, as you can look up the FROM and TO numbers based on it.
I'm leaving a possible solution with Google Sheets below, as a replacement for the /recording webhook. Import it from the clipboard into your editor and see the comment node for instructions :)

[{"id":"9198a326.cfb3e","type":"tab","label":"Flow 2","disabled":false,"info":""},{"id":"49dc7444.625ff4","type":"http in","z":"9198a326.cfb3e","name":"","url":"/record","method":"post","upload":false,"swaggerDoc":"","x":210,"y":380,"wires":[["21594561.f44c1a","47021216.a25afc","dc814f00.17fa7"]]},{"id":"47021216.a25afc","type":"http response","z":"9198a326.cfb3e","name":"","statusCode":"","headers":{},"x":610,"y":380,"wires":[]},{"id":"5371302c.ee3688","type":"getrecording","z":"9198a326.cfb3e","creds":"10de89c6.d1db3e","filename":"recordings/{{msg.payload.from}}_{{msg.payload.timestamp}}.mp3","x":1260,"y":580,"wires":[["6d03a7ed.2a6d7","38dd6acf.b46cae"]]},{"id":"21594561.f44c1a","type":"debug","z":"9198a326.cfb3e","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","x":450,"y":320,"wires":[]},{"id":"bcb08110.c2f7c","type":"e-mail","z":"9198a326.cfb3e","server":"smtp.gmail.com","port":"465","secure":true,"tls":true,"name":"","dname":"","x":1690,"y":580,"wires":[]},{"id":"6d03a7ed.2a6d7","type":"debug","z":"9198a326.cfb3e","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","x":1490,"y":500,"wires":[]},{"id":"38dd6acf.b46cae","type":"change","z":"9198a326.cfb3e","name":"","rules":[{"t":"set","p":"topic","pt":"msg","to":"'Voicemail from ' & msg.req.query.from","tot":"jsonata"}],"action":"","property":"","from":"","to":"","reg":false,"x":1500,"y":580,"wires":[["bcb08110.c2f7c"]]},{"id":"c8fadcb1.13aca","type":"inject","z":"9198a326.cfb3e","name":"Download/Send recordings in email","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":280,"y":580,"wires":[["af254a75.791888"]]},{"id":"f9706ea5.ea66","type":"change","z":"9198a326.cfb3e","name":"","rules":[{"t":"set","p":"payload","pt":"msg","to":"{\t \"start_time\": (msg.payload)[0],\t \"recording_url\": (msg.payload)[1],\t \"size\": (msg.payload)[2],\t \"recording_uuid\": (msg.payload)[3],\t \"end_time\": (msg.payload)[4],\t \"conversation_uuid\": (msg.payload)[5],\t \"timestamp\": (msg.payload)[6],\t \"from\": (msg.payload)[7]\t}","tot":"jsonata"}],"action":"","property":"","from":"","to":"","reg":false,"x":1060,"y":580,"wires":[["5371302c.ee3688"]]},{"id":"9f017b25.db6fe8","type":"debug","z":"9198a326.cfb3e","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":970,"y":440,"wires":[]},{"id":"dc814f00.17fa7","type":"change","z":"9198a326.cfb3e","name":"","rules":[{"t":"set","p":"payload","pt":"msg","to":"[payload.start_time, payload.recording_url, payload.size, payload.recording_uuid, payload.end_time, payload.conversation_uuid, payload.timestamp, req.query.from]","tot":"jsonata"}],"action":"","property":"","from":"","to":"","reg":false,"x":480,"y":440,"wires":[["c8118876.f668e"]]},{"id":"521189f7.2f6d8","type":"debug","z":"9198a326.cfb3e","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":830,"y":760,"wires":[]},{"id":"deba2a63.a4f4d","type":"split","z":"9198a326.cfb3e","name":"","splt":"\\n","spltType":"str","arraySplt":1,"arraySpltType":"len","stream":false,"addname":"","x":830,"y":580,"wires":[["f9706ea5.ea66"]]},{"id":"c8118876.f668e","type":"GSheet","z":"9198a326.cfb3e","creds":"90e07aa9.34a6","method":"append","action":"","sheet":"1mmXhj40aeSooxmtku3ma4auLyrHhJO8xCSQsklZ1_BU","cells":"Sheet4!A1","name":"","x":730,"y":440,"wires":[["9f017b25.db6fe8"]]},{"id":"af254a75.791888","type":"GSheet","z":"9198a326.cfb3e","creds":"bd7b95fd.c3dee8","method":"get","action":"","sheet":"1mmXhj40aeSooxmtku3ma4auLyrHhJO8xCSQsklZ1_BU","cells":"Sheet4!A:H","name":"","x":610,"y":580,"wires":[["521189f7.2f6d8","deba2a63.a4f4d"]]},{"id":"335f8e96.7242ba","type":"comment","z":"9198a326.cfb3e","name":"📖 Instructions","info":"1. Install `node-red-contrib-google-sheets` package and restart Node-RED.\n2. Add _creds_, _SpreadsheetID_ and _Cells_ in the **GSheet** nodes \n3. Add your Nexmo credentials in the **Get Recording** node\n4. Configure **email** node or add download functionality","x":220,"y":180,"wires":[]},{"id":"10de89c6.d1db3e","type":"nexmovoiceapp","z":"","name":"New Voice App"},{"id":"90e07aa9.34a6","type":"gauth","z":"9198a326.cfb3e"}]
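For later scripted downloads, each logged row gives you the recording_url (the actual download requires a JWT for your Nexmo application, as in the Get Recording node). The sketch below only shows how the flow derives a local filename from a logged payload, mirroring the getrecording node's template recordings/{{msg.payload.from}}_{{msg.payload.timestamp}}.mp3; the helper name and the example "from" number are mine (the flow reads it from the webhook's query string):

```javascript
// Reproduce the Get Recording node's filename template in plain JS.
function recordingFilename(payload) {
  return 'recordings/' + payload.from + '_' + payload.timestamp + '.mp3';
}

// Sample payload, matching the webhook message structure shown above.
const payload = {
  start_time: '2020-03-04T13:06:40Z',
  recording_url: 'https://api.nexmo.com/v1/files/516f74a8-abcd-4270-b553-2582650a2e5a',
  size: 26478,
  recording_uuid: 'dbd3cb68-0a3a-4c89-bc2d-7c38abd2c497',
  end_time: '2020-03-04T13:06:47Z',
  conversation_uuid: 'CON-93ef5eef-gg92-49ae-9e01-f3c782390dd9',
  timestamp: '2020-03-04T13:06:47.733Z',
  from: '14155550100' // hypothetical example; the flow takes this from req.query.from
};

console.log(recordingFilename(payload));
// recordings/14155550100_2020-03-04T13:06:47.733Z.mp3
```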
MS Bot Framework - Messages with audio attachments lost
I'm writing a bot in Node.js using the MS Bot Framework. To send attachments, I'm actually using the filestream buffer as the contentUrl, e.g.

...
var base64 = new Buffer(filedata).toString('base64');
var msg = new builder.Message()
    .setText(session, text)
    .addAttachment({
        contentUrl: util.format('data:%s;base64,%s', contentType, base64),
        contentType: contentType
    });
session.send(msg);
...

where contentType is the proper mimetype for the file in question. When I test this locally (using the Bot Framework Emulator), this works perfectly for both image and audio files - messages with image attachments display the image, and messages with audio attachments show the audio card allowing for playback, etc. However, when I test this through FB Messenger, the images work fine, but the audio messages just never appear in FB. Not even the text of the message comes through; it's like the entire message is lost. The dialogue simply skips over the message containing the audio attachment. I'm not even seeing any errors received server-side. This is happening with both mp3 and wav test audio files, each under 1MB (smaller than many of the image files I've successfully tested). Is there some trick to sending playable audio files to the FB Messenger channel specifically? Thanks!
I wasn't (yet) able to get a response from FB support, but after further testing, it looks like there is a filesize limit on audio files FB Messenger will accept. Specifically, I was able to get a sample file of ~45KB to send and display in Messenger successfully, but a larger file of ~400KB got dropped (aka seemed to send successfully from the server-side perspective, but did not show up in Messenger). Strangely, some of my much larger image files went through, so it seems like this same limit does not exist for image attachments. Will do some further testing, but it seems like the ultimate solution will be either to majorly compress my audio files, or to host them somewhere else instead of sending as a filestream.
Play Audio from receiver website
I'm trying to get my receiver to play an mp3 file hosted on the server with the following function:

playSound_: function(mp3_file) {
    var snd = new Audio("audio/" + mp3_file);
    snd.play();
},

However, most of the time it doesn't play, and when it does play, it's delayed. When I load the receiver in my local browser, however, it works fine. What's the correct way to play audio on the receiver?
You can use either a MediaElement tag or the Web Audio API. The simplest is probably a MediaElement.
Monotouch video streaming with MPMoviePlayerController
I am creating a MonoTouch iPhone app that will display streaming videos. I have been able to get the MPMoviePlayerController working with a local file (NSUrl.FromFile), but have not been able to get videos streamed from a media server. Here is the code I am using to play the video:

string url = @"http://testhost.com/test.mp4";
var nsurl = NSUrl.FromString(url);
mp = new MPMoviePlayerController(nsurl);
mp.SourceType = MPMovieSourceType.Streaming;
//enable AirPlay
//mp.AllowsAirPlay = true;
//Add the MPMoviePlayerController View
this.View.AddSubview(mp.View);
//set the view to be full screen and show animated
mp.SetFullscreen(true, true);
//MPMoviePlayer must be set to PrepareToPlay before playback
mp.PrepareToPlay();
//Play Movie
mp.Play();

Is there something else in implementing the MPMoviePlayerController for video streaming that I am missing? I also read that videos for iOS should be streamed using Apple's HTTP Live Streaming on the media server; is that a requirement? I have never worked with video streaming on an iOS device before, so I am not sure if there is something lacking in my code, the media server setup, or a combination.
I'm pretty sure you need a streaming server to use a video file from an HTTP URL. In any case, it's a requirement for applications (on the App Store) to do so:

"Important: iPhone and iPad apps that send large amounts of audio or video data over cellular networks are required to use HTTP Live Streaming. See 'Requirements for Apps.'"

The good news is that your existing code should not have to be changed to handle this (it's server-side, not client-side).