How to implement the DIAL protocol in WinJS?

I have tried this:
var dialDevicePicker = new Windows.Media.DialProtocol.DialDevicePicker();
var rect = {
    x: 0,
    y: 0,
    width: 200,
    height: 200
};
dialDevicePicker.show(rect);
Finally I got a popup that searches for devices, but it shows a message saying to disconnect first.
I have searched the Universal Windows samples on GitHub, but found only a C# example, not a JavaScript one.
Can someone help me implement the DIAL protocol for casting videos from my app to a Chromecast?
I have added an image for reference.
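For what it's worth, here is a minimal sketch of how the picker's events might be wired up in WinJS, based on the Windows.Media.DialProtocol API. "YourDialAppName" and the launch arguments are placeholders for whatever DIAL application the target device actually exposes, and I have not verified this against a Chromecast:
var dial = Windows.Media.DialProtocol;
var picker = new dial.DialDevicePicker();

// Only list devices that support the DIAL app we care about (placeholder name).
picker.filter.supportedAppNames.append("YourDialAppName");

picker.addEventListener("dialdeviceselected", function (e) {
    var app = e.device.getDialApp("YourDialAppName");
    app.requestLaunchAsync("v=someVideoId").done(function (result) {
        if (result === dial.DialAppLaunchResult.launched) {
            // Report the device as connected so the picker UI reflects it.
            picker.setDisplayStatus(e.device, dial.DialDeviceDisplayStatus.connected);
            picker.hide();
        }
    });
});

picker.addEventListener("disconnectbuttonclicked", function (e) {
    var app = e.device.getDialApp("YourDialAppName");
    app.stopAsync().done(function () {
        picker.setDisplayStatus(e.device, dial.DialDeviceDisplayStatus.disconnected);
    });
});

picker.show({ x: 0, y: 0, width: 200, height: 200 });
The "disconnect first" message suggests the picker thinks a device is already connected, so handling disconnectbuttonclicked and reporting status via setDisplayStatus may be the missing piece.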

Related

Web Bluetooth - Show extra info on scanning dialog

I'm trying to support Web Bluetooth to connect to my devices and perform a simple task (such as blinking an LED).
However, the device information shown in the scanning dialog when calling navigator.bluetooth.requestDevice is not very clear. It only shows the device name and a random (?) hex string.
The problem is that all my devices have the same name (AWESOME_LED), so it's not easy for the user to select the correct LED when all the scan results show the same name. As far as I know, we cannot customize the dialog to show more info.
I came up with a solution: make the device name unique for each LED, in the format AWESOME_LED + [uniqueid], e.g. AWESOME_LED1, AWESOME_LED2, AWESOME_LED3, so that the user can distinguish one from the others.
My questions are:
Is there any alternative solution without making the device name unique?
If not, is there any problem / rejection / limitation from Apple or Google for my current app on the App Store / Google Play if the device names are not all the same? I have been investigating this on the Apple forums and in the Accessory Design Guidelines, and it looks like there is no problem; I just want to make sure, in case anyone has faced trouble from Apple / Google.
Thanks for your help.
[Screenshot: scanning dialog]
My questions are:
Is there any alternative solution without making the device name unique?
The browser prompt is not customisable yet. One solution, which you highlighted already, is to make your LED device name unique. If you're able to control the device, why not have one AWESOME_DEVICE name and a GATT characteristic you can write to that controls individual LED colors? Maybe something like:
const device = await navigator.bluetooth.requestDevice({
  filters: [{ name: "AWESOME_DEVICE" }],
  optionalServices: [0x1234], // services must be listed here to be accessible later
});
const server = await device.gatt.connect();
const service = await server.getPrimaryService(0x1234); // Your service UUID
const characteristic = await service.getCharacteristic(0x5678); // Your characteristic UUID
// Set LED #1 to red by writing [ledIndex, r, g, b].
await characteristic.writeValue(
  new Uint8Array([/*ledIndex=*/ 1, /*r=*/ 255, /*g=*/ 0, /*b=*/ 0])
);
If not, is there any problem / rejection / limitation from Apple or Google for my current app on the App Store / Google Play if the device names are not all the same? I have been investigating this on the Apple forums and in the Accessory Design Guidelines, and it looks like there is no problem; I just want to make sure, in case anyone has faced trouble from Apple / Google.
None that I'm aware of.

Using Flutter to connect and write to Bluetooth Devices

I'm new to Flutter and am just trying to make this work.
I'm using Flutter Blue: https://pub.dartlang.org/packages/flutter_blue
It connects fine, but when writing I receive the error message below, and I'm not sure what I'm doing wrong.
Here's my code:
onPressed: () {
  print("HEY write pressed");
  var fff1 = new Guid("0000fff1-0000-1000-8000-00805f9b34fb");
  var fffa = new Guid("0000fffa-0000-1000-8000-00805f9b34fb");
  BluetoothCharacteristic characteristic = new BluetoothCharacteristic(
      uuid: fffa, serviceUuid: fff1, descriptors: null, properties: null);
  _writeCharacteristic(characteristic);
},
PlatformException(locateCharacteristic, service could not be located on the device, null)
I've tried following this post.
Flutter Blue Read characteristic UUID
In production I would probably store the UUIDs in variables, but the effect should be similar.
If anyone has any guidance or tips that would be super welcome.
Your code is right, but the device you want to connect to doesn't contain this service: var fff1 = new Guid("0000fff1-0000-1000-8000-00805f9b34fb").
Check which services actually exist on the device you want to connect to.

Does the v3 Google Cast receiver parse alternative audio tracks from an HLS master playlist automatically, or do I have to define them in the sender?

I'm trying to get a multi-audio HLS stream working on a v3 Google Cast custom receiver app. The master playlist of the stream refers to several video renditions of different resolution and two alternative audio tracks:
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="TV Ton",DEFAULT=YES, AUTOSELECT=YES,URI="index_1_a.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="Audiodeskription",DEFAULT=NO, AUTOSELECT=NO,URI="index_2_a.m3u8"
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=383000,RESOLUTION=320x176,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_0_av.m3u8
...more renditions
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=3697000,RESOLUTION=1280x720,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_6_av.m3u8
The video plays fine in both the sender and receiver app, I can see both audio tracks in the sender app, but when casting to the receiver there are no controls for changing the audio tracks.
When accessing the AudioTracksManager's getTracks() method while intercepting the LOAD message like so:
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD, loadRequestData => {
    loadRequestData.media.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.TS;
    const audioTracksManager = playerManager.getAudioTracksManager();
    console.log(audioTracksManager.getTracks());
    console.log('Load request: ', loadRequestData);
    return loadRequestData;
  });
I get an error saying:
Uncaught Error: Tracks info is not available.
Maybe unrelated, but super weird: I can console.log the request's media prop and see its tracks prop (an array with the expected 1 video and 2 audio tracks). However, if I try to access the tracks property directly in the LOAD message interceptor, I get undefined.
I cannot look into the iOS sender code yet, so I tried to eliminate error sources on the receiver end. The thing is:
I always assumed that the receiver identifies alternative audio tracks on its own when loading HLS playlists. Is this assumption correct or can the AudioTracksManager only access tracks that have been previously defined in a sender app?
I couldn't find a clear statement on that in the Google Cast reference...
OK, feeling stupid for the time I spent on this, but I'm finally able to answer my own question. I didn't realize that I was accessing the AudioTracksManager in the wrong place, namely in the LOAD message interceptor instead of in a PLAYER_LOAD_COMPLETE event listener (as it is properly documented here).
After placing my logic into this event listener I was able to access and programmatically set my audio tracks.
So to answer my original question: Yes, the receiver app automatically identifies alternative audio tracks from an HLS playlist.
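For reference, a minimal sketch of what that listener might look like; the track-selection part is purely illustrative (setActiveById and picking the second track are my assumptions, not from the original post):
const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.addEventListener(
  cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
    // The tracks info is available once loading has completed.
    const audioTracksManager = playerManager.getAudioTracksManager();
    const tracks = audioTracksManager.getTracks();
    console.log(tracks);
    // Illustrative: switch to the second audio track if there is one.
    if (tracks.length > 1) {
      audioTracksManager.setActiveById(tracks[1].trackId);
    }
  });

context.start();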

socket.io - io.sockets.adapter object?

I'm experimenting with socket.io and trying to build a multi-room chat app. The guide I'm following is out of date, written for pre-1.0.0 socket.io.
I'm trying to find a list of connected clients in a given room. Googling around shows that I have to use the adapter. However, I cannot find documentation for it anywhere. I searched the GitHub docs, but the search didn't return any information on the adapter: https://github.com/socketio/socket.io-client/blob/master/docs/API.md
Can someone point me in the right direction and where I can read more about adapter and associated methods on it? Also if you can provide the most up to date documentation for socket.io I'd greatly appreciate it. Thank you.
You can get a map of all rooms in the top level namespace like this:
io.nsps['/'].adapter.rooms
You can list the sockets in one of those rooms like this:
function getSocketsInRoom(room, namespace = '/') {
  let roomObj = io.nsps[namespace].adapter.rooms[room];
  return roomObj ? roomObj.sockets : {};
}
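Hypothetical usage, relying on the same undocumented internals (the room name "hello" is just an example):
// Log the ids of all sockets currently in room "hello".
const socketIds = Object.keys(getSocketsInRoom('hello'));
console.log(socketIds); // e.g. ['2v8OmIS4qTGX61-YAAAC', '3YnScxOgpmAGhZWsAAAG']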
As best I can tell, this kind of stuff is simply not documented. I've only discovered things like this by examining how things are stored in the debugger. That may or may not mean it's subject to change in the future - I really have no idea.
When the code below is executed on the server side (written in Node.js), with two clients connected to the same room, named "hello":
var clientsInRoom = io.sockets.adapter.rooms[room];
it gives you this output:
Room {
  sockets:
   { '2v8OmIS4qTGX61-YAAAC': true, '3YnScxOgpmAGhZWsAAAG': true },
  length: 2 }
So it basically gives you each client id, whether it is connected, and the total number of clients connected to a specific room.
But when you write the code below and console.log it:
var clientsInRoom = io.sockets.adapter.rooms
with a single client connected, it will log this:
{ '9mVAHSDwcwnqsF4aAAAA':
   Room { sockets: { '9mVAHSDwcwnqsF4aAAAA': true }, length: 1 } }
That crazy '9mVAHSDwcwnqsF4aAAAA' literal is the client id, which is unique for each client.

Web Audio API audio node stop not working in iOS UIWebView

I am playing a list of audio files in an HTML page inside an iOS UIWebView, using the JavaScript AudioContext object, e.g.:
var ctx = new AudioContext();
var node = ctx.createBufferSource();
node.buffer = AudioBufferFromAjaxCall;
node.connect(gainNodeObjCreatedEarlier);
node.start();
The problem is that node.stop() isn't working; the audio keeps playing continuously and never stops.
It would be helpful to know the version of Safari that your UIWebView is using and whether or not you can reproduce the issue in the desktop version. Also, do you get any kind of log or error in the console?
In any case, have you tried passing a parameter to the stop method? Something like:
node.stop(0);
The argument is the time, in seconds on the AudioContext clock, at which the node should stop; a value of 0 (or any time already in the past) means stop immediately. Sending 0 may help.
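If that still doesn't help, here is a minimal sketch of the whole pattern for comparison, assuming decodedBuffer is an AudioBuffer obtained earlier (e.g. via ctx.decodeAudioData on the Ajax response); the webkitAudioContext fallback is for older iOS WebViews:
// Keep a reference to the source node so it can be stopped later.
var AudioCtx = window.AudioContext || window.webkitAudioContext;
var ctx = new AudioCtx();

var node = ctx.createBufferSource();
node.buffer = decodedBuffer; // assumed: an AudioBuffer decoded earlier
node.connect(ctx.destination);
node.start(0);

// Later, e.g. in a tap handler:
node.stop(0); // a time of 0 (i.e. in the past) means "stop now"

// Note: an AudioBufferSourceNode is one-shot. Once stopped it cannot be
// restarted; create a new buffer source to play the sound again.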
