Default Media Receiver limitations - google-cast

Can I use the Default Media Receiver to display a web page or an HTML5 app? Using JavaScript in the Chrome browser, I have no problem sending a single PNG image (content type image/png) to the Chromecast, but it fails if I specify an HTML link (content type text/html): session.loadMedia fires the error handler, and e.code/e.description reports session_error/LOAD_FAILED. I used Google's home page for my test:
// var currentMediaURL = "https://www.google.com/images/srpr/logo11w.png";
// var currentMediaType = "image/png";
var currentMediaURL = "https://www.google.com";
var currentMediaType = "text/html";

function startApp() {
    var mediaInfo = new chrome.cast.media.MediaInfo(currentMediaURL, currentMediaType);
    var request = new chrome.cast.media.LoadRequest(mediaInfo);
    session.loadMedia(request, onMediaDiscovered.bind(this, 'loadMedia'), onMediaError);
}

I think you need a custom receiver; just have it run your code accordingly. The Default Media Receiver is a media player for supported image, audio, and video types only, so a text/html URL fails with LOAD_FAILED.
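For reference, a custom receiver is just an HTML page, registered under your own App ID, that the Chromecast loads when the sender launches the session. A minimal sketch with the current CAF receiver SDK (assuming you only need your own markup rendered, with no media playback controls) might look like:

<html>
  <body>
    <!-- your own HTML5 app goes here -->
    <script src="//www.gstatic.com/cast/sdk/libs/caf_receiver/v3/cast_receiver_framework.js"></script>
    <script>
      // Start the receiver so the sender's session can connect.
      cast.framework.CastReceiverContext.getInstance().start();
    </script>
  </body>
</html>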

Related

Azure Text to Speech (Cognitive Services) in web app - how to stop it from outputting audio?

I'm using Azure Cognitive Services for Text to Speech in a web app.
I return the bytes to the browser and it works great; however, on the server (or local machine) the speechSynthesizer.SpeakTextAsync(inp) line also plays the audio through the speakers.
Is there a way to turn this off, since this runs on a web server (and even if I ignore it, there's the delay while the audio plays before the data is sent back)?
Here's my code:
var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);
speechConfig.SpeechSynthesisVoiceName = "fa-IR-FaridNeural";
speechConfig.OutputFormat = OutputFormat.Detailed;

using (var speechSynthesizer = new SpeechSynthesizer(speechConfig))
{
    // todo - how to disable it saying it here?
    var speechSynthesisResult = await speechSynthesizer.SpeakTextAsync(inp);
    return Convert.ToBase64String(speechSynthesisResult.AudioData);
}
What you can do is add an AudioConfig to the SpeechSynthesizer.
In this AudioConfig object you specify the path of a .wav file on the server; whenever you run SpeakTextAsync, the audio is redirected into that .wav file instead of the speaker.
You can then read that audio file and apply your own logic later.
Just add the following line before creating the SpeechSynthesizer object:
var audioconfig = AudioConfig.FromWavFileOutput(filepath);
Here filepath is the location of the .wav file, as a string.
Complete code:
string filepath = "<file path>";
var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);
var audioconfig = AudioConfig.FromWavFileOutput(filepath);
speechConfig.SpeechSynthesisVoiceName = "fa-IR-FaridNeural";
speechConfig.OutputFormat = OutputFormat.Detailed;

using (var speechSynthesizer = new SpeechSynthesizer(speechConfig, audioconfig))
{
    // The audio now goes to the .wav file instead of the speaker.
    var speechSynthesisResult = await speechSynthesizer.SpeakTextAsync(inp);
    return Convert.ToBase64String(speechSynthesisResult.AudioData);
}
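Side note, worth verifying against the SDK docs: if you don't need the intermediate file at all, passing a null output config, i.e. new SpeechSynthesizer(speechConfig, null as AudioConfig), should also suppress speaker playback entirely, and speechSynthesisResult.AudioData is still populated either way.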

Playing HTML audio in WKWebView reports error: Required client entitlement is missing

An error is reported when playing HTML5 audio or video elements in WKWebView: Error acquiring assertion: <Error Domain=RBSAssertionErrorDomain Code=3 "Required client entitlement is missing" UserInfo={RBSAssertionAttribute=<RBSDomainAttribute| domain:"com.apple.webkit" name:"MediaPlayback" sourceEnvironment:"(null)">, NSLocalizedFailureReason=Required client entitlement is missing}>
After that, the performance of the webview becomes very poor, with no response to any element click. This problem has affected tens of thousands of our users. Our current workaround is to call a native client method to play the audio. How can the problem be solved fundamentally?
1. There is an audio element and a button in my HTML file. Clicking the button plays the audio.
<body>
    <button onclick="handleClick()">PLAY</button>
    <audio id="audio" src="https://ac-dev.oss-cn-hangzhou.aliyuncs.com/test-2022-music.mp3"></audio>
    <script>
        function handleClick() {
            document.getElementById("audio").play();
        }
    </script>
</body>
2. Create a WKWebView to load the HTML file in my demo app.
class ViewController: UIViewController, WKUIDelegate {
    var webView: WKWebView!

    override func loadView() {
        let config = WKWebViewConfiguration()
        config.preferences.javaScriptEnabled = true
        config.allowsInlineMediaPlayback = true
        webView = WKWebView(frame: .zero, configuration: config)
        webView.uiDelegate = self
        view = webView
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        let myURL = URL(string: "https://ac-dev.oss-cn-hangzhou.aliyuncs.com/test-2022-py.html")
        let myRequest = URLRequest(url: myURL!)
        webView.load(myRequest)
    }
}
3. Click the button in the HTML to play the audio; the error appears in the Xcode console:
iPadN[2133:855729] [assertion] Error acquiring assertion: <Error Domain=RBSAssertionErrorDomain Code=3 "Required client entitlement is missing" UserInfo={RBSAssertionAttribute=<RBSDomainAttribute| domain:"com.apple.webkit" name:"MediaPlayback" sourceEnvironment:"(null)">, NSLocalizedFailureReason=Required client entitlement is missing}>
4. To sum up, this error appears when playing audio or video in HTML. The app's performance then drops sharply and interactive response becomes very slow.
I had a similar issue. My plan was to use new Audio(...) with an asset, or with a blob stored as an object URL in storage. It worked for the asset but not for the blob stored as a string in storage, and only on the device, exactly as you describe.
I resolved the failing case with the following strategy (it's all TypeScript in my case, but it can be done directly in JavaScript):
1. Create an AudioContext.
2. Create an ArrayBuffer from the blob/string using a FileReader.
3. Decode the audio data: await context.decodeAudioData(arrayBuffer, (buffer) => {...}).
4. Play the sound on the destination of the audio context:

const source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);
source.start(0);

5. Clean up on end, if needed:

source.onended = async (): Promise<void> => {
    source.disconnect();
    await context.close();
};
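Putting the steps together, a self-contained sketch (the playBlob wrapper and its wiring are mine, not from the original answer):

// Plays a Blob through the Web Audio API instead of new Audio(...).
async function playBlob(blob: Blob): Promise<void> {
    const context = new AudioContext();
    // Step 2: read the blob into an ArrayBuffer via FileReader.
    const arrayBuffer = await new Promise<ArrayBuffer>((resolve, reject) => {
        const reader = new FileReader();
        reader.onload = () => resolve(reader.result as ArrayBuffer);
        reader.onerror = () => reject(reader.error);
        reader.readAsArrayBuffer(blob);
    });
    // Step 3: decode into a raw AudioBuffer (callback form, which older WebKit also supports).
    const buffer = await new Promise<AudioBuffer>((resolve, reject) => {
        context.decodeAudioData(arrayBuffer, resolve, reject);
    });
    // Step 4: play it on the context's destination.
    const source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(0);
    // Step 5: clean up when playback ends.
    source.onended = async (): Promise<void> => {
        source.disconnect();
        await context.close();
    };
}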

How to use Adaptive Cards in a Teams messaging extension with a thumbnail card preview?

I want to send a card to a Teams channel using a messaging extension. In the messaging extension I need to show a preview thumbnail card, and on click of that thumbnail an Adaptive Card should be displayed.
I have tried the code below, but using "MessagingExtensionResult" gives an error. I'm also unable to add the DLL for "MessagingExtensionResult"; it gives an incompatible-version error. I'm using .NET Framework 4.6.
var results = new ComposeExtensionResult()
{
    AttachmentLayout = "list",
    Type = "result",
    Attachments = new List<ComposeExtensionAttachment>(),
};
var card = CardHelper.CreateCardForExperties(pos, true);
var composeExtensionAttachment = card.ToAttachment().ToComposeExtensionAttachment();
results.Attachments.Add(new ComposeExtensionAttachment
{
    ContentType = "application/vnd.microsoft.teams.card.adaptive",
    Content = JsonConvert.DeserializeObject(updatedJsonString),
    Preview = composeExtensionAttachment
});
Using the code below we can invoke an Adaptive Card from the thumbnail card preview.
ComposeExtensionResponse response = null;
1. Build the result container:

var results = new ComposeExtensionResult()
{
    AttachmentLayout = "list",
    Type = "result",
    Attachments = new List<ComposeExtensionAttachment>(),
};

2. Create a function that returns the thumbnail card (the preview card):

var previewThumbnailCard = CreateThumbnailCard();

3. Create a function that returns the Adaptive Card in the form of an attachment:

var adaptivecardattachment = CreateAdaptiveCardAsAttachment();

4. Cast that attachment to a ComposeExtensionAttachment, pass the thumbnail card to it as the preview, and add it to the results (sketches of the two helper functions follow below):

var composeExtensionAttachmentAdaptive = adaptivecardattachment.ToComposeExtensionAttachment(previewThumbnailCard.ToAttachment());
results.Attachments.Add(composeExtensionAttachmentAdaptive);

5. Return the response:

response = new ComposeExtensionResponse
{
    ComposeExtension = results
};
return response;
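For completeness, hedged sketches of the two helpers the steps assume; the names come from the answer, and the card contents are placeholders (Bot Builder v3 with the Microsoft.Bot.Connector types assumed):

private static ThumbnailCard CreateThumbnailCard()
{
    // Placeholder preview content; replace with your own fields.
    return new ThumbnailCard
    {
        Title = "Preview title",
        Text = "Preview text shown before the card is inserted."
    };
}

private static Attachment CreateAdaptiveCardAsAttachment()
{
    // Minimal empty Adaptive Card; normally you would load your real card JSON here.
    var adaptiveCardJson = @"{ ""type"": ""AdaptiveCard"", ""version"": ""1.0"", ""body"": [] }";
    return new Attachment
    {
        ContentType = "application/vnd.microsoft.card.adaptive",
        Content = JsonConvert.DeserializeObject(adaptiveCardJson)
    };
}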

How to create a Chrome notification for changes to my website?

How do I create a Google Chrome extension that shows a notification for every change to my website's pages?
If your site has an RSS feed, you can do something like this (it catches new pages only):
Get and parse the RSS feed using jQuery:
$.get('http://yoursite.com/rss', function(data) {
    $(data).find('item').each(function() {
        var url_news = $(this).find('link').text();
        var news_id = $(this).find('id').text();
        var descr = $(this).find('title').text();
    });
});
Show the notification using a Chrome notification:
https://developer.chrome.com/extensions/notifications
var title = "Title";
// img, descr and url_news come from the feed-parsing step above.
var notification = webkitNotifications.createNotification(img, title, descr);
notification.onclick = function() {
    chrome.tabs.create({url: url_news}, function() {
        // Close the notification once the tab has opened.
        notification.cancel();
    });
    chrome.windows.getLastFocused(null, function(win) {
        if (win.state == "minimized") {
            chrome.windows.update(win.id, {state: "normal", focused: true});
        }
    });
};
notification.show();
When your RSS feed updates, the extension will display a notification.
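Note that webkitNotifications has since been removed from Chrome; a rough equivalent with the current chrome.notifications API (the notification id and option values here are illustrative) would be:

chrome.notifications.create('news-' + news_id, {
    type: 'basic',
    iconUrl: img,    // extension icon or article image
    title: title,
    message: descr
});

chrome.notifications.onClicked.addListener(function(notificationId) {
    chrome.tabs.create({url: url_news});
});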

How to interact with RAMP in a custom receiver application for Chromecast

I've created custom sender and receiver applications for Chromecast. The sender should send a URL (a DASH MPD) over the wire. The receiver should create a video element and, upon receiving the DASH MPD URL, create some extra files that interact with the video element.
On the sender I'm doing this:
var request = new cast.LaunchRequest(APP_ID, receiver);
request.parameters = params;
cast_api.launch(request, onLaunch);
Followed by:
var request = new cast.MediaLoadRequest("http://dash.edgesuite.net/envivio/dashpr/clear/Manifest.mpd");
request.parameters = params;
cast_api.loadMedia(cv_activity.activityId, request, onLoad);
Then in the receiver I have:
var receiver = new cast.receiver.Receiver(APP_ID, [cast.receiver.RemoteMedia.NAMESPACE]);
var rampHandler = new cast.receiver.RemoteMedia();
rampHandler.addChannelFactory(receiver.createChannelFactory(cast.receiver.RemoteMedia.NAMESPACE));
rampHandler.onOpen = onOpen;
rampHandler.onMessage = onMessage;
rampHandler.onLoad = onLoad;
rampHandler.onInfo = onInfo;
rampHandler.onPlay = onPlay;
rampHandler.onStop = onStop;
rampHandler.onEnded = onEnded;
rampHandler.onMetadataLoaded = onMetadataLoaded;
rampHandler.onLoadMetaDataError = onLoadMetaDataError;
rampHandler.onVolume = onVolume;
onOpen and onMessage each fire once when I launch the receiver. Nothing appears to happen after I call api.loadMedia. From what I've read, the API issues RAMP calls which the RemoteMedia handler should respond to, so I'm expecting either onLoad or onMessage to be triggered after api.loadMedia is called, but nothing happens. There are no traces at all in the Chromecast debugger.
You have to set the handlers on the object prototype rather than on the instance:
cast.receiver.RemoteMedia.prototype.onLoad = onLoad;
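Applied to the receiver code above, that means assigning each handler on the prototype before creating the channel factory; a sketch (receiver.start() is assumed from the standard receiver setup of that SDK era):

// Override the RAMP handlers on the prototype, not the instance.
cast.receiver.RemoteMedia.prototype.onLoad = onLoad;
cast.receiver.RemoteMedia.prototype.onMessage = onMessage;
cast.receiver.RemoteMedia.prototype.onOpen = onOpen;
// ...and likewise for the remaining handlers listed above...

var receiver = new cast.receiver.Receiver(APP_ID, [cast.receiver.RemoteMedia.NAMESPACE]);
var rampHandler = new cast.receiver.RemoteMedia();
rampHandler.addChannelFactory(receiver.createChannelFactory(cast.receiver.RemoteMedia.NAMESPACE));
receiver.start();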
