I've been looking into Azure Media Services and have built a program that copies my video blob from my website's storage to my Media Services storage account and creates an asset/asset file from it. From there, I've got it encoding for adaptive streaming.
The issue I'm having is with playback. I want to use the Azure Media Player, as it shows great promise in detecting the environment and providing the correctly encoded video for streaming.
When I use the iframe approach (found here) it works, but I feel I'm losing some ability to customize, and it's also breaking in Safari on Mac.
<iframe class="video-preview" src="//aka.ms/azuremediaplayeriframe?url=[MANIFEST URL HERE]&autoplay=false" name="azuremediaplayer" allowfullscreen></iframe>
The other method (found here) uses the <video> tag along with CSS and JS files placed in the header.
Header code:
<link href="//amp.azure.net/libs/amp/1.1.0/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet">
<script src="//amp.azure.net/libs/amp/1.1.0/azuremediaplayer.min.js"></script>
<script>
amp.options.flashSS.swf = "//amp.azure.net/libs/amp/1.1.0/techs/StrobeMediaPlayback.2.0.swf";
amp.options.flashSS.plugin = "//amp.azure.net/libs/amp/1.1.0/techs/MSAdaptiveStreamingPlugin-osmf2.0.swf";
amp.options.silverlightSS.xap = "//amp.azure.net/libs/amp/1.1.0/techs/SmoothStreamingPlayer.xap";
</script>
Video code:
<video id="azuremediaplayer" class="azuremediaplayer amp-default-skin amp-big-play-centered video-preview" controls data-setup='{"nativeControlsForTouch": false}'>
<source src="[MANIFEST URL HERE]" type="application/vnd.ms-sstr+xml" />
<p class="amp-no-js">To view this video please enable JavaScript, and consider upgrading to a web browser that supports HTML5 video</p>
</video>
The data-setup attribute is supposed to activate the <video> tag and turn it into an Azure Media Player, but that's not happening for me.
So, my question is: what method is preferred/standard? I know that's difficult to pin down as it's still very young and is always changing, but just wanted to see what everyone else's experiences were.
The iframe approach shown on the demo website is currently a proof of concept (see the warning on the page: "Note: this embed code is for demo purposes only. Do not use in production"). It is meant to show that the player can work in an iframe. This will expand over time, but the iframe's flexibility is currently limited to the parameters you can pass it.
In general, the approach you take depends on what you are trying to achieve (that is, on the level of flexibility you require). The current recommended approach is to use the JS and CSS method directly on your page.
Now, for the issues you are having, it would be great to understand what you are seeing.
1. For the iframe issue in Safari on Mac, what exactly are you seeing? I just tried the following on OS X Yosemite in Safari and it seems to work fine:
<iframe src="//aka.ms/azuremediaplayeriframe?url=%2F%2Famssamples.streaming.mediaservices.windows.net%2F91492735-c523-432b-ba01-faba6c2206a2%2FAzureMediaServicesPromo.ism%2Fmanifest&autoplay=false" name="azuremediaplayer" scrolling="no" frameborder="no" align="center" height="280px" width="500px" allowfullscreen></iframe>
2. Are you able to view the samples provided in the documentation? Here is the list of samples; specifically, you should look at the basic videotag sample. You will need to make sure a source is added to the video tag for the auto-detect to work.
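If the data-setup auto-setup still isn't firing, one fallback, as a rough sketch, is to initialize the player from JavaScript instead of the attribute (the option values here are illustrative, and the id matches the <video> tag above):
<script>
// Initialize Azure Media Player manually on the existing <video> element.
var myPlayer = amp("azuremediaplayer", { nativeControlsForTouch: false }, function () {
    // Ready callback: attach the stream once the player is initialized.
    this.src([{ src: "[MANIFEST URL HERE]", type: "application/vnd.ms-sstr+xml" }]);
});
</script>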
If you are still having issues, please reach out to ampinfo@microsoft.com.
Related
I am trying to figure out a way to embed a URL that serves an audio stream, using either an iframe or whatever the most cross-browser-compatible frame would be. I would like it to autoplay on mobile devices, which I assume would be no issue since it's already streaming all the time anyway.
My stream URL is:
http://67.212.165.106:8028/stream
Thanks for any assistance with this. I tried to enclose this URL in an iframe, but it doesn't work.
Here's where I have that:
http://radio.baseballpodcasts.net/IframePlayer.html
You're linking directly to a stream so there's nothing to iframe. What you need is an <audio> element.
<audio src="http://67.212.165.106:8028/stream" controls preload="none" autoplay></audio>
Note that autoplay does not normally work; modern browsers block it until a user has interacted with your site. If users come to your site and click play often enough, the browser may eventually allow it.
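What does reliably work is starting playback from a user gesture. A minimal sketch (the element ids are made up for the example):
<audio id="radio" src="http://67.212.165.106:8028/stream" controls preload="none"></audio>
<button id="listen">Listen live</button>
<script>
// Browsers allow play() inside a click handler, so wire the button up.
document.getElementById("listen").addEventListener("click", function () {
    document.getElementById("radio").play().catch(function (err) {
        console.log("Playback was blocked:", err);
    });
});
</script>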
I have a page that shows a merchant calendar:
https://staggingv2.tappio.me/c/sami/Tappio-Meeting
Now I want to build functionality that allows a merchant to embed this page on their own website using a script. Calendly provides the same feature, with the snippet shown below, and I want to implement this feature on my website as well.
I have very little knowledge of webpack config. I have gone through this, but there is not much guidance in that answer. I know I have to build this page to the public directory, but how will I achieve this for just a single page?
Can anyone point me toward a way to achieve this functionality?
<!-- Calendly inline widget begin -->
<div class="calendly-inline-widget" data-url="https://calendly.com/s-m-sami125" style="min-width:320px;height:630px;"></div>
<script type="text/javascript" src="https://assets.calendly.com/assets/external/widget.js" async></script>
<!-- Calendly inline widget end -->
This is the page that gets embedded in an external website just by the above script:
https://calendly.com/s-m-sami125/test?month=2022-12
You can see that in sandbox
https://codesandbox.io/s/eloquent-chaplygin-lxssxo?file=/index.html
I think this is not a webpack question. At first glance, most developers would recommend an iframe. You can simply generate the calendar page on your Next.js server without any menu and frame, then share the JS code with your merchants so they can embed it (a sketch of such a script follows below). If you choose this route, you may face problems with design (the target page may have a different design than your page) and with Cross-Origin Resource Sharing (CORS). CORS can be solved if you have admin rights on your server (I assume you do), but if you allow access from every origin, that could be a security problem.
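For illustration, a rough sketch of what that shareable snippet and script could look like, modeled on Calendly's widget (the class name, script URL, and sizing are all assumptions, not an existing Tappio API).
What the merchant pastes into their page:
<div class="tappio-inline-widget" data-url="https://staggingv2.tappio.me/c/sami/Tappio-Meeting"></div>
<script src="https://staggingv2.tappio.me/embed.js" async></script>
And embed.js, which replaces each placeholder div with an iframe pointing at its data-url:
function mountTappioWidgets() {
    document.querySelectorAll(".tappio-inline-widget:not([data-mounted])").forEach(function (el) {
        var frame = document.createElement("iframe");
        frame.src = el.getAttribute("data-url");
        frame.style.cssText = "width:100%;min-width:320px;height:630px;border:none";
        el.setAttribute("data-mounted", "true"); // avoid double-mounting if the script runs twice
        el.appendChild(frame);
    });
}
// The script is loaded async, so the DOM may or may not be parsed yet.
if (document.readyState === "loading") {
    document.addEventListener("DOMContentLoaded", mountTappioWidgets);
} else {
    mountTappioWidgets();
}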
I think providing an API is a better solution. You can build an API in Next.js easily, and your merchants can display the data however they want, in any design; calling an API is easy from any framework nowadays.
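As a rough sketch, such a route could look like this (the file path, helper, and response shape are assumptions, not your actual data model):
// pages/api/calendar/[merchant].js -- hypothetical Next.js API route
export default async function handler(req, res) {
    const { merchant } = req.query;
    // In a real app this would query your database for the merchant's slots.
    const slots = await fetchAvailableSlots(merchant); // hypothetical helper
    // Allow merchant sites on other origins to call this endpoint.
    res.setHeader("Access-Control-Allow-Origin", "*");
    res.status(200).json({ merchant, slots });
}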
What I am trying to achieve is to render the DOM of an HTML page with animations and JavaScript on a Node.js server, then convert that into a stream which can be viewed client-side in an HTML page. This would keep the content of the HTML page and its data where it is secure, and only allow the page to be viewed (for this use case, people don't need to interact with the page; only the server can, through code, and people just view it).
For context, this is part of a NW.js application, which is hosting the Node Server. So there is the possibility of rendering the HTML in a hidden window; however, that is one step closer to not-secure because it would be a window technically accessible by the desktop (I think).
Some of my questions for you all are:
What is the best method of sending video from servers to clients? I have come across WebRTC, which actually has a screen-sharing solution; however, I don't believe that works for rendering the HTML page headless on a Node.js server.
Are there ways to render the page (and then convert the DOM into a video stream) on a Node.js server?
If I am simply barking up a tree that doesn't exist, and what I am questioning isn't feasible: are there any completely secure ways of simply displaying a web page? Because from my inexperienced research, you can't truly hide anything and can only obfuscate the HTML & CSS & some JavaScript.
The server needs to be able to edit colors, icons, text, positioning and more of the elements in the page live, and so I believe I can't just put the page & animations in a video file and play that on the client side. It needs to be editable on the fly.
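For what it's worth, one direction I've been sketching, completely unverified, is screenshotting the page in a headless browser and pushing frames to viewers over a WebSocket (puppeteer and ws are assumptions on my part, and the URL is a placeholder):
// Render the page headless, capture frames, broadcast them to viewers.
const puppeteer = require("puppeteer");
const WebSocket = require("ws");

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto("http://localhost:3000/secure-page"); // placeholder URL

    const wss = new WebSocket.Server({ port: 8080 });

    // Roughly 10 fps: screenshot the live page and send the JPEG bytes out.
    setInterval(async () => {
        const frame = await page.screenshot({ type: "jpeg", quality: 60 });
        wss.clients.forEach((client) => {
            if (client.readyState === WebSocket.OPEN) client.send(frame);
        });
    }, 100);
})();
On the client, each frame could be drawn into an <img> or canvas from the binary WebSocket message; it's frames rather than true video, but the DOM itself never leaves the server.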
Any thoughts you all have on this matter would be much appreciated. I apologize if this question isn't traditional in any way, this is part of me getting my bearings as a first time questioner on here. :D
I am currently beginning to look at creating Chrome Apps and have followed a few of the basic tutorials now. I am happy with the basics so far except for one thing.
All the sample code and tutorials only seem to have one HTML file in the package, but what if I want to take a web app I have that uses more than one HTML page and turn it into a Chrome App?
How do I get the Chrome App to change from index.html to another HTML page when I want to show some other content? I have tried using the standard HTML anchor tag with the target set to _blank or _self, but it only opens a URL on the internet in a browser rather than changing the page in my application.
I am not from a web development background, so am I missing something basic to do this?
The simplest version of what Vincent Scheib said:
index.html
...
<div id="screen1" style="display:block">
...
</div>
<div id="screen2" style="display:none">
...
</div>
main.js
...
// A navigational event happens:
document.getElementById("screen1").style.display = "none";
document.getElementById("screen2").style.display = "block";
...
Packaged apps intentionally do not support navigation. Apps are not in a browser; there is no concept of forward, back, or reload. Applications that do require the concept of navigation should use a user interface framework that supports that functionality, e.g. by manipulating the DOM, using CSS, or using iframes to animate and control the visibility of components of your app.
How can I check whether a certain page is being accessed by a crawler or a script that fires continuous requests?
I need to make sure that the site is only being accessed from a web browser.
Thanks.
This question is a great place to start:
Detecting 'stealth' web-crawlers
Original post:
Engineering a solution to this would take a bit of work.
I can think of three things to look for right off the bat:
One, the user agent. If the spider is Google or Bing or anything else legitimate, it will identify itself.
Two, if the spider is malicious, it will most likely emulate the headers of a normal browser. Fingerprint it: if it claims to be IE, use JavaScript to check for an ActiveX object.
Three, take note of what it's accessing and how regularly. If the content takes the average human X seconds to view, you can use that as a starting point when determining whether it's humanly possible to consume the data that fast. This is tricky; you'll most likely have to rely on cookies, since an IP can be shared by multiple users. A sketch of this timing check follows below.
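To make point three concrete, here is a minimal Node/Express sketch of the timing idea (Express, cookie-parser, and the one-second threshold are illustrative assumptions; the approach itself isn't tied to any stack):
const express = require("express");
const cookieParser = require("cookie-parser");
const crypto = require("crypto");

const app = express();
app.use(cookieParser());

const lastSeen = new Map(); // visitor id -> timestamp of previous request

app.use(function (req, res, next) {
    let id = req.cookies.visitorId;
    if (!id) {
        id = crypto.randomUUID();
        res.cookie("visitorId", id, { httpOnly: true });
    }
    const now = Date.now();
    const prev = lastSeen.get(id);
    lastSeen.set(id, now);
    // If an average page takes a human several seconds to read,
    // sub-second request intervals look automated.
    if (prev !== undefined && now - prev < 1000) {
        req.suspectedBot = true; // downstream handlers can throttle or challenge
    }
    next();
});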
You can use a robots.txt file to block access for crawlers, or you can use JavaScript to detect the browser agent and switch based on that. If I understood correctly, the first option is more appropriate, so:
User-agent: *
Disallow: /
Save that as robots.txt at the site root, and no well-behaved automated system should check your site; malicious crawlers, however, are free to ignore it.
I had a similar issue in my web application: I created some bulky data in the database for each user that browsed the site, and crawlers were causing loads of useless data to be created. However, I didn't want to deny access to crawlers, because I wanted my site indexed and found; I just wanted to avoid creating useless data and to reduce the time taken to crawl.
I solved the problem the following ways:
First, I used the HttpBrowserCapabilities.Crawler property from the .NET Framework (available since 2.0), which indicates whether the browser is a search engine web crawler. You can access it from anywhere in the code:
ASP.NET C# code behind:
bool isCrawler = HttpContext.Current.Request.Browser.Crawler;
ASP.NET HTML:
Is crawler? = <%=HttpContext.Current.Request.Browser.Crawler %>
ASP.NET Javascript:
<script type="text/javascript">
var isCrawler = <%=HttpContext.Current.Request.Browser.Crawler.ToString().ToLower() %>;
</script>
The problem with this approach is that it is not 100% reliable against unidentified or masked crawlers, but it may be useful in your case.
After that, I had to find a way to distinguish between automated robots (crawlers, screen scrapers, etc.) and humans, and I realised that the solution required some kind of interactivity, such as clicking a button. Some crawlers do process JavaScript, and they would obviously trigger the onclick event of a button element, but not of a non-interactive element such as a div. The following is the HTML/JavaScript code I used in my web application www.so-much-to-do.com to implement this feature:
<div
class="all rndCorner"
style="cursor:pointer;border-width:3px;border-style:groove;text-align:center;font-size:medium;font-weight:bold"
onclick="$TodoApp.$AddSampleTree()">
Please click here to create your own set of sample tasks to do
</div>
This approach has been working impeccably until now, although crawlers could be changed to be even more clever, maybe after reading this article :D