According to Google's documentation, we are asked to set up our own web server to host receiver.html.
Is there a default public Chromecast receiver that anyone can use?
I just need a very basic receiver, the same as the receiver.html provided in the CastSample, but I don't want to run my own server to host this simple file.
Thanks!
There is another simple way to do this. There is a site called pastehtml.com, where you can host a simple HTML page for free, and it stays up indefinitely. You can even edit it whenever you want. I just created mine, but I haven't yet submitted it to Google. I will keep you posted once I get an approval or rejection from Google on my device's whitelisting.
Chromecast receivers are tied to a URL, so I don't see how there could be a public one for a custom application, unless someone publishes a receiver that offers a service that your sender needs.
Bear in mind that the receiver needs to be publicly hosted on the web, but it doesn't need to be dynamically generated. You could, for example, upload your HTML/CSS/JS files to Amazon S3 and configure it to serve them very cheaply. There are probably other places where you can host files and have them served over HTTP for minimal cost.
I am trying to understand, at a high level, how a system like CoderPad works. Every time I use CoderPad to practice interviews with friends, it creates a session with a temporary link that both users can access to start the coding interview.
When someone goes to the homepage, they are served the standard HTML page/client for the homepage. When they create an interview session, they are served the HTML page/client for the coding pad, and there must also be a way for users to connect to the same session and for each session to be an isolated instance. I'm guessing that when each user uses the link, the server processes the request and, based on the link, sets up a streaming connection between the users so they can collaborate on a shared document and share video/voice.
My questions are:
- How exactly is the temporary link created, and how can it be created so fast?
- Is my understanding of how it works correct?
- Any topics to look into that could point me in the right direction would really help.
I got curious about this too, after an interview on CoderPad.io. I suspect the temporary links are just for the server to identify the session - not actual pages on the server. It's probably using WebSockets to communicate between the server and clients, broadcasting back to all users whenever the code changes (or on other events).
The coding pad page is the same static HTML. The contents and users are modified on the back-end, and only the results are shown - like in a chatroom.
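To make that concrete, here is a minimal sketch of the idea, not CoderPad's actual code: the "temporary link" is just a quick-to-generate random session ID, and a WebSocket server relays edits to everyone who joined that ID. The `ws` package and the URL shape are assumptions on my part.

```typescript
// Hedged sketch: a "pad" is nothing more than a random ID keying a broadcast group.
import { randomBytes } from "crypto";
import { WebSocketServer, WebSocket } from "ws";

// "Creating" a session is just generating a short random token for the URL,
// which is why the link can appear instantly.
function createSessionId(): string {
  return randomBytes(8).toString("hex"); // e.g. "9f2c4e1ab37d6054"
}

const sessions = new Map<string, Set<WebSocket>>();
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket, request) => {
  // Clients connect to ws://host/<sessionId>; the ID only keys the lookup below.
  const sessionId = (request.url ?? "/").slice(1);
  if (!sessions.has(sessionId)) sessions.set(sessionId, new Set());
  sessions.get(sessionId)!.add(socket);

  socket.on("message", (data) => {
    // Relay each edit to every other participant in the same session.
    for (const peer of sessions.get(sessionId)!) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(data.toString());
      }
    }
  });

  socket.on("close", () => sessions.get(sessionId)?.delete(socket));
});
```

Topics to look into: WebSockets, pub/sub, and (for the actual collaborative editing) operational transformation or CRDTs, which sit on top of a broadcast layer like this.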
Hope this helps.
I'm still new to AWS and just following the documentation and asking questions here when I get stuck. Please excuse me if this question sounds really noobish.
So far, I've deployed the following:
EB to deploy my REST API
RDS to deploy my PostgreSQL database
Lambda functions to handle things like authentication & sending JWTs, uploading images to S3, etc.
I have my basic back end deployed (no caching yet; I just started learning about Redis; just the bare bones so far).
I'm still developing my front end and haven't even thought about how I will deploy it yet (probably another deployment on EB, since I am using universal React). I am just developing it locally, but using my production env variables now, so I am hitting my deployed API, etc.
One of the MAJOR things I have no idea how to do is detecting incoming requests from the client side to get the client's location by IP address. This is so that I can return the INITIAL results for your general location, just like Yelp, Foursquare, etc. do when you go to their sites.
For now, I am just building a web app for desktop, so I only want to worry about getting the IP address to find the general area of the user. My use case is similar to other sites you might have used that provide an INITIAL result set for things in your area (think Foursquare or Yelp).
Here are my questions:
What would be a good way to do this? I'm thinking of handling this in my front-end universal React deployment, since it will be a Node server with rendered-page caching. Is this a terrible idea? It would work something like this:
(1) request from client comes in
(2) get the IP from the request and look up the IP's location using some service (still not sure what I'm going to use; I have found a few services plus a Node.js library called node-geoip). Preferably, I can get the ZIP code, since I am trying to avoid doing so many queries by unique location in my database; instead I'd return results for that ZIP code, and the front end would show an initial map with the initial results in that ZIP code.
(3) return to the client the cached rendered page with those location params if it exists; otherwise render it, send it, and cache it.
Is the above a really dumb idea? Maybe you have already done something like this, and could share your wisdom :)
Is there an AWS service which can already handle something like this for me? Perhaps there's some functionality which can already do this.
Thanks.
AGAIN - I apologize if this is long-winded. I don't know anyone in real life who can help me, and I feel alone :(. I appreciate the help you guys can provide.
There are two parts to this:
Getting the user's IP address. You mentioned you're using 'EB' - I presume you mean AWS ELB (Elastic Load Balancer)? If so, you need to read the X-Forwarded-For HTTP header in your app code, since otherwise what you'll really detect is the ELB's IP address. X-Forwarded-For contains the user's real IP - or rather, the IP of the end connection being made (there's no telling whether it's really a VPN, a proxy, or something else, but it's as far as you can get with an IP).
Querying an IP database that can turn the address into a location object. There are tons of libraries for this. Assuming you're using Node, you can use node-geoip as you mentioned. Or you can just search 'geoip service' on Google and find managed services, like Telize on Mashape. If you don't want to manage the DB lookup yourself or keep it up to date, a managed service would help.
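As a rough sketch of both steps together, assuming an Express-based Node server behind the load balancer and the geoip-lite package (which I believe is the npm module behind node-geoip); neither is necessarily what you're running:

```typescript
// Sketch only: read the client IP from X-Forwarded-For, then look it up locally.
import express from "express";
import geoip from "geoip-lite"; // assumption: the npm package for node-geoip

const app = express();

app.use((req, res, next) => {
  // Behind a load balancer, X-Forwarded-For holds "client, proxy1, proxy2, ...";
  // the first entry is the address you actually want.
  const header = req.headers["x-forwarded-for"];
  const raw = Array.isArray(header) ? header[0] : header ?? req.socket.remoteAddress ?? "";
  const ip = raw.split(",")[0].trim();

  // geoip-lite returns { country, region, city, zip, ll: [lat, lon], ... } or null.
  const geo = geoip.lookup(ip);
  res.locals.location = geo ? { zip: geo.zip, city: geo.city, ll: geo.ll } : null;
  next();
});

app.get("/", (req, res) => {
  // Hand the location to whatever does the server render / result query.
  res.send(`Results near ${res.locals.location?.zip ?? "your area"}`);
});

app.listen(3000);
```

If you go with a managed geoip service instead, the middleware body simply becomes an HTTP call to that service.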
In either case, it's likely that you'll be doing asynchronous look-ups. In that case, you might want to use async/await to get the user's full location object before injecting it into your React props and ultimately rendering it as an HTML string that's sent down to the client.
You could also use a library like redial to decorate your components with data requirements, and return a Promise you can await on to know when you're okay to render.
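A hedged sketch of that flow, where `App` and `lookupLocation` are made-up stand-ins for your real component and whichever lookup you settle on:

```tsx
import React from "react";
import { renderToString } from "react-dom/server";

// Hypothetical placeholders for your real component and geo lookup.
const App = ({ initialLocation }: { initialLocation: { zip?: string } | null }) => (
  <div>Results near {initialLocation?.zip ?? "you"}</div>
);
const lookupLocation = async (ip: string) => ({ zip: "10001" }); // stub

async function renderPage(ip: string): Promise<string> {
  // Wait for the async lookup before rendering, so the HTML string sent to
  // the client already contains location-aware results.
  const location = await lookupLocation(ip);
  const markup = renderToString(<App initialLocation={location} />);
  return `<!doctype html><html><body><div id="root">${markup}</div></body></html>`;
}
```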
Since you probably want to enable client routing too (i.e. where the user can click a route in their browser and the server isn't touched at all), you will probably need some way to retrieve the results for that IP address even when the server isn't involved in rendering that route.
For that, you could write a REST service that retrieves the results. Or write a GraphQL back-end that gets the data. It doesn't matter how you write it, since the server will have access to the X-Forwarded-For header and can use that to retrieve the results and send back location-aware data.
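For instance, a tiny location endpoint might look like the sketch below; the /api/location route, Express, and geoip-lite are all assumptions:

```typescript
import express from "express";
import geoip from "geoip-lite"; // assumption, as above

const api = express();

api.get("/api/location", (req, res) => {
  // Same X-Forwarded-For handling as the server-rendered path.
  const header = req.headers["x-forwarded-for"];
  const raw = Array.isArray(header) ? header[0] : header ?? req.socket.remoteAddress ?? "";
  const ip = raw.split(",")[0].trim();
  res.json(geoip.lookup(ip) ?? {});
});

api.listen(3001);
```

On the client, a plain `fetch('/api/location')` during a route transition then returns the same shape of data the server-rendered page started with.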
FYI, I'm writing a React starter kit (called ReactNow) that uses rxjs for handling async streams. It's not ready yet, but it might help you figure out the code layout that would offer a balanced mix between rendering on the server, and writing universal code that requires some heavy lifting from the server.
Searched Google and SO - no luck.
Just got this message in Azure for 3 CDN endpoints.
There seems to be no way to know what is going on without MS support. It is a test account, and I do not recall setting this. I have been through similar obfuscated MS error messages only to discover that Azure had crashed.
What does it mean?
This isn't really a direct answer, but could help with the general problem of "what happens if the CDN goes down?".
There is a recent development called the "Progressive Web App".
Basically, unless it's served from localhost, everything has to be over HTTPS, but the script is cached as a local application in your browser.
When your app makes requests to the registered domain, these are intercepted by a callback you put in your serviceWorker.js, so you can even cache application data locally and sync the local data occasionally with the server (or on receive events if you're using WebSockets).
Since the Service Worker intercepts REST calls to the registered domain, this in theory makes it fairly easy to add to just about any framework.
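As a rough illustration (the cache name and strategy are made up, and a real worker would typically only cache GET requests), a fetch handler in serviceWorker.js might look like this:

```typescript
// Sketch of a cache-first fetch handler inside a service worker.
const CACHE = "app-data-v1"; // hypothetical cache name

self.addEventListener("fetch", (event: any) => {
  event.respondWith(
    caches.open(CACHE).then(async (cache) => {
      const cached = await cache.match(event.request);
      if (cached) return cached;                   // answer from the local cache
      const response = await fetch(event.request); // otherwise go to the network
      cache.put(event.request, response.clone());  // and remember it for next time
      return response;
    })
  );
});
```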
https://developers.google.com/web/fundamentals/getting-started/codelabs/your-first-pwapp/
Sometimes there is a (global) problem with the CDN. It has happened before.
You can check the Azure CDN status on this page: https://azure.microsoft.com/en-us/status/
At this moment everything looks good; are you still having problems?
I have a custom HTTP server written in C# that provides specialized application services in my application domain, such as rendering a dashboard with graphing. I am not using IIS; the server is completely written in C# using the HttpListener class to bind directly to the port(s) that it serves.
I would like to use ServiceStack to add a RESTful API for pulling data into browser-side code to improve the user experience, using the same web server (so it can bind to the same HTTPS port and share my security module).
All of the example code I have found on the web assumes either that IIS owns the web port or that ServiceStack owns the web port. In my use case, my custom application owns the web port and wants to delegate certain HTTP requests to ServiceStack.
Is there a simple way to pass a single HttpListenerContext off to ServiceStack to handle? I could not find documentation or examples using Google, since all of the keywords seem to be too common.
In the context of my custom code, I will have already evaluated the URL and determined that the user has permission to access it, and I just need the HttpListenerResponse generated.
Simple sample code or a pointer to a web article would be fantastic.
We've never tested doing this before, but the AppHostHttpListenerBase.ProcessRequest entry point accepts an HttpListenerContext.
You would still need to call new AppHost().Init() but you shouldn't need to start it.
I'm thinking about exploring the idea of having our client software run as a service on a high port and listen for simple HTTP GET requests from 127.0.0.1. The theory is that I would be able to access this service via JS from a web page served from my site.
1) User installs client software that installs itself as a service and waits for authenticated requests on 127.0.0.1:8080
2) When the user hits my home page, JS on the page makes an XMLHttpRequest to 127.0.0.1:8080 and asks for the status
3) The home page then makes another JS request back to my web server, sending the status that it received.
This would allow my users to upload/download and edit files on a USB-attached device in real time from a browser. Polling could be the fallback method, which is close to what we do today.
Has anyone done this and what potential pitfalls are there? Will this even work?
I can't see any potential pitfalls. I do have a couple of points, however.
1/ You probably want to make sure your service only accepts incoming connections from the local machine (127.0.0.1); see the sketch after these points. Otherwise, anyone could look at your JavaScript and figure out that it's talking to [your-ip]:8080. They could then try that themselves from a remote site (security hole).
2/ I wouldn't use port 8080 as it's commonly used for other things (alternate HTTP servers, etc.). Make it configurable and choose a nice high random-type value.
3/ I'm not sure what you're trying to do with point 3, but I think you're trying to send the status back to the user. In that case, why wouldn't the JavaScript on your home page just get the status in a single session and output/update the HTML presented to the user? Your "another JS request back to my web server" doesn't make sense to me.
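For points 1 and 2, here is a sketch of what the binding might look like if the local service happened to be written in Node (the env variable name and default port are made up):

```typescript
import http from "http";

// Configurable high port rather than a hard-coded 8080.
const PORT = Number(process.env.LOCAL_SERVICE_PORT ?? 47631);

http
  .createServer((req, res) => {
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify({ status: "ok" })); // the status the page's JS asks for
  })
  .listen(PORT, "127.0.0.1"); // loopback only: remote hosts can't connect
```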
You may not be able to make an XMLHttpRequest to 127.0.0.1, as XMLHttpRequest is usually limited to the same domain the main content is being served from. I'm not sure if this restriction applies when the server is on the client's machine. That being said, you could still create a <script> tag whose src points to 127.0.0.1 and have the local server return some JavaScript to run. If you only need a simple response, this could work well.
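A rough sketch of that <script>-tag (JSONP-style) approach; the callback and endpoint names are made up, and the local service would have to respond with JavaScript that calls the named callback:

```typescript
// Browser side: ask the local service for its status via a dynamic script tag.
function requestLocalStatus(): void {
  (window as any).onLocalStatus = (status: unknown) => {
    console.log("local service says:", status);
    // ...then forward the status to your web server if needed.
  };
  const tag = document.createElement("script");
  // The local service responds with something like: onLocalStatus({"ok":true});
  tag.src = "http://127.0.0.1:8080/status?callback=onLocalStatus";
  document.head.appendChild(tag);
}
```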
I think it is much better for you to avoid implementing application logic in JavaScript and HTML. Once the user clicks a button on the web page, the JavaScript should send a request to your service and let it do the rest of the work.
You could have problems with step 1 (Client installs itself) depending on your target user base.
You will need a customised install for each supported environment (Win2K, Vista, Linux, Mac OS 9/10, etc.).
If your user is on a locked-down work PC, this simply won't be allowed.
To some users this might look distressingly similar to a trojan unless you explicitly point out that you will be installing software that runs as a service.
You didn't mention an uninstall procedure. Users resent Adobe-like software that installs itself and provides no sensible uninstall options.
Otherwise the approach is sound, and there are a couple of commercial products out there that use exactly this approach!