How can I increase Google Earth plug-in cache for loading overlays? - kml

I have a lot of hi-res ground overlays on the plug-in version of GE.
Is there any way I can increase the amount of memory the Google Earth plug-in uses as a cache when loading these image overlays?

Increasing the memory available to the plugin is not the correct way to go about this. The thing is (ultimately) running in a web browser, and if the KML data is not stored locally, the user has to download your entire overlay all at once, which could take a while.
The correct approach is to break your large ground overlay into several small tiles that can be loaded individually when needed. For example, if your overlay covers an entire state, don't load the tiles that are outside the user's view. Google calls overlays that follow this paradigm "Super Overlays".
More information on Super Overlays and how to use them with Google Earth is available here.
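The basic building block of a Super Overlay is a NetworkLink guarded by a Region, so a tile's KML (and its image) is only fetched once that region is on screen at sufficient size. A minimal sketch — the tile bounds, pixel threshold, and file path below are placeholders, not values from your data:

```xml
<NetworkLink>
  <Region>
    <LatLonAltBox>
      <north>41.0</north><south>40.0</south>
      <east>-111.0</east><west>-112.0</west>
    </LatLonAltBox>
    <Lod>
      <!-- only fetch once the region occupies at least 128 px on screen -->
      <minLodPixels>128</minLodPixels>
      <maxLodPixels>-1</maxLodPixels>
    </Lod>
  </Region>
  <Link>
    <href>tiles/0/0_0.kml</href>
    <viewRefreshMode>onRegion</viewRefreshMode>
  </Link>
</NetworkLink>
```

Each linked tile KML can in turn contain its own GroundOverlay plus four finer-grained NetworkLinks, giving the quadtree structure that tools like gdal2tiles can generate for you.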

AFAIK there is no way within the API itself to increase the cache size available to the plugin. Indeed I don't think the plugin has any documented configurable system settings at all.
The only possible exceptions I have come across are various undocumented registry settings on various Windows versions that allow some things to be configured (such as forcing OpenGL or DirectX rendering), but other than that I don't think there is any way to do this, sorry.

Why do browsers reveal fingerprinting information?

According to https://coveryourtracks.eff.org/, my Chrome and Firefox browsers disclose seemingly unnecessary information. Why do those (and possibly other) browsers readily reveal any of that information?
Fingerprinting information that has nothing to do with serving:
Hardware concurrency
Device memory
Platform
Fingerprinting information that is needed only rarely (if at all) for serving:
User agent: browser and version
HTTP_ACCEPT headers: system language
List of browser plugins
Timezone
Screen size and color depth; canvas hash
AudioContext
WebGL vendor and renderer; WebGL hash
I think you're mischaracterising this. Browsers don't deliberately reveal fingerprintable information; they provide information that is useful in lots of different contexts and that also happens to contribute to fingerprinting.
Whether these are relevant for serving can't be determined from the client end. Historically, user agent and screen size/depth were critical in the days before the heavy dependence we now see on Javascript to achieve similar things, when differences between browsers and client platforms were much greater, and many sites are still built that way. For example, my bank's web site still has obvious, visible workarounds for IE6. Prior to CSS media queries (around 2009), many things could only be achieved by telling the server about them, and many of those browsers are still with us.
Knowing whether a browser supports a particular plug-in can also be critical - for example if I want to render a PDF in an iframe, your browser had better be able to render it, and it's useful for a server to know if it can before trying to use it.
You left fonts and battery level off your list. These are also very often part of a fingerprint, but at the same time useful for sites in deciding what to serve to a client.
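To see how little deliberate "revealing" is involved, note that most of the attributes you list are ordinary properties any page script can read. A sketch — the property names are the real Web APIs, but the fallback object is only there so the function is illustratable outside a browser:

```javascript
// Read commonly fingerprinted attributes; each is a plain Web API
// property exposed for layout/capability reasons, not for tracking.
function collectSignals(nav = (typeof navigator !== 'undefined' ? navigator : {})) {
  return {
    hardwareConcurrency: nav.hardwareConcurrency ?? null, // logical CPU cores
    deviceMemory: nav.deviceMemory ?? null,               // approximate RAM (Chrome)
    platform: nav.platform ?? null,
    userAgent: nav.userAgent ?? null,
    pluginCount: nav.plugins ? nav.plugins.length : null,
    timezone: typeof Intl !== 'undefined'
      ? Intl.DateTimeFormat().resolvedOptions().timeZone
      : null,
  };
}
```

None of these reads requires a permission prompt, which is exactly why their combination fingerprints so well.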
Some clients deliberately add noise to these values. For example whether a battery is at 10.4% or 12.6% doesn't really make much difference – it's quite low, so adding 5% noise to this information retains its utility while reducing identifiability.
Precisely how much noise you need to add to preserve anonymity, or at least provide reasonable (and measurable) ambiguity of identity, is covered by the concept of differential privacy, which I'll leave you to read up on.
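A toy version of that noise idea — the uniform distribution, clipping range, and amplitude here are arbitrary illustrative choices, not what any browser actually ships:

```javascript
// Add bounded uniform noise to a reported value, clipped to [0, 1].
// E.g. a battery level of 0.104 with amplitude 0.05 is reported
// somewhere in [0.054, 0.154] -- still "quite low", less identifying.
function noisyReading(value, amplitude) {
  const noise = (Math.random() * 2 - 1) * amplitude; // uniform in [-amplitude, amplitude)
  return Math.min(1, Math.max(0, value + noise));
}
```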

Prevent screen recording

I am working on an educational e-commerce website, in which the user needs to authenticate and then videos on particular topics become available. How can I prevent my videos from being screen-recorded?
Different OS's and applications support different mechanisms to try to tackle this - for example:
Microsoft Edge on Windows 10 uses the integrated 'Protected Media Path' for encrypted content, which will stop simple screenshots working
Website and web app developers may use a number of CSS 'tricks' to achieve a similar effect, although these can usually be worked around with standard web developer and debugging tools.
Mobile video typically uses protected memory for encrypted content which will usually give a black screen on capture.
As mentioned in comments and other answers, these are all 'barriers', but they don't make it impossible to copy the content - the best example being pointing a camera at the screen and copying that way.
The idea is generally to make it hard enough compared to the value of the content so that people are not prepared to invest the time to work around your barriers.
It is not possible, for a variety of reasons:
There is no Web API for that.
Even if there was, it would be possible to reverse engineer the browser/OS to allow for screen recording.
Even if, for some reason, you couldn't access and modify the software running on the computer, you could connect the computer to a capture card instead of your monitor.
And if you also couldn't do that, you could just point a camera at the screen and start recording.

Kiosk program (web browser), deployment struggles

Okay, here's a complicated one I've been breaking my head over all week.
I'm creating a self service system, which allows people to identify themselves by barcode or by smartcard, and then perform an arbitrary action. I run a Tomcat application container locally on each machine to serve up the pages and connect to external resources that are required. It also allows me to serve webpages which I then can use to display content on the screen.
I chose HTML as a display technology because it gives a lot of freedom as to how things could look. The program also involves a lot of Javascript to interact with the customer and hardware (through a RESTful API). I picked Javascript because it's a natural complement to HTML and is supported by all modern browsers.
Currently this system is being tested at a number of sites, and everything seems to work okay. I'm running it in Chrome's kiosk mode, which serves me well, but there are a number of downsides. Here is where the problems start. ;-)
First of all, I am petrified that Chrome's auto-update will eventually break my Javascript code. Secondly, I run a small Chrome plugin to read smartcard numbers, and every time the workstation is shut down incorrectly, Chrome's user profile becomes corrupted and the extension needs to be set up again. I could easily fix the first issue by turning off auto-update, but that complicates my installation procedure.
Actually, having to install any browser complicates my installation procedure.
I did consider using Internet Explorer because it's basically everywhere, but with three dominant versions out there I'm not sure it's a good approach. My Javascript is quite complex, and making it work on older versions will be a pain - not to mention having to write an ActiveX component for my smartcards.
This is why I set out to make a small browser wrapper that runs in full screen and can read smartcard numbers. This also has downsides: I use Qt, and Qt's QtWebKit weighs a hefty 10 MB and adds another set of dependencies to my application.
It really feels like I have to pick from three options that all have downsides. It really is something I should have investigated before I wrote the entire program. I guess it is a lesson learnt well.
On to the questions:
Is there a pain free way out of this situation? (probably not)
Is there a browser I can depend on without adding tens of megabytes to my project?
Is there another alternative you could suggest?
If you do not see another way out, which option would you pick?

What user-information is available to code running in browsers?

I recently had an argument with someone regarding the ability of a website to take screenshots on the user's machine. He argued that using a GUI-program to simulate clicking a mouse really fast to win a simple flash game could theoretically be detected (if the site cared enough) by logging abnormally high scores and taking a screenshot of those players' desktops for moderator review. I argued that since all website code runs within the browser, it cannot step outside the system to take such a screenshot.
This segued into a more general discussion of the capabilities of websites, through Javascript, Flash, or whatever other method (acceptable or nefarious), to make that step outside of the system. We agreed that at minimum some things were grabbable: the OS, the size of the user's full desktop. But we definitely couldn't agree on how sandboxed in-browser code was. All in all he gave website code way more credit than I did.
So, who's right? Can websites take desktop screenshots? Can they enumerate all your open windows? What else can (or can't) they do? Clearly any such code would have to be OS-specific, but imagine an ambitious site willing to write the code to target multiple OSes and systems.
Googling this led me to many red herrings with relatively little good information, so I decided to ask here.
Generally speaking, the security model of browsers is supposed to keep javascript code completely contained within its sandbox. Anything about the local machine that isn't reflected in the properties of the window object and its children is inaccessible.
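As an illustration, everything a page script can learn about the machine flows through properties like these — the fake object in the usage below merely stands in for a real browser `window`:

```javascript
// What sandboxed page script can see: properties of window and its
// children (screen, navigator). Anything not reflected there -- other
// applications' windows, desktop contents, files -- is unreachable.
function visibleEnvironment(win) {
  return {
    desktopSize: [win.screen.width, win.screen.height], // full desktop resolution
    colorDepth: win.screen.colorDepth,
    osHint: win.navigator.platform, // coarse OS identifier only
  };
}
```

So the "OS and desktop size" you both agreed on is exactly the boundary: those are reflected in `window`, a desktop screenshot is not.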
Plugins, on the other hand, have free rein. They're installed by the user and can access anything the user can access. That's why they're able to access your webcam, upload files, do virus scans, etc. They're also able to expose APIs to javascript code, which pokes a hole in the javascript sandbox and gives javascript code some external access. That's how tools like Phonegap give javascript code in web apps access to phone hardware (GPS, orientation, camera, etc.).

What would be the best suited language/technology in this scenario?

I'm about to develop a small system to display dynamic information in public spaces across an entire building (similar to Flight Information Displays on an Airport).
The system will have two main components:
a back-office for managing the information displayed
a front-end which actually displays the information
The back-office component is covered: it's a simple crud application with a web interface, accessed through the intranet.
I have to decide which language/technology to use for the front-end. The purpose of this component is only to access the information stored in the back-office and display it on a big LCD monitor. No input is expected - just display the information, paging once in a while as all the information won't fit on the screen at once.
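To be concrete, whatever technology I pick, the behaviour I need is tiny: poll the back-office, split the rows into pages, and rotate on a timer. In JavaScript it would amount to something like this (the endpoint name and page size are made up):

```javascript
// Split rows into fixed-size pages for a rotating display.
function paginate(rows, pageSize) {
  const pages = [];
  for (let i = 0; i < rows.length; i += pageSize) {
    pages.push(rows.slice(i, i + pageSize));
  }
  return pages;
}

// In a browser this would be driven by two timers, roughly:
//   setInterval(async () => {
//     const rows = await (await fetch('/api/display')).json(); // hypothetical endpoint
//     pages = paginate(rows, 10);
//   }, 60000);
// with a faster setInterval advancing through `pages` on screen.
```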
I'm thinking of a Flash movie which somehow accesses the back-office data through the local intranet to get the information to display.
Can you think of a better option for the front-end? Why?
Other technologies that came across my mind are:
Silverlight
Flex
JavaFX
I've had pretty good success using Silverlight and C# to access and display back-end data, running it in out-of-browser mode to avoid the display of browser chrome. WPF might also work in your situation instead of Silverlight, but Silverlight seems to be the target for most of Microsoft's recent tooling efforts (via WCF RIA Services).
The advantages for me were the fact that my company largely already has a Microsoft-based infrastructure and we already owned the tools. Up-front costs can be an issue if you go the Redmond Way. Also Silverlight and WPF have fairly healthy learning curves, though there are tons of resources and tutorials available.