Does the browser use computer resources only for the viewport?

I was wondering: if I created a website with, say, a carousel of images that are x-translated outside the viewport, will that affect the page's loading speed, and will it use other computer resources? Is it bad practice that should be avoided? Does the browser focus only on the viewport while the rest of the website sits in some kind of standby mode?
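To make the scenario concrete, the kind of carousel being asked about might look like this (a minimal sketch; the slide width, class name, and markup are assumptions, not taken from the question):

```javascript
// Minimal sketch of an x-translated carousel: every slide stays in the
// DOM, and off-viewport slides are simply shifted out of view with a
// CSS transform - the situation the question asks about.

// Pure helper: the translateX offset (in px) needed to show slide `index`
// when each slide is `slideWidth` px wide.
function slideOffset(index, slideWidth) {
  return -index * slideWidth;
}

// DOM wiring, guarded so the helper above also works outside a browser.
if (typeof document !== "undefined") {
  const track = document.querySelector(".carousel-track"); // assumed markup
  let current = 0;
  function show(index) {
    current = index;
    track.style.transform = `translateX(${slideOffset(index, 300)}px)`;
  }
  show(0);
}
```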

Related

Prevent from screen recording

I am working on an educational e-commerce website, in which the user needs to authenticate and then videos on particular topics become available. How can I prevent my videos from being screen-recorded?
Different OS's and applications support different mechanisms to try to tackle this - for example:
Microsoft Edge on Windows 10 uses integrated 'Protected Media Path' for encrypted content which will stop simple screenshots working
Website and web app developers may use a number of CSS 'tricks' to achieve a similar effect, although these can usually be worked around with standard web developer and debug tools.
Mobile video typically uses protected memory for encrypted content which will usually give a black screen on capture.
As mentioned in comments and other answers, these are all 'barriers', but they don't make it impossible to copy the content - the best example being pointing a camera at the screen and copying it that way.
The idea is generally to make it hard enough compared to the value of the content so that people are not prepared to invest the time to work around your barriers.
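The client-side 'tricks' mentioned above are typically along these lines; a minimal sketch (the key name and the specific deterrents are assumptions, and everything here is trivially bypassed with developer tools, which is the answer's point):

```javascript
// Sketch of the kind of client-side "barrier" the answer describes.
// None of this prevents capture; it only raises the effort slightly.

// Pure helper: does this keyboard event plausibly come from a
// screenshot shortcut? (Only PrintScreen is checked here.)
function looksLikeCaptureKey(event) {
  return event.key === "PrintScreen";
}

if (typeof document !== "undefined") {
  const video = document.querySelector("video"); // assumed markup

  // Block the context menu on the video (easily bypassed).
  video.addEventListener("contextmenu", (e) => e.preventDefault());

  // Briefly hide the video when PrintScreen is released.
  document.addEventListener("keyup", (e) => {
    if (looksLikeCaptureKey(e)) {
      video.style.visibility = "hidden";
      setTimeout(() => { video.style.visibility = "visible"; }, 500);
    }
  });
}
```

Real protection for video content relies on DRM paths like the 'Protected Media Path' mentioned above, not on script-level barriers like this.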
It is not possible, for a variety of reasons:
There is no Web API for that.
Even if there were, it would be possible to reverse engineer the browser/OS to allow for screen recording.
Even if, for some reason, you couldn't access and modify the software running on the computer, you could connect the computer to a capture card instead of your monitor.
And if you also couldn't do that, you could just point a camera at the screen and start recording.

Implement Ask-to-Load Image to Website

How can I make it so that when a user visits a website, the images don't load until the user presses a button on an image to load it? Initially the images are blurred. This is the same as WhatsApp's optional setting (where we must download an image first in order to view it clearly) and Twitter Lite's image loading (which is what I mean).
The advantage is especially for users who have limited data: loading all the images, including GIFs, might drain their data allowance.
You can create a very light-weight (possibly in KBs and optionally blur) copy of the image at the time of upload. You can load that first and then on demand, you can use the original one.
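The approach above can be sketched roughly like this (the `data-full-src` attribute name and the `blurred` CSS class are assumptions):

```javascript
// Pure helper: given the placeholder element's data attributes, return
// the URL the full-quality image should be loaded from.
function fullImageUrl(dataset) {
  return dataset.fullSrc; // from a data-full-src attribute (assumed name)
}

if (typeof document !== "undefined") {
  // Each <img> starts with src pointing at a tiny blurred thumbnail
  // (the lightweight copy made at upload time) and carries the real
  // image URL in data-full-src.
  document.querySelectorAll("img[data-full-src]").forEach((img) => {
    img.classList.add("blurred"); // assumed class with filter: blur(...)
    img.addEventListener("click", () => {
      img.src = fullImageUrl(img.dataset);
      img.classList.remove("blurred");
    }, { once: true });
  });
}
```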

How can I increase Google Earth plug-in cache for loading overlays?

I have a lot of hi-res ground overlays on the plug-in version of GE.
Is there any way I can increase the amount of memory the Google Earth plug-in uses as cache for loading these image overlays?
Increasing the memory to the plugin is not the correct way to go about this. The thing is (ultimately) running in a web browser. If the KML data is not stored locally, then the user has to download your entire overlay all at once, which could take a while.
The correct approach is to break your large ground overlay into several small tiles that can be loaded individually when needed: e.g., if your overlay covers an entire state, don't load the tiles that are out of the user's view. Google calls overlays that follow this paradigm "Super Overlays".
More information on Super Overlays and how to use them with Google Earth is available here.
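The core idea behind a Super Overlay is recursively splitting one large overlay into quadrant tiles that are only fetched when in view. The basic subdivision step can be sketched as follows (the bounding-box representation here is an assumption, not the KML schema itself):

```javascript
// Split a ground overlay's bounding box (in degrees) into four quadrant
// tiles - the basic subdivision step behind a "Super Overlay".
function splitBounds(box) {
  const midLat = (box.north + box.south) / 2;
  const midLon = (box.east + box.west) / 2;
  return [
    { north: box.north, south: midLat, east: midLon, west: box.west }, // NW
    { north: box.north, south: midLat, east: box.east, west: midLon }, // NE
    { north: midLat, south: box.south, east: midLon, west: box.west }, // SW
    { north: midLat, south: box.south, east: box.east, west: midLon }, // SE
  ];
}
```

In KML, each resulting tile would get its own Region plus a NetworkLink so Google Earth only downloads it when that area is visible at sufficient detail.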
AFAIK there is no way within the API itself to increase the cache size available to the plugin. Indeed I don't think the plugin has any documented configurable system settings at all.
The only possible exception I have come across is various undocumented registry settings in various Windows versions that allow some things to be configured (such as forcing OpenGL or DirectX rendering), but other than that I don't think there is any way to do this, sorry.

When does UI (as opposed to widget toolkit type thing) need to know screen dimensions?

In a secure OS, we'd like to not tell applications anything unless they absolutely must know. Say you browse a website through an anonymity network as two different identities by running separate isolated instances of the web browser. The web browser instances can transmit your screen size, and then an adversary could see that both of your identities use the same screen size and make a more accurate guess that they are the same person. Screen size reveals identity or narrows it down.
What are some use cases of why a UI/web page (as opposed to the underlying technology) needs your screen size? Why not just use abstract dimensions for everything? Or use layout widgets that don't let you even touch dimensions. What are the limitations of these techniques?
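To make the trade-off concrete: the most common legitimate use of screen or viewport dimensions is choosing a layout, and for that a page only needs a coarse bucket, not the exact pixel count. A sketch (the breakpoint values are assumptions):

```javascript
// A typical reason a page wants the viewport width: picking a layout.
// Note it only needs a coarse bucket, not the exact pixel value - which
// is why abstract dimensions or media-query-style buckets can serve the
// same purpose while leaking far less identifying detail.
function layoutBucket(widthPx) {
  if (widthPx < 600) return "narrow";
  if (widthPx < 1200) return "medium";
  return "wide";
}

if (typeof window !== "undefined") {
  // In a real browser the exact value comes from window.innerWidth or
  // screen.width - the very values that enable fingerprinting.
  console.log(layoutBucket(window.innerWidth));
}
```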

What user-information is available to code running in browsers?

I recently had an argument with someone regarding the ability of a website to take screenshots on the user's machine. He argued that using a GUI-program to simulate clicking a mouse really fast to win a simple flash game could theoretically be detected (if the site cared enough) by logging abnormally high scores and taking a screenshot of those players' desktops for moderator review. I argued that since all website code runs within the browser, it cannot step outside the system to take such a screenshot.
This segued into a more general discussion of the capabilities of websites, through Javascript, Flash, or whatever other method (acceptable or nefarious), to make that step outside of the system. We agreed that at minimum some things were grabbable: the OS, the size of the user's full desktop. But we definitely couldn't agree on how sandboxed in-browser code was. All in all he gave website code way more credit than I did.
So, who's right? Can websites take desktop screenshots? Can they enumerate all your open windows? What else can (or can't) they do? Clearly any such code would have to be OS-specific, but imagine an ambitious site willing to write the code to target multiple OSes and systems.
Googling this led me to many red herrings with relatively little good information, so I decided to ask here.
Generally speaking, the security model of browsers is supposed to keep javascript code completely contained within its sandbox. Anything about the local machine that isn't reflected in the properties of the window object and its children is inaccessible.
Plugins, on the other hand, have free rein. They're installed by the user and can access anything the user can access. That's why they're able to access your webcam, upload files, do virus scans, etc. They're also able to expose APIs to JavaScript code, which pokes a hole in the JavaScript sandbox and gives that code some external access. That's how tools like PhoneGap give JavaScript code in web apps access to phone hardware (GPS, orientation, camera, etc.).
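To illustrate the answer's point about the window object: everything sandboxed script can learn about the machine comes through properties like these, and combining them is exactly how fingerprinting works; there is simply no property or API on window for taking desktop screenshots or enumerating open windows. A sketch (the particular property set chosen here is illustrative):

```javascript
// Build a crude fingerprint from properties the browser sandbox
// deliberately exposes via window.navigator and window.screen.
function crudeFingerprint(nav, scr) {
  return [nav.platform, nav.language, scr.width + "x" + scr.height].join("|");
}

if (typeof window !== "undefined") {
  console.log(crudeFingerprint(window.navigator, window.screen));
}
```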
