Bypassing HDCP protection - drm

I am a Linux user and was unhappy to discover that HDCP protection on websites such as vhx.tv prevents me from viewing content in perfectly legitimate contexts, because no Linux browser supports HDCP. Since HDCP was "broken" years ago, in the sense that its master key leaked, any device could theoretically authorize itself against an HDCP check.
Hypothetically, what is preventing someone from writing a browser extension, or creating a custom Firefox build, that overrides the DRM API in this way?
Why is there no such project? Is it just a lack of interest, or is there a hard technical barrier preventing this? How can anyone say that HDCP was "broken" if even legitimate users cannot circumvent it when they wish to (e.g. to watch DRM content on Linux)?

DRM systems are usually not under the browser's control; they are self-contained blobs, and the only way to get licenses (and therefore the content keys) is to use the DRM module to perform a license acquisition operation.
Obtaining the license, decrypting, and displaying the content all happen within the DRM component (outside the purview of the browser) and therefore cannot be accessed from a browser extension. The only way around that would be to create an unauthorised version of the DRM module, get it loaded in the browser, and keep the server-side code from discovering that the DRM module has been altered. This is generally very hard, and that is really what makes the DRM module work.
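To make that concrete, here is a rough sketch in C of the message flow (all function names invented and internals stubbed so it compiles and runs; real CDM interfaces such as Widevine's are proprietary). Note how the browser, and therefore any extension, only ever relays opaque byte blobs:

    #include <stdio.h>
    #include <stddef.h>

    typedef struct { const unsigned char *data; size_t len; } blob_t;

    /* In reality these live inside the closed-source CDM blob. */
    static blob_t cdm_create_license_challenge(blob_t init_data) {
        (void)init_data;
        static const unsigned char c[] = "opaque-challenge";
        blob_t b = { c, sizeof c };
        return b;
    }
    static void cdm_install_license(blob_t license) { (void)license; }
    static void cdm_decrypt_and_render(blob_t frame) {
        (void)frame;
        puts("frames are decrypted inside the CDM, not the browser");
    }

    /* The browser's only real job here: forward bytes it cannot parse. */
    static blob_t send_to_license_server(blob_t challenge) {
        (void)challenge;                    /* real code: an HTTPS POST */
        static const unsigned char l[] = "opaque-license";
        blob_t b = { l, sizeof l };
        return b;
    }

    int main(void) {
        blob_t init = { (const unsigned char *)"pssh", 4 };
        blob_t challenge = cdm_create_license_challenge(init);
        blob_t license = send_to_license_server(challenge);
        cdm_install_license(license);   /* key extraction happens inside */
        cdm_decrypt_and_render(init);   /* never visible to an extension */
        return 0;
    }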

Related

DAST security scanning of an IoT NodeMCU ESP8266 Lua web server connected to a camera and an A/C relay

Out of curiosity, I am going to DAST (dynamic application security testing) test an IoT device: a NodeMCU ESP8266 web server I built. It serves an HTML page (viewed on a mobile phone, for example) that allows the user to control and interact with a camera module and an A/C relay. With it I can, for example, view images captured by the camera (I believe it even has some image recognition built in), and I can switch a relay on and off to control the electrical current to a light bulb (110/220 V A/C power).
Before I start the pentest, I thought I had better think about what types of exploits one would be able to find and detect. Which sinister exploits will I be able to find, or rather ought to be able to find, given a proper pentest exercise? (If I do not find any exploits, my approach to pentesting the IoT device might be wrong.)
I wonder whether it might be a totally pointless exercise, since the ESP8266 web server (or rather its Lua programming libraries) might not have any security built in at all, so it is basically "open doors" and everything about it is unsafe.
Might the test report just conclude what I can already foresee, namely that "user input needs to be sanitized"?
Does anyone have any idea what such a pentest of a generic IoT device generally reports?
Maybe it is possible to crash or reset the IoT device? Buffer overruns, XSS, running arbitrary code?
I might use ZAP, Burp Suite, or a similar DAST security test tool.
I could of course SAST test it instead, or as well, but I think it will be hard to find a static code analyzer for the NodeMCU libraries and the Lua scripting language. I found some references here, though: https://ieeexplore.ieee.org/abstract/document/8227299, but it seems to be a long read.
So if someone has a short answer about what to expect from a DAST scan/pentest, it would be much appreciated.
Stay safe and secure out there!
Zombieboy
I do my vulnerability scanning with OpenVAS (I assume this is what you mean by pentesting?). I am not aware of any IoT-focused tools.
If your server is running on an ESP8266, I would imagine that there is not much room for authentication and encryption of HTTP traffic (but correct me if I am wrong).
Vulnerability scan results might show things like unencrypted HTTP traffic, credentials transmitted in cleartext (if there are any credential fields in the pages served by the web server), etc. Depending on whether there is encryption, you might also see weak-encryption findings.
You might get some false positives from your Lua web server reacting like other known web servers when exploits are applied. I have seen this kind of false positive especially with DoS vulnerabilities, where a vulnerability scan tests a vulnerability and the server becomes unresponsive. Depending on how invasive your vulnerability scanner is, you might get a lot of false positives for DoS on such a constrained platform.
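To illustrate the DoS point: here is a hypothetical C handler (illustration only; NodeMCU handlers are actually written in Lua) showing the classic pattern a fuzzing scanner trips over on a small embedded web server:

    #include <stdio.h>
    #include <string.h>

    void handle_relay_request(const char *query_value)
    {
        char state[8];
        /* BUG: no length check - a long "?state=..." value crafted by a
         * scanner overflows the stack buffer and typically resets the
         * device, which shows up in the report as a DoS finding. */
        strcpy(state, query_value);
        if (strcmp(state, "on") == 0)
            puts("relay on");
        else
            puts("relay off");
    }

    int main(void)
    {
        handle_relay_request("on");  /* fine */
        /* handle_relay_request("AAAAAAAAAAAAAAAAAAAAAAAA"); -> crash */
        return 0;
    }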

Installing my own app on a rooted webOS LG TV

I am trying to install my own web app on a ROOTED LG TV, ignoring the Developer Mode status. I have successfully run it with Devmode = On, but it expires after 48 hours, and I have to do it all over again. I want to use the TV as a menu display. I can install my app using "ApplicationInstallerUtility -c install -p /tmp/com.xxx.xxx_1.0.0_all.ipk -u 0 -l /media/cryptofs -d", but when I try to start it with Developer Mode = Off, using ssh and luna://com.webos.applicationManager/launch, I get error code 302 and the text "errorText": "Failed to identify a proper DRM file".
What can I do to solve this issue? How can I sign my app without going through the LG Content Store? Thanks in advance.
Many DRM implementations include a check for rooted devices and will not work if they detect that the device is rooted - the logic is that a rooted device may not provide the same protection for the media path and the keys.
If for your use case you don't actually need DRM, which could be the case if the streams you want to play are not encrypted, then it may be worth looking at your application and removing any DRM configuration or libraries it includes.
Update
The term DRM can be used generically, to describe Digital Rights Management for anything (software, books, media, etc.), or to refer to the common DRM solutions used to protect media, such as Widevine, PlayReady and FairPlay.
Unfortunately, the LG webOS documentation seems to use the term for both, which makes it hard to interpret the type of error you are seeing.
The manual entry for the error code you are seeing simply suggests the error message should be "Failed to check DRM":
http://webostv.developer.lge.com/api/webos-service-api/application-manager/?wos_flag=launch#launch.
This might be a reference to a media DRM, but it could also be referring to the DRM used to sign and protect the apps themselves.
LG also have a webOS security solutions guide that explains their app-signing security, which you may be able to find on the web.
I suspect that the error message you are seeing is related to this use of the term 'DRM'.
Assuming that is the case, then unfortunately you are either going to have to remove this security framework from your device, which I suspect will not be trivial, or submit the app to the LG Content Store.

How is an ARM TrustZone secure OS secure?

I am trying to read the TrustZone white paper, but it is really difficult to understand some of the basics. I have some questions about it. They may be simple questions, but I am a beginner in this field:
What makes the secure world really "secure"? I mean, why might the normal world be tampered with but not the secure world?
Who can change the secure OS, for example by adding a "service"? Can an application developer for a mobile payment application add a service to the secure OS to work with their app? If yes, how can any developer add to the secure OS while it stays secure?
What prevents a malicious application in the normal OS from raising an SMC exception and transferring control to the secure OS?
The idea of the secure world is to keep the code executing there as small and as simple as possible - the bare minimum to fulfil its duties (usually controlling access to some resource such as encryption keys or hardware, or facilitating secure functions such as encryption/decryption).
Because the amount of code in the secure world is small, it can be audited easily and there's reduced surface area for bugs to be introduced. However, it does not mean that the secure world is automatically 'secure'. If there is a vulnerability in the secure world code, it can be exploited just like any other security vulnerability.
Contrast this with code executing in the normal world. For example, the Linux kernel is much more complex and much harder to audit. There are plenty of examples of kernel vulnerabilities and exploits that allow malicious code to take over the kernel.
To illustrate this point, let's suppose you have a system where users can pay money via some challenge-response transaction system. When they want to make a transaction, the device must wait for the user to press a physical button before it signs the transaction with a cryptographic key and authorises the payment.
But what if some malicious code exploited a kernel bug and is able to run arbitrary code in kernel mode? Normally this means total defeat. The malware is able to bypass all control mechanisms and read out the signing keys. Now the malware can make payments to anyone it wants without even needing the user to press a button.
What if there were a way to sign transactions without the Linux kernel ever knowing the actual key? Enter the secure world.
We can have a small secure world OS with the sole purpose of signing transactions and holding onto the signing key. However, it will refuse to sign a transaction unless the user presses a special button. It's a very small OS (in the kilobytes) and you've hired people to audit it. For all intents and purposes, there are no bugs or security vulnerabilities in the secure world OS.
When the normal world OS (e.g. Linux) needs to sign a transaction, it makes an SMC call to transfer control to the secure world (note: the normal world is not allowed to modify or read the secure world at all) along with the transaction it wants signed. The secure world OS will wait for a button press from the user, sign the transaction, and then transfer control back to the normal world.
Now, imagine the same situation where malware has taken over the Linux kernel. The malware now can't read the signing key because it's in the secure world. The malware can't sign transactions without the user's consent since the secure world OS will refuse to sign a transaction unless the user presses the button.
This kind of use case is what the secure world is designed for. The whole idea is the hardware enforced separation between the secure and normal world. From the normal world, there is no way to directly tamper with the secure world because the hardware guarantees that.
I haven't worked with TrustZone in particular but I imagine once the secure world OS has booted, there is no way to directly modify it. I don't think application developers should be able to 'add' services to the secure world OS since that would defeat the purpose of it. I haven't seen any vendors allowing third parties to add code to their secure world OS.
As for your last question, I've already answered it here. SMC exceptions are how you request a service from the secure world OS - they're basically system calls, but for the secure world. What would malicious code gain by transferring control to the secure world?
You cannot modify/read the secure world from the normal world
When you transfer control to the secure world, you lose control in the normal world
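To make the mechanics concrete, here is a hedged sketch in C of what an SMC-based request looks like from the normal world, following the ARM SMC Calling Convention. The function ID is made up, and SMC is a privileged instruction, so code like this can only run in kernel context (EL1), never in a normal-world application:

    /* AArch64 only; must run at kernel privilege. The ID 0x83000001 is
     * hypothetical - real function IDs are assigned per the SMCCC. */
    #include <stdint.h>

    static uint64_t smc_call(uint64_t function_id, uint64_t a0, uint64_t a1)
    {
        register uint64_t x0 __asm__("x0") = function_id;
        register uint64_t x1 __asm__("x1") = a0;
        register uint64_t x2 __asm__("x2") = a1;

        /* Trap to the secure monitor, which routes the call into the
         * secure world; the result comes back in x0. */
        __asm__ volatile("smc #0"
                         : "+r"(x0)
                         : "r"(x1), "r"(x2)
                         : "x3", "x4", "x5", "x6", "x7", "memory");
        return x0;
    }

    /* E.g. ask a (hypothetical) secure signing service to sign a buffer
     * identified by physical address: the key never leaves the secure
     * world, only the status/signature handle comes back. */
    uint64_t sign_transaction(uint64_t buf_pa, uint64_t buf_len)
    {
        return smc_call(0x83000001u, buf_pa, buf_len);
    }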
What makes the secure world really "secure"? I mean, why might the normal world be tampered with but not the secure world?
The security system designer makes it secure. TrustZone is a tool. It provides a way to partition PHYSICAL memory, which can prevent a DMA attack. TrustZone generally supports lock-at-boot features, so once a physical mapping is complete (secure/normal world permissions), it cannot be changed. TrustZone also provides tools to partition interrupts and to boot securely.
It is important to note that "secure world" is a technical term; it is just a different state from the normal world. The name alone doesn't make it secure! The system designer must, and how is highly dependent on what the secure assets are. TrustZone only gives tools to partition things so that normal world access can be prevented.
Conceptually there are two types of TrustZone secure world code.
A library - here there will usually be no interrupts used in the secure world. The secure API is a magic eight ball: you ask it a question and it gives you an answer. For instance, a digital rights management system might use this approach; the secret keys are hidden and inaccessible from the normal world.
A secure OS - here the secure world has interrupts. This is more complex, as interrupts imply some sort of pre-emption. The secure OS may or may not use the MMU; usually the MMU is needed if the system cache is to be used.
Those are two big differences between final TrustZone solutions. They depend on the system design and on what the end application is. TrustZone is ONLY part of the toolkit for trying to achieve this.
Who can change the secure OS, for example by adding a "service"? Can an application developer for a mobile payment application add a service to the secure OS to work with their app? If yes, how can any developer add to the secure OS while it stays secure?
This is not defined by TrustZone. It is up to the SoC vendor (the people who license the design from ARM and build the CPU) to provide a secure boot technology. The secure OS might be in ROM and unchangeable, for instance. Another method is for the secure code to be digitally signed; in this case there is probably on-chip secure ROM that verifies the signature. The SoC vendor will provide information and techniques for secure boot (usually under NDA). It usually depends on their target market. For instance, physical tamper protection and encryption/decryption hardware may also be included in the SoC.
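As a toy illustration of that verify-then-boot idea (a simple checksum stands in for a real RSA/ECDSA signature check, and every name here is hypothetical):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* FNV-1a, standing in for SHA-256 + signature verification. */
    static uint32_t toy_digest(const uint8_t *data, size_t len)
    {
        uint32_t h = 2166136261u;
        for (size_t i = 0; i < len; i++) { h ^= data[i]; h *= 16777619u; }
        return h;
    }

    static void boot_rom_main(const uint8_t *image, size_t len,
                              uint32_t digest_from_fuses)
    {
        if (toy_digest(image, len) == digest_from_fuses)
            puts("image verified, jumping to next stage"); /* real ROM: jump */
        else
            puts("verification failed, halting");          /* real ROM: halt */
    }

    int main(void)
    {
        const uint8_t image[] = { 0xde, 0xad, 0xbe, 0xef };
        boot_rom_main(image, sizeof image, toy_digest(image, sizeof image));
        return 0;
    }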
The on-chip ROM (programmed only by the SoC vendor) usually has mechanisms to boot from different sources (NAND flash, serial NOR flash, eMMC, ROM, Ethernet, etc.). Typically it will have some one-time-fuseable memory (EPROM) that a device/solution vendor (the people who make things secure for the application) can program to configure the system.
Other features are secure debug, secure JTAG, etc. Obviously all of these are possible attack vectors.

How to secure a software application on a single-board ARM computer?

We developed an application in C. We then cross-compiled it and ported it to an ARM single-board computer (SBC) running Linux. We have started selling this SBC to customers for their specific needs using this application. However, we have heard rumors that they are trying to hack into the SBC to copy our compiled code and decompile it, so that in the future they can develop their own.
So my question is: how do I protect my software application on the SBC?
We tried standard Linux software protection solutions such as TrueCrypt, but they are difficult to cross-compile for the SBC. Any suggestions?
Best regards,
new2RoR
You appear to have two separate issues to deal with here:
Disclosure of confidential information (e.g. details of the implementation of your software)
Unauthorised use/modification of your software
The only entirely reliable solution to both is using an SBC with secure boot and a fully trusted execution path (e.g. everything code-signed). This is by no means impossible, but it will limit your hardware choices and be high-effort. It is also fundamentally contrary to the ethos of the open source movement, so solutions are hard to come by.
Dealing with the issue of confidentiality: disassembling compiled C and C++ is not particularly easy or useful, and requires a fairly high level of skill; it won't be an economic attack unless the value of doing so is very high.
Even if you cannot prevent the attacker from getting access to the binary form of the software, you can reduce the attack surface and make life more difficult for any attacker by:
Stripping symbols
Obfuscating those symbols that need to remain
Statically linking libraries
Obfuscating or encrypting any data compiled into the software (a minimal sketch follows below)
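As promised above, here is a minimal sketch of data obfuscation: XOR-encoding a string constant so it does not appear verbatim in the output of strings. This only raises the bar slightly, since the plaintext is reconstructed at runtime:

    #include <stdio.h>
    #include <stddef.h>

    #define KEY 0x5A

    /* "api-secret" XORed with KEY at compile time. */
    static unsigned char hidden[] = {
        'a' ^ KEY, 'p' ^ KEY, 'i' ^ KEY, '-' ^ KEY, 's' ^ KEY,
        'e' ^ KEY, 'c' ^ KEY, 'r' ^ KEY, 'e' ^ KEY, 't' ^ KEY, 0 ^ KEY
    };

    static const char *deobfuscate(unsigned char *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            buf[i] ^= KEY;              /* restore in place at runtime */
        return (const char *)buf;
    }

    int main(void)
    {
        printf("%s\n", deobfuscate(hidden, sizeof hidden));
        return 0;
    }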
Preventing unauthorised usage can be achieved by some kind of authentication against something a legitimate user holds, and/or by code signing:
Code signing can be used to prevent modified versions of the software being used (so long as you trust the operating system to enforce it)
Using a hardware authentication token or unique identity device can ensure the software isn't copied and used on another system.
In practice, you probably want both of these.
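For the unique-identity option, a hedged sketch of binding the binary to one board. The device-tree path exists on many ARM Linux boards, but the provisioned serial here is made up, and note that a bare comparison like this is trivially patched out of the binary unless code signing backs it up:

    #include <stdio.h>
    #include <string.h>

    static int read_board_serial(char *buf, size_t len)
    {
        FILE *f = fopen("/sys/firmware/devicetree/base/serial-number", "rb");
        if (!f)
            return -1;
        size_t n = fread(buf, 1, len - 1, f);
        buf[n] = '\0';
        fclose(f);
        return 0;
    }

    int main(void)
    {
        char serial[64];
        if (read_board_serial(serial, sizeof serial) != 0 ||
            strcmp(serial, "0000000012345678") != 0) {  /* provisioned value */
            fputs("not licensed for this hardware\n", stderr);
            return 1;
        }
        puts("licensed, starting application");
        return 0;
    }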

How are web site passwords encrypted by browsers?

What are some platform-specific APIs that web browsers use to securely save passwords with reversible encryption on local systems?
Since they must be able to reproduce the exact characters to send to a web site, the data can't be a one-way hash. My initial thought is that there are system methods which use your current authentication data to perform encryption and decryption, but do not give applications direct access to that data (your system login credentials). I'm wondering what these methods are on different platforms (Windows, Linux, OS X) and how well they protect the information if the hard drive is accessed directly, e.g. if a stolen laptop hard drive is placed in another computer or analyzed via a live CD.
Here's how Google Chrome does it: it looks like they use CryptProtectData on Windows.
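For reference, a minimal sketch of the DPAPI round trip Chrome has used on Windows (link against crypt32.lib). The blob is encrypted under a key derived from the logged-on user's credentials, so reading the raw disk from another machine - the stolen-laptop case - does not yield the plaintext:

    #include <windows.h>
    #include <wincrypt.h>
    #include <stdio.h>

    int main(void)
    {
        BYTE secret[] = "hunter2";
        DATA_BLOB in = { sizeof secret, secret };
        DATA_BLOB enc, dec;

        /* Encrypt under the current user's login-derived key. */
        if (!CryptProtectData(&in, L"saved password", NULL, NULL, NULL, 0, &enc))
            return 1;

        /* Decryption only succeeds in the same user's context. */
        if (!CryptUnprotectData(&enc, NULL, NULL, NULL, NULL, 0, &dec))
            return 1;

        printf("round-tripped: %s\n", (char *)dec.pbData);
        LocalFree(enc.pbData);
        LocalFree(dec.pbData);
        return 0;
    }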
