I have started developing applets for smart cards using Java Card.
When an applet is compiled, it must be loaded onto the card through a secure protocol defined in the GlobalPlatform Card Specification (https://globalplatform.org/wp-content/uploads/2018/05/GPC_CardSpecification_v2.3.1_PublicRelease_CC.pdf).
In particular, loading the applet onto the card requires knowledge of the cryptographic keys that are used to set up a secure channel between the host computer and the smartcard. Blank cards are typically provided with default keys such as "404142434445464748494A4B4C4D4E4F". To 'lock' the card and ensure that it cannot be tampered with, these keys must be changed to something known only by the issuer.
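For context, here is a rough sketch of how a host starts that secure-channel handshake using javax.smartcardio. The SELECT and INITIALIZE UPDATE APDUs follow the GlobalPlatform specification, but I have omitted the session-key derivation and the EXTERNAL AUTHENTICATE step, so treat it as an illustration only:

    import javax.smartcardio.*;

    // Sketch: anyone holding the (default) keys can run the GlobalPlatform
    // SCP handshake. Session-key derivation and EXTERNAL AUTHENTICATE are
    // omitted; this only shows how the handshake begins.
    public class DefaultKeyProbe {
        public static void main(String[] args) throws Exception {
            CardTerminal terminal =
                    TerminalFactory.getDefault().terminals().list().get(0);
            Card card = terminal.connect("*");
            CardChannel ch = card.getBasicChannel();

            // SELECT with an empty AID selects the default application,
            // typically the Issuer Security Domain (card manager).
            ch.transmit(new CommandAPDU(0x00, 0xA4, 0x04, 0x00, new byte[0]));

            // INITIALIZE UPDATE (CLA=80, INS=50): send an 8-byte host
            // challenge; the card replies with its own challenge plus a
            // cryptogram derived from the static keys. With the well-known
            // default keys, anyone can verify that cryptogram and complete
            // EXTERNAL AUTHENTICATE to open the secure channel.
            byte[] hostChallenge = new byte[8]; // should be random in practice
            ResponseAPDU r = ch.transmit(
                    new CommandAPDU(0x80, 0x50, 0x00, 0x00, hostChallenge, 256));
            System.out.printf("INITIALIZE UPDATE SW=%04X%n", r.getSW());
            card.disconnect(false);
        }
    }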
My question is the following:
What are the risks associated with issuing cards that still use the default test keys?
Here is a list of threats that I could think of:
A user can remove the applet and reuse the card for another purpose
Malicious software could uninstall the applet (denial of service)
Malicious software could remove the applet and install a backdoored one instead, to steal user credentials in future use.
Are there any other threats? In particular, is it possible to recover sensitive data (e.g. cryptographic keys) stored in an applet already installed on the card?
I would like to understand the exact security implications of using a smartcard with the default keys for the secure channel.
The data of the applet should be protected by the "firewall" that is implemented according to the Java Card Runtime Environment (JCRE) Specification, chapter 6, "Applet Isolation and Object Sharing":
The Java Card technology-based firewall (Java Card firewall) provides protection against the most frequently anticipated security concern: developer mistakes and design oversights that might allow sensitive data to be “leaked” to another applet. An applet may be able to obtain an object reference from a publicly accessible location. However, if the object is owned by an applet protected by its own firewall, the requesting applet must satisfy certain access rules before it can use the reference to access the object.
The firewall also provides protection against incorrect code. If incorrect code is loaded onto a card, the firewall still protects objects from being accessed by this code.
To allow sharing, the class whose instances are shared would have to implement the javacard.framework.Shareable interface (see 6.2.6, "Shareable Interface Details").
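As a minimal sketch (the applet and method names are made up for illustration), a server applet exposes a Shareable interface and decides in getShareableInterfaceObject which clients may obtain it:

    import javacard.framework.*;

    // Hypothetical shared interface; the method is illustrative only.
    interface CounterService extends Shareable {
        short getCounter();
    }

    public class ServerApplet extends Applet implements CounterService {
        private short counter;

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new ServerApplet().register();
        }

        public short getCounter() {
            return counter;
        }

        // The JCRE calls this when another applet requests our shareable
        // interface object; real code would inspect clientAID and refuse
        // untrusted callers instead of handing out the reference blindly.
        public Shareable getShareableInterfaceObject(AID clientAID, byte parameter) {
            return this;
        }

        public void process(APDU apdu) {
            // command handling omitted
        }
    }

A client applet would then obtain the reference with JCSystem.getAppletShareableInterfaceObject(serverAID, (byte) 0) and cast it to CounterService.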
Beware though that the attack surface is greatly enlarged if you allow untrusted applets to be run. The likelihood that the security constraints cannot be met will definitely increase. The secure-channel keys are considered vital to Java Card security, and the default keys should always be replaced. If you order larger quantities of cards it is usually possible to have the default keys replaced with customer-specific ones.
Note that at the time of writing the Oracle site is partly down and I cannot access the documentation. The quoted text is taken from the 3.0.1 specification that I had stored on my personal computer.
I am trying to read the TrustZone white paper, but some of the basic concepts are really difficult to understand. I have some questions about it. They may be simple questions, but I am a beginner in this field:
What makes the secure world really "secure"? I mean, why might the normal world be tampered with but not the secure world?
Who can change the secure OS? I mean, like adding a "service": can, for example, the developer of a mobile payment application add a service to the secure OS to work with his app? If yes, how can any developer add to the secure OS while it remains secure?
What prevents a malicious application in the normal OS from raising an SMC exception and transferring control to the secure OS?
The idea of a secure world is to keep the code executing there as small and as simple as possible - the bare minimum to fulfil its duties (usually controlling access to some resource like encryption keys or hardware or facilitating some secure functions like encryption/decryption).
Because the amount of code in the secure world is small, it can be audited easily and there's reduced surface area for bugs to be introduced. However, it does not mean that the secure world is automatically 'secure'. If there is a vulnerability in the secure world code, it can be exploited just like any other security vulnerability.
Contrast this with code executing in the normal world. For example, the Linux kernel is much more complex and much harder to audit. There are plenty of examples of kernel vulnerabilities and exploits that allow malicious code to take over the kernel.
To illustrate this point, let's suppose you have a system where users can pay money via some challenge-response transaction system. When they want to make a transaction, the device must wait for the user to press a physical button before it signs the transaction with a cryptographic key and authorises the payment.
But what if some malicious code exploited a kernel bug and is able to run arbitrary code in kernel mode? Normally this means total defeat. The malware is able to bypass all control mechanisms and read out the signing keys. Now the malware can make payments to anyone it wants without even needing the user to press a button.
What if there was a way that allows for signing transactions without the Linux kernel knowing the actual key? Enter the secure world system.
We can have a small secure world OS with the sole purpose of signing transactions and holding onto the signing key. However, it will refuse to sign a transaction unless the user presses a special button. It's a very small OS (in the kilobytes) and you've hired people to audit it. For all intents and purposes, there are no bugs or security vulnerabilities in the secure world OS.
When the normal world OS (e.g. Linux) needs to sign a transaction, it makes an SMC call to transfer control to the secure world (note, the normal world is not allowed to modify/read the secure world at all) with the transaction it wants to sign. The secure world OS will wait for a button press from the user, sign the transaction, then transfer control back to the normal world.
Now, imagine the same situation where malware has taken over the Linux kernel. The malware now can't read the signing key because it's in the secure world. The malware can't sign transactions without the user's consent since the secure world OS will refuse to sign a transaction unless the user presses the button.
This kind of use case is what the secure world is designed for. The whole idea is the hardware enforced separation between the secure and normal world. From the normal world, there is no way to directly tamper with the secure world because the hardware guarantees that.
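To make that refusal logic concrete, here is a conceptual sketch in Java (real secure-world code would be native; buttonPressed() stands in for a hardware input the normal world cannot fake, and the signature scheme is just an assumption):

    import java.security.*;

    // Conceptual sketch of the secure-world signing service's policy:
    // the key never leaves this module, and nothing is signed without
    // a physical user confirmation.
    final class SecureSigner {
        private final PrivateKey signingKey; // never exposed to callers

        SecureSigner(PrivateKey key) {
            this.signingKey = key;
        }

        byte[] signTransaction(byte[] transaction) throws GeneralSecurityException {
            if (!buttonPressed()) {
                throw new SecurityException("user did not confirm the transaction");
            }
            Signature s = Signature.getInstance("SHA256withECDSA");
            s.initSign(signingKey);
            s.update(transaction);
            return s.sign(); // only the signature leaves, never the key
        }

        private boolean buttonPressed() {
            // hypothetical poll of a physical button wired to the secure world
            return false;
        }
    }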
I haven't worked with TrustZone in particular but I imagine once the secure world OS has booted, there is no way to directly modify it. I don't think application developers should be able to 'add' services to the secure world OS since that would defeat the purpose of it. I haven't seen any vendors allowing third parties to add code to their secure world OS.
As for your last question, I've already answered a similar one here. SMC exceptions are how you request a service from the secure world OS - they're basically system calls, but for the secure world OS. What would malicious code gain by transferring control to the secure world?
You cannot modify/read the secure world from the normal world
When you transfer control to the secure world, you lose control in the normal world
What makes the secure world really "secure"? I mean, why might the normal world be tampered with but not the secure world?
The security system designer makes it secure. TrustZone is a tool. It provides a way to partition PHYSICAL memory, which can prevent a DMA attack. TrustZone generally supports lock-at-boot features, so once a physical mapping is complete (secure/normal world permissions), it cannot be changed. TrustZone also gives tools to partition interrupts, as well as to boot securely.
It is important to note that the secure world is a technical term. It is just a different state than the normal world. The name secure world doesn't make it secure! The system designer must. It is highly dependent on what the secure assets are. TrustZone only gives tools to partition things so that normal world access can be prevented.
Conceptually there are two types of TrustZone secure world code.
A library - here interrupts will not usually be used in the secure world. The secure API is a magic eight ball: you can ask it a question and it will give you an answer. For instance, it is possible that some digital rights management system might use this approach. The secret keys remain hidden and inaccessible from the normal world.
A secure OS - here the secure world will have interrupts. This is more complex as interrupts imply some sort of pre-emption. The secure OS may or may not use the MMU. Usually the MMU is needed if the system cache will be used.
Those are the two big differences between final TrustZone solutions. They depend on the system design and on what the end application is. TrustZone is ONLY part of the toolset for trying to achieve this.
Who can change the secure OS? I mean, like adding a "service": can, for example, the developer of a mobile payment application add a service to the secure OS to work with his app? If yes, how can any developer add to the secure OS while it remains secure?
This is not defined by TrustZone. It is up to the SoC vendor (the people who license from ARM and build the CPU) to provide a secure boot technology. The secure OS might be in ROM and not changeable, for instance. Another method is for the secure code to be digitally signed; in this case, there is probably an on-chip secure ROM that verifies the signature. The SoC vendor will provide (usually under NDA) information about and techniques for the secure boot. It usually depends on their target market. For instance, physical tamper protection and encrypt/decrypt hardware may also be included in the SoC.
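For illustration only, the check such a boot ROM performs amounts to something like the following (sketched in Java; real ROM code is native, and the RSA scheme and key provisioning here are assumptions for the example):

    import java.security.*;

    // Conceptual sketch of a secure-boot check: refuse to run a
    // secure-world image unless its signature verifies against a
    // vendor public key (often itself hashed into on-chip fuses).
    final class SecureBootCheck {
        static boolean imageIsTrusted(byte[] image, byte[] signature,
                                      PublicKey vendorKey)
                throws GeneralSecurityException {
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(vendorKey);
            verifier.update(image);
            return verifier.verify(signature); // boot only if true
        }
    }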
The on-chip ROM (programmed only by the SoC vendor) usually has mechanisms to boot from different sources (NAND flash, serial NOR flash, eMMC, ROM, Ethernet, etc.). Typically it will have some one-time-fusable memory (EPROM) that a device/solution vendor (the people who make things secure for the application) can program to configure the system.
Other features are secure debug, secure JTAG, etc. Obviously all of these are possible attack vectors.
I suspect we're all familiar with how Facebook, Google, and the like detect if you're using a different device than usual. I was wondering: what is the most reliable way to do this?
I'm talking about the old 'It looks like you're signing in from a different device', and then when you confirm etc, it usually sends you an email and asks whether you want to trust this device or not.
Obviously one could just set a cookie, one that maybe gets checked and logged on each visit, but what about when the user signs out? Do we keep the cookie?
Is there any other reliable method to 'trust' a 'device' other than setting cookies? Or is this the best/most reliable way to do it?
The most reliable way to detect a device change is to create a fingerprint of the browser/device the browser is running on. This is a complex topic to get 100% right, and there are commercial offerings that are pretty darn good but not flawless. I worked at one of those companies several years ago.
There is now at least one open-source fingerprinting project, ClientJS. I have not used it, but it seems to cover the bases.
Just setting a cookie is not very reliable because on average users clear cookies about every 30-45 days unless you use a network that attempts to re-set the cookie (paid services). Even those are not flawless.
Just using the IP address is useless. Some devices legitimately have many IPs in a short period of time (laptop at home, work and Starbucks or most any mobile device), while sometimes a single IP is shared by a large number of users (all the folks at Starbucks or behind a corporate proxy server).
UPDATE
Thoughts on your similarity-hash code:
It is a complex topic to get right. I had a small team for a few years. We got pretty darn good, but you can never be 100% accurate even when people are not intentionally trying to trick you.
If the CPU changes, it's probably a different device.
The same physical device can have many user agents. Each browser on the device has a different user agent, and privacy mode of browsers have different user agents with far less entropy.
Fonts don't change very quickly for a given physical device, though they're not a great source of entropy on mobile devices (few fonts installed, and typically all the same ones for a given type of device).
OS is generally stable, until it suddenly changes. Does it matter in your case if every device appears to be a new device when it updates to Windows 10?
Color depth will be pretty stable. If the user installs a new graphic card, this may change. Does that matter in your case?
If you can accept thinking some devices are new when in fact they are the same and vice-versa, this type of similarity hash may work for you. Note that you can never use this type of fingerprint to uniquely identify a device for a purpose that requires positive identification such as access to secure data. It's great for making probabilistic decisions such as serving an appropriate ad.
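To illustrate the idea (this is not any product's actual algorithm; the attribute names, weights, and threshold are made up), a weighted similarity match in Java might look like:

    import java.util.*;

    // Fuzzy device matching: declare "probably the same device" when
    // enough weighted attributes agree, tolerating some drift such as
    // a browser update or an OS upgrade.
    final class DeviceMatcher {
        private static final Map<String, Integer> WEIGHTS = Map.of(
                "userAgent", 2,
                "os", 3,
                "fonts", 2,
                "colorDepth", 1,
                "timezone", 1);

        static boolean probablySameDevice(Map<String, String> a,
                                          Map<String, String> b) {
            int score = 0;
            int max = 0;
            for (Map.Entry<String, Integer> e : WEIGHTS.entrySet()) {
                max += e.getValue();
                if (Objects.equals(a.get(e.getKey()), b.get(e.getKey()))) {
                    score += e.getValue();
                }
            }
            return score >= max * 0.7; // threshold chosen for illustration
        }
    }

Tune the weights and threshold against real traffic; the trade-off is exactly the false-match/false-mismatch balance described above.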
I am looking at an embedded system where secrets are stored in flash that is internal to the chip package, and there is no physical interface to get that information out - all access to this flash is policed by program code.
All DMA attacks and JTAG and such are disabled. This seems to be a common locked-down configuration for system-on-a-chip.
How might an attacker recover the secrets in that Flash?
I understand they can fuzz for vulnerabilities in the application code and exploit them, and that there could be some general side-channel attack or the like.
But how would an attacker really go about trying to recover those keys? Are there viable approaches for a determined attacker to somehow shave down the chip or mount some kind of microscope attack?
I've been searching for information on how various game consoles, satellite TV, trusted computing and DVD systems have been physically attacked to see how this threat works and how vulnerable SoC is, but without success.
It seems that, so far, all those keys have been extracted through software, or from multi-chip systems?
http://www.youtube.com/watch?v=tnY7UVyaFiQ
A security researcher analyses a smart card: he chemically strips the case, then uses an oscilloscope to see what it's doing while it decrypts.
The attack against MiFare RFID.
"...For the MiFare crack, they shaved off layers of silicon and photographed them. Using Matlab they visually identified the various gates and looked for crypto like parts..."
Obviously there are security reasons to close a wireless network, and it's not fun if someone is stealing your bandwidth. But would that be a serious problem?
To address the first concern: does a device on the same wireless network have any special privileges or access that another device on the internet does not?
Assumption: the wireless network is connected to the internet.
The second seems like a community issue. If your neighbor is stealing bandwidth, you'd act just as if he were "borrowing" water or electricity.
First, talk to him about the problem, and if that doesn't work, go to the authorities or lock stuff up. Am I missing something?
Bruce Schneier is famous for running an open wireless network at home (see here). He does it for two reasons:
To be neighborly (you'd let your neighbor borrow a cup of sugar, wouldn't you? Why not a few megabits?)
To keep away from the false sense of security that a firewall gives you. In other words, it forces him to make sure his hosts are secure.
Personally, I would never run an open wireless network for one reason: accountability. If someone does something illegal on my network, I don't want to be held accountable.
I don't think the biggest problem is just someone stealing your bandwidth, but what they do with it. It's one thing if someone uses my wireless network to browse the Internet. It's another thing if they use it for torrenting (I find that slows down the network) or any illegal activities (kiddy porn? not on my network you don't).
Yes, you are missing something: your wireless router also doubles as a firewall, keeping harmful traffic from the Internet out. By letting one of your virus-infected neighbors onto your WLAN, you're essentially letting him bypass that.
Now, this shouldn't be a problem in an ideal world, since you'd have a well-configured system with a host firewall, but that's certainly not always the case. What about when you have your less security-minded friends over?
Not to mention the legal hassle you could get yourself into if one of your neighbors, or someone sitting in a car close enough with a laptop, starts browsing kiddie porn.
I feel it all has to do with population density. My parents own a big plot of land; the nearest neighbor is half a mile away. To me it doesn't make sense to lock that wireless router down. But if I lived in an apartment complex, that thing would be locked down and not broadcasting its SSID.
Now, at my house I just don't broadcast my SSID and keep the network open. The signal doesn't travel farther than my property line, so I am not too worried about people hijacking it.
I would actually disagree with Thomas in the sense that I think bandwidth is the biggest problem, as it's unlikely there are many dodgy people in your area who just so happen to connect to your network to misbehave. It's more likely I think that you'll have chancers, or even users who don't fully understand wireless, connecting and slowing down your connection.
I've experienced horribly laggy connections due to bandwidth stealing, a lot of the problem is with ADSL - it just can't handle big upstream traffic; if a user is using torrents and not restricting the upstream bandwidth it can basically stall everything.
For most people, the wireless access point is a router that is acting as a hardware firewall to external traffic. If someone's not on your wireless network, the only way they'll get to a service running on your machine is if the router is configured to forward requests. Once a device is behind the router, you're relying on your computer's firewall for security. From a "paranoid" layered security standpoint, I'd consider an open wireless network in this scenario to be a reduction in security.
I've met a lot of people that leave their networks open on purpose, because they feel it's a kind of community service. I don't subscribe to that theory, but I can understand the logic. They don't see it as their neighbor stealing bandwidth because they feel like they aren't using that bandwidth anyway.
Following joshhinman's comment, this is a link to an article where he explains why he has chosen to leave his wireless network open: Schneier on Open Wireless
This guy is probably the most famous security expert at the moment, so it's worth having a look at what he has to say.
As far as the security aspect goes, it is a non-issue. An open network can allow a determined person to 'listen' to all your unencrypted communication. This will include emails, probably forum posts, things like this. These things should never EVER be considered secure in the first place unless you are applying your own encryption. Passwords and secure logins to servers will be encrypted already, so there is no benefit to encrypting the packets again while they are in the air.
The real problem comes when, as others have mentioned, users perform illegal actions through your access point. IANAL, but it seems some correlations can be drawn to having your car stolen and someone committing a crime with it: you will be investigated, and can be determined innocent if you have an alibi or logs showing your machines were not responsible for that traffic.
The best solution to the hassle of using a key, for the home user, is to restrict which MAC addresses can connect. This solves the problem of unauthorized users (for all but the most advanced attackers, at which point your password likely won't help you either) and it keeps you from having to input a long key every time you need to access something.
Personally, I would never run an open wireless network for one reason: accountability. If someone does something illegal on my network, I don't want to be held accountable.
The flip side of this is deniability. If the government or RIAA come knocking on your door about something done from your IP address you can always point to your insecure wireless connection and blame someone else.
I wish people would stop referring to an open network as 'insecure'. A network is only insecure if it doesn't meet your security requirements - people need to understand that not everyone has the same security requirements. Some people actually want to share their network.
An open network is open. As long as you meant that to be the case, that's all it is. If your security policy doesn't include preventing your neighbors from sharing your bandwidth, then it's not a security fault if it allows them to do that, it's faulty if it doesn't.
Are you liable for other's use of your 'insecure' network? No. No more so than your ISP is liable for your use of the Internet. Why would you want it to be otherwise? Note, by the way, that pretty much every commercial WiFi hotspot in the world is set up in exactly such an open mode. So, why should a private individual be held liable for doing exactly the same thing, merely because they don't charge for it?
Having said that, you do have to lock down your hosts, or firewall off an 'internal' portion of your network, if you want to run fileshares etc internally with such a setup.
Also, another way to deal with 'bandwidth stealing' is to run a proxy that intercepts others traffic and replaces all images with upside down images or pictures of the Hof. :-)
#kronoz: I guess it depends on where you live. Only two houses are within reach of my wireless network, excluding my own. So I doubt that small number of people can affect my bandwidth. But if you live in a major metro area, and many people are able to see and get on the network, yeah, it might become a problem.
It is so easy to lock a wireless router down now that I think a better question is: why not lock it down?
The only reason I can think of is if you had a yard large enough so that your neighbors can't get a signal and you frequently have visitors bringing devices into your home (since setting them up can be a chore).
Note that I'm saying both of those things would need to be true for me to leave one open.
Personally, I would never run an open wireless network for one reason: accountability. If someone does something illegal on my network, I don't want to be held accountable.
The flip side of this is deniability. If the government or RIAA come knocking on your door about something done from your IP address you can always point to your insecure wireless connection and blame someone else.
I would argue that anyone who is running a network is responsible for the actions of all people who use it. If you aren't controlling use, then you are failing as a network administrator. But then again, I'm not a lawyer, so...
As it turns out, when I switched DSL service, the wireless router the company provided was secured out of the box. So unless I add the old router to my network, it will stay secured.
On the other hand, it was very convenient to "borrow" a few hours of network time from neighbors while I was waiting for the technician to stop by and install the service. Looks like this might not be an option soon, however.
My biggest concern is that there is never too much bandwidth, so a decision to share it is only acceptable if I can somehow guarantee that other people do not use more than, say, 5% of my total bandwidth. That limit may or may not render my connection useless to other people, depending on what they mean to do with it.
As most wireless standards are very hackable, I can understand the logic behind not securing a network: it removes the false sense of security that wireless security provides.
However, in NZ bandwidth is expensive, and I cannot afford for randoms to leech it off me. As the vast majority of people don't have a clue about hacking wireless connections, having this admittedly pitiful defense wards off most of the lazy.
If anyone cares enough, they can hack my crappy WEP encryption and get themselves some free Internet and free leeching until I care enough to stop them. Then I'll upgrade to something better (white-listed MAC addresses, say), which will be harder for them to hack, and the cycle will begin anew.