What is the difference between trusted computing and confidential computing? - security

It seems that both trusted computing and confidential computing can protect data in use.
Is trusted computing based on TPM and confidential computing based on Intel SGX?
Any other difference?

This is a good question, since both terms are a bit ambiguous and often used interchangeably.
The short answer is that they mean the same thing in most cases.
Trusted Computing was probably the term that appeared first.
It tries to put the emphasis on the reduced set of "trusted parties/components", called the Trusted Computing Base (TCB), that modern processor technologies such as Intel SGX, AMD SEV, and ARM TrustZone provide.
They all have in common that code and data are separated and protected at all times during execution in so-called Trusted Execution Environments (TEE).
Trusted Computing doesn't necessarily need to be backed by hardware features; it can also be provided by hypervisor technologies such as Hyper-V VBS or AWS Nitro Enclaves. Naturally, the TCB is bigger on such hypervisor TEEs.
Is trusted computing based on TPM and confidential computing based on Intel SGX?
No; SGX is probably the most prominent example of a trusted computing technology.
TPMs can of course also be used to establish a root of trust, but they are typically not able to create complete TEEs for protecting data at runtime.
They are more commonly used for secure/trusted key generation and storage, or for cryptographic calculations. To be precise, a TPM is physically isolated from the main processor, while a TEE resides on the same chip. See also TPM vs. TEE vs. SE
Confidential Computing is a relatively new term.
It was probably established to have a more business-friendly term.
"Trusted" might be harder to sell than "Confidential" ;-)
The term puts more emphasis on the application of TEEs and tries to address a wider audience by describing not only the technologies but the applications and business cases in general.
In the words of the Confidential Computing Consortium
Confidential Computing is the protection of data in use using hardware-based Trusted Execution Environments. Through the use of Confidential Computing, we are now able to provide protections against many of the threats described in the previous section.
With both terms floating around, "Confidential Computing" has gained much more traction and mainstream adoption, while "Trusted Computing" feels more niche.
Trusted Computing will probably disappear as a general term and only be used when describing hardware features and TEEs in more technical detail.
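To make the TEE/attestation idea above a bit more concrete, here is a grossly simplified sketch (my own illustration, not how SGX or any real attestation protocol actually works): a "measurement" is a hash of the loaded code, which a verifier compares against a known-good value. Real attestation additionally involves a hardware-rooted signature ("quote") over the measurement, which is omitted here; all names are made up.

```python
import hashlib

# Simplified illustration of the "measurement" idea behind remote
# attestation: the platform hashes the code it loaded, and a verifier
# compares that measurement against a published known-good value.
enclave_code = b"def process(secret): return secret * 2"

def measure(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

# Published by the developer for the expected build.
expected_measurement = measure(enclave_code)

# Verifier side: only release secrets if the measurement matches.
reported = measure(enclave_code)
assert reported == expected_measurement

# Any tampering with the code changes the measurement.
tampered = measure(enclave_code + b" # backdoor")
assert tampered != expected_measurement
print("measurement matches; code is the expected build")
```

The part this sketch cannot show is exactly what the hardware adds: without a hardware-signed quote, a malicious host could simply lie about the measurement.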

Related

What is the difference between an exploit and a compromise?

I see the words compromise and exploit being used interchangeably. When I did basic Google searches for this question, the answers were about the difference between an exploit and a vulnerability, not the comparison I queried.
Exploit: software that makes use of a flaw of the operating system or host software to gain some kind of a benefit.
Compromise: the act of damaging the integrity, confidentiality or availability of a computer; for example, through an exploit.

BACnet does not emphasize secure communication?

When I look into the BACnet communication protocol, I find very little about secure communication; it is almost non-existent. BACnet is commonly used in building automation controls. However, without mandatory authentication or encryption, wouldn't it be easy to hack by just walking into a building and tapping into the building network? Am I missing something?
Your assumptions are correct - for the older standard. The BACnet ASHRAE SSPC 135 committee has addressed the security issue with the new BACnet/SC datalink. It has not been officially released yet, but you can get a sneak peek: http://www.bacnet.org/Bibliography/B-SC-Whitepaper-v10_Final_20180710.pdf
To be fair, it's not necessarily that easy to rock up to a building and plug in to the building controls network; that's the unsaid part - a lot of real-world BACnet security relies upon physical security/access (whether explicitly/actively so or implicitly).
BACnet did also introduce a Section 24 covering "Network Security" (present in the 2012 revision), but it was later marked "DELETED" - and even while it was present, companies (and a fair amount of the community) didn't seem to show much active, delivered interest.
But even with the advent of BACnet/SC, BACnet/SC is a lot to take on (in terms of complexity, time, and overall investment) for a technical person, never mind a less technical building manager or supervisor; there needs to be more support from the committee.

Purpose built light-weight alternative to SSL/TLS?

Target hardware is a rather low-powered MCU (ARM Cortex-M3 @ 72MHz, with just about 64KB SRAM and 256KB flash), so I'm walking a thin line here. My board does have Ethernet, and I will eventually get lwIP (a lightweight FOSS TCP/IP suite) running on it (currently struggling). However, I also need some kind of super light-weight alternative to SSL/TLS. I am aware of the multiple GPL'd SSL/TLS implementations for such MCUs, but their footprint is still fairly significant. While they do fit in, given everything else, they don't leave much room for anything else.
My traffic is not HTTP, so I don't have to worry about HTTPS, and my client/server communication can be completely proprietary, so a non-standard solution is okay. Looking for suggestions on what might be a minimalistic yet robust (weak security is worthless) alternative that helps me --
Encrypt my communication (C->S & S->C)
Do 2-way authentication (C->S & S->C)
Avoid man-in-middle attacks
I won't be able to optimize the library at the ARMv7 assembly level, and thus bank entirely on my programming skills and the GNU ARM compiler's optimization. Given the above, any pointers on what might be the best options?
If any of those small TLS implementations allow you to disable all X.509 and ASN.1 functionality and just use TLS with pre-shared keys, you'd have quite a small footprint, because only symmetric ciphers and hashes are used.
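To illustrate why the pre-shared-key approach stays so small, here is a hedged sketch (my own toy construction, not a real protocol and no substitute for a vetted TLS-PSK implementation) of mutual authentication and session-key derivation using only symmetric primitives; all names and the message layout are made up:

```python
import hashlib
import hmac
import os

# Toy PSK handshake sketch: both sides hold the same pre-shared key and
# each proves knowledge of it by MACing the peer's nonce. Only HMAC and
# a hash are needed - no public-key code, no X.509, no ASN.1.
PSK = b"0123456789abcdef0123456789abcdef"  # provisioned out of band

def derive_session_key(psk: bytes, client_nonce: bytes, server_nonce: bytes) -> bytes:
    # HKDF-like derivation built from HMAC-SHA256 (simplified to one block).
    prk = hmac.new(client_nonce + server_nonce, psk, hashlib.sha256).digest()
    return hmac.new(prk, b"session-key\x01", hashlib.sha256).digest()

def prove(psk: bytes, peer_nonce: bytes, role: bytes) -> bytes:
    # Including the role prevents reflecting one side's proof back at it.
    return hmac.new(psk, role + peer_nonce, hashlib.sha256).digest()

# --- handshake: exchange fresh nonces, then exchange proofs ---
client_nonce = os.urandom(16)
server_nonce = os.urandom(16)
server_proof = prove(PSK, client_nonce, b"server")
client_proof = prove(PSK, server_nonce, b"client")

# Each side verifies the peer's proof in constant time.
assert hmac.compare_digest(server_proof, prove(PSK, client_nonce, b"server"))
assert hmac.compare_digest(client_proof, prove(PSK, server_nonce, b"client"))

# Both sides can now derive the same symmetric session key.
key_client_side = derive_session_key(PSK, client_nonce, server_nonce)
key_server_side = derive_session_key(PSK, client_nonce, server_nonce)
assert key_client_side == key_server_side
print("mutual authentication ok, session key derived")
```

Note the trade-offs a real TLS-PSK ciphersuite shares with this sketch: no forward secrecy (a leaked PSK exposes past traffic), and a low-entropy PSK is open to offline guessing.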
There's CurveCP. It's meant to completely replace SSL.
It's fairly new, and still undergoing development, but its author is a well-known expert in the field, and has been carefully working toward it during the past decade. A lot of careful research and design has been put into it.
My immediate reaction would be to consider Kerberos. It's been heavily studied, so the chances of major holes at this point are fairly remote. At the same time, it's fairly minimalist, so unless you restrict what you need to do, you're probably not going to be able to use anything much more lightweight without compromising security.
If that's still too heavy for your purposes, you're probably going to need to impose some restrictions on what you want done.
You might have a look at MST (Minimal Secure Transport) https://github.com/DiplIngFrankGerlach/MST.
It provides the same security assurances as TLS, but requires a pre-shared key. Also, it is very small (less than 1000 LoC, without AES) and can therefore easily be reviewed by an expert.
I know I'm coming along with a variant of an answer about two years later, yet... "PolarSSL's memory footprint can get as small as 30k and averages below 110k." https://polarssl.org/features

Software and Security - do you follow specific guidelines?

As part of a PCI-DSS audit, we are looking into improving our coding standards in the area of security, with a view to ensuring that all developers understand the importance of this area.
How do you approach this topic within your organisation?
As an aside we are writing public-facing web apps in .NET 3.5 that accept payment by credit/debit card.
There are so many different ways to break security. You can expect infinite attackers. You have to stop them all - even attacks that haven't been invented yet. It's hard. Some ideas:
Developers need to understand well known secure software development guidelines. Howard & Le Blanc "Writing Secure Code" is a good start.
But being good rule-followers is only half the point. It's just as important to be able to think like an attacker. In any situation (not only software-related), think about what the vulnerabilities are. You need to understand some of those weird ways that people can attack systems - monitoring power consumption, speed of calculation, random number weaknesses, protocol weaknesses, human system weaknesses, etc. Giving developers freedom and creative opportunities to explore these is important.
Use checklist approaches such as OWASP (http://www.owasp.org/index.php/Main_Page).
Use independent evaluation (eg. http://www.commoncriteriaportal.org/thecc.html). Even if such evaluation is too expensive, design & document as though you were going to use it.
Make sure your security argument is expressed clearly. The common criteria Security Target is a good format. For serious systems, a formal description can also be useful. Be clear about any assumptions or secrets you rely on. Monitor security trends, and frequently re-examine threats and countermeasures to make sure that they're up to date.
Examine the incentives around your software development people and processes. Make sure that the rewards are in the right place. Don't make it tempting for developers to hide problems.
Consider asking your QSA or ASV to provide some training to your developers.
Security basically falls into one or more of three domains:
1) Inside users
2) Network infrastructure
3) Client side scripting
That list is written in order of severity, which is the opposite of the order of violation probability. Here are the proper management solutions from a very broad perspective:
The only solution to prevent violations by inside users is to educate the users, enforce awareness of company policies, limit user freedoms, and monitor user activities. This is extremely important, as this is where the most severe security violations always occur, whether malicious or unintentional.
Network infrastructure is the traditional domain of information security. Two years ago security experts would not have considered looking anywhere else for security management. Some basic strategies are to use NAT for all internal IP addresses, enable port security in your network switches, physically separate services onto separate hardware, and carefully protect access to those services even after everything is buried behind the firewall. Protect your database from code injection. Use IPSEC to reach all automation services behind the firewall and limit points of access to known points behind an IDS or IPS. Basically, limit access to everything, encrypt that access, and inherently assume every access request is potentially malicious.
Over 95% of reported security vulnerabilities are related to client-side scripting from the web, and about 70% of those target memory corruption, such as buffer overflows. Disable ActiveX, and require administrator privileges to re-enable it. Patch all software that executes any sort of client-side scripting in a test lab no later than 48 hours after patches are released by the vendor. If the tests do not show interference with the company's authorized software configuration, then deploy the patches immediately. The only solution for memory corruption vulnerabilities is to patch your software. This software may include: Java client software, Flash, Acrobat, all web browsers, all email clients, and so forth.
As far as ensuring your developers are compliant with PCI accreditation, ensure they and their management are educated to understand the importance of security. Most web servers, even large corporate client-facing web servers, are never patched. Those that are patched may take months to be patched after they are discovered to be vulnerable. That is a technology problem, but even more important, it is a gross management failure. Web developers must be made to understand that client-side scripting is inherently open to exploitation, even JavaScript. This problem is easily realized with the advance of AJAX, since information can be dynamically injected to an anonymous third party in violation of the same-origin policy, completely bypassing the encryption provided by SSL. The bottom line is that Web 2.0 technologies are inherently insecure, and those fundamental problems cannot be solved without defeating the benefits of the technology.
When all else fails hire some CISSP certified security managers who have the management experience to have the balls to speak directly to your company executives. If your leadership is not willing to take security seriously then your company will never meet PCI compliance.

Key Generation/Validation, What's out there?

I've been asked to develop a key generation/validation system for some software. They would also be open to an existing open source or commercial system, but would prefer a system built from scratch. Online activation would have to be optional, since it is likely that some installations would be on isolated servers. I know there is kind of a usability/security tension with a lot of anti-piracy techniques. So I guess I'm asking: what software, libraries, and techniques are out there? I would appreciate personal knowledge, web sites, or books.
If you take the hash of something, it will result (ideally) in an unpredictable string of characters.
You could have the algorithm take the SHA1 of something predictable (like sequential numbers) concatenated with a sufficiently long salt. Your keys would be really secure as long as your salt remains a permanent secret and SHA1 is never broken.
For example, if you take the SHA1 of "1" (your first license key) and a super secret salt "stackoverflow8as7f98asf9sa78f7as9f87a7", you get the key "95d78a6331e01feca457762a092bdd4a77ef1de1". You could prepend this with version numbers if you want.
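The scheme described above can be sketched in a few lines. This is a hedged illustration, not a hardened licensing system: the salt value is a placeholder, and I use HMAC-SHA1 rather than plain concatenation, which is the more standard keyed construction (plain `SHA1(salt + data)` is subject to length-extension issues):

```python
import hashlib
import hmac

# Sketch of the hash-based license key scheme described above: derive a
# key from a sequential serial number and a server-side secret.
# The secret below is an illustrative placeholder, not a real value.
SECRET_SALT = b"stackoverflow8as7f98asf9sa78f7as9f87a7"

def make_key(serial: int, version: int = 1) -> str:
    # HMAC keyed with the secret salt over the serial number.
    digest = hmac.new(SECRET_SALT, str(serial).encode(), hashlib.sha1).hexdigest()
    return f"v{version}-{digest}"  # prepend a version, as suggested above

def validate_key(serial: int, key: str, version: int = 1) -> bool:
    # Regenerate the expected key and compare in constant time.
    return hmac.compare_digest(make_key(serial, version), key)

key = make_key(1)
print(key)  # "v1-" followed by 40 hex characters
assert validate_key(1, key)
assert not validate_key(2, key)
```

The obvious limitation is that validation requires the secret salt, so offline validation means shipping the secret inside the product, where a determined attacker can extract it - which is one reason the answers below lean toward public-key signatures or third-party systems.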
If you want online authorization, you need three things:
To ensure that the response cannot be forged
To ensure that the request cannot be forged
To ensure that if Internet is unavailable, you take appropriate action
Public-key cryptography can help with items one and two. Even Photoshop CS4 has problems with item 3; that's a tricky one.
I'm biased - given that the company I co-founded developed the Cobalt software licensing solution for .NET - but I'd suggest you go with a third-party solution rather than rolling your own.
Take a look at the article Developing for Software Protection and Licensing, which makes the following point:
We believe that most companies would be better served by buying a high-quality third-party licensing system. This approach will free your developers to work on core functionality, and will alleviate maintenance and support costs. It also allows you to take advantage of the domain expertise offered by licensing specialists, and avoid releasing software that is easy to crack.
Another advantage to buying a third-party solution is that you can quickly and easily evaluate it for suitability; with an in-house system you have to pay in advance for the development of a system that may not prove adequate for your needs. Choosing a high-quality third-party system dramatically reduces the risk involved in developing a solution in-house.
If you're dead set on rolling your own, a word of advice: test on the widest range of client systems possible. Real-world hardware is weird, and Windows behaviour varies quite dramatically in some ways between versions.
You'll almost certainly have to spend a lot of time ironing the creases out of whatever hardware identification system you implement.
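To show what "hardware identification" typically means in practice, here is a hedged sketch (my own illustration; the chosen identifiers are examples, not a recommendation): combine a few machine identifiers and hash them into a fingerprint. Real products use more robust sources (disk serials, motherboard IDs, TPM-backed values) and tolerate partial hardware changes instead of requiring an exact match.

```python
import hashlib
import platform
import uuid

# Illustrative machine-fingerprint sketch. Each source below can change
# (hostname edits, NIC swaps), which is exactly the "ironing out creases"
# problem the answer above warns about.
def machine_fingerprint() -> str:
    parts = [
        platform.node(),                 # hostname (user-changeable)
        platform.machine(),              # e.g. "x86_64"
        format(uuid.getnode(), "012x"),  # MAC-derived node id
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

fp = machine_fingerprint()
print(fp)                           # 64 hex characters
assert fp == machine_fingerprint()  # stable within one session
```

A common design choice is to fingerprint each component separately and accept a license when, say, two out of three components still match, so a single hardware change doesn't lock the customer out.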
