BACnet does not emphasize secure communication?

When I looked into the BACnet communication protocol, I found very little about secure communication, almost nothing. BACnet is very commonly used in building automation controls. However, without mandatory authentication or encryption, wouldn't it be easy to hack by just walking into a building and tapping into the building network? Am I missing something?

Your assumptions are correct - for the older standard. The BACnet ASHRAE SSPC 135 committee has addressed the security issue with the new BACnet/SC datalink. It has not been officially released yet, but you can get a sneak peek: http://www.bacnet.org/Bibliography/B-SC-Whitepaper-v10_Final_20180710.pdf

To be fair, it's not necessarily that easy to rock up to a building and plug in to the building controls network; that's the unsaid part: a lot of real-world BACnet security relies upon physical security/access, whether explicitly/actively or implicitly.
That said, BACnet did introduce Clause 24, which covered "Network Security" (and was present in the 2012 edition), but it was later deleted; and even while it was present, companies (and a fair amount of the community) didn't seem to show much active, delivered interest.
But even with the advent of BACnet/SC, it is a lot to take on (in terms of complexity, time, and overall investment) for a technical person, never mind a less technical building manager or supervisor; there needs to be more support from the committee.


Use a framework for security or do it by myself?

I have had this doubt for a while, and today I am not in a strong position on it, despite having taken one.
Whenever I develop or participate in the development of a (web) application, we typically handle security by hand ourselves; that is, we handle all the security-related processes, from sessions to password encryption, etc.
I remember hearing someone say that it is always better to use a framework (Spring, Apache Shiro, etc.).
What is your suggestion?
Yes, it is always better to use a framework rather than reinventing the wheel. I personally prefer Apache Shiro and have made customizations to suit my needs by extending the classes it provides.
Read more here: http://shiro.apache.org/
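As a rough illustration of what "extending the classes provided" can look like, here is a minimal sketch of a custom Shiro realm. The class name MyAppRealm and the lookupPassword helper are hypothetical placeholders for your own user store; only the Shiro types (AuthorizingRealm, SimpleAuthenticationInfo, and so on) come from the framework.

```java
import org.apache.shiro.authc.AuthenticationException;
import org.apache.shiro.authc.AuthenticationInfo;
import org.apache.shiro.authc.AuthenticationToken;
import org.apache.shiro.authc.SimpleAuthenticationInfo;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.authz.AuthorizationInfo;
import org.apache.shiro.authz.SimpleAuthorizationInfo;
import org.apache.shiro.realm.AuthorizingRealm;
import org.apache.shiro.subject.PrincipalCollection;

// Hypothetical custom realm that plugs an existing user store into Shiro
// by extending the classes the framework provides.
public class MyAppRealm extends AuthorizingRealm {

    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token)
            throws AuthenticationException {
        UsernamePasswordToken upToken = (UsernamePasswordToken) token;
        // Fetch the stored (hashed) credentials; Shiro's CredentialsMatcher
        // then compares them against the submitted password.
        String storedPassword = lookupPassword(upToken.getUsername());
        return new SimpleAuthenticationInfo(upToken.getUsername(), storedPassword, getName());
    }

    @Override
    protected AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principals) {
        SimpleAuthorizationInfo info = new SimpleAuthorizationInfo();
        // Assign roles/permissions from your own store; "user" is just an example.
        info.addRole("user");
        return info;
    }

    private String lookupPassword(String username) {
        // Placeholder: query your database or directory here.
        return "password-hash-from-your-store";
    }
}
```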
Some points to help make up your mind:
Custom code equals custom vulnerabilities: With web applications you typically generate most of the application code yourself (even when using common frameworks and plugins). That means most vulnerabilities will be unique to your application. It also means that unless you are constantly evaluating your own application, there’s no one to tell you when a vulnerability is discovered in the first place.
You are the vendor: When a vulnerability appears, you won’t have an outside vendor providing a patch (you will, of course, have to install patches for whatever infrastructure components, frameworks, and scripting environments you use). If you provide external services to customers, you may need to meet any service level agreements you provide and must be prepared to be viewed by them just as you view your own software vendors, even if software isn’t your business. You have to patch your own vulnerabilities, deal with your own customer relations, and provide everything you expect from those who provide you with software and services.
Reliance on frameworks/platforms: We rarely build our web applications from the ground up in shiny new C code. We use a mixture of different frameworks, development tools, platforms, and off-the-shelf components to piece them together. We are challenged to secure and deploy these pieces as well as the custom code we build with and on top of them. In many cases we create security issues through unintended uses of these components or interactions between the multiple layers due to the complexity of the underlying code. If we can use frameworks for all the other parts, why not use one for security as well, and simply keep an eye out for any vulnerability found in that framework and respond by updating? The community can update faster and better than one person can.

Why has JXTA been abandoned? Any alternatives out there?

P2P/grid computing seems like a promising concept. JXTA looks like the only all-in-one framework for it. Is there a reason this field is so sparsely pursued?
I led the releases of JXTA 2.6 and 2.7 - JXTA is not completely abandoned. Some people have posted patches on the 2.6 branch, and it could easily be merged with the 2.7 branch.
There are many reasons why people did not carry on participating in JXTA:
Oracle did not follow-up on their duties regarding project governance, which left the project in a limbo state.
Oracle did not follow-up on a request to move the project to Apache.
The code base was old. We cleaned it and implemented unit tests. But in order to move the project to the next level, it would have required a lot of rewriting. Not enough volunteers.
But more fundamentally, the reason few P2P frameworks took off is that P2P is fundamentally complex once you get into the details. Most people don't get it until they start getting their hands dirty. It is not possible to implement P2P 'in a simple way'.
So nothing to do with all-Java clients, licensing fees or others.
Update (August 2013): You thought JXTA/JXSE was dead? Well someone worked further on it and developed a DZone tutorial (unfortunately, SO does not allow links to Dzone, so Google: JXSE and Equinox Tutorial).
Update (November 2013): A group of people is working on new releases of JXTA. For more information, register on the mailing lists.
Interestingly, what was missing from all the P2P initiatives of the past was a motivation for a peer to stay active. The question was always: why would a peer keep running CPU-draining, verbose XML-based protocols?
Trust was another factor - how can I trust a peer? As a key member of the team, I helped introduce security, but security doesn't address trust.
To make it even worse, JXTA introduced the concept of super nodes - defeating the very concept of peer to peer.
However, not everything was that bad. JXTA provided a lot of new concepts. One was edge computing, with JXME and JXTA sitting together - you could call it the present-day fog computing, where the heavy lifting was on the JXTA node and some intelligence sat on the constrained JXME nodes.
Fast forward: blockchain addressed most, if not all, of the questions that no P2P platform could answer: trust, incentivizing peers, tamper-proofing, and much more.
P2P is still alive :)
I think it's for the same reasons that RMI, CORBA, and Jini aren't much in favor: complex and closed.
Simple and open win most of the time.
It might have had something to do with all-Java clients or licensing fees or something else.
It could be competition. MPI is a widely accepted messaging standard for computing. Hadoop is getting a lot of traction.
UPDATE: The answer that was accepted discusses why people may or may not choose to participate in JXTA. I think my answer has more to do with user adoption, which is different. Mine goes back to the origins of JXTA, not the details of releases 2.6 and 2.7.
If you work with Linux, try this: http://www.p2pns.org/
"P2PNS (Peer-to-Peer Name Service) is a distributed name service using a peer-to-peer network. The current focus of P2PNS is to provide a secure and efficient SIP name resolution for decentralized VoIP ( P2PSIP)."
In most cases Name Resolution is enough to build up a P2P-App on top of it.

What came before web services and SOA?

I'm interested in the history of distributed, collaborative, cross-organisational programming paradigms - web services and SOA are the de facto standards now, but what came before? What models have been superseded by SOA?
Thanks
Well, I suppose there was RPC - which is really what SOAP is, only they didn't piggy-back the data payload on top of a standard protocol (http in SOAP's case). So CORBA and DCE-RPC and ONC RPC all did the same thing, but only over internal networks, not over the internet.
There was also EDI as a 'standard' for exchanging data between disparate entities. This was effectively a way of defining what the data payload would look like (similar to the XML part of SOAP).
But these still weren't really SOAs; they provided the same functionality, but the big difference was how people thought about using them. Once you could write a machine-to-machine 'website' and have different machines talk to each other through them, it took off. You could do it before using CORBA, say, but it wasn't as easy or as widely known about. You can tell this has happened by the fact that we have several terms used for effectively the same thing - SOA, SaaS, Web Services... all the same thing (but lots of money to be made 'consulting' on the difference ;) )
Maybe Silos?
...where services are just not shared across an enterprise, at least in a standard way. This is why products like BizTalk are used: to get silos to talk to each other via standard interfaces.
I don't really think you'll find anything that's been superseded by SOA. You will find that there's been progress in organizing computer programs to take advantage of SOA-type principles. As for programming models that have been in reasonably common use, well, let's see... CORBA, RPC, more generic client-server applications. Of course, computer-to-computer communications were preceded by process-to-process communication using a wide variety of conventions.
SOA as a philosophy of breaking large problems into smaller ones and then composing the results has been known and applied since humans started making bricks instead of building complete walls. Of course, that was mostly implicit. Explicit statements for SOA really started to come about with CORBA and, while SOA is independent of Web Services, the advent of HTTP and XML, and then SOAP, really started to make development of non-specialized "services" easier, more worthwhile and thus common.
This pdf A Note on Distributed Computing should be an interesting read. It is pre-SOA and would give an idea of the history up to that point (1994).
I would say distributed object technology. And before it remote procedure calls.
RPC is one of the earlier approaches and gained popularity from the Sun implementation. One of the famous uses is NFS (network file system).
As object oriented programming became more popular, distributed objects followed. Most important was Microsoft DCOM (and later COM+) and, more industry wide, CORBA.
SOA is a divide and conquer approach that is critically dependent on the concept of services. Which is different from objects as used by CORBA et al, as well as being different from resources as in REST.
Objects are created and their lifetime is typically controlled by the client. On the other hand, services are assumed to be always there provided by the server. This is one reason why SOA is not equivalent to distributed objects.
Services are also stateless, which means that the server, when considering the response to a service request, need not look at the history of its interaction with the client. This was not a consideration when the RPC concept was originally devised, as scalability wasn't such an important issue then. Interestingly, large-scale users of RPC did notice the relationship between scalability and statelessness. The NFS RFC explicitly mentions stateless servers, though with reliability as the main concern. Anyway, statelessness is one of the main differences between services and plain old RPC.
In short, no. I don't believe in the revisionist history of SOA being since the dawn of time. Any more than the universe being written in Lisp (or Perl for that matter). Nor is it equivalent to divide and conquer or division of labour.
SOA started as a concept at some point in the nineties. Overlapping with the development of CORBA. It is much harder to pinpoint an actual date or event and there are more than a few claims to the conceptualisation of it.

Software and Security - do you follow specific guidelines?

As part of a PCI-DSS audit we are looking into improving our coding standards in the area of security, with a view to ensuring that all developers understand the importance of this area.
How do you approach this topic within your organisation?
As an aside we are writing public-facing web apps in .NET 3.5 that accept payment by credit/debit card.
There are so many different ways to break security. You can expect infinite attackers. You have to stop them all - even attacks that haven't been invented yet. It's hard. Some ideas:
Developers need to understand well known secure software development guidelines. Howard & Le Blanc "Writing Secure Code" is a good start.
But being good rule-followers is only half the point. It's just as important to be able to think like an attacker. In any situation (not only software-related), think about what the vulnerabilities are. You need to understand some of those weird ways that people can attack systems - monitoring power consumption, speed of calculation, random number weaknesses, protocol weaknesses, human system weaknesses, etc. Giving developers freedom and creative opportunities to explore these is important.
Use checklist approaches such as OWASP (http://www.owasp.org/index.php/Main_Page).
Use independent evaluation (eg. http://www.commoncriteriaportal.org/thecc.html). Even if such evaluation is too expensive, design & document as though you were going to use it.
Make sure your security argument is expressed clearly. The common criteria Security Target is a good format. For serious systems, a formal description can also be useful. Be clear about any assumptions or secrets you rely on. Monitor security trends, and frequently re-examine threats and countermeasures to make sure that they're up to date.
Examine the incentives around your software development people and processes. Make sure that the rewards are in the right place. Don't make it tempting for developers to hide problems.
Consider asking your QSA or ASV to provide some training to your developers.
Security basically falls into one or more of three domains:
1) Inside users
2) Network infrastructure
3) Client side scripting
That list is written in order of severity, which is the opposite of the order of violation probability. Here are the proper management solutions from a very broad perspective:
The only solution to prevent violations from the inside user is to educate the user, enforce awareness of company policies, limit user freedoms, and monitor user activities. This is extremely important as this is where the most severe security violations always occur whether malicious or unintentional.
Network infrastructure is the traditional domain of information security. Two years ago security experts would not consider looking anywhere else for security management. Some basic strategies are to use NAT for all internal IP addresses, enable port security in your network switches, physically separate services onto separate hardware, and carefully protect access to those services even after everything is buried behind the firewall. Protect your database from code injection. Use IPsec to reach all automation services behind the firewall and limit points of access to known points behind an IDS or IPS. Basically, limit access to everything, encrypt that access, and inherently assume every access request is potentially malicious.
Over 95% of reported security vulnerabilities are related to client-side scripting from the web, and about 70% of those target memory corruption, such as buffer overflows. Disable ActiveX and require administrator privileges to activate ActiveX. Patch all software that executes any sort of client-side scripting in a test lab no later than 48 hours after the patches are released by the vendor. If the tests do not show interference with the company's authorized software configuration, then deploy the patches immediately. The only solution for memory corruption vulnerabilities is to patch your software. This software may include: Java client software, Flash, Acrobat, all web browsers, all email clients, and so forth.
As far as ensuring your developers are compliant with PCI accreditation, make sure they and their management are educated to understand the importance of security. Most web servers, even large corporate client-facing web servers, are never patched. Those that are patched may take months to be patched after they are discovered to be vulnerable. That is a technology problem, but more importantly it is a gross management failure. Web developers must be made to understand that client-side scripting is inherently open to exploitation, even JavaScript. This problem has become easier to realize with the advance of AJAX, since information can be dynamically sent to an anonymous third party in violation of the same-origin policy and completely bypass the encryption provided by SSL. The bottom line is that Web 2.0 technologies are inherently insecure, and those fundamental problems cannot be solved without defeating the benefits of the technology.
When all else fails hire some CISSP certified security managers who have the management experience to have the balls to speak directly to your company executives. If your leadership is not willing to take security seriously then your company will never meet PCI compliance.

Key Generation/Validation, What's out there?

I've been asked to develop a key generation/validation system for some software. They would also be open to an existing open source or commercial system, but would prefer a system built from scratch. Online activation would have to be optional, since it is likely that some installations would be on isolated servers. I know there is a kind of usability/security tension with a lot of anti-piracy techniques. So I guess I'm asking: what software, libraries, and techniques are out there? I would appreciate personal knowledge, web sites, or books.
If you take the hash of something, it will result (ideally) in an unpredictable string of characters.
Your algorithm could be to take the SHA1 of something predictable (like sequential numbers) concatenated with a sufficiently long salt. Your keys would be really secure as long as your salt remains a permanent secret and SHA1 is never breached.
For example, if you take the SHA1 of "1" (your first license key) and a super secret salt "stackoverflow8as7f98asf9sa78f7as9f87a7", you get the key "95d78a6331e01feca457762a092bdd4a77ef1de1". You could prepend this with version numbers if you want.
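A minimal sketch of that scheme, assuming Java and the standard MessageDigest API (the salt value here is only illustrative, and the exact key you get depends on how you concatenate the number and the salt, so it won't necessarily match the example digest above):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class KeyGenerator {
    // The salt must stay a secret on the vendor's side; this value is illustrative only.
    private static final String SECRET_SALT = "stackoverflow8as7f98asf9sa78f7as9f87a7";

    // Derive a license key from a sequential number plus the secret salt.
    public static String generateKey(long sequenceNumber) throws NoSuchAlgorithmException {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest(
                (sequenceNumber + SECRET_SALT).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // License key #1, derived from the sequence number and the secret salt.
        System.out.println(generateKey(1));
    }
}
```

Validation is then just regenerating the hash for the claimed sequence number on your side and comparing it with the submitted key.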
If you want online authorization, you need three things:
To ensure that the response cannot be forged
To ensure that the request cannot be forged
To ensure that if Internet is unavailable, you take appropriate action
Public key cryptography can help with items 1 and 2. Item 3 is a tricky one; even Photoshop CS4 has problems with it.
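For items 1 and 2, here is a rough sketch of what "the response cannot be forged" looks like with public key cryptography, assuming Java's built-in Signature API. The response string and the in-memory key pair are placeholders; in practice the private key lives only on the activation server and the public key ships with the product.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class ActivationSignatureDemo {
    public static void main(String[] args) throws Exception {
        // The vendor holds the private key on the activation server;
        // the matching public key is embedded in the application.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        String response = "license=...;expires=...";   // placeholder activation response
        byte[] signature = sign(response, pair.getPrivate());

        // The client verifies the response with the embedded public key, so a
        // forged response made without the private key fails verification.
        System.out.println("Response authentic? " + verify(response, signature, pair.getPublic()));
    }

    static byte[] sign(String data, PrivateKey key) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(key);
        s.update(data.getBytes(StandardCharsets.UTF_8));
        return s.sign();
    }

    static boolean verify(String data, byte[] sig, PublicKey key) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(key);
        s.update(data.getBytes(StandardCharsets.UTF_8));
        return s.verify(sig);
    }
}
```

The same approach applied in the other direction (the client signing its request, or including a machine-bound nonce) addresses request forgery; only the offline case remains a policy decision rather than a cryptographic one.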
I'm biased - given that the company I co-founded developed the Cobalt software licensing solution for .NET - but I'd suggest you go with a third-party solution rather than rolling your own.
Take a look at the article Developing for Software Protection and Licensing, which makes the following point:
We believe that most companies would be better served by buying a high-quality third-party licensing system. This approach will free your developers to work on core functionality, and will alleviate maintenance and support costs. It also allows you to take advantage of the domain expertise offered by licensing specialists, and avoid releasing software that is easy to crack.
Another advantage to buying a third-party solution is that you can quickly and easily evaluate it for suitability; with an in-house system you have to pay in advance for the development of a system that may not prove adequate for your needs. Choosing a high-quality third-party system dramatically reduces the risk involved in developing a solution in-house.
If you're dead set on rolling your own, a word of advice: test on the widest range of client systems possible. Real-world hardware is weird, and Windows behaviour varies quite dramatically in some ways between versions.
You'll almost certainly have to spend a lot of time ironing the creases out of whatever hardware identification system you implement.
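As a hint of where those creases come from, here is a minimal sketch of the kind of hardware fingerprint such systems often collect, using only standard Java networking APIs. The choice of identifiers is an assumption for illustration, not a recommendation of any particular scheme.

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Enumeration;

// Rough sketch of gathering machine identifiers for a license binding.
// Real-world hardware is messy: interfaces may be down, virtual, or report
// no MAC address at all, which is exactly where the "creases" appear.
public class HardwareFingerprint {
    public static void main(String[] args) throws Exception {
        StringBuilder id = new StringBuilder();
        id.append(InetAddress.getLocalHost().getHostName());

        Enumeration<NetworkInterface> nics = NetworkInterface.getNetworkInterfaces();
        while (nics.hasMoreElements()) {
            NetworkInterface nic = nics.nextElement();
            byte[] mac = nic.getHardwareAddress();  // may be null (loopback, VPN adapters)
            if (mac != null && !nic.isLoopback() && !nic.isVirtual()) {
                for (byte b : mac) {
                    id.append(String.format("%02x", b & 0xff));
                }
                break;  // take the first physical-looking interface
            }
        }
        // A real system would hash this value and bind the license key to the hash.
        System.out.println("Machine fingerprint source: " + id);
    }
}
```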
