Use a framework for security, or do it myself?

I have had this doubt for a while, and today I'm not so firm in my position, despite having taken one.
Whenever I develop or participate in the development of a (web) application, we typically handle security tooth and nail ourselves; that is, we implement all the security-related processes, from sessions to password hashing, and so on.
I remember hearing someone say that it is always better to use a framework (Spring, Apache Shiro, etc.).
What is your suggestion?

Yes, it is always better to use a framework rather than reinventing the whole wheel again. I personally prefer Apache Shiro and have made customizations to suit my needs by extending the classes it provides.
Read more here: http://shiro.apache.org/
Some points to help make up your mind:
Custom code equals custom vulnerabilities: With web applications you typically generate most of the application code yourself (even when using common frameworks and plugins). That means most vulnerabilities will be unique to your application. It also means that unless you are constantly evaluating your own application, there’s no one to tell you when a vulnerability is discovered in the first place.
You are the vendor: When a vulnerability appears, you won't have an outside vendor providing a patch (you will, of course, have to install patches for whatever infrastructure components, frameworks, and scripting environments you use). If you provide external services to customers, you will need to meet any service-level agreements you offer, and you must be prepared to be viewed by them just as you view your own software vendors, even if software isn't your business. You have to patch your own vulnerabilities, handle your own customer relations, and provide everything you expect from those who provide you with software and services.
Reliance on frameworks/platforms: We rarely build our web applications from the ground up in shiny new C code. We use a mixture of different frameworks, development tools, platforms, and off-the-shelf components to piece them together. We are challenged to secure and deploy these pieces as well as the custom code we build with and on top of them. In many cases we create security issues through unintended uses of these components, or through interactions between the multiple layers, due to the complexity of the underlying code. If we already rely on frameworks for all the other parts, why not use one for security too, and just keep an eye out for any vulnerability found in that framework and respond by updating? The community can patch faster and better than one developer on their own.
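To make the "don't reinvent the wheel" point concrete, here is a minimal sketch in TypeScript (using Node's built-in crypto module; the hashPassword/verifyPassword names are illustrative, not Shiro's API) of the kind of detail best left to a vetted framework:

    import { randomBytes, scryptSync, timingSafeEqual } from "crypto";

    // Hash a password with a per-user random salt using scrypt, a deliberately
    // slow, memory-hard KDF (never a plain fast hash like MD5 or SHA-1).
    function hashPassword(password: string): string {
      const salt = randomBytes(16);
      const hash = scryptSync(password, salt, 64);
      return `${salt.toString("hex")}:${hash.toString("hex")}`;
    }

    // Verify with a constant-time comparison to avoid timing side channels.
    function verifyPassword(password: string, stored: string): boolean {
      const [saltHex, hashHex] = stored.split(":");
      const hash = scryptSync(password, Buffer.from(saltHex, "hex"), 64);
      return timingSafeEqual(hash, Buffer.from(hashHex, "hex"));
    }

A framework like Shiro bundles exactly this sort of detail (salting, slow hashing, constant-time comparison), each of which is easy to get subtly wrong by hand.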

Related

Is it possible to program privacy-oriented software using Electron?

As part of a small research project, I have to create a program using the Electron framework (https://electronjs.org/ and https://en.wikipedia.org/wiki/Electron_(software_framework)). The problem is that this program will manage very private information belonging to other people. These people have raised some concerns with me about the Chromium frontend used by Electron, and about the possibility that Electron is bundled with Chrome binaries (which are not open source).
How can I determine that the program I develop with Electron is not sending private information to Google's servers or to any other third party? Can it be determined whether the application is accessing information without the user knowing about it? (One such check is sketched at the end of this question.)
In other words: has Electron been verified to be reliable in terms of privacy and security of information? Of course, I understand there could be security vulnerabilities, as in any other software. My question is more oriented toward intentional privacy vulnerabilities, i.e., have the developers of Electron taken the necessary measures to guarantee safety and privacy?
I beg you to avoid opinion-based information and to present verifiable facts, e.g., results of tests that have been applied to the framework, independent reviews by reliable third parties, official statements from security companies, standard tests I can run myself on Electron, etc.
Thanks in advance for your answers!
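One standard test you can run yourself: intercept every outgoing request the app makes and log or block anything bound for a host you did not expect. Here is a minimal sketch using Electron's documented webRequest API (the allow-list content is hypothetical):

    import { app, session } from "electron";

    // Hypothetical allow-list: the only hosts our app is expected to contact.
    const ALLOWED_HOSTS = new Set(["api.example.org"]);

    app.whenReady().then(() => {
      // Inspect every request made through the default session, before it is sent.
      session.defaultSession.webRequest.onBeforeRequest((details, callback) => {
        const host = new URL(details.url).hostname;
        if (!ALLOWED_HOSTS.has(host)) {
          console.warn(`Unexpected outbound request: ${details.url}`);
          callback({ cancel: true }); // deny anything not on the allow-list
        } else {
          callback({ cancel: false });
        }
      });
    });

This only observes traffic going through that session's Chromium network stack, so for a stronger guarantee you would also watch the process externally (e.g., with Wireshark) while exercising the app.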

Microservices dependence management - Governance or Domain Driven Design?

Background: an international company with a federation model is transforming to microservices due to chronic monolithic pain. Autonomous teams with quick deployment are highly desirable. In spite of the theory, services are in fact dependent on each other for higher functionality, but they are autonomous (independently developed and deployed). Since this is a federation model with decentralized control, we cannot impose strict rules - just like the UN. Without a governance platform that manages dependencies, we foresee uncontrollable chaos due to the multiple versions in production in different countries.
Let's call a set of microservices that needs to collaborate a "Compatibility Set". A service can be deployed and yet not satisfy the higher functionality of its Compatibility Set. For example, MicroService A-4.3 is fully autonomous, deployed, and working perfectly. However, to satisfy BusinessFunctionality 8.6 it must work together with MicroService B-5.4 and MicroService C-2.9. Together (A-4.3, B-5.4 and C-2.9) they form a "Compatibility Set".
There are two approaches to this dilemma. This is microservices in real life, where the rubber hits the road and the learning from experience begins...
Approach 1) Governance Platform
Rationale: a federal model in an international company in 100+ countries. This means Central IT can lay down the model, but individual countries can choose their own destiny - and they frequently do. It frequently devolves into chaos, and the Central IT team is on the hook. DDD is the solution for an ideal world in which version inconsistencies do not derail functionality, such as releasing services that do not fit into the Compatibility Set: individually blameless, but together they fall apart or produce flawed or inconsistent functionality.
There is no homogeneity; there isn't even standardization of terminology
Developers are of mixed skill, many junior, and many still learning reactive programming and cloud-native technologies
Bounded Context heavily depends on a Shared Vocabulary, and this can get subtle; it is impossible to enforce and naive to assume in an international, mixed-skill, fragmented scenario with multiple versions running
Standardization on a single business model is not realistic in such a heterogeneous system (though it would be ideal)
So what is Central IT to do when they're held responsible for this chaos?
Enforce a Governance Platform
Create a microservices governance system or framework to enforce dependency management. It verifies and enforces, at design time and at run time, the dependencies of a particular microservice through a manifest, and performs some checks and balances to verify the service implementations being offered: the "Compatibility Set".
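As an illustration only (the manifest shape below is invented for this sketch, not a real product's format), such a governance check could boil down to validating a service's declared dependencies against what is actually deployed:

    // Hypothetical manifest: each service declares the versions of its peers
    // it can collaborate with, forming its side of a Compatibility Set.
    interface Manifest {
      name: string;
      version: string;
      requires: Record<string, string[]>; // peer service -> acceptable versions
    }

    // The versions currently deployed in one country/environment.
    type Deployment = Record<string, string>;

    // Design-time or run-time check: does the deployment satisfy the manifest?
    function checkCompatibilitySet(m: Manifest, d: Deployment): string[] {
      const violations: string[] = [];
      for (const [svc, versions] of Object.entries(m.requires)) {
        const deployed = d[svc];
        if (!deployed || !versions.includes(deployed)) {
          violations.push(
            `${m.name}-${m.version} needs ${svc} in [${versions.join(", ")}], found ${deployed ?? "nothing"}`
          );
        }
      }
      return violations;
    }

    // The example from above: A-4.3 needs B-5.4 and C-2.9 for functionality 8.6.
    const a: Manifest = { name: "A", version: "4.3", requires: { B: ["5.4"], C: ["2.9"] } };
    console.log(checkCompatibilitySet(a, { B: "5.4", C: "2.8" }));
    // -> [ 'A-4.3 needs C in [2.9], found 2.8' ]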
Approach 2) Domain Driven Design (DDD)
DDD is about modelling domains that are constantly evolving, where domain experts (typically a business stakeholder, or perhaps an analyst) work alongside developers to design the system. Within each domain, a ubiquitous language is formed, such that within that context, the same word always means the same thing. An important thing to realise is that in one part of your system, “Order” might mean one thing; it might mean, for example, a list of products. In another part of your system, “Order” might mean something else; it might mean a financial transaction that happened. This is where the model you describe can fall down: if my service needs to get a list of orders, perhaps there is a capability out there that supplies a list of orders, but which orders are they? The lists of products, or the financial transactions? Trying to coordinate as many developers as you have so that they all use the same language is an impossible task that is doomed to fail.
In DDD, rather than trying to manage this at a system level and force every service to use the same definition of Order, DDD embraces the inherent complexity of coordinating very large deployments with huge numbers of developers involved, and allows each team to work independently, coordinating with other teams as needed, not through some centralised dependency-management system. The term used in DDD is bounded contexts: in one bounded context, Order means one thing, and in another bounded context, Order can mean another thing. These contexts can function truly autonomously. You describe your services as autonomous, but if they have to match their definition of order with the entire system by registering and supplying dependencies to a central registry, then really they are tightly coupled to the rest of the system and to what it considers an order to be. You end up with all the painful coupling of a monolith, plus all the pain of building a distributed system, and you won't realise many of the benefits of microservices if you take this approach.
So a DDD-based approach never tries to enforce dependencies or capabilities in a heavy-handed way at the system level; rather, it allows individual teams to work without needing central coordination. If Service A needs to interact with Service B, then the team that manages Service A will work with the team that manages Service B; they can build an interface between their bounded contexts and come to an agreement on the language for that interface. It is up to these teams to manage their dependencies with each other; at the system level, things can remain quite opaque/unenforced.
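To make the Order example concrete, here is a small sketch (all types invented for illustration): each bounded context keeps its own definition of Order, and the two teams agree only on an explicit translation at the interface between their contexts.

    // Sales context: an Order is a list of products.
    namespace Sales {
      export interface Order { orderId: string; productIds: string[]; }
    }

    // Finance context: an Order is a financial transaction that happened.
    namespace Finance {
      export interface Order { transactionId: string; amountCents: number; settledAt: Date; }
    }

    // The agreed interface between the two contexts: an explicit translation
    // owned by the two teams, not a system-wide shared definition of Order.
    function toFinanceOrder(o: Sales.Order, priceCents: (id: string) => number): Finance.Order {
      const amountCents = o.productIds.reduce((sum, id) => sum + priceCents(id), 0);
      return { transactionId: `tx-${o.orderId}`, amountCents, settledAt: new Date() };
    }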
Too often we see people implement “microservices” but end up with a system that is just as inflexible, if not more so, and often more fragile than a monolith - sometimes called a "Minilith" or "Monolith 2.0". Microservices require a complete rethink of architecture and software-development processes, and require not just allowing services to be autonomous and independently managed, but also teams to be independent, not centrally managed. Centralising the management of dependencies and capabilities in a system is likely to be an inhibitor to successfully building a microservice-based system.
Intelligent and Pragmatic comments invited...
Approach 1 (Governance) is pragmatic and tactical, and intended to solve very real challenges. The question is: will it undermine the long-term strategic DDD model of the enterprise?
Approach 2 (DDD) is ideal and aspirational but doesn't address the very real challenges that we have to deal with right now.
Opinions? Thoughts? Comments?
I've seen multi-national companies try to cooperate on a project (or be controlled from a central IT team) and it's a nightmare. This response is highly subjective, based on what I've personally read and seen, so it's just my opinion and probably not everyone's. Generally, broad questions aren't encouraged on Stack Overflow, as they attract highly opinionated answers.
I'd say DDD probably isn't the answer. You'd need a large number of developers to buy into the DDD idea. If you don't have that buy-in, then (unless you have a team of exceptionally self-motivated people) you'll see the developers try to build the new system on top of the existing database.
I'd also argue that microservices aren't the answer. Companies that have used microservices to their advantage are essentially using them to compartmentalise their code into small stacks of individually running services/apps that each do a single job. These microservices (in the success stories I've seen) tend to be loosely coupled. I imagine that if you have a large number of services that are highly coupled, then you've still got the spaghetti aspects of a monolith, but now spread out over a network.
It sounds like you just need a well architected system, designed to your specific needs. I agree that using DDD would be great, but is it a realistic goal across a multi-national project?
I also dealt with the problem described in the question, and I came up with an approach in which I use API definitions, such as OpenAPI definitions, to check compatibility between two services. The API definition must be attached as metadata to each service, which makes it possible to do the check both at run time and at design time. It is important that the API definitions are part of the metadata both when the API is offered and when the API is consumed. With tools like Swagger-Diff or OpenAPI-Diff, the compatibility check can be automated.
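A simplified sketch of that idea (real tools such as openapi-diff check far more, e.g., parameter and schema changes): verify that every path/method the consumer relies on still exists in the provider's current API definition.

    // A minimal slice of an OpenAPI document: paths -> HTTP methods.
    type OpenApiDoc = { paths: Record<string, Record<string, unknown>> };

    // Returns the operations the consumer needs that the provider no longer offers.
    function missingOperations(required: OpenApiDoc, offered: OpenApiDoc): string[] {
      const missing: string[] = [];
      for (const [path, methods] of Object.entries(required.paths)) {
        const offeredMethods = offered.paths[path];
        for (const method of Object.keys(methods)) {
          if (!offeredMethods || !(method in offeredMethods)) {
            missing.push(`${method.toUpperCase()} ${path}`);
          }
        }
      }
      return missing;
    }

    // Deploy-time gate: refuse a deployment whose Compatibility Set is broken.
    // const problems = missingOperations(consumerSpec, providerSpec);
    // if (problems.length > 0) throw new Error(`Incompatible: ${problems.join(", ")}`);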

What would be the disadvantages of building a website purely in Eiffel using EWF (Eiffel Web Framework)?

We are looking to build a website on top of an existing Eiffel business-tier core, which sits over an MS SQL Server database. I am presently considering the advantages and disadvantages of writing the web and mobile tiers either purely in Eiffel, purely in typical web stacks, or as some hybrid.
For us, there are clear advantages to pure Eiffel, not the least of which are:
Inheritance and other language notation mechanisms not found in other languages.
The compiler cannot see into code from other languages, so we are at a disadvantage once we cross out of Eiffel into something else.
Auto-Test is something we rely on heavily in our Eiffel code, and it takes clear advantage of Design by Contract. In other languages, we lose this power and are left with TDD (i.e., their version of Auto-Test in Eiffel).
We have no more to learn than Eiffel, HTML5, CSS3, JS, and whatever JS framework(s) we use.
Every new language and tool adds more complexity to the project.
Eiffel programs are compiled to C and then to native executables, which are far faster than their scripted and interpreted counterparts.
I think there are also some clear advantages to existing, non-Eiffel languages as well:
Existing frameworks and tools can produce simple to moderately complex web sites and mobile applications rather quickly.
Existing "best practices" are not terrible and produce reasonably reliable and maintainable code.
I am not sure what all of the advantages and disadvantages are, so I am asking. However, at the end of the day: Our core business suite is pure Eiffel. That will never change.
Thanks in advance for the feedback!
Here is what I can say from my own experience (I have created several web applications in different frameworks, including one in Eiffel). First, the Eiffel Web Framework is quite usable right now. The advantage of other frameworks is their features. Here is a list of the major problems I encountered when I created my web application with Eiffel:
I had to create the MVC design myself (other frameworks like Django, Rails or Laravel do that automatically).
Eiffel lacks a good templating system. The Smarty library is OK, but it really lacks some of the truly good template features that others have. Also, trying to work with UTF-8 files in Smarty can be quite difficult (this was a pain for me).
I had to do some session management based on cookies because the one in the Eiffel Web Framework was quite primitive.
The release process (removing Nino) was not easy and lacked good documentation (I was using Apache; I don't know about IIS).
That's it; other than that, everything went quite smoothly.
The next list of disadvantages is from my naïve point of view:
The EWF package is not finished; it is going to gain more capabilities in the future, so you may need to follow its development to take advantage of new functionality.
The Eiffel compiler makes it impossible to update a web program on the fly; it needs to be recompiled and redeployed.
If the program is going to be multithreaded, you need to learn a structured way of dealing with concurrency based on the SCOOP model.
Some tools (e.g., XSLT processors) are not readily integrated into EWF; you may need to do this yourself.
The current EWF API is rather low-level, so before higher-level frameworks built on top of EWF become widespread, you may need to do more low-level programming than expected (by low-level I mostly mean the way you generate HTML/XML/whatever other format your web service is going to produce).
Having to use just one language for both application logic and HTML generation, which allows for easy debugging, may lower the skill requirements for your developers, and that may affect your business model.
There are several tools that address specific needs like wikis, simple web-page creation, authorization, etc., but you may need to enhance them to get richer functionality, as well as design the architecture of your software yourself, because some idioms and usage patterns are not established yet.

What are some arguments to support the position that the Dojo JavaScript library is secure, accessible, and performant?

We have developed a small web application for a client. We decided on the Dojo framework to develop the app (the requirements included full i18n and a11y). Originally, the web app we developed was to be a "prototype", but we made the prototype production quality anyway, just in case. It turns out that the app we developed (or a variant of it) is going to production (many months hence), but it's so awesome that the enterprise architecture group is a little afraid.
508c compliance is a concern for this group, as is security. I now need to justify the use of Dojo to this architecture group, explicitly making the case that Dojo does not pose a security risk and will not hurt accessibility (and that Dojo is there to help meet core requirements).
Note: the web app currently requires JavaScript to be turned on and a stylesheet to work. We use a relatively minor subset of Dojo: dojo core, of course, plus dijit.form.Form, ValidationTextBox and a few others. We do use dojox.grid.DataGrid (but no drag-and-drop or editable cells, which are not fully accessible).
I have done some research of my own, of course, but any information or advice you have would be most helpful.
Regards,
LES2
I'm not sure how to answer this question except to point out that you'd be in good company using Dojo. Several large corporations, deeply concerned about security issues, have contributed to the toolkit and use it in their own products. Audits have been done on the toolkit, including one recently that did expose a problem, which was quickly patched; in fact, the CDN feature of Dojo, if you use it, means you can pick up patches like this automatically.
Other than that, I'm not sure what proof to offer. A toolkit is secure until someone finds a security hole! Also, there are plenty of things you can do with Dojo, or the underlying HTML/JS technology, that are not secure. You need to follow best practices.
One example is with JSON. There are a couple of methods for handling JSON. The base one is fast and works on older browsers, but it is known not to be secure. It is meant to be used only with trusted data sources, and typically, under the same-domain policy, that's what you'll be doing. There are alternatives in dojox.secure which you might want to look at; depending on what you're doing, you may be able to add an extra level of security to your application.
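To illustrate the JSON point (a sketch of the general hazard, not Dojo's actual source): eval-style parsing executes whatever the "data" contains, which is why it must only ever see trusted input, while a strict parser rejects anything that is not pure JSON.

    // An attacker-controlled response masquerading as JSON:
    const payload = '{"user": alert(document.cookie)}';

    // Eval-style parsing (what fast, old-browser JSON helpers effectively did)
    // would run the embedded code with the page's privileges:
    // const data = eval("(" + payload + ")"); // DANGEROUS with untrusted input

    // Strict parsing throws on anything that is not valid JSON:
    try {
      const data = JSON.parse(payload);
      console.log(data);
    } catch {
      console.warn("Rejected non-JSON payload");
    }

The dojox.secure alternatives mentioned above exist for exactly this reason; on modern browsers, plain JSON.parse gives the same strict behavior.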
For performance, you can look at the various benchmarks like taskspeed, which focus largely on the dojo.query DOM traversal functionality common to most toolkits. Of course, YMMV depending on your usage of Dojo, but there's healthy competition between the toolkits and continuous improvement with each release.
For accessibility, all Dijit widgets were reviewed and considered to be 508c compliant. There is more precise documentation on Dojo/Dijit a11y requirements. Not all dojox widgets pass this requirement.
HTH

Application Security Audit of a .NET Web Application?

Does anyone have suggestions for security auditing of a .NET Web Application?
I'm interested in all options. I'd like to be able to have something agnostically probe my application for security risks.
EDIT:
To clarify, the system has been designed with security in mind. The environment has been set up with security in mind. I want an independent measure of security, other than 'yeah, it's secure'... The cost of having someone audit 1M+ lines of code would probably exceed the cost of the development itself. It looks like there really isn't a good automated/inexpensive approach to this yet. Thanks for your suggestions.
The point of an audit would be to independently verify the security that was implemented by the team.
BTW - there are several automated hack/probe tools for probing applications/web servers, but I'm a bit concerned about whether they are worms or not...
The best thing to do:
Hire a security guy for source-code analysis.
The second-best thing to do: hire a security guy / pentesting company for black-box analysis.
The following tools will help:
Static analysis tools such as Fortify or Ounce Labs for code review
Solutions such as HP WebInspect's secure object (a VS.NET add-on)
A black-box application scanner such as Netsparker, AppScan, WebInspect, Hailstorm or Acunetix, or the free version of Netsparker
Hiring a security specialist is a much better idea (though it will cost more), because they won't only find the injection and technical issues that an automated tool might find; they will find all the logical issues as well.
Anyone in your situation has the following options available:
Code Review,
Static Analysis of the code base using a tool,
Dynamic Analysis of the application at run time.
Mitchel has already pointed out the use of Fortify. In fact, Fortify has two products covering the areas of static and dynamic analysis: SCA (a static analysis tool, to be used during development) and PTA (which performs analysis of the application as test cases are executed during testing).
However, no tool is perfect, and you can end up with false positives (fragments of your code base that are not vulnerable but get flagged anyway) and false negatives. Only a code review can solve such problems. Code reviews are expensive: not everyone in your organization is capable of reviewing code with the eyes of a security expert.
To begin with, one can start with OWASP. Understanding the principles behind security is highly recommended before studying the OWASP Development Guide (3.0 is in draft; 2.0 can be considered stable). Finally, you can prepare to perform the first scan of your code base.
One of the first things I have started to do with our internal applications is to use a tool such as Fortify, which does a security analysis of your code base.
Otherwise, you might consider enlisting the services of a third-party company that specializes in security and have them test your application.
Testing and static analysis are a very poor way to find security vulnerabilities, and are really a method of last resort if you haven't thought about security throughout the design and implementation process.
The problem is that you are then trying to enumerate all of the ways your application could fail and deny those (by patching), rather than trying to specify what your application should do and prevent everything that isn't that (by defensive programming). Since your application probably has infinite ways to go wrong and only a few things it is meant to do, you should take a 'deny by default' approach and allow only the good stuff.
Put another way, it's easier and more effective to build in controls that prevent whole classes of typical vulnerabilities (for examples, see OWASP, as mentioned in other answers), no matter how they may arise, than it is to go looking for the specific screwup in some version of your code. You should be trying to demonstrate the presence of good controls (which can be done), rather than the absence of bad stuff (which can't be).
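A tiny sketch of the deny-by-default idea (the username rule is just an example): state exactly what a value may look like and reject everything else, rather than enumerating the bad inputs.

    // Allow-list: a precise definition of what is valid; all else is denied.
    const USERNAME = /^[a-z][a-z0-9_]{2,31}$/;

    function parseUsername(input: unknown): string {
      if (typeof input !== "string" || !USERNAME.test(input)) {
        throw new Error("Invalid username"); // deny by default
      }
      return input;
    }

    // Contrast with blacklisting ("strip <script>, strip quotes, ..."), which
    // is the losing game of enumerating every way the input could go wrong.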
If you get somebody to review your design and security requirements (what exactly are you trying to protect against?), with full access to the code and all the details, that will be more valuable than some kind of black-box test, because if your design is wrong, it won't matter how well you implemented it.
We have used Telus to conduct pen testing for us a few times and have been impressed with the results.
May I recommend you contact Artec Group, Security Compass and Veracode and check out their offerings...
