SCORM - RTE reporting mechanism security

I am investigating SCORM compliance as an option for a software project I am involved in. If this is too esoteric for SO, I am sorry - not sure where else to turn.
I am a little confused as to how the SCO (Sharable Content Object) reports a quiz score, for example, to the LMS. From what I can gather from the official documentation, this is to be done using the LMSSetValue function of the RTE API object, which is just a bunch of JavaScript.
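For reference, here is roughly what that reporting looks like from inside a SCORM 1.2 SCO (a minimal sketch: the data model element and the LMS* calls are from the SCORM 1.2 spec, the findAPI helper is the conventional discovery idiom, and error handling is omitted):

    // Walk up the frame hierarchy to find the API object the LMS exposes.
    function findAPI(win) {
      while (win && !win.API && win.parent !== win) {
        win = win.parent;
      }
      return win && win.API ? win.API : null;
    }

    var API = findAPI(window);
    API.LMSInitialize("");                        // start the session
    API.LMSSetValue("cmi.core.score.raw", "85");  // report the quiz score
    API.LMSCommit("");                            // ask the LMS to persist it
    API.LMSFinish("");                            // end the session

    // Note: nothing stops a learner from opening the browser console and
    // making the same LMSSetValue call with a score of their choosing.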
This seems wildly insecure to me, as it takes nothing to rewrite the values passed to the LMS this way.
My question is therefore: am I missing something? Are SCOs meant simply not to report such values to the LMS? It is my impression that this is the only permitted mode of communication between SCOs and the LMS.

The JavaScript API is the way data is passed from the SCO to the LMS. Are there more secure ways to pass data? Sure. But the implementation is not brand-spanking new, remember. In addition, because of portability constraints, many of the most highly secure ways of passing data are not available to SCORM developers. Portability was the main priority of the standard, not security. There is a community of experts talking about what should replace SCORM. It's called Project Tin Can. And different ways of exchanging data, including cross-domain and server-side, are being discussed there.

Related

Is it advisable to use libraries like @vitalets/google-translate-api, or to translate directly from react-native?

I am testing the "translate" and "google-translate-api" libraries, and they are fantastic: I can translate into all of Google Translate's languages directly in react-native. But I worry they could cause me problems, since the service they rely on is unlimited and free.
I am building an application with a chat where people who speak different languages can talk, and I am adding an option to translate the chats so that communication between them is more fluid.
When reviewing the code of these libraries, I realized that they call this URL: https://translate.googleapis.com/translate_a/single?client=gtx&sl=${from}&tl=${to}&dt=t&q=${encodeURI(text)}
This returns the translated phrase, so each user (client) can translate the chats directly from the application. But can calling that URL, which is apparently free, cause me any problems?
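For context, what those libraries do internally amounts to something like the following (a sketch only: the endpoint is undocumented and unofficial, and the response shape shown here is just what the libraries appear to rely on — it could change without notice):

    // Call the undocumented endpoint and join the translated segments.
    async function translate(text, from, to) {
      const url =
        "https://translate.googleapis.com/translate_a/single" +
        "?client=gtx&sl=" + from + "&tl=" + to + "&dt=t" +
        "&q=" + encodeURIComponent(text);  // safer than encodeURI for a query value
      const res = await fetch(url);
      const data = await res.json();
      // data[0] is an array of segments; each segment's first element
      // is the translated text.
      return data[0].map(function (chunk) { return chunk[0]; }).join("");
    }

    // translate("hola mundo", "es", "en").then(console.log);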
This is not a legal forum, nor should you rely on legal forums even if it were one. Your best course of action is to consult a lawyer who has experience with IP and licenses.
Keep in mind that, in general, each license has its own caveats. Having available source code does not equal free-to-use, as I have heard some claim or assume; that is an incorrect assumption and can lead to trouble. A second caveat concerns combining multiple third parties: some apparently "free-to-use" licenses (and I say "apparently" because some carry conditions, e.g. not for commercial use) might not be compatible with other apparently "free-to-use" licenses. Again, consult a lawyer on how to handle such scenarios.
This is as far as any "legal advice" can go on this forum. TL;DR: talk to a lawyer.

How to force the use of a domain service?

In your online documentation regarding domain services ("https://aspnetboilerplate.com/Pages/Documents/Domain-Services") you have a section called "How do we force to use of the Domain Service?"
In there you imply that there is a lot of external documentation around the concept of injecting a kind of "policy" service into the entity as a way to do this, but the article is rather vague on the implementation of that class, along with where it should be injected and how it is used. I have been scouring the Internet for examples of this kind of design to force the use of domain services, but haven't been able to find anything.
Browsing that documentation alone leaves too many questions unanswered.
Additionally, I was hoping I could find a simple Abp sample project that provided an example of this, but could not find anything.
I'm very curious about this because I have found it to be a big problem with large projects in the past: developers writing their own code in the application service layer, not knowing that the capabilities were already provided in some domain driven "Manager" service.
Can you provide a quick small sample of this concept fully implemented? Using Abp would be great, but a generic example would be fine as well.
take care,
jasen
A few thoughts:
The Policy pattern in that code plays no role in forcing use of the domain service. It is only making task assignment more modular and more SRP compliant.
There's only so much defensive programming can do for you. Sure, making AssignedPersonId protected so that it cannot be directly assigned is a good thing, but a programmer could just as well change it back to public. Don't rely too much on technical code to prevent bad developer behavior - shared practices and team culture are much more effective. (A generic sketch of this setter-hiding idea follows these notes.)
Questioning application sample or template code (as you do) is sound. Don't take that code as gospel truth - it was never meant to be exemplary in the first place. Try your own stuff and learn from your mistakes. Experience is not something that can be transmitted through such a document.
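Here is a minimal, generic sketch of the setter-hiding idea in plain JavaScript (not Abp; the Task/TaskManager names and the max-tasks rule are made up for illustration):

    // The entity hides its assignee; the only mutator demands the domain
    // service as a witness, so callers cannot bypass it.
    class Task {
      #assignedPersonId = null;

      assignTo(personId, manager) {
        if (!(manager instanceof TaskManager)) {
          throw new Error("Tasks may only be assigned through TaskManager");
        }
        this.#assignedPersonId = personId;
      }
    }

    // The domain service is the one place the business rule lives.
    class TaskManager {
      assign(task, person) {
        if (person.openTaskCount >= 3) {
          throw new Error("A person cannot have more than 3 open tasks");
        }
        task.assignTo(person.id, this);
        person.openTaskCount++;
      }
    }

    // Application-layer code has to go through the service:
    new TaskManager().assign(new Task(), { id: 42, openTaskCount: 0 });

In C#/Abp the equivalent is a protected setter on the entity plus a public method on the "Manager" service, as noted above; the private field here plays the role of the protected property.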

Security in Play 2.2.x

I'm trying to secure my Play application but I have no idea where to start. I have not found any chapter about this topic in the Play tutorial, and as far as I can see the security story changes between Play versions. So what are you guys using to secure your applications?
I'm new to Play, so please forgive me if I'm asking obvious questions.
Edit:
OK, maybe the question wasn't clear enough (I'm really sorry about that). When talking about security, I mean that I need something to deal with user credentials, and a tool which allows me to restrict access to some pages, and possibly to some REST actions, in my application.
Edit2:
I'll try deadbolt2 now and we'll see how it works. But I still encourage you guys to share your knowledge about Play security with others :)
The documentation still seems to be a bit lacklustre on this topic, but essentially, authentication/authorisation functionality is usually performed using Action composition, which is the basis of reusable controller code in Play. There is an example here (also linked from the docs) that should help give you the general idea.
Action composition in Play 2.2.x is done using ActionBuilders. These take a block which accepts a request and returns a Future[SimpleResult]. This allows the action builder to either execute the given block, or return a different Future[SimpleResult] (say, an Unauthorized in the case that a user's credentials did not check out.)
In our app we use the Play2-auth module for handling authentication with session cookies. This has (just) been updated to work with Play 2.2.x, but uses a slightly different mechanism for action composition (stackable controllers). You might be best off working out how the precise functionality you need can be accomplished using just the native framework tools before adding a dependency on it.
I agree with the other answers, but would just add that I use securesocial to integrate with other auth providers (Google, FB, etc.), so I don't have to do auth myself. It's quite easy to get up and running.
https://github.com/jaliss/securesocial
Access control, security, etc. is a very wide topic, because it means very different things depending on context. This may be one of the reasons why Play has little documentation for it, which puzzled me at the beginning as well.
Play2 has some security helpers, namely the Authenticated method; for some insight into how to use it, check the comments in the source code. It's a simple method that you could implement yourself, and most people do. Essentially, it just proposes a structure for where to place the methods that check whether a request is authenticated and decide what to do if it's not.
Play2 also has some cryptography logic, which is used for signing cookies.
That's about it, you don't have any more pre-built security structures, but that's a good thing, because you don't want the framework making decisions like that for you, if it doesn't know in what context it will be used.
What is essential is to go and research how attacks relevant to your application are carried out, best practices and so on. I recommend going to OWASP, particularly the OWASP Cheat Sheets. If the list of Cheat Sheets seems intimidating start with the OWASP Top Ten Cheat Sheet. Don't mind the large volume of information, it's very useful knowledge.

What are some arguments to support the position that the Dojo JavaScript library is secure, accessible, and performant?

We have developed a small web application for a client. We decided on the Dojo framework to develop the app (requirements included full i18n and a11y). Originally, the web app we developed was to be a "prototype", but we made the prototype production quality anyway, just in case. It turns out that the app we developed (or a variant of it) is going to production (many months hence), but it's so awesome that the enterprise architecture group is a little afraid.
508c compliance is a concern for this group, as is security. I now need to justify the use of Dojo to this architecture group, explicitly making the case that Dojo does not pose a security risk, that Dojo will not hurt accessibility, and that Dojo is there to help meet core requirements.
Note: the web app currently requires JavaScript to be turned on and a stylesheet to work. We use a relatively minor subset of Dojo: dojo core, of course, plus dijit.form.Form, ValidationTextBox and a few others. We do use dojox.grid.DataGrid (but no drag-and-drop or editable cells, which are not fully a11y-compliant).
I have done some research of my own, of course, but any information or advice you have would be most helpful.
Regards,
LES2
I'm not sure how to answer this question except to point out that you'd be in good company using Dojo. Several large corporations, deeply concerned about security issues, have contributed to the toolkit and use it in their own products. Audits have been done on the toolkit, including one recently which exposed a problem that was quickly patched - in fact, the CDN feature of Dojo, if you use it, means you can pick up patches like this automatically. Other than that, I'm not sure what proof to offer: a toolkit is secure until someone finds a security hole!
Also, there are plenty of things you can do with Dojo, or the underlying HTML/JS technology, which are not secure, so you need to follow best practices. One example is JSON handling. There are a couple of methods available. The base one is fast and works on older browsers, but is known not to be secure; it is meant to be used only with trusted data sources, typically on the same domain, which is what you'll usually be doing anyway. There are alternatives in dojox.secure which you might want to look at; depending on what you're doing, they may give your application an extra level of security.
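To make that JSON point concrete, here is a tiny illustration of why eval-style parsing is only safe with trusted sources (plain JavaScript; JSON.parse shown as the strict alternative, not a specific Dojo API):

    // Eval-based "parsing" executes whatever the source sent:
    var hostile = '({ pwned: alert("xss") })';
    eval(hostile);                          // runs alert() in a browser -- code, not data

    // A strict parser accepts only pure JSON and never executes anything:
    var safe = JSON.parse('{"score": 42}'); // data only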
For performance, you can look at the various benchmarks like taskspeed, which focus largely on the dojo.query DOM traversal functionality common to most toolkits. Of course, YMMV depending on your usage of Dojo, but there's healthy competition between the toolkits and continuous improvement with each release.
For accessibility, all Dijit widgets were reviewed and considered to be 508c compliant. There is more precise documentation on Dojo/Dijit a11y requirements. Not all dojox widgets pass this requirement.
HTH

What are the best programmatic security controls and design patterns?

There's a lot of security advice out there to tell programmers what not to do. What in your opinion are the best practices that should be followed when coding for good security?
Please add your suggested security control / design pattern below. Suggested format is a bold headline summarising the idea, followed by a description and examples e.g.:
Deny by default
Deny everything that is not explicitly permitted...
Please vote up or comment with improvements rather than duplicating an existing answer. Please also put different patterns and controls in their own answer rather than adding an answer with your 3 or 4 preferred controls.
edit: I am making this a community wiki to encourage voting.
Principle of Least Privilege -- a process should only hold those privileges it actually needs, and should only hold those privileges for the shortest time necessary. So, for example, it's better to use sudo make install than to su to open a shell and then work as superuser.
All these ideas that people are listing (isolation, least privilege, white-listing) are tools.
But you first have to know what "security" means for your application. Often it means something like
Availability: The program will not fail to serve one client because another client submitted bad data.
Privacy: The program will not leak one user's data to another user
Isolation: The program will not interact with data the user did not intend it to.
Reviewability: The program obviously functions correctly -- a desirable property of a vote counter.
Trusted Path: The user knows which entity they are interacting with.
Once you know what security means for your application, then you can start designing around that.
One design practice that doesn't get mentioned as often as it should is Object Capabilities.
Many secure systems need to make authorizing decisions -- should this piece of code be able to access this file or open a socket to that machine.
Access Control Lists are one way to do that -- specify the files that can be accessed. Such systems though require a lot of maintenance overhead. They work for security agencies where people have clearances, and they work for databases where the company deploying the database hires a DB admin. But they work poorly for secure end-user software since the user often has neither the skills nor the inclination to keep lists up to date.
Object Capabilities solve this problem by piggy-backing access decisions on object references -- by using all the work that programmers already do in well-designed object-oriented systems to minimize the amount of authority any individual piece of code has. See CapDesk for an example of how this works in practice.
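A tiny sketch of the idea in plain JavaScript (hypothetical names): authority travels with the object reference, so a function can only act on what it was explicitly handed:

    // Ambient authority: any code can name any file, so every access
    // must be checked against an ACL somewhere else, e.g.:
    //   fs.readFile("/etc/passwd", ...);

    // Capability style: the reference itself is the permission.
    function makeReadOnlyFile(contents) {
      return { read: function () { return contents; } };  // no write, no path
    }

    function wordCount(file) {            // receives only a read capability
      return file.read().split(/\s+/).length;
    }

    var cap = makeReadOnlyFile("attack at dawn");
    console.log(wordCount(cap));          // 3 -- wordCount can touch nothing else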
DARPA ran a secure systems design experiment called the DARPA Browser project which found that a system designed this way -- although it had the same rate of bugs as other Object Oriented systems -- had a far lower rate of exploitable vulnerabilities. Since the designers followed POLA using object capabilities, it was much harder for attackers to find a way to use a bug to compromise the system.
White listing
Opt in what you know you accept
(Yeah, I know, it's very similar to "deny by default", but I like to use positive thinking.)
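A small sketch of the positive-thinking version (hypothetical names): validate against the set of things you accept, rather than trying to enumerate what you reject:

    // Only values we have explicitly opted in to are allowed through.
    var ALLOWED_SORT_FIELDS = new Set(["name", "date", "size"]);

    function sortField(requested) {
      if (!ALLOWED_SORT_FIELDS.has(requested)) {
        throw new Error("Unsupported sort field");  // everything else is denied
      }
      return requested;
    }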
Model threats before making security design decisions -- think about what possible threats there might be, and how likely they are. For example, someone stealing your computer is more likely with a laptop than with a desktop. Then worry about the more probable threats first.
Limit the "attack surface". Expose your system to the fewest attacks possible, via firewalls, limited access, etc.
Remember physical security. If someone can take your hard drive, that may be the most effective attack of all.
(I recall an intrusion red team exercise in which we showed up with a clipboard and an official-looking form, and walked away with the entire "secure" system.)
Encryption ≠ security.
Hire security professionals
Security is a specialized skill. Don't try to do it yourself. If you can't afford to contract out your security, then at least hire a professional to test your implementation.
Reuse proven code
Use proven encryption algorithms, cryptographic random number generators, hash functions, authentication schemes, access control systems, rather than rolling your own.
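One small, concrete instance of this (JavaScript in a browser or recent Node, where crypto.getRandomValues is a global; the token format is just for illustration): prefer the platform's cryptographic RNG over a general-purpose one whenever the value guards something security-sensitive:

    // Math.random() is predictable -- fine for games, never for secrets.
    var weak = Math.random().toString(36).slice(2);

    // The platform CSPRNG exists for exactly this purpose.
    var buf = new Uint8Array(16);
    crypto.getRandomValues(buf);
    var token = Array.from(buf, function (b) {
      return b.toString(16).padStart(2, "0");
    }).join("");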
Design security in from the start
It's a lot easier to get security wrong when you're adding it to an existing system.
Isolation. Code should have strong isolation between, eg, processes in order that failures in one component can't easily compromise others.
Express risk and hazard in terms of cost. Money. It concentrates the mind wonderfully.
A good understanding of the underlying assumptions of crypto building blocks can be important. E.g., stream ciphers such as RC4 are very useful, but can easily be used to build an insecure system (WEP and the like being well-known examples).
If you encrypt your data for security, the highest risk data in your enterprise becomes your keys. Lose the keys, and data is lost; compromise the keys and all your data is compromised.
Use risk to make security decisions. Once you determine the probability of different threats, then consider the harm that each could do. Risk is, by definition
R = Pe × H
where Pe is the probability of the undesired event, and H is the hazard, or the amount of harm that could come from the undesired event.
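For example (hypothetical numbers): if a breach has an estimated probability of 0.05 per year and would cause $100,000 of harm, then R = 0.05 × $100,000 = $5,000 per year -- which also suggests a ceiling on what is worth spending annually to prevent it.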
Separate concerns. Architect your system and design your code so that security-critical components can be kept together.
KISS (Keep It Simple, Stupid)
If you need to make a very convoluted and difficult to follow argument as to why your system is secure, then it probably isn't secure.
Formal security designs sometimes refer to a thing called the TCB (Trusted Computing Base). But even an informal design has something like this - the security enforcing part of your code, the part you can't avoid relying on. This needs to be well encapsulated and as simple and small as possible.