Searching http://developer.plone.org for how to check a permission, the first two results are:
http://developer.plone.org/reference_manuals/external/plone.app.dexterity/advanced/permissions.html
http://docs.plone.org/4/en/develop/plone/security/permissions.html
The first one advocates zope.security.checkPermission while the second prefers AccessControl's getSecurityManager().checkPermission.
Looking at AccessControl's setup.py I see that it depends on zope.security, so zope.security is the more low-level of the two, so to speak; at the same time, zope.security seems to get more attention nowadays, while AccessControl seems to be more stable (in terms of how often it changes).
So, I'm wondering which is the safe and up-to-date way to check for permissions.
I personally always use checkPermission from AccessControl, but I believe that under the hood both zope.security and AccessControl end up calling the same code. I've looked for that code before, and I think it actually lives in the C portion of the roles/permissions logic.
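For concreteness, here is roughly what the two styles look like side by side, in the spirit of the two linked documents. This is an illustrative sketch only; the `context` object and the permission names are placeholders, not taken from the question:

# Illustrative sketch -- permission names and `context` are placeholders.
from AccessControl import getSecurityManager
from zope.security import checkPermission

def can_view_accesscontrol(context):
    # Zope 2 / AccessControl style: uses the permission title ("View")
    return bool(getSecurityManager().checkPermission("View", context))

def can_view_zope_security(context):
    # zope.security style: uses the Zope 3-style permission id ("zope2.View")
    return checkPermission("zope2.View", context)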
I personally prefer using plone.api.
See the plone.api.user documentation.
This way you don't have to care about the low-level API.
Even if it changes in the future, plone.api will fix it for you :-)
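For example (a minimal sketch -- check the plone.api docs for the exact signature; the permission name and `context` are just placeholders):

from plone import api

def can_edit(context):
    # Checks the permission for the currently logged-in user on `context`
    return api.user.has_permission("Modify portal content", obj=context)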
I see that since version 12, NodeJS has had a recursive option on the fs.rmdir function to allow removal of non-empty directories from the file system. Why is this feature marked "experimental" in the documentation? Does it work or doesn't it? The documentation doesn't say what level of concern to have over this, or under what circumstances.
I found Christopher Hiller's article explaining the difficulties that went into creating an efficient implementation of this, but he doesn't explain the "experimental" designation. Maybe the problem isn't that it doesn't work, or doesn't always work, but that it can be a bottleneck under certain circumstances? I'm trying to decide whether to depend on it or not, rather than writing my own code that's going to have exactly the same pitfalls Hiller encountered, so if anyone here has any insight, I'd appreciate it!
I was refactoring some code in a web application today and came across something like this in the base class for all webpages:
// Any request carrying ?IgnoreValidation=true flips a session-wide flag that disables validation
if (Request.QueryString["IgnoreValidation"] != null)
{
    if (Request.QueryString["IgnoreValidation"].ToUpper() == "TRUE")
    {
        SessionData.IgnoreValidation = true;
    }
}
To me, this appears to be a Very Bad Thing™, so I instantly removed all traces of this flag from the code. For one thing, there were several if statements littered throughout that checked the value of the flag, leading to cluttered and unclear logic. For another, I came across a second, more dangerous flag named "IgnoreCreditCardValidation". You can guess what that one did...
I then got to thinking about it and remembered a similar example from a previous job. In the code of an app sold as a "secure authentication module" there was a QueryString parameter used to override the default behavior, effectively allowing anyone with knowledge of it to bypass authentication.
Now, my question is more of a confirmation: is this practice as bad as it seems in my head, or am I just overreacting and this is commonplace? Are there any cases where there would be a valid reason to do this? To me it just seems like an awful mix of laziness and carelessness.
If this is a duplicate, please feel free to point me in the right direction.
Thanks!
It's horrifying whether it's common practice or not. +1 to you for nuking it with extreme prejudice.
I agree with you. Especially if the module is designed to enforce security, this is a stupid thing to have in a release build (it's not a good idea to have in debug builds either, but that might be reasonable.) It's essentially security-by-obscurity.
It's not you. Whoever wrote that has never dealt with public-facing web applications. As you have correctly noted, anyone with knowledge of this 'backdoor' can wreak havoc on your application.
It's the original developer being lazy when it came to both design and testing.
The ideal solution is to separate the validation, credit card authentication, etc. out of the code into separate DLLs/services/etc. The real versions can then be replaced by mocks to facilitate testing of the site. The services can also be tested independently.
These mocks never get anywhere near the production server so you should never have backdoors into your code.
You can also replace any/all of the services without having to update your application - as long as the new service presents the same interface as the original.
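The idea is language-agnostic; here is a minimal sketch of the shape in Python, with every name invented for illustration:

from abc import ABC, abstractmethod

class CardValidator(ABC):
    @abstractmethod
    def validate(self, card_number: str) -> bool: ...

class GatewayCardValidator(CardValidator):
    # Production implementation: would call the real payment gateway.
    def validate(self, card_number: str) -> bool:
        raise NotImplementedError("call the payment provider here")

class AlwaysValidCardValidator(CardValidator):
    # Test-only mock; lives in the test project and is never deployed.
    def validate(self, card_number: str) -> bool:
        return True

def checkout(validator: CardValidator, card_number: str) -> str:
    # The page code depends on the interface, not on a query-string flag.
    return "ok" if validator.validate(card_number) else "rejected"

The swap between the real and mock implementations happens where the application is wired together (or in the test harness), so nothing resembling ?IgnoreCreditCardValidation ever exists in the deployed code.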
I'm going against the grain and saying querystrings make good debug switches.
Before everyone jumps on me - using a querystring to disable validation is horrendously stupid, a giant security hole, and should never happen. You were completely justified in nuking that code immediately.
But for debugging, querystrings work great. We have a few querystrings that will turn on IDs for a few objects on our pages so we can quickly get them without logging into the production database (not everyone has access to the Prod DB of course), or for checking calculation components. You just have to be careful and intelligent about them.
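To make "careful and intelligent" a bit more concrete: the safe pattern is to gate any such switch on something the outside world can't control (environment, role, config), not on the query string alone. A hypothetical sketch, not the poster's actual setup:

import os

def should_show_debug_ids(query_params: dict, user_is_staff: bool) -> bool:
    # Only honour the switch outside production, or for staff users.
    if os.environ.get("APP_ENV") == "production" and not user_is_staff:
        return False
    return query_params.get("showIds", "").lower() == "true"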
I had a piece of work thrown out due to a single minor spec change that turned out not to have been spec'ed correctly. If it had been done right at the start of the project, most of that work would never have been needed in the first place.
What are some good tips/design principles that keep these things from happening?
Or that lessen the amount of rework needed to implement feature requests or design changes mid-implementation?
Modularize. Make small blocks of code that do their job well. However, that's only the beginning. It's usually a large combination of factors that contribute to code so bad it needs a complete rework: highly unstable requirements, poor design, lack of code ownership - the list goes on and on.
Adding on to what others have brought up: COMMUNICATION.
Communication between you and the customer, you and management, you and the other developers, you and your QA department - communication between everyone is key. Make sure management understands reasonable timeframes, and make sure both you and the customer understand exactly what it is that you're building.
Take the time to keep communication open with the customer that you're building the product for. Set milestones and set up a time to show the project to the customer at each milestone. Even if the customer is completely disappointed with a milestone when you show it, you can scratch what you have and start over from the last milestone. This also requires that your work be built in blocks that work independently of one another, as Csunwold stated.
Points...
Keep open communication
Be open and honest about the progress of the product
Be willing to change daily as the needs of the customer's business and the specifications for the product change.
Software requirements change, and there's not much one can do about that except for more frequent interaction with clients.
One can, however, build code that is more robust in face of change. It won't save you from throwing out code that meets a requirement that nobody needs anymore, but it can reduce the impact of such changes.
For example, whenever this applies, use interfaces rather than classes (or the equivalent in your language), and avoid adding operations to the interface unless you are absolutely sure you need them. By building your programs that way you are less likely to rely on knowledge of a specific implementation, and you're less likely to implement things that you would not need.
Another advantage of this approach is that you can easily swap one implementation for another. For example, it sometimes pays off to write the dumbest (in efficiency) but fastest-to-write-and-test implementation for your prototype, and only replace it with something smarter at the end, when the prototype is the basis of the product and performance actually matters. I find that this is a very effective way to avoid premature optimization, and thus throwing away stuff.
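As an illustration of the swap (a sketch with invented names, not anyone's production code): the callers depend only on the small interface, so the dumb prototype implementation can be replaced later without touching them.

from typing import Optional, Protocol

class Storage(Protocol):
    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> Optional[str]: ...

class InMemoryStorage:
    # Dumbest possible implementation: plenty for the prototype.
    def __init__(self) -> None:
        self._data: dict = {}
    def save(self, key: str, value: str) -> None:
        self._data[key] = value
    def load(self, key: str) -> Optional[str]:
        return self._data.get(key)

def remember_user(storage: Storage, user_id: str, name: str) -> None:
    # Callers only know about the Storage interface, so swapping
    # InMemoryStorage for a database-backed version later is painless.
    storage.save(user_id, name)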
Modularity is the answer, as has been said, but it can be a hard answer to apply in practice.
I suggest focusing on:
small libraries which do predefined things well
minimal dependencies between modules
Writing interfaces first is a good way to achieve both of these (with interfaces used for the dependencies). Writing tests next, against the interfaces, before the code is written, often highlights design choices which are un-modular.
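For example (invented names again), a test can be written against the interface before any production implementation exists; if such a test is awkward to write, that is usually a sign the design is un-modular:

def format_price_eur(converter, usd_amount: float) -> str:
    # Depends only on the converter's `to_eur` method (the interface).
    return f"{converter.to_eur(usd_amount):.2f} EUR"

class FakeConverter:
    # Throwaway stand-in, just enough to drive the test.
    def to_eur(self, usd: float) -> float:
        return usd * 2

def test_format_price_eur_uses_converter():
    assert format_price_eur(FakeConverter(), 10.0) == "20.00 EUR"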
I don't know whether your app is UI-intensive; that can make it more difficult to be modular. It's still usually worth the effort, but if not, then assume that it will be thrown away before long and follow the iceberg principle: 90% of the work is not tied to the UI and so is easier to keep modular.
Finally, I recommend "The Pragmatic Programmer" by Andrew Hunt and Dave Thomas as being full of tips. My personal favourite is DRY -- "don't repeat yourself" -- any code which says the same thing twice smells.
iterate small
iterate often
test between iterations
get a simple working product out asap so the client can give input.
Basically assume stuff WILL get thrown out, so code appropriately, and don't get so far into something that having it thrown out costs a lot of time.
G'day,
Looking through the other answers here I notice that everyone is mentioning what to do for your next project.
One thing that seems to be missing, though, is having a wash-up to find out why the spec was out of sync with the actual requirements needed by the customer.
I'm just worried that if you don't do this, then no matter what approach you take to implementing your next project, if you've still got that mismatch between the actual requirements and the spec, you'll once again be in the same situation.
It might be something as simple as bad communication or maybe customer requirement creep.
But at least if you know the cause, you can try to minimise the chances of it happening again.
Not knocking what other answers are saying and there's some great stuff there, but please learn from what happened so that you're not condemned to repeat it.
HTH
cheers,
Sometimes a rewrite is the best solution!
If you are writing software for a camera, you could assume that the next version will also do video, or stereo video, or 3D laser scanning, and include hooks for all of that functionality; or you could write such a versatile, extensible astronaut architecture that it could cope with the next camera including jet engines - but it would cost so much in money, resources and performance that you might have been better off not doing it.
A complete rewrite for new functionality in a new role isn't always a bad idea.
Like csunwold said, modularizing your code is very important. Write it so that if one piece falls prone to errors, it doesn't muck up the rest of the system. This way, you can debug a single buggy section while being able to safely rely on the rest.
Beyond this, documentation is key. If your code is neatly and clearly annotated, reworking it in the future will be infinitely easier for you or whoever happens to be debugging.
Using source control can be helpful too. If you find a piece of code doesn't work properly, there's always the opportunity to revert back to a past robust iteration.
Although it doesn't directly apply to your example, when writing code I try to keep an eye out for ways in which I can see the software evolving in the future.
Basically I try to anticipate where the software will go, but critically, I resist the temptation to implement any of the things I can imagine happening. All I am after is trying to make the APIs and interfaces support possible futures without implementing those features yet, in the hope that these 'possible scenarios' help me come up with a better and more future-proof interface.
Doesn't always work, of course.
I'm producing a dll for a business partner of mine that he is going to integrate into his app. But I also want to somehow lock the dll so it cannot be used by anyone else. The API of the dll is quite straightforward, so it'd be easy to reverse-engineer and use elsewhere.
How do I do that? My only idea so far would be to add a function to the DLL that unlocks it if the right parameter is passed to it. But then again, it can't be static - that would be too easy to intercept - so I am looking for something semi-dynamic.
Any ideas? Thanks in advance.
A
For .NET libraries, this is already built into the framework; you just need to set it up. Here is an MSDN article about it.
How to: License Components and Controls
Other than licensing, you should also obfuscate your code using a tool such as Dotfuscator.
PreEmptive's Dotfuscator
How likely do you think it is that you'll actually suffer any ill effects (lost income etc) due to this? How significant would such ill effects be? Weigh that up against the cost of doing this in the first place. You could use obfuscation (potentially - it depends on what kind of DLL it is; native or .NET?) but that will only give a certain measure of protection.
You need to accept that it's unlikely (or impossible) that you'll find a solution which is 100% secure. There are shades of grey, and the harder you make it for miscreants, the more effort (or money) you're likely to have to put into it too. It may well also make it harder to diagnose issues (e.g. obfuscators munge stack traces; some provide a tool to map back to the original names, but you're likely to lose some information).
It looks like you need to create and use license keys:
http://www.google.com/search?q=creating+license+keys+for+applications&rls=com.microsoft:pt&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1
Quick and dirty in .NET: strong-name all your assemblies and all assemblies that will access your "locked" dll. Mark all your API classes as internal instead of public. Then, on your "locked" dll, specify those dlls that should have access to your internal API with the InternalsVisibleTo attribute.
Are you trying to protect against casual pirates, or something else? Whatever you do, if the software is remotely useful it is going to be cracked, patched and what not - just ask any of the third-party controls vendors.
Any solution that you come up with is going to be cracked. Someone might just open the dll in a hex editor and patch the function that does the checks, validation and verification.
Lately I've been listening to Jeff Atwood and Joel Spolsky's radio show, and they have been talking about dogfooding (the process of reusing your own code; see Jeff Atwood's blog post). So my question is: should programmers use decompilers to see how another programmer's code is implemented and works, to make sure it won't break your code? Or should you just trust that programmer's code and adapt to it, because using decompilers goes against everything we as programmers have ever learned about hiding data (well, OO programmers at least)?
Note: I wasn't sure which tags this would go under so feel free to retag it.
Edit: Just to clarify I was asking about decompilers as a last resort, say you can't get the source code for some reason. Sorry, I should have supplied this in the original question.
Yes, it can be useful to use the output of a decompiler, but not for what you suggest. The output of a compiler doesn't ever look much like what a human would write (except when it does). It can't tell you why the code does what it does, or what a particular variable should mean. It's unlikely to be worth the trouble to do this unless you already have the source.
If you do have the source, then there are lots of good reasons to use a decompiler in your development process.
Most often, the reason for using the output of a decompiler is to better optimize code. Sometimes, with high optimization settings, a compiler will just get it wrong. This can be almost impossible to sort out in some cases without comparing the output of the compiler at different levels of optimization.
Other times, when trying to squeeze the most performance out of a very hot code path, a developer can try arranging their code in a few different ways and compare the compiled results. As a last resort, this may be the simplest way to start when implementing a code block in assembly language, by duplicating the compiler's output.
Dogfooding is the process of using the code that you write, not necessarily re-using code.
However, code re-use typically means you have the source - hence 'code re-use'; otherwise it's just using a library supplied by someone else.
Decompiling is hard to get right, and the output is typically very hard to follow.
You should use a decompiler if it is the tool that's required to get the job done. However, I don't think it's the proper use of a decompiler to get an idea of how well the code which is being decompiled was written. Depending on the language you use, the decompiled code can be very different from the code which was actually written. If you want to see some real code, look at open source code. If you want to see the code of some particular product, it's probably better to try to get access to the actual code through some legal means.
I'm not sure what exactly it is you are asking, what you expect "decompilers" to show you, what this has to do with Atwood and Spolsky, or what the question actually is. If you're programming to public interfaces, why would you need to see the original source of the third-party code to see whether it will "break" your code? You could more effectively build tests in order to determine this. As well, what the "decompiler" will tell you largely depends on the language/platform the software was written in - whether it is Java, .NET, C and so forth. It's not the same as having the original source to read, even in the case of .NET assemblies. Anyway, if you are worried about third-party code not working for you, then you should really be writing typical unit tests against the code rather than trying to "decompile" it. As far as whether you "should": if you mean "should" in some sense other than what would be the best use of your time, then I'm not sure what you mean.
Should Programmers Use Decompilers?
Use the right tool for the right job. Decompilers don't often produce results that are easy to understand, but sometimes they are what's needed.
should programmers use decompilers to see how that programmer's code is implemented and works, to make sure it won't break your code.
No, not unless you find a problem and need support. In general you don't use it if you don't trust it, and if you have to use it even when you don't trust it, you develop tests to prove the functionality and verify that later upgrades still work as expected.
Don't use functionality you don't test, unless you have very good support or a relationship of trust.
-Adam
Or should you just trust that programmer's code and adapt to it, because using decompilers goes against everything we as programmers have ever learned about hiding data (well, OO programmers at least)?
This is not true at all. You would use a decompiler not because you want to get around any sort of abstraction, encapsulation, or OO principle, but because you want a better understanding of why the code is behaving the way it is.
Sometimes you need to use a decompiler (or in the Java world, a bytecode viewer) when you are troubleshooting an annoying bug with a 3rd party library where an exception is thrown with no useful error message, no logging, etc.
Use of a decompiler has nothing to do with OO principles.
The short answer to this... Program to a public and documented specification, not to an implementation. Relying on implementation specifics and side-effects will burn you.
Decompilation is not a tool to help you program correctly, though it might, in a pinch, assist you in understanding a problem with someone else's code for which you don't have source.
Also, beware of the possible legal risk of decompiling; many software companies have no-decompile clauses which could expose you and your employer to legal consequences.