My company has a requirement that all production sites pass an AppScan security scan. Sometimes, when we scan a SharePoint installation, the software detects a blind SQL injection vulnerability. I'm pretty sure this is a false positive: AppScan is probably interpreting some other activity in the HTTP response as evidence that the blind injection succeeded. But it's difficult to prove that this is the case.
I suspect that SharePoint, both MOSS 2007 and WSS 3.0, uses stored procedures exclusively behind the scenes. Does anyone know if there is any documentation from Microsoft to this effect, and furthermore, whether any of the stored procedures use dynamically generated SQL? If everything were sprocs, and none of them dynamic, we would have pretty good evidence that SharePoint has no SQL injection vulnerability.
They aren't all stored procs. In particular, things like cross-list joins produce some horrendous syntax; for an example, look at the SQL Trace window from this article. Also, since both user controls and API calls can be written by developers, there is no guarantee that you aren't subject to SQL injection if you are using custom modules.
My guess would be that SharePoint always uses, at the very least, named parameters. However, your best option might be to run a SQL trace and compare the results. Also, if you are a large enough customer, you might just try calling your local MSFT evangelist (or posting a question on connect.microsoft.com) and seeing if you can get a response.
Thanks. I looked at the profiler myself and found some things:
It looks like SharePoint is only executing stored procedures. There are occasional bits of pure SQL, but these seem to be limited to "exec sp_oledb_ro_usrname" and "select collationname(...)", which appear to be some deep-down internal thing, and possibly are not being executed as SQL at all but are just showing up in the profiler that way...?
SharePoint does occasionally use sp_executesql, but this is a parameterized call and is therefore probably safe from injection.
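To illustrate why that matters (a minimal sketch with made-up connection string, table, and parameter names): when .NET code issues a parameterized command, ADO.NET sends it to SQL Server as an sp_executesql call with the SQL text and the values kept separate, which is exactly what shows up in the profiler.

```csharp
using System;
using System.Data.SqlClient;

class ParameterizedQueryDemo
{
    static void Main()
    {
        // Connection string and table/column names are invented for this sketch.
        using var conn = new SqlConnection(
            "Server=.;Database=WSS_Content;Integrated Security=true");
        conn.Open();

        using var cmd = new SqlCommand(
            "SELECT Id FROM Docs WHERE Title = @title", conn);
        // The value travels as typed data, not as SQL text, so a payload like
        // "' OR 1=1 --" can never change the shape of the statement.
        cmd.Parameters.AddWithValue("@title", "anything the user typed");

        // In SQL Profiler this surfaces as something like:
        //   exec sp_executesql N'SELECT Id FROM Docs WHERE Title = @title',
        //                      N'@title nvarchar(23)', @title=N'anything the user typed'
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
            Console.WriteLine(reader.GetInt32(0));
    }
}
```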
There are a number of relatively new blind SQL injection vectors that are based on response delay - for example, using WAITFOR DELAY. At least sqlmap and Burp Suite use them (and others probably do too).
These vectors, however, are prone to false positives, because the trigger is, well, HTTP response delay, which may also happen for a thousand other reasons if you're scanning over the Internet. If you got these over a LAN, I would be more suspicious, but I'd still investigate other possible delay causes on the server side. Only if you get the delay consistently in a number of independent trials are you probably dealing with a real vulnerability.
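If you want to check that consistency yourself, a rough harness like this (URL and payload are hypothetical; substitute whatever your scanner reported) sends the suspected time-based payload and a harmless control value several times each and compares median response times. A genuine WAITFOR DELAY injection produces a large, stable gap; network jitter doesn't:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class BlindSqliTimingCheck
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Hypothetical endpoint and payloads -- substitute the scanner's actual finding.
        const string url = "https://sharepoint.example.com/Pages/item.aspx?id=";
        const string control = "1";
        const string payload = "1';WAITFOR DELAY '0:0:5'--";

        var controlTimes = await Measure(client, url + Uri.EscapeDataString(control));
        var payloadTimes = await Measure(client, url + Uri.EscapeDataString(payload));

        Console.WriteLine($"control median: {Median(controlTimes)} ms");
        Console.WriteLine($"payload median: {Median(payloadTimes)} ms");
        // A consistent ~5000 ms gap across trials suggests a real vulnerability;
        // scattered, inconsistent delays point to network noise instead.
    }

    static async Task<double[]> Measure(HttpClient client, string url, int trials = 10)
    {
        var times = new double[trials];
        for (int i = 0; i < trials; i++)
        {
            var sw = Stopwatch.StartNew();
            await client.GetAsync(url); // response body is irrelevant; we only time it
            times[i] = sw.Elapsed.TotalMilliseconds;
        }
        return times;
    }

    static double Median(double[] xs)
    {
        var s = xs.OrderBy(x => x).ToArray();
        return s[s.Length / 2];
    }
}
```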
Also note that SharePoint often triggers old FrontPage vulnerability alerts in many vuln scanners, which are also false positives - for details, see the article "SharePoint and FrontPage Server Extensions in security scanner results".
Related
Very closely related: How to protect strings without SecureString?
Also closely related: When would I need a SecureString in .NET?
Extremely closely related (OP there is trying to achieve something very similar): C# & WPF - Using SecureString for a client-side HTTP API password
The .NET Framework has a class called SecureString. However, even Microsoft no longer recommends its use for new development. According to the first linked Q&A, at least one reason for that is that the string will be in memory in plaintext anyway for at least some amount of time (even if it's a very short amount of time). At least one answer also extended the argument that, if attackers have access to the server's memory anyway, your security is in practice probably already shot, so it won't help you. (The second linked Q&A implies that there was even discussion of dropping SecureString from .NET Core entirely.)
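To make that first point concrete, here's a minimal sketch (demo value only) of the unavoidable round-trip: the moment any API needs an ordinary string, the SecureString contents have to be marshalled back into plain memory.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;

class SecureStringRoundTrip
{
    static void Main()
    {
        using var secret = new SecureString();
        foreach (var c in "hunter2") secret.AppendChar(c); // demo value only
        secret.MakeReadOnly();

        // The moment an API wants a plain string, the secret must be
        // decrypted into ordinary memory -- this is the window SecureString
        // cannot close.
        IntPtr ptr = Marshal.SecureStringToGlobalAllocUnicode(secret);
        try
        {
            string plaintext = Marshal.PtrToStringUni(ptr);
            Console.WriteLine(plaintext.Length); // plaintext now lives on the GC heap
        }
        finally
        {
            // Zeroes the unmanaged copy -- but the managed string above stays
            // in memory until the GC collects it (and the bytes may linger
            // until overwritten).
            Marshal.ZeroFreeGlobalAllocUnicode(ptr);
        }
    }
}
```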
That being said, Microsoft's documentation on SecureString does not recommend a replacement, and the consensus on the linked Q&As seems to be that that kind of measure wouldn't be all that useful anyway.
My application, which is an ASP.NET Core application, makes extensive use of API calls to an external vendor using the HttpClient class. The generally recommended best practice for HttpClient is to use a single instance rather than creating a new instance for each call.
However, our vendor requires that all API calls include our API key as a header with a specific name. I currently store the key securely, retrieve it in Startup.cs, and add it to our HttpClient instance's headers.
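Concretely, my current setup looks roughly like this (the header name, config key, and vendor URL are placeholders for the real ones):

```csharp
using System;
using System.Net.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;
    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Registers a named client; "X-Vendor-Api-Key" and "Vendor:ApiKey"
        // are placeholders for the real header name and config key.
        services.AddHttpClient("vendor", client =>
        {
            client.BaseAddress = new Uri("https://api.vendor.example.com/");
            // The key is read once from secure storage and then sits in this
            // default header -- i.e., in plaintext in process memory -- for
            // the entire lifetime of the application.
            client.DefaultRequestHeaders.Add("X-Vendor-Api-Key",
                Configuration["Vendor:ApiKey"]);
        });
    }
}
```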
Unfortunately, this means that my API key will be kept in plaintext in memory for the entire lifetime of the application. I find this especially troubling for a web application on a server; even though the server is maintained by corporate IT, I've always been taught to treat even corporate networks as semi-hostile environments and not to rely purely on corporate firewalls for application security in such cases.
Does Microsoft have a recommended best practice for cases like this? Is this a potential exception to their recommendation against using SecureString? (Exactly how that would work is a separate question). Or is the answer on the other Q&A really correct in saying that I shouldn't be worried about plaintext strings living in memory like this?
Note: Depending on responses to this question, I may post a follow-up question about whether it's even possible to use something like SecureString as part of HttpClient headers. Or would I have to do something tricky like populating the header right before using it and then removing it from memory right afterwards, as in the sketch below? (That would create an absolute nightmare for concurrent calls if done on the shared instance's headers, though.) If people think that I should do something like this, I would be glad to create a new question for it.
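For reference, the per-request version of that idea would presumably look something like this (hypothetical helper and vendor URL). HttpRequestMessage headers are per-message rather than shared, so it at least sidesteps the concurrency problem, though the key still has to exist as an ordinary string at send time:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class VendorClient
{
    private readonly HttpClient _http = new HttpClient();

    public async Task<string> GetThingsAsync()
    {
        // Headers on an HttpRequestMessage are per-message, so concurrent
        // calls don't interfere the way mutating DefaultRequestHeaders would.
        using var request = new HttpRequestMessage(
            HttpMethod.Get, "https://api.vendor.example.com/v1/things");
        request.Headers.Add("X-Vendor-Api-Key", RetrieveKey()); // hypothetical helper
        using var response = await _http.SendAsync(request);
        return await response.Content.ReadAsStringAsync();
        // Note: the key was still a plain managed string for the duration of
        // the call, and the CLR gives no way to reliably scrub it afterwards.
    }

    private static string RetrieveKey() =>
        Environment.GetEnvironmentVariable("VENDOR_API_KEY") ?? ""; // stand-in for real secure storage
}
```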
You are being WAY too paranoid.
Firstly, if a hacker gets root access to your web server, you have WAY bigger problems than your super-secret web app credentials being stolen. Way, way, way bigger problems. Once the hackers are on your side of the airtight hatchway, it is game over.
Secondly, once your infosec team detects the intrusion (if they don't, again, you've got WAY bigger problems) they're going to tell you and the first thing you're going to do is change every key and password you know of.
Thirdly, if a hacker does get root access to your webserver, their first thought isn't going to be "let's take a memory dump for later analysis". A dumpfile is rather large (will take time to transfer over the wire, and the network traffic might well be noticed) and (at least on Windows) hangs the process until it's complete (so you'd notice your web app was unresponsive) - both of which are likely to raise some red flags.
No, hackers are there to grab as much valuable information as possible in the least amount of time, because they know their access could be discovered at any second. So they're going to go for the low-hanging fruit first - usernames and passwords. Then they'll move on to trying to find out what's connected to that server, and since your DB credentials are likely in a config file on that server, they will almost certainly switch their attention to that far more interesting target.
So all things considered, your API key is pretty darn unlikely to be compromised - and even if it is, it won't be because of something you did or didn't do. There are far more productive ways of focusing your time than trying to secure something that already is (or should be) incredibly secure. And, at the end of the day, no matter how many layers of security you put in place... that API or SSL key is going to be raw, in memory, at some stage.
I'm working on a public-facing web project that will be powered in part by an OLAP server. I wanted to compare a couple of ways of doing this from a security perspective:
My initial idea was to pass some representation of the user's intent to the web server via AJAX, have the web server do lots of input validation and construct an appropriate MDX expression to pass to the OLAP server, and finally proxy the OLAP results back to the browser. (Tangentially, this seems to be the approach taken by jpivot; e.g. I just clicked to drill down into a table in a jpivot example, and what got sent to the server wasn't MDX but simply the x-www-form-urlencoded string "wcf65768426.x=3&wcf65768426.y=3".)
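To sketch what I mean by the first approach (every cube, member, and server name here is invented, and I'm assuming ADOMD.NET as the client library): the browser only ever sends an abstract intent like "drill down on 2009", the server maps it onto a whitelist of known dimension members, and only server-constructed MDX reaches the OLAP server.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.AnalysisServices.AdomdClient; // ADOMD.NET client library

class OlapIntentHandler
{
    // Whitelist of drill-down targets the UI may request; all names invented.
    static readonly Dictionary<string, string> AllowedYears = new()
    {
        ["2008"] = "[Date].[Calendar Year].&[2008]",
        ["2009"] = "[Date].[Calendar Year].&[2009]",
    };

    static void Main()
    {
        Console.WriteLine(DrillDownByYear("2009"));
    }

    static string DrillDownByYear(string requestedYear)
    {
        // Reject anything that isn't a known member up front, so user input
        // never appears in the MDX text itself.
        if (!AllowedYears.TryGetValue(requestedYear, out var member))
            throw new ArgumentException("Unknown year", nameof(requestedYear));

        var mdx = "SELECT [Measures].[Sales Amount] ON COLUMNS, " +
                  $"{member}.Children ON ROWS FROM [Sales]";

        using var conn = new AdomdConnection("Data Source=olap-box;Catalog=SalesCube");
        conn.Open();
        using var cmd = new AdomdCommand(mdx, conn);
        var cells = cmd.ExecuteCellSet();
        return cells.Cells[0].FormattedValue; // first cell, just for the demo
    }
}
```

The whitelist is doing the real work here: anything the UI can't name never reaches the OLAP server at all.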
In contrast, the xmla4js project seems premised on opening up a firewall port and exposing your OLAP server to the world (or at least to your particular customers) via XML/A, writing MDX queries in client-side javascript, and having the browser directly hit the OLAP server.
My gut reaction is to be quite suspicious of the second approach. It seems to presume that nothing bad can happen if someone were to execute arbitrary MDX statements against my OLAP server. I'm not yet a student of particularly advanced MDX, but it's not immediately obvious to me that this is a risk-free proposition. At the very least, someone could kick off some very expensive queries, or download a larger chunk of your dataset than you were hoping to make easily available to people. This isn't the sort of thing people generally do with SQL servers, and I'm initially inclined to think the same reasons suggest you shouldn't do it with OLAP servers either.
But I'd also like to assume that the folks behind xmla4js had some use cases in mind that were not crazy security risks. And I guess potentially I could be thinking about this too cautiously.
Any more experienced OLAP folks want to comment on the wisdom of letting people directly bang on your OLAP server, e.g. via XML/A?
Interesting question. Certainly, if you think your users might hack your web pages, offering direct access to your data mart (here, the OLAP server) is a source of risk. This is the xmla4js option, and it's very similar to giving users direct access to an RDBMS.
Yes, it's certainly relatively easy to create an MDX query that is very, very time-consuming (e.g. using calculated members).
Fine-grained security is possible in OLAP, so users need not have access to the details (both fact-level and dimension-level security).
One issue with option 1 is the cost (time and money). You'll need more time to implement it, and you'll be unable to use existing widgets and libraries (e.g. GVI Library). How important are security and hacking concerns for you versus time to delivery?
One possible solution is using an HTTP proxy for XMLA that allows only 'known' requests to be executed. But what is a known query?
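For fixed dashboards, 'known' can be as crude as an exact match on the request body. A sketch (ASP.NET Core minimal API; the OLAP backend address and route are hypothetical, and .NET 6+ implicit usings are assumed):

```csharp
// Program.cs for a minimal-API XML/A proxy sketch.
using System.Security.Cryptography;
using System.Text;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient("olap",
    c => c.BaseAddress = new Uri("http://olap-box:8080/"));
var app = builder.Build();

// SHA-256 hex digests of the exact request bodies the UI is known to send,
// precomputed when the dashboards were built.
var allowed = new HashSet<string>
{
    // "9F86D08...", etc.
};

app.MapPost("/xmla", async (HttpContext ctx, IHttpClientFactory factory) =>
{
    using var reader = new StreamReader(ctx.Request.Body);
    var body = await reader.ReadToEndAsync();

    var hash = Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(body)));
    if (!allowed.Contains(hash))
        return Results.StatusCode(403); // not a "known" request -- drop it

    // Forward the untouched body to the real XML/A endpoint.
    var resp = await factory.CreateClient("olap")
        .PostAsync("xmla", new StringContent(body, Encoding.UTF8, "text/xml"));
    return Results.Text(await resp.Content.ReadAsStringAsync(), "text/xml");
});

app.Run();
```

Of course, this only works while the set of queries is truly fixed; as soon as queries vary with user input, exact matching breaks down and you are back to validating structure server-side.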
Some OLAP servers allow finer control over the number of threads allocated per MDX request and over how many requests can be executed in parallel, but this solves the problem only partially.
It's really an interesting problem... not trivial. Bad luck for us you're not one of our customers ;-)
On reflection, this is probably a question that should be asked about individual OLAP server products rather than about OLAP in general. For example, if an OLAP server is coded with security in mind, supports read-only, non-admin accounts, and can time out queries that take too long, you'd be in a better position to expose that server publicly than in the contrary cases.
Unfortunately, OLAP vendors don't seem to give explicit guidance about this. For example, it's relatively easy to find the info you need to set up SQL Server Analysis Services for anonymous access, but it's harder to find an explicit statement from Microsoft about how dangerous it is to open up anonymous SSAS XML/A access to the public Internet.
I have a suite of Oracle Apex based applications due to have a security test. Does anyone have any tips on what I should look for to tighten things up?
The thing with Apex applications is that the underlying code is all PL/SQL, so it is no surprise that the major class of vulnerability affecting Apex applications is SQL injection.
You need to make sure that you do not use substitution variables (e.g. &P1_TEST.), as these almost always lead to exploitable injection. When they are used within PL/SQL begin/end blocks, the injection is very "powerful", as an attacker can specify an arbitrary number of PL/SQL statements.
Many Apex apps use dynamic SQL (where a query is constructed in a string and then executed), either through direct calls to EXECUTE IMMEDIATE or through Apex FUNCTION_RETURNING_SQL blocks. Dynamic SQL is almost always a bad idea.
You'll also find quite a bit of cross-site scripting in Apex apps, where input from users, or from queries run against the database, is not escaped. The various Apex reports provide settings to enable escaping, but these may not have been chosen when the report was defined.
Also consider the access-control model and ensure all the pages are protected with appropriate authorisation schemes. Do not use the APEX_APPLICATION_FILES table if you're storing uploads as that doesn't protect against unauthenticated downloads.
Hope that helps, and good luck!
A third-party developer my boss has brought in designed a "better" system than the ASP.NET + SQL Server 2005 website we're using now.
Here are the relevant specs:
Excel + ODBC as the data store
Built using old school ASP, not ASP.NET
Is there any glaring problem with his solution aside from the ancient tech? Thread safety, etc.?
Let me put it this way: "What can I tell my boss (who's only partially technical) to blow this code out of the water?"
Thank you,
Vindictive Developer :)
Excel should never be used as a data store:
It is not a database.
It will not handle multiple concurrent users at all.
It has no support for transactions, so if an error occurs in the middle of an ODBC call, the Excel file could end up trashed. (Even Access would be better than Excel, and that isn't saying much.)
Excel is a spreadsheet, designed for analyzing data, not for storing data.
Straight from Microsoft: http://support.microsoft.com/kb/195951
IMPORTANT: Though ASP/ADO applications support multi-user access, an Excel spreadsheet does not. Therefore, this method of querying and updating information does not support multi-user concurrent access.
Allain, beyond the great technical reasons that have come out here, I think you need to ask yourself "why did the boss do this?"
Perception is reality, and if your boss is only partially technical, then purely technical reasoning may not get through.
Apart from the glaring architectural weaknesses, is there some functionality in this monster that makes it more appealing to your boss? Generally people don't do stupid things on purpose; it may serve you well to consider where your boss is coming from before you go making a CLM.
Ummm.... it lacks scalability: You could only have a few users. Is the data important?
Here's what you can tell him: Remind him of the nightmares that happen when two or more people need to edit the same spreadsheet at the same time. Now tell him to imagine that multiplied by a hundred people who can't call each other to tell them to "close the spreadsheet so I can update it". That's what it will be like.
Syncing issues dealing with a separate XLS data store and SQL Server 2005? On our IIS server, classic ASP pages are prohibited by default. Maybe that's a sign lol.
How about terrible performance, since Excel is not designed to be used as a database? Tell your boss Excel isn't even a single-user database (that's what MS Access is), let alone a multi-user database designed for high, concurrent performance.
And of course by using classic ASP you're losing access to all of the libraries in the .NET Framework (which, of course, is what all library developers in the MS ecosystem are focusing on). But you asked for one reason, and the first one is better.
I would go with the mantra that those are the wrong tools for the job (assuming they are in your case). It'd be like using a screwdriver as a hammer. For one nail, it might work, with a lot of sweat and tears. For a real project, though, this is likely doomed.
I'd make the case for the tools you are familiar with: how much better the tooling is in terms of performance, security, and maintenance (especially maintenance cost).
You could point out that he's paying someone to write a new app with decade-old technology that may not be supported for much longer (if it still is...).
Ummm... row limit? (An .xls worksheet tops out at 65,536 rows.)
Is an Excel spreadsheet even going to handle concurrent transactions correctly? It wasn't designed for this kind of thing, and I wouldn't hold it responsible if it did something bad (like only letting one ODBC connection in at a time, or not properly locking concurrent updates).
That excel file's going to get corrupted in a hurry with many people hitting it at the same time. The scalability of Excel as a backend datastore is almost non-existent. It has a hard enough time keeping data integrity with its native Shared Workbook feature...
BTW- Is this third party a relative of your boss??? Yikes...
The ancient tech is itself a glaring problem. Will you be around forever? It will be very difficult for the boss to find new developers to maintain something like this. The tech world has moved on.
I've been working on a few small-scale Access projects that have turned large-scale rather quickly. The original designer implemented next to zero security, and everyone can just walk in with a simple Shift+Enter - way beyond just a security hole for nuclear submarines to dive through, and that has always driven me bonkers.
With that said, users are currently on Office 2000, migrating slowly to 2003. I have taken this opportunity to convince higher parties to implement said security through the use of built-in Access tools.
Next I get to go through hundreds of functions and forms, popping in Option Explicit to enforce declared data types, restricting the compile to an MDE, and cleaning up memory handling that was not done for some reason. There are some sensitive connection strings sitting in the code, plain as day, that need to be compiled away to reduce the risk factor.
My questions involve both the upgrade to 2003+ and the built-in security. And yes, this is what I'm stuck with using, unless I really want to redo everything in Visual FoxPro - but building a Porsche out of rocks is not my idea of a good time.
When moving into Office 2007, are there any major holes that I should be working around ahead of time? Within the next year and a half the whole business is supposedly upgrading to it, and I've only heard horror stories about changed/obsolete functions.
Are there any major bugs that can/will happen because of the use of the workgroup file and permissions? Are there tricks I should know ahead of time in case something crazy happens to lock everyone out of it?
In the sandbox, I have not implemented the Encryption feature. Pros/Cons, Risks?
Any other good tips? I realize the broadness of this question, and I have a few good books on hand here (Professional Access 2000 Programming, Access Developers 2002, Developing Solutions with Office 2000 Components and VBA), but obviously these are from before the time of current Access and Jet technology. If anything, a good book recommendation would be a booster for me - anything to give me a head start. Right now I really need to devour this security issue; it's beyond out of hand, considering the sensitivity of the information involved.
Thanks for reading my dreaded wall of text o.O
User-level security does not exist for Access 2007 files (http://office.microsoft.com/en-us/access/HA101662271033.aspx). If the data is very sensitive, you may wish to consider a different back-end.
If the data is truly that sensitive, it shouldn't be stored in an Access database file. Anyone can copy the entire data MDB/ACCDB and take it home to analyze at their leisure. Instead, the data should be upsized to a database engine such as SQL Server.
Keep the current Access queries, forms and reports but get the data into a format that isn't so easy to steal.
Then think about limiting their views, logging the queries they run and such.
I would wait until A2010 is out before making any determination about upgrades beyond A2003. A2003 is fine for now, it seems to me. I certainly wouldn't want to wade into targeting development at A2007 with A2010 coming out so soon and bringing so many really great new features (table-level data macros and really useful additions to SharePoint integration that make some huge things possible, to name just two). My plan is to skip A2007 with clients (though I have it installed now and am playing with it so that I'll be better prepared when 2010 comes out).
One thing that doesn't often get mentioned about A2007 is that the Office FileSearch object was removed in Office 2007. If your app uses it, you can use my File Search class module to replace it. I've had it in production use since June (when I created it), but just released it more widely and am currently troubleshooting some issues that seem to be related to file names with odd characters.