Fortify Path Manipulation error - security

A Fortify scan reported "Path Manipulation" security issues in the following snippet:
String filePath = getFilePath(fileLocation, fileName);
final File file = new File(filePath);
LOGGER.info("Saving report at : " + filePath);
BufferedWriter fileWriter = new BufferedWriter(new FileWriter(file));
fileWriter.write(fileContent);
So I am checking for blacklisted characters in fileLocation and throwing an exception, but Fortify still reports the issue:
try {
    String filePath = getFilePath(fileLocation, fileName);
    if (isSecurePath(filePath)) {
        final File file = new File(filePath);
        LOGGER.info("Saving report at : " + filePath);
        BufferedWriter fileWriter = new BufferedWriter(new FileWriter(file));
        fileWriter.write(fileContent);
    } else {
        throw new Exception("Security Issue. File Path has blacklisted characters");
    }
} catch (final Exception e) {
    LOGGER.error("Unable to prepare mail attachment : ", e);
    message = "Mail cannot be send, Unable to prepare mail attachment";
}

private boolean isSecurePath(String filePath) {
    String[] blackListChars = {".."};
    return (StringUtils.indexOfAny(filePath, blackListChars) < 0);
}
Should I ignore the scan report, or what would be the correct fix for this?

Firstly, SCA is a static analysis tool, so it can't check your custom validation to determine whether it works correctly or not; that is something a dynamic tool such as WebInspect is designed to do.
Secondly, blacklisting is a poor way of securing anything; whitelisting is the far more secure method. A blacklist has to account for every single possible way of being attacked, including ways that may not have been discovered yet, so it can easily become out of date before the software is even released, and the fact that the error message advertises a blacklist-based validation only entices an attacker.
Thirdly, this definitely wouldn't suffice to stop path manipulation, since you're only accounting for relative paths, and more specifically relative paths above the current directory.
You have no way of detecting whether somebody specifies a full path, or navigates to a directory that's a symbolic link to a separate directory altogether, along with a couple of other possible alternative attacks.
Ideally you should follow the recommendations shown by SCA and allow only a very specific set of paths and filenames. Where this isn't possible, use a whitelisting technique to specify the only characters that are allowed, and then validate that the input isn't, for example, an SMB share or a full path. If the input doesn't validate against the specification of what users should be supplying, reject it.
Doing this will get rid of the issue itself, but SCA will likely still show it in the results (again due to the differences between static and dynamic analysis). This can be worked around by auditing it as such or by creating a custom cleanse rule for the function that performs the validation.
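A minimal sketch of what such a whitelist-plus-canonical-path check could look like follows; ALLOWED_NAME, BASE_DIR and resolveSecureFile are illustrative names only (requires java.io.File, java.io.IOException and java.util.regex.Pattern), and you would adapt the allowed pattern and base directory to your own requirements:
// Hypothetical helper: only a known-good pattern of file names is accepted,
// and the canonical path must stay inside the one directory we allow.
private static final Pattern ALLOWED_NAME = Pattern.compile("[A-Za-z0-9_-]+\\.(pdf|xls|csv)");
private static final String BASE_DIR = "/opt/app/reports";

private File resolveSecureFile(String fileName) throws IOException {
    if (fileName == null || !ALLOWED_NAME.matcher(fileName).matches()) {
        throw new IOException("File name rejected by whitelist");
    }
    File base = new File(BASE_DIR);
    File candidate = new File(base, fileName);
    // getCanonicalPath() resolves "..", "." and symbolic links, so this check
    // also catches absolute paths and links that escape the base directory.
    if (!candidate.getCanonicalPath().startsWith(base.getCanonicalPath() + File.separator)) {
        throw new IOException("Resolved path escapes the allowed directory");
    }
    return candidate;
}
The calling code would then build the File via something like resolveSecureFile(fileName) instead of concatenating the path by hand.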

Related

setCacheDuration on Wicket DownloadLink

I am currently using a DownloadLink in Wicket to allow a user to download a created Excel file, which is then deleted afterwards. When this is done over SSL, IE gives me an error:
"Unable to download.
Internet Explorer was unable to open this site. The requested site is either unavailable or cannot be found. Please try again later. "
After doing some reading from this Microsoft support article:
http://support.microsoft.com/kb/323308
it seems it happens because the download is over SSL, and I have:
response.setHeader("Cache-Control", "no-cache, max-age=0, must-revalidate, no-store");
I set my downloadLink like so:
private void setupDownloadLink()
{
    IModel excelFileModel = new AbstractReadOnlyModel()
    {
        public Object getObject()
        {
            return excelCreator();
        }
    };
    auditDownloadlink = new DownloadLink("auditDownloadlink", excelFileModel);
    auditDownloadlink.setOutputMarkupPlaceholderTag(true);
    auditDownloadlink.setDeleteAfterDownload(true);
    auditDownloadlink.setCacheDuration(Duration.NONE);
    auditDownloadlink.setVisible(false);
    findUserForm.add(auditDownloadlink);
}
However, it seems to work if I do: auditDownloadlink.setCacheDuration(Duration.Minute);
I guess I am confused about what is happening here. Does setCacheDuration control how long the file is available after it is created, before it is deleted? Or does it mean how long in total the file will be available from the moment it is declared?
inside the excelCreator() method I call File excelfile = new File("Access.xls"); and then go ahead and process all of the excel work and create the spreadsheet, then at the end of the method I call:
FileOutputStream output = new FileOutputStream(excelfile);
workbook.write(output);
output.close();
Will the duration time I set start from the moment I call File excelfile = new File("ssaUserIDAccess.xls")?
What is the best duration and setup I should use for this scenario? The files can get quite huge and can take some time to create.
Thanks!
I do not remember the reason, but we had the same problem with SSL/IE and we just set the cache duration to 1 second, which is enough; it just cannot be NONE. We never found another solution:
auditDownloadlink.setCacheDuration(Duration.ONE_SECOND)

Overriding EmailReceived without losing the base EmailReceived functionality

I have a document library set up to receive emails. The emails coming in have a single picture and a CSV file which I use for some processing.
The EmailReceived override works perfectly, but of course by overriding it I lose the nice SharePoint functionality that saves the incoming email as configured in the settings.
It was my understanding that I could call MyBase.EmailReceived in my event for the underlying functionality to still work. This however is not working, and no record of the incoming email is retained.
For now I am explicitly creating an audit trail, but I would like to rely on SharePoint's existing functionality as I believe it will be more robust.
What am I doing wrong with the MyBase.EmailReceived call? Or what can I do instead if this doesn't work?
Thanks in advance.
When writing your own EmailReceived event receiver you will lose the default functionality.
What you will have to do is implement this default functionality yourself. Let me give you a simple example. The following example saves all mail attachments to the list if they are *.csv files. You can do the same with the emailMessage and save it to the list as well. As you can see, adding a file to a document library is as easy as calling Files.Add.
public override void EmailReceived(SPList list, SPEmailMessage emailMessage, string receiverData)
{
    SPFolder folder = list.RootFolder;
    // save attachments to the list
    foreach (SPEmailAttachment attachment in emailMessage.Attachments)
    {
        if (attachment.FileName.EndsWith(".csv"))
        {
            var attachmentFileName = attachment.FileName;
            folder.Files.Add(folder.Url + "/" + attachmentFileName, attachment.ContentStream, true);
        }
    }
    list.Update();
}
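To also keep a record of the incoming message itself, you could write the body to the library in the same way. This is only a rough sketch, assuming SPEmailMessage exposes the body via PlainTextBody/HtmlBody; the file name is just an example, and the lines would go before list.Update() in the receiver above:
// Rough sketch: persist the message body alongside the attachments so an
// audit trail of the incoming mail is kept in the document library.
string body = emailMessage.PlainTextBody ?? emailMessage.HtmlBody ?? string.Empty;
byte[] bodyBytes = System.Text.Encoding.UTF8.GetBytes(body);
string mailFileName = string.Format("mail-{0:yyyyMMdd-HHmmss}.txt", DateTime.Now);
folder.Files.Add(folder.Url + "/" + mailFileName, bodyBytes, true);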

How to get the uploaded file path in JSF

I'm importing an Excel document for reading and displaying the content in the UI. I need to know how to get the path of the file uploaded using browse, in order to read the content of the Excel file.
I need to know how to get the path of the file uploaded using browse in order to read the content of the excel file.
There's a major thinking mistake here.
Imagine that I am the client who wants to upload the file and you are the server who needs to get the file's contents. I give you the path c:\path\to\foo.xls as the sole information. How would you, as the server, ever get its contents? Do you have an open TCP/IP connection to my local hard disk file system? Really? Is everyone else's local disk file system, with all that sensitive information, really so easily accessible over the Internet?
That isn't how uploading files works. This would only ever work if the server runs at physically the same machine as the client, so that you can just use FileInputStream with that path (which would only occur in local development environment).
You should instead be interested in the sole file contents which the client has sent to you along with the HTTP request body. In HTML terms, you can use <input type="file"> for this. But until the upcoming JSF 2.2 there is no equivalent standard JSF <h:xxx> component for this. You need to look for JSF component libraries offering a component for this. In your case, with RichFaces (as tagged in your question), you can use <rich:fileUpload> for this. It's unclear which RichFaces version you're using, but for RF 3.3.3 you can find a complete example in the 3.3.3 showcase site and for RF 4.0 in the 4.0 showcase site.
Whatever way you choose, you should in your JSF managed bean ultimately end up with a byte[] or an InputStream representing the file contents. Then you've all the freedom to store it wherever you want on the server's local disk file system using for example FileOutputStream. You can even also just feed it directly to whatever Excel API you're using, most of them have just a method taking an InputStream or byte[].
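As a small illustration of that last point, assuming Apache POI as the Excel API and an upload component that exposes the content as an InputStream (the variable uploadedFile below is illustrative, e.g. Tomahawk's UploadedFile), a minimal sketch:
// Sketch: feed the uploaded content straight to POI without ever
// needing a path on the client's machine.
InputStream in = uploadedFile.getInputStream();
Workbook workbook = WorkbookFactory.create(in); // POI accepts an InputStream directly
Sheet sheet = workbook.getSheetAt(0);
for (Row row : sheet) {
    // read the cells of each row here
}
in.close();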
You had better take a look at this article. The solution that uses Tomahawk 2.0 (http://myfaces.apache.org/tomahawk-project/tomahawk20/index.html) by BalusC is great:
JSF 2.0 File upload
Everything you need is there
String fileName = FilenameUtils.getName(uploadedFile.getName());
byte[] bytes = uploadedFile.getBytes();
FileOutputStream outputStream = null;
try {
    // store the uploaded bytes in a file under the temp directory
    String filePath = System.getProperty("java.io.tmpdir") + File.separator + fileName;
    outputStream = new FileOutputStream(new File(filePath));
    outputStream.write(bytes);
    outputStream.close();
    readExcelFile(filePath);
} catch (Exception ex) {
    // log the failure rather than silently swallowing it
    ex.printStackTrace();
}
In the above code I am uploading the file using Tomahawk and, after uploading, storing the file in a temporary location. From there I read it using POI.
public static void readExcelFile(String fileName)
{
    try {
        FileInputStream myInput = new FileInputStream(fileName);
        POIFSFileSystem myFileSystem = new POIFSFileSystem(myInput);
        org.apache.poi.ss.usermodel.Workbook workbook = WorkbookFactory.create(myFileSystem);
        org.apache.poi.ss.usermodel.Sheet sheet = workbook.getSheetAt(0);
        for (Row row : sheet) {
            // you can read the file contents by iterating over the rows
        }
        myInput.close();
    } catch (Exception ex) {
        // log the failure rather than silently swallowing it
        ex.printStackTrace();
    }
}

UnauthorizedAccessException while scanning directory 'User\Documents\My Music'

Question: Why do I get this error while scanning a user's 'My Documents' folder, but not when I scan the 'My Music/My Pictures/My Videos' directory?
Secondary, less important question: Is there a way to avoid this without having to specifically filter these folders out, or using a try/catch block?
I prefer answers that teach me how to fish, instead of just giving me fish. Just at this point I am not sure where I need to look to specifically answer this question. I've read through documents about elevating permissions and iterating through the file system, and spent a good week looking for why I can set DirectoryInfo on 'User\My Music' but not 'User\Documents\My Music'(link) and just would enjoy a little boost in a different direction in regards to learning more.
I catch the UnauthorizedAccessException that is thrown when attempting Directory.GetFiles('path', "*", SearchOption.AllDirectories), where path is the user's 'My Documents'. To handle the exception I know that I need to walk the directory manually, which works, returning the files from the sub-directories.
The code for the initial GetFiles function:
public static string[] GetFiles(string path)
{
    string[] files;
    try
    {
        files = Directory.GetFiles(path, "*", SearchOption.AllDirectories);
    }
    catch (UnauthorizedAccessException)
    {
        // fall back to walking the tree manually
        files = WalkDirectory(path);
    }
    return files;
}

public static string[] WalkDirectory(string path)
{
    List<string> files = new List<string>();
    DirectoryInfo dir = new DirectoryInfo(path);
    foreach (DirectoryInfo subDir in dir.GetDirectories())
    {
        try
        {
            files.AddRange(WalkDirectory(subDir.FullName));
        }
        catch (UnauthorizedAccessException)
        {
            // complete fail to walk directory listed
            throw;
        }
    }
    foreach (FileInfo file in dir.GetFiles())
    {
        files.Add(file.FullName);
    }
    return files.ToArray();
}
This works out perfectly, until the code attempts to walk the hidden folders: My Music, My Pictures, or My Videos. No matter how I try and re-code to walk the hidden files, I keep receiving the UnauthorizedAccessException.
I understand completely that I am going to code around this. Mainly what I am curious to know, is why is the exception happening under a users folder?
An assumption I am making is that the folder is a symlink to another directory, because if I make the path ?:\users directory\user\My (Music, Pictures, or Videos), the code walks those directories without any issues. This only happens when trying to scan the directory files after setting them from within the user's My Documents.
OS: Windows 7
User Privileges: Administrator
Application Elevated to run as administrator
I was speaking about this with a friend who is not technical but knows enough tech to hold a conversation, and he helped me narrow this question down further. This is actually a duplicate question and was answered at "Check if a file is real or a symbolic link".
The folder is a symbolic link that was placed there for backwards-compatibility purposes, according to the TechRepublic article "Answers to some common questions about symbolic links", under the section "Windows Vista and Windows 7 have built-in symbolic links", paragraph 2.
In order to avoid attempting to scan this directory without a try/catch block on an UnauthorizedAccessException, the folder attributes need to be checked to determine whether the folder or file in question is a symbolic link, which again was answered in the Stack Overflow question listed above.
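A minimal sketch of that attribute check, applied to the directory walk above (FileAttributes.ReparsePoint is what marks junction/symbolic-link folders such as 'My Music'):
// Skip junctions/symbolic links such as "My Music" instead of recursing into them.
foreach (DirectoryInfo subDir in dir.GetDirectories())
{
    if ((subDir.Attributes & FileAttributes.ReparsePoint) != 0)
    {
        continue; // reparse point: a symbolic link or junction, not a real folder
    }
    files.AddRange(WalkDirectory(subDir.FullName));
}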

How to resolve an InvalidMd5 error returned from the Windows Azure Blob Storage service?

I am building an application that needs to allow users to upload large images (up to about 100 MB) to the Windows Azure Blob Storage service. Having read Rob Gillen's excellent article on file upload optimization for Windows Azure, I borrowed his approach for doing parallel upload of file chunks, using the CloudBlockBlob.PutBlock() method within a Parallel.For loop (code is available here).
The problem I have is that whenever I try to upload a file I get an "InvalidMd5" exception from the storage client. Suspecting that the problem may be in the development storage, I also tried running the code against my live Azure storage account, but I got the same error. Looking at the traffic with Fiddler I see that the "Content-MD5" header is set to a valid MD5 hash. The description of the error says that "The MD5 value specified in the request is invalid. The MD5 value must be 128 bits and Base64-encoded.", but to the best of my knowledge the value I see being sent in Fiddler is valid (e.g. a91c588092cedbdb1b82c2d3786fd509).
Here is the code I use for calculating the hash (courtesy of Rob Gillen):
public static string GetMD5HashFromStream(byte[] data)
{
    MD5 md5 = new MD5CryptoServiceProvider();
    byte[] retVal = md5.ComputeHash(data);
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < retVal.Length; i++)
    {
        sb.Append(retVal[i].ToString("x2"));
    }
    return sb.ToString();
}
And this is the actual call to PutBlock():
blob.PutBlock(transferDetails[j].BlockId, new MemoryStream(buff), blockHash, options);
I also tried passing the hash like so:
Convert.ToBase64String(Encoding.UTF8.GetBytes(blockHash))
but the result was the same - "InvalidMd5" error :(
The MD5 hash being passed to PutBlock() with base64 encoding (e.g. YTkxYzU4ODA5MmNlZGJkYjFiODJjMmQzNzg2ZmQ1MDk=) and without it (e.g. a91c588092cedbdb1b82c2d3786fd509) doesn't seem to make a difference.
Rob's code obviously worked for him and I really have no idea what may be causing the problem in my case. The only change I've made to Rob's code is to alter the ParallelUpload() extension method to take a Stream instead of a file name and to dynamically determine the block size depending on the size of the file being uploaded.
Please, if anyone has an idea how to solve this problem, let me know! I will be really grateful! I already lost two days struggling with this.
Rob, thank you for offering to help and pointing out the difference in the MD5 hashes. Your answer got me thinking in the right direction. I spent another whole day digging into this but luckily (and thanks to your remark :)) I finally managed to resolve the problem. It turned out there were actually two issues in my case:
1) The MD5 hash: I noticed the hash you pasted in your answer is shorter than the one I was getting, but it took me a while to see that yours was exactly half as long. After some experimentation I found out that the GetMD5HashFromStream() method from your test application converts the 16-byte hash generated by the MD5CryptoServiceProvider into a 32-character hex string. It was this 32-character string that was causing the problem, because it was then Base64-encoded and passed to the PutBlock() method, hence the twice-as-long, and thus invalid, hash that the blob storage service was complaining about. Here is the code I ended up with:
Original:
public static string GetMD5HashFromStream(byte[] data)
{
    MD5 md5 = new MD5CryptoServiceProvider();
    byte[] retVal = md5.ComputeHash(data);
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < retVal.Length; i++)
    {
        sb.Append(retVal[i].ToString("x2"));
    }
    return sb.ToString();
}
and the call to PutBlock():
// calculate the block-level hash
string blockHash = Helpers.GetMD5HashFromStream(buff);
blob.PutBlock(transferDetails[j].BlockId, new MemoryStream(buff), blockHash, options);
Final:
MD5 md5 = new MD5CryptoServiceProvider();
byte[] blockHash = md5.ComputeHash(buff);
string convertedHash = Convert.ToBase64String(blockHash, 0, 16);
blob.PutBlock(transferDetails[j].BlockId, new MemoryStream(buff), convertedHash, options);
Rob, I'm really curious how your code worked in your case and why it didn't in mine - is it something specific to the setup on my machine, or perhaps a differing version of the Azure tools (I'm using v1.2)... Please let me know if you have any idea.
2) A bug in the development storage: lots of combing through the web led me to this page that mentions an obscure but apparently known bug in the development storage:
If two requests attempt to upload a block to a blob that does not yet exist in development storage, one request will create the blob, and the other may return status code 409 (Conflict), with storage services error code BlobAlreadyExists.
Here is what I came up with to work around it:
public static bool IsDevelopmentStorageRunning()
{
    return new Microsoft.ServiceHosting.Tools.DevelopmentStorage.DevStore().IsRunning();
}
You will need to add a reference to Microsoft.ServiceHosting.Tools.dll, which was located in "C:\Program Files\Windows Azure SDK\v1.2\bin" on my machine. Then, I use this method before the Parallel.For loop that processes the file chunks as follows:
bool isDevStorageRunning = StorageProxy.IsDevelopmentStorageRunning();
ParallelOptions parallelOptions = new ParallelOptions();
parallelOptions.MaxDegreeOfParallelism = isDevStorageRunning ? 1 : 4;
Parallel.For(0, transferDetails.Length, parallelOptions, j => { ... });
I hope this will save someone all the hassles I went through. Rob, thank you once again for helping out :)
tishon,
After seeing this post, I went back and re-tested my code, and I'm thinking that there is a problem with the data being passed (possibly what you are passing into the function?).
One thing that jumped out at me immediately was the md5 hash you provided... in every case I've tested, my md5 hashes end with two equals signs like the following (captured from fiddler):
Content-MD5: D1Mxthoqhlwm9cC0729mWA==
I'm not a crypto expert, but I know from working with the block IDs for block blobs that if you have invalid/unsafe characters in your block ID prior to converting it to a Base64-encoded value, you'll get invalid data and block IDs that Azure can't interpret.
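For illustration, a common way to build safe block IDs is to Base64-encode fixed-length strings so every ID is valid and of equal length; a sketch only (blockCount is just a stand-in for however many blocks you upload, and requires System.Text.Encoding):
// Sketch: block IDs must be Base64-encoded and of equal length within a blob.
string[] blockIds = new string[blockCount];
for (int i = 0; i < blockCount; i++)
{
    // pad the index so every ID has the same length before encoding
    blockIds[i] = Convert.ToBase64String(Encoding.UTF8.GetBytes(i.ToString("d6")));
}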
