Protecting content in a jar file - java-me

I made a mobile application. On extracting the jar file, the resource files are visible (text files and image files). Is there some way to protect the content files? I am not worried about the reverse engineering of the bytecode. I need to protect the content.
Thanks :)

The only 100% secure way to protect the resource files is to omit them from the jar.
Seriously.
If your application needs to use your resources, you could store the resources encrypted, and then decrypt them as needed. But in the end, you're still handing the data to an untrusted entity.
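For illustration, here is a minimal sketch of that approach in standard Java, assuming the resources were AES-encrypted before being packed into the jar (on Java ME itself javax.crypto is typically not available, so you would normally use a lightweight crypto library such as Bouncy Castle instead). The key has to ship inside the jar, so this only raises the bar; it is obfuscation, not real protection.

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class ResourceLoader {

    // Placeholder key. It necessarily ships inside the jar, so a determined
    // user can still recover it.
    private static final byte[] KEY = {
        0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
        0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0x10
    };

    // Reads an encrypted resource from the jar and returns the decrypted bytes.
    public static byte[] loadDecrypted(String resourceName) throws Exception {
        InputStream in = ResourceLoader.class.getResourceAsStream(resourceName);
        if (in == null) {
            throw new IllegalArgumentException("Resource not found: " + resourceName);
        }
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int read;
        while ((read = in.read(chunk)) != -1) {
            buffer.write(chunk, 0, read);
        }
        in.close();

        // ECB is used here only to keep the sketch short; a real build step
        // would encrypt with a mode that uses an IV and store the IV alongside.
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(KEY, "AES"));
        return cipher.doFinal(buffer.toByteArray());
    }
}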

Related

How can we secure IssuerSecret in Azure

Just learning Azure and I've noticed most of the samples have namespace, issuer and issuersecret in plain text inside the .cscfg or the web.config files. This seems like a bad idea. What is the best way to handle this so that it is not in plain text? Thanks
Your web application needs to access them somehow anyway. So even if you encrypted them, you would still leave around an encryption key to unlock them.
Your .cscfg and .config files are securely protected on Azure. If your server is hacked, it doesn't really matter whether you encrypted them or not: if your application has to use the secret and someone has access to your server, they can find out as much as, if not more than, your application, including all the information your application is using.
The only secure way of storing something would be non-reversible encryption, i.e. hashing. However, you need the actual value here, so it doesn't apply. It is more useful for storing passwords and the like.

Image upload security concern

Some background info:
I am developing a website, on which users will have profiles and will be able to upload profile pictures. I am not very experienced, and do not have lots of time available on my hands (as I do it in my free time). Yet I am aware that uploads can leave a huge security gap for any website if implemented incorrectly.
My actual question:
Is it safe to limit images to, say, .gif, .jpg and .png extensions, knowing the server can only parse php files (don't know if I'm using the terminology correctly)? Or is there some other security risk in doing this? Note: I also store the files in a private directory after renaming them with random numbers, and pass them through a php file whenever it is necessary.
Additional safeguards you could use:
limit the size of the uploaded file

What kind of security issues will I have if I provide my web app write access?

I would like to give my web application write access to a particular folder on my web server. My web app can create files on this folder and can write data to those files. However, the web app does not provide any interface to the users nor does it publicize the fact that it can create files or write to files. Am I susceptible to any security vulnerabilities? If so, what are they?
You are susceptible to having your server tricked into writing malicious files into that location.
The issues that can arise from that depend on what happens with that folder.
Is it web-accessible?
Then malicious files can be hosted there, for example scripts that steal cookies or pages that serve up malware.
Is it a folder where applications are executed automatically?
This would be madness. Do not do this.
Is just some place where you store files for later processing?
Consider what could happen if malicious files are put there: a malicious PDF, say, is fed into your PDF processing system, a bug in the PDF parser gets exploited, malicious code runs, and it's all over.
Basically, the issue you potentially expose yourself to is, as I said, malicious files in that location. Think through carefully what happens in that folder and how exposed it is, and decide for yourself how risky it is.
With those risks identified, you can then decide how to go ahead. Since you presumably don't allow direct uploads to that area, the risk is significantly lower: you are really assessing a situation in which someone has found a bug in your web server that lets them tell it to save a file in some place without you providing that access. I'd hazard that there aren't hundreds of these types of issues, though there may be some. Hence it is still worth minimising the risk posed by a file in that folder, by making sure the folder and the files in it are used in a restricted way and, if possible, checked to see whether they are "good" files.

What security issues appear when users can upload their own files?

I was wondering what security issues appear when the end user of a website can upload files to the server.
For instance if my website allows the users to upload a profile picture, and one user uploads something harmful instead, what could happen? What kind of security should I set up to prevent attacks like this? I'm talking here about images, but what about the case where a user can upload anything into a file-vault kind of application?
It's more a general question than a question about a specific situation, so what are the best practices in that situation? What do you usually do?
I suppose: type validation on upload, different permissions for uploaded files... what else?
EDIT: To clear up the context, I am thinking about a web application where a user can upload any kind of file and then display it in the browser. The file would be stored on the server. The users are whoever uses the website, so there is no trust involved.
I am looking for general answers that could apply for different languages/framework and production environments.
Your first line of defense will be to limit the size of uploaded files, and kill any transfer that is larger than that amount.
File extension validation is probably a good second line of defense. Type validation can be done later... as long as you aren't relying on the (user-supplied) mime-type for said validation.
Why file extension validation? Because that's what most web servers use to identify which files are executable. If your executables aren't locked down to a specific directory (and most likely, they aren't), files with certain extensions will execute anywhere under the site's document root.
File extension checking is best done with a whitelist of the file types you want to accept.
Once you validate the file extension, you can then check to verify that said file is the type its extension claims, either by checking for magic bytes or using the unix file command.
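To make that concrete, here is a hedged sketch in Java (the class and method names are just for illustration) that combines an extension whitelist with a magic-byte check for the image types mentioned above:

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.Set;

public class UploadValidator {

    // Whitelist of extensions we are willing to accept.
    private static final Set<String> ALLOWED = Set.of("jpg", "jpeg", "png", "gif");

    public static boolean hasAllowedExtension(String filename) {
        int dot = filename.lastIndexOf('.');
        if (dot < 0) return false;
        return ALLOWED.contains(filename.substring(dot + 1).toLowerCase());
    }

    // Checks the file's leading "magic bytes" against the signatures of the
    // formats we accept (PNG, JPEG, GIF).
    public static boolean looksLikeAllowedImage(Path file) throws IOException {
        byte[] header;
        try (InputStream in = Files.newInputStream(file)) {
            header = in.readNBytes(8);
        }
        if (header.length < 8) return false;

        // PNG: 89 50 4E 47 0D 0A 1A 0A
        byte[] png = {(byte) 0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A};
        if (Arrays.equals(header, png)) return true;

        // JPEG: FF D8 FF
        if ((header[0] & 0xFF) == 0xFF && (header[1] & 0xFF) == 0xD8 && (header[2] & 0xFF) == 0xFF) return true;

        // GIF: "GIF87a" or "GIF89a"
        String gif = new String(header, 0, 6, StandardCharsets.US_ASCII);
        return gif.equals("GIF87a") || gif.equals("GIF89a");
    }
}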
I'm sure there are other concerns that I missed, but hopefully this helps.
Assuming you're dealing with only images, one thing you can do is use an image library to generate thumbnails/consistent image sizes, and throw the original away when you're done. Then you effectively have a single point of vulnerability: your image library. Assuming you keep it up-to-date, you should be fine.
Users won't be able to upload zip files or really any non-image file, because the image library will barf if it tries to resize non-image data, and you can just catch the exception. You'll probably want to do a preliminary check on the filename extension though. No point sending a file through the image library if the filename is "foo.zip".
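A rough sketch of that idea using Java's built-in ImageIO as the image library (any decent imaging library works the same way): decode the upload, re-encode a resized copy, and throw the original away. ImageIO.read returns null for anything it cannot parse as an image, which doubles as the type check.

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ThumbnailMaker {

    // Decodes the upload, writes a 128x128 PNG thumbnail, and rejects non-images.
    public static void makeThumbnail(File upload, File thumbnail) throws IOException {
        BufferedImage source = ImageIO.read(upload);
        if (source == null) {
            // Not something ImageIO recognises as an image: reject the upload.
            throw new IOException("Uploaded file is not a valid image");
        }

        BufferedImage scaled = new BufferedImage(128, 128, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = scaled.createGraphics();
        g.drawImage(source, 0, 0, 128, 128, null);
        g.dispose();

        // Re-encoding discards anything that was smuggled into the original file.
        ImageIO.write(scaled, "png", thumbnail);
    }
}

Because only the re-encoded copy is ever stored or served, an attacker's carefully crafted original never reaches other users.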
As for permissions, well... don't set the execute bit. But realistically, permissions won't help protect you much against malicious user input.
If your programming environment allows it, you're going to want to run some of these checks while the upload is in progress. A malicious HTTP client can potentially send a file with an infinite size, i.e. it just never stops transmitting random bytes, resulting in a denial of service attack. Or maybe they just upload a gig of video as their profile picture. Most image file formats have a header at the beginning as well. If a client begins to send a file that doesn't match any known image header, you can abort the transfer. But that's starting to move into the realm of overkill. Unless you're Facebook, that kind of thing is probably unnecessary.
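One generic way to enforce a size cap while the transfer is still in progress is to copy the request stream with a hard limit and abort as soon as it is exceeded (a sketch; most frameworks and servers also expose a maximum-request-size setting that does this for you):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BoundedCopy {

    // Copies the upload stream to storage, aborting as soon as it exceeds maxBytes.
    public static void copyWithLimit(InputStream in, OutputStream out, long maxBytes) throws IOException {
        byte[] chunk = new byte[8192];
        long total = 0;
        int read;
        while ((read = in.read(chunk)) != -1) {
            total += read;
            if (total > maxBytes) {
                // Stop consuming the request body instead of buffering an "infinite" upload.
                throw new IOException("Upload exceeds the limit of " + maxBytes + " bytes");
            }
            out.write(chunk, 0, read);
        }
    }
}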
Edit
If you allow users to upload scripts and executables, you should make sure that anything uploaded via that form is never served back as anything other than application/octet-stream. Don't try to mix the Content-Type when you're dealing with potentially dangerous uploads. If you're going to tell users they have to worry about their own security (that's effectively what you do when you accept scripts or executables), then everything should be served as application/octet-stream so that the browser doesn't attempt to render it. You should also probably set the Content-Disposition header. It's probably also wise to involve a virus scanner in the pipeline if you want to deal with executables. ClamAV is scriptable and open source, for example.
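As a sketch of that serving side (shown here as a Java servlet; the storage location and request parameter are hypothetical), a download handler along these lines forces every stored upload to come back as an opaque attachment rather than something the browser tries to render:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DownloadServlet extends HttpServlet {

    // Hypothetical location where uploads are stored under random names.
    private static final Path STORAGE = Path.of("/var/webapp/uploads");

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String id = req.getParameter("id");
        if (id == null) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST);
            return;
        }

        // Resolve the stored name and refuse anything that escapes the storage folder.
        Path file = STORAGE.resolve(id).normalize();
        if (!file.startsWith(STORAGE) || !Files.exists(file)) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }

        // Never let the browser guess or render the content.
        resp.setContentType("application/octet-stream");
        resp.setHeader("Content-Disposition", "attachment; filename=\"download\"");
        Files.copy(file, resp.getOutputStream());
    }
}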
Size validation would be useful too; you wouldn't want someone to intentionally upload a 100 GB fake image just out of spite, now would you? :)
Also, you may want to consider something to prevent people from using your bandwidth just as an easy way to host images (I would mostly be concerned with hosting of illegal content). Most people would use ImageShack for temporary image hosting anyway.
For further reading, there's a great article by Acunetix on Why File Upload Forms are a Major Security Threat
With more context, it would be easier to know where the vulnerabilities may lie.
If the data could be stored in a database (sounds like it won't be), then you should guard against SQL Injection attacks.
If the data could be displayed in a browser (sounds like it would be), then you may need to guard against HTML/CSS Injection attacks.
If you're using scripting languages (e.g., PHP) on the server, then you may need to guard against injection attacks against those specific languages. With compiled server code (or a poor scripting implementation), there's the chance of buffer overrun attacks.
Don't overlook user data security, too: Can your users trust you to prevent their data from being compromised?
EDIT: If you really want to cover all bases, consider the risks of JPEG and WMF security holes. These could be exploited if a malicious user can upload the files from one system, and then views the files -- or persuades another user to view the files -- from another system.
Size of the content
Restricting file types (.jpeg, .png, etc.; only white-listed file types should be allowed)
File tampering (for example: on a site supporting foreign languages, certain encodings are allowed; an attacker may take advantage of this by encoding a script or other malicious code, appending it to the original file, and trying to upload it)

Is It Secure To Store Passwords In Web Application Source Code?

So I have a web application that integrates with several other APIs and services which require authentication. My question is, is it safe to store my authentication credentials in plain text in my source code?
What can I do to store these credentials securely?
I think this is a common problem, so I'd like to see a solution which secures credentials in the answers.
In response to comment: I frequently use PHP, Java, and RoR
I'd like to see some more votes for an answer on this question.
Here's what we do with our passwords.
// Non-secret settings can live in the application code.
$db['hostname'] = 'somehost.com';
$db['port'] = 1234;

// The credentials live outside the document root, readable only by the web server user.
$config = array();
include '/etc/webapp/db/config.php';
$db['username'] = $config['db']['username'];
$db['password'] = $config['db']['password'];
No one but the web server user has access to /etc/webapp/db/config.php; this way you are protecting the username and password from developers.
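The same pattern in Java would be to load a properties file that lives outside both the source tree and the document root and is readable only by the application server user (a sketch; the path and property names are hypothetical):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class DbConfig {

    // Loads credentials from a file outside the webroot, owned by the app server user.
    public static Properties load() throws IOException {
        Properties config = new Properties();
        try (FileInputStream in = new FileInputStream("/etc/webapp/db/config.properties")) {
            config.load(in);
        }
        // Callers would read e.g. config.getProperty("db.username") and config.getProperty("db.password").
        return config;
    }
}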
The only reason NOT to store the password in the code is the configuration issue (i.e. you may need to change the password and don't want to rebuild/recompile the application).
But is the source a "safe" place for "security sensitive" content (like passwords, keys, algorithms)? Of course it is.
Obviously security sensitive information needs to be properly secured, but that's a basic truth regardless of the file used. Whether it's a config file, a registry setting, or a .java file or .class file.
From an architecture point of view, it's a bad idea for the reason mentioned above, just like you shouldn't "hard code" any "external" dependencies in your code if you can avoid it.
But sensitive data is sensitive data. Embedding a password into a source code file makes that file more sensitive than other source code files, and if that's your practice, I'd consider all source code as sensitive as the password.
It is not to be recommended.
An encrypted web.config would be a more suitable place (but note it can't be used with a web farm).
It appears the answer is the following:
Don't put credentials in source code but...
Put credentials in a configuration file
Sanitize log files
Set proper permissions/ownership on configs
Probably more depending on platform...
No, it is not.
Plus, you might want to change your password one day, and having to change the source code may not be the best option.
No. Sometimes it is unavoidable. A better approach is to have an architecture set up where the service implicitly trusts your running code based on some other trust relationship (such as trusting the machine the code is running on, or trusting the application server that is running the software).
If neither of these is available, it would be perfectly acceptable to write your own trust mechanism, though I would keep it completely separate from the application code. I would also recommend researching ways to keep passwords out of the hands of predators even when they are stored on the local machine, remembering that you can't protect anything if someone has control of the physical machine it is on.
If you control the Web server, and maintain it for security updates, then in the source (preferably in a configuration module) or in a configuration file that the source uses is probably best.
If you do not control the Web server (say, you are on a shared or even dedicated server provided by a hosting company), then encryption won't help you very much; if the application can decrypt the credentials on a given host, then the host can be used to decrypt the credentials without your intervention (think root or Administrator looking at the source code, and adapting the decryption routine so that it can be used to read the configuration). This is even more of a possibility if you are using unobfuscated managed code (e.g., JVM or .NET) or a Web scripting language that resides in plaintext on the server (like PHP).
As is usually the case, there is a tradeoff between security and accessibility. I'd think about what threats are the ones you are trying to guard against and come up with a means to protect against the situations that you need. If you're working with data that needs to be secure, you should probably be redacting the database fairly regularly and moving data offline to a firewalled and well-protected database server as soon as it becomes stale on the site. This would include data like social security numbers, billing information, etc., which can be referenced. This would also mean that you'd ideally want to control the servers on your own network which provide billing services or secure data storage.
I prefer to keep them in a separate config file, located somewhere outside the web server's document root.
While this doesn't protect against an attacker subverting my code in such a way that it can be coerced into telling them the password, it does still have an advantage over putting the passwords directly into the code (or any other web-accessible file) in that it eliminates concern over a web server misconfiguration (or bug/exploit) allowing an attacker to download the password-containing file directly.
One approach is to encrypt the passwords before placing them in web.config.
I'm writing this for a web service app that receives passwords, not for the client:
If you save a hashed password in source code, someone who views the source code won't be able to do anything useful with that hash.
Your program would receive the plain password, hash it, and compare the two hashes.
That's why we save hashed passwords in databases, not plain text: they can't be reversed. If someone, for example, steals the DB or views it for malicious purposes, they won't get all the users' passwords, only the hashes, which are pretty useless to them.
Hashing is a one-way process: it produces the same value from the same source, but you can't compute the source value from the hash.
Storing on the client: when the user enters the password, you save it to a DB/file in plaintext, perhaps obfuscated a little, but there is not much you can do to prevent someone who gets hold of that computer from getting that password.
Nobody seems to have mentioned hashing yet: with a strong hash algorithm (i.e. SHA-2 rather than MD5), it should be much safer.
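For illustration, computing a SHA-256 hash in Java looks like the sketch below. Note that for storing user passwords specifically, a deliberately slow, salted scheme (PBKDF2, bcrypt, scrypt) is generally preferred over a single fast hash.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PasswordHash {

    // Returns the SHA-256 digest of the input as a hex string.
    public static String sha256Hex(String input) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}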
