I am doing homework and having a hard time finding the information I need; I am just looking for some guidance. I need to identify some administrative IT tasks that use scripting where the script causes some type of security issue. What would the issue be, and how would it be solved? A summary, keywords, links, anything would be great. Thanks
This is an example of something I could imagine an inexperienced IT guy doing...
Imagine a PHP script that takes a path parameter telling it where to write a database backup. An adversary could pass a path inside the HTML document root, then download the entire database dump straight off the site.
Might not be the best example, but it happens.
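To make it concrete, here is a minimal sketch of that pattern and one way to lock it down. Everything here is illustrative: the database name, the directories, and the mysqldump invocation are made up, and real credential handling is omitted.

    <?php
    // Hypothetical backup endpoint. The dangerous version looks like this:
    //   $dest = $_GET['path'];                // attacker controls the destination
    //   system("mysqldump mydb > " . $dest);  // dump lands wherever they say,
    //                                         // e.g. /var/www/html/dump.sql
    //
    // Safer sketch: pin the destination to a directory outside the document
    // root and never let the client supply a path at all.
    $backupDir = '/var/backups/db';            // not reachable over HTTP
    $dest = $backupDir . '/dump-' . date('Ymd-His') . '.sql';

    // escapeshellarg() also guards against shell metacharacters in the path.
    system('mysqldump mydb > ' . escapeshellarg($dest), $exitCode);
    if ($exitCode !== 0) {
        error_log("backup failed with exit code $exitCode");
    }

Pinning the path server-side also closes a second hole hiding in the naive version: user input flowing into system() invites command injection on top of the file disclosure.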
I don't know if I've asked my question correctly, but I want to know how this is done. We have a website, and yesterday we noticed that the index.php file had been deleted from the server and an index.html added in its place. We are fairly sure the server itself was not hacked, so I would like to know what kinds of attacks could do this. I understand there can be a lot of options, but can someone describe how this can be done, or give some kind of link where I could read about it? I apologize if I have described the situation poorly, but I think someone will understand what I am asking. Thanks in advance.
The most likely explanation is a rootkit or some other backdoor. Making specific modifications to a server is hard to do with a purely automated script, so your suspected hacker is probably accessing the server through a backdoor of some kind. Make sure you keep only the needed ports open, and use a firewall plus intrusion detection to catch scanners probing your server. Another option, if you have the funds, is to store your files on a backend storage server and let only the frontend server access them; it's not foolproof, but it forces an attacker to repeat the whole find-an-open-port, get-through-the-firewall exercise a second time.
Look into these website(s) if you need more info: https://www.veracode.com/security/rootkit
https://en.wikipedia.org/wiki/Rootkit
I've proofread both of these; they give a good basic overview of the subject along with some prevention methods.
I am trying to allow ssh users to be defined in RADIUS, but share a home directory, shell, etc. The idea is that all users share the same home directory and default shell (an application). I would like to avoid creating numerous accounts on the local machine (really a Docker container), since their activity is constrained by the application. I think I just need to replace the user database information, but I don't understand how to override just that part of the login process. Has anyone else done this, or should I be solving this a different way?
OK, I am going to answer my own question. If you have better information, please contribute. This question might have been better on Server Fault, but as a programmer I spend more time on Stack Overflow, so I did not think of that.
The PAM library is useful for single sign-on, but it cannot replace the /etc/passwd file and related files; PAM and the modules it pulls in supplement the local Linux account information rather than substituting for it. So while you can authenticate against a remote server like RADIUS, you will still have entries in /etc/passwd. The control flow is a list of rules in pam.conf: the top-level library works its way down the list, letting each module (plug-in) do its work. Read 'man pam.conf' and 'man pam_mkhomedir' for good information on how this works.
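To make the control flow concrete, a stack for sshd along these lines might look like the sketch below. This is illustrative only: pam_radius_auth.so comes from the separate pam_radius package, and exact module names, options, and file locations vary by distro.

    # /etc/pam.d/sshd (illustrative)
    # Authenticate against the RADIUS server; deny if that fails.
    auth     sufficient   pam_radius_auth.so
    auth     required     pam_deny.so

    # Account and session handling still consult the local user database;
    # PAM does not replace the /etc/passwd lookup itself.
    account  required     pam_unix.so
    session  required     pam_mkhomedir.so skel=/etc/skel umask=0022
    session  required     pam_unix.so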
A module implements six functions (pam_sm_authenticate, pam_sm_setcred, pam_sm_acct_mgmt, pam_sm_open_session, pam_sm_close_session, and pam_sm_chauthtok), so adding a new module is very approachable. See pam_deny.c for the simplest module.
Also, getpwnam is a function you may need in whatever it is you are trying to do. You can read about that using 'man getpwnam', but you probably already knew that.
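If you want to poke at what getpwnam returns without writing C, PHP's POSIX extension happens to wrap it. A quick sketch (the username is a placeholder, and the posix extension must be enabled):

    <?php
    // posix_getpwnam() is a thin wrapper around the C getpwnam() call.
    // It returns false unless the user exists in the local user database
    // (or whatever NSS is configured to consult), which is exactly why
    // RADIUS authentication alone is not enough for a login to succeed.
    $info = posix_getpwnam('alice');   // 'alice' is a placeholder name
    if ($info === false) {
        echo "no passwd entry: login would fail even after authentication\n";
    } else {
        printf("uid=%d home=%s shell=%s\n", $info['uid'], $info['dir'], $info['shell']);
    }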
I've read a book (in German) called the TYPO3 and TypoScript cookbook: http://www.amazon.de/TYPO3-TypoScript-Kochbuch-TYPO3-Programmierung/dp/3446410465
In this book the author suggests, for security reasons, that the typo3_src directory be moved out of the web server's document root, but he doesn't say why we should do that.
Can someone explain the reason for this suggestion? What vulnerability would exist if we did not move it?
Many thanks
You should not make public what doesn't need to be.
Not making the directory publicly accessible reduces one possible attack vector.
It might be possible that a file in that directory can be made to do bad things when called directly.
It is important to do that if you want to secure your system as much as possible.
The main reason is that you never need to access typo3_src through the web server, so don't expose publicly what doesn't need to be public. If a vulnerability required direct access to the source files, you would not be affected by it.
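If moving it out of the document root is not practical, a rough stand-in is to deny direct web access to the directory. A sketch for Apache follows; the path is illustrative, and since TYPO3 installs usually reach typo3_src through symlinks, test that the site still works afterwards. Moving it out, as the book suggests, remains the cleaner fix.

    # Apache 2.4 sketch: refuse direct HTTP access to a typo3_src directory
    # that still lives inside the document root. The path is illustrative.
    <Directory "/var/www/html/typo3_src">
        Require all denied
    </Directory>

    # Apache 2.2 equivalent inside the same block:
    #     Order deny,allow
    #     Deny from all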
It is just a small step. IMHO it is not important and you can ignore it.
I noticed yesterday, looking through my Apache error log, that someone tried to get access to the website by requesting a lot of URLs like:
mywebsite.com/phpmyadmin
mywebsite.com/dbadmin
mywebsite.com/mysqladmin
mywebsite.com/foo.php#some-javascript
...
This caused a lot of 404 errors. What's the best way to stop them from doing this?
I thought about creating a fake phpmyadmin directory with some PHP code that bans an IP address from my website for about 12 to 24 hours when it accesses that directory.
Is there a better way to deal with this kind of visitor?
You should take a look at Fail2ban; it's pretty easy to set up with Apache.
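A sketch of a jail for this case follows. The apache-noscript filter ships with stock Fail2ban, though the log path and available filters depend on your distro and Fail2ban version:

    # /etc/fail2ban/jail.local (illustrative)
    [apache-noscript]
    enabled  = true
    port     = http,https
    filter   = apache-noscript
    logpath  = /var/log/apache2/error.log
    maxretry = 5
    # 43200 seconds = 12 hours, in line with the 12-24 h ban you had in mind
    bantime  = 43200

Note that the stock apache-noscript filter keys on requests for script files that don't exist; plain directory probes like /phpmyadmin may need a small custom filter matching the "File does not exist" lines in the error log.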
You can't really prevent people from trying these sorts of attacks. The best you can do is log all these sorts of attempts like you're currently doing and maybe implement some sort of temporary blacklisting.
The security of your site shouldn't depend on people not trying to do these sorts of attacks, since you will never be able to fully prevent them.
If none of those exist, they're not going to be able to do anything. You just have to worry about them being able to access parts that do exist and that you don't want them to access, or exploiting poorly written scripts with XSS holes in them.
You could make it harder on them by checking whether they're requesting a commonly probed path (like phpMyAdmin's default path) and serving an alternate 404 page from it, perhaps one with JavaScript aimed back at their scanner, or something along those lines.
Both the web files and the database have been tampered with to point to malicious JavaScript. The client has tasked me with rebuilding their site, but I would like to be able to view the site if possible to get at the content, as they had a lot of pages. Since I didn't originally build the site, I don't know the structure of the content.
I don't have to repair the site; I just need to rebuild it with the CMS of my choice. I don't know anything about the Joomla database, or whether I can even get access to it to be able to start there.
I originally thought using a virtual machine would be OK for this, but I wasn't sure if I would be risking my host machine as well with this method. I would of course turn off JavaScript, but I was hoping someone else may have already been down this road and might be able to offer some insight.
Couldn't you just FTP to their host, pull it off and get it working on a machine with no connection?
That's if you were really paranoid. I don't think an XSS-infected site would do too much damage to a properly protected machine anyway.
My paranoid answer:
It's a great idea to turn off JavaScript. I would get an extension like NoScript for Firefox or NotScript for Chrome. I use NoScript regularly, and it makes it easy to see what JavaScript is coming from where.
Secondly, your idea of using a VM is good, but take it a step further and run Linux in that VM. Linux can be infected, but it is rare to see malware in the wild that targets it.
Regular expressions and HTML parsers can also be your friends. Script something that can scan files looking for things like script tags and especially iframes. That way you can get an idea of which files have been tampered with and what is calling out to where.
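A quick sketch of that kind of scan in PHP; the site path and extension list are assumptions to adjust to the actual layout:

    <?php
    // Crude triage scan: walk a copy of the site and flag <script> and
    // <iframe> tags so you can see which files were tampered with.
    // The path and extension list are assumptions; adjust to the site.
    $root = '/path/to/site-copy';
    $exts = ['php', 'html', 'htm', 'js'];

    $it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($root));
    foreach ($it as $file) {
        if (!$file->isFile() || !in_array(strtolower($file->getExtension()), $exts)) {
            continue;
        }
        foreach (file($file->getPathname()) as $n => $line) {
            if (preg_match('/<\s*(script|iframe)\b[^>]*>/i', $line, $m)) {
                printf("%s:%d: %s\n", $file->getPathname(), $n + 1, trim($m[0]));
            }
        }
    }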
One other, less likely, gotcha is malicious executables or scripts disguised as something innocent like JPEGs, PDFs, etc. If you download and open files off of that machine, at least make sure you do it in your VM with no network connectivity.
Get server logs if you can; perhaps your assailant was sloppy and left some clue about their activities. Perhaps run Wireshark on a second machine to look for things calling out to strange domains. This may be excessive, but I find it to be a fun exercise. :)
Also, things like VirusTotal and Threat Expert can be your friends if you think you have a malicious file or you see malicious activity. Better to be paranoid than compromised.
Cleaning this type of stuff up isn't exactly rocket science. You just need to get a connection to the backing database server and run a couple of queries to strip the injected XSS out of the stored content.
You'd do your client a great service by starting off doing just that.
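A sketch of what those queries might look like from PHP, run against a copy of the database first. The table and column names assume a stock Joomla schema (article bodies in jos_content's introtext and fulltext columns); verify against the real database. The payload string is a placeholder for whatever markup was actually injected:

    <?php
    // Sketch of the cleanup queries. Run this against a COPY of the database
    // first. Table/column names assume a stock Joomla schema (article bodies
    // in jos_content.introtext / jos_content.fulltext); verify the real one.
    $pdo = new PDO('mysql:host=localhost;dbname=joomla_copy', 'user', 'pass');

    // Placeholder for the exact markup that was injected into the content.
    $payload = '<script src="http://evil.example/bad.js"></script>';

    foreach (['introtext', 'fulltext'] as $col) {
        $stmt = $pdo->prepare(
            "UPDATE jos_content SET `{$col}` = REPLACE(`{$col}`, :payload, '')"
        );
        $stmt->execute([':payload' => $payload]);
        echo "{$col}: {$stmt->rowCount()} rows touched\n";
    }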
The VM idea is a good one. krs1 suggests running Linux, which is an even better idea, as almost all trojans that get downloaded target Windows. Run Wireshark while you use the site so you can see what the network traffic looks like, what URLs are being requested, and so on. If you run it in a Linux VM, though, you'll probably only get half the picture, since any exploit worth the oxygen it took to keep the programmer alive while it was written will check what platform you're on and only download when you're on an exploitable one.
But I digress: you're rebuilding a website, not doing malware analysis (which is more fun, IMO). Once you identify and remove the offending content, you should be good. See if you can find out what exploit got them, and work with their IT guy, if they have one, so steps can be taken to keep it from happening again.