How can I know whether my website contains malicious code?

Recently our website was hacked, and the browser reports the following when we try to open it:
Reported Attack Page!
This web page at www.example.com has been reported as an attack page and has been blocked based on your security preferences.
Attack pages try to install programs that steal private information, use your computer to attack others, or damage your system. Some attack pages intentionally distribute harmful software, but many are compromised without the knowledge or permission of their owners.
Eventually I learned that there is some malicious code on the site, which is why it is being reported this way.
So how can I find where that code is?

I would say, first, scan your own PC to ensure the infection did not originate there.
Then download the site to a safe location (you may need to disable your antivirus software for a while to do this). Hopefully you, or your web hosting partner, have a backup of the site from before the infection. Compare it against the infected content and you should see the differences clearly.
If you do not have a backup, scan the infected scripts with antivirus software to identify the specific files, then browse through the code in those files and look for the malicious code manually. These worms/trojans mostly append their payload to a file or to a specific tag, so look for suspicious lines and parts of the code that do not look familiar.
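If you do have a backup, the comparison can be scripted. A minimal sketch in PHP, assuming the clean backup and the downloaded copy sit in two local directories (the paths are placeholders):
    <?php
    // compare.php -- usage: php compare.php /path/to/clean-backup /path/to/downloaded-copy
    list(, $cleanDir, $infectedDir) = $argv;
    $iter = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($cleanDir, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($iter as $file) {
        $relative = substr($file->getPathname(), strlen($cleanDir));
        $other    = $infectedDir . $relative;
        if (!file_exists($other)) {
            echo "MISSING in downloaded copy: $relative\n";
        } elseif (md5_file($file->getPathname()) !== md5_file($other)) {
            echo "MODIFIED: $relative\n";   // inspect this file by hand
        }
    }
Run it a second time with the arguments swapped to catch files the attacker added that were never in the backup.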

Related

One of our users visited suspicious URLs on my website

I have an affiliate website, and I monitor which pages users visit. For the first time, I have noticed a user visiting the following URLs on my website, which I guess is some kind of hacking attempt. I need help. My website is constantly performing poorly: sometimes it takes longer than normal to open, sometimes tables appear blank, and sometimes cron jobs fail to execute.
Here are a few of the URLs this user has visited repeatedly:
http://www.example.com/product.php?category=study-materials&id=SHOEMHMZH8HPAX4H%22%20or%20(1,2)=(select*from(select%20name_const(CHAR(111,108,111,108,111,115,104,101,114),1),name_const(CHAR(111,108,111,108,111,115,104,101,114),1))a)%20--%20%22x%22=%22x
http://www.example.com/product.php?category=video-albums&id=SHOEMHMZH8HPAX4H%20or%20(1,2)=(select*from(select%20name_const(CHAR(111,108,111,108,111,115,104,101,114),1),name_const(CHAR(111,108,111,108,111,115,104,101,114),1))a)%20--%20and%201%3D1
There are lots more such URLs. I am totally confused and a bit scared too. What exactly is this, and what is the user trying to do with such URLs? How can I prevent such actions?
Since this morning the user has been visiting from different IP addresses, and the visited URLs all look like the ones above.
It's someone trying to break into the site, probably using a tool of some kind. This happens all the time to every website, since anyone can freely download tools to attack sites.
The URLs you list contain attempts to send commands to your database, called SQL injection. If you look in your web server logs, you'll probably see this kind of thing a lot.
As long as your site has been coded securely, doesn't trust user input, and doesn't use vulnerable software (such as out-of-date plugins or unpatched operating systems), it may be nothing to worry about.
I presume you didn't write the software. You could always contact its creator to ask how it was coded and tested, and whether it has been pentested (which is when a professional hacker is paid to try to break into the site).
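For injection attempts like the ones above, the usual fix is to stop splicing request parameters into SQL strings. A minimal sketch in PHP with PDO; the table, column and connection details are hypothetical, not taken from the question:
    <?php
    // product.php -- parameterized lookup: the id value is never spliced into the SQL.
    $pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $stmt = $pdo->prepare(
        'SELECT * FROM products WHERE category = :category AND id = :id'
    );
    $stmt->execute([
        ':category' => $_GET['category'] ?? '',
        ':id'       => $_GET['id'] ?? '',
    ]);
    $product = $stmt->fetch(PDO::FETCH_ASSOC);
    // Even an id like ...or (1,2)=(select*from(...)) is now just a literal
    // string compared against the column; the database never executes it.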

What damage can a website do?

Now and then I (accidentally) come across websites that my anti-virus warns me about. Out of curiosity, what kind of damage can a website do?
I've been working in web development for around 4 years now and can't think of any 'genuine' damage worth warning the user about. Maybe I'm missing something obvious, but surely browsers, and the basic security measures implemented by mainstream operating systems, prevent anything particularly invasive from going on?
I'm talking about threats aside from anything deceptive, by the way (phishing etc.). Could taxing the browser enough warrant an anti-virus warning (i.e. overloading a page with resource-draining JavaScript)? Cookies, caches and localStorage typically all have limits, so I can't think of what could go on there.
I suspect this may be slightly off-topic, as it's less technically specific than what I'd usually ask. I'll happily delete it if this is the case.
The main risk is encountering a drive-by download.
A drive-by download isn't necessarily a file download in the usual sense; it could be a browser exploit that allows executable code to download and execute on your system (known as the payload).
One example is the Microsoft Internet Explorer colspan Element Processing Arbitrary Code Execution Vulnerability:
Microsoft Internet Explorer contains a vulnerability that could allow an unauthenticated, remote attacker to execute arbitrary code on a targeted system.
The vulnerability is due to improper processing of elements in web pages. An unauthenticated, remote attacker could exploit this vulnerability by convincing a user to view a malicious website. If successful, the attacker could exploit this vulnerability to execute arbitrary code on the system with the privileges of the user.
The vulnerability is due to improper handling of a constantly changed colspan in a fixed table layout. If colspan could be increased after initialization, it could trigger a heap-based buffer overflow.
However, more recent exploits exist, such as this one from this year (2015) in Flash Player:
Adobe Flash Player before 13.0.0.269 and 14.x through 16.x before 16.0.0.305 on Windows and OS X, and before 11.2.202.442 on Linux, allows attackers to execute arbitrary code or cause a denial of service (memory corruption) via unspecified vectors.
Another attack vector from a website could be the exploitation of a cross-domain attack such as Cross-Site Request Forgery (CSRF). Such a malicious site could be making background requests to other sites you're logged into. For example, it might make AJAX requests to https://facebook.com/delete_account (a made-up URL path), and since you're logged into Facebook, your browser will send your cookies and the action would be triggered. That is, unless Facebook has CSRF protection on its delete-account function (I'm pretty sure it does, though).
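The standard defense is a per-session CSRF token that a background request from another origin cannot know. A minimal PHP sketch; the field names and the delete_account.php endpoint are illustrative, not from any particular framework:
    <?php
    session_start();
    // Generate a token once per session; embed it in every state-changing form.
    if (empty($_SESSION['csrf_token'])) {
        $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
    }
    if ($_SERVER['REQUEST_METHOD'] === 'POST') {
        // Reject the request unless the submitted token matches the session's.
        $sent = $_POST['csrf_token'] ?? '';
        if (!hash_equals($_SESSION['csrf_token'], $sent)) {
            http_response_code(403);
            exit('Invalid CSRF token');
        }
        // ... perform the sensitive action here ...
    }
    ?>
    <form method="post" action="delete_account.php">
      <input type="hidden" name="csrf_token"
             value="<?= htmlspecialchars($_SESSION['csrf_token']) ?>">
      <button type="submit">Delete account</button>
    </form>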
Another example of a cross-domain attack is that the site may try to exploit an XSS flaw on another site you use. It could redirect you to another site and capture your credentials as you log in, or it could do something sneakier, like requesting a site in the background and grabbing your session cookie. This requires the target site to contain such an XSS flaw, however.
One of the main issues is that when you visit a website, it can automatically download something onto your computer. Normally a website will ask whether you are sure you want to download the item, but a malicious one can download something without your permission. If the downloaded file is a virus, you now have a virus on your computer, and it can inflict any sort of damage.
See here (https://www.microsoft.com/security/pc-security/virus-whatis.aspx) for the issues a virus can cause and how to remove one.

How can I browse an XSS infected Joomla website for rebuilding purposes without being infected?

Both the web files and the database have been tampered with to point to malicious JavaScript. I have been tasked with rebuilding the site, but I would like to be able to view it if possible to get at the content, as it had a lot of pages. Since I didn't originally build the site, I don't know the structure of the content.
I don't have to repair the site; I just need to rebuild it with the CMS of my choice. I don't know anything about the Joomla database, or whether I can even get access to it to start there.
I originally thought a virtual machine would be OK for this, but I wasn't sure whether I would be risking my host machine as well with this method. I would of course turn off JavaScript, but I was hoping someone may have already been down this road and might be able to offer some insight.
Couldn't you just FTP to their host, pull the site off, and get it working on a machine with no connection?
That's if you were really paranoid. I don't think an XSS-infected site would do too much damage to a properly protected machine anyway.
My paranoid answer:
It's a great idea to turn off JavaScript. I would get an extension like NoScript for Firefox or NotScripts for Chrome. I use NoScript regularly, and it makes it easy to see what JavaScript is coming from where.
Secondly, your idea with a VM is good, but take it a step further and run Linux in that VM. Linux can be infected, but it is rare to see something that will infect Linux.
Regular expressions and HTML parsers can also be your friends. Script something that scans files for things like script tags and especially iframes. That way you can get an idea of which files have been corrupted and what is calling out to where.
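A quick-and-dirty version of such a scanner in PHP; the patterns are illustrative starting points, not a complete signature list:
    <?php
    // scan.php -- usage: php scan.php /path/to/site
    // Flags files containing iframes, external script tags, or common
    // obfuscation idioms (eval of base64, document.write of escaped code).
    $patterns = [
        '/<iframe\b[^>]*>/i',
        '/<script\b[^>]*src\s*=/i',
        '/eval\s*\(\s*base64_decode/i',
        '/document\.write\s*\(\s*unescape/i',
    ];
    $iter = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($argv[1], FilesystemIterator::SKIP_DOTS)
    );
    foreach ($iter as $file) {
        $contents = file_get_contents($file->getPathname());
        foreach ($patterns as $p) {
            if (preg_match($p, $contents)) {
                echo $file->getPathname() . "  matches  $p\n";
            }
        }
    }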
One other, less likely, gotcha is malicious executables or scripts disguised as something innocent like JPEGs, PDFs, etc. If you download and open files from that machine, make sure you do it at least inside your VM, with no network connectivity.
Get server logs if you can; perhaps your assailant was sloppy and left some clue about their activities. Perhaps run Wireshark on a second machine to look for things calling out to strange domains. This may be excessive, but I find it to be a fun exercise. :)
Also, things like VirusTotal and ThreatExpert can be your friends if you think you have a malicious file or you see malicious activity. Better to be paranoid than compromised.
Cleaning this type of stuff up isn't exactly rocket science. You just need to get a connection to the backing database server and run a couple of queries to kill the XSS stuff out of the stored content.
You'd do your client a great service by starting off doing just that.
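For example, if the payload is a single known script tag injected into stored content, the cleanup can be as small as this; the table, column and payload string are placeholders, and you should back up the database before running anything like it:
    <?php
    // One-off cleanup: strip a known injected script tag from stored content.
    $pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');
    $payload = '<script src="http://evil.example/x.js"></script>';   // placeholder
    $count = $pdo->exec(
        "UPDATE content SET body = REPLACE(body, " . $pdo->quote($payload) . ", '')"
    );
    echo "Cleaned $count rows\n";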
The VM idea is a good one. krs1 suggests running Linux, which is an even better idea, as almost all trojans that get downloaded are for Windows. Run Wireshark while you use the site so you can see what the network traffic looks like, which URLs are being requested, and so on. If you run it in a Linux VM, though, you'll probably only get half the picture, since any exploit worth the oxygen it took to keep its programmer alive will check what platform you're on and only deliver a payload when you're on an exploitable one.
But I digress; you're rebuilding a website, not doing malware analysis (which is more fun, IMO). Once you identify and remove the offending content, you should be good. See if you can find out what exploit got them, and work with their IT guy, if they have one, so steps can be taken to keep it from happening again.

Some script was inserted by a hacker into the home page

How can this be done?
Have you ever experienced something like this?
If you're finding JavaScript injected into your website content (not via XSS, but actually present in the file contents), you've most likely been hit by a worm or virus.
A good example is the Gumblar virus, which spread very rapidly indeed a few months ago; it used FTP password sniffing to find the FTP details of people's sites and modified them, injecting malicious JavaScript to send site visitors to malware sites, etc.
The specifics of removing such viruses depends on the specific virus, but a good start is:
Replace the contents of the site with a known clean backup
Make sure all security patches are applied to your server and all software you're running on it, as well as e.g. any modules or 3rd-party libraries being used on the site
Make sure all computers which are used to access the site (via FTP or an administration interface, for example) have been marked as clean by a reputable and up-to-date virus scanner so you don't get any passwords sniffed
As the password for your site may already have leaked out into the big wide world via (say) a botnet, change all your FTP + administration passwords on the site so you don't just have to go right back to the start again.
Good luck!
You have probably experienced Cross Site Scripting (XSS).
From Wikipedia:
Cross-site scripting (XSS) is a type of computer security vulnerability typically found in web applications which enable malicious attackers to inject client-side script into web pages viewed by other users.

How would you attack a domain to look for "unknown" resources? [closed]

Given a domain, is it possible for an attacker to discover one or many of the pages/resources that exist under that domain? And what could an attacker do/use to discover resources in a domain?
I have never seen the issue addressed in any security material (because it's a solved problem?), so I'm interested in ideas, theories and best guesses, in addition to practices; anything an attacker could use in a "black box" manner to discover resources.
Some of the things that I've come up with are:
Google -- if Google can find it, an attacker can.
A brute-force dictionary attack -- iterate over common words and word combinations (Login, Error, Index, Default, etc.). The dictionary could also be narrowed if the resource extension is known (xml, asp, html, php), which is fairly discoverable.
Monitor traffic via a sniffer -- watch for a listing of the pages users go to. This assumes some type of network access, in which case URL discovery is likely small peanuts given that the attacker already has network access.
Edit: Obviously, directory listing is turned off.
The list on this is pretty long; there are a lot of techniques that can be used to do this; note that some of these are highly illegal:
See what Google, archive.org, and other web crawlers have indexed for the site.
Crawl through public documents on the site (including PDF, JavaScript, and Word documents) looking for private links.
Scan the site from different IP addresses to see if any location-based filtering is being done.
Compromise a computer on the site owner's network and scan from there.
Exploit a vulnerability in the site's web server software and look at the data directly.
Go dumpster diving for auth credentials and log into the website using a password on a post-it (this happens way more often than you might think).
Look at common files (like robots.txt) to see if they 'protect' sensitive information.
Try common URLs (/secret, /corp, etc.) to see if they give a 302 (a redirect, often to a login page) or a 404 (page not found); a minimal probing sketch follows this answer.
Get a low-level job at the company in question and attack from the inside; or, use that as an opportunity to steal credentials from legitimate users via keyboard sniffers, etc.
Steal a salesperson's or executive's laptop -- many don't use filesystem encryption.
Set up a coffee/hot dog stand offering a free WiFi hotspot near the company, proxy the traffic, and use that to get credentials.
Look at the company's public wiki for passwords.
And so on... you're much better off attacking the human side of the security problem than trying to come in over the network, unless you find some obvious exploits right off the bat. Office workers are much less likely to report a vulnerability, and are often incredibly sloppy in their security habits -- passwords get put into wikis and written down on post-it notes stuck to the monitor, road warriors don't encrypt their laptop hard drives, and so on.
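Returning to the technical side for a moment: the "try common URLs" item above is easy to script. A minimal PHP sketch using curl; the path list is illustrative, and real tools such as DirBuster ship with far larger dictionaries:
    <?php
    // Probe a handful of common paths and report the HTTP status for each.
    $base  = 'http://www.example.com';
    $paths = ['/admin/', '/secret/', '/corp/', '/backup/', '/phpMyAdmin/'];
    foreach ($paths as $path) {
        $ch = curl_init($base . $path);
        curl_setopt($ch, CURLOPT_NOBODY, true);          // a HEAD request is enough
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        echo "$code  $base$path\n";   // 200 or 302 is interesting, 404 is not
    }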
The most typical attack vector would be trying to find well-known applications, for example /webstats/ or /phpMyAdmin/, and looking for typical files that an inexperienced user might leave in the production environment (e.g. phpinfo.php). Most dangerous of all: text editor backup files. Many text editors leave a copy of the original file with '~' appended or prepended, so imagine you have whatever.php~ or whatever.aspx~. As these are not executed, an attacker might get access to the source code.
Brute forcing (use something like OWASP DirBuster, which ships with a great dictionary; it also parses responses, so it can map the application quite quickly and find resources even in quite deeply structured apps)
Yahoo, Google and other search engines, as you stated
robots.txt
sitemap.xml (quite common nowadays, and it has lots of stuff in it)
Web stats applications, if any are installed on the server and publicly accessible, such as /webstats/
Brute forcing for files and directories is generally referred to as "forced browsing", which might help your Google searches.
The paths to resource files like CSS, JavaScript, images, video, audio, etc. can also reveal directories if they are used in public pages. CSS and JavaScript files could contain telling URLs in their code as well.
If you use a CMS, some CMSes put a meta tag into the head of each page indicating that the page was generated by that CMS. If your CMS is insecure, this could be an attack vector.
It is usually a good idea to set your defenses up in a way that assumes an attacker can list all the files served unless they are protected by HTTP auth (ASPX auth isn't strong enough for this purpose).
EDIT: More generally, you should assume the attacker can identify all publicly accessible persistent resources. If a resource doesn't have an auth check, assume an attacker can read it.
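In that spirit, anything sensitive is better served through an explicit server-side check than hidden behind an unguessable URL. A minimal PHP sketch; the session flag and file path are made up for illustration:
    <?php
    // download.php -- serve a protected file only to authenticated users.
    session_start();
    if (empty($_SESSION['user_id'])) {
        http_response_code(403);
        exit('Not authorized');
    }
    // The file lives outside the web root, so it can never be fetched directly.
    $file = '/var/private/report.pdf';
    header('Content-Type: application/pdf');
    header('Content-Length: ' . filesize($file));
    readfile($file);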
The "robots.txt" file can give you (if it exists, of course) some information about what files\directories are there (Exmaple).
Can you get at the whole machine? Use common/well-known scanners and exploits.
Try social engineering. You'll be surprised how effective it is.
Brute-force sessions (JSESSIONID etc.), perhaps with a fuzzer.
Try commonly used path signatures (/admin/, /adm/, ... on the domain).
Look for inputs that get stored for further processing and test them for XSS / SQL injection / other vulnerabilities.
Exploit known-weak applications within the domain.
Use phishing tricks (XSS/CSRF/HTML-meta >> iframe) to forward the user to your fake page (while the domain name stays the same).
Black-box reengineering: What programming language is used? Are there bugs in that VM/interpreter version? Try service fingerprinting. How would you write a page like the one you want to attack? What security issues might the developer of the page have missed?
a) Try to think like a dumb developer ;)
b) Hope that the developer of the domain is dumb.
Are you talking about ethical hacking?
You can download the site with a tool like SurfOffline and get a pretty good idea of the folders, architecture, etc.
Best regards!
When attaching a new box onto "teh interwebs", I always run (ze)nmap. (I know the site looks sinister - that's a sign of quality in this context I guess...)
It's pretty much push-button and gives you a detailed explanation of how vulnerable the target (read:"your server") is.
If you use mod_rewrite on your server, you could do something like this:
All requests that do not fit the expected URL patterns are redirected to a special page, where the IP (or whatever else identifies the client) is tracked. Once you have seen a certain number of "attacks" from an IP, you can ban that user/IP. The most efficient way would be to have that page automatically add a corresponding rewrite condition to your mod_rewrite configuration.
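A sketch of what that tracking page might look like in PHP; the rewrite rule is shown as a comment, and the threshold, file paths and whitelisted URLs are all placeholders, not a tested configuration:
    <?php
    // trap.php -- target of a catch-all rewrite rule, e.g. in .htaccess:
    //   RewriteCond %{REQUEST_URI} !^/(index\.php|product\.php|assets/)
    //   RewriteRule .* /trap.php [L]
    // Counts hits per IP; past a threshold the IP could be added to a deny list.
    $ip   = $_SERVER['REMOTE_ADDR'];
    $file = __DIR__ . '/hits.json';
    $hits = file_exists($file) ? json_decode(file_get_contents($file), true) : [];
    $hits[$ip] = ($hits[$ip] ?? 0) + 1;
    file_put_contents($file, json_encode($hits), LOCK_EX);
    if ($hits[$ip] > 20) {
        // e.g. append "Deny from $ip" to an .htaccess include, or alert an admin
        http_response_code(403);
        exit('Banned');
    }
    http_response_code(404);   // look like an ordinary missing page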
A really good first step is to attempt a DNS zone transfer against their name servers. Many are misconfigured and will give you the complete list of hosts.
The fierce domain scanner does just that:
http://ha.ckers.org/fierce/
It also guesses common host names from a dictionary and, upon finding a live host, checks numerically close IP addresses.
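If you want to check whether your own name servers allow this, you can request a zone transfer by hand (ns1.example.com stands in for the authoritative server):
    dig axfr example.com @ns1.example.com
A properly configured server refuses the transfer and returns no records; a misconfigured one hands back the entire zone, host by host.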
To protect a site against attacks, call upper management in for a security meeting and tell them never to use their work password anywhere else. Most suits will carelessly use the same password everywhere: work, home, pr0n sites, gambling, public forums, Wikipedia. They are simply unaware of the fact that not all sites can be trusted not to look at their users' passwords (especially when the sites offer "free" stuff).
