I have never changed a server setting in my entire career, but recently I have received many complaints from employees that websites sometimes crash, are slow to load, or don't load at all. So I searched on Google and found that the server was getting heavy requests, i.e. a DDoS attack. I read about many ways to prevent this, then ran nmap and checked the open ports. All are fine except one that is open: 22/tcp open ssh. Can this be a problem?
Can an open SSH port (22/tcp) be the reason for a DDoS attack on the server?
You cannot prevent DDoS attacks by simply closing ports. A DDoS attack basically triggers more processing than your processing power can handle. I think there is enough information in this resource; you can check:
https://blog.cpanel.com/how-to-survive-a-ddos-attack/
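As a rough way to confirm you are actually being flooded (a sketch, assuming a Linux box with the classic netstat tool available), you can count how many connections each remote IP currently has open; a handful per address is normal, hundreds from a single address is suspicious:
$ # count current connections per remote IP, busiest first
$ netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head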
Related
I have found a security hole in a website. I can sign in through an anonymous account, so I signed in. I am trying to get index.html from the website and I receive the following message:
Illegal PORT command.
Use port or pasv mode.
How can I get this file, edit it, and then upload the modified file again?
If you're administering an FTP server, it would be best to configure your server to support passive-mode FTP. However, you should bear in mind that in doing so you make your system more vulnerable to attacks. Remember that, in passive mode, clients are supposed to connect to random server ports.
Thus, to support this mode, not only does your server have to have multiple ports available, your firewall also has to allow connections to all those ports to pass through!
But then the more open ports you have, the more there is to exploit. To mitigate the risk, a good solution is to specify a range of passive ports on your server and then allow only that range of ports through your firewall.
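For example, with vsftpd and iptables the idea looks roughly like this (a sketch; the 50000-50100 range is an arbitrary choice, and other FTP daemons and firewalls use different directives):
# /etc/vsftpd.conf -- pin passive mode to a fixed range
pasv_enable=YES
pasv_min_port=50000
pasv_max_port=50100

$ # allow the control port and only that passive range through the firewall
$ iptables -A INPUT -p tcp --dport 21 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 50000:50100 -j ACCEPT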
I am not a Heroku customer, just a plain old user out there.
But, I am getting a steady stream of web attacks from a herokuapp.com address. They are being blocked by my security software (Norton), but (a) they are affecting performance on my system; (b) if my security is off even for a moment, I am afraid I will get infected.
What can I do to stop the attacks? Can I get Heroku to stop them? Is there a number to call to report this? Here's the data...
IPS Alert Name -- Web Attack: JSCoinminer Website
Attacking computer -- thrillngos.herokuapp.com (54.243.125.28.443)
Source address -- thrillingos.herokuapp.com (54.243.125.28)
I signed up with Heroku and submitted a support ticket. That seems to be the way to get their attention, as the abuse team responded and reported shutting this app down within a couple of hours.
Unfortunately, the attacks continue from a variety of herokuapp.com addresses. I have informed them and am awaiting further responses.
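While waiting for their abuse team, one local stopgap (a sketch, and it only covers hostnames you list explicitly, so it won't catch every new herokuapp subdomain) is to black-hole the offending name in your hosts file so your browser never fetches the miner script:
# C:\Windows\System32\drivers\etc\hosts (or /etc/hosts)
0.0.0.0  thrillingos.herokuapp.com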
I'm experiencing an odd error while trying to load my web page in a browser. When I haven't opened it for some period of time and then try to open it by typing the address in the browser and pressing Enter:
1) The page doesn't load; the browser shows a message that it is not available, "connection to ... was interrupted"
(in Opera there is also info about the proxy and network; I have pasted the full message below)
2) After refreshing and loading the page again, it works OK (without any problem).
My web page address is crib.pl, with subdomains niemiecki.crib.pl and hiszpanski.crib.pl.
It is important to note that when I first try to load, for example, niemiecki.crib.pl and it doesn't open, then hiszpanski.crib.pl opened right afterwards will also load normally.
Some additional info:
- hosting is with Bluehost (Utah, USA)
- I'm trying to access it from Poland (Europe)
- the website runs on Drupal
- it has worked for more than 4 years without problems on this server
- it worked without a problem even a week ago; it hasn't worked since 31 December 2014
- Bluehost support has no idea; they say it works perfectly in their own one-to-one checks (no problem)
(If you can, please check it and post your country and whether or not you are experiencing a similar problem)
- I haven't modified anything on the web page (the problem just happens without my interaction)
- Google's crawlers seem to have some problems accessing the robots.txt file (something like that)
- the domain (crib.pl) is hosted by a company in Poland and is pointed, using external DNS, at bluehost.com servers
Any help would save my life; I'm experiencing about a 50% drop in earnings since this problem started!
Opera message:
"
This webpage is not available
The connection to crib.pl was interrupted.
Check your internet connection.
Check any cables and reboot any routers, modems, or other network devices you may be using.
Allow Opera to access the network in your firewall or antivirus settings.
If it is already listed as a program allowed to access the network, try removing it from the list and adding it again.
If you use a proxy server...
Check your proxy settings or contact your network administrator to make sure the proxy server is working. If you don't believe you should be using a proxy server: Go to Applications > System Preferences > Network > Advanced > Proxies and deselect any proxies that have been selected.
"
There is definitely something wrong with the Bluehost box (i.e. the server behind the IP address 66.147.244.170). From Australia at 2015-Jan-05 12:19:36 UTC, I was able to reproduce a "Connection reset by peer" error just using curl, which corresponds to the browser message "connection to .. was interrupted".
Other times, it just hangs while trying to establish a connection.
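For reference, the check was essentially just the following, repeated a few times (a sketch; the 30-second timeout is an arbitrary choice):
$ # some attempts reset, others hang until the timeout
$ curl -sv --max-time 30 -o /dev/null http://crib.pl/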
In addition, other servers on the same subnet also owned by Bluehost appear to be working fine.
For example:
$ telnet 66.147.244.22 80
Trying 66.147.244.22...
Connected to 66-147-244-22.unifiedlayer.com.
Escape character is '^]'.
^]
telnet> q
Connection closed.
This tells me that it is not a routing problem on the public Internet either.
Also, when I tried again after a while, it succeeded in opening a connection. So you're right that the problem is intermittent.
In other words, I think the issue lies with this particular Bluehost box. It could be one of the following causes:
OS is out of file descriptors
Apache (or whatever web server is running) is too slow to service requests and has therefore maxed out its listen backlog
other server resource limits (perhaps memory), or a network equipment issue localized to the hosting environment
Best to check with Bluehost again. My guess is that one of the other tenants sharing that server is getting heavily loaded periodically.
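If you want evidence to hand to Bluehost, a small probe loop like this (a sketch; the interval and timeout are arbitrary) will log a timestamp and an HTTP status once a minute, with 000 meaning the connection failed:
$ while true; do printf '%s %s\n' "$(date -u +%FT%TZ)" "$(curl -s -o /dev/null --max-time 20 -w '%{http_code}' http://crib.pl/)"; sleep 60; done | tee crib-probe.log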
Yes, as I thought before, the problem was with the Bluehost box.
Now seems that the problem has been fixed. Here's what I have done:
1) I upgraded my Bluehost account (standard shared to pro shared).
I did this because I wanted to change the IP address and Bluehost box without changing the crib.pl domain's external DNS server configuration (it is set to Bluehost).
I also wanted an automatic migration, because I don't have much time right now.
2) After the upgrade I got a new IP address and a new Bluehost box, but it still didn't work correctly.
3) So I switched on the dedicated IP option, and about 6 hours later, once the dedicated IP had propagated properly, the website seemed to work correctly again (one problem: it cost me about $120 for the next year and shortened my plan by 1 year compared to before).
4) The most frustrating part was the attitude of Bluehost technical support, who weren't eager to help me in any way, even though the problem was in their server configuration, not my code!
It has recently come to my attention that setting up multiple A records for a hostname can be used not only for round-robin load-balancing but also for automatic failover.
So I tried testing it:
I loaded a page from our domain
Noted which of our servers had served the page
Turned off the web server on that host
Reloaded the page
And indeed the browser automatically tried a different server to load the page. This worked in Opera, Safari, IE, and Firefox. Only Chrome failed to try a different server.
But after leaving that server offline for a few minutes and looking at the access logs, I found that the number of requests to the other servers had not significantly increased. With 1 out of 3 servers offline, I had expected accesses to each of the remaining 2 servers to roughly increase by 50%, but instead I only saw 7-10%. That can only mean DNS-based failover does not work for the majority of browsers/visitors, which directly contradicts what I had just tested.
Does anyone have an idea what is up with DNS-based web browser failover? What possible reason could there be why automatic failover works for me but not the majority of our visitors?
What's happening is that the browsers are not doing automatic DNS failover.
If you have multiple A records on a domain, then when your nameserver requests the IP for the domain you typed into your browser, it requests it from the authoritative server (the SOA). It could get back any of those A records, and it then passes one along.
Some nameservers are 'smart' enough to request a new A record if the one they get doesn't work, and some aren't. So if you set multiple A records, you will have set up a pseudo-redundant failover, but only for those people with 'smart' nameservers. The rest get a toss of the dice on which IP they get: if it works, good; if not, the page will fail to load, as it did for you in Chrome.
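You can see which A records a given resolver hands back, and in what order, by querying it directly; repeated queries usually show the round-robin rotation (a sketch, with example.com standing in for your domain):
$ # ask your default resolver, then a public one, a few times each
$ dig +short A example.com
$ dig +short A example.com @8.8.8.8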
If you want to specifically test this, you can use your hosts file (C:\Windows\system32\drivers\etc\hosts on Windows, /etc/hosts on Linux) to specify which IP you want to go with which domain and see whether you get a true failover. What you'll run into in practice is that DNS servers across the net will cache your domain name resolution based on its TTL, so if/when you get a real failure, clients may still be handed that cached IP rather than being farmed out to another address.
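A hosts-file override for that kind of test looks like this (a sketch; the address and hostname are placeholders):
# /etc/hosts (or C:\Windows\system32\drivers\etc\hosts)
# force this client to use only the one server you want to test
192.0.2.10   www.example.com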
Another possible explanation is that, for most public websites, the bulk of traffic comes from bots, not from browsers. Depending on the bot, it is possible that they aren't quite as smart as browsers when it comes to handling multiple A records for a domain.
Also, some bots use keep-alives to keep TCP connections open and make multiple HTTP requests over the same connection. Given that the DNS lookup is only done when a connection is made, they will continue to make requests to the old IP address at least as long as the connection is kept open.
If the above explanation has any weight, you should be able to see it in your logs by examining the user-agent strings.
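With a standard combined-format access log, something like this shows which user agents dominate (a sketch; adjust the log path and field number to your own log format):
$ # count requests per user-agent string, busiest first
$ awk -F'"' '{print $6}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head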
How can I make a site automagically show a nice "Currently Offline" page when the server is down (I mean, the whole server is down and the request can't even reach IIS)?
Changing the DNS manually is not an option.
Edit: I'm looking for some kind of DNS trick to redirect to another server in case the main server is down. I can make permanent changes to the DNS, but I can't change it manually whenever the server goes down.
I have used the uptime services at DNSMadeEasy to great success. In effect, they set the DNS TTL to a very low number (5 minutes). They take care of pinging your server.
In the event of an outage, DNS queries get directed to the secondary IP. An excellent option for a "warm spare" in small shops with limited DNS requirements. I've used them for 3 years with not a single minute of downtime.
EDIT:
This allows for geographically redundant failover, which the NLB solution proposed does not address. If the network connection is down, both servers in a standard NLB configuration will be unreachable.
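You can check what TTL resolvers actually see on your record with dig (a sketch; the hostname is a placeholder). The second column of the answer is the remaining TTL in seconds, which should be small for this kind of failover to react quickly:
$ dig A www.example.com +noall +answer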
Some server needs to dish out the "currently offline" page, so if your server is completely down, there will have to be some other server serving the file(s). You could set up a cluster of servers (even if just 2), and while the first one is down, the 2nd is configured only to return the "currently offline" page. Once the 1st server is back up, you can safely take down the 2nd (as server 1 will take all the load).
You probably need a second server with 100% uptime, and then add some kind of failover load balancer in front. If the main server is online, it redirects to that; if it isn't, it redirects to itself and shows a page saying the server is down.
I believe that if the server is down, there is nothing you can do.
The request will send up a 404 network error because when the web address is resolved to an IP, the IP that is being requested does not exist (because the server is down). If you can't change the DNS entry, then the client browser will continue to hit xxx.xxx.xxx.xxx and will never get a response.
If the server is up, but the website is down, you have options.
EDIT
Your edit mentions that you can make a permanent change to the IP. But you would still need a two-server setup in order to achieve what you are talking about. You can direct the DNS to a load balancer, which would be able to direct the request to a server that is currently active. However, this still requires 100% uptime for the server that the DNS points to.
No matter what, if the server that the DNS is pointing to (which you must control, in order to redirect the traffic) is down, then all requests will receive a 404 network error.
EDIT Thanks to brian for pointing out my 404 error error.
Seriously, DNS is not the right answer to server load balancing or failover. Too many systems (including stub clients and ISP recursive resolvers) will cache records for much longer than the specified TTL.
If both servers are on the same network, use routing protocols to achieve fail-over by having both servers present the same IP address to the network, but where the fail-over server only takes over if it detects that the (supposedly) live server is offline.
If the servers are Unix, this is easily done by running Quagga on each server, and then using OSPF as the local routing protocol. I've personally used this for warm standby servers where the redundant system was actually in another data center, albeit one that was connected via a direct link to the main data center.
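A minimal sketch of the Quagga side, assuming the shared service address lives on a loopback alias on both machines and only the box that should be live runs ospfd and advertises it (every address below is a placeholder):
! /etc/quagga/ospfd.conf
router ospf
 ospf router-id 10.0.0.1
 ! advertise the loopback-hosted service address (e.g. 192.0.2.50/32)
 redistribute connected
 ! speak OSPF on the LAN facing the router
 network 10.0.0.0/24 area 0.0.0.0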
Certain DNS providers, such as AWS's Route 53, have a health-check option, which can be used to re-route to a static page. AWS has a how-to guide on setting this up.
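With the AWS CLI, creating such a health check looks roughly like this (a sketch; the IP, path, and reference string are placeholders, and you still have to attach the resulting check to a failover record set afterwards):
$ aws route53 create-health-check \
    --caller-reference offline-page-check-1 \
    --health-check-config '{"IPAddress":"192.0.2.10","Port":80,"Type":"HTTP","ResourcePath":"/","RequestInterval":30,"FailureThreshold":3}'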
I'm thinking that if the site is load balanced, the load balancer itself would detect that the web servers it's trying to direct clients to are down, and would therefore send the user to a backup server with a message describing the technical problems.
Other than that.....
The only thing I can think of is to control the calling page. Obviously that won't work in all circumstances... but if you know that most of your hits to this server will come from a particular source, then you could add a JavaScript test to that source and redirect to a "server down" page that is generated on a different server.
But if you are trying to handle all hits, from all sources (some of which you can't control), then I think you are out of luck. As other folks are saying - when a server is down, the browser gets a 404 error when it attempts a connection.
... perhaps there would be a way at a point in between to detect 404 errors being returned by servers and replacing them with a "server is down" web page. You'd need something like an HTML firewall or some other intermediate network gear between the server and the web client.