Web attacks from a herokuapp.com address -- how to stop them?

I am not a Heroku customer, just a plain old user out there.
But, I am getting a steady stream of web attacks from a herokuapp.com address. They are being blocked by my security software (Norton), but (a) they are affecting performance on my system; (b) if my security is off even for a moment, I am afraid I will get infected.
What can I do to stop the attacks? Can I get Heroku to stop them? Is there a number to call to report this? Here's the data:
IPS Alert Name -- Web Attack: JSCoinminer Website
Attacking computer -- thrillingos.herokuapp.com (54.243.125.28:443)
Source address -- thrillingos.herokuapp.com (54.243.125.28)

I signed up with Heroku and submitted a support ticket. That seems to be the way to get their attention, as the abuse team responded and reported shutting this app down within a couple of hours.
Unfortunately, the attacks continue from a variety of herokuapp.com addresses. I have informed them and am awaiting further responses.
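If it helps to verify a report before filing it, one quick check is to confirm what the alert's hostname currently resolves to. A minimal sketch in standard-library Python (the hostname is the one from the alert above; reverse DNS usually points at the underlying provider, here AWS, which hosts Heroku apps):

    import socket

    # Hostname taken from the IPS alert above.
    host = "thrillingos.herokuapp.com"

    try:
        ip = socket.gethostbyname(host)
        print(f"{host} currently resolves to {ip}")
        # Reverse DNS often reveals the underlying provider (AWS for Heroku).
        rname, _, _ = socket.gethostbyaddr(ip)
        print(f"Reverse DNS: {rname}")
    except (socket.gaierror, socket.herror) as exc:
        print(f"Lookup failed (the app may already be shut down): {exc}")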

Related

How Can I Deflect The Scanned IP Address to The Honeypots?

I have never used a honeypot before, but I have an assignment from my lecturer: I should use a honeypot to detect attackers.
I searched journals, tutorials, and articles. I tried HoneyDrive 3 with the Kippo honeypot. When I attacked it myself, it worked: the details of the attack were captured. But when I showed this to my lecturer, he said it was not what he wanted.
The workflow he wants is: we run the honeypot and then apply it to some websites. When an attacker scans or probes the website's IP address, the traffic must be deflected to the honeypot, while the attacker believes he is attacking the real website. I really don't know how to do that.
You either misunderstood what the lecturer wanted, or what he wants does not make sense.
You can only analyze traffic sent to your IP (or an IP you control); it is not possible for you to "deflect the traffic" from a generic IP address.
What you did is correct: putting in place the honeypot, and then sending some traffic to it.
The next step would be to expose it to the Internet to attract malicious traffic (directed to your IP), but you must be very careful, as the whole machine is likely to get successfully attacked. It must not have any connection to your (home|uni|private) network, because (to be frank, judging from your question) you stand no chance of securing it for the time being.
I would go for a cloud-hosted machine, which I would then destroy.
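To make the distinction concrete, here is a minimal sketch of the part that *is* possible (plain standard-library Python standing in for Kippo; the port number is an arbitrary choice): a listener on an address you control that records every inbound connection. Note that it only ever sees traffic sent *to* it.

    import datetime
    import socket

    LISTEN_PORT = 2222  # arbitrary port; Kippo usually emulates SSH here

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen()
        while True:
            conn, (addr, port) = srv.accept()
            # On a throwaway Internet-facing host, every connection
            # that arrives here is unsolicited and worth logging.
            print(f"{datetime.datetime.utcnow().isoformat()} connection from {addr}:{port}")
            conn.close()

Exposed on a disposable cloud host, everything this logs is by definition unsolicited: that is the honeypot idea in miniature.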

SMTP Attack on server

I have an IIS7 web server at Rackspace that is being abused in some manner to send spam. I have run several variations of anti-virus and anti-malware software on the server and cleaned everything found, but it is still happening.
I'm leaning towards some kind of web-form attack, but there are several sites on this server and I didn't create all of them, so figuring out which form(s) are being used (or even where they all are) is proving challenging.
Does anyone know of any solution to pinpoint what script(s) might be firing off these emails? Is there any way to monitor the SMTP service with more information? I've looked at SMTP logs, but all I see are things like:
2014-02-14 06:00:52 127.0.0.1 [---server info, etc---] SMTPSVC1 [-compname-] 127.0.0.1 0 MAIL - +FROM:<--------#-------------------> 250 0 56 43 0 SMTP - - - -
In fact, there are 19,608 such entries in about a 16-hour period in this one log file. But unfortunately, this doesn't seem helpful.
If anyone could offer any insight, that'd be great!
If I had to guess, you have a webpage that has been compromised (which is what I think you suspect) and is being used to generate all the messages. The page probably accepts a FROM and a TO without any validation.
As a test, when you see these come in, start shutting off websites one at a time until the attack stops.
Then bring the site back up and see if the spam resumes. Once you've isolated the site, grep its directory for files relating to email.
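Since there are several sites and you don't know where all the forms live, a small script can do that grep across every site root at once. A sketch in Python; the site root and the search strings are assumptions for a typical IIS box, so adjust them to your stack:

    import os

    SITE_ROOT = r"C:\inetpub\wwwroot"  # hypothetical IIS root; adjust to yours
    # Strings that commonly appear in mail-sending code on IIS stacks.
    NEEDLES = ("System.Net.Mail", "SmtpClient", "CDO.Message", "CDONTS", "mail(")

    for dirpath, _, filenames in os.walk(SITE_ROOT):
        for name in filenames:
            if not name.lower().endswith((".aspx", ".asp", ".php", ".cs", ".vb")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    for lineno, line in enumerate(fh, 1):
                        if any(needle in line for needle in NEEDLES):
                            print(f"{path}:{lineno}: {line.strip()}")
            except OSError:
                pass  # unreadable file; skip it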
Most likely your server is configured to act as an open mail relay, which lets anyone hand it email in transit and have it sent on (relayed). Spammers do this to cover up the original origination point of the email.
The fix is to configure your server not to be a relay server. More background info here:
http://en.wikipedia.org/wiki/Open_mail_relay
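You can test for this directly. The sketch below (hostname and addresses are placeholders) asks the server to relay mail between two external domains; a correctly configured server should refuse the RCPT with a 5xx "relaying denied" response:

    import smtplib

    SERVER = "mail.example.com"  # placeholder: your server's address

    with smtplib.SMTP(SERVER, 25, timeout=10) as smtp:
        smtp.ehlo()
        # Neither address is local to this server, so accepting both
        # means the server is willing to relay for anyone.
        code, _ = smtp.docmd("MAIL", "FROM:<outsider@external-a.example>")
        print("MAIL FROM:", code)
        code, msg = smtp.docmd("RCPT", "TO:<victim@external-b.example>")
        print("RCPT TO:", code, msg.decode(errors="replace"))
        print("Open relay!" if code == 250 else "Relay refused, as it should be.")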

Is it possible to register a public server and protect the origin of the actual processing server?

Tough question. It is mainly about security, but also about networking. It has probably not been done yet.
I was wondering: is it possible to host, for example, a web application, yet be able to hide *where* the actual server is and/or who the originator is, making it very, very hard (practically impossible) for someone to track the origin of the server and who is behind it?
I was thinking that this might be possible through a third-party server, preferably with an owner unrelated to the proxy sites. But the question then also becomes an issue of reliability *of* the third party.
Does the Tor network have support for registering to *receive* incoming requests rather than just making outgoing ones? How secure would that be? Might the Tor network have been infiltrated by, for example, a big government (read: USA)? (Don't get angry; please enlighten me, as I do not know much about how the Tor network is hosted.)
How can one possibly create such a secure third-party server, one that preferably does not even know who the final recipient of the request is? Third-party companies might be subjected *to* pressure from governments, either directly from powerful *nations* such as the USA, or by the USA pressuring the government of the country where the server is, which in turn pressures the company behind it and forces it to enable a backdoor. (Just my wild fantasy; "think worst case scenario" is my motto. :))
It occurred to me that, since a single hidden server is probably *impossible*, the best approach would be a bunch of distributed servers across several nations, making it as hard as possible to work through each and every one of them to find the next bouncing server. They would form a linked list, with one public server registered in DNS. If compromised, the public server would need to be replaced with another one.
request from user0 -> server1 -> server2 -> server3 -> final processing server -> response to user0 or through the incoming server chain.
When sending a response to someone, could it be done using UDP rather than TCP, hiding who the sender was (also in a web application)? Then a man in the middle listening to user0's incoming responses (and outgoing requests) could not figure *out who the final* processing server is, if we decide to respond to user0 directly from the final processing server.
The IP of server1 will be public and known to anyone. server1 will send the message to server2, and that hop could be discovered by listening directly at server1's network link; but perhaps server1 could hide its own origin when not being watched directly, so that even if big governments had filters on major traffic nodes and routers, they could not track where a message came from, and therefore could not tell what the message to server2 is intended for. It would blend in with all the other requests.
Anyhow, if you have followed my thoughts this far I think you should know by now what I am thinking about.
Could this be possible through a P2P network with a central server behind it, having the P2P network deliver requests to the final server and responses back in some pattern? The idea is to have one processing server and then "minor", cheaper servers that act as proxies.
The reason I keep saying "central server" is that I am thinking of the web. But any thoughts on the matter are interesting.
For those who wonder why: I am looking into creating something as secure as possible, something that could withstand government pressure (see BlackBerry, Skype, and others).
This is also a theoretical question.
PS.
I would also be interested in knowing how one could have a distributed SECURE database (for keeping usernames, friend lists, and passwords, for example), but this time it is not necessary for it to be on the web: P2P software with a distributed secure database.
Thanks!
Yes, you're reinventing Tor. You should research Tor more fully before going further. In particular, see Hidden Service Protocol. Tor is not perfect, but you should understand it before you try to reinvent it.
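To the specific question about receiving incoming requests: yes, that is exactly what Tor hidden services do. As a minimal sketch of what this looks like in practice, the third-party stem library can publish an ephemeral hidden service through a locally running Tor daemon (this assumes Tor's control port is enabled on 9051 and that some web server is already listening locally on port 8080):

    from stem.control import Controller  # third-party 'stem' package

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        # Forward the .onion address's port 80 to a local server on 8080.
        service = controller.create_ephemeral_hidden_service(
            {80: 8080}, await_publication=True
        )
        print(f"Reachable at {service.service_id}.onion")
        input("Press enter to tear the service down...")

Clients reach the .onion address through the Tor network without ever learning the server's real IP, which is the property the question asks for.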
If you want to find an ant's nest, follow the ants. If you want to find the original server, follow the IP packets. If you meet a proxy server unwilling to reveal its path, call the server administrator and have your men in black put a gun to his head. If he does not comply, eliminate the administrator and the server. Carry on following the ants on their new path. Repeat the operation until the server is reached or it can't communicate anymore.
So no, you can't protect the origin and keep your server up and running when your men in black can reach any physical entity.
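For intuition about why the relay chain in the question does hide the origin from any single relay (even though, as the answer above notes, it cannot survive an adversary who can seize every hop), here is a toy sketch of the layered encryption that onion routing uses. The keys are simply pre-shared for brevity; real onion routing negotiates them per circuit:

    from cryptography.fernet import Fernet  # third-party 'cryptography' package

    relays = ["server1", "server2", "server3"]
    keys = {relay: Fernet(Fernet.generate_key()) for relay in relays}

    # The client builds the "onion" from the innermost layer outwards.
    onion = b"request for the final processing server"
    for relay in reversed(relays):
        onion = keys[relay].encrypt(onion)

    # Each relay, in order, peels exactly one layer and forwards the rest,
    # so no single relay sees both the source and the final payload.
    for relay in relays:
        onion = keys[relay].decrypt(onion)
    print(onion.decode())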

Is pinging a site a good way of checking whether it's down or not?

I'm trying to write a small website-monitoring program that can check my web hosts to see whether they are up, calculate uptime, and warn me when a site is down. It's going to be a standalone app.
I wanted to know whether pinging is a good way of finding out whether a site is down or not?
Thanks in advance.
That's one thing you can do, but it's by no means conclusive either way.
Some sites will ignore ICMP packets so that no ping response is given. Some sites will respond to pings even when the web server (or whatever service you're after) is down.
The only way you can be certain that a given site will provide a service is to, well, use that service. Nothing else will be as accurate.
A better method would be to provide a series of steps which would detect where a fault lay, at least in the infrastructure that you can control. For example:
allow pings to be received and acted upon.
have a static web page in the web server.
have a dynamic page in the application server which delivers static content.
have a dynamic page in the application server which uses the database.
Then your tester client would simply attempt to "contact" those four points and report on the success. Since you would expect your site to be up most of the time, I'd just check the fourth option to see if everything was okay, and do the other checks only if a problem were found.
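A sketch of that tester in Python (the URLs are placeholders for your own static, app-server, and database-backed pages); it tries the deepest check first and only walks the shallower layers when something fails:

    import subprocess
    import urllib.request

    HOST = "www.example.com"  # placeholder hostname
    CHECKS = [  # shallowest to deepest
        ("static page", f"http://{HOST}/static.html"),
        ("app server", f"http://{HOST}/app/hello"),
        ("database page", f"http://{HOST}/app/db-status"),
    ]

    def ping(host):
        # "-c 1" sends a single ICMP echo on Unix (use "-n 1" on Windows).
        return subprocess.run(["ping", "-c", "1", host],
                              capture_output=True).returncode == 0

    def fetch_ok(url):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    if fetch_ok(CHECKS[-1][1]):  # the deepest layer exercises all the others
        print("all layers ok")
    else:  # a fault somewhere; locate it layer by layer
        print("ping:", "ok" if ping(HOST) else "FAILED")
        for label, url in CHECKS:
            print(f"{label}:", "ok" if fetch_ok(url) else "FAILED")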
It depends on how you define "ping". If you're talking about the low-level ICMP echo, then no, it isn't likely to be a good indicator of whether or not your site is down. You would be better off having an application pull a page down from your site to ensure that the HTTP server is running. There are plenty of services for this, and likely some code you could find via Google as well: http://www.dailyblogtips.com/test-if-a-website-is-down-for-everyone-or-just-for-your/
ICMP can prove the server is alive.
TCP checking can show the web server is working, but not the site.
To perform site checking, you should do an HTTP GET request (even HEAD doesn't always work) and make sure the page is fine (returns status 200).
You can write your own checking system or use some third party site like http://allping.net/
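If you write your own, here is a sketch of those two layers with the standard library (placeholder hostname): the raw TCP connect only proves something is listening on port 80; the GET and its 200 status are what prove the site itself is serving pages.

    import http.client
    import socket

    HOST = "www.example.com"  # placeholder

    # Layer 1: TCP -- is anything listening on port 80 at all?
    try:
        socket.create_connection((HOST, 80), timeout=5).close()
        print("TCP: port 80 accepts connections")
    except OSError as exc:
        print(f"TCP: connect failed ({exc})")

    # Layer 2: HTTP -- does the site actually serve a page?
    conn = http.client.HTTPConnection(HOST, 80, timeout=5)
    try:
        conn.request("GET", "/")
        status = conn.getresponse().status
        print("HTTP:", "site up" if status == 200 else f"unexpected status {status}")
    except (OSError, http.client.HTTPException) as exc:
        print(f"HTTP: request failed ({exc})")
    finally:
        conn.close()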
Ping gives you insight into latency from a specific location and also points to possible network issues (packet loss). As noted in a previous answer, some servers don't respond to ping requests, in which case ping is useless.
To check a server with ping from over 50 locations worldwide, have a look at this free tool: http://just-ping.com/

DNS-based strategies for showing a nice "Currently Offline" page when the server is down

How can I make a site automagically show a nice "Currently Offline" page when the server is down (I mean, the full server is down and requests can't even reach IIS)?
Changing the DNS manually is not an option.
Edit: I'm looking for some kind of DNS trick to redirect to another server in case the main server goes down. I can make permanent changes to the DNS, but I can't change it by hand whenever the server goes down.
I have used the uptime services at DNSMadeEasy to great success. In effect, they set the DNS TTL to a very low number (5 minutes). They take care of pinging your server.
In the event of outage, DNS queries get directed to the secondary IP. An excellent option for a "warm spare" in small shops with limited DNS requirements. I've used them for 3 years with not a single minute of downtime.
EDIT:
This allows for geographically redundant failover, which the proposed NLB solution does not address. If the network connection is down, both servers in a standard NLB configuration will be unreachable.
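You can watch that mechanism from the client side. A small sketch using the third-party dnspython package (placeholder hostname): with a 5-minute TTL, the answer below should flip to the secondary IP within minutes of an outage.

    import dns.resolver  # third-party 'dnspython' package

    answer = dns.resolver.resolve("www.example.com", "A")
    print("TTL:", answer.rrset.ttl)  # the low TTL is what makes failover fast
    for record in answer:
        print("A record:", record.address)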
Some server needs to dish out the "currently offline" page, so if your main server is completely down, some other server has to serve the file(s). You could set up a cluster of servers (even just two): while the first is down, the second is configured only to return the "currently offline" page. Once the first server is back up, you can safely take down the second (as server 1 will take all the load).
You probably need a second server with 100% uptime, with some kind of failover load balancer in front. If the main server is online, the balancer directs traffic to it; if not, it redirects to itself and shows a page saying the server is down.
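The fallback server's only job is trivial, which is why a cheap box can do it. A minimal sketch with the Python standard library: answer every request with the offline page and a 503 status (so search engines don't index the outage page). Binding port 80 usually requires elevated privileges.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"<html><body><h1>We'll be right back</h1></body></html>"

    class OfflineHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(503)  # "Service Unavailable", not 200
            self.send_header("Content-Type", "text/html")
            self.send_header("Retry-After", "600")  # hint clients to retry later
            self.end_headers()
            self.wfile.write(PAGE)

    HTTPServer(("0.0.0.0", 80), OfflineHandler).serve_forever()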
I believe that if the server is down, there is nothing you can do.
The request will fail with a connection error: the web address still resolves to an IP, but nothing at that IP answers (because the server is down). If you can't change the DNS entry, the client browser will keep hitting xxx.xxx.xxx.xxx and never get a response.
If the server is up, but the website is down, you have options.
EDIT
Your edit mentions that you can make a permanent change to the DNS. But you would still need a two-server setup to achieve what you are describing. You can point the DNS at a load balancer, which can direct each request to whichever server is currently active. However, this still requires 100% uptime for whatever the DNS points to.
No matter what, if the machine the DNS points to (which you must control, in order to redirect the traffic) is down, then all requests will simply fail with a connection error.
EDIT: Thanks to brian for pointing out that a dead server produces a connection error, not a 404.
Seriously, DNS is not the right answer to server load-balancing or fail-over. Too many systems (including stub clients and ISP recursive resolvers) will cache records for much longer than the specified TTL.
If both servers are on the same network, use routing protocols to achieve fail-over by having both servers present the same IP address to the network, but where the fail-over server only takes over if it detects that the (supposedly) live server is offline.
If the servers are Unix, this is easily done by running Quagga on each server, and then using OSPF as the local routing protocol. I've personally used this for warm standby servers where the redundant system was actually in another data center, albeit one that was connected via a direct link to the main data center.
Certain DNS providers, such as AWS's Route 53, have a health-check option, which can be used to re-route to a static page. AWS has a how-to guide on setting this up.
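As a sketch of what setting that up programmatically might look like with the third-party boto3 SDK (the IP, port, and path below are placeholders), the resulting health-check ID is what a Route 53 failover record set then references:

    import uuid
    import boto3  # third-party AWS SDK

    client = boto3.client("route53")
    response = client.create_health_check(
        CallerReference=str(uuid.uuid4()),  # idempotency token
        HealthCheckConfig={
            "Type": "HTTP",
            "IPAddress": "203.0.113.10",  # placeholder: your primary server
            "Port": 80,
            "ResourcePath": "/health",
            "RequestInterval": 30,  # seconds between probes
            "FailureThreshold": 3,  # consecutive failures before "unhealthy"
        },
    )
    print("Health check ID:", response["HealthCheck"]["Id"])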
I'm thinking that if the site is load-balanced, the load balancer itself would detect that the web servers it's trying to direct clients to are down, and would therefore send the user to a backup server with a message indicating technical problems.
Other than that.....
The only thing I can think of is to control the calling page. Obviously that won't work in all circumstances... but if you know that most of your hits to this server will come from a particular source, then you could add a JavaScript test to that source and redirect to a "server down" page generated on a different server.
But if you are trying to handle all hits, from all sources (some of which you can't control), then I think you are out of luck. As other folks have said, when a server is down, the browser gets a connection error when it attempts to connect.
... perhaps there would be a way, at some point in between, to detect failed connections and substitute a "server is down" web page. You'd need something like an HTTP-aware firewall or other intermediate network gear between the server and the web client.
