This is an example of a dictionary brute-force attack, but I do not understand the principle behind it. Yes, I know that a dictionary brute-force attack is when an attacker tries passwords from a dictionary file. However, how is that explicitly shown in the capture below?
If you feel it would be more helpful to see the packets before this, please let me know.
P.S. I apologise if the tags are incorrect.
EDIT: the attacker is 192.168.56.1 and the victim is 192.168.56.101
EDIT: What I am trying to say is that this capture is from an assignment I was given. One of the questions in this assignment was:
How did the attacker exploit this vulnerability to gain access?
I believe the vulnerability was that port 22 was left open (this can be seen in other packets, not in this screenshot). A group of my friends suspect that the attacker used brute force to exploit this vulnerability and gain access.
My question is: is this true? Can you tell that from this screenshot, or would you need to see other packets? Can this screenshot be used as evidence that the attacker possibly used brute force to gain access?
If your assignment is a simple packet-inspection exercise, then:
The attacker gained access by brute-forcing the SSH service, and the vulnerability is the use of a weak password combined with the fact that the server permitted password-based authentication.
By examining the screenshot of the packet capture, we see a number of SSH authentication requests being made.
So the attacker must be carrying out either a DoS or a brute-force attack.
The question explicitly states that the attacker gained access.
Therefore, we know the attack is SSH brute-forcing.
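If you want to confirm this from the capture itself rather than just from the screenshot, one rough check is to count how many new TCP connections each host opens to port 22. A minimal sketch with scapy (the pcap file name is a placeholder; the addresses come from the question):

    # Count new TCP connections (SYN without ACK) to port 22, per source address.
    from collections import Counter
    from scapy.all import rdpcap, IP, TCP

    attempts = Counter()
    for pkt in rdpcap("capture.pcap"):              # placeholder file name
        if pkt.haslayer(IP) and pkt.haslayer(TCP):
            tcp = pkt[TCP]
            if tcp.dport == 22 and tcp.flags.S and not tcp.flags.A:
                attempts[pkt[IP].src] += 1

    for src, count in attempts.most_common():
        print(f"{src}: {count} new SSH connections")

Dozens or hundreds of short-lived connections from 192.168.56.1 to 192.168.56.101:22 in a short time window is the classic brute-force fingerprint.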
Update
PacketTotal showed that the attacker was carrying out:
Port scanning (indicator: use of ICMP echoes)
Smuggling data from the web server
SSH brute-forcing
Link to the report
Update 2:
Manual packet analysis gave these results:
The attacker first used an aggressive ARP scan to enumerate the hosts.
Then started a port scan against the victim.
Found an SSH server and a web server running on the victim machine.
Tried brute-forcing SSH, but failed.
Sent a GET request to the victim's web server, which (luckily for the attacker) returned a private SSH key (see the sketch after this list).
Now has complete access to the victim.
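To back up those last couple of steps from the capture itself, you can dump the HTTP request lines and any payload that looks like a PEM private key. A rough scapy sketch; the pcap file name is a placeholder and it assumes the web traffic is plain HTTP:

    # Print HTTP request lines and any payload fragment that looks like a
    # PEM-encoded private key. "capture.pcap" is a placeholder file name.
    from scapy.all import rdpcap, TCP, Raw

    for pkt in rdpcap("capture.pcap"):
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            payload = bytes(pkt[Raw].load)
            if payload.startswith(b"GET ") or b"PRIVATE KEY-----" in payload:
                print(payload[:120])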
I have never used a honeypot before, but I have a task from my lecturer: I should use a honeypot to detect hackers' attacks.
I searched journals, tutorials and articles. I tried HoneyDrive 3 with the Kippo honeypot. When I attacked it myself, it worked and the details of the attack were recorded. But when I told my lecturer about it, he said it was not what he wanted.
The workflow he wants is: we use the honeypot and then apply it to some websites. When an attacker scans or does something to that site's IP address, the traffic must be deflected to the honeypot, even though the attacker is really attacking the real website. I really don't know what to do.
You either misunderstood what the lecturer wanted, or what he wants does not make sense.
You can only analyze traffic sent to your IP (or an IP you control); it is not possible for you to "deflect the traffic" from a generic IP address.
What you did is correct: you put the honeypot in place and then sent some traffic to it.
The next step would be to expose it to the Internet to attract malicious traffic (directed at your IP), but you must be very careful, as the whole machine is likely to be successfully attacked. It must not have any connection to your (home|uni|private) network, because (to be frank, judging from your question) you stand no chance of securing it for the time being.
I would go for a cloud-hosted machine, which I would then kill.
I was reading about "low and slow" DoS attacks, and one of the examples listed was Sockstress. I was referring to the Wikipedia article, linked below:
http://en.wikipedia.org/wiki/Sockstress. From that page I understood the overall logic, but I could not understand why the Fantaip command was used. Why can't we perform the attack by just using the Sockstress command? Any input will be appreciated.
Sockstress requires a successful TCP three-way handshake to effectively fill the victim's connection tables. This limits the attack, because the attacker cannot blindly spoof the client IP address to avoid traceability. With Fantaip, however, you can claim IP addresses that are not really in use by the attacking system and complete handshakes from them. This makes the attack more effective, and also protects the attacking machine from the effects of the attack.
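Conceptually, all a helper like Fantaip has to do is answer ARP for an otherwise unused ("phantom") address so that handshakes from that address actually complete. A minimal sketch of that idea using scapy rather than Fantaip itself (the addresses, MAC and interface below are made up):

    # Answer "who-has" ARP requests for a phantom IP so that the attacking
    # host can complete TCP handshakes from an address it does not own.
    from scapy.all import ARP, Ether, sniff, sendp

    PHANTOM_IP = "192.168.56.200"      # unused address we want to claim (assumption)
    OUR_MAC    = "00:11:22:33:44:55"   # MAC of the attacking interface (assumption)
    IFACE      = "eth0"

    def answer_arp(pkt):
        if pkt.haslayer(ARP) and pkt[ARP].op == 1 and pkt[ARP].pdst == PHANTOM_IP:
            reply = Ether(dst=pkt[Ether].src, src=OUR_MAC) / ARP(
                op=2,  # is-at
                psrc=PHANTOM_IP, hwsrc=OUR_MAC,
                pdst=pkt[ARP].psrc, hwdst=pkt[ARP].hwsrc)
            sendp(reply, iface=IFACE, verbose=False)

    sniff(filter="arp", prn=answer_arp, iface=IFACE, store=False)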
It seems that the SSH designers cared a great deal about man-in-the-middle (MITM) attacks.
Their approach was to save the server's public key fingerprint the first time you connect to the server (and hope that the user isn't connecting from a poisoned network that first time, for instance because of a virus on the computer). The client then uses the fingerprint to verify the server's public key the next time it connects to that server.
In practice, I have found that many users simply ignore warnings about mismatched fingerprints and assume they are due to a server re-installation. MITM attacks are so difficult to conduct, and so rare, that nobody ever worries about them. Moreover, a user often wants to use SSH from many different computers, and he won't bother importing all the fingerprints onto every computer he might want to use SSH from ("Hey, can you look at why my site is down? I'm panicking! I'm not in the office; I'll drop into the nearest internet cafe and have a look").
To be fair, one can use DNSSEC and use the DNS servers as the CA. However, I have never seen that done in practice, and in any case it's not a mandatory part of the protocol.
For many years I thought one could not avoid MITM without a pre-shared secret, but I've recently been reading Bruce Schneier's excellent "Practical Cryptography", where he mentions the interlock protocol.
Alice sends Bob her public key.
Bob sends Alice his public key.
Alice encrypts her message using Bob's public key. She sends half of the encrypted message to Bob.
Bob encrypts his message using Alice's public key. He sends half of the encrypted message to Alice.
Alice sends the other half of her encrypted message to Bob.
Bob puts the two halves of Alice's message together and decrypts it with his private key. Bob sends the other half of his encrypted message to Alice.
Alice puts the two halves of Bob's message together and decrypts it with her private key.
Now, Mallory has to send something to Bob in step (3) of the protocol, after he receives half of Alice's message, even though he can't decrypt it until he gets everything from Alice in step (5). He must fabricate a message to Bob, and Bob is likely to notice the fabrication, say, after he runs ls in his home directory.
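For concreteness, here is a minimal sketch of the exchange above using RSA-OAEP from Python's cryptography package (the messages are made up). The point is only that neither half of a ciphertext is decryptable on its own, so a man in the middle cannot decrypt, read and re-encrypt a message before forwarding it:

    # Steps 1-7 of the interlock protocol, run locally for illustration.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    alice_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    bob_key   = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Steps 1-2: exchange public keys (this is where Mallory could swap in his own).
    alice_pub, bob_pub = alice_key.public_key(), bob_key.public_key()

    # Step 3: Alice encrypts and sends only the first half of the ciphertext.
    ct_a = bob_pub.encrypt(b"ls", oaep)
    a1, a2 = ct_a[:len(ct_a) // 2], ct_a[len(ct_a) // 2:]

    # Step 4: Bob encrypts his reply and sends its first half.
    ct_b = alice_pub.encrypt(b"file1 file2", oaep)
    b1, b2 = ct_b[:len(ct_b) // 2], ct_b[len(ct_b) // 2:]

    # Steps 5-7: the second halves are exchanged; each side reassembles and decrypts.
    assert bob_key.decrypt(a1 + a2, oaep) == b"ls"
    assert alice_key.decrypt(b1 + b2, oaep) == b"file1 file2"

A man in the middle who swapped the keys in steps 1-2 holds only half of Alice's ciphertext at step 3, so he must invent something to forward to Bob, which is exactly what the protocol counts on Bob noticing.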
Why didn't SSH use such a scheme? It seems to really fit its goals. It doesn't require any other entity, and it makes MITM attacks substantially more difficult.
Is it something inherent? A flaw in my understanding of the problem? Or did the designers simply think the extra security wasn't worth complicating the protocol?
PS:
If you think that it would cause too much overhead, you could require implementations to use interlock only for the first 10K of data in the connection, so in practice it wouldn't matter much, but a MITM attack would nevertheless be more difficult.
Update:
The attack on the interlock protocol described here does not mean a MITM attack is possible; it does mean that if a single password is sent during the communication, the MITM can intercept it, and the user would only see a timeout error.
Update 2:
The point Eugene raises is valid. The interlock protocol doesn't provide authentication. That is, you still can't be sure that if you're connecting to example.com, it is indeed example.com, and not malicious.com impersonating example.com. You can't know that for sure without, say, DNSSEC. So, for example, if you're SSHing to the missile silos and type launch_missile -time now (without, say, running ls first to verify the server really is the one in the missile silos), you might actually have typed that on a malicious server, and now the enemy knows you're about to launch missiles against him. This type of attack indeed won't be prevented by the interlock protocol.
However, if I understand the protocol correctly, a much more dangerous and very practical attack might be prevented. If the interlock protocol is used, then even if you know nothing about example.com, it is impossible for you to SSH to your server while someone eavesdrops on the entire SSH session. I think this type of attack is much more likely.
Maybe SSH doesn't care about MITM attacks? I think not; see for instance the PuTTY FAQ:
Those annoying host key prompts are the whole point of SSH. Without them, all the cryptographic technology SSH uses to secure your session is doing nothing more than making an attacker's job slightly harder; instead of sitting between you and the server with a packet sniffer, the attacker must actually subvert a router and start modifying the packets going back and forth. But that's not all that much harder than just sniffing; and without host key checking, it will go completely undetected by client or server.
He's clearly talking about a MITM attack and not about server authentication. I think using the interlock protocol would clearly prevent the attack mentioned in the PuTTY FAQ, and I still don't understand why they didn't use it.
I don't see how the interlock protocol prevents MITM.
The problem is not how to exchange keys, but how to trust them. You correctly point out that people ignore warnings that the keys don't match. This is really the biggest flaw, but the protocol you describe doesn't solve the problem of verifying the keys' origin. SSL uses X.509 certificates and a PKI to establish trust. SSH can also use certificates, but almost no SSH software supports them.
I have a couple of questions about SSL certificates.
I have never used them before, but my current project requires me to.
Question 1.
Where should you use SSL? I know that places like logging in and resetting passwords are definite places to put it. How about once users are logged in? Should all requests go through SSL, even if the data in their account is not considered sensitive? Would that slow down SSL for the important parts, or does it make no difference ("well, you've got SSL, might as well make everything go through it no matter what")?
Question 2.
I know that in SMTP you can enable SSL as well. I am guessing this would be pretty good to use if you're sending, say, a password reset to someone.
If I enable this setting, how can I tell if SSL is working? How do I know if it is really enabled? What happens if the mail server does not have SSL enabled but I have that boolean value enabled? Will it just send the mail without SSL then?
With an SSL connection, one of the most expensive portions (relatively speaking) is the establishment of the connection. Depending on how it is set up, for example, it might create an ephemeral (created on the fly) RSA key for establishing a session key. That can be somewhat expensive if many of them have to be created constantly. If, though, the creation of new connections is less common (and they are used for longer periods of time), then the cost may not be relevant.
Once the connection has been established, the added cost of SSL is not that great, although it does depend on the encryption type. For example, using 256-bit AES for encryption will take more time than using 128-bit RC4. I recently did some testing with communications all on the same PC, where both client and server were echoing data back and forth. In other words, the communications made up almost the entire cost of the test. Using 128-bit RC4 added about 30% to the cost (measured in time), and using 256-bit AES added nearly 50%. But remember, this was on one single PC on the loopback adapter. If the data were transmitted across a LAN or WAN, then the relative cost is significantly lower. So if you already have an SSL connection established, I would continue to use it.
As for verifying that SSL is actually being used: there are probably "official" ways of verifying it; using a network sniffer is the poor man's version. I ran Wireshark, sniffed network traffic, and compared a non-SSL connection with an SSL connection by looking at the raw data. I could easily see raw text data in the non-SSL version, while the SSL traffic "looked" encrypted. That, of course, means absolutely nothing on its own, but it does show that "something" is happening to the data. In other words, if you think you are using SSL but can recognize the raw text in a network sniff, then something is not working as you expected. The converse is not true, though: just because you can't read it does not mean it is encrypted.
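A slightly less hand-wavy check from the client side is to ask the TLS library what it actually negotiated. A small sketch with Python's ssl module (host and port are placeholders):

    # Open a TLS connection and print the negotiated protocol and cipher.
    import socket
    import ssl

    HOST, PORT = "example.com", 443        # placeholder HTTPS endpoint

    ctx = ssl.create_default_context()     # also verifies the certificate chain
    with socket.create_connection((HOST, PORT)) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            print("protocol:", tls.version())      # e.g. 'TLSv1.3'
            print("cipher  :", tls.cipher())        # (name, protocol, secret bits)
            print("subject :", tls.getpeercert()["subject"])

If the handshake fails or the certificate doesn't verify, this raises an exception instead of silently falling back to plaintext.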
Use SSL for any sensitive data, not just passwords, but credit card numbers, financial info, etc. There's no reason to use it for other pages.
Some environments, such as ASP.NET, allow you to require that cookies are only ever sent over SSL. It's good to do this for any authentication or session-ID related cookies, as these can be used to spoof logins or replay sessions. You can turn this on in web.config; it's off by default.
ASP.NET also has an option that will require all authenticated pages to use SSL. Non-SSL requests get tossed. Be careful with this one, as it can cause sessions to appear hung. I'd recommend not turning on options like this, unless you really need them.
Sorry, I can't help with the SMTP questions.
First off, SSL is used to encrypt communications between client and server. It does this using public-key cryptography to set up the encrypted session. In my opinion it is good practice to use it for anything that involves personally identifiable or otherwise sensitive information.
Also, it is worth pointing out that there are two types of SSL authentication:
One-way - in which there is a single (server) certificate - this is the most common.
Two-way - in which there is a server certificate and a client certificate - the client first verifies the server's identity, and then the server verifies the client's - an example is the DoD CAC (see the sketch below).
With both, it is important to have up-to-date certificates signed by a reputable CA. This verifies your site's identity.
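For illustration, here is roughly what two-way (mutual) TLS looks like from the client side with Python's ssl module; the host and file names are placeholders for certificates you would have to provision yourself:

    # Mutual TLS: verify the server *and* present our own client certificate.
    import socket
    import ssl

    HOST, PORT = "example.com", 443              # assumed endpoint requesting client certs

    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.load_verify_locations("ca.pem")          # CA used to verify the server
    ctx.load_cert_chain(certfile="client.pem",   # our client certificate...
                        keyfile="client.key")    # ...and its private key

    with socket.create_connection((HOST, PORT)) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            print("mutual TLS established with", tls.getpeercert()["subject"])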
As for question 2: yes, you should use SSL over SMTP if you can. If your emails are routed through an untrusted router, they can be eavesdropped on if sent without encryption. I am not sure about the 'boolean value enabled' question; I don't believe setting up SSL is as simple as checking a box, though.
A couple of people have already answered your question 1.
For question 2, though, I wouldn't characterize SMTP over SSL as protecting the message. There could be plenty of points at which the message is exposed. If you want to protect the message itself, you need S/MIME or something similar. I'd say SMTP over SSL is more useful for protecting your SMTP credentials, so that someone cannot grab your password.
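For the "how do I know it's really on" part, one approach is to make the TLS upgrade explicit in your sending code so that it fails loudly rather than silently downgrading. A minimal sketch with Python's smtplib (host, port and credentials are placeholders):

    # Send mail over SMTP with an explicit STARTTLS upgrade.
    import smtplib
    import ssl
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"], msg["To"] = "app@example.com", "user@example.com"
    msg["Subject"] = "Password reset"
    msg.set_content("Here is your reset link: ...")

    ctx = ssl.create_default_context()
    with smtplib.SMTP("mail.example.com", 587) as smtp:   # assumed submission port
        smtp.starttls(context=ctx)   # raises an SMTPException if the server refuses TLS
        smtp.login("app@example.com", "app-password")     # credentials now travel encrypted
        smtp.send_message(msg)

With this setup, if the server did not advertise STARTTLS, the starttls() call would raise rather than quietly sending in the clear, which answers the "will it just send it as non-SSL?" worry for this particular configuration.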
We have a (multi-OS) application which communicates with an HTTPS server using libcurl and an SSL client certificate. When the client certificate is password protected, the application must ask the user to input the password. The application sends hundreds of different HTTPS requests to the server, so we cannot ask the user to enter the password each time a new connection is created. Right now we simply prompt the user for the password once when the application starts, then pass it to libcurl through the "CURLOPT_KEYPASSWD" option. But I'm worried that a malicious user could easily hack into the running process and read the client certificate password. Is there any way I can cache the client certificate password and also prevent it from being easily read from memory?
You should not worry that much about it in real life. If you do worry about such attacks, look into hardware-based keys and certificates stored in smart cards.
Some suggestions to fight swap and cold-boot attacks (a sketch follows this list):
Lock the memory that holds the password so it cannot be swapped out (mlock + mprotect on POSIX; Windows has similar functions).
Possibly encrypt the memory that holds the password and only decrypt it while the password is being used.
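The question is about a C/libcurl application, where you would call mlock() directly, but as a rough, POSIX-only illustration of the first suggestion, here is the same idea via ctypes in Python (the passphrase is a placeholder):

    # Pin the buffer holding a secret so it cannot be swapped to disk,
    # then wipe it and unlock it when done.
    import ctypes

    libc = ctypes.CDLL(None, use_errno=True)
    libc.mlock.argtypes = libc.munlock.argtypes = (ctypes.c_void_p, ctypes.c_size_t)

    secret = bytearray(b"correct horse battery staple")   # placeholder passphrase
    cbuf = (ctypes.c_char * len(secret)).from_buffer(secret)
    addr, size = ctypes.addressof(cbuf), len(secret)

    if libc.mlock(addr, size) != 0:                       # keep the page out of swap
        raise OSError(ctypes.get_errno(), "mlock failed")
    try:
        pass   # ... hand the passphrase to the TLS library here ...
    finally:
        for i in range(size):                             # overwrite before releasing
            secret[i] = 0
        libc.munlock(addr, size)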
In real life, if your machine is owned to the point that somebody can intrude into processes they should not have access to, you're already compromised; there is no point trying to make things obscure.
If you think you control the cache, think again: you can never know whether curl, for example, copies and leaks the memory holding the password under some circumstances.
If you're serious about SSL and security, use hardware tokens or smart cards. Otherwise, expect that once the host is compromised, your software and any access codes passing through that host are compromised as well.