When is it safe to use a broken hash function?

It is trivial to use a secure hash function like SHA-256, and continuing to use MD5 for security is reckless behavior. However, there are some complexities to hash function vulnerabilities that I would like to better understand.
Collisions have been generated for MD4 and MD5. According to NIST, MD5 is not a secure hash function. It only takes 2^39 operations to generate a collision and should never be used for passwords. However SHA-1 is vulnerable to a similar collision attack in which a collision can be found in 2^69 operations, whereas brute force is 2^80. No one has generated a SHA-1 collision and NIST still lists SHA-1 as a secure message digest function.
So when is it safe to use a broken hash function? Even though a function is broken it can still be "big enough". According to Schneier a hash function vulnerable to a collision attack can still be used as an HMAC. I believe this is because the security of an HMAC is dependent on its secret key and a collision cannot be found until this key is obtained. Once you have the key used in an HMAC it's already broken, so it's a moot point. What hash function vulnerabilities would undermine the security of an HMAC?
Let's take this property a bit further. Does it then become safe to use a very weak message digest like MD4 for passwords if a salt is prepended to the password? Keep in mind the MD4 and MD5 attacks are prefixing attacks, and if a salt is prepended then an attacker cannot control the prefix of the message. If the salt is truly a secret, and isn't known to the attacker, then does it matter if it's appended to the password? Is it safe to assume that an attacker cannot generate a collision until the entire message has been obtained?
Do you know of other cases where a broken hash function can be used in a security context without introducing a vulnerability?
(Please post supporting evidence because it is awesome!)

Actually collisions are easier than what you list on both MD5 and SHA-1. MD5 collisions can be found in time equivalent to 2^26.5 operations (where one "operation" is the computation of MD5 over a short message). See this page for some details and an implementation of the attack (I wrote that code; it finds a collision within an average of 14 seconds on a 2.4 GHz Core2 x86 in 64-bit mode).
Similarly, the best known attack on SHA-1 is in about 2^61 operations, not 2^69. It is still theoretical (no actual collision was produced yet) but it is within the realm of the feasible.
As for implications on security: hash functions are usually said to have three properties:
No preimage: given y, it should not be feasible to find x such that h(x) = y.
No second preimage: given x1, it should not be feasible to find x2 (distinct from x1) such that h(x1) = h(x2).
No collision: it should not be feasible to find any x1 and x2 (distinct from each other) such that h(x1) = h(x2).
For a hash function with an n-bit output, there are generic attacks (which work regardless of the details of the hash function) in 2^n operations for the first two properties, and 2^(n/2) operations for the third. If, for a given hash function, an attack is found which exploits special details of how the hash function operates to find a preimage, a second preimage or a collision faster than the corresponding generic attack, then the hash function is said to be "broken".
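To make the generic collision bound concrete, here is a small Python sketch (my addition, not part of the original answer) that runs the birthday search against MD5 truncated to 32 bits; a collision typically appears after roughly 2^16 attempts, whereas the full 128-bit output would need about 2^64.
import hashlib, os

def truncated_md5(data, nbytes=4):
    # keep only the first nbytes of the MD5 digest (a 32-bit "toy" hash for nbytes=4)
    return hashlib.md5(data).digest()[:nbytes]

def find_truncated_collision(nbytes=4):
    seen = {}
    while True:
        msg = os.urandom(16)
        h = truncated_md5(msg, nbytes)
        if h in seen and seen[h] != msg:
            return seen[h], msg        # two distinct messages with the same truncated hash
        seen[h] = msg

m1, m2 = find_truncated_collision()
print(m1.hex(), m2.hex(), truncated_md5(m1).hex())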
However, not all usages of hash functions rely on all three properties. For instance, digital signatures begin by hashing the data which is to be signed, and then the hash value is used in the rest of the algorithm. This relies on the resistance to preimages and second preimages, but digital signatures are not, per se, impacted by collisions. Collisions may be a problem in some specific signature scenarios, where the attacker gets to choose the data that is to be signed by the victim (basically, the attacker computes a collision, has one message signed by the victim, and the signature becomes valid for the other message as well). This can be counteracted by prepending some random bytes to the signed message before computing the signature (the attack and the solution were demonstrated in the context of X.509 certificates).
HMAC security relies on another property that the hash function must fulfill; namely, that the "compression function" (the elementary brick on which the hash function is built) acts as a Pseudo-Random Function (PRF). Details on what a PRF is are quite technical, but, roughly speaking, a PRF should be indistinguishable from a Random Oracle. A random oracle is modeled as a black box which contains a gnome, some dice and a big book. On some input data, the gnome selects a random output (with the dice) and writes down in the book the input message and the output which was randomly selected. The gnome uses the book to check whether he has already seen the same input message: if so, the gnome returns the same output as previously. By construction, you can know nothing about the output of a random oracle on a given message until you try it.
The random oracle model allows the HMAC security proof to be quantified in invocations of the PRF. Basically, the proof states that HMAC cannot be broken without invoking the PRF a huge number of times, and by "huge" I mean computationally infeasible.
Unfortunately, we do not have random oracles, so in practice we must use hash functions. There is no proof that hash functions really exist, with the PRF property; right now, we only have candidates, i.e. functions for which we cannot prove (yet) that their compression functions are not PRF.
If the compression function is a PRF then the hash function is automatically resistant to collisions. That's part of the magic of PRFs. Therefore, if we can find collisions for a hash function, then we know that the internal compression function is not a PRF. This does not turn the collisions into an attack on HMAC: being able to generate collisions at will does not help in breaking HMAC. However, those collisions demonstrate that the security proof associated with HMAC does not apply; the guarantee is void. That's just the same as with a laptop computer: opening the case does not necessarily break the machine, but afterwards you are on your own.
In the Kim-Biryukov-Preneel-Hong article, some attacks on HMAC are presented, in particular a forgery attack on HMAC-MD4. The attack exploits the shortcomings of MD4 (its "weaknesses") which make it a non-PRF. Variants of the same weaknesses were used to generate collisions on MD4 (MD4 is thoroughly broken; some attacks generate collisions faster than the computation of the hash function itself!). So the collisions do not imply the HMAC attack, but both attacks feed on the same source. Note, though, that the forgery attack has cost 2^58, which is quite high (no actual forgery was produced, the result is still theoretical) but substantially lower than the resistance level expected from HMAC (with a robust hash function with an n-bit output, HMAC should resist up to a 2^n work factor; n = 128 for MD4).
So, while collisions do not per se imply weaknesses on HMAC, they are bad news. In practice, collisions are a problem for very few setups. But knowing whether collisions impact a given usage of hash functions is tricky enough, that it is quite unwise to keep on using a hash function for which collisions were demonstrated.
For SHA-1, the attack is still theoretical, and SHA-1 is widely deployed. The situation has been described like this: "The alarm is on, but there is no visible fire or smoke. It is time to walk towards the exits -- but not to run."
For more information on the subject, begin by reading chapter 9 of the Handbook of Applied Cryptography, by Menezes, van Oorschot and Vanstone, a must-read for the apprentice cryptographer (not to be confused with "Applied Cryptography" by B. Schneier, which is a well-written introduction but nowhere near as thorough as the "Handbook").

The only time it is safe to use a broken hash function is when the consequences of a collision are harmless or trivial, e.g. when assigning files to a bucket on a filesystem.

When you don't care whether it's safe or not.
Seriously, it doesn't take any extra effort to use a secure hash function in pretty much every language, and performance impact is negligible, so I don't see why you wouldn't.
[Edit after actually reading your question]
According to Schneier a hash function vulnerable to a collision attack can still be used as an HMAC. I believe this is because the security of an HMAC is dependent on its secret key and a collision cannot be found until this key is obtained.
Actually, it's essentially because being able to generate a collision for a hash does not necessarily help you generate a collision for the hash-of-a-hash (combined with the XORing used by HMACs).
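For reference, here is a rough Python sketch (mine, not Schneier's or the answerer's) of the nested construction being referred to: the key is XORed with two different pads and the hash is applied twice, which is why a plain collision on the hash does not directly give an HMAC forgery.
import hashlib, hmac

def hmac_sha1(key, msg, block_size=64):
    # pad or hash the key to the block size of SHA-1
    if len(key) > block_size:
        key = hashlib.sha1(key).digest()
    key = key.ljust(block_size, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha1(ipad + msg).digest()      # inner hash over the message
    return hashlib.sha1(opad + inner).digest()     # outer hash over the inner digest

# sanity check against the standard library implementation
assert hmac_sha1(b"key", b"message") == hmac.new(b"key", b"message", hashlib.sha1).digest()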
Does it then become safe to use a very weak message digest like MD4 for passwords if a salt is prepended to the password?
No, not if the hash has a preimage attack which allows you to prepend data to the input. For instance, if the hash was H(pass + salt), we'd need a preimage attack which allows us to find pass2 such that H(pass2 + salt) = H(pass + salt).
There have been append attacks in the past, so I'm sure prepend attacks are possible.
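For concreteness, a minimal sketch (my own; MD5 stands in for "a weak digest" and the function names are made up) of the two orderings discussed above:
import hashlib, os

salt = os.urandom(16)

def hash_salt_first(password):
    # H(salt + password): the attacker does not control the prefix of the hashed message
    return hashlib.md5(salt + password).hexdigest()

def hash_salt_last(password):
    # H(password + salt): the attacker controls the prefix
    return hashlib.md5(password + salt).hexdigest()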

Download sites use an MD5 hash as a checksum to determine if the file was corrupted during download, and I would say a broken hash is good enough for that purpose.
Let's say that a MITM decides to modify the file (say a zip archive, or an exe). Now, the attacker has to do two things:
1. Find a hash collision and create a modified file out of it
2. Ensure that the newly created file is also a valid exe or a zip archive
With a broken hash, 1 is a bit easier. But ensuring that the collision simultaneously meets other known properties of the file is too expensive computationally.
This is totally my own answer, and I could be terribly wrong.
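For what it's worth, a short illustration (my addition; the file name is hypothetical) of that checksum use case; note it only guards against accidental corruption, not against a capable attacker:
import hashlib

def md5_of_file(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# print(md5_of_file("download.zip"))   # compare with the checksum published on the download site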

The answer entirely depends on what you're using it for. If you need to prevent somebody producing a collision within a few milliseconds I'd be less worried than if you need to prevent somebody producing a collision within a few decades.
What problem are you actually trying to solve?

Most of the worry about using something like MD4 for a password is related less to currently known attacks than to the fact that once it has been analyzed to the point that collision generation is easy, it is generally presumed to be considerably more likely that somebody will be able to use that knowledge to create a preimage attack -- and when/if that happens, essentially all possible uses of that hash function become vulnerable.

Related

Why does HashMap need a cryptographically secure hashing function?

I'm reading a Rust book about HashMap hashing functions, and I can't understand these two sentences.
By default, HashMap uses a cryptographically secure hashing function that can provide resistance to Denial of Service (DoS) attacks. This is not the fastest hashing algorithm available, but the trade-off for better security that comes with the drop in performance is worth it.
I know what a cryptographically secure hash function is, but I don't understand the rationale behind it. From my understanding, a good hash function for a HashMap only needs three properties:
determinism (the same object has the same hash value),
VERY high speed,
a uniform distribution of bits in the hash value (meaning it will reduce collisions).
Other properties of cryptographically secure hash functions are not really relevant 99% (maybe even 99.99%) of the time for hash tables.
So my question is: what does "resistance to DoS attacks and better security" even mean in the context of HashMap?
Let's start backward: how do you DoS a HashMap?
Over the years, there have been multiple attacks on various software stacks based on Hash Flooding. If you know which framework a site is powered by, and therefore which hash function is used, and this hash function is not cryptographically secure then you may be able to pre-compute, offline, a large set of strings hashing to the same number.
Then, you simply inject this set into the site, and for each (simple) request, it does a disproportionately large amount of work, as inserting N colliding elements takes O(N^2) operations.
Rust was conceived with the benefit of hindsight, and therefore attention was paid to avoiding this attack by default, reasoning that users who really need performance out of HashMap would simply switch the hash function.
Let's say we use HashMap to store some user data in a web-application. Suppose that users can choose (part of) the key in some way – maybe the key is a username or a filename of an uploaded file or anything like that.
If we are not using a cryptographically secure hash function, this means that the attacker could possibly craft multiple inputs that all map to the same output. Of course, a hash map has to deal with collisions, because they occur naturally.
But when unnaturally many collisions occur, the hash map implementation might do strange things. For example, looking up some keys could have a runtime of O(n). Or the hash map might think that it has to grow because of all the collisions; but growing won't solve the problem, so the hash map grows until all memory is used. In either case, it's bad. Hash maps just assume that statistically, collisions rarely occur.
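A toy demonstration (my sketch, not from the answer) of that degenerate case: if every key hashes to the same value, insertion into a hash map degrades to quadratic time.
import time

class BadKey:
    def __init__(self, value):
        self.value = value
    def __hash__(self):
        return 0                      # every key lands in the same bucket
    def __eq__(self, other):
        return self.value == other.value

def insert_n(n):
    start = time.perf_counter()
    table = {}
    for i in range(n):
        table[BadKey(i)] = i          # each insert must compare against all previous colliding keys
    return time.perf_counter() - start

print(insert_n(1000), insert_n(2000))  # roughly 4x slower, not 2x: O(n^2) behaviour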
Of course, this is not a "stealing user data" attack -- at least not directly. But if one part of a system is weak, this makes it easier for attackers to find other weaknesses.
A cryptographically secure hash function prevents this attack, since the attacker cannot possibly craft multiple keys that map to the same value (at least not without trying out all keys).
is not really relevant 99% (maybe even 99.99%) of the time for hash tables.
Yes, probably. But this is difficult to balance. I guess we all would agree that if 20% of users would have security problems in their application due to an insecure hash function (while 80% don't care), it's still a good idea to use the "secure by default" approach. What about 5%/95%? What about 1%/99%? Hard to tell where the threshold is, right?
There has been a ton of discussion about this already. Because yes, most people only notice the slowness of the hash map. Maybe the situation I described above is incredibly rare and it isn't worth slowing down all other users' code by default. But this has been decided, the default hash function won't change, and luckily you can choose your own hash function.
If a server application stores user input (such as post data in a web application) in a hash table, a malicious user may try to provide a large number of inputs that all have the same hash value, leading to a large number of hash collisions and thus slowing down operations on the map significantly, to the point that it can be used as a DoS attack (as described in this article for example).
If the hash is cryptographically secure, attackers will have a much harder time trying to find inputs with the same hash value.

MD5 hash reversing

I know it's not possible to reverse an MD5 hash back to its original value. But what about generating a set of random characters which would give the exact same value when hashed? Is that possible?
Finding a message that matches a given MD5 hash can happen in three ways:
You guess the original message. For passwords and other low entropy messages this is often relatively easy. That's why we use key-stretching in such situations. For sufficiently complex messages, this becomes infeasible.
You guess about 2^127 times and get a new message fitting that hash. This is currently infeasible.
You exploit a pre-image attack against that specific hash function, obtained by cryptanalyzing it. For MD5 there is one, with a work factor of 2^123, but that's still infeasible.
There is no efficient attack on MD5's pre-image resistance at the moment.
There are efficient collision attacks against MD5, but they only allow an attacker to construct two different messages with the same hash; they do not allow him to construct a message for a given hash.
Yes, it is possible to come up with a collision (since you map from a larger space to a smaller one, this is something that you can assume will happen eventually). Actually, MD5 is already considered "broken" in this respect.
From wiki:
However, it has since been shown that MD5 is not collision resistant;[3] as such, MD5 is not suitable for applications like SSL certificates or digital signatures that rely on this property. In 1996, a flaw was found with the design of MD5, and while it was not a clearly fatal weakness, cryptographers began recommending the use of other algorithms, such as SHA-1—which has since been found also to be vulnerable. In 2004, more serious flaws were discovered in MD5, making further use of the algorithm for security purposes questionable—specifically, a group of researchers described how to create a pair of files that share the same MD5 checksum.[4][5] Further advances were made in breaking MD5 in 2005, 2006, and 2007.[6] In December 2008, a group of researchers used this technique to fake SSL certificate validity,[7][8] and US-CERT now says that MD5 "should be considered cryptographically broken and unsuitable for further use,"[9] and most U.S. government applications now require the SHA-2 family of hash functions.[10]
In one sense, this is possible. If you have strings that are longer than the hash itself, then you will have collisions, so such a string will exist.
However, finding such a string would be equivalent to reversing the hash, as you would be finding a value that hashes to a particular hash, so it would not be any more feasible than reversing a hash any other way.
For MD5 specifically? Yes.
Several years ago, an article was published on an exploit of the MD5 hash that allowed easy generation of data which, when hashed, gave a desired MD5 hash (well, what they actually discovered was an algorithm to find sets of data with the same hash, but you get how that can be used the other way around). You can read an overview of the principle here. No similar algorithm has been found for SHA-2, although that may change in the future.
Yes, what you're talking about is called a collision. A collision in any hashing mechanism is when two different plaintexts create the same hash after being run through a hashing algorithm.

Why doesn't having the code to the MD5 function help hackers break it?

I believe I can download the code to PHP or Linux or whatever and look directly at the source code for the MD5 function. Could I not then reverse engineer the encryption?
Here's the code - http://dollar.ecom.cmu.edu/sec/cryptosource.htm
It seems like any encryption method would be useless if "the enemy" has the code it was created with. Am I wrong?
That is actually a good question.
MD5 is a hash function -- it "mixes" input data in such a way that it should be unfeasible to do a number of things, including recovering the input given the output (it is not encryption, there is no key and it is not meant to be inverted -- rather the opposite). A handwaving description is that each input bit is injected several times in a large enough internal state, which is mixed such that any difference quickly propagates to the whole state.
MD5 has been public since 1992. There is no secret, and there has never been any secret, in the design of MD5.
MD5 has been considered cryptographically broken since 2004, the year of publication of the first collision (two distinct input messages which yield the same output); it was considered "weak" since 1996 (when some structural properties were found, which were believed to ultimately help in building collisions). However, there are other hash functions, which are as public as MD5 is, and for which no weakness is known yet: the SHA-2 family. Newer hash functions are currently being evaluated as part of the SHA-3 competition.
The really troubling part is that there is no known mathematical proof that a hash function may actually exist. A hash function is a publicly described efficient algorithm, which can be embedded as a logic circuit of a finite, fixed and small size. For the practitioners of computational complexity, it is somewhat surprising that it is possible to exhibit a circuit which cannot be inverted. So right now we only have candidates: functions for which nobody has found weaknesses yet, rather than functions for which no weakness exists. On the other hand, the case of MD5 shows that, apparently, getting from known structural weaknesses to actual collisions to attacks takes a substantial amount of time (weaknesses in 1996, collisions in 2004, applied collisions -- to a pair of X.509 certificates -- in 2008), so the current trend is to use algorithm agility: when we use a hash function in a protocol, we also think about how we could transition to another, should the hash function prove to be weak.
It is not an encryption, but a one way hashing mechanism. It digests the string and produces a (hopefully) unique hash.
If it were a reversible encryption, zip and tar.gz formats would be quite verbose. :)
The reason it doesn't help hackers too much (obviously knowing how one is made is beneficial) is that if they find a password to a system that is hashed, e.g. 2fcab58712467eab4004583eb8fb7f89, they need to know the original string used to create it, and also if any salt was used. That is because when you login, for obvious reasons, the password string is hashed with the same method as it is generated and then that resulting hash is compared to what is stored.
Also, many developers are migrating to bcrypt, which incorporates a work factor; if the hashing takes 1 second as opposed to .01 seconds, it greatly slows down generating a rainbow table for your application, and those old PHP sites using md5() only become the low hanging fruit.
Further reading on bcrypt.
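To illustrate the work-factor idea (a sketch of mine; PBKDF2 from Python's standard library stands in for bcrypt here), raising the iteration count makes every guess in an offline attack proportionally more expensive:
import hashlib, hmac, os

salt = os.urandom(16)
iterations = 600_000                     # the work factor: tune so one hash takes a noticeable fraction of a second
stored = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, iterations)

def verify(candidate):
    attempt = hashlib.pbkdf2_hmac("sha256", candidate, salt, iterations)
    return hmac.compare_digest(attempt, stored)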
One of the criteria of good cryptographic operations is that knowledge of the algorithm should not make it easier to break the encryption. So an encryption should not be reversible without knowledge of the key, even with full knowledge of the algorithm, and a hash function must not be reversible regardless of knowledge of the algorithm (the term used is "computationally infeasible").
MD5 and other hash functions (like SHA-1, SHA-256, etc.) perform a one-way operation on data that creates a digest or "fingerprint" that is usually much smaller than the plaintext. This one-way function cannot be reversed to retrieve the plaintext, even when you know exactly what the function does.
Likewise, knowledge of an encryption algorithm doesn't make it any easier (assuming a good algorithm) to recover plaintext from ciphertext. The reverse process is "computationally infeasible" without knowledge of the encryption key used.

Why should a good hash algorithm not allow attackers to find two messages producing the same hash?

I was reading wikipedia, and it says
Cryptographic hash functions are a third type of cryptographic algorithm. They take a message of any length as input, and output a short, fixed-length hash which can be used in (for example) a digital signature. For good hash functions, an attacker cannot find two messages that produce the same hash.
But why? What I understand is that you can put the long Macbeth story into the hash function and get an X-long hash out of it. Then you can put in the Beowulf story to get another hash out of it, again X long.
So since this function maps loads of things into a shorter length, there are bound to be overlaps; I might put the story of The Hobbit into the hash function and get the same output as Beowulf. OK, but this is inevitable, right, since we are producing a shorter length output from our input? And even if such an output is found, why is it a problem?
I can imagine if I invert it and get out The Hobbit instead of Beowulf, that would be bad, but why is it useful to the attacker?
Yes, of course there will be collisions for the reasons you describe.
I suppose the statement should really be something like this: "For good hash functions, an attacker cannot find two messages that produce the same hash, except by brute-force".
As for the why...
Hash algorithms are often used for authentication. By checking the hash of a message you can be (almost) certain that the message itself hasn't been tampered with. This relies on it being infeasible to find two messages that generate the same hash.
If a hash algorithm allows collisions to be found relatively easily then it becomes useless for authentication because an attacker could then (theoretically) tamper with a message and have the tampered message generate the same hash as the original.
Yes, it's inevitable that there will be collisions when mapping a long message onto a shorter hash, as the hash cannot contain all possible values of the message. For the same reason you cannot 'invert' the hash to uniquely produce either Beowulf or The Hobbit - but if you generated every possible text and filtered out the ones that had your particular hash value, you'd find both texts (amongst billions of others).
The article is saying that it should be hard for an attacker to find or construct a second message that has the same hash value as a first. Cryptographic hash functions are often used as proof that a message hasn't been tampered with - if even a single bit of data flips then the hash value should be completely different.
A couple of years back, Dutch researchers demonstrated weaknesses in MD5 by publishing a hash of their "prediction" for the US presidential election. Of course, they had no way of knowing the outcome in advance - but with the computational power of a PS3 they constructed a PDF file for each candidate, each with the same hash value. The implications for MD5 - already on its way down - as a trusted algorithm for digital signatures became even more dire...
Cryptographic hashes are used for authentication. For instance, peer-to-peer protocols rely heavily on them. They use them to make sure that an ill-intentioned peer cannot spoil the download for everyone else by distributing packets that contain garbage. The torrent file that describes a download contains the hashes for each block. With this check in place, the victim peer can find out that he has been handled a corrupted block and download it again from someone else.
The attacker would like to replace Beowulf with The Hobbit to increase Saxon poetry's visibility, but the cryptographic hash that is used in the protocol won't let him.
If it is easy to find collisions then the attacker could create malicious data, and simply prepend it with dummy data until the collision is found. The hash check would then pass for the malicious data. That is why collisions should only be possible via brute force and be as rare as possible.
Alternatively, collisions are also a problem with certificates.

Is it possible to reverse a SHA-1 hash?

Is it possible to reverse a SHA-1?
I'm thinking about using a SHA-1 to create a simple lightweight system to authenticate a small embedded system that communicates over an unencrypted connection.
Let's say that I create a sha1 like this with input from a "secret key" and spice it with a timestamp so that the SHA-1 will change all the time.
sha1("My Secret Key"+"a timestamp")
Then I include this SHA-1 in the communication, and the server can do the same calculation. And hopefully, nobody would be able to figure out the "secret key".
But is this really true?
If you know that this is how I did it, you would know that I did put a timestamp in there and you would see the SHA-1.
Can you then use those two and figure out the "secret key"?
secret_key = bruteforce_sha1(sha1, timestamp)
Note1:
I guess you could brute force in some way, but how much work would that actually be?
Note2:
I don't plan to encrypt any data, I just would like to know who sent it.
No, you cannot reverse SHA-1, that is exactly why it is called a Secure Hash Algorithm.
What you should definitely be doing though, is include the message that is being transmitted into the hash calculation. Otherwise a man-in-the-middle could intercept the message, and use the signature (which only contains the sender's key and the timestamp) to attach it to a fake message (where it would still be valid).
And you should probably be using SHA-256 for new systems now.
sha("My Secret Key"+"a timestamp" + the whole message to be signed)
You also need to additionally transmit the timestamp in the clear, because otherwise you have no way to verify the digest (other than trying a lot of plausible timestamps).
Whether a brute force attack is feasible depends on the length of your secret key.
The security of your whole system would rely on this shared secret (because both sender and receiver need to know it, but no one else). An attacker would try to go after the key (either by brute-force guessing or by trying to get it from your device) rather than trying to break SHA-1.
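A rough sketch (the function names and the use of SHA-256 are my choices) of the scheme this answer describes: the digest covers the whole message, and the timestamp travels in the clear so the receiver can recompute the same value. Note that later answers point out that this bare prefix construction is vulnerable to length extension, and that HMAC is the safer choice.
import hashlib, time

SECRET = b"My Secret Key"                # shared in advance, never transmitted

def make_packet(message):
    timestamp = str(int(time.time())).encode()
    digest = hashlib.sha256(SECRET + timestamp + message).hexdigest()
    return message, timestamp, digest    # send all three; the secret stays local

def verify_packet(message, timestamp, digest):
    expected = hashlib.sha256(SECRET + timestamp + message).hexdigest()
    return expected == digest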
SHA-1 is a hash function that was designed to make it impractically difficult to reverse the operation. Such hash functions are often called one-way functions or cryptographic hash functions for this reason.
However, SHA-1's collision resistance was theoretically broken in 2005. This allows finding two different inputs that have the same hash value faster than the generic birthday attack, which has a cost of 2^80 for a 50% probability of success. In 2017, the collision attack became practical, known as SHAttered.
As of 2015, NIST dropped SHA-1 for signatures. You should consider using something stronger like SHA-256 for new applications.
Jon Callas on SHA-1:
It's time to walk, but not run, to the fire exits. You don't see smoke, but the fire alarms have gone off.
The question is actually how to authenticate over an insecure session.
The standard way to do this is to use a message authentication code, e.g. HMAC.
You send the message plaintext as well as an accompanying hash of that message where your secret has been mixed in.
So instead of your:
sha1("My Secret Key"+"a timestamp")
You have:
msg,hmac("My Secret Key",sha(msg+msg_sequence_id))
The message sequence id is a simple counter kept by both parties to track the number of messages they have exchanged in this 'session' - this prevents an attacker from simply replaying previously seen messages.
This is the industry-standard, secure way of authenticating messages, whether they are encrypted or not.
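A hedged sketch (function and variable names are mine) of that pattern using the standard library: the tag covers both the message and the sequence counter.
import hmac, hashlib

SECRET = b"My Secret Key"

def tag(message, sequence_id):
    data = message + sequence_id.to_bytes(8, "big")
    return hmac.new(SECRET, data, hashlib.sha256).hexdigest()

# sender:   transmit (message, sequence_id, tag(message, sequence_id)) and increment the counter
# receiver: recompute the tag, compare with hmac.compare_digest, and reject stale sequence numbers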
(This is why you can't brute-force the hash:)
A hash is a one-way function, and many inputs can produce the same output.
As you know the secret, and you can make a sensible guess as to the range of the timestamp, then you could iterate over all those timestamps, compute the hash and compare it.
Of course two or more timestamps within the range you examine might 'collide' i.e. although the timestamps are different, they generate the same hash.
So there is, fundamentally, no way to reverse the hash with any certainty.
In mathematical terms, only bijective functions have an inverse function. But hash functions are not injective as there are multiple input values that result in the same output value (collision).
So, no, hash functions can not be reversed. But you can look for such collisions.
Edit
As you want to authenticate the communication between your systems, I would suggest using HMAC. This construct for calculating message authentication codes can use different hash functions. You can use SHA-1, SHA-256 or whatever hash function you want.
And to authenticate the response to a specific request, I would send a nonce along with the request that needs to be used as salt to authenticate the response.
It is not entirely true that you cannot reverse a SHA-1 hash.
You cannot directly reverse one, but it can be done with rainbow tables.
Wikipedia:
A rainbow table is a precomputed table for reversing cryptographic hash functions, usually for cracking password hashes. Tables are usually used in recovering a plaintext password up to a certain length consisting of a limited set of characters.
Essentially, SHA-1 is only as safe as the strength of the password used. If users have long passwords with obscure combinations of characters, it is very unlikely that existing rainbow tables will have an entry for the hashed string.
You can test your SHA-1 hashes here:
http://sha1.gromweb.com/
There are other rainbow table lookups on the internet that you can use, so search for "reverse SHA-1".
Note that the best attacks against MD5 and SHA-1 have been about finding any two arbitrary messages m1 and m2 where h(m1) = h(m2) or finding m2 such that h(m1) = h(m2) and m1 != m2. Finding m1, given h(m1) is still computationally infeasible.
Also, you are using a MAC (message authentication code), so an attacker can't forge a message without knowing the secret, with one caveat - the general MAC construction that you used is susceptible to a length extension attack - an attacker can in some circumstances forge a message m2|m3, h(secret, m2|m3) given m2, h(secret, m2). This is not an issue with just a timestamp, but it is an issue when you compute a MAC over messages of arbitrary length. You could append the secret to the timestamp instead of prepending it, but in general you are better off using HMAC with a SHA-1 digest (HMAC is just a construction and can use MD5 or SHA as the digest algorithm).
Finally, you are signing just the timestamp and not the full request. An active attacker can easily attack the system, especially if you have no replay protection (although even with replay protection, this flaw exists). For example, I can capture timestamp, HMAC(timestamp with secret) from one message and then use it in my own message and the server will accept it.
Best to send message, HMAC(message) with sufficiently long secret. The server can be assured of the integrity of the message and authenticity of the client.
Depending on your threat scenario, you can either add replay protection or note that it is not necessary, since a message replayed in its entirety does not cause any problems.
Hashes are dependent on the input, and for the same input will give the same output.
So, in addition to the other answers, please keep the following in mind:
If you start the hash with the password, it is possible to pre-compute rainbow tables, and quickly add plausible timestamp values, which is much harder if you start with the timestamp.
So, rather than use
sha1("My Secret Key"+"a timestamp")
go for
sha1("a timestamp"+"My Secret Key")
I believe the accepted answer is technically right but wrong as it applies to the use case: to create & transmit tamper evident data over public/non-trusted mediums.
Because although it is technically highly difficult to brute-force or reverse a SHA hash, when you are sending plain text "data & a hash of the data + secret" over the internet, as noted above, it is possible to intelligently get the secret after capturing enough samples of your data. Think about it - your data may be changing, but the secret key remains the same. So every time you send a new data blob out, it's a new sample to run basic cracking algorithms on. With 2 or more samples that contain different data & a hash of the data+secret, you can verify that the secret you determine is correct and not a false positive.
This scenario is similar to how Wifi crackers can crack wifi passwords after they capture enough data packets. After you gather enough data it's trivial to generate the secret key, even though you aren't technically reversing SHA1 or even SHA256. The ONLY way to ensure that your data has not been tampered with, or to verify who you are talking to on the other end, is to encrypt the entire data blob using GPG or the like (public & private keys). Hashing is, by nature, ALWAYS insecure when the data you are hashing is visible.
Practically speaking it really depends on the application and purpose of why you are hashing in the first place. If the level of security required is trivial or say you are inside of a 100% completely trusted network, then perhaps hashing would be a viable option. Hope no one on the network, or any intruder, is interested in your data. Otherwise, as far as I can determine at this time, the only other reliably viable option is key-based encryption. You can either encrypt the entire data blob or just sign it.
Note: This was one of the ways the British were able to crack the Enigma code during WW2, which worked in the Allies' favor.
Any thoughts on this?
SHA-1 was designed to prevent recovery of the original text from the hash. However, SHA-1 lookup databases exist that allow common passwords to be looked up by their SHA-1 hash.
Is it possible to reverse a SHA-1?
SHA-1 was meant to be a collision-resistant hash, whose purpose is to make it hard to find distinct messages that have the same hash. It is also designed to be preimage-resistant, that is, it should be hard to find a message having a prescribed hash, and second-preimage-resistant, so that it is hard to find a second message having the same hash as a prescribed message.
SHA-1's collision resistance was practically broken in 2017 by Google's team, and NIST had already removed SHA-1 for signature purposes in 2015.
SHA-1's pre-image resistance, on the other hand, still stands. One should still be careful about pre-image resistance: if the input space is small, then finding the pre-image is easy. So, your secret should be at least 128 bits.
SHA-1("My Secret Key"+"a timestamp")
This is the prefix-secret construction, which has an attack known as the length extension attack on Merkle-Damgård based hash functions like SHA-1 (it was applied in practice against Flickr's API). One should not use this with SHA-1 or SHA-2. One can use
HMAC-SHA-256 (HMAC doesn't require collision resistance of the hash function, therefore HMAC with SHA-1 or MD5 is still fine in that respect; nevertheless, forget about them) to achieve a better security system. HMAC has the cost of a double call of the hash function, which is a weakness for time-critical systems. A note: HMAC is a beast in cryptography.
KMAC is the prefix-secret construction from SHA-3; since SHA-3 has resistance to length extension attacks, this is secure.
Use BLAKE2 with the prefix construction and this is also secure, since it also resists length extension attacks. BLAKE2 is a really fast hash function, and now it has a parallel version, BLAKE3, too (which needs some time for security analysis). WireGuard uses BLAKE2 as a MAC.
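For illustration, a short sketch (my own, with a placeholder key and message) of two of the constructions mentioned: HMAC-SHA-256 from the standard library, and BLAKE2b in keyed mode, which can be used directly as a MAC because it resists length extension.
import hmac, hashlib, os

key = os.urandom(32)                                   # a random secret of at least 128 bits
msg = b"a timestamp" + b"the whole message"

mac1 = hmac.new(key, msg, hashlib.sha256).hexdigest()  # HMAC-SHA-256
mac2 = hashlib.blake2b(msg, key=key).hexdigest()       # keyed BLAKE2b acting as a MAC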
Then I include this SHA-1 in the communication, and the server can do the same calculation. And hopefully, nobody would be able to figure out the "secret key".
But is this really true?
If you know that this is how I did it, you would know that I did put a timestamp in there and you would see the SHA-1. Can you then use those two and figure out the "secret key"?
secret_key = bruteforce_sha1(sha1, timestamp)
You did not define the size of your secret. If your attacker knows the timestamp, then they can search for the secret by brute force. If we consider the collective power of the Bitcoin miners, as of 2022, they reach around ~2^93 double SHA-256 computations in a year. Therefore, you must adjust your security according to your risk. As of 2022, NIST's minimum security level is 112 bits. One should consider the above-mentioned 128 bits for the secret size.
Note1: I guess you could brute force in some way, but how much work would that actually be?
Given the answer above. As a special case, against a possible implementation of Grover's algorithm (a quantum algorithm for finding pre-images), one should use hash functions with an output size larger than 256 bits.
Note2: I don't plan to encrypt any data, I just would like to know who sent it.
This is not the way. Your construction can only work if the secret is mutually shared, for example via a DHKE; that is, the secret is known only to the sender and you. Instead of managing this, a better way is to use digital signatures to solve this issue. Besides, one gets non-repudiation, too.
Any hashing algorithm is "reversible" by brute force if it is applied to strings of maximum length L; the only question is the value of L. To assess it exactly, you could run the state-of-the-art password recovery utility, hashcat, which is optimized to get the best performance out of your hardware.
That's why you need long passwords, like 12 characters. Here they say that a length-8 password is recovered (using brute force) in 24 hours (1 GPU involved). For each extra character, multiply that by the alphabet length (say 50). So for 9 characters you have 50 days, for 10 you have 6 years, and so on. It's definitely inaccurate, but it can give us an idea of what the numbers could be.
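Back-of-the-envelope arithmetic behind those numbers (my own sketch, using the answer's assumptions of a 50-character alphabet and 24 hours for length 8); real cracking speed depends heavily on the hash and the hardware:
alphabet = 50
hours_for_8 = 24
for length in range(8, 13):
    hours = hours_for_8 * alphabet ** (length - 8)
    print(length, round(hours / 24), "days", round(hours / 24 / 365, 1), "years")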
