First practical SHA-256 collision for 31 steps. fse2024 (twitter.com/jedisct1)
179 points by devStorms 32 days ago | 60 comments



It took me a lot of head-scratching to understand exactly what this means, so for your information: this is not a full attack, and you are safe (for now). If you need concrete proof:

    import hashlib
    m0 = bytes.fromhex('''
        c32aef52 512294ba 9db5ed8c 8c8c88ed b2de2765 63a2d14e ec7619cc 93b21182
        e5050f50 f0839b60 7b1ee176 aaa06d68 c462343c 67898962 9558f495 04281f2c
    ''')
    m1 = bytes.fromhex('''
        5d0f5ae6 05e98311 8fa3c73a 9af8c49d a2bf31f7 de547b67 5baecee3 da0d8c94
        e4c19564 f682d45c f7c57698 f871f9b5 f14469b7 fc28eb0c 2d76db75 043fe071
    ''')
    m1p = bytes.fromhex('''
        5d0f5ae6 05e98311 8fa3c73a 9af8c49d a2bf31f7 de548b61 5b8e46f2 8a1dd69a
        bcc08464 f6825458 f7c57698 f871f9b5 f14469b7 fc28eb0c 2d76db75 043fe071
    ''')
    # full SHA-256 still tells the two messages apart:
    print(hashlib.sha256(m0 + m1).hexdigest())
    # 2627577ac401cf44d837cf8471cac13ad7d8385bd00e4daf59fd3c3c646eaaae
    print(hashlib.sha256(m0 + m1p).hexdigest())
    # c945222bf0868a2218d5683c69b2b6c4720093e40c46d1197262d991e4d483b6
As far as I can understand, this is the same as [1]: the first practical semi-free-start collision for 31 of the 64 rounds of SHA-256, with complexity 2^49.8. "Step" here equates to "round", which is not always the case and had me much confused. (RIPEMD-160, for example, has 5 rounds of 16 steps each.) There are other theoretical cryptanalyses covering more rounds of SHA-256, but this one is fairly practical and the group has explicitly demonstrated it. It is still far from a full collision attack like the one MD5 suffered back in 2009, though.

(By the way, I couldn't exactly reproduce the claimed result even with a 31-round version of SHA-256. Maybe they simply ran the step function 31 times without any initial rounds? I don't know.)
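If you want to experiment yourself, here is a minimal sketch of a step-reduced SHA-256 compression function: the standard FIPS 180-4 step function with the main loop simply cut short. Whether this matches the paper's exact setup (in particular, how the chaining value is derived from m0) is precisely the part I'm unsure about:

    import struct

    # First 32 of the 64 SHA-256 round constants (FIPS 180-4); 31 steps need no more
    K = [0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
         0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
         0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
         0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
         0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc,
         0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
         0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7,
         0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967]

    def rotr(x, n):
        return ((x >> n) | (x << (32 - n))) & 0xffffffff

    def compress(cv, block, steps=31):
        # message schedule, expanded only as far as the reduced step count needs
        w = list(struct.unpack('>16I', block))
        for i in range(16, steps):
            s0 = rotr(w[i - 15], 7) ^ rotr(w[i - 15], 18) ^ (w[i - 15] >> 3)
            s1 = rotr(w[i - 2], 17) ^ rotr(w[i - 2], 19) ^ (w[i - 2] >> 10)
            w.append((w[i - 16] + s0 + w[i - 7] + s1) & 0xffffffff)
        a, b, c, d, e, f, g, h = cv
        for i in range(steps):  # full SHA-256 would run 64 steps
            S1 = rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)
            ch = (e & f) ^ (~e & g)
            t1 = (h + S1 + ch + K[i] + w[i]) & 0xffffffff
            S0 = rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)
            maj = (a & b) ^ (a & c) ^ (b & c)
            t2 = (S0 + maj) & 0xffffffff
            h, g, f, e = g, f, e, (d + t1) & 0xffffffff
            d, c, b, a = c, b, a, (t1 + t2) & 0xffffffff
        # Davies-Meyer feed-forward
        return tuple((x + y) & 0xffffffff for x, y in zip(cv, (a, b, c, d, e, f, g, h)))

    IV = (0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
          0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19)
    print(compress(IV, bytes(64)))  # 31-step compression of an all-zero block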

EDIT: @Retr0id has reproduced this result: https://bsky.app/profile/retr0.id/post/3konobbmf6o2a

[1] https://eprint.iacr.org/2024/349.pdf


There was a practical collision attack on 28 rounds in 2016. Only 3 rounds of progress in 8 years is a pretty good sign for SHA-256.

For new code it might be better to use BLAKE2b, BLAKE3, or SHA-3, but at the same time I don't think there is any rush to migrate existing systems away from SHA-256.


You're better off with SHAKE256: none of that "oops, I went with the easier SHA3-224", plus SHAKE256 is faster.


Indeed. SHA-2 has turned out to be much stronger than was expected a decade ago.


“Steps” means “rounds” here. For the general advances see the table under https://en.wikipedia.org/wiki/SHA-2#Cryptanalysis_and_valida... .

In 2016 there was a practical collision attack for 28 rounds. At that rate of progress, a practical collision attack for all 64 rounds would be reached in around 90 years.


This is a good time to re-read JP Aumasson's "Too Much Crypto" post:

https://eprint.iacr.org/2019/1492.pdf

The comparison is probably broken in a variety of ways, but the Keccak team proposed KangarooTwelve, a 12-round (half as many) Keccak variant, after a practical attack on 6-round Keccak was published.


I noticed BLAKE3 uses 7 double-rounds, i.e. 14 ChaCha rounds. Is that intended, out of increased confidence in the design, or is it a bug?


I assume “steps” here means rounds? For reference, standard SHA-256 is 64 rounds.


SHA-2, including SHA-256, is constructed using a Davies–Meyer compression function. That compression function starts with a block cipher - so an object like AES, but with wider keys and wider block size. For SHA-2 this block cipher is called SHACAL-2.

Now what we're seeing here is an attack on SHA-2 assuming a very, very significant degradation of SHACAL-2, where we run far fewer rounds than the standard specifies. This is your typical cryptanalytic result: interesting, but very, very far from showing that "SHA-2 is broken".
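For illustration, a sketch of the Davies-Meyer feed-forward (with `shacal2` as a placeholder argument for the real block cipher, which I'm not implementing here):

    def davies_meyer(shacal2, chaining_value, message_block):
        # encrypt the chaining value, with the message block acting as the key
        encrypted = shacal2(key=message_block, plaintext=chaining_value)
        # then add the old chaining value back in, word-wise mod 2^32;
        # this feed-forward is what makes the result hard to invert
        return tuple((x + y) & 0xffffffff
                     for x, y in zip(encrypted, chaining_value))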

As a side note, I once estimated that the Bitcoin network is likely to produce a collision in SHA-256 sometime in the 2050s, assuming the current rate of growth of the hash throughput. Of course that's a big assumption, and also nobody will notice the collision, as nobody is saving all those past hashes :)
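A back-of-envelope version of that estimate, with all the inputs being assumptions (roughly 6e20 hashes/second today, hash rate doubling every year, and the generic 2^128 birthday bound):

    import math

    RATE_NOW = 6e20           # hashes/second today (assumption)
    DOUBLING_YEARS = 1.0      # assumed growth rate of the network
    TARGET = 2.0 ** 128       # birthday bound: ~50% collision chance
    per_year = RATE_NOW * 365.25 * 24 * 3600

    # cumulative hashes under exponential growth:
    #   integral of per_year * 2^(t/d) dt = per_year * d/ln(2) * (2^(t/d) - 1)
    t = DOUBLING_YEARS * math.log2(
        TARGET * math.log(2) / (per_year * DOUBLING_YEARS) + 1)
    print(f"~{t:.0f} years until ~2^128 SHA-256 evaluations")  # lands in the 2050s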

Another side note - if you're interested in learning about hash functions then I recommend looking into SHA-3. Not because it's newer and shinier, but because I think it is actually the easiest to understand. It has a very clever design.


I wonder, given the current rate of development, when there will be the first collision in the hashes of the Linux kernel git repository. Wait, did git finish the switch to SHA-256, or is it still using SHA-1? Googling... all I can find suggests that everyone is still using it with SHA-1, and that SHA-256 repos aren't compatible with SHA-1 repos (whatever that means exactly).


So the tl;dr is "it's in progress".

You can use SHA-256 in production. And you can convert SHA-1 repos into SHA-256 repos.

However:

- SHA-1 repos are not compatible with SHA-256 repos so you can't mix and match the trees (i.e. a SHA-256 fork couldn't upstream their commits to a SHA-1 repo).

- The conversion path from SHA-1 to SHA-256 will break all GPG signatures on the repo.

- There may be breaking changes to the SHA-256 repository implementation in the future; however, those changes are guaranteed to come with an upgrade path for users of the existing SHA-256 implementation.

So it's viable as an option but it's by no means "blessed" like the existing SHA-1 impl is.
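For reference, creating a SHA-256 repo is a single flag these days (the option landed in git 2.29, initially marked experimental):

    # new repository with SHA-256 object IDs
    git init --object-format=sha256 myrepo

    # confirm which hash function a repository uses
    git -C myrepo rev-parse --show-object-format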


I would only add that an organic (accidentally created) hash collision in Git would take an extreme amount of time. However, even today you can download the two PDFs from https://shattered.io/, put them both in your Git repository, and watch Git crash. Given the construction of SHA-1 (Merkle–Damgård), it is easy to create an unlimited number of derivative files that also cause a collision; they just have to have the correct prefixes (and then arbitrary but identical suffixes). Or upload only one of such files, but later pretend that it was the other. The authors were even kind enough to put a file tester on that very website :), but note that a determined adversary can recreate the attack and create a different set of prefixes.
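To see the Merkle–Damgård property in action, assuming you've saved the two colliding PDFs from shattered.io locally as shattered-1.pdf and shattered-2.pdf:

    import hashlib

    a = open('shattered-1.pdf', 'rb').read()
    b = open('shattered-2.pdf', 'rb').read()
    assert a != b
    assert hashlib.sha1(a).digest() == hashlib.sha1(b).digest()

    # once the internal states collide, appending the same suffix to both
    # messages keeps them colliding, for any suffix you like
    suffix = b'arbitrary but identical tail data'
    assert hashlib.sha1(a + suffix).digest() == hashlib.sha1(b + suffix).digest()
    print('still colliding')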

SHA-1 really is broken, and therefore standard Git repositories do not offer integrity protection against someone who is determined to do harm and has some resources.


git has been using the hardened variant of SHA-1 for ages, so the shattered.io files haven't had that effect for a long time.

Edit: Since git 2.13, released about a month after SHAttered was published in 2017: https://github.com/git/git/blob/master/Documentation/RelNote...


A hardened variant which to this day still has not been documented anywhere.

Really disappointing and terrible for interop.


I think IPFS's IPLD facility for integrating git's blockchain has it documented, as part of discussions on how to offer splitting of git objects, since they can naturally be gigabytes.


Additionally, AFAIK, none of the major repo hosting services (GitHub, GitLab, Bitbucket) support SHA-256 repos.


This is true; however, that is changing very soon now that SHA-256 is no longer marked experimental.

GitLab has been working on integrating SHA-256 support for a while. According to this comment[1], there's only one major blocker left (which seems to have been completed as of that comment) before they can start testing SHA-256 support on GitLab.org.

1. https://gitlab.com/groups/gitlab-org/-/epics/10981#note_1797...


Thanks!


Yes, the follow-up post (hidden by default) reads:

> Don’t panic, folks. This is very good work, especially given the low memory complexity of this attack. But there are 33 steps left. Your bitcoins are safe.


Bitcoin uses double SHA-256, just in case anyone is wondering.

Though I wonder whether double SHA-256 makes it twice as hard to break, or whether it's better or worse than that.
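For the curious, "double SHA-256" just means hashing the 32-byte digest again, roughly:

    import hashlib

    def hash256(data: bytes) -> bytes:
        # Bitcoin-style double SHA-256: blocks length-extension tricks,
        # but any collision in the inner hash survives the outer one
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()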


Frank @jedisct1

>Wouldn’t help in that case. Collision resistance of a composition degrades to the one of the weakest function (it’s even slightly worse). Double SHA2 only protects against length extension attacks. https://twitter.com/jedisct1/status/1772911384356868586


I don't think double SHA-256 makes any difference with regard to collisions. If there is a collision after a single SHA-256, the inputs will still collide after the second layer of hashing: sha256(x) = sha256(y) => sha256(sha256(x)) = sha256(sha256(y)).


But you're going backwards, though. You have a SHA-256 value and want to find an input with the same result. But this input again has to be a SHA-256 result, and you need to find an input for that as well, right? This would only work if you have the intermediate SHA-256 value that produces the final SHA-256, or if you can find a collision that is itself a SHA-256 value.


Going backwards, as you say, is called a pre-image attack. That's different from a collision attack, which is generating two inputs with the same hash.

Pre-image attacks are MUCH more difficult. How much more? Well, MD5 is considered broken, and yet there isn't one for it.


There is a pre-image attack for MD5, it's just not considered good enough to be practical. Quoting Wikipedia:

> In April 2009, an attack against MD5 was published that breaks MD5's preimage resistance. This attack is only theoretical, with a computational complexity of 2123.4 for full preimage.


Yes, but that's very little improvement over the generic 2^128 attack - trying random messages until one happens to match the target hash. The attack quoted by Wikipedia achieves only 4.6 bits of speedup (note that it's 2^123.4, not 2123.4 :) ). There are attacks of this sort against many cryptographic primitives, including AES, where you gain just a few bits over the generic / brute-force attacks.


Let's say I have a string S.

MD5(MD5(S)) = Y

Now, I find a collision string SS (of length 128 bits, like an MD5 hash), where MD5(SS) == Y

Then I find a collision string SSS (this time, length doesn't matter), where MD5(SSS) == SS

Then we have MD5(MD5(SSS)) == Y, which was only twice as hard as finding a single MD5 collision.

Could someone explain what is wrong with my reasoning?

Edit: Oh okay, got it: when we say "MD5 is broken, it's possible to do a collision attack", what we mean is that we can easily find 2 strings S1 and S2 where MD5(S1) == MD5(S2). But S1 and S2 are found randomly; we don't have a way to find a string S3 where MD5(S3) == Y for an arbitrary Y value (that is what we call a pre-image attack, not a collision attack).


A pre-image is approximately "twice as difficult" as a collision. A generic attack on, say, a 256-bit hash function takes 2^128 time to find a collision, but 2^256 time to find a preimage. And like you say, this also shows up in practice: both MD5 and SHA-1 are completely broken when it comes to collision resistance, but both are (probably) still OK for preimage resistance. I would still not recommend either of them for anything.


Where on earth did you get this idea from? What is a "generic attack"? How could you somehow turn a collision into a pre-image attack? How is many orders of magnitude "twice"?


You can find this in any introductory cryptography textbook or course. "Generic attack" is a common term for "just use brute force" [1]. It's called "generic" because it works regardless of the implementation of the primitive. For pre-image resistance the generic attack just hashes messages until it finds the right image; for collision resistance you get a quadratic speedup via the so-called birthday problem / birthday attack [1][2], where you keep hashing messages and storing the hashes until any two of the messages happen to hash to the same value.

[1] https://crypto.stackexchange.com/questions/19194/is-there-an...

[2] https://en.wikipedia.org/wiki/Birthday_problem
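To make the birthday attack concrete, here's a toy run against a deliberately crippled hash (SHA-256 truncated to 32 bits, so ~2^16 attempts suffice rather than ~2^128 for the real thing):

    import hashlib
    import itertools

    def weak_hash(msg: bytes) -> bytes:
        # toy target: SHA-256 truncated to 4 bytes
        return hashlib.sha256(msg).digest()[:4]

    seen = {}
    for i in itertools.count():
        m = str(i).encode()
        h = weak_hash(m)
        if h in seen:  # two distinct messages, same hash: a collision
            print(f'collision after {i + 1} hashes: {seen[h]} vs {m}')
            break
        seen[h] = m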


I don't think that "look, raw brute force has this property" is at all useful in this context, where you'd obviously compare a real attack, not brute force. There's no reason to believe (and every reason not to) that the same property somehow applies.

That Stack Exchange answer also immediately set off alarm bells in my head, because it pretends to be entirely generic, but the obvious thing to do with entirely generic cryptographic intuitions is to apply them to the One Time Pad and check that their answers work. This intuition doesn't work: even if you could try all the possible keys, you'd learn nothing, because of the hand-waving about "plausible" plaintext.


The birthday attack is a real attack and often useful in practice. "Just use brute force" is a huge oversimplification, but the Stack Exchange link explains it in more detail.

The one-time pad is not a hash algorithm, so obviously a generic collision attack on hash functions doesn't apply to it.


"Twice as difficult"? That doesn't match what you say after that:

2²⁵⁶ = 2¹²⁸ * 2¹²⁸

So isn't it rather 2¹²⁸ times more difficult?


Security is typically measured in bits. But yes, you're right; maybe I should have written "squared as difficult" to be clearer :)


I understand the definitions of such crypto algorithms but have no idea about differential cryptanalysis. Can someone explain how attacks like this are constructed, and why it took 8 years to advance cryptanalysis by 3 rounds? What insight was needed that took 8 years to discover and formulate as a practical attack?


Good thing git still uses SHA-1 ;)


It is used to sign a commit, right? What are the probabilities of a collision that:

a) is still code

b) is still code AND is code similar to a previous commit

c) is still code AND is code similar to a previous commit AND is valid

d) is still code AND is code similar to a previous commit AND is valid AND makes sense for something

OR at least

a) is still code

b) is still code AND is valid

d) is still code AND is valid AND makes sense for something

Let me know.


For now, SHA-1 collisions are easily detectable, but it could get worse.

In the case of MD5, there is now a collision I wouldn't have expected to be possible: one in readable ASCII.

https://mastodon.social/@Ange/112124123552605003


> "For now the SHA-1 collisions are easily detectable, but it could get worse."

Your opinion: prove it! And Again, if you instead of trolling actually read the post in THIS BRANCH , the question is: shout SHA-1 inn GIT be substituted ?


It is used to name a commit, not to sign it. So the data structure itself will be corrupted if there is a collision, as it relies on the invariant that each commit has a unique name. And the collision has to happen within a single repo.
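Concretely, the name is just a hash over a tiny header plus the full contents; a sketch of what `git hash-object` computes for a blob:

    import hashlib

    def git_blob_id(content: bytes) -> str:
        # git object ID = SHA-1 over "blob <size>\0" + content
        header = f'blob {len(content)}\0'.encode()
        return hashlib.sha1(header + content).hexdigest()

    print(git_blob_id(b'Erik'))  # same ID `git hash-object` prints for that file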


> It is used to name a commit, not to sign it.

This is bullshit. Really. If you only have to "name a commit" you can use a sequence from 0 to N. Why would someone waste computing power calculating a hash that is also a really user-unfriendly naming system? Think about it.

The correct answer is signing the commit AND database indexing: "Git uses hashes in two important ways.

When you commit a file into your repository, Git calculates and remembers the hash of the contents of the file. When you later retrieve the file, Git can verify that the hash of the data being retrieved exactly matches the hash that was computed when it was stored. In this fashion, the hash serves as an integrity checksum, ensuring that the data has not been corrupted or altered.

For example, if somebody were to hack the DVCS repository such that the contents of file2.txt were changed to “Fred”, retrieval of that file would cause an error because the software would detect that the SHA-1 digest for “Fred” is not 63ae94dae606…

Git also uses hash digests as database keys for looking up files and data.

If you ask Git for the contents of file2.txt, it will first look up its previously computed digest for the contents of that file[45], which is 63ae94dae606… Then it looks in the repository for the data associated with that value and returns “Erik” as the result. (For the moment, you should try to ignore the fact that we just used a 40 character hex string as the database key for four characters of data.)"

Source: https://ericsink.com/vcbe/html/cryptographic_hashes.html#:~:....


Earlier systems like Perforce used the totally ordered integer naming scheme you describe, but it requires a centralized entity to keep the names globally unique. Using hashes for naming avoids this, and the way they are used in git imposes a partial order.


For the choices after your "OR at least" line, just consider that most of the collision material could be padded into a comment, so achieving a), b) and d) would be "trivial."


IMHO "be padded into a comment" is included in "is valid code", still 1 in <number_of_particles_in_universe_here^1E100> is a good approximation of that probability.

Please, correct me if I'm wrong.


Do you mean with current public knowledge, or hypothetically? For MD5 all of these are doable right now (except maybe code that "makes sense" to a human reader). Also, in practice it's much easier to do this with a data file, as demonstrated for SHA-1 with a "backdoored" certificate.


1) We are talking about SHA-1; MD5 is off topic.

2) This is the main topic! Being able to generate >>valid code<< with a >>specific purpose<<, so that git would have to change its hashing algorithm;

3) A.K.A. your answer is total nonsense.

Everyone else: OK, I'm listening. Give proof that you can stealthily change code on GitHub by messing with hashing, moreover inserting a "payload" that creates a SHA-1 collision in a reasonable computational time. Everything else is BS.


1) Yes, I gave you an example of a hash algorithm that is broken right now. SHA-1 is only getting there, because attacks only ever get stronger. Responsible people don't wait until the attacks are practical and devastating; they react by predicting the obvious things that will happen in the future.

Overall I don't think you're arguing in good faith, so I'm going to walk away from this discussion.


Even without comments your additional requirements aren't relevant, but not in the way I think you're assuming.

When you're searching for a practical collision you only need a way to generate systematic output that will be interpreted, semantically, according to your intent. The easiest way to do this is to include semantically irrelevant data in something manually produced that is semantically relevant.

In the programming domain, source code specifically, comments are the easiest way to include semantically irrelevant information, but you could also use unused functions, variable names, etc. You are limited only by the constraints of your imagination and your ability to dodge CI failure checks.

"Aha!" you might say, "any human that saw that change or PR would immediately notice the garbage and catch the collision attempt!" (this is your argument). Unfortunately no: that assumes the search space I talked about is over semantic garbage. It's a bit more work, but your search space for a collision could be "Shakespearean sonnets that would make a literary buff cry", as long as you had a generator that could produce them and that produced different outputs from different seeds.

We now have access to generators that can take an incrementing seed number and produce both semantically meaningful content and meaningful-looking, semantically irrelevant content: language models. Interestingly, this moves the compute cost to the generator (usually the compute restriction is on the hash being attacked).

It's definitely not practical with our current compute capabilities to brute-force even the generic 2^128 birthday search for a hash this size, much less while waiting for a language model to produce an output from a different input seed for each check. But that's not what this article is about either...

What these collision attacks (such as the linked article) do is _decrease the search space_. Without any algorithmic tricks a SHA-256 collision search costs about 2^128. These tricks eat away at that exponent: this work gets a collision down to 2^49.8 (for the 31-step reduced variant). That is a massive drop in the search space. Is the full hash feasible to attack today? Absolutely not. But a few more of these tricks and I can see those "garbage comments" collisions happening, and wait a tiny fraction of time beyond that and add language models to your search space?

Hell, your changes could even be _productive_, produced incrementally through a series of commits, if you really wanted to limit your search space and get creative about it.

With SHA-1, collision attacks using semantic garbage are already considered practical. We're still probably computationally constrained in using language models to produce semantically viable collisions, but we're not that far off either. Those comments won't be garbage: you won't be able to distinguish them from any other AI-generated code being committed, which is rapidly improving in quality and efficiency.

Even without language models you could use something like a language's EBNF grammar as a token generator for source code, which would probably pass any glance check, but definitely not dedicated inspection like a code review. That is probably something that IS PRACTICAL TODAY for SHA-1.
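A toy version of that idea, with everything made up: the "generator" only varies a comment, every candidate is valid code, and the hash is truncated to 24 bits so the search finishes instantly:

    import hashlib

    BODY = 'def add(a, b):\n    return a + b\n'

    def candidate(seed: int) -> str:
        # the seed lands in a semantically irrelevant spot; every output compiles
        return f'# build {seed}\n{BODY}'

    seen = {}
    seed = 0
    while True:
        h = hashlib.sha1(candidate(seed).encode()).digest()[:3]  # 24-bit toy target
        if h in seen:
            print(f'seeds {seen[h]} and {seed} collide on the truncated hash')
            break
        seen[h] = seed
        seed += 1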


My point is: why should you change the hashing algorithm in git??? Let's elaborate:

1. Does SHA-1 pose a security risk in git?

2. Is it practically exploitable in any way?

In some applications, for example password hashing, SSH MACs, etc., you have good reasons to change the hashing algorithm when it becomes obsolete: an attacker gains a computational advantage in cracking a password, compromising the integrity of transmitted packets, etc.

But a hashing algorithm becoming obsolete for some applications does not make it obsolete for ALL possible applications. Moreover, in some specific applications a faster hashing algorithm could be DESIRABLE.

So why should you change SHA-1 in git?

>> "But a few more of these tricks and I can see those "garbage comments" collisions happening"

I don't think so; it is astronomically difficult computationally, whatever tricks you invent. The point here IS NOT to generate a collision by adding "garbage comments"; again, it is to alter the behaviour of committed code in a functional way.

>> "Even without language models you could use something like a language's EBNF grammar as a token generator for source code, which would probably pass any glance check, but definitely not dedicated inspection like a code review. That is probably something that IS PRACTICAL TODAY for SHA-1"

Yeah, prove it!


I agree; the necessity of something stronger than SHA-1 should be demonstrated.


I don't need to prove that I can do a thing to prove that the thing is possible, and the burden of proof is on you, since the claim that this isn't an active security problem runs against what is basically well known and well understood. The only outstanding questions are how detectable, impactful, and available those attacks are.

Specifically, you need to counter at least one of the following:

* Hash security: SHA-1 collisions are feasible to generate, and companies have been actively moving away from the algorithm, with good reason, for at least seven years (https://security.googleblog.com/2017/02/announcing-first-sha..., https://www.howtogeek.com/238705/what-is-sha-1-and-why-will-...)

* Content generation: As I've already discussed, the content you use to make that collision can be anything you want and can meet any requirement you are able to produce a generator for. To counter this you're going to have to prove to me that no engineer can make a seeded random generator that uses a language's grammar to produce plausible, valid-to-compile tokens, or just use a language model to produce plausible code and comments (also seeded). This is a _trivial_ thing to do.

* The attack: Git relies on a chain of hashes based on SHA-1, and those hashes are over the complete files included in the repository. If you can generate a collision for a file in git's history you can replace the file in that commit, and all subsequent commits will remain valid. This is the attack everyone worries about with git. The only thing that protects against it right now is the security of SHA-1. Additionally, signatures on commits and tags DO NOT protect against it: they're over the hash, commit message, and list of objects, not the objects themselves. The attacked files will still look like they came from a valid signed commit.

The extra scary part of that attack is that the malicious/changed file will not be visible in any existing checkout; those clients believe they have the correct object and will continue to show it. But anything that does regular fresh checkouts, like say a CI system that deploys to prod, will get the poisoned object. Even if it's checking the signatures on every commit, it won't see this coming.

So the security of all our git repos, our production environments, and new devs is foundationally rooted in either write access to the repository OR the security of SHA-1.

I would say that is a practical and useful attack. A faster hashing algorithm would EXACERBATE this problem, as you're almost always trading collision resistance for speed: any hashing algorithm whose hashes can be calculated faster is MORE vulnerable to collision search, not less.

"Computationally astronomical" isn't a very good argument. 20 years ago SHA-1 was insanely secure. These things get weaker over time and need to be periodically replaced, not because they're failing, but because increased resource capacity has fundamentally changed the assumptions the algorithm was originally designed under.

Even granting the computationally-astronomical argument, it's a matter of cost and resources, not practicality. It absolutely is practical if the result is worth the outcome. What is the most famous git-based project? Maybe the thing git was originally designed to manage... Think maybe _any_ nation state would be happy to pay less than ~$100k USD (https://sha-mbles.github.io/) to get some malicious code running in production builds of the Linux kernel? The kernel project specifically has extra manual checks and multiple "known good" repos, with commits literally added by hand, to protect against this attack. It's practical, it's a problem, and it needs to be fixed.

If you still insist on a working example, pay me $125k and I'll produce one for you.


If someone can change a committed file inside a git repository, the main problem is that your system is FUBAR. Let's say I'm the attacker and I'm inside: I can change committed files and I can generate a collision for each. If my goal is to deface the repository I can insert files full of gibberish, i.e. I have a file with source code:

... omissis ...

ptr=calloc(SIZE, sizeof(long));

... etc ...

then I have :

aDjw'pfojqe'rf[24oijgfpoemgl;m,g02ir-9u13]9fu24[efgje2ioprn

Same SHA-1 hash.

But wait, why should I waste 1000 GPUs defacing a Git repository when I can simply delete it? I can change the files, I can delete them. It's simply stupid.

An attack that makes sense is to change this:

ptr=calloc(SIZE, sizeof(long));

inserting:

ptr=calloc(SIZE-10, sizeof(long));

Now I have a BOF (buffer overflow), same hash, and only a code review can find the fraudulent change.

This is beyond "I make a collision by inserting commented gibberish", like this:

// adojwqf'pjqeworivhneq;lnvl;dqjnfvljeqrvneljvn

You have to insert a change that works and implements an attack, all while keeping it invisible.

Good luck with that. I also read in some comments some AI nonsense I find to be Star Trek bullshit.

> If you still insist on a working example pay me $125k and I'll produce one for you

Even with a $100M budget, you can't.

But why would I even want to do that? I have access; I can replace the whole repo with one full of exploitable bugs!

So, the initial question: "If I change SHA-1 in git to some newer version, is that a security improvement?" I feel the answer is "NO".


Defacing git repositories doesn't even make sense. You won't mess with people's existing checkouts, and it's trivial to detect and identify the responsible party. It's the security equivalent of a child throwing a tantrum in their own room. You want to replace the repo? Everyone who comes after you and tries to push a change will immediately notice, like opening the door to the proverbial child's room. You're busted and you've accomplished nothing.

You want to inject malicious code yourself? When it gets caught, or the file is inspected or reviewed, you're busted.

This attack offers the opportunity to get malicious code injected into a repository that will never show up in a PR, a code review, or any existing checkout (so the senior developers who would most likely notice the change never receive it). It re-uses an existing trusted, known-good commit in your history, even the signature on it, to say "yeah, this has always been here; this is perfectly safe and hasn't been modified since the author wrote it".

This is far more subtle, sneaky, and extremely valuable as an attack vector (and it gets more juicy, stay tuned) for getting targeted vulnerabilities and backdoors into specific software. This isn't a novel attack method; as I mentioned, the Linux kernel goes through a very rigorous process just to avoid this kind of attack.

Aside: You keep using gibberish in your counter-examples. You don't need garbage; that's the point I keep trying to hammer home. The added details can come from any generator and aren't constrained to living exclusively in comments. Garbage is what people use as examples for these attacks because it's the easiest, and if you can demonstrate the attack with garbage then it works for any generator. With garbage you've made the point.

Back to the security issue. So now you have a poisoned repo that contains malicious code and is effectively undetectable through normal use. Meanwhile your production artifacts include the unaltered malicious code from the repository. It will remain unchanged and referenced until someone else makes _any_ change to the file you targeted (once again, git doesn't store diffs but whole files in a particular commit). That change might be something like a developer adding some print statements to diagnose why the CI system is failing.

When another change happens to that file, the evidence mostly vanishes, or at least is extremely obscured. There will be _some_ object in your repository that has the SHA-1 object ID; whether it's the original or the malicious one depends entirely on when your checkout occurred.

On the receiving end your best-case scenario is that the changed code doesn't work and causes weird bugs in your CI system that can't be reproduced in local checkouts and magically go away as soon as anyone tries to diagnose them. This capability is worth STUPID amounts of money, and I would be shocked if it isn't a technique used selectively in the wild by nation states.

So how do you solve this problem?

* One of the inherent problems is that signatures don't actually cover the content of the commit. Fixing that is another regular complaint about git's behavior and would let you side-step this issue using the existing signing infrastructure: if you're worried about the attack, you'd just have to sign your commits. This is a bandaid, but it's what most people argue for, as it's significantly less of a lift than changing the hash function. Note that if you sign your commits NOW, without a change to git, you're still 100% vulnerable to this attack, and because the signatures will still be valid it's likely either to make someone innocent look guilty of injecting a vulnerability, or to have audits look less closely at the code because it came from a trusted source, causing more harm than good.

* Change out SHA-1 for something that isn't as vulnerable to collision attacks. The problem is collision attacks. Let me say that again: the core issue is collision attacks. If someone can mount a chosen-plaintext or chosen-prefix attack, the security guarantees of the git ledger go away. You can't trust it. It needs to be replaced.

* If neither of those is an option for you, your third option is to adopt the Linux kernel's policies: releases are done directly from engineers' machines, from a trusted known-good repo that has patches added by hand by the most senior engineers.


That's what I meant.


I believe there is one more step. You have to somehow get the collision into the repository. Because if you have <hash> in your own repo and pull something from another repo with the same <hash>, the remote changes will not overwrite your blob for <hash> (it will stay the same). Or at least that’s what I seem to remember from something that Torvalds wrote.


> "I believe there is one more step. You have to somehow get the collision into the repository."

Yes, Exactly. So, is it necessary to change SHA-1 having in git ? At the moment, I think there is no reason because SHA-1 doesn't expose security vulnerabilities or functional issues.


Brave of you to assume I'm committing valid and sensical code to git


Due to the way hashing works, any change is equivalent to any other for the purpose of finding a collision.

So you can just alter the formatting to a different convention, alter spacing, add a comment, or reorder equivalent lines.

For example, you can insert a comment and keep altering it until you get a match, varying the line breaking or switching words with synonyms.


Doesn't it also have to be the same size in bytes?


Good old MD4. Nothing beats that.



