imiric 4 days ago

I have yet to try Jujutsu or GitButler, but Git has a built-in way to make conflict resolution a bit easier with `rerere`. To be honest, I don't find doing this work manually a major chore, so I don't enable it, but it's there if you need it.
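
For reference, enabling it globally is a one-liner:

    git config --global rerere.enabled true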

I would like to comment on this:

> I have been asked countless times if it's better to merge or to rebase and while I never want to stir up a hornet's nest, I have always advocated merging over rebasing.

I've been involved in this discussion many times as well, and the correct answer is that one isn't inherently "better", and you shouldn't _always_ prefer one over the other. There are situations when a merge is preferable (e.g. to keep a branch in history), and others when a rebase is (e.g. to, well, _base_ some work on a specific commit). The choice of when to use either will depend on the author's or team's preference in each case, which is why it's given as an option in most web-based PR/MR workflows. Squashing, likewise, is something you don't always want to do.

I partly blame this confusion on Git's UI, and on the baseless fears spread about rebasing for years, which many developers mistakenly absorbed. The number of times I've heard that force-pushing after a rebase is "dangerous" is too high. No wonder people find it scary...

  • lmm 4 days ago

    The fears are legitimate. Both rebase and force-push can lose data in some circumstances, which merge and push cannot. Yes, there are strategies which, if followed perfectly, allow one to avoid losing data when doing rebase and/or force-push. But those strategies are not simple to describe, especially to newcomers, and in practice people make mistakes; all else being equal, an inherently safe workflow is better.

    • imiric 3 days ago

      > The fears are legitimate.

      They're really not.

      First of all, no data is really lost with Git. Commits can be recovered from the reflog if they haven't been garbage collected, and there are ways of recovering anything on GitHub as well[1], even if it technically shouldn't be the case.

      But this aside, data loss is circumstantial, like you say. I've heard the idea that force-pushing in general is harmful, when it's really not if you're working solo or on an isolated branch. Rebasing and force-pushing are just different tools in the toolbox.

      In general, my objection is to the practice of describing any software as "dangerous". It creates an air of intimidation that prevents people from using the tools to their full extent, which when spread can popularize wrong practices among new users as well. This is why you see the person in the article claiming that they've always been a "merger", presenting a false dilemma between merging and rebasing, and describing their solution as "fearless". This line of thinking is also commonly associated with the command line and Linux itself, and is just harmful.

      Instead, users should be educated on what the software does, which does require having comprehensive UIs and documentation, and designing the software with sane defaults, fail-safes, and ways to undo any action. Git doesn't do a great job at all of these, but overall it's not so bad either. What really hurts users is spreading the wrong kind of ideas, though.

      [1]: https://neodyme.io/en/blog/github_secrets/

      • lmm 2 days ago

        > First of all, no data is really lost with Git. Commits can be recovered from the reflog if they haven't been garbage collected

        So no data is really lost except when it is.

        > I've heard the idea that force-pushing in general is harmful, when it's really not if you're working solo or on an isolated branch. Rebasing and force-pushing are just different tools in the toolbox.

        Like I said, there are specific circumstances where you can do it safely. But that's very different from being safe in general.

        > This is why you see the person in the article claiming that they've always been a "merger", having a false dilemma between merging and rebasing, and describing their solution as "fearless". This line of thinking is also commonly associated with the command line and Linux itself, and is just harmful.

        > Instead, users should be educated on what the software does, which does require having comprehensive UIs and documentation, and designing the software with sane defaults, fail-safes, and ways to undo any action.

        Users can't and won't learn the full details of everything they use, especially "secondary" tools that they use to support their main workflow - and why should they, unless the benefits are large enough to justify that cost? Using a tool in a mode that is inherently safe rather than a mode that can cause data loss in some circumstances is a perfectly reasonable choice. "Fearless" and "dangerous" are perfectly reasonable ways to characterise this distinction.

        • imiric a day ago

          > So no data is really lost except when it is.

          The period before inaccessible objects are pruned automatically is quite relaxed by default (between 2 weeks and 90 days, depending on the object), and it is configurable. So the scenario we're discussing here where data is lost by a force-push is just not a practical concern.

          > Like I said, there are specific circumstances where you can do it safely. But that's very different from being safe in general.

          No, force-push is safe _in general_. It is a bit inconvenient to recover the inaccessible commits if someone makes a mistake, but this doesn't make rebasing or force-pushing unsafe.

          > Users can't and won't learn the full details of everything they use, especially "secondary" tools that they use to support their main workflow

          Huh? How is Git a "secondary" tool for a programmer? It is an essential part of the programmer's toolkit as much as an editor is, and understanding and being proficient with both is equally important. Users in this case should be expected to learn the tools they will be relying on for a large part of their career. Compared to the complexity of programming environments, stacks and languages we deal with on a daily basis, this tooling is fairly simple to grasp.

          I'm not saying that Git doesn't have issues that could be improved—it certainly does—but in the grand scheme of things it is a simple, reliable and well-engineered piece of software.

  • frizlab 4 days ago

    Force push should be “with lease” by default. Then force pushing is not dangerous at all.

    • keybored 4 days ago

      It’s still dangerous if you have fetched recently. You also might want `--force-if-includes`.

      (And then I don’t think there are any more “force” flags left to worry about…!)

      https://stackoverflow.com/questions/65837109/when-should-i-u...

      • hughesjj 4 days ago

        I really wish there was a global setting to turn this on by default in config, but afaik Linus doesn't want to do that for backwards-compatibility reasons, so the advice I've heard is to just configure it as an alias

        Personally, I'm lazy and just always have it in recent substring match history in the shell
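
        Something like this in your gitconfig does the trick (a sketch; the alias name is arbitrary):

            [alias]
                pushf = push --force-with-lease --force-if-includes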

  • sangnoir 3 days ago

    > I've been involved in this discussion many times as well, and the correct answer is that one isn't inherently "better", and you shouldn't _always_ prefer one over the other.

    That depends entirely on your organization's (or project's) preferred branching strategy and what is accepted as a unit of change. Some places accept entire features as a single commit (via squash-merging dev/feature branches - very useful when you have to maintain multiple release branches and can easily cherry-pick features & bug fixes): here, squash merges have the advantage. Other places care a lot about the individual commits and preserving commit history from dev/feature branches - here merges can hide some of that granularity, and rebases are a better fit. The latter is common for projects with one evergreen release branch without any concern about back-porting features or fixes to other, currently supported release branches; supporting versions N, N-1, and N-2 is common in enterprise software, with each having its own release branch or tag.

epolanski 4 days ago

Serious question: how many times has the pain of going through rebases rather than merges made a difference or, even better, really paid off in engineering terms?

To me it's virtually zero in seven years but it might be due to the teams and projects I've been involved with.

  • hinkley 4 days ago

    I spend a lot of time cleaning up after people who insist there are no problems in their code despite all evidence to the contrary.

    That work is easier when they haven’t squashed their changes. Because I can see how they got there and if it was a mistake or a misunderstanding.

    People who prefer squash are an automatic red flag because they usually don’t like asking Why, which is a very important skill on products that are shipping and making money.

    • wakawaka28 4 days ago

      >That work is easier when they haven’t squashed their changes. Because I can see how they got there and if it was a mistake or a misunderstanding.

      That sounds like a problem with the people you work with, not with squashing in general.

      >People who prefer squash are an automatic red flag because they usually don’t like asking Why, which is a very important skill on products that are shipping and making money.

      This is a wild generalization. Thoughtful people squash when they think they have a set of changes that go together. If someone is jamming together stuff that does not go together then that is indeed a problem, but not a problem with squash. Nobody really wants to see the 50 edits someone made to come up with one final change.

      • hinkley 4 days ago

        > That sounds like a problem with the people you work with

        No, it says something about me, not them. When people can't figure out problems on their own they come to me for help. Have been since I was a sophomore in college, which was a long ass time ago. Possibly before you were born (8 month account). So I have a pretty good idea where 'rock bottom' is for every class of tool I've ever used, and how often people get close to them.

        I also get called in to look at bugs that other people refuse to believe exist, and bug forensics is where you really, really see the difference between a good commit history and a shitty one. If you aren't using 'git annotate' weekly or daily then you are not qualified to comment on how merges should or shouldn't be done. "I don't use it" means you don't have an opinion. "... so you shouldn't use it" is telling your coworkers you don't give a shit.

        > This is a wild generalization

        I think you're confusing red flag with deal breaker.

        > Thoughtful people squash when they think they have a set of changes that go together.

        True but useless distinction. Define 'go together'. Everyone has a different definition of this and you will never reach consensus there. Most of the people I'm thinking of here think everything for a single story 'goes together'. This is how you get an initial commit for a new module with 600+ lines of code and eight bugs you have to solve the hard way because all of the bugs showed up in a single commit.

        Squashing before a PR fails Knuth's aphorism about code being meant to be read by humans and only incidentally by machines.

        If you don't like that it took you three tries to figure out an off by one error in your code, that's fine. But you don't have to destroy all other evidence of your other processes in order to cover up your brainfart.

        • Quekid5 3 days ago

          I tend to think a major contributory factor to the indifference (at best) about commit hygiene is that people vastly underestimate the power of "show commit history for this range of lines" in modern IDEs/GUIs for Git.

          It's incredibly powerful for (just from decent commit messages) figuring out why some little detail in the code is the way it is.
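
          For reference, the CLI equivalent:

              git log -L 10,25:path/to/file.c   # history of lines 10-25 of that file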

          I'm thankful every day that I get to mandate Gerrit (so rebased-patches-on-top-of-main) workflow with every individual commit going through CI.

          ETA: Incidentally, I'm usually also someone who often gets called in to figure out obscure-yet-important bugs... and the commit log is instrumental to that.

        • wakawaka28 3 days ago

          >No, it says something about me, not them.

          OK I totally see that now.

          Speaking of red flags, your whole comment is a red flag to me, just like mentioning that common workflows are "red flags" lol.

          >If you aren't using 'git annotate' weekly or daily then you are not qualified to comment on how merges should or shouldn't be done. "I don't use it" means you don't have an opinion. "... so you shouldn't use it" is telling your coworkers you don't give a shit.

          More narcissistic garbage takes. There are many ways to work and if someone doesn't do it your favorite way then that doesn't mean they are reckless, incompetent, or whatever. If you told this to anyone I work with or have ever worked with in real life in the last 20 years, you'd get laughed at. I might know a lunatic who would argue with you in real life but even he might not be motivated enough to take the bait. He is a very junior-minded person as well, whose experience does not match his interests.

          >Squashing before a PR fails Knuth's aphorism about code being meant to be read by humans and only incidentally by machines.

          This is too reductive. You have to use common sense when squashing stuff. If you put stuff together that does not go together, then it gets harder to figure out what a changeset is supposed to do.

          >If you don't like that it took you three tries to figure out an off by one error in your code, that's fine. But you don't have to destroy all other evidence of your other processes in order to cover up your brainfart.

          There need be no evidence of "processes" in the end. I can see why you might want that if you're helping your coworkers figure something out. But once it's figured out then those changes should be reduced to modular changesets that each do a particular thing. Anything else will introduce pointless noise into the codebase. If you feel that some particular state of the code represents something significant, you can make a commit for that. But certainly 80% of the commits most people make are purely noise.

    • epolanski 4 days ago

      Issues should be caught by spending time automating tests that ensure the correct functional and non-functional requirements are met, not by surgically maintaining a graph of codebase snapshots.

      History is preserved in the branch alongside the PRs if needed, and it rarely is.

      I'm not saying that rebasing is useless (I default to it), I'm debating if the effort is worth it in engineering terms, which I generally don't see because the benefits seem to be small compared to the cost.

      • hughesjj 4 days ago

        This argument seems weird to me because even without rerere I found myself doing a lot more work managing merge conflicts with git merge instead of git rebase

        As a git merge fan, are there any tips or tricks you suggest beyond the stock git experience when doing git merge to minimize the amount of merge conflicts you get?

        I found it was especially bad when doing a git merge on a refactor, but I admit it could just be that I abandoned git merge earlier in my career before switching to rebase and never properly learned it

        My most common use cases are feature or bug branches with a lifespan between less than one day and up to one month (although I absolutely have some features on pause for even over a year, in which case interactive git rebase and partially squashing WIP commits is my current method of updating)

        All this is for repos ranging from literally just me, to a few changes a month between 3 devs, to 2-5 devs doing multiple commits per day, to some open source projects with commits landing every few minutes from multiple devs if it's like a release day

        My current biggest issue with rebase is verified commits with GitHub and a bit of guilt for rewriting committed feedback from other authors on my PRs

        The only time I really use git merge is when I want to see how my work interplays with more than one feature branch at once, or if the feature branch I want to integrate hasn't been rebased in a bit and conflicts occur

        • hinkley 4 days ago

          Most of the time I've encountered two engineers pointing fingers about who is responsible for a bug, it turns out that someone's bad merge transferred the git annotation from one engineer to another.

          The first time this happened (that I caught) I had two engineers who were sniping at each other. One was older "Max" and not great at data structure algorithms. The other "Stan" was a decent coder but had a bad attitude and was awful with git. Somehow he thought he could raise his status by getting Max kicked off the team.

          I come back from lunch one day and Stan is bitching about a bug in Max's new code that's causing issues. To keep these two from fighting I've been reviewing all of Max's PRs and the line of code Stan is complaining about I know for a fact I checked, and was relieved to see Max got it right the first time. But sure enough, the repo says Max fucked it up.

          Twenty minutes of git archaeology later and sure enough, Stan messed up a merge and resolved the conflict wrong, introducing the phantom bug. So I showed him the step by step of my diagnosis and then we had another little talk about using rebase.

      • seanwilson 3 days ago

        > I'm not saying that rebasing is useless (I default to it), I'm debating if the effort is worth it in engineering terms, which I generally don't see because the benefits seem to be small compared to the cost.

        For what's it worth, I agree for most projects I've been on. I've rarely e.g. used deep Git history forensics to figure out a regression, or to figure out why some code is the way it is. Usually I'm just tracking down the fairly recent squashed commit of a pull request that introduced the problem and it's obvious enough where to look to fix it.

        I like the idea of clean, super fine-grained commits with good summaries but I never see people mention that this takes extra time to do, because putting a pull request together is usually a messy iterative process, and not a predictable sequence of clean independent commits.

        Real work is more like "Add sketch of code ... Iterate some more ... Fix bug ... Iterate some more ... Upgrade library ... Really fix the bug ... Clean up ... Merge from main and get working ... Refactor ... Add comments ... Fix PR requests". Rebasing as you go or going back at the end to break that into chunks that will each independently make sense and pass tests costs a lot of time? Maybe I'm missing something?

        The time vs benefit trade-off is probably different with huge teams and huge projects, but for solo projects, small teams, and medium projects the trade-offs are different.

        Feels similar to test suite discussions. People don't mention there's a cost vs benefit trade-off to how fine grained your tests should be for different scenarios as it depends on a lot of factors you need to balance.

      • not_kurt_godel 3 days ago

        The graph of snapshots is for forensics when someone breaks the tests and keeps obliviously trucking along

  • PhilipRoman 4 days ago

    It's a big deal when maintaining a fork. It's tempting to merge upstream commits back into your branch, but you should always rebase and keep a clean patch set in such situation.
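
    A sketch of that workflow, with hypothetical remote and branch names:

        git fetch upstream
        git rebase upstream/main my-patches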

    • Arelius 4 days ago

      I tend to agree. While I am normally super pro-merging, I agree that for maintaining a fork, the patch-set-style rebase workflow is preferable.

    • Ferret7446 3 days ago

      I don't understand how you are using rebase with a fork. Are you rebasing all of the changes in your fork on top of upstream? That list of changes will grow larger over time; after a year, you'd be rebasing hundreds of commits whenever you want to merge upstream.

      • PhilipRoman 3 days ago

        A friendly fork generally has a natural limit on how many commits there will be on top. Personally I've done this with ~200 feature commits and it's not a big problem (as long as you use incremental rebase of course).

        Of course if you're planning a hard fork, merges may be unavoidable. But I've seen too many Franken-linux-kernels which were forked from 4.x with periodic merges whose correctness is impossible to verify. Inconsistencies eventually build up with each merge.

  • jjmarr 4 days ago

    I was on a team where we wrote software tests for computer hardware. Regressions were frequent. The underlying hardware wasn't very reliable because it was all very early-stage and hadn't been tested yet (as it was our job to write the tests in the first place).

    The linear commit history created by rebasing made it trivial to bisect and determine what introduced the problem.

    Huge difference to my productivity.
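
    With linear history, a bisect is about as simple as it gets (a sketch; the good ref and test script are hypothetical):

        git bisect start
        git bisect bad HEAD
        git bisect good <last-known-good-sha>
        git bisect run ./run_tests.sh
        git bisect reset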

    • fallingsquirrel 4 days ago

      git bisect will traverse both parents of a merge commit no problem. Did you try?

      In your situation I'd prefer merges because: if commit X used to have parent A, and you move it over to parent B, it gets a new commit hash and a version of the code that has never been tested. If that commit is broken: was it broken when the author wrote it, or did it only break when you rebased? You threw away your only means of finding out when you rewrote history.

      • kazinator 4 days ago

        What you need is a "git rebase" that records a second parent for each commit pointing to the original commit that is being rebased.

        People who prefer git rebase workflow will hate the complicated history they see in "git log", but otherwise it will be the same.

        Alternatively, the right way to use "git merge" is to merge every successive commit of a branch one by one.

        The problem with "git merge" is that it collapses multiple commits into one giant patch bomb.

        If one of the commits caused a problem, you don't have that commit isolated on the relevant stream (the trunk) where you are actually debugging the problem.

        You know that the merge introduced a problem, and it seems that it was a particular commit there. But you don't have that commit by itself in the stream where you are working.

        It can easily be that a commit which worked fine on a branch only becomes a problem in its merged form on the trunk, due to some way a conflict was resolved or whatever other coincidence or situation. Then, all you know is that the giant merge bomb caused a problem, but when you switch to the branch, the problem does not reproduce and thus cannot be traced to a commit.

        If that commit is individually brought into the trunk, the breakage associated with it will be correctly attributed to it.

        In both cases, the source material is the same: the original version of the commit doesn't exhibit the problem on its original branch.

        It is pretty important to merge the individual changes one by one, so that you are changing fewer things in one commit.
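
        A sketch of that one-by-one merging, with hypothetical branch names:

            # merge each commit unique to 'feature' into the current branch, oldest first
            for c in $(git rev-list --reverse main..feature); do
                git merge --no-ff "$c"
            done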

        People like rebase because it does that one by one thing. Git rebase breaks the relationship by not recording the extra parents, but since they have the reworked version of each change on the stream they care about, they don't care about that. Plus they like the tidy linear history.

      • jjmarr 3 days ago

        I didn't have to use git bisect. I looked at commit history directly and guessed what caused the regression.

        As we all test different parts of the microprocessor and the tagging system reflected those parts, I could rule stuff out by looking at git log --oneline. The commit messages were also required to be high quality and I could get a gut feeling about what stuff a commit would touch without looking at the code.

        > if commit X used to have parent A, and you move it over to parent B, it gets a new commit hash and a version of the code that has never been tested. If that commit is broken: was it broken when the author wrote it, or did it only break when you rebase? You threw away your only means of finding out when you rewrote history.

        This happened semi-frequently. We were using Gerrit and had every version of a rebased commit visible together. When code that fails automated testing got submitted, it immediately caused CI failures for everyone. It took an hour for someone unfamiliar with the code to look at the timestamp the failures began, find the commit that caused the failures, and revert it.

        I don't see how this would be meaningfully different in a merge scenario, because the merge commit also wouldn't be tested.

        • fallingsquirrel 3 days ago

          > the merge commit also wouldn't be tested

          Why wouldn't it? This is the "not rocket science" rule of software engineering: every commit must pass the tests. There's no special exception for merge commits.

          https://graydon2.dreamwidth.org/1597.html

          • jjmarr 2 days ago

            The CI tests could take hours because of compilation time + waiting for hardware. Trivial rebases without conflicts got exempt from additional testing, because by the time the test finished, someone else would've submitted to main. Merge commits likely wouldn't be tested in an alternative workflow either.

            Not a case of the company being too cheap to spend the money, because there literally aren't enough engineering prototypes in the world to satisfy our CI needs for testing on them.

      • lmz 4 days ago

        From their perspective what's the difference? It would be better if after rebasing all resulting commits were tested automatically, but even if they were not - the offending commit is still wrong "in context".

        • lmm 4 days ago

          Rebase can result in a long chain of commits that don't compile, which makes it impossible (or at least harder) to use automated bisect, or even semi-manual approaches like running a test case manually on each bisect step.

    • Arelius 4 days ago

      Did you ever try bisection without the linear history to compare? Or was this just conjecture?

      • kazinator 4 days ago

        I have. It was a complete fucking shitshow. In a kernel tree, doing a git bisect with the messy merge history will take you on a wild goose chase, where you land in some branch developed by an entirely different team somewhere, working on totally different hardware from you, with a different kernel version, which you have no hope of building and booting.

  • globular-toast 4 days ago

    I think you're doing it wrong. The point of rebasing is to do it often, like every day at the very least for an active integration branch. This hopefully means you'll resolve any conflicts as soon as they happen, while it's still fresh in everyone's heads. If you rebase once right at the end then, sure, it's no different to merging.

    • epolanski 4 days ago

      I'm not doing it wrong, I'm questioning whether it's worth the effort.

      I have spent hours rebasing on very active branches when a merge would've taken minutes (as many colleagues do) just because "it's a best practice" but I've never got to fully appreciate the benefits.

      • wakawaka28 4 days ago

        You don't have to keep rebasing your entire history on long-running branches. That will generate a ton of conflicts. But before you wrap it up it would be preferable for you to reduce your changes into one or a handful of stand-alone commits. If you're going to rebase often onto very active branches then you need to reduce the commits as much as possible to minimize the work involved. Ideally you could get other people to coordinate their work too.

        The main reason people want to rebase instead of merging is to keep the commit history from looking like a bowl of spaghetti. A commit history like that is hard to navigate, and more likely to contain a lot of frivolous edits.

        • globular-toast 3 days ago

          You should do the squashing/fixing regularly too, not once right at the end. Again, rebasing gives you the chance to fix these things as soon as it happens, rather than deal with one massive merge conflict at the end of a long running feature branch.

        • hughesjj 4 days ago

          Also if you use rerere you don't really spend much time doing repetitive rebases after the first rebase

      • globular-toast 3 days ago

        So here's the thing: if you are putting more work into rebasing but not getting anything more out of it, you are doing it wrong.

        You should always do the easiest thing that gets you what you want, otherwise you're just doing pointless work. If you and your colleagues are happy merging and you find that easier, that's what you should be doing.

        Rebasing supports a totally different workflow. With a rebase I can submit a well-formed set of changes for review that are conflict free. You can't do that with merging. With merging you submit a bunch of crap "history" that nobody will ever look at and the project maintainer has to deal with the conflicts.

        Merge commits look like this (newest to oldest):

            * Final tweaks
            * Merge master branch
            * Implement bar
            * Fix foo
            * Merge master branch
            * Shit I did on Wednesday before lunch
            * Implement foo
            * End of day
        
        Totally impenetrable mess that nobody will ever look at.

        Rebased commits look like this:

            * Add customise option to UI
            * Add use case baz
            * Extend model to support bar
            * Refactor model foo
        
        These can be reviewed in isolation and when approved they merge without conflict.

        If you want to do this but none of your colleagues are on board and you don't have the swing to make them, then I'm sorry. But you are wasting your time rebasing in that case. :(

        • seanwilson 3 days ago

          Doesn't the rebased commits example take longer to put together than the merge commits example (where you'd likely want to just squash the merge commits)?

          The merge version looks like the way code is actually written in practice to me, so doesn't the rebased version take extra time to create after you're done adding code? E.g. the "Add use case baz" rebase commit isn't likely to be a simple squashing of commits from the merge version, but cherry-picking specific lines from multiple commits.

          I fully agree the rebased version is nicer, but I'm not seeing anyone talk about how much extra time it takes. Or you're doing it in a way that doesn't take much time somehow?

          • globular-toast 2 days ago

            Yeah, it takes longer, but with the right tools and practice it doesn't take much longer. The main thing is rebasing often, reordering commits, using commit --amend, fixup commits, git autofixup and rebase --autosquash. That and being determined to deliver rebased commits from the start.

            I'm not saying that every single feature must be split into multiple commits. If it's one change then just keep amending that one change as you go. But quite often you'll identify standalone changes as you go, like refactors, little unrelated bugfixes you find as you go etc. When this happens I'll commit that unrelated change separately and rebase to reorder it so it comes first, then continue amending my feature commit.
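
            A sketch of that flow (<sha> being whichever commit the fix belongs to):

                git add -p                        # stage just the standalone change
                git commit --fixup <sha>
                git rebase -i --autosquash main   # fold fixups into their target commits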

            The paradigm shift for a lot of people is not to think of git as tracking history. Nobody cares about that. It's useless to you and doubly useless for everyone else. Think instead about tracking changes. I don't need to know every key press, every dead end explored or what you did on Tuesday afternoon. I want to know what changes are being applied to the project.

            • seanwilson 2 days ago

              > Yeah, it takes longer, but with the right tools and practice it doesn't take much longer.

              Maybe it depends on the kind of feature as well? I do the rebase with clean separate commits approach when it's easy, where reordering and squashing commits doesn't create tricky conflicts.

              But for more exploratory stuff like UI/UX changes where I'm moving blocks of the UI around, and making changes in multiple files to add plumbing to get data where it needs to go, and changing it after demoing and getting feedback, it can get really messy with lots of dead-ends you backtrack out of later.

              For that kind of work, it's probably easier to start again in a new branch, figure out some logical way to group the changes, then copy in code snippets from the other branch rather than rebasing? I can't see how this would be worth the effort in most cases though. The more granular commits helps figuring out where a bug got introduced, but then I don't think this happens often and when it does it's usually pretty obvious which lines of code caused the bug even in a large commit e.g. if dates are now being formatted weirdly, look for changes to code that does stuff with dates.

      • throwaway918299 4 days ago

        merge feature branches when reintegrating main, squash merge onto the main branch when you’re finished - best of both worlds imo

        I never need to rebase, or unfuck a botched rebase or go reflog diving - and the commit history is linear where it matters.

        • lmm 4 days ago

          Makes the history less useful for bisection - you'll always land on a squash merge rather than the specific commit that caused the problem.

          • throwaway918299 2 days ago

            In practice that's never been a problem for me. Work is delivered in functional units and segmented "sections" of code are basically useless on their own for the purpose of debugging.

    • lmm 4 days ago

      > The point of rebasing is to do it often, like every day at the very least for an active integration branch. This hopefully means you'll resolve any conflicts as soon as they happen, while it's still fresh in everyone's heads.

      You can do that with merge just as easily though - just merge master into your branch.

      • globular-toast 3 days ago

        Yes but at that point you might as well squash down to one commit and rebase because now you're tracking "history" which is useless, rather than tracking changes/versions.

        • lmm 2 days ago

          WTF? If you keep the original history and regularly merge master into feature branches you're tracking people's actual edits as they worked on their feature, which is the most useful thing to have when bisecting, but you're also staying close to mainline during development. It's the best of both worlds.

          • globular-toast 2 days ago

            Yeah it's sure useful when I bisect and find that commit Joe Coder made at the end of the day called "End of day. Tests not passing".

            Bisect only makes sense when commits are rebased into changes. The moment you bring in a regression you've fucked your ability to effectively bisect.

            • lmm 2 days ago

              > Yeah it's sure useful when I bisect and find that commit Joe Coder made at the end of the day called "End of day. Tests not passing".

              So your automated bisect tells you to look at two whole commits instead of one. Big deal.

              If you keep history as-is, most commits will compile and pass tests because coders tend to compile and run tests as part of their work cycle (and the occasional isolated non-compiling or non-test-passing commit isn't a problem for a bisect). If you rebase you will end up with long chains of commits that don't compile unless you have some additional mechanism to prevent that.

  • keybored 4 days ago

    It isn’t always harder than doing a merge.

    > really paid off in engineering terms?

    When you want your changes accepted by upstream and they either

    1. Won’t accept a merge-filled history

    2. Indirectly won’t because they accept changes by email (can’t send merges by email; see the sketch below)
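
    For the email case, the flow is something like this (a sketch; assumes git-send-email is set up):

        git format-patch origin/master..HEAD   # one patch file per commit; merges can't be represented
        git send-email *.patch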

    • kazinator 4 days ago

      The main reason they should not accept merges is that they don't care about you and your repo. In order for an upstream repo to accept your work as a git merge, they would have to fetch all your objects so that they have enough of your repo in order to represent your original branch, where the parent pointers of the merge are aimed. Nobody who is anywhere near sane wants that kind of cruft.

  • ajkjk 4 days ago

    I frickin love rebase-only linear history. It just feels so clean. It's so much easier to understand what happened if you're just scanning the commit history.

    I also don't know of any pain to it, though. It's just simple and easy and clean.

  • hansonkd 4 days ago

    It always seemed like a needless complexity when you can just merge and get the same result in terms of the state of the files, just with a different commit history.

    The only time it might make sense if you are following some arbitrary strict style guidelines for commits. Some people care more about the commit history than others, not that either way is necessarily better.

    • nh2 4 days ago

      git bisect.

      I have a friend, he thought rebasing for linear history was not worth the effort. I told him to do it, because I once had to find a regression over thousands of commits in a merge-heavy code base and it took days. He was not convinced.

      Then he had to find a regression. It took over a month.

      With git bisect's binary search, it would have taken half a day.

      My friend now rebases.

      • samatman 4 days ago

        I don't understand why this would make a difference.

        Any given snapshot has a linear history, so it should be as bisectable as the rebased equivalent. What am I missing here?

        • nh2 4 days ago

          > Any given snapshot has a linear history

          Not sure what you mean. The key thing of merges is that they... merge... two histories.

          A git history graph tool shows that clearly then.

          A bisection has to choose whether to go left or right.

        • Arelius 4 days ago

          Agreed… I think people also like to assume bisect doesn't work with merges.

      • lmm 4 days ago

        bisect works much better with merge than it does with rebase (with rebase it's easy to end up with a long chain of commits that don't compile, so your automated bisect script doesn't work).

        • nh2 4 days ago

          Of course all your commits need to compile and pass tests.

          It didn't even occur to me that anybody would permit that in their CI.

          If you check in commits that don't compile then you can't use automatic bisection effectively (it still does work if that happens rarely, thanks to automatic `git bisect skip`).

          Of course every non-building commit will make bisecting a merge history even more of a pain; not sure why you think it would be better with merges than with linear history.
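
          For what it's worth, with `git bisect run` a build/test script can exit with code 125 to skip a commit automatically (commands hypothetical):

              git bisect run sh -c 'make || exit 125; ./run_tests.sh'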

          • lmm 2 days ago

            > Of course all your commits need to compile and pass tests.

            > It didn't even occur to me that anybody would permit that in their CI.

            How do you enforce it? Are you saying you make your CI compile and run tests for every single commit on a feature branch before allowing it to be merged? That takes a lot of time if you're doing the kind of small commits that make bisection most effective.

            > Of course every non-building commit will make bisecting a merge history a pain even more, not sure why you think it to be better with merges than with linear history.

            Because with rebase you're much more likely to get a long chain of commits that don't compile. E.g. imagine developer A adds a new feature and starts off by writing some code that calls some function, and meanwhile developer B renames that function in master. Then a while later developer A rebases onto master, fixes their compilation errors, and merges their feature branch in. All of the commits A did in between now don't compile, so you will "git bisect skip" all of them, and if your bisect lands somewhere in that chain of commits you have to do another round of bisection manually or something.

            With merge, all of A's commits still compile and you can bisect through to the specific commit that caused the problem. (Maybe one or two isolated commits don't compile because they were never tested on CI, sure, but that's ok - git bisect skip handles them, it's only a problem if you have a long chain of non-compiling commits)

            • kreetx 2 days ago

              Not OP, but:

              > How do you enforce it?

              I don't think you can. You just rely on the developer to only create compiling commits (if possible). Also, code review might catch these.

              > Because with rebase you're much more likely to get a long chain of commits that don't compile

              After a rebase you try to compile the code and it will fail due to the renamed function. Then you fix the function name and move this change into the commit that started using this function (perhaps employing a fixup commit). Now, all following commits compile because they have the fixed call site, and previous commits compile as well because the call wasn't there yet.

              • lmm 2 days ago

                > I don't think you can. You just rely on the the developer to only create compiling commits (if possible).

                Right. But there's a natural incentive to create compiling commits as you work (because when you're working on something you at least occasionally compile your code and run tests). There's much less incentive to go back and check after a rebase.

                > Also, code review might catch these.

                Pretty unlikely - usually people just review the overall diff, not the individual commits, and even if they do, the commits make sense visually whether they compile or not.

                > Then you fix the function name and move this change into the commit that started using this function (perhaps employing a fixup commit).

                If you are disciplined enough to notice and do this right, sure. But it's extra work that eats into your discipline budget.

    • wakawaka28 4 days ago

      A linear commit history is objectively better. But whether it's worth the effort to maintain is up to you to decide. If your branches don't stay unmerged for long, then you're probably better off rebasing instead of generating tons of little branches for no reason.

      • kazinator 4 days ago

        The linear part of the history is objectively better than if that same change were collapsed into a single patch bomb.

        Git rebase does not destroy history, it just does not link it together. That might be a bad thing. But the individual commits all making an appearance on the destination branch is a good thing.

        For those who favor merge, what is bad is that there are no second parent pointers tracking where those changes came from.

        This could be obtained by reimplementing git rebase as a sequence of merges. Git rebase is a sequence of zero or more cherry-picks, not merges. If git rebase merged each commit instead of cherry-picking, each commit would have a parent pointing to its original.

        In a git bisect, there would be no need to chase those second parents; you would be looking for which merged commit introduced the breakage, and not care about its original, except in some rare situations where you want to analyze more deeply what went wrong (and then that parent pointer would be a bit handy).

      • lmm 4 days ago

        > A linear commit history is objectively better.

        Disagree. You can always flatten a commit graph into a linear history if you want, but you can't restore the original commit graph from a linear history. So preserving the original history is objectively better.

        • wakawaka28 3 days ago

          The original history is usually a bunch of garbage. If you need more detail you can rebase as many commits as you like. More detailed history is sometimes a distinct problem. Imagine trying to bisect some spaghetti bowl of commits with merges to find the source of a recurring issue. It would be relatively nonsensical compared to a clean linear history.

          Clean history can exist with merges, but I think merging all over the place obviously encourages messy behaviors.

          • lmm 2 days ago

            > If you need more detail you can rebase as many commits as you like.

            You can't rebase to get back to the original commits, not without knowing what they are.

            > Imagine trying to bisect some spaghetti bowl of commits with merges to find the source of a recurring issue.

            I do it all the time (well, less so now that I work with a better team where those issues are pretty rare), it's easy, that's the whole point of the git bisect command.

            > It would be relatively nonsensical compared to a clean linear history.

            Rebased history is much harder to bisect because you often get long chain of commits that don't compile or are otherwise broken.

      • hansonkd 4 days ago

        Objectively better to what?

        Git usage is only one part of a wider engineering org. That's like saying "bugless code is objectively better" without considering time to delivery, engineering resources, etc.

        • wakawaka28 3 days ago

          Better to work with of course. If you have valid reasons to have branches, such as a need to ship multiple versions, I don't have a problem with that. But day-to-day work is better done via rebasing rather than making a ton of public branches that get merged. If you know what you're doing then rebasing is just as easy as merging. As others have said, git rerere helps a lot too.

  • kazinator 4 days ago

    This branch discussion doesn't speak to the present topic (much). Merging involves rebasing, possibly requiring conflict resolution.

    Merge versus rebase is just git speak for two different ways of tracking things when diverging streams recombine.

    In a nutshell, merge creates a single new commit which brings all the cumulative changes from a source branch onto the target. The commit has two parents: the prior commit on the destination branch and the tip commit of the source branch.

    Rebase creates a new commit out of every individual commit on the branch, bringing them individually into the target branch, much as if they were being merged. However, they have only one parent: their target branch lineage.

    (Rebasing is way better from a conflict point of view because the changes are individually brought in. A merge creates a "patch bomb" on the destination branch in which several totally unrelated conflicts might have been resolved, pertaining to different commits in the original.)
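
    In command terms (a sketch, hypothetical branch names):

        git switch main && git merge feature            # one merge commit with two parents
        # versus:
        git switch feature && git rebase main           # each commit replayed individually
        git switch main && git merge --ff-only feature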

  • einsteinx2 2 days ago

    Maybe it’s just the codebases I work on, but I’ve never found rebasing to be particularly painful. No more painful than merges anyway. Both have the potential for conflicts and those conflicts are resolved in a similar way. I’m genuinely curious what pain you’re referring to here that would make it not worth using rebase vs merge commits.

  • Arelius 4 days ago

    I think no. It’s the same compulsion that leads to bike shedding in code review.

    And honestly, there are far too many engineers that use rebase without understanding the underlying system, which is dangerous in git. (Aside, I wish git would adopt hg's phases)

dimal 4 days ago

Off topic but why do product blogs always exist on a separate domain, with no link to the product website? I never heard of GitButler. Reading this is making me curious about it. But the home link in the top left goes to the blog home. If I want to actually see the product, I have to manually edit the url, on my iPad. Why does everyone do this? Seems obvious to have a link to the main site. /rant

  • nguyenkien 4 days ago

    The home website link is in the menu button.

mplanchard 4 days ago

Configuring rerere makes a huge difference in overall rebase experience. The following is a standard addition to my gitconfig

    [rerere]
    enabled = true
    autoupdate = true
  • Etheryte 4 days ago

    Somehow, despite using Git for who knows how many years, I haven't seen rerere yet. I read the manpage for it, but the usage isn't exactly clear about any possible pitfalls. Are there any gotchas? Where and how do you usually use it?

    • mplanchard 4 days ago

      I generally just enable the above config, which uses it automatically when doing rebases. So, if I’m rebasing a bunch of commits, and I’ve already resolved a set of conflicts once, it just uses that resolution again.

      If you want to have it forget a recorded resolution, for example because you messed something up, there are commands for that, but I use them very seldom.

      I’ve never run into any particular pitfalls to speak of. I mostly just turn it on and forget it’s there. You can still always go back in time with the reflog if needed.

    • wakawaka28 4 days ago

      You don't have to do anything besides turning it on. If a conflict has been resolved before, somehow it remembers that and applies the fix. The only pitfall I've seen is if you fix the conflict erroneously, it will remember that too.

      • PaulDavisThe1st 4 days ago

        Yes, my general rule of thumb as a rerere user and devotee is to at the very least do a test build before git-add'ing your resolved files. You won't catch logical errors, but you will catch syntactical issues that came up during conflict resolution. It helps, a bit.

        • mplanchard 4 days ago

          If you accidentally record an incorrect resolution, you can also run `git rerere clear` to clear the cache, or `git rerere forget <path>` to forget resolutions just for a particular file.

          • PaulDavisThe1st 3 days ago

            Yeah, I learned about this last weekend, after mistakenly trying to rebase a repo where we generally merge.

            I don't think that rerere offers fine-grained enough control over "forgetting". What I needed (until I realized my mistake) was a way to clear any memory of a resolution for the current conflict in a given path.

eviks 3 days ago

At least one example of a conflicted hunk, and what exactly gets saved, would be more useful than a full-screen screenshot that just shows a single "conflicted" button in the UI

johnea 4 days ago

With syslog, everything in git is "fearless"

  • Yasuraka 4 days ago

    Did you mean reflog?

    Either way, even simpler imho than any log that one has to comb through after the fact is to create a named backup:

      branch=$(git branch --show-current) && git switch -c backup-${branch} && git switch -
    
    Carry on as planned, and if you bork it all, switch to the backup branch (which retains the original commits and all), delete the borked one, and have another go:

      git switch backup-somebranch && git branch -D somebranch && git branch -m somebranch
    • keybored 4 days ago

      You don’t have to comb through the reflog for the pre-rebase branch state. Use `@{1}` from the reflog of the branch (not `HEAD`).[1]

      Note: First I thought that `ORIG_HEAD` was the thing. But that won’t work if you did `git reset` during the rebase.

      (`ORIG_HEAD` is probably “original head”, not “origin head” (like the remote) that I first thought…)

      [1] You just have to comb through documentation!

    • Pfiffer 4 days ago

      I do this as well but `git reset --hard backup-somebranch` and try again if I mess it up.

    • 0xCAP 4 days ago

      I have a custom bash function named "backup_branch" that does exactly that, along with "restore_backup" and "delete_backups". It's made my life 10x simpler.

  • sunshowers 4 days ago

    Assuming this is the reflog, this is not true. Because the working copy doesn't get snapshotted, it is relatively easy to lose uncommitted data. I've spent much of my professional career working on source control and even I've lost uncommitted data a few times.

    Dropbox doesn't have a notion of uncommitted data. Why should source control?

    • hinkley 4 days ago

      I have to use reflog, rebase -i and frequent commits to cover the spectrum of edge cases I deal with weekly. No two of them accomplish the entire job.

      • sunshowers 4 days ago

        You should try out Jujutsu :)

        • hinkley 4 days ago

          It's on my list.

  • globular-toast 4 days ago

    As long as you commit everything, yes. The reflog is the safety rope of git. Everyone who isn't confident with the reflog should go and learn it right now.

    Pick your most important repo. Make sure everything is committed. Do something stupid like `git reset --hard HEAD~100`. Look how fucked your work is. Do `git reset --hard HEAD@{1}`. Look at how nothing was lost.

    • everybodyknows 4 days ago

      > reflog should go and learn it

      Among its other virtues, reflog makes safe the highly empowering 'git-commit --amend'.

  • IshKebab 4 days ago

    Long rebases are not. That's the whole point.

User23 4 days ago

As an aside you can avoid the vast majority of unnecessary rebase tedium with proper use of

  git rebase --onto
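
For example (a sketch with hypothetical branch names):

  # replay only the commits after 'old-base' on 'topic' onto 'main'
  git rebase --onto main old-base topic
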
chx 4 days ago

I must admit I usually immediately disregard any fancy new git tools; they come and go, and often don't work right and create a gigantic mess.

But... have you seen who wrote this article?

Scott Chacon. If there's anyone in this world whose article would make me try a new git tool, it's him. He wrote the Pro Git book, Git Internals. Oh and cofounded GitHub. This is not the argument-from-authority fallacy. This is "hey! this guy knows git like very very few others, it's worth listening to what he has to say".

  • eviks 3 days ago

    This is precisely the argument-from-authority fallacy, though a bit more grounded than the strong prejudice about fancy new tools creating a gigantic mess

  • lmm 4 days ago

    More importantly IMO he wrote the github-flow post, the best dose of sanity in git workflows I've seen.

    • chx 4 days ago

      and yet these fuckers downvoted my post without commenting where I am wrong

  • obeavs 3 days ago

    Gitbutler is ridiculously cool and well considered. Definitely worth checking out.