
I've been using Git for a long time in a "lite" mode: branch, pull-commit-push, merge, mostly with SourceTree. That was enough for development, so I'm an amateur :)

But now I've got a repository that is watched by ArgoCD and receives pushes from different GitLab pipelines. 99.9% of pushes go to their own folders. To tell the truth, I've never seen an actual conflict. But as the number of pipelines grows, this comes up more and more often:

    error: failed to push some refs to '...'
    Updates were rejected because the tip of your current branch is behind its
    remote counterpart. Integrate the remote changes (e.g. 'git pull ...')
    before pushing again.

So the question is: why doesn't Git even try to merge the changes into the remote branch, throwing such an error only when there are true conflicts? Because all you have to do is pull the changes (which never need any resolving) and then repeat the push.
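
Spelled out, what I end up doing by hand (or scripting in each pipeline) is roughly this retry loop; origin and main here are placeholders for the real remote and branch:

    # retry the push a few times, replaying our commit on top of
    # whatever another pipeline pushed in the meantime
    for attempt in 1 2 3 4 5; do
        git push origin main && break    # done if the push went through
        git pull --rebase origin main    # rebase our work onto the updated remote
    done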

Git is almost 20 years old and is considered the most advanced VCS, so I was surprised by this behaviour. Or maybe I haven't searched thoroughly enough, and something is out there, besides the brutal --force, to deal with such situations?

  • Merging is a local operation. You want to know what you are pushing before you push (the result of the merge could differ from what you expected). And if the merge happened remotely, you'd have to fetch again before being able to continue working.
    – knittl
  • It's not a matter of conflicts. It's that in order to perform a true merge, you need a working tree so that Git can enact the merge. But the repo on the server is a bare repo; it has no working tree, so only a fast-forward is possible (see the sketch after these comments). If you really want to perform merges on the server side, that is what pull requests / merge requests are for, but they are not a Git feature; they are a feature of GitHub, Bitbucket, and (yes) GitLab.
    – matt
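
A minimal illustration of that last point: a bare repository is only the refs and objects, with no checked-out files for a merge to operate on (server.git is a made-up name):

    git init --bare server.git
    ls server.git
    # HEAD  config  description  hooks  info  objects  refs  (exact listing varies by version)
    # no working tree here, so the server has nowhere to enact a real merge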

1 Answer


I can think of a few reasons:

  1. Git servers are designed to be "dumb".
  2. You could break the build.
  3. Git is not an archive tool.

Dumb servers

By keeping the basic Git push/pull protocol simple, there's no need for a dedicated Git server. Git can speak many different transports, including plain local files or an ordinary HTTP server with minimal configuration. See the Protocols section of Pro Git for more.
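
For instance, the same repository can be cloned over several transports without any special server software (the paths and URLs below are made up):

    git clone /srv/git/project.git                   # plain local path
    git clone file:///srv/git/project.git            # file:// protocol
    git clone https://example.com/git/project.git    # HTTP(S)
    git clone git@example.com:project.git            # SSH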

You can break the build and not know it

Allowing git push to merge for you would encourage sharing untested and broken code.

It's important to ensure the main branch works; that way, everyone is working from a stable base. If broken code is merged into main and shared, potentially everyone else on the project is now building on broken code.

A merge results in a new commit and code changes. If the merge is done by a git push on the server, you don't know what you just shared. There was no conflict, but does it work? Who knows? You might have just broken the main branch for everybody.

All work should be tested before it is shared. This is why work is done in branches: they can be safely pushed and run through CI before being merged. Ideally you merge main into your branch, test the result, and only then merge back into main.
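
One common shape of that flow, as a sketch (main and my-feature are placeholder branch names):

    git switch my-feature
    git merge main          # bring main's latest changes into the branch
    # push, let CI test the merged result, review it...
    git switch main
    git merge my-feature    # if main hasn't moved, this is a clean fast-forward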

Git is not a tool for <insert task here>

But now I've got a repository that is watched by ArgoCD and receives pushes from different GitLab pipelines. 99.9% of pushes go to their own folders

You're using Git as an archive tool. It's not designed for that.

Git is a version control system. It is not a build system, a release manager, a dependency manager, a backup system, or an archival system. People try to use it for those anyway, and Git will "work" up to a point, but not well. Use a dedicated tool instead.
