cwkcw 2 days ago [-]
The window to audit is 2026-04-23 16:05 to 20:43 UTC, about 4h 38m, and 'merged incorrectly' is unpacked in exactly no detail on the status page. A practical check is to compare the tree of each merge commit in that window against the tree of the rebased-and-tested commit the queue ran CI on. GitHub Enterprise Cloud with Data Residency is still affected as of the latest status update, so DR customers shouldn't clear new queue merges yet. It would be kinda useful if GitHub published the specific failure mode, because content drift, broken parent linkage, and dropped changes all need different audit scripts.
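A minimal sketch of that tree comparison, assuming you can recover the SHA the queue's CI actually ran against from the check-run details (function names here are mine, not GitHub's):

```python
import subprocess

def tree_of(rev: str, repo: str = ".") -> str:
    """Resolve a revision to the tree object it snapshots."""
    out = subprocess.run(
        ["git", "-C", repo, "rev-parse", f"{rev}^{{tree}}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def merge_matches_ci(merge_sha: str, ci_sha: str, repo: str = ".") -> bool:
    """True if the merge commit contains exactly the tree the queue tested.

    A rebased-and-tested commit and the resulting merge commit have
    different commit SHAs by construction, but their trees should be
    identical; a tree mismatch means content drifted during the merge.
    """
    return tree_of(merge_sha, repo) == tree_of(ci_sha, repo)
```

To enumerate the candidates, something like `git log --merges --since=2026-04-23T16:05:00Z --until=2026-04-23T20:43:00Z --format=%H` gives the merge commits to feed into the check.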
matthewbauer 2 days ago [-]
Might want to check your git logs if you are using the Merge Queue feature. This afternoon, we found that Merge Queue was silently reverting changesets. GitHub has acknowledged it, but it could be a very hard problem to debug if you aren't looking for it.
EdwardDiego 2 days ago [-]
Same here, ffs GH.
MarkMarine 1 day ago [-]
4 people spent hours putting our repo back together at my company after this. GH has been unreliable and now they are breaking the core tenant of what I expect from this service.
ragall 1 day ago [-]
> core tenant
tenet not tenant
MarkMarine 12 hours ago [-]
You must be fun at parties
mazeez 1 day ago [-]
lol
acid__ 2 days ago [-]
Tip: identify affected PRs by looking for discrepancies between the PR metadata (files/lines changed) reported by the GitHub API and the associated commits in the incident window.
This was a pain to clean up!
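One way to sketch that discrepancy check, assuming the stats GitHub recorded come from the `additions`, `deletions`, and `changed_files` fields of the `GET /repos/{owner}/{repo}/pulls/{number}` endpoint, and the ground truth comes from `git diff --numstat` on the merge commit that actually landed:

```python
def numstat_totals(numstat: str) -> dict:
    """Fold `git diff --numstat` output into the same shape as the PR stats."""
    adds = dels = files = 0
    for line in numstat.splitlines():
        if not line.strip():
            continue
        a, d, _path = line.split("\t", 2)
        files += 1
        adds += 0 if a == "-" else int(a)  # "-" marks binary files
        dels += 0 if d == "-" else int(d)
    return {"additions": adds, "deletions": dels, "changed_files": files}

def drifted_fields(pr_meta: dict, merge_diff: dict) -> list:
    """Return the stat fields where the API's record of the PR disagrees
    with what the merge commit actually changed."""
    keys = ("additions", "deletions", "changed_files")
    return [k for k in keys if pr_meta.get(k) != merge_diff.get(k)]
```

For a suspect merge commit, `git diff --numstat MERGE^1 MERGE` produces the input for `numstat_totals`; any non-empty result from `drifted_fields` flags a PR worth auditing by hand.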
NelsonMinar 21 hours ago [-]
How did a bug like this get past their testing?
philpem 21 hours ago [-]
Given the push to AI/Copilot from MS, I'd be impressed if they still did any testing.
estimator7292 20 hours ago [-]
They're at 88% uptime this year. I don't think they have testing.
tracker1 19 hours ago [-]
Microslop®, unmatched quality.
tracker1 19 hours ago [-]
I experienced a weird bug where the GitHub CI environment was checking out code that was throwing an error on build... the version tag definitely didn't have the code the error was coming from. I messed with the CI workflow YAML and finally cat'd the file in question... the offending code was definitely there (two lines repeated in the file) that were not in the repository.
I nuked the release tag and re-created it, and it started working... it was just a really weird bug/experience.
EdwardDiego 2 days ago [-]
This has caused a morning of fun for our team. How do you break one of the most fundamental bits of your system? Time to look at alternatives...
ramon156 1 day ago [-]
Self-hosting has worked wonders. I thought I needed the social part of GH (I still kind of do; I like seeing what people are working on or liking), but overall it's the same experience.
sidewndr46 22 hours ago [-]
GitHub has been doing this for at least 5 years now. Merging a pull request merges whatever the latest commit on that branch is in the specific backend that handles the merge; it does not necessarily merge what was just pushed to that branch.
orpheansodality 2 days ago [-]
For other folks currently mid-incident trying to resolve the chaos this caused: the first commit we've found with issues in our repo is from ~10:30am Pacific this morning.
classified 1 day ago [-]
How long will users put up with this before they finally leave? Microslop is inventing new failure modes on a weekly basis.