Red Squares – GitHub outages as contributions

735 points
18 hours ago
by cianmm

Comments


u_fucking_dork

Every time one of these vibe-coded meme sites gets posted, there are endless comments about how it’s not actually because of load: the GitHub team is shit, their tech stack is shit, Microsoft is shit, Azure is shit, etc.

Just compare the GitHub status page for public GitHub vs the enterprise cloud pages.

Enterprise has much better numbers, and I personally can’t remember the last time there was an outage that prevented me from doing work.

If the problems didn’t revolve around load, I’d expect to see the same uptime problems reflected on the enterprise offering.

16 hours ago

dijit

> the GitHub team is shit, their tech stack is shit

1) Criticising a failure to deliver service is not blaming any individual; it is blaming the system. You can criticise the system; that's permissible. Especially when they have more resources than many countries and some of the best tech talent in the world on staff.

2) Their tech stack is shit, and they've gone on record for years defending it, quite arrogantly in some cases, as if nobody can possibly know anything unless they've done github (even if you've done things which scale, or someone comes in with an even larger scale, the people on HN will happily say "but it's not github" which is valid but not intellectually curious or open).

Azure is terrible and it's being foisted on the team: even if they found some technical people to put at the top who are saying it'll be ok: it is a pretty cruel platform to use.

I've personally had a few conversations about their choice of relational database which were handled pretty defensively, and I think we're all somewhat cognisant of their frontend rewrite.

It's a waste of time to rewrite the UI and push AI tools when you can't even keep the site lit.

I have nothing against the engineers- I don't know why people keep chiming in as if we're punching down at "lowly engineers" when the reality is that it's a management failure of the highest order.

They're a billion-dollar company owned by a trillion-dollar one... it's very hard to "punch down" at this system: nobody is going after the engineers; we're punching at the fact that a system that is a de facto monopoly due to network effects is putting new features, and pleasing its owners, over the core offering. How is that an engineering failure? That's an active choice by management.

15 hours ago

linsomniac

>not intellectually curious or open

This checks out. I once was at a conference where they (Azure) had a giant booth. A fairly well known person in the community brings me over to talk to his manager who is working the booth. "We should hire him, he's really smart." Within a minute of talking to this manager he says "You're a Linux guy? We do Windows." and physically turns away from me, conversation over. You know, fair enough, was an easy way to find that it wasn't a good fit. But the lack of curiosity about "what do you bring to the table" was pretty stunning.

Be curious.

edit: Clarifying "they"

14 hours ago

vtbassmatt

Wait, is this Azure or GitHub who had the booth? If it was GitHub, I’m super confused and there must have been some serious missing context. I was at GitHub from 2020-2023 and am not aware of _any_ Windows usage in the service. The only meaningful Windows footprint was for client dev (`gh`, GitHub Desktop, etc.) and even there, Windows was the exception. Service side is all Linux; most engineers worked from a Mac.

If the context was an Azure booth, I’m still mildly surprised (they’ve long been invested in beyond-Windows) but not shocked.

(Edit: I forgot about the Actions stack. Some of that was on Windows. I was pretty far removed from that world and much closer to the classic Ruby monolith side.)

13 hours ago

linsomniac

Sorry about the ambiguity: I was replying to the Azure part; this was an Azure booth.

13 hours ago

netule

Oof, that’s rough, especially considering that GitHub used to be a Linux shop. I wonder what happened to all the Rails folks who built the OG platform.

14 hours ago

ornornor

They’re happy and vested probably :)

14 hours ago

holman

Happy and definitely gone, haha. Not my circus not my monkeys.

12 hours ago

someguyiguess

If they were curious they wouldn't "do Windows"

14 hours ago

etruong42

Your story (and the other posts commenting on lack of intellectual curiosity) fits into a larger model of the world that I subscribe to. Being labeled "well-known" or "smart" doesn't seem to require intellectual openness anymore. In fact, openness seems to be penalized. Being open means potentially exposing yourself to scenarios where you are not the smartest or most authoritative, and that reduces your authority, so you avoid those scenarios to preserve it. Even when you are not "the authority", being open can be a threatening signal to the authority, since you and your "openness" could be a vector that introduces ideas or scenarios that reduce theirs. So long as authority is solidified by this lack of openness, actually being open could limit your career potential.

Seeing this happen in real time is helping me understand how authoritarian regimes/institutions/movements rise to power.

11 hours ago

sam-cop-vimes

Wow - why anyone would build a serious SaaS platform in this day and age on Windows is beyond me.

14 hours ago

xp84

> It's a waste of time to rewrite the UI and push AI tools when you can't even keep the site lit.

This is a flawed argument. There are many designers and frontend engineers there who have zero role in improving site reliability. They might as well keep doing their jobs, instead of having the CSS wizards and art school grads team up and try to crack Azure.

14 hours ago

dijit

The implication here is that, after 8 years of having issues, management has not intentionally hired UX designers and programmers to work on AI features over people who could help build more reliability.

We've reframed this argument from the original "stop punching down" to "well, management's allocation of resources is fine because they have staff that would otherwise do nothing".

Thing is, I agree with the base of your argument, over the course of a quarter (or 3, or even 5..) the release of a feature does not mean that resources have been taken from the core.

However... it's been a really long time, and now we're hitting a critical point where the added load of AI, the rot that has been allowed to set in at the core, and the fact that they haven't been allocating staff to improving those pieces is hitting an inflection point.

I can't say for sure, as I don't work there, but I think if the trend is going lower for literally years: management could have changed course.

Those frontend designers didn't hire themselves, and normal turnover is something like 5% for a healthy org: there was a conscious effort there. And those feature designers on AI could definitely have done work on reliability.

14 hours ago

u_fucking_dork

The avalanche of the same comments on every meme-tier post about this is the opposite of curious.

Very little discussion of any merit happens on these posts. It’s mostly bandwagoning and repeating the same comments they read on the last iteration.

14 hours ago

dijit

I agree... https://news.ycombinator.com/item?id=48026924

Yet here we are.

I just don't feel comfortable with you defending the trillion dollar company as if we owe them something, or as if they're somehow the victim in all of this.

I can buy that there's more demand for the service, but:

A) They are the ones pushing the AI hype (microsoft especially but github too)

B) These issues existed before the AI hype anyway

and, obviously:

C) We're not saying they're bad engineers, we're saying it's become a bad service... THAT is everyone's problem, management's especially. We're not attacking the developers specifically, we're attacking the state of a core service that is failing.

14 hours ago

u_fucking_dork

Pointing out that attackers are targeting the wrong area is not defending anyone, my friend.

I’m just saying that scaling is very likely the issue; there's no reason not to believe their own statements on that. And yes, they are to blame for their own success here.

13 hours ago

fragmede

Assuming scaling is the issue (and I have no reason to believe they're lying about that), the obvious solution is to rate limit to below what the system can handle. Start saying you can't make a new account, you can't make new repos, you can't push. That's not something ICs are empowered to do (apparently) so it falls on management to empower them to be able to say that to customers.

Or so I imagine office politics to be there. I've never worked at Microsoft specifically, though I have worked in corporate America at other large companies.
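A sketch of what that kind of load shedding might look like (purely illustrative: the token-bucket numbers and `allow` API here are invented, not anything GitHub actually runs):

```python
import time

class TokenBucket:
    """Admit at most `rate` requests/sec sustained, with bursts up to
    `capacity`; everything beyond that is rejected before it reaches the
    backend (the "you can't push right now" case described above)."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller returns 429 / a friendly error page

bucket = TokenBucket(rate=100, capacity=10)
admitted = sum(bucket.allow() for _ in range(1000))  # a sudden burst
```

The point being: the mechanism is trivial; the organizational permission to return errors is the hard part.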

9 hours ago

bastardoperator

Wrap it up: this guy doesn't like the database (they use two), Azure is terrible despite being the cash cow for MSFT, and OP could easily build a more scalable SCM service with their pinky and half their brain because they know better than thousands of engineers. I don't know what's more comical: GH going down every day, or watching bros trying to flex.

12 hours ago

s_dev

It's common knowledge that official status pages don't actually reflect downtime, because of SLAs and because the status page could be weaponised against them. So comparing them is useless.

You rarely see "outages" even if that's what happens in reality; in marketing speak it's referred to as 'degraded performance', i.e. the cheque is in the post, your data is in the tubes on its way, it's just slow! A business-oriented lie.

Far more useful are the 'independent status pages' maintained by enthusiasts that are unaffiliated with whomever they are measuring.

15 hours ago

john_strinlai

>Far more useful are the 'independent status pages' maintained by enthusiasts that are unaffiliated with whomever they are measuring.

unless, like this one, they:

- treat "some copilot chat models are failing" and "teams notifications app down" as a major outage, the same as git operations or actions failing... those are very obviously nowhere near the same operational impact, and it's dishonest to group them as the same

- aggregate downtime so that there is more than 1 day of possible downtime in a 24-hour period. If 3 services are down for the same 1pm-2pm window, that is counted as 3 hours of downtime even though a developer was only impacted for 1 hour.

it would be cool to have an accurate status page. the only two options seem to be company-owned status pages (incentivized to under-state impact) and karma-hunter/meme status pages (incentivized to make as much red as possible for retweeting or whatever).
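For what it's worth, that double-counting is avoidable by merging overlapping incident windows before summing; a minimal sketch with made-up incident data:

```python
from datetime import datetime, timedelta

def total_downtime(intervals):
    """Merge overlapping (start, end) windows, then sum: three services
    down for the same hour count as one hour, not three."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # extend current window
        else:
            merged.append([start, end])
    return sum((end - start for start, end in merged), timedelta())

# Made-up example: three services down over the same 1pm-2pm window.
day = datetime(2025, 11, 20)
incidents = [
    (day.replace(hour=13), day.replace(hour=14)),             # git operations
    (day.replace(hour=13), day.replace(hour=14)),             # actions
    (day.replace(hour=13, minute=30), day.replace(hour=14)),  # copilot chat
]
developer_impact = total_downtime(incidents)  # 1 hour of impact, not 2.5
```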

13 hours ago

michaelcampbell

> I personally can’t remember the last time there was an outage that prevented me from doing work.

You and I are in different domains. It's not daily, but I can't remember the last time I (in my company) went a week WITHOUT having to work around some outage. Perhaps it's semantics, but I can "do work" through most of them; that work just isn't getting built or deployed in the time frame it would have been had the outages not occurred. So "affected" is at least weekly for me.

14 hours ago

k_roy

It's weekly for me. And that's just with PRs, not even builders. I can't imagine if I relied on their runners.

12 hours ago

senko

> If the problems didn’t revolve around load

GitHub is not a mom&pop operation.

I expect the $3T company to handle the load, or at least place a prominent "only for hobby use" warning on top.

15 hours ago

jorl17

Ironically, I am in this very moment incapable of creating threads in PRs within my org because of a GH bug. It's on their status page, too.

I can reply to an existing thread, just not create a new one.

How does something like this even slip by?...and why has it been like this for an hour?

EDIT: Oh, good, the issue should be solved in the next 3 or 4 hours. How lovely of them.

12 hours ago

sqircles

Odd, our Enterprise side has been jacking up for a few days now on PRs.

15 hours ago

somehnguy

I think 2 things may be combined here.

'GitHub Enterprise Server' is hosted on your own resources, not their cloud. It makes sense that it wouldn't have the same downtime as their cloud, but that's hardly relevant.

'GitHub Enterprise Cloud' is their offering hosted on their own resources and what I suspect most enterprise customers use. It's what I at $extremelylargecompany use. It follows the same uptime/downtime as their public non-enterprise offering.

15 hours ago

voxic11

No, if you use GitHub Enterprise Cloud with data residency then you are on separate infrastructure. Here is the status page for the US enterprise cloud data residency: https://us.githubstatus.com/posts/dashboard (which, funnily enough, is reporting an issue atm).

You can tell you are on GitHub Enterprise with data residency because you access GitHub at a ghe.com domain rather than github.com. It definitely has better uptime than the public cloud, but it is not without its own issues.

15 hours ago

everfrustrated

It's still on Azure though, so subject to Azure's underlying problems....

15 hours ago

tiagod

I don't find your conclusion that obvious. They could also be deploying changes to their regular infrastructure before updating enterprise, shielding it from some mistakes.

11 hours ago

weird-eye-issue

I don't think people generally care what the exact reason is; it's more about there being any downtime in the first place.

Also it's not a fair comparison because it's not necessarily the same code between the public and Enterprise...

14 hours ago

semiquaver

What do you mean by enterprise cloud? The default GitHub enterprise cloud plan is hosted on the same infra as “public github.” Do you have a link to what you mean?

16 hours ago

EE84M3i

It sounds like they're talking about "GitHub Enterprise Cloud with Data Residency." This is a separate product to the standard "GitHub Enterprise Cloud" (including Enterprise Managed Users) which runs on normal github.com infrastructure.

If you have a Data Residency tenant, you access it through an endpoint like octocorp.ghe.com

GitHub seems to try to use specific language here to avoid this confusion, because they are quite different products, but it seems to me they were named confusingly in the first place...

13 hours ago

taspeotis

Bottom of the status page links to the Enterprise regions?

> Check GitHub Enterprise Cloud status by region:

> - Australia: au.githubstatus.com

> - EU: eu.githubstatus.com

> - Japan: jp.githubstatus.com

> - US: us.githubstatus.com

15 hours ago

lenerdenator

Interested in this difference as well.

It'd make sense if there were a "you get what you pay for" attitude at MS re: public GitHub. It's not a good position for the free users, but what else are you gonna do? Stand up your own? Retrain yourself and your SDLC on a new platform?

They need a competitor.

15 hours ago

chrisweekly

GitLab

14 hours ago

whh

It's easy to forget GitHub isn't just one thing.

It's a collection of many things. Some of us use a few things, some of us use lots of things.

I, for one, am mostly happy with GitHub and have been for the last 18 years I've been using it.

That said, GitHub Actions and Container Reg have been a bit... unreliable. This isn't to say all of GitHub is unreliable... just that these relatively new additions in GitHub's nearly 20 year history are a bit s** when it comes to uptime. I hope they can figure it out. :)

15 hours ago

shevy-java

I don't see anything in your description that would rule out what others claim: that Microslop performing at below-average quality right now is the underlying cause. For instance, it may be that the problem is difficult to fix AND that Microslop is really, really incompetent at solving it. That is one possibility at the least.

> Just compare the GitHub status page for public GitHub vs the enterprise cloud pages.

I am not sure why that would be an explanation either - it could be that enterprise gets more time and money, whereas the non-paying free riders naturally get less. That would also make sense from a business point of view (to some extent; though if only enterprise used GitHub, it would quickly lose its status as the main source-code hosting website on the planet; others are already waiting to nibble away at GitHub, the more Microslop fails here).

14 hours ago

JackSlateur

We are using the enterprise offering. We are seeing the problems.

14 hours ago

collinmanderson

> Disruption with Gemini 2.5 Pro model

> Disruption with Grok Code Fast 1 in Copilot

> Incident with Copilot Grok Code Fast 1

> Claude Opus 4 is experiencing degraded performance

It doesn't seem fair to blame Github for this? There's nothing they can do about it?

16 hours ago

Aurornis

The pattern recently is to collect every individual service degradation and present them as all equally significant.

Erase the severity and then present them all as “GitHub outages” or reduce it to an uptime graph.

I’m not happy with GitHub’s recent major outages either, but there is an ugly side to the pile-on, where we get these vibecoded, attention-seeking websites and social media posts that collect upvotes, likes, karma, and attention by blurring the lines between small service degradations and total site outages to be more dramatic.

15 hours ago

Anon1096

Every time a GitHub outage is posted, I wonder more and more what % of Hacker News commenters have actually worked on a system with >10k active hosts and have seen what it takes to run them and how internal dashboards are presented. So much of the criticism just makes zero sense, especially the third-party uptime pages.

14 hours ago

yreg

To me this makes it uninteresting. Degraded performance of hyperscalers seems off-topic to bundle with e.g. github.com availability. I think the author just wanted the chart to be as red as possible.

14 hours ago

YetAnotherNick

I think it is fair to blame GitHub if they repackage other services. We run a much smaller service than GitHub and have all sorts of fallbacks to different providers and different models.

15 hours ago

collinmanderson

Github Copilot also lets you use other models when one has an outage.

15 hours ago

rnotaro

But GitHub also allows you to fall back to other models...

15 hours ago

lenerdenator

Depends on who's hosting the models.

15 hours ago

sd9

Weekends are the untapped frontier. Still room to scale.

17 hours ago

ahstilde

yup! When I did an analysis last month, GitHub was up 89.3% on weekdays and 96.5% on weekends. Incidents touched 62% of weekdays and 11% of weekends. Claude shows the same pattern: 92.5% weekday, 97.8% weekend. Tuesday through Thursday is the danger zone. Sunday is practically a different service.

https://www.aakash.io/tech-chase/github-and-claude-are-down-...
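A split like this is straightforward to reproduce from any incident log; a hypothetical sketch (made-up incidents, not the data behind the linked analysis):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def uptime_by_daytype(incidents, start, days):
    """Percent uptime on weekdays vs weekends, given (start, end) incident windows."""
    down = defaultdict(timedelta)
    total = defaultdict(timedelta)
    for i in range(days):
        day = start + timedelta(days=i)
        kind = "weekend" if day.weekday() >= 5 else "weekday"
        total[kind] += timedelta(days=1)
        for s, e in incidents:
            # Portion of this incident falling within this calendar day.
            overlap = min(e, day + timedelta(days=1)) - max(s, day)
            if overlap > timedelta():
                down[kind] += overlap
    return {k: 100 * (1 - down[k] / total[k]) for k in total}

# One made-up 3-hour incident on a Tuesday, over one week starting Mon 2025-11-17:
incidents = [(datetime(2025, 11, 18, 9), datetime(2025, 11, 18, 12))]
stats = uptime_by_daytype(incidents, datetime(2025, 11, 17), 7)
# 3h down out of 5 weekday-days -> 97.5% weekday uptime; weekends untouched -> 100%
```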

15 hours ago

jrumbut

I had an occasion recently where I was working a lot of late nights/early mornings with AI use. And I'd be getting these instant, beautiful responses, and then, as soon as the sun started coming in the windows, it would take longer and fail more, and by the time the clock struck 9 AM, every LLM had turned back into a pumpkin.

15 hours ago

user_7832

Which service(s) were you using, if you don't mind sharing?

I'm curious if most of the big players, including e.g. Google, do this thing of nerfing models, or if it's limited to more "smart" (read: black-box) models like ChatGPT.

12 hours ago

jannes

Are you saying US data centers idle in the night rather than serving European/Asian users?

13 hours ago

zemo

ideally european/asian users would hit european/asian servers, so potentially not surprising

13 hours ago

pojzon

Inference results for Copilot are also a lot better during weekends than workdays. It's my personal experience, so take it with a grain of salt, but I work on personal projects mostly on weekends, largely because of that mon-fri brain drain of Copilot.

10 hours ago

skor

change is the biggest cause then?

17 hours ago

sifex

Or usage

16 hours ago

atrettel

I imagine that it could be usage, but it also could be fewer people caring to report issues on the weekends too for that matter.

14 hours ago

kshahkshah

Wait until they go 996!

16 hours ago

jve

I have to question whether the graph is even accurate.

> Across 170 days with at least one incident · worst day Thu, Nov 20, 2025 (1.1 days)

1.1 days total? How is that possible? Hovering over that day doesn't reveal the math behind the scenes: there's just a single bullet point of 1.3 hours.

Also, Nov 19 has a bullet point for a 1.3-day outage, but the total is 8.1 hours.

17 hours ago

hxtk

The missing status page [1] treats it as downtime any time any component of the system is down, calculates overall uptime as the time that doesn't overlap with any individual category outage, and counts as overall downtime any time overlapping with at least one category outage, to avoid double-counting. They show 24h of minor outage on that date.

I'm guessing that this site is taking the downtime in a given day across all services and adding it up, which would mean the worst possible day has 10 days of downtime (a day of downtime for each major category).

1: https://mrshu.github.io/github-statuses/

17 hours ago

thenewnewguy

I see a bullet point for "1.0 days of 1.3 days", and when I mouse over the previous day (Wednesday 2025-11-19), I see "7.8 hours of 1.3 days".

I haven't actually checked any sources to confirm there really was downtime on those days, but if we assume those numbers are true, 7.8 hours + 1 day is about 1.3 days.

16 hours ago

figmert

Far fewer outages during the weekends. Perfect, wasn't gonna do any work then anyway.

17 hours ago

__natty__

The contrast between the official [0] and third-party [1] status pages is huge. How are their SLA terms legal if they are so different from real-world usage of their product? I really like GitHub and their services, but every time it's broken and their status page is green, something screams inside me.

[0] https://www.githubstatus.com/ [1] https://mrshu.github.io/github-statuses/

16 hours ago

xyzzy_plugh

Their terms of service are legal because their terms of service require YOU, the CUSTOMER, to track their availability against the agreed upon SLA and to pursue credits when they break their SLA.

At a recent gig we experienced many, many GitHub outages that were not tracked on their status page, and we kept a log (i.e. just search in slack). After our business people argued with our account executives at GitHub we got hundreds of dollars of credits.

Then the business people complained because hundreds of dollars of credits is not worth their time. And so GitHub continues to have terrible uptime and nothing is done about it.

15 hours ago

everfrustrated

This. We talked to our account reps and engineering folks at GitHub; they had no monitoring to track whether they had kept their end of the contracted SLA.

They expected us to log any faults, and as you say, the process wasn't worth it, even with massive outages, just for a few beans in credits.

GitHub has low availability simply because it doesn't cost them and they wear no legal or contractual damage from it.

If a competitor came to me and said, we will _pay_ you damages for the time your developers are offline not able to use our product to do their jobs, we would sign up immediately.

14 hours ago

duiker101

Funnily enough, yesterday, when things were breaking, a coworker linked to the mrshu one, and it showed all green while the official page showed issues with Actions.

15 hours ago

philipwhiuk

> How their terms of service for SLA are legal if they are so different from real world usage of their product?

Because the SLA likely doesn't consider some features of GitHub to be covered, whereas an outage or issue with a single model is treated as a problem on the third-party page.

16 hours ago

gen220

This idea has been around!

I made this one in January to help slice and dice uptime by incident category.

https://isgithubcooked.com

16 hours ago

culi

"Billing" is all the way in the green and "Pull Requests" all the way in the red

13 hours ago

Raed667

This is a much better project

12 hours ago

predkambrij

The answer is: yes

13 hours ago

keyle

This is one of the most creative ideas I've seen this year. Tasteful and clever. Bravo!

17 hours ago

elAhmo

Funny to see this closely match contribution graphs with effectively no downtime on weekends.

17 hours ago

SlightlyLeftPad

Wow, that’s a great visualization. How many 7s of uptime is that?

14 hours ago

devy

We need one for Anthropic's Claude: https://status.claude.com/

IMO, Claude is not faring any better than GitHub.

13 hours ago

mawax

Except Claude is not catching flak for it.

11 hours ago

dvh

I didn't know azure was this bad, completely changed my opinion on their cloud offerings

16 hours ago

vaylian

I know that there was a plan to move GitHub to Azure, but I don't know what the status is.

It could very well be that GitHub is not running on Azure yet.

16 hours ago

dijit

Really? A 10 minute interaction with the platform was enough to inform me that no serious engineer is in charge, and no serious engineer chooses this platform.

It is a platform for CFOs to avoid having another vendor relationship.

16 hours ago

chrisweekly

It's even worse than I'd imagined. See this peer comment w/ link to scathing analysis from an insider:

https://news.ycombinator.com/item?id=48035171

14 hours ago

debarshri

It would be funny if you hosted it on GitHub Pages.

17 hours ago

jpb0104

Set up my self-hosted Forgejo last night. Very pleased so far.

17 hours ago

hosteur

Yeah me too. I moved all my public projects to codeberg and my internal repos to self-hosted forgejo.

Hosting Forgejo is really easy as well. Being a single binary, it's simple to handle with almost zero maintenance.

17 hours ago

lnenad

The memes are really painful now. I feel for the team that's trying to survive underwater.

18 hours ago

renegade-otter

With management screaming down their necks:

YOU NEED TO USE MOAR AI!

17 hours ago

nojonestownpls

$250k can do a good job of easing meme-induced pain.

"survive underwater", what a joke. Yes, there will be good engineers there who will be sad to see it go this way, but they choose to be there and get paid better than 99% of humanity for it.

16 hours ago

dbgrman

LOL! Perfect. One change request: the more intense days should be brighter. Right now the more intense days are dark red, and to a color-blind person like me, that doesn't pop out.

14 hours ago

bnyhil31-afk

I agree. I am not color blind, but some more contrast would be helpful. Thanks for sharing!

13 hours ago

revolution88

For 30th of April, 2026 it shows it was down 1.0 days of 2.6 days (minor incident) :)

17 hours ago

danfritz

I wonder how well this correlates with Azure incidents. Especially for the US regions.

17 hours ago

ngruhn

I live in Europe. I've not noticed these constant outages. But I only use GitHub after work.

17 hours ago

progbits

Interesting. I'm in EU and see these constantly but usually in the afternoon so it bothers me less as I'm already wrapping up, but my US coworkers are getting hit much worse.

16 hours ago

whirlwin

I started experiencing them for the first time in Q1 this year, and they've continued up until now. It used to be pretty stable before that.

13 hours ago

p2detar

I'd also bet my money on Azure. Someone who allegedly worked there recently posted an article here on the numerous problems with Azure. Sadly I didn't bookmark it.

17 hours ago

hosteur

The article you are thinking of was likely written by Axel Rietschin who worked on Azure core compute team.

https://isolveproblems.substack.com/p/how-microsoft-vaporize...

HN thread: https://news.ycombinator.com/item?id=47616242

17 hours ago

chrisweekly

Wow. Yikes. I never liked Azure, but this level of dysfunction is just astonishing.

16 hours ago

Robdel12

The last time one of these was posted, it had pre-MS-acquisition uptime pegged at 100.0% and everyone ate it up.

This one is counting external LLM services as part of GitHub being “down”.

15 hours ago

bharxhav

Would be interesting to see if this correlated with their release cycles.

18 hours ago

rufo

At least as of when I left the company, GitHub was deployed fairly close to once every 60-90 minutes (the frequency of a deploy train / merge queue batch going out), 24 hours a day, at least during weekdays. There are a fair number of international engineers, and deploy trains get crowded during main US business hours, so while there were fewer PRs going out at odd hours US time, there were typically still some.

There aren’t dedicated releases as such for GitHub-hosted instances: everything you release needs to be gated behind a feature flag or other mechanism if it’s not going live immediately, and your code either needs to handle the database in both its pre- and post-migrated state, or you need to run the migration in advance of your code shipping out.

Fun fact: it used to be the case that GitHub was actually _less_ reliable if nobody deployed to it. There were various resource leaks that we didn’t see when people were deploying all day, since the app was getting restarted constantly. After GitHub went down during a holiday break, we had volunteers deploy it once a day over subsequent breaks, until the underlying issues were eventually fixed.
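A hypothetical sketch of that flag-plus-migration discipline (the flag store, flag name, and `title` column here are all invented for illustration, not GitHub's actual code):

```python
FLAGS = {"new_pr_threads": False}  # in prod: per-actor / percentage rollout

def is_enabled(flag, actor=None):
    # A real flag system would check rollout percentage, staff lists, etc.
    return FLAGS.get(flag, False)

def pr_thread_title(row):
    # A migration is adding a `title` column; until the backfill finishes,
    # old rows won't have it, so reads must handle both schema states.
    return row.get("title") or f"Thread #{row['id']}"

def render_pr_thread(row, actor):
    # New code ships dark behind the flag; flipping it back is the rollback.
    if is_enabled("new_pr_threads", actor):
        return f"[new UI] {pr_thread_title(row)}"
    return f"[classic] Thread #{row['id']}"
```

The design point is that a deploy is never a release: the code is live long before the feature is, and it tolerates both database states the whole time.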

16 hours ago

everfrustrated

Would love to know more about how they deployed their monolith, if you have anything to share or public links about it.

14 hours ago

hosteur

Well, outages seem to be distributed across all days except weekends. So people fucking around with stuff seems like a major factor.

17 hours ago

samlinnfer

Surely it just means more people working, resulting in more load, resulting in more outages?

17 hours ago

pwagland

Or even both. In any kind of continuous deployment, you'd expect outages at the point of deployment, or shortly thereafter as the unintended consequences ripple.

Then the load during the working days makes those ripples larger and into outages.

17 hours ago

embedding-shape

Most outages are caused by changes by humans ("actors"?). Very rarely is it "people just dig our stuff so much we can't keep up"; more often it's "we didn't think about this performance drawback when we built thing X, and now it's hurting us". And of course, there are more outages when you try to fix those issues without fully considering the scope and impact.

17 hours ago

qoez

Funny how it's less during weekends, as if it gets worse when they work on it and stays solid when they leave it alone.

14 hours ago

otagekki

Kind of normal, I think. At the companies I've worked for, most outages were caused by a change in production whose impact was not properly assessed.

14 hours ago

NiloCK

Also just load. I'm sure many GitHub services are closer to capacity/logjam at 11 AM on a Tuesday than at 3 AM on a Saturday.

14 hours ago

Romario77

it's because people work (and use GitHub) a lot more on weekdays. A lot of outages are load related.

12 hours ago

adampunk

Almost as though it's load and not code causing the issue.

14 hours ago

korrectional

I don't really understand why this is happening at this scale. It's not like they suddenly went broke and can't afford proper servers... can someone explain?

17 hours ago

fareesh

Agents are shipping code faster all over the world, in some cases 24 hours a day. Additionally, a significant number of non-developers are now developers, i.e. they are also shipping to GitHub regularly.

This is not limited to just pushing code: all the bells and whistles that GitHub added as features, under the assumption of predictable growth, are now exceeding the original plans.

I suspect a lot of their existing systems have to be re-architected for unanticipated scale, and it won't happen overnight for sure.

17 hours ago

prepend

They were sucking 5 years ago before agents existed. I don’t think this has anything to do with recent changes.

https://damrnelson.github.io/github-historical-uptime/

17 hours ago

Octoth0rpe

Pretty damning. It would also be interesting to see the number of commits overlaid. The graph tells a great story about the correlation with MS's takeover, but I wonder whether, at the same time that uptime went to shit, MS was shifting large numbers of enterprise contracts over to GitHub. That would be a more complete story IMO.

None of which excuses this. Can you imagine someone's reaction in 2017 if you told them that github would be below 90% uptime in 2026? It would be unimaginable.

17 hours ago

sarchertech

That’s nonsense. GitHub didn’t have 100% uptime before 2020. I remember downtime back then. And Microsoft didn’t make changes that fast. The only thing that changed is the accuracy of their status page.

Also go back and look at the unofficial status page from 3 years ago. It was regularly above 99% and has been dropping steadily since then; in the last 3 months it has dropped below 85%.

16 hours ago

prepend

This is coming from GitHub's status page. You need to reconcile your memory of downtime with GitHub's official record.

I’ve been using github pretty much daily since 2010 and I never had a push fail or a repo be unavailable until recently.

13 hours ago

sarchertech

I can tell you that graph is full of shit, because you can go back through the recorded incidents here:

https://www.githubstatus.com/history

and immediately find numerous incidents that would show up on the modern status board as an issue but are reported with 100.00000% uptime on that graph.

One example:

https://www.githubstatus.com/incidents/bzj1hc2cnfkc

2018-07-16 17:32:53 - We are investigating reports of elevated error rates.

2018-07-16 17:34:27 - We are investigating reports of service unavailability.

2018-07-16 18:04:38 - We've discovered the issue causing connectivity failures and are remediating.

2018-07-16 18:26:48 - We're monitoring the site as systems recover. Some delays are expected as we process backlogged data.

2018-07-16 18:37:26 - We're continuing to monitor and work on further remediation efforts as the site recovers.

2018-07-16 18:54:21 - The site is stable. We are continuing to monitor and work through follow-up remediation efforts.

And there are other incidents with connection failures or elevated error rates during July 2018, but the linked graph shows "average uptime of all components 100.00000%" during July 2018.

Another from October 2018 (which also shows 100.0000% uptime):

2018-10-21 23:09:19 - We are investigating reports of elevated error rates.

2018-10-21 23:13:31 - We are investigating reports of service unavailability.

2018-10-21 23:43:55 - We're investigating problems accessing GitHub.com.

2018-10-22 00:05:37 - We're failing over a data storage system in order to restore access to GitHub.com.

2018-10-22 00:23:54 - We're continuing to work on migrating a data storage system in order to restore access to GitHub.com.

2018-10-22 00:43:12 - We continue to work on migrating a data storage system in order to restore access to GitHub.com.

2018-10-22 01:02:49 - We continue to migrate a data storage system in order to restore full access to GitHub.com.

2018-10-22 01:22:22 - We continue to work to migrate a data storage system in order to restore access to GitHub.com.

2018-10-22 01:41:19 - We are continuing to work to migrate a data storage system in order to restore access to GitHub.com.

>I’ve been using github pretty much daily since 2010 and I never had a push fail or a repo be unavailable until recently.

Looking back at their downtime history, unless "recently" means within the last 3 years, it seems like you got really lucky.

12 hours ago

p-e-w

Whoa, if that is even remotely accurate then the talk about agents is a complete red herring.

17 hours ago

theolivenbaum

If I remember correctly, the status page was not accurate before the acquisition, so take the 100% pre-acquisition values with a big grain of salt.

17 hours ago

prepend

I remember the status page being quite accurate before the acquisition.

I don't like this whole casting of doubt on sources without providing superior, or even alternate, sources.

It makes it hard to discuss when one person presents a source and another vaguely says "I'm not sure that is accurate."

What am I supposed to do with that? Research more sources that may or may not align with how you feel?

13 hours ago

plufz

See the previous days' articles: agentic coding. Going from 1B annual commits to an estimated 14B or more in a single year.

17 hours ago

baq

They're on track to 30x volume YoY, by their own words.

17 hours ago

embedding-shape

The faster you move, the more you screw up; almost no company producing software has figured out how to move fast and not screw up. It's so hard that companies even used to boast about how much they didn't care about screwing up, as long as they moved fast.

Add in new "productivity" tools that help you move even faster, with even less regard for how much you screw up (even though the tools could be used to move at the same speed with fewer screw-ups), and an engineering culture that boils down to "Why not?", and you get platforms run by Microsoft that are unable to achieve two nines of reliability.

17 hours ago

philipwhiuk

Most of the outages are actually the unavailability of individual AI models, not the core service.

16 hours ago

prepend

I suspect it’s caused because Microsoft is using buggy Microsoft tech instead of the original stack.

They’re making political decisions based on what they sell vs what’s actually useful for their use case.

It’s kind of impossible to find out if this is true though.

17 hours ago

u_fucking_dork

That doesn't track, because GitHub Enterprise Cloud has great uptime. This is all load-based: vibe-coded AI slop shipped at record volume by users who will never convert to paid. The real question is what they're doing about that.

16 hours ago

prepend

How is github enterprise cloud uptime tracked?

13 hours ago

dicksent

ai

17 hours ago

traderj0e

Wonder if there's the same thing but in green on someone's commit history

13 hours ago

letmetweakit

If you think that Github sucks, make something else and try to do it better, or don't use it.

15 hours ago

pards

This design is perfect irony. I love it.

17 hours ago

culi

I wonder how GitLab, BitBucket, Sourceforge, Codeberg, etc compare

13 hours ago

Sir_Gooner

Don't forget muh SourceHut!!! I already removed the hentai when you hit an error page lmaooo

13 hours ago

whirlwin

We just invested a lot in migrating 300+ pipelines from Azure DevOps to GitHub Actions. What a bummer timing-wise. Has anyone got an alternative to GitHub Actions?

14 hours ago

traverseda

You can run your own GitHub Actions compatible-ish runner: https://github.com/nektos/act

Personally my favorite is probably drone-ci.

I'd suggest not buying too hard into any one of these CI systems and just writing shell scripts. Shell scripts are portable, and you can use whatever to trigger them.
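That approach can be sketched as a single POSIX entry point that any runner (GitHub Actions, Drone, act, or a plain cron job) just executes; the lint/test/build commands here are placeholders for your real ones:

```shell
#!/bin/sh
# ci.sh — a portable CI pipeline: one script, triggered by whatever CI
# system you happen to be on. `set -eu` aborts on the first failure.
set -eu

# run_step <name> <command...>: print a banner, then run the command
run_step() {
    echo "==> $1"
    shift
    "$@"
}

run_step "lint"  sh -c 'echo lint ok'    # e.g. shellcheck ./*.sh
run_step "test"  sh -c 'echo tests ok'   # e.g. make test
run_step "build" sh -c 'echo build ok'   # e.g. make build

echo "CI pipeline finished"
CI_RESULT=finished
```

A workflow file then shrinks to a single `run: ./ci.sh` step, which makes migrating between CI providers nearly free.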

14 hours ago

vqtska

Forgejo has an open-source reimplementation of GitHub Actions.

14 hours ago

scotty79

Fogejo is self-hosted Github with actions.

14 hours ago

scotty79

*Forgejo

12 hours ago

tealpod

If anyone wants an aggregated status page for GitHub, cloud, and AI services:

https://status-page.org/

16 hours ago

freakynit

You could let people organize/filter them into groups based on the stack they use, and provide email/Discord/Slack notifications if any of the services in their groups change status.
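For anyone wanting to script this themselves: githubstatus.com is a Statuspage instance, which serves a machine-readable summary at `/api/v2/status.json` with an `indicator` field (`none`, `minor`, `major`, `critical`). A minimal sketch of checking it, using a canned sample payload so it runs offline (in real use you'd fetch the payload with `curl -s https://www.githubstatus.com/api/v2/status.json`):

```shell
#!/bin/sh
# Sketch: decide whether GitHub is degraded from a Statuspage v2
# status.json payload. Sample payload stands in for the curl response.
set -eu

payload='{"status":{"indicator":"minor","description":"Partial outage"}}'

# crude JSON field extraction with sed; fine for this fixed schema,
# reach for jq if you need anything fancier
indicator=$(printf '%s' "$payload" | sed -n 's/.*"indicator":"\([^"]*\)".*/\1/p')

if [ "$indicator" = "none" ]; then
    echo "GitHub: all systems operational"
else
    echo "GitHub: degraded (indicator=$indicator)"
fi
```

A cron job running this and firing a webhook on anything other than `none` is most of a notification service already.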

16 hours ago

ktm5j

What does 'none' mean? Wouldn't all of the blank boxes (no recorded outage) qualify as 'none'? I'm confused.

14 hours ago

londons_explore

I mostly only use GitHub for hobby weekend stuff... And that's when it's down least, yay!

13 hours ago

dtran24

If there was a GitHub replacement, what would people want from it? Just reliability? Anything else?

12 hours ago

giancarlostoro

If we had this for Anthropic (and I am a happy Claude Max subscriber, mind you), I wonder how bad it would look. Probably worse.

15 hours ago

FaithMB

I like this more than I expected. The intensity gradient is a nice touch too.

16 hours ago

NimraNoor

Azure was always bad but this is next level.

13 hours ago

faangguyindia

All these companies brag about being hyperscalers, yet they cannot scale GitHub.

Similarly, I see Google releasing advancement after advancement in LLMs, yet the Antigravity sub is full of people complaining all the time.

17 hours ago

Gigacore

It is funny how weekends are almost always up!

17 hours ago

cyanydeez

Two readings: is the weekend sparseness load-based or GitHub-employee-based?

Or just a combination of both.

18 hours ago

globular-toast

Didn't they blame "AI" for the increased load? I'm not sure why AI usage would be more during the week than the weekend, but it could be.

It does look like Friday outages were a bit rarer, which could be due to having a "no deployments on Friday" rule.

18 hours ago

mirekrusin

From the chart it seems they should have policy to deploy on weekends only.

17 hours ago

Shoetp

Yes

18 hours ago

remify

They should deploy on Sundays!

13 hours ago

anant-singhal

Interesting, no outages on weekends

16 hours ago

predkambrij

Feature request: best streak

15 hours ago

nautilus12

Guess where AI Coding entered the picture

16 hours ago

airstrike

can you correlate this to data on # of commits, actions, etc?

17 hours ago

mring33621

looks like Klingon

OMG it's a secret message!

14 hours ago

shevy-java

Will Microslop ever fix GitHub again?

14 hours ago

bzreaper

Same for me; at my company I can't do PR reviews right now.

12 hours ago

WesSouza

Well done.

17 hours ago

rvz

Another reminder that a self-hosted git repository would have more uptime than GitHub, and that centralizing everything on GitHub was a very bad idea. [0]

[0] https://news.ycombinator.com/item?id=22867803
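For reference, a bare repository on any machine you can SSH into is already a complete git remote; a minimal sketch (host and path names are placeholders, and the default path here is a temp directory so the snippet runs anywhere):

```shell
#!/bin/sh
# Sketch: self-hosting a git remote is just a bare repository
# (no working tree) reachable over SSH.
set -eu

# on the server: create the bare repository
# (in real use this would be something like /srv/git/myproject.git)
REPO_DIR="${REPO_DIR:-$(mktemp -d)/myproject.git}"
git init --bare "$REPO_DIR"

# on each client: point a remote at it and push
#   git remote add origin user@myserver:/srv/git/myproject.git
#   git push -u origin main
```

No daemon or web UI is required; `git push`/`git fetch` run over plain SSH.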

17 hours ago

philprx

"Good job, Microsoft, amazing uptime."

17 hours ago

Fokamul

Clearly their team needs more LLM usage.

17 hours ago

ramon156

Please tell me this makes sense

This website has none of the overused AI-generated animations and... I quite enjoy it. The original website[1] has fade-in animations, big round cards, shadows, all the jazz you can think of.

This site is very readable, very honest and sober. I don't need to sift through buzzwords to figure out tiny details.

Thank you, OP!

1: https://mrshu.github.io/github-statuses/

17 hours ago

1970-01-01

I've been pushing to GitHub all morning with 0.0000 issues. So you can complain all you want, but I have no reason to do so. YMMV.

13 hours ago