GitHub's Historic Uptime

494 points
3 days ago
by todsacerdoti

Comments


fishtoaster

Is the pre-2018 data actually accurate? There seem to have been a number of outages before then: https://hn.algolia.com/?dateEnd=1545696000&dateRange=custom&...

Maybe that's just the date when they started tracking uptime using this system?

3 days ago

OlivOnTech

Data comes from the official status page. It may be more a marketing/communication page than an observability page (especially before the sale)

3 days ago

pikzel

The status page was often down when GH was down, back in the day.

3 days ago

tibbon

I could imagine a leadership or viewpoint change in how they reported when/what was down.

I've seen it so many times: Company A complains that their vendors aren't accurate enough about uptime, and that Company A notices their vendors are down before the vendors do, but then they themselves have a very laggy or inaccurate status page.

We want our vendors to be accurate to the minute on these, but many CTOs don't care to admit when they too have problems.

3 days ago

xiaoyu2006

Aha, we need a status page for the status page.

3 days ago

BrenBarn

Sup dawg I heard you like status pages.

3 days ago

w0m

i assume they simply fixed the status page in 2018.. lol.

3 days ago

mholt

Even better IMO is this status page: https://mrshu.github.io/github-statuses/

"The Missing GitHub Status Page" with overall aggregate percentages. Currently at 90.84% over the last 90 days. It was at 90.00% a couple days ago.

3 days ago

montroser

It has been pretty rough. Their own numbers report just a single `9` for Actions in Feb 2026 with 98% uptime. But that said -- I don't get the 90% number.

Anecdotally, it seems believable that Actions barfed 1 in 50 times (2%) in Feb. Which is not very nice, but it wasn't 1 in 10 (10%).

3 days ago

verdverm

It looks like the aggregate stats are more of a Venn diagram than an average: if 1 of N services is down, the aggregate is considered down. I don't think this is an accurate way to calculate it; it should be weighted, or should in some way show partial outages. This belief is derived from the Google SRE book, in particular chapters 3 (Embracing Risk) and 4 (Service Level Objectives)

https://sre.google/sre-book/embracing-risk/

https://sre.google/sre-book/service-level-objectives/
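To make the difference concrete, here's a toy comparison with made-up per-service uptimes (not GitHub's real numbers), assuming independent failures:

```python
# Toy comparison: "any service down => platform down" vs. a plain average.
# The per-service uptimes below are made up for illustration.

uptimes = {
    "git":     0.999,
    "actions": 0.98,
    "pages":   0.995,
    "api":     0.998,
}

# Strict aggregate: the platform counts as up only when every service
# is up. Assuming independent failures, multiply the uptimes.
aggregate = 1.0
for u in uptimes.values():
    aggregate *= u

# Plain average of the per-service uptimes.
average = sum(uptimes.values()) / len(uptimes)

print(f"aggregate uptime: {aggregate:.4f}")  # noticeably lower
print(f"average uptime:   {average:.4f}")
```

One flaky service drags the strict aggregate down far more than it moves the average, which is why the two views of "GitHub uptime" disagree so much.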

3 days ago

ablob

If you're using all services, then any partial outage is essentially a full outage. Of course, you can massage the numbers to make it look nicer in the way you described but the conservative approach is better for the customers. If you insist, one could create this metric for selected services only to "better reflect users".

That being said, even when looking at the split uptimes, you'd have to do a very skewed weighting to achieve a number with more than one 9.

3 days ago

verdverm

> That being said, even when looking at the split uptimes, you'd have to do a very skewed weighting to achieve a number with more than one 9.

It's definitely bad no matter how you slice the pie.

If GH pages is not serving content, my work is not blocked. (I don't use GH pages for anything personally)

3 days ago

marcosdumay

That's how you count uptime. Your system is not up if it keeps failing when the user does something.

The problem here is the specification of what the system is. It's a bit unfair to call GH a single service, but it's how Microsoft sells it.

3 days ago

tbossanova

As a “customer”, I consider github down if I can’t push, but not down if I can’t update my profile photo (literally did this today, sending out my github to potential employers for the first time in a long time). This stuff is notoriously hard to define

3 days ago

verdverm

> That's how you count uptime.

It's not how I and many others calculate uptime. There is no uniformity, especially when you look at contracts.

3 days ago

bandrami

Thinking back to when I was hosting, I think telling a customer "your web server was running fine it's just that the database was down" would not have been received well.

3 days ago

mort96

I mean I think it's useful. It answers the question, "what percentage of the time can I rely on every part of GitHub to work correctly?". The answer seems to be roughly 90% of the time.

3 days ago

verdverm

I don't use half of the services; the answer is not straightforward.

https://mrshu.github.io/github-statuses/

3 days ago

naniwaduni

Nobody cares about every part of GitHub working correctly. I mean, ok, their SREs are supposed to, but tabling the question of whether that's true: if tomorrow they announced a distributed no-op service with 100% downtime, you should not have the intuition that the overall availability of the platform is now worse.

3 days ago

formerly_proven

In a nutshell: why would the consumer (for the purposes of the SLO) care about how the vendor sliced the solution into microservices?

3 days ago

verdverm

It will depend on the contract.

When I was at IBM, they didn't meet their SLOs for Watson and customers got a refund for that portion of their spend

3 days ago

fontain

An aggregate number like that doesn’t seem to be a reasonable measure. Should OpenAI models being unavailable in CoPilot because OpenAI has an outage be considered GitHub “downtime”?

3 days ago

mort96

As long as they brand it as a part of GitHub by calling it "GitHub Copilot" and integrate it into the GitHub UI, I think it's fair game.

3 days ago

jasomill

The third-party aspect is irrelevant, but while high downtime on any product looks bad for the company and the division, I consider GitHub Copilot an entirely separate product from GitHub, and GitHub Copilot downtime doesn't interfere with my use of GitHub repos or vice versa, so I'd consider its downtime separately.

GitHub Actions, on the other hand, is frequently used in the same workflows as the base GitHub product, so it's worth considering both separately and together, much like various Azure services, whereas I see no reason at all to consider an aggregate "Microsoft" downtime metric that includes GitHub, Azure, Office 365, Xbox Live, etc.

The most useful metric, actually, is "downtimes for the various collections of GitHub services I regularly use together", but that would obviously require effort to collect the data myself.

2 days ago

mort96

My use of GitHub is like yours; I depend on Actions, but I couldn't give less of a damn about Copilot. However, Microsoft has tried to get people to adopt Copilot-heavy workflows, where Copilot plays an integral part in the pull request review process. If your process is as Microsoft pushes for -- wait for Copilot to comment, then review and resolve the stuff Copilot points out -- then Copilot being down means you can't really handle pull requests, at least not in accordance with your standard process. For people who embrace Copilot in the way Microsoft wants them to, a GitHub Copilot outage has a serious impact on their GitHub experience.

2 days ago

mememememememo

What is Google's uptime (including every single little thing with Google in the name)?

3 days ago

mort96

I don't think that's a fair comparison. Google Maps, Google Calendar, Google Drive, Google Search, Google Chrome, Google Ads, etc. are all clearly completely different products which have very little to do with each other; they're just made by the same company called Google.

GitHub is a different situation. There's one "thing" users interact with, github.com, and it does a bunch of related things. Git operations, web hooks, the GitHub API (and thus their CLI tool), issues, pull requests, Actions; it's all part of the one product users think of as "GitHub", even if they happen to be implemented as different services which can fail separately.

EDIT: To illustrate the analogy: Google Code, Google Search and Google Drive are to Google what Microsoft GitHub, Microsoft Bing and Microsoft SharePoint are to Microsoft.

3 days ago

Kaliboy

Completely agree; it actually makes it worse, as GitHub's secondary functions, so to speak, are things we implicitly rely on.

When I merge to master I expect a deploy to follow. This goes through git, webhooks, and Actions. Especially the latter two can fail silently if you haven't invested time in observability tools.

If Maps is down I notice it and can immediately pivot. No such option with GitHub.

3 days ago

dogma1138

It depends. For example, I would consider Google Drive uptime part of, say, Google Docs' overall uptime: if I can't access my stored documents, or can't save a document I've been working on for the past 3 hours, because Drive is down, I would be very pissed and wouldn't care whether it's Drive or Docs that is the problem underneath. I still can't use Google Docs as a service at that point.

3 days ago

fwip

I think reasonable people can disagree on this.

From the point of view of an individual developer, it may be "fraction of tasks affected by downtime" - which would lie between the average and the aggregate, as many tasks use multiple (but not all) features.

But if you take the point of view of a customer, it might not matter as much 'which' part is broken. To use a bad analogy, if my car is in the shop 10% of the time, it's not much comfort if each individual component is only broken 0.1% of the time.
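That "fraction of tasks affected" view can be sketched with a toy model (made-up uptimes and task mixes, independent failures assumed): a task succeeds only if every service it touches is up, so the per-task number lands between the strict aggregate and the plain average.

```python
# Toy model: a task succeeds only if every service it touches is up.
# Uptimes and task mixes below are made up for illustration.

uptimes = {"git": 0.999, "actions": 0.98, "pages": 0.995, "api": 0.998}

tasks = {
    "push code":        ["git"],
    "review a PR":      ["git", "api"],
    "merge and deploy": ["git", "api", "actions"],
}

def availability(services):
    """Probability that all listed services are up (independence assumed)."""
    p = 1.0
    for s in services:
        p *= uptimes[s]
    return p

# Average availability across an equal mix of tasks.
task_avail = sum(availability(s) for s in tasks.values()) / len(tasks)

strict = availability(list(uptimes))            # everything up at once
average = sum(uptimes.values()) / len(uptimes)  # plain average

print(f"strict aggregate: {strict:.4f}")     # lowest
print(f"per-task view:    {task_avail:.4f}") # in between
print(f"plain average:    {average:.4f}")    # highest
```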

3 days ago

remus

> But if you take the point of view of a customer, it might not matter as much 'which' part is broken. To use a bad analogy, if my car is in the shop 10% of the time, it's not much comfort if each individual component is only broken 0.1% of the time.

Not to go too far out of my way to defend GH's uptime, because it's obviously pretty patchy, but I think this is a bad analogy. Most customers won't have a hard dependency on every user-facing GH feature. Or to put it another way, only a tiny fraction of users actually experienced something like the 90% uptime reported by the site. Most people in practice are probably experiencing something like 97-98%.

3 days ago

fwip

Sorry, by 'customer' I meant to say something like a large corporate customer - you're buying the whole package, and across your org, you're likely to be a little affected by even minor outages of niche services.

But yeah, totally agree that at the individual level, the observed reliability is between 90% and 99%, and probably toward the upper end of that range.

3 days ago

[deleted]
3 days ago

wang_li

A better analogy is if one bulb in the right rear brake light group is burnt out. Technically the car is broken. But realistically you will be able to do all the things you want to do unless the thing you want to do is measure that all the bulbs in your brake lights are working.

3 days ago

Dylan16807

That's an awful analogy because "realistically you will be able to do all the things you want to do". If a random GitHub service goes down there's a significant chance it breaks your workflow. It doesn't always, but the chance is far from zero.

One bulb in the cluster going out is like a single server at GitHub going down, not a whole service.

3 days ago

mememememememo

Or if your kettle is not working, is the house considered not working?

3 days ago

Polizeiposaune

I've been on a flight that was late leaving the gate because the coffeemaker wasn't working.

3 days ago

skipants

These are two pages telling two different things, albeit with the same stats. The information is presented by OP in a way that highlights the effects of the Microsoft acquisition.

3 days ago

goodmythical

holy shit that's nearly five weeks of down time.

Well, I mean, I guess that's fair really. How long has github been around? Surely it's got five weeks of paid time off by now...

3 days ago

hk__2

It's biased to show this without the dates at which features were introduced. A lot of the downtime in the breakdown is GitHub Actions, which launched in August 2019; so yeah, what a surprise that there was no Actions downtime before, because Actions didn't exist.

3 days ago

cuu508

You can click on "Breakdown" and then on "Actions" to hide it.

3 days ago

mbauman

Even worse, those features show "100% uptime" pre-existence on the breakdowns page too.

3 days ago

siruwastaken

This is the real questionable part of the graphic. It seems that no data pre-2018 was just treated as 100% uptime (which is hardly historically accurate).

3 days ago

voxic11

Check the breakdown page. Like yes the magnitude is reduced obviously for individual services. But they all show the same trend.

3 days ago

hk__2

I checked the breakdown page, as I wrote:

> A lot of the downtimes in the breakdown are GitHub Actions

3 days ago

[deleted]
3 days ago

phillipcarter

FWIW if people are looking for a reason why, here's why I think it's happening: https://thenewstack.io/github-will-prioritize-migrating-to-a...

3 days ago

llama052

It's absolutely this. Our Azure outages correlate heavily with Github outages. It's almost a meme for us at this point.

3 days ago

honeycrispy

Azure's downtime doesn't appear to be as bad as Github's.

6 hours ago

nmaleki

You'd think they'd do all the testing elsewhere and then use a much shorter window of time to move to Azure after testing. I don't think this fully explains over 6 years of poor uptime.

3 days ago

hadlock

The fact that even they struggle with GitHub Actions is a real testament to the fact that nobody wants to host their own CD workers.

3 days ago

esseph

> The fact that even they struggle with GitHub Actions is a real testament to the fact that nobody wants to host their own CD workers.

What a weird takeaway

3 days ago

phillipcarter

It certainly explains the issues _now_, IMO.

3 days ago

shrinks99

I got Claude to make me the exact same graph a few weeks ago! I had hypothesized that we'd see a sharp drop off, instead what I found (as this project also shows) is a rather messy average trend of outages that has been going on for some time.

The graph being all nice before the Microsoft acquisition is a fun narrative, until you realize that some products (like Actions, announced on October 16th, 2018) didn't exist and therefore had no outages. Easy to correct for by setting up start dates, but not done here. For the rest that did exist (API requests, Git ops, Pages, etc.), I figured the outages could just as easily be explained by GitHub improving its observability.

3 days ago

padjo

It feels like they launched actions and it quickly turned out to be an operations and availability nightmare. Since then, they've been firefighting and now the problems have spread to previously stable things like issues and PRs

3 days ago

deepsun

They rushed to launch Actions because GitLab launched them before.

BTW, GitLab called it "CI/CD" just as a navigation section on their dashboard, and that name spread outside as well, despite being weird. Weird names are easier to remember and associate with specific meaning, instead of generic characterless "Actions".

3 days ago

nulltrace

We added Actions for CI in 2020. A year later we realized our entire deploy pipeline just assumed it would be up.

Webhook doesn't fire, nothing errors out, and you find out when someone asks why staging hasn't moved in two days.
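One cheap safeguard is a staleness check that alarms when a commit has sat undeployed too long. A sketch (the function and threshold are hypothetical; you'd wire in your real commit and deploy timestamps):

```python
from datetime import datetime, timedelta, timezone

MAX_LAG = timedelta(hours=2)  # longest a merge may sit undeployed

def deploy_is_stale(last_commit, last_deploy, now=None):
    """True when the newest commit has waited longer than MAX_LAG
    without a deploy covering it. Timestamps are tz-aware datetimes."""
    now = now or datetime.now(timezone.utc)
    if last_deploy >= last_commit:
        return False  # the last deploy already covers the newest commit
    return now - last_commit > MAX_LAG
```

Run it from cron rather than a webhook, so the check itself doesn't depend on the delivery path that might be silently down.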

3 days ago

jamiemallers

[dead]

2 days ago

irishcoffee

GitHub Actions needs to go away. Git, in the Linux mantra, is a tool written to do one job very well. Productizing it, bolting shit onto the sides of it, and making it more than it should be was/is a giant mistake.

The whole "just because we could doesn't mean we should" quote applies here.

3 days ago

lcnPylGDnU4H9OF

The same philosophy would suggest that running some other command immediately following a particular (successful) git command is fine; it is composing relatively simple programs into a greater system. Other than the common security pitfalls of the former, said philosophy has no issue with using (for example) Jenkins instead of Actions.
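That composition (run the next command only if the previous one succeeded, i.e. the shell's `a && b`) can be sketched in a few lines; the example commands are placeholders:

```python
import subprocess

def run_chain(*commands):
    """Run each command in order, stopping at the first failure.
    Returns True only if every command exited 0 (like `a && b` in a shell)."""
    for cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            return False
    return True

# e.g. run_chain(["git", "push"], ["./deploy.sh"])  # placeholder commands
```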

3 days ago

irishcoffee

[flagged]

3 days ago

lcnPylGDnU4H9OF

Yes.

3 days ago

psini

But GitHub Actions is not Git?

3 days ago

irishcoffee

Sorry, yes, that was my point. GitHub turned git into some dysmorphic DVCS version of C++ on the web. Git is fine. Maybe 10% of people use plain git; it's all wrapped in shitty web apps. Let git be git, and let CI/CD be CI/CD, the way Linux intended.

However, I don’t work on web apps. Maybe it’s better for the JavaScript folks. I hope to never write a line of js in my lifetime.

2 days ago

zja

3 days ago

dewey

I remember a lot of unicorn pages back in the day. Maybe the status page was just not updated that regularly back then?

3 days ago

imglorp

I think the unicorn is only for web pages. Things like the Git and API services might be broken independently (and often are!), and they might show up on the status page only after some time.

3 days ago

teach

One could argue that, given how singularly awful it is, GitHub's historical uptime might qualify as "historic".

3 days ago

tclancy

Bless you, was very much not what I was expecting from the title.

3 days ago

BadBadJellyBean

I feel like by now GitHub has a worse downtime record than my self-hosted services on my single server, where I frequently experiment, stop services, or reboot.

3 days ago

agilob

It's OK because we're still paying for it. The QoS degradation is worth it to them: no need for 99.999% when you can deliver 90.84% and people still pay for it.

3 days ago

verdverm

Those electricity savings can be better used to fuel the token bonfire.

3 days ago

hrmtst93837

Scale changes the math. Your uptime chart would look like a crime scene too if a million people were pushing random crap at your server all day and every tiny hiccup could land on an open PR or a hot write path you forgot about. GitHub looks like old code glued to ancient VMs that people are scared to touch, so a small outage can drag into a weirdly long one.

3 days ago

marcosdumay

It does have a worse downtime record than my tiny VPS that has a recurrent packet routing problem and keeps going offline. Measurably so.

3 days ago

frenchie4111

Github's migration to Azure has so far been a hilariously bad advertisement for Azure

3 days ago

otterley

I'm not a GitHub apologist, but that graph isn't at scale, at all. It's massively zoomed in, with a lower band of 99.5%. It makes it look far worse than it is.

3 days ago

pavon

If you plotted it from zero, then a horrible service and a great service would be indistinguishable. Their SLA for enterprise customers is 99.9%. The low end of that chart is 5x that amount of downtime. It is a reasonable scale for the range people are concerned about, and it looks bad because it is bad.

3 days ago

verdverm

It's an uptime chart and shouldn't need to show much more than the 99% range.

If you started the y-axis at zero, you wouldn't see much of anything. Logarithmic scale would still be a bit much imo.

3 days ago

otterley

> If you started the y-axis at zero, you wouldn't see much of anything.

That's... kind of my point.

As a reliability engineer, I'm disappointed in GitHub's 99.5% availability periods, especially as they impact paying customers. On the other hand, most users are non-paying users, and a 99.5% availability for a free service seems to me to be a reasonable tradeoff relative to the potential cost of improving reliability for them.

3 days ago

grayhatter

> On the other hand, most users are non-paying users, and a 99.5% availability for a free service seems to me to be a reasonable tradeoff relative to the potential cost of improving reliability for them.

If they are using your data, you're still paying just not in cash.

As a former reliability engineer, I'm trying hard to remember back when we had multiple months in a row never reaching 100% uptime, and I can't. Yes, we've seen runs of painful months, but also runs of easy months without down time.

But let's talk root cause here: the cost of improving things is someone caring. This isn't simply a hard problem, it's a well-understood hard problem that no one who makes decisions cares about. Which, as a reliability engineer, I find embarrassing. Uptime is one of those foundational aspects that you can build on top of. If you're not willing to invest in something as core as whether your code or service works, what are you even doing?

2 days ago

otterley

> If you're not willing to invest in something as core as whether your code or service works, what are you even doing?

I think Microsoft is collecting rents. :)

2 days ago

tclancy

It also has 0 reflection of load. Weren't you limited to a single private repo before Microsoft took over?

3 days ago

otterley

I don't think so. Even before Microsoft acquired GitHub, you could have as many private repos as you wanted, but you couldn't have more than 3 collaborators. This change happened back in 2019:

https://github.blog/news-insights/product-news/new-year-new-...

3 days ago

alberth

Unsolicited feedback ... changing the y-axis to be hours (not % uptime) might be more intuitive for folks to understand.

The data is there, you just have to hover over each data point.
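The conversion is one line; for instance, 99.5% uptime over a 30-day month works out to about 3.6 hours of downtime:

```python
def downtime_hours(uptime_pct, period_hours=30 * 24):
    """Hours of downtime implied by an uptime percentage
    over a period (default: a 30-day month, 720 hours)."""
    return (1 - uptime_pct / 100) * period_hours

print(downtime_hours(99.5))   # ~3.6 hours/month
print(downtime_hours(99.9))   # ~0.72 hours/month
print(downtime_hours(90.84))  # ~66 hours/month
```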

3 days ago

simlevesque

It could even be both % and offline hours per year. To me the percentage is simpler to understand.

3 days ago

8organicbits

I'd like to move off GitHub, and I deploy some websites using GitHub Pages, so I took a look at the availability of static web hosting; GH actually does really well on this metric, although Fastly, the CDN they use, should get the credit.

https://alexsci.com/blog/static-hosting-uptime/

3 days ago

llama052

Nearly every time Github has an outage, Azure is having issues also.

Actually, during the last 4-5 GitHub outages, our Azure environments had issues (that they rarely post on the status page), and lo and behold, I'd notice GitHub was having the same problem.

I can only assume most of this is from the Azure migration path. Such an abysmal platform to be on. I loathe it.

Looks like there's an internal service health bulletin:

Impact Statement: Starting at 19:53 UTC on 31 Mar 2026, some customers using the Key Vault service in the East US region may experience issues accessing Key Vaults. This may directly impact performing operations on the control plane or data plane for Key Vault or for supported scenarios where Key Vault is integrated with other Azure services.

Honestly all of the key vault functions are offline for us in that region. Just another day in paradise.

Also, the Azure status page remaining green is normal. Just assume it's statically green unless enough people notice.

3 days ago

starkparker

The biggest spikes are Github Actions, starting November 2019. They didn't go GA until November 13, 2019: https://siliconangle.com/2019/11/13/github-universe-announce...

3 days ago

bob1029

I'm convinced one of my org's repos is just haunted now. It doesn't matter what the status page says. I'll get a unicorn about twice a day. Once you have 8000 commits, 15k issues, and two competing project boards, things seem to get pretty bad. Fresh repos run crazy fast by comparison.

3 days ago

chenzhekl

My impression is that, before Microsoft acquired GitHub, GitHub went for many years without really introducing new features, so part of its stability came from the fact that it wasn’t very ambitious or proactive about improving.

3 days ago

ahofmann

I loved that time. Websites, or "apps" that don't change every second time I want to use them, are great.

3 days ago

SamuelAdams

It could also be that they have more customers / clients now, or offer more capabilities.

3 days ago

_air

Do we have metrics for the uptime of other major services? Would be interesting to see if this is just a GitHub problem or industry-wide.

3 days ago

verdverm

Bitbucket Cloud incident history: https://bitbucket.status.atlassian.com/history

Though I will be the first to say I don't fully trust it, based on the flaky git clone errors we see in CI.

3 days ago

barryhennessy

It's actually great to see a living example of how sensitive users* are to what would look, to a lay person, like a small amount of downtime.

The fact that we’re all talking about it, and not at all surprised, is a great example we can take when making the case for more 9’s of reliability.

* well, very technical power users.

3 days ago

davebren

Reminder to keep local backups of everything important while the reliability of all these services continues to degrade.

2 days ago

verdverm

I will chime in that Jira and Bitbucket have drastically improved performance and reliability over this same time period. It actually feels snappy and they seem to listen to feedback.

3 days ago

TimLeland

How much of the downtime is due to all the AI code being committed?

3 days ago

darkhorn

When I say that Microsoft writes very bad code, some people get offended. For example, Azure Event Hubs has almost no documentation, and its Java libraries mostly do not run.

3 days ago

landsman

It is ridiculous how a company owned by Microsoft, which makes absurd money on Azure, is left to die like this. There has to be a sort of plan or something. So sad to watch.

3 days ago

fontain

GitHub is 100x the size today with 100x the product surface area. Pre-Microsoft GitHub was just a git host. Now, whether GitHub should have become what it is today is a fair question, but to say "GitHub" is less stable today vs. 10 years ago ignores the significant changes. Also, many of these incidents are limited to products that are unreliable by nature, e.g. Copilot depends on OpenAI, and OpenAI has outages. The entire LLM API industry expects some requests to fail.

GitHub’s reliability could stand to be improved but without narrowing down to products these sort of comparisons are meaningless.

3 days ago

bigfatkitten

> Pre-Microsoft GitHub was just a git host.

And even just that aspect of the service is now extremely unreliable. If outages in the LLM side can cause that to break, that would indicate some serious architectural problems.

3 days ago

davebren

Sites are supposed to get more reliable as they grow and have more resources to allocate specifically towards site reliability.

2 days ago

tln

The article provides a way to do just that: click Breakdown, then you can deselect any product areas.

Git operations alone show way more instability post-acquisition.

3 days ago

robshippr

This at least makes me feel like I am not going crazy when I say "Github used to be much more reliable before Microsoft bought them"

3 days ago

mcherm

The significance of the changeover would be much more impactful if the chart showed a longer history.

3 days ago

joey5403

Based on the graphics, Microsoft doesn't seem to be doing very well

3 days ago

rvz

I guess "centralizing everything" onto GitHub was never a good idea; I called it 6 years ago. [0]

Looking at this now, you might as well self-host and you would still get better uptime than GitHub.

[0] https://news.ycombinator.com/item?id=22867803

3 days ago

DerArzt

This has to feel a little vindicating.

2 days ago

addaon

Historical, not historic. Extremely not historic.

2 days ago

neop1x

Powered by Azure™

a day ago

keybored

I think you mean GitHub’s histrionic uptime.

3 days ago

redwood

I wonder if they got moved to Azure in 2019?

3 days ago

jrochkind1

Honestly I think their status page just got more honest -- and they are graphing this in such a way that any partial outage to any service looks really bad on the chart.

There were definitely partial outages to services inside that row of horizontal green dots, that the status page just wasn't advertising.

3 days ago

josefritzishere

That's pretty stark.

3 days ago

[deleted]
3 days ago

yakkomajuri

I mean I'm as annoyed as the next person about the outages but I'm not sure correlating with the Microsoft acquisition tells the whole story? GitHub usage has been growing massively I'd imagine?

3 days ago

wiseowise

Programming is a solved problem, btw.

3 days ago

qrush

hot take: I would accept ads under every PR comment in GitHub if we could get back to 3 or 4 nines of reliability.

3 days ago

Jaco07

[dead]

3 days ago

theaicloser

[flagged]

3 days ago

tonymet

Nearly all the variance is from Actions, a product that didn’t exist beforehand.

It's despicable to see everyone punching down on GitHub. Even under Microsoft they've continued to provide an invaluable and free service to open source developers.

And now, while vibe coders smother them to death, we ridicule them. Shameful, really.

3 days ago

EdNutting

I was with you until your comment about vibe coders. Microsoft paid for and brought this vibe coding hell upon themselves: GitHub Copilot, investment in/partnership with OpenAI, and everything else they've done to enshittify software and the internet.

If it brings them down, they’ve only themselves to blame. More likely it’ll just hasten the end of free public repos, which will be a shame, but we’ll find other ways to share code that aren’t reliant on one semi-benevolent megacorp.

3 days ago

tonymet

The smothering would happen with or without Copilot. This just sounds like an excuse to be ungrateful.

I hope GitHub shuts down the free tier; maybe developers will finally be grateful.

3 days ago

EdNutting

I’m grateful for GitHub and their support for open source, but they’re not getting any sympathy for the AI mess they’re generating (and they’re contributing more to the mess than many other organisations, due to their size, position and product strategy).

They’re a big enough corporation that we can have nuanced feelings about them. Simultaneously grateful for one part of what they do, and unsympathetic for the consequences of a different part of what they do.

3 days ago

tonymet

true colors.

3 days ago

EdNutting

Mmm, you’ve shown yours too.

3 days ago