NPM website was down

126 points
2 days ago
by 18nleung

Comments


iLemming

First GitHub, now NPM? Oh no... It is happening, guys. Rise of the machines. I hope Jira is next and Slack follows.

2 days ago

[deleted]
2 days ago

corvad

I wonder if this is an underlying infra issue with Azure, given that GitHub was also having issues.

2 days ago

nulltrace

We added a preflight curl against registry.npmjs.org before the install step in CI. Not surprising they went down together.
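A minimal sketch of such a preflight step, assuming a shell-based CI job (the timeout and retry values are arbitrary):

```shell
# Fail the CI job early if the npm registry is unreachable,
# rather than letting the install step time out mid-run.
# -f: treat HTTP errors as failures; -sS: silent but still show errors
# -I: HEAD request only, we just want reachability
if ! curl -fsSI --max-time 10 --retry 3 https://registry.npmjs.org/ > /dev/null; then
  echo "registry.npmjs.org unreachable; aborting before npm ci" >&2
  exit 1
fi
npm ci
```

This is a CI config fragment, so its behavior depends on network state at run time.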

2 days ago

2ndorderthought

I bet 10 dollars it's DNS.

2 days ago

thanatos_dem

Nah, can't be, Azure DNS has a 100% SLA after all: https://learn.microsoft.com/en-us/azure/dns/dns-faq#what-is-...

2 days ago

shakna

"Always" up, but maybe not going where you expect. [0]

[0] https://arstechnica.com/information-technology/2026/01/odd-a...

2 days ago

parliament32

To be fair, it feels like the DNS service has been the most reliable part of our Azure infra. Never really had issues with it, whether with traffic or API calls.

2 days ago

yomismoaqui

  It's not DNS
  There's no way it's DNS
  It was DNS
- SSBroski

2 days ago

basilikum

It's DNS

If it's not DNS it's MTU if you're a person and BGP if you're a company.

a day ago

corvad

Just wait and it will be something like "GitHub's internal DNS was down and caused widespread service communication issues."

2 days ago

xaxfixho

it might just be *AZURE*

2 days ago

Imustaskforhelp

I am waiting for Jeff Geerling's "it's always DNS" t-shirt reference/video about it if that's the case.

2 days ago

Scipio_Afri

Easy there buddy, not everything needs to be a polymarket bet :-)

2 days ago

munk-a

It's likely someone just ran npm ls --all

2 days ago

Raed667

lots of amazon pages & search seem to be degraded as well

2 days ago

cozzyd

That's one way to fix supply chain vulnerabilities.

2 days ago

tantalor

Can't have any vulnerabilities if you don't have a supply chain

2 days ago

nine_k

More seriously, keeping a local cache of external npm packages, and a local artifact storage for internal npm packages looks like a wise thing to have done long ago. Might be cheaper in the long run.

Ironically, both Nandu and Verdaccio are implemented in TypeScript and install via npm.

(Same logic obviously applies to Python packages, Docker images, etc.)
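A minimal sketch of standing up a Verdaccio caching proxy and pointing npm at it; the port is Verdaccio's documented default, but treat the exact commands as assumptions for your setup:

```shell
# Run Verdaccio as a caching proxy in front of registry.npmjs.org.
# By default it listens on localhost:4873 and fetches cache misses upstream,
# so packages keep installing even when the public registry is down.
npx verdaccio &

# Point npm at the local mirror; installs now hit the cache first.
npm config set registry http://localhost:4873/

# Reverting is one command if you want the public registry back.
npm config set registry https://registry.npmjs.org/
```

In practice you'd run Verdaccio as a persistent service on shared infra rather than via npx, so the mirror survives the very outage it's meant to cover.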

2 days ago

hmokiguess

At my former job we had a private registry that was a mirror of npm's, with an approval gate for packages devs would request, and it always pinned versions.

I took that for granted back then and just assumed it was standard enterprise policy.

2 days ago

jamesfinlayson

Multiple previous jobs had this too (a local Packagist is one option, Artifactory is another), but my current job got rid of theirs. Seemed a little short-sighted given the risks, but I don't make the decisions.

2 days ago

spartanatreyu

> a local artifact storage for internal npm packages looks like a wise thing to have done long ago

Deno already does this invisibly by default.

All packages are stored in the global cache.

No need to store multiple versions of the same dependencies across projects.

To the code in your projects: there is no such thing as a global cache. Just import your dependencies like normal and deno maps them to the global cache.
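For illustration, the cache described above can be inspected and relocated via Deno's DENO_DIR environment variable; the shared path here is an assumption, and `deno cache` behavior may differ across Deno versions:

```shell
# Print where Deno's global module cache currently lives
deno info

# Point every project at one shared cache directory (e.g. on a CI runner),
# so dependencies are fetched once and reused across projects
export DENO_DIR=/srv/deno-cache
deno cache main.ts
```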

2 days ago

miohtama

If only we had a turnkey distributed cache, like IPFS.

2 days ago

ibejoeb

Does IPFS support content eviction now? If not, that could go wrong really fast. You get a compromised package out there and then, I think, literally every node needs to unpin it or it remains.

2 days ago

zadikian

Presumably, however you mark a version as latest would also be how you mark one as compromised. IPFS files are immutable and keyed by hash. But this seems like overengineering.

2 days ago

cluckindan

Waiting for the BitTorrent package manager

2 days ago

XorNot

Caching NPM was easier when you could pull from the CouchDB replication API. AFAIK that's gone, and now you just have to send a bazillion HTTP requests instead.

2 days ago

nine_k

Sending a bazillion http requests within your LAN, or at least your VPC, is much easier, faster, and cheaper.

Both yarn and pnpm support HTTP/2, which speeds up the bazillion requests quite a bit.

2 days ago

hexasquid

Hold the jokes until we're sure this isn't an `.unwrap()`

2 days ago

normie3000

Well it is owned by github.

2 days ago

cute_boi

which is owned by microslop

2 days ago

rvz

...and proudly maintained by Microsoft's AI agents: Tay.ai, Zo, and Copilot.

They seem to be doing a pretty good job at wrecking both GitHub and npm at the same time.

2 days ago

adxl

Clippy was too stupid to qualify as an AI.

2 days ago

lrvick

Whenever NPM is offline, the internet is a little safer.

Keep up the good work Microsoft.

Let's shoot for 100% downtime though. Thanks.

2 days ago

corvad

Fixed as of 22:30 UTC. Hope there's a postmortem.

2 days ago

saadn92

ha, github is down too

2 days ago

idoxer

Works for me, could be region related

2 days ago

xmprt

With all the github instability, I wonder if Cloudflare or some other provider is going to look into providing a similar service.

2 days ago

xmprt

I mean more like a full Git competitor. GitLab exists, but more competition is generally better for the consumer, and it looks like GitHub's lead is starting to falter with all these incidents.

2 days ago

sofixa

GitLab is right there. And overall provides a better product than GitHub, if nothing else on these two points:

* You can actually have an organisational structure (folders/namespaces), and projects can be moved around with automatic redirects. Also, inheritance of access controls, variables between the namespaces

* GitLabCI is organised in a way that makes supply chain attacks less of a risk. GitHub Actions takes the NPM/JS approach, where every step is an action, one you usually need to get from someone else, with shoddy versioning, tons of transitive dependencies, etc. In GitLabCI you can have templates, but you don't have to use an external template for every bit. It's shell scripting on top of containers, so you can have custom container images with your stuff, or custom scripts, or templates that bundle it all.

2 days ago

justinclift

GitLab also limits the size of PRs/MRs, which makes it Unfit for Purpose. :( :( :(

It's a problem they know about, but have no plan to fix before 2027.

2 days ago

irishcoffee

I mean, the PR limit is like a million characters. I would also reject a PR of a million characters. That’s bananas.

2 days ago

justinclift

Not sure about that "million characters", but we've been bitten by it in our production systems. :(

Thus, we're moving off GitLab.

2 days ago

skullone

What use case does a million character PR have?

2 days ago

justinclift

When an automated system creates a PR for merging from an existing dev branch (that's been extensively tested) to "master" (or "main").

The "surprise, you can't review all the files in your PR" using GitLabs standard web based tooling makes it a no-go.

a day ago

sofixa

That's interesting, because GitHub's web UI craps out at much less than a million lines. It refuses to open even low-thousand-line diffs.

a day ago

xp84

I've personally been deeply unappreciative of GitHub's changes in the last few years that automatically hide diffs for "large files" until you click to open them, with a threshold that seems to keep shrinking. Maybe three screenfuls of content per file is the limit now. It's crazy.

a day ago

justinclift

Yeah, agreed it's not great for that. I'm not real happy with GitHub's worsening UX either, but it'll at least show the _names_ of all the files in the PR.

With GitLab, when you hit the size limit, any file "past" that limit doesn't even show that it exists in the MR. It just looks like the MR is missing a bunch of stuff, with no workaround available. :( :( :(

a day ago

irishcoffee

I'm sure, I looked it up.

2 days ago

[deleted]
2 days ago

fontain

All of those features are supported by GitHub in some form, e.g. Organizations can now belong to Enterprises.

2 days ago

sofixa

It's not the same, at all.

SSO, access tokens, and secrets are all bound to the Organization level: if you work in multiple Organizations you have to log in to each separately. You also cannot have nested Organizations.

2 days ago

dijksterhuis

tree based directory structure stuff is available on gitlab’s free tier — so are all the permissions inheritance for groups etc.

so, while you’re technically right, these features are apparently paywalled heavily on github.

ime you get more features on gitlab for the same price (or less). i switched fully two years ago and im not going back.

2 days ago

dmitrygr

libc is still working just fine, as is the Linux kernel. Mayhaps having 2000 dependencies on 3000 packages from 4000 unvetted sources was a mistake after all?

2 days ago

TesterVetter

[dead]

2 days ago

cute_boi

microslop slops are down.

2 days ago

12345hn6789

Azure is completely dead across multiple resources. Confirming....

2 days ago

DaiPlusPlus

https://azure.status.microsoft/en-US/status says "There are currently no active events." - and everything's fine with my day-job's Azure sub right now.

2 days ago

naikrovek

Oh no. At least nothing of value is affected.

:)

2 days ago