Caddy – The Ultimate Server with Automatic HTTPS

568 points
a month ago
by huang_chung

Comments


samwillis

One area where we have found Caddy invaluable is local testing of APIs with HTTP2 during development. Most dev servers are HTTP1 only, so you are limited to a max of 6 concurrent connections to localhost. HTTP2 requires SSL, which would normally make it a PITA to set up and test locally during development.

Throw a Caddy reverse proxy in front of your normal dev server and you immediately get HTTP2 via the root certificate it installs in your OS trust store. (https://caddyserver.com/docs/automatic-https)

We (ElectricSQL) recommend it for our users as our APIs do long polling, which with HTTP2 doesn't lock up those 6 concurrent connections.

I've also found that placing it in front of Vite for normal development makes reloads much faster. Vite uses the JS module system to load individual files in the browser with support for HMR (hot module replacement); for larger apps this can result in a lot of concurrent requests, creating a queue for those files on the six connections. Other bundlers/build tools bundle the code during development, reducing the number of files loaded into the browser, which sparked a bit of a debate last year over which approach is better. With HTTP2 via Caddy in front of Vite you solve all those problems!
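
A minimal Caddyfile sketch of that setup (5173 is Vite's default port; substitute whatever your dev server listens on):

    localhost {
        reverse_proxy localhost:5173
    }

Caddy treats the `localhost` site address as local, issues a certificate from its own root CA, and serves HTTP2 out of the box.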

a month ago

jsheard

> HTTP2 requires SSL

Strictly speaking it doesn't; unencrypted HTTP2 is allowed per the spec (and Caddy supports that mode), but browsers chose not to support it, so it's only really useful for testing non-browser clients or routing requests between servers. HTTP3 does require encryption for reals though, there's no opting out anymore.

a month ago

samwillis

Yep, it's really disappointing they didn't decide to support it for localhost.

a month ago

imdadadani

I think the reason they decided not to support it over plain text is that they would have to detect whether the server answers with HTTP/2 or HTTP/1.1, which could be complicated. This is not needed when using TLS, since ALPN is used instead.

a month ago

zinekeller

No, that's rather easy for up-to-date browsers. The real reason is that dumb middle proxies might cache/mangle HTTP/2 requests to the point that it simply fails hard, but encryption makes those concerns go away. Yes, there are proxies that can decrypt TLS, but due to how negotiation of HTTP/2 works in TLS (ALPN), dumb proxies will simply never relay the signal that the server is HTTP/2 capable.

a month ago

jrockway

This is a meh excuse. If you want your browser to connect to a gopher server, you type gopher://example.com. If you want to use http2, http2://example.com should work. (I know, I know, everyone removed Gopher support a few years ago. Same idea though.)

Having said all that, I just copied the certs out of here https://cs.opensource.google/go/go/+/refs/tags/go1.24.0:src/... and use them to do browser/http2 stuff locally. Why steal Go's certificates? Because it took 1 second less than making my own!

a month ago

johannes1234321

Using http2 as the protocol in the URL makes the initial transition complex: one can't share links between different users and clients, or from other websites, with a guarantee of the "best" experience for the user. Not to mention all the old links outside the control of the server operator.

a month ago

taftster

Hey, thanks for this. It saves me even more than 1 second!

a month ago

e12e

Which browsers/libraries trust these? Or does the Go toolchain install them?

a month ago

jrockway

Nothing trusts them, they're just regular self-signed certificates. There is no benefit to using these over your own self-signed certificates except that you don't have to ask your LLM for the commands to generate them ;)
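
For reference, rolling your own is a single command (OpenSSL 1.1.1+ for the -addext flag):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=localhost" \
        -addext "subjectAltName=DNS:localhost,IP:127.0.0.1" \
        -keyout key.pem -out cert.pem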

a month ago

e12e

And of course once you trust them on localhost, you expose yourself to some risk, since the whole world can get a copy of the key.

a month ago

tacone

Another way is to create a regular DNS name and have it resolve to localhost. If you are unable or unwilling to do so, there are free DNS services like https://traefik.me/ that provide you with a real domain name and related certificates.

I personally use traefik.me for my hobbyist fiddling, and I have a working HTTP/2 local development experience. It's also very nice to be able to build for production and test the performance locally, without having to deploy to a dev environment.

a month ago

imhoguy

It is even simpler with `/etc/hosts`:

   127.0.0.1 localhost local.foobar.com
And then just use a wildcard `*.foobar.com` or SAN cert with anything local like Caddy, Nginx, HAProxy or whatever.
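
With Caddy, for example, a locally-trusted cert for such a name is one directive (a sketch; the upstream port is illustrative):

    local.foobar.com {
        tls internal
        reverse_proxy localhost:3000
    }

`tls internal` signs with Caddy's local CA instead of hitting an ACME endpoint.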
a month ago

dspillett

For people worried about somehow exposing commercially or otherwise sensitive information by the registration of DNS names, a SAN certificate is out because of certificate transparency logs.

A wildcard certificate is safe from that though. Or just choosing names that don't give secrets away.

A certificate signed by a locally trusted CA would work too of course, but unless you already have that set up for other reasons it is a bunch of admin most don't want to touch.

a month ago

tialaramex

> a SAN certificate is out because of certificate transparency logs

First, all these certificates in the web PKI have SANs in them. X509 was designed for the X500 directory system, so when Netscape repurposed it for their SSL technology they initially used the arbitrary text fields and just wrote DNS names in as text. There are a number of reasons that's a terrible idea, but PKIX, RFC 2459 and its successors, defines a specific OID for "Subject Alternative Names". The word "alternative" here refers to the Internet's names (DNS names and IP addresses) being an alternative to the X500 directory system. PKIX says the legacy names should be phased out and everybody should use SANs.

That rule (you must use SANs in new certificates) was baked into the CA/Browser Forum Baseline Requirements (CA/B BRs, or just "BRs" typically), which set the rules for certificates that will actually work in your web browser and thus, in practice, all the certificates people actually use. Enforcement of this rule was spotty for some time, but the advent of CT logging made it possible to immediately spot any cert which violates the rule, and so some years ago Google's Chrome began to just reject the legacy write-it-in-a-text-field-and-hope approach, and other browsers followed.

So what you're actually talking about are certificates with two or more specific DNS names rather than a single wildcard.

Secondly though, that's usually all a waste of your time if you're trying to mask the existence of named end points because of Passive DNS. A bunch of providers sell both live and historical feeds of DNS questions and answers. Who asked is not available, so this isn't PII, but if I'm wondering about your "sensitive" names at example.com I can easily ask a Passive DNS service, "Hey, other than www.example.com what else gets asked in similar DNS queries?" and get answers like "mysql.example.com" and "new-product-test.example.com".

Passive DNS isn't free, but then squirrelling away the entire CT log feed isn't free either, it's served free of charge on a small scale, but if you bang on crt.sh hard you'll get turned off.

a month ago

dspillett

> First, all these certificates in the web PKI have SANs in them.

Yes, and technically true is the best variety of true, but… usually people don't refer to certificates where “Subject” is equal to the one and only “Subject Alternative Name” as SAN certificates.

> So what you're actually talking about are certificates with two or more specific DNS names rather than a single wildcard.

If we are going to nitpick over the SAN designation, a basic wildcard certificate is usually a SAN cert too, by the same definition. They have (at least mine always have had):

    Subject =
            “CN = *.domain.tld”
    Subject Alternative Name = 
            “DNS Name: *.domain.tld
             DNS Name: domain.tld”
(or similar for a wildcard hung off a sub-domain)

> "Hey, other than www.example.com what else gets asked in similar DNS queries?"

True, but only if those queries are hitting public DNS somehow. You can hide this by having your local DNS be authoritative for internal domains — your internal requests are never going to outside DNS servers. There could be leaks if someone who normally has access via VPN tries to connect without, but if you have something so truly sensitive that just knowing the name is a problem¹ then I hope your people are more careful than that (or your devices seriously locked down).

And I still say the easy workaround for this is names that only mean something internally. projectwhatthefuck.dev.company.tld is not going to give an attacker much, compared to projectusurpcompetitor.company.tld. Yes, they'll know the server name, and if it is publicly addressable they can connect to it, but if you have it properly configured they'll have to give it auth information they hopefully won't have before it hands over any useful information beyond the meaningless (to them) name that they already know.

--------

[1] Some of our contracts actually say that we can't reveal that we work with the other party, so technically² we could be in trouble if we leak the company name via DNS (bigwellknownmultinationalbank.ourservice.tld). Though when we have offered a different name, in case the link between us could leak out that way, they've always declined.

[2] Really they don't care that much. They just don't want us to use their name/logo/other in promotional material.

25 days ago

sarlalian

I use a combination of mkcert and localdev.me … mkcert to generate a CA and install certs, then localdev.me resolves any subdomain to localhost.
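
That combo is only a couple of commands (a sketch; mkcert drops the cert/key files in the current directory):

    mkcert -install                      # create a local CA and add it to trust stores
    mkcert "*.localdev.me" localdev.me   # issue a wildcard cert signed by that CA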

a month ago

8n4vidtmkvmk

Aren't you exposing your dev instance to the world then? Not worried about that?

a month ago

adolph

DNS to a local address doesn’t expose anything.

For example, postulate a DNS entry of myTopSecrets mapped to localhost. If you use it, it will be routed to your own computer. If someone else uses it, they would be routed to their own computer. The same follows for IP addresses within your local area network.

Unless you did extra work outside the scope of DNS, nothing in your lan is addressable from outside your lan.

a month ago

TeMPOraL

You're still revealing the existence of myTopSecrets to the world, though.

Between this and certificate transparency logs, it seems insane to me that the commonly advised Correct Setup, to be able to experiment and hack random little personal stuff, and have it reliably work on modern browsers, requires you to 1) buy a subscription (domain), 2) enter into another subscription-ish contractual relationship (Let's Encrypt), and 3) announce to the whole world what you're doing (possibly in two places!).

Imagine your computer stops booting up because you repositioned your desk, and everyone tells you the Correct Way to do it is to file a form with the post office, and apply for a free building permit from the local council. That's how this feels.

a month ago

dingdingdang

I totally agree. I have long since accepted that this is how things are, but that doesn't mean it's right. It feels like browsers are overtly obstructing the use of the local system as a development platform and local-hosting option. This also includes baseline features like the JS localStorage API, which only works on a proper origin (opening a local HTML file is a no-go). That last one in particular just feels perverse to me; in NO WAY should that require a domain name. It feels anti-democratic and clunky as can be. It also 100% stops webapps from being local-first (i.e. I save an HTML/JS bundle to a folder and run the "app", automatically isolated to said folder) with network connectivity as a secondary option. If browsers could do the latter it would be a death-blow to a lot of remaining platform-specific apps.

a month ago

adolph

> to be able to experiment and hack random little personal stuff

Yes, a more sane approach is to just use Replit or the like, but this thread is about keeping it complicated.

> 2) enter into another subscription-ish contractual relationship (Let's Encrypt),

afaik, LE only does certs for machines they can see.

Taking a moment to look it up, I'm incorrect: it looks like you can establish LE with a DNS challenge instead of HTTP. [0]

0. https://letsencrypt.org/docs/challenge-types/#dns-01-challen...

a month ago

dspillett

> You're still revealing the existence of myTopSecrets to the world, though.

Not if you only present that name in local DNS, and use a wildcard certificate to avoid needing to reveal the name via a SAN cert or other externally referable information.

Also, perhaps refrain from calling it myTopSecrets. Perhaps ProjectLooBreak instead.

a month ago

Reubensson

Couldn't you just add the domain to /etc/hosts and have it resolve that way. No need to buy domain if you are just testing locally. Also you wouldn't be exposing anything to outside world.

a month ago

TeMPOraL

Perhaps I could, but I'm afraid to do it[0]. And I'd still need a matching certificate, and generating one that browsers won't refuse to look at, and making them trust it across multiple devices (including mobile), is its own kind of hell.

--

[0] - I'm honestly afraid of DNS. I keep losing too much of my life to random name resolution failures, whose fixes work non-deterministically. Or at least I was until ~yesterday, when I randomly found out about https://messwithdns.net, and there I learned that nameservers are required to have a negative cache, which they use to cache failed lookups, often with absurdly high timeout values. That little bit of knowledge finally lets me make sense of those problems.

a month ago

Reubensson

I was only commenting on the DNS part; self-signed certificates come with their own lot of trouble. At least I haven't ever run into any cache issues with local resolvers.

I have previously used https://github.com/jsha/minica which at least makes it easy to create a root certificate and matching server cert. How to get that root cert trusted on a diverse array of devices is another story.
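
A sketch of minica's happy path, assuming its default flags:

    minica --domains example.com
    # first run creates minica.pem / minica-key.pem (the root CA) in the
    # current directory, plus example.com/cert.pem and example.com/key.pem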

a month ago

deanishe

You can add what you want to /etc/hosts, but you need to actually control a domain to get a real cert for it that your browser will trust. Otherwise, you need to mess about with self-signed certs, browser exceptions, etc.

If you already own a domain, it's pretty convenient.

25 days ago

IncRnd

myTopSecrets can instead be mapped through a local redirection without needing to put that information out onto the Internet.

a month ago

JasonSage

Just a note, because this comment made me curious and prompted me to look into it:

Vite does use HTTP2 automatically if you configure the cert, which is easy to do locally without Caddy. In that case specifically there's no real reason to use Caddy locally that I can see, other than wanting to use Caddy's local cert instead of mkcert or the Vite plugin that automatically provides a local cert.

a month ago

[deleted]
a month ago

peterldowns

Completely agree. If you want a nice way to do this with a shared config that you can commit to a git repo, check out my project, Localias. It also lets you visit dev servers from other devices on the same wifi network — great for mobile testing!

Localias is built on Caddy; my whole goal is to make local web dev with https as simple as possible.

https://github.com/peterldowns/localias

a month ago

breadwinner

That only works on localhost, right? I am looking for a solution for intranet that doesn't require complex sysadmin skills such as setting up DNS servers and installing root certificates. This is for my customers who need to run my web server on the intranet while encrypting traffic (no need to verify that the server is who it claims to be).

a month ago

peterldowns

Localias is not designed for your usecase and cannot solve your problem, sorry.

a month ago

HumanOstrich

Without verifying the server identity, the encryption is useless.

24 days ago

ndriscoll

The six connections thing is just a default that you can change in about:config. Really it should probably have a higher default in $currentYear, but I don't expect major browser vendors to care.
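
(In Firefox, for instance, the relevant pref is network.http.max-persistent-connections-per-server; the default is 6.)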

a month ago

srameshc

I assumed almost everyone (product, enterprise) uses ngrok to expose a development/localhost server to get HTTP2 nowadays, but it's good to realize Caddy can do the job well.

a month ago

jodrellblank

> so you are limited to max of 6 concurrent connections to localhost.

I think a web server listening on 0.0.0.0 will accept “localhost” connections on 127.0.0.2, 127.0.0.3, 127.0.0.4 … etc., and that you could have six connections to each.

https://superuser.com/questions/393700/what-is-the-127-0-0-2...

( a comment there says “not on macOS” though)

a month ago

noplacelikehome

> via the root certificate it installs in your OS trust store

This does not sound like the kind of feature I would want in a web server

a month ago

bogdan

It is optional for this purpose and you have to explicitly install it.

a month ago

seaal

After switching from nginx to caddy-docker-proxy a year ago I just recently made the move to Pangolin[0] and am really enjoying the experience. It's a frontend to traefik with built-in auth and ability to tunnel traffic through Wireguard. I needed the TCP forwarding for my Minecraft server and this made it very simple.

Would recommend it for anyone wanting a better version of Nginx Proxy Manager. The documentation is a little lacking so far but the maintainers are very helpful in their Discord.

[0] github.com/fosrl/pangolin

a month ago

aborsy

It sounds more like an alternative to Cloudflare Tunnels. Except Cloudflare Access is secured by Cloudflare's security team.

a month ago

gloflo

And all your traffic would be watched and monitored by a US company which already has access to vast amounts of your internet browsing behaviour.

a month ago

aborsy

Yes, that’s the downside. But if you run Pangolin on a VPS, there is no downside in that respect: in both cases a cloud provider has access.

Authentik behind Caddy does that too.

a month ago

apitman

Not sure if Pangolin works this way, but in general it's possible to run tunnels end to end encrypted.

a month ago

delduca

OP said they were already using Cloudflare.

a month ago

[deleted]
a month ago

miloschwartz

Glad to see Pangolin mentioned here!

a month ago

seaal

Appreciate the hard work on the project Milo :) Glad I could spread the message.

a month ago

InMice

Thanks for this comment. I've recently been looking to use a domain for a server (instead of an ISP-assigned address) to make it publicly accessible. The server machine still physically sits in a residential location, so I don't want that exposed. This is another setup solution I can look into.

I have been looking into doing an EC2 or DO droplet with a static IP, with Tailscale Funnel for the traffic proxy. I just like that it's easy to go into the web interface for the EC2/droplet and control which IPs it allows SSH connections from.

a month ago

npodbielski

What is the use of SSO there? How would this work with other self-hosted applications that require their own auth? If you need to authenticate twice, that would not be good.

a month ago

seaal

You can disable auth for specific subdomains. With cookies I rarely see the original auth anyways.

a month ago

npodbielski

Ah ok. Though in my ideal world I would have SSO for all of my self-hosted applications and it would be one and the same: you log in to SSO, you're logged in to the app.

Especially when my family is using the same services for some stuff. I would rather not hear them complaining that they have to 'again login to access x or y TWICE'. :)

With mobile applications it is also tricky since some of them work on app tokens and require it to setup via some application UI.

So you would have to log in twice, from mobile, which is even less convenient, and from every app, since there are no shared system cookies. In summary I would rather block/whitelist IPs or IP ranges on the proxy webserver (like right now with NGINX). Which lacks a UI, yes. This is where Pangolin seems much better.

a month ago

seaal

The SSO is only once for all shared resources, and the login is saved in a password manager just like any other website, so logging in isn't really an issue. You can create different users and roles with ease for anyone that needs it.

There's also a rules section that allows you to bypass all authentication with an IP, range, or URL whitelist. It's all traefik under the hood after all, so it's very extensible with crowdsec and fail2ban, and there's always the yml if you want to deal with that.

I just have Jellyfin disabled since it has its own auth, and to prevent any issues with family members' TV streaming, since that's the only thing they care about anyways.

a month ago

npodbielski

Ah ok I understand. Thanks

a month ago

8n4vidtmkvmk

A lot of positivity in this thread. I don't have anything bad to say about Caddy, but the only advantage I'm hearing over Nginx is easier cert setup. If you're struggling with that, I can see how that's a benefit.

I configured my kubernetes cluster to automatically create and renew certs a few years ago. It's all done through Ingress now. I just point my Nginx load balancer to my new domain and it figures it out.

I don't often need local https, but when I do I also need outside access so Stripe or whatever can ping my dev server (testing webhooks). For that I have a server running Nginx which I use to proxy back to localhost; I just have to run one command to temporarily expose my machine under a fixed domain.

Works for me. Maybe not for everyone, but I'll keep doing this since I don't have any reason to switch.

a month ago

homebrewer

> I don't have anything bad to say about Caddy

Here's one: it does not support dynamically loadable modules, like most (all?) Go programs. So if you need e.g. geoip, you have to build your own, and then maintain it, tracking CVEs, etc. You can't rely on your distribution's package maintainer to do the work.

a month ago

vruiz

It's not like you have to maintain a fork; it's pretty minimal. All you need is a Dockerfile with what you want, and you build the container. Other than that you just keep bumping the version like you would with the standard distribution.

For example to use rate limiting I just have a Dockerfile like this:

    FROM caddy:2.9.1-builder AS builder

    RUN xcaddy build --with github.com/mholt/caddy-ratelimit

    FROM caddy:2.9.1

    COPY --from=builder /usr/bin/caddy /usr/bin/caddy

a month ago

maple3142

It is still a problem if you want caddy to run outside of docker (e.g. to get the real remote addr).

a month ago

Marsymars

You don’t really need to track anything either; you can set up a GitHub Actions workflow and have dependabot bump the version for you.

a month ago

baby_souffle

Or you can just ‘apt install -y nginx certbot’ and not have to worry about a build or package environment.

a month ago

dboreham

Golang fundamentally doesn't support dynamically loaded libraries. It appears at first that it does, which can waste your time, but actually it doesn't.

a month ago

silisili

Can you expand? I thought the plugin package handled this now, though I've not actually tried it. Is it a dud?

a month ago

pjmlp

It is one, yes.

Only a POC, supported only on Linux and macOS, and it basically relies on casting loaded symbols into what they are supposed to mean.

a month ago

baby_souffle

This was the big deal-breaker for me when I last looked a little while ago.

I need Route 53 and a few other DNS providers built in for Let's Encrypt support, and the docs implied that I was going to have to build those plugins myself?!!!

I stopped reading at that point because certbot is trivial to install and just works with the web server that was also one command to install. At no point did I have to create an ephemeral container just to build nginx or certbot...

a month ago

apitman

I wonder if caddy3 might implement WASM plugins via something like Wazero. Maybe too much of a performance hit.

a month ago

maccard

Caddy is an opinionated alternative to nginx with modern defaults.

I’m perfectly able to configure all of the bits and pieces of nginx or apache, but instead of spending 15 minutes or so doing it I tell caddy “here’s my domain name” and move on with my life. The massive benefit is that the features are easily replaced or replicated so if I do decide I want to use traefik or nginx for a specific feature, I can do it when I care about that. But caddy is just batteries included

a month ago

gamedever

> but instead of spending 15 minutes or so doing it I

you forgot to add on the months or years of experience you already have that let you do that in 15 minutes. Maybe today with an LLM I could figure out certs, but every time I've tried in the past there were tons and tons of jargon and tons and tons of options, and everything was written from the POV of someone who already knows it all.

a month ago

SOLAR_FIELDS

Certs are a real pain. I've been doing infra engineering for the last few years, and man oh man, people are so quick to handwave away the hidden costs of SSL. You need to understand where it terminates in your stack, how to handle the termination, and which pieces of the stack handle SSL and which don't, all while making sure you aren't doing something dumb and insecure.

A classic example is people thinking self signed certs are a good idea without fully understanding the implications of getting every single piece of your application stack and all its third party dependencies to trust the thing.

Which I guess is a good thing, but also, man, it does place a lot of power in those root CAs the internet uses.

a month ago

maccard

Oh I’m agreeing with you. My point is that even as someone who is able to and knows how to do this stuff it’s a value add.

a month ago

Diggsey

I haven't used Caddy, but its config format can't be worse than nginx's. Some crazy gotchas with the way nginx config works...

a month ago

homebrewer

> its config format can't be worse than nginx's.

It's alright — the main upside for me is that it supports parameterized includes, thus letting you reuse large chunks of configuration without relying on something like ansible or bash + envsubst.

https://caddyserver.com/docs/caddyfile/directives/import
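
A sketch of what that looks like (recent Caddy uses {args[N]} placeholders; older releases used {args.N}):

    (vhost) {
        {args[0]} {
            reverse_proxy {args[1]}
        }
    }

    import vhost app.example.com localhost:8080
    import vhost api.example.com localhost:8081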

a month ago

8n4vidtmkvmk

That's true. Nginx config has quirks that have bewildered me in the past, but nothing I couldn't figure out, and not something I have to touch often.

a month ago

thomasfromcdnjs

- Single executable (you know where all the files are regardless of os)

- Config so easy you can remember how to do it in a day

a month ago

lynndotpy

This is it for me. I got frustrated trying to do something with Nginx, which I had ~five years of experience with at the time.

Someone recommended I try Caddy, I was surprised I could just `chmod +x caddy; caddy start`, and I had replaced my laborious Nginx configuration + the new reverse proxy I wanted in ten minutes.

If I already knew Nginx in-and-out, I'd not have had the impetus to use Caddy. If Nginx config is a daunting task or something that takes longer than two minutes, I'd recommend taking a few minutes to try out Caddy.

a month ago

homebrewer

> Single executable (you know where all the files are regardless of os)

This is your package manager's job, which even Windows has these days. Other operating systems solved this problem decades ago.

https://winstall.app/apps/nginxinc.nginx

a month ago

TeMPOraL

They're not solving it well in many cases; so-called "portable apps" are a great thing.

(If you don't believe me, consider why exactly Docker got so popular.)

In a way, it's ironic that you need package managers to keep track of software on Linux, compared to Windows, which used to let you get away with the assumption that everything that makes a program lives in a single folder tree; half the software was effectively portable by default.

Where's the irony, you ask? On Windows, you could almost say that an application install is equivalent to its folder. And a folder is a type of file. And per Unix philosophy, everything is supposed to be a file!

a month ago

pjmlp

Everything is a file is only true if we ignore OS IPC and networking.

The Windows way goes back to the 8- and 16-bit home computer days; across all systems that is mostly how it worked: applications were placed inside their own directory, or actually, on their own floppy.

a month ago

k_bx

I haven't tried Caddy yet, but to me there are obvious downsides to nginx that make me want to run away from it eventually. First and foremost: slow query detection. Out of the box, it doesn't let you do that easily; you either need to hack your own log format (and parse it) or get the paid version. Another is simple stuff like log rotation (access.log/error.log): it could just use journald, but it doesn't. There are others, but these are enough to look for a better alternative.

a month ago

8n4vidtmkvmk

Shouldn't log rotation be handled by `logrotate`?
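
(For reference, the stock stanza distros ship looks roughly like this; USR1 tells nginx to reopen its log files:)

    /var/log/nginx/*.log {
        daily
        rotate 14
        missingok
        notifempty
        compress
        delaycompress
        postrotate
            [ -f /var/run/nginx.pid ] && kill -USR1 "$(cat /var/run/nginx.pid)"
        endscript
    }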

I mostly don't look at the logs outside of dev. Errors are caught by Sentry. Still, I can see the use for that... I'd probably try to ingest it into Grafana or some silly k8s solution if I cared enough.

a month ago

wink

I'd used nginx for work stuff for close to 10 years and still didn't trust myself writing configs from scratch without comparing with known-good ones. I never had that problem with Apache, so take from that what you want.

I'm not doing SRE stuff at work anymore (or it's on AWS) - so I've been using caddy for my own stuff for a couple of years with nearly zero problems.

For work I still might use traefik or nginx; my only reason against caddy was bad experiences in their support forum, but that was years ago.

a month ago

DistractionRect

I think nginx is great if you're enterprise and want to squeeze the most utility out of your boxes. The issue is there's a large disconnect between nginx and Nginx Plus, and you quickly end up making cursed configs to do basic things if you're using the former. It's literally what drove me to seek out alternatives and settle on caddy years ago.

a month ago

8n4vidtmkvmk

What kind of cursed configs are you running into? My prod config is less than 65 lines to enable php-fpm and serve some static assets.

a month ago

martinbaun

I absolutely love Caddy. Used it for years. Very reliable and so easy to set up once you learn the basics. The documentation is a bit hard to get into, but it saved me so much time and energy compared to trying to get letsencrypt working reliably on top of NGINX.

a month ago

codetrotter

I used Caddy for a couple of years but eventually went back to Nginx.

For the Let's Encrypt certs I use certbot and have my Nginx configs set up to point to the appropriate directories for the challenges for each domain.

The only difficulty I sometimes have is the situation where I am setting up a new domain or subdomain, and Nginx refuses to start altogether because I don’t have the cert yet.

It’s probably not too complicated to get the setup right so that Nginx starts listening on port 80 only, instead of refusing to start just because it doesn’t have the cert for TLS needed to start up the listener on port 443.
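
(A sketch of that port-80-only bootstrap block; the webroot path is illustrative:)

    server {
        listen 80;
        server_name new.example.com;

        location /.well-known/acme-challenge/ {
            root /var/www/letsencrypt;
        }
    }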

But for me it happens just rarely enough that I instead first write the config with the TLS/:443 parts commented out and start it, so that I can respond to the request from Let’s Encrypt for the /.well-known/blah-blah stuff, and then I re-enable listening with TLS and restart Nginx.

I used DNS verification for a while as well, so I’m already aware that’s an option too. But I kind of like the respond-on-:80 method, even if I’ve managed to make it a bit inconvenient for myself.

a month ago

throwaway94244

I tried to set up Caddy last year as a reverse proxy for all paths matching "/backend", serving the rest as static files from a directory. I had to give up, because the documentation was not good enough.

I tried the JSON config format that seems to be the recommended one, but most examples on Google use the old format. To make it even more complicated, the official documentation mentions configuration options without noting that they require plugins that are not necessarily installed on Ubuntu. Apparently they just assume that you will compile it from scratch with all options included. Lots of time was wasted before I found a casual mention of it in some discussion forum (maybe Stack Overflow, I don't remember). I just wanted the path to be rewritten to remove the "/backend" prefix before proxying to the service. I guess that is uncommon for a reverse proxy and has to be placed in a separate module.

I may appear overly critical, but I really spent a lot of time and made an honest attempt.

I'll go back to nginx. Setting up Let's Encrypt requires some additional steps, but at least it's well documented and can be found via Google searches if necessary.

a month ago

mholt

Huh, sorry that it was that difficult. Based on what you wrote, this should suffice:

    example.com
    
    root * /etc/www
    reverse_proxy /backend/* 127.0.0.1:8080
    file_server
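
And if the /backend prefix should also be stripped before proxying, handle_path removes the matched prefix automatically; a variant:

    example.com

    root * /etc/www
    handle_path /backend/* {
        reverse_proxy 127.0.0.1:8080
    }
    file_server
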
a month ago

throwaway94244

It seems simple, but I used the JSON format, which seems to add a lot of complexity (basically a serialized version of objects).

I also had to install a separate module just to get a decent access log...

Could be my fault for going too far down a wrong path, but it could also be a sign of poor documentation

a month ago

throwaway94244

I found my last attempt here (before I gave up). I also spent a lot of time getting the basic logs working, by installing modules that weren't part of the standard distribution:

    {
      "logging": {
        "logs": {
          "default": {
            "level": "INFO",
            "encoder": {
              "format": "transform",
              "template":"{common_log}"
            },
            "writer": {
              "output": "file",
              "filename": "/var/log/caddy/access.log",
              "roll": true,
              "roll_size_mb": 5,
              "roll_gzip": true,
              "roll_local_time": true,
              "roll_keep": 5,
              "roll_keep_days": 7
            }
          },
          "dev_access": {
            "level": "INFO",
            "encoder": {
              "format": "transform",
              "template":"{common_log}"
            },
            "writer": {
              "output": "file",
              "filename": "/var/log/caddy/dev_access.log",
              "roll": true,
              "roll_size_mb": 5,
              "roll_gzip": true,
              "roll_local_time": true,
              "roll_keep": 5,
              "roll_keep_days": 7
            }
          },
          "errors": {
            "level": "ERROR",
            "writer": {
              "output": "file",
              "filename": "/var/log/caddy/error.log"
            }
          }
        }
      },
      "apps": {
        "http": {
          "servers": {
            "srv0": {
              "listen": [":443"],
              "logs": {
                "default_logger_name": "dev_access"
              },
              "routes": [
                {
                  "match": [
                    {
                      "host": ["<redacted>"],
                      "path": ["/backend/*"]
                    }
                  ],
                  "handle": [
                    {
                      "handler": "subroute",
                      "routes": [
                        {
                          "handle": [
                            {
                              "handler": "rewrite",
                              "strip_path_prefix": "/backend"
                            }
                          ]
                        },
                        {
                          "handle": [
                            {
                              "handler": "reverse_proxy",
                              "upstreams": [
                                {
                                  "dial": "localhost:8080"
                                }
                              ]
                            }
                          ]
                        }
                      ]
                    }
                  ]
                },
                {
                  "match": [
                    {
                      "host": ["<redacted>"]
                    },
                    {
                      "file": {
                        "try_files": ["{path}", "/index.html"]
                      }
                    }
                  ],
                  "handle": [
                    {
                      "handler": "file_server",
                      "pass_thru": true,
                      "root": "/home/server/web/dist"
                    }
                  ]
                },
                {
                  "match": [
                    {
                      "host": ["<redacted>"]
                    }
                  ],
                  "handle": [
                    {
                      "handler": "rewrite",
                      "uri": "/index.html"
                    }
                  ]
                }
              ]
            }
          }
        }
      }
    }
a month ago

zsoltkacsandi

> I had to give up, because the documentation was not good enough.

I had the same experience. It also somewhat bothered me that even very basic and common functionality like rate limiting is not built in.

a month ago

e12e

a month ago

npodbielski

There are already full solution for things like that i.e. https://github.com/linuxserver/docker-swag

a month ago

rand846633

Reading the website top to bottom, I’m now unsure about the trustworthiness of a project that seems so full of itself. Passage after passage about how great it is leaves a bad aftertaste. Maybe it’s just me—unsure.

I no longer trust the authors to be honest about known shortcomings, let alone be upfront, truthful, and transparent when dealing with security issues and reported vulnerabilities.

I hope I’m wrong. Does anyone know how they’ve handled disclosures in the past?

a month ago

CharlesW

a month ago

jeroenhd

I dislike this style of documentation as well, but Caddy is a proven piece of technology. It can easily replace nginx or any other reverse proxy unless you're using a real niche configuration. Not needing to deal with certbot is also pretty nice.

Caddy's writing style isn't necessarily big-enterprise-middle-management-friendly, but luckily for big enterprises that want lengthy, dry, and boring, there are plenty of alternatives.

a month ago

troyvit

I just had my first experience with Caddy setting it up as a reverse proxy in front of Vaultwarden. Following along with Vaultwarden's documentation it worked like a charm and I was left thinking, "What a neat little project for hobbyists who want to get going quickly with the basics."

Then I checked out the home page and it's all "The most advanced HTTPS server in the world Raaawwrrr!"

Quite the divergence, but as other comments in the thread say, it's a legit good project.

a month ago

Deukhoofd

You're unsure about a product because the landing page is positive, and even go so far as to not trust the authors any more? That does sound like a strange expectation for a landing page, which is usually intended to make you want to use a project.

a month ago

layer8

I agree with the GP that hyperbole on a landing page (or anywhere else in the project’s communication) makes me not want to use the project. It communicates that the project lacks confidence that a down-to-earth description would speak for itself.

a month ago

t43562

I understand the attitude because there are a lot of corporate websites which similarly claim the moon and the stars and when you dig right down a lot of it is bullshit. I have worked in places like this.

Such companies tend to imply that their product can do anything and tend to have pages of verbiage rather than the brass tacks README with examples you get on a good open source project's github page.

a month ago

[deleted]
a month ago

[deleted]
a month ago

riffic

[flagged]

a month ago

gz5

The friendly licensing (Apache v2) is important too, especially with Caddy's modular architecture (single, static binary compiled for any platform).

It means ecosystems can grow around Caddy to make it even simpler and more secure, e.g. keeping your server private while serving Internet clients. So VPNs like Tailscale (1), or zero-implicit-trust options like OpenZiti (also Apache v2; (2)). Similar to what we have seen with the open source k8s ecosystem, for example.

(1) https://tailscale.com/blog/caddy (and other VPNs but the proprietary bits in the commercial TS service make it easier to use)

(2) https://github.com/openziti-test-kitchen/ziti-caddy (disclosure: maintainer...there may be other open source zero implicit trust options with these types of Caddy integrations)

a month ago

trashburger

> modular architecture

> single, static binary compiled for any platform

Huh? Aren't these exact opposites?

a month ago

mholt

Plugins (modules) are compiled in statically.

a month ago

pjmlp

Just like in the 1980's.

a month ago

infogulch

Build-time modularity is a great balance between flexibility, installation simplicity, startup reliability, and binary size.

Look at all these comments put off at the idea that maybe the tiny annoyance of building the software to have the exact features you want is worth it for reducing deployment complexity. It's kinda sad actually, compiling software should not be so scary.

a month ago

homebrewer

It's not tiny when you include the need for ongoing support. It's the difference between enabling unattended-upgrades and (mostly) forgetting the thing exists, or adding another item onto your CVE tracking list and either building pipelines to automatically rebuild and update the server, or doing it manually every time a security bulletin comes out.

When you have more than one system, it can't be just dismissed away.

a month ago

hagbard_c

I prefer to keep certificate management separate from individual applications like web servers, mail servers, XMPP servers, database servers and all the other services I run. All of these need certificates, so I have centralised certificate management and distribution. This comes down to running certbot in a container with some hook scripts to distribute new or updated certificates to the services (running in different containers and machines) which need them, restarting those services when needed.

Adding a new site to nginx comes down to copying a template configuration, changing the site name to the correct one, adding whatever configuration the specific service needs, and requesting a new certificate for it. The new certificate automatically gets copied to the container or machine running the service, so it is available after reloading the nginx configuration. The same is true for most other services, several of which share certificates because they're running in the same domain.

I used the same scheme back when I used lighttpd and will probably use it should I move to another web (or mail or XMPP or whatnot) server.
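
The certbot half of that can be as small as a deploy hook; certbot runs anything in its renewal-hooks/deploy directory after a successful renewal, exporting RENEWED_LINEAGE and RENEWED_DOMAINS (hostnames below are illustrative):

    #!/bin/sh
    # /etc/letsencrypt/renewal-hooks/deploy/distribute.sh
    scp "$RENEWED_LINEAGE/fullchain.pem" "$RENEWED_LINEAGE/privkey.pem" \
        web1:/etc/ssl/private/
    ssh web1 'systemctl reload nginx'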

a month ago

defanor

Same here (not certbot and containers, but the part about reusing certificates for multiple services): it feels wrong to couple certificate acquisition with a web server. Apparently it is convenient when the web server is the only TLS-using service, or at least when it is at the center of the setup and HTTP-based certificate acquisition is used, which seems to be a common enough case to justify this, but it is still an odd coupling in general.

a month ago

RadiozRadioz

I also do this same thing, but my Nginx configs are templated out via automation. It gives me the best of both worlds: 95% of my sites and their certs are templated out from 3 lines of config each, then for the last special 5% I can insert literal Nginx config. For most uses I have the same experience as someone with Caddy, but for that last 5% I love the "access the full power of Nginx config from the same place" escape hatch.

a month ago

kstrauser

I migrated all my Nginx hosts to use Caddy a while back. It doesn't do anything Nginx can't, but the default configuration is identical to the way I'd previously manually configured servers. It's so pleasant to get an HTTPS site up and running with 3 lines of setup.

a month ago

pierot

A great alternative is Traefik. We have been using v1 and v2 for several years now in a setup that uses docker labels to configure services.

a month ago

therein

When I had first heard of Caddy, I was experimenting with Traefik to replace an nginx setup I had for a long time.

Traefik had good potential and momentum at the time. And then Caddy started to gain some momentum too. After that there was a brief moment when Caddy made the mistake of taking an ad, including it in the `Server` response header, and making it opt-out. Once that was walked back and the dust had settled, Caddy kept gaining more and more momentum and exposure.

Traefik had a web panel that I thought was cool back then, but it tried to be too tightly coupled with containers and insisted on making service discovery an essential core component of its configuration model.

At least this is what I remember. At this point I am very happy with caddy and it is what I use pretty much on all my services.

Thank you mholt for such a nice project and sorry for being overly critical of the ad in the Server response header very many years ago. :)

a month ago

paulgb

In case anyone is (like me) unfamiliar with the server header ad: https://news.ycombinator.com/item?id=15237923

Glad they removed it. Caddy is a great piece of software.

a month ago

qntmfred

I also started using traefik a while back with docker labels. It was a bit more to set up than I thought it would be, but now that I've figured it all out it's not too bad.

At the time I had seen a lot of people talking about caddy as well and considered using it instead, but traefik had better perf/latency benchmarks, and caddy seemed a bit too geared toward, or at least better suited for, dev environment scenarios.

a month ago

oliwary

Caddy coupled with Caddy-Docker-Proxy [0] is a marvelous way to set up a server with multiple docker projects. I have it running on a couple of servers, and it just works!

[0] https://github.com/lucaslorentz/caddy-docker-proxy
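
Per-service configuration is a pair of container labels (a sketch in the project's README style; the service must share a Docker network with the Caddy container):

    services:
      whoami:
        image: traefik/whoami
        labels:
          caddy: whoami.example.com
          caddy.reverse_proxy: "{{upstreams 80}}"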

a month ago

globular-toast

I used this for my homelab before I decided my life was too easy and switched to k8s+cert manager. I use traefik[0] to handle multiple docker-compose projects in development, but this is pretty much a drop in replacement and gets you HTTPS if you have a domain and your registrar has an API. I ran a single docker-compose file with everything in it behind a single reverse proxy for quite a while.

[0] https://blog.gpkb.org/posts/multiple-web-projects-traefik/

a month ago

plagiarist

This will be interesting to read, thank you. This sounds like what I was struggling with a while ago.

I found traefik to be a small nightmare in k8s. I struggled my way up to a working implementation years ago, then a k8s version change on my k8s hosting service forced me to start from scratch.

The whole thing soured me on traefik and also k8s. I wanted to learn pods and autoscaling while having interservice networking resolved for me. I don't want to spend hours struggling with the ingress and load balancing tools (I thought) k8s is supposed to solve for me.

If I ever try it again, I'd use a different ingress tool for sure.

a month ago

globular-toast

I used traefik ingress for a while for k8s as it is included by default with k3s. But I very quickly switched to ingress-nginx. It's very easy to install and then it just takes care of itself letting you just declare Ingresses and forget about it.

a month ago

plagiarist

That's what I wanted to do. If/when I try k8s again it absolutely will be a different ingress. There's no reason it should be such a struggle.

a month ago

twasold

Have you tried Traefik? Could you compare the two? I was going to migrate to traefik soon but would consider any alternative

a month ago

oliwary

I do not have in-depth knowledge of traefik, unfortunately. I tried it a while ago, but decided to switch to the setup mentioned above for its simplicity. The example on GitHub under "Basic usage example, using docker-compose", plus adding two lines to a docker-compose file, has been enough for most of my use-cases and has never given me any trouble.

I think achieving a similar setup in traefik (e.g. https://github.com/tiangolo/blog-posts/blob/master/deploying...) would be more complicated, and I felt like I was not sure what all the labels did or how to adapt the setup.

a month ago

wingworks

I've been using Caddy since the early days, and a few times I've looked into using Traefik, but its config looked pretty complicated for what is a simple reverse proxy on Caddy.

a month ago

hollow-moe

Caddy is already powerful as it is, but with the L4 plugin it can also work at layer 4 and proxy other stuff. I made a cursed config that proxies HTTP requests to a website and all other TCP traffic to a Minecraft server.
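
Roughly, with the caddy-l4 JSON config, that idea looks like this (a sketch from memory; ports and structure are illustrative, check the plugin's README):

    {
      "apps": {
        "layer4": {
          "servers": {
            "mixed": {
              "listen": [":443"],
              "routes": [
                {
                  "match": [ { "http": [] } ],
                  "handle": [
                    { "handler": "proxy",
                      "upstreams": [ { "dial": ["localhost:8080"] } ] }
                  ]
                },
                {
                  "handle": [
                    { "handler": "proxy",
                      "upstreams": [ { "dial": ["localhost:25565"] } ] }
                  ]
                }
              ]
            }
          }
        }
      }
    }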

a month ago

gear54rus

Is there a write-up or code for that?

a month ago

therein

a month ago

vFunct

Another great web server to try is h2o: https://h2o.examp1e.net/

Especially for its HTTP/2 and HTTP/3 QUIC support.

a month ago

shawnz

Wow, this is exactly what I've been looking for: lightweight, supports CONNECT and CONNECT-UDP with all three HTTP versions, and supports encrypted client hello. Thanks for the recommendation!

a month ago

pbowyer

h2o is also one of the few servers to get resource prioritization correct for HTTP/2 and HTTP/3.

If resource prioritisation is new to you, a few references: https://www.youtube.com/watch?v=MV034VqHv5Q https://calendar.perfplanet.com/2022/http-3-prioritization-d... https://github.com/andydavies/http2-prioritization-issues

a month ago

assimpleaspossi

Agree and have been using it for many years for web sites we develop for customers. H3 support has been available for quite some time.

Support is also good and the developer is quite active.

a month ago

NetOpWibby

I just launched a new site with Caddy today: https://uchu.style

Caddy is so awesome. I actually have a few other sites on the same server and updating my config is hella simple.

I spent several years optimizing my nginx setup and I haven't touched it in years (I was obsessed with getting a perfect security score).

a month ago

bradley_taunt

Streamlined “tutorial” for those looking to easily get up and running with Caddy:

https://caddy.ninja/

a month ago

sunaookami

Love Caddy! Switched to it 2 years ago from NGINX/OpenResty and it made my config much less verbose and simpler. Previously I used lua-resty-auto-ssl with OpenResty, but it's kinda deprecated, and I will never touch certbot, so I needed a "fire-and-forget" solution. Serving 70k visitors monthly very well :)

a month ago

engine_y

A couple of years ago, we tried replacing nginx with Traefik. The main reason was its HTTPS integration with Let's Encrypt.

Let's just say it takes a lot these days to choose something that is not nginx.

a month ago

__jonas

That’s a bit vague, could you share more about what caused you to stick with nginx / problems you faced with alternatives?

a month ago

engine_y

Welp. We used it in prod for ~18 months or so but the experience was not something we'd repeat.

The configuration of Traefik, embedded in our case in the docker-compose file, was not clear. What was supposed to be 'auto-detection' of services ended up looking like a hodge-podge of configs spread between several files.

The logging was sub-par - we couldn't properly debug issues.

And then we ended up migrating to terminating HTTPS on AWS's ELB, so the Let's Encrypt integration became irrelevant, which catalyzed our move back to nginx.

a month ago

__jonas

Gotcha, thanks! I've had similar problems with Traefik and docker compose actually, got it working well once, but then after changing some settings around it wasn't properly proxying to one of my containers anymore and I gave up trying to figure it out and switched to Caddy – since I'm not dynamically scaling services to run across many containers in a cluster or such, I don't think Traefik offers much of an advantage for me personally. I've never really looked back to nginx though, I quite like Caddy's sensible defaults.

a month ago

p2detar

I was reluctant to switch to Caddy because I couldn't figure out whether it does or does not use Linux's sendfile syscall, which made a huge difference for me with Nginx. [0]

Nevertheless, I used Caddy to front our internal Mattermost chat server and it works flawlessly to date. The configuration was really simple, I like it a lot.

0 - https://github.com/caddyserver/caddy/issues/4731

a month ago

commandersaki

My hunch is that sendfile isn't going to give any discernible improvement, and the benchmarks in the thread seem to confirm this.

a month ago

jsheard

IIRC Caddy uses sendfile on HTTP connections, but HTTPS connections are or were blocked by Go not supporting Kernel TLS yet. If the kernel is sending the file by itself then it also has to handle the crypto.

a month ago

sagolikasoppor

I have used caddy for years as a reverse proxy for all my side projects. It is one of my favorite pieces of software.

So easy to set up, and it performs very well.

a month ago

meander_water

I'm surprised no-one has mentioned the admin API [0], which imo is one of the main differentiators of Caddy. I've used it to dynamically change the config without any downtime.

[0] https://caddyserver.com/docs/api
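
For example (the admin endpoint listens on localhost:2019 by default):

    # inspect the live config
    curl localhost:2019/config/

    # atomically swap in a new one, no restart
    curl -X POST localhost:2019/load \
         -H "Content-Type: application/json" \
         -d @caddy.json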

a month ago

homebrewer

Most nginx options can be applied on the fly by changing the config and running 'nginx -s reload' or 'systemctl reload nginx' (which runs the first command underneath). No downtime for most use cases (including switching backends).

a month ago

jsheard

Nginx does have something similar... but it's one of the features that is exclusive to the paid version. Likewise with other features like active healthchecks, which are supported by both, but paywalled in Nginx.

a month ago

iloveitaly

Caddy is really great. In prod, but more surprisingly, for all environments.

- There's a great tool, localias, which uses Caddy for a local dev server https://github.com/peterldowns/localias

- I use it locally for dev https://github.com/iloveitaly/python-starter-template/blob/m... which aligns tricky bits of a web app like HTTP redirect, cookies, and CORS to work consistently across dev and prod.

- Can be used on GHA for HTTPS as well https://github.com/iloveitaly/github-action-localias

a month ago

satvikpendem

Caddy is pretty nice, I believe Coolify uses it as part of their self-hosted open source PaaS model. Just out of curiosity, are there any alternatives in Rust? I think Pingora is one, as well as River which is built on top of it [0], but I'm not sure how widely used the latter is as a Caddy replacement.

[0] https://github.com/memorysafety/river

a month ago

aquariusDue

As far as I know only NGINX Unit[0] might be considered a viable-ish real alternative to Caddy. But other than that nothing comes close to Caddy's ease of use and versatility. You get a lot of stuff and there are heaps of community modules for it, the only downside last time I checked was increased memory usage compared to standard nginx and slower performance as a reverse proxy.

Depending on your setup the fact that you have to choose between Caddyfiles (which are easier to reason about than nginx config files) or the REST API for configuration might be a downside to some people. There's a chance I might be wrong about this one though.

But to answer your question directly there are no real alternatives in Rust as of now.

[0] https://unit.nginx.org/

a month ago

cycomanic

AFAIK traefik is written in rust. I don't have any experiences with it though. I only had a brief look when setting up some personal project, but ended up going with caddy, because traefik seemed overkill for my requirements.

a month ago

satvikpendem

Traefik is also written in Go, actually.

a month ago

cycomanic

Ah, I misremembered then. Thanks!

a month ago

k_bx

Too bad river hasn't had commits in five months.

a month ago

oriettaxx

> Automatic HTTPS provisions TLS certificates for all your sites and keeps them renewed. It also redirects HTTP to HTTPS for you!

When I set the IP of a domain to point to Caddy, do I have to tell Caddy somehow, or is the certificate created on the fly on the first HTTPS call?

It's really important for us https://news.ycombinator.com/item?id=43053955 due to our need to redirect the apex domain to www ... which we can solve with the free (great) service provided by https://www.apextowww.com/#get-started ... but we are just curious, since that service does use Caddy (I see it in their headers), so maybe we would just need Caddy :)
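
If I understand the docs right, the redirect part alone would be something like this (just a sketch on our side, not tested; the backend port is made up):

    example.com {
        redir https://www.example.com{uri} permanent
    }

    www.example.com {
        reverse_proxy localhost:8080
    }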

a month ago

timdev2

You can use Caddy's CertMagic library in your own server, if you want something super-lightweight.

a month ago

wim

I also find their library for Go (https://github.com/caddyserver/certmagic) a major timesaver! We're using it to make it easy for people to self-host our app, and it takes care of all the TLS cert setup/renewal.

a month ago

andrewstuart

One day a number of years ago I decided I'd totally had enough of the arcane and difficult-to-debug Nginx configuration.

I heard about how Caddy did automatic https, and given the searing pain of doing https on Nginx, decided to make the switch.

Never regretted it. Caddy is always up to the job, even for sophisticated reverse proxying configs.

a month ago

geocrasher

Last year a coworker mentioned Caddy, so I decided to set it up on a spare box just to see how well it worked with WordPress, PHP etc. It did okay. I didn't do any big tests with it but it seemed to work well enough, and was super simple to configure. It does seem quite niche however.

a month ago

DerSaidin

> It did okay. I didn't do any big tests with it but it seemed to work well enough, and was super simple to configure.

What issue/experience stopped you from saying "it did great"?

a month ago

geocrasher

Nothing in particular, it just did what it was supposed to do quite easily. I didn't have the opportunity to load test it, and the small VM it was on would have been the bigger bottleneck, I think.

a month ago

jasongill

I have had "move our PHP infrastructure to FrankenPHP" on my todo list for the last year or two - it's Caddy combined with PHP in Go

a month ago

napkid

I did exactly this for my startup (https://easy.green). I'm very happy with this setup so far, with the code-embedding feature for on-premises delivery. I had to disable the worker mode though; it caused issues with uncommon features in the ORM.

a month ago

indigodaddy

Yep, I've used Franken a few times; it's the easiest PHP setup one could imagine.

a month ago

clementmas

Is FrankenPHP stable? It looks good but quite new

a month ago

francislavoie

Yes, stable and production ready

a month ago

samgranieri

I'm using Caddy as a proxy to various services running Node, Ruby, or Elixir. It replaced a setup using mkcert and nginx, and I have just about everything I need proxied to *.localhost, with Caddy's awesome Step-CA-derived certificate libraries providing the fun.
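
A sketch of that kind of setup (service ports are made up; Caddy's internal CA takes care of certificates for *.localhost names automatically):

    app.localhost {
        reverse_proxy localhost:3000
    }

    chat.localhost {
        reverse_proxy localhost:4000
    }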

a month ago

Levitating

I am still looking for a dead simple webserver that can serve files, do CGI and reverse proxy.

I have been using lighttpd for much of this. Its configuration is extremely simple, although it has some quirks. It also has a few problems, like not always correctly logging errors related to CGI, and not being able to proxy to a backend over SSL.

I tried caddy because of its simple configuration syntax and plugin support.

For Caddy, the sample webpage alone threw me off. It includes a bunch of CSS, custom fonts, and for whatever reason it has tilted text.

I'd like a test webpage to fit on my terminal screen when I SSH to it. Or at least not require a modern browser to render.

Anyway, I just don't think Caddy fits my use case. Are there no dead simple, lightweight alternatives to nginx and Apache that actually work?

a month ago

orblivion

> I'd like a test webpage to fit on my terminal screen when I SSH to it. Or at least not require a modern browser to render.

It sounds like it may be worth wading into it a little more if this is what threw you off. Or is there some other reason Caddy doesn't fit your use case?

It has batteries included, so it'll have some things that are a little heavy-handed or confusing. On balance I appreciate how simple it is for a user at the end of the day.
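
For what it's worth, the core of what you described is only a few lines; file serving and proxying are built in, though as far as I know CGI needs a third-party plugin (a sketch, with made-up paths and ports):

    example.com {
        root * /srv/www
        file_server
        reverse_proxy /api/* localhost:9000
    }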

a month ago

Levitating

I might give Caddy another go. I just felt like Caddy tried to be a bit more "modern" than my needs call for; I only need to execute simple CGI scripts and forward requests, after all. And most "webpages" of mine are plain HTML.

Same goes for the CLI, which I don't see myself using much; I just start/stop jobs with systemd anyway.

The built-in ACME support I could maybe use, but I already have some beautifully handcrafted cron jobs for renewing my certs :) I even managed to more or less hack ACME support into lighttpd using its config syntax.

Caddy actually used to have a very minimal test page, I think they changed it with v2.

I really thought lighttpd was perfect for me: a very simple Unix-like daemon that just requires a config file to run. I wish lighttpd v2 were still in development.

In any case I will probably give Caddy another go and otherwise switch back to nginx. Lighttpd has been giving me too many problems in production, like not correctly logging CGI errors, and crashing when the configuration mentions a hostname that cannot be resolved. Even when testing simple CGI setups, getting errors to show on stdout requires setting /proc/fd/2 as the error log file...

a month ago

mooreds

We moved to caddy as a front end for our unlimited domains offering after some experimentation[0]. ALBs didn't work at the scale we needed them to, so we run our own caddy instances.

Seems to work great. We did run into a rate limiting issue with letsencrypt when we tried to provision too many certs at one time. Ended up having to use wildcard certs to decrease the number of requests. Hardly caddy's fault, though.

0: https://fusionauth.io/blog/unlimited-domains-fusionauth

a month ago

qudat

Caddy made it possible for us at https://pico.sh to provide on demand tls for user subdomains and custom domains.

It really was pretty easy to set up and “just works”.
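
The gist of it, as a rough sketch (the ask endpoint and ports are placeholders, not our real config):

    {
        on_demand_tls {
            ask http://localhost:5555/check-domain
        }
    }

    https:// {
        tls {
            on_demand
        }
        reverse_proxy localhost:3000
    }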

a month ago

pomdtr

I'm a big fan of pico.sh (it's one of my main inspirations for smallweb.run).

I'm sure you're aware of it, but it might be interesting to others: Caddy exposes all of its internals as libraries you can easily integrate into your projects: https://github.com/caddyserver/certmagic

a month ago

sam_goody

Caddy is good, especially for super simple static sites. As soon as it gets somewhat complex, the configs start becoming messy and opaque, e.g.:

Nginx:

    rewrite ^/old/((\w|-)+) /new/$1.php;
 
Caddy:

    @oldPath {
        path_regexp old ^/old/([\w-]+)
    }

    rewrite @oldPath /new/{re.old.1}.php
And many things are not even handled by Caddy, or fail silently (for example, we could not get Netdata to reverse_proxy behind Caddy no matter what we tried, and the logs were completely useless).

a month ago

francislavoie

You can shorten it:

    @oldPath path_regexp ^/old/([\w-]+)
    rewrite @oldPath /new/{re.oldPath.1}.php

Having matching and handling be separate steps is a huge benefit to the composability of the config: you can have pluggable matchers and handlers.

Re: your issue with netdata, ask on the forums for help.

a month ago

Vaslo

I moved off of NPM and tried Caddy, since Traefik seemed complicated. The paradox was that when I tried more complex setups, like Authentik as a front end and some web books, I could never get them to work with Caddy.

But Traefik, albeit more complicated, had tons more examples to work from, and a little help from LLMs to clean up my configs when done made it much easier in the long run.

I tried Caddy with caddy-docker-proxy and maybe that was my issue? I’m happy with Traefik but for a simple config I can definitely see the advantages of Caddy.

a month ago

rmm

I love, love Caddy. I only use it for my homelab to get HTTPS everywhere, but it's so much easier than Traefik for me that I honestly don't know why everyone prefers Traefik. What am I missing?

a month ago

lurking_swe

Traefik just integrates better with docker, especially docker compose. Depends on your use case honestly.

nothing wrong with caddy.

a month ago

justin_oaks

I was checking into using Caddy for new projects instead of NGINX or Apache HTTPD, but my new projects require OAuth2/OIDC authentication. It seems there's no built-in support for that kind of thing. There's the caddy-security plugin, but people online have been saying it has disclosed security vulnerabilities that aren't being fixed.

Are you using caddy-security? Or is there a better alternative?

a month ago

uriah

With nginx I'm assuming you would use something like Vouch or oauth2-proxy? Something like the architecture described here:

https://github.com/vouch/vouch-proxy?tab=readme-ov-file#what...

Can't speak for caddy-security, but the forward_auth feature is the Caddy equivalent of nginx's auth_request.
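
If you go that route, wiring oauth2-proxy in via forward_auth might look roughly like this (ports and header names are assumptions based on a typical oauth2-proxy setup, not tested):

    example.com {
        # check each request against oauth2-proxy's auth-only endpoint first
        forward_auth localhost:4180 {
            uri /oauth2/auth
            copy_headers X-Auth-Request-User X-Auth-Request-Email
        }
        reverse_proxy localhost:8080
    }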

a month ago

mdaniel

Just watch out when using oauth2-proxy because its default session storage using cookies can easily blow out the header size of nginx leading to the dreaded 400 header too large

One fix is moving session storage to redis <https://oauth2-proxy.github.io/oauth2-proxy/configuration/se...> and the other (if you have control over the nginx config) is bumping its allowed header size "large_client_header_buffers 4 128k;" <https://nginx.org/en/docs/http/ngx_http_core_module.html#lar...>

If you're using nginx as an ingress controller, the annotations support it: <https://kubernetes.github.io/ingress-nginx/user-guide/nginx-...> and/or auth-snippet <https://kubernetes.github.io/ingress-nginx/user-guide/nginx-...>

a month ago

justin_oaks

Thanks for the heads-up.

I'm curious what would be stored in the session to make it large enough to be a problem, but it's good to know to watch out for it.

a month ago

mdaniel

I believe it's almost always the "groups" claim <https://github.com/oauth2-proxy/oauth2-proxy/issues?q=cookie...> but I would suspect any sufficiently large set of claims would do it (e.g. a huge "iss", erroneously returning the user profile jpeg attribute, who knows)

a month ago

justin_oaks

Thanks. I've used oauth2-proxy with NGINX. So I could try to set up oauth2-proxy with Caddy in a similar way.

a month ago

LAC-Tech

I gave up on caddy when the documentation around storing logs assumed systemd. I'm just a basic bitch alpine linux user; nginx was easier there.

a month ago

francislavoie

Logs are just written to stderr by default, and when you use systemd it ingests them automatically. If you run it on Alpine, it's up to you to wire that up.
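
If you'd rather have Caddy write the file itself, the global log option covers that (the path is just an example):

    {
        log {
            output file /var/log/caddy/caddy.log
        }
    }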

a month ago

ImpostorKeanu

I'm absolutely hooked on Caddy. Just developed an AITM phishing tool like EvilGinx2. Challenging project, but Caddy's modularity really brings it all together. Need encrypted landing pages? Just string together a few modules. Need conditional forward proxies to make sure requests originate from geographic regions? Placeholders to the rescue.

Absolute stunner project.

a month ago

braebo

I love caddy! I use it to serve webapps and APIs on my hetzner boxes.

I hate the config file though. It could be 10x safer / more discoverable / nicer to use by just using JSON with a schema that validates and shows docs in tooltips, similar to tsconfig.

I suspect my typescript lsp addiction and relatively limited (though non-zero) backend experience has spoiled my tolerance for the primal nature of backend tooling.

a month ago

FjordWarden

But it does support JSON for config files; you can even upload them to an endpoint to change your config at runtime. Dunno if it has a schema.

a month ago

aborsy

I switched to Caddy from nginx and Traefik, and never looked back.

Why do I need to write a lot of code to say map example.com to 1.2.3.4?

I get that there are headers etc., but in most cases it should be just one line, with sane defaults. That's what Caddy does: takes care of SSL automatically and does the job with minimal code. If you have a special setup, there are options, and you can write more code to achieve that.
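
For example, the whole mapping can be (upstream address made up):

    example.com {
        reverse_proxy 1.2.3.4:8080
    }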

a month ago

cmsj

I really like Caddy. It used to do reverse proxying and file serving for my homelab, but more recently I've demoted it to just file serving, because being able to configure reverse proxying purely with container labels, which is what Traefik allows me to do, is awesome.

a month ago

aglione

Check out https://github.com/lucaslorentz/caddy-docker-proxy, a Caddy plugin that does exactly this, and in a less verbose way than Traefik.

Moreover, if you have more than one Caddy server deployed, it handles TLS certificate management in a shared environment; this is not available in the Traefik open source edition (only in the enterprise solution).

a month ago

qwertox

> With On-Demand TLS, only Caddy obtains, renews, and maintains certificates on-the-fly during TLS handshakes. Perfect for customer-owned domains.

Does it allow plugging into this system so that post-renewal actions are possible, like distributing those certificates to other machines through Python scripts?

a month ago

francislavoie

Yes, it has events on cert issuance/renewal you can hook into with Go plugins (which can trigger your Python script). But generally you want all Caddy instances to share storage via storage modules (e.g. Redis, Consul, a DB, or a synced filesystem), and Caddy manages locks through the storage so they don't step on each other's toes.

a month ago

sebiw

My two cents having a respectable amount of infrastructure ops experience: Use Caddy to get going quickly and to get a solid setup with minimal effort. Use Nginx if you know what you're doing and want full and deep control over the web server / proxy layer of your stack.

a month ago

drunkpotato

Caddy is beautifully simple, a joy to setup, configure & use for a simple home server with a few services. I love it! I used nginx before, and it’s great, but caddy makes things easier. I love how easy it makes SSL certificates & reverse proxies.

a month ago

inglor_cz

I use Caddy within FrankenPHP and it is a very good server. Plus the community is really helpful.

I wish it had more informative logs, though. Some subtle errors in Caddyfile may result in the server not communicating, and not telling you that something is wrong.

a month ago

ulrischa

I get sick when I think about migrating my htaccess and apache rules to this format

a month ago

francislavoie

Yeah, tech debt sucks a lot.

a month ago

daft_pink

Super curious if I can easily put this in front of my localhost jupyter notebook server or other service to get https on my local network.

a month ago

remram

You'd still need a domain to get a certificate.

a month ago

triyambakam

Couldn't self-signed certs work? You would still see a warning in the browser, I think, but you can allow it.

a month ago

remram

You don't need Caddy then, use --certfile and --keyfile

a month ago

triyambakam

Caddy is still useful as a reverse proxy. The https is just a nice benefit

a month ago

remram

Somebody specifically asked about adding HTTPS to their jupyter notebook, and you are pointing out a solution that is not needed for jupyter, will show certificate errors, and is "useful" only in ways you only hint at. Thank you for your comments.

a month ago

triyambakam

Ah yeah, I see I got off course. Thanks for pointing that out

25 days ago

heraldgeezer

I'm old. Why would I trust this over Apache and NGINX? I've never heard of this. Is this for local dev or to run actual bigger sites?

a month ago

francislavoie

Production-ready, 10-year-old project. It's not new; it's older than Kubernetes, even.

a month ago

RagnarD

I recently found Caddy and now use it extensively. A much nicer, more modern setup experience than Nginx.

a month ago

ivzhh

One thing I didn't get: why did both Caddy and Traefik change the syntax of their configurations?

a month ago

francislavoie

Caddy v0/1 was just built up naturally bit by bit, never had a proper design/structure around its config. Caddy v2 was a complete rewrite from the ground up, including thinking a lot more about config design, so it had to be a breaking change for the project's future to thrive.

a month ago

upghost

Interesting. Is this supposed to be an NGINX/reverse-proxy replacement, or is it complementary?

a month ago

vunderba

It's a standalone replacement for NGINX / Apache. I swapped over to it years ago and it's been rock solid, handles HTTPS certificates via ZeroSSL, and config files are very simple to set up. The author of the project is also really responsive.

a month ago

geocrasher

My understanding is that it is standalone.

a month ago

1oooqooq

I maintain both Caddy and Traffic Server.

Traffic Server sees dozens of security releases a year... and I always wonder whether that makes it less secure, or more secure because people actually do find the holes there.

a month ago

soheil

Massive flex by having an angled perspective view of the animated terminal when they could've much more easily stuck a flat gif in its place.

a month ago

therein

I'd imagine most of everyone here knows about Caddy. Even mholt. :)

a month ago


jbverschoor

Or just use orbstack and get https for free

a month ago

mikeshi42

Caddy isn't just for local docker-based mac development - I use it for any small project hosted on a linux VPS for example.

a month ago

lurking_swe

i love orbstack for personal projects but it’s not free for commercial use. FYI.

a month ago

101008

I couldn't find (using Google) a good tutorial on deploying Django with Caddy to a DigitalOcean droplet. Can anyone suggest what I should look for?

I could ask a LLM but I'd prefer the old way for this type of stuff...

a month ago

eriklaco

Do you necessarily need to deploy it to a DO droplet? I can help you with that, but have you considered any PaaS solutions? I've built such a platform: seenode. You can take a look; it might be useful. If you need any help or have a question, let me know.

a month ago

arccy

despite knowing what caddy is, this site turns me off with all the marketing fluff.

since when was hn for ads? there's nothing notably technical on the page

a month ago

lolinder

> By default, Caddy automatically obtains and renews TLS certificates for all your sites.

This is pretty technical, and is an absolute game changer for local TLS setups. Three paragraphs under this quote go into more details.

Then a third of the way down the page there's a way to try it out live by changing your DNS records. Right after that are a few config samples. Then there are links to three different white papers. Then more code samples. And then still more code samples.

You get the idea.

What exactly are you missing here as far as technical stuff? Is it just that they add exciting fonts and stuff on top of the technical stuff?

a month ago

breadwinner

> game changer for local TLS setups

I wouldn't call it a game changer, because you have to expose ports 80 and 443 to the public internet to get a real certificate. If you can't do that, then Caddy signs its own certificates. That means you have to install the root certificate... which is hard to do in most companies.

a month ago

lolinder

No you don't. DNS challenges are a thing, and they're easy to set up. That's how my local setup is configured.
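
As a sketch, with the Cloudflare plugin (assumes a Caddy build that includes caddy-dns/cloudflare and an API token in the environment):

    internal.example.com {
        tls {
            dns cloudflare {env.CF_API_TOKEN}
        }
        reverse_proxy localhost:8080
    }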

a month ago

arccy

it's all just marketing fluff

a month ago

lynndotpy

Not sure how this is "fluff". That's a feature that has significant appeal for me as a sysadmin. The landing page is full of demos and technical features relevant for anyone who needs an HTTP server.

It took me ten minutes with Caddy to replace five years of Apache+Nginx. That was three years ago.

a month ago

kupopuffs

r u sure you want the technical stuff?

a month ago

arccy

it looks like any other marketing site, no talk of how or why anything in particular is implemented the way it is, doesn't admit to any tradeoffs or limitations, all positive spin designed to lure you in.

a month ago

lolinder

Your comments are more devoid of technical details than the site is. People keep asking you to clarify what you mean and you seem unable to pin down exactly what is lacking.

a month ago

BoingBoomTschak

I agree that the website isn't very appealing to techies. Still, it's what I use to serve my website: so simple a chimp could deploy with it. I'm kinda dreading the moment I'll put cgit behind it, though.

a month ago

rfurmani

I'm serving AI models on Lambda Labs, and after some trial and error I found that a single vllm server along with Caddy, behind Cloudflare DNS, works really well and is really easy to set up:

    vllm serve ${MODEL_REPO} --dtype auto --api-key $HF_TOKEN --guided-decoding-backend outlines --disable-fastapi-docs &

    sudo caddy reverse-proxy --from ${SUBDOMAIN}.sugaku.net --to localhost:8000 &

a month ago

homebrewer

It's really best to avoid running web servers as root. It's easy to forward port 80 with iptables, change the kernel knob that lets unprivileged users bind ports 80 and above, or set the network capability on the binary.

https://stackoverflow.com/questions/413807/
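
Concretely, the last two options look something like this (the binary path depends on your install):

    # let the caddy binary bind privileged ports without root
    sudo setcap 'cap_net_bind_service=+ep' "$(command -v caddy)"

    # or lower the privileged-port threshold system-wide
    sudo sysctl net.ipv4.ip_unprivileged_port_start=80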

a month ago

delduca

You can use Cloudflare Tunnel, which is even better and simpler than running an extra service.

a month ago