Nginx introduces native support for ACME protocol (blog.nginx.org)
737 points by phickey 23 hours ago | 259 comments
Shank 22 hours ago [-]
> The current preview implementation supports HTTP-01 challenges to verify the client’s domain ownership.

DNS-01 is probably the most impactful for nginx instances that aren't public facing (e.g., behind Nginx Proxy Manager). I really want to see DNS-01 land! I've always felt that it's also one of the cleanest challenges, because it's just updating some records and doesn't need to be directly tethered to what you're hosting.
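For reference, the preview's HTTP-01 setup from the announcement looks roughly like this (reproduced from memory, so treat it as a sketch and double-check the nginx-acme module docs before copying):

  # a resolver is required so nginx can reach the ACME server
  resolver 127.0.0.1:53;

  acme_issuer letsencrypt {
      uri        https://acme-v02.api.letsencrypt.org/directory;
      contact    mailto:admin@example.com;
      state_path /var/cache/nginx/acme-letsencrypt;
      accept_terms_of_service;
  }

  acme_shared_zone zone=ngx_acme_shared:1M;

  server {
      listen 443 ssl;
      server_name .example.com;

      acme_certificate letsencrypt;

      ssl_certificate     $acme_certificate;
      ssl_certificate_key $acme_certificate_key;
  }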

uncleJoe 20 hours ago [-]
no need to wait: https://en.angie.software/angie/docs/configuration/modules/h...

(Angie is the nginx fork led by original nginx developers who left F5)

clvx 22 hours ago [-]
But you have to have your DNS API key loaded, and many DNS providers don't allow API keys per zone. I do like it, but a compromise could be awful.
qwertox 20 hours ago [-]
You can make the NS record for _acme-challenge.domain.tld point to another server under your control; that way you don't have to update the zone through your DNS hoster. That server then only needs to be able to resolve the challenges for whoever queries them.
jacooper 17 hours ago [-]
How?
andreashaerter 17 hours ago [-]
CNAMEs. I do this for everything. Example:

1. Your main domain is important.example.com with provider A. No DNS API token for security.

2. Your throwaway domain in a dedicated account with DNS API is example.net with provider B and a DNS API token in your ACME client

3. You create _acme-challenge.important.example.com not as a TXT via API but as a permanent CNAME to _acme-challenge.example.net or _acme-challenge.important.example.com.example.net

4. Your ACME client writes the challenge responses for important.example.com into a TXT record at the unimportant _acme-challenge.example.net and has API access only to provider B. If that gets hacked and example.net is lost, you change the CNAMEs and use a new domain whatever.tld as the CNAME target.
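In zone-file terms, the static part of steps 1-4 is just this (TTLs illustrative):

  ; zone example.com at provider A -- set once, no API token exists for this zone
  _acme-challenge.important.example.com. 3600 IN CNAME _acme-challenge.example.net.

  ; zone example.net at provider B -- the only thing the ACME client's token can touch
  _acme-challenge.example.net. 60 IN TXT "<challenge token, written at issue time>"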

acme.sh supports this (see https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo... - this also works for wildcards, as described there), and most ACME clients do.

I also wrote an acme.sh Ansible role supporting this: https://github.com/foundata/ansible-collection-acmesh/tree/m.... Example values:

  [...]
  # certificate: "foo.example.com" with an additional "bar.example.com" SAN
  - domains:
    - name: "foo.example.com"
      challenge:  # parameters depend on type
        type: "dns"
        dns_provider: "dns_hetzner"
        # CNAME _acme-challenge.foo.example.com => _acme-challenge.foo.example.com.example.net
        challenge_alias: "foo.example.com.example.net"
    - name: "bar.example.com"
      challenge:
        type: "dns"
        dns_provider: "dns_inwx"
        # CNAME _acme-challenge.bar.example.com => _acme-challenge.example.net
        challenge_alias: "example.net"
  [...]
teruakohatu 9 hours ago [-]
This has blown my mind. It's been a constant source of frustration, since Cloudflare stubbornly refuses to allow non-enterprise accounts to have a separate key per zone. The thread requesting it is a masterclass in passive-aggressiveness:

https://community.cloudflare.com/t/restrict-scope-api-tokens...

Jnr 9 hours ago [-]
When setting up the API key, use the "Select zones to include or exclude" section. Works fine on the free account.
teruakohatu 5 hours ago [-]
I should have clarified: you can't do this for subdomains on a non-enterprise account.
Kovah 9 hours ago [-]
Could you elaborate on the separate key per zone issue? It's possible to create different API keys which have only access to a specific zone, and I'm a non-enterprise user.
johnmaguire 8 hours ago [-]
This allows you to restrict it to a domain (e.g. example.com) but not a sub-domain of that domain.
Kovah 8 hours ago [-]
Ah I see, thanks for the clarification!
theschmed 15 hours ago [-]
Thank you for this clear explanation.
bwann 15 hours ago [-]
I used the acme-dns server (https://github.com/joohoi/acme-dns) for this. It's basically a mini DNS server with a very basic API, backed by sqlite. All of my acme.sh instances talk to it to publish TXT records, and it accepts queries from the internet for those TXT records.

There's an NS record so *.acme-dns.example.com delegates requests to it, and each of my hosts that needs a cert has a public CNAME like _acme-challenge.www.example.com CNAME asdfasf.acme-dns.example.com, which points back to the acme-dns server.

When setting up a new hostname/certificate, a REST request is sent to acme-dns to register a new username/password/subdomain which is fed to acme.sh. Then every time acme.sh needs to issue/renew the certificate it sends the TXT info to the internal acme-dns server, which in turn makes it available to the world.
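The two REST calls involved are tiny. A minimal sketch in Python (endpoints per the acme-dns README; host and credentials are placeholders):

  import requests

  ACME_DNS = "https://acme-dns.example.com"

  # One-time, per hostname: register an account; keep the response for acme.sh.
  reg = requests.post(f"{ACME_DNS}/register", timeout=10).json()
  # reg now holds: username, password, subdomain, fulldomain

  # At issue/renew time: publish the challenge TXT value.
  requests.post(
      f"{ACME_DNS}/update",
      headers={"X-Api-User": reg["username"], "X-Api-Key": reg["password"]},
      json={"subdomain": reg["subdomain"], "txt": "<43-char challenge token>"},
      timeout=10,
  ).raise_for_status()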

dwood_dev 17 hours ago [-]
Usually you just CNAME it.

You can CNAME _acme-challenge.foo.com to foo.bar.com.

Now, when you do the DNS challenge, you create a TXT record at foo.bar.com with the challenge response; through CNAME redirection, the TXT record is picked up as if it were directly at _acme-challenge.foo.com. You can now issue wildcard certs for anything under foo.com.

I have it on my backlog to build an automated solution later this year to handle this for hundreds of individual domains, and then put the resulting certificates in AWS Secrets Manager.

I'm also going to see if I can make some sort of ACME proxy: internal clients authenticate to me, but they can't control DNS, so I make the requests on their behalf. We need to get prepared for ACME everywhere. In May 2026 it's 200-day certs, and it only goes down from there.

qwertox 11 hours ago [-]
In my case I have a very small nameserver at ns.example.com. So I set the NS record for _acme-challenge.example.com to ns.example.com.

An A-record lookup for ns.example.com resolves to the IP of my server.

This server listens on port 53. It is a small, custom Python server using `dnslib`, which also listens on, let's say, port 8053 for incoming HTTPS connections.

In certbot I have a custom handler which, when it is passed the challenge for the domain verification, sends the challenge information via HTTPS to ns.example.com:8053/certbot/cache. The small DNS server then stores it and waits for a DNS query on port 53 for that challenge to come in, and if one does, it serves that challenge's TXT record.

  # excerpt from the dnslib request handler: qname/qtype come from the query,
  # `a` is the reply being built, `storage` caches the submitted challenge values
  elif qtype == 'TXT':
    if qname.lower().startswith('_acme-challenge.'):
      domain = qname[len('_acme-challenge.'):].strip('.').lower()
      if domain in storage['domains']:
        for verification_code in storage['domains'][domain]:
          a.add_answer(*dnslib.RR.fromZone(qname + " 30 IN TXT " + verification_code))
The certbot hook looks like this

   #!/usr/bin/env python3

   import os
   import urllib.parse

   import requests

   r = requests.get('https://ns.example.com:8053/certbot/cache'
                    '?domain=' + urllib.parse.quote(os.environ['CERTBOT_DOMAIN'])
                    + '&validation-code=' + urllib.parse.quote(os.environ['CERTBOT_VALIDATION']))
That one nameserver instance and hook can be used for any domain and certificate, so it is not limited to the example.com domain; it can also deal with challenges for, let's say, a *.testing.other-example.com wildcard certificate.

And since it already is a nameserver, it might as well serve the A records for dev1.testing.other-example.com, if you've set the NS record for testing.other-example.com to ns.example.com.

cherry_tree 15 hours ago [-]
https://cert-manager.io/docs/configuration/acme/dns01/#deleg...
yupyupyups 17 hours ago [-]
It's time for DNS providers to start supporting TSIG + key management. This is a standardized way to manipulate DNS records, with very granular ACLs.

We don't need 100s of custom APIs.

https://en.m.wikipedia.org/wiki/TSIG
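For illustration, a TSIG-signed RFC 2136 update with dnspython is only a few lines (server address, key name and secret are placeholders):

  import dns.query
  import dns.tsigkeyring
  import dns.update

  # TSIG key as configured on the authoritative server
  keyring = dns.tsigkeyring.from_text({"acme-key.": "bWVyZWx5IGFuIGV4YW1wbGU="})

  update = dns.update.Update("example.com", keyring=keyring,
                             keyalgorithm="hmac-sha256")
  # Replace any existing challenge record with the new token (TTL 60)
  update.replace("_acme-challenge", 60, "TXT", "<challenge token>")
  dns.query.tcp(update, "192.0.2.53", timeout=10)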

reactordev 15 hours ago [-]
The whole point is to abstract that away from the users so they don't know it's a giant flat file. Selling a line at a time for $29.99. (I joke, obviously)
withinboredom 4 hours ago [-]
Digital Ocean DNS is free (it’s the only reason I have an account there)
grim_io 22 hours ago [-]
Sounds like a DNS provider problem. Why would Nginx feel the need to compromise because of some 3rd party implementation detail?
toomuchtodo 19 hours ago [-]
Because users would pick an alternative solution that meets their needs when they don't have leverage or ability to change DNS provider. Have to meet users where they are when they have options.
ddtaylor 21 hours ago [-]
It's a bit of a pain in the ass, but you can actually just publish the DNS records yourself. It's clear they are on the way out though, as I believe it's only a 30-day valid certificate or something.

I use this for my Jellyfin server at home so that anyone can just type in blah.foo regardless of whether their device supports anything like mDNS, as half the devices claim to support it but don't do so correctly.

fmajid 17 hours ago [-]
My company's DNS provider doesn't even have an API, so I delegated to a subdomain, hosted it on PowerDNS, and used Lego to automate the ACME side.
quicksilver03 18 hours ago [-]
Is having one key per zone worth paying money for? It's on the list of features I'd like to implement for PTRDNS because it makes sense for my own use case, but I don't know if there's enough interest to make it jump to the top of this list.
immibis 20 hours ago [-]
General note: your DNS provider can be different from your registrar, even though most registrars are also providers, and you can be your own DNS provider. The registrar is who gets the domain name under your control, and the provider is who hosts the nameserver with your DNS records on it.
qwertox 20 hours ago [-]
Yes, and you can be your own DNS provider for just the challenges; everything else can stay at your original DNS provider.
bananapub 22 hours ago [-]
no you don't, you can just run https://github.com/joohoi/acme-dns anywhere, and then CNAME _acme-challenge.realdomain.com to aklsfdsdl239072109387219038712.acme-dns.anywhere.com. Then your ACME client just talks to the acme-dns API, which lets it do nothing at all aside from dealing with challenges for that one long random domain.
Arnavion 20 hours ago [-]
You can do it with an NS record, i.e. _acme-challenge.realdomain.com pointing to the DNS server that you can program to serve the challenge response. No need to make a CNAME and involve an additional domain in the middle.
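i.e. two static records at the parent zone (the IP is a placeholder):

  _acme-challenge.realdomain.com.  IN NS  acme-ns.realdomain.com.
  acme-ns.realdomain.com.          IN A   192.0.2.10  ; the server you can program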
aflukasz 20 hours ago [-]
Yeah, but then you can just as well use http-01 with about the same effort.
gruez 20 hours ago [-]
no, because dns supports wildcard certificates, unlike http.
cpach 18 hours ago [-]
dns-01 is also good for services on a private network.
aflukasz 20 hours ago [-]
Ah, good point.
8organicbits 16 hours ago [-]
There's a SaaS version as well, if you don't want to self-host.

https://docs.certifytheweb.com/docs/dns/providers/certifydns...

rglullis 21 hours ago [-]
I've been hoping to get ACME challenge delegation on Traefik working for years now. The documentation says it supports it, but it simply fails every time.

If you have any idea how this tool would work on a docker swarm cluster, I'm all ears.

hashworks 22 hours ago [-]
If you host a hidden primary yourself you get that easily.
Sesse__ 22 hours ago [-]
Many DNS providers also don't support having an external primary.
alanpearce 18 hours ago [-]
Hurricane Electric supports a hidden primary as part of their free DNS nameserver service (do you actually want to expose your primary when someone else can handle the traffic?)

https://dns.he.net

Sesse__ 6 hours ago [-]
Yup, but it's a bit of a dance to bootstrap, since they require you to have already delegated to them, but some TLDs require all NSes to be in sync and answering for the domain before delegating…
nulbyte 20 hours ago [-]
Do most of them let you add an NS record?
qwertox 20 hours ago [-]
And if they don't, you might consider switching to Cloudflare for DNS hosting.
rfmoz 10 hours ago [-]
Give DNSMadeEasy or RcodeZero a try
UltraSane 11 hours ago [-]
This concerned me greatly, so I use AWS Route53 for DNS with an IAM policy that only allows the key to work from specific IP addresses and limits it to creating and deleting TXT records for a specific record set. I love it when I can create exactly the permissions I want.

AWS IAM can be a huge pain but it can also solve a lot of problems.

https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_p...

https://repost.aws/questions/QU-HJgT3V0TzSlizZ7rVT4mQ/how-do...

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/sp...
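The rough shape of such a policy, using the Route 53 condition keys from the first link (zone ID, record name and IP are placeholders):

  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/Z0EXAMPLE",
      "Condition": {
        "ForAllValues:StringEquals": {
          "route53:ChangeResourceRecordSetsRecordTypes": ["TXT"],
          "route53:ChangeResourceRecordSetsNormalizedRecordNames": ["_acme-challenge.example.com"]
        },
        "IpAddress": {"aws:SourceIp": ["203.0.113.10/32"]}
      }
    }]
  }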

xiconfjs 22 hours ago [-]
if even PowerDNS doesn't support it :(
tok1 7 hours ago [-]
True for the API, but you can do DynDNS updates (RFC 2136), TSIG-authenticated on a per-zone basis. [1]

It can even be controlled quite granularly with a Lua-based updatepolicy, e.g. if you want to restrict updates to only the ACME TXT records. [2]

[1] https://doc.powerdns.com/authoritative/dnsupdate.html

[2] https://github.com/PowerDNS/pdns/wiki/Lua-Examples-(Authorit...

chaz6 21 hours ago [-]
One of Traefik's shortcomings with ACME is that you can only use one api key per DNS provider. This is problematic if you want to restrict api keys to a domain, or use domains belonging to two different accounts. I hope Nginx will not have the same constraint.
mholt 19 hours ago [-]
This is one of the main reasons Caddy stopped using lego for ACME and I wrote our own ACME stack.
navigate8310 10 hours ago [-]
You can use CNAME to handle multiple DNS challenge providers. https://doc.traefik.io/traefik/reference/install-configurati...
samgranieri 22 hours ago [-]
I use dns01 in my homelab with step-ca with caddy. It's a joy to use
reactordev 21 hours ago [-]
+1 for caddy. nginx is so 2007.
darkwater 20 hours ago [-]
Caddy is just for developers who want to publish/test the thing they write. For power users or infra admins, nginx is still much more valuable. And yes, I use Caddy in my home lab and it's nice and all, but it's not really as flexible as nginx.
reactordev 19 hours ago [-]
Caddy is in use here in production. 14M requests an hour.
mholt 19 hours ago [-]
Where's that if I may ask?
reactordev 18 hours ago [-]
Trust me, you don’t want to know. Just know - it’s working great and thank you. GovCloud be dragons.
j-krieger 19 hours ago [-]
We use Caddy across hundreds of apps with 10s of millions of requests per day in production.
mholt 17 hours ago [-]
Oooh. Can you tell me more about this?
j-krieger 3 hours ago [-]
Sure. University / government sector. I know quite a few unis/projects in that field that switched to Caddy, since gigantic IP ranges and deep subdomains with stakeholders of many different classes have certain PKI requirements, and Caddy makes using ACME easy. We deploy a self-serve tool where people can generate EAB IDs and HMAC keys for a subdomain they own.

Complex root-domain routing and complex dynamic rewrite logic remain behind Apache/nginx/HAProxy; a lot of apps are then served in a container architecture with Caddy for easy cert renewal, without relying on hacky certbot architectures. So we don't really serve that much traffic with just one instance. Also, a lot of our traffic is bots. More than one would think.

The basic configuration being tiny makes it the perfect fit for people with varying capabilities and know-how when it comes to devops. As a devops engineer, I enjoy the easy integration with Tailscale.

reactordev 15 hours ago [-]
In case people are wondering, this is the author of Caddy.

He’s curious where it’s being used outside of home labs and in small shops. Matt, it’s fantastic software and will only get better as go improves.

I used it in a proxy setup for ingress to Kubernetes that's overlaid across multiple clouds - for the government (prior admin; this admin killed it). I can't tell you more information than that. Other than it goes WWW -> ALB -> Caddy Cluster * Other Cloud -> K8s Router -> K8s pod -> Fiber Golang service. :chefs kiss:

When a pod is registered to the K8s router, we fire off a request to the caddy cluster to register the route. Bam, we got traffic, we got TLS, we got magic. No downtime.

reactordev 32 minutes ago [-]
I almost forgot. Matt, we added a little sugar to Caddy for our cluster: Hashicorp's memberlist, so we can sync the records. It worked great. Sadly, I can't share it, but it's rather trivial to implement.
RadiozRadioz 19 hours ago [-]
So a tool's value should be judged as inversely proportional to its age?
reactordev 18 hours ago [-]
A tool's value is in the eye of the beholder. Nginx ceased being valuable to me when they decided to change licenses, go private equity, not adapt to orchestration needs, ignore HTTP standards, and not release meaningful updates in a decade.
yjftsjthsd-h 16 hours ago [-]
> when they decided to change licenses,

https://github.com/nginx/nginx/blob/master/LICENSE looks like a nice normal permissive license. I don't care that there's a premium version if all the features I want are in the OSS version.

jcgl 9 hours ago [-]
Private equity? Either there’s a story I’m missing, or you’re mischaracterizing F5 as PE.
mholt 18 hours ago [-]
Maybe inversely proportional to how much the ecosystem moves around it.
supriyo-biswas 21 hours ago [-]
If only they'd get the K8s ingress out of the WIP phase; I can't wait to possibly get rid of the cert-manager and ingress shenanigans you get with others.
reactordev 21 hours ago [-]
Yup. I can’t wait for the day I can kill my caddy8s service.

The best thing about Caddy is the fact that you can reload config and add sites and routes without ever having to shut down. Writing a service to keep your orchestration platform and your ingress in sync is meh. K8s has the events, the DNS service has the src mesh records; you just need a way to tell Caddy to send it to your backend.

The feature should be done soon but they need to ensure it works across K8s flavors.

pushrax 19 hours ago [-]
just send SIGHUP to nginx and it will reload all the config - there are very few settings that require a restart
reactordev 19 hours ago [-]
Sure, how, from the container? The host it’s on? Caddy exposes this as an api.
01HNNWZ0MV43FF 20 hours ago [-]
I think you can do that with Nginx too, but the SWAG wrapper discourages it for some reason
ilogik 17 hours ago [-]
Traefik seems to be ok for us
kijin 21 hours ago [-]
A practical problem with DNS-01 is that every DNS provider has a different API for creating the required TXT record. Certbot has more than a dozen plugins for different providers, and the list is growing. It shouldn't be nginx's job to keep track of all these third-party APIs.

It would also be unreasonable to tell everyone to move their domains to a handful of giants like AWS and Cloudflare who already control so much of the internet, just so they could get certificates with DNS-01. I like my DNS a bit more decentralized than that.

sureglymop 19 hours ago [-]
That is true and it is annoying. They should really just support RFC 2136 instead of building their own APIs. Lego also supports this and pretty much all DNS servers have it implemented. At least I can use it with my own DNS server...

https://datatracker.ietf.org/doc/html/rfc2136

cpach 18 hours ago [-]
This is a very good point.

I wonder what a good solution to this would be? In theory, Nginx could call another application that handles the communication with the DNS provider, so that the user can tailor it to their needs. (The user could write it in Python or Go or whatever.) Not sure how robust that would be though.
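Certbot's manual mode is existing prior art for exactly this pattern; it shells out to a user-supplied program with the challenge in environment variables:

  certbot certonly --manual --preferred-challenges dns \
    --manual-auth-hook /usr/local/bin/dns-add-txt \
    --manual-cleanup-hook /usr/local/bin/dns-del-txt \
    -d example.com
  # the hooks (paths here are hypothetical) receive CERTBOT_DOMAIN and
  # CERTBOT_VALIDATION in the environment and talk to whatever DNS API you use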

attentive 21 hours ago [-]
Yes, ACME-DNS please - https://github.com/joohoi/acme-dns

Lego supports it.

aoe6721 20 hours ago [-]
Switch to Angie then. It supports DNS-01 very well.
klysm 12 hours ago [-]
How does NGINX fit into that though?
Spivak 22 hours ago [-]
I don't even know why anyone wouldn't use the DNS challenge unless they had no other option. I've found it to be annoying and brittle, maybe less so now with native web server support. And you can't get wildcards.
cortesoft 22 hours ago [-]
My work is mostly running internal services that aren’t reachable from the external internet. DNS is the only option.

You can get wildcards with DNS. If you want *.foo.com, you just need to be able to set _acme-challenge.foo.com and you can get the wildcard.

filleokus 22 hours ago [-]
Spivak is saying that the DNS method is superior (i.e. you are agreeing - and I do too).

One reason I can think of for HTTP-01 / TLS-ALPN-01 is on-demand issuance, issuing the certificate when you get the request. Which might seem insane (and kinda is), but can be useful for e.g crazy web-migration projects. If you have an enormous, deeply levelled, domain sprawl that are almost never used but you need it up for some reason it can be quite handy.

(Another reason, soon, is that HTTP-01 will be able to issue certs for IP addresses: https://letsencrypt.org/2025/07/01/issuing-our-first-ip-addr...)

cortesoft 22 hours ago [-]
Oh I totally misread the comment.

Nevermind, I agree!

Sharparam 20 hours ago [-]
The comment is strangely worded, I too had to read it over a couple of times to understand what they meant.
bryanlarsen 22 hours ago [-]
> DNS is the only option

DNS and wildcards aren't the only options. I've done annoying hacks to give internal services an HTTPS cert without using either.

But they're the only sane options.

cyberax 21 hours ago [-]
One problem with wildcards is that any service with *.foo.com can pretend to be any other service. This is an issue if you're using mutual TLS authentication and want to trust the server's certificate.

It'd be nice if LE could issue intermediate certificates constrained to a specific domain ( https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.... ).
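The X.509 machinery for this already exists; in openssl config terms a name-constrained intermediate looks something like this (a sketch, and not something any public CA will currently sign):

  [ v3_constrained_ca ]
  basicConstraints = critical, CA:TRUE, pathlen:0
  keyUsage         = critical, keyCertSign, cRLSign
  nameConstraints  = critical, permitted;DNS:.foo.com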

bityard 22 hours ago [-]
The advantage to HTTP validation is that it's simple. No messing with DNS or API keys. Just fire up your server software and tell it what your hostname is and everything else happens in the background automagically.
abcdefg12 18 hours ago [-]
And if you have two or more servers serving this domain, you're out of luck
lmz 14 hours ago [-]
And this is different from DNS how, exactly? The key and resulting cert still need to be distributed among your servers no matter which method is used.
cpach 9 hours ago [-]
With dns-01, multiple servers could, independently of each other, fetch a certificate for the same set of hostnames. Not sure if it’s a good idea though.
account42 6 hours ago [-]
Not really, just forward .well-known/acme-challenge/* requests to a single server or otherwise make sure that the challenge responses are served from all instances.
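e.g. on every non-primary instance (upstream name is a placeholder):

  location /.well-known/acme-challenge/ {
      proxy_pass http://acme-primary.internal.example.com;
  }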
account42 6 hours ago [-]
> I've found it to be annoying and brittle

How so? It's just serving static files.

Dylan16807 17 hours ago [-]
I don't know how to make my server log into my DNS, and I don't particularly want to learn how. Mapping .well-known is one line of config.

Wildcards are the only temptation.

account42 6 hours ago [-]
Just like you can point .well-known/acme-challenge/ to a writable directory you can also delegate the relevant DNS keys to a name server that you can more easily update.
Dylan16807 2 hours ago [-]
Now you want me to rent or install at least two name servers, and then configure them, and then teach my web server how to send them rules?

That's so much more work than either of the options in my first comment. Aliasing a directory takes about one minute.

jeroenhd 21 hours ago [-]
If you buy your domain from a bottom-of-the-barrel domain reseller and then don't pay for decent DNS, you don't have the option.

Plus, it takes setting up an API key and most of the time you don't need a wildcard anyway.

account42 6 hours ago [-]
You don't need API access to your DNS, the ability to delegate the ACME challenge records to your own server is also enough.
altairprime 20 hours ago [-]
Does DNS-01 support DNS-over-HTTPS to the registered domain name servers? If so, then it should be extremely simple to extend nginx to support DNS claims; if not, perhaps DNS-01 needs improvements.
cpach 18 hours ago [-]
When placing the order, you get a funny text string from the ACME provider. You need to create a TXT record that holds this value. How you create the TXT record is up to you and your DNS server – the ACME provider doesn’t care.

I don’t believe DNS-over-HTTPS is relevant in this context. AFAIK, it’s used by clients who want to query a DNS server, and not for an operator who wants to create a DNS record. (Please correct me if I’m wrong.)

0x0000000 18 hours ago [-]
The ACME provider makes a query to the DNS server to validate the record exists and contains the right "funny string". Parent's question was whether that query is/can be made via DoH.
cpach 18 hours ago [-]
Perhaps I have poor imagination, but I fail to see why it would matter?
0x0000000 17 hours ago [-]
Because nginx, as an HTTP server, could answer the query?
Arrowmaster 17 hours ago [-]
You want to build a DNS server into nginx so you can respond to DoH queries for the domain you are hosting on that nginx server?

Let's ignore that DoH is a client-oriented protocol and there's no sane way to run only a DoH server without an underlying DNS server. How do you plan to get the first certificate, so that the query to the DoH server doesn't get rejected for an invalid certificate?

xg15 16 hours ago [-]
At that point you might as well use the HTTP-01 challenge. I think the whole utility of DNS-01 is that you can use it if you don't want to expose the HTTP server to the internet.
jcgl 9 hours ago [-]
No, that’s just one of the use-cases. Also:

- wildcard certs. DNS-01 is a strict requirement here.

- certs for a service whose TLS is terminated by multiple servers (e.g. load balancers). DNS-01 is a practical requirement here because only one of the terminating servers would be able to respond during an HTTP or ALPN challenge.

account42 6 hours ago [-]
> DNS-01 is a practical requirement here because only one of the terminating servers would be able to respond during an HTTP or ALPN challenge.

Reverse-proxying or otherwise forwarding requests for .well-known/acme-challenge/ to a single server should be just as easy to set up as DNS-01.

jcgl 3 hours ago [-]
But then you have to redistribute the cert from that single server to all the others. Which, yes, can be done. But then you've gotta write that glue yourself. What's more, you've now chosen a special snowflake server on whom renewals depend.

In other words, no, it's not just as easy as setting up DNS-01. Different operational characteristics, and a need for bespoke glue code.

xg15 9 hours ago [-]
Ah, that makes sense. Thanks!
creatonez 22 hours ago [-]
Why would nginx ever need support for the DNS-01 challenge type? It always has access to `.well-known` because nginx is running an HTTP server for the entire lifecycle of the process, so you'd never need to use a lower level way of doing DV. And that seems to violate the principle of least privilege, since you now need a sensitive API token on the server.
0x457 22 hours ago [-]
Because while Nginx always has access to .well-known, the thing that validates on the issuer's side might not. I use the DNS challenge to issue certificates for domains that resolve to IPs in my overlay network.

The issue is that supporting dns-01 isn't just supporting dns-01; it's providing a common interface to interact with the different providers that implement dns-01.

petee 18 hours ago [-]
dns-01 is just a challenge; which API or DNS update system should nginx support, then? Some provider API, AXFR, or UPDATE?

I think this is kinda the OP's point: nginx is an HTTP server, why should it be messing with DNS? There are plenty of other ACME clients to do this with ease

justusthane 21 hours ago [-]
You can’t use HTTP-01 if the server running nginx isn’t accessible from the internet. DNS-01 works for that.
chrismorgan 21 hours ago [-]
Wildcard certificates are probably the most important answer: they’re not available via HTTP challenge.
abcdefg12 18 hours ago [-]
Because you might have more than one server serving this domain
lukeschlather 22 hours ago [-]
Issuing a new certificate with the HTTP challenge pretty much requires you to allow for 15 minutes of downtime. It's really not suitable for any customer-facing endpoint with SLAs.
chrismorgan 21 hours ago [-]
Sounds like you’re doing it wrong. I don’t know about this native support, but I’d be very surprised if it was worse than the old way, which could just have Certbot put files in a path NGINX was already serving (webroot method), and then when new certificates are done send a signal for NGINX to reload its config. There should never be any downtime.
kijin 21 hours ago [-]
Certbot has a "standalone" mode that occupies port 80 and serves /.well-known/ by itself.

Whoever first recommended using that mode in anything other than some sort of emergency situation needs to be given a firm kick in the butt.

Certbot also has a mode that mangles your Apache or nginx config files in an attempt to wire up certificates to your virtual hosts. Whoever wrote the nginx integration also needs a butt kick; it's terrible. I've helped a number of people fix their broken servers after certbot mangled their config files. Just because you're on a crusade to encrypt the web doesn't give you a right to mess with other programs' config files; that's not how Unix works!

jeltz 6 hours ago [-]
Certbot also fights automation and provisioning with e.g. Ansible, by modifying config files to remember command line options in case you ever need to do anything manually in an emergency.

It is a terrible piece of software. I use dehydrated, which is much friendlier to automation.

jofla_net 19 hours ago [-]
Also, whoever decided that service providers were no longer autonomous to determine the expiration times of their own infrastructure's certificates should get that boot-to-the-head as well.

It is not as if they couldn't already choose (to buy) such short lifetimes.

Authoritarianism at its finest.

tomku 19 hours ago [-]
Those choices, and Certbot strongly encouraging snap installation, were enough to get me to switch to https://go-acme.github.io/lego/, which I've been very happy with since. It's very stable and feels like it was built by people who actually operate servers.
Kwpolska 21 hours ago [-]
Where would this downtime come from? Your setup is really badly configured if you need downtime to serve a new static file.
kijin 21 hours ago [-]
Only if you let certbot take down your normal nginx and occupy port 80 in standalone mode. Which it doesn't need to, if normal nginx can do the job by itself.

When I need to use the HTTP challenge, I always configure the web server in advance to serve /.well-known/ from a certain directory and point certbot at it with `certbot certonly --webroot-path`. No need to take down the normal web server. Graceful reload. Zero downtime. Works with any web server.
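i.e. something like this (the webroot path is whatever you configured nginx to serve):

  certbot certonly --webroot -w /var/www/acme -d example.com
  nginx -s reload   # graceful reload picks up the new certificate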

dizhn 22 hours ago [-]
This is pretty big. Caddy has had this forever, but not everybody wants to use Caddy. It'll probably eat into the user share of software like Traefik.
elashri 22 hours ago [-]
What I really like about Caddy is its nicer syntax. I actually use nginx (via Nginx Proxy Manager) and Traefik, but recently I did one project with Caddy and found it very nice. I might get the time to change my selfhosted setup to use Caddy in the future, but will probably go with something like pangolin [1] because it also provides an alternative to Cloudflare tunnels.

[1] https://github.com/fosrl/pangolin

kstrauser 22 hours ago [-]
I agree. That, and the sane defaults are almost always nearly perfect for me. Here is the entire configuration for a TLS-enabled HTTP/{1.1,2,3} static server:

  something.example.com {
    root * /var/www/something.example.com
    file_server
  }
That's the whole thing. Here's the setup of a WordPress site with all the above, plus PHP, plus compression:

  php.example.com {
    root * /var/www/wordpress
    encode
    php_fastcgi unix//run/php/php-version-fpm.sock
    file_server
  }
You can tune and tweak all the million other options too, of course, but you don't have to for most common use cases. It Just Works more than any similarly complex server I've ever been responsible for.
pgug 14 hours ago [-]
I find the documentation for the syntax to be a bit lacking if you want to do anything that isn't very basic or isn't how they want you to do it. For example, I want to use a wildcard certificate for my internal services to hide service names from certificate transparency logs, and I can't get the syntax working. ChatGPT and Gemini also couldn't.
dizhn 1 hours ago [-]
This is how it's done, where you have a wildcard DNS entry for subdomains of secret.domain.com:

  {
      acme_dns cloudflare oWN-HR__kxRoDhrixaQbI6M0uwS4bfXub4g4xia2
      debug
  }

  *.secret.domain.com {

      @sso host sso.secret.domain.com
      handle @sso {
          reverse_proxy 192.168.200.4:9000
      }

      @adguard host adguard.secret.domain.com
      handle @adguard {
          reverse_proxy 192.168.200.4:9000
      }

      @forge host forge.secret.domain.com
      handle @forge {
          reverse_proxy http://forgejo:3000
      }

      # respond to whatever doesn't match
      handle {
          respond "Wildcard subdomain does not have a web configuration!"
      }

      handle_errors {
          respond "Error {err.status_code} {err.status_text}"
      }
  }
nadanke 9 hours ago [-]
For wildcards you need a Caddy build that includes the dns plugin for your specific provider. There's a tool called xcaddy that helps with that. It's still kinda annoying because now you need to manage the binary for yourself but when I tried it with Hetzner it worked fine.
cpach 9 hours ago [-]
This integration doesn’t support the dns-01 challenge. So wildcard certs are out of the question at this point.
cpach 7 hours ago [-]
PS. Oh, this subthread is about Caddy, not Nginx. Nevermind my comment then!
Saris 20 hours ago [-]
Caddy does have some bizarre limitations I've run into, particularly around log file permissions: other processes like promtail need to read the logs, but with Caddy you cannot change the permissions it writes files with; it always writes very restrictively.

I find their docs also really hard to deal with; trying to figure out something that would be super simple on Nginx can be really difficult on Caddy if it's outside the scope of 'normal stuff'.

The other thing I really don't like is that if you install via a package manager to get automated updates, you don't get any of the plugins. If you want plugins you have to build Caddy yourself or use their build service, and then you don't get automatic updates.

francislavoie 20 hours ago [-]
Actually, you can set the permissions for log files now. See https://caddyserver.com/docs/caddyfile/directives/log#file
Saris 20 hours ago [-]
Oh good to know!

Do you know if Caddy can self-update, or is there some other easy method? Manually doing it to get the Cloudflare plugin is a pain.

francislavoie 20 hours ago [-]
No, you have to build Caddy with plugins. We provide xcaddy to make it easy. Sign up for notifications on github for releases, and just write yourself a tiny bash script to build the binary with xcaddy, and restart the service. You could potentially do a thing where you hook into apt to trigger your script after Caddy's deb package version changes, idk. But it's up to you to handle.
dizhn 1 hours ago [-]
I am wondering why you said "no" to the self update thing.

https://caddyserver.com/docs/command-line#caddy-upgrade

francislavoie 43 minutes ago [-]
Because that's not automated, it's a manual command and uses caddyserver.com resources (relatively low powered cloud VMs) with no uptime guarantees. It _should not_ be used in automation scenarios, only for quick manual personal use scenarios.
dizhn 18 hours ago [-]
You can have the binary self-update with the currently included plugins. I think the command-line help says it's beta, but it has always worked fine for me.
Saris 14 hours ago [-]
I'll give that a try!
nodesocket 19 hours ago [-]
I use Caddy as my main reverse proxy into containers, with Cloudflare-based DNS Let's Encrypt. The syntax is intuitive and just works. I've used Traefik in the past with Kubernetes, and while powerful, it has a noticeably steeper learning curve for setup and grokability.
karmakaze 18 hours ago [-]
Not only that, but the way nginx configuration is split up into all the separate modules is a lot of extra complexity that Caddy avoids by having a single coherent way of configuring its features.
dizhn 22 hours ago [-]
I checked out pangolin too recently but then I realized that I already have Authentik and using its embedded (go based) proxy I don't really need pangolin.
tgv 21 hours ago [-]
I switched over to Caddy recently. Nginx's non-information about the HTTP/1 desync problem drove me over. I'm not going to wait for something stupid to happen, or for an auditor to ask me questions nginx doesn't answer.

Caddy is really easier than nginx. For starters, I now have templates that cover the main services and their test services, and the special service that runs for an educational institution. Logging is better. Certificate handling is perfect (for my case, at least). And it has better metrics.

Now I have to figure out plugins though, because Caddy doesn't have rate limiting, and some stupid bug in Power BI makes a single user hit certain images 300,000 times per day. That's a bit of a downside.

dekobon 20 hours ago [-]
I did a google search for the desync problem and found this page: https://my.f5.com/manage/s/article/K30341203

This type of thing is out of my realm of expertise. What information would you want to see about the problem? What would be helpful?

dwedge 4 hours ago [-]
It's also been in Apache since 2018
thrown-0825 22 hours ago [-]
Definitely. I use traefik for some stuff at home and will likely swap it out now.
grim_io 22 hours ago [-]
I configure traefik by defining a few docker labels on the services themselves. No way I'm going back to using the horrible huge nginx config.
thrown-0825 12 hours ago [-]
Traefik is slower AND uses more resources.
fastball 9 hours ago [-]
I felt the same but switched to Caddy for my reverse proxy last year and have had a great experience.

Admittedly this was on the back of trying to use nginx-unit, which was an overall bad experience, but ¯\_(ツ)_/¯

vivzkestrel 11 hours ago [-]
Not gonna lie, setting up Nginx and Certbot inside Docker is the biggest PITA ever. You need certificates to start the NGINX server, but you need the NGINX server to issue certificates - see the problem? It is made infinitely worse by a tonne of online solutions and blog posts, none of which I could ever get to work. I would really appreciate it if someone documented this extensively for Docker Compose. I don't want to use libraries like nginx-proxy, as customizing that library is another nightmare altogether.
nickjj 2 hours ago [-]
This is mostly why I run nginx outside of Docker, I've written about it here: https://nickjanetakis.com/blog/why-i-prefer-running-nginx-on...

I keep these things separate on the servers I configure:

    - Setting up PKI related things like DH Params and certs (no Docker)
    - My app (Docker)
    - Reverse proxy / TLS / etc. with nginx (no Docker)
This allows configuring a server in a way where all nginx configuration works over HTTPS and the PKI bits will either use a self-signed certificate or certbot with DNS validation depending on what you're doing. It gets around all forms of chicken / egg problems and reduces a lot of complexity.

Switching between self-signed, Let's Encrypt or 3rd party certs is a matter of updating 1 symlink since nginx is configured to read the destination. This makes things easy to test and adds a level of disaster recovery / reliability that helps me sleep at night.

This combo has been running strong since all of these tools were available. Before Let's Encrypt was available I did the same thing, except I used 3rd party certs.

bspammer 6 hours ago [-]
I must say this is something that showcases NixOS very well.

This is all it takes to start a nginx server. Add this block and everything starts up perfectly first time, using proper systemd sandboxing, with a certificate provisioned, and with a systemd timer for autorenewing the cert. Delete the block, and it's like the server never existed, all of that gets torn down cleanly.

  services.nginx = {
    enable = true;
    virtualHosts = {
      "mydomain.com" = {
        enableACME = true;
        locations."/" = {
          extraConfig = ''
            # Config goes here
          '';
        };
      };
    };
  };
I recently wanted to create a shortcut domain for our wedding website, redirecting to the SaaS wedding provider. The above made that a literal 1 minute job.
nojs 5 hours ago [-]
> I would really appreciate if someone has documented this extensively for docker compose

Run `certbot certonly` on the host once to get the initial certs, and choose the option to run a temporary server rather than using nginx. Then in `compose.yml` have a mapping from the host's certificates to the nginx container. That way, you don't have to touch your nginx config when setting up a new server.

You can then use a certbot container to do the renewals.

E.g.

  nginx:
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt

  certbot:
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

In your nginx.conf you have

    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;
And also

    location /.well-known/ {
        alias /usr/share/nginx/html/.well-known/;
    }
For the renewals.
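One gotcha with this setup: nginx keeps serving the old certificate until it reloads, so the usual companion trick (from the widely copied nginx+certbot compose writeups) is a periodic reload in the nginx service:

  nginx:
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"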
dwedge 4 hours ago [-]
Usually the solution is to either not add SSL until you have the certs, or to use self-signed/snakeoil placeholder certs to get nginx started.

Personally I use DNS everywhere. I have a central server running dehydrated and DNS challenges every night, which then rsyncs to all the servers (I'm going to replace it with Vault). I kind of like having one place to check for certs

mythz 11 hours ago [-]
What's the issue with nginx-proxy? We've used it for years to handle CI deploying multiple Docker Compose apps to the same server [1] without issue, with a more detailed writeup at [2].

This served us well for many years before migrating to use Kamal [3] for its improved remote management features.

[1] https://docs.servicestack.net/ssh-docker-compose-deploment

[2] https://servicestack.net/posts/kubernetes_not_required

[3] https://docs.servicestack.net/kamal-deploy

vivzkestrel 11 hours ago [-]
I can write a simple rate-limit block easily in raw nginx config, but look at this mess when using nginx-proxy: https://github.com/nginx-proxy/nginx-proxy/discussions/2524
vivzkestrel 11 hours ago [-]
The issue with nginx-proxy is that I am not in control of the nginx script: https://github.com/nginx-proxy/nginx-proxy/discussions/2523
atomicnumber3 11 hours ago [-]
I personally just terminate TLS at nginx, run nginx directly on the metal, and keep all the services containerized behind it. I suspect if I had nginx proxying to remote nodes I'd probably just use an internal PKI for that.
yjftsjthsd-h 8 hours ago [-]
> you need certificates to start the NGINX server but you need the NGINX server to issue certificates?

I just pre-populate with a self-signed cert to start, though I'd have to check how to do that in docker.

vivzkestrel 7 hours ago [-]
Exactly! It all sounds easy until you want to run stuff inside Docker, at which point there is a serious lack of documentation and resources
CannoloBlahnik 47 minutes ago [-]
Once Nginx gets support for DNS-01, does that mean we'll be able to use wildcards for SSL using Let's Encrypt?
josegonzalez 21 hours ago [-]
This is great. Dokku (of which I am the maintainer) has a hokey solution for this with our letsencrypt plugin, but that's caused a slew of random issues for users. Nginx sometimes gets "stuck" reloading and then can't find the endpoint for some reason. The fewer moving knobs, the better.

That said, it's going to take quite some time for this to land in stable repositories for Ubuntu and Debian, and it doesn't (yet?) have DNS challenge support - meaning no wildcards - so I don't think it'll be useful for Dokku in the short term, at least.

ctxc 21 hours ago [-]
Hey! Great to see you here.

I tried dokku (and still am!) and it is so hard getting started.

For reference:

- I've used Coolify successfully, where it required me to create a GitHub app to deploy my apps on pushes to master

- I've written GH Actions to build and deploy containers to big cloud

This page is what I get if I want to achieve the same, and it's completely a reference book approach - I feel like I'm reading an encyclopedia. https://dokku.com/docs/deployment/methods/git/#initializing-...

Contrast it with this, which is INSTANTLY useful and helps me deploy apps hot off the page: https://coolify.io/docs/knowledge-base/git/github/integratio...

What I would love to see for Dokku is tutorials for popular OSS apps and set-objective/get-it-done style getting started articles. I'd LOVE an article that takes me from baremetal to a reverse proxy+a few popular apps. Because the value isn't in using Dokku, it's in using Dokku to get to that state.

I'm trying to use dokku for my homeserver.

Ideally I want a painless, quick way to go from "hey here's a repo I like" to "deployed on my machine" with Dokku. And then once that works, peek under the hood.

kocial 16 hours ago [-]
The problem with the big open-source companies is that they are always very late to understand and implement the most basic innovations that come out.

Caddy & Traefik did it long, long ago (half a decade ago), and after half a decade we finally have nginx supporting it too. Great move though; finally I won't have to manually run certbot :pray:

winter_blue 16 hours ago [-]
Caddy did it almost a decade ago. IIRC it had some form of automatic Let’s Encrypt HTTPS back in 2016.

So Nginx is just about 9 to 10 years late. Lol

squigz 8 hours ago [-]
And the brilliant thing about open source projects is that if someone felt it was so important to have it built-in, they could have done so many years ago.
stephenr 5 hours ago [-]
Given that Caddy has a history that includes choices like "refuse to start if LE cannot be contacted while a valid certificate exists on disk" I'm pretty happy to keep my certificate issuance separate from a web server.

I need a tool to issue certs for a bunch of other services anyway, I don't really see how it became such a thing for people to want it embedded in their web server.

thaumaturgy 22 hours ago [-]
Good to see this. For those that weren't aware, there's been a low-effort solution with https://github.com/dehydrated-io/dehydrated, combined with a pretty simple couple of lines in your vhost config:

    location ^~ /.well-known/acme-challenge/ {
        alias <path-to-your-acme-challenge-directory>;
    }
Dehydrated has been around for a while and is a great low-overhead option for http-01 renewal automation.
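Day-to-day use is about as small as it gets; a sketch (config keys per the dehydrated docs, paths are examples):

  # /etc/dehydrated/config
  WELLKNOWN="/var/www/dehydrated"
  CONTACT_EMAIL="admin@example.com"

  # /etc/dehydrated/domains.txt
  example.com www.example.com

  # once, then from cron or a systemd timer:
  dehydrated --register --accept-terms
  dehydrated --cron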
andrewmcwatters 22 hours ago [-]
This is really cool, but I find it distasteful when projects that have thousands of people depending on them don't cut a stable release.

Edit: Downvote me all you want, that's reality folks, if you don't release v1.0.0, the interface you consume can change without you realizing it.

Don't consume major version 0 software, it'll bite you one day. Convince your maintainers to release stable cuts if they've been sitting on major version 0 for years. It's just lazy and immature practice abusing semantic versioning. Maintainers can learn and grow. It's normal.

Dehydrated has been major version 0 for 7 years, it's probably past due.

See also React, LÖVE, and others that made 0.n.x jumps to n.x.x. (https://0ver.org)

CalVer: "If both you and someone you don't know use your project seriously, then use a serious version."

SemVer: "If your software is being used in production, it should probably already be 1.0.0."

https://0ver.org/about.html

nothrabannosir 21 hours ago [-]
Distasteful by whom, the people depending on it? Surely not… the people providing free software at no charge, as is? Surely not…

Maybe not distasteful by any one in particular, but just distasteful by fate or as an indicator of misaligned incentives or something?

yjftsjthsd-h 8 hours ago [-]
> Distasteful by whom, the people depending on it? Surely not…

Why not?

ygjb 21 hours ago [-]
That's the great thing about open source. If you are not satisfied with the free labour's pace of implementing a feature you want, you can do it yourself!
andrewmcwatters 21 hours ago [-]
Yes, absolutely! I would probably just pick a version to fork, set it to v1.0.0 for your org's production path, and then you'd know the behavior would never change.

You could then merge updates back from upstream.

john01dav 21 hours ago [-]
It's generally easier to just deal with breaking changes, since writing code is faster than gaining understanding and breaking changes in the external api are generally much better documented than internals.
thaumaturgy 21 hours ago [-]
FWIW I have been using and relying on Dehydrated to handle LetsEncrypt automation for something like 10 years, at least. I think there was one production-breaking change in that time, and to the best of my recollection, it wasn't a Dehydrated-specific issue, it was a change to the ACME protocol. I remember the resolution for that being super easy, just a matter of updating the Dehydrated client and touching a config file.

It has been one of the most reliable parts of my infrastructure and I have to think about it so rarely that I had to go dig the link out of my automation repository.

hju22_-3 18 hours ago [-]
You've been using Dehydrated since its initial commit in December of 2015?
thaumaturgy 11 hours ago [-]
I am pretty sure that this is the thread that introduced me to it: https://news.ycombinator.com/item?id=10681851

Unfortunately, web.archive.org didn't grab an https version of my main site from around that period. My oldest server build script in my current collection does have the following note in it:

    **Get the current version of dehydrated from https://github.com/dehydrated-io/dehydrated **
    (Dehydrated was previously found at https://github.com/lukas2511/dehydrated)
...so I was using it back when it was under the lukas2511 account. Those tech notes however were rescued from a long-dead Phabricator installation, so I no longer have the change history for them, unless I go back and try to resurrect its database, which I think I do still have kicking around in one of my cold storage drives...

But yeah, circa 2015 - 2016 should be about right. I had been hosting stuff for clients since... phew, 2009? So LetsEncrypt was something I wanted to adopt pretty early, because back then certificate renewals were kind of annoying and often not free, but I also didn't want to load whatever the popular ACME client was at the time. Then this post popped up, and it was exactly what I had been looking for, and would have started using it soon after.

edit: my Linode account has been continuously active since October 2009, though it only has a few small legacy services on it now. I started that account specifically for hosting mail and web services for clients I had at the time. So, yeah, my memory seems accurate enough.

dspillett 21 hours ago [-]
Feel free to provide and support a "stable" branch/fork that meets your standards.

Be the change you want to see!

Edit to comment on the edit:

> Edit: Downvote me all you want

I don't generally downvote, but if I were going to I would not need your permission :)

> that's reality folks, if you don't release v1.0.0, the interface you consume can change without you realizing it.

I assume you meant "present" there rather than "consume"?

Anyway, 1.0.0 is just a number. Without relevant promises and a track record and/or contract to back them up breaking changes are as likely there as with any other number. A "version 0.x.x" of a well used and scrutinized open source project is more reliable and trustworthy than something that has just had a 1.0.0 sticker slapped on it.

Edit after more parent edits: or go with one of the other many versioning schemes. Maybe ItIsFunToWindUpEntitledDicksVer Which says "stick with 0.x for eternity, go on, you know you want to!".

juped 13 hours ago [-]
Another person who thinks semver is some kind of eldritch law-magic, serving well to illustrate the primary way in which semver was and is a mistake.

Sacrificing a version number segment as a permanent zero prefix to keep them away is the most practical way to appease semver's fans, given that they exist in numbers and make ill-conceived attempts to depend on semver's purported eldritch law-magics in tooling. It's a bit like the "Mozilla" in browser user-agents; I hope we can stop at one digit sacrificed, rather than ending up like user-agents did, though.

In other words, 0ver, unironically. Pray we do not need 0.0ver.

idoubtit 19 hours ago [-]
A little mistake with this release: they packaged the ngx_http_acme_module for many Linux distributions, but "forgot" Debian stable. Oldstable and oldoldstable are listed in https://nginx.org/en/linux_packages.html (packages built today) but Debian 13 Trixie (released 4 days ago) is not there.
thresh 18 hours ago [-]
I'm currently working on getting the Trixie packages uploaded. It'll be there this week.

As you've said Debian 13 was released 4 days ago - it takes some time to spin up the infrastructure for a new OS (and we've been busy with other tasks, like getting nginx-acme and 1.29.1 out).

(I work for F5)

triknomeister 19 hours ago [-]
That's Debian's fault I guess
sjmulder 19 hours ago [-]
How is that? These are vendor packages
stego-tech 22 hours ago [-]
The IT Roller Coaster in two reactions:

> Nginx Introduces Native Support for Acme Protocol

IT: “It’s about fucking time!”

> The current preview implementation supports HTTP-01 challenges to verify the client’s domain ownership.

IT: “FUCK. Alright, domain registrar, mint me a new wildcard please, one of the leading web infrastructure providers still can’t do a basic LE DNS-01 pull in 2025.”

Seriously. PKI in IT is a PITA and I want someone to SOLVE IT without requiring AD CAs or Yet Another Hyperspecific Appliance (YAHA). If your load balancer, proxy server, web server, or router appliance can’t mint me a basic Acme certificate via DNS-01 challenges, then you officially suck and I will throw your product out for something like Caddy the first chance I get.

While we’re at it, can we also allow DNS-01 certs to be issued for intermediate authorities, allowing internally-signed certificates to be valid via said Intermediary? That’d solve like, 99% of my PKI needs in any org, ever, forever.

cnst 16 hours ago [-]
You could always switch to the Angie fork if you require the DNS challenge type with the wildcard domains:

https://en.angie.software/angie/docs/configuration/modules/h...

pointlessone 7 hours ago [-]
DNS challenge is complicated by the fact that every registrar has their own API. HTTP is easier for nginx because it’s a single flow and it already does HTTP.

I’m sure nginx will get DNS but it’s still an open question when it will support your particular registrar or if at all.

account42 5 hours ago [-]
> DNS challenge is complicated by the fact that every registrar has their own API

You can sidestep that by delegating the ACME keys to your own name server.

0xbadcafebee 21 hours ago [-]
> allowing internally-signed certificates to be valid via said Intermediary

By design, nothing is allowed to delegate signing authority, because it would become an immediate compromise of everything that got delegated when your delegated authority got compromised. Since only CAs can issue certs, and CAs have to pass at least some basic security scrutiny, clients have assurance that the thing giving it a cert got said cert from a trustworthy authority. If you want a non-trustworthy authority... go with a custom CA. It's intentionally difficult to do so.

> If your load balancer, proxy server, web server, or router appliance can’t mint me a basic Acme certificate via DNS-01 challenges, then you officially suck and I will throw your product out for something like Caddy the first chance I get.

I mean, that's a valid ask. It will become more commonplace once some popular corporate offering includes it, and then all the competitors will adopt it so they don't leave money on the table. To get the first one to adopt it, be a whale of a customer and yell loudly that you want it, then wait 18 months.

stego-tech 20 hours ago [-]
> If you want a non-trustworthy authority... go with a custom CA. It's intentionally difficult to do so.

This is where I get rankled.

In IT land, everything needs a valid certificate. The printer, the server, the hypervisor, the load balancer, the WAP’s UI, everything. That said, most things don’t require a publicly valid certificate.

Perhaps Intermediate CA is the wrong phrase for what I’m looking for. Ideally it would be a device that does a public DNS-01 validation for a non-wildcard certificate, thus granting it legitimacy. It would then crank out certificates for internal devices only, which would be trusted via the Root CA but without requiring those devices to talk to the internet or use a wildcard certificate. In other words, some sort of marker or fingerprint that says “This is valid because I trust the root and I can validate the internal intermediary. If I cannot see the intermediary, it is not valid.”

The thinking is that this would allow more certificates to be issued internally and easily, but without the extra layer of management involved in a fully bespoke internal CA. Would it be as secure as that? No, but it would be SMB-friendly and help improve general security hygiene instead of letting everything use HTTPS with self-signed certificate warnings or letting every device communicate with the internet for an HTTP-01 challenge.

If I can get PKI to be as streamlined as the rest of my tech stack internally, and without forking over large sums for Microsoft Server licenses and CALs, I’d be a very happy dinosaur that’s a lot less worried about tracking the myriad of custom cert renewals and deployments.

0xbadcafebee 18 hours ago [-]
Well you can use an admin box and a script to request like 1000 different certs of different names through DNS-01. Copy the certs to the devices that need them. The big problem now is, you have ~5 days to constantly re-copy new certs and reboot the devices, thanks to LE's decision to be super annoying. If you want less annoying... pay for certs.

Installing custom CA certs isn't that hard once you figure out how to do it for each application. I had to write all the docs on this for the IT team, specific to each application, because they were too lazy to do it. Painful at first, but easy after. To avoid more pain later, make the certs expire in 2036, retire before then.
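
For the system trust store it really is a couple of commands on Debian-family boxes; the per-application part is the painful bit:

    # system-wide trust (Debian/Ubuntu); the .crt extension matters here
    sudo cp ca.crt /usr/local/share/ca-certificates/internal-ca.crt
    sudo update-ca-certificates

    # example of an app that ignores the system store: Python requests
    export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt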

stego-tech 15 hours ago [-]
The problem I continue to encounter with delegating this to colleagues or other teams is that - inevitably - someone thinks they're clever bypassing part or all of the procedure to, say, generate a wildcard cert and share its private key component with whoever asks for a cert, instead of going through approved processes. At PriorBigCo, we had a dedicated team who just handled global internal PKI, and despite a 72hr turnaround we still had folks bypassing procedure. That results in revocations, which results in more time being spent dealing with "emergency" renewals, which just makes it a PITA.

Automation is the goal, and right now internal PKI is far from automated the way public-facing certs are. With ACME I can set-and-forget on public stuff that isn't processing sensitive data and doesn't require a premium certificate, but internally it still seems like the only solution is an ADCA.

jcgl 9 hours ago [-]
Using CNAMEs with the _acme-challenge, plus API keys with fine-grained authorization, you can manage what each of those colleagues or teams can issue certs for. Disallowing wildcard certs for them, for example :)
everfrustrated 19 hours ago [-]
Intermediates aren't a delegation mechanism as such. They're a way to navigate to the root's trust.

The trust is always in the root itself.

It's not an active directory / LDAP / tree type mechanism where you can say I trust things at this node level and below.

account42 5 hours ago [-]
But they could and IMO should be a delegation mechanism. The Name Constraints extension already exists.
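
For the record, a sketch of the extension in OpenSSL's x509v3 config syntax when minting such a constrained intermediate (names are placeholders):

    # extensions for a name-constrained intermediate CA
    [ v3_constrained_ca ]
    basicConstraints = critical, CA:TRUE, pathlen:0
    keyUsage         = critical, keyCertSign, cRLSign
    nameConstraints  = critical, permitted;DNS:internal.example.com

Most modern clients enforce the extension; the missing piece is public CAs willing to issue such intermediates to ordinary subscribers.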
stego-tech 15 hours ago [-]
Appreciate the clarification! My grievance still stands, but at least I can articulate it better going forward.
account42 5 hours ago [-]
> By design, nothing is allowed to delegate signing authority, because it would become an immediate compromise of everything that got delegated when your delegated authority got compromised.

Or because it would expose the web PKI for the farce it is. Some shady corporation in bumfuckistan having authority to sign certificates for .gov.uk or even just your personal website is absolutely bonkers. Certificate authority should have always been delegated just like nameserver authority is.

stephenr 5 hours ago [-]
What company with enough infrastructure to warrant an IT department is only using certificates on its web servers, and thus doesn't have a standard tool for issuing/renewing/deploying certificates for *all* services that need them?
metafunctor 17 hours ago [-]
I never saw it as a problem for nginx to just serve web content and let certbot handle cert renewals. Whatever happened to doing one thing well and making it composable? Fat tools that try to do everything inevitably suck at some important part.
idoubtit 8 hours ago [-]
This optional module makes simple cases simpler.

Having distinct tools for serving content and handling certs is not a problem, and nothing changes on this side. Moreover, the module won't cover every need.

BTW, certbot is rather a "fat tool" compared to other acme tools like lego. I've had bad experiences with certbot in the past because it tried to do too much automatically and was hard to diagnose – though I think certbot has been rewritten since then, since it no longer depends on Python's zope.

pointlessone 7 hours ago [-]
Nginx with certbot is annoying to set up, especially with the HTTP challenge, mostly because of a circular dependency: you need nginx running to serve the challenge, and once certbot gets a cert you need to reload nginx.

I switched to Lego because it has out of the box support for my domain registrar so I could use DNS instead of HTTP challenge. It’s also a single go binary which is much simpler to install than certbot.

account42 5 hours ago [-]
There is no circular dependency since the HTTP challenge uses unencrypted port 80 and not HTTPS. Reloading nginx config after cert updates is also not a problem as nginx can do that without any downtime.
pointlessone 4 hours ago [-]
There's a dependency in the nginx config. You have to specify where your certs are. So you have to have a working config before you start nginx, then you need to get certs and change the config with the cert/key location before you can HUP nginx. This is extremely brittle, especially if you have a new box or a setup where you regularly bring up clean nodes, as that's when all sorts of unexpected things can happen. It's much less brittle when you already have a cert and a working config and just renew the certificate, but not all setups are like that. I can't even confidently say that most are like that.
SchemaLoad 16 hours ago [-]
It's kind of annoying to set up. Last I remember certbot could try to automatically configure things for you but unless you had the most default setup it wouldn't work. Just having Nginx do everything for you seems like a better solution.
account42 5 hours ago [-]
Certbot can just as easily work with a directory you have nginx set up to point .well-known/acme-challenge/ to. No automatic configuration magic needed.
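
A minimal sketch of that arrangement (the webroot path is arbitrary):

    # nginx: serve challenge files from a plain directory
    location /.well-known/acme-challenge/ {
        root /var/www/acme;
    }

    # certbot: drop challenge files into the same directory
    certbot certonly --webroot -w /var/www/acme -d example.com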
stephenr 5 hours ago [-]
I wonder about the same thing. I've come to the conclusion that it's driven a lot by the Management-Ideal definition of devops: developers who end up doing ops without sufficient knowledge or experience to do it well.
RagnarD 21 hours ago [-]
After discovering Caddy, I don't use Nginx any longer. Just a much better development experience.
makaking 10 hours ago [-]
The fact that certificate management is still evolving makes me realize how young the web still is in the big scheme of things.
aorth 22 hours ago [-]
Oh this is exciting! Caddy's support is very convenient and it does a lot of other stuff right out of the box which is great.

One thing keeping me from switching to Caddy in my places is nginx's rate limiting and geo module.

kelvinjps10 3 hours ago [-]
Actually this was the reason I started using caddy, and easier config too!
st3fan 13 hours ago [-]
Basically the only reason I install Caddy instead of Nginx as a reverse proxy is the one-liner to get TLS & ACME going. Maybe this will change that? Not sure.
Humphrey 10 hours ago [-]
Anybody know how this would work for multiple nginx backends or failover machines - as I assume it's only possible to auto-fetch certificates for the live machine. Is it expected that you would use scp or similar to copy certs from the live machine to the failover / new server?
pointlessone 7 hours ago [-]
You don't need exactly the same cert for failover. You only need a valid certificate. You don't even need the same cert for every entry in your load balancer. The client picks a single IP address at resolution time, connects to it, and keeps using that TLS connection for the whole session.
account42 5 hours ago [-]
But you do need Let's Encrypt (or whatever ACME provider you use) to connect to the same server you are trying to set up the cert on. And they intentionally try to fetch the challenge response from multiple geographically distinct locations.
Arch-TK 3 hours ago [-]
kind of feels unnecessary honestly...

Automating webroot is trivial and I would rather use an external rust utility to handle it than a module for nginx. I guess if you _only_ need certs for your website then this helps but I have certs for a lot of other things too, so I need an external utility anyway.

And no dns-01 support yet.

miggy 21 hours ago [-]
It seems HAProxy also added ACME/DNS-01 challenge support in haproxy-3.3-dev6 very recently. https://www.mail-archive.com/haproxy@formilux.org/msg46035.h...
owenthejumper 16 hours ago [-]
It added ACME in 3.2, the DNS challenge is coming next: https://www.haproxy.com/blog/announcing-haproxy-3-2#acme-pro...
ilaksh 19 hours ago [-]
Just to check, this means we can use some extra lines in the nginx configuration as an alternative to installing and running certbot, right?

Also does it make it easier for there to be alternatives to Let's Encrypt?

pointlessone 7 hours ago [-]
Yes, a few lines in the config and you don’t need certbot any more.

You can specify any ACME API base URL. It’s not just Let’s Encrypt.
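
From the announcement, the configuration looks roughly like this - it's a preview module, so treat this as a sketch and expect directive details to change:

    resolver 127.0.0.1:53;  # the module resolves the ACME endpoint itself

    acme_issuer letsencrypt {
        uri        https://acme-v02.api.letsencrypt.org/directory;
        contact    admin@example.com;  # placeholder address
        state_path /var/cache/nginx/acme-letsencrypt;
        accept_terms_of_service;
    }

    server {
        listen      443 ssl;
        server_name example.com;

        acme_certificate letsencrypt;

        ssl_certificate     $acme_certificate;
        ssl_certificate_key $acme_certificate_key;
    }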

account42 7 hours ago [-]
> Support for other challenges (TLS-ALPN, DNS-01) is planned in future.

Looking forward to this. HTTP-01 already works well enough for me with certbot (which I need for other services anyway and gives me more control over having multiple domains in one cert) but for wildcard certs there are not as many good solutions.

samgranieri 22 hours ago [-]
This is a good start. One less moving part. They should match Caddy for feature parity on this and add DNS-01 challenges as well.

I'm not using nginx these days because of this.

ankit84 21 hours ago [-]
We have been using Caddy for many years now. Picked just because it has automatic cert provisioning. Caddy is really an easier alternative, secure out of the box.
smarx007 20 hours ago [-]
When will this land in mainline distros (no PPAs etc)? Given that a new stable version of Debian was released very recently, I would imagine August 2027 for Debian and maybe April 2026 for Ubuntu?

In this very thread some people complain that certbot uses snap for distribution. Imagine making a feature release and having to wait 1-2 years until your users get it on a broad scale.

giancarlostoro 20 hours ago [-]
Nginx maintains their own repository from which you can install nginx on your Ubuntu / Debian systems.

I looked at Arch and they're a version behind, which surprised me. Must not be a heavily maintained arch package.

jonnybarnes 2 hours ago [-]
nginx has a stable release and a mainline release, which are packaged in Arch respectively as `nginx` and `nginx-mainline`. Both look up-to-date to me.
Saris 20 hours ago [-]
I assume they're complaining that it's a snap vs flatpak, not so much vs the distro package repos.
tialaramex 21 hours ago [-]
It's good to see this, it surprised me that this didn't happen to basically everything, basically immediately.

I figured either somehow Let's Encrypt doesn't work out, or, everybody bakes in ACME within 2-3 years. The idea that you can buy software in 2025 which has TLS encryption but expects you to go sort out the certificate. It's like if cars had to be refuelled periodically by taking them to a weird dedicated building which is not useful to anything else rather than just charging while you're asleep like a phone and... yeah you know what I get it now. You people are weird.

ugh123 19 hours ago [-]
How does something like this work for a fleet of edge services, load balancing in distinct areas, that all share a certificate? Does each nginx instance go through the same protocol/setup steps?
philsnow 19 hours ago [-]
You'd get rate limited pretty hard by Let's Encrypt, but if you're rolling your own acme servers you could do it this way.

If you wanted to use LE though, you could use a more "traditional" cert renewal process somewhere out-of-band, and then provision the resulting keys/certs through whatever coordination thing you contrive (and HUP the nginxs)
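
A crude sketch of that contrivance, with hostnames and paths as placeholders:

    # renewal box: certbot runs the hook after each successful renewal
    certbot renew --deploy-hook /usr/local/bin/push-certs

    # /usr/local/bin/push-certs
    #!/bin/sh
    for host in edge1.example.net edge2.example.net; do
        scp /etc/letsencrypt/live/example.net/fullchain.pem \
            /etc/letsencrypt/live/example.net/privkey.pem \
            "$host:/etc/nginx/certs/"
        ssh "$host" nginx -s reload   # the HUP, effectively
    done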

placatedmayhem 19 hours ago [-]
They don't need to share a single cert. Multiple certificates can be, and possibly should be, issued for the same address (or set of addresses). This means that one front door server that gets popped doesn't expose all connections to the larger service.

The downside is obviously that certificate maintenance increases, but ACME automates the vast majority of that work away.

ExoticPearTree 19 hours ago [-]
It is a start. Maybe this will serve as a proof of concept that it can be done, and then other challenge types could be implemented.

Probably like many others here, I would very much like to see Cloudflare DNS support.

cobbzilla 22 hours ago [-]
There’s a section on renewals but no description of how it works. Is there a background thread/process? Or is it request-driven? If request-driven, what about some hostname that’s (somehow) not seen traffic in >90 days?
adontz 22 hours ago [-]
certbot has a plugin for nginx, so I'm not sure why people think it was hard to use Let's Encrypt with nginx.
bityard 21 hours ago [-]
Maybe it's better these days, but even as an experienced systems administrator, I found certbot _incredibly_ annoying to use in practice. They tried to make it easy and general-purpose for beginners to web hosting, but they did it with a lot of magic that does Weird Stuff to your host and server configuration. It probably works great if you're in an environment where you just install things via tarball, edit your config files with Nano, and then rarely ever touch the whole setup again.

But if you're someone who needs tight control over the host configuration (managed via Ansible, etc) because you need to comply with security standards, or have the whole setup reproducible for disaster recovery, etc, then solutions like acme.sh or LEGO are far smaller, just as easy to configure, and in general will not surprise you.

creshal 22 hours ago [-]
Certbot is a giant swiss army chainsaw that can do everything middlingly well, if you don't mind vibecoding your encryption infrastructure. But a clean solution it usually isn't.

(That said, I'm not too thrilled by this implementation. How are renewals and revocations handled, and how can the processes be debugged? I hope the docs get updated soon.)

jeroenhd 21 hours ago [-]
Certbot always worked fine for me. It autodetects just about everything and takes care of just about everything, unless you manually instruct it what to do (i.e. re-use a specific CSR) and then it does what you tell it to do.

It's not exactly an Ansible/Kubernetes-ready solution, but if you use those tools you already know a tool that solves your problem anyway.

jddj 22 hours ago [-]
From the seeming consensus I was dreading setting let's encrypt up on nginx, until I did it and it was and has been... Completely straightforward and painless.

Maybe if you step off the happy path it gets hairy, but I found the default certbot flow to be easy.

vivzkestrel 11 hours ago [-]
absolute nightmare to get this to work inside docker compose dude. Nobody has documented a decent working solution for this yet. Too many quirks and third parties like nginx-proxy-manager or nginx-proxy/nginx-proxy on github make it even more terrible
orblivion 22 hours ago [-]
From a quick look it seems like a command you use to reconfigure nginx? And that's separate from auto-renewing the cert, right?

Maybe not hard, but Caddy seems like even less to think about.

orblivion 22 hours ago [-]
I guess I should compare to this new Nginx feature rather than Caddy. It seems like the benefit of this feature is that you don't have a tool to run, you have a config to put into place. So it's easier to deploy again if you move servers, and you don't have to think about making sure certbot is doing renewals.
9dev 22 hours ago [-]
Certbot is a utility that can only be installed via snap. That crap won’t make it to our servers, and many other people view it the same way I do.

So this change is most welcome.

adontz 3 hours ago [-]
It's a Python package you can install with pip; I've never installed it with Snap.
orblivion 18 hours ago [-]
That doesn't sound right to me. It's been in Debian and Ubuntu for a while:

* https://packages.debian.org/bullseye/certbot

* https://packages.ubuntu.com/jammy/certbot

9dev 17 hours ago [-]
Last I was concerned with, this was the situation:

https://github.com/certbot/certbot/issues/8345#issuecomment-...

That’s been three years though. The EFF/Certbot team has lost so much goodwill with me over that, I won’t go back.

breadwinner 19 hours ago [-]
But can it generate a self-signed certificate for intranet use? Often on the intranet you want to encrypt traffic to prevent casual snooping with Wireshark.
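
(Self-signed looks out of scope for an ACME module; today I'd fall back to a one-off openssl command, e.g. on OpenSSL 1.1.1+ where -addext is available:

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout intranet.key -out intranet.crt \
        -subj "/CN=intranet.example" \
        -addext "subjectAltName=DNS:intranet.example"  # SAN keeps modern clients happy

with intranet.example standing in for your internal hostname.)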
arjie 16 hours ago [-]
Neat, that'll be nice to have. Currently I just use certbot and it does a pretty damn good job. I just set the HTTP:80 configuration and certbot will migrate it to HTTPS:443 and take care of the certificates and so on. For the moment, I'll probably stick to that till this is mature.
drchaim 16 hours ago [-]
I’ve already migrated to caddy ;)
zaik 21 hours ago [-]
Is there a way to notify other services if renewal has succeeded? My XMPP server also needs to use the certificate.
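
With certbot I'd use a deploy hook, e.g. (prosody standing in for whatever daemon needs the cert):

    certbot renew --deploy-hook "systemctl reload prosody"

I haven't spotted an equivalent hook in the new module's docs yet.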
crest 2 hours ago [-]
FINALLY!!!
aoe6721 20 hours ago [-]
It was introduced a long time ago in the Angie fork, with much better support.
cnst 17 hours ago [-]
Here are the docs for Angie's version of the http_acme module:

https://en.angie.software/angie/docs/configuration/modules/h...

The original announcement of Angie ACME:

Angie, fork of Nginx, supports ACME - https://news.ycombinator.com/item?id=39838228 - March 27, 2024 (1 comment)

Per above, it looks like ACME support was released with Angie 1.5.0 on 2024-03-27.

BTW, if you don't care about ACME, and want the original nginx, then there's also the freenginx fork, too:

Freenginx: Core Nginx developer announces fork - https://news.ycombinator.com/item?id=39373327 - (1131 points) - Feb 14, 2024 (475 comments)

andrewstuart 21 hours ago [-]
It was this that sent me from nginx to caddy.

But I’m not going back. Nginx was a real pain to configure with so many puzzles and surprises and foot guns.

themafia 11 hours ago [-]
We had about 100 domains or so that needed to be redirected to their new homes. The previous person in my position set it all up using GoDaddy domains and redirects. I was gobsmacked at how much effort it took, and when browsers switched to HTTPS first, how badly it broke the setup.

That's when I found "golang.org/x/crypto/acme/autocert" and then I built a custom redirect server using it. It implements TLS-ALPN-01 which works fantastically with Let's Encrypt.

Now we can just add a domain to our web configuration, set up its target and redirect style, and then push the configuration out to the EC2 instance providing the public-facing service. As soon as the first client makes a request, they're effectively put "on hold" while the server arranges for the certificate in the background. As soon as it's issued and installed on the server, the server continues with the original client.

It's an absolute breeze and it makes me utterly detest going backwards to DNS-01 or HTTP-01 challenges.
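
The core of such a server is small. A trimmed sketch of the pattern (config loading and per-domain redirect rules omitted; domains and paths are placeholders):

    package main

    import (
        "log"
        "net/http"

        "golang.org/x/crypto/acme/autocert"
    )

    func main() {
        m := &autocert.Manager{
            Prompt:     autocert.AcceptTOS,
            Cache:      autocert.DirCache("/var/lib/acme-cache"),  // persist certs across restarts
            HostPolicy: autocert.HostWhitelist("old.example.com"), // domains we may answer for
        }

        srv := &http.Server{
            Addr:      ":443",
            TLSConfig: m.TLSConfig(), // answers TLS-ALPN-01 challenges inline
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                http.Redirect(w, r, "https://new.example.com"+r.URL.Path, http.StatusMovedPermanently)
            }),
        }
        // empty cert/key paths: certificates come from the autocert Manager
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }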

do_not_redeem 22 hours ago [-]
It looks like this isn't included by default with the base nginx, but requires you to install it as a separate module. Or am I wrong?

https://github.com/nginx/nginx-acme

bhaney 22 hours ago [-]
Nginx itself is mostly just a collection of modules, and it's up to the one building/packaging the nginx distribution to decide what goes in it. By default, nginx doesn't even build the ssl module (though thankfully it does build the http module by default). Historically it only had static modules, which needed to be enabled or disabled at compile time, but now it has dynamic modules that can be compiled separately and loaded at runtime. Some older static modules now have the option of being built as dynamic modules, and new modules that can be written as dynamic modules generally are. A distro can choose to package a new dynamic module in their base nginx package, as a separate package, or not at all.

In a typical distro, you would normally expect one or more virtual packages representing a profile (minimal, standard, full, etc) that depends on a package providing an nginx binary with every reasonable static-only module enabled, plus a number of separately packaged dynamic modules.
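
So on distros that ship it as a dynamic module, enabling it should amount to one line (module file name may vary by package; nginx.org's repo appears to ship it as nginx-module-acme):

    # top of nginx.conf, before the http {} block
    load_module modules/ngx_http_acme_module.so;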

timw4mail 22 hours ago [-]
Yes, that is correct.
johnisgood 22 hours ago [-]
For now I will stick to what works (nginx + certbot), but I will give this a try. Anyone tried it?

Caddy sounds interesting too, but I am afraid of switching because what I have works properly. :/

bityard 21 hours ago [-]
I grew up on Apache and eventually became a wizard with its configuration and myriad options and failure modes. Later on, I got semi-comfortable with nginx, which was a little simpler because it did less than Apache, but you could still get a fairly complex configuration going if you're running weird legacy PHP apps, for example.

When I tried using Caddy with something serious for the first time, I thought I was missing something. I thought, these docs must be incomplete, there has to be more to it, how does it know to do X based on Y, this is never going to work...

But it DID work. There IS almost nothing to it. You set literally the bare minimum of configuration you could possibly need, and Caddy figures out the rest and uses sane defaults. The docs are VERY good, there is a nice community around it.

If I had any complaint at all, it would be that the plugin system is slightly goofy.

orphea 22 hours ago [-]
Caddy has been great for me. I don't think you should switch if your current setup works but give it a try in a new project.
roywashere 22 hours ago [-]
I like it!!! I am using Apache mod_md on Debian for a personal project. That is working fine, but when setting up a new site it somehow required two Apache restarts, which is not super smooth.
KronisLV 17 hours ago [-]
It's interesting that mod_md is so unknown: https://httpd.apache.org/docs/2.4/mod/mod_md.html

But also hey, now we have built-in ACME support in all the mainstream web servers: Nginx, Caddy and Apache2! Ofc Caddy will be the most polished, since that is one of its main selling points.

thway15269037 20 hours ago [-]
Does nginx still lock Prometheus metrics and active probing behind $$$$$ (literally hundreds of thousands)? I forgot the third most important thing: I think it was re-resolving upstreams.

Anyway, good luck staying competitive lol. Almost everyone I know has either jumped to something saner or is in the process of migrating away.

burnt-resistor 19 hours ago [-]
Yeah, I don't want my webserver to turn into systemd and start changing certificates. This is excessive functionality for something that should be handled elsewhere, by whatever drives the coordination of rolling certs.
andrewmcwatters 22 hours ago [-]
It seems like if you commit your NGINX config with these updates, you can have one less step in your deployment if you're doing something like:

    # https://certbot.eff.org/instructions?ws=other&os=ubuntufocal
    sudo apt-get -y install certbot
    # sudo certbot certonly --standalone
    
    ...
    
    # https://certbot.eff.org/docs/using.html#where-are-my-certificates
    # sudo chmod -R 0755 /etc/letsencrypt/{live,archive}

So, unfortunately, this support still seems more involved than using certbot, but at least one less manual step is required.

Example from https://github.com/andrewmcwattersandco/bootstrap-express
