Switching to systemd-resolved for mDNS

Local name resolution made simple

I recently installed Arch Linux for the first time in about a decade on a small machine I had lying about, and rather than install Avahi for mDNS, I opted to use systemd-resolved, seeing as it was already part of the default install. While I think systemd-resolved still lacks some of Avahi’s more advanced features – I found no equivalent to avahi-publish, for example – I liked the simplicity of the systemd solution. That made me think it would be easier if all machines on my local network used the same mDNS setup: easier to remember, and easier to figure out why a host might not respond to a ping. So this post is about setting up systemd-resolved and – if necessary – disabling alternative mDNS solutions.

As a general rule, I think it best not to acknowledge the “elephant in the room”. That way, at some point, we will finally stop talking about it as an elephant and simply treat it as part of the furniture. On this occasion, however, the fruit is too low-hanging, so if anybody’s first thought was to curse at Lennart Poettering and his overreach, I suggest going to Avahi’s Wikipedia page and checking the infobox.

The setup I’m going for here is somewhat minimalistic. I want

  • each and every host to announce itself on the local network by its $hostname and
  • on each and every host I want the ability to resolve all .local names.

That’s all. The two things obviously go hand in hand: if HOST1 does not announce its presence and doesn’t claim its name, HOST2 will be hard pressed to resolve the HOST1.local domain. However, these are logically separate functions, and while my network is quite egalitarian, it is possible to imagine a setup where a host – let’s call it MAEVE – needs name resolution for other hosts but has no need to announce itself.

There are far fancier things you can do with mDNS. For my purposes, it’s enough that wherever I am on the network I can always reach another host by calling out ${hostname}.local. That means that a) I don’t need to memorise IP addresses when using an SSH or Samba client or the like, and b) I don’t need to consider whether the target host is currently connected by cable or wifi.

Finally, in an attempt to gain a better understanding of mDNS, I am going to investigate what actually happens under the hood when an mDNS request is sent off.

Disabling Avahi

If Avahi comes pre-bundled with the OS, as it still does on the Manjaro install that I’m currently using, it will need to be disabled. Uninstalling it is probably asking for trouble, as pacman warns me of a lot of packages that “depend” on Avahi. So the best I can do is disable the service and mask it so no other service can start it up as a dependency.

[ ~ ] sudo systemctl stop avahi-daemon
[ ~ ] sudo systemctl disable avahi-daemon
[ ~ ] sudo systemctl mask avahi-daemon

What about all the packages listing it as a dependency? Well, package dependency and service dependency are obviously two different things. The package is still installed, so nothing is going to break as far as the package manager is concerned.

As for what the packages need from Avahi, I suspect it’s either a library, some binary or just access to local domain resolution (which is equally well supplied by the systemd solution). In all likelihood those package dependencies were defined when Avahi was the only game in town for that. Systemd did not complain about service dependencies when I disabled Avahi and so far nothing’s broken.

Don’t enable the systemd-resolved.service just yet, though. There are a couple of configuration steps still before we get there.

Double trouble

In order for systemd-resolved to do what I want it to do, two things need to be set. The first is simple: I need to enable mDNS in systemd-resolved’s configuration. The second is less obvious: I also need to allow mDNS on whatever network connection I want it on.

I can see that it makes sense, though. On a laptop I may well want to have it enabled on a local wifi network but not when using WWAN connections.

I will do the simple one first. Going to /etc/systemd/, I find a configuration file called resolved.conf. In this file I enter (or uncomment) the following lines in the [Resolve] section:


MulticastDNS=yes is easy to understand, but why LLMNR=no? LLMNR is an mDNS alternative proposed by Microsoft. I want to disable it for two reasons, neither of which is that it says Microsoft on the package.

First, having multiple competing local name resolution schemes is a sure way to get into trouble – or at least to confuse myself. If I use a tool like resolvectl on the command line, it will tell me which scheme it used to resolve a name. Anywhere else, the scheme will be hidden from me and will invite assumptions, often faulty.

Secondly, Microsoft itself is giving up on LLMNR, and Windows 10 has supported mDNS for a while now. That might still leave some use cases where mDNS does not work and LLMNR does – a Windows 7 machine never upgraded to Windows 10, for example. Those machines do have the option, however, of adding mDNS support through Bonjour Print Services for Windows.

At any rate, feel free to keep LLMNR. But don’t say I didn’t warn you.

Next up, doing the exact same thing in the configuration of the network. This obviously depends on what software is used for maintaining a network connection.

Here I am going to a) set it up for NetworkManager and b) set it globally, i.e. for all network connections, because I am at a desktop computer that only connects to the same router (but may do so on more than one interface). I go to the directory /etc/NetworkManager/conf.d/ and create a new globals.conf file with the contents:


LLMNR is again disabled completely. Setting mdns to 2 means “register hostname and resolving for the connection”, according to the man page for nm-settings-nmcli. If I wanted the “MAEVE” setup alluded to previously, I would set this to 1 here, for “do not register hostname but allow resolving of mDNS host names”. The parallel setting in resolved.conf is resolve instead of yes.
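The same thing can also be set per connection profile instead of globally. A hedged sketch of what the hypothetical MAEVE machine’s keyfile under /etc/NetworkManager/system-connections/ might contain (profile name and surrounding keys invented; note that inside a profile’s [connection] section the keys drop the connection. prefix):

# Fragment of a hypothetical connection profile – most keys omitted.
# mdns=1: resolve .local names, but do not register our own hostname.
# llmnr=0: keep LLMNR disabled here as well.

The same could be done from the command line with something like nmcli connection modify maeve-wifi connection.mdns 1 (again, the profile name is made up).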

NetworkManager obviously needs to be restarted after adding/editing the file. For systemd-networkd connections and more options, see the Arch wiki article.
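For systemd-networkd the equivalent would be a .network file – sketched here under the assumption of a simple wired setup, with an invented file name and match pattern:

# /etc/systemd/network/ (hypothetical file and match)

# MulticastDNS= also accepts "resolve" for the MAEVE-style,
# resolve-only setup.

As with NetworkManager, the service needs a reload or restart afterwards for the change to take effect.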


Taking over resolv.conf

How do applications actually talk to systemd-resolved? The man page for systemd-resolved lists no fewer than three ways, including DBUS, glibc, and finally a local IP address at (basically a pseudo DNS server, referred to as a stub). The man page states that

Programs issuing DNS requests directly, bypassing any local API may be directed to this stub, in order to connect them to systemd-resolved


But it is clear that applications are considered better off using either of the first two options. I am not going to try to figure out how various applications – web browsers, file browsers, ssh, rsync, etc. – get their DNS fix. But I do want to make sure that all DNS requests go through systemd-resolved, again to leave no doubt as to what’s at fault, should a request fail.
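For the glibc route, the gatekeeper is the hosts line in /etc/nsswitch.conf. Many distributions ship a suitable line by default; the systemd-resolved man page suggests something along these lines:

hosts: mymachines resolve [!UNAVAIL=return] files myhostname dns

Without the resolve entry, NSS-based lookups bypass systemd-resolved’s native API entirely, which matters for the mDNS case discussed here.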

In order to do that, I need to take /etc/resolv.conf away from NetworkManager and give it to systemd-resolved. resolv.conf is the old-school way of telling a host which DNS server(s) to use. In the most basic of use cases, you would simply edit it yourself and add a single IP address on a single line. In the setup here, though, /etc/resolv.conf will point to the aforementioned stub and ensure that all requests go through systemd-resolved, including standard DNS requests. It is worth noting that this is specifically not the way mDNS is resolved; the man page points out that mDNS queries have to go through DBUS. And if you were using resolv.conf the old way, you obviously would not want your local name resolution queries sent to Cloudflare.

Also, if you want systemd-resolved to pass your standard DNS queries on to a specific DNS server, say Cloudflare’s or Quad9’s, and that server is not the one suggested by the network’s router, you should add it in /etc/systemd/resolved.conf.
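That could look something like the following fragment of /etc/systemd/resolved.conf – the servers here are just examples (Cloudflare and Quad9), pick your own:

DNS= 2606:4700:4700::1111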

I first need to enable and start the systemd-resolved service so that it creates the replacement file for my current /etc/resolv.conf.

[ /etc ] sudo systemctl enable systemd-resolved.service
Created symlink /etc/systemd/system/dbus-org.freedesktop.resolve1.service → /usr/lib/systemd/system/systemd-resolved.service.
Created symlink /etc/systemd/system/sysinit.target.wants/systemd-resolved.service → /usr/lib/systemd/system/systemd-resolved.service.
[ /etc ] sudo systemctl start systemd-resolved.service

I then remove the current /etc/resolv.conf (created by NetworkManager at startup) and replace it with systemd-resolved’s solution. I do this by inserting a symbolic link to a small file that lists as the nameserver to use.

[ /etc ] sudo rm /etc/resolv.conf  
[ /etc ] sudo ln -s /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
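The file behind that symlink is tiny. As a rough sketch – the sample contents below are assumed from a typical install, not copied from mine – this is what it looks like and how a minimal resolv.conf parser would pick out the nameserver:

# Sample contents of /run/systemd/resolve/stub-resolv.conf (assumed/typical;
# the search domain is made up)
sample = """\
# This file is managed by man:systemd-resolved(8). Do not edit.
options edns0 trust-ad
search lan

def nameservers(text):
    """Collect the IPs from all 'nameserver' lines, ignoring other lines."""
    return [line.split()[1] for line in text.splitlines()
            if line.startswith("nameserver")]

print(nameservers(sample))  # → ['']

In other words: every resolver that reads /etc/resolv.conf the classic way now lands on the stub, and systemd-resolved decides where the query really goes.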

Does it work?

systemd-resolved comes with a very handy tool called resolvectl, which will help me figure out whether everything is working as intended:

[ ~ ] resolvectl status
           Protocols: -LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
    resolv.conf mode: stub
Fallback DNS Servers: 2606:4700:4700::1111#cloudflare-dns.com 2620:fe::9#dns.quad9.net 2001:4860:4860::8888#dns.google

Link 2 (enp0s31f6)
    Current Scopes: DNS mDNS/IPv4
         Protocols: +DefaultRoute -LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server:
       DNS Servers:

Link 3 (wlp5s0)
    Current Scopes: DNS mDNS/IPv4
         Protocols: +DefaultRoute -LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server:
       DNS Servers:

As is apparent, I have successfully set a global no to LLMNR and yes to mDNS – and the global settings are also applied to each individual connection. If I want to be sure I’m understanding the readout correctly I can ask for individual protocols:

[ ~ ] resolvectl mdns
Global: yes
Link 2 (enp0s31f6): yes
Link 3 (wlp5s0): yes
[ ~ ] resolvectl llmnr
Global: no
Link 2 (enp0s31f6): no
Link 3 (wlp5s0): no

How does it uhm… How does it work?

I was curious to see it in action but found that not much has been written on how to peek at mDNS resolution. It’s not terribly difficult, though:

  • Set systemd-resolved to debug log mode
  • Drop in on the logging through journalctl
  • Use resolvectl to send off a query
  • Oh, and quickly set the log mode back to info again so as to avoid drowning in output

[ ~ ] sudo resolvectl log-level
[ ~ ] sudo resolvectl log-level debug
[ ~ ] sudo journalctl -f -u systemd-resolved.service

I use journalctl to inspect the logs and single out the systemd-resolved service (otherwise I would get signals from all services logging to the journal). -f is for follow, which updates the output in real time, as opposed to getting a snapshot. With this listening setup in place, I start a new terminal and enter:

[ ~ ] resolvectl --cache=no query host2.local

--cache is set to no (or false) so as to provoke an actual lookup on the network, rather than just using previously gathered information. Since debug mode is very wordy, I have removed bits and pieces from the logged output – mostly DBUS communication details, I think.

11:58:20: Looking up RR for host2.local IN A.
11:58:20: Looking up RR for host2.local IN AAAA.
11:58:20: Firing regular transaction 42033 for <host2.local IN A> scope mdns on wlp5s0/INET (validate=yes).
11:58:20: Delaying mdns transaction 42033 for 36938us.
11:58:20: Initial jitter phase for transaction 42033 elapsed.
11:58:20: Retrying transaction 42033.
11:58:20: Firing regular transaction 42033 for <host2.local IN A> scope mdns on wlp5s0/INET (validate=yes).
11:58:20: Sending query packet with id 0 on interface 3/AF_INET of size 27.
11:58:20: Received mdns UDP packet of size 27, ifindex=3, ttl=255, fragsize=0, sender=, destination=
11:58:20: Received mdns UDP packet of size 37, ifindex=3, ttl=255, fragsize=0, sender=, destination=
11:58:20: Got mDNS reply packet
11:58:20: Checking for conflicts...
11:58:20: Processing incoming packet of size 37 on transaction 42033 (rcode=SUCCESS).
11:58:20: Regular transaction 42033 for <host2.local IN A> on scope mdns on wlp5s0/INET now complete with <success> from network (unsigned; non-confidential).
11:58:20: Freeing transaction 42033.
11:58:20: Added positive unauthenticated non-confidential cache entry for HOST2.local IN A 120s on wlp5s0/INET/

We clearly start out with two DNS requests, one for IPv4 (A records) and one for IPv6 (AAAA). RR is short for resource records, I just found out, but resolvectl is only asking for A/AAAA, or address, records. Since I have disabled IPv6, it seems that that part is silently dropped and all we see is the IPv4 request. As for the delay and jitter parts, the request is held back for about 37 milliseconds (36938 µs) – mDNS queriers are in fact supposed to delay their first query by a small random interval, to avoid floods of synchronised queries. Once it is fired off for real (Retrying transaction 42033), a scope-appropriate interface is picked for the query and off it goes.

Now, the way I understand it, multicast – the m in mDNS – works by sending off a single packet to the multicast address This then gets picked up by the router, which is responsible for keeping track of which hosts have expressed an interest in mDNS. The “expressed interest” part is supposedly what separates multicast from broadcast; the host on the receiving end has actually ticked the box saying “please, notify me”. The router then forwards the query to those selected hosts.
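To make the wire format concrete, here is a small hand-rolled sketch of such a query datagram – not how resolvectl builds it internally, just an illustration of what goes out to port 5353 (packet sizes depend on the name, so they won’t exactly match the log above):

import struct

MDNS_GROUP = ("", 5353)  # the well-known mDNS multicast address/port

def mdns_query(name, qtype=1):
    """Build a minimal mDNS question packet (QTYPE 1 = A record, QCLASS 1 = IN)."""
    # Header: id=0 (mDNS convention), flags=0 (standard query),
    # 1 question, 0 answer/authority/additional records
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # QNAME: each dot-separated label prefixed with its length, terminated by 0x00
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

packet = mdns_query("host2.local")
print(len(packet))  # 29 bytes: 12 header + 13 QNAME + 4 QTYPE/QCLASS
# Sending it would be a plain UDP sendto (commented out to keep this side-effect free):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, MDNS_GROUP)

Incidentally, the “request a unicast response” option discussed next lives in this packet too: per RFC 6762, a querier can set the top bit of the QCLASS field to ask for a direct reply.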

According to the Wikipedia article, there is an option to request a direct unicast response (which “SHOULD”, in capital letters, be respected). However, inspecting the requests from resolvectl as well as avahi-resolve, I found no sign that they made use of this. Wireshark also confirmed what the logs were telling me: that the responses were likewise sent to the multicast address. Which I guess makes sense; if we’re updating one host on what address host2 is on, we might as well tell everybody (who subscribes to the mDNS newsletter).

One thing I have no explanation for, though, is why I am getting two responses – one of them from the local host itself. I am sure it is only confirming what host2 itself is saying, since Checking for conflicts... doesn’t trigger anything. My suspicion is that resolvectl is echoing the response to itself somehow, muttering under its breath and listening to the muttering.

I also did some research on how conflicts would be detected and possibly resolved, though came up short, apart from a very old post claiming that mDNS in general (no implementations named) would trust whoever responded first. This would obviously be a problem if you cannot trust everybody on the network. And you probably shouldn’t. That very risk has been called out for Microsoft’s local name resolution scheme, LLMNR, but not, so far as I know, for mDNS.

I guess the best way would be to test it. As good an excuse as any to go out and buy more Raspberry Pis, right?

Hello My Name Is © Travis Wise, CC BY 2.0


So you didn’t have to change nsswitch.conf and add “resolve” to the hosts line? I’m doing something wrong and nothing is working. What I’m trying to do is use mDNS when our real DNS goes down

Not sure I understand how that would work. mDNS is only for resolving local names. How would that be a fallback in case of a “DNS outage”?

Ok, well maybe I don’t need the mDNS part, just systemd-resolved. It sounds like it will cache my real DNS activity and if DNS servers go down will hopefully use the cache.
