Continuously import Logfiles into journald

I use Journalbeat to forward my systemd-journal to Logstash/Elasticsearch. Whenever possible, I configure the software on my servers to log to syslog, which on modern systems means logging to systemd-journald. If a program doesn't support syslog, it can also log to stdout, which systemd captures into the journal by default when the program runs as a systemd service. This makes it easy to forward all my logs with one simple Journalbeat config. But not all software offers the ability to log to syslog or stdout; some only log to files.
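
As a small illustration of the stdout path: a unit like the following needs no logging configuration at all, because StandardOutput= and StandardError= default to the journal (the /usr/local/bin/myapp binary is only a placeholder):

[Unit]
Description=myapp

[Service]
# stdout/stderr end up in journald because StandardOutput= defaults to journal
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target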

I could use Filebeat to import these files, but then the entries would be missing some fields I use for filtering, like syslog.identifier, and I would need to configure and run two agents. Importing these logs into journald also lets me see all logs with journalctl when I am connected to the machine directly.

But my method has one major disadvantage compared to Filebeat: it does not track where it last stopped reading the logfile and always starts at the end of the file. Lines written while the importer is not running (during a reboot or after a service crash) are therefore lost. If you need a guarantee that every line arrives, use Filebeat.

To import logfiles into journald I need to continuously read them and write each line to journald. The first idea I had was piping tail -f into logger. But piping is not possible directly inside a systemd service, so I would have to create a single-line bash script and run that with ExecStart=.
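
For comparison, that workaround would have looked roughly like this (a sketch only, using dpkg.log as the example file and a made-up script path):

#!/bin/sh
# /usr/local/bin/dpkg2journald.sh - pipe the logfile into syslog via logger
tail -F -n0 /var/log/dpkg.log | logger -t dpkg2journald

The unit would then carry ExecStart=/usr/local/bin/dpkg2journald.sh instead of the command itself.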

I then stumbled upon systemd-cat, a very basic tool to write messages to journald. It can work like logger and be used after a pipe, or it can run another app and automatically route that app's stdout and stderr to the journal. This also allows me to use it directly inside the ExecStart= option.
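
Both modes are easy to try on the shell (demo is just an arbitrary identifier):

# as a filter after a pipe
echo "hello journal" | systemd-cat -t demo
# or wrapping another command, capturing its stdout and stderr
systemd-cat -t demo /bin/echo "hello journal"
# both lines now show up under that identifier
journalctl -t demo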

Now for an example: ExecStart=/usr/bin/systemd-cat -t dpkg2journald /usr/bin/tail -F -n0 /var/log/dpkg.log.

The -t parameter for systemd-cat sets the syslog identifier.

The -F parameter for tail not only follows changes to the file; it also copes with logrotate moving the file away and with the file not existing yet when tail starts.

The -n0 parameter stops tail from re-emitting lines that are already in the file when the service restarts.
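
Once a service with the ExecStart= line from above is running, the imported dpkg lines can be followed under their syslog identifier like any other log source:

journalctl -t dpkg2journald -f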

In the systemd service I use DynamicUser=true so it neither runs as root nor requires me to create an unprivileged user myself. For tail to still be able to read any logfile in the filesystem, I add AmbientCapabilities=CAP_DAC_READ_SEARCH, which allows exactly that. As always, I add every sandboxing option possible, even disabling network access.

The full systemd service, installed as {{ item.name }}2journald.service, then looks like this. Replace {{ item.name }} and {{ item.log }} with your own values.

[Unit]
Description={{ item.name }}2journald
After=network.target

[Service]
Type=exec
DynamicUser=true

ExecStart=/usr/bin/systemd-cat -t {{ item.name }}2journald /usr/bin/tail -F -n0 {{ item.log }}

Restart=always
RestartSec=10s

# filesystem access
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
PrivateDevices=true
ProtectControlGroups=true
ProtectKernelModules=true
ProtectKernelTunables=true

# network
PrivateNetwork=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6

# misc
NoNewPrivileges=true
PrivateUsers=true
RestrictRealtime=true
MemoryDenyWriteExecute=true
ProtectKernelLogs=true
LockPersonality=true
ProtectHostname=true
RemoveIPC=true
RestrictSUIDSGID=true

# capabilities
AmbientCapabilities=CAP_DAC_READ_SEARCH

[Install]
WantedBy=multi-user.target
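
With the placeholders filled in by hand (dpkg and /var/log/dpkg.log in this example), installing the unit works like for any other service, and systemd-analyze security gives a quick sanity check of the sandboxing options:

# /etc/systemd/system/dpkg2journald.service contains the rendered unit
systemctl daemon-reload
systemctl enable --now dpkg2journald.service
systemd-analyze security dpkg2journald.service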