November 15, 2018

New package - OpenBSD packages
python wrapper for the Mastodon API
Microsoft goes Gold for 2018! - Undeadly

Kenneth R. Westerback (krw@) writes to inform us:

Microsoft goes Gold for 2018!

The OpenBSD Foundation is happy to announce that Microsoft has increased its support level from Silver to Gold for 2018.

This is the fourth consecutive year that Microsoft has made a contribution to the OpenBSD Foundation and we are grateful for their continuing support.

Thank you, Ken, for sharing the good news about the OpenBSD Foundation with the community.

Another DragonFly laptop entry - DragonFly BSD Digest

There’s a section of the DragonFly website (a wiki) that records success with various laptops and DragonFly.  The latest addition: Lenovo IdeaPad Y500.

New package, softhsm2-2.5.0 - OpenBSD packages
software PKCS#11 cryptographic token

November 14, 2018

strict structs - Ted Unangst (tedu@)

Contrary to popular belief, C does have types. It even has type qualifiers. Unfortunately, the selection is somewhat limited and there are several implicit conversions that may lead to less than robust code. The good news is that with a little effort we can define our own types and enforce our own rules. I’ve forgotten where I first saw this, and don’t really have a good name for it.

Here’s a common bug which is fun to consider.

        memset(ptr, sizeof(*ptr), 0);

The problem is that integer types readily turn into other integer types. It may be possible to coerce a warning out of the compiler, but it’s rare. Ideally we would like a size type that’s completely incompatible with other integers.

If we look at code, a fairly common pattern emerges. There’s a size, it’s used to allocate memory, passed around to various functions, etc., but rarely manipulated or operated on as an integer.

        size_t size = 1024;
        char *ptr = malloc(size);
        memset(ptr, 0, size);
        snprintf(ptr, size, "hello");

Let’s make a new size type.

typedef struct {
        size_t s;
} size;
void *mallocS(size size);
void *memsetS(void *ptr, int val, size size);
int snprintfS(char *buf, size size, const char *fmt);

Now our example looks like this.

        size size = { 1024 };
        char *ptr = mallocS(size);
        memsetS(ptr, 0, size);
        snprintfS(ptr, size, "hello");

Practically identical. However, attempting to reintroduce our original bug will fail.

        memsetS(ptr, size, 0);

Compile that and we see an error.

error: passing 'size' to parameter of incompatible type 'int'
        memsetS(ptr, size, 0);

Size is no longer an integer.

A typical program may have a variety of integers that should not mix and match, except under special circumstances. Height, width, x position, y position, etc. Unfortunately, we often see functions like moveto(int, int, int, int) where it’s very easy to pass arguments in the wrong order. We can make them all different types to prevent this.

Another thing we may consider is adding custom qualifiers to types. C provides const to make a variable readonly. What we would like is a notnull qualifier to deal with all those pesky functions that return null. This is a fact of life, sometimes they simply don’t have a value to return, but we don’t want the null to accidentally flow into other parts of our program expecting valid data.

Fairly typical API.

thing *getthing(const char *name); /* may return NULL */
void printthing(thing *thing); /* must not be NULL */

Fairly typical code.

        thing *thing = getthing("it");

That’s a bug, and a common one. It’s just too easy to take the thing and toss it around.

Now imagine we have these types.

typedef struct maybething {
        thing *ptr;
} maybething;
typedef struct notnullthing {
        thing *ptr;
} notnullthing;

maybething getthingX(const char *name); /* may return NULL */
void printthingX(notnullthing thing); /* must not be NULL */

The compiler will enforce that the return of getthing does not immediately flow to printthing because they have incompatible types. The user must convert them, hopefully after checking for null.

        maybething maybe = getthingX("it");
        if (maybe.ptr) {
                notnullthing notnull = { maybe.ptr };

Hardly foolproof, since we can always force the conversion without a null check, but that requires a deliberate act of foolishness, not mere carelessness or forgetfulness.

With some magic macros and perhaps unwise cleverness, we can also clean it up a bit.

        maybething thing = getthingX("it");
        with (thing) {

This code only runs if thing exists, but removing the if-like with will cause a compile failure.

At the processor level, all of this indirection is completely eliminated. One-word structs get passed in registers, and the compiler will eliminate the redundant temporaries and conversions. Zero runtime overhead.

ifconfig(8): vlandev and vlan options removed - OpenBSD -current changes
The vlandev and vlan configuration options have been deprecated since 6.2 and have now been removed. Use parent and vnetid instead.

November 13, 2018

OpenBSD/arm64 on the NanoPi NEO2 - Frederic Cambus

I bought the NanoPi NEO2 solely for its form factor, and I haven't been disappointed. It's a cute little board (40*40mm), which is, to the best of my knowledge, the smallest device one can run OpenBSD on.

The CPU is a quad-core ARM Cortex-A53, which is quite capable: a GENERIC.MP kernel build takes 15 minutes. On the downside, the board only has 512MB of RAM.

A USB-to-TTL serial cable is required to connect to the board and perform the installation. The system doesn't have a supported miniroot, so the preparation steps detailed in the INSTALL.arm64 file have to be performed to get a working installation image.

The following packages need to be installed:

pkg_add dtb u-boot-aarch64

After writing the miniroot image to an SD card, the correct DTB should be copied:

mount /dev/sdXi /mnt
mkdir /mnt/allwinner
cp /usr/local/share/dtb/arm64/allwinner/sun50i-h5-nanopi-neo2.dtb /mnt/allwinner
umount /mnt

Lastly, the correct U-Boot image should be written:

dd if=/usr/local/share/u-boot/nanopi_neo2/u-boot-sunxi-with-spl.bin of=/dev/sdXc bs=1024 seek=8

After performing the installation process, the DTB should be copied again to the SD card before attempting to boot the system.

Here is the output of running file on executables:

ELF 64-bit LSB shared object, AArch64, version 1

And this is the result of the md5 -t benchmark:

MD5 time trial.  Processing 10000 10000-byte blocks...
Digest = 52e5f9c9e6f656f3e1800dfa5579d089
Time   = 1.070000 seconds
Speed  = 93457943.925234 bytes/second

For the record, LibreSSL speed benchmark results are available here.

System message buffer (dmesg output):

OpenBSD 6.4-current (GENERIC.MP) #262: Mon Nov 12 01:54:10 MST 2018
real mem  = 407707648 (388MB)
avail mem = 367030272 (350MB)
mainbus0 at root: FriendlyARM NanoPi NEO 2
cpu0 at mainbus0 mpidr 0: ARM Cortex-A53 r0p4
cpu0: 32KB 64b/line 2-way L1 VIPT I-cache, 32KB 64b/line 4-way L1 D-cache
cpu0: 512KB 64b/line 16-way L2 cache
efi0 at mainbus0: UEFI 2.7
efi0: Das U-Boot rev 0x0
sxiccmu0 at mainbus0
psci0 at mainbus0: PSCI 0.2
simplebus0 at mainbus0: "soc"
syscon0 at simplebus0: "syscon"
sxiccmu1 at simplebus0
sxipio0 at simplebus0: 94 pins
ampintc0 at simplebus0 nirq 224, ncpu 4 ipi: 0, 1: "interrupt-controller"
sxiccmu2 at simplebus0
sxipio1 at simplebus0: 12 pins
sximmc0 at simplebus0
sdmmc0 at sximmc0: 4-bit, sd high-speed, mmc high-speed, dma
ehci0 at simplebus0
usb0 at ehci0: USB revision 2.0
uhub0 at usb0 configuration 1 interface 0 "Generic EHCI root hub" rev 2.00/1.00 addr 1
ehci1 at simplebus0
usb1 at ehci1: USB revision 2.0
uhub1 at usb1 configuration 1 interface 0 "Generic EHCI root hub" rev 2.00/1.00 addr 1
dwxe0 at simplebus0: address 02:01:f7:f9:2f:67
rgephy0 at dwxe0 phy 7: RTL8169S/8110S/8211 PHY, rev. 5
com0 at simplebus0: ns16550, no working fifo
com0: console
sxirtc0 at simplebus0
gpio0 at sxipio0: 32 pins
gpio1 at sxipio0: 32 pins
gpio2 at sxipio0: 32 pins
gpio3 at sxipio0: 32 pins
gpio4 at sxipio0: 32 pins
gpio5 at sxipio0: 32 pins
gpio6 at sxipio0: 32 pins
gpio7 at sxipio1: 32 pins
agtimer0 at mainbus0: tick rate 24000 KHz
cpu1 at mainbus0 mpidr 1: ARM Cortex-A53 r0p4
cpu1: 32KB 64b/line 2-way L1 VIPT I-cache, 32KB 64b/line 4-way L1 D-cache
cpu1: 512KB 64b/line 16-way L2 cache
cpu2 at mainbus0 mpidr 2: ARM Cortex-A53 r0p4
cpu2: 32KB 64b/line 2-way L1 VIPT I-cache, 32KB 64b/line 4-way L1 D-cache
cpu2: 512KB 64b/line 16-way L2 cache
cpu3 at mainbus0 mpidr 3: ARM Cortex-A53 r0p4
cpu3: 32KB 64b/line 2-way L1 VIPT I-cache, 32KB 64b/line 4-way L1 D-cache
cpu3: 512KB 64b/line 16-way L2 cache
scsibus0 at sdmmc0: 2 targets, initiator 0
sd0 at scsibus0 targ 1 lun 0: <SD/MMC, SC64G, 0080> SCSI2 0/direct removable
sd0: 60906MB, 512 bytes/sector, 124735488 sectors
vscsi0 at root
scsibus1 at vscsi0: 256 targets
softraid0 at root
scsibus2 at softraid0: 256 targets
bootfile: sd0a:/bsd
boot device: sd0
root on sd0a (1fbfe51d132e41c0.a) swap on sd0b dump on sd0b
New package, pspp-1.2.0 - OpenBSD packages
program for statistical analysis of sampled data
Tor part 5: onioncat for IPv6 VPN over tor - Solène Rapenne (solene@)

This article is about a piece of software named onioncat, which exists as a package on most Unix and Linux systems. It creates an IPv6 VPN over Tor, letting you be part of a whole network through Tor, with no restriction on how you use the network over the VPN.

First, we need to install onioncat, on OpenBSD:

$ pkg_add onioncat

Run a Tor hidden service as explained in one of my previous articles, and get the hostname value. If you run multiple hidden services, just pick one hostname.

# cat /var/tor/ssh_hidden_service/hostname

Now that we have the hostname, we just need to run ocat.

# ocat g6adq2w15j1eakzr.onion

If everything works as expected, a tun interface will be created, with a fe80:: IPv6 address and a fd87:: address assigned to it.

Your system is now reachable through Tor via its IPv6 address starting with fd87::, and every protocol that runs over IP is supported. Instead of using the torsocks wrapper and the .onion hostname, you can use the IPv6 address of the service with any software.

Moving away from Emacs, 130 days after - Solène Rapenne (solene@)

It has been more than four months since I wrote my article about leaving Emacs. This article will quickly speak about my journey.

First, I successfully left Emacs. Long story short, I like Emacs and think it’s a great piece of software, but I’m not comfortable being dependent on it for everything I do. I chose to replace all my Emacs usage with other software (agenda, note taking, todo-list, IRC client, Jabber client, editor, etc.).

  • agenda is now replaced by when (port productivity/when), but I plan to replace it with calendar(1), as it’s in base and when doesn’t do much.
  • todo-list: I now use taskwarrior + a kanban board (using kanboard) for team work
  • notes: I wrote a small piece of software named “notes”, a wrapper for editing files and tracking the edits with git. It’s available at git://
  • irc: weechat (not better or worse than emacs circe)
  • jabber: profanity
  • editor: vim, ed or emacs, depending on what I do. Emacs is excellent for writing Lisp or Scheme code, while I prefer vim for most editing tasks. I now use ed for small edits.
  • mail: I wrote some kind of a wrapper on top of mblaze. I plan to share it someday.

I’m happy to have moved away from Emacs.

Fun tip #1: Apply a diff with ed - Solène Rapenne (solene@)

I am starting a new kind of article that I chose to name “fun facts”. These articles will be about one-liners which can have some kind of use, or that I find interesting from a technical point of view. While not useless, these commands may only be needed in very specific cases.

The first of its kind will explain how to use diff(1) to programmatically transform one file into another with ed(1), from the command line and without a patch file.

First, create a file, with a small content for the example:

$ printf "first line\nsecond line\nthird line\nfourth line with text\n" > file1
$ cp file1{,.orig}
$ printf "very first line\nsecond line\n third line\nfourth line\n" > file1

We will use the diff(1) -e flag on the two files.

$ diff -e file1 file1.orig
3,4c
third line
fourth line with text
.
1c
first line
.

The diff(1) output is a batch of ed(1) commands which will transform the first file into the second. This can be embedded in a script, as in the following example. We also append a w command at the end, to save the file after editing.

ed file1 <<EOF
3,4c
third line
fourth line with text
.
1c
first line
.
w
EOF

This is quite a convenient way to transform a file into another file without pushing the entire file around, and it can be used in a deployment script. It is more precise and less error prone than a sed command.

In the same way, we can use ed to alter a configuration file by writing the instructions ourselves, without using diff(1). The following script will change the first line containing “Port 22” into Port 2222 in /etc/ssh/sshd_config.

ed /etc/ssh/sshd_config <<EOF
/Port 22
c
Port 2222
.
w
EOF

The sed(1) equivalent would be:

sed -i'' 's/.*Port 22.*/Port 2222/' /etc/ssh/sshd_config

Both programs have their use, pros and cons. The most important is to use the right tool for the right job.

November 12, 2018

OpenBSD in Stereo with Linux VFIO - Joshua Stein (jcs@)

I use a Huawei Matebook X as my primary OpenBSD laptop and one aspect of its hardware support has always been lacking: audio never played out of the right-side speaker. The speaker did actually work, but only in Windows and only after the Realtek Dolby Atmos audio driver from Huawei was installed. Under OpenBSD and Linux, and even Windows with the default Intel sound driver, audio only ever played out of the left speaker.

Now, after some extensive reverse engineering and debugging with the help of VFIO on Linux, I finally have audio playing out of both speakers on OpenBSD.


The Linux kernel has functionality called VFIO which enables direct access to a physical device (like a PCI card) from userspace, usually passing it to an emulator like QEMU.

To my surprise, these days it seems to be used primarily by gamers who boot Linux, then use QEMU to run a game in Windows, using VFIO to pass the computer's GPU device through to Windows.

By using Linux and VFIO, I was able to boot Windows 10 inside of QEMU and pass my laptop's PCI audio device through to Windows, allowing the Realtek audio drivers to natively control the audio device. Combined with QEMU's tracing functionality, I was able to get a log of all PCI I/O between Windows and the PCI audio device.

Using VFIO

To use VFIO to pass-through a PCI device, it first needs to be stubbed out so the Linux kernel's default drivers don't attach to it. GRUB can be configured to instruct the kernel to ignore the PCI audio device (8086:9d71) and explicitly enable the Intel IOMMU driver by adding the following to /etc/default/grub and running update-grub:

GRUB_CMDLINE_LINUX_DEFAULT="text pci-stub.ids=8086:9d71 iommu=pt intel_iommu=on"

With the audio device stubbed out, a new VFIO device can be created from it:

sudo modprobe pci-stub
sudo modprobe vfio-pci

echo 0000:00:1f.3 | sudo tee /sys/bus/pci/devices/0000:00:1f.3/driver/unbind
echo 0x8086 0x9d71 | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id

Then the VFIO device (00:1f.3) can be passed to QEMU:

sudo qemu-img create -f qcow2 -b win10.img win10-tmp.img

sudo ../qemu/x86_64-softmmu/qemu-system-x86_64 \
    -M q35 -m 2G -cpu host,kvm=off \
    -enable-kvm \
    -device vfio-pci,host=00:1f.3,multifunction=on,x-no-mmap \
    -hda win10-tmp.img \
    -trace events=events.txt 2>&1 | tee debug-output

I was using my own build of QEMU for this, due to some custom logging I needed (more on that later), but the default QEMU package should work fine. The events.txt was a file of all VFIO events I wanted logged (which was all of them).

Since I was frequently killing QEMU and restarting it, Windows 10 wanted to go through its unexpected shutdown routine each time (and would sometimes just fail to boot again). To avoid this and to get a consistent set of logs each time, I used qemu-img to take a snapshot of a base image first, then boot QEMU with that snapshot. The snapshot just gets thrown away the next time qemu-img is run and Windows always starts from a consistent state.

QEMU will now log each VFIO event which gets saved to a debug-output file.

9645@1541992466.382461:vfio_pci_read_config  (0000:00:1f.3, @0x2e, len=0x2) 0x3200
9645@1541992466.395726:vfio_region_read  (0000:00:1f.3:region0+0xc, 2) = 0x0
9645@1541992466.395792:vfio_region_read  (0000:00:1f.3:region0+0xe, 2) = 0x1
9645@1541992466.396021:vfio_region_write  (0000:00:1f.3:region0+0xc, 0x0, 2)

With a full log of all PCI I/O activity from Windows, I compared it to the output from OpenBSD and tried to find the magic register writes that enabled the second speaker. After days of combing through the logs and annotating them by looking up hex values in the documentation, diffing runtime register values, and even brute-forcing it by mechanically duplicating all PCI I/O activity in the OpenBSD driver, nothing would activate the right speaker.

One strange thing that I noticed was if I booted Windows 10 in QEMU and it activated the speaker, then booted OpenBSD in QEMU without resetting the PCI device's power in-between (as a normal system reboot would do), both speakers worked in OpenBSD and the configuration that the HDA controller presented was different, even without any changes in OpenBSD.

A Primer on Intel HDA

Most modern computers with integrated sound chips use an Intel High Definition Audio (HDA) Controller device, with one or more codecs (like the Realtek ALC269) hanging off of it. These codecs do the actual audio processing and communicate with DACs and ADCs to send digital audio to the connected speakers, or read analog audio from a microphone and convert it to a digital input stream. In my Huawei Matebook X, this is done through a Realtek ALC298 codec.

On OpenBSD, these HDA controllers are supported by the azalia(4) driver, with all of the per-codec details in the lengthy azalia_codec.c file. This file has grown quite large with lots of codec- and machine-specific quirks to route things properly, toggle various GPIO pins, and unmute speakers that are for some reason muted by default.

azalia0 at pci0 dev 31 function 3 "Intel 200 Series HD Audio" rev 0x21: msi
azalia0: host: High Definition Audio rev. 1.0
azalia0: host: 9 output, 7 input, and 0 bidi streams
azalia0: found a codec at #0
azalia0: found a codec at #2
azalia_init_corb: CORB allocation succeeded.
azalia_init_corb: CORBWP=0; size=256
azalia_init_rirb: RIRB allocation succeeded.
azalia_init_rirb: RIRBRP=0, size=256
azalia0: codec[0] vid 0x10ec0298, subid 0x320019e5, rev. 1.3, HDA version 1.0
azalia_codec_init: There are 36 widgets in the audio function.
azalia0: codecs: Realtek ALC298, Intel/0x280b, using Realtek ALC298

The azalia driver talks to the HDA controller and sets up various buffers and then walks the list of codecs. Each codec supports a number of widget nodes which can be interconnected in various ways. Some of these nodes can be reconfigured on the fly to do things like turning a microphone port into a headphone port.

The newer Huawei Matebook X Pro released a few months ago is also plagued with this speaker problem, although it has four speakers and only two work by default. A fix is being proposed for the Linux kernel which just reconfigures those widget pins in the Intel HDA driver. Unfortunately no pin reconfiguration is enough to fix my Matebook X with its two speakers.

While reading more documentation on the HDA, I realized there was a lot more activity going on than I was able to see through the PCI tracing.

For speed and efficiency, HDA controllers use a DMA engine to transfer audio streams as well as the commands from the OS driver to the codecs. In the output above, the CORBWP=0; size=256 and RIRBRP=0, size=256 indicate the setup of the CORB (Command Output Ring Buffer) and RIRB (Response Input Ring Buffer) each with 256 entries. The HDA driver allocates a DMA address and then writes it to the two CORBLBASE and CORBUBASE registers, and again for the RIRB.

When the driver wants to send a command to a codec, such as CORB_GET_PARAMETER with a parameter of COP_VOLUME_KNOB_CAPABILITIES, it encodes the codec address, the node index, the command verb, and the parameter, and then writes that value to the CORB ring at the address it set up with the controller at initialization time (CORBLBASE/CORBUBASE) plus the offset of the ring index. Once the command is on the ring, it does a PCI write to the CORBWP register, advancing it by one. This lets the controller know a new command is queued, which it then acts on and writes the response value on the RIRB ring at the same position as the command (but at the RIRB's DMA address). It then generates an interrupt, telling the driver to read the new RIRBWP value and process the new results.

Since the actual command contents and responses are handled through DMA writes and reads, these important values weren't showing up in the VFIO PCI trace output that I had gathered. Time to hack QEMU.

Logging DMA Memory Values in QEMU

Since DMA activity wouldn't show up through QEMU's VFIO tracing and I obviously couldn't get Windows to dump these values like I could in OpenBSD, I could make QEMU recognize the PCI write to the CORBWP register as an indication that a command has just been written to the CORB ring.

My custom hack in QEMU adds some HDA awareness to remember the CORB and RIRB DMA addresses as they get programmed in the controller. Then any time a PCI write to the CORBWP register is done, QEMU fetches the new CORB command from DMA memory, decodes it into the codec address, node address, command, and parameter, and prints it out. When a PCI read of the RIRBWP register is requested, QEMU reads the response and prints the corresponding CORB command that it stored earlier.

With this hack in place, I now had a full log of all CORB commands and RIRB responses sent to and read from the codec:

9645@1541992466.588081:vfio_region_read  (0000:00:1f.3:region0+0x48, 2) = 0xdb
CORBWP advance to 220, last WP 219
CORB[220] = 0x21f0800 (caddr:0x0 nid:0x21 control:0xf08 param:0x0)
9645@1541992466.588109:vfio_region_write  (0000:00:1f.3:region0+0x48, 0xdc, 2)
9645@1541992466.588386:vfio_region_write  (0000:00:1f.3:region0+0x5d, 0x1, 1)
RIRBWP advance to 220, last WP 219
CORB caddr:0x0 nid:0x21 control:0xf08 param:0x0 response:0x82 (ex 0x0)
9645@1541992466.588431:vfio_region_read  (0000:00:1f.3:region0+0x58, 2) = 0xdc

An early version of this patch left me stumped for a few days because, even after submitting all of the same CORB commands in OpenBSD, the second speaker still didn't work. It wasn't until re-reading the HDA spec that I realized the Windows driver was submitting more than one command at a time, writing multiple CORB entries and writing a CORBWP value that was advanced by two. This required turning my CORB/RIRB reading into a for loop, reading each new command and response between the new CORBWP/RIRBWP value and the one previously seen.

Sure enough, the magic commands to enable the second speaker were sent in these periods where it submitted more than one command at a time.

Minimizing the Magic

The full log of VFIO PCI activity from the Windows driver was over 65,000 lines and contained 3,150 CORB commands, which is a lot to sort through. It took me a couple more days to reduce that down to a small subset that was actually required to activate the second speaker, and that could only be done through trial and error:

  • Boot OpenBSD with the full list of CORB commands in the azalia driver
  • Comment out a group of them
  • Compile kernel and install it, halt the QEMU guest
  • Suspend and wake the laptop, resetting PCI power to the audio device to reset the speaker/Dolby initialization and to ensure the previous run isn't influencing the current test (I'm guessing there is an easier way to reset PCI power than suspending the laptop, but oh well)
  • Start QEMU, boot OpenBSD with the new kernel
  • Play an MP3 with mpg123 which has alternating left- and right-channel audio and listen for both channels to play

This required a dozen or so iterations because sometimes I'd comment out too many commands and the right speaker would stop working. Other times the combination of commands would hang the controller and it wouldn't process any further commands. At one point the combination of commands actually flipped the channels around so the right channel audio was playing through the left speaker.

The Result

After about a week of this routine, I ended up with a list of 662 CORB commands that are needed to get the second speaker working. Based on the number of repeated-but-slightly-different values written with the 0x500 and 0x400 commands, I'm guessing this is some kind of training data and that this is doing the full Dolby/Atmos system initialization, not just turning on the second speaker, but I could be completely wrong.

In any case, the stereo sound from OpenBSD is wonderful now and I can finally stop downmixing everything to mono to play from the left speaker. In case you ever need to do this, sndiod can be run with -c 0:0 to reduce the channels to one.

Due to the massive size of the code needed for this quirk, I'm not sure if I'll be committing it upstream in OpenBSD or just saving it for my own tree. But at least now the hardware support chart for my Matebook is all yeses for the things I care about.

I've also updated the Linux bug report that I opened before venturing down this path, hoping one of the maintainers of that HDA code that works at Intel or Realtek knew of a solution I could just port to OpenBSD. I'm curious to see what they'll do with it.

Thanks to rjc for proofreading and feedback.

Web browsers on DragonFly - DragonFly BSD Digest

For better or worse, there are different browser options out there, especially for non-mainstream platforms.  You know what I mean.  DragonFly developer tuxillo has put together a helpful page listing the options and how to get them to build.

November 11, 2018

Goodness, Enumerated by Robots. Or, Handling Those Who Do Not Play Well With Greylisting - Peter Hansteen
SMTP email is not going away any time soon. If you run a mail service, when and to whom you present the code signifying a temporary local problem is well worth your attention.

SMTP email is everywhere and is used by everyone.

If you are a returning reader, there is a higher probability that you run a mail service yourself than in the general population.

This in turn means that you will be aware of one of the rather annoying oversights in the original and still-current specifications of the SMTP-based mail system: while it's straightforward to announce which systems are supposed to receive mail for a domain, specifying which hosts would be valid email senders was not part of the original specification at all.

Any functioning domain MUST have at least one MX (mail exchanger) record published via the domain name system, and registrars will generally not even let you register a domain unless you have set up somewhere to receive mail for the domain.

But email worked most of the time anyway, and while you would occasionally hear about valid mail not getting delivered, it was a rarer occurrence than you might think.

Then a few years along, the Internet grew out of the pure research arena and became commercial, and spam started happening. Even in the early days of spam it seems that a significant subset of the messages, possibly even the majority, was sent with faked sender addresses in domains not connected to the actual senders.

Over time people have tried a number of approaches to the problems involved in getting rid of unwanted commercial and/or malware carrying email. If you are interested in a deeper dive into the subject, you could jump over to my earlier piece Effective Spam and Malware Countermeasures - Network Noise Reduction Using Free Tools.

Two very different methods of reducing spam traffic were originally formulated at roughly the same time, and each method's adherents are still duking it out over which approach is the better one.

One method consists simply of implementing a strict interpretation of a requirement that was already formulated in the SMTP RFC at the time.

The other is a complicated extension of the SMTP-relevant data that is published via DNS, and full implementation would require reconfiguration of every SMTP email system in the world.

As you might have guessed, the first is what is commonly referred to as greylisting, where we point to the RFC's requirement that on encountering a temporary error, the sender MUST (RFC language does not get stronger than this) retry delivery at a later time and keep trying for a reasonable amount of time.
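
On the wire, a greylisting exchange looks something like this (hostnames and addresses invented for illustration). A compliant sender sees the 451, keeps the message queued, and retries later; most spam software never comes back:

```
C: MAIL FROM:<someone@example.net>
S: 250 Ok
C: RCPT TO:<user@example.org>
S: 451 Temporary failure, please try again later
```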

Spammers generally did not retry as per the RFC specifications, and even early greylisting adopters saw a huge drop in the volume of spam that actually made it to mailboxes.

On the other hand, end users would sometimes wonder why their messages were delayed, and some mail administrators did not take well to seeing the volume of data sitting in the mail spool directories grow measurably, if not usually uncontrollably, while successive retries after waiting were in progress.

In what could almost appear to be a separate, unconnected universe, other network engineers set out to fix the now glaringly obvious omission in the existing RFCs.

A way to announce valid senders was needed, and the specification that was to be known as the Sender Policy Framework (SPF for short) was offered to the world. SPF offered a way to specify which IP addresses valid mail from a domain were supposed to come from, and even included ways to specify how strictly the limitations it presented should be enforced at the receiving end.
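
For illustration, the published data is just a TXT record in the domain's zone; a hypothetical policy might read (names and addresses invented):

```
domain.tld.  IN  TXT  "v=spf1 mx ip4:192.0.2.0/24 include:_spf.example.net -all"
```

Here mx and the ip4 range name the permitted senders, include pulls in another domain's policy (which is why a single lookup can fan out into many), and -all asks receivers to reject mail from everywhere else.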

The downsides were that all mail handling would need to be upgraded with code that supported the specification, and as it turned out, traditional forwarding such as performed by common mailing list software would not easily be made compatible with SPF.

Then came the flame wars over both methods. You either remember them or should be able to imagine how they played out.

And while the flames grew less frequent and generally less fierce over time, mail volumes grew to the point where operators would have a large number of servers for outgoing mail, and while a site would honor the requirement to retry delivery, the retries would not be guaranteed to come from the same IP address as the original attempt.

It was becoming clear to greylisting practitioners that interpreting published SPF data as known good senders was the most workable way forward. Several of us had already started maintaining nospamd tables (see eg this slide and this), and using the output of

$ host -ttxt domain.tld

(sometimes many times over because some domains use include statements), we generally made do. I even made a habit of publishing my nospamd file.

As hinted in this slide, smtpctl (part of OpenSMTPD and in your OpenBSD base system) has, since OpenBSD 6.3, been able to retrieve the entire contents of the published SPF information for any domain you feed it.

Looking over my old nospamd file during the last week or so I found enough sedimentary artifacts there, including IP addresses for which there was no explanation and that lacked a reverse lookup, that I turned instead to deciphering which domains had been problematic and wrote a tiny script to generate a fresh nospamd on demand, based on fresh SPF lookups on those domains.

For those wary of clicking links to scripts, it reads like this:

domains=`cat thedomains.txt`
operator="Peter Hansteen <>"
generatedate=`date`
# output and local-additions file names are assumptions for this excerpt
outfile=nospamd
locals=nospamd.local

echo "##############################################################################################">$outfile;
echo "# This is the `hostname` nospamd generated from domains at $generatedate. ">>$outfile;
echo "# Any questions should be directed to $operator. ">>$outfile;
echo "##############################################################################################">>$outfile;
echo >>$outfile;

for dom in $domains; do
echo "processing $dom";
echo "# $dom starts #########">>$outfile;
echo >>$outfile;
echo $dom | doas smtpctl spf walk >>$outfile;
echo "# $dom ends ###########">>$outfile;
echo >>$outfile;
done

echo "##############################################################################################">>$outfile;
echo "# processing done at `date`.">>$outfile;
echo "##############################################################################################">>$outfile;

echo "adding local additions from $locals";
echo "# local additions below here ----" >>$outfile;
cat $locals >> $outfile;

If you have been in the habit of fetching my nospamd, you have been fetching the output of this script for the last day or so.

What it does is simply read a prepared list of domains, run them through smtpctl spf walk and slap the results in a file which you would then load into the pf configuration on your spamd machine. You can even tack on a few local additions that for whatever reason do not come naturally from the domains list.
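
For reference, loading the generated file on the spamd gateway follows the familiar pf table pattern. A minimal sketch (the file path and the egress interface group are assumptions; adapt to your own ruleset):

```
table <nospamd> persist file "/etc/mail/nospamd"
pass in on egress proto tcp from <nospamd> to any port smtp
```

Hosts in the table then bypass whatever redirection to spamd your ruleset performs, which is the whole point of the nospamd exercise.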

But I would actually recommend you do not fetch my generated data, and rather use this script or a close relative of it (it's a truly trivial script and you probably can create a better version) and your own list of domains to generate a nospamd tailored to your local environment.

The specific list of domains is derived from more than a decade of maintaining my setup and the specific requests for whitelisting I have received from my users or quick fixes to observed problems in that period. It is conceivable that some domains that were problematic in the past no longer are, and unless we actually live in the same area, some of the domains in my list are probably not relevant to your users. There is even the possibility that some of the larger operators publish different SPF information in specific parts of the world, so the answers I get may not even match yours in all cases.

So go ahead, script and generate! This is your chance to help the robots generate some goodness, for the benefit of your users.

In related news, a request from my new colleagues gave me an opportunity to update the sometimes-repeated OpenBSD and you presentation so it now has at least some information on OpenBSD 6.4. You could call the presentation a bunch of links in a thin wrapper of advocacy and you would not be very wrong.

If you have comments or questions on any of the issues raised in this article, please let me know, preferably via the (moderated) comments field, but I have also been known to respond to email and via various social media message services.

Update 2018-11-11: A few days after I had posted this article, an incident happened that showed the importance of keeping track of both goodness and badness for your services. This tweet is my reaction to a few quick glances at the mail server log:

A little later I'm clearly pondering what to do, including doing another detailed writeup.

Fortunately I had had some interaction with this operator earlier, so I knew roughly how to approach them. I wrote a couple of quick messages to their abuse contacts and made sure to include links to both my spamtrap resources and a fresh log excerpt that indicated clearly that someone or someones in their network was indeed progressing from top to bottom of the spamtraps list.

As the last tweet says, delivery attempts stopped after progressing to somewhere into the Cs. The moral might be that a list of spamtraps like the one I publish could be useful for other sites to filter their outgoing mail. Any activity involving the known-bad addresses would be a strong indication that somebody made a very unwise purchasing decision involving address lists.

Lazy Reading for 2018/11/10DragonFly BSD Digest

The movies link should keep you busy.

November 09, 2018

Hammer2 reminder: bulkfree makes some noiseDragonFly BSD Digest

For future edification: If you have HAMMER2 installed, the bulkfree operation will create console/dmesg activity even when nothing is wrong, to show operations are happening.

OpenSMTPD reporting updateGilles Chehade (gilles@)
The reporting mechanism was described briefly in my previous article, which covered both reporting and filters.
Let's focus a bit more on the reporting bits this time.
The format has improved further and has been extended to outgoing traffic reporting.


In the previous article, I described the events reporting mechanism that has been introduced in the development branch of OpenSMTPD.

To sum it up, you can now write an event processor as simple as a shell script reading its stdin in a loop:

$ cat /tmp/
#! /bin/sh

while read line; do
        echo "$line" >> /tmp/reporting.log
done

and configure your OpenSMTPD so it would report all incoming SMTP events:

$ grep report /etc/mail/smtpd.conf                                                                                                                                                                                   
proc reporting "/tmp/"
report smtp on reporting

which would then produce an events report log in /tmp/reporting.log containing entries similar to these:

report|smtp-in|protocol-server|1541271219|3189ac6874354895|220 ESMTP OpenSMTPD
report|smtp-in|protocol-client|1541271222|3189ac6874354895|helo localhost
report|smtp-in|protocol-server|1541271222|3189ac6874354895|250 Hello localhost [], pleased to meet you

Improvements to the report format

I made several improvements to the format described in the previous article.

The first improvement is that there is now a version embedded in each report. This lets event processors check whether they know how to parse an event report, and lets them easily support backward-compatible versions should we make changes to the format. Most importantly, it means you can just store these events somewhere, then have tools post-process them months later without ambiguity with regard to the format of entries, even if there were OpenSMTPD updates in between.
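
A sketch of what that version check can look like, in the same style as the shell processor above (redirecting to a log file as in the earlier example; the version dispatch is the point):

```shell
#!/bin/sh
# Pass through only report lines whose version we know how to parse;
# redirect stdout to a log file as in the earlier example.
while read line; do
        case "$line" in
        "report|1|"*) echo "$line" ;;   # version 1: keep
        *) ;;                           # unknown version or stray line: skip
        esac
done
```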

The second improvement is that the timestamp, which used to appear after the event type, has been moved in front of it. This doesn’t seem like much of an improvement, but it eases reading and allows me to simplify some of the code :-)

The third improvement comes from adding some new events and adding information to some existing events. For instance, OpenSMTPD reported these transaction events:


But reading from these, you could only obtain the session identifier; there was no way to find out the transaction identifier, how many envelopes were generated in the transaction, or even the size of the message.

To solve these issues, the format of the events above has been extended so it would contain the transaction identifier (aka msgid); a tx-envelope event was introduced to report the generation of a new envelope in the transaction, along with its envelope identifier (aka evpid); and finally the size of the DATA part is now reported on a tx-commit event.

Last but not least, the DATA part begins with a DATA command issued by the client and ends with a single . on a line by itself. Despite the single . being sent within the DATA phase, it is not really part of the DATA itself and must be considered as a commit request.

This doesn’t seem like much but the devil is in the details. Not reporting that commit request as a protocol-client command means that we go straight from DATA command to a tx-commit event, without allowing a filter to actually refuse the commit request. If we generate this commit request event, a filter may decide that it wants to reject it which will then produce a tx-rollback that was not possible before.

A pattern emerges: tx-* events should appear in between protocol-* events, otherwise they cannot be filtered.

Here is a sample curated event report log from my own server as of today:

$ cat /tmp/reporting.log     
report|1|1541750432|smtp-in|protocol-server|c73c0aff0dfb6250|220 ESMTP OpenSMTPD
report|1|1541750432|smtp-in|protocol-client|c73c0aff0dfb6250|EHLO localhost
report|1|1541750432|smtp-in|protocol-server|c73c0aff0dfb6250| Hello localhost [local], pleased to meet you
report|1|1541750432|smtp-in|protocol-server|c73c0aff0dfb6250|250-SIZE 36700160
report|1|1541750432|smtp-in|protocol-server|c73c0aff0dfb6250|250 HELP
report|1|1541750432|smtp-in|protocol-client|c73c0aff0dfb6250|MAIL FROM:<>
report|1|1541750432|smtp-in|protocol-server|c73c0aff0dfb6250|250 2.0.0: Ok
report|1|1541750432|smtp-in|protocol-client|c73c0aff0dfb6250|RCPT TO:<>
report|1|1541750432|smtp-in|protocol-server|c73c0aff0dfb6250|250 2.1.5 Destination address valid: Recipient ok
report|1|1541750432|smtp-in|protocol-server|c73c0aff0dfb6250|354 Enter mail, end with "." on a line by itself
report|1|1541750432|smtp-in|protocol-server|c73c0aff0dfb6250|250 2.0.0: f84306b3 Message accepted for delivery
report|1|1541750432|smtp-in|protocol-server|c73c0aff0dfb6250|221 2.0.0: Bye
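
One nice property of the versioned, pipe-delimited format is that ad hoc analysis tools stay tiny. As a sketch (field positions inferred from the sample above, not from any specification), here is a one-liner that tallies protocol-client commands per session from a report log on stdin:

```shell
#!/bin/sh
# Field layout assumed from the sample:
#   report|version|timestamp|subsystem|event|session-id|payload
# Count client commands per session for version-1 reports.
awk -F'|' '
$1 == "report" && $2 == "1" && $5 == "protocol-client" { cmds[$6]++ }
END { for (s in cmds) print s, cmds[s] }'
```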

Introducing smtp-out

Obviously my plan is to be able to report and create dashboards for ALL traffic, not just incoming.

I have worked on generating reports for smtp-out and I actually have something working in a branch, which I intend to commit next week.

The format is identical, with the sole difference that smtp-in is replaced with smtp-out. The event types are the same and the parameters are the same; you just need to get your head around the fact that protocol-client is your peer in smtp-in, whereas protocol-server is your peer in smtp-out:

report|1|1541750707|smtp-out|protocol-server|c73c0b206ad6c41c|220 ESMTP k132-v6si807155wma.16 - gsmtp
report|1|1541750707|smtp-out|protocol-server|c73c0b206ad6c41c| at your service, []
report|1|1541750707|smtp-out|protocol-server|c73c0b206ad6c41c|250-SIZE 157286400
report|1|1541750707|smtp-out|protocol-server|c73c0b206ad6c41c|250 SMTPUTF8
report|1|1541750707|smtp-out|protocol-server|c73c0b206ad6c41c|220 2.0.0 Ready to start TLS
report|1|1541750707|smtp-out|link-tls|c73c0b206ad6c41c|version=TLSv1.2, cipher=ECDHE-RSA-CHACHA20-POLY1305, bits=256
report|1|1541750708|smtp-out|protocol-server|c73c0b206ad6c41c| at your service, []
report|1|1541750708|smtp-out|protocol-server|c73c0b206ad6c41c|250-SIZE 157286400
report|1|1541750708|smtp-out|protocol-server|c73c0b206ad6c41c|250 SMTPUTF8
report|1|1541750708|smtp-out|protocol-client|c73c0b206ad6c41c|MAIL FROM:<>
report|1|1541750708|smtp-out|protocol-server|c73c0b206ad6c41c|250 2.1.0 OK k132-v6si807155wma.16 - gsmtp
report|1|1541750708|smtp-out|protocol-client|c73c0b206ad6c41c|RCPT TO:<>
report|1|1541750708|smtp-out|protocol-server|c73c0b206ad6c41c|250 2.1.5 OK k132-v6si807155wma.16 - gsmtp
report|1|1541750708|smtp-out|protocol-server|c73c0b206ad6c41c|354 Go ahead k132-v6si807155wma.16 - gsmtp
report|1|1541750708|smtp-out|protocol-server|c73c0b206ad6c41c|250 2.0.0 OK 1541750708 k132-v6si807155wma.16 - gsmtp
report|1|1541750718|smtp-out|protocol-server|c73c0b206ad6c41c|221 2.0.0 closing connection k132-v6si807155wma.16 - gsmtp

Because not everyone needs reporting and not everyone needs both incoming and outgoing reporting, I have added the smtp-in and smtp-out keywords to the grammar so that you can:

$ grep report /etc/mail/smtpd.conf                                                                                                                                                                                   
proc reporting "/tmp/"
report smtp-in on reporting
report smtp-out on reporting

I’ll probably make report smtp on a shortcut for both smtp-in and smtp-out.

There is still work to be done on the smtp-out path, because the SMTP engine there is more complex than for the smtp-in path. For instance, it is currently not possible to have any of the transaction events generated between the protocol-client and protocol-server events, due to how the state machine is written. This is not as big a deal as for smtp-in, since smtp-out isn’t filtered and the ordering issues are less annoying, but to be really clean and consistent, the smtp-in and smtp-out reports should be parallel in terms of event order: I should be able to look at the smtp-out reports from my laptop and the smtp-in reports from my server and see the events appear in the same order.

Finally, there is also an rDNS lookup that needs to be added so the smtp-out report is identical to smtp-in, and then we should be fine.

What’s so good about this?

Reporting is not JUST about being able to write dashboards, and it is not just about being able to generate state for filters either.

Generating event report logs that can be parsed by external tools opens the way for many side applications: tools to replay sessions when tracking issues, tools to analyze the behavior of peers and feed back into pf or OpenSMTPD tables, and, more interestingly for people who will develop filters… it brings the ability to write and test a filter without a running OpenSMTPD instance, piping the event log directly into the filter.
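
For instance, replaying one recorded session into a filter under test can be a one-liner. A sketch (the script name and the downstream filter are hypothetical; the session-id field position is taken from the samples above):

```shell
#!/bin/sh
# extract-session: print all events belonging to one session id
# (first argument) from a version-1 report log read on stdin.
awk -F'|' -v sid="$1" '$1 == "report" && $6 == sid'
```

Something like ./extract-session c73c0aff0dfb6250 < reporting.log | ./myfilter would then exercise the filter with real traffic, no running daemon required.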

To be very honest, I’m personally more excited by this new feature than the filters feature which might be more visible but would be far less powerful without the event logs.

What next?

More changes will happen to the format of entries in the next few weeks and months; this is a moving target, as I wrote in the previous article.

Builtin filters already require some of these lines to provide more information, and this is being worked on.

My next focus is the filtering of the DATA phase, which is the requirement for us to provide support for DKIM and antispam stuff without the need for proxies and re-enqueuing. Work has already started, but I will probably not commit any code related to this before the end of November.