Scripting FTP in 2021
16 January 2021

Last night I put together a little script to automate backing up a Korg Kronos to an OmniOS zone.

I feel this is likely a unique endeavour in the history of computing, and should be recorded.

The Kronos has an internal SSD which stores samples, sequences, and programs for various synthesis engines. Up to now I’ve been backing it up to a USB stick. But with the acquisition of a USB-to-ethernet adapter (an old Apple one – the fancy new one I bought first didn’t work) I had the chance to do something better.

I configured the Kronos with a static IP address, and put it in our house’s internal DNS. Then:

$ nmap kronos

Starting Nmap 7.60 ( https://nmap.org ) at 2021-01-16 16:37 GMT
Nmap scan report for kronos (
Host is up (0.0023s latency).
rDNS record for kronos.localnet
Not shown: 999 closed ports
21/tcp open  ftp

No SSH, no SFTP. No HTTP or SMB or NFS. We’re going old-skool. (And being grateful it’s not TFTP.)

My first thought was to write a Ruby script to do it with Net::FTP, but that was dropped from the standard library in Ruby 3.0.0. And honestly, when you’re doing backups, the first requirement is that they actually work, so I thought it would be better to use something proven and extant.

I chose LFTP which was in the OmniOS pkgsrc repo. (This meant rebuilding my backup zone as pkgsrc brand rather than lipkg, but that was a one-word change in a config file, and two minutes waiting for the zone to rebuild and re-Puppet.)
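For the curious, the brand switch boils down to one word in the zone’s definition. A sketch of the idea (the zone name is taken from the monitoring query below; exact mechanics of the rebuild are handled by my config management, not shown):

```shell
# Illustrative only: an OmniOS zone's brand lives in its zonecfg
# definition. Swapping lipkg for pkgsrc is the one-word change;
# in practice the zone was destroyed and recreated with the new brand.
zonecfg -z cube-backup "set brand=pkgsrc"
```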

With LFTP installed, I wrote this script:


KRONOS_HOSTNAME=kronos
KRONOS_PASSWORD=<%= @kronos_backup_password %>
BACKUP_DIR=/var/backup/kronos  # the ZFS dataset; path illustrative

# It's OK if the Kronos isn't on. It probably won't be.
ping $KRONOS_HOSTNAME 1 >/dev/null 2>&1 || exit 0

cd $BACKUP_DIR || exit 1

# FTP user name is illustrative
/opt/local/bin/lftp -u ftp,$KRONOS_PASSWORD $KRONOS_HOSTNAME <<-EOSCRIPT
  mirror -c -e /SSD1
  quit
EOSCRIPT

From the top: BACKUP_DIR is a ZFS dataset, automatically snapshotted every day. So, I’ll have backups going way back.
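Daily snapshotting of a ZFS dataset can be as little as a crontab line. This is only a sketch of one way to do it; the dataset name and schedule are made up, and I’m not showing the real mechanism:

```shell
# Hypothetical: snapshot the backup dataset every night at 03:23,
# named by date, e.g. rpool/backup/kronos@2021-01-16.
# (In a crontab, % must be backslash-escaped.)
23 3 * * * /usr/sbin/zfs snapshot rpool/backup/kronos@$(date +\%F)
```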

KRONOS_PASSWORD is being filled in by Puppet, and stored as a secret.

You can see that LFTP dumps its output to stdout. That output, it turns out, is a list of files which are copied during the mirror operation. I had Puppet make a cron job which captures that output on each invocation, then popped another cron job in after it. (Lines broken for clarity.)

# Puppet Name: backup_kronos
37 * * * * /opt/site/bin/backup_kronos >/var/log/cron_jobs/kronos_backup.log 2>&1
# Puppet Name: report_files_to_wavefront
52 * * * * /opt/local/bin/wf write point -E wavefront.localnet 'backup.kronos.files_transferred' \
  $(cat /var/log/cron_jobs/kronos_backup.log | wc -l)
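To make the counting concrete, here’s a minimal sketch with a made-up log. The mirror output is one line per copied file (exact wording of LFTP’s output not reproduced here), so the line count is the file count:

```shell
# A made-up sample of the captured backup log: one line per file
# that the mirror operation copied.
log=/tmp/kronos_backup.log
printf 'PRELOAD.PCG\nSONGS.SNG\nSAMPLES/PIANO.KSC\n' > "$log"

# The second cron job's metric value is simply the line count.
# (tr strips the leading padding some wc implementations emit.)
files=$(wc -l < "$log" | tr -d ' ')
echo "$files"    # 3
```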

The second job counts the number of downloaded files in the last mirror operation, and uses my Wavefront CLI to record said number in Wavefront. Pretty pointless, but kind of nice.

(It doesn’t look very good as a line chart, but at the moment Wavefront can’t share stacked column charts. The data is much clearer on one of those.)

I have a mechanism for monitoring cron jobs, and the data it produces is already being used in a “possible failed backup” alert.

ts("cron.job", zone="cube-backup" and not exit_code=0)

This tells me if a backup fails, but absence of failure does not guarantee success. So, I also have

mcount(2h, ts("cron.job", zone="cube-backup" and cmd="/bin/ksh /opt/local/bin/lftp")) < 1

which tells me if the backup script hasn’t even attempted to mirror the Kronos’s SSD in the last couple of hours.

I periodically run sysex dumps on my other synths to the Kronos, so everything from everywhere eventually ends up on disk.