2019-11-11

U2F USB key (Yubikey) for 2-factor authentication and Linux authentication

I just bought a pair of Yubikey U2F (Universal 2-Factor) devices (the 5 NFC model, because of the claims that it would work with iPhones), mostly because I got tired of pulling out my phone, finding the authenticator app, searching for the entry for the appropriate website, and then typing in the number.

I'll get to the iPhone stuff at the end.

But first, using the Yubikey for the second factor works for only a few websites. Also, it depends on your web browser: I tested Chrome (on Linux, macOS, and Chromebook), and Firefox (on Linux and macOS). Chrome and Firefox can deal with reading a U2F key via USB just fine.

Yubico has clear instructions for how to set the keys up: https://www.yubico.com/setup

Among the sites that accept U2F hardware keys are Facebook, Google, GitHub, GitLab, Dropbox, and Twitter (though Twitter does not support multiple U2F keys, which sucks if you lose a key). You browse to the site as usual, type in your password, and it prompts you to plug in your U2F key and tap the flashing gold contact, and you're in.

For using the Yubikey as a U2F device in Linux, to authenticate for logging in, unlocking the screensaver, and sudo, you will have to install Yubico's U2F PAM module (https://github.com/Yubico/yubico-pam); more detailed, developer-oriented documentation is at https://developers.yubico.com/yubico-pam/. The PAM module works fine on Ubuntu 19.10 (Eoan Ermine).

And it works great: plug the Yubikey in first, type in your password and hit Enter, the key starts flashing, touch the flashing bit, and you are in.
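
As a sketch of what the PAM side can look like: this assumes the pam_u2f module and its pamu2fcfg registration tool (from Yubico's pam-u2f project); adjust the module name and the PAM service file to whatever your distribution actually packages.

    # Register the key for the current user (default authfile location):
    mkdir -p ~/.config/Yubico
    pamu2fcfg > ~/.config/Yubico/u2f_keys

    # Then require the key for, say, sudo by adding a line like this to
    # /etc/pam.d/sudo ("cue" makes it prompt you to touch the key):
    auth    required    pam_u2f.so cue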

On the downside, I would not use this on a server where you need to do management remotely, since you would not be able to plug in a U2F key on an SSH connection.

As for using NFC and iOS: it does not work the way I expected, nor the way the Yubico website led me to expect. If you tap the Yubikey to the iPhone, it pops up an alert; tapping that opens Safari on a "validation" web page hosted at yubico.com.

The websites which work within Chrome and Firefox on a computer (Google, GitHub, etc) do not seem to have a way to read the Yubikey via NFC on iPhone. There is a Lightning + USB-C key (the 5 Ci) but it's expensive ($70 ea.) and I do not know for sure if it will work, since the Google and GitHub mobile websites viewed on iPhone and Android do not even present the option for using U2F keys.

So, at this point, I feel I should have just bought the cheaper non-NFC model, and I would have ended up in the same place.

UPDATE 1 If you use KeePassXC for storing passwords, it can be configured to require a YubiKey. This uses the "challenge-response" feature, which has to be manually set up using the YubiKey Personalization Tool (also available at GitHub). Yubico has a video walkthrough here: https://www.yubico.com/products/services-software/personalization-tools/challenge-response/
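
If you would rather skip the GUI, the same slot setup can be done from the command line; a sketch, assuming the ykpersonalize utility from the yubikey-personalization package, programming slot 2 for HMAC-SHA1 challenge-response:

    # Program slot 2 of the YubiKey for HMAC-SHA1 challenge-response
    # (slot 1 normally holds the factory OTP credential, so leave it alone):
    ykpersonalize -2 -ochal-resp -ochal-hmac -ohmac-lt64 -oserial-api-visible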

2019-10-28

OpenLDAP annoyance

The OpenLDAP 2.4 administrator's manual is missing a section:

https://www.openldap.org/devel/admin/monitoringslapd.html

18.1. Monitor configuration via cn=config(5)
This section has yet to be written.

2019-10-23

Migrating LDAP server to a different machine and changing to OLC and from bdb to mdb (lmdb)

At our site, we have LDAP (openldap 2.4) running on one server. It uses the old slapd.conf configuration, and the Berkeley DB (bdb) backend.

As part of planning for the future, I want to move this LDAP server to a different machine. I wanted to also migrate to using on-line configuration (OLC), where the static slapd.conf file is replaced with the cn=config online LDAP "directory". This allows configuration changes to be made at runtime without restarting slapd.
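
As an example of what OLC buys you: a global setting such as the log level can be changed on a running server with ldapmodify. A minimal sketch, assuming you are root on the LDAP server and slapd is listening on the usual ldapi:/// socket with EXTERNAL SASL access to cn=config:

    # loglevel.ldif
    dn: cn=config
    changetype: modify
    replace: olcLogLevel
    olcLogLevel: stats

    # apply it to the running server:
    ldapmodify -Y EXTERNAL -H ldapi:/// -f loglevel.ldif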

I also wanted to change from using the Berkeley DB backend to the Lightning Memory-Mapped DB (LMDB; known as just "mdb" in the configs). LMDB is what OpenLDAP recommends, as it is quick (the database is memory-mapped) and easier to manage (fewer tuning options, no db files to mess with). From here, I will refer to this as "mdb" per the slapd.conf line.

After doing the migration once, leaving the backend as bdb, I found out it was easier to do all three things at once: migrate to a different server, convert from slapd.conf to OLC, and change backend to mdb.

This is a multi-stage process, but nothing too strenuous.
  1. Dump the directory data to an LDIF:  slapcat -n 1 -l n1.ldif
  2. Copy n1.ldif to new machine
  3. Copy slapd.conf to new machine
  4. Edit new slapd.conf on new machine: change the line "database bdb" to "database mdb" (a sketch of the resulting stanza follows this list)
    1. Remove any bdb-specific options: idletimeout, cachesize
  5. Import n1.ldif: slapadd -f /etc/openldap/slapd.conf -l n1.ldif
  6. Convert slapd.conf to OLC: slaptest -f slapd.conf -F slapd.d ; chown -R ldap:ldap slapd.d
  7. Move slapd.conf out of the way: mv /etc/openldap/slapd.conf /etc/openldap/slapd.conf.old
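
For reference, the edited database section of slapd.conf ends up looking roughly like this (the suffix, rootdn, rootpw, and directory are placeholders for your site's values; maxsize, in bytes, is the one mdb-specific option worth adding):

    database    mdb
    maxsize     1073741824
    suffix      "dc=example,dc=com"
    rootdn      "cn=admin,dc=example,dc=com"
    rootpw      {SSHA}xxxxxxxxxxxxxxxx
    directory   /var/lib/ldap
    index       objectClass eq
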
Other complications:
  • You will probably need to generate a new SSL certificate for the new server
  • That may mean signing the new cert with your own (or established) CA
    • Or, you can set all your client nodes to not require the cert: in /etc/openldap/ldap.conf add “TLS_REQCERT never”. Fix up the /etc/sssd/sssd.conf file similarly: add ldap_tls_reqcert = never
  • Fix up your /etc/sssd/sssd.conf
Note that it is pretty easy to back things out and start from scratch. To restore the new server to a "blank slate" condition, delete everything in /etc/openldap/slapd.d/

     service slapd stop
     cd /etc/openldap/slapd.d
     rm -rf *

CAUTION: This process seems to create the n0 (config) database with an olcRootDN of “cn=config” where it should be “cn=admin,dc=example,dc=com” (or whatever your LDAP rootDN should be; for Bright-configured clusters, that would be “cn=root,dc=cm,dc=cluster”). That is, you need to have:

dn: olcDatabase={0}config,cn=config
olcRootDN: cn=root,dc=cm,dc=cluster

but the generated olcRootDN is cn=config instead.

To fix it, I dumped n0 and n1, deleted /etc/openldap/slapd.d, and “restored” from the dumped n0 and n1. Basically, emulating a restore from backup.

  • Dump {0} to n0.ldif: slapcat -n 0 -l n0.ldif
  • Shut down slapd
  • Modify n0.ldif to have the needed olcRootDN (as above)
  • Move away the old /etc/openldap/slapd.d/ directory: mv /etc/openldap/slapd.d /etc/openldap/slapd.d.BAK
  • Create a new slapd.d directory: mkdir /etc/openldap/slapd.d 
  • Add the dumped n0.ldif as the new config: slapadd -n 0 -F /etc/openldap/slapd.d -l n0.ldif
  • Fix permissions: chown -R ldap:ldap /etc/openldap/slapd.d ; chmod 700 /etc/openldap/slapd.d
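
To double-check that the fix took, dump the config database again and look at the olcRootDN attribute:

    slapcat -n 0 -F /etc/openldap/slapd.d | grep olcRootDN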


2019-10-08

Still more on SSSD in Bright Cluster Manager - cache file

It has been a few years since I got SSSD to work in Bright Cluster Manager 6, and I just figured out one little thing that has been an annoyance ever since: a spurious group has been hanging around, with the same GID as an existing group but a different group name.

Since Bright CM 6 did not handle SSSD out of the box, it also did not handle the SSSD cache file. More accurately, it did not ignore the cache file in the software image, so the grabimage command would pull it back to the provisioning server along with the rest of the image, and it would then be propagated to the nodes in the category.

The fix is simple: add /var/lib/sss/db/* to the various exclude list settings in the category.
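
For example (a sketch, assuming the excludelistupdate, excludelistgrab, and excludelistgrabnew category properties in cmsh; check your Bright version for the exact names), the entry to add to each exclude list is just an rsync-style exclude:

    - /var/lib/sss/db/*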

To reset the cache:
    service sssd stop
    /bin/rm -f /var/lib/sss/db/cache_default.ldb
    service sssd start

I did try "sss_cache -E" which is supposed to clear the cache, but found that it did not work as I expected: the spurious group still appeared with "getent group".

2019-05-15

New Intel speculative execution vulnerability: "Microarchitectural Data Sampling" (MDS)

Another day, another hardware bug that has security implications. Like the recent Meltdown and Spectre bugs, this new bug called Microarchitectural Data Sampling (MDS) leaks data. Ars Technica has a nice write-up: "MDS attacks perform speculation based on a stale value from one of these [CPU] buffers."

Red Hat has a more technical summary, as well as more detailed video explainers.

2019-05-08

Cray and AMD will build 1.5 exaFLOPS supercomputer for DOE

From Ars Technica: AMD and Cray have announced that they're building "Frontier," a new supercomputer for the Department of Energy at Oak Ridge National Laboratory. The goal is to deliver a system that can perform 1.5 exaflops: 1.5×10^18 floating point operations per second.

The current (Nov 2018) Top 500 leader, Summit at ORNL, runs at 0.2 exaflops.

2019-04-18

Notes on building HDF5

HDF5 is a suite of data-centered technologies: structures, file formats, APIs, and applications.

There are two ways to build HDF5: using the traditional "configure and make", or using CMake.
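
A sketch of both approaches (paths and options are illustrative):

    # Traditional configure-and-make:
    ./configure --prefix=/usr/local/hdf5 --enable-cxx --enable-fortran
    make
    make check
    make install

    # CMake, as an out-of-source build:
    mkdir build && cd build
    cmake -DCMAKE_INSTALL_PREFIX=/usr/local/hdf5 \
          -DHDF5_BUILD_CPP_LIB=ON -DHDF5_BUILD_FORTRAN=ON \
          -DBUILD_TESTING=ON ..
    make
    ctest
    make install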

Using the configure-make method, a few of the tests may fail if run on an NFS filesystem. These tests are use_append_chunk and use_append_mchunks. The test programs first create a file (successfully), and then try to open it for reading, which is where they fail. The error output looks like:

    157778: continue as the writer process
    dataset rank 3, dimensions 0 x 256 x 256
    157778: child process exited with non-zero code (1)
    Error(s) encountered
    HDF5-DIAG: Error detected in HDF5 (1.10.2) thread 0:
      #000: H5F.c line 511 in H5Fopen(): unable to open file
        major: File accessibilty
        minor: Unable to open file
      #001: H5Fint.c line 1604 in H5F_open(): unable to read superblock
        major: File accessibilty
        minor: Read failed
      #002: H5Fsuper.c line 630 in H5F__super_read(): truncated file: eof = 479232, sblock->base_addr = 0, stored_eof = 33559007
        major: File accessibilty
        minor: File has been truncated
    H5Fopen failed
    read_uc_file encountered error


The "Error detected in … thread 0" first led me to think that it was a threading issue. So, I re-configured with thread-safety on, which meant that C++ and Fortran APIs were not built, nor the high-level library. The tests still failed.

However, running the tests (with original config, i.e. without thread-safety but with C++, Fortran, and high-level library) on a local disk resulted in success.

Using CMake to build, all tests pass, even when run on an NFS volume.

UPDATE The fact that some tests fail on NFS mounts is documented on the HDF5 downloads page: "Please be aware! On UNIX platforms the HDF5 tests must be run on a local file system or a parallel file system running GPFS or Lustre in order for the SWMR tests to complete properly."