2023-05-26

YubiKey U2F on Ubuntu

A basic walk-through of setting up U2F with a YubiKey on Ubuntu 23.04 (it should work on other recent versions as well). This follows the official documentation closely, removing anything not necessary for my particular setup.

N.B. U2F is not the same as challenge-response, which is a different multifactor method. YubiKeys support multiple protocols, U2F and challenge-response being two of them.

Preliminaries:

  • Ubuntu 23.04
  • YubiKey
    • I used the YubiKey 5 series: 5 NFC, 5C, and 5Ci. Where necessary, I used an adapter to plug a USB-C key into a standard USB-A port.
Steps:
  • Associate YubiKey U2F with your account (the commands for this are sketched just after this list)
    • This creates a line of text in a file containing your username and the second-factor key string
    • Move the U2F file to a secure location readable only by root
  • Create PAM configs to require U2F for certain authentication operations, e.g. login, sudo
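
The association step itself is done with pamu2fcfg, which ships with the pam-u2f module (package libpam-u2f on Ubuntu). A rough sketch of the commands I mean, worth double-checking against the pamu2fcfg man page; the /etc/yubico path matches the authfile used in the PAM configs below:

pamu2fcfg > ~/u2f_keys         # register the inserted YubiKey; touch it when it flashes
pamu2fcfg -n >> ~/u2f_keys     # repeat with -n to append each additional (backup) key
sudo mkdir -p /etc/yubico
sudo mv ~/u2f_keys /etc/yubico/u2f_keys
sudo chown root:root /etc/yubico/u2f_keys
sudo chmod 600 /etc/yubico/u2f_keys     # readable only by root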

Create two PAM configs. Having these lets us include them rather than copy and paste the same config lines into multiple other PAM configs in /etc/pam.d.

In these configs, we add the “cue” and “interactive” options which will prompt the user to insert the YubiKey and to touch it.

/etc/pam.d/u2f-required will be the configuration to require the YubiKey:

auth required pam_u2f.so authfile=/etc/yubico/u2f_keys cue interactive

/etc/pam.d/u2f-sufficient will be the configuration which allows using only the YubiKey without a password:

auth sufficient pam_u2f.so authfile=/etc/yubico/u2f_keys cue interactive

For the initial setup, also add the following to the "auth" lines in the above config files:

debug debug_file=/var/log/pam_u2f.log
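
With those options added, the "auth" line in /etc/pam.d/u2f-required, for example, becomes:

auth required pam_u2f.so authfile=/etc/yubico/u2f_keys cue interactive debug debug_file=/var/log/pam_u2f.log

(and likewise with "sufficient" in /etc/pam.d/u2f-sufficient).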

Then, create an empty debug log file to start: 

sudo touch /var/log/pam_u2f.log

CAUTION: It is best to have a root shell active, in case something goes awry and you can no longer sudo:

normaluser$ sudo bash
#

DO NOT exit this terminal until you are sure at least sudo works. 

Basic idea: in each authentication scenario (i.e. PAM config file) where you want U2F, add the line

@include u2f-required

after the line 

@include common-auth

E.g. to require U2F for sudo, modify these files (a sketch of the result follows this list):
  • /etc/pam.d/sudo
  • /etc/pam.d/sudo-i
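
Putting the pieces together, the relevant part of /etc/pam.d/sudo would look roughly like this (the @include lines other than u2f-required come from the stock Ubuntu file and may differ on your release):

@include common-auth
@include u2f-required
@include common-account
@include common-session-noninteractive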
These are the PAM configs I updated in /etc/pam.d:
  • gdm-password -- prompts for YubiKey at GUI login screen
  • login -- prompts for YubiKey at console login
  • polkit-1 -- prompts for YubiKey when running GUI apps requiring sudo, e.g. synaptic
  • su -- prompts for YubiKey for su
  • sudo -- prompts for YubiKey for sudo
  • sudo-i -- prompts for YubiKey for sudo -i
The first one to try should be sudo since it is easy to test. Make the modification, then open a new terminal tab/window, and run a simple sudo command, e.g. "sudo ls -l /tmp". It should prompt you to insert the device, and then to touch it:

normaluser$ sudo ls -l /tmp
[sudo] password for normaluser: 
Insert your U2F device, then press ENTER.
Please touch the device. (The YubiKey should start flashing.)
total xx
[listing of files here]

If that did not work, examine the debug log /var/log/pam_u2f.log. Make any adjustments, close that sudo terminal tab/window, and launch a new one.
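
You can also watch the log live from the root shell you kept open, e.g.:

tail -f /var/log/pam_u2f.log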

Once you are satisfied that everything works, you can remove the “debug debug_file=/var/log/pam_u2f.log” options from /etc/pam.d/u2f-required and /etc/pam.d/u2f-sufficient.

Minor annoyance: the GUI popup dialog for sudo authentication won’t accept just ENTER when it says “Insert your U2F device, then press ENTER”: you have to type in at least a SPACE for it to register that you have acknowledged the prompt, and are ready to touch the YubiKey.

GPUs in CFD

In a chat recently, I heard that computational fluid dynamics (CFD) can’t take advantage of GPUs. That seemed a bit doubtful to me, so I looked it up. It turns out there has been recent work showing that GPUs can greatly accelerate CFD workloads.

This press release on OpenACC’s website talks about how a private company (AeroDynamic Solutions, Inc. (ADSCFD)) used OpenACC to give their proprietary CFD solver Code LEO GPU capabilities, with very good speedup.

By using OpenACC to GPU-accelerate their commercial flow solver, ADSCFD achieved significant value. They realized dramatically improved performance across multiple use cases with speed-ups ranging from 20 to 300 times, reductions in cost to solution of up to 70%, and access to analyses that were once deemed infeasible to instead being achieved within a typical design cycle.

Similar blog posts from Nvidia and ANSYS+Nvidia last year also show significant speedups (between 12x and 33x) and significant power consumption savings, as well.

Nvidia’s blog post shows results from a beta version of ANSYS Fluent and from Simcenter STAR-CCM+.

Figure 2 shows the performance of the first release of Simcenter STAR-CCM+ 2022.1 against commonly available CPU-only servers. For the tested benchmark, an NVIDIA GPU-equipped server delivers results almost 20x faster than over 100 cores of CPU.

For the Ansys Fluent 2022 beta1 results, the CPU-only servers (Intel Xeon, AMD Rome, and AMD Milan) managed speedups of only around 1.1x over the baseline, while the NVIDIA A100 PCIe 80GB delivered speedups from 5.2x (one GPU) to an impressive 33x (eight GPUs).

ANSYS’s blog post covers the same result as Nvidia, showing 33x speedup using 8 A100 GPUs. They also do a cost comparison of equal-speed clusters, one using GPUs and the other purely CPUs:

1 NVIDIA A100 GPU ≈ 272 Intel® Xeon® Gold 6242 Cores

Comparing the older V100 GPUs with Intel® Xeon® Gold 6242, the 6x V100 GPU cluster would cost $71,250 while the equivalent CPU-only cluster would cost $500,000, i.e. about one seventh the price.

2023-05-06

Fireship YouTube: Serverless was a big mistake... says Amazon

“Amazon Prime Video released an article explaining how they saved 90% on cloud computing costs by switching from microservices to a monolith. Let's examine the details of their software architecture and find out why serverless was a bad choice.”


2023-04-22

Industry Out of Phase With Supercomputers (IEEE Spectrum)

An article in IEEE Spectrum covers a recent report by the National Nuclear Security Administration (NNSA): 

NNSA has developed massive and sophisticated codes that run on supercomputers to verify the continued security and performance of nuclear weapons designed decades ago. Keeping them up to date requires new generations of supercomputers that can run more complex models faster than the months required on today’s machines. But industry, which has shelled out big bucks for state-of-the-art fabs, is targeting big, profitable markets like cloud computing.

Read the article here: https://spectrum.ieee.org/supercomputing 

2023-02-11

ChatGPT plays chess against Stockfish

This is not the usual subject matter for this blog, but ChatGPT has been in the news and has even seen some trials in research applications, so I think it’s worth watching. Also, LOL.

2023-02-09

RPM generation error “Missing build-id”

I came across this error while trying to build an RPM for apptainer on my system, which is older than what is supported by the authors. The compilation process seems to complete successfully, but at the end, no RPM is generated. These messages are shown:
    error: Missing build-id in /home/swbuild/rpmbuild/BUILDROOT/apptainer-1.1.3-1.el8.x86_64/usr/libexec/apptainer/bin/squashfuse_ll
    error: Generating build-id links failed

    RPM build errors:
        Macro expanded in comment on line 97: %{?el7}

        Missing build-id in /home/swbuild/rpmbuild/BUILDROOT/apptainer-1.1.3-1.el8.x86_64/usr/libexec/apptainer/bin/squashfuse_ll
        Generating build-id links failed
This is because older GCC toolchains do not pass --build-id to the linker by default, so the binaries are built without one. The fix is to add LDFLAGS="-Wl,--build-id" to the configure line in the .spec.in file, e.g.
    FLAGS=-std=c99 LDFLAGS="-Wl,--build-id" ./configure --enable-multithreading
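
To confirm that the rebuilt binary actually carries a build-id, readelf can be run on the offending file from the error message, e.g.:

    readelf -n /home/swbuild/rpmbuild/BUILDROOT/apptainer-1.1.3-1.el8.x86_64/usr/libexec/apptainer/bin/squashfuse_ll | grep -i "build id"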

2022-12-11

Researchers Managed to Transfer Twice the Global Internet Traffic in a Single Second

Gizmodo reports

a team of researchers from the Technical University of Denmark and Chalmers University of Technology in Sweden [hit] 1.84 Pbit/s with a new chip that uses just a single laser. That’s the equivalent of moving “twice the total global Internet traffic,” all in one second.
… 
As detailed in a paper recently published in the Nature Photonics journal, the research team developed a new optical chip that functions as something called a frequency comb. Light from a single infrared laser source enters the chip, where it’s split into a rainbow spectrum of hundreds of different colors that can each be encoded with data by modulating three specific properties of each frequency: its amplitude, phase, and polarization. Those hundreds of specially modulated frequencies are then recombined back into a single beam, which is transmitted down a fiber optic cable, and then decoded on the other end.