How-to's and technical news about Linux and open computing, with a sprinkling of Python.
Lenovo OneCLI 4.3.0 for Linux has a bug in its bash completion file. On line 153, the hyphens "-" inside the option names (not the leading "--") should be replaced with underscores "_". The shipped line reads:

ux_check_bmc_account_opts="--bmc --check-trust --never-check-trust --quiet --output --log --nolog --config --help"

Corrected, it should be:

ux_check_bmc_account_opts="--bmc --check_trust --never_check_trust --quiet --output --log --nolog --config --help"
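Assuming the intent is that only hyphens between letters become underscores, leaving the leading "--" intact, the transformation can be sketched with sed (the option list below is abbreviated; the sed invocation is an illustrative sketch, not Lenovo's fix):

```shell
# Replace hyphens between letters with underscores; "--" is untouched
# because the character before it is not a letter.
fixed=$(printf '%s\n' '--bmc --check-trust --never-check-trust --quiet' \
  | sed -E 's/([a-z])-([a-z])/\1_\2/g')
printf '%s\n' "$fixed"
```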
The Python packaging landscape has evolved a bit beyond plain setuptools. I wanted a ready-made example for a common use case: a Python module that provides one or more command-line scripts and uses module-level constants.
I based mine on the Python Packaging Tutorial. It’s available on GitHub.
Improvements to be made include specifying requirements.
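Since the post does not inline its code, here is a minimal sketch of the use case it describes; all names are illustrative, not taken from the linked repository. The module defines a constant and a main() function, which pyproject.toml would expose as a console script via a [project.scripts] entry such as greet = "mypkg.cli:main".

```python
import sys

# Module-level constant, importable as e.g. mypkg.cli.GREETING
GREETING = "Hello"


def main(argv=None):
    """Entry point installed as a console script via [project.scripts]."""
    args = sys.argv[1:] if argv is None else argv
    name = args[0] if args else "world"
    print(f"{GREETING}, {name}!")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

After a pip install of the package, the script named in [project.scripts] lands on PATH and calls main() directly.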
The National Cancer Institute’s Genomic Data Commons (GDC) provides gdc-client, a tool that facilitates data transfer to and from their data repository; it is open sourced on GitHub.
My first pass at building it gave an error while trying to build lxml without Cython:
building 'lxml.etree' extension
creating build/temp.linux-x86_64-cpython-311
creating build/temp.linux-x86_64-cpython-311/src
creating build/temp.linux-x86_64-cpython-311/src/lxml
gcc -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -DCYTHON_CLINE_IN_TRACEBACK=0 -I/usr/include/libxml2 -Isrc -Isrc/lxml/includes -I/home/chind/Src/gdc-client/venv/include -I/home/chind/opt/include/python3.11 -c src/lxml/etree.c -o build/temp.linux-x86_64-cpython-311/src/lxml/etree.o -w
src/lxml/etree.c:289:12: fatal error: longintrepr.h: No such file or directory
289 | #include "longintrepr.h"
| ^~~~~~~~~~~~~~~
compilation terminated.
Compile failed: command '/usr/bin/gcc' failed with exit code 1
The fix was to build and install lxml from source, regenerating the C sources with Cython. Cython < 3 is needed, i.e. Cython 0.29.x. The underlying problem is that the pre-generated etree.c includes longintrepr.h, which Python 3.11 no longer ships as a top-level public header, so the C code must be regenerated by a Cython release that knows about 3.11.
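One plausible sequence for the manual build (assuming an active virtualenv; the exact lxml build flags may differ, so treat this as a sketch rather than the authoritative steps):

```shell
pip install 'Cython<3'                 # i.e. a 0.29.x release
pip download --no-binary :all: --no-deps lxml==4.4.2
tar xf lxml-4.4.2.tar.gz
cd lxml-4.4.2
python setup.py build --with-cython    # regenerate etree.c with Cython
pip install .
```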
Once lxml 4.4.2 was installed manually, following the gdc-client build instructions was successful, and the gdc-client script was created.
For more detail, see this Gist.
Just released AlphaFold Singularity 2.3.2, a bugfix release. It addresses an issue where the --output_dir argument to the run_singularity.py script was not handled correctly. The source release is on GitHub, and a pre-built Singularity image is also available at Sylabs.io.
Nvidia has a new ARM-based CPU, which they announced some time ago. Here, Linus Tech Tips takes a look at it at COMPUTEX Taipei 2023. The design is similar to Apple silicon in that CPU and memory sit in the same package. Nvidia does split out the GPU, connected via NVLink.
Basic walk through of setting up U2F with YubiKey on Ubuntu 23.04 (should work on recent versions, as well). This follows the official documentation closely, removing anything not necessary for my particular setup.
N.B. this is different from challenge-response, another multifactor method. YubiKeys support multiple protocols; U2F and challenge-response are two of them.
Preliminaries:
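The preliminaries include registering the key for your user. A sketch using pamu2fcfg from the pam-u2f package (the output path matches the authfile used in the PAM configuration below):

```shell
# Register the YubiKey for the current user (sketch; assumes the
# pam-u2f tools, including pamu2fcfg, are installed).
sudo mkdir -p /etc/yubico
pamu2fcfg -u "$USER" | sudo tee -a /etc/yubico/u2f_keys
```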
Create a PAM config snippet for the U2F check, e.g. /etc/pam.d/u2f-required (the name referenced below), containing:

auth required pam_u2f.so authfile=/etc/yubico/u2f_keys cue interactive

A non-mandatory variant would use "sufficient" instead:

auth sufficient pam_u2f.so authfile=/etc/yubico/u2f_keys cue interactive

To troubleshoot, append debug options to the end of the auth line, and create the log file:

debug debug_file=/var/log/pam_u2f.log

sudo touch /var/log/pam_u2f.log
Before changing sudo’s PAM config, open a root shell in a separate terminal as a safety net:

normaluser$ sudo bash
#

DO NOT exit this terminal until you are sure at least sudo works.
Then edit the PAM config for sudo (/etc/pam.d/sudo) so the U2F snippet is included before common-auth:

@include u2f-required
@include common-auth
In a new terminal, test that sudo now prompts for the key:

normaluser$ sudo ls -l /tmp
[sudo] password for normaluser:
Insert your U2F device, then press ENTER.
Please touch the device. (The YubiKey should start flashing.)
total xx
[listing of files here]
By using OpenACC to GPU-accelerate their commercial flow solver, ADSCFD achieved speed-ups of 20 to 300 times across multiple use cases, reduced cost to solution by up to 70%, and brought analyses that were once deemed infeasible within reach of a typical design cycle.
Figure 2 shows the performance of the first release of Simcenter STAR-CCM+ 2022.1 against commonly available CPU-only servers. For the tested benchmark, an NVIDIA GPU-equipped server delivers results almost 20x faster than over 100 cores of CPU.
…
Comparing Ansys Fluent 2022 beta1 on GPU servers against CPU-only servers shows that the CPU platforms (Intel Xeon, AMD Rome, and AMD Milan) differ by only ~1.1x among themselves, while the NVIDIA A100 PCIe 80GB delivered speedups from 5.2x (one GPU) to an impressive 33x (eight GPUs).
Ansys’s blog post covers the same result as Nvidia’s, showing a 33x speedup using 8 A100 GPUs. They also compare the cost of equal-performance clusters, one using GPUs and the other CPU-only:
1 NVIDIA A100 GPU ≈ 272 Intel® Xeon® Gold 6242 Cores