Showing posts with label linux. Show all posts

2016-11-08

scikit-learn with shared CBLAS and BLAS

If you have your own copies of BLAS and CBLAS installed as shared libraries, the default build of scikit-learn may end up not finding libblas.so, which libcblas.so depends on.

You may then get an error like the following when doing "from sklearn import svm":

from . import libsvm, liblinear
ImportError: /usr/local/blas/lib64/libcblas.so: undefined symbol: cgemv_

To fix it, modify the private _build_utils module:

scikit-learn-0.xx/sklearn/_build_utils/__init__.py

--- __init__.py.orig    2016-11-08 16:19:49.920389034 -0500
+++ __init__.py 2016-11-08 15:58:42.456085829 -0500
@@ -27,7 +27,7 @@

     blas_info = get_info('blas_opt', 0)
     if (not blas_info) or atlas_not_found(blas_info):
-        cblas_libs = ['cblas']
+        cblas_libs = ['cblas', 'blas']
         blas_info.pop('libraries', None)
     else:
         cblas_libs = blas_info.pop('libraries', [])

2015-11-12

docbook2x-texi

Building Git 2.6.3 from source, I ran into this error:
    DB2TEXI user-manual.texi
/bin/sh: line 1: docbook2x-texi: command not found
This is the docbook to texi converter from the docbook2X package (which has not been updated since 2007).

For Red Hat and Fedora, the EPEL package repo provides docbook2X. However, the name of the script is changed because there is a newer docbook package. So, in the git source directory, edit the file Documentation/Makefile and change one line:

DOCBOOK2X_TEXI = db2x_docbook2texi
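The edit can also be scripted; a one-liner sketch, run from the top of the git source tree:

```shell
# Replace the converter name in Documentation/Makefile:
sed -i 's/^DOCBOOK2X_TEXI = .*/DOCBOOK2X_TEXI = db2x_docbook2texi/' \
    Documentation/Makefile
grep '^DOCBOOK2X_TEXI' Documentation/Makefile
```

Alternatively, make variables can usually be overridden without editing the file at all, e.g. "make DOCBOOK2X_TEXI=db2x_docbook2texi".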

2015-06-30

Leap second added

The leap second has been added, and my systems seem to not have barfed. In particular, I was wary of what the NFS and Lustre file servers and clients would do.

Jun 30 20:00:00 myserver ntpd[5973]: 0.0.0.0 061b 0b leap_event

2015-05-28

Ganglia module (kludge) to monitor temperature via IPMI

Since I don't have environmental monitoring in my server room, I used ipmitool to read my cluster nodes' on-board sensors to sort of get at the cold aisle ambient temperature. One should be able to see a list of available sensor readings with "ipmitool sdr" or "ipmitool sensor", the latter giving other associated parameters for the metrics, like alarm thresholds.

Since access to /dev/ipmi0 is restricted to root, my kludge was to create a root cron job which runs every N minutes, writing the appropriate value to a log file:

ipmitool -c sdr get "Inlet Temp" | cut -f2 -d, > $LOGFILE
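The -c flag makes ipmitool emit CSV, and the cut pulls out the second field, the reading itself. A simulated run (the exact field layout can vary by sensor and firmware):

```shell
# Stand-in for: ipmitool -c sdr get "Inlet Temp" | cut -f2 -d,
echo 'Inlet Temp,24,degrees C,ok' | cut -f2 -d,
```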

Then, the Ganglia python module reads that file. I followed the example.py module which is distributed with Ganglia, and the associated example.pyconf.

The code is at my github repo.

2014-12-29

Mellanox Infiniband network cards on Linux

Sometimes, when one updates the firmware for Mellanox Infiniband cards, the MAC/hardware address gets changed. This usually happens if the IB card is OEM, i.e. made by Mellanox but stamped with a different company's name.

When the MAC gets changed, the network interface will not come up. The fix is to update the HWADDR field in /etc/sysconfig/network-scripts/ifcfg-ib0 and /etc/sysconfig/network-scripts/ifcfg-ib1. Use "ip link list" to display the new MAC.
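For example, an updated ifcfg-ib0 might look like this (the 20-byte address below is a made-up placeholder; copy the real one from "ip link list"):

```shell
# /etc/sysconfig/network-scripts/ifcfg-ib0
DEVICE=ib0
TYPE=InfiniBand
ONBOOT=yes
HWADDR=80:00:02:08:fe:80:00:00:00:00:00:00:f4:52:14:03:00:7b:cb:a1
```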

2014-12-16

RHEL 6.4 kernel 2.6.32-358.23.2, Mellanox OFED 2.1-1.0.6, and Lustre client 2.5.0

I am planning some upgrades for the cluster that I manage. As part of the updates, it would be good to have MVAPICH2 with GDR (GPU-Direct RDMA -- yes, that's an acronym of an acronym). MVAPICH2-GDR, which is provided only as binary RPMs, only supports Mellanox OFED 2.1.

Now, our cluster runs RHEL6.4, but with most non-kernel and non-glibc packages updated to whatever is in RHEL6.5. The plan is to update everything to whatever is in RHEL6.6, except for the kernel, leaving that at 2.6.32-358.23.2 which is the last RHEL6.4 kernel update. The reason for staying with that version of the kernel is because of Lustre.

We have a Terascala Lustre filesystem appliance. The latest release of TeraOS uses Lustre 2.5.0. Upgrading the server is pretty straightforward, according to the Terascala engineers. Updating the client is a bit trickier. Currently, the Lustre support matrix says that Lustre 2.5.0 is supported only on RHEL6.4.

The plan of attack is this:

  1. Update a base node with all RHEL packages, leaving the kernel at 2.6.32-358.23.2
  2. Upgrade Mellanox OFED from 1.9 to 2.1
  3. Build lustre-client-2.5.0 and upgrade the Lustre client packages

Updating the base node is straightforward. Just use "yum update", after commenting out the exclusions in /etc/yum.conf. If you had updated the redhat-release-server-6Server package, which defines which RHEL release you have, you can downgrade it. (See RHEL Knowledgebase, subscription required.) First, install the last (as of 2014-12-15) RHEL6.4 kernel, and then do the downgrade:
# yum install kernel-2.6.32-358.23.2.el6
# reboot
# yum downgrade redhat-release-server-6Server

Check with "cat /etc/redhat-release".

Next, install Mellanox OFED 2.1-1.0.6. You can install it directly using the provided installation script, or if you are paranoid like me, you can use the provided script to build RPMs against the exact kernel update you have installed.

Get the tarball directly from Mellanox. Extract, and make new RPMs:
# tar xf MLNX_OFED_LINUX-2.1-1.0.6-rhel6.4-x86_64.tgz
# cd MLNX_OFED_LINUX-2.1-1.0.6-rhel6.4-x86_64
# ./mlnx_add_kernel_support.sh -m .
...
# cp /tmp/MLNX_OFED_LINUX-2.1-1.0.6-rhel6.4-x86_64-ext.tgz .
# tar xf MLNX_OFED_LINUX-2.1-1.0.6-rhel6.4-x86_64-ext.tgz
# cd MLNX_OFED_LINUX-2.1-1.0.6-rhel6.4-x86_64-ext
# ./mlnxofedinstall
# reboot

Strictly speaking, the reboot is unnecessary: restarting a couple of services (typically openibd, and opensmd if the node runs the subnet manager) will load the new OFED stack.

Next, for Lustre. Get the SRPM from Intel (who bought WhamCloud). You will notice that it is for kernel 2.6.32-358.18.1. Not mentioned is the fact that by default, it uses the generic OFED that RedHat rolls into its distribution. To use the Mellanox OFED, a slightly different installation method must be used.

# rpm -ivh lustre-client-2.5.0-2.6.32_358.18.1.el6.x86_64.src.rpm
# cd ~/rpmbuild/SOURCES
# cp lustre-2.5.0.tar.gz ~/tmp
# cd ~/tmp
# tar xf lustre-2.5.0.tar.gz
# cd lustre-2.5.0
# ./configure --disable-server --with-o2ib=/usr/src/ofa_kernel/default
# make rpms
# cd ~/rpmbuild/RPMS/x86_64
# yum install lustre-client-2.5.0-2.6.32_358.23.2.el6.x86_64.x86_64.rpm \
lustre-client-modules-2.5.0-2.6.32_358.23.2.el6.x86_64.x86_64.rpm \
lustre-client-tests-2.5.0-2.6.32_358.23.2.el6.x86_64.x86_64.rpm \
lustre-iokit-2.5.0-2.6.32_358.23.2.el6.x86_64.x86_64.rpm
To make the lustre module load at boot, I have a kludge: in /etc/init.d/netfs, right after the line
STRING=$"Checking network-attached filesystems"
add
modprobe lustre
Reboot, and then check:
# lsmod | grep lustre
lustre                921744  0
lov                   516461  1 lustre
mdc                   199005  1 lustre
ptlrpc               1295397  6 mgc,lustre,lov,osc,mdc,fid
obdclass             1128062  41 mgc,lustre,lov,osc,mdc,fid,ptlrpc
lnet                  343705  4 lustre,ko2iblnd,ptlrpc,obdclass
lvfs                   16582  8 mgc,lustre,lov,osc,mdc,fid,ptlrpc,obdclass
libcfs                491320  11 mgc,lustre,lov,osc,mdc,fid,ko2iblnd,ptlrpc,obdclass,lnet,lvfs


2014-07-02

Limiting logins under SSSD

Under SSSD, you can pretty easily limit logins to specific users or groups. The syntax is different from that of /etc/security/access.conf, and is actually easier. Red Hat has some documentation (may require login). There is also a man page for sssd.conf(5).

Under your domain, add some lines to configure "simple" access control:
[domain/default]  
access_provider = simple 
simple_allow_users = topbanana 
simple_allow_groups = bunchofbananas,wheel

2014-06-11

root cron jobs and /etc/security/access.conf

On RHEL6, if your root cron jobs do not run, check your /var/log/secure file for lines that look like:
crontab: pam_access(crond:account): access denied for user `root' from `cron'
You may also see the following message when, as root, you type "crontab -e":
Permission denied
You (root) are not allowed to access to (crontab) because of pam configuration.

If there are any like that, check /etc/security/access.conf -- you need to allow root access via cron and crond by adding the following line:
+ : root : cron crond 

2014-06-06

More on SSSD - getting finger(1) and command completion of ~ to work

I noticed after getting SSSD up and running that the finger(1) command no longer worked, and neither did command completion of ~username. In the first case, finger(1) never found any users, even when I gave the exact username. In the second case, if I typed at the command line cd ~d<tab>, it would not expand to a list of possibilities. However, cd ~david worked just fine.

Turns out, there needs to be one setting in /etc/sssd/sssd.conf:


[domain/default]
    ...
    enumerate = True

That allows a local precache to be created so that finger(1) can iterate over user info to find a matching record.

Then, restart the sssd service.
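With enumerate = True and the cache populated, the full user list becomes visible through the normal NSS calls, which is what finger(1) iterates over. A quick check (getent is standard glibc, so this is a generic illustration; domain users should now appear in the list):

```shell
# Count the users visible via NSS enumeration:
getent passwd | wc -l
```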


2013-03-15

Writing a new SELinux policy module for a standard init daemon

This is going to be a summary of my experience writing new policy modules for Ganglia gmetad and gmond on RHEL5. Ganglia is a "scalable distributed monitoring system for high-performance computing systems." I downloaded the package source distribution, and built RPMs myself.

In case you are looking to apply this to something else, here are a couple of the underlying assumptions:
  • the service is a standard init-launched daemon
  • each service only has one executable, the daemon program
In the case of gmetad and gmond, the daemon programs are, respectively, /usr/sbin/gmetad and /usr/sbin/gmond.

I have written about creating new SELinux policies before, but I think this is better in that it wraps things up into a module that may be removed or updated more easily than a monolithic policy. Note, however, that rules governing network ports are not bundled into the module. (See below.)

This is going to be an iterative process. Before even starting, one needs to know which files/directories the daemons will write to, and if they run non-root. If the package one is working with is well-documented, this may be obtained from the documentation. If not, some trial and error will be needed. Also, for most programs, these file/directory locations are configurable.

We use the GUI Selinux Policy Generation tool, system-config-selinux. There is a good article on using this tool by Dan Walsh dating back to 2007.

We will start with gmetad. In the case of gmetad, the default location for the RRD files is /var/lib/ganglia/rrds. So, the policy should allow write access to /var/lib/ganglia.

In the Selinux Policy Generation tool, these are the entries used:
  • Name: gmetad
  • Executable: /usr/sbin/gmetad
  • Standard Init Daemon
  • Incoming network ports, both TCP and UDP: 8651,8652
  • Common Application Traits
    • Application uses syslog to log messages
    • Application uses /tmp to Create/Manipulate temporary files
    • Application uses nsswitch or translates UID's (daemons that run as non root)
  • Add Directory: /var/lib/ganglia
This generates 4 files in whatever directory you specify at the end of the druid: gmetad.fc, gmetad.if, gmetad.sh, gmetad.te. If you examine gmetad.sh, you will see:
#!/bin/sh
make -f /usr/share/selinux/devel/Makefile
/usr/sbin/semodule -i gmetad.pp

/sbin/restorecon -F -R -v /usr/sbin/gmetad
/sbin/restorecon -F -R -v /var/lib/ganglia
/usr/sbin/semanage port -a -t gmetad_port_t -p tcp 8651
/usr/sbin/semanage port -a -t gmetad_port_t -p tcp 8652
/usr/sbin/semanage port -a -t gmetad_port_t -p udp 8651
/usr/sbin/semanage port -a -t gmetad_port_t -p udp 8652
Note that the ports are not bundled into the "compiled" module file, gmetad.pp. The port rules are added "manually". The module merely defines the type gmetad_port_t.

The gmetad.te file is what we will be editing in the iterative steps below. The first line sets a version number, which allows you to update a loaded policy using "semodule -u gmetad.pp".

policy_module(gmetad,1.0.0)

Make sure the gmetad service is not running. Now, turn off the auditd service, and move away the audit log file to simplify finding incremental changes in policy that are needed:
# service gmetad stop
# service auditd stop
# cd /var/log/audit
# mv audit.log audit.log.20130313-1500
Then, start up the audit daemon, followed by gmetad. Wait for a few minutes (or much longer) for gmetad to do its thing, and for auditd to accumulate all or most of the AVC denials that would affect gmetad. Once a sufficient amount of time has passed:
# grep gmetad /var/log/audit/audit.log | audit2allow -R > audit.out
The output should look like:
require {
        type gmetad_t;
        class capability { setuid setgid };
}

#============= gmetad_t ==============
allow gmetad_t self:capability { setuid setgid };
kernel_read_kernel_sysctls(gmetad_t)

Next, edit gmetad.te, and increment the version number. Append to the end of gmetad.te the contents of audit.out. Then, generate the policy file, and load the updated policy:
# make -f /usr/share/selinux/devel/Makefile
# semodule -u gmetad.pp
Next, shut down gmetad, shut down auditd, move the audit log away, start auditd, and start gmetad. Wait a bit, and look for new denials in the audit log by doing
# grep gmetad /var/log/audit/audit.log | audit2allow -R > audit2.out
To append any new rules, you have to manually pick out the new unique lines from audit2.out and put them in the appropriate sections (the 'require' section, or the block of allows) of gmetad.te. For gmetad.te, I found there wasn't much change between iterations. For gmond, however, there were quite a few, mostly the addition of file getattr permissions. This involved changing many lines like:

allow gmond_t lvm_t:file read;  -->  allow gmond_t lvm_t:file { getattr read };
This iteration may have to include alternating gmond and gmetad since gmetad has to connect to the gmond port, which means something like:
allow gmetad_t gmond_port_t:tcp_socket name_connect;
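Picking out the not-yet-present rules can be semi-automated with grep; a sketch, using the file names from above:

```shell
# Print lines of audit2.out that do not already appear verbatim in gmetad.te
# (-F fixed strings, -x whole-line match, -v invert, -f patterns-from-file):
grep -Fxv -f gmetad.te audit2.out
```

You still have to merge any new 'require' entries into the right section of gmetad.te by hand.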

Here at the Wake Forest University HPC facility, we have a combination of cfengine and Puppet to manage machine configurations: cfengine for the RHEL5 nodes, and Puppet for the RHEL6 nodes. The policy .pp file is distributed via cfengine, and a shellcommand is run by cfengine to load/update the module, and additional commands do the file system relabelling and the port rules. Basically, reproducing the .sh file that the Policy Generation Tool creates.

UPDATE 2013-03-22: If you have a cyclic dependency in your policy modules -- in this case, gmond refers to gmetad, and gmetad refers to gmond -- you will find that you can't load the modules individually. All you have to do is load them all in one command line:
semodule -i gmond.pp gmetad.pp

2013-01-22

Cinnamon desktop for Ubuntu 12.10 Quantal

I wish I had discovered this sooner. I've been using Unity for months and never did get used to it very well. I installed the Cinnamon desktop over the weekend, and it works like an old-fashioned desktop.
sudo add-apt-repository ppa:gwendal-lebihan-dev/cinnamon-stable
sudo apt-get update
sudo apt-get install cinnamon
When logging in, click on the small Ubuntu logo, and select the desktop environment you want.

2013-01-07

Wifi connectivity issues and TCP SACK

So, my Ubuntu 12.10 laptop recently developed an issue where it would not connect to the wifi access point at a local café. My iPhone had no trouble, nor the laptop when booting into Windows 7. Other people using Mac laptops seemed to have no trouble, either.

At a guess, since similar connectivity issues came up at work a few months ago, I decided to turn off TCP SACK (Selective ACKnowledgement), and that caused the wifi connection to start working. What's odd is that SACK is supposed to be negotiated at the start of a connection. Whatever. It works, now.

UPDATE: Well, maybe that wasn't it. Right after doing this, I turned SACK back on to see its effects, and was able to connect to the access point with no trouble.

UPDATE 2: Here's the commandline:
sysctl -w net.ipv4.tcp_sack=0
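To make the setting survive a reboot, the standard mechanism is a line in /etc/sysctl.conf (loaded at boot, or on demand with "sysctl -p"):

```shell
# /etc/sysctl.conf
net.ipv4.tcp_sack = 0
```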
There was some discussion on ServerFault about TCP SACK, and also an in-depth article by Patrick McManus from 2008.

2012-12-18

Changing the GNOME keyring password

I just changed my password yesterday -- we have a 180-day lifetime on passwords at Wake Forest -- and discovered that my GNOME keyring password still remained the same. The password is supposed to be the same as the login password so that the keyring is unlocked when you log in.

I had changed my password using the command line. Maybe the GUI tool would automatically update the keyring password, as well. (I have my doubts.)

A little googling turned up slightly outdated information. So, for posterity: one uses Seahorse, which appears as the "Passwords and Keys" application under Fedora. Just search for "seahorse":



Once you do, though, there is no obvious way to change the keyring password as this is the view first presented:


The trick is to go to the View menu, and check on "By keyring".




A left sidebar pops up showing you the three keyrings available: Passwords, PGP Keys, and Secure Shell. The Login keyring under Passwords should have a lock icon next to it:


Right click on "Login keyring", and select "Change Password", then change it to be the same as your login password. The next time you log in (even with KDE), the Login keyring will be unlocked.

2012-11-28

Full-disk encryption on Linux Mint 14

UPDATE 2014-06-29: Unfortunately, when I updated the comment system to use Google+, it cleared out the old comments, which happened to be somewhat helpful.

I have just tried, and failed, to get full-disk encryption on Linux Mint 14 (see also this Ars Technica review). I'm writing this post in the hopes that someone can figure out how to get it to work.

As part of my university's ongoing efforts to improve data security, we have just been required to encrypt the drives on our work-issued laptops.

Ubuntu 12.10 Quantal Quetzal introduced the full-disk encryption option into their default installer. Previously, it was only available via the text-mode installer available on a separate "alternate" image. However, despite using Quantal for several months, I find the Unity GUI really annoying, and decided to try Linux Mint 14 with the Cinnamon desktop GUI. The Cinnamon GUI is a GNOME 3 fork, so it uses 3D compositing, but presents a GNOME 2-like interface (or Windows-like), which I prefer.

Unfortunately, Linux Mint 14 did not adopt Quantal's Ubiquity installer, so the full-disk encryption option was not available. Andreas Haerter has written a good guide to full-disk encryption on Ubuntu prior to 12.10, and included a shell script which automates a lot of the process. I made some small modifications to the script for Mint: renaming LVM volume names, using optimal alignment in parted. However, Ubiquity now crashes after I specify the mount points and it starts doing the install proper.

Maybe someone out there has an idea of what might be broken?

My modified script is here:


2012-04-02

Using NVIDIA drivers in Fedora 16

UPDATE 2 (2012-08-23): The Nvidia installer now makes use of DKMS, which causes any kernel updates to rebuild the Nvidia kernel module. So, no need to go through this rigamarole at every kernel update.
UPDATE: There was a typo in my lspci command line. Should have been VGA and not CGA.

Installing the latest NVIDIA drivers under Fedora (or really, any distribution), is a little roundabout. Here is how I do it, which is a mix and match of several howtos on the net. Part of the reason the process is a little complicated is the use of the open source Nouveau drivers: these have to be removed before NVIDIA's drivers can be installed.

The canonical reference for all things Linux+NVIDIA is if-not-true-then-false.com. Their write-up on NVIDIA and Fedora 16 gives directions to use the RPMFusion repositories, which provide non-free software (including the NVIDIA drivers).

We will follow their instructions for removing Nouveau, but install NVIDIA drivers downloaded from NVIDIA themselves.

$ sudo -i
# yum install gcc kernel-devel
# yum update kernel* selinux-policy
# reboot

To remove nouveau, build a new initramfs image:

# mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
# dracut /boot/initramfs-$(uname -r).img $(uname -r)
# reboot

Next, edit the file /etc/default/grub. To the line that defines GRUB_CMDLINE_LINUX, append the following:

rdblacklist=nouveau nouveau.modeset=0

Mine looks like:

GRUB_CMDLINE_LINUX="rd.md=0 rd.lvm.lv=vg_johnny/lv_swap rd.dm=0  KEYTABLE=us quiet  SYSFONT=latarcyrheb-sun16 rhgb rd.luks=0 LANG=en_US.UTF-8 rdblacklist=nouveau  nouveau.modeset=0"

Then, make the grub2 config:

# grub2-mkconfig -o /boot/grub2/grub.cfg

Next, find the model number of your GPU card, and find the appropriate driver from NVIDIA:

> lspci | grep VGA

Mine shows:


01:00.0 VGA compatible controller: nVidia Corporation G92 [GeForce 9800 GT] (rev a2)

Then, go to NVIDIA's Linux driver page and pick the appropriate version. For me, it was Linux x86_64/AMD64/EM64T, version 295.33 -- the driver installer is a file named NVIDIA-Linux-x86_64-295.33.run

The installer will build a kernel module, but to do so, you must be in runlevel 3 (i.e. no GUI, but with networking):

# telinit 3

You will drop down to the console prompt. Login as root, and then do:

# sh NVIDIA-Linux-x86_64-295.33.run

and answer the prompts along the way. You should be able to just do "telinit 5" to get back the GUI login, but I usually just reboot.

Now, whenever the kernel is updated, you will have to rebuild the kernel module by repeating the last step.

At the final step of the Nvidia installation, you will be asked if you want to enable dkms, which allows kernel updates to rebuild the Nvidia kernel module automatically. Say "yes".

2012-03-01

How to disable Unity global menus in Ubuntu

Wow. I am really hating this new Blogger interface. I just managed to completely delete the post again. "Revert to Draft" doesn't mean revert to original content when you edit an existing post, it just completely erases the existing content. Then, when you hit the "Close" button, the now blank post is saved, thereby losing your content.

In any case, the original post griped about Unity's auto-hiding global menubar, and gave a tip to remove it, which is to remove the package that provides that feature:

    sudo apt-get purge indicator-appmenu

If you want to completely revert Ubuntu to a stock GNOME 3 UI, there are extensive instructions here. In particular, the crazy disappearing scrollbar handles which are as easy to grab onto as eels.

2012-02-23

Own-horn-tooting: python-pbs

Hah! I'm in the changelog for the python-pbs package for Fedora/RedHat. Frankly, I don't even remember doing this, and I can't find any correspondence in my gmail about it, either.

The python-pbs package is a Python wrapper around libtorque, the library that underlies the Torque resource manager.

2012-02-03

More Puppet and SELinux

Remember my previous post about Puppet and SELinux? Well, it turns out it wasn't complete. The policy file was missing a couple of policies. This happened because I didn't completely start from scratch at each iteration of testing, and at some point, I turned SELinux to permissive, so client certificates were being signed with no problem.

In moving to our production server, there were error messages on the client side:

err: Could not request certificate: Error 400 on SERVER: Permission denied - /var/lib/puppet/ssl/ca/serial
Exiting; failed to retrieve certificate and waitforcert is disabled

On the production puppet master, AVC denials looking like:

type=1400 audit(1328213559.254:21031): avc:  denied  { remove_name } for  pid=5901 comm="ruby" name="serial.tmp" dev=dm-2 ino=131791 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=system_u:object_r:puppet_var_lib_t:s0 tclass=dir

with corresponding items in /var/log/messages (why not in /var/log/audit/audit.log? I have no idea):

puppet-master[13193]: Could not rename /var/lib/puppet/ssl/ca/serial to /var/lib/puppet/ssl/ca/serial.tmp: Permission denied - /var/lib/puppet/ssl/ca/serial.tmp or /var/lib/puppet/ssl/ca/serial


(Still unsolved mystery: on the production server, ausearch did not show any AVC denials; the denials were logged to /var/log/messages. I did not try "semodule -DB" to disable all dontaudits.)

On the test system, there were also denials like:

type=AVC msg=audit(1328221549.372:27539363): avc:  denied  { unlink } for  pid=29452 comm="ruby" name="serial.tmp" dev=dm-2 ino=134565 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:puppet_var_lib_t:s0 tclass=file
What happens is when a certificate signing request (CSR) comes in to the puppet master from a client, a file /var/lib/puppet/ssl/ca/serial.tmp is created. At the end of the signing process, that file is moved to serial. I think it just does a cp and rm. (My suspicion is based on the unlink policy that it needs.)

In any case, here is an updated policy file. Note the version number compared to the previous one.
module puppet_passenger 1.15;

require {
        type httpd_t;
        type httpd_passenger_helper_t;
        type port_t;
        type puppet_var_lib_t;
        type puppet_var_run_t;
        type puppet_log_t;
        type proc_net_t;
        type init_t;
        type user_devpts_t;
        class dir { write getattr read create search add_name remove_name rename unlink rmdir };
        class file { write append relabelfrom getattr setattr read relabelto create open rename unlink };
        class udp_socket name_bind;
}

#============= httpd_passenger_helper_t ==============
allow httpd_passenger_helper_t httpd_t:dir { getattr search };
allow httpd_passenger_helper_t httpd_t:file { read open };

#============= httpd_t ==============
#!!!! This avc can be allowed using the boolean 'allow_ypbind'

allow httpd_t port_t:udp_socket name_bind;

allow httpd_t proc_net_t:file { read getattr open };

allow httpd_t puppet_var_lib_t:dir { write read create add_name remove_name rename unlink rmdir };
allow httpd_t puppet_var_lib_t:file { relabelfrom relabelto create write append rename unlink };

allow httpd_t puppet_var_run_t:dir { getattr search };

allow httpd_t puppet_log_t:file { getattr setattr };

allow httpd_passenger_helper_t init_t:file { read };
allow httpd_passenger_helper_t init_t:dir { getattr search };

2011-05-12

Mounting .dmg files in Ubuntu

Every now and then, it may be useful to mount a Mac OS X .dmg (disk image) file in Linux. I just had to do this to get some Canon printer drivers (PPD files) which were distributed in a .dmg file.

There are two steps (not including installing the applicable packages which make reading Apple's HFS+ disk format possible):

  1. convert the .dmg file into a .img file
  2. mount the resulting .img disk image
Before that, install the hfsplus package which allows accessing HFS+ volumes. This will probably install libhfsp0 as a co-requisite. You will also need the dmg2img package to convert the compressed .dmg image format to an uncompressed .img format.

Once you have the appropriate packages installed, do the following (assume the .dmg file is mydisk.dmg):

    $ dmg2img mydisk.dmg
    $ sudo modprobe hfsplus
    $ sudo mkdir /mnt/mydisk
    $ sudo mount -t hfsplus -o loop mydisk.img /mnt/mydisk
    $ cd /mnt/mydisk

2010-10-14

Linux memory limits (rlimits)

It is frequently useful to set limits on the amount of memory any user program can consume. There are a few mechanisms for doing this.

But first, a look at the two types of memory limits available (ignoring stack). I refer to the man page for setrlimit(p). 
  1. RLIMIT_DATA -- “The  maximum  size  of the process’s data segment (initialized data, uninitialized data, and heap). This limit affects calls to brk(2) and sbrk(2).” The man page for brk(2) warns that malloc(3) should be used instead of brk(2) or sbrk(2).
  2. RLIMIT_AS -- “The maximum size of the process’s virtual memory (address space) in bytes. This limit affects calls to brk(2), mmap(2) and mremap(2)”
Limits can be set system-wide in /etc/security/limits.conf. The limit set here is a hard limit, i.e. users cannot manually increase their limits beyond the bounds set there, though they can set lower limits.

Limits can be set in the shell. With bash, it is the ulimit command. With tcsh, it is the limit command. In  bash, the "-d" option corresponds to RLIMIT_DATA, and the "-v" option corresponds to RLIMIT_AS. In tcsh, the "datasize" option corresponds to RLIMIT_DATA, and "vmemoryuse" corresponds to RLIMIT_AS. The "memoryuse" (maximum amount of physical memory a process may have allocated to it at a given time) option does not correspond to any rlimit: this is also noted in the man page for tcsh.

Limits can also be set using the setrlimit(p) system function.

It turns out that RLIMIT_DATA is completely ignored: glibc's malloc(3) obtains large blocks via mmap(2), which RLIMIT_DATA does not restrict (only brk(2)/sbrk(2) allocations count against it). It took a couple of days' worth of Googling and reading forum and mailing list postings to find this out. And then some testing on my own to confirm it.

The only memory-related limit that matters is RLIMIT_AS. I ran tests setting the limits in all three ways, and in both shells. No matter how limits are set, RLIMIT_DATA does not prevent malloc(3) from allocating memory. RLIMIT_AS, however, will terminate a program that tries to allocate too much.
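A quick way to see RLIMIT_AS enforcement from a shell, with python3 standing in for the C test program (the subshell keeps the limit from affecting your login shell):

```shell
# Cap the address space at 1 GiB, then try to allocate 2 GiB:
( ulimit -v 1048576
  python3 -c 'b = bytearray(2 * 1024**3)' 2>/dev/null \
    && echo "allocation succeeded" \
    || echo "allocation failed" )
```

Note that bash's ulimit -v takes the limit in KiB.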

Here is some abstracted output from my test program, hopefully self-explanatory:

Current system limits:
    cur data = 204800000 = 195 MiB
    max data = unlimited
    cur as = 716800000 = 683 MiB
    max as = 15734530048 = 15005 MiB

Allocating 680 MiB
        x[0] = 0
        x[11141120] = 11141120
        x[22282240] = 22282240
        x[33423360] = 33423360
        x[44564480] = 44564480
        x[55705600] = 55705600
        x[66846720] = 66846720
        x[77987840] = 77987840
Sleeping............. Done.

Current system limits:
    cur data = 204800000 = 195 MiB
    max data = unlimited
    cur as = 716800000 = 683 MiB
    max as = 15734530048 = 15005 MiB
    cur stack = 10485760 = 10 MiB
    max stack = unlimited

Allocating 683 MiB
allocation failed: Cannot allocate memory
A copy of glom.c is here.