Log Tools

The Logtools package contains a number of programs for managing log files (mainly for web servers).

  • clfmerge will merge a number of Common Logfile Format web log files into a single file, re-ordering entries within a sliding window to cope with web servers that generate log entries with the start-time of the request but write them in order of completion.
  • logprn operates like tail -f but will (after a specified period of inactivity) spawn a process and write the new data from the log file to its standard input.
  • clfsplit will split up a single CLF format web log into a number of files based on the client’s IP address.
  • funnel will write its standard input to a number of files or processes.
  • clfdomainsplit will split a CLF format web log containing fully qualified URLs (including the host name) into separate files, one for each host.


Porting NSA SE Linux to Hand-Held Devices


I presented this paper at the 2003 Ottawa Linux Symposium (OLS). The site that hosted the paper has been defunct since about 2004, so I removed the link.

The NSA changed the URLs on their web site, so this version of the paper has the new ones.

The SE Linux kernel interfaces have changed: everything now goes through the proc and selinuxfs filesystems, and there are no SE Linux specific system calls. Equivalent functionality is provided.

With significant changes to the code base (kernel, policy, and tools) the amounts of memory used will differ. But the methods of saving memory will remain the same.


In the first part of this paper I will describe how I ported SE Linux to User-Mode-Linux and to the ARM CPU. I will focus on providing information that is useful to people who are porting to other platforms as well. In the second part I will describe the changes necessary to applications and security policy to run on small devices. This will be focussed on hand-held devices but can also be used for embedded applications such as router or firewall type devices, and any machine that has limited memory and storage.


SE Linux offers significant benefits for security. It accomplishes this by adding another layer of security in addition to the default Unix permissions model. This is achieved by firstly assigning a type to every file, device, network socket, etc. Then every process has a domain and the level of access permitted to a type is determined by the domain of the process that is attempting the access (in addition to the usual Unix permission checks). Domains may only be changed at process execution time. The domain may automatically be changed when a process is executed based on the type of the executable program file and the domain of the process that is executing it, or a privileged process may specify the new domain for the child process.

In addition to the use of domains and types for access control, SE Linux tracks the identity of the user (which will be system_u for processes that are part of the operating system, or the Unix user-name) and the role. Each identity has a list of roles that it is permitted to assume, and each role has a list of domains that it may use. This gives a high level of control over the actions of a user, which is tracked through the system. When the user runs SUID or SGID programs the original identity is still tracked, and their privileges in the SE security scheme do not change. This is very different to standard Unix permissions, where after a SUID program runs another SUID program it’s impossible to determine who ran the original process. Also of note is the fact that operations denied by the security policy [smalley] have the identity of the process in question logged.

I often run SE Linux demonstration machines on the Internet which provide root access to the world and an invitation to try and break the security [play-machine].

For a detailed description of how SE Linux works I recommend reading the paper Peter Loscocco presented at OLS in 2001 [ols2001:loscocco-smalley].

SE Linux has been shown to provide significant security benefits for little overhead on servers, desktop workstations, and laptops. However it has not had much use in embedded devices yet.

Some people believe that SE Linux is only needed for server systems. I think that is incorrect; in many situations laptops and hand-held devices need more protection than servers. A server will usually have a firewall protecting it and a small number of running applications which are well maintained and easy to upgrade. Portable computers are often used in hostile environments that servers do not experience; they have no firewalls to protect them, and often they are connected to routers operated by potentially negligent or hostile organizations.

But there are two main factors that cause an increased need for security on portable devices. One is that it is usually extremely difficult and expensive to upgrade them if a new security fix is needed; this means that in commercial use portable computers tend never to have security fixes applied. Another factor is that often the person in possession of a hand-held computer is not authorised to access all the data it contains, and may even be hostile to the owner of the machine.

Naturally for a full security solution for portable computers a strong encryption system will need to be used for all persistent file systems. There are various methods of doing this, but all aspects of such encryption are outside the scope of this project and can be implemented independently.

Kernel Porting

The current stable series of SE Linux is based on the 2.4.x kernels and uses the Linux Security Modules (LSM) [lsm] interface. The current LSM interface has a single sys_security() system call that is used to multiplex all the system calls for all of the security modules. SE Linux uses 52 different system calls through this interface. Due to problems in porting the kernel code to some platforms (particularly those that have a mixed 32 and 64 bit memory model) the decision was made to change the LSM interface for kernel 2.6.0. The new interface will make the code fully portable and remove the painful porting work that is currently required. However I needed to have SE Linux working with the 2.4.x kernels, so I couldn’t wait for kernel 2.6.0.

The main difficulty in porting the code is the system call execve_secure(), which is used to specify the security context for the new process. This calls the kernel function do_exec() to perform the execution, and do_exec() needs a pointer to the stack, thus requiring architecture specific code in the sys_execve_secure() function. The sys_security_selinux_worker() function (which determines which SE Linux system call is desired and passes the appropriate parameters to it) calls sys_execve_secure() and therefore also needs architecture specific code, and so does the main system call sys_security_selinux().

My first port of SE Linux was to User-Mode Linux [uml]. This was a practice effort for the main porting work. It is quite easy to debug kernel code under UML, and as it uses the i386 system call interface I could port the kernel code without any need to port application code.

The main architecture dependent code is in the source file security/selinux/arch/i386/wrapper.c, which has code to look on the stack for the contents of particular registers. This needs to be changed for platforms with different register names, and for UML which does not permit such direct access of registers.

The solution in the case of UML was to not have a wrapper function, as the current structure had a pointer to the stack anyway that could be used inside the sys_execve_secure() function. So I renamed the sys_security_selinux_worker() function to sys_security_selinux() for the UML port and entirely removed all reference to the wrapper. Then I moved the implementation of sys_execve_secure() into the platform specific directory and implemented a different version for each port.

This was essentially all that was required to complete the port; the core code of SE Linux was all cleanly written and could just be compiled. The only other work involved getting the Makefiles correctly configured, and adding a hook to sys_ptrace().

One thing I did differently with my port to the ARM architecture was that I removed the code to replace the system call entry. When the SE Linux kernel code loads on UML and i386 it replaces the system call with a direct call to the SE Linux code (rather than using the option for LSM to multiplex between different modules). As there is currently no support for having SE Linux be a loadable module there seems to be no benefit in this, and it seems that on ARM there will be more overhead for adding an extra level of indirection for this. So I made the SE Linux patch hard-code the SE system call into the sys-call table.

iPaQ Design Constraints

The Compaq/HP iPaQ [ipaq] computers are small hand-held devices. The most powerful iPaQ machines on sale have a 400MHz ARM based CPU that is of comparable speed to a 300MHz Intel Celeron CPU, with 64M of RAM and 48M of flash storage.

An iPaQ is not designed for memory upgrades. There are some companies that perform such upgrades, but they don’t support all models, and this will void your warranty. Therefore you are stuck with a memory limit of 64M.

The flash storage in an iPaQ can only be written a limited number of times; this, combined with the small amount of storage, makes it impossible to use a swap space for virtual memory unless you purchase a special sleeve for using an external hard drive. Attaching an external hard drive such as the IBM/Hitachi Microdrive is expensive and bulky. Therefore if you have a limited budget then storage expansion (for increased file storage or swap space) is not an option.

For storing files, the 32M file system can contain quite a lot. The Familiar distribution is optimised for low overheads (no documentation or man pages) and all programs are optimised for size not speed. Also the JFFS2 [jffs2] file system used by Familiar supports several compression algorithms including the Lempel-Ziv algorithm implemented in zlib, so more than 32M of files can fit in storage.

For a system such as SE Linux to be viable on an iPaQ it has to take up a small portion of the 32M of flash storage and 64M of RAM, and not require any long CPU intensive operations.

Finally the screen of an iPaQ only has a resolution of 240×320 and the default input device is a keyboard displayed on the screen. This makes an iPaQ unsuitable for interactive tasks that involve security contexts as it takes too much typing to enter them and too much screen space to display them. As a strictly end-user device this does not cause any problems.

CPU Requirements

Benchmarks that were performed on SE Linux operational overheads in the past show that trivial system calls (reading from /dev/zero and writing to /dev/null) can take up to 33% longer to complete when SE Linux is running, but that the overhead on complex operations such as compiles is so small as to be negligible [freenix]. The machines that were used for such tests had similar CPU power to a modern iPaQ.

One time consuming operation related to SE Linux installation is compiling the policy (which can take over a minute depending on the size of the policy and the speed of the CPU). This, however, is not an issue for an iPaQ: compiling the policy takes over a megabyte of permanent storage and 5 megs of temporary file storage, and requires many tools that are not normally installed (make, m4, the SE Linux policy compilation program checkpolicy, etc). These storage requirements make it impractical to compile policy on the iPaQ, and typical use involves the configuration being developed on other machines for deployment to the iPaQ. So the time taken to compile the policy database is not relevant.

The only SE Linux operation which can take a lot of time that must be performed on an iPaQ is labeling the file system. The file system must be relabeled when SE Linux is first installed, and after an upgrade. On my iPaQ (H3900 with 400MHz X-Scale CPU) it takes 29.7 seconds of CPU time to label the root file system which contains 2421 files. For an operation that is only performed at installation or upgrade time 29.7 seconds is not going to cause any problems. Also the setfiles program that is used to label the file system could be optimised to reduce that time if it was considered to be a problem.

I conclude that for typical use of a hand-held machine SE Linux only requires the CPU power of an iPaQ. In fact the CPU use is small enough that even the older iPaQ machines (which had half the CPU power) should deliver more than adequate performance.

Kernel Resource Use

To compare the amounts of disk space and memory I compiled three kernels. One was 2.4.19-rmk6-pxa1-hh13 with the default config for the H3900 iPaQ. One was a SE Linux version of the same kernel with the options CONFIG_SECURITY, CONFIG_SECURITY_CAPABILITIES, and CONFIG_SECURITY_SELINUX. Another was the same SE Linux kernel with development mode enabled (which slightly increases the size and memory).

For this project I have no need for the multi-level-security (MLS) functionality of SE Linux or the options for labelled networking and extended socket calls. This optional functionality would increase the kernel size. I am focussing on evaluating the choice of whether or not to use SE Linux for specific applications; once you have decided to use SE Linux you would then need to decide whether the optional functionality provides benefits that justify the extra disk space and memory use.

The kernel binaries are 658648 bytes for a non-SE kernel, 704708 bytes for the base SE Linux kernel, and 705560 bytes for the development mode kernel. The difference between the kernel with development mode enabled and the regular one is that the development kernel allows booting without policy loaded, and booting in permissive mode (with the policy decisions not being enforced). For most development work a kernel with development mode enabled will be used; for this test it also allowed me to determine the resource consumption of SE Linux without a policy loaded.

To test the memory use of the different kernels I configured an iPaQ to not load any kernel modules. My test method was to boot the machine, login at the serial console, wait 30 seconds to make sure that all daemons have started, and run free to see the amount of memory that is free. This is not entirely accurate as random factors may result in different amounts of memory usage. However this is not as significant on the Familiar distribution: due to the use of devfs for device nodes and tmpfs for /var and /tmp, almost nothing is written to the root file system in the normal mode of operation, so two boots will be working on almost the same data.

From the results I looked at the total field (which gives the amount of RAM that is available for user processes after the kernel has used memory in the early stages of the boot process), and the used field, which shows how much of that has been used. The kernel message log gives a break-down of RAM that is used by the kernel for code and data in the early stages of boot; however that is not relevant to this study, as only the total amount used matters.

The total memory available was reported as 63412k for the non-SE kernel, 63308k for the SE Linux kernel, and 63300k for the development mode kernel. So SE Linux takes 104k of kernel memory early in the boot process and 112k if you use the development mode option.

The memory reported as used varied slightly with each boot. For the vanilla kernel the value 18256k was reported in two out of four tests, with values of 18252k and 18260k also being reported. I am taking the value 18256k as the working value which I consider accurate to within 8k.

For a standard SE Linux kernel the amount reported as used was 19516k in three out of six tests with the values of 19532k, 19520k, and 19524k also being returned. So I consider 19516k as the working value and the accuracy to be within 16k.

For the SE Linux kernel with development mode enabled the memory used was 19516k in three out of four tests, and the other test was 19524k. So the difference between the development mode kernel and the regular SE Linux kernel is only 8K of kernel memory in the early stages of the boot process.

Finally I did a test of a development mode kernel with no policy loaded. The purpose of this test was to determine how much memory is used on a SE Linux kernel if the SE Linux code is not loading the policy. For this the memory reported as used was 18292k in three out of five tests, with the values of 18296k and 18300k also being returned.

Kernel memory used
  non-SE            18256k
  SE, no policy     18292k
  SE, with policy   19516k

So an SE Linux kernel without policy loaded uses approximately 36K more memory after boot than a non-SE kernel in addition to the 104k or 112k used in the early stages of boot.

With a small policy loaded (360 types and 23,386 rules for a policy file that is 583771 bytes in size) the memory used by the kernel is about 1224k for the policy and other SE Linux data structures. The policy could be reduced in size as there are many rules which would only apply to other systems (the sample policy is quite generic and was quickly ported to the iPaQ), although there may be other areas of functionality that are desired which would use any saved space.

So it seems that when using SE Linux the memory cost is 104k when the kernel is loaded, and a further 1260k for SE Linux memory structures and policy when the boot process is complete. The total is 1364k of non-swappable kernel memory out of the 64M of total RAM in an iPaQ; this is about 2% of RAM.

All tests were done with GCC 3.2.3, a modified Linux 2.4.19, and an X-scale CPU. Different hardware, kernel version, and GCC version will give different results.

Porting Utilities

The main login program used on the Familiar [familiar] distribution is gpe-login, which is an xdm type program for a GUI login. This program had to be patched to check a configuration file and the security policy to determine the correct security context for the user and to launch their login shell in that context. The patch for this functionality made the binary take 4556 bytes more disk space in my build (29988 bytes for the non-SE build compared to 34544 bytes for the version with SE Linux support).

The largest porting task was to provide SE Linux support in Busybox [busybox]. Busybox provides a large number of essential utility programs that are linked into one program. Linking several programs into one reduces disk space consumption by sharing the overhead of process startup and termination code across many programs. On ARM it seems that the minimum size of an executable generated by GCC 3.2.3 is 2536 bytes. In the default configuration of Familiar, Busybox is used for 115 commonly used utilities; having them in one program means that the 2.5K overhead is paid once, not 115 times. So approximately 285K of uncompressed disk space is saved by using Busybox if the only saving is from this overhead. The amount of disk space used for initialisation and termination code would probably increase the space used by more than 80% if all the applets were compiled separately (my build of Busybox for the iPaQ is 337028 bytes).

The programs of most immediate note in Busybox are ls, ps, id, and login. ls needs the ability to show the security contexts of files, ps needs to show the security contexts of running processes, and id needs to show the context of the current process. Also the /bin/login applet had to be modified in the same manner as the gpe-login program. These changes resulted in the binary being 5600 bytes larger (337028 bytes for a non-SE version and 342628 bytes for the version with SE Linux support).

Busybox Wrappers for Domain Transition

In SE Linux different programs run in different security domains. A domain change can be brought about by using the execve_secure() system call, or it can come from an automatic domain transition. An example of an automatic domain transition is when the init process (running in the init_t domain) runs /sbin/getty which has the type getty_exec_t, which causes an automatic transition to the domain getty_t. Another example is when getty runs /bin/login which has the type login_exec_t and causes an automatic transition to the domain local_login_t. This works well for a typical Linux machine where /sbin/getty and /bin/login are separate programs.
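In the example-policy source of that era such automatic transitions were written with the domain_auto_trans() m4 macro. A sketch of the two transitions just described (the exact arguments in the real policy may differ):

```
# init_t executing a file of type getty_exec_t enters getty_t
domain_auto_trans(init_t, getty_exec_t, getty_t)
# getty_t executing a file of type login_exec_t enters local_login_t
domain_auto_trans(getty_t, login_exec_t, local_login_t)
```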

When using Busybox the getty and login programs will both be sym-links to /bin/busybox, and the type of the file as used for domain transitions will be the type of /bin/busybox, which is bin_t. SE Linux does not perform domain transitions based on the type of the sym-link, and it assigns security types to inodes, not file names (so a file with multiple hard links will only have one type). This means that we can’t have a single Busybox program automatically transitioning into the different domains.

There are several possible solutions to this problem. One partial solution would be to have Busybox use execve_secure() to run copies of itself in the appropriate domain. Busybox already has similar code for determining when to change UID, so that some of the Busybox applets can be effectively SETUID while others aren’t. The SETUID management of Busybox requires that it be SETUID root, and involves some risk (any bug in Busybox can potentially be exploited to provide root access). Providing a similar mechanism for transitioning between SE Linux security domains would have the same security problem: if you crack one of the Busybox applets you could then gain full access to any domain that it could transition to. This does not provide adequate security. Also it would only work for transitions between privileged domains (it would not work for transitions from unprivileged domains). I did not even bother writing a test program for this case as it is not worth considering, due to the lack of security and functionality.

A better option is to split the Busybox program into smaller programs so transitions can work in the regular manner. With the current range of applets that would require one program for getty, one for login, one for klogd, one for syslogd, one for mount and umount, one for insmod, rmmod, and modprobe, one for ifconfig, one for hwclock, one for all the fsck type programs, one for su, and one for ping. Of course there would also be one final build of busybox with all the utility programs (ls, ps, etc) which run with no special privilege. To test how this would work I compiled Busybox with all the usual options apart from modutils, and I did a separate build with only support for modutils. The non-modutils build was 323236 bytes and the build with only modutils was 37764 bytes. This gave a total of 361000 bytes compared to 342628 bytes for a single image, so an extra 18372 bytes of disk space was required for doing such a split.

Splitting the binary in such a simple fashion would likely cost 18K for each of the eleven extra programs. If we changed the policy to have syslogd and klogd run in the same domain (and thus the same program) and have hwclock run with no special privileges (i.e. the domain that runs it needs access to /dev/rtc) then there would only be nine extra programs, for a cost of approximately 162K of disk space. This disk space use could be reduced by further optimisation of some of the applets; for example in the case of ifconfig the code to check argv[0] to determine the applet name could be removed. A simple split in this manner would also make it more difficult for an attacker to make the program perform unauthorized actions. When a single program has /bin/login functionality as well as /bin/sh there is potential for a buffer overflow in the login code to trigger a jump to the shell code under the control of the attacker! When the shell is a separate program that can only be entered through a domain transition it is much more difficult to use an attack on the login program to gain further access to the system.

Finally if we have a single Busybox program that includes applets running in different domains we need to make some significant changes to the policy. The default policy has assert rules to prevent compilation of a policy that contains mistakes which may lead to security holes. For the domains getty_t, klogd_t, and syslogd_t there are assertions to prevent them from executing other programs without a domain transition, and to prevent those domains being entered through executing files of types other than the matching executable type (this requires that each of those domains have a separate executable type, i.e. they are not all the same program). Adding policy which requires removing these assertions weakens the security of the base domains and also makes the policy tree different from the default tree which has been audited by many people.

Another way of doing this which uses less disk space is to have a wrapper program such as the following:

#include <unistd.h>
#include <string.h>

int main(int argc, char **argv, char **envp)
{
  /* ptr is the basename of the executable that is being run */
  char *ptr = strrchr(argv[0], '/');
  if (ptr)
    ptr++;
  else
    ptr = argv[0];

  /* basename must match one of the allowed applets, otherwise
     it's a hacking attempt and we exit */
  if (strcmp(ptr, "insmod")
      && strcmp(ptr, "modprobe")
      && strcmp(ptr, "rmmod"))
    return 1;
  return execve("/bin/busybox", argv, envp);
}
This program takes 2912 bytes of disk space. The idea would be to have a copy of it named /sbin/insmod with type insmod_exec_t, with symlinks /sbin/rmmod and /sbin/modprobe pointing to it. Then when insmod, rmmod, or modprobe is executed an automatic domain transition to the insmod_t domain will take place, and the Busybox program will be executed in the correct context for that applet.

This option is easy to implement, and one advantage is that there is no need to change the Busybox program. The fact that the entire Busybox code base is available in privileged domains is a minor weakness. Implementing this takes about 2900 bytes of disk space for each of the nine domains (or seven domains, depending on whether you have separate domains for klogd and syslogd and whether you have a domain for hwclock). It will take less than 33K or 27K of disk space (depending on the number of domains). This saves about 130K over the option of having separate binaries for implementing the functionality.

A final option is to have a single program act as a wrapper and change domains appropriately. Such a program would run in its own domain, with an automatic domain transition rule to allow it to be run from all source domains. Then it would look at its parent domain and the type of the symlink to determine the domain of the child process. For example, I want to have insmod run in domain insmod_t when run from sysadm_t. So I have an automatic transition rule to transition from sysadm_t to the domain for my wrapper (bbwrap_t). The wrapper then determines that its parent domain is sysadm_t, determines that the type of the symlink for its argv[0] is insmod_exec_t, and asks the kernel what domain should be entered when a process in sysadm_t executes a program of type insmod_exec_t; the answer is insmod_t. So the wrapper then uses the execve_secure() system call to execute Busybox in the insmod_t domain and tell it to run the insmod applet.

I implemented a prototype program for this. For my prototype I used a configuration file to specify the domain transitions instead of asking the kernel. The resulting program was 6K in size (saving 27K of disk space over the multiple-wrapper method, and 156K of disk space over the separate programs method), although it did require some new SE Linux policy to be written which takes a small amount of disk space and kernel memory.

One problem with this method is that it allows security decisions to be made by an application instead of the kernel. It is preferable that only the minimum number of applications can make such security decisions. In a typical configuration of SE Linux the only such applications will be login, an X login program (in this case gpe-login), cron (which is not installed in Familiar), and newrole (the SE Linux utility for changing the security context, which operates in a similar manner to su).

The single Busybox wrapper is more of a risk than most of these other programs. The login programs are only executed by the system and can not be run by the user with any elevated privileges, which makes them less vulnerable to attack. Newrole is well audited, and the domains it can transition to are limited by the kernel to only include domains that might be used for a login process (dangerous domains such as login_t are not permitted).

Due to the risks involved with a single Busybox wrapper, and the fact that the benefit of using 6K of disk space instead of 33K is very small (and is further reduced by the increase in kernel memory for the larger policy), I conclude that it is a bad idea.

I conclude that the only viable methods of using Busybox on an SE Linux system are having separate wrapper programs for each domain to be entered (taking 33K of extra disk space and requiring minor policy changes), or having entirely separate programs compiled from the Busybox source for each domain (taking approximately 162K of extra disk space with no other problems). With some careful optimisation the 162K of overhead for splitting the Busybox program could be reduced. If 162K of disk space can be spared (which should not be a problem with a 32M file system) then splitting Busybox is the right solution.

Removed Functionality

A hand-held distribution doesn’t require all the features that are needed on bigger machines such as servers, desktop workstations, and laptops. Therefore we can reduce the size of the SE Linux policy and the number of support programs to save disk space and memory.

For a full SE Linux installation there are wrappers for the commands useradd, userdel, usermod, groupadd, groupdel, groupmod, chfn, chsh, and vipw. These can possibly be removed as there is less need for adding, deleting, or modifying users or groups on a hand-held device in the field. These programs would take 27K of disk space if they were included.

A default installation of Familiar does not include support for /etc/shadow, and therefore there is no need for the wrapper programs for the administrator to modify users’ accounts. However I think that the right solution here is to add /etc/shadow support to Familiar rather than removing functionality from SE Linux. This will slightly increase the size of the login programs.

In a full install of SE Linux there are programs chsid and chcon to allow changing the security type of files. These are of less importance for a small device. There will be fewer types available, and the effort of typing in long names of security contexts will be unbearable on a touch-screen input device. A hand-held device has to be configured to not require changing the contexts of files, and therefore these programs can be removed.

In the Debian distribution there is support for installing packages on a live server and having the security contexts automatically assigned to the files. As iPaQs are used in a different environment I believe that there is less need for such upgrades, and such support could optionally be removed to save disk space. I have not written the code for this yet, but I estimate it to be about 100K.

The default policy for SE Linux has separate domains for loading policy and for policy compilation. On the iPaQ we can’t compile policy due to not having tools such as m4 and make, so we can skip the compilation program and its policy. Also the policy for a special domain for loading new policy is not needed, as the system administration domain sysadm_t can be used for this purpose. It is even possible to save 3500 bytes of disk space by not including the program to load the policy (a reboot will cause the new policy to take effect).

A server configuration of SE Linux (or a full workstation configuration) includes the run_init program to start daemons in the correct security context. On a typical install of Familiar there are only three daemons, a program to manage X logins, a daemon to manage bluetooth connections, and the PCMCIA cardmgr daemon. For restarting these daemons it should be acceptable to reboot the iPaQ, so run_init is not needed.

Disk Space and RAM Use

In the section on kernel resource usage I determined that the kernel was using 1364K of RAM for SE Linux with a 583,771 byte policy comprising 23,386 rules loaded. Since the time that I performed those tests I reduced the policy to 455,422 bytes and 18,141 rules which would reduce the kernel memory use. I did not do any further tests as it is likely that I will add new functionality which uses the memory I have freed. So I can expect that 1.3M of kernel memory is taken by SE Linux.

The SE Linux policy that is loaded by the kernel takes 67K on disk when compressed. The file_contexts file (which specifies the security contexts of files for the initial installation and for upgrades) takes 24K. The kernel binary takes 64K more disk space for the SE Linux kernel. So the kernel code and SE Linux configuration data takes 156K of disk space (most of which is compressed data).

The program setfiles is needed to apply the file_contexts data to the file system. Setfiles takes 20K of disk space. The file_contexts file could be reduced in size to 1K if necessary to save extra disk space, but in my current implementation it can not be removed entirely. In Familiar a large number of important system directories (such as /var) are on a ramfs file system, and I am using setfiles to label /mnt/ramfs. So far it has not seemed beneficial to have a small file_contexts file for booting the system and an optional larger one for use when installing new packages or upgrading, but this is an option to save 23K. Another option would be to write a separate program that hard-codes the security contexts for the ramfs. It would be smaller than setfiles and not require a file_contexts file, thus saving 30K or more of disk space. Currently this has not seemed worth implementing as I am still in a prototype phase, but it would not be a difficult task. Also if such a program was written then the next step would be to use a [jffs2] loop-back mount to label the root file system on a server before installation to the iPaQ (so that setfiles never needs to run on the iPaQ).
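For reference, file_contexts entries are regular expressions mapping path names to security contexts. A minimal sketch of what a cut-down file for labelling the ramfs might look like (the paths and contexts here are illustrative, not the actual Familiar policy):

```
/mnt/ramfs              system_u:object_r:var_t
/mnt/ramfs/run(/.*)?    system_u:object_r:var_run_t
```

Reducing the file to a handful of entries like these is what makes the 1K figure above plausible.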

The patches for the gpe-login and busybox programs to provide SE Linux login support and modified ls, ps, and id programs cause the binaries to take a total of 10K extra disk space.

Splitting Busybox into separate programs for each domain will take an estimated 162K of disk space.

The total of this is approximately 348K of additional disk space for a minimal installation of SE Linux on an iPaQ. Adding support for /etc/shadow and other desirable features may increase that to as much as 450K depending on the features chosen. However if you use multiple Busybox wrappers instead of splitting Busybox then the disk space for SE Linux could be reduced to less than 213K. If you then replaced setfiles for the system boot labeling of the ramfs then it could be reduced to 190K.


Security Enhanced Linux on a hand-held device can consume less than 1.3M of RAM and less than 400K of disk space (or less than 200K if you really squeeze things). While the memory use is larger than I had hoped it is within a bearable range, and it could potentially be reduced by changing the kernel code to optimise for reduced memory use. The disk space usage is trivial and I don’t think it is a concern.

I believe that the benefits of reducing repair and maintenance problems with hand-held devices that are deployed in the field through better security outweigh the disadvantage of increased memory use for many applications.

All source code and security policy code related to this article will be on my web site [my-site].


SE Linux Magic

Here is a complete list of entries for /etc/magic related to SE Linux.

# SE Linux policy database for Fedora versions less than 5, RHEL 4, and Debian before Etch
0      lelong  0xf97cff8c      SE Linux policy
>16    lelong  x              v%d
>20    lelong  1      MLS
>24    lelong  x      %d symbols
>28    lelong  x      %d ocons

# SE Linux policy modules *.pp reference policy for Fedora 5 to 9,
# RHEL5, and Debian Etch and Lenny.
0      lelong  0xf97cff8f      SE Linux modular policy
>4      lelong  x      version %d,
>8      lelong  x      %d sections,
>>(12.l) lelong 0xf97cff8d
>>>(12.l+27) lelong x          mod version %d,
>>>(12.l+31) lelong 0          Not MLS,
>>>(12.l+31) lelong 1          MLS,
>>>(12.l+23) lelong 2
>>>>(12.l+47) string >\0        module name %s
>>>(12.l+23) lelong 1          base

# for SE Linux policy source for reference policy
0      string  policy_module(  SE Linux policy module source
1      string  policy_module(  SE Linux policy module source
2      string  policy_module(  SE Linux policy module source

0      string ##\ <summary>    SE Linux policy interface source

0      search  gen_context(    SE Linux policy file contexts

0        search        gen_sens(        SE Linux policy MLS constraints source
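The lelong values above are stored little-endian, so on disk the kernel policy magic 0xf97cff8c appears byte-reversed. A quick sketch of verifying this by hand, writing the four magic bytes (octal escapes for portability) instead of using a real policy file:

```shell
# write the 4-byte kernel policy magic 0xf97cff8c in little-endian order
# (octal: \214=0x8c  \377=0xff  \174=0x7c  \371=0xf9)
printf '\214\377\174\371' > policy.bin
# dump the bytes back; a real binary policy file begins the same way
od -A n -t x1 -N 4 policy.bin
```

The dump reads 8c ff 7c f9, matching the first magic entry above.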

Polyinstantiation of directories in an SE Linux system


I presented this paper at the 2006 SAGE-AU conference.


This paper describes the problems related to shared directories such as /tmp and /var/tmp as well as problems related to having multiple SE Linux security contexts used for accessing a single home directory. It then provides detailed information on the solution to this problem that has been implemented with polyinstantiated directories by using the pam_namespace module.


It is a long-standing Unix tradition that the directories /tmp and /var/tmp are used for temporary storage by all programs and on behalf of all users. Historically this was not considered a problem; however, it is now recognised that the use of such a shared directory is vulnerable to race-condition attacks with symbolic links.
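The race in question can be sketched with a few shell commands (a scratch directory stands in for /tmp, and all names are illustrative):

```shell
dir=$(mktemp -d)    # stands in for the shared /tmp
# attacker: pre-create a symlink at a temporary file name the victim
# is known to use
ln -s "$dir/path-of-attackers-choosing" "$dir/victim.tmp"
# victim: writes what it believes is its own temporary file; the write
# follows the symlink
echo "sensitive output" > "$dir/victim.tmp"
# the data now sits at the attacker's chosen path
cat "$dir/path-of-attackers-choosing"
```

With polyinstantiated directories the attacker and victim never see the same /tmp, so the pre-created link is simply not there for the victim.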

Another problem is that in some situations a file name may convey secret information. If the file in question is in a public directory such as /tmp or /var/tmp (which may be an unintended result of a command by the user) then this will represent an information leak if there are any less privileged processes running on the machine.

Past attempts to deal with these problems have included restrictions on creating sym-links and hiding file names, which have both been inadequate. The solution chosen for use with SE Linux (which is also designed to work without SE Linux) is to have polyinstantiated directories based on Unix account name and/or SE Linux context. This means that every user will see a different version of the directory in question based on their context.

In the past this feature has been implemented as part of Multi-Level Security (Dr. Rick Smith [HREF2]) systems under the name multi-level directories. I believe that the multi-level directory variant of this solution was based on file system support, while the Linux support for this type of operation that I will describe is based in the VFS layer and thus does not require modification to any of the file systems that may be used.

Summary of Attacks that can be Prevented by Poly-Instantiated Directories

In this paper I am considering the following attack scenarios:

  1. Attack by user on user (including the case of a non-PI user as attacker or victim)
  2. Attack by user on daemon (including the case of a non-PI user as attacker)
  3. Attack by non-root daemon on user
  4. Attack by root daemon on user (will always succeed without SE Linux)

Each of the above four attack scenarios may occur with one of the following three attacks:

  1. Race-condition attacks on the integrity of processes and data (sym-link attacks, race conditions on renaming objects, or pre-creating a file to take ownership of data)
  2. Leaks of confidential data via secrets in file names
  3. Denial Of Service (DOS) attacks based on race conditions and pre-allocating file/directory names

Other Solutions

One attempt at solving this problem that has been implemented in some Linux security systems is to hide file names. This can work as long as it is not possible to guess any of the file names in question. If a file name can be guessed then the hostile party can attempt to create a new file of the same name; failure to create the file indicates that it already exists. But this only solves the problem of secret data in file names.

Another partial attempt at dealing with this problem is controlling the ability to create hard-links and/or sym-links to try and prevent race conditions. A well-known implementation of this is in the OpenWall kernel patch [HREF3] which prevents the user from creating hard-links to files to which they have no write access and from creating sym-links in a +t directory (a directory such as /tmp or /var/tmp) which point to a file that they don’t own. It also prevents writing to named pipes in +t directories which are owned by a different user. This deals with some of the issues related to race-condition attacks but there are potential issues that it does not address, such as a hostile user creating sym-links to their own files to divert output or creating a file with no write permissions as a denial of service against a program that uses a fixed file name.

But this only deals with the case of race conditions used to attack system integrity. It does not prevent DOS attacks or protect secret data when it is used in a file name.

SE Linux Requirements for Shared Directories

SE Linux does not attempt to hide file names in a directory. If the name of a file contains secret data then this can be a security problem in shared directories such as /tmp and /var/tmp; it is an issue that has to be solved outside of the core SE Linux code base.

The SE Linux strict and mls policies provide good protection against most race condition attacks. Most domains are not permitted to create hard links to privileged files (types such as etc_t). Daemons are all protected from sym-link attacks by each other due to being denied access to sym-links created by other daemons and by users, and users are given similar protection against attack by daemons (both root and non-root). The main benefit for PI directories in strict and mls SE Linux systems is for protection against users attacking other users, in most cases large numbers of users will have the same SE Linux domain and therefore there will not be any effective protection against such attacks in the domain-type model (the integrity protection part of SE Linux).

When a Unix account is associated with more than one SE Linux context it is necessary to have multiple instances of the home directory to match the SE Linux context. If there is only one instance of the home directory and different SE Linux contexts are used for user logins then one of the contexts may be denied access to shared files such as .bashrc and .bash_history, or they may serve as information leaks. This use creates a requirement for PI home directories in SE Linux that does not exist for non-SE systems.

The problem of multiple logins with different contexts can occur in the older version of SE Linux (known as the example policy) that was used in Red Hat Enterprise Linux 4 and Fedora Core versions 2 to 4 when running the strict policy that permits multiple roles to be allocated to a user. But this is more of an issue with the newer versions of SE Linux policy that have functional support for MLS labels and the new MCS policy that permits different sets of categories to be assigned to a user session.

Non-SE Linux Requirements for Shared Directories

Polyinstantiation of shared directories also provides benefits for non-SE Linux systems, in fact there are probably more benefits to be gained from using this on non-SE systems. The SE Linux strict policy provides protection against sym-link race condition attacks launched by users against users in different roles, attacks by users against daemons, and attacks by daemons against users. The SE Linux MLS policy provides these benefits and also protects against attack from programs running at different levels, for example a process running at sensitivity level s2 could not be tricked into leaking data to a program running at level s1, even if the two programs ran in the same domain and with the same UID. Also SE Linux prevents unprivileged processes from creating hard links to files that are important to system integrity or data confidentiality (which is almost a complete solution to hard-link based attacks).

A non-SE system has none of the above protections and only has the Unix UID to protect both system integrity and confidentiality of data.

Linux Kernel Support for Poly-Instantiated Directories

In recent versions of Linux the current list of mounted file systems is available from the /proc/mounts file, which is a sym-link to /proc/self/mounts; this permits displaying the name-space which applies to the current process. If /etc/mtab is a sym-link to /proc/mounts then programs such as df will display information on the mount points that are associated with the name-space of the process.
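This is easy to confirm on a current system:

```shell
# /proc/mounts is a symbolic link into the per-process directory, so the
# mount list you read is always that of the reading process's name-space
readlink /proc/mounts
# any process can therefore inspect its own name-space directly
head -n 3 /proc/self/mounts
```

The readlink output is self/mounts, which is why two processes in different name-spaces reading the "same" /proc/mounts see different mount lists.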

The initial support for PI directories was via the CLONE_NEWNS flag to the clone() system call. This flag causes the child process to be allocated a separate name space. That process and each child process that it launched would have a separate name space to the process which called clone(), and to any process that resulted from another call to clone() with the CLONE_NEWNS flag. The problem with this was the requirement that applications be modified to use clone() with this flag instead of using fork().

To solve this problem a new system call sys_unshare [HREF4] was added to the Linux kernel. The unshare system call can create a separate name-space for mounted file systems among other things (the set of kernel data structures that can be unshared has been steadily increased since the introduction of unshare).

The unshare system call requires the CAP_SYS_ADMIN capability but does not require a fork, exec, or other operation. So it can be called from a PAM module and thus work with unmodified login programs. Also it is possible for multiple PAM modules to unshare different kernel data structures.
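A rough illustration of the effect, using the unshare(1) wrapper from util-linux rather than the raw system call (the capability check means this needs root, so the sketch guards for that):

```shell
# if we may create a new mount name-space, do a bind mount inside it;
# the mount is visible to the new shell but not to the rest of the system
if unshare --mount true 2>/dev/null; then
    unshare --mount sh -c \
        'mount --bind /var/tmp /mnt && grep -q " /mnt " /proc/self/mounts && echo mounted'
else
    echo "skipped (needs CAP_SYS_ADMIN)"
fi
```

After the inner shell exits, no trace of the bind mount remains anywhere, which is exactly the property a per-login name-space relies on.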

Shared Subtrees

One obvious problem with the functionality described in the previous section is the situation where the administrator wants to mount file systems and have all users see them, or have daemons mount file systems (such as autofs).

The solution to this is a development known as Shared Subtrees [HREF5]. This gives the option of specifying that certain subtrees will not be shared. For example if the directories /tmp and /var/tmp are being instantiated then the following commands could be run from a system boot script to cause all other mount operations to propagate to all users:

mount --make-shared /
mount --bind /tmp /tmp
mount --make-private /tmp
mount --bind /var/tmp /var/tmp
mount --make-private /var/tmp

The above commands make the root of the name-space shared and then make /tmp and /var/tmp private. Note that the --make-private option to the mount command only applies to mount points. As on my test system both /tmp and /var/tmp are on the root file system I have to bind mount them to themselves to have a mount point that can be made private. Be aware that if you don’t correctly exclude the PI directories from the shared name space then each user who logs in may get PI directories under another user’s directories, and things generally won’t work.
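The propagation state of each mount can be checked without running any mount commands, via the optional fields of /proc/self/mountinfo (a sketch; the field layout is per the kernel's proc documentation):

```shell
# field 5 of each mountinfo line is the mount point; the optional field
# that follows the options contains "shared:N" for shared mounts
awk '$5 == "/" { if ($7 ~ /^shared/) print "/ is shared"; else print "/ is private"; exit }' \
    /proc/self/mountinfo
```

With a recent util-linux, findmnt -o TARGET,PROPAGATION reports the same information more readably; checking /tmp and /var/tmp this way after boot is a quick sanity test of the commands above.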

Design Overview of PI Directories in Linux

The initial design for PI directories was based on having them only created for user sessions at login time by PAM [HREF6] or similar mechanisms. To implement this the PAM module will create a directory under the directory that is being instantiated, create an unshared name space, and then bind mount the new directory over the PI directory. For example if /tmp is to be PI for user rjc then the directory /tmp/tmp.inst-rjc-rjc would be created as the instance of /tmp for the user rjc. After the directory is created an unshared name space would be created via the unshare system call. Finally in the new name space a bind mount would be used to replace /tmp with /tmp/tmp.inst-rjc-rjc, the bind mount operation would be equivalent to the command:

mount --bind /tmp/tmp.inst-rjc-rjc /tmp

The directory that was created was given the Unix permission mode 1777 (all users can create files and directories, but it is only permitted to remove files or directories that you own). This solved many of the problems related to users attacking users and users attacking daemons. But it does not solve the problem of a daemon attacking a user, as the daemon has access to the parent of the PI directories. Also there is a configuration option to have a user excluded from the PI directory system; a user who is granted such access (either deliberately or accidentally) would also be able to attack other users. As all directories were created under /tmp with mode 1777 there was no protection of secret file names from daemons and users who were outside the PI system (for most systems I expect that there will be some users who will be excluded from the PI configuration).

Another problem with the initial implementation was that the directories were all created at login time, therefore a hostile process could guess the names and pre-allocate directories to allow taking over ownership and potentially allowing other race condition attacks. For example any privileged process which relies on files not being unlinked or renamed for correct operation would operate incorrectly (and possibly be subject to attack) if run in a situation where the /tmp directory did not have the mode 1777 to prevent such rename and unlink operations.

Finally, the initial implementation did not have a fall-back case for when the desired name for a PI directory had been taken by a file, and this would cause the login process to abort; it could be used as a DOS attack against user login sessions.

I have identified two possible solutions to the problem of DOS attacks against the pam_namespace module. One solution is to have it check whether the PI directory already exists; if it exists but has the wrong permissions (either Unix or SE Linux), or if there is an object other than a directory using that name, then it would try creating the directory under a different name (maybe the original name with “.1” appended) and keep trying different names until it finds one that is available. This solution does not solve the problem of protecting secret file names.

The other solution I have identified solves the problems of DOS attacks and race conditions as well as the leaks of secret data in file names. This requires that a directory be pre-allocated on the system to contain all PI directories. So instead of a PI directory having the name /tmp/tmp.inst-rjc-rjc it might have the name /tmp/.inst/tmp.inst-rjc-rjc. The /tmp/.inst directory would be created and/or verified at system boot time and would have Unix permission mode 000 (the capability dac_override, which every login program possesses, would be required to access it) and would also have a SE Linux context that permits only very restrictive access. Therefore non-root daemons will be denied access to /tmp/.inst and would not be able to launch attacks on users via the /tmp directory. On SE Linux systems root daemons will also be denied such access. If a user session is launched with a shared system name space (through misconfiguration or unusual requirements) then they would also be denied access to the instances of /tmp used by other users.
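The permission arrangement can be sketched with plain shell commands (a scratch directory stands in for /tmp, and stat -c assumes GNU coreutils):

```shell
base=$(mktemp -d)                       # stands in for /tmp in this sketch
# create the per-user instance first, then lock the parent down
mkdir -m 1777 "$base/.inst"
mkdir -m 1777 "$base/.inst/tmp.inst-rjc-rjc"
chmod 000 "$base/.inst"                 # now only dac_override gets through
stat -c '%a' "$base/.inst"              # prints 0
```

After the chmod, a process without dac_override cannot even list the instance directories, let alone pre-create or rename entries in them.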

In the initial design of PI directories the aim was to confine users to prevent them from attacking the rest of the system, and such a confined user was still vulnerable to attack from outside. The second of the two solutions that I propose above is the one that I believe to be the best; it will protect users who have a PI version of a shared directory from attack by non-root daemons on a non-SE system, and from attack by root daemons as well on a SE Linux system. It would be a viable option for the sys-admin to give a single user a PI version of /tmp to protect the files for that user while allowing all other users access to the system shared name space.

At the time of writing we have agreement on the concept of using a naming system somewhat like /tmp/.inst/tmp.inst-something where the directory /tmp/.inst will have Unix mode 000 and restrictive SE Linux access controls. This will prevent daemons and users that are not included in the PI configuration from attacking daemons and users that have it enabled. This makes PI protect the user who has such a PI directory as well as protecting the rest of the system from that user. Note that at the time of writing there was no final agreement on the directory names, while the concept of a two-level directory is agreed the actual name of the directory in the default configuration is still to be resolved.

A feature that has been discussed and agreed in concept is to have the module check the permissions of the /tmp/.inst directory and abort the login process if the directory does not have Unix permission mode 000, root ownership, and a suitable SE Linux label (if SE Linux is enabled). There will be a configuration option to disable this functionality as not all systems will need this level of protection (and not all administrators will want a system to fail-closed on such a minor security issue).

Currently Released Code

As of the time of writing Fedora Rawhide has a shared object named pam_namespace.so that implements the basic functionality. To use it the PAM configuration files in the /etc/pam.d directory must be modified to have the following line at the end:

session    required    pam_namespace.so

The system will work if the pam_namespace object is not the last in the list, but the creation of the namespace may interfere with some other PAM modules (for example if a PAM module wanted to access files in the /tmp directory) and in general it is safest to have it last. The only situation in which you might not want to have pam_namespace as the last session module is if you are using pam_mkhomedir and also using pam_namespace to provide PI home directories. But currently pam_mkhomedir does not work correctly in situations where PI home directories are desired so this should not be an issue.

The most noteworthy parameter for the pam_namespace module is the optional parameter unmnt_remnt. This is used by programs that run from an unshared namespace and need to create another unshared namespace. The primary example of this is su; all other programs that perform actions which are similar in concept (i.e. they are run from a user session and launch a new session on behalf of another user) will have the same requirement.

The pam_namespace module uses the configuration file /etc/security/namespace.conf. This file currently has four parameters. The first gives a directory that should be instantiated (there is an option of $HOME for instantiating the user home directory). The second is the name of the real directory to be used for the instance, which may use the variables $USER and $HOME to represent the user-name and the home directory of the user. The third parameter may have as its value user, context, or both to indicate whether the instantiation should be based on user-name, SE Linux context, or both. The final parameter is a list of comma-separated user-names for accounts that are exempt from poly-instantiation of the directory in question. I believe that it will be standard practice to include root in this list of accounts (usually there will be no need for other users to be excluded).

Preventing Daemons From Attacking Each Other

In the currently released code there is no protection against daemons attacking each other. I believe that to take advantage of the full benefits offered by PI directories most daemons that run as non-root need the same protection so that they can not attack each other.

In Fedora there is a new program called runuser that will start a daemon as a user other than root. It is linked against PAM and can be configured to call the pam_namespace module. Once I finish debugging it, every time it launches a daemon as non-root it will be able to create a new unshared namespace. Non-root daemons that require the system shared name space will need to have their user-names specified in the namespace.conf file.

In Debian daemons are started via a program named start-stop-daemon. I plan to modify this program to have the necessary name-space functionality.

Interesting Features

There is no requirement that the PI directory be a sub-directory of the directory it replaces. In fact it can be on a different filesystem. If you have a separate filesystem for /tmp and don’t want to have a separate filesystem for /var/tmp too then you could just configure namespace.conf such that /var/tmp is instantiated under /tmp. The following is one sample configuration:

/tmp     /tmp/.inst/tmp.inst-$USER-       both      rjc,root
/var/tmp /tmp/.inst/var-tmp.inst-$USER-   both      rjc,root


With a two-level directory configuration (such as /tmp/.inst/whatever) we protect against all the attack scenarios that I consider (including an attack launched by a root-owned daemon on users when running SE Linux). The protection provided by the PI shared directory works in two ways, it protects the process with the unshared namespace and it also protects all other processes on the system against attack from that process.

Most operations that are described in this paper are usable in Fedora Rawhide as of the 31st of May 2006. The only operation that is not usable at this time is PI support for daemons. I hope to have PI working in Debian and have PI support for daemons in both Debian and Fedora Rawhide by the time this paper is published, I will describe my success in these efforts when I present this paper.

Hypertext References



The System Administrators Guild of Australia© 2006. The authors assign to The System Administrator’s Guild of Australia and other educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The authors also grant a non-exclusive licence to The System Administrators Guild of Australia to publish this document in full on the World Wide Web and on CD-ROM and in printed form with the conference papers and for the document to be published on mirrors on the World Wide Web.

Maildir Bulletin

This program is designed to deliver bulletin messages to thousands of users on a system. If you want to deliver mail to a large number of people to be read through POP or a local email program (such as mutt) then the traditional approach has been to set up an alias to map to all the users. The problem with this is that mail delivery is very slow, and even delivering to 1000 users on a fast machine can take a significant amount of time and system resources. Also if the message is large then you use a lot of disk space.

This program solves that problem by creating a single file with the message data and creating links to it from the ~/Maildir/new directory of every user who is in the group (it delivers mail based on Unix groups). This is fast (it can deliver a bulletin to 30,000 users in minutes on a slow machine), saves disk space, works with all Maildir client software, and makes it very easy to undeliver or modify a bulletin.
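The mechanism can be sketched as follows (all paths and the Maildir file name are illustrative; real deliveries use unique time.pid.hostname names):

```shell
home=$(mktemp -d)          # stands in for the system's home directory tree
mkdir -p "$home/bulletins" "$home/user1/Maildir/new" "$home/user2/Maildir/new"
printf 'Subject: bulletin\n\nHello all\n' > "$home/bulletins/msg1"
# one inode, many links: each user "receives" the same file with no copying
for u in user1 user2; do
    ln "$home/bulletins/msg1" "$home/$u/Maildir/new/1159800000.1.example"
done
stat -c '%h' "$home/bulletins/msg1"     # link count: 3 (master + 2 users)
```

Undelivering the bulletin is just a matter of removing the links; the data blocks are freed when the last link goes.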

Download links:

Benchmarking Mail Relays and Forwarders


I presented this paper at the OSDC conference in 2006.

The main page for my Postal benchmark is at My blog posts about benchmarking can be found at


Postal is a mail server benchmark that I wrote. The main components of it are postal for testing the delivery of mail via SMTP, rabid for testing mail download via POP, and a new program I have just written called bhm (for Black Hole Mailer) which listens on port 25 and sends all mail to /dev/null.

The new BHM program makes it possible to test the performance of mail relay systems. This means outbound smart host gateway systems, list servers, and forwarding services. The testing method is to configure three machines, one running Postal for sending the mail, the machine to be tested running a mail server or list server configuration, and the target machine running BHM.

The initial aim of this paper was to use artificial delays in the BHM program to simulate slow network performance and also to simulate various anti-spam measures and to measure how they impact a mail relay system. However I found other issues along the way which were interesting to analyse and will be useful to other people.

Description of Postal

The first component of the benchmark suite is Postal; this program sends mail at a controlled rate. When using it you have a list of addresses for senders and a separate list of recipients that will be used for sending random messages to a mail server. It sends the mail to a specified IP address to save the effort of configuring a test DNS server, because in the most common test scenario you have a single mail server that you want to test.

Postal sends at a fixed rate because in most MTAs an initial burst of mail will just go to the queue and will be actually delivered much more slowly. Often mail servers will take two minutes or more of sustained load to show the full performance impact. So I designed the programs in the Postal suite to display their results once per minute so you can watch the performance of the system over time and track the system load.

The most important thing to observe is that the load (in all areas) is below 100%. If any system resource (CPU, network IO, or disk IO) is used to 100% of capacity then the queue will grow without limit. Such unlimited queue growth leads to timeouts, which further increase the queue and cause the system to break down. An SMTP server has a continual load from the rest of the Internet, and if it goes more slowly the load will not decrease in the short-term. So a server that falls behind can simply become unusable; an unmanaged mail server can easily accumulate a queue of messages as old as a week through not having the performance required to deliver them as fast as they arrive.

The second program in the Postal suite is Rabid, a benchmark for POP servers. The experiments I document in this paper do not involve Rabid.

The most recent program is BHM which is written as an SMTP sink for testing mail relays. The idea is that a mail relay machine will have mail sent to it by Postal and then send it on to a machine running BHM. There are many ways in which machines that receive mail can delay mail and thus increase the load on the server. Unfortunately I spent all the time available for this paper debugging my code and tracking down DNS server issues so I didn’t discover much about the mail server itself.


For running Postal I used my laptop. Postal does much less work than any other piece of software in the system so I’m sure that my laptop is not a performance bottleneck. It is however a 1700MHz Pentium-M and probably the fastest machine in my network.

For the mail relay machine (the system actually being tested) I used a Compaq Evo desktop machine with a 1.5GHz P4 CPU, 384M of RAM, and an 80G IDE disk.

For running BHM I used an identical Compaq system.

The network is 100baseT full duplex with a CableTron SmartSwitch. I don’t think it will impact the performance. During the course of testing I did not notice any reports of packet loss or collisions.

All the machines in question were running the latest Fedora rawhide as of late September 2006.


To prepare for the testing I set up a server running BHM with 254 IP addresses to receive email (mail servers perform optimisations if they see the same IP address being used). The following script creates the interfaces:

for n in `seq 1 254`
do
  ifconfig eth0:$n 10.254.0.$n netmask 255.255.255.0
done

Test 1, BIND and MTA on the Same Server

The script in appendix 1 creates the DNS configuration for the 254 zones and the file of email addresses (one per zone) to use as destinations.
I configured the server as a DNS server and a mail relay. A serious mail server will often have a DNS cache running on localhost, so for my purposes having the primary zones configured on a DNS server on localhost seemed appropriate.

I initially tested with only a single thread of Postal connecting to the server. This means that there was no contention on the origin side and it was all on the MTA side. I tested Sendmail and Postfix with an /etc/aliases file expanding to 254 addresses (one per domain). All the messages had the same sender, and the message size was a random value from 0 to 10K.

The following table shows the amount of CPU time used by the server (from top output) and the load average as well as the mail server in use and the number of messages per minute sent through it.

MTA       Msgs/Minute  CPU Use  Load Average
Postfix   15           ~70%     9
Postfix   18           ~80%     9
Postfix   20           ~90%     11
Sendmail  10           ~50%     1
Sendmail  13           ~70%     2
Sendmail  15           ~95%     4.5
Sendmail  20           100%     *

Surprisingly the named process appeared to be using ~10% of the CPU at any given time when running Postfix and 25% when running Sendmail (I am not sure why Sendmail does more DNS work – both MTAs were in fairly default Fedora configurations). As CPU was the bottleneck for this operation, it appears that having the named process on the same machine might not be a good optimisation.

When testing 15 and 20 messages per minute with Sendmail the CPU use was higher than with Postfix, and in my early tests with 256M of RAM the kernel started reporting ip_conntrack: table full, dropping packet, which disrupted the tests by deferring connections.

The conntrack errors occur because the TCP connection tracking code in the kernel has a fixed number of entries, with the default chosen based on the amount of RAM in the system. With 256M of RAM in the test system the number of connections that could be tracked was just under 15,000; after upgrading the system to 384M of RAM it could track 24,568 connections and the problem went away. You can change the maximum number of connections with the command echo NUMBER > /proc/sys/net/ipv4/ip_conntrack_max, or for a permanent change edit /etc/sysctl.conf, add the line net.ipv4.ip_conntrack_max = NUMBER, and then run sysctl -p to load the settings from /etc/sysctl.conf. Note that adding more RAM will increase many system resource limits that affect the operation of the system.
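Collected as commands, the change looks like this (the value 32768 is an example only, not a figure from my tests):

```shell
# one-off change, lost on reboot (requires root):
echo 32768 > /proc/sys/net/ipv4/ip_conntrack_max

# persistent change, applied at boot and reloadable with sysctl -p:
echo 'net.ipv4.ip_conntrack_max = 32768' >> /etc/sysctl.conf
sysctl -p
```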

My preferred solution to this problem is to add more RAM because it keeps the machine in a default configuration which decreases the chance of finding a new bug that no-one else has found. Also an extra 128M of RAM is not particularly expensive.

After performing those tests I decided that I needed to add a minimum message size option to Postal (for results that had a lower variance).

I also decided to add an option to specify the list of sender addresses separately from the recipient addresses. When I initially wrote Postal the aim was to test a mail store system. So if you have 100,000 mail boxes then sending mail between them randomly works reasonably well. However for a mail relay system a common test scenario is having two disjoint sets of users for senders and recipients.

Test 2, Analysing DNS Performance

For the second test run I moved the DNS server to the machine that runs the BHM process (which is lightly loaded, as the mail relay doesn't send enough mail to cause BHM to take much CPU time).

I then did a test with Sendmail to see what the performance would be for messages that have a size of exactly 10K for the body which are sent from 254 random sender addresses (one per domain). I noticed that the named process rapidly approached 100% CPU use and was a bottleneck on system performance. It seems that the DNS load for Sendmail is significant!

I then analysed the tcpdump output from the DNS server and saw the following requests:

IP sendmail.34226 > DNS.domain: 61788+ A? (32)
IP sendmail.34228 > DNS.domain: 22331+ MX? (32)
IP sendmail.34229 > DNS.domain: 4387+ MX? (32)
IP sendmail.34229 > DNS.domain: 18834+ A? (37)

It seems that there are four DNS requests per recipient giving a total of 1016 DNS requests per message. When 15 messages per minute are delivered to 254 recipients that means 254 DNS requests per second plus some extra requests (lookups of the sending IP address etc).
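The arithmetic above can be checked directly; the figures are taken from the text, not newly measured:

```python
# Back-of-envelope check of the DNS query rates quoted above.
requests_per_recipient = 4      # A, MX, MX, A as seen in the tcpdump output
recipients = 254                # one recipient per domain
requests_per_message = requests_per_recipient * recipients
messages_per_minute = 15

requests_per_second = requests_per_message * messages_per_minute / 60
print(requests_per_message)   # 1016
print(requests_per_second)    # 254.0
```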

Also one thing I noticed is that Sendmail does a PTR query (reverse DNS lookup) on its own IP address for every delivery to a recipient. This added an extra 254 DNS queries to the total for Sendmail. I am sure that I could disable this through Sendmail configuration, but I expect that most people who use Sendmail in production would use the default settings in this regard.

Noticing that the A record is consulted first I wondered whether removing the MX record and having only an A record would change things. The following tcpdump output shows that the same number of requests are sent so it really makes no difference for Sendmail:

IP sendmail.34238 > DNS.domain: 26490+ A? (32)
IP sendmail.34240 > DNS.domain: 16187+ MX? (32)
IP sendmail.34240 > DNS.domain: 57339+ A? (32)
IP sendmail.34240 > DNS.domain: 50474+ A? (32)

Next I tested Postfix with the same DNS configuration (no MX record) and saw the following packets:

IP postfix.34245 > DNS.domain: 3448+ MX? (32)
IP postfix.34261 > DNS.domain: 50123+ A? (32)

The following is the result for testing Postfix with the MX based DNS configuration:

IP postfix.34675 > DNS.domain: 29942+ MX? (32)
IP postfix.34675 > DNS.domain: 33294+ A? (37)

It seems that in all cases Postfix does less than half the DNS work that Sendmail does, and as BIND is a bottleneck this means that Sendmail can't be used. So I excluded Sendmail from all further tests.

Below are the results for the Exim queries for sending the same message. Exim didn't check whether IPv6 was supported before doing an IPv6 DNS query. I filed a bug report about this and was informed that there is a configuration option to disable AAAA lookups, but it is agreed that looking up an IPv6 entry when there is no IPv6 support on the system (or no support other than link-local addresses) is a bad idea.

IP exim.35992 > DNS.domain: 43702+ MX? (32)
IP exim.35992 > DNS.domain: 7866+ AAAA? (37)
IP exim.35992 > DNS.domain: 31399+ A? (37)

The total number of DNS packets sent and received for each mail server was 2546 for Sendmail, 1525 for Exim, and 1020 for Postfix. Postfix clearly wins in this case for being friendly to the local DNS cache and for not sending pointless IPv6 queries to external DNS servers. For further tests I will use Postfix as I don’t have time to configure a machine that is fast enough to handle the DNS needs of Sendmail.
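A quick check of the claim that Postfix does less than half the DNS work of Sendmail, using the packet totals above:

```python
# DNS packet totals per MTA from the test run described above.
totals = {"Sendmail": 2546, "Exim": 1525, "Postfix": 1020}

ratio = totals["Postfix"] / totals["Sendmail"]
print(round(ratio, 2))   # 0.4
```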

Exim would also equal Postfix in this regard if configured correctly. However I am making a point of using configurations that are reasonably close to the Fedora defaults as that is similar to the common use on the net.

Test 3, Postfix Performance

To test Postfix performance I used the DNS server on a separate machine which had the MX records. I decided to test a selection of message sizes to determine the correlation between message size and system load.

Msg Size  Msgs/Minute  CPU Use  Load Average
10K       20           ~95%     11
0-1K      20           ~85%     7
100K      10           ~80%     6

In all the above tests there were a few messages that were not sent due to connections timing out. This seems unreasonably poor performance; I had expected Postfix in the most simplistic mailing-list configuration to be able to handle more than 5080 outbound messages per minute.

Test 4, Different Ethernet Card

If an Ethernet device driver takes an excessive amount of CPU time in interrupt context then that time will be billed to the user-space process that was running at the time; this can result in an application being reported as using much more CPU time than it really uses. To check whether that was the case I decided to replace the Ethernet card in the test system and see if the reported CPU use changed.

I installed a PCI Ethernet card with an Intel Corporation 82557/8/9 chipset to use instead of the Ethernet port on the motherboard, which had an Intel Corporation 82801BA/BAM/CA/CAM chipset, and observed no performance difference. I did not have suitable supplies of spare hardware to test a non-Intel card.


DNS performance is more important to mail servers than I had previously thought. The choice and configuration of the mail server will affect the performance required from local DNS caches and from remote servers. Sendmail is more demanding on DNS servers and Exim needs to be carefully configured to match the requirements.

I am still considering whether it would be more appropriate for Exim to check for IPv4 addresses before checking for IPv6 addresses given that most of the Internet runs on IPv4 only. Maybe a configuration option for this would be appropriate. [After publication it occurred to me that checking for IPv4 first would be bad if you want to migrate to an IPv6 Internet.]

Also other mail servers will have the same issues to face as IPv6 increases in popularity.

The performance of 20 messages per minute doesn't sound very good, but when you consider the outbound traffic it's more impressive. Every inbound message gets sent to 254 domains, so 20 inbound messages per minute gives about 84.7 outbound messages per second on average, which is a reasonable number for a low-end machine. Surprisingly there was little disk IO load.
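The fan-out arithmetic works out as follows:

```python
# Outbound throughput implied by the inbound rate: each inbound message
# fans out to one recipient in each of the 254 domains.
inbound_per_minute = 20
domains = 254

outbound_per_minute = inbound_per_minute * domains
outbound_per_second = outbound_per_minute / 60
print(outbound_per_minute)            # 5080
print(round(outbound_per_second, 1))  # 84.7
```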

Future Work

The next thing to implement is BHM support for tar pits, grey-listing, randomly dropping connections, temporary deferrals, and generating bounce messages. This will significantly increase the load on the mail server. Administrators of list servers often complain about the effects of being tar-pitted; I plan to do some tests to estimate the performance overhead of this and determine what it means in terms of capacity planning for list administrators.

Another thing I plan to develop is support for arbitrary delays at various points in the SMTP protocol. This will be used for simulating some anti-spam measures, and also the effects of an overloaded server which takes a long time to return an SMTP 250 code in response to a complete message. It will be interesting to discover whether making your mail server faster can help the Internet at large.

Appendix 1

Script to create DNS configuration

# use:
# 100 10.254.0
# the above command creates zones, each with an
# A record for the mail server having the IP address 10.254.0.X where X is
# a number from 1 to 254 and an NS record
# with the IP address
# then put the following in your /etc/named.conf
#include "/etc/named.conf.postal";
# the file "users" in the current directory will have a sample user list for
# postal
my $inclfile = "/etc/named.conf.postal";
open(INCLUDE, ">$inclfile") or die "Can not create $inclfile";
users")">
open(USERS, ">users") or die "Can't create users";
my $zonedir = "/var/named/data";
for(my $i = 0; $i < $ARGV[0]; $i++)
{
  my $zonename = sprintf($ARGV[1], $i);
  my $filename = "$zonedir/$zonename";
  open(ZONE, ">$filename") or die "Can not create $filename";
  print INCLUDE "zone \"$zonename\" {\n type master;\n file \"$filename\";\n};\n\n";
  print ZONE "\$ORIGIN $zonename.\n\$TTL 86400\n\@ SOA localhost. root.localhost. (\n";
  # serial refresh retry expire ttl
  print ZONE " 2006092501 36000 3600 604800 86400 )\n";
  print ZONE " IN NS ns.$zonename.\n";
  print ZONE " IN MX 10 mail.$zonename.\n";
  my $final = $i % 254 + 1;
  print ZONE "mail IN A $ARGV[2].$final\n";
  print ZONE "ns IN A $ARGV[3]\n";
  print USERS "user$final\@$zonename\n";
  close(ZONE);
}
close(INCLUDE);
close(USERS);

SE Debian: how to make NSA SE Linux work in a distribution


I presented this paper at Ottawa Linux Symposium (OLS) 2002. Since that time the acceptance of SE Linux in Debian has been significantly less than I expected, but the acceptance in Red Hat Enterprise Linux and Fedora has been quite good. The site I originally linked to has been defunct since about 2004.

I corrected the URLs for the NSA papers I referenced, NSA broke the links some years after I published this.

Crispin Cowan wrote some notes during my talk and sent them to a mailing list at the end, they are archived here.

SE Debian: how to make NSA SE Linux work in a distribution

Russell Coker <>


I conservatively expect that tens of thousands of Debian users will be using NSA SE Linux [1] next year. I will explain how to make SE Linux work as part of a distribution, and be manageable for the administrator.

Although I am writing about my work in developing SE Linux support for Debian, I am using generic terms as much as possible, as the same things need to be done for RPM based distributions.


SE Linux offers significant benefits for security. It accomplishes this by adding another layer of security in addition to the default Unix permissions model. This is accomplished by firstly assigning a type to every file, device, network socket, etc. Then every process has a domain, and the level of access permitted to a type is determined by the domain of the process that is attempting the access (in addition to the usual Unix permission checks). Domains may only be changed at process execution time. The domain may automatically be changed when a process is executed based on the type of the executable program file and the domain of the process that is executing it, or a privileged process may specify the new domain for the child process.
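As a sketch, a minimal fragment in the SE Linux policy language of the time might look like the following; all the names here are illustrative, not taken from the sample policy:

```
type myapp_exec_t, file_type, exec_type;   # type for the program file
type myapp_t, domain;                      # domain the process runs in

# myapp_t may read files of type etc_t (Unix permissions still apply too):
allow myapp_t etc_t:file { read getattr };

# executing a myapp_exec_t file from initrc_t transitions to myapp_t:
domain_auto_trans(initrc_t, myapp_exec_t, myapp_t)
```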

In addition to the use of domains and types for access control SE Linux tracks the identity of the user (which will be system_u for processes that are part of the operating system or the Unix user-name) and the role. Each identity will have a list of roles that it is permitted to assume, and each role will have a list of domains that it may use. This gives a high level of control over the actions of a user which is tracked through the system. When the user runs SUID or SGID programs the original identity will still be tracked and their privileges in the SE security scheme will not change. This is very different to the standard Unix permissions where after a SUID program runs another SUID program it’s impossible to determine who ran the original process. Also of note is the fact that operations that are denied by the security policy [2] have the identity of the process in question logged.

For a detailed description of how SE Linux works I recommend reading the paper Peter Loscocco presented at OLS in 2001 [1].

The difficulty is that this increase in functionality also involves an increase in complexity, and requires re-authenticating more often than on a regular Unix system (the SE Linux security policy requires that the user re-authenticate for change of role). Due to this most people who could benefit from SE Linux will find themselves unable to use it because of the difficulties of managing it. I plan to address this problem through packaging SE Linux for Debian.

The first issue is getting packages of software that is patched for support of the SE Linux system calls and logic. This includes modified programs for every method of login (/bin/login, sshd, and X login programs), modified cron to run cron jobs in the correct security context, modified ps to display the security context, modified logrotate to keep the correct context on log files, as well as many other modified utilities.

The next issue is to configure the system such that when a package of software is installed the correct security contexts will be automatically applied to all files.

The most difficult problem is ensuring that configuration scripts get run in the correct security context when installing and upgrading packages.

The final problem is managing the configuration files for the security policy.

Once these problems are solved there is still the issue of the SE Linux sample policy being far from the complete policy that is needed in a real network. I estimate that at least 500 new security policy files will need to be written before the sample policy is complete enough that most people can just select the parts that they need for a working system.

Patching the Packages

The task of the login program is to authenticate the user, chown the tty device to the correct UID, and change to the appropriate UID/GID before executing the user’s shell. The SE patched version of the login program performs the same tasks, but in addition changes the security identifier (SID) on the terminal device with the chsid system call and then uses the execve_secure system call instead of the execve system call to change the SID of the child process. The login program also gives the user a choice of which of their authorised roles they will assume at login time.

This is not very different from the regular functionality of the login program and does not require a significant patch.

Typically this adds less than 9K to the object size of the login program, so hopefully soon many of the login programs will have the SE code always compiled in. For the rest we just need a set of packages containing the SE versions of the same programs. So this issue is not a difficult one to solve and most of the work needed to solve it has been done.

A similar patch needs to be applied to many other programs which perform similar operations. One example is cron which needs to be modified so cron jobs will be run in the correct security context. Another example is the suexec program from Apache. An example of a similar program for which no-one has yet written a patch is procmail.

Programs which copy files also need suitable options for preserving SIDs: logrotate and the fileutils package (which includes cp) have such patches, cpio lacks such a patch, and there is a patch for tar but it doesn't apply to recent versions and probably needs to be re-written.

Setting the Correct SID When Installing Files

When a package of software is installed the final part of the installation is running a postinst script which in the case of a daemon will usually start the daemon in question. However if the files in the package do not have the correct SIDs then the daemon may not be able to run, or will be unable to run correctly!

The Debian packaging system does not currently have any support for running a script after the files of a package are installed but before the postinst script. There have been discussions for a few years on how best to do this. As I didn't have time to properly re-write dpkg, I instead did a quick hack to make it run scripts that it finds in /etc/dpkg/postinst.d/ before running the postinst of the package.

When installing an SE Linux system the program setfiles is used to apply the correct SIDs to all files in the system. I have written a patch to make it instead take a list of canonical fully-qualified file names on standard input if run with the -s switch, which is now included in the NSA source release.

The combination of the dpkg patch and the setfiles patch allows me to solve the basic problem of getting the correct SIDs applied to files: my script queries the package management system for a list of files contained in the package and pipes it through to setfiles to set the SID on each file.

The next complication is setting the correct SID for the setfiles program itself. By default it gets installed with the security type sbin_t because that is the type of the directory it is installed in. However in my default policy setup I have not given the dpkg_t domain (which is used by the dpkg program when it is run administratively) the privilege of changing the SID of files, so the setfiles program needs to have the type setfiles_exec_t to trigger an automatic domain transition to the setfiles_t domain.

To solve this issue I have the preinst script (the script that is run before the package is installed) of the selinux package rename the /usr/sbin/setfiles to /usr/sbin/setfiles.old on an upgrade. Then the /etc/dpkg/postinst.d/selinux script will run the old version if it exists.

Here’s the relevant section of the selinux.preinst file:

if [ ! -f /usr/sbin/setfiles.old -a \
    -f /usr/sbin/setfiles ]; then
  mv /usr/sbin/setfiles /usr/sbin/setfiles.old
fi

Here's the contents of /etc/dpkg/postinst.d/selinux. The first parameter to the script is the name of the package that is being installed. The grep command is included because setfiles currently has some problems with blank lines and with the "/." entry which dpkg produces.

make -s -C /etc/selinux
if [ -x /usr/sbin/setfiles.old ]; then
  SETFILES=/usr/sbin/setfiles.old
else
  SETFILES=/usr/sbin/setfiles
fi
dpkg -L $1 | grep ^/.. | $SETFILES -s
if [ -x /usr/sbin/setfiles.old \
    -a "$1" = "selinux" ]; then
  rm /usr/sbin/setfiles.old
fi

Running Configuration Scripts in the Correct Context

When a SE Linux system boots the process init is started in the domain init_t. When it runs the daemon start scripts it uses the scripts /etc/init.d/rc and /etc/init.d/rcS on a Debian system (on Red Hat it is /etc/rc.d/rc and /etc/rc.d/rc.sysinit). So these scripts are given the type initrc_exec_t and there is a rule domain_auto_trans(init_t, initrc_exec_t, initrc_t) which causes a transition to the initrc_t domain. The security policy for each daemon will have a rule causing a domain transition from the initrc_t domain to the daemon domain upon execution of the daemon. This all happens as the system_u identity and the system_r role.

When the system administrator wants to start a script manually they use the program run_init, which can only be run from the sysadm_t domain; it re-authenticates the administrator (to avoid the possibility of it being called by some malicious code that the administrator accidentally runs) before running the specified script as system_u:system_r:initrc_t.

This works fine when the daemon start script is quite simple (most such start scripts just check whether the daemon is already running and then run it with appropriate parameters). However this doesn’t work for complex scripts, which may copy files, change sysctl entries via /proc, and do many other things. An example of this is the devfsd package where the start script creates device nodes for device drivers that lack kernel support for devfs. Getting this to work correctly required that the code for device node creation be split into a separate file with the same SID as the main daemon (devfsd_exec_t) which causes it to run in the same domain as the daemon (devfsd_t). Such changes will probably have to be made to about 5% of daemon start scripts.

But that is part of the standard procedure of correctly setting up SE Linux. The package specific part comes when the scripts have to be started from the package installation. To get the correct domain (initrc_t) for the scripts I use the rule domain_auto_trans(dpkg_t, etc_t, initrc_t), which causes the dpkg_t domain to transition to the initrc_t domain when a script of type etc_t is executed. Now the hard part is getting the identity and the role correct when running dpkg. For this purpose I have written a customised version of run_init to change the context to system_u:system_r:dpkg_t, system_u:system_r:dselect_t, or system_u:system_r:apt_t for the programs dpkg, dselect, and apt-get respectively.

The apt_t and dselect_t domains are only used for selecting and downloading packages, and then executing dpkg, which triggers an automatic transition to the dpkg_t domain.

Managing the Configuration Files

For normal configuration files in Debian (almost every file under /etc and some files in other locations) the file is registered as a conffile in the packaging system, and the package status file contains the MD5 checksum of the file. If a file has been changed from its original contents (according to an MD5 check) at the time the package is upgraded, and if the new version of the package has different content for the file than the old version provided (according to MD5), then the user will be asked whether they want to replace the old file (with a default of no). However if the new version of the package contains different content and the old content was not changed, then the user will get the new content without even being informed of the fact!
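The three cases can be summarised in a short sketch; this is my own illustration of the rules described above, not dpkg's actual code:

```python
# Classify a conffile upgrade by comparing MD5 checksums:
# the checksum the old package shipped, the one the new package ships,
# and the checksum of the file currently on disk.
def conffile_action(old_pkg_md5, new_pkg_md5, on_disk_md5):
    if new_pkg_md5 == old_pkg_md5:
        return "keep"          # package content unchanged, nothing to do
    if on_disk_md5 == old_pkg_md5:
        return "install-new"   # file never edited: replaced without a prompt
    return "prompt"            # locally edited file: ask, defaulting to "no"

print(conffile_action("a", "a", "a"))  # keep
print(conffile_action("a", "b", "a"))  # install-new
print(conffile_action("a", "b", "c"))  # prompt
```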

This is OK for many files, but the idea of a file from your audited security configuration being replaced with one you've never seen is not a pleasant one! This is only the first problem with managing policy files; the next problem is the size of the database for the sample policy. If you are using an initial RAM disk (initrd) then you must have the policy database on the initrd. The default initrd size of 4 megabytes is not large enough to accommodate the usual modules and the complete sample policy.

So what we need to solve this is a way of having a set of sample policy files (one per domain), of which not all will be used, and when new policy files are added or existing files are changed the user must be prompted as to whether they want to add the new files or apply the changes. Also when adding new policy the matching entries have to be added to the database used by setfiles for setting the file context.

In the latest versions of the sample policy the Makefile creates a configuration file for setfiles to match the program configuration files used. For every application policy file domains/program/%.te the matching file file_contexts/program/%.fc will be used as part of the configuration. This change will solve the issue of determining the configuration for setfiles, but it doesn't entirely solve the problem.

One remaining issue is that when a file is added to or removed from the configuration the appropriate changes need to be made to the file system. If you make an addition to the policy before installing a new package (the correct procedure) then you can usually get away without this as long as none of the files or directories previously existed. However this is not always the case, especially when files are diverted or when dealing with standard directories such as /var/spool/mail which will exist even if you have not installed any software to use them! It should not be that difficult to write a program to relabel the files matching the specifications of the added policy; the question is whether policy additions are common enough to make it worth saving the effort of a relabel. Also there's the risk that a bug in such a program (or its use) could potentially cause a security hole.

The security policy is comprised of one configuration file per application (or class of applications – some domains, such as the DHCP client domain dhcpc_t, are used by multiple programs which perform similar functions). Sometimes an application requires multiple domains which will therefore be defined in the one file; for example my current policy for Postfix has eleven domains (which is excessive, I plan to reduce it to three or four once I've determined exactly what is required).

One problem I faced with this is the issue of what to do when one domain needs to interact with another domain, for example the pppd process often needs to run sendmail -q to flush the mail queue when it establishes a connection. This requires the policy statement domain_auto_trans(pppd_t, sendmail_exec_t, sysadm_mail_t). Previously such a statement would be put in either the sendmail.te file or the pppd.te file, thus making one of them depend on the other. This is a bad idea because there's no reason for either of these programs to depend on the other. The solution I devised is based on the M4 macro language (which was already used for simpler macro functionality in producing the policy file). I created a script to define a macro with the name of each application policy file that is used. So the solution to the PPP and Sendmail problem is to put the following in the pppd.te file:

ifdef(`sendmail.te',
`domain_auto_trans(pppd_t, sendmail_exec_t,
                   sysadm_mail_t)')

The next problem is how to effectively manage things so that when I ship a new and improved sample policy the administrator can update it without excessive pain.

The current method involves running diff -ru and then copying files if you like the changes. This is excessively painful even when managing one or two SE Linux machines, so it obviously won't scale to serious production. I plan to write a Perl script to manage this. The first thing it has to do is track when the administrator doesn't want a policy file: when a file is removed, the fact that the user has chosen not to have it installed should be recorded, and they should not be prompted to re-install it on the next upgrade. However if the sample policy is upgraded and a new file has been added then they should be asked if they want to install it. When a file in the sample policy changes and it is a file that is installed, the user should be asked if they want the new file copied over their existing file (and they should be provided with a diff to show what the changes would be). Finally if such changes involve the file configuration for setfiles then the user should be asked whether they want to relabel the system.
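The planned logic can be sketched as a small decision function; this is a hypothetical illustration of the behaviour described above, not code from any existing tool:

```python
# Decide what to do with one sample policy file during an upgrade.
# installed: the file is currently installed on this system
# declined_before: the admin previously removed/declined this file
# sample_changed: the new sample policy ships a different version
def policy_file_action(installed, declined_before, sample_changed):
    if not installed:
        # previously declined files are not offered again;
        # genuinely new files are offered for installation
        return "skip" if declined_before else "ask-install"
    if sample_changed:
        return "ask-replace-showing-diff"
    return "keep"

print(policy_file_action(False, True, False))   # skip
print(policy_file_action(False, False, False))  # ask-install
print(policy_file_action(True, False, True))    # ask-replace-showing-diff
```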

The people who are working on Red Hat packaging are considering other ways of managing the versions of configuration files, one of which involves having symbolic links pointing to the files to be used; if you decide to use your own version instead of one of the supplied policy files then you can change the sym-link.

Managing Device Nodes

In Linux there are two methods of managing device nodes. One is the traditional method of having /dev be a regular directory on the root file system and have device nodes created on it with mknod, the other is the devfs file system which allows the kernel to automatically create device nodes while the devfsd process automatically assigns the correct UID, GID, and permissions to them.

On a traditional (non-devfs) system running SE Linux the device nodes will be labelled in the same way as any other file. On a devfs system things are different: the devfs policy database contains rules for labelling device nodes. However this has some limitations, one being that if the policy database does not have an entry for a device node at the time it is created then it will never be labelled. Another is that every type listed in the devfs configuration rules must be defined, which can cause needless dependencies.

To address these issues I wrote a module for devfsd which adds support for SE Linux. This allows you to change the mapping of SIDs to device nodes and re-apply it at any time, and if a security context listed in the configuration file does not exist in the policy then an error will be logged and the system will continue working.

This is especially useful for the case of an initrd as the types for all the possible device nodes won’t need to be in the ram disk.

Work To Be Done

Initial RAM Disk

When using an initrd to boot a modular kernel the security policy database must be stored on the initrd. The problem is that the default initrd size is 4M, which does not leave much space when libc6 is included, often not enough for the policy you want. Also even if the policy does fit you won’t really want to have such a large initrd image. If you are installing SE Linux on a single PC, or even on a network of similar PCs then you are best advised to build a kernel with all modules needed for booting statically linked and not use an initrd. However this is not possible for a distribution vendor who has to support a huge variety of hardware.

Another problem with using an initrd for storing the policy is that when you generate a new policy you then have to regenerate the initrd to avoid having your changes disappear on the next boot. Of course a boot script could easily load the updated policy from the root file system before going to multi-user mode, but it is wasteful to have a large policy on the initrd that you then discard before ever using much of it.

The solution is to have a small policy that contains all the settings needed for the first stage of boot, or alternatively for running recovery tools in case a failure prevents the machine from entering multi-user mode. Then after the machine has passed the first stages of the boot process a complete policy can be loaded from the root file system; as long as the two policies don’t conflict in any major way this should work well. NB: a major policy conflict is a situation where the initrd defines domains that aren’t defined in the new policy and processes are executed in such a domain.

The latest release of SE Linux supports automatically re-loading the policy when the real root file system is mounted. Now all that needs to be done is for someone to write a mini-policy to install on the initrd.

Polishing run_init

Stephen Smalley has suggested that we develop a run_init program that incorporates the functionality of my modified program as well as that of the original run_init program in a more generic fashion. It is apparent that other people will have similar needs for a program that executes programs under a different domain, role, and maybe identity. It is better to have one program do this than to have many people writing their own.

Also currently my program is hard-coded for the names of the Debian administration programs. An improved program should handle the needs of Debian, RPM, and the regular run_init functionality.

Writing Sample Policy Files

Currently any serious system will require policy files that are not in the sample policy. This forces everyone who uses SE Linux to start by writing policy files (the most difficult and time-consuming task involved with the project). We are now writing new sample policy files for a variety of daemons and applications, and developing new macros for writing policy files quickly. With the new macros, policy files are on average half the size they used to be (and I aim to reduce the size again with further macros). The macros allow short policy files which are easy to understand, so the user can easily determine how to make any required changes, or how to write a policy file for a new program based on existing ones.

Obtaining the Source

Currently most of my packages and source are available at however I plan to eventually get them all into Debian at which time I may remove that site.

I have several packages in the unstable distribution of Debian, the first being the kernel-patch-2.4-lsm and kernel-patch-2.5-lsm packages which supply the Linux Security Modules kernel patch. That patch includes SE Linux as well as LIDS and some of the OpenWall functionality. When I have time I back-port patches to older kernels and include new patches that the NSA has not officially released, so my patches will often provide more features than the official patches distributed by the NSA or the patches distributed by Immunix. However if you want the official patches then these packages may not be what you desire.

From the selinux-small archive I create the packages selinux and libselinux-dev which are also in the unstable distribution of Debian.


I would like to thank Stephen Smalley for being so helpful when I was learning about SE Linux, and Dr. Brian May for checking my early packages and giving me some good advice when I first started.

Also thanks to Dr. May, Stephen Smalley, and Peter Loscocco for reviewing this paper.


Running the Net After a Collapse

I’ve been thinking about what we need in Australia to preserve the free software community in the face of an economic collapse (let’s not pretend that the US can collapse without taking Australia down too). For current practices of using the Internet and developing free software to continue it seems that we need the following essential things:

  1. Decentralised local net access.
  2. Reliable electricity.
  3. Viable DNS in the face of transient outages.
  4. Reasonably efficient email distribution (any message should be delivered within 48 hours).
  5. Distribution of bulk binary data (CD and DVD images of distributions). This includes providing access at short notice.
  6. Mailing lists to provide assistance which are available with rapid turn-around (measured in minutes).
  7. Good access to development resources (test machines).
  8. Reliable and fast access to Wikipedia data.
  9. A secure infrastructure.

To solve these I think we need the following:

  1. For decentralised local net access wireless is the only option at this time. Currently in Australia only big companies such as Telstra can legally install wires that cross property boundaries. While it’s not uncommon for someone to put an Ethernet cable over their fence to play network games with their neighbors or to share net access, this can’t work across roads and is quite difficult when there are intervening houses of people who aren’t interested. When the collapse happens the laws restricting cabling will all be ignored. But it seems that a wireless backbone is necessary to then permit Ethernet in small local areas. There is a free wireless project to promote and support such developments.
  2. Reliable electricity is only needed for server systems and backbone routers. The current model of having huge numbers of home PCs running 24*7 is not going to last. There are currently a variety of government incentives for getting solar power at home. The cheapest way of doing this is for a grid-connected system which feeds excess capacity into the grid – the down-side of this is that when grid power goes off it shuts down entirely to avoid the risk of injuring an employee of the electricity company. But once solar panels are installed it should not be difficult to convert them to off-grid operation for server rooms (which will be based in private homes). It will be good if home-based wind power generation becomes viable before the crunch comes, server locations powered by wind and solar power will need smaller batteries to operate overnight.
    In the third-world it’s not uncommon to transport electricity by carrying around car batteries. I wonder whether something similar could be done in a post-collapse first-world country with UPS batteries.
    Buying UPSs for all machines is a good thing to do now, when the crunch comes such things will be quite expensive. Also buying energy-efficient machines is a good idea.
  3. DNS is the most important service on the net. The current practice is for client machines to use a DNS cache at their ISP. For operation with severe and ongoing unreliability in the net we would need a hierarchy of caching DNS servers to increase the probability of getting a good response to a DNS request even if the client can’t ping the DNS server in question. One requirement would be the use of DNS cache programs which store their data on disk (so that a transient power outage on a machine which has no UPS won’t wipe out the cache), one such DNS cache is pdnsd [1] (I recall that there were others but I can’t find them at the moment). Even if pdnsd is not the ideal product for such tasks it’s a good proof of concept and a replacement could be written quickly enough.
  4. For reasonably efficient email distribution we would need at a minimum a distribution of secondary MX records. If reliable connections end to end aren’t possible then outbound smart-hosts serving geographic regions could connect to secondary MX servers in the recipient region (similar to the way most mail servers in Melbourne used to use a university server as a secondary MX until the sys-admin of that server got fed up with it and started bouncing such mail). Of course this would break some anti-spam measures and force other anti-spam measures to be more local in scope. But as most spam is financed from the US it seems likely that a reduction in spam will be one positive result of an economic crash in the US.
    It seems likely that UUCP [2] will once again be used for a significant volume of email traffic. It is a good reliable way of tunneling email over multiple hops between hosts which have no direct connectivity.
    Another thing that needs to be done to alleviate the email problem is to have local lists which broadcast traffic from major international lists, this used to be a common practice in the early 90’s but when bandwidth increased and the level of clue on the net decreased the list managers wanted more direct control to allow them to remove bad addresses more easily.
  5. Distributing bulk data requires local mirrors. Mirroring the CD and DVD images of major distributions is easy enough. Mirroring other large data (such as the talks from ) will be more difficult and require more manual intervention. Fortunately 750G disks are quite cheap nowadays and we can only expect disks to become larger and cheaper in the near future. Using large USB storage devices to swap data at LUG meetings is a viable option (we used to swap data on floppy disks many years ago). Transferring CD images over wireless links is possible, but not desirable.
  6. Local LUG mailing lists are a very important support channel; the quality and quantity of local mailing lists is not as high as it might be due to the competition with international lists. But if a post to an international list could be expected to take five days to get a response then there would be a much greater desire to use local lists. Setting up local list servers is not overly difficult.
  7. Access to test machines is best provided by shared servers. Currently most people who do any serious free software development have collections of test machines. But restrictions in the electricity supply will make this difficult. Fortunately virtualisation technologies are advancing well, much of my testing could be done with a few DomU’s on someone else’s server (I already do most of my testing with local DomU’s which has allowed me to significantly decrease the amount of electricity I use).
    Another challenge with test machines is getting the data. At the moment if I want to test my software on a different distribution it’s quite easy for me to use my cable link to download a DVD image. But when using a wireless link this isn’t going to work well. Using ssh to connect to a server that someone else runs over a wireless link would be a much more viable option than trying to download the distribution DVD image over wireless!
  8. Wikipedia is such an important resource that access to it makes a significant difference to the ability to perform many tasks. Fortunately they offer CD images of Wikipedia which can be downloaded and shared [3].
  9. I believe that it will be more important to have secure computers after the crunch, because there will be less capacity for overhead. Currently when extra hardware is required due to DOS attacks and systems need to be reinstalled we have the resources to do so. Due to the impending lack of resources we need to make things as reliable as possible so that they don’t need to be fixed as often, and this requires making computers more secure.

Partitioning a Server with NSA SE Linux


I presented this paper at Linux Kongress 2002. Since that time virtualisation systems based around VMWare, Xen, and the hardware virtualisation in recent AMD and Intel CPUs have really taken off. The wide range of virtualisation options makes this paper mostly obsolete, and what isn’t obsoleted by that is obsoleted by new developments in SE Linux policy which change the way things work. One thing that’s particularly worth noting is the range of network access controls that are now available in SE Linux.

This is mostly a historical document now. is defunct, since about 2004.

Partitioning a Server with NSA SE Linux

Russell Coker <>


The requirement to purchase multiple machines is often driven by the need to have multiple administrators with root access who do not trust each other.

Having large numbers of expensive under-utilised servers with the associated management costs is not ideal.

I will describe my solution to this problem using SE Linux [1] to partition a server such that the “root” users can’t access each other’s files, kill each other’s processes, change passwords for each other’s users, etc.

DOS attacks will still be possible by excessive use of memory and CPU time, but apart from that all the benefits of separate hardware will be provided.


SE Linux dramatically improves the security of a Linux system by adding another layer of security in addition to the default Unix permissions model. This is accomplished by firstly assigning a type to every file, device, network socket, etc. Then every process has a domain and the level of access permitted to a type is determined by the domain of the process that is attempting the access (in addition to the usual Unix permission checks). Domains may only be changed at process execution time. The domain may automatically be changed when a process is executed based on the type of the executable program file and the domain of the process that is executing it, or a privileged process may specify the new domain for the child process.

In addition to the use of domains and types for access control SE Linux tracks the identity of the user (which will be system_u for processes that are part of the operating system, or the Unix username otherwise) and the role. Each identity will have a list of roles that it is permitted to assume, and each role will have a list of domains that it may use. This gives a high level of control over the actions of a user which is tracked through the system. However in this paper I am not using any of the identity or role features of SE Linux (they merely give an extra layer of security on top of what I am researching), so I will not mention them again.

For a detailed description of how SE Linux works I recommend reading the paper Peter Loscocco presented at OLS in 2001 [1]. For the details of SE Linux policy configuration I recommend Stephen Smalley’s paper [2].

The problem of dividing a hardware resource that is expensive to purchase and manage among many users is an ongoing issue for system administrators. In an ISP hosting environment or a software development environment a common method is to use chroot environments to partition a server into different virtual environments. I have devised some security policies and security server programs to implement a chroot environment on SE Linux with advanced security offering the following features:

  1. An unprivileged user (the chroot administrator) can set up their own chroot environment, create accounts, run sshd, let ordinary users login via ssh, and do normal root things to them without needing any help from the system administrator.
  2. Processes outside the chroot environment can see the processes in the chroot, kill them, strace them, etc. Also the user can change the security labels of files in their chroot to determine which of the files are writable by a process in the chroot environment (this can be done from outside the chroot environment or from a privileged process within the chroot environment).
  3. Processes inside the chroot can’t see any system processes, or any of the user’s processes that are outside the chroot. They can see which PIDs are in use, but that doesn’t allow them to see any information on the processes in question or molest such processes. The chroot processes won’t be allowed to read any files or directories labelled with the type of the user’s home directory. This is so that if the wrong files are accidentally moved into the chroot environment then they won’t be read by chroot processes.
  4. The administrator can set up a user’s account for multiple independent chroot environments. In such a case processes inside one chroot can’t interfere with processes in the others in any way, and all their files will have different types. This prohibits a nested chroot, but I think that there’s no good cause for nested chroots anyway. If someone has a real need for nested chroots they could always build them on top of my work – it would not be particularly difficult.
  5. The user will have the option of running a privileged process inside their chroot which can do anything to files or processes in the chroot (even files that are read-only to regular chroot processes) but which can’t do anything outside the chroot. Also if this process runs a program that is writable by a regular chroot process then it runs it in the regular chroot process domain. This is to prevent a hostile user inside the chroot from attacking the chroot administrator after tricking them into running a compromised binary.

A basic chroot environment has several limitations, the most important of which is that a process in the chroot environment can kill any process with the same UID (or in the case of a root process any process with any UID) outside the chroot environment. The second major limitation is that root access is needed to call the chroot() system call to start a chroot environment and this gives access to almost everything else that you might want to restrict. There are a number of other limitations of chroot environments, but they are all trivial compared to these.

One method of addressing this issue is that of GR security [3]. GR Security locks down a chroot environment tightly, preventing fchdir(), mount(), double-chrooting, pivot_root(), and access to non-chroot processes (or processes in a different chroot). It also has the benefit that no extra application configuration is required. However it has the limitation that you have very limited ability to configure the capabilities of the chroot, and it has no solution to the problem of requiring root access to setup the chroot environment.

Another possible solution to the problem is that of BSD Jails [4]. A jail is a chroot environment that prevents escape and also prevents processes in the jail from interfering with processes outside the jail. It is designed for running network servers and allows confining the processes in a particular jail to a single IP address (a very handy feature that is not available in SE Linux at this time and which I have to emulate in a shared object). Also the jail authors are working on a jailinit program which is similar to init but for jails (presumably the main difference is that it doesn’t expect to have PID==1), this program should work well for chroot environments in SE Linux too.

Another possible solution is to use User-Mode Linux [5]. UML is a port of the Linux kernel to the Linux system call interface. So it can run a kernel as a user (UID!=0) application using regular files on the file system for storage. One problem with UML is that it is based around file system images on disk, so to access them without the UML kernel running you have to loopback mount them which is inconvenient. Also starting or stopping a UML image is equivalent to booting or shutting down a server (which is inconvenient if you just want to run a simple command like last). A final problem with UML for hosting is that using file systems for storage is inefficient, if you have 100 users who each might use 1G of storage (but who use on average 50M) you need 100G of disk space. With chroot based systems you would only need 5G of disk space. Finally backing up a file system image from a live UML setup will give a corrupted file system, backing up files from a chroot is quite safe.

Isolation of the Chroot

The first aim is to have the chroot environment be as isolated as possible from other chroots, from the chroot administrator, and from the rest of the system. This is quite easy as SE Linux defaults to denying access (apart from the system administration domain sysadm_t, which has full access to see and kill processes, read files, etc).

For example pivot_root is denied by not granting the sys_admin capability. Access to other processes is denied by not permitting access to read from the /proc directories, send signals, ptrace, etc. fchdir() is prevented because processes in the chroot don’t get access to files or directories outside the chroot because of the type labels, so they can only fchdir() back into the chroot! Mount and chroot accesses are also not granted to the chroot environment, nor is access to configure network interfaces.

The next aim is to make a chroot environment more secure than a non-SE system in full control of the machine. The first step is to have critical resources such as hard drive block devices well out of reach of the chroot environment. With the default SE Linux security policy (which my work is based on) that is already achieved: a regular root process that is not chrooted will be unable to perform such access. This isolation of block devices, the boot manager, and privileged system processes (init, named, automount, and the mail server) from the rest of the system already makes a chroot environment on SE Linux more secure than one on a system without SE Linux; most (hopefully all) attacks against such critical parts of the system that would work on other systems will be defeated by SE Linux.

One problem that I still have not solved to my satisfaction is that of not requiring super-user access to initially create the chroot environment. I have developed policy modifications to allow someone to login as root (UID==0) in a non-chroot environment and not have access to cause any damage. So for my current testing I have started all chroot environments from a root login in a user domain, as the chroot and mount operations require root privileges. For production use you would probably not want to give a user-domain root account to a user, so I am writing a SUID root program for running both mount and chroot to start a chroot environment and to run umount to end such an environment (after killing all the processes). It will take a configuration file listing the target directory, the bind mounts to set up, and the programs to run. One issue that I initially had with this was to make sure that it only runs on SE Linux (if SE Linux was removed but the application remained installed then everyone would be able to run programs as root in an unrestricted fashion by chrooting to the root directory). My current method of solving this is to execute /bin/false at the start of the program; if SE Linux blocks that execution then it indicates that SE Linux is in enforcing mode (otherwise the program ends). Also it directly checks for the presence of SE Linux (so if the administrator removes /bin/false then it won’t give a false positive). However this still leaves the corner case where an administrator puts the machine in permissive mode and removes /bin/false. To avoid this the program will then try reading /etc/passwd, which will always be readable on a functional Unix machine unless you have SE Linux (or some similar mechanism) in place. Before running chroot the wrapper will change directory to the chroot target and run “chroot .”.
My policy prohibits the chroot program from accessing any directories outside the chroot environment to prevent it from chrooting to the wrong directory, so an absolute path will not work. Chroot administrators would be confused by this, so the wrapper hides it from them.

I believe that this method will work, but I have not tested it yet.


It would be possible to create a SE Linux policy to allow a chroot to have all the different domains that the full environment has (i.e. a separate security domain for each daemon). However this would be a huge amount of work for me to write the policy and maintain it as new daemons become available and as new versions of daemons are released with different functionality. It would result in a huge policy (the number of daemons multiplied by the number of chroot environments could give a large number of domains) which is currently held in non-pageable kernel memory. In a future version of SE Linux they plan to make it pageable, but the fact that a large policy has a performance impact will remain. Also the typical chroot administrator does not want the bother of working with this level of complexity. Finally the limited scope of a chroot environment (no dhcp client, no need to run fsck or administer RAID devices, etc) reduces the number of interactions that can have a security impact, and as a general rule there is a security benefit in simplicity as mistakes are less likely.

In my current policy I have a single domain for the main programs in the chroot environment. This domain has write access to files under /home, /var, /tmp, and anywhere else that the chroot administrator desires (they can change the type of files and directories at any time to determine where programs can write, the writable locations of /home, /var, and /tmp are merely default values that I recommend – they are not requirements). Typically the entire chroot would default to being writable when it is installed and the chroot administrator would be encouraged to change that according to their own security needs. In the case of a chroot setup with chroot(etbe, etbe_apache) the main domain will be called etbe_apache_t.

I have considered having a separate domain for user processes inside the chroot so that they can only write to files under /home (for example) and not /var (which would only be writable by system processes). Implementing this would require either an automatic domain transition rule to change the domain when running a user shell (as opposed to a shell script run by a system program), or a modified sshd, ftpd, etc. Requiring that chroot administrators use modified versions of standard programs is simply not practical: most users would be unable to manage correct installation and would require excessive help from the system administrator, and it might require significant programming to produce modified versions of some login programs. This leaves the option of automatic transitions. Doing this would require that a chsh inside the chroot not allow changing the shell to /bin/sh or any other shell that would be used by system shell scripts, and that all shells listed in /etc/shells be labelled as user shells. This requires a significant configuration difference between the chroot setup and standard configurations, which is difficult to maintain because distribution packages and graphical system administration tools would tend to change them back.

I conclude that having a separate domain for user processes is not feasible.

The next issue is administration of the chroot. Having certain directories and files be read-only is great for security but a pain when it is time to upgrade! This is a common problem for administrators who use a read-only mount for their chroot environment. To solve this issue I have created a domain which has write privileges for all files in the chroot, for the example above the name of this domain would be etbe_apache_super_t. This domain also has control over all processes in the chroot (ability to see them, kill them, and ptrace them), while the processes in the chroot can not even determine the existence of such super processes. If this process executes a file that is writable by the main chroot domain then it will transition to the main chroot domain, so that if a trojan is run it will not be able to damage the read-only files. This protection is not perfect however: using the source command or “/bin/sh < script” to execute a shell script will result in it being run in the super domain. However I think this is a small problem, as tricking an administrator into redirecting input for a shell from a hostile script or using the source command is very difficult (but not impossible). It is quite easy, though, for an administrator to mistakenly give the wrong label to files and allow them to be written by the wrong people (I made exactly this mistake when I first set it up), so preventing the execution of binaries that may have been compromised is a good security measure.

Note that this method of managing read-only files does not require that applications running in the chroot environment be stopped for an upgrade, unlike most other methods of denying write access to files that are in use.

To have a working chroot environment you need a number of device files to be present: pseudo-tty files (for logins and expect), /dev/random (for sshd), and others. The chroot administrator can not be permitted to create their own device nodes, as this would allow them to create /dev/hda and change system data! So I have created a special mount domain, which in this example would be named etbe_mount_t, based on Brian May’s mount policy to allow bind mounts of these device nodes. This does have one potential drawback: if you are denying a domain access to device nodes solely by denying search access to /dev then bind mounts could be used to allow access in contravention of security policy! I could imagine a situation where someone would want to allow a domain to access the pseudo-tty it inherits from its parent but not open other ptys of the same type, and implementing this by denying access to the /dev/pts directory. This is not something that I would recommend however. I believe that this is the best solution, as not having a user mount domain would require that the system administrator either create bind mounts (a process that has risks regarding symlinks etc), create special device nodes (having two different device nodes for the same device with different types is a security issue), or otherwise be involved in the setup of the chroot in a fashion that involves work and security risks.

In summary, if you have the bad idea of restricting access to the /dev directory to prevent access to device nodes then things will break for you; however there are many other ways of breaking such things, so I think that the net result is that security will not be weakened.

To actually enter the chroot environment you need to execute the chroot() system call. To allow that I created a domain for chroot, which in this example will be called etbe_chroot_t; this domain is entered when the user domain etbe_t executes the chroot program. Then when this domain executes a file of type etbe_apache_ro_t or etbe_apache_rw_t it will transition to domain etbe_apache_t, and when it executes a file of type etbe_apache_super_entry_t it will transition to domain etbe_apache_super_t.


The primary type used for labelling files and directories is for read/write files, in the case of a chroot setup by chroot(etbe, etbe_apache) the type will be etbe_apache_rw_t. It can be read and written by etbe_apache_super_t and etbe_apache_t domains, and read by the etbe_chroot_t and etbe_mount_t domains. It can also be read and written by the user domain etbe_t.

The type etbe_apache_ro_t is the same but can only be written by the user domain and etbe_apache_super_t.

To enter as domain etbe_apache_super_t I have defined a type etbe_apache_super_entry_t. The aim of this is to allow easy entry of the administration domain by a script; otherwise I might have chosen to have a wrapper program to enter the domain which prompts the user for which domain they want to enter. The wrapper program idea would have the advantage of making it easier for novice chroot administrators, and I may eventually implement that too so that users get a choice.

One problem I found when I initially set up a chroot was that when installing new Debian packages, if a post-installation script (running in the etbe_apache_super_t domain) started a daemon (such as sshd) then the daemon would also run as etbe_apache_super_t, and a user could then login with that domain! To solve this problem I created a new type etbe_apache_dropdown_t; when etbe_apache_super_t executes a program of that type it transitions to etbe_apache_t, so labelling the /etc/init.d directory (and all files it contains) with this type causes the daemons to be executed in the correct domain. The write access for this type is the same as that for etbe_apache_ro_t.

Configuring the Policy

I have based my policy for chroot environments around a single macro that takes two parameters: the name of a user domain and the name of a chroot. For example, if I have an etbe_t domain and want to create a chroot environment for Apache, I can call chroot(etbe, etbe_apache) to create the chroot.

This makes it convenient to set up a basic chroot, as all the most likely operations that don't pose a security risk are allowed. Naturally, if you have different aims you will sometimes need to write more policy. One of my machines runs a chroot environment for the purpose of running Apache; it required an extra fourteen lines of SE policy configuration to allow the Apache log files to be accessed by system processes outside the chroot (in particular a script that runs a web log analysis program).

The typical chroot environment used for an Internet server will probably require between 5 and 20 lines of extra policy configuration and will take an experienced administrator less than 30 minutes to set up. These lines could also be wrapped in custom macros for easy bulk creation; setting up 500 different chroot environments for Apache should not take more than an hour!

Here is my policy for an Apache web server run in the system_r role by the system init scripts that has read-only access to all files under /home (which is --bind mounted inside the chroot), and which allows logrotate to run the web log analysis scripts as a cron job.

# setup the chroot
chroot(initrc, apache_php4)
# allow apache to change UID/GID and to bind to the port
allow apache_php4_t self:capability { setuid setgid net_bind_service };
allow apache_php4_t http_port_t:tcp_socket name_bind;
allow apache_php4_t tmpfs_t:file { read write };
# allow apache to search the /home/user directories
allow apache_php4_t user_home_dir_type:dir search;
# allow apache to read files and directories under the users home dir
r_dir_file(apache_php4_t, user_home_type);
# allow logrotate to enter this chroot (and any other chroot environments
# that are started from initrc_t)
domain_auto_trans(logrotate_t, chroot_exec_t, initrc_chroot_t)
# this chroot is located under a users home directory so logrotate needs to
# search the home directory (x access in directory permissions) to get to it
allow logrotate_t user_home_dir_t:dir search;
# allow logrotate to search through read-only directories (does not need read
# access) and read the directories and files that the chroot can write (the
# web logs). NB I do not need to restrict logrotate this much, but why give
# it more than it needs?
allow logrotate_t apache_php4_ro_t:dir search;
r_dir_file(logrotate_t, apache_php4_rw_t)
# allow a script in the chroot to write back to a pipe created by crond
allow initrc_chroot_t { crond_t system_crond_t }:fd use;
allow initrc_chroot_t { crond_t system_crond_t }:fifo_file { read write };
allow apache_php4_t { crond_t system_crond_t }:fd use;
allow apache_php4_t { crond_t system_crond_t }:fifo_file { read write };

That's 14 lines because I expanded it to make the policy clearer; otherwise I would probably compress it to 10 lines.


The final issue is networking. The BSD jail facility has good support for limiting network access, with a single IP address per jail.

SE Linux lacks support for controlling networking in this fashion: server ports can only be limited by port number. This is fine if different chroot environments provide different services. It also works if you have an intelligent router in front of the server that directs traffic destined for different IP addresses to different ports on the same IP address (in which case the different chroot environments can be given permissions for different ports).

I have an idea for another solution to this problem, one which is more invasive of the chroot environment. The idea is to write a shared object to be listed in /etc/ which will replace the bind() system call. This object will communicate over a Unix domain socket (accessed through a bind mount) with a server process run with system privileges, and will pass it the file descriptor of the socket being bound. The server process will then use the accept_secure() system call to determine the security context of the process attempting the bind, examine some sort of configuration database, and decide whether to allow the bind or whether to modify it. If the bind parameters have to be modified (for example, converting a bind to INADDR_ANY into a bind to a particular IP address) it will do so. Then it will perform the bind() system call and return a success code over the socket that connects to the application.
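The address-rewriting step in the server process can be sketched as follows. This is a hedged illustration only: force_bind_addr() and the idea of passing the permitted address as a string are my invention, since the paper's implementation was unfinished; a real version would also consult the configuration database and the caller's security context.

```c
#include <arpa/inet.h>
#include <netinet/in.h>

/* Sketch of the modification step described above: if the jailed
 * process asked to bind to INADDR_ANY, substitute the single IP
 * address its chroot environment is allowed to use.  Returns 1 if
 * the address was rewritten, 0 if left alone, -1 on a bad address.
 * The function name and string interface are illustrative only. */
static int force_bind_addr(struct sockaddr_in *sa, const char *allowed_ip)
{
    if (sa->sin_family != AF_INET)
        return 0;                 /* leave non-IPv4 binds alone */
    if (sa->sin_addr.s_addr == htonl(INADDR_ANY))
        return inet_pton(AF_INET, allowed_ip, &sa->sin_addr) == 1 ? 1 : -1;
    return 0;                     /* specific address: policy check done elsewhere */
}
```

After rewriting, the privileged server would perform the actual bind() on the file descriptor it received over the Unix domain socket and report the result back to the jailed process.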

This support would be ideal as it would be easiest to automate and would allow setting up hundreds or thousands of chroot environments at the same time with ease.

Unfortunately I couldn’t get this code fully written in time for the publishing deadline.


I believe that I have achieved all my aims regarding secure development environments or other situations where there is no need to run network servers. The policy provides good security and allows easy management.

My current design for entering a chroot environment should work via a SUID root program, but I will have to test it. The current method of allowing unprivileged root logins has been tested in the field and found to work reasonably well.

Currently the only issue I have not solved to my satisfaction is that of binding chroot environments to specific IP addresses. The best option I have devised involves pre-loading a shared object into all processes in the chroot environment (which will inconvenience the user), but I have not yet implemented it, so I am not certain that it will work correctly.

Obtaining the Source

Currently most of my packages and source are available from my web site; however I plan to eventually get them all into Debian, at which time I may remove that site.

I have several packages in the unstable distribution of Debian. The first are the kernel-patch-2.4-lsm and kernel-patch-2.5-lsm packages, which supply the Linux Security Modules kernel patch; that patch includes SE Linux as well as LIDS and some of the OpenWall functionality. When I have time I back-port patches to older kernels and include new patches that the NSA has not officially released, so my patches often provide more features than the official patches distributed by the NSA or by Immunix. However, if you want the official patches then these packages may not be what you desire.

From the selinux-small archive I create the packages selinux and libselinux-dev which are also in the unstable distribution of Debian.


Thanks to Dr. Brian May for reviewing this paper and providing advice.


SE Linux Saves

Here are links to some instances when SE Linux prevented exploits from working or mitigated their damage: