Memlockd is a daemon that locks files into memory, so that if a machine starts paging heavily the chance of being able to login successfully is significantly increased. The default configuration will lock all the files needed for login to a Debian GNU/Linux system via the console or via ssh.
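As a sketch, an /etc/memlockd.cfg covering console and ssh login might list the necessary binaries and libraries. I'm assuming the one-file-per-line format here; check the default configuration shipped with the package for the exact syntax and the full list of files:

/bin/login
/bin/bash
/usr/sbin/sshd
/lib/libc.so.6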

Free Books

Here are some free books that I have downloaded and read:

Computer Power Use

This table shows the power consumption of some of the computers I own. To measure this I use a domestic electricity meter that was certified for use in billing customers. Any inaccuracies in the measurement will correspond to inaccuracies in the electricity bills of people who use such computers.

Before anyone asks, I am not interested in contributions of data; I believe that doing tests with a different meter or in a different country with a different supply voltage would diminish the accuracy of the results. Also I will provide minimal analysis on this page (the numbers should allow you to perform your own analysis).

Before I started such tests I had significant problems cooling my house in summer. Based on the results of these tests I made changes such as replacing the Compaq 1GHz Athlon machine with an IBM 1GHz P3 machine for a small server I run. This saved 49W, and 49W, which mostly ends up as heat, makes a significant difference in a small server room when running 24*7!

All the machines below apart from the SMP machine are workstation class machines; they don’t have ECC RAM and their PSUs are designed for small loads. The SMP machine has a PSU designed for a desktop machine (I couldn’t easily obtain any other type). If it had a PSU designed for server use it would draw more power.

Unless otherwise noted all machines were idling while running Linux (idling while running DOS uses significantly more power).

The summary of this table is: the P3 gives a great ratio of computing power to electrical power, the P4 isn’t too good, and the Athlon sucks badly – don’t run an Athlon server if you have heat problems!

Thinkpad T20 500MHz P3 512M 30G IDE 10.7W
Cobalt Qube AMD K6-450MHz, 128M RAM, 10G IDE 20W
Thinkpad T41p 1.7GHz idle at 600MHz, screen on and battery charged 23W
Compaq SFF 800MHz P3 512M 10G IDE spun-down 28W
Compaq SFF 800MHz P3 512M 10G IDE 35W
Compaq 800MHz P3 128M 10G IDE 38W
IBM 1GHz P3 256M 30G IDE, idling 38W
HP Pavilion 513A Celeron 1.8GHz, 384M RAM, 40G IDE 45W
HP Pavilion 513A Celeron 1.8GHz, 768M RAM, 2*80G IDE + 46G IDE 58W
Compaq 1.1GHz Celeron 512M 40G IDE idling 46W
HP/Compaq Celeron 2.4GHz, 512M RAM, no hard disk 43W
HP/Compaq Celeron 2.4GHz, 512M RAM, 300G IDE 50W
NEC Pentium-E2160 1.8GHz, 1G RAM (1 DIMM), 160G S-ATA 52W
Packard-Bell (NEC) Celeron-D 2.93GHz, 512M RAM, 2*20G IDE 75W
Compaq 1.5GHz P4 256M 20G IDE, idling 78W
Compaq 1.5GHz P4 256M 20G IDE, installing 85W
SMP 2*P3 1GHz, 1GB RAM, 2*U160 SCSI 18G disks idle 81W
SMP 2*P3 1GHz, 1GB RAM, 2*U160 SCSI 18G disks disk busy 99W
SMP 2*P3 1GHz, 1GB RAM, 2*U160 SCSI 18G disks CPU busy 130W
SMP 2*P3 1GHz, 1GB RAM, 2*U160 SCSI 18G disks CPU and disk busy 136W
Compaq 1GHz Athlon 256M 20G IDE idling 87W
NEC Pentium-D (920) 2.8GHz, 1G RAM, 160G S-ATA 98W
White-box Athlon XP 1700+, 768M RAM, 2*80G IDE + 46G IDE 110W

Here is the Computer Related Power Use page [1] (for switches, filters, and other things).


Postal

This is a program I wrote to benchmark SMTP servers. I started work on this because I needed to know which mail server would give the best performance with more than 1,000,000 users. I have decided to release it under the GPL because there is no benefit in keeping the source secret, and the world needs to know which mail servers perform well and which don’t!

At the OSDC conference in 2006 I presented a paper on mail relay performance based on the new BHM program that is now part of Postal.

I have a Postal category on my main blog that I use for a variety of news related to Postal. This post (which will be updated periodically) will be the main reference page for the software. Please use the comments section for bug reports and feature requests.

It works by taking a list of email addresses to use as FROM and TO addresses. I originally used a template to generate the list of users because if each email address takes 30 bytes of storage then 3,000,000 accounts would take 90M of RAM, which would be more than the memory in the test machine I was using at the time. Since that time the RAM size in commodity machines has increased far faster than the size of ISP mail servers, so I removed the template feature (which seemed to confuse many people).

When sending the mail the subject and body will be random data. A header field X-Postal will be used so that procmail can easily filter out such email just in case you accidentally put your own email address as one of the test addresses. ;)
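For example, a procmail recipe to discard messages with that header might look like this (a sketch; deliver to a folder instead of /dev/null if you want to inspect the messages):

:0
* ^X-Postal
/dev/null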

I have now added two new programs to the suite: postal-list and rabid. Postal-list will list all the possible expansions for an account name (used for creating a list of accounts to create on your test server). Rabid is the mad Biff; it is a POP benchmark.

Postal now adds an MD5 checksum in the header X-PostalHash to all messages it sends (the checksum is over the Subject, Date, Message-ID, From, and To headers and the message body, including the “\r\n” that ends each line of text in the SMTP protocol). Rabid now checks the MD5 checksum and displays error messages when it doesn’t match.

I have added rate limiting support in Rabid and Postal. This means that you can specify that these programs send a specific number of messages and perform a specific number of POP connections per minute respectively. This should make it easy to determine the amount of system resources that are used by a particular volume of traffic. Also if you want to run performance analysis software to determine what the bottlenecks are on your mail server then you could set Postal and Rabid to only use half the maximum speed (so the CPU and disk usage of the analysis software won’t impact on the mail server).

I will not release a 1.0 version until the following features are implemented:

  • Matching email sent by Postal and mail received by BHM and Rabid to ensure that each message is delivered correctly (no repeats and no corruption)
  • IMAP support in Rabid that works
  • Support for simulating large numbers of source addresses in Postal. This needs to support at least 2^24 addresses so it is entirely impractical to have so many IP addresses permanently assigned to the test machine.
  • Support for simulating slow servers in Postal and BHM (probably reducing TCP window size and delaying read() calls)
  • Making BHM simulate the more common anti-spam measures that are in use to determine the impact that they have on list servers
  • Determining a solution to the problem of benchmarking DNS servers. This may mean just including documentation on how to simulate the use patterns of a mail server using someone else’s DNS benchmark, but may mean writing my own DNS benchmark.

Here are links to download the source:

  • postal-0.72.tgz – made LMTP work and accept TAB as a field delimiter.
  • postal-0.71.tgz – rewrote the md5 checking code and fixed lots of little bugs.
  • postal-0.70.tgz – tidied up the man pages and made it build without SSL support.
  • postal-0.69.tgz – fixed some compile warnings, and really made it compile with GCC 4.3
  • postal-0.68.tgz – fixed some compile warnings, made it compile with GCC 4.3, and I think I made it compile correctly with OpenSolaris.
  • postal-0.67.tgz – changed the license to GPL 3
  • postal-0.66.tgz – made GNUTLS work in BHM and added MessageId to Postal.
  • postal-0.65.tgz – significant improvement, many new features and many bugs fixed!
  • postal-0.62.tgz – Slightly improved the installation documents and made it build with GCC 3.2.
  • postal-0.61.tgz – version 0.61. Fixed the bug with optind that stopped it working on BSD systems, and a few other minor bugs.
  • postal-0.60.tgz – version 0.60. Fixed the POP deletion bug, made it compile with GCC 3.0, and added logging of all network IO to disk.
  • postal-0.59.tgz – version 0.59.
  • postal-0.58.tgz – version 0.58. Added some new autoconf stuff, RPM build support, and the first steps to OS/2 and Win32 portability.
  • postal-0.57.tgz – version 0.57. Fixed lots of trivial bugs and some BSD portability issues.
  • postal-0.56.tgz – version 0.56. Added Solaris package manager support. Made it compile without SSL. Added heaps of autoconf stuff.
  • postal-0.55.tgz – version 0.55. Made Rabid work with POP servers that support the CAPA command. Fixed some compile problems on Solaris.
  • postal-0.54.tgz – version 0.54. Added a ./configure option to turn off name expansion (for systems with buggy regex). Fixed a locking bug that allowed Rabid to access the same account through two threads.
  • postal-0.53.tgz – version 0.53. Don’t use NIS domain name etc for SMTP protocol.
  • postal-0.52.tgz – version 0.52. Better portability with autoconf.
  • postal-0.51.tgz – version 0.51. Supports compiling without SSL and some hacky Solaris support.
  • postal-0.50.tgz – version 0.50. Adds SSL support to Postal (Rabid comes next).

How to Debug POP

POP (Post Office Protocol) is the most used protocol for transferring mail from a server to a MUA (Mail User Agent) for reading. It is specified in RFC1939.

But the way it works (in most cases) is quite simple and doesn’t require reading the RFC. Connect to port 110 (the standard port for POP3); a basic session transcript is as follows (data sent by the client is prefixed with C: and data sent by the server is prefixed with S:):
C:user ABC
S:+OK USER ABC set, mate
C:pass asecret
S:+OK Mailbox locked and ready
C:list
S:+OK scan listing follows
S:1 2989

When the server successfully completes an operation it will precede its response with “+OK“; when it fails it will precede its response with “-ERR“. The data after the OK or ERR statement is for humans, not machines, so in most cases your MUA will discard it. Therefore connecting to the service manually is required to properly debug problems. The unfortunate thing is that often on big mail servers it takes time for the sys-admin to do such tests. If the user can do it for them and give a bug report saying “your POP server said -ERR user unknown” then things will get fixed a lot faster than if the report is “the POP server didn’t work”.
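For example, a transcript of a failed login might look like the following (the exact wording after -ERR varies between servers):

C:user fred
S:+OK USER fred set, mate
C:pass wrongpass
S:-ERR [AUTH] Authentication failed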

One thing that is quite important is the initial greeting string: on any system of moderate size you will have multiple back-end servers and the greeting will tell you which server you are connecting to. If POP sometimes works and sometimes fails then your ISP might have one server failing, so making a note of this greeting string in a transcript of a failed session can really help in tracking down problems.

When the list of messages is displayed, the first column is message numbers (starting at one and going up sequentially) and the second column is message sizes. If you have a POP session timing out and you have an extremely large message then that might be the cause.

A commonly used program for testing POP (and other Internet services) is telnet. To start the above process you would type “telnet servername 110” (where servername is the name of your POP server).

There are methods of hashing POP passwords (which make things a little more complex), but they often aren’t used – and in any case don’t encrypt the data. So it’s common to run POP servers with SSL, and the standard port for this is 995. This makes testing a little more complicated (but actually no more difficult).

To make an SSL connection you can use the program stunnel; it is included in many (most?) distributions of Linux, and Windows binaries are apparently available (NB I’ve never tested the Windows binaries as I don’t use Windows).

The command “stunnel -c -r servername:995” will connect you to your mail server via SSL and you can then type in the POP commands as normal.

If your POP server supports the STLS command (which allows negotiation of TLS/SSL on port 110) then you can use the command “stunnel -n pop3 -c -r servername:110“.

To use gnutls, you can use the command “gnutls-cli -p 995 servername“, or to work with STLS on port 110 you use the command “gnutls-cli -s -p 110 servername” and press ^D after entering the STLS command.

How to Debug SMTP with TLS(SSL) and AUTH

The first thing to test is a TLS (aka SSL) connection. The stunnel program has special code for this: the command “stunnel -n smtp -c -r servername:25” will connect to the server via SMTP and negotiate SSL.

If you use gnutls then the command “gnutls-cli -s -p 25 servername” will connect to the server and allow you to establish the session (by typing “ehlo hostname” and then “starttls“), after which you can press ^D to enter TLS mode. This is a little more inconvenient.

Once one of these is done you will receive a 220 message acknowledging the connection (which is the same as if you had just connected without TLS). If you want to test the TLS certificate then use the “-v” option to stunnel. Note that if the certificate is not verified successfully then stunnel will exit and log the reason via syslog. While stunnel seems more convenient for actually using a protocol, the openssl utility is a much better program for actually testing out the SSL functionality. The command “openssl s_client -CApath /etc/ssl/certs/ -starttls smtp -connect servername:25” will dump a lot of diagnostic information about the SSL protocol. Note that the location of the SSL certificates varies by distribution; /etc/ssl/certs is the location used on Debian.

Compared to the other two programs, gnutls-cli is less convenient than stunnel for simply using a protocol, and somewhere between openssl and stunnel in terms of utility for debugging. It’s good to have all three clients available for testing!

Then enter the command “ehlo hostname” (the hostname is generally not checked for the case of mail relaying so any text that vaguely resembles a real host DNS name will do).

The response to that command will be something like the following (servername and the client address in brackets are examples):
250-servername Hello [10.1.2.3], pleased to meet you
250-AUTH LOGIN PLAIN
250 HELP

The important thing to note is the 250-AUTH message which indicates that you may authenticate; it tells us that you can use the LOGIN and PLAIN methods of authentication. All further communication for the login will be base64 encoded. The best utilities that I know of in Debian/Etch for encoding and decoding base64 are /usr/share/fml/bin/base64encode.pl and /usr/share/fml/bin/base64decode.pl which are in the fml package. Debian/Lenny and newer have base64 as part of the coreutils package.

The command “auth login” will typically give the response “334 VXNlcm5hbWU6“, and the command “echo VXNlcm5hbWU6 | /usr/share/fml/bin/base64decode.pl” shows that it is requesting the “Username:“.

To generate a response to the Username prompt run the command “echo -n user@example.com | /usr/share/fml/bin/base64encode.pl” (or whatever your user-name is) and you will receive an encoded message such as “dXNlckBleGFtcGxlLmNvbQ==“. Enter that to the mail server and you will get a response with another 334 code similar to “334 UGFzc3dvcmQ6“; again if you decode the part after the space you will see that it is prompting for the “Password:“. The command “echo -n mypass | /usr/share/fml/bin/base64encode.pl” will give a response that you can give to that prompt. If all goes well that will give a 235 message to tell you that you are authenticated. Then you can relay mail!
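Putting it all together, a successful AUTH LOGIN exchange might look like the following (using user@example.com and mypass as above; the text of the 235 message varies between servers):

auth login
334 VXNlcm5hbWU6
dXNlckBleGFtcGxlLmNvbQ==
334 UGFzc3dvcmQ6
bXlwYXNz
235 2.7.0 Authentication successful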

When relaying mail after authenticating via SASL you can use the auth parameter to indicate the authenticated sender. This means that instead of using the SMTP command “mail from: <user@example.com>” you use the command “mail from: <user@example.com> auth=<user@example.com>“.

Normally this will all be done by your MUA, but if something goes wrong and you don’t know why then manually running through the steps can reveal the source of the problem.

Software vs Hardware RAID

It’s a commonly held myth that hardware RAID is unconditionally better than software RAID. That claim is not true in all cases and is particularly wrong at the low end.

Really Cheap Hardware RAID

The cheapest so-called hardware RAID uses RAID in the BIOS and relies on an OS driver for support when running in protected mode. This is essentially a different sort of software RAID but with BIOS support to boot from it. Using a different disk format to the standard software RAID for your OS can make it more difficult to recover when things go wrong, and there’s no benefit to this. If you use software RAID-1 from your OS and set things up correctly then you can boot from either disk. Using software RAID-1 for booting and RAID-5 or RAID-6 for the OS and data is a viable option.
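As a sketch, creating such a RAID-1 setup on Linux might look like the following (device names are examples; install the boot loader on both disks so that either one can boot on its own):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0
grub-install /dev/sda
grub-install /dev/sdb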

Cheap Hardware RAID

Cheap hardware RAID doesn’t have write-back caching and therefore can’t give any significant performance benefit over software RAID. Note that there are different options for how RAID stripes are laid out which can affect performance, so if a cheap hardware RAID device gives any significant performance benefit over software RAID then it’s probably due to where the blocks happen to be stored working well with your filesystem, which is of course a benefit you could get from tuning software RAID.

The Mythical CPU Benefits of Hardware RAID

It’s widely believed that hardware RAID is faster because it takes the processing away from the CPU. But the truth is that for at least the last 10 years CPUs have been fast enough, and in fact it’s often been the case that RAID controllers have been the bottleneck.

When I loaded the Linux RAID-5/RAID-6 driver on my Thinkpad T61 its 2.2GHz T7500 CPU (which isn’t a particularly new or powerful laptop CPU) was tested and shown to be capable of 3227MB/s for RAID-6 calculations. The fastest SATA disk I’ve benchmarked was capable of sustaining almost 120MB/s on its outer tracks. If we assume that newer disks are capable of 150MB/s then my Thinkpad could handle the RAID calculations for an array of 20 such disks.
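The kernel logs the result of this benchmark when the raid6 module is loaded, so you can check your own CPU with something like the following (the algorithm names and speeds will of course vary):

dmesg | grep raid6
raid6: sse2x2    3227 MB/s
raid6: using algorithm sse2x2 (3227 MB/s)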

An old P3-1GHz desktop system I use for a low-end server can do 591MB/s of RAID-6 calculations in software; if I was able to connect SATA disks to that old system then it could drive four of them in a RAID array at full speed!

It’s often claimed that a benefit of hardware RAID is to avoid CPU use. Contiguous IO can use a moderate amount of CPU power; I could potentially use 20% of one core of a T7500 if I had four disks running at once. But large contiguous IO isn’t that common. If you are using a Gigabit Ethernet port to transfer data then you are limited to something slightly more than 100MB/s, and most applications don’t involve large contiguous data transfers, so the amount of data transferred goes down.

One way that hardware RAID can save CPU time is if the interface to the hard drives is inefficient. The IDE interface didn’t seem particularly efficient, and large transfers to IDE disks often required more CPU time than expected. For such disks, having them on a RAID controller that emulated a giant SCSI disk could save some CPU time.

Back in 2000 I did some tests on a Mylex DAC 960 hardware RAID controller that was only capable of sustaining 10MB/s. This wasn’t a problem as the applications were seek intensive and the Mylex performed well for that task. But for contiguous IO software RAID would have given much better performance.

The Real Benefits of Hardware RAID

A good hardware RAID system will have NVRAM for a write-back cache. This can dramatically improve write performance which is very important on RAID-5 and RAID-6 systems that perform really badly for small writes.

Good hardware RAID controllers will often support many more disks than a non-RAID controller. If you want to have more than 4 disks then hardware RAID has some serious benefits. But it has to have an NVRAM write-back cache; otherwise you get no useful benefits and you might as well use software RAID.


Conclusion

If you can’t afford a high-end RAID system such as an HP CCISS then use software RAID. Software RAID will be faster and more reliable than cheap hardware RAID.

If you need more than four disks then you can probably benefit a lot from hardware RAID with write-back caching.

SE Linux Terminology

Security Context is the SE Linux label for a process, file, or other resource. Each process or object that a process may access has exactly one security context. It has four main parts separated by colons: User:Role:Domain/Type:Sensitivity Label. Note that the Sensitivity Label is a compile-time option that all distributions enable nowadays.
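For example, the command “id -Z” shows the security context of your shell. On a system with the MCS policy it might print something like the following (the exact context depends on the policy and how you logged in):

unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023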

User in terms of SE Linux is also known as the Identity. The program semanage can be used to add new identities and to change the roles and sensitivities assigned to them. System users often end in “_u” (EG user_u, unconfined_u, and system_u) but this is just a convention used to distinguish system users from users that associate directly with Unix accounts – which are typically the same as the name of the account. So the user with Unix account john might have a SE Linux user/identity of john. Note that as the local sysadmin can change the user names with semanage you can’t make any strong assumptions about a naming convention. When a process creates a resource (such as a file on disk) then by default the resource will have the same user as the process.

Role for a process determines the set of domains that may be used for running a child process. Through semanage you can configure which roles may be entered by each user. The default policy has the roles user_r, staff_r, sysadm_r, and system_r. Adding new roles requires recompiling the policy which is something that most sysadmins don’t do. So you can expect that all role names end in “_r“.

Object Class refers to the kind of object that is to be accessed; there are 82 object classes in the latest policy, many of which are related to things such as the X server. Some object classes are file, dir, chr_file, and blk_file. The reason for having object classes is so that access can be granted to an object of one class with a given type label without being granted to an object of a different class that has the same type label.

Type is the primary label for the Domain/Type or Type-Enforcement model of access control, by tradition a type name ends in “_t“. There is no strong difference between a domain and a type, a domain is the type of a process. In the DT model there are a set of rules which specify what happens when a domain tries to access an object of a certain object class for a particular access (read, write, etc).
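As a sketch of the DT model, a policy rule granting a domain access to an object class looks like the following (user_t and user_home_t are standard types from the reference policy):

allow user_t user_home_t:file { read getattr };

This says that a process in the user_t domain may read files (but not directories or device nodes, which are different object classes) labeled with the user_home_t type.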

MLS stands for Multi Level Security; it’s a hierarchical system for restricting access to sensitive data. Its core principle is that of no write-down and no read-up. In a MLS system you can only write data to a resource with an equal or higher sensitivity label.

MCS stands for Multi Category Security.

Sensitivity Level is a hierarchical level of sensitivity in the MLS policy. In the default policy there are 16 levels from s0 to s15. The MCS policy uses some of the mechanisms of MLS but not the levels, so in MCS the level is always set to s0. The policy can be recompiled to have a different number of levels.

Category is a primitive for the MCS and MLS policies. The default policy has 1024 categories from c0 to c1023; the policy can be recompiled to have a different number of categories.

Sensitivity Label is for implementing MLS and MCS access controls. It may be ranged, in which case it has a form “LOW-HIGH” where both LOW and HIGH are comprised of a Sensitivity Level and a set of categories separated by a colon – EG “s0:c1-s1:c1.c10” means the range from level s0 with category c1 to the level s1 with the set of categories from c1 to c10 inclusive. If it isn’t ranged then it just has a level and a set of categories separated by a colon. In a set of categories a dot is used to indicate a range of categories (all categories between the low one and the high one are included) while a comma indicates a discontinuity in the range. So “c1.c10,c13” means the set of all categories between c1 and c10 inclusive plus the category c13. The kernel will canonicalise category sets, so if it is passed “c1,c2,c3” then it will return “c1.c3“. These raw labels may be translated into a more human readable form by mcstransd.

Constraint is a rule that restricts access. SE Linux is based on the concept of deny by default and the domain-type model uses rules to allow certain actions. Constraints are used for special cases where access needs to be restricted outside of the domain-type model. MCS and MLS are implemented using constraints.

MySQL Cheat Sheet

This document is designed to be a cheat-sheet for MySQL. I don’t plan to cover everything, just most things that a novice MySQL DBA is likely to need often or in a hurry.

Configuring mysqld

If you are going to provide a database service to other machines edit /etc/mysql/my.cnf and set the bind-address parameter to a suitable value. A value of 0.0.0.0 will cause it to accept connections on any of the server’s addresses. I recommend using a private address range (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16) for such database connections and ideally a back-end VLAN or Ethernet switch that doesn’t carry any public data.

For the purpose of this post let’s consider the MySQL server to have a private IP address of 10.1.2.3. So you want the my.cnf file to have bind-address = 10.1.2.3.
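So the relevant part of /etc/mysql/my.cnf would look like the following (10.1.2.3 being the example address from above):

[mysqld]
bind-address = 10.1.2.3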

To start mysql administration use the command “mysql -u root“. In Debian the root account has no password by default; on CentOS 5.x starting mysql for the first time gives a message:
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h server password 'new-password'

That is wrong: for the second mysqladmin command you need a “-p” option (or you can reverse the order of the commands).

There is also the /usr/bin/mysql_secure_installation script that has an interactive dialog for locking down the MySQL database.

Administrative Password Recovery

If you lose the administration password the recovery process is as follows (a combined example session appears after the list):

  1. Stop mysqld; this may require killing the daemon if the password for the system account used for shutdown access is also lost.
  2. Start mysqld with the --skip-grant-tables option.
  3. Use SQL commands such as “UPDATE mysql.user SET Password=PASSWORD('password') WHERE User='root';” to recover the passwords you need.
  4. Use the SQL command “FLUSH PRIVILEGES;“.
  5. Restart mysqld in the normal manner.
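A sketch of the whole procedure on a Debian system (the init script name varies by distribution; 'password' is the example new password):

/etc/init.d/mysql stop
mysqld_safe --skip-grant-tables &
mysql -u root
  UPDATE mysql.user SET Password=PASSWORD('password') WHERE User='root';
  FLUSH PRIVILEGES;
mysqladmin -u root shutdown
/etc/init.d/mysql start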

User Configuration

For an account to automatically login to mysql you need to create a file named ~/.my.cnf with the following contents:

[client]
user=USERNAME
password=PASSWORD
database=DBNAME

Replace USERNAME, PASSWORD, and DBNAME with the appropriate values. They are all optional parameters. This saves using the mysql client’s -u parameter for the username, the -p parameter for the password, and specifying the database name on the command line. Note that using the “-pPASSWORD” command-line option to the mysql client is insecure on multi-user systems as (in the absence of any security system such as SE Linux) any user can briefly see the password via ps.

Note that the presence of the database= option in the config file breaks mysqlshow and mysqldump for MySQL 5.1.51 (and presumably earlier versions too). So it’s often a bad idea to use it.


To grant all access to a new database:
USE foo_db;
GRANT ALL PRIVILEGES ON foo_db.* to 'user'@'10.1.2.4' IDENTIFIED BY 'pass';

Where 10.1.2.4 is the client address and pass is the password. Replace 10.1.2.4 with % if you want to allow access from any client address.

Note that if you use “foo_db” instead of “foo_db.*” then you will end up granting access to foo_db.foo_db (a table named foo_db in the foo_db database) which generally is not what you want.

To grant read-only access replace “ALL PRIVILEGES” with “SELECT“.

To show what is granted to the current user run “SHOW GRANTS;“.

To show the privs for a particular user run “SHOW GRANTS FOR 'user'@'10.1.2.4';“.

To show all entries in the user table (user-name, password, and hostname):
USE mysql;
SELECT Host,User,Password FROM user;

To do the same thing at the command-line:
echo "SELECT Host,User,Password FROM user;" | mysql mysql

To revoke access:
REVOKE ALL PRIVILEGES ON foo_db.* FROM 'user'@'10.1.2.4';

To test a user’s access connect as the user with a command such as the following:
mysql -u user -h 10.1.2.3 -p foo_db

Then test that the user can create tables with the following mysql commands:
CREATE TABLE test (i INT);
DROP TABLE test;

Listing the Databases

To list all databases that are active on the selected server run “mysqlshow“; it uses the same methods of determining the username and password as the mysql client program.

To list all tables in a database run “SHOW TABLES;“. For more detail select from INFORMATION_SCHEMA.TABLES or run “SHOW TABLE STATUS;“.

For example to see the engine that is used for each table you can use the command echo "SELECT table_schema, table_name, engine FROM INFORMATION_SCHEMA.TABLES;" | mysql.

But INFORMATION_SCHEMA.TABLES is only in MySQL 5 and above; for prior versions you can use “mysqldump -d” to get the schema, or “SHOW CREATE TABLE table_name;” in the mysql client.

Also the mysqlshow program can be used to display the tables in a database via “mysqlshow database” or the columns in a table via “mysqlshow database table“.

To list active connections: “SHOW PROCESSLIST;”

Database Backup

The program mysqldump is used to make a SQL dump of the database. EG: “mysqldump mysql” to dump the system tables. The data compresses well (being plain text of a regular format) so piping it through “gzip -9” is a good idea. To backup the system database you could run “mysqldump mysql | gzip -9 > mysql.sql.gz“. To restore simply run “mysql -u user database < file“; in the case of the previous example “zcat mysql.sql.gz | mysql -u root database“.

To dump only selected tables you can run “mysqldump database table1 [table2]“.

The option --skip-extended-insert means that a single INSERT statement will be used for each row. This gives a bigger dump file but allows running diff on multiple dump files.

The option --all-databases or -A dumps all databases.
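For example, a compressed dump of every database might be taken like this (the output path is an arbitrary choice):

mysqldump -A | gzip -9 > /var/backups/mysql-all.sql.gz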

The option --add-locks puts LOCK TABLES and UNLOCK TABLES statements around the inserts for each table in the dump, which improves the performance of restoring the data.

Note that mysqldump blocks other database write operations, so don’t pipe it through less or any other process that won’t read all the data in a small amount of time.

The command “mysqldump -d DB_NAME” dumps the schema.

The option --single-transaction causes mysqldump to use a transaction for the dump (so that the database can be used in the mean time). This only works with INNODB. To convert a table to INNODB the following command can be used:
ALTER TABLE table_name ENGINE=INNODB;

To create a slave run mysqldump with the --master-data=1 option.

When a master’s binary logs get too big a command such as “PURGE MASTER LOGS BEFORE '2008-12-02 22:46:26';” will purge the old logs. An alternate version is of the form “PURGE MASTER LOGS TO 'mysql-bin.010';“. The MySQL documentation describes how to view the slave status to make sure that this doesn’t break replication.


Portslave

Portslave is a getty replacement that is designed to talk to a modem and spawn PPP or SLIP when the modem connects. It authenticates the connection via RADIUS.