Securing Unix and Linux Systems

Note that the purpose of the system in question may affect how some of these suggestions are implemented, but the general focus should remain the same. This list isn't in any particular order of priority, nor can I promise it's complete. The goal here is to provide some insights for fellow Unix systems administrators.

Rule #1: be PARANOID!

Try to consider all the possible ways things can go wrong, and do what you can to protect against them. In many cases, you want to protect a system from your own mistakes as well as from potential external threats. It makes no difference to the users whether you accidentally mess up the password file or an external party (or malicious local user) causes it to be overwritten through some software bug. Remember also that software bugs can be triggered by honest, authorized users just as easily as by malicious ones.

Points to Consider

installed software:

OS vendors will often provide "default" installations (or hardware vendors will ship a system with a default installation of the OS), which usually include lots of software you're not likely to use. When you install the operating system on the machine (and you certainly should re-install it if you have a new machine with a default OS installed), consider which software packages are actually necessary for the machine to fulfill its purpose.

For example, by default most operating systems will install UUCP utilities (which were necessary in the days of networking over phone lines). These utilities are for the most part never used any more, run with elevated privileges, and are non-trivial to configure correctly. Many systems have been compromised because of incorrect UUCP configurations, and I always recommend that unless those tools are required on a system, they should not be installed.

Another example would be SNMP tools. Vendors love to show off "remote" reporting facilities which use SNMP, but if configured incorrectly, SNMP can be used by unauthorized parties to map out network relationships that you may not want outside parties knowing about. SNMP is not trivial to configure, and hardly ever required. It's another package I recommend not installing at all, unless you're prepared to take the time to configure it carefully.

There are, of course, other similar types of software, but without knowing how the machine will be used, it's difficult to give an exhaustive list of what should be avoided. In general, avoid installing software that isn't required for the general operation of the system.


file-systems:

Most systems permit file-systems to be mounted with various optional parameters. For example, if at all possible, you should consider installing the /usr directory on its own file-system and mounting it read-only. This will ensure that the files within it (which generally include many binaries used by the root user, and libraries used by all binaries) cannot be accidentally overwritten.

Also, if you have the option, it's preferable to mount file-systems that can be written to by unprivileged users (such as the /tmp file-system) such that executables placed within those directories cannot be run with setuid/setgid privileges. Depending on the file-system and on how the system itself is being used, it might even make sense to mount it so that files within it cannot be executed at all. (This might make sense with /tmp on some systems, or other file-systems on others. It's an option which should be considered.)
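As a sketch, the relevant mount options on many systems are "ro", "nosuid", and "noexec". The /etc/fstab entries below are purely illustrative; device names, file-system types, and even the exact option names vary between Unix flavours:

```
# Illustrative /etc/fstab entries; device names and file-system types
# are placeholders, and exact option names vary by OS.
/dev/sd0g  /usr  ffs  ro                 1 2   # read-only /usr
/dev/sd0e  /tmp  ffs  rw,nosuid,noexec   1 2   # no setuid, no execution
```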

world-writable files and directories:

They're inevitable, and you should know where they are on your system.

Make sure to use the "sticky" bit on world-writable directories. This will prevent users from deleting (or overwriting) files which they do not own within the directory.

Also, on many systems, when a file is created within a directory, it inherits the group ownership of the directory. On a system I was administering, that attribute resulted in a couple of instances of a non-privileged user being able to (accidentally, in this case) create executable files in the /tmp directory which were setgid to group "system". Had the user been malicious, he might have used that to attempt to further elevate his privileges. /tmp on all systems I manage is now owned by group "nobody", an unprivileged group.
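As a quick sketch of the mechanics (a scratch directory stands in here for a real world-writable directory like /tmp):

```shell
#!/bin/sh
# Create a scratch directory and give it /tmp-style permissions:
# world-writable, with the sticky bit set so users can only remove
# their own files.
d=$(mktemp -d)
chmod 1777 "$d"
ls -ld "$d"    # mode shows as drwxrwxrwt; the trailing "t" is the sticky bit
rmdir "$d"
# On a real system you would also give /tmp an unprivileged group, e.g.:
#   chgrp nobody /tmp
```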

set[ug]id files:

Speaking of setgid (and setuid) files... Make a list of all setuid and setgid files on the system. You'll find that a lot of them seem mysterious, and for some of them it won't be clear why they require elevated privileges. Remove the elevated privilege from those and see what breaks.
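The usual way to build that list is with find; a minimal sketch (the -xdev flag, which keeps the search on a single file-system, is optional):

```shell
#!/bin/sh
# List all setuid (-perm -4000) and setgid (-perm -2000) files,
# discarding errors from unreadable directories.
find / -xdev \( -perm -4000 -o -perm -2000 \) -type f -print 2>/dev/null
```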

Of those files which do need the elevated privilege (the exact list will vary from system to system), you'll want to remove read permission from the binary. This way, should something go wrong while that program is running, it won't be able to drop a core file. (Core dumps on many systems follow symbolic links, so for example, if I put a symlink in my current working directory, called "core" and pointing to /etc/passwd, then cause the "su" program to dump core, I might overwrite the system's password file.)

In some instances, it might also make sense to make a setuid binary executable only by human users, via a "users" or similar group. There really isn't much reason for an otherwise unprivileged user such as "www" (for example) to be able to run most binaries, especially those which might be used to elevate privileges.
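Putting the last two points together, a hardened setuid binary might end up with mode 4710: setuid, unreadable, and executable only by one group. A sketch (the temporary file is a stand-in; the program name and group are hypothetical):

```shell
#!/bin/sh
# Sketch: harden a (stand-in) setuid binary.
f=$(mktemp)        # stand-in for a real binary such as /usr/bin/someprog
chmod 4710 "$f"    # setuid, no read bit, group-execute only
ls -l "$f"         # mode shows as -rws--x---
rm -f "$f"
# For the real thing, as root:
#   chown root:users /usr/bin/someprog
#   chmod 4710 /usr/bin/someprog
```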

There are many programs which will require setuid (or setgid) permission. A brief list (off the top of my head) might include:

passwd, chfn, chsh:

All these write to the system password file (and perhaps a "shadow" password file or authentication database) and therefore must be setuid to root.


rlogin, rsh, rcp:

The client portions of the "r" programs need to run from a privileged network port in order for authentication to succeed at the remote server. These therefore must be setuid root.


ping, traceroute:

Programs which manipulate network packets (traceroute varies the TTL, for example) require root privileges to do so.

There are most certainly others, but this should give you an idea of what sorts of programs you'll need to leave with elevated privileges. Remember that it's very unlikely that non-human users on your system would need to run any of those, so it makes sense to make them executable only by a group that contains only human users (usually "users").


passwords:

Password sniffing and brute-force guessing are still very popular methods for gaining unauthorized access to a system. Pretty much all modern Unix and Linux systems (that I'm aware of, that is) support some form of "shadow password" mechanism, where the users' encrypted passwords are not kept in the password file itself, but rather in a separate file (or authentication database) which can be read only by the root user (or processes running with an equivalent effective userid) or (at least on some Linux systems) a special "shadow" group.

Using such a mechanism helps reduce the possibility that passwords can be "guessed" (using password-cracking software). The system password file contains lots of information about each user, and must be readable by all users on the system in order for some programs (some as simple as "ls") to work correctly. As such, anyone with access to an account on the system can obtain a copy of the system password file. If that file contains the encrypted passwords, then given the computing power available in the average personal computer these days, it has become rather trivial to find users' passwords.
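A quick way to confirm the split on a given system (the paths assume the conventional /etc/passwd and /etc/shadow; some Unixes keep the protected data in /etc/security/passwd or an authentication database instead):

```shell
#!/bin/sh
# The password file must stay world-readable; the shadow file must not be.
ls -l /etc/passwd
# Typically: -rw-r--r--  root  ...  /etc/passwd
if [ -f /etc/shadow ]; then
    ls -l /etc/shadow
    # Typically: -r--------  root  (or readable by group "shadow" on some Linuxes)
fi
```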

In addition to using some form of shadow password mechanism, you should use a password changing program which enforces a policy requiring difficult-to-"guess" passwords. This way, even given the encrypted passwords and a fast computer, brute-force guessing of passwords becomes far from trivial. Given sufficient time and resources, mind you, it could still be done, so you want your password policy to make it as difficult as possible, while still permitting users to create easy-to-remember passwords.

Some sites and some system administrators believe that passwords can be protected by forcing users to change them frequently. Although there is merit to encouraging users to change their passwords regularly, my own feeling is that forcing them to do so can potentially work against their protection. I've written a document which explains some of the issues involved, and some approaches we use where I work to address these issues. No single policy will work for every site, though. What's important is that the problem of password protection be carefully considered and a policy adopted that addresses these concerns in a suitable manner for an individual site.

Remember: if users can't remember their passwords, or they're forced to change them too frequently, they'll write them down, and a written-down password requires very little computing power to "decrypt".

network services:

Turn off any software that listens on a network port and isn't required for the general operation of the system. That is, go through the system's /etc/inetd.conf file, and comment out any lines that refer to a service that isn't required on the system in question. What is required varies from system to system, so I can't make specific recommendations here.
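One quick way to audit this (assuming the traditional /etc/inetd.conf; systems using xinetd or other replacements keep this configuration elsewhere) is to list only the active, uncommented lines:

```shell
#!/bin/sh
# Show every service line in inetd.conf that is not commented out or blank.
conf=/etc/inetd.conf
if [ -f "$conf" ]; then
    grep -v '^[[:space:]]*#' "$conf" | grep -v '^[[:space:]]*$'
fi
```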

Certain services work in your favour, of course, such as an "ident" daemon, which can help you trace a user who may have done something that caused the sysadmin of another system to complain to you ("why did user@yourhost try to telnet to ..."), or perhaps to increase the granularity of certain network access controls ("we will accept ssh connections from only user@myhost, rather than all users on myhost").

Also check that there aren't any stand-alone daemons that you don't need running that accept network connections from the outside. Check configuration files, and adjust them where necessary.

For example, rshd and rlogind both check the /etc/hosts.equiv file and the target user's .rhosts file. You may need to enable that service (if you do, I strongly recommend NOT using /etc/hosts.equiv for authentication unless you are the system administrator for all systems listed there, and you know that all your userids match on all systems). But some of your users may not be aware of how to set up their .rhosts files intelligently, and may attempt to use "+" signs to allow all users on a machine access to their account (or all users on any machine, or a specific user on any machine, etc.).

For various reasons, this is a bad idea, and many Unix systems now permit the system administrator to include the string "NO_PLUS" in the hosts.equiv file to turn off acceptance of "+" signs in the .rhosts files. If you must permit rsh/rlogin access and your system supports that, use it.

Know that with Rsh-type services, (including Ssh), if you trust a host, (or an account), you are also trusting all other hosts (or accounts) that it trusts. Understand your trust relationships, and review them periodically.

access control:

You want to control which (remote) hosts can connect to which services on a (local) host. A very effective way to do this is to install and use Tcp_Wrappers by Wietse Venema. You would wrap most of the services left open in the inetd.conf file (very few services shouldn't be wrapped, but there are some, such as the "ident" service mentioned earlier), as well as some stand-alone services that can be linked with Tcp_Wrappers, and then be able to control from which hosts users can access those services. For example, in a university environment, you might permit ssh access to certain administrative systems from only selected local subnets.
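A minimal sketch of the Tcp_Wrappers access files for that last example (the subnet address is, of course, made up):

```
# /etc/hosts.deny -- deny anything not explicitly allowed:
ALL: ALL

# /etc/hosts.allow -- then allow ssh from one local subnet only:
sshd: 192.168.10.0/255.255.255.0
```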

system startup:

You should know what programs run when the system starts (including those installed and run by the OS installation itself). Again, you want to disable anything that is not required for general system operation.


NFS:

If you can avoid it, do. If you can't avoid it, make sure to read the manual pages repeatedly, and that you understand what you're doing and why before enabling NFS. NFS is certainly very useful in some environments, but it can easily be misconfigured. Be careful, and review your configuration periodically.

logs and daily monitoring:

You want to read through logs on a daily basis, but you don't want to spend all your time reading logs and letting your eyes dry out. I write Perl scripts to read through and summarize the logs which are most important to me.

You also want to have various monitoring scripts which run daily and give status reports on different aspects of your systems, such as disk space availability, active network ports, new setuid/setgid files, new world-writable directories, etc. The actual requirements of each system vary, but the point is that you want to collect the information regularly in a format that's easy to read.
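A sketch of one such daily check, comparing today's list of setuid/setgid files against yesterday's (the state directory is an assumption; any convenient, protected location will do):

```shell
#!/bin/sh
# Daily report of new or removed setuid/setgid files.
BASE=${BASE:-/var/adm/checks}    # hypothetical state directory
mkdir -p "$BASE"
find / -xdev \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null \
    | sort > "$BASE/setuid.today"
if [ -f "$BASE/setuid.yesterday" ]; then
    # Any output here is worth investigating.
    diff "$BASE/setuid.yesterday" "$BASE/setuid.today" || true
fi
mv "$BASE/setuid.today" "$BASE/setuid.yesterday"
```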

system integrity checking:

This is part of daily monitoring, actually, but it's important enough to warrant its own heading. Where I work, we have various mechanisms in place to look for and point out changed files on each system. The goal here is to raise a flag whenever files which aren't normally expected to change do.

If you can, at all, you want to store the data from integrity checking programs on a separate, very tightly controlled system, so that should the system being monitored get compromised, the integrity data cannot be modified to cover up the compromise.
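As a minimal stand-in for a dedicated integrity checker (the directory list and database path are assumptions; ideally the database would live on that separate, tightly controlled system):

```shell
#!/bin/sh
# Record checksums of files that shouldn't change, then re-verify later.
DB=${DB:-/secure/checksums.sha256}   # hypothetical database location
if [ ! -f "$DB" ]; then
    # No baseline yet: build one from directories that should stay static.
    mkdir -p "$(dirname "$DB")"
    find /etc -type f -exec sha256sum {} + > "$DB" 2>/dev/null
else
    # Report only files whose contents no longer match the baseline.
    sha256sum -c --quiet "$DB"
fi
```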


backups:

No one can ever stress enough the importance of good backups, yet even in my list here, they're brought up nearly last. That's very typical; backups are often forgotten about until the last minute, but the truth is they are an extremely important aspect of system security.

Since what you're protecting is generally system availability and data integrity, it only makes sense to have good backups, which you can turn to in the event of disaster. Make sure to test your backup program thoroughly, by performing (perhaps random) restores periodically, and comparing the restored file with the original. Backups are only as good as your ability to restore data from them, so test that regularly.

Backups affect how you look at other aspects of the system as well, such as file-system layout. Perhaps you don't really need to back up /tmp, in which case it should be on its own file-system so you can save tape and time by not backing it up. Or perhaps, since /usr is not likely to change all that frequently, and therefore not likely to need to be restored frequently, it would make sense to have it on its own file-system, backed up near the end of the backup cycle.

Where backup tapes are stored is, of course, a concern with respect to system security. If an intruder can easily get at backup tapes, they may not need to use network methods to gain access to privileged data contained on a system. On the other hand, if an authorized user needs a file restored, but the tapes must be flown in from some remote island in the middle of the Pacific Ocean, that might not be the best situation to be in either. (Unless, of course, you happen to be on a remote island in the middle of the Pacific Ocean...)

physical security:

No matter how much you do to reduce the chance that someone might break in to your system remotely over the network, if they can just walk up to the system, boot it into single-user mode, and get at whatever they want, it's all been wasted. A production Unix (or Linux) system should be kept in a physically secure, environmentally controlled room, with sufficient backup power to keep the machine running in the event of a power failure (at the very least long enough for someone to run over and shut it down).

This is by no means meant to be an exhaustive list, but it is my hope that it can help people know what to look at when they consider securing a Unix or Linux system. Certainly if all the issues covered in this list are considered when setting the system up, the system will be in a pretty good state, and the system administrator will be in a good position to know when a change occurs on the system.

I hope this will be considered useful by many.