This page is offered as a service of Bristle Software, Inc. New tips are sent to an associated mailing list when they are posted here. Please send comments, corrections, any tips you'd like to contribute, or requests to be added to the mailing list, to tips@bristle.com.
Original Version: 11/4/2001
Last Updated: 5/10/2017
Applies to: All shells, All Unix flavors
How do you get started if you don't know the Unix commands? Here are some starting points:
Command | What it does |
apropos | Searches the one-line summaries of each command in the "man pages" (Unix manual pages) for the specified string (not necessarily as a full word). You can also combine this with the "grep" (search) command to narrow the results. Note: If your system has no "apropos" command, use "man -k" instead. |
whatis | Shows the one-line summary of the specified command.
Useful to see quickly whether a certain command does the kind of thing you want. |
man | Shows the entire man page, one screen at a time, for the
specified command.
Useful to see all the details about a command, once you
know which command to ask for. |
info | On systems like Linux with GNU software, shows more detailed info than some man pages. |
help (bash only?) | In the bash shell, and perhaps others, shows info about a built-in subcommand, like cd, pushd, popd, dirs, for, if, etc. Thanks to JP Vossen, co-author of the "Bash Cookbook", for telling me about this one!
http://bashcookbook.com/
http://oreilly.com/catalog/9780596526788/
In shells without help, you have to page through the man page or info page of the shell itself for info about the subcommands. I do this often enough in the tcsh shell that I wrote a script to open the info page and search for the subcommand. See: http://bristle.com/Tips/Unix/manb |
whereis | Looks in a bunch of standard places for the source, binary,
and documentation files for the specified command.
Useful to find a standard command that isn't yet on your PATH. You can then run it by typing its full path (for example: /usr/sbin/useradd), or by adding its directory to your PATH. |
which | Looks in all the places on your PATH for the specified
command, showing the first match.
Useful to find out exactly which file would execute, if you were to issue a command. |
which -a | Looks in all the places on your PATH for the specified
command, showing all matches.
Useful to find out which command file is hiding which other command files by the same name. |
where (tcsh, zsh only)
type -a (ksh, bash, zsh only)
whence -p (ksh, zsh only)
command -v (ksh, zsh only) |
Like "which -a", but also takes into account "aliases" (see alias - Define a new command) and commands that are built in to the Unix shell (not stored as separate files). |
find | Searches the specified directory tree for the specified file. |
locate | GNU utility to search a database maintained by the system
administrators for the specified file.
Note: Faster than find, but relies on the database being current. |
Now that you know the names of these commands, use the man or info command to find out more about each one.
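The commands above can be strung together into a typical "what command do I want?" session. For instance, to track down the copy command (output varies by system, depending on which man pages are installed):

```shell
# Search the man-page summaries for anything mentioning "copy",
# narrowing the results with grep:
apropos copy | grep -i file

# Read the one-line summary of a likely candidate:
whatis cp

# See exactly which file would run if you typed "cp":
which cp
```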
Note: The Unix "man pages" document more than just commands. They also document APIs that can be called from a program, "daemon" processes that run in the background, the format of configuration files, etc. Each one-line summary includes a number in parentheses, with the following meaning:
(1) Interactive commands
(2) System calls
(3) Subroutines
(4) Special files
(5) File formats and conventions
(6) Games
(7) Macro packages and language conventions
(8) Maintenance commands and procedures
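For example, "crontab" appears in more than one section, and the section number lets you pick which page you want:

```shell
man 1 crontab    # the crontab command itself
man 5 crontab    # the format of crontab files
whatis crontab   # lists both, with their section numbers
```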
Thanks to Tom Dickey, Thor Collard, Matthew Helmke and JP Vossen, for their contributions to this tip!
--Fred
Original Version: 6/27/2012
Last Updated: 6/27/2012
Applies to: All shells, All Unix flavors
The cat command is used to view, copy, append, and create files. If you are looking for a command like the "type" command of VMS or DOS, this is probably what you want. The name comes from the word "catenate", which means "concatenate". cat copies from standard input or a specified file to standard output, and can be redirected to an output file or piped into another Unix command, as described in Command Line Redirection. Typical usages are:
cat file1 |
Show the contents of file1 on the screen (Copy it to standard output) |
cat file1 > file2 |
Copy file1 to file2 |
cat file1 >> file2 |
Append file1 to file2 |
cat file1 file2 |
Show the contents of file1 and file2 on the screen (Copy them to standard output) |
cat file1 file2 > file3 |
Copy the contents of both file1 and file2 to file3 |
cat file1 file2 >> file3 |
Append file1 and file2 to file3 |
cat > file1 | Create file1 from what you type until you type
Ctrl-D (Copy standard input to standard output, which is redirected to overwrite file1. Ctrl-D is the end-of-file marker, so typing it causes cat to stop reading from standard input as though the file it was reading had ended.) |
cat < file1 > file2 | Copy file1 to file2 (Same as cat file1 > file2. Just a longer way to do the same thing. Reads from standard input which is redirected from file1 instead of reading directly from file1.) |
Can also be used to generate line numbers, squeeze blank lines, and perform other functions while copying the file. See man cat for more info.
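For example (assuming GNU or BSD cat, both of which support -n to number lines):

```shell
printf 'one\ntwo\n' > file1      # create two small test files
printf 'three\n'    > file2

cat file1 file2                  # show both files in sequence
cat -n file1                     # show file1 with line numbers
cat file1 file2 > file3          # concatenate both into file3
```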
Very useful in Unix pipes. See Command Line Redirection.
If the file scrolls by on the screen too quickly to read, you may prefer less. See less - View files.
--Fred
Original Version: 11/25/2010
Last Updated: 3/16/2011
Applies to: All shells, All Unix flavors
The less command is used to view and page through files. Despite its tongue-in-cheek name, less is a better version of more. You can invoke it as a command with one or more filenames as parameters, or can use it as a filter (see filters). It reads standard input or the specified files, writes a screenful of text to standard output, then waits for keystrokes to tell it where to move in the file before updating the display. Some of the most useful keys are:
return | Forward one line |
space |
Forward one screen |
b |
Back one screen |
q | Exit less |
Down-Arrow | Forward one line |
Up-Arrow | Back one line |
Left-Arrow | Scroll left |
Right-Arrow | Scroll right |
-S | Toggle wrap of long lines |
/pattern | Search forward for pattern |
?pattern | Search backward for pattern |
n | Search again in current direction |
N | Search again in opposite direction |
-I | Toggle case sensitivity of searches |
g | Go to first line |
G | Go to last line |
F | Follow the file -- scroll to end and keep trying to read new lines as they are written to the file |
v | Load file into text editor |
:n | Go to next file |
:p | Go to previous file |
:x | Go to first file |
ESC-n | Search again in current direction across multiple files |
ESC-N | Search again in reverse direction across multiple files |
h | Help |
You can set your favorite options as defaults via the LESS environment variable. For example, in csh/tcsh (in sh-based shells, use export LESS="..." instead):
setenv LESS "-#8 -M -j.5 -F -R -S -W -X"
Here are some of the most useful options:
-#8 | Left/right arrow scroll by 8 chars, not half a screen width |
-m | Show percent in prompt |
-M | Show percent, name, etc., in prompt |
-N | Show line numbers |
-j.5 | Searches and other jumps put target line in middle of screen, not at top line. |
-F | Quit automatically if only one screen of text |
-R | Use ANSI color escape sequences |
-S | Chop long lines (can scroll left/right w/arrow keys) |
-W | Highlight new lines and search results that you jump to |
-X | Don't restore the screen to non-less contents when exiting. Otherwise, -F can cause short files to flash on the screen too briefly to be noticed. Also makes less work better with windowing environments with scrollable command line windows. Without this option, if you attempt to scroll back using the native windowing scroll mechanism, you actually scroll back to the commands before the less command, not to the previous lines of the file. |
-i | Make searches case-insensitive unless any uppercase letters are specified |
-I | Make searches case-insensitive |
--follow-name | Useful for following log files with the F command even if they are renamed as part of a log file rotation. By default, F would continue following the old file, even though it was renamed to a new archived name and is no longer the log file currently in use. With this option, F would follow the new log file that has the name previously assigned to the old file. |
+command | Run the specified less command before showing the file. Typically used to search for a pattern or jump to a location in the file. |
-? | Help |
To make less the default pager for man and other commands, also set the PAGER environment variable:
setenv PAGER less
See man less for more info. This is just
the tip of the iceberg. There are lots more commands, and
lots more options. Did I miss any of your favorites?
--Fred
Original Version: 6/14/2012
Last Updated: 6/19/2012
Applies to: All shells, All Unix flavors
The cp command is used to copy files from one filename to another, in the same or a different directory, as:
cp file1 file2
cp file1 file2 file3 ... newdirectory
Here are some of the most useful options:
-i | Interactive. Causes cp to prompt before overwriting any existing file. Useful enough that I typically alias cp to cp -i -v as described in alias - Define a new command. |
-v | Verbose. Show the old and new name of each file copied. Useful when wildcards are used. Useful enough that I typically alias cp to cp -i -v as described in alias - Define a new command. |
-f | Force. Overwrite the file, even if write protected. Overrides any previous -n option. |
-n | Noclobber. Prevents overwrite. Overrides any previous -i and -f options. |
-R | Recursive. Copy entire directory trees of files. |
-P -H -L |
Control whether symbolic links are followed when using -R |
-p | Preserve times, permissions, owner, group, etc. |
-a | Archive. Equivalent to -pPR |
See man cp for more details.
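A few of these options in action (file and directory names are illustrative):

```shell
mkdir srcdir && echo hello > srcdir/greeting.txt

cp -v srcdir/greeting.txt g1    # verbose: shows old and new names
cp -p srcdir/greeting.txt g2    # preserve times, permissions, owner
cp -R srcdir destdir            # copy the whole directory tree
```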
Update: Thanks to Rich Freeman for pointing out a useful option that was added to cp recently: --reflink=auto. It causes cp to create an additional "reflink" to the file contents, but not make a copy of the contents until one of the files changes, in which case the copy is finally made. This can save lots of disk space. I see this option on Linux boxes using GNU coreutils 8.4 (Jan 2012), but not on Linux boxes using GNU coreutils 6.9 (Mar 2007), and not on Mac OS X, which is based on BSD Unix. Also, reflinks are only supported on newer filesystems like BTRFS and OCFS2. On systems where cp has the option but the filesystem does not support reflinks, it is silently ignored, making complete copies as usual. For more info about reflinks, see: http://www.pixelbeat.org/docs/unix_links.html
Original Version: 6/14/2012
Last Updated: 6/16/2012
Applies to: All shells, All Unix flavors
The rcp command is obsolete. Use scp or rsync instead. It was an early command used to copy files from one computer to another, but it had security problems. It copied files and passwords without encrypting them in transit. It may not even be installed on your Unix or Linux system. If it is, don't use it.
--Fred
Original Version: 6/14/2012
Last Updated: 6/15/2012
Applies to: All shells, All Unix flavors
The scp command is much like the cp command, but can also copy files from one computer to another. It uses ssh as a transport to encrypt files and passwords before sending them to the other computer. Call it as:
scp file1 file2
scp file1 host2:path2/file2
scp host1:path1/file1 file2
scp host1:path1/file1 host2:path2/file2
scp user1@host1:path1/file1 user2@host2:path2/file2
etc...
Here are some of the most useful options:
-r | Recursive. Copy entire directory trees of files. |
-p | Preserve times, permissions, owner, group, etc. |
Since scp uses ssh as a transport, it fully supports ssh keys, passwords, pass phrases, aliases, custom port numbers, and all other ssh configurations described in ssh - Secure Shell.
Unfortunately, scp does not support all of the options of cp. For example, it always follows all symbolic links in a tree when using the -r option. You may want to use the newer rsync command instead, as described in the next tip.
See man scp for more details.
--Fred
Original Version: 6/14/2012
Last Updated: 6/15/2012
Applies to: All shells, All Unix flavors
The rsync command is much like the cp command, supporting pretty much all of the same features, though the options used to invoke those features may differ from those of the cp command. It is also much like the scp command, with the ability to copy files from one computer to another, using ssh as a transport for encryption. Call it as:
rsync file1 file2
rsync file1 host2:path2/file2
rsync host1:path1/file1 file2
rsync host1:path1/file1 host2:path2/file2
rsync user1@host1:path1/file1 user2@host2:path2/file2
rsync user1@host1:path1/ user2@host2:path2
etc...
However, rsync can be MUCH faster than scp because it only copies files that differ, and only the portions of those files that differ. If a file already exists at the destination, even on a different computer from the source, rsync very quickly determines which parts of the file, if any, to bother copying.
True to its name, rsync is really for synchronizing files or entire directory trees of files, not just copying them. Therefore, it has options to delete files from the destination that do not exist at the source, options to include and exclude various directories from the synchronization, and tons of other options, making it an extraordinarily powerful tool.
Like scp, since rsync uses ssh as a transport, it fully supports ssh keys, passwords, pass phrases, aliases, custom port numbers, and all other ssh configurations described in ssh - Secure Shell.
See man rsync and the following tips for more details.
--Fred
Original Version: 6/14/2012
Last Updated: 6/16/2012
Applies to: All shells, All Unix flavors
The rsync command can be used to quickly and efficiently create a full backup copy of an entire directory tree, on the same computer or a different computer. Call it as:
rsync -rpogtlv --del path1/ path2
rsync -rpogtlv --del path1/ user2@host2:path2
etc...
The first time you do this, it will copy the entire directory tree to path2. If user2 and host2 are specified, it will use ssh to securely connect to host2 as user2 and encrypt the files during transit. Each subsequent time, it will copy only those portions of the files that have changed, and will delete any files that have been deleted.
I routinely use it to synchronize a massive directory tree between 2 computers. With 10,000+ directories and 100,000+ files, it finds the 100 or so modified files and copies them to the other computer in less than 60 seconds.
Here are the meanings of the options used above, plus a couple of others:
-r | Recursive. Copy entire directory trees of files. |
-p | Preserve the file permissions |
-o | Preserve the owner |
-g | Preserve the group |
-t | Preserve the times |
-l | Preserve symbolic links, even those pointing outside the copied tree |
-v | Verbose |
--del | Delete files from the destination if they are missing from the source |
--modify-window=1 | Treat times as the same even if they differ by as much as 1 second. Necessary when backing up to a Windows FAT or FAT32 formatted USB drive, where times are always rounded down to the nearest even-numbered second. Otherwise, many files would seem newer on the regular hard drive than on the USB drive and would always be re-copied. |
-a | Archive. Equivalent to -rptoglD. I don't want the -D option, so I use -rpogtl instead. |
Note the trailing slash on path1. This causes the entire contents of path1 to be copied to path2. Without the trailing slash, the contents would be copied to a path1 subfolder in path2.
--Fred
Original Version: 6/15/2012
Last Updated: 6/16/2012
Applies to: All shells, All Unix flavors
The rsync command can also be used to quickly and efficiently create an incremental backup copy of an entire directory tree, on the same computer or a different computer. Call it as:
rsync -rpogtlv --del --compare-dest=full src/ sparse
rsync -rpogtlv --del --compare-dest=full src/ user2@host2:sparse
etc...
The --compare-dest option tells it to compare the files in the src tree with those in the full tree instead of the sparse tree, but to leave the full tree untouched and create all new files in the sparse tree instead. If the full tree is a fully populated copy of the src tree, created as shown in Full backup with rsync, and the sparse tree is empty or does not yet exist, this has the effect of creating a sparse tree that contains all files in src that differ from full -- an incremental backup. Note that full is specified as an absolute path, or as a path relative to sparse, not as a path relative to the current working directory.
One drawback: rsync still creates the entire directory structure in the sparse tree, even for directories that end up containing no files. The --prune-empty-dirs option looked promising, but it does not fix this problem. The way rsync works is to create a list on the source computer of files that may need to be copied, and pass that list to the target computer, which uses the info in the list and the files in full to decide whether to bother copying them. The --prune-empty-dirs option is processed on the source computer, removing empty directories from the file list. However, since the directories are not empty on the source computer, but will eventually be created empty on the target computer, they are not removed from the list.
A workaround to this problem is to delete the empty directories after the rsync command, via a find command like:
find sparse -depth -type d -empty -delete
ssh user2@host2 find sparse -depth -type d -empty -delete
For a small tree this is fine, but not for my case. It just seems like a bad idea to create and delete 10,000+ directories as a side effect of backing up 100 or so files. It is an especially bad idea if sparse resides on an SSD (solid-state drive, based on flash memory) instead of a regular hard drive, since flash memory wears out after a limited number of writes. For details, see:
Windows.htm#xcopy_m_and_a_burn_out_usb_flash_drives
Therefore, I use a different rsync technique for my incremental backups. See the next tip.
--Fred
Original Version: 6/15/2012
Last Updated: 6/16/2012
Applies to: All shells, All Unix flavors
The rsync command can also be used to quickly and efficiently create an incremental backup copy of an entire directory tree, on the same computer or a different computer, while updating a full backup of the same directory tree on that target computer. Call it as:
rsync -rpogtlv --del --backup --backup-dir=sparse src/ full
rsync -rpogtlv --del --backup --backup-dir=sparse src/ user2@host2:full
etc...
The --backup and --backup-dir options tell it to update the full tree as usual, but before deleting or updating any file in the full tree, to copy the old version of the file from the full tree to the corresponding place in the sparse tree. If the full tree is your latest full backup of the src tree, and the sparse tree is empty or does not yet exist, this has the effect of updating the full backup and creating a sparse tree that contains all files from previous backups that were replaced by newer versions during this backup.
I still have the latest versions of all files and all preceding versions, but it is trickier to revert a file to a specific date. With the previous approach, I'd revert to the version in the sparse tree with the matching date, or the newest version in any previous sparse tree if the file was not modified on that exact date. However, with this approach, the sparse tree with the matching date holds the version from just before the change -- one version earlier than the one I want. Instead of the newest version before the date, I must revert to the oldest version after the date.
On the other hand, this approach conserves some space because it doesn't keep 2 copies (sparse and full) of the most recent version of each file. Also, it is easier to consider the effect of deleting old incremental backups to save space on the backup drive. If I delete the incrementals from last December and before, what I lose is exactly the ability to recover to the state I was in before a change I made last December and before, regardless of how long that state was in effect. With the previous approach, I'd be losing the ability to recover to the state I was in after a change I made last December and before, which is slightly less useful since I may still be in that state, and since the previous state may have been in effect for years. The cutoff of what date you can revert to is cleaner with this approach.
Note that sparse is specified as an absolute path, or as a path relative to full, not as a path relative to the current working directory.
The biggest advantage to this approach over the --compare-dest approach is that the sparse tree contains only the new files, not the entire tree of empty directories. I routinely use this approach to synchronize a massive directory tree between 2 computers. With 10,000+ directories and 100,000+ files, it finds the 100 or so modified files and copies them to full on the other computer, moving the previous versions to sparse, in less than 60 seconds.
Here's a script I use to automate the entire process of deleting old sparse backups if necessary to save space, doing the backup via rsync, showing how much space is left, etc. I run this script as a daily cron job at some of my clients, e-mailing its output to the root user for review by the client.
Unix/CLSI/backup_clsi_www1
--Fred
Original Version: 7/18/2012
Last Updated: 7/18/2012
Applies to: All shells, All Unix flavors
The rsync command can also be used to create a backup that seems like a full backup but uses much less storage space. Call it as:
rsync -rpogtlv --del --link-dest=full src/ sparse
rsync -rpogtlv --del --link-dest=full src/ user2@host2:sparse
etc...
The --link-dest option is similar to the --compare-dest option, in that it tells rsync to compare the files in the src tree with those in the full tree instead of the sparse tree, but to leave the full tree untouched and create all new files in the sparse tree instead. However, where --compare-dest causes rsync to omit files from the sparse tree that are identical to the full tree, --link-dest tells rsync to create hard links in the sparse tree referring to the identical files in the full tree. Thus, rsync copies all modified files from src to sparse, and creates hard links in sparse to unmodified full files. Since an additional hard link to an existing file is indistinguishable from the original file (which was already a hard link to the file contents), the end result is a sparse tree that is a fully populated copy of src. See man ln for more info about Unix "hard links".
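The hard-link behavior is easy to see with the ln command:

```shell
echo original > a.txt
ln a.txt b.txt        # b.txt is a second hard link to the same contents
ls -l a.txt b.txt     # both names now show a link count of 2
echo changed >> b.txt
cat a.txt             # a.txt shows the change too -- same underlying file
```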
If you use this technique to create daily incremental backups in timestamped directory trees with names like sparse_2012_06_14__16_29_38, you can access the files in those trees exactly like you would access files in full backups. This is very convenient because you never have to think about full vs incremental backups, and never have to search for the desired version of a file in multiple places. It's as though you did a full backup each day, but it consumes much less storage space.
As before, note that full is specified as an absolute path, or as a path relative to sparse, not as a path relative to the current working directory.
If you use this technique, be careful to never directly edit any of your full or incremental backup files. Use them only to copy from when recovering a file. Since many incremental "copies" of a file in the sparse trees may actually be hard links to the full version of the file, editing one file actually edits them all, which is probably not what you want. If future versions of rsync add an option that behaves like the recently added --reflink=auto option of the cp command (see cp - Copy), this technique would be even better, because the hard link would be converted to a copy if you did ever edit one of the backup files.
Personally, I don't use this technique. In my usage scenario of 10,000+ directories and 100,000+ files, with only 100 or so modified files, it would create all 10,000+ directories, and would create hard links for all but 100 of the 100,000+ files. Too much overhead. However, with smaller trees and/or a larger percentage of modified files, it's a powerful technique.
--Fred
Original Version: 7/18/2012
Last Updated: 7/24/2019
Applies to: All shells, All Unix flavors
The rsync command has tons of options. Here are some of the most useful ones not mentioned in previous tips:
-n --dry-run |
Dry run. Show what would have been copied, but don't copy anything. |
-L -k -K -H |
Control whether symbolic links are followed. See also -l described above. |
-x | Don't cross filesystem boundaries |
--executability | Preserve executability. |
--chmod | Explicit control over permission bits |
-C | Ignore CVS (version control) files |
-I --ignore-times --size-only |
Control the criteria for whether files are considered a match. See also --modify-window described above. |
-f --filter --exclude --include --files-from --exclude-from --include-from |
Include/exclude/filter files |
--delete --delete-before --delete-during --delete-after --delete-excluded |
Delete files from destination. |
--copy-dest | Similar to --compare-dest, but also copies identical files after comparing them. |
See also (good link sent to me by Jeff of the Philly Linux User Group):
--Fred
Original Version: 7/18/2012
Last Updated: 10/5/2018
Applies to: All shells, All Unix flavors
Here's a wise reminder from Geoff Rhine: With rsync or any other backup strategy, be sure to test your ability to recover files. It is all too common to have an elaborate backup strategy, but be unable to do a recovery when needed. Common problems are:
In fact, the inability to recover from backup is such a problem that people create hilarious music videos about it, like these 2 sent to me by JP Vossen:
Using rsync is a simple backup technique that should avoid some of these problems: it is easy to have rsync back up an entire directory tree; rsync is efficient enough that you'll be less inclined to exclude various file types from your backups; and rsync makes normal copies of files that don't require any special software to decode. However, it is always a good idea to do periodic trial recoveries, just to be sure you can do a recovery in a stressful situation someday if necessary.
--Fred
Original Version: 6/14/2012
Last Updated: 6/14/2012
Applies to: All shells, All Unix flavors
The tar command is used to ...
See man tar for details.
--Fred
Original Version: 3/13/2012
Last Updated: 3/13/2012
Applies to: All shells, All Unix flavors
Having trouble finding a file in one of many tar files? Here's a script to find it for you:
If you don't have the Java jar command installed, edit the script and change "jar" to "tar". Java uses exactly the Unix tar (tape archive) format to manage its libraries of binary class files, and the Java jar command takes exactly the same options and performs exactly the same functions as the Unix tar command.
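The script itself isn't reproduced here, but the basic idea can be sketched in a few lines of shell (script name and pattern are illustrative):

```shell
#!/bin/sh
# findintar.sh (hypothetical name) -- search every tar file under a
# directory for entries whose names match a pattern.
# Usage: findintar.sh pattern [directory]
pattern="$1"
dir="${2:-.}"

find "$dir" -name '*.tar' | while read -r archive; do
  # "tar tf" lists the table of contents without extracting anything.
  if tar tf "$archive" | grep -q "$pattern"; then
    echo "$archive:"
    tar tf "$archive" | grep "$pattern" | sed 's/^/    /'
  fi
done
```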
--Fred
Last Updated: 5/26/2000
Applies to: All shells, All Unix flavors
The find command is used to find and operate on files. It searches an entire directory tree starting at the specified directory for a file with a specified name, size, date, etc. For each file it finds, it can display the name, execute a user-specified command, etc. It supports a huge number of options, but is a little tricky to use. If you are a DOS, OS/2, Windows, or VMS user, looking for the recursive options on commands like DIR, and noticing that ls -R doesn't do what you expect, this is probably what you want (and a whole lot more). See man find for details.
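A few representative uses, to give the flavor (names and ages are illustrative):

```shell
find . -name '*.log'                  # all .log files under the current directory
find /tmp -type d -empty              # empty directories under /tmp
find . -type f -mtime -7              # regular files modified in the last 7 days
find . -name '*.bak' -exec rm {} \;   # delete every .bak file (use with care!)
```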
--Fred
Original Version: 11/6/2012
Last Updated: 11/2/2017
Applies to: All shells, All Unix flavors
The top command shows an interactive display of the running processes on the system. While viewing the constantly updating display, you can sort/filter the processes, add/remove columns of info, split the screen into multiple displays, kill processes, change process priorities, etc. This is what the Windows "Task Manager" and the Mac OS X "Activity Monitor" want to be when they grow up.
Here's a short summary of some of the useful commands in top. See man top for more details.
h | Help |
q | Quit |
z | Colors |
B | Bold numbers in the header rows |
d1 | Delay = 1 sec, not 3 secs |
c | Show the command executing in the process |
H | Show threads |
x | Highlight the sort column |
y | Highlight the tasks that are running (eligible to run, not blocked by I/O etc.) |
b | Bold the highlighted stuff, instead of reverse video |
i | Hide/show idle processes |
S | Show cumulative time (parent plus dead child processes) |
f | Add/remove columns |
o | Reorder columns |
F | Column to sort the rows by |
O | Column to sort the rows by (same as F) |
> | Sort by field to the right of the current sort field |
< | Sort by field to the left of the current sort field |
R | Sort in reverse order |
M | Sort by memory usage |
T | Sort by length of time running |
P | Sort by CPU usage (default) |
A | Show multiple, separately configurable windows |
Z | Change colors |
W | Save settings in ~/.toprc for future sessions |
k | Kill a process (prompts for process PID) |
r | Re-"nice" (change the priority of a process) |
m | Hide/show memory stats at top |
1 | Show stats per CPU at top, not just aggregate values |
U | Show only processes of specified user (prompts for username) |
n | Show only top n processes |
Some of the commands above can also be specified as command line options. For example:
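For instance (option spellings per the Linux procps top; other versions may differ):

```shell
# Interactive, with a 1-second delay and only user fred's processes
# (username illustrative):
#   top -d 1 -u fred
# Batch mode: print one snapshot to standard output and exit,
# handy in scripts and cron jobs:
top -b -n 1 | head -15
```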
Thanks to Sandra Henry-Stocker's blog posts for pointing out M, T, P, m, and 1:
Original Version: 9/28/2008
Last Updated: 4/9/2012
Here is a list of some of the more useful shortcut keys for Emacs:
Did I miss any good ones? Let me know.
--Fred
Original Version: 1/29/2021
Last Updated: 1/29/2021
Want a way to launch the default app for a file, the same as double-clicking the file with the mouse, but done from the command line or a shell script instead?
Use the xdg-open command. For example, the command:
xdg-open Document1.doc
launches LibreOffice, Microsoft Word, or whatever app is associated with ".doc" files on your computer.
For more info on the xdg-open command, type:
man xdg-open
Note: If xdg-open is not installed by default, you may have to install it as part of the xdg-utils package.
You can do the same thing in Windows and Mac. See:
--Fred
Original Version: 1/29/2021
Last Updated: 1/29/2021
Want a way to launch the default app for a URL, the same as clicking the URL with the mouse, but done from the command line or a shell script instead?
Use the xdg-open command. For example, the command:
xdg-open mailto:tips@bristle.com
launches your mail program (Thunderbird, Evolution, etc.) and fills in my e-mail address in a blank message. All you have to do is type in the rest of the message and hit Send. Similarly, you can open your default Web browser (Chrome, Firefox, Opera, etc.) at a specific Web page via:
xdg-open http://bristle.com
For more info on the xdg-open command, type:
man xdg-open
Note: If xdg-open is not installed by default, you may have to install it as part of the xdg-utils package.
You can do the same thing in Windows and Mac. See:
--Fred
Last Updated: 2/24/2002
Applies to: All shells, All Unix flavors
All Unix shells offer a wide variety of techniques to redirect the input and output streams from a command to a file or to another command.
The following forms are valid in all shells:
Syntax (one line) | What it does |
command > filename | Redirects the output of command to write to filename, creating filename if it doesn't exist, and overwriting its current contents if it does exist. |
command >> filename | Redirects the output of command to write to filename, creating filename if it doesn't exist, and appending to the end of its current contents if it does exist. |
command < filename | Redirects the input of command to read from filename. |
command << word | Redirects the input of command to read from the current shell script, until the first line that consists entirely of word. Known as a "here document". Only useful in shell scripts. Many people use "EOF" as the word to indicate the EOF (end of file) of the here document. |
command1 | command2 | "Pipes" the output of command1 into command2 by redirecting the output of command1 and the input of command2. |
command1 `command2` | Redirects the output of command2 into the command line (not into standard input) of command1. The output of command2 is used as command line options and arguments to command1. Note: You must use the backtick character (`), not the apostrophe ('). On most keyboards, it is just left of the number 1 key, with tilde (~) as its shift character. |
command | tee filename | Creates 2 copies of the standard output of command, redirecting one copy to filename but not affecting the other copy. Very useful when you want to watch the output of a long-running process on the screen, but also capture it all to a log file. |
set noclobber | Sets the "noclobber" shell variable which causes > to fail if filename already exists, and >> to fail if filename does not already exist. |
The following forms are valid in csh-based shells (csh and tcsh):
Syntax (one line) | What it does |
command >! filename | Same as without the exclamation point (!), but suppresses the "noclobber" check. |
command >>! filename | |
command >& filename | Same as without the ampersand (&), but redirects standard error, as well as standard output. |
command >>& filename | |
command >&! filename | |
command >>&! filename | |
command1 |& command2 | |
(command > filename1) >& filename2 | Redirects standard output of command to filename1, and standard error to filename2. |
The following forms are valid in sh-based shells (sh, bash, ksh, zsh):
Syntax (one line) | What it does |
command >| filename | Same as without the pipe character (|), but suppresses the "noclobber" check. |
command >>| filename | |
command 0< filename | Same as command < filename. A way of explicitly referring to standard input (file descriptor 0). |
command 1> filename | Same as command > filename. A way of explicitly referring to standard output (file descriptor 1). |
command 2> filename | Redirects standard error (file descriptor 2) of command to filename without affecting standard output. |
command 1> filename1 2> filename2 | Redirects standard output (file descriptor 1) of command to filename1, and standard error (file descriptor 2) to filename2. |
command > filename 2>&1 | Same as without the 2>&1, but redirects standard error, as well as standard output. This is the same 2> as above (redirect standard error), but the special syntax &1 redirects it into file descriptor 1 (standard output) instead of into a named file. Note: The location of the syntax 2>&1 is important. It must occur, as shown here, after the redirection of the merged stream to filename, but before the pipe character (|). |
command >> filename 2>&1 | |
command1 2>&1 | command2 | |
command &> filename | Same as command > filename 2>&1 [Note: bash only; not sh and ksh] |
command >& filename | |
command <<- word | Same as command << word, but strips all leading tab characters from each line of the "here document", allowing you to indent it within the shell script. |
command <> filename | Same as command < filename, but opens filename for reading and writing. Only useful if command writes to standard input. |
command 0<> filename | Same as command <> filename. A way of explicitly referring to standard input (file descriptor 0). |
command 1<> filename | Same as command 1> filename, but opens filename for reading and writing. Only useful if command reads from standard output. |
command 2<> filename | Same as command 2> filename, but opens filename for reading and writing. Only useful if command reads from standard error. |
command n< filename | Redirects file descriptor n of command to read from filename. This is the general form of the 0< syntax shown above. It is only useful for programs that read from file descriptors other than 0. |
command n> filename | Redirects file descriptor n of command to write to filename. This is the general form of the 1> and 2> syntax shown above. It is only useful for programs that write to file descriptors other than 1 and 2. |
command n>> filename | All the forms shown above (>, >>, >|, >>|, &>, <>, etc.) can explicitly use 0, 1, 2, or any other file descriptor n. The number for the file descriptor is always placed immediately before the first < or > character. |
command n>| filename | |
command n<> filename | |
etc... |
For more information, see the tips below, or type:
man csh
man tcsh
man sh
man bash
man ksh
man zsh
--Fred
Last Updated: 2/24/2002
Applies to: All shells, All Unix flavors
The most commonly used form of command line redirection is redirecting the output of a command to a file, as:
ls > out.txt
which sends the output of the ls command (a list of filenames) to the file out.txt instead of to the screen.
A variation on this is:
ls >> out.txt
which appends the output to out.txt, instead of overwriting the contents of out.txt.
These forms are supported by all Unix shells. They are also supported by DOS and Windows (but there, you use dir instead of ls to list the names of files).
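A minimal sketch of the difference between the two forms, observable with wc -l (the file name /tmp/out_demo.txt is just for illustration):

```shell
echo one > /tmp/out_demo.txt       # creates the file with 1 line
echo two > /tmp/out_demo.txt       # > overwrites: still 1 line
echo three >> /tmp/out_demo.txt    # >> appends: now 2 lines
wc -l < /tmp/out_demo.txt          # prints 2
```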
--Fred
Last Updated: 2/24/2002
Applies to: All shells, All Unix flavors
You can also redirect the input of a command to read from a file, as:
sort < in.txt
which causes the sort command to read from the file in.txt, instead of from the keyboard.
This form is supported by all Unix shells, and also by DOS and Windows.
--Fred
Last Updated: 2/24/2002
Applies to: All shells, All Unix flavors
A variation on input redirection is:
sort <<EOF
line of text
another line of text
third line of text
EOF
which is only useful in a shell script. This form, called a "here document", causes the sort command to read from the shell script that contains it ("read from here"), instead of from the keyboard, until it finds a line that consists entirely of the word that occurs after the << ("EOF" in this case).
This form is supported by all Unix shells, but not by DOS or Windows.
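A common practical use of a here document in a script is generating a small file in place (the file name /tmp/heredoc_demo.txt is just for illustration):

```shell
# cat reads the two lines from the script itself; its redirected
# output writes them to the file.
cat > /tmp/heredoc_demo.txt <<EOF
line one
line two
EOF
```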
--Fred
Last Updated: 2/24/2002
Applies to: sh-based shells, All Unix flavors
Your shell scripts will be more readable if you indent the lines of the here document to make them stand out from the rest of the script. However, you don't want the indentation to be treated as part of the input stream. You can do this, as:
sort <<-EOF
line of text
another line of text
third line of text
EOF
by adding a minus sign (-) immediately after the <<, which causes the leading tab characters to be ignored.
This form is supported by sh-based Unix shells (sh, bash, ksh, zsh), but not by csh-based shells (csh, tcsh), and not by DOS or Windows.
--Fred
Last Updated: 3/16/2002
Applies to: All shells, All Unix flavors
Ordinarily lines of a "here document" are subject to various forms of expansion (command substitution, parameter expansion, arithmetic expansion, etc.) before they are passed to the command. For example, the script:
sort <<EOF
line of text
another line of text
my username is $LOGNAME
EOF
produces:
another line of text
line of text
my username is fred
To suppress such expansion, quote the word that follows the << as:
sort <<"EOF"
line of text
another line of text
my username is $LOGNAME
EOF
which produces:
another line of text
line of text
my username is $LOGNAME
You can use double quotes as shown above ("EOF"), or single quotes ('EOF'), or backslash quoting (\EOF). Also, you can quote the entire word, or just part of it (E"OF", E'OF', E\OF).
This form is supported by sh-based Unix shells (sh, bash, ksh, zsh).
It is also supported by csh-based shells (csh, tcsh), but in that case, you must include exactly the same quotes in the closing word as you did in the opening word. For example:
sort <<"EOF"
line of text
another line of text
my username is $LOGNAME
"EOF"
It is not supported by DOS or Windows.
Thanks to Tom Dickey for reminding me of this form!
--Fred
Last Updated: 2/24/2002
Applies to: All shells, All Unix flavors
You can combine input redirection and output redirection, directing the output of one command into the input of another command, as:
ls | sort
which directs the output of ls into the sort command, so that the list of filenames produced by ls is sorted. Since the output of sort is not redirected, the sorted list is displayed on the screen.
You can also string multiple pipes together, as:
ls | grep abcd | sort | more
which pipes the output of ls into the grep command to discard all lines not containing the string "abcd", pipes the output of grep into the sort command to sort it, and pipes the output of the sort command into the more command to cause it to be displayed one screenful at a time.
You can also combine piping with other forms of redirection. For example:
ls | grep abcd | sort > out.txt
which does the same as the previous example, except it sends the output of the sort command into the file out.txt, instead of paging it to the screen via the more command.
These forms are supported by all Unix shells, and also by DOS and Windows. However, in DOS and Windows, you must use the find command instead of grep, the dir command instead of ls, and must put the string in quotes as:
dir | find "abcd" | sort | more
dir | find "abcd" | sort > out.txt
--Fred
Original Version: 11/12/2004
Last Updated: 11/25/2010
Applies to: All shells, All Unix flavors
To take advantage of easy file redirection and piping, many Unix commands are written as "filters". That is, they don't require command line parameters to tell them what file to read and what file to write. Instead, they read from standard input and write to standard output. You can redirect to the desired files, or pipe to other commands. Some of the most useful Unix filters are:
Filter | What it does |
more | Waits for you to ask for more. It reads standard input, writes a screenful of it to standard output, then waits for keystrokes to tell it how much more of standard input to write to standard output before waiting again (return = one more line, space = one more screen, b = back one screen, h = help, etc.). This prevents long output streams from flowing past faster than you can read them. |
less | A better version of more. Offers many more options while paging through text. For details, see: less. |
sort | Sorts the input lines alphabetically, sorting upper case letters before lower case or treating them the same (-f option), optionally ignoring leading blanks (-b option). With the -n option, it sorts numerically (2 before 10, but 20 after 10) instead of alphabetically (2 and 20 both after 10). Use the -k option to sort by a column other than the first. |
grep | Searches the input lines for a specified string or regular expression, filtering out lines that don't (or do -- see -v option) match. Named for the "vi" command "g/re/p" where g means search globally in the file, "re" is any regular expression, and p means print the matching lines. |
uniq | Discards consecutive duplicate lines. |
tr | Translates characters from one set to another. |
sed | Stream editor. Applies a specified sequence of editing commands to each line. |
awk | More powerful editor. Named for its authors Aho, Weinberger, and Kernighan. |
cat | Catenates (concatenates) multiple files into one. |
tee | "Tee" pipe fitting. Copies standard input to 2 places: standard output and the specified file. |
head | Shows only the first few lines. |
tail | Shows only the last few lines. |
And there are many, many more. Almost all Unix commands read from standard input, write to standard output, or both. However, for convenience when you're not thinking of them as filters, many of these commands also allow their input and output files to be specified directly as command line parameters. That's why you can display the contents of a file via commands like:
cat file.txt
more file.txt
rather than having to use longer forms, like:
cat < file.txt
more < file.txt
cat < filename | more
Use the man command to learn more about these Unix commands.
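As a sketch of how these filters combine, here is a classic word-frequency pipeline built entirely from the commands above:

```shell
# tr splits the text into one word per line, sort groups duplicate
# words together, uniq -c counts each group, and sort -rn ranks the
# counts highest-first.
echo 'the quick fox and the lazy dog and the cat' |
    tr ' ' '\n' | sort | uniq -c | sort -rn | head -n 2
```

The top line of the output shows the most frequent word ("the", which appears 3 times).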
DOS and Windows offer only 3 standard filters:
Filter | What it does |
more | Same as Unix more, but far less flexible. Until Win2000, any keystroke was treated as space (one more screen). |
sort | Sorts the input lines alphabetically. |
find | Same as Unix grep, but far less flexible. No regular expression support, and far fewer options. |
Some of the newer (NT-based) versions of Windows offer a help command that can tell you more about these commands.
In both Unix and Windows, you can write additional filters of your own.
Thanks to Tom Stluka for reminding me of the sort options!
--Fred
Last Updated: 2/24/2002
Applies to: All shells, All Unix flavors
Sometimes you want to redirect the output of a command into the command line of another command, not into its standard input. You can do this as:
cc `ls *.c`
which causes the list of filenames produced by the ls *.c command to be used as command line parameters to the cc command (the C compiler). This compiles all the C source files in the current directory.
Note: You must use the backtick character (`), not the apostrophe ('). On most keyboards, it is just left of the number 1 key, with tilde (~) as its shift character.
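A self-contained sketch of the same idea (the /tmp/bt_demo directory and file names are just for illustration):

```shell
mkdir -p /tmp/bt_demo
touch /tmp/bt_demo/a.c /tmp/bt_demo/b.c
# The file names produced by ls become command line arguments
# to echo, not its standard input:
echo `ls /tmp/bt_demo`             # prints: a.c b.c
```

In sh-based shells you can also write the equivalent $(command) form, which is easier to read and nests cleanly.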
This form is supported by all Unix shells, but not by DOS or Windows. However, in NT-based versions of Windows, there is a similar construct, using the FOR /F command. For details, see:
Windows.htm#parsing_command_output
--Fred
Last Updated: 4/5/2002
Applies to: All shells, All Unix flavors
You can split the output of a command into two identical copies with the tee filter, as:
make | tee make.log
This sends one copy of the standard output of the make command to the file make.log and another copy to standard output (the screen). For commands like make, which may take a long time to run, but may generate output continuously, this is a useful way to monitor the progress (by watching it on the screen) while still creating a log file (in case you look away at a critical moment, or need a permanent record).
The tee filter is standard in all flavors of Unix, but not in DOS or Windows.
Good news and bad news... The good news is that tee is a very simple program that you can write yourself. Write a simple loop that reads from standard input, and writes each line or character to both standard output and a named file. A simple C version looks like:
int c;                 /* int, not char, so EOF can be detected */
while ((c = getchar()) != EOF)
{
    putchar(c);        /* copy to standard output */
    fputc(c, fp);      /* copy to the log file fp, opened earlier */
}
The bad news is that, on some versions of Windows, it still won't do what you expect. In Unix and in NT-based versions of Windows (WinNT, Win2000, WinXP), each command in a pipeline runs as a separate process. They all run concurrently. Therefore, the tee program is running at the same time as the make program, so it can write output to the log file and to the screen as that output is generated by make. This is what you want because it allows you to monitor the progress of the long-running make command. However, in DOS and non-NT-based versions of Windows (Win 3.1, Win95, Win98, WinME), the commands in a pipeline run sequentially, not concurrently. Therefore, this pipeline runs the make command first, sending the output to a temporary holding area. Then it runs the tee command, copying the output to the screen and to the log file. You see no output at all until the make command is completed, so this is useless as a way to monitor the progress of the make command.
Thanks to Andy Glick for pointing out that the NT-based versions of Windows *do* run pipelines concurrently.
--Fred
Last Updated: 2/24/2002
Applies to: All shells, All Unix flavors
To protect yourself from overwriting a critical file by accidentally redirecting output to it, you may want to use the command:
set noclobber
This command adds error checking to file redirections. When the noclobber variable is set, the form:
command > filename
fails if the specified file (to be created) already exists, and the form:
command >> filename
fails if the specified file (to be appended to) does not already exist.
I recommend setting this variable in your Unix startup file. You can override it explicitly, in csh-based shells with the forms:
command >! filename
command >>! filename
and in sh-based shells with the forms:
command >| filename
command >>| filename
This variable is supported by all Unix shells, but not by DOS or Windows.
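A minimal sketch in sh-based syntax, where the variable is spelled set -o noclobber (or the POSIX equivalent set -C); the file name is just for illustration:

```shell
set -o noclobber
rm -f /tmp/nc_demo.txt
echo first > /tmp/nc_demo.txt     # succeeds: file did not exist
echo second > /tmp/nc_demo.txt    # fails: noclobber refuses to overwrite
echo second >| /tmp/nc_demo.txt   # succeeds: >| overrides the check
```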
--Fred
Last Updated: 3/3/2002
Applies to: All shells, All Unix flavors
So far, we've covered standard input and standard output. Unix defines another output stream called "standard error". By convention, programs write their regular output to standard output, but their error messages to standard error. You may never have noticed because both streams default to the screen. Thus, if the current directory contains one or more files that have names starting with the letter f, but no file named crapola, the command:
ls f* crapola
writes the names of the files starting with f to standard output, and an error message about crapola not existing to standard error. The output looks like:
ls: crapola: No such file or directory
f1 f2 f3
and the difference between standard output and standard error doesn't matter much.
However, once you start redirecting standard output to a file or pipe, you may notice that error messages don't get redirected. For example:
ls f* crapola > out.txt
sends the names of the files starting with f to the file out.txt, but the error message about crapola still comes to the screen.
In sh-based shells, you can explicitly refer to the 3 file streams by using the associated "file descriptor" numbers 0 (standard input), 1 (standard output), and 2 (standard error). Thus:
sort 0< in.txt
ls 1> out.txt
ls 2> out.txt
The first two are more explicit forms of the same thing without the 0 or 1:
sort < in.txt
ls > out.txt
but the third form allows you to explicitly redirect standard error. Thus, the command:
ls f* crapola 1> out.txt 2> err.txt
sends the names of the files starting with f to the file out.txt, and the error message about crapola to the file err.txt.
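A self-contained sketch that needs no particular files to exist (the path /nonexistent_demo and the /tmp file names are just for illustration):

```shell
ls /tmp /nonexistent_demo 1> /tmp/fd_out.txt 2> /tmp/fd_err.txt
# /tmp/fd_out.txt now holds the listing of /tmp; /tmp/fd_err.txt
# holds only the error message about /nonexistent_demo.
```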
Windows (and I think, DOS) also support the concept of standard error, but programs written for DOS and Windows are pretty lax about distinguishing between them. You'll find many programs that write errors to standard output, or non-errors to standard error. For programs written correctly, you can use the same syntax as the sh-based shells.
I tried to re-create the example above for DOS/Windows using dir instead of the Unix ls to show the names of files, as:
dir f* crapola 1> out.txt 2> err.txt
However, dir has an unfortunate "feature". Instead of reporting an error when any of the specified files is missing, it reports an error only if all of the specified files are missing. So the fact that there were filenames starting with f prevented the non-existence of crapola from being reported at all. No error messages -- therefore not a good example to use here.
Then I tried re-creating the example using the type command (show the contents of the files), as:
type f* crapola 1> out.txt 2> err.txt
This works better, but shows that the type command is also written in an "interesting" way. It sends the contents of the files to standard output, but the names of the files to standard error, along with the error messages. In any case, you can play with it, and get the idea.
--Fred
Last Updated: 4/5/2002
Applies to: All shells, All Unix flavors
Sometimes, you want to redirect standard output and standard error to the same file, so that you can see the error messages intermixed with the regular output to give some context to the error messages. You can do this, in csh-based shells (csh and tcsh), by adding an ampersand (&) to the regular redirections and pipes, as:
ls f* crapola >& out.txt
ls f* crapola >>& out.txt
ls f* crapola |& more
and in sh-based shells (sh, bash, ksh, zsh) as:
ls f* crapola > out.txt 2>&1
ls f* crapola >> out.txt 2>&1
ls f* crapola 2>&1 | more
The special syntax 2>&1 says "redirect file descriptor 2 (standard error) to the same place as file descriptor 1 (standard output)". Note that the 2>&1 occurs after the redirections (in the first and second lines), but before the pipe (in the 3rd line).
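A sketch of why the merge matters: grep only sees what comes through the pipe on standard output (the path /nonexistent_demo is just for illustration):

```shell
# Without 2>&1, the error message bypasses the pipe and grep sees
# nothing. With it, the message is merged into standard output and
# grep can match it:
ls /nonexistent_demo 2>&1 | grep -c nonexistent_demo    # prints 1
```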
NT-based versions of Windows use the same syntax as the sh-based shells:
type f* crapola > out.txt 2>&1
type f* crapola >> out.txt 2>&1
type f* crapola 2>&1 | more
Non-NT-based versions of Windows don't support this feature.
The bash shell supports the syntax of the sh shell from which it is derived, but also supports one combination from the csh shell:
ls f* crapola >& out.txt
and a slight variation formed by reversing the special characters:
ls f* crapola &> out.txt
but not the other csh shell formats: >>& and |& and not their reverses: &>> and &|.
--Fred
Original Version: 3/3/2002
Last Updated: 2/14/2011
Applies to: csh-based shells, All Unix flavors
The syntax for separately redirecting standard output and standard error:
ls f* crapola 1> out.txt 2> err.txt
applies to sh-based shells (sh, bash, ksh, zsh) only. However, you can achieve the same effect in csh-based shells (csh, tcsh) as:
(ls f* crapola > out.txt) >& err.txt
This uses the general purpose grouping operators (parentheses) supported by all Unix shells (and DOS and Windows). The ls f* crapola command is executed with its standard output redirected to out.txt. Since its standard error is not redirected, it goes to the normal place. However, the next part of the grouped command redirects standard error and whatever is left of the standard output (nothing, since it was already redirected) to err.txt. The net effect is that standard output and standard error go to separate files.
--Fred
Last Updated: 3/3/2002
Applies to: sh-based shells, All Unix flavors
The sh-based shells also support the following formats, which have no equivalent in csh-based shells or in DOS or Windows.
Syntax (one line) | What it does |
command <> filename | Same as command < filename, but opens filename for reading and writing. Only useful if command writes to standard input. |
command 0<> filename | Same as command <> filename. A way of explicitly referring to standard input (file descriptor 0). |
command 1<> filename | Same as command 1> filename, but opens filename for reading and writing. Only useful if command reads from standard output. |
command 2<> filename | Same as command 2> filename, but opens filename for reading and writing. Only useful if command reads from standard error. |
--Fred
Last Updated: 3/3/2002
Applies to: sh-based shells, All Unix flavors
The sh-based shells also support the following formats, which have no equivalent in csh-based shells or in DOS or Windows.
Syntax (one line) | What it does |
command n< filename | Redirects file descriptor n of command to read from filename. This is the general form of the 0< syntax shown above. It is only useful for programs that read from file descriptors other than 0. |
command n> filename | Redirects file descriptor n of command to write to filename. This is the general form of the 1> and 2> syntax shown above. It is only useful for programs that write to file descriptors other than 1 and 2. |
command n>> filename | All the forms shown above (>, >>, >|, >>|, &>, <>, etc.) can explicitly use 0, 1, 2, or any other file descriptor n. The number for the file descriptor is always placed immediately before the first < or > character. |
command n>| filename | |
command n<> filename | |
etc... |
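A runnable sketch of a nonstandard file descriptor (the descriptor number 3 and the file names are just for illustration):

```shell
# The inner shell writes to file descriptor 3; the outer redirection
# 3> wires that descriptor to a file, leaving standard output and
# standard error untouched.
sh -c 'echo via-fd-3 >&3' 3> /tmp/fd3_demo.txt
cat /tmp/fd3_demo.txt       # prints via-fd-3
```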
--Fred
Last Updated: 4/5/2002
Applies to: DOS and Windows, not Unix
There are a couple of other quirks to watch out for in the DOS and Windows implementation of command line redirection, especially older versions:
These have both been fixed in the latest versions of Windows.
Also, as described in tee, even in the latest of the non-NT-based versions, pipe segments run sequentially, not concurrently.
Finally, beware that NT-based
versions of Windows have 2 different command line interpreters:
and CMD.EXE has a set of "command extensions" that can be enabled or disabled. For best results, use CMD.EXE with command extensions enabled.
--Fred
Last Updated: 4/23/2009
Applies to: All shells, All Unix flavors
Writing a shell script is easy. You simply put the same commands into a file that you would otherwise have typed interactively. You may also use additional commands that you wouldn't tend to use interactively, like if, for, case, etc. You can read about these additional commands in the on-line manuals via man and other help commands. See "The help system" for more info.
The first line of a shell script is special. It looks like a comment since it starts with "#", but it's not. A typical first line is:
#!/bin/sh
specifying the name of the shell that should read and execute the script file. In this case, it is sh (the Bourne Shell), but it could just as easily be csh, tcsh, bash, ksh, zsh, etc., or even a language like perl which is commonly used to write scripts but rarely used as an interactive shell.
In addition to the regular external commands, each documented in its own man page, there are many commands built in to each shell. These are documented in the man page of the shell itself. For info about the built-in commands available in an sh script, see man sh or info sh. For info about the built-in commands available in a csh script, see man csh or info csh, and so on. You'll also use variables, parameters, comments, etc. More on that in future tips.
One important thing to know about a shell script is that when you invoke it, from an interactive shell or from another shell script, a new process is created to run the shell specified on the first line, that shell executes the lines of the script, and that shell and process exit when the script is done. Therefore, if you change any shell variables, or even any environment variables, from within the script, they have no effect on the calling shell. This is usually what you want. If not, see: "Running a shell script in the same process".
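A minimal sketch of this behavior (the file and variable names are just for illustration):

```shell
# Build a tiny script that changes a variable, then run it normally:
printf '%s\n' '#!/bin/sh' 'MYVAR=changed' > /tmp/setvar_demo.sh
chmod +x /tmp/setvar_demo.sh
MYVAR=original
/tmp/setvar_demo.sh          # runs in its own child process
echo $MYVAR                  # still prints original
```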
--Fred
Last Updated: 4/24/2009
Applies to: All shells, All Unix flavors
Invoking a shell script, or any command, in Unix is easy. Just type the name of the file that contains the shell script or executable program. There is no run command or anything to prefix it with.
You can (but typically don't) type the full absolute pathname of the file, as:
~/bin/myscript
You also can (but typically don't) type the full relative pathname of the file, as:
../../bin/myscript
Instead, you typically type just the simple filename, as:
myscript
and the shell looks for the first occurrence of such a file in the list of paths specified on the PATH environment variable, which should be a list of paths where programs and scripts are stored, separated by colons (:), as:
setenv PATH /usr/bin:/bin:/usr/local/bin:~/bin
You may want to add your own ~/bin directory to the end of the path, as shown above, so that any scripts you write and store there will be found.
If files with the same name exist in multiple paths on the PATH environment variable, only the first is found and executed. To see which file would be executed if you were to type a command name, use the which command or any of its variants for the various shells (where, whence, type, command, etc.) as described in "The help system". All of the shells support which, but it shows only the first match, and ignores any aliases or built-in shell commands that may exist and would take precedence over the files on the PATH. Many of the variants are better versions of which because they show an ordered list of matches, preceded by any aliases and built-in shell commands.
If there is a conflict, you can bypass the PATH, as well as any aliases or built-ins, by specifying the full absolute or relative pathname of the file you want, as shown above. If you generally want conflicts to be resolved in favor of one particular path over another, put it earlier in the PATH than the other. Some people put their own ~/bin first, but if you do, make sure you are not hiding any standard commands that are needed by any scripts you run.
For efficiency, most shells do not actually search the entire PATH each time. Instead, they keep an internal "hash" of where to find each command. Therefore, if you put a new file into one of the paths, it may be ignored at first. If so, use the hash -r command (in sh-based shells), or the rehash command (in csh-based shells), to force the shell to search the entire PATH and rebuild the hash.
Finally, be aware that only executable files on the PATH are found. Therefore, if you add a file to any of the paths, be sure to set its executable flag via the chmod command, as:
chmod +x ~/bin/myscript
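Putting the steps together, in sh-based syntax (the directory and script names are just for illustration):

```shell
mkdir -p /tmp/demo_bin
printf '%s\n' '#!/bin/sh' 'echo hello from myscript' > /tmp/demo_bin/myscript
chmod +x /tmp/demo_bin/myscript      # must be executable to be found
PATH="$PATH:/tmp/demo_bin"
hash -r                              # rebuild the shell's command hash
myscript                             # prints hello from myscript
```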
--Fred
Last Updated: 4/24/2009
Applies to: All shells, All Unix flavors
A shell script normally runs in its own new shell and its own process, as described in "Writing a shell script". But, what if you want it to run in the context of the current shell and current process, so that it can affect the aliases, shell variables, and environment variables of the current shell?
In that case, use the . command (in sh-based shells), or the source command (in csh-based shells, and also bash), as:
. ~/bin/myscript
or:
source ~/bin/myscript
This is referred to as "sourcing" the script. It causes the shell to execute each line of the script as though it had been typed interactively, except that all comment lines are ignored, including the special first line, even by shells that don't ordinarily allow comments to be typed interactively. You can still pass parameters to the script as usual, as:
source ~/bin/myscript param1 param2 another_param
The most common use for this is the startup files (/etc/csh.cshrc, /etc/csh.login, ~/.cshrc, ~/.login, etc.), described in "sh Startup Files", "csh Startup Files", "bash Startup Files", "ksh Startup Files", "zsh Startup Files", etc., which are automatically "sourced", not run normally.
When a script is "sourced" in this manner, since the special first line is ignored and the lines of the script are interpreted by the current shell, errors are likely to occur if the script was not written in the syntax of the current shell. Only simple scripts that use no shell-specific features can be sourced by all shells. Most scripts only work properly when sourced from the correct shell type.
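A minimal sketch, using the sh-based . syntax (the file and variable names are just for illustration):

```shell
echo 'DEMO_VAR=set_by_script' > /tmp/source_demo.sh
. /tmp/source_demo.sh        # runs in the current shell, not a child
echo $DEMO_VAR               # prints set_by_script
```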
--Fred
Original Version: 11/4/2001
Last Updated: 4/20/2009
Applies to: All shells, All Unix flavors
There are several different Unix shells (sh, csh, tcsh, ksh, bash, zsh, etc.). Each is a command line interpreter that can be run under any Unix flavor (BSD, System V, Solaris, HP/UX, Linux, Mac OS X, BSDI, SCO Unix, IBM AIX, etc.). They have similar but not identical syntax, predefined commands (ls, cd, set, etc.), environment variable settings, etc. They were each written by a different set of authors, but were intended to be compatible with all Unix flavors. For the most part, all shells are included with the distribution of each Unix flavor.
Any Unix program (cc, make, touch, more, etc.) can be launched from any of the shells. Once running, it is under the control of the Unix flavor, not the shell, so a Unix program should behave the same with all shells, but not necessarily the same with all flavors.
The first shell was sh, the Bourne Shell, named for its author. Later, csh, the C Shell, was written to be an improvement on sh, using a syntax more like that of the C programming language in its scripts. More recently, several sh derivatives have appeared: ksh, the Korn Shell, bash, the Bourne Again Shell, and zsh, the Z shell. These attempt to be strict supersets of sh, adding features, but maintaining 100% compatibility with sh. Also, a csh derivative has appeared: tcsh, the T C Shell, which attempts to be a strict superset of csh, adding command line editing and filename completion features.
All shells are still considered current, and are widely used; none has successfully replaced another. It is common to find different users using different shells at the same time on any Unix system. In fact, it is common to find the same user using different shells at the same time. In the 1980's I typically had multiple windows open on my Unix workstation, running csh in most of them, but sh in one. Also, even though I typically used csh interactively, I wrote most shell scripts in sh. Since the first line of each shell script identifies the shell that it is written in, you can run a script written in any shell language from any other interactive shell, or even call a script in one language from a script in another language. These days, I tend to use tcsh interactively, and write scripts in perl or any of the scripting languages.
Last Updated: 10/4/1999
Applies to: csh, tcsh
See also: CDPATH in bash, CDPATH in ksh
The shell variable cdpath specifies the directories in which the cd and pushd commands look for subdirectories. For example, in my .cshrc file, I set:
set cdpath = (.. ~ ../.. ../../.. \
              ~/ste/dat ~/ste/layout ~/ste/adt ~/ste)
Then, when I type:
cd src
it looks for the following (in order):
./src             (always the first place checked)
../src
~/src
../../src
../../../src
~/ste/dat/src
~/ste/layout/src
~/ste/adt/src
~/ste/src
This makes it much less tedious to navigate a complex directory tree.
--Fred
Last Updated: 5/12/2000
Applies to: csh, tcsh
Setting the shell variable filec enables filename completion in csh. For example, in my .cshrc file, I put:
set filec
Then, when I type a unique prefix of a filename and hit ESC, it completes the filename. If the prefix I typed is not unique to a single file, it does nothing. Hitting Ctrl-D after a non-unique prefix shows all possible matches. Examples:
cat abc<ESC>          # Local file starting with abc
/usr/ucb/m<Ctrl-D>    # All files in /usr/ucb/ starting with m
/usr/ucb/mer<ESC>     # /usr/ucb/merge
ls ~f<ESC>            # Home directory of user starting with f
For more details, see man csh.
A similar capability exists in tcsh using TAB (not ESC) and operating on commands as well as filenames. It is enabled by default, so you don't need to set the filec variable.
If you type a prefix of a command and hit TAB, tcsh completes the command if the prefix is unique for commands on the PATH. Hitting Ctrl-D shows all matches. Setting the autolist shell variable causes TAB to show all matches, making Ctrl-D unnecessary.
Similarly for filenames, if you type a command, then a space, then a unique prefix of a filename, then TAB, it completes the filename. Ctrl-D shows all matches. Setting autolist makes Ctrl-D unnecessary.
For more details, see man tcsh.
For both shells, the fignore variable can be used to specify extensions of files that should be skipped when looking for unique matches to the prefixes.
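For example, a line like the following in ~/.cshrc (a sketch; the exact suffix list is up to you) tells completion to skip compiler output files and editor backups:

```csh
# In ~/.cshrc: suffixes to ignore during filename completion
set fignore = (.o .bak \~)
```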
This makes it much less tedious to type long detailed filenames.
--Fred
Last Updated: 9/14/2001
Applies to: csh, tcsh
Setting the shell variable prompt changes the prompt string on the command line. You can arrange for the prompt string to always reflect the current working directory or any other piece of dynamic information by aliasing the commands that change that dynamic information to have the side effect of also updating the prompt variable. As an extreme example, here is the code from my .cshrc file to maintain the prompt:
# ---------------------------------------------------------------------------
# These commands maintain the prompt with the following features:
# - History number is displayed in the prompt [Now commented out --Fred]
# - Time at which this directory was entered is displayed.
# - Your username (or the one you are currently su'd to) is displayed.
# - Machine name is displayed.
# - Full name of current directory is displayed.
# - Your own home directory when in the prompt is always displayed as
#   "~" (not /home/fred, for example).
# - Other people's main directories are displayed like "~zebrom00".
# - When you switch to a directory via a soft link, the link path is
#   displayed instead of the real path.  Use pwd to see the real path.
# - Number of nested shells is indicated (in tcsh, not csh).
# - Number of directories stacked by pushd/popd, if any, is displayed.
# They are needed only in interactive mode.
# Note: Moving these to non-interactive section would cause C shell to
#       lose ability to distinguish between interactive and non-interactive
#       shells because the prompt would always be defined.
# ---------------------------------------------------------------------------
if ($?prompt) then
  alias newdir1 'set noglob;set d=(`dirs`);unset noglob'
                                # Noglob needed to prevent ~ from being
                                # expanded
  alias newdir2 'set n="";@ m = $#d - 1;if ($m > 0) set n="$m"; unset m'
  alias newdir3 'set t=`date`;set t=`echo "$t[4]x" | sed "s/:..x//"`'
  alias newdir4 'set c=`whoami`@`hostname -s`'
  alias newdir5 'set s=""'
  if ($?shlvl) then
    alias newdir5 'set s="";if ($shlvl > 0) set s="$shlvl/"'
  endif
# alias newdir6 'set prompt="\\! $t $c [$s$n] $d[1] % ";unset c t d n s'
  alias newdir6 'set prompt="$t $c [$s$n] $d[1] % ";unset c t d n s'
  alias newdir  'newdir1;newdir2;newdir3;newdir4;newdir5;newdir6'
                                # Set prompt
  alias cd      'set old=$cwd;chdir \!*;newdir'
                                # Remember old dir, change dir and
                                # update prompt
  alias back    'set back=$old;cd $back;unset back'
                                # Toggle back to previous dir and
                                # update prompt
  alias pushd   'pushd \!*;newdir'    # Push dir and update prompt
  alias popd    'popd \!*;newdir'     # Pop dir and update prompt
  alias up      'cd ..'               # Move up one dir, updating prompt
  alias dn      'cd ./\!*'            # Move down to subdir, updating prompt
  alias ac      'cd ../\!*'           # Move across to sibling dir, updating
                                      # prompt
  alias pd      pushd                 # Push dir and update prompt
  alias pop     popd                  # Pop dir and update prompt
  alias md      mkdir
  alias rd      rmdir
  alias mcd     'md \!* && cd \!*'    # Create and move to directory
endif
Note: Beware of the -s option of hostname. It doesn't exist on all Unix systems, in which case the -s is interpreted as the first argument to hostname, causing it to try to change the name of the computer. An error message is reported unless you are a superuser. Solaris (and other System V derivatives?) has no -s. Apollo Domain/IX and BSDI (and other BSD derivatives?) do have a -s. Linux also has a -s. Others?
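Where -s is unavailable, you can derive the short name portably by trimming the domain part yourself (a sketch; `uname -n` is another common source of the node name):

```shell
# Strip everything from the first dot onward to get the short host name
host=`hostname | sed 's/\..*//'`
echo "$host"
```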
--Fred
Original Version: 5/26/2000
Last Updated: 4/24/2009
Applies to: csh, tcsh (and similar in all shells)
You can redefine an existing command with the alias command and restore its original meaning with the unalias command. For example, I add the following lines to my .cshrc file:
alias cp   cp -i -v    # Prompt before overwriting, show filenames
alias mv   mv -i -v    # Prompt before overwriting, show filenames
alias rm   rm -i -v    # Prompt before removing each file, show filenames
alias diff diff -s     # List files that are the same
To bypass the alias, simply precede the command with a backslash:
rm *        # Prompt for each file, and show names removed
\rm *       # Don't prompt or show names removed
\rm -v *    # Don't prompt for each file, but do show names removed
\rm -i *    # Prompt for each file, but don't show names removed
You can also use alias and unalias to define new commands of your own. For example, I add the following lines to my .cshrc file:
alias pd     pushd
alias pop    popd
alias dir    ls -FlA
alias del    rm
alias search grep
alias up     'cd ..'       # Move up one dir
alias dn     'cd ./\!*'    # Move down to subdir
alias ac     'cd ../\!*'   # Move across to sibling dir
Notice the use of the exclamation point (a.k.a. "bang"), which is the history mechanism, to insert the command line parameters of the alias invocation into the definition of the alias (escaped with a backslash to prevent the insertion from happening while the alias is still being defined). Here, we use "!*" to insert all command line parameters. See "History substitutions" in man csh for details.
Aliases exist in most, perhaps all, other shells also, but the syntax may be slightly different. For example, bash uses an equal sign and quotes, as:
alias cp="cp -i -v"
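A small bash sketch of the same aliases (note that bash scripts need `shopt -s expand_aliases` before aliases take effect non-interactively):

```shell
#!/bin/bash
shopt -s expand_aliases   # aliases are off by default in bash scripts
alias cp="cp -i -v"       # same effect as the csh 'alias cp cp -i -v'
alias dir="ls -FlA"
alias dir                 # show the current definition
unalias cp                # restore the original cp
```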
--Fred
Original Version: 4/15/2009
Last Updated: 4/24/2009
Applies to: csh, tcsh (and similar in all shells)
Be aware that aliases are available not only at the interactive command line, but also in shell scripts, so they can break existing shell scripts by changing the behavior of the commands used in the scripts.
You might think this is not a problem because, as described in "Writing a shell script", each shell script creates its own child shell in which to run, and aliases are not inherited from the parent shell. Therefore, each script runs in a clean environment with no aliases. However, the first thing each new shell does, even before running the script it was created to run, is to automatically run the commands in the startup files (for example, ~/.cshrc in csh). Since that is where most users are likely to define their aliases, it is a problem after all.
To prevent your aliases from causing this problem in scripts written by you and
by others, define your aliases in ~/.cshrc inside a test for interactive
shells, as:
if ($?prompt) then
    alias cp   cp -i -v    # Prompt before overwriting, show filenames
    alias mv   mv -i -v    # Prompt before overwriting, show filenames
    alias rm   rm -i -v    # Prompt before removing each file, show filenames
    alias diff diff -s     # List files that are the same
endif
To protect scripts that you write from unexpected aliases that you or other users may have defined in a ~/.cshrc file, there are a couple of possible techniques. You could put a backslash in front of each command, but that's really tedious and error-prone.
Alternatively, you could put the command "unalias *" at the top of the file to delete all aliases for the duration of the shell that was created to run the script. You don't have to worry about the unalias command itself having been aliased, because the csh and tcsh shells disallow that. (Some other shells don't, so in bash, for example, you'd use "\unalias -a" instead of "unalias *".)
Alternatively, and better, what I do is change the first line of the script from:

#!/bin/csh

to:

#!/bin/csh -f

so that the shell created to run the script does not execute the commands in ~/.cshrc at all. That also makes the script run faster. (In the bash shell, use "#!/bin/bash --norc" instead of "#!/bin/csh -f".)
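A minimal bash sketch of the same pair of defenses (assuming a kernel, like Linux, that accepts one argument on the shebang line):

```shell
#!/bin/bash --norc
# --norc skips ~/.bashrc, so user aliases are never loaded.
# \unalias -a removes any aliases anyway, belt and suspenders.
\unalias -a
alias    # prints nothing: no aliases remain
```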
--Fred
Original Version: 4/15/2009
Last Updated: 5/10/2009
Applies to: csh, tcsh (and similar in all shells)
Once you've gone to all this trouble to make sure an alias is not available from a shell script, what if you do want to use the alias in a script? The best idea is to define it as a separate shell script, not as an alias. For example, create a file named dir, containing the lines:
#!/bin/csh -f
ls -FlA $*:q
put it in a directory on the PATH, make it executable via the chmod command, and use the rehash command to update the command cache.
Notice that you use dollar sign to expand command line parameters used on a shell script into the body of the shell script. Here we use $* to insert all command line parameters. We also use :q to automatically put quotes around each parameter since any quotes you may have specified on the command line (around filenames that contain spaces, for example) are stripped off automatically. See "Variable substitution" in man csh for details.
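The Bourne-family analogue of $*:q is "$@", which likewise re-quotes each parameter so arguments containing spaces survive intact; a quick sketch:

```shell
#!/bin/bash
# "$@" expands to each parameter as a separate, intact word,
# so an argument like "two words" is not split apart
show_args() { printf '[%s]\n' "$@"; }
show_args one "two words"   # prints [one] then [two words]
```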
Another advantage to using shell scripts instead of aliases is that aliases must be defined again for each type of shell. Thus, if you want a dir command in both csh and bash, you'd have to define it in the ~/.cshrc file and in the ~/.bashrc file, using slightly different syntax in each case. However, if you define it as a shell script (written in any of the shell script languages: sh, csh, tcsh, ksh, bash, zsh, etc.), or as a program written in Perl, Python, C/C++, Java, etc., you can run that one copy from any of the shells.
Similarly, you may find that you can run shell scripts but not aliases in situations where you pass a command to another command to be executed. For example, you can pass a shell script, but not an alias, to the sudo command.
--Fred
Original Version: 4/21/2009
Last Updated: 4/21/2009
Applies to: csh, tcsh (and similar in all shells)
It is not always the case that a shell script is better than an alias. Aliases have their advantages also. Use aliases instead of shell scripts in the following cases:
--Fred
Original Version: 4/24/2009
Last Updated: 4/24/2009
Applies to: csh, tcsh (and possibly others)
What should you do when an alias is too long or too complex to fit reasonably on a single line?
Most of the aliases shown so far have been very simple -- an alias name followed by a command name with some options. They haven't used any special characters that had to be expanded at the right time. They've been so short that they typically fit on a single line with enough room left for an explanatory comment. For example:
alias cp cp -i -v # Prompt before overwriting, show filenames
However, you may need to define longer or more complex aliases. The next few tips discuss various techniques you can use.
You can enclose the body of the alias in single quotes ('), a.k.a apostrophes. For example:
alias logs 'pushd /var/log;ls -FlA'
The single quotes are needed to prevent the semicolon (;) from being interpreted as the end of the alias and the start of the next command. Instead, the semicolon becomes part of the alias, separating the two commands that the alias runs. Single quotes can also be used to hide wildcards like asterisk (*) and question mark (?), pipes and redirection chars (|, <, >, etc.), conditional operators like logical and (&&) and logical or (||), etc.
--Fred
Original Version: 4/24/2009
Last Updated: 4/24/2009
Applies to: csh, tcsh (and possibly others)
You can "escape" special characters with backslash (\). For example:
alias mcd 'mkdir \!* && cd \!*'
The single quotes hide the asterisks (*), making them part of the alias. Therefore, they are not expanded as filename wildcards at the time the alias is created, and also not interpreted as the special !* sequence used for history substitution at the time the alias is created. Instead, they are used as part of the history substitution later, when the alias is run.
The single quotes also hide the logical and operator (&&), causing it to be part of the alias. Therefore the cd command executes when you run the alias, but only if the mkdir command succeeds. Otherwise, the alias would contain only the mkdir command and the cd command would run immediately when the alias was created, if the alias command succeeded.
However, the single quotes do not hide the exclamation points (!) that are part of the history substitution mechanism used for parameters to an alias. Therefore, we have to also "escape" each exclamation point with a backslash.
The result, when you later use the alias, is that typing mcd abc is the same as having typed mkdir abc && cd abc.
--Fred
Original Version: 4/24/2009
Last Updated: 4/24/2009
Applies to: csh, tcsh (and possibly others)
As shown in the previous examples, you can put multiple commands into an alias. Separate the commands with semicolons (;) to cause each command to execute, one after another in the order specified. Use logical and (&&) to cause the next command to execute only if the previous command succeeds. Use logical or (||) to cause the next command to execute only if the previous command fails.
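These operators behave the same typed directly at any shell prompt; a small sketch:

```shell
# ';'  runs the next command unconditionally
# '&&' runs it only if the previous command succeeded (exit status 0)
# '||' runs it only if the previous command failed (non-zero status)
true  && echo "ran because true succeeded"
false || echo "ran because false failed"
false ;  echo "runs regardless of the failure"
```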
You can also use the normal mechanisms of piping output from one command to another via the vertical bar (|), inserting the output of one command into the command line of another via backticks (`), and redirecting standard input, output and/or error via the usual combinations of less than (<), greater than (>) and ampersand (&). For details, see: "Command Line Redirection".
You can also use if statements, string and arithmetic expressions, and local variables as:
alias newdir2 'set n="";@ m = $#dirstack - 1;if ($m > 0) set n="$m"; unset m'
which has the net effect of assigning the shell variable n the integer number of stacked directories other than the current working directory, or the empty string if there are none.
--Fred
Original Version: 4/24/2009
Last Updated: 4/24/2009
Applies to: csh, tcsh (and possibly others)
You can enclose the body of the alias in parentheses. For example:
alias rm '(set echo;\rm -i -v \!*)'
The parentheses cause the implicit creation of a nested shell in which to execute the commands contained in them (similar to what would happen if we'd used a shell script instead of an alias).
This is useful in this case because it allows us to set the shell's echo variable without affecting the setting of that variable in the current shell. The echo shell variable causes commands to be echoed (with wildcards expanded) before they are executed. Therefore, it causes this alias to do two additional useful things. It reminds us each time what options (-i -v) we would have had to use on the rm command if we didn't have such an alias. And it echoes the filenames that any wildcards expand to before starting to prompt about deleting each file.
Because of the parentheses, the echo variable is set in the nested shell, then the rm -i -v command and its expanded arguments are echoed and then executed, then the nested shell exits, without having ever affected the value of the echo variable in the shell from which the alias was invoked.
This technique can be used to automatically clean up after setting shell variables, environment variables, etc.
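The same isolation works in Bourne-family shells, where parentheses also spawn a subshell; a sketch:

```shell
#!/bin/bash
# Parentheses run their contents in a nested (sub)shell, so variable
# changes inside them never affect the invoking shell
x=outer
( x=inner; echo "inside:  $x" )   # prints inside:  inner
echo "outside: $x"                # prints outside: outer
```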
--Fred
Original Version: 4/24/2009
Last Updated: 4/24/2009
Applies to: csh, tcsh (and possibly others)
You can spread an alias over multiple lines by ending each line with a backslash (\), but this is a really bad idea.
It works because the backslash serves the general purpose of "escaping" (taking away the special meaning of) the character that follows it, but then the backslash is deleted, leaving the character exposed for the next level of interpretation to treat it as special again. Thus, as shown above, we can use \! to hide the ! while defining an alias, but to expose the ! when the alias is later executed.
In the same way, we can use backslash to escape the special meaning of the newline character, so that it does not cause the end of the alias command. Instead, the newline is included in the value of the alias, and the backslash is removed. As long as we do this in places where newlines would be tolerated (like between the multiple commands in the alias value), we can use it to spread a long alias definition over multiple lines.
However, this is a bad idea. It is very fragile because most people don't pay much attention to whitespace and someone may later add a blank char to the end of the line, after the backslash. In that case, the blank, not the newline, will be the character escaped. Escaping a blank is silently harmless. Since a blank has no special meaning in this case, the backslash has no effect.
However, the newline goes unescaped, so it ends the alias definition, and the remaining lines are not part of the alias. This can be a very hard bug to diagnose since the trailing blank is not very obvious. Also, the trailing lines, not being part of the alias, are executed immediately, at the time that alias is being defined, which can be a very bad thing, depending on the commands.
--Fred
Original Version: 4/24/2009
Last Updated: 4/24/2009
Applies to: csh, tcsh (and possibly others)
A better way to split an alias into multiple lines is to define multiple shorter aliases, and a master alias that refers to them. I do this in my long complex newdir alias, which I define in multiple parts as:
alias newdir1 '...'
alias newdir2 '...'
alias newdir3 '...'
alias newdir4 '...'
alias newdir5 '...'
alias newdir6 '...'
alias newdir  'newdir1;newdir2;newdir3;newdir4;newdir5;newdir6'
By default, any command in the value of an alias is itself subject to interpretation as an alias: the first word of the value of the alias, the first word after a semicolon (;), the first word after a logical and (&&) or a logical or (||), the first word after a pipe (|) or inside backticks (`), any command in the body of an if statement, etc.
--Fred
Original Version: 4/24/2009
Last Updated: 3/25/2010
Applies to: csh, tcsh (and possibly others)
When defining an alias, beware using the alias name inside its own definition, as:
alias rm '(set echo;rm -i -v \!*)'
where the alias rm is being defined to execute the set echo command, followed by the alias rm. No error occurs when you define such an alias, but when you attempt to use it, such "direct recursion" leads to an attempted infinite loop of alias expansion, which is quickly caught and reported as "Alias loop.".
To prevent the infinite recursion in cases where you really do want the alias to call itself, use an if statement or a logical and (&&) or a logical or (||) or some other conditional to make sure the loop ends quickly.
To prevent the recursion in cases where you don't want the alias to call itself, but are simply defining an alias that overrides the name of an existing command or shell script, use backslash to escape the name, as:
alias rm '(set echo;\rm -i -v \!*)'
where the alias rm is being defined to execute the set echo command, followed by the regular command rm. The backslash tells the shell to not treat the reference to rm as an alias invocation.
The one exception is the use of the alias as the first word of its own definition, in which case it is not treated as an alias, so no backslash is required. That's how we've been getting away with things like:
alias rm rm -i -v
Also, beware the "mutual recursion" that can lead to an infinite loop in cases like:
alias rm  '(set echo;xxx -i -v \!*)'
alias xxx rm
where the alias rm and the alias xxx call each other in an infinite loop. Again, the solution is to either introduce a conditional so the loop is not infinite, or escape the rm command to prevent the loop as:
alias rm  '(set echo;xxx -i -v \!*)'
alias xxx \rm
--Fred
Original Version: 4/24/2009
Last Updated: 6/5/2009
Applies to: csh, tcsh (and possibly others)
If you have a really long and complex alias, that cannot be changed to a shell script for one of the reasons given in "Use aliases instead of shell scripts", you can put the body of the alias in a separate file, and refer to that file from the alias via the source command. For more info on the source command, see "Running a shell script in the same process".
This combines the advantages of an alias with the advantages of a shell script. It's still an alias, so it overrides built-in commands, can update shell and environment variables, etc. However, the bulk of the alias definition resides in a separate file, so it can be "sourced" from the aliases of multiple shells. Also, the separate file can contain multiple lines without having to muck about with backslashes to escape newlines, or multiple aliases calling each other. The separate file can even contain comments to explain the tricky parts.
If the alias takes parameters, use "history substitution" via exclamation point (a.k.a. "bang") as shown in "alias - Define a new command" to expand those parameters into the source command line, and use regular parameter substitution via dollar sign ($) to expand the source command parameters inside the "sourced" file as shown in "Use shell scripts instead of aliases". For example, if the body of this alias were long and complex:
alias rm '(set echo;\rm -i -v \!*)'
you might want to put the long complex body:
(set echo;\rm -i -v $*:q)
in a file named ~/bin/rm_alias_body, and define the rm alias to call it as:
alias rm 'source ~/bin/rm_alias_body \!*'
--Fred
Original Version: 5/26/2000
Last Updated: 6/16/2009
Applies to: csh, tcsh
Here's an alias to search the current directory and all subdirectories for a file with a specified name:
alias dirr 'find . -name \!* -print | sort'
Wildcards can be used, but then the parameter must be enclosed in double quotes. Examples:
cd /starting/point/of/search
dirr file.txt
dirr "*.txt"
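The pipeline the alias generates can be tried directly; here's a sketch using a throwaway directory:

```shell
# Build a tiny tree, then search it the way the dirr alias would
mkdir -p /tmp/dirr_demo/sub
touch /tmp/dirr_demo/a.txt /tmp/dirr_demo/sub/b.txt
cd /tmp/dirr_demo
find . -name "*.txt" -print | sort   # ./a.txt, then ./sub/b.txt
```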
This differs from ls -R in the following ways:
It differs from the alias:
alias lsr 'ls -R | grep \!* | sort'
in the following ways:
Thanks to Tom Stluka for suggesting the lsr alias as an alternative and encouraging me to research this further!
--Fred
Last Updated: 6/16/2000
Applies to: csh, tcsh
Here's an alias to execute a specified command against all specified files in the current directory and all subdirectories:
alias confirm '(set echo;find . -name \!:2 -ok \!:1 "{}" ";")'
It applies the command specified as the first parameter to the files specified as the second parameter (wildcards allowed) in the current directory tree, prompting for confirmation at each file. You must enclose the filename in double quotes if you use wildcards. It displays the find command that it generates before executing it, which is useful if you are trying to learn how to use find. Examples:
cd /starting/point/of/search
confirm rm tempfile
confirm rm "*.tmp"
confirm cc "*.c"
If you want the same capability without being prompted at each file, use -exec instead of -ok, as:
alias noconfirm '(set echo;find . -name \!:2 -exec \!:1 "{}" ";")'
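A sketch of the underlying find command, run against a throwaway directory:

```shell
# -exec runs the command once per matching file; {} is replaced by the
# filename, and the quoted ; terminates the command
mkdir -p /tmp/confirm_demo
touch /tmp/confirm_demo/x.tmp /tmp/confirm_demo/y.tmp
find /tmp/confirm_demo -name "*.tmp" -exec rm {} ";"
ls -A /tmp/confirm_demo   # nothing left
```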
--Fred
Original Version: 6/16/2009
Last Updated: 11/24/2010
Applies to: csh, tcsh
Here's a script version of the dirr alias described at alias dirr - Search a directory tree for a file.
In addition to the advantages listed at Use shell scripts instead of aliases, as a script, it is easier to add more complexity, supporting features like:
Here's the script:
#!/bin/csh -f
# dirr
# ------------------------------------------------------------------------------
# Shell script to search the current directory and all subdirectories for a
# file with a specified name.
# ------------------------------------------------------------------------------
# Revision History:
#   $Log$
# ------------------------------------------------------------------------------
if ("$1" == "-h" || "$1" == "--help") then
    echo "Usage: $0:t [options] [filename_pattern]"
    echo "       where options are:"
    echo "         -i     = Ignore case"
    echo "         -l     = Long format (like ls -l), not just names"
    echo "         -v     = Verbose.  Show the generated pipe of commands"
    echo "         -h     = Show this help info"
    echo "         --help = Show this help info"
    echo "Example: $0:t"
    echo "Example: $0:t -l"
    echo "Example: $0:t file.txt"
    echo "Example: $0:t -i file.txt"
    echo -n "Example: $0:t "       ; echo '"*.txt"'
    echo -n "Example: $0:t -l "    ; echo '"*.txt"'
    echo -n "Example: $0:t -i -l " ; echo '"*.txt"'
    echo -n "Example: $0:t -l -i " ; echo '"*.txt"'
    echo "It is also useful to invoke $0:t from another command via back ticks:"
    echo -n "Example: " ; echo -n 'rm `'  ; echo -n "$0:t " ; echo '"*.txt"`'
    echo -n "Example: " ; echo -n 'cat `' ; echo -n "$0:t " ; echo '"*.txt"`'
    exit 1
endif

# Collect command line options
set name_or_iname_option = "-name"
set sort_case_option     =
set print_or_ls_option   = "-print"
set sort_key_option      =
set verbose_option       = "false"
while ($#argv > 0)
    if ("$1" == "-i") then
        # Ignore case
        shift
        set name_or_iname_option = -iname
        set sort_case_option     = -f
    else if ("$1" == "-l") then
        # Long format, sorting by the 11th field of the format, which is
        # the filename.  The find -ls lines have format:
        #   1       2 3          4 5    6     7   8   9  10    11
        #   2203712 8 -rwxr-xr-x 1 fred staff 154 Jun 11 19:33 ./dirrls
        shift
        set print_or_ls_option = -ls
        set sort_key_option    = "-k 11"
    else if ("$1" == "-v") then
        # Verbose option
        shift
        set verbose_option = "true"
    else
        # Not a recognized option.  Assume it's the filename pattern.
        break
    endif
end
if ($#argv == 0) then
    # No -name or -iname option since no argument for it was specified.
    set name_or_iname_option =
endif
if ($verbose_option == "true") then
    echo "find . $name_or_iname_option $*:q $print_or_ls_option | sort $sort_case_option $sort_key_option"
endif
find . $name_or_iname_option $*:q $print_or_ls_option | sort $sort_case_option $sort_key_option
exit $status
For the very latest version that I use regularly, click here.
Thanks to Mike DeLaurentis for inspiring me to think of calling
dirr from within back ticks as shown in the updated Usage section.
--Fred
Original Version: 7/24/2010
Applies to: csh, tcsh
Here's a script to search the current directory tree for large files and folders.
#!/bin/csh -f
# dubig
# ------------------------------------------------------------------------------
# Shell script to show the disk usage of big directory trees.
# ------------------------------------------------------------------------------
# Revision History:
#   $Log$
# ------------------------------------------------------------------------------
if ($#argv == 0 || "$1" == "-h" || "$1" == "--help") then
    echo "Usage: $0:t num_digits [du_options...] [dirs...]"
    echo "where: num_digits = min number of digits in the count of 1k blocks"
    echo "       du_options = options for the du command"
    echo "       dirs       = root directories to be checked"
    echo "Example: $0:t 6"
    echo "         Show sizes that are at least 100,000k (six digits) of"
    echo "         the current directory and all subdirectories"
    echo "Example: $0:t 6 fred fred2 fred3"
    echo "         Same for fred, fred2, and fred3 trees"
    echo "Example: $0:t 6 -a fred fred2 fred3"
    echo "         Same, but show big individual files also."
    echo "Example: $0:t 6 -d 0 fred fred2 fred3"
    echo "         Same, but don't show details of any nested dir levels"
    echo "Example: $0:t 6 -d 2 fred fred2 fred3"
    echo "         Same, but only show details of 2 dir levels deep"
    exit 1
endif
set digits = $1
shift
du -c -k $*:q | grep -E "^[0-9]{$digits,}"
        # -c = Display a grand total also
        # -k = Show sizes in units of kB
exit $status
For the very latest version that I use regularly, click here.
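The heart of the script is the grep filter; this sketch shows it keeping only the lines whose leading block count has at least six digits:

```shell
# Simulated du output: size<TAB>path; keep only sizes >= 100,000 blocks
printf '123\t./small\n4567890\t./big\n' | grep -E "^[0-9]{6,}"
# prints only the 4567890 line
```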
--Fred
Original Version: 10/31/2010
Applies to: csh, tcsh
Here's a script to prompt the user for a value, looping until a valid value is entered.
#!/bin/csh -f
# promptloop
# -----------------------------------------------------------------------------
# Shell script to prompt the user for one of a set of values, insisting
# that a valid value be entered and returning that value as stdout.
# -----------------------------------------------------------------------------
# Usage:
# - Intended to be called from another script, in a situation where the list
#   of valid values is known and can be enumerated, to force the user to select
#   one of the values.  Ideal for choices like:
#       y, n
#       yes, no
#       retry, use default, skip
#       m, t, w, th, f, sa, su
#       mon, tue, wed, thu, fri, sat, sun
# - Typically called via backtick operators, to pipe stdout to a variable.
#   For example:
#       set reply = `promptloop "Proceed (y/n)?" y n`
# Assumptions:
# Effects:
# - Prompts the user until a valid value is entered.
# - Writes value to stdout.
# Notes:
# Implementation Notes:
# Portability Issues:
# Revision History:
#   $Log$
# -----------------------------------------------------------------------------
if ($#argv == 0 || "$1" == "-h" || "$1" == "--help") then
    echo Usage:   set reply = \`$0:t prompt_string values...\`
    echo Example: set reply = \`$0:t \"Proceed \(y/n\)\?\" y n\`
    exit 1
endif
while (1 == 1)
    # Write prompt to /dev/tty, not stdout, so that we can be called within
    # backticks as shown in Usage above, sending only the answer to stdout.
    echo -n "$1" > /dev/tty
    set answer=$<
    @ i = 2
    while ($i <= $#argv)
        if ("$answer" == "$argv[$i]") then
            break; break    # Break out of two nested loops
        endif
        @ i++
    end
end
echo $answer
For the very latest version that I use regularly, click here.
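The same idea can be sketched in a Bourne-family shell as a function (a hedged sketch: the function name and the stdin-based reading are illustrative, not from the original script; the prompt goes to stderr so stdout carries only the validated answer):

```shell
# Loop until the reply matches one of the allowed values; only the
# validated answer goes to stdout, so it can be captured with $(...)
promptloop() {
  prompt=$1; shift
  while printf '%s ' "$prompt" >&2 && IFS= read -r answer; do
    for v in "$@"; do
      if [ "$answer" = "$v" ]; then echo "$answer"; return 0; fi
    done
  done
  return 1
}
reply=$(printf 'x\ny\n' | promptloop "Proceed (y/n)?" y n)
echo "$reply"   # y (the invalid 'x' was rejected)
```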
--Fred
Original Version: 11/12/2010
Last Updated: 11/12/2010
Applies to: csh, tcsh
Here's a simple script to make a beeping sound to get the user's attention, and display an optional message.
#!/bin/csh -f
# beep
# -----------------------------------------------------------------------------
# Shell script to beep to get the user's attention, optionally also showing
# a text message.
# -----------------------------------------------------------------------------
# Usage:
# - Typically called from another script when an error occurs.
# Assumptions:
# Effects:
# - Writes beep (Ctrl-G chars) and optional message to stdout.
# Notes:
# Implementation Notes:
# Portability Issues:
# Revision History:
#   $Log$
# -----------------------------------------------------------------------------
if ("$1" == "-h" || "$1" == "--help") then
    echo "Usage: $0:t [message]"
    echo "Example: $0:t Hello there..."
    exit 1
endif
echo "$*"
Note: The string echoed by the last line contains Ctrl-G chars which cause a beep sound. You may not be able to copy/paste such chars via your browser. If so, click here to download the script.
--Fred
Original Version: 1/13/2001
Last Updated: 8/3/2010
Applies to: csh, tcsh
An instance of csh begins by executing commands from the system file /etc/csh.cshrc and, if a login shell, /etc/csh.login. It then executes commands from the user's file ~/.cshrc, and, if a login shell, from ~/.login. Then, it executes commands in the specified shell script file or from the interactive command line. When a login shell terminates normally, it executes commands from the files ~/.logout and /etc/csh.logout. See the csh man page for more details.
The sequence for tcsh is similar. It executes the same system files. It also executes the same user files, except that if ~/.tcshrc exists, it executes that file instead of ~/.cshrc. However, you can call ~/.cshrc from ~/.tcshrc (via the source command) to run both. Login shells execute the additional files: ~/.history and ~/.cshdirs. The system can be configured to run the system and user .login files before, instead of after, the .cshrc files. See the tcsh man page for more details.
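For example, a minimal ~/.tcshrc along these lines keeps the shared settings in ~/.cshrc while still allowing tcsh-only additions (the autolist line is just an illustrative tcsh-only setting):

```shell
# Hypothetical ~/.tcshrc: run the shared startup file first...
if ( -r ~/.cshrc ) source ~/.cshrc
# ...then settings that only tcsh understands:
set autolist        # show all completions when TAB is ambiguous
```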
--Fred
Original Version: 7/31/1987
Last Updated: 7/3/2016
Applies to: csh, tcsh
Here's an example of a .cshrc file. It's pretty close to the one I currently use (which is here), including various portions that I've commented out or evolved since I first wrote it in 1987. It's gotten pretty ugly, and may do some things the old way, where there are now newer better ways, but it has worked reliably on lots of different Unix systems for nearly 30 years. Read through it, if you are interested, to get an idea of the kinds of things that are possible. Future tips will discuss various parts in more detail. Feel free to comment, but please be kind.
#!/bin/csh -f
# ------------------------------------------------------------------------------
# The following stuff is defined for interactive shells only.  This means that
# it is not available to shells created implicitly by scripts.
# ------------------------------------------------------------------------------
if ($?prompt) then
  echo "Running .cshrc..."        # Note: Do this output only for
                                  #       interactive shells.  Otherwise
                                  #       it interferes with things like
                                  #       scp from other computers.
  set history = 1000              # Save last 1000 commands
  set savehist = 1000             # Save commands till next session
  unset histdup                   # Don't discard duplicates from history
                                  # Have to explicitly unset it, in case
                                  # a system-wide startup file sets it, as
                                  # was happening with Amazon AWS Fedora
                                  # Core 8 Linux instances.
  set notify                      # Notify immediately when background
                                  # job completes.
  set filec                       # File name completion via ESC key
                                  # (TAB key in tcsh).
  set autolist                    # (TCSH) Show all possible completions
                                  # when TAB hit and prefix not unique.
  set complete = enhance          # Ignore case and other minor diffs
                                  # like hyphens vs underscores when
                                  # doing file name completion.
  set matchbeep = nomatch         # Beep only when completion finds no
                                  # match, instead of "ambiguous" (when
                                  # there are multiple matches), "never",
                                  # or "notunique" (when there is one
                                  # exact match and other longer matches)
# set nobeep                      # Never beep (especially useful to
#                                 # suppress beeps when backspace at
#                                 # start of command line so nothing
#                                 # is there to be deleted).
#                                 # Does not suppress beeps from
#                                 # explicit use of Ctrl-G.
  set printexitvalue = "true"     # Show any non-zero statuses that occur.
                                  # Note: Set it to a value ("true"),
                                  #       instead of just setting it,
                                  #       so we can test that value
                                  #       in .login.
  mesg y                          # Allow messages from other users.
  set time = (1 "Elapsed time: %E  CPU seconds: %U/%S")
      # Report time consumed by all commands which take over 1 CPU second.
      # 2nd part (undocumented) is like a printf control string.
      # Default is:
      #   "%Uu %Ss %E %P %X+%Dk %I+%Oio %Fpf+%Ww"
      # where:
      #   %U = user cpu time
      #   %S = system cpu time
      #   %E = elapsed (wall clock) time
      #   %P = percentage utilization (cpu_time/elapsed_time)
      #   %X = text_size/cpu_second (ru_ixrss/cpu_time)
      #   %D = data_size/cpu_second ((ru_idrss+ru_isrss)/cpu_time)
      #   %K = image_size/cpu_second ((ru_ixrss+ru_idrss+ru_isrss)/cpu_time)
      #   %M = maximum image size (ru_maxrss/2 -- I don't know why /2)
      #   %I = number of input blocks (ru_inblock)
      #   %O = number of output blocks (ru_oublock)
      #   %F = number of hard page faults (ru_majflt)
      #   %R = number of page reclaims (ru_minflt)
      #   %W = number of swap-outs (ru_nswap)
  set watch = ( 1 any any )       # Report logins/logouts of all users,
                                  # checking every 1 minute (tcsh only).
                                  # (Doesn't work except for self unless
                                  # you are privileged enough).
# set implicitcd                  # Treat dir name as a cd command to
#                                 # that dir name
  set cdpath = (.. ~/fred ~ ../.. ../../.. ~/fred/admin ~/fred/family \
                ~/fred/InfrTrac ~/fred/InfrTrac/ITF \
                ~/fred/bristle ~/fred/javaapps ~/fred/webapps \
                ~/fred/Mozilla/TBird/Profile ~/fred/Mozilla/TBird \
                ~/fred/WebSite/bristle \
                /usr/local \
               )
  set listjobs = long             # List all jobs when suspending one

  # ---------------------------------------------------------------------------
  # These aliases hide or change the operation of regular commands and so
  # should not be in effect when scripts are run, since the script writer may
  # not have anticipated this.
  # ---------------------------------------------------------------------------
  alias cp      cp -i -v          # -i = Prompt before overwriting
                                  # -v = Verbose
  alias mv      mv -i -v          # -i = Prompt before overwriting
                                  # -v = Verbose
  alias rm      rm -i -v          # -i = Prompt before removing
                                  # -v = Verbose
  alias ln      ln -i -v          # -i = Prompt before replacing another
                                  #      symlink, rather than refusing
                                  # -v = Verbose
  alias grep    grep --color      # Highlight matches in color (see
                                  # GREP_COLORS environment variable)
# alias diff    diff -l -s        # Long format, list files which are same
#                                 # NO.  Long format contains FF which is
#                                 # too tedious.
# alias diff    diff -s           # List files which are same
                                  # NO.  Too much noise.  Leave off -s.
# alias more    more -c           # Re-paint screen, don't scroll, for
#                                 # full screen scrolls
  #Not needed.  Used LESS environment variable instead
# alias less    'less -#8'        # Arrow keys move 8 columns, not
#                                 # default of 0 which means half screen.
  alias from    'egrep "^From |^Subject:" /usr/spool/mail/$user'
                                  # Report who new mail is from
  alias su      'finger;\su -m'   # Show who is logged in when I su.
                                  # Preserve environment when I su.
                                  # Note: \su avoids an alias loop.
# alias sudo    sudo -E           # Preserve environment vars during sudo
#                                 # so LESS takes effect for sudo less.
                                  # No.  Too global an effect.  May not
                                  # be a good idea.  Add LESS to env_keep
                                  # in /etc/sudoers of various servers
                                  # instead for now.  Also, this would
                                  # not have taken effect in scripts.
# alias ps      ps -fly           # Show more info about each process.
  alias ps      ps -w -o user,tty,stime,ppid,pid,pgid,pri,rss,vsz,time,args
                                  # Show customized info about each
                                  # process.
  alias pstree  ps --forest       # ASCII-art tree of processes
  alias top     top -d1           # Update every 1 (not 5) second.
  if ("`printenv OSTYPE`" == "darwin") then
    unalias top                   # No need.  Defaults to 1 second and
                                  # the -d option is for a different
                                  # purpose and takes no param.
  endif
  alias which   where             # Shows all matches, not just first one,
                                  # and shows built-in commands as well.
  alias df      df -h             # Use suffixes GB, MB, KB, etc
  alias du      du -h -c          # Use suffixes GB, MB, KB, etc, w/grand
                                  # total
  alias dutop   du -s             # Show only top level totals, not
                                  # separate size of each subdirectory

  # ---------------------------------------------------------------------------
  # This command is inherently interactive.  It forces non-prompting commands
  # to prompt for confirmation before acting on files.
  # ---------------------------------------------------------------------------
  alias confirm '(set echo;find . -name \!:2 -ok \!:1 "{}" ";")'
                                  # CONFIRM will apply the command
                                  # specified by P1 to the files specified
                                  # by P2 (wildcards allowed) in the
                                  # current directory tree, prompting for
                                  # confirmation for each.
                                  # Must enclose P2 in double quotes if
                                  # wildcards used.
                                  # set echo - causes find command to be
                                  # echoed, to remind me how to use find.
                                  # () - cause the sequence of commands
                                  # to be run in a subprocess.  This is
                                  # so that the set echo status of the
                                  # current process is not affected.

  # ---------------------------------------------------------------------------
  # Loading the history buffer with useful stuff
  # ---------------------------------------------------------------------------
#?? alias loadhist \
#??  'set j=`cat ~/.one_liners|wc -l`;source -h ~/.one_liners; history|tail -"$j"'
#??                               # Loads file of one-liners into history
#??                               # buffer for convenient command-line
#??                               # recall.

  # ---------------------------------------------------------------------------
  # These commands maintain the prompt with the following features:
  # - History number is displayed in the prompt.
  #   - No.  Commented out.  Was done via \\!
  # - Time at which this directory was entered is displayed.
  #   - Now updated to show time at which most recent prompt was displayed.
  #     Much more current than just time at which directory was entered.
  # - Your username (the one you logged in as) is displayed.
  # - Your effective username is displayed in standout mode if you have su'd to
  #   a different username (root, tomcat, etc.) as a reminder that you may
  #   currently have different privileges than usual.
  # - Machine name is displayed.
  # - Full name of current directory is displayed.
  # - Your own home directory when in the prompt is always displayed as
  #   "~" (not /home/fred, for example).
  # - Other people's main directories are displayed like "~zebrom00".
  # - When you switch to a directory via a soft link, the link path is
  #   displayed instead of the real path.  Use pwd -P to see the real path.
  #   On older systems pwd with no options may do so.  On all systems
  #   echo $cwd and echo $PWD just show the link names.
  # - Number of nested shells is indicated (in tcsh, not csh).
  # - Number of directories stacked by pushd/popd, if any, is displayed.
  # - "ERROR" is appended to prompt if previous command failed
  # They are needed only in interactive mode.
  # Note: Moving these to non-interactive section would cause C shell to
  #       lose ability to distinguish between interactive and non-interactive
  #       shells because the prompt would always be defined.
  # ---------------------------------------------------------------------------
  set e = ""
  alias newdir0 'if ($status > 0) set e="ERROR "'
  # Not needed.  Use %~ below instead.
  # alias newdir1 'set noglob;set d=(`dirs`);unset noglob'
  #                               # Noglob needed to prevent ~ from being
  #                               # expanded
  # Use $#dirstack instead of $#d since there's no $d defined anymore.
  # alias newdir2 'set n="";@ m = $#d - 1;if ($m > 0) set n="$m"; unset m'
  alias newdir2 'set n="";@ m = $#dirstack - 1;if ($m > 0) set n="$m"; unset m'
  # Not needed.  Use %t below instead.
  # alias newdir3 'set t=`date`;set t=`echo "$t[4]x" | sed "s/:..x//"`'
  # Hostname part not needed.  Use %m below instead.
  # alias newdir4 'set c=`whoami`@`hostname -s`'
  #                               # Note: Beware -s on hostname on Unix variants
  #                               #       like Solaris where it means "set"
  #                               #       instead of "short" and tries to change
  #                               #       the name of the host.
  # Use both whoami (user currently su'd to) and logname (original user who
  # logged in) instead of just %n (same as logname).  Otherwise, there's no
  # indication that you've su'd to someone else.  Since we can query both,
  # can compare them and only show both when they differ.
  alias newdir4 'set c1=`logname`;set c2=`whoami`;if ($c1 == $c2) set c2=""'
  alias newdir5 'set s=""'
  if ($?shlvl) then
    alias newdir5 'set s="";if ($shlvl > 0) set s="$shlvl/"'
  endif
  # Use simpler version with tcsh built-ins.  Don't really need to support
  # plain old csh any more.  If someday I do, restore the old stuff and move
  # this version to a .tcshrc file or something.
  # alias newdir6 'set prompt="\\! $t $c [$s$n] $d[1] % ";unset c t d n s'
  # alias newdir6 'set prompt="$t $c [$s$n] $d[1] % ";unset c t d n s'
  alias newdir6 'set prompt="%B%t $c1@%m [$s$n] %~ %S$c2%s%% $e%b";unset n s c1 c2; set e=""'
                                  # %B %b = Start/end bold text
                                  # %t    = Current time
                                  # %m    = Machine name
                                  # %~    = Current directory name
                                  # %S %s = Start/end standout text
                                  # %%    = Display a percent sign (%)
  # alias newdir 'newdir1;newdir2;newdir3;newdir4;newdir5;newdir6'
  alias newdir 'newdir0;newdir2;newdir4;newdir5;newdir6'
                                  # Set prompt
  alias cd 'set old=$cwd:q;chdir \!*;newdir'
                                  # Remember old dir, change dir and
                                  # update prompt
  alias back 'set back=$old:q;cd $back:q;unset back'
                                  # Toggle back to previous dir and
                                  # update prompt
                                  # Note: tcsh supports "cd -" which
                                  #       does this automatically, and
                                  #       "owd" (old working directory)
                                  #       shell variable like my "old"
  alias pushd 'pushd \!*;newdir'  # Push dir and update prompt
  alias popd 'popd \!*;newdir'    # Pop dir and update prompt
  alias up 'cd ..'                # Move up one dir, updating prompt
  alias dn 'cd ./\!*'             # Move down to subdir, updating prompt
  alias ac 'cd ../\!*'            # Move across to sibling dir, updating
                                  # prompt
  alias pd pushd                  # Push dir and update prompt
  alias pop popd                  # Pop dir and update prompt
  # On Linux, no -P option supported or needed.  Already shows real pwd,
  # not links.
  if ("`printenv OSTYPE`" == "darwin") then
    alias pwd pwd -P              # Show real pwd, not links
  endif
  alias mkdir mkdir -p -v         # Create parent dirs if needed, show
                                  # names of dirs created.
  alias md mkdir
  alias rmdir rmdir -p -v         # Delete directory and all empty parent
                                  # directories (reverse of mkdir -p),
                                  # show names of dirs deleted.
  if ("`printenv OSTYPE`" == "darwin") then
    alias rmdir rmdir -p          # Delete directory and all empty parent
                                  # directories (reverse of mkdir -p)
                                  # Note: No -v option exists on Mac OS X
                                  #       10.5 (Leopard)
  endif
  alias rd rmdir
  alias mcd 'md \!* && cd \!*'
  alias c pushd                   # Like the c.bat I created on Windows

  alias tomtailcat 'tail -F $CATALINA_HOME/logs/catalina.out'
  alias tomlogcat  'less $CATALINA_HOME/logs/catalina.out'
  alias tomlogfile '\ls -t $CATALINA_HOME/logs/localhost_log*.txt | head -1'
  alias tomtail    'tail -F `tomlogfile`'
  alias tomlog     'less `tomlogfile`'
  alias tomlogs    'pd $CATALINA_HOME/logs;dir'
  alias tomtailr   'tac `tomlogfile` | more'
  alias tomtail1   'tomtail | grep " 1 "'
  alias tomtail2   'tomtail | grep " [12] "'
  alias tomtail3   'tomtail | grep " [123] "'

  # --------------------------------------------------------------------------
  # Machine specific stuff
  # --------------------------------------------------------------------------
  if ("`printenv OSTYPE`" == "linux") then
    alias logs      'source ~/fred/bin/logs'
    alias messlogs  'source ~/fred/bin/messlogs'
    alias messcheck 'grep -E open\|LOGIN /var/log/messages* | grep -v -E fred\|REFUSED\|FAILED | more'
    alias maillogs  'source ~/fred/bin/maillogs'
    alias seclogs   'source ~/fred/bin/seclogs'
    alias bootlogs  'source ~/fred/bin/bootlogs'
    alias cronlogs  'source ~/fred/bin/cronlogs'
    alias ftplogs   'source ~/fred/bin/ftplogs'
    alias acclogs   'source ~/fred/bin/acclogs'
    alias errlogs   'source ~/fred/bin/errlogs'
    alias mailtail  'tail -f /var/log/maillog'
    alias maillog   'less /var/log/maillog'
    alias maillogs  'pd /var/log;dir maillog*'
    alias boottail  'tail -f /var/log/boot.log'
    alias bootlog   'less /var/log/boot.log'
    alias bootlogs  'pd /var/log;dir boot*'
    alias crontail  'tail -f /var/log/cron'
    alias cronlog   'less /var/log/cron'
    alias cronlogs  'pd /var/log;dir cron*'
    alias ftptail   'tail -f /var/log/xferlog'
    alias ftplog    'less /var/log/xferlog'
    alias ftplogs   'pd /var/log;dir xferlog*'
    alias acctail   'tail -f /var/log/httpd/access_log'
    alias acclog    'less /var/log/httpd/access_log'
    alias acclogs   'pd /var/log/httpd;dir access*'
    alias errtail   'tail -f /var/log/httpd/error_log'
    alias errlog    'less /var/log/httpd/error_log'
    alias errlogs   'pd /var/log/httpd;dir error*'
  else if ("`printenv OSTYPE`" == "darwin") then
    # No need yet.
  endif
endif

# ------------------------------------------------------------------------------
# The following stuff is defined for all shells.
# ------------------------------------------------------------------------------

# ----------------
# File protections
# ----------------
umask 022                         # Protect created files against write
set noclobber                     # Don't overwrite files by piping.

# ------------------
# Directory commands
# ------------------
alias ls 'ls --full-time --color=auto'
                                  # Show full date and time, not just
                                  # one or the other, when using -l.
                                  # Also shows day of week, and seconds.
                                  # Use different colors for different
                                  # file types (dirs, links, etc.) except
                                  # when piping to another command.
if ("`printenv OSTYPE`" == "darwin") then
  alias ls 'ls -TeG@O'            # -T = Show full date and time, not
                                  #      just one or the other, when
                                  #      using -l.  Also shows seconds.
                                  # -e = Show ACL, if any, when using -l.
                                  # -@ = Show keys and sizes of extended
                                  #      attributes, if any, when using
                                  #      -l.
                                  #      - xattr -l to see contents, not
                                  #        just name and size
                                  #      - xattr -p = print
                                  #      - xattr -d = remove
                                  #      - xattr -w = set value
                                  #      - xattr -r = recur
                                  #      - xattr -h = help (no man page)
                                  # -G = Use different colors for
                                  #      different file types (dirs,
                                  #      links, etc.) except when piping
                                  #      to another command.
                                  # -O = Show file flags like "hidden".
endif
alias lst 'echo "Modified Inode Changed Accessed Bytes Blocks Type Name"; echo "----------------------------------- ----------------------------------- ----------------------------------- ----- ------ ------------ ------------"; stat --printf="%y\t%z\t%x\t%s\t%b\t%F\t%n\n"'
if ("`printenv OSTYPE`" == "darwin") then
  alias lst 'echo "Created Modified Inode Changed Accessed Bytes Blocks Type Name"; echo "-------------------- -------------------- -------------------- -------------------- ----- ------ ------------ ------------"; stat -f "%SB%t%Sm%t%Sc%t%Sa%t%z%t%b%t%HT%t%N"'
endif
alias fd ls -FlA                  # Directory with time/date, etc.
if ("`printenv OSTYPE`" == "darwin") then
  alias fd ls -FlA@               # Directory with time/date, etc, and
                                  # extended attribute keys and sizes.
endif
# Moved to a shell script to be callable from sudo
#alias dir 'pwd; ls -FlA \!*; echo -n "Total = "; ls -A \!* | wc -l'
                                  # Like VMS DIRECTORY
## OK to hide native dir.  It's the same as ls.
# Moved to a shell script to support call w/o params
# alias dirr 'find . -name \!* -print | sort'
                                  # Recursive directory search
                                  # Better than dir -R because it doesn't
                                  # follow links.
alias subs ls -Fd '`find . -maxdepth 1 -type d -o -type l | sort`'
#?? Stopped working reliably on Mac 8/8/2010.  Reported way too many files.
#?? Why?
#?? alias subs 'ls -d `echo .*/. */. | sed "s^/.^^g"`'
                                  # Display names of all subdirectories.
alias subsb '/bin/ls -d `/bin/ls -Fa | grep /$`'
                                  # Display names of all subdirectories
                                  # except those which are soft links.
alias subsc 'ls -l | grep "^d"'   # Display details of all subdirectories,
                                  # except those starting with "."
alias subsd 'ls -al | grep "^d"'  # Display details of all subdirectories,
                                  # including those starting with "."
alias subsr 'find . -type d -ls'  # Display details of all subdirectories
                                  # recursively
alias since 'find . -ctime -\!* -ls'
                                  # Recursive directory search for files
                                  # modified since specified number of days
alias sincem 'find . -mtime -\!* -ls'
                                  # Same as above, but only if contents
                                  # were changed, not if only owner,
                                  # group, permissions, etc. were
                                  # changed.  Useful?
#Moved to a shell script to support multiple filespecs
#alias except 'find * -maxdepth 0 -not -name \!:1 -ls'
alias except '(set noglob; \except -i -l \!*)'
                                  # Show ls -dgils listing of all files
                                  # except those matching pattern

# -----------------
# File manipulation
# -----------------
alias del rm                      # Prompt before removing
alias m more

# -----------
# Information
# -----------
# Replaced with my custom search script
#alias search fgrep -i -n -e      # Like VMS SEARCH
#alias sea search
alias processes ps -Nugxww        # Show processes on node
alias proc processes
# Moved to a shell script to be callable from sloop
#alias psgrep 'ps -A | head -1; ps -A | grep \!* | grep -v grep'
                                  # Search for a process
#??alias cidiff ~/com/ci_diff.csh
                                  # Case insensitive file comparison
#??alias diff_dir ~/com/diff_dir.csh
                                  # Case insensitive directory comparison
alias type cat
#??alias unleave ~/com/unleave.csh
                                  # Cancels alarm set by "leave"
#alias bc bc ~/.bcrc              # Get bc to load a startup file
alias bc 'echo \!* | tr "x" "*" | \bc ~/.bcrc'
                                  # Get bc to load a startup file
                                  # and to evaluate the expression on
                                  # the command line, and to accept x
                                  # instead of * for multiplication
                                  # Or, could use expr command
alias ghost script                # Keep a transcript of session
alias h history
#alias h history 50
alias jobs jobs -l
#??alias la ls -a
#??alias lf ls -FA
#??alias ll ls -lA

# ---------------
# Time management
# ---------------
# Note: All that use the sched command must be aliases, not shell scripts
#       because sched sets a timer within the current shell, but a script
#       creates, runs in, and kills its own shell, so the timer would be
#       scheduled and then abandoned.
#alias beepafter 'sched +00:\!:1 beep "\!:2-$" && sched && set ignoreeof'
alias beepafter 'sched +00:\!:1 beep "\!:2-$" && set ignoreeof'
      # Beep w/message after specified minutes
      # Also, echo the scheduled event list as confirmation
      # - No.  Not necessary.  Built into precmd now.
      # Example:  beepafter 10 Ten minutes have passed
#alias beepnowandevery 'beep "\!:2-$" && sched +00:\!:1 beepnowandevery \!* && sched'
alias beepnowandevery 'beep "\!:2-$" && sched +00:\!:1 beepnowandevery \!*'
      # Beep w/message now and every specified minutes
      # Also, echo the scheduled event list as confirmation
      # - No.  Not necessary.  Built into precmd now.
      # Example:  beepnowandevery 10 Ten minutes have passed again
#alias beepevery 'sched +00:\!:1 beepnowandevery \!* && sched'
alias beepevery 'sched +00:\!:1 beepnowandevery \!*'
      # Beep w/message every specified minutes
      # Also, echo the scheduled event list as confirmation
      # - No.  Not necessary.  Built into precmd now.
      # Example:  beepevery 10 Ten minutes have passed again
#alias beeper 'sched +00:\!:1 beepnowandevery \!:2-$ && sched'
alias beeper 'if (\!:2 == \!:$) echo "Usage: \!:0 initial REPEAT message"; if (\!:2 \!= \!:$) sched +00:\!:1 beepnowandevery \!:2-$'
      # Beep w/message after initial minutes and every repeat minutes
      # Also, echo the scheduled event list as confirmation
      # - No.  Not necessary.  Built into precmd now.
      # Example:  beeper 10 1 Another minute since the first 10
      # Note: The if statement is an attempt to prevent usage errors
      #       like omitting the repeat argument:
      #           % beeper 20 "Twenty minutes have passed"
      #       which should have been
      #           % beeper 20 1 "Twenty minutes have passed"
      #       Unfortunately, it only works when the message is
      #       quoted or a single word.  Really should rewrite it
      #       to confirm that 2nd arg is entirely numeric.
#alias alarm 'sched \!:1 beepnowandevery 1 "\!:2-$" && sched'
alias alarm 'sched \!:1 beepnowandevery 1 "\!:2-$"'
      # Beep w/message at specified time and every minute thereafter
      # Also, echo the scheduled event list as confirmation
      # - No.  Not necessary.  Built into precmd now.
      # Example:  alarm 11:15 Time is now 11:15
#alias precmd sched
      # Show list of scheduled tasks before each prompt as a reminder
      # to not kill a shell where I have stuff scheduled.
alias precmd 'newdir; sched; if ("`sched`" != "") set ignoreeof; if ("`sched`" == "") unset ignoreeof'
      # Prevent Ctrl-D from killing a shell in which there are
      # scheduled tasks.
      # Also show list of scheduled tasks before each prompt as a
      # reminder to not run a long-running command that will prevent
      # the alarms from going off until it ends.
      # Note: precmd is a csh "special alias" that runs just before
      #       each command line prompt is printed.  I'm using it here
      #       as a way to check often whether this shell has any
      #       scheduled tasks.

# ----
# JIRA
# ----
alias j    jira
alias jita jira                   # Frequent typo
alias jiar jira                   # Frequent typo

# ---
# Git
# ---
setenv merged_from_cf_replacement merged_from_cf_replacement
                                  # Allows command line completion as an
                                  # env var by typing $merg
alias got git                     # Frequent typo
# Made into a script to use from other scripts:
# alias gitpulldryrun git fetch --dry-run -v
                                  # Use fetch.  There's no pull --dry-run
alias gitfetchdryrun git fetch --dry-run -v
                                  # Show if there's anything to pull
alias gitdiffremote git diff master origin/master
                                  # Show details of what would be pulled
                                  # Doesn't work.  Why not?
alias gitsync ./gitsync
alias gits git status
alias gitunstage git reset HEAD   # Unstage w/o changing working copy
alias gitdiff git diff HEAD       # Staged and unstaged diffs
alias gitaddinteractive git add -i
                                  # Git shell for status, diff, add, etc.
alias gitaddpatch git add -p      # Stage portion of a file to be committed
alias gitwhatfileschanged git whatchanged
                                  # Show all files in a commit
alias gitlogshowchanges git log -p
alias gitwhatlineschanged git log -p
                                  # Show diffs of files in commit log
alias gitlog git log --graph
alias gitblameperline git blame
alias gitshow git show
alias gitlogstatsummary git log --stat --summary
alias gitlogoneline git log --oneline
                                  # One line per commit
alias gitlogbyperson git shortlog # Commits grouped by committer
alias gitgui git gui
alias gitk gitk
alias gitx gitx
alias gitcheckout git checkout
alias gitcommit git commit -v     # Show diffs in editor for Git comments
alias gitcommitamend git commit --amend
alias gitgrep git grep -i
alias gitcherrypick git cherry-pick --signoff -x
                                  # Get files from a specified old commit
                                  # and create a new commit on the current
                                  # branch
                                  # --signoff = Add my name to the commit
                                  #             comment
                                  # -x = Add "(cherry picked from
                                  #      commit ...)" to the commit
                                  #      comment
alias gitcherrypicknonewcommit git cherry-pick --no-commit
                                  # Get files from a specified old commit
                                  # but do not create a new commit
alias gitstashlist git stash list
alias gitstash git stash
alias gitstashapply git stash apply
alias gitstashapplystage git stash apply --index
alias gitstashdrop git stash drop
alias gitstashpush git stash
alias gitstashpop git stash pop
alias gitstashdiff git stash show -p
alias gitstashpoptobranch git stash branch
alias gitdifffast git diff --no-ext-diff
                                  # (Bypass external diff tool)
alias gitremote git remote -v     # Show full URLs also
alias gitbranch 'git status -b --porcelain | head -1 | cut -c 4-'
                                  # Show current branch
alias gitb gitbranch
alias gitbranches git branch -a -v
                                  # Show all branches
alias gitbranchswitch git checkout
alias gitbranchcreate git branch
alias gitbranchcreateandswitch git checkout -b
alias gitbranchdelete git branch -d
alias gitbranchmerge 'git checkout master; git merge'
alias gitbranchpush git push origin
# No.  Defaults to doing the pull even w/o the --track option
# alias gitbranchpull 'git checkout --track origin/\!:1'
alias gitbranchpull git checkout
alias gitbranchchanges git log master..
                                  # See ~/fred/git/Tips/branchlog.txt
alias gitm git checkout master
alias gitcf git checkout merged_from_cf_replacement
alias reviewcf 'delete_pyc_files; windiff . $PWD.cf_reviewed'
alias gitmergeresolve git add
alias githelp git help            # Help on Git commands

# ----------------
# Editors and such
# ----------------
setenv EDITOR vi
if ("`printenv OSTYPE`" == "darwin") then
  setenv EDITOR ew
endif
setenv EXINIT 'set autoindent'
setenv PAGER less
setenv LESS "-#8 -M -j.5 -F -R -S -W -X"
                                  # -#8  = Left/right arrow scroll by
                                  #        8 chars
                                  # -m   = Show percent in prompt
                                  # -M   = Show percent, name, etc,
                                  #        in prompt
                                  # -N   = Show line numbers
                                  # -j.5 = Searches and other jumps put
                                  #        target line in middle of
                                  #        screen, not at top line
                                  #        Not supported on Mac OS X
                                  # -F   = Quit automatically if only one
                                  #        screen of text
                                  # -R   = Use ANSI color escape
                                  #        sequences
                                  # -S   = Chop long lines (can scroll
                                  #        left/right w/arrow keys)
                                  # -W   = Highlight new lines and
                                  #        jumped to search results
                                  # -X   = Don't restore screen to
                                  #        non-less contents when exiting
                                  #        Otherwise, -F can cause short
                                  #        files to flash on screen too
                                  #        briefly to be noticed.
                                  #        Also makes less work better
                                  #        with windowing environments
                                  #        with scrollable command line
                                  #        windows.  Without this option,
                                  #        if you attempt to scroll back
                                  #        using the native windowing
                                  #        scroll mechanism, you actually
                                  #        scroll back to the commands
                                  #        before the less command, not
                                  #        to the previous lines of the
                                  #        file.
if (`hostname -s` == "mbp1") then
  setenv LESS "-#8 -M -F -R -S -W -X"
                                  # -j.5 Not supported on Mac OS X
                                  #      before Snow Leopard
endif

# ---
# pub
# ---
alias pub ./pub                   # Always run the local ./pub, not the one
                                  # from ~/fred/macbin, ~/fred/bin, etc.
alias review ./review             # Always run the local ./review, not the
                                  # one from ~/fred/macbin, ~/fred/bin, etc.

# -----------------
# Tomcat Web server
# -----------------
setenv CATALINA_HOME /usr/local/tomcat

# ---------------------------------------
# Path settings
# ---------------------------------------
if (`hostname` == "neptune.bristle.com") then
  setenv JAVA_HOME /usr/java/j2sdk1.4.2_05
else if (`hostname` == "mbp1.local") then
  setenv JAVA_HOME /System/Library/Frameworks/JavaVM.framework/Home
else if (`hostname` == "crapola") then
endif

if (`hostname` == "neptune.bristle.com") then
  # Probably already on the path:
  # /usr/kerberos/bin /usr/local/bin /bin /usr/bin /usr/X11R6/bin
else if (`hostname` == "mbp1.local") then
  # Probably already on the path:
  # /usr/bin /bin /usr/sbin /sbin /usr/local/bin /usr/X11/bin
else if (`hostname` == "crapola") then
endif

set path = ($JAVA_HOME/bin $path) # Specific Java before version in /usr/bin.
# All MySQL stuff done as mysql user.  Not needed on my path.
if ("`printenv OSTYPE`" == "darwin") then
  set path = ($path /usr/local/mysql/bin)
endif
set path = (~/fred/bin $path)     # Fred's stuff first
if ("`printenv OSTYPE`" == "darwin") then
  set path = (~/fred/macbin $path)
                                  # Fred's Mac stuff first
endif
set path = (~/bin $path)          # My stuff first, even before Fred's
if ("`printenv OSTYPE`" == "darwin") then
  set path = (~/fred/macbin $path)
                                  # Mac stuff first, before Linux
endif
set path = ($path /usr/local/sbin /usr/sbin /sbin)
                                  # Stuff needed by root
set path = ($path /usr/games)     # Games
set path = ($path .)              # Local directory contents last

# ------------------------------------------------------------------------------
# The following stuff is defined for interactive shells only.  This means that
# it is not available to shells created implicitly by scripts.
# ------------------------------------------------------------------------------
if ($?prompt) then
  # ---------------------------------------------------------------------------
  # Info to be printed out at startup of each interactive shell.
  # ---------------------------------------------------------------------------
  date                            # Print current Time/date
  newdir                          # Change prompt to reflect current dir.
                                  # Note: Do this after setting the PATH
                                  #       because the newdir alias uses
                                  #       some commands that may not be
                                  #       on the path by default.
  if ("`printenv OSTYPE`" == "darwin") then
    settitle -h
  endif
endif
--Fred
Last Updated: 1/13/2001
Applies to: sh
A login instance of sh begins by executing commands from the system file /etc/profile, and then the user's file ~/.profile. It then follows the sequence followed by non-login instances, executing commands in the file named by the value of the ENV environment variable, if any. There is no equivalent of the .logout files of csh. See the sh man page for more details.
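For example, you can point ENV at a file of per-shell settings from within ~/.profile. A minimal sketch (the ~/.shrc name is a common convention, not a requirement):

```shell
# Hypothetical lines for ~/.profile: every new sh instance will then
# read the file named by $ENV (here ~/.shrc), not just login shells.
ENV="$HOME/.shrc"
export ENV
```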
--Fred
Last Updated: 1/28/2012
Applies to: bash
See also: cdpath in csh and tcsh, CDPATH
in ksh
The parameter CDPATH specifies the directories in which the cd command looks for subdirectories. For example, in my .bashrc file, I set:
CDPATH="..:~:../..:../../..:~/ste/dat:~/ste/layout:~/ste/adt:~/ste"
Then, when I type:
cd src
it looks for the following (in order):
./src ../src ~/src ../../src ../../../src ~/ste/dat/src ~/ste/layout/src ~/ste/adt/src ~/ste/src
This makes it much less tedious to navigate a complex directory tree.
Note: The current directory (".") is always implicitly the first entry of CDPATH, so the cd command always searches the current directory first.
--Fred
Last Updated: 1/13/2001
Applies to: bash
A login instance of bash begins by executing commands from the system file /etc/profile. Then, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. It then follows the sequence followed by non-login instances, executing commands in the file ~/.bashrc. Then, it executes commands in the specified shell script file or from the interactive command line. When a login shell terminates normally, it executes commands from the file ~/.bash_logout.
You can cause a shell script file to execute at the end of each shell (not only login shells), by adding the following line to your ~/.bashrc file:
trap '. $HOME/.sh_logout; exit' 0
Almost all of this behavior can be altered via options on the command line used to invoke bash. There are options to tell bash to skip various files, to run different files, to run in posix mode, etc. Also, bash follows the startup sequence of sh instead, if invoked by the verb "sh" (via a Unix link or by renaming the bash executable file to "sh"). Finally, it behaves differently if invoked as a remote shell via rshd.
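For example, the --noprofile and --norc options skip the startup files entirely (handy when debugging a broken ~/.bashrc), and --posix switches on posix mode:

```shell
#!/bin/sh
# Start bash with no user or system startup files at all:
bash --noprofile --norc -c 'echo "started clean"'
# Run a bash command in posix mode:
bash --posix -c 'echo "posix mode"'
```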
See the bash man page for more details.
--Fred
Last Updated: 10/5/1999
Applies to: ksh
See also: cdpath in csh and tcsh, CDPATH in bash
The parameter CDPATH specifies the directories in which the cd command looks for subdirectories. For example, in my .profile file, I set:
export CDPATH=".:..:~:../..:../../..:~/ste/dat:~/ste/layout:~/ste/adt:~/ste"
Then, when I type:
cd src
it looks for the following (in order):
./src ../src ~/src ../../src ../../../src ~/ste/dat/src ~/ste/layout/src ~/ste/adt/src ~/ste/src
This makes it much less tedious to navigate a complex directory tree.
Note: If you set CDPATH, and don't include a "." entry or a null entry (2 consecutive colons, or a leading or trailing colon), the cd command does not search the current directory.
Thanks to David "Thor" Collard for this tip.
--Fred
Last Updated: 1/13/2001
Applies to: ksh
A login instance of ksh begins by executing commands from the system file /etc/profile, and then the user's file ~/.profile. It then follows the sequence followed by non-login instances, executing commands in the file named by the value of the ENV environment variable, defaulting to the file ~/.kshrc.
There is no equivalent of the .logout files of csh, but you can mimic the effect, by adding the following line to your ~/.profile file:
trap '. $HOME/.sh_logout; exit' 0
See the ksh man page for more details.
--Fred
Last Updated: 4/20/2009
Applies to: zsh
An instance of zsh begins by executing commands from the system file /etc/zshenv, and then from ~/.zshenv. Login instances then execute commands from the system file /etc/zprofile, and then from ~/.zprofile. Interactive instances (those not created specifically to execute a shell script) then execute commands from the system file /etc/zshrc, and then from ~/.zshrc. Finally, login instances execute commands from the system file /etc/zlogin, and then from ~/.zlogin. All instances then execute commands in the specified shell script file or from the interactive command line. When a login shell terminates normally, it executes commands from ~/.zlogout and then from the system file /etc/zlogout.
Almost all of this behavior, and all of these file locations, can be altered via options on the command line used to invoke zsh, or by environment variables. Also, like bash, zsh follows the startup sequence of sh or ksh instead, if invoked by the verb "sh" or "ksh" (via a Unix link or by renaming the zsh executable file to "sh" or "ksh").
See the zsh man page for more details.
--Fred
Original Version: 11/4/2001
Last Updated: 4/24/2009
Applies to: All shells, All Unix flavors
There are several different Unix flavors (BSD, System V, Solaris, HP/UX, Linux, Mac OS X, BSDI, SCO Unix, IBM AIX, etc.). Each is a different version of the operating system, perhaps written by a different set of authors, but intended to be compatible, or perhaps derived from each other.
You can run any of the Unix shells (sh, csh, tcsh, ksh, bash, zsh, etc.) under any Unix flavor. Each flavor ships with all of the shells, and with similar, but not quite identical, sets of programs (cc, make, touch, more, etc.). Any Unix program can be launched from any of the shells. Once running, it is under the control of the Unix flavor, not the shell, so a Unix program should behave the same with all shells, but not necessarily the same with all flavors.
The name Unix is a reaction to the Multics system of the time, which was considered too complex. System V (System Five) was written by AT&T, presumably as a follow-on to Systems I, II, III, and IV. BSD (Berkeley Software Distribution, also known as Berkeley Systems Distribution) originated at UCB (University of California at Berkeley). For many years, these were the only two flavors. The source code for Unix was freely distributed, but support was not always provided.
Eventually, companies started producing, selling, and offering support for their own versions. Solaris is a System V derivative from Sun Microsystems. HP/UX is a System V derivative from Hewlett Packard. IBM AIX is based on System V with some BSD extensions. SCO Unix (derived from System V) and BSDI (obviously a BSD derivative) are ports to the IBM PC (Intel x86). Apple's Mac OS X is based on BSD Unix. And everyone's heard of GNU/Linux, the wildly popular free version, with the Linux kernel written by Linus Torvalds, and the GNU utilities provided by the Free Software Foundation.
For more info, see: http://www.unix-systems.org/what_is_unix/flavors_of_unix.html
--Fred
Original Version: 2/23/2010
Last Updated: 2/23/2010
Applies to: All shells, All Unix flavors
logwatch is a security tool that monitors various system log files, detects break-in attempts, and sends you a daily e-mail message summarizing them.
It was already installed, configured and running on the Linux Fedora Core 8 server instance I launched at Amazon Web Services, so I've never had to learn much about it.
I've been amazed at the number and variety of attempts made on my server. On previous Linux servers, I used to occasionally check the log files manually and saw lots of attacks, but rarely counted them up. An automated tool like logwatch that generates a daily report really reveals the intensity of the constant barrage of attacks. They occur continuously, around the clock, starting within minutes of when you connect to the Internet. My ssh server (for logging in securely), FTP server, e-mail server, etc. all get attacked, with hundreds of different IP addresses each day making tens of thousands of attempts.
It is interesting to note the details of the attempts in the log files. You can see hackers trying to get your Web server to run Windows programs (not a risk on my Linux server), to run Linux programs (disallowed by default by the Apache Web server), to connect to your database server via the default password (which I always change when I install it), and to access a variety of common security holes that you might have opened by installing various packages and leaving default passwords in place. You can also see brute force attacks, trying thousands of usernames and passwords. It is scary to see so many attacks, but gratifying to see them all fail.
So, how to block and not just watch the attacks? See the next tip...
--Fred
Original Version: 2/21/2010
Last Updated: 6/20/2018
Applies to: All shells, All Unix flavors
fail2ban is a security tool that detects and blocks attacks.
For example, with its default configuration, it detects any IP address that tries unsuccessfully to login via ssh more than 3 times in 10 minutes. It prevents any further attempts by immediately updating the iptables firewall to block that IP address for the next 10 minutes.
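The thresholds described above live in the jail config file. Here's a sketch of the relevant settings (section and option names vary between fail2ban versions and distros; the values shown match the defaults described above):

```
[ssh-iptables]
enabled  = true
filter   = sshd
logpath  = /var/log/secure
maxretry = 3
findtime = 600
bantime  = 600
```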
It is very easy to download, install, and configure. Here's what I did:
Version 0.8.4-28 of fail2ban has a bug. It silently stops working as soon as the log file it is watching (typically /var/log/secure) gets rotated. This is typically done by logrotate once per week on Sunday, so if you stop getting fail2ban notifications on the Monday after you updated to a new version, you may be experiencing this bug.
In version 0.8.4-28, fail2ban defaults to using the inotify mechanism for detecting that a log file has been rotated and switching to the new log file. However, there's a bug, so the inotify mechanism does not work correctly. In previous versions (0.8.4-24 at least), the default mechanism was gamin, which still works fine.
The easiest workaround for now is to change your fail2ban config file (typically /etc/fail2ban/jail.conf) to explicitly specify:
backend = gamin
instead of the default:
backend = auto
and tell fail2ban to reload the config file:
% sudo /usr/bin/fail2ban-client reload
For more info, see:
https://github.com/fail2ban/fail2ban/issues/44
Thanks to Mark Tilly for finding the above Web page when he and I were trying to figure out why fail2ban stopped working on his server!
--Fred
Original Version: 8/16/2018
Last Updated: 8/16/2018
Applies to: All shells, All Unix flavors
Be sure that your services log IP addresses, not just DNS hostnames, to your log files. Otherwise, fail2ban may not be able to block attacks on those services.
I tripped across this problem recently. Some attackers had figured out that they were likely to be blocked by fail2ban, and found a way to avoid being blocked. I had to change an option on my FTP service to fix the problem.
Details:
The attackers knew that, by default, some services do a reverse DNS lookup of an IP address to get its fully qualified hostname, and log that hostname in their log files, instead of the IP address. Services do this to make the logs easier for a human to read.
This is not usually a problem for fail2ban. When it finds a troublesome hostname in the log file, it does a forward DNS lookup to get the IP address to be blocked. That generally works fine because hosts typically define both forward and reverse DNS records for the convenience of people and services that access them.
However, when attackers set up a host specifically to attack other computers, they don't really care whether it can be reached. They set up DNS records only to look as legitimate as possible.
In my case, the attackers chose to set up reverse DNS records (IP to hostname) but not forward DNS records (hostname to IP). So when my FTP service detected a failed connection attempt, it did a reverse DNS lookup to get the hostname and logged that hostname to the log file. But when fail2ban noticed a pattern of failed attempts in the log file, it tried to do a forward DNS lookup to get the IP address and ban it. When the lookup failed, it logged a warning and proceeded, without banning the attacking server.
I noticed that my daily logwatch emails started reporting large numbers of unsuccessful logins to my FTP server. So I checked the log files to see why they were not being blocked. When I noticed they all used hostnames, not IP addresses, I tried to do manual nslookup commands on the hostnames and saw that they all failed.
The fix was to set a switch in my FTP server to tell it to just log the IP addresses, not do the reverse DNS lookups and log the hostnames. Specifically, for the vsftpd service, I set the option: reverse_lookup_enable=NO
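That's a one-line change in the vsftpd config file (the path varies by distro):

```
# /etc/vsftpd/vsftpd.conf (or /etc/vsftpd.conf on some distros)
reverse_lookup_enable=NO
```

Remember to restart the vsftpd service afterwards so the new setting takes effect.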
Now it all works fine again! In my daily logwatch emails, the counts of failed attempts have dropped from thousands or more per day to 3 or fewer per day per IP address. And the number of blocks by fail2ban has gone up to hundreds per day. Good riddance!
Why use FTP at all?
You may ask why I even run an FTP server at all. Why not do all file transfers via a secure mechanism like scp or rsync? Good question! I recommend that approach here.
I run an FTP server because I host the web sites of several non-techie family members who still use Windows PCs, and they need a way to push their files to the server. Windows doesn't offer any of the secure tools like scp or rsync that have been invented in the past 20-30 years. It offers only insecure tools from the 1960s and 1970s like Telnet and FTP. This allows Microsoft to claim that communication from Windows computers to non-Windows computers is a security risk. Their proposed solution is to only use Windows. A much better solution is to never use Windows.
--Fred
Original Version: 2/23/2010
Last Updated: 2/28/2010
Applies to: All shells, All Unix flavors
tripwire is a security tool that detects successful break-ins by noticing changes to system files, and mails you a report each day.
It keeps an encrypted database containing info about the files, and uses that to notice changes in file contents and/or file attributes (permissions, etc.). I've been using it on my Linux servers for nearly 10 years. You can configure which files it watches, to avoid getting lots of false alarms if you make lots of intentional changes to system files and don't always remember to tell it to update its database.
When you do update the database, it shows you the changes to each file, and allows you to accept each change you know about, so that future reports show only those you haven't yet accepted.
It is very easy to download, install, and configure. Here's what I did:
Original Version: 2/25/2010
Last Updated: 2/25/2010
Applies to: All shells, All Unix flavors
Port knocking is a security technique whereby a firewall port starts out closed, but is opened automatically if you attempt to access a series of other ports in a specified order. Effectively, you knock on the door of the server with a secret knock to cause it to be opened for you, as they used to do for a "speakeasy" in the Prohibition era of the US. There are a variety of tools like knockd and fwknop that implement this.
For more info, see:
Original Version: 1/11/2013
Last Updated: 1/11/2013
Applies to: All shells, All Unix flavors
If you want to trap an attacker and learn more about him, set up a "honey pot".
Create a separate section of your server that looks like an entire server, but contains only unimportant or fake data. Redirect all attacks on the real server to the honey pot. Allow the attacker to think he succeeded in breaking in. Then study what he does, feed him fake data and see what he does with it, track him to his home, and take him down.
This is a great way to passively defend a real server. But if you like you can also go on the offensive, actively luring hackers to your honey pot just for the purpose of exposing, studying and attacking them.
Set up a honey pot that seems very easy to break into, or one that offers a service that a hacker would value, like an open mail relay so he thinks he can use it to send spam. Advertise it subtly, by posting innocent sounding questions and comments to blogs, forums, and user groups that are frequented by hackers. Make it clear that you are setting up a server and are clueless about security.
In the honeypot, track the other resources used by the spammer/hacker (other mail relays, drop boxes, servers, etc.). Study the techniques used by the spammer to avoid spam filters, and the techniques used by the hacker to break into other systems. Then block those techniques and report them to the FBI, CERT, and other authorities.
For more info, see:
Original Version: 10/31/2010
Last Updated: 11/5/2010
Applies to: All shells, All Unix flavors
The su command allows you to temporarily become a different user, so you can access files and other resources as that user. It is most commonly used without any arguments as:
su
which sets you to the root user after prompting you for the root password. It can also be used to set you to another user, as:
su tomcat
which sets you to the tomcat user after prompting you for the tomcat password.
su -m
su -m tomcat
This is extremely lightweight. No environment variables are changed, and the launched shell is another instance of the shell you were already running rather than the target user's default shell. Personally, I find this handy enough that I aliased su to su -m years ago and have never looked back. (For more info about aliases, see Aliases.)
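For reference, that alias is one line in a shell startup file (shown in both bash and csh syntax; see the startup-files tips above):

```
alias su='su -m'      # bash/ksh/zsh syntax, e.g. in ~/.bashrc
alias su 'su -m'      # csh/tcsh syntax, e.g. in ~/.cshrc
```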
su -
su - tomcat
su -l
su -l tomcat
This is much more heavyweight. It clears all environment variables, sets HOME, SHELL, USER and LOGNAME as described above (even for root), sets PATH to a short safe value, and changes to the target user's home directory. The launched shell is a login shell, so the system-wide and target-user-specific login startup files are executed. (For more info about startup files, see "sh Startup Files", "csh Startup Files", "bash Startup Files", "ksh Startup Files", "zsh Startup Files", etc.)
su -l -s /bin/bash tomcat
It is generally a good idea to spend most of your time logged in as a regular user, so you don't accidentally do something harmful to the system. When you occasionally need root access, you can use su to temporarily gain that access, do the privileged operations, and then exit su via exit or Ctrl-D.
man su
Having said all that, there are a couple of problems with su, making it generally better to use sudo instead. See the next tip...
Original Version: 10/31/2010
Last Updated: 11/9/2010
Applies to: All shells, All Unix flavors
The sudo command is similar to su. They both allow you to temporarily become a different user, so you can access files and other resources as that user. However, sudo is better than su in a couple of ways.
The biggest advantage is that sudo doesn't require you to enter the password of the root or other target user. The system administrator can configure the /etc/sudoers file to allow specific users to run specific commands as specific other users. Once you have been authorized via the /etc/sudoers file, you can type:
sudo command1
to execute command1 as the root user or:
sudo -u user1 command1
to execute command1 as the user1 user. Some examples:
sudo ls
sudo cat /etc/ssh/sshd_config
sudo -u tomcat cp app.war /usr/local/tomcat/webapps
When you use sudo, it prompts for your password (not the target user's password), checks the /etc/sudoers file to make sure you are allowed to run that command as that user, runs the command, and logs the fact that you ran it. Thus, the system administrator doesn't have to give the root password to anyone, and there is much more control over who can do what, and much more accountability for who actually did what and when they did it.
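As a sketch of what such authorization looks like, here are hypothetical /etc/sudoers entries (the username "fred" and the commands are examples; always edit this file with visudo, which checks the syntax before saving):

```
# Allow fred to run any command as any user:
fred    ALL=(ALL)       ALL
# Allow fred to run only cp, and only as the tomcat user:
fred    ALL=(tomcat)    /bin/cp
```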
man sudo
man sudoers
Thanks to Amul Shah for reminding me of the sudoers man page, and pointing out the need for emergency access via the console! Thanks to Linda Swyderski for pointing out the Windows "runas" command, which is similar to sudo.
Original Version: 10/31/2010
Last Updated: 11/20/2010
Applies to: All shells, All Unix flavors
The ssh command is similar to telnet. They both allow you to login to a remote system. ssh can also be used like rsh to execute a single command on a remote system. Furthermore, ssh can be used as a transport for other commands, such as scp and rsync, to copy files from one system to another like rcp and ftp. The advantage of ssh (and commands like scp, rsync, etc., that use it) over telnet, rsh, rcp, and ftp is that ssh encrypts all transmissions for security, including the initial username and password. Finally, ssh can be used to set up encrypted "tunnels" for other protocols like HTTP.
Examples:
ssh bristle.com
ssh fred@bristle.com
ssh bristle.com ls /etc
ssh fred@bristle.com ls /etc
ssh can be configured to allow all or only specific users to access the system, to allow access via passwords or via encrypted keys, etc. If a client computer stores a private key, and a server computer stores the corresponding public key, any user on the client computer with access to the private key can access the server computer without a password. This can be very convenient for automated processes and frequent tasks. They can run without a user having to enter the password each time.
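As a sketch of the key-based setup, you can generate a key pair locally and push the public half to the server (the standard ssh-copy-id command, or the authorize_ssh_key script in a later tip, automates the push; the /tmp filenames here are invented for the demo):

```shell
# Generate a demo key pair non-interactively (empty passphrase; demo only):
ssh-keygen -t rsa -N '' -f /tmp/demo_key -q
# The public half is what gets appended to the server's
# ~/.ssh/authorized_keys file:
cat /tmp/demo_key.pub
# ssh-keygen -y re-derives the public key from the private key:
ssh-keygen -y -f /tmp/demo_key
```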
man ssh
To access an ssh server from Microsoft Windows, see:
http://bristle.com/Tips/Windows.htm#putty
--Fred
Original Version: 1/30/2011
Last Updated: 3/22/2021
Applies to: All shells, All Unix flavors
For convenience, you may want to create short ssh aliases to access your most common ssh target hosts. For example, I frequently ssh to the host trident.bristle.com, and don't want to always have to type:
ssh trident.bristle.com
Therefore, I created an ssh alias trident, by adding the following lines to my ~/.ssh/config file (which must be read/write by owner and inaccessible to all others -- chmod 600):
Host trident
    HostName trident.bristle.com
Now, to login to that host, I simply type:
ssh trident
Or to run a single remote command, I type:
ssh trident command
For example:
ssh trident ls /etc
ssh trident cat /etc/passwd
ssh trident sudo cat /etc/ssh/sshd_config
This is really convenient for quick in-and-outs. For example, I manage several servers, and have monitoring software running on them that sends me messages about various things. When I get a message that says some outgoing e-mails have been rejected, I can run my mailerrs script remotely on trident as:
ssh trident mailerrs
to confirm that the errors were transient and the e-mails were eventually sent. No need to manually login, run the command, and log out.
You can also specify lots of other info on a per-host basis in your ~/.ssh/config file:
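For example, a fuller per-host entry might look like this (all values hypothetical):

```
# ~/.ssh/config
Host trident
    HostName trident.bristle.com
    User fred
    Port 22
    IdentityFile ~/.ssh/id_rsa
    ServerAliveInterval 60
```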
For more info:
man ssh_config
--Fred
Original Version: 10/31/2010
Last Updated: 12/3/2010
Applies to: All shells, All Unix flavors
The combination of sudo and ssh is very powerful. You can use ssh to execute a sudo command on a remote system, with the ability to be prompted for the sudo password. However, to avoid errors like:
sudo: sorry, you must have a tty to run sudo
be sure to use the -t option of ssh, as:
ssh -t user@host sudo cat /etc/ssh/sshd_config
However, if you use the bash shell on the remote system, you may instead have to do:
ssh -t user@host "bash -ic 'sudo cat /etc/ssh/sshd_config'"
to make sure your .bashrc file runs. (The quotes are needed so that the remote bash receives the whole sudo command as a single -c argument.) See details at Use scripts for frequent ssh access to bash.
man sudo
man ssh
--Fred
Original Version: 11/7/2010
Last Updated: 3/25/2021
Applies to: All shells, All Unix flavors
Here's a script to help you create and manage your public and private ssh keys, and to push your public key to a remote host.
#!/bin/csh -f
# authorize_ssh_key
# -----------------------------------------------------------------------------
# Shell script to add the default or specified public RSA SSH key to the
# authorized_keys file of the specified user@host, so that the user can
# login via ssh w/o specifying a password.  Offers to create the RSA key
# file pair if it doesn't already exist.  Offers to create the RSA public
# key from the specified RSA private key if it doesn't already exist.
# -----------------------------------------------------------------------------
# Usage:  See Usage section below or run with -h or --help to see usage.
# Assumptions:
# Effects:
# - Updates the remote authorized_keys file.
# Notes:
# - Thanks to JP Vossen for pointing out that this is essentially the
#   same functionality as the existing Linux command ssh-copy-id.
#   I'm not sure if that already existed when I wrote this on 10/31/2010.
#   If so, I wasn't aware of it.  I haven't ever compared them to see how
#   similar they are.
# Implementation Notes:
# Portability Issues:
# Revision History:
#   $Log$
# -----------------------------------------------------------------------------

if ($#argv == 0 || "$1" == "-h" || "$1" == "--help") then
    echo "Usage:"
    echo "  $0:t [-f rsa_public_key_file] [user@]host"
    echo "Examples:"
    echo "  $0:t bristle.com"
    echo "  $0:t fred@bristle.com"
    echo "  $0:t -f fred_public_key_file bristle.com"
    echo "  $0:t -f fred_public_key_file fred@bristle.com"
    exit 1
endif

# Get and check options
set key = ~/.ssh/id_rsa    # No quotes, so ~ will be expanded
if ($1:q == "-f") then
    set key = $2:q
    shift
    shift
endif

# Determine the name of the public key to assume for now
if ("${key:e}" == "pub") then
    set public_key = "${key}"
else
    if (-e "${key}.pub") then
        set public_key = "${key}.pub"
    else
        set public_key = "${key}"
    endif
endif

# Create the public key if missing
if (-e "${public_key}") then
    echo "Public key ${public_key} found."
else
    echo "Public key ${public_key} not found."
    if ("${public_key:e}" == "pub") then
        set private_key = "${public_key:r}"
        if (-e "${private_key}") then
            echo "Private key ${private_key} found."
            set reply = `promptloop "Create public key from private key (y/n)? " y n`
            if ($reply == "y") then
                echo "Creating public key..."
                ssh-keygen -y -f ${private_key} > ${public_key}
                set rc = $status
                if ($rc != 0) then
                    beep "Error creating public key."
                    exit $rc
                endif
            else
                beep "No public key found or created."
                exit 1
            endif
        else
            beep "Private key ${private_key} not found."
            exit 1
        endif
    else
        set private_key = "${public_key}"
        set public_key = "${public_key}.pub"
        set prompt = "Create new key pair ${private_key}, ${public_key} (y/n)? "
        set reply = `promptloop "${prompt}" y n`
        if ($reply == "y") then
            echo "Creating key pair..."
            ssh-keygen -t rsa -f ${private_key}
            set rc = $status
            if ($rc != 0) then
                beep "Error creating key pair."
                exit $rc
            endif
        else
            beep "No key pair created."
            exit 1
        endif
    endif
endif

# Add the public key to the authorized_keys file
echo ""
echo "Pushing the public key to $1."
echo "You may be prompted for the $1 password a couple times."
echo "ssh $1 mkdir -v -p .ssh"
ssh $1 mkdir -v -p .ssh
echo "ssh $1 touch .ssh/authorized_keys"
ssh $1 touch .ssh/authorized_keys
echo "cat ${public_key} | ssh $1 'cat >> .ssh/authorized_keys'"
cat ${public_key} | ssh $1 'cat >> .ssh/authorized_keys'
echo "ssh $1 chmod g-w,o-w .ssh"
ssh $1 chmod g-w,o-w .ssh
echo "ssh $1 chmod g-w,o-w .ssh/authorized_keys"
ssh $1 chmod g-w,o-w .ssh/authorized_keys
echo "Done pushing the public key to $1."
echo ""
echo "You should be able to ssh to $1 with no password from now on."
echo "If your private key is not in the default location (~/.ssh/id_rsa),"
echo "you'll have to specify the -i option to tell ssh where to find it."
For the very latest version that I use regularly, see:
This script requires the following additional scripts:
--Fred
Original Version: 11/7/2010
Last Updated: 1/30/2011
Applies to: All shells, All Unix flavors
For even more convenience than ssh aliases, you may want to create short scripts to access your most common ssh target hosts. For example, I frequently ssh to the host trident.bristle.com, and don't want to always have to type:
ssh -t trident.bristle.com
or even (using my ssh alias):
ssh -t trident
Therefore, I created a script called trident:
#!/bin/csh -f
settitle trident
ssh -t trident.bristle.com $*
settitle -h
(In this script "settitle" is another script I use to set the displayed title of the current window on my local machine, so I can easily see which remote host I am logged into.)
Now, to login to that host, I simply type:
trident
Or to run a single remote command, I type:
trident command
For example:
trident ls /etc
trident cat /etc/passwd
trident sudo cat /etc/ssh/sshd_config
This is really convenient for quick in-and-outs. For example, I manage several servers, and have monitoring software running on them that sends me messages about various things. When I get a message that says some outgoing e-mails have been rejected, I can run my mailerrs script remotely on trident as:
trident mailerrs
to confirm that the errors were transient and the e-mails were eventually sent. No need to manually login, run the command, and log out.
--Fred
Original Version: 12/3/2010
Last Updated: 12/3/2010
Applies to: bash shell, All Unix flavors
With the bash shell, there's one further complication. If you are connecting to a remote system where your default shell is bash, and you want your .bashrc file to run before the single remote command you specify (to set environment variables, aliases, etc.), change the trident script to:
#!/bin/csh -f
settitle trident
if ($#argv == 0) then
    ssh -t trident.bristle.com
else
    ssh -t trident.bristle.com bash -ic $*
endif
settitle -h
Here's why...
Using ssh without -t:
When you login via ssh, w/o specifying a command, you get a "login shell", so .bash_profile runs. By default, .bash_profile is configured to run .bashrc, so .bashrc also runs.
When you login via ssh specifying a command, you do not get a "login shell", so .bash_profile does not run and does not run .bashrc. However, you get an "interactive shell" because stdin is a socket. Therefore, .bashrc is invoked directly.
Using ssh with -t:
When you login via ssh, w/o specifying a command, everything is exactly as without -t -- you get a "login shell", so .bash_profile runs, and it runs .bashrc.
However, when you login via ssh specifying a command, you do not get a "login shell", so .bash_profile does not run and does not run .bashrc. Furthermore, because of the -t, stdin is a "pseudo-tty", not a socket, so you do not get an "interactive shell". Therefore, .bashrc is not invoked.
Solution:
Instead of:
ssh -t trident.bristle.com command
use:
ssh -t trident.bristle.com bash -ic command
Now, you're telling ssh to run bash, instead of command, and -c is telling bash to run command and then exit, and -i is telling bash to run an interactive shell, so it runs .bashrc first.
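You can see the effect of -i locally. This sketch uses --rcfile with a throwaway file so it doesn't depend on your real .bashrc:

```shell
# An rc file is only read by interactive shells, i.e. those run with -i.
echo 'GREETING=hello_from_rc' > /tmp/demo_rc
bash --rcfile /tmp/demo_rc -c  'echo ${GREETING:-unset}'   # prints: unset
bash --rcfile /tmp/demo_rc -ic 'echo ${GREETING:-unset}'   # prints: hello_from_rc
```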
Lots of details from Chet Ramey, the current maintainer of bash, in his replies to a question at:
http://www.mail-archive.com/bug-bash@gnu.org/msg03492.html
For more info on bash startup files, see:
bash Startup Files
Thanks to Chris Hunter for reminding me to not specify bash -ic in the script when there are no arguments, and for inspiring me to look into this whole bash issue in the first place!
--Fred
Original Version: 6/27/2012
Last Updated: 6/27/2012
Applies to: All shells, All Unix flavors
telnet was an early command used to login from one computer to another, but it had security problems. Everything you typed, including your username and password, was sent across the wire in plain text, with no encryption. To login to a remote computer now, use ssh. See ssh - Secure Shell.
telnet is also useful for debugging network problems. For example, you can connect directly to port 25 on a remote machine, as though you were an SMTP mail client. Then you can type in the commands of the SMTP protocol (MAIL FROM, RCPT, etc.) and see the responses sent by the SMTP server. For security reasons, telnet may not even be installed on your Unix or Linux system. In that case, use the more modern nc (netcat) for such debugging.
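For example, a hand-typed SMTP session might look like this (the hostname and the server's responses are illustrative only; the numeric reply codes are standard SMTP):

```
% telnet mail.example.com 25        (or: nc mail.example.com 25)
220 mail.example.com ESMTP
HELO client.example.com
250 mail.example.com
MAIL FROM:<fred@example.com>
250 Ok
RCPT TO:<someone@example.com>
250 Ok
QUIT
221 Bye
```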
--Fred
Original Version: 6/27/2012
Last Updated: 6/27/2012
Applies to: All shells, All Unix flavors
The rlogin command is obsolete. It was an early command used to login from one computer to another, with the convenience of not having to specify your password once you configured one computer to trust incoming connections from another computer. However, it had security problems. Everything you typed was sent across the wire in plain text, with no encryption. It may not even be installed on your Unix or Linux system. If it is, don't use it. Use ssh instead, with all the same convenience features and none of the security issues. See ssh - Secure Shell.
--Fred
Original Version: 6/27/2012
Last Updated: 6/27/2012
Applies to: All shells, All Unix flavors
The rsh command is obsolete. It was an early command used to execute commands remotely on one computer from another computer, defaulting to logging you in for an interactive session via rlogin if you didn't specify a command. However, it had security problems. Everything you typed was sent across the wire in plain text, with no encryption. It may not even be installed on your Unix or Linux system. If it is, don't use it. Use ssh instead, with all the same convenience features and none of the security issues. See ssh - Secure Shell.
--Fred
Original Version: 6/27/2012
Last Updated: 6/27/2012
Applies to: All shells, All Unix flavors
See ssh - Secure Shell.
--Fred
Original Version: 6/27/2012
Last Updated: 6/27/2012
Applies to: All shells, All Unix flavors
ftp was an early command used to copy files from one computer to another, but it had security problems. Everything you typed, including your username and password, was sent across the wire in plain text, with no encryption. To copy files to a remote computer now, use scp or rsync. See scp - Secure Copy and rsync - Advanced File Copying.
--Fred
Original Version: 6/27/2012
Last Updated: 6/27/2012
Applies to: All shells, All Unix flavors
See rcp - Remote Copy.
--Fred
Original Version: 6/27/2012
Last Updated: 6/27/2012
Applies to: All shells, All Unix flavors
See scp - Secure Copy.
--Fred
Original Version: 6/27/2012
Last Updated: 6/27/2012
Applies to: All shells, All Unix flavors
See rsync - Advanced File Copying.
--Fred
Original Version: 6/27/2012
Last Updated: 6/27/2012
Applies to: All shells, All Unix flavors
The nc (netcat) command is a network version of the cat command (cat - View, copy, append and create files). That is, it "catenates" (concatenates, copies, appends, creates, etc.) a stream of raw data, but it does so from one computer to another.
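For example, here's a sketch of copying a file from one machine to another with nc (the hostname and port are invented; note that the traditional netcat variant spells the listen option "-l -p 9999" instead of "-l 9999"):

```
# On the receiving machine, listen and write whatever arrives to a file:
% nc -l 9999 > backup.tar
# On the sending machine, connect and stream the file across:
% nc receiver.example.com 9999 < backup.tar
```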
Here's an excellent article describing some of the things that are possible with nc:
Thanks to Sonny To for pointing me to the article!
See man nc for more details.
--Fred