In all honesty, I always thought the “Windows Key” on my keyboard was just a
nuisance. It was just the thing I accidentally hit once in a while and the darn Start
menu would pop up. Turns out, I was wrong; it has a lot of uses.
When used in combination with other keys, here’s what you can do:
Windows Key + Tab: Cycle through the buttons in the Task Bar.
Windows Key + D: Minimize or restore all windows
Windows Key + E: Launch Windows Explorer
Windows Key + F: Launch Search for Files
Windows Key + Ctrl + F: Launch Search for Computers
Windows Key + F1: Launch the Help and Support Center
Windows Key + R: Launch the Run dialog box
Windows Key + Pause/Break: Launch System Properties dialog box
Windows Key + M: Minimize all open windows
Windows Key + Shift + M: Undo minimize all windows
Windows Key + L: Locks the workstation
Windows Key + U: Launch the Utility Manager
Windows Key + Ctrl + Tab: According to Microsoft: Moves focus from Start, to
the Quick Launch toolbar, to the system tray.
Nobody really knows what the Bourne shell's grammar is. Even
examination of the source code is little help.
Tom Duff
Please visit the Tips & Tricks section of SHELLdorado as well as the
SHELLdorado Newsletter Archive, which contain a lot of useful tips.
Most shell scripts are quick 'n dirty solutions to non-complex problems.
Therefore the first and most important tip I can give is: don't put too much
zeal into optimization. Optimizing scripts for speed is usually a bad idea.
If a script performs an important task but runs too slowly, convert it to a
scripting language such as Perl. That is especially prudent if the script has
nested loops: time consumed by repetitive operations adds up quickly.
Use the time and times tools to profile computation-intensive commands.
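For example, to see how much a repeated operation really costs, wrap it in a
loop and time it (a minimal sketch; the loop count and commands are arbitrary):

time for i in $(seq 1 1000); do /bin/echo -n; done   # external command
time for i in $(seq 1 1000); do echo -n; done        # shell built-in
times   # user/system time consumed by the shell and its children so far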
Bash is not particularly efficient at handling files, so consider using
more appropriate tools for this within the script, such as awk or Perl.
Unless you know, or are willing to learn, Perl, consider Awk: it is a much
underappreciated utility that can and should be used more widely in shell
scripts. That's probably the second most important tip I can give.
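To give just a taste, this one-liner (a sketch; the field number assumes
standard ls -l output) sums the sizes of regular files in the current directory:

ls -l | awk '/^-/ { total += $5 } END { print total " bytes" }'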
Try to write your scripts in a structured, coherent form, so they can
be reorganized and reused as necessary. Borrow from somebody and use a standard
header that explains the purpose of the script, and document the changes that
you made. Above all, use common sense.
The problem with bash is that it is pretty baroque: there are just too many
features that you need to remember, this command and that command and so on
to infinity. It is difficult to remember the details of each one. Fortunately,
bash has an on-line help feature
that provides basic information about most of its built-in commands. To
see the help description for a particular command, enter
help command
(for example, help alias) at the bash UNIX prompt. To see a list
of bash commands for which help is available, type help at the bash
UNIX prompt. You may access the manual page for bash by entering man
bash at a UNIX prompt, but beware, it is 60 pages long and not very
readable.
With bash 3.x, you can reissue commands like in C-shell using arrow keys
and use Ctrl-r to browse command
history. (In emacs mode you can also use
CTRL-p and
CTRL-n)
Bash also supports "file
name completion" which, if not abused, can save some typing.
Like any decent shell, Bash also allows you to define aliases, and you
should avoid retyping the same command twice, not only by browsing the history
but also by defining aliases from it. But remember that too many aliases
are counterproductive; limit your repertoire to a dozen or so. For things like
browsing /var/log/messages
or /var/adm/messages it's better
to define functions, which are a more powerful tool than aliases.
Use a separate dot file with functions and aliases, for example an
.aliases file sourced from .bashrc, as sketched below.
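For example (a minimal sketch; file names and log locations vary by system):

# ~/.aliases -- a function beats an alias because it can take arguments
msgs () {
    # Browse the system log; fall back to the Solaris location.
    if [ -f /var/log/messages ]; then
        tail "$@" /var/log/messages
    else
        tail "$@" /var/adm/messages
    fi
}

# ~/.bashrc -- pull in the dot file if it exists
if [ -f ~/.aliases ]; then
    . ~/.aliases
fi

Now msgs shows the end of the log and msgs -f follows it; an alias cannot
place its arguments in the middle of a command the way a function can.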
This section describes another of the more unusual commands in the
shell: eval. Its format is as follows:
eval command-line
where command-line is a normal command line that you would type at
the terminal. When you put eval in front of it, however, the
net effect is that the shell scans the command line twice before executing
it.[1] For the simple case, this
really has no effect:
[1] Actually, what happens
is that eval simply executes the command passed to it as
arguments; so the shell processes the command line when passing
the arguments to eval, and then once again when eval
executes the command. The net result is that the command line is
scanned twice by the shell.
$ eval echo hello
hello
$
But consider the following example without the use of eval:
$ pipe="|"
$ ls $pipe wc -l
|: No such file or directory
wc: No such file or directory
-l: No such file or directory
$
Those errors come from ls. The shell takes care of pipes
and I/O redirection before variable substitution, so it never recognizes
the pipe symbol inside pipe. The result is that the three arguments
|, wc, and -l are passed to ls as
arguments.
Putting eval in front of the command sequence gives the
desired results:
$ eval ls $pipe wc -l
16
$
The first time the shell scans the command line, it substitutes
| as the value of pipe. Then eval causes
it to rescan the line, at which point the | is recognized by
the shell as the pipe symbol.
The eval command is frequently used in shell programs that
build up command lines inside one or more variables. If the variables
contain any characters that must be seen by the shell directly on the
command line (that is, not as the result of substitution), eval
can be useful. Command terminators (;, |, &),
I/O redirection (<, >), and quote characters are among
the characters that must appear directly on the command line to have
any special meaning to the shell.
For the next example, consider writing a program last whose
sole purpose is to display the last argument passed to it. You needed
to get at the last argument in the mycp program in
Chapter 10, "Reading and Printing Data." There you did so by shifting
all the arguments until the last one was left. You can also use
eval to get at it as shown:
$ cat last
eval echo \$$#
$ last one two three four
four
$ last * Get the last file
zoo_report
$
The first time the shell scans
echo \$$#
the backslash tells it to ignore the $ that immediately
follows. After that, it encounters the special parameter $#,
so it substitutes its value on the command line. The command now looks
like this:
echo $4
(the backslash is removed by the shell after the first scan). When
the shell rescans this line, it substitutes the value of $4
and then executes echo.
This same technique could be used if you had a variable called
arg that contained a digit, for example, and you wanted to
display the positional parameter referenced by arg. You could
simply write
eval echo \$$arg
The only problem is that just the first nine positional parameters
can be accessed this way; to access positional parameters 10 and greater,
you must use the ${n} construct:
eval echo \${$arg}
Here's how the eval command can be used to effectively create
"pointers" to variables:
$ x=100
$ ptrx=x
$ eval echo \$$ptrx Dereference ptrx
100
$ eval $ptrx=50 Store 50 in var that ptrx points to
$ echo $x See what happened
50
$
A common eval use is to build a dynamic string containing valid
Unix commands and then use eval to execute the string. Why
do we need eval? Often, you can build a command that doesn't
require eval:
evalstr="myexecutable"
$evalstr # execute the command string
However, chances are the above command won't work if "myexecutable"
requires command-line arguments. That's where eval comes in.
Our man page says that the arguments to the eval command
are "read as input to the shell and the resulting commands executed".
What does that mean? Think of it as the eval command forcing
a second pass so the string's arguments become the arguments of the
spawned child shell.
In a previous column, we built a dynamic sed command that
skipped 3 header lines, printed 5 lines, and skipped 3 more lines until
the end of the file:
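(The command itself was not reproduced here; the following is a sketch of the
idea, with an illustrative file name.)

skip=3      # header lines to skip
print=5     # lines to print after the header
sedcmd="sed -n '$((skip+1)),$((skip+print))p' datafile"
eval $sedcmd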
This command fails without eval. When the sed command
executes in the child shell, eval forces the remainder of the
string to become arguments to the child.
Possibly the coolest eval use is building dynamic Unix shell
variables. The following stub script dynamically creates shell variables
user1 and user2 setting them equal to the strings John and Ed, respectively:
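The stub did not survive in this compilation; here is a sketch that does the
same thing (only the names user1, user2, John, and Ed come from the description):

#!/bin/bash
i=1
for name in John Ed
do
    eval user${i}="$name"   # second scan turns this into user1=John, then user2=Ed
    i=$((i + 1))
done
echo "$user1 $user2"        # prints: John Ed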
Another novice asked how to line up three files line by line, sending
the output to another file. Given the following:
file1:
1
2
3
file2:
a
b
c
file3:
7
8
9
the output file should look like this:
1a7
2b8
3c9
The paste command is a ready-made solution:
paste file1 file2 file3
By default, the delimiter character between the columns is a tab character.
The paste command provides a -d delimiter option. Everything
after -d is treated as a list. For example, this paste rendition
uses the pipe symbol and ampersand characters as a list:
paste -d"|&" file1 file2 file3
The command produces this output:
1|a&7
2|b&8
3|c&9
The pipe symbol character, |, is used between columns 1 and 2, while
the ampersand, &, separates columns 2 and 3. If the list is completely
used up and the paste command contains more file arguments, then paste
starts over at the beginning of the list.
To satisfy our original requirement, paste provides a null character,
\0, signifying no character. To prevent the shell from interpreting
the character, it must also be quoted:
paste -d"\0" file1 file2 file3
Process a String One Character at a Time
Still another user asked how to process a string in a shell script
one character at a time. Certainly, advanced scripting languages such
as Perl and Ruby can solve this problem, but the cut command's
-b option, which specifies the byte position, is a simple alternative:
#!/bin/ksh
mystring="teststring"
length=${#mystring}
count=0
until [ $count -eq $length ]
do
    ((count+=1))
    char=$(echo $mystring|cut -b"$count")
    echo $char
done
In the stub above, string mystring's length is determined using the
parameter-expansion capabilities of the bash and ksh shells. Any
number of external Unix commands can provide a string length, but probably
the command with the smallest footprint is expr:
length=$(expr "$mystring" : '.*')
Also, the bash shell contains a substring expansion parameter:
${parameter:offset:length}
According to the bash man page, the substring expansion expands "up
to length characters of parameter starting at the character specified
by offset". Note that the offset starts counting from zero:
#!/bin/bash
mystring="teststring"
length=${#mystring}
ol=1        # how many characters to print at a time
offset=0
until [ $offset -eq $length ]
do
    echo "${mystring:${offset}:${ol}}"
    ((offset+=1))
done
# end script
Deleting a File Named dash
Finally, a novice inadvertently created a file named with the single
character dash, and asked us how to delete the file. No matter how he
escaped the dash in the rm command, it still was considered an
rm option.
It's easy enough to create the file using the touch
command:
touch -
To remove it, use a path to the file -- either full or relative. Assuming
the dash file exists in the mydir directory, provide a full path to
the file:
rm /pathto/mydir/-
Or if the file exists in the current directory, provide a relative path:
rm ./-
Of course, our old friend find can clobber that file
everywhere:
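(The find command was omitted above; a conservative version looks like this.)

find . -name '-' -exec rm {} \;     # GNU find also accepts -delete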
Shell scripts can be powerful tools for writing software.
Graphical interfaces notwithstanding, they are capable of performing
nearly any task that could be performed with a more traditional
language. This chapter describes several techniques that will help
you write more complex software using shell scripts.
“Background Jobs and Job Control” explains how to do
complex tasks in the background while your script continues to
execute, including how to perform some basic parallel
computation. It also explains how to obtain the result codes
from these jobs after they exit.
“Networking With Shell Scripts” describes how to use the
nc tool (otherwise known as netcat) to write shell
scripts that take advantage of TCP/IP sockets.
Once upon a time, Unix had only one shell, the Bourne shell, and
when a script was written, the shell read the script and executed the
commands. Then another shell appeared, and another. Each shell had its
own syntax and some, like the C shell, were very different from the
original. This meant that if a script took advantage of the features
of one shell or another, it had to be run using that shell. Instead
of typing: doit
The user had to know to type: /bin/ksh doit
or: /bin/csh doit
To remedy this, a clever change was made to the Unix kernel -- now
a script can be written beginning with a hash-bang (#!)
combination on the first line, followed by a shell that executes the
script. As an example, take a look at the following script, named
doit: #! /bin/ksh
#
# do some script here
#
In this example, the kernel reads in the script doit,
sees the hash-bang, and continues reading the rest of the line, where
it finds /bin/ksh. The kernel then starts the Korn shell
with doit as an argument and feeds it the script, as if
the following command had been issued: /bin/ksh doit
When /bin/ksh begins reading in the script, it sees
the hash-bang in the first line as a comment (because it starts with
a hash) and ignores it. To be run, the full path to the shell is required,
as the kernel does not search your PATH variable. The hash-bang
handler in the kernel does more than just run an alternate shell; it
actually takes the argument following the hash-bang and uses it as a
command, then adds the name of the file as an argument to that command.
You could start a Perl script named doperl by using
the hash-bang: #! /bin/perl
# do some perl script here
If you begin by typing doperl, the kernel spots the
hash-bang, extracts the /bin/perl command, then runs it
as if you had typed: /bin/perl doperl
There are two mechanisms in play that allow this to work. The first
is the kernel interpretation of the hash-bang; the second is that Perl
sees the first line as a comment and ignores it. This technique will
not work for scripting languages that fail to treat lines starting with
a hash as a comment; in those cases, it will most likely cause an error.
You needn't limit your use of this method to running scripts either,
although that is where it's most useful.
The following script, named helpme, types itself to
the terminal when you enter the command helpme: #! /bin/cat
vi unix editor
man manual pages
sh Bourne Shell
ksh Korn Shell
csh C Shell
bash Bourne Again Shell
The kernel will also pass at most one argument given after the command
name on the #! line. To hide the first line, change the file to use more
by starting at line 2, but be sure to use the correct path: #! /bin/more +2
vi unix editor
man manual pages
sh Bourne Shell
ksh Korn Shell
csh C Shell
bash Bourne Again Shell
Typing helpme as a command causes the kernel to convert
this to: /bin/more +2 helpme
Everything from line 2 onward is displayed: helpme
vi unix editor
man manual pages
sh Bourne Shell
ksh Korn Shell
csh C Shell
bash Bourne Again Shell
etc.
You can also use this technique to create apparently useless scripts,
such as a file that removes itself: #! /bin/rm
If you named this file flagged, running it would cause
the command to be issued as if you had typed: /bin/rm flagged
You could use this in a script to indicate that you are running something,
then execute the script to remove it: #! /bin/ksh
# first refuse to run if the flagged file exists
if [ -f flagged ]
then
    exit
fi
# create the flag file
echo "#! /bin/rm" >flagged
chmod a+x flagged
# do some logic here
# unflag the process by executing the flag file
flagged
Before you begin building long commands with this technique, keep
in mind that systems often have an upper limit (typically 32 characters)
on the length of the code in the #! line.
Testing command line arguments and usage
When you write a shell script, arguments are commonly needed for
it to function properly. In order to ensure that those arguments make
sense, it's often necessary to validate them.
Testing for enough arguments is the easiest method of validation.
For example, if you've created a shell script that requires two file
names to operate, test for at least two arguments on the command line.
To do this in the Bourne and Korn shells, check the value of $#
-- a variable that contains the count of arguments, other than the command
itself. It is also good practice to include a message detailing the
reasons why the command failed; this is usually created in a usage function.
The script twofiles below tests for two arguments on
the command line: #! /bin/ksh
# twofile script handles two files named on the command line
# a usage function to display help for the hapless user
# (the message text below is illustrative; the original function was not shown)
usage ()
{
    echo "Usage: twofiles file1 file2"
}
# test if we have two arguments on the command line
if [ $# != 2 ]
then
    usage
    exit
fi
# we are ok at this point so continue processing here
A safer practice is to validate as much as you can before running
your execution. The following version of twofiles checks
the argument count and tests both files. If file 1 doesn't exist (if
[ ! -f $1 ]) an error message is set up, a usage message is displayed,
and the program exits. The same is done for file 2: #! /bin/ksh
# twofile script handles two files named on the command line
# a usage function to display help for the hapless user
# plus an additional error message if it has been filled in
# (the function body below is illustrative; the original was not shown)
usage ()
{
    echo "Usage: twofiles file1 file2"
    [ -n "$errmsg" ] && echo "$errmsg"
}
# test if we have two arguments on the command line
if [ $# != 2 ]
then
    usage
    exit
fi
# test if file one exists and send an additional error message
# to usage if not found
if [ ! -f $1 ]
then
    errmsg=${1}":File Not Found"
    usage
    exit
fi
# same for file two
if [ ! -f $2 ]
then
    errmsg=${2}":File Not Found"
    usage
    exit
fi
# we are ok at this point so continue processing here
Note that in the Korn shell you can also use the double bracket test
syntax, which is faster. The single bracket test actually calls a program
named test to test the values, while the double bracket
test is built into the Korn shell and does not have to call a separate
program.
The double bracket test will not work in the Bourne shell:
if [[ $# != 2 ]]
or
if [[ ! -f $1 ]]
or
if [[ ! -f $2 ]]
This thorough validation can prevent later errors in the program
logic when a file is suddenly found missing. Consider it good programming
practice.
I can give one practical purpose for this error redirection which I
use on a regular basis. When I am searching for a file in the whole
hard disk as a normal user, I get a lot of errors such as:
find: /file/path: Permission denied
In such situations I use the error redirection to weed out these
error messages as follows:
$ find / -iname \* 2> /dev/null
Now all the error messages are redirected to the /dev/null device and
I get only the actual find results on the screen.
Note: /dev/null is a special kind of file in that its size is always
zero, so whatever you write to it just disappears. The opposite of
this file is /dev/zero, which acts as an infinite source of zero bytes.
You can use /dev/zero to create a file of any size, such as when
creating a swap file.
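For example (a sketch using GNU dd syntax; the size and path are illustrative,
and the last two commands need root):

dd if=/dev/zero of=/swapfile bs=1M count=512
mkswap /swapfile
swapon /swapfile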
I've been using this grep invocation for years to trim comments out
of config files. Comments are great but can get in your way if you just
want to see the currently running configuration. I've found files hundreds
of lines long which had fewer than ten active configuration lines; it's
really hard to get an overview of what's going on when you have to wade
through hundreds of lines of comments.
$ grep '^[^#]' /etc/ntp.conf
The regex ^[^#] (quoted so the shell doesn't treat it as a glob pattern)
matches the first character of any line, as long
as that character is not a #. Because blank lines don't have a
first character they're not matched either, resulting in a nice compact
output of just the active configuration lines.
INDEX 6) Small tricks, aliases and other bits 'n' pieces
This is a list of small ``tricks'' that can be incorporated into your own
.cshrc/.login startup files.
i) Show only new MOTD (message of the day) on login
if ( -f /etc/motd ) then
    cmp -s /etc/motd ~/.hushlogin
    if ($status) tee ~/.hushlogin < /etc/motd
endif
ii) Changing the prompt to reflect the current directory
alias setprompt 'set prompt = "`pwd` > "'
alias cd 'chdir \!* && setprompt'
alias pushd 'pushd \!* && setprompt'
alias popd 'popd \!* && setprompt'
setprompt
iii) Searching for a particular process (given as argument)
WARNING: this is for a SunOS environment and may be different for other OS's.
alias pf 'ps auxgww|awk '\''/(^| |\(|\/)\!:1( |\)|$)/'\''|cut -c1-15,36-99'
iv) Multiline prompt
alias setprompt 'set prompt="\\
${hostname:h}:${cwd}\\
\! % "'
v) Log remote (rsh) non-interactive commands executed in this account.
add something like the following to your .cshrc (non-interactive part)
if ( ! $?prompt ) then
    # Record the command being executed
    set column = "`ps ww1 | head -1`"
    # figure out column from ps header
    set column = `expr "$column" : '\(.*\)COMMAND' : '.*' + 1`
    ps ww$$ | tail -1 | cut -c${column}- >> ~/command.log
    exit
endif
vi) Csh Function Scripts.
Scripts which are executed by the current shell as if internal. This
allows more complex setprompt scripts, and lets scripts change the
prompt, set environment variables or change the current directory.
# Csh function scripts
alias function 'set argv=(\!*); shift; source \!:1'
# Specific Csh function
alias setprompt function ~/bin/scripts/setprompt
# Directory of Csh functions (initialization)
foreach i (~/bin/csh.functions/*)
    alias $i:t function $i
end
vii) File/Directory mailing Aliases
Mail files, binaries, and directories to other people easily.
Usage: mailfile address file
alias a alias
a mailfile 'cat ~/lib/line-cut \!:2 ~/lib/line-cut |\
    /usr/ucb/mail -s "file \!:2" \!:1'
a mailuu 'uuencode \!:2 \!:2 | cat ~/lib/line-cut - ~/lib/line-cut |\
    /usr/ucb/mail -s "file \!:2.uu" \!:1'
a maildir 'tar cvf - "\!:2" | compress | uuencode "\!:2.tar.Z" |\
    cat ~/lib/line-cut - ~/lib/line-cut |\
    /usr/ucb/mail -s "file \!:2.tar.Z.uu" \!:1'
-- miscellaneous sources
------------------------------------------------------------------------------
INDEX 7) Disclaimer: Csh Script Programming Considered Harmful
There are plenty of reasons not to use csh for script writing.
See Csh Programming Considered Harmful
ftp://convex.com/pub/csh.whynot
http://www.cit.gu.edu.au/~anthony/info/shell/csh.whynot-1.4
also
http://www.cit.gu.edu.au/~anthony/info/shell/csh.whynot.extra
This file is an attempt to explain how to make it easier and more convenient
to use csh interactively. It does NOT provide a guide to using csh as a
general script writing language, and the authors recommend that it not be
used for that purpose.
But why use csh interactively?
The aliases and history list alone make it worthwhile; extra features such
as file completion, tilde expansion, and job control make it even more useful.
The tcsh command line editing and other interactive enhancements make it one
of the best interactive shells around.
There are arguably `better' shells available that could be used, but I have
found many of them lacking in some important aspect, or simply not
installed on most systems. A delivered vanilla machine, however, is almost
certain to have csh, and a .cshrc and .login setup can then be easily copied
over and is available immediately.
Faced with the choice between plain sh and bad old csh I'll take csh any day.
-- Paul Davey (pd@x.co.uk)
-- Anthony Thyssen (anthony@cit.gu.edu.au)
When writing a shell program, you often come across some special
situation that you'd like to handle automatically. This tutorial includes
examples of such situations from small Bourne shell scripts. These situations
include base conversion from one base to another (decimal to hex,
hex to decimal, decimal to octal, and so on), reading the keyboard while
in a piped loop, subshell execution, inline input, executing a command
once for each file in a directory, and multiple ways to construct a
continuous loop.
Part 4 of this series wraps up with a collection of shell one-liners
that perform useful functions.
The last argument
The arguments to a script (or function) are $1, $2, ...
and can be referred to as a group by $* (or $@).
But is there an easy way to refer to the last argument in
the list? Try ${!#}, as in:
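#!/bin/bash
# lastarg: a minimal illustration of ${!#}
echo "The last argument is: ${!#}"

$ ./lastarg one two three
The last argument is: three

Another handy [[ ]] feature, shown in the next snippet, is regex matching
with the =~ operator: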
#!/bin/bash
variable="This is a fine mess."
echo "$variable"
if [[ "$variable" =~ "T*fin*es*" ]]
# Regex matching with =~ operator within [[ double brackets ]].
then
echo "match found"
# match found
fi
Alternatively, the script can test for the presence of
i in $-, the variable that holds the shell's option flags.
case $- in
    *i*)   # interactive script
    ;;
    *)     # non-interactive script
    ;;
esac
# (Thanks to "UNIX F.A.Q.", 1993)
Scripts may be forced to run in interactive mode with the -i option
or with a #!/bin/bash -i header. Be aware that this may cause erratic
script behavior or show error messages where no error is present.
To keep a record of which user scripts have run during a particular
session or over a number of sessions, add the following lines to each
script you want to keep track of. This will keep a continuing file record
of the script names and invocation times.
# Append (>>) the following to the end of the save file.
date >> $SAVE_FILE      # Date and time.
echo $0 >> $SAVE_FILE   # Script name.
echo >> $SAVE_FILE      # Blank line as separator.
# Of course, SAVE_FILE is defined and exported as an environment variable in ~/.bashrc
# (something like ~/.scripts-run)
A shell script may act as an embedded command inside another shell
script, a Tcl or wish
script, or even a Makefile. It can be invoked as an external shell
command in a C program using the system()
call, i.e., system("script_name");.
Put together a file of your favorite and most useful definitions
and functions, then "include" this file in scripts as necessary with
either the "dot" (.) or source
command (see
Section 3.2).
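For example (a sketch; the file name and function are made up):

# ~/lib/stdlib.sh -- favorite definitions and functions
LOGFILE=${LOGFILE:-$HOME/script.log}
log () {
    echo "`date`: $*" >> $LOGFILE
}

# In a script that needs them:
. ~/lib/stdlib.sh       # or: source ~/lib/stdlib.sh
log "script started"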
It would be nice to be able to invoke X-Windows widgets from a shell
script. There do, in fact, exist a couple of packages that purport to
do so, namely Xscript and
Xmenu, but these seem to be pretty much defunct.
If you dream of a script that can create widgets, try
wish (a Tcl derivative),
PerlTk (Perl with Tk extensions), or
tksh (ksh with Tk extensions).
IFS Specifies internal field separators (normally space, tab,
and new line) used to separate command words that result from command
or parameter substitution and for separating words with the regular
built-in command read. The first character of the IFS
parameter is used to separate arguments for the $* substitution.
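For example, IFS lets read split a colon-separated line into fields
(a minimal sketch; the sample line is made up):

line="al:x:1001:100:Al:/home/al:/bin/ksh"
IFS=: read user pass uid gid gecos home shell <<EOF
$line
EOF
echo "$user logs in with $shell"    # al logs in with /bin/ksh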
... ... ...
[May 7, 2007] basename and dirname
basename strips off the path leaving
only the final component of the name, which is assumed to be the file
name. If you specify suffix and the remaining portion of name
contains a suffix which matches suffix, basename
removes that suffix. For example
basename src/dos/printf.c .c
produces
printf
dirname returns the directory part of the full path+name
combination.
The same can also be done directly in bash:
basename=${file##*/}
dirname=${file%/*}
[May 7, 2007] To strip file extensions in bash, like
this.rbl --> this, use name=${file%.rbl} (or ${file%.*} to strip any extension)
Date: Tue, 12 Jan 1999 19:18:15 +0200
From: Reuben Sumner,
rasumner@iname.com. Here is a two cent tip that I have been meaning to
submit for a long, long time now. If you have a large stack of CD-ROMs, finding where
a particular file lies can be a time consuming task. My solution uses
the locate program and associated utilities to build up a database of
the CDs' contents that allows for rapid searching. First we need to create the database; the following
script does the trick nicely.
#!/bin/bash
onedisk()
{
    mount /mnt/cdrom
    find /mnt/cdrom -maxdepth 7 -print | sed "s;^/mnt/cdrom;$1;" > $1.find
    eject -u cdrom
}

echo Enter name of disk in device:
read diskname
while [ -n "$diskname" ]; do
    onedisk $diskname
    echo Enter name of next disk or Enter if done:
    read diskname
done
echo OK, preparing cds.db
cat *.find | sort -f | /usr/lib/findutils/frcode > cds.db
echo Done...
Start with no CD mounted. Run the script. It will ask
for a label for the CD; a short name like "sunsite1" is best. It will
then quickly scan the CD, eject it, and prompt for another. When you
have exhausted your collection, just hit Enter at the prompt. A file
called cds.db will be created. To make it simple to use, copy cds.db to
/var/lib (or anywhere else; that is where locatedb is on my system).
Now create an alias like the one sketched below.
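(The alias itself was lost from the tip; something along these lines,
assuming the /var/lib location above, will do.)

alias cdlocate='locate --database=/var/lib/cds.db'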
To prevent locate from warning you that the
database is old, try touch -t 202001010000 /var/lib/cds.db to set the
modification date to January 1 2020. --
Reuben
Ever wondered what's inside some of those binary
files on your system (binary executables or binary data)? Several times
I've gotten error messages from some command in the Solaris system,
but I couldn't tell where the error was coming from because it was buried
in some binary executable file.
The Solaris "strings" command lets you look at the ASCII text buried
inside of executable files, and can often help you troubleshoot problems.
For instance, one time I was seeing error messages like this when a
user was trying to log in:
Could not
set ULIMIT
I finally traced the problem down to the /bin/login command by running
the "strings" command like this:
root> strings /bin/login | more
The strings command lists ASCII character sequences in binary files,
and helped me determine that the "Could not set ULIMIT" error was coming
from this file. Once I determined that the error message I was seeing
was coming from this file, solving the problem became a simple matter.
If you're like many Solaris users and administrators,
you spend a lot of time moving back and forth between directories in
similar locations. For instance, you might often work in your home directory
(such as "/home/al"), the /usr/local directories, web page directories,
or other user's home directories in /home.
If you're often moving back-and-forth between the same directories,
and you use the Bourne shell (sh) or Korn shell (ksh) as your login
shell, you can use the CDPATH shell variable to save yourself a lot
of typing, and quickly move between directories.
Here's a quick demo. First move to the root directory:
cd /
Next, if it's not set already, set your CDPATH shell variable as follows:
CDPATH=/usr/spool
Then, type this cd command:
cd cron
What happens? Type this and see what happened:
pwd
The result should be "/usr/spool/cron".
When you typed "cd cron", the shell looked in your local directory for
a sub-directory named "cron". When it didn't find one, it searched the
CDPATH variable, and looked for a "cron" sub-directory. When it found
a sub-directory named cron in the /usr/spool directory, it moved you
there.
You can set your CDPATH variable just like your normal PATH variable:
CDPATH=/home/al:/usr/local:/usr/spool:/home
Group commands together with parentheses
Have you ever needed to run a series of commands,
and pipe the output of all of those commands into yet another command?
For instance, what if you wanted to run the "sar", "date", "who", and
"ps -ef" commands, and wanted to pipe the output of all four of those
commands into the "more" command? If you tried this:
sar -u 1 5; date; who; ps -ef | more
you'll quickly find that it won't work. Only the output of the "ps -ef"
command gets piped through the "more" command, and the rest of the output
scrolls off the screen.
Instead, group the commands together with a pair of parentheses (and
throw in a few echo statements for readability) to get the output of
all these commands to pipe into the more command:
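For example (the echo labels are just for readability):

(echo "--- sar ---"; sar -u 1 5
 echo "--- date ---"; date
 echo "--- who ---"; who
 echo "--- ps ---"; ps -ef) | more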
Many times it's necessary to schedule programs
to run at a later time. For instance, if your computer system is very
busy during the day, you may need
to run jobs late at night when nobody is logged on the system.
Solaris makes this very easy with the "at" command. You can use the
"at" command to run a job at almost any time--later today, early tomorrow...whenever.
Suppose you want to run the program "my_2_hour_program" at ten o'clock
tonight. Simply tell the at command to run the job at 10 p.m. (2200):
/home/al> at 2200
at> my_2_hour_program > /tmp/2hour.out
at> <CTRL><D>
warning: commands will be executed using /bin/ksh
job 890193600.a at Tue Mar 17 22:00:00 1998
Or suppose you'd like to run a find command at five o'clock tomorrow
morning:
/home/al> at 0500 tomorrow
at> find /home > /tmp/find.out
at> <CTRL><D>
warning: commands will be executed using /bin/ksh
job 890215200.a at Wed Mar 18 05:00:00 1998
When you're at the "at" prompt, just type the command you want to run.
Try a few tests with the at command until you become comfortable with
the way
it works.
Create a directory and move into it
at the same time
Question: How often do you create a new
directory and then move into that directory in your next command?
Answer: Almost always.
I realized this trend in my own work habits, so I created a simple shell
function to do the hard work for me.
md () {
    mkdir -p "$1" && cd "$1"
}
This is a Bourne shell function named "md" that works for Bourne and
Korn shell users. It can be easily adapted for C shell users.
Taking advantage of the -p option of the mkdir command, the function
easily creates multi-level subdirectories, and moves you into the lowest
level of the directory structure. You can use the command to create
one subdirectory like this:
/home/al> md docs
/home/al/docs> _
or you can create an entire directory tree and move right into the new
directory like this:
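For example (the path is made up for illustration; mkdir -p creates all the
intermediate levels):

/home/al> md docs/1998/jan/mtg-notes
/home/al/docs/1998/jan/mtg-notes> _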
Date: Fri, 15 Jan 1999 19:55:51 +0100 (CET)
From: JL Hopital, cdti94@magic.fr. My English is terrible, so feel free to correct if
you decide to publish... Hello, I am a French linuxer and here is my two cent
tip. If you have many CD-ROMs and want to retrieve this_file_I'm_sure_i_have_but_can't_remember_where,
it can help. It consists of two small scripts using GNU utilities:
updatedb and locate. Normally updatedb runs every night,
creating a database for all the mounted file systems, and locate is
used to query this system-wide database. But you can tell them where
the files to index are and where to put the database. That's what my
scripts do: the first script (addcd.sh) creates a database for
the CD currently mounted; you must run it once for every CD-ROM.
The second (cdlocate.sh) searches the databases
created by addcd.sh and displays the CD name and full path of the files
matching the pattern you give as a parameter. So you can search for unmounted
files! To use:
(if your mount point is different, you must adapt the script)
Run addcd.sh with a fully descriptive name for
this CD-ROM as parameter (this description will be used as part of
the database name; don't use spaces):
./addcd.sh Linux.Toolkit.Disk1.Oct.1996
It will take updatedb some time to create the
database, especially if the CD-ROM contains many files.
Umount the CD-ROM and repeat for every
CD-ROM you want, or every time you get a new one (I have more than
70 databases created this way).
You can now use cdlocate.sh to retrieve files:
./cdlocate.sh '*gimp*rpm'
Beware that locate's regular expressions have some peculiarities;
'man locate' will explain. Hope this helps, and happy linuxing!
---Cut here------------------------------
# addcd.sh
# Author: Jose-Luc.Hopital@ac-creteil.fr
# Create a filename's database in $DATABASEHOME for the cd mounted
# at $MOUNTPOINT
# Example usage: addcd.sh Linux.Toolkit.Disk3.Oct.1996
# to search the databases use cdlocate.sh
CDNAME=$1
test "$CDNAME" = "" && { echo Usage:$0 name_of_cdrom ; exit 1 ; }
# the mount point for the cd-ROM
MOUNTPOINT=/mnt/cdrom
# where to put the database
DATABASEHOME=/home/cdroms
updatedb --localpaths=$MOUNTPOINT --output=$DATABASEHOME/$CDNAME.updatedb && \
echo Database added for $CDNAME
---Cut here--------------------------------
# cdlocate.sh
# Author : Jose-Luc.Hopital@ac-creteil.fr
# Usage $0 pattern
# search for the regular expression in $1 in the databases found in $DATABASEHOME
# to add a database for a new cd-rom , use addcd.sh
test "$*" = "" && { echo Usage:$0 pattern ; exit 1 ; }
DATABASEHOME=/home/cdroms
cd $DATABASEHOME
# get rid of locate warning: more than 8 days old
touch *.updatedb
CDROMLIST=`ls *.updatedb`
for CDROM in $CDROMLIST
do
CDROMNAME=`basename $CDROM .updatedb`
locate --database=$DATABASEHOME/$CDROM $@ |sed 's/^/'$CDROMNAME:'/'
done
#!/bin/bash

# String expansion.
# Introduced in version 2 of bash.

# Strings of the form $'xxx'
# have the standard escaped characters interpreted.

echo $'Ringing bell 3 times \a \a \a'
echo $'Three form feeds \f \f \f'
echo $'10 newlines \n\n\n\n\n\n\n\n\n\n'

exit
Indirect variable references - the new way
#!/bin/bash

# Indirect variable referencing.
# This has a few of the attributes of references in C++.

a=letter_of_alphabet
letter_of_alphabet=z

# Direct reference.
echo "a = $a"

# Indirect reference.
echo "Now a = ${!a}"
# The ${!variable} notation is greatly superior to the old "eval var1=\$$var2"

echo

t=table_cell_3
table_cell_3=24
echo "t = ${!t}"
table_cell_3=387
echo "Value of t changed to ${!t}"
# Useful for referencing members
# of an array or table,
# or for simulating a multi-dimensional array.
# An indexing option would have been nice (sigh).

exit 0
Using arrays and other miscellaneous trickery to deal four random
hands from a deck of cards
#!/bin/bash2
# Must specify version 2 of bash, else might not work.

# Cards:
# deals four random hands from a deck of cards.

UNPICKED=0
PICKED=1

DUPE_CARD=99

LOWER_LIMIT=0
UPPER_LIMIT=51
CARDS_IN_SUITE=13
CARDS=52

declare -a Deck
declare -a Suites
declare -a Cards
# It would have been easier and more intuitive
# with a single, 3-dimensional array. Maybe
# a future version of bash will support
# multidimensional arrays.


initialize_Deck ()
{
    i=$LOWER_LIMIT
    until [ $i -gt $UPPER_LIMIT ]
    do
        Deck[i]=$UNPICKED
        let "i += 1"
    done
    # Set each card of "Deck" as unpicked.
    echo
}

initialize_Suites ()
{
    Suites[0]=C #Clubs
    Suites[1]=D #Diamonds
    Suites[2]=H #Hearts
    Suites[3]=S #Spades
}

initialize_Cards ()
{
    Cards=(2 3 4 5 6 7 8 9 10 J Q K A)
    # Alternate method of initializing array.
}

pick_a_card ()
{
    card_number=$RANDOM
    let "card_number %= $CARDS"
    if [ ${Deck[card_number]} -eq $UNPICKED ]
    then
        Deck[card_number]=$PICKED
        return $card_number
    else
        return $DUPE_CARD
    fi
}

parse_card ()
{
    number=$1
    let "suite_number = number / CARDS_IN_SUITE"
    suite=${Suites[suite_number]}
    echo -n "$suite-"
    let "card_no = number % CARDS_IN_SUITE"
    Card=${Cards[card_no]}
    printf %-4s $Card
    # Print cards in neat columns.
}

seed_random ()
{
    # Seed random number generator.
    seed=`eval date +%s`
    let "seed %= 32766"
    RANDOM=$seed
}

deal_cards ()
{
    echo

    cards_picked=0
    while [ $cards_picked -le $UPPER_LIMIT ]
    do
        pick_a_card
        t=$?

        if [ $t -ne $DUPE_CARD ]
        then
            parse_card $t

            u=$cards_picked+1
            # Change back to 1-based indexing (temporarily).
            let "u %= $CARDS_IN_SUITE"
            if [ $u -eq 0 ]
            then
                echo
                echo
            fi
            # Separate hands.

            let "cards_picked += 1"
        fi
    done

    echo

    return 0
}


# Structured programming:
# entire program logic modularized in functions.

#================
seed_random
initialize_Deck
initialize_Suites
initialize_Cards
deal_cards

exit 0
#================


# Exercise 1:
# Add comments to thoroughly document this script.

# Exercise 2:
# Revise the script to print out each hand sorted in suites.
# You may add other bells and whistles if you like.

# Exercise 3:
# Simplify