We looked at the source builtin a little in TLCL when we were examining how the shell's environment is established with the .profile and .bashrc files. In this adventure, we will delve further into how to use this feature to support configuration files and shareable function libraries for our bash scripts.
http://linuxcommand.org/lc3_adv_source.php
Saturday, March 21, 2020
Saturday, April 27, 2019
(Not so new) Adventure: Vim, with Vigor
A few months ago I wrote an adventure but I forgot to announce it. In it, we advance our skill with the vim text editor from the beginner level to an intermediate level. This adventure is one of my favorites so far. I learned a lot while writing it.
Vim is a very capable and configurable program. For example, it's easy to configure vim to behave differently according to the type of file it is editing. In this adventure, we will configure vim to be optimized for writing shell scripts and plain-text documentation files, making vim a useful partner when working on the command line. We also introduce a number of powerful editing tricks that make using vim a lot more fun.
You can find the Vim, with Vigor adventure here.
Tuesday, June 21, 2016
Adventure: Other Shells and Power Terminals
Two new Adventures! First, we'll look at some of the other shell programs available to Linux users. Most are of historical interest, but one attempts to do bash one better. Learn more about Other Shells.
Second, we'll explore some of the often overlooked features of our most frequently used command line tool-- our terminal emulator. Explore Power Terminals.
Enjoy!
Tuesday, January 26, 2016
Adventure: AWK
Another new Adventure! The AWK programming language is one of the truly classic Unix tools still in wide use today. Often embedded in shell scripts or employed directly at the command line, this powerful and elegant text processing and pattern matching language is a must-have for every Linux user's toolbox. In this adventure, we'll try it out.
Thursday, February 5, 2015
Adventure: dialog
Another new Adventure! dialog is a program that, as the name might suggest, creates dialog boxes in text mode. We can use it to give our scripts a friendly face. In this adventure, we will look at what it does, and how to use it.
Friday, January 16, 2015
Friday, November 7, 2014
Adventure: tput
Another new Adventure! Tired of looking at the same old text? Learn how to add color and text effects to your scripts with tput.
Monday, May 12, 2014
Adventure: Less Typing
Another new Adventure! Fingers getting tired? Making more mistakes than you should? You should learn to do more with Less Typing!
Tuesday, March 25, 2014
Adventure: Terminal Multiplexers
I have just posted another Adventure! This one explores terminal multiplexers: programs that allow your terminal to perform clever tricks. Enjoy!
Monday, March 3, 2014
Adventures
I have just posted the first unit of a new series on LinuxCommand.org called Adventures. These are tutorials that supplement my book, The Linux Command Line.
The first tutorial in the Adventures series is Midnight Commander. Midnight Commander is a text-based directory browser and file manager. A very powerful and useful program.
Look for more Adventures in the coming weeks. Enjoy!
Thursday, October 4, 2012
My Raspberry Pi Adventure
Last Christmas, my friend +Norman Robinson gave me a BeagleBoard-xM computer to play with. The BeagleBoard is a small, single-board, ARM-based computer. As I started to work with it I was impressed with its relative performance and low-power consumption. However, due to its price (approx. $150), I didn't think it represented a tremendous value. After all, I only paid $199 for my HP Mini netbook at my local computer store and the netbook comes with a case and a keyboard!
I had been hearing a lot about the Raspberry Pi computer which appeared to be very similar to the BeagleBoard but only costs $35. That price, being clearly in impulse-buy territory, appealed to my computer buying impulses.
First Discovery: They're hard to get.
The Raspberry Pi computer is the brainchild of a group of British computer science educators. They set out to solve a problem:
The idea behind a tiny and cheap computer for kids came in 2006, when Eben Upton and his colleagues at the University of Cambridge’s Computer Laboratory, including Rob Mullins, Jack Lang and Alan Mycroft, became concerned about the year-on-year decline in the numbers and skills levels of the A Level students applying to read Computer Science in each academic year. From a situation in the 1990s where most of the kids applying were coming to interview as experienced hobbyist programmers, the landscape in the 2000s was very different; a typical applicant might only have done a little web design.
Something had changed the way kids were interacting with computers. A number of problems were identified: the colonisation of the ICT curriculum with lessons on using Word and Excel, or writing webpages; the end of the dot-com boom; and the rise of the home PC and games console to replace the Amigas, BBC Micros, Spectrum ZX and Commodore 64 machines that people of an earlier generation learned to program on.
As they had modest objectives and resources, they only planned to produce a few thousand boards; however, when they announced the computer and its price, they were flooded with hundreds of thousands of orders. Needless to say, this has led to some supply problems. It was many months before the boards became generally available in the U.S. Even now, you should expect a long wait to receive a board.
I ordered mine from Allied Electronics and waited about 12 weeks for delivery.
If you're willing to pay more, there are scalpers out there (like this guy on Amazon) who will happily sell you a board albeit at a greatly inflated price.
Second Discovery: You'll need accessories.
For $35 all you get is a bare board, nothing else, not even documentation. That's available on-line. I ordered a case, power supply, and cables from Adafruit Industries which offers an extensive array of Raspberry Pi accessories. You'll also need a 4GB (or larger) SD card to act as its boot disk.
One interesting cable I bought from Adafruit was the USB TTL to Serial Cable. With this cable, you can both power the Raspberry Pi and interact with it via a serial terminal program on your computer. I installed GtkTerm on my Ubuntu box and was able to work with the board while I waited for my back-ordered power supply.
The Raspberry Pi is powered with 5 volts over a USB cable so many mobile phone chargers will work. My Nexus 7 charger worked fine for testing purposes.
Third Discovery: It ain't no slouch.
So how good is a $35 computer? Surprisingly good. In terms of computing horsepower, you can think of the Raspberry Pi as a late 1990s-era desktop PC, but with a better graphics card. The CPU performance is roughly equivalent to a 300 MHz Pentium II.
The board comes equipped with 256 MB of RAM, 2 USB ports, an Ethernet port, HDMI video out, RCA composite video out, and a speaker port. Notice that there is no VGA output. It's really designed to be hooked up to a digital TV or monitor via HDMI. If you have a monitor that supports DVI (as I do), you can use an HDMI-to-DVI cable. I only used a monitor with the board once and it worked fine. Be aware that the HDMI is not bi-directional, meaning that the board cannot detect the capabilities of the monitor it's attached to. You may need to adjust some configuration files to get the video sized correctly for your monitor. Another limitation of the Raspberry Pi is that it does not have a real-time clock. It relies on a network connection and the Network Time Protocol to keep accurate time.
Fourth Discovery: USB is an issue.
Due to their limited power supplies, neither my BeagleBoard nor my Raspberry Pi can meet the official specification for USB power output. This means you should not try to power devices via USB. My mouse and keyboard worked okay, but forget about USB-powered hard disks and the like. Either use an externally powered device or a good quality powered USB hub.
Fifth Discovery: There's lots of software.
I set up my Raspberry Pi as a headless media server. I downloaded and installed Raspbian, the version of Debian 7 for the Raspberry Pi. It is officially supported by the Raspberry Pi Foundation and you can download it from their site. To install the image, use dd to write the image file on to the SD card. After it's done, insert the SD card into the Raspberry Pi, attach a monitor and keyboard, plug in the power supply and it boots up. There are a few setup questions (start with setting the correct keyboard layout for your locale) and you're done.
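The dd step deserves a moment of care, since writing to the wrong device destroys its contents. As a sketch (the image name and the SD card device /dev/sdX are assumptions; check the real device name with lsblk first), the write looks like this, followed by a safe demonstration of the same mechanics using ordinary files:

```shell
# Illustrative only -- image name and /dev/sdX are placeholders:
#   sudo dd if=raspbian.img of=/dev/sdX bs=4M
#
# The same mechanics, demonstrated harmlessly with a regular file:
dd if=/dev/zero of=/tmp/demo.img bs=1024 count=4 2> /dev/null
# stat reports the resulting file size: 4 blocks of 1024 bytes = 4096
stat --format %s /tmp/demo.img
```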
Almost every application that's available for Debian (more than 30,000) is also available for the Raspberry Pi. I installed Samba (Windows style file server), Apache (web server), OpenSSH-server (secure shell server), byobu (terminal screen multiplexer), apcupsd (UPS monitor), and the Logitech Media Server.
It is also possible to build an all-text Linux workstation as I describe in my blog series.
I attached a 200 GB 3.5 inch external hard drive and added a line to /etc/fstab to explicitly mount it since Raspbian, in its standard form, does not automatically mount drives.
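Such an fstab line might look something like this; the device name, filesystem type, and mount point below are assumptions for illustration, not my actual entry:

```
/dev/sda1   /mnt/bigdisk   ext4   defaults,noatime   0   2
```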
I wrote a bash shell function for my .bashrc file (on the Raspberry Pi) to provide a quick snapshot of the system's health:
status() {
    { echo -e "\nuptime:"
      uptime
      echo -e "\ndisk:"
      df -h 2> /dev/null
      echo -e "\nblock devices:"
      blkid
      echo -e "\nmemory:"
      free -m
      echo -e "\nrsync_backup.log:"
      tail /var/log/rsync_backup.log
      echo -e "\nsyslog:"
      tail /var/log/syslog
    } | less
}
The rsync_backup.log file is created by a script of mine that performs nightly backups of the system.
Here is my Raspberry Pi in its final production configuration:
On the left we see the power cable, on the right, USB cables to the APC UPS and external hard drive, and 10 Base-T Ethernet.
Final Discovery: I like it.
The Raspberry Pi and my BeagleBoard have replaced two old desktop machines that acted as servers on my home network. My office is now eerily quiet (and much cooler!).
Thursday, September 27, 2012
Unity Tutorial Video
I came across this 27-minute Unity tutorial a few weeks back. If you are using Ubuntu 12.04, or are considering it, check it out. It is pretty thorough and explains many of its hidden features.
Unity remains controversial and I have mixed feelings about it myself. I think that application discovery is more difficult when using the dash than it is using the previous pull-down menu system. Changing workspaces requires an additional step in Unity and the placement of window controls and menus in the top corner of the workspace is awkward at best.
But on the other hand, I get what the designers are trying to do. In addition to making the user interface more touch-friendly (because it's the future du jour), it attempts to reduce distraction on the desktop and make your work easier to focus on.
I have 12.04 running on my laptops and netbooks (it's a good fit for netbooks) and will be upgrading my main desktop to it soon.
Monday, August 1, 2011
Ask Ars: how do I use the find command in a pipeline?

In 1998, Ask Ars was an early feature of the newly launched Ars Technica. Now, as then, it's all about your questions and our community's answers. Each week, we'll dig into our question bag, provide our own take, then tap the wisdom of our readers. To submit your own question, see our helpful tips page.

Q: I know I can use the find command at the command line to locate files, but how do I use it with other commands to perform a real-world task? What's the difference between the -exec parameter and piping into xargs?

The find command is a standard utility on UNIX and Linux systems. It will recurse through directory structures and look for files that conform with the user's specified parameters. There are a number of different search operators that can be used together to achieve fine-grained file matching.

In this tutorial, I'll explain how to use the find command with several common search operators and then I'll show you some examples of how to use the find command in a pipeline.
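The -exec/xargs distinction the question raises can be sketched in a few lines (the demo directory and file names are illustrative):

```shell
# Set up a small demo directory
mkdir -p /tmp/find_demo
touch /tmp/find_demo/a.txt /tmp/find_demo/b.txt

# -exec: find runs the command once for each matching file
find /tmp/find_demo -name '*.txt' -exec basename {} \;

# xargs: the file names are collected and passed to a single echo
# invocation; -print0/-0 keep names containing spaces intact
find /tmp/find_demo -name '*.txt' -print0 | xargs -0 echo
```

The practical difference is the number of processes launched: -exec (with \;) forks once per file, while xargs batches arguments into as few invocations as possible.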
Wednesday, July 27, 2011
Google Chromebooks: Some Helpful Tips
When I was considering the purchase of a Chromebook, I was looking for a device that could fulfill several different usage cases. I had in mind the ability to take the device on vacation and perform the ordinary tasks I usually perform when I’m not at my desk. These include:
- Document production, primarily blog post composition and editing.
- Photo management. As I take a lot of pictures when I travel, I need to view and upload my photos to my SmugMug account where my photography site is hosted.
- Media viewing. Like my editor and her iPad, I sometimes want to view some video and listen to a little music.
- System administration. After all, this is LinuxCommand.org, so I have to be able to log into a remote system now and then and get some real work done.
This post will cover some of the interesting things I discovered when I started using my Chromebook and attempted my usage cases. There is a nearly secret little switch next to the SIM slot that is used to put the system into “developer mode,” which affords the user nearly complete control of the machine, including installing a replacement OS; however, everything I will discuss here can be accomplished in regular user mode.
Getting Help
Out of the box, the Samsung Chromebook comes with almost no documentation aside from a very concise quick-start guide. It relies instead on web-based help. The Chromebook on-line help can be accessed by typing Ctrl-/. Chromebooks also have an extensive set of keyboard shortcuts. A list of key assignments can be displayed by pressing Ctrl-Alt-/.
File Management
Many of my usage cases involve manipulating files in some way or another. The Chromebook concept of cloud computing does not encourage local storage and this is reflected in the limited number of file operations available to the user. A rudimentary file manager is invoked by typing Ctrl-m. There are two directories that may be accessed. These are the “File Shelf” (the default downloads directory for the browser) and the “External Storage” directory containing the mount points for an SD card or USB mass storage devices. Chrome OS supports a variety of file system types including FAT, VFAT, NTFS, ISO9660, and Ext3/4 making the system quite Linux-friendly.
Since a Chromebook does not provide local application programs that process files, it supports little more than uploading and downloading them. The file manager, in its present incarnation, is limited to deleting and renaming files. Copying and moving files between directories and devices is not yet supported.
Fortunately, the web browser does support the file: URI scheme allowing access to the File Shelf and External Devices directories. No other directories are accessible unless the system is operating in developer mode. The URLs for the accessible directories are listed below:
| Directory | URL |
| File Shelf | file:///home/chronos/user/Downloads/ |
| External Storage | file:///media/ |
To copy a file from one directory or device to another, use the URL listed above to locate the target file, then right click on the file and select “Save link as...” to copy the file to a new location.
Media Viewing
The file manager allows a few file types to be viewed. It can display JPEGs and play both MPEG-4 and MP3 files. As a bonus for Linux users, both Ogg Vorbis and Ogg Theora files are also supported. Strangely, while the web browser incorporates a PDF viewer, the file manager cannot launch it. The file manager can launch a media player for video playback, but it is limited to either thumbnail size or full screen, and full-screen performance is quite poor. Using the URLs above to have the web browser play the file directly yields a much better result; I found that m4v files transcoded for playback on an iPad played fine in the browser.
Chromebooks do not, as of yet, have a full featured media player. I understand that having one might “pollute” the cloud-only idea behind the Chrome OS, but mobile device owners expect this functionality in portables.
Photo Uploading
Uploading photos from an SD card is very easy. Modern HTML5 uploaders such as the ones at SmugMug and Google+ work great.
The Terminal
One of the really unexpected features on a Chromebook is the terminal. Typing Ctrl-Alt-t opens a new full screen window (as opposed to a tab) containing the Chrome OS shell, called “crosh.” The shell is very limited. It supports just a few commands, mostly network diagnostics, but it also supports an SSH client so you can open a terminal, launch SSH and get access to remote systems. Since it is possible to open multiple terminal windows you can perform some useful work. Chrome OS uses the X window system for its underlying graphics, and the usual middle click (3 finger click on the touch pad) will paste text on the terminal. Even though the SSH client is present, there are no scp or sftp commands available in crosh. In fact, no file system access is possible from the shell.
One problem I have with the terminal is the small font size. I think it's probably fine for many people, but old folks like me will find it difficult. Unfortunately, the text size is not adjustable.
**UPDATE** August 13, 2011
Version 13 of Chrome OS was pushed out a few days ago (Google touts that they will update the OS about every 6 weeks) and among its improvements are speedups for video playback in the file manager. The bookmarks suggested above are still useful but now you can realistically watch a video in the file manager, unlike before.
Further Reading
Thursday, June 3, 2010
My Top 5 Bash Resources
Over the course of writing The Linux Command Line and this blog, I've had frequent need of good reference resources for command line programs including the shell itself, bash. Here is my list of the ones that stand out:
1. The Bash Man Page
Yeah, I know. I spent nearly half a page in my book trashing the bash man page for its impenetrable style and its lack of any trace of user-friendliness, but nothing beats typing "man bash" when you're already working in a terminal. The trick is finding what you want in its enormous length. This can sometimes be a significant problem, but once you find what you are looking for, the information is always concise and authoritative though not always easy to understand. Still, this is the resource I use most often.
2. The Bash Reference Manual
Perhaps in response to the usability issues found in the bash man page, the GNU Project produced the Bash Reference Manual. You can think of it as the bash man page translated into human readable form. While it lacks a tutorial focus and contains no usage examples, it is much easier to read and is more usefully organized than the bash man page.
3. Greg's Wiki
The bash man page and the Bash Reference Manual both extensively document the features found in bash. However, when we need a description of bash behavior, different resources are needed. The best by far is Greg's Wiki. This site covers a variety of topics, but of particular interest to us are the Bash FAQ which contains over one hundred frequently asked questions about bash, the Bash Pitfalls which describes many of the common problems script writers encounter with bash, and the Bash Guide, a useful set of tutorials for bash users. There are also several fun to read rants.
4. The Bash Hackers Wiki
Like Greg's Wiki, the Bash Hackers Wiki provides many different articles relating to bash, its features, and its behavior. Included are some useful tutorials on various programming techniques and issues with scripting with bash. While the writing is, at times, a little chaotic, it does contain useful information. Heck, they even trash my Writing Shell Scripts tutorial (Hmmm...I really ought to fix some of that stuff).
5. Chet Ramey's Bash Page
Chet Ramey is the current maintainer of bash and he has his own page. On this page, you can find version information, latest news, and other things. The most useful document on the Bash Page is its version of the Bash FAQ. The NEWS file contains a concise list of features that have been added to each version of bash.
There you have it. Enough reading to keep even the most curious shell user busy for weeks. Enjoy!
Tuesday, June 1, 2010
Using Configuration Files With Shell Scripts
If you have worked with the command line for a while, you have no doubt noticed that many programs use text configuration files of one sort or another. In this lesson, we will look at how we can control shell scripts with external configuration files.
Why Use Configuration Files?
Since shell scripts are just ordinary text files, why should we bother with additional text configuration files? There are a couple of reasons that you might want to consider them:
- Having configuration files removes the need to make changes to a script. There may be cases where you want to ensure that a script remains in its original form.
- In particular, you may want to have a script that is shared by multiple users and each user has a specific desired configuration. Using individual configuration files prevents the need to have multiple copies of the script, thus making administration easier.
Sourcing Files
Implementing configuration files in most programming languages is a fairly complicated undertaking, as you must write code to parse the configuration file's content. In the shell, however, parsing is automatic because you can use regular shell syntax.
The shell builtin command that makes this trick work is named source. The source command reads a file and processes its content as if it were coming from the keyboard. Let's create a very simple shell script to demonstrate sourcing in action. We'll use the cat command to create the script:
me@linuxbox:~$ cat > bin/cd_script
#!/bin/bash
cd /usr/local
echo $PWD
Press Ctrl-d to signal end-of-file to the cat command. Next, we will set the file attributes to make the script executable:
me@linuxbox:~$ chmod +x bin/cd_script
Finally, we will run the script:
me@linuxbox:~$ cd_script
/usr/local
me@linuxbox:~$
The script executes and, by doing so, changes the directory to /usr/local and then outputs the name of the current working directory, which is /usr/local. Notice, however, that when the shell prompt returns, we are still in our home directory. Why is this? While it may appear at first that the script did not change directories, it did, as evidenced by the output of the PWD shell variable. So why isn't the directory still changed when the script terminates?
The answer lies in the fact that when you execute a shell script, a new copy of the shell is launched and with it comes a new copy of the environment. When the script finishes, the copy of the shell is destroyed and so is its environment. As a general rule, a child process, such as the shell running a script, is not permitted to modify the environment of the parent process.
So if we actually wanted to change the working directory in the current shell, we would need to use the source command to read the contents of our script. Note that the name of the source command may be abbreviated as a single dot followed by a space.
me@linuxbox:~$ . cd_script
/usr/local
me@linuxbox:/usr/local$
By sourcing the file, the working directory is changed in the current shell, as we can see by the trailing portion of the shell prompt. Be aware that, by default, the shell will search the directories listed in the PATH variable for the file to be read. Files that are read by source do not have to be executable, nor do they need to start with the shebang (i.e., #!) mechanism.
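A minimal illustration of this behavior (the file path is arbitrary): a variable assigned in a sourced file appears in the current shell, with no execute permission or shebang required.

```shell
# Create a plain, non-executable file that assigns a variable
echo 'GREETING="hello from sourced file"' > /tmp/greet.sh

# Source it; the assignment happens in the *current* shell
. /tmp/greet.sh
echo "$GREETING"
```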
Implementing Configuration Files In Scripts
Now that we see how sourcing works, let's try our hand at writing a script that uses the source command to read a configuration file.
In part 4 of the Getting Ready For Ubuntu 10.04 series, we wrote a script to perform a backup of our system to an external USB disk drive. The script looked like this:
#!/bin/bash
# usb_backup # backup system to external disk drive
SOURCE="/etc /usr/local /home"
DESTINATION=/media/BigDisk/backup
if [[ -d $DESTINATION ]]; then
sudo rsync -av \
--delete \
--exclude '/home/*/.gvfs' \
$SOURCE $DESTINATION
fi
You will notice that the source and destination directories are hard-coded into the SOURCE and DESTINATION constants at the beginning of the script. We will remove these and modify the script to read a configuration file instead:
#!/bin/bash
# usb_backup2 # backup system to external disk drive
CONFIG_FILE=~/.usb_backup.conf
if [[ -f $CONFIG_FILE ]]; then
. $CONFIG_FILE
fi
if [[ -d $DESTINATION ]]; then
sudo rsync -av \
--delete \
--exclude '/home/*/.gvfs' \
$SOURCE $DESTINATION
fi
Now we can create a configuration file named ~/.usb_backup.conf that contains these two lines:
SOURCE="/etc /usr/local /home"
DESTINATION=/media/BigDisk/backup
When we run the script, the contents of the configuration file are read and the SOURCE and DESTINATION constants are added to the script's environment just as though the lines were in the text of the script itself. The
if [[ -f $CONFIG_FILE ]]; then
. $CONFIG_FILE
fi
construct is a common way to set up the reading of a file. In fact, if you look at your ~/.profile or ~/.bash_profile startup files, you will probably see something like this:
if [ -f "$HOME/.bashrc" ]; then
. "$HOME/.bashrc"
fi
which is how your environment is established when you log in at the console.
While our script in its current form requires the configuration file to define the SOURCE and DESTINATION constants, it's easy to make the file optional by setting default values for the constants in case the configuration file is missing or does not contain the required definitions. We will modify our script to set default values and also support a command line option (-c) that specifies an alternate configuration file name:
#!/bin/bash
# usb_backup3 # backup system to external disk drive
# Look for alternate configuration file
if [[ $1 == -c ]]; then
CONFIG_FILE=$2
else
CONFIG_FILE=~/.usb_backup.conf
fi
# Source configuration file
if [[ -f $CONFIG_FILE ]]; then
. $CONFIG_FILE
fi
# Fill in any missing values with defaults
SOURCE=${SOURCE:-"/etc /usr/local /home"}
DESTINATION=${DESTINATION:-/media/BigDisk/backup}
if [[ -d $DESTINATION ]]; then
sudo rsync -av \
--delete \
--exclude '/home/*/.gvfs' \
$SOURCE $DESTINATION
fi
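The ${parameter:-default} expansions above do the real work; a quick sketch of how they behave:

```shell
# Unset (or empty): the expansion falls back to the default value
unset DESTINATION
echo "${DESTINATION:-/media/BigDisk/backup}"

# Set and non-empty: the variable's own value wins
DESTINATION=/mnt/other
echo "${DESTINATION:-/media/BigDisk/backup}"
```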
Code Libraries
Since the files read by the source command can contain any valid shell commands, source is often used to load collections of shell functions into scripts. This allows central libraries of common routines to be shared by multiple scripts. This can make code maintenance considerably easier.
Security Considerations
On the other hand, since sourced files can contain any valid shell command, care must be taken to make sure that nothing malicious is placed in a file that is to be sourced. This holds especially true for any script that is to be run by the superuser. When writing such scripts, make sure that the superuser owns the file to be sourced and that the file is not world-writable. Some code like this could do the trick:
if [[ -O $CONFIG_FILE ]]; then
if [[ $(stat --format %a $CONFIG_FILE) == 600 ]]; then
. $CONFIG_FILE
fi
fi
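A quick way to check that this guard behaves as intended, using a throwaway temporary file:

```shell
# Create a temporary file and lock its permissions down to 600
CONFIG_FILE=$(mktemp)
chmod 600 "$CONFIG_FILE"

# -O is true when the file is owned by the effective user
[[ -O $CONFIG_FILE ]] && echo "owned by us"

# GNU stat reports the octal permission bits: 600
stat --format %a "$CONFIG_FILE"

rm -f "$CONFIG_FILE"
```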
Further Reading
The bash man page:
The Linux Command Line:
Why Use Configuration Files?
Since shell scripts are just ordinary text files, why should we bother with additional text configuration files? There are a couple of reasons that you might want to consider them:
- Having configuration files removes the need to make changes to a script. There may be cases where you want to insure that a script remains in its original form.
- In particular, you may want to have a script that is shared by multiple users and each user has a specific desired configuration. Using individual configuration files prevents the need to have multiple copies of the script, thus making administration easier.
Sourcing Files
Implementing configuration files in most programming languages is a fairly complicated undertaking, as you must write code to parse the configuration file's content. In the shell, however, parsing is automatic because you can use regular shell syntax.
The shell builtin command that makes this trick work is named source. The source command reads a file and processes its content as if it were coming from the keyboard. Let's create a very simple shell script to demonstrate sourcing in action. We'll use the cat command to create the script:
me@linuxbox:~$ cat > bin/cd_script
#!/bin/bash
cd /usr/local
echo $PWD
Press Ctrl-d to signal end-of-file to the cat command. Next, we will set the file attributes to make the script executable:
me@linuxbox:~$ chmod +x bin/cd_script
Finally, we will run the script:
me@linuxbox:~$ cd_script
/usr/local
me@linuxbox:~$
The script executes, changing the directory to /usr/local and then printing the name of the current working directory. Notice, however, that when the shell prompt returns, we are still in our home directory. Why is this? While it may appear at first that the script did not change directories, it did, as evidenced by the output of the PWD shell variable. So why isn't the directory still changed when the script terminates?
The answer lies in the fact that when we execute a shell script, a new copy of the shell is launched, and with it comes a new copy of the environment. When the script finishes, that copy of the shell is destroyed, and so is its environment. As a general rule, a child process, such as the shell running a script, is not permitted to modify the environment of its parent process.
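We can see this parent/child rule directly from the command line. A quick sketch:

```shell
#!/bin/bash
# sketch: a child shell's cd does not affect the parent
cd /tmp
( cd /usr; echo "child is in: $PWD" )    # the parentheses run a subshell
echo "parent is still in: $PWD"
```

The subshell changes its own working directory, but when it exits, the parent's directory is untouched.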
So if we actually want to change the working directory of the current shell, we need to use the source command to read the contents of our script. Note that the name of the source command may be abbreviated as a single dot followed by a space.
me@linuxbox:~$ . cd_script
/usr/local
me@linuxbox:/usr/local$
By sourcing the file, the working directory is changed in the current shell, as we can see from the trailing portion of the shell prompt. Be aware that, by default, the shell will search the directories listed in the PATH variable for the file to be read. Files read by source do not have to be executable, nor do they need to start with the shebang (i.e., #!) mechanism.
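As a quick sketch (the file name here is just an example), we can confirm that a sourced file needs no execute permission:

```shell
#!/bin/bash
# sketch: source a plain, non-executable file (file name is an assumption)
printf 'MESSAGE="read by source"\n' > /tmp/source_demo.conf
chmod 644 /tmp/source_demo.conf     # readable, but not executable
. /tmp/source_demo.conf             # a path containing a slash bypasses the PATH search
echo "$MESSAGE"
rm /tmp/source_demo.conf
```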
Implementing Configuration Files In Scripts
Now that we see how sourcing works, let's try our hand at writing a script that uses the source command to read a configuration file.
In part 4 of the Getting Ready For Ubuntu 10.04 series, we wrote a script to perform a backup of our system to an external USB disk drive. The script looked like this:
#!/bin/bash
# usb_backup # backup system to external disk drive
SOURCE="/etc /usr/local /home"
DESTINATION=/media/BigDisk/backup
if [[ -d $DESTINATION ]]; then
sudo rsync -av \
--delete \
--exclude '/home/*/.gvfs' \
$SOURCE $DESTINATION
fi
You will notice that the source and destination directories are hard-coded into the SOURCE and DESTINATION constants at the beginning of the script. We will remove these and modify the script to read a configuration file instead:
#!/bin/bash
# usb_backup2 # backup system to external disk drive
CONFIG_FILE=~/.usb_backup.conf
if [[ -f $CONFIG_FILE ]]; then
. $CONFIG_FILE
fi
if [[ -d $DESTINATION ]]; then
sudo rsync -av \
--delete \
--exclude '/home/*/.gvfs' \
$SOURCE $DESTINATION
fi
Now we can create a configuration file named ~/.usb_backup.conf that contains these two lines:
SOURCE="/etc /usr/local /home"
DESTINATION=/media/BigDisk/backup
When we run the script, the contents of the configuration file are read, and the SOURCE and DESTINATION constants are added to the script's environment just as though the lines appeared in the text of the script itself. The
if [[ -f $CONFIG_FILE ]]; then
. $CONFIG_FILE
fi
construct is a common way to set up the reading of a file. In fact, if you look at your ~/.profile or ~/.bash_profile startup files, you will probably see something like this:
if [ -f "$HOME/.bashrc" ]; then
. "$HOME/.bashrc"
fi
which is how your environment is established when you log in at the console.
While our script in its current form requires the configuration file to define the SOURCE and DESTINATION constants, it's easy to make the file optional by setting default values for the constants when the configuration file is missing or does not contain the required definitions. We will modify our script to set default values and to support a command line option (-c) that specifies an alternate configuration file name:
#!/bin/bash
# usb_backup3 # backup system to external disk drive
# Look for alternate configuration file
if [[ $1 == -c ]]; then
CONFIG_FILE=$2
else
CONFIG_FILE=~/.usb_backup.conf
fi
# Source configuration file
if [[ -f $CONFIG_FILE ]]; then
. $CONFIG_FILE
fi
# Fill in any missing values with defaults
SOURCE=${SOURCE:-"/etc /usr/local /home"}
DESTINATION=${DESTINATION:-/media/BigDisk/backup}
if [[ -d $DESTINATION ]]; then
sudo rsync -av \
--delete \
--exclude '/home/*/.gvfs' \
$SOURCE $DESTINATION
fi
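The ${SOURCE:-...} and ${DESTINATION:-...} lines rely on the shell's default-value parameter expansion. A minimal sketch of the behavior, reusing the path from the script:

```shell
#!/bin/bash
# sketch: ${parameter:-word} expands to word only when parameter is unset or empty
unset DESTINATION
echo "${DESTINATION:-/media/BigDisk/backup}"    # parameter unset, so the default appears

DESTINATION=/mnt/other
echo "${DESTINATION:-/media/BigDisk/backup}"    # parameter set, so its value appears
```

Crucially, the expansion does not assign anything when the parameter already has a value, which is why a sourced configuration file wins over the defaults.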
Code Libraries
Since the files read by the source command can contain any valid shell commands, source is often used to load collections of shell functions into scripts. This allows central libraries of common routines to be shared by multiple scripts. This can make code maintenance considerably easier.
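As a sketch (the file name and functions below are illustrative assumptions, not part of the original scripts), a shared library might look like this:

```shell
# ~/lib/backup_lib.sh -- hypothetical shared function library
# A script loads it with:  . ~/lib/backup_lib.sh

# print an error message and terminate the calling script
error_exit() {
    echo "${0##*/}: ${1:-unknown error}" >&2
    exit 1
}

# succeed if the argument names a readable directory
dir_readable() {
    [[ -d $1 && -r $1 ]]
}
```

After sourcing the library, a script can call error_exit "backup failed" exactly as if the function were defined locally.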
Security Considerations
On the other hand, since sourced files can contain any valid shell command, care must be taken to make sure that nothing malicious is placed in a file that is to be sourced. This holds especially true for any script that is to be run by the superuser. When writing such scripts, make sure that the superuser owns the file to be sourced and that the file is not world-writable. Some code like this could do the trick:
if [[ -O $CONFIG_FILE ]]; then
if [[ $(stat --format %a $CONFIG_FILE) == 600 ]]; then
. $CONFIG_FILE
fi
fi
Further Reading
The bash man page:
- BUILTIN COMMANDS (source command)
- CONDITIONAL EXPRESSIONS (testing file attributes)
The Linux Command Line:
- Chapter 35 - Strings And Numbers (parameter expansions to set default values)
Thursday, March 25, 2010
byobu
One of the features I have discovered during my testing of the Ubuntu 10.04 beta is a nifty program called byobu, a slick wrapper for the screen terminal multiplexing program. If you are unfamiliar with screen, please see the final installment of my Building An All-Text Linux Workstation series.
byobu was introduced in Ubuntu 09.10 and continues in 10.04. The name "byobu" comes from a Japanese word meaning "folding screen." Essentially, byobu adds an improved user interface to screen and takes advantage of some advanced features in recent screen versions. On a system so equipped, you launch byobu like this:
bshotts@twin7:~$ byobu
and you get a display like this:
We get a screen terminal session with system status notifications at the bottom of the terminal display.
byobu allows the use of function keys in addition to the usual Ctrl-a sequences supported by screen. As indicated at the bottom right corner of the terminal, the F9 key invokes a menu:
Function Key Assignments
byobu assigns function keys to perform many of the common operations performed by screen, such as opening new terminal sessions, moving from session to session, and entering scroll back mode. Selecting the "Help" item on the menu displays a map of the program's function key assignments:
Status Notifications
Also of interest is the list of available system indicators which is visible when the "Toggle status notifications" item is selected on the menu:
As you can see this is a pretty big list.
Other Features
It's also possible to create default windows, that is, terminal sessions that appear automatically when you start byobu. This is handy if you have several applications that you want to start when you log in at the terminal.
Further Reading
- The screen and byobu man pages
- A Wikipedia article on Japanese folding screens
Monday, March 1, 2010
dialog
Now that we have covered launching a script from the GUI, it's time to look at how to make our scripts a little more "graphical." Interestingly, there are a number of tools that we can use to produce user interfaces for shell scripts. In this installment, we will concentrate on the most mature of these tools, dialog.
The dialog program lets scripts display dialog boxes for user interaction. It does this in a text-based manner, using the ncurses library. It can produce a variety of user interface elements and can pass user input back to a script for processing. The dialog package is not installed by default on most Linux distributions, but is readily available in most distribution repositories.
dialog has been around for a long time and has gone through a number of incarnations. The current version (which is maintained by the Debian project) contains a number of features not found in earlier versions.
To demonstrate a few of the powers of dialog, we will write a script that displays a series of dialog boxes and provide some ideas of how to work with the output. We'll start the script with the following code:
#!/bin/bash
# dialog-demo: script to demonstrate the dialog utility
BACKTITLE="Dialog Demonstration"
# message box
dialog --title "Message Box" \
--backtitle "$BACKTITLE" \
--msgbox "This is a message box which simply\
displays some text and an OK button." \
9 50
The script invokes the dialog program with a series of options to set the title for the dialog box, the title for the background screen, the dialog box type (in this example a message box), the text to appear in the box and the size of the box (9 lines high by 50 characters wide). When we run the script we get the following results:
Next we'll add the following code to the end of the script to generate another kind of dialog called a yes/no box. In addition to the dialog box itself, we will also include some code to act on the results of the user's selection:
# yes/no box
dialog --title "Yes/No Box" \
--backtitle "$BACKTITLE" \
--yesno "This is a yes/no box. It has two buttons." \
9 50
# examine the exit status and act on it
case $? in
0) dialog --title "Yes" \
--backtitle "$BACKTITLE" \
--msgbox "You answered \"Yes\"" \
9 50
;;
1) dialog --title "No" \
--backtitle "$BACKTITLE" \
--msgbox "You answered \"No\"" \
9 50
;;
255) dialog --title "Esc" \
--backtitle "$BACKTITLE" \
--msgbox "You pressed Esc" \
9 50
;;
esac
The yes/no box offers the user three choices: yes, no, and a cancel, which occurs if the Esc key is pressed. To select a button, the user may use the Tab key to switch from button to button or, if dialog is being run inside a terminal emulator on a graphical desktop, the mouse may be used to select a button.
dialog communicates the user's choice back to the script via its exit status. In the code that follows the yes/no box, we evaluate the exit status. If the yes button is pressed, the exit status is 0, if the no button is pressed, the exit status is 1, and if Esc is pressed, the exit status is 255.
dialog can provide more than just simple button presses. It supports edit boxes, forms, menus, file selectors, etc. When returning more complex data, dialog outputs its results on standard error. To demonstrate how this works, we will insert the following code near the beginning of the script (just after the BACKTITLE=... line) to define a function (read_tempfile) that will display the dialog output:
TEMPFILE=/tmp/dialog-demo.$$
read_tempfile() {
# display what is returned in the temp file
local tempfile
# read file contents then delete
tempfile=$(cat $TEMPFILE)
rm $TEMPFILE
# message box to display contents
dialog --title "Tempfile Contents" \
--backtitle "$BACKTITLE" \
--msgbox "The temp file contains: $tempfile" \
0 0
}
Next, we will add the following code to the end of our script to display a menu box:
# menu
dialog --title "Menu" \
--backtitle "$BACKTITLE" \
--menu "This is a menu. Please select one of the following:" \
15 50 10 \
1 "Item number 1" \
2 "Item number 2" \
3 "Item number 3" 2> $TEMPFILE
read_tempfile
This code displays a menu containing three items. Each item consists of two elements: a "tag," which is returned when a choice is made, and an "item string," which describes the menu selection. In the example code above, we see that the first menu item has the tag "1" and the item string "Item number 1." When we execute the code, dialog displays the following menu:
To capture and display the output of the menu, we redirect the standard error of dialog to the file named in the constant TEMPFILE. The read_tempfile function assigns the contents of the temporary file to a variable (tempfile) and passes it to dialog to display in a message box. After the menu is displayed and a selection is made by the user, the message box appears and displays the data returned to the script.
Next, we will add a checklist dialog by adding the following code to the end of the script.
# checklist
dialog --title "Checklist" \
--backtitle "$BACKTITLE" \
--checklist "This is a checklist. Please select from the following:" \
15 50 10 \
1 "Item number 1" off \
2 "Item number 2" off \
3 "Item number 3" off 2> $TEMPFILE
read_tempfile
As we can see, this code is almost the same as the code used to create the menu dialog. The main difference is that the items in the checklist add a third field called "status." The value of status may be either "on" or "off" and will determine if a selection is already selected or deselected when the program is run:
The user may select one or more checkbox items by pressing the space bar.
Next, we will add the following code to the end of the script to display a file selector:
# file selector
dialog --title "File Selector" \
--backtitle "$BACKTITLE" \
--fselect ~/ 10 30 2> $TEMPFILE
read_tempfile
For the file selector, dialog is passed the base directory for the selection (in this example the directory ~/) and the size of the selector. For this example we specify 10 files in the list and a 30-character-wide dialog.
I think the file selector is easy to code, but awkward to use. You move around the dialog with the Tab key and select directories and files with the space bar.
One of the more novel types of boxes provided by dialog is the "gauge" which displays a progress bar. To try out this feature, we will add this code to the end of the script:
# gauge
# generate a stream of integers (percent) and pipe into dialog
percent=0
while (( percent < 101 )); do
echo $percent
sleep 1
percent=$((percent + 10))
done | dialog --title "Gauge" \
--backtitle "$BACKTITLE" \
--gauge "This is a gauge. It shows a progress bar." \
0 0
The gauge dialog does not output anything; rather, it accepts a stream of integers (representing the percent of completion) via standard input. To provide this, we create a while loop to generate the stream of integers and pipe the results of the loop into dialog. When the script runs, the stream of numbers causes the progress bar to advance:
Note too, that we specified the size of the dialog as 0 0. When this is done, dialog attempts to auto-size the box to fit in the minimum necessary space.
Finally, we'll add another of the novel dialogs, a calendar:
# calendar
dialog --title "Calendar" \
--backtitle "$BACKTITLE" \
--calendar "This is a calendar." \
0 0 \
0 0 0 2> $TEMPFILE
read_tempfile
The calendar dialog box is given a date in day month year format. If the year is specified as zero, as it is in this example, the current date is used. The user can then select a date and the date is returned to the script.
This concludes our brief look at dialog. As you can imagine, dialog can be very handy for adding improved user interfaces for those scripts that require it. While we have covered its general themes, dialog has many more options and features. Check out the documentation link below for the complete story.
Further Reading
Documentation for dialog:
Other dialog-like programs:
Friday, February 19, 2010
Launching Shell Scripts In GNOME
Today, we're going to look at how you launch a shell script (or other terminal-based application) in GNOME. We are all familiar with launching applications from the command line. There are advantages to this even for graphical applications. When you launch an application on the command line, you can see what is produced on standard output and standard error, which can sometimes provide valuable diagnostic information. On the other hand, there is something to be said for double-click convenience as well.
We will write a tiny shell script and then set up GNOME to allow launching the script from a desktop icon. Here's the script we will use:
#!/bin/bash
# touch_foo - Script to touch the file "foo"
if [ ! -d foo ]; then
touch foo
echo "foo is touched."
else
echo "foo is a directory."
fi
Pretty simple. This script touches a file named "foo," but if a directory with that name already exists, it displays a message and exits. We will enter this script into our text editor and save it as ~/bin/touch_foo.
Next we need to create a launcher for the script. To do this, we right click on the GNOME desktop and select "Create Launcher...":
A dialog box will appear where we can enter information about the launcher:
Since we want to see the script execute, we will select "Application in Terminal," otherwise the script will execute silently in the background:
Next, we give the launcher a name, browse for the script file, and fill in an optional comment:
You may also click on the icon at the right of the dialog box and select another icon for the launcher. After the dialog box is filled out, we can click the OK button; the launcher is created and appears on the desktop. Double-clicking the new launcher icon launches the script.
But there's a problem. The terminal appears for an instant and vanishes. Not exactly what we had in mind. What gives?
The problem is that when the script finishes, the terminal session ends. To prevent this, we have to stop the script from finishing. We could do this by having an endless loop at the end of the script, or by including a read command which will wait for user input before proceeding. We can adjust our script by adding a line at the end:
#!/bin/bash
# touch_foo - Script to touch the file "foo"
if [ ! -d foo ]; then
touch foo
echo "foo is touched."
else
echo "foo is a directory."
fi
read -p "Press Enter to continue > "
After making this change, a terminal will appear and wait until the Enter key is pressed:
Another Approach
Another way we can configure commands for the launcher is to directly launch gnome-terminal and pass the desired command as an argument. This allows us to control the configuration of the terminal. For example we can set the window title, window geometry, and terminal profile used with our command. Here is a command we can use to launch the top program and set the window title to "Top":
gnome-terminal -e /usr/bin/top -t Top
Note however, that if you want to run shell scripts this way, you must construct the command this way:
gnome-terminal -e 'bash -c touch_foo'
including the bash program in the command so that there is a program present that can interpret your script.
Further Reading
- gnome-terminal man page
- bash man page (OPTIONS section)
Wednesday, February 17, 2010
tput
In a recent post, we covered a technique that can produce colored text on the command line. Today, we will look at a more general approach to producing not only text effects, but also gaining more visual control of our terminal.
A Little History
Back in the old days, when computers were connected to remote terminals, many brands of terminals existed and they were all a little different in terms of their feature sets and capabilities. As a result, different terminals used different sets of commands to control them.
Terminals respond to codes (called control codes) embedded in the stream of text sent to them. Some of these codes are standard and familiar, like carriage return and line feed. Others, like those to turn on bold text or underlining, are not. Terminals can, in fact, perform many kinds of functions. As microprocessors became available and the advent of the personal computer loomed, terminals became increasingly "smart" and feature laden.
However, the proliferation of terminal brands and feature sets posed a problem for software developers. Software had to be painstakingly customized to support a particular terminal. What was needed was a software system that supported hardware independence, so that applications could use a standard set of commands to deal with any terminal. This problem was addressed in two ways. First, a standard set of control sequences was developed by ANSI (American National Standards Institute) and adopted (in varying degrees) by terminal manufacturers to give all terminals a common set of commands. We looked at the ANSI commands in an earlier post. The second approach was the development of an intermediary layer (much like today's notion of a device driver) that translates a standardized command into the specific control codes used by a particular terminal.
In the Unix world, there are two such systems, the original, termcap and the more recent terminfo. Both contain a database of control code sequences used by different kinds of terminals.
Enter tput
tput is a command that can query the terminfo database to see if a particular terminal can support a particular feature. It can also accept terminal commands and output (via standard output) the control code sequences for that terminal. tput is generally used like this:
tput capname [parameters...]
where capname is the name of a terminal capability and parameters are any optional parameters associated with the specified capability. For example, to output the sequence of instructions needed to move the cursor to the upper left corner of the screen (the "home" position):
tput cup 0 0
which means cursor position row 0, column 0.
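We can inspect the raw bytes of such a sequence by capturing it and piping it through od. A minimal sketch, assuming the xterm terminfo entry is installed (the -T option asks tput for a specific terminal type's sequences, which also works when standard output is not a terminal):

```shell
# Capture the "home" sequence for xterm and show its raw bytes
home=$(tput -T xterm cup 0 0)
printf '%s' "$home" | od -An -c
```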
Since tput actually outputs the sequence to standard output (you won't see the sequence since it is interpreted by your terminal emulator as an instruction), you can store the sequences in variables. Here we will store the sequences to control bold text:
bold_on=$(tput bold)
bold_off=$(tput sgr0)
Now, to highlight some text, you could:
echo "This is ${bold_on}important${bold_off}."
and you get this:
This is important.
There are a huge number of terminal capabilities, though most terminals only support a small subset. Besides changing text colors and positioning the cursor, it is possible to erase text, insert text, and control text attributes. The terminfo man page lists all the terminal capabilities and the Bash Prompt HOWTO section 6.5 (see "Further Reading" below) describes the ones most useful for ordinary screen control.
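Because support varies so much, a script can check for a capability before relying on it: tput returns a non-zero exit status when the named capability is not available for the terminal. A small sketch, again assuming an xterm terminfo entry is installed:

```shell
# Test for the bold capability and fall back gracefully if absent
if tput -T xterm bold >/dev/null 2>&1; then
    echo "bold is supported"
else
    echo "bold is not supported"
fi
```

In a real script, the -T xterm would normally be omitted so that the currently running terminal (from the TERM variable) is checked instead.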
Before we leave, here is a version of the prompt_colors script that uses tput to set foreground and background text colors:
#!/bin/bash
# prompt_colors -- demonstrate prompt color combinations.
for fore in {0..7}; do
    set_foreground=$(tput setf "$fore")
    for back in {0..7}; do
        set_background=$(tput setb "$back")
        echo -n "$set_background$set_foreground"
        printf ' F:%s B:%s ' "$fore" "$back"
    done
    echo "$(tput sgr0)"
done
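One caveat: the setf/setb capabilities use a different color numbering than ANSI, and some terminal entries provide only the ANSI-order setaf/setab capabilities. Here is a variant sketch using those instead (an adaptation, not from the original script; check your terminal's terminfo entry for which pair it supports):

```shell
#!/bin/bash
# prompt_colors_ansi -- same demonstration using the ANSI-order
# color capabilities setaf (foreground) and setab (background)
for fore in {0..7}; do
    set_foreground=$(tput setaf "$fore")
    for back in {0..7}; do
        set_background=$(tput setab "$back")
        echo -n "$set_background$set_foreground"
        printf ' F:%s B:%s ' "$fore" "$back"
    done
    echo "$(tput sgr0)"
done
```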
Further Reading
- tput man page
- terminfo man page
- Bash Prompt HOWTO, section 6.5