

This chapter is from the book
Windows PowerShell Unleashed

This chapter looks at what a shell is and describes the power that can be harnessed by interacting with a shell by walking through some basic shell commands and building a shell script from those basic commands.

Shells are a necessity when using key components of nearly all operating systems, because they make it possible to perform arbitrary actions such as traversing the file system, running commands, or using applications. As such, every computer user has interacted with a shell by typing commands at a prompt or by clicking an icon to start an application. Shells are an ever-present component of modern computing, frequently providing functionality that is not available anywhere else when working on a computer system.

In this chapter, you take a look at what a shell is and see the power that can be harnessed by interacting with a shell. To do this, you walk through some basic shell commands, and then build a shell script from those basic commands to see how they can become more powerful via scripting. Next, you take a brief tour of how shells have evolved over the past 35 years. Finally, you learn why PowerShell was created, why there was a need for PowerShell, what its inception means to scripters and system administrators, and what some of the differences between PowerShell 1.0 and PowerShell 2.0 CTP2 are.

What Is a Shell?

A shell is an interface that enables users to interact with the operating system. A shell isn't usually thought of as an application because of its ever-present nature, but it runs on a system the same as any other process. The difference between a shell and an application is that a shell's purpose is to enable users to run other applications. In some operating systems (such as UNIX, Linux, and VMS), the shell is a command-line interface (CLI); in other operating systems (such as Windows and Mac OS X), the shell is a graphical user interface (GUI).

In addition, two types of systems in wide use are often neglected in discussions of shells: networking equipment and kiosks. Networking equipment usually has a GUI shell (mostly a Web interface on consumer-grade equipment) or a CLI shell (in commercial-grade equipment). Kiosks are a completely different animal; because many kiosks are built from applications running atop a more robust operating system, often kiosk interfaces aren’t shells. However, if the kiosk is built with an operating system that serves only to run the kiosk, the interface is accurately described as a shell. Unfortunately, kiosk interfaces continue to be referred to generically as shells because of the difficulty in explaining the difference to nontechnical users.

Both CLI and GUI shells have benefits and drawbacks. For example, most CLI shells allow powerful command chaining (using commands that feed their output into other commands for further processing; this chaining is commonly referred to as the pipeline). GUI shells, however, require commands to be completely self-contained and generally do not provide a native method for directing their output into other commands. Furthermore, most GUI shells are easy to navigate, whereas CLI shells do not have an intuitive interface and require preexisting knowledge of the system to successfully complete automation tasks. Your choice of shell depends on what you're comfortable with and what's best suited to perform the task at hand.

Even though GUI shells exist, the term “shell” is used almost exclusively to describe a command-line environment, not a task you perform with a GUI application, such as Windows Explorer. Likewise, shell scripting refers to collecting commands normally entered on the command line into an executable file.

As you can see, historically there has been a distinction between graphical and nongraphical shells. An interesting development in PowerShell 2.0 CTP2 is the introduction of an alpha version of Graphical PowerShell, which provides a CLI and a script editor in the same window. Although this type of interface has been available for many years in IDE (Integrated Development Environment) editors for programming languages such as C, this alpha version of Graphical PowerShell gives a sense of the direction from the PowerShell team on where they see PowerShell going in the future—a fully featured CLI shell with the added benefits of a natively supported GUI interface.

Basic Shell Use

Many shell commands, such as listing the contents of the current working directory, are simple. However, shells can quickly become complex when more powerful results are required. The following example uses the Bash shell to list the contents of the current working directory.

$ ls
apache2 bin     etc     include lib     libexec man     sbin    share   var

However, often seeing just filenames isn't enough, so a command-line argument needs to be passed to the command to get more details about the files.

The following command gives you more detailed information about each file using a command-line argument.

$ ls -l
total 8
drwxr-xr-x    13 root  admin   442 Sep 18 20:50 apache2
drwxrwxr-x    57 root  admin  1938 Sep 19 22:35 bin
drwxrwxr-x     5 root  admin   170 Sep 18 20:50 etc
drwxrwxr-x    30 root  admin  1020 Sep 19 22:30 include
drwxrwxr-x   102 root  admin  3468 Sep 19 22:30 lib
drwxrwxr-x     3 root  admin   102 Sep 18 20:11 libexec
lrwxr-xr-x     1 root  admin     9 Sep 18 20:12 man -> share/man
drwxrwxr-x     3 root  admin   102 Sep 18 20:11 sbin
drwxrwxr-x    13 root  admin   442 Sep 19 22:35 share
drwxrwxr-x     3 root  admin   102 Jul 30 21:05 var

Now you need to decide what to do with this information. As you can see, directories are interspersed with files, making it difficult to tell them apart. If you want to view only directories, you have to pare down the output by piping the ls command output into the grep command. In the following example, the output has been filtered to display only lines starting with the letter d, which signifies that the file is a directory.

$ ls -l | grep '^d'
drwxr-xr-x    13 root  admin   442 Sep 18 20:50 apache2
drwxrwxr-x    57 root  admin  1938 Sep 19 22:35 bin
drwxrwxr-x     5 root  admin   170 Sep 18 20:50 etc
drwxrwxr-x    30 root  admin  1020 Sep 19 22:30 include
drwxrwxr-x   102 root  admin  3468 Sep 19 22:30 lib
drwxrwxr-x     3 root  admin   102 Sep 18 20:11 libexec
drwxrwxr-x     3 root  admin   102 Sep 18 20:11 sbin
drwxrwxr-x    13 root  admin   442 Sep 19 22:35 share
drwxrwxr-x     3 root  admin   102 Jul 30 21:05 var

However, now that you have only directories listed, the other information such as date, permissions, size, and so on is superfluous because only the directory names are needed. So in this next example, you use the awk command to print only the last column of output shown in the previous example.

$ ls -l | grep '^d' | awk '{ print $NF }'
apache2
bin
etc
include
lib
libexec
sbin
share
var

The result is a simple list of directories in the current working directory. This command is fairly straightforward, but it’s not something you want to type every time you want to see a list of directories. Instead, we can create an alias or command shortcut for the command that we just executed.

$ alias lsd="ls -l | grep '^d' | awk '{ print \$NF }'"

Then, by using the lsd alias, you can get a list of directories in the current working directory without having to retype the command from the previous examples.

$ lsd
apache2
bin
etc
include
lib
libexec
sbin
share
var

As you can see, using a CLI shell offers the potential for serious power when you’re automating simple, repetitive tasks.

Basic Shell Scripts

Working in a shell typically consists of typing each command, interpreting the output, deciding how to put that data to work, and then combining the commands into a single, streamlined process. Anyone who has gone through dozens of files, adding a single line at the end of each one, will agree that scripting the process is far more efficient than editing each file by hand, and it greatly reduces the potential for data entry errors. In many ways, scripting makes as much sense as breathing.

You’ve seen how commands can be chained together in a pipeline to manipulate output from the preceding command, and how a command can be aliased to minimize typing. Command aliasing is the younger sibling of shell scripting and gives the command line some of the power of shell scripts. However, shell scripts can harness even more power than aliases.

Collecting single-line commands and pipelines into files for later execution is a powerful technique. Putting output into variables for further manipulation and reference later in the script takes the power to the next level. Wrapping any combination of commands into recursive loops and flow control constructs takes scripting to the same level of sophistication as programming.
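As a minimal sketch of these ideas (the names alpha, beta, and gamma are placeholder items, not anything from the examples above), the following script captures command output in a variable and then loops over a list:

```shell
#!/bin/bash

# Capture the output of a command in a variable for later reference.
ENTRY_COUNT=$(ls | wc -l)
echo "This directory contains ${ENTRY_COUNT} entries."

# Loop over a list, acting on each item in turn; the same construct
# works on the output of any command substitution.
for NAME in alpha beta gamma; do
    echo "Processing ${NAME}"
done
```
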

Some may say that scripting isn’t programming, but this distinction is quickly becoming blurred with the growing variety and power of scripting languages these days. With this in mind, let’s try developing the one-line Bash command from the previous section into something more useful.

The lsd command alias from the previous example (referencing the Bash command ls -l | grep '^d' | awk '{ print $NF }') produces a listing of each directory in the current working directory. Now, suppose you want to expand this functionality to show how much space each directory uses on the disk. The Bash utility that reports on disk usage, du, operates on a specified directory's entire contents or produces a summary of a directory's overall usage, and by default it reports sizes in disk blocks rather than kilobytes (which is why the script that follows passes the -k switch). With all that in mind, if you want to know each directory's disk usage as a freestanding entity, you need to get and display the information for each directory, one by one. The following examples show what this process would look like as a script.

Notice that the script is built around the command line you worked on in the previous section. The for loop parses through the directory list the command returns, assigning each directory name to the DIR variable and executing the code between the do and done keywords.

#!/bin/bash

for DIR in $(ls -l | grep '^d' | awk '{ print $NF }'); do
    du -sk ${DIR}
done

Saving the previous code as a script file named directory.sh, marking it executable (chmod +x directory.sh), and then running the script in a Bash session produces the following output.

$ ./directory.sh
17988   apache2
5900    bin
72      etc
2652    include
82264   lib
0       libexec
0       sbin
35648   share
166768  var

Initially, this output doesn’t seem especially helpful. With a few additions, you can build something considerably more useful. In this example, we add an additional requirement to report the names of all directories using more than a certain amount of disk space. To achieve this requirement, modify the directory.sh script file as shown in this next example.

#!/bin/bash

PRINT_DIR_MIN=35000

for DIR in $(ls -l | grep '^d' | awk '{ print $NF }'); do
    DIR_SIZE=$(du -sk ${DIR} | cut -f 1)
    if [ ${DIR_SIZE} -ge ${PRINT_DIR_MIN} ]; then
        echo ${DIR}
    fi
done

One of the first things that you’ll notice about this version of directory.sh is that we have started adding variables. PRINT_DIR_MIN is a value that represents the minimum number of kilobytes a directory uses to meet the printing criteria. This value could change fairly regularly, so we want to keep it as easily editable as possible. Also, we could reuse this value elsewhere in the script so that we don’t have to change the amount in multiple places when the number of kilobytes changes.

You might be thinking the find command would be easier to use. However, although find is terrific for browsing through directory structures, it is too cumbersome for simply viewing the current directory, so the convoluted ls command is used instead. If we were looking for files in the hierarchy, the find command would be the most appropriate choice. However, because we are simply looking for directories in the current directory, the ls command is the best tool for the job in this situation.
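For comparison, here is how the same directory list could be produced with find. This is a sketch; the -mindepth and -maxdepth options are supported by GNU and BSD find, but they are not strictly POSIX:

```shell
# List only the directories directly beneath the current directory.
# -maxdepth 1 keeps find from descending into subdirectories,
# -mindepth 1 excludes the current directory itself, and -type d
# restricts the results to directories. sed strips the leading ./
find . -mindepth 1 -maxdepth 1 -type d | sed 's|^\./||'
```
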

The following is an example of the output rendered by the script so far.

$ ./directory.sh
lib
share
var

This output can be used in a number of ways. For example, systems administrators might use this script to watch user directories for disk usage thresholds if they want to notify users when they have reached a certain level of disk space. For this purpose, knowing when a certain percentage of users reaches or crosses the threshold would be useful.

In our next Bash scripting example, we modify the directory.sh script to display a message when a certain percentage of directories are a specified size.

#!/bin/bash

DIR_MIN_SIZE=35000
DIR_PERCENT_BIG_MAX=23

DIR_COUNTER=0
BIG_DIR_COUNTER=0

for DIR in $(ls -l | grep '^d' | awk '{ print $NF }'); do
    DIR_COUNTER=$(expr ${DIR_COUNTER} + 1)
    DIR_SIZE=$(du -sk ${DIR} | cut -f 1)
    if [ ${DIR_SIZE} -ge ${DIR_MIN_SIZE} ]; then
        BIG_DIR_COUNTER=$(expr ${BIG_DIR_COUNTER} + 1)
    fi
done

if [ ${BIG_DIR_COUNTER} -gt 0 ]; then
    DIR_PERCENT_BIG=$(expr $(expr ${BIG_DIR_COUNTER} \* 100) / ${DIR_COUNTER})
    if [ ${DIR_PERCENT_BIG} -gt ${DIR_PERCENT_BIG_MAX} ]; then
        echo "${DIR_PERCENT_BIG} percent of the directories are larger than ${DIR_MIN_SIZE} kilobytes."
    fi
fi

Now, the preceding example barely looks like what we started with. The variable name PRINT_DIR_MIN has been changed to DIR_MIN_SIZE because we’re not printing anything as a direct result of meeting the minimum size. The DIR_PERCENT_BIG_MAX variable has been added to indicate the maximum allowable percentage of directories at or above the minimum size. Also, two counters have been added: one (DIR_COUNTER) to count the directories and one (BIG_DIR_COUNTER) to count the directories exceeding the minimum size.

Inside the for loop, DIR_COUNTER is incremented, and the if statement in the for loop now simply increments BIG_DIR_COUNTER instead of printing the directory’s name. An if statement has been added after the for loop to do additional processing, figure out the percentage of directories exceeding the minimum size, and then print the message if necessary. With these changes, the script now produces the following output.

$ ./directory.sh
33 percent of the directories are larger than 35000 kilobytes.

The output shows that 33 percent of the directories are 35MB or more. By modifying the echo line in the script to feed a pipeline into a mail delivery command and tweaking the size and percentage thresholds for the environment, systems administrators can schedule this shell script to run at specified intervals and produce directory size reports easily. If administrators want to get fancy, they can make the size and percentage thresholds configurable via command-line parameters.
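As a sketch of that last idea, the two thresholds can be read from positional parameters with hard-coded fallbacks (the variable names and defaults mirror the script above):

```shell
#!/bin/bash

# Use the first and second command-line arguments if supplied,
# falling back to the original hard-coded defaults otherwise.
DIR_MIN_SIZE=${1:-35000}
DIR_PERCENT_BIG_MAX=${2:-23}

echo "Flagging directories of ${DIR_MIN_SIZE}KB or more;"
echo "alerting when more than ${DIR_PERCENT_BIG_MAX} percent qualify."
```

Invoked as ./directory.sh 50000 30, the script would then use 50,000KB and 30 percent with no editing required; run with no arguments, it behaves exactly as before.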

As you can see, even a basic shell script can be powerful. With a mere 22 lines of code, we have a useful shell script. Some quirks of the script might seem inconvenient (using the expr command for simple math can be tedious, for example), but every programming language has its strengths and weaknesses. As a rule, some tasks you need to do are convoluted to perform, no matter what language you’re using.
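For instance, each expr invocation in the script has a more readable equivalent in Bash's built-in arithmetic expansion; this sketch shows a counter update and the percentage calculation rewritten that way, with the same results:

```shell
#!/bin/bash

DIR_COUNTER=9
BIG_DIR_COUNTER=2

# Equivalent to DIR_COUNTER=$(expr ${DIR_COUNTER} + 1)
DIR_COUNTER=$((DIR_COUNTER + 1))

# Equivalent to the nested expr calls that compute the percentage.
DIR_PERCENT_BIG=$((BIG_DIR_COUNTER * 100 / DIR_COUNTER))

echo "${DIR_PERCENT_BIG} percent"   # 2 * 100 / 10 = 20 percent
```
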

The moral of this story is that shell scripting, or scripting in general, can make life much easier. For example, say your company merges with another company. As part of that merger, you have to create 1,000 user accounts in Active Directory or another authentication system. Usually, a systems administrator grabs the list, sits down with a cup of coffee, and starts clicking or typing away. If an administrator manages to get a migration budget, he can hire an intern or consultants to do the work or purchase migration software. But why bother performing repetitive tasks or spending money that could be put to better use (such as a bigger salary)?

Instead, the answer should be to automate those iterative tasks by using scripting. Automation is the purpose of scripting. As a systems administrator, you should take advantage of scripting with CLI shells or command interpreters to gain access to the same functionality developers have when coding the systems you manage. However, scripting tools tend to be more open, flexible, and focused on the tasks that you as an IT professional need to perform, as opposed to development tools that provide a framework for building an entire application from a blank canvas.

A Shell History

The first shell in wide use was the Bourne shell, the standard user interface for the UNIX operating system, which is still required by UNIX systems during the startup sequence. This robust shell provided pipelines and conditional and recursive command execution. It was developed by C programmers for C programmers.

Oddly, however, despite being written by and for C programmers, the Bourne shell didn’t have a C-like coding style. This lack of a similarity to the C language drove the invention of the C shell, which introduced more C-like programming structures. While the C shell inventors were building a better mousetrap, they decided to add command-line editing and command aliasing (defining command shortcuts), which eased the bane of every UNIX user’s existence: typing. The less a UNIX user has to type to get results, the better.

Although most UNIX users liked the C shell, learning a completely new shell was a challenge for some. So the Korn shell was invented, which added a number of the C shell's features to the Bourne shell. Because the Korn shell was a commercially licensed product, the open-source software movement needed a free shell for Linux and FreeBSD. The collaborative result was the Bourne Again Shell, or Bash, developed by the Free Software Foundation.

Throughout the evolution of UNIX and the birth of Linux and FreeBSD, other operating systems were introduced along with their own shells. Digital Equipment Corporation (DEC) introduced Virtual Memory System (VMS) to compete with UNIX on its VAX systems. VMS had a shell called Digital Command Language (DCL) with a verbose syntax, unlike that of its UNIX counterparts. Also unlike them, DCL wasn't case sensitive, nor did it provide pipelines.

Somewhere along the line, the PC was born. IBM took the PC to the business market, and Apple rebranded roughly the same hardware technology and focused on consumers. Microsoft made DOS run on the IBM PC, acting as both kernel and shell and including some features of other shells. (The pipeline syntax was inspired by UNIX shells.)

Following DOS was Windows, which quickly evolved from an application into an operating system. Windows introduced a GUI shell, which has become the basis for Microsoft shells ever since. Unfortunately, GUI shells are notoriously difficult to script, so Windows continued to provide a DOSShell-like environment, later improved with a new executable (cmd.exe instead of command.com) and a more robust set of command-line editing features. Regrettably, this change also meant that shell scripts in Windows still had to be written in the DOSShell syntax for collecting and executing command groupings.

Over time, Microsoft realized its folly and decided systems administrators should have better ways to manage Windows systems. Windows Script Host (WSH) was introduced in Windows 98, providing a native scripting solution with access to the underpinnings of Windows. It was a library that enabled scripting languages to use Windows in a powerful and efficient manner. WSH is not its own language, however, so a WSH-compliant scripting language was required to take advantage of it, such as JScript, VBScript, Perl, Python, KiXtart, or Object REXX. Some of these languages are quite powerful in performing complex processing, so WSH seemed like a blessing to Windows systems administrators.

However, the rejoicing was short lived because there was no guarantee that the WSH-compliant scripting language you chose would be readily available or a viable option for everyone. The lack of a standard language and environment for writing scripts made it difficult for users and administrators to incorporate automation by using WSH. The only way to be sure the scripting language or WSH version would be compatible on the system being managed was to use a native scripting language, which meant using DOSShell and enduring the problems that accompanied it. In addition, WSH opened a large attack vector for malicious code to run on Windows systems. This vulnerability gave rise to a stream of viruses, worms, and other malicious programs that have wreaked havoc on computer systems, thanks to WSH’s focus on automation without user intervention.

The end result was that systems administrators viewed WSH as both a blessing and a curse. Although WSH presented a good object model and access to a number of automation interfaces, it wasn’t a shell. It required using Wscript.exe and Cscript.exe; scripts had to be written in a compatible scripting language, and its attack vulnerabilities posed a security challenge. Clearly, a different approach was needed for systems management; over time, Microsoft reached the same conclusion.

Enter PowerShell

Microsoft didn’t put a lot of effort into a CLI shell; instead, it concentrated on a GUI shell, which is more compatible with its GUI-based operating systems. (Mac OS X didn’t put any effort into a CLI shell, either; it used the Bash shell.) However, the resulting DOSShell had a variety of limitations, such as conditional and recursive programming structures not being well documented and heavy reliance on goto statements. These drawbacks hampered shell scripters for years, and they had to use other scripting languages or write compiled programs to solve common problems.

The introduction of WSH as a standard in the Windows operating system offered a robust alternative to DOSShell scripting. Unfortunately, WSH presented a number of challenges, as discussed in the preceding section. Furthermore, WSH didn't offer the CLI shell experience that UNIX and Linux administrators had enjoyed for years, which resulted in Windows administrators being teased by their UNIX counterparts for the lack of a CLI shell and its benefits.

Luckily, Jeffrey Snover (the architect of PowerShell) and others on the PowerShell team realized that Windows needed a strong, secure, and robust CLI shell for systems management. Enter PowerShell. PowerShell was designed as a shell with full access to the underpinnings of Windows via the .NET Framework, Component Object Model (COM) objects, and other methods. It also provided an execution environment that’s familiar, easy, and secure. PowerShell is aptly named, as it puts the power into the Windows shell. For users wanting to automate their Windows systems, the introduction of PowerShell was exciting because it combined the power of WSH with the familiarity of a traditional shell.

PowerShell provides a powerful native scripting language, so scripts can be ported to all Windows systems without worrying about whether a particular language interpreter is installed. You might have gone through the rigmarole of scripting a solution with WSH in Perl, Python, VBScript, JScript, or another language, only to find that the next system you worked on didn’t have that interpreter installed. At home, users can put whatever they want on their systems and maintain them however they see fit, but in a workplace, that option isn’t always viable. PowerShell solves that problem by removing the need for nonnative interpreters. It also solves the problem of wading through Web sites to find command-line equivalents for simple GUI shell operations and coding them into .cmd files. Last, PowerShell addresses the WSH security problem by providing a platform for secure Windows scripting. It focuses on security features such as script signing, lack of executable extensions, and execution policies (which are restricted by default).

For anyone who needs to automate administration tasks on a Windows system, PowerShell provides a much-needed injection of power. Its object-oriented nature boosts the power available to you, too. If you’re a Windows systems administrator or scripter, becoming a PowerShell expert is highly recommended.

PowerShell is not just a fluke or a side project at Microsoft. The PowerShell team succeeded at creating an amazing shell and winning support within Microsoft for its creation. For example, the Exchange product team adopted PowerShell as the backbone of the management interface in Exchange Server 2007. That was just the start. Other product groups at Microsoft, such as System Center Operations Manager 2007, System Center Data Protection Manager V2, and System Center Virtual Machine Manager, are being won over by what PowerShell can do for their products. In fact, PowerShell is the approach Microsoft has been seeking for a general management interface to Windows-based systems. Over time, PowerShell could replace current management interfaces, such as cmd.exe, WSH, CLI tools, and so on, and become integrated into the Windows operating system as its backbone management interface. With the introduction of PowerShell, Microsoft has addressed a need for CLI shells. The sky is the limit for what Windows systems administrators and scripters can achieve with it.

New Capabilities in PowerShell 2.0 CTP2

With the release of PowerShell 2.0 CTP2, the PowerShell team has expanded the capabilities of PowerShell 1.0 to include a number of key new features. Although PowerShell 2.0’s final feature set is likely to change from the CTP2 release, these features are central to PowerShell 2.0 and are expected to make it into the final release of the product.

The first major new feature of PowerShell 2.0 CTP2 is the addition of PowerShell Remoting. In a major step forward from the original release of PowerShell 1.0, PowerShell 2.0 CTP2 provides support for running cmdlets and scripts on a remote machine. The Windows Remote Management service (WinRM), an implementation of the WS-Management protocol, is used to accomplish this, and a new cmdlet named Invoke-Expression is used to designate the target machine and the command to be executed. The following code example shows the general usage of the Invoke-Expression cmdlet to run the command get-process powershell on a remote computer named XP1.

PS C:\> invoke-expression -comp XP1 -command "get-process powershell"

Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------    -----      ----- -----   ------     -- -----------
    522      12    30652      29076   158     3.70   1168 powershell

PS C:\>

Another new feature of PowerShell 2.0 CTP2 is the introduction of background jobs, or PSJobs. A PSJob is simply a command or expression that executes asynchronously, immediately freeing up the command prompt for other tasks. A new series of cmdlets related to PSJobs is included, enabling PSJobs to be started, stopped, paused, and listed, and their results to be retrieved and analyzed.

Also included in PowerShell 2.0 CTP2 is new functionality called ScriptCmdlets. Previously, cmdlets had to be written in a .NET Framework programming language such as C#, which made it a challenge for many scripters to create their own cmdlets. In this release of PowerShell, the ScriptCmdlets functionality enables scripters to write their own cmdlets with no more effort than writing a PowerShell function. Although ScriptCmdlets are handled differently from compiled cmdlets and have certain limitations in this release of PowerShell (such as lack of support for help files), this functionality makes it far easier for scripters to extend PowerShell to address their specific requirements.

The last new feature of PowerShell 2.0 CTP2 that we discuss in this chapter is the introduction of Graphical PowerShell. Graphical PowerShell is currently in an early alpha version, but includes a number of powerful new capabilities that enhance the features of the basic PowerShell CLI shell. Graphical PowerShell provides an interface with both an interactive shell pane and a multi-tabbed scripting pane, as well as the ability to launch multiple shell processes (also known as runspaces) from within Graphical PowerShell.

Summary

In summary, this chapter has served as an introduction to what a shell is, where shells came from, how to use a shell, and how to create a basic shell script. While learning these aspects about shells, you have also learned why scripting is so important to systems administrators. As you have come to discover, scripting enables systems administrators to automate repetitive tasks. In doing so, task automation enables systems administrators to perform their jobs more effectively, freeing them to perform more important business-enhancing tasks.

In addition to learning about shells, you have also been introduced to what PowerShell is and why it was needed. As explained, PowerShell is the replacement for WSH, which, although powerful, had a number of shortcomings (security and interoperability being the most noteworthy). PowerShell was also needed because Windows lacked a viable CLI that could be used to easily complete complex automation tasks. The end result of replacing WSH and improving on the Windows CLI is PowerShell, which is built around the .NET Framework and brings a much-needed injection of backbone to the world of Windows scripting and automation. Lastly, the key new features of PowerShell 2.0 CTP2 were reviewed at a high level; detailed analysis of these new capabilities is provided in subsequent chapters.








