Modern multiprocessing computers have a great deal going on in any single slice of time. Beyond the many users who may be logged in, many other processes are running in the background. Communication between processes through sockets and pipes is a common occurrence in the systems programming world; I will only illustrate some basic examples of this large area.
We have seen earlier that it is possible to send output from a program through a pipe and write it to a file, as well as send the contents of a file through a pipe to a program. Much more than these operations is possible, and the best place to start is with fork(). fork is the basic way to start a new process in *NIX. Deep inside the system, a clone of the currently running process (your program) is created, with its own memory space and process ID. It can be put to work, and in the end it exits; the 'parent' process can then collect its exit status. The relationship between the two is described as parent / child. Here is a basic example:
#!/usr/bin/perl -w
$parent_pid = getppid();
$child_pid = fork();
if($child_pid == 0) {
    print "I am the child of ", $parent_pid, "\n";
}
else {
    print "I am the parent of ", $child_pid, "\n";
}
The output at the time was:
I am the child of 2222
I am the parent of 3080
The $parent_pid = getppid(); call returns the process ID of the
current process's parent; when the script is run from a shell, that is the
shell itself (2222 above). After we execute
$child_pid = fork();, two things happen. First of all, at that
exact point we create a duplicate of the program, and that duplicate
resumes running at the same spot. In the parent, fork() returns
the process ID of the new child; in the child, it returns 0 (the child can
still find out who its real parent is with getppid()). When both programs
reach the if statement, the child takes the top branch and the parent
takes the bottom one.
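A slightly more defensive version of the same idea (my own sketch, not from the example above): fork() returns undef when the system cannot create a new process, so it is worth checking for that before comparing against 0. It also shows the child asking for its real parent with getppid() after the fork:

```perl
#!/usr/bin/perl -w
$pid = fork();
if (!defined $pid) {
    die "fork failed: $!";        # no child process was created
}
elsif ($pid == 0) {
    # Child: fork() returned 0; getppid() now names the Perl parent.
    print "child: my parent is ", getppid(), "\n";
    exit;
}
else {
    # Parent: fork() returned the child's process ID.
    print "parent: my child is $pid\n";
    waitpid($pid, 0);             # collect the child's exit status
}
```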
This is all well and good, but what can it be used for? Lots of things, really,
but let's start slow. Why don't we try to execute some commands? We have
access to two different processes (and we can create more), so let's make
them do a little work. We will make the child execute the
ls -l command (long listing of the current directory), and we will
make the parent do the ps command (process list). Here it goes:
#!/usr/bin/perl -w
$parent_pid = getppid();
$child_pid = fork();
if($child_pid == 0) {
    print "I am the child of ", $parent_pid, ", and I will execute 'ls -l'\n";
    exec("ls -l");
}
else {
    print "I am the parent of ", $child_pid, ", and I will execute 'ps'\n";
    exec("ps");
}
The output at the time was:
I am the child of 2222, and I will execute 'ls -l'
total 320
-rw-rw-r-- 1 jason jason 27 Feb 19 12:04 data.txt
-rwx------ 1 jason jason 45 Feb 23 10:03 HelloWorld.pl
-rw-rw-r-- 1 jason jason 8521 Feb 23 09:18 hist.txt
-rwxr-xr-x 1 jason jason 4354 Feb 19 12:06 hw.pl
-rw-rw-r-- 1 jason jason 5407 Feb 24 22:28 outline.txt
-rw-rw-r-- 1 jason jason 942 Feb 23 10:38 out.txt
-rw-rw-r-- 1 jason jason 251443 Feb 23 08:13 PerlTimeline.pdf
drwxrwxr-x 2 jason jason 4096 Feb 25 15:51 present
-rwx------ 1 jason jason 319 Feb 25 15:53 ps.pl
-rwx------ 1 jason jason 226 Feb 25 15:07 readdir.pl
-rwx------ 1 jason jason 252 Feb 25 12:47 read.pl
-rwx------ 1 jason jason 141 Feb 25 12:37 var2.pl
-rwx------ 1 jason jason 2908 Feb 25 10:19 var.pl
-rwx------ 1 jason jason 477 Feb 25 14:14 write.pl
I am the parent of 3099, and I will execute 'ps'
PID TTY TIME CMD
2222 pts/3 00:00:06 bash
2259 pts/3 00:00:04 nedit
3059 pts/3 00:00:04 nedit
3098 pts/3 00:00:00 ps
3099 pts/3 00:00:00 ls <defunct>
The basic idea is that both processes did what they were told, then made a
hasty exit. Notice that in the output of the parent process (the
ps command) we can still see the 'remains' of the dead child.
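About that <defunct> entry: it is a 'zombie', a child that has exited but whose exit status has not yet been collected by its parent. A sketch (my own addition, not part of the example above) of how the parent can reap the child with waitpid() and inspect the exit code stored in $?:

```perl
#!/usr/bin/perl -w
$child_pid = fork();
if ($child_pid == 0) {
    exec("ls -l") or die "exec failed: $!";   # the child becomes ls
}
else {
    waitpid($child_pid, 0);     # reap the child: no <defunct> left behind
    printf "child exited with status %d\n", $? >> 8;  # high byte = exit code
}
```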
Now we have them working, but can they talk to each other? Perhaps collaborate on a single project? Of course they can. It is easy for any number of processes to communicate using a pipe. In this next example, I create a pipe BEFORE ANYTHING ELSE. This ensures that both processes (the parent and the child) share the same pipe after the fork. After that we split them up as we did before. One process closes the read end (and writes into the write end), while the other closes the write end (and reads from the read end). Closing the unused ends is good practice: it prevents stray open handles from keeping the pipe alive and confusing the reader about when the data has ended. Here it goes:
#!/usr/bin/perl -w
pipe(README, WRITEME);
$parent_pid = getppid();
$child_pid = fork();
if($child_pid == 0) {
    print "I am the child of ", $parent_pid, "\n";
    close(README);
    print WRITEME "Hey There Pops!";
    exit;
}
else {
    print "I am the parent of ", $child_pid, "\n";
    waitpid($child_pid, 0);
    close(WRITEME);
    @strings = <README>;
    print "The Parent read: ";
    foreach $string (@strings) {
        print $string;
    }
    print "\n";
}
The output should be:
I am the child of 2222
I am the parent of 3173
The Parent read: Hey There Pops!
Another important feature to note is that I forced the parent to 'wait' until
the child was done before it could continue, using the
waitpid($child_pid, 0); call. This guarantees the child has finished
writing (and has exited) before the parent reads from the pipe. Again, I was
able to slurp all of the data from the pipe into an array (a handy little
feature; no need to read byte by byte as in C) and print it out at the end.
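A single pipe only carries data one way. For a genuine back-and-forth conversation, one common pattern is to create two pipes before forking, one for each direction. This is my own sketch extending the example above; the handle names (FROM_PARENT, TO_CHILD, and so on) are invented for the illustration:

```perl
#!/usr/bin/perl -w
pipe(FROM_PARENT, TO_CHILD);    # parent writes TO_CHILD, child reads FROM_PARENT
pipe(FROM_CHILD, TO_PARENT);    # child writes TO_PARENT, parent reads FROM_CHILD
$child_pid = fork();
if ($child_pid == 0) {
    close(TO_CHILD);
    close(FROM_CHILD);
    $question = <FROM_PARENT>;          # wait for one line from the parent
    chomp $question;
    print TO_PARENT "You asked: $question\n";
    close(TO_PARENT);                   # flushes and signals end-of-data
    exit;
}
else {
    close(FROM_PARENT);
    close(TO_PARENT);
    print TO_CHILD "How are you?\n";
    close(TO_CHILD);                    # flushes and signals end-of-input
    $answer = <FROM_CHILD>;
    print "The child answered: $answer";
    waitpid($child_pid, 0);
}
```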
It is possible to write more complex programs than the toys illustrated above: perhaps piping the output of a single program into a database, or reading from STDERR (the error stream) and feeding the output into another analysis program. The last example will illustrate how we can close down a stream, such as STDOUT (anything that is normally displayed on the screen), and redirect it into a pipe. This essentially 'mutes' a process, while allowing the process at the other end of the pipe to hear what is going on. Here it goes:
#!/usr/bin/perl -w
pipe(README, WRITEME);
$parent_pid = getppid();
$child_pid = fork();
if($child_pid == 0) {
    print "I am the child of ", $parent_pid, "\n";
    print "Before we close STDOUT...\n";
    close(README);
    open(STDOUT, ">&WRITEME") or die "Can't redirect stdout";
    print "After we close STDOUT...here comes the ps!\n";
    system("ps");
    close(STDOUT);
    exit;
}
else {
    print "I am the parent of ", $child_pid, ", here is what the child said:\n";
    waitpid($child_pid, 0);
    close(WRITEME);
    @strings = <README>;
    foreach $string (@strings) {
        print $string;
    }
    print "\n";
}
The output at the time was:
I am the child of 3636
Before we close STDOUT...
I am the parent of 4124, here is what the child said:
After we close STDOUT...here comes the ps!
PID TTY TIME CMD
3636 pts/2 00:00:07 bash
3751 pts/2 00:00:01 nedit
3758 pts/2 00:00:01 nedit
3781 pts/2 00:00:01 nedit
3782 pts/2 00:00:01 nedit
3783 pts/2 00:00:01 nedit
3784 pts/2 00:00:01 nedit
4115 pts/2 00:00:01 nedit
4116 pts/2 00:00:01 nedit
4117 pts/2 00:00:01 nedit
4118 pts/2 00:00:02 nedit
4123 pts/2 00:00:00 stdout.pl
4124 pts/2 00:00:00 stdout.pl
4125 pts/2 00:00:00 ps
What we have done is basically the same as before: create a pipe, then fork the process. On the child's end, though, we redirect all of STDOUT into the pipe, so anything the child outputs, whether from a print statement or from the ps command, gets shoved into the pipe. When the child finishes up, the parent is allowed to read from the pipe and collect all of the child's output. Additional processing could be done at this point for a better result.
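The same dup-into-a-pipe trick works for STDERR, which was mentioned earlier. Here is a sketch of my own (with a deliberately nonexistent file name, invented for the demonstration) in which the parent captures the child's error stream:

```perl
#!/usr/bin/perl -w
pipe(README, WRITEME);
$child_pid = fork();
if ($child_pid == 0) {
    close(README);
    open(STDERR, ">&WRITEME") or die "Can't redirect stderr";
    system("ls no_such_file_hopefully");   # ls's complaint goes down the pipe
    close(STDERR);
    exit;
}
else {
    close(WRITEME);
    waitpid($child_pid, 0);
    @errors = <README>;
    print "captured ", scalar(@errors), " line(s) of errors:\n", @errors;
}
```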
We will now look at a more practical feature of Perl, the ability to utilize subroutines.