Sunday, August 12, 2012

Bash scripts to distribute parallel processing across machines

In cases where the resources of a single host are not enough for the work you are doing, splitting that work across multiple processes on that one host will not help; rather, we need to distribute the work across multiple hosts. Following are changes to the script framework for parallel execution that I discussed earlier, now supporting execution across machines.
:
# execute in N parallel processes (default 5)
# if HOST1 HOST2 HOST3 ... specified, 
# execute in N processes per specified host
n=$1

if [ -z "$n" ]; then
        n=5
        hosts=''
else
        case "$1" in
                [0-9]*)
                        n=$1
                ;;
                *)
                        hosts=$*
                ;;
        esac
fi

if [ -z "$hosts" ]; then
        hosts='localhost'
fi

t=${TMP:-/tmp}/parallel.$$

mkdir -p $t/in
mkdir -p $t/work
mkdir -p $t/out
mkdir -p $t/hosts

x=0

while read cmd; do
        echo "$cmd" > $t/in/$x
        ((x++))
done

id=1
for host in $hosts; do
        n2=$n
        while [ $n2 != 0 ]; do
                echo `expand_host_name $host` > $t/hosts/$id
                parallel.1proc $id $t &
                let id++
                n2=`expr $n2 - 1`
        done
done

wait
cat $t/out/*

rm -r $t
The individual processes still run in a separate script parallel.1proc with the following code:
:
id=$1
t=$2

host_fn=`ls.nth $id $t/hosts`
if [ -z "$host_fn" ]; then
        host=localhost
else
        host=`cat $t/hosts/$host_fn`
fi

echo "parallel.1proc $id $t starting (will direct work to $host)..."
x=0
while [ 1 ]; do
        current_cmd_fn=$t/work/$id.$x
        while [ ! -f "$current_cmd_fn" ]; do
                next_cmd_base=`ls.nth $id $t/in`
                if [ -z "$next_cmd_base" ]; then
                        echo parallel.1proc $id done
                        exit 0
                fi
                next_cmd_fn=$t/in/$next_cmd_base
                mv $next_cmd_fn $current_cmd_fn 2> /dev/null
        done
        (
        cmd=`cat $current_cmd_fn`
        echo "==============================================================================="
        echo "`date` parallel proc $id executing $cmd on $host"
        if [ "$host" = "localhost" ]; then
                eval $cmd
        else
                unset DISPLAY
                export DISPLAY
                ssh -o StrictHostKeyChecking=no $host $cmd
        fi
        date
        ) > $t/out/$id.$x

        x=`expr $x + 1`
done
Of course it is assumed that each of the hosts will be able to access the needed disks to execute the work at hand. This approach works nicely in an environment where there is some set of NFS disks that are available across all the hosts you want working on the task.
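The per-host fan-out of worker processes can be isolated into a tiny function for illustration. This is a sketch of the same round-robin bookkeeping the script performs when it writes the $t/hosts/$id files; assign_hosts is a name invented here, not part of the framework:

```shell
# Sketch of worker-to-host assignment: n workers per host, worker ids
# increasing across the host list (assign_hosts is hypothetical, for
# illustration only).
assign_hosts() {
        n=$1; shift
        id=1
        for host in "$@"; do
                n2=$n
                while [ $n2 != 0 ]; do
                        echo "$id $host"
                        id=`expr $id + 1`
                        n2=`expr $n2 - 1`
                done
        done
}

assign_hosts 2 alpha beta
```

With two processes per host, worker ids 1 and 2 land on alpha and ids 3 and 4 on beta, mirroring the contents of the $t/hosts directory.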

Sunday, July 22, 2012

Simple bash scripts to achieve parallel processing of shell tasks

We've all had the experience of scripting commands which we know will take a long time to finish, and wishing for an easy way to run them in parallel. On occasion I have written special-purpose scripts to manage parallel processes for a long-running task, e.g., copying large numbers of files across machines. But this is such a general problem that it makes more sense to have a general solution.

To solve this problem I wrote parallel, a bash script which takes a single integer argument telling it how many processes it should spawn to do work. It reads standard input for a series of commands which it then distributes to its subprocesses. There must be no ordering dependency between these commands since the order of their execution will not be known in advance. The subprocesses execute these commands until no work is left.

Here is the code.

:
n=$1

if [ -z "$n" ]; then
        n=5
else
        shift
fi

t=${TMP:-/tmp}/parallel.$$

mkdir -p $t/in
mkdir -p $t/work
mkdir -p $t/out
x=0

while read cmd; do
        echo "$cmd" > $t/in/$x
        x=`expr $x + 1`
done

while [ $n != 0 ]; do
        parallel.1proc $n $t &
        n=`expr $n - 1`
done
wait
cat $t/out/*

rm -r $t
The script creates a temporary directory to keep track of the work. Under that directory is a subdirectory named in, where we keep the commands waiting to be executed. The subdirectory out is where we keep the output from the individual commands. The subdirectory work is where we keep track of the work which is currently being done.
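The population of the in directory is just the while-read loop above; factored into a function for illustration (fill_queue is a name used here only for this sketch, not part of the scripts):

```shell
# One numbered file per command, read from standard input -- the same
# bookkeeping as the while-read loop in the parallel script above
# (fill_queue is a hypothetical name, for illustration only).
fill_queue() {
        t=$1
        x=0
        while read cmd; do
                echo "$cmd" > "$t/$x"
                x=`expr $x + 1`
        done
}

t=`mktemp -d`
printf '%s\n' 'echo a' 'echo b' | fill_queue "$t"
ls "$t"
rm -r "$t"
```

Each queued command ends up in its own file, named 0, 1, 2, ... in arrival order.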

A second script parallel.1proc does the actual work per process. This script expects an integer ID and a pointer to the temporary directory. It loops across a sequence where it repeatedly grabs work from the in directory, records what it's doing in the work directory and puts the output into the out directory. Here is the code:

:
id=$1
t=$2

echo parallel.1proc $id $t starting...
x=0
while [ 1 ]; do
        current_cmd_fn=$t/work/$id.$x
        while [ ! -f "$current_cmd_fn" ]; do
                next_cmd_base=`ls $t/in | tail -$id | head -1`
                if [ -z "$next_cmd_base" ]; then
                        echo parallel.1proc $id done
                        exit 0
                fi
                next_cmd_fn=$t/in/$next_cmd_base
                mv $next_cmd_fn $current_cmd_fn 2> /dev/null
        done
        (
        date
        cat $current_cmd_fn
        eval `cat $current_cmd_fn`
        date
        ) > $t/out/$id.$x
      
        x=`expr $x + 1`
done
The in directory is logically the work queue. There will be a race between the multiple copies of parallel.1proc as they attempt to move individual command files from the in directory to the work directory. The scheme hinges on each process being able to tell whether it won the race. When a process attempts to claim a command file, it moves the file to a destination filename which incorporates its unique ID; so if the destination file incorporating, say, the ID 4 exists after the move attempt, then subprocess 4 knows it was the lucky winner who succeeded in grabbing the work and can go ahead and execute it. Any other process which was trying to grab the same command will realize that it failed, since the file named by its variable next_cmd_fn will no longer exist. A losing subprocess simply retries until it either succeeds in grabbing a command or the supply of commands runs out, at which point the subprocess exits.
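The mv-based claim can be demonstrated in isolation: two would-be workers race for the same job file, and only the one whose destination file exists afterward has won. This is a toy sketch, not the framework code:

```shell
# Two claims on one job file: mv within a single filesystem is atomic,
# so exactly one of the destination files ends up existing.
race_demo() {
        t=`mktemp -d`
        mkdir "$t/in" "$t/work"
        echo 'echo hello' > "$t/in/0"
        for id in 1 2; do
                dest="$t/work/$id.0"
                mv "$t/in/0" "$dest" 2> /dev/null
                if [ -f "$dest" ]; then
                        echo "worker $id won"
                fi
        done
        rm -r "$t"
}

race_demo
```

The first claim succeeds and prints "worker 1 won"; the second mv fails silently because the source file is already gone, so worker 2's destination file never appears.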

Tuesday, July 10, 2012

Generalizing visibility control of an HTML span depending on form field state

Often a form user's answer to a question determines whether parts of a form are relevant, e.g., is the billing address the same as the shipping address? Earlier I wrote about a span attribute which leads to the automatic presentation of a checkbox to control a span's visibility. But in other circumstances it can be nice to have the ability to control a span's visibility depending on some arbitrary form field elsewhere in the page. As usual I am hoping to be able to implement this linkage in markup without a rat's nest of onchange JavaScript peeking around to see what the form looks like.
Using jquery and a small amount of code it is easy to do this. In my implementation described below, I support an attribute visibility_tied_to_field on spans; the attribute's value is the DOM ID of a checkbox form field whose state will determine the span's visibility. So, for example, the billing address versus shipping address situation could be handled like so:
Billing address different from shipping address? <input type=checkbox id='xyz'>
<span visibility_tied_to_field='xyz'>

... billing address form fields...

</span>

I also support an inverse linking between checkboxes and spans' visibility with an analogous span attribute visibility_inversely_tied_to_field. When this latter attribute is used, the span is visible if the checkbox is not checked, and invisible if the checkbox is checked (the logical inverse of visibility_tied_to_field):
Billing address same as shipping address? <input type=checkbox id='xyz'>
<span visibility_inversely_tied_to_field='xyz'>

... billing address form fields...

</span>

I implement this functionality using just the following code:

function visibility_ties_init()
{
    var init_onchange_show_if_checked = function(index, span)
    {
        init_onchange(true, span, "visibility_tied_to_field")
    }
    
    var init_onchange_show_if_NOT_checked = function(index, span)
    {
        init_onchange(false, span, "visibility_inversely_tied_to_field")
    }
    
    var init_onchange = function(show_if_checked, span, span_attr_that_points_to_field)
    {
        var checkbox_id_attr = span.attributes.getNamedItem(span_attr_that_points_to_field)
        if (checkbox_id_attr != null)
        {
            var checkbox_id = checkbox_id_attr.value
            var checkbox = $('#' + checkbox_id)
            var show_it = (show_if_checked && checkbox.is(':checked')) || (!show_if_checked && !checkbox.is(':checked'))
            span.hidden = !show_it

            checkbox.change(function()
            {
                span.hidden = (show_if_checked ? !this.checked : this.checked)
            })
        }
    }
    
    $("span[visibility_tied_to_field]").each(          init_onchange_show_if_checked)
    $("span[visibility_inversely_tied_to_field]").each(init_onchange_show_if_NOT_checked)
}

The code iterates across spans with the visibility_tied_to_field or visibility_inversely_tied_to_field attributes, looking up the checkboxes which are pointed to. For each one the code hangs a new onchange handler to take care of adjusting the corresponding span's visibility depending on the checkbox state.

Friday, June 1, 2012

Declaratively tie the visibility of form fields to a checkbox value

Fields Contingently Displayed Based on a Checkbox 

In theory one of the great benefits of moving forms to the computer from paper is the ability to hide away sections of the form which are irrelevant to the user. We've all had the experience of filling out tax forms where large sections of fields were dedicated to a case which just didn't apply to us, leading to a disagreeable proliferation of superfluous fields which nevertheless require inspection to verify they truly are irrelevant. If we can hide away those superfluous fields in the computer versions of our forms then we will have nicely simplified -- and reduced -- the work. Obviously there is ample support for making divs invisible, but controlling that visibility can complicate a page's internal logic. When I was recently putting together a long form which needed this treatment, I wasn't looking forward to larding the page with ad hoc JavaScript to suppress superfluous fields depending on user settings. What I wanted was a declarative way to get this logic. Something like the following:
    <span id='checkbox_contingent_additional_person' visible=no>
        <table border=1 width=100% >
            <tr>
                <td>First Name</td>
                <td><input type=text id=some_relative_First_Name__0></td>
            </tr>
            <tr>
                <td>Last Name</td>
                <td><input type=text id=some_relative_Last_Name__0></td>
            </tr>
        </table>
    </span>  

I wanted to use spans for the outer layers so that if there were some browser compatibility problem with my added functions, there would be a graceful degradation to simply displaying the entire form. But what I wanted was for special logic to come into play for spans with IDs beginning with the substring checkbox_contingent. I also wanted to support an attribute called visible which would determine whether the fields enclosed by the span would be initially displayed or suppressed.

I was able to implement this using jquery and the following code:


function checkbox_contingency_init()
{
    $("span[id^=checkbox_contingent_]").each(function(index, outer_span_dom)
    {
        var outer_span = $("#" + outer_span_dom.id)
        var prefix = outer_span_dom.id.replace(/checkbox_contingent_/, "")
        var checkbox_id = "conditional_checkbox_" + prefix
        var span_name = checkbox_id.replace(/checkbox/, "span") // + "_" + this.value
        var prompt = prefix.replace(/_/g, " ")
        var checkbox_html = prompt + "? <input id='" + checkbox_id + "' name='" + checkbox_id + "' type='checkbox' /><br>"
        //debugger
        var user_html = outer_span.html()
        var span_html = "<span id=" + span_name + ">" + user_html + "</span>"
        outer_span.html(checkbox_html + span_html)
        var variable_span = $("#" + span_name)
        var visible = outer_span.attr("visible")
        if (visible && visible=="yes")
        {
            variable_span.show()
            var checkbox = $("#" + checkbox_id)
            checkbox.attr("checked", true)
        }
        else
        {
            variable_span.hide()
        }

        $("#" + checkbox_id).change(function()
        {
            if (checkbox_id != this.id) throw "oops mismatch: " + checkbox_id + " != " + this.id

            //alert('looking for span '+ span_name)
            var span = $('#' + span_name)
            if (!span)
            {
                throw ("no span matching " + span_name)
            }
            var val = this.checked
            if (val)
            {
                span.show()
            }
            else
            {
                span.hide()
            }
        })
    })
}


$(document).ready(
function()
{
    checkbox_contingency_init()
})



Using jquery I am able to find all the spans with IDs beginning with the substring checkbox_contingent. For each of these spans I rewrite the HTML to include a checkbox which controls whether the form fields are displayed.

Expandable Arrays 

The technique described above worked nicely for my purpose. As I continued, I ran into a variation on my use case: in some cases it is appropriate to show the user fields for an array of entries of indeterminate length. Here what I wanted was to repeatedly show a checkbox which would allow the user to indicate that he wanted to enter data for yet another entry. Each time he made such an indication by checking the checkbox, a new cell of form fields would be displayed (along with an additional checkbox to allow further extending the entries). Here is the HTML syntax I wanted to use:


<span id='expandable_array_more_children' visible=yes max_length=3>
    <table border=1 width=100% >
        <tr>
            <td>First Name</td>
            <td><input type=text id=more_children_First_Name__0></td>
        </tr>
        <tr>
            <td>Last Name</td>
            <td><input type=text id=more_children_Last_Name__0></td>
        </tr>
    </table>
</span>
       


In this case the span would contain the fields for a single entry. My jquery code would replace this HTML with a checkbox to enable the showing of the field. Each time the user checks the checkbox, another entry in the array is displayed, along with an additional checkbox to indicate that further entries are appropriate. I also put in support for a max_length attribute to limit the total number of entries. Here is the code to accomplish this: 

function expandable_array_init1(html_to_show_first, span, prefix, x)
{
    if (!html_to_show_first)
    {
        html_to_show_first = ""
    }
    if (x==null)
    {
        span.html(html_to_show_first)
    }
    else
    {
        var checkbox_id = "conditional_checkbox_" + prefix + "_" + x
        var span_id = checkbox_id.replace(/checkbox/, "span")
        var prompt = prefix.replace(/_/g, " ")
        if (!x) // first time through?
        {
            prompt = prompt.replace(/^more /, "")
        }
        span.html(html_to_show_first + "<br>" + prompt + "? <input id='" + checkbox_id + "' type='checkbox' value='" + x + "' /><br><span id=" + span_id + "/>")
        
        
        var checkbox = $("#" + checkbox_id)
        checkbox.change(function()
        {
            assert(checkbox_id == this.id) 
            var span = $('#' + span_id)
            assert(span)
            var val = this.checked
            if (val)
            {
                var x = parseInt(this.id.replace(/.*_/, ""))
                
                var pattern    = document.expandable_array_patterns[prefix]
                var max_length = document.expandable_array_max_length[prefix]
                assert(pattern)
                var filled_pattern = pattern.replace(/__0/, "__" + x)
                
                var next_checkbox_x = ((max_length==null) || (x < max_length-1)) ? (x+1) : null
                expandable_array_init1(filled_pattern, span, prefix, next_checkbox_x)
                
                span.show()
            }
            else
            {
                span.hide()
            }
        })
    } 
    return checkbox
}

function expandable_array_init()
{
    $("span[id^=expandable_array_]").each(function(index, outer_span_dom)
    {
        var outer_span = $("#" + outer_span_dom.id)
        assert(outer_span)
        var prefix = outer_span_dom.id.replace(/expandable_array_/, "")
        if (!document.expandable_array_patterns)
        {
            document.expandable_array_patterns = new Object()
            document.expandable_array_max_length = new Object()
        } 
        document.expandable_array_patterns[prefix] = outer_span.html()
        var max_length = outer_span.attr("max_length")
        if (max_length)
        {
            document.expandable_array_max_length[prefix] = parseInt(max_length)
        }
        outer_span.html("")
    
        var checkbox = expandable_array_init1(null, outer_span, prefix, 0)
        var visible = outer_span.attr("visible")
        if (visible && visible=="yes")
        {
            checkbox.attr("checked", true)
            checkbox.change()
        }
    })
}


$(document).ready(
function()
{
    expandable_array_init()
})


In this case I use jquery to search for spans whose IDs begin with expandable_array. For each of these spans I replace the HTML with a checkbox and an additional embedded span which, when visible, shows the fields corresponding to a single entry. I use the initial HTML within the span as a pattern from which I can create new fields for additional entries. It is expected that each field ID within the HTML follows a naming convention ending with the index of the entry it refers to; the initial field IDs all end with __0; as new entries are generated, I use the exact same HTML except with the index incremented, resulting in similar field names ending with __1, __2, __3, etc.

Monday, March 12, 2012

Simple method to add multilevel filtering support to text producing applications

Single-level text filtering seems to be becoming more common in GUIs, normally implemented with a single text entry field that the user can type a string into, thereby indicating that the displayed data should contain that string. This is a nice feature, but in my opinion inadequate for larger data sets, where typically one wants to successively apply multiple filters to limit the display to results matching several conditions. This is a little painful to implement in a GUI application, but very easy in the text world. The recipe I have been following uses grepm, a script I wrote, as the fundamental building block. The script is so simple I'll just quote it (or really its perl guts):
use strict;
use diagnostics;

my $i = 0;
  
if ($#ARGV >= 0 && $ARGV[0] eq "-i")
{
  $i = 1;
  shift @ARGV;
}

while (<STDIN>)
{
  my $printIt = 1;
  foreach my $pat (@ARGV)
  {
    if ($i)
    {
      if (!/$pat/i)
      {
        $printIt = 0;
        last;
      }
    }
    else
    {
      if (!/$pat/)
      {
        $printIt = 0;
        last;
      }
    }
  }   
  print if $printIt;
}

(Although I've become enamored of Ruby, the nature of this script is to be repeatedly executed as part of other small scripts, and with that purpose in mind good performance is absolutely a requirement, and Ruby does not provide that for me in all environments. Perl, meanwhile, for all its faults, is always fast, so Perl it is.)

The script simply accepts some number of text tokens. Each of these tokens is interpreted as a regular expression that can be applied to data coming from standard input; lines which match all of the regular expressions are passed through, with everything else getting discarded. If there are no arguments to the script, then everything gets passed through.
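The effect is an AND across the patterns; with standard tools the same semantics can be approximated by chaining plain greps (losing grepm's single process and its argument handling, but showing what it computes). The function name match_both is invented for this sketch:

```shell
# Lines must match every pattern to survive, as in grepm
# (match_both is a hypothetical two-pattern stand-in).
match_both() {
        grep "$1" | grep "$2"
}

printf '%s\n' 'red apple' 'green apple' 'red grape' | match_both red apple
```

Only the line matching both "red" and "apple" survives the chain.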

With this building block in hand, it is a simple matter to take an arbitrary script and bolt grepm on internally, after shifting away the commandline arguments which do not pertain to the filter. For a contrived example:

#!/bin/sh
some_arg=$1
shift

some_other_arg=$1
shift

Do_some_stuff $some_arg $some_other_arg | grepm $*


And we are done. Since the script has executed a shift for each of the arguments that it pays attention to, remaining arguments can be safely passed on to grepm to serve as filters. If there are no additional arguments, grepm has no effect, simply passing through all the data it sees. But if we do want to filter the output, this occurs with nearly no impact on the structure and complexity of the calling script. Now, for example, when I am interested in getting a listing of all the OEL VMs owned by my colleague Olfat which include the product AS10G and had an upgrade, I can easily generate what I want by calling my simple script lmd with additional filtering arguments:


% lmd olfat OEL AS10G upgraded


This yields the desired data set:


adc AS10G10.1.4.3 Upgraded to AS11GPS1 DB112-133.140-allcompconfig/oel5u5x86-2/6272, fusionmats2, false, AS10G10.1.4.3 Upgraded to AS11GPS1 DB112-133.140-allcompconfig, undeployed,2822000,0,0, olfat.aly@oracle.com, lib,1411000,1411000,46336333, null/507 -> null/538 -> null/622 -> null/626 -> null/675 -> null/733 -> null/735 -> oel5u5x86-2/743 -> oel5u5x86-2/6272, undeployed, , alwaysup=yes importance=10, ,0, ,342, 140.84.133.140
adc stage9-AS10G10.1.4.3 Upgraded to AS11GPS1 DB112/oel5u5x86-2/8975, fusionmats2, false, stage9-AS10G10.1.4.3 Upgraded to AS11GPS1 DB112, undeployed,28086000,0,0, olfat.aly@oracle.com, work,14043000,14043000,58968333, null/507 -> null/538 -> null/622 -> null/626 -> null/675 -> null/733 -> null/735 -> oel5u5x86-2/743 -> oel5u5x86-2/8975, undeployed, , importance=10, ,0, ,52, 140.84.133.120
adc AS10G10.1.4.3 Upgraded to AS11GPS1 DB112 - backup/oel5u5x86-2/743, fusionmats2, false, AS10G10.1.4.3 Upgraded to AS11GPS1 DB112 - backup, undeployed,642000,0,0, olfat.aly@oracle.com, lib,321000,321000,45085833, null/507 -> null/538 -> null/622 -> null/626 -> null/675 -> null/733 -> null/735 -> oel5u5x86-2/743, undeployed, , alwaysup=yes importance=10, ,0, ,504, 140.84.131.230
adc AS10G10.1.4.3 Upgraded to AS11GPS2 DB112 - backup/oel5u5x86-2/843, fusionmats2, false, AS10G10.1.4.3 Upgraded to AS11GPS2 DB112 - backup, undeployed,34000,0,0, olfat.aly@oracle.com, lib,17000,13035000,57799833, null/507 -> null/538 -> null/622 -> null/626 -> null/675 -> null/733 -> null/735 -> null/744 -> oel5u5x86-2/843, undeployed, , alwaysup=yes importance=10, ,0, ,0, 140.84.131.230
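The wrapper pattern depends on shift renumbering the positional parameters, so that whatever remains in $* is exactly the set of filter patterns for grepm. A self-contained illustration (consume_two is a name invented for this sketch):

```shell
# After each shift, the next argument is again $1; whatever the script
# does not consume remains in $* for the downstream filter.
consume_two() {
        first=$1; shift
        second=$1; shift
        echo "first=$first second=$second filters=$*"
}

consume_two mode fast olfat OEL
```

The first two arguments are consumed by the wrapper; "olfat OEL" is what would be handed to grepm.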

Monday, February 6, 2012

How to control and measure resource consumption on a modern server farm?

At the big company where I work, there is an ongoing tussle for resources, some of it controlled by quotas directed from above, and some of it not. The quotas that get handed down are in terms of numbers of hosts or VMs, but this doesn't work well in a truly fluid server farm environment like that provided by VMware's LabManager product. There you can consume VMs or disk or RAM or CPUs temporarily, and the value of these resources changes over time as new capacity is delivered to the data center in various configurations. It is plain to see that a bare machine count is not adequate to measure or control resource consumption in this type of environment.

So how should one measure consumption in a world where different VMs can have very different footprints depending on their type and how they are configured? An answer is to establish one more layer of indirection by using "points" to measure consumption of various things, according to prices which are dynamically updated to reflect the current value of these things to the organization.

At the implementation level, our lives can be simplified if we centralize record-keeping, but decouple this record-keeping from the various resource providers.

Using "points," i.e., an arbitrary unit of value, to measure consumption has several advantages.

- simple to record, share and combine:

"Points" make it very simple to distribute resources to people in a transparent way, since point holdings can be represented by integers. Dividing resources between members of a group is as simple as dividing a number. Likewise, aggregating the buying power of a set of people is as easy as summing the points held by the individuals. This makes it easy to award resources at a high level based on business priorities without getting caught up in the low-level details of which particular compute resources will really be used.

- flexibility of what is being measured

"Points" facilitate arbitrary complexity of resource pricing. The alternative is to establish quotas in terms of units of hardware or hardware use, and this can lead to awkward situations. For example, limiting users to fixed numbers of VM's doesn't distinguish between categories of VMs which may have very different cost profiles. And even if we are talking about the same type of VM, there may be other properties which make one more valuable than another, such as which data center it lives in or what virtual hardware it is provided with. If we use points, we can establish pricing on any aspect of VMs that is proving to be a precious resource.

- facilitates comparisons and prioritizing between different resources

"Points" provide a common denominator between different kinds of resources. If it proves to be the case that there is a shortage of OVM instances in ADC, but lots of VirtualBox instances in UCF, their relative prices can reflect that. As the situation changes, these prices could also be adjusted to express the relative values of these commodities within the organization.

- decouple central records from resource providers

Record-keeping must be centralized, but this needn't imply a monolithic system. Since priorities are set at the organizational level, there should only be one reckoning of the points held by each individual employee. But this doesn't mean that the individual resource providers must be combined; there just has to be an interface to the central record-keeper which allows these providers to report consumption (and download quotas).

This allows different providers to report and query on unrelated schedules. The central recordkeeper will always be aggregating reports in terms of points, but must remember the source of each piece of information to allow updates. This decoupled scheme allows the central recordkeeper to provide reports even if individual resource managers are down or cannot be contacted.

Monday, January 16, 2012

Easy path to memoization for UNIX text piping applications

We've all had the experience of slow commandline applications which produce text. For those applications with no significant side effects, below is a recipe to institute caching without invasive changes. This solution uses bash and perl, and is most conveniently applied to bash scripts by simply inserting a line at the head of the script as follows:

. cache $*

To apply this technique to text producing programs which are not themselves bash scripts, simply make a bash wrapper for the program to be cached, and apply this technique to the wrapper.

Below is the code, which simply calls a Perl script to execute the program which has been targeted for caching, guarding against an infinite loop by means of the __DISABLE_CACHE__ environment variable (which also gives cache-aware programs an interface to prevent this scheme from being in effect at all times, if there are known occasions where we want to disable caching).


# add . cache $* to scripts.
# (TOO SLOW W/ some rubys, so using perl instead
# (of ruby -w $DROP/bin/ruby/cache.rb $*))

if [ -z "$__DISABLE_CACHE__" ]; then
        export __DISABLE_CACHE__=yes
        perl -w $DROP/bin/perl/cache.pl $0 $*
        exit
fi
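The guard's effect can be seen in a toy version that replaces the perl re-execution with a recursive function call. Both the function name run and the variable __GUARD_DEMO__ are invented for this sketch:

```shell
# First call takes the caching branch and re-invokes; the exported
# variable stops the recursion on the second call, just as
# __DISABLE_CACHE__ does for the sourced cache fragment.
run() {
        if [ -z "$__GUARD_DEMO__" ]; then
                export __GUARD_DEMO__=yes
                echo "cache layer entered"
                run
                return
        fi
        echo "real command runs once"
}

run
```

The first line of output comes from the caching layer, the second from the (re-invoked) real work, which executes exactly once.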

The perl script which does the work is likewise very straightforward. If there is any output on stderr, the script assumes there was trouble and does not cache anything; otherwise, you should get just a single real execution per permutation of input arguments.


use strict;

my $__trace = 0;
my $cmd = join ' ', @ARGV;

my $cached_output = $cmd;
$cached_output =~ s{[^\w]}{_}g;
$cached_output = $ENV{'TMP'} . "/cache." . $cached_output;

if (! -f $cached_output)
{
  my $cmd_with_redirects = "$cmd > $cached_output 2> $cached_output.err";
  `$cmd_with_redirects`;
  if ($__trace)
  {
    print "Executed $cmd_with_redirects\n";
  }
  if (`cat $cached_output.err` eq '')
  {
    if ($__trace)
    {
      print "No error output, so deleting $cached_output.err\n";
    }
    unlink "$cached_output.err";
  }
}
if ($__trace)
{
  print `ls -l $cached_output $cached_output.err`;
}

print `cat $cached_output`;
if (-f "$cached_output.err")
{
  print STDERR `cat $cached_output.err`;
  # assume trouble if there was output to stderr, and remove the cached output:
  unlink "$cached_output.err";
  unlink $cached_output;
}