We are now able to express the buffer module in a form that functions properly when used by individual, concurrent processes:

  MODULE Buffer;
    IMPORT Signals;
    CONST N = 1024; (*buffer size*)
    VAR n, in, out: INTEGER;
      nonfull: Signals.Signal; (*n < N*)
      nonempty: Signals.Signal; (*n > 0*)
      buf: ARRAY N OF CHAR;

    PROCEDURE deposit (x: CHAR);
    BEGIN
      IF n = N THEN Signals.Wait(nonfull) END ;
      INC(n); buf[in] := x; in := (in + 1) MOD N;
      IF n = 1 THEN Signals.Send(nonempty) END
    END deposit;

    PROCEDURE fetch (VAR x: CHAR);
    BEGIN
      IF n = 0 THEN Signals.Wait(nonempty) END ;
      DEC(n); x := buf[out]; out := (out + 1) MOD N;
      IF n = N-1 THEN Signals.Send(nonfull) END
    END fetch;

  BEGIN n := 0; in := 0; out := 0;
    Signals.Init(nonfull); Signals.Init(nonempty)
  END Buffer.

An additional caveat must be made, however. The scheme fails miserably if, by coincidence, both consumer and producer (or two producers, or two consumers) fetch the counter value n simultaneously for updating. Unpredictably, its resulting value will be either n+1 or n-1, but not n. It is therefore necessary to protect the processes from such dangerous interference. In general, all operations that alter the values of shared variables constitute potential pitfalls.

A sufficient (but not always necessary) condition is that all shared variables be declared local to a module whose procedures are guaranteed to be executed under mutual exclusion. Such a module is called a monitor [1-7]. The mutual exclusion provision guarantees that at any time at most one process is actively engaged in executing a procedure of the monitor. Should another process be calling a procedure of the (same) monitor, it will automatically be delayed until the first process has terminated its procedure.

Note: by "actively engaged" is meant that a process executes a statement other than a wait statement.
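As an illustration of the monitor idea in a language with explicit thread support, the following is a minimal sketch of the same bounded buffer in Python. The class name BoundedBuffer, the default capacity, and the use of threading.Condition are choices made for this example, not part of the module above; the condition variable plays the role of the signals, and the lock it wraps provides the mutual exclusion of the monitor.

  import threading

  class BoundedBuffer:
      """Monitor-style bounded buffer: one lock, two wait conditions."""
      def __init__(self, capacity=1024):
          self.buf = [None] * capacity
          self.capacity = capacity
          self.n = 0          # number of filled slots
          self.inp = 0        # next position to deposit
          self.out = 0        # next position to fetch
          self.lock = threading.Lock()
          self.nonfull = threading.Condition(self.lock)   # n < capacity
          self.nonempty = threading.Condition(self.lock)  # n > 0

      def deposit(self, x):
          with self.lock:                      # mutual exclusion (the monitor)
              while self.n == self.capacity:   # wait until a slot is free
                  self.nonfull.wait()
              self.buf[self.inp] = x
              self.inp = (self.inp + 1) % self.capacity
              self.n += 1
              self.nonempty.notify()           # a waiting consumer may proceed

      def fetch(self):
          with self.lock:
              while self.n == 0:               # wait until data is available
                  self.nonempty.wait()
              x = self.buf[self.out]
              self.out = (self.out + 1) % self.capacity
              self.n -= 1
              self.nonfull.notify()            # a waiting producer may proceed
              return x

Note that the waiting process re-checks its condition in a while loop after being woken, which is the usual discipline with condition variables; the Signals module postulated in the text guarantees the condition upon return from Wait, so a simple IF suffices there.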
25,301
At last we return to the problem where the producer or the consumer (or both) require the data to be available in a certain block size. The following module is a variant of the one previously shown, assuming a block size of Np data elements for the producer and of Nc elements for the consumer. In these cases the buffer size N is usually chosen as a common multiple of Np and Nc. In order to emphasize the symmetry between the operations of fetching and depositing data, the single counter n is now represented by two counters, namely ne and nf. They specify the numbers of empty and filled buffer slots respectively. When the consumer is idle, nf indicates the number of elements needed for the consumer to proceed; and when the producer is waiting, ne specifies the number of elements needed for the producer to resume. (Therefore ne + nf = N does not always hold.)

  MODULE Buffer;
    IMPORT Signals;
    CONST Np = 16; (*size of producer block*)
      Nc = 128; (*size of consumer block*)
      N = 1024; (*buffer size, common multiple of Np and Nc*)
    VAR ne, nf: INTEGER;
      in, out: INTEGER;
      nonfull: Signals.Signal; (*ne >= 0*)
      nonempty: Signals.Signal; (*nf >= 0*)
      buf: ARRAY N OF CHAR;

    PROCEDURE deposit (VAR x: ARRAY OF CHAR);
      VAR i: INTEGER;
    BEGIN ne := ne - Np;
      IF ne < 0 THEN Signals.Wait(nonfull) END ;
      FOR i := 0 TO Np-1 DO buf[in] := x[i]; INC(in) END ;
      IF in = N THEN in := 0 END ;
      nf := nf + Np;
      IF nf >= 0 THEN Signals.Send(nonempty) END
    END deposit;

    PROCEDURE fetch (VAR x: ARRAY OF CHAR);
      VAR i: INTEGER;
    BEGIN nf := nf - Nc;
      IF nf < 0 THEN Signals.Wait(nonempty) END ;
      FOR i := 0 TO Nc-1 DO x[i] := buf[out]; INC(out) END ;
      IF out = N THEN out := 0 END ;
      ne := ne + Nc;
      IF ne >= 0 THEN Signals.Send(nonfull) END
    END fetch;

  BEGIN ne := N; nf := 0; in := 0; out := 0;
    Signals.Init(nonfull); Signals.Init(nonempty)
  END Buffer.

Textual Input and Output

By standard input and output we understand the transfer of data to (from) a computer system from (to) genuinely external agents, in particular its human operator. Input may typically originate at a keyboard, and output may sink into a display screen. In any case, its characteristic is that it is readable, and it typically consists of a sequence of characters: it is a text. This readability condition is responsible for yet another complication incurred in most genuine input and output operations. Apart from the actual data transfer, they also involve a transformation of representation. For example, numbers, usually considered as atomic units and represented in binary form, need to be transformed into readable, decimal notation. Structures need to be represented in a suitable layout, whose generation is called formatting.

Whatever the transformation may be, the concept of the sequence is once again instrumental for a considerable simplification of the task. The key is the observation that, if the data set can be considered as a sequence of characters, the transformation of the sequence can be implemented as a sequence of (identical) transformations of its elements.

We shall briefly investigate the necessary operations for transforming representations of natural numbers for input and output. The basis is that a number x represented by the sequence of decimal digits d = <dn-1, ..., d1, d0> has the value

  x = Si: 0 <= i < n : di * 10^i
    = ((... (dn-1 * 10 + dn-2) * 10 + ...) * 10 + d1) * 10 + d0
25,302
Assume now that the sequence d is to be read and transformed, and the resulting numeric value assigned to x. The simple algorithm terminates with the reading of the first character that is not a digit. (Arithmetic overflow is not considered.)

  x := 0; Read(ch);
  WHILE ("0" <= ch) & (ch <= "9") DO
    x := 10*x + (ORD(ch) - ORD("0")); Read(ch)
  END

In the case of output the transformation is complicated by the fact that the decomposition of x into decimal digits yields them in the reverse order. The least significant digit is generated first by computing x MOD 10. This requires an intermediate buffer in the form of a first-in-last-out queue (stack). We represent it as an array d with index i and obtain the following program:

  i := 0;
  REPEAT d[i] := x MOD 10; x := x DIV 10; INC(i) UNTIL x = 0;
  REPEAT DEC(i); Write(CHR(d[i] + ORD("0"))) UNTIL i = 0

Note: a consistent substitution of the constant 10 in these algorithms by a positive integer B will yield number conversion routines to and from representations with base B. A frequently used case is B = 16 (hexadecimal), because the involved multiplications and divisions can be implemented by simple shifts of the binary numbers.

Obviously, it should not be necessary to specify these ubiquitous operations in every program in full detail. We therefore postulate a utility module that provides the most common, standard input and output operations on numbers and strings. This module is referenced in most programs throughout this book, and we call it Texts. It defines a type Text, Readers and Writers for Texts, and procedures for reading and writing a character, an integer, a cardinal number, or a string.

Before we present the definition of module Texts, we point out an essential asymmetry between input and output of texts. Whereas a text is generated by a sequence of calls of writing procedures, writing integers, real numbers, strings etc., reading a text by a sequence of calls of reading procedures is questionable practice. This is because we rather wish to read the next element without having to know its type; we rather wish to determine its type after reading the item. This leads us to the concept of a scanner which, after each scan, allows one to inspect the type and value of the item read. A scanner acts like a rider in the case of files. However, it imposes a certain syntax on the text to be read. We postulate a scanner for texts consisting of a sequence of integers, real numbers, strings, names, and special characters, given by the following syntax specified in EBNF (Extended Backus Naur Form):

  item = integer | RealNumber | identifier | string | SpecialChar.
  integer = ["-"] digit {digit}.
  RealNumber = ["-"] digit {digit} "." digit {digit} [("E" | "D") ["+" | "-"] digit {digit}].
  identifier = letter {letter | digit}.
  string = '"' {any character except quote} '"'.
  SpecialChar = "!" | "?" | "@" | "#" | "$" | "%" | "^" | "&" | "+" | "-" | "*" | "/" | "\" | "|" |
    "(" | ")" | "[" | "]" | "{" | "}" | "." | "," | ":" | ";" | "~".
25,303
"",":";"~items are separated by blanks and/or line breaks definition texts(adens _texts *const int real name char type textwriterreader record eotboolean endscanner record classintegeriintegerxrealsarray of charchcharnextchchar endprocedure openreader (var rreaderttextposinteger)procedure openwriter (var wwriterttextposinteger)procedure openscanner (var sscannerttextposinteger)procedure read (var rreadervar chchar)procedure readint (var rreadervar ninteger)procedure scan (var sscanner)procedure write (var wwriterchchar)procedure writeln (var wwriter)(*terminate line*procedure writestring (var wwritersarray of char)procedure writeint (var wwriterxninteger)(*write integer with (at leastn characters if is greater than the number of digits neededblanks are added preceding the number*procedure writereal (var wwriterxreal)procedure close (var wwriter)end texts hence we postulate that after call of scan(ss class int implies is the integer read class real implies is the real number read class name implies is the identifier of string read implies ch is the special character read class char nextch is the character immediately following the read itempossibly blank
25,304
Searching

The task of searching is one of the most frequent operations in computer programming. It also provides an ideal ground for the application of the data structures so far encountered. There exist several basic variations of the theme of searching, and many different algorithms have been developed on this subject. The basic assumption in the following presentations is that the collection of data, among which a given element is to be searched, is fixed. We shall assume that this set of N elements is represented as an array, say as

  a: ARRAY N OF Item

Typically, the type Item has a record structure with a field that acts as a key. The task then consists of finding an element of a whose key field is equal to a given search argument x. The resulting index i, satisfying a[i].key = x, then permits access to the other fields of the located element. Since we are here interested in the task of searching only, and do not care about the data for which the element was searched in the first place, we shall assume that the type Item consists of the key only, i.e. it is the key.

Linear Search

When no further information is given about the searched data, the obvious approach is to proceed sequentially through the array, in order to increase, step by step, the size of the section where the desired element is known not to exist. This approach is called linear search. There are two conditions which terminate the search:

1. The element is found, i.e. ai = x.
2. The entire array has been scanned, and no match was found.

This results in the following algorithm:

  i := 0;
  WHILE (i < N) & (a[i] # x) DO INC(i) END

Note that the order of the terms in the Boolean expression is relevant. The invariant, i.e. the condition satisfied before and after each loop step, is

  (0 <= i < N) & (Ak : 0 <= k < i : ak # x)

expressing that for all values of k less than i no match exists. Note that the values of i before and after each loop step are different; the invariant is preserved nevertheless, thanks to the while-clause. From this, and from the fact that the search terminates only if the condition in the while-clause is false, the resulting condition is derived as

  ((i = N) OR (ai = x)) & (Ak : 0 <= k < i : ak # x)

This condition not only is our desired result, but also implies that when the algorithm did find a match, it found the one with the least index, i.e. the first one; i = N implies that no match exists.

Termination of the repetition is evidently guaranteed, because in each step i is increased and therefore will certainly reach the limit N after a finite number of steps; in fact, after N steps if no match exists.

Each step evidently requires the incrementing of the index and the evaluation of a Boolean expression. Could this task be simplified, and could the search thereby be accelerated? The only possibility lies in finding a simplification of the Boolean expression, which notably consists of two factors. Hence, the only chance for a simpler solution lies in establishing a condition consisting of a single factor that implies both factors. This is possible only by guaranteeing that a match will be found, and it is achieved by posting an additional element with value x at the end of the array. We call this auxiliary element a sentinel, because it prevents the search from passing beyond the index limit. The array a is now declared as
25,305
  a: ARRAY N+1 OF INTEGER

and the linear search algorithm with sentinel is expressed by

  a[N] := x; i := 0;
  WHILE a[i] # x DO INC(i) END

The resulting condition, derived from the same invariant as before, is

  (ai = x) & (Ak : 0 <= k < i : ak # x)

Evidently, i = N implies that no match (except that for the sentinel) was encountered.
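The effect of the sentinel is easy to reproduce in a short Python sketch; the function name linear_search_sentinel is not from the text. The list is extended by one slot holding x, so the loop needs only a single comparison per step; afterwards, reaching the appended slot signals that only the sentinel matched.

  def linear_search_sentinel(a, x):
      """Return the least index i with a[i] == x, or -1 if x does not occur."""
      n = len(a)
      b = a + [x]              # the sentinel guarantees termination of the loop
      i = 0
      while b[i] != x:         # only one comparison per step
          i += 1
      return i if i < n else -1

  assert linear_search_sentinel([7, 3, 9, 3], 9) == 2
  assert linear_search_sentinel([7, 3, 9, 3], 5) == -1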
25,306
Binary Search

There is quite obviously no way to speed up a search unless more information is available about the searched data. It is well known that a search can be made much more effective if the data are ordered. Imagine, for example, a telephone directory in which the names were not alphabetically listed: it would be utterly useless. We shall therefore present an algorithm which makes use of the knowledge that a is ordered, i.e. of the condition

  Ak : 0 < k < N : ak-1 <= ak

The key idea is to inspect an element picked at random, say am, and to compare it with the search argument x. If it is equal to x, the search terminates; if it is less than x, we infer that all elements with indices less than or equal to m can be eliminated from further searches; and if it is greater than x, all elements with indices greater than or equal to m can be eliminated. This results in the following algorithm, called binary search; it uses two index variables L and R marking the left and the right end of the section of a in which an element may still be found.

  L := 0; R := N-1;
  m := any value between L and R;
  WHILE (L <= R) & (a[m] # x) DO
    IF a[m] < x THEN L := m+1 ELSE R := m-1 END ;
    m := any value between L and R
  END

Note the fundamental structural similarity of this algorithm to the linear search of the preceding section: the role of i is now played by the triplet L, m, R. To explicate the similarity and, thereby, to better ensure the loop correctness, we resisted the temptation of a minor optimization that would eliminate one of the two identical assignments to m.

The loop invariant, i.e. the condition satisfied before and after each step, is

  (L <= R) & (Ak : 0 <= k < L : ak < x) & (Ak : R < k < N : ak > x)

from which the result is derived as

  ((L > R) OR (am = x)) & (Ak : 0 <= k < L : ak < x) & (Ak : R < k < N : ak > x)

which implies

  ((L > R) & (Ak : 0 <= k < N : ak # x)) OR (am = x)

The choice of m is apparently arbitrary in the sense that correctness does not depend on it. But it does influence the algorithm's effectiveness. Clearly our goal must be to eliminate in each step as many elements as possible from further searches, no matter what the outcome of the comparison is. The optimal solution is to choose the middle element, because this eliminates half of the array in any case. As a result, the maximum number of steps is log N, rounded up to the nearest integer. Hence, this algorithm offers a drastic improvement over linear search, where the expected number of comparisons is N/2.

The efficiency can be somewhat improved by interchanging the two if-clauses. Equality should be tested second, because it occurs only once and causes termination. But more relevant is the question whether, as in the case of linear search, a solution could be found that allows a simpler condition for termination. We indeed find such a faster algorithm if we abandon the naive wish to terminate the search as soon as a match is established. This seems unwise at first glance, but on closer inspection we realize that the gain in efficiency at every step is greater than the loss incurred in comparing a few extra elements. Remember that the number of steps is at most log N.

The faster solution is based on the following invariant:

  (Ak : 0 <= k < L : ak < x) & (Ak : R <= k < N : ak >= x)

and the search is continued until the two sections span the entire array.

  L := 0; R := N;
  WHILE L < R DO
    m := (L+R) DIV 2;
    IF a[m] < x THEN L := m+1 ELSE R := m END
  END

The terminating condition is L >= R. Is it guaranteed to be reached? In order to establish this guarantee, we must show that under all circumstances the difference R-L is diminished in each step. L < R holds at the beginning of each step. The arithmetic mean m then satisfies L <= m < R. Hence, the difference is indeed diminished, by either assigning m+1 to L (increasing L) or m to R (decreasing R), and the repetition terminates with L = R. However, the invariant and L = R do not yet establish a match. Certainly, if R = N, no match exists. Otherwise we must take into consideration that the element a[R] had never been compared; hence an additional test for equality a[R] = x is necessary. In contrast to the first solution, this algorithm, like the linear search, finds the matching element with the least index.
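A minimal Python sketch of the second, faster variant follows; the function name binary_search_leftmost is not from the text. It keeps the invariant that everything left of L is < x and everything from R onward is >= x, and performs the single equality test only after the loop.

  def binary_search_leftmost(a, x):
      """a must be sorted; return the least index i with a[i] == x, or -1."""
      L, R = 0, len(a)
      while L < R:
          m = (L + R) // 2
          if a[m] < x:
              L = m + 1          # a[0..m] are all < x
          else:
              R = m              # a[m..] are all >= x
      # L == R: the only remaining candidate; one equality test decides
      if R < len(a) and a[R] == x:
          return R
      return -1

  assert binary_search_leftmost([1, 3, 3, 5, 8], 3) == 1
  assert binary_search_leftmost([1, 3, 3, 5, 8], 4) == -1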
Table Search

A search through an array is sometimes also called table search, particularly if the keys are themselves structured objects, such as arrays of numbers or characters. The latter is a frequently encountered case; such character arrays are called strings or words. Let us define a type String as

  String = ARRAY M OF CHAR

and let the order on strings x and y be defined as follows:

  (x = y) = (Aj : 0 <= j < M : xj = yj)
  (x < y) = Ei : 0 <= i < M : ((Aj : 0 <= j < i : xj = yj) & (xi < yi))

In order to establish a match, we evidently must find all characters of the comparands to be equal. Such a comparison of structured operands therefore turns out to be a search for an unequal pair of comparands, i.e. a search for inequality. If no unequal pair exists, equality is established. Assuming that the length of the words is quite small, say less than 30, we shall use a linear search in the following solution.

In most practical applications, one wishes to consider strings as having a variable length. This is accomplished by associating a length indication with each individual string value; using the type declared above, this length must not exceed the maximum length M. This scheme allows for sufficient flexibility for many cases, yet avoids the complexities of dynamic storage allocation. Two representations of string lengths are most commonly used:

1. The length is implicitly specified by appending a terminating character which does not otherwise occur. Usually the non-printing character 0X is used for this purpose. (It is important for the subsequent applications that it be the least character in the character set.)

2. The length is explicitly stored as the first element of the array, i.e. the string s has the form
     s = s0, s1, s2, ..., sN-1
   where s1 ... sN-1 are the actual characters of the string and s0 = CHR(N). This solution has the advantage that the length is directly available, and the disadvantage that the maximum length is limited to the size of the character set, that is, to 256 in the case of the ASCII set.

For the subsequent search algorithm we shall adhere to the first scheme. A string comparison then takes the form

  i := 0;
  WHILE (x[i] = y[i]) & (x[i] # 0X) DO i := i+1 END

The terminating character now functions as a sentinel; the loop invariant is

  Aj : 0 <= j < i : xj = yj # 0X

and the resulting condition is therefore

  ((xi # yi) OR (xi = 0X)) & (Aj : 0 <= j < i : xj = yj # 0X)

It establishes a match between x and y, provided that xi = yi, and it establishes x < y, if xi < yi.

We are now prepared to return to the task of table searching. It calls for a nested search, namely a search through the entries of the table and, for each entry, a sequence of comparisons between components. For example, let the table T and the search argument x be defined as

  T: ARRAY N OF String; x: String

Assuming that N may be fairly large and that the table is alphabetically ordered, we shall use a binary search. Using the algorithms for binary search and string comparison developed above, we obtain the following program segment.

  L := 0; R := N;
  WHILE L < R DO
    m := (L+R) DIV 2; i := 0;
    WHILE (T[m,i] = x[i]) & (x[i] # 0X) DO i := i+1 END ;
    IF T[m,i] < x[i] THEN L := m+1 ELSE R := m END
  END ;
  IF R < N THEN
    i := 0;
    WHILE (T[R,i] = x[i]) & (x[i] # 0X) DO i := i+1 END
  END
  (* (R < N) & (T[R,i] = x[i]) establish a match *)
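The same nesting of a binary search over the table with a character-by-character comparison of its entries can be sketched in Python as follows; the name table_search and the sample table are not from the text. Python's string comparison already performs the inner linear scan, so the inner loop of the program segment above collapses into the < and == operators.

  def table_search(T, x):
      """T is an alphabetically sorted list of strings; return the index of x or -1."""
      L, R = 0, len(T)
      while L < R:
          m = (L + R) // 2
          if T[m] < x:             # character-wise comparison, as in the inner loop
              L = m + 1
          else:
              R = m
      if R < len(T) and T[R] == x: # the deferred equality test
          return R
      return -1

  T = ["ada", "algol", "modula", "oberon", "pascal"]
  assert table_search(T, "oberon") == 3
  assert table_search(T, "fortran") == -1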
String Search

A frequently encountered kind of search is the so-called string search. It is characterized as follows. Given an array s of N elements and an array p of M elements, where 0 < M < N, declared as

  s: ARRAY N OF Item
  p: ARRAY M OF Item

string search is the task of finding the first occurrence of p in s. Typically, the items are characters; then s may be regarded as a text and p as a pattern or word, and we wish to find the first occurrence of the word in the text. This operation is basic to every text processing system, and there is obvious interest in finding an efficient algorithm for this task. A specific feature of this problem is the presence of two arrays and the necessity to scan them simultaneously, in such a way that the coordination of the two indices used to scan the arrays is determined by the data. A correct implementation of such "braided" loops is simplified by expressing them with the so-called Dijkstra loop, i.e. a multibranch version of the while loop. This fundamental and powerful control structure is described in the Appendix.

We consider three string search algorithms: the straight string search; an optimization of the straight search due to Knuth, Morris and Pratt; and, finally, the algorithm of Boyer and Moore, based on a revision of the basic idea of the straight search, which proves to be the most efficient of the three.

Straight String Search

Before paying particular attention to efficiency, however, let us first present a straightforward searching algorithm. We shall call it straight string search. It is convenient to picture the pattern of length M being matched against the text of length N in position i. The index j numbers the elements of the pattern, and the pattern element p[j] is matched against the text element s[i+j].

  [Figure: a pattern of length M is matched against a text of length N in position i]

The predicate R(i) that describes a complete match of the pattern against the text characters in position i is formulated as follows:

  R(i) = Aj : 0 <= j < M : pj = si+j

The allowed values of i, where a match can occur, range from 0 to N-M inclusive. R is evaluated by repeatedly comparing the corresponding pairs of characters. This, evidently, amounts to a linear search for a non-matching pair:

  ~R(i) = ~(Aj : 0 <= j < M : pj = si+j) = Ej : 0 <= j < M : pj # si+j

Therefore R(i) is easily formulated as a function procedure:
  PROCEDURE R (i: INTEGER): BOOLEAN;
    VAR j: INTEGER;
  BEGIN (* 0 <= i <= N-M *)
    j := 0;
    WHILE (j < M) & (p[j] = s[i+j]) DO INC(j) END ;
    RETURN ~(j < M)
  END R

Let the result be the index i which points to the first occurrence of a match of the pattern within the string s. Then R(i) should hold. In addition, R(k) must be false for all k < i. Denote the latter condition as Q(i):

  Q(i) = Ak : 0 <= k < i : ~R(k)

With the problem thus formulated, a linear search suggests itself (see the section on linear search):

  i := 0;
  WHILE (i <= N-M) & ~R(i) DO INC(i) END

The invariant of this loop is Q(i), which holds both before the instruction INC(i) and, thanks to the second operand of the guard, after it.

An advantage of this algorithm is the transparency of its logic: the two loops are completely decoupled, and one of them is hidden inside the function procedure R. However, this same property may also be a disadvantage. Firstly, the additional procedure call at each step of a potentially long loop may be too costly in such a basic operation as string search. Secondly, the more sophisticated algorithms considered in the subsequent sections make use of the information obtained in the inner loop in order to increase i in the outer loop by a value larger than 1, so that the two loops are no longer independent. One could eliminate the procedure R by introducing a logical variable to store its result and by embedding the loop of R in the body of the main loop over i. However, the interaction of two loops via a logical variable loses the original transparency, which may cause errors as the program evolves.

A formulation of such loops is facilitated by the so-called Dijkstra loop, a multibranch version of the while loop with each branch having its own guard (see the Appendix). In the present case the two branches correspond to the steps in i and j, respectively.

Recall the figure above and introduce the predicate P(i, j) that expresses the match of the first j characters of the pattern with the characters of the text starting from position i:

  P(i, j) = Ak : 0 <= k < j : si+k = pk

Then R(i) = P(i, M).

The current search state is characterized by i and j. The invariant, i.e. the condition that holds after each increase of i or j, can be chosen as follows: in the positions below i there is no match, and in the position i there is a match of the first j characters of the pattern. This is formally expressed as

  Q(i) & P(i, j)

Evidently, j = M would mean that there is the required match of the entire pattern in the position i, whereas i > N-M would mean that the text contains no match at all. It is then sufficient to try to increment j by 1 in order to extend the matching segment and, if that is impossible, to move the pattern into a new position by incrementing i by 1 and setting j to 0, in order to restart verifying the match from the beginning of the pattern:

  i := 0; j := 0;
  WHILE the matching segment can be extended DO
    INC( j )
  ELSIF the pattern can be moved DO
    INC( i ); j := 0
  END

It remains to consider each step separately and to formulate accurately the conditions under which each step makes sense, i.e. preserves the invariant.
For the first branch the condition is (i <= N-M) & (j < M) & (s[i+j] = p[j]), which guarantees P(i, j) after j is incremented. For the second branch, the last operand of this conjunction must have inequality instead of equality, which implies ~R(i) and guarantees Q(i) after i is incremented. Taking into account that the two guards are evaluated in their textual order, the last operand of the conjunction can be dropped in the second branch, and one arrives at the following program:

  i := 0; j := 0;
  WHILE (i <= N-M) & (j < M) & (s[i+j] = p[j]) DO
    INC( j )
  ELSIF (i <= N-M) & (j < M) DO
    INC( i ); j := 0
  END

After the loop terminates, the condition that is guaranteed to hold is the conjunction of the negations of all guards, i.e. (i > N-M) OR (j >= M). Moreover, from the structure of the loop it follows that the two operands cannot hold simultaneously, and j cannot exceed M. Then i > N-M means that there is no match anywhere in the text, whereas j = M means that R(i) = P(i, M) is true, i.e. a complete match is found in position i.
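The two-branch loop translates directly into a single while loop with an if/else inside, since exactly one guard is taken per step. The following Python sketch (the name straight_search is not from the text) mirrors the derivation above.

  def straight_search(s, p):
      """Return the least i with s[i:i+len(p)] == p, or -1 if there is none."""
      N, M = len(s), len(p)
      i, j = 0, 0
      while i <= N - M and j < M:
          if s[i + j] == p[j]:
              j += 1              # extend the matching segment: P(i, j) is kept
          else:
              i += 1              # no match at position i: Q(i) is kept
              j = 0
      return i if j == M else -1

  assert straight_search("hoola-hoola girls like hooligans", "hooligan") == 23
  assert straight_search("aaaab", "aaab") == 1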
Analysis of straight string search. This algorithm operates quite effectively if we can assume that a mismatch between character pairs occurs after at most a few comparisons in the inner loop. This is likely to be the case if the cardinality of the item type is large. For text searches with a character set size of 128 we may well assume that a mismatch occurs after inspecting only 1 or 2 characters. Nevertheless, the worst case performance is rather alarming. Consider, for example, that the string consists of N-1 A's followed by a single B, and that the pattern consists of M-1 A's followed by a B. Then in the order of N*M comparisons are necessary to find the match at the end of the string. As we shall subsequently see, there fortunately exist methods that drastically improve this worst case behaviour.

The Knuth-Morris-Pratt String Search

Around 1970, D.E. Knuth, J.H. Morris and V.R. Pratt invented an algorithm that requires essentially in the order of N character comparisons only, even in the worst case [1-8]. The new algorithm is based on the observation that by starting the next pattern comparison at its beginning each time, we may be discarding valuable information gathered during previous comparisons. After a partial match of the beginning of the pattern with corresponding characters in the string, we indeed know the last part of the string, and perhaps could have precompiled some data (from the pattern) which could be used for a more rapid advance in the text string. The following example of a search for the word Hooligan illustrates the principle of the algorithm; the compared characters are the underlined ones. Note that each time two compared characters do not match, the pattern is shifted all the way, because a smaller shift could not possibly lead to a full match.

  Hoola-Hoola girls like Hooligans.
  Hooligan
   Hooligan
    Hooligan
     Hooligan
      Hooligan
        Hooligan
                        Hooligan

In contrast with the simple algorithm, here the comparison point (the position of the text element being compared with some element of the pattern) is never moved backwards. This suggests that we must abandon the notion that i always marks the current position of the first pattern character in the text. Rather, i will now store the comparison point; the variable j will, as before, point to the corresponding element of the pattern.

  [Figure: in the notation of the KMP algorithm, the alignment position of the pattern is i-j, not i as in the simple algorithm]

The central point of the algorithm is the comparison of s[i] and p[j]. If they are equal, then i and j are both increased by 1; otherwise the pattern must be shifted by assigning to j some smaller value D. The boundary case j = 0 shows that one should provide for a shift of the pattern entirely beyond the current comparison point (so that p[0] becomes aligned with s[i+1]); for this it is convenient to choose D = -1. The main loop of the algorithm takes the following form:

  i := 0; j := 0;
  WHILE (i < N) & (j < M) & ((j < 0) OR (s[i] = p[j])) DO
    INC( i ); INC( j )
  ELSIF (i < N) & (j < M) DO (* (j >= 0) & (s[i] # p[j]) *)
    j := D
  END

This formulation is admittedly not quite complete, because it contains an unspecified shift value D. We shall return to it shortly, but first point out that the invariant here is chosen the same as in the simple algorithm; in the new notation it is Q(i-j) & P(i-j, j).

The post condition of the loop, evaluated as the conjunction of the negations of all guards, is given by the expression (i >= N) OR (j >= M), but in reality only the equalities can occur. If the algorithm terminates because j = M, the term P(i-j, j) of the invariant implies P(i-M, M) = R(i-M), that is, a match at position i-M. Otherwise it terminates with i = N, and, since j < M, the first term of the invariant, Q(i-j), implies that no match exists at all.

We must now demonstrate that the algorithm never falsifies the invariant. It is easy to show that it is established at the beginning with the values i = j = 0. Let us first investigate the effect of the two statements incrementing i and j by 1 in the first branch. They evidently do not falsify Q(i-j), since the difference i-j remains unchanged; nor do they falsify P(i-j, j), thanks to the equality in the guard (see the definition of P). As to the second branch, we shall simply postulate that the value D always be such that replacing j by D will maintain the invariant.

Provided that D < j, the assignment j := D represents a shift of the pattern to the right by j-D positions. Naturally, we wish this shift to be as large as possible, i.e. D to be as small as possible.

  [Figure: the assignment j := D shifts the pattern by j-D positions]
Evidently the condition P(i-D, D) & Q(i-D) must hold before assigning D to j, if the invariant P(i-j, j) & Q(i-j) is to hold thereafter. This precondition is therefore our guideline for finding an appropriate expression for D. Along with the condition P(i-j, j) & Q(i-j), which is assumed to hold prior to the assignment (all subsequent reasoning concerns this point of the program), the key observation is that, thanks to P(i-j, j), we know that

  p0 ... pj-1 = si-j ... si-1

(we had just scanned the first j characters of the pattern and found them to match). Therefore the condition P(i-D, D) with D <= j, i.e.

  p0 ... pD-1 = si-D ... si-1

translates into an equation for D:

  p0 ... pD-1 = pj-D ... pj-1

As to Q(i-D), this condition follows from Q(i-j) provided that ~R(i-k) holds for k = D+1 ... j. The validity of ~R(i-j), i.e. for k = j, is guaranteed by the inequality s[i] # p[j]. Although the conditions ~R(i-k) for k = D+1 ... j-1 cannot be evaluated from the scanned text fragment alone, one can evaluate the sufficient conditions ~P(i-k, k). Expanding them and taking into account the already found matches between the elements of s and p, we obtain the condition

  p0 ... pk-1 # pj-k ... pj-1   for all k = D+1 ... j-1

that is, D must be the maximal solution of the above equation. The figure below illustrates the computation of D.

  [Figure: partial pattern matches and the computation of D]

If there is no solution for D, then there is no match in positions below i+1, and we set D := -1. Such a situation is shown in the upper part of the figure.

The last example in the figure suggests that we can do even slightly better: had the character pj been an A instead of an F, we would know that the corresponding string character could not possibly be an A, and the shift of the pattern with D = -1 would have to be performed in the next loop iteration anyway (see the lower part of the next figure).
Therefore we may impose the additional condition pD # pj when solving for D. This allows us to fully utilize the information from the inequality in the guard of this loop branch.

  [Figure: shifting the pattern past the position of the last compared character]

The essential result is that the value D is apparently determined by the pattern alone and does not depend on the text string. We shall denote D for a given j as dj. The auxiliary table d may be computed before starting the actual search; this computation amounts to a precompilation of the pattern. This effort is evidently only worthwhile if the text is considerably longer than the pattern (M << N). If multiple occurrences of the same pattern are to be found, the same values of d can be reused. So the computation of dj is the search for the longest matching sequence

  p0 ... pd[j]-1 = pj-d[j] ... pj-1

with the additional constraint pd[j] # pj. Evidently, the computation of dj presents us with the first application of string search, and we may as well use the fast KMP version itself.

  PROCEDURE Search (VAR p, s: ARRAY OF CHAR; m, n: INTEGER; VAR r: INTEGER);
    (*search for pattern p of length m in text s of length n; m <= Mmax*)
    (*if p is found, then r indicates the position in s, otherwise r = -1*)
    VAR i, j, k: INTEGER;
      d: ARRAY Mmax OF INTEGER;
  BEGIN
    (*compute d from p*)
    d[0] := -1;
    IF p[0] # p[1] THEN d[1] := 0 ELSE d[1] := -1 END ;
    j := 1; k := 0;
    WHILE (j < m-1) & (k >= 0) & (p[j] # p[k]) DO
      k := d[k]
    ELSIF j < m-1 DO (* (k < 0) OR (p[j] = p[k]) *)
      INC(j); INC(k);
      IF p[j] # p[k] THEN d[j] := k ELSE d[j] := d[k] END
    END ;
    (* d[j] now equals the shift value D for position j *)

    (*search proper*)
    i := 0; j := 0;
    WHILE (j >= 0) & (s[i] # p[j]) DO
      j := d[j]
    ELSIF (j < m) & (i < n) DO
      INC(i); INC(j)
    END ;
    IF j = m THEN r := i-m ELSE r := -1 END
  END Search

Analysis of KMP search. The exact analysis of the performance of KMP search is, like the algorithm itself, very intricate. In [1-8] its inventors prove that the number of character comparisons is in the order of M+N, which is a substantial improvement over the M*N of the straight search. They also point out the welcome property that the scanning pointer i never backs up, whereas in the straight string search the scan always begins at the first pattern character after a mismatch and therefore may involve characters that had actually been scanned already. This may cause awkward problems when the string is read from secondary storage, where backing up is costly. Even when the input is buffered, the pattern may be such that the backing up extends beyond the buffer contents.
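A compact Python sketch of the same idea follows; the names kmp_search and fail are not from the text. It uses the common formulation in which fail[j] is the length of the longest proper prefix of p[0..j] that is also a suffix of it; this failure table plays the role of the shift table d above (without the refinement pD # pj), and the comparison point i never moves backwards.

  def kmp_search(s, p):
      """Return the least index of p in s, or -1; about len(s)+len(p) comparisons."""
      M, N = len(p), len(s)
      # precompile the pattern: fail[j] = length of the longest proper border of p[:j+1]
      fail = [0] * M
      k = 0
      for j in range(1, M):
          while k > 0 and p[j] != p[k]:
              k = fail[k - 1]
          if p[j] == p[k]:
              k += 1
          fail[j] = k
      # search proper
      j = 0
      for i in range(N):
          while j > 0 and s[i] != p[j]:
              j = fail[j - 1]          # shift the pattern, keep the comparison point i
          if s[i] == p[j]:
              j += 1
          if j == M:
              return i - M + 1         # complete match ending at position i
      return -1

  assert kmp_search("hoola-hoola girls like hooligans", "hooligan") == 23
  assert kmp_search("aaaaa", "ab") == -1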
The Boyer-Moore String Search

The clever scheme of the KMP search yields genuine benefits only if a mismatch was preceded by a partial match of some length, for only in this case is the pattern shift increased to more than 1. Unfortunately, this is the exception rather than the rule: matches occur much more seldom than mismatches. Therefore the gain from the KMP strategy is marginal in most cases of normal text searching. The method to be discussed here improves performance not only in the worst case, but also in the average case. It was invented by R.S. Boyer and J.S. Moore around 1975, and we shall call it BM search. We shall present a simplified version of BM search rather than the one given by Boyer and Moore.

BM search is based on the unconventional idea of starting to compare characters at the end of the pattern rather than at the beginning. As in the case of KMP search, the pattern is precompiled into a table d before the actual search starts. Let, for every character x in the character set, dx be the distance of the rightmost occurrence of x in the pattern from its end. Now assume that a mismatch between string and pattern was discovered. Then the pattern can immediately be shifted to the right by d[s[i-1]] positions, where s[i-1] is the text character aligned with the last character of the pattern; this amount is quite likely to be greater than 1. If that character does not occur in the pattern at all, the shift is even greater, namely equal to the entire pattern's length. The following example illustrates this process.

  Hoola-Hoola girls like Hooligans.
  Hooligan
       Hooligan
            Hooligan
                 Hooligan
                        Hooligan

Since individual character comparisons now proceed from right to left, the following, slightly modified versions of the predicates P, R and Q are more convenient:

  P(i, j) = Ak : j <= k < M : si-M+k = pk
  R(i) = P(i, 0)
  Q(i) = Ak : M <= k < i : ~R(k)

The loop invariant has the form Q(i) & P(i, j). It is convenient to define k = i-M+j. Then the BM algorithm can be formulated as follows.

  i := M; j := M; k := i;
  WHILE (j > 0) & (i <= N) & (s[k-1] = p[j-1]) DO
    DEC(k); DEC(j)
  ELSIF (j > 0) & (i < N) DO
    i := i + d[ORD(s[i-1])]; j := M; k := i
  END

The indices satisfy 0 <= j <= M, M <= i <= N and i-M <= k <= i. Therefore, termination with j = 0 implies P(i, 0) = R(i), i.e. a match at position i-M. Termination with j > 0 demands that i = N; hence Q(i) & (i = N) implies Q(N), signalling that no match exists.

Of course we still have to convince ourselves that Q(i) and P(i, j) are indeed invariants of the two repetitions. They are trivially satisfied when the repetition starts, since Q(M) and P(x, M) are always true.

Consider the first branch. Simultaneously decrementing k and j does not affect Q(i); and, since s[k-1] = p[j-1] had been established and P(i, j) holds prior to decrementing j, P(i, j) holds after it as well.

In the second branch it is sufficient to show that the statement i := i + d[s[i-1]] never falsifies the invariant Q(i), because P(i, M) is satisfied automatically after the remaining assignments. Q(i) is satisfied after incrementing i provided that, before the assignment, Q(i + d[s[i-1]]) is guaranteed. Since we know that Q(i) holds, it suffices to establish ~R(i+h) for h = 1 ... d[s[i-1]]-1. We now recall that dx is defined as the distance of the rightmost occurrence of x in the pattern from the end. This is formally expressed as

  Ak : M-dx <= k < M-1 : pk # x

Substituting s[i-1] for x, we obtain

  Ak : M-d[s[i-1]] <= k < M-1 : pk # s[i-1]
  Ah : 1 <= h <= d[s[i-1]]-1 : pM-1-h # s[i-1]
  Ah : 1 <= h <= d[s[i-1]]-1 : ~R(i+h)

The following program includes the presented, simplified Boyer-Moore strategy in a setting similar to that of the preceding KMP search program.

  PROCEDURE Search (VAR s, p: ARRAY OF CHAR; m, n: INTEGER; VAR r: INTEGER);
    (*search for pattern p of length m in text s of length n*)
    (*if p is found, then r indicates the position in s, otherwise r = -1*)
    VAR i, j, k: INTEGER;
      d: ARRAY 128 OF INTEGER;
  BEGIN
    FOR i := 0 TO 127 DO d[i] := m END ;
    FOR j := 0 TO m-2 DO d[ORD(p[j])] := m-j-1 END ;
    i := m; j := m; k := i;
    WHILE (j > 0) & (i <= n) & (s[k-1] = p[j-1]) DO
      DEC(k); DEC(j)
    ELSIF (j > 0) & (i < n) DO
      i := i + d[ORD(s[i-1])]; j := m; k := i
    END ;
    IF j <= 0 THEN r := k ELSE r := -1 END
  END Search

Analysis of Boyer-Moore search. The original publication of this algorithm [1-9] contains a detailed analysis of its performance. The remarkable property is that in all but especially construed cases it requires substantially fewer than N comparisons. In the luckiest case, where the last character of the pattern always hits an unequal character of the text, the number of comparisons is N/M.
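A Python sketch of this simplified Boyer-Moore strategy (only the bad-character table d, without a second table) follows; the name bm_search is not from the text. As in the program above, d[x] is the distance of the rightmost occurrence of x in p[0..M-2] from the end of the pattern, and the comparison runs from right to left.

  def bm_search(s, p):
      """Simplified Boyer-Moore: return the least index of p in s, or -1."""
      N, M = len(s), len(p)
      d = {c: M for c in set(s)}       # default shift: the whole pattern length
      for j in range(M - 1):           # the last pattern character is excluded
          d[p[j]] = M - 1 - j
      i = M                            # i is the text position just past the window
      while i <= N:
          k, j = i, M
          while j > 0 and s[k - 1] == p[j - 1]:
              k -= 1                   # compare from right to left
              j -= 1
          if j == 0:
              return k                 # match starts at k = i - M
          i += d.get(s[i - 1], M)      # shift by the bad-character distance
      return -1

  assert bm_search("hoola-hoola girls like hooligans", "hooligan") == 23
  assert bm_search("abcabc", "cba") == -1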
The authors provide several ideas on possible further improvements. One is to combine the strategy explained above, which provides greater shifting steps when a mismatch is present, with the Knuth-Morris-Pratt strategy, which allows larger shifts after detection of a (partial) match. This method requires two precomputed tables: d1 is the table used above, and d2 is the table corresponding to the one of the KMP algorithm. The step taken is then the larger of the two, both indicating that no smaller step could possibly lead to a match. We refrain from further elaborating the subject, because the additional complexity of the table generation and of the search itself does not seem to yield any appreciable efficiency gain. In fact, the additional overhead is larger and casts some uncertainty on whether the sophisticated extension is an improvement or a deterioration.

Exercises

1.1. Assume that the cardinalities of the standard types INTEGER, REAL and CHAR are denoted by cint, creal and cchar. What are the cardinalities of the following data types, defined as examples in this chapter: Complex, Date, Person, Row, Card, Name?

1.2. Which are the instruction sequences (on your computer) for the following: (a) fetch and store operations for an element of packed records and arrays? (b) set operations, including the test for membership?

1.3. What are the reasons for defining certain sets of data as sequences instead of arrays?

1.4. Given is a railway timetable listing the daily services on several lines of a railway system. Find a representation of these data in terms of arrays, records, or sequences which is suitable for lookup of arrival and departure times, given a certain station and the desired direction of the train.

1.5. Given is a text T in the form of a sequence, and lists of a small number of words in the form of two arrays A and B. Assume that words are short arrays of characters of a small and fixed maximum length. Write a program that transforms the text T into a text S by replacing each occurrence of a word Ai by its corresponding word Bi.

1.6. Compare the following three versions of the binary search with the one presented in the text. Which of the three programs are correct? Determine the relevant invariants. Which versions are more efficient? We assume the following variables and the constant N > 0:

  VAR i, j, k, x: INTEGER;
    a: ARRAY N OF INTEGER;

Program A:

  i := 0; j := N-1;
  REPEAT k := (i+j) DIV 2;
    IF a[k] < x THEN i := k ELSE j := k END
  UNTIL (a[k] = x) OR (i > j)

Program B:

  i := 0; j := N-1;
  REPEAT k := (i+j) DIV 2;
    IF x < a[k] THEN j := k-1 END ;
    IF a[k] < x THEN i := k+1 END
  UNTIL i > j
Program C:

  i := 0; j := N-1;
  REPEAT k := (i+j) DIV 2;
    IF x < a[k] THEN j := k ELSE i := k+1 END
  UNTIL i > j

Hint: all programs must terminate with ak = x, if such an element exists, or with ak # x, if there exists no element with value x.

1.7. A company organizes a poll to determine the success of its products. Its products are records and tapes of hits, and the most popular hits are to be broadcast in a hit parade. The polled population is to be divided into four categories according to sex and age (say, less than or equal to 20, and older than 20). Every person is asked to name five hits. Hits are identified by the numbers 1 to N (say, N = 30). The results of the poll are to be appropriately encoded as a sequence of characters. Hint: use the procedures Read and ReadInt to read the values of the poll.

  TYPE hit = INTEGER;
    reponse = RECORD name, firstname: Name;
        male: BOOLEAN; age: INTEGER;
        choice: ARRAY 5 OF hit
      END ;
  VAR poll: Files.File

This file is the input to a program which computes the following results:

1. A list of hits in the order of their popularity. Each entry consists of the hit number and the number of times it was mentioned in the poll. Hits that were never mentioned are omitted from the list.

2. Four separate lists with the names and first names of all respondents who had mentioned in first place one of the three hits most popular in their category.

The five lists are to be preceded by suitable titles.

References

1-1. O.-J. Dahl, E.W. Dijkstra, C.A.R. Hoare. Structured Programming. F. Genuys, Ed. New York: Academic Press, 1972.
1-2. C.A.R. Hoare. In Structured Programming [1-1], pp. 83-174.
1-3. K. Jensen and N. Wirth. Pascal: User Manual and Report. Springer-Verlag, 1974.
1-4. N. Wirth. Program development by stepwise refinement. Comm. ACM, 14, No. 4 (1971), 221-227.
1-5. N. Wirth. Programming in Modula-2. Springer-Verlag, 1982.
1-6. N. Wirth. On the composition of well-structured programs. Computing Surveys, 6, No. 4 (1974), 247-259.
1-7. C.A.R. Hoare. The monitor: an operating system structuring concept. Comm. ACM, 17, 10 (Oct. 1974), 549-557.
1-8. D.E. Knuth, J.H. Morris, V.R. Pratt. Fast pattern matching in strings. SIAM J. Comput., 6, 2 (June 1977), 323-350.
1-9. R.S. Boyer and J.S. Moore. A fast string searching algorithm. Comm. ACM, 20, 10 (Oct. 1977), 762-772.
Sorting

Introduction

The primary purpose of this chapter is to provide an extensive set of examples illustrating the use of the data structures introduced in the preceding chapter, and to show how the choice of structure for the underlying data profoundly influences the algorithms that perform a given task. Sorting is also a good example to show that such a task may be performed according to many different algorithms, each one having certain advantages and disadvantages that have to be weighed against each other in the light of the particular application.

Sorting is generally understood to be the process of rearranging a given set of objects in a specific order. The purpose of sorting is to facilitate the later search for members of the sorted set. As such it is an almost universally performed, fundamental activity. Objects are sorted in telephone books, in income tax files, in tables of contents, in libraries, in dictionaries, in warehouses, and almost everywhere that stored objects have to be searched and retrieved. Even small children are taught to put their things "in order", and they are confronted with some sort of sorting long before they learn anything about arithmetic.

Hence, sorting is a relevant and essential activity, particularly in data processing. What else would be easier to sort than data! Nevertheless, our primary interest in sorting is devoted to the even more fundamental techniques used in the construction of algorithms. There are not many techniques that do not occur somewhere in connection with sorting algorithms. In particular, sorting is an ideal subject for demonstrating a great diversity of algorithms, all having the same purpose, many of them being optimal in some sense, and most of them having advantages over others. It is therefore an ideal subject for demonstrating the necessity of performance analysis of algorithms. The example of sorting is, moreover, well suited for showing how a very significant gain in performance may be obtained by the development of sophisticated algorithms when obvious methods are readily available.

The dependence of the choice of an algorithm on the structure of the data to be processed, a ubiquitous phenomenon, is so profound in the case of sorting that sorting methods are generally classified into two categories, namely sorting of arrays and sorting of (sequential) files. The two classes are often called internal and external sorting, because arrays are stored in the fast, high-speed, random-access "internal" store of computers, while files are appropriate on the slower, but more spacious "external" stores based on mechanically moving devices (disks and tapes). The importance of this distinction is obvious from the example of sorting numbered cards. Structuring the cards as an array corresponds to laying them out in
25,320
front of the sorter so that each card is visible and individually accessible (see the first figure below). Structuring the cards as a file, however, implies that from each pile only the card on the top is visible (see the second figure).

  [Figure: the sorting of an array]

  [Figure: the sorting of a file]

Such a restriction will evidently have serious consequences for the sorting method to be used, but it is unavoidable if the number of cards to be laid out is larger than the available table.

Before proceeding, we introduce some terminology and notation to be used throughout this chapter. If we are given n items

  a0, a1, ..., an-1

sorting consists of permuting these items into an order

  ak0, ak1, ..., ak[n-1]

such that, given an ordering function f,

  f(ak0) <= f(ak1) <= ... <= f(ak[n-1])

Ordinarily, the ordering function is not evaluated according to a specified rule of computation but is stored as an explicit component (field) of each item. Its value is called the key of the item. As a consequence, the record structure is particularly well suited to represent items and might, for example, be declared as follows:

  TYPE Item = RECORD key: INTEGER;
      (*other components declared here*)
    END

The other components represent relevant data about the items in the collection; the key merely assumes the purpose of identifying the items. As far as our sorting algorithms are concerned, however, the key is the only relevant component, and there is no need to define any particular remaining components. In the following discussions we shall therefore discard any associated information and assume that the type Item is defined as INTEGER. This choice of the key type is somewhat arbitrary; evidently, any type on which a
25,321
total ordering relation is defined could be used just as well.

A sorting method is called stable if the relative order of items with equal keys remains unchanged by the sorting process. Stability of sorting is often desirable if items are already ordered (sorted) according to some secondary keys, i.e. properties not reflected by the (primary) key itself.

This chapter is not to be regarded as a comprehensive survey of sorting techniques. Rather, some selected, specific methods are exemplified in greater detail. For a thorough treatment of sorting, the interested reader is referred to the excellent and comprehensive compendium by Knuth (see also Lorin).

Sorting Arrays

The predominant requirement that has to be made of sorting methods on arrays is an economical use of the available store. This implies that the permutation of items which brings them into order has to be performed in situ, and that methods which transport items from an array a to a result array b are intrinsically of minor interest. Having thus restricted our choice of methods among the many possible solutions by the criterion of economy of storage, we proceed to a first classification according to their efficiency, i.e. their economy of time. A good measure of efficiency is obtained by counting the number C of needed key comparisons and the number M of moves (transpositions) of items. These numbers are functions of the number n of items to be sorted. Whereas good sorting algorithms require in the order of n*log(n) comparisons, we first discuss several simple and obvious sorting techniques, called straight methods, all of which require in the order of n^2 comparisons of keys. There are three good reasons for presenting straight methods before proceeding to the faster algorithms:

1. Straight methods are particularly well suited for elucidating the characteristics of the major sorting principles.

2. Their programs are easy to understand and are short. Remember that programs occupy storage as well!

3. Although sophisticated methods require fewer operations, these operations are usually more complex in their details; consequently, straight methods are faster for sufficiently small n, although they must not be used for large n.

Sorting methods that sort items in situ can be classified into three principal categories according to their underlying method:

- sorting by insertion
- sorting by selection
- sorting by exchange

These three principles will now be examined and compared. The procedures operate on a global variable a whose components are to be sorted in situ, i.e. without requiring additional, temporary storage. The components are the keys themselves. We discard other data represented by the record type Item, thereby simplifying matters. In all algorithms to be developed in this chapter we will assume the presence of an array a and a constant n, the number of elements of a:

  TYPE Item = INTEGER;
  VAR a: ARRAY n OF Item
25,322
Sorting by Straight Insertion

This method is widely used by card players. The items (cards) are conceptually divided into a destination sequence a0 ... ai-1 and a source sequence ai ... an-1. In each step, starting with i = 1 and incrementing i by unity, the i-th element of the source sequence is picked and transferred into the destination sequence by inserting it at the appropriate place. The process is shown for eight keys chosen at random; each row gives the state after the i-th step:

  initial keys:  44  55  12  42  94  18  06  67
  i = 1:         44  55  12  42  94  18  06  67
  i = 2:         12  44  55  42  94  18  06  67
  i = 3:         12  42  44  55  94  18  06  67
  i = 4:         12  42  44  55  94  18  06  67
  i = 5:         12  18  42  44  55  94  06  67
  i = 6:         06  12  18  42  44  55  94  67
  i = 7:         06  12  18  42  44  55  67  94

  Table: a sample process of straight insertion sorting.

The algorithm of straight insertion is

  FOR i := 1 TO n-1 DO
    x := a[i];
    insert x at the appropriate place in a0 ... ai-1
  END

In the process of actually finding the appropriate place, it is convenient to alternate between comparisons and moves, i.e. to let x sift down by comparing x with the next item aj and either inserting x or moving aj to the right and proceeding to the left. We note that there are two distinct conditions that may cause the termination of the sifting-down process:

1. An item aj is found with a key less than the key of x.
2. The left end of the destination sequence is reached.

  PROCEDURE StraightInsertion;
    VAR i, j: INTEGER; x: Item;
  BEGIN
    FOR i := 1 TO n-1 DO
      x := a[i]; j := i;
      WHILE (j > 0) & (x < a[j-1]) DO a[j] := a[j-1]; DEC(j) END ;
      a[j] := x
    END
  END StraightInsertion

Analysis of straight insertion. The number Ci of key comparisons in the i-th sift is at most i, at least 1, and, assuming that all permutations of the n keys are equally probable, about (i+1)/2 in the average. The number Mi of moves (assignments of items) is Ci + 2. Therefore, the total numbers of comparisons and
25,323
moves are

  Cmin = n - 1              Mmin = 3*(n - 1)
  Cave = (n^2 + n - 2)/4    Mave = (n^2 + 9n - 10)/4
  Cmax = (n^2 - n)/2        Mmax = (n^2 + 3n - 4)/2

The minimal numbers occur if the items are initially in order; the worst case occurs if the items are initially in reverse order. In this sense, sorting by insertion exhibits a truly natural behavior. It is plain that the given algorithm also describes a stable sorting process: it leaves the order of items with equal keys unchanged.

The algorithm of straight insertion is easily improved by noting that the destination sequence a0 ... ai-1, in which the new item has to be inserted, is already ordered. Therefore, a faster method of determining the insertion point can be used. The obvious choice is a binary search that samples the destination sequence in the middle and continues bisecting until the insertion point is found. The modified sorting algorithm is called binary insertion.

  PROCEDURE BinaryInsertion;
    VAR i, j, m, L, R: INTEGER; x: Item;
  BEGIN
    FOR i := 1 TO n-1 DO
      x := a[i]; L := 0; R := i;
      WHILE L < R DO
        m := (L+R) DIV 2;
        IF a[m] <= x THEN L := m+1 ELSE R := m END
      END ;
      FOR j := i TO R+1 BY -1 DO a[j] := a[j-1] END ;
      a[R] := x
    END
  END BinaryInsertion

Analysis of binary insertion. The insertion position is found when L = R. Thus, the search interval must in the end be of length 1, and this involves halving an interval of length i about log(i) times. Thus,

  C = Si: 1 <= i < n : ceil(log(i))

We approximate this sum by the integral

  Int (1:n) log(x) dx = n*(log(n) - c) + c

where c = log(e) = 1/ln(2) = 1.44269...

The number of comparisons is essentially independent of the initial order of the items. However, because of the truncating character of the division involved in bisecting the search interval, the true number of comparisons needed with i items may be up to 1 higher than expected. The nature of this bias is such that insertion positions at the low end are, on the average, located slightly faster than those at the high end, thereby favoring those cases in which the items are originally highly out of order. In fact, the minimum number of comparisons is needed if the items are initially in reverse order, and the maximum if they are already in order. Hence, this is a case of unnatural behavior of a sorting algorithm. The number of comparisons is approximately

  C = n*(log(n) - log(e) + 0.5)

Unfortunately, the improvement obtained by using a binary search method applies only to the number of comparisons and not to the number of necessary moves. In fact, since moving items, i.e. keys and their associated information, is in general considerably more time-consuming than comparing two keys, the improvement is by no means drastic: the important term M is still of the order n^2. And, in fact, sorting the already sorted array takes more time than does straight insertion with sequential search.
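A Python sketch of binary insertion follows (the name binary_insertion is not from the text). The binary search locates the leftmost position R with a[R] > x, which keeps the method stable, and the shifting loop then makes the moves explicit; this is exactly where the n^2 behaviour remains.

  def binary_insertion(a):
      """Sort the list a in situ by insertion with a binary search for the place."""
      for i in range(1, len(a)):
          x = a[i]
          L, R = 0, i
          while L < R:                 # find the leftmost position with a[m] > x
              m = (L + R) // 2
              if a[m] <= x:
                  L = m + 1
              else:
                  R = m
          for j in range(i, R, -1):    # the moves: still of the order n*n
              a[j] = a[j - 1]
          a[R] = x
      return a

  assert binary_insertion([44, 55, 12, 42, 94, 18, 6, 67]) == [6, 12, 18, 42, 44, 55, 67, 94]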
25,324
This example demonstrates that an "obvious improvement" often has much less drastic consequences than one is at first inclined to estimate, and that in some cases (which do occur) the "improvement" may actually turn out to be a deterioration. After all, sorting by insertion does not appear to be a very suitable method for digital computers: insertion of an item with the subsequent shifting of an entire row of items by a single position is uneconomical. One should expect better results from a method in which moves are performed only upon single items and over longer distances. This idea leads to sorting by selection.

Sorting by Straight Selection

This method is based on the following principle:

1. Select the item with the least key.
2. Exchange it with the first item.
3. Then repeat these operations with the remaining n-1 items, then with n-2 items, until only one item, the largest, is left.

This method is shown on the same eight keys as in the previous table; each row gives the state after the i-th selection step:

  initial keys:  44  55  12  42  94  18  06  67
  i = 0:         06  55  12  42  94  18  44  67
  i = 1:         06  12  55  42  94  18  44  67
  i = 2:         06  12  18  42  94  55  44  67
  i = 3:         06  12  18  42  94  55  44  67
  i = 4:         06  12  18  42  44  55  94  67
  i = 5:         06  12  18  42  44  55  94  67
  i = 6:         06  12  18  42  44  55  67  94

  Table: a sample process of straight selection sorting.

The algorithm is formulated as follows:

  FOR i := 0 TO n-2 DO
    assign the index of the least item of ai ... an-1 to k;
    exchange ai with ak
  END

This method, called straight selection, is in some sense the opposite of straight insertion: straight insertion considers in each step only the one next item of the source sequence and all items of the destination array in order to find the insertion point; straight selection considers all items of the source array in order to find the one with the least key, to be deposited as the one next item of the destination sequence.

  PROCEDURE StraightSelection;
    VAR i, j, k: INTEGER; x: Item;
  BEGIN
    FOR i := 0 TO n-2 DO
      k := i; x := a[i];
      FOR j := i+1 TO n-1 DO
        IF a[j] < x THEN k := j; x := a[k] END
      END ;
      a[k] := a[i]; a[i] := x
25,325
    END
  END StraightSelection

Analysis of straight selection. Evidently, the number C of key comparisons is independent of the initial order of keys. In this sense, this method may be said to behave less naturally than straight insertion. We obtain

  C = (n^2 - n)/2

The number M of moves is at least

  Mmin = 3*(n - 1)

in the case of initially ordered keys, and at most

  Mmax = n^2/4 + 3*(n - 1)

if initially the keys are in reverse order. In order to determine Mavg we make the following deliberations. The algorithm scans the array, comparing each element with the minimal value detected so far and, if it is smaller than that minimum, performing an assignment. The probability that the second element is less than the first is 1/2; this is also the probability of a new assignment to the minimum. The chance for the third element to be less than the first two is 1/3, the chance for the fourth to be the smallest is 1/4, and so on. Therefore the total expected number of assignments is Hn - 1, where Hn is the n-th harmonic number

  Hn = 1 + 1/2 + 1/3 + ... + 1/n

Hn can be expressed as

  Hn = ln(n) + g + 1/2n - 1/12n^2 + ...

where g = 0.577216... is Euler's constant. For sufficiently large n, we may ignore the fractional terms and therefore approximate the average number of assignments in the i-th pass as

  Fi = ln(i) + g + 1

The average number of moves Mavg in selection sort is then the sum of the Fi with i ranging from 1 to n:

  Mavg = n*(g + 1) + (Si: 1 <= i <= n : ln(i))

By further approximating the sum of the discrete terms by the integral

  Int (1:n) ln(x) dx = n*ln(n) - n + 1

we obtain the approximate value

  Mavg = n*(ln(n) + g)

We may conclude that in general the algorithm of straight selection is to be preferred over straight insertion, although in the cases in which keys are initially sorted or almost sorted, straight insertion is still somewhat faster.

Sorting by Straight Exchange

The classification of a sorting method is seldom entirely clear-cut. Both previously discussed methods can also be viewed as exchange sorts. In this section, however, we present a method in which the exchange of two items is the dominant characteristic of the process. The subsequent algorithm of straight exchanging is based on the principle of comparing and exchanging pairs of adjacent items until all items are sorted.

As in the previous method of straight selection, we make repeated passes over the array, each time sifting the least item of the remaining set to the left end of the array. If, for a change, we view the array as being in a vertical instead of a horizontal position and, with the help of some imagination, regard the items as bubbles in a water tank with weights according to their keys, then each pass over the array results in the
25,326
ascension of a bubble to its appropriate level of weight (see the table below). This method is widely known as Bubblesort.

  [Table: a sample of bubblesorting, showing the eight keys after each pass.]

  PROCEDURE BubbleSort;
    VAR i, j: INTEGER; x: Item;
  BEGIN
    FOR i := 1 TO n-1 DO
      FOR j := n-1 TO i BY -1 DO
        IF a[j-1] > a[j] THEN
          x := a[j-1]; a[j-1] := a[j]; a[j] := x
        END
      END
    END
  END BubbleSort

This algorithm easily lends itself to some improvements. The example in the table shows that the last three passes have no effect on the order of the items, because the items are already sorted. An obvious technique for improving this algorithm is to remember whether or not any exchange had taken place during a pass. A last pass without further exchange operations is then needed to determine that the algorithm may be terminated. However, this improvement may itself be improved by remembering not merely the fact that an exchange took place, but the position (index) of the last exchange. For example, it is plain that all pairs of adjacent items below this index are already in the desired order. Subsequent scans may therefore be terminated at this index instead of having to proceed to the predetermined lower limit i.

The careful programmer notices, however, a peculiar asymmetry: a single misplaced bubble in the heavy end of an otherwise sorted array will sift into order in a single pass, but a misplaced item in the light end will sink towards its correct position only one step in each pass. For example, the array

  12  18  42  44  55  67  94  06

is sorted by the improved Bubblesort in a single pass, but the array

  94  06  12  18  42  44  55  67

requires seven passes for sorting. This unnatural asymmetry suggests a third improvement: alternating the direction of consecutive passes. We appropriately call the resulting algorithm Shakersort. Its behavior is illustrated in the next table by applying it to the same eight keys.

  PROCEDURE ShakerSort;
    VAR j, k, L, R: INTEGER; x: Item;
  BEGIN
    L := 1; R := n-1; k := R;
    REPEAT
      FOR j := R TO L BY -1 DO
if [ - [jthen : [ - ] [ - : [ ] [ :xk : end endl : + for : to by + do if [ - [jthen : [ - ] [ - : [ ] [ :xk : end endr : - until end shakersort dir table an example of shakersort analysis of bubblesort and shakersort the number of comparisons in the straight exchange algorithm is ( )/ and the minimumaverageand maximum numbers of moves (assignments of itemsare mmin mavg *( )/ mmax *( )/ the analysis of the improved methodsparticularly that of shakersortis intricate the least number of comparisons is cmin - for the improved bubblesortknuth arrives at an average number of passes proportional to * / and an average number of comparisons proportional ( *( ln( )))/ but we note that all improvements mentioned above do in no way affect the number of exchangesthey only reduce the number of redundant double checks unfortunatelyan exchange of two items is generally more costly operation than comparison of keysour clever improvements therefore have much less profound effect than one would intuitively expect this analysis shows that the exchange sort and its minor improvements are inferior to both the insertion and the selection sortsand in factthe bubblesort has hardly anything to recommend it except its catchy name the shakersort algorithm is used with advantage in those cases in which it is known that the items are already almost in order rare case in practice it can be shown that the average distance that each of the items has to travel during sort is / places this figure provides clue in the search for improvedi more effective sorting methods all straight sorting methods essentially move each item by one position in each elementary step thereforethey are bound to require in the order such steps any improvement must be based on the principle of moving items over greater distances in single leaps
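Before turning to such improvements, the straight exchange scheme with both refinements described above (remembering the index of the last exchange and alternating the direction of consecutive passes) can be summarized in a few lines. The following sketch is written in Python rather than in the Oberon notation used in this chapter; it is an illustrative cross-check of the shakersort procedure, not part of the original program.

def shakersort(a):
    # l..r is the still-unsorted region; k records the index of the last exchange
    l, r = 1, len(a) - 1
    k = r
    while l <= r:
        # downward pass: sift light items toward the left end
        for j in range(r, l - 1, -1):
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                k = j
        l = k + 1
        # upward pass: sift heavy items toward the right end
        for j in range(l, r + 1):
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                k = j
        r = k - 1
    return a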
subsequentlythree improved methods will be discussednamelyone for each basic sorting methodinsertionselectionand exchange advanced sorting methods insertion sort by diminishing increment refinement of the straight insertion sort was proposed by shell in the method is explained and demonstrated on our standard example of eight items (see table firstall items that are four positions apart are grouped and sorted separately this process is called -sort in this example of eight itemseach group contains exactly two items after this first passthe items are regrouped into groups with items two positions apart and then sorted anew this process is called -sort finallyin third passall items are sorted in an ordinary sort or -sort -sort yields -sort yields -sort yields table an insertion sort with diminishing increments one may at first wonder if the necessity of several sorting passeseach of which involves all itemsdoes not introduce more work than it saves howevereach sorting step over chain either involves relatively few items or the items are already quite well ordered and comparatively few rearrangements are required it is obvious that the method results in an ordered arrayand it is fairly obvious that each pass profits from previous passes (since each -sort combines two groups sorted in the preceding -sortit is also obvious that any sequence of increments is acceptableas long as the last one is unitybecause in the worst case the last pass does all the work it ishowevermuch less obvious that the method of diminishing increments yields even better results with increments other than powers of the procedure is therefore developed without relying on specific sequence of increments the increments are denoted by ht- with the conditions ht- hi+ hi the algorithm is described by the procedure shellsort [ for procedure shellsortconst var ijkmsintegerxitemharray of integerbegin [ : [ : [ : [ : for : to - do : [ ]for : to - do : [ ] : -kwhile ( > ( [ ]do [ + : [ ] : - endif ( >kor ( > [ ]then (adens _sorts *
[ + : else [ + : [ ] [ : end end end end shellsort analysis of shellsort the analysis of this algorithm poses some very difficult mathematical problemsmany of which have not yet been solved in particularit is not known which choice of increments yields the best results one surprising facthoweveris that they should not be multiples of each other this will avoid the phenomenon evident from the example given above in which each sorting pass combines two chains that before had no interaction whatsoever it is indeed desirable that interaction between various chains takes place as often as possibleand the following theorem holdsif -sorted sequence is -sortedthen it remains -sorted knuth [ indicates evidence that reasonable choice of increments is the sequence (written in reverse order where hk- hk+ ht and log ( he also recommends the sequence where hk- hk + ht and log ( for the latter choicemathematical analysis yields an effort proportional to required for sorting items with the shellsort algorithm although this is significant improvement over we will not expound further on this methodsince even better algorithms are known tree sort the method of sorting by straight selection is based on the repeated selection of the least key among itemsthen among the remaining - itemsetc clearlyfinding the least key among items requires - comparisonsfinding it among - items needs - comparisonsetc and the sum of the first - integers is ( - )/ so how can this selection sort possibly be improvedit can be improved only by retaining from each scan more information than just the identification of the single least item for instancewith / comparisons it is possible to determine the smaller key of each pair of itemswith another / comparisons the smaller of each pair of such smaller keys can be selectedand so on with only - comparisonswe can construct selection tree as shown in fig and identify the root as the desired least key [ fig repeated selection among two keys the second step now consists of descending down along the path marked by the least key and eliminating it by successively replacing it by either an empty hole at the bottomor by the item at the alternative branch at intermediate nodes (see figs and againthe item emerging at the root of the tree has the (now secondsmallest key and can be eliminated after such selection stepsthe tree is empty ( full of holes)and the sorting process is terminated it should be noted that each of the
selection steps requires only log(ncomparisons thereforethe total selection process requires only on the order of *log(nelementary operations in addition to the steps required by the construction of the tree this is very significant improvement over the straight methods requiring stepsand even over shellsort that requires steps naturallythe task of bookkeeping has become more elaborateand therefore the complexity of individual steps is greater in the tree sort methodafter allin order to retain the increased amount of information gained from the initial passsome sort of tree structure has to be created our next task is to find methods of organizing this information efficiently fig selecting the least key fig refilling the holes of courseit would seem particularly desirable to eliminate the need for the holes that in the end populate the entire tree and are the source of many unnecessary comparisons moreovera way should be found to represent the tree of items in units of storageinstead of in units as shown above these goals are indeed achieved by method called heapsort by its inventor williams [ - ]it is plain that this method represents drastic improvement over more conventional tree sorting approaches heap is defined as sequence of keys hlhl+ hr ( > such that hi + and hi + for / - if binary tree is represented as an array as shown in fig then it follows that the sort trees in figs and are heapsand in particular that the element of heap is its least elementh min( hn- fig array viewed as binary tree
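With 0-based array indices, the sons of h[i] in this array view of the tree are h[2i+1] and h[2i+2]. The few lines of Python below (an illustration only, not part of the book's program) check the heap condition for a segment h[L..R] in the min-heap form used at this point of the text.

def is_heap(h, L=0, R=None):
    # true if h[i] <= h[2i+1] and h[i] <= h[2i+2] for all parent/child pairs within L..R
    if R is None:
        R = len(h) - 1
    for i in range(L, R // 2 + 1):
        for c in (2 * i + 1, 2 * i + 2):
            if c <= R and h[i] > h[c]:
                return False
    return True

print(is_heap([6, 20, 8, 42, 31, 28, 10]))   # True for this arbitrary example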
fig heap with elements fig key sifting through the heap let us now assume that heap with elements hl+ hr is given for some values and rand that new element has to be added to form the extended heap hl hr takefor examplethe initial heap shown in fig and extend the heap to the left by an element new heap is obtained by first putting on top of the tree structure and then by letting it sift down along the path of the smaller comparandswhich at the same time move up in the given example the value is first exchanged with then with and thus forming the tree shown in fig we now formulate this sifting algorithm as followsij are the pair of indices denoting the items to be exchanged during each sift step the reader is urged to convince himself that the proposed method of sifting actually preserves the heap invariants that define heap neat way to construct heap in situ was suggested by floyd it uses the sifting procedure shown below given is an array hn- clearlythe elements hm hn- (with div form heap alreadysince no two indices ij are such that + or + these elements form what may be considered as the bottom row of the associated binary tree (see fig among which no ordering relationship is required the heap is now extended to the leftwhereby in each step new element is included and properly positioned by sift this process is illustrated in table and yields the heap shown in fig procedure sift (lrinteger)var ijintegerxitembegin :lj : * + : [ ]if ( ( [ + [ ]then : + endwhile ( < ( [jxdo [ : [ ] :jj : * if ( ( [ + [ ]then : + end enda[ : end sift table constructing heap
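The sifting step and the in-situ construction that the table illustrates can be written compactly. The sketch below is a Python rendering, using the min-heap ordering assumed at this point of the text; it is offered only as a cross-check of the index manipulations, and the text's own formulation of the construction loop follows.

def sift(h, L, R):
    # let h[L] sink into the heap h[L..R], descending along the smaller of the two sons
    i, x = L, h[L]
    j = 2 * i + 1
    if j < R and h[j + 1] < h[j]:
        j += 1
    while j <= R and h[j] < x:
        h[i] = h[j]
        i = j
        j = 2 * i + 1
        if j < R and h[j + 1] < h[j]:
            j += 1
    h[i] = x

def build_heap(h):
    # Floyd's construction: h[n//2 .. n-1] already form the bottom row;
    # extend the heap to the left one element at a time
    n = len(h)
    for L in range(n // 2 - 1, -1, -1):
        sift(h, L, n - 1)
    return h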
consequentlythe process of generating heap of elements hn- in situ is described as followsl : div while do dec( )sift(ln- end in order to obtain not only partialbut full ordering among the elementsn sift steps have to followwhereby after each step the next (leastitem may be picked off the top of the heap once morethe question arises about where to store the emerging top elements and whether or not an in situ sort would be possible of course there is such solutionin each step take the last component (say xoff the heapstore the top element of the heap in the now free location of xand let sift down into its proper position the necessary - steps are illustrated on the heap of table the process is described with the aid of the procedure sift as followsr : - while do : [ ] [ : [ ] [ :xdec( )sift( rend table example of heapsort process the example of table shows that the resulting order is actually inverted thishowevercan easily be remedied by changing the direction of the ordering relations in the sift procedure this results in the following procedure heapsort (note that sift should actually be declared local to heapsort procedure sift (lrinteger)var ijintegerxitembegin :lj : * + : [ ]if ( ( [ja[ + ]then : + endwhile ( < ( [ ]do [ : [ ] :jj : * + if ( ( [ja[ + ]then : + end enda[ : end siftprocedure heapsortvar lrintegerxitembegin : div : - while do dec( )sift(lrend(adens _sorts *
while do : [ ] [ : [ ] [ :xdec( )sift(lrend end heapsort analysis of heapsort at first sight it is not evident that this method of sorting provides good results after allthe large items are first sifted to the left before finally being deposited at the far right indeedthe procedure is not recommended for small numbers of itemssuch as shown in the example howeverfor large nheapsort is very efficientand the larger isthe better it becomes -even compared to shellsort in the worst casethere are / sift steps necessarysifting items through log( / )log( / + )log( - positionswhere the logarithm (to the base is truncated to the next lower integer subsequentlythe sorting phase takes - siftswith at most log( - )log( - ) moves in additionthere are - moves for stashing the item from the top away at the right this argument shows that heapsort takes of the order of log(nmoves even in the worst possible case this excellent worst-case performance is one of the strongest qualities of heapsort it is not at all clear in which case the worst (or the bestperformance can be expected but generally heapsort seems to like initial sequences in which the items are more or less sorted in the inverse orderand therefore it displays an unnatural behavior the heap creation phase requires zero moves if the inverse order is present the average number of moves is approximately / log( )and the deviations from this value are relatively small partition sort after having discussed two advanced sorting methods based on the principles of insertion and selectionwe introduce third improved method based on the principle of exchange in view of the fact that bubblesort was on the average the least effective of the three straight sorting algorithmsa relatively significant improvement factor should be expected stillit comes as surprise that the improvement based on exchanges to be discussed subsequently yields the best sorting method on arrays known so far its performance is so spectacular that its inventorc hoarecalled it quicksort [ and quicksort is based on the recognition that exchanges should preferably be performed over large distances in order to be most effective assume that items are given in reverse order of their keys it is possible to sort them by performing only / exchangesfirst taking the leftmost and the rightmost and gradually progressing inward from both sides naturallythis is possible only if we know that their order is exactly inverse but something might still be learned from this example let us try the following algorithmpick any item at random (and call it )scan the array from the left until an item ai is found and then scan from the right until an item aj is found now exchange the two items and continue this scan and swap process until the two scans meet somewhere in the middle of the array the result is that the array is now partitioned into left part with keys less than (or equal toxand right part with keys greater than (or equal tox this partitioning process is now formulated in the form of procedure note that the relations and and <=whose negations in the while clause are with this change acts as sentinel for both scans procedure partitionvar ijintegerwxitembegin : : - select an item at randomrepeat while [ix do : + end
while [jdo : - endif < then : [ ] [ : [ ] [ :wi : + : - end until end partition as an exampleif the middle key is selected as comparand xthen the array of keys requires the two exchanges and to yield the partitioned array and the final index values and keys ai- are less or equal to key and keys aj+ an- are greater or equal to key consequentlythere are three partsnamely ak < ak < aki < < ak akj < - <ak the goal is to increase and decrease jso that the middle part vanishes this algorithm is very straightforward and efficient because the essential comparands ij and can be kept in fast registers throughout the scan howeverit can also be cumbersomeas witnessed by the case with identical keyswhich result in / exchanges these unnecessary exchanges might easily be eliminated by changing the scanning statements to while [ < do : + endwhile < [jdo : - end in this casehoweverthe choice element xwhich is present as member of the arrayno longer acts as sentinel for the two scans the array with all identical keys would cause the scans to go beyond the bounds of the array unless more complicated termination conditions were used the simplicity of the conditions is well worth the extra exchanges that occur relatively rarely in the average random case slight savinghowevermay be achieved by changing the clause controlling the exchange step to instead of < but this change must not be extended over the two statements : + : - which therefore require separate conditional clause confidence in the correctness of the partition algorithm can be gained by verifying that the ordering relations are invariants of the repeat statement initiallywith and - they are trivially trueand upon exit with jthey imply the desired result we now recall that our goal is not only to find partitions of the original array of itemsbut also to sort it howeverit is only small step from partitioning to sortingafter partitioning the arrayapply the same process to both partitionsthen to the partitions of the partitionsand so onuntil every partition consists of single item only this recipe is described as follows (note that sort should actually be declared local to quicksortprocedure sort (lrinteger)var ijintegerwxitembegin :lj :rx : [( +rdiv ]repeat (adens _sorts *
while [ix do : + endwhile [jdo : - endif < then : [ ] [ : [ ] [ :wi : + : - end until jif then sort(ljendif then sort(irend end sortprocedure quicksortbegin sort( - end quicksort procedure sort activates itself recursively such use of recursion in algorithms is very powerful tool and will be discussed further in chap in some programming languages of older proveniencerecursion is disallowed for certain technical reasons we will now show how this same algorithm can be expressed as non-recursive procedure obviouslythe solution is to express recursion as an iterationwhereby certain amount of additional bookkeeping operations become necessary the key to an iterative solution lies in maintaining list of partitioning requests that have yet to be performed after each steptwo partitioning tasks arise only one of them can be attacked directly by the subsequent iterationthe other one is stacked away on that list it isof courseessential that the list of requests is obeyed in specific sequencenamelyin reverse sequence this implies that the first request listed is the last one to be obeyedand vice versathe list is treated as pulsating stack in the following nonrecursive version of quicksorteach request is represented simply by left and right index specifying the bounds of the partition to be further partitioned thuswe introduce two array variables lowhighused as stacks with index the appropriate choice of the stack size will be discussed during the analysis of quicksort procedure nonrecursivequicksortconst var ijlrsintegerxwitemlowhigharray of integer(*index stack*begin : low[ : high[ : - repeat (*take top request from stack* :low[ ] :high[ ]dec( )repeat (*partition [la[ ]* :lj :rx : [( +rdiv ]repeat while [ix do : + endwhile [jdo : - endif < then : [ ] [ : [ ] [ :wi : + : - end until jif then (*stack request to sort right partition*inc( )low[ :ihigh[ : (adens _sorts *
endr : (*now and delimit the left partition*until > until end nonrecursivequicksort analysis of quicksort in order to analyze the performance of quicksortwe need to investigate the behavior of the partitioning process first after having selected bound xit sweeps the entire array henceexactly comparisons are performed the number of exchanges can be determind by the following probabilistic argument with fixed bound uthe expected number of exchange operations is equal to the number of elements in the left part of the partitionnamely umultiplied by the probability that such an element reached its place by an exchange an exchange had taken place if the element had previously been part of the right partitionthe probablity for this is ( - )/ the expected number of exchanges is therefore the average of these expected values over all possible bounds um [su < < - *( - )]/ *( - )/ ( )/ ( / )/ assuming that we are very lucky and always happen to select the median as the boundthen each partitioning process splits the array in two halvesand the number of necessary passes to sort is log(nthe resulting total number of comparisons is then *log( )and the total number of exchanges is *log( )/ of courseone cannot expect to hit the median all the time in factthe chance of doing so is only / surprisinglyhoweverthe average performance of quicksort is inferior to the optimal case by factor of only *ln( )if the bound is chosen at random but quicksort does have its pitfalls first of allit performs moderately well for small values of nas do all advanced methods its advantage over the other advanced methods lies in the ease with which straight sorting method can be incorporated to handle small partitions this is particularly advantageous when considering the recursive version of the program stillthere remains the question of the worst case how does quicksort perform thenthe answer is unfortunately disappointing and it unveils the one weakness of quicksort considerfor instancethe unlucky case in which each time the largest value of partition happens to be picked as comparand then each step splits segment of items into left partition with - items and right partition with single element the result is that (instead of log( )splits become necessaryand that the worst-case performance is of the order apparentlythe crucial step is the selection of the comparand in our example program it is chosen as the middle element note that one might almost as well select either the first or the last element in these casesthe worst case is the initially sorted arrayquicksort then shows definite dislike for the trivial job and preference for disordered arrays in choosing the middle elementthe strange characteristic of quicksort is less obvious because the initially sorted array becomes the optimal case in factalso the average performance is slightly betterif the middle element is selected hoare suggests that the choice of be made at randomor by selecting it as the median of small sample ofsaythree keys [ and such judicious choice hardly influences the average performance of quicksortbut it improves the worstcase performance considerably it becomes evident that sorting on the basis of quicksort is somewhat like gamble in which one should be aware of how much one may afford to lose if bad luck were to strike there is one important lesson to be learned from this experienceit concerns the programmer directly
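Hoare's suggestion of choosing the bound at random is easy to express. The following Python sketch keeps the partitioning scheme of procedure sort but randomizes the choice of x; it is an illustrative rendering, not the program developed above, and it does not yet address the stack problem discussed next.

import random

def quicksort(a, l=0, r=None):
    if r is None:
        r = len(a) - 1
    if l >= r:
        return a
    i, j = l, r
    x = a[random.randint(l, r)]          # randomly chosen bound
    while i <= j:
        while a[i] < x:                  # scan from the left
            i += 1
        while x < a[j]:                  # scan from the right
            j -= 1
        if i <= j:                       # exchange and advance both scans
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    quicksort(a, l, j)                   # sort left partition
    quicksort(a, i, r)                   # sort right partition
    return a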
what are the consequences of the worst case behavior mentioned above to the performance quicksortwe have realized that each split results in right partition of only single elementthe request to sort this partition is stacked for later execution consequentlythe maximum number of requestsand therefore the total required stack sizeis this isof coursetotally unacceptable (note that we fare no better -in fact even worse -with the recursive version because system allowing recursive activation of procedures will have to store the values of local variables and parameters of all procedure activations automaticallyand it will use an implicit stack for this purpose the remedy lies in stacking the sort request for the longer partition and in continuing directly with the further partitioning of the smaller section in this casethe size of the stack can be limited to log(nthe change necessary is localized in the section setting up new requests it now reads if then if then (*stack request for sorting right partition*inc( )low[ :ihigh[ : endr : (*continue sorting left partition*else if then (*stack request for sorting left parition*inc( )low[ :lhigh[ : endl : (*continue sorting right partition*end finding the median the median of items is defined as that item which is less than (or equal tohalf of the items and which is larger than (or equal tothe other half of the items for examplethe median of is the problem of finding the median is customarily connected with that of sortingbecause the obvious method of determining the median is to sort the items and then to pick the item in the middle but partitioning yields potentially much faster way of finding the median the method to be displayed easily generalizes to the problem of finding the -th smallest of items finding the median represents the special case / the algorithm invented by hoare [ - functions as follows firstthe partitioning operation of quicksort is applied with and - and with ak selected as splitting value the resulting index values and are such that ah for all ah for all > there are three possible cases that may arise the splitting value was too smallas resultthe limit between the two partitions is below the desired value the partitioning process has to be repeated upon the elements ai ar (see fig < >
fig bound too small the chosen bound was too large the splitting operation has to be repeated on the partition al aj (see fig < > fig bound too large ithe element ak splits the array into two partitions in the specified proportions and therefore is the desired quantile (see fig < >jki fig correct bound the splitting process has to be repeated until case arises this iteration is expressed by the following piece of programl : :nwhile - do : [ ]partition ( [la[ - ])if then : endif then : end end for formal proof of the correctness of this algorithmthe reader is referred to the original article by hoare the entire procedure find is readily derived from this procedure find (kinteger)(*reorder such that [kis -th largest*var lrijintegerwxitembegin : : - while - do : [ ] :lj :rrepeat while [ix do : + endwhile [jdo : - endif < then : [ ] [ : [ ] [ :wi : + : - end until jif then : endif then : end end (adens _sorts *
end find if we assume that on the average each split halves the size of the partition in which the desired quantile liesthen the number of necessary comparisons is / / it is of order this explains the power of the program find for finding medians and similar quantilesand it explains its superiority over the straightforward method of sorting the entire set of candidates before selecting the -th (where the best is of order *log( )in the worst casehowevereach partitioning step reduces the size of the set of candidates only by resulting in required number of comparisons of order againthere is hardly any advantage in using this algorithmif the number of elements is smallsayfewer than comparison of array sorting methods to conclude this parade of sorting methodswe shall try to compare their effectiveness if denotes the number of items to be sortedc and shall again stand for the number of required key comparisons and item movesrespectively closed analytical formulas can be given for all three straight sorting methods they are tabulated in table the column headings minmaxavg specify the respective minimamaximaand values averaged over all npermutations of items min avg max straight insertion cn- ( )/ ( )/ ( - ( - )/ ( )/ straight selection ( )/ ( )/ ( )/ ( - *(ln( / ( - straight exchange ( - )/ ( - )/ ( - )/ ( - )* ( - )* table comparison of straight sorting methods no reasonably simple accurate formulas are available on the advanced methods the essential facts are that the computational effort needed is * in the case of shellsort and is * *log(nin the cases of heapsort and quicksortwhere the are appropriate coefficients these formulas merely provide rough measure of performance as functions of nand they allow the classification of sorting algorithms into primitivestraight methods ( and advanced or "logarithmicmethods ( *log( )for practical purposeshoweverit is helpful to have some experimental data available that shed light on the coefficients which further distinguish the various methods moreoverthe formulas do not take into account the computational effort expended on operations other than key comparisons and item movessuch as loop controletc clearlythese factors depend to some degree on individual systemsbut an example of experimentally obtained data is nevertheless informative table shows the times (in secondsconsumed by the sorting methods previously discussedas executed by the modula- system on lilith personal computer the three columns contain the times used to sort the already ordered arraya random permutationand the inversely ordered array table is for itemstable for items the data clearly separate the methods from the *log(nmethods the following points are noteworthy the improvement of binary insertion over straight insertion is marginal indeedand even negative in the case of an already existing order bubblesort is definitely the worst sorting method among all compared its improved version
Shakersort is still worse than straight insertion and straight selection (except in the pathological case of sorting an already sorted array). Quicksort beats heapsort by a clear factor; it sorts the inversely ordered array with a speed practically identical to the one that is already sorted.
Table: execution times of the sort programs for the smaller array size (columns: ordered, random, inverse; rows: straight insertion, binary insertion, straight selection, bubblesort, shakersort, shellsort, heapsort, quicksort, non-recursive quicksort, straight merge; the measured figures are not reproduced here).
Table: execution times of the sort programs for the larger array size (same rows and columns).
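Measurements of this kind are easy to repeat. The small Python harness below (array size, machine and choice of methods are arbitrary here, so its figures are in no way comparable with the tables above) times one sorting function on an ordered, a randomly permuted and an inversely ordered array.

import random, time

def time_sort(sort, n=2000):
    ordered = list(range(n))
    shuffled = random.sample(ordered, n)
    inverse = ordered[::-1]
    times = []
    for data in (ordered, shuffled, inverse):
        a = data[:]                      # sort a copy, keep the input intact
        t0 = time.perf_counter()
        sort(a)
        times.append(time.perf_counter() - t0)
        assert a == ordered              # sanity check: result must be sorted
    return times

# e.g. time_sort(quicksort) or time_sort(shakersort), using the sketches
# given earlier; the results depend entirely on machine and on n.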
sorting sequences straight merging unfortunatelythe sorting algorithms presented in the preceding are inapplicableif the amount of data to be sorted does not fit into computer' main storebut if it isfor instancerepresented on peripheral and sequential storage device such as tape or disk in this case we describe the data as (sequentialfile whose characteristic is that at each moment one and only one component is directly accessible this is severe restriction compared to the possibilities offered by the array structureand therefore different sorting techniques have to be used the most important one is sorting by merging merging (or collatingmeans combining two (or moreordered sequences into singleordered sequence by repeated selection among the currently accessible components merging is much simpler operation than sortingand it is used as an auxiliary operation in the more complex process of sequential sorting one way of sorting on the basis of mergingcalled straight mergingis the following split the sequence into two halvescalled and merge and by combining single items into ordered pairs call the merged sequence aand repeat steps and this time merging ordered pairs into ordered quadruples repeat the previous stepsmerging quadruples into octetsand continue doing thiseach time doubling the lengths of the merged subsequencesuntil the entire sequence is ordered as an exampleconsider the sequence in step the split results in the sequences the merging of single components (which are ordered sequences of length )into ordered pairs yields splitting again in the middle and merging ordered pairs yields third split and merge operation finally produces the desired result each operation that treats the entire set of data once is called phaseand the smallest subprocess that by repetition constitutes the sort process is called pass or stage in the above example the sort took three passeseach pass consisting of splitting phase and merging phase in order to perform the sortthree tapes are neededthe process is therefore called three-tape merge actuallythe splitting phases do not contribute to the sort since they do in no way permute the itemsin sense they are unproductivealthough they constitute half of all copying operations they can be eliminated altogether by combining the split and the merge phase instead of merging into single sequencethe output of the merge process is immediately redistributed onto two tapeswhich constitute the sources of the subsequent pass in contrast to the previous two-phase merge sortthis method is called single-phase merge or balanced merge it is evidently superior because only half as many copying operations are necessarythe price for this advantage is fourth tape we shall develop merge program in detail and initially let the data be represented as an array which
howeveris scanned in strictly sequential fashion later version of merge sort will then be based on the sequence structureallowing comparison of the two programs and demonstrating the strong dependence of the form of program on the underlying representation of its data single array may easily be used in place of two sequencesif it is regarded as double-ended instead of merging from two source fileswe may pick items off the two ends of the array thusthe general form of the combined merge-split phase can be illustrated as shown in fig the destination of the merged items is switched after each ordered pair in the first passafter each ordered quadruple in the second passetc thus evenly filling the two destination sequencesrepresented by the two ends of single array after each passthe two arrays interchange their rolesthe source becomes the new destinationand vice versa destination source merge distribute fig straight merge sort with two arrays further simplification of the program can be achieved by joining the two conceptually distinct arrays into single array of doubled size thusthe data will be represented by aarray * of item and we let the indices and denote the two source itemswhereas and designate the two destinations (see fig the initial data areof coursethe items an- clearlya boolean variable up is needed to denote the direction of the data flowup shall mean that in the current pass components an- will be moved up to the variables an - whereas ~up will indicate that an - will be transferred down into an- the value of up strictly alternates between consecutive passes andfinallya variable is introduced to denote the length of the subsequences to be merged its value is initially and it is doubled before each successive pass to simplify matters somewhatwe shall assume that is always power of thusthe first version of the straight merge program assumes the following formprocedure straightmergevar ijklpintegerupbooleanbegin up :truep : repeat initialize index variablesif up then : : - :nl : * - else : : - :nj : * - endmerge -tuples from iand -sources to kand -destinationsup :~upp : * until end straightmerge in the next development step we further refine the statements expressed in italics evidentlythe merge pass involving items is itself sequence of merges of sequencesi of -tuples between every such
partial merge the destination is switched from the lower to the upper end of the destination arrayor vice versato guarantee equal distribution onto both destinations if the destination of the merged items is the lower end of the destination arraythen the destination index is kand is incremented after each move of an item if they are to be moved to the upper end of the destination arraythe destination index is land it is decremented after each move in order to simplify the actual merge statementwe choose the destination to be designated by at all timesswitching the values of the variables and after each -tuple mergeand denote the increment to be used at all times by hwhere is either or - these design discussions lead to the following refinementh : : (* no of items to be merged*repeat :pr :pm : *pmerge items from -source with items from -source destination index is increment by hh :-hexchange and until in the further refinement step the actual merge statement is to be formulated here we have to keep in mind that the tail of the one subsequence which is left non-empty after the merge has to be appended to the output sequence by simple copying operations while ( ( do if [ia[jthen move an item from -source to -destinationadvance and kq : - else move an item from -source to -destinationadvance and kr : - end endcopy tail of -sequencecopy tail of -sequence after this further refinement of the tail copying operationsthe program is laid out in complete detail before writing it out in fullwe wish to eliminate the restriction that be power of which parts of the algorithm are affected by this relaxation of constraintswe easily convince ourselves that the best way to cope with the more general situation is to adhere to the old method as long as possible in this example this means that we continue merging -tuples until the remainders of the source sequences are of length less than the one and only part that is influenced are the statements that determine the values of and the following four statements replace the three statements :pr :pm : - * andas the reader should convince himselfthey represent an effective implementation of the strategy specified abovenote that denotes the total number of items in the two source sequences that remain to be mergedif > then : else : endm : -qif > then : else : endm : - in additionin order to guarantee termination of the programthe condition nwhich controls the outer
repetitionmust be changed to > after these modificationswe may now proceed to describe the entire algorithm in terms of procedure operating on the global array with elements procedure straightmerge(adens _mergesorts *var ijkltinteger(*index range of is * - *hmpqrintegerupbooleanbegin up :truep : repeat : :nif up then : : - :nl : * - else : : - :nj : * - endrepeat (*merge run from iand -sources to -destination*if > then : else : endm : -qif > then : else : endm : -rwhile ( ( do if [ia[jthen [ : [ ] : +hi : + : - else [ : [ ] : +hj : - : - end endwhile do [ : [ ] : +hj : - : - endwhile do [ : [ ] : +hi : + : - endh :-ht :kk :ll : until up :~upp : * until >nif ~up then for : to - do [ : [ +nend end end straightmerge analysis of mergesort since each pass doubles pand since the sort is terminated as soon as nit involves log(npasses each passby definitioncopies the entire set of items exactly once as consequencethe total number of moves is exactly log(nthe number of key comparisons is even less than since no comparisons are involved in the tail copying operations howeversince the mergesort technique is usually applied in connection with the use of peripheral storage devicesthe computational effort involved in the move operations dominates the effort of comparisons often by several orders of magnitude the detailed analysis of the number of comparisons is therefore of little practical interest
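The doubling scheme of straight merging becomes particularly short if a second buffer of n elements is used and source and destination are simply exchanged after each pass. The following Python sketch illustrates this bottom-up merging; it is an illustration of the principle, not a transcription of the procedure straightmerge above.

def straight_merge_sort(a):
    # merge adjacent runs of length p, doubling p after every pass
    n = len(a)
    src, dst = list(a), [None] * n
    p = 1
    while p < n:
        for lo in range(0, n, 2 * p):
            i, j = lo, min(lo + p, n)          # heads of the two source runs
            mid, hi = j, min(lo + 2 * p, n)
            for k in range(lo, hi):
                if i < mid and (j >= hi or src[i] <= src[j]):
                    dst[k] = src[i]
                    i += 1
                else:
                    dst[k] = src[j]
                    j += 1
        src, dst = dst, src                    # source and destination exchange roles
        p *= 2
    return src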
the merge sort algorithm apparently compares well with even the advanced sorting techniques discussed in the previous howeverthe administrative overhead for the manipulation of indices is relatively highand the decisive disadvantage is the need for storage of items this is the reason sorting by merging is rarely used on arraysi on data located in main store figures comparing the real time behavior of this mergesort algorithm appear in the last line of table they compare favorably with heapsort but unfavorably with quicksort natural merging in straight merging no advantage is gained when the data are initially already partially sorted the length of all merged subsequences in the -th pass is less than or equal to kindependent of whether longer subsequences are already ordered and could as well be merged in factany two ordered subsequences of lengths and might be merged directly into single sequence of + items mergesort that at any time merges the two longest possible subsequences is called natural merge sort an ordered subsequence is often called string howeversince the word string is even more frequently used to describe sequences of characterswe will follow knuth in our terminology and use the word run instead of string when referring to ordered subsequences we call subsequence ai aj such that (ai- ai(aki aj+ maximal run orfor shorta run natural merge sortthereforemerges (maximalruns instead of sequences of fixedpredetermined length runs have the property that if two sequences of runs are mergeda single sequence of exactly runs emerges thereforethe total number of runs is halved in each passand the number of required moves of items is in the worst case *log( )but in the average case it is even less the expected number of comparisonshoweveris much larger because in addition to the comparisons necessary for the selection of itemsfurther comparisons are needed between consecutive items of each file in order to determine the end of each run our next programming exercise develops natural merge algorithm in the same stepwise fashion that was used to explain the straight merging algorithm it employs the sequence structure (represented by filessee sect instead of the arrayand it represents an unbalancedtwo-phasethree-tape merge sort we assume that the file variable represents the initial sequence of items (naturallyin actual data processing applicationthe initial data are first copied from the original source to for reasons of safety and are two auxiliary file variables each pass consists of distribution phase that distributes runs equally from to and band merge phase that merges runs from and to this process is illustrated in fig merge phase distribution phase st run nd run nth run fig sort phases and passes
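Cutting a sequence into maximal runs needs exactly the one-element look-ahead that the rider type introduced below provides. As a plain illustration (in Python, with names invented for the example), a sequence can be split into its maximal runs as follows; a natural merge then repeatedly merges pairs of such runs distributed onto two files.

def runs(seq):
    # yield the maximal ascending runs of seq,
    # e.g. runs([3, 5, 1, 2, 2, 0]) -> [3, 5], [1, 2, 2], [0]
    run = []
    for x in seq:
        if run and x < run[-1]:     # a smaller key ends the current run
            yield run
            run = [x]
        else:
            run.append(x)
    if run:
        yield run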
table example of natural mergesort as an exampletable shows the file in its original state (line and after each pass (lines - in natural merge sort involving numbers note that only three passes are needed the sort terminates as soon as the number of runs on is (we assume that there exists at least one non-empty run on the initial sequencewe therefore let variable be used for counting the number of runs merged onto by making use of the type rider defined in sect the program can be formulated as followsvar lintegerr files rider(*see *repeat files set( )files set( )files set( )distribute( )(* to and *files set( )files set( )files set( ) : merge( (* and into *until the two phases clearly emerge as two distinct statements they are now to be refinedi expressed in more detail the refined descriptions of distribute (from rider to riders and and merge (from riders and to rider followrepeat copyrun( )if ~ eof then copyrun( end until eof repeat mergerun( )inc(luntil eofif ~ eof then copyrun( )inc(lend this method of distribution supposedly results in either equal numbers of runs in both and bor in sequence containing one run more than since corresponding pairs of runs are mergeda leftover run may still be on file awhich simply has to be copied the statements merge and distribute are formulated in terms of refined statement mergerun and subordinate procedure copyrun with obvious tasks when attempting to do soone runs into serious difficultyin order to determine the end of runtwo consecutive keys must be compared howeverfiles are such that only single element is immediately accessible we evidently cannot avoid to look aheadi to associate buffer with every sequence the buffer is to contain the first element of the file still to be read and constitutes something like window sliding over the file instead of programming this mechanism explicitly into our programwe prefer to define yet another level of abstraction it is represented by new module runs it can be regarded as an extension of module files of sect introducing new type riderwhich we may consider as an extension of type files rider this new type will not only accept all operations available on riders and indicate the end of filebut also indicate the end of run and the first element of the remaining part of the file the new type as well as its
operators are presented by the following definition definition runs(adens _runs *import filestextstype rider record (files riderfirstintegereorboolean endprocedure openrandomseq (ffiles filelengthseedinteger)procedure set (var rridervar ffiles file)procedure copy (var sourcedestinationrider)procedure listseq (var wtexts writerffiles file)end runs few additional explanations for the choice of the procedures are necessary as we shall seethe sorting algorithms discussed here and later are based on copying elements from one file to another procedure copy therefore takes the place of separate read and write operations for convenience of testing the following exampleswe also introduce procedure listseqconverting file of integers into text also for convenience an additional procedure is includedopenrandomseq initializes file with numbers in random order these two procedures will serve to test the algorithms to be discussed below the values of the fields eof and eor are defined as results of copy in analogy to eof having been defined as result of read operation module runs(adens _runs *import filestextstype riderrecord (files riderfirstintegereorboolean endprocedure openrandomseqffiles filelengthseedinteger)var iintegerwfiles riderbegin files set(wf )for : to length- do files writeint(wseed)seed :( *seedmod endfiles close(fend openrandomseqprocedure set(var rriderffiles file)begin files set(rf )files readint (rr first) eor : eof end setprocedure copy(var srcdestrider)begin dest first :src firstfiles writeint(destdest first)files readint(srcsrc first)src eor :src eof or (src first dest firstend copyprocedure listseq(var wtexts writerffiles file;)var xyknintegerrfiles riderbegin : : files set(rf )files readint(rx)while ~ eof do texts writeint(wx )inc( )files readint(ry)
if then (*konets serii*texts write( "|")inc(nendx : endtexts write( "$")texts writeint(wk )texts writeint(wn )texts writeln(wend listseqend runs we now return to the process of successive refinement of the process of natural merging procedure copyrun and the statement merge are now conveniently expressible as shown below note that we refer to the sequences (filesindirectly via the riders attached to them in passingwe also note that the rider' field first represents the next key on sequence being readand the last key of sequence being written procedure copyrun (var xyruns rider)begin (*copy from to *repeat runs copy(xyuntil eor end copyrun (*merge from and to *repeat if first first then runs copy( )if eor then copyrun( end else runs copy( )if eor then copyrun( end end until eor or eor the comparison and selection process of keys in merging run terminates as soon as one of the two runs is exhausted after thisthe other run (which is not exhausted yethas to be transferred to the resulting run by merely copying its tail this is done by call of procedure copyrun this should supposedly terminate the development of the natural merging sort procedure regrettablythe program is incorrectas the very careful reader may have noticed the program is incorrect in the sense that it does not sort properly in some cases considerfor examplethe following sequence of input data by distributing consecutive runs alternately to and bwe obtain these sequences are readily merged into single runwhereafter the sort terminates successfully the examplealthough it does not lead to an erroneous behaviour of the programmakes us aware that mere distribution of runs to serveral files may result in number of output runs that is less than the number of input runs this is because the first item of the + nd run may be larger than the last item of the -th runthereby causing the two runs to merge automatically into single run although procedure distribute supposedly outputs runs in equal numbers to the two filesthe important consequence is that the actual number of resulting runs on and may differ significantly our merge procedurehoweveronly merges pairs of runs and terminates as soon as is readthereby losing the tail of one of the sequences consider the following input data that are sorted (and truncatedin two subsequent passes
table incorrect result of mergesort program the example of this programming mistake is typical for many programming situations the mistake is caused by an oversight of one of the possible consequences of presumably simple operation it is also typical in the sense that serval ways of correcting the mistake are open and that one of them has to be chosen often there exist two possibilities that differ in very importantfundamental way we recognize that the operation of distribution is incorrectly programmed and does not satisfy the requirement that the number of runs differ by at most we stick to the original scheme of operation and correct the faulty procedure accordingly we recognize that the correction of the faulty part involves far-reaching modificationsand we try to find ways in which other parts of the algorithm may be changed to accommodate the currently incorrect part in generalthe first path seems to be the safercleaner onethe more honest wayproviding fair degree of immunity from later consequences of overlookedintricate side effects it isthereforethe way toward solution that is generally recommended it is to be pointed outhoweverthat the second possibility should sometimes not be entirely ignored it is for this reason that we further elaborate on this example and illustrate fix by modification of the merge procedure rather than the distribution procedurewhich is primarily at fault this implies that we leave the distribution scheme untouched and renounce the condition that runs be equally distributed this may result in less than optimal performance howeverthe worst-case performance remains unchangedand moreoverthe case of highly unequal distribution is statistically very unlikely efficiency considerations are therefore no serious argument against this solution if the condition of equal distribution of runs no longer existsthen the merge procedure has to be changed so thatafter reaching the end of one filethe entire tail of the remaining file is copied instead of at most one run this change is straightforward and is very simple in comparison with any change in the distribution scheme (the reader is urged to convince himself of the truth of this claimthe revised version of the merge algorithm is shown below in the form of function procedureprocedure copyrun (var xyruns rider)begin (*from to *repeat runs copy(xyuntil eor end copyrunprocedure naturalmerge (srcfiles file)files filevar linteger(*no of runs merged* files filer runs riderbegin runs set( src)repeat :files new("test ")files set( ) :files new("test ")files set ( )(*distribute from to and *repeat copyrun( )(adens _mergesorts *
if ~ eof then copyrun( end until eofruns set( )runs set( ) :files new("")files set( )(*merge from and to * : repeat repeat if first first then runs copy( )if eor then copyrun( end else runs copy( )if eor then copyrun( end end until eor eorinc(luntil eof or eofwhile ~ eof do copyrun( )inc(lendwhile ~ eof do copyrun( )inc(lendruns set( until return end naturalmergebalanced multiway merging the effort involved in sequential sort is proportional to the number of required passes sinceby definitionevery pass involves the copying of the entire set of data one way to reduce this number is to distribute runs onto more than two files merging runs that are equally distributed on files results in sequence of / runs second pass reduces their number to / third pass to / and after passes there are /nk runs left the total number of passes required to sort items by -way merging is therefore logn(nsince each pass requires copy operationsthe total number of copy operations is in the worst case logn(nas the next programming exercisewe will develop sort program based on multiway merging in order to further contrast the program from the previous natural two-phase merging procedurewe shall formulate the multiway merge as single phasebalanced mergesort this implies that in each pass there are an equal number of input and output files onto which consecutive runs are alternately distributed using filesthe algorithm will therefore be based on -way merging following the previously adopted strategywe will not bother to detect the automatic merging of two consecutive runs distributed onto the same file consequentlywe are forced to design the merge program whithout assuming strictly equal numbers of runs on the input files in this program we encounter for the first time natural application of data structure consisting of arrays of files as matter of factit is surprising how strongly the following program differs from the previous one because of the change from two-way to multiway merging the change is primarily result of the circumstance that the merge process can no longer simply be terminated after one of the input runs is exhausted insteada list of inputs that are still activei not yet exhaustedmust be kept another complication stems from the need to switch the groups of input and output files after each pass here the indirection of access to files via riders comes in handy in each passdata may be copied from the same riders to the same riders at the end of each pass we merely need to reset the input and output files to
different riders obviouslyfile numbers are used to index the array of files let us then assume that the initial file is the parameter srcand that for the sorting process files are availablefgarray of files filerwarray of runs rider the algorithm can now be sketched as followsprocedure balancedmerge (srcfiles file)files filevar ijintegerlinteger(*no of runs distributed*rruns riderbegin runs set(rsrc)(*distribute initial runs from to [ [ - ]* : : position riders on files grepeat copy one run from to [ ]inc( )inc( )if then : end until eofrepeat (*merge from riders to riders *switch files to riders rl : : (* index of output file*repeat inc( )merge one run from inputs to [ ]if then inc(jelse : end until all inputs exhausteduntil (*sorted file is with [ ]*end balancedmerge having associated rider with the source filewe now refine the statement for the initial distribution of runs using the definition of copywe replace copy one run from to [jbyrepeat runs copy(rw[ ]until eor copying run terminates when either the first item of the next run is encountered or when the end of the entire input file is reached in the actual sort algorithmthe following statements remain to be specified in more detail( position riders on files ( merge one run from inputs to [ ( switch files to riders ( all inputs exhausted firstwe must accurately identify the current input sequences notablythe number of active inputs may be less than obviouslythere can be at most as many sources as there are runsthe sort terminates as soon as there is one single sequence left this leaves open the possibility that at the initiation of the last sort pass there are fewer than runs we therefore introduce variablesay to denote the actual number of inputs used we incorporate the initialization of in the statement switch files as follows
if then : else : endfor : to - do runs set( [ ] [ ]end naturallystatement ( is to decrement whenever an input source ceases hencepredicate ( may easily be expressed by the relation statement ( )howeveris more difficult to refineit consists of the repeated selection of the least key among the available sources and its subsequent transport to the destinationi the current output sequence the process is further complicated by the necessity of determining the end of each run the end of run may be reached because (athe subsequent key is less than the current key or (bthe end of the source is reached in the latter case the source is eliminated by decrementing in the former case the run is closed by excluding the sequence from further selection of itemsbut only until the creation of the current output run is completed this makes it obvious that second variablesay is needed to denote the number of sources actually available for the selection of the next item this value is initially set equal to and is decremented whenever run teminates because of condition (aunfortunatelythe introduction of is not sufficient we need to know not only the number of filesbut also which files are still in actual use an obvious solution is to use an array with boolean components indicating the availability of the files we choosehowevera different method that leads to more efficient selection procedure whichafter allis the most frequently repeated part of the entire algorithm instead of using boolean arraya file index mapsay is introduced this map is used so that - are the indices of the available sequences thus statement ( can be formulated as followsk : repeat select the minimal keylet [mbe the sequence number on which it occursruns copy( [ [ ]] [ ])if [ [ ]eof then eliminate sequence elsif [ [ ]eor then close run end until since the number of sequences will be fairly small for any practical purposethe selection algorithm to be specified in further detail in the next refinement step may as well be straightforward linear search the statement eliminate sequence implies decrease of as well as and also reassignment of indices in the map the statement close run merely decrements and rearranges components of accordingly the details are shown in the following procedurebeing the last refinement the statement switch files is elaborated according to explanations given earlier procedure balancedmerge (srcfiles file)files file(adens _mergesorts *var ijmtxintegerlk integerminxintegertarray of integer(*index map*rruns rider(*source*fgarray of files filerwarray of runs riderbegin runs set(rsrc)for : to - do [ :files new("")files set( [ ] [ ] end(*distribute initial runs from src to [ [ - ]* : : repeat
repeat runs copy(rw[ ]until eorinc( )inc( )if then : end until eofrepeat if then : else : endk : for : to - do (*set input riders*runs set( [ ] [ ]endfor : to - do (*set output riders* [ :files new("")files set( [ ] [ ] end(*merge from [ [ - to [ [ - ]*for : to - do [ : endl : (*nof runs merged* : repeat (*merge on run from inputs to [ ]*inc( ) : repeat (*select the minimal key* : min : [ [ ]firsti : while do : [ [ ]firstif min then min :xm : endinc(iendruns copy( [ [ ]] [ ])if [ [ ]eof then (*eliminate this sequence*dec( )dec( ) [ : [ ] [ : [ elsif [ [ ]eor then (*close run*dec( )tx : [ ] [ : [ ] [ :tx end until inc( )if then : end until until return [ end balancedmerge polyphase sort we have now discussed the necessary techniques and have acquired the proper background to investigate and program yet another sorting algorithm whose performance is superior to the balanced sort we have seen that balanced merging eliminates the pure copying operations necessary when the distribution and the merging operations are united into single phase the question arises whether or not the given sequences could be processed even more efficiently this is indeed the casethe key to this next improvement lies in abandoning the rigid notion of strict passesi to use the sequences in more sophisticated way than by always having sources and as many destinations and exchanging sources and
destinations at the end of each distinct pass insteadthe notion of pass becomes diffuse the method was invented by gilstad [ - and called polyphase sort it is first illustrated by an example using three sequences at any timeitems are merged from two sources into third sequence variable whenever one of the source sequences is exhaustedit immediately becomes the destination of the merge operations of data from the non-exhausted source and the previous destination sequence as we know that runs on each input are transformed into runs on the outputwe need to list only the number of runs present on each sequence (instead of specifying actual keysin fig we assume that initially the two input sequences and contain and runsrespectively thusin the first pass runs are merged from and to in the second pass the remaining runs are merged from and to etc in the endf is the sorted sequence fig polyphase mergesort of runs with sequences second example shows the polyphase method with sequences let there initially be runs on on on on and on in the first partial pass runs are merged onto in the endf contains the sorted set of items (see fig fig polyphase mergesort of runs with sequences polyphase is more efficient than balanced merge becausegiven sequencesit always operates with an - -way merge instead of an / -way merge as the number of required passes is approximately log
being the number of items to be sorted and being the degree of the merge operationspolyphase promises significant improvement over balanced merging of coursethe distribution of initial runs was carefully chosen in the above examples in order to find out which initial distributions of runs lead to proper functioningwe work backwardstarting with the final distribution (last line in fig rewriting the tables of the two examples and rotating each row by one position with respect to the prior row yields tables and for six passes and for three and six sequencesrespectively ( (lsum ai( table perfect distribution of runs on two sequences (la (la (la ( (lsum ai( table perfect distribution of runs on five sequences from table we can deduce for the relations ( + (la ( + (la (land ( ( defining fi+ ( )we obtain for fi+ fi fi- these are the recursive rules (or recurrence relationsdefining the fibonacci numbersf each fibonacci number is the sum of its two predecessors as consequencethe numbers of initial runs on the two input sequences must be two consecutive fibonacci numbers in order to make polyphase work properly with three sequences how about the second example (table with six sequencesthe formation rules are easily derived as ( + (la ( + (la (la (la ( - ( + (la (la (la ( - ( - ( + (la (la (la ( - ( - ( - ( + (la (la (la ( - ( - ( - ( -
substituting fi for (iyields + fi fi- - fi- fi- fi for for these numbers are the fibonacci numbers of order in generalthe fibonacci numbers of order are defined as followsf + (pfi(pf - (pfi- (pf ( ( for for < note that the ordinary fibonacci numbers are those of order we have now seen that the initial numbers of runs for perfect polyphase sort with sequences are the sums of any - - (see table consecutive fibonacci numbers of order - table numbers of runs allowing for perfect distribution this apparently implies that this method is only applicable to inputs whose number of runs is the sum of - such fibonacci sums the important question thus ariseswhat is to be done when the number of initial runs is not such an ideal sumthe answer is simple (and typical for such situations)we simulate the existence of hypothetical empty runssuch that the sum of real and hypothetical runs is perfect sum the empty runs are called dummy runs but this is not really satisfactory answer because it immediately raises the further and more difficult questionhow do we recognize dummy runs during mergingbefore answering this question we must first investigate the prior problem of initial run distribution and decide upon rule for the distribution of actual and dummy runs onto the - tapes in order to find an appropriate rule for distributionhoweverwe must know how actual and dummy runs are merged clearlythe selection of dummy run from sequence imeans precisely that sequence is ignored during this merge resulting in merge from fewer than - sources merging of dummy run from all - sources implies no actual merge operationbut instead the recording of the resulting dummy run on the output sequence from this we conclude that dummy runs should be distributed to the -
sequences as uniformly as possiblesince we are interested in active merges from as many sources as possible let us forget dummy runs for moment and consider the problem of distributing an unknown number of runs onto - sequences it is plain that the fibonacci numbers of order - specifying the desired numbers of runs on each source can be generated while the distribution progresses assumingfor examplen and referring to table we start by distributing runs as indicated by the row with index ( )if there are more runs availablewe proceed to the second row ( )if the source is still not exhaustedthe distribution proceeds according to the third row ( )and so on we shall call the row index level evidentlythe larger the number of runsthe higher is the level of fibonacci numbers whichincidentallyis equal to the number of merge passes or switchings necessary for the subsequent sort the distribution algorithm can now be formulated in first version as follows let the distribution goal be the fibonacci numbers of order - level distribute according to the set goal if the goal is reachedcompute the next level of fibonacci numbersthe difference between them and those on the former level constitutes the new distribution goal return to step if the goal cannot be reached because the source is exhaustedterminate the distribution process the rules for calculating the next level of fibonacci numbers are contained in their definition we can thus concentrate our attention on step wherewith given goalthe subsequent runs are to be distributed one after the other onto the - output sequences it is here where the dummy runs have to reappear in our considerations let us assume that when raising the levelwe record the next goal by the differences di for - where di denotes the number of runs to be put onto sequence in this step we can now assume that we immediately put di dummy runs onto sequence and then regard the subsequent distribution as the replacement of dummy runs by actual runseach time recording replacement by subtracting from the count di thusthe di indicates the number of dummy runs on sequence when the source becomes empty it is not known which algorithm yields the optimal distributionbut the following has proved to be very good method it is called horizontal distribution (cf knuthvol ) term that can be understood by imagining the runs as being piled up in the form of silosas shown in fig for level (cf table in order to reach an equal distribution of remaining dummy runs as quickly as possibletheir replacement by actual runs reduces the size of the piles by picking off dummy runs on horizontal levels proceeding from left to right in this waythe runs are distributed onto the sequences as indicated by their numbers as shown in fig fig horizontal distribution of runs
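To make the distribution rule concrete, the following small Python sketch (not part of the book's program; the function name and interface are our own) computes the rows of the perfect-distribution tables shown above. Given the number of sequences n, it produces the ideal run counts a1 ... a(n-1) for successive levels according to the recurrence relations just derived; the distribution goals di of step 2 are then simply the differences between two consecutive rows.

def perfect_distribution(n, levels):
    # ideal run counts a[0 .. n-2] for a polyphase sort on n sequences,
    # starting from level 0 = (1, 0, ..., 0) and applying the recurrences
    # a_i(l+1) = a_1(l) + a_{i+1}(l)  and  a_{n-1}(l+1) = a_1(l)
    a = [1] + [0] * (n - 2)
    rows = [a[:]]
    for _ in range(levels):
        a1 = a[0]
        a = [a1 + a[i + 1] if i + 1 < n - 1 else a1 for i in range(n - 1)]
        rows.append(a[:])
    return rows

For example, perfect_distribution(6, 5) reproduces the five-sequence table: (1,0,0,0,0), (1,1,1,1,1), (2,2,2,2,1), (4,4,4,3,2), (8,8,7,6,4), (16,15,14,12,8), with sums 1, 5, 9, 17, 33, 65.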
we are now in position to describe the algorithm in the form of procedure called selectwhich is activated each time run has been copied and new source is selected for the next run we assume the existence of variable denoting the index of the current destination sequence ai and di denote the ideal and dummy distribution numbers for sequence jlevelintegeradarray of integerthese variables are initialized with the following valuesai di an- dn- level for - dummy note that select is to compute the next row of table the values (lan- (leach time that the level is increased the next goali the differences di ai(lai( - are also computed at that time the indicated algorithm relies on the fact that the resulting di decrease with increasing index (descending stair in fig note that the exception is the transition from level to level this algorithm must therefore be used starting at level select ends by decrementing dj by this operation stands for the replacement of dummy run on sequence by an actual run procedure selectvar izintegerbegin if [jd[ + then inc(jelse if [ then inc(level) : [ ]for : to - do [ : [ + [ ] [ : [ + end endj : enddec( [ ]end select assuming the availability of routine to copy run from the source src with rider onto fj with rider rjwe can formulate the initial distribution phase as follows (assuming that the source contains at least one run)repeat selectcopyrun until eof herehoweverwe must pause for moment to recall the effect encountered in distributing runs in the previously discussed natural merge algorithmthe fact that two runs consecutively arriving at the same destination may merge into single runcauses the assumed numbers of runs to be incorrect by devising the sort algorithm such that its correctness does not depend on the number of runsthis side effect can safely be ignored in the polyphase sorthoweverwe are particularly concerned about keeping track of the exact number of runs on each file consequentlywe cannot afford to overlook the effect of such coincidental merge an additional complication of the distribution algorithm therefore cannot be avoided it
becomes necessary to retain the keys of the last item of the last run on each sequence fortunatelyour implementation of runs does exactly this in the case of output sequencesr first represents the item last written next attempt to describe the distribution algorithm could therefore be repeat selectif [jfirst < first then continue old run endcopyrun until eof the obvious mistake here lies in forgetting that [jfirst the obvious mistake here lies in forgetting that - destination sequences without inspection of first the remaining runs are distributed as followswhile ~ eof do selectif [jfirst < first then copyrunif eof then inc( [ ]else copyrun end else copyrun end end now we are finally in position to tackle the main polyphase merge sort algorithm its principal structure is similar to the main part of the -way merge programan outer loop whose body merges runs until the sources are exhaustedan inner loop whose body merges single run from each sourceand an innermost loop whose body selects the initial key and transmits the involved item to the target file the principal differences to balanced merging are the following instead of nthere is only one output sequence in each pass instead of switching input and output sequences after each passthe sequences are rotated this is achieved by using sequence index map the number of input sequences varies from run to runat the start of each runit is determined from the counts di of dummy runs if di for all ithen - dummy runs are pseudo-merged into single dummy run by merely incrementing the count dn- of the output sequence otherwiseone run is merged from all sources with di and di is decremented for all other sequencesindicating that one dummy run was taken off we denote the number of input sequences involved in merge by it is impossible to derive termination of phase by the end-of status of the - 'st sequencebecause more merges might be necessary involving dummy runs from that source insteadthe theoretically necessary number of runs is determined from the coefficients ai the coefficients ai were computed during the distribution phasethey can now be recomputed backward the main part of the polyphase sort can now be formulated according to these rulesassuming that all - sequences with initial runs are set to be readand that the tape map is initially set to repeat (*merge from [ [ - to [ - ]* : [ - ] [ - : repeat (*merge one run* : (*determine no of active sequences*for : to - do if [ then dec( [ ]
else ta[ : [ ]inc(kend endif then inc( [ - ]else merge one real run from [ [ - to [ - enddec(zuntil runs set( [ [ - ]] [ [ - ]])rotate sequences in map tcompute [ifor next leveldec(leveluntil level (*sorted output is [ [ ]]*the actual merge operation is almost identical with that of the -way merge sortthe only difference being that the sequence elimination algorithm is somewhat simpler the rotation of the sequence index map and the corresponding counts di (and the down-level recomputation of the coefficients aiis straightforward and can be inspected in the following program that represents the polyphase algorithm in its entirety procedure polyphase (srcfiles file)files filevar ijmxtnintegerkdnzlevelintegerxminintegeradarray of integerttaarray of integer(*index maps*rruns rider(*source*farray of files filerarray of runs riderprocedure selectvar izintegerbegin if [jd[ + then inc(jelse if [ then inc(level) : [ ]for : to - do [ : [ + [ ] [ : [ + end endj : enddec( [ ]end selectprocedure copyrun(*from src to [ ]*begin (adens _mergesorts *
repeat runs copy(rr[ ]until eor end copyrunbegin runs set(rsrc)for : to - do [ : [ : [ :files new("")files set( [ ] [ ] end(*distribute initial runs*level : : [ - : [ - : repeat selectcopyrun until eof or ( - )while ~ eof do select(* [jfirst last item written on [ ]*if [jfirst < first then copyrunif eof then inc( [ ]else copyrun end else copyrun end endfor : to - do [ :iruns set( [ ] [ ]endt[ - : - repeat (*slitiz [ [ - [ - ]* : [ - ] [ - : [ [ - ]:files new("")files set( [ [ - ]] [ [ - ]] )repeat (*merge one run* : for : to - do if [ then dec( [ ]else ta[ : [ ]inc(kend endif then inc( [ - ]else (*merge one real run from [ [ - to [ - ]*repeat mx : min : [ta[ ]firsti : while do : [ta[ ]firstif min then min :xmx : endinc(iendruns copy( [ta[mx]] [ [ - ]])if [ta[mx]eor then ta[mx:ta[ - ]dec(
end until enddec(zuntil runs set( [ [ - ]] [ [ - ]])(*rotate sequences*tn : [ - ]dn : [ - ] : [ - ]for : - to by - do [ : [ - ] [ : [ - ] [ : [ - endt[ :tnd[ :dna[ :zdec(leveluntil level return [ [ ]end polyphase distribution of initial runs we were led to the sophisticated sequential sorting programsbecause the simpler methods operating on arrays rely on the availability of random access store sufficiently large to hold the entire set of data to be sorted often such store is unavailableinsteadsufficiently large sequential storage devices such as tapes or disks must be used we know that the sequential sorting methods developed so far need practically no primary store whatsoeverexcept for the file buffers andof coursethe program itself howeverit is fact that even small computers include random accessprimary store that is almost always larger than what is needed by the programs developed here failing to make optimal use of it cannot be justified the solution lies in combining array and sequence sorting techniques in particularan adapted array sort may be used in the distribution phase of initial runs with the effect that these runs do already have length of approximately the size of the available primary data store it is plain that in the subsequent merge passes no additional array sorts could improve the performance because the runs involved are steadily growing in lengthand thus they always remain larger than the available main store as resultwe may fortunately concentrate our attention on improving the algorithm that generates initial runs naturallywe immediately concentrate our search on the logarithmic array sorting methods the most suitable of them is the tree sort or heapsort method (see sect the heap may be regarded as funnel through which all items must passsome quicker and some more slowly the least key is readily picked off the top of the heapand its replacement is very efficient process the action of funnelling component from the input sequence src (rider through full heap onto an output sequence dest (rider may be described simply as followswrite( [ ])read( [ ])sift( - sift is the process described in sect for sifting the newly inserted component down into its proper place note that is the least item on the heap an example is shown in fig the program eventually becomes considerably more complex for the following reasons the heap is initially empty and must first be filled toward the endthe heap is only partially filledand it ultimately becomes empty we must keep track of the beginning of new runs in order to change the output index at the right time
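The funnelling idea can be tried out in isolation. The following Python sketch (an illustration of the principle only, not the book's program; the name distribute_runs and the list-of-runs interface are our own) passes the items of a source sequence through a heap of m elements and collects the resulting runs; an item that is smaller than the one just output is held back for the next run.

import heapq
from itertools import islice

def distribute_runs(source, m):
    it = iter(source)
    heap = [(0, x) for x in islice(it, m)]   # pairs (run number, key)
    heapq.heapify(heap)
    runs, current, run = [], [], 0
    while heap:
        r, x = heapq.heappop(heap)
        if r != run:                         # all keys of the current run are out
            runs.append(current)
            current, run = [], r
        current.append(x)                    # corresponds to write(h[0])
        for y in islice(it, 1):              # corresponds to read(h[0]), if input remains
            # y may extend the current run only if it is not smaller than x
            heapq.heappush(heap, (run, y) if y >= x else (run + 1, y))
    if current:
        runs.append(current)
    return runs

For a randomly ordered input the runs produced this way have an expected length of about 2m, the figure quoted in the analysis at the end of this chapter.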
[ fig sifting key through heap before proceedinglet us formally declare the variables that are evidently involved in the processvar lrxintegersrcdestfiles filerwfiles riderharray of integer(*heap* is the size of the heap we use the constant mh to denote / and are indices delimiting the heap the funnelling process can then be divided into five distinct parts read the first mh keys from src (rand put them into the upper half of the heap where no ordering among the keys is prescribed read another mh keys and put them into the lower half of the heapsifting each item into its appropriate position (build heap set to and repeat the following step for all remaining items on srcfeed to the appropriate output sequence if its key is less or equal to the key of the next item on the input sequencethen this next item belongs to the same run and can be sifted into its proper position otherwisereduce the size of the heap and place the new item into secondupper heap that is built up to contain the next run we indicate the borderline between the two heaps with the index thusthe lower (currentheap consists of the items hl- the upper (nextheap of hl hm- if then switch the output and reset to now the source is exhausted firstset to mthen flush the lower part terminating the current runand at the same time build up the upper part and gradually relocate it into positions hl hr- the last run is generated from the remaining items in the heap we are now in position to describe the five stages in detail as complete programcalling procedure switch whenever the end of run is detected and some action to alter the index of the output sequence has to be invoked in the program presented belowa dummy routine is used insteadand all runs are written onto sequence dest if we now try to integrate this program withfor instancethe polyphase sortwe encounter serious difficulty it arises from the following circumstancesthe sort program consists in its initial part of fairly complicated routine for switching between sequence variablesand relies on the availability of procedure
copyrun that delivers exactly one run to the selected destination the heapsort programon the other handis complex routine relying on the availability of closed procedure select which simply selects new destination there would be no problemif in one (or bothof the programs the required procedure would be called at single place onlybut insteadthey are called at several places in both programs this situation is best reflected by the use of coroutine (thread)it is suitable in those cases in which several processes coexist the most typical representative is the combination of process that produces stream of information in distinct entities and process that consumes this stream this producer-consumer relationship can be expressed in terms of two coroutinesone of them may well be the main program itself the coroutine may be considered as process that contains one or more breakpoints if such breakpoint is encounteredthen control returns to the program that had activated the coroutine whenever the coroutine is called againexecution is resumed at that breakpoint in our examplewe might consider the polyphase sort as the main programcalling upon copyrunwhich is formulated as coroutine it consists of the main body of the program presented belowin which each call of switch now represents breakpoint the test for end of file would then have to be replaced systematically by test of whether or not the coroutine had reached its endpoint procedure distribute (srcfiles file)files file(adens _mergesorts *const mh div (*heap size*var lrintegerxintegerdestfiles filerwfiles riderharray of integer(*heap*procedure sift (lrinteger)var ijxintegerbegin :lj : * + : [ ]if ( [ + ]then inc(jendwhile ( [ ]do [ : [ ] :jj : * + if ( [ + ]then inc(jend endh[ : end siftbegin files set(rsrc )dest :files new("")files set(wdest )(*step fill upper half of heap* :mrepeat dec( )files readint(rh[ ]until mh(*step fill lower half of heap*repeat dec( )files readint(rh[ ])sift(lm- until (*step pass elements through heap* :mfiles readint(rx)while ~ eof do files writeint(wh[ ])if [ < then (* belongs to same run* [ :xsift( - else (*start next run*
dec( ) [ : [ ]sift( - ) [ :xif mh then sift(lm- endif then (*heap fullstart new run* : end endfiles readint(rxend(*step flush lower half of heap* :mrepeat dec( )files writeint(wh[ ]) [ : [ ]sift( - )dec( ) [ : [ ]if mh then sift(lr- end until (*step flush upper half of heapstart new run*while do files writeint(wh[ ]) [ : [ ]dec( )sift( rendreturn dest end distribute analysis and conclusions what performance can be expected from polyphase sort with initial distribution of runs by heapsortwe first discuss the improvement to be expected by introducing the heap in sequence with randomly distributed keys the expected average length of runs is what is this length after the sequence has been funnelled through heap of size mone is inclined to say mbutfortunatelythe actual result of probabilistic analysis is much betternamely (see knuthvol thereforethe expected improvement factor is an estimate of the performance of polyphase can be gathered from table indicating the maximal number of initial runs that can be sorted in given number of partial passes (levelswith given number of sequences as an examplewith six sequences and heap of size file with up to ' ' initial runs can be sorted within partial passes this is remarkable performance reviewing again the combination of polyphase and heapsortone cannot help but be amazed at the complexity of this program after allit performs the same easily defined task of permuting set of items as is done by any of the short programs based on the straight array sorting principles the moral of the entire may be taken as an exhibition of the following the intimate connection between algorithm and underlying data structureand in particular the influence of the latter on the former the sophistication by which the performance of program can be improvedeven when the available structure for its data (sequence instead of arrayis rather ill-suited for the task exercises which of the algorithms given for straight insertionbinary insertionstraight selectionbubble sortshakersortshellsortheapsortquicksortand straight mergesort are stable sorting methods would the algorithm for binary insertion still work correctly if were replaced by < in the while clausewould it still be correct if the statement : + were simplified to :mif notfind sets of values an- upon which the altered program would fail program and measure the execution time of the three straight sorting methods on your computer
and find coefficients by which the factors and have to be multiplied to yield real time estimates specifty invariants for the repetitions in the three straight sorting algorithms consider the following "obviousversion of the procedure partition and find sets of values an- for which this version fails : : - : [ div ]repeat while [ix do : + endwhile [jdo : - endw : [ ] [ : [ ] [ : until write procedure that combines the quicksort and bubblesort algorithms as followsuse quicksort to obtain (unsortedpartitions of length ( )then use bubblesort to complete the task note that the latter may sweep over the entire array of elementshenceminimizing the bookkeeping effort find that value of which minimizes the total sort time noteclearlythe optimum value of will be quite small it may therefore pay to let the bubblesort sweep exactly - times over the array instead of including last pass establishing the fact that no further exchange is necessary perform the same experiment as in exercise with straight selection sort instead of bubblesort naturallythe selection sort cannot sweep over the whole arraythereforethe expected amount of index handling is somewhat greater write recursive quicksort algorithm according to the recipe that the sorting of the shorter partition should be tackled before the sorting of the longer partition perform the former task by an iterative statementthe latter by recursive call (henceyour sort procedure will contain only one recursive call instead of two find permutation of the keys for which quicksort displays its worst (bestbehavior ( construct natural merge program similar to the straight mergeoperating on double length array from both ends inwardcompare its performance with that of the procedure given in this text note that in (two-waynatural merge we do not blindly select the least value among the available keys insteadupon encountering the end of runthe tail of the other run is simply copied onto the output sequence for examplemerging of results in the sequence instead of which seems to be better ordered what is the reason for this strategy sorting method similar to the polyphase is the so-called cascade merge sort [ and it uses different merge pattern givenfor instancesix sequences the cascade mergealso starting with "perfect distributionof runs on performs five-way merge from onto until is emptythen (without involving four-way merge onto then three-way merge onto two-way merge onto and finally copy operation from onto the next pass operates in
the same way starting with five-way merge to and so on although this scheme seems to be inferior to polyphase because at times it chooses to leave some sequences idleand because it involves simple copy operationsit surprisingly is superior to polyphase for (verylarge files and for six or more sequences write well structured program for the cascade merge principle references [ betz and carter proc acm national conf ( )paper [ floyd treesort (algorithms and comm acm no ( ) and comm acm no ( ) [ gilstad polyphase merge sorting an advanced technique proc afips eastern jt comp conf ( ) - [ hoare proof of programfind comm acm no ( ) - [ hoare proof of recursive programquicksort comp no ( ) - [ hoare quicksort comp no ( ) - [ knuth the art of computer programming vol readingmass addisonwesley [ lorin guided bibliography to sorting ibm syst no ( ) - [ shell highspeed sorting procedure comm acm no ( ) - [ singleton an efficient algorithm for sorting with minimal storage (algorithm comm acm no ( ) [ van emden increasing the efficiency of quicksort (algorithm comm acm no ( ) - [ williams heapsort (algorithm comm acm no ( ) -
recursive algorithms introduction an object is said to be recursiveif it partially consists or is defined in terms of itself recursion is encountered not only in mathematicsbut also in daily life who has never seen an advertising picture which contains itselffig picture with recursion recursion is particularly powerful technique in mathematical definitions few familiar examples are those of natural numberstree structuresand of certain functions natural numbers( is natural number (bthe successor of natural number is natural number tree structures(ais tree (called the empty tree(bif and are treesthen the structure consisting of node with two descendants and is also (binarytree the factorial function ( ) ( (nn ( for the power of recursion evidently lies in the possibility of defining an infinite set of objects by finite statement in the same manneran infinite number of computations can be described by finite recursive programeven if this program contains no explicit repetitions recursive algorithmshoweverare primarily appropriate when the problem to be solvedor the function to be computedor the data structure to be processed are already defined in recursive terms in generala recursive program can be expressed as composition of sequence of statements (not containing pand itself [spthe necessary and sufficient tool for expressing programs recursively is the procedure or subroutinefor it allows statement to be given name by which this statement may be invoked if procedure contains an explicit reference to itselfthen it is said to be directly recursiveif contains reference to another procedure qwhich contains (direct or indirectreference to pthen is said to be indirectly recursive the use of recursion may therefore not be immediately apparent from the program text
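As a small illustration of the two forms (a sketch in Python rather than in the notation of this book; the function names are ours), factorial below is directly recursive, whereas is_even and is_odd are indirectly recursive, each referring to the other:

def factorial(n):
    # directly recursive: the procedure refers to itself
    return 1 if n == 0 else n * factorial(n - 1)

def is_even(n):
    # indirectly recursive: is_even refers to is_odd, which refers back to is_even
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    return False if n == 0 else is_even(n - 1)

All three terminate for every n >= 0, because the argument decreases with each activation.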
it is common to associate set of local objects with procedurei set of variablesconstantstypesand procedures which are defined locally to this procedure and have no existence or meaning outside this procedure each time such procedure is activated recursivelya new set of localbound variables is created although they have the same names as their corresponding elements in the set local to the previous instance of the proceduretheir values are distinctand any conflict in naming is avoided by the rules of scope of identifiersthe identifiers always refer to the most recently created set of variables the same rule holds for procedure parameterswhich by definition are bound to the procedure like repetitive statementsrecursive procedures introduce the possibility of nonterminating computationsand thereby also the necessity of considering the problem of termination fundamental requirement is evidently that the recursive calls of are subjected to condition bwhich at some time becomes false the scheme for recursive algorithms may therefore be expressed more precisely by either one of the following formsp if then [spend [sif then endfor repetitionsthe basic technique of demonstrating termination consists of defining function ( ( shall be the set of variables)such that ( implies the terminating condition (of the while or repeat clause)and proving that (xdecreases during each repetition step is called the variant of the repetition in the same mannertermination of recursion can be proved by showing that each execution of decreases some ( )and that ( implies ~ particularly evident way to ensure termination is to associate (valueparametersay nwith pand to recursively call with - as parameter value substituting for then guarantees termination this may be expressed by the following program schematap(nif then [sp( - )end (np[sif then ( - endin practical applications it is mandatory to show that the ultimate depth of recursion is not only finitebut that it is actually quite small the reason is that upon each recursive activation of procedure some amount of storage is required to accommodate its variables in addition to these local variablesthe current state of the computation must be recorded in order to be retrievable when the new activation of is terminated and the old one has to be resumed we have already encountered this situation in the development of the procedure quicksort in chap it was discovered that by naively composing the program out of statement that splits the items into two partitions and of two recursive calls sorting the two partitionsthe depth of recursion may in the worst case approach by clever reassessment of the situationit was possible to limit the depth to log(nthe difference between and log(nis sufficient to convert case highly inappropriate for recursion into one in which recursion is perfectly practical when not to use recursion recursive algorithms are particularly appropriate when the underlying problem or the data to be treated are defined in recursive terms this does not meanhoweverthat such recursive definitions guarantee that recursive algorithm is the best way to solve the problem in factthe explanation of the concept of recursive algorithm by such inappropriate examples has been chief cause of creating widespread apprehension and antipathy toward the use of recursion in programmingand of equating recursion with inefficiency
programs in which the use of algorithmic recursion is to be avoided can be characterized by schema which exhibits the pattern of their composition the equivalent schemata are shown below their characteristic is that there is only single call of either at the end (or the beginningof the composition if then sp end sif then end these schemata are natural in those cases in which values are to be computed that are defined in terms of simple recurrence relations let us look at the well-known example of the factorial numbers fi ! fi the first number is explicitly defined as whereas the subsequent numbers are defined recursively in terms of their predecessorfi+ ( + this recurrence relation suggests recursive algorithm to compute the -th factorial number if we introduce the two variables and to denote the values and fi at the -th level of recursionwe find the computation necessary to proceed to the next numbers in the sequences to be : : andsubstituting these two statements for swe obtain the recursive program if then : : fp end : : the first line is expressed in terms of our conventional programming notation as procedure pbegin if then : : *fp end end more frequently usedbut essentially equivalentform is the one given below is replaced by function procedure fi procedure with which resulting value is explicitly associatedand which therefore may be used directly as constituent of expressions the variable therefore becomes superfluousand the role of is taken over by the explicit procedure parameter procedure (iinteger)integerbegin if then return ( else return end end it now is plain that in this example recursion can be replaced quite simply by iteration this is expressed by the program : : while do : : * end in generalprograms corresponding to the original schemata should be transcribed into one according to the following schemap [ : while do endthere also exist more complicated recursive composition schemes that can and should be translated into an iterative form an example is the computation of the fibonacci numbers which are defined by the recurrence relation
fib(n+1) = fib(n) + fib(n-1) for n > 0, with fib(1) = 1 and fib(0) = 0. A direct, naive transcription leads to the recursive program

procedure fib (n: integer): integer;
var res: integer;
begin
  if n = 0 then res := 0
  elsif n = 1 then res := 1
  else res := fib(n-1) + fib(n-2)
  end;
  return res
end fib

Computation of fib(n) by a call fib(n) causes this function procedure to be activated recursively. How often? We notice that each call with n > 1 leads to two further calls, i.e. the total number of calls grows exponentially (see the figure below). Such a program is clearly impractical.

Fig. The activations of fib(5)

But fortunately the Fibonacci numbers can be computed by an iterative scheme that avoids the recomputation of the same values by use of auxiliary variables x and y such that x = fib(i) and y = fib(i-1):

i := 1; x := 1; y := 0;
while i < n do z := x; x := x + y; y := z; i := i + 1 end

Note: the assignments to x, y, z may be expressed by two assignments only, without a need for the auxiliary variable z: x := x + y; y := x - y.

Thus, the lesson to be drawn is to avoid the use of recursion when there is an obvious solution by iteration. This, however, should not lead to shying away from recursion at any price. There are many good applications of recursion, as the following paragraphs and chapters will demonstrate. The fact that implementations of recursive procedures on essentially non-recursive machines exist proves that for practical purposes every recursive program can be transformed into a purely iterative one. This, however, involves the explicit handling of a recursion stack, and these operations will often obscure the essence of a program to such an extent that it becomes most difficult to comprehend. The lesson is that algorithms which by their nature are recursive rather than iterative should be formulated as recursive procedures. In order to appreciate this point, the reader is referred to the algorithms quicksort and nonrecursivequicksort of the chapter on sorting for a comparison. The remaining part of this chapter is devoted to the development of some recursive programs in situations in which recursion is justifiably appropriate. A later chapter also makes extensive use of recursion in cases in which the underlying data structures let the choice of recursive solutions appear obvious and natural.
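The cost of the naive transcription can be made visible by counting activations. The following Python sketch is our own illustration (the counting wrapper fib_calls is not part of the book's text):

def fib_calls(n):
    # returns the pair (fib(n), number of activations of the naive recursive procedure)
    if n < 2:
        return n, 1
    f1, c1 = fib_calls(n - 1)
    f2, c2 = fib_calls(n - 2)
    return f1 + f2, c1 + c2 + 1

fib_calls(5) yields (5, 15): the naive program needs 15 activations to compute fib(5), whereas the iterative scheme above needs only n - 1 = 4 iterations of its loop.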
two examples of recursive programs the attractive graphic pattern shown in fig consists of superposition of five curves these curves follow regular pattern and suggest that they might be drawn by display or plotter under control of computer our goal is to discover the recursion schemaaccording to which the drawing program might be constructed inspection reveals that three of the superimposed curves have the shapes shown in fig we denote them by and the figures show that hi+ is obtained by the composition of four instances of hi of half size and appropriate rotationand by tying together the four hi by three connecting lines notice that may be considered as consisting of four instances of an empty connected by three straight lines hi is called the hilbert curve of order after its inventorthe mathematician hilbert ( fig hilbert curves of order and since each curve hi consists of four half-sized copies of hi- we express the procedure for drawing hi as composition of four calls for drawing hi- in half size and appropriate rotation for the purpose of illustration we denote the four parts by abc and dand the routines drawing the interconnecting lines by arrows pointing in the corresponding direction then the following recursion scheme emerges (see fig ad bc cb da let us assume that for drawing line segments we have at our disposal procedure line which moves drawing pen in given direction by given distance for our conveniencewe assume that the direction be indicated by an integer parameter as degrees if the length of the unit line is denoted by uthe procedure corresponding to the scheme is readily expressed by using recursive activations of analogously designed procedures and and of itself procedure (iinteger)begin if then ( - )line( ) ( - )line( ) ( - )line( ) ( - end end this procedure is initiated by the main program once for every hilbert curve to be superimposed the
main procedure determines the initial point of the curvei the initial coordinates of the pen denoted by and and the unit increment the square in which the curves are drawn is placed into the middle of the page with given width and height these parameters as well as the drawing procedure line are taken from module draw note that this module retains the current position of the pen definition draw(adens _draw *const width height procedure clear(*clear drawing plane*procedure setpen(xyinteger)procedure line(dirleninteger)(*draw line of length len in direction dir* degreesmove pen accordingly*end draw procedure hilbert draws the hilbert curves hn it uses the four auxiliary procedures abc and recursivelyvar uinteger(adens _hilbert *procedure (iinteger)begin if then ( - )draw line( ) ( - )draw line( ) ( - )draw line( ) ( - end end aprocedure (iinteger)begin if then ( - )draw line( ) ( - )draw line( ) ( - )draw line( ) ( - end end bprocedure (iinteger)begin if then ( - )draw line( ) ( - )draw line( ) ( - )draw line( ) ( - end end cprocedure (iinteger)begin if then ( - )draw line( ) ( - )draw line( ) ( - )draw line( ) ( - end end dprocedure hilbert (ninteger)const squaresize var ix integerbegin draw clearx :draw width div :draw height div :squaresizei : repeat inc( ) : div
: ( div ) : ( div )draw set( ) (iuntil end hilbert fig hilbert curves similar but slightly more complex and aesthetically more sophisticated example is shown in fig this pattern is again obtained by superimposing several curvestwo of which are shown in fig si is called the sierpinski curve of order what is its recursion schemeone is tempted to single out the leaf as basic building blockpossibly with one edge left off but this does not lead to solution the principal difference between sierpinski curves and hilbert curves is that sierpinski curves are closed (without crossoversthis implies that the basic recursion scheme must be an open curve and that the four parts are connected by links not belonging to the recusion pattern itself indeedthese links consist of the four straight lines in the outermost four cornersdrawn with thicker lines in fig they may be regarded as belonging to non-empty initial curve which is square standing on one corner now the recursion schema is readily established the four constituent patterns are again denoted by abc and dand the connecting lines are drawn explicitly notice that the four recursion patterns are indeed identical except for degree rotations
fig sierpinski curves and the base pattern of the sierpinski curves is sa and the recursion patterns are (horizontal and vertical arrows denote lines of double length aa bb cc dd if we use the same primitives for drawing as in the hilbert curve examplethe above recursion scheme is transformed without difficulties into (directly and indirectlyrecursive algorithm procedure (kinteger)begin if then ( - )draw line( ) ( - )draw line( * ) ( - )draw line( ) ( - end end this procedure is derived from the first line of the recursion scheme procedures corresponding to the patterns bc and are derived analogously the main program is composed according to the base pattern its task is to set the initial values for the drawing coordinates and to determine the unit line length according to the size of the plane the result of executing this program with is shown in fig var hintegerprocedure (kinteger)begin if then ( - )draw line( ) ( - )draw line( * ) ( - )draw line( ) ( - end end aprocedure (kinteger)(adens _sierpinski *
begin if then ( - )draw line( ) ( - )draw line( * ) ( - )draw line( ) ( - end end bprocedure (kinteger)begin if then ( - )draw line( ) ( - )draw line( * ) ( - )draw line( ) ( - end end cprocedure (kinteger)begin if then ( - )draw line( ) ( - )draw line( * ) ( - )draw line( ) ( - end end dprocedure sierpinski(ninteger)const squaresize var ix integerbegin draw clearh :squaresize div :draw width div :draw height div hi : repeat inc( ) : -hh : div : +hdraw set( ) ( )draw line( , ) ( )draw line( , ) ( )draw line( , ) ( )draw line( ,huntil end sierpinski the elegance of the use of recursion in these exampes is obvious and convincing the correctness of the programs can readily be deduced from their structure and composition patterns moreoverthe use of an explicit (decreasinglevel parameter guarantees termination since the depth of recursion cannot become greater than in contrast to this recursive formulationequivalent programs that avoid the explicit use of recursion are extremely cumbersome and obscure trying to understand the programs shown in [ - should easily convince the reader of this
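For readers who wish to experiment with these curves without a plotter, the following Python sketch (our own formulation, using a quadrant recursion rather than the four procedures a, b, c and d above) generates the sequence of points of the Hilbert curve of a given order; connecting consecutive points with straight segments draws the curve.

def hilbert_points(order):
    # points of the Hilbert curve of the given order inside the unit square,
    # produced by recursive subdivision into four half-size, rotated copies
    def h(x0, y0, xi, xj, yi, yj, n, out):
        # (x0, y0) is a corner of the current cell;
        # (xi, xj) and (yi, yj) are the two vectors spanning its sides
        if n == 0:
            out.append((x0 + (xi + yi) / 2, y0 + (xj + yj) / 2))
        else:
            h(x0,               y0,               yi/2,  yj/2,  xi/2,  xj/2,  n - 1, out)
            h(x0 + xi/2,        y0 + xj/2,        xi/2,  xj/2,  yi/2,  yj/2,  n - 1, out)
            h(x0 + xi/2 + yi/2, y0 + xj/2 + yj/2, xi/2,  xj/2,  yi/2,  yj/2,  n - 1, out)
            h(x0 + xi/2 + yi,   y0 + xj/2 + yj,   -yi/2, -yj/2, -xi/2, -xj/2, n - 1, out)
    pts = []
    h(0.0, 0.0, 1.0, 0.0, 0.0, 1.0, order, pts)
    return pts

hilbert_points(order) returns 4**order cell centres; for order 1 they are the four points (0.25, 0.25), (0.75, 0.25), (0.75, 0.75), (0.25, 0.75), i.e. the familiar U-shaped curve of order 1.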
Fig. Sierpinski curves

Backtracking Algorithms

A particularly intriguing programming endeavor is the subject of so-called general problem solving. The task is to determine algorithms for finding solutions to specific problems not by following a fixed rule of computation, but by trial and error. The common pattern is to decompose the trial-and-error process into partial tasks. Often these tasks are most naturally expressed in recursive terms and consist of the exploration of a finite number of subtasks. We may generally view the entire process as a trial or search process that gradually builds up and scans (prunes) a tree of subtasks. In many problems this search tree grows very rapidly, often exponentially, depending on a given parameter, and the search effort increases accordingly. Frequently, the search tree can be pruned only by the use of heuristics, thereby reducing computation to tolerable bounds.

It is not our aim to discuss general heuristic rules in this text. Rather, the general principle of breaking up such problem-solving tasks into subtasks and the application of recursion is to be the subject of this chapter. We start out by demonstrating the underlying technique by using an example, namely, the well-known knight's tour.

Given is an n x n board with n^2 fields. A knight, being allowed to move according to the rules of chess, is placed on the field with initial coordinates x0, y0. The problem is to find a covering of the entire board, if there exists one, i.e. to compute a tour of n^2 - 1 moves such that every field of the board is visited exactly once.
the obvious way to reduce the problem of covering fields is to consider the problem of either performing next move or finding out that none is possible let us define the corresponding algorithm first approach is to employ linear search in order to find the next move from which the tour can be completedprocedure trynextmovebegin if board is not full then initialize selection of candidates for the next move and select first onewhile ~(no more candidates~(tour can be completed from this candidatedo select next candidate endhandle search results else handle the case of full board end end trynextmovenotice the if encompassing the procedure bodyit ensures that the degenerate case of full board is handled correctly this is general device that is similar to how arithmetic operations are defined to handle the zero casethe purpose is convenience and robustness if such checks are performed outside of the procedure (as is often done for optimizationthen each call must be accompanied by such check -or its absense must be properly justified in each case introducing such complications is best postponed until correct algorithm is constructed and their necessity is seen the predicate tour can be completed from this candidate is conveniently expressed as functionprocedure that returns logical value since it is necessary to record the sequence of moves being generatedthe function-procedure is proper place both for recording the next move and for its rejection because it is in this procedure that the success of completion of the tour is determined procedure canbedone move )booleanbegin record movetrynextmoveif failed to complete the tour then erase move endreturn tour has been completed end canbedone the recursion scheme is already evident here if we wish to be more precise in describing this algorithmwe are forced to make some decisions on data representation we wish to keep track of the history of successive board occupations then each move in the tour sequence can be characterized by an integer in addition to its two coordinates on the board xy at this point we can make decisions about the appropriate parameters for the two procedures trynextmove and canbedone the parameters of trynextmove are to determine the starting conditions for the next move and also to report on its success the former task is adequately solved by specifying the coordinates xy from which the move is to be madeand by specifying the number of the move (for recording purposesthe latter task requires boolean result parameter with the meaning the move was successful the resulting signature is
procedure trynextmove (xyiintegervar donebooleanthe function-procedure canbedone expresses the predicate tour can be completed from this move and is used within trynextmove in the linear search over possible move destinations determined according to the jump pattern of knights introduce two local variables and to stand for the coordinates of the move destinations examined by the linear search loop the function canbedone must know uv it is also efficient to pass to canbedone the number of this new movewhich is known to be + then the signature of canbedone can be chosen as followsprocedure canbedone (uvi integer)boolean certainly board not full in trynextmove can be expressed as also introduce logical variable eos to signal the condition no more candidates then the algorithm is refined as followsprocedure trynextmove (xyiintegervar doneboolean)var eosbooleanuvintegerbegin if then initialize selection of candidates for the next move and select first one uvwhile ~eos ~canbedone(uvi+ do select next candidate uv enddone :~eos else done :true end end trynextmoveprocedure canbedone (uvi integer)booleanvar donebooleanbegin record movetrynextmove(uvi done)if ~done then erase move endreturn done end canbedone an obvious next step is to represent the board by matrixsay let us also introduce type to denote index valuesvar harray nn of integer the decision to represent each field of the board by an integer instead of boolean value denoting occupation allows to record the history of successive board occupations in the simplest fashion the following convention is an obvious choiceh[xy [xyifield xy has not been visited field xy has been visited in the -th move ( < which statements can now be refined on the basis of these decisionswe can actually complete the refinement of the function-procedure canbedonethe operation of recording the legal move is expressed by the assignment hxy :iand the cancellation of this recording as hxy : alsowe can now express the enumeration of the allowed moves uv from the position xy in the
procedure trynextmove on board that is infinite in all directionseach position xy has number candidate moves uvwhich at this point there is no need to specify (see fig the predicate to choose an acceptable move can be expressed as the logical conjunction of the conditions that the new field lies on the boardi < and < nand that it had not been visited previouslyi huv one further detail must not be overlookeda variable huv does exist only if both and lie within the index range - consequentlythe boolean expressionsubstituted for acceptable in the general schemais valid only if its first four constituent terms are true it is therefore relevant that the term huv appears last as resultthe selection of the next acceptable candidate move is expressed by the familiar scheme of linear search (only formulated now in terms of the repeat loop instead of whilewhich in this case is possible and convenientto signal that there are now further candidate movesthe variable eos can be used let us formulate the operation as procedure nextexplicitly specifying the relevant variables as parametersprocedure next (var eosbooleanvar uvinteger)begin (*~eos*repeat select the next candidate move uv until (no more candidatesor (( <= ( < ( <= ( < ( [uv= ))eos :no more candidates end next the enumeration of candidate moves is accomplished in similar procedure first that generates the first candidate movesee the details in the final program presented below just one more refinement step will lead us to program expressed fully in terms of our basic programming notation we should note that so far the program was developed completely independently of the laws governing the jumps of the knight this delaying of considerations of particularities of the problem was quite deliberate but now is the time to take them into account given starting coordinate pair xy there are eight potential candidates uv of the destination they are numbered to in fig fig the possible moves of knight simple method of obtaining uv from xy is by addition of the coordinate differences stored in either an array of difference pairs or in two arrays of single differences let these arrays be denoted by dx and dyappropriately initialized dx ( - - - - dy ( - - - - then an index may be used to number the next candidate the details are shown in the program presented below we assume global matrix representing the resultthe constant (and nsqr )and arrays dx and dy representig the possible moves of knight (see fig the recursive procedure is initiated by call with the coordinates of that field as parameters from which the tour is to start this field must
be given the value all others are to be marked free var harray nn of integer(adens _knightstour *dxdyarray of integerprocedure canbedone (uviinteger)booleanvar donebooleanbegin [uv:itrynextmove(uvidone)if ~done then [uv: endreturn done end canbedoneprocedure trynextmove (xyiintegervar doneboolean)var eosbooleanuvintegerkintegerprocedure next (var eosbooleanvar uvinteger)begin repeat inc( )if then : dx[ ] : dy[kenduntil ( or (( < ( ( < ( ( [uv ))eos :( end nextprocedure first (var eosbooleanvar uvinteger)begin eos :falsek :- next(eosuvend firstbegin if nsqr then first(eosuv)while ~eos ~canbedone(uvi+ do next(neosuvenddone :~eos else done :true endend trynextmoveprocedure clearvar ijintegerbegin for : to - do for : to - do [ , : end end end clearprocedure knightstour ( integervar doneboolean)begin clearh[ , : trynextmove( done)end knightstourtable indicates solutions obtained with initial positions for and for
table three knightstours what abstractions can now be made from this examplewhich pattern does it exhibit that is typical for this kind of problem-solving algorithmswhat does it teach usthe characteristic feature is that steps toward the total solution are attempted and recorded that may later be taken back and erased in the recordings when it is discovered that the step does not possibly lead to the total solutionthat the step leads into dead-end street this action is called backtracking the general pattern below is derived from trynextmoveassuming that the number of potential candidates in each step is finite procedure trybegin if solution incomplete then initialize selection of candidates and choose the first onewhile ~(no more candidates~canbedone(candidatedo select next end end end tryprocedure canbedone (move)booleanbegin record movetryif not successful then cancel recording endreturn successful end canbedone actual programs mayof courseassume various derivative forms of this schema in particularthe way input is passed to trynextmove may vary depending on the problem specifics indeedthis procedure accesses global variables in order to record the solutionand those variables containin principlea complete information about the current state of the construction process for instancein the knight' tour problem trynextmove needs to know the knight' last position on the boardwhich can be found by search in the matrix howeverthis information is explicitly available when the procedure is calledand it is simpler to pass it via parameters in subsequent examples we will see variations on this theme
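To see this pattern at work outside the book's notation, here is a compact Python sketch of the knight's tour search (our own transcription; the recording and erasing of moves are folded into a single recursive function, and all names are ours):

def knights_tour(n, x0, y0):
    dx = (2, 1, -1, -2, -2, -1, 1, 2)
    dy = (1, 2, 2, 1, -1, -2, -2, -1)
    h = [[0] * n for _ in range(n)]          # 0 means: field not yet visited

    def try_next_move(x, y, i):
        # i moves have been recorded so far, the last one onto field (x, y)
        if i == n * n:                        # board full: the tour is complete
            return True
        for k in range(8):                    # enumerate the candidate moves
            u, v = x + dx[k], y + dy[k]
            if 0 <= u < n and 0 <= v < n and h[u][v] == 0:
                h[u][v] = i + 1               # record the move
                if try_next_move(u, v, i + 1):
                    return True
                h[u][v] = 0                   # erase the move (backtrack)
        return False

    h[x0][y0] = 1
    return h if try_next_move(x0, y0, 1) else None

A call knights_tour(5, 0, 0) returns a 5 x 5 matrix numbering the fields 1 to 25 in the order in which they are visited; if no tour exists from the given starting field, None is returned.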
note that the search condition in the while loop is modeled as procedure-function canbedone for maximal clarification of the logic of the algorithm while keeping the program easily comprehensible certainly the program can be optimized in other respects via appropriate equivalent transformations one canfor instanceeliminate the two procedures first and next by merging the two easily verifiable loops into one such single loop wouldin generalbe more complexbut the final result can turn out to be quite transparent in the case when all solutions are to be found (see the last program in the next sectionthe remainder of this is devoted to the treatment of three more examples they display various incarnations of the abstract schema and are included as further illustrations of the appropriate use of recursion the eight queens problem the problem of the eight queens is well-known example of the use of trial-and-error methods and of backtracking algorithms it was investigated by gauss in but he did not completely solve it this should not surprise anyone after allthe characteristic property of these problems is that they defy analytic solution insteadthey require large amounts of exacting laborpatienceand accuracy such algorithms have therefore gained relevance almost exclusively through the automatic computerwhich possesses these properties to much higher degree than peopleand even geniusesdo the eight queens poblem is stated as follows (see also [ - ])eight queens are to be placed on chess board in such way that no queen checks against any other queen we will use the last schema of sect as template since we know from the rules of chess that queen checks all other figures lying in either the same columnrowor diagonal on the boardwe infer that each column may contain one and only one queenand that the choice of position for the -th queen may be restricted to the -th column the next move in the general recursive scheme will be to position the next queen in the order of their numbersso try will attempt to position the -th queenreceiving as parameter which therefore becomes the column indexand the selection process for positions ranges over the eight possible values for row index procedure try (iinteger)begin if then initialize selection of safe and select the first onewhile ~(no more safe ~canbedone(ijdo select next safe end end end tryprocedure canbedone (ijinteger)boolean(*solution can be completed with -th queen in -th row*begin setqueentry( + )if not successful then removequeen endreturn successful end canbedone in order to proceedit is necessary to make some commitments concerning the data representation an obvious choice would again be square matrix to represent the boardbut little inspection reveals that
such representation would lead to fairly cumbersome operations for checking the availability of positions this is highly undesirable since it is the most frequently executed operation we should therefore choose data representation which makes checking as simple as possible the best recipe is to represent as directly as possible that information which is truly relevant and most often used in our case this is not the position of the queensbut whether or not queen has already been placed along each row and diagonals (we already know that exactly one is placed in each column for < this leads to the following choice of variablesvar xarray of integeraarray of booleanbcarray of boolean where xi denotes the position of the queen in the -th columnaj means "no queen lies in the -th row"bk means "no queen occupies the -th /-diagonalc means "no queen sits on the -th \-diagonal we note that in /-diagonal all fields have the same sums of their coordinates and jand that in \diagonal the coordinate differences - are constant the appropriate solution is shown in the following program queens given these datathe statement setqueen is elaborated to [ :ja[ :falseb[ + :falsec[ - + :false the statement removequeen is refined into [ :trueb[ + :truec[ - + :true the field is safe if it lies in row and in diagonals which are still free henceit can be expressed by the logical expression [jb[ +jc[ - + this allows one to formulate the procedures for enumeration of safe rows for the -th queen by analogy with the preceding example this completes the development of this algorithmthat is shown in full below the computed solution is ( and is shown in fig fig solution to the eight queens problem procedure try (iintegervar doneboolean)var eosbooleanjintegerprocedure next(adens _queens *
This completes the development of this algorithm, which is shown in full below. The computed solution x is shown in the figure below.

Fig.: A solution to the eight queens problem

  PROCEDURE Try (i: INTEGER; VAR done: BOOLEAN);   (*ADenS_Queens*)
    VAR eos: BOOLEAN; j: INTEGER;

    PROCEDURE Next;
    BEGIN
      REPEAT INC(j)
      UNTIL (j = 8) OR (a[j] & b[i+j] & c[i-j+7]);
      eos := (j = 8)
    END Next;

    PROCEDURE First;
    BEGIN
      eos := FALSE; j := -1; Next
    END First;

  BEGIN
    IF i < 8 THEN
      First;
      WHILE ~eos & ~CanBeDone(i, j) DO Next END;
      done := ~eos
    ELSE done := TRUE
    END
  END Try;

  PROCEDURE CanBeDone (i, j: INTEGER): BOOLEAN;
    (*solution can be completed with i-th queen in j-th row*)
    VAR done: BOOLEAN;
  BEGIN
    x[i] := j; a[j] := FALSE; b[i+j] := FALSE; c[i-j+7] := FALSE;
    Try(i+1, done);
    IF ~done THEN
      x[i] := -1;
      a[j] := TRUE; b[i+j] := TRUE; c[i-j+7] := TRUE
    END;
    RETURN done
  END CanBeDone;

  PROCEDURE Queens*;   (*uses global writer W*)
    VAR done: BOOLEAN; i, j: INTEGER;
  BEGIN
    FOR i := 0 TO 7 DO a[i] := TRUE; x[i] := -1 END;
    FOR i := 0 TO 14 DO b[i] := TRUE; c[i] := TRUE END;
    Try(0, done);
    IF done THEN
      FOR i := 0 TO 7 DO Texts.WriteInt(W, x[i], 4) END;
      Texts.WriteLn(W)
    END
  END Queens

Before we abandon the context of the chess board, the eight queens example is to serve as an illustration of an important extension of the trial-and-error algorithm. The extension is, in general terms, to find not only one, but all solutions to a posed problem.

The extension is easily accommodated. We are to recall the fact that the generation of candidates must progress in a systematic manner that guarantees no candidate is generated more than once. This property of the algorithm corresponds to a search of the candidate tree in a systematic fashion in which every node is visited exactly once. It allows, once a solution is found and duly recorded, merely to proceed to the next candidate delivered by the systematic selection process. The modification is formally accomplished by carrying the procedure-function CanBeDone from the loop's guard into its body and substituting the
procedure body in place of its call; to return a logical value is no longer necessary. The general schema is as follows:

  PROCEDURE Try;
  BEGIN
    IF solution incomplete THEN
      initialize selection of candidate moves and select the first one;
      WHILE ~(no more moves) DO
        record move; Try; erase move;
        select next move
      END
    ELSE print solution
    END
  END Try

It comes as a surprise that the search for all possible solutions is realized by a simpler program than the search for a single solution.

In the eight queens problem another simplification is possible. Indeed, the somewhat cumbersome mechanism of enumeration of safe positions, which consists of the two procedures First and Next, was used to disentangle the linear search of the next safe position (the loop over j within Next) and the linear search of the first j which yields a complete solution. Now, thanks to the simplification of the latter loop, such disentanglement is no longer necessary, as the simplest loop over j will suffice, with the safe j filtered by an IF embedded within the loop, without invoking the additional procedures.

The extended algorithm to determine all solutions of the eight queens problem is shown in the following program. Actually, there are only 12 significantly differing solutions; our program does not recognize symmetries. The 12 solutions generated first are listed in the table below. The numbers to the right indicate the frequency of execution of the test for safe fields. Its average over all 92 solutions is 161.

  PROCEDURE write;   (*ADenS_Queens*)
    VAR k: INTEGER;
  BEGIN
    FOR k := 0 TO 7 DO Texts.WriteInt(W, x[k], 4) END;
    Texts.WriteLn(W)
  END write;

  PROCEDURE Try (i: INTEGER);
    VAR j: INTEGER;
  BEGIN
    IF i < 8 THEN
      FOR j := 0 TO 7 DO
        IF a[j] & b[i+j] & c[i-j+7] THEN
          x[i] := j;
          a[j] := FALSE; b[i+j] := FALSE; c[i-j+7] := FALSE;
          Try(i+1);
          x[i] := -1;
          a[j] := TRUE; b[i+j] := TRUE; c[i-j+7] := TRUE
        END
      END
    ELSE
      write; m := m+1 (*solutions count*)
    END
  END Try;

  PROCEDURE AllQueens*;
    VAR i, j: INTEGER;
  BEGIN
    FOR i := 0 TO 7 DO a[i] := TRUE; x[i] := -1 END;
    FOR i := 0 TO 14 DO b[i] := TRUE; c[i] := TRUE END;
    m := 0;
    Try(0);
    Log.String('no. of solutions: '); Log.Int(m); Log.Ln
  END AllQueens

Table: Twelve solutions to the eight queens problem.
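The same single-loop scheme carries over almost literally to other notations. A minimal sketch in Python (an illustration only, not the Oberon of this text; it merely counts the solutions, which should total 92):

  # All-solutions eight queens with the row/diagonal occupancy arrays used above.
  N = 8
  a = [True] * N            # rows
  b = [True] * (2*N - 1)    # /-diagonals, index i+j
  c = [True] * (2*N - 1)    # \-diagonals, index i-j+N-1
  x = [-1] * N
  solutions = 0

  def try_column(i):
      global solutions
      if i < N:
          for j in range(N):
              if a[j] and b[i+j] and c[i-j+N-1]:
                  x[i] = j
                  a[j] = b[i+j] = c[i-j+N-1] = False
                  try_column(i + 1)
                  x[i] = -1
                  a[j] = b[i+j] = c[i-j+N-1] = True
      else:
          solutions += 1

  try_column(0)
  print(solutions)   # expected: 92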
The Stable Marriage Problem

Assume that two disjoint sets A and B of equal size n are given. Find a set of n pairs <a, b> such that a in A and b in B satisfy some constraints. Many different criteria for such pairs exist; one of them is the rule called the stable marriage rule.

Assume that A is a set of men and B is a set of women. Each man and each woman has stated distinct preferences for their possible partners. If the n couples are chosen such that there exists a man and a woman who are not married, but who would both prefer each other to their actual marriage partners, then the assignment is unstable. If no such pair exists, it is called stable. This situation characterizes many related problems in which assignments have to be made according to preferences such as, for example, the choice of a school by students, the choice of recruits by different branches of the armed services, etc. The example of marriages is particularly intuitive; note, however, that the stated list of preferences is invariant and does not change after a particular assignment has been made. This assumption simplifies the problem, but it also represents a grave distortion of reality (called abstraction).

One way to search for a solution is to try pairing off members of the two sets one after the other until the two sets are exhausted. Setting out to find all stable assignments, we can readily sketch a solution by using the program schema of AllQueens as a template. Let Try(m) denote the algorithm to find a partner for man m, and let this search proceed in the order of the man's list of stated preferences. The first version based on these assumptions is:

  PROCEDURE Try (m: man);
    VAR r: rank;
  BEGIN
    IF m < n THEN
      FOR r := 0 TO n-1 DO
        pick the r-th preference of man m;
        IF acceptable THEN
          record the marriage;
          Try(successor(m));
          cancel the marriage
        END
      END
    ELSE record the stable set
    END
  END Try

The initial data are represented by two matrices that indicate the men's and women's preferences:

  VAR wmr: ARRAY n, n OF woman;
      mwr: ARRAY n, n OF man

Accordingly, wmr[m] denotes the preference list of man m, i.e., wmr[m, r] is the woman who occupies the r-th rank in the list of man m. Similarly, mwr[w] is the preference list of woman w, and mwr[w, r] is her r-th choice. A sample data set is shown in the table below. The result is represented by an array of women x, such that x[m] denotes the partner of man m. In order to maintain symmetry between men and women, an additional array y is used, such that y[w] denotes the partner of woman w:

  VAR x, y: ARRAY n OF INTEGER

Table: Sample input data for wmr and mwr.

Actually, y is redundant, since it represents information that is already present through the existence of x. In fact, the relations

  x[y[w]] = w, y[x[m]] = m

hold for all m and w who are married. Thus, the value y[w] could be determined by a simple search of x; the array y, however, clearly improves the efficiency of the algorithm. The information represented by x and y is needed to determine the stability of a proposed set of marriages. Since this set is constructed stepwise by marrying individuals and testing stability after each proposed marriage, x and y are needed even before all their components are defined. In order to keep track of defined components, we may introduce Boolean arrays

  singlem, singlew: ARRAY n OF BOOLEAN
with the meaning that singlem[m] implies that x[m] is undefined, and singlew[w] implies that y[w] is undefined. An inspection of the proposed algorithm, however, quickly reveals that the marital status of a man k is determined by the value m through the relation

  ~singlem[k]  =  k < m

This suggests that the array singlem be omitted; accordingly, we will simplify the name singlew to single. These conventions lead to the refinement shown by the following procedure Try. The predicate acceptable can be refined into the conjunction of single and stable, where stable is a function to be still further elaborated.

  PROCEDURE Try (m: man);
    VAR r: rank; w: woman;
  BEGIN
    IF m < n THEN
      FOR r := 0 TO n-1 DO
        w := wmr[m, r];
        IF single[w] & stable THEN
          x[m] := w; y[w] := m; single[w] := FALSE;
          Try(m+1);
          single[w] := TRUE
        END
      END
    ELSE record the solution
    END
  END Try

At this point, the strong similarity of this solution with procedure AllQueens is still noticeable. The crucial task is now the refinement of the algorithm to determine stability. Unfortunately, it is not possible to represent stability by such a simple expression as the safety of a queen's position. The first detail that should be kept in mind is that stability follows by definition from comparisons of ranks. The ranks of men or women, however, are nowhere explicitly available in our collection of data established so far. Surely, the rank of woman w in the mind of man m can be computed, but only by a costly search of w in wmr[m]. Since the computation of stability is a very frequent operation, it is advisable to make this information more directly accessible. To this end, we introduce the two matrices

  rmw: ARRAY man, woman OF rank;
  rwm: ARRAY woman, man OF rank

such that rmw[m, w] denotes woman w's rank in the preference list of man m, and rwm[w, m] denotes the rank of man m in the list of w. It is plain that the values of these auxiliary arrays are constant and can initially be determined from the values of wmr and mwr.
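Deriving the rank matrices from the preference lists is a simple inversion, as the initialization in the complete program below shows. A minimal sketch in Python (illustrative only; the preference data are made up):

  # Invert a preference list into a rank lookup: rmw[m][w] is w's rank in m's list.
  n = 4
  wmr = [[2, 0, 3, 1],   # man 0 prefers woman 2 first, then 0, 3, 1
         [1, 3, 2, 0],
         [0, 1, 2, 3],
         [3, 2, 1, 0]]

  rmw = [[0] * n for _ in range(n)]
  for m in range(n):
      for r in range(n):
          rmw[m][wmr[m][r]] = r

  assert rmw[0][2] == 0 and rmw[0][1] == 3   # woman 2 is man 0's first choice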
The process of determining the predicate stable now proceeds strictly according to its original definition. Recall that we are trying the feasibility of marrying m and w, where w = wmr[m, r], i.e., w is man m's r-th choice. Being optimistic, we first presume that stability still prevails, and then we set out to find possible sources of trouble. Where could they be hidden? There are two symmetrical possibilities:

1. There might be a woman pw, preferred to w by m, who herself prefers m over her husband.
2. There might be a man pm, preferred to m by w, who himself prefers w over his wife.

Pursuing trouble source 1, we compare the ranks rwm[pw, m] and rwm[pw, y[pw]] for all women preferred to w by m, i.e., for all pw = wmr[m, i] such that i < r. We happen to know that all these candidate women are already married because, were anyone of them still single, m would have picked her beforehand. The described process can be formulated by a simple linear search; S denotes stability.

  i := -1; S := TRUE;
  REPEAT
    INC(i);
    IF i < r THEN
      pw := wmr[m, i];
      IF ~single[pw] THEN S := rwm[pw, m] > rwm[pw, y[pw]] END
    END
  UNTIL (i = r) OR ~S

Hunting for trouble source 2, we must investigate all candidates pm who are preferred by w to their current assignation m, i.e., all preferred men pm = mwr[w, i] such that i < rwm[w, m]. In analogy to tracing trouble source 1, a comparison between the ranks rmw[pm, w] and rmw[pm, x[pm]] is necessary. We must be careful, however, to omit comparisons involving x[pm] where pm is still single. The necessary safeguard is the test pm < m, since we know that all men preceding m are already married.

The complete algorithm is shown below. The table that follows specifies the nine computed stable solutions from the input data wmr and mwr given in the table above.

  PROCEDURE write;   (*ADenS_Marriages*)  (*global writer W*)
    VAR m: man; rm, rw: INTEGER;
  BEGIN
    rm := 0; rw := 0;
    FOR m := 0 TO n-1 DO
      Texts.WriteInt(W, x[m], 4);
      rm := rmw[m, x[m]] + rm;
      rw := rwm[x[m], m] + rw
    END;
    Texts.WriteInt(W, rm, 8); Texts.WriteInt(W, rw, 4);
    Texts.WriteLn(W)
  END write;

  PROCEDURE stable (m, w, r: INTEGER): BOOLEAN;
    VAR pm, pw: rank; i, lim: INTEGER; S: BOOLEAN;
  BEGIN
    i := -1; S := TRUE;
    REPEAT
      INC(i);
      IF i < r THEN
        pw := wmr[m, i];
        IF ~single[pw] THEN S := rwm[pw, m] > rwm[pw, y[pw]] END
      END
    UNTIL (i = r) OR ~S;
    i := -1; lim := rwm[w, m];
    REPEAT
      INC(i);
      IF i < lim THEN
        pm := mwr[w, i];
        IF pm < m THEN S := rmw[pm, w] > rmw[pm, x[pm]] END
      END
    UNTIL (i = lim) OR ~S;
    RETURN S
  END stable;

  PROCEDURE Try (m: INTEGER);
    VAR w, r: INTEGER;
  BEGIN
    IF m < n THEN
      FOR r := 0 TO n-1 DO
        w := wmr[m, r];
        IF single[w] & stable(m, w, r) THEN
          x[m] := w; y[w] := m; single[w] := FALSE;
          Try(m+1);
          single[w] := TRUE
        END
      END
    ELSE write
    END
  END Try;

  PROCEDURE FindStableMarriages (VAR S: Texts.Scanner);
    VAR m, w, r: INTEGER;
  BEGIN
    FOR m := 0 TO n-1 DO
      FOR r := 0 TO n-1 DO
        Texts.Scan(S); wmr[m, r] := S.i; rmw[m, wmr[m, r]] := r
      END
    END;
    FOR w := 0 TO n-1 DO
      single[w] := TRUE;
      FOR r := 0 TO n-1 DO
        Texts.Scan(S); mwr[w, r] := S.i; rwm[w, mwr[w, r]] := r
      END
    END;
    Try(0)
  END FindStableMarriages

This algorithm is based on a straightforward backtracking scheme. Its efficiency primarily depends on the sophistication of the solution tree pruning scheme. A somewhat faster, but more complex and less transparent algorithm has been presented by McVitie and Wilson [3-1 and 3-2], who also have extended it to the case of sets (of men and women) of unequal size.

Algorithms of the kind of the last two examples, which generate all possible solutions to a problem (given certain constraints), are often used to select one or several of the solutions that are optimal in some sense. In the present example, one might, for instance, be interested in the solution that on the average best satisfies the men, or the women, or everyone. Notice that the table below indicates the sums of the ranks of all women in the preference lists of their husbands, and the sums of the ranks of all men in the preference lists of their wives. These are the values

  rm = Σ m: 0 <= m < n: rmw[m, x[m]]
  rw = Σ m: 0 <= m < n: rwm[x[m], m]
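Before turning to the tabulated results, it may help to see the same backtracking enumeration in compact form. A Python sketch (illustrative only; the small preference lists are made up, and the names follow the 0-based conventions above):

  # Enumerate all stable assignments, mirroring Try and stable above.
  n = 3
  wmr = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]   # men's preference lists (hypothetical)
  mwr = [[1, 0, 2], [0, 1, 2], [2, 1, 0]]   # women's preference lists (hypothetical)

  rmw = [[0]*n for _ in range(n)]
  rwm = [[0]*n for _ in range(n)]
  for m in range(n):
      for r in range(n):
          rmw[m][wmr[m][r]] = r
  for w in range(n):
      for r in range(n):
          rwm[w][mwr[w][r]] = r

  x = [-1]*n; y = [-1]*n; single = [True]*n
  results = []

  def stable(m, w, r):
      for i in range(r):                      # trouble source 1
          pw = wmr[m][i]
          if not single[pw] and rwm[pw][m] < rwm[pw][y[pw]]:
              return False
      for i in range(rwm[w][m]):              # trouble source 2
          pm = mwr[w][i]
          if pm < m and rmw[pm][w] < rmw[pm][x[pm]]:
              return False
      return True

  def try_man(m):
      if m < n:
          for r in range(n):
              w = wmr[m][r]
              if single[w] and stable(m, w, r):
                  x[m] = w; y[w] = m; single[w] = False
                  try_man(m + 1)
                  single[w] = True
      else:
          rm = sum(rmw[k][x[k]] for k in range(n))
          rw = sum(rwm[x[k]][k] for k in range(n))
          results.append((list(x), rm, rw))

  try_man(0)
  print(results)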
  rm   rw   number of evaluations of stability
  solution 1 = male-optimal solution; solution 9 = female-optimal solution

Table: Result of the stable marriage problem.

The solution with the least value rm is called the male-optimal stable solution; the one with the smallest rw is the female-optimal stable solution. It lies in the nature of the chosen search strategy that good solutions from the men's point of view are generated first, and the good solutions from the women's perspective appear toward the end. In this sense, the algorithm is biased toward the male population. This can quickly be changed by systematically interchanging the role of men and women, i.e., by merely interchanging mwr with wmr and interchanging rmw with rwm.

We refrain from extending this program further and leave the incorporation of a search for an optimal solution to the next and last example of a backtracking algorithm.

The Optimal Selection Problem

The last example of a backtracking algorithm is a logical extension of the previous two examples represented by the general schema. First we were using the principle of backtracking to find a single solution to a given problem. This was exemplified by the knight's tour and the eight queens. Then we tackled the goal of finding all solutions to a given problem; the examples were those of the eight queens and the stable marriages. Now we wish to find an optimal solution.

To this end, it is necessary to generate all possible solutions, and in the course of generating them to retain the one that is optimal in some specific sense. Assuming that optimality is defined in terms of some positive-valued function f(s), the algorithm is derived from the general schema of Try by replacing the statement print solution by the statement

  IF f(solution) > f(optimum) THEN optimum := solution END

The variable optimum records the best solution so far encountered. Naturally, it has to be properly initialized; moreover, it is customary to record the value f(optimum) by another variable in order to avoid its frequent recomputation.

An example of the general problem of finding an optimal solution to a given problem follows: We choose the important and frequently encountered problem of finding an optimal selection out of a given set of objects subject to constraints. Selections that constitute acceptable solutions are gradually built up by investigating individual objects from the base set. A procedure Try describes the process of investigating the suitability of one individual object, and it is called recursively (to investigate the next object) until all objects have been considered. We note that the consideration of each object (called candidates in previous examples) has two possible outcomes, namely, either the inclusion of the investigated object in the current selection or its exclusion. This makes the use of a repeat or for statement inappropriate; instead, the two cases may as well be explicitly written out. This is shown, assuming that the objects are numbered 0, 1, ... , n-1:
  PROCEDURE Try (i: INTEGER);
  BEGIN
    IF i < n THEN
      IF inclusion is acceptable THEN
        include the i-th object;
        Try(i+1);
        eliminate the i-th object
      END;
      IF exclusion is acceptable THEN Try(i+1) END
    ELSE check optimality
    END
  END Try

From this pattern it is evident that there are 2^n possible sets; clearly, appropriate acceptability criteria must be employed to reduce the number of investigated candidates very drastically. In order to elucidate this process, let us choose a concrete example for a selection problem: Let each of the n objects a[0], ... , a[n-1] be characterized by its weight and its value. Let the optimal set be the one with the largest sum of the values of its components, and let the constraint be a limit on the sum of their weight. This is a problem well known to all travellers who pack suitcases by selecting from n items in such a way that their total value is optimal and that their total weight does not exceed a specific allowance.

We are now in a position to decide upon the representation of the given facts in terms of global variables. The choices are easily derived from the foregoing developments:

  TYPE Object = RECORD weight, value: INTEGER END;
  VAR a: ARRAY n OF Object;
      limw, totv, maxv: INTEGER;
      s, opts: SET

The variables limw and totv denote the weight limit and the total value of all n objects. These two values are actually constant during the entire selection process. s represents the current selection of objects in which each object is represented by its name (index). opts is the optimal selection so far encountered, and maxv is its value.

Which are now the criteria for acceptability of an object for the current selection? If we consider inclusion, then an object is selectable if it fits into the weight allowance. If it does not fit, we may stop trying to add further objects to the current selection. If, however, we consider exclusion, then the criterion for acceptability, i.e., for the continuation of building up the current selection, is that the total value which is still achievable after this exclusion is not less than the value of the optimum so far encountered. For, if it is less, continuation of the search, although it may produce some solution, will not yield the optimal solution. Hence any further search on the current path is fruitless. From these two conditions we determine the relevant quantities to be computed for each step in the selection process:

1. The total weight tw of the selection s so far made.
2. The still achievable value av of the current selection s.

These two entities are appropriately represented as parameters of the procedure Try. The condition inclusion is acceptable can now be formulated as

  tw + a[i].weight <= limw

and the subsequent check for optimality as
  IF av > maxv THEN (*new optimum, record it*) opts := s; maxv := av END

The last assignment is based on the reasoning that the achievable value is the achieved value, once all objects have been dealt with. The condition exclusion is acceptable is expressed by

  av - a[i].value > maxv

Since it is used again thereafter, the value av - a[i].value is given the name av1 in order to circumvent its reevaluation.

The entire procedure is now composed of the discussed parts with the addition of appropriate initialization statements for the global variables. The ease of expressing inclusion in and exclusion from the set s by use of set operators is noteworthy. The results opts and maxv of the program Selection with weight allowances ranging from 10 to 120 are listed in the table below.

  TYPE Object = RECORD value, weight: INTEGER END;   (*ADenS_OptSelection*)

  VAR a: ARRAY n OF Object;
      limw, totv, maxv: INTEGER;
      s, opts: SET;

  PROCEDURE Try (i, tw, av: INTEGER);
    VAR tw1, av1: INTEGER;
  BEGIN
    IF i < n THEN
      (*try inclusion*)
      tw1 := tw + a[i].weight;
      IF tw1 <= limw THEN
        s := s + {i};
        Try(i+1, tw1, av);
        s := s - {i}
      END;
      (*try exclusion*)
      av1 := av - a[i].value;
      IF av1 > maxv THEN Try(i+1, tw, av1) END
    ELSIF av > maxv THEN
      maxv := av; opts := s
    END
  END Try;

  PROCEDURE Selection (weightinc, weightlimit: INTEGER);
  BEGIN
    limw := 0;
    REPEAT
      limw := limw + weightinc;
      maxv := 0; s := {}; opts := {};
      Try(0, 0, totv)
    UNTIL limw >= weightlimit
  END Selection
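The include/exclude recursion with its two pruning tests translates directly into other notations. A minimal sketch in Python (illustrative only; the weights, values, and weight limit are made up) finds opts and maxv for a single weight limit:

  # Branch-and-bound selection: include or exclude each object, pruning by the
  # weight limit and by the still-achievable value, as in procedure Try above.
  objects = [(10, 18), (11, 20), (12, 17), (13, 19), (14, 25)]   # (weight, value)
  limw = 30
  totv = sum(v for _, v in objects)

  maxv = 0
  opts = set()
  s = set()

  def try_object(i, tw, av):
      global maxv, opts
      if i < len(objects):
          w, v = objects[i]
          if tw + w <= limw:          # inclusion still fits the weight allowance
              s.add(i)
              try_object(i + 1, tw + w, av)
              s.discard(i)
          if av - v > maxv:           # exclusion can still beat the optimum so far
              try_object(i + 1, tw, av - v)
      elif av > maxv:                 # complete selection: record new optimum
          maxv = av
          opts = set(s)

  try_object(0, 0, totv)
  print(opts, maxv)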
  Weight   Value   limw ... maxv

Table: Sample output from the optimal selection program. The asterisks mark the objects that form the optimal sets opts for the total weight limits ranging from 10 to 120.

This backtracking scheme with a limitation factor curtailing the growth of the potential search tree is also known as a branch and bound algorithm.

Exercises

3.1 (Towers of Hanoi) Given are three rods and n disks of different sizes. The disks can be stacked up on the rods, thereby forming towers. Let the n disks initially be placed on rod A in the order of decreasing size, as shown in the figure below for n = 3. The task is to move the n disks from rod A to rod C such that they are ordered in the original way. This has to be achieved under the constraints that
1. in each step exactly one disk is moved from one rod to another rod;
2. a disk may never be placed on top of a smaller disk;
3. rod B may be used as an auxiliary store.
Find an algorithm that performs this task. Note that a tower may conveniently be considered as consisting of the single disk at the top and the tower consisting of the remaining disks. Describe the algorithm as a recursive program.

Fig.: The Towers of Hanoi

3.2 Write a procedure that generates all n! permutations of n elements a[0], ... , a[n-1] in situ, i.e., without the aid of another array. Upon generating the next permutation, a parametric procedure is to be called which may, for instance, output the generated permutation. Hint: Consider the task of generating all permutations of the elements a[0], ... , a[m-1] as consisting of the m subtasks of generating all permutations of a[0], ... , a[m-2] followed by a[m-1], where in the i-th subtask the two elements a[i] and a[m-1] had initially been interchanged.

3.3 Deduce the recursion scheme of the figure below, which is a superposition of the four curves W1, W2, W3, W4. The structure is similar to that of the Sierpinski curves shown earlier. From the recursion pattern, derive a recursive program that draws these curves.
Fig.: Curves W1 - W4

3.4 Only 12 of the 92 solutions computed by the eight queens algorithm are essentially different. The other ones can be derived by reflections about axes or the center point. Devise a program that determines the 12 principal solutions. Note that, for example, the search in the first column may be restricted to its first four positions.

3.5 Change the stable marriage program so that it determines the optimal solution (male or female). It therefore becomes a branch and bound program of the type represented by the program Selection.

3.6 A certain railway company serves n stations S[0], ... , S[n-1]. It intends to improve its customer information service by computerized information terminals. A customer types in his departure station SA and his destination SD, and he is supposed to be (immediately) given the schedule of the train connections with minimum total time of the journey. Devise a program to compute the desired information. Assume that the timetable (which is your data bank) is provided in a suitable data structure containing departure (= arrival) times of all available trains. Naturally, not all stations are connected by direct lines (see also Exercise ...).

3.7 The Ackermann function A is defined for all non-negative integer arguments m and n as follows:

  A(0, n) = n + 1
  A(m, 0) = A(m-1, 1)              (m > 0)
  A(m, n) = A(m-1, A(m, n-1))      (m, n > 0)

Design a program that computes A(m, n) without the use of recursion. As a guideline, use the procedure NonRecursiveQuickSort from the chapter on sorting. Devise a set of rules for the transformation of recursive into iterative programs in general.

References

3-1. D. G. McVitie and L. B. Wilson. The Stable Marriage Problem. Comm. ACM, 14, No. 7 (1971).
3-2. D. G. McVitie and L. B. Wilson. Stable Marriage Assignment for Unequal Sets. Bit, 10 (1970).
3-3. Space Filling Curves, or How to Waste Time on a Plotter. Software - Practice and
Experience, 1, No. 4 (1971).
3-4. N. Wirth. Program Development by Stepwise Refinement. Comm. ACM, 14, No. 4 (1971), 221-27.
Dynamic Information Structures

Recursive Data Types

In Chap. 1 the array, record, and set structures were introduced as fundamental data structures. They are called fundamental because they constitute the building blocks out of which more complex structures are formed, and because in practice they do occur most frequently. The purpose of defining a data type, and of thereafter specifying that certain variables be of that type, is that the range of values assumed by these variables, and therefore their storage pattern, is fixed once and for all. Hence, variables declared in this way are said to be static. However, there are many problems which involve far more complicated information structures. The characteristic of these problems is that not only the values but also the structures of variables change during the computation. They are therefore called dynamic structures. Naturally, the components of such structures are, at some level of resolution, static, i.e., of one of the fundamental data types. This chapter is devoted to the construction, analysis, and management of dynamic information structures.

It is noteworthy that there exist some close analogies between the methods used for structuring algorithms and those for structuring data. As with all analogies, there remain some differences, but a comparison of structuring methods for programs and data is nevertheless illuminating.

The elementary, unstructured statement is the assignment of an expression's value to a variable. Its corresponding member in the family of data structures is the scalar, unstructured type. These two are the atomic building blocks for composite statements and data types. The simplest structures, obtained through enumeration or sequencing, are the compound statement and the record structure. They both consist of a finite (usually small) number of explicitly enumerated components, which may themselves all be different from each other. If all components are identical, they need not be written out individually: we use the for statement and the array structure to indicate replication by a known, finite factor. A choice among two or more elements is expressed by the conditional or the case statement and by extensions of record types, respectively. And finally, a repetition by an initially unknown (and potentially infinite) factor is expressed by the while and repeat statements. The corresponding data structure is the sequence (file), the simplest kind which allows the construction of types of infinite cardinality.

The question arises whether or not there exists a data structure that corresponds in a similar way to the procedure statement. Naturally, the most interesting and novel property of procedures in this respect is recursion. Values of such a recursive data type would contain one or more components belonging to the same type as itself, in analogy to a procedure containing one or more calls to itself. Like procedures, data type definitions might be directly or indirectly recursive.

A simple example of an object that would most appropriately be represented as a recursively defined type is the arithmetic expression found in programming languages. Recursion is used to reflect the possibility of nesting, i.e., of using parenthesized subexpressions as operands in expressions. Hence, let an expression here be defined informally as follows:

An expression consists of a term, followed by an operator, followed by a term. (The two terms constitute the operands of the operator.) A term is either a variable, represented by an identifier, or an expression enclosed in parentheses.

A data type whose values represent such expressions can easily be described by using the tools already
available, with the addition of recursion:

  TYPE expression = RECORD op: INTEGER;
                           opd1, opd2: term
  END;

  TYPE term = RECORD
         IF t: BOOLEAN THEN id: Name
         ELSE subex: expression
         END
       END

Hence, every variable of type term consists of two components, namely, the tagfield t and, if t is true, the field id, or of the field subex otherwise. Consider now, for example, the following four expressions:

1. x + y
2. x - (y * z)
3. (x + y) * (z - w)
4. (x / (y + z)) * w

These expressions may be visualized by the patterns in the figure below, which exhibit their nested, recursive structure, and they determine the layout or mapping of these expressions onto a store.

A second example of a recursive information structure is the family pedigree: Let a pedigree be defined by (the name of) a person and the two pedigrees of the parents. This definition leads inevitably to an infinite structure. Real pedigrees are bounded because at some level of ancestry information is missing. Assume that this can be taken into account by again using a conditional structure:

  TYPE ped = RECORD
         IF known: BOOLEAN THEN
           name: Name; father, mother: ped
         END
       END

Note that every variable of type ped has at least one component, namely, the tagfield called known. If its value is TRUE, then there are three more fields; otherwise there is none. A particular value is shown here in the forms of a nested expression and of a diagram that may suggest a possible storage pattern (see the figure below):

  (T, Ted, (T, Fred, (T, Adam, (F), (F)), (F)), (T, Mary, (F), (T, Eva, (F), (F))))

Fig.: Storage patterns for recursive record structures
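Such recursive type declarations map naturally onto reference-based records in most languages. A minimal sketch in Python (an illustration, not the notation of this text; the class and field names simply mirror the declarations above) models the term/expression pair and the bounded pedigree, including the pedigree value just shown:

  # Recursive record types modeled with dataclasses and optional references.
  from dataclasses import dataclass
  from typing import Optional, Union

  @dataclass
  class Expression:
      op: str
      opd1: "Term"
      opd2: "Term"

  # A term is either an identifier or a parenthesized subexpression.
  Term = Union[str, Expression]

  # Expression 4 above: (x / (y + z)) * w
  e = Expression("*", Expression("/", "x", Expression("+", "y", "z")), "w")

  @dataclass
  class Ped:
      known: bool
      name: Optional[str] = None
      father: Optional["Ped"] = None
      mother: Optional["Ped"] = None

  unknown = Ped(False)
  ted = Ped(True, "Ted",
            Ped(True, "Fred", Ped(True, "Adam", unknown, unknown), unknown),
            Ped(True, "Mary", unknown, Ped(True, "Eva", unknown, unknown)))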