As you single-step through the algorithm, you'll notice that the explanation we gave in the last section is slightly simplified. The sequence for the 4-sort is not actually (0,4,8), (1,5,9), (2,6), and (3,7). Instead, the first two elements of each group of three are sorted first, then the first two elements of the second group, and so on. Once the first two elements of all the groups are sorted, the algorithm returns and sorts three-element groups. The actual sequence is (0,4), (1,5), (2,6), (3,7), (0,4,8), (1,5,9). It might seem more obvious for the algorithm to 4-sort each complete subarray first, (0,4), (0,4,8), (1,5), (1,5,9), (2,6), (3,7), but the algorithm handles the array indices more efficiently using the first scheme.

The Shellsort is actually not very efficient with only 10 items, making almost as many swaps and comparisons as the insertion sort. However, with 100 bars the improvement becomes significant.

It's instructive to run the Workshop applet starting with 100 inversely sorted bars. (Remember that the first press of New creates a random sequence of bars, while the second press creates an inversely sorted sequence.) One figure shows how this looks after the first pass, when the array has been completely 40-sorted; the next shows the situation after the following pass, when it is 13-sorted. With each new value of h, the array becomes more nearly sorted.

Figure: After the 40-sort
Figure: After the 13-sort
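The interleaved order of the subpasses is easy to reproduce in code. Here is a small standalone sketch (our own illustration, not one of the book's listings; the class name and sample data are invented) that performs a single h-sort pass with the same inner logic as shellSort(). A 4-sort pass followed by a 1-sort pass fully sorts a 10-element array, because a 1-sort is an ordinary insertion sort:

```java
// HSortDemo.java: illustrative sketch of single h-sort passes.
public class HSortDemo
   {
   // Insertion-sort every h-th element; one pass handles the
   // subsequences (0,h,2h,...), (1,h+1,...) in interleaved order.
   public static void hSort(int[] a, int h)
      {
      for(int outer = h; outer < a.length; outer++)
         {
         int temp = a[outer];
         int inner = outer;
         while(inner > h-1 && a[inner-h] >= temp)
            {
            a[inner] = a[inner-h];    // shift item h cells right
            inner -= h;
            }
         a[inner] = temp;
         }
      }

   public static void main(String[] args)
      {
      int[] a = {9, 8, 7, 6, 5, 4, 3, 2, 1, 0};
      hSort(a, 4);   // after this call, a is 4-sorted
      hSort(a, 1);   // a 1-sort is a plain insertion sort
      System.out.println(java.util.Arrays.toString(a));
      // prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
      }
   }
```

After the first call you can inspect the array and verify the 4-sorted property: every element is less than or equal to the element four cells to its right.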
When h is large, the number of items per pass is small, and items move long distances. This is very efficient. As h grows smaller, the number of items per pass increases, but the items are already closer together, which is more efficient for the insertion sort. It's the combination of these trends that makes the Shellsort so effective.

Notice that later sorts (small values of h) don't undo the work of earlier sorts (large values of h). An array that has been 40-sorted remains 40-sorted after a 13-sort, for example. If this weren't so, the Shellsort couldn't work.

Java Code for the Shellsort

The Java code for the Shellsort is scarcely more complicated than for the insertion sort. Starting with the insertion sort, you substitute h for 1 in appropriate places and add the formula to generate the interval sequence. We've made shellSort() a method in the ArraySh class, a version of the array classes shown earlier. The listing below shows the complete shellSort.java program.

Listing: The shellSort.java Program

// shellSort.java
// demonstrates shell sort
// to run this program: C>java ShellSortApp
////////////////////////////////////////////////////////////////
class ArraySh
   {
   private double[] theArray;        // ref to array theArray
   private int nElems;               // number of data items
//--------------------------------------------------------------
   public ArraySh(int max)           // constructor
      {
      theArray = new double[max];    // create the array
      nElems = 0;                    // no items yet
      }
//--------------------------------------------------------------
   public void insert(double value)  // put element into array
      {
      theArray[nElems] = value;      // insert it
      nElems++;                      // increment size
      }
//--------------------------------------------------------------
   public void display()             // displays array contents
      {
      System.out.print("A=");
      for(int j=0; j<nElems; j++)    // for each element,
         System.out.print(theArray[j] + " ");  // display it
      System.out.println("");
      }
//--------------------------------------------------------------
   public void shellSort()
      {
      int inner, outer;
      double temp;
      int h = 1;                     // find initial value of h
      while(h <= nElems/3)
         h = h*3 + 1;                // (1, 4, 13, 40, 121, ...)

      while(h > 0)                   // decreasing h, until h=1
         {
                                     // h-sort the file
         for(outer=h; outer<nElems; outer++)
            {
            temp = theArray[outer];
            inner = outer;
                                     // one subpass (eg 0, 4, 8)
            while(inner > h-1 && theArray[inner-h] >= temp)
               {
               theArray[inner] = theArray[inner-h];
               inner -= h;
               }
            theArray[inner] = temp;
            }  // end for
         h = (h-1) / 3;              // decrease h
         }  // end while(h>0)
      }  // end shellSort()
//--------------------------------------------------------------
   }  // end class ArraySh
////////////////////////////////////////////////////////////////
class ShellSortApp
   {
   public static void main(String[] args)
      {
      int maxSize = 10;              // array size
      ArraySh arr;
      arr = new ArraySh(maxSize);    // create the array

      for(int j=0; j<maxSize; j++)   // fill array with
         {                           // random numbers
         double n = (int)(java.lang.Math.random()*99);
         arr.insert(n);
         }
      arr.display();                 // display unsorted array
      arr.shellSort();               // shell sort the array
      arr.display();                 // display sorted array
      }  // end main()
   }  // end class ShellSortApp

In main() we create an object of type ArraySh, capable of holding 10 items, fill it with random data, display it, Shellsort it, and display it again. (The first A= line of output shows the unsorted array, the second the sorted one; the actual values vary from run to run.) You can change maxSize to higher numbers, but don't go too high; 10,000 items take a fraction of a minute to sort.

The Shellsort algorithm, although it's implemented in just a few lines, is not simple to follow. To see the details of its operation, step through a 10-item sort with the Workshop applet, comparing the messages generated by the applet with the code in the shellSort() method.

Other Interval Sequences

Picking an interval sequence is a bit of a black art. Our discussion so far used the formula h = h*3 + 1 to generate the interval sequence, but other interval sequences have been used with varying degrees of success. The only absolute requirement is that the diminishing sequence ends with 1, so the last pass is a normal insertion sort.

In Shell's original paper, he suggested an initial gap of N/2, which was simply divided in half for each pass. Thus the descending sequence for N=100 is 50, 25, 12, 6, 3, 1. This approach has the advantage that you don't need to calculate the sequence before the sort begins to find the initial gap; you just divide N by 2. However, this turns out not to be the best sequence. Although it's still better than the insertion sort for most data, it sometimes degenerates to O(N^2) running time, which is no better than the insertion sort.

A better approach is to divide each interval by 2.2 instead of 2. For N=100 this leads to 45, 20, 9, 4, 1. This is considerably better than dividing by 2, as it avoids some worst-case circumstances that lead to O(N^2) behavior. Some extra code is needed to ensure that the last value in the sequence is 1, no matter what N is. This gives results comparable to Knuth's sequence shown in the listing.

Another possibility for a descending sequence (from Flamig; see the appendix, "Further Reading") is

if(h < 5)
   h = 1;
else
   h = (5*h-1) / 11;

It's generally considered important that the numbers in the interval sequence are relatively prime; that is, they have no common divisors except 1. This makes it more likely that each pass will intermingle all the items sorted on the previous pass. The inefficiency of Shell's original N/2 sequence is due to its failure to adhere to this rule.

You may be able to invent a gap sequence of your own that does just as well (or possibly even better) than those shown. Whatever it is, it should be quick to calculate so as not to slow down the algorithm.
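As a sketch of how these sequences can be generated (our own illustration; the class and method names are invented), the following code builds Knuth's h = h*3 + 1 sequence and the divide-by-2.2 sequence for a given N, including the extra check that forces the divide-by-2.2 sequence to end with 1:

```java
import java.util.ArrayList;
import java.util.List;

public class GapSequences
   {
   // Knuth's sequence: grow h by h = h*3 + 1 while it stays at
   // or below N/3, then record values down to 1 by h = (h-1)/3.
   public static List<Integer> knuth(int n)
      {
      int h = 1;
      while(h <= n/3)
         h = h*3 + 1;                // 1, 4, 13, 40, 121, ...
      List<Integer> gaps = new ArrayList<>();
      while(h > 0)
         {
         gaps.add(h);
         h = (h-1) / 3;              // next smaller gap
         }
      return gaps;
      }

   // Divide-by-2.2 sequence, forcing the final gap to be 1.
   public static List<Integer> div22(int n)
      {
      List<Integer> gaps = new ArrayList<>();
      int h = n;
      while(h > 1)
         {
         h = (int)(h / 2.2);
         if(h < 1)
            h = 1;                   // guarantee last gap is 1
         gaps.add(h);
         }
      return gaps;
      }

   public static void main(String[] args)
      {
      System.out.println(knuth(100));   // [40, 13, 4, 1]
      System.out.println(div22(100));   // [45, 20, 9, 4, 1]
      }
   }
```

For N=100, knuth() produces 40, 13, 4, 1 (matching the listing) and div22() produces 45, 20, 9, 4, 1 (matching the divide-by-2.2 sequence described above).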
Efficiency of the Shellsort

No one so far has been able to analyze the Shellsort's efficiency theoretically, except in special cases. Based on experiments, there are various estimates, which range from O(N^(3/2)) down to O(N^(7/6)).

The table below shows some of these estimated O() values, compared with the slower insertion sort and the faster quicksort. The theoretical times corresponding to various values of N are shown. Note that N^(x/y) means the yth root of N raised to the x power. Thus if N is 100, N^(3/2) is the square root of 100^3, which is 1,000. Also, (logN)^2 means the log of N, squared. This is often written log^2 N, but that's easy to confuse with log2 N, the logarithm to the base 2 of N.

Table: Estimates of Shellsort Running Time

O() Value      Type of Sort       10 Items   100 Items   1,000 Items   10,000 Items
N^2            Insertion, etc.    100        10,000      1,000,000     100,000,000
N^(3/2)        Shellsort          32         1,000       32,000        1,000,000
N*(logN)^2     Shellsort          10         400         9,000         160,000
N^(5/4)        Shellsort          18         316         5,600         100,000
N^(7/6)        Shellsort          14         215         3,200         46,000
N*logN         Quicksort, etc.    10         200         3,000         40,000

For most data the higher estimates, such as N^(3/2), are probably more realistic.

Partitioning

Partitioning is the underlying mechanism of quicksort, which we'll explore next, but it's also a useful operation on its own, so we'll cover it here in its own section.

To partition data is to divide it into two groups, so that all the items with a key value higher than a specified amount are in one group, and all the items with a lower key value are in another.

It's easy to imagine situations in which you would want to partition data. Maybe you want to divide your personnel records into two groups: employees who live within 15 miles of the office and those who live farther away. Or a school administrator might want to divide students into those with grade point averages higher and lower than 3.5, so as to know who deserves to be on the Dean's list.

The Partition Workshop Applet
One figure below shows twelve bars before partitioning, and a second shows them again after partitioning.

Figure: Twelve bars before partitioning
Figure: Twelve bars after partitioning

The horizontal line represents the pivot value. This is the value used to determine into which of the two groups an item is placed. Items with a key value less than the pivot value go in the left part of the array, and those with a greater (or equal) key go in the right part. (In the section on quicksort, we'll see that the pivot value can be the key value of an actual data item, called the pivot. For now, it's just a number.)

The arrow labeled partition points to the leftmost item in the right (higher) subarray. This value is returned from the partitioning method, so it can be used by other methods that need to know where the division is.

For a more vivid display of the partitioning process, set the Partition Workshop applet to 100 bars and press the Run button. The leftScan and rightScan pointers will zip toward each other, swapping bars as they go. When they meet, the partition is complete.

You can choose any value you want for the pivot value, depending on why you're doing the partition (such as choosing a grade point average of 3.5). For variety, the Workshop applet chooses a random number for the pivot value (the horizontal black line) each time New or Size is pressed, but the value is never too far from the average bar height.

After being partitioned, the data is by no means sorted; it has simply been divided into two groups. However, it's more sorted than it was before. As we'll see in the next section, it doesn't take much more trouble to sort it completely.
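As a preview of the code in the next section, here is a minimal standalone sketch of the idea (our own illustration; the class name and the sample grade point averages are invented). It partitions an array around a pivot of 3.5, the Dean's-list example mentioned earlier, and returns the index of the leftmost item in the right (higher) group:

```java
import java.util.Arrays;

public class PartitionDemo
   {
   // Partition a[] around pivot: items less than the pivot end
   // up on the left, items greater or equal on the right.
   // Returns the index of the leftmost item of the right group.
   // The boundary guards mirror the end-of-array tests
   // discussed later in the text.
   public static int partition(double[] a, double pivot)
      {
      int leftPtr = -1;              // before first element
      int rightPtr = a.length;       // past last element
      while(true)
         {
         while(leftPtr < a.length-1 && a[++leftPtr] < pivot)
            ;                        // scan right for big item
         while(rightPtr > 0 && a[--rightPtr] > pivot)
            ;                        // scan left for small item
         if(leftPtr >= rightPtr)
            break;                   // pointers crossed: done
         double temp = a[leftPtr];   // swap out-of-place pair
         a[leftPtr] = a[rightPtr];
         a[rightPtr] = temp;
         }
      return leftPtr;
      }

   public static void main(String[] args)
      {
      // hypothetical grade point averages; pivot 3.5 separates
      // the Dean's-list candidates from the rest
      double[] gpas = {3.9, 2.1, 3.6, 3.2, 4.0, 2.8, 3.7};
      int part = partition(gpas, 3.5);
      System.out.println("partition at " + part + ": "
                         + Arrays.toString(gpas));
      }
   }
```

After the call, everything before the returned index is below 3.5 and everything from the index onward is at or above it, though neither group is internally sorted.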
Notice that within each group the items are not in the order they were in originally. In fact, partitioning tends to reverse the order of some of the data in each group.

The partition.java Program

How is the partitioning process carried out? Let's look at some sample code. The listing below shows the partition.java program, which includes the partitionIt() method for partitioning an array.

Listing: The partition.java Program

// partition.java
// demonstrates partitioning an array
// to run this program: C>java PartitionApp
////////////////////////////////////////////////////////////////
class ArrayPar
   {
   private double[] theArray;        // ref to array theArray
   private int nElems;               // number of data items
//--------------------------------------------------------------
   public ArrayPar(int max)          // constructor
      {
      theArray = new double[max];    // create the array
      nElems = 0;                    // no items yet
      }
//--------------------------------------------------------------
   public void insert(double value)  // put element into array
      {
      theArray[nElems] = value;      // insert it
      nElems++;                      // increment size
      }
//--------------------------------------------------------------
   public int size()                 // return number of items
      { return nElems; }
//--------------------------------------------------------------
   public void display()             // displays array contents
      {
      System.out.print("A=");
      for(int j=0; j<nElems; j++)    // for each element,
         System.out.print(theArray[j] + " ");  // display it
      System.out.println("");
      }
//--------------------------------------------------------------
   public int partitionIt(int left, int right, double pivot)
      {
      int leftPtr = left - 1;        // right of first elem
      int rightPtr = right + 1;      // left of pivot
      while(true)
         {
         while(leftPtr < right &&    // find bigger item
               theArray[++leftPtr] < pivot)
            ;  // (nop)

         while(rightPtr > left &&    // find smaller item
               theArray[--rightPtr] > pivot)
            ;  // (nop)
         if(leftPtr >= rightPtr)     // if pointers cross,
            break;                   //    partition done
         else                        // not crossed, so
            swap(leftPtr, rightPtr); //    swap elements
         }  // end while(true)
      return leftPtr;                // return partition
      }  // end partitionIt()
//--------------------------------------------------------------
   public void swap(int dex1, int dex2)  // swap two elements
      {
      double temp;
      temp = theArray[dex1];             // A into temp
      theArray[dex1] = theArray[dex2];   // B into A
      theArray[dex2] = temp;             // temp into B
      }  // end swap()
//--------------------------------------------------------------
   }  // end class ArrayPar
////////////////////////////////////////////////////////////////
class PartitionApp
   {
   public static void main(String[] args)
      {
      int maxSize = 16;              // array size
      ArrayPar arr;                  // reference to array
      arr = new ArrayPar(maxSize);   // create the array

      for(int j=0; j<maxSize; j++)   // fill array with
         {                           // random numbers
         double n = (int)(java.lang.Math.random()*199);
         arr.insert(n);
         }
      arr.display();                 // display unsorted array
      double pivot = 99;             // pivot value
      System.out.println("Pivot is " + pivot);
      int size = arr.size();         // partition array
      int partDex = arr.partitionIt(0, size-1, pivot);

      System.out.println("Partition is at index " + partDex);
      arr.display();                 // display partitioned array
      }  // end main()
   }  // end class PartitionApp

The main() routine creates an ArrayPar object that holds 16 items of type double. The pivot value is fixed at 99. The routine inserts 16 random values into the ArrayPar, displays them, partitions them by calling the partitionIt() method, and displays them again. In sample output you can see that the partition is successful: the first eight numbers are all smaller than the pivot value of 99; the last eight are all larger.

Notice that the partitioning process doesn't necessarily divide the array in half as it does in this example; that depends on the pivot value and the key values of the data. There may be many more items in one group than in the other.

The Partition Algorithm

The partitioning algorithm works by starting with two pointers, one at each end of the array. (We use the term pointers to mean indices that point to array elements, not C++ pointers.) The pointer on the left, leftPtr, moves toward the right, and the one on the right, rightPtr, moves toward the left. Notice that leftPtr and rightPtr in the partition.java program correspond to leftScan and rightScan in the Partition Workshop applet.

Actually, leftPtr is initialized to one position to the left of the first cell, and rightPtr to one position to the right of the last cell, because they will be incremented and decremented, respectively, before they're used.

Stopping and Swapping

When leftPtr encounters a data item smaller than the pivot value, it keeps going, because that item is in the right place. However, when it encounters an item larger than the pivot value, it stops. Similarly, when rightPtr encounters an item larger than the pivot, it keeps going, but when it finds a smaller item, it also stops. Two inner while loops, the first for leftPtr and the second for rightPtr, control the scanning process. A pointer stops because its while loop exits. Here's a simplified version of the code that scans for out-of-place items:

while( theArray[++leftPtr] < pivot )   // find bigger item
   ;  // (nop)
while( theArray[--rightPtr] > pivot )  // find smaller item
   ;  // (nop)
swap(leftPtr, rightPtr);               // swap elements
The first while loop exits when an item larger than the pivot is found; the second loop exits when an item smaller than the pivot is found. When both these loops exit, both leftPtr and rightPtr point to items that are in the wrong parts of the array, so these items are swapped.

After the swap, the two pointers continue on, again stopping at items that are in the wrong part of the array and swapping them. All this activity is nested in an outer while loop, as can be seen in the partitionIt() method in the listing. When the two pointers eventually meet, the partitioning process is complete and this outer while loop exits.

You can watch the pointers in action when you run the Partition Workshop applet with 100 bars. These pointers, represented by blue arrows, start at opposite ends of the array and move toward each other, stopping and swapping as they go. The bars between them are unpartitioned; those they've already passed over are partitioned. When they meet, the entire array is partitioned.

Handling Unusual Data

If we were sure that there was a data item at the right end of the array that was smaller than the pivot value, and an item at the left end that was larger, the simplified while loops previously shown would work fine. Unfortunately, the algorithm may be called upon to partition data that isn't so well organized.

If all the data is smaller than the pivot value, for example, the leftPtr variable will go all the way across the array, looking in vain for a larger item, and fall off the right end, creating an array index out of bounds exception. A similar fate will befall rightPtr if all the data is larger than the pivot value.

To avoid these problems, extra tests must be placed in the while loops to check for the ends of the array: leftPtr<right in the first loop, and rightPtr>left in the second. This can be seen in context in the listing. In the section on quicksort, we'll see that a clever pivot-selection process can eliminate these end-of-array tests. Eliminating code from inner loops is always a good idea if you want to make a program run faster.

Delicate Code

The code in the while loops is rather delicate. For example, you might be tempted to remove the increment operators from the inner while loops and use them to replace the nop statements. (Nop refers to a statement consisting only of a semicolon, and means no operation.) For example, you might try to change this:

while(leftPtr < right && theArray[++leftPtr] < pivot)
   ;  // (nop)

to this:

while(leftPtr < right && theArray[leftPtr] < pivot)
   ++leftPtr;

and similarly for the other inner while loop. These changes would make it possible for the initial values of the pointers to be left and right, which is somewhat clearer than left-1 and right+1. However, these changes result in the pointers being incremented only when the test succeeds; because the pointers must move on after each comparison, extra statements in the outer while loop would be required to bump them. The nop version is the most efficient solution.

Efficiency of the Partition Algorithm

The partition algorithm runs in O(N) time. It's easy to see this when running the Partition Workshop applet: The two pointers start at opposite ends of the array and move toward each other at a more or less constant rate, stopping and swapping as they go. When they meet, the partition is complete. If there were twice as many items to partition, the pointers would move at the same rate, but they would have twice as far to go (twice as many items to compare and swap), so the process would take twice as long. Thus the running time is proportional to N.

More specifically, for each partition there will be N+1 or N+2 comparisons. Every item will be encountered and used in a comparison by one or the other of the pointers, leading to N comparisons, but the pointers overshoot each other before they find out they've "crossed," or gone beyond each other, so there are one or two extra comparisons before the partition is complete. The number of comparisons is independent of how the data is arranged (except for the uncertainty between one and two extra comparisons at the end of the scan).

The number of swaps, however, does depend on how the data is arranged. If it's inversely ordered and the pivot value divides the items in half, then every pair of values must be swapped, which is N/2 swaps. (Remember in the Partition Workshop applet that the pivot value is selected randomly, so that the number of swaps for inversely sorted bars won't always be exactly N/2.)

For random data, there will be fewer than N/2 swaps in a partition, even if the pivot value is such that half the bars are shorter and half are taller. This is because some bars will already be in the right place (short bars on the left, tall bars on the right). If the pivot value is higher (or lower) than most of the bars, there will be even fewer swaps, because only those few bars that are higher (or lower) than the pivot will need to be swapped. On average, for random data, about half the maximum number of swaps take place.

Although there are fewer swaps than comparisons, they are both proportional to N. Thus the partitioning process runs in O(N) time. Running the Workshop applet, you can see the actual swap and comparison counts for random bars of various sizes.

Quicksort

Quicksort is undoubtedly the most popular sorting algorithm, and for good reason: In the majority of situations, it's the fastest, operating in O(N*logN) time. (This is only true for internal or in-memory sorting; for sorting data in disk files other methods may be better.) Quicksort was discovered by C.A.R. Hoare in 1962.

To understand quicksort, you should be familiar with the partitioning algorithm described in the last section. Basically the quicksort algorithm operates by partitioning an array into two subarrays, and then calling itself to quicksort each of these subarrays. However, there are some embellishments we can make to this basic scheme. These have to do with the selection of the pivot and the sorting of small partitions. We'll examine these refinements after we've looked at a simple version of the main algorithm.

It's difficult to understand what quicksort is doing before you understand how it does it, so we'll reverse our usual presentation and show the Java code for quicksort before presenting the quicksort Workshop applet.

The Quicksort Algorithm
The code for a basic recursive quicksort method is fairly simple. Here's an example:

public void recQuickSort(int left, int right)
   {
   if(right-left <= 0)               // if size is 1,
      return;                        //    it's already sorted
   else                              // size is 2 or larger
      {                              // partition range
      int partition = partitionIt(left, right);
      recQuickSort(left, partition-1);   // sort left side
      recQuickSort(partition+1, right);  // sort right side
      }
   }

As you can see, there are three basic steps:

1. Partition the array or subarray into left (smaller keys) and right (larger keys) groups.
2. Call ourselves to sort the left group.
3. Call ourselves again to sort the right group.

After a partition, all the items in the left subarray are smaller than all those on the right. If we then sort the left subarray and sort the right subarray, the entire array will be sorted. How do we sort these subarrays? By calling ourself.

The arguments to the recQuickSort() method determine the left and right ends of the array (or subarray) it's supposed to sort. The method first checks whether this array consists of only one element. If so, the array is by definition already sorted, and the method returns immediately. This is the base case in the recursion process.

If the array has two or more cells, the algorithm calls the partitionIt() method, described in the last section, to partition it. This method returns the index number of the partition: the left element in the right (larger keys) subarray. The partition marks the boundary between the subarrays. This is shown in the figure.

Figure: Recursive calls sort subarrays
Once the array is partitioned, recQuickSort() calls itself twice: once for the left part of its array, from left to partition-1, and once for the right part, from partition+1 to right. Note that the data item at the index partition is not included in either of the recursive calls. Why not? Doesn't it need to be sorted? The explanation lies in how the pivot value is chosen.

Choosing a Pivot Value

What pivot value should the partitionIt() method use? Here are some relevant ideas:

- The pivot value should be the key value of an actual data item; this item is called the pivot.
- You can pick a data item to be the pivot more or less at random. For simplicity, let's say we always pick the item on the right end of the subarray being partitioned.
- After the partition, if the pivot is inserted at the boundary between the left and right subarrays, it will be in its final sorted position.

This last point may sound unlikely, but remember that, because the pivot's key value is used to partition the array, following the partition the left subarray holds items smaller than the pivot, and the right subarray holds items larger. The pivot starts out on the right, but if it could somehow be placed between these two subarrays, it would be in the right place; that is, in its final sorted position. The figure shows how this looks with a particular pivot key value.

Figure: The pivot and the subarrays

This figure is somewhat fanciful, because you can't actually take an array apart as we've shown. So how do we move the pivot to its proper place?

We could shift all the items in the right subarray to the right one cell to make room for the pivot. However, this is inefficient and unnecessary. Remember that all the items in the right subarray, although they are larger than the pivot, are not yet sorted, so they can be moved around within the right subarray without affecting anything. Therefore, to simplify inserting the pivot in its proper place, we can simply swap the pivot and the leftmost item in the right subarray. This places the pivot in its proper position between the left and right groups. The swapped item is moved to the right end, but because it remains in the right (larger) group, the partitioning is undisturbed. This is shown in the next figure.
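The swap itself is a one-line operation on the array. In this small sketch (our own illustration; the values, the pivot of 36, and the partition index are invented for demonstration), the smaller items occupy cells 0 through 3, the larger items cells 4 through 7, and the pivot is still in the rightmost cell. Swapping the pivot with the leftmost item of the right group puts the pivot in its final position while keeping the displaced item inside the larger group:

```java
import java.util.Arrays;

public class PivotSwapDemo
   {
   public static void main(String[] args)
      {
      // smaller items | larger items | pivot (rightmost cell)
      double[] a = {12, 4, 30, 22,  63, 85, 47, 91,  36};
      int partition = 4;           // leftmost item of right group

      double temp = a[partition];  // swap pivot into place
      a[partition] = a[a.length-1];
      a[a.length-1] = temp;

      // the pivot 36 now sits between the groups; the 63 moved
      // to the right end but stays inside the larger group
      System.out.println(Arrays.toString(a));
      }
   }
```

Because everything left of the pivot is smaller and everything right of it is larger, cell 4 is the pivot's final sorted position; it will never be moved again.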
Once it's swapped into the partition's location, the pivot is in its final resting place. All subsequent activity will take place on one side of it or on the other, but the pivot itself won't be moved (or indeed even accessed) again.

To incorporate the pivot selection process into our recQuickSort() method, let's make it an overt statement, and send the pivot value to partitionIt() as an argument. Here's how that looks:

public void recQuickSort(int left, int right)
   {
   if(right-left <= 0)               // if size <= 1,
      return;                        //    already sorted
   else                              // size is 2 or larger
      {
      double pivot = theArray[right];     // rightmost item
                                     // partition range
      int partition = partitionIt(left, right, pivot);
      recQuickSort(left, partition-1);    // sort left side
      recQuickSort(partition+1, right);   // sort right side
      }
   }  // end recQuickSort()

When we use this scheme of choosing the rightmost item in the array as the pivot, we'll need to modify the partitionIt() method to exclude this rightmost item from the partitioning process; after all, we already know where it should go after the partitioning process is complete: at the partition, between the two groups. Also, once the partitioning process is completed, we need to swap the pivot from the right end into the partition's location. The listing below shows the quickSort1.java program, which incorporates these features.

Listing: The quickSort1.java Program

// quickSort1.java
// demonstrates simple version of quick sort
// to run this program: C>java QuickSort1App
////////////////////////////////////////////////////////////////
class ArrayIns
   {
   private double[] theArray;        // ref to array theArray
   private int nElems;               // number of data items
//--------------------------------------------------------------
   public ArrayIns(int max)          // constructor
      {
      theArray = new double[max];    // create the array
      nElems = 0;                    // no items yet
      }
//--------------------------------------------------------------
   public void insert(double value)  // put element into array
      {
      theArray[nElems] = value;      // insert it
      nElems++;                      // increment size
      }
//--------------------------------------------------------------
   public void display()             // displays array contents
      {
      System.out.print("A=");
      for(int j=0; j<nElems; j++)    // for each element,
         System.out.print(theArray[j] + " ");  // display it
      System.out.println("");
      }
//--------------------------------------------------------------
   public void quickSort()
      {
      recQuickSort(0, nElems-1);
      }
//--------------------------------------------------------------
   public void recQuickSort(int left, int right)
      {
      if(right-left <= 0)            // if size <= 1,
         return;                     //    already sorted
      else                           // size is 2 or larger
         {
         double pivot = theArray[right];   // rightmost item
                                     // partition range
         int partition = partitionIt(left, right, pivot);
         recQuickSort(left, partition-1);  // sort left side
         recQuickSort(partition+1, right); // sort right side
         }
      }  // end recQuickSort()
//--------------------------------------------------------------
   public int partitionIt(int left, int right, double pivot)
      {
      int leftPtr = left-1;          // left    (after ++)
      int rightPtr = right;          // right-1 (after --)
      while(true)
         {                           // find bigger item
         while( theArray[++leftPtr] < pivot )
            ;  // (nop)
                                     // find smaller item
         while(rightPtr > 0 && theArray[--rightPtr] > pivot)
            ;  // (nop)

         if(leftPtr >= rightPtr)     // if pointers cross,
            break;                   //    partition done
         else                        // not crossed, so
            swap(leftPtr, rightPtr); //    swap elements
         }  // end while(true)
      swap(leftPtr, right);          // restore pivot
      return leftPtr;                // return pivot location
      }  // end partitionIt()
//--------------------------------------------------------------
   public void swap(int dex1, int dex2)  // swap two elements
      {
      double temp = theArray[dex1];      // A into temp
      theArray[dex1] = theArray[dex2];   // B into A
      theArray[dex2] = temp;             // temp into B
      }  // end swap()
//--------------------------------------------------------------
   }  // end class ArrayIns
////////////////////////////////////////////////////////////////
class QuickSort1App
   {
   public static void main(String[] args)
      {
      int maxSize = 16;              // array size
      ArrayIns arr;
      arr = new ArrayIns(maxSize);   // create array

      for(int j=0; j<maxSize; j++)   // fill array with
         {                           // random numbers
         double n = (int)(java.lang.Math.random()*99);
         arr.insert(n);
         }
      arr.display();                 // display items
      arr.quickSort();               // quicksort them
      arr.display();                 // display them again
      }  // end main()
   }  // end class QuickSort1App

The main() routine creates an object of type ArrayIns, inserts 16 random data items of type double in it, displays it, sorts it with the quickSort() method, and displays the results. (The first A= line of output shows the unsorted array, the second the sorted one.)

An interesting aspect of the code in the partitionIt() method is that we've been able to remove the test for the end of the array in the first inner while loop. This test, seen in the earlier partitionIt() method in the partition.java program, was

leftPtr < right

It prevented leftPtr running off the right end of the array if there was no item there larger than pivot. Why can we eliminate the test? Because we selected the rightmost item as the pivot, so leftPtr will always stop there. However, the test is still necessary for rightPtr in the second while loop. (Later we'll see how this test can be eliminated as well.)

Choosing the rightmost item as the pivot is thus not an entirely arbitrary choice; it speeds up the code by removing an unnecessary test. Picking the pivot from some other location would not provide this advantage.

The quickSort1 Workshop Applet

At this point you know enough about the quicksort algorithm to understand the nuances of the quickSort1 Workshop applet.

The Big Picture

For the big picture, use the Size button to set the applet to sort 100 random bars, and press the Run button. Following the sorting process, the display will look something like the figure.

Figure: The quickSort1 Workshop applet with 100 bars
The dotted horizontal lines on the display record the subarrays created by the partitioning process: The algorithm sorts the array by partitioning it into two parts, and so on, creating smaller and smaller subarrays. When the sorting process is complete, each dotted line provides a visual record of one of the sorted subarrays. The horizontal range of the line shows which bars were part of the subarray, and its vertical position is the pivot value (the height of the pivot). The total length of all these lines on the display is a measure of how much work the algorithm has done to sort the array; we'll return to this topic later.

Each dotted line (except the shortest ones) should have a line below it (probably separated by other, shorter lines) and a line above it that together add up to the same length as the original line (less one bar). These are the two partitions into which each subarray is divided.

The Details

For a more detailed examination of quicksort's operation, switch to the 12-bar display in the quickSort1 applet and step through the sorting process. You'll see how the pivot value corresponds to the height of the pivot on the right side of the array, how the algorithm partitions the array, swaps the pivot into the space between the two sorted groups, sorts the shorter group (using many recursive calls), and then sorts the larger group.

The figure shows all the steps involved in sorting 12 bars. The horizontal brackets under the arrays show which subarray is being partitioned at each step, and the circled numbers show the order in which these partitions are created. A pivot being swapped into place is shown with a dotted arrow. The final position of the pivot is shown as a dotted cell to emphasize that this cell contains a sorted item that will not be changed thereafter. Horizontal brackets under single cells are base case calls to recQuickSort(); they return immediately.
Sometimes the pivot ends up in its original position on the right side of the array being sorted. In this situation, there is only one subarray remaining to be sorted: that to the left of the pivot. There is no second subarray to its right.

The different steps in the figure occur at different levels of recursion, as shown in the table below. The initial call from main() to recQuickSort() is the first level, recQuickSort() calling two new instances of itself is the second level, these two instances calling four more instances is the third level, and so on.

The order in which the partitions are created, corresponding to the step numbers, does not correspond with depth. It's not the case that all the first-level partitions are done first, then all the second-level ones, and so on. Instead, the left group at every level is handled before any of the right groups.

Table: Recursion Levels for the 12-Bar Sort (step number versus recursion level)

In theory there should be eight steps in the fourth level and 16 in the fifth level, but in this small array we run out of items before these steps are necessary.
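The depth-first, left-before-right order of the recursive calls can be made visible with a little instrumentation. This sketch (our own illustration, grafting a depth counter onto the same recQuickSort() and partitionIt() logic as the listing) reports the maximum recursion depth reached; for random data it stays near log2 N, while for inversely sorted data it grows roughly in proportion to N:

```java
public class DepthDemo
   {
   private double[] theArray;
   private int depth = 0, maxDepth = 0;

   public DepthDemo(double[] a)
      { theArray = a; }

   public int sort()                 // sort and report max depth
      {
      recQuickSort(0, theArray.length-1);
      return maxDepth;
      }

   private void recQuickSort(int left, int right)
      {
      if(right-left <= 0)            // base case: size <= 1
         return;
      depth++;                       // entering a recursion level
      maxDepth = Math.max(maxDepth, depth);
      double pivot = theArray[right];
      int partition = partitionIt(left, right, pivot);
      recQuickSort(left, partition-1);   // left group first
      recQuickSort(partition+1, right);  // then the right group
      depth--;                       // leaving the level
      }

   private int partitionIt(int left, int right, double pivot)
      {
      int leftPtr = left-1, rightPtr = right;
      while(true)
         {
         while(theArray[++leftPtr] < pivot)
            ;                        // pivot at right stops scan
         while(rightPtr > 0 && theArray[--rightPtr] > pivot)
            ;
         if(leftPtr >= rightPtr)
            break;
         double t = theArray[leftPtr];
         theArray[leftPtr] = theArray[rightPtr];
         theArray[rightPtr] = t;
         }
      double t = theArray[leftPtr];  // restore pivot
      theArray[leftPtr] = theArray[right];
      theArray[right] = t;
      return leftPtr;
      }

   public static void main(String[] args)
      {
      double[] inverse = new double[16];
      for(int j=0; j<16; j++)
         inverse[j] = 16 - j;        // 16, 15, ..., 1
      System.out.println("max depth, inverse data: "
                         + new DepthDemo(inverse).sort());
      }
   }
```

With 16 inversely sorted items, nearly every call peels off only one element, so the depth approaches the number of items, foreshadowing the worst-case behavior discussed below.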
enough space for sets of arguments and return valuesone for each recursion level this isas we'll see latersomewhat greater than the logarithm to the base of the number of itemslog the size of the machine stack is determined by your particular system sorting very large numbers of data items using recursive procedures may cause this stack to overflowleading to memory errors things to notice here are some details you may notice as you run the quicksort workshop applet you might think that powerful algorithm like quicksort would not be able to handle subarrays as small as or items howeverthis version of the quicksort algorithm is quite capable of sorting such small subarraysleftscan and rightscan just don' go very far before they meet for this reason we don' need to use different sorting scheme for small subarrays (althoughas we'll see laterhandling small subarrays differently may have advantages at the end of each scanthe leftscan variable ends up pointing to the partition--that isthe left element of the right subarray the pivot is then swapped with the partition to put the pivot in its proper placeas we've seen as we notedin steps and of figure leftscan ends up pointing to the pivot itselfso the swap has no effect this may seem like wasted swapyou might decide that leftscan should stop one bar sooner howeverit' important that leftscan scan all the way to the pivototherwisea swap would unsort the pivot and the partition be aware that leftscan and rightscan start at left- and right this may look peculiar on the displayespecially if left is then leftscan will start at - similarly rightscan initially points to the pivotwhich is not included in the partitioning process these pointers start outside the subarray being partitionedbecause they will be incremented and decremented respectively before they're used the first time the applet shows ranges as numbers in parenthesesfor example( - means the subarray from index to index the range given in some of the messages may be 
negative, from a higher number to a lower one: Array partitioned: left (7-6), right (8-8), for example. The (8-8) range means a single cell, but what does (7-6) mean? This range isn't real; it simply reflects the values that left and right, the arguments to recQuickSort(), have when this method is called. Here's the code in question:

   int partition = partitionIt(left, right, pivot);
   recQuickSort(left, partition-1);      // sort left side
   recQuickSort(partition+1, right);     // sort right side

If partitionIt() is called with left = 7 and right = 8, for example, and happens to return 7 as the partition, then the range supplied in the first call to recQuickSort() will be (7-6) and the range to the second will be (8-8). This is normal. The base case in recQuickSort() is activated by array sizes less than 1 as well as by 1, so it will return immediately for negative ranges. Negative ranges are not shown in the figure, although they do cause (brief) calls to recQuickSort().

Degenerates to O(N²) Performance

If you use the quickSort1 Workshop applet to sort inversely sorted bars, you'll see that the algorithm runs much more slowly and that many more dotted horizontal lines are generated, indicating more and larger subarrays are being partitioned. What's happening here?
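The overall recursive structure discussed here can be sketched in a few lines of Java. This is a minimal sketch of the simple rightmost-pivot version, not the book's listing; the class and helper names are our own. Note how the base case (size less than or equal to 1) silently absorbs the "negative ranges" described above.

```java
// Minimal sketch of the simple quicksort described above:
// rightmost element as pivot, base case for sizes <= 1
// (which also absorbs negative ranges such as (7-6)).
public class SimpleQuickSort
   {
   public static void sort(double[] a)
      {
      recQuickSort(a, 0, a.length - 1);
      }

   private static void recQuickSort(double[] a, int left, int right)
      {
      if(right - left <= 0)            // size 1 (or a negative range):
         return;                       //    already sorted; base case
      double pivot = a[right];         // rightmost item is the pivot
      int partition = partitionIt(a, left, right, pivot);
      recQuickSort(a, left, partition - 1);   // sort left side
      recQuickSort(a, partition + 1, right);  // sort right side
      }

   private static int partitionIt(double[] a, int left, int right,
                                  double pivot)
      {
      int leftPtr = left - 1;          // starts outside the subarray
      int rightPtr = right;            // pivot sits at a[right]
      while(true)
         {
         while(a[++leftPtr] < pivot)   // find item >= pivot
            ;
         while(rightPtr > left && a[--rightPtr] > pivot)
            ;                          // find item <= pivot
         if(leftPtr >= rightPtr)       // pointers crossed: done
            break;
         double temp = a[leftPtr];     // swap the two items
         a[leftPtr] = a[rightPtr];
         a[rightPtr] = temp;
         }
      double temp = a[leftPtr];        // restore pivot to its
      a[leftPtr] = a[right];           //    final sorted position
      a[right] = temp;
      return leftPtr;
      }

   public static void main(String[] args)
      {
      double[] data = {9, 8, 7, 6, 5, 4, 3, 2, 1};  // inversely sorted
      sort(data);
      System.out.println(java.util.Arrays.toString(data));
      }
   }
```

Running main() on the inversely sorted array illustrates the worst case described in the text: every partition peels off a single element.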
Ideally, the pivot should be the median of the items being sorted; that is, half the items should be larger than the pivot, and half smaller. This would result in the array being partitioned into two subarrays of equal size. Two equal subarrays is the optimum situation for the quicksort algorithm. If it has to sort one large and one small array, it's less efficient because the larger subarray has to be subdivided more times.

The worst situation results when a subarray with N elements is divided into one subarray with 1 element and the other with N-1 elements. (This division into 1 cell and N-1 cells can also be seen in some of the steps in the figure.) If this 1 and N-1 division happens with every partition, then every element requires a separate partition step. This is in fact what takes place with inversely sorted data: in all the subarrays, the pivot is the smallest item, so every partition results in N-1 elements in one subarray and only the pivot in the other.

To see this unfortunate process in action, step through the quickSort1 Workshop applet with inversely sorted bars. Notice how many more steps are necessary than with random data. In this situation the advantage gained by the partitioning process is lost and the performance of the algorithm degenerates to O(N²).

Besides being slow, there's another potential problem when quicksort operates in O(N²) time. When the number of partitions increases, the number of recursive function calls also increases. Every function call takes up room on the machine stack. If there are too many calls, the machine stack may overflow and paralyze the system.

To summarize: in the quickSort1 applet, we select the rightmost element as the pivot. If the data is truly random, this isn't too bad a choice, because usually the pivot won't be too close to either end of the array. However, when the data is sorted or inversely sorted, choosing the pivot from one end or the other is a bad idea. Can we improve on our approach to selecting the pivot?

Median-of-Three Partitioning

Many schemes have been devised for picking a better pivot. The method should be simple but have a good chance of avoiding the largest or smallest value. Picking an element at random is simple but, as we've seen, doesn't always result in a good selection. However, we could examine all the elements and actually calculate which one was the median. This would be the ideal pivot choice, but the process isn't practical, as it would take more time than the sort itself.

A compromise solution is to find the median of the first, last, and middle elements of the array, and use this for the pivot. (The median or middle item is the data item chosen so that exactly half the other items are smaller and half are larger.) Picking the median of the first, last, and middle elements is called the median-of-three approach and is shown in the figure.

Figure: The median of three
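To make the idea concrete, here is a small sketch, our own helper rather than the book's code, showing the pivot that median-of-three selects for an already sorted range, compared with the rightmost-element rule:

```java
// Sketch: compare the rightmost-pivot rule with median-of-three
// on an already sorted subarray. The helper names are our own.
public class MedianOfThreeDemo
   {
   // return the median of the first, last, and middle elements
   public static double medianOf3(double[] a, int left, int right)
      {
      int center = (left + right) / 2;
      double lo = a[left], mid = a[center], hi = a[right];
      // the median is the sum minus the largest and the smallest
      return lo + mid + hi - Math.max(lo, Math.max(mid, hi))
                           - Math.min(lo, Math.min(mid, hi));
      }

   public static void main(String[] args)
      {
      double[] sorted = {1, 2, 3, 4, 5, 6, 7, 8, 9};
      double rightmost = sorted[sorted.length - 1];  // 9: the worst pivot
      double median = medianOf3(sorted, 0, sorted.length - 1);  // 5
      System.out.println("rightmost pivot: " + rightmost);
      System.out.println("median-of-three pivot: " + median);
      }
   }
```

For sorted (or inversely sorted) data the rightmost element is an extreme value, while the median of the first, middle, and last elements is the true middle of the range, which splits the subarray roughly in half.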
The median-of-three approach examines only the first, last, and middle items, and yet it successfully avoids picking the largest or smallest item in cases where the data is already sorted or inversely sorted. There are probably some pathological arrangements of data where the median-of-three scheme works poorly, but normally it's a fast and effective technique for finding the pivot.

Besides picking the pivot more effectively, the median-of-three approach has an additional benefit: we can dispense with the rightPtr > left test in the second inside while loop, leading to a small increase in the algorithm's speed. How is this possible? The test can be eliminated because we can use the median-of-three approach not only to select the pivot, but also to sort the three elements used in the selection process. The figure shows how this looks.

Figure: Sorting the left, center, and right elements

Once these three elements are sorted, and the median item is selected as the pivot, we are guaranteed that the element at the left end of the subarray is less than (or equal to) the pivot, and the element at the right end is greater than (or equal to) the pivot. This means that the leftPtr and rightPtr indices can't step beyond the right or left ends of the array, respectively, even if we remove the leftPtr > right and rightPtr < left tests. (The pointer will stop, thinking it needs to swap the item, only to find that it has crossed the other pointer and the partition is complete.) The values at left and right act as sentinels to keep leftPtr and rightPtr confined to valid array values.

Another small benefit to median-of-three partitioning is that after the left, center, and right elements are sorted, the partition process doesn't need to examine these elements again. The partition can begin at left+1 and right-1, because left and right have in effect already been partitioned. We know that left is in the correct partition because it's on the left and it's less than the pivot, and right is in the correct place because it's on the right and it's greater than the pivot.

Thus, median-of-three partitioning not only avoids O(N²) performance for already sorted data, it also allows us to speed up the inner loops of the partitioning algorithm and reduce slightly the number of items that must be partitioned.

The quickSort2.java Program

The listing below shows the quickSort2.java program, which incorporates median-of-three partitioning. We use a separate method, medianOf3(), to sort the left, center, and right elements of a subarray; the median value is then passed as the pivot argument to
the partitionIt() method.

Listing: The quickSort2.java Program

// quickSort2.java
// demonstrates quick sort with median-of-three partitioning
// to run this program: C>java QuickSort2App
////////////////////////////////////////////////////////////////
class ArrayIns
   {
   private double[] theArray;          // ref to array theArray
   private int nElems;                 // number of data items
//--------------------------------------------------------------
   public ArrayIns(int max)            // constructor
      {
      theArray = new double[max];      // create the array
      nElems = 0;                      // no items yet
      }
//--------------------------------------------------------------
   public void insert(double value)    // put element into array
      {
      theArray[nElems] = value;        // insert it
      nElems++;                        // increment size
      }
//--------------------------------------------------------------
   public void display()               // displays array contents
      {
      System.out.print("A=");
      for(int j=0; j<nElems; j++)      // for each element,
         System.out.print(theArray[j] + " ");  // display it
      System.out.println("");
      }
//--------------------------------------------------------------
   public void quickSort()
      {
      recQuickSort(0, nElems-1);
      }
//--------------------------------------------------------------
   public void recQuickSort(int left, int right)
      {
      int size = right-left+1;
      if(size <= 3)                    // manual sort if small
         manualSort(left, right);
      else                             // quicksort if large
         {
         double median = medianOf3(left, right);
         int partition = partitionIt(left, right, median);
         recQuickSort(left, partition-1);
         recQuickSort(partition+1, right);
         }
      }  // end recQuickSort()
//--------------------------------------------------------------
   public double medianOf3(int left, int right)
      {
      int center = (left+right)/2;
                                       // order left & center
      if( theArray[left] > theArray[center] )
         swap(left, center);
                                       // order left & right
      if( theArray[left] > theArray[right] )
         swap(left, right);
                                       // order center & right
      if( theArray[center] > theArray[right] )
         swap(center, right);

      swap(center, right-1);           // put pivot on right
      return theArray[right-1];        // return median value
      }  // end medianOf3()
//--------------------------------------------------------------
   public void swap(int dex1, int dex2)  // swap two elements
      {
      double temp = theArray[dex1];    // A into temp
      theArray[dex1] = theArray[dex2]; // B into A
      theArray[dex2] = temp;           // temp into B
      }  // end swap()
//--------------------------------------------------------------
   public int partitionIt(int left, int right, double pivot)
      {
      int leftPtr = left;              // right of first elem
      int rightPtr = right - 1;        // left of pivot
      while(true)
         {
         while( theArray[++leftPtr] < pivot )   // find bigger
            ;                                   // (nop)
         while( theArray[--rightPtr] > pivot )  // find smaller
            ;                                   // (nop)
         if(leftPtr >= rightPtr)       // if pointers cross,
            break;                     //    partition done
         else                          // not crossed, so
            swap(leftPtr, rightPtr);   //    swap elements
         }  // end while(true)
      swap(leftPtr, right-1);          // restore pivot
      return leftPtr;                  // return pivot location
      }  // end partitionIt()
//--------------------------------------------------------------
   public void manualSort(int left, int right)
      {
      int size = right-left+1;
      if(size <= 1)
         return;                       // no sort necessary
      if(size == 2)
         {                             // 2-sort left and right
         if( theArray[left] > theArray[right] )
            swap(left, right);
         return;
         }
      else                             // size is 3
         {                             // 3-sort left, center (right-1), right
         if( theArray[left] > theArray[right-1] )
            swap(left, right-1);       // left, center
         if( theArray[left] > theArray[right] )
            swap(left, right);         // left, right
         if( theArray[right-1] > theArray[right] )
            swap(right-1, right);      // center, right
         }
      }  // end manualSort()
//--------------------------------------------------------------
   }  // end class ArrayIns
////////////////////////////////////////////////////////////////
class QuickSort2App
   {
   public static void main(String[] args)
      {
      int maxSize = 16;                // array size
      ArrayIns arr;                    // reference to array
      arr = new ArrayIns(maxSize);     // create the array

      for(int j=0; j<maxSize; j++)     // fill array with
         {                             // random numbers
         double n = (int)(java.lang.Math.random()*99);
         arr.insert(n);
         }
      arr.display();                   // display items
      arr.quickSort();                 // quicksort them
      arr.display();                   // display them again
      }  // end main()
   }  // end class QuickSort2App

This program uses another new method, manualSort(), to sort subarrays of 3 or fewer elements. It returns immediately if the subarray is 1 cell (or less), swaps the cells if necessary if the range is 2, and sorts 3 cells if the range is 3. The recQuickSort() routine can't be used to sort ranges of 2 or 3 because median partitioning requires at least 4 cells.

The main() routine and the output of quickSort2.java are similar to those of quickSort1.java.

The quickSort2 Workshop Applet

The quickSort2 Workshop applet demonstrates the quicksort algorithm using median-of-three partitioning. This applet is similar to the quickSort1 Workshop applet, but starts off sorting the first, center, and last elements of each subarray and selecting the median of these as the pivot value. At least, it does this if the array size is greater than 3. If the subarray is 2 or 3 units, the applet simply sorts it "by hand," without partitioning or recursive calls.

Notice the dramatic improvement in performance when the applet is used to sort inversely ordered bars. No longer is every subarray partitioned into 1 cell and N-1 cells; instead, the subarrays are partitioned roughly in half.

Other than this improvement for ordered data, the quickSort2 Workshop applet produces results similar to quickSort1. It is no faster when sorting random data; its advantages become evident only when sorting ordered data.

Handling Small Partitions

If you use the median-of-three partitioning method, it follows that the quicksort algorithm won't work for partitions of three or fewer items. The number 3 in this case is called the cutoff point. In the previous examples we sorted subarrays of 2 or 3 items by hand. Is this the best way?

Using an Insertion Sort for Small Partitions

Another option for dealing with small partitions is to use the insertion sort. When you do this, you aren't restricted to a cutoff of 3. You can set the cutoff to 10 or any other number. It's interesting to experiment with different values of the cutoff to see where the best performance lies. Knuth
(see the bibliography) recommends a cutoff of 9. However, the optimum number depends on your computer, operating system, compiler (or interpreter), and so on. The quickSort3.java program, shown in the listing below, uses an insertion sort to handle subarrays of fewer than 10 cells.

Listing: The quickSort3.java Program

// quickSort3.java
// demonstrates quick sort; uses insertion sort for cleanup
// to run this program: C>java QuickSort3App
////////////////////////////////////////////////////////////////
class ArrayIns
   {
   private double[] theArray;          // ref to array theArray
   private int nElems;                 // number of data items
//--------------------------------------------------------------
   public ArrayIns(int max)            // constructor
      {
      theArray = new double[max];      // create the array
      nElems = 0;                      // no items yet
      }
//--------------------------------------------------------------
   public void insert(double value)    // put element into array
      {
      theArray[nElems] = value;        // insert it
      nElems++;                        // increment size
      }
//--------------------------------------------------------------
   public void display()               // displays array contents
      {
      System.out.print("A=");
      for(int j=0; j<nElems; j++)      // for each element,
         System.out.print(theArray[j] + " ");  // display it
      System.out.println("");
      }
//--------------------------------------------------------------
   public void quickSort()
      {
      recQuickSort(0, nElems-1);
      // insertionSort(0, nElems-1);   // the other option
      }
//--------------------------------------------------------------
   public void recQuickSort(int left, int right)
      {
      int size = right-left+1;
      if(size < 10)                    // insertion sort if small
         insertionSort(left, right);
      else                             // quicksort if large
         {
         double median = medianOf3(left, right);
         int partition = partitionIt(left, right, median);
         recQuickSort(left, partition-1);
         recQuickSort(partition+1, right);
         }
      }  // end recQuickSort()
//--------------------------------------------------------------
   public double medianOf3(int left, int right)
      {
      int center = (left+right)/2;
                                       // order left & center
      if( theArray[left] > theArray[center] )
         swap(left, center);
                                       // order left & right
      if( theArray[left] > theArray[right] )
         swap(left, right);
                                       // order center & right
      if( theArray[center] > theArray[right] )
         swap(center, right);

      swap(center, right-1);           // put pivot on right
      return theArray[right-1];        // return median value
      }  // end medianOf3()
//--------------------------------------------------------------
   public void swap(int dex1, int dex2)  // swap two elements
      {
      double temp = theArray[dex1];    // A into temp
      theArray[dex1] = theArray[dex2]; // B into A
      theArray[dex2] = temp;           // temp into B
      }  // end swap()
//--------------------------------------------------------------
   public int partitionIt(int left, int right, double pivot)
      {
      int leftPtr = left;              // right of first elem
      int rightPtr = right - 1;        // left of pivot
      while(true)
         {
         while( theArray[++leftPtr] < pivot )   // find bigger
            ;                                   // (nop)
         while( theArray[--rightPtr] > pivot )  // find smaller
            ;                                   // (nop)
         if(leftPtr >= rightPtr)       // if pointers cross,
            break;                     //    partition done
         else                          // not crossed, so
            swap(leftPtr, rightPtr);   //    swap elements
         }  // end while(true)
      swap(leftPtr, right-1);          // restore pivot
      return leftPtr;                  // return pivot location
      }  // end partitionIt()
//--------------------------------------------------------------
   public void insertionSort(int left, int right)
      {
      int in, out;
                                       // sorted on left of out
      for(out=left+1; out<=right; out++)
         {
         double temp = theArray[out];  // remove marked item
         in = out;                     // start shifts at out
                                       // until one is smaller,
         while(in>left && theArray[in-1] >= temp)
            {
            theArray[in] = theArray[in-1];  // shift item to right
            --in;                      // go left one position
            }
         theArray[in] = temp;          // insert marked item
         }  // end for
      }  // end insertionSort()
//--------------------------------------------------------------
   }  // end class ArrayIns
////////////////////////////////////////////////////////////////
class QuickSort3App
   {
   public static void main(String[] args)
      {
      int maxSize = 16;                // array size
      ArrayIns arr;                    // reference to array
      arr = new ArrayIns(maxSize);     // create the array

      for(int j=0; j<maxSize; j++)     // fill array with
         {                             // random numbers
         double n = (int)(java.lang.Math.random()*99);
         arr.insert(n);
         }
      arr.display();                   // display items
      arr.quickSort();                 // quicksort them
      arr.display();                   // display them again
      }  // end main()
   }  // end class QuickSort3App

Using the insertion sort for small subarrays turns out to be the fastest approach on our particular installation, but it is not much faster than sorting subarrays of 3 or fewer cells by hand, as in quickSort2.java. The number of comparisons and copies is reduced substantially in the quicksort phase, but is increased by an almost equal amount in the insertion sort, so the time savings are not dramatic. However, it's probably a worthwhile approach if you are trying to squeeze the last ounce of performance out of quicksort.

Insertion Sort Following Quicksort

Another option is to completely quicksort the array without attempting to sort partitions
smaller than the cutoff. When quicksort is finished, the array will be almost sorted. You then apply the insertion sort to the entire array. The insertion sort is supposed to operate efficiently on almost-sorted arrays, and this approach is recommended by some experts, but on our installation it runs very slowly. The insertion sort appears to be happier doing a lot of small sorts than one big one.

Removing Recursion

Another embellishment recommended by many writers is removing recursion from the quicksort algorithm. This involves rewriting the algorithm to store deferred subarray bounds (left and right) on a stack, and using a loop instead of recursion to oversee the partitioning of smaller and smaller subarrays. The idea in doing this is to speed up the program by removing method calls. However, this idea arose with older compilers and computer architectures, which imposed a large time penalty for each method call. It's not clear that removing recursion is much of an improvement for modern systems, which handle method calls more efficiently.

Efficiency of Quicksort

We've said that quicksort operates in O(N*logN) time. As we saw in the discussion of mergesort, this is generally true of the divide-and-conquer algorithms, in which a recursive method divides a range of items into two groups and then calls itself to handle each group. In this situation the logarithm actually has a base of 2: the running time is proportional to N*log2N.

You can get an idea of the validity of this N*log2N running time for quicksort by running one of the quickSort Workshop applets with 100 random bars and examining the resulting dotted horizontal lines. Each dotted line represents an array or subarray being partitioned: the pointers leftScan and rightScan moving toward each other, comparing data items and swapping when appropriate. We saw in the section on partitioning that a single partition runs in O(N) time. This tells us that the total length of all the lines is proportional to the running time of quicksort. But how long are all the lines? It would be tedious to measure them with a ruler on the screen, but we can visualize them a different way. There is always one line that runs the entire width of the graph, spanning N bars. This results from the first partition. There will also be two lines (one below and one above the first line) that have an average length of N/2 bars; together they are again N bars long. Then there will be four lines with an average length of N/4 that again total N bars, then 8 lines, 16 lines, and so on. The figure shows how this looks for 1, 2, 4, and 8 lines.
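The level-by-level accounting described above, one line of N cells, two of N/2, four of N/4, and so on, can be checked with a short calculation. This sketch is ours, not part of the book's code:

```java
// Sum the average total line length at each level of recursion:
// 1 line of N cells, 2 lines of N/2, 4 of N/4, ... until the
// average length drops below one cell.
public class PartitionLineLengths
   {
   public static double totalLineLength(int n)
      {
      double length = n;               // average line length at this level
      int lines = 1;                   // number of lines at this level
      double total = 0;
      while(length >= 1)
         {
         total += lines * length;      // each level contributes about N cells
         length /= 2;                  // next level: lines half as long...
         lines *= 2;                   // ...but twice as many of them
         }
      return total;
      }

   public static void main(String[] args)
      {
      int n = 100;
      System.out.println("total cells: " + totalLineLength(n));
      System.out.println("N*log2(N):   " + n * Math.log(n) / Math.log(2));
      }
   }
```

For N = 100 the loop runs for 7 levels, each contributing about 100 cells, which is close to 100 times log2 of 100, in agreement with the informal analysis that follows.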
In this figure solid horizontal lines represent the dotted horizontal lines in the quicksort applets, and captions like N/4 cells long indicate average, not actual, line lengths. The circled numbers on the left show the order in which the lines are created. Each series of lines (the eight N/8 lines, for example) corresponds to a level of recursion. The initial call to recQuickSort() is the first level and makes the first line; the two calls from within the first call (the second level of recursion) make the next two lines; and so on.

If we assume we start with 100 cells, the results are shown in the table below.

Table: Line Lengths and Recursion

Recursion Level    Average Line Length (Cells)    Number of Lines    Total Length (Cells)
1                  100                            1                  100
2                  50                             2                  100
3                  25                             4                  100
4                  12                             8                  96
5 (not shown)      6                              16                 96
6 (not shown)      3                              32                 96
7 (not shown)      1                              64                 64
                                                  Total:             652

Where does this division process stop? If we keep dividing 100 by 2, and count how many times we do this, we get the series 100, 50, 25, 12, 6, 3, 1, which is 7 levels of recursion. This looks about right on the Workshop applets: if you pick some point on the graph and count all the dotted lines directly above and below it, there will be an average of approximately 7. (In the figure, because not all levels of recursion are shown, only 4 lines intersect any vertical slice of the graph.)

The table shows a total of 652 cells. This is only an approximation because of round-off errors, but it's close to 100 times the logarithm to the base 2 of 100, which is 6.65. Thus this informal analysis suggests the validity of the N*log2N running time for quicksort.

More specifically, in the section on partitioning, we found that there should be N+1 comparisons and fewer than N/2 swaps. Multiplying these quantities by log2N for various values of N gives the results shown in the table below.
Table: Swaps and Comparisons in Quicksort

N      log2N    N*log2N    Comparisons: (N+1)*log2N    Swaps: fewer than N/2 * log2N
8      3        24         27                          12
12     3.59     43         47                          21.5
16     4        64         68                          32
64     6        384        390                         192
100    6.65     665        672                         332.5
128    7        896        903                         448

The log2N quantity used in the table is actually true only in the best-case scenario, where each subarray is partitioned exactly in half. For random data the figure is slightly greater. Nevertheless, the quickSort1 and quickSort2 Workshop applets approximate these results for 12 and 100 bars, as you can see by running them and observing the Swaps and Comparisons fields.

Because they have different cutoff points and handle the resulting small partitions differently, quickSort1 performs fewer swaps but more comparisons than quickSort2. The number of swaps shown in the table is the maximum (which assumes the data is inversely sorted). For random data the actual number of swaps turns out to be one half to two thirds of the figures shown.

Summary

- The Shellsort applies the insertion sort to widely spaced elements, then less widely spaced elements, and so on.
- The expression n-sorting means sorting every nth element.
- A sequence of numbers, called the interval sequence, or gap sequence, is used to determine the sorting intervals in the Shellsort.
- A widely used interval sequence is generated by the recursive expression h = 3*h + 1, where the initial value of h is 1.
- If an array holds 1,000 items, it could be 364-sorted, 121-sorted, 40-sorted, 13-sorted, 4-sorted, and finally 1-sorted.
- The Shellsort is hard to analyze, but runs in approximately O(N*(logN)²) time. This is much faster than the O(N²) algorithms like insertion sort, but slower than the O(N*logN) algorithms like quicksort.
- To partition an array is to divide it into two subarrays, one of which holds items with key values less than a specified value, while the other holds items with keys greater than or equal to this value.
- The pivot value is the value that determines into which group an item will go during partitioning; items smaller than the pivot value go in the left group; larger items go in
- In the partitioning algorithm, two array indices, each in its own while loop, start at opposite ends of the array and step toward each other, looking for items that need to be swapped.
- When an index finds an item that needs to be swapped, its while loop exits.
- When both while loops exit, the items are swapped.
- When both while loops exit and the indices have met or passed each other, the partition is complete.
- Partitioning operates in linear O(N) time, making N plus 1 or 2 comparisons and fewer than N/2 swaps.
- The partitioning algorithm may require extra tests in its inner while loops to prevent the indices running off the ends of the array.
- Quicksort partitions an array and then calls itself twice recursively to sort the two resulting subarrays.
- Subarrays of one element are already sorted; this can be a base case for quicksort.
- The pivot value for a partition in quicksort is the key value of a specific item, called the pivot.
- In a simple version of quicksort, the pivot can always be the item at the right end of the subarray.
- During the partition the pivot is placed out of the way on the right, and is not involved in the partitioning process.
- Later the pivot is swapped again, into the space between the two partitions. This is its final sorted position.
- In the simple version of quicksort, performance is only O(N²) for already sorted (or inversely sorted) data.
- In a more advanced version of quicksort, the pivot can be the median of the first, last, and center items in the subarray. This is called median-of-three partitioning.
- Median-of-three partitioning effectively eliminates the problem of O(N²) performance for already sorted data.
- In median-of-three partitioning, the left, center, and right items are sorted at the same time the median is determined.
- This sort eliminates the need for the end-of-array tests in the inner while loops in the partitioning algorithm.
- Quicksort operates in O(N*log2N) time (except when the simpler version is applied to already sorted data).
- Subarrays smaller than a certain size (the cutoff) can be sorted by a method other than quicksort.
- The insertion sort can also be applied to the entire array, after it has been sorted down to a cutoff point by quicksort.

Binary Trees

Overview

In this chapter we switch from algorithms, the focus of the last chapter on sorting, to data structures. Binary trees are one of the fundamental data storage structures used in programming. They provide advantages that the data structures we've seen so far cannot. In this chapter we'll learn why you would want to use trees, how they work, and how to go about creating them.

Why Use Binary Trees?

Why might you want to use a tree? Usually, because it combines the advantages of two other structures: an ordered array and a linked list. You can search a tree quickly, as you can an ordered array, and you can also insert and delete items quickly, as you can with a linked list. Let's explore these topics a bit before delving into the details of trees.

Slow Insertion in an Ordered Array

Imagine an array in which all the elements are arranged in order; that is, an ordered array, such as we saw in "Simple Sorting." As we learned, it's quick to search such an array for a particular value, using a binary search. You check in the center of the array; if the object you're looking for is greater than what you find there, you narrow your search to the top half of the array; if it's less, you narrow your search to the bottom half. Applying this process repeatedly finds the object in O(logN) time. It's also quick to iterate through an ordered array, visiting each object in sorted order.

On the other hand, if you want to insert a new object into an ordered array, you first need to find where the object will go, and then move all the objects with greater keys up one space in the array to make room for it. These multiple moves are time consuming, requiring, on the average, moving half the items (N/2 moves). Deletion involves the same multimove operation, and is thus equally slow. If you're going to be doing a lot of insertions and deletions, an ordered array is a bad choice.

Slow Searching in a Linked List

On the other hand, as we saw in "Advanced Sorting," insertions and deletions are quick to perform on a linked list. They are accomplished simply by changing a few references. These operations require O(1) time (the fastest Big O time).

Unfortunately, however, finding a specified element in a linked list is not so easy. You must start at the beginning of the list and visit each element until you find the one you're looking for. Thus you will need to visit an average of N/2 objects, comparing each one's key with the desired value. This is slow, requiring O(N) time. (Notice that times considered fast for a sort are slow for data structure operations.)

You might think you could speed things up by using an ordered linked list, in which the elements were arranged in order, but this doesn't help. You still must start at the beginning and visit the elements in order, because there's no way to access a given element without following the chain of references to it. (Of course, in an ordered list it's much quicker to visit the nodes in order than it is in a non-ordered list, but that doesn't help to find an arbitrary object.)
Trees to the Rescue

It would be nice if there were a data structure with the quick insertion and deletion of a linked list, and also the quick searching of an ordered array. Trees provide both these characteristics, and are also one of the most interesting data structures.

What Is a Tree?

We'll be mostly interested in a particular kind of tree called a binary tree, but let's start by discussing trees in general before moving on to the specifics of binary trees.

A tree consists of nodes connected by edges. The figure shows a tree. In such a picture of a tree (or in our Workshop applet) the nodes are represented as circles, and the edges as lines connecting the circles.

Figure: A tree

Trees have been studied extensively as abstract mathematical entities, so there's a large amount of theoretical knowledge about them. A tree is actually an instance of a more general category called a graph, but we don't need to worry about that here. We'll discuss graphs in "Graphs" and "Weighted Graphs."

In computer programs, nodes often represent such entities as people, car parts, airline reservations, and so on; in other words, the typical items we store in any kind of data structure. In an OOP language such as Java, these real-world entities are represented by objects.

The lines (edges) between the nodes represent the way the nodes are related. Roughly speaking, the lines represent convenience: it's easy (and fast) for a program to get from one node to another if there is a line connecting them. In fact, the only way to get from node to node is to follow a path along the lines. Generally you are restricted to going in one direction along edges: from the root downward. Edges are likely to be represented in a program by references, if the program is written in Java (or by pointers if the program is written in C or C++).

Typically there is one node in the top row of a tree, with lines connecting to more nodes on the second row, even more on the third, and so on. Thus trees are small on the top and large on the bottom. This may seem upside-down compared with real trees, but generally a program starts an operation at the small end of the tree, and it's (arguably) more natural to think about going from top to bottom, as in reading text.

There are different kinds of trees. The tree shown in the figure has more than two children per node. (We'll see what "children" means in a moment.) However, in this chapter we'll be discussing a specialized form of tree called a binary tree. Each node in a binary tree has a maximum of two children. More general trees, in which nodes can have more than two children, are called multiway trees. We'll see an example in "Tables and
external storage,where we discuss trees an analogy one commonly encountered tree is the hierarchical file structure in computer system the root directory of given device (designated with the backslashas in :\on many systemsis the tree' root the directories one level below the root directory are its children there may be many levels of subdirectories files represent leavesthey have no children of their own clearly hierarchical file structure is not binary treebecause directory may have many children complete pathnamesuch as :\sales\east\november\smith datcorresponds to the path from the root to the smith dat leaf terms used for the file structuresuch as root and pathwere borrowed
A hierarchical file structure differs in a significant way from the trees we'll be discussing here. In the file structure, subdirectories contain no data, only references to other subdirectories or to files; only files contain data. In a tree, every node contains data (a personnel record, car-part specifications, or whatever). In addition to the data, all nodes except leaves contain references to other nodes.

How Do Binary Trees Work?

Let's see how to carry out the common binary-tree operations of finding a node with a given key, inserting a new node, traversing the tree, and deleting a node. For each of these operations we'll first show how to use the Tree Workshop applet to carry it out; then we'll look at the corresponding Java code.

The Tree Workshop Applet

Start up the Binary Tree Workshop applet. You'll see a screen something like that shown in the figure. However, because the tree in the Workshop applet is randomly generated, it won't look exactly the same as the tree in the figure.

Figure: The Binary Tree Workshop applet

Using the Applet

The key values shown in the nodes range from 0 to 99. Of course, in a real tree, there would probably be a larger range of key values. For example, if employees' Social Security numbers were used for key values, they would range up to 999,999,999.

Another difference between the Workshop applet and a real tree is that the Workshop applet is limited to a depth of five; that is, there can be no more than five levels from the root to the bottom. This restriction ensures that all the nodes in the tree will be visible on the screen. In a real tree, the number of levels is unlimited (until you run out of memory).

Using the Workshop applet, you can create a new tree whenever you want. To do this, click the Fill button. A prompt will ask you to enter the number of nodes in the tree. This can vary from 1 to 31, but 31 will give you a representative tree. After typing in the number, press Fill twice more to generate the new tree. You can experiment by creating trees with different numbers of nodes.

Unbalanced Trees

Notice that some of the trees you generate are unbalanced; that is, they have most of their nodes on one side of the root or the other, as shown in the figure. Individual subtrees may also be unbalanced.
Figure: An unbalanced tree (with an unbalanced subtree)

Trees become unbalanced because of the order in which the data items are inserted. If these key values are inserted randomly, the tree will be more or less balanced. However, if an ascending sequence or a descending sequence of keys is generated, all the values will be right children (if ascending) or left children (if descending), and the tree will be unbalanced. The key values in the Workshop applet are generated randomly, but of course some short ascending or descending sequences will be created anyway, which will lead to local imbalances. When you learn how to insert items into the tree in the Workshop applet, you can try building up a tree by inserting such an ordered sequence of items and see what happens.

If you ask for a large number of nodes when you use Fill to create a tree, you may not get as many nodes as you requested. Depending on how unbalanced the tree becomes, some branches may not be able to hold a full number of nodes. This is because the depth of the applet's tree is limited to five; the problem would not arise in a real tree.

If a tree is created by data items whose key values arrive in random order, the problem of unbalanced trees may not be too serious for larger trees, because the chances of a long run of numbers in sequence is small. But key values can arrive in strict sequence; for example, when a data-entry person arranges a stack of personnel files into order of ascending employee number before entering the data. When this happens, tree efficiency can be seriously degraded. We'll discuss unbalanced trees and what to do about them in the chapter "Red-Black Trees."

Representing the Tree in Java Code

Let's see how we might implement a binary tree in Java. As with other data structures, there are several approaches to representing a tree in the computer's memory. The most common is to store the nodes at unrelated locations in memory and connect them using references in each node that point to its children.

It's also possible to represent a tree in memory as an array, with nodes in specific positions stored in corresponding positions in the array. We'll return to this possibility at the end of this chapter. For our sample Java code we'll use the approach of connecting the nodes using references.

As we discuss individual operations, we'll show code fragments pertaining to that operation. The complete program from which these fragments are extracted can be seen toward the end of this chapter in the full listing.

The Node Class
First, we need a class of node objects. These objects contain the data representing the objects being stored (employees in an employee database, for example) and also references to each of the node's two children. Here's how that looks:

   class Node
      {
      int iData;              // data used as key value
      float fData;            // other data
      Node leftChild;         // this node's left child
      Node rightChild;        // this node's right child

      public void displayNode()
         {
         // (see the full listing for the method body)
         }
      }

Some programmers also include a reference to the node's parent. This simplifies some operations but complicates others, so we don't include it. We do include a method called displayNode() to display the node's data, but its code isn't relevant here.

There are other approaches to designing class Node. Instead of placing the data items directly into the node, you could use a reference to an object representing the data item:

   class Node
      {
      Person p1;              // reference to Person object
      Node leftChild;         // this node's left child
      Node rightChild;        // this node's right child
      }

   class Person
      {
      int iData;
      float fData;
      }

This makes it conceptually clearer that the node and the data item it holds aren't the same thing, but it results in somewhat more complicated code, so we'll stick to the first approach.

The Tree Class

We'll also need a class from which to instantiate the tree itself: the object that holds all the nodes. We'll call this class Tree. It has only one field: a Node variable that holds the root. It doesn't need fields for the other nodes because they are all accessed from the root.

The Tree class has a number of methods: some for finding, inserting, and deleting nodes, several for different kinds of traversals, and one to display the tree. Here's a skeleton version:
   class Tree
      {
      private Node root;            // the only data field in Tree

      public void find(int key)
         {
         }
      public void insert(int id, double dd)
         {
         }
      public void delete(int id)
         {
         }
      // various other methods
      }  // end class Tree

The TreeApp Class

Finally, we need a way to perform operations on the tree. Here's how you might write a class with a main() routine to create a tree, insert three nodes into it, and then search for one of them. We'll call this class TreeApp:

   class TreeApp
      {
      public static void main(String[] args)
         {
         Tree theTree = new Tree();        // make a tree

         theTree.insert(50, 1.5);          // insert 3 nodes
         theTree.insert(25, 1.7);
         theTree.insert(75, 1.9);

         Node found = theTree.find(25);    // find node with key 25
         if(found != null)
            System.out.println("Found the node with key 25");
         else
            System.out.println("Could not find node with key 25");
         }  // end main()
      }  // end class TreeApp

In the complete listing, the main() routine provides a primitive user interface so you can decide from the keyboard whether you want to insert, find, delete, or perform other operations.

Next we'll look at individual tree operations: finding a node, inserting a node, traversing the tree, and deleting a node.

Finding a Node

Finding a node with a specific key is the simplest of the major tree operations, so let's start with that.
Remember that the nodes in a binary search tree correspond to objects containing information. They could be Person objects, with an employee number as the key and also perhaps name, address, telephone number, salary, and other fields. Or they could represent car parts, with a part number as the key value and fields for quantity on hand, price, and so on. However, the only characteristics of each node that we can see in the Workshop applet are a number and a color. A node is created with these two characteristics, and keeps them throughout its life.

Using the Workshop Applet to Find a Node

Look at the Workshop applet and pick a node, preferably one near the bottom of the tree (as far from the root as possible). The number shown in this node is its key value. We're going to demonstrate how the Workshop applet finds the node, given the key value. For purposes of this discussion we'll assume you've decided to find the node with the key value shown in the figure. Of course, when you run the Workshop applet you'll get a different tree, and will need to pick a different key value.

Figure: Finding a node

Click the Find button. The prompt will ask for the value of the node to find. Enter the number of the node you chose. Click Find twice more.

As the Workshop applet looks for the specified node, the prompt will display either "Going to left child" or "Going to right child," and the red arrow will move down one level to the right or left.

The arrow starts at the root. The program compares the key value sought with the value at the root. If the key is less, the program knows the desired node must be on the left side of the tree: either the root's left child or one of this child's descendants. If the key is greater than a node's value, the desired node must be in that node's right subtree, so the arrow moves to the right instead. The arrow continues downward in this way, going left or right at each node, until a comparison shows that the key equals the node's key value; then we've found the node we want.

The Workshop applet doesn't do anything with the node once it finds it, except to display a message saying it has been found. A serious program would perform some operation on the found node, such as displaying its contents or changing one of its fields.

Java Code for Finding a Node

Here's the code for the find() routine, which is a method of the Tree class:
   public Node find(int key)       // find node with given key
      {                            // (assumes non-empty tree)
      Node current = root;         // start at root
      while(current.iData != key)  // while no match,
         {
         if(key < current.iData)   // go left?
            current = current.leftChild;
         else                      // or go right?
            current = current.rightChild;
         if(current == null)       // if no child,
            return null;           //    didn't find it
         }
      return current;              // found it
      }

This routine uses a variable current to hold the node it is currently examining. The argument key is the value to be found. The routine starts at the root. (It has to; this is the only node it can access directly.) That is, it sets current to the root.

Then, in the while loop, it compares the value to be found, key, with the value of the iData field (the key field) in the current node. If key is less than this field, then current is set to the node's left child. If key is greater than (or equal to) the node's iData field, then current is set to the node's right child.

Can't Find It

If current becomes equal to null, then we couldn't find the next child node in the sequence; we've reached the end of the line without finding the node we were looking for, so it can't exist. We return null to indicate this fact.

Found It

If the condition of the while loop is not satisfied, so that we exit from the bottom of the loop, then the iData field of current is equal to key; that is, we've found the node we want. We return the node, so that the routine that called find() can access any of the node's data.

Efficiency

As you can see, how long it takes to find a node depends on how many levels down it is situated. In the Workshop applet there can be up to 31 nodes, but no more than 5 levels, so you can find any node using a maximum of only 5 comparisons. This is O(log N) time, or more specifically O(log2 N) time, the logarithm to the base 2. We'll discuss this further toward the end of this chapter.

Inserting a Node

To insert a node we must first find the place to insert it. This is much the same process as trying to find a node that turns out not to exist, as described in the section on find(). We follow the path from the root to the appropriate node, which will be the parent of the new node. Once this parent is found, the new node is connected as its left or right child, depending on whether the new node's key is less than or greater than that of the parent.
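The find-then-insert mechanics just described can be sketched as a small runnable program. This is a hedged sketch of ours, not the book's code: the class names (IntNode, SketchTree), the single-key insert() signature, and the sample keys are illustrative; the chapter's own Node and Tree classes appear in the full listing.

```java
// Minimal sketch of BST find and insert, assuming integer keys only.
class IntNode {
    int iData;                 // data used as key value
    IntNode leftChild;
    IntNode rightChild;
    IntNode(int key) { iData = key; }
}

class SketchTree {
    private IntNode root;      // everything is reached from the root

    // Walk down from the root, comparing keys, until a match or null.
    public IntNode find(int key) {
        IntNode current = root;
        while (current != null && current.iData != key)
            current = (key < current.iData) ? current.leftChild
                                            : current.rightChild;
        return current;        // null means "not in the tree"
    }

    // Same walk, but remember the parent and attach at the null spot.
    public void insert(int key) {
        IntNode newNode = new IntNode(key);
        if (root == null) { root = newNode; return; }
        IntNode current = root;
        while (true) {
            IntNode parent = current;
            if (key < current.iData) {
                current = current.leftChild;
                if (current == null) { parent.leftChild = newNode; return; }
            } else {
                current = current.rightChild;
                if (current == null) { parent.rightChild = newNode; return; }
            }
        }
    }
}

public class FindInsertDemo {
    public static void main(String[] args) {
        SketchTree tree = new SketchTree();
        int[] keys = {50, 25, 75, 12, 37};   // arbitrary sample keys
        for (int k : keys) tree.insert(k);
        System.out.println(tree.find(37) != null);  // true: key present
        System.out.println(tree.find(40) != null);  // false: never inserted
    }
}
```

Note that each insertion follows exactly one root-to-leaf path, which is why insertion cost tracks the tree's depth.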
Using the Workshop Applet to Insert a Node

To insert a new node with the Workshop applet, press the Ins button. You'll be asked to type the key value of the node to be inserted. Let's assume we're going to insert a new node with a particular value; type it into the text field.

The first step for the program in inserting a node is to find where it should be inserted. The figure shows how this looks: the search goes left or right at each node, exactly as in find(), until it wants to go to a child that doesn't exist; that child field is null. When it sees this null, the insertion routine has found the place to attach the new node. The Workshop applet does this by creating a new node with the given value (and a randomly generated color) and connecting it as the appropriate child of the node where the search ran off the tree, as shown in the figure.

Figure: Inserting a node

Java Code for Inserting a Node

The insert() function starts by creating the new node, using the data supplied as arguments.

Next, insert() must determine where to insert the new node. This is done using roughly the same code as finding a node, described in the section on find(). The difference is that when you're simply trying to find a node and you encounter a null (nonexistent) node, you know the node you're looking for doesn't exist, so you return immediately. When you're trying to insert a node, you insert it (creating it first, if necessary) before returning.

The value to be searched for is the data item passed in the argument id. The while loop uses true as its condition because it doesn't care if it encounters a node with the same value as id; it treats another node with the same key value as if it were simply greater than the key value. (We'll return to the subject of duplicate nodes later in this chapter.)

A place to insert a new node will always be found (unless you run out of memory); when it is, and the new node is attached, the while loop exits with a return statement. Here's the code for the insert() function:

   public void insert(int id, double dd)
      {
      Node newNode = new Node();     // make new node
      newNode.iData = id;            // insert data
      newNode.dData = dd;
      if(root==null)                 // no node in root
         root = newNode;
      else                           // root occupied
         {
         Node current = root;        // start at root
         Node parent;
         while(true)                 // (exits internally)
            {
            parent = current;
            if(id < current.iData)   // go left?
               {
               current = current.leftChild;
               if(current == null)   // if end of the line,
                  {                  // insert on left
                  parent.leftChild = newNode;
                  return;
                  }
               }  // end if go left
            else                     // or go right?
               {
               current = current.rightChild;
               if(current == null)   // if end of the line
                  {                  // insert on right
                  parent.rightChild = newNode;
                  return;
                  }
               }  // end else go right
            }  // end while
         }  // end else not root
      }  // end insert()

We use a new variable, parent (the parent of current), to remember the last non-null node we encountered. This is necessary because current is set to null in the process of discovering that its previous value did not have an appropriate child. If we didn't save parent, we'd lose track of where we were.

To insert the new node, change the appropriate child pointer in parent (the last non-null node you encountered) to point to the new node. If you were looking unsuccessfully for parent's left child, you attach the new node as parent's left child; if you were looking for its right child, you attach the new node as its right child. In the figure, the new node is attached as the left child of the node where the search ended.

Traversing the Tree

Traversing a tree means visiting each node in a specified order. This process is not as commonly used as finding, inserting, and deleting nodes. One reason for this is that
traversal is not particularly fast. But the algorithm is interesting (and it's also simpler than deletion, the discussion of which we want to defer as long as possible).

There are three simple ways to traverse a tree. They're called preorder, inorder, and postorder. The order most commonly used for binary search trees is inorder, so let's look at that first, and then return briefly to the other two.

Inorder Traversal

An inorder traversal of a binary search tree will cause all the nodes to be visited in ascending order, based on their key values. If you want to create a sorted list of the data in a binary tree, this is one way to do it.

The simplest way to carry out a traversal is the use of recursion (discussed in the chapter "Recursion"). A recursive method to traverse the entire tree is called with a node as an argument. Initially, this node is the root. The method needs to do only three things:

1. Call itself to traverse the node's left subtree.
2. Visit the node.
3. Call itself to traverse the node's right subtree.

Remember that visiting a node means doing something to it: displaying it, writing it to a file, or whatever.

Traversals work with any binary tree, not just with binary search trees. The traversal mechanism doesn't pay any attention to the key values of the nodes; it only concerns itself with whether a node has children.

Java Code for Traversing

The actual code for inorder traversal is so simple we show it before seeing how a traversal looks in the Workshop applet. The routine, inOrder(), performs the three steps already described. The visit to the node consists of displaying the contents of the node. Like any recursive function, there must be a base case: the condition that causes the routine to return immediately, without calling itself. In inOrder() this happens when the node passed as an argument is null. Here's the code for the inOrder() method:

   private void inOrder(Node localRoot)
      {
      if(localRoot != null)
         {
         inOrder(localRoot.leftChild);
         localRoot.displayNode();
         inOrder(localRoot.rightChild);
         }
      }

This method is initially called with the root as an argument:

   inOrder(root);

After that, it's on its own, calling itself recursively until there are no more nodes to visit.
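The claim that an inorder traversal visits keys in ascending order can be checked with a short sketch. This is our illustrative code, not the chapter's: the names (TNode, InorderDemo) are invented, the insert here is a recursive variant of the chapter's iterative insert(), and the visit step appends to a string instead of calling displayNode().

```java
// Sketch: inorder traversal of a BST yields the keys in sorted order.
class TNode {
    int iData;
    TNode leftChild, rightChild;
    TNode(int k) { iData = k; }
}

public class InorderDemo {
    // Recursive insert variant (the chapter's insert() is iterative).
    static TNode insert(TNode root, int key) {
        if (root == null) return new TNode(key);
        if (key < root.iData) root.leftChild = insert(root.leftChild, key);
        else root.rightChild = insert(root.rightChild, key);
        return root;
    }

    // The three steps: traverse left subtree, visit node, traverse right.
    static void inOrder(TNode localRoot, StringBuilder out) {
        if (localRoot != null) {                       // base case: null returns
            inOrder(localRoot.leftChild, out);
            out.append(localRoot.iData).append(' ');   // "visit" = record key
            inOrder(localRoot.rightChild, out);
        }
    }

    static String sortedKeys(int[] keys) {
        TNode root = null;
        for (int k : keys) root = insert(root, k);
        StringBuilder sb = new StringBuilder();
        inOrder(root, sb);
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(sortedKeys(new int[]{42, 8, 93, 27, 64, 3}));
        // prints: 3 8 27 42 64 93
    }
}
```

Whatever order the keys were inserted in, the traversal emits them sorted, because every left subtree holds smaller keys and every right subtree larger ones.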
Let's look at a simple example to get an idea of how this recursive traversal routine works. Imagine traversing a tree with only three nodes: a root (A), with a left child (B) and a right child (C), as shown in the figure.

Figure: The inOrder() method applied to a 3-node tree

We start by calling inOrder() with the root A as an argument. This incarnation of inOrder() we'll call inOrder(A). inOrder(A) first calls inOrder() with its left child, B, as an argument. This second incarnation of inOrder() we'll call inOrder(B).

inOrder(B) now calls itself with its left child as an argument. However, it has no left child, so this argument is null. This creates an invocation of inOrder() we could call inOrder(null). There are now three instances of inOrder() in existence: inOrder(A), inOrder(B), and inOrder(null). However, inOrder(null) returns immediately when it finds its argument is null. (We all have days like that.)

Now inOrder(B) goes on to visit B; we'll assume this means to display it. Then inOrder(B) calls inOrder() again, with its right child as an argument. Again this argument is null, so the second inOrder(null) returns immediately. Now inOrder(B) has carried out steps 1, 2, and 3, so it returns (and thereby ceases to exist).

Now we're back to inOrder(A), just returning from traversing A's left child. We visit A, and then call inOrder() again with C as an argument, creating inOrder(C). Like inOrder(B), inOrder(C) has no children, so step 1 returns with no action, step 2 visits C, and step 3 returns with no action. inOrder(C) now returns to inOrder(A). However, inOrder(A) is now done, so it returns and the entire traversal is complete.

The order in which the nodes were visited is B, A, C; they have been visited inorder. In a binary search tree this would be the order of ascending keys. More complex trees are handled similarly. The inOrder() function calls itself for each node, until it has worked its way through the entire tree.

Traversing with the Workshop Applet

To see what a traversal looks like with the Workshop applet, repeatedly press the Trav
button.

Here's what happens when you use the Tree Workshop applet to traverse inorder the tree shown in the figure. This is slightly more complex than the 3-node tree seen previously. The red arrow starts at the root. Table 8.1 shows the sequence of messages; the applet also shows which node the red arrow is on at each step, and the growing list of nodes visited, which is displayed at the bottom of the Workshop applet screen.

Figure: Traversing a tree inorder

Table 8.1: Workshop Applet Traversal

   Step   Message
   1      Will check left child (arrow on root)
   2      Will check left child
   3      Will check left child
   4      Will visit this node
   5      Will check right child
   6      Will go to root of previous subtree
   7      Will visit this node
   8      Will check for right child
   9      Will check left child
   10     Will visit this node
   11     Will check right child
   12     Will go to root of previous subtree
   13     Will visit this node
   14     Will check right child
   15     Will check left child
   16     Will visit this node
   17     Will check for right child
   18     Will go to root of previous subtree
   19     Done traversal

It may not be obvious, but for each node, the routine traverses the node's left subtree, visits the node, and traverses the right subtree; each "visit" step in the table is bracketed by the left-subtree and right-subtree steps for that node. All this isn't as complicated as it looks. The best way to get a feel for what's happening is to traverse a variety of different trees with the Workshop applet.

Preorder and Postorder Traversals

You can traverse the tree in two ways besides inorder; they're called preorder and postorder. It's fairly clear why you might want to traverse a tree inorder, but the motivation for preorder and postorder traversals is more obscure. However, these traversals are indeed useful if you're writing programs that parse or analyze algebraic expressions. Let's see why that should be true.

A binary tree (not a binary search tree) can be used to represent an algebraic expression that involves the binary arithmetic operators +, -, /, and *. The root node holds an operator, and each of its subtrees represents either a variable name (like A, B, or C) or another expression.

For example, the binary tree shown in the figure represents the algebraic expression A*(B+C). This is called infix notation; it's the notation normally used in algebra. Traversing the tree inorder will generate the correct inorder sequence A*B+C, but you'll need to insert the parentheses yourself.
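The expression tree for A*(B+C) can be built by hand and traversed in all three orders; the result of each order is one of the three notations. This sketch uses single-character labels rather than numeric keys; the class and method names are ours, and the method bodies simply follow the three-step recipes (preorder and postorder are explained next).

```java
// Expression tree for A*(B+C), traversed in all three orders.
class ExprNode {
    char label;
    ExprNode left, right;
    ExprNode(char c) { label = c; }
    ExprNode(char c, ExprNode l, ExprNode r) { label = c; left = l; right = r; }
}

public class ExprDemo {
    // inorder: left subtree, visit, right subtree
    static String inorder(ExprNode n) {
        return n == null ? "" : inorder(n.left) + n.label + inorder(n.right);
    }
    // preorder: visit, left subtree, right subtree
    static String preorder(ExprNode n) {
        return n == null ? "" : n.label + preorder(n.left) + preorder(n.right);
    }
    // postorder: left subtree, right subtree, visit
    static String postorder(ExprNode n) {
        return n == null ? "" : postorder(n.left) + postorder(n.right) + n.label;
    }

    public static void main(String[] args) {
        // root '*', left leaf 'A', right subtree '+' with children 'B' and 'C'
        ExprNode tree = new ExprNode('*', new ExprNode('A'),
                new ExprNode('+', new ExprNode('B'), new ExprNode('C')));
        System.out.println(inorder(tree));    // A*B+C  (infix, no parentheses)
        System.out.println(preorder(tree));   // *A+BC  (prefix)
        System.out.println(postorder(tree));  // ABC+*  (postfix)
    }
}
```

Only the position of the "visit" step differs among the three methods, yet it determines which notation comes out.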
What's all this got to do with preorder and postorder traversals? Let's see what's involved. For these other traversals the same three steps are used as for inorder, but in a different sequence. Here's the sequence for a preorder() method:

1. Visit the node.
2. Call itself to traverse the node's left subtree.
3. Call itself to traverse the node's right subtree.

Traversing the tree shown in the figure using preorder would generate the expression

   *A+BC

This is called prefix notation. One of the nice things about it is that parentheses are never required; the expression is unambiguous without them. It means "apply the operator * to the next two things in the expression." These two things are A and +BC. The expression +BC means "apply + to the next two things in the expression," which are B and C; so this last expression is B+C in inorder notation. Inserting that into the original expression *A+BC (preorder) gives us A*(B+C) in inorder.

The postorder traversal method contains the three steps arranged in yet another way:

1. Call itself to traverse the node's left subtree.
2. Call itself to traverse the node's right subtree.
3. Visit the node.

For the tree in the figure, visiting the nodes with a postorder traversal would generate the expression

   ABC+*

This is called postfix notation. As described in the chapter "Stacks and Queues," it means "apply the last operator in the expression, *, to the first and second things." The first thing is A, and the second thing is BC+. BC+ means "apply the last operator in the expression, +, to the first and second things. The first thing is B, and the second thing is C, so this gives us (B+C)" in infix. Inserting this in the original expression ABC+* (postfix) gives us A*(B+C) in infix.

The code in the complete listing contains methods for preorder and postorder traversals, as well as for inorder.

Finding Maximum and Minimum Values

Incidentally, we should note how easy it is to find the maximum and minimum values in a binary search tree. In fact, it's so easy we don't include it as an option in the Workshop applet, nor show code for it in the complete listing. Still, it's important to understand how it works.

For the minimum, go to the left child of the root; then go to the left child of that child, and so on, until you come to a node that has no left child. This node is the minimum, as
shown in the figure.

Figure: Minimum value of the tree

Here's some code that returns the node with the minimum key value:

   public Node minimum()              // returns node with minimum key value
      {
      Node current, last;
      current = root;                 // start at root
      while(current != null)          // until the bottom,
         {
         last = current;              // remember node
         current = current.leftChild; // go to left child
         }
      return last;
      }

We'll need to know about finding the minimum value when we set about deleting a node.

For the maximum value in the tree, follow the same procedure, but go from right child to right child until you find a node with no right child. This node is the maximum. The code is the same except that the last statement in the loop is

   current = current.rightChild;      // go to right child

Deleting a Node

Deleting a node is the most complicated common operation required for binary search trees. However, deletion is important in many tree applications, and studying the details builds character.

You start by finding the node you want to delete, using the same approach we saw in find() and insert(). Once you've found the node, there are three cases to consider:

1. The node to be deleted is a leaf (has no children).
2. The node to be deleted has one child.
3. The node to be deleted has two children.
The first two cases are not too difficult, but the third is quite complicated.

Case 1: The Node to Be Deleted Has No Children

To delete a leaf node, you simply change the appropriate child field in the node's parent to point to null instead of to the node. The node will still exist, but it will no longer be part of the tree. This is shown in the figure.

Figure: Deleting a node with no children

Because of Java's garbage collection feature, we don't need to worry about explicitly deleting the node itself. When Java realizes that nothing in the program refers to the node, it will be removed from memory. (In C and C++ you would need to execute free() or delete() to remove the node from memory.)

Using the Workshop Applet to Delete a Node with No Children

Assume you're going to delete a leaf node like the one shown in the figure. Press the Del button and enter the node's key value when prompted. Again, the node must be found before it can be deleted. Repeatedly pressing Del will move the red arrow down the tree to the node; when it's found, it's deleted without incident.

Java Code to Delete a Node with No Children

The first part of the delete() routine is similar to find() and insert(): it involves finding the node to be deleted. As with insert(), we need to remember the parent of the node to be deleted so we can modify its child fields. If we find the node, we drop out of the while loop with current pointing to the node to be deleted and parent pointing to its parent. If we can't find it, we return from delete() with a value of false.

   public boolean delete(int key)   // delete node with given key
      {                             // (assumes non-empty tree)
      Node current = root;
      Node parent = root;
      boolean isLeftChild = true;

      while(current.iData != key)   // search for node
         {
         parent = current;
         if(key < current.iData)    // go left?
            {
            isLeftChild = true;
            current = current.leftChild;
            }
         else                       // or go right?
            {
            isLeftChild = false;
            current = current.rightChild;
            }
         if(current == null)        // end of the line,
            return false;           // didn't find it
         }  // end while
      // found node to delete
      // continues...

Once we've found the node, we check first to see whether it has no children. When this is true, we check the special case of the root: if that's the node to be deleted, we simply set it to null, and this empties the tree. Otherwise, we set the parent's leftChild or rightChild field to null to disconnect the parent from the node.

      // delete() continued...
      // if no children, simply delete it
      if(current.leftChild==null && current.rightChild==null)
         {
         if(current == root)        // if root,
            root = null;            // tree is empty
         else if(isLeftChild)
            parent.leftChild = null;   // disconnect
         else                          // from parent
            parent.rightChild = null;
         }
      // continues...

Case 2: The Node to Be Deleted Has One Child

This case isn't so bad either. The node has only two connections: to its parent and to its only child. You want to "snip" the node out of this sequence by connecting its parent directly to its child. This involves changing the appropriate reference in the parent (leftChild or rightChild) to point to the deleted node's child. This is shown in the figure.

Figure: Deleting a node with one child
Using the Workshop Applet to Delete a Node with One Child

Let's assume we're using the Workshop applet on the tree in the figure, and deleting a node that has a left child but no right child. Press Del and enter the node's key value when prompted. Keep pressing Del until the arrow rests on the node. It doesn't matter whether the node's child has children of its own. Pressing Del once more causes the node to be deleted. Its place is taken by its left child. In fact, the entire subtree of which this child is the root is moved up and plugged in as the new child of the deleted node's parent.

Use the Workshop applet to generate new trees with one-child nodes, and see what happens when you delete them. Look for the subtree whose root is the deleted node's child. No matter how complicated this subtree is, it's simply moved up and plugged in as the new child of the deleted node's parent.

Java Code to Delete a Node with One Child

The following code shows how to deal with the one-child situation. There are four variations: the child of the node to be deleted may be either a left or right child, and for each of these cases the node to be deleted may be either the left or right child of its parent. There is also a specialized situation: the node to be deleted may be the root, in which case it has no parent and is simply replaced by the appropriate subtree. Here's the code (which continues from the end of the no-child code fragment shown earlier):

      // delete() continued...
      // if no right child, replace with left subtree
      else if(current.rightChild==null)
         if(current == root)
            root = current.leftChild;
         else if(isLeftChild)                // left child of parent
            parent.leftChild = current.leftChild;
         else                                // right child of parent
            parent.rightChild = current.leftChild;

      // if no left child, replace with right subtree
      else if(current.leftChild==null)
         if(current == root)
            root = current.rightChild;
         else if(isLeftChild)                // left child of parent
            parent.leftChild = current.rightChild;
         else                                // right child of parent
            parent.rightChild = current.rightChild;
      // continued...

Notice that working with references makes it easy to move an entire subtree. You do this by simply disconnecting the old reference to the subtree and creating a new reference to it somewhere else. Although there may be lots of nodes in the subtree, you don't need to worry about moving them individually. In fact, they only "move" in the sense of being conceptually in different positions relative to the other nodes. As far as the program is concerned, only the reference to the root of the subtree has changed.
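The point that moving an entire subtree is a single reference assignment can be shown in a few lines. This sketch builds a tiny tree by hand and performs the one-child "snip" directly; the class name (MiniNode) and the node values (80, 71, 63, 67) are invented for illustration and do not come from the chapter.

```java
// Sketch: deleting a one-child node moves its whole subtree up at once.
class MiniNode {
    int iData;
    MiniNode leftChild, rightChild;
    MiniNode(int k) { iData = k; }
}

public class OneChildDemo {
    public static void main(String[] args) {
        // Build by hand: 80 is the root; its left child 71 has one child,
        // 63, and 63 in turn has a right child, 67.
        MiniNode root = new MiniNode(80);
        MiniNode n71  = new MiniNode(71);
        MiniNode n63  = new MiniNode(63);
        MiniNode n67  = new MiniNode(67);
        root.leftChild = n71;
        n71.leftChild  = n63;
        n63.rightChild = n67;

        // "Snip" 71 out: point its parent's leftChild reference at 71's
        // only child. The entire subtree rooted at 63 moves up in one
        // assignment; 67 comes along without being touched.
        root.leftChild = n71.leftChild;

        System.out.println(root.leftChild.iData);             // 63
        System.out.println(root.leftChild.rightChild.iData);  // 67
    }
}
```

Once nothing refers to the snipped node, Java's garbage collector reclaims it, exactly as the text describes.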
Case 3: The Node to Be Deleted Has Two Children

Now the fun begins. If the deleted node has two children, you can't just replace it with one of these children, at least if the child has its own children. Why not? Examine the figure, and imagine deleting a node and replacing it with its right subtree. Which left child would the replacement node then have: the deleted node's left child or its own left child? Either way, one of them would be in the wrong place, but we can't just throw it away. We need another approach.

The good news is that there's a trick. The bad news is that, even with the trick, there are a lot of special cases to consider. Remember that in a binary search tree the nodes are arranged in order of ascending keys. For each node, the node with the next-highest key is called its inorder successor, or simply its successor.

Here's the trick: to delete a node with two children, replace the node with its inorder successor. The figure shows a deleted node being replaced by its successor. Notice that the nodes are still in order. (There's more to it if the successor itself has children; we'll look at that possibility in a moment.)

Figure: Can't replace with subtree
Figure: Node replaced by its successor

Finding the Successor

How do you find the successor of a node? As a human being, you can do this quickly (for small trees, anyway): just take a quick glance at the tree and find the next-largest number following the key of the node to be deleted; there's simply no other key that is greater than the deleted node's key but smaller than all the other larger keys. However, the computer can't do things "at a glance"; it needs an algorithm. Here it is: first, the program goes to the original node's right child, which must have a key larger
than the node's own key. Then it goes to this right child's left child (if it has one), and to this left child's left child, and so on, following down the path of left children. The last left child in this path is the successor of the original node, as shown in the figure.

Figure: Finding the successor

Why does this work? What we're really looking for is the smallest of the set of nodes that are larger than the original node. When you go to the original node's right child, all the nodes in the resulting subtree are greater than the original node, because this is how a binary search tree is defined. Now we want the smallest value in this subtree. As we learned, you can find the minimum value in a subtree by following the path down all the left children. Thus, this algorithm finds the minimum value that is greater than the original node; this is what we mean by its successor.

If the right child of the original node has no left children, then this right child is itself the successor, as shown in the figure.

Figure: The right child is the successor

Using the Workshop Applet to Delete a Node with Two Children

Generate a tree with the Workshop applet, and pick a node with two children. Now mentally figure out which node is its successor, by going to its right child and then following down the line of this right child's left children (if it has any). You may want to make sure the successor has no children of its own. If it does, the situation gets more complicated, because entire subtrees are moved around, rather than a single node.

Once you've chosen a node to delete, click the Del button. You'll be asked for the key value of the node to delete. When you've specified it, repeated presses of the Del button will show the red arrow searching down the tree to the designated node. When the node is deleted, it's replaced by its successor.
Java Code to Find the Successor

Here's some code for a method getSuccessor(), which returns the successor of the node specified as its delNode argument. (This routine assumes that delNode does indeed have a right child, but we know this is true because we've already determined that the node to be deleted has two children.)

    // returns node with next-highest value after delNode;
    // goes to right child, then right child's left descendants
    private Node getSuccessor(Node delNode)
        {
        Node successorParent = delNode;
        Node successor = delNode;
        Node current = delNode.rightChild;   // go to right child
        while(current != null)               // until no more
            {                                // left children,
            successorParent = successor;
            successor = current;
            current = current.leftChild;     // go to left child
            }
                                             // if successor not
        if(successor != delNode.rightChild)  // right child,
            {                                // make connections
            successorParent.leftChild = successor.rightChild;
            successor.rightChild = delNode.rightChild;
            }
        return successor;
        }

The routine first goes to delNode's right child, then, in the while loop, follows down the path of all this right child's left children. When the while loop exits, successor contains delNode's successor. Once we've found the successor, we may need to access its parent, so within the while loop we also keep track of the parent of the current node.

The getSuccessor() routine carries out two additional operations in addition to finding the successor. However, to understand these, we need to step back and consider the big picture. As we've seen, the successor node can occupy one of two possible positions relative to current, the node to be deleted. The successor can be current's right child, or it can be one of this right child's left descendants. We'll look at these two situations in turn.

Successor Is Right Child of delNode

If successor is the right child of delNode, things are simplified somewhat, because we can simply move the subtree of which successor is the root and plug it in where the deleted node was. This requires only two steps:

1. Unplug current from the rightChild field of its parent (or the leftChild field, if appropriate), and set this field to point to successor.
2. Unplug current's left child from current, and plug it into the leftChild field of successor.

Here are the code statements that carry out these steps, excerpted from delete():

    parent.rightChild = successor;
    successor.leftChild = current.leftChild;

This situation is summarized in the figure, which shows the connections affected by these two steps.

Figure: Deletion when successor is right child

Here's the code in context (a continuation of the else-if ladder shown earlier):

    else  // two children, so replace with inorder successor
        {
        // get successor of node to delete (current)
        Node successor = getSuccessor(current);

        // connect parent of current to successor instead
        if(current == root)
            root = successor;
        else if(isLeftChild)
            parent.leftChild = successor;
        else
            parent.rightChild = successor;

        // connect successor to current's left child
        successor.leftChild = current.leftChild;
        }  // end else two children
           // (successor cannot have a left child)
    return true;
    }  // end delete()
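The two-child case can also be condensed into a short runnable sketch. Note that this uses the common copy-the-key variant (copy the successor's key into the deleted node, then delete the successor from the right subtree) rather than the book's pointer-splicing approach, and the class and method names here are our own, not the book's:

```java
// Minimal sketch of BST deletion using the inorder successor.
class BstDeleteDemo {
    static class Node {
        int key; Node left, right;
        Node(int k) { key = k; }
    }
    static Node insert(Node root, int key) {
        if (root == null) return new Node(key);
        if (key < root.key) root.left = insert(root.left, key);
        else root.right = insert(root.right, key);   // duplicates go right
        return root;
    }
    static Node delete(Node root, int key) {
        if (root == null) return null;
        if (key < root.key) root.left = delete(root.left, key);
        else if (key > root.key) root.right = delete(root.right, key);
        else {
            if (root.left == null) return root.right;    // zero or one child
            if (root.right == null) return root.left;
            Node succ = root.right;                      // two children:
            while (succ.left != null) succ = succ.left;  // find successor
            root.key = succ.key;                         // copy its key up,
            root.right = delete(root.right, succ.key);   // remove it below
        }
        return root;
    }
    static void inorder(Node n, StringBuilder sb) {
        if (n == null) return;
        inorder(n.left, sb); sb.append(n.key).append(' '); inorder(n.right, sb);
    }
    static String demo() {
        Node root = null;
        for (int k : new int[]{50, 25, 75, 12, 37, 30, 43, 87})
            root = insert(root, k);
        root = delete(root, 25);            // 25 has two children
        StringBuilder sb = new StringBuilder();
        inorder(root, sb);
        return sb.toString().trim();
    }
    public static void main(String[] args) { System.out.println(demo()); }
}
```

Deleting 25 (whose successor is 30) leaves the inorder traversal "12 30 37 43 50 75 87": still sorted, with the deleted key gone.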
Let's examine these two steps in more detail.

Step 1: If the node to be deleted, current, is the root, it has no parent, so we merely set the root to the successor. Otherwise, the node to be deleted can be either a left or a right child (the figure shows it as a right child), so we set the appropriate field in its parent to point to successor. Once delete() returns and current goes out of scope, the node referred to by current will have no references to it, so it will be discarded during Java's next garbage collection.

Step 2: We set the left child of successor to point to current's left child.

What happens if the successor has children of its own? First of all, a successor node is guaranteed not to have a left child. This is true whether the successor is the right child of the node to be deleted or one of this right child's left children. How do we know this? Well, remember that the algorithm we use to determine the successor goes to the right child first, and then to any left children of that right child. It stops when it gets to a node with no left child, so the algorithm itself determines that the successor can't have any left children. If it did, that left child would be the successor instead.

You can check this out on the workshop applet. No matter how many trees you make, you'll never find a situation in which a node's successor has a left child (assuming the original node has two children, which is the situation that leads to all this trouble in the first place).

On the other hand, the successor may very well have a right child. This isn't much of a problem when the successor is the right child of the node to be deleted. When we move the successor, its right subtree simply follows along with it. There's no conflict with the right child of the node being deleted, because the successor is this right child. In the next section we'll see that a successor's right child needs more attention if the successor is not the right child of the node to be deleted.

Successor Is Left Descendant of Right Child of delNode

If successor is a left descendant of the right child of the node to be deleted, four steps are required to perform the deletion:

1. Plug the right child of successor into the leftChild field of the successor's parent.
2. Plug the right child of the node to be deleted into the rightChild field of successor.
3. Unplug current from the rightChild field of its parent, and set this field to point to successor.
4. Unplug current's left child from current, and plug it into the leftChild field of successor.

Steps 1 and 2 are handled in the getSuccessor() routine, while 3 and 4 are carried out in delete(). The figure shows the connections affected by these four steps.
Here's the code for these four steps:

    1. successorParent.leftChild = successor.rightChild;
    2. successor.rightChild = delNode.rightChild;
    3. parent.rightChild = successor;
    4. successor.leftChild = current.leftChild;

(Step 3 could also refer to the left child of current's parent.) The numbers in the figure show the connections affected by the four steps. Step 1 in effect replaces the successor with its right subtree. Step 2 keeps the right child of the deleted node in its proper place (this happens automatically when the successor is the right child of the deleted node).

Steps 1 and 2 are carried out in the if statement that ends the getSuccessor() method shown earlier. Here's that statement again:

                                         // if successor not
    if(successor != delNode.rightChild)  // right child,
        {                                // make connections
        successorParent.leftChild = successor.rightChild;
        successor.rightChild = delNode.rightChild;
        }

These steps are more convenient to perform here than in delete(), because in getSuccessor() it's easy to figure out where the successor's parent is while we're descending the tree to find the successor.

Steps 3 and 4 we've seen already; they're the same as steps 1 and 2 in the case where the successor is the right child of the node to be deleted, and the code is in the if statement at the end of delete().

Is Deletion Necessary?

If you've come this far, you can see that deletion is fairly involved. In fact, it's so complicated that some programmers try to sidestep it altogether. They add a new boolean field to the node class, called something like isDeleted. To delete a node, they simply set this field to true. Then other operations, like find(), check this field to be sure the node isn't marked as deleted before working with it. This way, deleting a node doesn't change the structure of the tree. Of course, it also means that memory can fill up with "deleted" nodes.

This approach is a bit of a cop-out, but it may be appropriate where there won't be many deletions in a tree. (If ex-employees remain in the personnel file forever, for example.)

The Efficiency of Binary Trees

As you've seen, most operations with trees involve descending the tree from level to level to find a particular node. How long does it take to do this? In a full tree, about half the nodes are on the bottom level. (Actually, there's one more node on the bottom row than in the rest of the tree.) Thus about half of all searches or insertions or deletions require finding a node on the lowest level. (An additional quarter of these operations require finding the node on the next-to-lowest level, and so on.)

During a search we need to visit one node on each level. So we can get a good idea how long it takes to carry out these operations by knowing how many levels there are. Assuming a full tree, the following table shows how many levels are necessary to hold a given number of nodes. (The entries follow from the full-tree relationship N = 2^L - 1.)

    Number of Nodes      Number of Levels
    1                    1
    3                    2
    7                    3
    15                   4
    31                   5
    ...                  ...
    1,023                10
    32,767               15
    1,048,575            20
    33,554,431           25
    1,073,741,823        30

This situation is very much like the ordered array discussed in the chapter on arrays. In that case, the number of comparisons for a binary search was approximately equal to the base-2 logarithm of the number of cells in the array. Here, if we call the number of nodes in the first column N, and the number of levels in the second column L, then we can say that N is 1 less than 2 raised to the power L, or

    N = 2^L - 1

Adding 1 to both sides of the equation, we have

    N + 1 = 2^L

This is equivalent to

    L = log2(N + 1)

Thus the time needed to carry out the common tree operations is proportional to the base-2 log of N. In Big O notation we say such operations take O(log N) time.

If the tree isn't full, analysis is difficult. We can say that for a tree with a given number of levels, average search times will be shorter for the non-full tree than the full tree, because fewer searches will proceed to lower levels.

Compare the tree to the other data-storage structures we've discussed so far. In an unordered array or a linked list containing 1,000,000 items, it would take you on the average 500,000 comparisons to find the one you wanted. But in a tree of 1,000,000 items, it takes 20 (or fewer) comparisons.

In an ordered array you can find an item equally quickly, but inserting an item requires, on the average, moving 500,000 items. Inserting an item in a tree with 1,000,000 items requires 20 or fewer comparisons, plus a small amount of time to connect the item.

Similarly, deleting an item from a 1,000,000-item array requires moving an average of 500,000 items, while deleting an item from a 1,000,000-node tree requires 20 or fewer comparisons to find the item, plus (possibly) a few more comparisons to find its successor, plus a short time to disconnect the item and connect its successor.

Thus a tree provides high efficiency for all the common data-storage operations. Traversing is not as fast as the other operations. However, traversals are probably not very commonly carried out in a typical large database. They're more appropriate when a tree is used as an aid to parsing algebraic or similar expressions, which are probably not too long anyway.

Trees Represented as Arrays

Our code examples are based on the idea that a tree's edges are represented by leftChild and rightChild references in each node. However, there's a completely different way to represent a tree: with an array. In the array approach, the nodes are stored in an array and are not linked by references.
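Before looking at the details, here's a small sketch of the index arithmetic such a scheme typically uses, assuming the root is stored at index 0 (the class name is ours):

```java
// Array representation of a binary tree: a node's children and
// parent are found by simple arithmetic on its array index.
class ArrayTreeIndex {
    static int leftChild(int index)  { return 2 * index + 1; }
    static int rightChild(int index) { return 2 * index + 2; }
    static int parent(int index)     { return (index - 1) / 2; } // integer division

    public static void main(String[] args) {
        // The root's children live at cells 1 and 2; cell 4's parent is cell 1.
        System.out.println(leftChild(0) + " " + rightChild(0) + " " + parent(4));
    }
}
```

Note that both cell 3 and cell 4 map back to parent cell 1, because integer division discards the remainder.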
The node at index 0 is the root, the node at index 1 is the root's left child, and so on, progressing from left to right along each level of the tree. This is shown in the figure.

Figure: Tree represented by an array

Every position in the tree, whether it represents an existing node or not, corresponds to a cell in the array. Adding a node at a given position in the tree means inserting the node into the equivalent cell in the array. Cells representing tree positions with no nodes are filled with zero or null.

With this scheme, a node's children and parent can be found by applying some simple arithmetic to the node's index number in the array. If a node's index number is index, then this node's left child is

    2*index + 1

its right child is

    2*index + 2

and its parent is

    (index-1) / 2

(where the '/' character indicates integer division with no remainder). You can check this out by looking at the figure.

In most situations, representing a tree with an array isn't very efficient. Unfilled nodes and deleted nodes leave holes in the array, wasting memory. Even worse, when deletion of a node involves moving subtrees, every node in the subtree must be moved to its new location in the array, which is time-consuming in large trees.

However, if deletions aren't allowed, then the array representation may be useful, especially if obtaining memory for each node dynamically is, for some reason, too time-consuming. The array representation may also be useful in special situations. The tree in the workshop applet, for example, is represented internally as an array, to make it easy to map the nodes from the array to fixed locations on the screen display.

Duplicate Keys

As in other data structures, the problem of duplicate keys must be addressed. In the code shown for insert(), and in the workshop applet, a node with a duplicate key will be inserted as the right child of its twin.

The problem is that the find() routine will find only the first of two (or more) duplicate nodes. The find() routine could be modified to check an additional data item, to distinguish data items even when the keys were the same, but this would be (at least somewhat) time-consuming.

One option is to simply forbid duplicate keys. When duplicate keys are excluded by the nature of the data (employee ID numbers, for example), there's no problem. Otherwise, you need to modify the insert() routine to check for equality during the insertion process, and abort the insertion if a duplicate is found. The Fill routine in the workshop applet excludes duplicates when generating the random keys.

Summary

- Trees consist of nodes (circles) connected by edges (lines).
- The root is the topmost node in a tree; it has no parent.
- In a binary tree, a node has at most two children.
- In a binary search tree, all the nodes that are left descendants of node A have key values less than A; all the nodes that are A's right descendants have key values greater than (or equal to) A.
- Trees perform searches, insertions, and deletions in O(log N) time.
- Nodes represent the data-objects being stored in the tree.
- Edges are most commonly represented in a program by references to a node's children (and sometimes to its parent).
- Traversing a tree means visiting all its nodes in some order.
- An unbalanced tree is one whose root has many more left descendants than right descendants, or vice versa.
- Searching for a node involves comparing the value to be found with the key value of a node, and going to that node's left child if the search value is less, or to the node's right child if the search value is greater.
- Insertion involves finding the place to insert the new node, and then changing a child field in its new parent to refer to it.
- An inorder traversal visits nodes in order of ascending keys.
- Preorder and postorder traversals are useful for parsing algebraic expressions.
- When a node has no children, it can be deleted by setting the child field in its parent to null.
- When a node has one child, it can be deleted by setting the child field in its parent to point to its child.
- When a node has two children, it can be deleted by replacing it with its successor.
- The successor to a node A can be found by finding the minimum node in the subtree whose root is A's right child.
- In deletion of a node with two children, different situations arise, depending on whether the successor is the right child of the node to be deleted or one of the right child's left descendants.
- Nodes with duplicate key values may cause trouble in arrays because only the first one can be found in a search.
- Trees can be represented in the computer's memory as an array, although the reference-based approach is more common.

Red-Black Trees

Overview

As you learned in the last chapter, ordinary binary search trees offer important advantages as data storage devices: you can quickly search for an item with a given key, and you can also quickly insert or delete an item. Other data storage structures, such as arrays, sorted arrays, and linked lists, perform one or the other of these activities slowly. Thus binary search trees might appear to be the ideal data storage structure.

Unfortunately, ordinary binary search trees suffer from a troublesome problem. They work well if the data is inserted into the tree in random order. However, they become much slower if data is inserted in
already sorted order, or in inversely sorted order. When the values to be inserted are already ordered, a binary tree becomes unbalanced. With an unbalanced tree, the capability to quickly find (or insert or delete) a given element is lost.

This chapter explores one way to solve the problem of unbalanced trees: the red-black tree, which is a binary search tree with some added features. There are other ways to ensure that trees are balanced; we'll examine one of them, the 2-3-4 tree, in the chapter on 2-3-4 trees and external storage. However, the red-black tree is in most cases the most efficient balanced tree, at least when data is stored in memory as opposed to external files.

Our Approach to the Discussion

We'll explain insertion into red-black trees a little differently than we have explained insertion into other data structures. Red-black trees are not trivial to understand. Because of this, and also because of a multiplicity of symmetrical cases (for left or right children, and inside or outside grandchildren), the actual code is more lengthy and complex than one might expect. It's therefore hard to learn about the algorithm by examining code.

Conceptual

For this reason, we're going to concentrate on conceptual understanding rather than coding details. In this we will be aided by the RBTree workshop applet. We'll describe how you can work in partnership with the applet to insert new nodes into a tree. Including a human in the insertion routine certainly slows it down, but it also makes it easier for the human to understand how the process works.

Searching works the same way in a red-black tree as it does in an ordinary binary tree. On the other hand, insertion and deletion, while based on the algorithms in an ordinary tree, are extensively modified. Accordingly, in this chapter we'll be concentrating on the insertion process.

Top-Down Insertion

The approach to insertion that we'll discuss is called top-down insertion. This means that some structural changes may be made to the tree as the search routine descends the tree looking for the place to insert the node.

Another approach is bottom-up insertion. This involves finding the place to insert the node and then working back up through the tree making structural changes. Bottom-up insertion is less efficient because two passes must be made through the tree.

Balanced and Unbalanced Trees

Before we begin our investigation of red-black trees, let's review how trees become unbalanced. Fire up the Tree workshop applet from the chapter on binary trees (not this chapter's RBTree applet). Use the Fill button to create a tree with only one node. Then insert a series of nodes whose keys are in either ascending or descending order. The result will be something like that in the figure.

Figure: Items inserted in ascending order
The nodes arrange themselves in a line with no branches. Because each node is larger than the previously inserted one, every node is a right child, so all the nodes are on one side of the root. The tree is maximally unbalanced. If you inserted items in descending order, every node would be the left child of its parent; the tree would be unbalanced on the other side.

Degenerates to O(N)

When there are no branches, the tree becomes, in effect, a linked list. The arrangement of data is one-dimensional instead of two-dimensional. Unfortunately, as with a linked list, you must now search through (on the average) half the items to find the one you're looking for. In this situation the speed of searching is reduced to O(N), instead of O(log N) as it is for a balanced tree. Searching through 10,000 items in such an unbalanced tree would require an average of 5,000 comparisons, whereas for a balanced tree with random insertions it requires only about 14. For presorted data you might just as well use a linked list in the first place.

Data that's only partly sorted will generate trees that are only partly unbalanced. If you use the Tree workshop applet from the chapter on binary trees to generate trees, you'll see that some of them are more unbalanced than others, as shown in the figure.

Figure: A partially unbalanced tree

Although not as bad as a maximally unbalanced tree, this situation is not optimal for searching times.

In the Tree workshop applet, trees can become partially unbalanced, even with randomly generated data, because the amount of data is so small that even a short run of ordered numbers will have a big effect on the tree. Also, a very small or very large key value can cause an unbalanced tree by not allowing the insertion of many nodes on one side or the other. A very small key value at the root, for example, allows only a few more nodes to be inserted to its left.

With a realistic amount of random data it's not likely a tree would become seriously unbalanced. However, there may be runs of sorted data that will partially unbalance a tree. Searching partially unbalanced trees will take time somewhere between O(N) and O(log N), depending on how badly the tree is unbalanced.

Balance to the Rescue

To guarantee the quick O(log N) search times a tree is capable of, we need to ensure that our tree is always balanced (or at least almost balanced). This means that each node in a tree needs to have roughly the same number of descendants on its left side as it has on its right.
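The degeneration described above is easy to demonstrate. This sketch (the names are ours) inserts the same keys into a plain binary search tree in sorted order and then in a mixed order, and compares the heights of the resulting trees:

```java
// Demonstrates how sorted insertion order degenerates a plain BST
// into a linked-list shape, while mixed order keeps it shallow.
class UnbalancedDemo {
    static class Node { int key; Node left, right; Node(int k) { key = k; } }

    static Node insert(Node root, int key) {
        if (root == null) return new Node(key);
        if (key < root.key) root.left = insert(root.left, key);
        else root.right = insert(root.right, key);
        return root;
    }
    // Number of levels along the longest root-to-leaf path.
    static int height(Node n) {
        if (n == null) return 0;
        return 1 + Math.max(height(n.left), height(n.right));
    }
    static int heightFor(int[] keys) {
        Node root = null;
        for (int k : keys) root = insert(root, k);
        return height(root);
    }
    public static void main(String[] args) {
        System.out.println(heightFor(new int[]{1, 2, 3, 4, 5, 6, 7})); // sorted
        System.out.println(heightFor(new int[]{4, 2, 6, 1, 3, 5, 7})); // mixed
    }
}
```

Seven keys inserted in sorted order yield a tree seven levels deep (every node a right child), while the same keys in a balanced order yield only three levels, the log2(N+1) minimum.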
As an item is being inserted, the insertion routine checks that certain characteristics of the tree are not violated. If they are, it takes corrective action, restructuring the tree as necessary. By maintaining these characteristics, the tree is kept balanced.

Red-Black Tree Characteristics

What are these mysterious tree characteristics? There are two, one simple and one more complicated:

- The nodes are colored.
- During insertion and deletion, rules are followed that preserve various arrangements of these colors.

Colored Nodes

In a red-black tree, every node is either black or red. These are arbitrary colors; blue and yellow would do just as well. In fact, the whole concept of saying that nodes have "colors" is somewhat arbitrary. Some other analogy could have been used instead: we could say that every node is either heavy or light, or yin or yang. However, colors are convenient labels. A data field, which can be boolean (isRed, for example), is added to the node class to embody this color information.

In the RBTree workshop applet, the red-black characteristic of a node is shown by its border color. The center color, as it was in the Tree applet in the last chapter, is simply a randomly generated data field of the node. When we speak of a node's color in this chapter, we'll almost always be referring to its red-black border color. In the figures (except the screen shot of the applet) we'll show black nodes with a solid black border and red nodes with a white border. (Nodes are sometimes shown with no border to indicate that it doesn't matter whether they're black or red.)

Red-Black Rules

When inserting (or deleting) a new node, certain rules, which we call the red-black rules, must be followed. If they're followed, the tree will be balanced. Let's look briefly at these rules:

1. Every node is either red or black.
2. The root is always black.
3. If a node is red, its children must be black (although the converse isn't necessarily true).
4. Every path from the root to a leaf, or to a null child, must contain the same number of black nodes.

The "null child" referred to in rule 4 is a place where a child could be attached to a non-leaf node. In other words, it's the potential left child of a node with a right child, or the potential right child of a node with a left child. This will make more sense as we go along.

The number of black nodes on a path from root to leaf is called the black height. Another way to state rule 4 is that the black height must be the same for all paths from the root to a leaf.
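The rules can also be checked mechanically. This sketch uses our own class and field names (the book only suggests a boolean such as isRed); it verifies rules 2, 3, and 4 by computing the black height of each subtree, with null children contributing no black nodes of their own:

```java
// Checks red-black rules 2, 3, and 4 on a colored binary tree.
class RedBlackCheck {
    static class Node {
        int key; boolean isRed; Node left, right;
        Node(int k, boolean red) { key = k; isRed = red; }
    }
    // Returns the subtree's black height, or -1 if rule 3 or 4 is broken.
    static int blackHeight(Node n) {
        if (n == null) return 0;                        // null child: no black nodes
        if (n.isRed && ((n.left != null && n.left.isRed)
                     || (n.right != null && n.right.isRed)))
            return -1;                                  // rule 3: red node, red child
        int lh = blackHeight(n.left);
        int rh = blackHeight(n.right);
        if (lh < 0 || rh < 0 || lh != rh) return -1;    // rule 4: equal black heights
        return lh + (n.isRed ? 0 : 1);
    }
    static boolean isRedBlackCorrect(Node root) {
        return root != null
            && !root.isRed                              // rule 2: root is black
            && blackHeight(root) >= 0;
    }
    public static void main(String[] args) {
        Node root = new Node(50, false);                // black root with
        root.left = new Node(25, true);                 // two red children:
        root.right = new Node(75, true);                // red-black correct
        System.out.println(isRedBlackCorrect(root));
        root.left.left = new Node(12, true);            // red child of a red
        System.out.println(isRedBlackCorrect(root));    // node: violation
    }
}
```

A black root with two red children passes; attaching a red child to a red node then trips rule 3 and the checker reports the tree incorrect.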
It's not obvious how these rules lead to a balanced tree, but they do; some very clever people invented them. Copy them onto a sticky note, and keep it on your computer. You'll need to refer to them often in the course of this chapter.

You can see how the rules work by using the RBTree workshop applet. We'll do some experiments with the applet in a moment, but first you should understand what actions you can take to fix things if one of the red-black rules is broken.

Duplicate Keys

What happens if there's more than one data item with the same key? This presents a slight problem in red-black trees. It's important that nodes with the same key are distributed on both sides of other nodes with the same key. That is, if three items with the same key arrive in sequence, you want the second to go to the right of the first one, and the third to go to the left of the first one. Otherwise, the tree becomes unbalanced.

This could be handled by some kind of randomizing process in the insertion algorithm. However, the search process then becomes more complicated if all items with the same key must be found. It's simpler to outlaw items with the same key. In this discussion we'll assume duplicates aren't allowed.

The Actions

Suppose you see (or are told by the applet) that the color rules are violated. How can you fix things so your tree is in compliance? There are two, and only two, possible actions you can take:

- You can change the colors of nodes.
- You can perform rotations.

Changing the color of a node means changing its red-black border color (not the center color). A rotation is a rearrangement of the nodes that hopefully leaves the tree more balanced.

At this point such concepts probably seem very abstract, so let's become familiar with the RBTree workshop applet, which can help to clarify things.

Using the RBTree Workshop Applet

The figure shows what the RBTree workshop applet looks like after some nodes have been inserted. (It may be hard to tell the difference between red and black node borders in the figure, but they should be clear on a color monitor.)
There are quite a few buttons in the RBTree applet. We'll briefly review what they do, although at this point some of the descriptions may be a bit puzzling. Soon we'll do some experimenting with these buttons.

Clicking on a Node

The red arrow points to the currently selected node. It's this node whose color is changed, or which is the top node in a rotation. You select a node by single-clicking it with the mouse. This moves the red arrow to the node.

The Start Button

When you first start the workshop applet, and also when you press the Start button, you'll see that a tree is created that contains only one node. Because an understanding of red-black trees focuses on using the red-black rules during the insertion process, it's more convenient to begin with the root and build up the tree by inserting additional nodes. To simplify future operations, the initial root node is always given the same value. (You select your own numbers for subsequent insertions.)

The Ins Button

The Ins button causes a new node to be created, with the value that was typed into the Number box, and then inserted into the tree. (At least this is what happens if no color flips are necessary. See the section on the Flip button for more on this possibility.)

Notice that the Ins button does a complete insertion operation with one push; multiple pushes are not required as they were with the Tree workshop applet in the last chapter. The focus in the RBTree applet is not on the process of finding the place to insert the node, which is similar to that in ordinary binary search trees, but on keeping the tree balanced, so the applet doesn't show the individual steps in the insertion.

The Del Button

Pushing the Del button causes the node with the key value typed into the Number box to be deleted. As with the Ins button, this takes place immediately after the first push; multiple pushes are not required.

The Del button and the Ins button use the basic insertion algorithms, the same as those in the Tree workshop applet. This is how the work is divided between the applet and the user: the applet does the insertion, but it's (mostly) up to the user to make the appropriate changes to the tree to ensure the red-black rules are followed and the tree thereby becomes balanced.
If there is a black parent with two red children, and you place the red arrow on the parent by clicking on the node with the mouse, then when you press the Flip button the parent will become red and the children will become black. That is, the colors are flipped between the parent and the children. You'll learn later why this is a desirable thing to do.

If you try to flip the root, it will remain black, so as not to violate rule 2, but its children will change from red to black.

The RoL Button

This button carries out a left rotation. To rotate a group of nodes, first single-click the mouse to position the arrow at the topmost node of the group to be rotated. For a left rotation, the top node must have a right child. Then click the button. We'll examine rotations in detail later.

The RoR Button

This button performs a right rotation. Position the arrow on the top node to be rotated, making sure it has a left child; then click the button.

The R/B Button

The R/B button changes a red node to black, or a black node to red. Single-click the mouse to position the red arrow on the node, and then push the button. (This button changes the color of a single node; don't confuse it with the Flip button, which changes three nodes at once.)

Text Messages

Messages in the text box below the buttons tell you whether the tree is red-black correct. The tree is red-black correct if it adheres to rules 1 to 4 listed previously. If it's not correct, you'll see messages advising which rule is being violated. In some cases the red arrow will point to where the violation occurred.

Where's the Find Button?

In red-black trees, a search routine operates exactly as it did in the ordinary binary search trees described in the last chapter. It starts at the root, and, at each node it encounters (the current node), it decides whether to go to the left or right child by comparing the key of the current node with the search key. We don't include a Find button in the RBTree applet because you already understand this process and our attention will be on manipulating the red-black aspects of the tree.

Experimenting

Now that you're familiar with the RBTree buttons, let's do some simple experiments to get a feel for what the applet does. The idea here is to learn to manipulate the applet's controls. Later you'll use these skills to balance the tree.

Experiment 1

Press Start to clear any extra nodes. You'll be left with the root node, which always has
the same value. Insert a new node with a value smaller than the root by typing a number into the Number box and pressing the Ins button. This doesn't cause any rule violations, so the message continues to say Tree is red-black correct.

Insert a second node that's larger than the root. The tree is still red-black correct. It's also balanced; there are the same number of nodes on the right of the only non-leaf node (the root) as there are on its left. The result is shown in the figure.

Figure: A balanced tree

Notice that newly inserted nodes are always colored red (except for the root). This is not an accident. It's less likely that inserting a red node will violate the red-black rules than inserting a black one. This is because if the new red node is attached to a black one, no rule is broken. It doesn't create a situation in which there are two red nodes together (rule 3), and it doesn't change the black height in any of the paths (rule 4). Of course, if you attach a new red node to a red node, rule 3 will be violated. However, with any luck this will happen only half the time. Whereas, if it were possible to add a new black node, it would always change the black height for its path, violating rule 4.

Also, it's easier to fix violations of rule 3 (parent and child are both red) than rule 4 (black heights differ), as we'll see later.

Experiment 2

Let's try some rotations. Start with the three nodes as shown in the figure. Position the red arrow on the root by clicking it with the mouse. This node will be the top node in the rotation. Now perform a right rotation by pressing the RoR button. The nodes all shift to new positions, as shown in the figure.

Figure: Following a right rotation

In this right rotation, the parent or top node moves into the place of its right child, the left child moves up and takes the place of the parent, and the right child moves down to become the grandchild of the new top node. Notice that the tree is now unbalanced; there are more nodes to the right of the root than to the left. Also, the message indicates that the red-black rules are violated, specifically
To fix this, rotate the other way. Position the red arrow on the node that is now the root (the arrow should already point to it after the previous rotation). Click the RoL button to rotate left. The nodes will return to their earlier positions.

Experiment 3
Start with the arrangement of the previous experiment, with two nodes inserted in addition to the root. Note that the parent (the root) is black and both its children are red. Now try to insert another node. No matter what value you use, you'll see the message Can't insert: needs color flip. As we mentioned, a color flip is necessary whenever, during the insertion process, a black node with two red children is encountered.

The red arrow should already be positioned on the black parent (the root node), so click the Flip button. The root's two children change from red to black. Ordinarily the parent would change from black to red, but this is a special case: because it's the root, it remains black, to avoid violating the rule that the root is always black. Now all three nodes are black, and the tree is still red-black correct.

Now click the Ins button again to insert the new node. The figure shows the result. The tree is still red-black correct: the root is black, there's no situation in which a parent and child are both red, and all the paths have the same number of black nodes. Adding the new red node didn't change the red-black correctness.

Experiment 4
Now let's see what happens when you try to do something that leads to an unbalanced tree. In the figure one path has one more node than the other. This isn't very unbalanced, and no red-black rules are violated, so neither we nor the red-black algorithms need to worry about it.

However, suppose that one path differs from another by two or more levels (where level is the same as the number of nodes along the path). In this case the red-black rules will always be violated, and we'll need to rebalance the tree.

Figure: Colors flipped, new node inserted

Insert another node into the tree of the figure. You'll see the message Error: parent and child are both red. The red-red rule has been violated, as shown in the figure.
How can we fix things so the red-red rule isn't violated? An obvious approach is to change one of the offending nodes to black. Let's try changing the child node: position the red arrow on it and press the R/B button. The node becomes black.

The good news: we fixed the problem of both parent and child being red. The bad news: the message now says Error: black heights differ. One path from the root now has three black nodes in it, while another has only two, so the black-height rule is violated. It seems we can't win.

This problem can be fixed with a rotation and some color changes. How to do this will be the topic of later sections.

More Experiments
Experiment with the RBTree Workshop applet on your own. Insert more nodes and see what happens. See if you can use rotations and color changes to achieve a balanced tree. Does keeping the tree red-black correct seem to guarantee an (almost) balanced tree?

Try inserting a sequence of ascending keys, and then restart with the Start button and try descending keys. Ignore the messages; we'll see what they mean later. These are the situations that get the ordinary binary search tree into trouble. Can you still balance the tree?

The Red-Black Rules and Balanced Trees
Try to create a tree that is unbalanced by two or more levels but is red-black correct. As it turns out, this is impossible. That's why the red-black rules keep the tree balanced. If one path is more than one node longer than another, then it must either have more black nodes, violating the black-height rule, or it must have two adjacent red nodes, violating the red-red rule. Convince yourself that this is true by experimenting with the applet.

Null Children
Remember that the black-height rule specifies that all paths from the root to any leaf or to any null child must have the same number of black nodes. A null child is a child that a non-leaf node might have, but doesn't. Thus in the figure the path from the root to a null child can have a different number of black nodes than the paths to the leaf nodes. Such an arrangement violates the black-height rule, even though both paths to actual leaf nodes have the same number of black nodes.
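The black-height check just described is easy to express in code. Here is a minimal Java sketch (the Node class and key values are our own illustrations, not the applet's actual code) that computes the black height of a subtree, treating null children as the ends of paths, and reports a violation when two paths disagree:

```java
// Sketch: verifying the equal-black-heights rule, counting null children
// as path ends. Node is a hypothetical minimal class.
public class BlackHeight {
    static class Node {
        int key; boolean red;
        Node left, right;
        Node(int key, boolean red) { this.key = key; this.red = red; }
    }

    // Returns the black height of the subtree (number of black nodes on any
    // path down to a null child), or -1 if two paths disagree -- that is,
    // if the black-height rule is violated somewhere below.
    static int blackHeight(Node n) {
        if (n == null) return 0;          // a null child ends a path
        int left = blackHeight(n.left);
        int right = blackHeight(n.right);
        if (left == -1 || right == -1 || left != right) return -1;
        return left + (n.red ? 0 : 1);    // only black nodes add to the height
    }

    public static void main(String[] args) {
        // Black root with one red child: every path holds one black node.
        Node root = new Node(50, false);
        root.left = new Node(25, true);
        System.out.println(blackHeight(root));   // 1: the rule holds

        // Make the child black: its path now has two black nodes,
        // while the root's null right child ends a path with only one.
        root.left.red = false;
        System.out.println(blackHeight(root));   // -1: black heights differ
    }
}
```

This mirrors the applet's Error: black heights differ message: turning a red node black fixes a red-red conflict but unbalances the black counts.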
The term black height describes the number of black nodes on the path between a given node and the root. In the figure, you can find the black height of each node by counting the black nodes along its path from the root.

Rotations
To balance a tree, it's necessary to physically rearrange the nodes. If all the nodes are on the left of the root, for example, you need to move some of them over to the right side. This is done using rotations. In this section we'll learn what rotations are and how to execute them.

Rotations are ways to rearrange nodes. They were designed to do two things:
- Raise some nodes and lower others, to help balance the tree.
- Ensure that the characteristics of a binary search tree are not violated.

Recall that in a binary search tree the left children of any node have key values less than the node, while its right children have key values greater than or equal to the node. If a rotation didn't maintain a valid binary search tree, it wouldn't be of much use, because the search algorithm, as we saw in the last chapter, relies on the search-tree arrangement.

Note that color rules and node color changes are used only to help decide when to perform a rotation; fiddling with the colors doesn't accomplish anything by itself. It's the rotation that's the heavy hitter. Color rules are like rules of thumb for building a house (such as "exterior doors open inward"), while rotations are like the hammering and sawing needed to actually build it.

Simple Rotations
In Experiment 2 we tried rotations to the left and right. These rotations were easy to visualize because they involved only three nodes. Let's clarify some aspects of this process.

What's Rotating?
The term rotation can be a little misleading. The nodes themselves aren't rotated; the relationship between them changes. One node is chosen as the "top" of the rotation. If we're doing a right rotation, this "top" node will move down and to the right, into the position of its right child. Its left child will move up to take its place.

Remember that the top node isn't the "center" of the rotation. If we talk about a car tire, the top node doesn't correspond to the axle or the hubcap; it's more like the topmost part of the tire.
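The mechanics of a right rotation can be sketched in a few lines of Java. This is a minimal sketch with a bare BST node class and made-up key values, not the book's applet code. The "top" node moves down into its right child's position, its left child moves up, and the left child's former right subtree is reattached under the old top node (the "crossover" discussed shortly):

```java
// Sketch of a right rotation around a chosen "top" node.
public class Rotation {
    static class Node {
        int key; Node left, right;
        Node(int k) { key = k; }
    }

    // Rotate right with 'top' as the top node; returns the new top.
    // The top node must have a left child, or there is nothing to raise.
    static Node rotateRight(Node top) {
        Node newTop = top.left;
        top.left = newTop.right;   // crossover: inside grandchild moves across
        newTop.right = top;        // old top becomes the new top's right child
        return newTop;
    }

    public static void main(String[] args) {
        Node root = new Node(50);          // hypothetical keys
        root.left = new Node(25);
        root.right = new Node(75);
        root.left.right = new Node(37);    // inside grandchild

        root = rotateRight(root);
        System.out.println(root.key);            // 25: left child is now on top
        System.out.println(root.right.key);      // 50: old top moved down
        System.out.println(root.right.left.key); // 37: crossed over
    }
}
```

Note that the two reattachments preserve the binary-search-tree ordering: everything that ends up in the old top node's left subtree is still less than its key.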
The rotation we described in Experiment 2 was performed with the root as the top node, but of course any node can be the top node in a rotation, provided it has the appropriate child.

Mind the Children
You must be sure that, if you're doing a right rotation, the top node has a left child. Otherwise there's nothing to rotate into the top spot. Similarly, if you're doing a left rotation, the top node must have a right child.

The Weird Crossover Node
Rotations can be more complicated than the three-node example we've discussed so far. Click Start, and then, with the root in place, insert several nodes in order. When an insertion triggers the Can't insert: needs color flip message, just click the Flip button; the parent and children change color. Then press Ins again to complete the insertion. Finally, insert one more node. The resulting arrangement is shown in the figure.

Figure: Rotation with crossover node

Now we'll try a rotation. Place the arrow on the root (don't forget this!) and press the RoR button. All the nodes move: the left child follows the arrow up, and the old top node goes down. But what's this? One node has detached itself from the node whose right child it was, and become instead the left child of another node. Some nodes go up, some nodes go down, but this one moves across. The result is shown in the figure. The rotation has caused a rule violation; we'll see how to fix this later.

In the original position, the node that crosses over is called an inside grandchild of the top node; the node on the outside of the same subtree is an outside grandchild. The inside grandchild, if it's the child of the node that's going up (which is the left child of the top node in a right rotation), is always disconnected from its parent and reconnected to its former grandparent. It's like becoming your own uncle (although it's best not to dwell too long on this analogy).

Subtrees on the Move
We've shown individual nodes changing position during a rotation, but entire subtrees can move as well. To see this, click Start to put a node at the root, and then insert a longer sequence of nodes, doing a color flip
whenever you can't complete an insertion because of the Can't insert: needs color flip message. The resulting arrangement is shown in the figure.

Figure: Subtree motion during rotation

Position the arrow on the root. Now press RoR. Wow! (Or is it wow backward?) A lot of nodes have changed position. The result is shown in the figure. Here's what happens:
- The top node goes down, into the position of its right child.
- The top node's left child goes to the top.
- The subtree rooted at the new top node's left child moves up along with it.
- The subtree rooted at the new top node's former right child moves across to become the left subtree of the old top node.
- The subtree rooted at the old top node's right child moves down.

You'll see the Error: root must be black message, but you can ignore it for the time being. You can flip back and forth by alternately pressing RoR and RoL with the arrow on the top node. Do this and watch what happens to the subtrees. The figures show the subtrees encircled by dotted triangles.

Note that the relations of the nodes within each subtree are unaffected by the rotation: the entire subtree moves as a unit. The subtrees can be larger (have more descendants) than the three nodes we show in this example. No matter how many nodes there are in a subtree, they will all move together during a rotation.

Human Beings Versus Computers
This is pretty much all you need to know about what a rotation does. To cause a rotation, you position the arrow on the top node, then press RoR or RoL. Of course, in a real red-black tree insertion algorithm, rotations happen under program control, without human intervention.

Notice, however, that in your capacity as a human being, you could probably balance any tree just by looking at it and performing appropriate rotations. Whenever a node has a lot of left descendants and not too many right ones, you rotate it right, and vice versa.
Computers aren't good at balancing a tree "by eye," but they can follow a few simple rules. That's what the red-black scheme provides, in the form of color coding and the four color rules.

Inserting a New Node
Now you have enough background to see how a red-black tree's insertion routine uses rotations and the color rules to maintain the tree's balance.

Preview
We're going to briefly preview our approach to describing the insertion process. Don't worry if things aren't completely clear in the preview; we'll discuss things in more detail in a moment.

In the discussion that follows we'll use X, P, and G to designate a pattern of related nodes:
- X is a node that has caused a rule violation. (Sometimes X refers to a newly inserted node, and sometimes to the child node when a parent and child have a red-red conflict.)
- P is the parent of X.
- G is the grandparent of X (the parent of P).

On the way down the tree to find the insertion point, you perform a color flip whenever you find a black node with two red children (a violation of the black-height rule if left uncorrected on insertion). Sometimes the flip causes a red-red conflict (a violation of the red-red rule). Call the red child X and the red parent P. The conflict can be fixed with a single rotation or a double rotation, depending on whether X is an outside or inside grandchild of G. Following color flips and rotations, you continue down to the insertion point and insert the new node.

After you've inserted the new node X, if P is black you simply attach the new red node. If P is red, there are two possibilities: X can be an outside or inside grandchild of G. You perform two color changes (we'll see what they are in a moment). If X is an outside grandchild, you perform one rotation, and if it's an inside grandchild you perform two. This restores the tree to a balanced state.

Now we'll recapitulate this preview in more detail. We'll divide the discussion into three parts, arranged in order of complexity:
1. Color flips on the way down
2. Rotations once the node is inserted
3. Rotations on the way down

If we were discussing these three parts in strict chronological order, we'd examine part 3 before part 2. However, it's easier to talk about rotations at the bottom of the tree than in the middle, and operations 1 and 2 are encountered more frequently than operation 3, so we'll discuss 2 before 3.

Color Flips on the Way Down
The insertion routine in a red-black tree starts off doing essentially the same thing it does in an ordinary binary search tree: it descends the tree to find where the new
node should be inserted, going left or right at each node depending on the relative size of the node's key and the search key.

However, in a red-black tree, getting to the insertion point is complicated by color flips and rotations. We introduced color flips in Experiment 3; now we'll look at them in more detail.

Imagine the insertion routine proceeding down the tree, going left or right at each node, searching for the place to insert a new node. To make sure the color rules aren't broken, it needs to perform color flips when necessary. Here's the rule: Every time the insertion routine encounters a black node that has two red children, it must change the children to black and the parent to red (unless the parent is the root, which always remains black).

Figure: A color flip

How does a color flip affect the red-black rules? For convenience, let's call the node at the top of the triangle, the one that's black before the flip, P for parent; P's two children are red before the flip. This is shown in the figure.

Black Heights Unchanged
The figure shows the nodes after the color flip. The flip leaves unchanged the number of black nodes on the path from the root on down through P to the leaf or null nodes. All such paths go through P, and then through one or the other of its children. Before the flip, only P is black, so the triangle (consisting of P and its two children) adds one black node to each of these paths. After the flip, P is no longer black, but both its children are, so again the triangle contributes one black node to every path that passes through it. So a color flip can't cause the black-height rule to be violated.

Color flips are helpful because they make red leaf nodes into black leaf nodes. This makes it easier to attach new red nodes without violating the red-red rule.

Could Be Two Reds
Although the black-height rule is not violated by a color flip, the red-red rule (a node and its parent can't both be red) may be. If the parent of P is black, there's no problem when P is changed from black to red. However, if the parent of P is red, then, after the color change, we'll have two reds in a row.

This needs to be fixed before we continue down the path to insert the new node. We can correct the situation with a rotation, as we'll soon see.

The Root Situation
What about the root? Remember that a color flip of the root and its two children leaves the root, as well as its children, black. This avoids violating the rule that the root is always black. Does this affect the other rules?
Flipping at the root makes two more nodes black and none red; thus, the red-red rule isn't violated. Also, because the root and one or the other of its two children are in every path, the black height of every path is increased by the same amount, that is, by 1. Thus, the black-height rule isn't violated either.

Finally, Just Insert It
Once you've worked your way down to the appropriate place in the tree, performing color flips (and rotations) if necessary on the way down, you can then insert the new node as described in the last chapter for an ordinary binary search tree. However, that's not the end of the story.

Rotations Once the Node Is Inserted
The insertion of the new node may cause the red-black rules to be violated. Therefore, following the insertion, we must check for rule violations and take appropriate steps.

Remember that, as described earlier, the newly inserted node, which we'll call X, is always red. X may be located in various positions relative to P and G, as shown in the figure.

Figure: Handed variations of the node being inserted

Remember that a node X is an outside grandchild if it's on the same side of its parent P that P is of its parent G. That is, X is an outside grandchild if either it's a left child of P and P is a left child of G, or it's a right child of P and P is a right child of G. Conversely, X is an inside grandchild if it's on the opposite side of its parent P that P is of its parent G.

If X is an outside grandchild, it may be either the left or right child of P, depending on whether P is the left or right child of G. Two similar possibilities exist if X is an inside grandchild. It's these four situations that are shown in the figure. This multiplicity of what we might call "handed" (left or right) variations is one reason the red-black insertion routine is challenging to program.

The action we take to restore the red-black rules is determined by the colors and configuration of X and its relatives. Perhaps surprisingly, there are only three major ways in which the nodes can be arranged (not counting the handed variations already mentioned). Each possibility must be dealt with in a different way to preserve red-black correctness and thereby lead to a balanced tree. We'll list the three possibilities briefly, then discuss each one in detail in its own section. The figure shows what they look like. Remember that X is always red.
1. P is black.
2. P is red and X is an outside grandchild of G.
3. P is red and X is an inside grandchild of G.

It might seem that this list doesn't cover all the possibilities. We'll return to this question after we've explored these three.

Possibility 1: P Is Black
If P is black, we get a free ride. The node we've just inserted is always red. If its parent is black, there's no red-to-red conflict and no addition to the number of black nodes. Thus no color rules are violated. We don't need to do anything else; the insertion is complete.

Possibility 2: P Is Red and X Is Outside
If P is red and X is an outside grandchild, we need a single rotation and some color changes. Let's set this up with the Workshop applet so we can see what we're talking about. Start with the usual node at the root, and insert a descending sequence of nodes (you'll need to do a color flip along the way). Now insert the new node X. The figure shows how this looks. The message on the Workshop applet says Error: parent and child both red, so we know we need to take some action.
In this situation, we can take three steps to restore red-black correctness and thereby balance the tree. Here are the steps:

1. Switch the color of X's grandparent G.
2. Switch the color of X's parent P.
3. Rotate with X's grandparent G at the top, in the direction that raises X. (This is a right rotation in the example.)

As you've learned, to switch colors, put the arrow on the node and press the R/B button. To rotate right, put the arrow on the top node and press RoR. When you've completed the three steps, the Workshop applet will inform you that the tree is red-black correct. It's also more balanced than it was, as shown in the figure.

In this example, X was an outside grandchild and a left child. There's a symmetrical situation when X is an outside grandchild but a right child. Try this by creating the mirror-image tree (with color flips when necessary). Fix it by changing the colors of G and P, and rotating left with G at the top. Again the tree is balanced.

Possibility 3: P Is Red and X Is Inside
If P is red and X is an inside grandchild, we need two rotations and some color changes. To see this one in action, use the Workshop applet to create the corresponding tree. (Again you'll need a color flip before the last insertion.) The result is shown in the figure.
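Before going further, the three steps for Possibility 2 can be sketched in Java. This is a hedged sketch for the left-left case only, with a made-up minimal node class and key values; the helper names are our own, not the book's:

```java
// Sketch: fixing a red outside grandchild (left-left case) with
// two color switches and one right rotation at the grandparent.
public class OutsideFix {
    static class Node {
        int key; boolean red;
        Node left, right;
        Node(int k, boolean red) { key = k; this.red = red; }
    }

    static Node rotateRight(Node top) {
        Node newTop = top.left;
        top.left = newTop.right;
        newTop.right = top;
        return newTop;
    }

    // g is the black grandparent; its red left child p has a red left child x.
    static Node fixOutsideLeft(Node g) {
        g.red = true;          // step 1: switch the grandparent's color
        g.left.red = false;    // step 2: switch the parent's color
        return rotateRight(g); // step 3: rotate at G in the direction that raises X
    }

    public static void main(String[] args) {
        Node g = new Node(50, false);
        g.left = new Node(25, true);
        g.left.left = new Node(12, true);   // x: red outside grandchild

        Node top = fixOutsideLeft(g);
        System.out.println(top.key);        // 25: the old parent is now on top
        System.out.println(top.red);        // false: the new top is black
        System.out.println(top.right.red);  // true: the old grandparent is now red
    }
}
```

The mirror-image (right-right) case swaps every left for right and uses a left rotation instead.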
Note that the offending node is an inside grandchild. It and its parent are both red, so again you see the error message Error: parent and child both red.

Fixing this arrangement is slightly more complicated. If we try to rotate right with the grandparent node G at the top, as we did in Possibility 2, the inside grandchild X moves across rather than up, so the tree is no more balanced than before. (Try this, then rotate back, with the new root at the top, to restore it.) A different solution is needed.

The trick when X is an inside grandchild is to perform two rotations rather than one. The first changes X from an inside grandchild to an outside grandchild, as shown in the figure. Now the situation is similar to Possibility 2, and we can apply the same rotation, with the grandparent at the top, as we did before. The result is shown in the figure.

We must also recolor the nodes. We do this before doing any rotations. (This order doesn't really matter, but if we wait until after the rotations to recolor the nodes, it's hard to know what to call them.) The steps are:

1. Switch the color of X's grandparent G.
2. Switch the color of X (not its parent; X is the new node).
3. Rotate with X's parent P at the top (not the grandparent), in the direction that raises X (a left rotation in this example).
4. Rotate again with X's grandparent G at the top, in the direction that raises X (a right rotation).

This restores the tree to red-black correctness and also balances it (as much as possible). As with Possibility 2, there is an analogous case in which P is the right child of G rather than the left.

What About Other Possibilities?
Do the three post-insertion possibilities just discussed really cover all situations?

Suppose, for example, that X has a sibling S, the other child of P. This might complicate the rotations necessary to insert X. But if P is black, there's no problem inserting X (that's Possibility 1). If P is red, then both its children must be black (to avoid violating the red-red rule). P can't have a single black child S, because the black heights would be different for S
and the null child. However, we know X is red, so we conclude that it's impossible for X to have a sibling unless P is red.

Another possibility is that G, the grandparent of P, has a child U, the sibling of P and the uncle of X. Again, this would complicate any necessary rotations. However, if P is black, there's no need for rotations when inserting X, as we've seen. So let's assume P is red. Then U must also be red; otherwise, the black height going from G through P would be different from that going from G through U. But a black parent with two red children is flipped on the way down, so this situation can't exist either.

Thus the three possibilities discussed earlier are the only ones that can exist (except that, in Possibilities 2 and 3, X can be a right or left child and G can be a right or left child).

What the Color Flips Accomplished
Suppose that performing a rotation and appropriate color changes caused other violations of the red-black rules to appear further up the tree. One can imagine situations in which you would need to work your way all the way back up the tree, performing rotations and color switches, to remove rule violations.

Fortunately, this situation can't arise. Using color flips on the way down has eliminated the situations in which a rotation could introduce any rule violations further up the tree. It ensures that one or two rotations will restore red-black correctness in the entire tree. (Actually proving this is beyond the scope of this book, but such a proof is possible.)

It's the color flips on the way down that make insertion in red-black trees more efficient than in other kinds of balanced trees, such as AVL trees. They ensure that you need to pass through the tree only once, on the way down.

Rotations on the Way Down
Now we'll discuss the last of the three operations involved in inserting a node: making rotations on the way down to the insertion point. As we noted, although we're discussing this last, it actually takes place before the node is inserted. We've waited until now to discuss it only because it was easier to explain rotations for a just-installed node than for nodes in the middle of the tree.

During the discussion of color flips in the insertion process, we noted that it's possible for a color flip to cause a violation of the red-red rule (a parent and child can't both be red). We also noted that a rotation can fix this violation.

There are two possibilities, corresponding to Possibility 2 and Possibility 3 during the insertion phase described earlier: the offending node can be an outside grandchild or it can be an inside grandchild. (In the situation corresponding to Possibility 1, no action is required.)

Outside Grandchild
First we'll examine an example in which the offending node is an outside grandchild. By "offending node" we mean the child in the parent-child pair that caused the red-red conflict.

Start a new tree with a root node, and insert a sequence of nodes, doing color flips when required. Now try to insert one more node. You'll be told you must flip a node and its children, and you push the Flip button. The flip is carried out, but now the message says Error: parent and child are both red, referring to the flipped node and its red child.
Figure: Outside grandchild on the way down

The procedure used to fix this is similar to the post-insertion operation with an outside grandchild, described earlier. We must perform two color switches and one rotation. So we can discuss this in the same terms we did when inserting a node, we'll call the node at the top of the triangle that was flipped X. This looks a little odd, because we're used to thinking of X as the node being inserted, and here it's not even a leaf node. However, these on-the-way-down rotations can take place anywhere within the tree.

The parent of X is P, and the grandparent of X (the parent of P) is G. We follow the same set of rules we did under Possibility 2, discussed earlier:

1. Switch the color of X's grandparent G. (Ignore the message that the root must be black.)
2. Switch the color of X's parent P.
3. Rotate with X's grandparent G at the top, in the direction that raises X (here a right rotation).

Suddenly, the tree is balanced! It has also become pleasantly symmetrical. It appears to be a bit of a miracle, but it's only a result of following the color rules.

Now the new node can be inserted in the usual way. Because the node it connects to is black, there's no complexity about the insertion; one color flip is necessary. The figure shows the tree after the node is inserted.

Inside Grandchild
If X is an inside grandchild when a red-red conflict occurs on the way down, two rotations are required to set it right. This situation is similar to the inside grandchild in the post-insertion phase, which we called Possibility 3.

Click Start in the RBTree Workshop applet to begin with a fresh root, and insert a sequence of nodes, doing color flips where required.
When you perform the last required flip, a parent and child are both red, and you get the Error: parent and child are both red message. Don't press Ins again! In this situation the offending child is X, its parent is P, and its grandparent is G, as shown in the figure.

Figure: Inside grandchild on the way down

To cure the red-red conflict, you must do the same two color changes and two rotations as in Possibility 3:

1. Change the color of G. (Ignore the message that the root must be black.)
2. Change the color of X.
3. Rotate with P as the top, in the direction that raises X (left in this example). The result is shown in the figure.
4. Rotate with G as the top, in the direction that raises X (right in this example).

Now you can insert the new node; a color flip occurs as you insert it. The result is shown in the figure.

This concludes the description of how a tree is kept red-black correct, and therefore balanced, during the insertion process.

Deletion
As you may recall, coding for deletion in an ordinary binary search tree is considerably harder than for insertion. The same is true in red-black trees, but in addition, the deletion process is, as you might expect, complicated by the need to restore red-black correctness after the node is removed.

In fact, the deletion process is so complicated that many programmers sidestep it in various ways. One approach, as with ordinary binary trees, is to mark a node as deleted without actually deleting it. A search routine that finds the node then knows not to tell anyone about it. This works in many situations, especially if deletions are not a common occurrence. In any case, we're going to forgo a discussion of the deletion process. You can refer to the appendix, "Further Reading," if you want to pursue it.
The Efficiency of Red-Black Trees
Like ordinary binary search trees, a red-black tree allows for searching, insertion, and deletion in O(log N) time. Search times should be almost the same in the red-black tree as in the ordinary tree, because the red-black characteristics of the tree aren't used during searches. The only penalty is that the storage required for each node is increased slightly to accommodate the red-black color (a boolean variable).

More specifically, according to Sedgewick (see the appendix), in practice a search in a red-black tree takes about log2 N comparisons, and it can be shown that it cannot require more than 2*log2 N comparisons.

The times for insertion and deletion are increased by a constant factor because of having to perform color flips and rotations on the way down and at the insertion point. On the average, an insertion requires about one rotation. Therefore, insertion still takes O(log N) time, but is slower than insertion in the ordinary binary tree.

Because in most applications there will be more searches than insertions and deletions, there is probably not much overall time penalty for using a red-black tree instead of an ordinary tree. Of course, the advantage is that in a red-black tree sorted data doesn't lead to slow O(N) performance.

Implementation
If you're writing an insertion routine for red-black trees, all you need to do (irony intended) is to write code to carry out the operations described earlier. As we noted, showing and describing such code is beyond the scope of this book. However, here's what you'll need to think about.

You'll need to add a red-black field (which can be type boolean) to the Node class. You can adapt the insertion routine from the tree.java program in the last chapter. On the way down to the insertion point, check whether the current node is black and its two children are both red. If so, change the color of all three (unless the parent is the root, which must be kept black).

After a color flip, check that there are no violations of the red-red rule. If there are, perform the appropriate rotations: one for an outside grandchild, two for an inside grandchild.

When you reach a leaf node, insert the new node as in tree.java, making sure the node is red. Check again for red-red conflicts, and perform any necessary rotations.

Perhaps surprisingly, your software need not keep track of the black height of different parts of the tree (although you might want to check this during debugging). You only need to check for violations of the red-red rule, a red parent with a red child, which can be done locally (unlike checks of black heights, which would require more complex bookkeeping). If you perform the color flips, color changes, and rotations described earlier, the black heights of the nodes should take care of themselves and the tree should remain balanced. The RBTree Workshop applet reports black-height errors only because the user is not forced to carry out the insertion algorithm correctly.

Other Balanced Trees
The AVL tree is the earliest kind of balanced tree. It's named after its inventors, Adelson-Velskii and Landis. In AVL trees each node stores an additional piece of data: the difference between the heights of its left and right subtrees. This difference may not be larger than one; that is, the height of a node's left subtree may be no more than one level different from the height of its right subtree.

Following insertion, the root of the lowest subtree into which the new node was inserted is checked. If the heights of its children differ by more than one, a single or double rotation is performed to equalize their heights. The algorithm then moves up and checks the node above, equalizing heights if necessary. This continues all the way back up to the root.

Search times in an AVL tree are O(log N) because the tree is guaranteed to be balanced. However, because two passes through the tree are necessary to insert (or delete) a node, one down to find the insertion point and one up to rebalance the tree, AVL trees are not as efficient as red-black trees and are not used as often.

The other important kind of balanced tree is the multiway tree, in which each node can have more than two children. We'll look at one version of multiway trees, the 2-3-4 tree, in the next chapter. One problem with multiway trees is that each node must be larger than for a binary tree, because it needs a reference to every one of its children.

Summary
- It's important to keep a binary search tree balanced to ensure that the time necessary to find a given node is kept as short as possible.
- Inserting data that has already been sorted can create a maximally unbalanced tree, which will have search times of O(N).
- In the red-black balancing scheme, each node is given a new characteristic: a color that can be either red or black.
- A set of rules, called red-black rules, specifies permissible ways that nodes of different colors can be arranged. These rules are applied while inserting (or deleting) a node.
- A color flip changes a black node with two red children to a red node with two black children.
- In a rotation, one node is designated the top node. A right rotation moves the top node into the position of its right child, and the top node's left child into its position. A left rotation moves the top node into the position of its left child, and the top node's right child into its position.
- Color flips, and sometimes rotations, are applied while searching down the tree to find where the new node should be inserted. These flips simplify returning the tree to red-black correctness following an insertion.
- After a new node is inserted, red-red conflicts are checked again. If a violation is found, appropriate rotations are carried out to make the tree red-black correct.
- These adjustments result in the tree being balanced, or at least almost balanced.
- Adding red-black balancing to a binary tree has only a small negative effect on average performance, and avoids worst-case performance when the data is already sorted.
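The implementation notes above (a boolean color field on each node, plus a color flip on the way down whenever a black node has two red children) can be sketched briefly. This is a minimal sketch with our own class and field names, not the book's tree.java code:

```java
// Sketch: a node carrying the red-black boolean field, and the color flip
// performed on the way down. The root is kept black in all cases.
public class RBSketch {
    static class Node {
        int key; boolean red;   // the extra red-black field
        Node left, right;
        Node(int k, boolean red) { key = k; this.red = red; }
    }

    // Flip colors if 'n' is black with two red children.
    static void colorFlip(Node n, boolean isRoot) {
        if (n.red || n.left == null || n.right == null) return;
        if (n.left.red && n.right.red) {
            n.left.red = false;
            n.right.red = false;
            if (!isRoot) n.red = true;  // special case: the root stays black
        }
    }

    public static void main(String[] args) {
        Node root = new Node(50, false);
        root.left = new Node(25, true);
        root.right = new Node(75, true);

        colorFlip(root, true);
        System.out.println(root.red);        // false: root remains black
        System.out.println(root.left.red);   // false
        System.out.println(root.right.red);  // false
    }
}
```

After each such flip, a real insertion routine would check for a red-red conflict between the flipped node and its parent, and perform the one or two rotations described earlier.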
2-3-4 Trees and External Storage

Overview

In a binary tree, each node has one data item and can have up to two children. If we allow more data items and children per node, the result is a multiway tree. 2-3-4 trees, to which we devote the first part of this chapter, are multiway trees that can have up to four children and three data items per node.

2-3-4 trees are interesting for several reasons. First, they're balanced trees like red-black trees. They're slightly less efficient than red-black trees, but easier to program. Second, and most importantly, they serve as an easy-to-understand introduction to B-trees.

A B-tree is another kind of multiway tree that's particularly useful for organizing data in external storage. (External means external to main memory; usually this is a disk drive.) A node in a B-tree can have dozens or hundreds of children. We'll discuss external storage and B-trees in the second part of this chapter.

Introduction to 2-3-4 Trees

In this section we'll look at the characteristics of 2-3-4 trees. Later we'll see how a Workshop applet models a 2-3-4 tree, and how we can program a 2-3-4 tree in Java. We'll also look at the surprisingly close relationship between 2-3-4 trees and red-black trees.

The figure shows a small 2-3-4 tree. Each lozenge-shaped node can hold one, two, or three data items. Here the top three nodes have children, and the six nodes on the bottom row are all leaf nodes, which by definition have no children. In a 2-3-4 tree all the leaf nodes are always on the same level.
The 2, 3, and 4 in the name 2-3-4 tree refer to how many links to child nodes can potentially be contained in a given node. For non-leaf nodes, three arrangements are possible:

- A node with one data item always has two children.
- A node with two data items always has three children.
- A node with three data items always has four children.

In short, a non-leaf node must always have one more child than it has data items. Or, to put it symbolically, if the number of child links is L and the number of data items is D, then L = D + 1. This is a critical relationship that determines the structure of 2-3-4 trees.

A leaf node, by contrast, has no children, but it can nevertheless contain one, two, or three data items. Empty nodes are not allowed.

Because a 2-3-4 tree can have nodes with up to four children, it's called a multiway tree of order 4.

You may wonder why a 2-3-4 tree isn't called a 1-2-3-4 tree. Can't a node have only one child, as nodes in binary trees can? A binary tree (described in the chapters "Binary Trees" and "Red-Black Trees") can be thought of as a multiway tree of order 2 because each node can have up to two children. However, there's a difference (besides the maximum number of children) between binary trees and 2-3-4 trees. In a binary tree, a node can have up to two child links. A single link, to its left or to its right child, is also perfectly permissible; the other link has a null value. In a 2-3-4 tree, on the other hand, nodes with a single link are not permitted. A node with one data item must always have two links, unless it's a leaf, in which case it has no links.

The figure shows the possibilities. A node with two links is called a 2-node, a node with three links is a 3-node, and a node with four links is a 4-node, but there is no such thing as a 1-node.

2-3-4 Tree Organization
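The L = D + 1 relationship can be expressed directly in code. The following sketch of a node is an illustration only — the field names echo the arrays described later in the chapter, but this is not the book's listing, and the item-insertion logic is deliberately simplified:

```java
// Sketch of a 2-3-4 node: up to 3 data items and up to 4 child links.
// A non-leaf node always has one more link than it has data items
// (L = D + 1), so a node's "kind" follows from its item count.
public class Node234 {
    static final int ORDER = 4;
    private long[] itemArray = new long[ORDER - 1];     // up to 3 data items
    private Node234[] childArray = new Node234[ORDER];  // up to 4 child links
    private int numItems;

    // simplified: assumes the caller inserts items in ascending order
    public void addItem(long value) { itemArray[numItems++] = value; }

    public boolean isFull() { return numItems == ORDER - 1; }
    public int links()      { return numItems + 1; }    // L = D + 1 (non-leaf)
    public String kind()    { return links() + "-node"; } // "2-", "3-", or "4-node"

    public static void main(String[] args) {
        Node234 n = new Node234();
        n.addItem(10);
        n.addItem(20);
        System.out.println(n.kind() + " full=" + n.isFull()); // 3-node full=false
    }
}
```

A node with one item is a 2-node, with two items a 3-node, and with three items a full 4-node — exactly the arrangements listed above.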
For convenience we number the data items in a node from 0 to 2, and the children from 0 to 3, as shown in the figure. The data items in a node are arranged in ascending key order, by convention from left to right (lower to higher numbers).

An important aspect of any tree's structure is the relationship of its links to the key values of its data items. In a binary tree, all children with keys less than the node's key are in a subtree rooted in the node's left child, and all children with keys larger than or equal to the node's key are rooted in the node's right child. In a 2-3-4 tree the principle is the same, but there's more to it:

- All children in the subtree rooted at child 0 have key values less than key 0.
- All children in the subtree rooted at child 1 have key values greater than key 0 but less than key 1.
- All children in the subtree rooted at child 2 have key values greater than key 1 but less than key 2.
- All children in the subtree rooted at child 3 have key values greater than key 2.

This is shown in the figure. Duplicate values are not usually permitted in 2-3-4 trees, so we don't need to worry about comparing equal keys.

Refer to the 2-3-4 tree in the figure. As in all 2-3-4 trees, the leaves are all on the same level (the bottom row). Upper-level nodes are often not full; that is, they may contain only one or two data items instead of three. Also, notice that the tree is balanced. It retains its balance even if you insert a sequence of data in ascending (or descending) order. The tree's self-balancing capability results from the way new data items are inserted, as we'll see in a moment.

Searching

Finding a data item with a particular key is similar to the search routine in a binary tree. You start at the root and, unless the search key is found there, select the link that leads to the subtree with the appropriate range of values. For example, to search for a data item in the tree shown in the figure, you start at the root. You search the root, but don't find the item there. Because the search key is larger than the root's keys, you go to the child on the right. (Remember that the numbering of children and links starts at 0 on the left.) You don't find the data item in this node either, so you must go to the next child: because the search key falls between two of the node's keys, you follow the corresponding link. This time you find the specified item.

Insertion

New data items are always inserted in leaves, which are on the bottom row of the tree. If items were inserted into nodes with children, the number of children would need to be changed to maintain the structure of the tree, which stipulates that there should be one more child than data items in a node.

Insertion into a 2-3-4 tree is sometimes quite easy and sometimes rather complicated. In any case the process begins by searching for the appropriate leaf node. If no full nodes are encountered during the search, insertion is easy. When the appropriate leaf node is reached, the new data item is simply inserted into it. The figure shows a data item being inserted into a leaf that isn't full. Insertion may involve moving one or two other items in a node so the keys will be in the correct order after the new item is inserted. In this example an existing item had to be shifted right to make room for the new one.

Node Splits

Insertion becomes more complicated if a full node is encountered on the path down to the insertion point. When this happens, the node must be split. It's this splitting process that keeps the tree balanced. The kind of 2-3-4 tree we're discussing here is often called a top-down 2-3-4 tree because nodes are split on the way down to the insertion point.

Let's name the data items in the node that's about to be split A, B, and C. Here's what happens in a split (we assume the node being split is not the root; we'll examine splitting the root later):

- A new, empty node is created. It's a sibling of the node being split, and is placed to its right.
- Data item C is moved into the new node.
- Data item B is moved into the parent of the node being split.
- Data item A remains where it is.
- The rightmost two children are disconnected from the node being split and connected to the new node.

An example of a node split is shown in the figure. Another way of describing a node split is to say that a 4-node has been transformed into two 2-nodes.
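The split steps can be sketched in code. This is a hedged illustration using lists instead of the book's fixed arrays — the Node here is a stand-in, not the book's class:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a non-root node split: given a full (3-item) child of 'parent',
// item C goes to a new right sibling, item B moves up into the parent,
// item A stays where it is, and the two rightmost children are
// reattached to the new sibling.
public class Split234 {
    public static class Node {
        public List<Long> items = new ArrayList<>();
        public List<Node> children = new ArrayList<>();
    }

    public static void split(Node parent, int childIndex) {
        Node node = parent.children.get(childIndex);
        Node sibling = new Node();
        long b = node.items.remove(1);            // B leaves the middle
        sibling.items.add(node.items.remove(1));  // C (now at index 1) to the sibling
        if (!node.children.isEmpty()) {           // move children 2 and 3 across
            sibling.children.add(node.children.remove(2));
            sibling.children.add(node.children.remove(2));
        }
        parent.items.add(childIndex, b);               // B slots into the parent in key order
        parent.children.add(childIndex + 1, sibling);  // sibling sits right of the split node
    }

    public static void main(String[] args) {
        Node parent = new Node(), child = new Node();
        parent.items.add(50L);
        parent.children.add(new Node());  // left child (keys below 50)
        parent.children.add(child);       // right child, full: 60/70/80
        child.items.add(60L); child.items.add(70L); child.items.add(80L);
        split(parent, 1);
        System.out.println(parent.items); // [50, 70]
    }
}
```

After the split, the parent holds B, the old node keeps only A, and the new sibling holds C — the 4-node has become two 2-nodes, with one item pushed up.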
Notice that the effect of the node split is to move data up and to the right. It's this rearrangement that keeps the tree balanced. Here the insertion required only one node split, but more than one full node may be encountered on the path to the insertion point. When this is the case there will be multiple splits.

Splitting the Root

When a full root is encountered at the beginning of the search for the insertion point, the resulting split is slightly more complicated:

- A new node is created that becomes the new root and the parent of the node being split.
- A second new node is created that becomes a sibling of the node being split.
- Data item C is moved into the new sibling.
- Data item B is moved into the new root.
- Data item A remains where it is.
- The two rightmost children of the node being split are disconnected from it and connected to the new right-hand node.

The figure shows the root being split. This process creates a new root that's at a higher level than the old one. Thus the overall height of the tree is increased by one.
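The root split follows the same pattern but manufactures the parent itself. A minimal sketch (again a stand-in Node, not the book's class; the demo splits a full leaf root, so there are no children to move):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a root split: a brand-new root receives item B, the old root
// keeps item A, and a new sibling receives item C plus the two rightmost
// children (when the root is not a leaf). The tree grows one level taller.
public class RootSplit {
    public static class Node {
        public List<Long> items = new ArrayList<>();
        public List<Node> children = new ArrayList<>();
    }

    public static Node splitRoot(Node root) {
        Node newRoot = new Node(), sibling = new Node();
        newRoot.items.add(root.items.remove(1));       // B up to the new root
        sibling.items.add(root.items.remove(1));       // C to the new sibling
        if (!root.children.isEmpty()) {                // rightmost two children move over
            sibling.children.add(root.children.remove(2));
            sibling.children.add(root.children.remove(2));
        }
        newRoot.children.add(root);                    // old root keeps A, on the left
        newRoot.children.add(sibling);                 // sibling with C, on the right
        return newRoot;                                // the tree is now one level taller
    }

    public static void main(String[] args) {
        Node root = new Node();
        root.items.addAll(List.of(25L, 50L, 75L));     // a full leaf root
        Node top = splitRoot(root);
        System.out.println(top.items + " "
                + top.children.get(0).items + " "
                + top.children.get(1).items);          // [50] [25] [75]
    }
}
```

One 4-node has become three 2-nodes, which is exactly the alternative description of a root split given in the text.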
Another way to describe splitting the root is to say that a 4-node is split into three 2-nodes. Following a node split, the search for the insertion point continues down the tree. In the figure, the new data item is inserted into the appropriate leaf.

Splitting on the Way Down

Notice that, because all full nodes are split on the way down, a split can't cause an effect that ripples back up through the tree. The parent of any node that's being split is guaranteed not to be full, and can therefore accept data item B without itself needing to be split. Of course, if this parent already had two data items when its child was split, it will become full — but that simply means it will be split when the next search encounters it.

The figure shows a series of insertions into an empty tree. There are four node splits: two of the root and two of leaves.

The 2-3-4 Tree Workshop Applet

Operating the 2-3-4 Tree Workshop applet provides a quick way to see how 2-3-4 trees work. When you start the applet you'll see a screen similar to the figure.

The Fill Button

When it's first started, the 2-3-4 Tree Workshop applet inserts some data items into the tree. You can use the Fill button to create a new tree with a different number of data items. Click Fill and type the number into the field when prompted; another click will create the new tree. The tree may not look very full with only a few items, but more nodes require more levels, which won't fit in the display.

The Find Button

You can watch the applet locate a data item with a given key by repeatedly clicking the Find button. When prompted, type in the appropriate key. Then, as you click the button, watch the red arrow move from node to node as it searches for the item. Messages will report which child number the search went to. As we've seen, children are numbered from 0 to 3 from left to right, while data items are numbered from 0 to 2. After a little practice you should be able to predict the path the search will take.

A search involves examining one node on each level. The applet supports a maximum of four levels, so any item can be found by examining only four nodes. Within each non-leaf node, the algorithm examines each data item, starting on the left, to see which child it should go to next. In a leaf node it examines each data item to see if it contains the specified key. If it can't find such an item in the leaf node, the search fails.

In the 2-3-4 Tree Workshop applet it's important to complete each operation before attempting a new one. Continue to click the button until the message says Press any button. This is the signal that an operation is complete.

The Ins Button
The Ins button causes a new data item, with a key you specify, to be inserted in the tree. The algorithm first searches for the appropriate node. If it encounters a full node along the way, it splits it before continuing on.

Experiment with the insertion process. Watch what happens when there are no full nodes on the path to the insertion point. This is a straightforward process. Then try inserting at the end of a path that includes a full node, either at the root, at the leaf, or somewhere in between. Watch how new nodes are formed and the contents of the node being split are distributed among three different nodes.

The Zoom Button

One of the problems with 2-3-4 trees is that there are a great many nodes and data items just a few levels down. The 2-3-4 Tree Workshop applet supports only four levels, but there are potentially 64 nodes on the bottom level, each of which can hold up to three data items. It would be impossible to display so many items at once on one row, so the applet shows only some of them: the children of a selected node. (To see the children of another node, you click on it; we'll discuss that in a moment.) To see a zoomed-out view of the entire tree, click the Zoom button. The figure shows what you'll see.

In this view nodes are shown as small rectangles; data items are not shown. Nodes that exist and are visible in the zoomed-in view (which you can restore by clicking Zoom again) are shown in green. Nodes that exist but aren't currently visible in the zoomed-out view are shown in magenta, and nodes that don't exist are shown in gray. These colors are hard to distinguish in the figure; you'll need to view the applet on a color monitor to make sense of the display.

Using the Zoom button to toggle back and forth between the zoomed-out and zoomed-in views allows you to see both the big picture and the details, and hopefully put the two together in your mind.

Viewing Different Nodes

In the zoomed-in view you can always see all the nodes in the top two rows: there's only one, the root, in the top row, and only four in the second row. Below the second row things get more complicated because there are too many nodes to fit on the screen: 16 on the third row, 64 on the fourth. However, you can see any node you want by clicking on its parent, or sometimes its grandparent and then its parent.

A blue triangle at the bottom of a node shows where a child is connected to the node. If a node's children are currently visible, lines run from the blue triangles to them. If the children aren't currently visible, there are no lines, but the blue triangles indicate that the node nevertheless has children. If you click on the parent node, its children and the lines to them will appear. By clicking the appropriate nodes you can navigate all over the tree.

For convenience, all the nodes are numbered, starting with 0 at the root and continuing up to 84 for the node on the far right of the bottom row. The numbers are displayed to the upper right of each node, as shown in the figure. Nodes are numbered whether they exist or not, so the numbers on existing nodes probably won't be contiguous.

The figure shows a small tree with four nodes in the third row. The user has clicked on a node in the second row, so its two children are visible. If the user clicks on a different node in the second row, that node's children will appear instead. These figures show how to switch among different nodes in the third row by clicking nodes in the second row. To switch nodes in the fourth row you'll need to click first on a grandparent in the second row, then on a parent in the third row.

During searches and insertions with the Find and Ins buttons, the view will change automatically to show the node currently being pointed to by the red arrow.

Experiments

The 2-3-4 Tree Workshop applet offers a quick way to learn about 2-3-4 trees. Try inserting items into the tree. Watch for node splits. Stop before one is about to happen, and figure
out where the various data items will go, then watch the split to see if you're right.

As the tree gets larger you'll need to move around it to see all the nodes. Click on a node to see its children (and their children, and so on). If you lose track of where you are, use the Zoom key to see the big picture.

How many data items can you insert in the tree? There's a limit because only four levels are allowed. Four levels can potentially contain 1 + 4 + 16 + 64 nodes, for a total of 85 nodes (all visible on the zoomed-out display). Assuming a full 3 items per node gives 255 data items. However, the nodes can't all be full at the same time. Long before they fill up, another root split, leading to five levels, would be necessary, and this is impossible because the applet supports only four levels.

You can insert the most items by deliberately inserting them into nodes that lie on paths with no full nodes, so that no splits are necessary. Of course this is not a reasonable procedure with real data. For random data you can't insert anywhere near the theoretical maximum into the applet; the Fill button limits the number you can specify, to minimize the possibility of overflow.

Java Code for a 2-3-4 Tree

In this section we'll examine a Java program that models a 2-3-4 tree. We'll show the complete tree234.java program at the end of the section. This is a relatively complex program, and the classes are extensively interrelated, so you'll need to peruse the entire listing to see how it works. There are four classes: DataItem, Node, Tree234, and Tree234App. We'll discuss them in turn.

The DataItem Class

Objects of this class represent the data items stored in nodes. In a real-world program each object would contain an entire personnel or inventory record, but here there's only one piece of data, of type double, associated with each DataItem object. The only actions that objects of this class can perform are to initialize themselves and display themselves. The display is the data value preceded by a slash. (The display routine in the Node class will call this routine to display all the items in a node.)

The Node Class

The Node class contains two arrays, childArray and itemArray. The first is four cells long and holds references to whatever children the node might have. The second is three cells long and holds references to objects of type DataItem contained in the node.

Note that the data items in itemArray comprise an ordered array. New items are added, or existing ones removed, in the same way they would be in any ordered array (as described in the chapter "Arrays"). Items may need to be shifted to make room to insert a new item in order, or to close an empty cell when an item is removed.

We've chosen to store the number of items currently in the node (numItems) and the node's parent (parent) as fields in this class. Neither of these is strictly necessary, and could be eliminated to make the nodes smaller. However, including them clarifies the programming, and only a small price is paid in increased node size.

Various small utility routines are provided in the Node class to manage the connections to child and parent and to check if the node is full and if it is a leaf. However, the major work is done by the findItem(), insertItem(), and removeItem() routines. These, respectively, find an item with a particular key; insert a new item into the node, moving existing items if necessary; and remove an item, again moving existing items if necessary. Don't confuse these methods with the find() and insert() routines in the Tree234 class, which we'll look at next.

A display routine displays a node with slashes separating the data items — one, two, or three of them, depending on how many the node holds.

Don't forget that in Java, references are automatically initialized to null and numbers to 0 when their object is created, so class Node doesn't need a constructor.

The Tree234 Class

An object of the Tree234 class represents the entire tree. The class has only one field, root, of type Node. All operations start at the root, so that's all a tree needs to remember.

Searching

Searching for a data item with a specified key is carried out by the find() routine. It starts at the root, and at each node calls that node's findItem() routine to see if the item is there. If so, it returns the index of the item within the node's item array. If find() is at a leaf and can't find the item, the search has failed, so it returns -1. If it can't find the item in the current node, and the current node isn't a leaf, find() calls the getNextChild() method, which figures out which of a node's children the routine should go to next.

Inserting

The insert() method starts with code similar to find(), except that if it finds a full node it splits it. Also, it assumes it can't fail; it keeps looking, going to deeper and deeper levels, until it finds a leaf node. At this point it inserts the new data item into the leaf. (There is always room in the leaf; otherwise the leaf would have been split.)

Splitting

The split() method is the most complicated in this program. It is passed the node that will be split as an argument. First, the two rightmost data items are removed from the node and stored. Then the two rightmost children are disconnected; their references are also stored.

A new node, called newRight, is created. It will be placed to the right of the node being split. If the node being split is the root, an additional new node is created: a new root.

Next, appropriate connections are made to the parent of the node being split. It may be a pre-existing parent, or, if the root is being split, it will be the newly created root node. Assume the three data items in the node being split are called A, B, and C. Item B is inserted in this parent node. If necessary, the parent's existing children are disconnected and reconnected one position to the right to make room for the new data item and new connections. The newRight node is connected to this parent. (Refer to the earlier figures showing node and root splits.)

Now the focus shifts to the newRight node. Data item C is inserted in it, and child 2 and child 3, which were previously disconnected from the node being split, are connected to it. The split is now complete, and the split() routine returns.
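The child-selection rule that getNextChild() implements follows directly from the key ranges described earlier. A minimal stand-alone sketch (not the book's listing — the method here takes the node's sorted items as a plain array rather than a Node object):

```java
// Sketch of getNextChild(): scan the node's items left to right and
// descend to the link just left of the first item larger than the key.
// If the key is larger than every item, take the rightmost link.
public class ChildSelect {
    public static int getNextChild(long[] items, int numItems, long key) {
        int j;
        for (j = 0; j < numItems; j++)
            if (key < items[j])
                return j;   // key belongs in the subtree left of items[j]
        return j;           // key exceeds every item: rightmost child
    }

    public static void main(String[] args) {
        long[] items = {30, 60};  // a 3-node holding keys 30 and 60
        System.out.println(getNextChild(items, 2, 10)); // 0
        System.out.println(getNextChild(items, 2, 40)); // 1
        System.out.println(getNextChild(items, 2, 90)); // 2
    }
}
```

Because a node with D items has D + 1 links, the returned index ranges from 0 to numItems, matching the L = D + 1 relationship.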
The Tree234App Class

In the Tree234App class, the main() routine inserts a few data items into the tree. It then presents a character-based interface for the user, who can enter s to see the tree, i to insert a new data item, and f to find an existing item. When the user presses s, each node is displayed on its own line, showing the node's level, its child number, and its data items separated by slashes; when the user presses f or i, the program prompts for a value and reports the result.

The output is not very intuitive, but there's enough information to draw the tree if you want. The level, starting with 0 at the root, is shown, as well as the child number. The display algorithm is depth-first, so the root is shown first, then its first child and the subtree of which the first child is the root, then the second child and its subtree, and so on.

The sample session shows two items being inserted; the second of these causes a node (a child of the root) to split. The figure depicts the tree that results from these insertions, following the final press of the s key.

Listing for tree234.java

The listing shows the complete tree234.java program, including all the classes just discussed. As with most object-oriented programs, it's probably easiest to start by examining the big-picture classes first and then work down to the detail-oriented classes. In this program this order is Tree234App, Tree234, Node, DataItem.