
4th SBAI - Simpósio Brasileiro de Automação Inteligente, São Paulo, SP, September 8-10, 1999

GENETIC-ALGORITHM BASED IMAGE COMPRESSION

Merlo, G.; Caram, F.; Fernández, V.; Britos, P.; Rossi, B. & Garcia-Martinez, R.
Intelligent Systems Laboratory
Computer Science Department, School of Engineering, University of Buenos Aires
Junín 143 3°C, (1026) Buenos Aires, ARGENTINA

Abstract. In this paper we analyze the image compression problem using genetic clustering algorithms based on the pixels of the image. The main problem to solve is to find an algorithm that performs this clustering efficiently. Nowadays the possibility of solving clustering problems with genetic algorithms is being studied. In this paper we make use of genetic algorithms to obtain an ordered representation of the image and then we perform the clustering to obtain the compression. With this purpose in mind, we have developed different clustering methods to apply to the ordered representation.

Keywords: Genetic algorithms, image compression.

1 INTRODUCTION

We can split the clustering algorithms into two categories: constructive or iterative algorithms. In the constructive methods, the assignment of objects to the various clusters is determined by developing a complete possible solution from a partial solution. In the iterative methods, an initial complete solution is always available and one tries to improve this solution in different ways from one iteration to the next.

Popular clustering algorithms like K-MEANS belong to this category. An improvement to this algorithm has recently been developed by Ismail & Kamel [1989] (AFB and ABF) that alternates between a depth-first search and a breadth-first search to minimize the objective function. In these algorithms objects are systematically moved to different clusters in each iteration, whenever this action decreases the value of the objective function. The greedy nature of these algorithms can make them get stuck in a local minimum. This problem can be avoided by taking different random initial configurations and then applying the transformation procedure to each of them. This type of evaluation is too ad hoc, and the quality of the results strongly depends on data properties and on the objective function. Recently Klein & Dubes [1989] have applied simulated annealing, but the main disadvantages of this method are the great amount of execution time and that an efficient schedule for a simulated annealing algorithm is very difficult to achieve.

Our work is based on the paper of Bhuyan [1991]. The author considers the problem of partitioning N objects into M disjoint clusters using genetic algorithms to obtain a suitable object permutation.

In this work, we consider a solution to the clustering problem based on genetic algorithms in which we search for the optimum solution by simultaneously considering and manipulating a set of possible solutions (population). It constitutes an alternative to the classical methods [Aldenferder & Blashfield, 1984; Everitt, 1980]. We have defined a fitness function that minimizes the disorder among the elements we are ordering. In this way we do not mix the fitness calculation with the clustering procedure. We apply the clustering as the last step of the algorithm, when the ordered representation has already been obtained. According to the results obtained in our experiments, we think that the genetic algorithm solution to the clustering problem is very promising.

2 GENETIC ALGORITHM DESCRIPTION

The genetic algorithm used to obtain the ordered element representation is based on the algorithm introduced in [1].

2.1 Cost Function

In this work we did not consider the clustering in the cost function. We simply intend to obtain an ordered element representation using the genetic algorithm, so what we intend to minimize is the disorder of the elements. In order to achieve this we propose:

Given an ordered element representation:

x = [x_i], N ordered elements, where each element of the vector is another vector containing p bytes.

The cost function is defined as follows:

F(x) = Σ_{i=1..N-1} dist(x_i, x_{i+1})

where:

dist(x_i, x_{i+1}) = Σ_{j=1..p} (x_{i,j} - x_{i+1,j})^2

The problem is now to minimize F, the sum of the distances from each element of the representation to the following one.
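As an illustration, the cost function can be written out in a few lines of Python (a minimal sketch, not the authors' implementation; the function names are ours):

```python
# Sketch of the cost function F: the sum of squared-difference distances
# between consecutive elements of the ordered representation.

def dist(a, b):
    """Squared-difference distance between two p-byte elements."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def cost(ordering):
    """F(x) = sum over i of dist(x_i, x_{i+1})."""
    return sum(dist(ordering[i], ordering[i + 1])
               for i in range(len(ordering) - 1))

# Example: three 2-byte elements.
x = [(0, 0), (3, 4), (3, 5)]
print(cost(x))  # dist((0,0),(3,4)) + dist((3,4),(3,5)) = 25 + 1 = 26
```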

2.2 Clustering Solution Representation: Ordered Representation

In this work we use the ordered representation to represent the solutions. In this representation each partition is represented by an object permutation. This kind of ordered representation gives information about the similarities among the objects instead of directly giving the optimum clusters.

For example, let's assume that the chromosomic representation of the objects from 1 to 6 is the permutation (2 3 5 1 6 4). We assume that in a permutation that leads to an optimal solution similar objects are placed close to each other. Based on this hypothesis the order (2 3 5 1 6 4) shows that 2 is more similar to 3 than to 5, and that 5 cannot be included in the same cluster as 2 without also including 3. In the particular case of partitioning N objects into M clusters, let's assume that (O_1, O_2, ..., O_N) is a particular permutation of the N objects. Then each cluster consists of an object interval (O_i, O_{i+1}, ..., O_j), where j <= N.

With this constraint the number of different possible clusters is:

(1/2) · N · (N + 1)
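The interval count can be checked by enumeration (illustrative only; `interval_count` is a name we introduce):

```python
# With clusters restricted to contiguous intervals (O_i, ..., O_j) of the
# permutation, the number of distinct possible clusters is N*(N+1)/2.

def interval_count(n):
    """Count all contiguous intervals [i, j] with i <= j over n positions."""
    return sum(1 for i in range(n) for j in range(i, n))

for n in (1, 4, 6):
    assert interval_count(n) == n * (n + 1) // 2
print(interval_count(6))  # 21
```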

2.3 Initial Population Constructors.

In [1] the authors explore three different initial population constructors for clustering algorithms based on GAs. For simplicity they are called A, B, and C.

Constructor A places the objects in an ordered list at random. The use of this strategy to obtain an initial population allows testing genetic algorithms in the most adverse circumstances: the GA is completely ignorant about the good regions in the search space.

Constructor B uses a heuristic algorithm based on the minimum distance between two objects. It generates an object label at random and places it in the first position of the ordered list. Then it searches among all the objects absent from the ordered list to find the nearest object to the one most recently added to the list. It repeats this procedure until all the objects are included in the ordered list. The complexity of this constructor is O(N^2).

Constructor C uses a greedy and probabilistic heuristic to construct the initial population. The primary intention in this constructor is to balance the knowledge about the problem with randomness. The difference between this constructor and constructor B is in the way they find the nearest object to the one most recently added to the ordered list. This constructor, instead of searching among all the objects that are not included in the list, searches only in k objects, where k is a constant. The temporal complexity, in this case, is considerably reduced.


In this work we are going to use the C constructor, in which we allow the user to enter the parameter k.
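Constructors B and C can be sketched as one greedy routine, where constructor C restricts the nearest-neighbour search to a random sample of k objects (an illustrative reconstruction with a user-supplied distance function; `nearest_first_ordering` is our name, not the paper's):

```python
import random

def nearest_first_ordering(objects, dist, k=None):
    """Greedy ordering: start from a random object and repeatedly append
    the nearest remaining object (constructor B). If k is given, only a
    random sample of k remaining objects is examined at each step
    (constructor C), which cuts the per-step cost from O(N) to O(k)."""
    remaining = list(objects)
    current = remaining.pop(random.randrange(len(remaining)))
    ordered = [current]
    while remaining:
        pool = remaining if k is None else random.sample(
            remaining, min(k, len(remaining)))
        nxt = min(pool, key=lambda o: dist(ordered[-1], o))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

# Usage on scalar "objects" with absolute distance:
order = nearest_first_ordering([4, 1, 9, 2, 8], lambda a, b: abs(a - b))
```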

2.4 Parent Selection Operator

The problem we are facing is a function minimization, so the objective function F(x) must be mapped to a fitness function f(x). With this fitness function the probability of choosing a string x_i to be crossed is given by:

p_i = f(x_i) / Σ_{j=1..p} f(x_j)

where p is the population size.

If f(x) is taken as the inverse of F(x) there will not be much difference between a good string and a bad one. This problem can be solved using another transformation:

f(x) = C_max - F(x)

where C_max corresponds to the value of the worst string in the population.

In our case C_max was computed as:

C_max = Σ_{i=1..N-1} dist_max(x_i, x_{i+1}) = (N - 1) · dist_max(x_i, x_{i+1})

where:

dist_max(x_i, x_{i+1}) = Σ_{j=1..p} (max(x_{i,j} - x_{i+1,j}))^2

As each x_{i,j} is a byte,

max(x_{i,j} - x_{i+1,j}) = 255

and

C_max = (N - 1) · Σ_{j=1..p} 255^2

The problem encountered here was that, due to the great value of the maximum constant computed for the fitness function, all the fitness values for the chromosomes of a population were too close to each other. This produces a generation evolution based practically on an equal probability basis and not on the selection of the most suitable components of the old population.

In order to solve this problem, we scaled the fitness function in the following way:

f' = a · f + b


We determined the constants a and b using the following conditions:

a · f_prom + b = f_prom
a · f_min + b = 0

Solving these equations we obtained:

b = (f_prom · f_min) / (f_min - f_prom)
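The scaling can be sketched as follows (our reconstruction of the two conditions, writing f_prom as `f_avg`; not the authors' code):

```python
def scale_fitness(fitness):
    """Linear scaling f' = a*f + b with a*f_avg + b = f_avg and
    a*f_min + b = 0, so the population average is preserved and the
    worst string gets zero scaled fitness. Illustrative sketch only."""
    f_avg = sum(fitness) / len(fitness)
    f_min = min(fitness)
    if f_avg == f_min:          # degenerate case: all strings equal
        return list(fitness)
    a = f_avg / (f_avg - f_min)
    b = f_avg * f_min / (f_min - f_avg)
    return [a * f + b for f in fitness]

scaled = scale_fitness([10.0, 12.0, 14.0])
print(scaled)  # [0.0, 12.0, 24.0]: average 12 preserved, worst string -> 0
```

Note how the spread between good and bad strings widens while the average fitness, and hence the expected number of selections, stays the same.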

2.5 Cross Operators

In this work we have used two cross operators that were explained in [1]. We will refer to them as cross-operator 1 and cross-operator 2.

2.5.1 Cross-operator 1

Let's consider X_t as the object set to be partitioned. We randomly choose a window size x_w and an object x belonging to X_t. The position of x in the dominant parent (parent1) is x_p1 and the position of x in the support parent (parent2) is x_p2. We will call X the set of all the objects between positions x_p2 - x_w and x_p2 + x_w in the support parent. The first action made by the algorithm is to place x in position x_p1 of the offspring.

Then the objective is to place the X - {x} objects surrounding x_p1 and the other (X_t - X) objects in the remaining positions, so that the order of the objects of X_t - X and of X in parent1 is maintained in the offspring. This objective is achieved by a three-step process:

• Place the objects of X_t - X contained in {parent1[1], parent1[2], ..., parent1[x_p1]} in the offspring, the first object in the first position and so on, keeping in this way the sequence of parent1.

• The objects present in X are placed in the offspring beginning at the leftmost empty position, in the same sequence as in parent1.

• The objects of X_t - X contained in {parent1[x_p1], parent1[x_p1 + 1], ..., parent1[N]}, where N is the string length, are placed in the vacant positions of the offspring, keeping the sequence of parent1.

It is easy to see that the complexity of this algorithm is Θ(N).

Let's consider an example of this method. The set of objects to be partitioned is X_t = {1, 2, 3, 4, 5, 6, 7, 8}, and the parents are parent1 = (5, 3, 8, 1, 2, 7, 6, 4) and parent2 = (6, 5, 1, 2, 4, 7, 8, 3). Suppose that x = 2 and x_w = 2 are selected randomly. Then the corresponding string to be cut in parent2 is X = {5, 1, 2, 4, 7}, and the corresponding sequence in parent1 is (5, 1, 2, 7, 4).


In the first step of the method, the objects 3 and 8 are placed in the first and second positions of the offspring. In the second step, the objects 5, 1, 2, 7, and 4 are copied to the offspring starting from the third position. In the last step, the object 6 is added in the last position of the offspring, which results in (3, 8, 5, 1, 2, 7, 4, 6).

Fig. 1 - Cross-operator 1
(Translation: Padre Dominante = Dominant Parent; Padre Soporte = Support Parent; Paso 1 = Step 1; Paso 2 = Step 2; Paso 3 = Step 3)
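Cross-operator 1 as described above can be sketched directly (an illustrative reconstruction of the three steps, reproducing the paper's example; `crossover1` is our name):

```python
def crossover1(p1, p2, x, xw):
    """Cross-operator 1 sketch (after Bhuyan [1991]): take the window X
    around x in the support parent p2 and rebuild the offspring keeping
    the dominant parent p1's ordering. Illustrative reconstruction."""
    i1 = p1.index(x)                       # position of x in dominant parent
    i2 = p2.index(x)                       # position of x in support parent
    X = set(p2[max(0, i2 - xw): i2 + xw + 1])
    # Step 1: objects up to x's position in p1 that lie outside X.
    step1 = [o for o in p1[:i1 + 1] if o not in X]
    # Step 2: the objects of X, in p1's order.
    step2 = [o for o in p1 if o in X]
    # Step 3: remaining objects after x's position in p1, outside X.
    step3 = [o for o in p1[i1 + 1:] if o not in X]
    return step1 + step2 + step3

# The paper's example:
p1 = [5, 3, 8, 1, 2, 7, 6, 4]
p2 = [6, 5, 1, 2, 4, 7, 8, 3]
print(crossover1(p1, p2, x=2, xw=2))  # [3, 8, 5, 1, 2, 7, 4, 6]
```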

2.5.2 Cross-operator 2

In this crossing operation the first object in the offspring corresponds to the object in the first position of either of the parents. Then we determine the distance from the recently included object to the nearest one in both parents. The object which has the shortest distance is added to the offspring in the leftmost empty position. If the calculated distances are equal, then one object is selected at random. This process continues until all the offspring positions are filled.

Let's see an example with 8 objects, in which the parents are:

P1 = (5, 3, 8, 1, 2, 7, 6, 4) and P2 = (6, 5, 1, 2, 4, 7, 8, 3).

We take the first object of the offspring at random from the first position of either of the parents. Let's assume that in this case we randomly select P2, so the object 6 will be placed in the first position of the offspring. The objects nearest to 6 in parent P1, with a distance of 1, are {7, 4}, and in parent P2 it is 5.

Therefore, we select an object at random from 4, 5 and 7. Let's assume that the one selected is 7, which will occupy the second position in the offspring. The closest objects to 7 in parent P1 and in parent P2 are {2, 6} and {4, 8} respectively. Because object 6 has already been included, we choose an object at random from 2, 4 and 8 for the third position in the offspring.
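The whole procedure of cross-operator 2 can be sketched as follows (an illustrative reconstruction, with positional distance in each parent standing in for "nearest"; the names are ours):

```python
import random

def nearest_unused(parent, current, used):
    """Unused objects at the smallest positional distance from current."""
    pos = parent.index(current)
    best_d, cands = None, []
    for i, obj in enumerate(parent):
        if obj in used:
            continue
        d = abs(i - pos)
        if best_d is None or d < best_d:
            best_d, cands = d, [obj]
        elif d == best_d:
            cands.append(obj)
    return best_d, cands

def crossover2(p1, p2, rng=random):
    """Cross-operator 2 sketch: grow the offspring from the first object
    of a randomly chosen parent, repeatedly appending an unused object
    positionally closest to the last one in either parent; ties are
    broken at random. Illustrative reconstruction."""
    offspring = [rng.choice((p1[0], p2[0]))]
    while len(offspring) < len(p1):
        used = set(offspring)
        d1, c1 = nearest_unused(p1, offspring[-1], used)
        d2, c2 = nearest_unused(p2, offspring[-1], used)
        if d1 < d2:
            pool = c1
        elif d2 < d1:
            pool = c2
        else:
            pool = c1 + c2   # equal distances: candidates from both parents
        offspring.append(rng.choice(pool))
    return offspring
```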


Let's assume that the object selected is 2. In a similar manner all the remaining positions in the offspring are filled. The resulting chromosome is (6, 7, 2, 4, 8, 1, 5, 3).

Fig. 2 - Cross-operator 2 (Translation: Padre 1 = Parent 1; Padre 2 = Parent 2)

3 IMPLEMENTED CLUSTERING METHODS

Once the ordered element representation has been obtained by means of the genetic algorithm, we proceed to cluster it. With this purpose in mind, we developed four different clustering methods, which are explained below.

3.1 Fixed Clustering

This method is based on clustering the elements by assigning the same quantity of elements to each cluster. Once we have obtained the final ordered representation, we divide the amount of elements by the amount of clusters, obtaining in this way the amount of elements that each cluster will contain:

E = N / M

where:
N = amount of elements
M = amount of clusters
E = amount of elements per cluster.

The first E elements of the representation are assigned to the first cluster, the following E elements to the second one, and so on until we assign the last E elements to the M-th cluster.
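The fixed clustering step can be sketched in a few lines (illustrative; the paper implicitly assumes N divisible by M, and here any remainder simply joins the last cluster):

```python
def fixed_clustering(ordered, m):
    """Split the final ordered representation into m contiguous clusters
    of e = n // m elements each (any remainder joins the last cluster)."""
    n = len(ordered)
    e = n // m
    clusters = [ordered[i * e:(i + 1) * e] for i in range(m - 1)]
    clusters.append(ordered[(m - 1) * e:])
    return clusters

print(fixed_clustering([1, 2, 3, 4, 5, 6], 3))  # [[1, 2], [3, 4], [5, 6]]
```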

2.6 Replacement Operator

This operator is used to select a fixed-size population based on an old population P(old) and on the offspring P(offspring), which was created using the cross operators.

A parameter X, given by the user, determines that the new population P(new) has to be created using the X best strings from the combination of P(old) and P(offspring). The remainder of the strings of the new population P(new) are selected randomly from P(offspring). Notice that when the parameter X is zero, the whole new generation is selected only from P(offspring). Otherwise, if the parameter X equals the population size, then P(new) is formed by the best strings among all the strings in P(old) and P(offspring).

3.2 Dynamic Clustering with Fixed Clusters

In this method, the amount of elements inside each cluster is not fixed as in the previous one. Once we have obtained the final ordered element representation, we compute the Euclidean distances between each element and the following one. After that we select the M - 1 greatest distances.

The idea in this method is that the elements whose distances belong to this set of greatest distances must be assigned to different clusters. Therefore we intend that the places in the vector where these greatest distances appear become the limits between clusters.
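This cutting-at-largest-gaps idea can be sketched as follows (illustrative; shown on scalar elements with absolute distance instead of p-byte vectors with Euclidean distance):

```python
def dynamic_clustering(ordered, m, dist):
    """Dynamic clustering with a fixed number of clusters m: compute the
    distance between each consecutive pair in the ordered representation
    and cut it at the m-1 largest gaps. Illustrative sketch."""
    gaps = [(dist(ordered[i], ordered[i + 1]), i)
            for i in range(len(ordered) - 1)]
    cuts = sorted(i for _, i in sorted(gaps, reverse=True)[:m - 1])
    clusters, start = [], 0
    for c in cuts:
        clusters.append(ordered[start:c + 1])
        start = c + 1
    clusters.append(ordered[start:])
    return clusters

# 1-D example: the two biggest jumps (2 -> 10 and 11 -> 40) become limits.
print(dynamic_clustering([1, 2, 10, 11, 40], 3, lambda a, b: abs(a - b)))
# [[1, 2], [10, 11], [40]]
```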

2.7 Mutation Operator

This operator tries to solve some problems inherent to genetic algorithms as regards local minima. These problems appear because GAs pay attention to the population as a whole instead of identifying the best individual. As is natural in genetic algorithms, they progress by identifying high-performance regions inside the search space. The genetic algorithms would be more useful in combinatorial optimization problems if they were adapted to invoke a local search strategy to optimize the final population members.

The mutation operator selects two objects of the string at random and exchanges their positions.
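The swap move of the mutation operator (select two objects of the string at random and exchange their positions) can be sketched as:

```python
import random

def mutate(chromosome, rng=random):
    """Swap mutation: pick two distinct positions at random and exchange
    them, leaving the original chromosome untouched."""
    s = list(chromosome)
    i, j = rng.sample(range(len(s)), 2)   # two distinct positions
    s[i], s[j] = s[j], s[i]
    return s
```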

3.3 Dynamic Clustering with Variable Clusters

This method is similar to the previous one; the difference is that the amount of clusters (M) is not fixed. Instead, it is determined by the algorithm using a parameter entered by the user: the compression percentage.

Using this parameter the algorithm computes the amount of clusters needed and then performs the clustering process with them.

The amount of clusters is computed as:

C = trunc((100 - P) · N / 100)

where:
C = amount of clusters to be used.
P = compression percentage entered by the user.
N = amount of elements to be clustered.

3.4 Local Tuning

This is a clustering method in which the amount of clusters is fixed, but not the amount of elements assigned to each of them. The method begins by making a fixed clustering as explained in Section 3.1. Once these fixed clusters are determined, we go through the vector in ascending order and, for the last element of each cluster, we compute the distance to the centroid of its own cluster (Dcp) and the distance to the centroid of the next cluster (Dcs). If Dcs < Dcp, then we change the cluster limit, placing this element in the next cluster. If Dcs >= Dcp, we take no action. In this way we go through the whole vector until the last elements of all its clusters have been analyzed. After that we go through the vector in descending order and analyze in the same way the first element of each cluster, computing the distance to the centroid of its own cluster (Dcp) and to the centroid of the previous cluster (Dca). If Dca < Dcp, then we change the cluster limit, placing this element in the previous cluster. If Dca >= Dcp, we take no action. In this way we go through the whole vector until the first elements of all its clusters have been analyzed. It is worth mentioning that each time an element is changed we have to recalculate the centroid of the cluster it belonged to and the centroid of the cluster it belongs to after the change. The procedure of going through the vector in ascending and descending order is performed R times, where R is a parameter entered by the user.
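One ascending pass of Local Tuning can be sketched as follows (an illustrative reconstruction for tuple-valued elements; the descending pass is symmetric, and a full run would repeat both passes R times):

```python
def centroid(cluster):
    """Component-wise mean of a cluster of equal-length tuples."""
    return [sum(comp) / len(cluster) for comp in zip(*cluster)]

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def local_tuning_pass(clusters):
    """One ascending pass of Local Tuning: move the last element of each
    cluster into the next cluster if it is closer to that cluster's
    centroid (Dcs < Dcp). Centroids are recomputed on demand after each
    move; the input list of clusters is modified in place."""
    for k in range(len(clusters) - 1):
        if len(clusters[k]) <= 1:
            continue                      # never empty a cluster
        last = clusters[k][-1]
        dcp = dist2(last, centroid(clusters[k]))      # own centroid
        dcs = dist2(last, centroid(clusters[k + 1]))  # next centroid
        if dcs < dcp:
            clusters[k].pop()
            clusters[k + 1].insert(0, last)
    return clusters

# (9,) is closer to the centroid of the second cluster, so it moves:
print(local_tuning_pass([[(0,), (1,), (9,)], [(10,), (11,)]]))
# [[(0,), (1,)], [(9,), (10,), (11,)]]
```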

4 RESULTS

In order to perform comparative testing, the following parameters were used:

Amount of chromosomes: 50
Amount of generations:
Mutation probability:
X:
K:
R:
% Compression:

The results of the testing are summarized in the table below:

1  15.55  20.42  14.61
1  15.97  27.62  14.88
1  15.69  20.96  14.94
1  15.72  20.88  14.39
1  15.72  29.99  14.65
2  15.73  28.50  14.54
2  15.67  22.11  14.92
2  15.31  28.28  14.44
2  15.60  26.82  14.60
2  15.75  22.05  14.70

• The Local Tuning method appears to be the best one when we have a fixed amount of clusters. As we mentioned before, this method improves the Fixed Clustering method's performance considerably. We think that an adequate value for the R parameter should be between 50 and 100. Below these values its action is not so noticeable, and above them we do not obtain any further improvements. This can be explained by the fact that with a fixed amount of clusters the image cannot be improved beyond a certain limit. Therefore, when there are no changes in one cycle, there is no sense in continuing to apply this method, because the mentioned limit has been reached.

• The Dynamic Clustering with Fixed Clusters did not produce good results. In most of the cases, the results obtained with this method were worse than the ones obtained with the Fixed Clustering method. We observed that in the images compressed by this method the clusters created were too big, so there was much averaging in them. This characteristic can be observed in the image backgrounds, which appeared practically uniform. On the other hand, object details are kept perfectly. Another problem that arises in this method concerns the definition of the object edges, which are not sharply defined. In spite of what we have exposed here, we do not want to completely discard this method, because we think that it could reach good results with images of other characteristics, such as uniform colors and well defined edges.

• The results obtained with the Dynamic Clustering with Variable Clusters were excellent, and by analyzing them we can observe a great improvement in the image by sacrificing a little compression.

5 CONSTRAINTS

The main constraint encountered in this work is that all the testing has been performed on the same image. In consequence, all the evaluations and performance comparisons among the different methods are valid in this context.

We consider that it would be important to perform tests using other images with different sizes and characteristics to confirm or complete the results exposed in this paper.

6 CONCLUSIONS

Based on the results obtained and detailed in the previous sections, we conclude the following:

• The genetic algorithms constitute a powerful tool that can succeed in constructing a clustering algorithm.

• Among the methods proposed to perform the clustering, Local Tuning and Dynamic Clustering with Variable Clusters turned out to be the best ones.

• In spite of its simplicity, the Fixed Clustering method produced acceptable results. Anyhow, we do not recommend its use, because it can be widely improved by the Local Tuning method with almost no increment in computing time.

• As a final conclusion, we propose the use of the Local Tuning method when the clustering is constrained to a fixed number of clusters, and the Dynamic Clustering with Variable Clusters method when the amount of clusters can be increased to obtain a better representation of the image.


7 FUTURE INVESTIGATION TRENDS

We think this work can be used as a basis to continue the research in image clustering methods using GAs. In this section we propose some research trends that can be of interest:

• Implementation of the Local Tuning method entering the compression percentage as a parameter. In this way we would let the number of clusters be variable.

• Comparison of the performance of the different clustering methods using different types of images. This research can complete or confirm the results obtained in our work.

• Implementation of a clustering algorithm based on some of the proposed methods, using as parameters the maximum and minimum error desired with regard to the original image. The algorithm must find the amount of clusters required to comply with the specifications while achieving the maximum possible compression.

• Improvement of the Local Tuning method so that the user does not have to enter its number of cycles. The method will detect when there are no changes in a cycle, and in this case it will stop automatically.

• A slight modification may be added to all the methods proposed in our work. Once the amount of clusters to be used is determined, fixed or variable, a part of them can be assigned to the clustering of the image background and the rest can be used for the clustering of the image details.

• Combination of various methods to perform the clustering of different parts of the image. For example, one could use Local Tuning for the clustering of the image background and Dynamic Clustering with Variable Clusters for the clustering of the image details.

REFERENCES

Aldenferder, M. & Blashfield, R., 1984, Cluster Analysis, Sage Publications, Beverly Hills.

Bhuyan, J., 1991, Genetic Algorithm for Clustering with an Ordered Representation, Proceedings of the Fourth International Conference on Genetic Algorithms, pp. 408-415.

Brown, D., Huntley, R. & Spillane, C., 1989, A parallel genetic heuristic for the quadratic assignment problem, Proceedings of the Third International Conference on Genetic Algorithms, pp. 406-415.

Everitt, B., 1980, Cluster Analysis, Heinemann Educational, London.

Fisher, W., 1958, On grouping for maximum homogeneity, Journal of the American Statistical Association, 53, pp. 789-798.

Goldberg, D., 1989, Genetic Algorithms in Search, Optimization & Machine Learning, Addison-Wesley Publishing Company Inc.

Gordon, A. & Henderson, J., 1977, An algorithm for Euclidean sum of squares classifications, Biometrics, 33, pp. 355-362.

Holland, J., 1975, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor.

Ismail, M. & Kamel, M., 1989, Multidimensional data clustering utilizing hybrid search strategies, Pattern Recognition, Vol. 22, No. 1, pp. 75-89.

Klein, R. & Dubes, D., 1989, Experiments in projection and clustering by simulated annealing, Pattern Recognition, Vol. 22, pp. 213-220.