Faculdade de Engenharia da Universidade do Porto
Solving multiobjective industrial scheduling problems by metaheuristics
David Ricardo Simões Rola
FINAL VERSION
Dissertation carried out within the Integrated Master in Electrical and Computer Engineering
Major: Automation
Supervisor: Prof. Dr. Jorge Manuel Pinho de Sousa    Co-supervisor: Dr. Iryna Yevseyeva
29 July 2011
© David Ricardo Simões Rola, 2011
Abstract
The Job Shop Scheduling Problem (JSSP) addresses competitiveness concerns in industrial environments. It is a combinatorial optimization problem and one of the best-known machine scheduling problems. Our goal is to determine, for each machine, the sequence of jobs (a schedule) that optimizes two objectives: makespan and total tardiness. Since this goal cannot in general be achieved with exact methods, we use metaheuristics, and among them we choose to apply Multiobjective Genetic Algorithms (MOGAs). NSGA II and SMS-EMOA are two very effective MOGAs, and they are the ones applied here. Testing these methods on several instances of the bi-objective JSSP in this master thesis demonstrates the efficiency of the selected methods.
Acknowledgements
My thanks go to all my family and friends, who supported me during the preparation of this master thesis. In particular to my mother, to my sisters, and to Rui, who was an ever-present partner throughout this work.
A special thanks to Dr. Iryna Yevseyeva, who was the co-supervisor of this thesis at INESC. Without her constant contribution, none of this work would have been possible.
Finally, my thanks to everyone at INESC, especially at the UESP unit, who welcomed me and made me feel at home.
Index
Chapter 1 - Introduction
Chapter 2 - Literature Survey
  2.1 - Introduction
  2.2 - Literature Survey on Job-Shop (JSSP)
    2.2.1 - Definition of JSSP - Job Shop Scheduling Problem
    2.2.2 - Some objectives
    2.2.3 - Simple classification of JSSP
      2.2.3.1 - Machine Singularity
      2.2.3.2 - Preemptions (prmp)
      2.2.3.3 - Others
  2.3 - Complexity (NP-hard)
  2.4 - Multiobjective optimization for Combinatorial problems
  2.5 - Literature survey on Multiobjective Metaheuristics
    2.5.1 - Local Search
    2.5.2 - Tabu Search
    2.5.3 - Genetic Algorithms
  2.6 - Existing MOGAs
    2.6.1 - MOGAs Survey
  2.7 - Hypervolume based selection MOEAs: HypE and SMS-EMOA
    2.7.1 - Hypervolume (S metric)
    2.7.2 - Other MOEAs (hypervolume based): HypE and SMS-EMOA
  2.8 - Job Shop with MOGAs
    2.8.1 - Summary of some scheduling applications (JSSP)
Chapter 3 - NSGA II
  3.1 - Algorithm
  3.2 - Fast Nondominated Sorting
  3.3 - Diversity Preservation
  3.4 - Binary Tournament
Chapter 4 - SMS-EMOA
  4.1 - Algorithm
  4.2 - Calculating Hypervolume for SMS-EMOA
Chapter 5 - Adapting MOGAs for JSSP
  5.1 - Encoding
    5.1.1 - Operation-Based Representation
    5.1.2 - Job-Based Representation
    5.1.3 - Preference-list Based Representation
      5.1.3.1 - Preference-List Based GA representation: Algorithm
  5.2 - Evaluation of Objective Function Values
  5.3 - Genetic Operators
    5.3.1 - Selection
    5.3.2 - Crossover
    5.3.3 - Mutation
Chapter 6 - Computational Analysis
  6.1 - Parameters of the MOGAs
  6.2 - Comparison of results
Chapter 7 - Experimental results
  7.1 - Results: NSGA II vs SMS-EMOA
Chapter 8 - Conclusions and future work
References
Appendix
Figure List
Figure 1.1 - General Schema of NSGA II
Figure 2.1 - Population of Example 1.1 (X-axis, objective 1: total tardiness; Y-axis, objective 2: makespan)
Figure 2.2 - Analysis of solution "A" of Example 1.1 (X-axis, objective 1: total tardiness; Y-axis, objective 2: makespan)
Figure 2.3 - Illustration of the final results of Example 1.1 (X-axis, objective 1: total tardiness; Y-axis, objective 2: makespan)
Figure 3 - Crowding distance illustration (X-axis, objective 1: total tardiness; Y-axis, objective 2: makespan)
Figure 4 - General Schema of SMS-EMOA
Figure 5 - Hypervolume calculation (X-axis, objective 1: total tardiness; Y-axis, objective 2: makespan)
Figure 6.1 - Operation-based representation
Figure 6.2 - Job-based representation
Figure 7.1 - Possible job sequence for Example 2
Figure 7.2 - Possible job sequence for Example 2
Figure 7.3 - Possible job sequence for Example 2
Figure 7.4 - Possible job sequence for Example 2
Figure 7.5 - Gantt chart for the previous sequence from Example 3
Figure 8 - Area between each point and the reference point (the overlapped area is considered only once). The points flagged as x1, x2, x3 and x4 represent four different solutions belonging to the Pareto front of the final population. The red point (Pref) represents the reference point.
Figure 9.1 - Pareto front evolution comparison for both methods. Results for [10 x 5] instances (X-axis, objective 1: total tardiness; Y-axis, objective 2: makespan)
Figure 9.2 - Pareto front evolution comparison for both methods. Results for [20 x 5] instances (X-axis, objective 1: total tardiness; Y-axis, objective 2: makespan)
Figure 9.3 - Pareto front evolution comparison for both methods. Results for [50 x 5] instances (X-axis, objective 1: total tardiness; Y-axis, objective 2: makespan)
Table List
Table 1 - Table showing the final results from Example 1.1
Table 2 - Comparison of NSGA II and SMS-EMOA parameters
Table 3 - List of Job Shop most usual representations
Table 4.1 - Matrix of instance (3 x 3) from Example 2 showing the machine corresponding to each operation
Table 4.2 - Matrix of instance (3 x 3) from Example 2 showing the processing times of corresponding operations on machines from Table 4.1
Table 5.1 - One-point crossover illustration
Table 5.2 - Two-point crossover illustration
Table 5.3 - Simple mutation illustration
Table 5.4 - Insert illustration
Table 5.5 - Swapping illustration
Table 6.1 - Results from 10 x 5 instance runs
Table 6.2 - Results from 20 x 5 instance runs
Table 6.3 - Results from 50 x 5 instance runs
Table 6.4 - Average results
Symbols and Abbreviations
Abbreviations List
COP Combinatorial Optimization Problem
ENGA Elitist Nondominated Sorting Genetic Algorithm
ES Evolution Strategies
GA Genetic Algorithm
HypE Hypervolume Estimation Algorithm for Multiobjective Optimization
JSSP Job Shop Scheduling Problem
LS Local Search
MOEA Multi Objective Evolutionary Algorithm
MOGA Multi Objective Genetic Algorithm
MOP Multi Objective Problem
NSGA Nondominated Sorting Genetic Algorithm
NSGA II Nondominated Sorting Genetic Algorithm II
NP Nondeterministic Polynomial (time)
PESA Pareto Envelope Based Selection Algorithm
PESA II Pareto Envelope Based Selection Algorithm II
RDGA Rank-Density Based Genetic Algorithm
RWGA Random Weighted Genetic Algorithm
SMS-EMOA S-metric selection Evolutionary multiObjective Algorithm
SPEA Strength Pareto Evolutionary Algorithm
SPEA II Strength Pareto Evolutionary Algorithm II
VEGA Vector Evaluated Genetic Algorithm
VNS Variable Neighbourhood Search
WBGA Weight-based Genetic Algorithm
IDE Integrated Development Environment
Symbols List
Jj Individual Job
mk Individual machine
Ojk Individual operation of job Jj on machine mk
pjk Individual processing time of job Jj on machine mk
dj Due date of job Jj
Cmax Makespan of the schedule
Cj Completion time of job Jj
Lj Lateness of job Jj
Tj Tardiness of job Jj
Ttotal Total Tardiness of the schedule
α α field of scheduling description triplet
β β field of scheduling description triplet
γ γ field of scheduling description triplet
prmp Preemptions
Pm Identical machines in parallel
Qm Machines in parallel with different speeds
Rm Unrelated machines in parallel
batch Batch processing
brkdwn Breakdown
prmu Permutation
rcrc Recirculation
≺ Dominance relation (s,p ∈ Rd, s≺p, s dominates p)
∆S Hypervolume measure (S-metric)
λ Lebesgue measure
Fi Individual Pareto front
Chapter 1
Introduction
Nowadays, planning and production control are essential for a business to be competitive on the market. In this context, scheduling plays a very important role, being a crucial part of the decision-making field. Organizing productive processes efficiently helps meet deadlines and due dates, minimizes lead times and maximizes capacity. The Job Shop Scheduling Problem (JSSP) addresses these issues in an industrial environment.
A job shop is a functional organization in which departments are organized around particular processes, such as specific operations like drilling or assembly in a factory. The produced goods or the offered services are triggered by individual orders from specific clients. For now, the JSSP can be defined as a set of jobs that have to be processed on a set of machines. Each job has a specific execution order over the machines, which is equivalent to saying that each job has an ordered list of operations, each one defined by the required machine and its processing time.
The main goal is to determine the sequence of jobs for each machine (a schedule) that optimizes one or more objectives. The most common objective is makespan, the time between the start of the first job and the end of the last job. Minimizing makespan is very important and very widely used. There are other important objectives, such as tardiness. Tardiness is the delay of a job with respect to its due date. The sum of the tardiness values of all jobs in the schedule constitutes the total tardiness.
The main goal of this Master Thesis is to solve the JSSP by minimizing makespan and total tardiness simultaneously. This bi-objective problem is the simplest version of the more general class of multi-objective problems (MOPs). An MOP is the problem of finding a set of decision variables that satisfies constraints and optimizes a vector function whose elements represent the objective functions. There can be a very large number of constraints inherent to this problem.
That is what makes the JSSP a very difficult problem. The JSSP is a combinatorial optimization problem, a kind of problem that consists in searching over all the possible solutions from a finite set and finding an optimal one. The JSSP is classified as an NP-hard problem, as are many other combinatorial optimization problems. NP-hard problems are difficult to solve with exact methods; therefore, we use metaheuristics. A metaheuristic is a stochastic computational method that searches for near-optimal solution(s) to a problem by iteratively improving candidate solution(s).
We intend to solve the JSSP in order to minimize makespan and tardiness, obtaining near-optimal solutions within satisfactory computational time. In real production environments, nearly optimal results can be achieved within satisfactory computational time by metaheuristics. Some of those metaheuristics are called genetic algorithms (GAs), one of the metaheuristic families commonly applied to scheduling problems.
Therefore, in order to achieve the primary goal mentioned above, the application of genetic algorithms (GAs) is a very good option. GAs were originally proposed by John Holland [1]. Based on analogies with Darwin's natural selection, GAs start from an initial population of solutions and then, over a certain number of generations, aim to improve the quality of the solutions. This process occurs through "genetic mechanisms" such as selection of the "best" parents, their mutation and reproduction. Later, the GA idea was extended to MOPs by Schaffer [2], originating Multi-Objective Evolutionary Algorithms (MOEAs).
In this work, these methods (GAs) are adapted to the JSSP. Recently, two of the most efficient MOGAs have been NSGA II and SMS-EMOA, and these are the two GAs adopted for our problem. Another goal of this Master Thesis is to show how NSGA II and SMS-EMOA can be applied to solve the JSSP and to evaluate and compare their results.
The content of this master thesis is divided into eight main sections. Section 2 presents the literature survey. In Sections 3 and 4, the NSGA II and SMS-EMOA algorithms are described, respectively, with the details of each algorithm. Section 5 shows how to adapt a GA to solve the JSSP. In Section 6 the parameters of each method are selected. In Section 7 the main results obtained with these algorithms are presented. Finally, Section 8 concludes the thesis with final comments and indicates future work.
Chapter 2
Literature Survey
2.1 - Introduction
The individual chapters of this thesis are relatively self-contained and include references to relevant existing work. This section presents a literature survey in an attempt to introduce the most important concepts and work in the scope of this Master Thesis.
2.2 - Literature Survey on Job-Shop (JSSP)
2.2.1 - Definition of JSSP-Job Shop Scheduling Problem
There are machine scheduling problems in manufacturing and service environments where jobs represent activities, machines represent resources, and each machine can process at most one job at a time. This kind of scheduling problem is known as the job shop.
Thus, the JSSP can be described as a set of n jobs denoted by Jj, where j = 1, 2, …, n, which have to be processed on a set of m machines denoted by mk, where k = 1, 2, …, m. The operation of the jth job on the kth machine is denoted by Ojk, with processing time pjk. Once a machine starts to process a job, no interruption is allowed [3].
2.2.2 – Some objectives
In this section, some of the most common criteria in the JSSP are presented, namely makespan and tardiness.
The completion time of the operation of job j on machine i is denoted by Cij. The time at which job j is completed is denoted by Cj, that is, its completion time on the last machine on which it requires processing.
Makespan (Cmax) is defined as max (C1, . . ., Cn), and is equivalent to the completion time
of the last job to leave the system. A minimum makespan usually implies a good utilization of
the machine(s).
The objective may also be a function of the due dates:
Due date (dj): the due date dj of job j represents the committed shipping or completion date (i.e., the date the job is promised to be delivered to the customer). Completion of a job after its due date is allowed, but then a penalty is incurred.
The lateness of job j is defined as:
Lj = Cj − dj ,
this is positive when the job j is completed late and negative when it is completed early.
We can define tardiness of the job j as:
Tj = max (Cj − dj, 0) = max (Lj, 0).
The difference between the tardiness and the lateness lies in the fact that the tardiness
is never negative, and is equal to zero if the job is finished before its due date.
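To make these definitions concrete, the following short Python sketch (our illustration, not code from this thesis; the function name schedule_objectives and its arguments are hypothetical) computes makespan, lateness, tardiness and total tardiness from given completion times and due dates:

def schedule_objectives(completion_times, due_dates):
    # makespan Cmax: completion time of the last job to leave the system
    makespan = max(completion_times)
    # lateness Lj = Cj - dj (may be negative); tardiness Tj = max(Lj, 0)
    lateness = [c - d for c, d in zip(completion_times, due_dates)]
    tardiness = [max(l, 0) for l in lateness]
    total_tardiness = sum(tardiness)
    return makespan, total_tardiness

# Example: three jobs finishing at times 10, 14, 9 with due dates 12, 11, 9
print(schedule_objectives([10, 14, 9], [12, 11, 9]))  # -> (14, 3)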
Other important objectives are [4]:
Maximum Lateness
Total weighted completion time
Discounted total weighted completion time
Total weighted tardiness
Weighted number of tardy jobs
Note: These objectives are not considered in detail here
2.2.3 – Simple classification of JSSP
Note: A scheduling problem is described by a triplet α | β | γ [4]. The α field describes
the machine environment and contains just one entry. The β field provides details of
processing characteristics and constraints and may contain no entry at all, a single entry, or
multiple entries. The γ field describes the objective to be minimized and often contains a
single entry.
JSSP models can introduce some variations. Some of those are presented next:
2.2.3.1. Machine Singularity
Single machine (1) - The case of a single machine is the simplest of all possible
machine environments but usually not realistic, and it is a special case of all other more
complicated real machine environments.
Identical machines in parallel (Pm) -There are m identical machines working in
parallel. Job j requires a single operation and may be processed on any one of the m
machines. If job j cannot be processed on just any machine, but only on any one belonging to
a specific subset Mj, then the entry Mj appears in the β field.
Machines in parallel with different speeds (Qm) - There are m machines working in
parallel, but with different speeds. The speed of machine i is denoted by vi. The processing
time pij that job j spends on machine i is equal to pj/vi (assuming job j receives all its
processing work from machine i). This environment is referred to as uniform machines. If all
machines have the same speed, i.e., vi = 1 for all i and pij = pj , then the environment is
identical to the previous one.
Unrelated machines in parallel (Rm) This environment is a further generalization of
the previous one. There are m different machines working in parallel. Machine i can process
job j at speed vij. The time pij that job j spends on machine i is equal to pj/vij (again assuming
job j receives all its processing from machine i). If the speeds of the machines are
independent of the jobs, i.e., vij = vi for all i and j, then the environment is identical to the
previous one.
2.2.3.2 - Preemptions (prmp)
Preemptions imply that it is not necessary to keep a job on a machine, once it has been
started, until its completion. Sometimes, it is allowed to interrupt the processing of a job
(preempt) at any point in time and put a different job on the machine instead. The processed
part of a preempted job is not lost. When a preempted job is afterwards put back on the
machine (or on another machine in the case of parallel machines), it occupies the machine
for its remaining processing time. When preemptions are allowed prmp is included in the β
field; otherwise, preemptions are not allowed.
2.2.3.3. Others
Other variations/classification parameters can be introduced in the JSSP [4]; the following are not considered here:
Batch processing (batch (b))
Breakdowns (brkdwn)
Permutation (prmu)
Recirculation (rcrc)
2.3 - Complexity (NP-hard)
“Combinatorial programs can be computationally intractable [5] and this is often the
case if the problem is a NP-problem (NP stands for Non-Deterministic Polynomial time). NP-
problems are problems for which no algorithm exists that can solve all instances of the
problem to optimality within a computational time that grows only polynomial with the size
of the problem instance.”[6]
The JSSP is proved to be NP-hard [5], which means that no exact algorithm is known that finds an optimal solution in polynomial time. These concepts are illustrated next:
NP-complete problems are the hardest problems in NP: such a problem is both NP-hard and in NP.
NP-hard (non-deterministic polynomial-time hard) is a class of problems that are, informally, "at least as hard as the hardest problems in NP"; more precisely, it is the class of problems that are at least as hard as the hardest decision problems that can be solved by a nondeterministic Turing machine in polynomial time.
Polynomial time: an algorithm is said to run in polynomial time if its running time is upper bounded by a polynomial expression in the size of its input; more formally, T(n) = O(n^k), where k is a constant and T(n) is the execution time on an input of size n.
Non-deterministic Turing machine: a Turing machine that may have more than one possible next state for some combinations of the current cell content and the current state.
2.4 - Multiobjective optimization for Combinatorial problems
The Multiobjective Optimization Problem (MOP) (also called multicriteria optimization,
multiperformance or vector optimization problem) can be defined as the problem of finding:
“a vector of decision variables which satisfies constraints and optimizes a vector function
whose elements represent the objective functions. These functions form a mathematical
description of performance criteria which are usually in conflict with each other.”[7]
Optimization is the task of finding one or more solutions which correspond to minimizing (or
maximizing) one or more specified objectives and which satisfy all existing constraints.
Mathematically, a general MOP can be defined as minimizing (or maximizing) a vector function F(x) = (f1(x), . . . , fk(x)) subject to inequality constraints gi(x) ≤ 0, i = 1, . . . , w, and equality constraints hj(x) = 0, j = 1, . . . , p, with x ∈ Ω. An MOP solution minimizes (or maximizes) the components of the vector F(x), where x = (x1, . . . , xn) is an n-dimensional decision variable vector from some universe Ω. Note that gi(x) ≤ 0 and hj(x) = 0 represent constraints that must be fulfilled while minimizing (or maximizing) F(x), and Ω contains all possible x that can be used to evaluate F(x). Thus, an MOP consists of k objectives reflected in the k objective functions, w + p constraints and n decision variables [7].
The next concept of this section is dominance. The dominance relation states that a solution s is better than a solution p (s, p ∈ Rd) if it is strictly better in at least one objective and not worse in the remaining objectives. Assuming minimization of all objectives, this can be written as: fk(s) ≤ fk(p) for all k ∈ {1,…,o} and ∃ l ∈ {1,…,o}: fl(s) < fl(p), where fk(s) and fk(p), k ∈ {1,…,o}, are the evaluations of s and p in the objective space Ro. A point in the search space is called non-dominated within a set if no other point in that set dominates it. The set of non-dominated solutions is called the Pareto set or Pareto-optimal set.
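As an illustration of this definition (a minimal sketch of ours, assuming minimization of all objectives as stated above), the following function tests whether a solution s dominates a solution p given their objective vectors:

def dominates(s, p):
    # s dominates p: no worse in every objective and strictly better in at least one
    no_worse = all(fs <= fp for fs, fp in zip(s, p))
    strictly_better = any(fs < fp for fs, fp in zip(s, p))
    return no_worse and strictly_better

# With the two objectives used in this thesis (total tardiness, makespan):
print(dominates((4, 4), (5, 9)))  # True: better in both objectives
print(dominates((2, 7), (6, 1)))  # False: these two solutions are mutually non-dominated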
A combinatorial optimization problem (COP) is specified by a finite set of objects and a cost function that assigns a numerical value to each solution. Solving a COP consists in finding the combination of objects that optimizes the objective function value. As the number of objects increases, the problem becomes harder, which is the case for the JSSP.
2.5 - Literature survey on Multiobjective Metaheuristics
2.5.1 – Local Search
Local search (LS) is likely the oldest and simplest metaheuristic method. Local search
starts at a given initial solution and at each iteration, the heuristic may replace the current
solution by a neighbor if it improves the objective function. The search stops when all
candidate neighbors are checked and appear to be worse than the current solution, meaning
a local optimum is reached. For large neighborhoods, the candidate solutions may be a subset
of the neighbors. The main objective of this restricted neighborhood strategy is to speed up
the search [8].
The problem of LS resides in being stuck in a local optimum and thus it is not efficient in
its original form for global optimization. To overcome limitations of LS, different strategies
were suggested such as Variable Neighbourhood Search (VNS) [9].
2.5.2. Tabu Search
Tabu Search (TS) is a metaheuristic method originally proposed by Fred Glover [10], with
applications to various operational combinatorial problems, in most cases providing near
optimal effective solutions.
The basic principle of TS is to pursue LS whenever it encounters a local optimum by
allowing non-improving moves. Cycling back to previously visited solutions is prevented by
the use of memories, called tabu lists, which record the recent history of the search, (a key
idea that can be linked to Artificial Intelligence concepts). It is interesting to note that, the
same year, Hansen proposed a similar approach, which he named steepest ascent/mildest
descent [11].
2.5.3. Genetic Algorithms
The genetic algorithm (GA) metaheuristic was introduced by John Holland, whose 1975 book Adaptation in Natural and Artificial Systems was instrumental in creating what is now a productive field of research and application that goes much wider than the original GA [12]. In comparison to the two previous approaches, which work with one solution at a time and are called single-solution based, this approach works with a set of solutions and is thus population-based.
In a GA, the objective is to select good quality solutions at each generation and improve the overall fitness of the individuals of the population. It relies on the two principal mechanisms at the basis of C. Darwin's theory of evolution of living beings:
selection, which supports the survival of the fittest individuals,
and reproduction, which forms offspring by recombining variations of the hereditary features of the parents, producing offspring with potentially better features.
In a GA, a representation must be chosen for the individuals of the population. Classically, an individual is represented as a chromosome of binary (boolean) values. Other representations are also popular, such as a list of integers for combinatorial problems, a vector of real numbers for numerical problems in continuous spaces, or more complex combinations of several representations [13].
2.6 – Existing MOGAs
2.6.1. MOGAs Survey
Key Concept: Elitism is a selection strategy, in which parents are preserved for several
generations. All non-dominated solutions discovered by a multi-objective GA are considered
elite solutions.
This MO survey is done based on the review in [14]:
The first MOGA presented here is the Weight-Based Genetic Algorithm (WBGA) for multiobjective optimization (WBGA-MO), proposed by Hajela and Lin [15]. In this method, each solution xi in the population uses a different weight vector wi = (w1, w2, …, wk); the weighted-sum objective function is then:
min z = w1 z1'(x) + w2 z2'(x) + … + wk zk'(x)    (1)
Random Weighted Genetic Algorithm (RWGA) is also a weighted sum approach. This
method was proposed by Murata T, Ishibuchi H and Tanaka H [16]. In this method, a
normalized weight vector wi is randomly generated for each solution xi during the selection
phase at each generation. This approach aims to stipulate multiple search directions in a
single run without using additional parameters.
The Vector Evaluated GA (VEGA) was proposed by Schaffer [2]. In this method the population Pt is randomly divided into K equal-sized sub-populations P1, P2, …, PK. Then, each solution in subpopulation Pi is assigned a fitness value based on the objective function value zi. From these subpopulations, solutions are selected by proportional selection for reproduction with crossover and mutation. Crossover and mutation are performed in the same way as in a single-objective GA.
All the approaches described above are modifications of single-objective optimization. A fundamentally different approach is the so-called Pareto ranking approach. The first Pareto ranking technique was proposed by Goldberg [17]: the population is ranked according to a dominance rule, and then each solution is assigned a fitness value based on its rank in the population.
Multi-objective Genetic Algorithm (MOGA)[18] was the first multi-objective GA that
explicitly used Pareto-based ranking and niching techniques together to encourage the search
toward the true Pareto front while maintaining diversity in the population.
Strength Pareto Evolutionary Algorithm (SPEA) [19] uses a ranking procedure to assign
better fitness values to non-dominated solutions in under represented regions of the
objective space. In SPEA II [20], a density measure is used to discriminate between solutions
with the same rank, where the density of a solution is defined as the inverse of the distance
to its kth closest neighbor in objective function space.
In Pareto Envelope-based Selection Algorithm (PESA) [21], between two non-dominated
solutions, the one with a lower density is preferable. PESA II [22] follows a more direct
approach, namely region-based selection, where cells but not individual solutions are
selected during the selection process. In this approach, a cell that is sparsely occupied has a
higher chance to be selected than a crowded cell. Once a cell is selected, solutions within
the cell are randomly chosen to participate in reproduction (crossover and mutation).
Rank-Density Based Genetic Algorithm (RDGA) [23] uses a cell-based density approach in
an interesting way to convert a general K-objective problem into a bi-objective optimization
problem with the objectives to minimize the individual rank value and density of the
population.
The Non-Dominated Sorting GA (NSGA) [24] and NSGA II [25] are two examples of these Pareto ranking approaches. NSGA II is both easy to implement and efficient, and has been widely applied to many complex, real-world multi-objective problems. It sorts the population into non-dominated fronts and behaves as a pure elitist GA. Multi-objective selection based on dominated hypervolume (SMS-EMOA) has also proven to be easy to implement and efficient at the same time [26].
For contemporary evolution strategies (ESs), a symbolic notation was introduced by Schwefel [27]. Very briefly, the abbreviation (µ+λ)-ES stands for an ES that generates λ offspring from µ parents and selects the µ best individuals from the µ+λ individuals (the merged population of parents and offspring).
To conclude, there are many other existing MOGAs. The next section reports on the "latest generation of MOGAs", also known as hypervolume-based MOGAs.
2.7. Hypervolume based selection MOEAs: HypE and SMS-EMOA
2.7.1. Hypervolume (S metric)
The hypervolume is the bi-dimensional or multi-dimensional space dominated by the Pareto-optimal solutions. It can be mathematically formulated as [26]:
∆S(B, yref) := λ( ∪y∈B { y' | y ≺ y' ≺ yref } ), B ⊆ ℝm    (2)
where yref ∈ ℝm denotes a reference point that should be dominated by all Pareto-optimal solutions, and λ denotes the Lebesgue measure.
It has been proven that, given a finite search space and a reference point, maximizing the hypervolume is equivalent to finding the Pareto set [28]. The absolute value of the hypervolume depends on the reference point, but the reference point only influences the relative contribution of the boundary solutions. The ranking of the solutions is invariant to linear scaling of the objective functions.
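As an illustration of Equation (2) for the bi-objective case used in this thesis (a sketch of ours, not the thesis implementation; the function name is hypothetical), the hypervolume of a non-dominated set can be computed by sweeping the points sorted by the first objective and accumulating the dominated area up to the reference point:

def hypervolume_2d(front, ref):
    # S-metric of a non-dominated set of bi-objective points (minimization):
    # the area dominated by the front and bounded by the reference point,
    # assuming every point of the front dominates ref
    pts = sorted(front)                          # ascending in f1, hence descending in f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)     # horizontal slab added by this point
        prev_f2 = f2
    return hv

# Front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4):
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)))  # -> 6.0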
2.7.2. Other MOEAs (hypervolume based): HypE and SMS-EMOA
This section introduces hypervolume-based evolutionary algorithms such as SMS-EMOA, MO-CMA-ES and HypE:
HypE stands for Hypervolume Estimation Algorithm for Multiobjective Optimization. HypE uses a sampling strategy together with an advanced fitness assignment scheme that can be applied to both mating and environmental selection. By adjusting the number of samples, accuracy can be traded off against the overall computing time budget. HypE makes hypervolume-based search possible also for many-objective problems.
The S-metric selection EMOA (SMS-EMOA) is another technique of this "hypervolume generation". It has been designed to cover a maximal hypervolume with a finite number of points. It borrows ideas from other EMOAs (like NSGA II) and is founded on two main pillars:
Non-dominated sorting is used as a ranking criterion.
The hypervolume is applied as a selection criterion to discard the individual with the smallest hypervolume contribution among the individuals belonging to the worst-ranked front.
HypE and SMS-EMOA are currently the most advanced techniques in this area, but the most well-known and widely applied technique is NSGA II (Nondominated Sorting Genetic Algorithm II, previously mentioned). In this master thesis, the complete procedures of the applied versions of NSGA II and SMS-EMOA are given in Sections 3 and 4, respectively.
2.8. Job Shop with MOGAs
2.8.1. Summary of some scheduling applications (JSSP)
In this sub-section, selected MOGAs for job shop scheduling problem are discussed:
Bagchi proposes a variation of the NSGA for job shop scheduling, which is The Elitist
Nondominated Sorting-GA (ENGA) [29]. It uses elitism and non-dominated sorting, niche
formation and fitness sharing as in the NSGA. But selection is done differently because ENGA
uses a plus selection strategy (parents compete against their children). Non-dominated
sorting is done on the mixed population of parents and offspring. Three objectives are
minimized: makespan, mean flow time and mean tardiness of jobs. ENGA was able to find
more elements of the Pareto optimal set than NSGA using the same running time. Also, given a pre-specified number of Pareto optimal solutions that the decision maker wished to obtain, ENGA was able to find them faster than NSGA.
Another method developed by Tamaki [30] applied the Pareto reservation strategy to
solve a job shop scheduling problem with several jobs and several machines, in which two
objectives are minimized: the total flow time and the total earliness and tardiness. Liang
and Lewis use a GA with a linear combination of weights to solve a job shop scheduling
problem, in which two objectives are minimized: mean flow time and mean lateness [7]
Chapter 3
NSGA II
3.1 – Algorithm
NSGA II is a fast and elitist multiobjective genetic algorithm based on the two following principles: a fast nondominated sorting approach and diversity preservation. This section explains the main characteristics of the method. Its general schema is shown in Figure 1.1.
It illustrates the NSGA II algorithm adapted and implemented in this Master Thesis to solve the Job Shop Scheduling problem. First, an initial population (P0) of size N is generated. Then, the population is sorted into ranks by the Fast Nondominated Sort procedure, and a crowding distance is assigned to the elements of each rank (both procedures are described in the next subsections). Next, binary tournament selection is applied to the parent population to select the parents to be varied. This tournament selection consists in taking two random solutions and comparing them (according to the procedure in Section 3.4). In total, N comparisons are made to obtain the parents of the offspring produced by the variation operators (mutation and/or crossover). The new population (Pt+1), of size N, is then selected from both parents and offspring (R = P ∪ Q, of size 2N).
The merged parent and offspring population is sorted with the Fast Nondominated Sorting procedure, and then the best N elements are selected from the first fronts. If only some of the many solutions belonging to the last included front can be selected, they are chosen according to the best crowding distance values.
Figure 1.1 follows:
Figure 1.1 - General schema of the NSGA II
3.2 – Fast Nondominated Sorting
Fast Nondominated Sorting Procedure consists in allocating every single solution from a
set of solutions into nondominated fronts. In Algorithm 1 the procedure is illustrated [25]:
Fast-Non-Dominated-Sort(P):
  (Part 1: find the first front)
  for each p ∈ P
      Sp = ∅; np = 0                              set of solutions dominated by p; domination counter of p
      for each q ∈ P                              compare p with every other solution in P
          if p ≺ q then Sp = Sp ∪ {q}             p dominates q: add q to Sp
          else if q ≺ p then np = np + 1          q dominates p: increment the counter of p
      if np = 0 then prank = 1; F1 = F1 ∪ {p}     p is nondominated: it belongs to the first front
  (Part 2: main cycle, build the remaining fronts)
  i = 1                                           initialize the front counter
  while Fi ≠ ∅
      Q = ∅                                       stores the members of the next front
      for each p ∈ Fi
          for each q ∈ Sp
              nq = nq - 1                         one dominator of q has been placed in a front
              if nq = 0 then qrank = i + 1; Q = Q ∪ {q}     q belongs to the next front
      i = i + 1                                   increment the front counter
      Fi = Q                                      Q is the next front
Algorithm 1 - Fast-Non-Dominated-Sort
The first part of the algorithm above (Part 1) consists in finding the best front. The solutions that constitute the best-ranked front are the set of nondominated solutions (those with np = 0).
Then, the second part of the algorithm (Part 2, the main cycle) finds the remaining fronts progressively by the same process. Depending on the values of the domination counters at the different iterations, the solutions are allocated to fronts. To better understand this process, let us analyze the next simple example with two objectives, both to be minimized:
Example 1.1:
Let P be the initial population:
P = {A, B, C, D, E, F, G, H}
A=(2,7); B=(4,4); C=(6,1); D=(5,9); E=(7,5); F=(9,2); G=(8,8); H=(12,6)
Note: objective 1 is total tardiness and objective 2 is makespan.
Figure 2.1 - Population of Example 1.1; (X-axis-objective 1: total tardiness; Y-axis-objective 2: makespan)
Let p and q be two solutions of P to be compared. If p dominates q, then q is added to the set of solutions dominated by p (Sp). Otherwise, if q dominates p, the domination counter of p (np) is incremented. Applying this to the example, we get:
For solution A:(2,7):
Figure 2.2-Analysis of solution “A” of Example 1.1; (X-axis-objective 1: total tardiness; Y-axis-objective 2: makespan)
Observing the figure above, we can find the set of solutions dominated by A. In this
case, D and G are the solutions dominated by A, therefore, the set of solutions dominated by
A is denoted by:
SA = {D, G}.
Solution A is not dominated by any other solution; therefore, the domination counter for A is zero (na = 0). Solutions B and C are in a similar situation, since neither is dominated by any other solution, so we have:
nb = 0 and nc = 0.
Observing the graphic, we also get:
SB={D,E,G,H} ,
and
SC={E,F,G,H} .
Now, let us analyze the case of solution D. Solution D is dominated by two solutions, A and B. Therefore, the domination counter for solution D equals two: nd = 2. On the other hand, the set of solutions dominated by D is empty, as can be seen in the figure (SD = {}).
The process of defining the sets Sp and counters np continues for the remaining solutions:
For solution E: SE = {G, H} and ne = 2;
For solution F: SF = {H} and nf = 1;
For solution G: SG = {} and ng = 4;
For solution H: SH = {} and nh = 4.
Now that we have all the initial information about domination relation between solutions
of the initial population, we are able to identify the first nondominated front, denoted by F1.
As we saw, solutions A, B and C have domination counters equal to zero. Therefore, solutions
A, B and C constitute the first front:
F1={A,B,C}.
The solutions of the first front are considered sorted and do not participate in the sorting of the rest of the population. Now that we have identified the first front, we can enter the main cycle (Part 2 of Algorithm 1). For each solution belonging to the first front F1 = {A, B, C} (i = 1), we consider its set of dominated solutions Sp, and for each solution q in this set we decrease the respective domination counter nq by one:
1st Run of the main cycle:
For solution A: SA = {D, G} → nd = 2 - 1 = 1; ng = 4 - 1 = 3;
For solution B: SB = {D, E, G, H} → nd = 1 - 1 = 0; ne = 2 - 1 = 1; ng = 3 - 1 = 2; nh = 4 - 1 = 3;
For solution C: SC = {E, F, G, H} → ne = 1 - 1 = 0; nf = 1 - 1 = 0; ng = 2 - 1 = 1; nh = 3 - 1 = 2.
After the first run, we get the following values of the domination counters for the not yet sorted solutions:
nd = 0; ne = 0; nf = 0; ng = 1; nh = 2.
Now we can identify the elements of the second front as those having a domination counter equal to zero (Part 2 of the main cycle in Algorithm 1).
The second front is constituted by solutions D, E and F: F2 = {D, E, F}.
The sorting of the rest of the population continues in a similar way until all the solutions are allocated to their respective fronts. In this case, let us analyze the remaining steps:
2nd Run of the main cycle:
For solution D: SD = {} → nothing to update;
For solution E: SE = {G, H} → ng = 1 - 1 = 0; nh = 2 - 1 = 1;
For solution F: SF = {H} → nh = 1 - 1 = 0.
The new front, which in this example is the final one, is F3 = {G, H}.
For this example, the Fast-Non-Dominated-Sort algorithm is finished, since there are no more solutions to be allocated, and all the fronts are identified:
Table 1 - Table showing the final results from Example 1.1

Solution(s)    Objective values           Front
A, B, C        (2,7), (4,4), (6,1)        1
D, E, F        (5,9), (7,5), (9,2)        2
G, H           (8,8), (12,6)              3
Figure 2.3 - Illustration of the final results of Example 1.1;(X-axis-objective 1: total tardiness; Y-axis-objective 2: makespan)
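As a cross-check of this worked example, the following Python sketch (our own illustration, not the implementation used in the thesis) follows Algorithm 1 and reproduces the three fronts of Table 1 for the population of Example 1.1:

def fast_nondominated_sort(points):
    # sorts objective vectors (minimization) into nondominated fronts (Algorithm 1);
    # returns a list of fronts, each front being a list of indices into 'points'
    dom = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    S = {i: [] for i in range(len(points))}   # Sp: solutions dominated by i
    n = {i: 0 for i in range(len(points))}    # np: domination counter of i
    fronts = [[]]
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if dom(p, q):
                S[i].append(j)
            elif dom(q, p):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)               # first front: the nondominated solutions
    while fronts[-1]:                         # main cycle: build the remaining fronts
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:
                    nxt.append(j)             # j belongs to the next front
        fronts.append(nxt)
    return fronts[:-1]                        # drop the trailing empty front

pop = {'A': (2, 7), 'B': (4, 4), 'C': (6, 1), 'D': (5, 9),
       'E': (7, 5), 'F': (9, 2), 'G': (8, 8), 'H': (12, 6)}
names, values = list(pop), list(pop.values())
for k, front in enumerate(fast_nondominated_sort(values), start=1):
    print(k, [names[i] for i in front])
# prints: 1 ['A', 'B', 'C']   2 ['D', 'E', 'F']   3 ['G', 'H']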
3.3 – Diversity Preservation
Diversity preservation is one of the most important pillars of NSGA II. Diversity implies well-spread solutions within the set of solutions. NSGA II enforces diversity preservation in the population through the crowding distance. The crowding distance is calculated for solutions belonging to the same front and measures the distance of a solution from its neighbours. In order to calculate the crowding distance, the individuals of the front must be sorted according to their objective function values in ascending order. In Figure 3 the crowding distance (u distance) is shown by the red box. Boundary solutions are assigned an infinite distance value.
Figure 3 - Crowding distance illustration; (X-axis-objective 1: total tardiness; Y-axis-objective 2:
makespan)
The calculation of the crowding distance is illustrated in Algorithm 2 [25]:
Crowding-distance-assignment(Fi):                for each front Fi
    n = |Fi|                                     number of solutions in the front
    for each j ∈ Fi, set cd[j] = 0               initialize distances
    for each objective m                         (for a bi-objective problem a single objective suffices)
        Fi = sort(Fi, m)                         sort solutions according to objective m
        cd[1] = cd[n] = ∞                        so that boundary solutions are always selected
        for j = 2 to (n - 1)                     for the remaining solutions
            cd[j] = cd[j] + (fm(j+1) - fm(j-1)) / (fm_max - fm_min)
Algorithm 2 - Crowding-distance-assignment
Note that for a multiobjective problem with more than two objectives, the sorting must be done for each objective. Since our problem is bi-objective, sorting by a single objective is enough: within a nondominated front, sorting one objective in ascending order sorts the other in descending order.
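The following sketch (ours, not the thesis code) assigns crowding distances to one front. It follows the standard NSGA II formulation, summing the normalized neighbour gaps over both objectives; in the bi-objective adaptation described above a single sort would be enough, since the second ordering is simply the reverse of the first:

def crowding_distance(front):
    # front: list of (total tardiness, makespan) tuples of one nondominated front;
    # returns the crowding distance of each solution, in the same order
    n = len(front)
    dist = [0.0] * n
    for m in range(2):                                   # for each objective
        order = sorted(range(n), key=lambda i: front[i][m])
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float('inf')  # boundary solutions always kept
        if f_max == f_min:
            continue                                     # avoid division by zero
        for k in range(1, n - 1):                        # interior solutions
            i = order[k]
            dist[i] += (front[order[k + 1]][m] - front[order[k - 1]][m]) / (f_max - f_min)
    return dist

# First front of Example 1.1:
print(crowding_distance([(2, 7), (4, 4), (6, 1)]))  # -> [inf, 2.0, inf]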
3.4 – Binary Tournament
Binary tournament is used in NSGA II as the selection procedure: two individuals are taken at random and the "fittest" one is selected as a parent for the next population. The binary tournament selection is carried out according to the following procedure.
When two randomly selected individuals a and b are compared, they are first evaluated according to their rank:
1. If arank < brank (a belongs to a better front than b), then a is selected.
If both a and b belong to the same front, and consequently have the same rank, then the solution located in the less crowded region is selected (the solution with the larger crowding distance of the two):
2. If arank = brank ∧ adistance > bdistance, then a is selected.
Note that a and b are two different individuals of the population under consideration.
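A minimal sketch of this selection step (our illustration; the individuals are assumed to carry precomputed 'rank' and 'crowding' values, names chosen here purely for clarity):

import random

def binary_tournament(population):
    # crowded comparison: lower rank wins; larger crowding distance breaks ties
    a, b = random.sample(population, 2)
    if a['rank'] != b['rank']:
        return a if a['rank'] < b['rank'] else b
    return a if a['crowding'] > b['crowding'] else b

# Example: among individuals of the same front, the less crowded one wins the tie-break
pop = [{'rank': 1, 'crowding': float('inf')}, {'rank': 1, 'crowding': 0.8},
       {'rank': 2, 'crowding': 1.5}]
print(binary_tournament(pop))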
Chapter 4
SMS-EMOA
4.1 – Algorithm
The other MOEA implemented in this master thesis is SMS-EMOA. This approach first sorts the population with Fast Nondominated Sorting, as in NSGA II; however, instead of the crowding distance it uses the hypervolume (introduced in Subsection 2.7).
As was done for NSGA II, it is possible to state the main pillars and corresponding selection criteria of SMS-EMOA:
1. If arank < brank (a belongs to a better front than b), then a is selected.
If both a and b belong to the same front, then the better solution is the one with the larger hypervolume contribution (∆S):
2. If arank = brank ∧ ∆S(a) > ∆S(b), then a is selected.
Note: a and b are two different individuals of the population under consideration.
The other difference is that in SMS-EMOA only one individual is updated at each generation (recalling the ES notation of Subsection 2.6). The general schema of this method is presented in Figure 4:
Figure 4- General schema of the adopted version of SMS-EMOA
Figure 4 illustrates the SMS-EMOA algorithm adapted and implemented in this Master
Thesis to solve the Job-Shop Scheduling problem. First, the initial population (P0) with size N
is generated. Then, we sort the population by Fast-Non-Dominated Sort procedure and
calculate Hypervolume (this last one is detailed in the next subsection).
In SMS-EMOA only a single parent is selected randomly from the population and one offspring is produced. Thus, SMS-EMOA produces only one offspring per iteration, instead of the N offspring generated per iteration in NSGA II.
Then, the new population (Pt+1), again of size N, is selected from the parents and the offspring (R = P ∪ Q, of size N + 1) by keeping the N best-ranked elements. The other difference between the two algorithms lies in the number of generations: for a fair comparison, since SMS-EMOA produces only one offspring per generation, its number of generations is N times larger than that of NSGA II.
Recalling again the ES notation of Section 2.6, in SMS-EMOA λ = 1, since only one individual is produced at each generation. A direct comparison between the two algorithms, NSGA II and SMS-EMOA, is presented in Table 2 below:
Table 2 - Comparison of NSGA II and SMS-EMOA parameters

Parameter                        NSGA II      SMS-EMOA
Initial population (P0) size     N            N
Offspring (Q0) size              N            1
Number of generations            n            N x n
The next subsection presents a simple example of the hypervolume calculation for a bi-objective problem.
4.2 – Calculating Hypervolume for SMS-EMOA
Next, we show how the hypervolume is calculated at the third step of the SMS-EMOA algorithm presented in Figure 4, for a bi-objective problem. Assuming the population is sorted into nondominated fronts, we have to:
1. Select the solutions of the worst-ranked non-dominated front.
2. Sort these solutions in ascending order of one objective function (e.g. makespan). Recall that the solutions are mutually non-comparable since they are non-dominated; therefore, while the solutions are placed in ascending order of makespan, they are simultaneously in descending order of total tardiness.
3. For the sorted front with n solutions, the hypervolume contribution (∆S) is calculated through [26]:
for i = 2, …, n-1:
∆S(si) = (f1(si+1) - f1(si)) · (f2(si-1) - f2(si))    (3)
4. For the boundary solutions, the hypervolume contribution is calculated by substituting the coordinate of the non-existing adjacent point with the corresponding coordinate of a reference point.
Algorithm 3 - Hypervolume calculation for a bi-objective problem
Graphically, the hypervolume contribution is illustrated in Figure 5 for an individual xi:
Figure 5- Hypervolume calculation; (X-axis-objective 1: total tardiness; Y-axis-objective 2: makespan)
Looking at the figure above, it is possible to observe the hypervolume contribution ∆S of solution xi, represented by the area of the green rectangle. The reference point should always be chosen larger than the maximum of each objective.
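The per-solution contribution of Equation (3), including the reference-point treatment of the boundary solutions, and the resulting SMS-EMOA discard step can be sketched as follows (our illustration, not the thesis implementation; the function names are hypothetical):

def hypervolume_contributions(front, ref):
    # Delta_S of each solution of one nondominated front (bi-objective minimization);
    # 'ref' replaces the missing neighbour of the two boundary solutions
    pts = sorted(front)                       # ascending in f1, hence descending in f2
    contrib = []
    for i, (f1, f2) in enumerate(pts):
        right_f1 = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        left_f2 = pts[i - 1][1] if i > 0 else ref[1]
        contrib.append((right_f1 - f1) * (left_f2 - f2))
    return pts, contrib

def discard_worst(front, ref):
    # SMS-EMOA environmental selection step: drop the solution of the worst-ranked
    # front with the smallest hypervolume contribution
    pts, contrib = hypervolume_contributions(front, ref)
    worst = contrib.index(min(contrib))
    return [p for i, p in enumerate(pts) if i != worst]

# Worst front {(1, 3), (2, 2), (3, 1)} with reference point (5, 5):
print(hypervolume_contributions([(1, 3), (2, 2), (3, 1)], (5, 5))[1])  # -> [2, 1, 2]
print(discard_worst([(1, 3), (2, 2), (3, 1)], (5, 5)))                 # (2, 2) is discarded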
Chapter 5
Adapting MOGAs for JSSP
5.1 – Encoding
Chromosome representation is one of the most important issues when adapting a GA to a specific type of problem such as the JSSP. Encoding a solution into a chromosome must ensure some crucial properties; for scheduling, these are related to the feasibility of schedules. To address this problem, several representations of schedules have been suggested in the literature (Table 3); a survey of representations can be found in [29]:
Table 3 - List of Job Shop most usual representations

Representation                      Author(s)                                    Date
Operation-based                     Bean [31]                                    1994
Job-based                           Holsapple, Jacob, Pakath and Zaveri [32]     1993
Preference-list-based               Davis [33]; Falkenauer and Bouffouix [34]    1985, 1991
Job-pair-relation-based             Nakano and Yamada [35]                       1992
Priority-rule-based                 Dorndorf and Pesch [36]                      1995
Disjunctive-graph-based             Tamaki and Nishikawa [37]                    1992
Completion-time-based               Nakano and Yamada [38]                       1992
Machine-based                       Dorndorf and Pesch [36]                      1995
Random-key-based                    Bean [31]                                    1994
5.1.1 – Operation-Based Representation
Operation-based representation was first proposed by Bean [31]. This representation
encodes a solution (schedule) as a sequence of operations. In this representation a
chromosome encodes a schedule as sequence of operations (genes of chromosome).
The operations of the same job are represented by the same symbol, or number
(e.g.”5”, #5 or J5). With this kind of notation, the position of a job in schedule is interpreted
according to its position in the chromosome. Let us illustrate with a small example of a four-
job three-machine problem (Figure 6.1). Note that “1” stands for a job J1, “2” stands for a
job J2 and so on.
Chromosome: 2 1 4 2 2 3 1 3 4 4 1 3
Reading the genes from left to right: 1st Op. of J2, 1st Op. of J1, 1st Op. of J4, 2nd Op. of J2, 3rd Op. of J2, 1st Op. of J3, 2nd Op. of J1, 2nd Op. of J3, 2nd Op. of J4, 3rd Op. of J4, 3rd Op. of J1, 3rd Op. of J3
Figure 6.1 - Operation-based representation
5.1.2 – Job-Based Representation
Job-based representation was first proposed by Horsapple, Jacob, Pakath and Zaveri
[32], and used by them to deal with static scheduling problems. This representation encodes
a solution (schedule) as a sequence of jobs. So, in this representation a chromosome encodes
schedule and jobs are encoded by genes of this chromosome.
For a certain sequence of jobs, all operations of the first job in the list are scheduled
first, and then the operations of the second job in the list are considered. Let us take again a
four-job three-machine problem simple example (see Figure 6.2). In which “1” stands for a
first job J1, “2” stands for a second job J2 and so on. Then, a chromosome can be represented
as follows:
2 3 1 4
Figure 6.2 - Job-based representation
Note that with this kind of representation, any permutation of jobs corresponds to a
feasible schedule.
5.1.3 – Preference-list Based Representation
Preference-list based representation was first proposed by Davis [33] and was later developed by Falkenauer and Bouffouix [34] to solve a job shop problem involving release times and due dates. In 1995, Croce, Tadei and Volta applied it to the classical JSSP [39].
This representation is similar to the operation-based representation, since both are built on operations, but here each machine has its own preference list according to which it will process the jobs.
In this representation the precedence constraints between the operations of each job must be respected, which is important because we aim at feasible schedules.
The notation of the operations is similar to that of the previous representations. Let us look at this type of chromosome representation with a simple example (Example 2):
Example 2:
3 x 3 instance:
3 jobs (J1, J2, J3);
3 machines (M1, M2, M3);
9 operations;
Evaluate objective 1: Makespan (Cmax);
Evaluate objective 2: Tardiness (Ttotal);
Table 4.1 - Matrix of the 3 x 3 instance from Example 2, showing the machine assigned to each operation

Jobs    Operations (machine sequence)
J1      M1   M3   M2
J2      M1   M2   M3
J3      M2   M1   M3

Table 4.2 - Matrix of the 3 x 3 instance from Example 2, showing the processing times of the corresponding operations on the machines from Table 4.1

Jobs    Processing times
J1      56   12   37
J2      22    4   80
J3      43   33    5
From these two tables we can write each operation as a triplet (index of the job, index of the machine, processing time of the job on that machine), see Figure 7.1:
J1: (J1 M1 56) (J1 M3 12) (J1 M2 37)
J2: (J2 M1 22) (J2 M2 4) (J2 M3 80)
J3: (J3 M2 43) (J3 M1 33) (J3 M3 5)
(in each row, the 1st, 2nd and 3rd operations of the job, from left to right)
Figure 7.1 - Possible job sequence for Example 2
For instance J1 must be processed first on machine M1 for 56 time units. Then, J1 must
be processed on machine M3 for 12 time units and finally on machine M2 for 37 time units.
5.1.3.1 – Preference-List Based GA representation: Algorithm
For generating the initial population of solutions we used the following algorithm (a Python sketch of this procedure is given at the end of this subsection). From Example 2, the three jobs (J1, J2, J3) produce three different operation lists (one for each job), and an initial solution is created by picking operations from these lists at random:
1. Gather the lists of pending operations of all jobs to be scheduled in Example 2 (see Figure 7.2):
J1 M1 56 J1 M3 12 J1 M2 37
J2 M1 22 J2 M2 4 J2 M3 80
J3 M2 43 J3 M1 33 J3 M3 5
Figure 7.2 - Possible Job sequence for Example 2
2. Choose one operation randomly, as illustrated in Figure 7.3. It must be the first operation of one of the three lists. For instance, let us choose the first operation of the third list, "(J3 M2 43)", and add it to the (initially empty) schedule as the first operation to be processed. The chosen operation is removed from the list of pending operations.
J1 M1 56 J1 M3 12 J1 M2 37
J2 M1 22 J2 M2 4 J2 M3 80
J3 M2 43 J3 M1 33 J3 M3 5
Figure 7.3-Possible Job sequence for Example 2
At this point, the partial schedule consists of one operation: (J3 M2 43).
3. Exactly the same process is repeated to choose the second operation of the schedule. For example, the operation "(J1 M1 56)" is selected randomly as the second operation to be scheduled (see Figure 7.4):
J1 M1 56 J1 M3 12 J1 M2 37
J2 M1 22 J2 M2 4 J2 M3 80
J3 M1 33 J3 M3 5
Figure 7.4 - Possible Job sequence for Example 2
Now the partial schedule consists of two operations, (J3 M2 43) and (J1 M1 56); the operation (J1 M1 56) is removed from the list of pending operations.
4. This procedure is repeated until all operations from the lists of pending operations are scheduled. The final sequence of all triplets represents a schedule (an individual). Suppose that this process results in the following schedule:
Schedule (J3 M2 43) (J1 M1 56) (J3 M1 33) (J2 M1 22) (J1 M3 12) (J3 M3 5) (J2 M2 4) (J1 M2 37) (J2
M3 80)
The main advantage of this process is that the feasibility of the solutions is guaranteed by construction, since precedence constraints are respected: the operations of each job appear in the schedule in their required processing order.
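The following Python sketch (an assumption of ours, not the original thesis code; the function name random_schedule and the dictionary-based data layout are illustrative) implements the procedure above: a job is repeatedly picked at random and its next pending operation is appended to the schedule, so precedence constraints hold by construction.

import random

def random_schedule(jobs):
    """jobs: dict mapping a job id to its (machine, processing time) pairs, in precedence order."""
    pending = {job: list(ops) for job, ops in jobs.items()}    # copy of the pending operations
    schedule = []
    while any(pending.values()):
        job = random.choice([j for j, ops in pending.items() if ops])
        machine, p_time = pending[job].pop(0)                  # always the first pending operation
        schedule.append((job, machine, p_time))
    return schedule

if __name__ == "__main__":
    example2 = {"J1": [("M1", 56), ("M3", 12), ("M2", 37)],
                "J2": [("M1", 22), ("M2", 4), ("M3", 80)],
                "J3": [("M2", 43), ("M1", 33), ("M3", 5)]}
    print(random_schedule(example2))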
5.2 – Evaluation of Objective Function Values
Let us analyze the evaluation of the objective function values through Example 3:
Example 3:
We assume the two objectives, makespan and total tardiness, are to be minimized. First, let us look at the makespan objective. The most usual way to visualize a schedule is through Gantt charts. Let us take the schedule obtained in the previous section and build a Gantt chart for it:
1. Schedule (J3 M2 43) (J1 M1 56) (J3 M1 33) (J2 M1 22) (J1 M3 12) (J3 M3 5) (J2 M2 4) (J1
M2 37) (J2 M3 80).
2. Choose the axis variables:
X-axis: time variable (unit);
Y-axis: machine index (Mi);
3. The Gantt chart in Figure 7.5 is built progressively according to the schedule presented in step 1, placing the first operation (J3 M2 43), then the operation (J1 M1 56), and so on, until the complete schedule is allocated to time slots.
Figure 7.5 - Gantt chart for the previous sequence from Example 3
4. For this example, the completion times are obtained by simulating the schedule machine by machine; the listing below shows, for each machine, the triplets (start time, job, completion time) of its operations:
M1 (0 J1 56) (56 J3 89) (89 J2 111)
M2 (0 J3 43) (111 J2 115) (115 J1 152)
M3 (56 J1 68) (89 J3 94) (115 J2 195)
Recalling Section 2, the makespan value (Cmax) is the maximum of the completion times of all jobs. In this case, job J2 is the last job to leave the system, and its completion time is 195.
The completion times of the jobs are:
C1 = 152 time units;
C2 = 195 time units;
C3 = 94 time units;
Cmax = max(C1, C2, C3) = max(152, 195, 94) = 195 time units.
5. Another important objective is the tardiness (Tj), which, recalling Section 2, is calculated as Tj = max(Cj − dj, 0).
Computing the tardiness values requires the due dates (dj) of the jobs, as already mentioned in Section 2. The process used to generate due dates in this master thesis is detailed in the "Parameters of the MOGAs" section (Section 6). Let us assume the following due dates:
d1 = 150;
d2 = 165;
d3 = 100.
Then, tardiness can be calculated as follows:
T1=max(152 – 150,0)=2;
T2=max(195-165,0)=30;
T3=max(94-100,0)=0.
Then, the total tardiness (Ttotal) can be obtained as the sum of all tardiness values for all
jobs:
Ttotal=T1+T2+T3=2+30+0=32.
6. To conclude, this example briefly illustrates the evaluation mechanism of the job-shop implementation in this Master Thesis; a code sketch of this evaluation follows.
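The Python sketch below (our own illustration under the assumptions of Example 3, not the original thesis code; the function name evaluate is hypothetical) computes both objectives by simulating the schedule: each operation starts as soon as both its machine and its job are free, and Cmax and the total tardiness are taken at the end.

def evaluate(schedule, due_dates):
    """schedule: list of (job, machine, processing time) triplets in processing order."""
    machine_free = {}        # earliest time each machine becomes available
    job_free = {}            # earliest time each job can start its next operation
    for job, machine, p_time in schedule:
        start = max(machine_free.get(machine, 0), job_free.get(job, 0))
        finish = start + p_time
        machine_free[machine] = finish
        job_free[job] = finish                    # completion time of the job so far
    makespan = max(job_free.values())
    total_tardiness = sum(max(job_free[j] - d, 0) for j, d in due_dates.items())
    return makespan, total_tardiness

if __name__ == "__main__":
    schedule = [("J3", "M2", 43), ("J1", "M1", 56), ("J3", "M1", 33),
                ("J2", "M1", 22), ("J1", "M3", 12), ("J3", "M3", 5),
                ("J2", "M2", 4), ("J1", "M2", 37), ("J2", "M3", 80)]
    print(evaluate(schedule, {"J1": 150, "J2": 165, "J3": 100}))    # (195, 32), as in Example 3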
5.3 – Genetic Operators
Genetic operators are a very important part of MOEAs. Some of them are universal, in the sense that they do not depend on the type of problem to be solved; selection and tournament selection, for instance, are completely independent of the application. On the other hand, the so-called variation operators, such as mutation and crossover, are problem dependent, because they directly manipulate the encoded solution features (see Section 5.3.2).
5.3.1. Selection
Selection is based on one principle: keeping the "best" solutions from one generation to the next. In multi-objective optimization (MOO), "best" solutions means the best ranked solutions according to the criteria of the specific algorithm. As the name suggests, tournament selection is a competition between solutions in which the winner is the best ranked one. In binary tournament selection, the competition is made between two solutions and the better of the two is chosen; this approach is used in NSGA II for offspring creation.
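A minimal Python sketch of binary tournament selection as used in NSGA II follows (an illustration of ours, not the original thesis code; the name binary_tournament and the list-based rank and crowding structures are assumptions): the lower rank wins, and ties are broken by the larger crowding distance.

import random

def binary_tournament(population, rank, crowding):
    """population: list of solutions; rank and crowding: lists indexed like population."""
    a, b = random.sample(range(len(population)), 2)    # two distinct random competitors
    if rank[a] != rank[b]:
        return population[a] if rank[a] < rank[b] else population[b]
    # same rank: prefer the less crowded solution (larger crowding distance)
    return population[a] if crowding[a] >= crowding[b] else population[b]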
5.3.2. Crossover
Crossover is a genetic operator that combines elements of two parent solutions to produce two new offspring solutions. There are different types of crossover, such as one-point crossover (Table 5.1), two-point crossover (Table 5.2), "cut and splice", etc.
Table 5.1 - One-point crossover illustration

PARENT STRINGS    CROSSOVER (1 POINT)       OFFSPRING SOLUTIONS
000000            00|0000  →  00|1111       001111
111111            11|1111  →  11|0000       110000
As we can see in Table 5.1 above, one-point crossover consists of selecting two parent strings, randomly selecting one point (the same in each parent), dividing both chromosomes into substrings at the selected point, and finally swapping the homologous substrings between the two chromosomes.
Table 5.2 - Two-point crossover illustration

PARENT STRINGS    CROSSOVER (2 POINTS)         OFFSPRING SOLUTIONS
000000            00|00|00  →  00|11|00        001100
111111            11|11|11  →  11|00|11        110011
As we can see in Table 5.2 above, two-point crossover consists of selecting two parent strings, randomly selecting two points (the same in each parent), dividing both chromosomes into substrings at the selected points, and finally swapping the homologous substring(s) between the two chromosomes.
Sometimes these techniques may produce infeasible solutions that violate constraints (e.g. precedence constraints), and thus need to be repaired.
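As a small illustration (our own sketch, not part of the thesis implementation, which relies on mutation only), one-point crossover on binary strings such as those in Table 5.1 can be written in Python as follows; the function name one_point_crossover is hypothetical.

import random

def one_point_crossover(parent1, parent2):
    """Swap the tails of two equally long parent strings at a random point."""
    assert len(parent1) == len(parent2)
    point = random.randint(1, len(parent1) - 1)       # crossover point, the same for both parents
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

if __name__ == "__main__":
    print(one_point_crossover("000000", "111111"))    # e.g. ('001111', '110000')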
5.3.3. Mutation
Mutation is another genetic operator, one that introduces small changes in the current solution. For instance, flipping the binary value of a single gene is a mutation operation for a chromosome with binary representation (Table 5.3):
Table 5.3 - Simple mutation (bit flip) illustration

PARENT STRING    MUTATION                   OFFSPRING SOLUTION
11001100         1100(1)100 → 1100(0)100    11000100
Insertion is a mutation operator that consists of moving one gene from its current position to another position in the chromosome, much like a "shifting" operation. This technique may also introduce constraint violations. A small example of the insert operation follows in Table 5.4:
Table 5.4 - Insertion illustration

PARENT STRING    MUTATION                                       OFFSPRING SOLUTION
11001100         the first gene is re-inserted at position 5    10011100
Swap is another very common mutation operator. It simply involves swapping values of
two genes inside the chromosome. Again, constraint violations might be a concern. Next
follows the illustration of the swap operation in Table 5.5:
Table 5.5 - Swap illustration

PARENT STRING    MUTATION                                      OFFSPRING SOLUTION
11001100         the genes in positions 1 and 8 are swapped    01001101
Let us illustrate mutation with an example from the adopted approach:
1. The initial schedule is:
(J3 M2 43) (J1 M1 56) (J3 M1 33) (J2 M1 22) (J1 M3 12) (J3 M3 5) (J2 M2 4) (J1 M2 37) (J2 M3 80)
2. Select one operation randomly (here, the operation (J1 M1 56)):
(J3 M2 43) (J1 M1 56) (J3 M1 33) (J2 M1 22) (J1 M3 12) (J3 M3 5) (J2 M2 4) (J1 M2 37) (J2 M3 80)
3. Select randomly some position (here, the position right after (J1 M3 12)):
(J3 M2 43) (J1 M1 56) (J3 M1 33) (J2 M1 22) (J1 M3 12) (J3 M3 5) (J2 M2 4) (J1 M2 37) (J2 M3 80)
4. Performing insertion operation:
(J3 M2 43) (J3 M1 33) (J2 M1 22) (J1 M3 12) (J1 M1 56) (J3 M3 5) (J2 M2 4) (J1 M2 37) (J2 M3 80)
5. Getting the new schedule:
(J3 M2 43) (J3 M1 33) (J2 M1 22) (J1 M3 12) (J1 M1 56) (J3 M3 5) (J2 M2 4) (J1 M2 37) (J2 M3 80)
Considering precedence constraints, the first operation of Job 1 is now placed after the second operation of Job 1 in the schedule string, which violates the constraints and produces an infeasible solution. That is why this schedule should not be created:
(J3 M2 43) (J3 M1 33) (J2 M1 22) (J1 M3 12) (J1 M1 56) (J3 M3 5) (J2 M2 4) (J1 M2 37) (J2 M3 80)
However, there are techniques that can be adopted to handle the constraint violation problem. Let us illustrate one of them:
6. For the initial schedule, let us define a substring whose left border is the operation marked in step 2 and whose right border is the next operation of the same job. In this case, the right border is flagged by "(J1 M3 12)".
Then,
LEFT BORDER: (J1 M1 56)   RIGHT BORDER: (J1 M3 12)
(J3 M2 43) (J1 M1 56) (J3 M1 33) (J2 M1 22) (J1 M3 12) (J3 M3 5) (J2 M2 4) (J1 M2 37) (J2 M3 80)
7. When selecting a border, if the corresponding operation does not exist, the border is defined by the beginning or the end of the schedule string.
8. Repeat steps 3, 4 and 5, now restricted to the positions between the two borders:
(J3 M2 43) (J1 M1 56) (J3 M1 33) (J2 M1 22) (J1 M3 12) (J3 M3 5) (J2 M2 4) (J1 M2 37) (J2 M3 80)
Then, the new schedule is:
(J3 M2 43) (J3 M1 33) (J2 M1 22) (J1 M1 56) (J1 M3 12) (J3 M3 5) (J2 M2 4) (J1 M2 37) (J2 M3
80)
To conclude, we applied this approach in order to avoid constraint violations. In the implementation, the mutation operations (swap and insertion) were applied similarly to steps 2, 3 and 4 described above.
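A minimal Python sketch of this precedence-safe insertion mutation follows (an assumption of ours rather than the original thesis code; the function name safe_insertion is illustrative): the selected operation may only be re-inserted between the previous and the next operations of its own job, so the mutated schedule stays feasible.

import random

def safe_insertion(schedule):
    """schedule: list of (job, machine, processing time) triplets; returns a mutated copy."""
    new = list(schedule)
    pos = random.randrange(len(new))
    job = new[pos][0]
    before = [i for i in range(pos) if new[i][0] == job]                 # same-job operations to the left
    after = [i for i in range(pos + 1, len(new)) if new[i][0] == job]    # and to the right
    left = before[-1] + 1 if before else 0          # left border (or start of the string)
    right = after[0] if after else len(new)         # right border (or end of the string)
    target = random.randrange(left, right)          # any position inside the borders stays feasible
    operation = new.pop(pos)
    new.insert(target, operation)
    return new

if __name__ == "__main__":
    schedule = [("J3", "M2", 43), ("J1", "M1", 56), ("J3", "M1", 33),
                ("J2", "M1", 22), ("J1", "M3", 12), ("J3", "M3", 5),
                ("J2", "M2", 4), ("J1", "M2", 37), ("J2", "M3", 80)]
    print(safe_insertion(schedule))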
Chapter 6
Computational Analysis
In this chapter, we discuss the parameters of the MOGAs. Subsection 6.2 shows how we compare the two applied MOGAs: NSGA II and SMS-EMOA.
6.1 – Parameters of the MOGAs
In this section we discuss GA parameters that influence its performance. These
parameters are:
Population size
Number of generations/Stopping Criteria
Data set
The population size N is the same in both the SMS-EMOA and NSGA II algorithms (recalling Table 2 in Section 4.1). Both start with N elements, N being equal to 20 in all tests done in this Master Thesis.
Recalling Table 2 in Section 4.1, since SMS-EMOA generates a single offspring per generation, its number of generations must be N times the number of NSGA II generations to keep the same number of evaluations. In the implementation, n always took values between 10 and 20; in the computational results section (Section 7), these values are shown in the results tables.
The processing times are generated randomly between 1 and 100 (time units); decimal values are allowed.
The due dates for a job j are generated as follows:

dj = ((m*n/100 + 0.5) + 1) * Σ(i=1..m) pji    (4)

Such a scheme is widely used, for instance, in [40].
Instances with n jobs and m machines of the following sizes [n x m] are used in the tests:
10 x 5; 20 x 5; 50 x 5.
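A short Python sketch of this instance generation scheme follows (our own illustration, not the original thesis code; the function name generate_instance is hypothetical, and the due-date factor is taken from formula (4) exactly as written above).

import random

def generate_instance(n_jobs, n_machines):
    """Random processing times in [1, 100] and due dates following formula (4)."""
    p = [[random.uniform(1, 100) for _ in range(n_machines)] for _ in range(n_jobs)]
    factor = (n_machines * n_jobs / 100.0 + 0.5) + 1      # formula (4), as stated in the text
    due_dates = [factor * sum(p[j]) for j in range(n_jobs)]
    return p, due_dates

if __name__ == "__main__":
    processing_times, due_dates = generate_instance(10, 5)    # a [10 x 5] instance
    print(due_dates)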
6.2 – Comparison of results
Two algorithms can be compared based on performance measures. Among the different performance measures available in the literature, the hypervolume [26] is claimed to be currently one of the best, and we use it as our performance measure. The hypervolume is also very simple to implement and well suited to our case, since the problem is bi-objective. The calculation of the hypervolume for the set of solutions of a non-dominated front follows:
Figure 8 - Area between each point and reference point (the overlapped area is considered only once).
The points flagged as x1, x2, x3 and x4 represent four different solutions belonging to the Pareto front of
the final population. The red point (Pref) represents the reference point.
In order to obtain the value of our chosen performance measure, we adopted the procedure shown in Example 4 for Figure 8:
Example 4:
“Performance measure calculation”
Procedure:
1. Obtain the Pareto front (recalling that our problem is bi-objective and that we are minimizing both objectives).
2. Perform normalization. For each blue point representing a solution si (i = 1, 2, 3, 4) and for each objective j (j = makespan, total tardiness):

fj(si)[0,1] = (fj(si) − fj,min) / (fj,max − fj,min)    (5)
3. Then, assuming the normalization of each objective to the range [0, 1.0], we establish the reference point (red point in Figure 8):
Pref = [1.2, 1.2]
Note: the reference point should always be chosen larger than the maximum value on each objective.
4. Calculate the hypervolume measure through the simple algorithm below:

for each i (i = 1, 2, 3, 4):          # i is the index of a solution si of the first front F1
    ai = 0                            # objective 1 contribution of solution i
    bi = 0                            # objective 2 contribution of solution i
    # f1 contribution (e.g. total tardiness):
    ai = f1(si)[0,1] − Pref(1)
    # f2 contribution (e.g. makespan):
    if i = 1:
        bi = f2(si)[0,1] − Pref(2)
    else if i > 1:
        bi = f2(si)[0,1] − f2(si-1)[0,1]
    ∆S(si) = ai * bi
∆STOTAL = Σ ∆S(si), i = 1, 2, 3, 4

Algorithm 4 - Performance measure calculation
In Figure 8, the hypervolume contribution of each solution xi is given by the area of the respective colored rectangle:
∆S(s1): area covered by the purple rectangle ("1")
∆S(s2): area covered by the green rectangle ("2")
∆S(s3): area covered by the orange rectangle ("3")
∆S(s4): area covered by the red rectangle ("4")
∆STOTAL: the sum of all the areas covered by the rectangles (the performance measure)
The hypervolume measure described above is our performance measure; a Python sketch of its calculation is given below. The results for this indicator are presented in the next chapter (Section 7).
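The sketch below is our own illustration (not the original thesis code; the function name hypervolume_2d is hypothetical). It computes the bi-objective hypervolume of a normalized non-dominated front with respect to the reference point [1.2, 1.2], summing per-solution rectangles as in Algorithm 4 (widths and heights taken as absolute differences).

def hypervolume_2d(front, ref=(1.2, 1.2)):
    """front: list of normalized (f1, f2) points, mutually non-dominated, both objectives minimized."""
    points = sorted(front)            # sort by increasing f1; f2 then decreases along the front
    total = 0.0
    previous_f2 = ref[1]
    for f1, f2 in points:
        width = ref[0] - f1           # distance to the reference point on the first objective
        height = previous_f2 - f2     # strip of the second objective not yet covered
        total += width * height
        previous_f2 = f2
    return total

if __name__ == "__main__":
    print(hypervolume_2d([(0.1, 0.8), (0.4, 0.5), (0.7, 0.2)]))    # 0.83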
Chapter 7
Experimental results
7.1 – Results: NSGA II vs SMS-EMOA
In this chapter, we present the test results for the two methods applied in this Master Thesis: NSGA II and SMS-EMOA. Both algorithms were implemented in the Python 2.6 programming language. Python is very simple and quick to develop in; however, its execution is comparatively slow. All the source code is original and was developed in the Eric 4 integrated development environment (IDE) under the Windows operating system; the code was run with the IPython interpreter. The complete list of software and the main code are available in the appendix. The tests were done on an Intel Core i5-460M 2.53 GHz machine with 4 GB of RAM.
To illustrate the dynamics and impact of both methods, Figures 9.1, 9.2 and 9.3 show the evolution of the Pareto fronts obtained in the initial and final generations. In Section 6 we introduced the instance types ([n x m], where n is the number of jobs and m is the number of machines) and we used different instance sets for each type (see the appendix). The following results illustrate the Pareto fronts of one single run only, for the [10 x 5], [20 x 5] and [50 x 5] instances:
Figure 9.1 - Pareto front evolution comparison for both methods. Results for [10 x 5] instances (X-axis: objective 1, total tardiness; Y-axis: objective 2, makespan)
Figure 9.2 - Pareto front evolution comparison for both methods. Results for [20 x 5] instances (X-axis: objective 1, total tardiness; Y-axis: objective 2, makespan)
Figure 9.3 - Pareto front evolution comparison for both methods. Results for [50 x 5] instances (X-axis: objective 1, total tardiness; Y-axis: objective 2, makespan)
NSGA II PARETO FRONT (n is the number of generations)
SMS-EMOA PARETO FRONT (n is the number of generations)
In the results displayed in Figures 9.1, 9.2 and 9.3, blue lines represent the Pareto fronts obtained with the NSGA II method and green lines represent the Pareto fronts obtained with SMS-EMOA. Each Pareto front has a legend whose n value indicates the generation index. We used an initial population of 20 random individuals (N = 20). Recalling Section 4.2, if N equals twenty individuals, SMS-EMOA needs to perform twenty generations for each generation performed by NSGA II. Thus, the lines with n equal to zero represent the Pareto front of the initial population, while the lines with n equal to ten (ten generations) show the final Pareto front of the NSGA II method; for SMS-EMOA, the final front corresponds to two hundred generations instead of ten.
As previously mentioned, we tested [10 x 5], [20 x 5] and [50 x 5] instances, with 10 runs per instance and an initial population of 20 random individuals (N = 20) for each run. Once again, with N equal to twenty individuals, SMS-EMOA performs twenty generations for each generation of NSGA II; thus we performed ten generations for NSGA II and two hundred generations for SMS-EMOA. The tables below report the average hypervolume of those runs for each method, computed as in Example 4 from Section 6.
Table 6.1 - Results from the 10 x 5 instance runs

PARAMETERS                 ALGORITHM
                           NSGA II    SMS-EMOA
POPULATION SIZE            20         20
NGEN / STOP. CRITERIA      10         200
AVERAGE (hypervolume)      1.336      1.385

Table 6.2 - Results from the 20 x 5 instance runs

PARAMETERS                 ALGORITHM
                           NSGA II    SMS-EMOA
POPULATION SIZE            20         20
NGEN / STOP. CRITERIA      10         200
AVERAGE (hypervolume)      1.308      1.418

Table 6.3 - Results from the 50 x 5 instance runs

PARAMETERS                 ALGORITHM
                           NSGA II    SMS-EMOA
POPULATION SIZE            20         20
NGEN / STOP. CRITERIA      10         200
AVERAGE (hypervolume)      1.212      1.231
Table 6.4 - Average results

PARAMETERS                 ALGORITHM
                           NSGA II    SMS-EMOA
AVERAGE (hypervolume)      1.285      1.345
Observing the results presented in Tables 6.1-6.4, it is possible to verify that the applied SMS-EMOA performs better, in particular for the [20 x 5] instance runs, and presents better overall results, as can be seen from Table 6.4.
For the instances with fifty jobs and five machines the difference is not very clear, since both methods show similar results. However, for the instances with twenty jobs and five machines, SMS-EMOA shows a significant advantage over NSGA II (+7.639 %). In the overall results, SMS-EMOA presents an advantage of approximately four per cent (+4.167 %) over NSGA II.
To conclude, it is possible to affirm that SMS-EMOA achieves better overall results; however, it is also fair to say that SMS-EMOA is much slower in reaching them, especially for the larger instances. In all tested data sets, the makespan and total tardiness values were substantially reduced with respect to their initial values.
In some runs, the total tardiness was reduced to zero. This occurred especially for a small number of jobs, and also with the NSGA II algorithm for an increasing number of jobs. In part, this is explained by the loose due dates that were generated.
Chapter 8
Conclusions and future work
In this master thesis, we solved the JSSP with two objectives: makespan and total tardiness. The JSSP is an NP-hard combinatorial problem, and NP-hard problems are difficult to solve with exact methods, so we used metaheuristics. With the right encoding and the right metaheuristic method, it is possible to achieve good results. The preference-list-based representation is a good option for encoding the JSSP. NSGA II and SMS-EMOA are two different, effective GAs for solving this problem and achieving good results. The evaluation of the experimental tests was based on the hypervolume, which was applied very successfully. The applied version of SMS-EMOA can be more effective than the applied version of NSGA II, but it can also be slower. In general, both methods present similar, good results.
However, the two approaches are based on distinct concepts. Both NSGA II and SMS-EMOA start from fast non-dominated sorting for selecting the best quality solutions, but for diversity preservation NSGA II uses the crowding distance operator, while SMS-EMOA uses the hypervolume measure.
In the future, in addition to the mutation used in this work, a crossover operator could be applied. Other encoding schemes could also be studied. The methods studied in this Master Thesis could also be tested on other JSSP data sets, for example on instances with a larger number of jobs and machines.
To conclude, other MOGAs can also be applied to solve this kind of problem.
References
[1] Holland JH. Adaptation in natural and artificial systems. Ann Arbor:University of Michigan
Press; 1975.
[2] Schaffer JD. Multiple objective optimization with vector evaluated genetic algorithms.
In: Proceedings of the international conference on genetic algorithm and their
applications, 1985.
[3] Mahanim Omar,Adam Baharum,Yahya Abu Hasan, ”A job-shop scheduling problem (JSSP)
USING GENETIC ALGORITHM (GA)”, Proceedings of the 2nd IMT-GT Regional Conference on
Mathematics, Statistics and Applications Universiti Sains Malaysia, Penang, June 13-
15,2006
[4] Michael L.Pinedo,”Scheduling-Theory,Algorithms,and Systems”,Springer,3rd Edition
[5] Garey, Michael R.; Johnson, David S. (1979), Computers and Intractability: A Guide to
the Theory of NP-Completeness, W. H. Freeman, ISBN 0-7167-1045-5.
[6] Michael Pilegaard Hansen, “Metaheuristics for multiple objective combinatorial
optimization”, March 31, 1998, Working Paper, IMM Ph.D. dissertation
[7] Carlos A. Coello Coello, Gary B. Lamont and David A. Van Veldhuizen , ” Evolutionary
Algorithms for Solving Multi-Objective Problems”,Springer,2nd Edition
[8] El-Ghazali Talbi,”Metaheuristics - FROM DESIGN TO IMPLEMENTATION”, JOHN WILEY &
SONS INC., PUBLICATION p.146;
[9] N.Mladenovic (1995) A Variable neighborhood algorithm—a new metaheuristic for
combinatorial optimization. Abstracts of papers presented at Optimization Days,
Montréal, p. 112.
[10] Fred Glover (1986), Future Paths for Integer Programming and Links to Artificial
Intelligence. Computers and Operations Research, volume 13, issue 5, pages 533–549;
[11] Hansen, P. (1986) The steepest ascent mildest descent heuristic for combinatorial
programming. Presented at the Congress on Numerical Methods in Combinatorial
Optimization, Capri, Italy.
[12] Fred Glover, Gary A. Kochenberger, "Handbook of Metaheuristics", Kluwer International Series;
[13] J. Dréo, A. Pétrowski, P. Siarry, E. Taillard, "Metaheuristics for Hard Optimization", Springer;
[14] Abdullah Konak, David W. Coit, Alice E. Smith,”Multi-objective optimization using
genetic algorithms: A tutorial”;
[15] Hajela P, lin C-y. Genetic search strategies in multicriterion optimal design. Struct
Optimization 1992;4(2):99–107.
[16] Murata T, Ishibuchi H. MOGA: multi-objective genetic algorithms.In: Proceedings of the
1995 IEEE international conference on evolutionary computation, 29 November–1 December,
1995. Perth,WA, Australia: IEEE; 1995.]
[17] Goldberg DE. Genetic algorithms in search, optimization, and machine learning. Reading,
MA: Addison-Wesley; 1989
[18] Fonseca CM, Fleming PJ. Multiobjective genetic algorithms. In: IEE colloquium on
‘Genetic Algorithms for Control Systems Engineering’ (Digest No. 1993/130), 28 May 1993.
London, UK: IEE; 1993.
[19] Zitzler E, Thiele L. Multiobjective evolutionary algorithms: a comparative case study and
the strength Pareto approach. IEEE Trans Evol Comput 1999;3(4):257–71.
[20] Zitzler E, Laumanns M, Thiele L. SPEA2: improving the strength Pareto evolutionary
algorithm. Swiss Federal Institute Techonology: Zurich, Switzerland; 2001.
[21] Corne DW, Knowles JD, Oates MJ. The Pareto envelope-based selection algorithm for
multiobjective optimization. In: Proceedings of sixth international conference on parallel
problem solving from Nature, 18–20 September, 2000. Paris, France: Springer; 2000.
[22] Corne D, Jerram NR, Knowles J, Oates J. PESA-II: region-based selection in evolutionary
multiobjective optimization. In: Proceedings of the genetic and evolutionary computation
conference (GECCO-2001), San Francisco, CA, 2001.
[23] Lu H, Yen GG. Rank-density-based multiobjective genetic algorithm and benchmark test
function study. IEEE Trans Evol Comput 2003;7(4):325–43.7
[24] Srinivas N, Deb K. Multiobjective optimization using nondominated sorting in genetic algorithms. J Evol Comput 1994;2(3):221–48.
[25] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and T. Meyarivan, "A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II", IEEE Transactions on Evolutionary Computation, Vol. 6, No. 2, April 2002.
[26] Nicola Beume, Boris Naujoks, and Michael Emmerich. SMS-EMOA: Multiobjective selection based on dominated hypervolume. European Journal of Operational Research, 181(3):1653-1669, 2007.
[27] Hans-Paul Schwefel: Numerical Optimization of Computer Models, Birkhäuser Verlag,
Basel, 1977, traduction John Wiley and Sons Ltd., 1980. ISBN 0 471 09988 0
[28] M. Fleischer, The Measure of Pareto Optima. Applications to Multi-objective
Metaheuristics, Evolutionary Multi-Criterion Optimization (EMO 2003), LNCS, vol. 2632,
Springer, Berlin, 2003, pp. 519–533.
[29] Tapan P.,Bagchi, “Multiobjective Scheduling by Genetic Algorithms”, Kluwer Academic
Publishers Norwell, MA, USA ©1999 ,ISBN:0792385616
[30] Hisashi Tamaki, Hajime Kita, and Shigenobu Kobayashi. Multi-Objective Optimization by
Genetic Algorithms : A Review. In Toshio Fukuda and Takeshi Furuhashi, editors, Proceedings
of the 1996International Conference on Evolutionary Computation (ICEC’96), pages 517–522,
Nagoya, Japan, 1996.IEEE
[31] Bean J, “Genetic Algorithms and Random keys for Sequencing and Optimization, ORSA J.
of Computing,6-2,154-160,1994
[32] Holsapple C, V Jacob, R Pakath and J Zaveri, "A Genetics-based Hybrid Scheduler for Generating Static Schedules in Flexible Manufacturing Contexts", IEEE Transactions on Systems, Man and Cybernetics, 23, 953-971, 1993
[33] Davis L, “Jobshop Scheduling with Genetic Algorithms”, in Grefenstette(1985), 136-
140,1985
[34] Falkenauer E and S Bouffouix, "A genetic algorithm for Job shop", Proceedings of the IEEE Int'l Conference on Robotics and Automation, 824-829, 1991
[35] Nakano R and T Yamada, “A Genetic Algorithm Applicable to Large Scale Jobs Shop
Scheduling Problems”, in Manner and Maderick (1992), 281-290,1992
[36] Dorndorf U and E Pesch, “Evaluation Based in Learning in a Jobshop Scheduling
Environment”, Computers and Operations Research,22,25-40
[37] Tamaki and Y Nishikawa, “a Parallel Genetic Algorithm based on a Neighbourhood Model
and its Applications to the Jobshop Scheduling”, in Manner and Maderick (1992),573-582,1992
[38] Nakano,R and T Yamada, “Conventional Genetic Algorithm for Jobshop Problems”, in
Belew and Booker (12), 477-479
[39] Croce F, R Tadei and G Volta,”A genetic Algorithm for the Jobshop Problem”,Computer
and Operations Research,22,15-24,1995
[40] Vinicius Amaral Armentano, Cintia Rigião Scrich, “Tabu Search for minimizing total
tardiness in Job Shop”, Int J. Production Economics 63(2000) 131-140, November 1998
Appendix
//SOFTWARE LIST
Python 2.6 (for Win-32)
Qt 4.4
PyQt 4.4
MinGW
QScintilla 2.2
ipython-0.10-win32
matplotlib-win32
numpy
scipy-win32
psyco-1.6-win32 (py2.6)
pyreadline
//SAMPLE CODES
//SCHEDULING (Eg. random n and m)
def sequencing(d,e): counterr=([]) sequence=([]) b=0 i=0 j=0 k=0 l=0 m=0 p=0 kkk=0 b=rng.randint(1, njobs)
counter=0 counter2=0 for i in range(d): for j in range(e): if seq1[i][j]!=0: counter=counter+1 for l in range(d): for m in range(e): if seq1[l][m]!=0: jobop[l][m*3]=l+1 jobop[l][m*3+1]=seq1[l][m] jobop[l][m*3+2]=pt1[l][m] for p in range(njobs): counterr.append(0) while any(jobop.any(0)!=0): b=rng.randint(1, njobs) if counterr[b-1]==e:#job x can't have more than e operations pass elif jobop[b-1][counterr[b-1]*3]!=0: sequence.append(jobop[b-1][counterr[b-1]*3]), sequence.append(jobop[b-1][counterr[b-1]*3+1]), sequence.append(jobop[b-1][counterr[b-1]*3+2]),parent1.append(jobop[b-1][counterr[b-1]*3])#inserts job,machine and pt into sequence jobop[b-1][counterr[b-1]*3]=0#puts all at 0(already chosen) jobop[b-1][counterr[b-1]*3+1]=0 jobop[b-1][counterr[b-1]*3+2]=0 counterr[b-1]=counterr[b-1]+1#from job x, goes to the next operation #print(any(jobop.any(0)!=0)) #print(jobop) aux=deepcopy(sequence)#copy sequence listofsequence.insert(0, aux)#inserts on list
//OBJ. FUNCTION VALUES CALCULATION EXAMPLE FOR 3 MACHINES (MAKESPAN AND
TOTAL TARDINESS)
def makespan(aaaa, bbbb, cccc,flag, flag2,qq): i=0 auxx=deepcopy(listofsequence)#saves all sequences historic.insert(0, auxx) for i in range(20): m1=([])#clean previous sequence data in order to calculate the next one m2=([]) m3=([]) mav=([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
jobav=([0, 0, 0, 0, 0, 0]) makespanaux=([]) tardiness=([0, 0, 0, 0, 0, 0]) j=0 while listofsequence[i]!=([]):#while sequence is not fully read or empty if listofsequence[i][1]==1:#for machine 1(Gantt style) flag=listofsequence[i][0]#flag job flag2=listofsequence[i][1] aaaa=max(mav[0], jobav[flag-1])+listofsequence[i][2]#aaaa is finishing time(begins in maximum between machine 1 availability and job x(flag) availability, adding processing time...) m1.append(max(mav[0], jobav[flag-1])), m1.append(listofsequence[i][0]), m1.append(aaaa),mav.pop(0), mav.insert(0, aaaa), jobav.pop(flag-1), jobav.insert(flag-1, aaaa), listofsequence[i].pop(0), listofsequence[i].pop(0), listofsequence[i].pop(0)#in machine 1 is inserted initial time, job index and finishing time, machine 1 availability and job x(flag) availability are updated.The operation is removed from sequence, going to the next one on next iteration, and so on until sequence is empty elif listofsequence[i][1]==2: flag=listofsequence[i][0] flag2=listofsequence[i][1] bbbb=max(mav[1], jobav[flag-1])+listofsequence[i][2] m2.append(max(mav[1], jobav[flag-1])), m2.append(listofsequence[i][0]), m2.append(bbbb),mav.pop(1), mav.insert(1, bbbb), jobav.pop(flag-1), jobav.insert(flag-1, bbbb), listofsequence[i].pop(0), listofsequence[i].pop(0), listofsequence[i].pop(0) elif listofsequence[i][1]==3: flag=listofsequence[i][0] flag2=listofsequence[i][1] cccc=max(mav[2], jobav[flag-1])+listofsequence[i][2] m3.append(max(mav[2], jobav[flag-1])), m3.append(listofsequence[i][0]), m3.append(cccc),mav.pop(2), mav.insert(2, cccc), jobav.pop(flag-1), jobav.insert(flag-1, cccc), listofsequence[i].pop(0), listofsequence[i].pop(0), listofsequence[i].pop(0) makespanaux.append(aaaa), makespanaux.append(bbbb), makespanaux.append(cccc),ms.append(max(makespanaux)) for j in range(njobs):#for each job aaa=jobav[j]-duedates[j][0] print(aaa)#calculates the difference between job index finishing time and due date(now on variable aaa) bbb=max(0, aaa)#bbb stands for tardiness value for each job, being the maximum between o and the previous value(on the previous line) print(bbb) tardiness.pop(j),tardiness.insert(j,bbb) print(tardiness) print(tardiness) ts.append(sum(tardiness)) print(ts) //INSERT OPERATION def insertjob2(limit):#insert operation print("1", tourchosen) copy=deepcopy(historic[0]) oldposition=0 newposition=0
i=0 for i in range(len(historic[0])): allseq.append(historic[0][i]) k=0 i=0 for i in range(len(tourchosen)): oldposition=rng.randrange(0, len(copy[tourchosen[i]])/2,3) print("old", oldposition) k=0 for k in range(oldposition+3, len(copy[tourchosen[i]])-2, 3): print("2", k) #print("3", len(copy[iindex[i]]) if copy[tourchosen[i]][k]==copy[tourchosen[i]][oldposition]: limit==k lim.append(k) old.append(oldposition) print("limit", limit, lim, old) break elif k==(len(copy[tourchosen[i]])-2): limit==k lim.append(k) old.append(oldposition) print("limit1", limit, lim, old) break for j in range (len(lim)): if (lim[j]-old[j])>3: newposition=rng.randrange(old[j],lim[j]-1, 3) a=0 b=0 c=0 a=copy[tourchosen[i]][old[j]] #job data b=copy[tourchosen[i]][old[j]+1] c=copy[tourchosen[i]][old[j]+2] copy[tourchosen[i]].insert(newposition, a), copy[tourchosen[i]].insert(newposition+1, b), copy[tourchosen[i]].insert(newposition+2, c), copy[tourchosen[i]].pop(old[j]), copy[tourchosen[i]].pop(old[j]), copy[tourchosen[i]].pop(old[j]) elif (lim[j]-old[j])<3: pass offa.append(copy[tourchosen[i]]), allseq.append(copy[tourchosen[i]]) print("offa", offa) all2=deepcopy(allseq) all.insert(0, all2) #auxx=deepcopy(historic) offfa=deepcopy(offa) offf.insert(0, offfa) print("offf", offa, offf) #aux=deepcopy(copy[i]) #offspring.append(aux) #print(offspring)
print(m1, m2, m3, ms, ts, jobav)#in the end,final value/finishing time for each operation are inserted on makespan list and makespan value is the maximum between them #print those //FAST NON-DOMINATED SORTING def fastnondominationsorting(countersols, a):#or fast non dominated sort i=0 countersols=0 for i in range(len(ms)): solutions=([]) solutions.append(ms[i]), solutions.append(ts[i]) sol.append(solutions)#save solutions #lq.append(lsol) np.append(0), nq.append(0),rank.append(0),cd.append(0)#creating domination counters print(solutions) i=0 j=0 for i in range(0, len(sol)): lq1=([]) lq2=([]) for j in range(0, len(sol)): if sol[i][0]<sol[j][0] and sol[i][1]<=sol[j][1] or sol[i][0]<=sol[j][0] and sol[i][1]<sol[j][1]: #MINIMIZING lq2.append(j), lq1.append(sol[j]) #saves non-dominated solutions and index #print(i, j, sol[j]) #ffff.append(sol[j]) #lq[i].append(sol[j]) #print(lq) #lq[i].insert(0, sol[j]) elif sol[i][0]>sol[j][0] and sol[i][1]>=sol[j][1] or sol[i][0]>=sol[j][0] and sol[i][1]>sol[j][1]: np[i]=np[i]+1 #increment domination counter lq.append(lq1) #saves data lqi.append(lq2) i=0 #first iteration--->finding first front for i in range(len(np)): if np[i]==0: #if it is not dominated by anyone countersols=countersols+1 print("counter", countersols) #front=([]) setq=([]) front.append(sol[i]), setq.append(lqi[i]), rank.pop(i), rank.insert(i, 1), rank2.insert(i, countersols) #belongs to first front and saves dominated point indexes to perform next iteration #saves rank print(sol[i],"is from first front") setq2.append(setq) fronts.append(front) #building first front and all sets of dominated points for each point in the first front(previous and this line) #i=0
#for i in range(len(front)): #del(front[0]) print(np) print(lq) print(fronts, setq2) i=0 #while fronts!=([]): while countersols!=a:# until a solutions are allocated or fulfilled secf=([]) q=([]) j=0 k=0 if i==0: #leng is the array which specifies the number of all sets of dominated points which are already read leng=([]) leng.append(0) leng.append(len(setq2)) print("leng", leng) for j in range(leng[i], leng[i+1]): #a=len(setq2) #for j in range(a, b): #a=len(setq2) k=0 #a=len(setq2[j][i]) #leng=([]) #leng.append(0) #leng.append(len(setq2[j][0])) print("leng", leng) for k in range(len(setq2[j][0])): np[setq2[j][0][k]]=np[setq2[j][0][k]]-1 #if it is on set of solutions dominated by P, decrease domination counter print(k) if np[setq2[j][0][k]]==0:#if domination counter is zero... countersols=countersols+1 print("counter", countersols) #newseq=([]) secf.append(sol[setq2[j][0][k]]) #.....this solution is in next front setq2.append([lqi[setq2[j][0][k]]]) #saves(finds) new set of dominated solutions dominated by P #newseq.append(lqi[setq2[j][i][k]]) setq2.append(newseq) #setq2.append(lqi[k]),newseq.append(lqi[k+1]), rank.pop(setq2[j][0][k]), rank.insert(setq2[j][0][k], i+2), rank2.insert(setq2[j][0][k], countersols) #b=len(setq2) print("2", sol[setq2[j][0][k]]) print(k) fronts.append(secf)#stores new front #sol2 #fronts.append("this is front") #save front number #fronts.append(i+1) secf=([]) leng.append(len(setq2))#refresh size i=i+1 print("kkk", leng, setq2, i) print("leng2", leng[i], leng[i+1]) #setq2.append(newseq)
print(newseq) #setq2=([]) #setq2.append(newseq) #i=i+1 #break #print("ae b", a, b) print(fronts) print("prank", rank) //CROWDING DISTANCE def set_cdistance(value, counterpos): import numpy as np i=0 j=0 cc=([]) ccc=([]) dist=([]) dist1=([]) for i in range(len(fronts)): print("C1", cc) if i>0: ccc=sorted(cc, key=lambda cc:cc[0]) #sorting front according to one objective cccc.append(ccc) print("Cccc", cccc) cc=([]) ccc=([]) #distance.append(cc) for j in range(len(fronts[i])): cc.append(fronts[i][j]) #gathers data from each front ccc=sorted(cc, key=lambda cc:cc[0]) #(last iteration) cccc.append(ccc) #c2.append(c1) #print("c2", c2) #print("cccc", cccc, fronts) i=0 j=0 for i in range(len(cccc)): if i>0: dist.append(dist1) dist1=([]) for j in range(len(fronts[i])): if j==0 or j==(len(fronts[i])-1): dist1.append(2e1000) #for the extremes assignes infinite values #counterpos=counterpos+1 #cd.insert(counterpos, 2e1000) print("1") elif j!=0 or j!=len(fronts[i]): #for the intermediate points applies formula from algorithm 2., assigning distance value equaling to the absolute normalized difference in the function values of two adjacent solutions dist1.append(0) print("2")
value=0.000000 value=abs(cccc[i][j+1][0]-cccc[i][j-1][0])/1.00000/(max(ms)-min(ms)) #difference from first objective print("cd", value) value=value+abs(cccc[i][j+1][1]-cccc[i][j-1][1])/(max(ts)-min(ts)) #difference from second objective print("cd1", value) dist1.pop(j), dist1.insert(j, value) #saves value # counterpos=counterpos+1 #cd.insert(counterpos, value) dist.append(dist1) #final crowding distance array print(dist) i=0 j=0 k=0 for i in range(len(sol)): for j in range(len(cccc)): for k in range (len(cccc[j])): if sol[i]==cccc[j][k]: cd.pop(i), cd.insert(i, dist[j][k]) #stores crowding distance value for each solution print(cd) //BINARY TOURNAMENT SELECTION def tour_sel(a, b, n): tourchosen=([]) tour2=([]) tour=([]) i=0 for i in range(n): #choose randomly N pairs and compare all of them by ranking and crowding distance as second criteria a=rng.randrange(0, len(sol)) b=rng.randrange(0, len(sol)) #print("sol", sol[a], sol[b]) if rank[a]<rank[b] or ((rank[a]==rank[b]) and (cd[a]>cd[b])): #less rank is "better" and bigger crowding distance is better tourchosen.append(a) #print(sol[a], "dominates", sol[b]) elif rank[a]>rank[b] or ((rank[a]==rank[b]) and (cd[a]<cd[b])): tourchosen.append(b) #print(sol[b], "dominates", sol[a]) elif ((rank[a]==rank[b]) and (cd[a]==cd[b])): tourchosen.append(a) #print("TIE") print("tourmentchosen", tourchosen) #tour2=deepcopy(tourchosen)
#tour.insert(0, tour2) print("TOUR", tour) sss.insert(0, tourchosen) //SELECTION def selection(a, n, m,l): i=0 j=0 b=0 sele=([]) selec=([]) rink=2*l*[0] iindex=([]) for j in range(max(rank)): b=b+1 #increment ranking counter if a==l: break elif rank.count(b)>l-len(sele): #if there is more of b ranked elements than the number of elements to fill N(=l) and rank[i]==b: n=0 for n in range (len(rank)): if rank[n]==b: rink.pop(n), rink.insert(n, cd[n]) #elif rank[n]!=b: #pass #m=0 #for m in range(len(rink)): #cdink.append(cd[m]) while a<l: m=0 for m in range(len(rink)): if rink[m]==max(rink):#or rink[m] #select best ranking through crowding distance value sele.append(sol[m]), rink.pop(m), rink.insert(m, 0), iindex.append(m) a=a+1 if a==l: break #while a!=l: for i in range (len(rank)): if rank[i]==b and a<l: #select best(actual) ranking sele.append(sol[i]), iindex.append(i) print("soli", i, sol[i]) a=a+1 if a==l: break z=0 for z in range (len(iindex)): selected.append(allseq[iindex[z]])
selec=deepcopy(sele) selecc.insert(0, selec) print("s", b, a, selec, sele) print("index", iindex, selected) //HYPERVOLUME FOR SMS-EMOA def hyperv(a,b, hyper):#value, counterpos: import numpy as np aux=([]) aux=deepcopy(fronts) frontt.insert(0, aux) referencepoint=[1.2, 1.2] print("a", frontt) i=0 j=0 cc=([]) ccc=([]) hyp1=([]) hyp2=([]) for i in range(len(frontt[0])): print("C1", cc) if i>0: ccc=sorted(cc, key=lambda cc:cc[0]) #sorting front according to one objective cccc.append(ccc) #ccccc.append(ccc) print("Cccc", cccc) cc=([]) ccc=([]) #distance.append(cc) for j in range(len(frontt[0][i])): cc.append(frontt[0][i][j]) #gathers data from each front ccc.append(cc) print("ccc", ccc) ccc=sorted(cc, key=lambda cc:cc[0]) #(last iteration) cccc.append(ccc) #ccccc.append(ccc) ccccc=deepcopy(cccc) i=0 j=0 for i in range(len(cccc)): #if i>0: #hyp2.append(hyp1) #hyp1=([]) for j in range(len(cccc[i])): cccc[i][j][0]=(cccc[i][j][0]-min(ms))/1.0000 cccc[i][j][0]=cccc[i][j][0]/(max(ms)-min(ms))/1.0000 cccc[i][j][1]=(cccc[i][j][1]-min(ts))/1.0000 cccc[i][j][1]=cccc[i][j][1]/(max(ts)-min(ts))/1.0000 i=0 j=0 hyp3=([])
for i in range(len(cccc)): if i>0: hyp2.append(hyp1) hyp1=([]) for j in range(len(cccc[i])): hyp1.append(0) hyp3.append(0) a=0 b=0 hyper=0 if len(cccc[i])==1: a=0 b=0 #pass elif len(cccc[i])>1: if j==0: a=cccc[i][j+1][0]-cccc[i][j][0] #ascended ordered on makespan therefore difference for first on front is between solutions for makespan and ref. for tardiness b=referencepoint[1]-cccc[i][j][1] elif (j>0 and j<len(cccc[i])-1): a=cccc[i][j+1][0]-cccc[i][j][0] #algotithm from hypervolume b=cccc[i][j][1]-cccc[i][j-1][1] elif j==(len(cccc[i])-1): a=referencepoint[0]-cccc[i][j][0] b=cccc[i][j][1]-cccc[i][j-1][1] hyper=abs(a*b) #hyp1.append(hyper) hyp1.pop(j), hyp1.insert(j, hyper) hyp2.append(hyp1) #cccc[i][j][0]=cccc[i][j][0]/1.0000 #put decimal #cccc[i][j][1]=(cccc[i][j][1]-min(ms))/(max(ts)-min(ts))/1.0000 #put #hyper=a*b #a=referencepoint[0]-ccc[i][j][0] #hyp1.append(hyper) #b=referencepoint[1]- ccc[i][j][1] hyper=a*b print("1", hyp2, ccccc) hyp4.insert(0, hyp2) #i=0 #j=0 #k=0 #f#or i in range(len(sol)): #for j in range(len(fronts)): #for k in range (len(fronts[j])): #if sol[i]==fronts[j][k]: #hyp3.pop(i), hyp3.insert(i, hyp2[j][k]) #stores hypervolume value for each solution #print("u", hyp3) #ccc=sorted(cc, key=lambda cc:cc[0]) #(last iteration) #cccc.append(ccc) #c2.append(c1) #print("c2", c2) #print("2", cccc, fronts) #i=0 #j=0
i=0 j=0 k=0 for i in range(len(sol)): for j in range(len(ccccc)): for k in range (len(ccccc[j])): if sol[i]==ccccc[j][k]: hyp3.pop(i), hyp3.insert(i, hyp2[j][k]) #stores crowding distance value for each solution print(hyp3) //DATA SETS //10 x 5 pt1=np.array([[60, 70, 25, 83, 42], [29, 67, 5, 91, 34], [83, 4, 15, 38, 49], [39, 12, 59, 49, 48], [60, 67, 73, 37, 52], [45, 89, 42, 71, 71], [28, 27, 56, 61, 78], [18, 15, 91, 97, 75], [55, 81, 33, 4, 56], [68, 56, 43, 60, 9]]) seq1=np.array([[2, 1, 4, 4, 4], [3, 1, 1, 1, 2], [3, 5, 3, 1, 1], [3, 2, 1, 4, 5], [5, 3, 4, 2, 5], [2, 1, 1, 5, 3], [2, 1, 1, 1, 1], [1, 1, 4, 3, 2], [3, 4, 2, 3, 5], [5, 2, 3, 3, 2]]) duedates=np.array([[ 434. ], [ 350.3 ], [ 292.95], [ 320.85], [ 447.95], [ 492.9 ], [ 387.5 ], [ 458.8 ], [ 354.95], [ 365.8 ]]) pt1=np.array([[68, 21, 90, 45, 25], [49, 1, 8, 95, 47], [65, 52, 17, 54, 20], [74, 79, 3, 83, 92], [81, 13, 6, 79, 37], [30, 63, 47, 65, 61], [42, 6, 28, 95, 11],
[65, 66, 24, 12, 5], [ 6, 22, 23, 19, 49], [12, 59, 5, 74, 97]]) seq1=np.array([[5, 4, 4, 5, 1], [3, 2, 1, 1, 5], [2, 2, 1, 4, 3], [5, 3, 4, 1, 2], [1, 3, 4, 3, 1], [3, 3, 5, 2, 1], [2, 3, 2, 3, 1], [5, 4, 3, 5, 2], [2, 1, 3, 3, 4], [1, 5, 3, 4, 3]]) duedates=np.array([[ 385.95], [ 310. ], [ 322.4 ], [ 513.05], [ 334.8 ], [ 412.3 ], [ 282.1 ], [ 266.6 ], [ 184.45], [ 382.85]]) //20 x 5 pt1=np.array([[35, 32, 39, 44, 43], [43, 39, 29, 33, 63], [33, 2, 19, 64, 8], [ 2, 4, 70, 94, 41], [13, 26, 63, 74, 17], [ 8, 36, 37, 7, 42], [96, 41, 89, 18, 73], [78, 88, 79, 66, 4], [28, 98, 7, 19, 1], [43, 65, 7, 82, 75], [19, 24, 91, 15, 15], [80, 98, 57, 9, 73], [93, 51, 35, 66, 50], [22, 47, 88, 66, 38], [ 5, 53, 32, 41, 84], [67, 52, 55, 85, 43], [77, 4, 56, 81, 37], [89, 53, 23, 69, 45], [79, 59, 9, 88, 83], [ 4, 43, 75, 87, 3]]) seq1=np.array([[1, 5, 2, 1, 1], [1, 1, 2, 1, 4], [5, 4, 4, 3, 3], [1, 5, 3, 2, 4], [4, 4, 4, 2, 2], [2, 3, 1, 5, 1], [5, 4, 4, 1, 1], [2, 5, 2, 2, 1], [1, 4, 4, 4, 4], [2, 2, 2, 1, 3], [2, 5, 1, 2, 1],
[4, 1, 1, 4, 2], [3, 1, 2, 3, 2], [1, 2, 2, 2, 3], [1, 2, 4, 5, 5], [4, 5, 2, 3, 1], [3, 1, 2, 3, 4], [5, 5, 5, 4, 1], [5, 3, 3, 5, 5], [5, 1, 5, 1, 4]]) duedates=np.array([[ 308.8], [ 331.2], [ 201.6], [ 337.6], [ 308.8], [ 208. ], [ 507.2], [ 504. ], [ 244.8], [ 435.2], [ 262.4], [ 507.2], [ 472. ], [ 417.6], [ 344. ], [ 483.2], [ 408. ], [ 446.4], [ 508.8], [ 339.2]]) pt1=np.array([[25, 4, 61, 34, 4], [44, 79, 82, 5, 14], [51, 23, 5, 26, 12], [15, 74, 47, 22, 1], [94, 35, 97, 46, 48], [76, 60, 31, 95, 1], [25, 20, 90, 71, 8], [13, 30, 65, 26, 70], [18, 71, 23, 83, 76], [17, 90, 20, 33, 56], [39, 41, 2, 29, 35], [28, 30, 86, 62, 62], [20, 24, 51, 89, 59], [24, 81, 41, 21, 75], [82, 52, 31, 33, 17], [72, 68, 32, 13, 62], [58, 27, 12, 61, 62], [68, 84, 14, 84, 83], [ 5, 2, 81, 38, 30], [98, 91, 52, 4, 45]]) seq1=np.array([[4, 3, 2, 4, 2], [2, 1, 3, 4, 3], [2, 3, 1, 1, 4], [3, 1, 5, 1, 5], [4, 1, 3, 2, 3], [3, 1, 2, 3, 4],
[2, 3, 3, 5, 1], [3, 1, 3, 4, 5], [3, 4, 4, 3, 2], [4, 3, 1, 3, 5], [2, 1, 3, 2, 5], [4, 1, 5, 2, 1], [5, 4, 5, 4, 4], [2, 5, 2, 5, 2], [2, 4, 1, 3, 2], [3, 3, 4, 5, 5], [4, 3, 3, 5, 5], [1, 4, 5, 4, 4], [5, 3, 2, 4, 3], [5, 3, 5, 5, 3]]) duedates=np.array([[ 204.8], [ 358.4], [ 187.2], [ 254.4], [ 512. ], [ 420.8], [ 342.4], [ 326.4], [ 433.6], [ 345.6], [ 233.6], [ 428.8], [ 388.8], [ 387.2], [ 344. ], [ 395.2], [ 352. ], [ 532.8], [ 249.6], [ 464. ]]) pt1=np.array([[ 4, 92, 92, 58, 44], [25, 5, 72, 59, 80], [62, 21, 60, 36, 29], [53, 65, 78, 95, 75], [44, 32, 42, 34, 71], [42, 80, 76, 18, 79], [89, 74, 97, 68, 80], [16, 86, 87, 38, 75], [63, 81, 34, 80, 30], [56, 49, 8, 12, 30], [26, 83, 67, 77, 40], [ 7, 16, 35, 95, 53], [24, 24, 9, 96, 70], [ 6, 28, 69, 81, 38], [35, 61, 53, 63, 61], [81, 16, 67, 23, 68], [ 1, 58, 72, 80, 41], [50, 57, 2, 41, 70], [62, 80, 22, 94, 23], [36, 46, 95, 67, 51]]) seq1=np.array([[3, 3, 5, 3, 1],
[3, 5, 2, 3, 5], [1, 4, 4, 4, 3], [2, 4, 5, 2, 4], [5, 2, 1, 2, 5], [5, 4, 3, 5, 5], [3, 5, 1, 5, 2], [3, 2, 4, 4, 2], [1, 4, 5, 2, 4], [5, 5, 1, 4, 1], [5, 2, 5, 3, 3], [2, 4, 5, 2, 5], [4, 3, 5, 2, 3], [5, 4, 5, 1, 2], [1, 1, 1, 4, 1], [2, 1, 3, 4, 4], [2, 2, 4, 4, 2], [3, 3, 4, 4, 3], [4, 3, 4, 5, 4], [2, 1, 2, 4, 3]]) duedates=np.array([[ 464. ], [ 385.6], [ 332.8], [ 585.6], [ 356.8], [ 472. ], [ 652.8], [ 483.2], [ 460.8], [ 248. ], [ 468.8], [ 329.6], [ 356.8], [ 355.2], [ 436.8], [ 408. ], [ 403.2], [ 352. ], [ 449.6], [ 472. ]]) //50 x 5 pt1=np.array([[11, 67, 8, 78, 11], [74, 83, 1, 5, 54], [26, 15, 82, 22, 66], [42, 9, 42, 61, 38], [44, 69, 67, 39, 7], [50, 9, 19, 50, 43], [57, 44, 15, 84, 97], [51, 74, 13, 72, 51], [45, 33, 39, 36, 26], [19, 77, 21, 49, 91], [14, 86, 18, 48, 64], [89, 24, 96, 56, 15], [24, 70, 29, 93, 49], [69, 87, 57, 41, 24], [36, 97, 51, 50, 80], [37, 28, 54, 62, 7],
[43, 70, 29, 88, 93], [47, 79, 8, 64, 98], [94, 10, 82, 89, 50], [67, 18, 50, 79, 5], [14, 40, 17, 60, 90], [88, 5, 31, 23, 48], [19, 45, 63, 29, 68], [59, 30, 7, 67, 82], [24, 93, 77, 4, 25], [52, 53, 65, 58, 76], [21, 76, 73, 77, 23], [51, 95, 62, 81, 50], [ 7, 37, 25, 98, 93], [76, 76, 4, 18, 71], [85, 87, 51, 53, 40], [64, 59, 62, 42, 11], [41, 24, 56, 51, 15], [ 2, 89, 13, 56, 92], [ 8, 96, 13, 96, 70], [74, 33, 90, 37, 25], [53, 64, 40, 44, 85], [17, 76, 1, 54, 40], [59, 38, 83, 33, 49], [52, 56, 1, 29, 24], [93, 5, 38, 1, 5], [86, 72, 67, 27, 38], [25, 66, 95, 3, 33], [69, 48, 28, 16, 10], [98, 65, 75, 91, 60], [44, 19, 25, 31, 77], [75, 60, 10, 33, 34], [61, 86, 21, 73, 34], [75, 25, 54, 76, 93], [50, 65, 97, 87, 22]]) seq1=np.array([[2, 1, 3, 4, 4], [4, 1, 4, 2, 5], [2, 4, 4, 2, 4], [4, 1, 1, 3, 3], [2, 3, 3, 4, 3], [5, 3, 2, 5, 1], [1, 2, 2, 1, 2], [1, 3, 4, 2, 3], [4, 2, 3, 1, 2], [3, 3, 3, 5, 1], [1, 5, 3, 3, 4], [3, 1, 1, 5, 1], [5, 4, 3, 1, 3], [1, 5, 5, 3, 3], [5, 5, 5, 2, 5], [2, 5, 3, 1, 5], [2, 5, 1, 4, 3], [5, 4, 1, 2, 2], [2, 1, 3, 4, 4], [3, 4, 3, 1, 1], [3, 4, 5, 1, 3], [1, 2, 2, 4, 4], [5, 2, 4, 4, 5], [5, 3, 2, 5, 3], [5, 1, 1, 4, 4],
[1, 2, 3, 1, 4], [1, 3, 1, 3, 5], [5, 1, 2, 1, 2], [1, 3, 4, 5, 2], [4, 3, 5, 4, 5], [2, 4, 3, 3, 4], [3, 2, 2, 3, 3], [2, 2, 5, 3, 5], [4, 3, 1, 2, 1], [1, 5, 1, 1, 4], [2, 1, 5, 3, 1], [3, 4, 5, 5, 3], [2, 5, 3, 2, 3], [3, 2, 3, 1, 1], [1, 2, 4, 4, 4], [3, 1, 5, 2, 4], [5, 5, 1, 5, 3], [3, 1, 4, 3, 3], [4, 1, 3, 4, 4], [5, 2, 1, 5, 2], [4, 4, 5, 1, 4], [2, 1, 5, 5, 2], [5, 2, 5, 2, 1], [3, 5, 4, 2, 1], [4, 2, 4, 2, 2]]) duedates=np.array([[ 306.25], [ 379.75], [ 369.25], [ 336. ], [ 395.5 ], [ 299.25], [ 519.75], [ 456.75], [ 313.25], [ 449.75], [ 402.5 ], [ 490. ], [ 463.75], [ 486.5 ], [ 549.5 ], [ 329. ], [ 565.25], [ 518. ], [ 568.75], [ 383.25], [ 386.75], [ 341.25], [ 392. ], [ 428.75], [ 390.25], [ 532. ], [ 472.5 ], [ 593.25], [ 455. ], [ 428.75], [ 553. ], [ 416.5 ], [ 327.25], [ 441. ],
[ 495.25], [ 453.25], [ 500.5 ], [ 329. ], [ 458.5 ], [ 283.5 ], [ 248.5 ], [ 507.5 ], [ 388.5 ], [ 299.25], [ 680.75], [ 343. ], [ 371. ], [ 481.25], [ 565.25], [ 561.75]]) pt1=np.array([[28, 28, 51, 81, 61], [48, 19, 77, 46, 9], [61, 74, 44, 77, 81], [56, 20, 26, 35, 92], [25, 54, 51, 97, 78], [47, 25, 12, 45, 82], [64, 17, 56, 76, 54], [89, 84, 29, 2, 72], [83, 3, 59, 26, 34], [ 6, 29, 26, 22, 91], [ 7, 35, 24, 54, 88], [16, 80, 12, 71, 97], [62, 54, 83, 9, 6], [47, 40, 42, 84, 12], [88, 38, 54, 40, 75], [64, 25, 81, 22, 2], [74, 96, 69, 75, 31], [19, 76, 95, 5, 77], [90, 83, 89, 46, 61], [63, 14, 7, 90, 95], [44, 54, 44, 10, 37], [76, 83, 92, 96, 28], [82, 17, 76, 73, 86], [62, 89, 32, 78, 87], [57, 70, 24, 55, 37], [44, 72, 32, 75, 65], [20, 6, 39, 51, 83], [31, 10, 65, 7, 56], [56, 75, 2, 4, 8], [49, 83, 94, 41, 36], [39, 14, 80, 34, 25], [16, 25, 42, 48, 69], [27, 33, 94, 14, 47], [65, 74, 85, 90, 23], [34, 51, 87, 59, 95], [ 1, 92, 85, 98, 85], [42, 85, 13, 20, 97], [67, 40, 56, 43, 70], [28, 47, 11, 78, 74], [48, 43, 74, 93, 31], [60, 2, 39, 18, 89], [93, 9, 20, 82, 63], [85, 76, 36, 14, 8],
[47, 47, 79, 76, 25], [42, 78, 34, 56, 30], [22, 23, 36, 6, 21], [16, 89, 84, 67, 68], [71, 5, 58, 51, 9], [72, 24, 15, 98, 87], [25, 16, 24, 30, 22]]) seq1=np.array([[3, 5, 3, 2, 4], [1, 2, 4, 3, 2], [4, 1, 5, 3, 5], [3, 1, 3, 2, 5], [2, 3, 4, 1, 2], [2, 4, 2, 2, 5], [3, 3, 2, 2, 5], [4, 4, 4, 4, 4], [2, 5, 1, 2, 5], [2, 3, 3, 1, 3], [5, 3, 3, 1, 1], [4, 4, 3, 1, 1], [1, 4, 5, 4, 2], [1, 1, 1, 2, 2], [3, 2, 5, 3, 3], [1, 2, 5, 1, 5], [4, 2, 1, 2, 5], [1, 4, 3, 1, 3], [5, 5, 2, 2, 3], [2, 4, 3, 5, 4], [3, 2, 4, 3, 3], [5, 2, 1, 5, 1], [5, 5, 3, 1, 3], [3, 1, 4, 3, 4], [1, 2, 1, 3, 5], [4, 1, 3, 2, 3], [5, 3, 5, 3, 4], [3, 5, 2, 4, 5], [4, 3, 3, 5, 5], [3, 1, 4, 3, 2], [1, 3, 3, 3, 1], [4, 5, 2, 2, 2], [4, 2, 4, 1, 2], [1, 5, 3, 5, 4], [3, 2, 2, 4, 3], [5, 2, 5, 5, 1], [4, 3, 2, 2, 4], [3, 4, 4, 4, 5], [1, 3, 1, 2, 2], [4, 1, 1, 3, 4], [3, 2, 5, 1, 5], [5, 3, 1, 1, 5], [5, 2, 3, 3, 5], [5, 5, 1, 1, 5], [4, 3, 4, 3, 5], [5, 2, 3, 4, 5], [1, 2, 3, 2, 4], [2, 4, 5, 3, 4], [1, 3, 5, 5, 2], [4, 1, 4, 1, 5]])
duedates=np.array([[ 435.75], [ 348.25], [ 589.75], [ 400.75], [ 533.75], [ 369.25], [ 467.25], [ 483. ], [ 358.75], [ 304.5 ], [ 364. ], [ 483. ], [ 374.5 ], [ 393.75], [ 516.25], [ 339.5 ], [ 603.75], [ 476. ], [ 645.75], [ 470.75], [ 330.75], [ 656.25], [ 584.5 ], [ 609. ], [ 425.25], [ 504. ], [ 348.25], [ 295.75], [ 253.75], [ 530.25], [ 336. ], [ 350. ], [ 376.25], [ 589.75], [ 570.5 ], [ 631.75], [ 449.75], [ 483. ], [ 416.5 ], [ 505.75], [ 364. ], [ 467.25], [ 383.25], [ 479.5 ], [ 420. ], [ 189. ], [ 567. ], [ 339.5 ], [ 518. ], [ 204.75]])