Main idea¶
Goal programming is used to handle an optimization problem with multiple goals of the form

\[\min_x \; f_i(x),\]

for \(i=1,\dots,N\), with \(N\) the total number of goals.
Each goal \(i\) has a weight \(\omega_i\) and a priority \(p_i\). Goals with the same priority are converted into one goal. For example, if multiple goals have priority 30, then the resulting minimization problem for priority 30 would look like

\[\min_x \; \sum_{i \,:\, p_i = 30} \omega_i f_i(x).\]
The reduced set of goals can be rewritten in the form

\[\min_x \; \sum_{i=1}^{n_p} \omega_{p,i} f_{p,i}(x),\]

with one such problem per distinct priority \(p\), where \(n_p\) is the number of goals with priority \(p\), and \(f_{p,i}\) and \(\omega_{p,i}\) denote the function and weight of the \(i\)-th goal with priority \(p\).
Problems with different priorities are solved from lowest to highest priority. The results from previous priorities are used as constraints in the problem of the next priority, to enforce that the minimal values of the previous priorities remain minimal. For example, suppose we have goals with priorities 10, 20, and 30. We first solve the minimization problem for priority 10 and find a solution \(x_{10}^*\). For the next priority, we add the constraints \(f_{10,i}(x) \leq f_{10,i}(x_{10}^*)\) or, equivalently, \(f_{10,i}(x) - f_{10,i}(x_{10}^*) \leq 0\), for \(i=1,2,\dots,n_{10}\). The resulting problem for priority 20 can thus be written as

\[
\begin{aligned}
\min_x \quad & \sum_{i=1}^{n_{20}} \omega_{20,i} f_{20,i}(x) \\
\text{subject to} \quad & f_{10,i}(x) - f_{10,i}(x_{10}^*) \leq 0, \quad i = 1, \dots, n_{10}.
\end{aligned}
\]
For priority 30, we then do something similar and add the constraints \(f_{10,i}(x) \leq f_{10,i}(x_{10}^*)\) and \(f_{20,i}(x) \leq f_{20,i}(x_{20}^*)\). In general, the extended problem for priority \(p\) can be written as

\[
\begin{aligned}
\min_x \quad & \sum_{i=1}^{n_p} \omega_{p,i} f_{p,i}(x) \\
\text{subject to} \quad & f_{q,i}(x) - f_{q,i}(x_q^*) \leq 0, \quad i = 1, \dots, n_q, \quad \text{for every priority } q < p.
\end{aligned}
\]
This is the default way to convert results of a previous optimization problem into constraints for the next optimization problem. Details on converting goals to constraints can be found in Converting goals to constraints.
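To make the lexicographic procedure concrete, the following is a minimal sketch that replays it on a made-up two-goal problem using SciPy instead of RTC-Tools; the goal functions, bounds, and starting point are purely illustrative.

```python
# Toy two-priority problem, solved lexicographically with SciPy.
import numpy as np
from scipy.optimize import minimize

bounds = [(0.0, 10.0), (0.0, 10.0)]  # simple box constraints on x


def f_10(x):
    # Hypothetical priority-10 goal: use as little of x[0] as possible.
    return x[0]


def f_20(x):
    # Hypothetical priority-20 goal: would prefer x[0] = 3 and x[1] = 2.
    return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2


x0 = np.array([5.0, 0.0])

# Priority 10: solve the highest-ranked problem on its own.
res_10 = minimize(f_10, x0, method="SLSQP", bounds=bounds)
x_star_10 = res_10.x  # x[0] goes to its lower bound 0

# Priority 20: minimize f_20 subject to f_10(x) - f_10(x_10^*) <= 0, so the
# priority-10 optimum cannot deteriorate. SciPy's "ineq" constraints require
# fun(x) >= 0, hence the reversed sign.
keep_priority_10 = {"type": "ineq", "fun": lambda x: f_10(x_star_10) - f_10(x)}
res_20 = minimize(f_20, x_star_10, method="SLSQP", bounds=bounds,
                  constraints=[keep_priority_10])

print("priority 10 solution:", x_star_10)  # approx. [0, 0]
print("priority 20 solution:", res_20.x)   # approx. [0, 2]: x[1] is still optimized,
                                           # but x[0] keeps its priority-10 optimum
```

Without the added constraint, the priority-20 problem would move \(x_0\) to 3; with it, the priority-10 optimum is preserved and only the remaining freedom is used.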
There are several options for implementing goal programming. These options are set with rtctools.optimization.goal_programming_mixin.GoalProgrammingMixin.goal_programming_options().
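For example, a problem class that mixes in GoalProgrammingMixin can override this method and adjust the defaults; the class name and the option changed below are only an illustration.

```python
from rtctools.optimization.goal_programming_mixin import GoalProgrammingMixin


class MyProblem(GoalProgrammingMixin):  # hypothetical problem class; real problems
                                        # also mix in a model/physics base class
    def goal_programming_options(self):
        options = super().goal_programming_options()
        options["scale_by_problem_size"] = True  # example: normalize objectives to [0, 1]
        return options
```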
class rtctools.optimization.goal_programming_mixin.GoalProgrammingMixin(**kwargs)

Bases: rtctools.optimization.goal_programming_mixin_base._GoalProgrammingMixinBase

Adds lexicographic goal programming to your optimization problem.
goal_programming_options() → Dict[str, Union[float, bool]]

Returns a dictionary of options controlling the goal programming process.
Option                  Type   Default value
violation_relaxation    float  0.0
constraint_relaxation   float  0.0
mu_reinit               bool   True
fix_minimized_values    bool   True/False
check_monotonicity      bool   True
equality_threshold      float  1e-8
interior_distance       float  1e-6
scale_by_problem_size   bool   False
keep_soft_constraints   bool   False
Before turning a soft constraint of the goal programming algorithm into a hard constraint, the violation variable (also known as epsilon) of each goal is relaxed with violation_relaxation. Use of this option is normally not required.

When turning a soft constraint of the goal programming algorithm into a hard constraint, the constraint is relaxed with constraint_relaxation. Use of this option is normally not required. Note that:

- Minimization goals do not get constraint_relaxation applied when fix_minimized_values is True.
- Because of the constraints it generates, when keep_soft_constraints is True, the option fix_minimized_values needs to be set to False for the constraint_relaxation to be applied at all.

A goal is considered to be violated if the violation, scaled between 0 and 1, is greater than the specified violation tolerance. Violated goals are fixed. Use of this option is normally not required.

When using the default solver (IPOPT), its barrier parameter mu is normally re-initialized at every iteration of the goal programming algorithm, unless mu_reinit is set to False. Use of this option is normally not required.

If fix_minimized_values is set to True, goal functions will be set equal to their optimized values in optimization problems generated during subsequent priorities. Otherwise, only an upper bound will be set. Use of this option is normally not required. Note that a non-zero goal relaxation overrules this option; a non-zero relaxation will always result in only an upper bound being set. Also note that the use of this option may add non-convex constraints to the optimization problem. The default value for this parameter is True for the default solvers IPOPT/BONMIN. If any other solver is used, the default value is False.

If check_monotonicity is set to True, it will be checked whether goals with the same function key form a monotonically decreasing sequence with respect to the target interval.

The option equality_threshold controls when a two-sided inequality constraint is folded into an equality constraint.

The option interior_distance controls the distance from the scaled target bounds, starting from which the function value is considered to lie in the interior of the target space.

If scale_by_problem_size is set to True, the objective (i.e. the sum of the violation variables) will be divided by the number of goals, and the path objective will be divided by the number of path goals and the number of active time steps (per goal). This makes sure the objectives are always in the range [0, 1], at the cost of solving each goal/time step less accurately.

The option keep_soft_constraints controls how the epsilon variables introduced for the target goals are dealt with in subsequent priorities. If keep_soft_constraints is set to False, each epsilon is replaced by its computed value, and those values are used to derive a new set of constraints. If keep_soft_constraints is set to True, the epsilons are kept as variables and the constraints are not modified. To preserve the goal programming philosophy, i.e. Pareto optimality, a single constraint is added to enforce that the objective function must always be at most the objective value attained at that priority. This method allows for a larger solution space, at the cost of a (possibly) more complex optimization problem: more variables are kept around throughout the optimization, and every objective function is turned into a constraint for the subsequent priorities (whereas with keep_soft_constraints set to False this is the case only for the functions of minimization goals).

Returns: A dictionary of goal programming options.
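As a concrete illustration of the interaction noted above, keeping the soft constraints while applying a constraint_relaxation requires fix_minimized_values to be disabled; the problem class below is hypothetical and only shows the option combination.

```python
from rtctools.optimization.goal_programming_mixin import GoalProgrammingMixin


class RelaxedProblem(GoalProgrammingMixin):  # hypothetical problem class
    def goal_programming_options(self):
        options = super().goal_programming_options()
        options["keep_soft_constraints"] = True
        options["fix_minimized_values"] = False   # required for the relaxation to apply
        options["constraint_relaxation"] = 1e-6   # small slack on the objective constraints
        return options
```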
goals() → List[rtctools.optimization.goal_programming_mixin_base.Goal]

User problem returns list of Goal objects.

Returns: A list of goals.

path_goals() → List[rtctools.optimization.goal_programming_mixin_base.Goal]

User problem returns list of path Goal objects.

Returns: A list of path goals.
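A typical way to provide these goals is to subclass Goal (for goals()) or StateGoal (for path_goals()) and return instances of them. In the sketch below, the state name storage.level, the variable q_release, the numeric targets, and the problem class are placeholders.

```python
from rtctools.optimization.goal_programming_mixin import (
    Goal,
    GoalProgrammingMixin,
    StateGoal,
)


class WaterLevelRangeGoal(StateGoal):
    # Path goal: keep a (hypothetical) model state within a target band at every time step.
    state = "storage.level"   # placeholder state name
    target_min = 0.43
    target_max = 0.44
    priority = 10


class MinimizeReleaseGoal(Goal):
    # Ordinary goal: minimize the integral of a (hypothetical) release variable.
    def function(self, optimization_problem, ensemble_member):
        return optimization_problem.integral("q_release")

    function_range = (0.0, 1000.0)  # rough bounds used for scaling
    function_nominal = 100.0
    priority = 20


class MyProblem(GoalProgrammingMixin):  # hypothetical; real problems also mix in
                                        # a model/physics base class
    def path_goals(self):
        return [WaterLevelRangeGoal(self)]

    def goals(self):
        return [MinimizeReleaseGoal()]
```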
priority_completed(priority: int) → None

Called after optimization for goals of a certain priority is completed.

Parameters: priority – The priority level that was completed.

priority_started(priority: int) → None

Called when optimization for goals of a certain priority is started.

Parameters: priority – The priority level that was started.
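These callbacks can be overridden, for example to log progress per priority; the class below is a hypothetical sketch that simply wraps the base implementations.

```python
import logging

from rtctools.optimization.goal_programming_mixin import GoalProgrammingMixin

logger = logging.getLogger("rtctools")


class LoggingProblem(GoalProgrammingMixin):  # hypothetical problem class
    def priority_started(self, priority: int) -> None:
        logger.info("Starting optimization for priority %d", priority)
        super().priority_started(priority)

    def priority_completed(self, priority: int) -> None:
        super().priority_completed(priority)
        logger.info("Completed optimization for priority %d", priority)
```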
class rtctools.optimization.single_pass_goal_programming_mixin.SinglePassGoalProgrammingMixin(**kwargs)

Bases: rtctools.optimization.goal_programming_mixin_base._GoalProgrammingMixinBase
Adds lexicographic goal programming to your optimization problem.
Unlike GoalProgrammingMixin, this mixin will call transcribe() only once per call to optimize(), and not \(N\) times for \(N\) priorities. It works similarly to how keep_soft_constraints = True works for GoalProgrammingMixin, while avoiding the repeated calls to transcribe the problem.

This mixin can work in one of two ways. What is shared between them is that all violation variables of all goals are generated once at the beginning, such that the state vector is exactly the same for all priorities. They also share that all goal constraints are added from the start. The two ways differ in how they handle/append the constraints on the objectives of previous priorities:
- At priority \(i\), the constraints are the same as the ones at priority \(i - 1\), with the addition of the objective constraint related to priority \(i - 1\). This is the default method.
- All objective constraints are added at the start. The objective constraints initially have bounds of \([-\infty, \infty]\), to be updated after each priority finishes.
There is a special qpsol alternative available, CachingQPSol, that avoids recalculations on constraints that were already present in previous priorities. This works for both options outlined above, because the assumptions of CachingQPSol are that:

- the state vector does not change, and
- any new constraints are appended at the end.

Note: Just like GoalProgrammingMixin, objective constraints are only added on the goal objectives, not on any custom user objective.
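In practice, switching an existing goal programming problem to the single-pass variant usually amounts to replacing the mixin in the class definition. The sketch below assumes the usual RTC-Tools mixins (CSVMixin, ModelicaMixin, CollocatedIntegratedOptimizationProblem) and a Modelica model matching the class name; the goals themselves are defined exactly as with GoalProgrammingMixin.

```python
from rtctools.optimization.collocated_integrated_optimization_problem import (
    CollocatedIntegratedOptimizationProblem,
)
from rtctools.optimization.csv_mixin import CSVMixin
from rtctools.optimization.modelica_mixin import ModelicaMixin
from rtctools.optimization.single_pass_goal_programming_mixin import (
    SinglePassGoalProgrammingMixin,
)
from rtctools.util import run_optimization_problem


class Example(
    SinglePassGoalProgrammingMixin,  # used in place of GoalProgrammingMixin
    CSVMixin,
    ModelicaMixin,
    CollocatedIntegratedOptimizationProblem,
):
    # goals() and path_goals() are defined exactly as with GoalProgrammingMixin;
    # ModelicaMixin typically looks up the Modelica model matching the class name.
    pass


if __name__ == "__main__":
    run_optimization_problem(Example)
```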
goal_programming_options() → Dict[str, Union[float, bool]]

Returns a dictionary of options controlling the goal programming process.
Option                  Type   Default value
constraint_relaxation   float  0.0
mu_reinit               bool   True
fix_minimized_values    bool   True/False
check_monotonicity      bool   True
equality_threshold      float  1e-8
scale_by_problem_size   bool   False
When a priority's objective is turned into a hard constraint, the constraint is relaxed with constraint_relaxation. Use of this option is normally not required.

When using the default solver (IPOPT), its barrier parameter mu is normally re-initialized at every iteration of the goal programming algorithm, unless mu_reinit is set to False. Use of this option is normally not required.

If fix_minimized_values is set to True, goal functions will be set equal to their optimized values in optimization problems generated during subsequent priorities. Otherwise, only an upper bound will be set. Use of this option is normally not required. Note that the use of this option may add non-convex constraints to the optimization problem. The default value for this parameter is True for the default solvers IPOPT/BONMIN. If any other solver is used, the default value is False.

If check_monotonicity is set to True, it will be checked whether goals with the same function key form a monotonically decreasing sequence with respect to the target interval.

The option equality_threshold controls when a two-sided inequality constraint is folded into an equality constraint.

If scale_by_problem_size is set to True, the objective (i.e. the sum of the violation variables) will be divided by the number of goals, and the path objective will be divided by the number of path goals and the number of active time steps (per goal). This makes sure the objectives are always in the range [0, 1], at the cost of solving each goal/time step less accurately.

Returns: A dictionary of goal programming options.
goals() → List[rtctools.optimization.goal_programming_mixin_base.Goal]

User problem returns list of Goal objects.

Returns: A list of goals.

path_goals() → List[rtctools.optimization.goal_programming_mixin_base.Goal]

User problem returns list of path Goal objects.

Returns: A list of path goals.
priority_completed(priority: int) → None

Called after optimization for goals of a certain priority is completed.

Parameters: priority – The priority level that was completed.

priority_started(priority: int) → None

Called when optimization for goals of a certain priority is started.

Parameters: priority – The priority level that was started.