# Multi-objective optimization

class rtctools.optimization.goal_programming_mixin.Goal[source]

Bases: object

Base class for lexicographic goal programming goals.

A goal is defined by overriding the function() method.

Variables:

• function_range – Range of goal function. Required if a target is set.
• function_nominal – Nominal value of function, used for scaling. Default is 1.
• target_min – Desired lower bound for goal function. Default is numpy.nan.
• target_max – Desired upper bound for goal function. Default is numpy.nan.
• priority – Integer priority of goal. Default is 1.
• weight – Optional weighting applied to the goal. Default is 1.0.
• order – Penalization order of goal violation. Default is 2.
• critical – If True, the algorithm will abort if this goal cannot be fully met. Default is False.
• relaxation – Amount of slack added to the hard constraints related to the goal. Must be nonnegative. Default is 0.0.

The target bounds indicate the range within which the function should stay, if possible. Goals are, in that sense, soft, as opposed to standard hard constraints.

Four types of goals can be created:

1. Minimization goal if no target bounds are set:

$\min f$
2. Lower bound goal if target_min is set:

$m \leq f$
3. Upper bound goal if target_max is set:

$f \leq M$
4. Combined lower and upper bound goal if target_min and target_max are both set:

$m \leq f \leq M$

Lower priority goals take precedence over higher priority goals.

Goals with the same priority are weighted off against each other in a single objective function.
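
The interplay of priorities can be illustrated with a small self-contained sketch (a pure-Python toy problem, not the rtc-tools API): priority 1 asks for $x \geq 1.1$ as a soft target, priority 2 minimizes $x$, and the priority-1 result constrains the priority-2 search.

```python
# Toy lexicographic goal programming (illustration only, not the rtc-tools API).
# Priority 1: soft target x >= 1.1. Priority 2: minimize x.
candidates = [i / 10 for i in range(0, 31)]  # x in [0.0, 3.0], step 0.1

def violation(x):
    # How far x falls short of the priority-1 target
    return max(0.0, 1.1 - x)

# Priority 1: keep only the candidates with the smallest achievable violation.
best_violation = min(violation(x) for x in candidates)
feasible = [x for x in candidates if violation(x) == best_violation]

# Priority 2: among those, minimize x itself.
x_opt = min(feasible)
print(x_opt)  # -> 1.1
```

Priority 2 is not allowed to worsen the priority-1 outcome; it only selects among the candidates that already satisfy priority 1 as well as possible.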

In goals where a target is set:
• The function range interval must be provided, as it is used to introduce hard constraints on the value that the function can take. If one is unsure which values the function can take, it is recommended to overestimate this interval. However, an overestimated interval will negatively influence how accurately the target bounds are met.
• The target provided must be contained in the function range.
• The function nominal is used to scale the constraints.
• If both a target_min and a target_max are set, the target maximum must be at least equal to the target minimum.
• In a path goal, the target can be a Timeseries.
In minimization goals:
• The function range is not used and therefore cannot be set.
• The function nominal is used to scale the function value in the objective function. To ensure that all goals are given a similar importance, it is crucial to provide an accurate estimate of this parameter.

The goal violation value is taken to the order’th power in the objective function of the final optimization problem.
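
Concretely, the single-priority objective can be sketched as follows (an illustration consistent with the attribute descriptions above, with $\epsilon_i$ the scaled violation of goal $i$, $w_i$ its weight, and $r_i$ its order):

$\min \sum_i w_i \, \epsilon_i^{r_i}$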

Relaxation is used to loosen the constraints that are set after the optimization of the goal’s priority. The unit of the relaxation is equal to that of the goal function.

A goal can be written in vector form. In a vector goal:
• The goal size determines how many goals there are.
• The goal function has shape (goal size, 1).
• The function is either minimized or has targets, which may differ per entry.
• Function nominal can either be an array with as many entries as the goal size or have a single value.
• Function ranges can either be an array with as many entries as the goal size or have a single value.
• In a goal, the target can either be an array with as many entries as the goal size or have a single value.
• In a path goal, the target can also be a Timeseries whose values are either a 1-dimensional vector or have as many columns as the goal size.
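
For illustration, a vector path goal might be sketched as follows (a sketch only: it assumes casadi imported as ca, numpy as np, and hypothetical states 'x', 'y' and 'z'):

```python
import casadi as ca
import numpy as np

class MyVectorGoal(Goal):
    size = 3  # three goals in one vector goal

    def function(self, optimization_problem, ensemble_member):
        # Shape (3, 1): one row per goal entry
        return ca.vertcat(
            optimization_problem.state('x'),
            optimization_problem.state('y'),
            optimization_problem.state('z'))

    function_range = (0.0, 10.0)            # single range, shared by all entries
    target_min = np.array([1.0, 2.0, 3.0])  # one target per entry
    priority = 1
```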

Example definition of the point goal $x(t) \geq 1.1$ for $t = 1.0$ at priority 1:

```python
class MyGoal(Goal):
    def function(self, optimization_problem, ensemble_member):
        # State 'x' at time t = 1.0
        t = 1.0
        return optimization_problem.state_at('x', t, ensemble_member)

    function_range = (1.0, 2.0)
    target_min = 1.1
    priority = 1
```


Example definition of the path goal $x(t) \geq 1.1$ for all $t$ at priority 2:

```python
class MyPathGoal(Goal):
    def function(self, optimization_problem, ensemble_member):
        # State 'x' at any point in time
        return optimization_problem.state('x')

    function_range = (1.0, 2.0)
    target_min = 1.1
    priority = 2
```


Note that for path goals, the ensemble member index is not passed to the call to OptimizationProblem.state(). This call returns a time-independent symbol that is also independent of the active ensemble member. Path goals are applied to all times and all ensemble members simultaneously.

critical = False

Critical goals must always be fully satisfied.

function(optimization_problem: rtctools.optimization.optimization_problem.OptimizationProblem, ensemble_member: int) → casadi.casadi.MX[source]

This method returns a CasADi MX object describing the goal function.

Returns: A CasADi MX object.

function_nominal = 1.0

Nominal value of function (used for scaling)

function_range = (nan, nan)

Range of goal function

function_value_timeseries_id = None

Timeseries ID for function value data (optional)

get_function_key(optimization_problem: rtctools.optimization.optimization_problem.OptimizationProblem, ensemble_member: int) → str[source]

Returns a key string uniquely identifying the goal function. This is used to eliminate linearly dependent constraints from the optimization problem.

has_target_bounds

True if the user goal has min/max bounds.

has_target_max

True if the user goal has max bounds.

has_target_min

True if the user goal has min bounds.

order = 2

The goal violation value is taken to the order’th power in the objective function.

priority = 1

Lower priority goals take precedence over higher priority goals.

relaxation = 0.0

Absolute relaxation applied to the optimized values of this goal

size = 1

The size of the goal if it’s a vector goal.

target_max = nan

Desired upper bound for goal function

target_min = nan

Desired lower bound for goal function

violation_timeseries_id = None

Timeseries ID for goal violation data (optional)

weight = 1.0

Goals with the same priority are weighted off against each other in a single objective function.

class rtctools.optimization.goal_programming_mixin.StateGoal(optimization_problem)[source]

Bases: rtctools.optimization.goal_programming_mixin_base.Goal

Base class for lexicographic goal programming path goals that act on a single model state.

A state goal is defined by setting at least the state class variable.

Variables:

• state – State on which the goal acts. Required.
• target_min – Desired lower bound for goal function. Default is numpy.nan.
• target_max – Desired upper bound for goal function. Default is numpy.nan.
• priority – Integer priority of goal. Default is 1.
• weight – Optional weighting applied to the goal. Default is 1.0.
• order – Penalization order of goal violation. Default is 2.
• critical – If True, the algorithm will abort if this goal cannot be fully met. Default is False.

Example definition of the goal $x(t) \geq 1.1$ for all $t$ at priority 2:

```python
class MyStateGoal(StateGoal):
    state = 'x'
    target_min = 1.1
    priority = 2
```


Contrary to ordinary Goal objects, StateGoal objects need to be initialized with an OptimizationProblem instance to allow extraction of state metadata, such as bounds and nominal values. Consequently, state goals must be instantiated as follows:

```python
my_state_goal = MyStateGoal(optimization_problem)
```


Note that StateGoal is a helper class. State goals can also be defined using Goal as direct base class, by implementing the function method and providing the function_range and function_nominal class variables manually.
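
Goals are handed to the optimizer by overriding goals() or path_goals() on the problem class. A sketch (the problem class and its other base classes are hypothetical):

```python
class Problem(GoalProgrammingMixin, ...):
    def path_goals(self):
        goals = super().path_goals()
        goals.append(MyStateGoal(self))
        return goals
```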

__init__(optimization_problem)[source]

Initialize the state goal object.

Parameters: optimization_problem – OptimizationProblem instance.

class rtctools.optimization.goal_programming_mixin.GoalProgrammingMixin(**kwargs)[source]

Bases: rtctools.optimization.goal_programming_mixin_base._GoalProgrammingMixinBase

goal_programming_options() → Dict[str, Union[float, bool]][source]

Returns a dictionary of options controlling the goal programming process.

| Option | Type | Default value |
| --- | --- | --- |
| violation_relaxation | float | 0.0 |
| constraint_relaxation | float | 0.0 |
| mu_reinit | bool | True |
| fix_minimized_values | bool | True/False |
| check_monotonicity | bool | True |
| equality_threshold | float | 1e-8 |
| interior_distance | float | 1e-6 |
| scale_by_problem_size | bool | False |
| keep_soft_constraints | bool | False |

Before a soft constraint of the goal programming algorithm is turned into a hard constraint, the violation variable (also known as epsilon) of each goal is relaxed by violation_relaxation. Use of this option is normally not required.

When a soft constraint of the goal programming algorithm is turned into a hard constraint, the constraint is relaxed by constraint_relaxation. Use of this option is normally not required. Note that:

1. Minimization goals do not get constraint_relaxation applied when fix_minimized_values is True.
2. When keep_soft_constraints is True, the option fix_minimized_values needs to be set to False for constraint_relaxation to be applied at all, because of the constraints that keep_soft_constraints generates.

A goal is considered to be violated if the violation, scaled between 0 and 1, is greater than the specified tolerance. Violated goals are fixed. Use of this option is normally not required.

When using the default solver (IPOPT), its barrier parameter mu is normally re-initialized at every iteration of the goal programming algorithm, unless mu_reinit is set to False. Use of this option is normally not required.

If fix_minimized_values is set to True, goal functions will be set to equal their optimized values in optimization problems generated during subsequent priorities. Otherwise, only an upper bound will be set. Use of this option is normally not required. Note that a non-zero goal relaxation overrules this option; a non-zero relaxation will always result in only an upper bound being set. Also note that the use of this option may add non-convex constraints to the optimization problem. The default value for this parameter is True for the default solvers IPOPT/BONMIN. If any other solver is used, the default value is False.

If check_monotonicity is set to True, then it will be checked whether goals with the same function key form a monotonically decreasing sequence with regards to the target interval.

The option equality_threshold controls when a two-sided inequality constraint is folded into an equality constraint.

The option interior_distance controls the distance from the scaled target bounds, starting from which the function value is considered to lie in the interior of the target space.

If scale_by_problem_size is set to True, the objective (i.e. the sum of the violation variables) will be divided by the number of goals, and the path objective will be divided by the number of path goals and the number of active time steps (per goal). This will make sure the objectives are always in the range [0, 1], at the cost of solving each goal/time step less accurately.

The option keep_soft_constraints controls how the epsilon variables introduced in the target goals are dealt with in subsequent priorities. If keep_soft_constraints is set to False, each epsilon is replaced by its computed value and those are used to derive a new set of constraints. If keep_soft_constraints is set to True, the epsilons are kept as variables and the constraints are not modified. To ensure the goal programming philosophy, i.e., Pareto optimality, a single constraint is added to enforce that the objective function must always be at most the objective value. This method allows for a larger solution space, at the cost of having a (possibly) more complex optimization problem. Indeed, more variables are kept around throughout the optimization and any objective function is turned into a constraint for the subsequent priorities (while in the False option this was the case only for the function of minimization goals).
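
Individual options can be overridden by extending goal_programming_options() in the problem class. A sketch (the chosen values are purely illustrative):

```python
def goal_programming_options(self):
    options = super().goal_programming_options()
    options["keep_soft_constraints"] = True
    options["fix_minimized_values"] = False  # required with keep_soft_constraints
    return options
```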

Returns: A dictionary of goal programming options.

goals() → List[rtctools.optimization.goal_programming_mixin_base.Goal]

User problem returns list of Goal objects.

Returns: A list of goals.

path_goals() → List[rtctools.optimization.goal_programming_mixin_base.Goal]

User problem returns list of path Goal objects.

Returns: A list of path goals.

priority_completed(priority: int) → None

Called after optimization for goals of certain priority is completed.

Parameters: priority – The priority level that was completed.

priority_started(priority: int) → None

Called when optimization for goals of certain priority is started.

Parameters: priority – The priority level that was started.
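
These callbacks can be used, for example, to inspect intermediate results per priority. A sketch (the stored attribute is hypothetical):

```python
def priority_completed(self, priority):
    super().priority_completed(priority)
    # Snapshot of the results reached at this priority (hypothetical attribute)
    self._intermediate_results[priority] = self.extract_results()
```
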
class rtctools.optimization.single_pass_goal_programming_mixin.SinglePassGoalProgrammingMixin(**kwargs)[source]

Bases: rtctools.optimization.goal_programming_mixin_base._GoalProgrammingMixinBase

Unlike GoalProgrammingMixin, this mixin will call transcribe() only once per call to optimize(), and not $N$ times for $N$ priorities. It works similarly to how keep_soft_constraints = True works for GoalProgrammingMixin, while avoiding the repeated calls to transcribe the problem.

This mixin can work in one of two ways. What is shared between them is that all violation variables of all goals are generated once at the beginning, such that the state vector is exactly the same for all priorities. They also share that all goal constraints are added from the start. How they differ is in how they handle/append the constraints on the objective of previous priorities:

1. At priority $i$, the constraints are the same as those at priority $i - 1$, with the addition of the objective constraint related to priority $i - 1$. This is the default method.
2. All objective constraints are added at the start. The objective constraints will have bounds of $[-\infty, \infty]$ at the start, to be updated after each priority finishes.

A special qpsol alternative, CachingQPSol, is available that avoids recalculating constraints that were already present in previous priorities. It works for both options outlined above, because CachingQPSol assumes that:

1. The state vector does not change
2. Any new constraints are appended at the end

Note

Just like in GoalProgrammingMixin, objective constraints are added only for the goal objectives, not for any custom user objective.

goal_programming_options() → Dict[str, Union[float, bool]][source]

Returns a dictionary of options controlling the goal programming process.

| Option | Type | Default value |
| --- | --- | --- |
| constraint_relaxation | float | 0.0 |
| mu_reinit | bool | True |
| fix_minimized_values | bool | True/False |
| check_monotonicity | bool | True |
| equality_threshold | float | 1e-8 |
| scale_by_problem_size | bool | False |

When a priority’s objective is turned into a hard constraint, the constraint is relaxed by constraint_relaxation. Use of this option is normally not required.

When using the default solver (IPOPT), its barrier parameter mu is normally re-initialized at every iteration of the goal programming algorithm, unless mu_reinit is set to False. Use of this option is normally not required.

If fix_minimized_values is set to True, goal functions will be set to equal their optimized values in optimization problems generated during subsequent priorities. Otherwise, only an upper bound will be set. Use of this option is normally not required. Note that the use of this option may add non-convex constraints to the optimization problem. The default value for this parameter is True for the default solvers IPOPT/BONMIN. If any other solver is used, the default value is False.

If check_monotonicity is set to True, then it will be checked whether goals with the same function key form a monotonically decreasing sequence with regards to the target interval.

The option equality_threshold controls when a two-sided inequality constraint is folded into an equality constraint.

If scale_by_problem_size is set to True, the objective (i.e. the sum of the violation variables) will be divided by the number of goals, and the path objective will be divided by the number of path goals and the number of active time steps (per goal). This will make sure the objectives are always in the range [0, 1], at the cost of solving each goal/time step less accurately.

Returns: A dictionary of goal programming options.

goals() → List[rtctools.optimization.goal_programming_mixin_base.Goal]

User problem returns list of Goal objects.

Returns: A list of goals.

path_goals() → List[rtctools.optimization.goal_programming_mixin_base.Goal]

User problem returns list of path Goal objects.

Returns: A list of path goals.

priority_completed(priority: int) → None

Called after optimization for goals of certain priority is completed.

Parameters: priority – The priority level that was completed.

priority_started(priority: int) → None

Called when optimization for goals of certain priority is started.

Parameters: priority – The priority level that was started.
class rtctools.optimization.single_pass_goal_programming_mixin.CachingQPSol[source]

Bases: object

Alternative to ca.qpsol() that caches the Jacobian between calls.

Typical usage would be something like:
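
A sketch of such usage (assumptions: the problem class mixes in SinglePassGoalProgrammingMixin, its other base classes are elided, and solver_options() accepts a casadi_solver entry):

```python
class Example(SinglePassGoalProgrammingMixin, ...):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._qpsol = CachingQPSol()

    def solver_options(self):
        options = super().solver_options()
        options["casadi_solver"] = self._qpsol
        return options
```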

## Minimize absolute value

class rtctools.optimization.min_abs_goal_programming_mixin.MinAbsGoalProgrammingMixin(*args, **kwargs)[source]

Bases: rtctools.optimization.goal_programming_mixin_base._GoalProgrammingMixinBase

Behaves similarly to GoalProgrammingMixin, but any MinAbsGoal passed to min_abs_goals() or min_abs_path_goals() will automatically be converted into:

1. An auxiliary minimization variable
2. Two additional linear constraints relating the auxiliary variable to the goal function
3. A new goal (of a different type) minimizing the auxiliary variable
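
The reformulation in steps 1 and 2 can be illustrated with a small self-contained sketch (pure Python, not the rtc-tools API): minimizing $|f|$ is replaced by minimizing an auxiliary variable $t$ subject to $-t \leq f \leq t$.

```python
# Toy illustration of the |f| reformulation (not the rtc-tools API):
# for a fixed function value f, the smallest feasible auxiliary t
# satisfying -t <= f <= t equals |f|.
def min_abs_via_auxiliary(f_value, t_candidates):
    feasible = [t for t in t_candidates if -t <= f_value <= t]
    return min(feasible)

t_grid = [i / 100 for i in range(0, 501)]  # t in [0.00, 5.00]
print(min_abs_via_auxiliary(-2.5, t_grid))  # -> 2.5
```

Because both added constraints are linear in $t$ and $f$, the conversion keeps a linear problem linear, which is why the resulting goal defaults to order 1.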
min_abs_goals() → List[rtctools.optimization.min_abs_goal_programming_mixin.MinAbsGoal][source]

User problem returns list of MinAbsGoal objects.

Returns: A list of goals.

min_abs_path_goals() → List[rtctools.optimization.min_abs_goal_programming_mixin.MinAbsGoal][source]

User problem returns list of MinAbsGoal objects.

Returns: A list of goals.

class rtctools.optimization.min_abs_goal_programming_mixin.MinAbsGoal[source]

Bases: rtctools.optimization.goal_programming_mixin_base.Goal

Absolute-value minimization goal class, which can be used to minimize the absolute value of the goal’s (linear) function. Contrary to its superclass, the default order is 1, as absolute-value minimization is typically desired for fully linear problems.

class rtctools.optimization.min_abs_goal_programming_mixin.MinAbsStateGoal(optimization_problem)[source]

Bases: rtctools.optimization.goal_programming_mixin_base.StateGoal, rtctools.optimization.min_abs_goal_programming_mixin.MinAbsGoal

__init__(optimization_problem)

Initialize the state goal object.

Parameters: optimization_problem – OptimizationProblem instance.

## Linearized order

class rtctools.optimization.linearized_order_goal_programming_mixin.LinearizedOrderGoalProgrammingMixin(**kwargs)[source]

Bases: rtctools.optimization.goal_programming_mixin_base._GoalProgrammingMixinBase

Adds support for linearizing the goal objective functions, i.e., the violation variables raised to a certain power. This can be used to keep a problem fully linear and/or to make sure that no quadratic constraints appear when using the goal programming option keep_soft_constraints.

goal_programming_options()[source]

If linearize_goal_order is set to True, the goal’s order will be approximated linearly for any goal with order > 1. Note that this option does not work with minimization goals of higher order. Instead, it is suggested to transform such minimization goals into goals with a target (and function range) when using this option. Note that this option can be overridden at the level of an individual goal by using a LinearizedOrderGoal (see LinearizedOrderGoal.linearize_order).

class rtctools.optimization.linearized_order_goal_programming_mixin.LinearizedOrderGoal[source]

Bases: rtctools.optimization.goal_programming_mixin_base.Goal

linearize_order = None

Override linearization of goal order. Related global goal programming option is linearize_goal_order (see LinearizedOrderGoalProgrammingMixin.goal_programming_options()). The default value of None defers to the global option, but the user can explicitly override it per goal by setting this value to True or False.
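
A per-goal override might be sketched as follows (hypothetical goal; the state and attribute values are purely illustrative):

```python
class MyLinearizedGoal(LinearizedOrderGoal):
    linearize_order = True  # force the linear approximation for this goal
    order = 2
    target_max = 0.0
    function_range = (-10.0, 10.0)

    def function(self, optimization_problem, ensemble_member):
        return optimization_problem.state('x')  # hypothetical state
```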

class rtctools.optimization.linearized_order_goal_programming_mixin.LinearizedOrderStateGoal(optimization_problem)[source]

Bases: rtctools.optimization.linearized_order_goal_programming_mixin.LinearizedOrderGoal, rtctools.optimization.goal_programming_mixin_base.StateGoal

Convenience class definition for linearized order state goals. Note that it is possible to just inherit from LinearizedOrderGoal to get the needed functionality for control of the linearization at goal level.

__init__(optimization_problem)

Initialize the state goal object.

Parameters: optimization_problem – OptimizationProblem instance.