figaroh.optimal package

Optimal calibration and trajectory optimization module.

This module provides base classes for optimal calibration and trajectory optimization for robotic systems.

class figaroh.optimal.BaseOptimalCalibration(robot, config_file='config/robot_config.yaml')[source]

Bases: ABC

Base class for robot optimal configuration generation for calibration.

This class implements the framework for generating optimal robot configurations that maximize the observability of kinematic parameters during calibration. It uses Second-Order Cone Programming (SOCP) to solve the D-optimal design problem for parameter estimation.

The class provides a Template Method pattern where the main workflow is defined, but specific optimization strategies can be customized by derived classes for different robot types.

Workflow:
  1. Load candidate configurations from file (CSV or YAML)

  2. Calculate kinematic regressors for all candidates

  3. Compute information matrices for each configuration

  4. Solve SOCP optimization to find optimal subset

  5. Select configurations with significant weights

  6. Visualize and save results

Key Features:
  • D-optimal experimental design for calibration

  • Support for multiple calibration models (full_params, joint_offset)

  • Automatic minimum configuration calculation

  • SOCP-based optimization with convex relaxation

  • Comprehensive visualization and analysis tools

  • File I/O for configuration management

Mathematical Background:

The method maximizes the determinant of the Fisher Information Matrix:

  max det(Σᵢ wᵢ Rᵢᵀ Rᵢ)  subject to  Σᵢ wᵢ ≤ 1, wᵢ ≥ 0

Where:
  • Rᵢ is the kinematic regressor for configuration i

  • wᵢ is the weight assigned to configuration i

  • The objective maximizes parameter estimation precision
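
For intuition, the objective can be evaluated directly for any feasible weight vector. A minimal NumPy sketch (illustrative only; the regressors and weights below are random placeholders, not values produced by the library):

  import numpy as np

  # Placeholder regressors: one 6x42 kinematic regressor per candidate config
  R_list = [np.random.rand(6, 42) for _ in range(100)]
  w = np.full(len(R_list), 1.0 / len(R_list))      # uniform feasible weights

  # Weighted Fisher Information Matrix M(w) = Σᵢ wᵢ Rᵢᵀ Rᵢ
  M = sum(wi * R.T @ R for wi, R in zip(w, R_list))

  # D-optimality objective: (log-)determinant of M(w); larger is better
  sign, logdet = np.linalg.slogdet(M)
  print("log det M(w) =", logdet)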

robot

Robot model instance loaded with FIGAROH

model

Pinocchio robot model

data

Pinocchio robot data

calib_config

Calibration parameters from configuration file

Type:

dict

optimal_configurations

Selected optimal configurations

Type:

dict

optimal_weights

Weights assigned to configurations

Type:

ndarray

minNbChosen

Minimum number of configurations required

Type:

int

# Internal computation attributes
R_rearr

Rearranged kinematic regressor matrix

Type:

ndarray

detroot_whole

Determinant root of full information matrix

Type:

float

w_list

Solution weights from SOCP optimization

Type:

list

w_dict_sort

Sorted weights by configuration index

Type:

dict

Example

>>> # Basic usage for TIAGo robot
>>> from figaroh.robots import TiagoRobot
>>> robot = TiagoRobot()
>>>
>>> # Create optimal calibration instance
>>> opt_calib = TiagoOptimalCalibration(robot, "config/tiago.yaml")
>>>
>>> # Generate optimal configurations
>>> opt_calib.solve(save_file=True)
>>>
>>> # Access results
>>> print(f"Selected {len(opt_calib.optimal_configurations)} configs")
>>> print(f"Minimum required: {opt_calib.minNbChosen}")

See also

BaseCalibration: Main calibration framework
SOCPOptimizer: Second-order cone programming solver
TiagoOptimalCalibration: TIAGo-specific implementation
UR10OptimalCalibration: UR10-specific implementation

calculate_detroot_whole()[source]

Calculate determinant root of complete information matrix.

Computes the determinant root of the full Fisher Information Matrix formed by all candidate configurations. This serves as the theoretical upper bound for the D-optimality criterion and is used for performance comparison.

Mathematical Background:

M_full = Rᵀ R  (full regressor)
detroot_whole = det(M_full)^(1/n) / sqrt(n)

This represents the geometric mean of eigenvalues, normalized by matrix dimension for scale independence.
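
An equivalent NumPy sketch of this quantity (the class itself computes it through picos; R_rearr is the rearranged regressor):

  import numpy as np

  M_full = R_rearr.T @ R_rearr                  # full Fisher Information Matrix
  n = M_full.shape[0]
  sign, logdet = np.linalg.slogdet(M_full)      # numerically safer than det()
  detroot_whole = np.exp(logdet / n) / np.sqrt(n)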

Side Effects:
  • Sets self.detroot_whole with computed determinant root

  • Prints the computed value for verification

Prerequisites:
  • Kinematic regressor must be calculated (self.R_rearr)

  • Requires picos library for determinant computation

Raises:
  • AssertionError – If regressor calculation not performed first

  • ImportError – If picos library not available

See also

calculate_regressor: Prerequisite for this computation
plot: Uses this value for performance comparison

calculate_optimal_configurations()[source]

Solve SOCP optimization to find optimal configuration subset.

This is the core optimization method that solves the D-optimal experimental design problem using Second-Order Cone Programming. The method finds weights for each candidate configuration that maximize the determinant of the Fisher Information Matrix.

Optimization Problem:

maximize det(Σᵢ wᵢ Xᵢ)^(1/n) subject to: Σᵢ wᵢ ≤ 1, wᵢ ≥ 0

Where Xᵢ are information matrices and wᵢ are configuration weights.

Selection Process:
  1. Solve SOCP optimization for optimal weights

  2. Select configurations with weights > eps_opt (1e-5)

  3. Verify minimum configuration requirement is met

  4. Store selected configurations and weights
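
Steps 2-4 amount to thresholding the solver's weight vector. A minimal NumPy sketch, treating w_list, q_measured, and minNbChosen as local variables (in the class they are attributes):

  import numpy as np

  eps_opt = 1e-5                                # selection threshold
  w = np.asarray(w_list)                        # weights from the SOCP solver
  chosen = np.flatnonzero(w > eps_opt)          # configs with significant weight
  assert len(chosen) >= minNbChosen, "infeasible design: too few configurations"
  optimal_configurations = {int(i): q_measured[i] for i in chosen}
  optimal_weights = w[chosen]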

Side Effects:
  • Sets self.w_list with optimization solution weights

  • Sets self.w_dict_sort with sorted weight dictionary

  • Sets self.optimal_configurations with selected configs

  • Sets self.optimal_weights with final weight values

  • Sets self.nb_chosen with number of selected configurations

  • Prints timing information and selection results

Returns:

True if optimization successful and feasible

Return type:

bool

Raises:

AssertionError – If regressor not calculated or if insufficient configurations selected (infeasible design)

Example

>>> opt_calib.calculate_optimal_configurations()
solve time of socp: 2.35 seconds
12 configs are chosen: [0, 5, 12, 18, 23, ...]

See also

SOCPOptimizer: The optimization solver implementation
calculate_regressor: Required prerequisite computation

calculate_regressor()[source]

Calculate kinematic regressors and information matrices.

Computes the kinematic regressor matrices that relate kinematic parameter variations to end-effector pose changes. This is the mathematical foundation for the optimization problem.

The method performs several key computations:
  1. Calculate base kinematic regressors for all configurations

  2. Rearrange regressor matrix by sample order for efficiency

  3. Compute individual information matrices for each configuration

  4. Store results for optimization access

Mathematical Background:

For each configuration i, the regressor Rᵢ satisfies: δx = Rᵢ δθ where δx is pose variation and δθ is parameter variation.

The information matrix is: Xᵢ = RᵢᵀRᵢ

Side Effects:
  • Sets self.R_rearr with rearranged kinematic regressor

  • Sets self._subX_list with list of information matrices

  • Sets self._subX_dict with indexed information matrices

  • Prints parameter names for verification

Returns:

True if calculation successful

Return type:

bool

Prerequisites:
  • Joint configurations must be loaded (self.q_measured)

  • Robot model and parameters must be initialized

See also

calculate_base_kinematics_regressor: Core regressor computation
rearrange_rb: Matrix rearrangement for optimization
sub_info_matrix: Information matrix decomposition

initialize()[source]

Initialize the optimization process by preparing all required data.

This method orchestrates the initialization sequence required before optimization can begin. It ensures all mathematical components are properly computed and cached for efficient optimization.

The initialization sequence:
  1. Load candidate configurations from external files

  2. Calculate kinematic regressors for all configurations

  3. Compute determinant root of the full information matrix

Prerequisites:
  • Robot model and parameters must be loaded

  • Configuration file must specify valid sample data paths

Side Effects:
  • Sets self.q_measured with candidate joint configurations

  • Sets self.R_rearr with rearranged kinematic regressor

  • Sets self._subX_dict and self._subX_list with info matrices

  • Sets self.detroot_whole with full matrix determinant root

Raises:
  • ValueError – If sample configuration file is invalid or missing

  • AssertionError – If regressor calculation fails

See also

load_candidate_configurations: Configuration data loading
calculate_regressor: Kinematic regressor computation
calculate_detroot_whole: Information matrix analysis

load_candidate_configurations()[source]

Load candidate joint configurations from external data files.

Reads robot joint configurations from CSV or YAML files that serve as the candidate pool for optimization. The method supports multiple file formats and automatically updates the sample count parameter.

Supported formats:
  • CSV: Standard measurement data format with joint configurations

  • YAML: Structured format with named joints and configurations

The YAML format expects:

  calibration_joint_names: [joint1, joint2, ...]
  calibration_joint_configurations: [[q1_1, q1_2, ...], [q2_1, ...]]
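
A minimal sketch of reading that layout (the file path is illustrative):

  import numpy as np
  import yaml

  with open("data/candidate_configs.yaml") as f:
      cfg = yaml.safe_load(f)

  joint_names = cfg["calibration_joint_names"]
  q_measured = np.array(cfg["calibration_joint_configurations"])
  print(q_measured.shape)                       # (NbSample, n_joints)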

Side Effects:
  • Sets self.q_measured with loaded joint configurations

  • Updates self.calib_config["NbSample"] with actual sample count

  • May load self._configs for YAML format data

Raises:
  • ValueError – If sample_configs_file not specified in configuration or if file format is not supported

  • FileNotFoundError – If specified data file does not exist

Example

>>> # Assuming config specifies "data/candidate_configs.yaml"
>>> opt_calib.load_candidate_configurations()
>>> print(opt_calib.q_measured.shape)  # (1000, 7) for TIAGo

load_param(config_file, setting_type='calibration')[source]

Load optimization parameters from YAML configuration file.

Reads and parses calibration configuration from YAML file, extracting robot-specific parameters needed for optimal configuration generation. The configuration supports multiple setting types within the same file.

Parameters:
  • config_file (str) – Path to YAML configuration file containing optimization and calibration parameters

  • setting_type (str) – Configuration section to load. Options include “calibration”, “identification”, or custom section names. Default “calibration”.

Side Effects:
  • Updates self.calib_config with loaded configuration dictionary

  • Overwrites any existing parameter settings

Raises:
  • FileNotFoundError – If config_file does not exist

  • yaml.YAMLError – If YAML parsing fails

  • KeyError – If setting_type section not found in config

Example

>>> opt_calib.load_param("config/tiago_optimal.yaml")
>>> print(opt_calib.calib_config["calib_model"])  # "full_params"
>>> print(opt_calib.calib_config["NbSample"])     # 1000

plot()[source]

Generate comprehensive visualization of optimization results.

Creates dual-panel plots that provide insight into the optimization quality and configuration selection process. The visualizations help assess the efficiency of the selected configuration subset.

Plot Components:
  1. D-optimality criterion vs. number of configurations

    • Shows how information matrix determinant improves with additional configurations

    • Normalized against theoretical maximum (all configurations)

    • Helps identify diminishing returns point

  2. Configuration weights in logarithmic scale

    • Displays weight assigned to each candidate configuration

    • Configurations above threshold (eps_opt) are selected

    • Shows selection boundary and weight distribution

Prerequisites:
  • Optimization must be completed (optimal_configurations available)

  • Information matrices must be computed

Side Effects:
  • Creates matplotlib figure with two subplots

  • Displays plots using plt.show()

  • May block execution until plots are closed

Returns:

True if plotting successful

Return type:

bool

Mathematical Details:

D-optimality ratio = detroot_whole / det(selected_subset)

This ratio approaches 1.0 as the selected subset approaches optimality.

Example

>>> opt_calib.solve()
>>> # Plot is automatically generated, or call manually:
>>> opt_calib.plot()

See also

calculate_optimal_configurations: Generates data for plotting
calculate_detroot_whole: Provides normalization reference

plot_results()[source]

Plot optimal calibration results using unified results manager.

rearrange_rb(R_b, calib_config)[source]

Rearrange the kinematic regressor rows into sample-numbered order.

save_results(output_dir='results')[source]

Save optimal configuration results using unified results manager.

solve(save_file=False)[source]

Solve the optimal configuration selection problem.

This is the main entry point that orchestrates the complete optimal configuration generation workflow. It automatically handles initialization if not already performed, solves the SOCP optimization, and provides comprehensive results analysis.

The method implements the complete D-optimal design workflow:
  1. Initialize data and regressors (if needed)

  2. Solve SOCP optimization for optimal weights

  3. Select configurations with significant weights

  4. Optionally save results to files

  5. Generate visualization plots

Parameters:

save_file (bool) – Whether to save optimal configurations to YAML file in results directory. Default False.

Side Effects:
  • Updates self.optimal_configurations with selected configs

  • Updates self.optimal_weights with optimization weights

  • Creates visualization plots

  • May create output files if save_file=True

  • Prints progress and results to console

Raises:
  • AssertionError – If minimum configuration requirement not met

  • ValueError – If optimization problem is infeasible

  • IOError – If file saving fails (logged as warning)

Example

>>> opt_calib = TiagoOptimalCalibration(robot)
>>> opt_calib.solve(save_file=True)
12 configs are chosen: [0, 5, 12, 18, ...]
Optimal configurations written to file successfully

See also

initialize: Data preparation workflow
calculate_optimal_configurations: Core optimization solver
plot: Results visualization
save_results: File output management

sub_info_matrix(R, calib_config)[source]

Decompose regressor into individual configuration info matrices.

Creates separate information matrices for each configuration by extracting the corresponding rows from the full regressor matrix. This decomposition enables individual configuration evaluation in the optimization process.

Parameters:
  • R (ndarray) – Full rearranged kinematic regressor matrix

  • calib_config (dict) – Calibration parameters including sample count and calibration index

Returns:

(subX_list, subX_dict) where:
  • subX_list: List of information matrices (RᵢᵀRᵢ)

  • subX_dict: Dictionary mapping config index to matrix

Return type:

tuple

Mathematical Details:

For configuration i:
  Rᵢ = R[i*idx:(i+1)*idx, :]  (extract rows)
  Xᵢ = RᵢᵀRᵢ  (information matrix)
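
A NumPy sketch of this decomposition, where idx is the number of regressor rows contributed by each configuration (variable names are illustrative):

  import numpy as np

  def split_info_matrices(R, nb_sample, idx):
      subX_list, subX_dict = [], {}
      for i in range(nb_sample):
          R_i = R[i * idx:(i + 1) * idx, :]     # rows of configuration i
          X_i = R_i.T @ R_i                     # information matrix
          subX_list.append(X_i)
          subX_dict[i] = X_i
      return subX_list, subX_dict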

Example

>>> R_full = np.random.rand(6000, 42)  # 1000 configs, 6 DOF
>>> subX_list, subX_dict = opt_calib.sub_info_matrix(R_full, calib_config)
>>> print(len(subX_list))  # 1000
>>> print(subX_dict[0].shape)  # (42, 42)

class figaroh.optimal.BaseOptimalTrajectory(robot, active_joints: List[str], config_file: str = 'config/robot_config.yaml')[source]

Bases: object

Base class for IPOPT-based optimal trajectory generation.

Features:
  • Modular design with separated concerns

  • Better error handling and logging

  • Configuration validation

  • Cleaner interfaces

This base class can be extended for specific robots by implementing robot-specific configuration loading and constraint handling.

build_base_regressor(q, v, a, W_stack=None) → ndarray[source]

Build base regressor matrix.

generate_feasible_initial_guess(wp_init, vel_wp_init, acc_wp_init)[source]

Generate a feasible initial guess for optimization.

objective_function(X, opt_cb, tps, vel_wps, acc_wps, wp_init, W_stack=None)[source]

Objective function: condition number of base regressor matrix.

plot_results()[source]

Plot optimal trajectory results using unified results manager.

save_results(output_dir='results')[source]

Save optimal trajectory results using unified results manager.

solve(stack_reps: int = 2) → Dict[str, Any][source]

Solve the optimal trajectory generation problem.

Parameters:

stack_reps – Number of trajectory segments to stack

Returns:

Dict containing trajectories and optimization info
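
A hypothetical usage sketch; MyRobotOptimalTrajectory and the joint names are placeholders for a robot-specific subclass:

  # Placeholder subclass and joint names, not part of the library
  opt_traj = MyRobotOptimalTrajectory(
      robot,
      active_joints=["shoulder_joint", "elbow_joint"],
      config_file="config/robot_config.yaml",
  )
  results = opt_traj.solve(stack_reps=2)        # trajectories and solver info
  opt_traj.plot_results()
  opt_traj.save_results(output_dir="results")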

Submodules

figaroh.optimal.base_optimal_calibration module

Base class for robot optimal configuration generation for calibration. This module provides a generalized framework for optimal configuration generation that can be inherited by any robot type (TIAGo, UR10, MATE, etc.).

class figaroh.optimal.base_optimal_calibration.BaseOptimalCalibration(robot, config_file='config/robot_config.yaml')[source]

Bases: ABC

Base class for robot optimal configuration generation for calibration.

This class implements the framework for generating optimal robot configurations that maximize the observability of kinematic parameters during calibration. It uses Second-Order Cone Programming (SOCP) to solve the D-optimal design problem for parameter estimation.

The class provides a Template Method pattern where the main workflow is defined, but specific optimization strategies can be customized by derived classes for different robot types.

Workflow:
  1. Load candidate configurations from file (CSV or YAML)

  2. Calculate kinematic regressors for all candidates

  3. Compute information matrices for each configuration

  4. Solve SOCP optimization to find optimal subset

  5. Select configurations with significant weights

  6. Visualize and save results

Key Features:
  • D-optimal experimental design for calibration

  • Support for multiple calibration models (full_params, joint_offset)

  • Automatic minimum configuration calculation

  • SOCP-based optimization with convex relaxation

  • Comprehensive visualization and analysis tools

  • File I/O for configuration management

Mathematical Background:

The method maximizes the determinant of the Fisher Information Matrix:

  max det(Σᵢ wᵢ Rᵢᵀ Rᵢ)  subject to  Σᵢ wᵢ ≤ 1, wᵢ ≥ 0

Where:
  • Rᵢ is the kinematic regressor for configuration i

  • wᵢ is the weight assigned to configuration i

  • The objective maximizes parameter estimation precision

robot

Robot model instance loaded with FIGAROH

model

Pinocchio robot model

data

Pinocchio robot data

calib_config

Calibration parameters from configuration file

Type:

dict

optimal_configurations

Selected optimal configurations

Type:

dict

optimal_weights

Weights assigned to configurations

Type:

ndarray

minNbChosen

Minimum number of configurations required

Type:

int

# Internal computation attributes
R_rearr

Rearranged kinematic regressor matrix

Type:

ndarray

detroot_whole

Determinant root of full information matrix

Type:

float

w_list

Solution weights from SOCP optimization

Type:

list

w_dict_sort

Sorted weights by configuration index

Type:

dict

Example

>>> # Basic usage for TIAGo robot
>>> from figaroh.robots import TiagoRobot
>>> robot = TiagoRobot()
>>>
>>> # Create optimal calibration instance
>>> opt_calib = TiagoOptimalCalibration(robot, "config/tiago.yaml")
>>>
>>> # Generate optimal configurations
>>> opt_calib.solve(save_file=True)
>>>
>>> # Access results
>>> print(f"Selected {len(opt_calib.optimal_configurations)} configs")
>>> print(f"Minimum required: {opt_calib.minNbChosen}")

See also

BaseCalibration: Main calibration framework
SOCPOptimizer: Second-order cone programming solver
TiagoOptimalCalibration: TIAGo-specific implementation
UR10OptimalCalibration: UR10-specific implementation

calculate_detroot_whole()[source]

Calculate determinant root of complete information matrix.

Computes the determinant root of the full Fisher Information Matrix formed by all candidate configurations. This serves as the theoretical upper bound for the D-optimality criterion and is used for performance comparison.

Mathematical Background:

M_full = Rᵀ R  (full regressor)
detroot_whole = det(M_full)^(1/n) / sqrt(n)

This represents the geometric mean of eigenvalues, normalized by matrix dimension for scale independence.

Side Effects:
  • Sets self.detroot_whole with computed determinant root

  • Prints the computed value for verification

Prerequisites:
  • Kinematic regressor must be calculated (self.R_rearr)

  • Requires picos library for determinant computation

Raises:
  • AssertionError – If regressor calculation not performed first

  • ImportError – If picos library not available

See also

calculate_regressor: Prerequisite for this computation
plot: Uses this value for performance comparison

calculate_optimal_configurations()[source]

Solve SOCP optimization to find optimal configuration subset.

This is the core optimization method that solves the D-optimal experimental design problem using Second-Order Cone Programming. The method finds weights for each candidate configuration that maximize the determinant of the Fisher Information Matrix.

Optimization Problem:

maximize det(Σᵢ wᵢ Xᵢ)^(1/n) subject to: Σᵢ wᵢ ≤ 1, wᵢ ≥ 0

Where Xᵢ are information matrices and wᵢ are configuration weights.

Selection Process:
  1. Solve SOCP optimization for optimal weights

  2. Select configurations with weights > eps_opt (1e-5)

  3. Verify minimum configuration requirement is met

  4. Store selected configurations and weights

Side Effects:
  • Sets self.w_list with optimization solution weights

  • Sets self.w_dict_sort with sorted weight dictionary

  • Sets self.optimal_configurations with selected configs

  • Sets self.optimal_weights with final weight values

  • Sets self.nb_chosen with number of selected configurations

  • Prints timing information and selection results

Returns:

True if optimization successful and feasible

Return type:

bool

Raises:

AssertionError – If regressor not calculated or if insufficient configurations selected (infeasible design)

Example

>>> opt_calib.calculate_optimal_configurations()
solve time of socp: 2.35 seconds
12 configs are chosen: [0, 5, 12, 18, 23, ...]

See also

SOCPOptimizer: The optimization solver implementation
calculate_regressor: Required prerequisite computation

calculate_regressor()[source]

Calculate kinematic regressors and information matrices.

Computes the kinematic regressor matrices that relate kinematic parameter variations to end-effector pose changes. This is the mathematical foundation for the optimization problem.

The method performs several key computations:
  1. Calculate base kinematic regressors for all configurations

  2. Rearrange regressor matrix by sample order for efficiency

  3. Compute individual information matrices for each configuration

  4. Store results for optimization access

Mathematical Background:

For each configuration i, the regressor Rᵢ satisfies: δx = Rᵢ δθ where δx is pose variation and δθ is parameter variation.

The information matrix is: Xᵢ = RᵢᵀRᵢ

Side Effects:
  • Sets self.R_rearr with rearranged kinematic regressor

  • Sets self._subX_list with list of information matrices

  • Sets self._subX_dict with indexed information matrices

  • Prints parameter names for verification

Returns:

True if calculation successful

Return type:

bool

Prerequisites:
  • Joint configurations must be loaded (self.q_measured)

  • Robot model and parameters must be initialized

See also

calculate_base_kinematics_regressor: Core regressor computation
rearrange_rb: Matrix rearrangement for optimization
sub_info_matrix: Information matrix decomposition

initialize()[source]

Initialize the optimization process by preparing all required data.

This method orchestrates the initialization sequence required before optimization can begin. It ensures all mathematical components are properly computed and cached for efficient optimization.

The initialization sequence:
  1. Load candidate configurations from external files

  2. Calculate kinematic regressors for all configurations

  3. Compute determinant root of the full information matrix

Prerequisites:
  • Robot model and parameters must be loaded

  • Configuration file must specify valid sample data paths

Side Effects:
  • Sets self.q_measured with candidate joint configurations

  • Sets self.R_rearr with rearranged kinematic regressor

  • Sets self._subX_dict and self._subX_list with info matrices

  • Sets self.detroot_whole with full matrix determinant root

Raises:
  • ValueError – If sample configuration file is invalid or missing

  • AssertionError – If regressor calculation fails

See also

load_candidate_configurations: Configuration data loading
calculate_regressor: Kinematic regressor computation
calculate_detroot_whole: Information matrix analysis

load_candidate_configurations()[source]

Load candidate joint configurations from external data files.

Reads robot joint configurations from CSV or YAML files that serve as the candidate pool for optimization. The method supports multiple file formats and automatically updates the sample count parameter.

Supported formats:
  • CSV: Standard measurement data format with joint configurations

  • YAML: Structured format with named joints and configurations

The YAML format expects:

  calibration_joint_names: [joint1, joint2, ...]
  calibration_joint_configurations: [[q1_1, q1_2, ...], [q2_1, ...]]

Side Effects:
  • Sets self.q_measured with loaded joint configurations

  • Updates self.calib_config["NbSample"] with actual sample count

  • May load self._configs for YAML format data

Raises:
  • ValueError – If sample_configs_file not specified in configuration or if file format is not supported

  • FileNotFoundError – If specified data file does not exist

Example

>>> # Assuming config specifies "data/candidate_configs.yaml"
>>> opt_calib.load_candidate_configurations()
>>> print(opt_calib.q_measured.shape)  # (1000, 7) for TIAGo

load_param(config_file, setting_type='calibration')[source]

Load optimization parameters from YAML configuration file.

Reads and parses calibration configuration from YAML file, extracting robot-specific parameters needed for optimal configuration generation. The configuration supports multiple setting types within the same file.

Parameters:
  • config_file (str) – Path to YAML configuration file containing optimization and calibration parameters

  • setting_type (str) – Configuration section to load. Options include “calibration”, “identification”, or custom section names. Default “calibration”.

Side Effects:
  • Updates self.calib_config with loaded configuration dictionary

  • Overwrites any existing parameter settings

Raises:
  • FileNotFoundError – If config_file does not exist

  • yaml.YAMLError – If YAML parsing fails

  • KeyError – If setting_type section not found in config

Example

>>> opt_calib.load_param("config/tiago_optimal.yaml")
>>> print(opt_calib.calib_config["calib_model"])  # "full_params"
>>> print(opt_calib.calib_config["NbSample"])     # 1000

plot()[source]

Generate comprehensive visualization of optimization results.

Creates dual-panel plots that provide insight into the optimization quality and configuration selection process. The visualizations help assess the efficiency of the selected configuration subset.

Plot Components:
  1. D-optimality criterion vs. number of configurations

    • Shows how information matrix determinant improves with additional configurations

    • Normalized against theoretical maximum (all configurations)

    • Helps identify diminishing returns point

  2. Configuration weights in logarithmic scale

    • Displays weight assigned to each candidate configuration

    • Configurations above threshold (eps_opt) are selected

    • Shows selection boundary and weight distribution

Prerequisites:
  • Optimization must be completed (optimal_configurations available)

  • Information matrices must be computed

Side Effects:
  • Creates matplotlib figure with two subplots

  • Displays plots using plt.show()

  • May block execution until plots are closed

Returns:

True if plotting successful

Return type:

bool

Mathematical Details:

D-optimality ratio = detroot_whole / det(selected_subset)

This ratio approaches 1.0 as the selected subset approaches optimality.

Example

>>> opt_calib.solve()
>>> # Plot is automatically generated, or call manually:
>>> opt_calib.plot()

See also

calculate_optimal_configurations: Generates data for plotting
calculate_detroot_whole: Provides normalization reference

plot_results()[source]

Plot optimal calibration results using unified results manager.

rearrange_rb(R_b, calib_config)[source]

Rearrange the kinematic regressor rows into sample-numbered order.

save_results(output_dir='results')[source]

Save optimal configuration results using unified results manager.

solve(save_file=False)[source]

Solve the optimal configuration selection problem.

This is the main entry point that orchestrates the complete optimal configuration generation workflow. It automatically handles initialization if not already performed, solves the SOCP optimization, and provides comprehensive results analysis.

The method implements the complete D-optimal design workflow:
  1. Initialize data and regressors (if needed)

  2. Solve SOCP optimization for optimal weights

  3. Select configurations with significant weights

  4. Optionally save results to files

  5. Generate visualization plots

Parameters:

save_file (bool) – Whether to save optimal configurations to YAML file in results directory. Default False.

Side Effects:
  • Updates self.optimal_configurations with selected configs

  • Updates self.optimal_weights with optimization weights

  • Creates visualization plots

  • May create output files if save_file=True

  • Prints progress and results to console

Raises:
  • AssertionError – If minimum configuration requirement not met

  • ValueError – If optimization problem is infeasible

  • IOError – If file saving fails (logged as warning)

Example

>>> opt_calib = TiagoOptimalCalibration(robot)
>>> opt_calib.solve(save_file=True)
12 configs are chosen: [0, 5, 12, 18, ...]
Optimal configurations written to file successfully

See also

initialize: Data preparation workflow
calculate_optimal_configurations: Core optimization solver
plot: Results visualization
save_results: File output management

sub_info_matrix(R, calib_config)[source]

Decompose regressor into individual configuration info matrices.

Creates separate information matrices for each configuration by extracting the corresponding rows from the full regressor matrix. This decomposition enables individual configuration evaluation in the optimization process.

Parameters:
  • R (ndarray) – Full rearranged kinematic regressor matrix

  • calib_config (dict) – Calibration parameters including sample count and calibration index

Returns:

(subX_list, subX_dict) where:
  • subX_list: List of information matrices (RᵢᵀRᵢ)

  • subX_dict: Dictionary mapping config index to matrix

Return type:

tuple

Mathematical Details:

For configuration i:
  Rᵢ = R[i*idx:(i+1)*idx, :]  (extract rows)
  Xᵢ = RᵢᵀRᵢ  (information matrix)

Example

>>> R_full = np.random.rand(6000, 42)  # 1000 configs, 6 DOF
>>> subX_list, subX_dict = opt_calib.sub_info_matrix(R_full, calib_config)
>>> print(len(subX_list))  # 1000
>>> print(subX_dict[0].shape)  # (42, 42)

class figaroh.optimal.base_optimal_calibration.Detmax(candidate_pool, NbChosen)[source]

Bases: object

Determinant Maximization optimizer using greedy exchange algorithm.

This class implements a heuristic optimization algorithm for D-optimal experimental design that uses a greedy exchange strategy to find near-optimal configuration subsets. Unlike the SOCP approach, this method provides a combinatorial solution that directly selects discrete configurations.

Algorithm Overview:

The DetMax algorithm uses an iterative exchange procedure:
  1. Initialize with a random subset of configurations

  2. Iteratively add the configuration that maximally improves the determinant criterion

  3. Remove the configuration whose absence minimally degrades the determinant criterion

  4. Repeat until convergence (no beneficial exchanges)

Mathematical Background:

The algorithm maximizes det(Σᵢ∈S Xᵢ)^(1/n) where:
  • S is the selected configuration subset

  • Xᵢ are information matrices for configurations

  • n is the matrix dimension

This is a discrete optimization problem (vs continuous SOCP).

Advantages:
  • Provides exact discrete solution (no weight thresholding)

  • Computationally efficient for small to medium problems

  • Intuitive greedy strategy with good convergence properties

  • No external optimization solvers required

Limitations:
  • May converge to local optima (not globally optimal)

  • Performance depends on random initialization

  • Computational complexity grows with candidate pool size

pool

Dictionary of information matrices indexed by config ID

Type:

dict

nd

Number of configurations to select

Type:

int

cur_set

Current configuration subset being evaluated

Type:

list

fail_set

Configurations that failed selection criteria

Type:

list

opt_set

Final optimal configuration subset

Type:

list

opt_critD

Evolution of determinant criterion during optimization

Type:

list

Example

>>> # Create DetMax optimizer
>>> detmax = Detmax(subX_dict, NbChosen=12)
>>>
>>> # Run optimization
>>> criterion_history = detmax.main_algo()
>>>
>>> # Get selected configurations
>>> selected_configs = detmax.cur_set
>>> final_criterion = criterion_history[-1]
>>>
>>> print(f"Selected {len(selected_configs)} configurations")
>>> print(f"Final D-optimality: {final_criterion:.4f}")

See also

SOCPOptimizer: Alternative SOCP-based optimization approach
BaseOptimalCalibration: Main calibration framework

get_critD(set)[source]

Calculate D-optimality criterion for configuration subset.

Computes the determinant root of the Fisher Information Matrix formed by summing the information matrices of configurations in the specified subset. This serves as the objective function for the determinant maximization algorithm.

Parameters:

set (list) – List of configuration indices from the candidate pool to include in the criterion calculation

Returns:

D-optimality criterion value (determinant root)

Higher values indicate better parameter identifiability

Return type:

float

Raises:

AssertionError – If any configuration index not in candidate pool

Mathematical Details:

For subset S, computes: det(Σᵢ∈S Xᵢ)^(1/n) where Xᵢ are information matrices and n is matrix dimension
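
An equivalent NumPy sketch of the criterion (the class may compute it differently internally):

  import numpy as np

  def crit_d(pool, subset):
      M = sum(pool[i] for i in subset)          # Σᵢ∈S Xᵢ
      n = M.shape[0]
      sign, logdet = np.linalg.slogdet(M)
      return np.exp(logdet / n)                 # det(M)^(1/n)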

Example

>>> subset = [0, 5, 12, 18]  # Configuration indices
>>> criterion = optimizer.get_critD(subset)
>>> print(f"D-optimality: {criterion:.6f}")
main_algo()[source]

Execute the main determinant maximization algorithm.

Implements the greedy exchange algorithm for D-optimal experimental design. The algorithm alternates between adding configurations that maximally improve the determinant and removing configurations whose absence minimally degrades the determinant.

Algorithm Steps:
  1. Initialize a random subset of target size from the candidate pool

  2. Exchange loop:

    a. ADD PHASE: Find the configuration that maximally improves the criterion

    b. REMOVE PHASE: Find the configuration whose removal minimally hurts the criterion

    c. Update the current subset and criterion value

  3. Repeat until convergence (no beneficial exchanges)

  4. Return the optimization history

Convergence Condition:

The algorithm stops when the optimal configuration to add equals the optimal configuration to remove, indicating no further improvement is possible.

Returns:

History of D-optimality criterion values throughout the optimization process. The last value is the final criterion.

Return type:

list

Side Effects:
  • Updates self.cur_set with final optimal configuration subset

  • Updates self.opt_critD with complete optimization history

  • Uses random initialization (results may vary between runs)

Complexity:

O(max_iterations × candidate_pool_size × target_subset_size) where max_iterations depends on problem structure and initialization

Example

>>> optimizer = Detmax(info_matrices, NbChosen=10)
>>> history = optimizer.main_algo()
>>> print(f"Converged after {len(history)} iterations")
>>> print(f"Final subset: {optimizer.cur_set}")
>>> print(f"Final criterion: {history[-1]:.6f}")

Note

The algorithm may converge to different local optima depending on random initialization. For critical applications, consider running multiple times with different seeds.
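
For illustration, the exchange loop can be sketched as follows, reusing the crit_d helper from the get_critD sketch above (the actual implementation may differ in bookkeeping and tie-breaking):

  import random

  def detmax_sketch(pool, nb_chosen, max_iter=100):
      candidates = list(pool.keys())
      cur_set = random.sample(candidates, nb_chosen)
      history = [crit_d(pool, cur_set)]
      for _ in range(max_iter):
          # ADD PHASE: configuration that maximally improves the criterion
          best_add = max((c for c in candidates if c not in cur_set),
                         key=lambda c: crit_d(pool, cur_set + [c]))
          cur_set.append(best_add)
          # REMOVE PHASE: configuration whose removal hurts the least
          best_rm = max(cur_set,
                        key=lambda c: crit_d(pool, [k for k in cur_set if k != c]))
          cur_set.remove(best_rm)
          history.append(crit_d(pool, cur_set))
          if best_add == best_rm:               # no beneficial exchange remains
              break
      return cur_set, history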

class figaroh.optimal.base_optimal_calibration.SOCPOptimizer(subX_dict, calib_config)[source]

Bases: object

Second-Order Cone Programming optimizer for configuration selection.

Implements the mathematical optimization for D-optimal experimental design using Second-Order Cone Programming (SOCP). This class formulates and solves the convex optimization problem that maximizes the determinant of the Fisher Information Matrix.

Mathematical Formulation:

maximize    t
subject to: t ≤ det(Σᵢ wᵢ Xᵢ)^(1/n)
            Σᵢ wᵢ ≤ 1
            wᵢ ≥ 0

Where:
  • t is the auxiliary variable for the objective

  • wᵢ are configuration weights

  • Xᵢ are information matrices

  • n is the matrix dimension

The problem is solved using the CVXOPT solver with picos interface.
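
A minimal picos sketch of this formulation, maximizing the concave DetRootN term directly (subX_dict maps configuration index to information matrix; the class may set the problem up differently):

  import numpy as np
  import picos

  m = len(subX_dict)
  P = picos.Problem()
  w = picos.RealVariable("w", m)
  M = picos.sum([w[i] * picos.Constant(subX_dict[i]) for i in range(m)])

  P.add_constraint(picos.sum(w) <= 1)
  P.add_constraint(w >= 0)
  P.set_objective("max", picos.DetRootN(M))     # det(M)^(1/n), concave in w
  P.solve(solver="cvxopt")

  weights = np.array(w.value).flatten()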

pool

Dictionary of information matrices indexed by config ID

Type:

dict

calib_config

Calibration parameters including sample count

Type:

dict

problem

Picos optimization problem instance

w

Decision variable for configuration weights

t

Auxiliary variable for determinant objective

solution

Optimization solution object

Example

>>> optimizer = SOCPOptimizer(subX_dict, calib_config)
>>> weights, sorted_weights = optimizer.solve()
>>> print(f"Optimization status: {optimizer.solution.status}")

add_constraints()[source]

set_objective()[source]

solve()[source]

figaroh.optimal.base_optimal_trajectory module

Base Optimal Trajectory Generation Framework

This module provides base classes for optimal trajectory generation with configuration management, parameter computation, constraint handling, and IPOPT-based optimization. This framework can be extended for different robots.

class figaroh.optimal.base_optimal_trajectory.BaseOptimalTrajectory(robot, active_joints: List[str], config_file: str = 'config/robot_config.yaml')[source]

Bases: object

Base class for IPOPT-based optimal trajectory generation.

Features:
  • Modular design with separated concerns

  • Better error handling and logging

  • Configuration validation

  • Cleaner interfaces

This base class can be extended for specific robots by implementing robot-specific configuration loading and constraint handling.

build_base_regressor(q, v, a, W_stack=None) → ndarray[source]

Build base regressor matrix.

generate_feasible_initial_guess(wp_init, vel_wp_init, acc_wp_init)[source]

Generate a feasible initial guess for optimization.

objective_function(X, opt_cb, tps, vel_wps, acc_wps, wp_init, W_stack=None)[source]

Objective function: condition number of base regressor matrix.

plot_results()[source]

Plot optimal trajectory results using unified results manager.

save_results(output_dir='results')[source]

Save optimal trajectory results using unified results manager.

solve(stack_reps: int = 2) → Dict[str, Any][source]

Solve the optimal trajectory generation problem.

Parameters:

stack_reps – Number of trajectory segments to stack

Returns:

Dict containing trajectories and optimization info

class figaroh.optimal.base_optimal_trajectory.BaseParameterComputer(robot, identif_config, active_joints, soft_lim_pool)[source]

Bases: object

Handles base parameter computation and indexing.

compute_base_indices() → Tuple[ndarray, ndarray][source]

Compute base parameter indices from random trajectory.

class figaroh.optimal.base_optimal_trajectory.BaseTrajectoryIPOPTProblem(opt_traj, n_joints, n_wps, Ns, tps, vel_wps, acc_wps, wp_init, vel_wp_init, acc_wp_init, W_stack, problem_name='TrajectoryOptimization')[source]

Bases: BaseOptimizationProblem

Base IPOPT problem formulation for trajectory optimization.

This class provides a base implementation for trajectory optimization that can be extended for specific robots.

constraints(X: ndarray) → ndarray[source]

Constraint function for IPOPT.

get_constraint_bounds() → Tuple[List[float], List[float]][source]

Get constraint bounds for optimization.

get_initial_guess() → List[float][source]

Get initial guess from waypoints.

get_variable_bounds() → Tuple[List[float], List[float]][source]

Get variable bounds for optimization.

jacobian(X: ndarray) → ndarray[source]

Jacobian of constraints - Custom implementation for better performance.

For trajectory optimization, we can use sparse finite differences instead of full automatic differentiation which is too slow.
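
A dense forward-difference sketch of such a Jacobian (the actual implementation may exploit the constraint sparsity pattern rather than perturbing every variable):

  import numpy as np

  def fd_jacobian(constraints, X, eps=1e-6):
      g0 = np.asarray(constraints(X))
      J = np.zeros((g0.size, X.size))
      for j in range(X.size):
          X_pert = X.copy()
          X_pert[j] += eps                      # perturb one variable at a time
          J[:, j] = (np.asarray(constraints(X_pert)) - g0) / eps
      return J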

objective(X: ndarray) → float[source]

Objective function: condition number of base regressor matrix.
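
A minimal sketch of this objective, assuming W_base is the stacked base regressor built from the candidate trajectory:

  import numpy as np

  def condition_number(W_base):
      s = np.linalg.svd(W_base, compute_uv=False)
      return s[0] / s[-1]                       # equivalent to np.linalg.cond(W_base)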

solve_with_waypoints(wps) → Tuple[bool, Dict[str, Any]][source]

Solve the optimization problem with given initial waypoints.

Parameters:

wps – Initial waypoints

Returns:

Tuple of (success, results_dict)

class figaroh.optimal.base_optimal_trajectory.ConfigurationManager[source]

Bases: object

Manages configuration loading and validation.

static load_from_yaml(config_file: str) → Tuple[Dict[str, Any], Any][source]

Load trajectory parameters from YAML file.

class figaroh.optimal.base_optimal_trajectory.TrajectoryConstraintManager(robot, CB, traj_params, identif_config)[source]

Bases: object

Manages trajectory constraints and bounds.

evaluate_constraints(Ns: int, X: ndarray, opt_cb: Dict, tps, vel_wps, acc_wps, wp_init) → ndarray[source]

Evaluate all constraints for optimization.

get_constraint_bounds(Ns: int) → Tuple[List, List][source]

Get constraint bounds for optimization.

get_variable_bounds() → Tuple[List[float], List[float]][source]

Get variable bounds for optimization.