ipopt   Linux AMD Opteron, Windows 32, Windows 64

Purpose ^

IPOPT Call the IPOPT constrained, nonlinear solver.

Synopsis ^

This is a script file.

Description ^

 IPOPT Call the IPOPT constrained, nonlinear solver. 
   The basic function call is
   
     [x, info] = IPOPT(x0,funcs,options)

   The first input is either a matrix or a cell array of matrices. It
   declares the starting point for the solver.
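
   Putting the pieces together, a minimal driver for H&S test problem #71
   might look like the following (a sketch: the starting point and bounds
   are the standard ones for that problem, and the callbacks are the
   functions defined in the sections below):

```matlab
% Minimal driver for H&S test problem #71 (a sketch; assumes the callback
% functions from this help text are on the MATLAB path).
x0 = [1 5 5 1];                     % the starting point

funcs.objective         = @objective;
funcs.gradient          = @gradient;
funcs.constraints       = @constraints;
funcs.jacobian          = @jacobian;
funcs.jacobianstructure = @jacobianstructure;
funcs.hessian           = @hessian;
funcs.hessianstructure  = @hessianstructure;

options.lb = [1 1 1 1];             % lower bounds on the variables
options.ub = [5 5 5 5];             % upper bounds on the variables
options.cl = [25  40];              % lower bounds on the constraints
options.cu = [Inf 40];              % cl(2) = cu(2): an equality constraint

[x, info] = ipopt(x0, funcs, options);
```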

   CALLBACK FUNCTIONS

   The second input must be a struct containing function handles for
   various MATLAB routines. For more information on using functions and
   function handles in MATLAB, type HELP FUNCTION and HELP FUNCTION_HANDLE
   at the MATLAB prompt.

     funcs.objective (required)

      Calculates the objective function at the current point. It takes one
      input, the current iterate x. For example, the definition of the
     objective function for the Hock & Schittkowski (H&S) test problem #71
     (with 4 optimization variables) would be
 
         function f = objective (x)
           f = x(1)*x(4)*sum(x(1:3)) + x(3);
         
     funcs.gradient (required)

     Computes the gradient of the objective at the current point. It takes
     one input, the current iterate x. For H&S test problem #71, the
     definition of the gradient callback would be

         function g = gradient (x)
           g = [ x(1)*x(4) + x(4)*sum(x(1:3))
                 x(1)*x(4)
                 x(1)*x(4) + 1
                 x(1)*sum(x(1:3)) ]; 

     funcs.constraints (optional)

     This function is only required if there are constraints on your
     variables. It evaluates the constraint functions at the current
     point. It takes one input, x. The return value is a vector of length
     equal to the number of constraints (it must be of the same length as
     options.cl and options.cu). For H&S test problem #71, the
     callback definition would be

         function c = constraints (x)
           c = [ prod(x); sum(x.^2) ];

     funcs.jacobian (optional)
 
     This function is only required if there are constraints on your
     variables. Evaluates the Jacobian of the constraints at the current
     point. It takes one input, x. The output must always be an M x N
     sparse matrix, where M is the number of constraints and N is the
     number of variables. Type HELP SPARSE for more information on
     constructing sparse matrices in MATLAB. The definition of the
     callback function for H&S test problem #71 would be

          function J = jacobian (x)
            J = sparse([ prod(x)./x; 2*x ]);

     Notice that the return value is a sparse matrix.

     funcs.jacobianstructure (optional)

     This function is only required if there are constraints on your
     variables. It takes no inputs. The return value is a sparse
     matrix whereby an entry is nonzero if and only if the Jacobian of
     the constraints is nonzero at ANY point. The callback function for
     the H&S test problem #71 simply returns a 2 x 4 matrix of ones in
     the sparse matrix format:

         function J = jacobianstructure() 
           J = sparse(ones(2,4));

     funcs.hessian (optional)

     Evaluates the Hessian of the Lagrangian at the current point. It
     must be specified unless you choose to use the limited-memory
     quasi-Newton approximation to the Hessian (see below).
 
     The callback function has three inputs: the current point (x), a
     scalar factor on the objective (sigma), and the Lagrange multipliers
     (lambda), a vector of length equal to the number of constraints. The
     function should compute
                  
        sigma*H + lambda(1)*G1 + ... + lambda(M)*GM

     where M is the number of constraints, H is the Hessian of the
     objective and the G's are the Hessians of the constraint
     functions. The output must always be an N x N sparse, lower triangular
     matrix, where N is the number of variables. In other words, if X is
     the output value, then X must be the same as TRIL(X).

     Here is an implementation of the Hessian callback routine for the
     H&S test problem #71:

         function H = hessian (x, sigma, lambda)
           H = sigma*[ 2*x(4)             0      0   0;
                       x(4)               0      0   0;
                       x(4)               0      0   0;
                       2*x(1)+x(2)+x(3)  x(1)  x(1)  0 ];
           H = H + lambda(1)*[    0          0         0         0;
                               x(3)*x(4)     0         0         0;
                               x(2)*x(4) x(1)*x(4)     0         0;
                               x(2)*x(3) x(1)*x(3) x(1)*x(2)     0  ];
           H = sparse(H + lambda(2)*diag([2 2 2 2]));
  
     funcs.hessianstructure (optional)
 
     This function serves the same purpose as funcs.jacobianstructure, but
     for the Hessian matrix. Again, it is not needed if you are using the
     limited-memory quasi-Newton approximation to the Hessian. It takes no
     inputs, and must return a sparse, lower triangular matrix. For H&S
     test problem #71, the MATLAB callback routine is fairly
     straightforward:

         function H = hessianstructure() 
           H = sparse(tril(ones(4)));

     funcs.iterfunc (optional)

     An additional callback routine that is called once per algorithm
     iteration. It takes three inputs: the first is the current iteration
     of the algorithm, the second is the current value of the objective,
     and the third is a structure containing fields x, inf_pr, inf_du, mu,
     d_norm, regularization_size, alpha_du, alpha_pr, and ls_trials. This
     function should always return true unless you want IPOPT to terminate
     prematurely for whatever reason. If you would like to use the third
     input to iterfunc along with auxdata functionality, you will need to
     modify the appropriate section of ipopt_auxdata.m.
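
      As an illustration, here is a sketch of an iteration callback that
      prints the primal infeasibility and stops the solver after 100
      iterations (the argument names are illustrative; attach it with
      funcs.iterfunc = @iterfunc):

```matlab
function b = iterfunc (t, f, state)
  % t is the current iteration count, f the current objective value, and
  % state the structure with fields x, inf_pr, inf_du, etc. listed above.
  fprintf('iter %3d: objective %0.6g, inf_pr %0.3g\n', t, f, state.inf_pr);
  b = (t < 100);  % returning false tells IPOPT to terminate prematurely
```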

   OPTIONS

   The options are passed through the third input. What follows is a
   description of the fields you may optionally specify.

     options.lb  

     Specify lower bounds on the variables. It must have the same number
     of elements as x0. Set an entry to -Inf to specify no lower bound.

     options.ub

     Specify upper bounds on the variables. It must have the same number
     of elements as x0. Set an entry to Inf to specify no upper bound.

     options.cl, options.cu

     Set lower and upper bounds on the constraints. Each should be a
     vector of length equal to the number of constraints. As before, a
     bound is removed by setting the entry to -Inf or +Inf. An equality
     constraint is achieved by setting cl(i) = cu(i).
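
      For example, H&S test problem #71 has the two constraints
      prod(x) >= 25 and sum(x.^2) = 40, which would be expressed as

```matlab
options.cl = [ 25  40 ];    % lower bounds on the two constraints
options.cu = [ Inf 40 ];    % cl(2) = cu(2) makes the second an equality
```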

     options.auxdata

     Optionally, one may choose to pass additional auxiliary data to the
     MATLAB callback routines listed above through the function call. For
     instance, the objective callback function now takes two inputs, x and
      auxdata. The auxiliary data must not change over the course of the
      IPOPT optimization; they keep the values they had in the initial
      call. If you need variables that change over
     time, you may want to consider global variables (type HELP
     GLOBAL). See the lasso.m file in the examples subdirectory for an
     illustration of how the auxiliary data is passed to the various
     callback functions. Starting with Ipopt version 3.11, you must call
     ipopt_auxdata(x0,funcs,options) to use auxdata functionality.
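
      For example, here is a sketch of the H&S #71 objective rewritten to
      take a scaling factor through the auxiliary data (the field name
      "scale" is hypothetical):

```matlab
function f = objective (x, auxdata)
  % auxdata holds whatever was stored in options.auxdata before the call;
  % here it is assumed to be a struct with a hypothetical field "scale".
  f = auxdata.scale * (x(1)*x(4)*sum(x(1:3)) + x(3));
```

      The corresponding setup would be options.auxdata.scale = 1 followed
      by [x, info] = ipopt_auxdata(x0, funcs, options).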

     options.zl, options.zu, options.lambda

     These fields specify the initial value for the Lagrange multipliers,
     which is especially useful for "warm starting" the interior-point
     solver. They specify the Lagrange multipliers corresponding to the
     lower bounds on the variables, upper bounds on the variables, and
     constraints, respectively.

     options.ipopt

     Finally, you may also change the settings of IPOPT through this
     field. For instance, to turn off the IPOPT output, use the
     limited-memory BFGS approximation to the Hessian, and turn on the
     derivative checker, do the following:

       options.ipopt.print_level           = 0;
       options.ipopt.hessian_approximation = 'limited-memory';
       options.ipopt.derivative_test       = 'first-order';

     For more details, see the documentation on the IPOPT website.

   OUTPUTS

   If the solver successfully converges to a stationary point or
   terminates without an unrecoverable error, the function IPOPT outputs
   the candidate solution x. In all other cases, an error is thrown. It
   also outputs some additional information:

     info.zl, info.zu, info.lambda

     The value of the Lagrange multipliers at the solution. See the
     "options" for more information on the Lagrange multipliers.

     info.status

      Upon termination, this field will take on one of the following
     values (for a more up-to-date listing, see the IpReturnCodes.h header
     file in the IPOPT C++ source directory):

         0  solved
         1  solved to acceptable level
         2  infeasible problem detected
         3  search direction becomes too small
         4  diverging iterates
         5  user requested stop
     
        -1  maximum number of iterations exceeded
        -2  restoration phase failed
        -3  error in step computation
       -10  not enough degrees of freedom
       -11  invalid problem definition
       -12  invalid option
       -13  invalid number detected

      -100  unrecoverable exception
      -101  non-IPOPT exception thrown
      -102  insufficient memory
      -199  internal error

     info.iter, info.cpu

      Number of iterations and CPU time (in seconds) taken by the Ipopt
      run.

   Finally, for more information, please consult the following webpages:

      http://www.cs.ubc.ca/~pcarbo/ipopt-for-matlab
      http://projects.coin-or.org/Ipopt

   Copyright (C) 2008 Peter Carbonetto. All Rights Reserved.
   This code is published under the Eclipse Public License.

   Author: Peter Carbonetto
           Dept. of Computer Science
           University of British Columbia
           September 19, 2008

   Downloaded binaries from
   http://www.coin-or.org/download/binary/Ipopt/Ipopt-3.11.8-linux64mac64win32win64-matlabmexfiles.zip
   on 3/22/2016
