Question

I have some experimental data (for y, x, t_exp, m_exp) and want to find the "optimal" model parameters (A, B, C, D, E) for this data using the constrained multivariate BFGS method. Parameter E must be greater than 0; the others are unconstrained.

def func(x, A, B, C, D, E, *args):
    return (A * (x ** E) * numpy.cos(t_exp)
            * (1 - numpy.exp((-2 * B * x) / numpy.cos(t_exp)))
            + numpy.exp((-2 * B * x) / numpy.cos(t_exp)) * C
            + (D * m_exp))

initial_values = numpy.array([-10, 2, -20, 0.3, 0.25])
mybounds = [(None,None), (None,None), (None,None), (None,None), (0, None)]
x,f,d = scipy.optimize.fmin_l_bfgs_b(func, x0=initial_values, args=(m_exp, t_exp), bounds=mybounds)

A few questions:

  1. Should my model formulation func include my independent variable x, or should it be provided from the experimental data x_exp as part of *args?
  2. When I run the above code, I get an error func() takes at least 6 arguments (3 given). I assume the 3 given are x and my two *args... How should I define func?

EDIT: Thanks to @zephyr's answer, I now understand that the goal is to minimize the sum of squared residuals, not the actual function. I got to the following working code:

import numpy
import scipy.optimize

def func(params, *args):
    # Unpack the experimental data and the parameters being optimized.
    l_exp, s_exp, m_exp, t_exp = args
    A, B, C, D, E = params
    s_model = (A * (l_exp ** E) * numpy.cos(t_exp)
               * (1 - numpy.exp((-2 * B * l_exp) / numpy.cos(t_exp)))
               + numpy.exp((-2 * B * l_exp) / numpy.cos(t_exp)) * C
               + (D * m_exp))
    residual = s_exp - s_model
    return numpy.sum(residual ** 2)

initial_values = numpy.array([-10, 2, -20, 0.3, 0.25])
mybounds = [(None, None), (None, None), (None, None), (None, None), (0, None)]

x, f, d = scipy.optimize.fmin_l_bfgs_b(func, x0=initial_values,
                                       args=(l_exp, s_exp, m_exp, t_exp),
                                       bounds=mybounds, approx_grad=True)

I am not sure that the bounds are working correctly. When I specify (0, None) for E, the run terminates with warn flag 2 (abnormal termination). If I set it to (1e-6, None), it runs fine but settles on 1e-6 for E. Am I specifying the bounds correctly?
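For what it's worth, a standard workaround (my own sketch, not part of the original question or answer; func_logE is a hypothetical name) is to drop the bound on E entirely and optimize over log(E), so that E = exp(logE) is strictly positive by construction. This assumes l_exp, s_exp, m_exp and t_exp are defined as above:

def func_logE(params, *args):
    # Same objective as func above, but the last parameter is log(E),
    # so E = exp(logE) is strictly positive with no explicit bound needed.
    l_exp, s_exp, m_exp, t_exp = args
    A, B, C, D, logE = params
    E = numpy.exp(logE)
    s_model = (A * (l_exp ** E) * numpy.cos(t_exp)
               * (1 - numpy.exp((-2 * B * l_exp) / numpy.cos(t_exp)))
               + numpy.exp((-2 * B * l_exp) / numpy.cos(t_exp)) * C
               + (D * m_exp))
    return numpy.sum((s_exp - s_model) ** 2)

initial_values = numpy.array([-10, 2, -20, 0.3, numpy.log(0.25)])
x, f, d = scipy.optimize.fmin_l_bfgs_b(func_logE, x0=initial_values,
                                       args=(l_exp, s_exp, m_exp, t_exp),
                                       approx_grad=True)
E_fit = numpy.exp(x[4])  # recover E on the original scale

Working on the log scale keeps the optimizer away from E = 0 without introducing a hard boundary.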


Solution

I didn't want to try to figure out what the model you're using represents, so here's a simple example of fitting to a line:

import numpy
import scipy.optimize

# Synthetic data: a line with known slope and intercept.
x_true = numpy.arange(0, 10, 0.1)
m_true = 2.5
b_true = 1.0
y_true = m_true * x_true + b_true

def func(params, *args):
    # Sum of squared residuals between the data and the line model.
    x, y = args
    m, b = params
    y_model = m * x + b
    error = y - y_model
    return numpy.sum(error ** 2)

initial_values = numpy.array([1.0, 0.0])
mybounds = [(None, 2), (None, None)]

scipy.optimize.fmin_l_bfgs_b(func, x0=initial_values, args=(x_true, y_true),
                             approx_grad=True)
scipy.optimize.fmin_l_bfgs_b(func, x0=initial_values, args=(x_true, y_true),
                             bounds=mybounds, approx_grad=True)

The first call is unbounded and recovers the correct answer; the second respects the bounds, which prevent it from reaching the correct parameters (the true slope 2.5 lies above the upper bound of 2 on m).

The important thing you have wrong is that for almost all of the optimize functions, x and x0 refer to the parameters you are optimizing over; everything else is passed in as an argument. It's also important that your fit function return the correct data type: here we want a single scalar value, though some routines expect a vector of errors instead. Finally, you need the approx_grad=True flag unless you want to compute the gradient analytically and provide it.
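To illustrate that last point, here is a minimal sketch (my addition, reusing the line-fit example above) that supplies the gradient analytically through the fprime argument instead of using approx_grad=True:

def grad(params, *args):
    # Analytic gradient of sum((y - (m*x + b))**2) with respect to (m, b).
    x, y = args
    m, b = params
    error = y - (m * x + b)
    return numpy.array([-2 * numpy.sum(x * error),  # d/dm
                        -2 * numpy.sum(error)])     # d/db

# With fprime supplied, approx_grad=True is no longer needed.
scipy.optimize.fmin_l_bfgs_b(func, x0=initial_values, fprime=grad,
                             args=(x_true, y_true))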

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow