Using some weights w[k], compute the sums
    yxlx = sum[ w[k]*y[k]*x[k]*log2(x[k]) ]
and
    xlx2 = sum[ w[k]*sqr(x[k]*log2(x[k])) ],
where sqr(u)=u*u. Then the estimate for c is c = yxlx/xlx2.
One can choose the standard weights w[k]=1, the adaptive weights
    w[k] = 1/( 1 + sqr( x[k]*log2(x[k]) ) ),
or the even more adaptive
    w[k] = 1/( 1 + sqr( x[k]*log2(x[k]) ) + sqr( y[k] ) ),
so that large values of x and y do not excessively influence the estimate. For a middle-ground strategy, take the square roots of these expressions as weights.
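A minimal sketch of this estimator in Python, for the model y ≈ c*x*log2(x). The function name `estimate_c` and the synthetic data are illustrative, not from the original:

```python
import math
import random

def estimate_c(xs, ys, weights=None):
    """Weighted least-squares estimate of c in the model y = c * x * log2(x)."""
    if weights is None:
        weights = [1.0] * len(xs)  # standard weights w[k] = 1
    yxlx = sum(w * y * x * math.log2(x) for w, x, y in zip(weights, xs, ys))
    xlx2 = sum(w * (x * math.log2(x)) ** 2 for w, x in zip(weights, xs))
    return yxlx / xlx2

# Synthetic data: true c = 2.5, plus additive noise
random.seed(0)
xs = [float(k) for k in range(2, 200)]
ys = [2.5 * x * math.log2(x) + random.gauss(0, 1) for x in xs]

# Standard weights w[k] = 1
c_std = estimate_c(xs, ys)

# Adaptive weights w[k] = 1/(1 + sqr(x*log2(x)) + sqr(y))
w_ad = [1.0 / (1.0 + (x * math.log2(x)) ** 2 + y ** 2)
        for x, y in zip(xs, ys)]
c_ad = estimate_c(xs, ys, w_ad)

print(c_std, c_ad)  # both should be close to the true value 2.5
```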
Mathematics: These formulas result from formulating the estimation problem as a weighted least-squares problem
sum[ w(x,y)*(y-c*f(x))^2 ] over (x,y) in Data
which expands as
sum[ w(x,y)*y^2 ]
-2*c* sum[ w(x,y)*y*f(x) ]
+ c^2 * sum[ w(x,y)*f(x)^2 ] over (x,y) in Data
where the minimum is located at
c = sum[ w(x,y)*y*f(x) ] / sum[ w(x,y)*f(x)^2 ].
w(x,y) should be approximately inverse to the variance of the error at (x,y). So if you expect errors of uniform size, take w(x,y)=1; if the error grows proportionally to x and y, then w(x,y)=1/(1+x^2+y^2) or something similar is a sensible choice.
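One can check numerically that the closed-form minimizer above really is the minimum of the quadratic in c. A small sketch (the data points here are made up for illustration):

```python
import math

# Hypothetical data points (x, y), model f(x) = x * log2(x), standard weights
data = [(2.0, 5.1), (4.0, 20.3), (8.0, 59.8), (16.0, 161.0)]
f = lambda x: x * math.log2(x)
w = lambda x, y: 1.0

def S(c):
    """Weighted sum of squared residuals sum[ w(x,y)*(y - c*f(x))^2 ]."""
    return sum(w(x, y) * (y - c * f(x)) ** 2 for x, y in data)

# Closed-form minimizer c = sum[ w*y*f(x) ] / sum[ w*f(x)^2 ]
c_star = (sum(w(x, y) * y * f(x) for x, y in data)
          / sum(w(x, y) * f(x) ** 2 for x, y in data))

# Nudging c in either direction can only increase the residual sum
print(S(c_star) <= S(c_star + 1e-3), S(c_star) <= S(c_star - 1e-3))
# prints: True True
```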