At the moment, we use the condition number of the Jacobian matrix to estimate how well our parameters can be determined. Because our parameter values differ by several orders of magnitude, this approach breaks down.

We would like to transform some parameters but since we do not know the exact math, we are not sure whether the transformation would lead to invalid results.

- Is the condition number of the Jacobian matrix an estimate of the relative condition number of the underlying function f at x, i.e. \|J(x)\| \|x\| / \|f(x)\|?
- If so, does the system allow us to analyse \|J(t(x))\| \|t(x)\| / \|f(t(x))\| for some linear transformation t, or for another transformation such as the logarithm?
- Is the condition number of J(t(x)) a valid estimate of determinability for some (e.g. linear) transformation t?
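For concreteness, the two quantities being compared in the question can be sketched numerically. This is a toy illustration (the model `f` and the parameter values are assumptions, not the actual chromatography model): it estimates the Jacobian by finite differences, then reports both the relative condition number \|J(x)\| \|x\| / \|f(x)\| and the condition number of J itself.

```python
import numpy as np

def relative_condition(f, x, eps=1e-6):
    """Finite-difference sketch: returns the relative condition number
    kappa(x) = ||J(x)|| * ||x|| / ||f(x)|| and the condition number of J."""
    x = np.asarray(x, dtype=float)
    fx = np.atleast_1d(f(x))
    # Build the Jacobian column by column with central differences.
    J = np.empty((fx.size, x.size))
    for i in range(x.size):
        h = eps * max(abs(x[i]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        J[:, i] = (np.atleast_1d(f(xp)) - np.atleast_1d(f(xm))) / (2 * h)
    kappa_rel = np.linalg.norm(J, 2) * np.linalg.norm(x) / np.linalg.norm(fx)
    return kappa_rel, np.linalg.cond(J)

# Toy model with parameters of very different magnitude (assumed values).
f = lambda p: np.array([p[0] * np.exp(-p[1]), p[0] + p[1]])
kappa_rel, kappa_J = relative_condition(f, np.array([1e3, 1e-2]))
```

The two numbers are related but not the same thing, which is part of what the question is probing.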

You are looking at the condition number of the matrix formed by the derivatives of the chromatogram with respect to each of the parameters, correct?

Short answer: a low condition number suggests your system is better determined, but it does not guarantee it, because these systems are highly non-linear.

My view is that you can apply whatever variable transforms you want, and if a transform gives a better condition number, that is good. The purpose of the transformations is to make it easier to find the optimal value, and that should lower the condition number.

However, a poor condition number does not by itself tell you whether your parameters are determinable: the math behind that interpretation assumes a linear system. In the past I designed experiments by trying to minimize the condition number, and that was pretty much a failure. I would get much lower condition numbers, but the optimizer would take longer to converge and would show a less clear optimum, indicating that the lower condition number did not always help.

If the estimated parameters span several orders of magnitude, I would generally recommend applying a log transform. Most search algorithms benefit from homogenizing the step width along the different dimensions of the search space.
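The effect of the log transform on conditioning follows directly from the chain rule: with theta_i = log(p_i), each Jacobian column gets multiplied by its parameter value p_i, which evens out columns whose scales differ by orders of magnitude. A minimal sketch (the Jacobian values are made up for illustration):

```python
import numpy as np

# Hypothetical Jacobian whose columns scale like 1/p_i because the
# parameters p span several orders of magnitude.
p = np.array([1e-4, 1.0, 1e3])
B = np.array([[2.0, 1.0, 3.0],
              [1.0, 4.0, 1.0],
              [0.5, 2.0, 2.0]])
J = B / p        # ill-conditioned: column scales differ by ~1e7

# Under theta_i = log(p_i), the chain rule gives dF/dtheta_i = p_i * dF/dp_i,
# i.e. every Jacobian column is rescaled by its parameter value.
J_log = J * p

print(np.linalg.cond(J))      # enormous
print(np.linalg.cond(J_log))  # modest
```

This is pure column rescaling, so it helps exactly when the ill-conditioning comes from scale differences rather than from genuinely correlated parameters.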

However, this alone might not be sufficient for highly correlated parameters and/or non-linear models. A typical example is the adsorption and desorption rate constants in customary binding models. It is often favourable to estimate just one of these rates, e.g. k_a, together with the equilibrium constant k_{eq}=\frac{k_a}{k_d}.
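The (k_a, k_eq) reparameterization can be checked on the sensitivity columns themselves. In the sketch below the sensitivities are invented for illustration: near the rapid-equilibrium limit, df/dk_d is almost -k_eq times df/dk_a, so the (k_a, k_d) columns are nearly collinear; rewriting the Jacobian in (k_a, k_eq) coordinates via the chain rule breaks that correlation.

```python
import numpy as np

# Hypothetical sensitivity columns of a chromatogram w.r.t. k_a and k_d.
# Near rapid equilibrium, df/dk_d ~ -k_eq * df/dk_a, so the two columns
# are almost collinear and the pair is hard to identify together.
k_a, k_d = 10.0, 2.0
k_eq = k_a / k_d

dk_a = np.array([1.0, 2.0, 0.5])                   # df/dk_a (assumed)
dk_d = -k_eq * dk_a + np.array([0.5, -0.5, 0.2])   # small kinetic part
J_ad = np.column_stack([dk_a, dk_d])

# Reparameterize to (k_a, k_eq), i.e. k_d = k_a / k_eq. Chain rule:
#   df/dk_a|new = df/dk_a + df/dk_d * (1 / k_eq)
#   df/dk_eq    = df/dk_d * (-k_a / k_eq**2)
J_new = np.column_stack([
    J_ad[:, 0] + J_ad[:, 1] / k_eq,
    -J_ad[:, 1] * k_a / k_eq**2,
])

def col_corr(J):
    """Cosine of the angle between the two sensitivity columns."""
    return J[:, 0] @ J[:, 1] / (np.linalg.norm(J[:, 0]) * np.linalg.norm(J[:, 1]))

print(col_corr(J_ad))   # near -1: strongly correlated
print(col_corr(J_new))  # much weaker correlation
```

Note that the new k_a column can come out small: that is the reparameterization making explicit that only the kinetic part of the data informs k_a.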

The effectiveness of such parameter transforms with respect to the identifiability of the estimated parameters can be evaluated by linearizing the model around a given parameter set. From the linearization you can compute the covariance matrix, e.g. under a least-squares estimator, and apply different measures of optimal design.
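As a sketch of that recipe (sensitivity values and noise level are assumptions): under a least-squares estimator with i.i.d. Gaussian noise, the linearization gives the Fisher information F = JᵀJ/σ², the approximate parameter covariance C = F⁻¹, and the usual alphabetic design criteria evaluated on F.

```python
import numpy as np

# Sensitivity matrix J from linearizing the model around a nominal
# parameter set (hypothetical values), and an assumed noise std sigma.
sigma = 0.01
J = np.array([[1.2, 0.3],
              [0.8, 0.9],
              [0.4, 1.5],
              [1.0, 0.6]])

F = J.T @ J / sigma**2               # Fisher information matrix
C = np.linalg.inv(F)                 # approximate covariance of the estimates

# Common optimal-design criteria evaluated on the information matrix:
d_opt = np.linalg.det(F)             # D-optimality: maximize det(F)
a_opt = np.trace(C)                  # A-optimality: minimize trace(C)
e_opt = np.linalg.eigvalsh(F).min()  # E-optimality: maximize smallest eigenvalue
std_errs = np.sqrt(np.diag(C))       # per-parameter standard errors
```

Comparing these criteria before and after a candidate transform gives a quantitative (if linearized) answer to whether the transform improves identifiability.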

Strangely, I found with SMA that trying to generate optimal designs this way did not work. We tried a bootstrapping approach to generate the Fisher information matrix and selected the optimal experiments, but in practice they normally performed the same or worse during optimization. I suspect it is related to the difference between the optimum being well defined and the path to the optimum being well defined.

This is something someone should look into later, because the reasons are probably really interesting and likely have to do with linear vs. non-linear systems and path vs. optimum outcomes.