Testing a new binding model: checkJacobianPatternFD test


I implemented a new binding model in CADET. @ronald.jaepel has shown me how to create a test for it, so I created the test using the CADET_BINDINGTEST_ALLBINDING function.

The test comparing the analytical Jacobian against the AD Jacobian passes; I checked several test cases and the two are always equal. As I understand it, this means the analytical Jacobian correctly corresponds to the flux.

But at the same time another test, checkJacobianPatternFD, always fails, and I don’t understand what exactly is tested there. What is the objective of this test, and what does it mean if it fails?

I get messages like this from the test runner:

/CADET/src/test/JacobianHelper.hpp:413: FAILED:
  CHECK( colB[row] < 0.0 )
with expansion:
  0.0 < 0.0
with messages:
  row := 0
  col := 3

checkJacobianPatternFD checks whether the Jacobian matches the sign “pattern” produced by finite differences: the inputs are perturbed a tiny bit and the test compares whether the output increases or decreases.

In this test, the exact value of the partial derivative isn’t evaluated, only whether it is positive or negative. In your posted error, the FD result came out below zero (I think), so the test asserts that the corresponding Jacobian entry is below zero as well, which it isn’t: the Jacobian returns exactly zero.

Without looking at the code and a full error log, I don’t know if I can help more.

In my experience the AD Jacobian is always correct, so if your analytical Jacobian matches the AD Jacobian, chances are high that it is implemented correctly. As for the FD test, it also depends on which FD scheme and order are actually implemented: sometimes the order is too low to give a good approximation of the derivative. In this case, for example, the analytical derivative is zero, but the FD could return a (nominally) negative value due to an insufficient approximation.

I’ve had incorrect AD results because I used incorrect types :sweat_smile:

At a point where I hadn’t yet fully understood the difference between the parameter types and had set up my model incorrectly, AD didn’t produce the right results (obviously, because I gave it incorrect information).

Now that I’m typing this, I thought I could also list the types I learned about:

  • ParamType (values that depend on parameters)
  • StateType (values that depend on stationary phase concentrations)
  • CpStateType (values that depend on mobile phase concentrations)
  • and their combinations, such as
    • StateParamType (for values depending on stationary phase concentrations and parameter values)
    • CpStateParamType (for values depending on mobile phase concentrations and parameter values)

I can see in the code that the FD test implements second-order central differences…

So, the question is: what additional information does the FD test give us if there is already a test for AD vs analytical Jacobian? I can see the value of the AD vs analytical test, because it checks both the analytical Jacobian and the flux implementation in a way that is independent of the actual binding model equations.

Do you agree that if the AD test passes but the FD test fails, the issue is with the FD test, not with the binding model? Or what else could explain this combination of test results?

I don’t know. Maybe @s.leweke can answer that.

The idea behind the test is: If your Jacobian is incorrect, you can maybe pinpoint which entries are wrong. Checking the pattern gives you hints where to look that are easy to understand.

Well, if the AD and analytical Jacobians agree at multiple points, I’d say the FD test has some problems. I’d consider the analytical Jacobian correct.

Thanks everyone for the answers. I verified that my analytical Jacobian matches the AD Jacobian at several points, so I’m now confident that the binding model implementation is correct.

BTW, the AD test also lets you check which Jacobian entries are wrong, and it gives you exact numbers, not just the sign. So maybe the FD test adds no new value, since we already have AD as the ground truth. In that case it might be a good idea to disable it, to avoid people hunting for an issue where there is none: this test is likely to be run by non-developers who are just adding their binding models.