Bad allocation error when increasing number of discretization steps

Hey everyone,

I am running into a “bad allocation” error when increasing the number of discretization steps.

What is the reason for that?
How can I prevent it?
Also is a higher number of discretization steps the only way to minimize the numerical dispersion?

The code as well as the .h5 file are provided below. (For me, the code works for n_col=100 but not for n_col=200.)

Simulation 16 comp Bi-Langmuir no Pores.ipynb (22.0 KB)
Bi_langmuir model.h5 (155.0 KB)

Thanks in advance for any replies!

Hi Jan,

I could not replicate your error for the file you sent using the latest CADET release; the simulation runs fine for me.

Bad allocations should not happen. If we get a reproducible error, we can investigate a potential bug.

Yes, the number of spatial discrete points combined with sufficiently low RELTOL and ABSTOL (which control time-step size) is the only way to reduce numerical dispersion in the latest CADET master release.
Another thing you can do is check out our CADET-DG branch, which implements a different numerical scheme for spatial discretization (Discontinuous Galerkin, DG). This requires a source build, since it is currently not included in our master branch, but it will be added in the future. The advantage of the DG scheme is that it generally requires fewer discrete points to achieve a given accuracy compared to FV (DG adds less numerical/artificial dispersion), except for cases where this additional dispersion is beneficial (which is the case for steep concentration fronts).

PS: I recommend exporting only the data that you need. I've seen that you set everything to true, but exporting, e.g., SOLUTION_BULK takes additional time and significantly increases file sizes.
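To make this concrete, here is a minimal sketch of what trimming the export flags looks like. The WRITE_SOLUTION_* field names follow the CADET file-format documentation; the unit name and the plain dict standing in for the /input/return group are illustrative only — in practice you would set these via CADET-Python or directly in the .h5 file.

```python
# Sketch: keep only the outlet solution (the chromatogram) in /input/return.
# Field names as in the CADET file format (WRITE_SOLUTION_*); the dict here
# merely illustrates the group structure, adapt the unit name to your model.
return_group = {
    "unit_000": {
        "WRITE_SOLUTION_OUTLET": 1,    # chromatogram: keep
        "WRITE_SOLUTION_INLET": 0,     # usually not needed
        "WRITE_SOLUTION_BULK": 0,      # full column interior: large!
        "WRITE_SOLUTION_PARTICLE": 0,  # full particle phase: large!
        "WRITE_SOLUTION_FLUX": 0,
    }
}

enabled = [name for name, flag in return_group["unit_000"].items() if flag]
print(enabled)  # only WRITE_SOLUTION_OUTLET remains
```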



Hey Jan,

thanks for your reply!
Very weird.
Could a bad allocation error in general be related to the amount of written data?
I noticed that the error also didn't show up for me anymore as soon as I minimized the exported data (per your recommendation), so maybe that was the problem.

Thanks for elaborating on the numerical dispersion!


The “bad allocation” error indicates that CADET could not get enough contiguous memory from the operating system.
This can be caused by one or both of the following:

  • You don’t have enough free memory to hold the data
  • The free memory is fragmented too much (no contiguous piece of memory available in the requested size)

Fields like SOLUTION_BULK and SOLUTION_PARTICLE require a large amount of memory. If you have lots of time steps at which the solution is recorded, a very large piece of contiguous memory is allocated. Presumably, this allocation failed.
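A back-of-the-envelope estimate shows why this field gets large quickly. The counts below are hypothetical (only n_comp = 16 comes from the example in this thread), and the arithmetic is my own, not CADET output:

```python
# SOLUTION_BULK stores one double (8 bytes) per recorded time point,
# axial cell, and component. Illustrative numbers:
n_time = 10_000   # recorded time points (hypothetical)
n_col  = 200      # axial FV cells, as in the failing run
n_comp = 16       # components, as in the Bi-Langmuir example

bytes_needed = n_time * n_col * n_comp * 8
print(f"{bytes_needed / 1e6:.0f} MB contiguous")  # 256 MB for this case
```

A single 256 MB contiguous block can already fail on a fragmented 32-bit address space, even when plenty of total memory is free.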

Yes, this can be a cause.


Hi @j.domagala,

do you know how to compile CADET? I believe we had this (or at least a similar) issue before and might already have fixed it on master. It's definitely high on our priority list, but we have been so occupied with other work that we never got around to actually making a new release.
Hopefully, this already solves your issue. If not, please let us know!


Thanks for all of your thoughts on the topic!

This makes a lot of sense. Thanks for the brief explanation!

I'm afraid I have not dealt with compiling CADET up to this point.
Since the reduction of exported data fixed the issue for now, I will probably avoid diving into it.
Although of course I may have to get back to it, in case another issue requires a CADET build from source.

Thanks again for the help!
I highly appreciate CADET's developers and community!

PS: Maybe this should be discussed in another topic, but do you have any tips on how to achieve minimal numerical dispersion without an enormous simulation time? Is it really just trial and error with the discretization steps?


I am afraid that's what it is. The following may help with your trial-and-error strategy:
The FV method as implemented in CADET has an asymptotic convergence rate of 2. It's usually slightly higher, depending on the setting. A convergence rate of 2 means that when doubling the number of FV cells, the approximation error is divided by four.
The formula for experimental order of convergence is given by

EOC = \frac{\log(error_{k-1} / error_k)}{\log(nCells_k / nCells_{k-1})},

where error_k denotes some error metric of discretization step k and nCells_k denotes the number of FV cells for discretization step k (k-1 denotes a coarser discretization than k).
You can use this to roughly estimate the approximation accuracy when increasing the number of cells, but be careful since the convergence rate is an asymptotic property.
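The steps above can be sketched in a few lines. The error values and cell counts below are made-up illustrations; only the formula itself comes from this post:

```python
import math

# Experimental order of convergence (EOC) from two successive runs:
# EOC = log(error_prev / error_curr) / log(n_curr / n_prev)
def eoc(error_prev, error_curr, n_prev, n_curr):
    return math.log(error_prev / error_curr) / math.log(n_curr / n_prev)

# Doubling the cells with a second-order scheme divides the error by ~4:
print(eoc(0.04, 0.01, 50, 100))  # ≈ 2.0

# Rough prediction for the next doubling, assuming the asymptotic rate holds:
predicted = 0.01 / 2 ** 2
print(predicted)  # 0.0025
```

Keep in mind the caveat above: the rate is asymptotic, so on coarse grids the measured EOC can deviate noticeably from 2.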
You should also set the time integration tolerances RELTOL and ABSTOL only as small as required.


A rule of thumb that has worked for me is to use 24 axial discretization nodes per cm of the column and 30 nodes for the particle discretization. These are the values that I would use in most scenarios when performing simulations with a completed model. For fitting, I would use fewer nodes and tolerate some numerical dispersion in order to obtain the parameters more quickly. You can also repeat the fitting routine with more nodes after narrowing the parameter bounds. For example, in a large-scale parameter estimation problem I used 1/3 of the number of nodes mentioned above. In some cases, you may need a finer particle discretization if the diffusion rates are small (less than 1e-13 m^2/s). This was the case for one atypical problem in my thesis work and not something that you would normally find.
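A small helper applying this rule of thumb might look like the following. The function name, the example column length, and rounding choices are my own; only the 24-nodes-per-cm, 30 particle nodes, and 1/3-for-fitting figures come from the post above:

```python
# Sketch of the rule of thumb above: 24 axial nodes per cm of column,
# 30 particle nodes, and roughly 1/3 of the axial nodes while fitting.
def discretization(column_length_cm, fitting=False):
    n_col = round(24 * column_length_cm)
    n_par = 30
    if fitting:  # coarser grid while estimating parameters
        n_col = max(1, round(n_col / 3))
    return n_col, n_par

print(discretization(10.0))                # (240, 30) for a 10 cm column
print(discretization(10.0, fitting=True))  # (80, 30) while fitting
```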

As for abstol and reltol, I have found that 1e-6 to 1e-8 for abstol and 1e-6 for reltol are fine. I always used these values unless I had components that were particularly low in concentration; then I used 1e-9 and 1e-7. If you are inputting the entire inlet profile over time (using the piecewise polynomial) instead of using the pulse injection coefficients, I don't think that there is a practical difference between abstol and reltol, because every time step is a discontinuous section. Here, I used 1e-6 and 1e-6. In general, I would only loosen these tolerances if my simulations failed (with a TOO_MUCH_WORK error).
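As a sketch, these settings map onto the ABSTOL/RELTOL fields of CADET's time integrator configuration (field names as in the CADET file-format documentation). The helper function and its default of 1e-8/1e-6 are my own packaging of the rule of thumb above, not an official recommendation:

```python
# Sketch: pick time-integrator tolerances per the rule of thumb above.
# Field names (ABSTOL, RELTOL) as in /input/solver/time_integrator of the
# CADET file format; the values mirror this post, not official defaults.
def solver_tolerances(low_concentration=False):
    if low_concentration:  # trace components: tighten both tolerances
        return {"ABSTOL": 1e-9, "RELTOL": 1e-7}
    return {"ABSTOL": 1e-8, "RELTOL": 1e-6}

print(solver_tolerances())
print(solver_tolerances(low_concentration=True))
```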

Hope this helps!


Thanks for this formula; it helps a lot to understand the effect that an increased number of discretization steps actually has!

Okay, this seems like a good rule of thumb. I will certainly apply it to my model. Thanks for sharing this approach!

Another important point to consider is how much numerical approximation error your application can tolerate. Theoretically, the true solution can be approximated arbitrarily accurately, but in practice we often need to find a good balance between computational effort and accuracy.