Why apply DEM simulations to the pharmaceutical development process?


Producing and marketing new medicines generally comes after a long process involving months or years of research, testing, and a large investment of resources.

With today’s increasingly complex processes, more highly structured products, and strong regulation by national governments, the product approval process in the pharmaceutical industry presents particular challenges and raises many questions.

Because the development cycle is quite long and very expensive, optimizing the available resources towards getting regulatory approval is critical.

Therefore, we have prepared this article to answer questions about computational simulation and Discrete Element Method (DEM) for this sector.

Why should I use a physics-based process modeling technique? Ultimately I need to provide experimental data for the regulatory submission.

Simulations are not intended to take the place of experiments, but they can ultimately reduce the experimental burden of the drug development process by substantially improving process insight.

Reliable first principle models allow for enhanced process understanding by capturing the dynamic interaction of process, geometry, and material variables.

This leads to significant process improvement with a much-reduced experimental burden. Validated models provide information like dynamic stress distributions, which are very hard to obtain experimentally. Such information can help an engineer make informed decisions on formulation, process, and equipment design changes, and to troubleshoot problems (Figure 1).

I use statistical models that are fast and work very well. Why should I use first principle models?

Empirical models, such as statistical or Artificial Neural Network (ANN) models, are mathematical/statistical correlations between inputs and outputs observed in large volumes of experimental data.

They are quicker to run, but they do not capture the underlying physics. These models are therefore very good at predicting changes within an established design space, but very poor at capturing failure modes outside that region.

In contrast, first principle physics-based methods like the Discrete Element Method (DEM), Computational Fluid Dynamics (CFD), and the Finite Element Method (FEM) are built on fundamental first principles such as conservation of mass, momentum, and energy. Often enough, first principle models are used to provide reasonable starting points for experiments, thereby saving valuable time and material.
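To make the conservation-law idea concrete, here is a minimal sketch of the kind of calculation at the heart of DEM: two particles resolved with a linear spring-dashpot contact model and explicit time integration. All parameter values are illustrative only and are not taken from any Rocky DEM case.

```python
import numpy as np

# Minimal 1-D DEM sketch: two equal spheres approaching each other,
# resolved with a linear spring-dashpot contact force and explicit
# time stepping (Newton's second law, i.e., momentum conservation).
radius = 1e-3          # m
mass = 1e-6            # kg
k = 1e4                # contact stiffness, N/m
c = 0.01               # dashpot damping coefficient, N*s/m
dt = 1e-7              # time step, s (well below the contact period)

x = np.array([-1.5e-3, 1.5e-3])   # particle centers, m
v = np.array([0.5, -0.5])         # approach velocities, m/s

for _ in range(100_000):
    gap = (x[1] - x[0]) - 2 * radius      # surface separation
    overlap = max(0.0, -gap)
    rel_v = v[1] - v[0]
    # Repulsive spring-dashpot force, active only during overlap
    f = (k * overlap - c * rel_v) if overlap > 0 else 0.0
    a = np.array([-f, f]) / mass          # equal and opposite forces
    v += a * dt
    x += v * dt

# After the collision the particles separate with reduced speed,
# because the dashpot dissipates kinetic energy.
print(v)
```

Production DEM codes use the same loop structure, but with 3-D geometry, nonlinear (e.g., Hertzian) contact models, friction, and millions of particles.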

What types of processes can be modeled using DEM?

Bulk particulate processes can be readily modeled using DEM. In the pharmaceutical industry, DEM has been used to model hopper discharge, mixing, milling, granulation, tablet coating, and more. If fluid effects are important, as in drying, fluidized-bed processes, or dry powder inhaler design, then coupled DEM-CFD models can be used. Rocky DEM is fully integrated with ANSYS and couples seamlessly with ANSYS Fluent in both 1-way and 2-way modes.

Furthermore, Rocky DEM implements multiple advanced models to enable enhanced process insights, some of which are listed in the table below:

Table 1 – Discrete Element Method (DEM) Advanced Models for Pharmaceutical Applications

Several examples of pharmaceutical processes successfully modeled using Rocky DEM can be seen in Figure 2.

Figure 2 – Pharmaceutical processes modeled with Rocky DEM: (a) mixing; (b) tablet coating; (c) wet granulation; (d) breakage; (e) drying; (f) fluid-driven operations.

My particle size is a few microns, and my system has billions of particles. Can such a system be solved using DEM?

At the actual particle size, this cannot yet be done in practice. But you don’t need a 1:1 model to obtain critical insight: engineers can use a well-calibrated model with a somewhat larger particle size, so that the total particle count can be handled by the available hardware.

Model calibration for pharmaceutical processes can be done by adjusting material properties like friction and Young’s modulus so that an experimental measurement, typically related to flow, is well matched.
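The calibration loop described above can be sketched as a one-dimensional search on a friction coefficient. Here `simulated_angle_of_repose` is a made-up stand-in for a real DEM heap-test run; its monotone form and all numbers are hypothetical, used only to show the search logic.

```python
# Hypothetical calibration sketch: tune one DEM parameter (sliding
# friction) until the simulated angle of repose matches a measured
# target. In practice each call below would be a full DEM simulation.
def simulated_angle_of_repose(friction: float) -> float:
    # Illustrative monotone relation (degrees), NOT a real DEM model
    return 20.0 + 30.0 * friction / (friction + 0.5)

def calibrate_friction(target_deg: float, lo=0.0, hi=1.0, tol=0.01) -> float:
    # Bisection works because the angle of repose increases with friction
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulated_angle_of_repose(mid) < target_deg:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu = calibrate_friction(target_deg=32.0)
print(round(mu, 3))
```

Because each DEM run is expensive, real calibrations typically use a small design of experiments or a surrogate model rather than many sequential runs, but the matching criterion is the same.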

In our benchmark study with BMS, we modeled a granulation scale-up study with a real feed size of 80 microns. The simulations were run at 10 times the particle size, while ensuring that the angle of repose and impeller power consumption matched the experiments well.
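The payoff of this coarse-graining is easy to quantify: at fixed total powder mass, particle count scales with the inverse cube of the size scale factor. The particle counts below are illustrative, not from the BMS study.

```python
# Back-of-envelope particle-count reduction from coarse-graining.
# At fixed total mass, count ~ 1 / (size scale factor)**3, so running
# an 80-micron feed at 10x the particle size cuts the count 1000-fold.
scale_factor = 10
count_reduction = scale_factor ** 3
print(count_reduction)              # 1000

# e.g. 2 billion real particles become a tractable 2 million
real_count = 2_000_000_000
simulated_count = real_count // count_reduction
print(simulated_count)              # 2000000
```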


Can you provide an example of how a DEM model can be used practically?

Yes, consider the benchmark wet granulation case study with BMS. After the initial calibration discussed above, the granulation was simulated at different scales and rotation speeds, and the work done per unit mass was computed. Transient variables like pressure, shear stress profiles, and residence time distribution at different scales were extracted from the simulations (Figure 4). In this instance, it is easy to see that the pressure and shear stress decrease as the vessel size increases, which can be attributed to particles spending less time in the high-shear zone. This implies that the work done per unit mass must be increased at the larger scale, either by using a higher tip speed or by increasing the wet massing time. These observations were matched experimentally. The analysis compared scale-up laws at the 150 L scale using a fraction of the material that would have been consumed if scale-up had been done through conventional experimentation.
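Two common impeller scale-up rules that such an analysis can compare are constant tip speed and constant Froude number. The sketch below shows how each rule sets the impeller speed at a larger diameter; the 0.3 m/0.6 m/300 rpm numbers are hypothetical and not taken from the BMS study.

```python
import math

# Illustrative impeller scale-up rules for a high-shear granulator.
# Tip speed: v = pi * D * N;  Froude number: Fr = N**2 * D / g.
def rpm_constant_tip_speed(n1_rpm: float, d1: float, d2: float) -> float:
    return n1_rpm * d1 / d2                 # keeps pi * D * N constant

def rpm_constant_froude(n1_rpm: float, d1: float, d2: float) -> float:
    return n1_rpm * math.sqrt(d1 / d2)      # keeps N**2 * D constant

# Hypothetical scale-up from a 0.3 m to a 0.6 m impeller at 300 rpm
n_tip = rpm_constant_tip_speed(300.0, 0.3, 0.6)   # 150 rpm
n_fr = rpm_constant_froude(300.0, 0.3, 0.6)       # ~212 rpm
print(n_tip, round(n_fr, 1))
```

Because the two rules disagree (here by about 60 rpm), a validated DEM model of stress and residence time is precisely what lets an engineer choose between them without exhausting material on trial batches.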

How can I make sure that my model is reliable?

A reliable model must be both verified and validated.

Model verification: Ensures that the mathematical model is accurately implemented (code) and then solved (calculation). Code verification includes Software Quality Assurance (SQA), which shows that the software runs correctly and produces reproducible results, and Numerical Code Verification (NCV), which demonstrates the correct implementation and functioning of the numerical algorithms. Calculation verification estimates the numerical error in the Quantities of Interest (QOI) due to discretization, solver tolerances, and user errors.
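One standard way to estimate the discretization error in a QOI is Richardson extrapolation over a sequence of systematically refined time steps. The sketch below uses made-up QOI values chosen only to illustrate the procedure.

```python
import math

# Calculation-verification sketch: estimate discretization error in a
# quantity of interest (QOI) from three solutions on refined time steps.
h = [4e-5, 2e-5, 1e-5]          # time steps, constant refinement ratio
q = [10.48, 10.12, 10.03]       # QOI at each step (illustrative values)

r = h[0] / h[1]                 # refinement ratio, here 2
# Observed order of accuracy from the three solutions
p = math.log((q[0] - q[1]) / (q[1] - q[2])) / math.log(r)
# Richardson-extrapolated (step size -> 0) estimate of the QOI
q_exact = q[2] + (q[2] - q[1]) / (r**p - 1)
# Estimated discretization error of the finest solution
error = abs(q[2] - q_exact)
print(round(p, 2), round(q_exact, 3), round(error, 4))
```

With these illustrative values the observed order is 2, the extrapolated QOI is 10.0, and the finest solution carries an estimated error of 0.03, i.e., about 0.3%.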

Model validation: This is the process of determining the degree to which a model or simulation is an accurate representation of the real world from the perspective of its Context of Use (CoU), which is the intended use of the model or simulation. It is generally demonstrated by comparing the computational model predictions with experimentally determined results, referred to as comparator(s). Validation activities are principally concerned with demonstrating the degree to which the sensitivities and uncertainties of the computational model and the associated comparator(s) are understood. Credibility factors include the governing equations incorporating the correct physics, and the model inputs (geometry, material properties, boundary conditions, and assumptions).

Was this content helpful? If you have any questions that have not been clarified, please comment below, and we will reply to you in this same blog post.

Saurabh Sarkar

Applications Engineer, Rocky DEM

Dr. Saurabh Sarkar is an Applications Engineer for the Rocky DEM Business Unit. Prior to joining ESSS, Dr. Sarkar worked as an Adjunct Faculty at Rutgers University and an on-site Consultant at Sunovion Pharmaceuticals where he supported drug formulation and process development activities. He obtained his Ph.D. in Pharmaceutics from the University of Connecticut where his focus was understanding and optimization of different pharmaceutical unit operations using DEM and CFD tools in projects with multiple industrial and government collaborators. He is a Senior Member of the AIChE and serves as an expert reviewer for several journals.
