Why apply DEM simulations to the pharmaceutical development process?
Published on: May 7, 2019
Producing and marketing new medicines generally comes after a long process involving months or years of research, testing, and a large investment of resources.
With today’s increasingly complex processes, better-structured products, and strong regulation by national governments, product approval in the pharmaceutical industry presents particular challenges that raise many questions.
Because the development cycle is quite long and very expensive, optimizing the available resources towards getting regulatory approval is critical.
Therefore, we have prepared this blog post to answer common questions about computational simulations and the Discrete Element Method (DEM) in the pharmaceutical sector.
Why should I use a modeling tool? Ultimately I need to provide experimental data for the regulatory submission.
Simulations are not intended to take the place of pharmaceutical experiments, but simulations can ultimately reduce the experimental burden on the drug development process by substantially improving process insight.
Reliable first principle models allow for enhanced process understanding by capturing the dynamic interaction of process, geometry, and material variables. This leads to significant process improvement with a much-reduced experimental burden. Validated models provide information like dynamic stress distributions, which are very hard to obtain experimentally. Such information can help an engineer make informed decisions on formulation, process, and equipment design changes, and to troubleshoot problems.
Figure 1 shows a generic flowchart illustrating how DEM can help in optimized process and product development.
Figure 1: Flowchart showing how DEM simulations can be used in the pharmaceutical industry for optimized process and product development
I use statistical models that are fast and work very well. Why should I use first principle models?
Empirical models, such as statistical models or artificial neural networks, are mathematical correlations between inputs and outputs derived from large volumes of experimental data.
They are quicker to run but do not capture the underlying physics. As a result, these models are very good at predicting changes within an established design space but very poor at capturing failure modes outside that region.
In contrast, first principle physics-based methods like Discrete Element Method (DEM), Computational Fluid Dynamics (CFD), and Finite Element Method (FEM) use fundamental first principles like conservation of mass, momentum, and energy. Often, first principle models are used to provide reasonable starting points for experiments thereby saving valuable time and material.
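The interpolation-versus-extrapolation limitation of empirical models can be illustrated with a toy example (entirely hypothetical data, not from any real process): a polynomial fitted to design-of-experiments points reproduces them well inside the studied range, but misses a failure mode that only appears outside it.

```python
import numpy as np

# Toy "true" process response (hypothetical): linear inside the studied
# range, with a sharp drop-off (a failure mode) beyond it.
def true_response(x):
    return np.where(x <= 1.0, 80.0 + 10.0 * x, 90.0 - 60.0 * (x - 1.0) ** 2)

# "Experiments" are only run inside the design space 0 <= x <= 1
x_doe = np.linspace(0.0, 1.0, 6)
y_doe = true_response(x_doe)

# Empirical model: quadratic least-squares fit to the DOE data
model = np.poly1d(np.polyfit(x_doe, y_doe, 2))

# Inside the design space the fit is essentially exact ...
err_inside = float(abs(model(0.5) - true_response(0.5)))

# ... but extrapolating to x = 1.5 misses the failure mode entirely
err_outside = float(abs(model(1.5) - true_response(1.5)))

print(f"error inside design space:  {err_inside:.2e}")
print(f"error outside design space: {err_outside:.1f}")
```

A physics-based model that encoded the mechanism behind the drop-off would not share this blind spot, which is the point of the contrast drawn above.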
Finally, the use of CFD and FEM for medical device development is quite common and encouraged by the FDA.
What types of pharmaceutical processes can be modeled using DEM simulations?
Bulk particulate processes can be easily modeled using DEM. Standalone DEM has been used to model mixing, sieving, milling, wet granulation, tablet coating, hopper discharge, etc. If the fluid effects are important, like in drying, fluidized bed processes, etc., then DEM-CFD coupled models can be used. DEM can also be coupled with an FEM tool to model compression and aid in the development of medical devices.
My particle size is a few microns, and my system has billions of particles. Can such a system be modeled using DEM? If so, can you provide an example?
While there is no theoretical limit to the number of particles in a DEM simulation, modeling billions of particles is not practical due to hardware limitations. However, a 1:1 model is not necessary to obtain critical insight into the process. Engineers can instead use a well-calibrated model with a slightly larger particle size, so that the total count can be handled by the available hardware.
Model calibration for pharmaceutical processes can be done by adjusting material properties, such as friction and Young’s modulus, until an experimental flow measurement, typically the angle of repose or a rheometer test, is well matched.
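Such a calibration can be sketched as a simple search over a friction coefficient until a target angle of repose is matched. The `run_dem_angle_of_repose` function below is a hypothetical stand-in, mocked here with a monotonic toy relation; in practice each evaluation would launch a full DEM heap test.

```python
def run_dem_angle_of_repose(friction):
    """Hypothetical stand-in for a DEM heap test. In practice this would
    launch a full simulation; here a toy monotonic relation is used so
    the search logic can be demonstrated."""
    return 20.0 + 40.0 * friction / (friction + 0.5)

def calibrate_friction(target_deg, lo=0.0, hi=1.0, tol=0.1):
    """Bisection on the friction coefficient, assuming the angle of
    repose increases monotonically with friction."""
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        angle = run_dem_angle_of_repose(mid)
        if abs(angle - target_deg) < tol:
            break
        if angle < target_deg:
            lo = mid
        else:
            hi = mid
    return mid, angle

mu, angle = calibrate_friction(target_deg=35.0)
print(f"calibrated friction = {mu:.3f}, angle of repose = {angle:.2f} deg")
```

Because every iteration is an expensive simulation, in practice the search is often replaced by a small design of experiments over the material parameters.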
In our benchmark study with BMS, we evaluated different scale-up rules for wet granulation with a real feed size of 80 microns. The simulations were run at 10 times the particle size, while ensuring that the angle of repose and impeller power consumption (Figure 2) matched experiments well.
Figure 2: Calibration of particle-wall friction for feed powder simulated at 10x particle size, using impeller power consumption.
After calibrating the material properties for the powder, wet granulation was simulated at different scales and rotation speeds, and the work done per unit mass was computed. Transient variables such as pressure, shear stress profiles, and residence time distribution at different scales were extracted from the simulations (Figure 3). In this instance, it is easy to see that pressure and shear stress decrease as the vessel size increases, which can be attributed to particles spending less time in the high-shear zone.
This implies that the work done per unit mass must be increased at the larger scale, either by using a higher tip speed or by increasing the wet massing time. These observations were matched experimentally. The analysis compared scale-up laws at the 150 L scale using a fraction of the material that would have been required by conventional experimentation.
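The impeller scale-up rules referred to above can be written down compactly. The sketch below (illustrative numbers, not BMS's actual procedure) compares the two most common rules for setting the large-scale impeller speed: constant tip speed and constant Froude number.

```python
import math

def constant_tip_speed(n1_rpm, d1_m, d2_m):
    """Keep v_tip = pi * N * D constant: N2 = N1 * (D1 / D2)."""
    return n1_rpm * d1_m / d2_m

def constant_froude(n1_rpm, d1_m, d2_m):
    """Keep Fr = N^2 * D / g constant: N2 = N1 * sqrt(D1 / D2)."""
    return n1_rpm * math.sqrt(d1_m / d2_m)

# Illustrative: 300 rpm in a 0.30 m bowl scaled to a 0.75 m bowl
n_tip = constant_tip_speed(300.0, 0.30, 0.75)  # 120 rpm
n_fr = constant_froude(300.0, 0.30, 0.75)      # ~190 rpm

print(f"constant tip speed:    {n_tip:.0f} rpm")
print(f"constant Froude number: {n_fr:.0f} rpm")
```

The constant-tip-speed rule gives the lower speed at scale, which is consistent with the observation above that matching the work per unit mass at the larger scale generally calls for either a higher tip speed or a longer wet massing time.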
Figure 3: Post-processing of a high-shear wet granulation case showing how velocity, pressure, shear stress, and residence time distribution change with equipment size. Data from Remy et al., Advances in Discrete Element Modeling of high-shear wet granulation process using Rocky-DEM, 2016 AIChE Annual Meeting.
I understand we have to make quite a few assumptions to simplify the problem while capturing the essential physics. But with these assumptions, isn’t the credibility of the model an issue? How can I make sure that my model is reliable?
Confidence in a model, for both the engineer and the regulatory agency, stems from the model’s ability to match real experimental observations over a diverse range of operating conditions. This is part of model validation. A well-validated model not only matches the experimental data, but also provides a measure of the sensitivities and uncertainties of the computational model and of the associated experimental comparator. This in turn rests on scientifically sound assumptions and on using the correct physics and mathematical models.
Needless to say, it is imperative that the mathematical models used are implemented and solved accurately. This assurance is provided through model verification.
We encourage users to review the relevant guidance, from organizations like the FDA and the ASME, to ensure the reliability of a model, essentially by proving it is both verified and validated.
There are many DEM codes on the market. Why should I use Rocky DEM for modeling pharmaceutical unit operations?
Rocky DEM is particularly well suited to modeling pharmaceutical unit operations because of several distinguishing features:
1) Multi-GPU technology: As mentioned above, pharmaceutical operations typically involve large particle counts. Rocky DEM’s multi-GPU technology enables processing of tens of millions of particles with relative ease. This study shows the relative speed-up for a case simulating approximately 241,000 polyhedral tablets, each with 222 vertices.
2) Fully integrated with Ansys: Rocky DEM is integrated into Ansys Workbench, allowing it to solve large and complicated multiphysics problems quickly and accurately. Rocky DEM seamlessly couples with Ansys Fluent in both one-way and two-way modes, and with Ansys Mechanical for static and transient structural simulations. Furthermore, parametric design optimization studies can be conducted in silico using DesignXplorer in Workbench.
3) Advanced models: Rocky DEM implements a number of advanced physics models that capture the underlying behavior and make it easy to post-process simulation data for maximum insight. For example, Rocky DEM allows one to import an exact tablet shape, which is often critical for modeling breakage or tablet coating operations. Modeling a shaped tablet using glued spheres, as many competing DEM codes do, is physically inaccurate. An example is shown here.
Table 1 lists some of the advanced models implemented within Rocky DEM. Figure 4 shows a few examples of how simulations of some of the advanced models have been used to model pharmaceutical materials and processes.
Table 1 – Discrete Element Method (DEM) Advanced Models for Pharmaceutical Applications
[Table entry: a model that predicts the probability of breakage, which can be used as an alternative to computationally expensive breakage models.]
Figure 4: (a) Adhesive model used to study the distribution of a fine cohesive lubricant in a multicomponent mixture in a conical blender. (b) Eulerian statistics used to study stresses in a tablet coater. (c) Inter-particle collision statistics used to predict the shear stresses on different tablet shapes for the same coating process.
Was this content helpful? If you have any questions about computational simulations in the pharmaceutical industry that have not been addressed, please comment below, and we will reply in this same blog post.
Saurabh Sarkar
Applications Engineer, Rocky DEM
Dr. Saurabh Sarkar is an Applications Engineer for the Rocky DEM Business Unit. Prior to joining ESSS, Dr. Sarkar worked as an Adjunct Faculty at Rutgers University and an on-site Consultant at Sunovion Pharmaceuticals where he supported drug formulation and process development activities. He obtained his Ph.D. in Pharmaceutics from the University of Connecticut where his focus was understanding and optimization of different pharmaceutical unit operations using DEM and CFD tools in projects with multiple industrial and government collaborators. He is a Senior Member of the AIChE and serves as an expert reviewer for several journals.