Published on: November 3, 2022
Through a recent collaboration, ESSS, Weir Minerals, and Oracle Cloud Infrastructure (OCI) analyzed and simulated an advanced design of a SAG mill on OCI’s BareMetal GPU shape, using the latest 22R2 release of Rocky and considering the grate discharge flow. This blog post highlights recent advances in DEM-SPH simulation capabilities that, combined with access to efficient GPU infrastructure, enabled the modeling of a real-scale SAG mill including key features that were previously tedious to compute, neglected, or simplified.
SAG mills are used for primary grinding of ore in comminution processes. Milling and grinding are energy-intensive processes: according to one study [1], they can consume up to 1.8% of globally produced electricity annually. To increase production yield with improved energy efficiency, equipment manufacturers and operators continuously turn to computational modeling and simulation to assist product design.
As illustrated in Figure 1, SAG mills are essentially large rotating drums in which ore is mixed with steel balls. As the drum rotates, the charge (ore + balls) cascades down and impacts the drum walls. With enough impact energy, the ore is fragmented into smaller pieces, and this process continues until the desired particle size distribution and material throughput are achieved. The dynamics of the charge inside the mill have a direct impact on the grinding efficiency, the amount of wear the equipment will experience over time, and ultimately the energy usage.
While the mill operates and the grinding process takes place, excessively fine particles are generated. Besides affecting the equipment’s performance, these fines also generate dust. To mitigate dust-related issues and remove the fines from within the drum, water is usually fed into the mill to mix with the finer particles. Figure 2a shows this mixture, which usually takes the form of a highly viscous slurry with complex rheological behavior. As the slurry is transported throughout the mill, it must be properly removed (see Figure 2b); otherwise, it degrades the equipment’s power draw and grinding efficiency.
As the ground ore and slurry reach the discharge end of the mill, they pass through small holes to be lifted and discharged. These grate holes (Figure 3) are much smaller than the mill and considerably smaller than the initial size of the ore. Proper characterization of the flow through the grates is key to improving the equipment design, optimizing throughput, and avoiding issues such as carryover and flow back.
From a modeling and simulation perspective, accounting for such a detailed description of such large equipment is computationally demanding. In recent years, engineers have been able to use Rocky to simulate wear and study equipment efficiency using realistic particle shape representation while considering particle breakage. Including the slurry on top of that is a major challenge, and traditional Eulerian grid-based approaches usually suffer from numerical diffusion and extremely long simulation times.
In Rocky’s latest release, a mesh-free Lagrangian approach for fluids was implemented following the Smoothed Particle Hydrodynamics (SPH) method. The computationally efficient GPU and multi-GPU solver enables the modeling of practical engineering solutions to particle-laden free-surface flows such as milling and grinding. Simulating a section of a SAG mill is a good example to put Rocky’s DEM-SPH coupling to the test: the problem includes large equipment with complex details, a large number of particles, and a fluid phase with a free surface, all to be solved in a fully coupled fashion.
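The core idea of SPH can be illustrated with a short, self-contained sketch (this is not Rocky’s implementation; the 1D setup, cubic spline kernel, and smoothing length below are illustrative assumptions): each fluid “particle” carries mass, and field quantities such as density are recovered by summing neighbor contributions weighted by a smoothing kernel.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1D cubic spline smoothing kernel (normalization 2/(3h))."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Particles on a uniform 1D line with spacing dx and mass m = rho0 * dx:
rho0, dx = 1000.0, 0.01           # water-like reference density (kg/m^3), m
x = np.arange(0.0, 1.0, dx)
m = rho0 * dx                     # mass per particle (per unit cross-section)
h = 1.3 * dx                      # smoothing length, a typical multiple of dx

# SPH density summation: rho_i = sum_j m_j * W(x_i - x_j, h)
rho = np.array([np.sum(m * cubic_spline_kernel(xi - x, h)) for xi in x])
print(rho[len(x) // 2])           # interior particles recover ~rho0
```

Because there is no grid, free surfaces and large deformations come “for free,” which is precisely why the method suits the sloshing, splashing slurry inside a mill.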
A full 360° discharge end of a real-scale SAG mill was considered, as illustrated in Figure 4. The mill is 6.7 m in diameter and 1 m in length (cylindrical part). It operates at 12.42 RPM (75% of the critical velocity) with a 25% filling ratio. 3,500 liters of water are included.
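As a quick sanity check on these operating conditions, mill critical speed is commonly estimated with the textbook formula N_c = 42.3/√D, with D the effective diameter (inside the liners) in meters. The sketch below, an assumption-laden back-of-the-envelope calculation rather than the article’s method, uses the nominal 6.7 m shell diameter, so it lands slightly below the quoted 12.42 RPM; that gap is consistent with a somewhat smaller effective diameter.

```python
import math

def critical_speed_rpm(diameter_m):
    """Textbook estimate of mill critical speed in RPM:
    N_c = 42.3 / sqrt(D), with D the effective mill diameter in meters
    (the speed at which the outer charge layer would centrifuge)."""
    return 42.3 / math.sqrt(diameter_m)

D = 6.7  # nominal shell diameter from the article, m
n_c = critical_speed_rpm(D)
print(f"critical speed ~ {n_c:.2f} RPM; 75% of critical ~ {0.75 * n_c:.2f} RPM")
# Using the (smaller) effective diameter inside the liners instead of the
# nominal shell diameter brings 75% of critical up to the quoted 12.42 RPM.
```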
The steel balls are modeled as spheres, while the ore particles are modeled both as spheres and as polyhedra for comparison. The sizes and counts of particles and SPH elements are summarized in Table 1.
Solving a problem of this complexity in reasonable engineering time requires efficient and scalable hardware, and Oracle Cloud Infrastructure has the computing capability for it. The brain of the system is the Bare Metal GPU4.8 server, loaded with eight NVIDIA A100 Tensor Core GPUs with 40 GB of memory each. There the engineering team installed the Rocky solver on top of Oracle Linux, Oracle’s own OS distribution, which carries all the dependencies required to drive the NVIDIA A100s and run Rocky seamlessly.
The machine also carries a 27 TB NVMe drive, which is key to handling huge volumes of data efficiently. Add full root privileges, efficient support, and an incredibly short time to market, and you have a tailored solution for real industrial applications. Going from power-on to up and running within the same day, OCI’s solutions are the number one choice.
The fully coupled DEM-SPH flow was integrated for a couple of mill revolutions. As shown in the animation below, Rocky is able to capture the complex interplay between ore, steel balls, and water.
Moreover, the simulation properly captured the flow through the grates, and unwanted phenomena such as carryover and flow back also showed up, making the approach suitable for design optimization studies.
Using OCI’s Bare Metal GPU4.8 machine, we evaluated the simulation performance and scalability with 2, 4, and 8 GPUs. In Figure 6, the solid lines represent the solver scalability, given by the ratio between the wall-clock time for 2 GPUs and those for 4 and 8 GPUs, while the vertical bars represent the total GPU memory usage.
In general, the simulation ran approximately 40% faster when going from 2 to 4 GPUs, with very similar memory usage of about 60 GB. In terms of memory, this means that with 4 GPUs (160 GB in total) there is room to double the problem size. Doubling the number of cards once again brought another 10% improvement in simulation time. On the other hand, memory usage increased to 75 GB for spheres and to 90 GB for polyhedra, explained by the overhead of exchanging information among the 8 GPUs.
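The reported reductions can be translated into relative speedup and parallel efficiency. The sketch below uses only the percentages quoted above (40% and 10%), with wall-clock times normalized to the 2-GPU run; the absolute times are not restated here.

```python
def scaling_from_reductions(base_gpus, reductions):
    """Derive relative speedup and parallel efficiency from reported
    per-step wall-clock reductions (e.g. 0.40 means '40% faster' when
    the GPU count doubles). Efficiency is speedup over ideal speedup."""
    t = 1.0  # normalized wall-clock time on base_gpus
    gpus = base_gpus
    rows = [(gpus, t, 1.0, 1.0)]  # (gpus, time, speedup, efficiency)
    for frac in reductions:
        t *= (1.0 - frac)
        gpus *= 2
        speedup = 1.0 / t
        efficiency = speedup / (gpus / base_gpus)
        rows.append((gpus, t, speedup, efficiency))
    return rows

rows = scaling_from_reductions(2, [0.40, 0.10])
for gpus, t, s, e in rows:
    print(f"{gpus} GPUs: time = {t:.2f}x, speedup = {s:.2f}, efficiency = {e:.0%}")
```

The 8-GPU run still wins on wall-clock time, but the efficiency drop makes the communication overhead mentioned above visible in the numbers.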
In terms of total wall-clock time, since the simulation cost is dominated by the 97 million SPH elements, both the sphere and polyhedra cases took a similar time to run, as summarized in the table below.
In short, Rocky’s DEM-SPH solver combined with OCI compute shapes is able to deliver high-fidelity simulation data for real-scale industrial applications in a feasible time.
References
[1] Napier-Munn, T. “Is progress in energy-efficient comminution doomed?” Minerals Engineering 73 (2015): 1–6.
[2] Morrell, S. “Modelling the influence on power draw of the slurry phase in Autogenous (AG), Semi-autogenous (SAG) and ball mills” (2016).
[3] Weerasekara, N.S., and Town, S. “Optimising pulp lifter design using SPH simulation: Tritton story.” In: Proc. SAG 2019, Vancouver, Canada, 22–26 September 2019.
[4] Weerasekara, N.S., et al. “SPH modelling based SAG mill pulp lifter design improvement at Tritton mine and its performance on grinding circuit.” IMPC Asia-Pacific 2022, Melbourne, Australia (2022).