Solid Particle Implementation for E-L Solver #1301

jaguilar37 wants to merge 9 commits into MFlowCode:master from …
Conversation
Force-pushed from a2bf4ac to e527f8f.
Claude Code Review

Incremental review from: e527f8f

Previously-flagged issues addressed in this update.

New findings since last Claude review:

[HIGH] Declaration after an executable statement: Fortran syntax error in MPI builds. The fix initializes `bubs_glb` before the local declarations:

```fortran
bubs_glb = 0        ! ← executable statement
integer :: ierr     ! ← local declaration after executable: Fortran syntax error
integer :: i, j, k, ...
```

Per the Fortran standard, all declarations must precede the first executable statement in a scoping unit. Move the declarations above the first executable statement.

Remaining from prior review (not re-examined here).
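The corrected ordering can be sketched as follows (the wrapper subroutine is illustrative; `bubs_glb` is assumed to be a module-level variable, and only the names come from the excerpt above):

```fortran
! Sketch: declarations first, executable statements after.
subroutine s_ordering_sketch()   ! hypothetical wrapper, not the PR's routine
    integer :: ierr              ! all local declarations come first
    integer :: i, j, k
    bubs_glb = 0                 ! executable statements only after declarations
end subroutine s_ordering_sketch
```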
Claude Code Review

Incremental review from: 069237f

Previously-flagged issues addressed or carried forward: the declaration-after-executable bug.

Findings since last Claude review:

[HIGH] Declaration-after-executable still present; Fortran standard violation. This was flagged in the previous review and remains unfixed. In an MFC_SIMULATION + MFC_MPI build the preprocessor expands to:

```fortran
bubs_glb = 0     ! ← executable statement
! ...
integer :: ierr  ! ← local declaration AFTER executable: illegal per Fortran standard
```

Fix: move the declaration above the first executable statement.

[MEDIUM] In …

[MEDIUM] `vL_field` and `vR_field` are declared `intent(inout)` but only read:

```fortran
real(wp), ..., intent(inout) :: vL_field
real(wp), ..., intent(inout) :: vR_field
```

The implementation only reads these arrays (computes values from them without writing); declare them `intent(in)`.

[LOW] Bare integer literal:

```fortran
aTemp(l) = 2._wp*f_bTemp/(fmass_g + fmass_v) - 3*fV*fVel(l)/fR
```

The bare integer `3` should be a kind-consistent real literal (`3._wp`).

[LOW] Typo in NVTX range label:

```fortran
call nvtxStartRange("BETA-COMM-SENDRECV-NO-RMDA")
```

Should be `RDMA`, not `RMDA`.

Remaining from prior reviews (not re-examined here).
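The intent and literal fixes can be sketched together (array shapes are illustrative; only the variable names come from the review excerpts above):

```fortran
! Read-only dummies: intent(in) instead of intent(inout)
! (shapes illustrative; names from the review excerpt).
real(wp), dimension(:, :, :), intent(in) :: vL_field
real(wp), dimension(:, :, :), intent(in) :: vR_field

! Kind-consistent literal: 3._wp rather than the bare integer 3
aTemp(l) = 2._wp*f_bTemp/(fmass_g + fmass_v) - 3._wp*fV*fVel(l)/fR
```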
Claude Code Review

Incremental review from: 3c49f42

Previously-flagged issues addressed in this update.

New findings since last Claude review:

[HIGH] Module-level `max_dt` array shadowed by a local scalar. This commit adds:

```fortran
! Module level (line 73)
real(wp), allocatable, dimension(:, :, :) :: max_dt
```

allocated conditionally in the initializer:

```fortran
if (cfl_dt) then
    @:ALLOCATE(max_dt(0:m, 0:n, 0:p)) ! line 474
end if
```

but … Additionally, inside … a local declaration shadows it:

```fortran
real(wp) :: max_dt ! local scalar — shadows the module-level 3-D array
```

The GPU parallel loop at lines 766-769 passes this local scalar to … Fix: add …

Remaining from prior reviews (not re-examined here).
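The shadowing problem generalizes to a minimal, self-contained sketch (module and routine names are hypothetical; only `max_dt` comes from the review):

```fortran
module m_shadow_sketch
    implicit none
    real, allocatable, dimension(:, :, :) :: max_dt  ! module-level array
contains
    subroutine s_broken()
        real :: max_dt    ! BUG: local scalar shadows the module array
        max_dt = 0.       ! writes the scalar; the array is never touched
    end subroutine s_broken

    subroutine s_fixed()
        ! no local declaration, so max_dt resolves to the module array
        if (allocated(max_dt)) max_dt = 0.
    end subroutine s_fixed
end module m_shadow_sketch
```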
Claude Code Review

Incremental review from: 754cc32

Previously-flagged findings from last review: no new issues were found in the prior increment. All previously flagged issues (declaration-after-executable, …) remain.

New findings since last Claude review:

[MEDIUM] Optional …

[LOW] No-op assignments in the `case default` block:

```fortran
case default
    do l = 1, num_dims
        fVel(l) = fVel(l)
        fPos(l) = fPos(l)
    end do
```

These are no-ops. The block can be removed entirely or replaced with a comment.

[LOW] Unconditional MPI broadcast of hardcoded-IC variables in pre_process.

Remaining from prior reviews (not re-examined here).
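One way to resolve the no-op `case default` is to keep the branch but document that nothing needs to happen (a sketch; the comment wording is illustrative):

```fortran
case default
    ! no special treatment: fVel and fPos are intentionally left unchanged
```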
Force-pushed from d973f42 to eb03599.
Claude Code Review

Incremental review from: 0756c08

Previously-flagged issues addressed in this update: the optional-argument item.

New findings since last Claude review:

[HIGH] New module-level allocatables are never deallocated. The finalizer currently only handles the pre-existing IB buffers:

```fortran
subroutine s_finalize_mpi_proxy_module()
#ifdef MFC_MPI
    if (ib) then
        @:DEALLOCATE(ib_buff_send, ib_buff_recv)
    end if
#endif
end subroutine s_finalize_mpi_proxy_module
```

All nine new module-level allocatable variables leak their host and GPU device memory. Add matching `@:DEALLOCATE` calls for them.

[MEDIUM] …
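The finalizer extension flagged [HIGH] above might look like this (the particle buffer names and the guard flag are illustrative placeholders, not the PR's actual variables):

```fortran
subroutine s_finalize_mpi_proxy_module()
#ifdef MFC_MPI
    if (ib) then
        @:DEALLOCATE(ib_buff_send, ib_buff_recv)
    end if
    ! hypothetical names: release the new particle transfer buffers
    ! (host and device) alongside the pre-existing IB buffers
    if (particles_active) then
        @:DEALLOCATE(particle_buff_send, particle_buff_recv)
    end if
#endif
end subroutine s_finalize_mpi_proxy_module
```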
[LOW] Missing `intent` attributes. In:

```fortran
impure subroutine s_add_particles_to_transfer_list(nBub, pos, posPrev, include_ghost)
    real(wp), dimension(:, :) :: pos, posPrev ! no intent
    integer :: bubID, nbub ! nbub == nBub (case-insensitive); no intent
```

Similarly, both …

[LOW] No-op `case default` assignments; previously flagged, still present:

```fortran
case default
    do l = 1, num_dims
        fVel(l) = fVel(l)
        fPos(l) = fPos(l)
    end do
```

These are no-ops; remove the block or replace it with a comment.
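With explicit intents the signature might read as follows (a sketch; the body is omitted, and the clashing local `nbub` is dropped because Fortran identifiers are case-insensitive, so it would conflict with the dummy argument `nBub`):

```fortran
impure subroutine s_add_particles_to_transfer_list(nBub, pos, posPrev, include_ghost)
    integer, intent(in) :: nBub                            ! read-only count
    real(wp), dimension(:, :), intent(in) :: pos, posPrev  ! read-only positions
    logical, intent(in), optional :: include_ghost
    integer :: bubID   ! genuinely local work variable
end subroutine s_add_particles_to_transfer_list
```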
Claude Code Review

Incremental review from: 8bd0d15

New findings since last Claude review:
Force-pushed from 82cd581 to dad92d8.
Force-pushed from dad92d8 to 61a4b80.
…, not fix/time-stepping-order)
…on, and comment typo
1. Fixes volume fraction and source term smearing. Previously these two were combined in a convoluted way: the volume fraction field needs to be computed/communicated at the start of the timestep, while source term contributions need to be computed/communicated after computing the fluid force on the particle. These are now split cleanly.
2. Buffer-cell filling now uses the sum-and-replace algorithm (implemented by Ben W.). It was further modified to take an array of indexes of the variables in q_particles/q_beta that the algorithm should update, so that the volume fraction update could be split from the source term contribution update.
3. The collision force parameters are now defined in the inputs under the particle physical properties: particle_pp%ksp_col, particle_pp%nu_col, particle_pp%E_col, and particle_pp%cor_col. They must be set if collisions are turned on. Collision forces are no longer communicated; instead, each local particle is looped through and checked for overlap with its neighbors, and only the collision force on that local particle is added to it. This avoids communicating forces.
4. The Sutherland viscosity for air is hardcoded into the force subroutine if "viscous" is turned off. This is a temporary band-aid. Use lag_params%mu_ref = 1.716E-5.
5. Implements a quasi-steady drag fluctuation force (logical input: lag_params%qs_fluct_force). The subroutine is in the kernels file and uses the random number generator in src/common/m_model.fpp, which was modified to make the random number generator subroutine public.
6. The documentation for the qs_drag_model was corrected. New documentation was added for the collision force inputs, the quasi-steady fluctuation force, and the fluid (air) reference viscosity.
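The hardcoded Sutherland law mentioned in item 4 is a standard formula; a minimal sketch with the usual textbook constants for air (the function name is hypothetical, and only mu_ref = 1.716E-5 appears in the description above):

```fortran
! Sutherland's law for air (textbook constants; illustrative sketch).
! mu_ref = 1.716e-5 Pa.s at T_ref = 273.15 K, with S = 110.4 K.
elemental function f_sutherland_mu(T) result(mu)
    real, intent(in) :: T       ! temperature [K]
    real :: mu                  ! dynamic viscosity [Pa.s]
    real, parameter :: mu_ref = 1.716e-5, T_ref = 273.15, S = 110.4
    mu = mu_ref*(T/T_ref)**1.5*(T_ref + S)/(T + S)
end function f_sutherland_mu
```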
I removed the collision-force sending arrays in mpi_proxy and added a check in the deallocation for a processor count greater than 0. This fixes a bug when running MFC with MPI on only 1 rank.
Description
This update expands the pre-existing, in-development E-L solver for bubble dynamics to include solid particle dynamics. This is in support of the PSAAP center, which requires the capability to model solid particle dynamics in MFC.
Type of change
Testing
The solver has been tested by running various 2D/3D problems involving fluid-particle interactions, such as spherical blasts surrounded by a layer of particles, shock-particle curtains, collision tests, etc.
The inputs to the EL solid-particle solver have all been toggled on and off to verify that they work independently of each other, and together.
The code has been tested for CPU and GPU usage. The GPU usage has been tested on Tuolumne.
Two new files have been added:
m_particles_EL.fpp
m_particles_EL_kernels.fpp
File 1 contains the main particle dynamics subroutines: it initializes the particles, computes fluid forces and coupling terms, computes collision forces, enforces boundary conditions, and writes the data for post-processing.
File 2 contains the Gaussian kernel projection code and the subroutine that computes the force on a particle due to the fluid: the quasi-steady drag force, pressure-gradient force, added-mass force, Stokes drag, and gravitational force. Models for the quasi-steady drag are implemented here.
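The Gaussian kernel projection in file 2 can be illustrated with a generic truncated-Gaussian weight (this is a standard smearing kernel, not the PR's actual implementation; the function name and width parameter are illustrative):

```fortran
! Generic 3-D Gaussian smearing weight (illustrative sketch).
! Normalized so the weight integrates to 1 over all space.
pure function f_gauss_weight(dx2, sigma) result(w)
    real, intent(in) :: dx2    ! squared cell-center-to-particle distance
    real, intent(in) :: sigma  ! kernel width
    real :: w
    real, parameter :: pi = 3.14159265358979
    w = exp(-dx2/(2.*sigma**2))/((2.*pi)**1.5*sigma**3)
end function f_gauss_weight
```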
Checklist
See the developer guide for full coding standards.
GPU changes (expand if you modified src/simulation/)