Fix WP_MOK hardcoded to 8 bytes, use storage_size for portability (#1187)
sbryngelson wants to merge 9 commits into MFlowCode:master
Conversation
Nitpicks 🔍
Pull request overview
This PR fixes MPI file offset calculations that were hardcoded for 8-byte wp reals, making parallel I/O portable across single- and double-precision builds.
Changes:
- Replaced `WP_MOK = int(8._wp, MPI_OFFSET_KIND)` with a `storage_size`-based byte-size computation.
- Applied the fix across multiple MPI I/O read/setup paths to keep offsets consistent.
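The byte-offset arithmetic behind the fix can be sketched outside Fortran. The following is a hypothetical Python model of the `disp` formula quoted later in the review (the function name and grid sizes are illustrative, not MFC code; `elem_bytes` plays the role of `WP_MOK`):

```python
# Hypothetical model of the shared-file MPI-IO offset formula
#   disp = m_MOK*max(MOK, n_MOK)*max(MOK, p_MOK)*WP_MOK*(var_MOK - 1)
# where elem_bytes stands in for WP_MOK.

def disp(m_cells, n_cells, p_cells, elem_bytes, var_index):
    """Byte offset of the 1-based variable block var_index in a file laid
    out as contiguous blocks of m*n*p elements each."""
    return m_cells * max(1, n_cells) * max(1, p_cells) * elem_bytes * (var_index - 1)

# Double precision (8-byte reals): variable 3 of a 100x100 2D grid.
print(disp(100, 100, 1, 8, 3))  # 160000
# Single precision (4-byte reals): the correct offset is half that.
print(disp(100, 100, 1, 4, 3))  # 80000
# A hardcoded 8 therefore points single-precision reads 80000 bytes too far.
```

This is why the bug is exactly a factor-of-2 error in single-precision builds: every term in the product is unchanged except the element size.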
Codecov Report ✅ All modified and coverable lines are covered by tests.
Additional details and impacted files
@@ Coverage Diff @@
## master #1187 +/- ##
=======================================
Coverage 44.04% 44.04%
=======================================
Files 70 70
Lines 20499 20499
Branches 1993 1993
=======================================
Hits 9028 9028
Misses 10330 10330
Partials 1141 1141
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/simulation/m_data_output.fpp (2)
962-970: ⚠️ Potential issue | 🟠 Major
Same `wp` vs. `stp` mismatch as in the simulation read path — `WP_MOK` should use the `stp` size for the shared-file write branch.
The `disp` byte-offset formula at lines 977/990/1003/1017, `disp = m_MOK*max(MOK, n_MOK)*max(MOK, p_MOK)*WP_MOK*(var_MOK - 1)`, uses `WP_MOK = storage_size(0._wp)/8`. The actual write (`MPI_FILE_WRITE_ALL`) uses `mpi_io_p` (`stp`), meaning elements are `real(stp)` on disk. In a mixed-precision build the offset computed with the `wp` size diverges from the actual `stp` element size, misaligning every variable's data block.
Note: `MPI_FILE_SET_VIEW` here uses `mpi_p` (`wp`) as the etype while the write uses `mpi_io_p` (`stp`) — this inconsistency between etype and data type is a separate pre-existing issue, but the `disp` value itself must reflect the on-disk `stp` element size.
🔧 Proposed fix
- WP_MOK = int(storage_size(0._wp)/8, MPI_OFFSET_KIND)
+ WP_MOK = int(storage_size(0._stp)/8, MPI_OFFSET_KIND)
Based on learnings: "Ensure `mpi_p` matches `wp` and `mpi_io_p` matches `stp`" and "In mixed-precision mode: do not mix `stp` (storage) and `wp` (working) precision without intentional conversion."
1085-1105: ⚠️ Potential issue | 🟠 Major
`s_write_parallel_ib_data`: `WP_MOK` (real byte size) is incorrectly used to compute the displacement for integer IB marker data, and the `sys_size`-based base offset is spurious for a dedicated `ib.dat` file. Two distinct issues:
Wrong element size. `WP_MOK = storage_size(0._wp)/8` gives the byte count of a `real(wp)` value, but the IB data is written with `MPI_INTEGER`. In a double-precision build (`WP_MOK = 8`) this doubles the displacement relative to the actual 4-byte integer elements, placing subsequent IB snapshots at wrong offsets in the file. (In single precision `WP_MOK = 4`, matching the `MPI_INTEGER` size, so it accidentally works.)
Spurious `sys_size` base offset. `var_MOK = sys_size + 1` was designed for a layout where IB data follows `sys_size` real-variable blocks. Since `s_write_parallel_ib_data` opens its own dedicated `ib.dat` file, there are no preceding real-variable blocks to skip; the `sys_size` term introduces an incorrectly large initial offset for all non-zero time steps.
The corrected formula should be:
disp = m_MOK*max(MOK, n_MOK)*max(MOK, p_MOK)*IB_MOK*int(time_step/t_step_save, MPI_OFFSET_KIND)
where `IB_MOK` holds the byte size of `MPI_INTEGER` (e.g. `int(4, MPI_OFFSET_KIND)` for standard builds, or `int(1, MPI_OFFSET_KIND)` under `MFC_MIXED_PRECISION`).
These are pre-existing issues not introduced by this PR, but the PR surfaces them by making the element-size derivation explicit.
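The element-size half of this finding can be sketched as a hypothetical Python model (function name and grid sizes are illustrative only; `ib_mok` mirrors the review's suggested `IB_MOK`):

```python
# Hypothetical sketch of the IB snapshot offset in a dedicated ib.dat file:
# one m*n*p block of integers per saved time step. ib_mok is the byte size
# of the elements actually on disk (4 for MPI_INTEGER markers).

def ib_snapshot_offset(m, n, p, ib_mok, time_step, t_step_save):
    """Byte offset of the IB marker snapshot for a given time step."""
    return m * max(1, n) * max(1, p) * ib_mok * (time_step // t_step_save)

# Correct: 4-byte integers, the second snapshot starts right after the first.
print(ib_snapshot_offset(50, 50, 1, 4, 10, 10))  # 10000
# Buggy: using the 8-byte real size doubles the stride, so the second
# snapshot lands past the end of the data the first one wrote.
print(ib_snapshot_offset(50, 50, 1, 8, 10, 10))  # 20000
```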
src/simulation/m_start_up.fpp (1)
655-670: ⚠️ Potential issue | 🟠 Major
Fix WP_MOK to use storage precision size instead of working precision.
In mixed-precision builds (`wp` = real64, `stp` = half), `WP_MOK` should be calculated from the `stp` size, not the `wp` size. The variable `mpi_io_p` correctly corresponds to `stp` (via `MPI_BYTE` when `stp == half_precision`), so file elements are written as 2 bytes each. However, `WP_MOK = int(storage_size(0._wp)/8, ...)` yields 8 bytes, causing the offset formula to calculate byte displacements at 4× the actual element size. This corrupts all MPI I/O reads after the first variable.
This issue exists across multiple files:
- src/simulation/m_start_up.fpp (lines 578, 655)
- src/pre_process/m_start_up.fpp (line 663)
- src/pre_process/m_data_output.fpp (lines 552, 618)
- src/simulation/m_data_output.fpp (lines 898, 966, 1090)
- src/post_process/m_data_input.f90 (lines 136, 180, 520)
Change `storage_size(0._wp)` to `storage_size(0._stp)` in all assignments.
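The 4× figure follows directly from the two element sizes involved. A hypothetical Python check (the bit widths are the standard IEEE sizes that `storage_size` would report for these kinds; the dictionary is illustrative, not MFC code):

```python
# storage_size analogues, in bits, for the kinds discussed above.
BITS = {"half": 16, "real32": 32, "real64": 64}

def elem_bytes(kind):
    """Byte size of one element of the given kind (storage_size(...)/8)."""
    return BITS[kind] // 8

# Mixed precision: compute in wp = real64, store to disk in stp = half.
wrong = elem_bytes("real64")  # what storage_size(0._wp)/8 yields: 8 bytes
right = elem_bytes("half")    # what storage_size(0._stp)/8 yields: 2 bytes
print(wrong, right, wrong // right)  # 8 2 4 -> displacements come out 4x too large
```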
🧹 Nitpick comments (2)
src/simulation/m_start_up.fpp (1)
574-581: `WP_MOK` (and other `MOK` variants) are computed but never consumed in the `file_per_process` branch.
In this branch the reads are plain sequential `MPI_FILE_READ` calls with no `MPI_FILE_SET_VIEW`, so `WP_MOK`, `MOK`, `str_MOK`, and `NVARS_MOK` (lines 578–581) are dead. Likewise, the `var_MOK` assignments inside each loop body are also never used. These are copy-paste artefacts from the shared-file path.
🧹 Suggested cleanup
- WP_MOK = int(storage_size(0._wp)/8, MPI_OFFSET_KIND)
- MOK = int(1._wp, MPI_OFFSET_KIND)
- str_MOK = int(name_len, MPI_OFFSET_KIND)
- NVARS_MOK = int(sys_size, MPI_OFFSET_KIND)
And remove the `var_MOK = int(i, MPI_OFFSET_KIND)` lines inside the sequential-read loops in this branch.
src/simulation/m_data_output.fpp (1)
894-901: `WP_MOK` (and `var_MOK`) are dead assignments in the `file_per_process` write branch.
No `disp` is computed and no `MPI_FILE_SET_VIEW` is called in this branch; all writes go through sequential `MPI_FILE_WRITE_ALL`. `WP_MOK`, `MOK`, `str_MOK`, `NVARS_MOK` (lines 898–901) and the `var_MOK` assignments inside every loop are unused.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
src/post_process/m_data_input.f90, src/pre_process/m_data_output.fpp, src/pre_process/m_start_up.fpp, src/simulation/m_data_output.fpp, src/simulation/m_start_up.fpp
🚧 Files skipped from review as they are similar to previous changes (2)
- src/pre_process/m_data_output.fpp
- src/post_process/m_data_input.f90
⚠️ Outside diff range comments (1)
src/simulation/m_start_up.fpp (1)
634-639: ⚠️ Potential issue | 🔴 Critical
Use `stp` instead of `wp` for MPI I/O byte size calculations.
Line 638 should derive `WP_MOK` from storage precision (`stp`), not working precision (`wp`). In mixed-precision builds where `stp ≠ wp`, using `wp` produces incorrect displacements for `mpi_io_p`-based MPI file operations, corrupting restart file I/O. The same issue exists across all data I/O modules.
🔧 Corrected line
- WP_MOK = int(storage_size(0._wp)/8, MPI_OFFSET_KIND)
+ WP_MOK = int(storage_size(0._stp)/8, MPI_OFFSET_KIND)
⚠️ Outside diff range comments (1)
src/simulation/m_data_output.fpp (1)
894-899: ⚠️ Potential issue | 🟠 Major
Use `storage_size(0._stp)/8` for `WP_MOK` to match `MPI_IO_DATA%var(i)%sf` precision.
The data written at these lines is `real(stp)` (storage precision), but `WP_MOK` is computed from `wp` (working precision). In mixed-precision mode, this causes incorrect file displacements. Per the codebase convention, `mpi_io_p` corresponds to `stp`, so the multiplier must also derive from `stp`.
🔧 Suggested fix
- WP_MOK = int(storage_size(0._wp)/8, MPI_OFFSET_KIND)
+ WP_MOK = int(storage_size(0._stp)/8, MPI_OFFSET_KIND)
Also applies to: 962–967
🧹 Nitpick comments (1)
src/simulation/m_start_up.fpp (1)
578-578: `stp`-based `WP_MOK` is correct, but the value goes unused in this branch.
Using `storage_size(0._stp)/8` is the right choice: all data reads in both branches use `mpi_io_p`, which corresponds to `stp`. Good fix.
However, within the `file_per_process` block, `WP_MOK` (along with `m_MOK`, `n_MOK`, `p_MOK`, `var_MOK`) is assigned but never consumed — there are no `MPI_FILE_SET_VIEW` calls and no `disp` displacement formula in this path (lines 584–616 are all plain sequential `MPI_FILE_READ`). The computation is dead code here. If offset-based seeks are not needed for the per-process file layout, these assignments could be removed to reduce confusion; if they are intended for future use, a comment would help.
This is correct; the change should be added.
…bility MPI file offsets assume 8-byte reals. Single-precision builds would read from wrong offsets. Use storage_size(0._wp)/8 instead. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The same int(8._wp, MPI_OFFSET_KIND) pattern that was fixed in post_process/m_data_input.f90 was present in 7 additional locations across simulation/m_data_output.fpp, simulation/m_start_up.fpp, pre_process/m_data_output.fpp, and pre_process/m_start_up.fpp. All hardcoded 8-byte strides are replaced with storage_size(0._wp)/8 so MPI file offsets are correct in single-precision builds. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
IB markers are MPI_INTEGER (4 bytes), but WP_MOK was computed as storage_size(0._wp)/8 = 8 bytes in double precision, causing a factor-of-2 displacement error when writing/reading ib.dat via MPI-IO. Fix by using storage_size(0)/8 (default integer byte size) at the two IB-specific MPI-IO displacement sites: - src/simulation/m_data_output.fpp (s_write_parallel_ib_data) - src/post_process/m_data_input.f90 (s_read_ib_data_files) Also fix the Lustre restart read path in src/simulation/m_start_up.fpp which had WP_MOK hardcoded to 4 instead of storage_size(0._wp)/8. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
MPI-IO displacement (WP_MOK) must match the storage precision of the data being read/written. Main field variables are written with mpi_io_p (which tracks stp), not mpi_p (which tracks wp). In mixed-precision mode stp = half (2 bytes) while wp = double (8 bytes), so using storage_size(0._wp) gives a 4× wrong displacement. Change all main-data WP_MOK assignments from storage_size(0._wp) to storage_size(0._stp) across all three I/O paths: - src/simulation/m_data_output.fpp (write, two call sites) - src/simulation/m_start_up.fpp (restart read, two call sites) - src/post_process/m_data_input.f90 (post-process read, two call sites) IB marker I/O (MPI_INTEGER, 4 bytes) was already fixed to use storage_size(0)/8 in the previous commit and is not changed here. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pre-process MPI I/O strides should use storage precision (stp) not working precision (wp) since field arrays use stp. Also removes the dead file_per_process sequential-read path that contained known copy-paste bugs (n_MOK/p_MOK both using m_glb_read). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This code path is still used in practice on HPC clusters even though it is not exercised by the test suite. Reverts the removal while keeping the wp -> stp precision fix in both branches. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The IB read/write displacement formula uses WP_MOK to skip past floating-point field data. Using storage_size(0) gave the default integer size (4 bytes), not the field storage precision size, causing incorrect byte offsets in double-precision builds. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The linter will intentionally fail until PRs MFlowCode#1187 (WP_MOK), MFlowCode#1241 (R3bar duplicate), and this PR's dq_prim_dy fix (now in MFlowCode#1187) merge first. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
In s_compute_viscous_stress_cylindrical_boundary call (p==0 branch), the 3rd argument was incorrectly passing dq_prim_dy_vf (y-derivative) instead of dq_prim_dz_vf (z-derivative) for the grad_z_vf parameter. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Claude Code Review
Head SHA: 43bbd2b
Summary
Findings
1. Pre-existing bug visible in diff context:
m_MOK = int(m_glb_read + 1, MPI_OFFSET_KIND)
n_MOK = int(m_glb_read + 1, MPI_OFFSET_KIND) ! should be n_glb_read?
p_MOK = int(m_glb_read + 1, MPI_OFFSET_KIND) ! should be p_glb_read?
Both `n_MOK` and `p_MOK` copy `m_glb_read`.
2. Variable name
Overall
The WP_MOK fix and the
User description
Summary
Severity: HIGH — MPI file I/O reads from wrong offsets in single-precision builds.
`WP_MOK` (the MPI offset for the working-precision real size) is hardcoded to `int(8._wp, MPI_OFFSET_KIND)`, assuming 8-byte (double-precision) reals. In single-precision builds where `wp` corresponds to 4 bytes, this causes all MPI file read offsets to be wrong by a factor of 2.
Before: `WP_MOK = int(8._wp, MPI_OFFSET_KIND)`
After: `WP_MOK = int(storage_size(0._wp)/8, MPI_OFFSET_KIND)`
All 10 occurrences across 5 files are fixed. One site in `src/simulation/m_start_up.fpp` had `int(4._wp, ...)` instead of `int(8._wp, ...)` — wrong in both single and double precision.
Additionally, 3 sites in pre_process used `storage_size(0._wp)` instead of `storage_size(0._stp)` — since field arrays use storage precision (`stp`), the I/O byte stride must match `stp`, not `wp`.
Why this went undetected
The codebase is predominantly used in double precision, where `wp` = `stp` = 8 bytes, so the hardcoded value happens to be correct.
Also fixes: `dq_prim_dy_vf` → `dq_prim_dz_vf` bug in 2D cylindrical viscous boundary
In `src/simulation/m_rhs.fpp`, the call to `s_compute_viscous_stress_cylindrical_boundary` in the `p == 0` (2D) branch was passing `dq_prim_dy_vf` as the z-gradient argument (`grad_z_vf`) instead of `dq_prim_dz_vf`. The `p > 0` (3D) branch was correct. This was a duplicate-line copy-paste error that would cause incorrect viscous stress computation at cylindrical boundaries in 2D simulations.
Files changed
- src/post_process/m_data_input.f90 — 3 WP_MOK sites
- src/pre_process/m_data_output.fpp — 2 WP_MOK sites
- src/pre_process/m_start_up.fpp — 1 WP_MOK site
- src/simulation/m_data_output.fpp — 3 WP_MOK sites
- src/simulation/m_start_up.fpp — 1 WP_MOK site
- src/simulation/m_rhs.fpp — 1 dq_prim_dy_vf → dq_prim_dz_vf fix
Test plan
🤖 Generated with Claude Code
Fixes #1207