fix: replace set_track_meta with F.interpolate in RandSimulateLowResolution (#8409) #8837

Open
chhayankjain wants to merge 2 commits into Project-MONAI:dev from chhayankjain:8242-add-use-softmax-to-dice-focal-loss

Conversation

@chhayankjain

Fixes #8409

Description

RandSimulateLowResolution internally performs a downsample → upsample cycle using two
Resize transforms. To prevent these from being recorded in the invertible transform
stack, the previous implementation temporarily toggled the global set_track_meta(False)
flag and restored it afterward.

This is not thread-safe: in multi-threaded data loading (e.g. ThreadDataLoader), another
thread calling get_track_meta() between the toggle and the restore would silently receive
the wrong value, causing incorrect metadata tracking behaviour.

Fix: replace the Resize transforms with direct torch.nn.functional.interpolate calls on
a plain tensor obtained via convert_to_tensor(img, track_meta=False). This avoids any
global state mutation entirely. Output dtype (float32) and metadata-copy behaviour are
preserved from the original implementation. set_track_meta is also removed from the import
statement since it is no longer used anywhere in the file.
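
A minimal sketch of the described approach, assuming a channel-first input; the function name and defaults below are illustrative, not the exact code in the PR:

```python
import torch
import torch.nn.functional as F

from monai.data.meta_obj import get_track_meta
from monai.data.meta_tensor import MetaTensor
from monai.utils import convert_to_tensor


def simulate_low_res(img, zoom_factor=0.5, downsample_mode="nearest",
                     upsample_mode="trilinear", align_corners=False):
    # Work on a plain tensor so the global metadata flag is never written.
    img_t = convert_to_tensor(img, track_meta=False).to(torch.float32)
    input_shape = tuple(img_t.shape[1:])  # channel-first: strip the channel dim
    target_shape = tuple(max(1, int(round(s * zoom_factor))) for s in input_shape)

    # align_corners is only accepted by the linear-family modes.
    linear_modes = ("linear", "bilinear", "bicubic", "trilinear")
    down_ac = align_corners if downsample_mode in linear_modes else None
    up_ac = align_corners if upsample_mode in linear_modes else None

    batched = img_t.unsqueeze(0)  # F.interpolate expects a leading batch dim
    down = F.interpolate(batched, size=target_shape, mode=downsample_mode, align_corners=down_ac)
    up = F.interpolate(down, size=input_shape, mode=upsample_mode, align_corners=up_ac).squeeze(0)

    # Rebuild tracking state by reading, never mutating, the global flag.
    out = convert_to_tensor(up, track_meta=get_track_meta())
    if isinstance(out, MetaTensor) and isinstance(img, MetaTensor):
        out.copy_meta_from(img)
    return out
```

The key design point is that the transform only reads get_track_meta() once at the end, so a concurrent reader can never observe a temporarily flipped flag.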

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 29, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

RandSimulateLowResolution no longer uses MONAI Resize transforms; it converts the input with convert_to_tensor(..., track_meta=False), forces float32, adds a batch dimension, and performs down- and up-sampling with torch.nn.functional.interpolate, passing align_corners only for linear-family modes. The transform avoids mutating the global metadata-tracking flag by converting the output via convert_to_tensor(..., track_meta=get_track_meta()) and copies metadata from the original input only when the result is a MetaTensor. Two tests were added to ensure get_track_meta() is unchanged after a single call and remains stable under concurrent calls.
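
The single-call check can be sketched as follows (the actual test code may differ; prob=1.0 just forces the transform to fire):

```python
import numpy as np

from monai.data.meta_obj import get_track_meta
from monai.transforms import RandSimulateLowResolution


def test_track_meta_unchanged_after_call():
    img = np.random.rand(1, 8, 8, 8).astype(np.float32)  # channel-first 3D volume
    before = get_track_meta()
    RandSimulateLowResolution(prob=1.0)(img)
    assert get_track_meta() == before  # the global flag must be untouched
```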

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 20.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4)

  • Title check — ✅ Passed: the title clearly summarizes the main change: replacing the set_track_meta toggle with F.interpolate to fix a thread-safety issue.
  • Description check — ✅ Passed: the description covers the problem, solution, and change type, following the template structure with the appropriate sections checked.
  • Linked Issues check — ✅ Passed: the changes fully address #8409 by eliminating global state mutation in RandSimulateLowResolution and using thread-safe F.interpolate instead.
  • Out of Scope Changes check — ✅ Passed: all changes directly relate to fixing the thread-safety issue: the interpolate replacement, metadata handling, import cleanup, and regression tests.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

chhayankjain force-pushed the 8242-add-use-softmax-to-dice-focal-loss branch from 6df6570 to b15b2a2 on April 30, 2026 at 02:25
chhayankjain marked this pull request as ready for review on April 30, 2026 at 02:27
chhayankjain force-pushed the 8242-add-use-softmax-to-dice-focal-loss branch from b15b2a2 to b991fcb on April 30, 2026 at 02:52
Contributor

coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/transforms/spatial/array.py`:
- Around lines 3577-3585: the downsampling call to torch.nn.functional.interpolate does not pass align_corners, even though align_corners should apply to both downsample_mode and upsample_mode. Update the interpolation call that produces img_downsampled from img_float/target_shape to include align_corners whenever downsample_mode is one of the valid modes, using the same _align_corners_modes check used for upsample_align_corners: compute downsample_align_corners = self.align_corners if downsample_mode in _align_corners_modes else None and pass it as the align_corners argument, so the behaviour matches the docstring (see the sketch below).
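
A minimal sketch of that suggestion, with img_float and target_shape assumed to be the batched float tensor and the downsampled shape from the surrounding code:

```python
import torch
import torch.nn.functional as F

_align_corners_modes = ("linear", "bilinear", "bicubic", "trilinear")


def downsample(img_float: torch.Tensor, target_shape, downsample_mode: str, align_corners: bool):
    # Mirror the upsample path: pass align_corners only for linear-family modes.
    downsample_align_corners = align_corners if downsample_mode in _align_corners_modes else None
    return F.interpolate(img_float, size=tuple(target_shape), mode=downsample_mode,
                         align_corners=downsample_align_corners)
```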

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 836eec9b-957b-433c-80e6-909cce2aa688

📥 Commits

Reviewing files that changed from the base of the PR and between b15b2a2 and 26f27c3.

📒 Files selected for processing (2)
  • monai/transforms/spatial/array.py
  • tests/transforms/test_rand_simulate_low_resolution.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/transforms/test_rand_simulate_low_resolution.py

Comment thread monai/transforms/spatial/array.py
chhayankjain force-pushed the 8242-add-use-softmax-to-dice-focal-loss branch from 26f27c3 to 067ed70 on April 30, 2026 at 03:07
Contributor

coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/transforms/spatial/array.py`:
- Around lines 3592-3595: the code always wraps the upsampled array in a MetaTensor and copies metadata (creating img_upsampled = MetaTensor(...) and calling img_upsampled.copy_meta_from(img)), which ignores get_track_meta() and breaks callers that disabled meta tracking. Change the rebuild logic to construct and copy into a MetaTensor only when get_track_meta() is True, and otherwise return the plain tensor (img_upsampled_t) without metadata; add a small regression test that disables meta tracking and asserts the returned object is not a MetaTensor (see the first sketch below).

In `@tests/transforms/test_rand_simulate_low_resolution.py`:
- Around lines 95-117: the test is nondeterministic because each worker records its baseline at a different time. Make it deterministic by synchronizing thread start with a threading.Barrier: create barrier = threading.Barrier(len(threads) + 1), have run_transform call barrier.wait() first, then capture before = get_track_meta(), run tfm(img), and check get_track_meta(); have the main test call barrier.wait() after starting all threads to release them simultaneously. This ensures all workers record the same global baseline and reliably reproduces the race (see the second sketch below).
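
A minimal sketch of the rebuild logic from the first comment, with img_upsampled_t and img as assumed names for the plain result tensor and the original input:

```python
import torch

from monai.data.meta_obj import get_track_meta
from monai.data.meta_tensor import MetaTensor


def rebuild_output(img_upsampled_t: torch.Tensor, img):
    # Wrap in a MetaTensor only when global meta tracking is enabled.
    if get_track_meta():
        out = MetaTensor(img_upsampled_t)
        if isinstance(img, MetaTensor):
            out.copy_meta_from(img)  # preserve the input's metadata
        return out
    return img_upsampled_t  # plain tensor: the caller disabled meta tracking
```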
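And a sketch of the barrier pattern from the second comment; the worker count and assertion details are illustrative:

```python
import threading

import numpy as np

from monai.data.meta_obj import get_track_meta
from monai.transforms import RandSimulateLowResolution


def test_concurrent_track_meta():
    img = np.random.rand(1, 8, 8, 8).astype(np.float32)
    errors = []
    n_workers = 4
    barrier = threading.Barrier(n_workers + 1)  # workers plus the main thread

    def run_transform():
        barrier.wait()  # all workers start together, sharing one baseline
        before = get_track_meta()
        RandSimulateLowResolution(prob=1.0)(img)
        if get_track_meta() != before:
            errors.append("track_meta flag changed during transform")

    threads = [threading.Thread(target=run_transform) for _ in range(n_workers)]
    for t in threads:
        t.start()
    barrier.wait()  # main thread releases the workers simultaneously
    for t in threads:
        t.join()
    assert not errors
```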

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9ed93f83-2a6c-4bdc-a735-c955ea71e7c8

📥 Commits

Reviewing files that changed from the base of the PR and between 26f27c3 and 4d1b850.

📒 Files selected for processing (2)
  • monai/transforms/spatial/array.py
  • tests/transforms/test_rand_simulate_low_resolution.py

Comment thread monai/transforms/spatial/array.py Outdated
Comment thread tests/transforms/test_rand_simulate_low_resolution.py
chhayankjain force-pushed the 8242-add-use-softmax-to-dice-focal-loss branch from 4d1b850 to 937a516 on April 30, 2026 at 04:59
Contributor

coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/transforms/spatial/array.py`:
- Line 3568: the computed target_shape can contain zero-sized axes when evaluating tuple(np.round(np.array(input_shape) * self.zoom_factor).astype(np.int_).tolist()), which causes interpolate to fail. Update the target_shape calculation in the same block (where target_shape, self.zoom_factor and input_shape are used) to clamp each axis to at least 1 (e.g. replace the raw rounded/cast values with max(1, value) per axis) so no spatial dimension is zero before calling torch.nn.functional.interpolate (see the sketch below).
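
A minimal sketch of the clamping, reusing the NumPy expression quoted in the comment:

```python
import numpy as np


def compute_target_shape(input_shape, zoom_factor):
    raw = np.round(np.array(input_shape) * zoom_factor).astype(np.int_).tolist()
    # Clamp every axis to at least 1 so interpolate never sees a zero-sized dim.
    return tuple(max(1, v) for v in raw)
```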

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 36dda1e2-4b10-45cf-b1e4-aaee25cc8a62

📥 Commits

Reviewing files that changed from the base of the PR and between 4d1b850 and 9645a81.

📒 Files selected for processing (2)
  • monai/transforms/spatial/array.py
  • tests/transforms/test_rand_simulate_low_resolution.py

Comment thread monai/transforms/spatial/array.py Outdated
chhayankjain force-pushed the 8242-add-use-softmax-to-dice-focal-loss branch from 9645a81 to 3f50313 on April 30, 2026 at 12:48
Contributor

coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/transforms/spatial/array.py`:
- Around lines 3577-3589: the chosen interpolation modes (downsample_mode / upsample_mode) can be incompatible with the spatial rank of the tensor, causing F.interpolate to raise NotImplementedError. Update the logic around downsample_mode, upsample_mode and the creation of img_downsampled/img_upsampled_t to map or validate the modes against the spatial rank (len(input_shape)): use "linear" for 1D, "bilinear" for 2D and "trilinear" for 3D (or fall back to an appropriate supported mode), and make sure the align_corners logic still applies only to the supported modes. Apply this mapping before calling torch.nn.functional.interpolate so the mode strings are valid for the input rank (see the first sketch below).

In `@tests/transforms/test_rand_simulate_low_resolution.py`:
- Around lines 105-123: the test only verifies track_meta after the 50-iteration loop and swallows all exceptions, so transient races and hidden failures can escape detection. Inside run_transform (where tfm = RandSimulateLowResolution(...) and the loop for _ in range(50) calls tfm(img)), move the get_track_meta() != expected_track_meta check to immediately after each tfm(img) call so every invocation is validated, and replace the blanket except Exception with per-iteration capture of the actual exception (and optionally its traceback), appending the error to errors so the original worker failure is preserved and reported (see the second sketch below).
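
A minimal sketch of the rank-aware mode mapping from the first comment; the helper name is assumed:

```python
_LINEAR_BY_RANK = {1: "linear", 2: "bilinear", 3: "trilinear"}


def resolve_mode(mode: str, spatial_rank: int) -> str:
    # Linear-family modes must match the spatial rank, or F.interpolate raises
    # NotImplementedError; "nearest" and "area" work for any rank.
    if mode in ("linear", "bilinear", "trilinear"):
        return _LINEAR_BY_RANK.get(spatial_rank, mode)
    return mode
```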
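And a sketch of the per-iteration validation from the second comment; run_transform, errors and expected_track_meta follow the names used there:

```python
import traceback

from monai.data.meta_obj import get_track_meta


def run_transform(tfm, img, expected_track_meta, errors):
    for i in range(50):
        try:
            tfm(img)
        except Exception:
            # Preserve the original worker failure instead of swallowing it.
            errors.append(f"iteration {i}:\n{traceback.format_exc()}")
            return
        # Validate the global flag after every call, not just after the loop.
        if get_track_meta() != expected_track_meta:
            errors.append(f"iteration {i}: track_meta flag changed")
            return
```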

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: a3bfbc51-eb7f-49fe-8368-d893d0f7a030

📥 Commits

Reviewing files that changed from the base of the PR and between 9645a81 and ec57803.

📒 Files selected for processing (2)
  • monai/transforms/spatial/array.py
  • tests/transforms/test_rand_simulate_low_resolution.py

Comment thread monai/transforms/spatial/array.py
Comment thread tests/transforms/test_rand_simulate_low_resolution.py
…lution (Project-MONAI#8409)

Signed-off-by: chhayankjain <chhayank44@gmail.com>
chhayankjain force-pushed the 8242-add-use-softmax-to-dice-focal-loss branch from ec57803 to 0860882 on April 30, 2026 at 15:56

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

RandSimulateLowResolution race condition with set_track_meta()

1 participant