
fix(ci): unblock all failing workflows on main #17

Open
t0mdavid-m wants to merge 3 commits into main from fix-windows-ci

Conversation


@t0mdavid-m t0mdavid-m commented Apr 17, 2026

Summary

All five CI workflows on `main` have been failing since the 2026-04-08 "Merge Template (#16)" commit. This PR addresses each.

  • Build executable for Windows — contrib drift from unpinned `gh release download` after the GH Actions cache expired. Fixed by pinning the contrib download to `release/${OPENMS_VERSION}` and tying the cache key to the OpenMS version (commit `2c3f1c0`).
  • continuous-integration / Pylint / Test workflow functions / Docker Image CI — the template merge reintroduced 15 vestigial content pages, a stale `test.py`, two tests targeting orphaned pages, and a `Dockerfile_simple` reference. All removed. `test_gui.py` rewritten to smoke-test the 13 active pages in `app.py` (using `AppTest.from_file("app.py")` for pages that require navigation context). New `tests/test_results_helpers.py` unit-tests pure helpers in `src/common/results_helpers.py`.
  • README rewritten with a punchy quantms-web-specific summary (old one was still the upstream template README).

Test plan

  • `ci.yml` green on branch (run 24582602838)
  • `pylint.yml` green on this PR
  • `Docker Image CI` green on this PR (only `build-full-app` remains)
  • `Build executable for Windows` green (dispatched on branch, run 24582624580)
  • Local `python -m pytest test_gui.py tests/ -q` → 34 passed

🤖 Generated with Claude Code

Summary by CodeRabbit

Release Notes

  • Removed Features

    • Removed in silico protein digest, fragmentation calculator, isotope pattern generator, and peptide m/z calculator tools
    • Removed raw data viewer and file upload interfaces
    • Removed generic workflow execution pages
  • Documentation

    • Updated README with revised deployment and application information
  • Improvements

    • Updated Docker configuration and GitHub Actions workflows for streamlined deployment

t0mdavid-m and others added 2 commits April 17, 2026 20:25
The Windows build step downloaded contrib_build-Windows.tar.gz from
OpenMS/contrib without a --tag, always pulling the latest release.
When the GH Actions cache (7-day eviction) expired, a newer contrib
got pulled that was incompatible with the pinned OpenMS release/3.5.0
source tree, breaking MSVC compilation in DIAPrescoring.cpp.

Pin the download to release/${OPENMS_VERSION} and tie the cache key
to the OpenMS version so contrib stays in lockstep with the source.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Deletes 15 template-leftover content pages, test.py, two tests targeting
orphaned pages, and the redundant workflow-tests.yml. These were all
reintroduced by the "Merge Template (#16)" commit and reference modules
that don't exist in quantms-web (src.simpleworkflow, src.Workflow,
docs.toppframework, etc.), which broke ci.yml, pylint.yml, and the
build-simple-app Docker job.

Rewrites test_gui.py to smoke-test the active pages registered in
app.py and adds tests/test_results_helpers.py with unit tests for the
pure helpers in src/common/results_helpers.py. Drops Dockerfile_simple
references from the Docker CI workflow and the README, and replaces the
old template README with a quantms-web-specific summary.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

coderabbitai bot commented Apr 17, 2026

📝 Walkthrough

Walkthrough

This pull request refactors the repository from a generic OpenMS streamlit template into a specialized quantms-web application focused on DDA label-free quantification. Changes include removing multiple content pages, simplifying build workflows, updating GitHub Actions cache management, rewriting documentation, and restructuring tests accordingly.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **GitHub Actions Workflows**<br>`.github/workflows/build-docker-images.yml`, `.github/workflows/build-windows-executable-app.yaml` | Removed the build-simple-app Docker job; updated the Windows executable workflow to use version-scoped cache keys (`${{ env.OPENMS_VERSION }}`) and explicit tag-based release downloads instead of default resolution. |
| **Removed Workflow Tests**<br>`.github/workflows/workflow-tests.yml` | Entirely removed the CI workflow that ran pytest on push and pull_request events targeting the main branch. |
| **Content Pages—Removed**<br>`content/digest.py`, `content/documentation.py`, `content/download_section.py`, `content/file_upload.py`, `content/fragmentation.py`, `content/isotope_pattern_generator.py`, `content/peptide_mz_calculator.py`, `content/raw_data_viewer.py`, `content/run_example_workflow.py`, `content/run_subprocess.py`, `content/simple_workflow.py`, `content/topp_workflow_*.py` | Deleted 14 Streamlit UI modules, removing template features (protein digest, documentation browser, file uploads, spectrum visualization, isotope patterns, m/z calculator, raw data viewer, workflow execution, subprocess runner, and various TOPP workflow pages). |
| **Test Files—Removed**<br>`test.py`, `tests/test_run_subprocess.py`, `tests/test_simple_workflow.py` | Deleted legacy test modules covering simple workflow generation, subprocess execution, and mzML spectrum testing. |
| **Test Files—Modified**<br>`test_gui.py` | Replaced comprehensive page-by-page GUI coverage with a shared `_init()` helper, narrowed the testable pages list, and added `test_app_loads()` to cover pages requiring Streamlit navigation context. |
| **Test Files—Added**<br>`tests/test_results_helpers.py` | Added unit tests for helper functions (`get_workflow_dir`, `extract_scan_from_ref`, `extract_scan_number`, `extract_filename_from_idxml`) from `src.common.results_helpers`. |
| **Documentation**<br>`README.md` | Replaced generic OpenMS template documentation with quantms-web-specific content describing the DDA label-free quantification pipeline (Comet → Percolator → IDFilter → ProteomicsLFQ), updated Docker and installation instructions, added a Windows .msi installer section, and updated citations. |

Possibly related PRs

  • Merge Template #16: Modifies .github/workflows/build-windows-executable-app.yaml to add a step for removing Streamlit server address configuration, intersecting with this PR's workflow updates.

Poem

🐇 From templates broad to focus keen,
A rabbit's refactor, crisp and clean!
Generic pages fade away,
quantms-web shines bright today—
Proteomics pipeline takes the stage. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check: ✅ Passed. The title accurately describes the main objective: fixing failing CI workflows on the main branch by addressing specific workflow failures and removing template remnants.
  • Description check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (3)
test_gui.py (1)

9-15: Consider deriving the page list from app.py to avoid drift.

DIRECTLY_TESTABLE_PAGES is hard-coded. If pages are renamed/added/removed in app.py, this list silently goes stale and the smoke test coverage narrows without signal. Consider introspecting the navigation registration in app.py (or centralizing the page list) and filtering out the ones that need navigation context, so new pages are automatically exercised.
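One way to derive the list without importing Streamlit at all is to parse `app.py` with the `ast` module and collect the page paths passed to `st.Page(...)`. This is a hypothetical sketch: it assumes the app registers pages via literal string paths in `app.py`, and the real registration code may differ.

```python
import ast
from pathlib import Path


def registered_pages(app_file: str = "app.py") -> list[str]:
    # Collect the first string argument of every st.Page(...) call
    # without importing (or running) Streamlit itself.
    tree = ast.parse(Path(app_file).read_text())
    pages = []
    for node in ast.walk(tree):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "Page"
            and node.args
            and isinstance(node.args[0], ast.Constant)
            and isinstance(node.args[0].value, str)
        ):
            pages.append(node.args[0].value)
    return pages
```

`DIRECTLY_TESTABLE_PAGES` could then be computed as `registered_pages()` minus a small, explicit exclusion set of pages that need navigation context, so renames and additions in `app.py` propagate to the smoke test automatically.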

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test_gui.py` around lines 9 - 15, DIRECTLY_TESTABLE_PAGES is hard-coded and
can drift from the source of truth in app.py; change the test to derive its list
by importing the navigation/page registry used by app.py (e.g., the function or
variable that registers pages in app.py) and filter out pages that require
navigation context before assigning to DIRECTLY_TESTABLE_PAGES. Specifically,
replace the static list with an introspection step that imports the app's page
registry (the navigation registration symbol in app.py), extracts the registered
page module paths or IDs, and excludes those that need extra context so the test
automatically follows additions/renames in app.py.
tests/test_results_helpers.py (2)

20-21: test_extract_scan_number lacks a negative/missing-token case.

Unlike test_extract_scan_from_ref, this test doesn't exercise the "token absent" or malformed-input path for extract_scan_number. Adding one case would lock in the contract and prevent regressions.
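A minimal sketch of such a negative case, assuming the helper returns `None` when the token is missing; the actual signature and contract of `extract_scan_number` in `src/common/results_helpers.py` may differ:

```python
import re


def extract_scan_number(ref: str):
    # Hypothetical stand-in for the real helper: pull the integer after
    # "scan=" from a spectrum reference, or None when the token is absent.
    m = re.search(r"scan=(\d+)", ref)
    return int(m.group(1)) if m else None


assert extract_scan_number("controllerType=0 controllerNumber=1 scan=1234") == 1234
assert extract_scan_number("file=abc") is None  # token absent → missing value
```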

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_results_helpers.py` around lines 20 - 21, Add a negative case to
test_extract_scan_number to cover the "token absent"/malformed path: call
extract_scan_number with a string that does not contain the "scan=" token (for
example "file=abc") and assert the function returns the expected missing value
(e.g., None) to match the contract used by extract_scan_from_ref.

24-27: Add a test for the str.replace gotcha in extract_filename_from_idxml.

The underlying implementation (see src/common/results_helpers.py:90-96) uses stem.replace(suffix, ''), which removes the suffix anywhere in the stem — not just at the end. For inputs like per_sample_filter.idXML this collapses to sample.mzML, which may or may not be intended. Worth adding a test that pins down this behavior (or switching the implementation to removesuffix if end-only stripping is desired).
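The difference is easy to demonstrate in isolation. The snippet below is a hypothetical sketch (the suffix string and the real helper's signature in `src/common/results_helpers.py` may differ); it shows why `str.replace` strips every occurrence of the suffix while `str.removesuffix` (Python 3.9+) only strips a trailing one:

```python
from pathlib import Path

SUFFIX = "_filter"  # hypothetical; the real suffix lives in results_helpers.py

stem = Path("run1_filter_copy_filter.idXML").stem  # "run1_filter_copy_filter"

# replace() removes the suffix *everywhere* in the stem:
print(stem.replace(SUFFIX, ""))      # run1_copy

# removesuffix() removes it only at the end:
print(stem.removesuffix(SUFFIX))     # run1_filter_copy
```

A test pinning either behavior would catch an accidental switch between the two; if end-only stripping is the intent, `removesuffix` is the safer implementation.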

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_results_helpers.py` around lines 24 - 27, The test suite must
cover the str.replace gotcha in extract_filename_from_idxml: change the
implementation in src/common/results_helpers.py inside
extract_filename_from_idxml to use stem.removesuffix(suffix) instead of
stem.replace(suffix, '') so only trailing suffixes are removed, then add a unit
test in tests/test_results_helpers.py asserting
extract_filename_from_idxml(Path("per_sample_filter.idXML")) ==
"per_sample.mzML" (and keep the existing cases) to pin down the intended
behavior.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e73e0e46-c8a9-4eb1-9857-e49f02805820

📥 Commits

Reviewing files that changed from the base of the PR and between b2db1a9 and f43b0ad.

📒 Files selected for processing (24)
  • .github/workflows/build-docker-images.yml
  • .github/workflows/build-windows-executable-app.yaml
  • .github/workflows/workflow-tests.yml
  • README.md
  • content/digest.py
  • content/documentation.py
  • content/download_section.py
  • content/file_upload.py
  • content/fragmentation.py
  • content/isotope_pattern_generator.py
  • content/peptide_mz_calculator.py
  • content/raw_data_viewer.py
  • content/run_example_workflow.py
  • content/run_subprocess.py
  • content/simple_workflow.py
  • content/topp_workflow_execution.py
  • content/topp_workflow_file_upload.py
  • content/topp_workflow_parameter.py
  • content/topp_workflow_results.py
  • test.py
  • test_gui.py
  • tests/test_results_helpers.py
  • tests/test_run_subprocess.py
  • tests/test_simple_workflow.py
💤 Files with no reviewable changes (20)
  • content/topp_workflow_execution.py
  • content/topp_workflow_parameter.py
  • content/topp_workflow_file_upload.py
  • .github/workflows/build-docker-images.yml
  • content/simple_workflow.py
  • content/topp_workflow_results.py
  • .github/workflows/workflow-tests.yml
  • test.py
  • content/documentation.py
  • content/download_section.py
  • tests/test_run_subprocess.py
  • content/peptide_mz_calculator.py
  • content/run_subprocess.py
  • content/file_upload.py
  • content/raw_data_viewer.py
  • tests/test_simple_workflow.py
  • content/run_example_workflow.py
  • content/digest.py
  • content/isotope_pattern_generator.py
  • content/fragmentation.py

Comment thread README.md
Comment on lines +9 to +15
| Stage | Tool | What it does |
|---|---|---|
| 1. Identification | Comet | Peptide-spectrum matching against a protein database |
| 2. Rescoring | Percolator | ML-based statistical validation of PSMs |
| 3. Filtering | IDFilter | FDR-controlled peptide identification filtering |
| 4. Quantification | ProteomicsLFQ | Label-free quantification across samples |
| 5. Analysis | Built-in | Volcano plots, PCA, heatmaps, spectral library export |


⚠️ Potential issue | 🟡 Minor

Document the optional decoy-generation step to match runtime behavior.

The pipeline table skips the optional DecoyDatabase stage that is part of the implemented workflow path (generate-decoys=True). Please add it (or explicitly mark current table as simplified) to avoid user confusion about expected inputs/outputs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 9 - 15, The README pipeline table omits the optional
DecoyDatabase stage used when generate-decoys=True, causing mismatch with
runtime behavior; update the table to include a row for "DecoyDatabase" (or add
an explicit note that the table is simplified) and describe its role (decoy
generation for downstream FDR control) so users know when inputs/outputs include
decoy files; reference the pipeline stage name "DecoyDatabase" and the config
flag "generate-decoys=True" in the note so readers can correlate docs with the
implemented workflow.

Comment thread README.md
```
streamlit run app.py
```

The full pipeline runs locally once the [OpenMS Command Line Tools](https://openms.readthedocs.io/en/latest/about/installation.html) are on your `PATH` — they provide Comet, Percolator, ProteomicsLFQ, and the rest of the TOPP suite. With Python alone, the pyOpenMS-backed parts of the UI still work.


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In current OpenMS documentation, are Comet and Percolator binaries bundled with OpenMS Command Line Tools by default, or are they accessed via adapters/external dependencies depending on installation?

💡 Result:

In the current OpenMS documentation (version 3.5.0 and nightly 3.6.0-pre), Comet and Percolator binaries are not bundled as direct OpenMS Command Line Tools (TOPP). Instead, they are external tools accessed via OpenMS adapters (CometAdapter and PercolatorAdapter). These adapters require the user to provide the path to the respective executables (-comet_executable and -percolator_executable parameters, which can be in PATH). However, for Comet, the documentation explicitly states that OpenMS installers ship with Comet, making it available by default in those installations. Percolator is treated similarly as an external dependency without mention of bundling in installers. For source builds or conda (libopenms/openms packages), third-party tools like Comet must be in PATH separately, and tools return exit code 14 if missing.


Clarify that Comet and Percolator availability depends on installation method and that both are accessed via OpenMS adapters.

The phrase "they provide Comet, Percolator..." is ambiguous. Comet is bundled with OpenMS installers by default, while Percolator is an external dependency. Both are accessed via adapters (CometAdapter, PercolatorAdapter) that require explicit paths to the respective executables. Distinguish this from the direct TOPP suite tools, and note that availability varies by installation method (installers vs. source builds vs. conda packages).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` at line 28, Update the README sentence to clarify that Comet is
typically bundled with OpenMS installers while Percolator is an external
dependency, and that both are invoked via OpenMS adapters (CometAdapter,
PercolatorAdapter) which require explicit paths to the respective executables;
also distinguish these adapter-invoked tools from the direct TOPP suite tools
and note that availability varies by installation method (official installers vs
source builds vs conda packages).

Comment thread test_gui.py
Comment on lines +37 to +40
```python
def test_app_loads():
    app = _init(AppTest.from_file("app.py"))
    app.run(timeout=30)
    assert not app.exception
```


⚠️ Potential issue | 🟡 Minor

test_app_loads provides weak coverage for nav-only pages.

Running app.py only exercises whichever page Streamlit renders by default; other nav-only pages (e.g., those using st.page_link) are not actually executed by this test. The comment on line 6–8 implies indirect coverage, but in practice only the landing page runs. Consider either (a) driving navigation via AppTest (e.g., switching pages and re-running) or (b) adjusting the comment so it doesn't overstate the coverage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test_gui.py` around lines 37 - 40, The test_app_loads test only runs the
default Streamlit landing page and doesn't exercise nav-only pages referenced by
st.page_link; update the test to either drive navigation via the test harness
(use AppTest.from_file("app.py") to programmatically switch pages and re-run the
app for each route before asserting app.exception) or change the test comment to
remove the implication of multi-page coverage; specifically modify
test_app_loads and use AppTest methods to navigate between pages (or limit the
comment) so nav-only pages are actually executed or not claimed to be covered.

Comment on lines +11 to +12
```python
def test_get_workflow_dir():
    assert get_workflow_dir("/tmp/ws") == Path("/tmp/ws", "topp-workflow")
```


⚠️ Potential issue | 🟡 Minor

Use tmp_path instead of a hard-coded /tmp/ws.

Ruff flags S108 (insecure tmp path), and a hard-coded POSIX path also makes the test Windows-unfriendly — relevant given this PR also fixes the Windows build workflow. Prefer pytest's tmp_path fixture or at least a relative Path.

♻️ Proposed fix
```diff
-def test_get_workflow_dir():
-    assert get_workflow_dir("/tmp/ws") == Path("/tmp/ws", "topp-workflow")
+def test_get_workflow_dir(tmp_path):
+    assert get_workflow_dir(str(tmp_path)) == tmp_path / "topp-workflow"
```
🧰 Tools
🪛 Ruff (0.15.10)

[error] 12-12: Probable insecure usage of temporary file or directory: "/tmp/ws"

(S108)



🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_results_helpers.py` around lines 11 - 12, Replace the hard-coded
POSIX path in the test_get_workflow_dir test with pytest's tmp_path fixture to
avoid S108 and make the test cross-platform: change the test signature to accept
tmp_path, create the workspace dir under tmp_path (e.g., tmp_path / "ws" or
similar), call get_workflow_dir with that Path (or its str if function expects
str) and assert it equals Path(tmp_path, "topp-workflow") (or the appropriate
Path construction); update any imports as needed to use pathlib.Path.

`gh release download` takes the tag as a positional argument, not a
`--tag` flag. The flagged invocation silently failed to match any release
asset on Windows, surfacing the system error "The system cannot find the
file specified".

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>