fix(ci): unblock all failing workflows on main (#17)
The Windows build step downloaded contrib_build-Windows.tar.gz from
OpenMS/contrib without a --tag, always pulling the latest release.
When the GH Actions cache (7-day eviction) expired, a newer contrib
got pulled that was incompatible with the pinned OpenMS release/3.5.0
source tree, breaking MSVC compilation in DIAPrescoring.cpp.
Pin the download to release/${OPENMS_VERSION} and tie the cache key
to the OpenMS version so contrib stays in lockstep with the source.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
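As a sketch, the version coupling described above might look like this in the Windows workflow; the step names, the cache-action version, and the exact download command are assumptions for illustration, not the PR's literal diff:

```yaml
env:
  OPENMS_VERSION: "3.5.0"

steps:
  - name: Cache contrib
    id: cache-contrib
    uses: actions/cache@v4
    with:
      path: contrib
      # Key includes the OpenMS version: bumping OPENMS_VERSION forces a
      # fresh download instead of reusing a stale contrib build.
      key: contrib-windows-${{ env.OPENMS_VERSION }}

  - name: Download pinned contrib
    if: steps.cache-contrib.outputs.cache-hit != 'true'
    env:
      GH_TOKEN: ${{ github.token }}
    run: >
      gh release download "release/${{ env.OPENMS_VERSION }}"
      --repo OpenMS/contrib
      --pattern "contrib_build-Windows.tar.gz"
```

With the version baked into the cache key, a cache eviction can only ever repopulate with the contrib build matching the pinned OpenMS source.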
Deletes 15 template-leftover content pages, test.py, two tests targeting orphaned pages, and the redundant workflow-tests.yml. These were all reintroduced by the "Merge Template (#16)" commit and reference modules that don't exist in quantms-web (src.simpleworkflow, src.Workflow, docs.toppframework, etc.), which broke ci.yml, pylint.yml, and the build-simple-app Docker job.

Rewrites test_gui.py to smoke-test the active pages registered in app.py and adds tests/test_results_helpers.py with unit tests for the pure helpers in src/common/results_helpers.py.

Drops Dockerfile_simple references from the Docker CI workflow and the README, and replaces the old template README with a quantms-web-specific summary.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
📝 Walkthrough

This pull request refactors the repository from a generic OpenMS streamlit template into a specialized quantms-web application focused on DDA label-free quantification. Changes include removing multiple content pages, simplifying build workflows, updating GitHub Actions cache management, rewriting documentation, and restructuring tests accordingly.
🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning)
Actionable comments posted: 4
🧹 Nitpick comments (3)
test_gui.py (1)
9-15: Consider deriving the page list from app.py to avoid drift.

DIRECTLY_TESTABLE_PAGES is hard-coded. If pages are renamed/added/removed in app.py, this list silently goes stale and the smoke-test coverage narrows without signal. Consider introspecting the navigation registration in app.py (or centralizing the page list) and filtering out the ones that need navigation context, so new pages are automatically exercised.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test_gui.py` around lines 9 - 15, DIRECTLY_TESTABLE_PAGES is hard-coded and can drift from the source of truth in app.py; change the test to derive its list by importing the navigation/page registry used by app.py (e.g., the function or variable that registers pages in app.py) and filter out pages that require navigation context before assigning to DIRECTLY_TESTABLE_PAGES. Specifically, replace the static list with an introspection step that imports the app's page registry (the navigation registration symbol in app.py), extracts the registered page module paths or IDs, and excludes those that need extra context so the test automatically follows additions/renames in app.py.

tests/test_results_helpers.py (2)
20-21: test_extract_scan_number lacks a negative/missing-token case.

Unlike test_extract_scan_from_ref, this test doesn't exercise the "token absent" or malformed-input path for extract_scan_number. Adding one case would lock in the contract and prevent regressions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_results_helpers.py` around lines 20 - 21, Add a negative case to test_extract_scan_number to cover the "token absent"/malformed path: call extract_scan_number with a string that does not contain the "scan=" token (for example "file=abc") and assert the function returns the expected missing value (e.g., None) to match the contract used by extract_scan_from_ref.
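The missing negative case could be sketched as below. The regex-based helper is a hypothetical stand-in for the real extract_scan_number in src/common/results_helpers.py, whose implementation isn't shown in this review:

```python
import re


def extract_scan_number(spectrum_ref: str):
    """Hypothetical stand-in: return the integer after 'scan=', or None."""
    match = re.search(r"scan=(\d+)", spectrum_ref)
    return int(match.group(1)) if match else None


def test_extract_scan_number_missing_token():
    # Happy path: the token is present somewhere in the reference string.
    assert extract_scan_number("controllerType=0 controllerNumber=1 scan=42") == 42
    # Negative path the review asks for: no 'scan=' token at all.
    assert extract_scan_number("file=abc") is None
```

Whatever the real helper returns for the missing-token case (None, -1, or an exception), pinning it in a test makes the contract explicit.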
24-27: Add a test for the str.replace gotcha in extract_filename_from_idxml.

The underlying implementation (see src/common/results_helpers.py:90-96) uses stem.replace(suffix, ''), which removes the suffix anywhere in the stem — not just at the end. For inputs like per_sample_filter.idXML this collapses to sample.mzML, which may or may not be intended. Worth adding a test that pins down this behavior (or switching the implementation to removesuffix if end-only stripping is desired).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/test_results_helpers.py` around lines 24 - 27, The test suite must cover the str.replace gotcha in extract_filename_from_idxml: change the implementation in src/common/results_helpers.py inside extract_filename_from_idxml to use stem.removesuffix(suffix) instead of stem.replace(suffix, '') so only trailing suffixes are removed, then add a unit test in tests/test_results_helpers.py asserting extract_filename_from_idxml(Path("per_sample_filter.idXML")) == "per_sample.mzML" (and keep the existing cases) to pin down the intended behavior.
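The difference the review is pointing at is easy to demonstrate in isolation; the strings below are generic examples, not the project's actual stems:

```python
stem = "sample_filter_extra_filter"

# str.replace removes *every* occurrence of the substring, wherever it appears.
assert stem.replace("_filter", "") == "sample_extra"

# str.removesuffix (Python 3.9+) only strips a trailing match.
assert stem.removesuffix("_filter") == "sample_filter_extra"

# And it is a no-op when the suffix isn't at the end.
assert "filter_sample".removesuffix("_filter") == "filter_sample"
```

If end-only stripping is the intended contract, removesuffix encodes it directly; a test with a suffix token embedded mid-stem catches any regression back to replace.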
📒 Files selected for processing (24)
- .github/workflows/build-docker-images.yml
- .github/workflows/build-windows-executable-app.yaml
- .github/workflows/workflow-tests.yml
- README.md
- content/digest.py
- content/documentation.py
- content/download_section.py
- content/file_upload.py
- content/fragmentation.py
- content/isotope_pattern_generator.py
- content/peptide_mz_calculator.py
- content/raw_data_viewer.py
- content/run_example_workflow.py
- content/run_subprocess.py
- content/simple_workflow.py
- content/topp_workflow_execution.py
- content/topp_workflow_file_upload.py
- content/topp_workflow_parameter.py
- content/topp_workflow_results.py
- test.py
- test_gui.py
- tests/test_results_helpers.py
- tests/test_run_subprocess.py
- tests/test_simple_workflow.py
💤 Files with no reviewable changes (20)
- content/topp_workflow_execution.py
- content/topp_workflow_parameter.py
- content/topp_workflow_file_upload.py
- .github/workflows/build-docker-images.yml
- content/simple_workflow.py
- content/topp_workflow_results.py
- .github/workflows/workflow-tests.yml
- test.py
- content/documentation.py
- content/download_section.py
- tests/test_run_subprocess.py
- content/peptide_mz_calculator.py
- content/run_subprocess.py
- content/file_upload.py
- content/raw_data_viewer.py
- tests/test_simple_workflow.py
- content/run_example_workflow.py
- content/digest.py
- content/isotope_pattern_generator.py
- content/fragmentation.py
| Stage | Tool | What it does |
|---|---|---|
| 1. Identification | Comet | Peptide-spectrum matching against a protein database |
| 2. Rescoring | Percolator | ML-based statistical validation of PSMs |
| 3. Filtering | IDFilter | FDR-controlled peptide identification filtering |
| 4. Quantification | ProteomicsLFQ | Label-free quantification across samples |
| 5. Analysis | Built-in | Volcano plots, PCA, heatmaps, spectral library export |
Document the optional decoy-generation step to match runtime behavior.
The pipeline table skips the optional DecoyDatabase stage that is part of the implemented workflow path (generate-decoys=True). Please add it (or explicitly mark the current table as simplified) to avoid user confusion about expected inputs/outputs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 9 - 15, The README pipeline table omits the optional
DecoyDatabase stage used when generate-decoys=True, causing mismatch with
runtime behavior; update the table to include a row for "DecoyDatabase" (or add
an explicit note that the table is simplified) and describe its role (decoy
generation for downstream FDR control) so users know when inputs/outputs include
decoy files; reference the pipeline stage name "DecoyDatabase" and the config
flag "generate-decoys=True" in the note so readers can correlate docs with the
implemented workflow.
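If the table is extended rather than just annotated, the extra row might read something like the following; the wording is a suggestion, not text from the repo:

| Stage | Tool | What it does |
|---|---|---|
| 0. Decoy generation (optional) | DecoyDatabase | When `generate-decoys=True`, appends decoy sequences to the protein database for downstream FDR control |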
```sh
streamlit run app.py
```

The full pipeline runs locally once the [OpenMS Command Line Tools](https://openms.readthedocs.io/en/latest/about/installation.html) are on your `PATH` — they provide Comet, Percolator, ProteomicsLFQ, and the rest of the TOPP suite. With Python alone, the pyOpenMS-backed parts of the UI still work.
🧩 Analysis chain
🌐 Web query:
In current OpenMS documentation, are Comet and Percolator binaries bundled with OpenMS Command Line Tools by default, or are they accessed via adapters/external dependencies depending on installation?
💡 Result:
In the current OpenMS documentation (version 3.5.0 and nightly 3.6.0-pre), Comet and Percolator binaries are not bundled as direct OpenMS Command Line Tools (TOPP). Instead, they are external tools accessed via OpenMS adapters (CometAdapter and PercolatorAdapter). These adapters require the user to provide the path to the respective executables (-comet_executable and -percolator_executable parameters, which can be in PATH). However, for Comet, the documentation explicitly states that OpenMS installers ship with Comet, making it available by default in those installations. Percolator is treated similarly as an external dependency without mention of bundling in installers. For source builds or conda (libopenms/openms packages), third-party tools like Comet must be in PATH separately, and tools return exit code 14 if missing.
Citations:
- 1: https://openms.de/documentation/TOPP_CometAdapter.html
- 2: https://openms.de/documentation/TOPP_PercolatorAdapter.html
- 3: https://openms.de/doxygen/nightly/html/TOPP_PercolatorAdapter.html
- 4: https://abibuilder.cs.uni-tuebingen.de/archive/openms/Documentation/nightly/html/TOPP_CometAdapter.html
- 5: https://openms.de/documentation/install_linux.html
- 6: https://abibuilder.cs.uni-tuebingen.de/archive/openms/Documentation/nightly/html/install_linux.html
- 7: https://openms.readthedocs.io/en/latest/about/installation/installation-on-gnu-linux.html
Clarify that Comet and Percolator availability depends on installation method and that both are accessed via OpenMS adapters.
The phrase "they provide Comet, Percolator..." is ambiguous. Comet is bundled with OpenMS installers by default, while Percolator is an external dependency. Both are accessed via adapters (CometAdapter, PercolatorAdapter) that require explicit paths to the respective executables. Distinguish this from the direct TOPP suite tools, and note that availability varies by installation method (installers vs. source builds vs. conda packages).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 28, Update the README sentence to clarify that Comet is
typically bundled with OpenMS installers while Percolator is an external
dependency, and that both are invoked via OpenMS adapters (CometAdapter,
PercolatorAdapter) which require explicit paths to the respective executables;
also distinguish these adapter-invoked tools from the direct TOPP suite tools
and note that availability varies by installation method (official installers vs
source builds vs conda packages).
```python
def test_app_loads():
    app = _init(AppTest.from_file("app.py"))
    app.run(timeout=30)
    assert not app.exception
```
test_app_loads provides weak coverage for nav-only pages.
Running app.py only exercises whichever page Streamlit renders by default; other nav-only pages (e.g., those using st.page_link) are not actually executed by this test. The comment on lines 6–8 implies indirect coverage, but in practice only the landing page runs. Consider either (a) driving navigation via AppTest (e.g., switching pages and re-running) or (b) adjusting the comment so it doesn't overstate the coverage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test_gui.py` around lines 37 - 40, The test_app_loads test only runs the
default Streamlit landing page and doesn't exercise nav-only pages referenced by
st.page_link; update the test to either drive navigation via the test harness
(use AppTest.from_file("app.py") to programmatically switch pages and re-run the
app for each route before asserting app.exception) or change the test comment to
remove the implication of multi-page coverage; specifically modify
test_app_loads and use AppTest methods to navigate between pages (or limit the
comment) so nav-only pages are actually executed or not claimed to be covered.
```python
def test_get_workflow_dir():
    assert get_workflow_dir("/tmp/ws") == Path("/tmp/ws", "topp-workflow")
```
Use tmp_path instead of a hard-coded /tmp/ws.
Ruff flags S108 (insecure tmp path), and a hard-coded POSIX path also makes the test Windows-unfriendly — relevant given this PR also fixes the Windows build workflow. Prefer pytest's tmp_path fixture or at least a relative Path.
♻️ Proposed fix

```diff
-def test_get_workflow_dir():
-    assert get_workflow_dir("/tmp/ws") == Path("/tmp/ws", "topp-workflow")
+def test_get_workflow_dir(tmp_path):
+    assert get_workflow_dir(str(tmp_path)) == tmp_path / "topp-workflow"
```
🧰 Tools

🪛 Ruff (0.15.10)

[error] 12-12: Probable insecure usage of temporary file or directory: "/tmp/ws" (S108)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_results_helpers.py` around lines 11 - 12, Replace the hard-coded
POSIX path in the test_get_workflow_dir test with pytest's tmp_path fixture to
avoid S108 and make the test cross-platform: change the test signature to accept
tmp_path, create the workspace dir under tmp_path (e.g., tmp_path / "ws" or
similar), call get_workflow_dir with that Path (or its str if function expects
str) and assert it equals Path(tmp_path, "topp-workflow") (or the appropriate
Path construction); update any imports as needed to use pathlib.Path.
`gh release download` takes the tag as a positional argument, not a `--tag` flag. The previous invocation therefore silently failed to match, and on Windows the build surfaced only the system error "The system cannot find the file specified".

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
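For illustration, the broken and fixed invocations side by side; the asset pattern and version variable are taken from the commit messages above, and the exact flags are a sketch rather than the PR's literal diff:

```sh
# Broken: gh release download has no --tag flag, so this never matched a release.
# gh release download --tag "release/${OPENMS_VERSION}" \
#   --repo OpenMS/contrib --pattern "contrib_build-Windows.tar.gz"

# Fixed: the tag is the first positional argument.
gh release download "release/${OPENMS_VERSION}" \
  --repo OpenMS/contrib \
  --pattern "contrib_build-Windows.tar.gz"
```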
Summary
All five CI workflows on `main` have been failing since the 2026-04-08 "Merge Template (#16)" commit. This PR addresses each.
Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes
Removed Features
Documentation
Improvements