LLM CALL: save bypassed entry in db as well #826
Conversation
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration: Path: .coderabbit.yaml | Review profile: CHILL | Plan: Pro | Run ID:
📒 Files selected for processing (3)
🚧 Files skipped from review as they are similar to previous changes (2)
📝 Walkthrough: Persists guardrail rephrase calls; changes config serialization to JSON mode. Changes: Guardrail Rephrase Persistence.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
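For context on the walkthrough's "JSON mode" note, a hedged Pydantic v2 illustration; the `Config` model below is invented, not the PR's actual schema:

```python
# Illustration of Pydantic v2 "JSON mode" serialization: mode="json" renders
# UUIDs (and datetimes) as plain strings, which JSON/DB columns expect.
from uuid import UUID, uuid4

from pydantic import BaseModel


class Config(BaseModel):
    validator_id: UUID


cfg = Config(validator_id=uuid4())
print(cfg.model_dump())             # {'validator_id': UUID('...')} (Python objects)
print(cfg.model_dump(mode="json"))  # {'validator_id': '...'} (JSON-safe strings)
```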
Force-pushed from bb74426 to 4663ffb (Compare)
Actionable comments posted: 1
🧹 Nitpick comments (1)
backend/app/tests/services/llm/test_jobs.py (1)
1201-1217: ⚡ Quick win: Use a fixture factory instead of an inline request payload.
Line 1201 builds a full request dict inline; please route this through a factory-style fixture/helper for consistency with test-data construction standards.
As per coding guidelines, `backend/app/tests/**/*.py`: Use factory pattern for test fixtures in `backend/app/tests/`.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/tests/services/llm/test_jobs.py` around lines 1201 - 1217, Replace the inline request_data dict in test_jobs.py with a factory-style fixture: extract the payload construction (the "query" and "config" with "blob" / "completion" / "params" and "input_guardrails" using VALIDATOR_CONFIG_ID_1) into a test factory helper (e.g., request_payload_factory or build_request_payload) in the test fixtures module and then call that factory from the test before invoking _execute_job(job_for_execution, db, payload); ensure the factory accepts overrides for fields like model and validator IDs so other tests can reuse it and keep the same variable names (request_data, job_for_execution, VALIDATOR_CONFIG_ID_1) to minimize changes.
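A minimal sketch of the factory the prompt describes; the payload layout follows the text above, while the defaults, the placeholder model name, and the stand-in `VALIDATOR_CONFIG_ID_1` value are assumptions:

```python
# Hedged sketch only: key layout follows the review prompt; defaults and the
# constant's value are assumptions, not the repo's actual test data.
from typing import Any

VALIDATOR_CONFIG_ID_1 = "00000000-0000-0000-0000-000000000001"  # stand-in value


def build_request_payload(**overrides: Any) -> dict[str, Any]:
    """Build the default request payload for the rephrase tests, allowing overrides."""
    payload: dict[str, Any] = {
        "query": "example user query",
        "config": {
            "blob": {
                "completion": {"params": {"model": "gpt-4o"}},
                "input_guardrails": [VALIDATOR_CONFIG_ID_1],
            },
        },
    }
    payload.update(overrides)  # shallow override; deep-merge if nested tweaks are needed
    return payload
```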
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@backend/app/tests/services/llm/test_jobs.py`:
- Around line 1177-1179: The test method
test_guardrails_rephrase_saves_original_input_and_safe_text_in_db is missing
type hints for its parameters and return value; update its signature to add
appropriate typing for db, job_env, and job_for_execution (e.g., use the
repository's existing fixture types or typing.Any/Fixture types used elsewhere)
and annotate the return type as -> None so the function signature follows the
project's typing rules.
---
Nitpick comments:
In `@backend/app/tests/services/llm/test_jobs.py`:
- Around line 1201-1217: Replace the inline request_data dict in test_jobs.py
with a factory-style fixture: extract the payload construction (the "query" and
"config" with "blob" / "completion" / "params" and "input_guardrails" using
VALIDATOR_CONFIG_ID_1) into a test factory helper (e.g., request_payload_factory
or build_request_payload) in the test fixtures module and then call that factory
from the test before invoking _execute_job(job_for_execution, db, payload);
ensure the factory accepts overrides for fields like model and validator IDs so
other tests can reuse it and keep the same variable names (request_data,
job_for_execution, VALIDATOR_CONFIG_ID_1) to minimize changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e572c28d-2925-4ec9-ab3c-c034449b9c71
📒 Files selected for processing (4)
backend/app/crud/llm.py
backend/app/services/llm/guardrails.py
backend/app/services/llm/jobs.py
backend/app/tests/services/llm/test_jobs.py
Codecov Report: ❌ Patch coverage is
```python
usage={
    "input_tokens": 0,
    "output_tokens": 0,
    "total_tokens": 0,
},
```
Why are we hardcoding this?
Because the LLM is never hit on a guardrail bypass, no token counts are calculated; we only get real token numbers when the request actually goes through the LLM.
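A tiny illustrative sketch of that behavior; `bypassed_by_guardrail` and `provider_usage` are invented names, not the PR's variables:

```python
# Illustration only: a guardrail bypass never reaches a provider, so token
# usage is pinned to zero; real counts come only from a provider response.
ZERO_USAGE = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}


def resolve_usage(bypassed_by_guardrail: bool, provider_usage: dict | None) -> dict:
    """Return token usage for the call record, zeroed when no LLM was invoked."""
    if bypassed_by_guardrail or provider_usage is None:
        return dict(ZERO_USAGE)
    return provider_usage
```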
```python
content={
    "type": "text",
    "content": {
        "format": "text",
        "value": guardrail_direct_response,
```
We should be able to identify in some way that this response came from a guardrail and that an actual LLM call didn't happen; otherwise, with this format, it might get confusing and users may end up fixing things on the LLM end.
Maybe later on you can add a "source" key inside content, set to "guardrail" in the case of a guardrail and "llm" in the case of an LLM.
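As a hedged sketch of that suggestion (the helper and the sample values below are invented, not the PR's code):

```python
# Illustrative only: tag each response payload with its origin so consumers can
# tell a guardrail bypass from a real LLM reply. Field names mirror the diff above.
def build_response_content(value: str, source: str) -> dict:
    """Wrap a text response, tagging it as coming from "guardrail" or "llm"."""
    return {
        "type": "text",
        "source": source,  # "guardrail" when the LLM was never called
        "content": {"format": "text", "value": value},
    }


guardrail_payload = build_response_content("Request blocked by policy.", "guardrail")
```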
```python
    input_guardrails=config_blob.input_guardrails,
    output_guardrails=config_blob.output_guardrails,
)
rephrase_llm_call = create_llm_call(
```
Can we use this directly instead of creating it with the same four fields again?
```python
rephrase_llm_call = create_llm_call(
    session,
    request=rephrase_call_request,
    job_id=job_id,
    project_id=project_id,
    organization_id=organization_id,
    resolved_config=config_blob,  # ← was rephrase_resolved_config
    original_provider=str(config_blob.completion.provider),
    chain_id=chain_id,
)
```
Took care of this as well.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/tests/services/llm/test_guardrails.py (1)
285-375: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win: Add type hints to all newly added test signatures.
Line 285 and lines 293-375 introduce new methods with untyped parameters (`job`, `**overrides`) and incomplete signature typing coverage. Please annotate all parameters and return types in this new test class.
As per coding guidelines, `**/*.py`: Always add type hints to all function parameters and return values in Python code.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/tests/services/llm/test_guardrails.py` around lines 285 - 375, The tests add untyped functions — annotate the fixture and all test/signature params and return types: update the job fixture signature to def job(self, db: Session) -> Job (or the project Job model type), annotate _call as def _call(self, db: Session, job: Job, **overrides: Any) -> Optional[UUID] (or UUID if always returning one), and add explicit return types to each test method (e.g., -> None) and typed local vars where needed; import typing symbols (Any, Optional, Dict) and the UUID/Job types used so create_llm_call, update_llm_call_response, and each test (test_success_returns_uuid, test_success_saves_original_input_and_job_id, test_success_saves_safe_text_as_content, test_success_saves_zero_usage, test_create_llm_call_error_returns_none, test_update_llm_call_response_error_returns_none, test_chain_id_forwarded_to_create_llm_call, test_request_metadata_forwarded_to_llm_call_request) have full parameter and return annotations.
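A minimal sketch of the requested annotations, assuming SQLModel's `Session`; the class name and the `Job` import path are guesses, and bodies are elided since only the signatures matter:

```python
# Hedged sketch of the requested typing; the Job import path is a guess and
# the test bodies are intentionally elided.
from typing import Any
from uuid import UUID

import pytest
from sqlmodel import Session

from app.models import Job  # hypothetical import path


class TestSaveRephraseGuardrailCall:  # invented class name
    @pytest.fixture
    def job(self, db: Session) -> Job:
        """Create and return a persisted Job for the rephrase tests."""
        ...

    def _call(self, db: Session, job: Job, **overrides: Any) -> UUID | None:
        """Invoke save_rephrase_guardrail_call with defaults plus overrides."""
        ...

    def test_success_returns_uuid(self, db: Session, job: Job) -> None:
        ...
```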
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@backend/app/tests/services/llm/test_guardrails.py`:
- Around line 286-291: The job fixture currently uses JobCrud.create (creating a
Job with JobType.LLM_API and trace_id="rephrase-test")—replace that direct CRUD
usage with the project’s test factory pattern: call the existing job factory
fixture (e.g., job_factory or create_job_factory) to build/persist a Job with
the same attributes (job_type=JobType.LLM_API, trace_id="rephrase-test"), commit
if the factory requires it, and return the created job; remove JobCrud and
direct DB manipulation and rely on the shared factory to ensure consistency with
other tests.
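A short sketch of what that factory-based fixture could look like; the `job_factory` fixture name, its signature, and the import path are assumptions drawn from the comment:

```python
# Hedged sketch: replace direct JobCrud usage with the shared factory fixture.
from collections.abc import Callable

import pytest

from app.models import Job, JobType  # hypothetical import path


@pytest.fixture
def job(job_factory: Callable[..., Job]) -> Job:
    """Build and persist a Job via the shared factory instead of JobCrud.create."""
    return job_factory(job_type=JobType.LLM_API, trace_id="rephrase-test")
```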
---
Outside diff comments:
In `@backend/app/tests/services/llm/test_guardrails.py`:
- Around line 285-375: The tests add untyped functions — annotate the fixture
and all test/signature params and return types: update the job fixture signature
to def job(self, db: Session) -> Job (or the project Job model type), annotate
_call as def _call(self, db: Session, job: Job, **overrides: Any) ->
Optional[UUID] (or UUID if always returning one), and add explicit return types
to each test method (e.g., -> None) and typed local vars where needed; import
typing symbols (Any, Optional, Dict) and the UUID/Job types used so
create_llm_call, update_llm_call_response, and each test
(test_success_returns_uuid, test_success_saves_original_input_and_job_id,
test_success_saves_safe_text_as_content, test_success_saves_zero_usage,
test_create_llm_call_error_returns_none,
test_update_llm_call_response_error_returns_none,
test_chain_id_forwarded_to_create_llm_call,
test_request_metadata_forwarded_to_llm_call_request) have full parameter and
return annotations.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 71bef9e5-160c-47ce-b490-2584c0d451d0
📒 Files selected for processing (3)
backend/app/crud/llm.py
backend/app/services/llm/jobs.py
backend/app/tests/services/llm/test_guardrails.py
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@backend/app/crud/llm.py`:
- Around line 271-305: The create/update sequence for the rephrase LLM call must
be made atomic to avoid partial persistence: ensure the creation and the
subsequent update happen inside a single DB transaction (or use new
non-committing helpers) so either both writes succeed or both are rolled back.
Concretely, wrap the calls to create_llm_call and update_llm_call_response (the
block that constructs LLMCallRequest and calls
create_llm_call/rephrase_llm_call.id then update_llm_call_response) in a single
transaction scope, or implement and call internal variants like
create_llm_call_no_commit and update_llm_call_response_no_commit and then commit
once after both succeed; if any error occurs, rollback and return None.
- Around line 290-291: The code is incorrectly storing job_id in the
provider_response_id field; instead keep provider_response_id reserved for the
actual provider response identifier (or None) and move job_id into a dedicated
metadata field (e.g., add metadata={"job_id": str(job_id)} or include "job_id"
inside the existing content/metadata dict). Update the call that currently sets
provider_response_id=str(job_id) to set provider_response_id=None (or the real
provider id variable) and add the job_id to content or metadata so traceability
is preserved without overloading provider_response_id.
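A hedged sketch combining both comments above: the `*_no_commit` helpers are the suggested (not yet existing) variants, the repo types and call signatures are assumptions, and `job_id` moves into metadata while `provider_response_id` stays reserved for a real provider id:

```python
# Sketch only: single-transaction persistence via hypothetical *_no_commit
# variants; provider_response_id is left for the provider's own id and job_id
# is carried in metadata instead.
from uuid import UUID

from sqlmodel import Session


def save_rephrase_guardrail_call(
    session: Session,
    request: "LLMCallRequest",    # repo type, assumed
    response: "LLMCallResponse",  # repo type, assumed
    job_id: UUID,
) -> UUID | None:
    try:
        call = create_llm_call_no_commit(session, request=request)
        update_llm_call_response_no_commit(
            session,
            call_id=call.id,
            response=response,
            provider_response_id=None,         # no provider response on a bypass
            metadata={"job_id": str(job_id)},  # traceability lives here instead
        )
        session.commit()  # both writes land together, or neither does
        return call.id
    except Exception:
        session.rollback()
        return None
```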
In `@backend/app/tests/services/llm/test_guardrails.py`:
- Around line 293-308: The helper _call lacks full type hints: annotate its
parameters and return type (e.g., def _call(db: Session, job: Job, **overrides:
Any) -> GuardrailCall) and import any needed types (Session, Job model,
GuardrailCall/return type from save_rephrase_guardrail_call, and Any); likewise
add explicit return/type annotations to the job fixture and to all new test
function parameters (e.g., job: Job) in the 310-375 range so every function
parameter and return value uses Python 3.11+ type hints consistent with
QueryParams, save_rephrase_guardrail_call, and the symbols
_CONFIG/_CONFIG_BLOB/_SAFE_TEXT used in the helper.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 31b92230-61b7-46a1-8660-89bfe1d4ae08
📒 Files selected for processing (4)
backend/app/crud/llm.py
backend/app/services/llm/guardrails.py
backend/app/services/llm/jobs.py
backend/app/tests/services/llm/test_guardrails.py
✅ Files skipped from review due to trivial changes (1)
- backend/app/services/llm/guardrails.py
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/app/services/llm/jobs.py
Summary
Target issue is #828
Notes
Release Notes
New Features
Tests
Summary by CodeRabbit
New Features
Reliability
Tests