
Return error if the chat answer is empty#487

Merged
josephjclark merged 3 commits into main from empty-response-bug
May 11, 2026

Conversation

@hanna-paasivirta
Contributor

@hanna-paasivirta hanna-paasivirta commented May 7, 2026

Short Description

workflow_chat, global_chat, and job_chat could return HTTP 200 with response: "" when the LLM produced no usable text. Lightning treated the 200 as success, leaving the user-side message stuck in :processing. Each service now raises ApolloError when there's no usable text, so Lightning's handle_error_response routes it into the error tuple.

Fixes #484

Implementation Details

ApolloError usage

Each service raises ApolloError(502, ...) when the text output is empty after the existing retry/loop logic. The type is OUTPUT_TRUNCATED when stop_reason=max_tokens, and EMPTY_OUTPUT otherwise. The code and types are new values within the existing ApolloError shape.

Sentry tracking

We attach Sentry reporting before each raise. entry.py's existing capture_exception records one event per failed request.

Tests

Added temporary tests in separate test files that won't run automatically, to avoid conflicts with our testing architecture, which is currently being reworked. These tests will need to be rewritten to fit the new test structure.

AI Usage

Please disclose how you've used AI in this work (it's cool, we just want to know!):

  • Code generation (copilot but not intellisense)
  • Learning or fact checking
  • Strategy / design
  • Optimisation / refactoring
  • Translation / spellchecking / doc gen
  • Other
  • I have not used AI

You can read more details in our Responsible AI Policy

@hanna-paasivirta hanna-paasivirta changed the title return error Return error if the chat answer is empty May 7, 2026
@hanna-paasivirta hanna-paasivirta marked this pull request as ready for review May 7, 2026 16:24
Collaborator

@josephjclark josephjclark left a comment


I've not been able to repro but this looks good.

Interesting use of mocking. Let's consider this pattern more when we come to the proper integration tests.

I've added a changeset and will prep the release.

@josephjclark josephjclark merged commit 9de5dd7 into main May 11, 2026
2 checks passed
@josephjclark josephjclark deleted the empty-response-bug branch May 11, 2026 13:51


Development

Successfully merging this pull request may close these issues.

Chat services return success with empty response when LLM output is unparseable or empty

2 participants