
fix(event): correct fork rollback handling in solidity event maps #6718

Merged: lvs0075 merged 1 commit into tronprotocol:develop from xxo1shine:fix/event-removed-filter-solidity-maps-sync-1 on May 8, 2026

Conversation

@xxo1shine (Collaborator) commented Apr 28, 2026

What does this PR do?

Fix three correctness bugs in event service rollback handling that cause solidity-side event/log triggers to either leak fork-removed entries or be lost during a reorg.

When a block is rolled back during fork resolution, its contract triggers are marked removed=true. The solidity event pipeline depends on this flag plus an in-memory map (Args.getSolidityContractEventTriggerMap / getSolidityContractLogTriggerMap) keyed by block number. Two paths through this pipeline were broken:

  1. Removed triggers reached the solidity sink. ContractTriggerCapsule.processTrigger()
    only checked isSolidityEventTriggerEnable / isSolidityLogTriggerEnable, never the
    removed flag. Triggers from forked-out blocks were emitted as valid solidity events.

  2. Async enqueue raced with reorg cache clearing. Manager.postContractTrigger
    posted to a global triggerCapsuleQueue for async draining, but
    reOrgContractTrigger clears the in-memory cache synchronously. Capsules already
    in the queue could reference state that was about to be invalidated, leading to
    either lost events or stale cache reads.

  3. Solidity caches were not cleared on block-process failure or shutdown.
    getSolidityContract{Log,Event}TriggerMap entries are populated per block.
    If processBlock threw after population, or on node shutdown, those entries
    leaked.
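
Bug 1 can be illustrated with a minimal sketch. All names below are hypothetical stand-ins, not the actual java-tron classes: the point is only that the solidity sink must consult the removed flag in addition to the feature switches.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a contract trigger; `removed` is set true
// when the trigger's block is rolled back during fork resolution.
class TriggerSketch {
    final long blockNum;
    final boolean removed;

    TriggerSketch(long blockNum, boolean removed) {
        this.blockNum = blockNum;
        this.removed = removed;
    }
}

public class SoliditySinkSketch {
    // Before the fix, only the enable switches were checked, so triggers
    // from forked-out blocks leaked into the solidity sink.
    static boolean shouldEmit(TriggerSketch t, boolean solidityTriggerEnabled) {
        return solidityTriggerEnabled && !t.removed; // the added `removed` check
    }

    public static void main(String[] args) {
        List<Long> emitted = new ArrayList<>();
        TriggerSketch kept = new TriggerSketch(100L, false);
        TriggerSketch forkedOut = new TriggerSketch(101L, true);
        for (TriggerSketch t : new TriggerSketch[] {kept, forkedOut}) {
            if (shouldEmit(t, true)) {
                emitted.add(t.blockNum);
            }
        }
        // Only block 100's trigger survives; the forked-out one is dropped.
        System.out.println(emitted);
    }
}
```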

Why are these changes required?

This PR has been tested by:

  • Unit Tests
  • Manual Testing

Follow up

Extra details

Fixes #6678

@github-actions github-actions Bot requested a review from 0xbigapple April 28, 2026 10:21
@xxo1shine xxo1shine force-pushed the fix/event-removed-filter-solidity-maps-sync-1 branch from 6102162 to 2831e35 Compare April 28, 2026 11:07
@halibobo1205 halibobo1205 added this to the GreatVoyage-v4.8.2 milestone Apr 28, 2026
@halibobo1205 halibobo1205 added the topic:event subscribe transaction trigger, block trigger, contract event, contract log label Apr 28, 2026
@lvs0075 lvs0075 requested a review from yanghang8612 April 29, 2026 12:22
private void clearSolidityContractTriggerCache(long blockNum) {
if (eventPluginLoaded
&& (EventPluginLoader.getInstance().isSolidityEventTriggerEnable()
|| EventPluginLoader.getInstance().isSolidityLogTriggerEnable())) {
Reviewer (Collaborator) commented:
[SHOULD] clearSolidityContractTriggerCache guard is narrower than the write side; redundancy path leaks the map.
ContractTriggerCapsule.java:162 (the event-as-log redundancy path) also writes to solidityContractLogTriggerMap under this combination:

isContractLogTriggerEnable      = true
isContractLogTriggerRedundancy  = true
isSolidityLogTriggerRedundancy  = true
isSolidityLogTriggerEnable      = false
isSolidityEventTriggerEnable    = false

The outer condition (isContractLogEnable && isContractLogRedundancy) || ... is true, so execution enters the redundancy block; the inner isSolidityLogTriggerRedundancy=true writes to the solidity log map. But both clear-side guard predicates are false, so the clear is skipped.

Subtler still: the consumer postSolidityTrigger is also gated by isSolidityLogTriggerEnable, so under this configuration the consumer does not run either. The result: solidityContractLogTriggerMap accumulates one entry per event-bearing block, never consumed, never cleared — a memory leak. It does not produce Issue #6678's "duplicate downstream events" symptom (no downstream is wired up), but since the PR formalizes the clearing responsibility, the clear's semantics should be aligned with the write side.

Suggestion (preferred): simply drop the guard. ConcurrentHashMap.remove(key) on a non-existent key is an O(1) bucket lookup returning null — the cost is negligible. The guard was only a defensive optimization; without it, write-side condition changes won't desync the clear:

private void clearSolidityContractTriggerCache(long blockNum) {
  Args.getSolidityContractLogTriggerMap().remove(blockNum);
  Args.getSolidityContractEventTriggerMap().remove(blockNum);
}
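
For context on the "negligible cost" claim, here is a standalone, standard-library-only demo showing that ConcurrentHashMap.remove on an absent key is simply a no-op returning null:

```java
import java.util.concurrent.ConcurrentHashMap;

public class RemoveAbsentKeyDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<Long, String> map = new ConcurrentHashMap<>();
        map.put(42L, "trigger");

        String first = map.remove(42L);   // key present: returns the old value
        String second = map.remove(42L);  // key absent: cheap no-op, returns null

        System.out.println(first);  // trigger
        System.out.println(second); // null
    }
}
```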

@xxo1shine (Author) replied:

Enabling only isContractLogTriggerEnable = true and isContractLogTriggerRedundancy = true will not write data; only enabling the solidity event switch will write data.

} catch (Throwable throwable) {
logger.error(throwable.getMessage(), throwable);
khaosDb.removeBlk(block.getBlockId());
clearSolidityContractTriggerCache(block.getNum());
Reviewer (Collaborator) commented:
[SHOULD] switchFork has two applyBlock failure paths that don't clear the cache.

@xxo1shine (Author) replied May 8, 2026:

The missing cleanup on repeated chain switching is a historical oversight and will not be addressed in this PR; it can be discussed separately in a follow-up issue.

try {
contractTriggerCapsule.processTrigger();
} catch (Throwable throwable) {
logger.warn("Post contract trigger failed. {}", throwable.getMessage());
Reviewer (Collaborator) commented:
[SHOULD] logger.warn loses stack trace; trigger failures will be undebuggable.

Suggestion:

} catch (Throwable throwable) {
  logger.warn("Post contract trigger failed.", throwable);
}

@xxo1shine (Author) replied:
ok

}

@Test
public void testReOrgContractTriggerClearsMap() throws Exception {
Reviewer (Collaborator) commented:
[NIT] testReOrgContractTriggerClearsMap doesn't actually call reOrgContractTrigger — the name is misleading

@xxo1shine (Author) replied:
@0xbigapple, you're right. The test name promises the reorg path but only exercises clearSolidityContractTriggerCache, which also overlaps with testClearSolidityContractTriggerCache below. I'll rewrite this case to actually go through reOrgContractTrigger (construct a forked block and assert the corresponding entry is removed from the cache) so the name and behavior line up.

…trigger processing

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@xxo1shine xxo1shine force-pushed the fix/event-removed-filter-solidity-maps-sync-1 branch from 2831e35 to 7d33b6a Compare May 8, 2026 09:44
Comment thread framework/src/main/java/org/tron/core/db/Manager.java
@yanghang8612 (Collaborator) left a comment:
LGTM

@lvs0075 lvs0075 merged commit 55da98e into tronprotocol:develop May 8, 2026
12 checks passed
@github-project-automation github-project-automation Bot moved this to Done in java-tron May 8, 2026

Labels

topic:event subscribe transaction trigger, block trigger, contract event, contract log

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

[Bug]Event cache not cleared on reorg causing duplicate and inconsistent event delivery

6 participants