A .NET 9 console application implementing distributed file storage with fragmentation, multi-provider distribution, and cryptographic integrity verification.
%%{init: {'theme': 'base', 'themeVariables': {
'background': '#f8f9fa',
'primaryTextColor': '#2d3748',
'secondaryTextColor': '#4a5568',
'lineColor': '#718096'
}}}%%
graph TB
subgraph Presentation["Presentation Layer"]
Console["DistributedFileFragmentor.Console<br/>(CLI Commands)"]
end
subgraph Infrastructure["Infrastructure Layer"]
Infra["DistributedFileFragmentor.Infrastructure<br/>(Storage, EF Core, Resilience)"]
end
subgraph Application["Application Layer"]
App["DistributedFileFragmentor.Application<br/>(CQRS, Abstractions)"]
end
subgraph Domain["Domain Layer"]
Dom["DistributedFileFragmentor.Domain<br/>(Entities, Value Objects)"]
end
subgraph Shared["Shared Layer"]
Common["DistributedFileFragmentor.Shared.Common<br/>(Utilities)"]
end
Console -->|depends on| Infra
Console -->|depends on| App
Console -->|depends on| Dom
Console -->|depends on| Common
Infra -->|implements| App
Infra -->|depends on| Dom
Infra -->|depends on| Common
App -->|depends on| Dom
App -->|depends on| Common
Dom -->|no dependencies| Dom
Common -->|no dependencies| Common
style Presentation fill:#e8f5e9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style Console fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style Infrastructure fill:#e8f5e9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style Infra fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style Application fill:#e8f5e9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style App fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style Domain fill:#e8f5e9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style Dom fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style Shared fill:#e8f5e9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style Common fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
Layer Responsibilities:
- Domain: Entities (FileMetadata, ChunkMetadata, UploadSession), value objects, enums
- Application: CQRS commands/handlers, abstractions (IAppDbContext, IStorageProvider, ICleanupService)
- Infrastructure: EF Core, storage providers (FileSystem, Database), resilience patterns
- Presentation: CLI commands with System.CommandLine
- Shared.Common: Cross-cutting utilities (ReadOnlyMemoryStream)
Core Capabilities:
- File Fragmentation: Adaptive chunking (50-500MB) with SHA-256 hashing
- Batch Operations: Parallel processing with isolated DbContext scopes
- Multi-Provider Storage: Round-robin distribution (FileSystem + Database BLOB)
- Reassembly: Parallel chunk downloads with sequential writes and integrity verification
- Deletion: Idempotent chunk cleanup with partial failure recovery
- Maintenance: Orphaned chunk cleanup with dry-run mode
- Storage Insights: Multi-format reporting (Console, JSON, CSV, Markdown)
- Resilience: Exponential backoff retry + circuit breaker patterns
- Security: Path traversal prevention, symlink detection, glob pattern validation
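The round-robin distribution strategy above reduces to one line of arithmetic. A minimal sketch (the project itself is C#; Python is used here only for brevity, and `select_provider` is a hypothetical name):

```python
# Deterministic round-robin: the chunk index modulo the provider count
# picks the storage target, so distribution needs no shared state.
def select_provider(chunk_index: int, providers: list[str]) -> str:
    return providers[chunk_index % len(providers)]

providers = ["FileSystem", "Database"]
assignments = [select_provider(i, providers) for i in range(4)]
```

With two providers, even-indexed chunks land on FileSystem and odd-indexed chunks on Database.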
Technology Stack:
- .NET 9 with EF Core 9 + SQL Server LocalDB
- Mediator.SourceGenerator (reflection-free CQRS)
- ZLogger (zero-allocation logging)
- System.CommandLine 2.0.3
- NUnit 4.4.0 + NSubstitute 5.3.0 + FluentAssertions 8.8.0
Prerequisites:
- .NET 9 SDK
- SQL Server LocalDB
# Initialize database
dotnet ef database update --project src/ClassLibraries/DistributedFileFragmentor.Infrastructure --startup-project src/Presentation/DistributedFileFragmentor.Console
# Build
dotnet build --configuration Release
# Single file
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- fragment --files "C:\path\to\file.zip"
# Multiple files with glob patterns
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- fragment --files "C:\data\*.zip" "C:\docs\*.pdf"
Fragments one or more files into adaptive-sized chunks with SHA-256 deduplication, distributing chunks round-robin across FileSystem and Database providers. Supports glob patterns.
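Stripped of deduplication and provider distribution, the core fragmentation loop reads fixed-size chunks and hashes each with SHA-256. A simplified sketch (the real service is C# and streams through MemoryPool buffers; `fragment` is a hypothetical name):

```python
import hashlib
from pathlib import Path

def fragment(path: Path, chunk_size: int) -> list[tuple[int, str, bytes]]:
    # Read the file sequentially, emitting (index, sha256-hex, payload)
    # for each chunk until the stream is exhausted.
    chunks = []
    with path.open("rb") as f:
        index = 0
        while data := f.read(chunk_size):
            chunks.append((index, hashlib.sha256(data).hexdigest(), data))
            index += 1
    return chunks
```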
%%{init: {'theme': 'base', 'themeVariables': {
'background': '#f8f9fa',
'primaryTextColor': '#2d3748',
'secondaryTextColor': '#4a5568',
'lineColor': '#388e3c'
}}}%%
flowchart TD
A([CLI: fragment]) --> B[Parse --files path1 path2 ...]
B --> C[Create ConsoleProgressReporter]
C --> D[Send FragmentFilesCommand via Mediator]
D --> E[FragmentFilesHandler]
E --> F{Single file?}
F -- Yes --> G[concurrency = 1]
F -- No --> H[concurrency = BatchConcurrency]
G & H --> I[BatchOrchestrator.ExecuteAsync]
I --> J[Per-file: CreateAsyncScope\nIsolated DbContext]
J --> K[FileFragmentationService.FragmentAsync]
K --> L[Validate file exists]
L --> M{File not found?}
M -- Yes --> ERR1[❌ FileNotFoundException\nAdd to Failed bag]
M -- No --> N[Open FileStream\nSequentialScan]
N --> O[Sha256StreamHasher\nComputeSha256Async]
O --> P{Hash exists in DB?\nUploadSession = Completed}
P -- Yes --> Q[Return cached FileId\nDeduplication hit]
P -- No --> R[Create UploadSession\nStatus = InProgress]
R --> S[AdaptiveChunkingCalculator\nDetermine chunk size]
S --> T{File size?}
T -- "< 100MB" --> T1[1 chunk]
T -- "100–500MB" --> T2[50MB chunks]
T -- "500MB–2GB" --> T3[100MB chunks]
T -- "2–10GB" --> T4[200MB chunks]
T -- "> 10GB" --> T5[500MB chunks]
T1 & T2 & T3 & T4 & T5 --> U[Producer: Read chunks\nMemoryPool buffers]
U --> V[Per chunk: SHA-256 hash]
V --> W[RoundRobinDistributionStrategy\nSelect provider by chunkIndex % N]
W --> X[ResilientStorageProvider\nRetry 3x + CircuitBreaker]
X --> Y{Provider}
Y -- FileSystem --> Y1[Write .dff file\nto BasePath]
Y -- Database --> Y2[INSERT ChunkBlob\nSQL Server VARBINARY]
Y1 & Y2 --> Z[Save ChunkMetadata to DB]
Z --> AA{DbUpdateException?\nHash conflict}
AA -- Yes --> AB[Return existing FileId\nRace condition safe]
AA -- No --> AC[Update UploadSession\nStatus = Completed]
AC --> AD[✅ FileId returned\nAdd to Successful bag]
AB --> AD
AD --> AE[BatchOrchestrator\nAggregate results]
AE --> AF[ConsoleProgressReporter\nUpdate % every 10 items]
AF --> AG([Log: Successful / Failed counts])
style ERR1 fill:#ffcdd2,stroke:#c62828,color:#2d3748
style Q fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style AD fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style AG fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
# Single file
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- reassemble --ids "a1b2c3d4-e5f6-7890-abcd-ef1234567890" --output "C:\restored\file.zip"
# Multiple files to directory
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- reassemble --ids "guid1" "guid2" --output "C:\restored"
Reassembles one or more fragmented files from their chunks. Parallel chunk download via producer-consumer channels, per-chunk SHA-256 verification, and final file integrity check.
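The producer-consumer shape of reassembly can be sketched with a bounded queue standing in for the bounded channel (illustrative Python; the real service uses System.Threading.Channels, and the names here are hypothetical):

```python
import hashlib
import queue
import threading

def reassemble(chunks, out, capacity=4):
    # Producer feeds (index, data, expected-hash) through a bounded queue;
    # the consumer enforces strict index order and verifies each chunk's
    # SHA-256 before writing it to the output stream.
    q = queue.Queue(maxsize=capacity)

    def producer():
        for index, (data, expected) in enumerate(chunks):
            q.put((index, data, expected))
        q.put(None)  # completion sentinel

    threading.Thread(target=producer, daemon=True).start()
    expected_index = 0
    while (item := q.get()) is not None:
        index, data, expected = item
        if index != expected_index:
            raise RuntimeError("chunk order violation")
        if hashlib.sha256(data).hexdigest() != expected:
            raise RuntimeError("checksum mismatch")
        out.write(data)
        expected_index += 1
```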
%%{init: {'theme': 'base', 'themeVariables': {
'background': '#f8f9fa',
'primaryTextColor': '#2d3748',
'secondaryTextColor': '#4a5568',
'lineColor': '#388e3c'
}}}%%
flowchart TD
A([CLI: reassemble]) --> B[Parse --ids guid1 guid2...\n--output path]
B --> C{Single ID?}
C -- Yes --> D[requests = ReassemblyRequest\nid → output as file path]
C -- No --> E[Query DB for OriginalFileName\nper ID]
E --> F[Directory.CreateDirectory output\nBuild ReassemblyRequest list]
D & F --> G[Create ConsoleProgressReporter]
G --> H[Send ReassembleFilesCommand via Mediator]
H --> I[ReassembleFilesHandler]
I --> J{Single file?}
J -- Yes --> K[concurrency = 1]
J -- No --> L[concurrency = BatchConcurrency]
K & L --> M[BatchOrchestrator.ExecuteAsync]
M --> N[Per-file: CreateAsyncScope\nIsolated DbContext]
N --> O[FileReassemblyService.ReassembleAsync]
O --> P[Query FileMetadata + Chunks\nInclude navigation]
P --> Q{FileMetadata found?}
Q -- No --> ERR1[❌ InvalidOperationException\nMetadata not found]
Q -- Yes --> R[Sort chunks by ChunkIndex]
R --> S[Open output FileStream\nMode=Create ReadWrite]
S --> T[Create BoundedChannel\nCapacity=IOBoundConcurrency]
T --> U[Producer Task\nParallel chunk downloads]
U --> V[Per chunk: Semaphore wait]
V --> W[Lookup provider\nby StorageProviderType]
W --> X{Provider}
X -- FileSystem --> X1[Read .dff file stream]
X -- Database --> X2[SQL SequentialAccess\nStream BLOB]
X1 & X2 --> Y[Buffer to MemoryStream\nMake seekable]
Y --> Z[Write to Channel\nindex order preserved]
Z --> AA[Consumer Task\nSequential write]
AA --> AB{Chunk index\nin sequence?}
AB -- No --> ERR2[❌ Chunk order violation\nCleanup partial file]
AB -- Yes --> AC[Verify chunk SHA-256]
AC --> AD{Hash match?}
AD -- No --> ERR3[❌ Checksum mismatch\nCleanup partial file]
AD -- Yes --> AE[Write chunk bytes\nto output FileStream]
AE --> AF[Next chunk]
AF --> AA
AE --> AG[All chunks written\nFlush + Seek to 0]
AG --> AH[Compute final\nSHA-256 of output file]
AH --> AI{Final hash ==\nFileMetadata.Hash?}
AI -- No --> ERR4[❌ File corrupt\nCleanup + throw]
AI -- Yes --> AJ[✅ Integrity verified\nReassembly complete]
AJ --> AK[ConsoleProgressReporter update]
AK --> AL([Log: Successful / Failed counts])
style ERR1 fill:#ffcdd2,stroke:#c62828,color:#2d3748
style ERR2 fill:#ffcdd2,stroke:#c62828,color:#2d3748
style ERR3 fill:#ffcdd2,stroke:#c62828,color:#2d3748
style ERR4 fill:#ffcdd2,stroke:#c62828,color:#2d3748
style AJ fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style AL fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
# Single file
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- delete --ids "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
# Multiple files
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- delete --ids "guid1" "guid2"
# Delete all files (requires confirmation)
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- delete --all
Deletes one or more files and all associated chunks from all storage providers. --all requires interactive confirmation. Parallel execution with idempotent chunk cleanup.
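Idempotent chunk cleanup means a missing file counts as success, not failure, so a retry after a partial failure converges instead of erroring. A minimal sketch of the idea (Python; `delete_chunk` is a hypothetical name):

```python
import os

def delete_chunk(path: str) -> bool:
    # Idempotent delete: an already-missing chunk is treated as success
    # so retries after a partial failure converge rather than abort.
    try:
        os.remove(path)
        return True
    except FileNotFoundError:
        return True
    except OSError:
        return False  # genuine failure; reported, but the batch continues
```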
%%{init: {'theme': 'base', 'themeVariables': {
'background': '#f8f9fa',
'primaryTextColor': '#2d3748',
'secondaryTextColor': '#4a5568',
'lineColor': '#388e3c'
}}}%%
flowchart TD
A([CLI: delete]) --> B[Parse --ids guid1 guid2...\nOR --all]
B --> C{--all flag?}
C -- Yes --> D[Query all FileIds from DB\nFileMetadata.Select id]
D --> E{Files found?}
E -- No --> F([Log: No files found. Exit])
E -- Yes --> G[Warn: About to delete N files\nConfirm? y/N]
G --> H{Input == 'y'?}
H -- No --> I([Log: Cancelled. Exit])
H -- Yes --> J[fileIds = all IDs from DB]
C -- No --> K{--ids provided?}
K -- No --> ERR1[❌ Log: Provide --ids or --all\nReturn]
K -- Yes --> J2[fileIds = provided IDs]
J & J2 --> L[Create ConsoleProgressReporter]
L --> M[Send DeleteFilesCommand via Mediator]
M --> N[DeleteFilesHandler]
N --> O{Single file?}
O -- Yes --> P[concurrency = 1]
O -- No --> Q[concurrency = BatchConcurrency]
P & Q --> R[BatchOrchestrator.ExecuteAsync]
R --> S[Per-fileId: CreateAsyncScope\nIsolated DbContext]
S --> T[IFileDeletionService.DeleteAsync]
T --> U[Query FileMetadata + Chunks\nby FileId]
U --> V{FileMetadata exists?}
V -- No --> W[Log warning\nreturn Success=false]
V -- Yes --> X[Per chunk: DeleteChunkAsync]
X --> Y{Provider type}
Y -- FileSystem --> Y1[File.Delete .dff file\nIdempotent: skip if missing]
Y -- Database --> Y2[ExecuteDeleteAsync\nWHERE Id = chunkId]
Y1 & Y2 --> Z[Delete FileMetadata\n+ UploadSession from DB]
Z --> AA[SaveChangesAsync]
AA --> AB[✅ Return DeleteFileResult\nSuccess=true, ChunksDeleted=N]
W --> AC[Add to Failed bag]
AB --> AD[Add to Successful bag]
AC & AD --> AE[BatchOrchestrator\nAggregate results]
AE --> AF[ConsoleProgressReporter update]
AF --> AG([Log: Total / Successful / Failed\nTotal chunks deleted])
style ERR1 fill:#ffcdd2,stroke:#c62828,color:#2d3748
style W fill:#fff9c4,stroke:#f9a825,color:#2d3748
style AB fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style AG fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
# Console view
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- storage-insights --detailed --debug
# Export reports
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- storage-insights --format json --output report.json
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- storage-insights --format csv --output ./reports/
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- storage-insights --format markdown --output report.md
Queries storage state across all providers and renders results in console, JSON, CSV, or Markdown format. Supports provider filter, detailed file metadata, and debug statistics.
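The reporter-per-format idea — one storage-state dataset, several renderers keyed by format — can be sketched as a single dispatch function (Python for illustration; the project instead resolves `IStorageReporter` implementations by format):

```python
import csv
import io
import json

def render(rows: list[dict], fmt: str) -> str:
    # One dataset, multiple output formats; unknown formats fail loudly,
    # mirroring the "no reporter for format" error path.
    if fmt == "json":
        return json.dumps(rows, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()
    if fmt == "markdown":
        header = "| " + " | ".join(rows[0].keys()) + " |"
        sep = "|" + "---|" * len(rows[0])
        lines = ["| " + " | ".join(str(v) for v in r.values()) + " |" for r in rows]
        return "\n".join([header, sep, *lines])
    raise ValueError(f"no reporter for format: {fmt}")
```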
%%{init: {'theme': 'base', 'themeVariables': {
'background': '#f8f9fa',
'primaryTextColor': '#2d3748',
'secondaryTextColor': '#4a5568',
'lineColor': '#388e3c'
}}}%%
flowchart TD
A([CLI: storage-insights / insights]) --> B["Parse options:\n--provider, --format\n--output, --detailed, --debug"]
B --> C[Parse StorageProviderType\nfrom --provider]
C --> D{Valid provider?}
D -- No --> D1[providerEnum = Unknown\nQuery all providers]
D -- Yes --> D2[providerEnum = FileSystem\nor Database]
D1 & D2 --> E[Parse StorageReportFormat\nfrom --format]
E --> F{Valid format?}
F -- No --> F1[Default: Console]
F -- Yes --> F2[console / json / csv / markdown]
F1 & F2 --> G[Send GetStorageStateQuery via Mediator]
G --> H[GetStorageStateHandler]
H --> I[Query FileMetadata + Chunks\nfrom DB server-side]
I --> J[Client-side GroupBy\nper StorageProviderType]
J --> K[Per provider:\nCall GetStateAsync]
K --> L{Provider}
L -- FileSystem --> L1[Enumerate .dff files\nCount + SUM bytes\nLock-free Interlocked]
L -- Database --> L2[SQL: COUNT + SUM DATALENGTH\nSingle round-trip]
L1 & L2 --> M[Build StorageExplorationResult\nwith DebugInfo if requested]
M --> N[Return StorageExplorationResult]
N --> O{Format?}
O -- Console --> P[RenderConsoleView]
P --> P1[Log: Total providers\nchunks, bytes summary]
P1 --> P2{--detailed?}
P2 -- Yes --> P3[Render per-file metadata\nID, name, size, chunks, hash, distribution]
P2 -- No --> P4[Skip file details]
P3 & P4 --> P5{--debug?}
P5 -- Yes --> P6[Render debug stats\nAvg/Max chunk size per provider]
P5 -- No --> P7([✅ Console output complete])
P6 --> P7
O -- json/csv/markdown --> Q[RenderReportViewAsync]
Q --> R{Output path provided?}
R -- No --> ERR1[❌ Output path required]
R -- Yes --> S[Resolve matching IStorageReporter\nby format]
S --> T{Reporter found?}
T -- No --> ERR2[❌ No reporter for format]
T -- Yes --> U[Directory.CreateDirectory]
U --> V[reporter.GenerateAsync\nBuild artifacts]
V --> W[Per artifact:\nFile.WriteAllTextAsync]
W --> X{Write error?}
X -- "UnauthorizedAccess /\nDirectoryNotFound / IOException" --> ERR3[❌ Log error\nAdd to failedFiles]
X -- Success --> Y[Log: Generated filepath]
Y --> Z{Any failed files?}
Z -- Yes --> ERR4[❌ Throw with failure summary]
Z -- No --> ZZ([✅ Report files written])
style ERR1 fill:#ffcdd2,stroke:#c62828,color:#2d3748
style ERR2 fill:#ffcdd2,stroke:#c62828,color:#2d3748
style ERR3 fill:#ffcdd2,stroke:#c62828,color:#2d3748
style ERR4 fill:#ffcdd2,stroke:#c62828,color:#2d3748
style P7 fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
style ZZ fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
# Dry run (preview orphaned chunks)
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- cleanup --dry-run
# Execute cleanup (delete orphaned chunks)
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- cleanup
# Filter by provider (case-insensitive)
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- cleanup --provider filesystem
dotnet run --project src/Presentation/DistributedFileFragmentor.Console -- cleanup --provider database
Scans storage providers for orphaned chunks (no matching FileMetadata in DB) and deletes them. Supports --dry-run preview mode and --provider scoping to FileSystem or Database.
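Orphan detection is a set difference between chunk references found in the provider and chunk IDs known to the database; `--dry-run` counts without deleting. A compact sketch (Python; the names are hypothetical and the real service streams both sides via IAsyncEnumerable):

```python
def cleanup(stored_ids, known_ids, delete, dry_run=True):
    # stored - known = orphans. Dry-run only counts; otherwise delete each
    # orphan, tallying failures without aborting the sweep.
    orphans = set(stored_ids) - set(known_ids)
    deleted = failed = 0
    if not dry_run:
        for chunk_id in orphans:
            try:
                delete(chunk_id)
                deleted += 1
            except OSError:
                failed += 1
    return {"orphaned": len(orphans), "deleted": deleted, "failed": failed}
```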
%%{init: {'theme': 'base', 'themeVariables': {
'background': '#f8f9fa',
'primaryTextColor': '#2d3748',
'secondaryTextColor': '#4a5568',
'lineColor': '#388e3c'
}}}%%
flowchart TD
A([CLI: cleanup]) --> B["Parse options:\n--dry-run, --provider"]
B --> C{--provider valid?}
C -- "Invalid / empty" --> C1[Log warning if non-empty\nproviderEnum = Unknown\nProcess all providers]
C -- Valid --> C2[providerEnum = FileSystem\nor Database]
C1 & C2 --> D[Create SimpleProgressReporter]
D --> E[Send CleanupOrphanedChunksCommand\nvia Mediator]
E --> F[CleanupOrphanedChunksHandler]
F --> G[Resolve provider list\nvia IStorageProviderFactory]
G --> H{providerEnum filter?}
H -- "Unknown: all" --> I[Process all providers]
H -- Specific --> J[Filter to matching provider]
I & J --> K[Per provider:\nICleanupService.CleanupAsync]
K --> L[StorageCleanupService]
L --> M[Stream known ChunkIds\nfrom DB via IAsyncEnumerable]
M --> N[Stream stored chunk references\nfrom provider]
N --> O[Set difference:\nstored - known = orphans]
O --> P{Orphan found?}
P -- No orphans --> Q[ProviderCleanupResult\nOrphanedCount=0]
P -- Yes --> R{--dry-run?}
R -- Yes --> S[Count orphans only\nDo NOT delete]
R -- No --> T[provider.DeleteChunkAsync\nper orphan]
T --> U{Delete error?}
U -- Exception --> V[Log error\nIncrement FailedCount]
U -- Success --> W[Increment DeletedCount]
S --> X[ProviderCleanupResult\nOrphanedCount=N, Deleted=0]
W & V --> Y[ProviderCleanupResult\nOrphanedCount / Deleted / Failed]
Q & X & Y --> Z[SimpleProgressReporter.Report]
Z --> AA[Aggregate ProviderCleanupResult\ninto CleanupResult]
AA --> BB{--dry-run?}
BB -- Yes --> BC[Log: Found N orphaned chunks\nRun without --dry-run to delete]
BB -- No --> BD[Log: Deleted N orphaned chunks]
BC & BD --> BE[Per provider: log breakdown\nOrphaned / Deleted / Failed]
BE --> BF{Any provider errors?}
BF -- Yes --> BG([⚠️ Log errors per provider])
BF -- No --> BH([✅ Cleanup complete])
style V fill:#fff9c4,stroke:#f9a825,color:#2d3748
style BG fill:#fff9c4,stroke:#f9a825,color:#2d3748
style S fill:#e3f2fd,stroke:#1565c0,color:#2d3748
style BH fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#2d3748
All file operations (single and batch) use IServiceScopeFactory to create an isolated child scope per file, resolving a fresh IAppDbContext instance per task. This prevents DbContext sharing violations across parallel operations. Orchestration is centralized in BatchOrchestrator.
FileFragmentationService catches DbUpdateException on unique hash constraint violations (IX_FileMetadata_Hash), queries the existing record, and returns the cached FileId — enabling transparent deduplication.
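The race-safe pattern — attempt the insert, and on a unique-index violation return the row that won — looks like this in miniature (sqlite3 standing in for SQL Server; `store_file` and the schema are illustrative only):

```python
import sqlite3

def store_file(conn: sqlite3.Connection, file_hash: str, name: str) -> int:
    # Optimistic insert; a UNIQUE(hash) violation means another writer
    # already stored this content, so return the existing row id
    # (deduplication hit) instead of failing.
    try:
        cur = conn.execute(
            "INSERT INTO file_metadata (hash, name) VALUES (?, ?)",
            (file_hash, name),
        )
        conn.commit()
        return cur.lastrowid
    except sqlite3.IntegrityError:
        row = conn.execute(
            "SELECT id FROM file_metadata WHERE hash = ?", (file_hash,)
        ).fetchone()
        return row[0]
```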
GetStorageStateHandler splits the query: chunk data is loaded server-side, then grouped by provider client-side, avoiding EF Core GroupBy translation errors.
FileReassemblyService buffers downloaded chunks to MemoryStream for hash verification (requires seekable streams).
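The same trick in Python terms: drain the possibly non-seekable chunk stream into an in-memory buffer, hash it once, then rewind for the writer (illustrative only; the C# service uses MemoryStream):

```python
import hashlib
import io

def verified_buffer(source, expected_hash: str) -> io.BytesIO:
    # Non-seekable provider streams are buffered so the bytes can be
    # hashed once and then re-read from position 0 by the file writer.
    buffer = io.BytesIO(source.read())
    if hashlib.sha256(buffer.getvalue()).hexdigest() != expected_hash:
        raise ValueError("checksum mismatch")
    buffer.seek(0)
    return buffer
```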
Adaptive Chunk Sizing:
- < 100MB: Single chunk
- 100-500MB: 50MB chunks
- 500MB-2GB: 100MB chunks
- 2-10GB: 200MB chunks
- > 10GB: 500MB chunks
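The tiers above translate directly into a threshold ladder; note the boundary inclusivity chosen here is an assumption, not taken from the source:

```python
MB, GB = 1024 ** 2, 1024 ** 3

def chunk_size(file_size: int) -> int:
    # Threshold ladder matching the tiers above; files under 100 MB are
    # stored as a single chunk the size of the file itself.
    if file_size < 100 * MB:
        return file_size
    if file_size <= 500 * MB:
        return 50 * MB
    if file_size <= 2 * GB:
        return 100 * MB
    if file_size <= 10 * GB:
        return 200 * MB
    return 500 * MB
```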
- MemoryPool<byte> for chunk buffering
- ArrayPool<byte> for database streaming (25% memory reduction)
- Producer-consumer pattern with bounded channels
Resilience Policies:
- Retry: 3 attempts, exponential backoff (200ms-10s), decorrelated jitter
- Circuit Breaker: 3-failure threshold, 60s timeout, lock-free state machine
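Decorrelated jitter draws each delay between the base and three times the previous delay, then caps it — sketched here with the 200 ms base and 10 s cap quoted above (Python for illustration; the actual policy lives in ResilientStorageProvider):

```python
import random

def backoff_delays(attempts: int, base: float = 0.2, cap: float = 10.0, seed: int = 0):
    # Decorrelated jitter: each sleep is uniform(base, 3 * previous sleep),
    # clamped to the cap, which spreads retries over time and avoids
    # synchronized retry spikes across concurrent callers.
    rng = random.Random(seed)
    delays, sleep = [], base
    for _ in range(attempts):
        sleep = min(cap, rng.uniform(base, sleep * 3))
        delays.append(sleep)
    return delays
```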
dotnet test --configuration Release
Coverage: 209 tests passing across all layers
Test Projects:
- Application.Tests: DependencyInjection configuration validation
- Domain.Tests: Entity and value object behavior (ChunkMetadata, FileMetadata, FileHash)
- Infrastructure.Tests: Storage providers, hashing, resilience patterns, configuration validation, end-to-end integration
- Shared.Common.Tests: ReadOnlyMemoryStream comprehensive coverage
- Console.Tests: Command integration tests, progress reporting, path validation, glob pattern resolution
Edit appsettings.json:
{
"Storage": {
"FileSystem": {
"BasePath": "D:\\FileStorageChunks"
}
},
"ConnectionStrings": {
"DefaultConnection": "Data Source=(localdb)\\MSSQLLocalDB;Initial Catalog=DistributedFileFragmentor;Integrated Security=True"
}
}
src/
├── ClassLibraries/
│ ├── Domain/ # Entities, value objects, enums
│ ├── Application/ # CQRS, abstractions, features
│ │ ├── Abstractions/Services/ # IFileFragmentationService, IFileDeletionService, IFileReassemblyService
│ │ ├── Services/ # FileFragmentationService, FileDeletionService, FileReassemblyService
│ │ ├── Features/FileOperations/
│ │ │ ├── BatchOrchestrator.cs # Shared parallel execution infrastructure
│ │ │ ├── FileOperationExtensions.cs # Single-file ergonomic wrappers
│ │ │ ├── Deletion/ # DeleteFilesCommand, DeleteFilesHandler, DeleteFilesResult, DeleteFileResult
│ │ │ ├── Fragmentation/ # FragmentFilesCommand, FragmentFilesHandler, FragmentFilesResult
│ │ │ └── Reassembly/ # ReassembleFilesCommand, ReassembleFilesHandler, ReassembleFilesResult
│ │ ├── Features/Maintenance/
│ │ └── Features/StorageExploration/
│ ├── Infrastructure/ # EF Core, storage, resilience
│ └── Shared.Common/ # Utilities (ReadOnlyMemoryStream)
└── Presentation/
└── Console/ # CLI commands, services, validators
tests/
├── ClassLibraries/
│ ├── Application.Tests/ # DependencyInjection tests
│ ├── Domain.Tests/ # Entity and value object tests
│ ├── Infrastructure.Tests/ # Storage, hashing, resilience, configuration tests
│ └── Shared.Common.Tests/ # ReadOnlyMemoryStream tests
└── Presentation/
└── Console.Tests/ # Command integration tests
Known Limitations:
- Windows-only (SQL Server LocalDB)
- Single-machine design (no distributed coordination)
- No compression or encryption
- Fixed round-robin distribution