Report hasn't been filed before.
What version of drizzle-orm are you using?
0.45.2
What version of drizzle-kit are you using?
0.31.10
Other packages
@libsql/client@0.17.2, @upstash/redis@1.37.0
Describe the Bug
When using `upstashCache()` with the libsql driver (either with `global: true` or an explicit `.$withCache()`), queries succeed on the first request (cache miss → DB hit → result stored in cache) but crash on the second request (cache hit) with:

```
SyntaxError: "undefined" is not valid JSON
```
This happens because libSQL's Row objects have a special structure that doesn't survive JSON serialization through Redis.
Root Cause
LibSQL's `Row` objects are plain objects with both named keys AND numeric indices + `length`:

```js
// libSQL Row (from @libsql/client execute())
Row {
  0: '{}',                 // numeric index
  1: 1705312200,           // numeric index
  baseline: '{}',          // named key
  created_at: 1705312200,  // named key
  length: 2                // array-like length
}
```
Drizzle's `values()` method in `libsql/session.js` caches these Row objects via `queryWithCache`:

```js
// libsql/session.js — values()
return (client.execute(stmt)).then(({ rows }) => rows);
// ↑ rows is Array<Row> — each Row has numeric + named keys
```
When the upstash cache stores this via `HSET`, `@upstash/redis` serializes with `JSON.stringify`. Row is not an Array instance (`Array.isArray(row) === false`), and its numeric indices and `length` are non-enumerable, so `JSON.stringify` emits only the enumerable named keys:

```js
JSON.stringify(row)
// → '{"baseline":"{}","created_at":1705312200}'
// ❌ Numeric indices (0, 1) and length are LOST
```
On cache hit, the deserialized row is a plain object without numeric indices. Then `mapAllResult` calls:

```js
Array.prototype.slice.call(row).map((v) => normalizeFieldValue(v));
// → [] (empty array!) because the object has no numeric indices or length
```
Then `mapResultRow` reads `row[0]` → `undefined`, and the column decoder calls `JSON.parse(undefined)` → 💥
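The final throw is just `JSON.parse` coercing its argument to a string, so `undefined` becomes the literal text `"undefined"`:

```javascript
// JSON.parse(undefined) stringifies the argument to "undefined",
// which is not valid JSON.
try {
  JSON.parse(undefined);
} catch (e) {
  console.log(e instanceof SyntaxError); // true
  console.log(e.message); // '"undefined" is not valid JSON' on recent V8;
                          // older versions say 'Unexpected token u in JSON at position 0'
}
```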
Proof
```js
// proof.mjs — run with: node proof.mjs
import { createClient } from '@libsql/client';

const c = createClient({ url: ':memory:' }); // or 'file:test.db'
await c.execute('CREATE TABLE t(id TEXT, data TEXT)');
await c.execute(`INSERT INTO t VALUES('x', '{"a":1}')`);

const { rows } = await c.execute('SELECT * FROM t');
const row = rows[0];

console.log(Array.isArray(row));                 // false
console.log(JSON.stringify(row));                // {"id":"x","data":"{\"a\":1}"}
console.log(Array.prototype.slice.call(row));    // ['x', '{"a":1}'] ✅

const parsed = JSON.parse(JSON.stringify(row));
console.log(Array.prototype.slice.call(parsed)); // [] ❌ EMPTY
```
Steps to Reproduce
- Set up a libSQL/Turso database with Drizzle ORM
- Configure `upstashCache()` with valid Upstash Redis credentials
- Add `.$withCache()` to any `db.select()` query (or use `global: true`)
- Execute the query — works (cache miss, fetches from DB)
- Execute the same query again — crashes (cache hit, Row deserialization broken)
Expected Behavior
Cached queries should return the same results as uncached queries. The cache layer should serialize/deserialize libSQL Row objects so that `Array.prototype.slice.call(row)` still works after a JSON roundtrip.
Suggested Fix
In `libsql/session.js`, the `values()` method should convert Row objects to plain arrays before handing them to `queryWithCache`:

```diff
 async values(placeholderValues) {
   // ...
   return await this.queryWithCache(this.query.sql, params, async () => {
     const stmt = { sql: this.query.sql, args: params };
     return (this.tx ? this.tx.execute(stmt) : this.client.execute(stmt))
-      .then(({ rows }) => rows);
+      .then(({ rows }) => rows.map((row) => Array.prototype.slice.call(row)));
   });
 }
```
This ensures the cached data is a plain `Array<Array<Value>>` that survives a JSON roundtrip, matching what `mapAllResult` expects.
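A quick sanity check in plain JS (no driver needed; `fakeRow` mimics the Row shape under the non-enumerable-indices assumption) shows that slice-converted rows survive the roundtrip while raw Row-like objects do not:

```javascript
// Build a Row-like object: numeric indices + length non-enumerable,
// named columns enumerable.
const fakeRow = {};
Object.defineProperty(fakeRow, 'length', { value: 2 });
Object.defineProperty(fakeRow, 0, { value: '{}' });
Object.defineProperty(fakeRow, 1, { value: 1705312200 });
fakeRow.baseline = '{}';
fakeRow.created_at = 1705312200;

// Before the fix: cache the Row-like object directly.
const broken = JSON.parse(JSON.stringify(fakeRow));
console.log(Array.prototype.slice.call(broken)); // [] — numeric keys lost

// After the fix: cache the slice-converted plain array.
const fixed = JSON.parse(JSON.stringify(Array.prototype.slice.call(fakeRow)));
console.log(fixed); // [ '{}', 1705312200 ] — survives the roundtrip
```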
Environment
- Database: Turso (libSQL HTTP) + local SQLite via `@libsql/client`
- Driver: `drizzle-orm/libsql`
- Cache: `drizzle-orm/cache/upstash`
- Runtime: Node.js v24
- NOT a monorepo issue — reproducible in a standalone project