
@aurora/runtime-projection-history (0.3.3)

Published 2025-12-27 10:44:44 +00:00 by vlad

Installation

@aurora:registry=
npm install @aurora/runtime-projection-history@0.3.3
"@aurora/runtime-projection-history": "0.3.3"

About this package

@aurora/runtime-storage-rocksdb

RocksDB-based storage adapter for Aurora event history and write-heavy workloads.

Overview

This package provides a high-performance, LSM-tree storage implementation using RocksDB. RocksDB is optimized for write-heavy workloads, making it ideal for event history, audit logs, and archival data.

Use cases:

  • Event history and audit logs
  • Write-heavy workloads
  • Archival data with compression
  • Large datasets requiring efficient storage

Not suitable for:

  • Read-heavy workloads (use LMDBX instead)
  • Simple in-memory caching
  • Real-time hot data access

Installation

bun add @aurora/runtime-storage-rocksdb

Usage

Basic Example

import { createRocksDBRepository } from "@aurora/runtime-storage-rocksdb";

// Create repository
const repo = createRocksDBRepository<string, Event>({
  path: "./data/events",
  compression: true,
});

// Put an event
await repo.put("tenant1", "event:123", {
  type: "OrderCreated",
  orderId: "123",
  timestamp: Date.now(),
});

// Get an event
const event = await repo.get("tenant1", "event:123");
console.log(event);

// Delete an event
await repo.delete("tenant1", "event:123");

// Check if event exists
const exists = await repo.has("tenant1", "event:123");

// List all event keys for a tenant
const keys = await repo.list("tenant1");

// Close database
await repo.close();

Configuration

interface RocksDBConfig {
  /** Path to the RocksDB database directory */
  path: string;
  
  /** Create database if missing (default: true) */
  createIfMissing?: boolean;
  
  /** Error if database exists (default: false) */
  errorIfExists?: boolean;
  
  /** Enable compression (default: true) */
  compression?: boolean;
  
  /** Write buffer size in bytes */
  writeBufferSize?: number;
  
  /** Maximum open files */
  maxOpenFiles?: number;
}

Configuration options:

  • compression: Enables Snappy compression (recommended for event history)
  • writeBufferSize: Controls memory usage and write amplification trade-off
  • maxOpenFiles: Limits open file descriptors (important for large databases)

Multi-Tenant Isolation

Keys are automatically scoped by tenant ID:

// Different tenants, same event key
await repo.put("tenant1", "event:001", { type: "Created", data: "A" });
await repo.put("tenant2", "event:001", { type: "Created", data: "B" });

const t1 = await repo.get("tenant1", "event:001"); // { type: "Created", data: "A" }
const t2 = await repo.get("tenant2", "event:001"); // { type: "Created", data: "B" }

Complex Keys

Keys can be any JSON-serializable type:

type EventKey = { tenantId: string; timestamp: number; sequence: number };
type EventData = { type: string; payload: any };

const repo = createRocksDBRepository<EventKey, EventData>({
  path: "./data/events-timeline",
  compression: true,
});

await repo.put(
  "tenant1",
  { tenantId: "tenant1", timestamp: Date.now(), sequence: 1 },
  { type: "OrderCreated", payload: { orderId: "123" } }
);

Architecture

LSM Tree Storage

RocksDB uses a Log-Structured Merge-tree (LSM) architecture:

  • Write-optimized: Sequential writes to memtable → immutable SSTables
  • Compaction: Background merging and sorting of SSTables
  • Compression: Snappy compression reduces storage footprint
  • Bloom filters: Fast negative lookups

Key Encoding

Keys are encoded as tenantId:JSON(key) with Buffer serialization for values.

Example:

  • Input: tenantId="acme", key="event:123"
  • Stored key: acme:"event:123"
  • Stored value: Buffer(JSON.stringify(value))
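The encoding above can be sketched as plain functions (a hypothetical illustration of the scheme described here, not the package's actual internals): the tenant ID and a ":" separator are prepended, the logical key is JSON-serialized, and the value is JSON-serialized into a Buffer.

```typescript
// Sketch of the key/value encoding described above (illustrative only).
function encodeKey(tenantId: string, key: unknown): string {
  // Prefix with the tenant ID, then JSON-serialize the logical key.
  return `${tenantId}:${JSON.stringify(key)}`;
}

function encodeValue(value: unknown): Buffer {
  // Values are serialized to JSON and stored as raw bytes.
  return Buffer.from(JSON.stringify(value));
}

const storedKey = encodeKey("acme", "event:123");
console.log(storedKey); // → acme:"event:123"
```

JSON-serializing the key is what makes the complex keys shown earlier (objects with timestamps and sequence numbers) work with a byte-ordered store.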

Storage Layout

data/
└── events/               # RocksDB database directory
    ├── 000001.log        # Write-ahead log
    ├── 000003.sst        # Sorted string table files
    ├── 000004.sst
    ├── MANIFEST-000005   # Database metadata
    └── CURRENT           # Points to current MANIFEST

Performance Characteristics

  • Writes: O(log n) with sequential writes (very fast)
  • Reads: O(log n) with bloom filters and cache
  • Space: Excellent with compression (typically 50-70% reduction)
  • Compaction: Background process, no locking

Iterator-Based Listing

The list() method uses RocksDB iterators for efficient prefix scans:

  • Seek to tenant prefix
  • Iterate until prefix no longer matches
  • Automatically closes iterator on completion
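The scan steps above can be modeled on a sorted key array in place of a real RocksDB iterator (an illustrative sketch; function and variable names are hypothetical):

```typescript
// Prefix scan over sorted keys, mirroring the iterator steps above:
// seek to the tenant prefix, iterate while it matches, then stop.
function listTenantKeys(sortedKeys: string[], tenantId: string): string[] {
  const prefix = `${tenantId}:`;
  const keys: string[] = [];
  // Seek: find the first key at or after the tenant prefix.
  let i = sortedKeys.findIndex((k) => k >= prefix);
  if (i < 0) return keys;
  // Iterate until the prefix no longer matches (iterator would be closed here).
  for (; i < sortedKeys.length && sortedKeys[i].startsWith(prefix); i++) {
    keys.push(sortedKeys[i].slice(prefix.length));
  }
  return keys;
}
```

Because SSTable keys are stored in sorted order, all of a tenant's entries are contiguous, so the scan touches only that tenant's range rather than the whole keyspace.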

Integration with Aurora

Event History Storage

import { createRocksDBRepository } from "@aurora/runtime-storage-rocksdb";
import type { Repository } from "@aurora/runtime-storage";

// Event history for audit and replay
const eventsRepo: Repository<string, StoredEvent> = createRocksDBRepository({
  path: "./data/history/events",
  compression: true,
  writeBufferSize: 64 * 1024 * 1024, // 64 MB
});

// Store event after publishing to JetStream
async function storeEvent(event: DomainEvent) {
  const key = `${event.type}:${event.timestamp}:${event.id}`;
  
  await eventsRepo.put(
    event.tenantId,
    key,
    {
      id: event.id,
      type: event.type,
      payload: event.payload,
      metadata: event.metadata,
      timestamp: event.timestamp,
    }
  );
}

Comparison with LMDBX

Feature            LMDBX                  RocksDB
Read performance   Fastest (mmap)         Fast (cached + bloom)
Write performance  Good                   Fastest (LSM)
Space efficiency   Good                   Best (compression)
Concurrency        MVCC, single writer    Lock-free writes
Use case           Hot projections        Event history

Data flow pattern:

  1. Write events to JetStream (pub/sub)
  2. Store in RocksDB (durable history)
  3. Project to LMDBX (fast queries)

Tiered Storage

// Hot data (current state) - LMDBX
const projectionsRepo = new LMDBXRepository({
  path: "./data/projections/orders",
  encoding: "msgpack",
});

// Cold data (historical events) - RocksDB
const historyRepo = createRocksDBRepository({
  path: "./data/history/orders",
  compression: true,
});

// Write path: event → JetStream → both stores
async function handleEvent(event: OrderEvent) {
  // Update projection (hot)
  await projectionsRepo.put(event.tenantId, event.orderId, {
    id: event.orderId,
    status: event.payload.status,
    updatedAt: event.timestamp,
  });
  
  // Store event history (cold)
  await historyRepo.put(
    event.tenantId,
    `${event.timestamp}:${event.id}`,
    event
  );
}

Limitations

  • Async only: All operations are asynchronous (no sync API)
  • Single process: RocksDB locks the database directory
  • Compaction overhead: Background compaction uses CPU/IO
  • Not distributed: Embedded storage only (use NATS JetStream for replication)

Performance Tuning

Write-Heavy Workloads

const repo = createRocksDBRepository({
  path: "./data/events",
  compression: true,
  writeBufferSize: 128 * 1024 * 1024, // 128 MB - larger buffer, less frequent flushes
  maxOpenFiles: 1000, // More files for large datasets
});

Read-Heavy Workloads

For read-heavy workloads, prefer LMDBX. If using RocksDB:

  • Enable compression for better cache efficiency
  • Use smaller write buffers to reduce memory overhead
  • Consider bloom filters (enabled by default)
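Under those guidelines, a read-leaning configuration might look like the following (the values are illustrative, not benchmarked defaults):

```typescript
import { createRocksDBRepository } from "@aurora/runtime-storage-rocksdb";

const repo = createRocksDBRepository({
  path: "./data/events",
  compression: true,                 // compressed blocks let the cache hold more data
  writeBufferSize: 16 * 1024 * 1024, // 16 MB: smaller buffer, lower memory overhead
  maxOpenFiles: 500,                 // modest cap; raise only for large datasets
});
```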

Testing

bun test

Tests use temporary database directories that are cleaned up after each test.

License

MIT

Dependencies


ID                              Version
@aurora/runtime-effect          0.3.3
@aurora/runtime-observability   0.3.3
@aurora/runtime-storage         0.3.3
@nxtedition/rocksdb             ^15.1.2
Versions

0.3.3   2025-12-27
0.3.2   2025-12-27
0.3.1   2025-12-27