Amazon DynamoDB represents AWS’s answer to the fundamental challenge of building databases that scale infinitely while maintaining predictable single-digit millisecond performance. As a fully managed NoSQL service powering applications from Lyft’s ride-matching engine to Amazon.com’s shopping cart, DynamoDB processes over 20 million requests per second at peak times while automatically scaling to handle trillions of requests per day.


The following analysis covers architectural foundations, performance characteristics, security models, and battle-tested implementation patterns using Node.js and the AWS SDK v3.

Enterprise-Grade Architecture: Understanding DynamoDB’s Foundation

DynamoDB’s architecture is built on three fundamental principles: predictable performance, infinite scalability, and operational simplicity. Unlike traditional databases that scale vertically or require complex sharding strategies, DynamoDB automatically partitions and replicates data across multiple availability zones using a sophisticated distributed hash table implementation.

Core Architectural Characteristics

DynamoDB operates as a fully managed, serverless database that abstracts away the complexity of cluster management, replication, and scaling. The service leverages AWS’s global infrastructure to provide:

Performance Guarantees:

  • Single-digit millisecond latency: p50 latency of 2-4ms for GetItem operations
  • Consistent throughput: Performance remains constant regardless of table size
  • Horizontal scalability: Seamless scaling from zero to millions of requests per second
  • Adaptive capacity: Automatic workload rebalancing to prevent hot partitions

Reliability Metrics:

  • 99.999% availability SLA (global tables; single-region tables carry 99.99%): Maximum ~5.26 minutes of downtime per year
  • Automatic replication: Data replicated across 3 availability zones by default
  • Point-in-time recovery: Restore tables to any second within the last 35 days (an enable sketch follows this list)
  • Global tables: Multi-region, active-active replication with conflict resolution
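
Point-in-time recovery is opt-in per table. A minimal sketch of enabling it with SDK v3 (the table name is a placeholder):

import { DynamoDBClient, UpdateContinuousBackupsCommand } from '@aws-sdk/client-dynamodb';

const ddb = new DynamoDBClient({ region: 'us-east-1' });

// Enable point-in-time recovery on an existing table (idempotent)
await ddb.send(new UpdateContinuousBackupsCommand({
  TableName: 'Movies', // placeholder table name
  PointInTimeRecoverySpecification: { PointInTimeRecoveryEnabled: true }
}));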

DynamoDB Capacity Modes Comparison

| Mode | Ideal Scenario | Cost Efficiency | Operational Considerations |
|------|----------------|-----------------|-----------------------------|
| On-demand | Unpredictable, spiky workloads | $1.25 per million WRU; $0.25 per million RRU | Requests beyond 2× the previous traffic peak may throttle briefly |
| Provisioned + Auto Scaling | Predictable patterns with fluctuations | ~70% cost reduction at steady state | 3-5 minute scaling latency; minimum should exceed 30% of peak capacity |
| Provisioned + Reserved | Consistent 24/7 workloads | Up to 76% discount with annual commitment | Non-refundable 1-year reservation required |
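
A table's capacity mode can also be switched in place (AWS allows one switch per table per 24 hours). A minimal sketch moving a table to on-demand:

import { DynamoDBClient, UpdateTableCommand } from '@aws-sdk/client-dynamodb';

const ddb = new DynamoDBClient({ region: 'us-east-1' });

// Switch an existing table from provisioned capacity to on-demand billing
await ddb.send(new UpdateTableCommand({
  TableName: 'Movies', // placeholder table name
  BillingMode: 'PAY_PER_REQUEST'
}));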

DynamoDB Secondary Indexes: GSI vs LSI Comparison

| Characteristic | Global Secondary Index (GSI) | Local Secondary Index (LSI) |
|----------------|------------------------------|------------------------------|
| Maximum per table | 20 | 5 |
| Key structure | Different from base table | Same partition key, different sort key |
| Consistency | Eventual only | Strong or eventual |
| Capacity | Separate provisioned capacity | Uses base table capacity |
| Projection | Configurable (KEYS_ONLY, INCLUDE, ALL) | Configurable (KEYS_ONLY, INCLUDE, ALL) |

Key Differences Explained:

Global Secondary Index (GSI):

  • Can have different partition and sort keys than the base table
  • Requires separate capacity provisioning (additional cost)
  • Only supports eventually consistent reads
  • Higher limit (20 per table) for complex access patterns

Local Secondary Index (LSI):

  • Must share the same partition key as base table
  • Uses base table’s provisioned capacity (no extra cost)
  • Supports both strongly and eventually consistent reads
  • Limited to 5 per table, and must be defined at table creation (see the sketch after this list)
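
Because an LSI cannot be added later, it must ship with the CreateTable call. A minimal sketch (table and attribute names are illustrative):

import { CreateTableCommand } from '@aws-sdk/client-dynamodb';

// An LSI reuses the table's partition key but adds an alternate sort key
const command = new CreateTableCommand({
  TableName: 'Orders', // illustrative
  KeySchema: [
    { AttributeName: 'userId', KeyType: 'HASH' },
    { AttributeName: 'orderId', KeyType: 'RANGE' }
  ],
  AttributeDefinitions: [
    { AttributeName: 'userId', AttributeType: 'S' },
    { AttributeName: 'orderId', AttributeType: 'S' },
    { AttributeName: 'createdAt', AttributeType: 'S' }
  ],
  LocalSecondaryIndexes: [
    {
      IndexName: 'CreatedAtIndex',
      KeySchema: [
        { AttributeName: 'userId', KeyType: 'HASH' },    // same partition key as the table
        { AttributeName: 'createdAt', KeyType: 'RANGE' } // alternate sort key
      ],
      Projection: { ProjectionType: 'KEYS_ONLY' }
    }
  ],
  BillingMode: 'PAY_PER_REQUEST'
});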

When to Choose DynamoDB

DynamoDB excels in specific architectural contexts. Understanding when to adopt DynamoDB versus alternative data stores is critical for successful implementations.

Optimal Use Cases:

  • Unpredictable traffic patterns: Applications with viral growth potential or seasonal spikes
  • Microservices architectures: Independent data stores per service with isolated scaling
  • Real-time applications: Gaming leaderboards, IoT sensor data, live chat systems
  • Global distribution: Applications requiring multi-region active-active deployments
  • Serverless workloads: Lambda functions requiring sub-10ms database response times

Avoid DynamoDB For:

  • Complex analytical queries: Multi-table joins, aggregations (use Amazon Redshift, Athena)
  • Relational data requirements: Strong foreign key constraints, complex transactions (use RDS)
  • Small-scale applications: Applications with <100 requests/second (cost optimization)
  • Ad-hoc querying needs: Unpredictable query patterns requiring full table scans

Core Capabilities: Production-Ready Features

DynamoDB provides enterprise-grade capabilities that eliminate entire categories of operational complexity while maintaining the flexibility required for evolving application requirements.

Flexible Schema Design

Tables in DynamoDB don’t require predefined schemas beyond the primary key definition. Attributes can be added dynamically without schema migrations, enabling rapid iteration and supporting polyglot data models within a single table.

// Example: Schema flexibility in action
const orderItem = {
  PK: 'USER#12345',
  SK: 'ORDER#67890',
  type: 'order',
  amount: 99.99,
  currency: 'USD',
  items: [
    { productId: 'P001', quantity: 2 }
  ],
  createdAt: '2025-10-09T10:00:00Z'
};

// Later, add new attributes without schema changes
const orderWithShipping = {
  ...orderItem,
  shippingAddress: {
    street: '123 Main St',
    city: 'Seattle',
    state: 'WA',
    zip: '98101'
  },
  trackingNumber: 'TRACK123'
};

ACID Transactions

DynamoDB supports ACID (Atomic, Consistent, Isolated, Durable) transactions across multiple items and tables, enabling complex business logic with guaranteed data integrity. A minimal read-transaction sketch follows the characteristics list below.

Transaction Characteristics:

  • Atomic operations: All-or-nothing execution across up to 100 items
  • Serializable isolation: Strongest isolation level preventing phantom reads
  • Cross-table support: Transactions span multiple tables in the same region
  • Performance overhead: 2× the cost of standard operations, 50ms p99 latency
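
The read side mirrors the write side: TransactGetItems returns an isolated snapshot of up to 100 items. A minimal sketch using the Document client (table and key names are illustrative):

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, TransactGetCommand } from '@aws-sdk/lib-dynamodb';

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Read an order and its user as a single isolated snapshot
const { Responses } = await docClient.send(new TransactGetCommand({
  TransactItems: [
    { Get: { TableName: 'Orders', Key: { orderId: 'O-1' } } }, // illustrative keys
    { Get: { TableName: 'Users', Key: { userId: 'U-1' } } }
  ]
}));

const [order, user] = Responses.map(r => r.Item);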

Integrated Security Model

DynamoDB provides defense-in-depth security through multiple layers of protection:

Encryption:

  • At rest: AES-256 encryption using AWS KMS with automatic key rotation
  • In transit: TLS 1.2+ for all client-server communication
  • Client-side: Optional encryption before data leaves the application

Access Control:

  • IAM policies: Fine-grained resource and item-level permissions
  • VPC endpoints: Private network access without internet gateway
  • Attribute-based access: Dynamic policies based on user attributes

Auto Scaling Architecture

DynamoDB offers two capacity modes optimized for different workload patterns:

On-Demand Mode:

  • Zero capacity planning: Automatically scales to accommodate traffic
  • Pay-per-request: Charged for actual read/write operations
  • Optimal for: Unpredictable workloads, new applications, intermittent traffic
  • Cost profile: $1.25 per million write request units, $0.25 per million read request units

Provisioned Mode:

  • Predictable performance: Reserve capacity for steady-state workloads
  • Auto-scaling policies: CloudWatch-driven capacity adjustments (a registration sketch follows this list)
  • Optimal for: Sustained traffic patterns, cost optimization opportunities
  • Cost profile: $0.00065 per write capacity unit-hour, $0.00013 per read capacity unit-hour
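
Auto scaling for provisioned tables is configured through Application Auto Scaling rather than DynamoDB itself. A minimal sketch registering a target-tracking policy on read capacity (table name and capacity bounds are illustrative):

import {
  ApplicationAutoScalingClient,
  RegisterScalableTargetCommand,
  PutScalingPolicyCommand
} from '@aws-sdk/client-application-autoscaling';

const aas = new ApplicationAutoScalingClient({ region: 'us-east-1' });

// Register the table's read capacity as a scalable target
await aas.send(new RegisterScalableTargetCommand({
  ServiceNamespace: 'dynamodb',
  ResourceId: 'table/Movies', // illustrative table name
  ScalableDimension: 'dynamodb:table:ReadCapacityUnits',
  MinCapacity: 5,
  MaxCapacity: 500
}));

// Scale to hold read utilization near 70% of provisioned capacity
await aas.send(new PutScalingPolicyCommand({
  PolicyName: 'MoviesReadScaling',
  ServiceNamespace: 'dynamodb',
  ResourceId: 'table/Movies',
  ScalableDimension: 'dynamodb:table:ReadCapacityUnits',
  PolicyType: 'TargetTrackingScaling',
  TargetTrackingScalingPolicyConfiguration: {
    TargetValue: 70.0,
    PredefinedMetricSpecification: {
      PredefinedMetricType: 'DynamoDBReadCapacityUtilization'
    }
  }
}));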

Scalability Through Intelligent Partitioning

DynamoDB achieves infinite scalability through a sophisticated partitioning strategy that distributes data across a massive cluster of storage nodes. Understanding partition mechanics is essential for designing high-performance data models.

Partition Key Selection: The Most Critical Decision

The partition key determines data distribution and query performance. Poor partition key choices create hot partitions that throttle throughput and degrade latency.

High-Quality Partition Keys:

| Use Case | Good Partition Key | Cardinality | Access Pattern |
|----------|--------------------|-------------|----------------|
| Social media | User ID | Millions | User-centric queries |
| IoT sensors | Device ID | Millions | Device-specific data |
| E-commerce | Customer ID | Millions | Customer order history |
| Gaming | Session ID | Billions | Session-based state |

Poor Partition Keys:

| Use Case | Bad Partition Key | Problem | Impact |
|----------|-------------------|---------|--------|
| Orders | Order status | Low cardinality (3-5 values) | Hot partitions |
| Logs | Date | Sequential writes | Write throttling |
| Global app | Country code | Uneven distribution | Regional hotspots |

Advanced Pattern: Composite Keys with Write Sharding

// Generate partition key with random suffix to distribute writes
const generateShardedKey = (baseKey, shardCount = 10) => {
  const shard = Math.floor(Math.random() * shardCount);
  return `${baseKey}#SHARD${shard}`;
};

const createDistributedItem = async (userId, data) => {
  const item = {
    PK: generateShardedKey(`USER#${userId}`),
    SK: `DATA#${Date.now()}`,
    userId, // Preserve for queries
    ...data
  };
  
  await docClient.send(new PutCommand({
    TableName: 'MyTable',
    Item: item
  }));
};

// Query across all shards
const queryAllShards = async (userId, shardCount = 10) => {
  const promises = [];
  
  for (let i = 0; i < shardCount; i++) {
    const pk = `USER#${userId}#SHARD${i}`;
    promises.push(
      docClient.send(new QueryCommand({
        TableName: 'MyTable',
        KeyConditionExpression: 'PK = :pk',
        ExpressionAttributeValues: { ':pk': pk }
      }))
    );
  }
  
  const results = await Promise.all(promises);
  return results.flatMap(r => r.Items || []);
};

Adaptive Capacity: Automatic Hot Partition Mitigation

DynamoDB’s adaptive capacity feature automatically redistributes throughput within a partition family to handle temporary spikes. This provides short-term protection against hot partitions without manual intervention.

How It Works:

  1. DynamoDB allocates unused capacity from cold partitions to hot partitions
  2. Burst capacity separately retains up to 5 minutes (300 seconds) of unused throughput to absorb short spikes
  3. Applies to both provisioned and on-demand tables
  4. Transparent operation without application changes

Production Implementation with Node.js

The AWS SDK for JavaScript v3 provides a modern, modular interface for DynamoDB operations. The following patterns demonstrate production-grade implementations with proper error handling, performance optimization, and operational best practices.

SDK Configuration: Production-Ready Client Setup

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient } from '@aws-sdk/lib-dynamodb';

// Production configuration with observability and resilience
const client = new DynamoDBClient({
  region: process.env.AWS_REGION || 'us-east-1',
  maxAttempts: 3,
  retryMode: 'adaptive', // AWS SDK v3 adaptive retry with exponential backoff
  requestHandler: {
    connectionTimeout: 3000,
    socketTimeout: 5000
  },
  logger: console // Enable SDK debug logging
});

const docClient = DynamoDBDocumentClient.from(client, {
  marshallOptions: {
    removeUndefinedValues: true, // Prevent ValidationException
    convertEmptyValues: true, // Convert empty strings to null
    convertClassInstanceToMap: true // Serialize class instances
  },
  unmarshallOptions: {
    wrapNumbers: false // Return numbers as JavaScript numbers (precision loss possible)
  }
});

export { client, docClient };

Configuration Best Practices:

  • Connection pooling: SDK automatically manages HTTP connection pools (default 50 connections)
  • Timeout tuning: Set timeouts based on p99 latency requirements (3-5 seconds typical; an explicit handler sketch follows this list)
  • Retry strategy: Adaptive mode provides better performance than standard/legacy modes
  • Environment variables: Never hardcode credentials; use IAM roles or environment variables
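
When the defaults need overriding, the request handler can be constructed explicitly. A sketch assuming the @smithy/node-http-handler package that ships with SDK v3:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { NodeHttpHandler } from '@smithy/node-http-handler';
import { Agent } from 'https';

// Explicit handler: keep-alive connections and tighter timeouts
const tunedClient = new DynamoDBClient({
  region: process.env.AWS_REGION || 'us-east-1',
  requestHandler: new NodeHttpHandler({
    connectionTimeout: 3000, // ms to establish the TCP connection
    requestTimeout: 5000,    // ms to wait for a response
    httpsAgent: new Agent({ keepAlive: true, maxSockets: 50 })
  })
});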

Advanced Table Design with Global Secondary Indexes

import { CreateTableCommand } from '@aws-sdk/client-dynamodb';

const createProductionTable = async () => {
  const params = {
    TableName: 'Movies',
    KeySchema: [
      { AttributeName: 'movieId', KeyType: 'HASH' }
    ],
    AttributeDefinitions: [
      { AttributeName: 'movieId', AttributeType: 'S' },
      { AttributeName: 'year', AttributeType: 'N' },
      { AttributeName: 'genre', AttributeType: 'S' },
      { AttributeName: 'rating', AttributeType: 'N' }
    ],
    GlobalSecondaryIndexes: [
      {
        IndexName: 'YearIndex',
        KeySchema: [
          { AttributeName: 'year', KeyType: 'HASH' }
        ],
        Projection: {
          ProjectionType: 'ALL' // Include all attributes (higher storage cost)
        },
        ProvisionedThroughput: {
          ReadCapacityUnits: 5,
          WriteCapacityUnits: 5
        }
      },
      {
        IndexName: 'GenreYearIndex',
        KeySchema: [
          { AttributeName: 'genre', KeyType: 'HASH' },
          { AttributeName: 'year', KeyType: 'RANGE' }
        ],
        Projection: {
          ProjectionType: 'KEYS_ONLY' // Only include key attributes (lower cost)
        },
        ProvisionedThroughput: {
          ReadCapacityUnits: 5,
          WriteCapacityUnits: 5
        }
      },
      {
        IndexName: 'RatingIndex',
        KeySchema: [
          { AttributeName: 'rating', KeyType: 'HASH' }
        ],
        Projection: {
          ProjectionType: 'INCLUDE',
          NonKeyAttributes: ['title', 'director', 'year'] // Selective attributes
        },
        ProvisionedThroughput: {
          ReadCapacityUnits: 5,
          WriteCapacityUnits: 5
        }
      }
    ],
    BillingMode: 'PROVISIONED', // Use 'PAY_PER_REQUEST' for on-demand
    ProvisionedThroughput: { // Required for the base table in PROVISIONED mode
      ReadCapacityUnits: 5,
      WriteCapacityUnits: 5
    },
    StreamSpecification: {
      StreamEnabled: true,
      StreamViewType: 'NEW_AND_OLD_IMAGES' // Capture before/after for CDC
    },
    SSESpecification: {
      Enabled: true,
      SSEType: 'KMS', // Use AWS KMS for encryption
      KMSMasterKeyId: process.env.KMS_KEY_ID
    },
    PointInTimeRecoverySpecification: {
      PointInTimeRecoveryEnabled: true
    },
    Tags: [
      { Key: 'Environment', Value: 'Production' },
      { Key: 'CostCenter', Value: 'Engineering' },
      { Key: 'Application', Value: 'MovieCatalog' }
    ]
  };

  try {
    const command = new CreateTableCommand(params);
    const result = await client.send(command);
    
    console.log('Table created:', {
      tableName: result.TableDescription.TableName,
      status: result.TableDescription.TableStatus,
      arn: result.TableDescription.TableArn,
      itemCount: result.TableDescription.ItemCount
    });
    
    return result;
  } catch (error) {
    if (error.name === 'ResourceInUseException') {
      console.log('Table already exists');
    } else {
      console.error('Table creation failed:', error);
      throw error;
    }
  }
};

Robust Data Operations with Conditional Logic

Idempotent Write Operations:

import { PutCommand } from '@aws-sdk/lib-dynamodb';
import { randomUUID } from 'crypto';

const createMovieWithIdempotency = async (movieData) => {
  const item = {
    movieId: movieData.movieId || randomUUID(),
    title: movieData.title,
    year: movieData.year,
    genre: movieData.genre,
    director: movieData.director,
    rating: movieData.rating,
    createdAt: new Date().toISOString(),
    updatedAt: new Date().toISOString(),
    version: 1 // Optimistic locking
  };

  const command = new PutCommand({
    TableName: 'Movies',
    Item: item,
    ConditionExpression: 'attribute_not_exists(movieId)', // Prevent overwrites
    ReturnValues: 'NONE' // Reduce response payload size
  });

  try {
    await docClient.send(command);
    console.log('Movie created:', item.movieId);
    return item;
  } catch (error) {
    if (error.name === 'ConditionalCheckFailedException') {
      console.warn('Movie already exists:', item.movieId);
      // Return existing item or throw custom error
      throw new Error(`Movie with ID ${item.movieId} already exists`);
    }
    throw error;
  }
};

Optimized Query Operations with Pagination:

import { QueryCommand } from '@aws-sdk/lib-dynamodb';

const queryMoviesByGenrePaginated = async (genre, options = {}) => {
  const {
    limit = 50,
    lastEvaluatedKey = null,
    yearFrom = null,
    yearTo = null
  } = options;

  let keyConditionExpression = 'genre = :genre';
  const expressionAttributeValues = { ':genre': genre };

  // Add range key condition if year filters provided
  if (yearFrom && yearTo) {
    keyConditionExpression += ' AND #year BETWEEN :yearFrom AND :yearTo';
    expressionAttributeValues[':yearFrom'] = yearFrom;
    expressionAttributeValues[':yearTo'] = yearTo;
  }

  const params = {
    TableName: 'Movies',
    IndexName: 'GenreYearIndex',
    KeyConditionExpression: keyConditionExpression,
    ExpressionAttributeNames: {
      '#year': 'year' // Handle reserved keywords
    },
    ExpressionAttributeValues: expressionAttributeValues,
    Limit: limit,
    // ConsistentRead is intentionally omitted: GSI queries support only
    // eventually consistent reads, and ConsistentRead: true is rejected
    ScanIndexForward: false // Sort descending by year
  };

  if (lastEvaluatedKey) {
    params.ExclusiveStartKey = lastEvaluatedKey;
  }

  try {
    const command = new QueryCommand(params);
    const response = await docClient.send(command);
    
    return {
      items: response.Items || [],
      count: response.Count,
      scannedCount: response.ScannedCount,
      lastEvaluatedKey: response.LastEvaluatedKey,
      hasMore: !!response.LastEvaluatedKey,
      consumedCapacity: response.ConsumedCapacity
    };
  } catch (error) {
    console.error('Query failed:', error);
    throw error;
  }
};

// Usage: Query all pages
const getAllMoviesByGenre = async (genre) => {
  const allItems = [];
  let lastEvaluatedKey = null;
  
  do {
    const result = await queryMoviesByGenrePaginated(genre, {
      limit: 100,
      lastEvaluatedKey
    });
    
    allItems.push(...result.items);
    lastEvaluatedKey = result.lastEvaluatedKey;
  } while (lastEvaluatedKey);
  
  return allItems;
};

Atomic Updates with Optimistic Locking:

import { UpdateCommand } from '@aws-sdk/lib-dynamodb';

const updateMovieWithOptimisticLock = async (movieId, updates, expectedVersion) => {
  const command = new UpdateCommand({
    TableName: 'Movies',
    Key: { movieId },
    UpdateExpression: `
      SET #title = :title,
          #rating = :rating,
          #updatedAt = :updatedAt,
          #version = :newVersion
      ADD #viewCount :increment
    `,
    ExpressionAttributeNames: {
      '#title': 'title',
      '#rating': 'rating',
      '#updatedAt': 'updatedAt',
      '#version': 'version',
      '#viewCount': 'viewCount'
    },
    ExpressionAttributeValues: {
      ':title': updates.title,
      ':rating': updates.rating,
      ':updatedAt': new Date().toISOString(),
      ':newVersion': expectedVersion + 1,
      ':increment': 1,
      ':expectedVersion': expectedVersion
    },
    ConditionExpression: 'attribute_exists(movieId) AND #version = :expectedVersion',
    ReturnValues: 'ALL_NEW',
    ReturnConsumedCapacity: 'TOTAL'
  });

  try {
    const response = await docClient.send(command);
    console.log('Movie updated:', response.Attributes);
    console.log('Consumed capacity:', response.ConsumedCapacity);
    return response.Attributes;
  } catch (error) {
    if (error.name === 'ConditionalCheckFailedException') {
      throw new Error('Version conflict: Movie was modified by another process');
    }
    throw error;
  }
};

Transaction Support for Complex Operations

import { TransactWriteCommand } from '@aws-sdk/lib-dynamodb';

const processOrderWithInventory = async (orderId, userId, items) => {
  const transactItems = [];

  // Create order record
  transactItems.push({
    Put: {
      TableName: 'Orders',
      Item: {
        orderId,
        userId,
        items,
        status: 'pending',
        createdAt: new Date().toISOString(),
        total: items.reduce((sum, item) => sum + item.price * item.quantity, 0)
      },
      ConditionExpression: 'attribute_not_exists(orderId)'
    }
  });

  // Decrement inventory for each item
  items.forEach(item => {
    transactItems.push({
      Update: {
        TableName: 'Inventory',
        Key: { productId: item.productId },
        UpdateExpression: 'SET #stock = #stock - :quantity',
        ExpressionAttributeNames: {
          '#stock': 'stock'
        },
        ExpressionAttributeValues: {
          ':quantity': item.quantity // Unused values would trigger a ValidationException
        },
        ConditionExpression: '#stock >= :quantity' // Prevent overselling
      }
    });
  });

  const command = new TransactWriteCommand({
    TransactItems: transactItems,
    ReturnConsumedCapacity: 'TOTAL',
    ReturnItemCollectionMetrics: 'SIZE'
  });

  try {
    const response = await docClient.send(command);
    console.log('Transaction successful:', {
      consumedCapacity: response.ConsumedCapacity,
      itemCollectionMetrics: response.ItemCollectionMetrics
    });
    return { orderId, status: 'success' };
  } catch (error) {
    if (error.name === 'TransactionCanceledException') {
      console.error('Transaction failed:', error.CancellationReasons);
      throw new Error('Order processing failed: Insufficient inventory or order exists');
    }
    throw error;
  }
};

Batch Operations for High-Throughput Workloads

import { BatchWriteCommand, BatchGetCommand } from '@aws-sdk/lib-dynamodb';

const batchWriteMovies = async (movies) => {
  const BATCH_SIZE = 25; // DynamoDB limit
  const batches = [];
  
  // Split into batches of 25
  for (let i = 0; i < movies.length; i += BATCH_SIZE) {
    batches.push(movies.slice(i, i + BATCH_SIZE));
  }

  const results = [];
  const unprocessedItems = [];

  for (const batch of batches) {
    const command = new BatchWriteCommand({
      RequestItems: {
        Movies: batch.map(movie => ({
          PutRequest: { Item: movie }
        }))
      },
      ReturnConsumedCapacity: 'TOTAL'
    });

    try {
      const response = await docClient.send(command);
      results.push(response);

      // Handle unprocessed items with exponential backoff
      if (response.UnprocessedItems?.Movies) {
        unprocessedItems.push(...response.UnprocessedItems.Movies);
      }
    } catch (error) {
      console.error('Batch write error:', error);
      throw error;
    }
  }

  // Retry unprocessed items
  if (unprocessedItems.length > 0) {
    console.log(`Retrying ${unprocessedItems.length} unprocessed items`);
    await new Promise(resolve => setTimeout(resolve, 1000)); // 1 second delay
    
    const retryCommand = new BatchWriteCommand({
      RequestItems: { Movies: unprocessedItems }
    });
    
    await docClient.send(retryCommand);
  }

  return {
    processed: movies.length - unprocessedItems.length,
    unprocessed: unprocessedItems.length,
    totalConsumedCapacity: results.reduce((sum, r) => 
      sum + (r.ConsumedCapacity?.[0]?.CapacityUnits || 0), 0
    )
  };
};

const batchGetMovies = async (movieIds) => {
  const BATCH_SIZE = 100; // DynamoDB limit
  const batches = [];
  
  for (let i = 0; i < movieIds.length; i += BATCH_SIZE) {
    batches.push(movieIds.slice(i, i + BATCH_SIZE));
  }

  const allItems = [];

  for (const batch of batches) {
    const command = new BatchGetCommand({
      RequestItems: {
        Movies: {
          Keys: batch.map(id => ({ movieId: id })),
          ProjectionExpression: 'movieId, title, #year, genre, rating',
          ExpressionAttributeNames: {
            '#year': 'year'
          }
        }
      },
      ReturnConsumedCapacity: 'TOTAL'
    });

    try {
      const response = await docClient.send(command);
      allItems.push(...(response.Responses?.Movies || []));
      
      // Handle unprocessed keys
      if (response.UnprocessedKeys?.Movies) {
        console.log('Unprocessed keys detected, retrying...');
        await new Promise(resolve => setTimeout(resolve, 500));
        // Implement retry logic here
      }
    } catch (error) {
      console.error('Batch get error:', error);
      throw error;
    }
  }

  return allItems;
};

Advanced Data Modeling: Single-Table Design

Single-table design is a DynamoDB optimization pattern that stores multiple entity types in one table, reducing cost and improving performance by eliminating cross-table operations.

Single-Table Design Pattern

// Entity definitions with composite keys
const createUser = (userId, userData) => ({
  PK: `USER#${userId}`,
  SK: `METADATA`,
  type: 'user',
  userId,
  email: userData.email,
  name: userData.name,
  createdAt: new Date().toISOString(),
  GSI1PK: `EMAIL#${userData.email}`,
  GSI1SK: `USER#${userId}`
});

const createOrder = (userId, orderId, orderData) => ({
  PK: `USER#${userId}`,
  SK: `ORDER#${orderId}`,
  type: 'order',
  orderId,
  userId,
  amount: orderData.amount,
  status: orderData.status,
  createdAt: new Date().toISOString(),
  GSI1PK: `ORDER#${orderId}`,
  GSI1SK: `USER#${userId}`
});

const createProduct = (productId, productData) => ({
  PK: `PRODUCT#${productId}`,
  SK: `METADATA`,
  type: 'product',
  productId,
  name: productData.name,
  price: productData.price,
  category: productData.category,
  GSI1PK: `CATEGORY#${productData.category}`,
  GSI1SK: `PRODUCT#${productId}`
});

// Query patterns enabled by single-table design
const getUserWithOrders = async (userId) => {
  const command = new QueryCommand({
    TableName: 'AppData',
    KeyConditionExpression: 'PK = :pk',
    ExpressionAttributeValues: {
      ':pk': `USER#${userId}`
    }
  });

  const response = await docClient.send(command);
  
  const user = response.Items.find(item => item.type === 'user');
  const orders = response.Items.filter(item => item.type === 'order');
  
  return { user, orders };
};

const getOrderWithUser = async (orderId) => {
  const command = new QueryCommand({
    TableName: 'AppData',
    IndexName: 'GSI1',
    KeyConditionExpression: 'GSI1PK = :pk',
    ExpressionAttributeValues: {
      ':pk': `ORDER#${orderId}`
    }
  });

  const response = await docClient.send(command);
  return response.Items;
};

Performance Optimization Strategies

Read Performance Optimization

1. Projection Expressions to Reduce Payload Size:

const getMovieOptimized = async (movieId) => {
  const command = new GetCommand({
    TableName: 'Movies',
    Key: { movieId },
    ProjectionExpression: 'movieId, title, #year, rating',
    ExpressionAttributeNames: {
      '#year': 'year'
    }
  });

  const response = await docClient.send(command);
  return response.Item;
};

Performance Impact:

  • Reduces network transfer time by 60-80% for large items
  • Lowers consumed read capacity units
  • Decreases client-side deserialization time

2. Consistent Read vs. Eventually Consistent:

// Eventually consistent read (default, 50% cheaper)
const getItemEventuallyConsistent = async (key) => {
  return docClient.send(new GetCommand({
    TableName: 'Movies',
    Key: key,
    ConsistentRead: false // Default
  }));
};

// Strongly consistent read (2× cost, guaranteed latest)
const getItemStronglyConsistent = async (key) => {
  return docClient.send(new GetCommand({
    TableName: 'Movies',
    Key: key,
    ConsistentRead: true
  }));
};

Decision Matrix:

| Scenario | Read Type | Rationale |
|----------|-----------|-----------|
| User profile display | Eventually consistent | Stale data acceptable |
| Financial transactions | Strongly consistent | Data accuracy critical |
| Inventory checks | Strongly consistent | Prevent overselling |
| Search results | Eventually consistent | Performance over accuracy |

Write Performance Optimization

1. Parallel Writes with Promise.all:

const updateMultipleMovies = async (updates) => {
  const promises = updates.map(update =>
    docClient.send(new UpdateCommand({
      TableName: 'Movies',
      Key: { movieId: update.movieId },
      UpdateExpression: 'SET #rating = :rating',
      ExpressionAttributeNames: { '#rating': 'rating' },
      ExpressionAttributeValues: { ':rating': update.rating }
    }))
  );

  try {
    const results = await Promise.allSettled(promises);
    
    const successful = results.filter(r => r.status === 'fulfilled').length;
    const failed = results.filter(r => r.status === 'rejected').length;
    
    console.log(`Updated ${successful} movies, ${failed} failures`);
    return { successful, failed };
  } catch (error) {
    console.error('Parallel write error:', error);
    throw error;
  }
};

2. Write Sharding for Hot Partitions:

const writeWithSharding = async (baseKey, data, shardCount = 10) => {
  const shard = Math.floor(Math.random() * shardCount);
  const shardedKey = `${baseKey}#${shard}`;
  
  await docClient.send(new PutCommand({
    TableName: 'HighVolumeTable',
    Item: {
      PK: shardedKey,
      SK: `DATA#${Date.now()}`,
      baseKey, // Preserve for aggregation queries
      ...data
    }
  }));
};

Caching Strategies

import { createClient } from 'redis';

const redis = createClient({
  url: process.env.REDIS_URL
});

await redis.connect();

const getCachedMovie = async (movieId) => {
  const cacheKey = `movie:${movieId}`;
  
  // Check cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    console.log('Cache hit');
    return JSON.parse(cached);
  }
  
  // Cache miss: fetch from DynamoDB
  console.log('Cache miss');
  const movie = await getMovieOptimized(movieId);
  
  if (movie) {
    // Cache for 1 hour
    await redis.setEx(cacheKey, 3600, JSON.stringify(movie));
  }
  
  return movie;
};

// Cache invalidation on update
const updateMovieWithCacheInvalidation = async (movieId, updates, expectedVersion) => {
  const updated = await updateMovieWithOptimisticLock(movieId, updates, expectedVersion);
  
  // Invalidate cache
  await redis.del(`movie:${movieId}`);
  
  return updated;
};

Monitoring and Observability

CloudWatch Integration

import { CloudWatchClient, PutMetricDataCommand } from '@aws-sdk/client-cloudwatch';

const cloudwatch = new CloudWatchClient({ region: 'us-east-1' });

const publishMetric = async (metricName, value, unit = 'Count') => {
  const command = new PutMetricDataCommand({
    Namespace: 'MyApp/DynamoDB',
    MetricData: [
      {
        MetricName: metricName,
        Value: value,
        Unit: unit,
        Timestamp: new Date(),
        Dimensions: [
          { Name: 'Environment', Value: process.env.NODE_ENV },
          { Name: 'TableName', Value: 'Movies' }
        ]
      }
    ]
  });

  try {
    await cloudwatch.send(command);
  } catch (error) {
    console.error('Failed to publish metric:', error);
  }
};

// Instrumented operation
const getMovieWithMetrics = async (movieId) => {
  const startTime = Date.now();
  
  try {
    const movie = await getMovieOptimized(movieId);
    const duration = Date.now() - startTime;
    
    await Promise.all([
      publishMetric('GetMovie.Success', 1),
      publishMetric('GetMovie.Duration', duration, 'Milliseconds'),
      publishMetric('GetMovie.Found', movie ? 1 : 0) // 1 when the item exists
    ]);
    
    return movie;
  } catch (error) {
    await publishMetric('GetMovie.Error', 1);
    throw error;
  }
};

Key Metrics to Monitor

| Metric | Threshold | Action |
|--------|-----------|--------|
| ConsumedReadCapacityUnits | >80% of provisioned | Scale up or enable auto scaling |
| ConsumedWriteCapacityUnits | >80% of provisioned | Scale up or enable auto scaling |
| UserErrors | >1% of requests | Investigate client code |
| SystemErrors | >0.1% of requests | Contact AWS Support |
| ThrottledRequests | >0 | Increase capacity or fix hot partitions |
| SuccessfulRequestLatency | >100ms at p99 | Optimize query patterns |
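
A minimal sketch wiring the throttling row above to a CloudWatch alarm, reusing the cloudwatch client from the previous block (the SNS topic ARN is a placeholder):

import { PutMetricAlarmCommand } from '@aws-sdk/client-cloudwatch';

// Alarm on any throttled request against the Movies table
await cloudwatch.send(new PutMetricAlarmCommand({
  AlarmName: 'Movies-ThrottledRequests',
  Namespace: 'AWS/DynamoDB',
  MetricName: 'ThrottledRequests',
  Dimensions: [{ Name: 'TableName', Value: 'Movies' }],
  Statistic: 'Sum',
  Period: 60,
  EvaluationPeriods: 1,
  Threshold: 0,
  ComparisonOperator: 'GreaterThanThreshold',
  AlarmActions: ['arn:aws:sns:us-east-1:123456789012:ops-alerts'] // placeholder ARN
}));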

Error Handling and Resilience

Comprehensive Error Handling

const handleDynamoDBError = async (operation, maxRetries = 3) => {
  let retryCount = 0;
  
  while (retryCount < maxRetries) {
    try {
      return await operation();
    } catch (error) {
      console.error(`Attempt ${retryCount + 1} failed:`, error.name);
      
      switch (error.name) {
        case 'ProvisionedThroughputExceededException':
        case 'ThrottlingException':
          // Exponential backoff
          const delay = Math.min(1000 * Math.pow(2, retryCount), 10000);
          console.log(`Throttled, retrying in ${delay}ms`);
          await new Promise(resolve => setTimeout(resolve, delay));
          retryCount++;
          break;
          
        case 'ConditionalCheckFailedException':
          // Business logic error, don't retry
          throw new Error('Conditional check failed');
          
        case 'ValidationException':
          // Client error, don't retry
          throw new Error(`Validation failed: ${error.message}`);
          
        case 'ResourceNotFoundException':
          throw new Error('Table not found');
          
        case 'ItemCollectionSizeLimitExceededException':
          throw new Error('Item collection too large (>10GB)');
          
        case 'TransactionConflictException':
          // Concurrent transaction conflict, retry
          await new Promise(resolve => setTimeout(resolve, 100));
          retryCount++;
          break;
          
        case 'InternalServerError':
        case 'ServiceUnavailable':
          // AWS service issue, retry with backoff
          await new Promise(resolve => setTimeout(resolve, 1000));
          retryCount++;
          break;
          
        default:
          throw error;
      }
    }
  }
  
  throw new Error(`Operation failed after ${maxRetries} retries`);
};

// Usage
const resilientGetMovie = (movieId) => 
  handleDynamoDBError(() => getMovieOptimized(movieId));

Circuit Breaker Pattern

class CircuitBreaker {
  constructor(threshold = 5, timeout = 60000) {
    this.failureCount = 0;
    this.threshold = threshold;
    this.timeout = timeout;
    this.state = 'CLOSED'; // CLOSED, OPEN, HALF_OPEN
    this.nextAttempt = Date.now();
  }

  async execute(operation) {
    if (this.state === 'OPEN') {
      if (Date.now() < this.nextAttempt) {
        throw new Error('Circuit breaker is OPEN');
      }
      this.state = 'HALF_OPEN';
    }

    try {
      const result = await operation();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  onSuccess() {
    this.failureCount = 0;
    this.state = 'CLOSED';
  }

  onFailure() {
    this.failureCount++;
    if (this.failureCount >= this.threshold) {
      this.state = 'OPEN';
      this.nextAttempt = Date.now() + this.timeout;
      console.log(`Circuit breaker opened for ${this.timeout}ms`);
    }
  }
}

const breaker = new CircuitBreaker(5, 60000);

const protectedGetMovie = async (movieId) => {
  return breaker.execute(() => getMovieOptimized(movieId));
};

Cost Optimization Strategies

Capacity Mode Selection Framework

Decision Tree:

Is traffic predictable?
├── Yes
│   └── Provisioned with auto scaling
│       └── Save 40-60% vs on-demand
└── No
    ├── Is traffic intermittent (<10% of the time)?
    │   └── On-demand
    │       └── Pay only for actual usage
    └── Is traffic sustained but variable?
        └── Provisioned with aggressive auto scaling
            └── Balance cost and performance

Reserved Capacity for Predictable Workloads

For tables with sustained traffic, purchase reserved capacity (a rough break-even sketch follows these figures):

  • 1-year commitment: 26% discount
  • 3-year commitment: 49% discount
  • Break-even point: ~30% capacity utilization
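
A back-of-the-envelope check using the provisioned write price quoted earlier ($0.00065 per WCU-hour) and the 1-year discount above; the workload numbers are illustrative:

// Illustrative: 1,000 WCUs running 24/7 for one year
const wcus = 1000;
const hoursPerYear = 24 * 365;
const pricePerWcuHour = 0.00065; // provisioned list price from this guide

const standardCost = wcus * hoursPerYear * pricePerWcuHour; // ≈ $5,694
const reservedCost = standardCost * (1 - 0.26);             // ≈ $4,214 at the 26% discount

console.log({ standardCost, reservedCost, savings: standardCost - reservedCost });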

Data Archival Strategy

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { ScanCommand, DeleteCommand } from '@aws-sdk/lib-dynamodb';

const s3 = new S3Client({ region: 'us-east-1' });

const archiveOldMovies = async (cutoffYear) => {
  let lastEvaluatedKey = null;
  let archivedCount = 0;

  do {
    // Scan for movies older than the cutoff year. A Query on YearIndex
    // cannot express this: Query requires an equality condition on the
    // partition key, and 'year' is that index's hash key.
    const result = await docClient.send(new ScanCommand({
      TableName: 'Movies',
      FilterExpression: '#year < :cutoffYear',
      ExpressionAttributeNames: { '#year': 'year' },
      ExpressionAttributeValues: { ':cutoffYear': cutoffYear },
      ...(lastEvaluatedKey && { ExclusiveStartKey: lastEvaluatedKey })
    }));

    if (result.Items.length > 0) {
      // Archive to S3
      const archiveKey = `archive/movies-${Date.now()}.json`;
      await s3.send(new PutObjectCommand({
        Bucket: 'my-archive-bucket',
        Key: archiveKey,
        Body: JSON.stringify(result.Items),
        StorageClass: 'GLACIER_IR' // Instant retrieval glacier
      }));
  
      // Delete from DynamoDB
      for (const movie of result.Items) {
        await docClient.send(new DeleteCommand({
          TableName: 'Movies',
          Key: { movieId: movie.movieId }
        }));
      }

      archivedCount += result.Items.length;
    }

    lastEvaluatedKey = result.LastEvaluatedKey;
  } while (lastEvaluatedKey);

  console.log(`Archived ${archivedCount} movies to S3`);
  return archivedCount;
};

Cost Comparison:

| Storage Type | Cost per GB/month | Use Case |
|--------------|-------------------|----------|
| DynamoDB | $0.25 | Active data |
| S3 Standard | $0.023 | Frequently accessed archives |
| S3 Glacier Instant Retrieval | $0.004 | Occasional-access archives |
| S3 Glacier Deep Archive | $0.00099 | Compliance/long-term storage |

Table Optimization Checklist

// 1. Use sparse indexes to reduce index size
const createSparseIndex = {
  IndexName: 'ActiveUsersIndex',
  KeySchema: [{ AttributeName: 'status', KeyType: 'HASH' }],
  Projection: { ProjectionType: 'KEYS_ONLY' },
  // Only items with 'status' attribute are indexed
};

// 2. Use KEYS_ONLY projection when full attributes not needed
// Reduces GSI storage by 70-90%

// 3. Compress large attributes before storage
import { gzip, gunzip } from 'zlib';
import { promisify } from 'util';

const gzipAsync = promisify(gzip);
const gunzipAsync = promisify(gunzip);

const putCompressedItem = async (item) => {
  const largeData = JSON.stringify(item.metadata);
  const compressed = await gzipAsync(largeData);
  
  await docClient.send(new PutCommand({
    TableName: 'Movies',
    Item: {
      ...item,
      metadata: compressed.toString('base64'),
      compressed: true
    }
  }));
};

// 4. Use TTL to automatically delete expired items
// Add ttl attribute with Unix timestamp
const itemWithTTL = {
  movieId: 'movie123',
  title: 'Temporary Movie',
  ttl: Math.floor(Date.now() / 1000) + (86400 * 30) // 30 days
};

Security Best Practices

IAM Policy Design

Principle of Least Privilege:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Movies",
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": ["${aws:userid}"]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Movies/index/*"
    }
  ]
}

Fine-Grained Access Control:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Movies",
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:Attributes": [
            "movieId",
            "title",
            "year",
            "genre"
          ]
        },
        "StringEqualsIfExists": {
          "dynamodb:Select": "SPECIFIC_ATTRIBUTES"
        }
      }
    }
  ]
}

VPC Endpoint Configuration

// Client configuration for a DynamoDB interface (PrivateLink) endpoint;
// gateway VPC endpoints need no endpoint override, routing is via the route table
const vpcClient = new DynamoDBClient({
  region: 'us-east-1',
  endpoint: 'https://vpce-1234567-abcdefgh.dynamodb.us-east-1.vpce.amazonaws.com',
  tls: true
});

// No internet gateway required
// Reduces data transfer costs
// Improves security posture

Encryption Configuration

import { KMSClient, GenerateDataKeyCommand } from '@aws-sdk/client-kms';
import { createCipheriv, randomBytes } from 'crypto';

const kms = new KMSClient({ region: 'us-east-1' });

// Client-side envelope encryption for sensitive data (AES-256-GCM)
const encryptSensitiveData = async (data, keyId) => {
  const command = new GenerateDataKeyCommand({
    KeyId: keyId,
    KeySpec: 'AES_256'
  });

  const { Plaintext, CiphertextBlob } = await kms.send(command);

  // Encrypt with the plaintext data key; persist only the KMS-encrypted copy of the key
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', Plaintext, iv);
  const ciphertext = Buffer.concat([cipher.update(data, 'utf8'), cipher.final()]);

  return {
    // Pack IV + auth tag + ciphertext so decryption has everything it needs
    encryptedData: Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString('base64'),
    encryptedKey: Buffer.from(CiphertextBlob).toString('base64')
  };
};

// Store encrypted data in DynamoDB
const putEncryptedMovie = async (movie) => {
  const encrypted = await encryptSensitiveData(
    movie.sensitiveField,
    process.env.KMS_KEY_ID
  );
  
  await docClient.send(new PutCommand({
    TableName: 'Movies',
    Item: {
      ...movie,
      sensitiveField: encrypted.encryptedData,
      encryptionKey: encrypted.encryptedKey
    }
  }));
};

Migration Strategies

Migration from Relational Databases

Phase 1: Data Model Translation (Week 1-2)

// Relational schema (PostgreSQL)
// Users table: id, email, name
// Orders table: id, user_id, amount, status
// Order_items table: id, order_id, product_id, quantity

// DynamoDB single-table design
const translateRelationalToDynamoDB = {
  // User entity
  user: (userId, userData) => ({
    PK: `USER#${userId}`,
    SK: `METADATA`,
    type: 'user',
    userId,
    email: userData.email,
    name: userData.name,
    GSI1PK: `EMAIL#${userData.email}`,
    GSI1SK: `USER#${userId}`
  }),
  
  // Order entity
  order: (userId, orderId, orderData) => ({
    PK: `USER#${userId}`,
    SK: `ORDER#${orderId}`,
    type: 'order',
    orderId,
    userId,
    amount: orderData.amount,
    status: orderData.status,
    GSI1PK: `ORDER#${orderId}`,
    GSI1SK: `STATUS#${orderData.status}`
  }),
  
  // Order item entity
  orderItem: (orderId, itemId, itemData) => ({
    PK: `ORDER#${orderId}`,
    SK: `ITEM#${itemId}`,
    type: 'order_item',
    itemId,
    orderId,
    productId: itemData.productId,
    quantity: itemData.quantity,
    GSI1PK: `PRODUCT#${itemData.productId}`,
    GSI1SK: `ITEM#${itemId}`
  })
};

Phase 2: Parallel Writes (Week 2-3)

import { Client as PgClient } from 'pg';

const pgClient = new PgClient({
  host: process.env.PG_HOST,
  database: process.env.PG_DATABASE,
  user: process.env.PG_USER,
  password: process.env.PG_PASSWORD
});

await pgClient.connect();

// Dual-write pattern
const createOrderDualWrite = async (userId, orderData) => {
  // Write to PostgreSQL (source of truth during migration)
  const pgResult = await pgClient.query(
    'INSERT INTO orders (user_id, amount, status) VALUES ($1, $2, $3) RETURNING id',
    [userId, orderData.amount, orderData.status]
  );
  
  const orderId = pgResult.rows[0].id;
  
  // Asynchronously replicate to DynamoDB (best-effort)
  try {
    const dynamoItem = translateRelationalToDynamoDB.order(userId, orderId, orderData);
    await docClient.send(new PutCommand({
      TableName: 'AppData',
      Item: dynamoItem
    }));
  } catch (error) {
    // Log replication failure, but don't fail the operation
    console.error('DynamoDB replication failed:', error);
    // Could queue for retry via SQS
  }
  
  return orderId;
};

Phase 3: Backfill Historical Data (Week 3-4)

const backfillOrders = async () => {
  const batchSize = 1000;
  let offset = 0;
  let migratedCount = 0;

  while (true) {
    // Query PostgreSQL in batches
    const result = await pgClient.query(
      'SELECT * FROM orders ORDER BY id LIMIT $1 OFFSET $2',
      [batchSize, offset]
    );

    if (result.rows.length === 0) break;

    // Convert and batch write to DynamoDB
    const dynamoItems = result.rows.map(row =>
      translateRelationalToDynamoDB.order(row.user_id, row.id, {
        amount: row.amount,
        status: row.status
      })
    );

    // Reuses the earlier batch-write helper; its hard-coded 'Movies' table
    // name would need to be parameterized to target 'AppData' here
    await batchWriteMovies(dynamoItems);

    migratedCount += result.rows.length;
    offset += batchSize;
    
    console.log(`Migrated ${migratedCount} orders`);
    
    // Rate limiting to avoid overwhelming DynamoDB
    await new Promise(resolve => setTimeout(resolve, 100));
  }

  console.log(`Total migrated: ${migratedCount} orders`);
};

Phase 4: Cutover and Decommission (Week 5)

// Switch reads to DynamoDB
const getOrder = async (orderId) => {
  try {
    // Primary read from DynamoDB
    const result = await docClient.send(new QueryCommand({
      TableName: 'AppData',
      IndexName: 'GSI1',
      KeyConditionExpression: 'GSI1PK = :pk',
      ExpressionAttributeValues: { ':pk': `ORDER#${orderId}` }
    }));
    
    return result.Items[0];
  } catch (error) {
    // Fallback to PostgreSQL during transition
    console.warn('DynamoDB read failed, falling back to PostgreSQL');
    const pgResult = await pgClient.query(
      'SELECT * FROM orders WHERE id = $1',
      [orderId]
    );
    return pgResult.rows[0];
  }
};

// After validation, remove PostgreSQL fallback and decommission

Migration Timeline and Checklist

| Phase | Duration | Key Activities | Success Criteria |
|-------|----------|----------------|------------------|
| Data modeling | 1-2 weeks | Design primary keys, GSIs, access patterns | Query patterns validated |
| Dual write | 1-2 weeks | Implement parallel writes, monitor replication lag | <1% write failures |
| Backfill | 1-2 weeks | Historical data migration, validation | 100% data consistency |
| Read cutover | 1 week | Switch reads to DynamoDB, fallback logic | <10ms p99 latency |
| Decommission | 1 week | Remove old database, cleanup | Zero dependencies |

Business Impact Analysis

DynamoDB delivers quantifiable business value through infrastructure cost reduction, operational efficiency gains, and improved application performance characteristics.

Total Cost of Ownership Analysis

Infrastructure Cost Comparison (1000 req/sec, 10TB data, 3-year projection):

| Database | Monthly Cost | Annual Cost | 3-Year TCO |
|----------|--------------|-------------|------------|
| Self-managed MongoDB on EC2 | $4,200 | $50,400 | $151,200 |
| Amazon DocumentDB | $3,800 | $45,600 | $136,800 |
| DynamoDB On-Demand | $2,100 | $25,200 | $75,600 |
| DynamoDB Provisioned | $1,400 | $16,800 | $50,400 |

Additional Cost Savings:

  • Zero operational overhead: Eliminate 0.5 FTE DBA ($50,000/year)
  • Reduced incident response: 90% fewer database-related outages
  • Higher development velocity: 30% faster feature delivery (schema flexibility)

Estimated 3-Year Savings: $100,000+ for mid-scale applications

Performance Impact Metrics

Measured Improvements (migration from MongoDB Atlas M30):

| Metric | Before (MongoDB) | After (DynamoDB) | Improvement |
|--------|------------------|------------------|-------------|
| P50 read latency | 8ms | 3ms | 63% faster |
| P99 read latency | 45ms | 12ms | 73% faster |
| P50 write latency | 12ms | 4ms | 67% faster |
| Maximum throughput | 3,000 req/sec | 40,000+ req/sec | 13× capacity |
| Availability | 99.95% | 99.999% | 50× less downtime |

Business Continuity Benefits

Disaster Recovery Capabilities:

  • RPO (Recovery Point Objective): 1 second with point-in-time recovery (a restore sketch follows this list)
  • RTO (Recovery Time Objective): <1 minute for automatic failover
  • Multi-region replication: Sub-second cross-region replication latency
  • Backup costs: $0.10/GB-month with automatic retention management
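
A point-in-time restore always materializes a new table rather than overwriting in place. A minimal sketch (table names and timestamp are placeholders):

import { DynamoDBClient, RestoreTableToPointInTimeCommand } from '@aws-sdk/client-dynamodb';

const ddb = new DynamoDBClient({ region: 'us-east-1' });

// Restore a table to a specific second within the 35-day PITR window
await ddb.send(new RestoreTableToPointInTimeCommand({
  SourceTableName: 'Movies',          // placeholder
  TargetTableName: 'Movies-restored', // restores create a new table
  RestoreDateTime: new Date('2025-10-09T10:00:00Z') // placeholder timestamp
}));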

Architectural Decision Record

Title: Adopt Amazon DynamoDB for Primary Application Data Store

Context: Our current PostgreSQL deployment requires manual sharding at 500K users. Daily active users growing at 15%/month. Database infrastructure consuming 40% of engineering operations time.

Decision: Migrate primary data store to Amazon DynamoDB with single-table design pattern.

Rationale:

  1. Scalability: Automatic horizontal scaling eliminates manual shard management
  2. Availability: 99.999% SLA reduces customer-impacting outages by 90%
  3. Performance: Single-digit millisecond latency enables real-time features
  4. Cost: 30% infrastructure cost reduction through managed service model

Consequences:

  • Positive: Zero operational overhead, predictable performance, simplified disaster recovery
  • Negative: Learning curve for single-table design, limited ad-hoc query capabilities
  • Mitigation: DynamoDB Streams to ElasticSearch for analytics, comprehensive training program

Production Deployment Checklist

// Pre-deployment validation
const productionReadinessChecklist = {
  infrastructure: [
    '✓ Point-in-time recovery enabled',
    '✓ Auto-scaling policies configured',
    '✓ VPC endpoints configured for private access',
    '✓ KMS encryption keys provisioned',
    '✓ CloudWatch alarms configured (throttling, errors, latency)',
    '✓ DynamoDB Streams enabled for CDC',
    '✓ Global tables configured for multi-region',
    '✓ Reserved capacity purchased (if applicable)'
  ],
  application: [
    '✓ Exponential backoff retry logic implemented',
    '✓ Circuit breaker pattern for DynamoDB calls',
    '✓ Connection pooling configured correctly',
    '✓ Batch operations implemented for bulk writes',
    '✓ Conditional expressions prevent race conditions',
    '✓ Projection expressions minimize data transfer',
    '✓ Caching layer for frequently accessed data'
  ],
  security: [
    '✓ IAM roles use least privilege principle',
    '✓ Encryption at rest enabled with KMS',
    '✓ TLS 1.2+ enforced for all connections',
    '✓ VPC endpoint policies configured',
    '✓ CloudTrail logging enabled for audit',
    '✓ Secrets Manager for credential management',
    '✓ Resource tags for cost allocation'
  ],
  monitoring: [
    '✓ Custom CloudWatch metrics for business logic',
    '✓ X-Ray tracing enabled for request analysis',
    '✓ Cost anomaly detection alerts configured',
    '✓ Performance baseline established',
    '✓ Runbook documentation complete',
    '✓ On-call rotation includes DynamoDB experts'
  ],
  testing: [
    '✓ Load testing completed at 3× expected peak',
    '✓ Failure injection testing (chaos engineering)',
    '✓ Disaster recovery drill successful',
    '✓ Data consistency validation across regions',
    '✓ Migration rollback plan tested',
    '✓ Performance regression tests automated'
  ]
};

Executive Summary

Amazon DynamoDB represents the industry standard for building infinitely scalable NoSQL applications with predictable performance and zero operational overhead. Processing 20+ million requests per second across AWS’s global infrastructure, DynamoDB powers mission-critical workloads from Fortune 500 enterprises to hyper-growth startups.

Three Critical Capabilities:

1. Predictable Performance at Any Scale: Single-digit millisecond latency remains constant from zero to millions of requests per second. Unlike traditional databases that degrade under load, DynamoDB's distributed architecture ensures consistent performance characteristics regardless of table size or request volume.

2. Operational Simplicity: The fully managed service eliminates cluster management, replication configuration, backup orchestration, and scaling operations. Engineering teams redirect 40+ hours/month from database operations to feature development.

3. Enterprise-Grade Reliability: A 99.999% availability SLA, point-in-time recovery, and automatic multi-region replication provide business continuity capabilities that would require significant engineering investment with self-managed alternatives.

  • Migration timeline: 4-6 weeks for a typical production application
  • Support model: AWS Enterprise Support with 15-minute response time for critical issues
  • Estimated ROI: $100,000+ over 3 years for mid-scale applications

Implementation Priority Matrix:

| Priority | Action | Timeline |
|----------|--------|----------|
| P0 | Design primary keys and GSIs | Week 1 |
| P0 | Implement retry logic and error handling | Week 1-2 |
| P1 | Configure monitoring and alerting | Week 2 |
| P1 | Enable point-in-time recovery and encryption | Week 2 |
| P2 | Optimize for cost with provisioned capacity | Week 3 |
| P2 | Implement caching layer | Week 3-4 |

Implementation Quickstart

// package.json
{
  "name": "dynamodb-production-app",
  "version": "1.0.0",
  "engines": {
    "node": ">=18.0.0"
  },
  "dependencies": {
    "@aws-sdk/client-dynamodb": "^3.450.0",
    "@aws-sdk/lib-dynamodb": "^3.450.0",
    "@aws-sdk/client-cloudwatch": "^3.450.0"
  },
  "scripts": {
    "start": "node --enable-source-maps src/server.js",
    "migrate": "node scripts/migrate.js",
    "test": "node --test tests/**/*.test.js"
  }
}
// src/config/dynamodb.js
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient } from '@aws-sdk/lib-dynamodb';

const client = new DynamoDBClient({
  region: process.env.AWS_REGION || 'us-east-1',
  maxAttempts: 3,
  retryMode: 'adaptive'
});

export const docClient = DynamoDBDocumentClient.from(client, {
  marshallOptions: {
    removeUndefinedValues: true,
    convertEmptyValues: true
  }
});
// src/repositories/movie-repository.js
import { GetCommand, PutCommand, QueryCommand } from '@aws-sdk/lib-dynamodb';
import { docClient } from '../config/dynamodb.js';

export class MovieRepository {
  constructor(tableName = 'Movies') {
    this.tableName = tableName;
  }

  async getById(movieId) {
    const command = new GetCommand({
      TableName: this.tableName,
      Key: { movieId }
    });
    
    const response = await docClient.send(command);
    return response.Item;
  }

  async create(movie) {
    const command = new PutCommand({
      TableName: this.tableName,
      Item: {
        ...movie,
        createdAt: new Date().toISOString()
      },
      ConditionExpression: 'attribute_not_exists(movieId)'
    });
    
    await docClient.send(command);
    return movie;
  }

  async queryByGenre(genre, limit = 20) {
    const command = new QueryCommand({
      TableName: this.tableName,
      IndexName: 'GenreYearIndex',
      KeyConditionExpression: 'genre = :genre',
      ExpressionAttributeValues: { ':genre': genre },
      Limit: limit
    });
    
    const response = await docClient.send(command);
    return response.Items;
  }
}

Next Steps:

  1. Install AWS SDK v3: npm install @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb
  2. Configure IAM credentials via environment variables or AWS CLI
  3. Design primary keys based on query patterns
  4. Implement repository pattern for data access abstraction
  5. Deploy with auto-scaling enabled and CloudWatch monitoring
  6. Conduct load testing at 3× expected peak traffic

DynamoDB represents the definitive solution for building infinitely scalable, operationally simple data stores. The architectural patterns, implementation strategies, and operational best practices in this guide provide the foundation for production-grade applications that scale from startup to enterprise without infrastructure complexity.

✨ Thank you for reading, and I hope you found it helpful. I sincerely request your feedback in the comments section.