migrate-postgres-tables-to-hypertables, from timescale/pg-aiguide: MCP server and Claude plugin for Postgres skills and documentation. Helps AI coding tools generate better PostgreSQL code.

Install the skill with:

```shell
npx skills add https://github.com/timescale/pg-aiguide --skill migrate-postgres-tables-to-hypertables
```
# PostgreSQL to TimescaleDB Hypertable Migration

Migrate identified PostgreSQL tables to TimescaleDB hypertables with optimal configuration, migration planning, and validation.

Prerequisites: tables already identified as hypertable candidates (use the companion "find-hypertable-candidates" skill if needed).
## Step 1: Optimal Configuration

### Partition Column Selection

```sql
-- Find potential partition columns
-- (information_schema reports SQL-standard type names, e.g.
--  'timestamp without time zone' rather than 'timestamp')
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_name = 'your_table_name'
  AND data_type IN ('timestamp without time zone', 'timestamp with time zone',
                    'bigint', 'integer', 'date')
ORDER BY ordinal_position;
```
Requirements: time-based (TIMESTAMP / TIMESTAMPTZ / DATE) or sequential integer (INT / BIGINT). The column should represent when the event actually occurred, or a sequential ordering.

Common choices:

- `timestamp`, `created_at`, `event_time` - when the event occurred
- `id`, `sequence_number` - auto-increment (for sequential data without timestamps)
- `ingested_at` - less ideal; use only if it is the primary query dimension
- `updated_at` - AVOID (records are updated out of order, which breaks chunk distribution) unless it is the primary query dimension
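For the common time-column case, the conversion itself is a single call. A minimal sketch, assuming placeholder names `your_table_name` and `created_at`:

```sql
-- Convert an existing table into a hypertable, partitioned by the chosen
-- time column. migrate_data is required if the table already contains rows.
SELECT create_hypertable(
  'your_table_name',
  'created_at',
  chunk_time_interval => INTERVAL '7 days',  -- the safe default
  migrate_data        => true
);
```

Note that `migrate_data => true` rewrites existing rows into chunks inside the same transaction, which can take a while on large tables.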
### Special Case: Table with Both an ID and a Timestamp

When a table has a sequential ID (primary key) AND a timestamp that correlate:

```sql
-- Partition by ID, enable minmax sparse indexes on the timestamp
SELECT create_hypertable('orders', 'id', chunk_time_interval => 1000000);
ALTER TABLE orders SET (
  timescaledb.sparse_index = 'minmax(created_at),...'
);
```

Sparse indexes on the time column let queries skip compressed blocks that fall outside the queried time range.

Use when: the ID correlates with time (newer records have higher IDs), ID-based lookups are needed, and time-range queries are also common.
### Chunk Interval Selection

```sql
-- Ensure statistics are current
ANALYZE your_table_name;

-- Estimate index size per time unit
WITH time_range AS (
  SELECT
    MIN(timestamp_column) AS min_time,
    MAX(timestamp_column) AS max_time,
    EXTRACT(EPOCH FROM (MAX(timestamp_column) - MIN(timestamp_column))) / 3600 AS total_hours
  FROM your_table_name
),
total_index_size AS (
  -- pg_stat_user_indexes exposes relname/indexrelid (not tablename/indexname)
  SELECT SUM(pg_relation_size(indexrelid)) AS total_index_bytes
  FROM pg_stat_user_indexes
  WHERE schemaname = 'your_schema' AND relname = 'your_table_name'
)
SELECT
  pg_size_pretty((tis.total_index_bytes / tr.total_hours)::bigint) AS index_size_per_hour
FROM time_range tr, total_index_size tis;
```
Guidelines:

- Target: indexes of recent chunks < 25% of RAM
- Default: IMPORTANT: keep the default of 7 days if unsure
- Range: 1 hour minimum, 30 days maximum

Example: 32GB RAM → target 8GB for recent chunk indexes. If index_size_per_hour = 200MB:

- 1 hour chunks: 200MB chunk index size × 40 recent = 8GB ✓
- 6 hour chunks: 1.2GB chunk index size × 7 recent = 8.4GB ✓
- 1 day chunks: 4.8GB chunk index size × 2 recent = 9.6GB ⚠️

Choose the largest interval that keeps 2+ recent chunk indexes under the target.
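Once an interval is chosen, it can be set at creation time or changed later. A sketch, assuming the 6-hour interval and placeholder names from the example above:

```sql
-- Set the chunk interval when creating the hypertable...
SELECT create_hypertable('your_table_name', 'timestamp_column',
                         chunk_time_interval => INTERVAL '6 hours',
                         migrate_data => true);

-- ...or change it on an existing hypertable
-- (applies only to chunks created after this call)
SELECT set_chunk_time_interval('your_table_name', INTERVAL '6 hours');
```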
### Primary Key / Unique Constraint Compatibility

```sql
-- Check existing primary key / unique constraints
-- (note: contype IN (...) avoids the AND/OR precedence trap, which would
--  otherwise return unique constraints from every table in the database)
SELECT conname, pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE conrelid = 'your_table_name'::regclass
  AND contype IN ('p', 'u');
```
Rule: any PRIMARY KEY or UNIQUE constraint must include the partition column.

Actions:

- No PK/UNIQUE: no changes needed
- PK/UNIQUE includes partition column: no changes needed
- PK/UNIQUE excludes partition column: ⚠️ ASK USER PERMISSION to modify PK/UNIQUE

Example user prompts, if needed:

"Primary key (id) doesn't include partition column (timestamp). Must modify to PRIMARY KEY (id, timestamp) to convert to hypertable. This may break application code. Is this acceptable?"

"Unique constraint (id) doesn't include partition column (timestamp). Must modify to UNIQUE (id, timestamp) to convert to hypertable. This may break application code. Is this acceptable?"
If the user accepts, modify the constraint:

```sql
BEGIN;
ALTER TABLE your_table_name DROP CONSTRAINT existing_pk_name;
ALTER TABLE your_table_name ADD PRIMARY KEY (existing_columns, partition_column);
COMMIT;
```
If the user does not accept, you should NOT migrate the table.
IMPORTANT: DO NOT modify the primary key/unique constraint without user permission.
### Compression Configuration

For detailed segment_by and order_by selection, see the "setup-timescaledb-hypertables" skill. Quick reference:

segment_by: the most common WHERE filter with >100 rows per value per chunk

- IoT: `device_id`
- Finance: `symbol`
- Analytics: `user_id` or `session_id`
```sql
-- Analyze cardinality for segment_by selection
-- (replace your_column with a candidate column; note that PostgreSQL's
--  two-argument ROUND requires numeric, not float)
SELECT COUNT(DISTINCT your_column) AS unique_values,
       ROUND(COUNT(*)::numeric / COUNT(DISTINCT your_column), 2) AS avg_rows_per_value
FROM your_table_name;
```
order_by: Usually timestamp DESC. The (segment_by, order_by) combination should form a natural time-series progression.
- If column has <100 rows/chunk (too low for segment_by), prepend to order_by: `order_by='low_de
...
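Putting the quick reference together, a minimal compression setup might look like the sketch below. The `device_id` segment column, `timestamp` order column, and 7-day policy window are illustrative assumptions, not values prescribed by this guide:

```sql
-- Enable compression with the chosen segment_by / order_by
ALTER TABLE your_table_name SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby   = 'timestamp DESC'
);

-- Compress chunks automatically once they are older than 7 days
SELECT add_compression_policy('your_table_name', INTERVAL '7 days');
```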