Redshift Health

Redshift Alerts Management

The Redshift Alerts page surfaces active performance alerts from Redshift's internal alert event log. These are distinct from the platform monitoring alerts — they come directly from the Redshift query optimizer and indicate conditions that degraded query execution performance.

Alert Source

All Redshift performance alerts are read from stl_alert_event_log, a Redshift system table that records optimizer-detected performance events. The dashboard shows events from the last 7 days, grouped by table.
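The shape of the underlying query can be sketched as a parameterized SQL builder. The column names (`perm_table_name`, `event_time`) come from the Redshift `STL_ALERT_EVENT_LOG` system-table documentation; the aggregation shape and the builder function itself are illustrative assumptions, not the dashboard's actual implementation.

```python
# Hypothetical sketch of the dashboard's source query.
# Columns follow the STL_ALERT_EVENT_LOG docs; the grouping
# and window parameterization are assumptions.
def build_alert_summary_sql(days: int = 7) -> str:
    """Return SQL that groups alert events by table over a trailing window."""
    return f"""
        SELECT trim(perm_table_name) AS table_name,
               count(*)              AS occurrence_count,
               max(event_time)       AS last_seen
        FROM stl_alert_event_log
        WHERE event_time >= dateadd(day, -{days}, getdate())
        GROUP BY 1
        ORDER BY occurrence_count DESC;
    """

print(build_alert_summary_sql(7))
```

Passing a different `days` value is how the time range picker (described below) would widen or narrow the window.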

Alert Types (Bitmask Decoding)

Each alert event has a bitmask field that indicates which alert conditions were detected. The UI decodes this bitmask and displays human-readable type labels:

| Bit | Type Name | What It Means | Recommended Action |
|-----|-----------|---------------|--------------------|
| Bit 0 (1) | Sortkey | Data was not stored in sort key order, requiring a full table scan | Run VACUUM SORT ONLY on the table |
| Bit 1 (2) | Deletes | High ratio of deleted (ghost) rows causing scan overhead | Run VACUUM DELETE ONLY to reclaim space |
| Bit 2 (4) | NL (Nested Loop) | A nested loop join was performed, usually indicating a missing or incorrect join predicate | Review the query's join conditions and add the missing predicate |
| Bit 3 (8) | Dist | Distribution key mismatch caused data movement (redistribution) across nodes | Align distribution keys between joined tables |
| Bit 4 (16) | Broadcast | A large table was broadcast to all nodes, typically because one join table has no distribution key | Add a distribution key to the broadcast table or use DISTSTYLE EVEN |
| Bit 5 (32) | Stats | Table statistics were stale, causing the query optimizer to generate a suboptimal plan | Run ANALYZE on the table to update statistics |
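The decoding described above can be sketched as a small bit-test loop. The bit-to-label mapping follows the table in this section; the function name and the idea of a single integer `bitmask` field are assumptions about the dashboard's internals.

```python
# Bit positions follow the alert-type table above.
ALERT_BITS = {
    1:  "Sortkey",
    2:  "Deletes",
    4:  "NL",
    8:  "Dist",
    16: "Broadcast",
    32: "Stats",
}

def decode_alert_types(bitmask: int) -> list[str]:
    """Return a human-readable label for each bit set in the mask."""
    return [name for bit, name in ALERT_BITS.items() if bitmask & bit]

# Example: 33 = 1 + 32, i.e. both Sortkey and Stats conditions fired.
print(decode_alert_types(33))  # ['Sortkey', 'Stats']
```

A mask of 0 decodes to no labels, and a mask of 63 decodes to all six.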

Alert History Table

The main view lists every table that generated alerts in the last 7 days, one row per table:

| Column | Description |
|--------|-------------|
| Table Name | Schema-qualified Redshift table name |
| Alert Types | Badges for each active alert type decoded from the bitmask |
| Occurrence Count | Total number of alert events in the 7-day window |
| Last Seen | Timestamp of the most recent alert event |
| Recommended Action | Context-aware suggestion based on the detected alert types |
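Assembling these rows from raw alert events amounts to a group-by per table: OR-ing the bitmasks together, counting events, and keeping the latest timestamp. The `(table_name, bitmask, timestamp)` event shape below is an assumption for illustration, not the dashboard's actual schema.

```python
from collections import defaultdict

def summarize(events):
    """Group raw (table_name, bitmask, iso_timestamp) events into
    one summary row per table."""
    rows = defaultdict(lambda: {"bitmask": 0, "count": 0, "last_seen": ""})
    for table, mask, ts in events:
        row = rows[table]
        row["bitmask"] |= mask       # union of all alert types seen
        row["count"] += 1            # occurrence count
        row["last_seen"] = max(row["last_seen"], ts)  # most recent event
    return dict(rows)

events = [
    ("public.orders", 1,  "2024-05-01T10:00:00"),
    ("public.orders", 32, "2024-05-02T09:00:00"),
    ("public.users",  4,  "2024-05-01T12:00:00"),
]
print(summarize(events)["public.orders"])
# {'bitmask': 33, 'count': 2, 'last_seen': '2024-05-02T09:00:00'}
```

OR-ing the masks is what lets one row show several alert-type badges at once.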

Sorting by Frequency

Sort the alert table by "Occurrence Count" descending to prioritize the tables generating the most optimizer warnings. High-frequency alerts on large tables have the greatest performance impact and should be addressed first.
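The descending sort is a one-liner; the row dicts and `occurrence_count` key below are hypothetical stand-ins for the table's data model.

```python
# Minimal sketch of "sort by Occurrence Count, descending".
rows = [
    {"table": "public.orders", "occurrence_count": 12},
    {"table": "public.users",  "occurrence_count": 48},
    {"table": "public.events", "occurrence_count": 3},
]
rows.sort(key=lambda r: r["occurrence_count"], reverse=True)
print([r["table"] for r in rows])
# ['public.users', 'public.orders', 'public.events']
```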

Filtering and Time Range

The time range picker allows filtering alert history from 1 day to 30 days. The alert type filter lets you focus on a specific alert category (e.g., show only "Stats" alerts to find all tables needing ANALYZE).
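Combining the two filters is straightforward client-side logic: keep events newer than the cutoff and, when a type is selected, only those whose decoded types include it. The event shape (a list of type labels plus a datetime) is an assumption for illustration.

```python
from datetime import datetime, timedelta

def filter_alerts(events, alert_type=None, days=7, now=None):
    """Keep events inside the trailing window, optionally matching one type."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    return [e for e in events
            if e["time"] >= cutoff
            and (alert_type is None or alert_type in e["types"])]

now = datetime(2024, 5, 10)
events = [
    {"types": ["Stats"],   "time": datetime(2024, 5, 9)},   # recent Stats
    {"types": ["Sortkey"], "time": datetime(2024, 5, 8)},   # recent, wrong type
    {"types": ["Stats"],   "time": datetime(2024, 4, 1)},   # Stats, too old
]
print(len(filter_alerts(events, alert_type="Stats", days=7, now=now)))  # 1
```

Widening `days` to 30 would pull the older Stats event back into the result.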