Optimism - OP Mainnet degraded performance – Incident details

OP Mainnet degraded performance

Resolved
Started about 2 months ago. Lasted 17 days.

Affected

  • Mainnet
  • Transaction Sequencing
  • Node Sync

All three components: Operational from 8:10 AM to 1:31 PM, Degraded performance from 1:31 PM to 2:07 PM, Operational from 11:44 PM to 5:21 PM

Updates
  • Resolved
    Resolved

    We have resolved the performance issues on OP Mainnet. The issues were caused by transactions with a high I/O cost coinciding with op-geth's automated background compaction. We have implemented multiple mitigations to prevent this scenario from recurring, and we will also be upgrading the underlying hardware OP Mainnet runs on to provide additional headroom.

  • Monitoring
    Update

    We observed another period (approximately 5 minutes) of slow blocks. Block production has resumed and we are continuing to monitor.

  • Monitoring
    Monitoring

    The unsafe head is currently performing normally; however, we may still see occasional slow unsafe heads as we roll out performance fixes to the network. We will continue to update this issue if extended periods of slow unsafe heads occur.

    No action is required from users.

  • Investigating
    Update

    We continue to see intermittent degraded performance on the sequencer's unsafe head.

  • Investigating
    Investigating

    We became aware of a slowdown in the unsafe head of our sequencer today at 09:10 UTC. This impacted other nodes in the network, presenting as an unsafe head stall for approximately 9 minutes. The performance issue resolved on its own after approximately 20 minutes.

    We are aware of the root cause and will be issuing further communications in due course.
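The slow-block and unsafe-head-stall behavior described in these updates can be detected externally by periodically sampling the unsafe head height (for example via a node's RPC) and flagging windows in which it stops advancing. Below is a minimal sketch, not an official Optimism tool: the `Sample` record and the 60-second stall threshold are illustrative assumptions, and the 2-second figure reflects OP Mainnet's target block time.

```python
from dataclasses import dataclass

# OP Mainnet targets a 2-second block time; the stall threshold below is
# an illustrative assumption for this sketch, not an official alerting value.
BLOCK_TIME_S = 2
STALL_THRESHOLD_S = 60


@dataclass
class Sample:
    t: int       # wall-clock time of the poll (unix seconds) -- hypothetical field
    height: int  # unsafe head block number reported at that time


def stalled_intervals(samples, threshold=STALL_THRESHOLD_S):
    """Return (start, end) poll times during which the unsafe head
    did not advance for at least `threshold` seconds."""
    stalls = []
    last_advance = samples[0]
    for s in samples[1:]:
        if s.height > last_advance.height:
            # Head advanced: restart the stall clock from this sample.
            last_advance = s
        elif s.t - last_advance.t >= threshold:
            # Head has been flat long enough: record one stall window,
            # then reset so a continuing stall is not reported repeatedly.
            stalls.append((last_advance.t, s.t))
            last_advance = s
    return stalls
```

For instance, samples showing the head flat from t=30 to t=90 would yield one stall window `(30, 90)`, roughly matching how the approximately 9-minute stall in this incident would appear to an outside observer polling a node.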