Goerli Full Sync Issues

Resolved
Degraded performance
Started 9 months ago · Lasted about 2 hours

Affected

Mainnet
Node Sync
Updates
  • Resolved

    OP Goerli is no longer experiencing node halts, which were caused by the L1 Goerli testnet reorg. As shared in the previous status update, full node operators still need to take one of the following actions:

    1. Delete Geth’s datadir, then replace it with the fully-synced archival database available from https://datadirs.optimism.io/goerli-bedrock-archival-2023-07-29.tar.zst.
    2. Make a call to eth_getBlockByNumber with finalized as the block tag, then roll the full node back to that block number + 1 using debug_setHead (see the sketch after this list).
    3. Resync the full node from a backup prior to block 12572500, or from genesis.
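
    For operators choosing option 2 above, the following is a minimal sketch of the two RPC calls involved, assuming a Geth node serving JSON-RPC at http://localhost:8545 with the debug API namespace enabled (both the endpoint and the enabled namespaces are assumptions, not part of this update):

    import json
    import urllib.request

    RPC_URL = "http://localhost:8545"  # assumed local node endpoint

    def rpc(method, params):
        """Send one JSON-RPC request to the node and return its result."""
        payload = json.dumps({"jsonrpc": "2.0", "id": 1,
                              "method": method, "params": params}).encode()
        req = urllib.request.Request(RPC_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        if "error" in body:
            raise RuntimeError(body["error"])
        return body["result"]

    # Step 1: fetch the finalized block (header only) and read its number.
    finalized = rpc("eth_getBlockByNumber", ["finalized", False])
    finalized_number = int(finalized["number"], 16)

    # Step 2: roll the node's head back to that block number + 1, per option 2.
    rpc("debug_setHead", [hex(finalized_number + 1)])
    print(f"Head rolled back to block {finalized_number + 1}")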

    This issue is resolved. We will share a full post-mortem when it is ready.

  • Monitoring

    We have identified the root cause of this issue.

    The L1 Goerli testnet experienced a large reorg of approximately 18 blocks, and a significant number of L1 blocks were also missed. Together, these caused a large (>256 block) reorg on OP Goerli. Because full nodes prune historical state, they cannot process a reorg this deep, so users operating full nodes on OP Goerli will currently experience a node halt: the node will not sync new blocks from the OP Goerli network and may return inconsistent data over RPC. Users operating archival nodes on OP Goerli, which retain historical state, are not affected, and we do not expect this issue to occur in any capacity on OP Mainnet.
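
    A quick way to check whether a full node has halted is to confirm that its head is no longer advancing. Below is a minimal sketch, again assuming the node serves JSON-RPC at http://localhost:8545 (an assumption, not part of this update):

    import json
    import time
    import urllib.request

    def head(url="http://localhost:8545"):
        """Return the node's current head block number via eth_blockNumber."""
        payload = json.dumps({"jsonrpc": "2.0", "id": 1,
                              "method": "eth_blockNumber", "params": []}).encode()
        req = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return int(json.load(resp)["result"], 16)

    before = head()
    time.sleep(30)  # wait long enough that new OP Goerli blocks should have arrived
    after = head()
    print("node is advancing" if after > before else "node appears halted")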

    To resolve this issue, full node operators need to perform one of these actions:

    1. Delete Geth’s datadir, then replace it with the fully-synced archival database available from https://datadirs.optimism.io/goerli-bedrock-archival-2023-07-29.tar.zst.
    2. Make a call to eth_getBlockByNumber with finalized as the block tag, then roll the full node back to that block number + 1 using debug_setHead.
    3. Resync the full node from a backup prior to block 12572500, or from genesis.

    We apologize for the inconvenience and will share a full post-mortem when ready.

  • Investigating

    Our team is investigating reports of issues with full node sync. At this time, users syncing full nodes may experience degraded performance.

    We apologize for the inconvenience and will share an update once we have more information.