Zero‑Downtime Migration of a Large MediaWiki Database to a New Server

Overview

Moving a MediaWiki installation to a new host is a routine operation, but when the database contains tens or hundreds of gigabytes of content the migration can easily become a source of downtime. This guide shows how to perform the move while keeping the wiki fully available to readers and editors. The process follows the official Manual:Moving a wiki and adds a zero‑downtime workflow based on read‑only mode, incremental data copy, and a short cut‑over window.

Key Assumptions

  • The source and target servers run the same major MediaWiki version (or the target runs a newer version and you complete the upgrade, including update.php, before the cut‑over).
  • Both servers use MySQL/MariaDB with InnoDB tables.
  • Access to the filesystem of both servers (SSH, rsync, or similar).
  • A load balancer or reverse‑proxy that can direct traffic to either server.
  • Feature‑flag support (optional but recommended) to switch read‑only mode without a code deploy.

Prerequisites

  1. Backups – Take a full logical dump and a physical snapshot (e.g. Percona XtraBackup). Store the backup off‑site.
  2. Read‑only flag – Add $wgReadOnly = 'Migration in progress – read‑only mode'; to LocalSettings.php on the source server (or use a feature flag that toggles the same setting).
  3. Test environment – Clone the current wiki on a staging host and run the full migration there. Verify that extensions, skins, and customizations work with the new MediaWiki version.
  4. Network bandwidth – Estimate the time to copy the raw data files (images/, extensions/, LocalSettings.php) and the database. A 200 GB dump over a 1 Gbps link takes roughly 30 minutes of wire time alone; the dump itself is usually disk‑ and CPU‑bound, so budget extra. A quick way to size the job is shown below.
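
A quick way to size the job before scheduling a window (the database name and paths are examples; adjust to your install):

# Database payload in GB, from information_schema
mysql -u root -p -N -e "SELECT ROUND(SUM(data_length + index_length)/1024/1024/1024, 1) FROM information_schema.tables WHERE table_schema = 'wiki';"
# Size of the upload directory
du -sh /var/www/wiki/images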

Step‑by‑Step Migration

1. Prepare the Target Server

On the new host install the same MediaWiki version (or the newer version you intend to run). Create an empty database with the same name as on the source (here wiki) – keeping the name identical lets binlog replication apply changes without rewrite filters – and a dedicated MySQL user with the same privileges as on the source.

# Install MediaWiki (example for Debian/Ubuntu)
sudo apt-get update && sudo apt-get install mediawiki php php-mysql
# Create DB and user
echo "CREATE DATABASE wiki_new CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;" | mysql -u root -p
echo "CREATE USER 'wiki_user'@'localhost' IDENTIFIED BY 'strong_password';" | mysql -u root -p
echo "GRANT ALL PRIVILEGES ON wiki_new.* TO 'wiki_user'@'localhost';" | mysql -u root -p

Copy the extensions/, skins/, and any custom PHP files from the source to the target. Preserve ownership and permissions (the files are usually owned by www-data), for example as sketched below.
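
A minimal sketch of the copy, run from the target (paths are examples):

# Pull code and configuration from the source, then fix ownership
rsync -a source:/var/www/wiki/extensions/ /var/www/wiki/extensions/
rsync -a source:/var/www/wiki/skins/ /var/www/wiki/skins/
rsync -a source:/var/www/wiki/LocalSettings.php /var/www/wiki/LocalSettings.php
sudo chown -R www-data:www-data /var/www/wiki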

2. Incremental Database Copy

Because the source wiki must stay online, use a two‑phase copy:

  1. Initial bulk dump – Run a non‑blocking logical dump using --single-transaction. This captures a consistent snapshot without locking tables.
  2. Replication of changes – After the initial dump, start binary‑log‑based replication so the target continuously applies every row that changed while (and after) the dump was running.

Example using MySQL binary logs:

# Enable the binlog and set a server ID on the source, if not already set
# (the target needs its own, different non-zero server-id before it can replicate)
sudo sed -i '/\[mysqld\]/a log-bin=mysql-bin\nserver-id=1' /etc/mysql/my.cnf
sudo systemctl restart mysql

# Create a replication user
mysql -u root -p -e "CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_pass'; GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';"

# Dump the database without locking; --master-data=2 writes the binlog
# coordinates that match the snapshot into the dump as a comment, avoiding
# a race between recording the position and starting the dump
# (run as root: --master-data needs the RELOAD privilege)
mysqldump -u root -p --single-transaction --quick --master-data=2 wiki > /tmp/wiki_initial.sql

# Extract the recorded coordinates for CHANGE MASTER TO
COORDS=$(grep -m1 'CHANGE MASTER TO' /tmp/wiki_initial.sql)
FILE=$(echo "$COORDS" | sed -E "s/.*MASTER_LOG_FILE='([^']+)'.*/\1/")
POS=$(echo "$COORDS" | sed -E 's/.*MASTER_LOG_POS=([0-9]+).*/\1/')

# Transfer the dump to the target
scp /tmp/wiki_initial.sql target:/tmp/

# Load the dump on the target
ssh target "mysql -u wiki_user -p wiki_new < /tmp/wiki_initial.sql"

# Configure the target as a slave
ssh target "mysql -u root -p -e \"CHANGE MASTER TO MASTER_HOST='source', MASTER_USER='repl', MASTER_PASSWORD='repl_pass', MASTER_LOG_FILE='$FILE', MASTER_LOG_POS=$POS; START SLAVE;\""

Allow the replica to catch up. When Seconds_Behind_Master in SHOW SLAVE STATUS\G on the target drops below 5 seconds, the two databases are effectively in sync.
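
A small polling loop makes the wait hands-off (this sketch assumes client credentials in ~/.my.cnf so mysql does not prompt for a password):

# Poll replication lag on the target until it drops below 5 s
while :; do
    LAG=$(mysql -e "SHOW SLAVE STATUS\G" | awk '/Seconds_Behind_Master/ {print $2}')
    echo "replication lag: ${LAG}s"
    [ "$LAG" != "NULL" ] && [ "$LAG" -lt 5 ] && break
    sleep 10
done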

3. Switch to Read‑Only Mode

Now that the replica is up‑to‑date, put the source wiki into read‑only mode. This prevents any new edits while the final cut‑over happens.

# In LocalSettings.php on the source
$wgReadOnly = 'Migration in progress – read‑only mode';

If you use a feature flag (e.g. LaunchDarkly, Rollout.io), flip the flag instead of editing the file.
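
If you have neither, the flag can also be flipped from a shell without a full deploy (the wiki path is an example):

# Enter read-only mode by appending the flag line...
echo "\$wgReadOnly = 'Migration in progress – read-only mode';" | sudo tee -a /var/www/wiki/LocalSettings.php
# ...and leave it later by deleting that line again
sudo sed -i '/wgReadOnly/d' /var/www/wiki/LocalSettings.php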

4. Final Data Sync

Because the source is now read‑only, the replication stream drains completely on its own. Wait until the target has applied every remaining binlog event, then stop replication. No further dump is needed – the replica already holds every row.

# On the target server: confirm the replica has fully caught up
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E 'Seconds_Behind_Master|Slave_SQL_Running'
# When Seconds_Behind_Master is 0, detach the replica from the source
mysql -u root -p -e "STOP SLAVE; RESET SLAVE ALL;"

At this point the target database contains an exact copy of the source.
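
A quick way to prove the copy is exact is to checksum a few central tables on both servers and compare the output (CHECKSUM TABLE scans each table, so expect a short pause on large wikis):

# Run on both source and target; the results must be identical
mysql -u wiki_user -p wiki -e "CHECKSUM TABLE page, revision, text;"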

5. Cut‑Over the Web Front‑End

Update the load balancer to point HTTP(S) traffic to the new host. Because the database is already in sync, the wiki continues to serve pages without interruption. If clients resolve the wiki host via DNS rather than a load balancer, lower the record's TTL to a small value (e.g. 60 seconds) a few hours before the migration so the switch propagates almost instantly.
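
If the front end is, say, an nginx reverse proxy, the cut-over can be a one-line config change plus a graceful reload (the file path and hostnames are examples):

# Point the upstream at the new host, validate, and reload without dropping connections
sudo sed -i 's/server old-host:443/server new-host:443/' /etc/nginx/conf.d/wiki.conf
sudo nginx -t && sudo systemctl reload nginx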

Verify that the new server serves static assets (images, uploaded files) correctly. If you use a CDN, purge the CDN cache after the switch.
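
A couple of spot checks from outside the load balancer catch most asset problems (URLs are examples):

# Expect HTTP 200 for a wiki page and for a known uploaded file
curl -sI https://wiki.example.org/wiki/Main_Page | head -n 1
curl -sI https://wiki.example.org/images/a/a9/Example.jpg | head -n 1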

6. Remove Read‑Only Flag

Editing resumes on the new server, so make sure the target's LocalSettings.php does not set $wgReadOnly – if you copied the file from the source, remove that line (or disable the feature flag) there. The source wiki can now be decommissioned.

Handling Very Large Media Files

MediaWiki stores uploaded files under images/. For a multi‑terabyte media library, use rsync with the --partial and --inplace options so that interrupted transfers resume mid‑file instead of restarting; skip -z, since most media is already compressed. A single rsync process is sequential – for parallelism, run one job per subdirectory, as sketched below.

rsync -av --progress --partial --inplace source:/var/www/wiki/images/ target:/var/www/wiki/images/

Run the rsync job several times before the read‑only cut‑over; the final run will transfer only the delta.
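
To parallelise the copy, one option is a worker per hashed upload subdirectory (0/ through f/), as in this sketch run on the source host with GNU xargs (assumes the images/ directory already exists on the target):

# Four rsync workers, one top-level hash directory each at a time
ls -d /var/www/wiki/images/*/ | xargs -P 4 -I{} rsync -a --partial --inplace {} target:{}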

Verification Checklist

  • All extensions listed on Special:Version load without errors.
  • Login works for a test user (both normal and sysop accounts).
  • Read a random page, edit it, and verify the edit appears.
  • Upload a small image and confirm it is stored in the new images/ directory.
  • Run php maintenance/update.php on the target to ensure the schema is up‑to‑date.
  • Check SELECT COUNT(*) FROM page; on both source and target – numbers must match.
  • Run php maintenance/checkImages.php to verify file integrity.
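
Most of these checks can be scripted on the target; a minimal sketch (wiki root and database name are examples):

cd /var/www/wiki
php maintenance/update.php --quick
php maintenance/checkImages.php
mysql -u wiki_user -p wiki -e "SELECT COUNT(*) FROM page;"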

Rollback Plan

If any verification step fails, you can revert instantly: the DNS/load balancer still points at the source until you commit the change, and the source lost no edits because it accepted no writes during the window – simply remove the read‑only flag and investigate. The full backup taken in the prerequisites is only needed if the source itself was damaged. Out of courtesy to editors, keep the read‑only window short; aim for 15 minutes or less.

Post‑Migration Tasks

  1. Increase DNS TTL back to a normal value (e.g. 1 hour).
  2. Remove the temporary replication user and purge binlog files that are no longer needed (see the commands after this list).
  3. Run maintenance/rebuildrecentchanges.php and maintenance/initSiteStats.php on the new server to refresh cached counts.
  4. Update monitoring alerts to reference the new host IP/hostname.
  5. Archive the old server or repurpose it after a safe retention period (30 days is a common choice).
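
A sketch of the cleanup in item 2, run on the source (the retention interval is an example):

# Drop the temporary replication user and purge binlogs older than seven days
mysql -u root -p -e "DROP USER 'repl'@'%';"
mysql -u root -p -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;"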

Tips for a Smooth Experience

  • Use InnoDB row‑level locking. Avoid ALTER TABLE … NOT NULL on huge tables; instead add a nullable column, backfill, then change the constraint.
  • Chunk large tables. When back‑filling data (e.g. adding a new column), process rows in batches of 10 k with a short SLEEP between batches to keep replication lag low (see the sketch after this list).
  • Monitor replication lag. The Seconds_Behind_Master metric should stay under 2 seconds during the whole migration.
  • Keep extensions version‑locked. If an extension has a newer release that requires a DB schema change, apply the extension upgrade only after the database is fully synced.
  • Document every command. Store the exact commands you run in a version‑controlled script – this makes repeatable migrations (e.g. for staging) trivial.
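
A minimal sketch of the chunked backfill from the second tip, assuming a hypothetical new column page_flags and credentials in ~/.my.cnf so mysql does not prompt:

# Backfill in 10 k-row chunks of the primary key, pausing between batches
MAX=$(mysql -N -e "SELECT MAX(page_id) FROM page;" wiki)
for ((i = 0; i <= MAX; i += 10000)); do
    mysql -e "UPDATE page SET page_flags = 0 WHERE page_flags IS NULL AND page_id BETWEEN $i AND $((i + 9999));" wiki
    sleep 1
done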

Conclusion

By combining a read‑only window, an initial non‑blocking dump, binary‑log replication, and a short DNS cut‑over, you can move a MediaWiki installation of any size to a new server without noticeable downtime. The method respects MediaWiki’s own migration guide while adding the robustness required for large‑scale wikis that cannot afford a service interruption.

Follow the checklist, test the process on a clone, and you’ll be able to upgrade hardware, migrate to the cloud, or switch to a containerised deployment with confidence.
