Optimizing MediaWiki for Enterprise Deployment
Why “Enterprise‑grade” MediaWiki Is Not Just a Fancy Wiki
Imagine your company's knowledge base as the engine room of a massive ship. If the engine sputters, the whole crew feels the lag. MediaWiki can be that engine, but only if you give it the right fuel, oil, and occasional grease‑up. In other words, you need to optimize it for the kind of traffic, security, and compliance that a Fortune‑500 outfit throws at it.
Start With the Basics – Know What You’re Dealing With
First things first: MediaWiki ships with a default configuration that is, frankly, a sandbox. It works fine for a hobbyist’s personal wiki, but a 10,000‑user intranet demands a whole different playbook. The official Performance tuning guide is a good reference, but you’ll quickly discover that every environment has its own quirks.
Infrastructure Checklist (No‑fluff, just facts)
- Web server. Apache is classic, Nginx is lean. Whichever you pick, make sure it speaks FastCGI or PHP‑FPM.
- Database. MySQL/MariaDB 5.7+ or PostgreSQL 12+. Use the innodb_file_per_table setting; it saves you headaches later.
- PHP. 8.1 or newer. Enable opcache – it slashes PHP execution time dramatically.
- Cache layer. Redis or Memcached. For large wikis, Redis is my go‑to because of its data structures.
- Search. Elasticsearch via the CirrusSearch extension. Don’t rely on the old MySQL full‑text search for a 100 GB wiki.
Alright, that’s the skeleton. Let’s flesh it out.
Layered Caching – The Secret Sauce
When you hear “caching” you might think “just turn something on and forget about it.” Nope. You need at least three layers, each with its own purpose.
1. Opcode (PHP opcache)
Put this in php.ini (language=ini):
opcache.enable=1
opcache.memory_consumption=256
opcache.max_accelerated_files=20000
opcache.validate_timestamps=1 ; helpful in dev, set to 0 in prod
It shaves off a good chunk of the PHP parsing time. I’ve seen page‑render times drop from ~800 ms to < 200 ms.
2. Parser cache (in Redis)
Set these in LocalSettings.php (language=php):
$wgObjectCaches['redis'] = [
    'class'   => 'RedisBagOStuff',
    'servers' => [ '127.0.0.1:6379' ], // your Redis host
];
$wgParserCacheType = 'redis';
What’s happening? The heavy lifting of turning wikitext into HTML is stored, so the next request can grab the rendered page in a few milliseconds instead of re‑parsing it.
3. Object cache (for user prefs, ACLs, etc.)
If you prefer Memcached for this layer, the equivalent settings are:
$wgMainCacheType = CACHE_MEMCACHED;
$wgMemCachedServers = [ '10.0.0.5:11211' ];
Even a single‑digit millisecond improvement adds up when you have thousands of concurrent editors.
Database Tweaks – Because “Just Add More RAM” Isn’t Enough
Okay, you’ve got a decent DB server, but the default MySQL settings are tuned for a small blog, not a corporate knowledge hub. Here are a few adjustments that make a noticeable difference.
- innodb_buffer_pool_size – Aim for 70‑80 % of RAM if the database server is dedicated to the wiki.
- query_cache_type – Turn it off (MySQL 8.0 removed the query cache entirely); MediaWiki’s query patterns don’t benefit from it.
- max_connections – Set it high enough to accommodate burst traffic, but watch out for “Too many connections” errors. A sample my.cnf fragment is sketched below.
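As a rough starting point – the numbers here are assumptions, sized for a dedicated 32 GB database host, so adjust them to your own hardware – the fragment might look like this (language=ini):
[mysqld]
# roughly 75 % of RAM on a dedicated 32 GB box
innodb_buffer_pool_size = 24G
# one tablespace file per table, as recommended earlier
innodb_file_per_table   = 1
# useless for MediaWiki; omit both lines on MySQL 8.0, where the query cache no longer exists
query_cache_type        = 0
query_cache_size        = 0
# headroom for burst traffic – monitor real usage before raising it further
max_connections         = 500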
Don’t forget to profile slow queries with EXPLAIN. The Scaling MediaWiki Extensions article warns that a single mis‑written extension can stall the whole DB. Test extensions in a staging sandbox before they go live.
Staging & Continuous Integration – A “Don’t Break Production” Mantra
Anything that feels like a “quick fix” should first be tossed onto a staging environment. I’m not talking about a one‑off VM; I mean a full‑scale copy of the production stack, with the same caches, same search backend, same PHP version.
Why? Because extension upgrades often introduce subtle PHP warnings that only show up under load. Running MediaWiki’s PHPUnit test suite (composer phpunit:unit from the core directory) in CI catches those early. Here’s a tiny snippet for a GitHub Actions workflow (language=yaml):
name: MediaWiki CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      mysql:
        image: mysql:8
        env:
          MYSQL_ROOT_PASSWORD: secret
          MYSQL_DATABASE: wiki
    steps:
      - uses: actions/checkout@v2
      - name: Set up PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: '8.1'
      - name: Composer install
        run: composer install
      - name: Run MediaWiki unit tests
        run: composer phpunit:unit
Once the green check passes, you can merge with confidence. It may seem like extra work, but the cost of a broken corporate wiki at 2 pm on a Friday is… well, imagine the screenshots.
Search – Don’t Settle for “Good Enough”
MediaWiki ships with a basic database‑backed search, but for enterprise use you want Elasticsearch through the CirrusSearch extension. It indexes pages in near real time and supports faceted search, synonyms, and even fuzzy matching.
Configuration (language=php):
wfLoadExtension( 'Elastica' );
wfLoadExtension( 'CirrusSearch' );
$wgSearchType = 'CirrusSearch';
$wgCirrusSearchServers = [ 'es01.company.com:9200', 'es02.company.com:9200' ];
Remember to load the Elastica and CirrusSearch extensions and set $wgSearchType = 'CirrusSearch' before any other search‑related options – otherwise MediaWiki silently falls back to the default database search.
Security & Compliance – The “Enterprise” Part That Can’t Be Ignored
Security isn’t a checkbox; it’s a living thing. A few quick wins:
- HTTPS everywhere. Enforce $wgCookieSecure = true; and $wgForceHTTPS = true; (see the LocalSettings.php sketch after this list).
- Two‑factor authentication. Install the OATHAuth extension.
- Read‑only mode for audits. Toggle $wgReadOnly = 'Scheduled maintenance'; without rebooting.
- Security headers. Prevent clickjacking with Header set X-Frame-Options "SAMEORIGIN" in Apache or add_header X-Frame-Options "SAMEORIGIN"; in Nginx.
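Pulled together, the first three items look roughly like this in LocalSettings.php – a minimal sketch, assuming you use the OATHAuth extension for 2FA (language=php):
// Redirect everything to HTTPS and mark session cookies as secure
$wgForceHTTPS = true;
$wgCookieSecure = true;

// Two-factor authentication
wfLoadExtension( 'OATHAuth' );

// Uncomment during audit windows to freeze edits without a restart
# $wgReadOnly = 'Scheduled maintenance';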
Don’t forget to enable $wgEnableUserEmail and set up a proper mail transport – a wiki that can’t send password resets is just dead weight.
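For the mail side, $wgSMTP is the usual transport; the relay host, addresses, and credentials below are placeholders you’d swap for your own (language=php):
$wgEnableEmail = true;
$wgEnableUserEmail = true;
$wgEmergencyContact = 'wiki-admin@company.com';   // placeholder addresses
$wgPasswordSender   = 'wiki-noreply@company.com';
$wgSMTP = [
    'host'     => 'ssl://smtp.company.com', // placeholder relay
    'IDHost'   => 'company.com',
    'port'     => 465,
    'auth'     => true,
    'username' => 'wiki-mailer',
    'password' => 'change-me',
];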
Monitoring & Alerting – You’ll Know Something’s Wrong Before Users Do
If you think you can “just look at the logs later,” you’re probably living in a fantasy. Set up Prometheus exporters for MediaWiki (the Prometheus MediaWiki Exporter does the heavy lifting) and Grafana dashboards that track:
- Cache hit‑rate (Redis should stay above 90 %).
- Request latency (aim for < 300 ms p95).
- Database slow‑query count.
- Search index lag.
When any metric crosses a threshold, fire off a Slack webhook. It’s like having a watchdog that barks before the fire starts.
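As a sketch of the alerting side – the metric names (mediawiki_redis_cache_hit_ratio, mediawiki_request_duration_seconds_bucket) are placeholders that depend on which exporter you deploy – a Prometheus rule file might look like this (language=yaml):
groups:
  - name: mediawiki
    rules:
      - alert: MediaWikiCacheHitRateLow
        # hypothetical metric name – substitute what your exporter actually exposes
        expr: mediawiki_redis_cache_hit_ratio < 0.90
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Redis cache hit-rate below 90 % for 10 minutes"
      - alert: MediaWikiSlowRequests
        # p95 latency over 300 ms, assuming a request-duration histogram is exported
        expr: histogram_quantile(0.95, sum(rate(mediawiki_request_duration_seconds_bucket[5m])) by (le)) > 0.3
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "p95 request latency above 300 ms"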
Backups – Because “If It Breaks, We’ll Re‑create” Isn’t Viable
Take backups seriously. A good strategy mixes:
- Logical dumps. mysqldump --single-transaction --quick --lock-tables=false wiki | gzip > wiki.sql.gz
- File system snapshots. Use LVM or ZFS snapshots for the images/ directory.
- Redis snapshots. redis-cli BGSAVE plus an off‑site copy of the resulting dump file.
Run a verification job weekly – restore to a staging VM and spin up a quick sanity check. It sounds redundant, but I’ve seen “backup‑only” plans fail spectacularly when a production server dies on a Monday morning.
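A minimal sketch of such a verification job – host names and paths are placeholders (language=bash):
#!/usr/bin/env bash
set -euo pipefail

# Restore the latest logical dump into the staging database (placeholder host/path)
gunzip -c /backups/wiki.sql.gz | mysql -h staging-db.company.com wiki

# Apply any pending schema changes to the restored data
php /var/www/wiki/maintenance/update.php --quick

# Sanity check: the API should answer with basic site info
curl -fsS 'https://wiki-staging.company.com/api.php?action=query&meta=siteinfo&format=json' > /dev/null \
  && echo "staging restore looks healthy"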
Putting It All Together – A Mini‑Checklist
- Enable PHP opcache and tune memory.
- Deploy Redis for parser & object cache.
- Resize InnoDB buffer pool; disable MySQL query cache.
- Integrate ElasticSearch/CirrusSearch for fast, relevant search.
- Run extensions through staging and CI before production.
- Enforce HTTPS, 2FA, and strict headers.
- Instrument with Prometheus + Grafana; set alerts.
- Automate backups; test restores regularly.
That’s it, basically. Sure, you could spend weeks fine‑tuning each knob, but the biggest win is adopting a disciplined process: measure, adjust, verify. If you ignore any of those steps, you’ll likely end up with a wiki that feels slower than a dial‑up connection – and nobody wants that.
Final Thought – A Wiki Isn’t a “Set‑and‑Forget” Thing
Enterprise MediaWiki is a living platform. It needs the same care you’d give a high‑traffic e‑commerce site: caches refreshed, extensions vetted, security patched, and performance constantly measured. Think of it as a community garden; you can’t just plant seeds and walk away. You have to water, prune, and occasionally pull out the weeds.
So, next time you’re asked to “just turn on MediaWiki,” you can reply with confidence: “Sure, but let’s also hook it up to Redis, spin up Elasticsearch, put it through CI, and set up alerts – otherwise we’ll spend our days apologizing for slow pages.”