Optimizing MediaWiki Performance for Large-Scale Community Sites

Got a MediaWiki that feels slower than a Sunday morning?

When a community swells to tens of thousands of daily edits, the wiki starts to hiccup. I’ve wrestled with this on a few fandoms, and the fixes are surprisingly simple once you know where to look.

1. Trim the job queue early

Large‑scale sites should whisper to $wgJobRunRate, not shout. The value is the number of background jobs run per web request, so 0.01 means roughly one job per hundred requests — enough to let the queue drain eventually without hogging PHP workers.

```php
# LocalSettings.php
$wgJobRunRate = 0.01; // roughly one job per 100 requests; lower for high‑traffic wikis
```

It may look like you’re “turning off” things, but really you’re just giving the server breathing room.
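For very busy wikis, the usual next step is to take job running out of web requests entirely: set the rate to zero and drain the queue from cron with the bundled runJobs.php script. The install path and log location below are assumptions for a typical setup — adjust to yours:

```shell
# In LocalSettings.php: $wgJobRunRate = 0; disables in-request job runs.
# Crontab entry: drain the queue every minute, capping each batch at 60 seconds.
* * * * * /usr/bin/php /var/www/mediawiki/maintenance/runJobs.php --maxtime=60 >> /var/log/mw-jobs.log 2>&1
```

The --maxtime cap keeps any single cron run from piling up behind the next one.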

2. Layered caching – don’t put all your eggs in one basket

  • Opcode cache (OPcache) – caches compiled bytecode so PHP scripts aren’t recompiled on every request.
  • Parser cache – set $wgParserCacheType = CACHE_ACCEL; to store parsed wikitext.
  • Object cache – Redis or Memcached? Both work, but Redis feels a tad snappier on my 8‑core box.

```php
# LocalSettings.php
# Note: there is no CACHE_REDIS constant in core — Redis is registered
# as an object cache and referenced by its key.
$wgObjectCaches['redis'] = [
    'class'   => 'RedisBagOStuff',
    'servers' => [ '127.0.0.1:6379' ],
];
$wgMainCacheType   = 'redis';
$wgParserCacheType = CACHE_ACCEL;
```

Don’t forget to flush those caches after a big extension upgrade – I’ve been there, and the “still slow” feeling is usually just stale data.
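On the flushing point: parser cache entries can be purged from the command line with the maintenance script MediaWiki ships for exactly this. The --age value here is an illustration, not a recommendation:

```shell
# Drop parser cache entries older than an hour (age is in seconds)
php maintenance/purgeParserCache.php --age 3600
```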

3. Choose the right web server

Nginx or Lighttpd often beats Apache on raw static file delivery. If you’re already on Apache, consider mod_proxy_fcgi to hand off PHP‑FPM, which shaves a few milliseconds per request.
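If you stay on Apache, the mod_proxy_fcgi handoff looks roughly like this — the PHP‑FPM socket path is an assumption and varies by distro:

```apache
# vhost fragment — requires mod_proxy and mod_proxy_fcgi enabled
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php-fpm.sock|fcgi://localhost/"
</FilesMatch>
```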

My go‑to snippet for Nginx looks like this (yes, I’m mixing in a bit of nginx.conf, but it’s short):

```nginx
# nginx.conf fragment
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php-fpm.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```

4. Database tricks that actually matter

Indexes are the unsung heroes. Run maintenance/update.php after a schema change – it applies any pending schema updates and missing indexes defined by core and your extensions, faster than a coffee‑break chat.

Also, enable read replicas for heavy read traffic. I once pointed the “recent changes” page at a replica and the latency dropped from ~800 ms to under 200 ms. Crazy, right?
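As a hedged sketch of that replica setup — host names and credentials below are placeholders — MediaWiki’s $wgDBservers array sends writes to the first entry and load‑balances reads across the rest according to the 'load' weights:

```php
# LocalSettings.php — primary first, then replicas
$wgDBservers = [
    [ 'host' => 'db-primary.example',  'dbname' => 'wiki', 'user' => 'wikiuser',
      'password' => 'secret', 'type' => 'mysql', 'load' => 0 ], // load 0: writes only
    [ 'host' => 'db-replica1.example', 'dbname' => 'wiki', 'user' => 'wikiuser',
      'password' => 'secret', 'type' => 'mysql', 'load' => 1 ], // serves reads
];
```

Giving the primary a load of 0 keeps general read traffic off it entirely.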

5. Extensions – the double‑edged sword

Every extra extension adds PHP load and DB queries. Before you add something, ask: “Do I really need this?” If you must keep it, make sure it respects caching.

For example, the ParserFunctions extension can be heavy on parsing. Its string functions are disabled by default, and keeping $wgPFEnableStringFunctions = false; explicit in your config ensures that cost never quietly gets turned back on.

6. Profile, profile, profile

Use MediaWiki’s built‑in profiler to spot bottlenecks. Configure $wgProfiler in LocalSettings.php, then append ?forceprofile=1 to a page request to get a per‑function breakdown right in the output – far easier to act on than guessing.

```php
# LocalSettings.php — requires the xhprof or tideways PHP extension
$wgProfiler = [
    'class'  => ProfilerXhprof::class,
    'output' => [ 'text' ],
];
// then load any page with ?forceprofile=1 appended to the URL
```

When one function dominates the profile, that’s where you start trimming.

7. Keep an eye on the hardware

CPU spikes? Memory hogs? A quick htop on the server often tells the whole story. If you’re on a shared VPS, consider moving to a dedicated box or a cloud instance with autoscaling.

Final thoughts

There’s no silver bullet – you’ll need a mix of caching, job‑queue throttling, proper DB indexing, and a bit of hardware love. I’ve seen wikis double their throughput with just three of the tweaks above.

Got a weird hiccup you can’t explain? Drop a comment; I’ll poke around and see if we can spot the gremlin together.
