Leveraging the Cargo Extension in MediaWiki for Database Functionality

What Cargo Brings to a MediaWiki Installation

When you first glance at a MediaWiki site, the pages look like ordinary wiki articles – that is the point. Yet behind the scenes you often need a way to treat the content as structured data without turning the whole thing into a full‑blown database application. Cargo steps in exactly there, allowing you to store, query, and export tabular data right inside the wiki.

Why Cargo instead of “just” a database?

  • It lives in the same ecosystem as your articles, so editors don’t have to learn a separate admin console.
  • Data is entered via ordinary templates, meaning the same wikitext that powers infoboxes can also feed a query engine.
  • It works with MySQL, MariaDB, PostgreSQL – whatever your MediaWiki already uses – so there’s no extra server to spin up.

That said, Cargo isn’t a magical cure‑all. If you need ultra‑complex relational logic or massive transaction volumes, a dedicated DBMS will still be the right choice. Cargo shines when you want “light‑weight” data‑driven pages – think lists of characters, product specs, or event calendars – without pulling in a full Semantic MediaWiki stack.

Getting Cargo Up and Running

Installation is a matter of a few lines in LocalSettings.php. First, download the extension (or use Composer) and then add:

wfLoadExtension( 'Cargo' );
# Optional: let bureaucrats (re)create and delete Cargo tables too
$wgGroupPermissions['bureaucrat']['recreatecargodata'] = true;
$wgGroupPermissions['bureaucrat']['deletecargodata'] = true;

If you’re on a managed wiki host, you might need to ask the sysadmin to run MediaWiki’s update.php maintenance script so Cargo can create its internal bookkeeping tables. After a quick refresh of the site, you’ll see a new special page – Special:CargoTables – where all created tables are listed.

A quick sanity check

Open Special:CargoTables. You’ll likely see nothing yet. That’s normal; Cargo only creates a table when a template tells it to.

Defining a Cargo Table via a Template

The core idea is simple: a template contains a #cargo_store parser function, and every time the template is used Cargo records the supplied values.

Take a typical “Character” infobox. Instead of just displaying the data, we add a store call:

{{#cargo_store:_table=Characters
|Name={{{name|}}}
|Species={{{species|}}}
|Origin={{{origin|}}}
}}

Notice the underscore before _table: parameters that begin with an underscore are Cargo directives rather than data fields. _table=Characters tells Cargo which table this row belongs to – the table itself being defined by a matching declaration in the template. The three fields – Name, Species, Origin – become columns, and the triple-braced values are ordinary template parameters, so editors fill them out exactly as before. Note also that, like any parser function, the parameters are separated by pipes, not commas.

Once you save a page using {{Character|name=Rina|species=Elf|origin=Eldoria}}, Cargo silently writes a new row into the underlying database table (cargo__Characters, given the default table prefix). No extra UI appears, which is the point: the data lives where the content lives.
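One caveat worth spelling out: in current Cargo versions a table’s schema must be declared before #cargo_store can write to it, typically with #cargo_declare in the template’s noinclude section. A sketch, with illustrative field types:

<noinclude>
{{#cargo_declare:_table=Characters
|Name=String
|Species=String
|Origin=String
}}
</noinclude>

After saving the template, an administrator creates (or recreates) the actual database table from the template page’s data-creation action or Cargo’s maintenance scripts; from then on, every #cargo_store call appends rows.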

Querying the Data

Fetching data back out is where Cargo gets fun. The #cargo_query parser function lets you write SQL‑like statements directly in wiki markup.

{{#cargo_query:tables=Characters
|fields=Name,Species
|where=Species="Elf"
|format=ul
}}

This snippet will produce an unordered list of all elves in the wiki. You can switch to format=table for a render-ready HTML table, or pull the same data out as CSV or JSON through Special:CargoExport.

Want a more complex query? Cargo supports joins, aggregates, and even sub-queries – albeit with a syntax that mirrors MediaWiki’s parser functions rather than raw SQL. Field references use a dot between the table (or its alias) and the field name. Here's a join that pulls character names together with data about their realm, assuming a second table Realms with (say) Name and Ruler fields:

{{#cargo_query:tables=Characters=char,Realms=realm
|fields=char.Name=Character,realm.Ruler=Ruler
|join on=char.Origin=realm.Name
|format=table
}}

It looks a little clunky, but the upside is that any editor can copy‑paste this snippet without touching the underlying DB directly. In practice that means faster iterations and fewer “I broke the DB” moments.

Exporting for the Outside World

Because Cargo tables are real database tables, you can also grab them via the MediaWiki API (action=cargoquery). For instance:

https://example.org/w/api.php?action=cargoquery&tables=Characters&fields=Name,Species&format=json

The JSON response can be consumed by external tools, static site generators, or data‑visualisation libraries like D3. It’s a neat bridge between the wiki and the rest of your tech stack.
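On the consuming side, the cargoquery module returns a list of objects whose fields sit under a "title" key. A minimal Python sketch of building such a request URL and flattening the response – the helper names are our own, and the sample payload is invented to match the running example:

```python
import json
from urllib.parse import urlencode


def build_cargoquery_url(base: str, tables: str, fields: str) -> str:
    """Assemble an api.php cargoquery URL from a base endpoint and query parts."""
    return base + "?" + urlencode({
        "action": "cargoquery",
        "tables": tables,
        "fields": fields,
        "format": "json",
    })


def parse_cargoquery(raw: str) -> list[dict]:
    """Flatten a cargoquery JSON response into plain row dicts.

    Assumes the usual shape: {"cargoquery": [{"title": {...}}, ...]}.
    """
    return [entry["title"] for entry in json.loads(raw).get("cargoquery", [])]


# Sample response body (fabricated for illustration):
sample = '{"cargoquery": [{"title": {"Name": "Rina", "Species": "Elf"}}]}'
rows = parse_cargoquery(sample)
```

From here, `rows` is an ordinary list of dicts, ready to feed a static site generator or a D3 data pipeline.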

Administration and Permissions

Table creation and deletion are gated behind dedicated user rights – recreatecargodata and deletecargodata – which Cargo assigns to the sysop group by default. This restriction prevents a novice from accidentally wiping out a whole dataset. You can extend a right to another group with:

$wgGroupPermissions['bureaucrat']['deletecargodata'] = true;

Beyond that, ordinary editors who have the edit right can still add rows by using the appropriate template. Cargo respects the standard $wgReadOnly flag, so if the wiki goes into maintenance mode, no new data slips in.

Performance Tips

  • Index frequently-searched columns. Cargo tables are ordinary database tables, so a database administrator can add an index on the columns you filter or sort by most – for example, on Species in cargo__Characters.
  • Batch imports. If you have a CSV file with hundreds of rows, a bulk-import tool such as the Data Transfer extension can turn each row into a template call. That’s much faster than creating the pages one by one.
  • Cache queries. Query results are stored with MediaWiki’s parser cache. While testing, purge the page (action=purge) to force a refresh; in production, leave caching alone so the rendered HTML is stored and reused.
  • Watch the table count. Every distinct _table name produces its own table. Too many tables can bloat the DB schema; keep naming consistent.

Comparing Cargo to Semantic MediaWiki

If you’ve dabbled with Semantic MediaWiki (SMW), you’ll notice familiar syntax – both extensions let you embed structured data in wikitext. The main differences are:

  • Complexity. SMW ships with a whole ecosystem of query pages, visualisation extensions, and a custom triple store. Cargo is leaner; it leans on the relational engine you already have.
  • Data storage. SMW stores properties in its own tables, while Cargo creates one table per _table definition, which can be easier to export.
  • Learning curve. New editors often find Cargo’s #cargo_store approach more intuitive because it mirrors the familiar template pattern.

That isn’t to say one replaces the other. Some wikis run both, using SMW for semantic web features and Cargo for bulk data imports.

Practical Use‑Cases

Below are a few common scenarios where Cargo shines – feel free to adapt them to your own domain.

  1. Game wikis. Store monster stats, item attributes, or player rankings. Queries can produce “Top 10” lists automatically.
  2. Conference sites. Use a Talks table to list speakers, times, and rooms. A simple query generates the schedule page.
  3. Research repositories. Capture experiment parameters in an Experiments table; later queries help generate summary tables for publications.
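The first scenario, for instance, reduces to a single query once the stats live in a table – here assuming a hypothetical Monsters table with Name and HP fields:

{{#cargo_query:tables=Monsters
|fields=Name,HP
|order by=HP DESC
|limit=10
|format=ol
}}

Cargo’s order by and limit parameters work much like their SQL counterparts, so a “Top 10” page updates itself as editors add data.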

All of those examples share a pattern: data is entered via the same template that renders it, and the same data fuels views, charts, or exports.

Potential Pitfalls and How to Avoid Them

Nothing is perfect. When you start leaning heavily on Cargo, watch out for:

  • Schema drift. If editors rename a template parameter but forget to update the matching #cargo_store call, the field silently stores empty values from then on.
  • Large tables. A single table with tens of thousands of rows can slow down queries. In that case, split the data into multiple tables or add indexes.
  • Permission mistakes. Granting table-management rights (such as deletecargodata) too widely can lead to accidental table deletions.

Regular audits – perhaps a weekly run of Special:CargoTables – can keep things tidy.

Wrapping It Up

To sum it all up, Cargo gives a MediaWiki site a lightweight relational layer that lives inside the wiki’s own markup. By defining tables in templates, you let contributors add data without ever opening a database console. Queries are written in familiar wiki syntax, and the results can be displayed as lists, tables, or CSV files. With a few admin tweaks – permissions, indexing, and minding the table count – Cargo scales well enough for medium‑sized data projects.

Whether you’re curating a fan‑made monster compendium, tracking conference sessions, or simply want a nicer way to list equipment specs, Cargo offers a pragmatic bridge between the free‑form world of wikitext and the disciplined universe of relational data.
