
The Adverse Effects Engine


The AEE is a subsystem that slots into any RPG for simulating everything from Bad Weather to Plagues & Poisons.

Time Out Post Logo

I made the time-out logo from two images in combination: the relaxing man photo is by Frauke Riether, and the clock face image (which was used as inspiration for the text rendering) was provided by OpenClipart-Vectors; both were sourced from Pixabay.

The Backstory

A while back, I was working on an adventure for one of my campaigns (being deliberately vague, here) and I needed to look up the effects of Cobra Venom in the Hero System.

I wasn’t impressed – this stuff is supposed to be dangerous, even deadly, and what was offered in the bestiary supplement would barely kill a child.

And this particular venom was supposed to derive from supernatural Cobras summoned by a pissed-off deity. So that wouldn’t cut it.

I developed the Venom described in the box below, but wasn’t very happy with it – too fiddly, and perhaps a touch TOO lethal.
 
 
 
 
 

PER HIT:

  • Immediate on exposure: -5 all primary stats -2 PD -2 ED -10 END -1 ALL SKILLS -2 OCV -2 DCV plus 10 STUN 1 BODY dmg
  • Round after exposure: -3 all primary stats -1 PD -1 ED -6 END -1 ALL SKILLS -1 OCV -1 DCV (all cumulative) plus 10 STUN 2 BODY dmg
  • 2nd round after exposure: -2 all primary stats -4 END -1 ALL SKILLS -1 OCV -1 DCV (all cumulative) plus 5 STUN 3 BODY dmg
  • 3rd, 4th, rounds after exposure: -1 all primary stats -2 END plus 3 STUN 2 BODY
  • 5th round after exposure: -1 all primary stats -1 PD -1 ED -2 END -1 ALL SKILLS -1 OCV -1 DCV plus 2 STUN 1 BODY
  • 6th, 7th round after exposure: as per 3rd & 4th rounds
  • 8th round after exposure: as 5th round
  • 9th, 10th round after exposure: -2 END plus 2 STUN 1 BODY

These are accompanied by appropriate physical & mental responses – shaking, stumbling, delirium, semi-consciousness, poor decision-making, extreme pain (burning sensations), etc. The wound site will blister as though exposed to Mustard Gas or a gas stove’s flame, and the effect will slowly spread through the 10 rounds, starting at 2-3 cm diameter and growing by +1 cm in diameter each subsequent round.

TOTAL EFFECTS:

    -5-3-2-2-1-2-1= -16 all primary stats;
    -2-1-1-1 = -5 PD same ED;
    -10-6-4-2-2-2-2-2-2-2 = -32 END;
    -1-1-1-1-1=-5 ALL SKILLS;
    -2-1-1-1-1=-6 OCV & DCV;
    10+10+5+3+3+2+3+3+2+2 = 43 STUN
    1+2+3+2+2+1+2+2+1+1 = 17 BODY

Clothing: Adds 1 round delay to the above

A tourniquet: Halves the rate of effect shown

Antivenom: Stops effects instantly, restores 1/4 of the damage taken to stats & skills (round down)

If the character survives the course of the attack and does not get hit again, he can recover:

    1 Primary stat point (each stat) / 30 mins
    1 OCV & DCV / 30 mins
    1 Secondary stat point / hour
    END as Normal
    STUN as 1/2 Normal
    BODY as Normal

Those second thoughts didn’t happen right away – in fact, there was about a year in between generating and reviewing the above, and we’re still nowhere near it appearing in play, which it may never do, so I marked it for reconsideration and moved on to higher-priority tasks.

Then, a few weeks ago, in Traits of Exotic d20 Substitutes pt 1, I casually tossed out a completely original system (inspired by the Sixes System, for which I still have to write the final part).

A number of people seemed to like its elegance, simplicity, and flexibility. So, a couple of days later, when I came across my note to review the Cobra Venom, the two thoughts clicked together.

But, to actually be usable in play, I needed to dig deeper into what was a casual aside at the time. And so, here we are.

The Core System

The GM specifies N dice and a target of T sixes. At intervals (generally fixed by the GM, but they may be variable), the character rolls Nd6. Any sixes rolled are counted towards T, and rolling continues until the total is T or more.

    If one 1 is showing, something bad happens (specified by the GM but not necessarily announced).

    If two 1s are showing, something worse happens (specified as above). Or the same bad thing happens twice. Or the same bad thing happens, and some other bad thing happens. Whatever – it’s worse.

    If three 1s are showing, something really bad happens (specified as above). And T might increase by 1. Or one of the alternatives listed previously. It’s useful to be consistent.

    If four or more 1s are showing, something catastrophically bad happens and T increases by 1 or more. Or (you guessed it) as above.

    You also have the option of specifying a very small ‘something bad’ if no 1s are showing, just to remind the victim that they have this hanging over their head.

The GM controls the severity of each level of effect, the frequency of rolls, the size of the rolls (N), and the target (T). The combination of N and T also dictates what the frequency of occurrence of the different levels of penalty should be.

Nice, neat, and simple – in theory.
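If it helps to see the mechanic laid out as code, here’s a minimal sketch in Python (my choice of language, and the function name and harm values are mine, purely for illustration) that runs one full course of an affliction and reports the K value – the number of 1s – for each interval:

    import random

    def run_affliction(n_dice, target, rng=random):
        """One course of an affliction: roll N d6 per interval, bank the sixes
        until the target T is reached, and record K (the 1s) for each interval."""
        sixes, history = 0, []
        while sixes < target:
            roll = [rng.randint(1, 6) for _ in range(n_dice)]
            sixes += roll.count(6)
            history.append(roll.count(1))   # K = number of 1s this interval
        return history

    # Example: N=5, T=4, with purely illustrative harm values per K level.
    harm = {0: 1, 1: 5, 2: 10}
    course = run_affliction(5, 4)
    damage = sum(harm.get(k, 15) for k in course)   # treat K>2 as 15 HP here
    print(len(course), "intervals, K values", course, "- total", damage, "HP")

Averaging the damage over a few thousand such runs is the quick-and-dirty way to check the estimates that the tables below will give you.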

To really use it in practice, the GM needs a way to estimate what the total effects are likely to be. Then he can adjust the penalty levels and N and T accordingly to get exactly what he wants the probable outcome to be.

Or he can start with predetermined outcomes in mind and divide them up into the different penalty levels according to a convenient pairing of N and T, based on E, the number of rolls it’s expected to take to reach T.

On Today’s Menu

I’m going to outline the process in full, with tables and convenient shortcuts built in for the GM, for the first approach. Then I’ll outline the second in a shorter format, because it will use the same tables as the first approach.

When I was planning and contemplating this expansion, I also thought up a number of variations, so I’ll describe them and their impacts as the cherry on top.

Set N and T

These should always be determined by E, the expected number of rolls to reach T rolling N dice at a time.

    T=1, for N=1 to 8: 6, 3, 2, 2, 2, 1, 1, 1
    T=2, for N=1 to 8: 12, 6, 4, 3, 3, 2, 2, 2
    T=3, for N=1 to 8: 18, 9, 6, 5, 4, 3, 3, 3
    T=4, for N=1 to 8: 24, 12, 8, 6, 5, 4, 4, 3
    T=5, for N=1 to 8: 30, 15, 10, 8, 6, 5, 5, 4
    T=6, for N=1 to 8: 36, 18, 12, 9, 8, 6, 6, 5
    T=7, for N=1 to 8: 42, 21, 14, 11, 9, 7, 6, 6
    T=8, for N=1 to 8: 48, 24, 16, 12, 10, 8, 7, 6

or, you might prefer to pick an N and then a T:

    N=1, T=1 to 8: 6, 12, 18, 24, 30, 36, 42, 48
    N=2, T=1 to 8: 3, 6, 9, 12, 15, 18, 21, 24
    N=3, T=1 to 8: 2, 4, 6, 8, 10, 12, 14, 16
    N=4, T=1 to 8: 2, 3, 5, 6, 8, 9, 11, 12
    N=5, T=1 to 8: 2, 3, 4, 5, 6, 8, 9, 10
    N=6, T=1 to 8: 1, 2, 3, 4, 5, 6, 7, 8
    N=7, T=1 to 8: 1, 2, 3, 4, 5, 6, 6, 7
    N=8, T=1 to 8: 1, 2, 3, 3, 4, 5, 6, 6

Don’t worry about these not lining up in neat columns; the same information is available in the tables below.

Advice:

I prefer this second arrangement because of the clear patterns it shows for N=1, 2, 3, and 6 – but those patterns can be misleading if used for extrapolation, as N=4 shows with its jump from 3 to 5, and N=5 with its jump from 6 to 8 (the stronger example of the two). Extrapolation is not as certain as the patterns might suggest, and can’t be relied on – so I will always recommend using the first arrangement, simply because it doesn’t suggest potentially misleading extrapolations.

High-T = long durations, especially with lower N values. That’s suitable for diseases that have a long interval between checks – every 12 or 24 hours, say. But for poisons, you don’t want an E that’s more than 6 or 8, even for the worst ones, and 5-6 is probably a better target even for those. E=3-4 is good for mid-strength poisons, and E=1-2 should really be reserved for only the fastest-acting.

For every really lethal poison or disease, there should be several of the mid-strength variety, and for every mid-strength, many weaker poisons – or so runs one line of thinking. But evolution favors those poisons that are strong enough to take down whatever the poisoner feeds on or is commonly attacked by; it doesn’t happen in isolation. That can cause potency to increase, moderating the earlier trend. So here are a trio of ratios to get you thinking:

    By Theoretical Threat Magnitude: 1 : 3 : 9-12
    By Evolutionary End-point: 1 : 2 : 3
    Compromise: 2 : 5 : 10

Playing into that decision should be the poison reservoir. In other words, how many bites of the poison cherry can one poisoner deliver?

Size of the creature impacts this – the larger the creature, the larger the venom sacs (or their equivalent).

Here are some real-world assessments:

Tiny/Small – insects, small spiders, scorpions, small centipedes – venom capacity is very low and either single-use or low-frequency bursts. The venom is metabolically costly relative to body size. Often have a single, full dose for immediate defense/predation. Recovery is long (hours/days).

Medium – mid-sized snakes, large spiders, cone snails, large scorpions, etc – Moderate venom capacity, low-moderate frequency of delivery – three uses in quick succession. Capable of venom metering – injecting less than maximum to conserve supply. May deliver a full dose for a large threat, or a “dry bite” (no venom). Can deliver a burst of 2-3 significant bites, then need short recovery (minutes).

Large – large snakes, octopuses, large fishes – high venom reservoirs, Moderate-high frequency of use (multiple uses or sustained delivery). High reservoir allows for multiple, significant envenomations. Gaboon Vipers, in particular, are known for a massive venom yield and ability to deliver repeated, high-volume strikes. Delivery can be sustained over a short period. Recovery time for full capacity is still long, but practical use is frequent.

As a general rule of thumb, the less venom, the deadlier it has to be, because volume decreases as the cube of linear size. The venom therefore has to become more potent just to keep up. Larger creatures have much more venom, which they can utilize in a number of different ways that vary from one species to another. On top of that, smaller creatures are less physically resilient, and need to end combat encounters more quickly in order to survive – so that’s an extra push toward higher toxicity.

The graphic below was provided by Gemini, Google’s AI, and edited by me:

I also asked Gemini to extrapolate its findings to cover giant and ‘dire’ creatures, and this is what it came back with (edited):

Gargantuan Creatures – 5m long spiders, Giant Snakes: Size factor 5-10 x earth “real”. Venom Capacity up to 50x that of normal equivalents. Potency may decrease slightly, but total damage output increases exponentially due to volume. Sustained High Frequency of venom delivery, can deliver (5-10x earth “real”) lethal doses with minimal pause. (May take weeks to recharge but still have sufficient venom for 2-3 encounters while recharging).

Colossal Creatures – 25m sea creatures, “Kaiju” spiders, etc. Size Factor 25+ times earth “real”. Venom Capacity – essentially unlimited. Potency is often low relative to size, but the volume is so immense it acts as a biological weapon (or breath weapon, acid spray, etc., with toxic effects on top). The creature’s bite/sting is less about injecting a dose and more about dousing the target (and/or the environment around it).

A “Dire Version” is a creature that defies the standard biological trade-off, making it inherently more dangerous and a true “boss” encounter. The Dire modifier should break the Inverse Correlation by increasing both Reservoir Size and Venom Potency.

So, once you have T, N, and E, and have started thinking about bite frequency vs toxicity, the next thing to pin down is how often the various adverse effects are actually likely to crop up.

Probable Occurrence of Adverse Effects

By the way, before we get to it – generating this table of results proved too complicated for both Gemini and ChatGPT! Both understood clearly what I wanted them to do, and (as much as an LLM can) why, and generated a solution to the problem of how – that didn’t work.

Repeated corrections were attempted in both cases, and failed. That’s not a measure of my intellect or anything like that – it’s an indication of just how much detailed work lies under the surface of this innocuous-looking table.

If I had a BASIC compiler, I could have written the code myself from one of their algorithms in less time, and in about 20 lines.

Key:

“No +” represents low chance of more. Use the indicated number of occurrences in estimating total impact from impact per occurrence.

“+” represents a moderate chance of more. Use the indicated number of occurrences in estimating total impact from impact per occurrence.

“++” represents a significant chance of more. Use the indicated number of occurrences + 0.5 to estimate the average total impact from impact per occurrence.

“+++” represents a high likelihood of more occurrences than the number shown, and a high confidence of at least this many occurrences. Use the indicated number +1 to estimate the average total impact from impact per occurrence.
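If you prefer the Key as code rather than prose, this tiny helper (a sketch – the function name is mine) turns a table entry into the occurrence count to use in your estimates:

    def effective_count(entry):
        """Interpret a table entry per the Key above:
        '2' or '2+' -> 2,  '2++' -> 2.5,  '2+++' -> 3."""
        base, plusses = int(entry.rstrip("+")), entry.count("+")
        return base + {0: 0, 1: 0, 2: 0.5, 3: 1}[plusses]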

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
1 1 6 1
1 2 4 1 0
1 3 3 1 0 0
1 4 2 0+++ 0 0 0
1 5 2 0+++ 0+ 0 0 0
1 6 1 0+++ 0+ 0 0 0 0
1 7 2 0+++ 0+ 0 0 0 0 0
1 8 1 0++ 0++ 0 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
2 1 12 2
2 2 7 1+++ 0
2 3 5 1++ 0+ 0
2 4 4 1++ 0+ 0 0
2 5 3 1 0+ 0 0 0
2 6 3 1 0+ 0 0 0 0
2 7 3 1 0++ 0 0 0 0 0
2 8 2 0++ 0++ 0 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
3 1 18 3
3 2 10 2+++ 0+
3 3 7 2+ 0+ 0
3 4 5 1+++ 0++ 0 0
3 5 4 1++ 0++ 0 0 0
3 6 4 1++ 0+++ 0 0 0 0
3 7 3 1 0++ 0 0 0 0 0
3 8 3 1 0+++ 0+ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
4 1 24 4
4 2 13 3++ 0+
4 3 9 3 0++ 0
4 4 7 2++ 0+++ 0 0
4 5 5 2+ 0+++ 0 0 0
4 6 5 2 1 0+ 0 0 0
4 7 4 1++ 0+++ 0+ 0 0 0 0
4 8 4 1++ 0+++ 0+ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
5 1 30 5
5 2 16 4+ 0+
5 3 11 3+++ 0+++ 0
5 4 8 3 0+++ 0 0
5 5 7 2+++ 1 0 0 0
5 6 6 2+ 1 0+ 0 0 0
5 7 5 1+++ 1 0+ 0 0 0 0
5 8 5 1+++ 1+ 0++ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
6 1 36 6
6 2 19 5+ 0++
6 3 13 4++ 0+++ 0
6 4 10 3+++ 1 0 0
6 5 8 3 1+ 0+ 0 0
6 6 7 2+++ 1+ 0+ 0 0 0
6 7 6 2+ 1+ 0+ 0 0 0 0
6 8 5 1+++ 1+ 0++ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
7 1 42 7
7 2 22 6 0++
7 3 15 5 1 0
7 4 11 4 1+ 0 0
7 5 9 3++ 1+ 0+ 0 0
7 6 8 3 1++ 0+ 0 0 0
7 7 7 2++ 1++ 0++ 0 0 0 0
7 8 6 2 1++ 0++ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
8 1 48 8
8 2 25 6+++ 0++
8 3 17 5+++ 1 0
8 4 13 5 1++ 0 0
8 5 10 4 1++ 0+ 0 0
8 6 9 3++ 1+++ 0+ 0 0 0
8 7 8 3 1+++ 0++ 0 0 0 0
8 8 7 2++ 1+++ 0++ 0 0 0 0 0

E is usually a decimalized number because the calculations determine the average outcome over many sets of rolls. “2.6” means that 40% of the time it will take 2 rolls and 60% of the time it will take 3 – but there is always an outside chance that it might take 1 or 4, so those percentages are approximate. Because in the real world you can’t have “0.6 of a roll”, these have been rounded up, and the resulting whole number of rolls used to calculate the rest of the table.

If you want to know the exact query that ‘broke’ the AIs, it was something like this:

For N 6-sided fair dice, with N varying from 1 to 8, calculate the number of rolls required to reach a total number of sixes shown across all rolls equal to or greater than T, which also varies from 1 to 8, and label it E1. Because in the real world you can’t have “0.6” of a roll, round E1 up and label it E. For E rolls of N fair six-sided dice, calculate the number of rolls on which exactly K 1s will be seen, with K varying from 1 to 8. If the result for a given K (designated RK) is an integer, show the integer; else if RK-INT(RK) is <0.25, show INT(RK); else if RK-INT(RK) is <0.5, show INT(RK) and one “+” sign; else if RK-INT(RK) is <0.75, show INT(RK) and two “+” signs; else show INT(RK) and three “+” signs, for example “2+++”. If an entry is impossible, eg K>N, show a blank space, not a 0. Format the results in a plaintext tab-delimited table with columns T, N, E, K=1, K=2, etc, sorted by T and sub-sorted by N.
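For anyone who would rather script it than wrangle an AI, here is a sketch of that same procedure in Python (rather than the BASIC I mentioned earlier). The function and variable names are mine, and E1 is computed exactly by a small recursion rather than by any shortcut, so the odd entry may differ slightly from the manually-transcribed tables above.

    from math import comb, ceil

    def expected_rolls(n, t):
        """Exact expected number of rolls of n d6 needed to accumulate at
        least t sixes. f[m] = expected further rolls with m sixes still needed."""
        p = [comb(n, s) * (1/6)**s * (5/6)**(n - s) for s in range(n + 1)]
        f = [0.0] * (t + 1)
        for m in range(1, t + 1):
            rest = sum(p[s] * f[max(m - s, 0)] for s in range(1, n + 1))
            f[m] = (1 + rest) / (1 - p[0])      # solve out the s=0 term
        return f[t]

    def encode(x):
        """Apply the +/++/+++ coding to the fractional part of x."""
        return str(int(x)) + "+" * int((x - int(x)) / 0.25)

    for t in range(1, 9):
        for n in range(1, 9):
            e = ceil(expected_rolls(n, t))
            row = [encode(e * comb(n, k) * (1/6)**k * (5/6)**(n - k)) if k <= n else ""
                   for k in range(1, 9)]
            print(t, n, e, *row, sep="\t")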

Note that I had to run this query about 25 times, refining it each time, and eventually had to take out everything relating to the encoding and instead request the answer to 3 decimal places, so that I could do the encoding ‘manually’.

Gemini calculated the results correctly, including the formatting, but couldn’t get the columns of data to line up correctly after 24 rows plus the heading – the K=1 column kept overwriting the E column, no matter what was done.

ChatGPT failed completely to apply the encoding correctly and had several calculation errors at first, but with a bit of patience and simplifying the question, did manage to produce a table that I could copy and paste into a spreadsheet. I then inserted additional columns to perform the calculation of RK-INT(RK) and interpret the results as per the “if” statement shown above. I then hid the working and manually transcribed the results into the tables above.

Oh, and for clarity, I decided at the last minute to break what was one big table into the more user-friendly 8 smaller tables.

I’m getting ahead of myself with this picture, but it had to go somewhere! You’ll see why it’s included in due course. Image by Daniel McWilliams from Pixabay

So let’s pick an entry, I’ll decode it, and show you how it works. How about… 5 dice, target of four 6’s.

  1. Look for the line that starts 4 – 5.
  2. E is 5, so you can expect the victim to roll 5 times on average before getting to the target of 4 sixes – of course, it could happen on the very first roll, but it probably won’t.
  3. So, what’s likely to happen, bad-things wise, over the course of those expected 5 rolls?
    • K=1 has a value of 2+, so there will probably be two rolls on which a single 1 is showing.
    • K=2 has a value of 0+++ – so the expectation is that this won’t happen on any of them, but there’s a very high chance of it happening at least once – just not a relative certainty of it. And that makes sense – there’s a 1 in 36 chance that two particular dice will both show 1s, and a 125/216 chance that there will be no 1s on the other 3 dice, for a total chance of about 1.6% for that specific pair. But that doesn’t allow for a 1 on the first dice and another 1 on, say, the 3rd dice – there are 10 ways to choose which two dice show the 1s, so the real chance per roll is about 16%, and over 5 rolls that makes it more likely than not to happen at least once.
    • K=3 through K=5 are extremely unlikely to occur. Not impossible, but not likely. For all practical purposes, this is a two- or three-tiered penalty structure.
  4. The key takeaway, though, is: 2 x one 1, 1 x two 1’s (counting the 0+++ as one occurrence, per the Key), and 5-3=2 x no 1’s.
  5. So multiply that by the chosen harm levels that go with those one-counts, add it up, and you have your expected damage.
    • To demonstrate this, let’s say no 1’s = 1 HP, one 1 = 5 HP, and two 1’s = 10 HP. Then we would have 2×5 + 1×10 + 2×1 = 22 HP damage.
  6. But the system can be as complicated as you want.
    • Try no 1’s = 2 HP, 1 one = +5 HP, and 2 ones = +10 HP and a point of STR, each accompanied by the lesser levels.
    • Then, we would expect 2x(5+2) + 1x(10+5+2, & 1 STR) + 2×2 = 14+17+4 HP & 1 STR = 35 HP & 1 STR.
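That arithmetic is just multiply-and-add; if you’d rather see it as code, here’s a tiny helper (the function name is mine, and the HP values are just those of the simpler example above):

    def expected_damage(e, counts, harm):
        """counts: rolls showing exactly K ones, already adjusted per the Key
        (e.g. the 0+++ counted as 1); harm: damage per K; leftover rolls are K=0."""
        quiet = e - sum(counts.values())            # rolls with no 1s showing
        return quiet * harm[0] + sum(n * harm[k] for k, n in counts.items())

    # The 4 - 5 line: E=5, K=1 -> 2+, K=2 -> 0+++ (counted as 2 and 1)
    print(expected_damage(5, {1: 2, 2: 1}, {0: 1, 1: 5, 2: 10}))   # -> 22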

Choosing N and T

Unless you are modeling a specific set of conditions that dictate otherwise, or are working to deliver an ‘average fixed amount of damage’ (both covered in subsequent sections), the place to start is with the time intervals* between rolls and the number of rolls expected to be needed, E.

That will give you a short-list (perhaps VERY short) to choose between.

For example, if I want an effect to apply for an average of 6 time-intervals – it could be six rounds, six lots of 30 seconds, 6 minutes, 6 hours, 6 days, or whatever – I would look for E of 5, 6, or 7.

A whopping 17 entries in the table match, so I’m spoiled for choice. Since there are so many, I would lose the 5’s and 7’s and go with just the options that give exactly what I want.

That gets me down to 5 choices. I want the players to roll more than 1 die but no more than 4, because anything else takes longer to add up.

But that kills all my choices, so the decision is now which restriction do I desire more – the 6 rounds, or the 4 dice?

I decide that 7 rounds is acceptable, after all. That puts a lot of options back on my radar, including T=4 N=4 and T=4, N=5. The first has a higher chance of K=1 results, the latter introduces an outside chance of K=5 and an increased chance of K=3 and K=4. But it does fit my original 6-round desire. In the end, I choose to flip my compromise and choose the N=T=4 option.

Job Done.

Extending The Table

Let’s compare the 4-4 line with the 8-8 line.

4-4: 7, 2++, 0+++, 0, 0; vs
8-8: 7, 2++, 1+++, 0++, 0, 0, 0, 0, 0

So you can’t break an 8-8 into two sets of 4-4 rolls. But there is a simple way.

Let’s look at N=12 T=12.

    Step 1: Divide both N and T by 2 (they have to be even).

    Step 2: Look up the results on the tables above. In this case, we get N=6, T=6.

    Step 3: The total number of rolls expected is the same for both – in this case, 7.

    Step 4: Because the scaling also increases the deliberately-induced ’rounding error’, subtract 1/2 from the expected number of rolls in response to the doubling. So that’s 6½.

    Step 5: The total number of rolls is the same, but doubling the dice makes it easier to roll high numbers of ones. The counts for the worse penalties will increase, while the count for the standard penalty remains stable or slightly decreases. Balanced against that is the fact that the probability of those higher penalties is so low that, in most cases, you’re only increasing a near-zero chance by a smidgen. Analysis has led to the following rules for doubling:

    • # and #+ are always treated as #.
    • #++ should be read as #+1.
    • If the full E is <16, #+++ should also be read as #+1.
    • If E is >15, #+++ should be read as #+2.

    So, in this case, we have 2+++, 1+, 0+, 0, 0, 0.
    E is <16, so 2+++ becomes 3.
    1+ stays 1.
    0+ stays 0.
    0 stays 0.

    So three single 1s, 1 pair of 1s, and 2.5 rolls without ones.

    Step 6: But then we have to factor in the drop from 7 to 6½ expected rolls:

    3 x 6.5 / 7 = 2.8 single 1s, 0.93 pairs of 1s, and 6.5 – 2.8 – 0.93 = 2.77 rolls with no 1s.

    Step 7: Multiply those by your chosen penalty values.

    Let’s use…

      No 1’s = 3 HP
      One 1 = 10 HP
      Two 1s = 25 HP

    3 x 2.77 + 10 x 2.8 + 25 x 0.93
    = 8.31 + 28 + 23.25 = 59.56 HP.

    Step 8: Round up and add the lower of half N or half T to allow for the possibility of those results of 3 or more 1s.

    In this case, both are 6, giving a final estimate of 66 HP damage.
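If you want to automate that shortcut, here’s a sketch of Steps 4 through 8 as a Python function (the name is mine, and it takes the halved row’s entries exactly as they’re printed in the tables). It reproduces the 66 HP figure above – but remember that it’s an estimator, not a guarantee.

    from math import ceil

    def doubled_estimate(n, t, e, k_entries, harm):
        """n, t: the full (doubled) pool and target; e: E for the halved row;
        k_entries: the halved row's K columns as strings ('2+++', '1+', ...),
        trailing all-zero columns omitted; harm[0] = damage on a roll with no 1s."""
        def count(entry):                            # the doubling rules above
            base, plusses = int(entry.rstrip("+")), entry.count("+")
            if plusses <= 1:
                return base
            if plusses == 2:
                return base + 1
            return base + 1 if e < 16 else base + 2  # '+++'
        e_eff = e - 0.5                              # Step 4
        counts = [count(x) * e_eff / e for x in k_entries]         # Steps 5-6
        quiet = e_eff - sum(counts)                  # rolls with no 1s
        total = quiet * harm[0] + sum(c * harm[k + 1] for k, c in enumerate(counts))
        return ceil(total) + min(n, t) // 2          # Step 8

    # N=12, T=12 halves to the 6-6 row: E=7, entries 2+++, 1+, 0+
    print(doubled_estimate(12, 12, 7, ["2+++", "1+", "0+"],
                           {0: 3, 1: 10, 2: 25, 3: 0}))    # -> 66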

It is recommended that + and +++ rolls should have their expected penalties softened, especially if using compound effects, as the levels set for them are based on occurrence numbers that are only partially expected to occur. 10% weaker is about right. Similarly, ++ rolls should be subjected to a moderate reduction (~20%) for the same reason.

Setting penalty levels

Ensure the penalty definitions are geometrically worse as K increases (e.g., K=2 is far worse than K=1) to reflect the exponentially decreasing probability of high K rolls.

Setting penalty levels from a designated target

If plugging values into the calculations above doesn’t suit, you can establish a fixed geometric ratio – 2.5, 3, or 4 all work well – and use it to reduce your high K results to a specific number of K=1 or K=0 results. I recommend the first of these, but it’s up to you.

For example, let’s use 6 dice and a Target of 3 sixes. E=4.

    One 1 = 1++, treated as 1.5
    Two 1s = 0+++, treated as 1.
    Three to Six 1s = 0. Ignored.
    No 1’s = 4-1-1.5 = 1.5.

And let’s set a nice robust target like 100 HP. That’ll get a PC’s attention in a hurry!

Set the ratio as 4, and let’s extend the calculation down to K=0.

    Two ones = 4 (the ratio) single ones, for a total of 5.5 of them.
    One one = 4 (the ratio) ‘no ones’, so 5.5 x 4 = 22.

    100/22 = 4.54. Round down to 4. That 4 x 1.5 expected = 6 points, so our target is now 94 points from 5.5 k=1s.

    94 / 5.5 = 17.09. Round it down to 17. Multiply by the 1.5 times it’s expected to occur and we get 25.5. So our target goes down by 25 (round it down again) and our K=1 value is 17 HP.

    94-25 = 69. So our K=2 – expected once – is 69 HP.

Final results:

    K=0 does 4 HP.
    K=1 does 17 HP.
    K=2 does 69 HP.

Of course, if you set more modest targets, you’ll get more moderate results. This was deliberately extreme.
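Here’s the same back-calculation as a script, hard-coded to the worked example above (ratio 4, 100 HP, the 6-dice / 3-sixes line), so you can see where each figure comes from; the variable names are mine:

    from math import floor

    occurrences = {0: 1.5, 1: 1.5, 2: 1.0}   # expected counts per the Key; K>=3 ignored
    ratio, target = 4, 100

    k1_equiv = occurrences[1] + ratio * occurrences[2]   # 1.5 + 4    = 5.5
    k0_equiv = ratio * k1_equiv                          # 5.5 x 4    = 22

    k0 = floor(target / k0_equiv)                        # 100 / 22  -> 4
    remaining = target - floor(k0 * occurrences[0])      # 100 - 6    = 94
    k1 = floor(remaining / k1_equiv)                     # 94 / 5.5  -> 17
    remaining -= floor(k1 * occurrences[1])              # 94 - 25    = 69
    k2 = floor(remaining / occurrences[2])               #            = 69

    print(k0, k1, k2)   # 4 17 69; expected total = 1.5*4 + 1.5*17 + 69 = 100.5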

Variation One: Nested Damage Types

Try this on for size:

    K=0: minor HP damage.
    K=1: significant HP damage.
    K=2: significant HP damage & single-stat damage.
    K=3: significant HP damage & second-stat damage.
    K=4: Significant HP damage & both stats damaged.
    K=5: K=4 + Significant HP damage.
    K=6: K=4 + K=2.
    K=7: K=4 + K=3.
    K=8: 3 x K=4.

These results ‘nest’ three types of damage – two to stats and HP. You can use a similar system if the game system has multiple damage types, as in the Hero System:

    K=0: Some END loss
    K=1: K=0 + Some Stun loss
    K=2: 2 x K=1 + Some Body damage
    K=3: K=1 + K=2 + Some temporary Stat loss
    K=4: 2 x K=2 + Some temporary Stat loss
    K=5: K=4 + K=2
    K=6: K=5 + K=3.
    K=7: K=6 + K=4.
    K=8: 3 x K=5.

Defining ‘some’ as 5 points, that becomes:

    K=0: -5 END
    K=1: -5 END -5 Stun
    K=2: -10 END -10 Stun -5 Body
    K=3: -15 END -15 Stun -10 Body -5 Stat
    K=4: -20 END -20 Stun -10 Body -5 Stat
    K=5: -30 END -30 Stun -15 Body -5 Stat
    K=6: -45 END -45 Stun -25 Body -10 Stat
    K=7: -65 END -65 Stun -35 Body -15 Stat
    K=8: -90 END -90 Stun -45 Body -15 Stat

Or you could simplify things:

    K=0: -5 END -1 Stun -0 Body
    K=1: -10 END -5 Stun -1 Body
    K=2: 2 x K1
    K=3: 4 x K1 plus -1 stat
    K=4: 8 x K1 plus -5 stat
    K=5: 15 x K1 plus -10 stat
    K=6: 30 x K1 plus -20 stat
    K=7: 50 x K1 plus -30 stat
    K=8: 100 x K1 plus -40 stat

The Healing Difference

It’s up to you to decide whether or not healing, or recoveries in the Hero System, can function until whatever-it-is has run its course.

Ruling that it can’t makes these effects much nastier, and should cause you to halve whatever damage levels you had in mind – unless you want it to be potentially deadly.

Other Systemic Options

There are six other options that the GM can choose. Some of these can operate in combinations.

1. The Exhaustion Option

When you roll a 6, after adding it to your tally, that dice no longer gets rolled.

That means that your biggest risk of a really bad result is at the start, and the possible effects moderate as the rolls go on.

It makes it much harder to predict the net outcome though.

Statistical Impact: This dramatically reduces the dice pool (N) over the course of the effect. Successes are achieved quickly, but the chance of rolling K>0 adverse events on any remaining die remains constant (1 in 6). Since the pool shrinks, the absolute chance of rolling multiple 1s decreases rapidly.

Game Feel: Front-loaded risk and rapid resolution. The initial rolls are the most dangerous. If a character survives the first two or three checks, the risk of rolling 1s drops away faster than the difficulty of reaching the target, T, grows.

Best For: Fast-acting, non-renewable poisons (like a single large dose of nerve agent) or short, focused challenges where the effect is quickly flushed from the system.
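As a sketch of how little the base loop changes, here’s the Exhaustion Option grafted onto the simulation snippet from earlier (again, the names are mine; note that if N is smaller than T the pool can run dry before the target is reached, which is worth deciding about in advance):

    import random

    def run_exhaustion(n_dice, target, rng=random):
        """Exhaustion Option: every 6 is banked toward T AND retires a die,
        so the risk is front-loaded and the pool shrinks as you go."""
        sixes, history = 0, []
        while sixes < target and n_dice > 0:
            roll = [rng.randint(1, 6) for _ in range(n_dice)]
            sixes += roll.count(6)
            n_dice -= roll.count(6)          # each banked 6 removes a die
            history.append(roll.count(1))    # K value for this interval
        return history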

2. The Continual Option

Once you roll a 1, that die is locked – it is not rolled again, and it counts toward the penalties of every subsequent roll. Rolling continues until every dice shows either a 1 or a 6. The Core exit condition of accumulating T sixes remains in effect but is overshadowed by the alternative.

This means that things get progressively worse until whatever-it-is has run its course and left your system. It’s nasty but good for supernaturally-sourced troubles.

The one saving grace is the additional way out – if every dice is either a 1 or 6, the nightmare ends. In some cases, the cause – disease or poison – will burn itself out fast, in others it will be the cause of extremely protracted suffering.

The higher the initial N, the worse this gets. If you start with 6 dice:

    1, x, x, x, x, x – T sixes (cumulative) or 5 sixes needed
    K=1 events every roll until you roll another 1 or exit
    1, 1, x, x, x, x – T sixes (cumulative) or 4 sixes needed
    K=2 events every roll until you roll another 1 or exit
    1, 1, 1, x, x, x – T sixes (cumulative) or 3 sixes needed
    K=3 events every roll until you roll another 1 or exit
    1, 1, 1, 1, x, x – T sixes (cumulative) or 2 sixes needed
    K=4 events every roll until you roll another 1 or exit
    1, 1, 1, 1, 1, x – 1 six needed
    K=5 events every roll until you roll a 1 or a 6. If you roll a 1, there is a K=6 event.

Each time a die is locked on ‘1’, your chances of getting the sixes you need go down and the number of rolls you’re expected to need will go up.

Damage accumulates very rapidly, and with accelerating pace.

3. The Progressively-worse Option

Each 1 that gets rolled increases the Target by 1.

This puts survival on a knife-edge and ensures that if you suffer badly, the effects will linger for longer – making it a good choice for plagues.

Statistical Impact: This maintains the dice pool (N) but increases the overall target (T) dynamically. Every adverse event makes the recovery condition harder to achieve. This means rolling a 1 directly increases the expected duration (E) of the effect. A single unfortunate roll early on can potentially double the total expected number of checks.

Game Feel: Cascading failure and desperation. Failure feeds failure. The character sees the light at the end of the tunnel (the target T) constantly moving further away. This is highly effective for plagues or diseases that exploit the body’s weakening condition.

Best For: Plagues, zombie infection progression, or effects that are harder to fight off the longer they persist (like a viral load).

4. The Blessed Balm Option

Sixes rolled can undo some of the harm caused. Two sixes = one 1, three sixes = 2 ones, and so on.

This creates a situation in which the health of the sufferer is on a roller-coaster, up and down with each roll of the dice. Eventually, these changes will tend to dampen out. Works very well with the Progressively-Worse option.

This fundamentally re-balances the risk assessment; introducing greater variance into the process – rolls are either great (success towards T) or terrible (a large number of 1s) or tension-building (anything else). It models a scenario where the character’s vitality is constantly tested.

Game Feel: Roller-coaster effect and high stakes per roll. The character may suffer a terrible wound but instantly cancel it in the same roll with a heroic recovery effort. This variation is highly dramatic.

Best For: Magical duels, effects that fluctuate with effort or willpower, and scenarios where the poison’s progression is inherently unstable.

5. The Devastating Option

The first 6 in a roll doesn’t count, only sixes above that one.

This strongly biases the results away from recovery, without ruling it out entirely. It makes any of the ‘nasty’ options far worse.

Statistical Impact: This increases the expected number of rolls (E) needed to reach T without changing the probability of adverse events (K). Since E is higher, the total number of adverse events over the life of the affliction is necessarily higher. If you use the same N and T, the effect will be substantially longer and more severe than calculated in the base tables.

Game Feel: Recovery – and the downhill slide before it – feels incredibly sluggish and unforgiving. Successes are hard-won. This makes the affliction feel resistant or deeply embedded in the character’s system, guaranteeing prolonged suffering.

Best For: Artifact-level curses, dire creature venom, effects designed to be a significant narrative roadblock, or spurs for quests for a cure. Don’t hit a PC with this variant except in unusual circumstances when they have no-one to blame but themselves; DO hit someone important who the PCs want to save.

6. The With-A-Bang Option:

A selected number of the dice pool (N) start already showing ones and are not re-rolled. The number of these reduces by 1 each round, each released die becoming a regular rolled die rather than a “fixed one”.

The “Fixed Ones” should be 1/2 of N or less. This ‘forces’ the occurrence of a high K result in the first round, tapering off in subsequent rounds. It also extends E by reducing the likelihood of sixes being rolled, generally by the number of fixed ones at the beginning, minus 1.

    6a. Bigger Bang Sub-variant

    The “fixed ones” are only removed when a 6 is rolled. A 6 used for the purpose does not count toward the target.

    This extends the durability of the high-N count AND effectively increases T by the number of initial ones showing.

    6b. It Will All Be Over Soon Sub-variant:

    As per the basic option 6, but fixed ones do not become regular dice, they become automatic sixes.

    This front-loads the results with high-K results but effectively reduces T by the number of initial ones showing.

Going Further

Any situation in which one character uses his skills to solve a multipart problem, or a group collaborates on a challenge, or a group faces adversity together, or that can otherwise be broken down into units of roughly equal value, can be modeled using the Adverse Effects Engine.

Each part of the problem, or contributor to a solution, or participant, gets one dice, and they all roll collectively at the same time. This is especially powerful when coupled with the variants listed above.

Think of T as Progress, N as Resource/Skill, and K as Consequence (usually Immediate, but that depends on the definitions of harm that you set up).

Here are just a few of the many situations that the engine, correctly configured, can simulate.

Extreme Weather

N = number of PCs / NPCs in the group

T = N unless there is a natural channel either guiding the weather toward them (+1-3 T) or away from them (-1-2 T).

K = scale of impact of the weather event on the group.

Best Option: The Blessed Balm PLUS Progressively Worse:
Each 1 that gets rolled increases the Target by 1.
Sixes rolled can undo some of the harm caused. Two sixes = one 1, three sixes = 2 ones, and so on, mitigating an existing K result OR reducing the Target by 1 if there are no K results to mitigate.

Everybody rolls a dice and contributes the result to the roll. Sixes push the weather away from the party, Ones bring it down on top of them to a degree. Net effects change from round to round, with weather either just missing the characters (K=0), catching them at its fringes (K=1), or enveloping them (K>1).

For added flavor, throw in Nested Damage Types – First impact = Wind, Second impact = Rain / Hail / Snow, Third impact = Stronger Wind, and so on.

Product Development

Your PC is part of a team developing a new product for sale. You will need a Market Specialist (salesman), a production / manufacturing engineer, a marketer, a technical expert, and a team manager.

The salesman will identify a gap in the market to be targeted, the technician will design the product to fill that gap, the engineer will determine what the possible price-points are, and the rate of production that is possible, the marketer will figure out how to sell the product, and the team manager will make decisions and look at the costs of altering the production environment to change the production engineer’s forecasts.

Each team member gets at least 1 dice to contribute; if their specific skill is more than double the lowest specific skill in the team, they get a second one. If the company has a good history / reputation in the field, the GM can award 1-3 extra dice.

T starts at 1 per team member. If the company has a bad history or reputation to overcome, add 2. If the product is especially cutting-edge, increase this subtotal by +50% or even +100%. If the market is especially cut-throat, add another 25% on top of that. For each team member whose specific skill is less than half the highest specific skill amongst the team, add another 1.

Each 6 counts +1 towards the product being fit for purpose. Each roll marks a milestone in the development process – there can be blind alleys, competitor announcements changing the market / playing field, cost increases, new markets opening up, old markets closing down, scandals in the boardroom – anything and everything that affects the market for the product.

Penalties take the form of additional design time between rolls (K=0, K=1) and reductions in the fitness for purpose of the resulting product (K>0).

I don’t think any of the optional configurations are appropriate for this application.

Collaboration to overcome an environmental hazard (1)

Use the AEE for ongoing natural challenges where the group’s collective effort determines the duration, and individual poor luck determines the immediate suffering.

Crossing a Frozen Lake or Glacier, for example: N (Dice) = The number of characters in the group, or the lowest relevant skill rating in the group, or some reasonable fraction thereof. Only characters with a relevant skill or with a relevant stat value higher than a medium-high threshold get a die. Below those marks, the characters are liabilities toward the group’s success.

T = the GM-assigned difficulty, or some simple fraction thereof, +1 per character, whether they get a die or not.

Options Configuration: The Continual Option, PLUS The Blessed Balm Option.
Once you roll a 1, it stays unrolled thereafter and counts toward future penalties. Rolling continues until every dice shows either a 1 or a 6. The Core exit condition of accumulating T sixes remains in effect but is overshadowed by the alternative.
Sixes rolled can undo some of the harm caused. Two sixes = one 1, three sixes = 2 ones, and so on – removing it from the locked pool and releasing it back into the live dice to be rolled.

Collaboration to overcome an environmental hazard (2)

The party are roped together and have to climb.

N = Characters with climbing skill of +2 or better, or STR+DEX of 16 or better.

T = Total number of characters + 1-4 for difficulty of climb. Add 2 if the characters are under attack or otherwise pressured to climb at speed.

K = falls / setbacks. K>2 = ropes break.

Options Configuration: The Exhaustion Option simulates the rope tying the bad climbers to the good ones: When you roll a 6, after adding it to your tally, that dice no longer gets rolled.

For especially difficult climbs, add The Progressively-Worse Option: Each 1 that gets rolled increases the Target by 1.

For the most supremely challenging climbs, add the Devastating Option instead of Progressively Worse: The first 6 in a roll doesn’t count, only sixes above that one.

Ransacking A Library for specific (hidden / obscure) information

How long it takes to find a specific piece of hidden or obscure lore in a Library that might not even contain what you’re looking for depends on your reading speed (INT), presuming you have the ability to read, and your ability to recognize what you’re looking for, or that what you have just found is a clue to where to look next.

Well-structured libraries also make it a lot easier by excluding most of the books as irrelevant.

I would employ a simulation similar to the Design-A-Product example, but based purely on INT and not on specific skills. Note that if you have a character participating who is low INT, they can actually disrupt the efforts of higher-INT characters by continually interrupting them with “is this it?”.

Specifically, you want the total number of 6s to exceed the total number of 1s before the search comes to an end. If it doesn’t, either the answers aren’t there, or you’ve missed them. So long as there are dice to be rolled, there’s a chance, even if you’re at -2 or -3 to getting a result.

K=penalties to the success total, high-K = passing guards, accidental fires, magical books that scream when opened, ghostly librarians…

Focal Character overcoming an environmental hazard

All sorts of things fall into this category. Picking a combination lock, for example. Or Disarming a bomb with N critical steps that have to be performed in the right order. Or using a code-breaker.

You’ve seen these devices in the movies. Attach one to a lock and let it work its way through the combinations. To make life more difficult, consider a rolling code – that’s where a complex algorithm sets a new code every time, but only the 1000 or so valid results from that algorithm will be accepted. Which means that if you lock in the wrong answer, you have to start over.

The relevant skill here isn’t necessarily one of yours – it’s the design and programming skill of whoever designed and built the code-breaker. All you have to do is place it on the lock in roughly the right position.

With each success (each 6 toward the T), the stakes get higher. One wrong move (K>1) and it’s back to square one.

This scenario seems tailor-made for the Exhaustion Option – a 6 is a locked-in digit: When you roll a 6, after adding it to your tally, that dice no longer gets rolled.

Lesser K results are events that threaten failure / discovery, but which may not actually incur the problem.

T=Number Of Digits in the code.

N=T+a simple fraction of the programmer / designer’s skill.

Let the tension build…

PDF Icon

Click the icon to download the PDF

Using The AEE

If you prep in advance, you have plenty of opportunity to consult the tables and simply put the specific simulation instructions into your notes.

If you want to be able to use the system off-the-cuff, though, you’re going to have to be able to take it with you. For that reason, I’ve put together a PDF with the essential mechanics, shorn of explanation and example – but WITH a hyperlink back to this article.


Trade In Fantasy Ch. 5: Land Transport, Pt 5b


This entry is part 20 of 20 in the series Trade In Fantasy

This post continues the text of Part 5 of Chapter 5. Its content has been added to the parent post here and the Table of contents updated.

I have a series of images of communities of different sizes which will be sprinkled throughout this article. This is the first of these – something so sparsely-settled that it barely even qualifies as a community. It’s more a collection of close rural neighbors! Image by Jörg Peter from Pixabay

5.8.1.5 Blended Models

In general, the rule is one zone, one model. In fact, as a general rule, your goal should be one Kingdom, one model – that way, if you choose “England” as your model, your capital city will resemble London in size and characteristics, and not, say, Imperial Rome.

But, if you can think of a compelling enough reason, there’s no reason not to blend models. There are lots of ways to do this.

The simplest is to designate one model for part of a zone, and another to apply to the rest.

Example: if your capital city were much older than the rest of the Kingdom, you might decide that for IT ALONE, the Imperial model might be more appropriate, while the rest of the Kingdom is England-like. Or you might decide that, because of its size, it has sucked up resources that would otherwise grow surrounding communities more strongly, and declare a three-model structure: Imperial Capital, France for all zones except Zone 1, and England for the rest of Zone 1.

Example: A zone contains both swamp and typical agricultural land. You decide that those parts that are Swamp are German or Frontier in nature, while the rest are whatever else you are using.

An alternative approach to the problem that works in the case of the latter example is to actually average the two models’ characteristics and apply the result either to just the swamp areas, or to the zone overall.

When you get right down to it, the models are recommendations and guidelines, describing a particular demographic pattern seen in Earth’s history. There’s absolutely nothing to prevent you from inventing a unique one for a Kingdom in your world – except for it being a lot of work, that is.

5.8.1.6 Zomania – An Example

I don’t really think that a fully-worked example is actually necessary at this point, but I need to have one up-to-date and ready to go for later in the article. So it’s time for another deep-dive into the Kingdom of Zomania.

5.8.1.6.1 Zone Selection

I’ll start by picking a couple of Zones that look interesting, and distinctive compared to each other.

Zone 7 is bounded by a major road, but doesn’t actually contain that road; it DOES have capacity for a lot of fishing, though. And I note that there are cliffs in the zones to either side of it, so they WON’T support fishing – in fact, those cliffs appear to denote the limits of the zone. Zone 7 adds up to 167.8 units in area, and features 26 units of pristine beaches.

Zone 30 has an international border, a major road, lots of forest, and foothills becoming mountainous. It’s larger than Zone 7, at 251.45 units.

Because I haven’t detailed these areas at all, the place that I have to start is back in 5.7.1.13. But first…

5.8.1.6.1.1 Sidebar: Anatomy Of A Fishing Locus

I was going to bring this up a little later, but realized that readers need to know it, now.

Coastal Loci are a little different to the normal. To explain those differences, I threw together the diagram below.

1: is a coast of some kind. It might not be an actual beach, but it’s flat and meets the water.

2: It’s normal, especially if there’s a beach, for the ends to be ‘capped’ with some sort of headland. This is often rocky in nature. This is the natural location for expensive seaside homes and lighthouses.

3. Fishing villages.

4. Water. It could be a lake, or the sea, or even a river if it’s wide enough.

5. Non-coastal land, usually suitable for agriculture.

6. A fishing village’s locus is compressed along the line of the coast and bulging out into the water. This territory produces a great deal more food than the equivalent land area – anywhere from 2-5 times as much. Some cultures can go beyond coastal fishing, doubling this area – though what’s further out than shown is generally considered open to anyone from this Kingdom. Beyond that, some cultures can Deep-Sea fish (if this is the sea), which quadruples the effective area again. If you’re keeping track, that’s 2-5 x 2 x 4 = 16-40 times the land area equivalent. The axis of the locus is always as perpendicular to the coast as possible.

7. The bottoms of the lobes are lopped off…

8. And the land equivalent is then found by ‘squaring up’ the loci…

9. …which means that these are the real boundaries of the locus. The area stays roughly the same, though.

The key point is this: you don’t have to choose “Coastal Mercantile” to simulate living on the coast and fishing for food. There are mechanisms already built into the system for handling that – it’s all done with Terrain and a more generous interpretation of “Arable Land”.

Save the “Coastal Mercantile” Model for islands and coastal cultures whose primary endeavor is water-based trade.

Zone 7, then, should have the same Model as all the other farmland within the Kingdom. I think France is the right model to choose.

Zone 30 is a slightly more complicated story. For a start, don’t worry about the road – like coastal villages, that gets taken care of later. For that matter, so are the heavy forestation and the local geography – hills and mountains. But this is an area under siege from the wilderness, as explained in an earlier post, which changes the fundamental parameters of how people live, and that should be reflected in a change of model. In this case, I think the Germany / Holy Roman Empire model of lots of small, walled communities is the most appropriate.

But this does raise the question of where the change in profile takes place. I have three real options: the Zone in its entirety may be HRE-derived; or the HRE model might only apply to the forests; or it might take hold in the hills and mountains, only.

My real inclination would be to choose one of the first two options, but in this case I’m going to choose door number 3, simply because it will contrast the HRE model with the base French version of the hills and forests. In fact, for that specific purpose, I’m going to set the boundary midway through the range of hills:

5.8.1.6.1.2 Sidebar: Elevation Classification

Which means, I guess, that I should talk about how such things are classified in this system. There are eight elevation categories, but the categories themselves are based on the differences between peak elevation and base elevation.

I tried, but couldn’t quite get this to be fully legible at CM-scale. Click on the image above to open a larger copy in a new tab.

To get the typical feature size – the horizontal diameter of hills or mountains – divide 5 times the average of the Average Peak Elevation range by the average of the Local Relief range, then multiply by the elevation category number (squared, for mountains) – or use twice the previous category’s value, whichever is higher. Note that the latter is usually the dominant calculation! The results are also shown below. Actual cases can be 2-3 times this value – or 1/2 of it.

1. Undulating Hillocks – Average Peak Elevation 10-150m, Local Relief <50m; Features 16m (see below).
2. Gentle Hills – Average Peak Elevation 150-300m, Local Relief 50-150m; Features 32m.
3. Rolling Hills – Average Peak Elevation 300-600m, Local Relief 150-300m; Features 64m

     -> □ Zone 30 Treeline from the start of this category
     -> □ Normal Treeline is midway through the range

4. Big Hills – Average Peak Elevation 600-1000m, Local Relief 300-600m; Features 128m
5. Shallow Mountains – Average Peak Elevation 1000-2500m, Local Relief 600-1500m; Features 417m
6. Medium Mountains – Average Peak Elevation 2500-4500m, Local Relief 1000-3000m; Features 834 m
7. Steep Mountains – Average Peak Elevation 4500-7000m, Local Relief 3000-5000m; Features 1668m
8. Impassable Mountains, permanent snow-caps regardless of climate – Average Peak Elevation 7000m+, Local Relief 5000m+; Features 3336m.

Undulating Hillocks (also known as Rolling Hillocks or Rolling Foothills) are basically a blend of scraped-away geography and boulders deposited by glaciers. If the boulders have any sort of faults (and most do), they will quickly become more flat than round and start to tumble within the Glacier. When they come to rest, several will be stacked, one on top of another, generally in long waves. There will be gaps in between, which get filled with earth and mud and weathered rock over time, unless the rocks are less resistant to weathering than soil, in which case the rocks get slowly eaten away. In a few tens of thousands of years, you end up with undulating hillocks, or their big brothers. The flatter the terrain, the more opportunity there is for floodwaters to cover everything with topsoil, smoothing out the bumps. The diagram above shows how this ‘stacking and filling’ can produce structures many times the size of individual hillocks.

A very similar phenomenon – wind instead of glaciers, and sand instead of boulders – creates sandy dunes in deserts prone to that sort of thing. Over time, great corridors get carved out before and after each dune, generally at right angles to the prevailing winds. It can help you picture it if you think of the wind “rolling” across the dunes – when they come to a spot where the sand is a little less held together, it starts to carve out a trench, and before long, you have wave-shaped sand-dunes.

5.8.1.6.3 Area Adjustments – from 5.7.1.13

Zone 7 has a measured area of 167.8 units, but that needs to be adjusted for terrain. Instead of the slow way, estimating relative proportions, let’s use the faster homogenized approach:

Hostile Factors:
     Coast 1.1 + Farmland 0.9 + Scrub 1.1 = 3.1; average 1.03333.
     Coast +0.25 + Beaches -0.05 + Civilized -0.1 = +0.1
     Towns -0.1
     Net total: 1.03333
167.8 x 1.0333 = 173.4 units^2.

Benign Factors:
     Town 0.1 + Coast 0.15 + Beaches 0.15 + Civilized 0.2
     Subtotal +0.6
     Square Root = 0.7746
173.4 x 0.7746 = 134.3 units^2.

Zone 30 is… messier. Base Area 251.45 units^2.

Hostile Factors:
     Mining 1.5 +
     Average (Mountains 1.4 + Forest 1.25 + Hills 1.2 = 3.85) = 1.28
     Town -0.1 + Foreign Town 0.1 + River 0.2 + Caves 0.05 + Ruins 0.4 + “Wild” 0.1 = +0.75
     Net total = 1.5 + 1.28 + 0.75 = 3.53
251.45 x 3.53 = 887.6 units^2.

Benign Factors:
     Town 0.1 + Foreign Town -0.1 + River +0.1 + Caves 0.05 + Ruin 0.4 + Major Road 0.2
     Subtotal 0.75
     “Wild” = average subtotal with 1 = 0.875
     Sqr Root = 0.935
887.6 x 0.935 = 829.9 units^2.
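As I read the two worked examples, the homogenized arithmetic boils down to the sketch below – terrain multipliers averaged, the hostile modifiers added on, and the benign factors summed with their square root applied as a multiplier. Treat it as my shorthand for the Zone 7 case, not a restatement of the full procedure in 5.7.1.13 (Zone 30 adds wrinkles like the stand-alone Mining multiplier and the averaged-with-1 “Wild” factor), and the function name is mine.

    from math import sqrt

    def adjusted_area(base, terrain, hostile_mods, benign):
        """Homogenized area adjustment as used for Zone 7: average the terrain
        multipliers, add the hostile modifiers, then scale the result down by
        the square root of the summed benign factors."""
        hostile_area = base * (sum(terrain) / len(terrain) + sum(hostile_mods))
        benign_area = hostile_area * sqrt(sum(benign))
        return round(hostile_area, 1), round(benign_area, 1)

    # Zone 7: Coast/Farmland/Scrub terrain; Coast, Beaches, Civilized, Towns modifiers
    print(adjusted_area(167.8, [1.1, 0.9, 1.1],
                        [0.25, -0.05, -0.1, -0.1],
                        [0.1, 0.15, 0.15, 0.2]))      # -> (173.4, 134.3)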

To me, this looks very Greek – but it’s actually ‘Gordes’, in France, which the photographer describes as a village. One glance is enough to show that it’s bigger than the town depicted previously. Image by Neil Gibbons from Pixabay

5.8.1.6.4 Defensive Pattern – from 5.7.1.14

Zone 7 is pretty secure, the biggest threat being local insurrection or maybe pirate raids. A 4-lobe structure of 2½,5 looks about right.

When I measure out the area protected by a single fort and 4 satellites, I get 47.2 days^2. That takes into account overlapping areas where this one structure shares the burden 50% with a neighboring structure, and the additional areas that have to be protected by cavalry units.

That means that in Zone 7, there should be S x 134.3 / 47.2 = 2.845 x S of them, where S depends on what the size of a “unit” on the map is, measured in days’ march for infantry.

S is going to be the same for all zones. I’ve avoided making that decision for as long as I can – the question is, how large is Zomania?

5.8.1.6.5 Sidebar: The Size of Zomania, revisited

16,000 square miles – at least, that’s the total that I threw out in 5.7.1.3.

That’s about the same size as the Netherlands.

It’s a lot smaller than the Zomania that I’m picturing in my head when I look at the map. It IS the right size if the units shown are miles. But if they aren’t?

There are two reasons for regularly offering up Zomania as an example. The first is to provide a consistent foundation and demonstration of the principles discussed coming together into a cohesive whole. And the second is for me to check on the validity of the logic and techniques that I’ve described.

That ‘wrong’ feeling is keeping my subconscious radar from achieving purpose #2. And the Zomania being described being too small – which is the cause of that feeling – means that it isn’t going to adequately perform function #1, either.

There can be only one solution – Zomania has to grow, has to be scaled up. I want Zone 7 to be comparable to the size of the Netherlands, not the entire Kingdom, which should be comparable to France, or Germany, or England, or Spain.

A factor of 10? Where would 160,000 sqr miles place Zomania amongst the European Nations that I’ve named?

UK: 94,356. Germany: 138,063. Spain: 192,466. France: 233,032. So 160,000 would be smack-dab in the middle, and absolutely perfect for both purposes.

So Zomania is now 160,000 square miles, and the ‘units’ on all the maps are 10 miles each.

It wasn’t easy sorting this out – it’s been a road-block in my thinking for a couple of days now – triggered by results that seemed to show Zone 7 to be about 0.08 defensive structures in size.

And that is due to a second scaling problem that was getting in the way of my thinking:

How much is that in days’ marching?

In 5.7.1.14.3, I offered up:

    If d=10 miles (low), that’s 103,923 square miles.
    If d=20 miles (still low), that’s 415,692 square miles.
    If d=25 miles (reasonable), that’s 649,519 square miles.
    If d=30 miles (doable), 935,307 square miles.
    If d=40 miles (close to max), 1.66 million square miles.
    If d=50 miles (max), 2.6 million square miles.

But that was in reference to a theoretical 6 x 4, 12 + 12 pattern. Nevertheless, the scales are there. And they are way bigger than I thought they would be, and way too big to be useful as examples. Yet the logic that led to them seemed air-tight. Clearly, an assumption had been made somewhere that wasn’t correct, but this problem was getting in the way of solving the first one.

Once I had separated the two, answers started falling into place. The numbers shown above are how far infantry can march in 24 solid hours, such as they might do in a dire emergency. But defensive structures would not be built and arranged on that basis.

If infantry march for 8 hours, they have just about enough daylight left to break camp in the morning (after being fed) and set up camp in the evening (digging latrines and getting fed). That’s the scale that would be used in establishing fortifications, not the epic scale listed. In effect, then, those areas of protection are nine times the size they should be.

So, let’s redo them on that basis:

    If d=10 miles (low), that’s 11,547 square miles.
    If d=20 miles (still low), that’s 46,188 square miles.
    If d=25 miles (reasonable), that’s 72,169 square miles.
    If d=30 miles (doable), 103,923 square miles.
    If d=40 miles (close to max), 184,444 square miles.
    If d=50 miles (max), 288,889 square miles.

And those are still misleading, because mentally, I’m thinking of this as the area protected by the central stronghold, and ignoring the satellites. To get the area per fortification, we should divide by the total number of fortifications in the pattern – in the case of the numbers cited, that’s 6×4+12=36.

    If d=10 miles (low), that’s 320.75 square miles.
    If d=20 miles (still low), that’s 1283 square miles.
    If d=25 miles (reasonable), that’s 2,004.7 square miles.
    If d=30 miles (doable), 2,886.75 square miles.
    If d=40 miles (close to max), 5,123.4 square miles.
    If d=50 miles (max), 8024.7 square miles.
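
If you'd rather not redo that arithmetic by hand, the following sketch regenerates both tables above from the 24-hour figures quoted in 5.7.1.14.3, dividing by 9 for the 8-hour working day and then by 36 fortifications. The dictionary layout is just my own convenience.

    # 24-hour march areas from 5.7.1.14.3 (square miles), keyed by d in miles/day
    raw = {10: 103_923, 20: 415_692, 25: 649_519, 30: 935_307,
           40: 1_660_000, 50: 2_600_000}

    for d, area in raw.items():
        working_day = area / 9        # an 8-hour day covers 1/3 the distance, 1/9 the area
        per_fort = working_day / 36   # 6 x 4 + 12 = 36 fortifications in the pattern
        print(f"d={d}: {working_day:,.0f} sq mi, {per_fort:,.1f} sq mi per fortification")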

Reasonable = 2004.7 square miles, or roughly equal to a 44.8 x 44.8 mile area. For a really tightly packed defensive structure of the kind being discussed, that’s entirely reasonable – and it fits the image in my head.

In my error-strewn calculation, my logic went as follows:

    ▪ In the inner Kingdom, I think that life is easy and lived fairly casually. That points to the lower end of the scale – 10 miles a day or 20 miles a day.

    ▪ 10^2 = 100, so at 10 mi/day, 16,000 = 160 days march.
    ▪ 20^2 = 400, so at 20 mi/day, 16,000 = 40 days march.

    ▪ That’s a BIG difference. 40 is too quick, but 160 sounds a little too slow. Tell you what, let’s pick an intermediate value of convenience and work backwards.

    ▪ If 16,000 square miles is to be 100 days march across in area, each square ‘day’ covers 160 square miles, and the square root of 160 is 12.65 miles per day.

Now, that logic’s not bad. But it doesn’t factor in the ‘working day’ of the infantry march – it needs to be divided by 3. And it DOES factor in my psychological trend toward making the defensive areas smaller, because my instinct was telling me they were too large – but this is the wrong way to correct for that. So this number is getting consigned to the dustbin.

After all, the ‘hostile’ and ‘benign’ factors are supposed to already take into account the threat level that these fortifications are supposed to address, and hence their relative density.

    ▪ So, let’s start with the “reasonable” 25 miles.
    ▪ Apply the ‘working day’ to get 8.333 miles.
    ▪ The measured area of the defensive structure is 47.2 ‘days march’^2.
    ▪ Each of which is 8.333^2= 69.444 miles^2 in area.
    ▪ So the defensive unit – stronghold and four satellites – covers 47.2 x 69.444 = 3277.8 sqr miles.
    ▪ Or 655.56 sqr miles each.
    ▪ Equivalent to a square 25.6 miles x 25.6 miles.
    ▪ Or a circle roughly 14.4 miles in radius.
    ▪ Base Area 173.4 units^2 = 17340 square miles.
    ▪ Adjusted for threat level, 134.3 units^2 or 13430 square miles. In other words, defensive structures are further apart because there’s less threat than normal.
    ▪ 13430 / 3277.8 = 4.1 defensive structures, of 1 hub and 4 satellites each.
    ▪ So that’s 4 hubs and 16 satellites plus an extra half-satellite somewhere.
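
Here is the same chain of calculations as a short sketch, so you can swap in your own zone areas or a different ‘days march’ baseline; the variable names are mine, the numbers are the ones worked above.

    from math import sqrt

    day_march = 25 / 3                 # "reasonable" 25 miles over an 8-hour working day
    unit_area = day_march ** 2         # one (day's march)^2 in square miles
    structure = 47.2 * unit_area       # hub plus 4 satellites, measured coverage (~3277.8)
    per_fort = structure / 5           # ~655.6 square miles per fortification
    radius = sqrt(per_fort / 3.14159)  # equivalent circle radius, ~14.4 miles

    zone7_effective = 134.3 * 100      # threat-adjusted area, map units of 10 miles
    structures = zone7_effective / structure   # ~4.1 hub-and-satellite groups

    print(round(structure, 1), round(per_fort, 2), round(radius, 1), round(structures, 1))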

Those satellites could be anything from a watchtower to a small fort to a hut with a couple of men garrisoned inside, depending on the danger level and what the Kingdom is prepared to spend on securing the region. The stronghold in the heart of the configuration needs to be more substantial.

Okay, so that’s Zone 7. Zone 30 is a whole different kettle of fish.

I wanted to implement a 3-lobed configuration with more overlap than the four-lobed choice made for Zone 7. And it was turning out exactly the way I wanted it to; every hub was reinforced by three satellites, every satellite reinforced by three hubs. I had the diagrams 75% done and was gearing up to measure the protected area.

Which is when the plan ran aground in the most spectacular way. There were areas where responsibility was shared two ways, and three ways, and four ways, and – at some points – six ways. It was going to take a LONG time to measure and calculate.

If I were creating Zomania as an adventuring location for real, I would have carried on. If I lived in an ideal world, without deadlines (even the very soft ones now in place at Campaign Mastery) I would have continued. I still think that it would have provided a more enlightening example for readers, because I would be doing something a little bit different and having to explain the differences and their significance.

But since neither of those circumstances is the case, and this post is already several days late due to the complications explained earlier, I am going to have to compromise on principle and re-use the configuration established for Zone 7.

Well, at least that will show the impact that the greater threat level will impose on the structure, but it leaves the outer reaches of the Kingdom less well-protected than they should be. If and when I re-edit this series into an e-book, I might well spend the extra time and replace the balance of this section – or even work the problem both ways for readers’ edification.

REMINDER TO SELF – 3 LOBES, 1 DAY EXAMPLE

But, in the meantime…

Zone 30.
    ▪ Actual area 251.45 square units = 25,145 square miles.
    ▪ Adjusted for threat level = effective area 829.9 square units = 82,990 sqr miles. (in other words, the defensive structures you would expect to protect 82,990 square miles are so closely packed that they actually protect only 25,145 square miles, a 3.3-to-1 ratio.)
    ▪ Defensive Structure = 3277.8 square miles (from Zone 7).
    ▪ 82,990 / 3277.8 = 25.32 defensive structures of 5 fortifications each, or 126.6 fortifications in total. Zone 7 is 69% of the area and had a total of 20.5 fortifications, in comparison.

What does 0.32 defensive structures represent? Well, if I take the basic structure and ‘lop off’ two of the satellites, then it’s 3/5 of a protected area minus the overlaps. By eye, those overlaps look to be a bit more than 2 x 1/4 of one of those 1/5ths, and since 1/4 of 1/5 is 1/20th, that’s roughly 0.6-0.1 = 0.5.

If I take away a third satellite, the structure is down to 2/5 protected area minus overlaps, and those overlaps are now 1 x 1/20th, so 0.4-0.05=0.35. So, somewhere on the border, there’s a spot with one hub and one satellite.

One more point: 3.3 to 1. What does THAT really mean? Well, the defensive structure used has satellites 2.5 days march from the hub. But everything is more compressed, by that 3.3:1 ratio, so the satellites in Zone 30 are actually 2.5 / 3.3 = 0.76 day’s march from the hub. The area each commands is still the same, but there’s a lot more overlap and capacity to reinforce one another.

Another way to look at it is that there are so many fortifications that each only has to protect a smaller area. 3277.8 sqr miles / 3.3 = 993 sqr miles.

5.8.1.6.6 Sidebar: Changes Of Defensive Structure

The point that I’m going to make in this sidebar won’t make a lot of sense unless you’re paying close attention, because the Zone 30 example has the same defensive structure as Zone 7 – it’s just a lot more compressed. But imagine for a moment that there was a completely different defensive structure in Zone 30.

What does that imply for Zone 11, which lies in between the two?

You might think that it should be some sort of half-way compromise or blend between the two, but you would be wrong to do so.

If you look back at the overall zone map for Zomania (reproduced below)

…and recall that the zones are numbered in the order they were established, a pattern emerges. Zone 1 first, then Zone 2, then Zones 3-4-5-6-7, then zones 8-9-10-11-12, and so on. Until Zones 29-32 were established, Zone 11 was the frontier. It would likely have the same defensive structure as Zone 30. Rather than fewer fortifications, it would have them at the same density as Zone 30 – but the manpower in each would be reduced.

If you know how to interpret it, the entire history of the Kingdom should be laid bare by the changes in its fortifications and defenses.

But that’s not as important as the verisimilitude that you create by taking care of little details like this and keeping them consistent. The specifics might never be overtly referenced – but they still add a little to the credibility of the creation.

5.8.1.6.7 Inns in Zone 7 – from 5.7.3

Zone 7 is noteworthy for NOT having a major road – that’s on the Zone 11 / Zone 6 side of the border. Some of the inns along that road, however, may well be over that border – it’s a reasonable expectation that half of them would count. But only that half that is located where the border runs next to the road – there’s a section at the start and another at the end where the border shifts away.

But there’s a second factor – what is the sea, if not another road to travel down? And Zone 7 has quite a lot of beach. The reality, of course, is that these are holiday destinations, and places for health recovery – but it’s a convenient way of placing them.

So that’s two separate calculations. The ‘road that is a road’ first: There are actually two sections. The longer one runs through Zones 6 and 11, as already noted; it measures out at 15 units long, or 150 miles.

The second lies in Zone 15, and it’s got a noticeable bend in it. If I straighten that out and measure it, I get 5 units or 50 miles.

Conditions:
    Road condition, terrain, good weather = 3 x 2.
    Load = 1 x 1/2.
    Everything else is a zero.
    Total: 6.5.
6.5 / 16 x 3.1 = 1.26 miles per hour.
1.26 mph x 9 hrs = 11.34 miles.

Here’s the rub: we don’t know exactly where the hubs and satellites are in Zone 7, only how many of them there are to emplace. But it seems a sure bet that those areas where the road and border part ways, do so because there’s a fortification there that answers to Zone 6 or Zone 11, respectively. And that means that we can treat the entire length of the road as being between two end points.

We know from the defensive structure diagram that the base distance from Satellite to Hub is 2 1/2 days march, and that there’s a scaling of x 1.0333 (hostile) x 0.7746 (benign) = x 0.8 – and that benign factors space fortifications further apart while hostile ones bunch them together, so this is divided by when calculating distances. We know that 8.333 miles has been defined as a “day’s march”.

If we put all that together, we get 2.5 x 8.333 / 0.8 = 26 miles from satellite to hub.

Armies like their fortifications on roads, it makes it faster to get anywhere. Traders like their trade routes to flow from fortification to fortification, it protects them from bandits. The general public, ditto. If a road doesn’t go to the fortification, people will create a new road and leave the official one to rot. So it can be assumed that the line of fortifications will follow the road, and be spaced every 26 miles along it, alternating between hub and satellite.

    150 miles / 26 = 5.77 of them.
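
Putting the spacing and travel-speed arithmetic above into one place, as a sketch: the names are mine, and the inn interval simply re-uses the 11.34-mile figure derived a few paragraphs back.

    day_march = 25 / 3                   # miles per 8-hour working day
    spacing = 2.5 * day_march / 0.8      # satellite-to-hub distance, ~26 miles
    road = 150                           # miles of road paralleling the Zone 7 border
    print(round(road / spacing, 2))      # ~5.8 (5.77 if you round the spacing to 26 miles first)

    speed = 6.5 / 16 * 3.1               # conditions total / 16 x 3.1, ~1.26 mph
    inn_interval = speed * 9             # ~11.34 miles, one day's travel for ordinary folk
    print(round(road / inn_interval, 1)) # ~13.2 inn-intervals along the same stretch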

It’s an imperfect world; that 0.77 means that you have one of three situations, as shown below:

The first figure shows a hub at the distant end of the road. The second shows a hub at the end of the road closest to the capital. And the third shows the hubs not quite lining up with either position.

But those aren’t the actual ends of the road – this is just the section that parallels the border of Zone 7, or vice-versa. So the last one is probably the most realistic.

Now, let’s place Inns – one every 11.34 miles. But we have to do them from both ends – one showing 1 day’s travel for ordinary people headed out, and one showing them heading in. Just because I’m Australian, and we drive on the left, I’ll put outbound on the south side and inbound on the north.

Isn’t that annoying? They don’t quite line up – to my complete lack of surprise. Look at the second in-bound inn – it’s about 20% of a day short of getting to the satellite, and that puts it so close that it’s not worth stopping there; you would keep going.

Well, you can’t make a day longer, but you can make it shorter. And that makes sense, because these are very much average distances.

I’ve shortened the days for the ordinary traveler – including merchants – just a little, so that every 5th inbound Inn is located at a Stronghold, and every 5th outbound inn is located at a satellite. Every half-day’s travel now brings you to somewhere to stop for a meal or for the night.

It’s entirely possible that not all of these Inns will actually be in service, it must be added. Maybe only half of them are actually operating. Maybe it’s only 1/3. But, given its position within the Kingdom, there’s probably enough demand to support most of these, so let’s do a simple little table:

    1 inn functional
    2 inn functional
    3 inn functional but 1/4 day closer
    4 inn functional but 3/4 day farther away
    5 inn not functional
    6 inn not functional, and neither is the next one.

Applying this table produces the following (for some reason, my die kept rolling 3s and 6s):

Even here, in this ‘safe’ part of the Kingdom, travelers will be forced to camp by the roadside.

And that’s where I’ll have to leave it, for this post. I had hoped to get all of the Zomania examples done, but the problems early on put paid to that, and didn’t even leave me enough time to get Zone 30 detailed through to the inn stage – let alone up to date! That’s obviously for the next post….


Trade In Fantasy Ch. 5: Land Transport, Pt 5a


This entry is part 19 of 20 in the series Trade In Fantasy

This post continues the text of Part 5 of Chapter 5. Its content has been added to the parent post here and the Table of contents updated. I have decided at the last minute to let the featured image (but not the head image) evolve with each post.

I have a series of images of communities of different sizes which will be sprinkled throughout this article. This is the first of these – something so sparsely-settled that it barely even qualifies as a community. It’s more a collection of close rural neighbors! Image by Jörg Peter from Pixabay

5.8.1 Villages

The village is the fundamental unit of the population distribution simulation – everything starts there and flows from it.

    5.8.1.1 Village Frequency

    I’ve given this section a title that I think everyone will understand, but it’s not actually what it’s all about. The real question to be answered here is, how big is the Locus surrounding a population?

    The answer differs from one Demographic Model to another, unsurprisingly.

    The area of a given Locus is:

        SL = MF x (Pop)^0.5 x k,
            where,
            SL = Locus Size
            MF = Model Factor
            Pop is the population of the village
            and k = a constant that defines the units of area.

    The base calculation, with a k of 1, is measured in days of travel. That works for a lot of things, but comparison to a base area of 10,000 km^2 isn’t one of them. For that, we need a different k – one based on the Travel Ranges defined in previous parts of this series.

    Section 5.7.1.14.5.1 gives answers based on travel speed, more as a side-issue than anything else, based on the number of miles that can be traversed in a day:

      (Very) Low d = 10 miles / day
      Low d = 20 miles / day
      Reasonable d = 25 miles / day
      Doable d = 30 miles / day
      Close To Max (High) d = 40 miles / day
      Max d = 50 miles / day
          ( x 1.61 = km).

    — but these are the values for Infantry Marching, and that’s a whole other thing.

    Infantry march faster than people walk or ride in wagons. The amount varies depending on terrain (that’s the main variable in the above values), but – depending on who you ask – it’s 1 2/3 or 2 or 2.5 times.

    But, because they travel in numbers, they can march for less time in a day. Some say 6 hours, some 7, some 8. Ordinary travelers may be slower, but they can operate for all but an hour or two of daylight. That might be 8-2=6 or 7 hours in winter, but it’s more like 12-2=10 or 11 hours in summer.

    And it has to be borne in mind that the basis for these values assumes travel in Summer – at least in medieval times. But we want to take the seasons out of the equation entirely and set a baseline from which to adjust the list given earlier.

    One could argue that summer is when the crops are growing, and therefore that should be the basis of measurement, given that we’re looking for the size of a community’s reach.

    So let’s take the summer values, and average them to 10.5 hours. When you take the various factors into account and generate a table (I used 6, 6.5, 7, 7.5, and 8 for army marching times per day, and the various figures for speed cited, plus 2.25 as an additional intermediate value), work out all the values that it might be, and average them, you get 1.04. That’s so small a change as to be negligible – 1.04 x 50 = 52. We will have far bigger approximations than that!

    So we can use the existing table as our baseline. Isn’t that convenient?

    But which value from amongst those listed to choose? Overall, unless there’s some reason not to, you have to assume that terrain is going to average out when you’re talking about a baseline unit of 10,000 sqr kilometers. So, let’s use the “Reasonable” value unless there’s reason to change it.

    And that gives a conversion rate of 1 day’s travel = roughly 25 miles, or 40 km. And those are nice round numbers.

    Now, a locus is roughly circular in shape, so is that going to be a radius or a diameter? Well, a “market day” is how far a peasant or farmer can travel with their goods and return in a day, so I think we’re dealing with a radius of 1/2 the measurement; that measurement must be the diameter of the locus.

    Which means that the base radius of a locus is 12.5 miles or 20 km.

    In an area where the terrain is friendly in terms of travel, this could inflate to twice as much; in an area where terrain makes travel difficult, it could be 1/2 as much or less. But if we’re looking for a baseline, that works.

    12.5 miles radius = area roughly 500 sqr miles = area 1270 sqr km. So in 10,000 sqr km, we would expect to find, on average, 7.9 loci. But that’s without looking at the population levels and the required Model Factors.

    The minimum size for an English Village is 240 people. The Square Root of 240 is 15.5.

    So the formula is now 1270 = 15.5 x 20 x Model Factor, and the Model Factor for England conditions and demographics is 4.1. Under this demographic model, there will be 4.1 Village Loci – which is the same thing as 4.1 villages – in 10,000 sqr km.

    Having worked one example out to show you how it’s done, here are the Model Factors for all the Demographic Models:

    ▪ Imperial Core: 480^0.5 = 21.9, and 21.9 x 20 x Model Factor = 1270, so MF = 2.9
    ▪ Germany (HRE): 400^0.5=20, and 20 x 20 x MF = 1270, so MF = 3.175
    ▪ France: 320^0.5 = 17.9, and 17.9 x 20 x MF = 1270, so MF = 3.55
    ▪ Coastal Mercantile Model: 280^0.5 = 16.733, and 16.733 x 20 x MF = 1270, so MF = 3.8
    ▪ England: 4.1
    ▪ Frontier Nation: 200^0.5 = 14.14, and 14.14 x 20 x MF = 1270, so MF = 4.5
    ▪ Scotland: 160^0.5 = 12.65, and 12.65 x 20 x MF = 1270, so MF = 5.02
    ▪ Tribal / Clan Model: 80^0.5 = 8.95, and 8.95 x 20 x MF = 1270, so MF = 7.1
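
    Since each Model Factor comes from the same one-line rearrangement, here is a quick sketch that reproduces the list above; the dictionary of minimum village populations is read straight off the models, with the names abbreviated.

        from math import sqrt

        minimum_pop = {"Imperial Core": 480, "Germany (HRE)": 400, "France": 320,
                       "Coastal Mercantile": 280, "England": 240, "Frontier Nation": 200,
                       "Scotland": 160, "Tribal / Clan": 80}

        for name, pop in minimum_pop.items():
            mf = 1270 / (sqrt(pop) * 20)    # from 1270 = sqrt(Pop) x 20 x MF
            print(f"{name}: MF = {mf:.2f}")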

    So, why didn’t I simply state the number of loci (i.e. the number of villages) in an area?

    It’s because that’s a base number. When we get to working on actual loci or zones, these can shrink or grow according to other factors. This is a guideline – but to define an actual village and its surrounds, we will need to use the MF. Besides, you might want to generate a specific model for a specific Kingdom in your game.

    You may be wondering, then, why it should be brought up at all, or why at this stage in particular? The answer to those questions is that the area calculated is a generic base number which may bear only a passing resemblance to the actual size of the locus.

    A locus will continue to expand until it hits a natural boundary, a border, or equidistance to another population center. Very few of them will actually be round in shape – some of them not even approximately.

    The ratio between ACTUAL area and BASE area is an important factor in calculating the size of a specific village.

    An example of the ‘real borders’ of a Locus

    To create the above map, I made a copy of the base map (shown to the left). At the middle top and bottom, I placed a dot representing the Locus ‘radius’.

    At the left top, another dot marked the half-way point to the next town (top left), where it intersected a change of terrain – in this case, a river.

    At the top right, doing the same thing would have made the town at top right a bit of a mixed bag – it already has forests and hills and probably mountains. I didn’t want it to have a lot of farmland as well. So I deliberately let the current locus stretch up that way. The point below it is also slightly closer to the top right town than it would normally be, but that’s where there is a change of terrain – the road. I tossed up whether the locus in question should include the intersection and road, but decided against it.

    And so on. Once I had the main intersection points plotted, I thought about intermediate points – I didn’t want terrain features to be split between two towns, they had to belong to one or the other. You can see the results in the “bites” that are taken out of the borders of the locus at the bottom.

    If you use your fingers, one pointing at the town in the center and the other at the top-middle intersection point, and then rotate them to get an idea of the ‘circular’ shape of the locus, you can see that it’s missing about 1/6 of its theoretical area to the east, another 1/6 to the south, and a third 1/6th to the west. It’s literally 1/2 of the standard size. That’s going to drive the population down – but it’s fertile farmland, which will push it up. But that’s getting ahead of ourselves.

    As an exercise, though, imagine that the town lower right wasn’t there. The one that’s on the edge of the swamp. Instead of ending at a point at the bottom, the border would probably have continued, including in the locus that small stand of trees and then following the rivers emerging from the swamp, and so including the really small stand of trees. The Locus wouldn’t stop until it got to the swamp itself. The locus would have extended east to the next river, in fact, encompassing forest and hills until reaching the East-road, which it would follow inwards until it joined the existing boundary. It would still have lost maybe 1/12th in the east, but it would have gained at least that much and probably more in the south, instead of losing 1/3. The locus would be 1 – 1/12 + 1/3 – 1/12 – 1/3 = 10/12 of normal instead of 1/2 of normal.

    5.8.1.2 Village Base Size

    If you look at the models, you will notice “Base Village” and a population count, and might be fooled into thinking that everything in that range is equally likely. It’s not.

    Take the French model – it lists the village size as 320-480.

    First, what’s the difference, high minus low? In this case, it’s 160. We need to divide that by 8 as a first step – which in this case is a nice, even, 20.

    Half of 20 is 10, and three times 10 is 30. Always round these UP.

    With that, we can construct a table:

        01-30 = 320
        31-40 = 321-350 (up by 30)
        41-50 = 351-380 (up by 30)
        51-60 = 381-400 (up by 20)
        61-70 = 401-420 (up by 20)
        71-75 = 421-430 (up by 10)
        76-80 = 431-440 (up by 10)
        81-85 = 441-450 (up by 10)
        86-90 = 451-460 (up by 10)
        91-95 = 461-470 (up by 10)
        96-00 = 471-480 (up by 10)

    I used Gemini to assist in validating various elements of this section, and it thought the “up by 30” was confusing and suggested that the terminology be replaced with something more formal.

    I disagree. I think the more colloquial vernacular will get the point across more clearly.

    It was also concerned – and this is a more important point – that GMs couldn’t implement this roll and the subsequent sub-table quickly. I disagree, once again – I’ve seen far more complicated constructions for getting precise population numbers than two d% rolls, especially since the same tables will apply to all areas within the Kingdom that are similar in constituents. Everywhere within a given zone, in fact, unless you deliberately choose to complicate that in search of precision.

    In general, you construct one set of tables for the entire zone – and can often copy those as-is for other similar zones as well. Maybe even for a whole Kingdom.

    The d% breakdown is always the same percentages, and there are always 2 “up by 3 x 1/2” bands, 2 “up by 2 x 1/2” bands, and 5 “up by 1/2” bands – with the final band absorbing any rounding errors; in this example there aren’t any.

    We then construct a set of secondary tables by dividing our three (or four) increments by 10. In this case, 30 -> 3, 20 -> 2, 10 -> 1. And we apply the same d% breakdown in exactly the same way, but from a relative position:

    So:
        1/2 x 3 = 1.5, rounds to 2; 3 x 1.5 = 4.5, rounds to 5.
        1/2 x 2 = 1; 3 x 1 = 3.
        1/2 x 1 = 0.5, rounds to 1; 3 x 1 = 3.

    The “Up By 30” Sub-table reads:

        01-30 = +0
        31-40 = +5
        41-50 = +5+5 = +10
        51-60 = +10+3=+13
        61-70 = +13+3=+16
        71-75 = +16+2 = +18
        76-80 = +18+2 = +20
        81-85 = +20+2 = +22
        86-90 = +22+2 = +24
        91-95 = +24+2 = +26
        96-00 = +30 (up by whatever’s left).

    The “Up By 20” Sub-table:

        01-30 = +0
        31-40 = +3
        41-50 = +3+3 = +6
        51-60 = +6+2 =+8
        61-70 = +8+2=+10
        71-75 = +10+1 = +11
        76-80 = +11+1 = +12
        81-85 = +12+1 = +13
        86-90 = +13+1 = +14
        91-95 = +14+1 = +15
        96-00 = +20 (up by whatever’s left).

    The “Up By 10” Sub-table:

        01-30 = +0
        31-40 = +3
        41-50 = +3+3 = +6
        51-60 = +6+1 =+7
        61-70 = +7+1=+8
        71-75 = +8+1 = +9
        76-80 = +9+1 = +10
        81-85 = +0-1 = -1
        86-90 = -1-1 = -2
        91-95 = -2-1 = -3
        96-00 = -3-1 = -4

    Notice what happened when I ran out of room in the “+10”? The values stopped going up, and starting from +0, started going DOWN.

    It takes just two rolls to determine the Base Population of a specific village with sufficient accuracy for our needs within a zone.

    EG: Roll of 43: Main Table = 380, in an up-by-30 result. So we use the “Up By 30” Sub-table and roll again: 72, which gives a +18 result. So the Base population is 380+18=398.

    These results are intentionally non-linear.
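
    As a sketch of how quick the lookup is in practice, here are the French-model tables above encoded as data, with the two-roll procedure as a function; the band boundaries and offsets are copied from the tables, everything else is my own framing.

        import bisect

        # d% band upper bounds, shared by the main table and all sub-tables
        bounds = [30, 40, 50, 60, 70, 75, 80, 85, 90, 95, 100]

        # French model main table: band value, and the size of the increment that band uses
        main_value = [320, 350, 380, 400, 420, 430, 440, 450, 460, 470, 480]
        increment  = [0, 30, 30, 20, 20, 10, 10, 10, 10, 10, 10]

        # Sub-table offsets, keyed by the increment they refine
        sub_tables = {30: [0, 5, 10, 13, 16, 18, 20, 22, 24, 26, 30],
                      20: [0, 3, 6, 8, 10, 11, 12, 13, 14, 15, 20],
                      10: [0, 3, 6, 7, 8, 9, 10, -1, -2, -3, -4]}

        def band(roll):
            return bisect.bisect_left(bounds, roll)

        def base_population(main_roll, sub_roll):
            i = band(main_roll)
            offset = sub_tables[increment[i]][band(sub_roll)] if increment[i] else 0
            return main_value[i] + offset

        print(base_population(43, 72))   # 398, matching the worked example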

    Optional:

    If you want more precise figures, apply -3+d3.

    Or -6+d6.

    Or anything similar – though I don’t really think you should go any larger than -10+d10 – and I’d consider -8+2d6 first.

    I have to make it clear, this is relating to the population of a specific village in a specific zone, not a generic one. For anything of the latter kind, continue to use the minimum base population. I just thought that it bookended the ‘real locus’ discussion. We had to have the former because it affects what terrain influences the town size and how much of it there is; the latter is just a bonus that seemed to fit.

    5.8.1.3 Village Demographics

    Let’s start by talking Demographics, both real-world and Fantasy-world.

    The raw population numbers are not as useful as numbers of families would be. But that’s incredibly complicated to calculate and there’s no good data – the best that I could get was a broad statement that medieval times had a child mortality rate (deaths before age 15) of 40-50%, an infant mortality rate (deaths before age 1) of 25-35%, and an average family size of 5-7 children.

    If we look at modern data, we get this chart:

    Source: Our World In Data, cc-by, based on data from the United Nations. Click the image to open a larger version (3400 x 3003 px) in a new tab.

    I did a very rough-and-ready curve fitting in an attempt to exclude social and cultural factors and derive a basic relationship for what is clearly a straight band of results:

    Derivative work (see above), cc-by, extrapolating a relationship curve in the data

    …from which I extracted two data points: (0%,1.8) and (10%,5.6), which in turn gave me: Y = 0.38 X + 1.8, which can be restated, X = 2.63Y – 4.74. And that’s really more precision than this analysis can justify, but it gives a readout of child mortality for integer family sizes.

    Yes, I’m aware that the real relationship isn’t linear. But this simplified approximation is good enough for our purposes.

    That, in turn, gives me the following:

        Y = Typical Number Of Children,
        X = Overall Child Mortality Rate

        Y, X:
        1, -3%
        2, 0%
        3, 3%
        4, 5%
        5, 8%
        6, 11%
        7, 13%
        8, 16%
        9, 18%
        10, 21%
        11, 24%
        12, 26%

    …so far, so good.

    Next, I need to adjust everything for the rough data points that we have for medieval times, when bearing children was itself a mortality risk for the mothers.

    5-7 children, 40-50%

    so that gives me (5, 8, 40) and (7, 13, 50) – more useful in this case as (8, 40) and (13,50) – which works out to Z = 2 X + 24.

        Z=Child Mortality, Medieval-adjusted

        Y, X, Z:
        1, -3%, 18%
        2, 0%, 24%
        3, 3%, 30%
        4, 5%, 34%
        5, 8%, 40%
        6, 11%, 46%
        7, 13%, 50%
        8, 16%, 56%
        9, 18%, 60%
        10, 21%, 66%
        11, 24%, 72%
        12, 26%, 76%

    But here’s the thing: realism and being all grim and gritty might work for some campaigns, but for most of us – no. What we need to do now is apply a “Fantasy Conversion” which contains just enough realism to be plausible and replaces the balance with optimism.

    I think dividing Z (the medieval-adjusted child mortality rate) by 3 sounds about right – YMMV. That gives me the F values below – but I also checked on a ratio of 2.5, which gives me the F2 values.

    Gemini suggested using 3.5 or 4 for an even ‘softer’ mortality rate, and 2.25 or 2 for a grittier one.

    In principle, I don’t have a problem with that – and part of the reason why I’m not just throwing the mechanics at you, but explaining how they have been derived, is so that GMs can use alternate values if they think them appropriate to their specific campaigns.

    I don’t just want to feed the hungry, I want to teach them to fish, to paraphrase the old proverb.

        F= Fantasy Adjusted Child Mortality Rate
        F2 = more extreme Child Mortality Rate

        Y, X, Z, F, F2:
        1, -3%, 18%, 6%, 7%
        2, 0%, 24%, 8%, 10%
        3, 3%, 30%, 10%, 12%
        4, 5%, 34%, 11%, 14%
        5, 8%, 40%, 13%, 16%
        6, 11%, 46%, 15%, 18%
        7, 13%, 50%, 17%, 20%
        8, 16%, 56%, 19%, 22%
        9, 18%, 60%, 20%, 24%
        10, 21%, 66%, 22%, 26%
        11, 24%, 72%, 24%, 29%
        12, 26%, 76%, 25%, 30%
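
    For anyone who wants to regenerate this table, or try a different softening divisor, here is a short sketch of the whole chain; the floor-versus-round choices are simply the ones that reproduce the published figures, so treat them as inferred rather than gospel.

        import math

        def mortality_chain(children):
            x = math.floor((children - 1.8) / 0.38)   # modern child mortality %, X = 2.63Y - 4.74
            z = 2 * x + 24                            # medieval-adjusted rate, Z = 2X + 24
            f = round(z / 3)                          # Fantasy-adjusted (divide by 3)
            f2 = round(z / 2.5)                       # more extreme version (divide by 2.5)
            return x, z, f, f2

        for y in range(1, 13):
            print(y, *mortality_chain(y))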

    I think the F values are probably more appropriate for High Fantasy, while the F2 are better for more typical fantasy – but you’re free to use this information any way you like, the better to suit your campaign world.

    You might decide, for example, that averaging the Medieval Adjusted Values with the F2 is ‘right’ – so that 5 children would indicate (40+16)/2 = 28% mortality.

    Social values can also adjust these values – traditionally, that means valuing male children more than females. But in Fantasy / Medieval game settings, I think that would be more than counterbalanced, IF it were a factor, by the implied increased risks from youthful adventuring. In a society that practices such gender-bias, it would not surprise me if the ultimate gender ratio was 60-40 or even 70-30 – in favor of Girls.

    5.8.1.3.1 Maternal Survival

    The next element to consider is the risk of maternal death in childbirth. That’s even harder to pin down data on, but 1-3% per child is probably close to historically accurate. Balanced against that are the greater risks from adventuring, and the availability of clerical healing. So I’m extending the table to cover 4, 5, and 6%, but you are most likely to want the values in the first columns. To help distinguish these extreme possibilities from the usual ones, they have been presented in Italics.

    We’re not interested so much in the number of cases where it happens as in the number of cases where it doesn’t – the % of families with living mothers, relative to the number of children.

        Y, @1, @2, @3, @4, @5, @6:
        1, 99%, 98%, 97%, 96%, 95%, 94%
        2, 98.0%, 96.0%, 94.1%, 92.2%, 90.3%, 88.4%
        3, 97.0%, 94.1%, 91.3%, 88.5%, 85.7%, 83.1%
        4, 96.1%, 92.2%, 88.5%, 84.9%, 81.5%, 78.1%
        5, 95.1%, 90.4%, 85.9%, 81.5%, 77.4%, 73.4%
        6, 94.1%, 88.6%, 83.3%, 78.3%, 73.5%, 69.0%
        7, 93.2%, 86.8%, 80.8%, 75.1%, 69.5%, 64.8%
        8, 92.3%, 85.1%, 78.4%, 72.1%, 66.3%, 61.0%
        9, 91.4%, 83.4%, 76.0%, 69.3%, 63.0%, 57.3%
        10, 90.4%, 81.7%, 73.7%, 66.5%, 59.9%, 53.9%
        11, 89.5%, 80.1%, 71.5%, 63.8%, 56.9%, 50.6%
        12, 88.6%, 78.5%, 69.4%, 61.3%, 54.0%, 47.6%

    The method of calculation is 100 x ( 1- [D/100] ) ^ Y. Just in case you want to use different rates than these.

    There does come a point at which the likelihood of maternal death begins to limit the size of the average family, though, and I think the 6% values are getting awfully close to that mark.

    Let’s say that a couple have 6 children, right in the middle of the historical average. If the mother falls pregnant a 7th time, then at 6% per birth, her cumulative chance of having died in childbirth reaches roughly 1 in 3 (with a fair risk of the child perishing with her). Which means that she HAS no more children. But if she beats those odds to have 7 children, her chances are even worse when it comes to child #8, and so on.

    Of all the cases with a mother who survived childbirth, we then need to factor in death from all other causes – monsters and adventuring and mischance and so on. Fantasy worlds tend to be dangerous, so this could be quite high – maybe as much as 5% or 10% or 20%. So multiply the living mothers by 0.8. Or 0.7 Or 0.9 – whatever you consider appropriate – to allow for this.
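
    Here is the same survival calculation as a one-function sketch, with the ‘all other causes’ multiplier bolted on; the function name and the example other-causes rate are mine.

        def mothers_surviving(children, risk_pct, other_causes=0.0):
            # % of families with a living mother: per-birth risk compounded over the
            # number of children, then reduced by a flat rate for all other causes.
            childbirth = 100 * (1 - risk_pct / 100) ** children
            return childbirth * (1 - other_causes)

        print(round(mothers_surviving(6, 2), 1))         # 88.6, matching the @2 column for Y=6
        print(round(mothers_surviving(6, 2, 0.10), 1))   # ~79.7 after 10% other-cause mortality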

    This rural community is obviously alongside a major river or coastline – the proximity of the mountains suggests the first, but isn’t definitive. The name offers a clue: ‘hallstatt’, which to me sounds Germanic, and suggests that the waterway may be the Rhine. Or not, if I’ve misinterpreted. Image by Leonhard Niederwimmer from Pixabay

    5.8.1.3.2 Paternal Survival

    The result is the % of families with a surviving mother. So how many surviving fathers are there per surviving mother? Estimates here vary all over the shop, and more strongly reflect social values. But if I’m suggesting 5% – 20% mortality for mothers from other sources, the same would probably be reasonably true of fathers – if those social values don’t get in the way.

        0.95 x 0.95 = 90.25%.
        0.9 x 0.9 = 81%.
        0.85 x 0.85 = 72.25%
        0.8 x 0.8 = 64%.

    Those values give the percentages in which both parents have survived to the birth of the average number of children.

    If you’re using 10% mortality from other causes, then in 90% of cases in which the mother has died, the father has survived. But in 10% of the cases in which the mother has succumbed, the children are orphaned by the loss of the other parent.

    The higher this percentage, the higher the rate of survivors remarrying and potentially doubling the size of their households at a stroke. And that will distort the average family size far more quickly than the actual mortality percentages, unless there is some social factor involved – maybe it’s expected that parents with children will only marry single adults without children, for example.

    The problem with this approach is that if it’s the mother who is remarrying, this puts her right back on that path to mortality through childbirth; the child-count ‘clock’ does not get reset. If it’s a surviving father marrying a new and childless wife, it DOES reset, because the new mother has not had children previously.

    In a society that permits such actions, there is a profound dichotomy at its heart that favors larger families for husbands who survive while placing mothers who survive at far greater risk of the family becoming a burden to the community – which is likely to change that social acceptance. Paradoxically, a double standard is what’s needed to give both parents a more equal risk of death, and a more equal chance of surviving.

    5.8.1.3.3 Childless Couples

    Next, let’s think about the incidence of Childless Couples. We can state that there’s a given chance of pregnancy in any given year of marriage; but once it happens, there is just under a full year before that chance re-emerges.

        Year 1: A% -> 1 child born
        Year 2: (100-A) x A% -> 1 child born, A%^2 -> 2 children born
        Year 3: (100-A)%^2 x A% -> 1 child born, (100-A) x A% -> 2 children born, A^3% -> 3 children born

    … and so on.

    This quickly becomes difficult to calculate, because each row adds 1 to the number of columns, and it’s easy to lose track.

    But here’s the interesting part: we don’t care. To answer this question, there’s a far simpler calculation.

    In any given year, there will be B couples married. (100-A%) of them will not have children in the course of that year. If we specify B as the average, rather than as a value specific to a given year, then the year before we will also have B couples marry, and (100-A%) of them without children at the end of that year – which means that in the course of the second year of marriage, A% will have children and stop being counted in this category, and (100-A)% will not, and will still count.

    Adding these up, we get (100-A)% + (100-A)%^2 + …. and so on. And these additions will get progressively and very rapidly smaller.

    Let’s pick a number, by way of example – let’s try A=80%, just for the sake of argument.

    We then get 20% + 4% + 0.8 % + 0.16% + 0.032% + 0.0064% … and I don’t think you’d really need to go much further, the increases become so small. I pushed on one more term (0.000128%) and got a total of 24.998528%. I pushed further with a spreadsheet, and not even 12 years was enough to cross the 25% mark – but it was getting ever closer to it. Close enough to say that for A=80, there would be 25 childless couples for every… how many?

    The answer to that question comes back to the definition of A: it’s the number of couples out of 100 who have a child in any given year. So, over 12 years, that’s a total of 1200 couples. And 25 / 1200 = 2.08%.

    I did the math – cheating, I used a spreadsheet – and got the following, all out of 1200 couples:

        A%, C, [C rounded]
        80%, 25, 25
        75%, 33.33, 33
        70%, 42.86, 43
        65%, 53.85, 54
        60%, 66.67, 67
        55%, 81.81, 82
        50%, 99.98, 100
        45%, 122.13, 122
        40%, 149.67, 150
        35%, 184.66, 185
        30%, 230.10, 230
        25%, 290.50, 291
        20%, 372.51, 373

    But that has to mean that the rest of those 1200 couples have to have children – and the number of children will approach the average number that you chose.

    So if you pick a value for A, you can calculate exactly how many childless couples there are relative to the number of families with children:

        A=45%, C=122:

        1200-122 = 1078
        1078 families with children, 122 childless couples
        1078 / 122 = 8.836
        8.836 + 1 = 9.836
        so 1 in 9.836 families will be childless couples.
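
    The spreadsheet work above compresses into a few lines; this sketch sums the same twelve-year chain and then derives the ‘1 family in N’ ratio. The names and the 12-year cap are as described in the text.

        def childless_per_1200(a_pct, years=12):
            # Couples still childless, out of 1200 (100 marriages a year for 12 years)
            q = 1 - a_pct / 100
            return 100 * sum(q ** n for n in range(1, years + 1))

        c = childless_per_1200(45)                    # ~122.1
        with_children = 1200 - round(c)               # 1078
        ratio = with_children / round(c) + 1          # ~9.8
        print(round(c, 2), round(ratio, 3))           # 1 family unit in ~9.8 is a childless couple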

    5.8.1.3.4 Unwed Singles

    The social pressure to marry has varied considerably through the ages, but the greater the dangers faced by the community, the greater this pressure is going to be. And the fitter and healthier you are, the more this pressure is going to be amplified.

    This is inescapable logic – the first duty of any given generation in a growing society is to replace the population who have passed away, and it takes a long time to turn children into adults.

    You could calculate the average lifespan, deduct the age of social maturity, and state that society frowns heavily on unwed singles above that age, and that with every year that passes as the individual approaches that age, the greater the social pressure becomes – and that would be a valid approach.

    The problem is that the average lifespan is complicated by those high rates of childhood death, and trying to extract that factor becomes really complicated and messy. And then you throw in curveballs like Elves and Dwarves, with their radically different lifespans, and the whole thing ends up in a tangled mess.

    So, I either have to pull a mathematical rabbit out of my hat, or I do the sensible thing and get the GM to pick a social practice and do my best to make it an informed choice.

    While a purely mathematical approach is possible, the more that I looked at the question, the more difficult it became to factor every variable into the equation.

    Want the bare bones? Okay, here goes.

    For a given population, P, there are B marriages a year, removing B x 2 unwed individuals from the population. We can already extract the count of those who are ineligible for marriage due to age, because they are all designated as children.

    We can subtract the quantity of childless couples who are already wed in a similar fashion to the calculations of the previous subsection.

    The end result is the number of unwed singles of marriageable age who have not married. Setting P at a fixed value – say 100 people – we can then quickly determine the number of unmarried singles.

    What ultimately killed this approach was that it was – in the final analysis – using a GM estimate of B as a surrogate for getting the GM to estimate the % of singles in their community – and doing so in a manner that was less conducive to an informed choice, and requiring a lot of calculations to end up with the number that they could have directly estimated in the first place.

    Nope. Not gonna work in any practical sense.

    So, instead, let’s talk about the life of the social scene – singles culture. There is still going to be all that social pressure to marry and contribute to the population, especially if you are an even half-successful adventurer, because that makes you one of the healthiest, wealthiest, and most prosperous members of the community.

    It can be argued that instead of using the average lifespan (with all its attendant problems) and deducting the age of maturity (i.e. the age at which a child becomes an adult) to determine at what age a couple have to have children in order to keep the population at least stable (you need two surviving children for that, since there are two adults involved, so take the child mortality rate into consideration by dividing those 2 by the survival rate and rounding up), you should instead use the age at which the mother’s mortality in childbirth begins to climb, and work back from that age. In modern times, that’s generally somewhere in the thirties, maybe up to 40. That doesn’t mean that older women can’t have children, just that under these circumstances, the risks of dying before you have enough offspring are considered too high by the general culture.

    But what does that really get you? There’s always going to be some age at which the pressure to wed starts to grow. Shifting it this way or that by a couple of years won’t change much.

    Looking at it from the reverse angle – how much single life will society tolerate – can be far more useful.

    I would suggest a base value of a decade. Ten years to be an adventurer and live life on the edge.

    In high-danger societies, especially with a high mortality rate, that might come back 2 or 3 years; at its most extreme, 5. That’s all the time you have to focus on becoming a professional who is able to support a family, or at least to setting your feet firmly on that path.

    In low-danger societies, especially those with a lower mortality rate, it might get pushed out a few years, maybe even another 5. That’s enough time that you can sow some wild oats and still settle down into someone respectable within the community.

    How long is the typical apprenticeship? In medieval times? In your fantasy game-world? From the real world, I could bandy about numbers like 4 years, or 5 years, or 5 years and 5 more learning on the job, or repaying debts to the master that trained you. And you end up with the same basic range – 5-15 years.

    What is the age of maturity in your world? Again, I could throw numbers around – 18 or 21 seem to be the most common in modern society, but 16 (even 15) has its place in the discussion – that’s how old you had to be back when I was younger before you could leave school and pursue a trade, i.e. becoming an apprentice. But I have played in a number of games where apprenticeships started at eight, or twelve, and lasted a decade – and THEN you got to start repaying your mentor for the investment that he’s made in you. With interest.

    Does there come a point where people are deemed anti-social because they have not married, and find their prospects of attracting a husband or wife diminishing as a result? Don’t say it doesn’t happen, because there is plenty of real-life evidence that it’s there as a social undercurrent – one that shifts, and sometimes intensifies or weakens, without real understanding of the factors that drive the phenomenon. But forget the real world and think about the game-world.

    How optimistic / positive is the society? How grim and gritty?

    Think about all these questions, because they all provide context to the basic question: What percentage of the population are unwed with no (official) children?

    Here’s how I would proceed: Pick a base percentage. For every factor you’ve identified that gives greater scope for personal liberty, add 2%. For every factor that demands the sacrifice of some of that liberty, from society’s point of view, subtract 2%. In any given society, there are likely to be a blend of factors, some pushing the percentage up, and some down – but in more extreme circumstances, they might all factor up or down. If you identify a factor as especially weak, only adjust by 1%; if you judge a factor as especially strong, adjust by 3 or even 4%.

    In the end, you will have a number.

    Let me close out this section with some advice on setting that base percentage.

    There are two competing and mutually-exclusive trains of thought when it comes to these base values. Here’s one:

    ▪ In positive societies, low child mortality means fewer young widows/widowers. The society is more stable, allowing for strong family formation and early marriage. Base rate is low.

    ▪ In moderate societies, dangers still disrupt family units, leading to a moderate rate of single, adult households. Base rate is moderate.

    ▪ In dangerous societies, high death rates mean many broken families, orphans, and single parents. The number of adult individuals living outside a stable family unit is maximized. Base rate is high.

    Here’s the alternative perspective:

    ▪ Positive societies produce less social pressure and greater levels of personal freedom, reducing the rate of marriage and increasing the capacity for unwed singles. Base rate is high.

    ▪ Moderate societies have a positive social pressure toward marriage at a younger adult age, and less capacity for personal liberty. Base rate is moderate.

    ▪ Societies that swarm with danger have a higher death rate, and there would be more social pressure to marry very young to create population stability. The alternative leads to social collapse and dead civilizations. Base rate is low.

    What’s the attitude in your game world? They are all reasonable points of view.

    In a high-fantasy / positive social setting, I would start with a base percentage of 22%. Most factors will tend to be positive, so you might end up with a final value of 32% – but there can be strains beneath the surface, which could lead to a result of 12% in extreme cases.

    In a mid-range, fairly typical society, I would employ a base of 27%. If there are lots of factors contributing to a high singles rate, this might get as high as 37%, and if there are lots of negatives, it might come down to 17% – but for the most part, it will be somewhere close to the middle.

    In an especially grim and dark world, I would employ a base of 33%, in the expectation that most factors will be negative, and lead to totals more in the 23-28% range. But if social norms have begun to break down, social institutions like marriage can fall by the wayside, and you can end up with an unsustainable total of 40-something percent.

    Anything outside 20-35% should be considered unsustainable over the long run; whatever negative impacts can apply will be rife.
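
    If it helps to see the bookkeeping, here is a trivial sketch of the adjustment procedure; the function signature and the factor counts in the example are invented purely for illustration.

        def unwed_singles_pct(base, liberty=0, sacrifice=0, weak=0, strong=0):
            # +2% per factor favoring personal liberty, -2% per factor demanding its
            # sacrifice; 'weak' and 'strong' carry any 1% / 3-4% adjustments directly.
            return base + 2 * liberty - 2 * sacrifice + weak + strong

        # A fairly typical society (base 27%) with three liberty factors and one strong negative
        print(unwed_singles_pct(27, liberty=3, strong=-3))   # 30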

    5.8.1.3.5 Population Breakdown

    That’s the final piece of the puzzle – with that information, you can assess the four types of ‘typical families’, plus childless couples and unwed singles, and their relative frequency:

        # Children with no parents,
        # Children with mothers but no fathers,
        # Children with fathers but no mothers, and
        # Children with two parents.
        # Childless Couples
        # Unwed Singles

    Get the total size of each of these family units / households* in number of individuals, multiply that size by the frequency of occurrence, add up all the results, and convert them to a percentage and you have a total population breakdown. Average the first five and you have the average family size in this particular region and all similar ones.

    Multiply each frequency of occurrence by the village population total (rounding as you see fit), and you get the constituents of that village.

    * I have never liked the use of the term ‘households’ in a demographic context, even though that seems to be the most commonly preferred term these days. I’ve lived in a number of shared accommodations as a single over the years, and that experience muddies what’s intended to be a clearer understanding of the results. If you have 50 or 100 singles living in a youth hostel, are they one household or 50-100? Families – nuclear or non-nuclear – for me, at least, is the clearer, more meaningful, term.
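
    Here is a sketch of the aggregation step. Every frequency and size below is a made-up placeholder, there purely to show the arithmetic, not a recommended breakdown.

        # Hypothetical frequencies (per 100 family units) and average sizes per type
        households = {"children, no parents":   (3, 3),
                      "children, mother only":  (8, 5),
                      "children, father only":  (5, 5),
                      "children, both parents": (54, 6),
                      "childless couples":      (10, 2),
                      "unwed singles":          (20, 1)}

        total_people = sum(freq * size for freq, size in households.values())
        for name, (freq, size) in households.items():
            print(f"{name}: {100 * freq * size / total_people:.1f}% of the population")

        family_types = list(households.values())[:5]   # everything except unwed singles
        avg_family = sum(f * s for f, s in family_types) / sum(f for f, _ in family_types)
        print(f"average family size: {avg_family:.2f}")

        # For a specific village, scale each frequency by the village's total number of family units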

    5.8.1.3.6 The Economics Of The Demographics

    In modern times, it’s not unusual for two adults and even multiple children all to have different occupations for different businesses all at the same time. Some kids start as paper boys and girls at a very young age. Even five-year-olds with lemonade stands count in this context.

    Go back about 100 years and that all changes. There is typically only one breadwinner – with exceptions that I’ll get to in a moment – and while some of them will have their own business (be it retail or in a service industry), most will be working for someone else.

    There will be a percentage who have no fixed employment and operate as day labor.

    Going into Victorian times, we have the workhouses and poorhouses, where brutal labor practices earn enough for survival but little more. While some were profitable for the owners, most earned less than they cost, and relied on charitable ‘sponsorship’ from other public institutions – sometimes governments, more often religious congregations. These are the exceptions that I mentioned. This is especially true where the father has deserted the family or died (often in war) leaving the mother to raise the children but unable to do so because of the gender biases built into the societies of the time.

    Go back still further, and it was a matter of public shame for a woman to work – with but a few exceptions such as midwifery. Nevertheless, they often earned supplemental income for the families with craft skills such as sewing, knitting, and needlework.

    The concept that the male was the breadwinner only gets stronger as you pass backwards through history.

    Fantasy games are usually not like that. They see the world from the modern perspective and force the historical reality to conform to that perspective. In particular, gender bias is frequently and firmly excluded from fantasy societies.

    The core reasoning is that characters and players can be of either gender (or any of the supplementary gender identifications) and the makers of the games don’t wish to exclude potential markets with discomforting historical reality.

    There are a few GMs out there who intentionally try to find an ‘equal but distinct’ role for females and others within their fantasy societies; it’s difficult, but it can be done – and it usually happens by excluding common males from segments of the economy within the society. If there are occupations that are only open to women, and occupations of equal merit (NOT greater merit) that are only open to men, you construct a bilateral society in which two distinct halves come together to form a whole.

    But it would still be unusual for a single household to have multiple significant breadwinners; you had one principal earner and zero or more supplemental incomes ‘on the side’.

    Businesses were family operations in which the whole family were expected to contribute in some way, subject to needs and ability.

    And that’s the fundamental economic ‘brick’ of a community – one income per family, whether that income derives as profits from a business or from labor in someone else’s business.

    You can use this as a touchstone, a window into understanding the societies of history, all the way back into classical times – who earned the money and how? In early times, it might be that you need to equate coin-based wealth with an equivalent value in goods, but once you start thinking of farm produce or refined ore as money, not as goods, the economic similarities quickly reveal themselves.

    So that is also the foundation of economics in this system. One family, one income (plus possible supplements). In fact, there were periods in relatively recent history in which the supplementary income itself was justification for marriage and children.

    In modern times, we evaluate based on the reduction of expenses; this is because most of our utilities don’t rise in usage as fast as the number of people using them (which goes back to the muddying concept of ‘households’; if two people are sharing the costs, both have more economic leftover to spend because the costs per person have gone down; if they are NOT sharing expenses, each providing fully for themselves, then they are two ‘households’, not one. It also helps to think of rent as a ‘utility’ within this context).

    But that’s a very modern perspective, and one that only works with the modern concept of ‘utilities’ – electricity, gas, and so on. Go back before that, into the pre-industrial ages, and the perspective changes from one of diminishing liabilities into one of growth of potential advantages. And having daughters who could supplement the household income by working as maids or providing craft services gave a household an economic advantage.

    5.8.1.3.7 An Economic Village Model

        8 a^2 = b^2 – c^2.

    Looks simple, doesn’t it? In fact, it is oversimplified – the reality would be

        a^d = (b^e – c^f ) / g,

    but that’s beyond my ability to model, and too fiddly for game use.

    a = the village’s profitability. Some part of this may show up as public amenities; most of it will end up in the pockets of the broader social administration, in whatever form that takes.

    b = the village’s productivity, which can be simplified to the number of economic producers in the village. You could refine the model by contemplating unemployment rates, but the existence of day laborers whose average income automatically takes into account days when there’s no work to be found, means that we don’t have to.

    c = the village’s internal demand for services and products. While usually less than production, it doesn’t have to be so. But it’s usually close to b in value.

    To demonstrate the model, let’s throw out figures of 60 and 58 for b and c.

        8 a^2 = 60^2 – 58^2 = 3600 – 3364 = 236.
        a = (236 / 8)^0.5 = 29.5^0.5 = 5.43

    The village grows. b rises to 62. c rises to 59.

        8 a^2 = 62^2 – 59^2 = 3844 – 3481 = 363.
        a = (363 / 8)^0.5 = 45.375^0.5 = 6.736.

    It has risen – but not by very much.

    Things become clearer if you can define c as a percentage of b:

        a^2 = b^2 – (D x b^2) / 100
        100 a^2 = 100 b^2 – D x b^2 = b^2 x (100-D)

    If 98% of the village’s production goes to maintaining and supporting the village, then only 2% is left for economic growth. If the village adds more incomes, demand rises by the normal proportion as well – so economic growth rises, but quite slowly. In the above example calculations, 59/62 = 95.16% going to support the village – and 95% is about as low as it’s ever going to realistically go. In exceptionally productive years, it might be as low as 66.7%, but most years it’s going to be much higher than that.
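
    For anyone who wants to poke at the model directly, here's a minimal Python sketch of it; the function and variable names are illustrative only, not part of the system's terminology:

        def village_profit(producers, demand):
            """Solve 8 a^2 = b^2 - c^2 for a, the village's profitability."""
            surplus = producers ** 2 - demand ** 2
            if surplus <= 0:
                return 0.0  # the village consumes everything it produces (or more)
            return (surplus / 8) ** 0.5

        print(village_profit(60, 58))  # ~5.43, matching the first worked example
        print(village_profit(62, 59))  # ~6.74, after the village grows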

    Side-bar: 5.8.1.3.6.1 Good Times

    You can actually model how often an exceptional year comes along, by making a couple of assumptions. First, if 66.7 is as good as they get, and 95 is as bad as an exceptionally good year gets, then the average ‘exceptional year’ will be 80.85%.

    Second, if 95% is as good as a typical year gets, and 102% is as bad as a typical year gets, then the average ‘normal’ year will be 98.5%.

    Third, if the long term average is 95.16%, then what we need is the number of typical years needed to raise the overall average (including one exceptional year) to 95.16%.

        95.16 x (n+1) = 80.85 + (n x 98.5)
        95.16 x n + 95.16 = 80.85 + 98.5 x n
        (98.5 – 95.16) x n = 95.16 – 80.85
        3.34 n = 14.31
        n = 14.31 / 3.34 = 4.284.

        4-and-a-quarter normal years to every 1 good year.

    You can go further, with this as a basis, and make the good years better or worse so that you end up with a whole number of years.

        95.16 x (5 +1) = g + 5 x 98.5
        g = 95.16 x 6 – 98.5 x 5
        g = 570.96 – 492.5 = 78.46.

    That’s a six-year cycle with one good year averaging 78.46% of productivity sustaining the village and five typical years in which 98.5% of productivity is needed for the purpose.
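
    If you'd rather not push the algebra around by hand, here's a small Python sketch of the two calculations above; the figures are the ones from the text, and the variable names are illustrative only:

        exceptional = (66.7 + 95) / 2  # average 'exceptional' year: 80.85%
        normal = (95 + 102) / 2        # average 'normal' year: 98.5%
        long_term = 95.16              # 59/62 from the earlier example, as a percentage

        # normal years needed per exceptional year to hit the long-term average:
        n = (long_term - exceptional) / (normal - long_term)
        print(round(n, 3))             # ~4.284

        # or fix the cycle at 6 years (1 good + 5 normal) and solve for the good year:
        g = long_term * 6 - normal * 5
        print(round(g, 2))             # 78.46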

    I grew up on the land, and I can tell you that an industry is thriving if one year out of 10 is really good; an industry is marking time if one year out of 20 is good, and in trouble if one year in 25 or less is really profitable. One year in six is a boom.

    So to close out this sidebar, let’s look at what those numbers equate to in overall economic productivity for the rural population that depend on them:

        Boom: (1 x 78.46 + 5 x 98.5) / 6
            = (78.46 + 492.5) / 6
            = 570.96 / 6
            = 95.16%
            (we already knew this but it’s included for comparison)

        Thriving: (1 x 78.46 + 9 x 98.5) / 10
            = (78.46 + 886.5) / 10
            = 964.96 / 10
            = 96.496

        Stable, Marking Time: (1 x 78.46 + 19 x 98.5) / 20
            = (78.46 + 1871.5) / 20
            = 1949.96 / 20
            = 97.498

        In trouble / in economic decline: (1 x 78.46 + 24 x 98.5) / 25
            = (78.46 + 2364) / 25
            = 2442.46 / 25
            = 97.6984

    Look at the differences, and how thin the lines are between growth and stagnation.

        Stable to In Decline: 0.2004% change.
        Stable to Thriving: 1.002% change.
        Thriving to Booming: 1.336% change.
        Booming to In Decline: 2.5384% change.

    The whole boom-bust cycle – and it can be a cyclic phenomenon – is contained within 2.54% difference in economic activity.

    An aside within an aside shows why:

        Boom: 95.16% = 0.9516;
        0.9516 ^ 6 = 0.74255;
        so 25.74% productivity goes into growth.

        Thriving: 96.496% = 0.96496;
        0.96496 ^ 6 = 0.8073;
        so 19.27% productivity goes into growth over the same six-year period.

        Stable: 97.498% = 0.97498;
        0.97498 ^ 6 = 0.859;
        14.1% of productivity goes into growth over the same six-year period.

        Declining: 97.6984% = 0.976984;
        0.976984 ^ 6 = 0.8696;
        13.04% of productivity goes into growth.
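
    The same arithmetic can be done in one pass; this is a hedged Python sketch, with the labels and function name being illustrative only:

        good, normal = 78.46, 98.5

        def cycle_average(years):
            """Average % of production consumed when one year per cycle is a good year."""
            return (good + (years - 1) * normal) / years

        for label, years in [("Boom", 6), ("Thriving", 10), ("Stable", 20), ("Declining", 25)]:
            avg = cycle_average(years)
            retained = (avg / 100) ** 6  # compounded over a six-year span
            print(label, round(avg, 4), "% consumed,", round(100 * (1 - retained), 2), "% into growth")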

    Every homeowner sweats a 0.25% change in interest rates because they compound, snowballing into huge differences. This is exactly the same thing.

    5.8.1.4 The Generic Village

    The generic village is perpetually dancing on a knife-edge, but the margins are so small that it’s trivially easy to overcome a bad year with a better one. Even a boom year doesn’t incite a lot of growth, but a lot of factors, pulled together over a very long time, can.

    Some villages won’t manage to escape the slippery slope long enough and will decline into Hamlets, but find stability at this smaller size. Given time, disused buildings will be torn down and ‘robbed’ of any useful construction material because that’s close to free, and that alone can make enough of a difference economically. With the land reclaimed, after a while you could never tell that it once was a village.

    Some won’t be able to arrest their decline – whatever led to their establishment in the first place either isn’t profitable enough, or too much of the profits are being taken in fees, tithes, greed, and taxes. They decline into Thorpes.

    In some cases, communities exist for a single purpose; they never grew large enough to even have permanent structures. They are strictly temporary in nature (though one may persist for dozens of years or more); they are forever categorized as Mining or Logging Camps.

    Other villages have more factors pushing them to growth, and once they reach a certain size, they can organize and be recognized as a town. And some towns become cities, and some cities become a great metropolis.

    With each change of scale, the services on offer to the townsfolk, and the services on offer to the traveler passing through, increase.

    The fewer such services there are, the more general and generic they have to become, just to earn enough to stay in operation.

    The general view of a generic village is that most services exist purely for the benefit of the locals, but a small number of operations will offer services aimed at a temporary target market, the traveler. These services are often more profitable but less reliable in terms of income, more vulnerable to changes in markets. They don’t tend to be set up by existing residents; instead, they are founded by a traveler who settles down and joins a community because they see an economic opportunity.

    That means that the number of such services on offer is very strongly tied to both the growth of the village, and to the overall economic situation of the Kingdom as a whole and to the local Region of which this village is a part.

    Here’s another way to look at it: The reason so much of the village’s economic potential goes into maintaining the village is because of all those tithes and taxes and so on. Some of those will be based on the land in and around the village; some on the productivity of that land; and some of it on the size and economic activity of the village. The rest provides what the village needs to sustain its population and keep everything going. There’s not a lot left – but any addition to the bottom line that isn’t eroded away by those demands makes the village and the region more profitable, creating more opportunities for sustained growth. Again, there is a snowball effect.

    Some villages – and this is a social thing – don’t want the headaches and complications of growth; they like things just the way they are. They will have local rules and regulations designed to limit growth by making growth-producing business opportunities less attractive or compelling. Others desperately want growth, and will try to make themselves more attractive to operations that encourage it.

    That divides villages into two main categories and a number of subcategories.

    Main Category: Villages that encourage growth
         Subcategory: Villages that are growing
         Subcategory: Villages that are not growing
         Subcategory: Villages that are being left behind, and declining.
    Ratios: 40:40:20, respectively.

    Main Category: Villages that are discouraging growth despite the risk of decline
         Subcategory: Villages that are growing and can only slow that growth
         Subcategory: Villages that have achieved stability
         Subcategory: Villages that have or are declining.
    Ratios: 20:40:40, respectively.

    And that will about do it for this post. It will continue in part 5b!


Trade In Fantasy Ch. 5: Land Transport, Pt 5 (incomplete)


This entry is part 18 of 20 in the series Trade In Fantasy

We’ve used the economy to distribute fortifications, and used those to locate inns. Now let’s wrap some communities around them.

I have a series of images of communities of different sizes which will be sprinkled throughout this article. This is the first of these – something so sparsely-settled that it barely even qualifies as a community. It’s more a collection of close rural neighbors! Image by Jörg Peter from Pixabay

Table Of Contents

In parts 1-3 of this chapter:

Chapter 5: Land Transport

    5.1 Distance, Time, & Detriments

      5.1.1 Time Vs Distance
      5.1.2 Defining a terrain / region / locality

           5.1.2.1 Road Quality: An introductory mention

    5.2 Terrain

      5.2.0 Terrain Factor
      5.2.1 % Distance
      5.2.2 Good Roads
      5.2.3 Bad Roads
      5.2.4 Even Ground
      5.2.5 Broken Ground
      5.2.6 Marshlands
      5.2.7 Swamplands
      5.2.8 Woodlands
      5.2.9 Forests
      5.2.10 Rolling Hills
      5.2.11 Mountain Slopes
      5.2.12 Mountain Passes
      5.2.13 Deserts
      5.2.14 Exotic Terrain
      5.2.15 Road Quality
           5.2.15.1 The four-tier system
           5.2.15.2 The five-tier system
           5.2.15.3 The eight-tier system
           5.2.15.4 The ten-tier system

      5.2.16 Rivers & Other Waterways
           5.2.16.1 Fords
           5.2.16.2 Bridges
           5.2.16.3 Tolls
           5.2.16.4 Ferries
           5.2.16.5 Portage & Other Solutions

    5.3 Weather

      5.3.1 Seasonal Trend
      5.3.2 Broad Variations
      5.3.3 Narrow Variations
           5.3.3.1 Every 2nd month?
           5.3.3.2 Transition Months
           5.3.3.3 Adding a little randomness: 1/2 length variations
           5.3.3.4 Adding a little randomness: 1 1/2-, 2-, and 2 1/2-length variations

      5.3.4 Maintaining The Average
           5.3.4.1 Correction Timing
                5.3.4.1.1 Off-cycle corrections
                5.3.4.1.2 Oppositional Corrections
                5.3.4.1.3 Adjacent corrections
                5.3.4.1.4 Hangover corrections

           5.3.4.2 Correction Duration
                5.3.4.2.1 Distributed corrections: 12 months
                     5.3.4.2.1.1 Even Distribution
                     5.3.4.2.1.2 Random Distribution
                     5.3.4.2.1.3 Weighted Random Distribution

                5.3.4.2.2 Distributed corrections: 6 months
                5.3.4.2.3 Distributed corrections: 3 months
                5.3.4.2.4 Slow Corrections (2 months)
                5.3.4.2.5 Normal corrections: 1 month
                5.3.4.2.6 Fast corrections: 1/2 month (2 weeks)
                5.3.4.2.7 Catastrophic corrections 1/4 month (1 week)

           5.3.4.3 Maintaining Synchronization
           5.3.4.4 Multiple Correction Layers

    5.4 Losses & Hazards
    5.5 Expenses – as Terrain Factors
    5.6 Expenses – as aspects of Politics
    5.7 Inns, Castles, & Strongholds

      5.7.1 Strongholds
           5.7.1.1 Overall Military Strength
                5.7.1.1.1 Naval Strength
                5.7.1.1.2 Exotic Strength
                5.7.1.1.3 Adjusted Military Strength

           5.7.1.2 Mobility
                5.7.1.2.1 Roads
                5.7.1.2.2 Cross-country

           5.7.1.3 Kingdom Size and Capital Location
           5.7.1.4 Borders
           5.7.1.5 Terrain
           5.7.1.6 Internal Threat
           5.7.1.7 Priority
           5.7.1.8 Threat Level
           5.7.1.9 Zones
                5.7.1.9.1 Abstract Zones
                5.7.1.9.2 Applied Considerations
                     5.7.1.9.2.1 Sidebar: Why do it this way?

                5.7.1.9.3 Preliminary Zones, Zomania

           5.7.1.10 Kingdom Wealth
                5.7.1.10.1 Legacy Defenses
                5.7.1.10.2 Military Training
                5.7.1.10.3 Disaster Relief
                5.7.1.10.4 Religion
                5.7.1.10.5 Magic
                5.7.1.10.6 Tools
                5.7.1.10.7 Entertainment
                5.7.1.10.8 Resource Development
                5.7.1.10.9 A Hypothetical Disaster
                5.7.1.10.10 Housing & Funding Boosts
                5.7.1.10.11 Food
                5.7.1.10.12 Diplomacy
                5.7.1.10.13 Trade
                5.7.1.10.14 Education
                5.7.1.10.15 Transport (Road Maintenance)
                5.7.1.10.16 The Impact On Population

           5.7.1.11 Military Need: Theoretical Scenario 2

In the last part of this series:

           5.7.1.12 Stronghold Density
           5.7.1.13 Zone Size
           5.7.1.14 Base Area Protected per Stronghold
                5.7.1.14.1 The Distance between defensive centers
                5.7.1.14.2 The relationship between defensive patterns
                5.7.1.14.3 The shape of the defensive pattern
                5.7.1.14.4 What is 100% coverage, anyway?
                5.7.1.14.5 Calculating Area Protected
                     5.7.1.14.5.1 Three-Satellite
                     5.7.1.14.5.2 Four-Satellite

                5.7.1.14.6 Configuration Choice(s)
                5.7.1.14.7 The Impact On Roads
                5.7.1.14.8 The Impact On Populations

           5.7.1.15 Economic Adjustments
           5.7.1.16 Border Adjustments
           5.7.1.17 Historical vs Contemporary Structures
           5.7.1.18 Zone and Kingdom Totals
           5.7.1.19 Reserves

      5.7.2 Castles, Fortresses, and the like
           5.7.2.1 Distance to a satellite fortification using 2d6
           5.7.2.2 Distance to a neighboring hub
           5.7.2.3 Combining the two: the nearest neighbor

      5.7.3 Inns

In this part:

    5.8 Villages, Towns, & Cities

      5.8.1 Villages
           5.8.1.1 Village Frequency
           5.8.1.2 Village Initial Size
                Optional
           5.8.1.3 Village Demographics

                5.8.1.3.1 Maternal Survival
                5.8.1.3.2 Paternal Survival
                5.8.1.3.3 Childless Couples
                5.8.1.3.4 Unwed Singles
                5.8.1.3.5 Population Breakdown
                5.8.1.3.6 The Economics Of The Demographics
                     Side-bar: 5.8.1.3.6.1 Good Times

           5.8.1.4 The Generic Village
           5.8.1.5 Blended Models
           5.8.1.6 Zomania – An Example
                5.8.1.6.1 Zone Selection
                5.8.1.6.2 Sidebar: Elevation Classification
                5.8.1.6.3 Area Adjustments – from 5.7.1.13
                5.8.1.6.4 Defensive Pattern – from 5.7.1.14
                5.8.1.6.5 Sidebar: The Size Of Zomania, revisited
                5.8.1.6.6 Sidebar: Changes Of Defensive Structure
                5.8.1.6.7 Inns In Zone 7 – from 5.7.3

      5.8.2 Towns
           5.8.2.1 Town Frequency
           5.8.2.2 Town Initial Size
           5.8.2.3 The Generic Town

      5.8.3 Cities
           5.8.3.1 Small City Frequency
           5.8.3.2 Small City Size
           5.8.3.3 Size Of The Capital
           5.8.3.4 Large City Frequency
           5.8.3.5 Large City Size

      5.8.4 Economic Factors, Simplified
           5.8.4.1 Trade Routes & Connections
           5.8.4.2 Local Industry
           5.8.4.3 Military Significance
           5.8.4.4 Scenery & History
           5.8.4.5 Other Economic Modifiers
           5.8.4.6 Up-scaled Villages
           5.8.4.7 Up-scaled Towns
           5.8.4.8 Up-scaled Small Cities
           5.8.4.9 Upscaling The Capital & Large Cities

      5.8.5 Overall Population
           5.8.5.1 Realm Size
           5.8.5.2 % Wilderness
           5.8.5.3 % Fertile
           5.8.5.4 % Good
           5.8.5.5 % Mediocre
           5.8.5.6 % Poor
           5.8.5.7 % Dire
           5.8.5.8 % Wasteland
           5.8.5.9 Net Agricultural Capacity

           5.8.5.10 Misadventures, Disasters, and Calamities
           5.8.5.11 Birth Rate per year
           5.8.5.12 Mortality
                5.8.5.12.1 Infant Mortality
                5.8.5.12.2 Child Mortality
                5.8.5.12.3 Teen Mortality
                5.8.5.12.4 Youth Mortality
                5.8.5.12.5 Adult Mortality
                5.8.5.12.6 Senior Mortality
                5.8.5.12.7 Elderly Mortality
                5.8.5.12.8 Venerable Mortality
                5.8.5.12.9 Net Mortality

           5.8.5.13 Net Population

And still to come in this chapter:

      5.8.6 Population Distribution
           5.8.6.1 The Roaming Population
           5.8.6.2 The Capital
           5.8.6.3 The Cities
           5.8.6.4 Number of Towns
           5.8.6.5 Number of Villages
           5.8.6.6 Hypothetical Population
           5.8.6.7 The Realm Factor
           5.8.6.8 True Village Size
           5.8.6.9 True Town Size
           5.8.6.10 Adjusted City Size
           5.8.6.11 Adjusted Capital Size

      5.8.7 Population Centers On The Fly
           5.8.7.1 Total Population Centers
           5.8.7.2 The Distribution Table
           5.8.7.3 The Cities
           5.8.7.4 Village or Town?
           5.8.7.5 Size Bias
                5.8.7.5.1 Economic Bias
                5.8.7.5.2 Fertility Bias
                5.8.7.5.3 Military Personnel
                5.8.7.5.4 The Net Bias

           5.8.7.6 The Die Roll
           5.8.7.7 Applying Net Bias
           5.8.7.8 Applying The Realm Factor
           5.8.7.9 The True Size
                5.8.7.9.1 Justifying The Size
                5.8.7.9.2 The Implications

    5.9 Compiled Trade Routes

      5.9.1 National Legs
      5.9.2 Sub-Legs
      5.9.3 Compounding Terrain Factors
      5.9.4 Compounding Weather Factors
      5.9.5 Compounding Expenses
      5.9.6 Compounding Losses
      5.9.7 Compounding Profits
      5.9.8 Other Expenses
      5.9.9 Net Profit

    5.10 Time
    5.11 Exotic Transport

In future chapters:
  1. Waterborne Transport
  2. Spoilage
  3. Key Personnel
  4. The Journey
  5. Arrival
  6. Journey’s End
  7. Adventures En Route
5.8 Villages, Towns, & Cities

Part 5 of Chapter 5 is all about Population and its distribution. Most systems that I’ve seen for this purpose start with an overall population and work backwards, and often end up with unreasonable results, like a village every mile-and-a-half.

My system works the other way – from a population-distribution model to a population density to a local population. Many local populations give a Zone population, and the total of the Zone populations gives the Kingdom population overall.

5.8.0 Concepts & Principles

Select a model based on the desired ‘look and feel’ of the society within the Kingdom / Zone. The model describes the general distribution of population within the Kingdom / Zone, assuming a fixed unit of area (10,000 km^2), but most zones will be smaller.

The model plus a random roll sets initial village size. Village Frequency is determined by the placement of Inns & Administrative / Military structures, already defined. Together these define the total population density of an entire Kingdom according to the model.

This can then be applied to the size of the actual Kingdom to determine the total population of the Kingdom.

All of the above is on today’s agenda. In addition, there will be contributing factors determined that will be applied going forward.

Each village occupies a footprint termed a Locus.

The location within a locus actually occupied by the village or town is generally defined by the content of that locus. The population center will always be in the location within the locus that is most advantageous to growth.

A series of factors increases the size of the village within the locus, sometimes positively and sometimes negatively. Each factor yields a fractional value called a Scale Value. Applicable Scale Values determine the village location because many of them are specific to this place or that, enabling the location to be quickly refined within the locus.

Where there are multiple possible locations of roughly equal value, a community will split into two half-sized populations which will begin growing toward each other.

These Scale Values are totaled. The total Scale Value is applied as exponential growth to the base village size to determine the nominal size of the community.

If this is sufficient to trigger growth into a new size category, it is further adjusted and the new base size is used with the adjusted value to redetermine the size. This process iterates (i.e. gets applied repeatedly) until the final size of the settlement is determined.

Some conditions restrict community size by passing on excess growth to neighboring communities; these are passed from one to another until reaching a community that is no longer restricted. That community is sometimes referred to as the “Gateway” to the region. Becoming a ‘gateway’ is also a growth factor!

This is all achieved by taking the excess part of the Scale Value and applying it as a modifier to the nearest Locus outside the restricted area, reducing the total scaling factor that applies to settlements within the restricted area. Not all the excess can be redirected; growth in restricted areas is slowed, not stopped.

Along the way, various side-issues will be raised and assessed, building up a population profile for the Locus, the administrative division, the Zone overall, and for the Kingdom as a whole. In particular, the political infrastructure of the Kingdom gets determined.

Finally, these various considerations will come together to provide a system whereby a GM can generate a village ‘on the fly’ whenever a group of characters (PCs most of the time) enter a locus or cross a border.

At least, that’s how it’s all supposed to work in theory! As always, if the reality doesn’t yield useful results, I’ll feel free to diverge from this road-map!

    5.8.0.1 Frequency, Size, and Services

    The section above does a good job of outlining the process, but I thought it worth taking a moment to explain the philosophy behind it and the reason for this particular approach.

      5.8.0.1.1 The traditional approach

      The fundamental concepts by which population levels are usually defined come down to two main ones and a boat-load of implications.

      The first primary factor is settlement frequency – how many miles or kilometers or day’s march apart they are. The first two options are the ones with which most readers will be familiar and they have the virtue – and penalty – of being absolute measurements. The third option is more abstract, but can also be more practical. It takes account of terrain, for example, and at first that might seem like a good thing – but then you realize that it takes it into account backwards: if the terrain is poor, travel over it will be slower – but a fixed ‘average time apart’ then means that the settlements will cluster more closely together, i.e. there will be less physical distance covered in the same amount of time because of the terrain. What you really want is the opposite – good terrain clustering communities together, bad terrain setting them further apart.

      The second primary factor is settlement size – how many families or dwellings make up a ‘typical community’ in the specific zone.

      It’s the implications that start to get complicated. Between them, these specify the level of economic and industrial capacity of the typical community, and thus, what services are likely to be available. But that then gets muddied somewhat by demand. Certain services are always going to be in demand and providing those services is an economic opportunity for a practitioner.

      And that then gets complicated by the logistics of travel – the ‘footprint’ serviced by a given provider will vary from one occupation to another. A good blacksmith may service several small communities (if they are close together), or just one, while a mill may have a much bigger ‘footprint’.

      Add to that the secondary impact of travel capabilities – if travel is easy, and the community is on a trade route, there will be more services geared toward supplying the needs of travelers; if not, the primary driving force will be the needs of the inhabitants.

      The more you look into it, the bigger the mess the whole thing becomes. And that’s why I have rejected this traditional approach, at least for the most part.

      5.8.0.1.2 The alternative approach

      Instead, each settlement starts off at a base size and separation. The ‘tail’ – the implications – then wag the dog. Every location has benefits and drawbacks – the benefits help the settlement grow, the drawbacks cause it to shrink in size. If the demand for a blacksmith is high enough, there will be a blacksmith – who gets added to the base population and causes further population growth. If there’s no local blacksmith, but there is one in the next town over, that makes that town grow at the expense of this community. Taking stock of every relevant factor, the size of the actual settlement is then adjusted.

      But there’s one more way of looking at this approach, and for me, it makes this the most compelling possible option – it develops village size to accommodate the needs of the plot! If you need there to be a sage, or a blacksmith, or a tavern with rooms for travelers in the next community, they are there – and the community grows, within the context of the terrain and other factors, to whatever size is needed to justify the presence of these services.

      And if you don’t have any specific plot needs, the defaults of terrain and frequency and traffic and trade dictate the size and the services that are available should the PCs decide they need them.

    5.8.0.2 Community Sizes: Base, and smaller

    The fundamental unit of community size in this system is the Village. It has a certain base population, and that population size supports the provision of a certain number of general services to the community. These are ‘General Services’ and they exist to meet the needs of the inhabitants. A base-sized village also supports a single “Specialist Service” – i.e. a service with a ‘footprint’ larger than just this community. If the distance between communities is large enough, it may add a second ‘Specialist Service”, causing the community to grow – but it’s still within the range of ‘normal’ for the base size.

    Various factors shrink communities. If a community shrinks too much, it enters a community scale lower down the size chart. While the real-world terminology is vague in application, in this ‘unified’ view, these are designated Hamlets, and they have a base size 1/8 that of the base community. Hamlets no longer offer any Specialist services, and support fewer ‘General Service’ providers. The model supports Ha-1, 2, and 3 (those terms will make more sense shortly).

    Communities smaller than a Hamlet are Thorpes. Officially, this is a variant of a Middle English word meaning hamlet or small village – but I’ve expropriated the term for usage to represent the smallest of settlements. Once again, we can have Th-1, 2, and 3, and the base size of a Thorpe is 1/8 that of a Hamlet.

    Except that we can go smaller!

    Smaller than a Thorpe is a mining or logging Camp. Actually, the biggest of these overlap with a Thorpe in size, but the typical-and-smaller range of camps starts where a Thorpe leaves off. Such camps exist to enable the residents to perform one function and one function only; they provide only the essentials necessary to achieve that. These are often (usually?) a satellite of a larger community somewhere nearby. Any single-purpose camp comes under this designation.

    Camps can be rated Ca-1, -2, -3, -4, or -5. The base size of a camp is 1/4 that of a Thorpe (but they also have a minimum population of 1).

    If you’re keeping track, that’s 1/4 of 1/8 of 1/8 of a village, or 1/256th. If your village base size is 256 people or smaller, then the ‘minimum 1’ rule can be said to be in effect.

    Technically, you could also describe a Caravan as a Camp – it just happens to be mobile, or semi-mobile.
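
    As a quick sanity check, here's the size ladder in Python – the ratios (1/8, 1/8, 1/4, and the minimum of 1) are from the text, while the function and dictionary names are illustrative only:

        def base_sizes(village_base):
            hamlet = village_base / 8
            thorpe = hamlet / 8
            camp = max(1, thorpe / 4)  # camps bottom out at a population of 1
            return {"Village": village_base, "Hamlet": hamlet, "Thorpe": thorpe, "Camp": camp}

        print(base_sizes(240))  # the English minimum; the 'minimum 1' rule kicks in for the Camp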

    5.8.0.3 Community Sizes: Larger than Base

    Going the other way, we find ourselves buried in adjectives, because there aren’t many terms on offer. Things get even more confusing when you discover that the definition of a city isn’t what we tend to associate with the term – and different countries have different definitions in terms of size.

    And, since most adjectives tend to be relative in meaning, and subject to interpretation, I’ve tried to eschew them in favor of suffixes.

    So, larger than a Village is a Village-2, larger than a Village-2 is a Village-3, and larger than a Village-3 is a Village-4.

    A Village-5 is the same size as a Town (leaving off the -1 suffix). The meaning of the term “Town” is also something that can vary widely from one culture to another. The term is used here to designate a community with a municipal authority beyond a singular mayor / burgomaster / whatever. In England, a Town is usually formally defined by a legal Charter issued by the Crown, giving it a specific identity outside of the control of the regional Nobility. In the US, it loosely refers to incorporated communities – i.e. a community that has issued its own Charter, which formally “Incorporates” the community.

    Australia and Canada distinguish communities based on population thresholds – but these can vary from state to state. Nevertheless, this is the mindset that this system adopts.

    The difference between a Town and a Village is that the town provides, by virtue of its Charter, services restricted to the Town Limits, collecting rates and revenues to fund these services; in a Village, there is no central authority to provide these services, and any that are provided are provided by the broader administrative unit – be it a state government or a Nobleman, paid for from the taxes and fees they are entitled to collect.

    ‘Town’ is followed by Town-2, Town-3, Town-4, and Town-5.

    Towns -6 to -10 follow, but a Town-6 is the same size as a City-1.

    A city is distinguished by having a metropolitan area beyond a simple town square, surrounded by residential districts or suburbs. Many of these will possess some singular identifying traits or characteristics (social or economic in nature), or will claim such an identity. Each suburb or district has its own independent retail or services providers. The number of suburbs or districts is roughly equivalent to the city-suffix squared, plus 1, not counting the metropolitan zone. So City-1 has 2 residential zones, City-2 has 5, City-3 has 10, and so on. These residential zones are all still administered by the central metropolitan zone.

    City-5 (with its 26 residential zones) is the same size as Metropolis-1. This is the point at which the central metropolitan area and surrounding suburbs are excerpted from the larger community to form a smaller City (usually City-1 or -2), while the remaining suburbs or districts collectively organize into a separate but contiguous City (usually City-2 or City-3 in size) with an authority independent of that of the central hub. Collectively, these form “Greater [name]”. For example, Greater Sydney consists of the City Of Sydney and 32 surrounding Cities, each of which contains and administers a number of smaller Suburbs. My residence is in the suburb of Panania, which is one of 41 suburbs within the City of Canterbury-Bankstown.

    You can work backwards from such numbers.

    Canterbury-Bankstown, with 41 suburbs, would have a suffix = sqr root (41-2) = sqr root (39) = 6.245. But this is the result of forced amalgamation between two different cities by the state government, a quite unpopular move at the time. Canterbury used to have 17 suburbs and be a City-3.87, while Bankstown had 10, and was a city-2.8. When they were merged, additional suburbs were also added from surrounding areas. Greater Sydney itself would rank as a City-25.5 if taken collectively – but it instead rates as a Metropolis-5.5 (32 cities, -2, take the square root). But Greater Sydney is a BIG city – 5,356,944 people – or more than five times the population of Imperial Rome at its height (1 Million, according to best estimates).
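
    In code, the forward and backward forms of that relationship look like this (a sketch only; as I read it, working backwards subtracts 2 because the total count includes the metropolitan zone itself, and the function names are illustrative):

        import math

        def residential_zones(city_suffix):
            """Suburbs / districts, not counting the metropolitan zone: suffix^2 + 1."""
            return city_suffix ** 2 + 1

        def suffix_from_zones(total_zones):
            """Working backwards, as in the Canterbury-Bankstown example: sqrt(zones - 2)."""
            return math.sqrt(total_zones - 2)

        print(residential_zones(3))             # City-3 -> 10 residential zones
        print(round(suffix_from_zones(41), 3))  # 41 suburbs -> a suffix of about 6.245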

    The justification given for the amalgamation was economy of scale, and for some councils who were struggling to provide services, that was fair enough – but some such mergers were refused by the State Government for political reasons, and others forced through against the wishes of residents even though the parent cities were financially sound. So the whole thing stank of corruption and political manipulation. The leader of the governing party saw his popularity plummet to trump-like figures as a result of this and a couple of other controversies, and was forced to resign so that his successor would stand a shadow of a chance at the next State Election and so that his unpopularity would not impact on the Federal Election due later that year. It was a successful move on the latter front (just barely) but the shadow wasn’t deep enough on the former, and there was a change of state government.

    Adding to the size of Sydney is the fact that it’s a State Capital – and our present National Capital only exists as a compromise between Sydney and Melbourne, neither of whom were willing to let the other be the political Big Dog.

    5.8.0.3 Demographic Research

    Although the models will abstract things greatly, and not adhere to historical reality if it’s inconvenient, reality has to be the underpinning of the Demographic Models that are available.

    You don’t have to dig very deep into the history of various townships in Arkansas to discover the effects, both economic and social, of gaining or losing County leadership; I can only project up to the effect of being named a State Capital, and then scale up again for a National Capital.

    But it is worth noting that in 33 out of 50 US States, the largest city in the state is Not the State Capital. I put this down to everyone else in the state not wanting to be dominated by that largest city, just as Melbourne would not accept Sydney as the capital of Australia as well as of the state of New South Wales.

    Before moving on from this discussion, some historical context is worth highlighting.

    According to this graph…

    Excerpted from “Mortality, migration and epidemiological change in English cities, 1600–1870” by Romola Davenport, University of Cambridge, CC BY 4.0, courtesy of Researchgate (image scaled by me)

    …in 1600, the population of England was 5 million, and about 10% – half a million – lived in an Urban setting. In about 1650, the general population peaked and only slow growth could be seen until about 1775. At that time, the urban population was about 25%, or 1.25 million – and half of them lived in London.

    This graph…

    Excerpted from “When Bioterrorism Was No Big Deal” by
    Patricia Beeson & Werner Troesken (both from the University of Pittsburgh), Copyright unstated, courtesy of Researchgate (left caption moved and image cropped and scaled by me).

    …is harder to read, but shows that the trend given in the first continues back another 50 years and then flattens – so in 1550 it would have been about 6% of 5 million (i.e. 300,000) and in 1500, it might only have been 5% (250,000). And almost all of them would have resided in London.

    (That paper, downloadable from the link “Researchgate”, has a bunch of others for comparison at the back – Western Europe, Scandinavia, Eastern Europe. Worth grabbing for reference if one of those resembles the Kingdom “tone” that you’re going for.)

    This graph…

    Historical_population_of_France.svg by Max Roser, CC BY-SA 3.0, via Wikimedia Commons

    …shows the historical population of France, which provides additional context.

    Below, I’ve isolated the part that matches the 1500-1950 range of the England Graphs:

    Extract From Historical_population_of_France.svg
    Creative Commons CC-BY-3.0 as above, Cropped and Enlarged by Mike

    In 1500, there were about 15 million in France, rising to 18 million by 1600. 1550 would therefore have been about 16.5 million.

    In 1500, it can be estimated that 5.6% of the French population lived in towns of 10,000 or more. In 1550, that was 6.3%; and in 1600, 8%, according to one source (and there aren’t many to pick from).

    In 1500, Paris had a population of about 150,000, or just 16.1% of the urban population.

    In 1550, that was somewhere between 300 and 350,000 people, and 25.2-29.4% of the urban population.

    In 1600, we’re talking between 300 and 400,000 people, and 18.8-25% of the urban population – so other cities grew faster than Paris in the 1550-1600 period.

    Which other cities? The only one with more than 60,000 on all three dates was Paris. In 1600, Lyon or Rouen may have hit that number. We need to go to one-sixth the size of Paris or less for the next biggest population center, Toulouse, but it might also be in the vicinity of Lyon and Rouen. Estimates of the population in those cities at the time vary from about 40-60,000 in 1500, and 70-80,000 in 1600. But when you compare that with England, you see a stark difference.

    Here are some estimated population densities and population levels from the year 1300:

    ▪  France – 36 to 40 people per sqr km – 18 to 20 million total population.

    ▪  England and Wales – 33 to 40 people per sqr km – 5-6 million total population.

    ▪  Germany (then core of the Holy Roman Empire) – 24 to 28 people per square km, 12 to 14 million total population.

    ▪  Scotland – 6-13 people per sqr km – 0.5 to 1 million total population.

    Some other relevant Demographic research:

    France

    ▪  Largest Regional Cities (Excluding Capital): Milan, Venice, Florence (in broader Western Europe) were over 100,000. In France, cities like Rouen or Bordeaux may have reached 25,000–40,000.

    ▪  Major Towns (5,000–10,000+): Numerous. The median major town size in this range may have been around 12,000–15,000.

    ▪  Minor Towns/Large Boroughs (1,000–5,000): The backbone of the French urban network; perhaps a few hundred such towns scattered across the kingdom.

    ▪  Very Small Boroughs (Below 1,000): Most settlements below 1,000 people were agricultural villages.

    England (and Wales)

    ▪  Largest Regional Cities (Excluding Capital): York and Bristol were the undisputed next-largest, likely reaching 15,000–25,000 at their peak before the Black Death.

    ▪  Major Towns (5,000–10,000+): Only a handful of towns (e.g., Norwich, Coventry, King’s Lynn) were in this tier, perhaps 8-10 total.

    ▪  Minor Towns/Large Boroughs (1,000–5,000): This was the most numerous class of true urban centers in England. The average was likely around 2,000–3,500 people.

    ▪  Very Small Boroughs (Below 1,000): Many hundreds of market settlements were under 1,000 people, functioning as local market centers but not true urban areas.

    Germany (Holy Roman Empire Core)

    ▪  Largest Regional Cities (Excluding Capital): Cities like Cologne and Prague were major international centers, likely with 30,000–40,000 inhabitants.

    ▪  Major Towns (5,000–10,000+): Cities like Lübeck, Nuremberg, and Augsburg were regional powers, mostly in the 10,000–25,000 range.

    ▪  Minor Towns/Large Boroughs (1,000–5,000): There were hundreds of walled, independent towns across the Empire, with many falling into this category. The average would be difficult to pin down but was lower than England.

    ▪  Very Small Boroughs (Below 1,000): A very large number of minor market towns and Minderstädte (small towns) were below 1,000.

    Scotland

    ▪  Largest Regional Cities (Excluding Capital): Edinburgh was the only city approaching major European size, perhaps 10,000–12,000 at its peak.

    ▪  Major Towns (5,000–10,000+): None. The scale of Scottish urbanization was significantly smaller than its neighbors.

    ▪  Minor Towns/Large Boroughs (1,000–5,000): The largest burghs, such as Aberdeen and Perth, were likely only around 3,000 people.

    ▪  Very Small Boroughs (Below 1,000): Most Scottish burghs (towns) throughout the Middle Ages are believed to have had populations below 1,000.

    Those four models emerge as the most robust to choose from. But I’m going to expand the list further with some bigger-population models and one or two even smaller ones, and abstract the ones that have already been identified so that it doesn’t matter if the results of the generation model aren’t quite 100% in line with History.

    This is clearly a village in Switzerland. The buildings are bigger and much closer together, but there’s still a lot of empty landscape. Image by Christel from Pixabay

    5.8.0.4 The reality-based Demographic Models

    ▪  France: Demonstrated a more distributed urban network with many cities (especially in the Low Countries/Italy) capable of sustaining populations of 25,000+.
        Urban Population: 5.6% (1500) – 8% (1600)
        Hierarchy Slope: Flat but rising sharply
        Regional Cities: 0.2-0.3 / 10,000 sqr km
        Major Towns: 0.5-1 / 10,000 sqr km
        Minor Towns: 5-7 / 10,000 sqr km
        Base Village: 320-480

    ▪  Germany: Akin to France but with a significant amount of Forests and Mountains which were relatively lightly populated while occupying great swathes of land.
        Urban Population: 10%
        Hierarchy Slope: Flat
        Regional Cities: 0.4-0.5 / 10,000 sqr km
        Major Towns: 1-2 / 10,000 sqr km
        Minor Towns: 8-12 / 10,000 sqr km
        Base Village: 400-600

    ▪  England: Had a relatively high urban density for its size, but a steep hierarchy. The difference between London and the next tier (York/Bristol) was large, and the gap between those and the average town was also significant.
        Urban Population: 5-6%
        Hierarchy Slope: Steep
        Regional Cities: 0.15 / 10,000 sqr km
        Major Towns: 0.4-0.5 / 10,000 sqr km
        Minor Towns: 3-4 / 10,000 sqr km
        Base Village: 240-360

    ▪  Scotland: Was the least urbanized region. Even its major burghs would be considered only medium-sized towns in England or minor towns in France.
        Regional Cities: None.
        Urban Population: 2-3%
        Hierarchy Slope: Very Flat, Slope flattens
        Major Towns: 0.1 / 10,000 sqr km
        Minor Towns: 0.5-1 / 10,000 sqr km
        Base Village: 160-240

    5.8.0.5 The Artificial Demographic Models

    To those four, I am adding the following:

    Imperial Core: A region dominated by a single capital or a handful of enormous cities, like Ancient Rome, Ancient China, or Mamluk Egypt. It would also apply to any of the others if they have significant improvements over standard medieval technology (including magic) in the fields of agronomy and food transportation.
        Urban Population: 15-20%
        Hierarchy Slope: Very Steep
        Regional Cities: 0.5 – 1 / 10,000 sqr km
        Major Towns: 0.1 – 0.3 / 10,000 sqr km
        Minor Towns: 1-2 / 10,000 sqr km
        Base Village: 480-720

    Coastal Mercantile Model: Based on the late medieval and early modern Low Countries (Flanders / Holland) and the Italian City States. Power and wealth are distributed among many medium-large communities, trading ports, and other economic centers, but there is no one super-sized city.
        Urban Population: 20-30%
        Hierarchy Slope: Very flat at low levels, rising sharply from higher town sizes (30,000 people)
        Regional Cities: 1 – 2 / 10,000 sqr km
        Major Towns: 2 – 4 / 10,000 sqr km
        Minor Towns: 4 – 6 / 10,000 sqr km
        Base Village: 280-420

    Frontier Nation: Somewhere in between Scotland and England, consisting of one part moderately densely settled, one part very sparsely settled (4-4 times as large) and a third part in the middle (2-3 times as large), relative to the densely settled region.
        Urban Population: 4-8%
        Hierarchy Slope: Moderate, flattens
        Regional Cities: 0.05 / 10,000 sqr km
        Major Towns: 0.2-0.25 / 10,000 sqr km
        Minor Towns: 1-2 / 10,000 sqr km
        Base Village: 200-300

    Tribal / Clan Model: based on Early Medieval Scandinavia and central Africa. Also useful for an extensive Nomadic Trading Network. Settlements are mainly defensive or seasonal gathering points.
        Urban Population: 2-5%
        Hierarchy Slope: Impossibly Steep but capped
        Regional Cities: None
        Major Towns: 0.001 / 10,000 sqr km
        Minor Towns: 0.05 / 10,000 sqr km
        Base Village: 80-120

5.8.1 Villages

The village is the fundamental unit of the population distribution simulation – everything starts there and flows from it.

    5.8.1.1 Village Frequency

    I’ve given this section a title that I think everyone will understand, but it’s not actually what it’s all about. The real question to be answered here is, how big is the Locus surrounding a population?

    The answer differs from one Demographic Model to another, unsurprisingly.

    The area of a given Locus is:

        SL = MF x (Pop)^0.5 x k,
            where,
            SL = Locus Size
            MF = Model Factor
            Pop is the population of the village
            and k = a constant that defines the units of area.

    The base calculation, with a k of 1, is measured in days of travel. That works for a lot of things, but comparison to a base area of 10,000 km^2 isn’t one of them. For that, we need a different k – one based on the Travel Ranges defined in previous parts of this series.

    Section 5.7.1.14.5.1 gives answers based on travel speed, more as a side-issue than anything else, based on the number of miles that can be traversed in a day:

      (Very) Low d = 10 miles / day
      Low d = 20 miles / day
      Reasonable d = 25 miles / day
      Doable d = 30 miles / day
      Close To Max (High) d = 40 miles / day
      Max d = 50 miles / day
          ( x 1.61 = km).

    — but these are the values for Infantry Marching, and that’s a whole other thing.

    Infantry march faster than people walk or ride in wagons. The amount varies depending on terrain (that’s the main variable in the above values), but – depending on who you ask – it’s 1 2/3 or 2 or 2.5 times.

    But, because they travel in numbers, they can march for less time in a day. Some say 6 hours, some 7, some 8. Ordinary travelers may be slower, but they can operate for all but an hour or two of daylight. That might be 8-2=6 or 7 hours in winter, but it’s more like 12-2=10 or 11 hours in summer.

    And it has to be borne in mind that the basis for these values assumes travel in Summer – at least in medieval times. But we want to take the seasons out of the equation entirely and set a baseline from which to adjust the list given earlier.

    One could argue that summer is when the crops are growing, and therefore that should be the basis of measurement, given that we’re looking for the size of a community’s reach.

    So let’s take the summer values, and average them to 10.5 hours. When you take the various factors into account and generate a table (I used 6, 6.5, 7, 7.5, and 8 for army marching times per day, and the various figures for speed cited, plus 2.25 as an additional intermediate value), work out all the values that it might be, and average them, you get 1.04. That’s so small a change as to be negligible – 1.04 x 50 = 52. We will have far bigger approximations than that!

    So we can use the existing table as our baseline. Isn’t that convenient?

    But which value from amongst those listed to choose? Overall, unless there’s some reason not to, you have to assume that terrain is going to average out when you’re talking about a baseline unit of 10,000 sqr kilometers. So, let’s use the “Reasonable” value unless there’s reason to change it.

    And that gives a conversion rate of 1 day’s travel = roughly 25 miles, or 40 km. And those are nice round numbers.

    Now, a locus is roughly circular in shape, so is that going to be a radius or a diameter? Well, a “market day” is how far a peasant or farmer can travel with their goods and return in a day, so I think we’re dealing with a radius of 1/2 the measurement, so that measurement must be the diameter of the locus.

    Which means that the base radius of a locus is 12.5 miles or 20 km.

    In an area where the terrain is friendly in terms of travel, this could inflate to twice as much; in an area where terrain makes travel difficult, it could be 1/2 as much or less. But if we’re looking for a baseline, that works.

    12.5 miles radius = area roughly 500 sqr miles = area 1270 sqr km. So in 10,000 sqr km, we would expect to find, on average, 7.9 loci. But that’s without looking at the population levels and the required Model Factors.

    The minimum size for an English Village is 240 people. The Square Root of 240 is 15.5.

    So the formula is now 1270 = 15.5 x 20 x Model Factor, and the Model Factor for England conditions and demographics is 4.1. Under this demographic model, there will be 4.1 Village Loci – which is the same thing as 4.1 villages – in 10,000 sqr km.

    Having worked one example out to show you how it’s done, here are the Model Factors for all the Demographic Models:

    ▪ Imperial Core: 480^0.5 = 21.9, and 21.9 x 20 x Model Factor = 1270, so MF = 2.9
    ▪ Germany (HRE): 400^0.5=20, and 20 x 20 x MF = 1270, so MF = 3.175
    ▪ France: 320^0.5 = 17.9, and 17.9 x 20 x MF = 1270, so MF = 3.55
    ▪ Coastal Mercantile Model: 280^0.5 = 16.733, and 16.733 x 20 x MF = 1270, so MF = 3.8
    ▪ England: 4.1
    ▪ Frontier Nation: 200^0.5 = 14.14, and 14.14 x 20 x MF = 1270, so MF = 4.5
    ▪ Scotland: 160^0.5 = 12.65, and 12.65 x 20 x MF = 1270, so MF = 5.02
    ▪ Tribal / Clan Model: 80^0.5 = 8.95, and 8.95 x 20 x MF = 1270, so MF = 7.1
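
    Here's the whole calculation as a short Python sketch – the 1,270 sqr km locus area and the minimum Base Village sizes are taken from the text above, while the names and structure are illustrative only:

        import math

        LOCUS_AREA = 1270  # sqr km, from the 20 km 'market day' radius, as rounded above
        K = 20             # the k constant used in the worked example (km per unit of Pop^0.5)

        MIN_BASE_VILLAGE = {
            "Imperial Core": 480, "Germany (HRE)": 400, "France": 320,
            "Coastal Mercantile": 280, "England": 240, "Frontier Nation": 200,
            "Scotland": 160, "Tribal / Clan": 80,
        }

        for model, pop in MIN_BASE_VILLAGE.items():
            mf = LOCUS_AREA / (math.sqrt(pop) * K)  # rearranged from SL = MF x Pop^0.5 x k
            print(model, round(mf, 2))

        print("Base loci per 10,000 sqr km:", round(10000 / LOCUS_AREA, 1))  # ~7.9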

    So, why didn’t I simply state the number of loci (i.e. the number of villages) in an area?

    It’s because that’s a base number. When we get to working on actual loci or zones, these can shrink or grow, according to other factors. This is a guideline – but to define an actual village and its surrounds, we will need to use the MF. Besides, you might want to generate a specific model for a specific Kingdom in your game.

    You may be wondering, then, why it should be brought up at all, or especially at this stage? The answer to those questions is that the area calculated is a generic base number which may have only passing resemblance to the actual size of the locus.

    A locus will continue to expand until it hits a natural boundary, a border, or equidistance to another population center. Very few of them will actually be round in shape – some of them not even approximately.

    The ratio between ACTUAL area and BASE area is an important factor in calculating the size of a specific village.

    An example of the ‘real borders’ of a Locus

    To create the above map, I made a copy of the base map (shown to the left). At the middle top and bottom, I placed a dot representing the Locus ‘radius’.

    At the left top, another dot marked the half-way point to the next town (top left), where it intersected a change of terrain – in this case, a river.

    At the top right, doing the same thing would have made the town at top right a bit of a mixed bag – it already has forests and hills and probably mountains. I didn’t want it to have a lot of farmland as well. So I deliberately let the current locus stretch up that way. The point below it is also slightly closer to the top right town than it would normally be, but that’s where there is a change of terrain – the road. I tossed up whether the locus in question should include the intersection and road, but decided against it.

    And so on. Once I had the main intersection points plotted, I thought about intermediate points – I didn’t want terrain features to be split between two towns, they had to belong to one or the other. You can see the results in the “bites” that are taken out of the borders of the locus at the bottom.

    If you use your fingers, one pointing at the town in the center and the other at the top-middle intersection point, and then rotate them to get an idea of the ‘circular’ shape of the locus, you can see that it’s missing about 1/6 of its theoretical area to the east, another 1/6 to the south, and a third 1/6th to the west. It’s literally 1/2 of the standard size. That’s going to drive the population down – but it’s fertile farmland, which will push it up. But that’s getting ahead of ourselves.

    As an exercise, though, imagine that the town lower right wasn’t there. The one that’s on the edge of the swamp. Instead of ending at a point at the bottom, the border would probably have continued, including in the locus that small stand of trees and then following the rivers emerging from the swamp, and so including the really small stand of trees. The Locus wouldn’t stop until it got to the swamp itself. The locus would have extended east to the next river, in fact, encompassing forest and hills until reaching the East-road, which it would follow inwards until it joined the existing boundary. It would still have lost maybe 1/12th in the east, but it would have gained at least that much and probably more in the south, instead of losing 1/3. The locus would be 1 – 1/12 + 1/3 – 1/12 – 1/3 = 10/12 of normal instead of 1/2 of normal.

    5.8.1.2 Village Base Size

    If you look at the models, you will notice “Base Village” and a population count, and might be fooled into thinking that everything in that range is equally likely. It’s not.

    Take the French model – it lists the village size as 320-480.

    First, what’s the difference, high minus low? In this case, it’s 160. We need to divide that by 8 as a first step – which in this case is a nice, even, 20.

    Half of 20 is 10, and three times 10 is 30. Always round these UP.

    With that, we can construct a table:

        01-30 = 320
        31-40 = 321-350 (up by 30)
        41-50 = 351-380 (up by 30)
        51-60 = 381-400 (up by 20)
        61-70 = 401-420 (up by 20)
        71-75 = 421-430 (up by 10)
        76-80 = 431-440 (up by 10)
        81-85 = 441-450 (up by 10)
        86-90 = 451-460 (up by 10)
        91-95 = 461-470 (up by 10)
        96-00 = 471-480 (up by 10)

    I used Gemini to assist in validating various elements of this section, and it thought the “up by 30” terminology was confusing and should be replaced with something more formal.

    I disagree. I think the more colloquial vernacular will get the point across more clearly.

    It was also concerned – and this is a more important point – that GMs couldn’t implement this roll and the subsequent sub-table quickly. I disagree, once again – I’ve seen far more complicated constructions for getting precise population numbers than two d% rolls, especially since the same tables will apply to all areas within the Kingdom that are similar in constituents. Everywhere within a given zone, in fact, unless you deliberately choose to complicate that in search of precision.

    In general, you construct one set of tables for the entire zone – and can often copy those as-is for other similar zones as well. Maybe even for a whole Kingdom.

    The d% breakdown always uses the same percentages, and there are always 2 “up by 3 × half” steps, 2 “up by 2 × half” steps, and 5 “up by half” steps, plus a final band that absorbs any rounding errors; in this example there aren’t any.

    We then construct a set of secondary tables by dividing our three (or four) increments by 10. In this case, 30 -> 3, 20 -> 2, 10 -> 1. And we apply the same d% breakdown in exactly the same way, but from a relative position:

    So:
        1/2 x 3 = 1.5, rounds to 2; 3 x 1.5 = 4.5, rounds to 5.
        1/2 x 2 = 1; 3 x 1 = 3.
        1/2 x 1 = 0.5, rounds to 1; 3 x 1 = 3.

    The “Up By 30” Sub-table reads:

        01-30 = +0
        31-40 = +5
        41-50 = +5+5 = +10
        51-60 = +10+3=+13
        61-70 = +13+3=+16
        71-75 = +16+2 = +18
        76-80 = +18+2 = +20
        81-85 = +20+2 = +22
        86-90 = +22+2 = +24
        91-95 = +24+2 = +26
        96-00 = +30 (up by whatever’s left).

    The “Up By 20” Sub-table:

        01-30 = +0
        31-40 = +3
        41-50 = +3+3 = +6
        51-60 = +6+2 =+8
        61-70 = +8+2=+10
        71-75 = +10+1 = +11
        76-80 = +11+1 = +12
        81-85 = +12+1 = +13
        86-90 = +13+1 = +14
        91-95 = +14+1 = +15
        96-00 = +20 (up by whatever’s left).

    The “Up By 10” Sub-table:

        01-30 = +0
        31-40 = +3
        41-50 = +3+3 = +6
        51-60 = +6+1 =+7
        61-70 = +7+1=+8
        71-75 = +8+1 = +9
        76-80 = +9+1 = +10
        81-85 = +0-1 = -1
        86-90 = -1-1 = -2
        91-95 = -2-1 = -3
        96-00 = -3-1 = -4

    Notice what happened when I ran out of room in the “+10”? The values stopped going up, and starting from +0, started going DOWN.

    It takes just two rolls to determine the Base Population of a specific village with sufficient accuracy for our needs within a zone.

    EG: Roll of 43: Main Table = 380, in an up-by-30 result. So we use the “Up By 30” Sub-table and roll again: 72, which gives a +18 result. So the Base population is 380+18=398.

    These results are intentionally non-linear.
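
    Putting the whole thing together, here’s a minimal Python sketch of the two-roll process, with the tables above transcribed as data. Two assumptions are mine: I’ve read “Main Table = 380” in the worked example as meaning the top value of the band rolled, and I’ve pointed the 01-30 band at the “Up By 30” sub-table, since the text doesn’t say which one applies there.

        import random

        # Tables transcribed from above: (d% upper bound, base value, sub-table key)
        MAIN = [(30, 320, 30), (40, 350, 30), (50, 380, 30),
                (60, 400, 20), (70, 420, 20), (75, 430, 10), (80, 440, 10),
                (85, 450, 10), (90, 460, 10), (95, 470, 10), (100, 480, 10)]

        # Sub-tables: (d% upper bound, offset)
        SUB = {30: [(30, 0), (40, 5), (50, 10), (60, 13), (70, 16), (75, 18),
                    (80, 20), (85, 22), (90, 24), (95, 26), (100, 30)],
               20: [(30, 0), (40, 3), (50, 6), (60, 8), (70, 10), (75, 11),
                    (80, 12), (85, 13), (90, 14), (95, 15), (100, 20)],
               10: [(30, 0), (40, 3), (50, 6), (60, 7), (70, 8), (75, 9),
                    (80, 10), (85, -1), (90, -2), (95, -3), (100, -4)]}

        def base_population(roll_1=None, roll_2=None):
            # Two d% rolls: the first finds the main-table band, the second finds
            # the offset on whichever sub-table that band points to.
            roll_1 = roll_1 if roll_1 else random.randint(1, 100)
            roll_2 = roll_2 if roll_2 else random.randint(1, 100)
            base, sub_key = next((b, s) for upper, b, s in MAIN if roll_1 <= upper)
            offset = next(o for upper, o in SUB[sub_key] if roll_2 <= upper)
            return base + offset

        print(base_population(43, 72))   # the worked example: 380 + 18 = 398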

    Optional:

    If you want more precise figures, apply -3+d3.

    Or -6+d6.

    Or anything similar – though I don’t really think you should go any larger than -10+d10 – and I’d consider -8+2d6 first.

    I have to make it clear that this relates to the population of a specific village in a specific zone, not a generic one. For anything of the latter kind, continue to use the minimum base population. We had to have the former because it affects what terrain influences the town size and how much of it there is; the latter is just a bonus that seemed to fit, and I thought it bookended the ‘real locus’ discussion nicely.

    5.8.1.3 Village Demographics

    Let’s start by talking Demographics, both real-world and Fantasy-world.

    The raw population numbers are not as useful as numbers of families would be. But that’s incredibly complicated to calculate and there’s no good data – the best that I could get was a broad statement that medieval times had a child mortality rate (deaths before age 15) of 40-50%, an infant mortality rate (deaths before age 1) of 25-35%, and an average family size of 5-7 children.

    If we look at modern data, we get this chart:

    Source: Our World In Data, cc-by, based on data from the United Nations. Click the image to open a larger version (3400 x 3003 px) in a new tab.

    I did a very rough-and-ready curve fitting in an attempt to exclude social and cultural factors and derive a basic relationship for what is clearly a straight band of results:

    Derivative work (see above), cc-by, extrapolating a relationship curve in the data

    …from which I extracted two data points: (0%,1.8) and (10%,5.6), which in turn gave me: Y = 0.38 X + 1.8, which can be restated, X = 2.63Y – 4.74. And that’s really more precision than this analysis can justify, but it gives a readout of child mortality for integer family sizes.

    Yes, I’m aware that the real relationship isn’t linear. But this simplified approximation is good enough for our purposes.

    That, in turn, gives me the following:

        Y = Typical Number Of Children,
        X = Overall Child Mortality Rate

        Y, X:
        1, -3%
        2, 0%
        3, 3%
        4, 5%
        5, 8%
        6, 11%
        7, 13%
        8, 16%
        9, 18%
        10, 21%
        11, 24%
        12, 26%

    …so far, so good.

    Next, I need to adjust everything for the rough data points that we have for medieval times, when bearing children was itself a mortality risk for the mothers.

    5-7 children, 40-50%

    so that gives me (5, 8, 40) and (7, 13, 50) – more useful in this case as (8, 40) and (13, 50) – which works out to Z = 2 X + 24.

        Z=Child Mortality, Medieval-adjusted

        Y, X, Z:
        1, -3%, 18%
        2, 0%, 24%
        3, 3%, 30%
        4, 5%, 34%
        5, 8%, 40%
        6, 11%, 46%
        7, 13%, 50%
        8, 16%, 56%
        9, 18%, 60%
        10, 21%, 66%
        11, 24%, 72%
        12, 26%, 76%

    But here’s the thing: realism and being all grim and gritty might work for some campaigns, but for most of us – no. What we need to do now is apply a “Fantasy Conversion” which contains just enough realism to be plausible and replaces the balance with optimism.

    I think dividing Z (the medieval-adjusted child mortality rate) by 3 sounds about right – YMMV. That gives me the F values below – but I also checked a ratio of 2.5, which gives the F2 values.

    Gemini suggested using 3.5 or 4 for an even ‘softer’ mortality rate, and 2.25 or 2 for a grittier one.

    In principle, I don’t have a problem with that – and part of the reason why I’m not just throwing the mechanics at you, but explaining how they have been derived, is so that GMs can use alternate values if they think them appropriate to their specific campaigns.

    I don’t just want to feed the hungry, I want to teach them to fish, to paraphrase the old proverb.

        F= Fantasy Adjusted Child Mortality Rate
        F2 = more extreme Child Mortality Rate

        Y, X, Z, F, F2:
        1, -3%, 18%, 6%, 7%
        2, 0%, 24%, 8%, 10%
        3, 3%, 30%, 10%, 12%
        4, 5%, 34%, 11%, 14%
        5, 8%, 40%, 13%, 16%
        6, 11%, 46%, 15%, 18%
        7, 13%, 50%, 17%, 20%
        8, 16%, 56%, 19%, 22%
        9, 18%, 60%, 20%, 24%
        10, 21%, 66%, 22%, 26%
        11, 24%, 72%, 24%, 29%
        12, 26%, 76%, 25%, 30%
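
    If you want to regenerate this table with your own ratios, here’s a minimal Python sketch using the relationships derived above. One caveat: the published X column appears to have been rounded down, so a conventionally-rounding spreadsheet may differ by 1% in a couple of rows.

        import math

        def mortality_rows(max_children=12):
            rows = []
            for y in range(1, max_children + 1):
                x = math.floor((y - 1.8) / 0.38)   # modern child mortality, % (inverted fit)
                z = 2 * x + 24                     # medieval-adjusted, from (8, 40) and (13, 50)
                f = round(z / 3)                   # high-fantasy conversion
                f2 = round(z / 2.5)                # grittier fantasy conversion
                rows.append((y, x, z, f, f2))
            return rows

        for row in mortality_rows():
            print("%2d, %3d%%, %3d%%, %3d%%, %3d%%" % row)

    Swap the 3 and 2.5 divisors for Gemini’s softer (3.5 or 4) or grittier (2.25 or 2) suggestions as you see fit.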

    I think the F values are probably more appropriate for High Fantasy, while the F2 are better for more typical fantasy – but you’re free to use this information any way you like, the better to suit your campaign world.

    You might decide, for example, that averaging the Medieval Adjusted Values with the F2 is ‘right’ – so that 5 children would indicate (40+16)/2 = 28% mortality.

    Social values can also adjust these values – traditionally, that means valuing male children more than females. But in Fantasy / Medieval game settings, I think that would be more than counterbalanced, IF it were a factor, by the implied increased risks from youthful adventuring. In a society that practices such gender-bias, it would not surprise me if the ultimate gender ratio was 60-40 or even 70-30 – in favor of Girls.

      5.8.1.3.1 Maternal Survival

      The next element to consider is the risk of maternal death in childbirth. That’s even harder to pin down data on, but 1-3% per child is probably close to historically accurate. Balanced against that are the greater risks from adventuring and the availability of clerical healing. So I’m extending the table to cover 4, 5, and 6%, but you are most likely to want the values in the first three columns. To help distinguish these extreme possibilities from the usual ones, they have been presented in italics.

      We’re not interested so much in the number of cases where it happens as in the number of cases where it doesn’t – the % of families with living mothers, relative to the number of children.

          Y, @1, @2, @3, @4, @5, @6:
          1, 99%, 98%, 97%, 96%, 95%, 94%
          2, 98.0%, 96.0%, 94.1%, 92.2%, 90.3%, 88.4%
          3, 97.0%, 94.1%, 91.3%, 88.5%, 85.7%, 83.1%
          4, 96.1%, 92.2%, 88.5%, 84.9%, 81.5%, 78.1%
          5, 95.1%, 90.4%, 85.9%, 81.5%, 77.4%, 73.4%
          6, 94.1%, 88.6%, 83.3%, 78.3%, 73.5%, 69.0%
          7, 93.2%, 86.8%, 80.8%, 75.1%, 69.5%, 64.8%
          8, 92.3%, 85.1%, 78.4%, 72.1%, 66.3%, 61.0%
          9, 91.4%, 83.4%, 76.0%, 69.3%, 63.0%, 57.3%
          10, 90.4%, 81.7%, 73.7%, 66.5%, 59.9%, 53.9%
          11, 89.5%, 80.1%, 71.5%, 63.8%, 56.9%, 50.6%
          12, 88.6%, 78.5%, 69.4%, 61.3%, 54.0%, 47.6%

      The method of calculation is 100 x ( 1 – [D/100] ) ^ Y, where D is the per-birth mortality percentage – just in case you want to use different rates than these.
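
      As a quick sketch, the whole table above can be regenerated with that formula for whatever rates you prefer:

          def maternal_survival(children, per_birth_mortality_pct):
              # 100 x (1 - D/100) ^ Y, as above
              return 100 * (1 - per_birth_mortality_pct / 100) ** children

          # Rows are 1-12 children; columns are 1% through 6% mortality per birth.
          for y in range(1, 13):
              print(y, [round(maternal_survival(y, d), 1) for d in range(1, 7)])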

      There does come a point at which the likelihood of maternal death begins to limit the size of the average family, though, and I think the 6% values are getting awfully close to that mark.

      Let’s say that a couple have 6 children, right in the middle of the historical average. If the mother falls pregnant a 7th time at 6% per birth, the cumulative risk means she has roughly a 1 in 3 chance of having died in childbirth by then (and a fair risk of the child perishing with her). Which means that she HAS no more children. But if she beats those odds to have 7 children, her chances are even worse when it comes to child #8, and so on.

      Of all the cases with a mother who survived childbirth, we then need to factor in death from all other causes – monsters and adventuring and mischance and so on. Fantasy worlds tend to be dangerous, so this could be quite high – maybe as much as 5% or 10% or 20%. So multiply the living mothers by 0.9, or 0.8, or even 0.7 – whatever you consider appropriate – to allow for this.

      This rural community is obviously alongside a major river or coastline – the proximity of the mountains suggests the first, but isn’t definitive. The name offers a clue: ‘Hallstatt’, which to me sounds Germanic, and suggests that the waterway may be the Rhine. Or not, if I’ve misinterpreted. Image by Leonhard Niederwimmer from Pixabay

      5.8.1.3.2 Paternal Survival

      The result is the % of families with a surviving mother. So how many surviving fathers are there per surviving mother? Estimates here vary all over the shop, and more strongly reflect social values. But if I’m suggesting 5% – 20% mortality for mothers from other sources, the same would probably be reasonably true of fathers – if those social values don’t get in the way.

          0.95 x 0.95 = 90.25%.
          0.9 x 0.9 = 81%.
          0.85 x 0.85 = 72.25%
          0.8 x 0.8 = 64%.

      Those values give the percentages in which both parents have survived to the birth of the average number of children.

      If you’re using 10% mortality from other causes, then in 90% of cases in which the mother has died, the father has survived. But in 10% of the cases in which the mother has succumbed, the children are orphaned by the loss of the other parent.

      The higher this percentage, the higher the rate of survivors remarrying and potentially doubling the size of their households at a stroke. And that will distort the average family size far more quickly than the actual mortality percentages, unless there is some social factor involved – maybe it’s expected that parents with children will only marry single adults without children, for example.

      The problem with this approach is that if it’s the mother who is remarrying, this puts her right back on that path to mortality through childbirth; the child-count ‘clock’ does not get reset. If it’s a surviving father marrying a new and childless wife, it DOES reset, because the new mother has not had children previously.

      In a society that permits such actions, there is a profound dichotomy at its heart that favors larger families for husbands who survive while placing mothers who survive at far greater risk of the family becoming a burden to the community – which is likely to change that social acceptance. Paradoxically, a double standard is what’s needed to give both parents a more equal risk of death, and a more equal chance of surviving.

      5.8.1.3.3 Childless Couples

      Next, let’s think about the incidence of Childless Couples. We can state that there’s a given chance of pregnancy in any given year of marriage; but once it happens, there is just under a full year before that chance re-emerges.

          Year 1: A% -> 1 child born
          Year 2: (100-A) x A% -> 1 child born, A%^2 -> 2 children born
          Year 3: (100-A)^2 x A% -> 1 child born, (100-A) x A%^2 -> 2 children born, A%^3 -> 3 children born

      … and so on.

      This quickly becomes difficult to calculate, because each row adds 1 to the number of columns, and it’s easy to lose track.

      But here’s the interesting part: we don’t care. To answer this question, there’s a far simpler calculation.

      In any given year, B couples will marry. (100-A)% of them will not have children in the course of that year. If we specify B as the average, rather than as a value specific to a given year, then the year before, we will also have had B couples marry, with (100-A)% of them childless at the end of that year – which means that in the course of their second year of marriage, A% will have children and stop being counted in this category, and (100-A)% will not, and will still count.

      Adding these up, we get (100-A)% + (100-A)%^2 + …. and so on. And these additions will get progressively and very rapidly smaller.

      Let’s pick a number, by way of example – let’s try A=80%, just for the sake of argument.

      We then get 20% + 4% + 0.8% + 0.16% + 0.032% + 0.0064% … and I don’t think you’d really need to go much further, the increases become so small. I pushed on one more term (0.00128%) and got a total of 24.99968%. I pushed further with a spreadsheet, and not even 12 years was enough to cross the 25% mark – but it was getting ever closer to it. Close enough to say that for A=80, there would be 25 childless couples for every… how many?

      The answer to that question comes back to the definition of A: it’s the number of couples out of 100 who have a child in any given year. So, over 12 years, that’s a total of 1200 couples. And 25 / 1200 = 2.08%.

      I did the math – cheating, I used a spreadsheet – and got the following, all out of 1200 couples:

          A%, C, [C rounded]
          80%, 25, 25
          75%, 33.33, 33
          70%, 42.86, 43
          65%, 53.85, 54
          60%, 66.67, 67
          55%, 81.81, 82
          50%, 99.98, 100
          45%, 122.13, 122
          40%, 149.67, 150
          35%, 184.66, 185
          30%, 230.10, 230
          25%, 290.50, 291
          20%, 372.51, 373

      But that means that the rest of those 1200 couples do have children – and the number of children per couple will approach the average number that you chose.

      So if you pick a value for A, you can calculate exactly how many childless couples there are relative to the number of families with children:

          A=45%, C=122:

          1200-122 = 1078
          1078 families with children, 122 childless couples
          1078 / 122 = 8.836
          8.836 + 1 = 9.836
          so 1 in 9.836 families will be a childless couple.
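
      To re-run these numbers for a different value of A, here’s a minimal sketch that uses the same 12-year truncation as the spreadsheet figures above (the function names are mine):

          def childless_couples(a_pct, years=12, cohort=100):
              # Sum the geometric series (100-A)% + (100-A)%^2 + ... over `years`
              # cohorts of `cohort` marriages each.
              p_no_child = 1 - a_pct / 100
              return cohort * sum(p_no_child ** k for k in range(1, years + 1))

          def one_in_n_childless(a_pct, years=12, cohort=100):
              # '1 in N' families are childless couples, per the worked example.
              childless = childless_couples(a_pct, years, cohort)
              with_children = cohort * years - childless
              return with_children / childless + 1

          print(round(childless_couples(45), 2))    # ~122.13
          print(round(one_in_n_childless(45), 2))   # ~9.8, matching the worked example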

      5.8.1.3.4 Unwed Singles

      The social pressure to marry has varied considerably through the ages, but the greater the dangers faced by the community, the greater this pressure is going to be. And the fitter and healthier you are, the more that pressure is going to be amplified.

      This is inescapable logic – the first duty of any given generation in a growing society is to replace the population who have passed away, and it takes a long time to turn children into adults.

      You could calculate the average lifespan, deduct the age of social maturity, and state that society frowns heavily on unwed singles above that age – and that with every year that passes as an individual approaches that age, the social pressure grows. That would be a valid approach.

      The problem is that the average lifespan is complicated by those high rates of childhood death, and trying to extract that factor becomes really complicated and messy. And then you throw in curveballs like Elves and Dwarves, with their radically different lifespans and the whole thing ends up in a tangled mess.

      So, I either have to pull a mathematical rabbit out of my hat, or I do the sensible thing and get the GM to pick a social practice and do my best to make it an informed choice.

      While a purely mathematical approach is possible, the more that I looked at the question, the more difficult it became to factor every variable into the equation.

      Want the bare bones? Okay, here goes.

      For a given population, P, there are B marriages a year, removing B x 2 unwed individuals from the population. We can already extract the count of those who are ineligible for marriage due to age, because they are all designated as children.

      We can subtract the quantity of childless couples who are already wed in a similar fashion to the calculations of the previous subsection.

      The end result is the number of unwed singles of marriageable age who have not married. Setting P at a fixed value – say 100 people – we can then quickly determine the number of unmarried singles.

      What ultimately killed this approach was that it was – in the final analysis – using a GM estimate of B as a surrogate for getting the GM to estimate the % of singles in their community, doing so in a manner that was less conducive to an informed choice, and requiring a lot of calculations to end up with a number that they could have estimated directly in the first place.

      Nope. Not gonna work in any practical sense.

      So, instead, let’s talk about the life of the social scene – singles culture. There is still going to be all that social pressure to marry and contribute to the population, especially if you are an even half-successful adventurer, because that makes you one of the healthiest, wealthiest, and most prosperous members of the community.

      It can be argued that instead of using the average lifespan (with all its attendant problems) and deducting the age of maturity (i.e. the age at which a child becomes an adult) to determine the age by which a couple have to have children in order to keep the population at least stable (you need two surviving children for that, since there are two adults involved, so divide those 2 by the survival rate – 1 minus the child mortality rate – and round up), you should instead treat the mother’s age as a factor in her mortality during childbirth, and work back from the age at which that risk becomes unacceptable. In modern times, that’s generally somewhere in the thirties, maybe up to 40. That doesn’t mean that older women can’t have children, just that under these circumstances, the risks of dying before you have enough offspring are considered too high by the general culture.

      But what does that really get you? There’s always going to be some age at which the pressure to wed starts to grow. Shifting it this way or that by a couple of years won’t change much.

      Looking at it from the reverse angle – how much single life will society tolerate – can be far more useful.

      I would suggest a base value of a decade. Ten years to be an adventurer and live life on the edge.

      In high-danger societies, especially those with a high mortality rate, that might come back 2 or 3 years; at its most extreme, 5. That’s all the time you have to focus on becoming a professional who is able to support a family, or at least on setting your feet firmly on that path.

      In low-danger societies, especially those with a lower mortality rate, it might get pushed out a few years, maybe even another 5. That’s enough time that you can sow some wild oats and still settle down into someone respectable within the community.

      How long is the typical apprenticeship? In medieval times? In your fantasy game-world? From the real world, I could bandy about numbers like 4 years, or 5 years, or 5 years and 5 more learning on the job, or repaying debts to the master that trained you. And you end up with the same basic range – 5-15 years.

      What is the age of maturity in your world? Again, I could throw numbers around – 18 or 21 seem to be the most common in modern society, but 16 (even 15) has its place in the discussion – that’s how old you had to be back when I was younger before you could leave school and pursue a trade, i.e. becoming an apprentice. But I have played in a number of games where apprenticeships started at eight, or twelve, and lasted a decade – and THEN you got to start repaying your mentor for the investment that he’s made in you. With interest.

      Does there come a point where people are deemed anti-social because they have not married, and find their prospects of attracting a husband or wife diminishing as a result? Don’t say it doesn’t happen – there is plenty of real-life evidence that it exists as a social undercurrent, one that shifts, and sometimes intensifies or weakens, without any real understanding of the factors that drive the phenomenon. But forget the real world and think about the game-world.

      How optimistic / positive is the society? How grim and gritty?

      Think about all these questions, because they all provide context to the basic question: What percentage of the population are unwed with no (official) children?

      Here’s how I would proceed: Pick a base percentage. For every factor you’ve identified that gives greater scope for personal liberty, add 2%. For every factor that demands the sacrifice of some of that liberty, from society’s point of view, subtract 2%. In any given society, there is likely to be a blend of factors, some pushing the percentage up and some down – but in more extreme circumstances, they might all push in the same direction. If you identify a factor as especially weak, only adjust by 1%; if you judge a factor as especially strong, adjust by 3 or even 4%.

      In the end, you will have a number.

      Let me close out this section with some advice on setting that base percentage.

      There are two competing and mutually-exclusive trains of thought when it comes to these base values. Here’s one:

      ▪ In positive societies, low child mortality means fewer young widows/widowers. The society is more stable, allowing for strong family formation and early marriage. Base rate is low.

      ▪ In moderate societies, dangers still disrupt family units, leading to a moderate rate of single, adult households. Base rate is moderate.

      ▪ In dangerous societies, high death rates mean many broken families, orphans, and single parents. The number of adult individuals living outside a stable family unit is maximized. Base rate is high.

      Here’s the alternative perspective:

      ▪ Positive societies produce less social pressure and greater levels of personal freedom, reducing the rate of marriage and increasing the capacity for unwed singles. Base rate is high.

      ▪ Moderate societies have a positive social pressure toward marriage at a younger adult age, and less capacity for personal liberty. Base rate is moderate.

      ▪ Societies that swarm with danger have a higher death rate, and there would be more social pressure to marry very young to create population stability. The alternative leads to social collapse and dead civilizations.

      What’s the attitude in your game world? They are all reasonable points of view.

      In a high-fantasy / positive social setting, I would start with a base percentage of 22%. Most factors will tend to be positive, so you might end up with a final value of 32% – but there can be strains beneath the surface, which could lead to a result of 12% in extreme cases.

      In a mid-range, fairly typical society, I would employ a base of 27%. If there are lots of factors contributing to a high singles rate, this might get as high as 37%, and if there are lots of negatives, it might come down to 17% – but for the most part, it will be somewhere close to the middle.

      In an especially grim and dark world, I would employ a base of 33%, in the expectation that most factors will be negative, and lead to totals more in the 23-28% range. But if social norms have begun to break down, social institutions like marriage can fall by the wayside, and you can end up with an unsustainable total of 40-something percent.

      Anything outside 20-35 should be considered unsustainable over the long run. Whatever negative impacts can apply will be rife.
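
      For what it’s worth, the whole procedure reduces to a few lines; the weights are just the 1-4% adjustments suggested above, and the example factors are purely hypothetical:

          def unwed_singles_pct(base_pct, liberty_weights=(), sacrifice_weights=()):
              # Base percentage, +2% per ordinary liberty factor (1% weak, 3-4% strong),
              # -2% per ordinary factor demanding the sacrifice of that liberty.
              return base_pct + sum(liberty_weights) - sum(sacrifice_weights)

          # A fairly typical society (base 27%): two ordinary liberties,
          # one strong and one weak social demand.
          print(unwed_singles_pct(27, liberty_weights=(2, 2), sacrifice_weights=(3, 1)))  # 27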

      5.8.1.3.5 Population Breakdown

      That’s the final piece of the puzzle – with that information, you can assess the six types of ‘typical’ family unit and their relative frequency:

          # Children with no parents,
          # Children with mothers but no fathers,
          # Children with fathers but no mothers, and
          # Children with two parents.
          # Childless Couples
          # Unwed Singles

      Get the total size of each of these family units / households* in number of individuals, multiply that size by the frequency of occurrence, add up all the results, and convert them to a percentage and you have a total population breakdown. Average the first five and you have the average family size in this particular region and all similar ones.

      Multiply each frequency of occurrence by the village population total (rounding as you see fit), and you get the constituents of that village.

      * I have never liked the use of the term ‘households’ in a demographic context, even though that seems to be the most commonly preferred term these days. I’ve lived in a number of shared accommodations as a single over the years, and that experience muddies what’s intended to be a clearer understanding of the results. If you have 50 or 100 singles living in a youth hostel, are they one household or 50-100? Families – nuclear or non-nuclear – is, for me at least, the clearer, more meaningful term.
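
      A minimal sketch of that bookkeeping, with purely hypothetical frequencies and family-unit sizes plugged in for illustration:

          def population_breakdown(frequencies, unit_sizes, village_population):
              # frequencies: relative frequency of each family-unit type (any scale);
              # unit_sizes: average individuals per unit of that type.
              weighted = [f * s for f, s in zip(frequencies, unit_sizes)]
              total = sum(weighted)
              pct = [100 * w / total for w in weighted]
              counts = [round(village_population * p / 100) for p in pct]
              return pct, counts

          # Order as listed above: orphaned children, mother-only, father-only,
          # two-parent, childless couples, unwed singles (all numbers invented).
          pct, counts = population_breakdown([2, 6, 4, 60, 8, 20],
                                             [3, 5, 5, 7, 2, 1], 398)
          print([round(p, 1) for p in pct])
          print(counts)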

      5.8.1.3.6 The Economics Of The Demographics

      In modern times, it’s not unusual for two adults and even multiple children all to have different occupations for different businesses all at the same time. Some kids start as paper boys and girls at a very young age. Even five year olds with Lemonade stands count in this context.

      Go back about 100 years and that all changes. There is typically only one breadwinner – with exceptions that I’ll get to in a moment – and while some of them will have their own business (be it retail or in a service industry), most will be working for someone else.

      There will be a percentage who have no fixed employment and operate as day labor.

      Going into Victorian times, we have the workhouses and poorhouses, where brutal labor practices earn enough for survival but little more. While some were profitable for the owners, most earned less than they cost, and relied on charitable ‘sponsorship’ from other public institutions – sometimes governments, more often religious congregations. These are the exceptions that I mentioned. This is especially true where the father has deserted the family or died (often in war) leaving the mother to raise the children but unable to do so because of the gender biases built into the societies of the time.

      Go back still further, and it was a matter of public shame for a woman to work – with but a few exceptions such as midwifery. Nevertheless, they often earned supplemental income for the families with craft skills such as sewing, knitting, and needlework.

      The concept that the male was the breadwinner only gets stronger as you pass backwards through history.

      Fantasy games are usually not like that. They see the world from the modern perspective and force the historical reality to conform to it. In particular, gender bias is frequently and firmly excluded from fantasy societies.

      The core reasoning is that characters and players can be of either gender (or any of the supplementary gender identifications) and the makers of the games don’t wish to exclude potential markets with discomforting historical reality.

      There are a few GMs out there who intentionally try to find an ‘equal but distinct’ role for females and others within their fantasy societies; it’s difficult, but it can be done – and it usually happens by excluding common males from segments of the economy within the society. If there are occupations that are only open to women, and occupations of equal merit (NOT greater merit) that are only open to men, you construct a bilateral society in which two distinct halves come together to form a whole.

      But it would still be unusual for a single household to have multiple significant breadwinners; you had one principal earner and zero or more supplemental incomes ‘on the side’.

      Businesses were family operations in which the whole family were expected to contribute in some way, subject to needs and ability.

      And that’s the fundamental economic ‘brick’ of a community – one income per family, whether that income derives as profits from a business or from labor in someone else’s business.

      You can use this as a touchstone, a window into understanding the societies of history, all the way back into classical times – who earned the money and how? In early times, it might be that you need to equate coin-based wealth with an equivalent value in goods, but once you start thinking of farm produce or refined ore as money, not as goods, the economic similarities quickly reveal themselves.

      So that is also the foundation of economics in this system. One family, one income (plus possible supplements). In fact, there were periods in relatively recent history in which the supplementary income itself was justification for marriage and children.

      In modern times, we evaluate based on the reduction of expenses; this is because most of our utilities don’t rise in usage as fast as the number of people using them (which goes back to the muddying concept of ‘households’; if two people are sharing the costs, both have more economic leftover to spend because the costs per person have gone down; if they are NOT sharing expenses, each providing fully for themselves, then they are two ‘households’, not one. It also helps to think of rent as a ‘utility’ within this context).

      But that’s a very modern perspective, and one that only works with the modern concept of ‘utilities’ – electricity, gas, and so on. Go back before that, into the pre-industrial ages, and the perspective changes from one of diminishing liabilities into one of growth of potential advantages. And having daughters who could supplement the household income by working as maids or providing craft services gave a household an economic advantage.

      5.8.1.3.7 An Economic Village Model

          8 a^2 = b^2 – c^2.

      Looks simple, doesn’t it? In fact, it is oversimplified – the reality would be

          a^d = (b^e – c^f ) / g,

      but that’s beyond my ability to model, and too fiddly for game use.

      a = the village’s profitability. Some part of this may show up as public amenities; most of it will end up in the pockets of the broader social administration, in whatever form that takes.

      b = the village’s productivity, which can be simplified to the number of economic producers in the village. You could refine the model by contemplating unemployment rates, but the existence of day laborers whose average income automatically takes into account days when there’s no work to be found, means that we don’t have to.

      c = the village’s internal demand for services and products. While usually less than production, it doesn’t have to be so. But it’s usually close to b in value.

      To demonstrate the model, let’s throw out figures of 60 and 58 for b and c.

          8 a^2 = 60^2 – 58^2 = 3600 – 3364 = 236.
          a = (236 / 8)^0.5 = 29.5^0.5 = 5.43

      The village grows. b rises to 62. c rises to 59.

          8 a^2 = 62^2 – 59^2 = 3844 – 3481 = 363.
          a = (363 / 8)^0.5 = 45.375^0.5 = 6.736.

      It has risen – but not by very much.

      Things become clearer if you can define c as a percentage of b:

          8 a^2 = b^2 – (D x b^2) / 100
          800 a^2 = 100 b^2 – D x b^2 = b^2 x (100-D)

      If 98% of the village’s production goes to maintaining and supporting the village, then only 2% is left for economic growth. If the village adds more incomes, demand rises by the normal proportion as well – so economic growth rises, but quite slowly. In the above example calculations, 59/62 = 95.16% going to support the village – and 95% is about as low as it’s ever going to realistically go. In exceptionally productive years, it might be as low as 66.7%, but most years it’s going to be much higher than that.
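
      Here’s the model as a two-line function, reproducing both worked examples above:

          import math

          def village_profitability(producers, internal_demand):
              # a from 8 a^2 = b^2 - c^2
              return math.sqrt((producers ** 2 - internal_demand ** 2) / 8)

          print(round(village_profitability(60, 58), 2))    # 5.43
          print(round(village_profitability(62, 59), 3))    # 6.736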

      Side-bar: 5.8.1.3.7.1 Good Times

      You can actually model how often an exceptional year comes along, by making a couple of assumptions. First, if 66.7 is as good as they get, and 95 is as bad as an exceptionally good year gets, then the average ‘exceptional year’ will be 80.85%.

      Second, if 95% is as good as a typical year gets, and 102% is as bad as a typical year gets, then the average ‘normal’ year will be 98.5%.

      Third, if the long term average is 95.16%, then what we need is the number of typical years needed to raise the overall average (including one exceptional year) to 95.16%.

          95.16 x (n+1) = 80.85 + (n x 98.5)
          95.16 x n + 95.16 = 80.85 + 98.5 x n
          (95.16 – 98.5) x n = 80.85 – 95.16
          3.34 n = 14.31
          n = 14.31 / 3.34 = 4.284.

          4-and-a-quarter normal years to every 1 good year.

      You can go further, with this as a basis, and make the good years better or worse so that you end up with a whole number of years.

          95.16 x (5 +1) = g + 5 x 98.5
          g = 95.16 x 6 – 98.5 x 5
          g = 570.96 – 492.5 = 78.46.

      That’s a six-year cycle with one good year averaging 78.46% of productivity sustaining the village and five typical years in which 98.5% of productivity is needed for the purpose.
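
      Both halves of that little algebra exercise generalize easily, if you want a different cycle length or different year values – a quick sketch:

          def normal_years_per_good_year(long_term_avg, good_year, typical_year):
              # Solve avg * (n + 1) = good + n * typical for n.
              return (good_year - long_term_avg) / (long_term_avg - typical_year)

          def good_year_value(long_term_avg, typical_year, cycle_years):
              # Solve avg * cycle = g + (cycle - 1) * typical for g.
              return long_term_avg * cycle_years - typical_year * (cycle_years - 1)

          print(round(normal_years_per_good_year(95.16, 80.85, 98.5), 3))   # ~4.284
          print(round(good_year_value(95.16, 98.5, 6), 2))                  # 78.46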

      I grew up on the land, and I can tell you that an industry is thriving if one year out of 10 is really good; an industry is marking time if only one year out of 20 is good, and in trouble if a really profitable year comes along only once every 25 years or more. One year in six is a boom.

      So to close out this sidebar, let’s look at what those numbers equate to in overall economic productivity for the rural population that depend on them:

          Boom: (1 x 78.46 + 5 x 98.5) / 6
              = (78.46 + 492.5) / 6
              = 570.96 / 6
              = 95.16%
              (we already knew this but it’s included for comparison)

          Thriving: (1 x 78.46 + 9 x 98.5) / 10
              = (78.46 + 886.5) / 10
              = 964.96 / 10
              = 96.496

          Stable, Marking Time: (1 x 78.46 + 19 x 98.5) / 20
              = (78.46 + 1871.5) / 20
              = 1949.96 / 20
              = 97.498

          In trouble / in economic decline: (1 x 78.46 + 24 x 98.5) / 25
              = (78.46 + 2364) / 25
              = 2442.46 / 25
              = 97.6984

      Look at the differences, and how thin the lines are between growth and stagnation.

          Stable to In Decline: 0.2004% change.
          Stable to Thriving: 1.002% change.
          Thriving to Booming: 1.336% change.
          Booming to In Decline: 2.5384% change.

      The whole boom-bust cycle – and it can be a cyclic phenomenon – is contained within 2.54% difference in economic activity.

      An aside within an aside shows why:

          Boom: 95.16% = 0.9516;
          0.9516 ^ 6 = 0.74255;
          so 25.74% productivity goes into growth.

          Thriving: 96.496% = 0.96496;
          0.96496 ^ 6 = 0.8073;
          so 19.27% productivity goes into growth over the same six-year period.

          Stable: 97.498% = 0.97498;
          0.97498 ^ 6 = 0.859;
          14.1% of productivity goes into growth over the same six-year period.

          Declining: 97.6984% = 0.976984;
          0.976984 ^ 6 = 0.8696;
          13.04% of productivity goes into growth.
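
      The same compounding, as a one-line check you can run against other maintenance percentages or cycle lengths:

          def growth_share(maintenance_pct, cycle_years=6):
              # % of productivity left over for growth across the cycle, compounding
              # the maintenance share year on year as in the figures above.
              return 100 * (1 - (maintenance_pct / 100) ** cycle_years)

          for label, pct in [("Boom", 95.16), ("Thriving", 96.496),
                             ("Stable", 97.498), ("Declining", 97.6984)]:
              print(label, round(growth_share(pct), 2))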

      Every homeowner sweats a 0.25% change in interest rates because they compound, snowballing into huge differences. This is exactly the same thing.

    5.8.1.4 The Generic Village

    The generic village is perpetually dancing on a knife-edge, but the margins are so small that it’s trivially easy to overcome a bad year with a better one. Even a boom year doesn’t incite a lot of growth, but a lot of factors pulling together over a very long time can.

    Some villages won’t manage to stay off that slippery slope and will decline into Hamlets, but find stability at the smaller size. Given time, disused buildings will be torn down and ‘robbed’ of any useful construction material because that’s close to free, and that alone can make enough of a difference economically. With the land reclaimed, after a while you could never tell that it had once been a village.

    Some won’t be able to arrest their decline – whatever led to their establishment in the first place either isn’t profitable enough, or too much of the profits are being taken in fees, tithes, greed, and taxes. They decline into Thorpes.

    In some cases, communities exist for a single purpose; they never grew large enough to even have permanent structures. They are strictly temporary in nature (though one may persist for dozens of years or more); they are forever categorized as Mining or Logging Camps.

    Other villages have more factors pushing them to growth, and once they reach a certain size, they can organize and be recognized as a town. And some towns become cities, and some cities become a great metropolis.

    With each change of scale, the services on offer to the townsfolk, and the services on offer to the traveler passing through, increase.

    The fewer such services there are, the more general and generic they have to become, just to earn enough to stay in operation.

    The general view of a generic village is that most services exist purely for the benefit of the locals, but a small number of operations will offer services aimed at a temporary target market, the traveler. These services are often more profitable but less reliable in terms of income, more vulnerable to changes in markets. They don’t tend to be set up by existing residents; instead, they are founded by a traveler who settles down and joins a community because they see an economic opportunity.

    That means that the number of such services on offer is very strongly tied to both the growth of the village, and to the overall economic situation of the Kingdom as a whole and to the local Region of which this village is a part.

    Here’s another way to look at it: The reason so much of the village’s economic potential goes into maintaining the village is because of all those tithes and taxes and so on. Some of those will be based on the land in and around the village; some on the productivity of that land; and some of it on the size and economic activity of the village. The rest provides what the village needs to sustain its population and keep everything going. There’s not a lot left – but any addition to the bottom line that isn’t eroded away by those demands makes the village and the region more profitable, creating more opportunities for sustained growth. Again, there is a snowball effect.

    Some villages – and this is a social thing – don’t want the headaches and complications of growth; they like things just the way they are. They will have local rules and regulations designed to limit growth by making growth-producing business opportunities less attractive or compelling. Others desperately want growth, and will try to make themselves more attractive to operations that encourage it.

    That divides villages into two main categories and a number of subcategories.

    Main Category: Villages that encourage growth
         Subcategory: Villages that are growing
         Subcategory: Villages that are not growing
         Subcategory: Villages that are being left behind, and declining.
    Ratios: 40:40:20, respectively.

    Main Category: Villages that are discouraging growth despite the risk of decline
         Subcategory: Villages that are growing and can only slow that growth
         Subcategory: Villages that have achieved stability
         Subcategory: Villages that have or are declining.
    Ratios: 20:40:40, respectively.

    5.8.1.5 Blended Models

    In general, the rule is one zone, one model. In fact, as a general rule, your goal should be one Kingdom, one model – that way, if you choose “England” as your model, your capital city will resemble London in size and characteristics, and not, say, Imperial Rome.

    But, if you can think of a compelling enough reason, there’s no reason not to blend models. There are lots of ways to do this.

    The simplest is to designate one model for part of a zone, and another to apply to the rest.

    Example: if your capital city were much older than the rest of the Kingdom, you might decide that for IT ALONE, the Imperial model might be more appropriate, while the rest of the Kingdom is England-like. Or you might decide that because of its size, it has sucked up resources that would otherwise grow surrounding communities more strongly, and declare a three-model structure: Imperial Capital, France for all zones except zone 1, and England for the rest of Zone 1.

    Example: A zone contains both swamp and typical agricultural land. You decide that those parts that are Swamp are German or Frontier in nature, while the rest are whatever else you are using.

    An alternative approach to the problem that works in the case of the latter example is to actually average the two models’ characteristics and apply the result either to just the swamp areas, or to the zone overall.

    When you get right down to it, the models are recommendations and guidelines, describing a particular demographic pattern seen in Earth’s history. There’s absolutely nothing to prevent you from inventing a unique one for a Kingdom in your world – except for it being a lot of work, that is.

    5.8.1.6 Zomania – An Example

    I don’t really think that a fully-worked example is actually necessary at this point, but I need to have one up-to-date and ready to go for later in the article. So it’s time for another deep-dive into the Kingdom of Zomania.

    5.8.1.6.1 Zone Selection

    I’ll start by picking a couple of Zones that look interesting, and distinctive compared to each other.

    Zone 7 is bounded by a major road, but doesn’t actually contain that road; it DOES have capacity for a lot of fishing, though. And I note that there are cliffs in the zones to either side of it, so they WON’T support fishing – in fact, those cliffs appear to denote the limits of the zone. Zone 7 adds up to 167.8 units in area, and features 26 units of pristine beaches.

    Zone 30 has an international border, and a major road, lots of forest and foothills becoming mountainous. It’s larger than Zone 7, at 251.45 units.

    Because I haven’t detailed these areas at all, the place that I have to start is back in 5.7.1.13. But first…

    5.8.1.6.1.1 Sidebar: Anatomy Of A Fishing Locus

    I was going to bring this up a little later, but realized that readers need to know it, now.

    Coastal Loci are a little different from the norm. To explain those differences, I threw together the diagram below.

    1: is a coast of some kind. It might not be an actual beach, but it’s flat and meets the water.

    2: It’s normal, especially if there’s a beach, for the ends to be ‘capped’ with some sort of headland. This is often rocky in nature. This is the natural location for expensive seaside homes and lighthouses.

    3. Fishing villages.

    4. Water. It could be a lake, or the sea, or even a river if it’s wide enough.

    5. Non-coastal land, usually suitable for agriculture.

    6. A fishing village’s locus is compressed along the line of the coast and bulging out into the water. This territory produces a great deal more food than the equivalent land area – anywhere from 2-5 times as much. Some cultures can go beyond coastal fishing, doubling this area – though what’s further out than shown is generally considered open to anyone from this Kingdom. Beyond that, some cultures can Deep-Sea fish (if this is the sea), which quadruples the effective area again. If you’re keeping track, that’s 2-5 x 2 x 4 = 16-40 times the land area equivalent. The axis of the locus is always as perpendicular to the coast as possible.

    7. The bottoms of the lobes are lopped off…

    8. And the land equivalent is then found by ‘squaring up’ the loci…

    9. …which means that these are the real boundaries of the locus. The area stays roughly the same, though.

    The key point is this: you don’t have to choose “Coastal Mercantile” to simulate living on the coast and fishing for food. There are mechanisms already built into the system for handling that – it’s all done with Terrain and a more generous interpretation of “Arable Land”.

    Save the “Coastal Mercantile” Model for islands and coastal cultures whose primary endeavor is water-based trade.

    Zone 7, then, should have the same Model as all the other farmland within the Kingdom. I think France is the right model to choose.

    Zone 30 is a slightly more complicated story. For a start, don’t worry about the road – like coastal villages, that gets taken care of later. For that matter, so are the heavy forestation and the local geography – hills and mountains. But this is an area under siege from the wilderness, as explained in an earlier post, which changes the fundamental parameters of how people live, and that should be reflected in a change of model. In this case, I think the Germany / Holy Roman Empire model of lots of small, walled communities is the most appropriate.

    But this does raise the question of where the change in profile takes place. I have three real options: the Zone in its entirety may be HRE-derived; or the HRE model might apply only to the forests; or it might take hold in the hills and mountains only.

    My real inclination would be to choose one of the first two options, but in this case I’m going to choose door number 3, simply because it will contrast the HRE model with the base French version of the hills and forests. In fact, for that specific purpose, I’m going to set the boundary midway through the range of hills:

    5.8.1.6.1.2 Sidebar: Elevation Classification

    Which means, I guess, that I should talk about how such things are classified in this system. There are eight elevation categories, but the categories themselves are based on the differences between peak elevation and base elevation.

    I tried, but couldn’t quite get this to be fully legible at CM-scale. Click on the image above to open a larger copy in a new tab.

    To get the typical feature size – the horizontal diameter of hills or mountains – divide 5 x the average of the Average Peak Elevation range by the average Relief range and multiply by the elevation category number, squared for mountains, or twice the previous category’s value, whichever is higher. Note that the latter is usually the dominant calculation! The results are also shown below. Actual cases can be 2-3 times this value – or 1/2 of it.

    1. Undulating Hillocks – Average Peak Elevation 10-150m, Local Relief <50m; Features 16m (see below).
    2. Gentle Hills – Average Peak Elevation 150-300m, Local Relief 50-150m; Features 32m.
    3. Rolling Hills – Average Peak Elevation 300-600m, Local Relief 150-300m; Features 64m

         -> □ Zone 30 Treeline from the start of this category
         -> □ Normal Treeline is midway through the range

    4. Big Hills – Average Peak Elevation 600-1000m, Local Relief 300-600m; Features 128m
    5. Shallow Mountains – Average Peak Elevation 1000-2500m, Local Relief 600-1500m; Features 417m
    6. Medium Mountains – Average Peak Elevation 2500-4500m, Local Relief 1000-3000m; Features 834 m
    7. Steep Mountains – Average Peak Elevation 4500-7000m, Local Relief 3000-5000m; Features 1668m
    8. Impassable Mountains, permanent snow-caps regardless of climate – Average Peak Elevation 7000m+, Local Relief 5000m+; Features 3336m.

    Undulating Hillocks (also known as Rolling Hillocks or Rolling Foothills) are basically a blend of scraped-away geography and boulders deposited by glaciers. If the boulders have any sort of faults (and most do), they will quickly become more flat than round and start to tumble within the Glacier. When they come to rest, several will be stacked, one on top of another, generally in long waves. There will be gaps in between, which get filled with earth and mud and weathered rock over time, unless the rocks are less resistant to weathering than soil, in which case the rocks get slowly eaten away. In a few tens of thousands of years, you end up with undulating hillocks, or their big brothers. The flatter the terrain, the more opportunity there is for floodwaters to cover everything with topsoil, smoothing out the bumps. The diagram above shows how this ‘stacking and filling’ can produce structures many times the size of individual hillocks.

    A very similar phenomenon – wind instead of glaciers, and sand instead of boulders – creates sandy dunes in deserts prone to that sort of thing. Over time, great corridors get carved out before and after each dune, generally at right angles to the prevailing winds. It can help you picture it if you think of the wind “rolling” across the dunes – when they come to a spot where the sand is a little less held together, it starts to carve out a trench, and before long, you have wave-shaped sand-dunes.

    5.8.1.6.3 Area Adjustments – from 5.7.1.13

    Zone 7 has a measured area of 167.8 units, but that needs to be adjusted for terrain. Instead of the slow way, estimating relative proportions, let’s use the faster homogenized approach:

    Hostile Factors:
         Coast 1.1 + Farmland 0.9 + Scrub 1.1 = 3.1; average 1.03333.
         Coast +0.25 + Beaches -0.05 + Civilized -0.1 = +0.1
         Towns -0.1
         Net total: 1.03333
    167.8 x 1.0333 = 173.4 units^2.

    Benign Factors:
         Town 0.1 + Coast 0.15 + Beaches 0.15 + Civilized 0.2
         Subtotal +0.6
         Square Root = 0.7746
    173.4 x 0.7746 = 134.3 units^2.

    Zone 30 is… messier. Base Area 251.45 units^2.

    Hostile Factors:
         Mining 1.5 +
         Average (Mountains 1.4 + Forest 1.25 + Hills 1.2 = 3.85) = 1.28
         Town -0.1 + Foreign Town 0.1 + River 0.2 + Caves 0.05 + Ruins 0.4 + “Wild” 0.1 = +0.75
         Net total = 1.5 + 1.28 + 0.75 = 3.53
    251.45 x 3.53 = 887.6 units^2.

    Benign Factors:
         Town 0.1 + Foreign Town -0.1 + River +0.1 + Caves 0.05 + Ruin 0.4 + Major Road 0.2
         Subtotal 0.75
         “Wild” = average subtotal with 1 = 0.875
         Sqr Root = 0.935
    887.6 x 0.935 = 829.9 units^2.
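
    Both zone calculations follow the same shape, so here’s a minimal sketch of the homogenized adjustment – the factor values themselves come from 5.7.1.13, the function name is mine, and the ‘wild’ averaging follows the Zone 30 working above:

        import math

        def adjusted_area(base_area, hostile_net, benign_subtotal, wild=False):
            # Scale up by the net hostile factor, then scale down by the square root
            # of the benign subtotal (averaged with 1 where the area counts as 'wild').
            hostile_adjusted = base_area * hostile_net
            benign = (benign_subtotal + 1) / 2 if wild else benign_subtotal
            return hostile_adjusted, hostile_adjusted * math.sqrt(benign)

        print(adjusted_area(167.8, 1.0333, 0.6))             # Zone 7: ~173.4, ~134.3
        print(adjusted_area(251.45, 3.53, 0.75, wild=True))  # Zone 30: ~887.6, ~830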

      To me, this looks very Greek – but it’s actually Gordes, in France, which the photographer describes as a village. One glance is enough to show that it’s bigger than the town depicted previously. Image by Neil Gibbons from Pixabay

    5.8.1.6.4 Defensive Pattern – from 5.7.1.14

    Zone 7 is pretty secure, the biggest threat being local insurrection or maybe pirate raids. A 4-lobe structure of 2½,5 looks about right.

    When I measure out the area protected by a single fort and 4 satellites, I get 47.2 days^2. That takes into account overlapping areas where this one structure shares the burden 50% with a neighboring structure, and the additional areas that have to be protected by cavalry units.

    That means that in Zone 7, there should be S x 134.3 / 47.2 = 2.845 x S of them, where S depends on what the size of a “unit” on the map is, measured in days’ march for infantry.

    S is going to be the same for all zones. I’ve avoided making that decision for as long as I can – the question is, how large is Zomania?

    5.8.1.6.5 Sidebar: The Size of Zomania, revisited

    16,000 square miles – at least, that’s the total that I threw out in 5.7.1.3.

    That’s about the same size as the Netherlands.

    It’s a lot smaller than the Zomania that I’m picturing in my head when I look at the map. It IS the right size if the units shown are miles. But if they aren’t?

    There are two reasons for regularly offering up Zomania as an example. The first is to provide a consistent foundation and demonstration of the principles discussed coming together into a cohesive whole. And the second is for me to check on the validity of the logic and techniques that I’ve described.

    That ‘wrong’ feeling is keeping my subconscious radar from achieving purpose #2. And the Zomania being described being too small – the cause of that ‘wrong’ feeling – means that it isn’t going to adequately perform function #1, either.

    There can be only one solution – Zomania has to grow, has to be scaled up. I want Zone 7 to be comparable to the size of the Netherlands, not the entire Kingdom, which should be comparable to France, or Germany, or England, or Spain.

    A factor of 10? Where would 160,000 sqr miles place Zomania amongst the European Nations that I’ve named?

    UK: 94,356. Germany: 138,063. Spain: 192,466. France: 233,032. So 160,000 would be smack-dab in the middle, and absolutely perfect for both purposes.

    So Zomania is now 160,000 square miles, and the ‘units’ on all the maps are 10 miles each.

    It wasn’t easy sorting this out – it’s been a road-block in my thinking for a couple of days now – triggered by results that seemed to show Zone 7 to be about 0.08 defensive structures in size.

    And that is due to a second scaling problem that was getting in the way of my thinking:

    How much is that in days’ marching?

    In 5.7.1.14.3, I offered up:

        If d=10 miles (low), that’s 103,923 square miles.
        If d=20 miles (still low), that’s 415,692 square miles.
        If d=25 miles (reasonable), that’s 649,519 square miles.
        If d=30 miles (doable), 935,307 square miles.
        If d=40 miles (close to max), 1.66 million square miles.
        If d=50 miles (max), 2.6 million square miles.

    But that was in reference to a theoretical 6 x 4, 12 + 12 pattern. Nevertheless, the scales are there. And they are way bigger than I thought they would be, and way too big to be useful as examples. Yet the logic that led to them seemed air-tight. Clearly, there was an assumption that had been made that wasn’t correct, but this problem was getting in the way of solving the first one.

    Once I had separated the two, answers started falling into place. The numbers shown above are how far infantry can march in 24 solid hours, such as they might do in a dire emergency. But defensive structures would not be built and arranged on that basis.

    If infantry march for 8 hours, they have just about enough daylight left to break camp in the morning (after being fed) and set up camp in the evening (digging latrines and getting fed). That’s the scale that would be used in establishing fortifications, not the epic scale listed. In effect, then, those areas of protection are nine times the size they should be.

    So, let’s redo them on that basis:

        If d=10 miles (low), that’s 11,547 square miles.
        If d=20 miles (still low), that’s 46,188 square miles.
        If d=25 miles (reasonable), that’s 72,169 square miles.
        If d=30 miles (doable), 103,923 square miles.
        If d=40 miles (close to max), 184,444 square miles.
        If d=50 miles (max), 288,889 square miles.

    And those are still misleading, because mentally, I’m thinking of this as the area protected by the central stronghold, and ignoring the satellites. To get the area per fortification, we should divide by the total number of fortifications in the pattern – in the case of the numbers cited, that’s 6×4+12=36.

        If d=10 miles (low), that’s 320.75 square miles.
        If d=20 miles (still low), that’s 1283 square miles.
        If d=25 miles (reasonable), that’s 2,004.7 square miles.
        If d=30 miles (doable), 2,886.75 square miles.
        If d=40 miles (close to max), 5,123.4 square miles.
        If d=50 miles (max), 8024.7 square miles.
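
    Here’s that sketch – a few lines of Python that reproduce both lists from the 24-hour figures quoted from 5.7.1.14.3. Nothing in it that isn’t already stated above: an 8-hour marching day covers a third of the distance (a ninth of the area), and the 6 x 4, 12 + 12 pattern contains 36 fortifications.

    ```python
    # Reproduce the two lists above from the 24-hour-march areas (5.7.1.14.3 figures).
    base_areas = {          # d (miles marched per 24 hours) -> square miles protected
        10: 103_923,
        20: 415_692,
        25: 649_519,
        30: 935_307,
        40: 1_660_000,
        50: 2_600_000,
    }

    FORTS_PER_PATTERN = 6 * 4 + 12   # the 6 x 4, 12 + 12 pattern = 36 fortifications

    for d, area_24h in base_areas.items():
        area_working_day = area_24h / 9               # 8-hour day: 1/3 distance, 1/9 area
        area_per_fort = area_working_day / FORTS_PER_PATTERN
        print(f"d={d}: {area_working_day:,.0f} sq mi, {area_per_fort:,.2f} sq mi per fortification")
    ```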

    Reasonable = 2004.7 square miles, or roughly equal to a 44.8 x 44.8 mile area. For a really tightly packed defensive structure of the kind being discussed, that’s entirely reasonable – and it fits the image in my head.

    In my error-strewn calculation, my logic went as follows:

        ▪ In the inner Kingdom, I think that life is easy and lived fairly casually. That points to the lower end of the scale – 10 miles a day or 20 miles a day.

        ▪ 10^2 = 100, so at 10 mi/day, 16,000 = 160 days march.
        ▪ 20^2 = 400, so at 20 mi/day, 16,000 = 40 days march.

        ▪ That’s a BIG difference. 40 is too quick, but 160 sounds a little too slow. Tell you what, let’s pick an intermediate value of convenience and work backwards.

        ▪ 100 days march to cover anywhere in 16,000 square miles gives 16,000 / 100 = 160, and the square root of 160 is 12.65 miles per day.

    Now, that logic’s not bad. But it doesn’t factor in the ‘working day’ of the infantry march – it needs to be divided by 3. And it DOES factor in my psychological trend toward making the defensive areas smaller, because my instinct was telling me they were too large – but this is the wrong way to correct for that. So this number is getting consigned to the dustbin.

    After all, the ‘hostile’ and ‘benign’ factors are supposed to already take into account the threat level that these fortifications are supposed to address, and hence their relative density.

        ▪ So, let’s start with the “reasonable” 25 miles.
        ▪ Apply the ‘working day’ to get 8.333 miles.
        ▪ The measured area of the defensive structure is 47.2 ‘days march’^2.
        ▪ Each of which is 8.333^2= 69.444 miles^2 in area.
        ▪ So the defensive unit – stronghold and four satellites – covers 47.2 x 69.444 = 3277.8 sqr miles.
        ▪ Or 655.56 sqr miles each.
        ▪ Equivalent to a square 25.6 miles x 25.6 miles.
        ▪ Or a circle roughly 14.4 miles in radius.
        ▪ Base Area 173.4 units^2 = 17340 square miles.
        ▪ Adjusted for threat level, 134.3 units^2 or 13430 square miles. In other words, defensive structures are further apart because there’s less threat than normal.
        ▪ 13430 / 3277.8 = 4.1 defensive structures, of 1 hub and 4 satellites each.
        ▪ So that’s 4 hubs and 16 satellites plus an extra half-satellite somewhere.
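
    For anyone who wants to double-check the Zone 7 chain of arithmetic, here’s a minimal sketch; every input is one of the figures quoted in the list above, and the map units are 10 miles each.

    ```python
    # Zone 7 check, using only the figures quoted above.
    import math

    days_march = 25 / 3                       # the 'working day' march: 8.333 miles
    unit_area = 47.2 * days_march ** 2        # hub + 4 satellites: ~3,277.8 sq mi
    per_fort = unit_area / 5                  # ~655.6 sq mi per fortification
    side = math.sqrt(per_fort)                # ~25.6 miles: the 'equivalent square'
    radius = math.sqrt(per_fort / math.pi)    # ~14.4 miles: the 'equivalent circle'

    zone7_adjusted = 134.3 * 100              # threat-adjusted area: 13,430 sq mi
    structures = zone7_adjusted / unit_area   # ~4.1 defensive structures
    print(round(unit_area, 1), round(side, 1), round(radius, 1), round(structures, 1))
    ```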

    Those satellites could be anything from a watchtower to a small fort to a hut with a couple of men garrisoned inside, depending on the danger level and what the Kingdom is prepared to spend on securing the region. The stronghold in the heart of the configuration needs to be more substantial.

    Okay, so that’s Zone 7. Zone 30 is a whole different kettle of fish.

    I wanted to implement a 3-lobed configuration with more overlap than the four-lobed choice made for Zone 7. And it was turning out exactly the way I wanted it to; every hub was reinforced by three satellites, every satellite reinforced by three hubs. I had the diagrams 75% done and was gearing up to measure the protected area.

    Which is when the plan ran aground in the most spectacular way. There were areas where responsibility was shared two ways, and three ways, and four ways, and – at some points – six ways. It was going to take a LONG time to measure and calculate.

    If I were creating Zomania as an adventuring location for real, I would have carried on. If I lived in an ideal world, without deadlines (even the very soft ones now in place at Campaign Mastery) I would have continued. I still think that it would have provided a more enlightening example for readers, because I would be doing something a little bit different and having to explain the differences and their significance.

    But since neither of those circumstances is the case, and this post is already several days late due to the complications explained earlier, I am going to have to compromise on principle and re-use the configuration established for Zone 7.

    Well, at least that will show the impact that the greater threat level will impose on the structure, but it leaves the outer reaches of the Kingdom less well-protected than they should be. If and when I re-edit this series into an e-book, I might well spend the extra time and replace the balance of this section – or even work the problem both ways for readers’ edification.

    REMINDER TO SELF – 3 LOBES, 1 DAY EXAMPLE

    But, in the meantime…

    Zone 30.
        ▪ Actual area 251.45 square units = 25,145 square miles.
        ▪ Adjusted for threat level = effective area 829.9 square units = 82,990 sqr miles. (in other words, the defensive structures you would expect to protect 82,990 square miles are so closely packed that they actually protect only 25,145 square miles, a 3.3-to-1 ratio.)
        ▪ Defensive Structure = 3277.8 square miles (from Zone 7).
        ▪ 82,990 / 3277.8 = 25.32 defensive structures of 5 fortifications each, or 126.6 fortifications in total. Zone 7 is 69% of the area and had a total of 20.5 fortifications, in comparison.
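
    The same arithmetic, run against the Zone 30 figures just quoted – again, nothing here that isn’t already stated above:

    ```python
    # Zone 30 check, reusing the Zone 7 defensive-unit area.
    unit_area = 3277.8                        # sq mi, hub + 4 satellites (from Zone 7)
    zone30_adjusted = 829.9 * 100             # threat-adjusted area: 82,990 sq mi
    structures = zone30_adjusted / unit_area  # ~25.3 defensive structures
    fortifications = structures * 5           # ~126.6 fortifications
    compression = 82_990 / 25_145             # ~3.3 : 1
    print(round(structures, 2), round(fortifications, 1), round(compression, 1))
    ```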

    What does 0.32 defensive structures represent? Well, if I take the basic structure and ‘lop off’ two of the satellites, then it’s 3/5 of a protected area minus the overlaps. By eye, those overlaps look to be a bit more than 2 x 1/4 of one of those 1/5ths, and since 1/4 of 1/5 is 1/20th, that’s roughly 0.6-0.1 = 0.5.

    If I take away a third satellite, the structure is down to 2/5 protected area minus overlaps, and those overlaps are now 1 x 1/20th, so 0.4-0.05=0.35. So, somewhere on the border, there’s a spot with one hub and one satellite.

    One more point: 3.3 to 1. What does THAT really mean? Well, the defensive structure used has satellites 2.5 days march from the hub. But everything is more compressed, by that 3.3:1 ratio, so the satellites in Zone 30 are actually 2.5 / 3.3 = 0.76 day’s march from the hub. The area each commands is still the same, but there’s a lot more overlap and capacity to reinforce one another.

    Another way to look at it is that there are so many fortifications that each only has to protect a smaller area. 3277.8 sqr miles / 3.3 = 993 sqr miles.

    5.8.1.6.6 Sidebar: Changes Of Defensive Structure

    The point that I’m going to make in this sidebar won’t make a lot of sense unless you’re paying close attention, because the Zone 30 example has the same defensive structure as Zone 7 – it’s just a lot more compressed. But imagine for a moment that there was a completely different defensive structure in Zone 30.

    What does that imply for Zone 11, which lies in between the two?

    You might think that it should be some sort of half-way compromise or blend between the two, but you would be wrong to do so.

    If you look back at the overall zone map for Zomania (reproduced below)

    …and recall that the zones are numbered in the order they were established, a pattern emerges. Zone 1 first, then Zone 2, then Zones 3-4-5-6-7, then Zones 8-9-10-11-12, and so on. Until Zones 29-32 were established, Zone 11 was the frontier, so it would likely have the same defensive structure as Zone 30. Rather than fewer fortifications, it would have them at the same density as Zone 30 – but the manpower in each would be reduced.

    If you know how to interpret it, the entire history of the Kingdom should be laid bare by the changes in its fortifications and defenses.

    But that’s not as important as the verisimilitude that you create by taking care of little details like this and keeping them consistent. The specifics might never be overtly referenced – but they still add a little to the credibility of the creation.

    5.8.1.6.7 Inns in Zone 7 – from 5.7.3

    Zone 7 is noteworthy for NOT having a major road – that’s on the Zone 11 / Zone 6 side of the border. Some of the inns along that road, however, may well be over that border – it’s a reasonable expectation that half of them would count. But only along the stretch where the border runs next to the road – there’s a section at the start and another at the end where the border shifts away.

    But there’s a second factor – what is the sea, if not another road to travel down? And Zone 7 has quite a lot of beach. The reality, of course, is that these are holiday destinations, and places for health recovery – but it’s a convenient way of placing them.

    So that’s two separate calculations. The ‘road that is a road’ first: There are actually two sections. The longer one runs through Zones 6 and 11, as already noted; it measures out at 15 units long, or 150 miles.

    The second lies in Zone 15, and it’s got a noticeable bend in it. If I straighten that out and measure it, I get 5 units or 50 miles.

    Conditions:
        Road condition, terrain, good weather = 3 x 2.
        Load = 1 x 1/2.
        Everything else is a zero.
        Total: 6.5.
    6.5 / 16 x 3.1 = 1.26 miles per hour.
    1.26 mph x 9 hrs = 11.34 miles.
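
    In code, the whole speed calculation is just a couple of lines. The divisor (16) and base speed (3.1 mph) are the constants carried over from the 5.7.3 travel rules, and the 9-hour travelling day is as stated above:

    ```python
    # Ordinary travel speed on this road, per the conditions listed above.
    modifiers = 3 * 2 + 1 * 0.5                  # road/terrain/weather = 3 x 2, load = 1 x 1/2
    speed_mph = round(modifiers / 16 * 3.1, 2)   # 1.26 mph (rounded, as in the text)
    day_distance = speed_mph * 9                 # 11.34 miles per day of ordinary travel
    print(speed_mph, day_distance)
    ```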

    Here’s the rub: we don’t know exactly where the hubs and satellites are in Zone 7, only how many of them there are to emplace. But it seems a sure bet that those areas where the road and border part ways do so because there’s a fortification there that answers to Zone 6 or Zone 11, respectively. And that means that we can treat the entire length of the road as being between two end points.

    We know from the defensive structure diagram that the base distance from Satellite to Hub is 2 1/2 days march, and that there’s a scaling of x 1.0333 (hostile) x 0.7746 (benign) = x 0.8 – and that benign factors space fortifications further apart while hostile ones bunch them together, so this factor is a divisor when calculating distances. We know that 8.333 miles has been defined as a “day’s march”.

    If we put all that together, we get 2.5 x 8.333 / 0.8 = 26 miles from satellite to hub.

    Armies like their fortifications on roads, it makes it faster to get anywhere. Traders like their trade routes to flow from fortification to fortification, it protects them from bandits. The general public, ditto. If a road doesn’t go to the fortification, people will create a new road and leave the official one to rot. So it can be assumed that the line of fortifications will follow the road, and be spaced every 26 miles along it, alternating between hub and satellite.

        150 miles / 26 = 5.77 of them.
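
    A small sketch pulls the spacing figures together – fortification spacing from the satellite-to-hub distance just derived, and the number of one-day travel stages along the same 150-mile stretch (the latter is simply the road length divided by the 11.34 miles calculated earlier):

    ```python
    # Spacing along the 150-mile road section, from the figures established above.
    days_march = 25 / 3                                     # 8.333 miles
    hostile_benign = 1.0333 * 0.7746                        # ~0.8; a divisor for distances
    fort_spacing = round(2.5 * days_march / hostile_benign)     # ~26 miles, satellite to hub
    road_length = 150
    forts_on_road = road_length / fort_spacing              # ~5.77 fortifications
    day_stages = road_length / 11.34                        # ~13.2 one-day stages of ordinary travel
    print(fort_spacing, round(forts_on_road, 2), round(day_stages, 1))
    ```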

    It’s an imperfect world; that 0.77 means that you have one of three situations, as shown below:

    The first figure shows a hub at the distant end of the road. The second shows a hub at the end of the road closest to the capital. And the third shows the hubs not quite lining up with either position.

    But those aren’t the actual ends of the road – this is just the section that parallels the border of Zone 7, or vice-versa. So the last one is probably the most realistic.

    Now, let’s place Inns – one every 11.34 miles. But we have to do them from both ends – one showing 1 day’s travel for ordinary people headed out, and one showing them heading in. Just because I’m Australian, and we drive on the left, I’ll put outbound on the south side and inbound on the north.

    Isn’t that annoying? They don’t quite line up – to my complete lack of surprise. Look at the second in-bound inn – it’s about 20% of a day short of getting to the satellite, and that puts it so close that it’s not worth stopping there; you would keep going.

    Well, you can’t make a day longer, but you can make it shorter. And that makes sense, because these are very much average distances.

    I’ve shortened the days for the ordinary traveler – including merchants – just a little, so that every 5th inbound Inn is located at a Stronghold, and every 5th outbound inn is located at a satellite. Every half-day’s travel now brings you to somewhere to stop for a meal or for the night.

    It’s entirely possible that not all of these Inns will actually be in service, it must be added. Maybe only half of them are actually operating. Maybe it’s only 1/3. But, given its position within the Kingdom, there’s probably enough demand to support most of these, so let’s do a simple little table (a quick dice-rolling sketch follows it):

        1 – Inn functional
        2 – Inn functional
        3 – Inn functional, but 1/4 day closer
        4 – Inn functional, but 3/4 day farther away
        5 – Inn not functional
        6 – Inn not functional, and neither is the next one.
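
    Here’s the dice-rolling sketch promised above. The only thing in it that isn’t in the table is the assumption of 13 inn sites per side of the road (150 miles at roughly one site per 11.34-mile day of travel); a result of 6 also knocks out the next site along, as the table states.

    ```python
    # Roll the inn table above, once per inn site along one side of the road.
    import random

    def roll_inns(count):
        """d6 per site; a 6 also takes out the next site along."""
        results = []
        skip_next = False
        for _ in range(count):
            if skip_next:
                results.append("not functional (knocked out by the previous 6)")
                skip_next = False
                continue
            roll = random.randint(1, 6)
            if roll in (1, 2):
                results.append("functional")
            elif roll == 3:
                results.append("functional, 1/4 day closer")
            elif roll == 4:
                results.append("functional, 3/4 day farther away")
            elif roll == 5:
                results.append("not functional")
            else:
                results.append("not functional")
                skip_next = True
        return results

    for i, status in enumerate(roll_inns(13), 1):
        print(f"Inn {i}: {status}")
    ```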

    Applying this table produces the following (for some reason, my die kept rolling 3s and 6s):

    Even here, in this ‘safe’ part of the Kingdom, travelers will be forced to camp by the roadside.

As the Table Of Contents makes clear, there’s still a lot to come in this part. It will continue in part 5c!


All Spiders (And Snakes) Are Not Alike


Snakes & Spiders in RPGs tend toward one-size-fits-all construction. Use reality to make them exceptional!

Image by Alan Couch, CC BY 2.0, via Wikimedia Commons

I got curious this morning.

Australia is well-known around the world for the number and variety of deadly fauna we live alongside.

By way of comparison, another risk statistic: the likelihood of your home being robbed drops by a factor of somewhere between 100 and 1000 if you live above the ground floor – to the point that if you are not going to be away for an extended period (more than a day) and have no neighbors on the same level, it’s perfectly safe to leave your front door unlocked for a few hours while you go shopping, for example. (Doing so freaks a lot of urban dwellers out, though – it’s far more comfortable for those coming from a place of relative security like a small country town.)

So I suddenly wondered, “How much do Sydney Funnelweb Spiders like to climb? What are the rates of reported bites taking place on any level higher than the ground floor?”

I wasn’t able to answer the second because it’s not a statistic that is routinely recorded, but was able to get an answer to the first, based on the behavioral traits of the spiders in question. And that answer got me to thinking about Spiders and Snakes in RPGs.

Funnelweb Spiders

These are, perhaps, the most deadly spiders in Australia. Nevertheless, there have been few if any fatal attacks since the anti-venom was developed.

Sydney Funnelweb Spiders (Atrax robustus) are generally terrestrial (ground-dwelling), but they are capable of climbing under specific circumstances.

Sydney Funnelweb Spiders are primarily known for building their silk-lined tubular burrows in sheltered, moist, cool habitats, usually under logs, rocks, or in suburban gardens. The females are especially sedentary and rarely leave their burrows.

The most common encounters occur with wandering males during the warmer months (especially November to April), particularly after rain, as they search for mates. This wandering behavior often leads them into backyards, garages, and houses, or they fall into swimming pools.

The species is overwhelmingly terrestrial (ground-dwelling). Their burrows are in the soil, under rocks, or in logs. The only ones that typically leave the burrow are the wandering males looking for a mate.

When males wander, they move across the ground and seek shelter at dawn. They are most often found entering homes by crawling under doors or sometimes through other ground-level openings.

They generally CANNOT climb smooth surfaces like clean glass, plastic, or very smooth painted walls due to a lack of specialized adhesive pads (like those found on many other spiders). This is common lore among experts.

They CAN climb textured or rough surfaces like rough brick, steps, or rough-barked trees, as their claws can find purchase. In fact, some related species, like the Northern Tree-dwelling Funnelweb (Hadronyche formidabilis), are known to live meters above the ground in tree bark.

So, while they prefer to stay at ground level, a Sydney Funnelweb Spider could potentially climb a textured wall or staircase to reach an above-ground level, but this is not their typical, preferred mode of movement or habitat.

By far the most likely source of an above-ground attack is from a Spider being carried up on furniture or boxes being moved (carried up by a human) or an accidental journey in a lift – by definition, unnoticed by the user of that lift.

Bio-security Barrier

Living on an above-ground level in an apartment building significantly reduces your risk of encounter.

You can treat living above the ground floor as a form of “bio-security” against Funnelwebs (and many other ground-dwelling risks) that is analogous to the drop in burglary risk mentioned earlier.

Comparison: Huntsman Spiders

Huntsmen are climbers; they like to live high up on walls and on ceilings. Most varieties (maybe all) don’t build webs at all. They are incredibly fast and often very large (bigger than an open hand with the fingers splayed out as far as they will go). They are also adept at squeezing themselves through gaps that are much smaller than their bodies.

While most Australians don’t welcome the intrusion of a Huntsman into the home, it’s rarely a cause for panic. They are actually fairly shy creatures – just getting close to one and staring at it for a few minutes can be enough to get them to leave on their own when you then leave the immediate vicinity and don’t look at them – they treat this as coming across a predator that isn’t hungry enough to have them for lunch, a lucky escape, ‘now let’s get the hell out of here before it comes back!’

Huntsmen live on cockroaches, flies, and other far more annoying insects, so there are exceptions to that general rule. For the most part, in Australia, if you leave them alone, they will earn their keep.

But for the especially arachnophobic, that’s not an option, and there’s always the risk of a visitor freaking out, so it’s common practice to remove them gently and release them outside. Again, this is viewed as a predator ‘toying’ with them cruelly before letting them go – the last place they are likely to go is where they were removed from.

They have been known to scuttle inside cars and can even work their way through the door-seals of a closed door or a window that’s only opened a crack – 1/4 of an inch is more than enough. That’s why you’ll often see videos on the internet of spiders inside cars or on windscreens, and sometimes the braver souls will catch them, open the door, and release them. No Aussie questions the validity of these videos, they are far too plausible for that.

Huntsmen CAN climb smooth surfaces like glass, and can cling to a windscreen at highway speeds. They may not like the experience, though – I can’t attest to that, either way.

The largest one I’ve ever seen was the size of a dinner-plate. I think they can grow a little larger than that, but not much. But size alone makes them terrifying to some.

Snakes

The same is true of the most venomous snake varieties here, provided there is no access for them to get into the ceiling of the ground floor space.

Australia’s most medically significant snakes (like Eastern Brown Snakes or Tiger Snakes) are also strongly terrestrial. While they can climb surprisingly well, they are not naturally adapted to navigate the smooth, high, sheer walls and stairwells of a multi-story building.

Awareness of the ground-floor ceiling / roof void is key. If a snake gets into the space above the ground floor (by climbing a vine, tree, or rough surface to the roof-line and entering through a small gap), it is primarily a risk to the ground-floor residents. If you live on the first floor or higher, this risk is eliminated unless there is some opening in that crawlspace upwards that the snake is small enough to take advantage of – heating ducts or something, perhaps.

There is an evolutionary rationale for this: Because they are principally terrestrial, they are more likely to encounter predators, and so are more likely to develop defenses against those predators. So the general rule is, the less a snake likes to climb, the more likely it is to be dangerous.

Carpet Pythons

Carpet Pythons, and constrictors in general, are far stronger and better able to climb. They can be viewed as the Snake-world’s equivalent of Huntsmen. Their preferred attack mode is to leap / fall on prey from above or from the side and wrap themselves around it, squeezing it until it dies, then swallowing it whole.

The Second Bio-security Barrier

Even the climbing species tend to stay close to where the food is, and that’s closer to the ground. While they can climb higher than the first floor above ground level, there is little advantage to them in doing so, so there is, effectively, an equivalent ‘bio-security barrier’ that’s just one floor above the first. Encounter incidence drops dramatically at such heights. Part of it might be that while robust, strong, climbing snakes and spiders can survive a one-story drop completely unharmed, there is far greater risk when falling two or more stories. Just like people, they are not built for extreme heights, which are therefore scary to some – and thrilling to others; I wonder if that’s true in the Animal kingdom as well?

Spiders In RPGs

While there can be exceptions of small-but-deadly spiders taken from the real world – Black Widows, Tarantulas, and so on – for the most part, RPGs treat Spiders as “one stat block does all”. They are all venomous, all climbers, all web-spinners, all generic except for size. At most, there might be cosmetic variations.

Simply dividing the world of spiders into two – terrestrial types vs climbers – and applying the difference to determine capabilities – is a direct infusion of verisimilitude into spider encounters. Go back and read the spider encounter in The Hobbit again, and this time don’t let yourself get distracted by the conversations and “Attercop”, and you will find that the encounter has a greater level of credibility because the behavior of the spiders feels realistic. There are species whose venom doesn’t kill right away, and who surround their prey in webbing and leave it hanging to die on its own, because it’s harder to tear flesh from bone when it hasn’t started to rot.

Snakes In RPGs

These fare somewhat better, but the same truth can ultimately be found here in an awful lot of cases. It might be, in part, due to varieties of deadly snake being recognized culturally with greater frequency – the cobras with their flaring necks, rattlesnakes with their rattles, and so on. When these get super-sized, some of their traits – those known to the referee – tend to go along for the ride. Many systems explicitly detail a “Giant Boa” or other constrictor.

But, past a certain point, the same truth is there – all snakes beyond a certain size are venomous and share the same behaviors and attitudes – and they can benefit in the same way from a little differentiation.

Example: Giant Swampy Tree-snakes

You don’t have to ground your ideas in reality; the mere fact that they are different from the ‘norm’ gives them instant credibility and interest. As an example, let me present to you the Giant Swampy Tree-snake, better known as the Green-backed Swamp Viper.

My chain of thinking:

  1. I don’t know what the defining characteristics of a Viper are, but the name sounds cool.
  2. These snakes cannot swim. In a swampy environment, that’s the key point of distinction, from which everything else will flow.
  3. To cross small rivers and streams, they learned to climb one tree, head out along its branches until it was above another tree’s branches, then drop down into it.
  4. Evolution favored smaller, lighter specimens, but required the retention of above-average strength relative to their size.
  5. After a while, they learned how to wrap their tails around the end of a tree-limb and swing, greatly increasing their chances of traversing terrain. This favors a longer, thinner body.
  6. Their eyesight grew more acute and their reactions faster in order to better target neighboring tree-limbs.
  7. Once you have a locomotive ability that doesn’t require descending to ground level, there is a survival benefit to not doing so most of the time. The only reason to drop to ground level is to attack prey, and once it’s in your mouth and on its way to being digested, you would head straight for the nearest tree and climb.
  8. Minimizing the time spent on the ground naturally demands a quicker-acting venom. Smaller body sizes give this snake a lower metabolic demand, so smaller prey, less frequently, becomes sufficient. The improved eyesight aids in the resulting development path. So the snake has fewer doses of its venom but it’s more potent.
  9. Take all of the above changes and repeat them because they are not just a change, they are a trend.
  10. Swinging from tree-limb to tree-limb imposes a natural length limit of average height above ground plus enough length to firmly grasp the tree-limb – two or three coils around, so if the tree-limb is 1/2 an inch in diameter, three coils would be 3 x pi x 1/2 = 1.5 pi = 4.7 inches.

In reality, this looks a little cumbersome in terms of the snake releasing its grasp at the end of its swing – if it wants to leap from one tree to another, I’d probably take one coil out and make the added length 2 x pi x 1/2 = pi = 3.1 inches.
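
A quick worked version of that length-limit rule of thumb. The half-inch limb diameter and the coil counts come from the text above; the 20-foot average limb height is purely an illustrative assumption, not a canonical figure:

```python
# Length limit ~= average limb height above ground + grip length
# (grip = coils x circumference, circumference = pi x limb diameter).
import math

limb_height_ft = 20        # ASSUMED average height of a usable limb, for illustration only
limb_diameter_in = 0.5     # from the text above
coils = 2                  # two coils, per the 'leaping' variant

grip_length_in = coils * math.pi * limb_diameter_in   # ~3.1 inches
max_length_ft = limb_height_ft + grip_length_in / 12
print(round(grip_length_in, 1), round(max_length_ft, 2))
```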

Put all of these changes into an appropriate stat block, and you have something unique, interesting, unexpected, fantastic – and yet, it has a ring of authenticity.

Snakes that live in trees tend to evolve to have a diameter 1/2 the diameter of the branch, at most. If they stay in close to the trunk, they can be enormous in size; if they head for the outer branches, they shrink – fast. And maximum length, as said, tends to be height above ground in the average tree-limb plus a few inches.

Final Tips

Hunting Vs Defense: A creature’s venom can have either purpose or both.

If it’s for hunting, the quantity will be enough to bring down its usual prey quickly. Every second that a snake or spider spends waiting for its prey to conk out (dead or unconscious) is another second in which the spider or snake itself can be attacked.

If it’s for defense, the quantity and deadliness will follow the same logic with reference to whatever it usually has to defend itself from.

If both, half-way adaptations become likely – smaller venom amounts but the speed for multiple attacks, for example – so that venom is not wasted on prey when it might be needed for defense.

The same logic still applies when you scale these creatures up.

Before you go, I have a couple of announcements.

Monday Deadlines Erased (well, lightly scuffed)

I (or Johnn) have been publishing Campaign Mastery every Monday at around Midnight my local time since 2008 with just one extended break (not of my choice). Back then, we followed the usual formula of 1,000-2,000 words to a post. For the first ten years, we published twice a week, Mondays and Thursdays.

As of this post, that changes. When I started, I could knock out a post in one day – I often didn’t start writing until the Monday Morning, though I liked to have time up my sleeve by writing the next post early.

I had a set routine – Monday, CM; Tuesday, Pulp; Wednesday, the real world; Thursday, CM; Friday, prep the next campaign to be played on the monthly rotation cycle; Saturday, play; Sunday, personal time.

Then the posts started getting longer and more complicated. First Sundays and then Saturday Nights and then Tuesday Nights all got added to the CM schedule, one at a time. Lately, it’s been Thursday, Saturday, Sunday, Monday – more than half the week – and that often hasn’t been enough.

A number of times, a post has been almost – but not quite – completable before deadline, come Sunday / Monday, and I would have to set it aside and throw something together at the last minute, when another day or two would have seen it good to go.

So, as of this post, there’s a new publishing schedule here at CM:

Something New Every Week.

Where possible, I’ll stick to the old deadline, but when something’s not quite ready to go, I’ll give it the extra time that it needs and publish when it’s ready. If I get to Thursday and it’s still not ready, I’ll do the ‘something quick’ trick – and aim for the delayed post to appear the following week.

Partial Posts

When it’s a major series, like Trade In Fantasy, I’m going to pull a new trick out of my hat, the Partial Post. In a nutshell, come Monday or Thursday, I’ll publish whatever’s ready to go, no matter how minimal it might seem. The following week, I’ll publish everything done since the last post as “Part 5b” or whatever, but I will also update the incomplete post with the new content.

Like I said above, something new every week. I’ll even take my usual Time Out breaks in the middle of working on the larger post instead of waiting until it’s complete.

The “Part 5b”-style posts will be minimal – no updated TOC, a repetition of the same feature image, no commentary – just straight ahead from where I left off, with only a single text panel at the top with a link back to the main post.

When one of these drops, it will also signify that there may have been retroactive amendments to the content of preceding parts – these will be Works In Progress, not complete until the main post is complete.

And, on that main post, there will be a similar text panel which will keep track of the status of that post.

Right now, I’m working on Chapter 5 part 5. So the first part of it will get uploaded and published as “Chapter 5 Part 5 (Incomplete)”.

It will be followed by “Chapter 5 part 5a”, with the date and text saying “partial post, click here to read the more complete version” in a panel at the top. And, when it drops, the content will be integrated with the old “Chapter 5 part 5”, the end-of-post blurb will be updated to indicate whether or not Part 5 is complete or will continue, and a text panel will appear at the start, showing the date, and “Integrated part 5a”.

How well this will work remains to be seen, but the theory is sound, and hopefully readers will stick around.

What’s that? Why post separately at all? There are a number of subscribers who get Campaign Mastery delivered by email who won’t get the updated version of “Part X”. Posting the additional text means that they will still get the new parts.

Taking Time

I have a number of major projects on the go right now.

  • I’m illustrating a complex machine for the Warcry campaign – so far, it consists of more than 1800 layers.
  • When it’s finished, I have to write description and narrative around it in the adventure for which it’s written.
  • Then I have to finish the adventure – and I have a hard deadline of early January for this task. So far, it’s 41,200 words long and about 80% complete. It contains 97 original images and 7 sound effects (so far)!
  • Meanwhile, there’s a Pulp adventure that’s almost complete but needs some finishing touches. It has meant creating an 88-page offline website with 500 images, not counting ones that haven’t actually been used, and more than 129,000 words of text. I have one last page of the website to finish and then it’s done. The entire (still incomplete) “Value Of Material Things” series is a spinoff of the work put into this website. The adventure itself is 16,100 words, is about 95% complete, and also contains about 60 illustrations.
  • But before I can finish it, I need to complete work on another article for CM that currently stands at about 90% complete and is almost 9000 words long (there will be some compression in editing and many of those words are HTML, so it won’t be that long when it’s published).
  • After that, there’s another Pulp adventure that’s 80% complete, maybe 90%, but it needs a complicated illustration that I’ve barely been able to start. It needs to be complete by May 2026. So far, it has 184 illustrations (some originals, many hand-edited) and is 24,300 words long.
  • And then there’s my Superhero campaign. The next adventure is more or less complete at 7200 words and 28 illustrations, most of them original, but I have a growing itch to go back and add to it. But I also have to find time for the adventure that’s to follow it, and I haven’t even started on that beyond basic notes. It’s likely to run to 10-15,000 words.
  • And, meanwhile, the current Dr Who adventure currently stands at more than 56,000 words and is only 22% complete. 7200 words of that total have already been played (one full session), so this is turning out to be a monster. So far, it has 33 original illustrations and (in another first for me) 5 animations. Because play has already started, this has been a high priority for me. And the rest of that adventure needs to be illustrated – that’s probably another 67 or so images, maybe more, to be sourced. Most of those won’t be originals, though – I just have to find the ones I need on the internet.

Put all that together.

  1. 718 illustrations, most of them original, with 2 more major ones in progress and 78 more to be sourced.
  2. 7 sound effects. And 5 animated short movies.
  3. 10 documents & 1 88-page website.
  4. 282,800 words. That’s approaching three full-length novels.
  5. With 67,200 still to write by February. And another 160,400 to follow later in the year.

That’s doable, but it means stealing back some of the time that Campaign Mastery posts have soaked up in recent times (hence the Partial Post concept). So, in addition to the measures stated above, more time is going to be diverted away from writing longer blog posts for the next few months. And, on top of that, I will be taking a two-week vacation covering Christmas and New Year’s Day.

There’s a lot to do, so I’d better get on with it!


Once We Were Heroes and the AI Controversy


This post is a review of Once We Were Heroes by Fool Moon Productions, which uses art that’s AI-Generated. So I’ve had to set some ground rules.

This post features AI-generated art. If you object to that art or its use, you can click on This Link to read a plaintext version of the article.

As the owner/operator of Campaign Mastery, I have spent a lot of time thinking about what the site’s policy should be with respect to art by Generative AI, and the text below is the result.

Campaign Mastery Policy on AI-generated Art

1. Campaign Mastery will not use or show AI art unless it is profoundly essential to the content of an article. “profoundly essential” includes reviewing a product, tool, or published work which uses AI-generated art, or when the art itself is the subject of the article.

2. AI art will never be used to replace any original art that would normally have been commissioned from or provided by an actual human artist by Campaign Mastery.

3. When AI art is used, a disclaimer will always warn readers, as shown above. This will precede any significant article content and especially any AI-generated content. Whenever possible, a link will be provided to a plaintext, AI-free version of the article.

Campaign Mastery Policy on AI Text

1. While text generated by an AI may be quoted, it will never be used to replace human-generated text. All text published on Campaign Mastery must be substantially written, analyzed, and edited by a human author.

2. Text generated by an AI may only be quoted for the purposes of analysis, illustration, or conversation (eg demonstrating prompt engineering). Any such text will be clearly identified as to source. Analyses performed by an AI must be converted into a HTML-code table.

3. Outside of direct quotation, AI may have been used for research or brainstorming, or generating outlines or summaries of other texts. Every such use will be verified for accuracy by a human and the final text will always be written and edited by a human.

4. No third-party submissions which are obviously AI-Generated in the exclusive opinion of the site owner will be accepted for publication.

Campaign Mastery Policy on AI Audio and Video

1. While Campaign Mastery is not an AV site, from time to time Audiovisual materials may appear, and some of these may be AI generated in some respect.

2. If any aspect of these materials (eg AI voice-overs, background music, etc) is significantly AI-generated, the materials will be treated as though they were “AI Art” as per the policies stated above.

This text has been added to the policies page and is effective as of this post.

Human-AI Collaboration?

I couldn’t find what I wanted to use to illustrate this post – an Artificial Artist painting a question mark. This is a next-best alternative, based on robot hand human handshake by Mohamed Hassan, to which I added a question mark image by Gerd Altmann in the background, and some color tweaking to get them to match. Both images were sourced from Pixabay using their “authentic” (human only) setting.

The AI controversy – an overview

I always knew that this day would come eventually. I had expected that the occasion would be when I wrote and published an article on how I use AI within my campaigns, and the techniques and limitations that come with it – but that article isn’t written yet, because the uses that are most illustrative come from an adventure that hasn’t yet been played.

Or another article that’s been drafted on the limitations of AI and how it could be improved – and on how to get the most out of what’s already here.

Polarization, Content, and Hard Lines

The issue of AI-generated art and other content is one of the most polarizing issues in the hobby. It has forced publishers, creators, GMs, and customers / players to establish rules that are starkly black and white, often with the best of intentions – but when those clash, the hobby itself can be the loser.

I’m more in favor of a softer line that acknowledges gray areas with transparency. There are certain ways that I consider ethical when it comes to the use of AI, and others in which it clearly is not. Asking for anything “in the style of” a living artist is a big no-no, for example. Asking for something in the style of a long-dead artist, that’s more of a gray area.

I regard AI as a tool, and like all tools, it can be used for good or ill. Throw in a healthy dose of pragmatism, an acknowledgment that no black-and-white policy can satisfy everyone and that there can be good and valid reasons for the ethical use of the tool, and you find yourself in the same uncomfortable middle-ground that I occupy, and that the policies stated above are intended to encapsulate and define.

Ethics and Labor Rights

This actually breaks down into a number of related concerns. First, there’s the conflict between how generative AIs learn to create their content and respecting the rights and integrity of human creators.

Most AI models are trained on massive accumulations of data scraped from the internet with no concern for the sources’ rights, and without recompense. And that irks those who support the rights of writers and artists. I’m one of them, so naturally, my sympathies align more with those who are critical in this regard.

But that perspective is nuanced by the reality of the internet. Once material is publicly available, it’s there for anyone to refer to and use as reference or inspiration. So long as sufficient input from outside that source is incorporated, and in a non-superficial way, so long as you are building on what has been made available and not simply copying it outright, how is what an AI does any different from what a human writer does?

If I want to create an image of a clown, and I start by doing research using Google Images on how other artists have depicted clowns to get ideas, that’s generally considered fine – because at the end of the day, I have to synthesize all those elements and ideas together into my own representation of “a clown”. I don’t generally restrict or place boundaries on those searches; I want as much fuel for the creative fires as I can get.

It’s a long-held maxim – if you don’t want something to be public, don’t put it on the internet.

Here’s a bone to chew on: if it’s valid and legal for a human to be educated by viewing online content, how is it not valid and legal and Fair Use for an AI system to use it in the same way, for the same purpose?

Shades of gray.

Some content creators argue that the results are a form of “unlicensed derivative work”. And that might be true, if only that content creator’s works were used to train the AI – but with every outside source, the purity of that argument gets eroded.

There comes a point where so many sources are being fused into one that you have to draw the line. It’s like music – the difference between doing a cover version of a Beatles song and drawing inspiration from the Beatles is clear and obvious. Both are forms of copying – but the nuance is completely different. One requires the payment of a license fee to the songwriters, and the other doesn’t. Doing them without that payment is legal and ‘fair’ in one case – and completely the opposite in the other.

You can’t copyright the D7(diminished) chord just because you’ve used it in a song. It’s there for anyone else to use.

What’s more, consider the necessary ‘spark of originality’ that distinguishes human creation from artificial construction. In order to generate a good image, a human user of an AI has to specify a prompt, and the general rule is that the more detailed the prompt, the better the result. Is this not providing the needed ‘spark of originality’ into the resulting image?

The more vague and generalized the input, the weaker this line of argument, I admit. But where do you draw the line? How many of these creators started out by imitating someone else’s work?

Shades of gray.

I don’t see how you can end up anywhere else in the argument if you’re applying any half-way reasonable standards.

Devaluation Of Creativity

There is a widespread fear among freelancers (artists, mapmakers, writers, editors, you name it) that AI tools will drastically reduce the market rate for their services. Why pay $500 for a unique monster illustration when you can generate a passable image for nothing, or close to it?

And they have a valid point – up to a point.

The keys to deciphering this argument are subtle. AI images may be ‘passable’ but they aren’t going to be as nuanced as a bespoke image from a human artist. This is stealing my own thunder to a certain extent, but here’s the reality: The more detailed you make an AI prompt, the more you are likely to get something close to what you want – but the more likely it is that some crucial element, spelt out in specific detail, is going to be left out completely. And if what you’ve requested isn’t something that people routinely post images of on the internet, you’re going to struggle – try generating an image of a “crashed alien spacecraft” and what you generally end up with is a flying saucer hovering serenely in the air. People don’t take many photos of crashed objects! And if the AI can’t learn what it should look like, it can’t create something like it.

What this argument is really pointing out is that amateur-prompted AI art raises the bar of amateur art to something with much of the gloss of the professional artist. But the professional will always be better at capturing originality and bringing it to the creative table. The differences may be more nuanced than a black-and-white line drawing, but they are real.

You’re still getting something for your $500 that you don’t get from the cheaper alternatives – but it’s not the same thing as it used to be.

And this argument also smacks of similarity to the opponents of every technological advance and consequent job losses. I’ve heard those arguments advanced against everything from the word processor to assembly-line robots. In every case, there has been more employment afterwards than before, once things settled down – but in some cases, those have mandated an evolution of skill-set, and in others, a complete replacement. So the truth of the matter in this respect is, once again, nuanced.

The similarity not only weakens this argument considerably, it points out, more starkly, my previous point – you pay for the services of a human artist for what he or she can provide that the cheaper substitutes can’t. Will that result in a realignment of the market rates? Possibly. But that’s life, it happens to everybody, like it or not – things change, and you either evolve accordingly, or you stagnate.

But there is a sting in the tail – the proposition that this leads to a “race to the bottom,” where only AI-assisted production can compete on cost. And that’s a point that I can’t argue with, and hence my comments on a possible realignment of market rates.

That said, it can also be suggested that AI generators are tools – some will learn to use them more effectively than others, just as some people are better at watercolors than with oil paints. The solution to this problem is for the creators to embrace AI and use it to increase their productivity so that they can accept five or ten times as many commissions paying one-fifth or one-tenth of what they used to command, while leveraging their artistic expertise.

So this line of argument is not as cut-and-dried as it first seems.

Specificity Of Style

Artists often feel that AI allows users to “mimic their unique style” without the artist receiving credit or compensation, effectively eroding their brand and professional identity.

For me, this is a far stronger argument than the preceding one, but I think the proposed remedy (don’t use AI, anti-AI, no AI, no, no, no!) is the wrong line to be pursuing. As I said earlier, I view creating something in the style of a living artist to be an ethical no-no. Once an artist is no longer available to take commissions by virtue of being dead, that’s a different story.

I think the correct remedy here is an extension of copyright protection to include the “distinctive style” of an artist. That’s already implied in the existing protections – more strongly in some fields than others. I always remember the time John Fogerty was sued by his previous record label for sounding too much like himself. That case established (or reinforced) the principle that each artist carries with him a uniqueness of style that cannot be licensed or sold and is emphatically NOT included in the rights purchased when you acquire control over an artist’s work product.

I think the existence of generative AI advances the demand for such to be formalized and generalized to cover all modes of creativity, be it visual, or textual, or audible. I would include under that umbrella, a singer’s unique voice.

There would still be gray areas. A guitarist could argue that they had a distinctive and unique playing style, for example, and that style should merit protection. But they would have to prove that uniqueness in comparison to others within that musical field.

The final jigsaw piece would be to require AI interfaces to explicitly block requests that enter protected fields. “In The Style Of” is permissible once the ‘copyright’ on that uniqueness has expired, and should be blocked the rest of the time – UNLESS you are the artist in question, I suppose. But that gets murky, so let’s keep it clean, and ban them from being lazy, too.

Publisher and market integrity

Large TTRPG publishers have taken explicit stances, and the community judges them harshly when they waver.

Major players like Paizo (Pathfinder/Starfinder) and many prominent independent publishers have issued clear policies stating they will “only accept human-created artwork” for their products, usually citing ethical concerns regarding data scraping. This is often driven by a commitment to supporting the freelance community.

Wizards of the Coast faced significant backlash when multiple freelancers and even their own in-house content creations were discovered to contain AI-generated elements, despite WotC claiming an anti-AI stance. These incidents reinforced the community’s demand for strict auditing and absolute transparency.

Since it’s my position that human-created artwork is superior to AI-generated content in specific ways, I don’t agree with the reasons cited for these policies. Many criticize AI-generated art for lacking the “soul, texture, and character” of human-created fantasy illustrations. In the TTRPG world, art often sells the ‘vibe’ of the setting, and AI is frequently accused of producing generic, overly smooth, or inconsistent visuals that break immersion – and that goes directly to my contention that human art is better in key respects. But I do agree with the policies themselves as a general principle.

It’s when people seek to extend these policies down the scale to smaller publishers that I think problems start to arise. But I have to admit to being a bit conflicted over that problem.

Ideologically, I’m egalitarian; I favor “one rule for everybody”. And yet, in this circumstance, I think that there need to be different standards applied to different scales, and I see the good and ethical use of AI generation as ‘raising the bar’ for the small operators to the point where they are keeping the big-ticket producers honest.

My policies and ideologies don’t hold all the answers, and that admission pushes me back into the shades of gray. If you can afford to, you should always hire human artists because the results will be better. If you can’t afford to, I’ll give you a pass for using AI-generated art. So there are two rules and a lot of gray in between them. But no one hard-and-fast rule or principle yields a satisfactory answer in every case, and I do NOT agree with anyone that tries to implement one. I’ll respect their position in terms of their own products or pages – to the extent that I’m offering a plaintext version of this article, for example – but that is as far as I’ll go.

I generally think hard-liners are part of the problem in any field, anyway. Having ethics and principles isn’t a problem; expecting them to hold all the right answers every single time, that’s a problem, and a serious one.

AI Limitations

I’m only going to touch on this briefly, because it’s not directly relevant – but it does at least need to be mentioned.

AIs are not intelligent. They don’t understand a word they say. They are sophisticated systems that guess at the best ‘next word’ to follow the word they have just decided to use. That these words form sentences with emergent properties of meaning when read by a human is a reality with which they can barely cope.

Some AIs do better in this respect than others. For brainstorming, and nailing down technical details, they can present an enormous advantage – but when it comes to writing text for a TTRPG rules-set or adventure, they vary from inspiring to exasperating in equal measure, sometimes within the same paragraph!

TTRPGs and good written works rely on an internal consistency that has to run deep. Very deep. And that’s a consistency of the emergent properties of meaning within a series of statements. And since AIs don’t understand meaning…

AIs – LLMs – are capable of generating vast amounts of text quickly. They can talk the ear off a donkey, even without voice synthesis enabled! But they are prone to “hallucinations” (in which they make up facts) and struggle to maintain adherence to obscure, specific worldbuilding details. Or a specific role in the creative process. This makes unedited AI text a major liability for professional products – or for decent amateur ones.

Partnership

I view my use of AI as a partnership with a very creative research assistant. I can offer a vague idea and have it refined. I can ask for a suggestion – but I then have to take the ball offered and run with it, or use it to spark a better idea in a brainstorming session. It’s great for narrowing in on technical details – but you have to check its work. One phrase that repeats frequently in my interactions is “Ask questions for clarification if necessary.” And a lot of my inputs start by clarifying or reiterating something that the LLM has not taken into account.

I see the big picture. I use the AI to help clarify and define the details. I frequently need to steer the conversation, offer corrections or clarifications, or outright reject something the AI has suggested, while using that suggestion to clarify my own thinking to offer an alternative. On a number of occasions, the AI that I use most frequently has made three or four suggestions, and I’ve accepted none of them – but taken part of one and part of another and a touch of my own creativity and sense of narrative direction to weld the parts together into something better.

That’s leveraging the strength of the AI while using my ‘bigger picture’ to overcome its limitations. There’s a huge amount more that can be said on that subject, but I’ll save that for another article sometime.

Summing Up, Moving On

If I were to generalize and sum up my ethical position on the use of AI, it could be encapsulated in the statement, “AI as a tool or in partnership with human creativity is fine – with inherent limitations. AI as a primary generator of content that a reader or viewer would expect to be produced by a human is unethical at best and incompetent at worst.”

This is the ethical boundary that we, as consumers, have to navigate. And it’s precisely this boundary that the creators of Once We Were Heroes have forced me, and you as a reader, to confront. This game supplement heavily employs AI-generated art, and makes no bones about it:

    “Recognizing their limited artistic expertise and budget, Jeremy and Matthew at Fool Moon Productions leverage generative AI to enhance their creative outputs. This includes generating thematic “original” artwork, refining existing designs, and improving written content by correcting spelling and grammar. Notably, even this disclaimer was crafted with the assistance of AI.”

It’s not my job, as a reviewer, to argue the rightness or wrongness of this policy or the motivations behind it. It IS my job as a reviewer to consider the efficacy of the results and to bring the matter to the attention of potential buyers, who can then make up their own minds.

To the maximum extent possible, this review will focus on the content without considering its source. If the use of AI has achieved something spectacularly fitting or evocative, I’ll comment on the fitness and the evocative nature of the art – and if something doesn’t fit, I won’t cut them any slack for the source; it will be judged by the standards of human art.

But I wanted to make that clear before we start, too.

To facilitate this review, I have been given a free copy of Once We Were Heroes. I have no other incentive to produce anything other than a fair and unbiased review.

Once We Were Heroes – First Impressions

Front Cover

The front cover gives a first impression of two worlds and a location trapped in between. It’s clearly a collage of two separate pieces of art, and the styles don’t quite mesh. Art by Fool Moon Productions with AI assistance.

You can’t escape a first impression from the front cover, but it’s not all that promising a beginning. The art of the house at the bottom doesn’t feel like any of the other art in the product, and more importantly, doesn’t quite gel with the top part as a result.

The title – for some reason, I started thinking of this as “We Were Once Heroes”, and I think that derives from a grammatical choice in the title – specifically, the absence of a comma after “Once”. It’s a piece of minutia in the larger scheme of things, but it is the difference between a statement that attracts attention and commands interest, and something that’s more vague and leaves you wondering what it’s all about. Compare for yourselves:

Once We Were Heroes

Once, We Were Heroes

The Subtitle doesn’t help much. “An Adventure About Life After You Are Left” – Left where? Left Hanging? Left Alive? Left For Dead?

For all I know at this point, though, that might be a masterpiece summary – the answer might be “All Of The Above, and more”. At least it tells me that this is supposed to be an Adventure.

But the first impression is that the subtitle is there to try and hook a reader into buying the book because the title isn’t doing a strong enough sales job, and it’s too wordy to be very effective at that job. This is back-cover text, not something that belongs on the Front Cover, especially since it’s distracting from the art of the cover.

And, aside from knowing it’s an adventure, I still don’t really know enough about the product to be interested in buying it – though price would factor into that question. I’ll deal with that toward the end of this review.

Back Cover

The first place I go when the front cover doesn’t enlighten me enough (which is usually, to be fair) is the back cover, where I would expect to find a more verbose blurb describing the product.

Okay, so there are cosmic purple swirls evocative of space, or a peculiar storm, set against what might be a mountain and the same two ‘spheres’ of existence. And aside from the Fool Moon Logo and credit, there’s… nothing. This cements the impression that the subtitle was the back cover blurb at some point, and used on the back cover it would be more effective as a tease, because it wouldn’t be trying to sell the product.

As it stands, the back cover is pretty but leaves me none the wiser.

Fool Moon Productions

I want to call attention at this moment to the Fool Moon logo, which they were kind enough to supply in a higher-resolution format – the version below is actually a compromised version of it because I had to shrink it down.

https://www.dmsguild.com/en/product/535760/once-we-were-heroes

I’m calling attention to it because there’s a subtlety within it that you can barely make out in the back cover presented above. It consists – at first glance – of a wolf (evocative of a full moon) wearing a fool’s cap, and set inside a white disk (often used as a symbol of the full moon). But there’s the barest hint of something more, when you look closely.

To examine what I was seeing, I did a little digital editing to bring up the slight tonal difference that I was detecting and make it more prominent.
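For anyone who wants to try the same trick, a simple contrast boost in any image editor will do it. Here’s a minimal sketch of the idea using Pillow – not the exact edit I made, and the filenames are placeholders:

    # Exaggerate the contrast so a faint tonal difference becomes obvious.
    # This is not the exact editing used for the illustration in this post --
    # just one generic way to get a similar result.
    from PIL import Image, ImageEnhance

    logo = Image.open("fool_moon_logo.png").convert("RGB")   # placeholder filename
    boosted = ImageEnhance.Contrast(logo).enhance(3.0)       # 3x contrast boost
    boosted.save("fool_moon_logo_boosted.png")

Any similar levels or curves adjustment would work; the point is just to push the faint shading far enough away from the white that it becomes obvious.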

And now it’s clear to see that this isn’t just a yellow-white circle – it’s an actual representation of the full moon, as seen in the Northern Hemisphere.

Sidebar: Inverted Moon

Wait, what? People in the two hemispheres see the moon differently?

Yep. Because the Earth is a sphere, people in the southern hemisphere are upside-down relative to those in the north, and as a result, the moon looks upside down to us in the south, and the phases of the moon appear to run in the opposite direction.

This image is from a post by “The Secrets Of The Universe” on Facebook, and from the logo at top right, I assume that it is copyrighted by them. I have tweaked it slightly to enlarge the explanatory diagram at the top. Link to their post containing the original image, or click on the image itself.

But this is a rabbit hole full of traps for the unwary. Their post’s URL, and its text, claim that this happens because the moon is a sphere. WRONG – though they get everything else pretty much right, and got called out on the error in the comments.

This post on Facebook by “World GeoDemo” gets the explanation right – but it has the flags that identify the perceived images back to front, which is only likely to spread the confusion further.

So even the people explaining the phenomenon struggle to get the details right. We live in a topsy-turvy world, sometimes…

And all this because I wanted to know which perspective on the moon was being illustrated by Fool Moon’s logo.

Getting back to the point that I was trying to make: While it might have been more effective to paint out the ‘dark parts’ that lie under the wolf, the tonal difference shown is subtle enough that you don’t really notice it; it only becomes apparent when you darken those ‘blue areas’.

But the attention to detail displayed in the logo, as a general statement, boded well for what I might find within the product. Nuances and details and subtlety are what it promises; now it’s up to the product to deliver.

The other thing that scrolling through the PDF to the back cover does is hint at the scale of the product – the back cover is page 158, with the front cover counted as page 1. It’s BIG, a lot more so than most ‘adventures’, by a factor of 4 or 5. And that’s an important thing to notice at this point.

Art

Some of the art is quite evocative. This is perhaps the best image in the product, but one or two others come close. For the most part, though, the art is strongly illustrative but nothing more. It does (mostly) avoid the ‘plastic’ impression that some AI art possesses, thanks to the careful and subtle use of textures. Art by Fool Moon Productions with AI assistance.

In fact, so much of the detail was lost in compressing the image above to fit Campaign Mastery’s display space that I decided to capture a larger partial image. The textures are still hard to make out but the impression they create is not. Art by Fool Moon Productions with AI assistance.

The art has been generated using Affinity Suite, Dungeon Draft, and 2-Minute Tabletop. I don’t know any of those tools, but the latter two sound like they are mapping-related, and there are a number of richly-detailed maps provided, so I assume that the first was the primary source for the artwork. The disclaimer, quoted earlier, suggests that the primary human creators involved in the artwork creation were Jeremy “Wolf” Morris and Matthew “Soulforge” Walsh, who are also listed as the writers of the product.

And, for the most part, it’s not bad. I’ve included both the best and (in my opinion) worst as illustrations in this post, but for the most part, it’s effective – at communicating to the GM. I’ll delve into that comment a little later in the review; I’m still conveying my first impressions at this point.

Day-Night Theme

Many of the pieces contain a day-vs-night theme, which is obviously related to the ‘two worlds’ impression created by the cover. At this stage, I’m not sure of the relevance, but it’s too prevalent not to be significant, so I’ll be looking for an answer when I get into the text. Art by Fool Moon Productions with AI assistance.

Encounter Illustrations

There is a stylistic thread that runs through most of the encounter illustrations. Sometimes it works; sometimes I’m not so sure. This is one of those ‘unsure’ examples – but it’s certainly the cutest Beholder that I’ve ever seen. All it lacks is a ribbon tied into a bow on the top of its head. Is that impression appropriate? I don’t know yet. But this is NOT menacing in the way a Beholder usually would be. Art by Fool Moon Productions with AI assistance.

Compare the Beholder with this Half-orc image. Clever use of negative space creates an impression of size, while the textures transform an image that might have been cartoonish into something more substantial. I wish it were larger though – I’ll discuss that in the text below. Art by Fool Moon Productions with AI assistance.

So far as I can tell from a quick glance through the pages (used to select the images extracted for this review), there’s an image to go with each encounter, though this might be an inaccurate impression. It’s something for me to look for when I dig into the content.

Scene Illustrations

Locations are well illustrated. Some of them are stylistically more related to the encounter illustrations, others are more removed from that but with consistent tonality that works to create a sense of a unified whole. Art by Fool Moon Productions with AI assistance.

This is an example of a scene illustration that is more in line with the encounter illustrations. The biggest problem with it is the size – I had to ENLARGE it to fit the available space. Art by Fool Moon Productions with AI assistance.

I guess, right now, we get to the rub. In terms of presenting a representation of a scene or an encounter to the GM to help them interpret the text, the art is absolutely fine – for the most part. But it’s not all that useful for showing to players – it’s too small. Despite the large page count, this product would be even better if the location illustrations, and maybe the encounter illustrations, were enlarged, even though this would add to that page count.

Sure, you can zoom in to enlarge the image…

Art by Fool Moon Productions with AI assistance.

…but that’s not a perfect solution. Either you cut the top and/or bottom off images, or you show players content to the side of what you’re trying to show them. That could be another area, it could be an encounter, it could be a magic item, it could be text – but what it is most likely to be is a surprise-killer.

Not enough thought has been put into how customers will actually use the product.

Having been involved in the production of Assassin’s Amulet and a few other things over the years, I can see why this has happened – it’s essentially the age-old problem of not seeing the forest for the trees, and it’s an easy trap to fall into. In a nutshell, the creators were so busy actually making the content that no-one stepped back to look at how it would be used – or not closely enough, anyway.

This goes right back to the initial content design decisions. Presenting the illustrations as full-width, 1/3-height panels would need to be decided right from the beginning, because it affects the size of the illustrations that you need. It would have made layout a lot more difficult, with text in columns and illustrations not. But the product would be a lot more user-friendly as a result.

Character Illustrations

There are plenty of character illustrations, too. I’m not sure if this is a petrified character or a statue – not without consulting the text – but it’s effective. Art by Fool Moon Productions with AI assistance.

This image is probably more indicative of the character illustrations, many of which are obvious homages to characters from popular culture. Are these NPCs or PC presets? I’m not sure, yet. There are lots of more typical spot illustrations throughout, too. Art by Fool Moon Productions with AI assistance.

The same problem affects most of the character illustrations in the book.

Now, I don’t see this as a flaw in the product; it’s a lost opportunity to improve the product. But it won’t actually make it unusable, by any stretch of the imagination – and that’s the distinction that defines what I consider to be a flaw.

The Prelude Page

I don’t know whether they referred to this internally as a prelude or a preamble, but it’s the first solid information we get about what we’re looking at. It’s worth quoting the text in full:

An Adventure About Life After You Are Left

Step into the well-worn slippers of elderly parodies of pop culture heroes and heroines, enjoying a mundane day at the Adumbral Strobus Home for Retired Adventurers. But the ordinary turns to chaos when the entire facility is suddenly whisked away to another plane of existence. Waking up in this bizarre new realm, the adventurers quickly realize they’re not in Kansas anymore, Toto.

As they explore their surreal surroundings, they must unravel a series of perplexing mysteries. Clues scattered throughout the complex will help them escape the pocket dimension, discover the fate of their fellow residents, navigate the bizarre mutated growths and entropic rot, and decipher the strange artwork depicting one of their own. Along the way, they might even uncover some juicy staff scandals.

Venture into the enigmas of the Adumbral Strobus Complex to uncover what Dr Mortem has been doing with the poor inmates of the Asylum for the Neglected Elderly. Confront him in the Adumbral Strobus Institute of Entropic Research to find a way to return yourselves and your home to the material plane. Can you solve the riddles, face the horrors, and lead your comrades back home? Adventure and intrigue await in “Once We Were Heroes”!

And remember, whatever you do, don’t look too closely at the toilets.

Okay, so some of the characters are presets, and some are NPCs. The premise is that a nursing home for elderly ‘retired heroes’ from many different realities gets pitched into somewhere else, and the main quest is to get home again. But there are side quests along the way that may impact the success or failure of that main quest. This is a micro-game setting as much as it is an adventure.

Nostalgia, pop culture, iconic characters, and a situation that pitches them all into one last great adventure – sounds intriguing.

Let’s talk for a minute about the font. For viewing on the internet or on screen, it’s long been recognized that a serif font is not ideal – that’s why Campaign Mastery uses a dirt-common sans-serif font for its content. It’s more legible and less tiring. On the printed page, that is generally held to be reversed: serif fonts are widely regarded as quicker and less tiring to read in print. So this product is optimized for screen viewing and not for printing. That’s fine, it’s just something to be aware of.

Because you want headings to stand out, they are frequently in whatever font you aren’t using for your text, and that’s the case here, too. So the designers of the product know what they are doing, or (at the very least) have imitated the work of someone else who knows what they are doing, in terms of typography.

There’s something a little strange about the line heights in some of the text, however. This is usually a result of peculiarities with the actual font used, and it’s incredibly hard to get right. I can’t mark the product down because of it, but I have to mention it.

The text above is then followed by a humorous “Disclaimer” passage which at first glance might appear to be just fluff. This is written, like all fine print ever, in a far smaller version of the main font. But it does actually serve a valid function in terms of the content – in essence, it heads off the likelihood that someone will object to the specific adaptation of a specific entity from pop culture.

“Involuntary translocation across dimensional boundaries may present unforeseen hazards. Accordingly, Adumbral Strobus accepts no liability for any personal belongings that may become entropically compromised, nor for any injuries, accidents, transmogrifications, or sudden instances of extra-dimensional dissolution occurring within the confines of our esteemed establishment during such excursions. For your safety and well-being, certain chambers, thoroughfares, and inter-dimensional portals may be sealed off without prior notification.

“Height, weight, and chronological restrictions may apply in some dimensions, and individuals with specific physiological, psychological, or metaphysical conditions or impairments may find themselves unable to participate in certain dimensional experiences. It is advised, with the utmost gravity, that consumption of any foodstuffs or beverages discovered in alternate realities is strictly ill-advised, as Adumbral Strobus accepts no responsibility for any ensuing transformations, spontaneous combustion, or heroic expulsions of stomach contents that may result from such gastronomic indiscretions.”

The disclaimer continues for another couple of paragraphs after that.

This is exactly the sort of nuance and attention to subtle detail that I expected to find from the Logo, and so it gets a big tick. The final sentence is worth highlighting because it (a) smacks of an Alice-In-Wonderland vibe, and (b) implies that some characters who take the risk may regain some of their youth and former glory. But it also suggests that such reactions will be addressed on a case-by-case basis within the content – which speaks well of the attention to detail within the content.

The Credits and Contents Pages

Pages 4-6 cover this ground. I noted that the credits acknowledged the copyrights over D&D, Forgotten Realms, Ravenloft, and Eberron amongst others.

The contents page reinforces earlier impressions. The introduction runs from page 7 to page 11, and will get looked at in detail below. Chapter 1 is “Welcome To The Adumbral Strobus”, Chapter 2 is “The Extra-planar Adventures”, Chapter 3 is “Asylum for the Neglected Elderly”, and Chapters 4 and 5 relate to the “Institute of Entropic Research”. It also contains 4 versions of the Aftermath and name-drops three more entities: Mortem, Yixith, and Xeghic. At this point, I know from the prelude that Mortem is a mad scientist who has been experimenting on patients, but I don’t know the other two – so I suspect (until I know better) that they are the personifications of the “Day vs Night” conflict implied by the artwork. If so, one or both are probably responsible for the transdimensional relocation – but that’s just speculation with precious little solid foundation.

I have to admit to having a minor problem with the name “Adumbral Strobus” – I keep wanting to read it “Admiral Strobus”. That might be just me, or it might be more common than I think it is. But I’m quite sure that it would trip me up sooner or later.

The 5 main chapters are then followed by 7 appendices, and Appendix C, “Character Concepts”, stands out to me. It tells me – without actually saying so – that this is an adventure designed for some variety of D&D / Pathfinder, because it lists the different character classes and then offers two residents as representative of each class.

The Homages, when you look at them, are very tongue-in-cheek. The one that I used as an illustration is of “Prof. Alfus Percy Ulric Bron Dumblebeard” – I don’t think anyone will need a second guess as to who this is supposed to represent. But that sets a tone for the rest of the product that seems a little incompatible with the content thereof – it will be interesting to see how they cope with that.

The Introduction

Let’s look at the subsections of the Introduction – “About This Adventure,” “Once They Are Heroes,” “Adventure Summary,” “Running The Adventure,” “Character Creation,” “Locales” and “Dungeon Master’s Preparation Checklist”. Some of these are subdivided.

The Game System

Quote: “Once We Were Heroes” is an adventure based on the 5th edition of the worlds most popular role playing game, designed for four to six characters, where the player characters take on the roles of the story’s heroes. This book outlines the villains and monsters they must defeat, as well as the locations they must explore, to successfully complete the adventure.

So, that answers that question, but it produces a big black mark on the product in terms of my personal taste.

You see, like a lot of others, my friends and I participated in the WotC 5e play-test, back when it was “D&DNext,” and after a while, we noticed that every time our feedback said “Zig Left,” the next iteration of the rules went “Zag Right”. There was little or no interaction with anyone at WotC over the playtesting feedback reports that we filed, so there was little explanation for this phenomenon; we could only assume that “Zag Right” was the more popular choice amongst other playtesters. Slowly, what ended up as D&D 5e became something we were no longer interested in playing. Some have since changed their minds; others have not. It is what it is.

The problem with tying yourself to one game system so absolutely is that you find yourself living and dying with that game system. When writing Assassin’s Amulet, my co-authors and I worked very hard at making everything compatible with both D&D 3.5 and Pathfinder for that very reason.

Does that mean that this is unrunnable, or that it shouldn’t even be up for purchase consideration? Absolutely not. But it does mean that to run it, I would need to adapt it, and that adds to the hurdles that the quality of the product has to surmount.

Anyway, getting back to the “About This Adventure” text… setting for this adventure, right… can be placed in many published settings or even a world of the DM’s creation, good… Intended to be played as a one-shot, okay… Players can either choose from the provided options or create their own 10th-level characters, okay.

…The Tone of this adventure is a comedic take on a horror mystery, okay that’s interesting – those two are hard to make go together (though it can be done)… encourage you not to take it too seriously, okay…

Once They Were Heroes

“Many years ago, the world was saved by a legendary group of adventurers. They stood against the darkness, vanquished terrible evils, and ensured peace for generations…”

So the characters / PCs are not from ‘all over’; they were allies and teammates who worked together, and then ALL of them ended up in this place? The first part is a disappointment, and the second strains credibility to breaking point right off the bat.

Were I to run this adventure, I would probably go back to my original impression – that these are retired heroes from multiple planes of reality who have been ‘parked’ in this facility; they don’t know each other; and the big thing that the facility offers (besides aged care) is anonymity – distance from the scenes that made them legendary, so that no-one from home can call them up one last time. This is a Retirement home.

Some may find that this interpretation is even harder to swallow, in terms of credibility, and it probably is – if you run it using normal characters and not the ‘pop culture icons’ provided. But that risks undermining the ‘fun factor’ and making this all too serious. And if you create your own versions of iconic pop culture characters, you’ll find yourself back at the same basic question.

Of course, you may find that the premise doesn’t stretch your credibility as badly as it does mine – but that still doesn’t negate the possibility that your players may struggle with it more than you do. So this is something that every GM will have to at least think about addressing.

The introduction then goes on to outline the adventure, but I’m not going to get into those specifics; there’s a lot of information that players will have to find out the hard way.

The plotline breaks down into three main sections – a ‘get to know you’ routine morning (my comments above play into this section very heavily); a sudden event and their need to work out what’s happened and what they can do about it, which leads into investigating the mystery and stumbling across side-plots; and the ultimate confrontation and resolution of the plotline.

Running The Adventure

This is pretty standard fare, with no surprises. Stat blocks for all encounters, and any spells or equipment referenced are provided, so the PHB and DMG are the only real requirements.

Character Creation

This section contains ‘meta-rules’ for character generation and explicitly references the PCs as parodies of pop-culture icons, who have aged and retired. It also outlines equipment (very limited) and the aging of the characters (which may not go far enough, but there’s a playability need that has to be taken into account).

“Additionally, randomly allocate one flaw and one feature to each character, either by rolling a d20 and referring to the table in the Appendix A or by dealing cards from the provided deck. Encourage players to incorporate these traits into their role-playing to add depth and humor to their characters.”

The text also states that the characters supplied in appendices C and D should be considered backups for players who are struggling to create their own characters, not as the primary source.

Locales

Interior maps are provided for three buildings within the Adumbral Strobus complex – the Home For Retired Adventurers, the Asylum for the Neglected Elderly, and the Admin building, which includes the facilities belonging to Dr Mortem.

There are two pocket dimensions, the Everburn and Evergloom, which have an interesting cosmological concept that makes total sense in terms of the adventure as described (I’m being deliberately vague to leave players who may read this in the dark).

Visiting these pocket dimensions is not quite what players might expect – there are stings in the tail that are exactly the sort of thing that I like to build into my own campaigns.

This section also categorically identifies Yixith and Xeghic, who were name-dropped in earlier material, and their relationship to the plotline. I have one suggestion to make in this respect but don’t want to make it too easily accessed, so it’s in black text against a black background in a text box below – select the text contents with a shift-and-mouse-drag to read it. The text DOES contain spoilers that will ruin the adventure for any player who reads it, be warned.

One realm is a microplane of life and the other of death. Yixith and Xeghic are inhabitants of these microplanes, one to each. The depictions of each match the illustrations of the microplanes. I suggest REVERSING the indicated images WHEN THEY ARE ‘AT HOME’ so that they contrast with their environments. This will throw a curve ball that is likely to deceive even experienced players – for a while.

After a spot illustration of a nameplate that is REALLY hard to read, the introduction segues into a brief description of the setting – the grounds of Adumbral Strobus, the retirement home building, the Asylum, and the Institute.

Maps vs Battlemaps

The creators suggest using theater of the mind, with the GM referring to the maps provided for cues and the battlemaps in Appendix G reserved for combat situations. They point out that this will speed play, which is true. But they don’t mention that a battlemap should only be placed on the table when combat is actually about to begin – don’t telegraph the situation to the players! Stay in theater-of-the-mind mode until the last possible moment.

This also plays into my statements regarding image size. It can be argued that these are intended only for the GM, and not for player consumption, and it seems clear that this is what the writers had in mind; but it can also be argued that theater of the mind is sped up and improved by giving the group a common visual reference to process.

Prep Checklist

This has some additional steps not previously mentioned, and shouldn’t be ignored. But that’s what is most likely to happen because the only two entries on the first page on which it appears are reiterations of advice already provided. All the new content is on page 13. This is the biggest misstep so far in the content, in my opinion; if this is as bad as things get, OWWH will deserve very high praise and recommendations, indeed.

Encounter Balancing

Closing out the Introduction is a section on Encounter Balancing. There’s nothing startling or wrong with this section; the biggest issue is what is Not there.

This adventure is designed, according to the “About This Adventure” text, for 4-6 characters, with a presumed ratio of one character per player.

This section shows how to adjust encounters for 4, 5, or 6 players. It also has an adjustment for having fewer than the recommended number of players (3). But it makes no accommodation for groups with more than the recommended number. It’s not likely to come up often – but surely expending the three lines of text needed to cope with 7 or 8 players would not have been too much to ask?
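If you do end up hosting a seven- or eight-player table, one workable stopgap – and I stress that this is my own house-rule sketch, not anything the product provides – is to scale the encounter XP budget linearly per character, the way the core 5e DMG encounter-building guidelines do. Something like this, with the level-10 threshold numbers quoted from memory of the DMG (verify them against your copy):

    # Rough per-character XP budget scaling for oversized parties.
    # Threshold values are the 5e DMG level-10 row, quoted from memory;
    # the function and variable names are mine, not the product's.
    LEVEL_10_THRESHOLDS = {"easy": 600, "medium": 1200, "hard": 1900, "deadly": 2800}

    def party_xp_budget(num_characters: int, difficulty: str = "hard") -> int:
        """Total XP budget for one encounter of the given difficulty."""
        return num_characters * LEVEL_10_THRESHOLDS[difficulty]

    for n in (4, 6, 7, 8):
        print(n, "characters:", party_xp_budget(n), "XP (hard)")

It’s crude – you’d still want to nudge the monster multiplier down a step for a big party, as the DMG suggests – but it’s better than nothing, which is what the product gives you.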

That said, as I commented above, if this is the biggest faux pas, this adventure will be doing very well indeed.

Looking Deeper – Chapter 1

I’m not going to break this down into subsections the way I did the introduction – there will be too much trouble with spoilers if I do that. Instead I’m going to skim the chapter and report back.

  • While I can guess, I don’t know for certain what “Balloon Volleyball,” or its in-game equivalent, “Beholder Ball,” is.
  • It would have been a good idea to warn the GM to come up with “20 questions” for the Getting Ahead game. Unless this game is also not what I think it is.
  • Tess Trill – every facility of this type needs a hot girl for those characters that way inclined to drool over, and she fills that need here. Her male equivalent for those looking in the other direction is the cleaner, Fenim. The text hints that he might have feelings for her, about which she is naively ignorant. Adding the above to their respective descriptions adds massively to the background and general realism of the setting – even if they are cliches.
  • That credibility is sorely needed to counterbalance the presence of Derrick the Chevalier. Older nobility, as a general rule, do NOT get shuffled off to somewhere like this. Instead of an actual Noble, he should be a commoner with delusions of Nobility – or maybe pretensions of Nobility.
  • This whole sub-sequence would be a lot easier to roleplay if there was some indication of what this group was actually up to – they are clearly up to something that they probably shouldn’t be. The GM should probably also prepare some relationship cues that can be expressed through dialogue with the PCs. These might be friendly (“Don’t forget we’ve got a chess game to finish later”) to softly hostile (“Mind your own business, [PC], and I’ll mind mine, and we’ll both be happier for staying out of each other’s way.”) In general, I get the impression that the PCs are the ones who have ‘settled’ into a calm existence in the retirement home, while this group are those who are still rebelling a bit and bucking the discipline. That too, would be useful direction – especially if that wasn’t the impression the creators intended.
  • Okay, now we get the explanation of the 20 questions game. Some sort of indicator at the first mention that ‘details will be provided below’ would have been helpful.
  • While the text solves the puzzle, some sort of motivation on the part of the guilty party would be helpful.
  • Context within the adventure explains the Beholder image – so my earlier comments regarding it can be ignored.
  • The first real plot hole – “After the conclusion of the pirate hunt game”… but no such game has been specified or described.

Nine notes, two of them canceled out by a third, and only one (maybe two) really critical. I’ve read a lot of adventures and while there have been one or two that have scored ten out of ten for content, the vast majority have far more serious faux pas and plot holes.

Narrative Content

Most importantly, the narrative generally succeeds in bringing the location to life in a way that feels natural, realistic and interesting. Nailing any two of those three can be difficult; ticking all three boxes – especially in such an unorthodox setting with… unusual… characters – is top-rate work.

Locations, Encounters, Mysteries, Solutions, and Action: Chapters 2-4

At this point, I don’t think I need to delve into these areas too deeply. While it’s possible that one of them will lower the established standard, there’s no reason to expect it. A quick skim of the next few chapters confirms that impression; this is a really well-written, well-crafted adventure.

It may have the occasional small hole for you to plug, but nothing that won’t be easily taken care of if you do what everyone always says to do and read the whole adventure before play.

I’ve very much been mindful, in writing this review, not to read ahead, but to generate my comments as I came to each passage of content. That permits an honest impression of what’s actually presented by that point in the product, with no cheating by looking ahead.

When I was selecting images, I was deliberately careful to avoid reading any of the text. When I was reading the introduction and making comments on it, I wasn’t looking ahead – I was reacting to what was currently in front of me, in the context of what I had already read. Similarly, my notes on Chapter 1 were very much stream-of-consciousness as I was reading – and you can see in those comments where that caught me out.

Above all else, I was making every effort to make this review both honest and comprehensive, without any bias resulting from the source of the artwork. I hope that I’ve succeeded, so that you can make a fair assessment of what’s being offered without any bias or taint from the art source being compounded by my commentary.

Price

The price is AU$7.58, which is US$4.95. I would actually expect the price to be $5 from this conversion; I suspect that what I got was the “live” conversion rate and not the daily rate. And if you don’t know the difference, don’t worry about it.
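For the curious, you can back the implied exchange rate out of the two listed prices easily enough – purely a sanity check on my ‘live rate’ suspicion, nothing official:

    # Back out the AUD/USD rate implied by the two listed prices.
    aud_price = 7.58
    usd_price = 4.95

    implied_rate = aud_price / usd_price                   # ~1.53 AUD per USD
    print(f"Implied rate: {implied_rate:.4f} AUD per USD")
    print(f"A round US$5.00 at that rate: A${5.00 * implied_rate:.2f}")  # ~A$7.66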

Where Do You Get It?

https://www.dmsguild.com/en/product/535760/once-we-were-heroes

Or just click on any of the illustrations excerpted from the product.

The Judgment Call

So here’s the bottom line: If you are really seriously opposed to AI-generated art in RPG products, I don’t think this adventure will change your mind.

If, however, you are willing to even contemplate the possibility that there are potentially valid counterarguments to that opposition, this adventure has enough merit that you should contemplate buying it.

Only the maps are really essential for play; you can blank out every other illustration and still be left with a product worth your attention. It will be diminished by that act, but that’s your choice to make.

If the art had not been AI-sourced, there are two possible paths that this adventure could have taken:

  • Far less art, far weaker presentation, and far less appeal despite the length. Marketplace viability would probably require reducing the price by a third, eating directly into the profits and making the existence of another small publisher less viable. Or,
  • Far less art of potentially slightly superior quality, and a price tag closer to USD $40 – a price that would be sure to compromise sales. The net effect is the same – reduced profitability and a small publisher becoming less viable within the hobby.

Some may argue that no publisher that crosses their hard line deserves to be viable in the market. I think that’s going too far.

For my (metaphoric) money, Fool Moon have done everything right in terms of ethics, here. They are up-front about the art and its source. They have done their best to leverage the output to the maximum benefit of their product without making it an indispensable element of that product.

Is it the greatest RPG product ever published? Probably not, but what right do you have to expect that – especially at this price point?

Is it worth every one of those US dollars? I think it is, and then a couple. And I don’t think you can ask more of Fool Moon Productions than that.


Once We Were Heroes and the AI Controversy – AI Redacted


This post is a review of Once We Were Heroes by Fool Moon Productions, which uses art that’s AI-Generated. So I’ve had to set some ground rules.

As the owner/operator of Campaign Mastery, I have spent a lot of time thinking about what the site’s policy should be with respect to art by Generative AI, and the text below is the result.

Campaign Mastery Policy on AI-generated Art

1. Campaign Mastery will not use or show AI art unless it is profoundly essential to the content of an article. “profoundly essential” includes reviewing a product, tool, or published work which uses AI-generated art, or when the art itself is the subject of the article.

2. AI art will never be used to replace any original art that would normally have been commissioned from or provided by an actual human artist by Campaign Mastery.

3. When AI art is used, a disclaimer will always warn readers, as shown above. This will precede any significant article content and especially any AI-generated content. Whenever possible, a link will be provided to a plaintext, AI-free version of the article.

Campaign Mastery Policy on AI Text

1. While text generated by an AI may be quoted, it will never be used to replace human-generated text. All text published on Campaign Mastery must be substantially written, analyzed, and edited by a human author.

2. Text generated by an AI may only be quoted for the purposes of analysis, illustration, or conversation (eg demonstrating prompt engineering). Any such text will be clearly identified as to source. Analyses performed by an AI must be converted into an HTML table.

3. Outside of direct quotation, AI may have been used for research or brainstorming, or generating outlines or summaries of other texts. Every such use will be verified for accuracy by a human and the final text will always be written and edited by a human.

4. No third-party submissions which are obviously AI-Generated in the exclusive opinion of the site owner will be accepted for publication.

Campaign Mastery Policy on AI Audio and Video

1. While Campaign Mastery is not an AV site, from time to time Audiovisual materials may appear, and some of these may be AI generated in some respect.

2. If any aspect of these materials (eg AI voice-overs, background music, etc) is significantly AI-generated, the materials will be treated as though they were “AI Art” as per the policies stated above.

This text has been added to the policies page and is effective as of this post.

Human-AI Collaboration?

I couldn’t find what I wanted to use to illustrate this post – an Artificial Artist painting a question mark. This is a next-best alternative, based on the “robot hand human handshake” image by Mohamed Hassan, to which I added a question mark image by Gerd Altmann in the background, plus some color tweaking to get them to match. Both images were sourced from Pixabay using their “authentic” (human only) setting.

The AI controversy – an overview

I always knew that this day would come eventually. I had expected that the occasion would be when I wrote and published an article on how I use AI within my campaigns, and the techniques and limitations that come with it – but that article isn’t written yet, because the uses that are most illustrative come from an adventure that hasn’t yet been played.

Or perhaps it would have been another article that’s already drafted, on the limitations of AI and how it could be improved – and on how to get the most out of what’s already here.

Polarization, Content, and Hard Lines

The issue of AI-generated art and other content is one of the most polarizing issues in the hobby. It has forced publishers, creators, GMs, and customers / players to establish rules that are starkly black and white, often with the best of intentions – but when those clash, the hobby itself can be the loser.

I’m more in favor of a softer line that acknowledges gray areas with transparency. There are certain ways that I consider ethical when it comes to the use of AI, and others in which it clearly is not. Asking for anything “in the style of” a living artist is a big no-no, for example. Asking for something in the style of a long-dead artist, that’s more of a gray area.

I regard AI as a tool, and like all tools, it can be used for good or ill. Throw in a healthy dose of pragmatism, an acknowledgment that no black-and-white policy can satisfy everyone and that there can be good and valid reasons for the ethical use of the tool, and you find yourself in the same uncomfortable middle-ground that I occupy, and that the policies stated above are intended to encapsulate and define.

Ethics and Labor Rights

This actually breaks down into a number of related concerns. First, there’s the conflict between how generative AIs learn to create their content and respecting the rights and integrity of human creators.

Most AI models are trained on massive accumulations of data scraped from the internet with no concern for the sources’ rights, and without recompense. And that irks those who support the rights of writers and artists. I’m one of them, so naturally, my sympathies align more with those who are critical in this regard.

But that perspective is nuanced by the reality of the internet. Once material is publicly available, it’s there for anyone to refer to and use as reference or inspiration. So long as sufficient input from outside that source is incorporated, and in a non-superficial way, so long as you are building on what has been made available and not simply copying it outright, how is what an AI does any different from what a human writer does?

If I want to create an image of a clown, and I start by doing research using Google Images on how other artists have depicted clowns to get ideas, that’s generally considered fine – because at the end of the day, I have to synthesize all those elements and ideas together into my own representation of “a clown”. I don’t generally restrict or place boundaries on those searches; I want as much fuel for the creative fires as I can get.

It’s a long-held maxim – if you don’t want something to be public, don’t put it on the internet.

Here’s a bone to chew on: if it’s valid and legal for a human to be educated by viewing online content, how is it not valid and legal and Fair Use for an AI system to use it in the same way, for the same purpose?

Shades of gray.

Some content creators argue that the results are a form of “unlicensed derivative work”. And that might be true, if only that content creator’s works were used to train the AI – but with every outside source, the purity of that argument gets eroded.

There comes a point where so many sources are being fused into one that you have to draw the line. It’s like music – the difference between doing a cover version of a Beatles song and drawing inspiration from the Beatles is clear and obvious. Both are forms of copying – but the nuance is completely different. One requires the payment of a license fee to the songwriters, and the other doesn’t. Doing them without that payment is legal and ‘fair’ in one case – and completely the opposite in the other.

You can’t copyright a D diminished 7th chord just because you’ve used it in a song. It’s there for anyone else to use.

What’s more, consider the necessary ‘spark of originality’ that distinguishes human creation from artificial construction. In order to generate a good image, a human user of an AI has to specify a prompt, and the general rule is that the more detailed the prompt, the better the result. Is this not providing the needed ‘spark of originality’ into the resulting image?

The more vague and generalized the input, the weaker this line of argument, I admit. But where do you draw the line? How many of these creators started out by imitating someone else’s work?

Shades of gray.

I don’t see how you can end up anywhere else in the argument if you’re applying any half-way reasonable standards.

Devaluation Of Creativity

There is a widespread fear among freelancers (artists, mapmakers, writers, editors, you name it) that AI tools will drastically reduce the market rate for their services. Why pay $500 for a unique monster illustration when you can generate a passable image for nothing, or close to it?

And they have a valid point – up to a point.

The keys to deciphering this argument are subtle. AI images may be ‘passable’ but they aren’t going to be as nuanced as a bespoke image from a human artist. This is stealing my own thunder to a certain extent, but here’s the reality: The more detailed you make an AI prompt, the more you are likely to get something close to what you want – but the more likely it is that some crucial element, spelt out in specific detail, is going to be left out completely. And if what you’ve requested isn’t something that people routinely post images of on the internet, you’re going to struggle – try generating an image of a “crashed alien spacecraft” and what you generally end up with is a flying saucer hovering serenely in the air. People don’t take many photos of crashed objects! And if the AI can’t learn what it should look like, it can’t create something like it.

What this argument is really pointing out is that amateur-prompted AI art raises the bar of amateur art to something with much of the gloss of the professional artist. But the professional will always be better at capturing originality and bringing it to the creative table. The differences may be more nuanced than a black-and-white line drawing, but they are real.

You’re still getting something for your $500 that you don’t get from the cheaper alternatives – but it’s not the same thing as it used to be.

And this argument also smacks of similarity to those raised by the opponents of every technological advance and the consequent job losses. I’ve heard those arguments advanced against everything from the word processor to assembly-line robots. In every case, there has been more employment afterwards than before, once things settled down – but in some cases, that has mandated an evolution of skill-set, and in others, a complete replacement. So the truth of the matter in this respect is, once again, nuanced.

The similarity not only weakens this argument considerably, it points out, more starkly, my previous point – you pay for the services of a human artist for what he or she can provide that the cheaper substitutes can’t. Will that result in a realignment of the market rates? Possibly. But that’s life, it happens to everybody, like it or not – things change, and you either evolve accordingly, or you stagnate.

But there is a sting in the tail – the proposition that this leads to a “race to the bottom,” where only AI-assisted production can compete on cost. And that’s a point that I can’t argue with, and hence my comments on a possible realignment of market rates.

That said, it can also be suggested that AI generators are tools – some will learn to use them more effectively than others, just as some people are better with watercolors than with oil paints. The solution to this problem is for creators to embrace AI and use it to increase their productivity, so that they can accept five or ten times as many commissions paying one-fifth or one-tenth of what they used to command, while leveraging their artistic expertise.

So this line of argument is not as cut-and-dried as it first seems.

Specificity Of Style

Artists often feel that AI allows users to “mimic their unique style” without the artist receiving credit or compensation, effectively eroding their brand and professional identity.

For me, this is a far stronger argument than the preceding one, but I think the proposed remedy (don’t use AI, anti-AI, no AI, no, no, no!) is the wrong line to be pursuing. As I said earlier, I view creating something in the style of a living artist to be an ethical no-no. Once an artist is no longer available to take commissions by virtue of being dead, that’s a different story.

I think the correct remedy here is an extension of copyright protection to include the “distinctive style” of an artist. That’s already implied in the existing protections – more strongly in some fields than others. I always remember the time John Fogerty was sued by his previous record label for sounding too much like himself. That case established (or reinforced) the principle that each artist carries with him a uniqueness of style that cannot be licensed or sold and is emphatically NOT included in the rights purchased when you acquire control over an artist’s work product.

I think the existence of generative AI strengthens the case for such protection to be formalized and generalized to cover all modes of creativity, be it visual, textual, or audible. I would include, under that umbrella, a singer’s unique voice.

There would still be gray areas. A guitarist could argue that they had a distinctive and unique playing style, for example, and that style should merit protection. But they would have to prove that uniqueness in comparison to others within that musical field.

The final jigsaw piece would be to require AI interfaces to explicitly block requests that enter protected fields. “In The Style Of” is permissible once the ‘copyright’ on that uniqueness has expired, and should be blocked the rest of the time – UNLESS you are the artist in question, I suppose. But that gets murky, so let’s keep it clean, and ban them from being lazy, too.

Publisher and market integrity

Large TTRPG publishers have taken explicit stances, and the community judges them harshly when they waver.

Major players like Paizo (Pathfinder/Starfinder) and many prominent independent publishers have issued clear policies stating they will “only accept human-created artwork” for their products, usually citing ethical concerns regarding data scraping. This is often driven by a commitment to supporting the freelance community.

Wizards of the Coast faced significant backlash when multiple freelancers and even their own in-house content creations were discovered to contain AI-generated elements, despite WotC claiming an anti-AI stance. These incidents reinforced the community’s demand for strict auditing and absolute transparency.

Since it’s my position that human-created artwork is superior to AI-generated content in specific ways, I don’t agree with the reasons cited for these policies. Many criticize AI-generated art for lacking the “soul, texture, and character” of human-created fantasy illustrations. In the TTRPG world, art often sells the ‘vibe’ of the setting, and AI is frequently accused of producing generic, overly smooth, or inconsistent visuals that break immersion – and that goes directly to my contention that human art is better in key respects. But I do agree with the policies themselves as a general principle.

It’s when people seek to extend these policies down the scale to smaller publishers that I think problems start to arise. But I have to admit to being a bit conflicted over that problem.

Ideologically, I’m egalitarian; I favor “one rule for everybody”. And yet, in this circumstance, I think there need to be different standards applied at different scales, and I see the good and ethical use of AI generation as ‘raising the bar’ for the small operators, to the point where they are keeping the big-ticket producers honest.

My policies and ideologies don’t hold all the answers, and that admission pushes me back into the shades of gray. If you can afford to, you should always hire human artists, because the results will be better. If you can’t afford to, I’ll give you a pass for using AI-generated art. So there are two rules and a lot of gray in between them. But no one hard-and-fast rule or principle yields a satisfactory answer in every case, and I do NOT agree with anyone who tries to implement one. I’ll respect their position in terms of their own products or pages – to the extent that I’m offering a plaintext version of this article, for example – but that is as far as I’ll go.

I generally think hard-liners are part of the problem in any field, anyway. Having ethics and principles isn’t a problem; expecting them to hold all the right answers every single time, that’s a problem, and a serious one.

AI Limitations

I’m only going to touch on this briefly, because it’s not directly relevant – but it does at least need to be mentioned.

AIs are not intelligent. They don’t understand a word they say. They are sophisticated systems that guess at the best ‘next word’ to follow the word they have just decided to use. That these words form sentences with emergent properties of meaning when read by a human is a reality with which they cannot contend, and can barely cope.

Some AIs do better in this respect than others. For brainstorming, and nailing down technical details, they can present an enormous advantage – but when it comes to writing text for a TTRPG rules-set or adventure, they vary from inspiring to exasperating in equal measure, sometimes within the same paragraph!

TTRPGs and good written works rely on an internal consistency that has to run deep. Very deep. And that’s a consistency of the emergent properties of meaning within a series of statements. And since AIs don’t understand meaning…

AIs – LLMs – are capable of generating vast amounts of text quickly. They can talk the ear off a donkey, even without voice synthesis enabled! But they are prone to “hallucinations” (in which they make up facts) and struggle to maintain adherence to obscure, specific worldbuilding details. Or a specific role in the creative process. This makes unedited AI text a major liability for professional products – or for decent amateur ones.

Partnership

I view my use of AI as a partnership with a very creative research assistant. I can offer a vague idea and have it refined. I can ask for a suggestion – but I then have to take the ball offered and run with it, or use it to spark a better idea in a brainstorming session. It’s great for narrowing in on technical details – but you have to check its work. One phrase that repeats frequently in my interactions is “Ask questions for clarification if necessary.” And a lot of my inputs start by clarifying or reiterating something that the LLM has not taken into account.

I see the big picture. I use the AI to help clarify and define the details. I frequently need to steer the conversation, offer corrections or clarifications, or outright reject something the AI has suggested, while using that suggestion to clarify my own thinking to offer an alternative. On a number of occasions, the AI that I use most frequently has made three or four suggestions, and I’ve accepted none of them – but taken part of one and part of another and a touch of my own creativity and sense of narrative direction to weld the parts together into something better.

That’s leveraging the strength of the AI while using my ‘bigger picture’ to overcome its limitations. There’s a huge amount more that can be said on that subject, but I’ll save that for another article sometime.

Summing Up, Moving On

If I were to generalize and sum up my ethical position on the use of AI, it could be encapsulated in the statement, “AI as a tool or in partnership with human creativity is fine – with inherent limitations. AI as a primary generator of content that a reader or viewer would expect to be produced by a human is unethical at best and incompetent at worst.”

This is the ethical boundary that we, as consumers, have to navigate. And it’s precisely this boundary that the creators of Once We Were Heroes have forced me, and you as a reader, to confront. This game supplement heavily employs AI-generated art, and makes no bones about it:

    “Recognizing their limited artistic expertise and budget, Jeremy and Matthew at Fool Moon Productions leverage generative AI to enhance their creative outputs. This includes generating thematic “original” artwork, refining existing designs, and improving written content by correcting spelling and grammar. Notably, even this disclaimer was crafted with the assistance of AI.”

It’s not my job, as a reviewer, to argue the rightness or wrongness of this policy or the motivations behind it. It IS my job as a reviewer to consider the efficacy of the results and to bring the matter to the attention of potential buyers, who can then make up their own minds.

To the maximum extent possible, this review will focus on the content without considering its source. If the use of AI has achieved something spectacularly fitting or evocative, I’ll comment on the fitness and the evocative nature of the art – and if something doesn’t fit, I won’t cut them any slack for the source; it will be judged by the standards of human art.

But I wanted to make that clear before we start, too.

To facilitate this review, I have been given a free copy of Once We Were Heroes. I have no other incentive to produce anything other than a fair and unbiased review.

Once We Were Heroes – First Impressions

Front Cover

The front cover gives a first impression of two worlds and a location trapped in between. It’s clearly a collage of two separate pieces of art, and the styles don’t quite mesh.

You can’t escape a first impression from the front cover, but it’s not all that promising a beginning. The art of the house at the bottom doesn’t feel like any of the other art in the product, and more importantly, doesn’t quite gel with the top part as a result.

The title – for some reason, I started thinking of this as “We Were Once Heroes”, and I think that derives from a grammatical choice in the title – specifically, the absence of a comma after “Once”. It’s a piece of minutia in the larger scheme of things, but it is the difference between a statement that attracts attention and commands interest, and something that’s more vague and leaves you wondering what it’s all about. Compare for yourselves:

Once We Were Heroes

Once, We Were Heroes

The Subtitle doesn’t help much. “An Adventure About Life After You Are Left” – Left where? Left Hanging? Left Alive? Left For Dead?

For all I know at this point, though, that might be a masterpiece summary – the answer might be “All Of The Above, and more”. At least it tells me that this is supposed to be an Adventure.

But the first impression is that the subtitle is there to try and hook a reader into buying the book because the title isn’t doing a strong enough sales job, and it’s too wordy to be very effective at that job. This is back-cover text, not something that belongs on the Front Cover, especially since it’s distracting from the art of the cover.

And, aside from knowing it’s an adventure, I still don’t really know enough about the product to be interested in buying it – though price would factor into that question. I’ll deal with that toward the end of this review.

Back Cover

The first place I go when the front cover doesn’t enlighten me enough (which is usually, to be fair) is the back cover, where I would expect to find a more verbose blurb describing the product.

Okay, so there are cosmic purple swirls evocative of space, or a peculiar storm, set against what might be a mountain and the same two ‘spheres’ of existence. And aside from the Fool Moon Logo and credit, there’s… nothing. This cements the impression that the subtitle was the back cover blurb at some point, and used on the back cover it would be more effective as a tease, because it wouldn’t be trying to sell the product.

As it stands, the back cover is pretty but leaves me none the wiser.

Fool Moon Productions

I want to call attention at this moment to the Fool Moon logo, which they were kind enough to supply in a higher-resolution format – the version below is actually a compromised version of it because I had to shrink it down.

https://www.dmsguild.com/en/product/535760/once-we-were-heroes

I’m calling attention to it because there’s a subtlety within it that you can barely make out in the back cover presented above. It consists – at first glance – of a wolf (evocative of a full moon) wearing a fool’s cap, and set inside a white disk (often used as a symbol of the full moon). But there’s the barest hint of something more, when you look closely.

To examine what I was seeing, I did a little digital editing to bring up the slight tonal difference that I was detecting and make it more prominent.

And now it’s clear to see that this isn’t just a yellow-white circle – it’s an actual representation of the full moon, as seen in the Northern Hemisphere.

Sidebar: Inverted Moon

Wait, what? People in different hemispheres see the moon differently?

Yep. Because the Earth is a sphere, people in the southern hemisphere are upside-down relative to those in the north, and as a result, the moon looks upside down to us, and the phases of the moon run in the opposite direction.

This image is from a post by “The Secrets Of The Universe” on Facebook, and from the logo top right, I assume that it is copyright by them. I have tweaked it slightly to enlarge the explanatory diagram at the top. Link to their post containing the original image, or click on the image itself.

But this is a rabbit hole full of traps for the unwary. Their post’s URL, and its text, claim that this happens because the moon is a sphere. WRONG, though they get everything else pretty much right – and got called out on the error in the comments.

This Post on Facebook by “World GeoDemo” gets the explanation right – but it has the flags that identify the perceived images back to front, which is only likely to spread confusion further.

So even the people explaining the phenomenon struggle to get the details right. We live in a topsy-turvy world, sometimes…

And all this because I wanted to know which perspective on the moon was being illustrated by Fool Moon’s logo.

Getting back to the point that I was trying to make: While it might have been more effective to paint out the ‘dark parts’ that lie under the wolf, the tonal difference shown is subtle enough that you don’t really notice it; it’s only when you darken those ‘blue areas’ that it becomes noticeable.

But the attention to detail displayed in the logo, as a general statement, boded well for what I might find within the product. Nuances and details and subtlety are what it promises; now it’s up to the product to deliver.

The other thing that scrolling through the PDF to the back cover does is hint at the scale of the product – the back cover is page 158, with the front cover counted as page 1. It’s BIG, a lot more so than most ‘adventures’, by a factor of 4 or 5. And that’s an important thing to notice at this point.

Art

Some of the art is quite evocative. This is perhaps the best image in the product, but one or two others come close. For the most part, though, the art is strongly illustrative but nothing more. It does (mostly) avoid the ‘plastic’ impression that some AI art possesses, thanks to the careful and subtle use of textures.

In fact, so much of the detail was lost in compressing the image above to fit Campaign Mastery’s display space that I decided to capture a larger partial image. The textures are still hard to make out but the impression they create is not.

The art has been generated using Affinity Suite, Dungeon Draft, and 2-Minute Tabletop. I don’t know any of those tools, but the latter two sound like they are mapping-related, and there are a number of richly-detailed maps provided, so I assume that the first was the primary source for the artwork. The disclaimer, quoted earlier, suggests that the primary human creators involved in the artwork creation were Jeremy “Wolf” Morris and Matthew “Soulforge” Walsh, who are also listed as the writers of the product.

And, for the most part, it’s not bad. I’ve included both the best and (in my opinion) worst as illustrations in this post, but for the most part, it’s effective – at communicating to the GM. I’ll delve into that comment a little later in the review; I’m still conveying my first impressions at this point.

Day-Night Theme

Many of the pieces contain a day-vs-night theme, which is obviously related to the ‘two worlds’ impression created by the cover. At this stage, I’m not sure of the relevance, but it’s too prevalent not to be significant, so I’ll be looking for an answer when I get into the text.

Encounter Illustrations

There is a stylistic thread that runs through most of the encounter illustrations. Sometimes it works; sometimes I’m not so sure. This is one of those ‘unsure’ examples – but it’s certainly the cutest Beholder that I’ve ever seen. All it lacks is a ribbon tied into a bow on top of its head. Is that impression appropriate? I don’t know yet. But this is NOT menacing in the way a Beholder usually would be.

Compare the Beholder with this Half-orc image. Clever use of negative space creates an impression of size, while the textures transform an image that might have been cartoonish into something more substantial. I wish it were larger though – I’ll discuss that in the text below.

So far as I can tell from a quick glance through the pages (used to select the images extracted for this review), there’s an image to go with each encounter, though this might be an inaccurate impression. It’s something for me to look for when I dig into the content.

Scene Illustrations

Locations are well illustrated. Some of them are stylistically more related to the encounter illustrations, others are more removed from that but with consistent tonality that works to create a sense of a unified whole.

This is an example of a scene illustration that is more in line with the encounter illustrations. The biggest problem with it is the size – I had to ENLARGE it to fit the available space.

I guess, right now, we get to the rub. In terms of presenting a representation of a scene or an encounter to the GM to help them interpret the text, the art is absolutely fine – for the most part. But it’s not all that useful for showing to players, it’s too small. Despite the large page count, this product would be even better if the locations and maybe the encounters were enlarged, even though this would add to that page count.

Sure, you can zoom in to enlarge the image…

…but that’s not a perfect solution. Either you cut the top and/or bottom off images, or you show players content to the side of what you’re trying to show them. That could be another area, it could be an encounter, it could be a magic item, it could be text – but what it is most likely to be is a surprise-killer.

Not enough thought has been put into how customers will actually use the product.

Having been involved in the production of Assassin’s Amulet and a few other things over the years, I can see why this has happened – it’s essentially the age-old problem of forest for the trees, and it’s an easy trap to fall into. In a nutshell, the creators were so busy actually making the content that no-one stepped back to look at usage, or not closely enough, anyway.

This goes right back to the initial content design decisions. Presenting the illustrations as full-width, 1/3-height panels would need to be decided right from the beginning, because it affects the size of the illustrations that you need. It would have made layout a lot more difficult, with text in columns and illustrations not. But the product would be a lot more user-friendly as a result.

Character Illustrations

There are plenty of character illustrations, too. I’m not sure if this is a petrified character or a statue – not without consulting the text – but it’s effective.

This image is probably more indicative of the character illustrations, many of which are obvious homages to characters from popular culture. Are these NPCs or PC presets? I’m not sure, yet. There are lots of more typical spot illustrations throughout, too.

The same problem affects most of the character illustrations in the book.

Now I don’t see this as a flaw in the product; it’s a lost opportunity to improve the product, but this won’t actually make it unusable, by any stretch of the imagination, and that’s the distinction that defines what I consider to be a flaw.

The Prelude Page

I don’t know whether they referred to this internally as a prelude or a preamble, but it’s the first solid information we get about what we’re looking at. It’s worth quoting the text in full:

An Adventure About Life After You Are Left

Step into the well-worn slippers of elderly parodies of pop culture heroes and heroines, enjoying a mundane day at the Adumbral Strobus Home for Retired Adventurers. But the ordinary turns to chaos when the entire facility is suddenly whisked away to another plane of existence. Waking up in this bizarre new realm, the adventurers quickly realize they’re not in Kansas anymore, Toto.

As they explore their surreal surroundings, they must unravel a series of perplexing mysteries. Clues scattered throughout the complex will help them escape the pocket dimension, discover the fate of their fellow residents, navigate the bizarre mutated growths and entropic rot, and decipher the strange artwork depicting one of their own. Along the way, they might even uncover some juicy staff scandals.

Venture into the enigmas of the Adumbral Strobus Complex to uncover what Dr Mortem has been doing with the poor inmates of the Asylum for the Neglected Elderly. Confront him in the Adumbral Strobus Institute of Entropic Research to find a way to return yourselves and your home to the material plane. Can you solve the riddles, face the horrors, and lead your comrades back home? Adventure and intrigue await in “Once We Were Heroes”!

And remember, whatever you do, don’t look too closely at the toilets.

Okay, so some of the characters are presets, and some are NPCs. The premise is that a nursing home for elderly ‘retired heroes’ from many different realities gets pitched into somewhere else, and the main quest is to get home again. But there are side quests along the way that may impact the success or failure of that main quest. This is a micro-game setting as much as it is an adventure.

Nostalgia, pop culture, iconic characters, and a situation that pitches them all into one last great adventure – sounds intriguing.

Let’s talk for a minute about the Font. For viewing on the internet or on-screen, it’s long been recognized that a Serif font is not ideal – that’s why Campaign Mastery uses a dirt-common sans-serif font for its content. It’s more legible and less tiring. On the printed page, that is reversed. You can read a serif font on the printed page up to three or four times as quickly as you can a sans-serif font. So this product is optimized for screen viewing and not for printing. That’s fine, it’s just something to be aware of.

Because you want headings to stand out, they are frequently in whatever font you aren’t using for your text, and that’s the case here, too. So the designers of the product know what they are doing, or (at the very least) have imitated the work of someone else who knows what they are doing, in terms of typography.

There’s something a little strange about the line heights in some of the text, however. This is usually a result of peculiarities with the actual font used, and it’s incredibly hard to get right. I can’t mark the product down because of it, but I have to mention it.

The text above is then followed by a humorous “Disclaimer” passage which at first glance might appear to be just fluff. This is written, like all fine print ever, in a far smaller version of the main font. But it does actually serve a valid function in terms of the content – in essence, it heads off the likelihood that someone will object to the specific adaptation of a specific entity from pop culture.

“Involuntary translocation across dimensional boundaries may present unforeseen hazards. Accordingly, Adumbral Strobus accepts no liability for any personal belongings that may become entropically compromised, nor for any injuries, accidents, transmogrifications, or sudden instances of extra-dimensional dissolution occurring within the confines of our esteemed establishment during such excursions. For your safety and well-being, certain chambers, thoroughfares, and inter-dimensional portals may be sealed off without prior notification.

“Height, weight, and chronological restrictions may apply in some dimensions, and individuals with specific physiological, psychological, or metaphysical conditions or impairments may find themselves unable to participate in certain dimensional experiences. It is advised, with the utmost gravity, that consumption of any foodstuffs or beverages discovered in alternate realities is strictly ill-advised, as Adumbral Strobus accepts no responsibility for any ensuing transformations, spontaneous combustion, or heroic expulsions of stomach contents that may result from such gastronomic indiscretions.”

The disclaimer continues for another couple of paragraphs after that.

This is exactly the sort of nuance and attention to subtle detail that I expected to find from the Logo, and so it gets a big tick. The final sentence is worth highlighting because it (a) smacks of an Alice-In-Wonderland vibe, and (b) implies that some characters who take the risk may regain some of their youth and former glory. But it also suggests that such reactions will be addressed on a case-by-case basis within the content – which speaks well of the attention to detail within the content.

The Credits and Contents Pages

Pages 4-6 cover this ground. I noted that the credits acknowledged the copyrights over D&D, Forgotten Realms, Ravenloft, and Eberron amongst others.

The contents page reinforces earlier impressions. The introduction runs for four pages from 7 to 11, and will get looked at in detail below. Chapter 1 is “Welcome To The Adumbral Strobus”, Chapter 2 is “The Extra-planar Adventures”, Chapter 3 is “Asylum for the Neglected Elderly” and Chapters 4 and 5 relate to the “Institute of Entropic Research”. It also contains 4 versions of the Aftermath and name-drops three more entities: Mortem, Yixith, and Xeghic. At this point, I know from the prelude that Mortem is a mad scientist who has been experimenting on patients, but don’t know the other two – so I suspect (until I know better) that they are the personifications of the “Day vs Night” conflict implied by the artwork. If so, one or both are probably responsible for the transdimensional relocation – but that’s just speculation with precious little solid foundation.

I have to admit to having a minor problem with the name “Adumbral Strobus” – I keep wanting to read it “Admiral Strobus”. That might be just me, or it might be more common than I think it is. But I’m quite sure that it would trip me up sooner or later.

The 5 main chapters are then followed by 7 appendices, and Appendix C, “Character Concepts” stands out to me. It tells me – without actually saying so – that this is an adventure designed for some variety of D&D / Pathfinder, because it lists the different character classes and then offers two residents as representative of that class.

The Homages, when you look at them, are very tongue-in-cheek. The one that I used as an illustration is of “Prof. Alfus Percy Ulric Bron Dumblebeard” – I don’t think anyone will need a second guess as to who this is supposed to represent. But that sets a tone for the rest of the product that seems a little incompatible with the content thereof – it will be interesting to see how they cope with that.

The Introduction

Let’s look at the subsections of the Introduction – “About This Adventure,” “Once They Are Heroes,” “Adventure Summary,” “Running The Adventure,” “Character Creation,” “Locales” and “Dungeon Master’s Preparation Checklist”. Some of these are subdivided.

The Game System

Quote: “Once We Were Heroes” is an adventure based on the 5th edition of the worlds most popular role playing game, designed for four to six characters, where the player characters take on the roles of the story’s heroes. This book outlines the villains and monsters they must defeat, as well as the locations they must explore, to successfully complete the adventure.

So, that answers that question, but it produces a big black mark on the product in terms of my personal taste.

You see, like a lot of others, my friends and I participated in the WotC 5e playtest, back when it was “D&DNext,” and after a while, we noticed that every time our feedback said “Zig Left,” the next iteration of the rules went “Zag Right”. There was little-or-no interaction with anyone at WotC in the playtesting feedback reports that we filed, so there was little explanation as to this phenomenon; we could only assume that “Zag Right” was the more popular choice amongst other playtesters. Slowly, what ended up D&D 5e became something we were no longer interested in playing. Some have since changed their minds; others have not. It is what it is.

The problem with tying yourself to one game system so absolutely is that you find yourself living and dying with that game system. When writing Assassin’s Amulet, my co-authors and I worked very hard at making everything compatible with both D&D 3.5 and Pathfinder for that very reason.

Does that mean that this is unrunnable, or that it shouldn’t even be up for purchase consideration? Absolutely not. But it does mean that to run it, I would need to adapt it, and that adds to the hurdles that the quality of the product has to surmount.

Anyway, getting back to the “About This Adventure” text… setting for this adventure, right… can be placed in many published settings or even a world of the DM’s creation, good… Intended to be played as a one-shot, okay… Players can either choose from the provided options or create their own 10th-level characters, okay.

…The Tone of this adventure is a comedic take on a horror mystery, okay that’s interesting – those two are hard to make go together (though it can be done)… encourage you not to take it too seriously, okay…

Once They Were Heroes

“Many years ago, the world was saved by a legendary group of adventurers. They stood against the darkness, vanquished terrible evils, and ensured peace for generations…”

So the characters / PCs are not from ‘all over,’ they were allies and teammates who worked together, and then ALL of them ended up in this place? The first part is a disappointment, and the second strains credibility to breaking point right off the bat.

Were I to run this adventure, I would probably go back to my original impression – that these are retired heroes from multiple planes of reality who have been ‘parked’ in this facility; they don’t know each other; and the big thing that it offers (besides aged care) is anonymity, distance from the scenes that made them legendary, so that no-one from home can call them up for one last time. This is a Retirement Home.

Some may find that this interpretation is even harder to swallow, in terms of credibility, and it probably is – if you run it using normal characters and not the ‘pop culture icons’ provided. But that risks undermining the ‘fun factor’ and making this all too serious. And if you create your own versions of iconic pop culture characters, you’ll find yourself back at the same basic question.

Of course, you may find that the premise doesn’t stretch your credibility as badly as it does mine – but that still doesn’t negate the possibility that your players may struggle with it more than you do. So this is something that every GM will have to at least think about addressing.

The introduction then goes on to outline the adventure, but I’m not going to get into those specifics; there’s a lot of information that players will have to find out the hard way.

The plotline breaks down into three main sections – a ‘get to know you’ routine morning (my comments above play into this section very heavily); a sudden event and their need to work out what’s happened and what they can do about it, which leads into investigating the mystery and stumbling across side-plots; and the ultimate confrontation and resolution of the plotline.

Running The Adventure

This is pretty standard fare, with no surprises. Stat blocks for all encounters, and any spells or equipment referenced are provided, so the PHB and DMG are the only real requirements.

Character Creation

This section contains ‘meta-rules’ for character generation and explicitly references the PCs as parodies of pop-culture icons, who have aged and retired. It also outlines equipment (very limited) and the aging of the characters (which may not go far enough, but there’s a playability need that has to be taken into account).

“Additionally, randomly allocate one flaw and one feature to each character, either by rolling a d20 and referring to the table in the Appendix A or by dealing cards from the provided deck. Encourage players to incorporate these traits into their role-playing to add depth and humor to their characters.”

The text also states that the characters supplied in appendices C and D should be considered backups for players who are struggling to create their own characters, not as the primary source.

Locales

Interior maps are provided for three buildings within the Adumbral Strobus complex – the Home For Retired Adventurers, the Asylum for the Neglected Elderly, and the Admin building, which includes the facilities belonging to Dr Mortem.

There are two pocket dimensions, the Everburn and Evergloom, which have an interesting cosmological concept that makes total sense in terms of the adventure as described (I’m being deliberately vague to leave players who may read this in the dark).

Visiting these pocket dimensions is not quite what players might expect – there are stings in the tail that are exactly the sort of thing that I like to build into my own campaigns.

This section also categorically identifies Yixith and Xeghic, who were name-dropped in earlier material, and their relationship to the plotline. I have one suggestion to make in this respect but don’t want to make it too easily accessed, so it’s in black text against a black background in a text box below – select the text contents with a shift-and-mouse-drag to read it. The text DOES contain spoilers that will ruin the adventure for any player who reads it, be warned.

One realm is a microplane of life and the other of death. Yixith and Xeghic are inhabitants of these microplanes, one to each. The depictions of each match the illustrations of the microplanes. I suggest REVERSING the indicated images WHEN THEY ARE ‘AT HOME’ so that they contrast with their environments. This will throw a curve ball that is likely to deceive even experienced players – for a while.

After a spot illustration of a nameplate that is REALLY hard to read, the introduction segues into a brief description of the setting – the grounds of Adumbral Strobus, the retirement home building, the Asylum, and the Institute.

Maps vs Battlemaps

The creators suggest using theater of the mind, with the GM referring to the maps provided for cues and the battlemaps in Appendix G reserved for combat situations. They point out that this will speed play, which is true. But they don’t mention that a battlemap should only be placed on the table when combat is actually about to begin – don’t telegraph the situation to the players! Stay in theater-of-the-mind mode until the last possible moment.

This also plays into my statements regarding image size. It can be argued that these are intended only for the GM, and not for player consumption, and it seems clear that this is what the writers had in mind; but it can also be argued that theater-of-the-mind play is sped up and improved by giving the group a common visual reference to process.

Prep Checklist

This has some additional steps not previously mentioned, and shouldn’t be ignored. But that’s what is most likely to happen because the only two entries on the first page on which it appears are reiterations of advice already provided. All the new content is on page 13. This is the biggest misstep so far in the content, in my opinion; if this is as bad as things get, OWWH will deserve very high praise and recommendations, indeed.

Encounter Balancing

Closing out the Introduction is a section on Encounter Balancing. There’s nothing startling or wrong with this section; the biggest issue is what is Not there.

This adventure is designed, according to the “About This Adventure” text, for 4-6 characters, with a presumed ratio of one character per player.

This section shows how to adjust encounters for 4, 5, or 6 players. It also has an adjustment for having fewer than the recommended number of players (3). But it makes no accommodation for groups with more than the recommended number. It’s not likely to come up often – but surely expending the three lines of text needed to cope with 7 or 8 players would not have been too much to ask?

That said, as I commented above, if this is the biggest faux pas, this adventure will be doing very well indeed.

Looking Deeper – Chapter 1

I’m not going to break this down into subsections the way I did the introduction – there will be too much trouble with spoilers if I do that. Instead I’m going to skim the chapter and report back.

  • While I can guess, I don’t know for certain what “Balloon Volleyball,” or its in-game equivalent, “Beholder Ball”, is.
  • It would have been a good idea to warn the GM to come up with “20 questions” for the Getting Ahead game. Unless this game is also not what I think it is.
  • Tess Trill – every facility of this type needs a hot girl for those characters that way inclined to drool over, and she fills that need here. Her male equivalent for those looking in the other direction is the cleaner, Fenim. The text hints that he might have feelings for her, about which she is naively ignorant. Adding the above to their respective descriptions adds massively to the background and general realism of the setting – even if they are cliches.
  • That credibility is severely needed to counterbalance the presence of Derrick the Chevalier. Older nobility, as a general rule, do NOT get shuffled off to somewhere like this. Instead of an actual Noble, he should be a commoner with delusions of Nobility – or maybe pretensions of Nobility.
  • This whole sub-sequence would be a lot easier to roleplay if there was some indication of what this group was actually up to – they are clearly up to something that they probably shouldn’t be. The GM should probably also prepare some relationship cues that can be expressed through dialogue with the PCs. These might be friendly (“Don’t forget we’ve got a chess game to finish later”) to softly hostile (“Mind your own business, [PC], and I’ll mind mine, and we’ll both be happier for staying out of each other’s way.”) In general, I get the impression that the PCs are the ones who have ‘settled’ into a calm existence in the retirement home, while this group are those who are still rebelling a bit and bucking the discipline. That too, would be useful direction – especially if that wasn’t the impression the creators intended.
  • Okay, now we get the explanation of the 20 questions game. Some sort of indicator at the first mention that ‘details will be provided below’ would have been helpful.
  • While the text solves the puzzle, some sort of motivation on the part of the guilty party would be helpful.
  • Context within the adventure explains the Beholder image – so my earlier comments regarding it can be ignored.
  • The first real plot hole – “After the conclusion of the pirate hunt game”… but no such game has been specified or described.

Nine notes, two of them canceled out by a third, and only one (maybe two) really critical. I’ve read a lot of adventures and while there have been one or two that have scored ten out of ten for content, the vast majority have far more serious faux pas and plot holes.

Narrative Content

Most importantly, the narrative generally succeeds in bringing the location to life in a way that feels natural, realistic and interesting. Nailing any two of those three can be difficult; ticking all three boxes – especially in such an unorthodox setting, with… unusual… characters – is top-rate work.

Locations, Encounters, Mysteries, Solutions, and Action: Chapters 2-4

At this point, I don’t think I need to delve into these areas too deeply. While it’s possible that one of them will lower the established standard, there’s no reason to expect it. A quick skim of the next few chapters confirms that impression; this is a really well-written, well-crafted adventure.

It may have the occasional small hole for you to plug, but nothing that won’t be easily taken care of if you do what everyone always says to do and read the whole adventure before play.

I’ve very much been mindful, in writing this review, not to read ahead, but to generate my comments as I came to each passage of content. That permits an honest impression of what’s actually presented by that point in the product, with no cheating by looking ahead.

When I was selecting images, I was deliberately careful to avoid reading any of the text. When I was reading the introduction and making comments on it, I wasn’t looking ahead – I was reacting to what was currently in front of me, in the context of what I had already read. Similarly, my notes on Chapter 1 were very much stream-of-consciousness as I was reading – and you can see in those comments where that caught me out.

Above all else, I was making every effort to make this review both honest and comprehensive, without any bias resulting from the source of the artwork. I hope that I’ve succeeded, so that you can make a fair assessment of what’s being offered before factoring in your own views on the source of the art.

Price

The price is Australian $7.58, which is US $4.95. I would actually expect the price to be $5 from this conversion; I suspect that what I got was the “live” conversion rate and not the daily rate. And if you don’t know the difference, don’t worry about it.

Where Do You Get It?

https://www.dmsguild.com/en/product/535760/once-we-were-heroes, or just click on any of the illustrations excerpted from the product.

The Judgment Call

So here’s the bottom line: If you are really seriously opposed to AI-generated art in RPG products, I don’t think this adventure will change your mind.

If, however, you are willing to even contemplate the possibility that there are potentially valid counterarguments to that opposition, this adventure has enough merit that you should contemplate buying it.

Only the maps are really essential for play; you can blank out every other illustration and still be left with a product worth your attention. It will be diminished by that act, but that’s your choice to make.

If the art had not been AI-sourced, there are two possible paths that this adventure could have taken:

  • Far less art, far weaker presentation, and far less appeal despite the length. Marketplace viability would probably require reduction in the price by 1/3, eating directly into the profits and making the existence of another small publisher less viable. Or,
  • Far less art of potentially slightly superior quality, and a price tag closer to USD $40 – a price that would be sure to compromise sales. The net effect is the same – reduced profitability and a small publisher becoming less viable within the hobby.

Some may argue that no publisher that crosses their hard line deserves to be viable in the market. I think that’s going too far.

For my (metaphoric) money, Fool Moon have done everything right in terms of ethics, here. They are up-front about the art and its source. They have done their best to leverage the output to the maximum benefit of their product without making it an indispensable element of that product.

Is it the greatest RPG product ever published? Probably not, but what right do you have to expect that – especially at this price point?

Is it worth every one of those US dollars? I think it is, and then a couple. And I don’t think you can ask more of Fool Moon Productions than that.

Traits of Exotic d20 Substitutes pt 3: The Really Weird


Lots of die configurations can substitute for a d20, or for 3d6. This article looks at some of the most unusual. Part 3 of 3.

The image of the balance is by Anna Varsányi from Pixabay. I’ve changed its balance, added a load of dice, and changed the background color.

There’s something indescribably appropriate about writing the first words of this post on Halloween – after all, many of these rolls are monsters unfit for gentle company. At the same time, some of them might get under your skin and make themselves at home, because there are some absolutely fascinating (not to mention strange) alternatives being put under the microscope today!

Because the die rolls are so strange, I’ve decided that each graph will be linked to a larger version that can be opened in a separate tab by clicking on the thumbnail. I’m also toying with the notion of doing some even larger versions in a PDF – if so, I’ll feature the link to it prominently.

I’m kicking things off today with a last-minute extra inclusion just as a warm-up. Although conceptually wild, it’s by far the tamest alternative on show today!

BONUS EXTRA: Exotic Choice #0a: 2d6+1 (for high results desired) or 2d6+6 (for low results desired)

I came up with this while finalizing the formatting of the previous post, when a couple of the things I had written about caught my eye in succession and sparked new thoughts.

Specifically: what if the roll was 3d6 – but one of the dice was fixed, in the opposite direction of what a character wants to roll to succeed? A ‘1’ if they want to roll high, a ‘6’ if they want to roll low?

In form, this would then become a triangular probability curve, because it’s functionally the same as 2d6 plus modifier – against a target intended for 3d6. That modifier is critical – the average roll of a d6 is 3.5, so a 1 effectively means a -2.5 modifier against a target intended for 3d6 when you are trying to roll high, and a 6 means a +2.5 modifier on 3d6 when you are trying to roll low.
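
If you want to verify that shift for yourself, here’s a minimal Python sketch (mine, not part of the original working) that brute-forces all three distributions and prints their averages and per-result probabilities:

    from itertools import product
    from collections import Counter

    def distribution(dice, modifier=0):
        # Enumerate every combination of the listed dice and tally result probabilities.
        counts = Counter(sum(faces) + modifier
                         for faces in product(*[range(1, d + 1) for d in dice]))
        total = sum(counts.values())
        return {result: count / total for result, count in sorted(counts.items())}

    for label, dice, mod in [("3d6", [6, 6, 6], 0),
                             ("2d6+1 (one die fixed at 1)", [6, 6], 1),
                             ("2d6+6 (one die fixed at 6)", [6, 6], 6)]:
        dist = distribution(dice, mod)
        average = sum(r * p for r, p in dist.items())
        print(label, "average", round(average, 2))
        for result, p in dist.items():
            print("  ", result, round(p * 100, 2), "%")

The averages come out at 10.5, 8 and 13 respectively – the 2.5-point shift in each direction described above.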

Integer values matter when they trigger a binary choice like that. In the Hero System, several defined rolls set the standards: 5/-, 7/-, 11/-, 14/-, and 17/-. These are all attempting to roll low, to get below the target number. In D&D, back when it was still 3d6 based, you often had to roll high but sometimes you had to roll low – it depends on what you’re rolling for. With 3rd Ed, this was cleaned up so that you were always trying to roll higher than the target. So both variations have to be evaluated. To do so, I’ll use the same standards – but look at rolling 5+, 7+, 11+, 14+, and 17+ – even though that edition also shifted to the d20. So this is a legitimate option for replacement of both.

With the Hero System rolls, the higher the target number, the easier the roll is supposed to be. With ‘modern’ D&D and Pathfinder, the higher the target number, the harder the roll.

We start, as usual, with some probability graphs:

Every result in between the two curves is bad news for the rolling character. Click the thumbnail for 1024 x 361 version.

It’s the same story here. Click the thumbnail for 1024 x 361 version.

Min, Max, Ave

    2d6+1:

      Minimum 3
      Maximum 13
      Average 8

    2d6+6:

      Minimum 8
      Maximum 18
      Average 13

    The fact that one peaks as the other begins makes me kinda curious about what the sum of the two – 4d6+7 – would look like, but that’s outside the scope of this article.

The Thresholds
    The 1% Threshold

      Everything beats this minimum – no valid results are off the table.

    The 3% Threshold

      On 2d6+1, 3 and 13 are just below this threshold. On 2d6+6, 8 and 18 are in the same category. In both cases, it’s the most extreme results only; everything else is in the next threshold group or higher.

      In fact, there’s nothing in the 3%-5% band, either. The probability is rising too quickly for that.

    The 5% Threshold

      Breaking the 5% threshold but not making it to the next, 10% mark, are a couple of results on each side of each of the curves.

      2d6+1: 4-5 and 11-12; 2d6+6: 9-10 and 16-17. So these results are more likely to come up than on a d20.

    The 10% Threshold

      Between 10% and 15% are also two results from each side of the curve.

      2d6+1: 6-7 and 9-10. 2d6+6: 11-12 and 14-15. These results are more likely to come up than on a d10.

    The 15% Threshold

      That leaves only the absolute peaks of both ‘curves’, 8 and 13 respectively. They aren’t much higher than 15% but they legitimately beat that target. In fact, these results have the same probability as a flat 1d6 roll plus modifiers.

Slices Of Range: Percentages Of Probability
    Range Of Results

      3-13 and 8-18 have exactly the same range of results, which is not all that surprising since they are both 2d6 rolls. 11 results in each. The odd number means that there is a single result that represents the peak probability – until you get into the exotic die rolls to come, anyway!

    Ave – Min, Max – Ave
      These values will also be the same in all four cases – 8-3=5, 13-8=5, and 18-13=5.
    1/3 (Ave-Min) + Min

      Here’s where things have to diverge because the two rolls have different minimum values.

      1/3 of 5 is 1.6667, which will be common to both.

      1.6667 + 3 = 4.6667, so 3 & 4 are the lowest tier of results for 2d6+1. They have a combined probability of 8.33%.

      1.6667 + 8 = 9.6667, so 8 & 9 are the equivalents (with the same combined probability) for 2d6+6.

    2/3 (Ave-Min) + Min

      2/3 of 5 is 3.3333, again common to both because it’s a function of the 2d6 part of the rolls.

      3.3333 + 3 = 6.3333, so 5 and 6 are the middle lower results band for 2d6+1. They have a combined probability of 27.78 – 8.33 = 19.45%.

      3.3333 + 8 = 11.3333, so 10 and 11 are the equivalents for 2d6+6, with the same probability.

    The Lower Core

      That means that 7 and half of 8 comprise the lower core for 2d6+1 – that’s 13.89 + 1/2 x 16.67 = 22.225%.

      The 2d6+6 equivalents, with the same probability, are 12 and half of 13.

    The Upper Core: 1/3 (Max-Ave) + Ave

      Starting on the downhill leg of the probability charts, we have another 22.225% representing 9 and the other half of 8 on 2d6+1, and 14 and the other half of 13 on 2d6+6.

    2/3 (Max-Ave) + Ave

      Those are followed by the upper middle, a combined probability of 19.45% again, and a span of 2. For 2d6+1, that’s 10 & 11, and for 2d6+6, it’s 15 & 16.

    The Lofty Outcomes

      The very best results, with a probability of 8.33%, are 12 & 13 on 2d6+1, and 17 & 18 on 2d6+6.

    2d6+1:

      03-04: 8.33%
      05-06: 19.45%
      07-08: 22.225%
      08-09: 22.225%
      10-11: 19.45%
      12-13: 8.33%

    2d6+6:

      08-09: 8.33%
      10-11: 19.45%
      12-13: 22.225%
      13-14: 22.225%
      15-16: 19.45%
      17-18: 8.33%

    No real surprises in this set of results except possibly the closeness of 19.45% to 22.225% – especially given the threshold indicator that the probability slope is quite steep with 2d6.
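
    If you’d rather not take the band arithmetic on faith, this short Python sketch (mine) reproduces the 2d6+1 table above, with the average result of 8 split half-and-half between the two core bands; the 2d6+6 figures are identical, just 5 higher:

      from itertools import product
      from collections import Counter

      counts = Counter(a + b + 1 for a, b in product(range(1, 7), repeat=2))  # 2d6+1
      prob = {r: c / 36 for r, c in counts.items()}

      # The six bands used above; the 2d6+6 version is identical, just 5 higher.
      bands = {
          "03-04": prob[3] + prob[4],
          "05-06": prob[5] + prob[6],
          "07-08": prob[7] + prob[8] / 2,
          "08-09": prob[8] / 2 + prob[9],
          "10-11": prob[10] + prob[11],
          "12-13": prob[12] + prob[13],
      }
      for name, p in bands.items():
          print(name, round(p * 100, 2), "%")

    (The rounding comes out fractionally differently – 19.44% and 22.22% rather than 19.45% and 22.225% – but it’s the same arithmetic.)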

Slices Of Probability: The Definitive Result Values

    Slicing up the 100% pie into 5 slices as equally as possible is the name of the game in this subsection.

    The Lowest 20%

      20% falls after the third result on each curve, so the lowest 20% of results comprise outcomes of (2d6+1) 3-5 and (2d6+6) 8-10. I think it’s just a coincidence that the upper limit of one is double the upper limit of the other.

    Second Lowest 20%

      21-40% contains only a single result in each case – 6 for 2d6+1 and 11 for 2d6+6.

    The Middle 20%

      41-60% contains two values, including the peak. For 2d6+1, those are 7-8, and for 2d6+6, 12-13.

    Second-Highest 20%

      61-80% again holds just one result – 9 for 2d6+1, and 14 for 2d6+6.

    Highest 20%

      Which means the highest 20% of rolls will contain the results from 10-13 for 2d6+1 and 15-18 for 2d6+6.

      Peak Probability

      In both cases the peak probability is 16.67%.

    Matching Result: 1/3 Peak Probability

      1/3 x 16.67 = 5.5567%. This lands in between 3 & 4 (and 12 & 13) on 2d6+1, and between 8 & 9 and 17 & 18 on 2d6+6. So, once again, only the most extreme results are chosen by this method. That’s actually rather predictable, given the earlier threshold results, since 5.5567 is so close to the 5% threshold.

    Matching Result: 2/3 Peak Probability

      2/3 x 16.67 = 11.1133%. As it happens, there are results that have 11.11% probability of occurring, and so these would have to be right on this line. On 2d6+1, these are 6 and 10 – so 4-6 are in this probability zone, as are 10-12. The 2d6+6 equivalents are, predictably, 5 higher – 9-11 and 15-17.

      The most probable results are therefore 7-9 (on 2d6+1) and 12-14 (on 2d6+6).

    2d6+1:

      01-20%: 3-5 (span 3)
      21-40%: 6 (span 1)
      41-60%: 7-8 (span 2)
      61-80%: 9 (span 1)
      81-100%: 10-13 (span 4)

      < 1/3 peak probability: 3 (span 1)
      1/3 – 2/3 peak probability: 4-6 (span 3)
      2/3 – peak – 2/3 peak: 7-9 (span 3)
      2/3 – 1/3 peak probability: 10-12 (span 3)
      < 1/3 peak probability: 13 (span 1)

      It’s the evenness of the spans in the latter table that is most telling. While there is clearly a peak probability associated with the innermost results, there is a significant chance of a result outside them. In fact, there is a 100 – 13.89 x 2 – 16.67 = 55.55% chance that the result of any given roll will be outside the 7-8-9 peak.

    2d6+6:

      01-20%: 8-10 (span 3)
      21-40%: 11 (span 1)
      41-60%: 12-13 (span 2)
      61-80%: 14 (span 1)
      81-100%: 15-18 (span 4)

      < 1/3 peak probability: 8 (span 1)
      1/3 – 2/3 peak probability: 9-11 (span 3)
      2/3 – peak – 2/3 peak: 12-14 (span 3)
      2/3 – 1/3 peak probability: 15-17 (span 3)
      < 1/3 peak probability: 18 (span 1)

      And these are exactly the same, just 5 higher on the results.
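
      The cumulative probabilities that these 20% slices are read from are easy to generate – a Python sketch of mine, assuming the 2d6+1 version (add 5 to every result for 2d6+6):

        from itertools import product
        from collections import Counter

        counts = Counter(a + b + 1 for a, b in product(range(1, 7), repeat=2))  # 2d6+1
        running = 0
        for result in sorted(counts):
            running += counts[result]
            print(result, "this result:", round(counts[result] / 36 * 100, 2),
                  "% cumulative:", round(running / 36 * 100, 2), "%")

      Reading off where the cumulative column crosses 20%, 40%, 60% and 80% gives the five bands tabulated above.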

Summary Of Results

    The bottom line in terms of mechanics is that you are taking a d6 away from the character’s roll and replacing it with the worst possible outcome.

    But I also have to make the point that you can work it in the other direction – choosing the option that is most beneficial to a character’s chances of success.

When To Use This Substitute

    That matters because of what this die roll is saying to whoever runs that character. If it’s the more difficult option, you are telling the operator of the character, “I want this roll to fail and I want to be sure that you know that”. Or, more simply, “This roll deserves to fail.”

    The alternative construction, that benefits the character’s chances of success, says, “I want this roll to succeed and I don’t care who thinks I’m being biased.”

    In other words, this construction should be reserved for those occasions when the whole point of the roll is making that statement. When a move is so brain-dead stupid that it doesn’t deserve even the minimal chance of success it might have on 3d6 or d20.

    So I guess I need to actually compare what the chances of success are for different targets.

    Target 17/- (17 or less)

      With 3d6, you have a 99.54% chance of making this target.

      With a d20, it’s 85% chance.

      With the penalizing construction (2d6+6), it’s 97.22%.

      With the advantageous construction for an “or less” roll (2d6+1), it’s 100% certain that you will succeed.

    Target 14/-

      With 3d6, your chances of success are 90.74%. With a d20, it’s 70%.

      With 2d6+6, it’s 72.2% – so better than on a d20, but not by much. Compared to a 3d6 roll, you are way worse off.

      With 2d6+1, it’s still 100% success.

    Target 11/-

      3d6 gives a 62.5% chance of success. A d20 gives 55%.

      2d6+6 gives 27.78% chance. That’s like half the chance of a d20.

      And, for the first time, not even 2d6+1 makes success certain – there is a 91.67% chance of success, so the odds are way better than ‘normal’.

    Target 8/-

      3d6 has only a 25.93% chance of making this roll. 3 times in 4, roughly, you would expect to fail. On a d20, the chances are a little better at 40%, but the odds are still stacked against you a little.

      2d6+6 has just a 2.78% chance of success. It literally takes the lowest roll possible to make this target. Unless the dice come up snake eyes, you’ve failed.

      2d6+1 has a better shot at it – 58.33% – but you’re still going to fail almost half the time. This is actually a fairly hard target to achieve!

    Target 5/-

      …but not as hard as this target. On 3d6 you have just a 4.63% chance. On a d20 it’s a little over 5 times that, at 25%.

      2d6+6 – forget it, your lowest result is an 8, so a 5 or less is not an option.

      Even the construction that appears to give as good a chance at success as you are likely to get, 2d6+1, has only a 16.67% chance of success – so a d20 is actually the more generous option with a target this low.

    It’s much the same story if you look at rolling X or more, just in the other direction. The 2d6+6 becomes the generous option, and the 2d6+1, the handicapping one.

    Either way, this choice is all about the message; the actual die roll is almost superfluous.
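
    If you want to check these success chances, or test targets other than the ones I’ve used, a brute-force count is quick. This Python sketch is mine, not anything official:

      from itertools import product

      def chance_at_most(dice, modifier, target):
          # P(sum of the listed dice + modifier <= target), by enumerating every combination.
          rolls = list(product(*[range(1, d + 1) for d in dice]))
          return sum(sum(r) + modifier <= target for r in rolls) / len(rolls)

      options = {"3d6": ([6, 6, 6], 0), "d20": ([20], 0),
                 "2d6+6": ([6, 6], 6), "2d6+1": ([6, 6], 1)}
      for target in (17, 14, 11, 8, 5):
          summary = ", ".join(name + " " + str(round(chance_at_most(d, m, target) * 100, 2)) + "%"
                              for name, (d, m) in options.items())
          print("Target", target, "or less:", summary)

    For the ‘X or more’ direction, flip the comparison to >= instead.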

    Exotic Choice #8: d4 x d6 – d4 +4 or +6

    Now, things start getting strange. For this, you need two different colored d4s and a d6. One d4 is designated the multiplier; whatever shows on the face of that die gets multiplied by whatever’s showing on the d6. The usual nomenclature around me borrows from the d% – the multiplying d4 is “high”.

    The +4 option is for replacing a d20 roll, the +6 for replacing a 3d6 roll.

    If you want strange, you’ve got it!

    Originally, I had this listed with no modifier whatsoever, but I was looking at the resulting probability chart and thinking about the prospects of replacing d20 and 3d6, and the modifiers suddenly made a lot of sense to me.

    Let me explain why. A d20 has results from 1 to 20, yes? The native construction of this roll gave results from -3 to 23, which puts the mid-point of the results (NOT the average!) at 10. But the bulk of the probability is below this, at around 0-5. A +4 modifier shifts the curve to the right, because that’s what positive modifiers do – the middle of the range becomes 14, and the bulk of the probability moves to around 4-9. That makes it a usable substitute, if one that’s heavily weighted low.

    3d6 ranges from 3-18. The significant probability results of the native curve end at around 11. So adding 6 shifts the minimum to a 3d6-comparable 3, the middle of the range to about 16, the peak probability to 6-11, and the end of that significant results range up to about 17, again making this a usable substitute for the 3d6 roll, again one that is biased low.

    Applying the different modifiers makes both versions fit for purpose, and makes the advice regarding the use of this construction the same for both – or close enough to it.

    With that addressed, let’s talk about the core of the roll. Multiplied die rolls have a singular characteristic: they bulk the probabilities low, but have long tails leading off into higher values. These come at a penalty – certain results that simply can’t happen. There’s no multiple that leads to a result of 17, for example – it’s a prime number.

    To solve this issue, you either have to add a die roll or subtract one. Adding one extends the length of the tail by the size of the added die roll, subtracting one shortens it. Adding one also shifts the probabilities right by the average of the added die, while subtracting shifts it left by that average.

    Once you’ve decided to use a multiplied die roll, you’re then negotiating a compromise between the native result and a useful configuration by way of the added or subtracted die roll. The smaller you make it, the smaller the impact – so I thought hard about d2 and d3, but decided that d4 was small enough in this instance. I also considered d5 and d6, but thought that the impact of the larger die was too significant. So that’s why this offering is d4 x d6 – d4 + 4 or 6.

    I’m going to introduce a new way of writing die rolls, having typed that sequence once too often for it to be convenient.

    It’s a simple extension of what’s already done – low dice size to high within an expression, ending with ‘d0’ i.e. modifiers. The new part is a way to indicate Conditional Changes.

    In this case,

    d4 x d6 – d4 +4,6 [d20r,3d6r]

    The conditional parts are separated by a comma instead of text and are followed by a symbolic representation of the condition for differentiation between the two. Once that has been established, in whatever context you are using this notation, you can leave off the content of the square brackets, with the empty brackets meaning “as before”:

    d4 x d6 – d4 +4,6 [ ]

    So, for example, you might have the following as a legitimate construct for some purpose:

    d4,d6 [a,b] x d6 – d6 +1,10 [ab,c]

    [a] = d20r
    [b] = 3d6r
    [c] = x ->20

    and, after the first use, you would just write

    d4,d6 [ ] x d6 – d6 +1,10 [ ]

    until the content of the square brackets next changed.

    Let’s break that example down for anyone who’s struggling to keep up (should be no-one but you can never tell).

    If this roll is to be used in place of a d20, you get condition [a], in which your main roll is d4 x d6. The “r” in condition [a] signifies ‘replacement’.

    If the roll is to be used in place of 3d6, you get condition [b], and the main roll becomes d6 x d6.

    Both [a] and [b] have a modifier of +1. But if the result of the multiplication and subtraction of die rolls – the “x” in condition [c] – is greater than 20 (the “->20”), that modifier goes up to +10.

    All clear?

    So, for the remainder of this subsection, I’ll be writing d4 x d6 – d4 +4,6 [ ] for the die roll, with the [ ] signifying [d20r, 3d6r] without explicitly stating the condition every time. Okay?

    An afterthought – how do you decide where the body ends and the tail begins?

    There is a sharp flattening out of the curve at the point of division. You may even encounter a secondary peak.

    Everything to the right of that dividing line is tail, everything to the left of it is body.

Min, Max, Ave

    Minimum = [1, 3]
    Maximum = [27, 29]
    Average = [10.25, 12.25]

    Right away, the new format has been extended to the display and differentiation of results, showing them in a far more compact way than would otherwise be possible.

    I got the average the old-fashioned way – multiply each result by its % chance and divide the total of all those results by 100.

    I did so because I wanted to test a shortcut that I’ve been using without verification like, forever – substituting in the value of an average roll to calculate the average result of a complex expression like we have here. So let’s try it:

    2.5 x 3.5 – 2.5 + 4 =
    8.75 – 2.5 + 4 =
    10.25

    The correct result. It seemed logical and obvious to me that it would work, but I’ve never actually tested it to be sure, until now.
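
    The brute-force version of that check is short enough to show. This Python sketch of mine enumerates all 96 combinations of the +4 (d20-replacement) construction; the +6 version gives the same figures with 2 added throughout:

      from itertools import product
      from collections import Counter

      # d4 x d6 - d4 + 4: the multiplier d4, the d6, then subtract the second d4.
      counts = Counter(m * r - s + 4
                       for m, r, s in product(range(1, 5), range(1, 7), range(1, 5)))
      total = sum(counts.values())  # 4 x 6 x 4 = 96 combinations
      average = sum(result * c for result, c in counts.items()) / total
      print("min", min(counts), "max", max(counts), "average", average)  # 1, 27, 10.25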

The Thresholds
    The 1% Threshold

      Everything beats this – but in the cases of [1 & 22-27, 3 & 24-29], only just, at a probability of 1.04%.

    The 3% Threshold

      Now things get juicier. In addition to the results mentioned above, each roll has 5 results below this threshold, and they are all in the tail: [16-17 & 19-21, +2]

      Another extension to the protocol – instead of explicitly listing the second case results, I’ve just indicated what the difference is.

      In the case of the second example offered initially, because the core die roll was changing, this wouldn’t work and you would have to use the longer, more explicit format continually.

      The +2 simply indicates, add 2 to get the alternative results, so 16-17 becomes 18-19.

    The 5% Threshold

      Between 3% and 5% things get more varied. We have one result in the main body – [2, +2] – and the entire rest of the tail except for [12, +2] – [10-11 & 13-15 & 18, +2].

    The 10% Threshold

      There are no results with a higher probability than this threshold, so the 5-10% bracket holds the entire rest of the results: [3-9 & 12, +2].

    I don’t usually do this, but I thought it would be worthwhile this time around: a summary of these results in tabular form.

      1-3%: [1, +2]
      3-5%: [2, +2]
      5-10%: [3-9, +2]
      3-5% [10-11, +2]
      5-10%: [12, +2]
      3-5%: [13-15, +2]
      1-3%: [16-17, +2]
      3-5%: [18, +2]
      1-3%: [19-27, +2]

    The other thing worth mentioning is that the average has clearly been ‘pulled’ to a higher number by the tail. If [3-9,+2] is considered the main body, which is what the above results show, then you would expect an average of [6, +2] or thereabouts.

    The greater the probability contained in the tail, the greater the shift. In this case, up a full 4.25 from [6,+2] to [10.25,+2].

    That will have an impact in the next section.
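
    The per-result probabilities behind that table can be generated the same way; this sketch (again mine) buckets every result of the +4 version into the threshold bands used throughout this series:

      from itertools import product
      from collections import Counter

      counts = Counter(m * r - s + 4
                       for m, r, s in product(range(1, 5), range(1, 7), range(1, 5)))

      def band(p):
          # The threshold bands used throughout this series of articles.
          if p <= 0.01: return "1% or less"
          if p <= 0.03: return "1-3%"
          if p <= 0.05: return "3-5%"
          if p <= 0.10: return "5-10%"
          return "over 10%"

      for result in sorted(counts):
          p = counts[result] / 96
          print(result, round(p * 100, 2), "%", band(p))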

Slices Of Range: Percentages Of Probability
    Range Of Results

      27-1 = 26, +1 for the 1 itself, makes 27.
      The results are [,+2] higher for the alternative construction, but the range is exactly the same.

      Ave – Min, Max – Ave

      10.25 -1 = 9.25
      27 – 10.25 = 16.75

      Because minimum, maximum, and average all go up by the same amount in the second formulation, these ranges are exactly the same.

      The tail isn’t quite twice as long as the main body – 16.75/9.25 = 1.8108. I’ve never tested whether or not that’s true globally, so at this point it’s just an observation, not even a demonstration of a rule-of-thumb principle.

    1/3 (Ave-Min) + Min

      The part of the graph that lies below the average is going to take in the entire body and part of the tail.

      [1/3 x 9.25 + 1 = 4.0833, +2]

      So the band of worst results runs from [1 to 4,+2] and has a combined probability of 17.71%.

    2/3 (Ave-Min) + Min

      [2/3 x 9.25 + 1 = 7.1667, +2]

      The poor results are from [5 to 7,+2] and these have a probability of 42.71 – 17.71 = 25%. So 1 in every 4 rolls will yield a [5, 6, or 7,+2].

    The Lower Core

      This obviously contains everything else up to the average, so [8-10,+2]. The total probability of these results is 59.38 – 42.71 = 16.67%. This is ever-so-slightly less than the bottom band.

    The Upper Core: 1/3 (Max-Ave) + Ave

      For the first time, we have an asymmetric roll, which means that I can’t simply echo the spans in reverse sequence; I have to actually calculate these values.

      [1/3 x 16.75 + 10.25 = 15.8333,+2]

      So the upper core is 11-15, and includes the secondary peak at 12. The total probability in this span of 5 results is 80.21 – 59.38 = 20.83%.

      If the main body is 3-9, this shows that the early part of the tail is quite fat.

    2/3 (Max-Ave) + Ave

      [2/3 x 16.75 + 10.25 = 21.4166,+2]

      The band of ‘good’ results ranges from [16 to 21,+2] and has a total probability of 93.75 – 80.21 = 13.54%.

      This is the lowest-probability band that we’ve seen so far. But the 93.75% [1-21,+2] indicates that there’s not much probability left for the very best results.

    The Lofty Outcomes

      The results from [22-27,+2] have to contain the rest of the 100% total, so 100 – 93.75 = 6.25%.

    d4 x d6 – d4 +4,6 [ ]:

      [01-04,+2]: 17.71%, span 4, sub-average=4.4275%
      [05-07,+2]: 25%, span 3, sub-average=8.3333%
      [08-10,+2]: 16.67%, span 3, sub-average=5.5555%
      [11-15,+2]: 20.83%, span 5, sub-average=4.166%
      [16-21,+2]: 13.54%, span 6, sub-average=2.2567%
      [22-27,+2]: 6.25% span 6, sub-average=1.0417%

      This table introduces a new diagnostic tool, the sub-average. This is the probability of the range divided by the span of results – so the range of [05-07,+2] has a total probability of 25% and a span of 3, giving an average probability across the span of 8.3333%.

      The combination of range and sub-averages gives a very approximate description in actual numbers of the shape of the probability curve, ironing out little deviations like the secondary peaks at [12 and 15 and 18,+2].

      I haven’t needed it before, but this is a far more complicated curve than the previous ones.
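
      If you want to check any of these numbers for yourself – or run the same diagnostics on a completely different construction – a few lines of Python will enumerate the whole thing. This is only a sketch, and it assumes the two d4s are rolled separately, which is how the analysis above treats them:

        from itertools import product
        from collections import Counter

        # Tally every combination of the two d4s and the d6 for (d4 x d6) - d4 + 4.
        counts = Counter(a * c - b + 4
                         for a, b, c in product(range(1, 5), range(1, 5), range(1, 7)))
        total = sum(counts.values())   # 4 x 4 x 6 = 96 combinations

        for result in sorted(counts):
            print(f"{result:2d}: {100 * counts[result] / total:5.2f}%")

        # Sub-average of a band = the band's total probability divided by its span.
        band = range(5, 8)             # the [05-07,+2] band
        band_pct = 100 * sum(counts[r] for r in band) / total
        print(f"[05-07]: {band_pct:.2f}% total, sub-average {band_pct / len(band):.4f}%")

      Swap the expression inside Counter() for any other construction and the same loop produces the raw numbers behind tables like the ones above.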

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      The 20% mark in total probability falls between [4 and 5,+2], so this band runs from [1-4,+2].

    Second Lowest 20%

    The 40% mark is a little below 7, so this 20% holds results from [5-6,+2].

    The Middle 20%

      We get a total probability of 60% just above [10,+2], so this band contains results from [7-9,+2].

    Second-Highest 20%

      The 80% total is reached just below 15, so this group contains results [10-14,+2].

    Highest 20%

      Which leaves only the cream of the crop, from [15-27,+2].

    Peak Probability

      The peak probability belongs to a result of [6,+2], exactly as I forecast from the body range of [3-9,+2]. It is 9.38%.

    Matching Result: 1/3 Peak Probability

      1/3 x 9.38 = 3.1267%

      [2,+2] equals this almost exactly, at 3.13%. It’s so close that it has to be included.

      In the tail, things get more interesting. You can look at the probability chart and describe the tail as having peaks at 12, 15, and 18, and/or you can talk about valleys at [10-11, 13-14, and 16-17,+2].

      [14 & 16-27] are all below this threshold.

    Matching Result: 2/3 Peak Probability

      2/3 x 9.38 = 6.2533%.

      [3 & 8-13 & 15,+2] are all at or below this value.

      Which leaves [4-7,+2] as exceeding it.

      The question is always whether or not results that land exactly on a dividing line like this should be counted above or below it. But in this case, [2,+2] above set a precedent of including such cases in the lower of the divisions. So the dividing lines can be read as “[value] or less”.

      d4 x d6 – d4 +4,6 [ ]

      01-20%: [1-4,+2], span 4
      21-40%: [5-6,+2], span 2
      41-60%: [7-9,+2], span 3
      61-80%: [10-14,+2], span 5
      81-100% [15-27,+2], span 13

      [1-2,+2] 4.17%, span 2
      [3,+2] 5.21%, span 1
      [4-7,+2] 33.33%, span 4
      [8-13,+2] 30.30%, span 6
      [14,+2] 3.13%, span 1
      [15,+2] 4.17%, span 1
      [16-27,+2] 19.79%, span 12

Summary Of Results

    This is about as simple and clean as a multiplied die roll gets. The addition or subtraction of a die has done its job.

    If you examine the d4 x d6 chart above, one of the first things you notice is that it looks unfinished and incomplete. There are gaps – there’s no way to roll a 7, for example. Adding or subtracting a die fills in those gaps – at the expense of lowering probabilities (the possibility of the additional results has to come from somewhere).

    Note that if the gaps are too large, a d4 might not be big enough. With d6 x d8 – d4, there is still a gap between 41 and 44, with two results missing. To fill them in, the d4 has to grow to a d6 – note that 6-4=2=the number of missing results.

When To Use This Substitute

    I wouldn’t use this to replace a d20 or 3d6 rolled for the usual purpose. I WOULD use it to replace those things on a custom table.

    For example, when it comes to diseases, there are all sorts of things that you need to know.

    • Unhosted Half-life
    • Base Infectious Rate
    • Immunity
    • Pre-symptom period
    • Infectious Stage Start
    • Infectious Stage End
    • Symptom Recovery
    • Disease Recovery

    Now, you could get some graph paper and draw a number of pretty curves to represent the probability you want; total those up and you can scale to exactly 100%.

    Or you can simply use a die roll like this one to create the curves for you.

    If I were to do that, I might get:

    • Unhosted Half-life = 9 days
    • Base Infectious Rate = 12/-
    • Immunity = 3%
    • Pre-symptom period = 4 days
    • Infectious Stage Start = 5 days
    • Infectious Stage End = 6 days
    • Symptom Recovery = 10 days
    • Disease Recovery = 7 days

    All these numbers were generated just by rolling d4 x d6 -d4 +4.

    What do these numbers mean? Well, a disease starts out with a Base Infection Rate chance of being caught. If it’s out in the open, in the soil for example, it loses half its infectiousness as disease cells die off every unhosted half-life that passes.

    So it starts as 12/- – that could be on 3d6 or d20 or whatever. After 9 days, it’s down to 6/-. Another 9 days and it’s 3/-; by day 27, it’s 1.5/-; then 0.75, 0.375, and so on. But that’s per exposure – if a dungeon was once plagued by the illness, you might easily have 10, or 50, or 100 exposures.

    You aren’t going to roll all of them. There’s a shortcut.

    • Determine the chance of failure of 1 exposure.
    • Convert it to a decimal.
    • Estimate the number of exposures to be rolled at once. 20, 50, 100 – the choice is yours.
    • Raise the decimalized risk of NOT catching the disease to the power of the number of exposures.
    • The result will be a much smaller number. Convert it to a percentage.
    • That’s your chance of not contracting the disease. Subtract from 100 to get the matching chance that you WILL contract the disease.
    • For example, let’s take our 12/- and assume it’s on 3d6. That’s 74.07%. But 10 half-lives have passed since then; 2^10 = 1024, so the chance per exposure is now down to 74.07 / 1024 = 0.072334%.
    • Which means your chance of NOT catching it is 99.927666% per exposure.
    • To convert it to a decimal, divide by 100. So that’s 0.99927666.
    • The GM decides that every 100 exposures sounds about right, with each step (and the dust raised) counting as an exposure, as does handling an object, touching a surface, or engaging in a round of combat.
    • 0.99927666 ^ 100 = 0.930196.
    • 0.930196 = 93.0196%.
    • So, every 100 exposures, there is a 6.9804% chance of catching the disease.
    • Instead of counting, the GM assumes 100 feet of walking is 100 steps, and whenever the time since the last check feels about right, based on their activities since, he has the characters roll.

    If the dungeon is 100′ x 100′, divide the area by 2 – that’s a safe estimate for the minimum number of exposures through the whole thing, without allowing for rounds of combat, touching things, etc. So 100^2 / 2 = 10000 / 2 = 5000 exposures. Every 100 exposures means 50 checks will be needed. The GM decides that’s too many and decides to increase the number of exposures per check to 500.

    • 0.99927666 ^ 500 = 0.696.
    • 0.696 = 69.6%. So there’s a 69.6% chance of NOT getting it every 500 exposures.
    • Which means that there’s a 30.4% chance of catching it, per roll.
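
    If you’d rather not punch all of that into a calculator each time, the whole shortcut collapses into one small function – a minimal sketch (the function name and layout are mine, nothing official):

      def infection_chance(base_chance, half_lives, exposures):
          """Chance of catching the disease at least once in a batch of exposures,
          after the infectiousness has decayed over the given number of half-lives."""
          per_exposure = base_chance / 2 ** half_lives   # decayed chance per exposure
          return 1 - (1 - per_exposure) ** exposures     # 1 - chance of dodging them all

      # The worked example: 12/- on 3d6 = 74.07%, 10 half-lives, 500 exposures per check.
      print(infection_chance(0.7407, 10, 500))           # ~0.304, the 30.4% per check above

    infection_chance(0.7407, 10, 100) reproduces the 6.98% figure for the 100-exposure version, too.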

    A 30.4% chance per check, 10, maybe 11 checks, 4 PCs – what are the odds?

    At least 1 character: 100%. Well, more than 99.9999%.

    At least 2 characters: 99.997%.

    At least 3 characters: 99.80%

    All four characters: 92.78%
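
    Those party-wide odds are a straightforward binomial calculation – a quick sketch, assuming 11 checks and that each character’s rolls are independent:

      from math import comb

      def at_least(k, n, p):
          """Chance that at least k of n characters are infected, each with chance p."""
          return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

      p_check = 0.304                          # chance of infection per check
      p_character = 1 - (1 - p_check) ** 11    # chance per character over 11 checks
      for k in range(1, 5):
          print(k, round(100 * at_least(k, 4, p_character), 2))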

    Once someone is infected, they no longer need to roll, but the GM doesn’t want them to know that anything’s changed until symptoms appear, so he lets them continue and just ignores the results.

    The pre-symptom period is 4 days, so 4 days after infection, the symptoms start.

    A day later (5 days), the character becomes infectious – at the full base rate of 12/- on 3d6.

    They will stop being infectious 6 days later, so 11 days after infection (5+6=11).

    Symptoms might end before or after that date. The disease is far more dangerous if they end while the sufferer is still infectious! But in this case, symptoms persist for 10 days, so they end on day 14 (4+10=14). For the last 3 days of that period, they were no longer contagious.

    But the disease will have taken its toll. Recovery was rolled at 7 days, and that final clock starts when all the others have stopped – so day 21 is when the victim is back to their old selves – assuming they survived.
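
    The timeline interpretation is nothing more than a handful of additions; if it helps to see them laid out in one place, here’s a sketch (the variable names are mine):

      # The rolled disease parameters, straight from the list above.
      pre_symptom      = 4    # days until symptoms appear
      infectious_start = 5    # days until the victim becomes infectious
      infectious_end   = 6    # days the infectious stage lasts
      symptom_recovery = 10   # days the symptoms persist
      disease_recovery = 7    # days of convalescence once everything else has stopped

      symptoms_from   = pre_symptom                          # day 4
      contagious_from = infectious_start                     # day 5
      contagious_to   = infectious_start + infectious_end    # day 11
      symptoms_to     = pre_symptom + symptom_recovery       # day 14
      recovered_on    = max(contagious_to, symptoms_to) + disease_recovery   # day 21

      print(symptoms_from, contagious_from, contagious_to, symptoms_to, recovered_on)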

    In all 8 cases, the roll used was this one, and the results were then interpreted. If I rolled up a second disease the same way, the results would be completely different:

    • Unhosted Half-life = 2 days
    • Base Infectious Rate = 18/-
    • Immunity = 6%
    • Pre-symptom period = 15 days
    • Infectious Stage Start = 15 days
    • Infectious Stage End = 18 days
    • Symptom Recovery = 8 days
    • Disease Recovery = 14 days

    This is a much slower, more pernicious ailment – but despite its very high infectiousness (18/- on d20 this time), it has a very short half-life, and 45 of them have passed.

    • 18/- on d20 = 90%
    • 45 half-lives: 2^45 ≈ 3.52 x 10^13, so the chance per exposure is now down to 90 / 3.52 x 10^13 ≈ 2.6 x 10^-12 %
    • 100 – 2.6 x 10^-12 ≈ 99.9999999999974% chance of NOT catching it, per exposure
    • 99.9999999999974% = 0.999999999999974
    • 500 exposures per check
    • 0.999999999999974 ^ 500 ≈ 0.999999999987
    • 0.999999999987 = 99.9999999987%
    • 100 – 99.9999999987 = 0.0000000013%. Effectively no chance.
    • 5500 exposures – the entire dungeon: 0.999999999999974 ^ 5500 ≈ 0.99999999986
    • 0.99999999986 = 99.999999986%
    • 100 – 99.999999986 = 0.000000014%

    So it’s vanishingly unlikely that anyone will catch this from the dungeon itself, no matter how large the party is. Its half-life is so short that it’s effectively dead in the environment. But encountering someone who has managed to beat those odds would be extremely bad news. An 18/- on d20 chance of catching it? And not knowing it until 15 days later?

    Depending on your interpretation of the rules, having a disease like this might mean that ‘Cure’ spells no longer work on you – that they try to cure the disease and fail. If that’s your GM’s interpretation, it might at least offer an early clue.

    On the other hand, at least part of hit points are self-confidence, and there would be a psychological lift at receiving a Cure Light Wounds spell, and cosmetic improvements, so you might well regain some HP, anyway.

    Okay, here’s the important bit: Why this die roll works

    As the analysis shows, results skew markedly low. It’s rare for anything to be higher than 12. But it can happen. That means that results are focused on the trait that you are rolling for, and need only simple interpretation.

    While it’s rare to get a high result, it can occasionally happen, and it always causes something memorable and significant when it does.

A couple of quick other notes about multiplied die rolls.

  1. If you want the curve to bias in the other direction, (Maximum + 1) minus the die roll is your solution (see the quick check after this list).
  2. There’s a huge temptation to try dividing something by the die roll. Don’t – it’s impossible to control. Most of your results will be sensible, but there’s always going to be a divide-by-1 to blow things out – or, if modifiers can drag the divisor to zero, a divide-by-zero – to mess things up.
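
To illustrate the first point with the roll we’ve just analyzed: subtracting the whole roll from (Maximum + 1) keeps every probability exactly the same, but attaches it to the opposite end of the range – a quick check:

    from itertools import product
    from collections import Counter

    roll = Counter(a * c - b + 4
                   for a, b, c in product(range(1, 5), range(1, 5), range(1, 7)))
    total = sum(roll.values())
    maximum = max(roll)                    # 27 for (d4 x d6) - d4 + 4

    mean = sum(r * n for r, n in roll.items()) / total
    mirrored = sum((maximum + 1 - r) * n for r, n in roll.items()) / total
    print(mean, mirrored)                  # 10.25 vs 17.75 - same shape, bias reversed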

Exotic Choice #9: d30 +1 – d10

This roll looks deceptively simple. It only has two dice, for heaven’s sake!

And yet, dice subtraction can sometimes do weird things, so let’s take a look at this one…

You see what I mean? I wasn’t expecting it to look like that…!

Probably the first thing you notice is the flat top of what might once have been a triangle. It runs from 1 to 21.

The second thing that strikes you is the enormous range of results – from -8 to 30.

And then that minimum result sinks in. What does a roll of -8 even mean?

Min, Max, Ave

    Minimum -8.
    Maximum 30.
    Average 11.

The Thresholds
    The 1% Threshold

      -6 is exactly at the 1% threshold. So is 28. So the really improbable rolls are -8 to -6 and 28-30.

    The 3% Threshold

      0 and 22 are exactly at the 3% threshold – so the unlikely rolls are -5 to 0 and 22 to 27.

    The 5% Threshold

      Nothing gets this high. Everything from 1 to 21 is at an absolutely flat 3.33%.

Slices Of Range: Percentages Of Probability
    Range Of Results

      30-(-8) = 38, +1 for the -8 result itself. So there is a span of 39 results!

    Ave – Min, Max – Ave

      11-(-8)= 19.
      30-11=19.

      So the roll is symmetric. The fact that the range spans an odd number of results means that there will be one result nominally in the middle whose probability is going to have to be split.

    1/3 (Ave-Min) + Min

      1/3 x 19 + -8 = -1.6667, so the division falls between -1 and -2.

      So the lowest division of results runs from -8 to -2, and comprises 9.33%. Span of 7.

    2/3 (Ave-Min) + Min

      2/3 x 19 + -8 = 4.6667, so the division is between 4 and 5, which means that the next tier of results are -1 to 4. These have a total probability of 28.33 – 9.33 = 19.00%. Span of 6.

    The Lower Core
      That means that everything from 5 to 10, and half of 11, form the lower core. This group have a total probability of 48.33 – 28.33 + 1/2 x 3.3333 = 20 + 1.6667 = 21.6667%. Span of 6 1/2.
    The Upper Core: 1/3 (Max-Ave) + Ave

      The upper side is a mirror-image of the lower. So the upper core is 6 1/2 wide, including 11 (which is split). That gives results of 11-17 and total probability of 21.6667%.

    2/3 (Max-Ave) + Ave

      Above the central core are the good results, a span of 6, starting at 18 – so 18-23 – and with a probability of 19.00%.

    The Lofty Outcomes

      At the very top, the very best results therefore are 24-30, a span of 7, and a total probability of 9.33%.

    d30+1 -d10:

      -8 to -2 = 9.33%, span 7.
      -1 to 4 = 19%, span 6
      5 to 11 = 21.667%, span 6.5
      11 to 17 = 21.667%, span 6.5
      18 to 23 = 19%, span of 6
      24 to 30 = 9.33%, span of 7.

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      The 20% total comes between 1 and 2, so -8 to 1.

    Second Lowest 20%

      The 40% mark is reached between 7 and 8, so this bracket contains 2-7.

    The Middle 20%

      We cross the 60% mark between 13 and 14, so this band consists of results from 8-13.

    Second-Highest 20%

      80% is almost but not quite to the 20 result. So this band contains 14-19.

    Highest 20%

      Which obviously leaves results from 20-30 to form the highest band of results.

    Peak Probability

      As already mentioned, this is 3.3333% – and it’s shared by 21 results.

    Matching Result: 1/3 Peak Probability

      1/3 of 3.3333 = 1.1111%. Results of -8 to -6, and 28 to 30, are below this level.

    Matching Result: 2/3 Peak Probability

      2/3 of 3.3333 = 2.2222%. That probability band contains -5 to -3 and 25 to 27.

      Everything else, from -2 to 24, is between 2.2222% and 3.3333%.

    d30+1-d10:

      01-20%: -8 to 1, span 8.
      21-40%: 2 to 7, span 6
      41-60%: 8-13, span 6
      61-80%: 14-19, span 6
      81-100% 20-30, span 11

      -8 to -6, 2%, span 3
      -5 to -3, 5%, span 3
      -2 to 24, 86%, span 27
      25 to 27, 5%, span 3
      28 to 30, 2%, span 3

Summary Of Results

    If you use the full span of results, you are going to get some very extreme results. But here’s the thing: If you re-roll any result below 1 or higher than 20, this is a perfect d20 simulation.

    Of course, it’s a lot of malarkey to go through for that result.
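
    If you want to satisfy yourself that the re-roll version really is a flat d20, the enumeration is trivial – a minimal sketch:

      from itertools import product
      from collections import Counter

      # Keep only the d30 + 1 - d10 results that land in 1..20 (i.e. re-roll the rest).
      kept = Counter(a + 1 - b for a, b in product(range(1, 31), range(1, 11))
                     if 1 <= a + 1 - b <= 20)
      total = sum(kept.values())

      for result in sorted(kept):
          print(result, round(100 * kept[result] / total, 2))   # every line reads 5.0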

When To Use This Substitute

    This is the perfect die roll for bringing a sense of the absurd or ridiculous into a game. For example, when two combatants are roaring drunk.

    Anytime someone rolls below 1, they do something stupid or something completely ridiculous happens to them. Anytime someone rolls above 20, something ridiculous happens to their opponent.

    When circumstances warrant neither a farce nor a circus, there are better constructions to choose. But when those are the orders of the day, this construction is hard to beat.

Exotic Choice #10: 5d4 / d5

We’ve had multiplication and subtraction as well as the more commonplace addition – so it’s no surprise that division makes an appearance at this point.

This chart shows three curves,
all discussed in the text below (or this caption would be far too long):
5d4 / d5; [(2d4+2d6+2d8) / 3d2] +1; and (6d4 / d6) +5.

Three compositions for the price of one!

The first, 5d4 / d5, is the one we’re mainly interested in. It shows all of the classic characteristics of a divided die roll quite clearly – there’s a front, a crown, a back, and a tail with a secondary peak or ‘hump’.

The second shows how complicated these things can get. It was chosen to illustrate two things, maybe three: (1) that 2d4+2d6+2d8 have a maximum of 36, the same as 6d6; (2) that if the denominator is large enough with respect to the numerator, the ‘crown’ can compress into a single point with an extremely high probability – note the scale on the left and you’ll see that the peak is approaching 30% probability. That’s absolutely ridiculous in a roll with this many results! And (3) the back can make a smooth descent to a long tail of virtually no probability while the ‘hump’ has been flattened out of existence, so this shows how the shape of a divided die-roll curve can change.

The third is the configuration I almost chose for this section, shifted 5 spaces to the right because the resulting curve is so like the subject one that it would be confusing. But now that you can see how similar they are by having them side-by-side, you can meaningfully evaluate the differences, which are also significant in revealing traits of divided-die-roll anatomy:

See text below

First, notice that the brown line – our subject construction – isn’t quite flat at the crown, and that our reference comparison, the gold line, is even more angled. I’ve never seen one slope the other way, but it wouldn’t surprise me if I did.

Second, notice that the gold reference line has a tertiary hump at results of 8 & 9 – and, in fact, that our subject composition has one too, at 6 – it’s just a lot smaller.

Until I saw just how similar they are, I was tossing up whether or not to include 6d4 / d6 as a bonus extra, even though time is growing a little short and there’s still a lot to do. But the differences seem to be so small that it’s not worth the effort, and time, involved.

Afterthought: How do you decide where the back ends and the tail starts?

As with the multiplied die roll, there is a sudden flattening, and maybe even entry into a secondary peak. The back includes any tertiary hump(s).

In this case, 4 is a transition between crown and back; 5 is back; 6 is back and the tertiary hump; 7 is back; 8 is back; but at 9, there is a flattening, and 10 starts the buildup to the secondary hump in the tail. So 4-8 are clearly the back and 10+ are clearly tail, with 9 the dividing point, able to go either way.

From the definitions, and comparing the probability differences 8-to-9 (0.98%) to that from 9-to-10 (0.33%), there is an obvious difference that connects 9 more strongly to the tail than to the back. So I would classify 9 as the start of the tail.

Min, Max, Ave

    Minimum 1
    Maximum 20
    Average predicted 5 x 2.5 / 3 = 4.1667
    Average, measured = 5.4367 (which makes me glad that I decided to do it both ways!)
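
    That gap between the predicted and measured averages has two causes: the average of a quotient isn’t the quotient of the averages, and the division is (on the evidence of the measured figure) being rounded down. Enumerating the roll on that assumption reproduces the measurement almost exactly – a sketch:

      from itertools import product
      from collections import Counter

      # 5d4 / d5 with the division rounded down (an assumption, but it matches).
      counts = Counter(sum(dice) // d5
                       for *dice, d5 in product(*[range(1, 5)] * 5, range(1, 6)))
      total = sum(counts.values())            # 4^5 x 5 = 5120 combinations

      mean = sum(r * n for r, n in counts.items()) / total
      print(round(mean, 4))                   # ~5.4365, matching the measured 5.4367
      print(100 * counts[20] / total)         # ~0.0195% - only one combination in 5120 gives a 20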

The Thresholds
    The 1% Threshold

      The only results with a probability of 1% or less are in the end of the tail, from 17-20.

    The 3% Threshold

      This threshold is a bit more diverse. Falling beneath it are the front (1), a little of the back (8) and most of the tail (9-11, 14-16).

    The 5% Threshold

      Between 3% and 5% there is part of the back (7) and the rest of the secondary hump in the tail (12-13). In fact, half the time, I would probably have rounded the latter (3.03%) down to include them in the 1-3% category. But the more accurate approach better reflects the anatomy of the die roll results.

    The 10% Threshold

      The 5%+ to 10% bracket has the middle of the back (5-6).

    The 15% Threshold

      In this bracket we have the remainder of the back (4).

    The 20% Threshold

      Both results in the crown climb higher than this percentage (2-3) – which will result more than 40% of the time, collectively!

    5d4 / d5:

      1% to 3%: 1
      20 to 25%: 2-3
      10% to 15%: 4
      5% to 10%: 5-6
      3% to 5%: 7
      1% to 3%: 8
      1% to 3%: 9-11
      3% to 5%: 12-13
      1% to 3%: 14-16
      <1%: 17-20

Slices Of Range: Percentages Of Probability
    Range Of Results

      There are 20 results, so if the curve were symmetric (it’s not) there would be two results with equal probabilities in the crown.

    Ave – Min, Max – Ave

      Here’s where things get interesting!

      5.4367 – 1 = 4.4367.
      20 – 5.4367 = 14.5633.

      One side of the average result is more than 3.2 times the size of the other!

    1/3 (Ave-Min) + Min

      The worst results band runs from the minimum (1) to

      1/3 x 4.4367 + 1 = 2.4789 – so almost exactly mid-way between 2 and 3.

      1-2 have a total probability of 23.75%.

    2/3 (Ave-Min) + Min

      2/3 x 4.4367 + 1 = 3.9578, so 4 doesn’t quite make the cut – but it’s so close that I would round to include it, anyway, splitting it in two (a leg in both camps).

      3 has a probability of 21.25%, +1/2 of 4’s probability of 13.01 = 6.505%, gives a total of 27.755%.

    The Lower Core

      Between 3.9578 and the average, we have 5, and the other half of 4. 5 has a probability of 8.57%, and 1/2 of 4 is still 6.505, so the total probability here is 15.075%.

      The lower bands of the curve total 66.58% of all the results!

    The Upper Core: 1/3 (Max-Ave) + Ave

      1/3 x 14.5633 + 5.4367 = 10.2911333, so the upper core stretches from 6 to 10 – that’s the lower back and the start of the tail, but not including the peak of the secondary hump.

      6-10 have a total probability of 84.34 – 66.58 = 17.76%.

    2/3 (Max-Ave) + Ave

      2/3 x 14.5633 + 5.4367 = 15.1455666, so 11 to 15 make up the ‘good but not great’ band of results. Those have a combined probability of 97.64 – 84.34 = 13.3%.

    The Lofty Outcomes

      That leaves the great results as being 16-20, with a combined probability of 100 – 97.64 = 2.36%.

      But I want to especially note the low chance of a 20 at 0.02%. Rounding error is likely to be huge, but on the face of it, you are 5 / 0.02 = 250 times more likely to get a 20 on a d20 than on this roll.

      A moment’s reflection will show why – to get there, absolutely everything has to go right. Maximum result on the 5d4 (20) and minimum result on the d5 (1). Out of 5 x 4 x 5 = 100 possible results. Actually, by my math, that’s a 1% chance, so I’m going to have to look into this a little further. One moment…

      (a few minutes later:) Okay, I’m back. My mistake in the above is in calculating the number of possible outcomes on the 5d4, which I’m sure most of you will have spotted right away.

      The correct number of possible result combinations of die faces is 4^5, not 4×5. That gives 1024, which multiplied by 5, gives 5120 combinations all told. Only 1 of them produces a result of 20, so that’s 0.01953%. And a d20 does indeed have 256.016385 times greater likelihood of resulting in a 20.

      All this might seem like a minor side-note at the moment, but I’m thinking ahead, and expecting it to weigh heavily on evaluating when to use this particular construction.

    5d4 / d5:

      1-2 ‘Worst possible roll’ = 23.75%, span 2
      3-4 ‘Poor result’ = 27.755%, span 2
      4-5 ‘Below Average result’ = 15.075%, span 2
      6-10 ‘ Above Average result’ = 17.76%, span 5
      11-15 ‘Good result’ = 13.30%, span 5
      16-20 ‘Great result’ = 2.36%, span 5
      (20 ‘Best possible result’ = 0.01953%).

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      We get to the 20% total really quickly – in fact, only one result falls into this band, a 1, which has a probability of just 2.79%. Extending the range to 2 carries it over the 20% total, to 23.75%.

      That tells me two things: (1) this tool is of limited utility for the analysis of divided die rolls because of the phenomenally steep face and high crowns; and (2) it might still be useful if I round and generalize a bit. This will compromise the precision of the result, but still give some value in terms of understanding the die roll.

      So, on that basis, the ‘lowest 20%’ contains 1-2.

    Second Lowest 20%

      And, right away, that plan goes off the rails and for exactly the same reason. The 40% mark lands between 2 and 3 and 2 has already been used – so that depopulates this entire zone. I could round 3’s 45% total down to include it, I suppose, but 45% is a full quarter of the way through to the next band.

      Part of the purpose in breaking up all these rolls in the same size divisions – the 20%’s – was to enable direct comparison. (Dividing the range of results into two parts about the average, and each part into thirds, has a similar comparative benefit, but one arranged around the results, not the probabilities.) That still has value, so I’m going to accept the rounding.

      Which means that this band consists of the result of 3.

    The Middle 20%

      The 60% mark is between 4 and 5, so this band also contains just one result: 4.

    Second-Highest 20%

      We get to 80% almost exactly at 8 – we’ve had to swallow much larger deviations twice already than including 80.68 in the 61-80% band – so this is 5-8.

    Highest 20%

      Which leaves 9-20 for the rest. Basically, anything in the tail is a ‘good result’ to some degree.

    Peak Probability

      This belongs to the result of 3, at 21.25%, which narrowly beats 2 and 20.96%.

    Matching Result: 1/3 Peak Probability

      1/3 x 21.25 = 7.0833. That point-0833 can be very important because it makes it almost impossible for any result to fall exactly on the line, which is more likely to happen with an exact integer result.

      Anyway, 1 and 6-20 all fall below this line, with no results close enough to 7% to even argue about.

    Matching Result: 2/3 Peak Probability

      2/3 x 21.25 = 14.1667. Again, a clear division between the results – 4-5 are below this line and 2-3 are above it.

    5d4 / d5:

      01-20%: 1 to 2, span 2.
      21-40%: 3, span 1
      41-60%: 4, span 1
      61-80%: 5-8, span 4
      81-100% 9-20, span 12

      1: 0-7%, span 1
      2-3: 14%+, span 2
      4-5: 7-14%, span 2
      6-20: 0-7%, span 15

Summary Of Results

    This is a fairly basic divided die roll. It exhibits all the traits of that type of construction. It’s massively biased low in results, with a long tail of relatively low probability. You can spend hours playing around with variations of the general principle, and often land on unexpected results.

When To Use This Substitute

    This is the die substitute to use when failure is – in the GM’s mind – not possible, but degrees of success and complications of pathway in getting to that success ARE.

    “So, you’ve rolled a 2? No problem, here’s what happens…” followed by set-back after set-back, and a last-minute success that the characters fall into more than reach towards. In other words, it’s all about driving the narrative, about roleplay.

    And if you should happen to fall over the line with a result in the tail, that indicates one of those occasions where the universe seems bound and determined to let you succeed; even outright errors of judgment end up working to your benefit, potentially earning the party an unjustified reputation for brilliance – which they will then have to try to live up to.

Exotic Choice #11: (3d6+2) / d4

Having examined the probability curve, this construction has only one novel feature – a singular peak of probability at result 3. So I’ve decided that it’s not worth the additional time it would take, which I can put to better use on something far more exotic and interesting.

Exotic Choice #12: (4d10 / 2) – d2 +1

Okay, now this one’s subtle. If you look really closely, I think there’s the most minute difference in the two sides of the curve. To test this perception, below are graphed two curves: the Main Curve, M, and 21-M.

If there is a difference, the two will not line up.

…and there it is. A subtle but definite asymmetry.

The main roll is averaging just a little higher probability on the low side of the average and a little less on the high side. I wonder what that will do to the average?

Min, Max, Ave

    Minimum 1
    Maximum 20

    Average: Predicted: (4 x 5.5) / 2 -1.5 + 1 = 11 – 1.5 + 1 = 10.5
    Average, measured:10.24977502
    Call it 10.25. And there, again, is that very small difference manifesting itself.

    So I decided to look into why it’s there. Here’s what I found: The division by 2 implicitly rounds down results by treating odd and even rolls on the 4d10 differently. What appears to be one curve is, in fact, the sum of two interleaved curves – odds and evens. Because we’re dividing by 2, the losses on the odd-result rolls are -0.5 each, and because half the possible results of 4d10 are odd and half are even, when this gets averaged over the whole, the net effect is a -0.25 bias on the results. It’s a perfect example of how small nuances can manifest in real-world differences.
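
    You can watch that happen without a spreadsheet: enumerate every combination, rounding the division down as just described, and both the bias and the peak fall straight out – a sketch:

      from itertools import product
      from collections import Counter

      counts = Counter(sum(d10s) // 2 - d2 + 1
                       for *d10s, d2 in product(*[range(1, 11)] * 4, range(1, 3)))
      total = sum(counts.values())                   # 10^4 x 2 = 20,000 combinations

      mean = sum(r * n for r, n in counts.items()) / total
      print(mean)                                    # 10.25 - the -0.25 bias in action
      print(round(100 * counts[10] / total, 3))      # ~13.1% - the peak at 10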

The Thresholds
    The 1% Threshold

      Below this threshold are 1-3 and 17-20, so the overall shift low has already had a significant effect. 1-3 have a cumulative probability of 0.80% (so they are well below the 1% mark individually), while 17-20 have a 1.4% total probability – but span 4 results, not 3.

    The 3% Threshold

      4-5 and 16 are below the 3% threshold. The difference in span is because the curve almost has symmetry, meaning that any disparity is likely to be counterbalanced somewhat later on. In this case, the previous band’s spans of 3 vs 4 are the disparity, and the difference in spans, 2 vs 1, this time around are the counterbalance.

      4-5 have a total probability of 4.42%.
      16 has a probability of 2.13%.

    The 5% Threshold

      6 is almost at the 5% threshold, with a probability of 5.08%. It’s close enough for my money. At the high end, we have 15, with a probability of 3.94% – so this time the spans match.

    The 10% Threshold

      7 on the low side and 13-14 on the high are in the 5-10% bracket – so a disparity in spans is back.

      7 has a probability of 7.63%, while 13 & 14 total 15.24%.

    The 15% Threshold

      Everything that remains is in the 10-15% range, nothing breaks the 15% threshold. So that’s 8-12, which have a total probability of 59.38%.

      With no time left to even out the disparity, it has to stand – meaning that the right-hand side is cumulatively down on probability and needs a longer span to get to similar probability values. The span of this central region is 5 results.

    (4d10 /2) -d2 +1:

      <1%: 1-3 = 0.8%, span 3
      1% to 3%: 4-5 = 4.42%, span 2
      3% to 5%: 6 = 5.08%, span 1
      5% to 10%: 7 = 7.63%, span 1
      10% to 15%: 8-12 = 59.38%, span 5
      5% to 10%: 13-14 = 15.24%, span 2
      3% to 5%: 15 = 3.94%, span 1
      1% to 3%: 16 = 2.13%, span 1
      <1%: 17-20 = 1.4%, span 4

Slices Of Range: Percentages Of Probability
    Range Of Results

      Results span from 1 to 20, so a range of 20.

    Ave – Min, Max – Ave

      10.25 – 1 = 9.25.
      20 – 10.25 = 9.75

      There, once again, is the very subtle asymmetry lurking in the heart of this construction. At least I know and understand what’s causing it now.

    1/3 (Ave-Min) + Min

      1/3 x 9.25 + 1 = 4.0833, so 4 just scrapes into the lowest division of results. 1-4 have a total probability of 2.28% – roughly two-thirds of that comes from 4 alone, most of the rest is 3, and 1 and 2 split the sliver that’s left.

    2/3 (Ave-Min) + Min

      2/3 x 9.25 + 1 = 7.1667, so this band contains results from 5 to 7. They have a collective probability of 17.93 – 2.28 = 15.65%. That’s 6.864 times the probability of the previous division, meaning that you would expect to see 5, 6, or 7 come up about 7 times for every result in the 1-4 range.

    The Lower Core

      8-10 fall into this band. They have a combined probability of 53.30 – 17.93 = 35.37%.

      That’s about 2 1/4 times the probability of a 5-7 result, so for every four results in that range, you would expect to see 9 rolls producing results of 8-10.

    The Upper Core: 1/3 (Max-Ave) + Ave

      Because of the asymmetry, this has to be actually calculated.

      1/3 x 9.75 + 10.25 = 13.5. This range contains results from 11-13, and they have a combined probability of 86.22 – 53.30 = 32.92%, just a little less than the lower core.

      In fact, while probability says that it could happen sooner, what this amounts to is 12 results in this span for every 13 in the lower core.

    2/3 (Max-Ave) + Ave

      2/3 x 9.75 + 10.25 = 16.75. This band contains results from 14-16, which have a combined probability of 98.6 – 86.22 = 12.38%.

      The upper core will result 2.7 times as often as this range, so for every 8 results in the above average category, there will be 3 ‘good’ rolls.

    The Lofty Outcomes

      The best range of results are therefore 17-20, with a combined probability of just 1.4%.

      That’s a ratio of 8.8 times, so for every 5 results yielding this tier, there would be 44 rolls of the band below it.

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      20% of the probability contains results from 1-7 – so, on 100 rolls you would expect to see 20 of them within this range, give or take.

    Second Lowest 20%

      The 40% mark just fails to capture 9, so results of 8, technically, have this 20% all to themselves. That said, 40.19% is close enough that I’ll include it here for a span of 2. I think that’s a fairer representation of both results.

    The Middle 20%

      The 60% mark splits the difference between 10 and 11 – above the average of 10.25 – so more rolls land below the average than above it (roughly 53% to 47%), making the asymmetry undeniable.
      10 alone occupies this space, with an actual probability of 13.12%. When the disparity is that large (13.12% vs 20%, so almost half of the 20% is missing), you have to consider including the next result up. 11/- has a combined probability of 66.08%, so this would be an error – but it’s a smaller error than not doing it. So this 20% is now considered to be 10-11, and to have a span of 2.

    Second-Highest 20%

      The 80% mark is distinctly between 12 and 13, so this range contains a single result 12. However, 12 only has a probability of 11.22% – even closer to 1/2 of the desired range of results. So I have to look at whether or not 13 can be included, with its 8.91% probability. The combination is a total of 20.13%, so even without looking at the combined value, I’m inclined to say yes. That combined value of 86.22, as before, does represent an error, but it’s a smaller error than not doing so, which confirms the predisposition. So this band is 12-13, a span of 2.

    Highest 20%

      But that leaves the last 20% to hold everything else – results from 14 to 20. That’s a span of 7 results, which is the same size as the first bracket, to be fair.

    Peak Probability

      Breaking this down by the alternative route requires the Peak probability. This belongs to a result of 10, without question, and 10 has a probability of 13.12%.

    Matching Result: 1/3 Peak Probability

      1/3 x 13.12 = 4.3733%.
      1-5 are below this chance, and so are 15-20. Note: spans of 5 and 6, respectively.

    Matching Result: 2/3 Peak Probability

      2/3 x 13.12 = 8.7467%.
      6-7 and 14 are below this result. I’d like to have included 13 or 15 to preserve the symmetry in this range, but the error that results is too great. Which means that this range cancels out the span discrepancy of the previous set of results.

      That leaves 8-13 as having the highest individual probabilities.

    (4d10 / 2) – d2 + 1:

      01-20%: 1 – 7, span 7
      21-40%: 8 – 9, span 2*
      41-60%: 10-11, span 2*
      61-80%: 12-13, span 2*
      81-100% 14-20, span 7

      The result of all that hand-tweaking of errors (indicated by the * in the table above) is a perfect reflection of the underlying symmetry of the curve; the bias is completely hidden. That’s why I’ve used so many analysis approaches – you can never tell which ones will definitively describe the curve, and they are all valid – just with a different emphasis.

      1-5: 0-4.37%, span 5
      6-7: 4.37-8.75%, span 2
      8-13: 8.75%+, span 6
      14: 4.37-8.75%, span 1
      15-20: 0-4.37%, span 6

      More than any other tool, this shows that this curve contains 3 major bands of results – low, middle, and high – connected by two short and therefore steep rises and falls in probability. It’s a classic bell curve, in other words. But it also highlights that slight bias low.

Summary Of Results

    And that sums up this construction, really – a classic bell curve with a hidden tiny bias.

When To Use This Substitute

    To be honest, I can’t think of an occasion that ticks all the boxes for using this alternative. Let’s check off the criteria, though, in case you are cleverer in this respect than I.

    • For the difference between this construction and 3d6 to matter, you need to be making a lot of rolls, or the bias won’t show up.
    • The range runs from 1 to 20, but the most extreme values are so unlikely that the practical range is 4-17. So you have to want to have the chance at a more extreme result, whilst making that chance vanishingly small.
    • It’s probably fair to say that this is a more ready substitute for 3d6 – but that’s not a good thing as it means you need a compelling reason to make that substitution.
    • Substituting this for a d20 roll integrates all the consequences of a bell-shaped curve, so that’s a more dramatic and potentially useful difference – but there are better choices for those cases. You need some valid reason for those choices not to work and for this choice to still be valid in order to justify using this roll. And that’s going to be rare.

    Ultimately, I think the greatest value that this construction holds is as an object lesson and a demonstration of principle.

    The object lesson relates to subtlety and nuance, and the dangers of making assumptions when probabilities are involved. They can, and from time to time will, lead you astray.

    And the demonstration of principle relates to what happens when dividing a die roll by a fixed value. The more you dig into this, the more you get swamped by minutia becoming relevant characteristics.

    Dividing by 3, for example, means that 2/3 of results will have a rounding distortion.

    Dividing by 4 takes that up to 3/4.

    But – some of those bias errors will be larger than others. Take dividing by 10 – a rounding loss of 0.1 is not very large, while a loss of 0.9 is comparatively huge. Averaged over only the distorted results, the loss is 0.5; averaged over every result, including the ones that lose nothing, the overall bias adjustment is -0.45.

    Compare that with the divide-by-3: some results will have an error of -1/3, some of -2/3, and some will have no error at all. The average of -1/3 and -2/3 is -0.5 again – but spread across all results, the bias works out to -1/3. In general, dividing by n and rounding down costs (n-1)/2n on average, which starts at the -0.25 we saw for the divide-by-2 and creeps toward -0.5 as the divisor grows.
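
    A couple of lines will compute that average loss for any divisor, assuming every possible remainder is equally likely (which is true, or very nearly true, for most numerators):

      # Average amount lost to rounding down when dividing by n:
      # the mean of the possible losses 0/n, 1/n, ..., (n-1)/n.
      for n in (2, 3, 4, 10):
          print(n, sum(r / n for r in range(n)) / n)   # 0.25, 0.333..., 0.375, 0.45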

    Dice with unequal numbers of odd results vs even can amplify or diminish the bias slightly. That requires the d# to be odd – so d3, or d5. d7 if you can find them – the only ones I’ve seen are marked with the days of the week. And so on.

    Is the resulting bias large enough to justify the complexity of the process and analysis? I’m not sure that it is, but can’t say that it isn’t either.

BONUS EXTRA: Exotic Choice #13: log (base 2) [(d6 / 3) ^ d8] + d8 + 5

I’ve saved the weirdest till last! The ultimate in weirdness, this possibility came to me at the last possible moment, just a day before posting this work.

I couldn’t fully analyze this on my own (lack of time more than anything else), so I sought help from Google’s Gemini.

And AnyDice doesn’t understand logarithms, though it does understand exponents. So I’m going to have to do all the analysis the hard way, using a spreadsheet. Which will take additional time.

It’s even possible that I won’t have time to write it up before publication – in which case, I’ll update the post on Thursday and you can read all about this weirdie on Friday.

In which case, right now, the article will shift into ‘conclusions’ mode – but when people check back, they will find this final section miraculously inflated with content!

Have you ever seen anything like it? It looks like some sort of geological formation – but it only uses 3 dice!

The power of this lies in the d6. How well you roll on it determines what effect the first d8 has on the total. If it is 1-2, then the d8 takes a small value and potentially makes it much smaller; if it’s 3, the effect is neutral (and 4 only mildly positive); and if the result is 5-6, it takes a large value and makes it much larger – depending on what you roll.

The logarithm then compresses the results back down to a usable scale while placing emphasis on low results.

The second d8 smooths the curve a little, and fills in any gaps, while the +5 shifts the curve into the result space we want.

But it’s by far the weirdest computed probability curve that I’ve ever seen.

NOTE that you can take results of off the table and replace them with results of 20 just by increasing the modifier from +5 to +6.

Min, Max, Ave

    Minimum 1
    Maximum 21
    Average (measured) 6.67989

The Thresholds
    The 1% Threshold

      Two results fall below this line: 19 and 21.

    The 3% Threshold

      1-3 and 17-18 and 20 are all below this line. 16 is so close to it that I will include it too, at 3.033%.

    The 5% Threshold

      4-5 and 15 are in this band.

    The 10% Threshold

      Between 5% and 10% we find everything else – nothing crosses this boundary. So that’s 6-14.

    log2 (d6/3) ^ d8 +d8 +5:

      1% to 3%: 1-3 = 6.710%
      3% to 5%: 4-5 = 6.794%
      5% to 10%: 6-14 = 71.942%
      3% to 5%: 15 = 4.688%
      1% to 3%: 16-18 = 6.894%
      <1%: 19 = 0.827%
      1% to 3%: 20 = 1.869%
      <1%: 21 = 0.276%

Slices Of Range: Percentages Of Probability
    Range Of Results

      21 results are possible. But with the average so far removed from the mid-point of this range, the roll is biased somewhat low, and that will be reflected in the divisions.

    Ave – Min, Max – Ave

      6.67989 – 1 = 5.67989
      21 – 6.67989 = 14.32011

    1/3 (Ave-Min) + Min

      1/3 x 5.67989 + 1 = 2.89329. The lower band contains 1 and 2. Coming close to inclusion is 3, but not quite close enough.

      1-2 have a total probability of 4.03%.

    2/3 (Ave-Min) + Min

      2/3 x 5.67989 + 1 = 3.78659.

      The only result in this span is 3, which has a probability of 2.681%.

    The Lower Core

      Between 3.78659 and the average are 4, 5, and 6. They have a total probability of 13.106%.

    The Upper Core: 1/3 (Max-Ave) + Ave

      As usual with an asymmetric roll, this has to be calculated; it won’t be the same as the span on the other side of the average.

      1/3 x 14.32011 + 6.67989 = 11.45326, so this band of results contains everything from 7-11, a combined probability of 42.742%.

    2/3 (Max-Ave) + Ave

      2/3 x 14.32011 + 6.67989 = 16.22663, so this band contains results from 12 to 16, a combined probability of 30.609%.

    The Lofty Outcomes

      At the very top, we have results 17-21, which have a cumulative probability of 6.833%.

    log2 (d6/3) ^ d8 +d8 +5:

      1-2 ‘Worst possible roll’ = 4.03%, span 2
      3 ‘Poor result’ = 2.681%, span 1
      4-6 ‘Below Average result’ = 13.106%, span 3
      7-11 ‘ Above Average result’ = 42.742%, span 5
      12-16 ‘Good result’ = 30.609%, span 5
      17-21 ‘Great result’ = 6.833%, span 5

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      1-6 will be the lowest 20% of results.

    Second Lowest 20%

      The 40% mark captures results 7-8.

    The Middle 20%

      The 60% mark contains 9-10. 11/-, with a combined percentage of 62.56%, just misses out.

    Second-Highest 20%

      So it starts this band off, which ends at the 80% mark and a result of 13, capturing 12 along the way.

    Highest 20%

      Leaving 14-21 as the top end of town.

    Peak Probability

      The other way of dividing results up is to stratify them by fractions of peak probability, which in this case is 11 at 9.100%.

    Matching Result: 1/3 Peak Probability

      1/3 of 9.1% is 3.0333%. Below that line we find 1-3 and 17-21, with 16 exactly on the line.

    Matching Result: 2/3 Peak Probability

      2/3 of 9.1 is 6.0667%. Between this line and the previous one we have a middle stratum of results: 4-5 and 14-15.

      Which in turn means that 6-13 are in the uppermost stratum.

    log2 (d6/3) ^ d8 +d8 +5:

      01-20%: 1 – 6, span 6
      21-40%: 7 – 8, span 2
      41-60%: 9-10, span 2
      61-80%: 11-13, span 3
      81-100% 14-21, span 8

      1-3: 0-3.0333%, span 3
      4-5: 3.0333-6.0667%, span 2
      6-13: >6.0667%, span 8
      14-15: 3.0333-6.0667%, span 2
      16-21: 0-3.0333%, span 6

      When you cut the results up this way, the result seems relatively prosaic, barely hinting at the complexity below the surface.

      You can get even stranger results if you use 3d6/12 as the core roll. Some of the results I got while playing around with the concept looked like a cartoon shark’s tooth!

Summary Of Results

    In this case, nothing captures the nuance of what’s going on quite as well as the graph that I made at the top. It’s a bell curve with a flattened top and a longer descent to a secondary peak at 20 – but it’s lumpy.

    That’s because this isn’t one curve, it’s the sum of six different curves.

    There’s log2 (1/3 ^ d8) +d8,
    log2 (2/3 ^ d8) + d8,
    log2 (1 ^ d8) + d8 = d8
    log2 (4/3 ^ d8) + d8,
    log2 (5/3 ^ d8) +d8, and
    log2 (2 ^ d8) + d8 = 2d8.

    Plus the modifier to shift the results, of course.

When To Use This Substitute

    This is the perfect roll to use when results could go either way and snowball, because that’s exactly what is being simulated. The d6 controls the ‘either way’ and the exponentiated d8 controls the degree of snowballing, from none (d8=1) to massive (d8=8). The rest of the construction is just there to make things pretty, and functional.

    There is a slight bias low, which is why the average is so low – but that is compensated for because a d6 has an even number of faces, so division by 3 adds a bias high. The result is the tail, which is clearly longer than the front-face of the curve.

The Wrap-up

If the content below looks familiar, it’s because it is, in essence, a summary of the ‘when to use this roll’ discussion, re-sequenced into a more streamlined narrative, with less focus on the die rolls and more focus on the circumstances that suggest their use be considered.

Replacing a d20:
  • When you need one and don’t have one to hand, 10 x (d2-1) + d10 or 5 x (d4-1) + d5 are perfect replacements.
  • For everyday skill checks with little value in an extreme result, consider 4d6-3. Add + modifiers to nuance the odds in the character’s favor.
  • Ditto combat training.
  • Consider using 3d6 for anything involving biological systems to take advantage of the trend toward the average. Be very aware of the impact of modifiers – which results become impossible, and which results get put on the table to replace them, and what it does to the neutral bias relative to a d20.
  • When you want to take extreme results off the table but still want to preserve a lot of the evenness of results throughout the range, consider 2d10.
  • You can put fumbles back on the landscape with 2d10-1 but this takes away the critical success possibility. This is recommended for a character performing a task unskilled.
  • Also consider 2d10-# when the game system states that you need a certain minimum attack bonus even to hit – it transforms ‘impossible’ into ‘unlikely’, giving your PCs a chance to survive. Works for NPCs up against PCs decked out with magical gear, too.

  • When a player indicates that near-enough-will-be-good-enough, use 2d10+#. It makes extremely good results more unlikely while increasing the likelihood of success. 3d6+# has the same effect but with a stronger bias toward the average result. This is also appropriate when time is more important than ‘pretty’.
  • Whenever 2d10 is an option, d10+d12-1 also needs to be considered. This makes extreme results just a little more common and resists the trend to the central results a little bit more.
  • When someone is being taught a skill, consider d8+d12, representing a supervisor who will gently nudge toward a satisfactory result, helping out when things get sticky. This roll makes both extremes less likely.
  • When a delicate situation could abruptly swing either way, consider using 2d4+d12 instead of d20. Especially when one character is actively trying to help or hinder another. Extremely sensitive to modifiers; there’s a whole range of nuanced options to pick from.
  • When you want to give players a sense that they are ‘winning’ (even if they aren’t), consider using 2d8+d6-2 instead of d20. Extreme results are more possible than on some other rolls but the overall average is higher, so success is more likely. At the same time, there is a mild push toward more average results.
  • Alternatively, consider d4+d6+d12-2, which is a flatter, more evenly distributed option with greater potential for extreme results. Or d4+d8+d10-2, which is not significantly different.
  • When you want the character to succeed while preserving the chance of potential failure, consider 3d8 – 3. This has an average result of 10.5, same as a d20, but that goes up by 1 for each +1 modifier to the roll. It might be easy to be too heavy-handed in this respect.
  • When you want to convey to a player that they are making a stupid mistake that you don’t want to succeed for the sake of the game, use 2d6+1 (for high results desired) or 2d6+6 (for low results desired).
  • When you want to convey to a player that the circumstances don’t really permit failure and probably don’t need to be rolled (but they insist or it’s an NPC doing something contrary to what the PCs would want), use 2d6+1 (for low results desired) or 2d6+6 (for high results desired).
  • When constructing a table based on a probability chart drawn to your specifications, consider using d4 x d6 – d4 +1 instead and letting the roll do all the hard work.
  • When you want to bring a carnival atmosphere, a sense of the absurd, into the game, use d30 +1 – d10 instead of d20. If you roll less than 1, the opponents do something monumentally stupid or something ridiculous happens to them; if you roll above 20, the shoe is on the other foot.
  • When the GM thinks that there is no chance of failure but degrees of success or complications to be overcome getting to success are present, consider replacing d20 with 5d4 / d5. A gateway to roleplaying.
  • When results could go either way and snowball quickly, consider using log2 (d6/3) ^ d8 +d8 +5.

    If you don’t know how to do a logarithm to the base X, the trick is

    logX(#) = log(#) / log(X).

    For example, log(1024) is 3.0103; log(1024) to the base of 2 is 3.0103 / log(2) = 10. Which means that 1024 is 2^10.

    Absurdities that are real: 1024 = 6.0551 to the base of pi. I don’t know why you would ever need to know that, but this is the technique that lets you calculate it if you ever do.
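
    In a spreadsheet or any programming language the same trick works – for instance, a couple of lines of Python:

      from math import log, pi

      print(log(1024) / log(2))             # ~10, so 1024 is 2^10
      print(round(log(1024) / log(pi), 4))  # ~6.0551 - the base-pi absurdity above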

Replacing 3d6:
  • For additional drama: consider 4d6-3. Especially to resolve skill checks in which there is significant opposition or circumstantial difficulty to overcome.
  • Consider using a d20 and re-rolling any result below a threshold to describe the results of genetic modification or selective breeding.
  • Consider using d8+d12 to replace 3d6 for the simulation of poisons and diseases, where some effect takes place but extreme effects are unlikely – but can be worse than on a 3d6 roll. But there are better options even than this for this circumstance.
  • When a delicate situation could abruptly swing either way, consider using 2d4+d12 instead of 3d6. Especially when one character is actively trying to help or hinder another. Extremely sensitive to modifiers; there’s a whole range of nuanced options to pick from.

  • Consider using 2d6+d8 instead of 3d6 when the outcome is of lower importance. It has a much lower chance of an extreme result and more even chances of anything else. Modifiers are especially powerful. So trouble is more likely to happen and can be better mitigated by arranging circumstances in your favor. This encourages roleplaying AND tactical thinking.
  • 2d8+d6-2 Increases the potential diversity of results, useful for situations that are on a knife-edge. Far less centrally-dominated than a 3d6 roll.
  • Alternatively, consider d4+d6+d12-2, which is a flatter, more evenly distributed option with greater potential for extreme results. Or d4+d8+d10-2, which is not significantly different.
  • When you want to convey to a player that they are making a stupid mistake that you don’t want to succeed for the sake of the game, use 2d6+1 (for high results desired) or 2d6+6 (for low results desired).
  • When you want to convey to a player that the circumstances don’t really permit failure and probably don’t need to be rolled (but they insist or it’s an NPC doing something contrary to what the PCs would want), use 2d6+1 (for low results desired) or 2d6+6 (for high results desired).
  • When constructing a table based on a probability chart drawn to your specifications, consider using d4 x d6 – d4 +4 instead and letting the roll do all the hard work.
  • When you want to bring a carnival atmosphere, a sense of the absurd, into the game, use d30 +1 – d10 instead of 3d6. If you roll less than 3, the opponents do something monumentally stupid or something ridiculous happens to them; if you roll above 18, the shoe is on the other foot.
  • When the GM thinks that there is no chance of failure but degrees of success or complications to be overcome getting to success are present, consider replacing 3d6 with 5d4 / d5. A gateway to roleplaying.
  • When results could go either way and snowball quickly, consider using log2 (d6/3) ^ d8 +d8 +5. See the notes on d20 substitution if you don’t know how to turn a logarithm in one base (usually 10 or e) into another (for example, 2). This comes up all the time in the Hero System where +5 = twice as much. For example, adding 20 strength means you can lift 2 ^ (20/5) times as much = 2 ^ 4 = 16 times. And 5 points of temporary stat damage means you have half as much of that stat.

And that brings to a close yet another example of “it seemed like a quick and easy post when I started”. It wasn’t – it’s been arduous and grinding, with lots of detail needing very close attention and very high levels of concentration, which were mentally exhausting – to the point where I could only do about 1/2 a die roll’s analysis in a session without pausing to recuperate and recharge.

But I think the results are worthwhile, and in some cases, fun!


Traits of Exotic d20 Substitutes pt 2: The Slightly Strange


Lots of die configurations can substitute for a d20, or for 3d6. This article looks at some of the more unusual. Part 2 of 3.

The image of the balance is by Anna Varsányi from Pixabay. I’ve changed its balance, added a load of dice, and changed the background color.


Progress on writing this post has been a lot slower per word than I was expecting. That’s partly down to the attention to detail, partly it’s the complexity of the subject, partly it’s my refusal to do a half-arsed job, but mostly it’s the intensity of focus and concentration.

I can get about 2/3 of the way through an analysis – at which point my brain is fried. I need 30-90 minutes of recuperation before I’m ready to go again – but a part of that second session is refreshing my recollection of the results so far, in minute detail. With that taken into account, it’s two equal servings per entry.

On top of all that, there have been a couple of bonus entries added to the list along the way, so even though it only contains half the number of analyses so far, in word length this part seems fairly comparable to the first.

The upshot of all of this is that what was originally intended to be one quick post is taking 3-4 times as long as expected, and so the content has now been subdivided into three parts.

Here’s the revised table of contents:

Part 1: The Standards & Mildly Exotic:

  1. 4d6: The Methodology Demonstration
  2. d20: The Yardstick
  3. Boring Workaround #1: 10 x (d2-1) +d10
  4. Boring Workaround #2: 5 x (d4-1) + d5
  5. 3d6: The Standard
  6. Exotic Choice #1: 2d10
  7. Exotic Choice #2: d8+d12
  8. Exotic Choice #3: 2d4+d12

Part 2: The Slightly Strange

  1. Exotic Choice #4: 2d6+d8
  2. Exotic Choice #5: d10+d12-1
  3. Exotic Choice #6: 2d8+d6-2
  4. BONUS EXTRA: Exotic Choice #6a: 3d8-3
  5. Exotic Choice #7: d4+d6+d12-2
  6. BONUS EXTRA: Exotic Choice #7a: d4+d8+d10-2

Part 3: The Really Weird

  1. Exotic Choice #8: d4 x d6 – d4
  2. Exotic Choice #9: d30+1-d10
  3. Exotic Choice #10: 5d4 / d5
  4. Exotic Choice #11: (3d6+2) / d4
  5. BONUS EXTRA: Exotic Choice #11a: (d6 x 2d5) / d3
  6. Exotic Choice #12: (4d10 / 2) -d2 +1
  7. Summary of available d20 substitutes
  8. Summary of available 3d6 substitutes

Exotic Choice #4: 2d6+d8 (vs 2d4+d12)

We start this post with a die combination that wasn’t even on my radar until I got into the last offering of part 1.

And isn’t THAT interesting! It’s a proper dumbbell curve! So much so that I had to generate a second chart…

Notice that I’ve applied a -1 modifier to the 2d6+d8 curve to get it to align with the 3d6 curve.

If you had asked me before I wrote last week’s post, this is exactly what I would have expected to see from 2d6+d8 – but it’s also exactly what I expected to see from 2d4+d12, and you only have to look at the first of the graphs above to see how that worked out last time…!

Let’s dig into the specifics.

Min, Max, Ave

    Minimum 3
    Maximum 20
    Average 11.5

The Thresholds
    The 1% Threshold

      1% is just below the probabilities of 4 and 19 – in fact, they are so close that I’m going to round them down and include them. So 3-4 and 19-20.

    The 3% Threshold

      3% is about 2/3 of the way between 5 and 6 on the low side, and 1/3 of the way between 17 and 18 on the high side. So 5 and 18 are in this zone; everything else is not.

    The 5% Threshold

      Results of 7 and 16 are higher than this threshold by enough that there is a clear separation. So 6 and 17 are in this zone, while 7 through 16 are outside it.

    The 10% Threshold

      This falls between 9 and 10 on the low side and 13 and 14 on the high. So 7-9 and 14-16 are in this zone.

    The 15% Threshold

      No result gets this high, so 10-13 sits below this threshold in the peak probability zone.

      Four results in the peak probability range, 3 to either side of it at 5-10% probability, and every result outside of that is less than 5% likely. In fact, 3 and 20 are so unlikely that the effective range of this roll is 4-19, and 4 and 19 themselves are barely more than 1% – so for most practical purposes, the effective range is 5-18.
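
      If you’d rather check these per-result figures than trust my chart-reading, a brute-force tally is all it takes – here’s a minimal Python sketch (purely illustrative, not a tool used for the article) that enumerates all 288 combinations of 2d6+d8 and labels each result with the threshold bands used above:

        from itertools import product
        from collections import Counter

        # Tally all 6 x 6 x 8 = 288 equally likely combinations of 2d6+d8.
        counts = Counter(a + b + c for a, b, c in
                         product(range(1, 7), range(1, 7), range(1, 9)))
        total = sum(counts.values())

        def band(p):
            # The cut-offs used in these articles: 1%, 3%, 5%, 10%, 15%.
            for limit in (1, 3, 5, 10, 15):
                if p < limit:
                    return f"under {limit}%"
            return "15% or more"

        for result in sorted(counts):
            p = 100 * counts[result] / total
            print(f"{result:2d}: {p:5.2f}%  ({band(p)})")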

Slices Of Range: Percentages Of Probability

    As usual, I’m going to start by breaking up the range of results and seeing what the total probability is across that range.

    Range Of Results

      20-3=17, plus 1 for 3 itself, is a range of 18 results. As noted above, the effective range is 4 results smaller than this, or 14 results.

    Ave – Min, Max – Ave

      11.5 – 3 = 8.5 (effectively 6.5)
      20-11.5 = 8.5 (effectively also 6.5)

      The probability curve is symmetrical.

    1/3 (Ave-Min) + Min

      1/3 x 8.5 is 2.8333. Adding the minimum back puts the 1/3 results mark at 5.8333.

      So the low-results zone contains 3, 4, and 5.

      These have a total probability of 3.47%

      Six out of 10 of these results will be a 5, three out of 10 will be a 4, and 1 in ten will be a 3.

    2/3 (Ave-Min) + Min

      2/3 x 8.5 is 5.6666; adding the minimum marks this band as containing 6, 7, and 8. They have a total probability of 19.44 – 3.47 = 15.97%, so this band is 4.6 times as likely to contain the result of a die roll as the lowest bracket.

    The Lower Core

      That defines the lower core as 9, 10, and 11, with a total probability of 50 – 19.44 = 30.56%, almost twice as likely as the middle band.

    The Upper Core: 1/3 (Max-Ave) + Ave

      Since the roll is symmetrical in probability, this is the same size as the lower core, 30.56% and results 12, 13, and 14.

      If you combine the lower and upper cores, you find that 9-14 will result more than 61% of the time!

    2/3 (Max-Ave) + Ave

      This will contain 15, 16, and 17, and will again hold 15.97% of the outcomes.

    The Lofty Outcomes

      Rolling really well will only happen 3.47% of the time, and will yield outcomes of 18, 19, and 20 when it does – with 18 being 6 in ten of the results.

    2d6+d8 (vs 3d6):

      03-05: 3.47% (03-05 4.63%)
      06-08: 15.97% (06-08 21.3%)
      09-11: 30.56% (09-10 24.07%)
      12-14: 30.56% (11-12 24.07%)
      15-17: 15.97% (13-15 21.3%)
      18-20: 3.47% (16-18 4.63%)

      I don’t know about you, but I find these numbers really interesting. The low results are not stretched in span to make room for the extra results at the top, at all – but they are even less likely to be the outcome.

      The lower middle also isn’t stretched at all – but the probability total is a lot lower.

      All the stretching happens at the very top of the probability curve, the most likely results – instead of a core only stretching to 12, it now continues all the way to 14.

      What’s more, the total chance of those middle results occurring is significantly higher than in the case of 3d6. So 2d6+d8 manages to be wider, flatter, AND steeper than 3d6!

      This, of course, is only putting some numbers on the observations that could be made just from studying the second probability chart above.

Slices Of Probability: The Definitive Result Values

    Okay, so let’s slice up the 100% of outcomes into 5 bands of equal probability, in sequence of result. This is always an informative analysis!

    The Lowest 20%

      The lowest 20% band lands just after a result of 8, so this band contains results from 3-8.

    Second Lowest 20%

      We find the 40% probability total just after a result of 10 – so this entire zone consists of 9 and 10.

    The Middle 20%

      60% is reached just before 12, so this entire band is just one result: 11.

    Second-Highest 20%

      The 80% mark is just below 14, in fact it’s so close that 14 can be reasonably included. So this band is 12-14.

    Highest 20%

      And that leaves the 15-20 band as ‘rolling high’ with this die configuration.

    As usual, I’ll now look at these results another way, dividing the peak probability into low rolls, medium rolls, and high rolls.

    Peak Probability

      Peak probability is shared by both results 11 and 12, and is 11.11% in both cases.

    Matching Result: 1/3 Peak Probability

      1/3 x 11.11% is 3.7033%, which is just above a 6 result. So 3-6, a span of 4 results, is at or below this threshold, and this is matched by 17-20 on the high side.

    Matching Result: 2/3 Peak Probability

      2/3 x 11.11% is 7.4066%, which is just above the probability of an 8 result. So this band is a span of 2 results – a little smaller than the first span, indicating that the sides of this curve are indeed steeper.

      That, of course, means the 9-14 span is at the top of the probability curve, a span of 6 results. In fact, this span accounts for 176 of the 288 possible combinations – 61.11% of the outcomes.

    2d6+d8:

      00-20%: 3-8 (span 6)
      21-40%: 9-10 (span 2)
      41-60%: 11 (span 1)
      61-80%: 12-14 (span 3)
      81-100%: 15-20 (span 6)

      < 1/3 peak probability: 3-6 (span 4)
      1/3 – 2/3 peak probability: 7-8 (span 2)
      2/3 – peak – 2/3 peak: 9-14 (span 6)
      2/3 – 1/3 peak probability: 15-16 (span 2)
      < 1/3 peak probability: 17-20 (span 4)

      I’ve reformatted the second table to be more like the first because I think it might be just a little clearer. And I’ve started noting the results spans because that highlights some interesting anomalies.
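
      For anyone who wants to slice a different roll the same way, here’s a small Python sketch (mine, purely as a cross-check) that walks a result/percentage table and reports which result each 20% cumulative boundary falls inside:

        def quintile_boundaries(percentages):
            # percentages: {result: chance in %}, e.g. the 2d6+d8 figures above.
            boundaries, cumulative = [], 0.0
            targets = iter((20, 40, 60, 80))
            target = next(targets)
            for result in sorted(percentages):
                cumulative += percentages[result]
                while target is not None and cumulative >= target:
                    boundaries.append((target, result))
                    target = next(targets, None)
            return boundaries

        p = {3: 0.35, 4: 1.04, 5: 2.08, 6: 3.47, 7: 5.21, 8: 7.29, 9: 9.03,
             10: 10.42, 11: 11.11, 12: 11.11, 13: 10.42, 14: 9.03, 15: 7.29,
             16: 5.21, 17: 3.47, 18: 2.08, 19: 1.04, 20: 0.35}
        print(quintile_boundaries(p))
        # [(20, 9), (40, 11), (60, 12), (80, 14)] -- each mark lands inside that result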

Summary Of Results

    You don’t have to look very hard at the first chart to see that 2d6+d8 is biased high relative to d8+d12 and biased slightly low relative to 2d4+d12. The second chart shows that it’s also biased high by a whole integer of result relative to a 3d6 roll.

    For me, it’s that second probability curve that really tells the story. The 3 and 20 results are so unlikely they may as well not be there – so what we have, effectively, is a 3d6+1 curve that is significantly flatter, more d20-like, in a very wide range of the middle results.

When To Use This Substitute

    As a replacement for a d20 roll, I wouldn’t use this at all. In every measure that counts, there are better options already listed, especially 3d6+1.

    But as an alternative to a 3d6 roll, 2d6+d8-1 has a lot to commend it. The results are going to be more anarchic, more random, across a central span that covers slightly more than 1/3 of the total range. The price for that is vastly lower chances of an extreme result.

    But, in particular, I want to highlight how responsive this is to modifiers relative to a straight 3d6 roll. If I ‘add’ a modifier of 1 to the base 2d6+d8-1 roll (to get straight 2d6+d8), the chance of rolling particular results becomes very interesting.

      03   0.35 – 0.46 = – 0.11%
      04   1.04 – 1.39 = – 0.35%
      05   2.08 – 2.78 = – 0.7%
      06   3.47 – 4.63 = – 1.16%
      07   5.21 – 6.94 = – 1.73%
      08   7.29 – 9.72 = – 2.43%
      09   9.03 – 11.57 = – 2.54%
      10   10.42 – 12.50 = – 2.08%
      11   11.11 – 12.50 = – 1.39%
      12   11.11 – 11.57 = – 0.46%
      13   10.42 – 9.72 = + 0.7%
      14   9.03 – 6.94 = + 2.09%
      15   7.29 – 4.63 = + 2.66%
      16   5.21 – 2.78 = + 2.43%
      17   3.47 – 1.39 = + 2.08%
      18   2.08 – 0.46 = + 1.62%
      19   1.04 – 0 = + 1.04%
      20   0.35 – 0 = + 0.35%

    If I increase the modifier from +1 to +2, giving 2d6+d8+1:

      03   0 – 0.46 = -0.46%
      04   0.35 – 1.39 = – 1.04%
      05   1.04 – 2.78 = – 1.74%
      06   2.08 – 4.63 = – 2.55%
      07   3.47 – 6.94 = – 3.47%
      08   5.21 – 9.72 = – 4.51%
      09   7.29 – 11.57 = – 4.28%
      10   9.03 – 12.50 = – 3.47%
      11   10.42 – 12.50 = – 2.08%
      12   11.11 – 11.57 = – 0.46%
      13   11.11 – 9.72 = + 1.39%
      14   10.42 – 6.94 = + 3.48%
      15   9.03 – 4.63 = + 4.40%
      16   7.29 – 2.78 = + 4.51%
      17   5.21 – 1.39 = + 3.82%
      18   3.47 – 0.46 = + 3.01%
      19   2.08 – 0 = + 2.08%
      20   1.04 – 0 = + 1.04%
      21   0.35 – 0 = + 0.35%

    And, if I increase it again from +2 to +3, giving a roll of 2d6+d8+2:

      03   0 – 0.46 = -0.46%
      04   0 – 1.39 = -1.39%
      05   0.35 – 2.78 = – 2.43%
      06   1.04 – 4.63 = – 3.59%
      07   2.08 – 6.94 = – 4.86%
      08   3.47 – 9.72 = – 6.25%
      09   5.21 – 11.57 = – 6.36%
      10   7.29 – 12.50 = – 5.21%
      11   9.03 – 12.50 = – 3.47%
      12   10.42 – 11.57 = – 1.15%
      13   11.11 – 9.72 = + 1.39%
      14   11.11 – 6.94 = + 4.17%
      15   10.42 – 4.63 = + 5.79%
      16   9.03 – 2.78 = + 6.25%
      17   7.29 – 1.39 = + 5.90%
      18   5.21 – 0.46 = + 4.75%
      19   3.47 – 0 = + 3.47%
      20   2.08 – 0 = + 2.08%
      21   1.04 – 0 = + 1.04%
      22   0.35 – 0 = + 0.35%

    2d6+d8+2 has diminished chances of rolling anything below 13 (not gone) and massive boosts to the chances of a 14-18 result. 19s, 20s, and even the occasional 21 or 22 will show up, and any result below 7 is extremely unlikely.

    Does it work in the other direction, you may be wondering? Well, what if I add a -1 modifier to the ‘base’ -1 to get 2d6+d8-2?

      01   0.35 – 0 = + 0.35%
      02   1.04 – 0 = + 1.04%
      03   2.08 – 0.46 = + 1.62%
      04   3.47 – 1.39 = + 2.08%
      05   5.21 – 2.78 = + 2.43%
      06   7.29 – 4.63 = + 2.66%
      07   9.03 – 6.94 = + 2.09%
      08   10.42 – 9.72 = + 0.70%
      09   11.11 – 11.57 = – 0.46%
      10   11.11 – 12.50 = – 1.39%
      11   10.42 – 12.50 = – 2.08%
      12   9.03 – 11.57 = – 2.54%
      13   7.29 – 9.72 = – 2.43%
      14   5.21 – 6.94 = – 1.73%
      15   3.47 – 4.63 = – 1.16%
      16   2.08 – 2.78 = – 0.70%
      17   1.04 – 1.39 = – 0.35%
      18   0.35 – 0.46 = – 0.11%

    You might think that these results are comparatively mild in comparison to the big differences we saw earlier – but that’s exactly what you want – you want to penalize a PC, make life more difficult for them, but not to the point where they are helpless to do anything; that’s just frustrating.

    Adding another -1 starts to produce significant shifts:

      00   0.35 – 0 = + 0.35%
      01   1.04 – 0 = + 1.04%
      02   2.08 – 0 = + 2.08%
      03   3.47 – 0.46 = + 3.01%
      04   5.21 – 1.39 = + 3.82%
      05   7.29 – 2.78 = + 4.51%
      06   9.03 – 4.63 = + 4.40%
      07   10.42 – 6.94 = + 3.48%
      08   11.11 – 9.72 = + 1.39%
      09   11.11 – 11.57 = – 0.46%
      10   10.42 – 12.50 = – 2.08%
      11   9.03 – 12.50 = – 3.47%
      12   7.29 – 11.57 = – 4.28%
      13   5.21 – 9.72 = – 4.51%
      14   3.47 – 6.94 = – 3.47%
      15   2.08 – 4.63 = – 2.55%
      16   1.04 – 2.78 = – 1.74%
      17   0.35 – 1.39 = – 1.04%
      18   0 – 0.46 = – 0.46%

    Your chances of rolling a 14 are halved by the -2 modifier to the base roll and replacement of a d6 with d8-1. Your chances of rolling a 6 have almost doubled! (Both in comparison to a straight 3d6 roll).

    You may also be wondering how this compares with the same modifier applied to 3d6. The answer is that this penalizes characters slightly less. Again, the base roll is flatter and a little more anarchic – or perhaps you would prefer egalitarian.

    So PCs would have better chance of getting themselves out of trouble, after a struggle – but they would definitely know that they have been on the wrong end of the odds at the end of that trouble.

    So, when to make this substitution? The key is in the flatness of the curve and the increased unpredictability that results. In particular, I would be inclined to use this when the outcome of the roll won’t make a great deal of significant difference – increasing the anarchy while still leaving a situation salvageable is a prompt for roleplaying, and for the awarding of a small bonus (or penalty), which, as you’ve seen, can make a significant difference!
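
    Incidentally, the difference tables above don’t have to be built by hand – a minimal Python sketch along these lines (my own, assuming you only want percentage-point differences against a straight 3d6) will produce one for any modifier:

      from itertools import product
      from collections import Counter

      def percentages(dice, modifier=0):
          # dice: list of die sizes, e.g. [6, 6, 8]; returns {result: chance in %}.
          rolls = Counter(sum(faces) + modifier
                          for faces in product(*(range(1, d + 1) for d in dice)))
          total = sum(rolls.values())
          return {r: 100 * n / total for r, n in rolls.items()}

      def difference_table(dice, modifier, baseline=(6, 6, 6)):
          ours, theirs = percentages(dice, modifier), percentages(list(baseline))
          for r in sorted(set(ours) | set(theirs)):
              print(f"{r:2d}: {ours.get(r, 0):5.2f} - {theirs.get(r, 0):5.2f} "
                    f"= {ours.get(r, 0) - theirs.get(r, 0):+6.2f}%")

      difference_table([6, 6, 8], +1)   # 2d6+d8+1 vs 3d6, as in the second table above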

Exotic Choice #5: d10 + d12 – 1

I gave serious thought to pulling this construction from the list. How different would it be from 2d10?

Well, as it turns out….

This was originally going to be about d10+d12, no -1 to be seen – but when I noticed that it aligned the result with 2d10 perfectly, simplifying the comparison, it seemed obvious to change it.

Min, Max, Ave

    Minimum 1, Maximum 21, Average 11.

The Thresholds
    The 1% Threshold

      2 and 20 are considerably higher than 1% probability, so the only values that don’t exceed this (low) limit are 1 and 21. As explained with other analyses, anything this low might as well not be part of the roll – so the effective range of this configuration is 2-20.

    The 3% Threshold

      3% falls between 3 & 4 on the low side and 18 & 19 on the high. So 2-3 and 19-20 don’t reach this target.

    The 5% Threshold

      6 and 16 are exactly on this limit, so 4-6 and 16-18 are in this band.

    The 10% Threshold

      Nothing in this roll gets this high, which means that all results from 7 to 15 are more probable than on a d20 but not by very much. I keep saying it, but this is another very flat roll. That’s mostly because steepness comes with many dice – 3 or more.

    That’s 9 results in the peak probability range, 6 in the 3%-5% range, 4 in the 1%-3% range, and 2 in the 0-1% range.

Slices Of Range: Percentages Of Probability
    Range Of Results

      21-1 = 20, plus 1 for the 1 itself, =21. This can be verified by adding up the results in the different bands above – 9+6+4+2=21.

    Ave – Min, Max – Ave

      11-1=10;
      21-11=10. The probability curve is symmetrical.

    1/3 (Ave-Min) + Min

      1/3 x 10 +1 = 4.3333. That’s a little higher than the chance of getting a 5, so 1-5 form the first group of results. Collectively, these have a whopping 12.5% chance of showing up.

    2/3 (Ave-Min) + Min

      2/3 x 10 +1 = 7.6667. That’s just above 9, so 6-9 fall into this category. Between them, these results have a 37.5 – 12.5 = 25% chance of occurring.

    The Lower Core

      We haven’t had a case in this second part where we have to split a result into two bands – it happened a number of times in the first. But that’s the case this time around.

      This band contains 10, and 11 has one foot in it as well (and one in the next). So the total probability encompassed by this band is 8.33 + 1/2 x 8.33 = 12.495%.

      Except that it’s not. Closer examination of the probabilities shows that each increases smoothly by 0.8333333333333 etc % for each result closer to the average – which is why the ‘curve’ is actually a straight line. And, when I calculate 1 1/2 times this, with all those repeating threes, it comes to exactly 12.5%.

    The Upper Core: 1/3 (Max-Ave) + Ave

      In the ‘above average’ category of results, we have 11’s other half and 12, again totaling 12.5% of results.

    2/3 (Max-Ave) + Ave

      13 to 16 are in the ‘above average’ result category, and have a 25% chance of occurring.

    The Lofty Outcomes

      And that means that the best results, 17-21, will occur 12.5% of the time.

    d10+d12-1:

      01-05 12.5% (span 5)
      06-09 25% (span 4)
      10-11 12.5% (span 1 1/2)
      11-12 12.5% (span 1 1/2)
      13-16 25% (span 4)
      17-21 12.5% (span 5)

      Notice that if you put the two middle results together, you get a smooth 5-4-3-4-5 pattern of result spans. I suspect that’s because the probability increases at such regular intervals.
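
      That regularity is easy to verify by counting: with only two dice there are 10 x 12 = 120 equally likely pairs, each step toward the middle adds exactly one more pair (hence the constant 0.8333% increments), and the flat top is 12 - 10 + 1 = 3 results wide. A quick sketch, purely as a cross-check:

        from itertools import product
        from collections import Counter

        counts = Counter(a + b - 1 for a, b in product(range(1, 11), range(1, 13)))
        for result in sorted(counts):
            print(result, counts[result], f"{100 * counts[result] / 120:.4f}%")
        # The counts climb 1, 2, 3 ... 10, sit at 10 for results 10-12, then fall away again.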

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      0-20% encompasses results from 1-6.

    Second Lowest 20%

      21-40% contains 7-9.

    The Middle 20%

      41-60% is results 10 & 11. Twelve almost makes it, but doesn’t quite get there.

    Second-Highest 20%

      61-80% gets us from 12 to 14.

    Highest 20%

      Which means, obviously, 15-21 are the highest 20% of outcomes.

    Peak Probability

      Three results share the peak probability of 8.33333333333333%, as can be seen on the probability chart.

    Matching Result: 1/3 Peak Probability

      1/3 of 8.333333333333 (I’m being careful with the rounding!) is 2.7777777778.

      This is located between the probabilities for 3 and 4, and also 18 and 19. So this band contains 1-3 and 19-21.

    Matching Result: 2/3 Peak Probability

      2/3 of 8.333333333333 is 5.5555555556.

      This falls on the curve between the results of 6 and 7, and again between 15 and 16. This band contains 4-6 and 16-18.

      There’s an important lesson here. For some reason, instead of 4-6, I had written the above as “5-6 and 16-18”.

      I immediately observed that these spans were of different size – 2 vs 3 results – and had a gut reaction of “that’s not right”. So I double-checked and found that there was indeed an error.

      You can’t trust your instincts 100%, but as a motivation for double-checking something, they are always worth paying attention to.

      That also defines, by exclusion, the peak probability zone, containing results of 7-15, a span of 9 results.

    d10+d12-1:

      00-20%: 1-6 (span 6)
      21-40%: 7-9 (span 3)
      41-60%: 10-11 (span 2)
      61-80%: 12-14 (span 3)
      81-100%: 15-21 (span 7)

      < 1/3 peak probability: 1-3 (span 3)
      1/3 – 2/3 peak probability: 4-6 (span 3)
      2/3 – peak – 2/3 peak: 7-15 (span 9)
      2/3 – 1/3 peak probability: 16-18 (span 3)
      < 1/3 peak probability: 19-21 (span 3)

Summary Of Results

    This ‘curve’ is slightly flatter than 2d10, but starts a little higher at the extreme values. The two chart lines cross at 6 on the way up and 16 on the way down. Unlike 2d10, this has a flattened probability cap over three results, creating a small plateau.

    But really, it’s 2d10 stretched one result further in each direction.

When To Use This Substitute

    The big thing with 2d10 is the degree of push toward the middle of the results – it’s not a lot, but it’s constant all the way through.

    The d10+d12-1 alternative makes extreme results just a little more likely, and resists this push enough that it has a small probability plateau.

    That means that it’s not quite as egalitarian as a d20 roll, but it’s one of the more egalitarian of the alternatives.

    And that’s the key to when this option should be considered – any time a 2d10 alternative comes up as a possibility, the two factors (greater potential for extreme results, more evenly distributed result probabilities) should tell you whether or not this is a better option.

    If both factors are in agreement, congratulations – the choice is made. That will usually be the case.

    When they disagree, you have to decide which of the two is the more important, given the in-game context at the time – and that choice then controls, breaking the tie.

Exotic Choice #6: 2d8+d6-2

This option completes this family of related die roll options. I wasn’t all that surprised when it came up with another dumbbell curve.

This was always on my list of combinations to examine.

If you took a straightforward 3d6-3 roll (so that the minimum is zero), multiplied the result by 19/15, then added 1, you would technically be stretching the dumbbell curve to fit a 1-20 range.

There are two problems with this approach: first, it’s a fiddly three-step calculation to extract a result (and 19/15 isn’t a convenient multiple to work with, to boot), and secondly, there would be gaps due to rounding – which specific results would depend on how you rounded the results to get them to integers.

  • If you round down, the gaps are found at 5, 10, 15, and 19.
  • If you round off, 3, 8, 13, and 18 are impossible results.
  • If you round up, you cannot ever roll 2, 6, 11, or 16.

There are always four outcomes missing, because you’re stretching 16 possible values to cover 20 possible results.
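
If you want to see where those gaps come from, here’s a quick Python sketch (mine, just to verify the lists above) that pushes every 3d6-3 value through the three rounding options:

    import math

    values = range(0, 16)                      # 3d6 - 3 runs from 0 to 15
    for name, rounder in (("down", math.floor), ("off", round), ("up", math.ceil)):
        reachable = {rounder(v * 19 / 15) + 1 for v in values}
        gaps = sorted(set(range(1, 21)) - reachable)
        print(f"round {name}: impossible results {gaps}")
    # round down: [5, 10, 15, 19]; round off: [3, 8, 13, 18]; round up: [2, 6, 11, 16]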

I thought all along that this construction might be the answer, and that was a large part of the starting point for this series. But, cool as that might be, it lacked a purpose – which is why all the other die roll configurations were brought on-board.

Min, Max, Ave

    Minimum 1, Maximum 20, Average 10.5. And it’s a perfect bell curve.

The Thresholds

As usual, we start by looking at the probabilities of individual results and classifying them into bands.

You may be wondering why I chose the bands that I did. I started with 5%, because that’s the d20 probability of results; anything more than this will be more probable than on a d20 roll, anything below it will be less.

Everything from that is roughly a multiple of 2, or of 1/2.

  • 5 x 2 = 10.
  • 5 x 1/2 = 2.5, which either rounds to 2 or 3 – I chose 3 as being more ‘in the middle’.
  • 5 x 1/2 x 1/2 = 1.25, which can only reasonably round to 1.
  • Finally, I knew some curves would have some probabilities above 10, so I needed a cut-off for the most extreme results. The technically-correct choice would have been 20%, but it takes a d4 or d5 to get up that high, so this would not have been all that useful. So I compromised on 5 x 3 = 15.
    The 1% Threshold

      Just above 2 the curve crosses this mark, and just before 19 on the way down. So 1-2 and 19-20, spans of 2 results.

    The 3% Threshold

      A little beyond 4, this line is crossed, and a little before 17, it gets crossed again going the other way. So 3-4 and 17-18. These are also spans of 2 results.

    The 5% Threshold

      With most bell curves, this is usually the vicinity of the inflection point, where the probability starts rising comparatively quickly. That results in smaller spans – in other words, we get to 5% from 3% a lot more quickly.

      So it is in this case: the 3-5% band contains just two results, 5 and 16.

    The 10% Threshold

      Almost all remaining results fall into the range of 5-10% probability. Only the very peak – 10 & 11 – lie above this limit. So the 5-10% range contains 6-9 (a span of 4) and 12-15 (likewise).

Slices Of Range: Percentages Of Probability

Next, we break up the range of results into 6 groups as evenly as possible – 3 low and 3 high – and see what the probabilities are for each group and which results they contain.

    Range Of Results

      20-1=19; +1 for the 1 itself = 20.

    Ave – Min, Max – Ave

      10.5 – 1 = 9.5. And 20 – 10.5 also = 9.5. So the curve is symmetrical, as has been the case for every construction that we’ve looked at so far – but won’t always be true.

    1/3 (Ave-Min) + Min

      1/3 x 9.5 + 1 = 4.16667. This is the division point between worst results and poor results – so 1 to 4 are the worst results. These have a combined probability of 5.21% and a span of 4.

    2/3 (Ave-Min) + Min

      2/3 x 9.5 + 1 = 7.333, so that sets the ‘poor results’ as 5-7.

      The total probability of a poor result is 21.61 – 5.21 = 16.4%. This bracket spans 3 results. 16.4% is a little over three times 5.21%, so these are about three times as likely to occur as the worst results.

    The Lower Core

      Everything between 7.333 and the average of 10.5 is a ‘below average’ roll, by definition. so results 8-10 fall into this bracket.

      This band spans 3 results and has a total probability of 50 – 21.61 = 28.39%.

      28.39% is about 1.73 times 16.4%, so this band is not that much more probable than a poor result. That speaks to this being a relatively flat probability curve.

    The Upper Core: 1/3 (Max-Ave) + Ave

      Because the curve is symmetrical, this will mirror its below-average counterpart – a span of 3 and 28.39% probability. So that defines this band as containing 11, 12, and 13.

    2/3 (Max-Ave) + Ave

      For the same reason, this is also a span of 3 and has a total probability of 16.4%. That encompasses 14, 15, and 16.

    The Lofty Outcomes

      And that leaves the final span of 4 results (17, 18, 19, and 20) with a combined probability of 5.21%.

    2d8+d6-2 (vs 2d6+d8-2):

      01-04: 5.21%   (01-03: 3.47%)
      05-07: 16.4%   (04-06: 15.97%)
      08-10: 28.39%  (07-09: 30.56%)
      11-13: 28.39%  (10-12: 30.56%)
      14-16: 16.4%   (13-15: 15.97%)
      17-20: 5.21%   (16-18: 3.47%)

      That this probability curve is even flatter than 2d6+d8 is obvious, and for the same basic reason that 2d6+d8 is flatter than 3d6 – its range stretches out two extra results, to a top of 20 from a top of 18.

      That flatness manifests as a lower chance of a result in the lower and upper core probabilities and an increased chance of a result outside of that central region. Those ‘outside results’ will still cluster toward the average, because that’s what bell-shaped curves do – but this result is even more extreme than 2d6+d8 in that department.

Slices Of Probability: The Definitive Result Values

Finally, we slice the probability ‘cake’ two ways – first into bands of 20% from low to high, and then as equal portions of the peak.

The Lowest 20%

    Results of 1-6 are the lowest 20% in terms of probability. 7 is almost low enough, at 21.61% cumulative, to be included in this category as well.

Second Lowest 20%

    The bracket from 20% to 40% encompasses results from 7 to 9. Note that 9 only just scrapes in, with a total cumulative probability of 39.84%.

The Middle 20%

    Sitting across the middle, the 40-to-60% range contains results 10 and – being generous – 11. The cumulative probability is 60.16%, that’s close enough for inclusion I think.

Second-Highest 20%

    The 60-to-80% group comprises results 12 and 13.

Highest 20%

    Which, in turn, means that 14-20 are the highest 20% of results that will occur.

Peak Probability

    There are two results at the peak of probability: 10 and 11. That peak is 10.16%.

Matching Result: 1/3 Peak Probability

    1/3 of 10.16 is 3.3867%. This is close to a result of 5’s probability but not close enough. So the “polar lines” of this die configuration separate 1-4 and 17-20 from the rest of the results. That’s two spans of 4 results each.

Matching Result: 2/3 Peak Probability

    2/3 of 10.16 is 6.7733%. That’s more-or-less midway between results 6 & 7 – so the “tropical lines” of this configuration exclude 5 & 6 and 15 & 16 from the remainder. Those are spans half the size of the previous band.

    Living “in” those tropics are 7 to 14. A span of 8 results, divided across the “equator” that is the average roll (10.5) – so two spans of 4. The “Temperate Zone” that would result if these corresponded to latitude markers would thus be extremely narrow.

    What’s all this talk about “Latitudes” all of a sudden? Long-time readers know me, I’m always looking for a different metaphor to help people visualize what I’m describing – and this one just came to me.

2d8+d6-2:

    00-20%: 1-6 (span 6)
    21-40%: 7-9 (span 3)
    41-60%: 10-11 (span 2)
    61-80%: 12-13 (span 2)
    81-100%: 14-20 (span 7)

    Notice that rounding errors have ‘stolen’ 14 from the 61-80% band and given it to the 81-100% band.

    < 1/3 peak probability: 1-4 (span 4)
    1/3 – 2/3 peak probability: 5-6 (span 2)
    2/3 – peak – 2/3 peak: 7-14 (span 8)
    2/3 – 1/3 peak probability: 15-16 (span 2)
    < 1/3 peak probability: 17-20 (span 4)

    At least this table is symmetrical!

Summary Of Results

    It’s a bell curve that runs smoothly from 1 to 20. The peak probability is a little less than a 3d6, but that’s understandable given the stretching across 20 results. Minimum, Maximum, Average – they are all bang on. If you want to replace a d20 with a bell curve, they don’t come much better than this.

When To Use This Substitute

    I think the question then has to be asked, why use 2d6+d8-1 when you’ve got 2d8+d6-2? And it’s a fair question, if interpreted as “Under what circumstances would you NOT prefer to use 2d8+d6-2?”

    For the answer, I think we need to go back to the probability chart with which this analysis started. Here it is again:

    Compare the two. 2d6+d8 starts, natively, with the same result as 3d6 and stretches to 20, a full 2 points higher than a 3d6 roll goes. That shifts its peak forward – it will tend to roll high. And that always helps make players happy. It’s completely fair if the NPCs / monsters are using it too – it just makes everything that little more dramatic, that little bit more inclined towards a success than a failure – which, once again, is completely fair if both sides get the same benefit. Tacking on the -1 to the base roll shifts the curve so that it fits perfectly on top of a 3d6.

    This roll, 2d8+d6-2, starts two lower than a 3d6 roll and still ends two higher, and the probability curve is flatter. It’s less dramatic than 2d6+d8-1 with modifiers, and that is the real point of difference between the two.

    Whenever I want the PCs to feel like they were winning (even if they weren’t), any time I wanted to push or milk the drama of a situation, I would choose the 2d6 variant, with modifiers.

    Any time I wanted to calm the players, while still retaining the advantage of a bell curve over a flat line, I would consider replacing the normal d20 with 2d8+d6-2. Making extreme results less likely makes them more significant, more special, but also makes them less of a problem any time you want to soak the campaign in Calm for a while.

    The difference is psychological and all about controlling the emotional pacing of the campaign. I’ve written two major series on that subject:

    – and this is absolutely going into my locker of techniques as a tool to help make the drama ebb and flow. You want it to ebb (at the right times) so that when you ratchet things up, there is a visceral reaction to the excitement.

    As a replacement for 3d6, this option creates the latitude and some of the probabilistic diversity of results that you get with a d20 while retaining the trend for results to be consistently average, most of the time. A best of both worlds, if you will.

    As a replacement for a d20, this retains the potential for extreme results while making those more unlikely, introducing the ‘trend toward the average’ of a bell curve.

    In both cases, the difference is in extreme results and their merits under the circumstances of the campaign.

BONUS EXTRA: Exotic Choice #6a: 3d8 – 3

When I look at this chart, the word that comes first to mind is that it ‘lopes’ casually from one extreme to the other.


I want to draw attention, right off the bat, to the fact that 0 and 21 have a 0.20% probability. You would expect to see each once in every 500 rolls. Outside of theoreticals, for all practical purposes this is a gentle bell curve from 1 to 20.

Min, Max, Ave

    Minimum 0
    Maximum 21
    Average 10.5

The Thresholds
    The 1% Threshold

      The 1% chance falls between 1 and 2. So 0-1 are below this threshold. On the high side, it’s 20 and 21.

    The 3% Threshold

      The 3% line is just above 4, so 2, 3, and 4 are below this threshold, and 17, 18, and 19 are the high-side equivalents. These will come up no more than about half as often as they would on a d20 roll.

      The comparison with 3d6 is more interesting – the chance of rolling a 4 or less is 6.84% here, against 1.85% on 3d6, making these low results about 3.7 times as likely.

    The 5% Threshold

      The 5% line is between 5 and 6, so this no-dice-land contains just two entries – 5 and the corresponding high result, 16.

    The 10% Threshold

      Nothing quite gets to the 10% mark, so everything else is more probable than a d20 roll, but nothing is twice as likely.

      3d6 has much sharper sides and a taller peak. So most results in this range will occur less often.

      6-15 is a span of 10 results, the largest that we’ve seen to date.

      Completely irrelevant but very interesting – I wanted to look into the differences between 3d8-3 and 3d6. I tried dividing one by the other and got the expected result, which wasn’t especially illuminating, so to make sure the curve captured the nuance, I multiplied the 3d8-3 result by 10.

      Then I changed the division to a subtraction and – in a momentary lapse in concentration – forgot to take the times-10 back out.

      The results are predictable, beautiful in their own way, and very strange! I had to share them with readers :)

      Definitely one of the weirdest probability charts that I’ve ever seen!

Slices Of Range: Percentages Of Probability
    Range Of Results

      Results span from 0 to 21, so 22 wide.

    Ave – Min, Max – Ave

      No surprises here – 10.5 – 0 = 10.5, and 21-10.5 = 10.5. Both are the same, so the curve is symmetrical.

    1/3 (Ave-Min) + Min

      1/3 x 10.5 + 0 = 3.5. So the lowest results band contains 0-3. There’s a cumulative probability of 3.91% for this group of results.

    2/3 (Ave-Min) + Min

      2/3 x 10.5 +0 = 7. Which means that 7 has one foot in the middle lower band and the other in the lower core.

      This band therefore contains 4-7, which have a cumulative probability of 23.44 – 1/2 x 7.03 – 3.91 = 16.015%. That’s a little more than 4 times the low band.

    The Lower Core

      Results 8-10, plus half of 7, form the lower core, the below average results. There is a cumulative probability of 50 – 23.44 + 1/2 x 7.03 = 30.075%.

    The Upper Core: 1/3 (Max-Ave) + Ave

      Because the curve is symmetric, this is the same size as the lower band, containing 11-13 plus 1/2 of 14. The cumulative probability of these results is also the same at 30.075%.

    2/3 (Max-Ave) + Ave

      The middle upper band holds results of 15-17 plus half of 14, with a cumulative probability of 16.015%.

    The Lofty Outcomes

      The best results band runs from 18-21, and has a total probability of 3.91%.

    3d8-3:

      00-03: 3.91% (span 4)
      04-07: 16.015% (span 3.5)
      07-10: 30.075% (span 3.5)
      11-14: 30.075% (span 3.5)
      14-17: 16.015% (span 3.5)
      18-21: 3.91% (span 4)

      If I were to ignore the math and go for the more aesthetically-pleasing breakup, 7 shifts completely into the middle lower band, and 14 into the middle upper band, and that looks like this:

      00-03: 3.91% (span 4)
      04-07: 19.53% (span 4)
      08-10: 26.56% (span 3)
      11-13: 26.56% (span 3)
      14-17: 19.53% (span 4)
      18-21: 3.91% (span 4)

      The previous section of results highlighted the massive span covered by the probability range of 5-10%, and that same message is reinforced by the small difference in probability distribution shown. The ‘correct’ breakup emphasizes the differences a little more – 30.075% is almost twice 16.015% – but the ‘aesthetic’ breakup, in which the core bands are only 1.36 times as probable as the middle bands, puts things into a clearer perspective, I think.

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      Breaking up the outcomes by probability, the 20% line falls between 6 and 7, so one-fifth of results will be 0-6.

    Second Lowest 20%

      The 40% mark is just below 9, so the next 1/5th of results are 7s and 8s.

    The Middle 20%

      The 60% mark is just above 11, so the middle band holds 9-11.

      It’s interesting that the second lowest 20% only had a span of 2 results, while this band has a span of 3, and those are 3 of the 4 most probable outcomes.

      That highlights why this approach to analysis, while sometimes useful, is employed way too often and can be misleading.

    Second-Highest 20%

      80% lands between 13 and 14. So the second-best 20% of results will be 12s and 13s.

    Highest 20%

      Which means that the best 20% of results will contain everything from 14 to 21, a span of 8 results.

    The real problem is that the actual rolls will be fuzzy around the edges. 8^3= 512 combinations of die faces. Exactly 1 of them will be 0; so many will be 1, so many will be 2, and so on. The odds that 20% won’t include some that are on this side or that of the dividing line are remote to say the least.

    The cumulative probability through to results of 6 is 16.41% – and 16.41% of 512 is 84.0192. We can assume that the extra decimal places are rounding errors and say that from 512 rolls, 84 of them will be 6 or less.

    The cumulative probability through to results of 7 is 23.44%, because 7s have a 7.03% chance of showing up. 7.03% x 512 = 35.9936, and again, the difference from a whole number is down to rounding errors. So you would expect 36 results of 7 from the 512 rolls, and some of them will be in the lowest 20% of the 512 rolls, and some will be in the band above it.

    It gets worse: a perfect 20% of 512 is 102.4 rolls. What does “0.4 of a roll” mean? In practice, the 20% line cuts through the block of thirty-six 7s – 18.4 of them fall below it, and the other 17.6 above.

    The point is this: there are inherent and unavoidable limits to precision that can and do distort results. You need to take this into account when analyzing die rolls (or when interpreting someone else’s analysis).
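
    To put exact numbers behind that, here’s a short Python sketch (illustrative only) that counts the 512 combinations of 3d8-3 and shows where a ‘perfect’ 20% boundary lands in whole rolls:

      from itertools import product
      from collections import Counter

      counts = Counter(sum(dice) - 3 for dice in product(range(1, 9), repeat=3))
      total = sum(counts.values())          # 512 combinations

      running = 0
      for result in sorted(counts):
          running += counts[result]
          print(f"{result:2d}: {counts[result]:3d} ways, cumulative {running:3d} "
                f"({100 * running / total:.2f}%)")

      print("20% of", total, "=", 0.2 * total)   # 102.4 -- 18.4 rolls into the block of 36 sevens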

    The differences show up all the time in computer technology. “One K” is 1024 bytes, not 1000. “One Meg” is 1024 K, or 1048576 bytes.

    But disk manufacturers use “Meg” in the “1,000,000” sense because it makes their hard disks sound larger – a ‘real’ (binary) megabyte is 1,048,576 bytes, so a marketing “Meg” is only about 0.95 of one. And a 4.3 GB disk is roughly 4.3 thousand million bytes of capacity, which is really only about 4.0047 binary GB, or about 4,100.8 binary MB.

    Don’t expect to pack 1024 four-point-three megabyte files onto one, because you won’t (but some unscrupulous manufacturers used to charge as though you could).

    What’s more, not every byte of storage will be accessible; some of them are used to label blocks of information, and you can’t have a block that’s partially one file and partially another. So actual size on the storage medium always rounds up to the size of a block.

    Back when 100 MB was the ‘typical size’ of a hard disk, and I was in the IT support game, the number of calls we used to get from irate users whose new hard disk showed only 97 MB was astonishing – and it all stems from the difference between 1024 and 1000.

    Peak Probability

      Results of 10 and 11 share the peak probability of 9.38%.

    Matching Result: 1/3 Peak Probability

      1/3 x 9.38 = 3.1267%. This lands in between 4 and 5 on the low side and 16 and 17 on the high. So the low probability results are 0-4 and 17-21.

    Matching Result: 2/3 Peak Probability

      2/3 x 9.38 = 6.2533%. This is found between 6 & 7 on the low side and 14 & 15 on the high. So intermediate probability results are 5-6 and 15-16.

      That means that high-probability results are 7-14.

    3d8-3:

      00-20%: 0-6 (span 7)
      21-40%: 7-8 (span 2)
      41-60%: 9-11 (span 3)
      61-80%: 12-13 (span 2)
      81-100%: 14-21 (span 8)

      < 1/3 peak probability: 0-4 (span 5)
      1/3 – 2/3 peak probability: 5-6 (span 2)
      2/3 – peak – 2/3 peak: 7-14 (span 8)
      2/3 – 1/3 peak probability: 15-16 (span 2)
      < 1/3 peak probability: 17-21 (span 5)

      These show properties of bell curves that you see all the time. The 20%-band breakups shuffle extreme results off to the top and bottom; 25-50% of these will never show up unless you roll a lot (or get very lucky or unlucky). The peak probability approach scales the extremes more usefully, but shifts medium-probability outcomes into the peak bracket because of the steep sides of the bell curve and the natural ‘crown’ at its top.

Summary Of Results

    When I look at the two of them side-by-side (or one above the other, as they appear in my window), the comparison of greatest interest – and the most compelling – is with 2d6+d8-3.

    The 3d8-based roll is flatter but broader, with an increased chance of extreme results – and those results extend further to both the left and the right.

    The crossovers are between 5-6 (low) and 14-15 (high), so results of five or less and of 15 or more are more likely while the distribution of probability from 7-14 is more even.

When To Use This Substitute

    The thing that strikes me most about this construction, especially in comparison to 2d6+d8-1, is how much room there is for positive modifiers.

    For it to be useful, therefore, two conditions have to be met. First, a positive modifier has to be applicable. Any amount from +1 to even as high as +6 is fine. And second, I have to be willing to live with the chance of an extreme result, even a result as high as 24 or 25 (only possible with a +3 modifier or better).

    This configuration already has an average of 10.5, and every +1 modifier increases that by 1. So this is most useful for rolls when I want the PCs to succeed but need to still preserve a chance of failure, no matter how slim that might be.

Exotic Choice #7: d4+d6+d12-2

Here’s the secret sauce to any substitution of a single die: for every extra die you’re adding, increase the desired maximum by 1, so that you can include a default modifier that brings the minimum, maximum, and average back to what you want. So, 1 die -> 2 dice? Aim for a maximum of D#+1. 1 -> 3 dice? Aim for a maximum of D#+2, and so on.

Single dice have a minimum of 1, unless modified in some way, and that points the way to adjusting that principle for the replacement of N dice. The best approach is to reduce both minimum and maximum by (N-1), effectively putting the construction on a shape that looks like that of a single die. You get bonus points for making the modifier in the D#+modifier cancel out that (N-1). 3 dice -> 2 dice? That requires an (N-1) of -2, and a D#-1 to boot. 3 dice for 3 dice? (N-1) of 2, and D#+0.
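
Reduced to arithmetic, the single-die version of that rule is: to stand in for a dX with N dice, pick dice whose maximum faces add up to X + (N - 1), then subtract (N - 1) from every roll – the minimum comes back to 1, the maximum to X, and the average to (X + 1) / 2. A minimal Python sketch of that idea (my own framing of the rule above, not an official formula):

    def substitute(target_max, dice):
        # Stand in for a d(target_max) using these dice plus a flat modifier.
        n = len(dice)
        assert sum(dice) == target_max + (n - 1), "faces must total target + (N - 1)"
        modifier = -(n - 1)
        minimum = n + modifier
        maximum = sum(dice) + modifier
        average = sum((d + 1) / 2 for d in dice) + modifier
        return minimum, maximum, average

    print(substitute(20, [6, 8, 8]))    # (1, 20, 10.5) -- 2d8+d6-2 as a d20 stand-in
    print(substitute(20, [4, 6, 12]))   # (1, 20, 10.5) -- d4+d6+d12-2
    print(substitute(20, [4, 8, 10]))   # (1, 20, 10.5) -- d4+d8+d10-2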

Which brings me to this particular construction. It has every opportunity for every feature we’ve seen to date, and I have no idea what to expect. Three dice, so a bell curve; differences in die sizes, so maybe a plateau – who knows? I wouldn’t be all that surprised if it looked like a step pyramid with the top missing!

Let’s see what we get…

Is that it? My first reaction was to be underwhelmed. But then I started noticing details.

There’s an obvious inflection point at 3 – but then there’s another one at 6, and the two are linked by what appears to be a plumb-straight line. The same is true of the descending side of the curve, with the inflection points being 15 and 18.

These inflection points are of opposite sign – the first steepens the increase in probability and the second one diminishes it. We’ve seen that before, with the 2d4+d12 curve (from part 1) – but that had inflection points on successive results, so they were even less obvious.

There’s a central plateau that’s 4 results wide, and then a very gentle rolling-off on both sides, which is also reminiscent of 2d4+d12 – but that had comparatively steep sides and a larger plateau.

The scales can be hard to read, but 4.50% half-way up means 9% at the top and the intermediates are, well, in-between. 9% is a fairly high probability for a d20 substitute but not an atypical one. Some rolls, like 3d6, go even higher at the peak of their bell-curves. The peak probability looks very similar to 2d4+d12, which had the exact same peak probability as d8+d12.

What it doesn’t seem to have is the bias that 2d4+d12 exhibited – it was always either biased high or low relative to 3d6.

So there’s a lot of subtlety going on, and plenty to talk about.

Min, Max, Ave

    Minimum 1
    Maximum 20
    Average 10.5

The Thresholds
    The 1% Threshold

      Results of 2 and 19 happen at 1.04%. This is so close to the line that they should be included – so 1-2 and 19-20 are really improbable results, akin to rolling 01 or 00 on d%, or worse.

    The 3% Threshold

      3% falls in between 3 and 4, and again between 17 and 18. So results of 3 and 18 will occur about half as often as they would on a d20.

    The 5% Threshold

      5% divides 5 from 6 and 15 from 16. So results of 4-5 and 16-17 will occur a little less frequently than they do on a d20.

    The 10% Threshold

      Nothing gets above this on the chart. In fact, as noted earlier, nothing even gets as high as 9%. So everything from 6 to 15 will come up more often than on a d20 – but not as much as twice as often.

Slices Of Range: Percentages Of Probability
    Range Of Results

      This combination gives a range of 20 results, obviously.

    Ave – Min, Max – Ave

      Both of these calculations yield ranges of 9.5, so the curve is symmetrical.

    1/3 (Ave-Min) + Min

      1/3 x 9.5 + 1 = 4.16667. This divides worst outcomes (1-4) from the rest. There’s a combined probability of 6.94%; 2d8+d6-2 had the same set of outcomes, but only a 5.21% chance of them occurring.

      This curve has a flat – steep – flat – steep – flat thing happening, courtesy of its total of 4 inflection points. What this probability total shows is that the flattest parts are even flatter than some of the others that we’ve examined, making it more possible to get extreme results.

    2/3 (Ave-Min) + Min

      2/3 x 9.5 + 1 = 7.3333, so 5-7 are the ‘poor but not worst possible’ results. These have a combined probability of 25.35 – 6.94 = 18.41%.

      Again, this is higher than the results of 2d8+d6-2 but occupies the same range.

      This section of the curve carries us past the first inflection point and into the ‘steady rise’ part of the graph.

    The Lower Core

      That means that the lower core has the peak probability, the second inflection, and most of the steep-rise section between the inflection points – an absolutely huge total probability. Results of 8-10 are in this zone, and they have a total probability of 50 – 25.35 = 24.65%.

      Another indicator of the overall flatness of the probability curve is how close this total is to the previous one, despite all those noteworthy inclusions – 24.65% vs 18.41% is not a huge difference, by any means. More of the probability is spread over more extreme results with this construction.

    The Upper Core: 1/3 (Max-Ave) + Ave

      Since we know the curve is symmetrical, this is the same total percentage chance as the lower core, 24.65%, and the same span of results – in this case, from 11-13.

    2/3 (Max-Ave) + Ave

      Better than merely “above average”, the good results band spans from 14-16, and has a total probability of 18.41%.

    The Lofty Outcomes

      Which leaves the ‘best’ results as 17-20, with a total probability of 6.94%.

    d4+d6+d12-2:

      01-04: 6.94% (span 4)
      05-07: 18.41% (span 3)
      08-10: 24.65% (span 3)
      11-13: 24.65% (span 3)
      14-16: 18.41% (span 3)
      17-20: 6.94% (span 4)

      The only results here that are so unlikely they are only theoretically possible most of the time are 1 and 20. At 0.35% each, they can be expected to appear once each in every 288 rolls.

      If you take these out, the very even spread of results that you get from the table above becomes evident.

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      20% probability (cumulative) falls above 6 but below 7, so 1-6 form the lowest band of results. Notably, 1-4 are 10% of results and 5-6 are the other 10%.

    Second Lowest 20%

      40% probability is reached between 8 and 9 – the latter almost gets into this category but is not quite close enough. So this band contains results of 7 and 8.

    The Middle 20%

      60% probability is between 11 and 12, so results from 9 to 11 can be found here.

    Second-Highest 20%

      The 80% mark is just below 14, so this band is 12-13.

    Highest 20%

      That leaves 14-20 as the highest 20% of results. As with the lowest 20%, 14-15 are 10% of the results and 16-20 are the remainder.

    Peak Probability

      Peak probability is 8.3333% – again.

      If you search the two articles for 8.33, you’ll find that this peak probability occurs time and time again. There’s obviously some significance to that, it’s happened too often to be a coincidence, but what it might be escapes me.

      So I asked an AI, and what I got back highlights that 8.3333% is the same as 1/12: in each of the combinations where it turns up, the largest die is a d12 that is big enough relative to the others, so the peak probability works out to one chance in twelve.

      It also suggested that the size of the plateau matches the size of the smallest die in the combination – something I had half-noticed myself, but hadn’t mentioned because I didn’t have a large enough sample to be sure of the relationship. As the sketch below shows, that one is less reliable: it fits d4+d6+d12, but not some of the other combinations in these articles.
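
      A quick way to test both observations is to compute the peaks directly – a minimal Python sketch (mine, just for checking) over combinations discussed in these articles:

        from itertools import product
        from collections import Counter

        def peak_and_plateau(dice):
            counts = Counter(sum(f) for f in product(*(range(1, d + 1) for d in dice)))
            peak = max(counts.values())
            width = sum(1 for c in counts.values() if c == peak)
            return 100 * peak / sum(counts.values()), width

        for dice in ([4, 6, 12], [10, 12], [4, 4, 12], [4, 8, 10]):
            share, width = peak_and_plateau(dice)
            print(dice, f"peak {share:.2f}%, plateau {width} results wide")
        # [4, 6, 12] 8.33% / 4 wide; [10, 12] 8.33% / 3 wide;
        # [4, 4, 12] 8.33% / 6 wide; [4, 8, 10] 9.69% / 2 wide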

    Matching Result: 1/3 Peak Probability

      1/3 x 8.3333% = 2.7777%. Only the results of (low) 1-3 and (high) 18-20 are below this threshold.

    Matching Result: 2/3 Peak Probability

      2/3 x 8.3333% = 5.5555%. This threshold is exceeded by 6 (low) and 15 (high) so 4-5 and 16-17 are in this band.

      Which means that 6-15 are in the top third of results in terms of probability – a span of 10 results!

    d4+d6+d12-2:

      00-20%: 1-6 (span 6)
      21-40%: 7-8 (span 2)
      41-60%: 9-11 (span 3)
      61-80%: 12-13 (span 2)
      81-100%: 14-20 (span 7)

      < 1/3 peak probability: 1-3 (span 3)
      1/3 – 2/3 peak probability: 4-5 (span 2)
      2/3 – peak – 2/3 peak: 6-15 (span 10)
      2/3 – 1/3 peak probability: 16-17 (span 2)
      < 1/3 peak probability: 18-20 (span 3)

Summary Of Results

    The probability curve and that last table of results are, to me, the definitive characteristics of this die combination.

When To Use This Substitute

    In a nutshell, this is an alternative to 2d8+d6-2. It can be used any time you would consider using that configuration. The question then becomes when to use this choice instead of that one. To find out, I generated one more probability chart comparing the two.

    It’s clear that this is a flatter, squatter version of that chart, with greater potential for extreme results. There is still a strong central plateau – results from 6 to 15 are far more likely than anything outside that range – but results outside it will still occur. The 1-5 and 16-20 results each account for 34 of the 288 possible combinations, or 11.81%. Between them, 17 in 72 outcomes will be outside the ‘core range’ – which means that 55 out of 72 will be inside it.

    So if you want a 2d8+d6-2 chart, but with a (slightly) increased potential for extreme results and a more even spread of results, this is the option to choose.

BONUS EXTRA: Exotic Choice #7a: d4+d8+d10-2

Another of the curves that weren’t on my list, this is a variation on the previous one. The d6 has grown to a d8 and the d12 has shrunk to a d10 – and the results should be quite interesting in comparison. I don’t expect to dig too deeply into this one (or this article won’t be ready in time, I’m already pushing my luck), but it needs to be included, I think.

These three curves are so close together that, to be able to show things clearly, I’m going to need to offer up an enlargement.

Even this wasn’t enough, so if you click on the thumbnail above, it will open a still-larger version in a new tab. But below is an even-more-enlarged extract from that image, focusing on the complex interplay between results 6 and 8.

So, what do I see when I examine this?

The previous combination told me that this curve was going to be closely related to both that (d4+d6+d12-2) and 2d8+d6-2, analyzed earlier.

The relationship between this new construction and those alternatives was going to be critical to evaluating it. So I deliberately included them in the probability chart for comparison purposes.

The results show that for the most part, this curve is bound by those others, occupying some middle ground between them. Initially, d4+d6+d12-2 has the higher probabilities, and this curve tracks along with it fairly closely in shape.

However it does slowly lose ground to that roll and gets closer to 2d8+d6-2. Between 6 and 7, they cross, and for a single result (7), d4+d8+d10-2 is actually the lowest of the probability curves.

That doesn’t last, because between 7 and 8, 2d8+d6-2 flattens out massively, heading for its broad plateau; it crosses both the other probability tracks before 8 is reached.

It started as the highest of the three, and at that crossover point, abruptly switches to being the lowest.

The object of our interest, meanwhile, tracks very closely to 2d8+d6-2 right up to the point of peak probability and back down to the equivalent situation on the high side.

The differences in probability are very subtle up to the point where d4+d6+d12 flattens out and crosses over the other constructions and remain so when comparing d4+d8+d10-2 with 2d8+d6-2.

To sum up the differences in a nutshell:

  • Compared with d4+d6+d12-2, this curve is a full dumbbell curve; extreme results are slightly less likely but central results are far more likely. The crossover points between the two fall between 7 & 8 and between 13 & 14.
  • Compared with 2d8+d6-2, this curve makes extreme results a little more likely and centralized results a little less likely. The crossover point is between 6 & 7 on the low side and between 14 & 15 on the high.
    Min, Max, Ave

      Minimum 1, Maximum 20, Average 10.5.

    The Thresholds
      The 1% Threshold

        1 & 2, and 19 & 20, are clearly below this threshold. Not by a lot in the innermost of these results, but enough. I’ll round 0.04% difference, I won’t round 0.06%.

      The 3% Threshold

        This is just below 4 and 17 – so the only results in this bracket are 3 and 18.

      The 5% Threshold

        Between 5 & 6, and 15 & 16, is where this line sits, so this area contains 4-5 and 16-17.

      The 10% Threshold

        The probability curve never quite gets to this threshold, so everything from 6 to 15 falls into this region.

    Slices Of Range: Percentages Of Probability
      Range Of Results

        1 to 20, obviously.

      Ave – Min, Max – Ave

        Both low and high ranges span 9.5 results. The curve is symmetrical.

      1/3 (Ave-Min) + Min

        Breaking the range of results up into 6 bands as evenly as possible is the name of this game.

        1/3 x 9.5 + 1 = 4.16667. That lands between results 4 and 5 (closer to 4), so this band contains 1-4. The cumulative chance of a result in this range is 6.25%.

      2/3 (Ave-Min) + Min

        The next band up divides at 7.33333. That holds results 5-7. The cumulative chance of one of these showing on the dice after a roll is 23.13 – 6.25 = 16.88%.

      The Lower Core

        The below average results range takes us through 8, 9, and 10. There’s a cumulative 50 – 23.13 = 26.87% chance that one of these results from a roll.

      The Upper Core: 1/3 (Max-Ave) + Ave

        Because the results are symmetrical, this has the same characteristics as the Lower Core – a span of 3 results (11, 12, and 13) and a total probability between them of 26.87%.

      2/3 (Max-Ave) + Ave

        Similarly, this is the same as the medium-poor results – a span of 3 (14, 15, and 16) and 16.88% probability.

      The Lofty Outcomes

        That leaves 17, 18, 19, and 20 at the top of the tree, sharing a net 6.25% probability.

      d4+d8+d10-2:

        01-04: 6.25%
        05-07: 16.88%
        08-10: 26.87%
        11-13: 26.87%
        14-16: 16.88%
        17-20: 6.25%

    Slices Of Probability: The Definitive Result Values
      The Lowest 20%

        This threshold is almost exactly midway between results of 6 & 7. So 1-6 are in the lowest 20% of outcomes that will occur.

      Second Lowest 20%

        This is just below 9. It’s so close that I’m willing to include that result. So this set of 20% of the outcomes is 7-9.

      The Middle 20%

        60% is just over 11, so there’s no controversy in including it in this band, which therefore contains 10 & 11.

      Second-Highest 20%

        The 80% mark falls between 13 and 14, according to AnyDice’s ‘at most’ cumulative probability chart. So this 20% band contains 12-13.

        I’d like to have included 14, but at a cumulative total of 83.75%, it’s just too big a stretch.

      Highest 20%

        That leaves 15-20 as the members of the top-tier 20%. And half of that nominal 20% comes from 15 and 16 alone.

      Peak Probability

        10 & 11 share the peak probability of 9.69%.

      Matching Result: 1/3 Peak Probability

        1/3 x 9.69 = 3.23%. That says that 1-4 occupy this bottom rung of the probability ladder, along with 17-20.

      Matching Result: 2/3 Peak Probability

        2/3 x 9.69 = 6.46%. This is just below 7 & 14. So 5-6 and 15-16 occupy this intermediate rung.

        That leaves 7-14 as the most probable results, all characterized by a chance greater than 2/3 of the peak.

      d4+d8+d10-2:

        00-20%: 1-6 (span 6)
        21-40%: 7-9 (span 3)
        41-60%: 10-11 (span 2)
        61-80%: 12-13 (span 2)
        81-100%: 14-20 (span 7)

        < 1/3 peak probability: 1-4 (span 4)
        1/3 – 2/3 peak probability: 5-6 (span 2)
        2/3 – peak – 2/3 peak: 7-14 (span 8)
        2/3 – 1/3 peak probability: 15-16 (span 2)
        < 1/3 peak probability: 17-20 (span 4)

        Note that these tables are exactly the same as 2d8+d6-2. That’s how small the difference between them is.

    Summary Of Results

      Functionally, this is the same as 2d8+d6-2. The probability differences are so small that they might as well not exist. In fact, if someone were rolling d4+d8+d10 and simply giving you the results, you would need to record about 7,000 rolls before you could be confident that they weren’t rolling 2d8+d6!
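
      If you want to see just how small that gap is for yourself, a few lines of Python will brute-force both tables exactly – this is only a sketch (the pct_table() helper is mine, not anything from AnyDice), and it prints the single largest per-result difference between the two rolls:

from itertools import product
from collections import Counter

def pct_table(dice, modifier):
    """Exact chance (in %) of every total for the listed dice plus a flat modifier."""
    totals = Counter(sum(faces) + modifier
                     for faces in product(*(range(1, d + 1) for d in dice)))
    rolls = sum(totals.values())
    return {t: 100 * c / rolls for t, c in sorted(totals.items())}

new = pct_table([4, 8, 10], -2)   # d4+d8+d10-2
old = pct_table([8, 8, 6], -2)    # 2d8+d6-2
worst = max(abs(new[r] - old[r]) for r in new)
print(f"largest per-result difference: {worst:.3f} percentage points")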

    When To Use This Substitute

      A first for this 2-part article: I wouldn’t. I don’t see any point of difference significant enough to warrant it.

      Here’s another way to look at the number offered in the previous section: let’s assume that the average player makes 20 die rolls in a gaming session. Let’s assume that you play once a week. Let’s assume that you game for 50 weeks a year. And let’s assume that 1/4 of the die rolls you make are either 2d8+d6 or d4+d8+d10. How long to get to that 7,000-roll target where it would be certain that there was a difference between the two?

      20 x 50 x 1/4 = number of affected rolls in a gaming year = 250.

      7,000 / 250 = number of years before there is a statistically-significant difference = 28 years.

      Say no more, really.

    Okay, that gets the only slightly-strange choices out of the way. Next week, in Part 3, the really strange stuff and the wrap-up.


Traits of Exotic d20 Substitutes pt 1


There are lots of dice configurations that can substitute for a d20, or sometimes for a 3d6. This 2-part article looks deeply at some.

The image of the balance is by Anna Varsányi from Pixabay. I’ve changed its balance and added a load of dice.

Time Out Post Logo

I made the time-out logo from two images in combination: The relaxing man photo is by Frauke Riether and the clock face (which was used as inspiration for the text rendering) Image was provided by OpenClipart-Vectors, both sourced from Pixabay.

Not all RPG players and GMs are Geeks, it has to be said, but many of us can spend hours noodling over dice and probability curves and other Geeky dice-related subjects, like “Does my die roll true?” and “Are results more evenly distributed on a die if all opposite sides sum to Maximum + Minimum or can you get high-and-low rollers by clumping results together?”

Well, I’m not going to get into those issues today, but I am going to take a deep dive into d20s and a whole heap of alternatives that might merit consideration from time to time.

In addition to analyzing the bog-standard d20, I have 2 boring alternatives for when you can’t find yours, and 11(+) possible substitutes ranging from the boringly obvious 3d6 through to some quite exotic constructions.

In addition to being analyzed and compared with a vanilla d20 and the obvious 3d6, a number of these rolls merit comparison with each other. Because all that is a LOT of work that I might not get done prior to publishing this article, I’m going to use a lot of headings and subheadings, and indent these, so that you can find and compare the results from one to another for yourself.

This was originally intended to be just one post, but eventually I thought of one exotic die roll too many, and then another (exotic choices #3 and #8, respectively) and was left with no real option. (I’m saving the most interesting alternatives for part 2!)

Index Of Analyses

I wasn’t going to include this, but decided to throw it in at the last minute.

    Today:

    1. 4d6: The Methodology Demonstration
    2. d20: The Yardstick
    3. Boring Workaround #1: 10 x (d2-1) +d10
    4. Boring Workaround #2: 5 x (d4-1) + d5
    5. 3d6: The Standard
    6. Exotic Choice #1: 2d10
    7. Exotic Choice #2: d8+d12
    8. Exotic Choice #3: 2d4+d12

    In Part 2:

    1. Exotic Choice #4: 2d6+d8
    2. Exotic Choice #5: d10+d12
    3. Exotic Choice #6: 2d8+d6-2
    4. Exotic Choice #7: d6+d4+d12-2
    5. Exotic Choice #8: d4 x d6 – d4
    6. Exotic Choice #9: d30+1-d10
    7. Exotic Choice #10: 5d4 / d5
    8. Exotic Choice #11: (3d6+2) / d4
    9. Exotic Choice #12: (4d10) / 2

These choices all derive from thinking about the minimum and maximum results. They are all (with one exception) ways of getting a result from about 1 to about 20, some more effective than others. This article is all about what the differences in probability curves do to the results of using them as a replacement when a d20 or 3d6 roll would normally be called for.

4d6: The Methodology Demonstration

For every roll type to be analyzed, I’ll be presenting a set of probability curves generated by AnyDice.com. This article literally would not be possible in terms of practical delivery without this vital tool.
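
If you’d like to double-check any of these tables offline, the same numbers can be brute-forced with a few lines of Python – a sketch of the enumeration approach only, and nothing to do with how AnyDice itself works (the distribution() helper below is purely illustrative):

from itertools import product
from collections import Counter

def distribution(dice, modifier=0):
    """Exact probability (in %) of each total, e.g. distribution([6, 6, 6, 6]) for 4d6."""
    totals = Counter(sum(faces) + modifier
                     for faces in product(*(range(1, d + 1) for d in dice)))
    rolls = sum(totals.values())
    return {t: 100 * c / rolls for t, c in sorted(totals.items())}

for total, chance in distribution([6, 6, 6, 6]).items():   # the 4d6 demonstration roll
    print(f"{total:2d}: {chance:5.2f}%")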

It’s actually possible to display virtually every parameter to be measured on a single graphic, just barely.

But, when I add labels, it gets a little congested and they are very hard to read. So clicking on either of these images will open a 200%-sized version in a new tab that is a LOT clearer.

Let’s run through the content of each analysis. It will start with a graph of the probability curve, just like the ones above (but much simpler, with fewer labels).

Beneath that, I’ll give a verbal summary of what the chart shows. These will be very brief and ignore details.

Those details will be given in an inset text block with a subtitle, as below, and sometimes subdivided into sub-subtitles that are further inset to make them clear. Unlike my normal practice, I WON’T be insetting the titles, only the content under those titles – I think this will help the titles stand out a little more.

Min, Max, Ave

    The first set of specifics will be the minimum, maximum, and average result values. These will be compared with the values for a d20 and for 3d6 if relevant (at least one of the two always will be).

The Thresholds

    That will be followed with a set of analyses relating to the probability distribution. I’ll use seven of these, which between them define bands of results. It will actually be quite rare for a result to match the distribution parameter precisely; in most cases, I’ll round up (and comment on the fact) but when that distorts the outcome too much, I’ll round down (and again comment).

    The probability distribution parameters are mostly defined as “probability thresholds” and the measure that will be given is the result that matches that probability. This will become clearer in a moment!

    The 1% Threshold

      What’s the first result with a 1% probability of taking place or better? Results lower than this may technically be possible, but to all intents and purposes, they might as well not exist, and those are the results – both low and high – that this test is intended to highlight. On 4d6, 6 or less and 22 or more fall into this classification.

    The 3% Threshold

      I strongly debated whether or not to include this measurement until I saw the ‘analysis’ of the 4d6 roll used for the explanatory images above. This again covers very low-probability events that are very unlikely to show up unless you roll a lot of times. On 4d6, results of 7, 8, 20, and 21 qualify as falling into this probability band – which means that the results most likely to occur are 9-19.

    The 5% Threshold

      I was always going to include this category, however, because (by definition) any result on a d20 will be at this threshold. So the results in this band are less likely to show up than on a d20 roll. On 4d6, that’s a 9 and a 19. Which means that on 4d6, 10-18 are more likely to result than on a d20 roll. That’s an important point to note when considering a substitution of 4d6 for a d20.

      But another way of looking at these results is as defining the results that are realistically as likely to occur as on a d20, or more so – so this can be considered the true basis of comparison with a d20. It’s not technically accurate, but for practical purposes, you can actually define a 4d6 roll as giving results of 10-18 with a slight chance of slightly more extreme results. And that’s exactly the sort of comparison that this article is intended to discover and convey.

    The 10% Threshold

      This selects out only the most likely results. Some rolls may not even reach this threshold. Depending on the probability curves, there may also be a 15% Threshold canvassed in these results. On 4d6, 12 and 16 fall just below this threshold but they are close enough to it that I would (and have) counted them as being in this band of results. So 12-16 are the peak results of 4d6.

      What’s most interesting and useful about this metric is the range relative to the previous category, which also includes 10, 11, 17, and 18. That’s four results, while this top category (in this case) includes 12, 13, 14, 15, and 16 – five results. If we exclude the exact average result (14) then – in this case – the peak probability band is the same size in terms of number of results. In cases where the average ends in a decimal place, it wouldn’t be necessary to exclude it or treat it differently, because actual results will always be higher or lower than the average.

    Peak Probability

      The peak probability is usually but not always the average result. That’s certainly the case when it comes to 4d6, with an average result of 14. That’s four results higher than the average result of a d20 – 11, 12, 13, and 14 are all above the d20 average of 10.5. Four results out of 20 are a full 20% of the possible d20 results, so this is a noteworthy result – but the primary focus of this section of analysis is listing exactly what the probability of the most likely result actually is. In the case of 4d6, that’s 11.27%.

    1/3 & 2/3 Peak Probability Results Thresholds

      Why is that important? Because it is used to set a second pair of thresholds – one at 1/3 of that peak probability and one at 2/3. This divides the entire range of results into three bands of equal probability, or as close to it as possible.

      In the case of 4d6, 1/3 x 11.27% = 3.75667% and 2/3 x 11.27% = 7.51333%.

      The 3.75667% threshold falls somewhere in between 8 and 9 on the low side and 19 and 20 on the high. That means that 8 and 20 are outside of it and 9-19 are inside it. Observe that in the case of 4d6, these are the same results given by the 3% threshold – which is not too surprising, there isn’t a lot of difference between 3% and 3.75667%.

      The 7.51333% threshold falls in between 10 and 11 on the low side and 17 and 18 on the high. So 11-17 are above this threshold and 9-10 and 18-19 are in the middle bracket but not this one. That’s an important measure of the steepness of the gradient of the probability, I think – we’ll have to see if actual results bear that out. If I’m right, some of the steeper curves – the ones with only two or three dice – will have even fewer results in this classification, while those that are flatter – like 5d6 or 6d6 (neither of which I’m using) – would potentially have more results.
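
      For anyone playing along at home, the threshold tests are easy to mechanize; here’s a rough sketch for 4d6. Note that it applies strict cut-offs, so it won’t reproduce the ‘close enough to round’ judgment calls I make in the text:

from itertools import product
from collections import Counter

totals = Counter(sum(faces) for faces in product(range(1, 7), repeat=4))   # 4d6
table = {t: 100 * c / 6 ** 4 for t, c in sorted(totals.items())}           # result -> % chance

for cutoff in (1, 3, 5, 10):
    print(f"{cutoff}% threshold met by:", [t for t, p in table.items() if p >= cutoff])

peak = max(table.values())                                                  # 11.27% for 4d6
print("below 1/3 of peak:", [t for t, p in table.items() if p < peak / 3])
print("1/3 to 2/3 of peak:", [t for t, p in table.items() if peak / 3 <= p < 2 * peak / 3])
print("2/3 of peak or better:", [t for t, p in table.items() if p >= 2 * peak / 3])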

Slices Of Range: Percentages Of Probability

    That ends the probability-based analyses and lets me move on to the results-based analyses, where the results are a net total probability. The previous set of results are something of a bridge between the two!

    So this section of analysis is all about slicing up the range of possible results and looking at what the probabilities are of that sub-range.

    There are two preliminary analyses and then six bands. In mathematical terms, this section is more about the area underneath the probability curve, i.e. the cumulative effect across a range of results. It may or may not yield the same specific transition points as the probability-based analyses in the preceding section.

    Range Of Results

      First, what’s the total range of results? In the case of 4d6, results run from 4 to 24 – which gives a range of 24-4+1 = 21 results (it’s not 20 because that would exclude the 4, which is [technically] a valid result).

      Compared to a d20: 20-1+1=20 results. Not much difference. In fact you could argue that 4d6-3 is a valid substitute for a d20 – and one that’s not on my list! But since I’m analyzing it anyway, I don’t think there’s any real cause for complaint.

    Ave – Min, Max – Ave

      With the less exotic die rolls, these should be the same – but it’s not so when some of the more complicated substitutions come under the microscope.

      Take 4d6. Average 14, Minimum 4, maximum 24. 14-4=10; 24-14=10. Perfect symmetry around the average.

      There’s actually not a lot to see in this section except in the case of those stranger die rolls, but it’s a necessary procedural step to defining the bands of results.

    1/3 (Ave-Min) + Min

      1/3 of (Ave-Min) + Min defines the lowest 1/6th of the results. In the case of 4d6, 1/3 x 10 + 4 = 3.333 + 4 = 7.333 – so everything below 8 is the lowest one-sixth of possible results.

      The “At Most” table from AnyDice translates that, adding up the percentages (even if they are less individually than 0.01) – a total probability of 2.70%.

    2/3 (Ave-Min) + Min

      This defines the second sixth band. In terms of results below average, this is the ‘middle 1/3’.

      For 4d6, 2/3 x 10 + 4 = 6.667 + 4 = 10.667. So results of 8-10 fall into this band. The probability of results from 4-10 are 15.90%; subtracting the probability of results 4-7 (calculated in the previous sub-section) of 2.70% gives a result for this band of 13.2%.

      It’s the ratio to the low that’s of greatest interest, though – 13.2% / 2.70% is a ratio of 4.889. So the results in this band are almost 5 times as likely to come up.

    The Lower Core

      The Lower Core is the 1/6th of results below the average from the previous result up. If the average result is an actual possible result, as it is in the case of 4d6, then half of the probability of that average has to be added; the band splits it right in two. Life is simpler when that’s not the case.

      Results of 11-13, plus half of 14, give 4d6 probabilities of 44.37 – 15.90 + 1/2 x 11.27 = 28.47 + 5.635 = 34.105%.

      Again, it’s the ratios of lesser value that are of greatest interest. 34.105 / 13.2 = x 2.583; and 34.105 / 2.70 = x 12.63. So the low end of maximum probability is more than 2.5 times as likely to result as the middle lower band, and more than 12 1/2 times as likely as the lowest results.

    The Upper Core: Average to 1/3 (Max-Ave) + Ave

      I’ve given the technical definition above, but in most cases this is more easily calculated from the work already done: the % is 34.105% again, and the values are half of 14 + 15-17, the same width as the Lower Core. Only where a roll’s probability is asymmetric will this not be the easier way of handling this result. There won’t really be a lot to say, for most results, but some of the more exotic options may be interesting.

    2/3 (Max-Ave) + Ave

      The same goes for this band of results – we already know that it’s 13.2% of the results, and three results wide – so 18-20. Again, things will get more interesting with some of the exotic results.

    The Lofty Outcomes

      And, the uppermost band of results – for 4d6, that’s 21-24, and 2.70%.

    I’ll then wrap up this section by giving a table of the bands and their probabilities. I won’t be doing any fancy formatting, so the columns probably won’t align very well:

    4d6:

      04-07 = 2.7%
      08-10 = 13.2%
      11-14 = 34.105%
      14-17 = 34.105%
      18-20 = 13.2%
      21-24 = 2.7%
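
    If you want to reproduce those band totals – including the trick of splitting the average result’s probability across the two core bands – here’s a rough Python sketch for 4d6; the band() helper is purely illustrative:

from itertools import product
from collections import Counter

totals = Counter(sum(faces) for faces in product(range(1, 7), repeat=4))   # 4d6
n = 6 ** 4
pct = {t: 100 * c / n for t, c in sorted(totals.items())}
lo, hi = min(pct), max(pct)
ave = sum(t * c for t, c in totals.items()) / n                            # exactly 14 for 4d6

def band(a, b):
    """% chance of a result in (a, b], giving each core band half of the average
    result's probability when the average is itself a rollable result."""
    share = sum(p for t, p in pct.items() if a < t <= b)
    if ave == int(ave) and int(ave) in pct:
        if b == ave:
            share -= pct[int(ave)] / 2    # lower core keeps only half of the average
        if a == ave:
            share += pct[int(ave)] / 2    # upper core picks up the other half
    return share

cuts = [lo - 1, lo + (ave - lo) / 3, lo + 2 * (ave - lo) / 3, ave,
        ave + (hi - ave) / 3, ave + 2 * (hi - ave) / 3, hi]
for a, b in zip(cuts, cuts[1:]):
    print(f"results in ({a:.2f}, {b:.2f}]: {band(a, b):.3f}%")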

Slices Of Probability: The Definitive Result Values

    That was dividing the results up into six unequal bands. Next, I’ll divide the probability up into 5 equal bands and look at what results are within the probability distribution. I expect these to be a bit of an eye-opener, even with the hints given by the results above.

    The Lowest 20%

      The cumulative probability of results puts the 20% mark between the results of 10 and 11. So 4-10 are the results in this band.

    Second Lowest 20%

      We know the lower value result in this band is 11, but where does the cumulative probability cross the 40% mark? The answer, for 4d6, is between 12 and 13. So this entire band is defined as two results – 11 and 12.

    The Middle 20%

      Using an odd number of bands avoids the complications of the average result sometimes needing to be split. With 4d6, the 60% mark falls between 14 and 15 – so this band is also just two results, 13 and 14.

    Second-Highest 20%

      The 80% mark with 4d6 lands between 16 and 17 – so, again, 2 results wide, 15 and 16.

    Highest 20%

      Which, by definition (having excluded everything else), leaves the highest 20% of results containing 17-24. That’s a band 8 results wide, one more than the lowest 20% band.

    Peak Probability

      We already know this from earlier, but it needs to be recapitulated at this point. The peak probability for 4d6 comes at a result of 14, which has a probability of 11.27%. It’s what I now do with that information that I hope will be informative.

    Matching Result: 1/3 Peak Probability

      So, at 1/3 of this peak we define two bands, one low and one high. With 4d6, 1/3 of 11.27 = 3.756%. The result with a matching probability doesn’t exist – it’s between 8 and 9 – but using that as a dividing point works. So the bands for 4d6 are 4-8 and 20-24.

      I’m honestly not entirely sure whether or not this test will reveal anything significant, but I’d rather include it than risk not doing so.

    Matching Result: 2/3 Peak Probability

      2/3 of the peak defines another 2 bands, one high and one low, and – by exclusion – a band in the middle. For 4d6, 2/3 x 11.27 = 7.513%. That’s between 10 and 11, and between 17 and 18, so the lower band is 9-10 and the upper is 18-19. The central band is therefore 11-17.

    I’ll then wrap up this section of results with a pair of tables, each with 5 bands, summarizing all of the above:

    4d6:

      00-20%: 4-10
      21-40%: 11-12
      41-60%: 13-14
      61-80%: 15-16
      81-100%: 17-24

      04 – 08: < 1/3 peak probability
      09 – 10: 1/3 – 2/3 peak probability
      11 – 17: 2/3 peak probability to peak to 2/3 peak probability
      18 – 19: 1/3 – 2/3 peak probability
      20 – 24: < 1/3 peak probability
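
    Both of those tables come straight off the cumulative (‘at most’) figures, so if you’d rather not use AnyDice, something like this sketch will print the numbers you need – you then read off the 20/40/60/80% marks by eye, applying the same rounding judgment calls as above:

from itertools import product
from collections import Counter

totals = Counter(sum(faces) for faces in product(range(1, 7), repeat=4))   # 4d6 once more
n = 6 ** 4
running = 0
for total in sorted(totals):
    running += totals[total]
    print(f"at most {total:2d}: {100 * running / n:6.2f}%")   # where the 20/40/60/80% marks fall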

Summary Of Results

    A brief verbal summary of the results. I’ll try to highlight the significant bits without getting into recapitulation of specifics. In particular, I’ll want to compare and contrast with a d20.

    In the case of 4d6, it’s not an entirely inappropriate substitute for a d20, especially given how low the probability of extreme results is: the effective range is 7-21. A numeric modifier can shift that towards a more equitable substitution – minus three means that the effective low is 4, the effective high is 18, and the average is 11. There would be a very low probability of 1-3 or 19-21 resulting.

When To Use This Substitute

    And then, the real meat: the conclusion. When might this be a useful replacement for a d20? Do the implications of the statistics make it a better fit for skill rolls or attack rolls or saving rolls under certain circumstances?

    I’ll also cast a weather eye (very briefly) over the possibility of substituting the roll when a 3d6 would normally be called for. The same basic questions and options as above.

    So, 4d6-3 would be a useful substitute when there is little value in an extreme result – an everyday skill check, for example, or routine training exercise that lacks the real adrenaline ‘punch’ of actual combat. You can reduce that -3 to nuance the odds in the character’s favor, and this would actually represent a significant improvement in the chances of success – useful when there is no serious opposition or difficulty in what the character is trying to do, biasing results in favor of a success.

    Substituting 4d6-3 for 3d6 is a more interesting story. There’s a slim chance of a worse result or a better result than is possible on a 3d6 roll, and the average is ever-so-slightly higher. Varying the -3 permits nuanced shading. But the 4d6 roll overall is flatter than a 3d6 roll, so there would be a broader distribution of results and less of a knife-edge. That suggests that this would yield more drama when used in combat or saving throws, if the 3d6 chances were fairly even. If the chances were not even, this would amplify the bias in outcomes – again, more dramatic. I would also be inclined to make this substitution for skill rolls in which there IS significant difficulty or opposition to overcome. In effect, this is the complete opposite of a d20 replacement!

I’ll then conclude the analysis with any other notes that might be of interest – especially comparing one exotic option to another, and any further conclusions that result.

d20: The Yardstick

Probably the most boring probability chart ever, matched only by any other single die chart. It is, as you would expect, flat as a pancake from 1 to 20.

Min, Max, Ave

    Minimum 1, Maximum 20, Average 10.5.

    3d6 has a minimum of 3, a maximum of 18, and an average of 10.5. From which, you might conclude that 19/15 x (3d6-3) +1 would translate a 3d6 roll into a d20 roll. Let’s test that:

    Minimum: 19/15 x (3-3) +1 = 1. Correct.
    Average: 19/15 x (10.5-3) + 1 = 19/15 x 7.5 +1 = 9.5 + 1 = 10.5. Correct.
    Maximum: 19/15 x (18-3) +1 = 19/15 x 15 +1 = 19 + 1 = 20. Correct.

    If 3d6 were a flat roll, a single die, that’s “all” there would be to it. But it’s not. You’ll see how much it’s not when I analyze 3d6 as a d20 substitute, a little later. For now, I wanted to present this as a warning against assuming that getting the right minimum, maximum, and average was all there was to it – that’s the beginning of the story, but nothing more.
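
    Here’s a quick sketch of that check in Python – the end-points and the average line up exactly, but the shape gives the game away:

from itertools import product
from collections import Counter

threed6 = Counter(a + b + c for a, b, c in product(range(1, 7), repeat=3))
rescaled = {19 / 15 * (t - 3) + 1: ways / 216 for t, ways in sorted(threed6.items())}

print("minimum:", round(min(rescaled), 4), "maximum:", round(max(rescaled), 4))
print("average:", round(sum(v * p for v, p in rescaled.items()), 4))
print("most likely single value:", max(rescaled.values()))   # 0.125, versus a flat 0.05 on a d20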

The Thresholds

    Ho-hum. Anyone expecting anything other than an anticlimax needs their head read. Instead, let me relate an anecdote. I once knew a guy who very carefully and meticulously beveled the edges of the “1” face of his d20 on the premise that if the die was teetering on the edge of one of the surrounding faces, this would make it more prone to roll onto the 20. After a little thought, I let him use the die in-game.

    Why? First, the converse was also true – if his theory was valid, a potential 20 was just as likely to roll to one of the adjacent sides. And since there were three adjacent faces, this was three times as likely to happen.

    Second, by carving away a bit of the die on one side, he had slightly altered its balance so that it was heavier on the side of the 20 – so that would decrease the probability of a 20-or-adjacent, just a little, but probably by more than the increase that he achieved!

    In six months of gaming, he never did manage to roll a 20. At that point, he binned it and went back to using an undoctored die. And got a 20 in the first session after he made this change.

    A little knowledge can be a dangerous thing…

    The 1% Threshold

      Never happens, it’s that simple.

    The 3% Threshold

      Same with this result.

    The 5% Threshold

      Every possible result meets this threshold. What does that tell us, really?

    The 10% Threshold

      And, once again, this never happens.

    Peak Probability

      This is 5%, and it applies to every result. I don’t think I’m telling anyone anything new at this point.

    1/3 & 2/3 Peak Probability Results Thresholds

      Anyone want to guess which results meet these thresholds? Right: none of them.

Now, maybe things will get a little more interesting, if no less predictable.

Slices Of Range: Percentages Of Probability
    Range Of Results

      1 to 20. Are we excited yet?

    Ave – Min, Max – Ave

      1-10 is the low band, 11-20 is the high. No surprises here. Both have a range of 10 results.

    1/3 (Ave-Min) + Min

      1/3 of 10 is 3.333. So 1-3 is the lowest 1/6th of the results, with a net 15% chance of occurrence.

    2/3 (Ave-Min) + Min

      2/3 of 10 is 6.667. So 4-6 are the second 1/6th band of results, again with a net 15% chance of occurrence.

    The Lower Core

      So the lower core therefore has to be 7-10, with a total 20% occurrence. This shows the effects of rounding errors in going from fractional results (that don’t exist in real life) to integer results (which do).

    The Upper Core: 1/3 (Max-Ave) + Ave

      1/3 x (20-10.5) + 10.5 = 1/3 x 9.5 + 10.5 = 3.167 + 10.5 = 13.667. So 11-13 is the upper core, with a 15% rate of occurrence.

    2/3 (Max-Ave) + Ave

      2/3 x (20-10.5) + 10.5 = 2/3 x 9.5 + 10.5 = 6.333 + 10.5 = 16.833. So 14-16 is the second-highest band, again with a 15% chance.

    The Lofty Outcomes

      That leaves 17-20 and a net 20% at the top – again showing the rounding error effect.

    d20:

      01-03 = 15%
      04-06 = 15%
      07-10 = 20%
      11-13 = 15%
      14-16 = 15%
      17-20 = 20%

      That’s the most even that you can slice 100% into six ‘equal’ chunks with an indivisible unit of 5% at a time.

Slices Of Probability: The Definitive Result Values

    The difference between division into 6ths and division into 5ths is a profound one – when it avoids the rounding errors that showed up in the previous tests.

    The Lowest 20%

      1-4 comprise the lowest 20% of d20 results.

    Second Lowest 20%

      5-8 are the second lowest 20% band.

    The Middle 20%

      9-12 are the middle 20% band. Unsurprisingly, these perfectly bracket the average of 10.5.

    Second-Highest 20%

      Would anyone be surprised by a 13-16 result set?

    Highest 20%

      And finally, we have a perfectly satisfactory 17-20 as the highest 20% of results.

    Peak Probability

      This is 5% but we already know that there will be no matching results from this metric. Skipped.

    d20:

      00-20%: 1-4
      21-40%: 5-8
      41-60%: 9-12
      61-80%: 13-16
      81-100%: 17-20

Summary Of Results

    A d20 has a flat probability curve. Unsurprisingly, it makes a perfect substitute for a d20 roll. But let’s look at subbing in a d20 for a 3d6 roll, just for a moment.

    Your average results are a LOT less likely to occur. The chances of extreme results are a LOT higher, at both ends of the scale. The dumbbell-curve of the standard 3d6 roll ‘steals’ probability from extreme results to bolster those middle-of-the-road outcomes.

    The reality is that most biological systems and measurements have a dumbbell-shape when you plot them out, all other factors being equal. So the 3d6 roll is a good way of simulating normality, and a d20 is a bad way of doing so.

    That’s why D&D uses d6s to generate stats instead of a d20, even though it’s now d20-based in play.

When To Use This Substitute

    That doesn’t mean that the d20-for-3d6 substitution is entirely without merit. As soon as non-randomness enters the picture, it can give a better simulation of some outcomes. Five or less on 3d6 represents the lowest 5% chance of success – so for an experiment involving genetic manipulation, you might use d20 instead of 3d6, with the instruction to re-roll any result of 5 or less. This simulates culling the obvious failures and produces a far greater likelihood of successes – exactly what you would expect in such an artificial situation.
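
    Reading ‘re-roll any result of 5 or less’ as ‘keep re-rolling until you clear 5’, a quick simulation sketch (culled_d20() is just an illustrative name) shows what that culling does to the spread:

import random
from collections import Counter

random.seed(2024)   # repeatable demo

def culled_d20():
    """A d20 where any result of 5 or less is re-rolled until something higher appears."""
    while True:
        roll = random.randint(1, 20)
        if roll > 5:
            return roll

sample = Counter(culled_d20() for _ in range(100_000))
print("observed range:", min(sample), "to", max(sample))
print("share of 20s: about", round(100 * sample[20] / 100_000, 1), "% (versus 5% on a straight d20)")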

That brings me to the two boring workarounds – for when you need a d20 but can’t find one. I wasn’t originally going to include these, but changed my mind when I remembered someone with little grounding in mathematics who absolutely could not be convinced that this was a valid substitution.

Boring Workaround #1: 10 x (d2-1) + d10

Using a d10 and any dice that gives a high-low result of even probability (don’t try using a d5 or d7) is the standard workaround for occasions when your d20 escapes for a time.

Min, Max, Ave

    The high-low tells you whether or not to add 10 to the d10.

    The minimum is 1, the maximum is 20, and the average is 10.5 – exactly what you want in a d20.

The Thresholds

    These results are exactly the same as for a d20.

Slices Of Range: Percentages Of Probability
    Range Of Results

      1 to 20, obviously.

    Ave – Min, Max – Ave

      1-10 and 11-20, both a range of 10 results.

    1/3 (Ave-Min) + Min
    2/3 (Ave-Min) + Min
    The Lower Core
    The Upper Core: 1/3 (Max-Ave) + Ave
    2/3 (Max-Ave) + Ave
    The Lofty Outcomes

      These are all exactly the same as a d20. Right down to the rounding error effect.

    10 x (d2-1) + d10:

      01-03 = 15%
      04-06 = 15%
      07-10 = 20%
      11-13 = 15%
      14-16 = 15%
      17-20 = 20%

      Exactly the same as a d20.

Slices Of Probability: The Definitive Result Values
    The Lowest 20%
    Second Lowest 20%
    The Middle 20%
    Second-Highest 20%
    Highest 20%

      Once again, these are all exactly the same as a d20.

    10 x (d2-1) + d10:

      00-20%: 1-4
      21-40%: 5-8
      41-60%: 9-12
      61-80%: 13-16
      81-100%: 17-20

Summary Of Results

    There’s a reason why this is the first substitute that comes to mind – it’s the obvious solution to the problem.

    There is a question of dice etiquette when someone wants to use this because their d20 is misbehaving, though, because it is more work and that makes it harder to police for cheating. As a general rule, that sort of thing doesn’t bother me, but others may be more sensitive to the issue, so I thought it worth mentioning.

When To Use This Substitute

    When you need a d20 and don’t have one to hand – but you do have a d10 and a d-something else.

Boring Workaround #2: 5 x (d4-1) + d5

Look familiar? This is a perfect substitute for a d20.

Min, Max, Ave

    d4-1 gives results of 0-3. Multiply by 5 to get 0, 5, 10, or 15. Adding a d5 makes each of these 1-5, 6-10, 11-15, and 16-20 respectively.

    The minimum is 1, the maximum is 20, and the average is 10.5 – all exactly as they should be.

The Thresholds

    As the probability curve above confirms, these results are exactly the same as for a d20.

Slices Of Range: Percentages Of Probability
    Range Of Results

      1 to 20, as noted earlier.

    Ave – Min, Max – Ave

      1-10 and 11-20, both a range of 10 results. Perfect.

    1/3 (Ave-Min) + Min
    2/3 (Ave-Min) + Min
    The Lower Core
    The Upper Core: 1/3 (Max-Ave) + Ave
    2/3 (Max-Ave) + Ave
    The Lofty Outcomes

      These are all exactly the same as a d20. Right down to the rounding error effect.

    5 x (d4-1) + d5:

      01-03 = 15%
      04-06 = 15%
      07-10 = 20%
      11-13 = 15%
      14-16 = 15%
      17-20 = 20%

      Exactly the same as a d20.

Slices Of Probability: The Definitive Result Values
    The Lowest 20%
    Second Lowest 20%
    The Middle 20%
    Second-Highest 20%
    Highest 20%

      Once again, these are all exactly the same as a d20.

    5 x (d4-1) + d5:

      00-20%: 1-4
      21-40%: 5-8
      41-60%: 9-12
      61-80%: 13-16
      81-100%: 17-20

Summary Of Results

    The only real problem with this method is that if you’ve got a d4 and a d10 to use as a d5, there’s a simpler approach – using the d4 (or a d6, more commonly) to get high or low ranges and then a d10 within the range.

When To Use This Substitute

    There’s one theoretical occasion when this might be the way to go – when all you have is a d4 and a six-sided die. By ignoring sixes on the d6 (re-rolling them) you turn it into a d5 – so the combination is a workaround for when you have neither a d10 nor a d20.

3d6: The Standard

Okay, now we’re getting into more interesting results. 3d6 makes a poor substitute for a d20 in strictly mathematical terms – but the resulting probability curve – especially when shifted with a modifier – might be exactly what you need under some circumstances.

Min, Max, Ave

    Minimum: 3
    Maximum: 18
    Average: 10.5 (this is the characteristic that is most like a d20).

The Thresholds
    The 1% Threshold

      Only 3 and 18 are below the 1% threshold, though you could argue that at 1.39%, 4 and 17 are close enough that they should be included. I’m not going to do so. So only one number from each end of the results fails to meet this threshold.

    The 3% Threshold

      This falls between 5 and 6 and between 15 and 16 on the other side. Which means that 4 and 5 don’t meet the target on the low side and 16 and 17 on the high – making this 2 results wide on each side. Immediately, there’s a pattern, which is why I stuck to my guns in the previous section – I could see this coming!

    The 5% Threshold

      Sadly for that pattern, the peak of 3d6 rises so quickly that this target is met between 6 and 7, and between 14 and 15. So the band of results which don’t rise to this standard of probability contains just one number from each side of the average – 6 & 15.

    The 10% Threshold

      8 and 13 almost get there with 9.72% – but rules are rules. And so the pattern continues – One, then Two, then One, and now 2 results again (on each side of the average). 7, 8, 13, and 14 fall below the 10% threshold.

      Nothing gets as high as the 15% mark, so that leaves the plateau of results as 9-12.

    Peak Probability

      Both 10 and 11 share this property – 12.50%.

    1/3 & 2/3 Peak Probability Results Thresholds

      1/3 of 12.5 = 4.1667%, and double that is 8.333%.

      The probability band defined by 0-4.1667% almost includes a six and a 15 at 4.63%, but they just sneak into the middle bracket. So 3-5 and 16-18 are in the lowest tier. That’s three results on each side.

      Things are a little more clear-cut with the upper limit of that middle tier – 7 and 14 are definitely in the middle group, 8 and 13 are clearly not. So 6-7 and 14-15 hold the middle ground. That’s 2 results on each side.

      Which defines the high ground as 8-13. And that’s three results on each side again. Those six results occur, according to the statistics, no less than 67.6% of the time – That’s 2 times in 3 and one extra in a blue moon territory!

Slices Of Range: Percentages Of Probability
    Range Of Results

      3d6 consists of 16 different results. That’s only 80% of the d20’s range.

    Ave – Min, Max – Ave

      10.5 – 3 = 7.5.
      18-10.5 = 7.5.

    1/3 (Ave-Min) + Min

      7.5 / 3 gives us probability bands that are 2.5 results wide. So the first of these is 3 to 5.5, i.e. 3-to-5. These results have a probability of 4.63%, collectively.

    2/3 (Ave-Min) + Min

      Adding another 2.5 to 5.5 gives 8. So this band of results runs from 6 to 8, and has 25.93-4.63 = 21.30% of the probability.

      That’s a ratio of 4.6, almost exactly. Compare that to the 4d6 ratio of 4.889, and you can see that the 4d6 curve climbs out of its tails a little more steeply than 3d6 does (a higher ratio = a steeper rise from the extremes).

      And when you think about it, that makes sense – the more dice you roll, the smaller the difference any one individual high or low result makes. If you were rolling 10d6, and 8 of them came up 3.5 on average (an even mix of 3s and 4s), and those last two were box cars, you have a total result of 28+12=40. That will happen 4.8% of the time – but compared to the average result of 35, it’s only 14.28% higher.

      Most of the gain from those box cars is taken up just getting to the average – 7 of the 12 points they contribute – leaving just 5 to carry the total beyond it.

    The Lower Core

      So the lower core is 9 and 10. Between them, 50 – 25.93 = 24.07% of the probable results will be one of these outcomes, on average.

      24.07 is 13% higher than 21.30%. The ratio is 1.1300. That’s a massive difference to the 4d6 ratio of 2.583 – close to half of it, in fact. So the lower core is smaller relative to the results band below it in the case of 3d6 than is the case with 4d6.

    The Upper Core: 1/3 (Max-Ave) + Ave

      The upper core is the same size as the lower, so the same ratios will apply. It consists of results of 11 and 12.

    2/3 (Max-Ave) + Ave

      The upper-midband is the same size as the lower midband, 21.30%. It consists of results 13, 14, and 15.

    The Lofty Outcomes

      So the upper end of town is 16, 17, and 18 – and these have a total probability of 4.63%.

    3d6:

      03-05: 4.63%
      06-08: 21.3%
      09-10: 24.07%
      11-12: 24.07%
      13-15: 21.3%
      16-18: 4.63%

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      20% cumulative probability comes between 7 and 8, almost midway between them in fact. So 7 is in but 8 is out, giving a results range of 3-7.

    Second Lowest 20%

      The 40% threshold is just above 9. This band therefore contains results of 8 and 9.

      When you consider the anatomy of a dumbbell curve – flat bottom, inflection point, flat crown of diagnostic width, and a slope connecting the crown to the flat bottom through the inflection point, the fewer the results of this band relative to those above and below, the steeper that slope. In the case of 3d6, it took 5 results to get to 20% and just two to get to 40, indicating a severely sloping upswing in probabilities.

      So much so that I would predict, from these results alone, that the inflection point – where the slope of probability changes from more horizontal to more vertical – is at or near 7 or 8, and probably the first.

      What do I mean by ‘Definitive?’

      Every probability curve has certain attributes that, when quantified, define the shape of that curve. 3d6 is NOT QUITE the same shape as, say, 3d8 and VERY different from, for example, 6d3. The width of the crown, the width of the tails at top and bottom, the location of the inflection point, and the steepness of the slope from inflection point to crown, are such ‘definitive’ attributes. It’s an esoteric area of statistics that only people deeply concerned with quantities of dice being rolled simultaneously and summed – which we do all the time in RPGs – would ever become aware of, let alone analyze.

    The Middle 20%

      The 60% mark is just short of 11, so we’re still within the crown when we get to this point – or, to put it another way, the crown is wider than this 20% net probability. In fact, this consists of a single result, 10.

    Second-Highest 20%

      The 80% mark lands about 2/3 of the way between 12 and 13, so this band contains 11 and 12.

    Highest 20%

      And that leaves 13-18 to encompass the rest.

      It’s worth pointing out that despite having a symmetric probability curve and definitively symmetric breakup of probabilities, rounding error has produced asymmetric results – six results in this upper band vs 5 in the lowest 20%.

      And, if we were to start at the top and count down by 20%s, we would observe the same pattern – 5 entries in the first band and 6 in the last.

    Peak Probability

      Two results share the peak probability of 12.50%, balanced on either side of the average result – 10 and 11.

    Matching Result: 1/3 Peak Probability

      1/3 of 12.5 = 4.1667%. That’s just below 6. So the matching result rounds in this case to the 6, and this band is therefore 3-6, or 4 results wide.

      On the upper end, we also get 15-18 within this band.

    Matching Result: 2/3 Peak Probability

      2/3 of 12.5 = 8.3333%, and that’s so close to midway between 7 and 8 that I can’t – at a glance – tell which way to round. So I’ll have to calculate it.

      7: 8.3333% – 6.94% = 1.3933%.
      8: 9.72% – 8.3333% = 1.3867%.

      So it’s not quite enough to round up to include the 8. That steepness of slope at the inflection point strikes again, with a single result occupying this entire band – a 7, and its upper-side counterpart, 14. Is it just a coincidence that both of these are even multiples of the average result of a d6 roll? Despite being able to convincingly fake it, I’m not really an expert in everything – and that’s where my expertise reaches its limits.

      So, I did what many people these days do in order to fake being better-educated than they are – I asked an AI. It took a bit of conversation back and forth, but eventually we drilled down to the central point: both 7 and 14 are exactly 3.5 away from the 3d6 average. Because of the symmetric nature of the 3d6 probability curve, if you get a 7, you also get a matching 14. It’s that interval to the average that produces the relationship between the average of one die and the results of 7 and 14; the fact that a particular method of selecting intervals led those to be the points chosen is where the coincidence lies, and why there’s no deeper significance to this observation.

      That, by definition, gives the crown of the curve, on top of that 2/3 probability line, as 8-13 – a whole six results wide!

    3d6:

      00-20%: 3-7
      21-40%: 8-9
      41-60%: 10
      61-80%: 11-12
      81-100%: 13-18

      Notice that it’s in excluding 11 from the central 20% that the rounding error manifests. If 11 were part of the central 20%, then 12-13 would be the second-highest band and 14-18, the highest – and perfect symmetry results.

      03-06: < 1/3 peak probability
      07: 1/3 - 2/3 peak probability
      08-13: 2/3 peak probability to peak to 2/3 peak probability
      14: 1/3 - 2/3 peak probability
      15-18: < 1/3 peak probability

Summary Of Results

    If it were a flat curve, this would be like a d20 with the extremes lopped off at either end. But it’s not a flat curve. There’s a much higher chance of getting an 8-13 result than in a d20 roll, and even 7 and 14 are more likely than on a d20. All that extra probability has to come from somewhere, and in this case, it’s not only come from the lowest and highest range of rolls, but also from those results that are possible on a d20 but not on 3d6, like 1, 2, 19, and 20.

When To Use This Substitute

    I need to highlight the impact of modifiers to a 3d6 roll. 3d6+1 and 3d6+2 are significantly different beasts to a straight 3d6 – in comparison to a d20. And the same is true of 3d6-1 and 3d6-2.

    Those modifiers apply to every result on the 3d6, shifting it this way or that relative to the average d20 result.

      3d6-2: 1 and 2 are now possible results, but the best you can get is now a 16 – and the average is just 8.5.

      3d6-1: Results now range from 2 to 17 and the average is down to 9.5.

      Straight 3d6: The average result is exactly the same as a d20 but massively more likely to occur.

      3d6+1: Only adds 19 to the mix (a 0.46% chance – one chance in 216) but takes away 3 as a result, and increases the average to 11.5. So it’s a less extreme modification, one that’s hardly worth making.

      3d6+2: You can now get 19 or 20, though they are unlikely (1.8% chance), but 3 and 4 are off the table, and 1 and 2 were never on it to begin with. The average result is now 12.5 – significantly better than the average d20 result.

    So use a straight 3d6 when you want to take extreme results off the table and when you want to increase the reliability of getting close to average. This is useful when characters are doing something relatively trivial and mundane, with somewhere close to a 50-50 chance of success.

    This can be nuanced with 3d6-1 and 3d6+1 respectively, representing slightly adverse or advantageous situations, or slightly lower or higher chances of success.

    When both those factors apply, the effects can be said to compound, or to compensate (depending on the combination). Compensation leads back to the basic 3d6; compounding leads to the more profound 3d6-2 or 3d6+2.

    And that’s also true in combat situations (attack rolls and saves), though some of the 4d6 constructions discussed earlier might be even more applicable.

    Having both options in your pocket gives further room to nuance and finesse the simulation of reality.
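
    As an aside, the shifted figures quoted for the modified rolls above are easy to double-check with a quick sketch:

from itertools import product
from collections import Counter

three_d6 = Counter(a + b + c for a, b, c in product(range(1, 7), repeat=3))

for mod in (-2, -1, 0, 1, 2):
    nineteen_up = 100 * sum(ways for total, ways in three_d6.items() if total + mod >= 19) / 216
    print(f"3d6{mod:+d}: range {3 + mod}-{18 + mod}, average {10.5 + mod}, "
          f"chance of 19+ = {nineteen_up:.2f}%")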

Exotic Choice #1: 2d10

On general principles, I would expect 2d10 to offer a still more nuanced option relative to 3d6, especially in 2d10-1 and 2d10+1 configurations. I will also be interested to see whether 2d10-2 and 2d10+2 yield more or less extreme probabilities than 3d6-2 and +2, respectively. But let’s see what the actual analysis yields!

I’m not sure why, but a lot of people don’t seem to think about 2d10 as a substitute for a d20. It seems a perfectly valid (if perhaps flawed) alternative, to me!

I remember the first time that I realized that two dice – 2d-anything – gave straight lines rising to a point like a triangle; my early education in gaming had told me to expect a dumbbell curve.

A little later, I realized that it WAS in fact a dumbbell curve – just one in which the sides of the curve had been flattened into a perfectly straight line. And the way you can tell this is because 4d-anything won’t give a scaled representation of this straight-lines curve – instead, the dumbbell will manifest as a distortion of those straight lines, a predictable and measurable one – see, for example, the 4d6 analysis earlier; it doesn’t look like this!

Min, Max, Ave

    Minimum 2, Maximum 20, Average 11.

    So the minimum is a fraction higher, and that lifts the average a fraction higher, too.

    In theory, if you did 2d10 – 0.5 and then used a d2 to decide whether or not to round up or down, you might think that you would get a perfect d20 simulation – but it still has the basic 2d10 shape when you plot it out. No matter what you do, it doesn’t quite work – and always has the characteristic 2d10 triangle shown above.
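
    A quick sketch bears that out – the d2 simply picks, with equal probability, between keeping the 2d10 total and knocking one point off it, and the result is still a tent rather than a flat 5% line:

from itertools import product
from collections import Counter

two_d10 = Counter(a + b for a, b in product(range(1, 11), repeat=2))

hybrid = Counter()
for total, ways in two_d10.items():
    hybrid[total] += ways        # the d2 says "round 2d10 - 0.5 up"
    hybrid[total - 1] += ways    # the d2 says "round it down"

outcomes = sum(hybrid.values())  # 200 equally likely outcomes
for result in sorted(hybrid):
    print(f"{result:2d}: {100 * hybrid[result] / outcomes:4.1f}%")   # still peaks at 9.5%, not 5%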

The Thresholds
    The 1% Threshold

      When you look at the probabilities that go with those positions on the curve, you find that each +1 result increases the probability of that result by a perfect 1% from an initial 1% until you get to the peak probability – and then back down the other side of the triangle. The probability, all the way up to 11, is always (R-1)%.

      So 2 and 20 are exactly on the 1% threshold and everything else is above it.

    The 3% Threshold

      And 4 and 18 are exactly on the 3% threshold, so this contains 3-4 and 18-19.

    The 5% Threshold

      …and the 5% threshold is reached at 6 and 16, and therefore contains 5-6 and 16-17.

    The 10% Threshold

      The 10% threshold is met by only the peak probability, which happens to be exactly 10%.

    1/3 & 2/3 Peak Probability Results Thresholds

      1/3 of 10% is 3.333, and 2/3 is 6.667.

      2-4 are therefore below the 1/3 threshold and 5-8 below the 2/3 mark. On the other side of the triangle, it’s 18-20 and 14-17, respectively.

      That leaves 9-13 as the upper band with respect to peak probability.

Slices Of Range: Percentages Of Probability
    Range Of Results

      2-20, or 19 results.

      But that lets me bring up a pattern that someone once claimed to see. Specifically, the claim was that if you had an odd number of results possible, you would get one result at the average that had a higher probability of occurring – the peak of the triangle in this case – and if it was even, then the peak would be split evenly across two results on either side of a non-integer average, like 10.5.

      Checked carefully against the rolls analyzed so far, that pattern actually holds up – at least for sums of identical dice. Consider:

      ▪ 2d10: 19 possible results, odd, single peak (at 11).

      ▪ 3d6: 16 possible results, even, peak split across 10 and 11.

      ▪ 4d6: 21 possible results, odd, single peak (at 14).

      ▪ 1d20: 20 possible results, even – and, being completely flat, it certainly has no single peak; call it the degenerate case.

      It’s not proof – just an observed pattern.
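
      It’s easy enough to check that observation by brute force – peak_check() below is just an illustrative helper, and the d20 line shows the degenerate ‘everything ties’ case:

from itertools import product
from collections import Counter

def peak_check(count, sides):
    """How many possible totals there are, and how many of them share the top probability."""
    totals = Counter(sum(faces) for faces in product(range(1, sides + 1), repeat=count))
    top = max(totals.values())
    return len(totals), sum(1 for ways in totals.values() if ways == top)

for count, sides in [(2, 10), (3, 6), (4, 6), (1, 20)]:
    results, peaks = peak_check(count, sides)
    print(f"{count}d{sides}: {results} possible results, {peaks} sharing the peak")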

    Ave – Min, Max – Ave

      11-2 = 9; 20-11=9.

    1/3 (Ave-Min) + Min

      1/3 of 9 is 3, and 3+2 = 5. So the bottom 1/3 of lower results are 2-5, and they have a total cumulative probability of 10%.

    2/3 (Ave-Min) + Min

      2/3 of 9 is 6, and 6+2=8. The middle third of the lower results is thus 6-8, and they contain a cumulative probability of 28-10=18%.

      “Wait, what?” I hear someone shout. No, it’s not a perfect x2, which people seem to instinctively expect from the shape and the 10% in the previous result.

      1+2+3+4 = 10; that’s why the previous section had a probability total of 10%. 10+5+6+7 = 28, less the initial 10%, and you get 18. Deal with it.

    The Lower Core

      With a single peak, we again have to split that peak both ways as it ‘stands astride’ the boundary between lower and upper core, one leg in each, as it were.

      9-10 is 45-28= 17%, plus 1/2 of 10% = 22%.

      “Wait, what?” I hear, again. “How did the missing 2% end up over here?” By not being ‘missing’ in the first place, of course. Neither mathematics nor reality has any need to accommodate simplified human biases toward whole numbers that end in 5 or 0. But it still throws some people for a loop. I even used it in a magic act when I was about 6, making 2% ‘magically’ disappear and reappear, long before I’d ever heard of a d10 – I used a spinner in the act (home-made, of course).

    The Upper Core: 1/3 (Max-Ave) + Ave

      Therefore, the upper core is 22% and runs from 11 to 13.

    2/3 (Max-Ave) + Ave

      And the middle upper band is 18% and contains 14-16.

    The Lofty Outcomes

      Which leaves 17-20 for the top echelon results, and a total probability of 10%.

    2d10:

      02-05: 10%
      06-08: 18%
      09-11: 22%
      11-13: 22%
      14-16: 18%
      17-20: 10%

    It’s worthwhile comparing the ratios of highest to lowest for this as compared to 3d6 and 4d6.

      ▪ 2d10: 22 / 10 = 2.2
      ▪ 3d6: 24.07 / 4.63 = 5.2
      ▪ 4d6: 34.105% / 2.7% = 12.63

      If you think of a dumbbell probability as being a triangle which is pinned part-way up the sides and the upper part is then squashed inwards toward the center to create a sharper rise – that’s inflection point and upper slope to the crown – then this is an indicator of the relative steepness of that slope. With 2d10, it’s not squashed inwards at all; with 3d6, it’s squashed inwards a bit; and with 4d6, it’s squashed inwards a lot.

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      The cumulative probability gets to 20% just below 7, so 2-6 are in this band.

    Second Lowest 20%

      The 40% barrier is almost midway between 9 and 10. So the second lowest 20% contains 7-9.

    The Middle 20%

      Where would we be without rounding error? Probably a lot less confused about some of this, to be honest. We get to the 60% mark almost midway between 11 and 12, so 10-11 is contained herein.

    Second-Highest 20%

      The 80% threshold is just above 14, so 12-14 occupy this bracket.

    Highest 20%

      And the highest 20% of results are therefore going to be 15-20.

      That’s 6 results wide, compared to the 5-wide of the lowest 20%.

    Peak Probability

      As already mentioned, result 11 contains the peak probability of 10%.

    Matching Result: 1/3 Peak Probability

      1/3 of 10% is 3.3333 and that falls between 4 and 5 – but it’s closer to 4. Either way, 2-4 fall below it.

      On the other side of the peak, 18-20 are also in this low-probability domain.

    Matching Result: 2/3 Peak Probability

      2/3 of 10% is 6.6667 and that’s between 7 & 8, and 14 & 15. But it’s closer to 8 and 14 so they get included in this band.

      That means that the top band contains 9-13.

    So, in summary: 2d10:

      00-20%: 2-6
      21-40%: 7-9
      41-60%: 10-11
      61-80%: 12-14
      81-100%: 15-20

      02-04: < 1/3 peak probability
      05-08: 1/3 - 2/3 peak probability
      09-13: 2/3 peak probability to peak to 2/3 peak probability
      14-17: 1/3 - 2/3 peak probability
      18-20: < 1/3 peak probability

Summary Of Results

    When you get right down to it, 2d10 behaves like a dumbbell curve – it’s just one with unusually flat sides, an even probability increase and decrease up and down the results table.

    Some people think that makes it the perfect half-way house between a d20 and a full-on dumbbell curve. I’m not going to buy into that debate on either side.

    Ultimately, a native 2d10 roll does two things: it rules out a result of 1, and it makes middle-range results more likely.

    It can also be said that extreme results, while still rare (especially in comparison to a d20), are a lot more likely than with greater numbers of dice. Consider:

      ▪ d20: Minimum 1, probability 5%
      ▪ 2d10: Minimum 2, probability 1%
      ▪ 3d6: Minimum 3, probability 0.46%
      ▪ 4d6: Minimum 4, probability 0.08%

    So risk of failure is mitigated but not entirely removed.

When To Use This Substitute

    The most significant application of this alternative comes with taking fumbles off the table – if you use fumbles in your game. The price of doing so is making mediocre results more frequent.

    That’s the only justification for using this straight-up unmodified substitute, but it’s a powerful one – especially if you give a player the choice: play it safe, or take the risk? Fumbles, in this context, become the price you pay for a better chance of a critical success.

    2d10-1 puts fumbles back and takes away the critical success option – but still makes mediocre results more likely than either, and far more likely than a d20 roll. There may be times – especially attempting to perform a task unskilled – where that might be appropriate.

    It’s also a useful option to consider when the game system mandates that you need a certain attack bonus to even hit an enemy – because it turns “impossible” into “possible but unlikely”. That can not only be a life-or-death difference to the PCs (or to NPCs if the PCs are decked out with magic gear), it restores an element of player agency that “impossible” takes away. That’s always food for thought.

    2d10+1 increases the chances of 20+, which is important if you consider that to be the threshold of a critical success instead of a nat 20. It’s worth noting that the chances of getting 20+ with this construction are still less than with a d20, though. But it takes fumbles completely off the table, emphatically. Nevertheless, I don’t recommend this substitution.

    And with 2d10+2, that balance shifts slightly – 6% chance of a 20+ result instead of a d20’s 5%.

    Again, the price of these benefits is the increased chance of a mediocre roll, but that also shifts with +1 or +2. With +1, the average result becomes 12, and with +2, it’s 13. Both are significant improvements over the d20 average of 10.5. Psychologically, a 13 average feels a lot more profound than a 12 average, and that’s actually a legacy of having grown up using 3d6.

      ▪ 3d6: Average 10.5, chance of a 13 or better = 25.93%
      ▪ 2d10+1: Average 12, chance of 13 or better = 45%
      ▪ 2d10+2: Average 13, chance of 13 or better = 55%.

    A “mediocre” result on 2d10+2 is not a bad result at all. How “not bad” it is depends on the target number to reach – but even if it’s a 14 or a 15, you’ll have about as good a chance, or better –

      ▪ 14: 36% on 2d10+1, 45% on 2d10+2; 35% on d20.
      ▪ 15: 28% on 2d10+1, 36% on 2d10+2; 30% on d20.

    – with these options than a native d20. So I would also use these alternatives when a player indicates that “near enough is good enough”, as a way of baking that attitude into the outcome of the roll.
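
    Those percentages are easy to verify by brute force. Here’s a quick Python sketch (the helper function is just scratch code of mine, and the target numbers are simply the ones quoted above):

      from itertools import product

      def chance_at_least(target, dice, mod=0):
          # Fraction of all equally likely rolls of `dice` (plus mod) that reach `target`.
          rolls = list(product(*[range(1, d + 1) for d in dice]))
          return sum(1 for r in rolls if sum(r) + mod >= target) / len(rolls)

      for target in (13, 14, 15):
          print(target,
                f"d20 {chance_at_least(target, [20]):.0%}",
                f"2d10+1 {chance_at_least(target, [10, 10], 1):.0%}",
                f"2d10+2 {chance_at_least(target, [10, 10], 2):.0%}")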

Exotic Choice #2: d8+d12

I wasn’t originally going to do 2d10, but I always had this option on my list. It will be interesting to see what differences there are, if any, between this option and that one.

Because I expect that to be of special interest, I’m including the 2d10 chart on this probability chart, and…

…and WHOA, I was not expecting that! There’s a CLEAR difference between the two, with the top of the triangle lopped off – and has ALL that probability really gone into making the sides just that tiny bit steeper??

Min, Max, Ave

    Minimum 2
    Maximum 20
    Average 11

The Thresholds
    The 1% Threshold

      2 and 20 are only just above this marker at 1.04%. It’s so close that I’m going to include them anyway.

    The 3% Threshold

      This falls a little way under 4 and 18, both of which score 3.13% probability. But that’s not close enough in my book, so the 3% threshold is only occupied by 3 and 19.

    The 5% Threshold

      It’s a little closer to 6 than to 5, but not close enough at 5.21%. The match on the other side is 17. So this band is occupied by 4-5 and 17-18.

    The 10% Threshold

      No results get here – there are 5 results with an 8.33% probability.

    1/3 & 2/3 Peak Probability Results Thresholds

      1/3 x 8.33 = 2.7767.
      2/3 x 8.33 = 5.5533.

      The 1/3 mark is closer to the 4 than the 3, but not close enough. So 2-3 and 19-20 fall into the 1/3 peak probability zone.

      The 2/3 mark is above 6, but nowhere near making it to 7; the corresponding high side result is 16. So the middle third contains 4-6 and 16-18.

      That leaves the top of the charts as containing 7-15, a whole 9 results or 1 1/2 times the results span of the zone below. Despite having a more aggressive probability rise and fall, this is actually a LOT flatter than 2d10. Part of that is the ‘plateau’ – but that alone isn’t enough to fully account for it; it’s a genuine statistical phenomenon.

Slices Of Range: Percentages Of Probability
    Range Of Results

      2-20 – which is 19 results wide, the same as 2d10.

    Ave – Min, Max – Ave

      Ave – Min = 11 – 2 = 9.
      Max – Ave = 20 – 11 = 9.
      Symmetrical about the average.

    1/3 (Ave-Min) + Min

      9 / 3 = 3; 3 + 2 = 5. So a result of 5 marks the boundary between these layers. Just as I’ve had to do when the overall average ‘overlaps’ the lower and upper cores, that means that 5 has to be treated as half-in and half-out of this results band.

      2-4 gives 6.25%. 5 alone is worth 4.17%, so half of that is 2.085; adding that to the 6.25 makes this zone 8.335% tall.

    2/3 (Ave-Min) + Min

      If 1/3 is 3, then 2/3 must be 6, and 6 + 2 = 8. Once again, a result that is astride two different zones.

      6-7 are 21.88 – 10.42 = 11.46%. Half of 5 adds 2.085, and half of 8 = 7.29 / 2 = 3.645. Adding those makes this band of results 17.19% tall, which is a little more than twice the height of the previous zone.

    The Lower Core

      It’s not particularly surprising to me at this point that the overall average of 11 has to be split between lower and upper core. So the lower core is half of 8, plus 9 & 10, plus half of 11:

      3.645 + (45.83 – 29.17) + 8.33 / 2
      = 3.645 + 16.66 + 4.165 = 24.47%.

      That’s about 1.42 times the lower middle total, and almost 3 times the bottom-most bracket – which, as a ratio, is rather low, and signifies that low results are relatively likely compared with other constructions.

    The Upper Core: 1/3 (Max-Ave) + Ave

      The set of results above the overall average mirror those below. So the upper core is also 24.47%, effectively creating a single span through the center of the results that comprises almost half of ALL the results.

      The upper core is half of 11, plus 12 & 13, plus half of 14.

    2/3 (Max-Ave) + Ave

      The upper middle band, like the lower middle band, is 17.19% in height, and consists of half of 14, 15 & 16, and half of 17.

    The Lofty Outcomes

      Which leaves half of 17 plus 18-20 for the top end results, and a probability of 8.335%.

    d8+d12:

      Lowest Results: 02-05: 8.335%
      Lower Middle: 05-08: 17.19%
      Lower Core: 08-11: 24.47%
      Upper Core: 11-14: 24.47%
      Upper Middle: 14-17: 17.19%
      Highest Results: 17-20: 8.335%
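
    For anyone who wants to reproduce those band figures, here’s a minimal Python sketch (purely illustrative, nothing the analysis depends on) that enumerates all 96 d8+d12 outcomes and applies the same half-a-result treatment to the boundary values. Because it works from exact fractions, the last decimal place can differ slightly from the hand-rounded figures above:

      from itertools import product
      from collections import Counter

      # Exact d8+d12 distribution: all 96 equally likely outcomes.
      counts = Counter(a + b for a, b in product(range(1, 9), range(1, 13)))
      total = sum(counts.values())
      prob = {s: counts[s] / total for s in sorted(counts)}

      def band(low, high, half_low=False, half_high=False):
          # Total probability of results low..high; a boundary result can count at half weight.
          p = sum(prob[s] for s in range(low, high + 1))
          if half_low:
              p -= prob[low] / 2
          if half_high:
              p -= prob[high] / 2
          return p

      print(f"{band(2, 5, half_high=True):.3%}")                 # lowest results (5 at half weight)
      print(f"{band(5, 8, half_low=True, half_high=True):.3%}")  # lower middle
      print(f"{band(8, 11, half_low=True, half_high=True):.3%}") # lower core; the upper half mirrors these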

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      7 is almost in the lowest 20%, but the cumulative probability is 21.88% – so just a little too high.

      This band is results 2-6. Again, the flatness of the ‘curve’ is apparent from the 5-results span.

    Second Lowest 20%

      The 40% probability mark falls between 9 and 10 – about 1/3 of the way up. So 9 is in but 10 is not, making this span 7-9.

    The Middle 20%

      Rounding error strikes again!! The 60% total probability mark is between 11 and 12, closer to the higher number but not quite getting there.
      So the middle “20%” is 10 & 11.

    Second-Highest 20%

      The 80% mark is a little above 14. So this band spans results from 12 to 14.

    Highest 20%

      Which means that ‘rolling high’ with this construction is anything from 15 to 20.

    Peak Probability

      8.33%, as already noted.

    Matching Result: 1/3 Peak Probability

      1/3 of 8.33 = 2.7767. That is higher than 3 but lower than 4, so 3 is in and 4 is not. On the high side, 19 and 20 are also in.

    Matching Result: 2/3 Peak Probability

      2/3 of 8.33 = 5.5533, which is between 6 and 7. So 4-6 are in the middle 1/3 of probabilities, matched by 16-18, and 7-15 are the most probable results.

    d8+d12:

      00-20%: 2-6
      21-40%: 7-9
      41-60%: 10-11
      61-80%: 12-14
      81-100%: 15-20

      02-03: < 1/3 peak probability
      04-06: 1/3 - 2/3 peak probability
      07-15: 2/3 peak probability to peak to 2/3 peak probability
      16-18: 1/3 - 2/3 peak probability
      19-20: < 1/3 peak probability

Summary Of Results

    Extreme results are possible, and even a little more likely than with other constructions – but the difference is so small that few will care. 2 & 3 have a total probability of about 3.1% – any two results like that on a d20 have a 10% chance.

    The dominant feature of this construction is that plateau of results, 5 – almost 7 – results wide.

    Things get a little more interesting when you compare to a 3d6 roll. Not only are low results a LOT more likely, but the range also extends further in both directions – by 1 at the bottom and 2 at the top. There is also far less focus on the average result than on 3d6 – there’s a greater spread across the top of the probability chart. So there is greater uncertainty over the outcome – and a greater potential for near-miss results if the target is close to the average result of 11.

When To Use This Substitute

    Over the central plateau, this is a lot like a d20 in that the curve is flat. In fact, the wider spread of mediocre results is more likely to eventuate than on a d20. That makes this an appropriate substitute for a d20 in cases where someone is being taught a skill – there’s a supervisor who will step in (if he can) before things really go pear-shaped, but beyond that, the student lives and dies on the merits of his own effort. This construction simulates this situation very well, with both extremes less likely for different reasons.

    Things get a little bit trickier when discussing replacing 3d6 with this roll. Extreme outcomes are both more likely and potentially better, or worse. And yet, the vast majority of times, you will get a result somewhere in the middle. The most appropriate use for this substitution is for the simulation of some poisons and diseases, where some damage is almost certain to take place, and extreme results are possible but unlikely – but potentially better or worse than a 3d6 roll. It might be necessary to scale the results before it can be used this way – dividing by 4 would give a results span of 0 to 5, with 2-3 the most likely to result; dividing by 3 would give a span of 0 to 6, with 2-4 most likely. But I’m not sure that it’s worth the extra trouble. There are other ways that are even more dramatic.

    “Like what?”, you ask. Well, imagine a disease or poison that is represented by 4d6 or 5d6. Every time a 1 comes up, the sufferer takes 1, 2, 3, or even 4 points of damage and 1 point of stat loss – for each 1 showing. On a 2, you take half this and no stat loss. When your rolls have accumulated a certain number of 6s, the poison / disease has run its course; until then, you roll at regular intervals. If you roll lucky, you get only a passing brush with the disease; if you don’t, it can ravage even the healthiest of bodies. Rolling multiple dice using such a system is inherently dramatic and scary for the players, helping them get into the correct roleplaying mindset.
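
    To get a feel for how swingy a mechanic like that is, here’s a rough simulation sketch. The function and parameter names are invented, and the pool size, the damage per 1, and the number of 6s needed to end the effect are all assumed values for illustration; the description above deliberately leaves them open:

      import random

      def run_poison(pool=4, dmg_per_one=2, sixes_to_end=3, interval_limit=50):
          # Roll `pool` d6 each interval: 1s deal damage and stat loss, 2s deal
          # half damage and no stat loss, and accumulated 6s end the effect.
          damage = stat_loss = sixes = intervals = 0
          while sixes < sixes_to_end and intervals < interval_limit:
              roll = [random.randint(1, 6) for _ in range(pool)]
              ones, twos = roll.count(1), roll.count(2)
              damage += ones * dmg_per_one + (twos * dmg_per_one) // 2
              stat_loss += ones
              sixes += roll.count(6)
              intervals += 1
          return intervals, damage, stat_loss

      print(run_poison())   # every run is different - that's the point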

    That said, the d8+d12 option is, at the very least, exotic, and that can be useful in and of itself in such situations. So save this as a 3d6 substitute for when you want to emphasize and punctuate that exotic quality.

Exotic Choice #3: 2d4+d12

The d8+d12 option soon suggested this three-dice alternative, but I wasn’t 100% sold until I saw how interesting that two-dice option was. I have to admit to not being quite sure what the resulting probability curve would look like. The results are perhaps even more interesting than I expected them to be!

The first thing you notice is the flat top with exactly the same probability as d8+d12. Then you notice the curvature of the walls. Third, you notice that relative to d8+d12, results are biased high.

Let’s get into the details and see what makes this probability tick.

Min, Max, Ave

    The minimum is 3, the maximum is 20, and the average is 11.5. The plateau is 6 spaces wide, not 5.

The Thresholds
    The 1% Threshold

      3 and 20 are below this threshold – for practical purposes, the range is 4-19.

    The 3% Threshold

      5 and 18 are just above this threshold, so even 4 and 19 are improbable results.

    The 5% Threshold

      6 and 17 are just above this threshold, so even 5 and 18 are less likely than on a d20 roll.

    The 10% Threshold

      No results exceed this threshold, so everything from 6-17 is in this band – that’s 12 results!

    Peak Probability

      This is 8.33%, the same as d8+d12.

    1/3 & 2/3 Peak Probability Results Thresholds

      1/3 x 8.33 = 2.7767, which falls between the probabilities of 4-5 and 18-19. So the most improbable results are 3-4 and 19-20.

      2/3 x 8.33 = 5.5533, which lands between 6 and 7 on the low side and 16 & 17 on the high. So the middle probability range holds 5-6 and 17-18.

      That means that the top tier of probability holds all results from 7 to 16, a range of 10 results. Since there’s a total results span of 18, that’s more than half of them!

Slices Of Range: Percentages Of Probability
    Range Of Results

      18 results range from 3 to 20.

    Ave – Min, Max – Ave

      11.5 – 3 = 8.5.
      20 – 11.5 = 8.5.

      So the roll is symmetrical.

    1/3 (Ave-Min) + Min

      1/3 x 8.5 is 2.8333. So 1/3 of the way through the below-average range of results is 2.8333 + 3, or 5.8333.

      The bottom range of results is 3-5, which have a total probability of 5.21%.

    2/3 (Ave-Min) + Min

      2/3 x 8.5 = 5.6667. So 2/3 of the way through the below-average range of results is 5.6667+3 = 8.6667. So the lower-middle sixth contains results 6-8 at a total probability of 25 – 5.21 = 19.79%. Not quite 4x as likely as the lower band.

    The Lower Core

      That means that the lower core is 9-11, with a total probability of 50 – 25 = 25%. This is only a little over a 25% increase on the previous band, an indicator of the extreme flatness of this probability distribution.

    The Upper Core: 1/3 (Max-Ave) + Ave

      2.8333 + 11.5 = 14.333, so the upper core ranges from 12 to 14, and represents 75 – 50 or another 25%. The core, in total, is going to come up on half of all rolls.

    2/3 (Max-Ave) + Ave

      5.6667 + 11.5 = 17.1667, so the middle upper range contains 15-17, with 94.79 – 75 = 19.79% of all rolls falling into this area.

    The Lofty Outcomes

      And, the top of the range is therefore 18-20, with a total of 5.21% of results – most of which will be 18s.

    2d4+d12:

      03-05 = 5.21%
      06-08 = 19.79%
      09-11 = 25%
      12-14 = 25%
      15-17 = 19.79%
      18-20 = 5.21%
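
    The 20% “slices of probability” in the next section drop straight out of the cumulative distribution; here’s a throwaway enumeration sketch, again purely illustrative:

      from itertools import product
      from collections import Counter

      # 2d4+d12: all 4 x 4 x 12 = 192 equally likely outcomes.
      counts = Counter(a + b + c for a, b, c in
                       product(range(1, 5), range(1, 5), range(1, 13)))
      total = sum(counts.values())

      cumulative = 0.0
      for s in sorted(counts):
          cumulative += counts[s] / total
          # Read the 20% / 40% / 60% / 80% boundaries straight off this column.
          print(s, f"{cumulative:.2%}")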

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      Breaking the 100% total probability of results as evenly as possible into bands of 20% shows 3-7 as being the bottom range. 20% of the time, you’ll roll a number in that range.

    Second Lowest 20%

      The 40% threshold is just below 10, so results 8 & 9 occupy this space.

    The Middle 20%

      60% is just over 12, so 10-12 are the middle bracket of possible results.

    Second-Highest 20%

      The 80% threshold falls just below 15, so the band of results that fall into the 61-80 range are 13-15.

    Highest 20%

      Which in turn defines ‘rolling high’ with this construction to be any result from 16-20.

    Peak Probability

      I’ve just realized that I’m already calculating this in a previous section – no wonder I kept getting deja vu. So this sub-section, and the two that normally follow it, are redundant. But since they seem more relevant to this area of analysis, it’s the earlier one that’s going to be excised hereafter.

2d4+d12:

    00-20%: 3-7
    21-40%: 8-9
    41-60%: 10-12
    61-80%: 13-15
    81-100%: 16-20

    03-04: < 1/3 peak probability
    05-06: 1/3 – 2/3 peak probability
    07-16: 2/3 peak probability to peak to 2/3 peak probability
    17-18: 1/3 – 2/3 peak probability
    19-20: < 1/3 peak probability

Summary Of Results

    This gives the flattest dumbbell shape that I’ve ever seen – though it may not hold that record for long.

    I couldn’t help but notice that this curve has inflection points at 6 and 17. There are three results before the first, and three after the second. Those are the parts of the probability chart where probability is rising faster with each successive result – the rate of change is itself increasing.

    Above these inflection points, probability increases are getting smaller, until they hit zero at the very broad plateau of results.

    That’s prompted me to add a bonus extra alternative, not on the list presented in part 1. Specifically, I wondered about the size of the plateau and the effect on the slope of 2d6+d8 as an alternative.

    Just thought I’d mention it!

    Extreme results are even less likely than with d8+d12 but the construction will generally roll higher than that particular d20 substitute.

    Once past the inflection points, result probability is very even, with just a little drop-off on both sides of the plateau.

    The final and most significant observation about this construction is that, relative to d8+d12, and d20, and 3d6, it tends to roll high. Not by much – the average is 0.5 higher than d8+d12 and 1 higher than the other two – but, over multiple rolls, that adds up. These rolls are biased toward success.

    But it only takes a -1 modifier for the shoe to be on the other foot. Now, they bias toward failure.

    Using +2 and -2 modifiers results in even more extreme examples.


    What’s more, because the likelihood of extreme results is so low, this substitution can be adapted to ignore those results.

    2d4+d12+2 may have a maximum of 22 – but 21 and 22 will come up so infrequently that they might as well not be there. At the low end, it’s 5 and 6 that are orphaned – the roll is essentially 7-20 with diminished chances of 7-8 or 19-20.
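
    For the record, a quick enumeration (the same scratch-code approach as before) puts numbers on just how ignorable those tails are – the orphaned results come out at roughly 0.5% and 1.6% each:

      from itertools import product
      from collections import Counter

      counts = Counter(a + b + c + 2 for a, b, c in
                       product(range(1, 5), range(1, 5), range(1, 13)))
      total = sum(counts.values())
      for result in (5, 6, 21, 22):
          print(result, f"{counts[result] / total:.2%}")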

    That gives GMs a lot of latitude to play with.

    The other thing that struck me – how could it not? – is how closely the curved portion of the 2d4+d12-2 plot matches the curve of the 3d6 roll. They are almost identical! Which gives this construction a couple of bonus points as a 3d6 substitute in my opinion.

    Finally, you can gild the lily; instead of 2d4, roll 3d4 and keep either the best two or the worst 2.

    Sadly, that graph is beyond what AnyDice can analyze, so I’ll have to do it with a spreadsheet. Here’s what I came up with:

    Right away, though the effect is subtle, you can see that the curves are no longer symmetrical. The plateau and results to either side of it are unchanged, but the side described by what you ‘keep’ then follows a more gentle curve for a bit, then parallels the converse at 1 result higher, then declines more steeply at the last to meet up with the final data point of its opposite number.

    Yes, I know that’s a lot to work through. Take your time and look at each of those descriptive statements, I’m not going anywhere.

    (later)

    The net effect is of +1 to some of the results, and a shift of the average – to 14.39 if you keep the best two, and to 12.61 if you keep the worst two. Both should be compared to the corresponding straight 2d4 roll – here, 2d4 + d12 + 2 – which has an average of 13.5.
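
    Since AnyDice output wasn’t used for this one, here’s an equivalent enumeration sketch in Python for anyone who wants to check the keep-two averages themselves. It walks all 768 outcomes exactly, so its averages can differ by a small fraction from the spreadsheet-derived figures quoted here and in the list that follows:

      from itertools import product
      from statistics import mean

      def keep_two_plus_d12(keep_best=True, mod=0):
          # All 4 x 4 x 4 x 12 = 768 outcomes of 3d4 + d12, keeping the best (or worst) two d4s.
          results = []
          for a, b, c, d in product(range(1, 5), range(1, 5), range(1, 5), range(1, 13)):
              trio = sorted([a, b, c])
              kept = trio[1:] if keep_best else trio[:2]
              results.append(sum(kept) + d + mod)
          return results

      print(mean(keep_two_plus_d12(keep_best=True)))    # keep the best two
      print(mean(keep_two_plus_d12(keep_best=False)))   # keep the worst two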

When To Use This Substitute

    This is a combination that’s very sensitive to nuance, as expressed by modifiers. At the same time, it’s relatively forgiving and flat, offering a variety of results about the average. A combination of predictability and randomness, in fact.

    I would consider using this combination when that sensitivity is likely to come into play – when decisions or conditions can shade outcomes one way or another.

    This would be especially appropriate when one character is actively trying to hamper or support another, instead of doing their own thing. Maneuvering into a position to flank while attacking some other target, or attempting to assist (or distract) from an important use of skill, for example.

    You’ve effectively got no fewer than 27 variations to reflect nuance. Here they are, ranked from most penalizing to most beneficial:

      1. 3d4 (keep the worst 2) + d12 – 4:
                ave 7.5 – 0.89 = 6.61, min -1, max 16, non-ignorable 1-14

      2. 2d4 + d12 -4:
                ave 7.5, min -1, max 16, non-ignorable 1-14

      3. 3d4 (keep the worst 2) + d12 – 3:
                ave 8.5 – 0.89 = 7.61, min 0, max 17, non-ignorable 2-15

      4. 3d4 (keep the best 2) + d12 -4:
                ave 7.5 + 0.89 = 8.39, min -1, max 16, non-ignorable 1-14

      5. 2d4 + d12 -3:
                ave 8.5, min 0, max 17, non-ignorable 2-15

      6. 3d4 (keep the worst 2) + d12 – 2:
                ave 9.5 – 0.89 = 8.61, min 1, max 18, non-ignorable 3-16

      7. 3d4 (keep the best 2) + d12 -3:
                ave 8.5 + 0.89 = 9.39, min 0, max 17, non-ignorable 2-15

      8. 2d4 + d12 -2:
                ave 9.5, min 1, max 18, non-ignorable 3-16

      9. 3d4 (keep the worst 2) + d12 – 1:
                ave 10.5 – 0.89 = 9.61, min 2, max 19, non-ignorable 4-17

      10. 3d4 (keep the best 2) + d12 -2:
                ave 9.5 + 0.89 = 10.39, min 1, max 18, non-ignorable 3-16

      11. 2d4 + d12 -1:
                ave 10.5, min 2, max 19, non-ignorable 4-17

      12. 3d4 (keep the worst 2) + d12:
                ave 11.5 – 0.89 = 10.61, min 3, max 20, non-ignorable 5-18

      13. 3d4 (keep the best 2) + d12 -1:
                ave 10.5 + 0.89 = 11.39, min 2, max 19, non-ignorable 4-17

      14. 2d4 + d12:
                ave 11.5, min 3, max 20, non-ignorable 5-18

      15. 3d4 (keep the worst 2) + d12 + 1:
                ave 12.5 – 0.89 = 11.61, min 4, max 21, non-ignorable 6-19

      16. 3d4 (keep the best 2) + d12:
                ave 11.5 + 0.89 = 12.39, min 3, max 20, non-ignorable 5-18

      17. 2d4 + d12+1:
                ave 12.5, min 4, max 21, non-ignorable 6-19

      18. 3d4 (keep the worst 2) + d12 + 2:
                ave 13.5 – 0.89 = 12.61, min 5, max 22, non-ignorable 7-20

      19. 3d4 (keep the best 2) + d12 +1:
                ave 12.5 + 0.89 = 13.39, min 4, max 21, non-ignorable 6-19

      20. 2d4 + d12 +2:
                ave 13.5, min 5, max 22, non-ignorable 7-20

      21. 3d4 (keep the worst 2) + d12 + 3:
                ave 14.5 – 0.89 = 13.61, min 6, max 23, non-ignorable 8-21

      22. 3d4 (keep the best 2) + d12 +2:
                ave 13.5 + 0.89 = 14.39, min 5, max 22, non-ignorable 7-20

      23. 2d4 + d12 +3:
                ave 14.5, min 6, max 23, non-ignorable 8-21

      24. 3d4 (keep the worst 2) + d12 + 4:
                ave 15.5 – 0.89 = 14.61, min 7, max 24, non-ignorable 9-22

      25. 3d4 (keep the best 2) + d12 +3:
                ave 14.5 + 0.89 = 15.39, min 6, max 23, non-ignorable 8-21

      26. 2d4 + d12 +4:
                ave 15.5, min 7, max 24, non-ignorable 9-22

      27. 3d4 (keep the best 2) + d12 +4:
                ave 15.5 + 0.89 = 16.39, min 7, max 24, non-ignorable 9-22

    I’ve numbered them so that they won’t take up as much room on this chart:

Yellow is ignorable, green is possible or even likely, and the average is shown as a line graph, to scale.

    My approach would be to pick the base roll and modifier according to the default average result I thought appropriate, locate it on the list, and proceed up or down the list from there as seemed appropriate.

And that’s where this part has to come to an end. Next week, part 2 (if all goes according to plan)!


Campaign Creation Through Iteration


Iteration is one of the most useful campaign planning tools I can think of. This article demonstrates the technique and why it should be your favorite, too.

To create this image, I blended a Recursion Icon by mcmurryjulie with a mandelbrot image by AlexN20, both from Pixabay.


It’s been a while since I wrote about iteration and how to use it – in fact, the last time it was the focus of attention was (I think) Adding Stealth Dynamics To Sandboxes back in 2024 (but that was about adventures).

There was 2019’s Into Each Plot, A Little Chaos Must Fall which is about randomness and translating layers of plotlines into plot sequences within adventures – again, not quite the same thing, though it touches on the subject.

There was a similar theme to the 2017 article, Tying Plot Threads Together: Concepts to Executable Plot.

And the tool was described as a problem-solver in Fire Fighting, Systems Analysis, and RPG Problem Solving Part 2 of 3: Prioritization – which is closer to the mark, but still not squarely hitting the target.

No, to really find something about campaign development and iteration as a tool for that development, you need to go all the way back to Top-Down Design, Domino Theory, and Iteration: The Magic Bullets of Creation, written in January 2014. For one of my favorite tools, that’s almost scandalous.

Basics

In the case of this article, I want readers to focus less on the actual outcome (the generated campaign concept) and more on the process that I’m demonstrating.

Iteration is the process of performing the same simple steps over and over to accumulate a complex outcome.

The process in this case is:

    1. Ask a question or add an idea seed.
    2. Identify the ramifications for the campaign world.
    3. Identify the intersection point with the PCs.
    4. If a new idea seed was just added, ask “why and where” and proceed back to step 2.
    5. If not, return to step 1 unless satisfied with your overall world building.

This loop repeats as many times as needed, until the test condition in step 5 is satisfied.

It’s possible to refine it further by inserting a step 3a:

    3a. Have all the consequences and ramifications been explored? If not, go back to step 2.

The power of the system comes from the systematic integration of new ideas into a broader tapestry. You can start with the same initial premise multiple times and end up with a completely different campaign.

Of course, you don’t want to do that per se; instead, you want to leverage that to make your campaign different from any others that have the same central premise as their foundation.

Basic assumptions

I’m going to assume that there will be a session 0 for character creation, and therefore that I don’t yet know who’s going to be in the party and what their character classes are.

1. Central Premise

This is your initial starting point. This may end up being a prominent feature of the campaign or it may get buried into the background, lurking as a campaign-level plot twist until the PCs are powerful enough to deal with the situation.

A note about nomenclature

I’m going to inset actual campaign creation content. Each step will be in a sub-section by itself and the title of those subsections will be preceded by the step number of the process. That’s what the “1” is doing there in the section title preceding this subsection.

Key building blocks will be in Bold Italics. This makes them leap off the page and reminds you that they need expansion.

A note about genres

It must be pointed out that while the specific example is for some form of D&D / Pathfinder, the same techniques work in any genre / game system.

A note about established campaign backgrounds

This can be both good and bad. Good, in that there are all kinds of resources and ideas out there already for you to build on; bad in that those can constrain you, if you let them, and can be a little daunting, again, if you let them be that way. Good, in that players will have some idea of the kinds of adventures – the “style” of the campaign – to expect, bad in that the GM then has to deliver on that implied promise.

    1. Central Premise: The Banning Of Magic

    King Theovold of the kingdom of Astrangier has just issued a proclamation banning the use of Sorcery and Spellcasting throughout the kingdom, with the penalty being immediate death without trial.

A strong opening, with lots of room to grow, but it’s been done before. I’ll need to work on making this version distinctly different from the others that have appeared, and especially from my Fumanor campaign where this was a historic event long before the PCs were even born.

    2. Ramifications

    A grace period of 14 days is allowed for mages and other professional practitioners to relocate from the Kingdom. During this grace period, the use of magic will still be interdicted by the Scarlet Legion, a force of 1,000 elite warriors sworn to the King’s direct service. They function as his central military defense and strong right arm. It is rumored that they are actually a double-legion, with a secret sub-organization known as the Black Shadows who are spies and secret police.

I’m getting a strong whiff of a very repressive society here. In old-school alignment terms, it sounds very Lawful Evil or – at best – Lawful Neutral.

What are the consequences likely to be of such a mandate?

Anyone who practices Magic or Sorcery will have to sell everything they can’t take with them and flee. The markets will be awash with furniture and possessions, and there will be a flood of real estate on the market. Over-saturation means that people will be able to buy up their lifestyle on the cheap – someone‘s going to make a lot of money in the long term. The losers will be those practitioners, who will have to settle for pennies on the pound, getting no more than about 5% of what their belongings are actually worth.

Higher level practitioners might have the resources to take almost everything, but lower- and mid-level practitioners won’t have that option. And I note that nothing has been said about where the banned can find refuge – perhaps this proclamation has come out of the blue and no-one’s had a chance to react yet.

So markets will be in turmoil, and the economy is going on a roller-coaster ride. The makers of furniture are going to find their prices undercut massively by the flood of used goods onto the market.

In any modern society, 14 days is NOT enough time to actually sell things like real estate. But this is a simpler time, so it’s just plausible that a sale can be completed that quickly – leaving the former owners rushing off at the last minute, trying to get away before the Scarlet Legion, guided by the Black Shadows, come looking for them.

There’s been nothing said yet about Public Reactions to the ban. There’s nothing about Clerics and the reactions of the Temples / Churches, or any other recognized organizations.

Notice that we aren’t really building out on any of these elements yet – we’re still working on the main premise and simply listing all the other things that need to be detailed.

It can be useful to maintain a running list of such things, because it’s easy to overlook them if you don’t. So here’s our world-building checklist so far:

  • King Theovold
  • Astrangier (Kingdom)
  • immediate death without trial – is this normal? What else is punishable this way?
  • The Scarlet Legion
  • The Black Shadows (secret police)
  • Someone’s going to profit
  • Economic Turmoil (I missed highlighting this one!)
  • Neighboring Kingdoms for Mages to flee to
  • Public Reactions
  • Clerics & Temples / Churches – Permitted? Banned? Reactions?
  • Other Recognized Organizations & reactions

I find it’s also good practice to jot down any preliminary ideas on such a list, so that you can interrupt your world-building and resume it a day or two (or a week or two) later:

  • King Theovold

    Reputation as an enlightened monarch, making this action all the more remarkable.

  • Astrangier (Kingdom)
  • immediate death without trial – is this normal? What else is punishable this way?

    ‘High Justice’ – can anyone other than the King’s Scarlet Legion dispense it? Princes? Dukes? Barons? Counts? Sheriffs? Other Officials? The general public?

  • The Scarlet Legion

    Name comes from Scarlet tunics worn over armor. Outside the control of any other nobility – which means they can target that Nobility. Connect with questions of High Justice, above.

  • The Black Shadows (secret police)

    Name comes from secrecy. More spies than police force. Maintain trustworthiness of the Scarlet Legion, probably fairly small in numbers – but no-one knows.

  • Someone’s going to profit

    Could be the crown, could be upper nobility, could be wealthy private citizens, most probably all three to differing extents. Will impact public opinion over time.

  • Neighboring Kingdoms for Mages to flee to

    How many days of travel will it take to get there? If a mage can fly or teleport, they have time on their side – but if they can’t, two weeks might demand leaving right now!

  • Economic Turmoil

    Short term, there will be bargains. Medium term, there will be price rises. Anything that used to be made more efficiently with magic will show the steepest increases.

  • Public Reactions
  • Clerics & Temples / Churches – Permitted? Banned? Reactions?

    I’ve used the Clerics vs Mages trope before, so prefer another path. Menaces best confronted by Mages – Clerics etc have to pick up the slack. A large temple might offer to act as sales agents on behalf of those having trouble selling property – for 5% of the take. All this suggests a ‘soft opposition’ to the policy, overall, but some will no doubt take a harder line – which ones?

  • Other Recognized Organizations & reactions
    • Merchant’s Guild – opposed because of the economic impacts.
    • Bandits – lots of money moving around the countryside in the pockets of mages.
    • Thieves Guilds – generally stronger in cities and larger towns, which is where there is suddenly less money but more property to pilfer. Largely neutral on the subject.

3. Intersection Points

This is tricky, without knowing who’s in the party. We need to list the possible situations and permutations and have ideas ready to go, but focus on the most probable one.

I can think of four major alternatives.

A. No mage in the party.
B. A mage in the party goes into exile and the party splits.
C. A mage in the party goes into exile accompanied by the party.
D. A mage in the party ‘goes underground’ in defiance of the law.

I rate A and B as least likely, C and D as more likely. If they are low-level, and unlikely to be able to impact the situation (yet), C is the most likely; if they are mid-to-high level when all this goes down, D becomes the more likely.

I also note that there will be NPCs taking options B, C, and D, regardless of what the PCs choose.

    Intersection Points: No mage in the party

    If they weren’t aware of the threat immediately (higher INT, remember), it won’t take long for mages on the move with lots of cash to become aware of the Bandit threat. They will look to hire PCs for escort duty if they can – as a general rule, Mage level = total levels in the party that they can afford to hire. Focus of an adventure.

    Party will be routinely interrogated by Scarlet Legion – ‘seen any mages lately?’ Legion will take names, so any ‘passive resistance’ by offering false leads will cause a party to acquire a reputation and eventually create interest by the Black Shadows. Assumption: It’s against the law to lie to the Scarlet Legion. Punishments initially minor but ramping up with repeat offenses.

    If the party are inclined in the Murder Hobo direction, they may become bandits themselves, leading to a Chicago-style gang war through the early campaign! If they lose that war, that can be enough to push them into exile or the underground.

    Intersection Points: Party splits

    Party may or may not escort the mage to safety. This splits the campaign in two at least temporarily; if that is unacceptable, then reasons need to be found to rule this option out. Perhaps the best option: it becomes publicly known BEFORE the law is announced that there is a mage in the party. When they contemplate splitting, they discover that the Black Shadows are targeting them as potential violators of the law. Persecution to be real, however unofficial and unjustified.

    Intersection Points: Party go into exile

    I need somewhere for them to go. I need adventures to happen there. Eventually, I need some trigger event to pull them back into the plotline.

    Somewhere for them to go: The Elven Kingdom of Lethorial welcomes exiled human mages. So does the Human Kingdom of Sandival. Let the PCs choose.

    Adventures to happen there: set this question aside for later.

    Trigger events – Elvish Kingdom: The former mentor / master of the party mage reaches out to the PC to recruit him (and his friends) into the underground, bringing the plotline back to the forefront.

    Trigger events – Kingdom Of Sandival: King Theovold becomes convinced that the Underground, perpetrators of the civil unrest, are acting with the covert support of Sandival. He mounts an invasion / crusade to “Kill All The Mages”. Makes this plotline front and center.

    Intersection Points: Party join underground resistance

    If the PCs are low-level, they will be assigned low-level tasks without explanation. Assuming intelligent leaders, some of these will be tests, and some will have significant impact – but only afterwards will the PCs be able to connect the dots as they see consequences of their actions. Multiple adventures possible.

    If the PCs are mid-level, they will be assigned tasks without specific instructions on how to proceed. Some of these will be tests, but most will be targeted at outlying districts. At least one should be aimed at generating / disseminating propaganda to shift public opinion further against King Theovold. At least one should be an attempt to rescue a mage. At least one should be an attempt to recruit a high-level mage who has resisted out of distrust for the secretive heads of the underground. And at least one should be a diplomatic mission to somewhere, probably neither Sandival nor the Elves. Dwarves are the most likely option but that doesn’t seem to be a great ‘fit’ – needs further thought.

Okay, so it doesn’t matter what classes the PCs choose – the event is still going to impact them, as something of this magnitude should. Time to move on.

3a. Further consequences and ramifications

What we’ve got at the moment is pretty good in terms of the short term impact. We still haven’t really dug into the medium and long term. So it’s back to step 2.

    2. Medium-term Impact

    Certain magic items should be banned, confiscated by the State. Other magic items should attract a ‘Licensing Fee’ from the owners. Specific inheritance taxes should penalize the generational passing of such items. Finding / looting of such items should also attract a one-off tax.

    All this aims at restricting and controlling the spread of magic through the broader community, especially any magic that can ‘replace’ having a Mage on hand. This is a logical extension of the initial proclamation.

    It will impoverish the lower-middle class and lower classes, and enrich the throne and the upper social echelons. It will mean that Banditry remains an ongoing problem.

    The Black Shadows are certain to have identified the existence of the underground by now, and will be actively operating to uncover their identities and turn the Scarlet Legion loose on suspects.

    Expect a law to be passed making it a capital offense to harbor a known mage. That won’t make a lot of difference to anything, but it is a clear escalation and sharpens the dividing lines within society.

    For some within the Scarlet Legion, this may be a bridge too far. Expect a Black Shadows -led purge of the ranks. This is likely to harden public opinions further – some in favor, some opposed. Being a fence-sitter will simply get you targeted by both sides.

    3. PC Intersections

    There are all sorts of ways the PCs and their adventures can intersect with these developments. In some cases, depending on their initial choices of response, these will be the focal points of adventures; in others, they will be background events. There are too many combinations possible to do much advance planning until that initial decision is made.

    In general, tension and hostility will ramp up, and this element of the campaign will become more dominant, either in the form of encounters or as elements of adventures.

    2. and 3. Long-term Impact

    Eventually, the ban-on-magic plotline will come to a head and be resolved, one way or another. This may involve the deposing of the king and elevating someone else in his place, or not. Either way, there will be short-term chaos within the society of Astrangier.

    This might seem to be the big finish for the campaign, but one further escalation is possible into a big finish. Remember the warning of the Churches / Temples, that there were menaces out there that are best addressed by Magic and that others (themselves included) would have to pick up the slack? That should have manifested in at least one such adventure in the medium term, where the PCs find themselves at the focal point of dealing with just such a threat.

    As the big finish, though, an even bigger such threat should manifest, attempting to take advantage of the combination of the chaos and the moment of greatest vulnerability. What form this ‘ultimate menace’ should take is not yet clear. I also made a point of signifying that at least one set of Temples / faiths should support the ban on magic. Their motivations for this agreement should be reasonable and rooted in the gain of power and authority in general – opportunistic, in other words, but in keeping with their general theology. But I can’t help but think that this might be just a cover for their real motivations, which derive from the behind-the-scenes string-pulling of the ultimate big-bad.

Okay, so that finally lets us escape the clutches of step 3a and progress to step 4.

4. The Why and Where

The ‘where’ is the easiest to dispense with, so let’s tackle that first.

    4. Where?

    The Kingdom of Astrangier is going to have a capital city, still unnamed.

    That capital city will contain a Castle, where the King lives, and where the various government functions are headquartered. That’s obviously ‘where’ the critical events are going to begin.

At the moment, the PCs are not going to be moving in such lofty circles, so we don’t need any real details of the Castle – just some general description. At some point, that will change and we will need to supply more specifics.

Whether or not the same is true of the Capital City is another question – it depends on where the campaign is going to be initially based.

I can see arguments both for and against starting things off in the capital.

For: It puts the PCs closer to the heart of the action, and amplifies the challenge posed by the initial proclamation. It keeps D alive as an option. A major population center is more likely to have mages around, so it’s more sensible for any PC mage to be based there.

Against: It doesn’t force the PCs to engage directly in the plotline before they are ready to do so, enhancing campaign viability. It lets the campaign start in a more ‘traditional’ fashion, easing players into events, and increasing the shock value when the proclamation gives the campaign an early plot twist. It makes the more likely early options, A, B, and C, more viable. And, it defers the detailing of the capital until you have more time to lavish on it.

Ultimately, though, this decision should not be made arbitrarily by the GM; it should be a consequence of the planned Session 0, and the character classes chosen by the players for their PCs.

I would rank each of the PC classes from -2 (strongly urban) to +2 (strongly rural) in terms of their preferred operating environment.

Clerics are everywhere, but there would be a slight concentration in the capital, so -0.25. Rogues would also be everywhere, but there would be an even stronger concentration on the urban setting, so -0.75. Mages can be either urban or rural, so either -0.5 or +0.25. Fighters are everywhere, so +0. Druids are strongly rural, so +1.5, and Rangers are even more rural, so +2. And so on. I would make these assessments as each character gets generated, not in advance. Add them all up, and you will get a total. To accommodate groups of differing size, divide the total by the number of PCs to get an average.

  • -2 to -0.5: Strongly Urban. Start in the capital or another major city.
  • -0.5 to 0: Weakly Urban. Start in a large city other than the capital or a large town.
  • 0 to +0.5: Weakly Rural. Start in a moderately large town 2/3 of the way from the borders to the capital, on the fringes of the Inner Kingdom.
  • +0.5 to +2: Strongly Rural. Start in a small town or village not too far from a larger town, located 1/3 of the way from the borders to the capital at most.
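
To make the arithmetic concrete, here’s a trivial sketch of that tally. The class weights are the example figures from the paragraph above, and the party composition is invented purely for illustration:

    # Hypothetical class weights from the discussion above (-2 strongly urban ... +2 strongly rural).
    weights = {"Cleric": -0.25, "Rogue": -0.75, "Fighter": 0.0, "Druid": 1.5, "Ranger": 2.0}

    party = ["Fighter", "Rogue", "Cleric", "Ranger"]   # an invented example party
    score = sum(weights[c] for c in party) / len(party)
    print(score)   # 0.25, which lands in the "Weakly Rural" band above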

This preserves a semblance of urban civilization for those classes that need it, or a connection to a rural culture for those classes that need it, while placing most PCs in a workable environment. Some element of specific individuals being slightly ‘fish out of water’ may be necessary, which the player should incorporate into his character.

Remember, too, that the relative likelihoods of C and D are contingent on how ‘urban’ the initial setting is. Strongly Urban makes D more likely to eventuate; anything else makes B or C more likely. A can happen equally in either setting. This should inform your choices for adventure prep. Obviously, the longer you can put off doing something major like generating a city, the more you can invest in the necessary prep, so if in doubt, swing rural.

The worst possible outcome: Strongly Urban, so much so that the capital is the only realistic option – and for the party to then choose options B or C (if applicable), so that all the prep invested goes to waste, at least in the mid-to-short-term.

    4. Why?

    We’ve decided already on the principle agent of the proclamation: King Theovold. Our notes suggest that he had a reputation as a kind and progressive King, so this harsh action is noticeably out of character – and that makes the ultimate ‘why’ of critical importance.

    You can expect the players to speculate endlessly, and every possibility that they come up with (and more besides) will probably circulate as rumors. Come up with as many half-baked theories of your own as you can, to add to the mix.

    There are several obvious options. The King has been replaced (a twin, or a doppelganger). The King is being influenced by an outsider (either mentally or criminally or by magic).

Both are entirely too predictable, I think. So let’s complicate the why with two causes: a Proximate Cause to initially justify the harsh laws, and a True Cause to give the PCs something to overcome in the penultimate climax of the campaign.

    4. Why? Proximate Cause

    The Wizard’s Guild have been agitating against other groups for a while now, in particular disliking the Merchant’s Guild’s ability to constrain rates charged for services and some of the Temples for their ability to override what the Wizards deem acceptable. When their level of frustration grew too much, they attempted a coup of sorts, seeking to dominate the King’s thoughts with Magic. It’s not clear whether this was a separate sect within the Guild or if the entire leadership was in on it, but their efforts were thwarted somehow and the King declared them the equivalent of a terrorist organization (he would probably have used the term ‘subversive’ but that’s what he would have meant).

That gives a real, concrete reason for a draconian response. If you don’t know who to trust, you can’t trust anybody. The slightest hint that the Wizards are generally prone to being seduced by the practice of their arts, overriding their consciences and any wisdom they possess, and their fate would be sealed. The forces that the Guild had been agitating against would have been only too happy to put the boot in, amplifying the perceived threat and disloyalty shown.

Was this conspiracy real, or just an excuse? The entire Guild Leadership were publicly hanged while bound and gagged. There was no trial – they were guilty because the King said they were guilty.

Let’s consider the possibilities:

    4. Why – If The Conspiracy Wasn’t Real?

    Then the Wizard’s Guild were fall guys, removed from the picture because the ultimate big bad is vulnerable to magic. It’s neat and tidy but seems too simple a picture. You also find that the ultimate “Why?” has become more of a “How?” – remembering that the King would have been protected against anything and everything that could be thought of.

    This doesn’t exclude the possibility that the King has genuinely fallen into Madness, but there isn’t a whole lot that the PCs can be expected to do about that, so I don’t regard this option favorably.

    4. Why – If The Conspiracy WAS Real?

    Then the Wizard’s Guild really were the bad guys, and the ban was justified – though a possible over-reaction. Maybe the King is prone to that in cases of Dishonor or Betrayal or threats to his family.

I like this option because it’s not going to be expected – “the plot twist is that there IS no plot twist!” – but it’s likely to put the nose of any Mage-player out of joint.

This only shifts the subject of the “Why” from the King to the Guild, but that innately gives us more options, some of which can assuage the anger of any Mage-player by recasting the Guild as victims, not criminals.

    A while back, I took part in an online conversation speculating wildly about better choices for a Lich’s phylactery. Amongst the more interesting options suggested were,

    • An ordinary copper coin released into general circulation, indistinguishable from any other;
    • A single facet of an incredibly valuable gemstone, so that not only would there be resistance to breaking it, doing so would not actually destroy the phylactery;
    • An Iron Golem;
    • The Heart of Bahamet;
    • A magical ring that bestowed great powers upon any who wear it (the One Ring kinda fits this description);
    • The City Of Brass;
    • A Relic or Artifact.

    Let’s take that last one. If such a relic or artifact were uncovered by an adventuring group, and it was clearly inherently evil, it might be turned over to the Wizard’s Guild who would search for ways to unmake it. If they did not suspect the Lich‘s presence within the relic, they might not take adequate precautions, and so could prove vulnerable to subtle but growing manipulation.

    Petty complaints and general disagreements would initially become fiery and escalate. Feelings of persecution would begin to grow as the other parties to those complaints and disagreements argued against the mages, any wins being minimized and any losses amplified in the mage’s perceptions. Over time, they would be increasingly corrupted – until the conspiracy came into being.

    The conspiracy was real, and the victims – the guild – were also the villains that they were accused of being.

Now that’s got some texture to it. It’s not as simple as the alternative, it contains nuance and leaves a lurking menace at the center of the Kingdom – that’s great for what will initially appear to be the climax of the campaign. And, in the form of the adventuring group who initially recovered and turned the relic over to the guild, but who have not put two and two together, it creates a group who knows at least part of the story-behind-the-story, and who can be encountered by the PCs at some future point.

    The Lich has not been simply waiting around for his machinations to unfold; he may well recognize his slight vulnerability to the secrets known to those adventurers and would seek to influence others to get rid of them. The Secretive Head of the Black Shadows would be a particularly compelling target because of the very secrecy surrounding him, and his ability to covertly act.

Better and better – this adds a direct confrontation with agents of the Black Shadows to the adventure in which the PCs start to learn the truth! What’s more, even if the Black Shadows complete their assignment of assassinating the last member of that old adventuring party, who has seen his comrades cut down one by one no matter where and how they hid, it has to be assumed that he may have lived long enough to pass on his secret – so it’s now the PCs who have infinite knives at their backs. What they do about that is up to them – but self-preservation should incline them to get involved in the real plotline at this point. It’s no longer an abstract good-vs-evil story that they can take or leave, it’s a direct threat against them.

    4. Is the Lich, the Ultimate Big Bad?

    If just any Lich could use a relic as a phylactery, they’d all be doing it. But if we assume that this ability is an unusual choice not normally available, then we create room for a real ultimate big bad to have facilitated this, making the Lich a lieutenant at worst and an Agent at best.

    This reasoning demands that the ultimate big bad be something more akin to a Morgoth or a Sauron, a manipulator capable of making Artifacts of great power and evil, who conceived of this plan and manipulated the circumstances to make it possible. Someone who has been lurking in the shadows even less suspected of involvement than the Lich has been.

    All that is needed to initiate the ultimate climax of the campaign is for the PCs – already weakened and exhausted, one hopes, by their battle with the Lich – to spot the shadowy hand of this hidden evil, causing it to fully reveal itself for one epic final battle within the campaign.

5. Are you satisfied with the worldbuilding?

So let’s take stock for a moment.

The fundamental premise of the campaign has carried it from its earliest in-play beginnings all the way through to plot twists and an ultimate menace, with significant heightening of tension and drama in the end.

The last four or maybe five adventures in the campaign show this escalation quite clearly:

  • F-4: Learn of the relic, direct confrontation with Black Shadow assassins.
  • F-3: Learn more about the artifact and the leadership of the Black Shadow assassins, while realizing that they are now the targets of the Black Shadows.
  • F-2: Travel to the Capital City of Astrangier, confront the head of the Black Shadows, discover the hidden Lich.
  • F-1: Confront the Lich without time to recover from the fight with the head of the Black Shadows and his bodyguards. Defeat the Lich and discover / confront the Ultimate Big Bad.
  • Final Adventure: Confront the ultimate big bad in an epic conclusion.

Insofar as outlining the campaign, then yes, I would be happy with this. However, there are a great many elements that have been name-dropped along the way and need further development before this campaign would be ready to play. The last time I listed them, there were ten of them, most with some initial thoughts, but some not even with that much done. And there have been more added since then.

Which means that it’s back to step 1 to work on the first item on the list, King Theovold. And when he’s done, the whole process will repeat for Astrangier, the Kingdom, and so on.

I think it worth a time check at this point.

Coming up with the ideas presented in this example took me about 20 minutes to go from an empty page to everything listed above. Writing them down (typing them up) in this case took another 20 minutes or so, maybe 30.

I started writing this at 1:37 PM and it’s now 6:52 PM – so that means that explaining the process has consumed 4 hrs 25 minutes. The real number to pay attention to is that 50 minutes. In fact, I’ll be generous and bump it up to an even hour.

With about 14 items left on the list to develop, if I spend just as much time on them as I have on the initial construction of the campaign, I end up with a total of about 15 hours from blank page to fully-playable campaign.

The reality is, though, that part of the development process so far has been deciding the framework and campaign structure. That’s a lot of detail that doesn’t have to happen a second time, and so it would not surprise me if the time needed to generate and document concepts for the remaining items were halved or even quartered.

That’s either 8 1/2 or 4 3/4 hours. A solid day’s work, and my mind (these days) would probably be fairly fried at the end of it – but in days gone by, I could manage that in a day and be ready to run by evening. If I’m more willing to improvise and come up with details when I need them, that additional development time gets slashed again, probably to another hour, especially if I could dip into my library of resources for appropriate city maps and the like.

A note about additional plotlines

No campaign should ever rely solely on one single plotline. Instead, I would designate this as the Wizard Plot Arc and generate other plot arcs for the different character classes. These would not be as detailed as this one, which lies at the heart of the campaign; but they should be there ready for any PC who chooses that particular character class.

I’d probably spend 20-30 minutes on each possible plot arc outside of this one. That would mean that each and every PC had their own plotline through the campaign, so that this main plotline could fade into the background most of the time.

With unused plotlines, you have several options to consider.

  1. They can describe background events, sources of news and rumors, keeping those relevant beyond the mere gossip that would be 80% of such content.
  2. One of the PCs can be cast as “a fish out of water” and flung headlong into some other class’s character arc instead of their own. This choice works especially well if the player shows no interest in what you had initially planned.
  3. You can save them for a future campaign.

In none of those cases is the development work that has been done a wasted effort. It only becomes one if you simply throw the unused plotlines away.

Giving each player a plotline squarely aimed at making their PC the center of attention, and progressing from one plotline to another with foreshadowing and the other tricks of the trade smoothing the transitions, is the best way that I know to create player investment in both their characters and the campaign.

And, ultimately, this is yet another set of iterations of the iterative process. Simple steps accumulating to a rich and complex amalgam. That’s the power of iteration, and why it remains my number one campaign planning and development tool.


Expectations and Promises, Real and Imagined


We all have expectations when we belly up to the game table. Sometimes, the GM delivers on promises both real and implied, and sometimes those expectations were never realistic in the first place.

This merges two images: first, the book, by Peggy und Marco Lachmann-Anke, and second, the magic, by Stefan Keller, both via Pixabay, with additional effects by Mike.

After struggling with the most recent installment of the Topologia game setting (it really should have been three parts, maybe 4, but it broke so naturally into two more-or-less even pieces), I’ve barely made a dent in the next part of the Trade In Fantasy series, which is now way behind. So I thought up this post idea as I was going to bed last night and am going to use it to steal an extra week of writing time.

Time Out Post Logo

I made the time-out logo from two images in combination: The relaxing man photo is by Frauke Riether and the clock face (which was used as inspiration for the text rendering) Image was provided by OpenClipart-Vectors, both sourced from Pixabay.

Every person who sits down at the game table brings expectations with them. This is true whether this is an RPG “Blind Date” (players and GM have never gamed together before), whether there is a history of which the person is aware, or whether the situation is more akin to old friends with an actual history of gaming together – though the expectations can be different in each of these cases.

Superficially, these can look quite simple, but as soon as you start digging into the subject, complications and nuances begin to exert themselves and situations can become a lot less simple in a hurry.

Over time, as participants in the campaign get to know both it and the other participants, expectations can and will evolve, and that’s a source of additional complications.

I’ve broken the subject down into 21 key aspects – you read that right, I said 21 – and I’m still not sure that I’ve captured everything that falls under this topic heading. Since this is supposed to be a fairly quick fill-in post aimed at making deadline tonight – a little less than 10 hours from now – I don’t have a lot of time to spend on each. Take off an hour for pre-publishing and an hour for meals and a break somewhere along the way, and I have about 8 hours to get this written. I’m also going to take an hour out as a reserve, to be used where things take a little longer than scheduled or go wrong or I get sidetracked. So, 21 topics, 7 hours – that’s 20 minutes to each topic.

Let’s go!

1 Blind Expectations – Players

A player sitting down at a game table for the first time may have no idea of what to expect or may have expectations deriving from past experiences in other campaigns with other GMs and players. If those expectations exist with no knowledge of the other participants and no prior knowledge of the campaign, they are ‘blind expectations’.

The more experience a player has, especially outside this particular group, the more of these pre-formed expectations they will carry with them. As this particular social group coalesces and they grow more familiar with it, these blind expectations will evolve and be replaced by a history of playing with these particular individuals in this particular campaign using this particular game system.

That last one is important, because a lot of blind expectations will be carried by the player’s sense of what the game system embodies, permits, requires, and symbolizes. What sort of adventures can they expect? What sort of characters? What sort of choices?

Bringing pregenerated characters or preexisting characters into a new campaign adds to this mixture, because players expect to be able to marry the characters with the campaign background and setting. In fact, these expectations – like all expectations in this context – can be broken into Rules, Campaign, Adventure types, Adventure Participation, Character Agency, Spotlight Share, and Social Compacts.

  • Rules – preexisting house rules, rules-as-written, rule enforcement, gaming philosophy, GMing style.
  • Campaign – the background and backdrop, internal cohesiveness, and how the PCs will integrate with these. In particular, if there’s a conflict, which takes priority and how can / will that conflict be resolved? The campaign is what distinguishes this story-line from every other one run with the same game system, and derives directly from the GM and what they are bringing to the table.
  • Adventure Types – if you know nothing about the campaign prior to play, you have only whatever knowledge you possess of the game system and the types of adventures it permits to guide you. That can become a problem if you expect one thing, and design your PC with that in mind, and the campaign wasn’t designed to accommodate that. It’s less likely to be a problem if the GM incorporates a session zero for character construction and background briefing, because player choices can be tailored to better fit the campaign world. I’ve participated in session zeroes (long before the term was invented) that were as short as 5 minutes or as long as three game sessions. I’ve even been involved in one where characters were entirely abstract and conceptual for the entire session zero, not being translated into game mechanics until a session 0.5 either later in that game session or in a separate game session.
  • Adventure Participation – There’s a reasonable expectation that their characters will get to participate in the adventure of the day, whatever it may be. They may not be the center of attention in that adventure, but there is an equally-valid expectation that they will get their share of starring roles in the future. Some players demand attention from the first day of play, others prefer to get used to the campaign, GM, and background first, keeping an initially low profile before being put on the spot. All of this translates into the GM preparing adventures for the group of PCs that he’s got and not shoehorning them into a preexisting structure of which they were not advised during character construction. That’s why session zero can be so important.
  • Character Agency – There’s a reasonable expectation that PCs will have an impact on the game world. Initially, it will probably be the case that the world has a greater impact on the PCs, but over time, that should reverse. Initially, those impacts may only be felt locally; by the end of the campaign, they may be profound, even existential. At the end of a campaign, each player should be able to look back on it and say that if their character had been different, it would have changed the story, sometimes in part, and sometimes profoundly, even if the broad shape of the skeleton remained unchanged.
  • Spotlight Share – There’s a reasonable expectation that their PC will get a fair share of the spotlight during each adventure. Sometimes, GMs balance these things over the span of many adventures, giving rise to the choices discussed in the article Ensemble or Star Vehicle – Which is Your RPG Campaign?
  • Social Compacts – Everyone carries expectations of how game participants will interact socially. If no-one knows anyone else, there’s an expectation of politeness, respect and tolerance at the very least – but a stiff formality is often better than an excessively casual approach, at least at the beginning. Novice players also sometimes have trouble separating character from player, NPC from GM. This revolves around expectations of fairness in decision-making and rules interpretation, too. If a GM is seen as being unfair, a campaign is almost certainly doomed. I always like to schedule in a session “1.5” at the tail-end of the first day’s actual play for people to talk about the campaign and themselves and actually draw a line between the exhibited traits of their characters and the beliefs and practices of the players. Sometimes it’s not needed, but sometimes it can be all-important.

Blind expectations can also stem from genre interpretations provided by media. Those may or may not be relevant to the actual game on hand. If you keep your eyes open, you can see the impact of Star Wars, of Lord Of The Rings, of the Marvel Cinematic Universe, of the Big Bang Theory, and so on – each creates a ripple effect that permeates gaming for a while and informs how participants view the content of a genre, and hence what expectations they have of that genre. And, before movies and TV were as big, there were books. How past encounters with a genre have shaped expectations of that genre is one of the hardest things to pin down, and at the same time, one of the most profound influences on an individual’s playing style and expectations.

The problem with blind expectations is that they are often inchoate and unstated. They can be fuzzy and undefined. They can be baggage from the past, which makes them more clearly-focused and defined. They can be reasonable or unreasonable. None of which makes them any less real.

2 Reputational Expectations – The GM

If the GM has been in that role for a while, even outside of this new group, that creates expectations based on their history and reputation. “GM for 12 years” – or “20 years” – or “44 years” in my case – definitely creates expectations. That may or may not relate to any particular game system, but the longer it spans, the broader their expertise is likely to be.

Imagine that your entire history as a GM is archived on the internet somewhere and you were to ask an AI – with a reference link to that history – “what is my reputation as a GM?”

The results are likely to be ‘as viewed through rose-colored glasses’, though I’m fairly open about past mistakes and misjudgments. If I were to use Campaign Mastery as my reference source, the answer to such a question would be very much an idealized version of myself as a GM – I know, because I ran just such a search for the purposes of this article. Expectations built on that foundation would not be completely wrong, but the practical reality is likely to be somewhat less perfect and idealized than they would suggest.

And the same holds true, to some extent, regardless of the information source, though sources outside the direct control of the GM in question are just as likely to amplify negative traits and events, and those will also influence expectations while reflecting only a small part of the GM’s actual ability. Anything that creates a false expectation, either good or bad, is a potential problem, because those expectations are extremely unlikely to represent the reality of the game experience.

3 Promises – The Campaign

Any campaign briefing carries with it promises both explicit and implied about the campaign, the adventures, the style, the genre, and so on. I often preface mine with a statement that the briefing represents the game world as the PCs understand it – giving me the freedom to expand on or depart from that blueprint. I can also incorporate deliberate ‘errors’, giving the players the opportunity to uncover ‘secrets’ about the universe their characters inhabit, and ensuring some surprises along the way.

I went a step further with the Fumanor campaign – there was a group of common pages of text, but those with extensive theological training (clerics etc) got additional information (not all of it accurate), mages got different information (not all of it accurate), and so on. None of it was, therefore, complete.

A lot of GMs spend a lot of their time and effort in making their campaign plans exciting and interesting with lots of plotline potential. That’s great, and to be encouraged. Others put all of that effort into campaign backstory generation, on the premise that no matter where the PCs choose to go and what they choose to do, there will be something interesting for them to do and find – but that can also imply a lot of development work that never sees the light of day.

This can be both bad and good. It’s bad if it’s wasted effort; it’s good in that it means there’s content for a sequel campaign to pick up on.

Very few GMs take the time to think about what their briefings are actually promising potential players, and what scant attention is paid usually focuses on the explicit promises. The implied promises are poor second cousins in comparison. Partly, that’s because it’s hard, and partly because the GM is not omniscient and hasn’t encountered every possible combination of media inspirational source material. But it’s worth spending a little more time on it than most people do, and asking yourself about the expectations of those participants with a media consumption profile that includes the dominant sources of the day – just so that any false expectations that might be generated get headed off at the pass.

4 Promises – GM Skill-set & Infrastructure

At first glance, this might seem to be inherently derivative of the previous section but that’s because almost everyone will misinterpret the meaning of “GM Skill-set”.

I richly illustrate my adventures. Sometimes with the results of deliberate image searches, sometimes with hand-drawn or digital illustrations. I’m fairly good at editing and compositing image elements into a whole utterly distinct from the source material. I’ve made explorations into using AI generation for images (which is a subject for a whole other article). A current adventure I’m working on employs sound effects for the first time (harmonious and discordant Gregorian chants, back-masked, plus various mechanical and violent sounds – all about creating atmosphere). For only the second time ever, another adventure has relied on the creation of custom animations, like this one:

Because it’s been reduced in size to fit the screen real estate at Campaign Mastery, here’s a closeup view that’s closer to how it will appear when used in play:

Okay, so back to the point: The “GM Skill-set” refers to everything else that the GM brings to the table outside of his ability to deal with rules and craft campaigns and adventures and encounters. It’s whatever personal resources the GM has to bring the game world to life. It might be voice acting, with several different character voices on tap; it might be image creation / manipulation; it might be poetry or song, or 3D battlemaps or custom miniatures or just well-painted miniatures. It might be having access to a vast library of sound effects and the ability to mix and compose them on the fly.

And yes, it can include tools for interaction with the game mechanics – how you track initiative, for example. It can be mandating a particular character sheet, or even a bespoke one specifically for this campaign.

It can even be homework: “Jim, you’re playing an Elf. In this world, they have a very lilting speech pattern, a little like Irish. Here’s a link to an interview with an Irish comedian; I want you to watch it a couple of times, practice saying the same words the same way he does a couple of times, and then practice saying things your character might say as though he were saying them. Marty, you’re playing a Dwarf; they have a far more guttural speech pattern, and one that lacks the expression even of German. Here’s a link to a speech by Londo Mollari, a character from Babylon 5; the actor based his accent on Hungarian, which I think would sound just about right. I want you to do the same thing for your character, but with this voice. One hint: the character has a very particular way of naming another character, sounding out each syllable – Mis-ter Gar-a-ball-di – which the actor found was a ‘touchstone’ for letting him get immediately into character, even sub-vocalized. You might find it works for you, too.”

My players know what to expect from my campaigns – that I will carry part of the load of the suspension of disbelief, and a lot of narrative description, through images and the occasional extra. It only takes a single session for new players to get used to it, and another for them to come to rely on it. It’s a key part of my game prep.

It creates expectations – the more complex a scene, the more I will have found some way to represent or depict it. It might be as simple as drawing parts of the map only when they come into view, with the players adjusting their positions as crosses on the map each time they move, or covering part of a map with post-it-notes so that they can’t see what’s underneath. But there will usually be something. If ever there isn’t, they feel a little short-changed, even though they don’t make a fuss about it – and the strength of those feelings can differ from one player to another.

It’s all part of my GMing style, and the players have an expectation that I will deploy my full armory of tricks to execute that style, even in a new campaign, unless I’ve deliberately told them to expect something different.

5 History & Implied Player Expectations

Every GM has a history that leads them to expect certain things from players. Every player has a history that has taught them to provide certain things in play. The two may not match, in fact they probably don’t. The GM needs to be upfront about the minimum engagement that he expects from players – but that necessitates his being clear on what his expectations can reasonably be, in the first place.

Several of my players, for example, have a great deal of trouble shifting gears from talking in character to a more omniscient perspective in which they represent their characters’ thoughts and words in the third person. As a result, it can be like pulling teeth getting them into first-person mode for roleplaying immersion. Unless it’s absolutely critical to a scene, I don’t even try, most of the time.

And, when it is critical, I make efforts to sustain that mode of interaction without forcing them back into the world of game mechanics. There have been times when I’ve even had them roll a series of results (writing them down) which I then commandeer and interpret whenever a roll might be needed in-game – because I’m better at shifting back-and-forth than they are, most of the time.

I will even break a scene into two parts and spend time interacting with another player to keep the immersed-roleplaying part of their sequence whole and intact, or to give them time to reset their mindset before going back into character if for some reason they do have to break character and go into ‘character-narration’ mode.

When I first started as a GM, my expectations in this respect were framed by the experience I had with the players in other games, and were a lot higher than this current group could readily meet. It took time and experience to ground my ‘minimum levels’ at a more reasonable standard, resetting my expectations. Anything now delivered in excess to those expectations is a bonus. (The illustration-rich aspect of my GMing style was also founded on the principle of forcing fewer breaks in perspective onto the players, at least initially).

6 Implied Promises – Adventures

If your campaign briefing talks about the sort of adventures that the players will experience – and it should – that carries an explicit promise to deliver that type of adventure to the game table. If it doesn’t, something that happens all too often, players are left to surmise the adventure types they will ‘enjoy’ from what is implied by whatever briefing is provided in advance, by reputations, by media-driven perceptions of genre, and by their own past experiences. All of the latter are implied promises, and players can view them as being just as binding as explicit statements.

What you deliver will rarely match the resulting expectations. Sometimes the two will come close enough, and sometimes they will be worlds apart.

I get why GMs don’t want to tip their hand in this respect in the briefing materials. Doing so invites the construction of characters designed to interact with the game world in the specified manner, rather than being ordinary people caught up in extraordinary times – which is often what the GM is aiming for the campaign to be. So I have two different bodies of advice for dealing with this situation.

    6.1 Avoid Implied Adventure Promises If You Can

    If it’s at all possible to do so, be explicit about the types of adventures that are to take place. Any surprise factor won’t last long anyway, and most players are capable of separating player knowledge from character knowledge for long enough that the real focus of the campaign becomes clear to them.

    I’ve had some success – and some failures – running little slice-of-life vignettes or micro-adventures, taking PCs through key formative events in their childhood in play as part of a Session Zero. So that’s an alternative to consider. But the failures have convinced me that the first approach is the better one – be explicit and up-front. MAYBE you can justify delaying this until after characters are generated. My experience is that this just creates ill-will amongst players, because their expectations get derailed immediately.

    6.2 If you can’t, pay extra attention to the implied promises – and construct a campaign plot arc that transitions slowly from those base expectations to the endpoint

    My, but that’s a mouthful. If, for any reason, you don’t feel you can be explicit about the types of adventures to come, this is the advice to follow. DO put in a statement of the “ordinary people adventurers who find themselves living in extraordinary times” type, just as a hint – and a note of forewarning.

    More importantly, study what you have prepared to tell the players, looking for any hints or ways of interpreting it that suggest a particular type of adventuring to follow. Tweak what you are going to give them until it’s not incompatible with your plans and you can reasonably forecast what the players will expect in response to the content.

    Then, deliver that content – at least at first. Make the transition to the ‘true shape’ of the campaign a gradual one that begins only after those expectations have been met. I don’t care if it adds 5 adventures or 3 character levels or whatever – that’s better than the jarring that can otherwise result. Insert events that naturally and gradually reshape the campaign and the adventures of which it is composed.

    But there’s a downside to be wary of, too – if you spend too much game time constructing plot elements that will eventually come together, players can feel like the campaign is stagnating and nothing big ever happens. You need to balance the resolution of some plot arcs with development of the big picture – even if that means deliberately inserting plot arcs for that specific purpose.

Six sections done, how’m I doing for time? Midnight minus an hour is 11 PM. It’s now 5:36 PM – so 5 hrs 24 minutes from now. Take off the hour in reserve and the hour for meals and that leaves 3 hrs 24 minutes. With 6 out of 21 sections done, that leaves 15 to go – at an average of 13.6 minutes each, not the 20 minutes originally scheduled.

To be fair, I have used a little of those breaks already – but not that much. 15 x 20 = 300 minutes; I’ve only got 204 left – so the six sections written have used 96 minutes more than expected. Knock off the 10 or so minutes of break time already used, and that’s 86; 86 / 6 = about 14 minutes too long, each, on average.

That says I’m not gonna get there, not without picking up the pace. But if I have to, I’ll publish a day late rather than splitting this into two pieces.

So let’s carry on.

7 Implied Promises – Campaign Backstory

The Fumanor backstory included the Godswar, in which whole pantheons were (mostly) slaughtered, the Kingswar, in which the conflict in “heaven” was mirrored by political conflicts on Earth, the consequent economic and social and political collapse, the Reformation, when the surviving gods founded a new, blended pantheon, and the beginnings of a slow recovery on Earth. Clerics blamed Mages for unleashing the events, and came this close to having magic banned entirely; it was very much driven underground. That’s the backstory, in a nutshell.

It carries within it certain expectations and implied promises. Were mages to blame? Maybe. If not, could they be reformed in the eyes of society? Maybe, just maybe. If theology was shown to be wrong or incomplete or inadequate, there would be social consequences for the churches, who were becoming so dominant socially that the political reality was verging on a theocracy. Not all the temples and chapels dedicated to fallen deities had given up hope of a miraculous resurrection if they just prayed hard enough – a hope buoyed by the fact that none of those clerics dedicated to the fallen had lost their powers. Was their faith enough on its own? Or was something more going on? Heck, no list of ‘surviving deities’ had been compiled or generally accepted, and all the old religious conflicts and rivalries were still going full tilt. Obviously, there would have to be a reckoning. So there were strong implications for mage characters and for cleric PCs.

The initial PCs included a mage and a cleric. The mage had the choice – be open about their status (and hence controversial) or secretive – and the player chose the first. The cleric had a choice between being open-minded and willing to evolve, or theologically hide-bound – the player in this case also chose the first option. So the pair of them deliberately put themselves on a path to confronting the errors, misjudgments, and lies of both commission and omission implicit in the background material.

They correctly deduced the implied promises of engagement with significant plotlines for those character types and chose to take them on, which would reshape the campaign world in the process, rather than letting it define and confine them. Either would have been valid character choices, but the chosen paths promised greater and more significant adventures.

A third arc revolved around the restoration of Elves and their place in society. The player who chose to take on an Elf paid lip service to the different foundation, but found translating that into an atypical mindset much harder than they expected, falling back on familiar tropes and attitudes all too often. Eventually, this led to them leaving the campaign, their burden of social reformation incomplete. I had the choice of letting the arc continue with an NPC as the focus, letting the arc die, or letting the character die and placing the burden on the shoulders of one of the other PCs. Because it was critical to the endpoint of the campaign, I ruled out the first two options – one would have derailed it, and the other would have denied critical elements of PC agency that were crucial to the plotline. That left only the third choice, and the player of the Mage (whose character had already done a lot to make Mages respectable again) stepped up.

Adventures should be shaped by player expectations – you should always give ‘the paying audience’ what they came for. That gets a lot easier if you have shaped those expectations around the plotline that you want the adventures to deliver, in the first place.

Either your campaign should be a natural outgrowth of the backstory, or the backstory should be shaped around the campaign that you want to run. Anything else subverts expectations and can lead to campaign collapse and failure.

8 Blind Expectations – GMs

I talked in the first section about the blind expectations that players can have of a GM they have never gamed under. The converse is also true – the GM will have expectations of players that they have never shared table space with, too.

You can have a lengthy debate about which one is more important. It really depends on what those expectations are and how reasonably the person holding them can expect them to be fulfilled. A GM can, for example, reasonably expect players to commit to the campaign, attending as often as they can, being on time, and following the social contract of the group – I’ll talk about the latter a little later.

Attendance in my campaigns is mandatory – with a lot of reasonable exceptions. I want my games to be beloved social activities – less important than real world emergencies, subject to real world problems like employment demands and health, and even overridden by the occasional major family event. I work with regular schedules in an attempt to let players schedule other events around their commitment to the game, creating the maximum opportunity for the two to work hand-in-hand with each other.

Committing to the campaign also means accepting the premise and central philosophies on which the campaign is built, regardless of personal feelings on the matter – see Moral Qualms on the Richter scale – the need for cooperative subject limits – if you can’t do that, don’t sign up for it. And if you aren’t sure, talk to me about it before it becomes a problem.

It means accepting the occasional bit of homework – be it reading some briefing materials in between game sessions, or developing a key NPC deriving from your PC’s background, or writing up part of that background, or revising a character’s abilities (that doesn’t happen often).

GMs can reasonably expect players to implement the PC that the GM agreed to in prior discussion, defined within the context of the game world, and not some other character. They can reasonably expect players to try and play the character that their character sheet describes, which should also match that conceptual agreement.

Most of the time when this doesn’t happen, it’s “Magpie Syndrome,” where an immature player becomes captivated with a “newer, shinier” character construction. But I have met at least one player who deliberately reinvented his character before game session 1 to demonstrate that they had power over the campaign, believing that the GM had to accept the revised character or the campaign would fail. Well, there’s an old saying about paying the danegeld…

For anyone who doesn’t recognize that reference, it’s “Once you pay the Danegeld, you NEVER get rid of the Dane” – a paraphrase of a line from Rudyard Kipling’s poem “Dane-geld”. The premise is that once you pay blackmail or extortion, you will never be rid of the blackmailer, who will return regularly with a new demand for more.

In this case, it means that once you let a player hold the campaign hostage, they will do it again whenever they want something you don’t want to give. And you had better believe that the other players would be paying close attention, too.

When something like this happens, you have only two choices: Come down on the character, hard, immediately, and figure out how to deal with the consequences afterwards, or let the player think they have gotten away with it for a while – until you’ve figured out how you’re going to deal with the fallout – and THEN lower the boom on the character, maybe with an encounter that would have had a completely different outcome if the character matched what was originally promised.

A different variety of the same sort of thing was a player who kept reinventing their character mid-game, and cheating (badly) to improve it. This was in a points-buy game system, and the player never seemed to understand that paying points for something meant that the GM couldn’t take it away from the character permanently – at worst, they would get the points back; more commonly, they would get back what they had previously had, possibly in a variant form. But if you rort the system to get something for nothing, the GM is under no obligation not to take it away from you, or to reinvent it as a poisoned pill. I walled the character off in a side campaign and let them do their worst, because they never came up with anything that I couldn’t turn to my (plot) advantage. That campaign was more like a chess game than a traditional RPG, but – as a solution – it worked well, and kept everyone happy.

But I have also seen that same player exhibit some behavior that was even less commendable. He was providing transport to and from gaming for his GM of the time, with whom he had been friends for well over a decade, and (for most of that time) had been a player in that GM’s D&D campaigns. Almost always with a variant on the same basic character, but that’s neither here nor there. At one point, he threatened to leave if the GM didn’t give the character something he badly wanted the character to have. Or maybe it was ‘didn’t do something to the character that he didn’t want to happen’, or some other variant – I was busy running my own game at the time. This goes beyond trying to take the campaign hostage – he was confident that since the GM had no other way to get to gaming, the GM would back down. The GM didn’t back down, and called the player’s bluff, so the player walked out. It was years before there was peace between them after that.

Before you get too sympathetic to the GM in question, though, there is another war story to relate. At one point, both the player and this other GM decided to join my campaign, which was insanely popular at the time in terms of attracting most of the best players in the group. I told the tale of what transpired in the preface to If I Should Die Before I Wake: A Zenith-3 Synopsis but – in a nutshell – the GM/player in question decided to actively sabotage the campaign so as to pry players loose from it so they could join his campaign, which had been shut down when the other player in question joined mine – and whom he then blackmailed / bribed into helping him.

While there’s no ill-will between any of us these days, I would never have either of them back in one of my campaigns, and none of the other players who witnessed all this unfold would ever join their campaign either.

He did later admit that he was not in his right mind at the time for personal reasons, and that was why he didn’t simply approach me to try and sort out the problem like an adult.

Of course, since I knew both of them already, these weren’t blind expectations (maybe blind-spot expectations would come closer), but they do reveal what sort of problems can arise from blind expectations not being met.

9 Reputational Expectations – Players

And that feeds straight into the next topic, which is also the mirror-image of an earlier section. If a player claims to have 5 years experience at gaming, you expect a certain level of expertise from them. If they have a claimed 5 years at playing a different game system, some of those expectations change, but either way, there’s a certain level of ability that you expect to result.

There’s one colossal problem with that, right off the bat. How many times a year did this person play, and how many hours at a time?

I was once approached by a player who claimed umpteen years of playing experience – but who didn’t seem to know the basics when engaged in conversation. It came out that they played once a year at a convention for about 10-15 hours total a year, and had always used pregenerated characters provided by the convention GM. Hence the puzzlement when I asked questions about what sort of character they might want to play. “You mean, you don’t tell me what to play?” – “Anything you like that doesn’t step on the toes of an existing character is fine,” I answered.

When I started playing, the day started at Noon and went until 2-4 AM – every week – plus the occasional game session outside of that. And then I added a Friday night from about 6 PM to about 2 AM on top of that: 20-24 hours a week, 50-51 weeks a year. Around 1200 hours a year. It only took me a year of that to be more than ready to GM; to get the equivalent level of experience in sheer hours, it would have taken this player around 74 years.

As a popular advert on TV said when I was a teen, sometimes “Oils ain’t Oils” – meaning that there’s a difference between generic and premium quality.
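For anyone who wants to check the arithmetic behind that comparison, here’s a small, purely illustrative Python sketch. The inputs are the rough ranges quoted above, the variable names are mine, and the outputs come out as ranges because the inputs are:

    # Back-of-the-envelope check of the hours comparison above. All inputs
    # are the rough ranges quoted in the text, so the outputs are ranges too.

    my_hours_per_week = (20, 24)      # weekly play, low and high estimates
    weeks_per_year = (50, 51)
    their_hours_per_year = (10, 15)   # the convention-only player

    my_hours_per_year = (my_hours_per_week[0] * weeks_per_year[0],
                         my_hours_per_week[1] * weeks_per_year[1])
    print(my_hours_per_year)          # (1000, 1224) - "around 1200 hours a year"

    # Years the convention-only player would need to log just one of those years:
    years_to_match = (my_hours_per_year[0] / their_hours_per_year[1],
                      my_hours_per_year[1] / their_hours_per_year[0])
    print(years_to_match)             # roughly (67, 122) - the ~74 years above sits inside that band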

And there’s a second problem, too: how many GMs have they played under? If it’s the one GM for, say, 1000+ hours (even spread over multiple years), they will be used to what that GM wanted and the way he handled the role, and will be fairly firmly set in their ways. If there has been a variety of GMs, is there some problem that caused them to continually shift from one campaign to another? Either road can lead to trouble.

Some GMs and sites recommend a questionnaire to get to the bottom of such things. I prefer a conversation. You aren’t really looking to triage players; you’re trying to assess what’s going to be needed to get this prospective player to fit – assuming there’s room in your campaign for one more – and a conversation is less formal and less judgmental.

10 Implied Promises – Players & GMs

The previous section, in turn, flows into things the GM should expect from players that they do accept into the game. I’ve covered a lot of this already, so I’ll tip my hat in the direction of the earlier sections and move on.

What hasn’t really been mentioned so far is the expectations that players should have of a GM.

They should reasonably expect that the GM will be close to ready-to-run when they arrive. At absolute worst, no more than an hour of final preparation before play can commence – most of which can be consumed by the players eating a meal or talking amongst themselves.

They should reasonably expect that the GM will not play favorites, and that if a PC gets more of the spotlight this time around, it will be someone else’s turn next time – and that the other players were engaged enough, often enough, that they don’t feel completely left out.

They can reasonably expect that there won’t be one rule for the PCs and another for the NPCs – and that the NPCs will rarely, if ever, be better at the adventure than the PCs are. That can get tricky with some campaigns, in which the PCs are juniors or subordinates to the NPCs.

That was one criticism of the Adventurer’s Club campaign before I started co-GMing it. My now co-GM had created the NPCs based on the famous archetype characters from Pulp novels and movies and what they were capable of – which meant that each of them was capable of solving most adventures on their own. His notion was that these would backstop the PCs, be a resource that they could turn to if they became stumped on how to proceed, but that they would otherwise stay in the background. But they still cast a long shadow and there was the continual inference that the GM loved his NPCs more than he loved the PCs who were supposed to be ‘the stars of the show’.

Three of the first changes that I made to the campaign, after discussion of this perception with the GM, were:

  1. The PCs may have been less-capable individually than the NPCs, but collectively they were just as good and – with their capability of doing multiple things at the same time – potentially even superior.
  2. The NPCs had problems of their own that were scaled to their capabilities and they never worked together to solve these problems, they were too busy being solo stars. In fact, if any of them found that they had bitten off more than they could chew, they would call on the PCs for help – because the PCs were used to working in a group, and their compatriots of similar vintage were not.
  3. The NPCs were to become increasingly aware that they were aging – slowing down just a hair, just enough that trying to live up to past glories would place them increasingly at risk – and that they needed the PCs to pick up any slack.

You can see how this trio combined to solve most of the problem. It took a while for perceptions to change, but with the above – combined with throwing adventures the PCs’ way that were WAY bigger and more complex than what the campaign had been delivering before – the difference accumulated until it became profound. The bottom line was that the PCs were the stars and the NPCs were enablers and backstory supplements – perhaps better than the PCs in one particular area, but far less capable in several others. Where the NPCs sometimes excelled was at managing to force events to play against their strengths – but this didn’t always work, and sometimes got the NPCs into deeper trouble than they already knew they were in. (Hmm, it’s about time for us to reinforce that lesson with an NPC needing the PCs’ help – and probably one outside our ‘usual suspects’.)

The GM can’t know what the players are going to expect of him unless he’s gamed with them before, and those expectations can be further colored by genre. He has to make his best guess, relative to what he would expect if he were a player, and be prepared to make mistakes and adapt to them as each side gets more comfortable with the other.

11 Social Contracts – Explicit Table Behavior

There are three related aspects to the question of social contracts. The first governs explicit table behavior. Eating at the game table. Eating during play. Requesting breaks. Breaks being time out for one, or for all. What constitutes a ‘cocked’ die and what the procedure is for handling one. What happens when a die goes off the table. Can a player ask for Divine Intervention, and how is it handled if he does?

(A quick side-story – I once had a player seek Divine Intervention. Beelzebub showed up instead, saying “God’s on vacation, I’m filling in. What can I do for you? Just sign here….”)

Other elements of social contracts that govern at-table behavior – interrupting or talking over others; interrupting or talking over the GM; mobile phone usage; touching another player’s dice; lending another player dice; moving another player’s mini; the list just goes on and on.

Here’s a biggie that most people don’t think of: passing notes from one player to another without funneling them through the GM.

There are so many actual and possible situations that, even if your group has evolved specific approaches to each of these questions, they are better not written down. Things can get even more complicated when you are hiring hall space and have to provide your own insurance – that was the governing force that led our group to adopt a formal structure, back in the early-to-mid 80s.

The problem that sometimes comes with putting such things in writing is the mentality that anything not forbidden is permitted. That’s asking for trouble. A better approach is to employ general principles and peer group pressure – and the first time a situation occurs, the GM explains how he thinks it should be handled and lets the group find its own way forwards on that basis. Over time, an accepted “acceptable standard” of behavior will emerge.

12 Social Contracts – Explicit Social Behavior

The same applies to the second tranche, which governs non-gaming social behavior. If you bring snacks or drinks, are you expected to bring enough for everyone? Is one person designated as responsible for such things? Is everyone expected to chip in, i.e. to share some or all of the costs? What about expenses incurred by the GM in prepping for the day’s play – should they expect to be compensated?

As a general set of rules – a starting point for discussion – at my table, everyone brings their own snacks, and if they offer them around, or to the GM specifically, that’s their prerogative. DON’T take unless offered. Everyone provides their own drinks. People should make some effort to clean up after themselves at the end of the day’s play – unless the GM pushed the session later than usual, in which case that’s on him. The GM covers their own costs of GMing as a general rule, unless discussed and agreed-to in advance. If there is some shared expense – hiring a venue – then everyone pays equally EXCEPT the GM.

But there are other approaches. I’ve heard of a group which includes one player who loves to bake; that player provides snacks for the whole group, every time, and in return gets an automatic success on their first save of the day – which the GM agrees to hold until it’s a significant save, not something cooked up just to get their “immunity” out of the way.

And another group, where each person takes it in turn to provide snacks, and another takes it in turn to provide drinks – and if there’s an absence, the roster is permanently re-ordered, so you can’t simply not show up when it’s your turn and get away with it. The snacks person always precedes the drinks person by two places on the rotation, so it can never be the same person responsible for both.

That only scratches the surface of the many possible configurations of social contracts in this space.

If you’re always gaming with the same people, you probably don’t need to write these down anywhere. That can change when someone new joins in.

13 Social Contracts – Implied & Evolving

It’s the habits that come to be accepted norms while never being explicitly stated – sometimes not even noticed – that are the most problematic. The previous social contract elements have evolved and been recognized as the way this group does things. It’s the etiquette of the game. A lot of the time, though, things will become habit without ever drawing attention to themselves. And sometimes, an existing ‘rule’ will be varied for practicality and usage reasons.

There are occasions when I’m GMing where I will need to take a player aside for a private briefing, some private role-play, even a die roll or two that the others aren’t to know about. I work hard at keeping these brief and succinct. It’s often the case that if they really tried, the non-participating players could listen to parts of the conversation, even if they don’t hear it all – but the expectation is that they won’t do so, unless I deliberately locate the conversation so close to the game table that the others can’t help but overhear (I only ever did that once, and the reasons became clear almost immediately – it’s hard to keep secrets when there’s a telepath in the team).

14 Realistic Expectations

Some expectations are realistic, meaning that they are practical and fair, won’t step on the toes of anyone in the playing group, and will achieve the desired results almost all the time.

All four of those criteria have to be met in order for the expectation of behavior to be considered reasonable.

  • Practical – is the expectation something that the people it applies to will actually be able to carry out? If you’re all struggling students without two coins to rub together most of the time, a swear jar or penalty fine for misbehavior is not practical. With a different group under different circumstances, it might be entirely practical.
  • Fair – even if there’s only one participant who is likely to offend against the expectation, it still has to apply to, and restrict, everyone equally – including the GM who is setting the expectation. The way expectations become rules is pertinent – everyone brings their own way of doing things to the table, and – when there’s a problem or a conflict – the group decides whose approach is going to be The Rule from now on, usually based on fairness and practicality. Another GM I know once had The Black Chit – a home-casino-kit plastic coin that he had spray-painted black. Especially grievous behavior at his table led to the black chit being bestowed for a range of time frames – 30 minutes, an hour, a game session, two game sessions. Holding the chit meant that for every roll the character made, the owning player had to roll twice and take the worse result (see the sketch at the end of this section for a sense of how heavy that penalty is). The only parole came if someone else committed a felony of equal or greater measure – when the chit would get passed on and the clock restarted. The GM also warned that there were 49 others still in the kit and he could spray paint as many of them as he needed – a threat against too many such ‘paroles’. Is this rule ‘fair’? Only if it also applies to GM misbehavior, which can take many different forms and only partially overlaps the pool of player offenses.
  • Won’t offend – This is usually a fairly low bar to jump, but it can catch you out when you least expect it. A trivial example – expecting everyone to offer a Christian prayer before play each session, regardless of their faith (I don’t know anyone who actually does this). There are plenty of more subtle examples, too – expecting people to discuss adult situations in a PG13-rated game, for example, regardless of individual moral stances that may be wildly at odds with the morality of their characters. Or requiring a vegetarian to eat meat, or someone with an allergy to eat peanut butter, just to stay in character – though you could argue against those on practicality grounds.
  • Will achieve the desired results most of the time – This is often where the rubber meets the road. I’ve seen countless examples of social rules that weren’t fit for purpose (mercifully few of them at the gaming table) – dress codes come to mind immediately. I once knew someone (a non-gamer) who claimed to be so sensitive to smoke that she demanded that all smokers who visited shower and change into clean clothes for the duration of their stay – yet burned incense at regular intervals.

Any expectation of behavior has to pass all four of these tests, but they often don’t get explicitly tested against them – which means that the failure to pass one of these tests results in an unrealistic expectation being foisted upon the group.
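As flagged in the ‘Fair’ entry above, here’s a quick sketch of how heavy the Black Chit’s ‘roll twice, take the worse’ penalty actually is. It’s purely illustrative and assumes a d20-based system – the anecdote doesn’t say what system was in use:

    # How harsh is "roll twice, take the worse"? Assumes a d20; the
    # anecdote doesn't specify the system, so this is illustrative only.
    from itertools import product

    sides = 20
    straight_avg = sum(range(1, sides + 1)) / sides
    worse_of_two_avg = sum(min(a, b)
                           for a, b in product(range(1, sides + 1), repeat=2)) / sides ** 2

    print(f"Straight d{sides} average:      {straight_avg:.1f}")     # 10.5
    print(f"Worse of two d{sides}s average: {worse_of_two_avg:.1f}") # about 7.2

In round numbers, that’s the equivalent of roughly a -3 on every roll for the duration – strong medicine, which is rather the point.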

15 Unrealistic Expectations

A certain level of exposure to unrealistic expectations will often be tolerated – for a while. Then it will generate complaints, then arguments, then ultimatums, and finally, group disintegration as a cohesive entity. When that happens, people start dropping out, or finding excuses not to attend. The situation becomes toxic.

During the ‘tolerated’ phase, there are few indicators of trouble. That means that the process is already well underway by the time actual complaints start – I’m talking something more serious than table bellyaching. When there is a serious complaint made, however informally, and even if the party making the complaint is smiling at the time, dig out the four criteria of ‘Reasonable’ and give the ruling a fair-dinkum test against that standard. ‘Fair Dinkum,’ in this case, means honest and without preconceptions.

It might be that the expectation passes the tests and the complaint is unreasonable. It might be that the expectation seems reasonable to you but is failing the third test for some reason you aren’t taking into account. Or it might be that what seemed like a good idea at the time is no longer fit for purpose, if it ever was; that can happen because of changing economic circumstances, for example.

When anything but the first is the judgment, the rule has to be changed or voided. This is especially true if the rule fails one of the first two tests – practicality or fairness.

Things get a little stickier when the problem lies in the third rule, because now you’re dealing with personal opinions, with which someone can reasonably disagree. It can be considered unfair to force conformity on someone, for example. Some people will be happy if you find a way to meet them half-way; others will insist on their own standards of behavior as the minimum tolerable level.

By the time you get to the complaints stage when there is a conflict between standards of offensiveness, it’s often the case that positions have hardened, and it can be too late for a compromise to be acceptable. That’s when trouble is most likely to escalate, so solving such problems has to be a priority.

16 Recognizing Expectations

Most expectations evolve naturally, without anyone really noticing, until a newcomer arrives who wasn’t part of that evolution and the only explanation offered is, “that’s just the way we do things around here”. For example, a dropped die doesn’t count, it has to be re-rolled – at my game table. It’s completely acceptable to handle someone else’s dice – if it has been dropped and ended up closer to you than to them. It’s usually acceptable to borrow a die for a roll, so long as it is then returned. Cocked dice have to be re-rolled, and the test is to place a d6 that is smaller than the die being tested on the top of it; if the d6 will stay there on its own, the roll is valid; if not, re-roll.

Other groups have other expectations, other rules, that have evolved from their past experiences. I once visited one such group where the GM insisted on eyeballing every roll for himself – the result of catching someone cheating, I expect.

That means that it can be tricky to recognize uncodified expectations. Your best guide is to watch for ‘habitual ways of doing things’. When you see one, you then need to ask yourself whether you need to formally codify it into a social house rule, or if it’s fine being left as a voluntary practice. What will your tolerance level be for someone who has a different approach? What will the group’s tolerance level be? Those are the questions that will guide you to an answer about codification.

17 Assessing Expectations

It’s fair to expect that most people reading this already have some expectations in place, recognized or unrecognized. The recognized ones don’t represent a general problem; individual rules can be assessed as necessary, and until something causes friction, the group has accepted the restrictions imposed, so there’s no problem (yet).

It’s the unrecognized etiquette rules that are where the real dangers lie, waiting like land-mines to go off underfoot. It can be hard to observe these after the fact; after all, you were busy running the game, or playing your character. The time to pay attention to the way things are done is when the example is right in front of you.

If the subject seems serious enough, take a minute or two for a discussion of it. If not, let it slide. You’ll generally find that once you notice one, you’ll start to see others intruding into your awareness, as you become more sensitized to how others are doing things.

Your opening question – to yourself, if not to the group generally – is why is that person dealing with that situation in that way? Once you have either an answer, or a working theory, you can assess the expectation that has manifested in the practice in question. Critical to this assessment is the question of tolerance for other approaches – but, unless the group have only just come together for the first time, it’s likely that this is fairly high.

I can’t stress this strongly enough: If you don’t need a formal rule about something, don’t impose one. Too many rules end up getting in each others’ way, and distracting from serious concerns with trivialities. If the situation changes and a practice becomes offensive or permits behavior that is unacceptable, a rule can always be imposed – but it’s often better to let sleeping dogs lie.

18 Communicating Expectations

So you’ve noticed some practice or habit spreading amongst the group, and you either find it acceptable or you don’t. Either way, it’s time to communicate your expectations and discuss their reasonableness. This can easily be misconstrued as criticism of the individuals employing the practice, so you need to be very clear from the outset about what the problem is and whether or not the practice is a solution. If you have objections, make them clear, and grounded in dispassionate reality, not in personal judgments.

The other times that expectations need to be communicated are when someone is joining the group, or when the group is gathering for the first time. For the most part, you can let group practices evolve on their own, responding to specific issues as they arise. But there can be exceptions, and they need to be announced up-front.

There are four that seem to come up more often than most, generally because they are imposed from the outside.

  • Smoking – There was a time when smoking restrictions were unheard of. Now, they sometimes seem ubiquitous. It’s quite common to require people to go to a designated area to smoke, even in a private residence – it might be a balcony, or a back yard.
  • No Food Or Drink in common areas – This used to be quite common, where a facility made premises available for social use by groups. My first regular RPG groups were in such an area attached to the University Of NSW; we would play from Noon on Saturday until 1, 2, 3, sometimes even 4 or 5 AM Sunday Mornings. Fortunately, they didn’t have this rule. Unfortunately, the bins provided by the university were entirely inadequate for clean-up afterwards, and campus security hated having people active on campus so late at night. Their complaints eventually led to us being denied permission to assemble in those premises, forcing our relocation (the first of many over the years). But this was the rule at the apartment building I was living in at the time, and it was strictly enforced – even eating an ice-cream as you passed through the lobby was grounds for possible eviction. It’s all about how much work you create for the cleaners, and how much it will cost to have that work attended to.
  • Post-activity cleanup – The gaming group in question moved to the MLC Insurance Building in North Sydney. Government regulations then required groups such as ourselves to carry our own public liability insurance, and the providers of such insurance demanded formal structure within the group – so we had to get serious about our organization. Eventually, we had to relocate to the Institute Of Technology in central Sydney (now the University Of Technology), then moved again within that complex a couple of times, and finally to a Council-owned building in Burwood, where we gamed for many years. In those earlier moves, we were very conscious of the basis of the complaint which led to our initial move – inadequate capacity for clean-up – so we were careful to always meticulously clean up after ourselves, and this became a recognized benefit of the group – we tended to leave areas spotless, even cleaner than we found them. That, perhaps, is more extreme than most groups need – but, in my experience outside of that organized group, expectations about post-game cleanup are among the most poorly communicated and implemented of social rules.
  • Noise restrictions – If you’re playing in a public space, these apply as soon as the noise of multiple conversations intrudes on other patrons. If there are no other patrons to consider – if you have exclusive use of the space for a period of time – then this is usually not an issue. In a private residence, the story can be quite different, depending on local laws. In general, you’re good until some point in the evening – it’s been 7:30 at times, 10:30 at others, and Midnight at still others, here where I am. Beyond that, you have to keep the noise down. I once had a neighbor call the police to complain about both the noise and the content – typical gaming banter about hacking people up, beaning them with a mace, and so on. The police knocked on the door at about 1 AM to advise that there had been a complaint about ‘terrorists having a wild party’. RPGs weren’t well known at the time – this coincided with the Satanic Panic of the 80s – but once we explained what we were doing and promised to keep the noise down, everything was fine. The next day, I made it a point to apologize to the neighbor for disturbing them, explaining again what we were doing and that there was no real mayhem involved, and patched things up on that front. At a different location, I once had a neighbor who was not so easily mollified; he thought RPGs were dangerous and subversive, and complained so regularly that the police no longer took his calls seriously. So rules in this area depend massively on factors completely outside your control – but they are a community and legal standard that you have to accept and obey.

19 Communicating Explicit Promises

Anything that qualifies as an explicit promise by the GM, or an explicit expectation that the players need to promise to adhere to, should be spelled out in writing in the campaign briefing materials. Some heavily sandboxed groups may find that this is the totality of the notes made available pre-game to the players!

It’s often a good idea to have a serious review of your explicit promises before making this list public.

    Is there anything your campaign would be better served by leaving implied, and how can you make that switch?

    Is there anything that will only apply in the latter or mid-parts of the campaign, and if so, is it appropriately ‘fenced off’, i.e. wrapped in appropriate limiting language?

    Is there anything in those lists that should catch the players by surprise as it develops? If so, your explicit promise should be more general and vague, even though you know precisely what interpretation you are going to place on it.

    What about explicit promises that are only going to apply in the early campaign – the ones offering guidance to the players as to what kind of characters to create? Are they appropriately restricted, and are some going to be more dominant than others?

    Are there any that synergise, that – in combination – say more than either does alone?

    What expectations are you seeking to create in the minds of the players and why?

20 Communicating Implied Promises

This is a lot trickier. First of all, you need to recognize that there is an implied promise. Second, you may have a specific interpretation in mind, but to the players, every reasonable interpretation is equally valid – if anyone builds toward the ‘wrong’ one, will they be put out? Or is there scope for you to enlarge upon your basic plans to accommodate the interpretation of the player and his character – a way to play to the PC’s strengths?

The answer is usually yes, with a bit of effort. If you need to revise your campaign plan to accommodate the actual PCs you’re being offered, the time to do so is now, so that any hints and plot seeds that you plant (intending them to bloom later) will be consistent with that eventual manifestation.

Probably the hardest thing for any GM or writer to do is look at their text critically as though they have never read it before, looking for the nuances and textures and implications, for what is hinted at but not stated outright, for what is implied by the actual text and not what the GM / writer intended to foreshadow. It’s incredibly hard to separate what you know or intend from what is actually on the page. But it’s necessary to find a way to do it.

One approach is to let an experienced player review it – one who is not going to be part of the campaign. In return, you can at some point perform a similar service for them. Get them to generate a summary paragraph outlining what they would expect from what you have written, edited to conform to your actual intent and not to any false impressions – unless you want to reserve the accurate foretelling as a campaign-level plot twist.

Some people can achieve the right frame of mind by letting the campaign briefing sit, unread, for a couple of weeks while they work on something else completely, doing nothing regarding the campaign in question. It can also help to read something else for 30 minutes to an hour before you start reading the campaign briefing.

One technique that works is to copy and paste the whole briefing into an AI like ChatGPT and ask what implications it can perceive concerning the overarching narrative that is to emerge from this foundation. Sometimes it will get things wildly wrong, sometimes it will be close enough to the mark to be useful, and sometimes it will reveal things to you that you never realized were there. Those light-bulb moments make reading and assessing the resulting review line by line, item by item, worth the effort ten times over.

It’s usually helpful, when doing this, to explicitly define the copy-and-pasted text as the outlines of a series of fictional stories that will combine into a broader narrative. And mention that it’s for an RPG, so there will be character-driven elements that will only emerge from actual play and decisions by others, and which can’t be predicted in advance with any certainty.
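
Something along these lines usually does the job – this is purely an illustration, so adjust the wording to suit your own campaign and whichever AI you’re using:

    “The following is the outline of a series of fictional stories that will combine into one broader narrative. It is the foundation of an RPG campaign, so there will be character-driven elements that emerge only from actual play and from the decisions of others, and that can’t be predicted in advance with any certainty. What implications can you perceive concerning the overarching narrative that is to emerge from this foundation? [paste the briefing here]”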

You can also ask what character race and class combinations would be best served by the broader plotline, and how the text can be improved to better serve them. Again, most of this will be exactly what you expected to be there, but sometimes there will be a surprise or two.

Are there any character race and class combinations that are served particularly badly by the plotline that is implied? You can consider warning players explicitly that these combinations may not work well in the campaign.

  • Here’s a valuable tip I haven’t seen elsewhere – if you bookmark the actual chat session in which these things are discussed and then return to that bookmark when next you have a question about the campaign or something you want to develop with AI assistance, it will be a continuation of the same session with the AI remembering everything that’s been said on the subject as though it had just been said. This saves you from having to repeat yourself. As preamble to further discussions and planning, copy and paste an updated version of the briefing materials, especially if it’s now in its final form.

Once you have recognized the implications of what you’ve said, the next questions to be answered are: “Is this what you meant to imply? If not, is it better or worse? Will it create false or misleading expectations?”

Revise the text accordingly. And, if the changes are radical, review it again (which is even harder to do, but necessary).

There are certain things that you want to communicate explicitly to the players before character construction begins. Things like variations on races, and social attitudes toward specific classes and adventurers in general. A few highlights of the things the characters would know all about from growing up in the cultures of their birth / childhood, like societies and economics and recent history and so on. And there are going to be other things that you only want to hint at or foreshadow. You communicate the latter through implication, saying things that suggest them without coming out with a declarative statement. Those are the implied promises that you need to communicate.

21 Communicating Failures

No plan, no campaign, no adventure survives contact with actual PCs and their players unscathed. They will all misinterpret something at some point, they will shoot off on tangents, they will decide they aren’t interested in the adventure premise the GM is holding out in front of them.

So it is, too, with created expectations. There will be pathways that you intended to explore, but that don’t fit the mix of characters with which you have been presented – if you were to actually go down those pathways, the PCs would be fish out of water, and while that can be occasionally fun if that’s what’s to be expected, it can be problematic in the long run.

There will be social contract elements that don’t work or that fail under particular pressure. Handling these is as much an element of administering the campaign as is the handing out of experience. YOU have to lead the discussion of what failed and what should be done about it.

That can be awkward when it’s a failure on your part, an inability to deliver on a promise made or implied, but that doesn’t absolve you of the responsibility; it’s your campaign, and you have to manage these things, setting aside game time if necessary.

Be dispassionate. Don’t make it personal. Be self-critical to the same extent as you would criticize others. Treat every failure as an opportunity to improve, to do better or make the campaign better. Roll with the punches, as I had to do in the case of My Biggest Mistakes: Magneto’s Maze – My B.A. Felton Moment, one of my greatest failures – and an object lesson for other GMs.

Every cloud has a silver lining, but sometimes you have to look for it. Doing so makes you a better GM and a happier person. There are no magic bullets when it comes to relationships, and those include the GM-player relationships, the Campaign-Character relationships, and even the Player-Character relationships. Failures can happen in any of these spaces. It’s up to you to communicate them, articulate them, offer up a proposed management plan for dealing with the problem, and lead a review of that plan, knowing that it could be rejected by the players.

Afterword

I’ve seen a lot of suggestions over the years regarding player survey forms. They often sound good – a way to get players to articulate what they like and don’t like about the campaign. My experience has been that they only work well if the players take them seriously, and only work at all if the players have sufficient critical faculties to analyze the campaign properly. And sometimes a form will say things the player didn’t really intend to say, or didn’t mean to make a fuss over – “but I had to write something”.

Whether you use them or not is up to you. But my experience is that if you provide players with a forum, you had better act, and act quickly, on whatever issues they raise. By offering up the form, you are making a binding (if implicit) promise to act on the feedback you receive. The worst thing you can do is nothing.

If you choose not to change something the players don’t like, it might be because they aren’t yet seeing the implications and consequences that are intended to develop from it; the path of the campaign’s development can get lost in white noise. It doesn’t matter how random an event seems – it still has to factor into the broader story that you and the PCs are telling, and give the players a space to express their own creativity. Forging those connections is up to you, an essential part of being a GM. But something still has to change in response.

That’s why I like character-driven plot arcs so much – they provide a definitive pathway, a road-map to a resolution, and that can be as definitive or profound as the player wants it to be. There are other solutions, but that’s one that works for me – far better than player surveys.
