There was much debate a little while back about the Modern bannings. Should Mox Opal get the axe? Is the card selection of Ancient Stirrings too good for Modern? I saw a lot of people making arguments on either side, but I did not see much solid evidence for either case. The debate was based more on how well KCI was doing in tournaments and whether, in a theoretical sense, it is too good to exist in Modern. I think it would be useful in this debate to attempt to quantify the power of Ancient Stirrings. Due to the complexities of Magic, precisely quantifying the power of any particular card is an immense problem. For these classes of problems, I prefer to enlist the aid of computers.
I love data. Every time I read about people accumulating large amounts of data regarding Magic, it gets me excited. Poring over logs of matchup data that we can use to understand current tournament trends is a treat for me. I frequently use tools like hypergeometric calculators to aid my deck building. If you have never used one, I highly recommend it. The cross-section of statistics and gaming is underutilized and can greatly increase our understanding of Magic strategy.
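As a quick illustration of the kind of question a hypergeometric calculator answers, here is a minimal Python sketch using only the standard library (the function name and example numbers are mine):

```python
from math import comb

def at_least_one(deck_size, copies, draws):
    """Chance of seeing at least one of `copies` cards among `draws`
    cards drawn from a `deck_size`-card deck."""
    # Complement of the probability of drawing zero copies.
    return 1 - comb(deck_size - copies, draws) / comb(deck_size, draws)

# A four-of in a 60-card deck appears in roughly 40% of seven-card hands.
print(f"{at_least_one(60, 4, 7):.1%}")  # 39.9%
```

The same function answers most "how many copies should I run?" questions by varying `copies` and `draws`.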
Given that I have a background in computer programming, I occasionally put it to good use for Magic. My main use for it is in designing Monte Carlo simulations. When people use the term Monte Carlo, all it means is that randomness is involved. This can mean that there is inherent randomness in the events being simulated, that the actions taken in the simulation are random, or both. In the context of a game like chess where there is no randomness, the randomness could come from the moves selected. If I wanted to determine which opening move is best, I could randomly play out millions of games for each opening move to determine which move has the highest win percentage. In the context of a card game like Magic, the randomness comes from the cards drawn. For example, I could design a simulation for a combo deck that is trying to determine how often it can win on turn four just by goldfishing. The logic of the actions the simulated player takes is preset, but the cards that the player draws are random each time.
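A goldfishing simulation of the kind described above can be sketched in a few lines. This toy version, with hypothetical numbers, estimates how often a deck sees at least one of four copies of a key card in its first ten cards, which is roughly turn four on the play:

```python
import random

def seen_by_turn_four(n_trials=100_000, copies=4, deck_size=60, cards_seen=10):
    """Estimate how often at least one of `copies` key cards shows up in
    the first `cards_seen` cards: a seven-card hand plus three draw steps
    is roughly turn four on the play."""
    deck = [True] * copies + [False] * (deck_size - copies)
    hits = 0
    for _ in range(n_trials):
        random.shuffle(deck)          # the randomness: a fresh shuffle per game
        hits += any(deck[:cards_seen])
    return hits / n_trials

# The estimate fluctuates around the exact hypergeometric answer (about 52.8%).
print(f"{seen_by_turn_four():.1%}")
```

For a question this simple the hypergeometric formula gives the exact answer directly; the value of the simulation approach is that it scales up to decision logic (mulligans, sequencing, card selection) that has no closed-form solution.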
Monte Carlo simulations are incredibly useful for games as complex as Magic. If we want to use data to inform our decisions about Magic, we need a very large sample size, far larger than any individual playing out games on their own can provide. Out of necessity, we need to speed up the process. With simulations, we can play out thousands or millions of games in the time it takes to shuffle up a deck.
Simulations have their limits, though. They are good for answering simple questions, like how often you will draw a specific card. Answering a question like who is favored in a matchup is much too difficult; that level of analysis would require revolutionary advances in game-playing AI.
What to Test
To see how good Ancient Stirrings is, I wanted to determine how much consistency it adds to a deck. Most decks that play Ancient Stirrings do so primarily to dig for specific cards. In decks like Lantern Control or Tron, it is there to find Ensnaring Bridge or Tron lands. The card does have additional utility, like finding lock pieces in the case of Lantern or payoffs in the case of Tron, but that utility is a secondary benefit, not its main purpose. If those decks could play one-mana tutors that grabbed only Ensnaring Bridge or a Tron land, they certainly would. I think seeing how close Ancient Stirrings comes to a one-mana Demonic Tutor is a reasonable measure of its power level.
For the purposes of testing, I chose to simulate goldfishing Lantern Control trying to find an Ensnaring Bridge. In a lot of matchups, the deck functions as a combo deck trying to find Ensnaring Bridge to lock the opponent out of the game. This is the perfect scenario for gauging the added consistency of Ancient Stirrings. We can treat Ancient Stirrings effectively as additional copies of Bridge—the question becomes exactly how many each Ancient Stirrings is worth.
Assumptions for Goldfishing
In designing simulations, certain assumptions need to be made. Magic is an incredibly complex game, and trying to capture all of that complexity is a difficult task. The beginning assumptions help simplify the problem for testing. It is important to be careful about the assumptions made, though. They need to be made in a way that still allows for drawing meaningful conclusions. If the assumptions are too broad, the results will not be an accurate reflection of actual games.
I looked at a few different Lantern lists that have been posted lately to get an idea of the common mana bases. All of the lists I looked at play 18 lands and four Mox Opal. Counting the Mox Opals, there are 15 green sources in the mana base: four Glimmervoid, four Spire of Industry, three Botanical Sanctum, and the four Mox Opals. For the purposes of my simulations, I assumed that all of the green sources could always tap for green. I ran some simulations with varying numbers of green sources, and it changed the percentages by less than a full percentage point, so I think this assumption is a reasonable approximation.
I used a very basic mulligan heuristic. Six- and seven-card hands were mulliganed if they contained six or more lands or fewer than two. The simulations kept all five-card hands. This heuristic is fairly generous, but any more complexity would require more context than a goldfishing scenario can provide. None of the simulations accounted for scrying after mulligans. This deflates the results slightly, but the comparisons are unaffected.
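The heuristic above is simple enough to state precisely in code. A sketch with hypothetical card labels (any label ending in "Land" counts as a land):

```python
import random

def keepable(hand):
    """The article's heuristic: six- and seven-card hands are kept only
    with two to five lands; every five-card hand is kept."""
    if len(hand) <= 5:
        return True
    lands = sum(1 for card in hand if card.endswith("Land"))
    return 2 <= lands <= 5

def opening_hand(deck, rng):
    """Shuffle and mulligan (no scry, matching the simulations)
    until a keepable hand appears."""
    for size in (7, 6, 5):
        rng.shuffle(deck)
        hand = deck[:size]
        if keepable(hand):
            break
    return hand, deck[len(hand):]
```

Because five-card hands are always kept, the loop is guaranteed to terminate after at most two mulligans.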
The approach to playing out turns is straightforward. When playing a land, the simulation prioritized green sources over non-green sources. Whenever it had an Ancient Stirrings and an untapped green source, it cast the Stirrings. When deciding what card to take off an Ancient Stirrings, it prioritized, in order: an Ensnaring Bridge, a green source, a non-green source. Beyond that, the card taken does not matter, as it would have no impact on the simulation. Then, on turn three, it checked whether it had found an Ensnaring Bridge and enough lands to cast it. Each game with a castable Bridge on turn three was counted as a success.
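Putting the pieces together, a simplified version of that playout might look like the following. This is a sketch under extra assumptions of my own, not the article's exact program: Mox Opal is treated as one more green "land" that occupies a land drop, mulligans are omitted, at most one Stirrings is cast per turn, and Stirrings is only cast on turns one and two. All card labels are hypothetical.

```python
import random

def build_deck(bridges=4, stirrings=4):
    """Hypothetical 60-card list reduced to the card classes that matter:
    15 green sources, 7 other lands, Bridges, Stirrings, and filler."""
    deck = ["Green"] * 15 + ["Land"] * 7
    deck += ["Bridge"] * bridges + ["Stirrings"] * stirrings
    deck += ["Spell"] * (60 - len(deck))
    return deck

def goldfish(deck, on_play, rng):
    """One goldfish game: True if Ensnaring Bridge is castable on turn three."""
    rng.shuffle(deck)
    hand, library = deck[:7], deck[7:]
    lands = green = 0
    for turn in (1, 2, 3):
        if turn > 1 or not on_play:
            hand.append(library.pop(0))      # draw step
        for card in ("Green", "Land"):       # land drop, green sources first
            if card in hand:
                hand.remove(card)
                lands += 1
                green += card == "Green"
                break
        if turn < 3 and green and "Stirrings" in hand:
            hand.remove("Stirrings")
            top5, library = library[:5], library[5:]
            for pick in ("Bridge", "Green", "Land"):
                if pick in top5:             # Bridge > green > non-green
                    hand.append(pick)
                    break
    # Success: a Bridge in hand plus three lands for its generic cost.
    return "Bridge" in hand and lands >= 3

rng = random.Random(7)
n = 20_000
rate = sum(goldfish(build_deck(), True, rng) for _ in range(n)) / n
print(f"turn-three Bridge rate on the play (4 Bridges, 4 Stirrings): {rate:.1%}")
```

The exact percentages from a stripped-down sketch like this will differ from the article's, but the comparative structure is the same: rerun it with different `bridges` and `stirrings` counts and compare the rates.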
Conditions for the Simulations
For the simulations, I decided I wanted to compare the impact of adding more than four Ensnaring Bridges to a deck against the impact of four Ancient Stirrings. This will give insight into how close Ancient Stirrings is to a tutor. Tutors function as effective additional copies of a combo piece. The closer Ancient Stirrings is to adding an Ensnaring Bridge to the deck, the closer it is to a tutor.
I ran a total of twelve different simulations. The first was with four Ensnaring Bridges and no Ancient Stirrings, to serve as a baseline. The next had four Bridges and four Stirrings. Finally, I ran four different simulations with no Ancient Stirrings and 5, 6, 7, or 8 Bridges respectively. I did these six simulations for being on the play and for being on the draw to cover all goldfishing scenarios. For each of the twelve scenarios, I ran 100,000 goldfish games to provide a sufficient sample size. The program recorded the number of successful games as defined by casting an Ensnaring Bridge on turn three. Using that data, I determined the percentage of successful games.
| # of Ensnaring Bridge | # of Ancient Stirrings | % of games Bridge was cast (Play) | % of games Bridge was cast (Draw) |
| --- | --- | --- | --- |
I find the results of this experiment astonishing. Having four Ancient Stirrings and four Ensnaring Bridges in the deck is very close to having six copies of Ensnaring Bridge: each copy of Ancient Stirrings added to the deck is effectively half of a Demonic Tutor. The massive impact on consistency that Ancient Stirrings brings is something I would not have intuitively picked up on while playing the deck. It only looks at the top five cards, a mere one twelfth of the deck, nowhere close to searching the entire library.
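A quick hypergeometric check puts that intuition in numbers. Assuming four Bridges remain among the 53 cards outside a seven-card opener (my illustrative scenario, not a figure from the article), a single Stirrings hits one only about a third of the time; the simulated impact is larger because a deck gets multiple looks over a game and Stirrings also finds the lands needed to cast the Bridge.

```python
from math import comb

def stirrings_hit(copies=4, unseen=53, depth=5):
    """Chance that looking `depth` cards deep reveals at least one of
    `copies` targets among `unseen` remaining cards."""
    return 1 - comb(unseen - copies, depth) / comb(unseen, depth)

# Four Bridges left among 53 unseen cards: a single Stirrings hits ~33.6%.
print(f"{stirrings_hit():.1%}")
```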
Seeing the impact of Ancient Stirrings in this context makes me want to look at some of the other card selection spells in Modern. Perhaps some of the two-mana ones that dig five cards deep are more playable than currently believed. Maybe cards like Peer Through Depths, Grisly Salvage, or Commune with the Gods are secretly great. The last two even have the side benefit of filling the graveyard.
It is entirely possible, however, that two mana is just the breaking point. One mana is very strong and possibly too good, and two mana might simply not be good enough. It is difficult to tell on its face, but that is what testing is for.
Should Ancient Stirrings be Banned?
Before picking up the ban hammer, it is important to keep in mind the deck-building constraints Ancient Stirrings imposes. Loading a deck with 40+ colorless cards to ensure that it always gets a card is a big ask. This leads to a lot of inflexibility in card choices. The card selection spells currently on the Modern banlist—Ponder and Preordain—only ask the deck to have lands that tap for blue. That is a much looser constraint. Personally, I like that Ancient Stirrings exists as a reward for building a colorless deck.
Ultimately, I do not think Ancient Stirrings deserves a ban. It offers the best rate of any card selection spell in Modern, but the deck-building drawbacks are too significant. It does function as a half tutor, but only for colorless cards. If a deck using Ancient Stirrings ever becomes too problematic for Modern, I expect the problem will lie with the card it is helping to find, not Ancient Stirrings itself.