
the association for computational heresy presents

a record of the proceedings of

SIGBOVIK 2017

the eleventh annual intercalary robot dance party in celebration of
workshop on symposium about 26th birthdays; in particular, that of
harry q. bovik

carnegie mellon university

pittsburgh, pa

april 0, 2017


SIGBOVIK

A Record of the Proceedings of SIGBOVIK 2017

ISSN 2155-0166

April 0, 2017

Copyright is maintained by the individual authors, though obviously
this all gets posted to the Internet and stuff, because it's 2017.

Permission to make digital or hard copies of portions of this work for
personal use is granted; permission to make digital or hard copies of
portions of this work for classroom use is also granted, but seems
ill-advised. Abstracting with credit is permitted; abstracting with
credit cards seems difficult.

Additional copies of this work may be ordered from Lulu; refer to
http://sigbovik.org for details.


SIGBOVIK 2017

Message from the Organizing Committee

These are the proceedings of the 1011₂th annual conference of the Special Interest Group on Harry Q.¹ Bovik, organised by the Association for Computational Heresy in honour of Harry Q. Bovik's 26.20945...th birthday.

2017 was a monumental year for SIGBOVIK. We experienced a factor of 1,522 increase in submissions over last year, that's to say, we received 35,000 submissions. For your convenience, we made a histogram showing the number of submissions across time in Figure 1. As much as we would have liked to accept all of these scientific breakthroughs, they would have caused our proceedings to weigh about 868 lbs or 394 kg,² and face it, nobody needs that heavy of a paper paperweight.

[Figure 1 was a histogram here: y-axis "Number of submissions" (0 to 30,000); x-axis "Harry Q. Bovik's age at the submission deadline" (62 to 74).]

Figure 1: Number of SIGBOVIK submissions across time

Our reviewers tirelessly reviewed these submissions, and concluded that only ε of them were truly worthy of being accepted for inclusion in the SIGBOVIK proceedings. "What, if anything, is epsilon?", you might ask. In this case, it so happens that ε = 0.001, a value of ε found in the wild in the C++ species of programs 5.4% of the time, according to seminal work published in the proceedings of SIGBOVIK 2014 [PhD14, Figure 6].

¹In case you were wondering: Quahaug.

²Assuming a page weight of 4.5 grams, double-sided printing, and an average of 5 pages per submission. This corresponds to a bit more than the weight of a Kodiak bear (Ursus arctos middendorffi) [Wik17], pictured left.


Comparing the SIGBOVIK 2017 acceptance rate with the acceptance rates of other "prestigious" computer science conferences (Table 1), we see that SIGBOVIK is anywhere from 169 times (SIGCOMM) to 283 times (SODA) more prestigious than other top computer science conferences.

| Conference | Year | Acceptance rate |
|------------|------|-----------------|
| FOCS       | 2016 | 27.7%           |
| ICML       | 2016 | 25%             |
| POPL       | 2016 | 23%             |
| SODA       | 2016 | 28.3%           |
| SOSP       | 2015 | 17%             |
| STOC       | 2016 | 24.9%           |
| SIGCOMM    | 2016 | 16.9%           |
| VLDB       | 2016 | 21.2%           |

Table 1: Acceptance rates at "prestigious" computer science conferences

SIGBOVIK used a new submissions website this year. Woven from the finest spider webs by Jordan Brown and Jean Yang, this website survived a DoS attack from the PC^W^W^W^W^W^W^W successfully handled 35,000 totally genuine submissions. We are grateful for their support and for letting us stress-test their software.

We hope that you will find the seminal works below informative and illustrative of the high-calibre research SIGBOVIK has become known for. From breakthroughs in debugging to new advances in impure math and game theory, we are sure there will be something of interest for everybody.³

Our thanks to the volunteers who made SIGBOVIK possible. In particular, we would like to thank Sol Boucher for assembling these proceedings; Rose Bohrer, Stefan Muller, and Ben Blum for reviewing papers and ensuring SIGBOVIK accepts only the highest calibre research; Carlo Angiuli for maintaining the SIGBOVIK website and his helpful advice; Chris Yu for the artwork; Catherine Copetas for managing SIGBOVIK finances and other administrative concerns; Ryan Kavanagh for organising the organisers; and last, but not least, the authors, without whom none of this would be possible.

The SIGBOVIK 2017 Organising Committee

Pittsburgh, PA

References

[PhD14] Dr. Tom Murphy VII Ph.D. "What, if anything, is epsilon?" In: A Record of the Proceedings of SIGBOVIK 2014. Apr. 2014, pp. 93-97.

[Wik17] Wikipedia. Kodiak bear. Mar. 2017. URL: https://en.wikipedia.org/wiki/Kodiak_bear.

³And if you can't find something you're interested in, you should have submitted it!


These papers are so awesome few can bear to look away!

Bear track: Strong Accept
1. Is this the shortest SIGBOVIK paper?
2. I'm on vacation so I'm submitting a vacation picture instead of a paper, or, perhaps, a vacation photo in the format of a paper; I hope a predatory open access journal e-mails me about this article

Mule track: I'm Going to Use This!
3. Who sorts the sorters?
4. Objectionability: A computational view of mathematical computation
5. Towards a well-defined and secure flirtation protocol
6. A solution to the two-body problem

Magpie track: Who Said Money Can't Buy...
7. Call For Partners: Romance with rigor
8. Nano electronic transplant in brain and eyes to analyze and process human thoughts and emotions
9. Grant proposal: Monetary policy of sparkly things

Monkey track: It's Only a Game Theory
10. Is it Percival time yet?: A preliminary analysis of Avalon gameplay and strategy
11. Dr. Boozehead, or How I learned to stop worrying and get drunk: Design principles and analysis of drinking games in the silicon age
12. A boring follow-up paper to "Which ITG stepcharts are turniest?" titled, "Which ITG stepcharts are crossoveriest and/or footswitchiest?"

Dog track: Nonstandard ML
13. Batch normalization for improved DNN performance, my ass
14. Colonel density estimation
15. Degenerative adversarial networks
16. Stopping GAN violence: Generative unadversarial networks

Groundhog track: Putting the "Under" in "Image Understanding"
17. DeepDoggo: Learning the answer to "Who's a good dog?"
18. e ee e ee e e ee e ee e e e ee e e e ee e e e ee e e e ee e e e e ee e ee
19. Distinguishing humans from other forms of cattle

Chipmunk track: New and "Improved" Languages
20. On the Turing completeness of MS PowerPoint
21. Effective multi-threading in Befunge
22. Automatic distributed execution of LLVM code using SQL JIT compilation
23. WysiScript: Programming via direct syntax highlighting
24. ZM˜˜ # PRinty# C with ABC!

Insect track: Debugging
25. Amazon Web Services: Field observations related to arachnid cohabitation
26. Blackberry Debugging

Moose track: Impure Math and \Big Data
27. Fake news logic
28. RRR for UUU: Exact analysis of pee queue systems with perfect urinal etiquette
29. The next 700 type systems
30. A modular approach to state-of-the-art big data visualization
31. Efficient computation of an optimal portmantout

Raccoon track: Talkin' Trash
32. Garbage collection for heaps only a mother could love
33. A new paradigm for robotic dust collection: Theorems, user studies, and a field study

Cat track: Work-~~stealing~~saving
34. The Zero-color Theorem: An optimal poster design algorithm
35. Cerebral genus: Dead duck or phoenix?

{width="0.725in"
height="0.30277777777777776in"}{width="0.725in"
height="0.30277777777777776in"}{width="0.725in"
height="0.30277777777777776in"}{width="0.725in"
height="0.30277777777777776in"}

Bear track

Strong Accept

{width="0.725in"
height="0.30277777777777776in"}{width="0.725in"
height="0.30277777777777776in"}{width="0.725in"
height="0.30277777777777776in"}{width="0.725in"
height="0.30277777777777776in"}

1 Is this the shortest SIGBOVIK paper?

Joe Doyle

2 I'm on vacation so I'm submitting a vacation picture instead of a
paper, or, perhaps, a vacation photo in the format of a paper; I hope
a predatory open access journal e-mails me about this article

Jim McCann


1

Is This the Shortest SIGBOVIK Paper?

Joe Doyle

March 7, 2017

Maybe not.



CONFIDENTIAL COMMITTEE MATERIALS

SIGBOVIK 2017 Paper Review

Paper 14: Is This the Shortest SIGBOVIK Paper?

Reviewer ϕ

Rating: e/Ã

Confidence: ϵ

While the importance of this work to the SIGBOVIK community can't be debated (because we have no established protocol for such a debate), it is not novel. The content of this paper is entirely contained within the papers "The Portmantout" from SIGBOVIK 2015, "A shortmantout" from SIGBOVIK 2016 and "Efficient Computation of an Optimal Portmantout" from the present SIGBOVIK. It should therefore only be accepted if our goal is to increase our page count which, as always, it is.


2

I'm on Vacation So I'm Submitting A Vacation Picture Instead Of A Paper, Or, Perhaps, A Vacation Photo In The Format Of A Paper; I Hope A Predatory Open Access Journal E-Mails Me About This Article.

Jim McCann

TCHOW llc

{width="7.0030971128608925in"
height="4.696319991251094in"}Figure 1: At the "Inventions" buffet in
Hotel Disneyland.

e-mail: ix@tchow.com


{width="0.5in"
height="0.26200021872265966in"}{width="0.5in"
height="0.26200021872265966in"}{width="0.5in"
height="0.26200021872265966in"}{width="0.5in"
height="0.26200021872265966in"}{width="0.5in"
height="0.26200021872265966in"}{width="0.5in"
height="0.26200021872265966in"}

Mule track

I'm Going to Use This!

{width="0.5in"
height="0.26200021872265966in"}{width="0.5in"
height="0.26200021872265966in"}{width="0.5in"
height="0.26200021872265966in"}{width="0.5in"
height="0.26200021872265966in"}{width="0.5in"
height="0.26200021872265966in"}{width="0.5in"
height="0.26200021872265966in"}

3 Who sorts the sorters?

Alexander R. Frieder

4 Objectionability: A computational view of mathematical computation

Cameron Wong and Dez Reed

5 Towards a well-defined and secure flirtation protocol

Rowan Copley

6 A solution to the two-body problem

Rose Bohrer


3

Who sorts the sorters?

Alexander R. Frieder

email: alex.frieder@gmail.com

March 31, 2017

1 Introduction

Any introductory programming class will cover the basics of different sorting algorithms. Sorting algorithms are the perfect introduction to algorithmic analysis for a multitude of reasons. First, there are straightforward proofs of the lower bound in terms of performance, thus we can easily teach students that O(n log(n)) sorts are optimal algorithmically. Second, the naïve sorts are fairly far from optimal and that is simple to show students. Finally, sorting is a deceptively easy task for humans. To expand on that final point, given a list of numbers, it is very easy for a human to order them correctly. Indeed, most introductory lessons will have the teacher prompting the student to describe exactly how they put them in order.

The important takeaway from the lesson is that sorting is not a trivial process and it must be done through some algorithm. It does not suffice to take numbers and simply put them in order. The lesson will then proceed by walking the student through multiple algorithms and then displaying their performance on some large set of numbers, presenting how long each algorithm took. The teacher will then often conclude with some statement like "Thus we can see that merge sort is faster than selection sort". However, this directly contradicts the previous lesson: the teacher has taken performance numbers and somehow magically put them in order. Indeed, we need to use a sorting algorithm to determine which sorting algorithm is best. But which sorting algorithm is best for that task?

In this paper, we will review a variety of sorting algorithms and evaluate their performance on large randomly sorted arrays of numbers. We will then use the more general version of sorting to compare each sorting algorithm with each other to correctly evaluate which sorting algorithm is the best for sorting sorting algorithms. Finally, we will use the same algorithms to determine which sorting algorithm is best for determining which sorting algorithm is best for determining which sorting algorithm is best.

2 Sorting Algorithms

In this section we will review six different sorting algorithms that will be used throughout this paper and present, without proof¹, their algorithmic complexity. If you feel you are an expert sorter, feel free to proceed to section 3. I will not be offended. Not at all. I promise. These are not tears. I am just cutting onions.

2.1 Selection Sort

Selection sort is often the first sorting algorithm taught and is often the way humans intuitively sort. The algorithm behind selection sort is absurdly simple: find the smallest element left in the list, remove it, and put it in the output list. Repeat. Once the original list is empty, the output list will be sorted.

Since selection sort requires finding the minimum at each step, it runs in O(n²) time.
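A minimal sketch of this procedure in Python (the language the paper's experiments were eventually run in); the function name is ours, not the author's:

```python
def selection_sort(xs):
    """Repeatedly pull the smallest remaining element into the output list."""
    xs = list(xs)           # work on a copy so the input survives
    out = []
    while xs:
        smallest = min(xs)  # O(n) scan, performed n times: O(n^2) overall
        xs.remove(smallest)
        out.append(smallest)
    return out
```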

¹Unless you count Wikipedia as proof.


Exercise 1

Teach your dog² how to perform selection sort.

2.2 Insertion Sort

Insertion sort is a similar idea to selection sort, but instead of putting in the effort to pull the correct element from the original list, we put in the effort to put a random element from the original list into the right location. For each element in the original list, we remove it, iterate through the output list, find the smallest element larger than it, and insert it immediately before that. Thus, at each step, the output list is sorted, and at the end, the original list is empty and the output list is sorted.

Since inserting an element may require iterating over the entire output list, insertion sort runs in O(n²) time.
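A corresponding Python sketch (ours), keeping the output list sorted at every step:

```python
def insertion_sort(xs):
    """Move each input element into its proper place in the output list."""
    out = []
    for x in xs:
        # Scan for the first element of out not smaller than x...
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)    # ...and insert x immediately before it.
    return out
```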

Exercise 2

Put the words of this paper into alphabetical order using insertion sort³.

2.3 Odd-Even Sort

Odd-even sort is a parallelizable version of the notorious and often politicized bubble sort⁴. In odd-even sort, we first iterate through the odd indices and for each element that is larger than the next element, we swap them. We then repeat for all the even indices. Finally, we repeat this pair of steps until no swaps are made. By definition, the list is now sorted. Since each step only swaps either exclusively odd or exclusively even indexed elements with its neighbor, the entire step can be completely parallelized.

Despite being parallelizable, odd-even sort is not more efficient than bubble sort and runs in a worst case of O(n²).

²Other pets may suffice for this exercise, but not cats. Cats are fundamentally incapable of learning algorithms.

³Ensure you have at least 15 minutes of free time before attempting this.

⁴https://www.youtube.com/watch?v=k4RRi_ntQc8

Exercise 3

There actually exists exactly one list of integers on which odd-even sort does not terminate. Find that list, treat the numbers as binary, read off the ASCII-encoded message, and continue your quest to find the Fountain of Youth.

2.4 Comb Sort

Comb sort is also a generalization of the notorious and often politicized bubble sort⁵. Comb sort is parametrized by a so-called shrink factor k, optimally around 1.3⁶. Comb sort begins by iterating over every pair of elements n away from each other, where n is the length of the list, and swapping the elements if they are not sorted relative to each other. Initially, that is only the first and last element. The distance is then set to n/k and the process is repeated. This process repeats, dividing the distance by k until the distance is 1, at which point we revert to bubble sort, swapping out of order pairs until the list is sorted.

Despite having faster run time than bubble sort, comb sort matches its complexity of O(n²).

Exercise 4

Do 30 push-ups, 30 sit-ups, and wall sit for at least 2 minutes.

2.5 Merge Sort

Merge Sort is the prototypical divide and conquer algorithm. First, recursively sort the first half of the list and the second half of the list, then merge them, by removing the smaller head of the two lists and inserting it into an output list. Repeat until both lists are empty. The output list is now sorted.

Merge sort takes exactly Θ(n log(n)) comparisons and is thus algorithmically optimal.
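A Python sketch (ours) of the recursion and the merge step:

```python
def merge_sort(xs):
    """Recursively sort both halves, then repeatedly take the smaller head."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right                 # one side is empty by now
```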

⁵https://youtu.be/M0zg_Cf4K4w?t=21s

⁶As cited from the Epic of Gilgamesh.


Exercise 5

Write an angry letter to the author explaining that the O(n log(n)) lower bound only applies to comparison-based sorts and that some types can be sorted in linear time, so therefore the author is wrong.

2.6 Quicksort

Quicksort takes, in the worst case, O(n²), but on average, O(n log(n)).
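The paper declines to describe quicksort; for completeness, a textbook sketch (ours, not the author's) with a random pivot, which is where the average case's "probabilistic magic" comes from:

```python
import random

def quicksort(xs):
    """Partition around a random pivot and recurse on both sides."""
    if len(xs) <= 1:
        return list(xs)
    pivot = random.choice(xs)
    return (quicksort([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + quicksort([x for x in xs if x > pivot]))
```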

Exercise 6

Enroll at Carnegie Mellon and take any of the following courses to learn in-depth about quicksort:
• 15-122

• 15-210

• 15-359

• 15-451

3 Methods/Implementation

For comparing sorting algorithms, the naïve method is to sort a large number of random lists of numbers and take the average time spent sorting. As mentioned originally, this method is antithetical to the entire purpose of sorting algorithms. But it will suffice as a first level approximation of the efficiency of sorting algorithms.

When comparing higher-order sorting algorithms we need to enforce a partial ordering. In other words, we need to define a < operator on algorithms. There are multiple ways to do so, but we have chosen to use the most natural ordering: when comparing two algorithms A and B, we define A < B to be true iff the total time spent sorting n random shuffles of some list L is less when sorting under A than when sorting under B. This is obviously a probabilistic total ordering, but in the limit, it is a deterministic total ordering. We will pretend our numbers are big enough to be considered "the limit".

We used n = 3 across all experiments. For sorting algorithms, we used L = [1, 10⁵) and defined < as the normal integer <. For sorting sorting algorithms, we used L = the list of all sorting algorithms discussed in section 2. As described above, we determined for two algorithms A and B if A < B by running A on 3 random shuffles of [1, 10⁵) and B on 3 random shuffles of [1, 10⁵) and using normal integer < on the time taken. Finally, for sorting sorting sorting algorithms, we use the same L, that is, all algorithms discussed above. We define < in the logical expansion of the previous definition: A < B if 3 runs of A sorting sorting algorithms takes less time than 3 runs of B sorting sorting algorithms.

These were all implemented in Python 3.6 but ported to and eventually
run in PyPy2.7 v5.6.0 due to time concerns.
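A sketch (ours, not the paper's code) of this probabilistic ordering in Python: `faster_than` is the paper's < on algorithms, and sorting a list of sorters with it is 1st-order sorting.

```python
import random
import time

def faster_than(a, b, data, n=3):
    """The paper's '<' on algorithms: A < B iff A sorts n random
    shuffles of data in less total time than B does."""
    def total_time(sort):
        elapsed = 0.0
        for _ in range(n):
            shuffled = list(data)
            random.shuffle(shuffled)
            start = time.perf_counter()
            sort(shuffled)
            elapsed += time.perf_counter() - start
        return elapsed
    return total_time(a) < total_time(b)

def sort_sorters(sorters, data):
    """Selection-sort the sorters themselves, using faster_than as '<'."""
    remaining, out = list(sorters), []
    while remaining:
        best = remaining[0]
        for s in remaining[1:]:
            if faster_than(s, best, data):
                best = s
        remaining.remove(best)
        out.append(best)
    return out

# 1st order: sort_sorters([quicksort, merge_sort, ...], range(1, 10**5)).
# 2nd order: compare sorters by how fast they sort the sorters, and so on.
```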

4 Discussion

All of the numeric results of these tests can be found in Table 1 below.

For the integer sorting algorithms, we see exactly the results usually presented in introductory classes and as expected. Quicksort, using probabilistic magic, is fastest, with merge sort following closely. Comb sort, using its heuristic advantage, is fairly fast, with the other O(n²) algorithms following behind. There is nothing too exciting here. The full ordering is quicksort < merge sort < comb sort < insertion sort < selection sort < odd-even sort.

However, both the sorting sorting algorithms and the sorting sorting sorting algorithms have a different order from the integer sorting: merge sort < insertion sort < quicksort < selection sort < odd-even sort < comb sort.

There are two interesting changes here. We can use merge sort as a good baseline since it does a relatively deterministic and static number of comparisons.

First, insertion sort is much much better for higher-order sorting than for integer sorting. For integer sorting, insertion sort is about 60x slower than merge sort. However, for sorting sorting, it is only 1.13x slower, and for sorting sorting sorting, it is only 1.06x slower. It is conceivable that since insertion sort spends most of its time iterating over the beginning of the sorted list that when comparisons get more expensive, it gets more efficient.

| Algorithm      | 0th Order Time | 1st Order Time | 2nd Order Time |
|----------------|----------------|----------------|----------------|
| Quicksort      | 0.06127 s      | 329.88 s       | 18.62 min      |
| Merge Sort     | 0.06757 s      | 277.32 s       | 16.65 min      |
| Comb Sort      | 0.09867 s      | 792.78 s       | 235.46 min     |
| Insertion Sort | 4.03476 s      | 314.31 s       | 17.67 min      |
| Selection Sort | 8.15739 s      | 565.02 s       | 25.33 min      |
| Odd-Even Sort  | 15.79098 s     | 781.93 s       | 216.02 min     |

Table 1: The time taken for algorithms to sort integers (0th order), sorting algorithms (1st order), and sorting sorting algorithms (2nd order)

Second, comb sort is much much slower for higher-order sorting than for integer sorting. For integer sorting, comb sort is only about 1.5x slower than merge sort, but for sorting sorting, it is about 3x slower, and for sorting sorting sorting, it is 14x slower. It is the slowest sort for higher-order sorting and took almost 4 hours compared to merge sort's 17 minutes. It is conceivable that comb sort, with its decreasing window, spends more time doing redundant tests as it reduces to bubble sort.

We ran tests counting comparisons to test both of these hypotheses and the evidence is clear: there is no evidence for these hypotheses whatsoever. Thus we present no explanation and offer a 50¢ prize for the first plausible explanation submitted to the author.

5 Conclusions

Use quicksort, unless you need to sort algorithms, in which case use merge sort. Most sorts behave the same across orders, but some do not. Thinking about higher-order sorting algorithms may cause temporary⁷ insanity.

⁷At least, I hope it is temporary...

6 Future Work

There are numerous ways this novel direction of research can be expanded:

• Higher-order functions including, but not limited to, 3rd, 4th, and ωth order sorting algorithms

• Something involving quantum computers working on big data in the cloud

• Determining if there exist sorting algorithms that excel at non-constant comparison algorithms⁸

• Other meta-analyses such as:

  – A search algorithm for searching for search algorithms

  – A database for managing databases

  – A container for containing other containers⁹

  – A cache invalidator for managing cache invalidation algorithms

7 Acknowledgements

The author would like to thank a handful of people, including Arley Schenker and Ben Lichtman for proofreading this and sharing in the madness, Thomas Bayes, for obvious reasons, the Academy, for making this all possible, and readers like you, without whom this whole ordeal would have been meaningless.

⁸This is actually an interesting problem that I could not find much research on.

⁹This may be made redundant by Docker.


4

Objectionability: A Computational View of Mathematical Computation

Cameron Wong, Dez Reed

March 10, 2017

Abstract

Any student in an "engineering" discipline (that is, the fields of math, computer science, mechanical engineering, underwater basket weaving, et al) can sympathize with looking at an equation or problem and realizing that pages of unintelligible scribbles would be required to reach a satisfactory conclusion. Similarly, professors often add allowances for "algebraic grunt work" that is often assumed to be correct in grading rubrics. However, such terms are often, at most, vacuously defined and it is up to the subjective preferences of whoever is looking to determine whether four pages of equational reasoning is sufficient or to remove points for omitting a further two pages of integration by parts, second-order substitution and a re-derivation of the Cauchy-Schwarz inequality.

In this paper, we attempt to provide a mathematical model to describe the "grossness" of any particular mathematical problem P (the precise formal definition of which is semi-intentionally left vacuous). In particular, we define G(P) to be the computational grossness of P and prove several further useless results that come of direct consequence. As such, we also establish several grossness classes for problems.

Finally, we introduce the concept of objectionability, whether any arbitrarily gross problem is uncomputable by a non-idealized human. Among other things, we will find that O, determining whether a problem is objectionable, is itself objectionable.

1 Introduction

In 1936, Alan Turing introduced the concept of a Turing Machine that could be used to compute formally-defined mathematical functions. In the same vein, the idea of decidability was introduced, determining whether a given problem was solvable with pen-and-paper mathematical manipulations.

We see, though, that some problems, while perhaps decidable, are simply "gross" -- that is to say, they are difficult to generate solutions to. This phenomenon has even been referenced by the famed Jay-Z, referring to "99 problems" that seemed to be of trivial value compared to the problems that the song's subject had been presented with.

However, in reasoning about decidability, little to no thought is given to whether anyone would want to solve such problems using pen-and-paper mathematical methods -- with the advent of computers, many of these "gross" computations can be done with little to no effort on the part of a human. Even so, the fact remains that such "gross" problems exist. Further thought reveals that (without loss of generality) the problem of writing Python code to solve something may be "too gross".

This paper attempts to resolve this issue by defining G(P), the grossness of P for a problem P in E (the set of all problems), then discusses several schemes by which G may be estimated. We also briefly discuss how we might determine whether a problem is objectionable, that someone may reasonably object to being asked to solve it.

2 Repugnance Theory

We begin by defining G(P) for some P ∈ E as the minimum grossness for any solution to P. As we will see, however, this is difficult to determine in a general case.

Consider the following problem:

Problem 1. Fix some alphabet Σ and other arbitrary set Π. Define the following sets:

$$\Gamma = \left\{ (\mu, n, \tau) \in \{w \in \Sigma^* : \text{every proper prefix of } w \text{ is in } \mu\} \times \mathbb{N} \times \{\zeta \in \Pi : \langle\zeta\rangle \geq n\} \;:\; |\langle\tau\rangle|^{|\mu|}\,|\langle n\rangle| \leq |\langle n\rangle| + |\langle\tau\rangle| \right\}$$


$$\kappa = \left\{ (\Sigma, \iota, \psi) : \psi \in \{f : \Sigma \times \Sigma \to \Sigma \mid \forall x, y \in \Sigma,\ f(x, y) = f(y, x)\} \text{ and } \forall x \in \Sigma,\ \psi(\iota, x) = x = \psi(x, \iota) \right\}$$

where ⟨x⟩ is an arbitrary encoding of an object x over Σ.

Let Q be the subproblem of "Given a string S over Σ, is S equivalent to ⟨(ν, χ, ω)⟩ where (ν, χ, ω) ∈ Γ ∩ κ?".

Is Q decidable?
Is Q decidable?

A naive approach might have a student attempt to determine what each set means and to reason about the properties and behavior of each. Note, however, that ⟨P⟩ over the alphabet of unicode characters occurring in this paper (which we will take to be the problem statement as given) contains 39 characters when any symbols that are not considered "common English characters" are removed. This, too, is a vacuous definition, however, so we will define a "common English character" to be anything that has a standard unicode codepoint less than 95. With this interpretation, then, we could say that G(P) = 39, as any solution to this problem must begin by reading the problem statement. We will call this approach the "character counting" approach.
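Taking the paper's cutoff at face value, the character-counting upper bound admits a one-line Python sketch (ours, not the authors'):

```python
def character_counting_grossness(problem_statement):
    """Upper-bound G(P): count symbols that are not 'common English
    characters', i.e. with codepoint 95 or above. (Lowercase letters
    start at codepoint 97, which is presumably part of the joke.)"""
    return sum(1 for c in problem_statement if ord(c) >= 95)
```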

An alternative approach, however, might involve inspecting the types of the sets involved -- μ is a string, n is a natural number and τ takes some type t, where t is the type of "things in Π". This is not to start a discussion of type theory, so we suffice to say that this approach leads to finding that G(P) = T, where T is the set of all types that fulfill the given criteria. It then follows that the "character counting" method is only sufficient to provide an upper bound on G.

A real type-chasing approach, however, would note that the definition of κ is actually a monoid, giving us that any element of κ is also a monoid in the category of endofunctors and thus Γ × κ = ∅. In this case, though, the problem cannot possibly be less gross than this observation, giving us that G(P) ≥ "glasgow".

3 Voodoo

In this paper, we will be concerned mostly with the lower bound of grossness as established via another measure we have provisionally named voodoo, denoted with V(P), defined as follows:

Take a problem P ∈ E. If P can be reduced to some P′ via an application of some subroutine Q, V(P) ≥ n · G(P′). It naturally falls out, then, that

$$V(P) = \sum_{P \text{ reduces to } P'} G(P')$$

From here, we can apply the Rauen Incompleteness Principle to see that V is a complete but undefined function.

From this, however, we can establish the following lemma:

Lemma 1 (Wong's Lemma). For all P ∈ E, let L be the upper bound of G(P) as determined by the character counting method. Additionally, define ϱ = G(P′), where P′ is the derivative of P. Then,

$$G(P) \geq V(P)^{\varrho} \pmod{L} \tag{1}$$

Proof. Fix a problem P. G(P′) is trivially bounded from above by 1114016† and below by ℕ. We can then express V(P) type-theoretically as a combinatorial 4-form, which means that, by the Chinese remainder theorem, V(P)^ϱ cannot be greater than a type-theoretic solution to P modulo L. But wait, we already established that G(P) is bounded from below by the set of type-theoretic solutions to P, so the claim immediately follows. □

†This is the difference between the highest defined unicode codepoint and 95.

Note that this does not contradict our earlier statement about character-counting; the modulo ensures that we remain under our upper bound. We now present the following result:

Theorem 1 (Reed's Theorem). For all P, Q ∈ E,

$$G(P) \circ G(Q) \geq \frac{V\!\left(\overrightarrow{P \cdot G(Q)} \times \overrightarrow{G(P) \cdot Q}\right)}{V(P \circ Q)^{\varrho_Q}} \tag{2}$$

where ◦ is the composition operator.

Proof. We begin by applying Wong's Lemma to the right hand side of the equation:

$$\frac{V\!\left(\overrightarrow{P \cdot G(Q)} \times \overrightarrow{G(P) \cdot Q}\right)}{V(P \circ Q)^{\varrho_Q}} \geq G\!\left(\overrightarrow{P \cdot G(Q)} \times \overrightarrow{G(P) \cdot Q}\right)^{\varrho_{P \cdot G(Q) \times G(P) \cdot Q}} \geq G(P \circ Q \circ P) \pmod{L}$$

Note that, at this point, all operators are now all near-computable with sufficient paper (and we can thus assume that our human prover has done so).

However, on attempting to apply the same thing to the left hand side, we get that

$$G(P) \circ G(Q) \leq V(P)^{\varrho_P} \circ V(Q)^{\varrho_Q} > (V(P) \circ V(Q))^{\varrho_{P \circ Q}}$$

From here, however, the authors of this paper object (see section 2) to showing the remaining computations. Without lots of Generality, we see that the two quantities thus have well-ordered grossness. Specifically, something that is near-computable cannot be greater than anything that is not. Because the LHS is ultimately too gross to reduce, then, it must be greater than the idealized RHS. □

Because of this, we can also establish the following corollary:

Corollary 1 (Falk Corollary). (E, G, ◦) is a well-defined field.

4 The Principle of Repugnant Exclusion

Consider the 3-Body Problem (∴). Notice that, despite being somewhat simple to state, it is difficult to intuit a value of G(∴).

Suppose P is as follows: Given ⊥ ⇒ ß, find ß. Simply enough, G(P) = ⊥. Note that P has triviality and ∴ has pure nontriviality, so we can apply the law of excluded middle to see that G(∴) = ⊤. We will refer to this relation by saying that ∴ is excluded from P. Without loss of generality, we can generalize this proof to say that, given a problem P that can be excluded from a problem Q such that G(Q) = ⊥ must have G(P) = ⊤ (and vice versa). This is the Principle of Repugnant Exclusion.

5 Objectability

We end by opening a discussion on objectability, the discussion of whether a problem is objectionable. Consider the 3-Body Problem used to establish the Principle of Repugnant Exclusion. This problem involves several abstractions, reductions, equations and several calculus and algebraic impossibilities. It would be reasonable to say that no reasonable person could sit down and solve this problem, so we would call this problem objectionable. We consider this to be a simple enough definition with profound consequences. Formally, we believe that any problem P for which G(P) is greater than |⟨P⟩|^|⟨P⟩| is objectionable, but we do not have a proof at the current time (we offer this as an open problem).

By way of example, consider the problem "Given a problem P, is P objectionable?" (this is the problem O). By applying earlier results from the Rauen Incompleteness Principle and Wong's Lemma, we see that O is undecidable.‡ Any resulting decider M, then, cannot be encoded via any finite alphabet Σ, which means that it must contain an infinite number of non-common English symbols (thus immediately failing the character-counting test). Furthermore, attempting to analyze the types of L(O) (the set of ⟨P⟩ for all P that are objectionable) is objectionable without a well-defined proof of the objectionability of O (see citation 2). If such a proof existed, then, it must itself be objectionable (and so we cannot assume that the proof was ever written or read by any human author). We, as humans, must thus conclude that O itself is objectionable.

‡It is also already unsolvable.

References

1. A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1):230--265, 1937.

2. You thought there would be a citation, but it was me, Dio Brando!

3. "Anime was a mistake" - Hayao Miyazaki

5

Towards A Well-Defined and Secure Flirtation Protocol

Rowan Copley¹*

¹Department of Love Quantification, Witty Pear LLC

*To whom correspondence should be addressed: rowan.copley@gmail.com

Introduction

Human interactions are inconsistent, ambiguous, and fraught with danger. Protocols for interaction are so nebulous that there is an entire cottage industry devoted to after-the-fact interpretations of interpersonal interactions [1]. In fact, many people will live their entire lives with their own internal understanding of proper communication protocols, which are mutually incompatible with other such implicitly defined systems.

This is especially true in the communication between sexes, frequently referred to in the vernacular as "flirtation." We believe that this is an unsatisfactory state of affairs that can be improved with the application of networking and information theory. Therefore we are proposing a protocol for inter-gender communication.

1 Architecture

Consider social encounters to be a nested stack of "consent". We start by defining a stack of consent levels that are traversed during the course of the flirtation session. To keep the definition intuitive and easy to understand, we define consent levels below in a colloquial and intentionally non-rigorous way.

Table 1: The consent hierarchy.

| Consent Level | Description |
|---------------|-------------|
| -2 | Don't even look at me |
| -1 | Don't talk to me |
| 0 | You may speak to me briefly if there's a good reason |
| 1 | We can talk |
| 2 | You can talk to me all you want |
| 3 | You can touch my hand |
| 4 | Long eye contact might not be creepy |
| ... | How deep this stack goes is outside the scope of this paper. |

Behind this is the idea of a mutual consent to recurse on that stack
to a "deeper" level. You'll need a protocol of mutual agreement to
recurse that is also recursive: knowing that the other person knows
that you know, and so on. But such a protocol can be implemented as a
Two Generals' Problem, which, while technically unsolvable, is still
provably simpler than everyday human interactions.

One design decision is whether consent levels should be symmetric,
where each participant is cool with what is going on, or asymmetric,
where maybe something creepy is going on. While our protocol is able
to support both by letting the two parties negotiate this at runtime,
we focus on the symmetric use-case here as (we hope) it is more
common.

The heart of the protocol is each party's location in the stack, and the messages about accepting or denying access to deeper levels, as well as revoking access to a level that was previously accepted. Revoking access to a consent level is left up to the particular implementer, and will depend on localized and internationalized norms. Some examples of revoking access are getting up and walking out, saying "no thanks", and kicking them in the balls.



2 Demonstration

Suppose that Alice wishes to initiate a flirtation request with Bob. That is, she wishes to enter into a negotiated descent down the consent stack. Currently, they are only on speaking terms, but Alice wants more. However, Alice does not wish to be in a more consenty level of the stack than Bob. That would expose Alice to embarrassment should Bob not be aware of Alice's intentions and similarly change positions on the stack. Alice has led a life of disillusionment and disappointment which has left her with a calloused emotional exterior. And she's too old for pussy-footing around anymore. She's had it with immature young men whose only weapon is flattery. Alice wants a man who can be honest about his flaws, who opens up to you. She wants Bob. Therefore, Alice and Bob must engage in a negotiation which proceeds as follows.

We define two variables, C_self and C_other, to indicate the degree of confidence that Alice has in herself that she wants to intimate a more flirtatious atmosphere with Bob (C_self) and the degree of confidence that Alice has in Bob that he wishes for the same.



Here we see Bob's internal variables set at near their default levels. Because he has been oblivious to Alice's advances, he has no idea what her true intentions are and thus his internal model is quite inaccurate. The social engagement blunders onward, however, driven by Alice's determination.

We see that Alice has updated likelihood estimators to reflect a decrease in confidence that Bob wishes to engage in a flirtation exercise with her. However, Alice decides to try again by calculating that although the chances of payoff are estimated to be low, the penalty for failure is also low.



In this case, Bob makes the decision to initiate the simultaneous
stack traversal immediately. This may not always be the case, as
sometimes a delay is a safer option.

Alice takes the potentially risky decision to initiate a descent
without receiving a response from Bob. Unfortunately, Bob interprets
this as desperation and revokes permission to proceed at the last
minute. Alice must bury her feelings deep inside her and only
indirectly express them, such as in passive-aggressive post-it notes
or clinical dissections of human behaviour in academic journals.

During this exchange, suppose Eve wishes to intercept Alice's
flirtation message as if it were intended for her. Alice did not
intend that Eve receive her obfuscated message to Bob. Alice desires
only Bob, and considers Eve to be an obstreperous harlot. Alice's only
wish is to hold Bob close, to feel his caresses.



Unfortunately, unlike in computing where each machine is assigned a
convenient protocol address (e.g., 127.0.0.1), humans have no such
unambiguous addressing. Furthermore, in a social setting where one is
broadcasting flirtatious messages, simply claiming to be the recipient
of those messages may lead to a self-fulfilling prophecy. Therefore,
at least until such a time as this promising academic direction can
lead to more concrete results, we recommend three potential solutions
to this problem:

1. Distance method: always be the closest to the person for whom your
flirtatious message is intended,

2. Focus method: make eye contact with that person continuously
through the course of the social event,

3. Name method: use their name in every sentence.

For redundancy, using more than one of these methods will increase the likelihood of success.

3 Issues With Adoption

An important aspect to consider for any protocol is how difficult it
will be to get a substantial percentage of the population to adopt it.
While it is true that the protocol is not useful unless both parties
have adopted it, in this case, we believe the advantages to the
protocol are obvious and the rollout will be smooth. Furthermore, due
to network effects, adoption will follow an exponential growth curve.



4 Future Work

We plan to expand the protocol to include flirtation sessions involving more than two participants, and are recruiting graduate and undergraduate research assistants.

References and Notes

1. Psychology. Wikipedia: the free encyclopedia



6

A Solution to the Two-Body Problem

Rose Bohrer

Abstract

The two-body problem asks whether two massive academics undergoing mutual attraction can locate stable employment and lodging. We identify an important special case in which a solution exists, the first of its kind. We give a lower bound on net worth as a function of time. Given the significance of the result, we give a near-formal proof in Differential Dynamic Logic dL to increase confidence in the result, without the hard work of actually doing the proof correctly.

1 Introduction

The three-body problem has been known since Newton: Given three massive bodies mutually exerting gravity, compute their trajectories. It has also been known since the time of Poincaré that no analytic closed-form solution exists in the general case. Poincaré was not discouraged by this. Rather, (mostly because he and his wife were both on the academic job market at the same time), he did what any good mathematician would do and simplified the problem until there was some hope of solvability. Poincaré's simplification is what we now know as the two-body problem: "Given two massive academics in mutual attraction, determine stable employment and lodging". This simplification is of massive importance to academics even today, and as the modern academic knows, no general solution has yet been found.

2 Related Work and Previous Solutions

In extremely special cases, satisfactory solutions to the two-body problem have been found. The most notable special case is that of Blum and Blum, who have found simultaneous full-professorships at a Research I university in an affordable city in related research areas. This is, clearly, the most difficult instance of the 2-body problem. However, the B² solution requires at least one Turing award, and thus does not scale to us mere mortals. It would have worked for Poincaré, I'm sure, but not for me. Who am I even kidding, the 1-body problem has long been solved.

Other approaches are more ambitious, seeking to provide an answer to us mere mortals by sidestepping the requirement for an analytic closed-form solution, satisfying themselves with implicit or non-analytic solutions instead. The most notable approach is that of the Klein Four group, who take on the even more general three-body problem using nonanalytic functions such as a cappella: https://www.youtube.com/watch?v=Aiq_oaIvyak



3 Our Approach

The key issue here, of course, is to identify a salary for two PhD's at the same time, as only so many employers are willing to employ the unemployable. In offline dating, it has been widely recognized that the problem becomes significantly easier as the distance in thesis topics increases. Thus, the Two-Body problem can be reduced to the Some-Body Problem as posed by Mercury: Can anybody find me somebody to love? Who has a PhD in a technical field, but preferably not CS and definitely not formal methods, oh god please not formal methods?

Our approach is centrally based on Mercury's reduction. Further, we make the simplifying assumption that the relationship between the bodies is stable, a common assumption which significantly simplifies the dynamics. Because we provide the first and only solution for such a seminal problem in bodyology, we provide a formal model and proof of our solution to the two-body problem in Differential Dynamic Logic, an established logic for verifying hybrid systems, many of which are Cyber-Physical Systems. This marks its first known use in verifying Cyber-Social-Physical- ;-)-Academic Systems.

3.1 Model of Academics in Motion

In the two-body problem, we are given two rigid ;-) romantic bodies in motion and seek to show they can sustain some minimum level of income over an extended period of time. As in the typical presentation of the problem, we assume the sole source of income is a university.

In this model, we consider two-dimensional rigid bodies, whose positions are described by x₁, y₁, x₂, y₂. Following standard simplifications, we assume the position of the university is somewhere between the academics in a way we will make precise soon. According to government statistics, the salary for each academic is inversely proportional to the distance from the university. We add another variable t to track the evolution of time. Putting this all together, we arrive at a 10-dimensional ODE describing the evolution of the system:

$$\begin{aligned}
\alpha \equiv \{\, & x_1' = v_{x,1}, \quad v_{x,1}' = -\frac{m_2 \cdot v_{y,1}}{d^2}, \quad y_1' = v_{y,1}, \quad v_{y,1}' = \frac{m_2 \cdot v_{x,1}}{d^2}, \\
& x_2' = v_{x,2}, \quad v_{x,2}' = -\frac{m_1 \cdot v_{y,2}}{d^2}, \quad y_2' = v_{y,2}, \quad v_{y,2}' = \frac{m_1 \cdot v_{x,2}}{d^2}, \\
& \$' = \frac{1}{d_{1,U}} + \frac{1}{d_{2,U}}, \quad t' = 1 \,\}
\end{aligned}$$

where we define $d \equiv \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$, $d_{1,U} \equiv \sqrt{(x_1 - x_U)^2 + (y_1 - y_U)^2}$, and $d_{2,U} \equiv \sqrt{(x_U - x_2)^2 + (y_U - y_2)^2}$, where the university is at a fixed location $x_U, y_U$.
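Though the paper proceeds by (near-)formal proof rather than simulation, the system α is easy to integrate numerically. A rough forward-Euler sketch in Python (ours, with a hypothetical state layout, not part of the paper's dL development) can sanity-check the cash bound on initial states satisfying the preconditions below:

```python
import math

def simulate(m1, m2, xU, yU, state, dt=1e-4, steps=100_000):
    """Forward-Euler integration of alpha; state packs the ten variables."""
    x1, y1, x2, y2, vx1, vy1, vx2, vy2, cash, t = state
    for _ in range(steps):
        d = math.hypot(x1 - x2, y1 - y2)     # distance between the bodies
        d1U = math.hypot(x1 - xU, y1 - yU)   # body 1 to university
        d2U = math.hypot(xU - x2, yU - y2)   # university to body 2
        # Derivatives read straight off alpha.
        ax1, ay1 = -m2 * vy1 / d**2, m2 * vx1 / d**2
        ax2, ay2 = -m1 * vy2 / d**2, m1 * vx2 / d**2
        x1, y1 = x1 + vx1 * dt, y1 + vy1 * dt
        x2, y2 = x2 + vx2 * dt, y2 + vy2 * dt
        vx1, vy1 = vx1 + ax1 * dt, vy1 + ay1 * dt
        vx2, vy2 = vx2 + ax2 * dt, vy2 + ay2 * dt
        cash += (1 / d1U + 1 / d2U) * dt     # salary: inverse distance to U
        t += dt
    return (x1, y1, x2, y2, vx1, vy1, vx2, vy2, cash, t)
```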

Like all good proofs we have some preconditions. Specifically, the mass and distance are in harmony such that the bodies will maintain a stable orbit, and the university lies somewhere inside that orbit:

$$Pre \equiv v_{x,1}^2 + v_{y,1}^2 = \frac{m_2}{d} \;\wedge\; v_{x,2}^2 + v_{y,2}^2 = \frac{m_1}{d} \;\wedge\; t = 0 \;\wedge\; \left(x_U - \frac{x_1 + x_2}{2}\right)^2 + \left(y_U - \frac{y_1 + y_2}{2}\right)^2 \leq d^2$$

And our postcondition establishes a bound on income:

$$Post \equiv \$ \geq \$_0 + \frac{2 \cdot t}{d}$$

Putting these together, we arrive at our theorem statement, the Stable Relationship Solution to the 2-Body Problem:

$$Pre \rightarrow [\alpha]\,Post$$

Proof. We decompose the proof into its key lemmas, leaving the details
of the proof as an exercise for the reader.

Lemma: Bounded Attraction. A stable relationship requires that the bodies be attracted to each other, but not so attracted to each other that they might lose their wits, or in our case, their orbits. The Bounded Attraction lemma asserts an upper bound on the attraction, or force:

$$[\alpha]\; (v_{x,1}')^2 + (v_{y,1}')^2 \leq \frac{m_2 \cdot v_1^2}{d^2}$$

And similar for the second body.

Lemma: Constant Separation. Next, we must prove the essential simplifying step that the distance between the bodies is constant, without which the rest of the proof is intractable. Because distance makes the heart grow fonder, we call this lemma constant separation:

$$[\alpha]\; d' = 0$$

Lemma: Stable Relationship. Stability of relationships is essential to maintain the healthy life attitudes that allow one to succeed in academia. Thus we show the bodies are in a stable orbit at the radius d/2 which is centered around the origin, without loss of generality:

$$[\alpha]\; x_1^2 + y_1^2 = (d/2)^2$$

And so on for the other body.

Lemma: Home Sweet Home. In order to establish a bound on salary, we need a bound on the distance to the university. Because the university stays at its starting position inside the circle, we can bound this distance by the diameter:

$$[\alpha]\; (x_1 - x_U)^2 + (y_1 - y_U)^2 \leq d^2$$

Lemma: Continuous Cash Flow. These combine to give us a bound on the continuous rate at which cash changes, which leads directly to the theorem:

$$[\alpha]\; \$' \geq \frac{2}{d}$$





4 Conclusion

We have proven the first known lower bound on salary in the two-body problem under realistic situations. Practical applications of this solution abound for otherwise-broke academics on the job market. Our simple closed-form solution simplifies financial planning for the affected academic masses. We "formalized" the result for increased confidence, in case any...body should dispute its correctness.




{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}Magpie track

Who Said Money Can't Buy...

{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}{width="0.20833333333333334in"
height="0.20833333333333334in"}

7 Call For Partners: Romance with rigor

Rose Bohrer

8 Nano electronic transplant in brain and eyes to analyze and process
human thoughts and emotions

Chinmaya Lele

9 Grant proposal: Monetary policy of sparkly things

Pete, Luna, and Stefan Muller


7

Call For Partners: Romance with Rigor

Rose Bohrer

1 Introduction

Traditional dating sites are based on a simple premise: participants are capable of identifying, with a reasonable degree of accuracy, which of their proposed matches are appropriate for them. Millions of years of empirical dating evidence show this premise to be false. As in all human pursuits, the dating field is rife with biases and irrational judgements, such as "You're too ugly", "you don't make enough money", and "at least he can remember my birthday, Rose." While no system can completely eliminate human biases, the scientific community has, by and large, done an impeccable job of minimizing the influence of bias through its system of peer review. Even in non-blind review, simply entrusting the review process to a disinterested third party with no conflicts of interest greatly increases the quality of the outcome. If only the same could be said for love.

Unlike all other dating sites, callfor.partners addresses the underlying problem in online dating: lack of rigor. We do so by applying that most successful of human inventions, peer review. As with academic peer review, not all decisions are made by peers. Just like you get to choose what paper to submit, YOU are in charge of who you want to date, by writing your own personal CALL FOR PARTNERS, and YOU get to make the best impression by submitting your own PARTNERSHIP ABSTRACTS. Only the messiest, most error-prone part is spread between peers: Deciding which partnerships to pursue and which to reject. The part that nobody wanted to do anyway.

2 Design

The central feature of the Call For Partners Partnering Workflow is the Partnering Committee. As with the peer review process, one selects a group of one's closest friends to make decisions as to which advances should be accepted or rejected. Usually (but not always) the reviewing process is mutual: users are motivated to join their friends' partnering committees for the reviewing services they themselves receive in return. The standard dating website trope of "profiles" appears in Call For Partners under the guise of Calls For Partners. The distinguishing feature of a CFP vs. the traditional profile is that unlike a traditional profile, a CFP is all about what YOU want, specifying in utmost precision the desired features in a partner. CFPs come in multiple styles. For example, a Journal CFP often has a rolling deadline or no deadline at all, where potential partners are encouraged to submit at whatever time they find convenient.



In contrast, a Conference CFP generally has a strict deadline which is extended only after a disappointingly small number of submissions. The Conference CFP is especially useful for implementing rebound relationships, as implemented on many sites, but never before with such rigor.

The CFP model acknowledges that you are TALENTED, and your history will speak for itself. In addition to a CFP, every participant has a CV listing relevant accomplishments. Because we are living in the future, Call For Partners applies advanced AI technology known as "Facebook Stalking" to automatically generate large portions of a CV. This open-access model (with a level of openness exceeding many leading-edge academic organizations including SIGPLAN) reduces harassment and other abuses of the system, because participants are held accountable for their behavior in public. Remember, boys: Before you do something you'll regret, DBLP don't forget.

Given a CFP and CV, participants have all the information necessary to write a Partnership Abstract. A partnership abstract gives a brief, concise description of the contents of the proposed relationship. The Partnership Committee compares the abstracts against each author's CV and own CFP, and uses this to write reviews, ranking each abstract and/or each individual's self-worth on a scale of A-F. In most cases, the reviews are clear enough that the PC can reach an anonymous conclusion as to whether the abstract should be accepted or not. In the case of a dispute, a PC Chair can be appointed with tie-breaking authority.

Upon acceptance, the relevant parties gain the ability to message each other. Often, the author of the CFP will request revisions from the author of the Partnership Abstract before starting the Partnership. While the proposer does not have the ability to reject the proposee nor request changes of them, it is traditional to disclose further results publicly, which over time has a significant effect on the proposee's Impact Factor.

As with most dating sites, CFP provides functionality to help you
search through potential matches in order to identify someone to whom
you wish to submit a Partnership Abstract. In addition to the
barebones necessities like salary, race, and number of previous
partners, CFP allows you to perform custom searches that solve the
problems specific to academic communities. In fact, we provide the
first known solution to the Two-Body Problem, a generalization of the
Three-Body Problem initially posed by Newton.

The Two-Body Problem, as originally phrased by Newton, is stated as
follows: Given two bodies A and B, each with an attraction and a PhD,
find a place of residency and a salary.

The key issue here, of course, is to identify a salary for two PhDs
at the same time, as only so many employers are willing to employ the
unemployable. In offline dating, it has been widely recognized that
the problem becomes significantly easier as the distance in thesis
topics increases. Thus, the Two-Body Problem can be reduced to the
Some-Body Problem as posed by Mercury: Can anybody find me somebody to
love? Who has a PhD in a technical field, but preferably not CS and
definitely not formal methods, oh


god please not formal methods?

At this point, the astute reader will notice that this problem is
amenable to solution via an adequate search feature. The
first-of-its-kind CFP Field Search allows one to specify the exact
desired distance between their mate's work and their own, using
traditional metrics such as Erdős Numbers, Bacon Numbers,
Erdős-Bacon Numbers, and also less traditional metrics. In our
companion paper, we have verified our solution to the Two-Body
Problem using Mercury's Reduction.
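
Purely as an illustrative sketch (the graph, names, and function below
are hypothetical, not the deployed CFP code), such a field-distance
metric bottoms out in a breadth-first search over a coauthorship graph:

    # Hypothetical sketch of a Field Search distance query: an Erdos-style
    # collaboration distance computed by breadth-first search over a
    # coauthorship graph. Nothing here is the deployed CFP implementation.
    from collections import deque

    def collaboration_distance(coauthors, a, b):
        """Length of the shortest coauthorship path from a to b, or None."""
        seen, frontier = {a}, deque([(a, 0)])
        while frontier:
            person, dist = frontier.popleft()
            if person == b:
                return dist
            for peer in coauthors.get(person, ()):
                if peer not in seen:
                    seen.add(peer)
                    frontier.append((peer, dist + 1))
        return None  # unconnected: a truly interdisciplinary match

    # Toy example: Alice is at distance 2 from Carol, via Bob.
    toy = {"Alice": ["Bob"], "Bob": ["Alice", "Carol"], "Carol": ["Bob"]}
    print(collaboration_distance(toy, "Alice", "Carol"))  # 2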

3 Implementation

Call For Partners is currently available to the public as an open beta
at callfor.partners. Call For Partners is implemented with Scala,
Slick, Play, PostgreSQL, Heroku, and assorted other buzzwords.

At present the implementation is limited, with the primary limitation
being one known as "grad school", specifically, "actual work".
However, and especially due to the subject matter, the implementation
process has been utterly free of the other canonical time limitation
known as "girlfriends," which the author credits with the
implementation progress made so far.

Producing a robust, production-ready implementation of an application
like CFP requires many things. The most important requirement from a
business perspective is the ability to rapidly acquire, and
potentially disseminate to St. Petersburg, a wide variety of sensitive
personal information on customers. In this critical area, we already
excel, thanks to a robust account application process. Our application
process is based on well-established social engineering techniques
that lull a user into a false sense of security before we go in for
the kill. For example, we ask them common questions such as their
Social Security Number and mother's maiden name before getting them to
divulge information that many users find sensitive, such as their
thesis topic.

Common wisdom in the software industry is that a successful web
application also needs "features". One contribution of our work is to
refute the above phony claim. Websites like Match.com and
eHarmony.com have implemented almost all the features shown in their
business proposals and an even more astonishing fraction of features
featured in their advertising materials. CFP, being authored by the
expeditious academics that it is, has more or less elided this step,
yet derived an equal number of publications.

4 Evaluation

An early version of callfor.partners was tested on select attendees of
SIGBOVIK 2016 in a private beta for the past year. In retrospect, this
choice of test users presented a huge logistical problem: In large
part due to their collective scientific prowess, the average attendee
of SIGBOVIK 2016 had, at the start of our trial, 2.3 girlfriends, 6
boyfriends, 1.9 wives, 4 husbands, 24 desperate exes trying to


get back together with them and the occasional 1.5 Texas. In order to
keep the evaluation simple, all such relationships were terminated at
the start of the study, the last of them much to the chagrin of James
K. Polk. This in fact interfered significantly with the validity of
our study by sending a vast tidal wave of fresh singles throughout the
surrounding community. However, in keeping with the long-held
standards of the scientific community, we will publish the study anyway.

Our experimental evaluation sought to answer the following questions:

• How does CFP affect the overall number of people dating in a
population? We call this the mojo quotient.

• How does CFP affect the romantic success of individuals within the
population? We call this the love-potion factor.

• How effective is CFP at enabling the treatment group to restore
itself to its equilibrium quantity of relationships when the
equilibrium is disturbed? We call this the rebound factor.

A treatment group of 125 attendees was provided with an experimental
prerelease version of CFP. A control group of 75 attendees was
provided with industry-standard dating software. Out of the control
group, only 15 participants made it to the end of the trial. Out of
the remaining 60, 40 quit the trial early due to the despicable dating
options available to them, 18 died of loneliness, and two of them
decided to date each other as an excuse to stop talking to hot singles
in their area. By the end of the trial, the treatment group had
reached a size of 500, for the participants had produced on average 3
offspring, with the remaining 25 additions consisting of a mixture of
immaculate conceptions, Russian hackers, and Peruvian drug kingpins
bribing their way into the trial.

We initially proposed that the mojo quotient be computed as the
fraction of an overall population engaged in a romantic relationship,
but this had two failings:

• The structure of the population changed vastly over the course of
the study due to widespread procreation.

• By the end of the study, each participant had an average of 550
relationships, contradicting common wisdom as to the value of Dunbar's
Number.

The former failing was ameliorated by the fact that a large number of
the resulting newborns also started dating each other. However, this
result was so surprising and potentially unethical that we felt it,
too, ought to be somehow reflected in our metric.

Thus we settled on the following definition of mojo quotient:

    Q_M = ( Σ_{p ∈ Participants} |rels(p)| · c^diapers(p) ) / |{p : age(p) ≥ 18}|

where rels(p) is the set of relationships participant p is engaged in,
age(p) is their age, diapers(p) is the average number of diapers they
use in a day, and c is an experimentally determined constant set to 5
for the purposes of this study.
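
As a minimal sketch of this computation (the participant records and
field names are hypothetical; c = 5 as above):

    # Minimal sketch of the mojo quotient Q_M; participant records and
    # field names are hypothetical. c = 5, per the study.
    C = 5

    def mojo_quotient(participants):
        weighted = sum(len(p["rels"]) * C ** p["diapers"] for p in participants)
        adults = len([p for p in participants if p["age"] >= 18])
        return weighted / adults

    # One adult in two relationships plus one extremely sociable newborn.
    print(mojo_quotient([
        {"rels": ["x", "y"], "age": 30, "diapers": 0},
        {"rels": ["z"], "age": 0, "diapers": 6},
    ]))  # (2*1 + 1*5**6) / 1 = 15627.0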

The love-potion factor is defined as the median ratio of the number of
relationships/year after vs. before the introduction of CFP. Naturally
this metric was computed only for participants already in existence
at the beginning of the study. We computed the love-potion factor to
be exactly 3, no more, no less, whereas the control group had a
love-potion factor of 0.4. This factor was arguably not too meaningful
given the large number of deserters.

The unique structure of this study greatly simplified our calculation
of the rebound factor, because all participants were artificially
reduced to 0 relationships at the onset of the study. The rebound
factor is computed by calculating the time it takes for both the
treatment group and the control group to reach their initial fraction
of romantically active participants, then taking the ratio between the
two groups. Unfortunately, the control group never reached its initial
fraction, and thus we have computed a rebound factor of ∞.
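
A minimal sketch of both metrics under these definitions, with
placeholder numbers rather than the study's data:

    # Minimal sketch of the love-potion and rebound factors as defined
    # above; the example numbers are placeholders, not study data.
    from statistics import median
    import math

    def love_potion_factor(rates):
        """rates: (relationships/year before CFP, relationships/year after)."""
        return median(after / before for before, after in rates)

    def rebound_factor(control_recovery_time, treatment_recovery_time):
        """Ratio of times to regain the initial fraction of active daters;
        a control group that never recovers yields an infinite factor."""
        if control_recovery_time is None:
            return math.inf
        return control_recovery_time / treatment_recovery_time

    print(love_potion_factor([(1.0, 3.0), (2.0, 6.0), (0.5, 1.5)]))  # exactly 3
    print(rebound_factor(None, 6.0))  # inf, as observed in the study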

In conclusion, CFP is infinitely better than all competing options.

5 Testimonials

As if the cold, hard numbers from the previous section were not
enough, we have also obtained a series of testimonial comments from
participants in our study. While testimonials lack the rigorous proof
of superiority that we already provided above, they aid us in the
analysis of subjective aspects of the work, such as why we are the
superior dating service.

Initially, I was skeptical about entrusting all my personal
information to a website that encrypted my communications with a VIC
cipher, and whose idea of a salted hash was cannabis with a dash of
sodium chloride, but the minute Anastassia SQL-injected her way into
my financial records, she injected herself into my heart! Nothing
spells true love quite like breaking past a firewall, compromising
private records and selling my credit card to the Russian mafia. We've
been together for months now and we really just click. --- Ryan
Kavanagh

Before I used CFP, I was in, like, one relationship, tops. Now I'm in
so many relationships I think the US Navy is jealous of just how often
I am shipped. --- Stefan Muller

6 Related Work

Many websites have addressed the related work of dating, though the
author strongly suggests that you do not date your relatives, for it
is bad for the gene pool. Websites such as eHarmony are based on the
pseudo-science of compatibility. Anyone who has ever tried to open a
Word 2016 document in Word 95 knows that most


things we say are compatible, are not, and thus it is with humans.
OkCupid appeals to paganistic rituals in the hopes of receiving
optimal pairings from the divines, who, unfortunately, do not believe
in computers. Match.com and Tinder both determine optimal dates by
lighting large groups of singles on fire and seeing which ones burn to
the ground. Those which do not burn are witches and thus not good
dating material, but unfortunately by that point all the good ones are
dead, making the method ineffective in practice.

7 Promotional Offers

Most scientific papers offer you so-called "knowledge" for "free",
where "free" means you already belong to an institution which has
already paid an exorbitant fee in order to access the publication,
or perhaps you have located the author's so-called "web-site". At CFP,
we believe in doing things differently. This article has already
provided you with free knowledge, free not only as in beer but as in
speech, and now we are going to provide you with even more freedom,
like the good patriotic Americans that we are.

Typically, like all profitable businesses, CFP does not come for free.
It can be paid for in several different ways, including not only the
typical monthly subscription models, but also more academic-friendly
methods such as co-author status on your publications or NSF grant
funding (after extensive lobbying of the NSF Computer Science
Directorate, acquisition of life partners is now considered a billable
research expense). Since you, dear reader, have gotten in early by
reading our very first publication, you can try CFP for a limited
time, free of cost. This year's SIGBOVIK 2017 proceedings come with an
included 6-month membership to CFP (this was totally not a bribe in
order to get the present article published), which can be extended to
at least 2 years via our affiliate program: every friend or lover you
recommend to CFP (to the paid service, of course) will extend your
membership by 1 month.

Now how's THAT for some science?

8 Conclusion

In this paper, we have presented Call For Partners, an online dating
service based on academic peer review. Call For Partners uses the
rigor of peer review to reduce the number of dating errors due to
human bias. An implementation is provided using the Inter-Net, by
which users can try out the proposed dating methods for themselves and
significantly increase the author's material wealth. An empirical
study shows that not only does CFP significantly reduce the number of
errors, but it drastically increases the amount of dating at the same
time. Not only are the results of our empirical study significant, but
a promotional offer is given whose savings are significant as well.


8

Nano Electronic Transplant in Brain and Eyes to Analyze and Process
Human Thoughts and Emotions

Chinmaya Lele

University of Pittsburgh

chinmaylele1993@gmail.com

Striking up a conversation is very important, and saying the right
words is as important as identifying what's going on in the mind of
the other person. Identifying the emotions of the other person helps
us narrow down what we should say to turn conversations into
opportunities.

The transplant is supposed to be implanted in the brain and eyes,
where the eye implant would capture the facial expressions of the
other person. The brain implant communicates with the eye implant,
which sends the captured information signals to the brain implant for
analysis.

The brain implant would analyze the information and send signals back
to the eye implant to display the appropriate response to give while
engaged in a conversation. The eye implant would also have an
omni-display interface which lets the wearer see the displayed
information. The eye implant is the size of a contact lens, can be
taken out when not required, and is charged by the solar energy which
the eye absorbs while looking around. The brain implant would also
have space to store the conversations recorded by the eye implant,
which could be retrieved to be seen again and reviewed on the eye
implant.


The idea resembles the 2008 movie 'Iron Man', in which Tony Stark
could see all the information through the display inside his armor
suit. The only difference is that in this case it is the implant which
does all the processing of images and emotions.


9

Grant proposal: Monetary policy of sparkly things

Pete and Luna, co-purrnciple investigators

Stefan Muller, research assistant

1 Introduction to sparkly things

We propose an investigation into the policy surrounding a currency
recently introduced into our economy, known as the sparkly thing.

Figure 1: The purrnciple investigators with the sparkly things.

Sparkly things are earned by individuals as compensation for labor
(see Figure 2) and can be exchanged for products and services such as
prime lounging spots (see Figure 3).

Figure 2: Working hard to earn sparkly things.

Figure 3: The spoils purchased with sparkly things.

Sparkly things can also be saved in long-term accounts.


Figure 4: Savings account.

2 Central bank of sparkly things

We believe that the supply of sparkly things is controlled by the
Sparkly Thing Reserve System, headquartered in the silverware drawer
(Figure 5).

Figure 5: The Sparkly Thing Reserve Bank.


3 Interest Rates

The interest rate was recently lowered for the third time in two
months, after we discovered that the sparkly things are not food.

4 Financial Crimes

As part of our investigation, we hope to discover ways of better
identifying crimes such as money laundering.


Monkey track

It's Only a Game Theory


10 Is it Percival time yet?: A preliminary analysis of Avalon gameplay
and strategy

Yuzuko Nakamura

11 Dr. Boozehead, or How I learned to stop worrying and get drunk:
Design principles and analysis of drinking games in the silicon age

Kelvin M. Liu-Huang and Emily J. Simon

12 A boring follow-up paper to "Which ITG stepcharts are turniest?"
titled, "Which ITG stepcharts are crossoveriest and/or
footswitchiest?"

Ben Blum


10

Is it Percival time yet?: A preliminary analysis of Avalon gameplay
and strategy

Yuzuko Nakamura

Carnegie Mellon University

5000 Forbes Avenue

Pittsburgh, PA 15213

yuzi@cs.cmu.edu

ABSTRACT

The Resistance: Avalon is a hidden-roles-style board game. In this
paper, we use data collected over dozens of Avalon games to make
recommendations on role sets and game sizes that maximize the
game-playing experience. We also evaluate the effect of various
strategies on good and evil's chances of winning.

KEYWORDS

Board games, hidden-role games, game design

ACM Reference format:

Yuzuko Nakamura. 2017. Is it Percival time yet?: A preliminary analysis
of Avalon gameplay and strategy. In Proceedings of SIGBOVIK, Pittsburgh,
PA USA, April 2017 (SIGBOVIK'17), 6 pages.

DOI: 10.475/123_4

1 INTRODUCTION

The Resistance: Avalon [1], like Mafia, is a multiplayer game centered
around hidden roles. Hidden role games involve players being randomly
assigned roles that are not revealed to other players. These games often
feature two or more sides with their own win conditions; in particular,
there is frequently an evil or sabotaging side that attempts to bluff and
win people's trust in order to win the game, and a good (but generally
information-less) side that must correctly guess who to trust in order
to win the game.

Avalon is a two-team game themed around King Arthur: the loyal knights
of King Arthur (good team) attempt to succeed three quests (missions),
and the minions of Mordred (evil team) attempt to be placed on the
missions so as to sabotage them and lead three of them to fail. As
such, the aim of evil is to be trusted by good players and the aim of
good is to determine who can be safely trusted to be sent on a
mission. In addition, all members of the evil team know each other
(with possible exceptions).

Avalon in its simplest form1 features two special roles, Merlin and
the Assassin. The addition of two other special roles, Percival and
Morgana, creates more opportunities for bluffing, trust, and strategy,
so we are interested in games with these four roles:

1Avalon may also be played without any special roles (in which case
it resembles its predecessor, The Resistance). However, the main
features of Avalon are these special roles.



• Merlin (good): The player with this role knows all evil roles. They
must guide other good players to this knowledge. However, Merlin must
be subtle in the way they do this due to the role of the Assassin.

• Assassin (evil): If good successfully passes three missions, the
player with this role gets to choose one good person to assassinate
(after discussing with the rest of the evil team). If the assassinated
player was Merlin, evil snatches victory from the jaws of defeat and
wins the game. As such, this person serves as a check on Merlin's
ability to help the good team.

• Percival (good): The player with this role knows who Merlin is. As
such, they can also gain indirect information about who to trust by
quietly observing Merlin's actions, and can appear to evil as a decoy
Merlin. However, due to the role of Morgana, Percival must first
determine which Merlin to trust.

• Morgana (evil): The player with this role appears as a second
Merlin to Percival, forcing Percival to spend some time determining
who to trust, and possibly leading Percival to sabotage the good team
by trusting Morgana instead of Merlin.

Good players who are neither Merlin nor Percival (generic good) know
nothing about any player other than themselves. Merlin and evil
players know the full evil team. Percival knows the two people who are
Merlin and Morgana, but not which is which. This selective information
is revealed to the appropriate players at the start of the game -
termed the "nighttime phase" - by asking players to close their eyes,
and then open their eyes to get information or raise their thumb to
identify themselves, as appropriate.

In this paper, we seek to evaluate how rules and game size affect the
funness of the gaming experience, and how different strategies are more
or less successful for good and evil.

2 GAME ANALYSIS

2.1 Evaluating funness

2.1.1 Ideal win ratio. One feature of Avalon is that not only are the
good and evil teams asymmetrical (having different abilities,
information, and objectives), but they are also of asymmetric sizes
(the evil team in Mafia-style games needing to be a minority to avoid
the game being trivially easy).

Supposition A: Funness is maximized by maximizing uncertainty. In
other words, we want the probability of winning to be ½ regardless of
which team one is on. This means the overall win chance of good and
evil must be balanced to be ½ for an ideal game-playing experience.

Supposition B: Funness is maximized by making each person's wins equally
likely to be earned while on the good team as while on the evil team.
Equivalently, across all games, the good and evil teams both produce
roughly the same number of winners. Under this supposition:

Pr(i is good | i wins) = Pr(i is evil | i wins) = ½    (1)

Or...

Pr(i is good ∩ i wins) / Pr(i wins) = Pr(i is evil ∩ i wins) / Pr(i wins)    (2)

Assuming player i doesn't affect the win probability of their team,
this is the same as:

Pr(i is good) · Pr(good wins) = Pr(i is evil) · Pr(evil wins)    (3)

If p_good is the win probability for good, and G is the chance of
being good (i.e. the number of good roles over the total game size),
then this equation is:

G · p_good = (1 − G) · (1 − p_good)    (4)

G · p_good = 1 − G − p_good + G · p_good    (5)

p_good = 1 − G    (6)

In other words, under this supposition of equalizing the portion of
good and evil wins, the ideal win ratio for good is 1 − G, or the
chance of being evil.

The number of evil players changes depending on the total number of
players. Table 1 summarizes the ideal win chances under this second
supposition.

Table 1: Ideal win ratios for each possible size of Avalon game under
Supposition B (making evil wins as likely as good wins).

    Game Size   # Evil   Ideal Good Win Chance
        5          2            .40
        6          2            .33
        7          3            .43
        8          3            .38
        9          3            .33
       10          4            .40
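
A minimal sketch verifying Table 1 directly from eq. (6), assuming the
standard Avalon role counts:

    # Minimal sketch reproducing Table 1 from eq. (6): the ideal good
    # win chance under Supposition B equals the chance of being evil.
    EVIL_COUNTS = {5: 2, 6: 2, 7: 3, 8: 3, 9: 3, 10: 4}  # standard role counts

    for size, evil in EVIL_COUNTS.items():
        G = (size - evil) / size  # chance a random player is good
        print(f"{size} players: ideal good win chance = {1 - G:.2f}")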

2.1.2 Game duration. Another important component of fun is the length
of a board game session. We hypothesize that the length of a game goes
up as the number of players in the game increases, due to more
discussion. We are also interested in comparing the typical game
length to the 30 minutes claimed on the box.

2.2 Strategy

2.2.1 Percival claims. The rules in the instruction manual are not
clear on whether players are allowed to claim to be Percival. However,
the role of Percival is similar to the role of generic good -- and
unlike Merlin or the evil roles -- in that claiming the role can
potentially help the claimant's team (whether good or evil).
Therefore, we allow players to publicly claim to be Percival.

A true Percival claim (Percival claiming Percival) can increase trust
among good members but can possibly make Merlin assassina tion easier
for the evil team. We are interested in whether claiming to be
Percival, and the timing of such claims, tends to help good or evil.

2.2.2 The first mission fail. Evil players have the choice whether to
throw in a fail card or a success card for missions that they go on. A
sole evil player on the first mission may decide to pass the mission to
avoid detection / suspicion for being on a failing team. However, an
early mission fail can make the evil team's task of failing three
missions total easier.

We are interested in whether first mission fails overall help the good
or evil team, and how the size of the first mission factors in to this.

2.2.3 Evil coordination failures. When two or more evil players are on
a mission team, they each have to choose whether to throw in a fail or
success card, not knowing what their teammates are planning to do. As
a result, sometimes evil players may end up passing the mission, or
may put in more than one fail card, revealing key information about
the make-up of the team. As such, a team with more than one evil
person is not ideal for the evil team, and they may be cautious about
proposing or approving teams with this make-up.

How often do coordination failures happen, and how do they affect
evil's chance of winning? We investigate these questions in this
paper.

3 METHOD

A body of 38 graduate students played games of Avalon (20 of which
played "semi-regularly", i.e. five or more times during the data
collection period). In total, 66 games were recorded, although 5 were
discarded due to incomplete information, resulting in 61 games
overall.

All games were played using the Merlin, Percival, Morgana, and
Assassin special roles. In addition, 10 of these games added the
special roles Mordred and/or Oberon.

The following data were collected for each game:

• Number of players and role of each

• Approved mission teams and mission outcomes (but not proposed
mission teams)

• Outcome of each mission (pass/fail)

• Outcome of game (win for either good or evil), including the win
condition (three mission fails (evil win), mission success but Merlin
assassination (evil win), or mission success and failed Merlin
assassination (the only good win condition))

• Which player(s) claimed to be Percival and when (if applicable)

• Which player was assassinated (if applicable)

• Duration of game (measured from the end of the night-time phase to
either three mission fails or evil's Merlin assassination choice)

Linear regression was used to determine whether game size affected
duration.

Chi-squared tests were used to determine whether (1) the presence of
Percival claims affected the evil team's Merlin guess rate (Percival
claim/no claim vs. Merlin guess/good win condition); (2) evil failing
the first mission affected the chance of evil winning (first mission
pass/fail vs. winner of game); (3) the presence of missions requiring
evil coordination affected the chance of evil winning (zero/non-zero
coordination missions in game vs. winner of game).

4 RESULTS

4.1 Number and size of games

Fig. 1 shows how many games of each size were played in the dataset.
Although Avalon can in theory be played with game sizes of 5 to 10,
players did not enjoy games of size 5 and so only played Avalon if at
least 6 players were present. A game of size 11 can be played with the
10-player board and 4 evil characters (as in a 10-player game), and an
extra set of vote tokens.

Figure 1: Number of games played for each size of game.

4.2 Win ratio

Overall, the win rate of the good team was .34. Fig. 2 shows how this
win ratio changes with the size of the game. The good win ratio for
9-player games stands out as unusually high. This is also the game
size with the fewest data points (see Fig. 1), so that may be part of
the reason.

Figure 2: Good's win record at each game size. The dotted line shows
the ideal .5 win ratio under the supposition that good and evil should
be equally likely to win. The dashed line shows the ideal win ratio
under the supposition that people earn wins equally as good people as
they do as evil people.

Under Supposition A of game funness, 6- and 9-player games are the
only ones close to the ideal difficulty for good. 7-, 8-, and 10-player
games fall short of both ideal win ratios. As such, it may be
worthwhile to use gameplay mechanics that tilt the game in favor of
good (Oberon as one of the evil roles, Lady of the Lake, etc.).

Under Supposition B of game funness, the 6- and 9-player games need to
be altered to be more difficult. In particular, a 9-player game might
benefit from 4 evil roles (instead of 3), one of which is Oberon.

4.3 Game duration

Fig. 3 shows the distribution of game length. The mean game length is
57.3 minutes and the median game length is similar -- 57 minutes.
Most (80% of) games can be played within 80 minutes. This is markedly
longer than the 30 minutes estimated in marketing materials.2

2It is possible that this particular group of graduate students
discusses an unusually large amount during Avalon.

Figure 3: Histogram of game lengths.

We can break down game length by the size of game, resulting in Fig.
4. Games of size 9 are again an outlier, being unusually quick, and
being the only game size that approaches the 30-minute estimated play
time.

Figure 4: How length of game changes with game size.

There seems to be a slight trend of longer games with more players, in
line with our hypothesis; however, linear regression (removing the
9-person games) does not quite reach significance (p=.0663) and game
size has low explanatory power for duration (R²=.0524).
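
A minimal sketch of this regression; the (game size, duration) pairs
below are placeholders, not the recorded data:

    # Minimal sketch of the duration-vs-size regression; the (game size,
    # duration in minutes) pairs are placeholders, not the study's data.
    from scipy.stats import linregress

    games = [(6, 50), (7, 55), (8, 65), (10, 75), (11, 80)]  # placeholders
    sizes, durations = zip(*games)
    fit = linregress(sizes, durations)
    print(f"slope = {fit.slope:.1f} min/player, "
          f"p = {fit.pvalue:.4f}, R^2 = {fit.rvalue ** 2:.4f}")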

4.4 Percival claims

Fig. 5 compares the outcome of games where a Percival claim is made
vs. ones where no Percival claims are made. Games with Percival claims
are much more likely to end in mission failure, which makes sense
because one reason why Percival might claim is that several failing
missions have happened, and Percival (or Merlin) is trying to increase
the chance of choosing an all-good team (i.e. scenarios with multiple
failing missions are scenarios where Percival is likely to claim).

Figure 5: Portion of games that end with mission fails (evil win),
Merlin assassinations (evil win), or neither (good win) under the
conditions of Percival not revealing and Percival revealing.

Among the remaining cases where three missions succeed, we are
interested in comparing how often Merlin is assassinated in each case
(Percival reveal vs. no reveal). Table 2 shows the number of games in
each condition. The Merlin assassination chance when Percival reveals
(.59) is higher than when there is no Percival reveal (.50). However,
the chi-squared test shows no significant difference (p=.738).

Table 2: Merlin assassination successes under the conditions of a
Percival claim vs. no such claim.

                        Merlin assassinated   Good wins
    Percival claim              16                11
    No Percival claim           10                10

There was not enough data to do an analysis of how the timing of
Percival claims affects good's chance of victory. We leave this to
future work.

4.5 The first mission fail

Although the intent was to analyze how first mission team size
(two-person first missions in games with 5-7 players vs. three-person
first missions in games with 8+ players) affects the outcome of the
game, in practice, only one two-person first mission with an evil
player (out of 16) was failed by the evil player. This is a difficult
strategy to pull off for the evil player because, for the rest of the
game, at least one good person knows for sure one member of the evil
team, and the evil person must consistently behave to give the
impression of being someone in that situation.

Therefore, we instead look only at games with three-person first
missions. Of the 24 games with evil players present on the first
mission, 16 (67%) were failed by those players. Table 3 shows how
evil's play during the first mission affected the victor of the game.

Table 3: Game victor under the conditions of a first mission fail vs.
a first mission pass (game sizes 8+).

                          Evil   Good
    First mission fail     13      3
    First mission pass      5      4

There is not enough data to perform a reliable chi-squared analysis,
but it is possible that failing the first mission is overall a good
strategy for the evil team.

4.6 Evil coordination failures

Of the 61 games, 32 (52%) featured no evil coordination missions,
while the rest had at least one evil coordination team. Fig. 6
indicates how often games featured a certain number of evil
coordination teams. Overall, this suggests the chance of any mission
containing multiple evil people is roughly 15%. Note: It is impossible
to have coordination issues on the fifth mission because any number of
fails is acceptable for the evil team. Coordination issues on the
fourth mission of games of 7+ players, where two fails are required,
are rarer (as this only happens when three evil people are placed on
the team) but are still important to the game.

Figure 6: Number of games featuring zero, one, two, or three missions
with more evil people than the required number of fails.

38 of the 41 coordination missions (92%) involved two evil people on
one-fail-required missions. Fig. 7 summarizes how frequently zero,
one, or two fails come out in this situation. This figure shows that
the number of fails that come out in Mission 2 and Mission 3 are
roughly what you'd expect based on random chance (independent events
with .5 probability of occurring). However, Mission 1 is much more
skewed toward zero fails, corresponding to roughly a 20-25% chance of
each person throwing out a fail. This makes sense, as a two-fail
result on the first mission can be costly to the evil team. (A quick
binomial check of this claim appears at the end of this section.)

Figure 7: How often zero, one, or two fails come out for
one-fail-required missions.

We also analyze how evil coordination missions affect the chance of
good or evil winning the game. Removing from consideration games where
evil never gets the chance to go on any mission, Table 4 summarizes
the game outcomes when there are no evil coordination missions vs.
when there's at least one. Evil's win ratio in the presence of
coordination missions (.76) is higher than when there are no
coordination missions (.60), although this effect does not reach
significance (p=.338).

Table 4: Game victor under the conditions of zero or at least one evil
coordination mission.

                                     Evil   Good
    No evil coordination missions     15     10
    1+ evil coordination missions     22      7

Figs. 8 and 9 break down the effect of evil coordination further.
Fig. 8 takes into account evil's coordination performance -- success
means throwing out exactly the number of fails needed to fail the
mission, and failure means throwing out more or fewer fails than
needed. Fig. 8 separates the data into three conditions: games where
evil mostly failed at coordinating (14 games), games where evil both
succeeded and failed at coordinating once (4 games), and games where
evil more often succeeded at coordinating (11 games). Fig. 9 shows how
evil's win rate changed as the number of coordination missions in the
game increased.

Figure 8: Evil win rate broken down by ability of evil to coordinate.

In all cases, there is not enough data to draw any definitive
conclusions. However, contrary to expectations, it is possible that
evil coordination situations might be slightly beneficial to the evil
team.
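
As a quick check of the Fig. 7 random-chance claim above, a minimal
sketch of the underlying binomial model (two evil players, each
independently playing a fail card with probability q):

    # Minimal sketch of the binomial model behind Fig. 7: with two evil
    # players each failing independently with probability q, the chance
    # of 0, 1, or 2 fail cards is (1-q)^2, 2q(1-q), q^2.
    def fail_distribution(q):
        return (1 - q) ** 2, 2 * q * (1 - q), q ** 2

    print(fail_distribution(0.5))    # Missions 2-3: (0.25, 0.5, 0.25)
    print(fail_distribution(0.225))  # Mission 1 estimate: ~(0.60, 0.35, 0.05)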

5 DISCUSSION

We analyzed data from 61 games of Avalon. We found that games of size
9 were unusual in the amount they were played (less popular), how long
they lasted (shorter), and game difficulty (good team more likely to
win). Some games might require adjustment to their difficulty. In
particular, 9-player games might require more evil characters, and 7-,
8-, and 10-player games might require a slight good handicap. Typical
game time is more than one hour.


Figure 9: Evil win rate broken down by how many times during the game
evil needed to coordinate.

In this particular dataset, Percival reveals resulted in slightly
more Merlin assassinations; evil failing the first mission resulted
in more evil wins; and the presence of evil coordination missions
resulted in more evil wins. One possible explanation for this last
finding is that it is hard for good people to reason about teams
where more than one of the members was evil, and so they may be more
likely to make decisions that assume only one evil person was on the
team.3 However, more data might reveal these trends to be
spurious/random noise.

3If evil does indeed benefit from coordination issues, this might
interestingly increase the value of Zhu-Brown strategies (multiple
evil players proposing multiple evil people on their teams).

A notable gap in the dataset is the general absence of two-person
first missions failed by an evil player on the team. In the future, it
would be interesting for evil to experiment with failing two-person
missions to see if this strategy might be beneficial for evil overall.
A more detailed analysis of Percival claim timing would also be good
to do with more data. Another promising avenue of future work would be
to analyze the effect of rejecting missions on good and evil's chance
of winning.

5.1 Conclusion

Hidden role games like Avalon provide a large space for both game
design (e.g. number of players, set of specialized roles) and player
strategy (e.g. failing the first mission, claiming Percival, team
approval strategies, etc.). As such, collecting and analyzing data
under different game conditions can be useful in improving the player
experience and evaluating the strength of different strategies.
Although limited by the amount of data, this work represents a
preliminary step in the direction of analyzing the gameplay of Avalon.

ACKNOWLEDGMENTS

The author would like to thank Kristy Gardner for kick-starting this
collective Avalon Problem in the department, Ryan Kavanagh for taking
better care of the Avalon stats sheet than the author, and Liam K.
Bright for listening to the author's shaky probability math.

REFERENCES

[1] Don Eskridge. 2012. The Resistance: Avalon. Indie Boards and
Cards. Board game.


CONFIDENTIAL COMMITTEE MATERIALS

SIGBOVIK 2017 Paper Review

Paper 92: Is it Percival time yet?:

A preliminary analysis of Avalon

gameplay and strategy

Percival

Rating: Vote your conscience

Confidence: I'm so confused, guys. . .

It's no fun when Merlin gets assassinated or evil wins, so Supposition
A sounds like something a spy would say. As a result, I question one
of this paper's fundamental premises. Is the author a spy?!
Regardless, this paper is a first-class analysis of our collective
obsession with Avalon.

Percival II

Vote: Approve

Confidence: We've got to do this.

I'm totally not evil, and I'm the real Percival. I support this
mission.

Ryan Kavanagh

Rating: Approve

Confidence: I think Percival is the real Percival?

No more Percivals, please! Think of the poor stats sheet!

Normally we would recuse ourselves from reviewing papers we're
involved with,1 but the program committee couldn't think of anybody
who hadn't contributed to this paper's data set. This paper provides a
fun analysis of Avalon's innate funness. Section 4.5 claims there was
insufficient data to perform a reliable chi-squared analysis.
Consequently, I encourage the author to switch her research area to
the analysis of Avalon and hire the program committee as research
assistants so that we can play more Avalon, err, I mean, help her
collect important data for a follow-up paper.

1Because SIGBOVIK is a serious conference with serious reviews.


11

Dr. Boozehead, or How I Learned to Stop Worrying and Get Drunk:
Design Principles and Analysis of Drinking Games in the Silicon Age

Kelvin M. Liu-Huang

Carnegie Mellon University

kmliu@cmu.edu

Emily J. Simon

Carnegie Mellon University

ejsimon@andrew.cmu.edu

Abstract

From beer pong to beer bong, drinking games have a storied past, seated
at the intersection of sublimating puritanical repression [1] and the
great ape's boundless curiosity. Animals utilize play to express
themselves and practice behaviors. For humans, play is so important that
rules of play are codified into games. Yet, scientific study of human
games and game design has been greatly underrepresented, and even more
so for drinking games. In the present study we sought to distill the
essential principles of those traditions, which lie at the intersection
of interactive gaming and indulging in poisonous fluids. Through careful
field analysis and repetitive study, we propose that concrete
prerequisites, mental requirements, and social abetment are all
fundamental attributes of a successful drinking game. To evaluate our
design principles, we designed three novel drinking games: beer
baseball, soccer shots, and beer nim. We also evaluate the popular
drinking game beer pong as a benchmark. Comparing our innovations to
the benchmark, we demonstrate the effectiveness of applying our design
principles, showing that beer baseball and beer pong knock it out of the
park, while beer nim (our straw man) eats dirt.

1. Introduction

Animals evolved play to communicate and manipulate [2][3]; learn
aggressive, predatory, and foraging behaviors [4]; and improve
cognitive function [5]. To the great ape, play is so important that
rules of play are codified into numerous philosophies called "games."
Popular games are standardized internationally, generously funded,
vicariously enjoyed by large fractions of the population, and game
elders typically receive the highest salaries at learning institutions
[6]. Furthermore, games (as well as all other activities) are often
integrated with ingestion of poisonous liquids to stimulate social
interaction and enjoyment. In many ways, these "drinking games" may be
regarded as paragon forms of play because they achieve so many
different objectives of play.
Despite the importance of such games and the complexities of game
design, very little formal study and scientific discourse have been
devoted to game design. Ordinary tabletop games require a delicate
balance of tool complexity, rule complexity, computational complexity,
game-to-game variance, audience appeal, and mechanical and narrative
harmony.

The design of drinking games requires arguably even more sensitivity.
Between the innately chaotic environment of parties, the need to
facilitate communication, and judicious application of refreshments
[1], drinking games embody the highest achievements of human design,
gathered from the likes of Chess, Go, or Ping Pong. Yet the design and
study of drinking games is even lower in the pecking order than
ordinary games. Even fewer serious examinations have been made of
drinking game design [1][7][8]. Popular with men and women fond
of classical languages, drinking games have historically been typecast
as intellectually and socially inferior. At the risk of resorting to
platitudes, we know that correlation does not imply causation [9],
so this alone should not be an indictment of the noble pastime of
drinking games.

2. Design Principles

A fecund party is a palpable maelstrom of activity, bass/brainlessness,
clumsiness, dancing, and entropy. Look for these symptoms using the
simple acronym, ABCDE. A drinking game should satisfy all the
principles of game design, as well as judiciously account for these
party properties.

2.1. Easy to organize (ABCDE)

Due to spontaneity and inattentiveness (A), a drinking game must require
minimal planning, simple setup, and little infrastructure. Due to heavy
bass (B), brainlessness (B), and entropy (E), mobilizing players and
organizing a game must be simple. Props (if used) should be low cost and
ubiquitous, or at least portable. There should be few and simple rules
to explain due to (A) as well as interjection from the hard bassline
(B). Due to brainlessness (B), clumsiness (C), and entropy (E), the
drinking game should be low risk. Messes and injuries are sure to dampen
a thriving party. Above all, the drinking game needs to be technically
feasible given the specific parameters of the party. Space for a large
game can depreciate due to dancing (D) and entropy (E). Too much bass
(B) might also drown out the speaking portion of some would-be drinking
games.

2.2. Social (NP, P=NP, KEG)

Parties must facilitate social interaction to avoid noncompliant
prairie-dogs (NP), individuals who wallflower, stand alone, or look
around perplexed. In general, we don't want players not playing (P=NP).
A drinking game readily serves this need by providing a platform for
players to communicate [1]. Meanwhile the game itself cannot require
too much focus, so as to allow informal conversation. To facilitate
social networking, a drinking game ideally allows players to join or
leave as they please. We introduce a metric for this fungibility called
the KEG (keep entering/exiting games) norm. Though some partygoers may
wish to linger on one game, the option to devote only an aliquot of time
is vital. Therefore we must always remember the KEG!

2.3. Appropriate difficulty (NP-complete)

The computational complexity of such critically acclaimed board games
as Agricola and 7 Wonders tends to be unpalatable for a drinking
environment. Other forms of play, such as football, hunting, and
monster truck driving, carry a level of risk and finesse that should
not be expected of inebriated patrons, due to brainlessness (B) and
clumsiness (C). That is not to say that refreshments do not go well
with a titillating round of Elder Dragon Highlander, but rather, the
choice of drinking game depends heavily on the mood and flavor of the
party.

Because intoxication impairs judgment (B), a drinking game has an
ideal runtime complexity between O(0) and that of ordinary games,
inclusive. As with ordinary games, the level of difficulty needs to be
carefully chosen, commensurate to the mood and audience. The game is
boring if too easy and either boring or draining if too hard. That
optimal level just happens to be lower than for ordinary games. More
importantly, a drinking game should have a runspace complexity much
less than that of ordinary games because impaired memory capacity is
one of the first symptoms of intoxication (B). We must avoid a game
that is completely not playable (NP-complete).

2.4. Low cost, high reward (PING PONG)

Given the whimsical yet effusive milieu (A) of a party, patrons should
not feel too physically, mentally, or emotionally drained after a
single game. Therefore we propose the following heuristics to
optimally calibrate the primary investment energy gift (or PING)
against the principle output and gain (PONG). (1) A single game should
not occupy an unreasonably large aliquot of the party time. (2)
Players should not have to learn unreasonable skills. (3) Players
should receive maximum fun output in exchange for participation input.

Points (1) and (2) require a reduction in the activation energy for
playing the game, due to inattentiveness (A) and brainlessness (B).
This disqualifies widely lauded games such as Settlers of Catan with
the Cities and Knights expansion, Warhammer 40,000, and Dungeons and
Dragons. These games may offer high payoff in


the currency of intrigue and imagination, but prove unfeasible for the
passing tourist without dedicating hours or weeks preparing and learning
the strategy. Unless the social norms of partying undergo a dramatic
paradigm shift to accommodate pre-party strategy sessions and avatar
development, drinking games will remain limited to simple setup and
rules.

To satisfy point (3), players cannot be excessively focused on winning
or losing (since only a fraction of players can win each game). Point
(3) comes attached with the caveat that anyone who does not find the
game "fun" will be ceremonially denominated as an "excretory
celebrant." It thus follows that any reasonable partier should find
the game entertaining and exciting in a manner linearly related to BAC.

2.5. Drinking is integral (DUI)

We all like games, from corn hole to cricket to Chrono Trigger, and we
all like drinking, but drinking games stand alone. While drinking can be
performed alongside almost any activity, games that are not designed
with drinking in mind often fail to synergize logistically and
thematically. Therefore we propose the Drinking is Utterly Indispensable
principle (the DUI principle). A drinking game must be unable to
progress without players taking their apportioned drinks [1].

For example, while Twister surely makes a fun party game, drinking is
at best encouraged but not mandatory. In contrast, flip cup cannot
progress until the beverage has been downed (or players start flipping
full cups, whereupon the game surreptitiously transforms into Stand on
a Sticky Wet Floor). Secondly, drinking games are reserved for
parties. If one were to play them sober, they would be reduced to
"games for people with poor fine motor dexterity" due to (C), or
alternatively, "stupidly easy games" due to (B). Third, winning and
losing, and increasing inebriation by proxy, should not make the game
less fun [1]. In fact, a good drinking game ripens with age as the
party progresses!

3. Examples

3.1. Beer Pong

Few drinking games are as popular and time-honored [1] as beer pong,
also known as Beirut [10]. Beer pong is often considered the
progenitor of the shooting into cups (SIC) drinking game archetype.
Thus, by natural selection, one would expect beer pong to be a
highly optimized drinking game which satisfies many of our design
principles.

With regards to feasibility, beer pong requires virtually no planning
(just selecting two or four players), ubiquitous resources (red solo
cups and ping pong balls), and little maintenance. However, the full
rule set can be quite cumbersome and varies dramatically between the
east and west coast populations. Furthermore, the large number of cups
poses a high risk of a pathogenic state known as a "party foul."

Socially speaking, players can freely apply KEG if they can find a
substitute, or even take "celebrity shots." Watching balls land in
cups can be as exciting for the players as for the crowd. The strategy
is simple enough for any patron to enjoy. In fact, it may be too
simple: we estimate an O(0) runtime complexity for determining the
optimal strategy. Beer pong satisfies low-cost, high-reward in many
ways. Little preparation and time are necessary. Games can often be
decided by the last cup, providing excitement until the last moment.

Drinking is heavily integrated into beer pong, both thematically and
mechanically. The cups both hold and are stabilized by the beverage.
However, this historical methodology has been hotly contested by
hygiene scientists. Furthermore, inebriation conveniently amplifies
the dexterity challenge. One potential issue, though, is that the
loser drinks more, becoming less dexterous, which positively feeds
back into losing even more.

As we can see, aside from the risk of party foul, excessive
simplicity, and potential positive feedback, beer pong is virtually a
paragon of design principles. So can we do better? We will demonstrate
that improvement is in fact possible.


3.2. Beer Nim

We designed a game entitled beer nim, which is exactly equivalent to
the classical game nim, played with beer cups instead of stones
[11]. A number of red solo cups filled with arbitrary quantities of
beverage are arranged into three groups. Players take turns drinking a
number of cups from a single group (instead of removing a number of
stones from a single pile). The player to drink the last cup wins.
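
Since beer nim is exactly nim, optimal play reduces to the classical
nim-sum (XOR) strategy; a minimal sketch under the normal-play rule
stated above (the last cup drunk wins):

    # Minimal sketch of optimal beer nim play via the classical nim-sum
    # (XOR) strategy; `groups` holds the number of full cups per group.
    from functools import reduce

    def best_move(groups):
        """Return (group index, cups to drink) leaving a nim-sum of zero,
        or None if every move loses against perfect play."""
        nim_sum = reduce(lambda a, b: a ^ b, groups)
        if nim_sum == 0:
            return None  # losing position: drink one cup and stall
        for i, g in enumerate(groups):
            target = g ^ nim_sum
            if target < g:
                return i, g - target
        return None

    print(best_move([3, 4, 5]))  # (0, 2): drink 2 cups from the first group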

Beer nim requires little planning, simple setup, little maintenance,
relatively few and simple rules, and low risk of party foul. However,
it fails to adhere to, and even actively opposes, virtually all other
design principles. Socially, beer nim can only be played with two
players, facilitates little conversation because it requires so much
thinking, and requires a great deal of attention. It does, however,
allow the crowd to vicariously play the game mentally. In terms of
difficulty, the runtime complexity of beer nim is technically O(t),
though the constant is much larger than for the other games discussed.
Furthermore, the runspace complexity is significantly larger and left
as an exercise to the reader. This is more problematic due to the
impaired memory constraint.

Based on utter failure to satisfy most of the design principles, we must
conclude that beer nim is a terrible innovation. Therefore we can use
beer nim as a lower bound benchmark.

3.3. Beer Baseball

In preparation for an Olympics-themed house party, Gisolfi and
Liu-Huang developed a sports-themed drinking game, beer baseball. We
describe beer baseball's rules below and compare its funness and
adherence to design principles against beer pong, the benchmark.

3.3.1. Setup

∙ 2 teams of 4+ players (do not have to be the same size)

∙ Small table

∙ Line up 4 "base" cups moving away from the shooter

∙ Put 6 additional "out" cups, one on each side of second, third,
and home base

3.3.2. Gameplay

∙ Teams take turns "batting" and "fielding"

∙ Batting:

o Players on the batting team take turns trying to shoot the ping pong
ball into base cups

o During her turn, a batter can keep shooting until she makes a base
cup or gets out

o Outs:

▪ If the batter misses the cups, he gets a "strike"

▪ If a batter gets three strikes, he is out.

▪ If the batter ever makes an out cup, he is immediately out
regardless of the number of strikes

▪ After three outs, the inning ends, and the teams switch batting and
fielding roles

o If the batter makes a base cup, he takes that base by moving to that
side of the table (1st base = right side, 2nd = opposite, 3rd = left,
home = he goes all the way around)

o Whenever a batter returns home, each fielder must take a drink

∙ Fielding:

o Whenever a batter takes a base, a fielder can choose to make a play

▪ If so, she tries to shoot for the same base cup made by the batter

▪ If she makes it, the batter is out

▪ If she hits an out cup, it is an error, and all the batters advance
an extra base

▪ If she misses or hits any other cup, nothing happens and she does
not get another try


3.3.3. Alternate rules

1. At the start of fielding, each fielder chooses a base and is the
only one who can defend that base (requires teams of 4+).

2. If a fielder hits a different base cup than the one made by the
batter, it is a "foul." Nothing happens for a foul; the batter does
not get a strike.

3. Whenever the batter misses the cups but does hit (anything on)
the table, players on the batting and fielding teams may both race to
retrieve the ball and touch it to the table. If a fielder succeeds, it
is a strike. If a batter succeeds, it is a "ball." If a batter gets
two balls, that batter walks to first base for free.

3.3.4. Analysis

Just like beer pong, beer baseball is a SIC (shooting into cups)
game. As such, beer baseball shares the same desirable properties in
terms of setup, low-cost high-reward, and integration of drinking.
However, beer baseball is more engaging. Players on both the batting
team and fielding team have a role to play at all times. Using
alternate rule 3, it is even possible to engage all players during
each shot. Furthermore, there is nontrivial strategy involved in
deciding when to field. Therefore we estimate that the runtime
complexity of beer baseball is O(t) in the duration of the game.
Having nonzero strategy means the crowd can also engage in discussion
and mock strategizing. Considering these points, we believe beer
baseball satisfies more design principles than beer pong, and is
likely to be a better game.

3.4. Soccer Shots

3.4.1. Setup

∙ 2 teams (teams must be the same size) of 1-3 players (can
accommodate even more players, but the table may get crowded)

∙ Large table

∙ A ping pong ball

∙ Two empty six-pack cartons (or some other way to mark the goals)

3.4.2. Gameplay

∙ Players run around the table using the index and middle fingers of
one hand of their choice

∙ The objective is to flick the ping pong ball into the opposing
team's goal

∙ Whenever a team scores, the opposing team members must each drink a
shot

∙ No flying: either the index or middle finger must be in contact with
the table at all times

∙ No sliding: you may only move by running along the table using index
and middle finger

∙ Players cannot touch the ball with anything besides the index
finger, middle finger, and back of hand of the chosen hand

∙ If a player breaks a rule, he must drink a shot

3.4.3. Analysis

Among all the games described, soccer shots boasts the easiest setup,
requiring only a table, a ping pong ball, and two readily available
markers (such as six-pack cartons). It is also easy to organize in
all other respects, with few rules. Socially, soccer shots
inherently requires communication and engages the audience. While
soccer shots is easier than soccer, it still requires strategy with
respect to formation and coordination. Therefore we estimate that
soccer shots has a runtime complexity of O(t). Drinking is not
integral, because the game is identical without beverages, even though
"shots" is in the name.

4. Discussion

We sought to codify the core principles common to drinking games.
Through close analysis and repeated playthrough of the aforementioned
games, we found that the proposed principles are indispensable for a
successful drinking game. Through creativity and adherence to the
principles, we also designed a drinking game, beer baseball, which
satisfies more design principles than even the highly regarded beer
pong, our benchmark. While more testing is required, theory suggests
that beer baseball is better than beer pong.


5. Acknowledgements

The authors would like to thank Junxing Wang for inspiration and
numerous discussions on game design; Nick Gisolfi for helping to compile
and prune the list of core principles, and for extensive play testing;
and Zachary Batts for extensive play testing.


CONFIDENTIAL COMMITTEE MATERIALS

SIGBOVIK 2017 Paper Review

Paper 79: Dr. Boozehead, or How I learned to stop worrying and get
drunk: Design principles and analysis of drinking games in the silicon
age

Robert J. Simmons, second general chair of the 26ish family of SIGsBOVIK

Rating: Unpleasantly sober

Confidence: Mostly shattered by JavaScript

There is yet again a drinking paper that doesn't cite my seminal work.
Kids these days.

12

A Boring Follow-Up Paper to "Which ITG Stepcharts are Turniest?"
Titled, "Which ITG Stepcharts are Crossoveriest and/or
Footswitchiest?"

Ben Blum

bblum@cs.cmu.edu

Abstract

In which I deliver on last year's promise of future work.

Categories and Subject Descriptors D.D.R. [Exercise and
Fitness]: Arcade Dance Games

Keywords crossovers, footswitches, jacks, sidefoots

1. Introduction

Let's resume right where I left off in my last paper (Blum 2016),
shown in Figure 1. Unlike mainstream conferences, SIGBOVIK doesn't
make me waste space repeating all the background material, and I can
just say go read that paper first and get back to me. It's probably a
lot funnier than this one anyway, which is gonna be sort of dry, and
really of interest only to other ITG players who already know what's
going on.

The TL;DR is that I made a program which figures out how to foot
stepcharts in the least crossovery possible way (short of
double-stepping everything), then found which charts ultimately had
the most. The algorithm also naturally identifies footswitches and
jacks, and sometimes it's smarter than me in amusing ways. I put all
the goodies in a giant spreadsheet at
http://tinyurl.com/crossoveriest, and the program itself is of course
freely available at
https://github.com/bblum/sigbovik/blob/master/itg/code/ITG.hs.

Figure 1. (okay twist your head to read this) [The figure reproduces
the previous paper sideways; only scattered characters of it survived
extraction, so its body is omitted here.]

2. Revisiting Turniness (Flashback Scene)

Recall Table 1 from the last paper, in which I left undefined the
facings for LL, DD, UU, and RR, the four footswitches.

Permission to make digital or hard copies of part or all of this work
for personal or classroom use is granted without fee, provided...
honestly, provided nothing. The ACH is already flattered enough that
you're even reading this notice. Copyrights for components of this
work owned by others than ACH must be laughed at, then ignored.
Abstracting with credit is permitted, but abstracting with cash is
preferred. And please tip us, for the love of Turing.

SIGBOVIK '17 Pittsburgh, PA, USA

Copyright © 2017 held by owner/author(s). Publication rights licensed
to ACH.

ACH ... $15.00

Table 1. Facing directions.

                  Right foot
             ←     ↓     ↑     →
Left   ←     ?     UR    UL    U
foot   ↓     DL    ?     L     UL
       ↑     DR    R     ?     UR
       →     D     DR    DL    ?

footswitches. I show a typical D D /U U footswitch pat tern in
Figure 2(a), and typical L L/R R switches (hence forth "crossover
footswitches") in Figure 2(b). To step these patterns, the player
still alternates feet as usual, but must lift one foot off the
repeated arrow before stepping it with her other foot. Chart authors
will often, but not always, include a "mine cue" (shown in the figure)
to hint that the second foot should switch onto the same arrow.


Figure 2. Footswitches of various crossoveriness/facing: (a) DD, UU;
(b) LL, RR; (c) RR, LL.

It is tempting to assign the facings L, U, U, and R
respectively to the LL, DD, UU, and RR footswitches.
However, Figure 2(c) shows that if a footswitch begins with a crossover
on U, the facing should be reversed: the RR footing should face
L, and LL should face R. "Spin switches" with D facing are also
theoretically possible, arising from patterns such as L U R D D L or
L D R U U L, or similarly, "270-switches", as shown in Figure 4(a).

Before I realized that, I modified the turniness algorithm (Blum 2016)
to face footswitches as above, and it surprised me with charts of T >
2, in excess of the theoretical maximum! I show one such chart in
Figure 3(a), in which the step from LL (φ = L) to UL (φ = DR) has
individual T = 3, and so on for UL → RU, UL → RU, and RR. The two
RR → LL steps are candles (T = 2), so the whole chart averages
(4 × 3 + 2 × 2)/6, i.e. overall T = 8/3.

Indeed, when I further modified the algorithm to force DD switches
to face D (i.e., always facing the direction of the repeated arrow),
it produced the chart shown in Figure 3(b), with overall T = 3. (Note
its resemblance to the basic spin pattern, L D R U, whose T = 2.)

To fairly represent a human player's desire to step in the least turny
of ambiguous ways, I extended the algorithm to provide either the
assigned facing, from above, or its polar opposite, chosen at runtime by
whichever is closer to the presequent facing. This restores the maximum
overall chart turniness to T = 2, a new example of which is shown in
Figure 3(c). Figure 4(b) also shows a real-world chart exhibiting this
pattern.
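Read literally, that tie-break picks between a facing φ and φ + 180°, whichever is angularly closer to the previous facing. Here is a minimal sketch of that rule under my reading (a paraphrase, not the paper's actual code):

    type Facing = Double  -- facing angle in degrees

    -- Choose the assigned facing or its polar opposite, whichever is
    -- closer to the previous ("presequent") facing.
    closerFacing :: Facing -> Facing -> Facing
    closerFacing prev phi
      | angDist prev phi <= angDist prev opp = phi
      | otherwise                            = opp
      where opp         = normalize (phi + 180)
            angDist a b = let d = normalize (a - b) in min d (360 - d)
            normalize x = x - 360 * fromIntegral (floor (x / 360) :: Integer)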

Figure 3. The turniest footswitch patterns: (a) T = 8/3(?); (b) T =
3(?); (c) T = 2; (d) T = 2+ε. (a) and (b) are false positives (see
prose), while (c) and (d) provide theoretically maximal turniness.

However, note that individual steps may still have T = 3, as shown in
Figure 3(d). In this example, the step DL → LL assigns LL to face L,
but the subsequent step to LD cannot avoid facing UR. The reason
charts still


cannot exceed overall T = 2 is that setting up such a situation
requires a T = 1 step, which negates the benefit. A chart could
conceivably end right before such a step, sneaking through some small
ε extra turniness (VII 2014) (similar to the case of 270s in (Blum
2016)), but sustained average T > 2 remains impossible.

Another approach could assign such a footswitch the opposite footing
of the previous facing, regardless of the arrow itself; so in this
case the LL would face UL, and each step would have exactly T = 2.

Figure 4. Real-world examples of turny footswitches: (a) Web 33,260.8
(12, Rikame's Simfiles 5); (b) Fuego (12, Best of Gazebo).

3. Analyzing Crossoveriness

The major flaw of the turniness algorithm (Blum 2016) was that it
didn't care whether a stream started with the left or right foot; it
simply exploited the symmetry of Table 1 to find turniness regardless
of footing. Hence, it could not distinguish technical footing patterns
which


data Step = L | D | U | R | Jump deriving Eq

data AnalysisState = S { steps :: Int, xovers :: Int, switches :: Int, jacks :: Int,
                         lastStep :: Maybe Step, doubleStep :: Bool, lastFlip :: Bool,
                         lastFoot :: Bool, stepsLR :: [Bool] }

commitStream :: AnalysisState -> AnalysisState
commitStream s = s { xovers   = xovers   s + if f then ns - nx else nx,
                     switches = switches s + fromEnum (f == lastFlip s && doubleStep s),
                     jacks    = jacks    s + fromEnum (f /= lastFlip s && doubleStep s),
                     lastFlip = f, stepsLR = [] }
    where ns = length $ stepsLR s
          nx = length $ filter not $ stepsLR s
          -- Reverse the stream's footing if more L/R steps were crossed over than not.
          f = nx * 2 > ns || nx * 2 == ns && ((switches s > jacks s) == lastFlip s)

analyzeStep :: AnalysisState -> Step -> AnalysisState
analyzeStep s step
    | step == Jump            = (commitStream s) { lastStep = Nothing, doubleStep = False }
    | lastStep s == Just step = stream (commitStream s) { doubleStep = True }
    | otherwise               = stream s
    where foot = not $ lastFoot s
          -- Record whether we stepped on a matching or crossed-over L/R arrow.
          addStep ft L steps = steps ++ [ft]
          addStep ft R steps = steps ++ [not ft]
          addStep ft _ steps = steps -- U/D don't help to determine L/R footing.
          stream s = s { steps = steps s + 1, lastStep = Just step, lastFoot = foot,
                         stepsLR = addStep foot step $ stepsLR s }

analyze :: [Step] -> AnalysisState
analyze = commitStream . foldl analyzeStep (S 0 0 0 0 Nothing False False False [])

Figure 5. Pseudocode description of the crossoveriness and
footswitchiness algorithm.

could affect the way a human would play the chart. It often played
charts inhumanly, facing backwards and/or stepping 270s for most of a
song.

So, my contribution this year is an algorithm which plays more
naturally, and which consequently can report on a chart's technical
patterns beyond simple turniness. The algorithm realizes three
principles of ITG:

1. Alternate feet as much as possible.

2. Step crossed-over as little as possible.

3. Jumps or jacks allow the player to reset her footing.

Figure 5 describes the algorithm in pseudocode. To summarize it in
prose:

• Split the chart into several units of stream, the boundaries of
which occur at every jump and any time an arrow is repeated.

• Step each stream with alternating feet.

• Compare the number of matching steps (i.e., L foot on L arrow or
R on R) versus crossover steps. If the latter is greater, re-step the
stream with opposite feet from before (this kills the crossovers).

• After flipping each stream, if necessary, count the total crossovers
in the whole chart.
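To make the prose concrete, here is a minimal driver for the Figure 5 code (the chart fragment is hypothetical, and the sketch assumes Figure 5's definitions are in scope):

    exampleChart :: [Step]
    exampleChart = [L, R, L, R, Jump, U, D, D, U]

    main :: IO ()
    main = do
      let s = analyze exampleChart
      putStrLn $ "steps: "      ++ show (steps s)
      putStrLn $ "crossovers: " ++ show (xovers s)
      putStrLn $ "switches: "   ++ show (switches s)
      putStrLn $ "jacks: "      ++ show (jacks s)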

4. Analyzing Footswitchiness

Because we split the chart whenever an arrow is repeated, figuring out
whether that arrow is stepped with different feet on either side of
the stream boundary is a natural consequence of figuring out how to
step each stream individually. This is also shown in Figure 5's
pseudocode. To summarize in prose: if neither stream needed to be
flipped (or if both did), then the alternating-feet assumption holds,
and the repeat must be a footswitch.

5. Analyzing Jackiness

A jack occurs when a repeat arrow is stepped with the same foot,
rather than alternating (hence the name, from "jackhammer"). You've
got the idea by now, right?
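The boundary rule of Sections 4 and 5 condenses to one comparison, mirroring commitStream's f == lastFlip bookkeeping (a paraphrase of the existing logic, not anything new):

    -- A repeated arrow is a footswitch when the streams on either side of
    -- the boundary were flipped the same way, and a jack when they were
    -- flipped differently.
    classifyRepeat :: Bool -> Bool -> String
    classifyRepeat flippedBefore flippedAfter
      | flippedBefore == flippedAfter = "footswitch"
      | otherwise                     = "jack"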



Algorithm 1: HeuristicallyDoublestep(S)

Input: S, a step sequence s0 ... sn
Invariant: ∀ si, sj ∈ S, j = i + 1 → ¬StreamBoundary(si, sj)
Input: nflip, heuristic minimum length
Input: %flip, heuristic percentage, initially 100

1  for i ∈ length(S) ∧ ¬defined(iDS) do
2      S′ ← {sk | sk ∈ LRs({sj | sj ∈ S ∧ j ≥ i}) ∧ k ≤ nflip}
3      if |S′| = nflip ∧ |Crossovers(S′)| ≥ %flip × |S′| then
4          iDS ← i
5      end
6  end
7  if defined(iDS) then
8      if iDS = 0 then
9          iDS ← FindUnflippedSection({si | si ∈ S ∧ i ≠ 0})
10     end
11     HeuristicallyDoublestep({si | si ∈ S ∧ i < iDS})
       HeuristicallyDoublestep({si | si ∈ S ∧ i ≥ iDS})
12 else
13     CommitStream(S)
14 end

Figure 6. The doublesteps in some streamy charts must be identified,
and the stream split, lest "too much" of the following stream appear
completely crossed-over: (a) Paradise Lost (16, Cirque du Lykan);
(b) Heartbeat (13, TranceNation).

6. Forced Doublesteps

After painstakingly translating the pseudocode from Figure 5 into a
real implementation, I found it vulnerable to false positives when a
single double-step could force a long section of stream to be stepped
backwards. As an extreme example, consider the pattern LR LR LR LR D
LR LR LR LR. Because no jumps or jacks allow the footing to be
reset, either the first or last 4 pairs of LRs must be stepped
crossed-over.
Figure 6 shows two examples from real-world charts: in (a), the player
must not alternate feet across the measure break, while in (b), two
L arrows are replaced with rolls for an artistic visual accent,
which must be stepped twice each. On inspection, these charts should
be stepped with no crossovers, but were evaluated otherwise (730 XOs
(27.9%) and 173 XOs (9.6%), respectively).

To handle such cases, I extended the algorithm with a heuristic to
identify when a stream becomes "too crossed over" for "too long", and
to force a doublestep by splitting the stream to flip it back.
Algorithm 1 shows the implementation. I will not summarize how it
works due to space limitations, but the description should be
intuitive enough. The heuristic evaluated Figure 6's charts as having
0 crossovers each, and 1 and 21 doublesteps, respectively. In my
analysis next section, I will use nflip = 9 (determined by
inspection of a few favourite charts, which should be scientifically
rigorous enough for anyone).
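With %flip at its initial value of 100, line 3 of Algorithm 1 amounts to scanning for a window of nflip consecutive L/R steps that are all crossed over. A toy sketch of just that trigger (my paraphrase; the Bool-per-step encoding is made up, with True marking a crossed-over step):

    tooCrossedOver :: Int -> [Bool] -> Bool
    tooCrossedOver nflip stepsLR =
      any and [ take nflip (drop i stepsLR)
              | i <- [0 .. length stepsLR - nflip] ]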

7. Evaluation

Our experimental corpus has grown considerably since last year, and
now comprises 11,666 stepcharts. I ran the crossoveriness/etcetera
algorithm on all of them, and counted the total steps (not including
jumps), crossovers, footswitches, jacks, forced doublesteps, and
crossover switches for each. I also grouped the charts by author and
by song pack to calculate each author's/pack's overall
crossoveriness/etc. You can view the entire dataset at
http://tinyurl.com/crossoveriest.

Tables 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12 summarize the dataset as
"leaderboards" for each category. They present the data the same way
as last year (Blum 2016), so I won't explain it again. Suffice to say
that ITG enthusiasts should use their personal chart style preferences
to navigate these tables and find song or pack recommendations, or
feel proud of themselves, or whatever.

In the by-author analysis, I excluded authors with fewer than 10
charts, like last time. Also, in by-author and by-pack, I excluded 1-,
2-, and 3-foot charts, on the pretense that they often ignore the
alternating-feet assumption (though I also admit this biases the
analysis against DDR/Konami). Nevertheless, DDR charts generally ruled
the single-digits, even in footswitchiness, showing perhaps more
technical depth than I gave them credit for.

By the way, the theoretical maxima for XO%, FS%, and JK% are 50−ε,
100−ε, and 100−ε, respectively (VII 2014).


Ft.  Name                      Pack                   #XO
6    Autoload                  ITG 3                  54
7    Tetris                    CuoReNeRo M'PacK       58
8    Pulse                     CuoReNeRo M'PacK       94
9    MAX Forever               CuoReNeRo M'PacK       107
10   J-PARA SUPER MEGAMIX      CuoReNeRo M'PacK       141
11   Somebody I Used To Know   Best of Gazebo         114
12   Credens Justitiam         Stuff B Likes          136
13   Banshee Strikes           VocaJawnz              152
14   Slow Down                 Sexuality Violation 2  160
15   yoshikawa45 vs siesta45   Rikame's Simfiles 4    111
16   Your Best Nightmare       Undertale              97
     (no 17s-19s with ≥ 50 XOs)
20   Rainbow Dimension         Rikame's Simfiles 2    84
21   Teenage Dream             Sexuality Violation 2  280

Table 2. Charts with the most total crossovers (XOs).

Ft.  Name                      Pack                   XO%
4    DAM DIRIRAM               DDR 3rdMIX             27.3
5    STRICTLY BUSINESS         DDR 1st                21.4
6    STRICTLY BUSINESS         DDR 1st                20.9
7    MOBO☆MOGA                 DDR EXTREME            17.3
8    PARANOiA                  DDR 1st                19.6
9    Dazzlin Darlin            r21twins               22.0
10   Enchanted Journey         ITG Rebirth            19.3
11   Lune Noir                 r21freak Friendship    13.4
12   W'peg is Fucking Over     best of r21freak ii    14.3
13   The Sampling Paradise     The Paradise Sampler   15.6
14   Slow Down                 Sexuality Violation 2  13.3
15   yoshikawa45 vs siesta45   Rikame's Simfiles 4    10.8
     (no 16s-19s with ≥ 10% XO)
20   Rainbow Dimension         Rikame's Simfiles 2    10.7
21   Teenage Dream             Sexuality Violation 2  11.8

Table 3. Charts with the highest percentage of XOs among total steps.

8. Discussion

To verify the algorithm's accuracy, I manually inspected a random (read:
not random) sample of the charts at or near the top of the various
leaderboards (read: I played a lot of Stepmania). I also consulted a
leading expert in the field of automated ITG chart analysis (read:
myself), who reported that the algorithm is infinitely more accurate
than the prior state-of-the-art.

Honestly though, it works really well. Last year's algorithm was
often finicky and prone to all sorts of false positives, while this
one plays ITG in a recognizably human way (almost) without fail. It
was a joy to use.

Surprises. On occasion, the algorithm surprised me by stepping with
crossover footswitches which, at first glance, I would probably jack
or double-step. However, these always proved to be perfectly valid
alternative footings, in some cases requiring considerable look-ahead.

Ft.  Name               Pack                   #FS
6    Sweat Shop         ITG Rebirth 2 Beta     56
7    Silent Hill        DDR 3rdMIX             29
8    Pulse              CuoReNeRo M'PacK       41
9    Dr. Boom-Bombay    fort rapids vii        34
10   Eat 'Em Up!        Mute Sims 5            54
11   Nemeton            The Legend of Zim 4    97
12   Nemeton            Subluminal             140
13   Love Is Eternity   Subluminal             140
14   Switch             Getty                  347
15   Danse Macabre      Aoreo's Ariginals      201
16   Weird Science      Stamina Showcase       61
17   Arcane Apparatus   Tachyon Gamma          32
18   Metallic-A-        Oh Henry! Mad Stamina  27
19   Geronimo           Sexuality Violation 3  39
20   Scatman's World    Jimmy Jawns 2          22
21   He He He           Jimmy Jawns 2          4
22   Architecture       SPEEEDCOOOORE 4        24
23   Geronimo           Sexuality Violation 3  39

Table 4. Charts with the most total footswitches (FSs).

Ft.  Name                  Pack                FS%
3    Sweat Shop            ITG Rebirth 2 Beta  15.3
4    DROP THE BOMB         DDR 3rdMIX          9.8
5    MAKE IT BETTER        DDR 2ndMIX          20.0
6    Sweat Shop            ITG Rebirth 2 Beta  18.2
7    5.1.1                 DDR MAX             12.9
8    La Señorita Virtual   DDR 3rdMIX          8.4
9    PARANOiA KCET         DDR 2ndMIX          10.2
10   Delhi Ill             Mute Sims 6         11.6
11   Sweat Shop            ITG Rebirth 2 Beta  11.8
12   Nemeton               Subluminal          14.8
13   Love Is Eternity      Subluminal          16.8
14   Switch                Getty               15.7
15   Flames of the Sky     fort rapids vii     16.5
16   Mermaid Island        Tachyon Alpha       9.5
     (no 17s+ with ≥ 5% FS)

Table 5. Charts with the highest percentage of FSs.

Ft.  Name                      Pack                   #XF
     (no 8s- with ≥ 12 XFs)
9    Dr. Boom-Bombay           fort rapids vii        18
10   Toxic                     Sexuality Violation 2  12
11   Heart Shooter             VocaJawnz              44
12   Web 33,260.8              Rikame's Simfiles 5    16
13   Toxic                     Sexuality Violation 2  12
14   Fancy Footwork            Cirque du Zeppelin     40
15   yoshikawa45 vs siesta45   Rikame's Simfiles 4    20
     (no 16s+ with ≥ 12 XFs)

Table 6. Charts with the most crossover footswitches (XFs). (Here
I chose 12 as the cut-off to exclude a bunch of ambiguously-patterned
charts from DDR.)


Author         Charts  Total Steps  XO%
Konami         530     114623       8.03
sssmsm         41      17219        6.55
NEMORIGINAL    44      20165        5.36
M. Emirzian    23      9307         5.16
J. DeGarmo     16      4362         4.86
D. Renzetti    18      7877         4.47
R. McKanna     47      14855        4.23
M. Puls        26      8379         4.18
King of Light  24      8817         4.13
D. Bernardone  217     76290        4.10
D. D'Amato     107     36265        3.92
bblum          32      32382        3.76
...
B. Vergara     13      15454        0.091
Aoreo          21      27990        0.089
Zaia           368     448460       0.080
t0ni           85      128964       0.058
Burn           27      60052        0.057
Dirk           12      21996        0.055
Happy Feet     30      57888        0.040
@@             63      199530       0.026
Arvin          79      108612       0.023
Drazu          153     221460       0.021
teejusb        11      11298        0.018
Fraxtil        19      25612        0

Table 7. Chart authors with the highest/lowest XO%.

Author         Charts  Total Steps  FS%
Konami         530     114623       2.29
bblum          32      32382        1.29
M. Puls        26      8379         1.16
R. McKanna     47      14855        1.11
mudkyp         63      42792        1.09
S. Venkat      24      11084        0.96
K. Ward        281     86475        0.89
xRGTMx         19      11734        0.88
sssmsm         41      17219        0.85
D. Bernardone  217     76290        0.83
ATB            31      20673        0.82
Happy Feet     30      57888        0.82
...
Hsarus         18      70162        0.070
Drazu          153     221460       0.057
@@             63      199530       0.037
Revolver       11      8302         0.036
T. Swag        13      22909        0.031
Dirk           12      21996        0.023
B. Vergara     13      15454        0.019
warpdr!ve      16      19090        0.016
Burn           27      60052        0.015
teejusb        11      11298        0.009
t0ni           85      128964       0.002
S. Tofu        26      34609        0

Table 8. Chart authors with the highest/lowest FS%.

Author         Charts  Total Steps  JK%
King of Light  24      8817         12.94
R. McKanna     47      14855        9.95
Konami         530     114623       9.63
P. Shanklin    21      9834         8.91
M. Puls        26      8379         8.90
K. Ward        281     86475        8.80
ATB            31      20673        8.46
J. DeGarmo     16      4362         8.30
D. Bernardone  217     76290        7.98
Renard         45      15035        7.94
C. Foy         133     52418        7.72
Yoko           10      4128         7.63
...
bblum          32      32382        1.56
B. Vergara     13      15454        1.31
Arvin          79      108612       1.30
Drazu          153     221460       1.06
Zaia           368     448460       0.86
Aoreo          21      27990        0.78
T. Swag        13      22909        0.70
@@             63      199530       0.61
warpdrive      16      19090        0.49
Dirk           12      21996        0.35
Burn           27      60052        0.28
Hsarus         18      70162        0.27

Table 9. Chart authors with the highest/lowest JK%.

Pack                      Charts  Total Steps  XO%
DDR 1st Mix to Extreme    530     114623       8.03
r2112                     47      18377        4.50
the best of r21freak      100     45156        4.22
the best of r21freak ii   48      25024        4.16
In The Groove 2           222     66113        4.05
In The Groove Rebirth+    108     43758        3.99
r21twins                  52      22347        3.89
In The Groove 3           320     106363       3.61
CuoReNeRo MeGaPacK        423     248625       3.45
In The Groove             1408    491819       3.27
r21freak Friendship Pack  47      20363        3.13
BemaniBeats 4             31      18464        3.02
...
Tachyon Epsilon           150     208830       0.064
SPEEEDCOOOORE 4           101     123814       0.064
TranceMania               80      121415       0.062
Cirque du Lykan           129     160312       0.059
Cirque du Zonda           45      74890        0.057
Jimmy Jawns               109     170894       0.044
Getty                     26      53528        0.043
Tachyon Delta             32      36712        0.038
Tachyon Gamma             32      36134        0.033
Oh Henry! Mad Stamina     46      152326       0.028
Causality Violation       10      19507        0.021
Fast Track to Brutetown   29      46082        0.020

Table 10. Packs with the highest/lowest XO%.

Pack                    Charts  Total Steps  FS%
Subluminal              17      13418        4.70
Aoreo's Ariginals 2     16      15222        2.42
DDR 1st Mix to Extreme  530     114623       2.29
Aoreo's Ariginals       31      26418        2.26
rocky mount xi          113     79986        0.97
In The Groove 2         222     66113        0.93
FA and Chill            35      22045        0.86
Getty                   26      53528        0.85
Undertale               19      14722        0.84
r2112                   47      18377        0.82
Fort Rapids VI          75      75563        0.77
Mute Sims 8             72      47140        0.71
...
SPEEEDCOOOORE 4         101     123814       0.093
Stamina Showcase        38      126146       0.088
VocaJawnz II            128     183153       0.084
Cirque du Zeppelin      109     102991       0.082
SPEEEDCOOOORE 3         66      69236        0.071
Causality Violation     10      19507        0.051
TranceNation            41      123940       0.047
Oh Henry! Mad Stamina   46      152326       0.046
Tachyon Epsilon         150     208830       0.040
Noisiastreamz           20      41917        0.019
TranceMania 2           40      64109        0.003
TranceMania             80      121415       0.001

Table 11. Packs with the highest/lowest FS%.

Pack                          Charts  Total Steps  JK%
DDR 1st Mix to Extreme        530     114623       9.63
r2112                         47      18377        8.80
In The Groove 2               222     66113        8.64
Gensokyo Holiday              87      51652        7.87
r21Freak's Friendship Pack 2  32      15649        7.71
Omnifarious                   10      5594         7.53
r21twins                      52      22347        7.18
Piece of Cake 7               20      11554        6.97
In The Groove                 1408    491819       6.97
In The Groove Rebirth+        108     43758        6.97
TLOES Chapter 1               85      42201        6.89
ITG Rebirth 2 Beta            262     99078        6.79
...
TranceMania 2                 40      64109        1.03
Causality Violation           10      19507        0.98
VocaJawnz II                  128     183153       0.98
Tachyon Epsilon               150     208830       0.94
Tachyon Delta                 32      36712        0.87
Cirque du Lykan               129     160312       0.87
Cirque du Veyron              31      51545        0.79
Cirque du Zeppelin            109     102991       0.77
Oh Henry! Mad Stamina         46      152326       0.67
Stamina Showcase              38      126146       0.56
Cirque du Zonda               45      74890        0.47
TranceNation                  41      123940       0.25

Table 12. Packs with the highest/lowest JK%.

Figure 7. Sometimes the algorithm was smarter than me: (a) Dr.
Boom-Bombay (9, fort rapids vii); (b) Toxic (10/13, Sex'y Violation 2).

In Figure 7(a), the chart repeats L (later, R) thrice, beginning
with the right (later, left) foot. While a human player would jack
these repeated arrows, the crossoveriness algorithm performs a
double-footswitch, effectively reducing the total crossover steps by
1 each time. In Figure 7(b), a mine cues the player to double-step
with her right foot, but the crossoveriness algorithm can begin this
section already crossed-over, owing to an earlier L jack on which it
could switch feet.

Honourable Mentions. I omitted a table for the jackiest charts,
on account of most of them being either trivial beginner charts or
extra-long megamixes. One deserves a special mention: Sandstorm (Jimmy
Jawnz 2), shown in Figure 8(a), has more than twice as many total
jacks as the next jackiest chart, clocking in at 1049 (78.5%) with its
15 and 992 (69.6%) with its 17. And looking at that chart, can't you
just hear Sandstorm playing in your head already?

I also wanted to highlight the chart with the most crossover switches,
shown in Figure 8(b), mostly because the skittle notes should add some
nice variety of colour to the paper (with apologies to the dead-tree
SIGBOVIK audience reading in greyscale). Figure 8(c), with 2nd place
in crossover switches following (b), comes with an edit chart titled
"no sidefoots", and to be perfectly honest I just kept saying the word
"sidefoots" to myself and giggling a lot while writing this paper.

Sweet spot. Finally, in case it wasn't obvious in the tables,
I'll point out that 9-15 is clearly the sweet spot of difficulties
for technical stepcharts.


Figure 8. Miscellaneous interesting charts I discovered while
browsing the giant spreadsheet: (a) Sandstorm (15/17, J. Jawnz II);
(b) Heart Shooter (11, VocaJawnz); (c) '76 (Slow Train) (11, Mute
Sims X); (d) Conflict (12, Stephcharts); (e) Matador (11, Valex 8).

9. Never Work

Let's be honest: this isn't gonna be a paper trilogy. Okay, with that
said, here are some things that would be cool to implement in a
fantasy universe with infinite free time. (I have renamed this
"future" work section accordingly.)

There are a few remaining cases the algorithm doesn't yet understand:

• Doublesteps forced either by mine cues or by holds;

• Crossover and/or bracket jumps, not usually forced but often way
less turny than the alternative;

• Forced footings across stream boundaries arising from bracket jumps
or jump-footswitches.

For example, Figure 8(d) shows many sequential doublesteps, each forced
by a mine cue, but which the algorithm interprets as spins because the
flipped stream length falls below nflip. Figure 8(e) shows an
example of jump-footswitches which the algorithm fails to count because
it ignores the footing of jumps.

These patterns would all have to be identified heuristically. Apart
from that being more work than I wanted to do, I also feel that adding
too many heuristics to SIGBOVIK research compromises the simple and
innocent beauty of an implementation unbound by the demands of
mainstream conferences.

10. Conclusion

Please accept my paper. I worked hard on it.

References

B. Blum. Which ITG stepcharts are turniest? SIGBOVIK, 2016.

T. VII. What, if anything, is epsilon? SIGBOVIK, 2014.


Dog track

Nonstandard ML

13 Batch normalization for improved DNN performance, my ass
Joshua A. Wise

14 Colonel density estimation

Harish Krishna et al.

15 Degenerative adversarial networks

Raphael Gontijo Lopes and Diptodip Deb

16 Stopping GAN violence: Generative unadversarial networks
Samuel Albanie, Sébastien Ehrhardt, and João Henriques


Batch Normalization for Improved DNN Performance, My Ass

13

Joshua A. Wise

joshua@joshuawise.com

Emarhavil Heavy Industries

Abstract

Batch normalization is an extremely popular technique to enable faster
training, and higher network performance after training. We apply
batch normalization to a relatively small network, and find it to be
completely ineffective, and indeed, to reduce network convergence and
overall network performance.

1. Introduction

Batch normalization [4] is a strategy used to accelerate learning in
deep neural networks. Theorized to work by reducing "internal
covariate shift", it dynamically computes normalization coefficients
at each channel internal to a convolutional network while training,
and then, during validation and operation, hopes that they generalize.
Although the effect of batch normalization can, in theory, be baked
into weights at each neuron, the batch normalization coefficients are
not learned through gradient descent, and only their second-order
effects are. Through a convoluted process, this means that adding more
parameters somehow makes the network converge more readily, and so
everybody does it.

Batch normalization has been used in many networks from deep to
shallow: recent DCGAN architectures (for instance, pix2pix [5]) have
used batch normalization between layers when training regression, and
Google's Inception network has used it when training classification.
Batch normalization is said to be tolerant to hyperparameters; for
instance, the decay hyperparameter is said to reasonably range from
0.999 through 0.99 all the way down to 0.9 and "etc.", which is
apparently one nine fewer than 0.9. It also has a configurable value
of epsilon [2], which is likely to be valuable during times of
shortage [9].
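For reference (standard background, not a claim from this paper), the batch normalization transform and the moving-average update in which those decay and epsilon hyperparameters appear are:

    y = \gamma \, \frac{x - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}} + \beta,
    \qquad
    m \leftarrow \mathrm{decay} \cdot m + (1 - \mathrm{decay}) \cdot \mu_B

where μ_B and σ²_B are the per-batch mean and variance, γ and β are learned, and m is the moving mean used at validation time.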

In this work, we sprinkle batch normalization pixie dust onto an
existing neural network to improve its performance, and analyze the
performance gained.

Figure 1. Batch normalization performance vs. classical training.

2. Related Work

Everybody who does work on deep neural networks cites the founding
paper on the subject, which was written long before anyone had ever
heard of a GPU. So we do so here too [7]. But let's be real here,
this whole lab report is actually a take-off of Kovar and Hall [6],
who did this way better than I did.
3. Experimental Procedure

We took an existing neural network of a few layers, a corruption of
the work in [3]. It already did not work very well, but the batch
normalization pixie dust was expected to substantially improve it,
and make everything all better. We inserted batch normalization layers
in all but the final convolutional layer, since adding a normalizing
layer before the output seemed obviously stupid and likely to produce
absurd nonlinearities.

The batch normalization layer was built using TensorFlow's
tf.contrib.layers.batch_norm [10] function. (The contrib in the
Python module path means that the routine is extra-well tested.) We
experimented with multiple sets of hyperparameters, primarily because
the first set of hyperparameters were no good. The initial set of
hyperparameters used a value of 0.9 for decay and a value of 10^-5
for epsilon, because

Figure 2. Batch normalization performance, with stability enhancements.

that's what pix2pix did. The results were, hey, wait, this is the wrong
section for that. The second set of hyperparameters increased the
decay coefficient to 0.999, and enabled zero_debias_moving_mean,
because it is said that one should do that if one's results are
unstable.

Training on both runs took place overnight using TensorFlow on 8
NVIDIA Really Big GPUs in parallel. On the new power-efficient Pascal
architecture, training consumed approximately 1.5 kW for 12 hours,
or 18 kWh of energy, or enough for my coworker to boil 540 cups
of tea [1].

4. Results

The results were utter crap. The first run was dramatically unstable
(see Figure 1). When measures were taken to make the system more stable,
it responded in the opposite fashion (see Figure 2). Convergence did not
happen faster than without batch normalization, inasmuch as anything
the batch normalization runs did could be described as converging
at all.

Visual quality of the output of the batch-normalized runs was not
verified, because, let's face it, it's going to be noisy crap. Also, I
didn't finish writing the support to load and save the batch
normalization coefficients into checkpoint files, so that's another
strike against that.

5. Future Work

Maybe someone can get this crap to work. Like, everyone else who
sprinkles batch normalization pixie dust on their CNNs gets them to
train right quick, and the Google folks say that you don't even need
L2 normalization with them, let alone any other kind of
normalization. Work should be done to investigate whether the Google
folks just got a really lucky RNG seed each time they did their
batch-norm runs, or maybe a really bad one for their control runs,
because clearly this stuff ain't working for me.

Other experiments could be run with other normalization schemes, like
Dropout [8]. Initial experiments are under way that indicate that
all of the literature about Dropout is also a lie.

6. Conclusion

I still don't know anything about how neural networks work, and as far
as I can tell, neither does anyone else.

References

[1] Personal correspondence.

[2] Tom 7. What, if anything, is epsilon? SIGBOVIK, you know, like
the only one that year, 2014.

[3] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image
super-resolution using deep convolutional networks. CoRR,
abs/1501.00092, 2015.

[4] Sergey Ioffe and Christian Szegedy. Batch normalization:
Accelerating deep network training by reducing internal covariate
shift. CoRR, abs/1502.03167, 2015.

[5] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros.
Image-to-image translation with conditional adversarial networks.
CoRR, abs/1611.07004, 2016.

[6] Lucas Kovar. Electron band structure in germanium, my ass.
Online, http://pages.cs.wisc.edu/~kovar/hall.html, 2007.

[7] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W.
Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip
code recognition. Neural Comput., 1(4):541-551, December 1989.

[8] Infected Mushroom. Drop out. From the album, Converting
Vegetarians, Disc 2, 2003.

[9] Chris Tuffley. The great epsilon shortage. Mathematical
Intelligencer, 21(4):37, 1999.

[10] Someone who didn't proofread their code sample. TensorFlow API
documentation. Online, https://www.tensorflow.org/api_docs/python/
tf/contrib/layers/batch_norm, 2017.


Colonel Density Estimation

14

Harish Krishna

harishkrishna.v@research.iiit.ac.in

Bharat Lal Bhatnagar

bharat.bhatnagar@research.iiit.ac.in

Nishant Prateek

nishant.prateek@research.iiit.ac.in

Bhaktipriya Shridhar

bhaktipriya.r@students.iiit.ac.in

Kohli Center for Intelligent Systems, IIIT-H, Hyderabad

Abstract

The highly relevant and important problem of Colonel Density
Estimation has seen little focus in recent times. In this work, we
present fresh approaches to solve both classes of Colonel Density
Estimation: Colonel-Density Estimation and Colonel
Density-Estimation. The proposed solutions are currently the
state-of-the-art in both classes of the problem. We also discuss how
this approach can easily be extended to solve the more General
Density Estimation problem.

1 Introduction

Colonel Density Estimation is an important problem in the fields of
statistics, physics, biology, and military recruitment, but has
surprisingly got almost no interest from research groups. In contrast,
the easier and similar-sounding problem of Kernel Density Estimation
has seen a significantly disproportionate amount of effort trying to
advance the current state-of-art. Recently, [4] suggested the method
of diffusion, while [12] explained how to go about choosing the
kernel and bandwidth.


Despite the apparent similarities of the two problems, the methods
used for Kernel Density Estimation can't be used for Colonel Density
Estimation [11]. Kernel Density Estimation can give only the density
of the bones of the colonel, not the whole colonel. Finding the
density of a whole colonel is one of the two classes of problems
Colonel Density Estimation must solve. Also, while Kernel Density
Estimation is a non-parametric method, Colonel Density Estimation is
non-paramilitaric. The method suggested in this paper is more akin to
generalization when compared to the existing approaches for Kernel
Density Estimation. There are two classes of Colonel Density
Estimation. The first, Colonel-Density Estimation, is the problem of
estimating the density of colonels. The second, Colonel
Density-Estimation, involves using colonels for the problem of
Density Estimation. In this work, we provide a novel approach for
Colonel-Density Estimation that beats all prior research that
attempts to solve this problem. For the second problem, we propose a
solution that needs much fewer resources when compared to the
state-of-the-art.

2 Colonel-Density Estimation

We solve the problem of estimation of the density of colonels by first
finding the mass and volume of the colonel and dividing the two
quantities to get the density.

2.1 Finding mass

We were disappointed that though several earlier works like [7],
[9], and [3] claim to introduce a novel method, they do not actually
use any novels. We introduce a novel method that does actually involve
novels. Though we are the first to use novels for mass in the context
of density estimation, the idea we propose and the novel have been
time-tested in a different field for several centuries now.

Inspired by religion, where a novel is used for obtaining mass, we let
our colonels read out from the same novel. The money raised in the
process (but expressed in the SI units of mass) is the mass of the
colonel.

2.2 Finding volume

We use the classical method [2] to find the volume of a colonel. The
colonel is immersed into a tank filled to the brim with a Newtonian
liquid. The volume of the colonel is equal to the volume of the liquid
displaced. We experimentally found that better results were achieved
when the colonel was immersed for quite a while, so that the liquid
displaces the air in the lungs of the colonel as well.

2.3 Calculation of density

We calculate density as

    D = mass / volume    (1)

where the mass and volume are in SI units. (When the volume is zero,
it would mean that Nishant had probably messed up somewhere.)

Figure 1: A colonel

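For completeness, Equation 1 admits a one-line implementation, zero-volume caveat included (a sketch of ours, not from the paper; the sample numbers are hypothetical):

    colonelDensity :: Double -> Double -> Double
    colonelDensity mass volume
      | volume == 0 = error "Nishant probably messed up somewhere"
      | otherwise   = mass / volume

    -- e.g. colonelDensity 90 0.085 ~= 1058.8 kg/m^3,
    -- for a 90 kg colonel displacing 85 L of Newtonian liquid.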

2.4 Results

We choose colonels who know their densities for evaluating our
performance. The calculated density D of a colonel is correct if it
falls between D̂ − ε and D̂ + ε, where D̂ is the actual density of the
colonel. The accuracy of the method is equal to the ratio of the
number of correct estimates of density to the total number of
colonels who participated in the experiment.

We compare our performance with [8], [1], and [6]. The results
are summarized in Figure 2. It is to be observed that we perform
significantly better than the other methods. We reason that this could
be because these earlier works did not intend to solve the problem of
Colonel-Density Estimation at all. We leave this for future work to
verify.


Figure 2: Comparison of accuracies of our method (blue) with respect
to other methods (red) for the task of Colonel-Density Estimation

3 Colonel Density-Estimation

There is very limited prior knowledge and experience in using colonels
for density estimation. Colonels, usually found shouting for attention,
are easy and efficient tools for density estimation. In our experiments,
we found that if we asked a colonel politely to guess the density of an
object we were pointing to, the colonel usually obliged. However,
evaluation of colonel-based density estimation is hard [5], as the
accuracy of the method is so heavily dependent on the choice of colonel
and how annoyed the colonel is. This is one similarity this shares with
Kernel Density Estimation, where the choice of kernel makes a
difference.

Instead, we assert the relevance of our approach by comparing the kind
of resources our method needs with prior work that uses colonels for
density estimation. The only works that we found that could perhaps give
a more accurate estimate of density using a colonel are in [13] and
[14]. We believe that the method of just asking the War Machine with
Colonel J. Rhodes in it has the potential to be more accurate than our
method. This is because Jarvis, the Artificial Intelligence that helps
control the metal suit, is pretty smart and probably knows the densities
of most objects. However, the cost of building such a War Machine or
Iron Patriot suit is very high [10] and can only be afforded by
billionaires. In comparison, the cost of our suggested method is
negligible (refer Figure 5). Also, since Colonel J. Rhodes is currently
recovering after an injury sustained in a civil war, our colonel-based
density estimation technique is the state-of-art, at least until he
returns.

Figure 3: Colonel Rhodes and Jarvis, the only competition for Colonel
Density-Estimation

4 Generalizability

The solutions to the problem of Colonel Density Estimation proposed in
this paper are perhaps among the most easily generalizable solutions
ever. This only involves replacing the colonel with a general in every
step of each process. We found that, in general, generals perform better
at density estimation. An interesting observation was that the
colonel-density and general-density are different, despite both colonels
and generals being humans. We attribute this to the fact that generals
are more mean, and hence probably more thick-skinned.

Figure 4: Colonel Rhodes in his War Machine suit was the state-of-art
for Colonel Density-Estimation until he was injured.

Figure 5: Cost of [14] (red) and our proposed solution (blue) for
Colonel Density Estimation. Note that the y-axis is a logarithmic
scale.

5 References

[1] Larry C Andrews and Ronald L Phillips. Laser beam propagation
through random media, volume 1. SPIE Press, Bellingham, WA, 2005.

[2] Archimedes. Eureka eureka, 220BCE.

[3] Geoffrey H Ball and David J Hall. Isodata, a novel method of
data analysis and pattern classification. Technical report, DTIC
Document, 1965.

[4] Zdravko I Botev, Joseph F Grotowski, Dirk P Kroese, et al.
Kernel density estimation via diffusion. The Annals of Statistics,
38(5):2916-2957, 2010.

[5] Theophilos Cacoullos. Estimation of a multivariate density.
Annals of the Institute of Statistical Mathematics, 18(1):179-189,
1966.

[6] Ian Holyer. The NP-completeness of edge-coloring. SIAM Journal
on Computing, 10(4):718-720, 1981.

[7] Kazutaka Katoh, Kazuharu Misawa, Kei-ichi Kuma, and Takashi
Miyata. MAFFT: a novel method for rapid multiple sequence alignment
based on fast Fourier transform. Nucleic Acids Research,
30(14):3059-3066, 2002.

[8] Rainer Martin. Noise power spectral density estimation based on
optimal smoothing and minimum statistics. IEEE Transactions on Speech
and Audio Processing, 9(5):504-512, 2001.

[9] Nicholas J Miller, Catherine Rice-Evans, Michael J Davies,
Vimala Gopinathan, and Anthony Milner. A novel method for measuring
antioxidant capacity. Clinical Science (London, England: 1979),
84(4):407-412, 1993.

[10] moneysupermarket.com. The cost of building one Iron Man suit.

[11] Nishant Prateek. On the differences between colonel density
estimation and kernel density estimation, 2006.

[12] Simon J Sheather and Michael C Jones. A reliable data-based
bandwidth selection method for kernel density estimation. Journal of
the Royal Statistical Society, pages 683-690, 1991.

[13] Marvel Studios. Iron man movies, 2007-2017.

[14] Marvel Studios. Avengers movies, 2011-2017.


15

Degenerative Adversarial Networks

Raphael Gontijo Lopes & Diptodip Deb

Georgia Tech, Atlanta, GA

{raphaelgontijolopes, diptodipdeb}@gatech.edu

Abstract

In recent years, Deep Learning researchers have collectively achieved
a pace of useful information extraction that is dangerously close to
outstripping the second law of thermodynamics. To solve this problem,
we propose a new framework for estimating degenerative models via an
adversarial process, in which we simultaneously train two models: a
degenerative network D that destroys the data distribution and a
discriminative model D that estimates the probability that a sample
came from true noise rather than D. The training procedure for D is to
maximize the probability of D making a mistake. Within the space of
arbitrary D and D, we roll a D20 and check for damage. This system
corresponds to entropy maximization, which ensures a timely heat
death. Experiments would have demonstrated the potential of the
framework, but most of our results were degenerated in the process of
running them.

1 Introduction

The promise³ of deep learning is to discover rich, hierarchical
models [5] that represent probability distributions over different
kinds of data, such as natural images, audio waveforms containing
speech, and symbols in natural language corpora (see Figure 1). All
of this structuring of data works to decrease entropy by creating
discriminators that are able to classify this data into well-defined
labels. Furthermore, we now see the success of deep generative models
due to Goodfellow et al. [5] and Kingma et al. [8], which further
accelerates the pace of structured data generation.

Figure 1: The power of deep learning. [photos: Bobolas, 2009 [1];
Maley, 2011 [10]]

All of this model discovery is creating too much information. At this
rate, we will outstrip the second law of thermodynamics and begin to
decrease entropy in the universe (see Figure 2). In order to maintain
reality as we know it, we present the Degenerative Adversarial
Network, or DAN, which sidesteps the successes of deep learning models
in order to maintain a steady and healthy pace towards the sweet
release of heat death.

In this proposed adversarial degeneration framework, the degenerative
model is pitted against an adversary: a discriminative model which
attempts to distinguish whether actual data has been degenerated or
whether the observed sample is true noise. The degenerative model can
be thought of as a team of steamrollers, flattening data into a
uniform distribution for maximum entropy. The discriminative model can
be thought of as a team of protractors, trying to determine if its
input has been properly flattened into true noise. Competition in this
game will drive both groups to improve until the discriminator cannot
reliably distinguish between generated noise and degenerated data.

Currently looking for grad school. Please accept me into your lab.

³ unfulfilled


Figure 2: Plot showing the dangers of deep learning. In hindsight,
this plot generates information, so please refrain from looking at it.

This framework can give specific algorithms for degeneration and
discrimination. We explore the case in which the degenerative model
destroys data by passing it through a multilayer perceptron after
being perturbed by noise, and the discriminator model is also a
multilayer perceptron. We refer to this special case as a DAN, and
show that we can train both networks using backpropagation in an
end-to-end fashion. Uniquely to our approach, there is no need to
actually code this network.
2 Related Work

Training adversarial networks is infamously hard [12], because the
optimization objective equates to trying to find a Nash equilibrium
in a non-cooperative game. We found this to be even more complicated
when degenerating, because the procedure makes it very easy for the
Degenerator to output images of PhD students⁴.

To solve this, some work in the field has argued that a balance
between the two adversaries needs to be found in order to stabilize
the game and avoid local minima. However, Goodfellow [4] shows how a
more robust solution involves creating an overpowered adversary
Discriminator, which ensures that the values of collaboration and
looking past one another's differences will not come into play.
Therefore, we proceed with the adversarial technique. Our methods are
described below.

Figure 3: Example of a bad Nash Equilibrium for the DAN model; the
degenerate result is a PhD candidate hard at "work" on his thesis.

Goodfellow et al. [6] also use adversarial techniques to degenerate
data. However, their method is limited in that it is only able to
trick other neural networks. While this is useful for slowing down the
pace of data creation by generative models, it is not sufficient for
our objectives as their results preserve enough structure that humans
can still discriminate with ease. The method we present is robust even
to human discriminators.

{width="2.4265562117235344in"
height="0.659646762904637in"}{width="2.4265562117235344in"
height="0.659646762904637in"}(a) Adversarial Degeneration in
Goodfellow et al. (b) Adversarial Degeneration using DANs

Figure 4: Comparison of the results presented in [6] and ours. Note
how DANs are able to degenerate data well enough to trick both neural
networks and humans, whereas Goodfellow et al. are only able to trick
neural network models.

4with our apologies to degenerates


3 Degenerative Adversarial Nets

3.1 Description

The two models D and D play the following role-playing game:

$$\operatorname{dom}_D\ \operatorname{sub}_D\ V(D, D) = \mathbb{E}_{\text{Geoff}}[\log(D(\text{eep}))] + \mathbb{E}_{\text{Hinton}}[\log(1 - D(\text{addy}))] \tag{1}$$

Some might say that this notation is confusing. Those people would be
wrong. ■

In practice, Equation 1 provides absolutely no information at all on
how to train either D or D. We think this is OK, as reproducibility is
the least important aspect of science.

The adversarial framework is most straightforward to apply when we
straight copy paste someone else's code. As such, to learn the
degenerator's distribution pd over x, we don't do anything at all
and just use the method from [5], except we ignore the given input
and replace it with noise generated using Python's random module.
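Concretely, ignoring the input might look like the sketch below; the
function name is hypothetical, since the paper commits to nothing
beyond importing random.

```python
import random

def degenerate(x):
    # Learn p_d over x by ignoring x entirely and returning uniform
    # noise of the same shape.
    return [random.random() for _ in x]
```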

{width="4.402069116360455in"
height="3.3015562117235344in"}

Figure 5: Comparison of the GAN (top) and the DAN architectures
(bottom). In the former, noise is used to generate data. In the
latter, data is degenerated into noise.

See Figure 6 for an approximately probably equally formal5 explanation
of our approach. We would include more explanation here, but our model
is basically just a GAN, so we refer the reader to Goodfellow et
al. [5].

In the next section, we present some theoretical results about our
adversarial degeneration, essentially showing that the training
criterion presented above allows one to lose all of their data.

5i.e. not at all


{width="1.1004472878390201in"
height="1.0389009186351705in"}{width="1.1004472878390201in"
height="1.0389009186351705in"}{width="1.1004472878390201in"
height="1.0389009186351705in"}{width="1.1004472878390201in"
height="1.0389009186351705in"}(a) (b) © (d)

Figure 6: Degenerative adversarial nets are trained by simultaneously
updating the discriminative distribution (D, blue, dashed line) so
that it discriminates between samples from the original data (black,
dotted line) px from those of the degenerative distribution pd (D)
(green, solid line). The lower horizontal line is the domain from
which we sample noise. The higher horizontal line is part of the
domain of x, which we destroy. The upward arrows show how the mapping
x = D(z) imposes the uniform distribution pd on transformed samples,
which later overwrite the original x. D learns to scatter uniformly.
(a) Consider an adversarial pair before divergence: pd is
dangerously similar to pdata and D is much too accurate a
discriminator. (b) In the inner loop of the algorithm D is trained to
discriminate samples from data that have been ruined, and starts to
diverge. We also see that the data distribution has started to
disperse, and that the degenerator distribution is heavily unlearning
the data. (c) After an update to D, the gradient of D no longer
exists. We see that we have reached uniformity of data. (d) After
several steps of training, if D and D have enough capacity, there is
no need to map the noise to the degeneration anymore. The DAN has
learned to generate its own noise (hence the self loop) and the data
distribution has reached what we call "super-uniformity," which is how
we boost entropy at a high enough rate to counteract others in the
field. A key step in this procedure requires that we mention it here
in this figure only, and never mention it in the rest of the paper.

3.2 Method

We download an out-of-the-box GAN model [7] and repurpose it as a
DAN -- by which we mean we train it on a new dataset without modifying
a single line of code. We present our algorithm below.

Algorithm 1: Degeneration algorithm.

Input: Internet connection, GAN, Python noise module

Output: Noise

1 Open browser.

2 Go to www.google.com.

3 Type in "google."

4 Click on Google.

5 Type in "gan tensorflow".

6 git clone

7 Generate noisy images in Python.

8 Train your GAN on the noise.

9 Leave on the counter for 15 minutes to cool.

10 Overwrite your original data with the generated noise.
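A minimal sketch of the computable steps (7 and 10) follows. It
reflects our own reading: the algorithm names no image sizes or file
formats, so both are assumptions here, and step 9 is left to the
reader's countertop.

```python
import os
import random

def generate_noisy_images(n, height=28, width=28):
    # Step 7: "noisy images" as flat lists of pixel values in [0, 1);
    # the 28x28 shape is an assumption, not a requirement.
    return [[random.random() for _ in range(height * width)]
            for _ in range(n)]

def overwrite_with_noise(path):
    # Step 10: destroy the original data by overwriting the file,
    # byte for byte, with uniform noise.
    size = os.path.getsize(path)
    with open(path, "wb") as f:
        f.write(bytes(random.randrange(256) for _ in range(size)))
```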

We believe this algorithm can be generalized to other kinds of models.
This would involve optimizing the algorithm's search strategy for
those other models (e.g.: you might need to type "infogan tensorflow"
in step 5). We leave this discussion for future work.

Additionally, in our algorithm we are overwriting the original data
through manually-written disk writes. We believe in the potential of
end-to-end learning in tackling this task, but we also leave this for
future work. Some AI ethics alarmists might claim such a model would
accelerate the impending doom of civilization [13]. However, we see
this as a reasonable alternative to heat death, and thus posit that
it's a worthwhile pursuit.

In other subfields of Machine Learning, one would use a dataset like
MNIST [9] or ImageNet [3]. For our purposes, we'd need a
randomness dataset, such as the ones used in the cryptography field.


However, in a desperate bid for citations, we ignore existing options
in favor of fabricating our own, which we call DANOISE6. We discuss
the implications of this decision below.

When training a DAN, it's important to keep in mind that the generated
data must represent true randomness. Unfortunately, true randomness is
scarce on a deterministic Turing machine, so we settle for Python's
approximation, the random module.

It's crucial to note, however, that the random module does not give
you true randomness. The validity of this claim is based on the fact
that we seek to model true randomness through a neural network. If
randomness were readily available through a simple module import, then
there would be no point in using Deep Learning. ;)
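The claim is easy to check: the random module is a seeded Mersenne
Twister, so identical seeds reproduce identical "noise", as the
snippet below demonstrates.

```python
import random

random.seed(42)
first = [random.random() for _ in range(3)]
random.seed(42)
second = [random.random() for _ in range(3)]
assert first == second  # same seed, same "noise": pseudo, not true, randomness
```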

4 Theoretical Results

The degenerator D implicitly defines a probability distribution pd
as nothing. Therefore, we would like Algorithm 1 to converge to
nothing, which is equivalent to diverging to a uniform distribution
that maximizes entropy. The results of this section are organized in a
similar manner: uniformly random and without meaning.

We will show in section 4.1 that our role-playing game has no meaning,
but that the network has a global optimum when pd = puniform.

4.1 Global Optimality of Entropy

We first consider the optimal discriminator D for any degenerator D.

Proposition 1. For D fixed, the optimal discriminator D gets
overwritten by D.

$$D_D(x) \equiv \texttt{/dev/urandom} \tag{2}$$

Proof. The training criterion for D (whichever D) does not really
matter. What does matter is that no matter what the trained
degenerator is, the final step is to overwrite your data. Therefore,
the discriminator simply ceases to exist. This is accomplished by
overwriting the data with values that are equivalent to /dev/urandom
(since D is optimal). ■
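Carried out literally, the overwrite in the proof might look like the
sketch below (our own construction: model_path is hypothetical, and
/dev/urandom assumes a Unix-like system).

```python
def overwrite_discriminator(model_path, n_bytes):
    # Realise Proposition 1: replace the trained discriminator's
    # stored weights with bytes drawn straight from /dev/urandom.
    with open("/dev/urandom", "rb") as src, open(model_path, "wb") as dst:
        dst.write(src.read(n_bytes))
```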



Theorem 1. The global minimum of the training criterion occurs when
the universe reaches maximum entropy. However, reaching a local
optimum is equivalent to enabling D to degenerate data (and write it
to disk) into uniformly random noise. Therefore, any local optimum is
actually just a saddle point on the way to the global optimum.

Proof. We have an elegant proof; however, this saddle point is too
itchy, forcing us to abstain from including it in this paper. In the
future, we plan to also abstain from including proofs if the margins
are too small. ■



4.2 Divergence of Algorithm 1

Proposition 2. Regardless of the capacity of D and D, if at each step
of the algorithm we ignore the meaningless criterion of Equation 1 and
overwrite data, the algorithm will always diverge.

Proof. We wou l d lik e't o s h' o w t ha t
#%fs[-3]



Note: Unfortunately, we lost the above proof, as well as other
results, due to the model overwriting them.

6we're accepting suggestions for potential acronyms that justify
this dataset name


5 Results

{width="1.5007086614173228in"
height="1.5007086614173228in"}{width="1.5007086614173228in"
height="1.5007086614173228in"}

a Sample degenerated by DAN after 24 epochs. (b) Random sample
taken from Python. Figure 7: Comparison of results from DAN and
results from Python.

The noise examples generated by DAN look visually very similar to a
sample from a true random source (or its equivalent Python
non-approximation), as can be seen in Figure 7. Therefore, it's
probably approximately correct to say we've established the state of
the art in degenerating data. The definitive results were destroyed by
our preliminary experiments in end-to-end training, so we present a
novel metric of Data Degeneration: percentage of data destroyed
(%dd). We compare our methods with other models in Table 1.

| Model                                    | %dd in training set | %dd in my family photos album |
|------------------------------------------|---------------------|-------------------------------|
| VGG trained on ImageNet                  | 0                   | 0                             |
| Inception trained on Britney Spears MP3s | 0                   | 0                             |
| AlphaGo trained on MNIST                 | 0                   | 0                             |
| DAN trained on DANOISE dataset~7~        | **100**             | **100**                       |

Table 1: Comparison of different models on the task of permanent data
degeneration. It's clear from this comparison that our architecture is
inherently superior to these other ones, because of the bold font
highlighting the DAN results.
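The paper never says how %dd is computed; one hedged reading, sketched
below, takes it as the percentage of bytes that no longer match the
original data.

```python
def percent_data_destroyed(original: bytes, degenerated: bytes) -> float:
    # %dd under our reading: the share of bytes that differ from the
    # original, as a percentage; 100.0 corresponds to the DAN row.
    if not original:
        return 0.0
    changed = sum(a != b for a, b in zip(original, degenerated))
    return 100.0 * changed / len(original)
```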

6 Conclusion

We have presented an entirely new8 model that achieves the state of
the art in data degeneration. With it, we bring the field one step
closer to stopping Big Data terror and maintaining a steady pace
towards heat death.

We hope that this has inspired the reader to help us further these
novel Data Degeneration models. We conclude by presenting a few
examples of potentially interesting and useful future research
directions:

Inspired by InfoGAN [2], InfoDAN would preserve the property of
uninterpretability of latent space, by degenerating the data, as well
as any structural features it is composed of.

Similarly to work by Radford et al. [11], DCDAN is a model with the
same architecture as a DAN, but with twice the number of convolution
layers, and half as many learnable parameters.

Lastly, also inspired by the work of Chen et al. [2], EntropyDAN
could use the data input as a latent representation of how true the
generated noise is.

8i.e.: ~~plagiarized~~ repurposed


References

[1] Bobolas. brain-neurons, 2009.

[2] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya
Sutskever, and Pieter Abbeel. Infogan: Interpretable representation
learning by information maximizing generative adversarial nets. In
Advances in Neural Information Processing Systems, pages 2172--2180,
2016.

[3] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li
Fei-Fei. Imagenet: A large-scale hierarchical image database. In
Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE
Conference on, pages 248--255. IEEE, 2009.

[4] Ian Goodfellow. Nips 2016 tutorial: Generative adversarial
networks. arXiv preprint arXiv:1701.00160, 2016.

[5] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David
Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio.
Generative adversarial nets. In Advances in neural information
processing systems, pages 2672--2680, 2014.

[6] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy.
Explaining and harnessing adversarial examples. arXiv preprint
arXiv:1412.6572, 2014.

[7] Google. First result when googling "gan tensorflow".

[8] Diederik P Kingma and Max Welling. Auto-encoding variational
bayes. arXiv preprint arXiv:1312.6114, 2013.

[9] Yann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist
database of handwritten digits, 1998.

[10] Maley. neuron, 2011.

[11] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised
representation learning with deep convolutional generative adversarial
networks. arXiv preprint arXiv:1511.06434, 2015.

[12] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan
Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing
properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

[13] Tim Urban. The ai revolution: The road to superintelligence.


7 Appendix A

Because 7 pages wasn't enough.

{width="5.502666229221347in"
height="3.4391666666666665in"}

Figure 8: Generated image samples after one pass through the dataset.
The degenerator is still trying to unlearn the structure of the input
data.

{width="5.502666229221347in"
height="3.4391666666666665in"}

Figure 9: Generated image samples after 15 passes through the dataset.
There appears to be evidence of visual under-fitting via repeated
noise textures across multiple samples.


Under review as a conference paper at SIGBOVIK 2017


STOPPING GAN VIOLENCE:

GENERATIVE UNADVERSARIAL NETWORKS

Samuel Albanie

Institute of Deep Statistical Harmony

Shelfanger, UK

Sébastien Ehrhardt

French Foreign Legion

Location Redacted

João F. Henriques

Centre for Discrete Peace, Love and Understanding

Coimbra, Portugal

ABSTRACT

While the costs of human violence have attracted a great deal of
attention from the research community, the effects of the
network-on-network (NoN) violence popularised by Generative
Adversarial Networks have yet to be addressed. In this work, we
quantify the financial, social, spiritual, cultural, grammatical and
dermatological impact of this aggression and address the issue by
proposing a more peaceful approach which we term Generative
Unadversarial Networks
(GUNs). Under this framework, we
simultaneously train two models: a generator G that does its best to
capture whichever data distribution it feels it can manage, and a
motivator M that helps G to achieve its dream. Fighting is strictly
verboten and both models evolve by learning to respect their
differences. The framework is both theoretically and electrically
grounded in game theory, and can be viewed as a winner-shares-all
two-player game in which both players work as a team to achieve the
best score. Experiments show that by working in harmony, the proposed
model is able to claim both the moral and log-likelihood high ground.
Our work builds on a rich history of carefully argued position-papers,
published as anonymous YouTube comments, which prove that the optimal
solution to NoN violence is more GUNs.

Takes skill to be real, time to heal each other

Tupac Shakur, Changes, 1998

1 INTRODUCTION

Deep generative modelling is probably important (see e.g. Bengio et
al. (2013a), Bengio et al. (2013b), Bengio et al. (2007a), Bengio et
al. (2015), Bengio et al. (2007b) and (Schmidhuber et al., circa 3114
BC)). Justifications recently overheard in the nightclubs of Cowley1
include the ability to accurately approximate data distributions
without prohibitively expensive label acquisition, and computationally
feasible approaches to beating human infants at chess2. Deep
generative modelling

Authors are listed according to the degree to which their home
nation underperformed at the 2016 European football championships

1The nightclubs of Cowley are renowned for their longstanding
philosophical support for Dubstep, Grime and Connectionism, and should
not be confused with the central Oxford nightclub collective which
leans more towards Dubstep, Grime and Computationalism - speak to Old
Man Bridge
at 3am on a Friday morning under the stairs of the smoking
area for a more nuanced clarification of the metaphysical differences
of opinion.

2Infants of other species (fox cubs, for example) remain an adorable
open question in the field.

