The Justice Syndicate

In collaboration with interactive theatre group fanSHEN and neuroscientist Dr. Kris De Meyer. A leading children's surgeon, Dr. Simon Huxtable, is accused of a crime. Do you convict him? What's the evidence?

This piece of interactive theatre gathers 12 audience members to take on the role of jurors, sitting around a table and combing through selected evidence from the defense and prosecution. Twelve iPads carefully analyze the decisions they make in real time, providing live opinion polls and interactive elements, and leaving the jurors to question their own ethics and how we act when required to make a public decision on a highly controversial topic.

Note: This isn't an in-depth discussion of the technology or concept behind the piece. It's a few tidbits I found interesting or that people asked me about. I'll probably write more about the specific technology behind it at a later point. If you want to know about the concept behind the piece or its development, there's a really great article written by Rachel here that might interest you.


The Justice Syndicate being played at King's College London


The Technology

Without a doubt, the technology behind The Justice Syndicate was the most bespoke and complex system I have ever created. Three iterations exist, with the final one completed in early 2018. In fact, the system ended up so bespoke that the final version works without the use of any libraries or frameworks whatsoever. The reason behind this was an attempt to create a system that was as close to the team's vision as possible while maintaining the accessibility and practicality to be later deployed at scale. That, and a lot of things we wanted to do technology-wise hadn't really been done in this specific context before.

Just a note: when I say "framework-less," I really mean "public-framework-less," but that sounds far too confusing when written multiple times. This is because I created numerous proprietary frameworks to solve issues as I discovered them. I'll detail these later.

The system behind the piece is primarily comprised of the following components:

  1. Front-end web app - JavaScript, jQuery;
  2. Syncing system - JavaScript, PHP;
  3. Back-end results handler - PHP;
  4. Real-time logic system - Python.

Unusually, I left the majority of the technical development until after we had created the underlying narrative of the piece. This was because the specific events within the case had a significant effect on the functionality and limitations of the systems. It's also worth noting that some of the systems, particularly the logic system, do a large number of things I won't detail here, because disclosing them could taint the enjoyment of anyone experiencing the piece for the first time (ask me in private). It was interesting to see how well the narrative blends with the technology. At times jurors actually forgot the piece was fictional, intuitively using the technology without question. This was especially important to the accessibility of the piece, as we were keen for all users to be able to use the devices without prior training or even knowledge of how to use an iPad.

I'd put this success down to the extreme simplification of the UI within each new iteration. We would study the elements that people struggled with the most during a play-test (e.g., selecting other jurors to exclude, which in the first play-test was done via a list of juror IDs) and then I would redesign the element to be more accessible in time for the next one (jurors arranged around a table). One thing that amusingly kept coming up was the direction in which a time bar should animate. If it's counting down to zero, should it go left to right? Or right to left? I always intuitively designed it to be right to left, starting at 100% width and slowly animating to 0% width, yet it appeared everyone else found this odd, instead opting for it to go left to right. I'm still not entirely sure why this was stuck in my head. If this is also your chain of thinking, please let me know, so I don't feel insane.
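For the curious, the behaviour I kept defaulting to boils down to something like the sketch below -- a bar that starts at full width and shrinks to nothing. This is purely illustrative (the element ID, duration and callback are invented), not the production UI code:

```javascript
// Illustrative only -- not the production code.
// Shrinks a bar from 100% width to 0% over `durationMs`,
// i.e. the "right to left" countdown described above.
function startCountdownBar(el, durationMs, onDone) {
  const start = Date.now();
  (function tick() {
    const remaining = Math.max(0, 1 - (Date.now() - start) / durationMs);
    el.style.width = (remaining * 100) + '%';
    if (remaining > 0) {
      requestAnimationFrame(tick);
    } else if (onDone) {
      onDone();
    }
  })();
}

// e.g. startCountdownBar(document.getElementById('timer-bar'), 30000, nextStage);
```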


Front-end web app

This is the only side of the piece that the viewers directly see, so it is arguably the most important. Each iteration had a re-designed UI, with each attempting to improve its predecessor's accessibility and practicality when used in the jury setting. This annoyingly seemed to consist of asking "how can we stop someone pressing *this* button immediately without reading the above first?" -- a task that would appear mundane but caused headache after headache.
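To give a flavour of what that question amounts to, one of the simplest guards is to keep a button disabled until the reader has plausibly had time to read the text above it. The snippet below is only an illustrative sketch of that idea (hypothetical element ID and timing), not the set of tricks we actually ended up relying on:

```javascript
// Illustrative sketch: keep the "continue" button disabled for a minimum
// reading time before it can be pressed. (Hypothetical IDs and timings.)
function armContinueButton(buttonId, minReadingMs) {
  const button = document.getElementById(buttonId);
  button.disabled = true;
  setTimeout(function () {
    button.disabled = false;
  }, minReadingMs);
}

// e.g. armContinueButton('continue-button', 8000);
```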

Technology-wise, the front-end app is not actually native but was instead designed to run within a native iOS "shell" with near-native performance (that was hard). I'm not a fan of "Electron"-style apps, but this was needed: it was important to cover as much ground as possible in making it universal (for future-proofing), and it also had to work on the equipment we had available for the show - 12 second-generation iPad minis. I ended up opting for a jQuery-based web app, an approach that remained consistent throughout all three iterations.

To achieve the near-native performance mentioned earlier, I built a proprietary wrapper around the WKWebView class in Swift that improved the response time of resources such as images by loading them dynamically, directly from the device, as well as optimizing the overall latency through a few new techniques (which I'm keeping secret for now). This was an attempt to achieve the fastest refresh rate possible, so that the "syncing system" described later (which held all the iPads' video in sync) would not suffer. It proved to be near hell to achieve, mainly because of a fatal bug in iOS 11.2 regarding WebSockets and WKWebView. As a result, the updated system was only implemented in the final iteration and is limited to iOS 11.2+, but it brought a performance increase in the ballpark of a 900ms response time reduced to 80ms. That is a difference you can genuinely feel; scrolling was silky smooth.

Back to the UI: who knew it would be so hard to make someone read out the correct text? Our issue was that everyone really needed to know what was going on at this stage of the show, even if they were the first to read.


I tried my best.

We must have tried tens of different approaches to get the jurors reading out the correct part of the script, only reaching a 'kind-of okay' solution in the final iteration. We settled on an approach where we changed the font and size of the text they had to read while placing a speaking symbol to the left of it -- only to introduce a new issue where jurors would occasionally, confidently, read out the entire instructions before realizing they were wrong. Like, how?


Syncing system

It's more or less that simple.

A vital element of the piece is the set of videos the jurors need to watch to get critical information about the trial. We discussed the various forms this could take, e.g., watching on a central screen, or listening on individual iPads with headphones. However, both approaches seemed restrictive: they either required more technology for us to transport from venue to venue, added extra steps for the jurors (putting headphones on and off) or were overall too clunky and inelegant. What we really needed was a way to watch the videos simultaneously on all iPads with one sound source. The only issue was that this was an internet-connected piece that would sync via a live internet connection, and getting the iPads to sync video over that would be difficult.

My first thought was, "can we take this piece offline?". If it could be contained within a local network, broadcast via a router, we could significantly reduce the potential latency between the iPads. But a likely future plan for the piece would then require a full rewrite in the coming months -- I can't do that. It had to work via standard wifi, on varying speeds and strengths ranging from 2 Mbps to 100 Mbps, with potentially unlimited traffic and disruption on the network.

To do this, I needed to find a central source for the video that would likely have good bandwidth, be universally allowed on most wifi networks and be reliable. I chose to host the videos directly on YouTube and build a custom embedding system using their official API. The reasoning for not hosting the videos myself was that I could never compete with Google's server infrastructure, and it also acted as a way of future-proofing and cutting the costs of the production.
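For reference, the official route for this kind of embedding is YouTube's IFrame Player API. The snippet below is a minimal sketch of that pattern, not the custom embedding system itself -- the element ID, video ID and player options are placeholders:

```javascript
// Minimal sketch of the official YouTube IFrame Player API -- not the
// production embedding system. IDs and options here are placeholders.
// Assumes <script src="https://www.youtube.com/iframe_api"></script> is loaded.
let player;

// The API calls this global function once it has finished loading.
function onYouTubeIframeAPIReady() {
  player = new YT.Player('video-container', {
    videoId: 'VIDEO_ID',
    playerVars: { controls: 0, rel: 0, playsinline: 1 },
    events: {
      onReady: function () {
        // Cue (pre-buffer) the video now; actual playback is triggered
        // later by the syncing system described below.
        player.cueVideoById('VIDEO_ID');
      }
    }
  });
}
```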

Next, I needed a way of syncing the videos' playback that was independent of my server, as the varying wifi conditions could result in differing latency (if I sent a command to all the iPads to play, they might receive it inconsistently, introducing a lag as some iPads started later than others). To get past this, at the start of the piece my server sends its local timestamp to the web app, which parses it, works out any timezone difference and stores the offset locally. This means any subsequent timestamps sent by the server can be adjusted to suit the local device. I then exploited the fact that all the iPads' internal clocks are synced to a high degree of accuracy by Apple. My method goes like this:

  1. SERVER: Sends the command to switch to video playback alongside a timestamp for 5 seconds in the future;
  2. CLIENT: Receives the timestamp and begins a loop, waiting for it to match the device's local time;
  3. CLIENT: Once the timestamp == local_time, the command to play the pre-cached video is executed.

This means that regardless of latency on the network, the 5 seconds of leeway between the server command and local execution allows every device to catch up and subsequently start the video within milliseconds of the others -- without any further communication with the devices needed! Simple! Perfectly synced audio and video across 12 iPads on a public wifi network. We tried this on wifi speeds down to 2 Mbps, and it still worked perfectly. This is one of the things I'm most proud of, to be honest; it's a marvel watching all the videos play correctly, synced over such terrible conditions. While it may sound simple, this approach took about 6 months to perfect.
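In web-app terms the scheme boils down to something like the following. This is a simplified sketch under my own naming, not the actual production code (the real system also deals with timezone parsing and a few edge cases):

```javascript
// Simplified sketch of the syncing scheme described above -- illustrative
// names, not the production code.

// 1. At the start of the piece, store the offset between the server's clock
//    and this device's clock.
let serverOffsetMs = 0;
function handleServerClock(serverTimestampMs) {
  serverOffsetMs = serverTimestampMs - Date.now();
}

// 2. Later, the server broadcasts "play the cached video at time T", where T
//    is ~5 seconds in the future (server time). Each iPad waits on its own
//    clock, so network latency between devices no longer matters.
function scheduleSyncedPlayback(playAtServerMs, playVideo) {
  const playAtLocalMs = playAtServerMs - serverOffsetMs;
  const timer = setInterval(function () {
    if (Date.now() >= playAtLocalMs) {
      clearInterval(timer);
      playVideo();   // e.g. player.playVideo() on the pre-cached embed
    }
  }, 10);            // poll every 10 ms
}
```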


Back-end results handler & Real-time logic system

This is the part I will be most vague about so that it doesn't ruin the piece. The piece follows a pre-defined narrative, with each stage revealing new information about the case and prompting discussion or voting amongst the panel. Using machine learning, each of the jurors' individual actions is analyzed so that the piece can be altered in real time to adapt to the views of specific jurors. This runs a custom model I trained myself on data from earlier play-tests. It was able to accurately determine the voting intentions of ~70% of the jurors from just a few stages in, and subsequently adjust narrative elements automatically.
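The actual model, its features and what it changes are deliberately undisclosed, so purely to illustrate the general idea of "predicting a verdict from per-stage actions", here is a generic and entirely hypothetical scoring function -- not the real thing:

```javascript
// Entirely hypothetical -- not the real model, features or weights.
// The general idea: turn a juror's per-stage actions into a feature vector
// and score how likely they are to vote guilty.
function predictGuiltyProbability(stageVotes, weights, bias) {
  // stageVotes: e.g. [+1, 0, -1, ...] for guilty / abstain / not guilty per stage
  const z = stageVotes.reduce(function (sum, vote, i) {
    return sum + vote * weights[i];
  }, bias);
  return 1 / (1 + Math.exp(-z));   // logistic squash to a probability
}
```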

This meant we could influence the jury intelligently without the need for human intervention. Furthermore, we could accurately determine which parts of the narrative sway the most jurors and which critical events lead to a switch of opinion in the case. It revealed enough patterns to make me worried if I were ever to take part in a jury trial. I might write about those patterns at a later date; they're fascinating.


The debate about "what is consent?" was most likely the best part of creating this piece. It was sobering to see such progressive discussion about an important topic, especially within universities. This piece really makes you question how you would react when having to make difficult decisions without all the information. Check this website for the next performance times and availability. It might also be a good idea to follow me on Twitter, as the performances sell out pretty fast.


FAQ

Why didn't you live-stream the videos (on the iPads)? Wouldn't that have been quicker?

While this sounds like a good idea, there are a few issues. Live streaming would have been more susceptible to bad internet connections (no caching), which was my primary concern as we toured the piece to a few places with awful internet. It would also have cost a fortune in bandwidth if the project were scaled up, not to mention the delay within live streams and their tendency to stop and buffer.

Why PHP for the backend?

At the time, this was the best option for the Nginx server -- like it or not, PHP runs on practically everything. If I were to upgrade the piece, I might turn to Node.js.

Will you release the source code for this project?

No. I usually release the source for all my projects, but it's not appropriate for this piece. To do so would potentially ruin the narrative and also reveal a few pieces of technology I'm not happy with publicly releasing yet. I might release those parts as standalone projects in the future, but not The Justice Syndicate as a whole.

Why don't you detail the whole project?

To be honest, it's just a massive project. To detail it all would take ages, and it would probably be mostly dull. It's much better to catch me sometime and ask me any questions you have (or email me). I'm also only one of a pretty large group of people behind this project, so I can only give my own perspective. There's a short documentary out soon that I'll detail below when I know more; it will likely touch on the topics in the piece that I haven't covered.




Supported by

The Justice Syndicate is supported by the King's Cultural Institute, London South Bank University and Near Now Nottingham.


Credit

DIRECTION & DRAMATURGY Dan Barnard & Rachel Briscoe | ARTISTIC ASSOCIATE (NEUROSCIENCE) Kris De Meyer | COMPUTATIONAL ARTIST Joe McAlister | PRODUCTION ASSISTANT Ewan Samson | ADDITIONAL IDEA DEVELOPMENT Rebecca Atkinson-Lord, Chris Bone, Tom Chambers, Shireen Mula, Zoe Nicole, Theo Papatheodorou & Delme Thomas | PROTOTYPE CAST Claire Cordier, Henry Everett, Jon Foster, Ezra Ingleson, Oli Ingleson, Dean Lepley, Angelina Marchevska, Marina Marchevsha, Sarah Savage, Georgina Sowerby, Monsay Whitney | FEATURED EXPERTS Jan Bowden