diff --git a/podcast/67/transcript.markdown b/podcast/67/transcript.markdown new file mode 100644 index 00000000..97988a61 --- /dev/null +++ b/podcast/67/transcript.markdown @@ -0,0 +1,215 @@ +*Mike Sperber (0:00:14)*: Our guest today is Alex McLean, who created the TidalCycles system for electronic music implemented in Haskell, of course. We talk about how Alex got into Haskell coming from Perl, how types helped him think about the structure of music and patterns, the architecture and evolution of TidalCycles, about art, community, and making space for new ideas, and lots of things in between. + +So, hello, Alex. It’s a great pleasure to finally have you on the podcast. Can you tell us what your first contact with Haskell was? + +*Alex McLean (0:00:48)*: Yeah. Hi, Mike. Yeah, a little while ago. So while I was a research student, a PhD student at Goldsmiths in London. Actually, before that, I was a master’s student in Goldsmiths on a program called Arts and Computational Technology, I think, or maybe it was Arts Computing. But anyway, it was in a computing department in the situation where it was really interdisciplinary. So a lot of the academics were also musicians or composers, also quite a lot of music psychologists. So I was doing this crossover course between arts and computing. At the time, I’d been a Perl programmer for many years. So I was a mature student at that point already. So I must be really old by now. I wanted to learn something new, maybe more academic language, because I had a bit of impostor syndrome. So coming from doing programming for a small independent company, working in the independent music industry, I thought, well, I should look for something more serious. Maybe something like that strange Lisp language I’ve occasionally looked at. But at the time, Perl 6 was becoming a thing, and there was someone called Audrey Tang who very impressively implemented Perl 6 in Haskell in a very short period of time. 
I believe they’ve gone on to be a really influential politician in Taiwan, but at the time they were making – I think it was called Pugs or something like that. + +*MS (0:02:35)*: Yes.  + +*Andres Löh (0:02:35)*: Yeah.  + +*MS (0:02:36)*: Yeah. I remember that. I remember hearing a talk. + +*AL (0:02:38)*: Yeah, I remember that as well at ICFP. + +*AM (0:02:42)*: Yeah. Although I never actually got to try out Perl 6, I never went back to Perl programming, really. It just piqued my interest. And so I started learning it and found it really fascinating, just like a different way of thinking about code, really. I hadn’t really done any functional programming before.  + +*AL (0:03:00)*: So you started learning Haskell at that point. So not Perl 6, but Haskell? + +*AM (0:03:07)*: Yeah.  + +*AL (0:03:08)*: Because you were inspired by – okay. + +*AM (0:03:10)*: Yeah. I was looking for a more serious language, I suppose, after having lots of fun with Perl but wanting to try something new. And it immediately seemed very useful for thinking hard about representations. So at the time, I was getting into machine learning. My supervisor was Geraint Wiggins in Goldsmiths, who’s an AI person. Although at the time, AI was really out of fashion, so he wouldn’t call himself an AI academic. He’d be laughed out of the university. And so he called himself a computational creativity professor. Things have changed since then, I think. So I was trying to model rhythmic continuations. That’s what I did for my master’s project, using Haskell to represent rhythm. And yeah, just immediately, it was such a great experience, just being forced to think really carefully about what the representation of what something should be. And then if I change that representation, being really helped by the strict types to point towards all the bits of my program where I should change it to match the new representation. And that’s what gave me the bug, really.  
+ +*AL (0:04:36)*: Can I just briefly ask, because you were saying the program was something like arts and computation, and I have not a good feeling for how such a program works. But if you’re, for example, going to university and study computer science, then usually the universities teach you programming languages, right? Now, if this is a program that is also adjacent to computing and computer programming, was the university taking a stance on what the proper languages are that you ought to be using, or did they leave you complete freedom? + +*AM (0:05:08)*: I think they left freedom. I mean, I remember them trying to teach me Java. That was quite a funny experience because I was being taught by someone called Christophe Rhodes, who was actually quite an amazing Lisp person, I think. But there was no idea about this is the correct language to use in that context. There was whatever they chose to teach the undergraduates. And actually, that’s always been my experience. I’ve never really had the feeling that a particular language was forced on me. And that’s something that, from my perspective, is quite strange about computer science culture, actually. I work a lot with my friend Dave Griffiths, who often works in Scheme, including in performance, and we perform together on stage, projecting our screens with Dave working in Scheme, me working in Haskell. I remember someone coming up to us after one gig just being astonished that we’d even be in the same room, let alone making music together. But I find this a bit strange because Haskell, it’s so high-level that it lets you see the connections between all these different programming languages. I find it quite a unifying experience and helps me actually move from one language to another. So I’m not quite sure why people are so territorial about languages, really. + +*AL (0:06:32)*: It’s like they’re territorial about everything, right? About editors, about operating systems.  + +*AM (0:06:38)*: Yeah.  
+ +*AL (0:06:39)*: Like, I mean, it’s not just programming languages. + +*AM (0:06:43)*: Yeah. I think it’s nice to try and get away from that. Yeah. I think people make their technological choices, but that doesn’t mean we have to stop respecting and talking to each other. + +*AL (0:06:53)*: Yeah. No, I think you’re right. Okay. So you were saying, like, you successfully, I guess, used Haskell in your master thesis project. + +*AM (0:07:03)*: Yeah. + +*AL (0:07:03)*: So how did it go on from there? + +*AM (0:07:05)*: Yeah. So I was still using Perl to perform with, so I’d made an environment called feedback.pl where you could code on the fly. So from around the year 2000, I’d been really interested in algorithmic music, and then increasingly interested in live coding as it emerged over that decade. So around 2003, sort of started meeting up with other people who were interested in this live coding thing, which is where you make music usually at the time. Now there’s a lot of people also using it for live video, live visuals, but yeah, live coding is – I mean, other people use the term, but for us, live coding is where you make music by writing code while it’s being interpreted. So working in something like a REPL to manipulate musical structures to create music on the fly, maybe not with a particular idea at the start about what music you’re going to make. So it’s like improvised. You’re just responding to what music comes out to inform the next change to the code.  + +So I’d made an editor designed for doing that with Perl, but finding it was just far too slow to work with instrumental musicians. Like I was working with my friend, Alex Garacotche, who was a jazz improv drummer as well as a music technologist. He was really encouraging me to live code from scratch. So, just start with nothing and write code to generate music. 
But it might take me a couple of minutes to actually start making sound, which isn’t very good in terms of riffing off another musician, where it takes so long to make a change. So at that point, I was compelled to make my own language, building on the rhythm representation stuff that I mentioned earlier, coming across existing environments like SuperCollider, but also more rhythmically interesting languages, I think, like the Bol Processor by Bernard Bel, who’s like a logic programmer. But he had made this system originally for representing Indian music, in particular North Indian tabla drum percussion rhythms, and generalized it to be used for any kind of music. And so I was really inspired by that kind of approach to rhythmic cycles and polymeter, polyrhythm, embedded rhythms. And so, yeah, mainly inspired by that and work by Laurie Spiegel, for example, the American composer who wrote this beautiful short essay about manipulations of musical pattern. And so I was compelled to really make a language about pattern making that would be very fast to use and start doing that in Haskell. And that’s what became TidalCycles, eventually. + +*AL (0:10:18)*: I mean, obviously, I’m quite impressed with this line of reasoning, and I’m grateful in a way that you chose Haskell because it gives us something to talk about on this podcast. But I’m nevertheless surprised about the dismissal of Perl because when you’re saying improvised programming from scratch, the baseline is always a little bit relative, right? Because obviously, you’re using libraries. I mean, you’re not implementing everything from scratch. Also on TidalCycles, you’re relying on all sorts of predefined abbreviations and so on and so forth. So, would it not have been feasible in Perl to just write a library and then get to a point where you can make sound more quickly or more easily as well? + +*AM (0:11:05)*: Yeah, I think it would’ve been very different. 
I think, as I say, the real focus on the strict typing and the type system really helped me come up with a representation of rhythm that would have been beyond my understanding to implement without it, I think, because it’s not just a case of having the idea and implementing it. It’s a case of starting with something and gradually working with the types until I could understand what a rhythm was to me through that process. I have since re-implemented TidalCycles in both Python and JavaScript, so it’s totally possible to implement it in other languages. But I think coming up with the ideas would have been difficult. And in fact, I still jump between mainly JavaScript and Haskell. The JavaScript implementation is called Strudel. You can go to strudel.cc to try that. And TidalCycles, you can read about on tidalcycles.org. But yeah, I’m contributing to both in parallel, and when there’s something difficult to do in terms of representation, I’ll be doing that in Haskell still. I think it just wouldn’t be possible to do it in JavaScript, at least not for me. + +*MS (0:12:25)*: So I heard you say continuation earlier, right? And I read your recent blog posts on the evolution of the data representation of rhythms and patterns in Tidal. So, I mean, it seems to me you’re really doing software engineering at a pretty high level. You know stuff that isn’t usually in beginners’ books on programming. Can you say a little bit about what resources you used in learning Haskell? + +*AM (0:12:49)*: Yeah. I can try and remember. There was one particular book by Graham. You can probably help me with his name. Hutton?  + +*MS (0:12:59)*: Yes. + +*AM (0:13:00)*: Graham Hutton, yeah. I think that came out part of the way into my journey. And that’s what really helped a lot of things click for me. Clearly, he taught Haskell a lot, and the order of the chapters and how it built up into making a parser really helped. 
But from there, it’s just like feeling around, really, getting ideas from reading papers by people like Conal Elliott and Paul Hudak about functional reactive programming. That was a major influence. But yeah, I think it was mainly Graham Hutton’s book that really helped me grasp things. But at the same time, I don’t fully grasp exactly the full idea of what a monad is. I’ve just learned through the process of writing Tidal a feel for it. Like, I came up with a join independently when I tried to work out how to deal with patterns of patterns. I’d call it an unwrap and then realize, oh, that’s actually quite a fundamental construct.  + +So I should say that, actually, what really is great about Haskell is how it’s great for making embedded domain-specific languages, which is what Tidal is. So it is a pattern type, but then it’s a whole combinator library of functions for playing with and manipulating patterns, as well as having an extra domain-specific language for expressing sequences, which we call the mini-notation, which is embedded in Tidal thanks to overloaded strings. So a shout-out for the fantastic ability to create domain-specific languages in Haskell, which means that thousands, maybe tens of thousands, of non-programmers are actually writing in Haskell without realizing it, making music with Tidal. + +*MS (0:15:17)*: So, are type error messages ever a thing for all of these people performing with TidalCycles, who presumably have less of a background in Haskell than you do? + +*AM (0:15:27)*: Yeah, I’m not sure. It’s a good question. Like when people hit an error, what do they do about it? But no, you don’t have to worry about types when you’re using Tidal, I don’t think, because everything is a pattern. It might be a pattern of strings. It might be a pattern of synthesizer control messages. But yeah, everything’s really contained within that single type, really. 
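To make the “everything is contained within that single type” idea concrete, here is a rough sketch of what such a pattern type and the “unwrap” (join) Alex mentions might look like. The names (`Span`, `atom`, `unwrap`) and definitions are illustrative simplifications, not Tidal’s actual code:

```haskell
-- Illustrative sketch, not Tidal's real definitions: a pattern is a
-- function from a queried time span to the events active within it.
type Time = Rational
type Span = (Time, Time)                       -- (begin, end) of a query
data Event a = Event Span a deriving (Eq, Show)
newtype Pattern a = Pattern { query :: Span -> [Event a] }

-- A pattern with one event per unit cycle.
atom :: a -> Pattern a
atom v = Pattern $ \(b, e) ->
  [ Event (fromIntegral c, fromIntegral c + 1) v
  | c <- [floor b .. ceiling e - 1] ]

-- The "unwrap" (monadic join): query the outer pattern, then query each
-- inner pattern over the span of the event that carried it.
unwrap :: Pattern (Pattern a) -> Pattern a
unwrap pp = Pattern $ \sp ->
  [ ev | Event sp' ip <- query pp sp, ev <- query ip sp' ]
```

Because everything the user touches has this one shape, combinators compose freely without the user ever writing a type signature.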
At the end of the journey, what we’ve ended up with as a representation is a function from time to events. That’s what a pattern is. But in particular, a function from time ranges to events which are active in that time. So what you might do as an end-user live coder is use a function like rev, which will reverse a pattern. But there’s no data structure. It’s just a function. So in that case, you’ll just wrap one pattern in a function, which manipulates the time of the query (the input to the function) as well as the output (the events active at that time): it adjusts the query in terms of which part of a rhythmic cycle you’re querying, and then mirrors the cycle so that it comes out reversed in the results. I think syntax errors are probably more of an issue for users of Tidal than type errors.  + +*AL (0:17:05)*: But when you get an error, you just basically have your screen flashing or something like that, and you have to redo it until it’s correct, and the sound stays unchanged in the meantime, right, or something like that. + +*AM (0:17:18)*: That’s right. Yeah. So that can be a bit annoying if you’ve queued up a change to your code, and you hit evaluate at exactly the right moment, and nothing changes. But because music is, by its nature, cyclic, you can just wait until the same moment comes around again. At least if you’re making techno, you can just wait another four repetitions or something, and then it’s all fine. + +*AL (0:17:44)*: Ah, so this is – I mean, okay. So I have very little experience with all this. You’re saying you’re manually updating the code at very specific moments in time to make it fit nicely, or are you programming the update in such a way that it will update at the right moment in time anyway? Is this a manual timing thing, or is this something that you are programming? + +*AM (0:18:10)*: Yeah, really the former. 
So, the difference, I think, between live coding and software engineering is that, as the programmer, you’re really part of the program. When you make the changes is as important, if not more important, than what the change is, in terms of the musical results. So I might be on stage with people listening in the audience, and what they hear is the code running, but also me making changes to the code. This is where it’s a bit different from the functional reactive programming inspiration for Tidal: I’ve never actually used any of these FRP frameworks. But my understanding is there are two parts. There’s the signal, which is the analogue of my pattern type, and then there’s a whole event framework around it. But I don’t need that event framework because all I need is the representation of a pattern, what’s running at the moment, and then everything else is just the programmer making changes to it to create music. + +*AL (0:19:17)*: Yeah, I think I should probably rephrase a little bit more because I’m probably not doing a very good job at asking the question that I have precisely. You said that music is, in its nature, cyclic. So there are certain points in time when it feels natural to the listener that something changes.  + +*AM (0:19:36)*: Yeah. + +*AL (0:19:37)*: And there are certain points in time where it would feel like a hard cut. + +*AM (0:19:40)*: Yeah.  + +*AL (0:19:41)*: Now, I would have assumed that even though, obviously, you’re live coding and you’re making changes, that it’s a property of the system that if you make an update, that it does not execute the update at exactly the moment that you hit return or whatnot, but that it waits until the next opportune moment to do the update. Is that not accurate? + +*AM (0:20:08)*: Yeah, I guess, I mean, you can do that. There’s a facility to delay a change; yeah, there are different ways of making transitions from one state to another. 
So one might be waiting until a modulo of four cycles or one cycle. But most of the time, it’s fine. You can time it just by hitting evaluate at the right moment.  + +*AL (0:20:33)*: Okay. + +*AM (0:20:33)*: Because sometimes, musically, you want it to be a very clean transition right at the start of the rhythmic cycle, but sometimes you might actually want it to jar a bit or make it a bit wonky. And so it just makes sense to have that under the control of the user. But yeah, I’ve got things where it will change from one state of the pattern to another by randomly taking out events from one version of the pattern and introducing them from the next version, or doing a sort of audio crossfade, or gradually building up a pattern before dropping into another one. But in general, I just make a cut whenever I evaluate. There’s no state in the system – well, there is, but that’s another story. In terms of the basic types, there’s no state. And so the only state coming in, in general, is time. You can have controller values coming in as well, and you can have – those events can actually be – so you can have a pattern of anything. So you could, if you want, have a pattern of functions that take state in and then pass the state from one event to the next. But in general, it’s a completely stateless, pure type. Everything is just dependent on time; that’s the only state. And so it’s actually quite straightforward to replace one pattern with another. + +*AL (0:22:00)*: Could you perhaps generally describe the architecture of the system a little bit more? Because I mean, obviously, we have some sort of editor in which the performer is coding. But then, how exactly does the code get reloaded and executed on the fly, what exactly are you communicating with, and how, in order to create the sound? And so, how do the components all fit together, and how difficult was it to set that up around Haskell and the Haskell toolchain? + +*AM (0:22:34)*: Yeah. 
So people generally use usual programmers’ editors like Emacs or Vim or VS Code, but all of these plugins generally are just front ends to GHCi. So the plugin will load GHCi and run the script that starts Tidal. And then they will evaluate code, usually in blocks separated by whitespace, and that will basically take that code and paste it into GHCi. And any feedback they get is just the output of GHCi. So quite straightforward, really. I have made an editor called Feedforward, which works with the Hint package, the Haskell interpreter package, and has more visualization built in and things like that. But most people use one of these plugins. And when they’re running code, there’ll be a mutable variable containing a pattern, which they will be updating with a scheduler querying the pattern at 20 hertz by default and sending messages using a network protocol called OSC, Open Sound Control protocol, I think. So the sound actually isn’t generated by Haskell. TidalCycles itself is just like a pattern engine, and it’ll send messages to an external synthesizer to actually make the sound. In particular, one called SuperDirt, which runs in an environment called SuperCollider, which I mentioned earlier, which might, in turn, send messages on to synthesizers using a protocol like MIDI. So yeah, that’s the basic structure.  + +*AL (0:24:24)*: Okay. And that’s always basically been the architecture of TidalCycles from the very beginning, or has it undergone major changes over the years? + +*AM (0:24:35)*: Yeah, that’s been basically it. SuperDirt came along a little bit into its life. Before that, I was using something that was just called Dirt that was implemented in plain C and also used Haskell’s MIDI to control MIDI gear. But it just made sense to have everything going through SuperCollider in the end. 
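The moving parts Alex describes – a mutable variable holding the current pattern, and a scheduler that repeatedly queries it and sends the resulting events out over OSC – might be sketched roughly like this. The names (`tick`, `swapPattern`) and types are invented for illustration; the real Tidal scheduler also deals with tempo, latency, and synchronization, and uses an OSC library rather than a bare `send` action:

```haskell
-- A simplified sketch of the scheduler loop described above.
import Data.IORef
import Control.Monad (forM_)

type Time = Rational
type Span = (Time, Time)
data Event a = Event Span a deriving (Eq, Show)
newtype Pattern a = Pattern { query :: Span -> [Event a] }

-- One scheduler tick: read whatever pattern is currently in the mutable
-- variable, query it over the next slice of time, and hand each event
-- to a send action (in real Tidal, an OSC message to SuperDirt).
tick :: IORef (Pattern a) -> (Event a -> IO ()) -> Time -> Time -> IO ()
tick ref send now step = do
  pat <- readIORef ref
  forM_ (query pat (now, now + step)) send

-- Live evaluation amounts to swapping in a new pattern between ticks;
-- the scheduler picks it up on its next query.
swapPattern :: IORef (Pattern a) -> Pattern a -> IO ()
swapPattern = writeIORef
```

A real loop would call `tick` twenty times a second (the 20 Hz query rate mentioned above), advancing `now` by 1/20 of a cycle-clock step each time.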
+ +*MS (0:24:56)*: So Alex, we’ve known each other for quite a while because we were both involved in a workshop that usually runs as part of ICFP, the FARM Workshop on Functional Art. I noticed you over the years getting interested in non-Western kinds of music. Did that lead to any changes in TidalCycles or any more ideas about how to represent music and patterns in computing? + +*AM (0:25:20)*: Yeah, for sure. Yeah, it’s been a sort of ongoing thing. I don’t have any musical training. So I’ve come to the world of music wanting to look for alternatives, because all I have really is the beginner’s mind. So I’ve stayed away from learning what’s called Western music theory and looked for alternatives. I’m trying to avoid that kind of score-based approach to music. Although Paul Hudak’s work has been a massive inspiration, his work on the Haskell School of Expression, I think, does focus on that kind of score-based approach, which is fair enough. I mean, obviously, there’s trade-offs at play, but most of the music we listen to is in that model.  + +So as I said, I was influenced by the Bol Processor and, through that, by Indian conceptions of time, which are based on metrical cycles, but not really getting too deep into it, just learning about it through reading the documentation of someone else’s software. But more recently, I’ve been learning South Indian rhythms more in depth, in particular learning konnakol, which is the practice of reciting rhythms verbally using what you might call nonlexical syllables, i.e., syllables which don’t have any meaning as we’d normally understand it but instead just exist as rhythmic sounds like tadhinginatom or takadina. They do reference drum strokes similar to North Indian music. You have bols, which represent particular ways of articulating the tabla drum, which is what the Bol Processor is named after. In South India, they have solkattu, bunches of syllables, which are usually related to the mridangam drum. 
But konnakol takes it further with additional syllables and also relates to dance movements and things like this. I’m not an expert, but this is as far as I understand it.  + +So yeah, I’ve been getting really into konnakol because it is very numerical and it’s very algorithmic, and yet it’s thousands of years old. So it’s nice to have a grounding for algorithmic music, which goes beyond the normal conception of 50, 80 years of computing history but looks for a basis for algorithmic music that is part of a much longer human obsession with patterns and algorithms. So maybe an example would be taking a group of syllables like tadhinginatom, but stretching and compressing it according to a rhythmic sequence, like tadhinginatom, tadhinginatom, tadhinginatom, tadhinginatom, tadhinginatom, tadhinginatom, ta, which compresses and then stretches again according to a different number of beats per syllable, but does it in such a way that it fits a particular metrical cycle, called a tala. And I find this really fascinating, the idea of this kind of cycle, which might be really long, might be dozens of beats long, but somehow calculating these rhythmic transformations so that they fit it perfectly. So yeah, it’s been really inspiring. + +And so in TidalCycles, everything is based on these metrical cycles. So when you reverse something, you actually reverse every cycle. When you concatenate two patterns together, because they’re functions, you have no idea about their structure. So all you can do is just alternate between the cycles of the patterns that you’re concatenating. Whereas in konnakol, in South Indian practice, you do all kinds of things which are actually based on beats, but just within the cycle. So I guess my misunderstanding was that everything is a cycle, but actually that’s not true at all. These beat-based stepwise transformations really run through it all, as far as I’ve seen. 
That made me quite frustrated with Tidal because I was learning all these rhythms, which were actually really difficult to express in Tidal. Of course, I could just notate the individual sounds as this long sequence, but I couldn’t actually represent the structure in the satisfying way that I was looking for.  + +So in that blog post you mentioned earlier, I wrote about how the representation of Tidal has changed over time, and it really comes down to a struggle between being able to represent beats and how one comes after another and being able to represent cycles. That’s why I ended up with querying time spans instead of individual time values, because I worked out that I could represent both signals (continuously varying values) and discrete events if I queried time spans: you can find all the discrete events in there, if it’s a discrete pattern, or you can just sample a point within that range, if it’s a continuous pattern, if that makes sense. It’s like trying to represent both analog and digital patterns within one type, which I managed to do but ended up with something really opaque, where I can’t then do these transformations on the beats, because I’m dealing with a function where I’m not actually querying the function; I’m just wrapping it in another function. So there’s no possibility to really deal with beat-based transformations within that, if that makes sense.  + +So I’ve been over time moving between that and splitting out the types so that I can represent sequences separately from signals. But what I’ve settled on at the moment is, after spending, actually, last year, quite a few months, really, trying to rework things into this complex set of types that were unified by a type class and having a lot of fun with it, but just getting lost in the weeds. 
And eventually, I just went back to my original type and just added an extra field to it, which is the number of steps that a pattern has, so that I can then use that in order to combine two patterns in a whole set of different ways. Just enough information to create a whole new set of possibilities for combining patterns.  + +*MS (0:33:05)*: So, I mean, you talked about making TidalCycles so that you could use it to do the music that you’re interested in, but as you mentioned earlier, there’s now quite a large community around TidalCycles. Can you talk a little bit about what it took to build that community and to get the software in a state where it’s usable by other people, specifically people who have little or very little background in programming previously? + +*AM (0:33:30)*: Yeah, it was quite a challenge. So it was just me using it for the first few years, probably. I think I started around 2009, something like that, which is quite a long time ago now. But I remember the first workshop I ran with it, and it was just really nice seeing people use it for that. I made a USB stick that people could boot off, which worked at the time. It’s a bit harder now with the development of macOS, but at the time, I managed to get a USB stick, which people could just boot into Linux with Tidal running. But there’s so many different moving parts, including Haskell, SuperCollider, SuperDirt running in that, a text editor, and the plugin. People have spent days trying to get it all working. But the community’s grown. They’ve helped each other install it and also helped with various methods for installing it.  + +I had a Summer of Haskell student, Martin Giss, who set out to make a binary version of Tidal, but he never succeeded because actually managing to get a portable binary with a Haskell interpreter in it was a massive challenge, made even worse more recently because we have the Link library for synchronizing tempo, which has made it even harder to create that binary. 
So, it’s always been a case of getting people to install Haskell, which also uses a lot of disc space and all the stuff around it, and a lot of people have failed. But somehow, it was compelling enough that a lot of people pushed through and made local communities where you could go and help get it installed. So, it’s been a huge challenge, but somehow people have risen to the challenge and managed to get it installed. It means that people have been really paranoid about changing it or updating to a new version because they’ve had such almost trauma trying to get it installed. + +*AL (0:35:45)*: Is this one of the reasons that motivated re-implementing in other languages? + +*AM (0:35:52)*: I think that’s part of it. I mean, I started off on that project by actually rewriting Tidal in Haskell. I just had the idea of what would it be like if I tried to rewrite TidalCycles from scratch without looking at the original source code. So, I did a two-hour stream of that where I talked through the process of writing Tidal. I managed to get quite far, actually, implementing the monads and all of that. + +*AL (0:36:25)*: But that didn’t culminate in an actual re-implementation of Tidal that has now replaced the first, or did that? + +*AM (0:36:32)*: Definitely stuff that I’ve written in that was clearer than the actual code base and did form part of Tidal, but it also was just like a clean implementation. + +*AL (0:36:46)*: But did that get merged in ever? That was basically like my – + +*AM (0:36:49)*: I can’t remember exactly. I think when I did at one point try to – I mentioned I tried to split out the two types, and at that point I was pulling in quite a lot of code, but that branch was abandoned. But I feel like I must have copied and pasted some stuff back in, but it wasn’t like completely merged. But from there, the whole thing felt more plastic. 
I felt like I could understand it more after managing to just re-implement it in a couple of hours, and it also meant that other people had watched it and had more of an insight into the reasons why things were as they were. From there, it was much easier to re-implement it in Python. I didn’t take that very far. I did manage to make music with it, but ported it from there to JavaScript, and that has become its own community project. Someone called Felix Roos came along and turned it into a usable live coding framework, and put loads of work into web audio synthesis and stuff. But yeah, all kinds of projects are branching off from there. So it’s funny: just rewriting something, and sharing that rewriting, has meant that other people have re-implemented it in Lua and all kinds of other strange languages, because there was just this really simple implementation of its core, which wasn’t really usable, but still was enough for people to build on. + +*AL (0:38:14)*: So, how different are all of these versions from each other? I mean, so I guess if you look at Tidal code, then it still at some level looks very much like Haskell in some sense. I mean, you have dollars for application and stuff like that, for example. So I guess that in the same sense that if you have the JavaScript implementation, then it’s using JavaScript syntax on the surface, and then the patterns themselves look the same because they’re just strings effectively or – + +*AM (0:38:45)*: There’s no strings really in Tidal or Strudel. It looks like there are, but – + +*AL (0:38:51)*: Yeah. I mean, like string syntax. Sorry. All right. + +*AM (0:38:54)*: Yeah. So the JavaScript version is based on method chaining. So rather than a dollar, you’d put a dot after something and then call a method on it. And that method would create a new pattern that you’d then call another method on. 
And so everything is – in Haskell, it’s beautiful that you can just create operators out of thin air. But in JavaScript, there’s really only that one operator, the dot, and the numerical ones. So everything is a method in JavaScript, which has its problems. But the actual core is more or less a verbatim translation from Haskell. So it’s not within the JavaScript paradigm, which, as I say, makes it quite hard to develop there but quite straightforward to translate from Haskell, I’d say. + +*AL (0:39:48)*: So would you say that, at the moment, if there are new developments, they still happen in Haskell first and then get migrated to the JavaScript version or – + +*AM (0:39:58)*: Yeah. It’s a two-way street, though. I have cleaned up some things in Strudel and then backported that to Haskell. But yeah, this stuff trying to make Tidal more able to express Carnatic South Indian music is definitely implemented in Haskell first. But yeah, as it goes backwards and forwards, it helps clarify the ideas really. So yeah, it’s nice that it goes in both directions. + +*MS (0:40:27)*: So as far as I’m concerned, live coding is a new art form, or at least it’s a different form. It leads to a different form of event, also. And now you’ve seen that evolve over the years. If you think about the evolution of music that live coding has contributed to, do you see any developments or any trends over the last, I guess, 15 years? Where is it all going? + +*AM (0:40:50)*: Yeah. I think it’s just spreading and becoming more localized. So you get all these live coding practices, which are a little bit unique to places. Like if you go to Barcelona or Mexico City, they have these from-scratch sessions where they perform for nine minutes exactly from scratch. And then everyone claps at the end. That’s the only rule. Yeah, really short performances, which are very focused, with lots of people taking turns performing. But yeah, the languages are developing.
If you search for all things live coding, you’ll get the TOPLAP long list of live coding environments, and there are just hundreds. So yeah, lots of new ones. Particularly in France, for some reason, they seem to be coming up with a new language every month or so.  + +So it’s spreading. I think it’s always been very focused on community development; it’s as much a community development as a technological one. And so, yeah, I see organizing events as part of my practice just as much as developing programming environments and performing myself. I see it as all part of the same thing, really. Part of that was coming up with the term algorave, which has become its own thing.  + +*MS (0:42:16)*: I was going to ask about that. + +*AM (0:42:17)*: Yeah. It’s a portmanteau of algorithm and rave. So that’s where people make algorithmic music and visuals for people to dance to, usually through live coding, but other kinds of algorithms too. And part of that is making sure there’s lots of diversity in terms of musical background, gender expression, and everything else, because there is a problem both in electronic music and computer science in terms of how things have developed. Certain kinds of people tend to dominate things. So yeah, we’ve tried to make something new, a bit separate from these existing communities, and create something which is more welcoming to a broader range of people, which seems to have worked to quite a large degree. + +So yeah, I think community development has been super important. It’s interesting how there’s more commercial live coding now. People like DJ Dave perform as well as DJ to really large audiences and have big TikTok followings and things, and that has actually coexisted very nicely with these more improvisatory practices, more experimental noise performances, this kind of thing. So yeah, it’s been really lovely seeing how these different communities have developed around the world.
And we have an international conference as well, which originally started in Leeds and has toured around the world. The last one was in China, and the next one is in Barcelona in May this year, 2025. There’s an open call for the next. I’m not sure where that’s going to be. Yeah, for me, I’d see the development of live coding in terms of community development. But it’s also nice to see how people create these hybrid practices, bringing it into areas like algorithmic choreography. So yeah, live coding itself is open to change, I hope. + +*MS (0:44:19)*: So, apart from your work on music, you’ve also worked with textiles, right? I’ve seen your work on weaving, even with the Deutsches Museum here in Munich, in Germany. Can you talk a little bit about that? + +*AM (0:44:29)*: Yeah, sure. So that was part of a five-year project called Penelope, led by Ellen Harlizius-Klück at the museum. So I’ve been in the lucky position—well, lucky for me, it just works well for me—of being a researcher outside of academia. So you might call me a para-academic. So that was working in a museum institution, exploring weaving structures as a live coder. So making a little loom that I could live code to explore the three-dimensional properties of weaving, learn about it from a live coding perspective, and help reinsert weaving into the history of science and technology. And I’ve continued in that vein, really. So for the last four years, I’ve been a research fellow, again not affiliated with any university, but leading my own project as part of a small non-profit based in the UK called Then Try This.
So that project is called Algorithmic Pattern, and it’s looking for precursors of algorithmic art more broadly, including textiles, but also things like juggling, music, dance, bell ringing: all these strange obsessions with numerical transformations, often with strange notations, that humans have engaged with over many years. We’re using that knowledge to inform the development of new creative technology, working with lots of collaborators, like Kate Sicchio, who’s an amazing choreographer, and Luke Iannini, who’s part of another non-profit organization called Dynamicland, making hand-drawn notations with him and Kate for algorithmic choreography and pattern making. Who else? + +*AL (0:46:25)*: So sorry. I mean, this is all a bit – I mean, this is very impressive, but it’s difficult for me to imagine. So when you say weaving through live coding, are you saying you developed a new system for that yet again, or does this work with essentially the same technology somehow, or – + +*AM (0:46:45)*: Yeah, I have hooked up Tidal to a loom, and I’ve also made much simpler systems for exploring weaving structures, because weaving itself is so complex. That’s the thing about pattern: you have these simple parts that combine in different ways, through interference patterns and other chaotic relationships, into complex results. So you’re working with these very simple transformations but combining them to create very complex results. And when you’re actually working with a material like the ups and downs of weaving, you get all these very surprising interactions in terms of color and structure as a result.  + +Recently, I visited my friends Christina Anderson and Peiying Lin in Eindhoven in the Netherlands.
I was working before with a handmade loom, but they have this TC2 industrial prototyping loom, and we managed to hook that up and reverse-engineer the network protocol so we didn’t have to use the ancient Windows software that it has. And yeah, we started using the MQTT protocol to try various different approaches to controlling it, including using Tidal, but also just much simpler things like cellular automata, just to see what would happen. + +For me, though, often as computer scientists, the temptation is to look for problems to solve in these practices. But something like weaving is a really ancient practice. There are no problems to solve that weavers aren’t already spending their lives engaging with. So it’s much more about looking at these practices for inspiration and learning, because computer science is so young and so impoverished in a lot of ways, in terms of how it’s open to change, how it supports creativity, and so on. So it’s not very fashionable, because AI is being used to try and solve all these problems without really helping us understand them at all. Whereas my approach is to start with the craft and try to enrich computing by understanding it and bringing that knowledge to computer programming.  + +*MS (0:49:12)*: Yeah, I heard you do that with konnakol, which sounds really difficult to master as a human. + +*AM (0:49:17)*: Yeah. + +*MS (0:49:18)*: Or is that something that came easily to you? + +*AM (0:49:20)*: No, not at all. Yeah, I’ve been learning it for two years, but it’s really something you need a whole lifetime to master. And I started too late for that. Although my teacher, BC Manjunath, says that it’s okay. You can continue in the next life. So we’ll see what happens there.  + +But yeah, I think even trying to learn it gives you lots of insights, and the challenge of starting to get that embodied feeling for a rhythmic pulse and a feeling for the transformations you can do with it on the fly.
It just gives you so many insights into tacit knowledge, almost into what a rhythm is and how we can interact with it, that I can then bring to my computer programming. + +*MS (0:50:07)*: So, is there any advice you can draw from that for people studying computer science or trying to get into computer science? + +*AM (0:50:14)*: Ooh. Yeah, I don’t know, because, I mean, I would sometimes call myself a computer scientist, but I’m not really a trained computer scientist, and I’m not a trained musician either. But I guess I’ve found little unique places in between these different fields. So I guess my advice would be: if you want to find something unique, focus on creating space for things to happen rather than trying to work within the hierarchy of an existing field. I think as a researcher, having the aim to create space for things to happen is just a really nice approach, because it’s about community building, but it’s also about finding unique conditions, setting up unique conditions, and letting things just happen and going with it, if that makes sense.  + +*MS (0:51:10)*: Yeah, that makes total sense. + +*AL (0:51:11)*: Pursuing that thought for a moment, I would be interested in what’s the, to you, most surprising thing that TidalCycles has actually been used for that you did not predict at all, in terms of the direction or the way it was used in a performance? + +*AM (0:51:31)*: Ooh. Yeah. So that’s a good question, and perhaps too difficult. It’s been used for all kinds of things, from controlling robots to dance, to – I think it’s been used to control a set of accordions that had been taken apart and controlled with pneumatics by my friend Alexandra Cardenas, although I think she might have just used SuperCollider for that, I’m not sure. It’s been used to make the strangest kinds of music, both really commercial-sounding trance music, beautifully produced, and the noisiest, strangest thing that I’ve ever heard.
Yeah, I mean, that’s the privilege of working in computing: you can connect with all these different practices just by finding the protocol and then directing your language at it. So, something like Tidal that only deals with patterns, you can use in theory to pattern anything. But at the same time, you have to have humility, because you have to understand the material that you’re patterning to really get anything interesting out of it. + +*AL (0:52:54)*: And so, perhaps looking into the future, is there still a revision of the system that you would hope to be able to get at, which can fully express these ideas that you’re working on? Or is there something even bigger that you’re dreaming of, or an application into yet another area, now that we’ve seen weaving and music and whatnot? Generating movies or – + +*AM (0:53:21)*: Yeah, I guess there are a few things I’ve always wanted to explore. I mean, my current difficulty is that, in Tidal, almost everything is a pattern. So if you speed a pattern up by a factor, then that factor is itself a pattern. So you can pattern the factor to create frequency modulation or whatever, or if you’re moving something earlier or later, then you can actually make that a pattern. But at the moment, I added this extra field, which says how many steps there are in the pattern, and that allows me to do these Carnatic-like transformations. But that’s just a value. That’s just a time value, a rational. So I’d like that to be a pattern.  + +*AL (0:54:10)*: Right. + +*AM (0:54:11)*: But making that possible has all kinds of repercussions in terms of the monadic binds and joins that I’ve implemented. It seems like it’s possible. I’ve got a working prototype, but as I go through testing it, I’m not sure what’s going to happen. I mean, that’s a beautiful thing.
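To give a feel for the idea Alex describes, here is a minimal toy sketch in Haskell. It is not Tidal’s actual implementation (Tidal’s real `Pattern` type queries time spans for events, and its `fast` goes through a monadic join); the `Pattern`, `fastConst`, and `fast` below are simplified, illustrative names showing only the shape of "the factor is itself a pattern":

```haskell
-- Toy sketch: a "pattern" as a plain function from time to a value.
newtype Pattern a = Pattern { at :: Rational -> a }

-- Speeding a pattern up by a constant factor: sample it at scaled time.
fastConst :: Rational -> Pattern a -> Pattern a
fastConst k p = Pattern (\t -> at p (t * k))

-- Speeding a pattern up by a factor that is itself a pattern: the factor
-- can now vary over time, giving the frequency-modulation-like effects
-- mentioned above.
fast :: Pattern Rational -> Pattern a -> Pattern a
fast kp p = Pattern (\t -> at p (t * at kp t))

main :: IO ()
main = do
  let ramp = Pattern id                 -- a pattern that just returns the time
  print (at (fastConst 2 ramp) (1/2))   -- constant factor: prints 1 % 1
  print (at (fast (Pattern (+ 1)) ramp) (1/2))  -- time-varying factor: prints 3 % 4
```

The point of the sketch is only that `fast` takes a `Pattern Rational` where `fastConst` takes a bare `Rational`; making the new steps field patternable in the same way is what touches the binds and joins Alex mentions.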
You see this possibility for what you could change without really knowing what the results are going to be in terms of how it actually works in practice. So it’s like this delayed realization, where you’re coding something but don’t actually know what the musical results are going to be at all.  + +Yeah, I’d like to make more visual language front ends for Tidal. I’ve always wanted to have spatial arrangement be significant, to have proximity between words be a parameter to the code, if that makes sense. There’s a whole world of – there’s a Future of Coding community exploring this kind of thing. With this fascination with these verbal patterns, I’d like to get more into the structure of words. How can a word, through something like onomatopoeia, represent sound? Because we’re so used, as programmers, to words being arbitrary, but actually, what you call things does have a lot of influence on how we use them, and it’d be nice to take it further and actually have the morphology of words within a rhythmic structure have a connection with the sound. That’s an idea left over from my master’s all those years ago that I’ve never quite got around to properly looking into, but maybe someone else will do it. That’d be nice. + +*MS (0:55:56)*: Yeah. It sounds like you’re not running out of work anytime soon. + +*AM (0:55:59)*: No, unfortunately not. No. + +*AL (0:56:02)*: Fortunately, fortunately. + +*AM (0:56:03)*: Yeah. + +*AL (0:56:04)*: At least fortunate for us, I guess. + +*AM (0:56:08)*: Yeah. I think once you start getting into patterns, whether you’re a mathematician or a choreographer or a computer scientist, you realize there’s always one more thing. And when you add a new thing to a combinator library of pattern transformations, you can combine that new transformation with all the others, and it just opens up a whole combinatorial explosion of new things to explore.
So yeah, it’s been a real privilege being part of this journey, and yeah, always excited to see what’s around the corner. + +*AL (0:56:38)*: Well then, thank you so much for taking the time to talk to us. It was great to have you here.  + +*MS (0:56:42)*: Yeah, thank you.  + +*AM (0:56:43)*: Yeah, it’s a real pleasure. I’ve enjoyed listening to quite a few of your previous podcasts, and yeah, lovely to be part of it. Thanks for the invitation.  + +*AL (0:56:54)*: Of course. Thank you. + +*Narrator (0:56:59)*: The Haskell Interlude Podcast is a project at the Haskell Foundation, and it is made possible by the generous support of our sponsors, especially the Gold-level sponsors: Input Output, Juspay, and Mercury.