Zeroes and Ones, Pt. 2

A Continuing Voyage through the History of Computer Music

Eldar Tagi · 06/09/25

Looking around the world today, one cannot help but be amazed by the tremendous progress digital technology has achieved in less than a century. Computers have become so deeply integrated into our lives that, for many, it would be difficult to imagine a world without them. Exponential growth is a defining trait of digital evolution, allowing us to witness a dramatic transformation of the technological landscape within our lifetimes: from early electrical experiments to the global internet, neural networks, and machine intelligence.

Music, which sits uniquely at the overlap of technology and art, offers a lens that reflects this dynamic. Computer music, in particular, is a fascinating focal point, as the computer remains the pinnacle of music technology—a musical instrument and mediator that continues to evolve in shape, size, and capability. In the broadest sense, a computer is a meta-instrument capable of taking on virtually any form or sonic character. It can be paired with a wide range of controllers, from piano-style keyboards to game controllers to biosensors, enabling novel modes of musical interaction. A computer can serve as the centerpiece of a recording studio or be invisibly embedded into an instrument. At its furthest reach, a computer can even independently assume the roles of composer, musician, mixing and mastering engineer, convincingly synthesizing human-like music and natural sounds. Today, the computer stands as both a tool for mediating human creativity and a mechanism for replicating it. Digital tools offer an unprecedented array of creative possibilities for working with sound, while also challenging and transforming our relationship with and experience of music.

In a previous article, I explored the origins of the computer music revolution, taking off in the 1950s with Hiller and Isaacson's Illiac Suite, the first piece of music composed by a computer. We then examined developments in the 1960s and 1970s, highlighting the converging paths of computer music research emerging worldwide—from Mathews to Zinovieff to Kurenniemi. We paused our journey at the dusk of the 1970s, when research and development around computer music began to gain momentum, manifesting in the establishment of dedicated research centers such as IRCAM in Paris and CCRMA in California. The end of the decade also saw the release of the first commercially available (albeit prohibitively expensive) digital systems, namely the Fairlight CMI and the New England Digital Synclavier.

Let's now pick back up in the 1980s, an era abounding in developments within the field and rich with examples of its extensive influence, spanning everything from academic composition and experimental music to popular culture and fringe underground movements. In fact, many concepts and practices that we still rely on heavily today emerged in the 1980s. We will then follow the evolution of these ideas and practices through the 1990s, 2000s, and to the present day—where we will examine existing modern approaches and perhaps some potential future directions.

The 1980s: The Rise Of The (Personal) Machines

As I've said, the 1980s were a pivotal decade for computer music and digital audio technology in general, as the period is aureoled by a constellation of transformative innovations that would shape the world of music for years to come. FM synthesis, digital sampling, the MIDI protocol, and personal computing—the decade was ripe for all of these groundbreaking concepts to emerge. Indeed, they were developed and introduced nearly simultaneously, opening a brand new door of possibilities for curious artists.

MIDI, or the Musical Instrument Digital Interface, is perhaps one of the most important developments to come out of that period. The process of standardizing digital control across the rapidly growing electronic musical instruments industry was initiated by a joint effort between Sequential Circuits president Dave Smith and Roland Corporation president Ikutaro Kakehashi. Both recognized the burgeoning need for a universal digital interface that would allow devices to communicate with each other regardless of the manufacturer. In 1981, at the Audio Engineering Society (AES) conference, Smith presented his first proposal—the Universal Synthesizer Interface (USI). Enthused by the idea, several manufacturers, including Sequential Circuits, Roland, Yamaha, Korg, and Kawai, began collaborating to develop a standard protocol. By 1983, the MIDI 1.0 specification was finalized and published, and that same year, the first instruments to implement the protocol were released: the Sequential Circuits Prophet-600 and the Roland Jupiter-6.
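As a concrete illustration of what actually travels over a MIDI cable, here is a minimal sketch in Python using the third-party mido library: a note-on message is just three bytes, a status byte encoding the message type and channel, followed by a note number and a velocity.

```python
# A minimal sketch of MIDI 1.0 note messages using the third-party
# "mido" library (assumed installed, e.g. via `pip install mido`).
import mido

# A note-on message is three bytes: a status byte (0x90 + channel),
# a note number (0-127, 60 = middle C), and a velocity (0-127).
note_on = mido.Message('note_on', channel=0, note=60, velocity=100)
note_off = mido.Message('note_off', channel=0, note=60, velocity=0)

print(note_on.bytes())   # [144, 60, 100] -> 0x90, 0x3C, 0x64
print(note_off.bytes())  # [128, 60, 0]   -> 0x80, 0x3C, 0x00

# With a connected MIDI interface, the same messages could be sent
# to hardware, e.g.:
#   with mido.open_output() as port:
#       port.send(note_on)
```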

Another significant milestone of the era was the widespread rise in popularity of FM synthesis. Truth be told, frequency modulation as a sound design technique was no secret by that point; it had initially been explored in the analog domain, even in the earliest modular synthesizers. San Francisco Bay Area synth designer Donald Buchla was among the first to integrate FM techniques into his instruments: both the 158 Dual Sine/Sawtooth Oscillator and the 144 Dual Square-wave Generator, designed in the mid-1960s, supported frequency modulation. However, in the analog domain, FM often resulted in unstable pitch responses. While the oscillating carrier and modulator could be interlinked in creative ways, producing unique textures and rhythms, this remained largely in the domain of "cool sound design tricks," and was not particularly suitable for tasks demanding a greater degree of specificity, such as simulating real-world instruments.

Digital FM synthesis, in contrast, proved stable, easily scalable, and capable of consistently reproducible results. Jean-Claude Risset was among the first composers to use the technique during his residency at Bell Labs, but it was John Chowning who formulated and developed it into a comprehensive sound synthesis method. Chowning began exploring digital synthesis at Stanford University in 1967, when he gained access to the computer systems at the Stanford Artificial Intelligence Laboratory (SAIL). In the ensuing years, his research culminated in the paper "The Synthesis of Complex Audio Spectra by Means of Frequency Modulation," and subsequently, Stanford licensed the patent to Yamaha Corporation in 1974.
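At its core, simple FM pairs one sine-wave carrier with one sine-wave modulator, and the depth of modulation (the index) determines how much energy spreads into sidebands around the carrier. The sketch below, written with numpy and the standard-library wave module, illustrates the idea; the specific frequencies, ratio, and index sweep are illustrative choices, not values taken from Chowning's paper.

```python
# A minimal sketch of simple FM (one carrier, one modulator) in Python.
import numpy as np
import wave

sr = 44100                               # sample rate in Hz
t = np.arange(sr * 2) / sr               # two seconds of time

fc = 220.0                               # carrier frequency
ratio = 1.4                              # modulator : carrier frequency ratio
fm = fc * ratio                          # modulator frequency
index = np.linspace(8.0, 0.5, t.size)    # modulation index, swept over time

# y(t) = sin(2*pi*fc*t + I(t) * sin(2*pi*fm*t))
# The index controls how much energy spreads into sidebands at fc +/- n*fm,
# so sweeping it produces the evolving, bell-like spectra FM is known for.
y = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# Scale to 16-bit and write a WAV file with the standard library.
with wave.open("fm_tone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(sr)
    f.writeframes((y * 0.5 * 32767).astype(np.int16).tobytes())
```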

The first instrument to implement digital FM synthesis did not come from Yamaha itself; instead, it was the Synclavier I by New England Digital, which sub-licensed the technology from Yamaha. That said, Yamaha is aptly credited with bringing FM to mass awareness through its iconic DX-series synthesizers, particularly the DX7. Released in 1983, the DX7 is often regarded as one of the best-known and bestselling synthesizers in history. In it, Yamaha effectively and elegantly harnessed the powerful potential of Chowning's synthesis method, crafting an instrument that instantly challenged the existing music technology of the time.

The DX7 embodied years of advanced synthesis research in a sleek, futuristic keyboard instrument with unmatched sonic potential. Its six-operator architecture allowed for deep exploration of FM synthesis, producing a vast range of tones, from organs, guitars, and pianos to brass, woodwinds, strings, and abstract soundscapes. Despite its popularity, however, the complex FM structure, coupled with users' unfamiliarity and a programming interface that demanded considerable commitment, meant that most players rarely ventured beyond the built-in presets. As a result, the stock sounds became so widely used that a large share of popular hits in the U.S. during the period featured one or more DX7 sounds. Just as the electric guitar defined the sound of the 1960s, the DX7's sound became synonymous with the 1980s.

Although the synthesizer's signature sounds eventually fell out of fashion, FM synthesis secured its position as one of the most flexible and versatile sound design techniques. It became prominently used in emerging subculture movements like the demoscene, hip-hop, and various forms of electronic dance music. Yamaha's proprietary sound chips were installed in a number of early personal computers and gaming consoles, including the Sega Genesis/Mega Drive and the NEC PC-8801.

Needless to say, the arrival of personal computers was a very big deal—another one of those marvelous inventions of the time. Harald Bode, a visionary engineer of early musical instruments and the unsung father of modular synthesis, wrote in his notebook in 1978 that the computer was the epitome of a device that could do everything he had ever wanted to do: math, music, and writing. This was just a year after Apple introduced one of the earliest personal computer systems with music-making features, the Apple II, which stimulated a host of young developers to create dedicated programs for music production.

One such program was the Music Construction Set (MCS), designed by 15-year-old Will Harvey, who originally wrote it to add music to his previously released shooter game Lancaster. MCS offered a graphical user interface resembling traditional staff notation, which made it widely appealing. In 1986, Harvey redesigned the program for the upgraded Apple IIGS, employing its built-in Ensoniq wavetable synthesizer, which significantly expanded its sonic profile. Over time, the program was ported to other platforms, including Atari 8-bit computers, the Atari ST, the Commodore 64, and various IBM PC clones.

Some companies pursued more unconventional solutions. In 1980, the Syntauri Corporation bypassed the established route of designing standalone hardware synthesizers by developing the alphaSyntauri system as an extension to the Apple II. The system included a polyphonic multitimbral software synthesizer, an external keyboard, and the Mountain Computer Music System hardware expansion card. Priced significantly lower than the CMI and Synclavier, alphaSyntauri became a viable competitor. Although it gained popularity among notable users like Herbie Hancock and Laurie Spiegel, internal mismanagement led to the company's closure by 1984.

Several Commodore computers, notably the Commodore CBM-II, Commodore 64, and Commodore 128, featured a robust programmable sound chip called the Sound Interface Device (SID), originally designed by Ensoniq co-founder Robert Yannes in 1981. SID differed from other sound chips of the time because it was inspired by developments in synthesizers, incorporating features like hybrid analog-digital circuitry, three oscillators with variable pulse width modulation, a characterful multimode filter, ADSR envelope generators, and flexible modulation routing. Moreover, every aspect of the chip could be precisely controlled via direct register writes, and its parameters could be manipulated in real time, making SID impressively versatile. Several software applications were developed specifically to harness SID's unique power, including Music Maker (1983) and Electrosound (1983), as well as third-party programs like Steinberg's Pro 16 and Activision's The Music Studio (1984).

Atari Corporation emerged in 1984 from a confluence of seemingly unfortunate events. One was Jack Tramiel's departure from Commodore, along with numerous employees who followed him into a new venture. The other was the financial challenges faced by Atari, Inc., due to a series of strategic missteps. The video game market crash of 1983 was the final blow, and in 1984, the consumer division of the company was sold to Jack Tramiel, who rebranded it as Atari Corporation.

Atari 520ST

The company's most significant impact on the development of computer music was the Atari ST 16-bit personal computer. In addition to its powerful Motorola 68000 processor, high-resolution monochrome display, versatile and expandable software library, and cost-effective design, the ST had another feature that made it particularly appealing to composers and musicians—built-in MIDI I/O ports. This allowed artists to easily bridge the burgeoning realms of audio software and hardware. While many music programs were written for and ported to the ST over its decade of active development, it is worth noting that the machine fostered two programs that evolved into some of today's most popular digital audio workstations: Steinberg's Cubase and Apple's Logic Pro. The latter originated as Creator (1987), designed by Gerhard Lengeling, and Notator (1988), developed with Chris Adam, which added notation editing features.

The Atari ST's two primary rivals were the Commodore Amiga (1985) and Apple's Macintosh (1984), both of which inspired the creation of novel music programs. Mark of the Unicorn (MOTU) developed some of the earliest applications for the Macintosh, such as Professional Composer (1984), and continued along this path with Performer (1985) and eventually Digital Performer (1990), a program that is still actively developed. The Commodore Amiga, on the other hand, became the host for the first tracker sequencer, Ultimate Soundtracker (1987), created by German software developer and composer Karsten Obarski.

Tracker sequencers, or trackers, became prominent in the late '80s and early '90s, and the design has retained a dedicated following to this day. Demoscene and chiptune communities, in particular, made great use of trackers, with programs actively developed by community members; examples include Scream Tracker (1990) by Psi and ProTracker (1990) by Amiga Freelancers.

Organized around a vertically moving timeline of musical events and parameter changes, trackers offer an efficient way to structure compositions from discrete patterns into dynamic arrangements. While early trackers were modest in their features, the concept has since evolved into powerful software tools like Renoise and SunVox, as well as unique standalone hardware instruments like the Dirtywave M8, XOR Electronics NerdSeq, and Polyend Tracker+.
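To make the tracker concept concrete, here is a hypothetical, much-simplified representation of a pattern in Python. It mirrors the idea of rows advancing in time, with a note, instrument, and effect column per channel, but it is not the file format of any actual tracker.

```python
# A hypothetical, simplified tracker pattern: rows advance in time at a
# fixed rate, and each row holds (note, instrument, effect) per channel.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cell:
    note: Optional[str] = None        # e.g. "C-4"; None means no new note
    instrument: Optional[int] = None  # sample/instrument slot
    effect: Optional[str] = None      # e.g. a volume-slide command

# A 4-channel, 8-row pattern (real patterns are typically 64 rows).
pattern = [
    # ch0 (kick)       ch1 (bass)        ch2 (chord)       ch3 (lead)
    [Cell("C-2", 1),   Cell("C-3", 2),   Cell("E-4", 3),   Cell()],
    [Cell(),           Cell(),           Cell(),           Cell()],
    [Cell(),           Cell("C-3", 2),   Cell(),           Cell("G-5", 4)],
    [Cell(),           Cell(),           Cell(),           Cell()],
    [Cell("C-2", 1),   Cell("D#3", 2),   Cell("G-4", 3),   Cell()],
    [Cell(),           Cell(),           Cell(),           Cell("A#5", 4)],
    [Cell(),           Cell("D#3", 2),   Cell(),           Cell()],
    [Cell(),           Cell(),           Cell(),           Cell("C-6", 4)],
]

# Playback walks the rows top to bottom, triggering whatever each cell holds;
# a song is then an ordered list of such patterns.
for row in pattern:
    triggered = [cell.note for cell in row if cell.note]
    print(triggered or "---")
```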

The spread of personal computing had wide-reaching effects. For academic and experimental music, the ability to outsource costly hardware development and production to companies meant that the focus could shift toward artistic and theoretical work on a greater scale than ever before. A distinct example of this is The Hub, a pioneering ensemble of computer network music. Founded in 1986 by John Bischoff, Tim Perkis, Chris Brown, Scot Gresham-Lancaster, Mark Trayle, and Phil Stone, the group evolved naturally from an earlier project, the League of Automatic Music Composers, which had been using microcomputers and homemade electronic instruments since the mid-1970s. The Hub, still active today with its remaining members, were trailblazers who pushed the boundaries of computer performance and composition. Knowing how to solder and program was as essential to members of The Hub as being able to play an instrument or compose.

Once described by Tim Perkis as the "cybernetic and revolutionary cousin to jazz," the music of The Hub was just one element of their artistic practice—the processes, methods, and live environment were equally essential to the ensemble's experience. In one of their early and simpler pieces, "Stuck Note" by Gresham-Lancaster, the performers were only allowed to play a single continuous note or sound at a time, and while it played, only two parameters could be manipulated—volume and an "x factor" (a process for altering the timbre). Moreover, each performer had virtual access to the other performers' controllers at all times and could stream MIDI messages to take control. This arrangement fostered an unusual live scenario for both performers and listeners, creating situational and sonic interactivity.

The Hub's pioneering work was infused with the experimental attitude and spirit of "hacking" and "tinkering" that has always been central to computer music. Among the varied toolset of the ensemble members was software developed by Miller Puckette while at IRCAM in Paris. Originally called Patcher, the program allowed composers to create complex interactive computer music systems without writing code. In 1986, the program was implemented on the Macintosh and renamed Max in honor of Max Mathews.

Opcode Systems Max (Source: SOS 07/1991)

In 1988, Opcode Systems began distributing Max commercially, and soon it was no longer just IRCAM's resident composers experimenting with interactive and algorithmic composition—essentially anyone with a Macintosh and the desire to do so could. While the commercial model allowed for continued active development of the software, Puckette saw it as a barrier to accessibility. Consequently, once his work on Max was complete, he began developing a free alternative, which would eventually become Pure Data.

Max wasn't the only influential computer music software to emerge in the mid-1980s. In 1985, composer and researcher Barry Vercoe was working at the MIT Media Lab on new music software that he hoped would capitalize on existing advancements in computer music technology while being accessible not only to academic researchers but also to composers and practicing sound designers. Building on his experience with MUSIC-11, Vercoe coded the new program in the recently developed and highly efficient C language, creating Csound as a continuation of the MUSIC-N series started decades earlier by Max Mathews.

Csound

Csound maintained the modular architecture of interconnectable unit generators (or opcodes) found in its predecessors, but also introduced the orchestra and score paradigm, effectively splitting the program into two parts: score files specified notes and control data, while orchestra files defined instruments (sound synthesis algorithms). This architecture ensured the robustness and versatility of the program, allowing composers to design interactive, dynamic control systems and compositions. Csound could implement various sound synthesis techniques, including additive, subtractive, frequency modulation, sampling, and more, making it a perfect platform for prototyping and developing novel synthesis approaches. As a result, the program quickly gained popularity even among seasoned sound researchers like John Chowning and Jean-Claude Risset.
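The orchestra/score split can be illustrated with a rough Python analogue (this is not Csound syntax): the "orchestra" is a set of synthesis functions, the "score" a list of timed events that call them with parameters, and rendering simply mixes every event into an output buffer.

```python
# A conceptual Python analogue of the orchestra/score paradigm.
import numpy as np

sr = 44100

# --- "Orchestra": instrument definitions ---------------------------------
def sine_instr(dur, freq, amp):
    t = np.arange(int(dur * sr)) / sr
    env = np.minimum(t / 0.01, (dur - t) / 0.2).clip(0, 1)  # simple envelope
    return amp * env * np.sin(2 * np.pi * freq * t)

orchestra = {1: sine_instr}

# --- "Score": (instrument, start, duration, frequency, amplitude) --------
score = [
    (1, 0.0, 1.0, 261.63, 0.4),   # C4
    (1, 0.5, 1.0, 329.63, 0.4),   # E4
    (1, 1.0, 2.0, 392.00, 0.4),   # G4
]

# --- Render: mix every event into one output buffer ----------------------
out = np.zeros(int(sr * max(s + d for _, s, d, *_ in score)))
for instr, start, dur, freq, amp in score:
    sig = orchestra[instr](dur, freq, amp)
    i = int(start * sr)
    out[i:i + sig.size] += sig
```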

While some software focused on general music and sound applications, others tackled specific tasks, such as Barry Truax's programs for real-time granular synthesis. Granular synthesis had previously been an offline process due to computational limits, but Truax's landmark piece Riverrun (1986) demonstrated the feasibility of generating and shaping dynamic granular sound textures in real time.
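Conceptually, granular synthesis builds textures from clouds of very short, windowed slices of a source sound. The following sketch shows the basic overlap-add idea in Python with numpy; it is a generic illustration rather than a reconstruction of Truax's actual programs, and the grain sizes and densities are arbitrary choices.

```python
# A conceptual sketch of granular synthesis: short, windowed "grains" are
# read from a source sound and scattered in time, so a texture can be
# stretched, frozen, or smeared.
import numpy as np

sr = 44100
source = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # stand-in source sound

grain_len = int(0.05 * sr)              # 50 ms grains
window = np.hanning(grain_len)          # smooth the grain edges
out = np.zeros(sr * 4)                  # four seconds of output
rng = np.random.default_rng(0)

for _ in range(2000):                   # grain density: ~500 grains/second
    src_pos = rng.integers(0, source.size - grain_len)   # where to read
    dst_pos = rng.integers(0, out.size - grain_len)      # when to play
    grain = source[src_pos:src_pos + grain_len] * window
    out[dst_pos:dst_pos + grain_len] += 0.05 * grain     # overlap-add
```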

In essence, by the end of the 1980s, it was clear that computers would play an integral role in the future of music and society. However, the full extent and speed of this evolution were yet to be fully realized.

1990s: Enter The Matrix

The spread of computers, coupled with their increasing power and the rise of the internet throughout the 1990s, fueled the global spread of digitization. As technology advanced, everything that was once analog rapidly transitioned to the digital domain. The efficient substructure of the binary environment streamlined many previously time- and labor-intensive processes. While the computers of that era still had a long way to go in terms of power and memory, the impact was momentous and the potential undeniable.

Tracing the development of computer music practices throughout the 1990s, several distinct directions emerge. Some developers focused on advancing the digital audio workstation (DAW) framework, effectively designing a virtual studio infrastructure that bridged the familiar workflows of the tape studios of the past with the novel possibilities of digital systems. Others developed dedicated instruments and sound processors—plugins—to be used alongside and within these virtual studio environments. Meanwhile, those interested in the experimental potential of computer music could dive deeper with lower-level visual and text-based programming environments, which enabled users not only to compose and perform music but also to create their own tools, instruments, and systems from fundamental building blocks. On top of that, a variety of digital hardware interfaces for computer music continued to proliferate throughout the decade.

These patterns of development reflect the commonly understood distinction between computer music as a specific field and the general use of computers for music production. What differentiates computer music is not a specific aesthetic or form but rather the intent, methods, tools, and techniques used. While programs like Pro Tools (1991), Cubase (1989), and Digital Performer (1990) aimed to recreate and enhance the traditional studio experience in virtual space, programs like Max and Csound, as well as a great number of idiosyncratic programs written by composers, catered more to experimental work. The defining feature of computer music in the classical sense is the search for unique ways in which the computer can push the existing boundaries of musical experience.

A great example of this techno-artistic prowess can be found in the work of the composer George E. Lewis, who had been exploring the intersections of experimental, classical, jazz, and electronic music since the 1970s. Perhaps one of the most well-known computer music pieces created by Lewis was Voyager (1993). Programmed in the Forth language, Voyager was an interactive music system designed to explore the potential of machines as improvisers. The system "listened" to a musician in real time, analyzing elements like pitch, rhythm, dynamics, and timbre using a set of algorithms. Based on this data, it generated meaningful, autonomous responses, influencing the performer’s decisions. While Voyager was not the first system to use artificial intelligence in music, it was one of the earliest and most significant examples of blending human and machine creativity.

However, an ethos of experimentation and exploration of computers as musical instruments was no longer solely an academic pursuit. Computers were central to the evolution of novel kinds of electronic music, some leaning toward the popular, others gravitating to more complex and dynamic musical forms. Inevitably, over the years the distinctions within this spectrum grew ever blurrier, with many artists challenging and rejecting the divisions between academic, popular, and subculture altogether.

Among the first to occupy this liminal space were a set of artists collectively placed under the so-called "Intelligent Dance Music" (IDM) umbrella. The term emerged in the early 1990s as a descriptor coined by music journalists to categorize a wave of innovative electronic music that bridged the experimental approaches of academia and the avant-garde with the visceral energy of club music and the emotional resonance of popular music. Originally called "intelligent techno," the term eventually expanded to encompass a wide range of genre-defying electronic music. However, the label was met with resistance from many artists within the scene, who found it problematic for implying either that other types of music were unintelligent or that one necessarily had to dance while listening to this kind of music. Nevertheless, IDM gained widespread use among critics and listeners, especially following the release of the Artificial Intelligence album series by Warp Records (1992-1994), featuring early works by artists like Richard D. James (as Polygon Window), Autechre, The Black Dog, and B12.

One other interesting creative development in computer music to emerge in the late 1990s is live coding, a practice that explicitly presents the act of programming as performative. Ushered in by artists such as Adrian Ward and Nick Collins, and by programming environments like SuperCollider and later ChucK, live coding involves writing and modifying algorithms in real time, with the code often projected onto a screen so audiences can witness the creative process unfolding. Live coding's open-ended and improvisatory nature aligns perfectly with the "hacking" ethos of computer music, while simultaneously challenging traditional notions of musical authorship and composition. The movement spread, leading to the formation of international communities and events such as Algorave, where "coding as live performance" is celebrated in spaces that span from academic conferences to underground electronic music events.

As artists explored and devised new creative methods, researchers and developers continued to improve existing tools and invent new ones for the proliferation of the field. The modular, object-based paradigm introduced by Mathews in MUSIC remained popular for its versatility. By the mid-to-late 1990s, a diverse selection of tools had evolved the concept in various directions.

Kyma

In 1990, composer and inventor Carla Scaletti, along with engineer Kurt J. Hebel, founded Symbolic Sound Corporation as a spinoff from the recently defunct CERL Sound Group at the University of Illinois. The duo had been developing Kyma, a computer language and hardware solution, since 1986. When their previous research facilities closed, they saw an opportunity to continue developing Kyma as a commercial product. From the outset, Kyma aimed to bridge sound synthesis and music composition without boundaries. The use of dedicated DSP hardware ensured efficient, uninterrupted sound processing—a necessity at the time—and remains a core feature of Kyma, which continues to evolve and is cherished for its unique synthesis possibilities and exceptional sound quality.

Most developers, however, focused strictly on the software side, anticipating the rapid increase in power of personal computers. Miller Puckette, the original creator of Max, introduced his free, open-source alternative Pure Data (PD) in 1996. Like Max, PD utilized a visual programming interface where objects and messages are connected via virtual cables to define a patch’s data flow. Unlike Max, which initially relied on external hardware for signal processing, PD was designed to handle all audio tasks using the computer’s internal CPU.

In 1997, long-time Max developer David Zicarelli acquired the publishing rights for the program from Opcode Systems and founded Cycling '74 to continue its development. That same year, the company integrated Puckette’s signal processing code into Max, resulting in the first major extension of the program: MSP (Max Signal Processing or Miller S. Puckette, if you will). From that point on, the software became colloquially known as Max/MSP, enabling users to create complex synthesizers and effects processors within the same environment.

In Europe, other virtual modular environments emerged. The Berlin-based software company Native Instruments was founded in 1996 by Stephan Schmitt and Volker Hinz. They introduced their own virtual modular synth, Generator, in 1997. Unlike the DSP-based Clavia Nord Modular, Generator allowed users to run their patches natively on a PC. A year later, the software was expanded and rebranded as Reaktor, which remains a widely used platform.

While visual programming was an effective way to transition musicians from hardware to the virtual world, not all developers followed this design archetype. In 1996, James McCartney released SuperCollider, which, while object-oriented and inspired by modular synthesis, retained the speed, lightness, and flexibility of text-based programming. Initially sold commercially and later released under an open-source license, SuperCollider quickly became a favorite tool for artists, scientists, and researchers focusing on sound synthesis, interactive composition, and acoustic research. The software's structure, divided since version 3 into a programming language (sclang) and a sound engine (scsynth), mirrored the separation of sound generation and control found in earlier systems. Its open-source nature fostered a vibrant global community that continues to contribute libraries, tools, and resources, ensuring its ongoing evolution.
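One practical consequence of that split is that scsynth can be driven by any program that speaks OSC, not just sclang. The sketch below, using the third-party python-osc package, assumes a server that has already been booted from sclang (so the stock "default" SynthDef is loaded) and is listening on the usual port 57110; the /s_new command asks the server to create a new synth node.

```python
# A small sketch illustrating the sclang/scsynth separation: because scsynth
# is controlled entirely over OSC, any OSC client can stand in for sclang.
from pythonosc.udp_client import SimpleUDPClient

server = SimpleUDPClient("127.0.0.1", 57110)   # scsynth's usual port

# /s_new: create a synth node from a SynthDef.
# Arguments: defName, nodeID (-1 = let the server pick), addAction, target,
# followed by control-name/value pairs.
server.send_message("/s_new", ["default", -1, 0, 0, "freq", 440.0, "amp", 0.2])
```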

By the end of the 1990s, computers had firmly established themselves as indispensable tools for musical creativity, setting the stage for even deeper integration between digital technology and music in the fast-approaching 21st century.

2000s: Bringing Back Touch

Computer music software continued to evolve rapidly in the new millennium. SuperCollider reached its stable version 3 in 2003, Pure Data underwent an overhaul with PD-extended in 2002, and Max/MSP expanded with the Jitter package in 2003 for real-time video processing. New developments also emerged, such as Propellerhead's Reason (2000) and Ableton Live (2001), which itself was originally prototyped using Max in the mid-90s by Robert Henke, Gerhard Behles, and Bernd Roggendorf.

Ableton Live Alpha (Source: CDM)

Live deserves a dedicated mention because its unique design, perhaps to a greater degree than anything before it, widely popularized the idea of the computer as a performance instrument. Built around the simple yet powerful premise of dynamically launchable audio loops and MIDI patterns, Live fostered a generation of novel approaches to performing electronic music. While it provided the architecture for creating loop-based performance sets, the addition of a MIDI controller became essential. This, in turn, prompted many companies, including Akai and Novation, to focus development on MIDI controllers specifically tailored to Live's user base.

The increased focus on interaction and control defines a major vector in computer music in the 21st century. With machines now equipped with state-of-the-art sound synthesis and composition software, the physical interfaces for interacting with these technologies became a primary focus. MIDI keyboard controllers and interfaces had existed in various forms since the 1980s, but it wasn’t until the new century that MIDI control devices could be easily connected to computers via USB ports. Released in 2001, the Roland PC-300 was among the earliest keyboard instruments of such design, soon followed by numerous other offerings from a range of manufacturers. USB MIDI significantly simplified connectivity for musicians and quickly became the standard.

While the conventional keyboard layout dominated, the desire for alternative ways of interacting with music gained momentum. In 2001, the first international conference on New Interfaces for Musical Expression (NIME) took place in Seattle, Washington. As the name suggests, NIME gathered a unique and dynamic group of people from around the world to present novel, often idiosyncratic, and occasionally groundbreaking electronic musical instruments and interfaces. The debut conference featured iconic presentations, including Max Mathews' Radio Baton and the Continuum keyboard developed by Lippold Haken, the same engineer who, back in the 1980s, had contributed to the Platypus, the CERL Sound Group DSP hardware on which Kyma was first developed. In later years, many more unique instruments were unveiled, including the Reactable (2005) and Yamaha's grid-based instrument Tenori-On (2007).

The notion of the laptop as a musical instrument was perhaps most directly manifested in the emergence of laptop orchestras—academic ensembles that reimagined the traditional orchestra by replacing conventional instruments with networked laptops and custom software. Pioneered by groups like the Princeton Laptop Orchestra (PLOrk), founded by Dan Trueman and Perry Cook in 2005, and later Stanford's SLOrk, these ensembles sought to redefine collaborative music-making by subverting established musical hierarchies and embracing digital tools. In these setups, each participant acts not only as a performer but also as a sound designer and coder, dynamically shaping their instrument's behavior in real time. Laptop orchestras attempt to blur the boundaries between composition, performance, and technology, creating a collective experience where musicianship, software design, and technological innovation merge. This approach highlights the growing convergence of physical and digital elements while pushing the social dynamics of music-making into uncharted territories. Emphasizing distributed creativity, these ensembles thrive on interaction—both between performers and between humans and machines.

Another significant milestone was the arrival of touchscreen technologies. Predating the mass-market smartphone by a few years, one of the first dedicated multitouch controllers for music was created by the French company JazzMutant, founded in 2002. The Lemur, as it was called, was unlike any control device that had existed before. Encased in rugged metal, the Lemur featured a 12" touchscreen that served as a customizable performance area. Users could construct unique controller environments using various modules, some conventional, like faders and dials, and others more unorthodox, like breakpoint editors and multiball x/y pads. JazzEditor, the software used to design interfaces, featured a scripting language that could impart physics-like behaviors on objects, such as bouncing and oscillating. Moreover, the Lemur supported both MIDI and the higher-resolution OSC (Open Sound Control) protocol, making it suitable for a wider range of applications than a typical MIDI control surface.
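The resolution difference is easy to see in practice: a basic MIDI control change is limited to 128 steps, while an OSC message carries a named address and a full floating-point value. Here is a tiny sketch using the third-party python-osc package; the address and port below are purely illustrative.

```python
# Sending a high-resolution control value over OSC.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)   # hypothetical receiver

# MIDI CC #1 would quantize this position to one of 128 steps;
# OSC can transmit the fader position as an exact float.
client.send_message("/lemur/fader1", 0.50392)
```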

Despite its revolutionary work, JazzMutant had a relatively short lifespan, largely due to the rise of smartphones and tablets. In 2010, production of the Lemur was discontinued, and the company ceased operations. However, the concept of the Lemur lived on for another decade. One of its prominent users, techno artist Richie Hawtin, co-founded the company Liine, which brought Lemur into the realm of rapidly spreading iOS and Android devices. While active development continued for a few years, alternative options like TouchOSC ultimately won out, and by 2022, Liine's Lemur was officially discontinued. It has recently been revived, however, by a company called MIDI Kinetics, and is once again available to users.

One more design that gained wide recognition during this period was the grid-style interface composed of equally sized and shaped silicone buttons or pads. Perhaps no device is more emblematic of this than the controllers created by the small upstate New York company Monome, led by the visionary duo Brian Crabtree and Kelli Cain. Committed to principles of open-source development, minimalist aesthetics, and community engagement, Monome launched its first grid controllers in 2006, with models ranging from 64 to 256 buttons. What set Monome devices apart was their utterly flexible and open design. Bypassing the MIDI protocol in favor of OSC (Open Sound Control), Monome grids were designed to be programmed by the user with software like Max/MSP to create unique applications. The grid buttons could serve various functions—on/off switches, sequencer steps, faders, loop points—all depending entirely on the user's design.

Monome Arc and Grid


This flexibility might have seemed daunting to musicians not skilled in audio programming, but this is where the community aspect played a crucial role. Developing custom applications was an option, not a requirement, and Monome provided example applications to get users started. The company also fostered a platform for users to share their creations and discuss artistic and technical topics. This vibrant community over the years grew into the Lines forum—one of the most inspiring online spaces for electronic and experimental music and audio technology.

Monome’s product line eventually expanded. In 2011, the company released Arc, which complemented the grid layout with a pair or quartet of fully programmable encoders. Later, Monome further evolved its ecosystem by branching into hardware modular synthesis, introducing the concept of a portable sound computer with devices like Aleph (2013), a collaboration with Ezra Buchla, and later with Norns (2018).

The 2010s saw an even greater expansion in the landscape of music controllers, with options ranging from wearable technology like the Mi.Mu gloves (2014) and the Genki Instruments Wave Ring (2018) to expressive touch controllers like the Roli Seaboard (2013) and Expressive E Touché (2017). Instrument-based systems such as Sunhouse Sensory Percussion and the Jamstik Studio also emerged. Several of these controllers adopted the MIDI Polyphonic Expression (MPE) extension to the MIDI protocol, designed to enhance the expressive potential of electronic instruments by enabling per-note control of pitch bend, pressure, and timbre. This focus on expressivity, realized in modern instruments like the Expressive E Osmose and the Haken Continuum, remains a core focus of development in the field.
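The core trick of MPE is to give each sounding note its own MIDI channel, so that pitch bend and pressure messages, which are channel-wide in standard MIDI, can act on individual notes. A minimal sketch using the mido library follows; the channel assignments are illustrative.

```python
# A minimal sketch of the MPE idea: each note gets its own MIDI channel
# (here channels 2 and 3, i.e. mido channel indices 1 and 2), so bend and
# pressure can be applied per note rather than per keyboard.
import mido

messages = [
    mido.Message('note_on', channel=1, note=60, velocity=90),   # first note
    mido.Message('note_on', channel=2, note=64, velocity=90),   # second note
    # Bend only the first note upward; the second is unaffected because
    # the bend is scoped to that note's channel.
    mido.Message('pitchwheel', channel=1, pitch=4096),
    # Per-note pressure (channel aftertouch on the second note's channel).
    mido.Message('aftertouch', channel=2, value=80),
]

for msg in messages:
    print(msg)
# With real hardware these would be sent to an MPE-capable synth, e.g.:
#   with mido.open_output() as port:
#       for msg in messages: port.send(msg)
```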

This reflects an even broader trend in music technology: the drive for complete integration of physical and digital elements. Despite the exponentially growing power of computers, which enables them to handle even the most demanding music and sound-related projects, musicians still crave the immediacy and intuitiveness of hardware instruments. The next logical step was to embed computational power directly into musical instruments and devices. Affordable microcontrollers and single-board computers such as the Arduino (2005) and Raspberry Pi (2012), along with readily available electronic components, significantly streamlined the task for both established brands and emerging independent synth makers. This convergence of technology and traditional analog synthesis techniques, combined with new ideas, ushered in a decade of modular synthesis revival, music hardware innovation, and a surprising rise in sonic experimentation.

Where We Are And Where The 'Hello World' Are We Going?

Today, ideas, technologies, methods, and even sonic aesthetics that once occupied distinct historical timelines and stylistic frameworks are coexisting and merging into an eclectic hybrid ecosystem. This has created a fertile environment for diverse outcomes, from companies like Elektron advancing digital sequencing and sampling hardware with instruments like the Machinedrum (2001) and Octatrack (2010), to Eurorack modular pioneers like Make Noise, who explicitly brought musique concrète and microsound practices into the modular format with devices like the Phonogene (2010) and its successor Morphagene (2017).

The thread of DIY and open-source philosophy has also produced a lot of exciting developments. Some devices, like the aforementioned Monome Norns, the Orthogonal Devices ER-301, or even the Empress Effects ZOIA, offer users a great degree of programmability within a standalone hardware format. Other companies focus on even more generalized solutions, like Bela and Electrosmith, which offer audio development boards for the easy development of electronic and electroacoustic instruments and sound processors. Crucially, both platforms are designed to be programmed using frameworks that are more accessible to musicians and composers, such as Pure Data and Max's latest extensions, Gen and RNBO.

Undoubtedly, one of the most significant technological developments of recent years has been artificial intelligence. Machine listening, machine learning, computational creativity, AI-assisted composition, mixing, mastering, and design—we are at the precipice of a rapidly transforming world, and, as with any change of such scale, we are faced with dilemmas of an ethical nature. On one hand, AI tools are driving breakthroughs in fields ranging from music education and composition to performance and sound design, even extending into bio- and eco-acoustics, where AI is used to decode communication systems and music in non-human species, from whales to bees. The range of creative options is stupendous. To take an example that is close to home, artificial intelligence algorithms have laid the foundation for novel approaches to sound synthesis. The open-source neural synth engine RAVE (Realtime Audio Variational autoEncoder), developed by Antoine Caillon and Philippe Esling, and the Fluid Corpus Manipulation (FluCoMa) project are examples of tools that tackle that particular creative problem.
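As a rough indication of how such a tool might be used, the sketch below assumes a pre-trained RAVE model exported as a TorchScript file (the file name is hypothetical) exposing encode and decode methods, which is how RAVE exports are typically structured; consult the project's documentation for the exact interface of any given model.

```python
# A hedged sketch of neural resynthesis with a pre-trained RAVE export.
import torch

model = torch.jit.load("vintage_percussion.ts")   # hypothetical model file
model.eval()

sr = 44100
audio = torch.randn(1, 1, sr * 2)   # stand-in input: (batch, channels, samples)

with torch.no_grad():
    z = model.encode(audio)             # compress audio into latent trajectories
    z = z + 0.3 * torch.randn_like(z)   # nudge the latents to vary the timbre
    out = model.decode(z)               # resynthesize audio from the latents
```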

On the other hand, artificial intelligence poses serious perils to our existing relationship with music, which has been evolving for centuries. Tools like Suno and AIVA, capable of generating complete tracks from simple prompts, are not only potentially disruptive to the livelihoods of artists; they also fundamentally affect our sense of the value of music and human creativity, and raise concerns about the homogenization of music and the potential loss of cultural and emotional depth. Although these tools cannot yet match the richness and nuance of human creativity, they already excel at replicating formulaic musical models, which, if we're being honest, are prevalent in the world.

While it’s impossible to predict the exact direction in which music will evolve, it’s clear that AI will play a vital role in shaping this journey, transforming how we relate to and interact with music and sound. In his book The Musical Human: A History of Life on Earth, musicologist Michael Spitzer explores the complex relationship between humans and machines, highlighting the blurred boundaries where humans can often exhibit machine-like behaviors while machines increasingly take on traits traditionally associated with humanity—music being a prime example. Spitzer suggests two imagined futures: a musical transhuman and a musical posthuman.

The musical transhuman represents a future where humans are enhanced with cybernetic implants designed to amplify our senses beyond their natural capacity—potentially expanding our frequency range or enabling synesthetic listening experiences where sounds are perceived as tastes or smells. The musical posthuman scenario presents a more radical vision where AI completely takes over the domain of music. In this future, technology becomes the primary creative force, generating endless streams of algorithmically composed music in any style or genre.

As technology continues to evolve, the challenge for artists and developers lies not merely in pushing the boundaries of what is possible but in doing so in a way that acknowledges and elevates the human experience. Much of the future depends on how we balance technological innovation with the values that make music meaningful. Music is an enduring force that, as evidenced by a multitude of recent discoveries in bioacoustics, transcends human expression, constantly evolving and adapting. In this light, AI-generated music becomes yet another shape that musical expression takes—perhaps even one that isn't entirely about us, something that will take an entirely new path. What truly matters, and what we should pay attention to, is ensuring that we carry our uniquely human ways of engaging with music into this future of co-existence with musically intelligent machines.