Hidden in an eggshell-white, low-ceilinged basement in Hackensack, New Jersey, Matthew Whitaker makes magic.
He is surrounded on all sides by an arsenal of music-making machines: a MIDI controller keyboard, four keyboards stacked in racks of two, and an organ, while a bass cabinet, guitar amp, drum kit, and percussion section fill the rest of the cramped space. These all run through five interfaces that send 40 signals – eight each – to the 21-year-old jazz musician’s computer, which sits at the helm of this tightly organized chaos.
While chatting during a video call, Whitaker flutters around a sea of dials, sliders, LED displays and keys, like a pilot in the cockpit of a jumbo jet. A turn of a knob here, a click there, then his fingers turn to their true love: the piano keys. He grins as they race across the keyboard, producing a raw, thunderous run. The piano is an extension of Whitaker: when he is particularly excited or laughing, his hands dart over the keys, creating short melodic trills and riffs.
Whitaker records and produces his music in this basement. But he doesn’t use home production software or mixing boards in the same way that sighted musicians do. He is blind and has been since he was a baby – a result of complications from being born prematurely at 24 weeks.
Now 21 and entering his fourth year at the esteemed Juilliard School in New York City, he is an established recording and performing artist, a Gen Z jazz prodigy with three full-length records to his credit and collaborations with jazz veterans such as Christian McBride, Rhoda Scott, and the late Dr. Lonnie Smith. He is driven and values control over his creative process, but for most people, recording music in the digital age is as much a visual process as a musical one. Playing, recording, mixing – all of it relies on visual cues, especially on a computer screen.
So how does Whitaker do without sight? With a nod to Sinatra, Whitaker does it his way.
His parents, Moses and May Whitaker, say Matthew played the piano before he could speak. The story is almost folklore: he was 3 years old when he started playing simple melodies on the keys with both hands. By 11, he was performing in concert halls around the world.
Whitaker is used to taking on challenges that other musicians don’t. When asked to do a quick piece for two dancers from the American Ballet Theater, the stress kicked in: “How do I compose for dancers? It’s not something I can see and match with what they’re doing,” he worried. The choreographer told Whitaker the feel, sound and movement he wanted the piece to have. It took a few days, but he composed a song that met the challenge by exploring sounds he hadn’t used before.
Whitaker started home recording on GarageBand before moving to the music production app Logic Pro around 2015. During a Zoom call from his basement studio in May, Whitaker shares his screen to demonstrate how he records. He’s working on a song idea that afternoon, so he opens a session in Logic with a track for his MIDI keyboard, which connects to computer software for its sounds. He sets the tempo to 192 beats per minute and hits record. After Whitaker counts off one bar, his hands go to work, pumping out a dizzying bouquet of complex chords and melodies. It’s impossibly beautiful, light and bold, the kind of thing you could imagine feeling on top of the world, stepping out at a cutting pace on a sunny day. A gentle breeze blows through your hair. You’re in New York City, honey.
The audio coming from Whitaker’s computer, however, is the key to all this beauty. Before and after the piano flourishes, VoiceOver, Apple’s built-in screen reader, speaks to Whitaker at lightning speed, letting him navigate and manipulate his computer.
VoiceOver is integrated into his Mac Studio’s operating system, software, and apps. It provides a verbal roadmap of the page, reading out which buttons and labels are on the screen and what information they contain, descriptions of images and objects, and which tools are in a window and how they function.
VoiceOver and similar technologies rely on two key processes. The first is information analysis, which uses machine learning to identify the most relevant data on a screen. The second is speech synthesis, which converts that information into language that best describes it: a phonetic transcription is assigned, the text is divided into units such as sentences and phrases, and the transcription is converted into a digital voice.
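The two-stage flow can be sketched in a few lines of code. This is a toy illustration of the idea, not Apple’s implementation; every name here (`UIElement`, `analyze`, `synthesize_text`) is hypothetical.

```python
# Toy sketch of a screen reader's two stages:
# 1) analysis: pick out the screen elements worth announcing
# 2) synthesis: compose the phrase a text-to-speech engine would speak
from dataclasses import dataclass

@dataclass
class UIElement:
    role: str        # e.g. "button", "slider"
    label: str       # e.g. "Record"
    value: str = ""  # current value, if any

def analyze(elements):
    """Stage 1: keep only elements with something to announce (a label)."""
    return [e for e in elements if e.label]

def synthesize_text(elements):
    """Stage 2 (text half): build the sentence handed to the voice engine."""
    parts = []
    for e in elements:
        phrase = f"{e.label}, {e.role}"
        if e.value:
            phrase += f", {e.value}"
        parts.append(phrase)
    return ". ".join(parts) + "."

screen = [
    UIElement("button", "Record"),
    UIElement("slider", "Track volume", "-0.2 decibels"),
    UIElement("image", ""),  # unlabeled, like an image with no alt text: skipped
]
print(synthesize_text(analyze(screen)))
# → Record, button. Track volume, slider, -0.2 decibels.
```

The real system then converts that sentence into phonemes and audio; the sketch stops at the text, which is the part a user like Whitaker tunes for verbosity and speed.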
Whitaker uses VoiceOver across all of his devices through a comprehensive and customizable set of gesture- and touch-specific actions. For example, a tap with four fingers near the top of his iPhone screen selects the first item on the screen. A three-finger tap makes VoiceOver relay more information, while three-finger slides scroll up or down. To return to a previous screen, he uses two fingers to quickly draw a Z shape on the screen. “Depending on the interface, I have to interact a lot with sliders and buttons and various other elements, while a sighted person can just drag or click or tap or whatever,” Whitaker says.
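Under the hood, a gesture set like this amounts to a lookup table from (gesture, finger count, location) to an action. The sketch below models the gestures described above; the table and dispatch function are invented for illustration, not Apple’s API.

```python
# Hypothetical gesture-to-action table modeled on the VoiceOver
# gestures described in the text. Keys are (kind, fingers, where).
GESTURES = {
    ("tap", 4, "top"): "select first item on screen",
    ("tap", 3, None): "speak more information about current item",
    ("slide", 3, "up"): "scroll up",
    ("slide", 3, "down"): "scroll down",
    ("z-shape", 2, None): "go back to previous screen",
}

def handle(kind, fingers, where=None):
    """Dispatch a gesture; unrecognized ones fall through to the app."""
    return GESTURES.get((kind, fingers, where), "pass touch to the app")

print(handle("z-shape", 2))  # → go back to previous screen
print(handle("tap", 1))      # → pass touch to the app
```

A single-finger tap isn’t in the table, so it falls through, which mirrors how a screen reader layers its gestures on top of ordinary touch input rather than replacing it.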
Whitaker started using VoiceOver in 2011 when his father got a Mac. Over the years, Whitaker has refined the way he handles the program: he has set the verbosity — how many words are used to describe certain things — to his liking, and the speech rate to its maximum. For Whitaker, it’s efficient and fast. To an untrained user, it sounds like a voice recording being fast-forwarded. Whitaker puts a finger on a round plastic knob on his keyboard. “If I touch one, it just reads out when I move them,” he says. He turns one to adjust the volume of his piano track and VoiceOver races on, announcing the level changes at a dizzying pace: “Minus 0.2 decibels, 5.4 decibels, 6.2 decibels, 3.7 decibels.”
“I prefer it really fast,” Whitaker laughs. “I practice a lot with the interface, so I can just zoom through everything.”
Whitaker says every utility is different, so the software on his MIDI keyboard is different from VoiceOver. But he’s gotten to know the layout of his equipment and recording software so well that he sometimes navigates through them with the accessibility features turned off completely. Sometimes the chatter can be more than he needs; Whitaker mumbles “Quiet,” while VoiceOver babbles over his own voice.
When recording drums or percussion across the room from his computer, Whitaker uses a companion app on his phone that allows him to control Logic remotely. “That way I can record there myself and push the buttons,” he says, “so I don’t have to worry about people doing it for me.”
In his basement studio, surrounded by wires and screens and instruments, Whitaker maintains control and agency in a world built in a way that often robs him of those things. Whitaker’s father wired up much of the hardware — the interfaces, MIDI controllers, and computers — and connected it to the software. Now the younger Whitaker is the king of this cellar: he knows every square foot of carpet, every knob, slider, and key.
But Whitaker’s love is music, not technology. His work is extensive and prodigious; “Journey Uptown,” the opening track from his 2021 album Connections, makes this amply clear. It turns on a dime between key signatures, tempos and moods with cinematic flair and texture. It is loose and playful jazz, but also melodic and clear. Whitaker creates whole worlds through his piano. The tools Whitaker uses are incidental. They act as a conduit for his talent and musicianship — things that have already caught the attention of scientists studying his brain to find out how the hell he’s so damn good at what he does.
Programs can be encoded, data can be arranged, wires can be connected. Whitaker’s brilliance cannot be quantified as a line of code or the electrical signal flow from an instrument to a processor. He uses the same instruments as any other composer. He just makes music his way. “It’s different because of the way we as blind individuals control everything,” Whitaker says. “But we achieve the same result, you know?”