"Going to Meet the People I Want to Meet!" Part 6: Takuya Fujita from Dentsu Inc. Event & Space Design Bureau went to meet composer and artist Keiichiro Shibuya, who has been releasing cutting-edge electronic music works both domestically and internationally. Fujita, a long-time fan of Shibuya's music, sought to understand the artist's philosophy—which extends beyond music to collaborations with various artists and sound space production. The result was a thrilling dialogue.
Interview & Editing: Aki Kanahara, Dentsu Inc. Event & Space Design Bureau

(From left) Mr. Fujita, Mr. Shibuya
Technology to infinitely expand "Scary Beauty"
Fujita: You've also done Vocaloid opera. How do you approach technology? Could you share your stance when incorporating it into your work?
Shibuya: For example, a phrase I've often used lately in meetings for new projects in Paris is "Scary Beauty." It's uncanny yet beautiful, disturbing yet moving. I want to use technology to expand in this direction, and I see potential there. Obviously, it's not about using new technology for its own sake; what matters is what you want to express. Japan might be different, but in Europe especially, works that don't clearly state what they want to say, for better or worse, get a "What is that?" reaction. And honestly, they deserve it (laughs).
However, when technology is central to the expression, it tends to become "cool." Technology is often used solely to create minimalist or "cool" art, and it's actually well suited to that, since its basic operations are binary code and copy-and-paste.
But "cool" gets overwritten easily and quickly becomes outdated. That's because the very standard of precision that defines "cool" exists on the premise of constant updating. Consequently, the combination of technology and coolness, where precision dominates, quickly becomes obsolete. In contrast, there's a certain power to the uncanny, which feels like it's been overlooked for the past 50 years or so. I think it's worth revisiting, using technology as the axis. For example, Picasso's paintings aren't exactly something you'd want hanging in your home, but they retain a certain uncanny quality and power. Historically, this direction in the relationship between technology, art, and music tended to lean toward chaos or noise. But I feel like something different is becoming possible now—music that's warped and twisted, uncanny yet beautiful. That's why I want to make a solo album right now (laughs). It's like I've sensed the emergence of a world worth immersing myself in.
Fujita: Indeed, when I saw your Vocaloid opera "THE END," I also sensed the eerie atmosphere reminiscent of Japanese ghost stories.
Shibuya: People said "THE END" felt Japanese even when we performed it in Paris. Maybe it's more ghost story-like (laughs). I hadn't consciously aimed for that, but I think it's not bad.
Two AI-powered robots create music through mutual evolution
Shibuya: That's the direction AI is heading too. We've been working on using androids as vocalists with orchestral accompaniment or computer-generated backing tracks, and we might actually have something ready around year-end. But ultimately, I think it won't be truly interesting unless we build two robots, lock them in a room, and equip them with AI that evolves through interaction. I know this is incredibly difficult, but when music starts emerging from their relationship, it becomes an entirely new form of art.
Fujita: Composition and music without human intervention...
Shibuya: Exactly. People listening to musical works born from relationships where there's no room for human involvement.
Fujita: So your ideas, Shibuya-san, come through in decisions about what kind of robot to build and how to build it?
Shibuya: I think there are limits to robots that look exactly like someone. Hatsune Miku is strong precisely because she's a character, and a robot made to resemble a real person can never become a character. For example, if you had to choose between a T-shirt of a Lady Gaga robot and a T-shirt of Lady Gaga herself, you'd buy the one of the actual person, right? That's what being a "character" means.
So, I don't want to make a robot that's just a perfect human replica. I want to create something that's half-object, half-human—something that transcends the boundary between machine and human. I think it would be fascinating to have a concert where such a robot sings solo, accompanied by a human orchestra.
Programming can also have a soul
Fujita: Deep learning has been a hot topic lately, hasn't it?
Shibuya: Deep learning is essentially about how to handle big data, right?
When we did "MEDIA AMBITION" the other day, I had the University of Tokyo run deep learning on my sound files from the past ten years. They created noise from it, but when talented students do it, they produce interesting noise. That's why I believe programs can have a soul.
Fujita: So it's about having a certain directionality or the ability to provide direction, right?
Shibuya: Exactly, that absolutely emerges. Compared to noise intentionally generated from a single formula, the data created by deep learning became something complex, or rather, something with a strange coexistence of emptiness and complexity, unlike anything I had quite heard before. And while we can do this kind of cutting-edge work, I want to take it further and bring it down to the mass level, or make it more versatile.
I absolutely don't believe it's enough for only those who understand to get it. Since I'm generally given freedom, including with advertising music, I play mass-market pieces at my own concerts and sometimes include them on albums. But nobody wants to listen to the kind of mass-market music typically churned out for advertising jobs. Creating music requires moments where you connect with an intuition that isn't your own, something that comes down from above. Then you expand the space around that connection and tune its shape and color.
Fujita: The creations you make, Shibuya-san, feel like they hit you right in the gut emotionally. The message gets through, of course, but even before that, it feels like your emotions are stirred, or grabbed. How much calculation goes into that?
Shibuya: Sometimes there's an impulse, and then the calculation follows at incredible speed. At that moment, my veins throb, and I think, "Ah, it's coming, it's coming, it's coming..."
Noise music transcends human cognition at a certain stage
Fujita: Could you also tell me about your approach to commissioned creations, like the music for commercials or the drama "SPEC," which I also love?
Shibuya: For me, there's no such thing as creation without any conditions whatsoever. Even with my solo work, the question of how it will be presented absolutely follows me. I'm fortunate enough not to get requests like "make it feel like this" or "make it like that song," so I'm generally allowed to create pretty much as I like. So, there's hardly any difference, really.
For dramas or commercials, catchy tunes are better, right? But when I'm writing something catchy, I can't let myself think of famous piano pieces, or it would verge on plagiarism. So I have a trick: I call to mind a snippet of some totally unrelated, great black-music track, let it fade from my head, and then play the piano. That's when good things happen (laughs).
Fujita: I can't help but feel there's something special in the music that comes from you, Shibuya-san. You create freely, yet listeners get hooked.
Shibuya: Commissioned work doesn't bother me at all.
Fujita: We're so used to constraints, or rather, we can't function without them. We've become creatures who can't move without getting a briefing from the client.
Shibuya: Having people in that role is fine, but when you work with artists who just passively accept constraints, you end up with something diluted. Compared to artists overseas, many Japanese artists have an unusually strong sense of existential crisis, so they're very compliant with orders. They're easy to tame, so a lot of the music you hear feels spoiled. But for advertising, the unspoiled stuff is definitely better, right? Absolutely.
Fujita: Absolutely better. That alone becomes value, and it's easier to understand. Even with super long-running commercials like "Let's Go to Kyoto," it's amazing how instantly recognizable Shibuya-san's sound is.

2015 Cosmogarden
Shibuya: I got several messages saying they recognized the recent "Let's Go to Kyoto" piece within the first few seconds. I've done quite a bit of academic work, but that world has a lot of constraints, right? Like instrumentation. So having constraints is just a given. Also, when I listen to music, I find it hard to just be a pure listener. When I hear new music, I inevitably start thinking about how it's constructed, trying to figure out its structure.
Fujita: How do you think the way music is listened to will change going forward?
Shibuya: I think it will polarize into things that are consumed very easily and things that aren't. The internet is definitely the key. Because once streaming allows you to listen to DSD, the highest current audio quality standard, without conversion, owning a decent stereo system becomes meaningful again. Conversely, even computer speakers can deliver perfectly good sound, so there's no need to own files anymore.
Deep learning, like the DCGAN program, still can't process sound data while maintaining high quality. But conversely, we're exploring with Professor Takashi Ikegami at the University of Tokyo whether it's possible to compress it down to MP3 quality, let the computer perform calculations in a virtual space, and then just boost it back up to high quality at the final stage. Creating that kind of virtual domain within a computer is fascinating. Additionally, we have ideas for using deep learning's auto-generation capabilities in the relationship between architecture, public spaces, and music. We plan to pursue this as well.
The key question remains: what can we achieve when we fully leverage computers to create music? For example, when I listen to the music currently paired with media art, it all sounds like pretty soft synths and beats, basically a slightly more glamorous rehash of electronica. I find that rather uninteresting; it doesn't feel like it's fully tapping into the computer's potential. We really need to create sounds and forms that only computers can produce. I play piano myself, and if needed I'll write for orchestras too. I do everything. If we don't do that, there's no point in using computers.
So, when you think about music with computers at the core, the distinction between noise and musical sound, or melody, becomes completely irrelevant. They're parallel. Nowadays, music with multiple layers is becoming less common. Instead, we're seeing more music where a single sound carries a lot of information and density, and is listened to in a way that feels pleasurable. I think this is influenced by and intertwined with other current art forms, technology, and the information environment. So it's important to recognize that music isn't just a standalone issue; it's part of a larger framework within culture. And what flows through that framework is technology, a common language.
Fujita: The discussion about robots and AI is precisely about that—music we haven't heard yet, sounds that haven't been played. We're waiting for their emergence. I absolutely love Shibuya-san's music, so I was incredibly nervous today, but it was a truly luxurious time. Thank you so very much!
<End>