It is said that by 2045, computers will surpass human intelligence, reaching the technological singularity. In January 2015, NHK aired the five-part NHK Special series "NEXT WORLD: Our Future." To explore future society and the nature of humanity, the production team visited and interviewed scientists and researchers worldwide. The series also drew attention as a new form of television content for the internet age through its collaborations with creators such as Sakanaction, Rhizomatiks, and UNREALAGE, and it was later published as a book by NHK Publishing. For this Design Talk, program directors Tomotaka Okada and Tatsushi Tachibana took the stage along with producer Toru Ogawa. They held a session with Tomonori Kagaya, a business development planner with deep technology expertise, and Yoshimitsu Sawamoto, a CM planner and creative director representing Dentsu Inc. In two parts, we present their discussion of future technology and expression, each shedding light on the other's areas of interest.

(From left) Mr. Ogawa, Mr. Tachibana, Mr. Okada, Mr. Kagaya, Mr. Sawamoto
"Setting Sail Toward an Uncertain Future"
Are you ready?
Kagaya: Today, through the production process of "NEXT WORLD"—a five-part series broadcast on NHK last January—we'll explore how the environment surrounding us might change in 30 years.
Ogawa: For the past three years, I've been responsible for planning and developing next-generation TV platforms and content in the Internet division, including experiments with simultaneous online streaming. "NEXT WORLD" was a five-part series. Each episode ended with the line, "Everyone, are you ready?" inspired by the theme song lyrics from Sakanaction: "Setting course for an uncertain future." The message was: There's no point in being afraid of the overwhelming changes coming in the 21st century. Let's prepare to embrace them positively and enjoy them. Now, I'll share how the directors of episodes 1 and 3 envisioned the future. So, everyone, get ready.

Okada: I was in charge of the first episode, themed around artificial intelligence (AI), titled "How Far Can We Predict the Future?" While we've consistently produced programs exploring how science and technology transform our civilization, daily lives, and society, the modern scientific world is filled with vast unknowns. What does it even mean to understand intelligence? Where exactly is the boundary between the body and the brain? We don't know the answers to these questions. Within this context, "NEXT WORLD" aimed to realistically, yet optimistically, depict the future 30 years from now. Can AI truly surpass human intelligence? If so, how would our lives change? We structured the program around Ray Kurzweil's "Singularity Hypothesis" – the idea that by 2045, AI will surpass the collective intelligence of all humanity.
As we researched, we discovered that around 2014 a technological revolution occurred in AI with the emergence of "deep learning," in which computers learn, discern, and imitate on their own. In the US, building on these breakthroughs, predictions more accurate than those made by humans are becoming a reality. Take crime prediction, for example. In cities across the US, including New York and Chicago, AI predicts daily which areas are likely to see specific types of crime. In Santa Cruz, California, an early adopter, implementing this system led to a 50% increase in arrests and a 20% decrease in crime rates.
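The deployed systems are proprietary, and the program doesn't describe their internals, but the underlying idea of forecasting from historical incident patterns can be illustrated with a deliberately naive frequency model (all grid-cell IDs and data below are hypothetical):

```python
from collections import Counter

def hotspot_forecast(incidents, top_k=3):
    """Rank grid cells by historical incident counts (a naive frequency model)."""
    counts = Counter(cell for cell, _ in incidents)
    return [cell for cell, _ in counts.most_common(top_k)]

# Hypothetical incident log: (grid cell, crime type)
history = [
    ("A1", "burglary"), ("A1", "theft"), ("B2", "theft"),
    ("A1", "burglary"), ("C3", "assault"), ("B2", "theft"),
]
print(hotspot_forecast(history, top_k=2))  # highest-frequency cells first
```

A real system would of course model far more than raw frequency (time of day, geography, event spread), but the shape is the same: past data in, ranked forecasts out.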
AI is also starting to predict everything from compatibility with marriage partners to hit songs, movies, political movements, and even terrorist attacks.
This development will have a massive impact on future society. Concerns have arisen, for instance, that human jobs will rapidly disappear. While it's true that the probability of machines replacing jobs currently done by humans is high, not all jobs will be replaced. The probability of replacement varies: for example, photographers at 60%, lawyers at 35%, journalists and reporters at 11%, and graphic designers at 8.2%. Jobs where humans hold an advantage have lower probabilities. There are also predictions that new jobs, not currently existing, will be created.
Furthermore, another debate has begun: How will humans interact with machines that surpass them? Will AI and human intelligence merge? What if AI develops self-awareness or consciousness, like in the movie "Terminator"? Within consciousness theory, discussions have started about whether consciousness itself can be artificially created. If such research succeeds, truly conscious AI might become a reality.
Furthermore, the possibility of humans acquiring new abilities has been pointed out. For example, consider the case of conjoined twins Tatiana and Krista. They were born sharing the thalamus of their brains, enabling them to share what each sees and thinks without speaking. Based on such facts, there is discussion that if we merge our brains with AI, we might realize a world where we can incorporate what another intelligence sees. In any case, humanity will be coexisting with a new form of intelligence for the first time, and I believe preparation for this is essential.
Synthesizing avatars for live broadcasts
Aiming for innovation in production techniques
Tachibana: Working on this program made me look forward to the future 30 years from now. I felt a sense of accomplishment, thinking, "I definitely want to live long enough to see what kind of future arrives with my own eyes." I produced the third episode on the theme "How Far Can Human Power Be Enhanced?" We covered not only physical enhancements like wearable robots that read human intent to assist tasks, but also enhancements for the mind and spirit. A major boom in brain research is happening worldwide right now, with massive capital investment. Within this, there's research using technology to unlock unused brain power. Research on cyborg technology merging brains with AI is also advancing. We featured a project suggesting that merging AI with the brain could grant humans immortality. Even as the body ages, transferring the brain's accumulated knowledge digitally could achieve eternal life... While such talk might seem frightening, by 2045, this might be commonplace, and people might look back and say, "There was a time when human life was finite."
Ogawa: Allow me to briefly touch on the program's production side. With "NEXT WORLD," we aimed to make the show itself an innovation in television broadcasting. For the first episode, we broadcast live from the National Museum of Emerging Science and Innovation, featuring a live performance by Sakanaction. The theme was "A Future Live Controlled by Artificial Intelligence." We used multiple computer-controlled cameras and image-analysis technology to fuse real and virtual spaces. We also invited viewers to participate in the live show as audience members through avatars created on our website. In fact, during the rehearsal the night before, we struggled late into the night with the avatar compositing and other issues. We even discussed having the producer appear on screen to apologize if it failed during the actual broadcast. But miraculously, it worked perfectly during the live broadcast itself, and only then. It worked so well that viewers didn't even realize it was being done live.
As the digital producer, my top priority was viewer participation in the program content. While users typically choose their own parts when creating avatars on websites, NEXT WORLD's avatars were proposed by artificial intelligence based on user input. The theme song by Sakanaction was designed to evolve through a system where AI remixed it based on user votes, meaning the song evolved with each broadcast.
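The vote-driven remix mechanism Ogawa describes reduces, at its simplest, to tallying user votes and carrying the winner into the next broadcast. A minimal sketch, with hypothetical remix IDs and vote data (the actual Rhizomatiks-built system is not public):

```python
from collections import Counter

def winning_remix(votes):
    """Return the remix ID with the most user votes (ties go to the first seen)."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical vote stream collected from the program website
votes = ["remix_a", "remix_b", "remix_a", "remix_c", "remix_a"]
print(winning_remix(votes))  # remix_a
```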
I'd always wondered if we could create new user-participation mechanisms for television. I asked Daito Manabe of Rhizomatiks, who developed this system, to "hack the program through user participation in digital content." I wanted user behavior on the website reflected in the main broadcast, even if only briefly. The convergence of television and the internet is rapidly advancing. In such an era, is it enough to keep creating content the same way we always have? I believe that for television stations to evolve the content they produce, they must collaborate with talent outside the broadcasting industry. Otherwise, there is no future.
How far will AI develop?
How will it be differentiated from human work?
Kagaya: It's been about a year since the program aired. How has the environment surrounding AI changed since you were conducting interviews?
Okada: My impression is that the level of attention on AI has completely shifted. When we first aired, the term "big data" was more mainstream than AI. Deep learning technology has emerged, and society is now focusing on AI. The technology itself is also evolving rapidly.
Kagaya: Current AI can make predictions through feature extraction and such, but we still haven't achieved "strong AI" that discovers new problems in the way we humans think, right?
Okada: Yes. We agonized over whether to depict "strong AI" in the program. Ultimately, we focused specifically on the aspect of predicting the future.
Kagaya: In the visual domain, image recognition has advanced to the point where it matches or even surpasses human capabilities. But motion detection and contextual understanding still lag behind, right?
Okada: Technologies for extracting and understanding information from images—like accurately reading expressions from facial muscle movements—are advancing rapidly. The real challenge lies beyond that. While AI can read emotions from expressions, I don't think we have AI that can infer the underlying reasons—like why someone is angry—or delve into the depths of their mind.
Kagaya: Even if we can extract correlations, they aren't necessarily causal, right? Sawamoto-san, you also have something you want to ask about artificial intelligence, don't you?
Sawamoto: What scared me watching the program was the story about the system that forecasts singers' hit songs. That's already commonplace in America. When an artist's first song does incredibly well, they try to make the second song fit the prediction system, but they don't know how to actually get it there. That situation, of artists wrestling with the prediction system, is already happening. I felt the same fate would befall the world of advertising creative. And looking further ahead, will AI come to predict and create in place of us humans? Will AI eventually be able to handle production itself?
Okada: At Reuters, robots already write simple sports results and economic articles based purely on data. Video editing is also partially automated. In that sense, we can't definitively say AI can't create. However, giving articles deep meaning or incorporating satire is still beyond its current capabilities. That said, we'll likely see scenarios where video editors get swayed by predictions like, "Inserting this cut will boost ratings by 3%."
Sawamoto: AI performs exhaustive matching to find the optimal combination. In a way, what we do is also matching. Copywriting, for example, involves repeatedly matching words that pop into your head at random, then presenting what you judge to be the best option. Seen that way, there's no reason AI couldn't do it. It could probably handle copywriting too. The question is who will judge the quality of that copy. While watching the program, I found myself pondering how crucial our "gut instinct," the final judgment we make, will become in the future.
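Sawamoto's description of copywriting as matching, generating candidate phrasings and judging the best, can be sketched as a generate-and-rank loop. The word fragments and the scoring rule below are entirely made up; the point is only the structure of generate, score, pick:

```python
import itertools

def rank_copy(candidates, score):
    """Return candidate lines sorted best-first by a scoring function."""
    return sorted(candidates, key=score, reverse=True)

# Hypothetical fragments recombined into candidate tag lines
openers = ["Are you ready", "Get ready"]
closers = ["for the future?", "for 2045?"]
candidates = [f"{a} {b}" for a, b in itertools.product(openers, closers)]

# Placeholder "judgment": prefer shorter lines (a crude stand-in for human taste)
best = rank_copy(candidates, score=lambda s: -len(s))[0]
print(best)  # "Get ready for 2045?"
```

The open question Sawamoto raises lives entirely in the `score` function: generation is mechanical, but the judgment of which line is actually good remains, for now, human.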
Okada: At Google, AI already handles adding captions to videos. It doesn't work for every video yet, and it's not perfect. But as accuracy improves, I think we'll easily reach the point where we can cut and paste AI-generated captions. Whether the machine's sense is any good, however, is another matter. I believe there are still quite a few things machines can't do.
Sawamoto: Thank you. That's a bit reassuring.
※Continued in Part 2
You can also read the interview here on AdTie!
Planning & Production: Dentsu Inc. Event & Space Design Bureau, Aki Kanahara