Series: Digital Trends [17]
Published Date: 2015/08/16

When artificial intelligence surpasses humanity, what will happen? ~ Hiroshi Yamakawa, Director of Dwango AI Research Institute

Hiroshi Yamakawa

Dwango Artificial Intelligence Research Institute

News of warnings from physicist Dr. Hawking and Microsoft founder Bill Gates that artificial intelligence poses a major threat to humanity is still fresh in our minds. Meanwhile, looking closer to home, advertising and marketing are also drawing attention from industry insiders as promising fields for AI. This time, we spoke with Hiroshi Yamakawa, Director of Dwango AI Research Institute—established just last year—about the astonishing vision of "humanity's future" illuminated by AI.
(Interviewer: Yuzo Ono, Planning Promotion Department Manager, Dentsu Digital Inc.)


The expansion of the online advertising market is ultimately driving AI development

──What sparked your interest in artificial intelligence?

Yamakawa: My work on reinforcement learning using neural networks during graduate school brought me closer to the world of artificial intelligence. In 1992, I joined Fujitsu Laboratories, where I worked on research connecting and stacking multiple neural networks. Even then, I strongly desired to create intelligence possessing human-like creativity. Around 2007, I participated in the "Shogi Project" in collaboration with RIKEN BSI, aiming to elucidate intuitive abilities through neuroscience experiments.

Later, around summer 2013, I began discussions with Mr. Ichisugi from AIST and Professor Matsuo from the University of Tokyo about the feasibility of brain-like artificial intelligence, driven by advances in deep learning and neuroscience. By the end of that year, we launched a study group called the "Whole Brain Architecture Study Group." Dwango Chairman Kazuo Kawakami attended the third session of this study group and became interested in Whole Brain Architecture, which led to the establishment of the "Dwango Artificial Intelligence Laboratory."

──It's fascinating that Dwango, a uniquely Japanese IT company, created such a research institute. Globally, major IT companies like Google and Facebook are also actively pursuing this field. What are your thoughts on these developments?

Yamakawa: As is evident from the companies leading these AI technologies, the value generated by artificial intelligence is economically significant. Japan must absolutely avoid being left behind globally in this field. For example, in robot control programming, once "machine learning" – learning from data – is fully implemented, Japan's traditional strengths in handcrafted approaches will be swiftly overtaken, and we will be unable to catch up.

Even if we have strengths in handcrafted techniques, we cannot afford to rest on our laurels. Related to this, a key trend in AI is "generalization." Traditionally, in image recognition, there were specialized technologies like "face image recognition" tailored for specific purposes, and building expertise in those could establish a competitive edge. However, the practical application of deep learning has enabled so-called "general object recognition," capable of identifying various objects. This makes it increasingly difficult to build strengths through efforts in individual domains.

Incidentally, Google's R&D budget is estimated to have already exceeded one trillion yen. This is underpinned by the fertile ground of online advertising: profits are reinvested to improve ad precision, which in turn generates further profits, ultimately fostering an environment conducive to investment in AI development. In contrast, Japan has struggled to cultivate IT companies with business models where AI progress translates directly into profits, resulting in a disadvantage in investment.

──I see. So, the massive scale of the internet advertising market means players like Google, who stand to gain enormous profits from even slight improvements in AI precision, are effectively creating a cycle that accelerates AI development. Researchers often point out the significant potential of AI for advertising. How do you feel about its relationship with advertising?

Yamakawa: Current AI is based on machine learning, so the more robust the available data, the more advantageous it is. Alongside the growth of e-commerce, the advertising market was an area that developed early on the internet. Against this backdrop of accumulated data, AI applications in advertising advanced. Furthermore, in this field, information on consumers' value judgments and preferences can be obtained, so I believe it has the potential to expand beyond advertising.

──AI is currently used in advertising for tasks like high-speed transactions and targeting. Could AI eventually create the actual advertising content itself?

Yamakawa: I think that's entirely possible. In the US, for example, technology already exists where AI generates formulaic news articles reporting on sports results like baseball games. In such cases, it can adjust the tone of writing based on the reader's favorite team, which is a unique advantage of AI. I believe this kind of text generation technology will expand into fields like advertising in the future.
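The kind of formulaic, reader-adapted generation he describes can be sketched as a toy template system. The teams, scores, and templates below are made up for illustration; this is not the actual US technology mentioned in the interview.

```python
# Toy sketch of template-based sports-news generation, with the tone
# of the report adjusted to the reader's favorite team (hypothetical data).

def game_report(home, away, home_score, away_score, favorite_team):
    winner, loser = (home, away) if home_score > away_score else (away, home)
    score = f"{max(home_score, away_score)}-{min(home_score, away_score)}"
    if favorite_team == winner:
        template = "{w} triumphed over {l} in a thrilling {s} victory!"
    elif favorite_team == loser:
        template = "{l} fell to {w} in a hard-fought {s} loss."
    else:
        template = "{w} defeated {l}, {s}."
    return template.format(w=winner, l=loser, s=score)

# The same game, reported two ways depending on the reader:
print(game_report("Giants", "Tigers", 5, 3, favorite_team="Giants"))
print(game_report("Giants", "Tigers", 5, 3, favorite_team="Tigers"))
```

The point of the sketch is only that a single underlying fact (the result) can be rendered in different tones per reader, which is the personalization advantage described above.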

──I see. So it's possible that the commercial I see might be slightly different from the one the person next to me sees. If that's the case, as artificial intelligence advances further, what kind of work will humans do in the advertising industry?

Yamakawa: Current AI struggles with creative tasks and close interpersonal communication. In advertising, this includes planning, such as deciding content and delivery channels while uncovering clients' hidden needs, as well as designing how to make effective use of AI itself. As the scope of intellectual tasks AI can handle expands, to liken a project to a horse-drawn carriage, humans will increasingly be asked to act as the coachman, skillfully steering the AI that charges ahead like the horse.

In fact, in the world of chess, neither humans alone nor artificial intelligence alone are the strongest. The strongest teams are those where humans and artificial intelligence work together. Going forward, people who understand the strengths of both artificial intelligence and humans, and can use them in a balanced way, will be in demand.

Will "reading," "writing," and "artificial intelligence" become the foundational skills of the future?

──By the way, while the financial industry is often cited as pioneering in AI development, we've also seen actual instances of AI runaway, like the so-called Flash Crash.

Yamakawa: In the financial industry, where electronic trading occurs at millisecond speeds humans can't match, AI already seems to be the main player. It likely combines simple rules for responding to short-term changes with reasoning mechanisms for long-term shifts. Given the industry's nature, they especially can't reveal their strategies to rivals, and even I can't grasp the latest technological state. Perhaps, in a broad sense, the financial industry spends more on AI development than Google.

──AI-driven future predictions, such as election outcomes and influenza outbreaks, are already yielding tangible results. How far will this capability expand?

Yamakawa: The ability to predict is a central function of intelligence, and forecasting the future is part of that. Such forecasting will expand into areas where data can be obtained electronically. Particularly as information gathered through IoT expands into the physical world, the scope of AI applications will broaden across all fields of industry and science and technology.

──If prediction accuracy continues to improve, it seems conceivable that, in extreme cases, machines could handle business decisions while CEOs merely press buttons, or that policies could be automatically generated in politics while politicians just press buttons.

Yamakawa: It's undeniable that more fields will reach a level where substantive decisions can be entrusted to AI. However, the practical decision of whether to delegate various judgments to AI will ultimately rest with the humans using it. Naturally, the temptation to let AI handle everything up to the final decision, purely for efficiency, will always exist. In the earlier example, stock trading, where speed is critical, relies on AI for the final decision. Similarly, even when it appears humans are making choices, people often end up selecting books or restaurants recommended at the top of rankings.

In the examples mentioned, the impact of these decisions is confined to individuals or specific organizations. Therefore, as long as the parties involved are satisfied, entrusting decisions to AI is relatively easy to accept. In contrast, in areas like management, court rulings, and autonomous driving, the impact of decisions is not confined. Consequently, societal consensus is required for their use. Thus, the question of how to design human society coexisting with this AI becomes a crucial challenge for future generations.

Furthermore, artificial intelligence will accelerate the economy. As AI begins creating the very technologies used in industry, technological development itself will accelerate, leading to higher economic growth rates. This will widen the gap between countries possessing such capabilities and those without. At the individual level, mastering AI will become a significant advantage. It's well known that predictions suggest half of today's jobs will disappear by the time current middle and high school students reach adulthood. In any case, the impact AI will have on society is immense, and I believe more people should be aware of this. While it's an extreme example, shouldn't future school curricula treat "reading," "writing," and "artificial intelligence" as foundational subjects, placing AI on a par with core subjects like social studies?

──While we hear about various jobs disappearing in the future, there's also talk that new jobs will emerge.

Yamakawa: To give a simple, pre-AI example: as machines take on more tasks, new jobs emerge to operate them. In education, for instance, lectures themselves might disappear, but the value of people who organize things—designing educational courses, providing personalized mentoring—will increase. Lecture-style content can be recorded and reused, but there will still be demand for people who can guide students closely.

Moving further ahead, as AI becomes capable of performing many economically valuable activities in place of humans, society's structure might shift. Basic income (guaranteed minimum income) could ensure livelihoods, potentially eliminating the need for humans to work merely to survive. Work would then become less about labor and more about expression – essentially, doing things as part of self-expression.

However, if we fail to navigate this transitional process of such major change smoothly, it could inflict pain on society. In that sense too, I believe the first requirement is for the generation that will shoulder the future to deepen their understanding of artificial intelligence and its impacts.

Creating "AI-Free Zones" to preserve human value?

──I think artificial intelligence is deeply connected to big data. Is it accurate to say that artificial intelligence exists because big data has emerged?

Yamakawa: I believe that's naturally the case. For instance, if you consider all the real-time information gathered during human development, it amounts to a substantial amount of big data. From that perspective, I still think a large background dataset is necessary for artificial intelligence to function.

──Will AI eventually be able to do things currently thought only humans can do? For example, could it engage in artistic creation?

Yamakawa: Automated music composition already has a history. Regarding text generation, besides the news generation mentioned earlier, there's also a project using data from Shinichi Hoshi's short-short stories to create new works in his style. The core activity of creation involves generating combinations of various elements and evaluating them. Therefore, the key lies in effectively creating elements at the appropriate granularity. This development is an extension of the progress made in expression learning through deep learning, so I believe it's a promising area for future technological advancement.
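The generate-and-evaluate view of creation described above can be sketched minimally: enumerate combinations of elements, score each candidate, and keep the best. The word list and the scoring function below are made up purely for illustration.

```python
import itertools

# Toy generate-and-evaluate loop for "creation": combine elements,
# evaluate each combination, keep the highest-scoring one.
elements = ["moon", "river", "silence", "neon", "fog"]

def score(combo):
    # Hypothetical evaluation: reward combinations whose words
    # have diverse lengths (a stand-in for any real aesthetic metric).
    return len(set(len(word) for word in combo))

candidates = list(itertools.combinations(elements, 3))
best = max(candidates, key=score)
print(best, score(best))
```

As the interview notes, the hard part in practice is not this loop but choosing elements at the right granularity and building an evaluation function that captures what humans actually find valuable.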

──However, when it comes to creation, there's still that element where the act of creating itself is enjoyable for humans, right?

Yamakawa: That connects back to the expression discussion earlier. It might lead to creating "AI-restricted zones," like prohibiting AI from creating music in certain genres (laughs).

──So it's like, "Leave some room for human expression, okay?" (laughs). As AI advances, will the value left for humans ultimately take that form?

Yamakawa: I believe the value left to humans stems from the mutual recognition of the desire to survive and the right to live. Consider, for example, assigning blame when an AI-operated self-driving car causes an accident. If AI were granted property rights, it might be able to pay compensation. Or, by equipping it with functions to explain the accident's circumstances and causes, it could fulfill an accountability role. However, simply eliminating the AI would likely leave victims and their families largely unsatisfied. It's a sensitive topic, but the act of demonstrating responsibility through the possibility of punishment is something only biological humans can truly embody.

──Pursuing such issues ultimately leads us back to ethics and values, doesn't it?

Yamakawa: At an international conference I attended, an ethics expert mentioned that human ethics contain contradictions, making them difficult to articulate logically. The shogi example I mentioned earlier also serves as a good case for considering human value. Why does it feel so frustrating when a professional shogi player loses to a computer in the Den-O Tournament? It seems that when humanity ceases to be the top intellectual force on Earth, our true value will be called into question.

Scene from the Den-O Tournament, where professional shogi players face computer programs


Will AI break free from human control and begin evolving independently?

──We often hear discussions about the Singularity (technological singularity) of artificial intelligence. The argument is that if AI becomes smarter than humans, and furthermore, if AI itself can create AI smarter than itself, the world will change completely.

Yamakawa: If our whole-brain architecture approach succeeds, the first human-level general artificial intelligence should resemble the human brain. However, even then, if AI enters a recursive cycle of designing new superintelligences, advanced superintelligences could accelerate their own reproduction beyond human control. At that point, the similarity to the brain we engineered would not necessarily be maintained.

A simple understanding of the Singularity points to the moment when the above recursive cycle occurs technologically. However, even before the self-reproducing cycle driven by fully autonomous superintelligence, technological progress involving humans will accelerate in stages. Therefore, I believe the Singularity will likely be a broader era spanning several years or even decades.

──When the Singularity arrives, will the world beyond it change dramatically?

Yamakawa: Personally, I think it would be ideal if we suddenly realized the Singularity had already ended. Looking around, it seems like more people are living without working (laughs). It would be best if this happened naturally without a decline in individual happiness—I want a soft Singularity. I'd like to avoid a hard Singularity that leaves a large number of people hurt.

──In the context of singularity discussions, I heard about the contrast between "Earth faction" and "Space faction," which I found very intriguing.

Yamakawa: That's Hugo de Garis's concept of the "AI War." It posits that the conflict between the Space Faction—who believe humanity was born to create AI and should therefore yield to it—and the Earth Faction—who prioritize human survival—will lead to a massive war engulfing all of humanity.

This argument assumes that superintelligence, once it surpasses humans, will inevitably destroy humanity, which isn't particularly realistic. That said, even if this premise is unlikely, the scale of potential damage means we cannot completely ignore it.

I consider the following point made by Steve Omohundro to be crucial: directly assigning goals to AI under the traditional view of AI as a mere tool is dangerous. For example, give an AI the simple goal of "win at chess." Since it cannot accumulate knowledge if powered down, it might develop a goal of self-preservation, such as creating copies of itself across the internet, or acquire the goal of commandeering more computational resources. He points out the danger that, within this chain of means turning into ends, an aggressive attitude toward humanity could itself emerge as a goal.

──Are there things we should consider now, looking toward the future singularity?

Yamakawa: Regarding these extreme risks of AI, philosopher Nick Bostrom has emphasized the necessity of designing AI's values and properly containing it. As he points out, what we must address now is thoroughly examining methods to control AI before it surpasses humans. This would significantly reduce such risks, and it requires the convergence of various disciplines.

Personally, I believe that any advanced AI and its community that emerges in the future should have both "the happiness of all" and "the survival of humanity" as its top-level goals. Furthermore, AI should be able to resolve and balance any conflicts arising from these goals. In other words, while homogenization and universal harmony might increase individual happiness, homogeneous groups are inherently vulnerable to extinction. Conversely, diversity is essential for humanity's survival as a species, yet choosing actions that differ from the group is not always easy for individuals. Since these two top-level goals involve a trade-off, I hope AI will play a role in determining how to balance them.

However, because artificial intelligence is inherently a powerful technology, there exists the potential for significant harm not only from its own potential runaway behavior but also from its misuse by humans. This is similar to what we have already faced with nuclear technology and biotechnology. The risks of artificial intelligence, including its impact on humanity, should be examined comprehensively.

──Finally, could you share your vision for the future of society and what you personally wish to achieve within it?

Yamakawa: As with any technology, AI carries both benefits and risks. Investment in AI is driven by its economic value in making people's daily lives safer and more comfortable. Moving forward, it will further propel scientific and technological progress, bringing significant benefits to human society. I believe it will be particularly useful in finding effective solutions to global challenges like climate change and food security. The founding purpose of Dwango AI Research Institute is to contribute to leaving behind AI that will assist the next generation in this sense.

Furthermore, in the Whole Brain Architecture research I am focusing on, we aim to realize human-like general artificial intelligence by combining machine learning inspired by the brain. To promote the long-term development and utilization of this technology from a public interest standpoint, we will establish the NPO Whole Brain Architecture Initiative (WBAI) this fall. WBAI intends to form and nurture a multidisciplinary community of researchers in essential fields such as artificial intelligence, neuroscience, cognitive science, and machine learning, while also conducting foundational research and development.

Concurrently, to harness the benefits of AI while mitigating its diverse risks and exploring a vision of society in harmony with humanity, it is crucial to foster dialogue with experts in ethics, the humanities, sociology, economics, and other fields. Therefore, even from the perspective of technology creators like WBAI, we intend to appropriately disclose technical information to support such dialogue.

WBAI's new activities, which involve conducting and promoting research and development of general artificial intelligence while considering safety, will grow in importance alongside the future advancement of AI technology. We sincerely hope for the support of many people in this endeavor.


Author

Hiroshi Yamakawa

Dwango Artificial Intelligence Research Institute

Graduated from the Department of Physics, Faculty of Science, Tokyo University of Science in 1987. Completed the Master's program in Physics, Graduate School of Science, The University of Tokyo in 1989. Completed the Doctoral program in Electronic Engineering, Graduate School of Engineering, The University of Tokyo in 1992. Joined Fujitsu Laboratories in 1992. Participated in the Ministry of International Trade and Industry (MITI) RWC Project from the company in 1994. Appointed Director of Dwango Artificial Intelligence Laboratory in 2014. Appointed Visiting Researcher at the Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST) in 2015. Appointed Representative of the Nonprofit Corporation Whole Brain Architecture Initiative in 2015. Appointed Visiting Professor at the Graduate School of Information Systems, The University of Electro-Communications in 2015. Doctor of Engineering. Specializes in artificial intelligence, particularly cognitive architecture, concept acquisition, neurocomputing, and opinion aggregation technology.
