The Current State of Intelligent Systems

This article was originally published in DesignMind, a design journal operated by frog, and was translated by Transmedia Digital under the supervision of Mr. Noriaki Okada of the Dentsu CDC Experience Design Department.

There's hardly a day without a new article about the dramatic progress of AI (Artificial Intelligence). But just how "deep" is "Deep Learning" really?
The "AIVA" system, developed by French company Aiva, which uses AI to compose music, hints at the astonishing creativity of algorithms. Hearing that Google Translate generates and translates using an intermediate language (a unique, provisional language bridging two languages it hasn't yet learned) might make some people anxious, wondering if machines are finally acting on their own volition.
Many articles about AI's future ultimately touch on broad philosophical themes: human essence, knowledge, life itself. Here, Sheldon Pacotti, Senior Solutions Architect at frog Austin, breaks down this complex issue in a Q&A format, sharing his perspective on the current state of AI, product design, common misconceptions, and the outlook ahead.
Q. Why does AI spark philosophical debates?
The unfortunate (yet fascinating) aspect of AI is its deep entanglement with philosophy. For instance, analytical philosophy, shaped by figures from the 18th century's David Hume to the 20th century's Donald Davidson, directly influenced how "symbolists" conceptualize the representation of truth within AI systems. To me as a computer scientist, W. V. Quine's theory of observation sentences and Jerry Fodor's "The Language of Thought" read like descriptions of software architecture (fundamental design concepts).
It is inconceivable that we could create sophisticated, intelligent human-machine interfaces (the means by which humans and machines exchange information, and the devices and software used for this) without considering the epistemology, phenomenology, and other "-ologies" produced by thinkers skilled in abstract thought. Creating "thinking machines" requires excellent theories of thought, cognition, and reasoning. Adding further complexity, numerous scientific fields (neuroscience, mathematics, cognitive science, and more) play crucial roles in AI's advancement, so predicting when we will discover the "correct" theories of intelligence, or the order in which they will emerge, is extremely difficult.
In design, we routinely practice "fostering empathy" with users. What's new is the need to consider how human-made machines perceive humans, because these machines will increasingly engage with human lives, communities, and society as a whole. While this often leads to philosophical discussion, I believe that whatever thinking system we design, "empathy" should remain one of its fundamental organizing principles.
Q. We cannot discuss AI without addressing the role of data underlying intelligence. Why do "deep learning" systems require such vast amounts of data?
Deep learning systems operate on data that has been collected, organized, and processed to solve a specific problem. For example, the statistical learning behind image-recognition systems rests on extremely dense, massive inputs, down to the level of individual pixels. Just as statistical research needs large numbers of accurate samples to achieve precision, "deep" neural networks (※1) need vast amounts of training data to learn to grasp features accurately (a minimal sketch follows the footnote below).
※1 A neural network is a network modeled after the brain's structure. A deep neural network consists of multiple layers stacked on top of each other.
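
To make that data dependence concrete, here is a minimal sketch in Python (NumPy is an assumption; the article names no tools). A single-layer classifier trained on synthetic images stands in for the statistical learning described above; its test accuracy climbs as the training set grows, the same pressure that drives deep systems toward massive datasets.

```python
# Minimal sketch (illustrative only): a single-layer classifier on
# synthetic 8x8 "images", showing how accuracy depends on the amount of
# training data. Real deep networks stack many layers and need
# proportionally more data than this toy stand-in.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Class 0: pure noise; class 1: noise plus a bright vertical stripe.
    X = rng.normal(0.0, 1.0, (n, 64))
    y = rng.integers(0, 2, n)
    X[y == 1, 24:32] += 2.0          # the "feature" pixels
    return X, y

def train_logistic(X, y, steps=500, lr=0.1):
    w = np.zeros(64)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)       # gradient descent step
    return w

X_test, y_test = make_data(2000)
for n in (10, 100, 1000):
    w = train_logistic(*make_data(n))
    acc = (((X_test @ w) > 0) == y_test).mean()
    print(f"train size {n:5d} -> test accuracy {acc:.2f}")
```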
Q. I often hear about the importance of data training in intelligent systems, but it seems like a repetitive and somewhat tedious process. If learning what a cat is requires a massive number of photos, isn't the learning process slow?
Deep learning, inspired by the cerebral cortex, represents only one facet of recognition. Humans can grasp new concepts quickly because many mechanisms in the human brain contribute to intelligence (such as neurons believed to be dedicated to a single concept). These mechanisms enable what AI researchers call "transfer learning": rapidly learning new concepts by leveraging existing knowledge. As AI architectures (basic design concepts) advance, some of these mechanisms will be implemented, making it possible to learn from far less data.
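
As a hedged illustration of transfer learning (PyTorch and torchvision are assumptions of this sketch, not anything named in the interview), the following freezes features pretrained on ImageNet and trains only a small new head, which is why the new task needs far fewer examples.

```python
# Sketch of transfer learning with PyTorch/torchvision (both assumed
# installed). Pretrained features are frozen; only a small head is trained.
import torch
import torch.nn as nn
from torchvision import models

# Downloads pretrained ImageNet weights on first use.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # freeze learned features
model.fc = nn.Linear(model.fc.in_features, 2)    # new 2-class head (trainable)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for photos.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.3f}")
```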
At the opposite end of the spectrum from deep learning lies the "symbolist" AI approach. It uses formal logic, which judges truth by the formal validity of arguments, to build systems capable of human-like deductive reasoning on top of vast ontologies (explicit specifications of conceptualizations, expressed in ontology languages such as OWL or in knowledge bases such as Cyc). While the symbolic approach is generally considered old-fashioned, it captures aspects of human thinking that general-purpose neural networks cannot. Developing systems that learn from sparse data will likely require a fusion of the two frameworks.
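
A toy example of the symbolist style, written from scratch for illustration (it is not OWL, Cyc, or any real ontology language): a forward-chaining engine that deduces new facts from "isa" relations by logic rather than by statistics over examples.

```python
# Toy forward-chaining rule engine: new facts are deduced from the
# transitivity of "isa", with no training data involved.
facts = {("cat", "isa", "mammal"),
         ("mammal", "isa", "animal"),
         ("tom", "isa", "cat")}

def forward_chain(kb):
    changed = True
    while changed:
        changed = False
        # Rule: if A isa B and B isa C, then A isa C.
        for (a, _, b) in list(kb):
            for (b2, _, c) in list(kb):
                if b == b2 and (a, "isa", c) not in kb:
                    kb.add((a, "isa", c))
                    changed = True
    return kb

for triple in sorted(forward_chain(set(facts))):
    print(triple)
# ("tom", "isa", "animal") is deduced, never directly observed.
```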
Q. From a technical perspective, how important is ensuring transparency for AI development?
The "challenge of ensuring transparency" in AI is a particularly difficult problem for mathematically trained systems. Current deep learning systems are single-function black-box processes. For example, consider systems that recognize faces from photos or translate text. When integrated into traditional software systems like strategy games, they perform logical, clear roles, but their internal workings remain opaque. In future architectures modeled after the brain, where not just the mechanism but the structure itself is brain-like, one can imagine that all processing within the system, and the algorithms executing that processing, are learned. This risks creating an eerily intelligent black box.
The key to making such systems "transparent" may lie in working memory, the kind humans use when thinking things through. By designing at a high level how processes store patterns and how attention focuses on them, we gain handles for tracing the flow of execution. This lets us separate out individual learned processes, much as the human mind links concepts to words. While subsystems such as vision may remain opaque, at the level of patterns and concepts, transparency, and even introspection (the ability to examine one's own inner workings), can be designed in.
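
As a sketch of what such a design might look like (an assumption of mine, not a system described in the interview): a working memory of named pattern slots read through soft attention, where every read can be logged and traced.

```python
# Assumed design sketch: a "working memory" of named pattern slots read
# through soft attention. Because every read is a weighting over named
# slots, each step can be logged, giving the traceability described above.
import numpy as np

rng = np.random.default_rng(1)
names = ["face", "voice", "gait"]
memory = {n: rng.normal(0.0, 1.0, 16) for n in names}   # learned patterns

def attend(query):
    keys = np.stack([memory[n] for n in names])
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax attention weights
    for n, w in zip(names, weights):            # the traceable part
        print(f"  attention on {n!r}: {w:.2f}")
    return weights @ keys                       # blended recalled pattern

recalled = attend(memory["face"] + rng.normal(0.0, 0.1, 16))  # noisy cue
```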
Q. Bots are everywhere these days, but how much do they actually understand? Are they really as good as their reputation?
Companies keep adding features to bots, much as they do to apps, but we still seem far from natural conversation with computers. Bots reason skillfully in various ways and play their designated roles smoothly, but they remain fundamentally outside the human world. They lack the mental models (the representations humans use to think about how things work in the real world) that would let them integrate into human life. Some would say this kind of understanding lies beyond the reach of big-data analysis.
However, many leaders in the AI field believe true artificial intelligence must be based on "embodied intelligence." This concept posits that thinking machines need physical senses and even the ability to move around in the world. Giving a shopping assistant bot senses might seem excessive, but it could serve as a "format" for unifying learned concepts, much like the human mind does.
Q. What will be the next breakthrough in AI?
For a while, we'll likely see more similar applications. Meanwhile, the latest systems will learn everything they possibly can, and then a remarkable leap will occur. Current deep learning excels at "understanding" patterns within large datasets in narrow problem domains. But it won't be long before new neuromorphic designs, modeled after brain structures, emerge.
DeepMind, the Google subsidiary whose Go AI "AlphaGo" defeated professional Go players and captured global attention, is already pushing ahead. Its research has progressed from the "Neural Turing Machine (NTM)", a complete computer built solely from neural networks that uses external storage to generate its own algorithms, to the "Differentiable Neural Computer (DNC)", which, like a human, can remember what it learns and apply it to new phenomena. Equipped with read/write memory, attention-based controllers, and other features, the DNC has demonstrated the ability to derive algorithms and apply them to new situations. The era of true "intelligence engineering" is about to begin.
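
To make read/write external memory concrete, here is a minimal NumPy sketch of content-based addressing, loosely in the spirit of the NTM/DNC (a toy, not DeepMind's actual model): a vector is written into a memory matrix and recalled by similarity rather than by address.

```python
# Minimal content-addressable memory: write a vector into an external
# memory matrix, then recall it by similarity-weighted attention.
import numpy as np

rng = np.random.default_rng(2)
M = np.zeros((8, 4))                    # external memory: 8 slots x 4 dims

def write(mem, slot, vector):
    mem[slot] = vector                  # hard write, for simplicity

def read(mem, key):
    scores = mem @ key                  # content-based addressing
    w = np.exp(scores - scores.max())
    w /= w.sum()                        # softmax read weights
    return w @ mem                      # blended read vector

item = rng.normal(0.0, 1.0, 4)
write(M, slot=3, vector=item)
print("stored:  ", np.round(item, 2))
print("recalled:", np.round(read(M, key=item), 2))
```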
Q. What is needed for AI to not only support but enhance human experience?
Unfortunately, I cannot offer a specific roadmap for AI, because AI research spans an extremely broad range of fields, from biology to mathematics to philosophy.
The rest of this article can be found in the web magazine "AXIS".

Sheldon Pacotti
Sheldon Pacotti is a Senior Solutions Architect at frog (Austin). After studying mathematics and English at MIT and Harvard, he has worked on interdisciplinary creative projects. Software he has developed has won industry awards. He has also written scripts for video games, built software architecture for corporations, and written technology articles. @NewLifeIneract
Author

frog
frog is a company that delivers global design and strategy. We transform businesses by designing brands, products, and services that deliver exceptional customer experiences. We are passionate about creating memorable experiences, driving market change, and turning ideas into reality. Through partnerships with our clients, we enable future foresight, organizational growth, and the evolution of human experience. http://dentsu-frog.com/

