ELSI refers to issues that arise beyond purely technical challenges when researching, developing, and implementing new technologies in society. The term "ELSI" is an acronym for Ethical, Legal, and Social Issues. It has gained significant attention both domestically and internationally as an unavoidable challenge for companies engaging in new business areas, including data business.
Since 2019, Dentsu Inc. has collaborated on an industry-academia co-creation project with Osaka University's Research Center on Ethical, Legal and Social Issues (ELSI Center), which was then in its pre-launch phase. Beyond supporting companies and industry groups in establishing guidelines for data business domains, Dentsu Inc. conducted Japan's first (*) "ELSI Awareness Survey in Data Business" in 2021.
Continuing from the previous article, Professor Atsuo Kishimoto, director of the Osaka University ELSI Center and an expert in risk studies; Dr. Yusuke Nagato, a specially appointed assistant professor at the center and an ethicist; and Mr. Kitetsu Shu, a visiting faculty member at the center and a member of the Dentsu Inc. Solution Design Bureau, discussed the outlook for future ELSI responses, focusing on "consent," one of the key points of the revised Personal Information Protection Act.
(*) Survey by Dentsu Inc. and Osaka University ELSI Center
Issues surrounding "consent" have become apparent with the legal revision
Shu: Among trends in ELSI responses within the data business, the most attention is currently focused on the revised Personal Information Protection Act (hereafter, the Revised Act), which came into effect in April 2022. While the issues are wide-ranging, one topic with significant business impact is the set of problems surrounding "consent."
The Revised Act mandates confirmation that consent has been obtained for providing information that could constitute personal data to third parties. It also requires consent to be obtained with the purpose clearly stated. Indeed, since around April, I've noticed a significant increase in websites displaying pop-ups requesting consent for collecting personal information upon access. First, regarding this consent management, I'd like to hear your perspectives on the key points.
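The requirement described above, that consent be obtained with the purpose stated and that third-party provision be confirmable, can be illustrated with a minimal record sketch. All field names and the check function here are hypothetical illustrations of the idea, not the Act's actual technical requirements.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    """One user's consent to a specific, stated purpose (hypothetical schema)."""
    user_id: str
    purpose: str          # the purpose must be stated at the time of consent
    third_parties: tuple  # recipients the user was told about
    obtained_at: datetime

def may_provide_to(record: ConsentRecord, recipient: str, purpose: str) -> bool:
    """Provision to a third party is allowed only for the consented purpose
    and only to a recipient the user was informed of."""
    return purpose == record.purpose and recipient in record.third_parties

rec = ConsentRecord(
    user_id="u123",
    purpose="ad_measurement",
    third_parties=("analytics_vendor_a",),
    obtained_at=datetime.now(timezone.utc),
)
print(may_provide_to(rec, "analytics_vendor_a", "ad_measurement"))  # True
print(may_provide_to(rec, "analytics_vendor_a", "profiling"))       # False
```

The point of keeping such records is that a business can later demonstrate what was consented to, for which purpose, and when.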
Nagato: Philosophically speaking, "consent" means permitting someone to enter the space you control, which is inherently a very weighty act. However, as we are repeatedly asked for consent in daily life, we see cases where people give consent too lightly, or conversely, where serious individuals become exhausted from reading privacy policies every time – a problem known as "consent fatigue." In other words, I believe we have reached a stage where we need to consider methods for obtaining more substantive consent, rather than simply focusing on obtaining any consent.
Kishimoto: What I find noteworthy is the wording in Article 19: "A personal information handling business operator shall not use personal information in a manner that may encourage or induce illegal or improper acts." Prohibiting not only illegal but also improper handling implies room for self-assessment of ethical appropriateness or for third-party judgment.
Shu: Even when speaking with data business operators, I sense that the amended law leaves room for interpretation and significantly expands the areas where companies must make individual judgments. For example, while it's clear that the purpose of use must be communicated, the level of detail required and the specific methods for informing users are largely left to the company's best efforts.
Nagato: The European GDPR (General Data Protection Regulation) is often cited as the leading precedent for consent management. The GDPR defines consent strictly and imposes conditions so stringent that some Japanese companies have withdrawn from Europe, and enforcement has been real, including the imposition of fines. The GDPR is therefore sometimes described as taking a "consent supremacy" approach, reflecting how heavily it weighs consent as an individual right.
Kishimoto: However, Article 6 of the GDPR lists six conditions under which data processing becomes lawful. "Consent" is only one of them. Beyond consent, there is also the concept of "Legitimate Interests." If it can be demonstrated that there is a legitimate interest for the individual or society, information can be obtained. Therefore, I don't think it can necessarily be called a consent supremacy principle. However, the criteria for determining whether something constitutes a "legitimate interest" are not yet fully established.
Is it acceptable to have a consent process where people must routinely lie?
Shu: When considering the issue of consent, the term "informed consent" in healthcare is a useful reference point. Our research shows this is a highly recognized term, but could Professor Nagato please revisit the principles of informed consent and its current evaluation?
Nagato: In medical settings, the asymmetry of information between medical professionals (doctors) and laypeople (patients) was problematic. Patients often had to accept a doctor's judgment without knowing the risks of treatment or medication. Informed consent emerged as a solution to bridge this information gap.
Shu: That's exactly the same dynamic as the information asymmetry between platform operators and individuals in the data business. On the other hand, it's also true that informed consent has sometimes been perceived as a "technique to persuade patients," leading to a certain formalization or emptiness of consent. Consequently, a more advanced model called "shared decision-making" has recently been proposed in medical settings.
Kishimoto: What we can learn from the informed consent case is that translating it as "explanation and consent" led to a misunderstanding – that it was a tool for persuasion, or that merely going through the formal procedure was sufficient. Another point is that it was introduced due to external pressure. Unless a system is adopted based on genuine, intrinsic motivation from both doctors and patients, it ultimately becomes merely formalistic. In that sense, since the current push for consent management is also driven by external pressures like GDPR compliance, we need to be careful to avoid it becoming just another formality.
Shu: That's right. The government is carefully explaining that formal consent alone is insufficient and is also addressing the issue of "consent fatigue." Our research shows that 65% of people don't even check terms of service or privacy policies in the first place, leading to debate about whether the consent obtained there is truly valid.

Nagato: "Consent fatigue" is a very important issue. In theory, users should review each website's privacy policy individually before consenting, but this places an excessive cognitive burden on them. As a result, consenting without review has become the norm. In a sense, this means users are lying when they say "I've reviewed it." Lawrence Lessig, a renowned cyberlaw scholar, criticizes this state where people must routinely lie. It raises a fundamental question: Is it acceptable to have interfaces that make people lie without hesitation?
Shu: So, what's needed is an interface that obtains genuine consent from users without forcing them to lie, while also reducing cognitive load. Indeed, for the data business to grow healthily, it must be accompanied by user acceptance. Conversely, we can interpret this as meaning that companies and services that clearly articulate data ethics will be chosen.
Is tailored consent formation the next trend?
Shu: A major trend in the business world is the shift in market structure from the "Attention Economy," which captures and retains consumers' interest, to the "Intention Economy," where consumers consciously choose their relationship with companies. I believe this worldview of the Intention Economy also relates to the future of consent. What are your thoughts?
Kishimoto: As Professor Nagato mentioned earlier, the idea of obtaining consent every single time places a significant burden on users. Considering this, I believe we need to strike a balance, as relying entirely on intention (proactive interest) might also become too burdensome. Informed consent in healthcare is used in critical situations like surgery or organ transplants, but it rarely comes up for something like treating a mild cold. Similarly, regarding consent, I believe businesses should first screen risks based on the purpose of use. Then, they should adopt a balanced approach: trust users for certain aspects where risks are low, while thoroughly explaining and gaining their understanding for other aspects where risks are higher.

[Figure] CRM (Customer Relationship Management) is the model in which sellers manage their relationships with customers, while VRM (Vendor Relationship Management) is the model in which customers manage their relationships with sellers.
Shu: "Trust" is a crucial keyword. Discussions are already underway in North America that the current formal consent acquisition process is outdated. The next form being proposed is for companies to first fulfill their accountability, then, based on that trust, minimize the burden on users.
Nagato: The key point is how to define and build that trust. Simply mandating "gain trust" through legislation might not work. Since the baseline level of trust can differ between large corporations and startups, whether healthy competition can function effectively might also become a topic of discussion.
Shu: In North America, the privacy tech sector is gaining attention alongside the so-called "post-consent" trend. Recently, the term Preference Management Platform (PMP) has emerged as the next trend replacing Consent Management Platforms (CMPs). This mechanism focuses on understanding user preferences and interests to shape consent, rather than obtaining granular consent for each data use case. Communication with users and building trust become critically important.
Kishimoto: When obtaining individual consent is burdensome, yet users resist blanket consent, a partial blanket consent approach—such as "Feel free to use my personal data for medical research, but not for marketing purposes"—aligns precisely with a consent process based on user preferences. It seems conceivable to pre-register such information and then automatically permit usage.
Shu: I see. So, while we broadly call it consent, it can take various forms: individual consent each time, comprehensive consent where everything is entrusted, or partial comprehensive consent aligned with user preferences. I believe technology can address some of these aspects. As the privacy tech market and tools develop, we'll likely see closer implementation in society.
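The partial blanket consent discussed above, with preferences pre-registered and usage then permitted automatically, can be sketched as a simple lookup. The purpose names and the three-way decision are hypothetical illustrations; real preference-management platforms involve far richer purpose taxonomies, auditing, and revocation.

```python
from dataclasses import dataclass, field

@dataclass
class PreferenceProfile:
    """Pre-registered user preferences (hypothetical schema)."""
    allowed_purposes: set = field(default_factory=set)  # blanket-consented purposes
    ask_each_time: set = field(default_factory=set)     # purposes needing individual consent

def can_use(profile: PreferenceProfile, purpose: str) -> str:
    """Decide how a data-use request should be handled for this user."""
    if purpose in profile.allowed_purposes:
        return "permit"   # partial blanket consent already given
    if purpose in profile.ask_each_time:
        return "ask"      # fall back to individual, per-use consent
    return "deny"         # no preference registered: default to refusal

# "Feel free to use my data for medical research, but not for marketing"
profile = PreferenceProfile(
    allowed_purposes={"medical_research"},
    ask_each_time={"service_improvement"},
)
print(can_use(profile, "medical_research"))     # permit
print(can_use(profile, "marketing"))            # deny
print(can_use(profile, "service_improvement"))  # ask
```

Defaulting unregistered purposes to refusal keeps the burden of clarity on the data user rather than the individual, which is consistent with the trust-based direction discussed here.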
The Era of In-House ELSI Talent
Shu: While new technologies and tools for ELSI responses, starting with "consent," are emerging one after another, the strength of personnel remains indispensable for companies to respond appropriately. Could you share recent trends in hiring and developing ELSI talent?
Kishimoto: While we occasionally see cases where companies hire dedicated roles like a Chief Ethics Officer, until a few years ago the mainstream approach was to establish external expert committees, often including ethicists. However, having organized such a committee myself, I can say it was extremely challenging. First, regarding member selection, there's a tendency to gravitate toward people who won't say no. Roles and authority also require careful consideration. For instance, you must decide each detail: whether the company or the committee proposes agenda items, and who ultimately has the authority to convene the committee. If this design fails, there's a risk of being criticized for ethics washing (superficial ELSI compliance).
Nagato: Furthermore, while committees often served primarily to rubber-stamp products or services after development, the concept of "by design" has gained traction recently. This approach involves organizing cross-functional teams across departments before a project even begins, appointing ELSI leaders within each department. This method is now attracting significant attention. Corporate culture and company mottos embody the founder's vision, making them elements that connect to a company's unique values and strengths. If we can effectively align these with the values built up bottom-up from the field's perspective, I believe it can lead to a better approach to ELSI for that company.
Shu: That's right. In our joint research, rather than taking a position of teaching corporate staff as experts, we acted more as facilitators, drawing out the voices of employees engaged in frontline operations and management. We supported the development of ELSI personnel and the formation of teams dedicated to ELSI responses.
Furthermore, appointing someone like a Chief Ethics Officer is realistically challenging in Japan. Therefore, the role of continuing education—where active business professionals, driven by their own business challenges, systematically study principles from ethics and other humanities and social sciences at universities—will become increasingly important going forward.
Nagato: Yes, the University of Cambridge offers the "MSt in AI Ethics and Society" course for working professionals, providing ELSI education on AI. It is structured so that participants can write a master's thesis and earn a degree. From an in-house development perspective, having an academic grounding is a definite strength. We would be very pleased if companies made use of university resources in this way.
Shu: When considering the future of data business, I believe it's essential to look beyond just the science domain and include the ELSI domain as well. We need to consider how to create products and services that users will choose. I sincerely hope we can continue to collaborate through industry-academia partnerships to address such corporate needs.
(Survey Overview)
・Survey Title: ELSI Awareness Survey in Data Business
・Survey Participants & Sample Size:
[Screening] Men and women in their 20s to 60s nationwide: 20,000 respondents
[Main Survey] Individuals involved in data business: 1,000 respondents
・Survey Period: December 20–24, 2021
・Survey Organization: Dentsu Macromill Insight, Inc.
