

This article presents content originally published in "Design Mind," a design journal operated by frog, under the supervision of Mr. Noriaki Okada of Dentsu Inc. Experience Design Division.


How to Incorporate Data Science into Design Thinking to Drive Results

As automated data processing via artificial intelligence (AI) rapidly expands, incorporating data science into product design and market launch is becoming increasingly common. To create the best products, it's crucial that qualitative and quantitative methods complement each other throughout the entire design process.

This synergy enables the creation of user-centered, highly reliable products. At frog, we combine both approaches, constantly exploring and testing better methods for our clients.

Many teams responsible for product design and market launch view data science primarily as a tool to automate and enhance existing processes (e.g., automatic transcription of design research interviews, computer-based visualization and clustering of concept proposals).

Certainly, we need this support in our daily work. However, at frog, we believe the essence of introducing data science lies in enhancing our understanding of client challenges and improving our ability to develop solutions with proven reliability and scalability.

Data science not only enables new ways of engaging with user data but also provides means to collect and analyze previously unattainable types of user information, giving design statistical backing and validation capabilities. Ultimately, the fusion of data science and design thinking boils down to understanding the end-user and, more importantly, understanding how to best serve that user.

Enhancing Design Thinking with Data

Design thinking is a structured approach to problem-solving. It encompasses various activities that support the creation of impactful design solutions centered on people and focused on the essence of the problem. While activities vary depending on the nature of the project, they fundamentally include: 1) Empathizing, 2) Defining, 3) Ideating, 4) Prototyping, and 5) Testing.

At frog, we enhance these qualitative research methods by adding quantitative techniques. Quantitative methods are used to validate hypotheses emerging from the design process and also serve as a source for new insights. While these two approaches often run parallel, we ensure they intersect at key points in the process.

Some of the activities listed below are particularly effective in "design research"-type projects within design thinking. These projects clarify user needs and pain points (the root causes of users' frustrations and difficulties) and validate solutions through prototyping. Conversely, others are more effective in "design and build"-type projects, which take those solutions further into manufacturing and continuous improvement.

Now, let's explain how data science engages at each of the five stages in the user-centered product design and market introduction process, accompanied by examples.

Stage 1: Empathy
Context: The first stage of the design thinking process is building empathy with users. Qualitatively, this often involves fieldwork research with a relatively small group of users. The goal is to deepen understanding of the user journey, pain points related to design problems, motivations, and resulting behaviors.

Secondary research using proprietary and public information helps determine the overall structure of this type of research. However, we always strive to include open-ended questions that elicit unexpected responses. Such answers can reveal conclusions unattainable through purely deductive reasoning and become critical discoveries that inform subsequent design.

The Need for Data Science: Data science as a discipline typically pays little attention to this empathy phase. At frog, however, we consider this a critical stage for introducing data science. Incorporating quantitative insights into qualitative research helps prevent errors stemming from preconceptions, such as overweighting users who share compelling stories while downplaying the pain points of users whose stories are less striking.

Activity Example: Social media communities not only yield valuable information but also reveal broader awareness of issues. Data science helps grasp this information holistically and evaluate it against design research findings. For instance, conducting quantitative surveys with well-structured questions helps confirm statistical causal relationships and their strength between user pain points, awareness, and resulting behaviors.

This type of research cannot be properly designed without first having hypotheses about the nature of these relationships. Qualitative methods reveal what is actually happening and why, while quantitative methods determine the frequency of occurrence and the relative importance of those reasons.
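As a sketch of what such quantitative validation might look like, the snippet below runs a chi-square test of independence on a hypothetical 2x2 survey table (pain point reported vs. resulting behavior observed) and reports Cramér's V as a rough measure of association strength. All counts are invented for illustration, and the choice of a chi-square test is one assumption among several reasonable ones.

```python
import math

# Hypothetical 2x2 survey counts (illustrative, not real data):
# rows = pain point reported (no / yes), columns = behavior observed (no / yes)
table = [[320, 80],
         [90, 210]]

def chi_square_2x2(t):
    """Pearson chi-square statistic (no continuity correction) for a 2x2 table."""
    row_totals = [sum(r) for r in t]
    col_totals = [t[0][j] + t[1][j] for j in range(2)]
    n = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (t[i][j] - expected) ** 2 / expected
    return chi2, n

chi2, n = chi_square_2x2(table)
cramers_v = math.sqrt(chi2 / n)  # effect size for a 2x2 table

# With 1 degree of freedom, chi2 > 3.84 indicates association at the 5% level.
print(f"chi2 = {chi2:.1f}, Cramér's V = {cramers_v:.2f}")
```

Note that a significant result establishes association, not causation; as the text emphasizes, the causal "why" still comes from the qualitative work.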

Stage 2: Definition
Context: Once we've built empathy with users and understand the reasons behind their actions, we then define the nature and scope of the problem more precisely. This process involves synthesizing all information gathered so far to identify patterns that either support or refute our hypotheses. It's not uncommon for unexpected discoveries from our design research to connect with generally established theories, yielding significant insights.

Hypotheses derived from clearly defined user needs become the starting point for innovative designs that fulfill those needs in unexpected new ways. This process creates a unique advantage for clients. Their competitors often rely heavily on industry expertise and tend to have a relatively shallow understanding of users.

The Need for Data Science: Data science is an essential tool for evaluating the quality of constructed hypotheses. If data collection and analysis are conducted systematically during the empathy phase, hypotheses can be directly validated against quantitative evidence during the definition phase. This allows for comparing the strength of each hypothesis and prioritizing them. For example, a pain point strongly identified in qualitative research might affect only 10% of users, while another pain point mentioned incidentally could impact 90% of users. This allows for more effective hypothesis formulation: determining which problem requires solving first and how much effort solving it will demand.

Furthermore, by identifying correlations between pain points and user types, you can formulate multiple subtly different hypotheses applicable only to specific user segments or behavioral patterns.

Activity Example: The definition phase is a highly iterative process, repeatedly cycling through forming qualitative hypotheses → quantitative testing and validation → refining hypotheses. For example, suppose you have two hypotheses about why users abandon the sign-up process midway: it takes too long, or it requires information users don't have on hand.

In that case, quantitative methods like A/B testing can be used to compare the time spent by users who completed the procedure versus those who abandoned it, or to compare completion rates between laptop and mobile device users. Digging deeper, comparing the time spent by mobile device users versus laptop users separately can reveal whether one hypothesis is a causal factor or merely a coincidental factor.
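A minimal sketch of the completion-rate comparison described above, using a two-proportion z-test on hypothetical laptop vs. mobile sign-up counts (all numbers are invented for illustration):

```python
import math

# Hypothetical sign-up funnel counts (illustrative numbers only)
laptop_completed, laptop_total = 430, 500
mobile_completed, mobile_total = 310, 500

p_laptop = laptop_completed / laptop_total
p_mobile = mobile_completed / mobile_total

# Two-proportion z-test under the pooled null hypothesis of equal rates
pooled = (laptop_completed + mobile_completed) / (laptop_total + mobile_total)
se = math.sqrt(pooled * (1 - pooled) * (1 / laptop_total + 1 / mobile_total))
z = (p_laptop - p_mobile) / se

# |z| > 1.96 suggests the gap is unlikely to be chance at the 5% level.
print(f"laptop {p_laptop:.0%}, mobile {p_mobile:.0%}, z = {z:.2f}")
```

Even a clearly significant gap between devices does not by itself distinguish the competing hypotheses (form length vs. missing information); that requires the follow-up comparisons of time spent described above.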

Stage 3: Conceptualization
Context: In the conceptualization stage, we brainstorm solutions for the user pain points. Guided by the motto that no idea is a bad idea, we encourage participants to freely propose ideas, including those that seem ineffective at first glance or don't appear to be a perfect fit, so as to generate a wide range of solutions. After the initial brainstorming, we refine the concepts generated and group them into clusters based on potential coherence (e.g., solutions addressing similar pain points, or those suited to specific user types or technologies).

The Need for Data Science: At first glance, quantitatively analyzing the strength of various pain points and hypotheses might seem to conflict with the requirement for conceptualization to avoid imposing limitations or constraints on possibilities. Therefore, during brainstorming, we deliberately avoid focusing on such quantitative analysis. Instead, we use it as a framework to collectively share an understanding of the problem space that should serve as the starting point for ideas.

However, during the concept refinement and classification phase, quantitative insights can play a crucial role in clustering ideas or serve as a means to resolve conflicts when multiple opinions compete over which approach is best.

Activity Example: While process automation can occur during conceptualization (e.g., automating idea clustering via natural language processing or unsupervised learning methods), the greatest benefit of data science at this stage lies in synthesizing user pain points, hypotheses about their causes, and the concepts built to solve them.

This involves leveraging simplified, high-performance quantitative surveys on interactive online platforms. By designing and refining behavioral models based on survey results, we predict how well each concept alleviates pain points and influences user behavior. Such models can also maximize the effectiveness of specific solutions.
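As a toy illustration of automated idea clustering: production work would typically use NLP embeddings or TF-IDF vectors with an unsupervised method such as k-means, but the underlying intuition can be shown with simple word-overlap (Jaccard) similarity. The brainstorm ideas and the similarity threshold below are hypothetical, chosen only to demonstrate the mechanics.

```python
# Hypothetical brainstorm ideas about a sign-up / checkout flow
ideas = [
    "simplify the sign-up form fields",
    "shorten the sign-up form",
    "save sign-up form progress",
    "live chat support during checkout",
    "chat bot for checkout questions",
]

def jaccard(a, b):
    """Word-overlap similarity between two short idea descriptions."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

# Greedy clustering: attach each idea to the first cluster whose
# representative (first member) it sufficiently resembles.
clusters = []
for idea in ideas:
    for cluster in clusters:
        if jaccard(idea, cluster[0]) >= 0.2:  # tunable threshold
            cluster.append(idea)
            break
    else:
        clusters.append([idea])

for i, cluster in enumerate(clusters, 1):
    print(f"cluster {i}: {cluster}")
```

With these inputs the sign-up ideas and the chat ideas fall into separate clusters; in practice the clusters would be reviewed and relabeled by the design team rather than taken at face value.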

Stage 4: Prototype
Context: Prototyping is the stage where concepts take shape. Visual designers sketch rough screen designs and product features, while interaction designers build user journeys, key user flows, and interactive prototypes. Strategists create the business model and product roadmap to maximize product adoption and revenue. Prototypes are iterated multiple times to ensure they adequately address the pain points identified in research.

The Need for Data Science: Specific data science needs during prototyping vary by project but must always be integrated into the process. For design research projects, it's critical to verify whether developed prototypes solve the most important problems and, further, whether they achieve this in the correct sequence. Referencing behavioral models allows us to confirm success and provides guidance for improving the user journey.

In design and build projects, we frequently utilize behavioral models. We use them to verify whether the initial prototype has usability issues due to insufficient data, whether a targeted data collection strategy is included in the product roadmap, and whether we can engage with users to validate advanced features before full-scale deployment.

Activity Example: In design research projects, it's crucial to assess the relative value of each product feature. By using behavioral models to evaluate how well each feature alleviates its corresponding pain point, and then multiplying that result by the prevalence of each pain point in the market, we can estimate the total potential user base for that product. Behavioral models can also be used to eliminate redundancy in design solutions, keeping to a minimum the overlap between components in the pain points they alleviate and the users they reach.
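The sizing arithmetic works roughly as follows. The alleviation scores, prevalence rates, and market size below are hypothetical placeholders, not outputs of any real behavioral model.

```python
# Hypothetical behavioral-model outputs (all numbers invented):
# "alleviation" = how well the feature relieves its pain point (0-1),
# "prevalence" = share of the market experiencing that pain point.
market_size = 1_000_000
features = {
    "progress saving":   {"alleviation": 0.8, "prevalence": 0.30},
    "shorter form":      {"alleviation": 0.6, "prevalence": 0.50},
    "live chat support": {"alleviation": 0.4, "prevalence": 0.10},
}

# Potential user base per feature = market size * prevalence * alleviation
estimates = {name: market_size * f["prevalence"] * f["alleviation"]
             for name, f in features.items()}

for name, users in sorted(estimates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ~{users:,.0f} potential users")
```

Here a feature with a weaker alleviation score but a widespread pain point ("shorter form") can outrank a stronger feature aimed at a niche problem, which is exactly the prioritization signal described above.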

For design and development projects, frog typically conducts small-scale market testing with an MVP (Minimum Viable Product). Behavioral models help identify the types of users who will actively or passively provide valuable feedback during these tests. They also help evaluate which combinations of features are optimal for that user group. At this stage, we build initial data collection lines and analysis components to interpret user feedback against already known user information.

Stage 5: Testing
Context: While some conceptual testing occurs in previous stages, the testing phase warrants its own distinct step. This is the first stage where we receive actual user feedback on how well the design solution addresses user needs. No matter how hard we try to cover all user needs, testing often reveals oversights. We may have insufficiently considered specific user groups, or pain points and user behaviors not captured in research may surface.

In design research projects, we typically provide users with physical or digital prototypes to use, creating opportunities for them to give verbal feedback in real time. We use this feedback to identify which parts of the product functionality need to be changed and improved.

For design and build projects, an iterative "try and learn" approach is valuable. Just because a solution was perfect on launch day doesn't mean it will remain perfect forever. User needs and behaviors change, and competitors may mimic effective elements of our design, forcing us to innovate further ahead of them.

The Need for Data Science: While some models may work well for a small group of users with sufficient prior knowledge, all data science methods function more effectively at scale. Since we rarely design for a small user base, testing with as large a user group as possible allows us to gain confidence in achieving market success. Data science provides the means to evaluate and compare our designs with an unbiased perspective. Test accuracy and scope may vary depending on the progress of the design process or available resources. Leveraging data science in such cases enables evaluation under different conditions.

Activity Example: Deep qualitative feedback can only be obtained by focusing on small groups. However, statistical testing makes it possible to gather a useful level of feedback from large user groups. Referencing these results allows for efficient allocation of design and development resources. Furthermore, conducting additional sessions for detailed examination enables digging deeper into unexpected trends, patterns, and correlations discovered earlier.

In design research projects, you are unlikely to reach a final product without first testing the effectiveness of concepts and features. Testing features, information layouts, design language, and user flows through quantitative surveys or online user testing can reveal hidden issues and bottlenecks that weren't apparent in small groups. You can also ask users about changes in their behavior when using the product, or whether they would still choose to use it given their existing processes or workarounds. You might hear some uncomfortable truths, but it's better to know the bad news early so you can course-correct and act efficiently.

In design and development projects, effectively turning designs into products requires a more structured approach to "test and learn." Beyond traditional KPIs like conversion rates and click-through rates, you must also measure and visualize user experience metrics that were previously only measurable within design labs. Even common methods like A/B testing need to be expanded to test not just site layouts and features, but also the very pain points and undesirable behaviors that design solutions aim to resolve.
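For pain-point metrics such as task-completion time, a permutation test is one simple way to extend A/B analysis beyond click-through rates. The timings below are invented for illustration, and the permutation approach is one of several reasonable choices (a t-test would be another).

```python
import random
import statistics

random.seed(0)  # deterministic for reproducibility

# Hypothetical task-completion times in seconds from an A/B test where
# variant B is meant to relieve a "form takes too long" pain point.
variant_a = [62, 71, 58, 90, 75, 66, 83, 77, 69, 88]
variant_b = [48, 55, 51, 60, 47, 58, 53, 49, 62, 50]

observed = statistics.mean(variant_a) - statistics.mean(variant_b)

# Permutation test: how often does random relabeling of the pooled times
# produce a gap at least as large as the observed one?
pooled = variant_a + variant_b
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:10]) - statistics.mean(pooled[10:])
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"mean gap {observed:.1f}s, permutation p ≈ {p_value:.4f}")
```

The same structure applies to any user-experience metric the team can instrument, which is what makes the "test and learn" loop repeatable outside the design lab.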

Properly Incorporating Data Science

As outlined above, data science has the potential to offer more than just a mechanical extension of the classical design thinking process (though such tools are undoubtedly useful). There is no reason to view data science as incompatible with traditional qualitative approaches or to engage in ideological debates over which approach is best. We believe that combining design thinking, strategy, and data science enables us to create design solutions that are not only conceptually and experientially excellent but also successful in the marketplace.
 
This article is also published in the web magazine "AXIS".


Author

frog


frog is a company that delivers global design and strategy. We transform businesses by designing brands, products, and services that deliver exceptional customer experiences. We are passionate about creating memorable experiences, driving market change, and turning ideas into reality. Through partnerships with our clients, we enable future foresight, organizational growth, and the evolution of human experience. http://dentsu-frog.com/
