Diagnosis of Data Saturation and Holistic Interpretation: A Multi-Layered Approach to Planning Qualitative Sample Size
Abstract
Sample-size guidance in qualitative research has long centered on the idea of data saturation—the point at which further data collection yields no substantively new insights (Glaser & Strauss, 1967). Yet, as the notion has migrated from its grounded-theory roots into routine practice, it is often applied as a blunt numerical target that overlooks data richness, participant diversity, and theoretical coherence. This article traces the evolution of saturation, critically reviewing empirical studies and simulation models that attempt to predict when it occurs. We identify the main forces shaping saturation—study scope, population heterogeneity, and interview depth—and show why treating it as the sole indicator of adequacy can be misleading. To overcome these shortcomings, we introduce a multi-layered framework that combines saturation diagnosis with checks on data richness, diversity of perspectives, and theoretical fit. Practical tools such as iterative coding cycles, memo writing, and saturation grids illustrate how the framework enhances analytic transparency and rigor. Reframing the question from “How many participants are enough?” to “How much understanding is sufficient?” offers researchers a structured yet flexible path to defensible sample planning.