Pilot Studies & Proof of Concepts: Small Studies Matter
Mar 01, 2026
Small Studies Have a Reputation Problem, Which Says More About Academia Than Your Research
Several years ago, I submitted a paper I was really excited about. It was the fourth time I’d tested a qualitative framework in a new context, and I was so proud of the richness of my data. Then I received this comment, the kind that could easily have prompted the editor to reject my paper:
Reviewer Comment: There are fundamental limits to the technique used. There were a limited number of interviews done; the authors acknowledge that even of the limited (and therefore statistically doubtful) number of interviewees identified, only about half were actually interviewed.
Yes, I had a small sample size (n=15). But my findings were deep, and my discussion was broad…so I pushed back:
My Response: Small sample sizes can still produce meaningful results if sampling methods are appropriately designed to increase reliability and validity. Note: the limited time for interviews did not necessarily have a negative impact on the sample size—I conducted a parallel study in Singapore and interviewed 16 farmers over more than 12 months. Text added (ln 291): ‘The aim was to ensure variation within the sample. While there is no definite calculation to determine sample size for adequately representing the sample population in qualitative research studies, the rule is to include enough people to reach a saturation point where little to no new information is generated, also called “sufficient redundancy” [27, 28]. Research by Guest et al. [29] suggest that approximately twelve interviews should be sufficient to achieve saturation when researching a relatively homogeneous group (such as in this case of commercial urban farmers selling at Sydney farmer’s markets).’
The second (and, thankfully, final) round of reviewer comments? ‘We…think that the qualitative study including interviews with 15 farmers is justified well. Besides, the author has taken up the critiques by the reviewers carefully.’
Is there a moment before you hit submit, when you re-read your abstract and think, wait, maybe it’s too trivial? Not the research (the research, you are confident, is robust) but the size? About that, you’re not so sure. Academic culture has trained us so thoroughly to equate scale with credibility that a small, albeit carefully designed, study can feel like underperformance rather than a real contribution.
This is a problem, and it is not yours to own. It belongs to a system that rewards large randomized controlled trials and multi-year longitudinal datasets while quietly dismissing the exploratory, the preliminary, and the proof-of-concept as somehow lesser. The implicit message is: come back when you have more data. The practical reality is that without small studies, there would be nothing to scale up from in the first place.
Pilot studies and proof-of-concept research are not the anteroom of real science. They are the foundation of it. They test whether your methods hold up in the field before you commit a decade and a grant budget to finding out they do not. They generate the preliminary data that makes funders take you seriously. They validate frameworks before you stake your dissertation on them. And sometimes, in contexts where trust is fragile and access is hard-won, a small purposive sample is not a limitation — it is the most rigorous and ethical choice available.
In this post, I want to make the case for small studies using four examples from my own research — across three continents, four methodological approaches, and more than a few moments of productive uncertainty. If you have ever hesitated to submit because your n felt too small, or because a reviewer made you feel like your sample size was a character flaw (been there, done that—as confessed already), you’ll want to stick around…
Before You Scale Up, You Have to Know If Your Methods Actually Work
In one study (Diehl et al., 2019), my co-authors and I set out to explore household food security among migrant urban farmers in three rapidly urbanizing cities: Delhi, India; Jakarta, Indonesia; and Quito, Ecuador. The sample sizes were small by most standards — ten migrant households in Delhi, eleven in Jakarta, and a mixed survey and interview sample in Quito. Each city was part of a larger research project, and the cross-city comparison emerged organically from a shared set of questions rather than a pre-planned comparative design.
What we were really doing, whether we named it that way at the time or not, was testing whether the same conceptual framework and the same interview approach could hold across radically different social, political, and agricultural contexts. Could questions derived from the USDA Household Food Security Survey Module translate meaningfully across a floodplain farm in Delhi, an informal peri-urban plot in Jakarta, and a municipally supported garden in Quito? The answer was: mostly yes, with important contextual adjustments. And that finding — that the framework was reliable with a little calibration — was itself a contribution.
If you are an early-career researcher sitting on exploratory qualitative data and wondering whether it is "enough" to publish, consider what this kind of study actually demonstrates. It shows that you can navigate complex, cross-cultural fieldwork. It shows that your conceptual framing is robust enough to measure the real world. It shows reviewers and future funders that you know how to design research that is sensitive to context without sacrificing rigor. An exploratory study is not what you publish while you wait for the real study. It is what makes the real study possible — and often, it is the real study.
We also need to be honest about what small, exploratory studies can do that large studies sometimes cannot: they let us sit with the unexpected. In Delhi, one household offhandedly mentioned that they ate better in the city than in their home village — not because they grew more food, but because they spent less on transport and could afford to buy more at the market. That finding would have been invisible in a regression table. In a small qualitative study, it became a thread worth pulling.
One Crop, One Rooftop, One City: The Case for Doing Less, Better
There is a particular kind of academic anxiety that comes with deliberate constraint. When you tell someone your study is focused on one farm, one crop, and one city, you can almost hear them constructing your limitations section in their head before you even finish the sentence. And yet, sometimes the most useful thing a study can do is go deep on a narrow question rather than wide on a vague one.
In a life cycle assessment study (Diehl & Cheng, 2025), we evaluated the environmental performance of rooftop hydroponic production at one pioneer urban farm in Singapore, using basil as the crop of study. Singapore is a compelling context for this kind of work — a high-density city-state where less than one percent of land is in agricultural use, where ninety percent of food is imported, and where the government has made urban food production a national priority through its 30 by 30 vision to grow thirty percent of nutritional needs locally by 2030. The question was specific: how does rooftop hydroponic basil compare environmentally to conventionally grown basil imported from the United States, Singapore's main source?
The results were clear. One kilogram of rooftop basil emitted 0.59 kg of CO2, compared to 8.90 kg CO2 for conventional production. Transportation alone accounted for the largest share of the conventional production footprint. A sensitivity analysis showed that integrating renewable energy sources like solar could reduce rooftop emissions even more.
Now why does this matter? For researchers seeking grant funding, a study like this does not just answer a question. It establishes a baseline, introduces a methodology to a new context, and generates the kind of hard, comparative numbers that make grant reviewers take notice. Funders do not need you to have solved the problem. They need evidence that you understand the problem well enough to design a study that will. A tightly scoped proof-of-concept with defensible methods and quantifiable findings is often far more persuasive than a sprawling preliminary study with ambiguous conclusions.
The constraint was the point. By limiting scope to one crop, one farm, and one production system, we were able to apply a rigorous life cycle assessment methodology with enough depth to produce findings that are both meaningful and replicable. If you are writing a grant application and wondering how to demonstrate feasibility without the resources to run a large study, this is the model. Do less, do it well, and let the specificity of your findings make the argument for you.
You Don't Need a Perfect Framework, You Need to Test Whether It Holds
Frameworks look sophisticated in review papers. They look considerably less so when you try to apply them to a mountainous rural village in Fukuoka Prefecture with a declining population, a history of landslides, and four graduate students working across two countries and a language barrier. This is not a criticism of frameworks. It is a reminder that the distance between conceptual clarity and operational reality is where a great deal of useful research actually happens.
In a survey report (Diehl et al., 2022), our team applied a coupled human and natural systems — or CHANS — framework to investigate post-disaster recovery potential following the Northern Kyushu Heavy Rainfall event of July 2012 in rural Japan. The CHANS framework attempts to bridge social and natural sciences, recognizing that human and ecological systems are deeply interdependent and that studying them separately misses the interactions that matter most. Four graduate student projects investigated different facets of recovery: the effects of planted forests on hillside stability, urban-rural resource flows in landslide restoration, changing farmer demographics, and the potential for green tourism to support rural livelihoods.
Individually, each study was limited in scope. Taken together, triangulated through the CHANS lens, they produced a more holistic picture of what recovery actually requires — and what a government-funded infrastructure-first approach tends to miss. The synthesis revealed, for instance, that the same hillside replanting strategy that could reduce landslide risk also had the potential to support green tourism, which in turn could address the labor scarcity that aging farming communities face. None of the four individual studies could have produced that insight—or been published—alone.
For graduate students working on interdisciplinary or multi-method dissertation designs, this is an important reframe. You do not need a perfect framework. You need a framework that is generative enough to organize your questions and honest enough to acknowledge its own limits. The lessons learned from imperfect application are data. The gaps between what the framework predicted and what the field revealed are findings. Our paper is explicit about this — we identified the absence of a project-level research question as a key limitation and proposed a nested hierarchy of research questions for future CHANS studies. That transparency did not undermine the paper. It strengthened it.
The proof-of-concept study is not the study you publish when you do not have enough for a real paper. It is the study that demonstrates whether a complex, expensive, time-consuming approach is worth pursuing at scale — and in a resource-constrained world (in our case, we had less than a week to collect data on-site!), that is an enormously valuable contribution.
Sometimes a Small Sample Is Not a Limitation...It's the Whole Point
There is a version of the limitations section that reads like a legal disclaimer — a ritual acknowledgment of everything the study did not do, written in the passive voice and designed to preempt reviewer criticism. I’ve seen plenty of these as a reviewer, and they make me cringe; you can feel the authors’ insecurity in every line. On the other hand, there is the unapologetic version, where you name the constraints clearly and explain why, given the context, they were not just acceptable but necessary. A version that owns the research design rather than shrinks away from it.
In a study of participatory design and community attachment in Hong Kong, the Magic Lanes project (Chan & Diehl, 2022) offered a rare opportunity to examine what happens to community bonds when residents are invited — genuinely invited — into the design of their own shared spaces. The sample was thirty-six participants. In a city of more than seven million people, navigating significant changes in its governance landscape, thirty-six people willing to participate in a community research project is not a small number.
The framework that we tested positioned participatory design as a facilitator of community attachment — the sense of belonging, investment, and connection that holds communities together under pressure. Findings showed an increase in community attachment following participation in the case study design project, with more frequent and deeper participation associated with higher levels of attachment. Participants who engaged more intensively with the design process reported stronger community bonds than those who observed from the periphery.
For researchers working in politically complex, sensitive, or vulnerable community settings, this is the argument worth making loudly: your sample size is not a methodological weakness to apologize for. It is evidence that you understood your context well enough to design research that participants could actually trust. Chasing a larger sample in a community where trust is fragile and the stakes of participation are real is not rigorous — it is irresponsible. The ethical obligation to protect participants does not disappear because a reviewer wants a bigger n.
We also need to think carefully about what these small, place-based studies contribute that large comparative studies cannot. They give us nuance. They give us the granular, situated knowledge of how a specific intervention landed in a specific place at a specific moment in time. That knowledge does not generalize in the way that meta-analyses generalize — but it does something equally important. It grounds the abstract in the real, and it reminds us that behind every data point is a person who agreed to be counted.
Stop Waiting for the Perfect Study. The Field Needs Your Voice Now.
We have collectively built a research culture that is very good at telling scholars what their work is not. It is not large enough, not generalizable enough, not methodologically sophisticated enough. The bar keeps moving, and somehow the finish line always stays just out of reach. For early-career researchers, PhD students, and anyone working in under-resourced settings or politically complex contexts, this culture is not just discouraging — it is actively biasing the scholarly record.
The four studies I have shared here were all small by some measure. Some used sample sizes that would make a large-n quantitative researcher wince; others focused on a single site that might invite the objection of limited generalizability. And yet each published paper contributed something the field needed: a tested set of methods, a baseline environmental measurement, a framework evaluation, a proof that community attachment can be meaningfully studied even when participation is hard-won. We did not wait for perfect conditions. We did not apologize for limitations; we acknowledged them, yes, and then pointed to strengths.
But wait, I have a confession—or, rather, a claim of pride: three of the four studies involved some degree of master's student work. While I take advisory credit in developing the research methodologies, my main role was simply to anchor the small in the real; to help the novice researcher see that what they had was enough, that it was worth framing, worth submitting, worth adding to the record. Journal papers are ultimately about situating your data and findings within the broader discourse — taking our global knowledge one small step further, resolving one narrow question, closing one gap that the literature has been quietly ignoring. And for that, small studies are not just acceptable. They are indispensable.
The pilot study is not a lesser form of research. It is a specific, rigorous, and often strategically indispensable form of research that requires its own kind of intellectual bravery — to unapologetically say, here is what we tested, here is what we found, and here is what we still do not know. That honesty is a contribution. It is also exactly what the next researcher needs to build on.
So if you are sitting on a small study that you have been quietly convincing yourself is not ready, not enough, not worth the submission fee — I would like to suggest that the field has been waiting longer for your voice than you have been waiting for permission to use it.
Ready to take your pilot study from findings to first draft? The Draft It! Essential 10-Week Workshop is a self-paced course designed to guide you through preparing your first draft for submission to a peer-reviewed journal — with tutorials, a community discussion board, and a workbook to keep you moving toward a finished draft. Register today and at the same time join the Publish It! Community for free and connect with other academic writers navigating the same path.
Footnotes
Don’t forget to check out my new video this month on visualizing data in your journal paper.
You can find it in the Publish It! Library, or watch on the Publish It! YouTube channel. I upload new content monthly, so subscribe if you are interested!
If you are ready to Draft It! check out The Essential 10-Week Workshop – a self-paced course with tutorials, a community discussion board, and a workbook designed to guide you in preparing your first draft in 10 weeks for submission to a peer-reviewed journal!
And while you are at it—join the Publish It! Community and share your experiences with other academic writers. It’s free!
References
Diehl, J. A. (2020). Growing for Sydney: Exploring the urban food system through farmers’ social networks. Sustainability, 12(8), 3346.
Diehl, J. A., Oviatt, K., Chandra, A. J., & Kaur, H. (2019). Household food consumption patterns and food security among low-income migrant urban farmers in Delhi, Jakarta, and Quito. Sustainability, 11(5), 1378.
Diehl, J. A., & Cheng, J. (2025). Lifecycle assessment of rooftop hydroponic production systems: A case study of ComCrop in Singapore. Sustainability, 17, 10523.
Diehl, J. A., Asahiro, K., Hwang, Y. H., Hirashima, T., Kong, L., Wang, Z., Yao, H., & Tan, P. Y. (2022). A CHANS approach to investigating post-disaster recovery potential in rural Japan. Journal of Disaster Research, 17(3), 453–463.
Chan, W. F., & Diehl, J. A. (2022). Investigating participatory design and community attachment: A case study of Sai Ying Pun, Hong Kong. Journal of Urbanism: International Research on Placemaking and Urban Sustainability, 1–20.