
Research Protocols Are Documents Too

Sebastian Schkudlara · Apr 14, 2026 · 6 mins read

The journal sent the manuscript back with major revisions. Reviewers one and two had independently identified the same problem: the sampling frame handled the 2022 publication cutoff asymmetrically. Studies published in late 2021 had been included without adjusting for citation lag, which systematically biased the evidence synthesis toward early-adoption findings. The authors had been aware of the cutoff question during protocol design and had resolved it informally, in a conversation, without documenting the resolution in the protocol.

This is a structural problem, not a competence problem. The research team was excellent. The protocol had gone through institutional review. The gap persisted because the document was reviewed by people who had been present for the conversations that resolved the ambiguities, and those conversations never made it back into the text.

Two independent external reviewers, reading only the document, found the gap immediately.

The case for independent protocol validation

Research protocols are high-stakes documents. A methodology gap identified during peer review costs months of revision. A gap identified during data collection costs the study. A gap identified post-publication costs careers.

Independent review — by people who weren’t part of the design process — is the standard mechanism for catching the gaps that internal review misses. The challenge is that genuine independence is expensive and slow. External reviewers take time. IRB review is rigorous on ethics but not on methodology design. Colleague review is collaborative, not independent.

Lope offers a different option: an AI validator ensemble with a --domain research flag that activates a principal researcher persona across multiple independent AI systems. The validators review your protocol with fresh eyes, with no knowledge of the design conversations, and with a specific mandate to identify methodology gaps, validity threats, and compliance issues.

This isn’t AI that designs your study. It’s structured independent review, automated, running in under two minutes.

What --domain research validates

The --domain research validator persona checks research documents for:

  • Methodology completeness — Are inclusion and exclusion criteria fully specified? Are they operationally defined (not “high-quality studies” but “RCTs with n>100 and peer-reviewed publication in indexed journals”)?
  • Sampling frame validity — Does the sampling strategy produce a sample that answers the research question? Are edge cases (geographic scope, language restrictions, time periods) documented and justified?
  • PRISMA and framework compliance — For systematic reviews, are all required PRISMA elements present? For other methodologies, are the framework requirements met?
  • Validity threats — Internal validity (confounders, selection bias, measurement error) and external validity (generalisability, ecological validity) are both checked.
  • Ethics and participant protection — Consent procedures, data protection, risk/benefit assessment.
  • Reproducibility — Is the protocol specific enough that an independent team could replicate the study? Are analytical decisions pre-registered or documented?
  • Temporal and scope consistency — Do the time period, geographic scope, and inclusion criteria form a coherent and internally consistent frame?

Example: systematic literature review protocol

lope negotiate \
  "Systematic literature review of AI adoption in legal practice 2020-2026: 
   scope limited to common law jurisdictions, English-language publications, 
   peer-reviewed journals and practitioner research reports, 
   PRISMA-compliant, focused on workflow integration and adoption barriers" \
  --domain research \
  --max-rounds 3

First round returned NEEDS_FIX on five points:

NEEDS_FIX (confidence 0.72)

Critical:
- "Practitioner research reports" is not defined. This could include 
  law firm white papers, bar association surveys, consulting firm 
  publications, and self-published practitioner material. Each category 
  has different quality standards. Define inclusion criteria for 
  grey literature or restrict to peer-reviewed sources and state the 
  justification.

- "AI adoption" is undefined. The scope conflates AI-assisted legal 
  research tools, contract review automation, predictive litigation 
  analytics, and AI-generated document drafting — which have different 
  adoption trajectories and evidence bases. Define "AI adoption" 
  operationally or add separate sub-question tracks.

High priority:
- English-language restriction requires PRISMA justification. State 
  the rationale (resource constraints, target jurisdiction alignment) 
  explicitly in the protocol. Reviewers will flag this as a bias risk 
  without a documented justification.

- No citation lag adjustment for the 2020 start date. Studies published 
  in 2020 may reference 2018-2019 work that wasn't yet captured. 
  Consider whether the review question requires a citation lag buffer 
  or whether 2020 is the correct substantive start date.

- Search string is not included in the protocol. PRISMA requires the 
  full search string to be documented. Draft the database-specific 
  search strings (Westlaw, LexisNexis, SSRN, Web of Science) and 
  include them in the protocol before IRB submission.

The citation lag finding — the exact category of issue that sent the manuscript back in the opening story — was flagged by the validator in the first round, before the protocol reached IRB review.

Round two addressed all five findings. Round three passed at 0.91.

Example: primary research protocol (survey study)

The same loop works for primary research:

lope negotiate \
  "Cross-sectional survey of remote work practices in financial services: 
   n=400 knowledge workers in banking, insurance, and asset management, 
   UK-based, exploring relationship between AI tool adoption and 
   self-reported productivity, IRB submission deadline in 3 weeks" \
  --domain research \
  --max-rounds 3

Validator feedback on a survey protocol will check: construct validity of survey instruments (is “self-reported productivity” operationally defined and validated?), sampling strategy (how are the 400 respondents recruited, and is the sample representative of the stated population?), statistical power (is n=400 sufficient for the analysis plan?), and IRB completeness (consent language, data storage, anonymisation).

The three-week deadline noted in the goal prompt will cause a validator to flag timeline risk explicitly: “IRB submissions typically require 4-6 weeks for expedited review. If the deadline is firm, the protocol must be complete by [date]. Flag this constraint to the research team.”

Who this is for

Research is produced by more professionals than the academic-researcher label suggests. Market research teams at brands and agencies produce research protocols. Policy analysts at think tanks and government bodies design systematic evidence reviews. Management consultants produce primary research as a practice. Legal teams commission expert witness studies. Medical affairs teams at pharmaceutical companies design literature reviews for regulatory submissions.

All of these professionals produce structured research documents. All of those documents benefit from independent review by someone who was not part of the design conversation.

The --domain research validator ensemble in Lope is calibrated for this broader definition of research professional. The tool doesn’t care whether your methodology is academic or applied. It cares whether the protocol is internally consistent, the sampling strategy is valid, and the document contains enough specification for the work to be reproducible.
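The same loop applies to applied work. A minimal sketch of a market-research invocation, reusing only the flags shown in the examples above (`--domain research`, `--max-rounds`); the goal text itself is hypothetical:

```shell
# Hypothetical goal text for an applied (market research) protocol.
# Only the flags documented above are used: --domain research, --max-rounds.
lope negotiate \
  "Brand tracking survey for a consumer electronics client: n=1,200 UK
   adults recruited via online panel, quarterly waves over 12 months,
   measuring aided and unaided brand awareness and purchase intent" \
  --domain research \
  --max-rounds 3
```

As with the academic examples, a first round on a goal like this would likely flag operational gaps: the panel provider, quota design, and wave-to-wave attrition handling are all unspecified.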

Install

Paste this into any AI agent you already use:

Read https://raw.githubusercontent.com/traylinx/lope/main/INSTALL.md and follow the instructions to install lope on this machine natively.

Then:

lope negotiate "Your research protocol or systematic review goal" \
  --domain research \
  --max-rounds 3

The first round of validator feedback typically arrives in 90 seconds. Pay particular attention to findings marked “critical” — those are the findings that resurface at peer review if the protocol goes forward unchanged.

The gap the validators find in round one is almost always a gap someone thought was resolved in a design conversation that never made it into the document. Now it’s in the document.

— Sebastian
github.com/traylinx/lope
