Educatian workflow guide

IRB Amendment Packet Builder

A practical guide for turning real project changes into HRP-502/503-style protocol language, consent language, recruitment updates, data-security language, and instrument appendices.

The worked example is TeachPlay: an AI-enhanced educational game design microcredential site with surveys, portfolio artifacts, learner-facing AI policy language, and public web deployment.

01 / Purpose

Most amendment work is translation, not blank-page writing

When a research project changes, the hard part is often not deciding what changed. The hard part is translating that change into the several languages an IRB packet expects: protocol language, consent language, recruitment language, data-security language, and instrument appendices.

Scope note. This guide does not replace your IRB office, HRPP policy, or the latest institutional template. It shows how to organize project evidence and draft reviewer-ready language for HRP-502/503-style documents. Always use your institution's current forms.
Not this

"AI writes my IRB amendment."

This

"Claude Code turns my project records into a structured amendment draft I can review."

Best use

AI tutor, survey, telemetry, artifact upload, recruitment, or consent changes in an existing education study.

02 / Obsidian setup

Start slowly: install Obsidian, then make a public-safe research vault

The packet builder works best when Obsidian is not treated as a writing app, but as the research control room. Install Obsidian, create a local vault, then add only project metadata, decisions, links, and drafting notes. Keep raw participant data, private consent forms, and sensitive IRB correspondence in approved institutional storage.

Beginner onboarding PNG 1: install Obsidian, create a public-safe vault, and keep protected research records outside the vault.
| Step | Action | Why it matters for IRB amendments |
|---|---|---|
| 1 | Install Obsidian from the official download page. | Use a local-first workspace that can hold linked project notes without forcing private data into a public repo. |
| 2 | Create a new vault such as ResearchVault or use an existing private vault. | The vault becomes the index for study purpose, status, instruments, and amendment decisions. |
| 3 | Create folders: wiki/entities, wiki/irb/approved-baselines, wiki/irb/amendment-packets, and wiki/sources. | Separates public-safe project memory from controlled baseline documents and generated packet drafts. |
| 4 | Create a TeachPlay project note using the starter template. | Gives Claude Code or Codex a stable place to read study purpose, participant population, data categories, and repo evidence. |
| 5 | Add pointers to private approved IRB docs instead of copying sensitive content into public notes. | Preserves review traceability without leaking protocol language, consent forms, or participant information. |
```
wiki/
  entities/
    TeachPlay.md
  irb/
    approved-baselines/
    amendment-packets/
  templates/
  sources/
```
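The vault skeleton above can be scaffolded with a short script so every project starts from the same layout. This is a minimal sketch, assuming the folder list shown in the guide; the `scaffold_vault` helper and the `VAULT_FOLDERS` constant are illustrative names, not part of the starter templates.

```python
from pathlib import Path

# Folder layout from the guide; adjust names to local conventions.
VAULT_FOLDERS = [
    "wiki/entities",
    "wiki/irb/approved-baselines",
    "wiki/irb/amendment-packets",
    "wiki/templates",
    "wiki/sources",
]

def scaffold_vault(root: str) -> list[str]:
    """Create the public-safe vault skeleton and return the created paths."""
    created = []
    for rel in VAULT_FOLDERS:
        path = Path(root) / rel
        # exist_ok makes the script safe to re-run on an existing vault.
        path.mkdir(parents=True, exist_ok=True)
        created.append(str(path))
    return created
```

Running it against an empty directory creates only folders, never content, so nothing sensitive can leak into the vault by accident.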

Download the Obsidian vault starter and paste it into the first TeachPlay project note, then replace placeholders with verified project facts.

03 / Architecture

Connect the research memory you already have

The strongest version of the workflow connects four sources that researchers often keep separate: a knowledge vault, project repositories, existing IRB documents, and live research instruments.

Beginner onboarding PNG 2: collect the evidence bundle before drafting any IRB language.

The useful claim is modest: this workflow helps you avoid missing sections, contradictions, or copy-paste drift when your actual research system changes.

04 / Inputs

Build an amendment input bundle before asking for prose

A reliable packet starts with an input bundle. Do not start by asking the assistant to "write the IRB." Start by asking it to inventory the project change and mark what evidence it used.

| Input | What it contributes | Example path or source |
|---|---|---|
| Vault project page | Study purpose, participants, project status, related instruments | wiki/entities/AI Microcredential GameDesign.md |
| Existing approved docs | Baseline language that should not be silently rewritten | Current protocol, consent, recruitment, instruments |
| GitHub repo | Actual features added: surveys, uploads, telemetry, AI policy pages | Educatian/TeachPlay |
| Survey/API records | Instrument IDs, item banks, export fields, distribution mode | Qualtrics, REDCap, OSF, Canvas, local CSV |
| Change request | The human reason for amendment: what is new, why now, who is affected | One-page intake memo |
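The bundle above can also be captured as a small machine-readable inventory with an explicit "evidence verified" marker per input, which is what Prompt 1 later asks the assistant to produce. This is a sketch only; the dictionary keys, paths, and the `missing_evidence` helper are illustrative, not a required schema.

```python
# Illustrative amendment input inventory; replace with verified project facts.
inventory = {
    "vault_project_page": {
        "source": "wiki/entities/AI Microcredential GameDesign.md",
        "contributes": "study purpose, participants, status, instruments",
        "evidence_verified": True,
    },
    "approved_baseline_docs": {
        "source": "institutional storage (pointer only)",
        "contributes": "baseline language not to be silently rewritten",
        "evidence_verified": False,
    },
    "survey_records": {
        "source": "Qualtrics export metadata",
        "contributes": "instrument IDs, item banks, distribution mode",
        "evidence_verified": False,
    },
}

def missing_evidence(inv: dict) -> list[str]:
    """Return bundle entries still lacking verified evidence."""
    return sorted(k for k, v in inv.items() if not v["evidence_verified"])
```

Anything returned by `missing_evidence` should become a bracketed placeholder in the draft, not plausible filler prose.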
05 / Deep research layer

Draft around the determinations reviewers must make

The amendment packet should not be organized around whatever the assistant can write fastest. It should be organized around the questions an IRB reviewer must answer. For AI-enhanced education research, the most reusable anchors are the Common Rule approval criteria, informed-consent elements, documentation/waiver rules, and exemption or limited-review conditions.

Workflow image: a TeachPlay-style project change is translated into reviewer-facing outputs using 45 CFR 46.111, 46.116, 46.117, and 46.104 as organizing logic.
| Reviewer question | Regulatory anchor | Packet artifact |
|---|---|---|
| Are risks minimized and reasonable for the study change? | 45 CFR 46.111 approval criteria | Protocol delta + risk-change rationale |
| Will participants understand the new procedures and data uses? | 45 CFR 46.116 informed consent elements | HRP-502-style consent insert |
| Is written documentation needed, waived, or replaced by an information sheet? | 45 CFR 46.117 documentation of consent | Consent process note |
| Does the change remain in an educational/survey/interview exemption path, or does it need review? | 45 CFR 46.104 exemption categories and limited IRB review conditions | Reviewer crosswalk |
| What special risks come from AI, secondary data use, or re-identification? | OHRP/SACHRP AI considerations + NIH participant privacy principles | AI/data-security appendix |
Deep-research conclusion. The public guide should avoid saying "HRP-502/503 are universal." Institutions use different numbering and revise templates over time. The safer, accurate phrase is HRP-502/503-style: consent, protocol, recruitment, data security, and appendix language mapped to the user's current local forms.
06 / Route decision tree

Decide the amendment route before drafting the packet

One common failure mode is drafting a full packet when the change is only an administrative record update, or treating a substantive data-flow change like a minor edit. Before writing the HRP-502/503-style language, ask what route the change belongs to.

Beginner onboarding PNG 3: route the change before drafting HRP-502/503-style language.
| Route | Typical trigger | Packet depth |
|---|---|---|
| Record update | Non-substantive typo, contact update, public URL correction with no participant-facing procedure change | Vault/IRB log note; no full amendment packet unless local policy says otherwise |
| Minor amendment draft | New survey items, refined recruitment wording, small logging dictionary clarification, added artifact appendix | Protocol delta, consent check, revised attachment, reviewer crosswalk |
| Consultation first | New AI/API boundary, new identifiable free text, new participant population, new sensitive data, changed risk profile | Short consult memo before drafting final language |
Practical rule. If participants experience a new procedure, if new data are collected, or if data cross a new AI/API boundary, do not treat it as a cosmetic edit.
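The practical rule above can be expressed as a small routing function, useful as a pre-drafting checklist. This is a sketch under the guide's own rule of thumb; the flag names and `route_change` helper are illustrative, and the output should always yield to local HRPP policy.

```python
def route_change(*, new_participant_procedure: bool,
                 new_data_collected: bool,
                 crosses_new_ai_api_boundary: bool,
                 new_population_or_sensitive_data: bool,
                 participant_facing_text_changed: bool) -> str:
    """Route a proposed change before drafting, per the guide's rule of thumb."""
    # Highest-risk triggers go to a human consult before any drafting.
    if new_population_or_sensitive_data or crosses_new_ai_api_boundary:
        return "consultation_first"
    # New procedures, new data, or revised participant-facing text
    # warrant at least a minor amendment draft.
    if (new_participant_procedure or new_data_collected
            or participant_facing_text_changed):
        return "minor_amendment_draft"
    # Everything else is an administrative record update.
    return "record_update"
```

The function is deliberately conservative: any AI/API boundary change escalates, never the reverse.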
07 / Worked example

TeachPlay: AI microcredential study amendment scenario

The public example uses TeachPlay because it combines the recurring ingredients that trigger amendment work: a live learning site, AI-use guidance, survey evaluation, portfolio artifacts, and potential learner interaction traces.

Study surface

AI-enhanced educational game design microcredential with 12 sessions and learner-facing resources.

Possible change

Add pre/post Qualtrics surveys, portfolio artifact upload prompts, and optional interaction telemetry.

IRB pressure

Consent must explain what is collected, what is optional, what AI tools see, and how artifacts/logs are protected.

| Project evidence | IRB translation |
|---|---|
| TeachPlay has AI-use policy and session pages. | Protocol should clarify whether AI use is part of instruction, research data, or both. |
| Pre/post surveys exist or are being generated in Qualtrics. | Attach final survey instruments and state matching logic, anonymity, incentives, and export fields. |
| Portfolio pages request game-design evidence. | Consent should say whether artifacts are collected for course/program evaluation, research analysis, or optional showcase use. |
| Site may record progress, search, quiz, or annotation events. | Data-security appendix should define event logs, retention, de-identification, and access controls. |
Example image: TeachPlay's public project facts and repository features become a structured amendment packet, but the final risk judgment and submission stay with the investigator.
08 / Data-flow register

Make the AI boundary visible

For AI-enhanced education projects, the data-flow register is often more useful than a paragraph. It forces the team to name each data element, identify whether it crosses an AI/API boundary, and decide whether consent language needs to change.

New example image: a TeachPlay-style AI data-flow register. The dashed line marks the boundary reviewers usually care about most: participant text or metadata leaving the instructional system for AI/API processing.
| Register field | Why it matters | Example |
|---|---|---|
| contains_free_text | Free text may accidentally include identifiers or sensitive context. | AI feedback prompt, reflection note, artifact title |
| crosses_ai_or_api_boundary | Consent and security language should say whether data are sent outside the local research system. | External AI feedback request |
| stored_in_research_export | Some data are processed transiently but not retained; others become research data. | Telemetry event vs retained artifact metadata |
| consent_sentence_needed | Flags whether participant-facing language must be revised. | Yes for AI text retention; often yes for new logs |

Downloadable starters: AI data-flow register and telemetry dictionary.
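The register's fields compose naturally into a mechanical flagging rule for the last column. This is a sketch, assuming the field names from the table; the example rows and the `needs_consent_sentence` heuristic are illustrative and do not replace investigator judgment.

```python
# Illustrative register rows using the field names from the table above.
REGISTER = [
    {"element": "reflection_note", "contains_free_text": True,
     "crosses_ai_or_api_boundary": True, "stored_in_research_export": True},
    {"element": "page_view_event", "contains_free_text": False,
     "crosses_ai_or_api_boundary": False, "stored_in_research_export": True},
    {"element": "ai_feedback_request", "contains_free_text": True,
     "crosses_ai_or_api_boundary": True, "stored_in_research_export": False},
    {"element": "session_heartbeat", "contains_free_text": False,
     "crosses_ai_or_api_boundary": False, "stored_in_research_export": False},
]

def needs_consent_sentence(row: dict) -> bool:
    """Heuristic: free text crossing the AI/API boundary, or any retained
    data element, should surface in participant-facing language."""
    return ((row["contains_free_text"] and row["crosses_ai_or_api_boundary"])
            or row["stored_in_research_export"])
```

Note that transient free text sent across the boundary still flags, even when nothing is retained: the sending itself is the disclosure reviewers care about.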

09 / Prompt sequence

Use Claude Code like a change-control assistant

Prompt 1 - inventory
Read the TeachPlay vault page, repo README, current survey/evaluation files, and existing IRB packet folder. Create an amendment input inventory. Do not draft prose yet. List every file used and mark missing evidence.
Expected output

A table of evidence sources, changed study surfaces, affected IRB documents, and unanswered questions. This prevents the model from filling gaps with plausible but unverified language.

Prompt 2 - protocol delta
Using only the verified inventory, create a protocol delta table. Columns: approved baseline, proposed change, affected participants, new data elements, risk change, document section, rationale, human verification needed.
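The column list in Prompt 2 can be pinned down as a CSV schema so every delta table the assistant produces is diffable against the last one. A minimal sketch using the standard-library `csv` module; `COLUMNS` and `write_delta_rows` are illustrative names taken from the prompt's column list.

```python
import csv
import io

# Column order mirrors Prompt 2.
COLUMNS = [
    "approved_baseline", "proposed_change", "affected_participants",
    "new_data_elements", "risk_change", "document_section",
    "rationale", "human_verification_needed",
]

def write_delta_rows(rows: list[dict]) -> str:
    """Serialize protocol delta rows to CSV text with a fixed header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

A fixed header also makes missing columns fail loudly instead of silently dropping a field like `human_verification_needed`.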
Prompt 3 - draft sections
Draft amendment-ready inserts for HRP-503-style protocol sections and HRP-502-style consent sections. Preserve existing study framing. Use bracketed placeholders where the approved baseline is missing.
Prompt 4 - reviewer crosswalk
Create a reviewer-facing crosswalk that explains the change in plain language: what changed, why the change is minimal risk, what participants will be told, and what new attachments are included.
Prompt 5 - adversarial check
Review the draft like an IRB reviewer. Find places where the consent language promises less than the data-flow register shows, where a new data element lacks a retention/access statement, or where an AI/API boundary is hidden behind generic wording.
10 / Packet outputs

The packet should be reviewable as a folder

The output is not a single document. It is a small amendment packet that a PI can inspect, edit, and paste into the institution's current system.

Beginner onboarding PNG 4: generate the packet, review it with humans, verify it in the browser, and submit manually.
```
amendment_packet/
  00_README_review_first.md
  01_change_summary.md
  02_protocol_delta_table.csv
  03_hrp503_protocol_inserts.md
  04_hrp502_consent_language.md
  05_recruitment_script_update.md
  06_data_security_appendix.md
  07_instrument_appendix/
    pre_survey_items.md
    post_survey_items.md
    telemetry_dictionary.csv
    artifact_collection_note.md
  08_ai_data_flow_register.csv
  09_reviewer_crosswalk.md
  10_pi_review_checklist.md
  11_playwright_smoke_report.png
```
Do not automate final submission. The final paste into Cayuse, Kuali, VERA, eRA, or another IRB system should remain human-controlled. The packet is a drafting and consistency tool.
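Because the packet is a folder, its completeness can be checked mechanically before a human review pass. A sketch assuming the example layout above; `EXPECTED` lists the top-level files and the `missing_packet_files` helper is an illustrative name.

```python
from pathlib import Path

# Top-level files from the example packet layout; adjust to local templates.
EXPECTED = [
    "00_README_review_first.md",
    "01_change_summary.md",
    "02_protocol_delta_table.csv",
    "03_hrp503_protocol_inserts.md",
    "04_hrp502_consent_language.md",
    "05_recruitment_script_update.md",
    "06_data_security_appendix.md",
    "08_ai_data_flow_register.csv",
    "09_reviewer_crosswalk.md",
    "10_pi_review_checklist.md",
]

def missing_packet_files(packet_dir: str) -> list[str]:
    """Return expected packet files that do not yet exist on disk."""
    root = Path(packet_dir)
    return [name for name in EXPECTED if not (root / name).exists()]
```

An empty return value means the packet is structurally complete; it says nothing about whether the contents are correct, which stays with the PI.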
11 / Sensitive language

AI/data-security language must be concrete

IRB reviewers do not need vague claims that an AI tool is "secure." They need to know what data moves where, whether identifiers are included, whether raw text is stored, and who can access outputs.

| Weak wording | Better amendment wording |
|---|---|
| The study uses AI to support learning. | The instructional site may provide AI-assisted feedback on learner-submitted game-design artifacts. Research exports will not include direct identifiers in the artifact-analysis dataset. |
| Data will be kept confidential. | Survey responses, artifact metadata, and interaction logs will be stored separately from direct identifiers when feasible. Access is limited to approved study personnel. |
| Telemetry will be collected. | Event logs may include page views, quiz attempts, portfolio submission status, timestamps, and feature-use events. Logs will not intentionally collect private messages outside the study system. |
| AI-generated text may be analyzed. | If AI feedback or learner-AI interaction text is retained for research, the consent form will describe what text is retained, whether it is optional, and whether it is de-identified before analysis. |
AI-specific review issue. OHRP's SACHRP recommendations emphasize that AI/ML research can complicate identifiability, secondary use, privacy expectations, and group harms. In practice, amendment language should name the data flow, not merely the model name.
12 / Human review

Build a reviewer loop into the workflow

The most useful automation is not more prose. It is a checklist that forces the PI to approve or reject each change before submission.

PI review

Confirms study design, risk level, recruitment, incentives, and whether the change is actually an amendment.

Data review

Checks whether the draft accurately names identifiers, logs, survey fields, artifacts, AI services, and retention.

Template review

Moves language into the institution's current HRP-502/503-style form without preserving stale template text.

Red flags worth fixing before submission

| Red flag | Why it matters | Fix |
|---|---|---|
| Consent says "survey data" but the platform also logs telemetry. | Participant-facing language under-describes the data collected. | Add a clear interaction-log sentence and attach telemetry dictionary. |
| AI feedback is described, but no data-flow boundary is named. | Reviewers cannot assess privacy/confidentiality. | State whether prompts, artifacts, or metadata leave the local system. |
| Artifacts might contain names or faces. | Uploaded work products can include identifiers even if the form does not ask for them. | Add artifact de-identification instructions and review workflow. |
| Recruitment language implies program requirement. | Voluntariness can be unclear in course or credential settings. | Separate instructional participation from research data use. |
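The first red flag lends itself to a mechanical pre-check: does the consent draft mention logging at all when the data-flow register says logs are collected? This is a crude keyword heuristic sketch (the `telemetry_red_flag` name, `element_type` field, and keyword list are illustrative), meant to flag drafts for human review, never to clear them.

```python
# Keywords that suggest the consent text acknowledges interaction logging.
LOG_KEYWORDS = ("interaction log", "telemetry", "event log")

def telemetry_red_flag(consent_text: str, register_rows: list[dict]) -> bool:
    """True when the register collects telemetry but the consent draft
    never mentions logging in any recognizable form."""
    collects_logs = any(r.get("element_type") == "telemetry"
                        for r in register_rows)
    mentions_logs = any(kw in consent_text.lower() for kw in LOG_KEYWORDS)
    return collects_logs and not mentions_logs
```

A False result only means the keywords appeared somewhere; the PI still verifies the sentence actually describes what the logs contain.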
13 / Starter downloads

Public-safe templates

These files are intentionally generic. They are meant to scaffold thinking, not to carry private protocol language or participant data.

14 / Playwright verification

Use Playwright to check the guide and the packet like a real page

After the amendment guide or project packet changes, verify it in a browser rather than trusting static file inspection. Playwright catches missing anchors, broken template links, invisible images, and layout regressions that a markdown-only workflow misses.

```shell
# from the GitHub Pages repository root
python -m http.server 8765 --bind 127.0.0.1

# in another terminal
npm install -D playwright
npx playwright install chromium
$env:BASE_URL="http://127.0.0.1:8765/irb-amendment-packet-builder/"
node irb-amendment-packet-builder/templates/playwright-irb-guide-smoke.mjs
```
| Check | What the smoke test verifies | Why it belongs in the workflow |
|---|---|---|
| Visible headings | IRB Amendment, Obsidian, TeachPlay, HRP-502/503-style, 45 CFR 46.111, and AI data-flow register text. | Confirms the page still carries the intended reviewer-facing frame. |
| Workflow images | At least four beginner onboarding PNGs and five total embedded images render with non-trivial visible dimensions. | Prevents broken PNG/SVG image paths after publishing. |
| Template links | Every templates/ download returns a successful response. | Protects the practical value of the guide. |
| Screenshot | Writes irb-amendment-packet-builder-smoke.png. | Gives a quick visual artifact for release checks or GitHub issue comments. |
Why this matters for IRB automation. The same pattern can test a generated amendment packet page: check that consent snippets, data-flow registers, telemetry dictionaries, and PI review prompts are present before anyone moves text into the live IRB system.
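The template-link check can also be prototyped without a browser by extracting templates/ hrefs from a saved copy of the page with the standard-library HTML parser, leaving the HTTP fetch to the Playwright run. A sketch only; `TemplateLinkFinder` and `find_template_links` are hypothetical names, not part of the smoke-test script.

```python
from html.parser import HTMLParser

class TemplateLinkFinder(HTMLParser):
    """Collect anchor hrefs that point into a templates/ directory."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if "templates/" in href:
                self.links.append(href)

def find_template_links(html_text: str) -> list[str]:
    """Return all templates/ download links found in the page HTML."""
    parser = TemplateLinkFinder()
    parser.feed(html_text)
    return parser.links
```

Each returned link can then be fetched against the local server started above; a non-200 response fails the check before anything is published.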
15 / FAQ

Common reviewer-facing questions

Can the assistant fill HRP-502 or HRP-503 directly?

It can draft text for the relevant sections, but the investigator should paste into the current institutional form. Template versions and required fields vary by institution and date.

Should the vault contain raw consent forms or participant data?

No. Use the vault for project metadata, decisions, and pointers. Keep raw participant data and sensitive IRB documents in controlled local or institutional storage.

What makes TeachPlay a good example?

It has a public instructional site, AI-use guidance, survey evaluation, portfolio artifact surfaces, and possible usage logs. Those are exactly the surfaces that require clear amendment language.

Is this only for education research?

No. The same pattern works for many social, behavioral, and educational studies, especially when a digital platform changes after initial approval.

What should never be automated?

Risk judgment, final consent promises, institutional policy interpretation, and final IRB submission. The workflow drafts and checks consistency; it does not approve research.

Why add another data-flow image if the protocol already describes data?

Because AI-enhanced systems often mix instruction, platform analytics, survey instruments, and optional text/artifacts. A diagram makes it harder to hide an API boundary or accidentally omit a retained data element.

16 / References

Official and template references