The AI Learning Process Record is an open, portable schema that captures the process of AI-assisted learning—giving students, educators, and evaluators a verified window into how learners think, struggle, iterate, and grow.
Traditional evaluation materials were designed for a world without AI. That world no longer exists.
AI can generate polished essays, solve problem sets, and write code. Evaluators can no longer reliably distinguish genuine student work from AI-generated output.
Grades, transcripts, and test scores capture what a student produced but reveal nothing about how they learned, reasoned, or persisted through challenges.
Every AI platform generates learning data in its own format. There is no interoperable, portable way to share verified process evidence across institutions.
ALPR builds on established learning data standards to create a trust chain from raw interactions to verifiable credentials.
Raw learning interactions captured as xAPI statements from AI platforms. Students never share this layer directly—it feeds the aggregation pipeline.
The core innovation: derived summaries capturing learning behaviors that are meaningful to evaluators. Behavioral signals, not conversation transcripts.
Signed, portable, evaluation-ready package using W3C Verifiable Credentials, CLR 2.0, and Open Badges 3.0. Cryptographic proofs ensure authenticity.
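As a rough sketch of the trust chain (field names and verb labels here are illustrative, not the normative ALPR schema), Layer 1 interaction events reduce to a Layer 2 behavioral summary, which a Layer 3 credential then wraps for signing:

```python
from collections import Counter

# Illustrative Layer 1 events (simplified xAPI-style statements).
# The verbs and summary fields below are hypothetical examples,
# not the normative ALPR verb profile.
events = [
    {"actor": "student-123", "verb": "asked", "object": "hint"},
    {"actor": "student-123", "verb": "challenged", "object": "ai-explanation"},
    {"actor": "student-123", "verb": "revised", "object": "draft-2"},
    {"actor": "student-123", "verb": "revised", "object": "draft-3"},
]

def summarize(events):
    """Layer 2: reduce raw interactions to behavioral signals.
    Only counts leave this function -- never conversation content."""
    verbs = Counter(e["verb"] for e in events)
    return {
        "revision_count": verbs["revised"],
        "challenge_count": verbs["challenged"],
        "total_interactions": len(events),
    }

summary = summarize(events)

# Layer 3 would wrap the summary in a signed W3C Verifiable
# Credential; only the envelope shape is sketched here.
credential = {
    "type": ["VerifiableCredential", "ALPRProcessRecord"],
    "credentialSubject": summary,
}
print(summary)
```

The key property the sketch shows: by the time data reaches Layer 2, transcripts are gone and only aggregate signals remain.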
ALPR captures behavioral patterns across six research-backed dimensions that reveal how a student engages with AI-assisted learning.
Does the student think independently or outsource cognition to AI?
Does the student reflect on their own thinking and learning strategies?
Does the student persist through difficulty or abandon ship at the first obstacle?
Does the student revise and improve work or accept first drafts?
Does the student apply concepts across different contexts and domains?
How effectively and critically does the student use AI as a learning tool?
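The six dimensions above lend themselves to a fixed-axis profile that renders directly as a radar chart. A minimal sketch, assuming a 0.0–1.0 scale and illustrative field names (the real schema defines its own scoring):

```python
from dataclasses import dataclass, asdict

@dataclass
class DimensionProfile:
    """Hypothetical container for the six ALPR process dimensions.
    Scores and field names are illustrative, not normative."""
    intellectual_autonomy: float
    metacognition: float
    productive_struggle: float
    revision_depth: float
    knowledge_transfer: float
    ai_literacy: float

    def radar(self):
        # Values in declared axis order, ready for a radar chart.
        return list(asdict(self).values())

profile = DimensionProfile(0.8, 0.7, 0.9, 0.85, 0.75, 0.8)
print(profile.radar())
```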
ALPR serves everyone in the learning ecosystem—from the students who own their records to the institutions that evaluate them.
Your learning journey is more than a GPA. ALPR gives you a verified, portable record of how you think, learn, and grow with AI—curated and owned by you.
You choose which learning episodes to share. You write the reflections. You own the narrative. ALPR doesn't surveil—it empowers you to prove what grades can't show.
Maya's ALPR shows evaluators that she doesn't just get the right answer—she challenges AI explanations, iterates on her approach, and transfers physics intuition to novel problems.
Without AP classes or expensive tutors, Jamal's transcript understates his abilities. His ALPR demonstrates sophisticated debugging instincts, deep iterative refinement, and growing AI literacy that rivals formal CS education.
Yuki's ALPR provides standardized, verifiable process data that transcends grading system differences and lets evaluators see her critical thinking in action.
For non-traditional learners, ALPR provides verified evidence of rigorous self-directed study that formal transcripts can't capture.
You control what to share, with whom, and for how long. Revoke access anytime.
Showcase thinking process, persistence, and growth that grades don't capture.
One record, many sources. Aggregate learning data from every AI tool you use.
Cryptographic signatures prove your record is authentic and unaltered.
Understand your own learning patterns and identify areas for growth.
Level the playing field for self-taught and non-traditional learners.
Understand how your child actually engages with AI—not as surveillance, but as insight. ALPR helps families support healthy learning habits and make informed decisions about AI tools.
ALPR is never a surveillance tool. It captures aggregated behavioral patterns—not conversation content. Families see learning habits, not private thoughts. The student always controls what is shared.
Instead of guessing or banning AI, the Chens can see aggregated patterns: Is their child accepting every AI answer uncritically? Or are they pushing back, verifying, and building understanding?
Families can help students identify their strongest learning episodes, understand which dimensions to develop, and make strategic decisions about which AI tools to invest time in.
Homeschooling families often struggle to provide standardized evidence of learning. ALPR gives them verified, institution-recognized process data that validates their curriculum choices.
For neurodiverse learners, ALPR captures the nuance that standardized tests flatten. Productive struggle looks different for every brain—ALPR shows the genuine engagement, not just the timed output.
Understand patterns and habits without reading private conversations.
Talk about AI usage with data, not anxiety. Guide healthy habits.
See which AI platforms actually drive learning, not just engagement.
Help your child build a credible, differentiated application portfolio.
Capture learning capacity that traditional metrics undercount.
Provide verified, standardized evidence for non-traditional education.
AI isn't going away. ALPR helps educators understand how students engage with AI tools, design better assignments, and shift focus from policing AI use to cultivating genuine learning.
ALPR doesn't replace your judgment—it amplifies it. See which students are building real understanding and which are coasting. Design interventions backed by process data, not guesswork.
Instead of banning AI or ignoring it, Dr. Martinez designs assignments where ALPR process data becomes part of the assessment. Students earn credit for how they engage, not just what they submit.
Aggregate ALPR data across a grade level reveals which concepts trigger productive struggle (good) versus frustration-driven abandonment (bad), informing curriculum pacing and support strategies.
Rather than playing "detect the AI," Prof. Williams uses ALPR to assess writing process. Did the student brainstorm, draft, get AI feedback, revise substantively? Or did they prompt once and submit?
Instead of blanket bans or unrestricted access, ALPR gives school leaders data to craft nuanced AI policies grounded in evidence of what actually supports learning.
See how students engage with AI, not just what they submit.
Inform curriculum and intervention with real learning process data.
Grade the journey, not just the destination. Reward genuine learning.
Stop playing "spot the AI." Focus on learning quality instead.
Identify which students need scaffolding and which are ready for challenge.
Build AI usage policies on data, not fear or speculation.
Evaluate learners on how they think, not just what they produce. ALPR provides scannable, verified process evidence designed for efficient review workflows.
Scannable in under 5 minutes per candidate. A radar chart at a glance, episode drill-downs for depth, growth trajectories for trends, and cryptographic verification for trust. No more guessing who wrote what.
Alice reads two applications with identical GPAs. One ALPR shows a student who challenges AI outputs and iterates deeply. The other shows surface-level engagement. The difference is now visible and verifiable.
Research readiness requires specific thinking habits: iterative refinement, knowledge transfer across domains, and the ability to critique AI-generated hypotheses. ALPR makes these habits visible.
At scale, ALPR enables efficient first-pass screening based on verified process metrics, while preserving human review for final selection, supported by rich drill-down data.
ALPR provides a common, standardized process-evidence layer that works across grading systems, languages, and educational traditions. Compare thinking habits, not incompatible transcripts.
Scannable at-a-glance view with optional drill-down for depth.
Cryptographic proofs eliminate manual integrity checks.
Compare process across grading systems, schools, and countries.
Behavioral patterns are hard to fake. Timing, revision depth, and consistency checks.
See the whole learner: persistence, creativity, independence, and growth.
Machine-readable credentials support efficient screening at any volume.
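To make the "verified, not manually checked" point concrete: the real record would carry a W3C Verifiable Credential proof (e.g. an Ed25519 signature), but a stdlib HMAC stand-in illustrates the same tamper-evidence idea. Everything here, including the key, is a simplified assumption:

```python
import hashlib
import hmac
import json

# Hypothetical issuer key; a real ALPR credential uses public-key
# proofs per the W3C VC data model, not a shared-secret HMAC.
SECRET = b"issuer-signing-key"

def sign(record: dict) -> str:
    # Canonicalize so the same record always yields the same bytes.
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(record: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(record), signature)

record = {"revision_count": 3, "challenge_count": 2}
sig = sign(record)
assert verify(record, sig)        # untouched record verifies
record["revision_count"] = 30     # tampering...
assert not verify(record, sig)    # ...breaks verification
```

The evaluator never re-derives the metrics by hand; any edit to the signed summary simply fails verification.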
Differentiate your platform by proving pedagogical value. ALPR's MCP connector specification lets your platform contribute verified process data to a cross-platform learner record.
Platforms that adopt ALPR signal commitment to learning outcomes over engagement metrics. Early adopters shape the standard and gain privileged positioning in the emerging credential ecosystem.
Tutoring platforms already capture scaffolding progression, mastery curves, and hint usage. ALPR normalizes this data into a portable format that proves platform effectiveness to parents and institutions.
LLMs are already used for learning but can't prove it. ALPR captures prompt refinement, challenge frequency, and synthesis patterns—turning informal learning into credentialed evidence.
Code assistants can capture uniquely powerful process signals: independence ratio, debugging approach, code review habits, and how students build on AI suggestions vs. accepting them wholesale.
Research tools capture source evaluation quality, cross-referencing behavior, and synthesis sophistication—signals that directly map to academic readiness dimensions.
Prove your platform drives real learning, not just engagement.
Join an open ecosystem rather than building proprietary silos.
ALPR compliance becomes a procurement checkbox for schools.
Cryptographic signing signals data integrity to institutions and parents.
Early adopters influence the specification direction and governance.
Students invest in platforms that contribute to their portable learning record.
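A connector's core job is normalization: mapping a platform's native event format onto ALPR's xAPI-style statements. This sketch uses entirely hypothetical verb IRIs and field names; the MCP connector specification defines the real mapping:

```python
# Hypothetical verb registry -- IRIs are illustrative placeholders,
# not the ALPR verb profile.
ALPR_VERBS = {
    "hint_requested": "https://alpr.example/verbs/requested-hint",
    "answer_challenged": "https://alpr.example/verbs/challenged",
}

def to_xapi(platform_event: dict) -> dict:
    """Normalize one native platform event into an xAPI-style
    Actor-Verb-Object statement (shape simplified for illustration)."""
    verb_id = ALPR_VERBS.get(platform_event["type"])
    if verb_id is None:
        raise ValueError(f"unmapped event type: {platform_event['type']}")
    return {
        "actor": {"account": {"name": platform_event["student_id"]}},
        "verb": {"id": verb_id},
        "object": {"id": platform_event["activity_id"]},
        "timestamp": platform_event["ts"],
    }

stmt = to_xapi({
    "type": "answer_challenged",
    "student_id": "student-123",
    "activity_id": "https://tutor.example/session/42",
    "ts": "2025-03-01T10:15:00Z",
})
print(stmt["verb"]["id"])
```

Because every connector emits the same statement shape, the student's record can aggregate across tutoring platforms, LLMs, and code assistants without bespoke integrations.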
The education system needs governance frameworks for AI-assisted learning that protect students, ensure equity, and maintain institutional trust. ALPR provides the data infrastructure to build evidence-based policy.
ALPR is designed for regulation, not against it. Privacy-first architecture, open specification, alignment to existing standards, and configurable fairness parameters make it a policy-ready framework.
State boards need standardized data on how AI tools affect learning outcomes. ALPR's aggregate, anonymized process data provides the evidence base for regulation that helps rather than hinders.
Accreditors need to verify that institutions maintain academic integrity while integrating AI. ALPR provides auditable, standardized evidence of learning quality across institutions.
ALPR is built as an extension to existing standards (xAPI, CLR 2.0, Open Badges 3.0), making it a natural candidate for formal standardization.
ALPR's configurable dimension weights and cultural-bias awareness features make it an ally for equity work. "Productive struggle" shouldn't penalize students from different learning traditions.
Regulate AI in education with real process data, not assumptions.
Built for FERPA, GDPR, and student data protection from the ground up.
Aligned to W3C, 1EdTech, and IEEE standards for global interoperability.
Configurable weights and transparent algorithms support fairness review.
Open-source, community-governed standard. No vendor lock-in.
Cryptographic verification and provenance tracking support accountability.
Explore different learner profiles and their process evidence records. Click the profiles below to see how different learning styles appear in the ALPR radar chart.
6-axis view — scannable in 30 seconds
Student-selected moments that tell their learning story
After three failed attempts, I deliberately shifted from asking for solutions to asking for analogies, and recursive function calls finally clicked through a Russian nesting dolls metaphor I built myself.
Spent 45 minutes tracing a subtle array boundary error. Rejected the AI's first fix because it masked the root cause. Wrote my first property-based test to prove the fix was correct.
Connected Shannon entropy from my CS reading to genetic information density in AP Bio. Used Claude to validate the analogy, then found two flaws in the AI's response through independent research.
My original thesis on urban planning was too broad. Each revision narrowed focus based on AI-surfaced counterarguments I hadn't considered. The final version was genuinely mine—shaped by challenge, not by copying.
ALPR captures behavioral patterns, not content. Students control what is shared. Conversations are never exposed. The architecture enforces data minimization at every layer.
ALPR extends proven learning data standards rather than reinventing from scratch.
Layer 1 interaction events use xAPI Actor-Verb-Object statements with a custom ALPR verb profile.
Layer 3 packaging uses the Comprehensive Learner Record (CLR 2.0) as the verifiable credential envelope.
Individual competency achievements represented as OpenBadgeCredentials within the CLR.
Underlying trust and verification layer using W3C Verifiable Credentials data model.
Process dimensions aligned to recognized competency frameworks for institutional compatibility.
Formal standards alignment pathway for the interaction data layer.
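How the packaging layers nest can be sketched as data: an Open Badges 3.0 achievement travels inside the CLR envelope, and both are themselves Verifiable Credentials. Field names below follow the general shape of those specs but are a hedged illustration, not a conformant credential:

```python
# Illustrative nesting only -- consult the CLR 2.0 / Open Badges 3.0
# specifications for the normative property names.
open_badge = {
    "type": ["VerifiableCredential", "OpenBadgeCredential"],
    "credentialSubject": {
        "achievement": {"name": "Iterative Refinement"},
    },
}

clr = {
    "type": ["VerifiableCredential", "ClrCredential"],
    "credentialSubject": {
        # Individual achievements ride inside the CLR envelope.
        "verifiableCredentials": [open_badge],
    },
}
print(clr["type"][1])
```

The practical consequence: any verifier that already understands W3C Verifiable Credentials can check the outer envelope, then drill into individual badge-level achievements.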
ALPR is an open specification in active development. Here's the path forward.
Note: This roadmap represents a prospective path forward. Actual timing will depend on community adoption, development progress, and stakeholder feedback.
Finalize JSON-LD context and JSON Schema. Build reference MCP connectors for 2–3 platforms. Pilot with 3–5 schools for evaluator feedback.
Build the evaluator dashboard renderer. Develop cohort benchmarking data. Publish evaluator interpretation guide and training materials.
Open specification for community contribution. Launch certification program for MCP connector compliance. Begin integration with credential platforms (Common App, professional registries).
Submit to 1EdTech as CLR extension. Register xAPI profile. Seek AACRAO endorsement. Formal IEEE standards track.
No. ALPR is fundamentally student-owned and student-curated. It captures aggregated behavioral patterns (like "revised 3 times" or "challenged the AI's response") rather than conversation content. Students choose which sessions to include, which episodes to highlight, and which institutions to share with. Raw conversations are never exposed. Think of it as a fitness tracker for learning habits, not a wiretap.
ALPR includes multiple anti-gaming safeguards. Timing analysis detects unnatural interaction patterns. Platform-level aggregate metrics are cryptographically signed and can't be selectively excluded. Cross-platform consistency checks flag sudden behavioral changes. And because ALPR captures behavioral signals over time (not single-point performances), gaming requires sustained, consistent behavioral change—which, if maintained long enough, is arguably genuine learning.
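One of the safeguards above, timing analysis, can be sketched in a few lines. The heuristic and threshold are hypothetical examples, not the specification's detector: human interaction intervals vary widely, while scripted activity tends to be metronomic.

```python
import statistics

def suspicious_timing(timestamps, min_spread=2.0):
    """Hypothetical anti-gaming check: flag near-uniform gaps
    between events (stdev below min_spread seconds) as likely
    scripted. Threshold is illustrative."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False  # too little data to judge
    return statistics.stdev(gaps) < min_spread

human = [0, 34, 95, 101, 240, 310]   # irregular, human-like seconds
bot = [0, 30, 60, 90, 120, 150]      # metronomic, script-like
print(suspicious_timing(human), suspicious_timing(bot))
```

A real deployment would combine many such signals (cross-platform consistency, revision-depth plausibility) rather than rely on any single heuristic.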
ALPR is an optional, supplementary credential—not a requirement. Students who don't use AI tools simply wouldn't have ALPR data, and that absence should not be held against them. For students who do use AI, ALPR rewards effective and critical AI use, not avoidance. The schema explicitly measures AI literacy alongside intellectual autonomy, recognizing that both are valuable.
This is an active area of concern in the specification. Concepts like "productive struggle" and "intellectual autonomy" may carry cultural assumptions. ALPR addresses this by making dimension weights configurable by evaluators, supporting culturally-informed interpretation guides, and maintaining an open specification process that invites diverse perspectives. The schema explicitly flags this as an open question to be resolved through broad community input.
Each AI platform implements an MCP (Model Context Protocol) connector that normalizes its data into the ALPR schema. The student's ALPR record aggregates process data from all connected platforms into a single, portable credential. Platform-specific adaptations are defined for AI tutors, general LLMs, code assistants, research tools, and writing assistants. Cross-platform episode linking is an active design challenge being addressed in the specification.
ALPR extends established learning data standards: xAPI 1.0.3 for interaction events, CLR 2.0 (Comprehensive Learner Record) for credential packaging, Open Badges 3.0 for competency achievements, W3C Verifiable Credentials 2.0 for trust and verification, CASE for competency framework alignment, and IEEE P9274.1.1 for formal standards compliance. It builds on these foundations rather than starting from scratch.
Yes. ALPR is an open specification. The JSON Schema, documentation, and reference implementations are all publicly available. The project welcomes community contributions, and governance is designed to transition toward a multi-stakeholder model as the ecosystem grows. No single vendor controls the standard.
This is an open question in the specification. The minimum viable dataset—how many interactions or hours constitute a meaningful record—is being determined through pilot testing. The goal is to balance statistical significance with accessibility, ensuring the bar isn't so high that only privileged students with extensive AI access can build useful records.
ALPR is in active development and open to contributions from educators, developers, admissions professionals, students, and policymakers.