Open Specification · Draft v0.1

Show How You Learn,
Not Just What You Know

The AI Learning Process Record (ALPR) is an open, portable schema that captures the process of AI-assisted learning—giving students, educators, and evaluators a verified window into how learners think, struggle, iterate, and grow.

The Problem

Educational Evaluation Is Broken in the AI Era

Traditional evaluation materials were designed for a world without AI. That world no longer exists.

Authenticity Crisis

AI can generate polished essays, solve problem sets, and write code. Evaluators can no longer reliably distinguish genuine student work from AI-generated output.

📈

Outcome-Only Evaluation

Grades, transcripts, and test scores capture what a student produced but reveal nothing about how they learned, reasoned, or persisted through challenges.

🔒

No Shared Standard

Every AI platform generates learning data in its own format. There is no interoperable, portable way to share verified process evidence across institutions.

What Educators See Today

  • A polished essay (who actually wrote it?)
  • A 4.0 GPA (but was it rote memorization?)
  • Test scores (one-day snapshot)
  • Letters of recommendation (subjective)
  • Extracurriculars (curated for application)

What ALPR Makes Possible

  • Verified record of iterative thinking process
  • Evidence of intellectual independence with AI
  • Longitudinal growth trajectories across domains
  • Student-authored reflections on learning moments
  • Anti-gaming signals rooted in behavioral patterns

Architecture

Three Layers, One Portable Record

ALPR builds on established learning data standards to create a trust chain from raw interactions to verifiable credentials.

Layer 1: Interaction Events

Raw learning interactions captured as xAPI statements from AI platforms. Students never share this layer directly—it feeds the aggregation pipeline.

xAPI 1.0.3 · Verbs: prompted, self-corrected, synthesised, reflected

Layer 2: Process Evidence Records

The core innovation: derived summaries capturing learning behaviors that are meaningful to evaluators. Behavioral signals, not conversation transcripts.

6 Dimensions · Learning Episodes · Growth Trajectories · Anti-gaming

Layer 3: Verifiable Credentials

Signed, portable, evaluation-ready package using W3C Verifiable Credentials, CLR 2.0, and Open Badges 3.0. Cryptographic proofs ensure authenticity.

CLR 2.0 · Open Badges 3.0 · W3C VC 2.0 · DID
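The trust chain can be sketched as a minimal Python pipeline. All names here are illustrative assumptions, not the official ALPR schema: the verb IRI, the pseudonymous account name, and the ProcessEvidenceRecord shape are placeholders. A Layer-1 xAPI-style statement is aggregated into a Layer-2 behavioral summary, whose canonical hash is what a Layer-3 credential proof would sign.

```python
import hashlib
import json

# Layer 1: a raw interaction as an xAPI-style Actor-Verb-Object statement.
# The verb IRI and account name are illustrative, not the official profile.
statement = {
    "actor": {"account": {"name": "learner-7f3a"}},  # pseudonymous learner ID
    "verb": {"id": "https://example.org/alpr/verbs/self-corrected"},
    "object": {"id": "https://example.org/activities/calculus-session-42"},
    "timestamp": "2025-11-14T16:02:00Z",
}

def summarize(statements):
    """Layer 2: derive behavioral counts; raw statements never leave this step."""
    counts = {}
    for s in statements:
        verb = s["verb"]["id"].rsplit("/", 1)[-1]
        counts[verb] = counts.get(verb, 0) + 1
    return {"type": "ProcessEvidenceRecord", "verb_counts": counts}

def credential_digest(summary):
    """Layer 3: hash of the canonical summary, the payload a VC proof would sign."""
    canonical = json.dumps(summary, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

evidence = summarize([statement, statement])
print(evidence["verb_counts"])           # {'self-corrected': 2}
print(len(credential_digest(evidence)))  # 64 (hex SHA-256)
```

Only the Layer-2 summary and its digest move downstream; the Layer-1 statements stay behind, matching the rule that students never share that layer directly.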

Process Dimensions

Six Dimensions of How Students Learn

ALPR captures behavioral patterns across six research-backed dimensions that reveal how a student engages with AI-assisted learning.

🧠

Intellectual Autonomy

Does the student think independently or outsource cognition to AI?

  • Prompt specificity trend
  • AI output acceptance rate
  • Challenge frequency per session
  • Independent reasoning episodes
🔭

Metacognitive Awareness

Does the student reflect on their own thinking and learning strategies?

  • Self-assessment accuracy
  • Strategy switching count
  • Reflection depth
  • Help-seeking appropriateness

Productive Struggle

Does the student persist through difficulty or abandon ship at the first obstacle?

  • Attempts before help-seeking
  • Time in struggle zone (%)
  • Abandonment rate
  • Scaffolding preferences
🔄

Iterative Refinement

Does the student revise and improve work or accept first drafts?

  • Average revision cycles
  • Revision depth distribution
  • Self-initiated revision rate
  • Surface vs. structural edits
🔗

Knowledge Transfer

Does the student apply concepts across different contexts and domains?

  • Cross-domain application count
  • Decreasing scaffolding rate
  • Analogical reasoning instances
  • Concept reuse patterns
🤖

AI Literacy

How effectively and critically does the student use AI as a learning tool?

  • Prompt engineering sophistication
  • Output verification rate
  • Appropriate tool selection
  • Limitation awareness score
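To illustrate how metrics like these could roll up into a dimension score, here is a hedged Python sketch. The weights, the saturation points, and the `autonomy_score` name are all hypothetical; the specification would calibrate real scoring against pilot data.

```python
def autonomy_score(acceptance_rate, challenges_per_session,
                   independent_episodes, sessions):
    """Illustrative intellectual-autonomy score in [0, 1].

    Accepting AI output uncritically lowers the score; challenging it
    and reasoning independently raise it. Weights and saturation points
    are placeholders, not calibrated spec values.
    """
    challenge_signal = min(challenges_per_session / 3.0, 1.0)  # saturate at 3/session
    independence = min(independent_episodes / max(sessions, 1), 1.0)
    return round(0.4 * (1.0 - acceptance_rate)
                 + 0.3 * challenge_signal
                 + 0.3 * independence, 3)

# A learner who accepts 30% of AI outputs unmodified, challenges the AI
# twice per session, and reasons independently in 8 of 10 sessions:
print(autonomy_score(0.30, 2, 8, 10))  # 0.72
```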

Use Cases

Built for Every Stakeholder

ALPR serves everyone in the learning ecosystem—from the students who own their records to the institutions that evaluate them.

For Students

Your learning journey is more than a GPA. ALPR gives you a verified, portable record of how you think, learn, and grow with AI—curated and owned by you.

The Student Promise

You choose which learning episodes to share. You write the reflections. You own the narrative. ALPR doesn't surveil—it empowers you to prove what grades can't show.

🎓

College Applications

Maya, 17 — Applying to engineering programs
Used Claude for physics problem-solving and Khan Academy for calculus over 18 months

Maya's ALPR shows evaluators that she doesn't just get the right answer—she challenges AI explanations, iterates on her approach, and transfers physics intuition to novel problems.

  1. Reviews her learning sessions and selects 5 breakthrough episodes
  2. Writes reflections on her "productive failure" debugging a bridge simulation
  3. Shares her growth trajectory showing steadily increasing intellectual autonomy
  4. Exports a signed credential to attach alongside her Common App
💻

Scholarship Portfolios

Jamal, 16 — First-generation college student
Self-taught programming using Copilot, YouTube, and ChatGPT

Without AP classes or expensive tutors, Jamal's transcript understates his abilities. His ALPR demonstrates sophisticated debugging instincts, deep iterative refinement, and growing AI literacy that rivals formal CS education.

  1. Curates episodes showing his progression from basic prompts to architectural reasoning
  2. Highlights cross-domain transfer: applying data structures to a community service app
  3. Includes his reflection on why he prefers hints over full solutions
  4. Attaches ALPR to scholarship applications where transcripts tell an incomplete story
🌍

International Students

Yuki, 18 — Applying from Tokyo to U.S. universities
Uses Duolingo, Claude, and Elicit for English language learning and research

Yuki's ALPR provides standardized, verifiable process data that transcends grading system differences and lets evaluators see her critical thinking in action.

  1. Shows steady CEFR-aligned growth trajectory from B1 to C1 across platforms
  2. Curates an episode where she challenged a cultural assumption in an AI-generated essay
  3. Demonstrates research methodology refinement using Elicit for her Extended Essay
  4. Shares selective credentials with each university, tailored to program focus
🚀

Graduate School & Career Transitions

Priya, 24 — Applying to data science master's programs
Career changer using AI tools to build skills outside formal education

For non-traditional learners, ALPR provides verified evidence of rigorous self-directed study that formal transcripts can't capture.

  1. Documents 12 months of self-directed machine learning study with AI tutors
  2. Shows growing sophistication in prompt engineering and output verification
  3. Highlights episodes of productive struggle with gradient descent concepts
  4. Presents longitudinal growth data as evidence of readiness for graduate study

Key Benefits for Students

🔑
Full Ownership

You control what to share, with whom, and for how long. Revoke access anytime.

💡
Beyond the GPA

Showcase thinking process, persistence, and growth that grades don't capture.

🛠
Portable Across Platforms

One record, many sources. Aggregate learning data from every AI tool you use.

Verified & Tamper-Proof

Cryptographic signatures prove your record is authentic and unaltered.

🎯
Self-Awareness Tool

Understand your own learning patterns and identify areas for growth.

Equity Amplifier

Level the playing field for self-taught and non-traditional learners.

For Families

Understand how your child actually engages with AI—not as surveillance, but as insight. ALPR helps families support healthy learning habits and make informed decisions about AI tools.

The Family Promise

ALPR is never a surveillance tool. It captures aggregated behavioral patterns—not conversation content. Families see learning habits, not private thoughts. The student always controls what is shared.

👪

Understanding AI Usage at Home

The Chen Family — Parents of a 14-year-old
Concerned about whether their child uses AI to learn or to avoid learning

Instead of guessing or banning AI, the Chens can see aggregated patterns: Is their child accepting every AI answer uncritically? Or are they pushing back, verifying, and building understanding?

  1. Review process dimension summary showing intellectual autonomy trends
  2. See that productive struggle scores are healthy—their child persists before seeking help
  3. Notice AI literacy is growing: their child now verifies outputs and questions assumptions
  4. Have a data-informed conversation about healthy AI habits rather than anxious speculation
📚

Supporting College Prep

The Johnsons — Parents navigating admissions
Want their daughter's application to stand out authentically

Families can help students identify their strongest learning episodes, understand which dimensions to develop, and make strategic decisions about which AI tools to invest time in.

  1. Review growth trajectories together to identify areas of genuine strength
  2. Discuss which learning episodes best tell their daughter's story
  3. Understand which platforms are most pedagogically valuable for her goals
  4. Support her in writing authentic reflections that connect to her aspirations
🏠

Homeschool Documentation

The Rivera Family — Homeschooling 3 children
Need credible, standardized evidence of learning progress

Homeschooling families often struggle to provide standardized evidence of learning. ALPR gives them verified, institution-recognized process data that validates their curriculum choices.

  1. Use AI tutoring platforms as part of the homeschool curriculum
  2. ALPR automatically captures learning process data across all platforms
  3. Generate verifiable credentials that satisfy state reporting requirements
  4. Present rich process evidence alongside portfolios for college applications
💪

Supporting Diverse Learners

The Okonkwo Family — Child with ADHD
Need to demonstrate learning capacity that traditional assessments miss

For neurodiverse learners, ALPR captures the nuance that standardized tests flatten. Productive struggle looks different for every brain—ALPR shows the genuine engagement, not just the timed output.

  1. ALPR's process data shows deep engagement despite non-linear learning patterns
  2. Growth trajectories reveal steady progress that timed tests obscure
  3. Scaffolding preference data informs effective accommodation strategies
  4. Share with evaluators who value process over single-point-in-time assessments

Key Benefits for Families

👁
Visibility Without Surveillance

Understand patterns and habits without reading private conversations.

💬
Informed Conversations

Talk about AI usage with data, not anxiety. Guide healthy habits.

💰
Investment Insight

See which AI platforms actually drive learning, not just engagement.

🛠
Application Advantage

Help your child build a credible, differentiated application portfolio.

🧡
Neurodiversity Support

Capture learning capacity that traditional metrics undercount.

🎓
Homeschool Credibility

Provide verified, standardized evidence for non-traditional education.

For Educators

AI isn't going away. ALPR helps educators understand how students engage with AI tools, design better assignments, and shift focus from policing AI use to cultivating genuine learning.

The Educator Promise

ALPR doesn't replace your judgment—it amplifies it. See which students are building real understanding and which are coasting. Design interventions backed by process data, not guesswork.

📋

Classroom Integration

Dr. Martinez — AP Biology Teacher
Wants to allow AI use but ensure students are actually learning

Instead of banning AI or ignoring it, Dr. Martinez designs assignments where ALPR process data becomes part of the assessment. Students earn credit for how they engage, not just what they submit.

  1. Assigns a research project where students must use AI tools as part of the process
  2. Reviews class-level process dimension data to identify who's thinking vs. copying
  3. Uses iterative refinement scores to weight the process portion of the grade
  4. Provides targeted feedback to students with low productive-struggle scores
📊

Data-Driven Instruction

Ms. Patel — Middle School Math Coordinator
Manages 6 math teachers across grades 6-8

Aggregate ALPR data across a grade level reveals which concepts trigger productive struggle (good) versus frustration-driven abandonment (bad), informing curriculum pacing and support strategies.

  1. Reviews anonymized, aggregate process dimension data across grade-level cohorts
  2. Identifies that fractions-to-algebra transfer shows unusually low knowledge-transfer scores
  3. Adjusts curriculum to add bridging activities at the identified transition point
  4. Monitors ALPR data over the next quarter to measure intervention effectiveness

Writing Instruction

Prof. Williams — College Writing Instructor
Teaching composition in the age of AI writing assistants

Rather than playing "detect the AI," Prof. Williams uses ALPR to assess writing process. Did the student brainstorm, draft, get AI feedback, revise substantively? Or did they prompt once and submit?

  1. Requires students to submit ALPR process data alongside final essays
  2. Reviews iterative refinement data: revision depth, self-initiated changes, structural edits
  3. Grades the quality of the revision process, not just the final product
  4. Uses data to teach students about effective revision strategies
🏫

School-Wide AI Policy

Principal Torres — K-12 School Leader
Developing the school's AI usage framework

Instead of blanket bans or unrestricted access, ALPR gives school leaders data to craft nuanced AI policies grounded in evidence of what actually supports learning.

  1. Pilots ALPR integration with two grade levels using approved AI platforms
  2. Collects anonymized aggregate data on how AI use correlates with learning dimensions
  3. Builds an evidence-based AI policy distinguishing productive from unproductive AI use
  4. Reports to parents and board with transparent process data backing the policy

Key Benefits for Educators

🔍
Process Visibility

See how students engage with AI, not just what they submit.

📈
Data-Driven Decisions

Inform curriculum and intervention with real learning process data.

📝
Process-Based Assessment

Grade the journey, not just the destination. Reward genuine learning.

End the Detection Arms Race

Stop playing "spot the AI." Focus on learning quality instead.

👥
Differentiated Support

Identify which students need scaffolding and which are ready for challenge.

📊
Evidence-Based Policy

Build AI usage policies on data, not fear or speculation.

For Evaluators

Evaluate learners on how they think, not just what they produce. ALPR provides scannable, verified process evidence designed for efficient review workflows.

The Evaluator Promise

Scannable in under 5 minutes per candidate. A radar chart at a glance, episode drill-downs for depth, growth trajectories for trends, and cryptographic verification for trust. No more guessing who wrote what.

🏫

Academic Admissions

Alice — Senior Admissions Reader, Liberal Arts College
Reviews 2,000+ applications per cycle

Alice reads two applications with identical GPAs. One ALPR shows a student who challenges AI outputs and iterates deeply. The other shows surface-level engagement. The difference is now visible and verifiable.

  1. Opens the ALPR dashboard: scans the 6-axis radar chart for a quick profile
  2. Notices strong intellectual autonomy and productive struggle—flags for closer read
  3. Clicks into a "breakthrough moment" episode: student's own reflection + verified data
  4. Checks the growth trajectory: steady, consistent improvement over 18 months
🔬

Graduate Research Programs

Dr. Kim — Graduate Admissions Committee Chair
Selecting PhD candidates for a computational biology lab

Research readiness requires specific thinking habits: iterative refinement, knowledge transfer across domains, and the ability to critique AI-generated hypotheses. ALPR makes these habits visible.

  1. Filters candidates by knowledge transfer and iterative refinement scores
  2. Reviews research deep-dive episodes showing how candidates use AI for literature review
  3. Checks AI literacy dimension: do candidates verify AI outputs against primary sources?
  4. Uses ALPR data alongside traditional materials for a holistic evaluation
🏆

Scholarship Selection

The Merit Foundation — National scholarship program
Awarding 500 scholarships from 50,000 applicants

At scale, ALPR enables efficient first-pass screening based on verified process metrics, while rich drill-down data preserves human review for final selection.

  1. Use aggregate ALPR scores for initial screening of 50,000 applicants
  2. Shortlist candidates with standout growth trajectories and productive struggle
  3. Reviewers examine curated episodes and student reflections for the final 2,000
  4. Verify credential authenticity programmatically—no manual integrity checks needed
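The programmatic authenticity check in step 4 can be sketched in a few lines of Python. This uses an HMAC purely as a standard-library stand-in for the credential proof; a real ALPR credential would carry an asymmetric W3C VC proof (e.g., Ed25519) resolved through the issuer's DID, and the field names here are illustrative.

```python
import hashlib
import hmac
import json

# Stand-in for a platform signing key. A real credential would use an
# asymmetric proof verified against the issuer's DID document; HMAC keeps
# this sketch standard-library-only.
ISSUER_KEY = b"platform-secret-key"

def sign(credential):
    payload = json.dumps(credential, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()

def verify(credential, proof):
    """Recompute the proof and compare in constant time."""
    return hmac.compare_digest(sign(credential), proof)

credential = {"learner": "learner-7f3a", "scores": {"intellectual_autonomy": 0.72}}
proof = sign(credential)
print(verify(credential, proof))  # True

credential["scores"]["intellectual_autonomy"] = 0.99  # tampering
print(verify(credential, proof))  # False
```

Any edit to the record after signing invalidates the proof, which is what lets reviewers skip manual integrity checks.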
🌐

Cross-System Evaluation

Evaluation team at a global organization
Assessing candidates across 100+ educational systems

ALPR provides a common, standardized process-evidence layer that works across grading systems, languages, and educational traditions. Compare thinking habits, not incompatible transcripts.

  1. Receive ALPR credentials alongside transcripts from varied international systems
  2. Use standardized process dimensions to compare candidates on equal footing
  3. Review platform-diversity data: strong candidates often use multiple AI tools effectively
  4. Leverage anonymous cohort percentiles for fair cross-system calibration

Key Benefits for Evaluators

5-Minute Review

Scannable at-a-glance view with optional drill-down for depth.

🔒
Verified Authenticity

Cryptographic proofs eliminate manual integrity checks.

📈
Standardized Comparison

Compare process across grading systems, schools, and countries.

🔍
Anti-Gaming Signals

Behavioral patterns are hard to fake: timing, revision depth, and consistency checks back every record.

🤝
Holistic Evaluation

See the whole learner: persistence, creativity, independence, and growth.

🚀
Scalable

Machine-readable credentials support efficient screening at any volume.

For AI Platforms

Differentiate your platform by proving pedagogical value. ALPR's MCP connector specification lets your platform contribute verified process data to a cross-platform learner record.

The Platform Opportunity

Platforms that adopt ALPR signal commitment to learning outcomes over engagement metrics. Early adopters shape the standard and gain privileged positioning in the emerging credential ecosystem.

🤖

AI Tutoring Platforms

Khan Academy, Duolingo, and similar
Adaptive learning platforms with rich interaction data

Tutoring platforms already capture scaffolding progression, mastery curves, and hint usage. ALPR normalizes this data into a portable format that proves platform effectiveness to parents and institutions.

  1. Implement MCP connector mapping existing data to ALPR xAPI verbs
  2. Surface ALPR process dimensions in the learner dashboard
  3. Enable one-click credential export for admissions applications
  4. Use aggregate ALPR data to demonstrate platform learning outcomes
💬

General LLMs

ChatGPT, Claude, Gemini, and similar
Conversational AI used widely for learning

LLMs are already used for learning but can't prove it. ALPR captures prompt refinement, challenge frequency, and synthesis patterns—turning informal learning into credentialed evidence.

  1. Build an ALPR MCP server that analyzes conversation patterns (not content)
  2. Detect and tag learning-relevant interactions: challenge, synthesis, reflection
  3. Offer users an "ALPR Learning Mode" that opts into process data collection
  4. Export verifiable process evidence as a platform-signed credential
💻

Code Assistants

GitHub Copilot, Cursor, Replit, and similar
AI-powered coding tools used by students

Code assistants can capture uniquely powerful process signals: independence ratio, debugging approach, code review habits, and how students build on AI suggestions vs. accepting them wholesale.

  1. Map code-editing interactions to ALPR verbs: iterated, abandoned-ai-output, self-corrected
  2. Compute AI literacy dimension from acceptance rate, modification depth, and verification
  3. Generate learning episodes from debugging sessions and project milestones
  4. Contribute platform-specific signals to the cross-platform ALPR record
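Step 1 above might look like the following Python sketch. The editor event names and the verb IRI base are hypothetical placeholders; the ALPR verbs (iterated, abandoned-ai-output, self-corrected) come from the mapping described above.

```python
# Hypothetical editor events mapped to ALPR xAPI verbs. The event names and
# verb IRI base are placeholders: what the learner did with an AI suggestion
# carries the behavioral signal, not the code itself.
VERB_MAP = {
    "suggestion_accepted_unmodified": "accepted-ai-output",
    "suggestion_accepted_then_edited": "iterated",
    "suggestion_rejected": "abandoned-ai-output",
    "bug_fixed_without_ai": "self-corrected",
}

def to_alpr_statement(event):
    verb = VERB_MAP.get(event["type"])
    if verb is None:
        return None  # not learning-relevant: never forwarded
    return {
        "actor": {"account": {"name": event["learner_id"]}},
        "verb": {"id": "https://example.org/alpr/verbs/" + verb},
        "object": {"id": event["file"]},
        "timestamp": event["timestamp"],
    }

event = {
    "type": "suggestion_accepted_then_edited",
    "learner_id": "learner-7f3a",
    "file": "bridge_sim.py",
    "timestamp": "2026-01-18T09:30:00Z",
}
print(to_alpr_statement(event)["verb"]["id"])  # ends with /iterated
```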
📚

Research Tools

Elicit, Consensus, Semantic Scholar, and similar
AI-powered research and synthesis platforms

Research tools capture source evaluation quality, cross-referencing behavior, and synthesis sophistication—signals that directly map to academic readiness dimensions.

  1. Track source evaluation and cross-referencing as xAPI interaction events
  2. Compute knowledge transfer dimension from cross-domain research patterns
  3. Generate "research deep-dive" learning episodes from extended sessions
  4. Sign and export as part of the student's multi-platform ALPR credential

Key Benefits for AI Platforms

🏆
Market Differentiation

Prove your platform drives real learning, not just engagement.

🔗
Interoperability

Join an open ecosystem rather than building proprietary silos.

📐
Institutional Sales

ALPR compliance becomes a procurement checkbox for schools.

🔐
Trust Signal

Cryptographic signing signals data integrity to institutions and parents.

🔭
Standard Shaping

Early adopters influence the specification direction and governance.

👥
User Retention

Students invest in platforms that contribute to their portable learning record.

For Policymakers & Standards Bodies

The education system needs governance frameworks for AI-assisted learning that protect students, ensure equity, and maintain institutional trust. ALPR provides the data infrastructure to build evidence-based policy.

The Policy Promise

ALPR is designed for regulation, not against it. Privacy-first architecture, open specification, alignment to existing standards, and configurable fairness parameters make it a policy-ready framework.

🏛

State Education Departments

State Board of Education
Developing statewide AI-in-education guidelines

State boards need standardized data on how AI tools affect learning outcomes. ALPR's aggregate, anonymized process data provides the evidence base for regulation that helps rather than hinders.

  1. Adopt ALPR as the recommended process-evidence standard for state schools
  2. Use anonymized aggregate data to measure AI's impact on learning dimensions statewide
  3. Identify equity gaps: which student populations benefit most/least from AI tools
  4. Set evidence-based requirements for AI platforms used in public schools
📜

Accreditation Bodies

Regional and national accreditation organizations
Updating accreditation standards for the AI era

Accreditors need to verify that institutions maintain academic integrity while integrating AI. ALPR provides auditable, standardized evidence of learning quality across institutions.

  1. Include ALPR-compatible process assessment in accreditation criteria
  2. Review institutional aggregate ALPR data as part of quality assurance
  3. Benchmark process dimension scores across accredited institutions
  4. Recognize institutions that demonstrate effective AI-enhanced pedagogy
🌐

International Standards Organizations

1EdTech, IEEE, W3C, and similar
Maintaining learning data interoperability standards

ALPR is built as an extension to existing standards (xAPI, CLR 2.0, Open Badges 3.0), making it a natural candidate for formal standardization.

  1. Review ALPR as a CLR 2.0 extension profile for AI-assisted learning
  2. Register ALPR xAPI verb profile in the official xAPI profile registry
  3. Coordinate with W3C Verifiable Credentials working group on privacy extensions
  4. Develop compliance certification for ALPR MCP connector implementations

Equity & Civil Rights Organizations

Education equity advocacy organizations
Ensuring AI in education doesn't widen existing gaps

ALPR's configurable dimension weights and cultural-bias awareness features make it an ally for equity work. "Productive struggle" shouldn't penalize students from different learning traditions.

  1. Audit ALPR dimension definitions and weights for cultural bias
  2. Advocate for configurable evaluation parameters that respect diverse learning styles
  3. Monitor whether ALPR adoption correlates with more equitable admissions outcomes
  4. Contribute to the open specification process to embed equity by design

Key Benefits for Policymakers

📊
Evidence-Based Policy

Regulate AI in education with real process data, not assumptions.

🔒
Privacy-First Design

Built for FERPA, GDPR, and student data protection from the ground up.

🌐
International Compatibility

Aligned to W3C, 1EdTech, and IEEE standards for global interoperability.

Equity-Auditable

Configurable weights and transparent algorithms support fairness review.

📂
Open Specification

Open-source, community-governed standard. No vendor lock-in.

🛠
Audit Trail

Cryptographic verification and provenance tracking support accountability.

Interactive Demo

See What Evaluators See

Explore different learner profiles and their process evidence records. Click the profiles below to see how different learning styles appear in the ALPR radar chart.

Process Dimensions Profile

6-axis view — scannable in 30 seconds

Curated Learning Episodes

Student-selected moments that tell their learning story

2025-11-14
Breakthrough: Recursion Finally Clicks
breakthrough_moment

After three failed attempts and a deliberate shift from asking for solutions to asking for analogies, recursion finally clicked: I connected recursive function calls to a Russian nesting-dolls metaphor I built myself.

Productive Struggle · Metacognitive Awareness · AI Literacy
2025-12-03
Debugging: The Off-by-One That Taught Me Testing
debugging_journey

Spent 45 minutes tracing a subtle array boundary error. Rejected the AI's first fix because it masked the root cause. Wrote my first property-based test to prove the fix was correct.

Intellectual Autonomy · Iterative Refinement
2026-01-18
Creative Synthesis: Biology Meets Information Theory
creative_synthesis

Connected Shannon entropy from my CS reading to genetic information density in AP Bio. Used Claude to validate the analogy, then found two flaws in the AI's response through independent research.

Knowledge Transfer · Intellectual Autonomy · AI Literacy
2026-02-01
Perspective Shift: Rewriting My Thesis Three Times
perspective_shift

My original thesis on urban planning was too broad. Each revision narrowed focus based on AI-surfaced counterarguments I hadn't considered. The final version was genuinely mine—shaped by challenge, not by copying.

Iterative Refinement · Metacognitive Awareness

Privacy & Trust

Privacy-First by Design

ALPR captures behavioral patterns, not content. Students control what is shared. Conversations are never exposed. The architecture enforces data minimization at every layer.

🔑 Student Control Points

  • Session opt-in: Choose which learning sessions to include in your record
  • Episode curation: Select which learning moments to highlight
  • Selective disclosure: Share different subsets with different institutions
  • Expiry & revocation: Records carry expiry dates; revoke access anytime
  • Right to be forgotten: Platform MCPs must support full deletion

🔒 Anti-Gaming Safeguards

  • Timing analysis: Genuine learning has characteristic pause patterns
  • Aggregate signing: Platform-level metrics can't be selectively excluded
  • Cross-platform checks: Consistency analysis flags sudden behavioral changes
  • Rewards effective AI use: Schema values smart AI usage, not avoidance
  • Third-party verification: Independent audits prevent platform metric inflation
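The timing-analysis safeguard can be illustrated with a small Python sketch: genuine sessions mix quick exchanges with long thinking pauses, while replayed or scripted sessions tend to be uniform. The coefficient-of-variation measure and the 0.5 threshold are illustrative choices, not spec values.

```python
from statistics import mean, pstdev

def pause_variability(timestamps):
    """Coefficient of variation of the gaps between interaction timestamps.

    Genuine learning sessions tend to mix quick exchanges with long
    thinking pauses; scripted or replayed sessions are suspiciously
    uniform. Returns None when there is too little data to judge.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return None
    return pstdev(gaps) / mean(gaps)

def looks_scripted(timestamps, threshold=0.5):
    # The 0.5 threshold is an illustrative placeholder, not a spec value.
    cv = pause_variability(timestamps)
    return cv is not None and cv < threshold

genuine = [0, 12, 15, 140, 160, 400]   # mixed short and long pauses (seconds)
scripted = [0, 30, 60, 90, 120, 150]   # metronome-regular replay
print(looks_scripted(genuine))   # False
print(looks_scripted(scripted))  # True
```

In a real deployment this would be one signal among several, combined with the cross-platform consistency checks listed above.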

✓ What ALPR captures

  • "Student refined their prompt 3 times before getting a useful response"
  • "Student challenged the AI's answer and provided an alternative reasoning"
  • "Average time in productive struggle zone: 12 minutes per session"
  • "Self-initiated revision rate: 68% of edits were unprompted"

✗ What ALPR never captures

  • "Student asked about photosynthesis and the AI responded with..."
  • Raw conversation transcripts or chat logs
  • Personal information beyond a pseudonymous learner ID
  • Content of student writing, code, or creative work
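The capture boundary above can be made concrete with a Python sketch of data minimization at ingestion. All field names are assumptions for illustration; the structural point is that prompt and response text are dropped before anything is stored.

```python
def minimize(raw_interaction):
    """Keep behavioral metadata; drop content before anything is stored.

    Field names are illustrative. The structural guarantee: prompt and
    response text never cross this boundary.
    """
    return {
        "learner_id": raw_interaction["learner_id"],  # pseudonymous
        "verb": raw_interaction["verb"],              # e.g. "challenged"
        "prompt_revision_count": raw_interaction["prompt_revision_count"],
        "duration_seconds": raw_interaction["duration_seconds"],
        # deliberately absent: prompt_text, response_text, attachments
    }

raw = {
    "learner_id": "learner-7f3a",
    "verb": "challenged",
    "prompt_revision_count": 3,
    "duration_seconds": 720,
    "prompt_text": "Explain photosynthesis ...",  # never stored
    "response_text": "Photosynthesis is ...",     # never stored
}
record = minimize(raw)
print("prompt_text" in record)          # False
print(record["prompt_revision_count"])  # 3
```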

Standards Alignment

Built on Established Foundations

ALPR extends proven learning data standards rather than reinventing from scratch.

xAPI 1.0.3

Layer 1 interaction events use xAPI Actor-Verb-Object statements with a custom ALPR verb profile.

CLR 2.0

Layer 3 packaging uses the Comprehensive Learner Record as the verifiable credential envelope.

Open Badges 3.0

Individual competency achievements represented as OpenBadgeCredentials within the CLR.

W3C VC 2.0

Underlying trust and verification layer using W3C Verifiable Credentials data model.

CASE Framework

Process dimensions aligned to recognized competency frameworks for institutional compatibility.

IEEE P9274.1.1

Formal standards alignment pathway for the interaction data layer.

Roadmap

From Specification to Standard

ALPR is an open specification in active development. Here's the path forward.

Note: This roadmap represents a prospective path forward. Actual timing will depend on community adoption, development progress, and stakeholder feedback.

Phase 1 — Foundation

Specification & Pilot

Finalize JSON-LD context and JSON Schema. Build reference MCP connectors for 2–3 platforms. Pilot with 3–5 schools for evaluator feedback.

Phase 2 — Tooling

Evaluator Tooling

Build the evaluator dashboard renderer. Develop cohort benchmarking data. Publish evaluator interpretation guide and training materials.

Phase 3 — Growth

Ecosystem Growth

Open specification for community contribution. Launch certification program for MCP connector compliance. Begin integration with credential platforms (Common App, professional registries).

Phase 4 — Standardization

Standards Body Alignment

Submit to 1EdTech as a CLR extension. Register the xAPI profile. Seek AACRAO endorsement. Pursue the formal IEEE standards track.

FAQ

Frequently Asked Questions

Is ALPR a surveillance tool?

No. ALPR is fundamentally student-owned and student-curated. It captures aggregated behavioral patterns (like "revised 3 times" or "challenged the AI's response") rather than conversation content. Students choose which sessions to include, which episodes to highlight, and which institutions to share with. Raw conversations are never exposed. Think of it as a fitness tracker for learning habits, not a wiretap.

Can students game ALPR to look better?

ALPR includes multiple anti-gaming safeguards. Timing analysis detects unnatural interaction patterns. Platform-level aggregate metrics are cryptographically signed and can't be selectively excluded. Cross-platform consistency checks flag sudden behavioral changes. And because ALPR captures behavioral signals over time (not single-point performances), gaming requires sustained, consistent behavioral change—which, if maintained long enough, is arguably genuine learning.

What if a student doesn't use AI? Does ALPR penalize them?

ALPR is an optional, supplementary credential—not a requirement. Students who don't use AI tools simply wouldn't have ALPR data, and that absence should not be held against them. For students who do use AI, ALPR rewards effective and critical AI use, not avoidance. The schema explicitly measures AI literacy alongside intellectual autonomy, recognizing that both are valuable.

Is ALPR culturally biased?

This is an active area of concern in the specification. Concepts like "productive struggle" and "intellectual autonomy" may carry cultural assumptions. ALPR addresses this by making dimension weights configurable by evaluators, supporting culturally-informed interpretation guides, and maintaining an open specification process that invites diverse perspectives. The schema explicitly flags this as an open question to be resolved through broad community input.

How does ALPR handle multiple platforms?

Each AI platform implements an MCP (Model Context Protocol) connector that normalizes its data into the ALPR schema. The student's ALPR record aggregates process data from all connected platforms into a single, portable credential. Platform-specific adaptations are defined for AI tutors, general LLMs, code assistants, research tools, and writing assistants. Cross-platform episode linking is an active design challenge being addressed in the specification.

What data standards does ALPR build on?

ALPR extends established learning data standards: xAPI 1.0.3 for interaction events, CLR 2.0 (Comprehensive Learner Record) for credential packaging, Open Badges 3.0 for competency achievements, W3C Verifiable Credentials 2.0 for trust and verification, CASE for competency framework alignment, and IEEE P9274.1.1 for formal standards compliance. It builds on these foundations rather than starting from scratch.

Is ALPR open source?

Yes. ALPR is an open specification. The JSON Schema, documentation, and reference implementations are all publicly available. The project welcomes community contributions, and governance is designed to transition toward a multi-stakeholder model as the ecosystem grows. No single vendor controls the standard.

How much data does a student need for a meaningful ALPR record?

This is an open question in the specification. The minimum viable dataset—how many interactions or hours constitute a meaningful record—is being determined through pilot testing. The goal is to balance statistical significance with accessibility, ensuring the bar isn't so high that only privileged students with extensive AI access can build useful records.

Get Involved

Shape the Future of Learning Credentials

ALPR is in active development and open to contributions from educators, developers, admissions professionals, students, and policymakers.