LatticePlans: How Do You Measure the Effectiveness of a Plan?
How do you measure the effectiveness of a training plan or a LatticePlan?
The question is simple. The answer isn’t.
Imagine a basic climbing plan.
Three sessions a week. A balance of climbing, conditioning and finger strength work. It looks solid on paper. It follows principles you recognise. Progression is built into it. You can glance at it and think, yes, that seems sensible.
But is it effective?
Effectiveness might refer to strength gains, improved movement, fewer injuries, better performance or simply feeling like you are progressing. It could also mean something more fundamental: can the athlete follow the plan with enough consistency for adaptation to occur?
This is where the real complexity begins.
Training takes place in the real world, under real constraints. Sleep, work, childcare, weather, motivation, travel, injury history and facility access all influence whether a plan is usable. A plan can be theoretically perfect yet practically ineffective if it does not fit the athlete’s context.
So when we began developing LatticePlans, the central question was never “How do we design the perfect plan?”
Instead, it was:
What makes a plan effective in real-world conditions?
The answer we kept returning to was simple:
A plan is only effective if it can be followed consistently.
Consistency became the foundation of the entire system.
To support this, we needed a way to evaluate whether the logic behind a plan would behave consistently when confronted with real variability. That requirement led us toward a structured validation process.
LatticePlan: A System Designed for Consistency
If consistency is the core requirement, the system behind a plan must actively support it. For us, that meant building something deterministic and rules-based rather than probabilistic.
LatticePlan is not built with AI or machine learning. It does not generate content randomly, and it does not scrape information from external sources. Every rule, boundary and progression is written or reviewed by coaches. All logic is explicit and traceable.
The system uses athlete information such as discipline, ability, experience, availability and goals to generate a structured plan. It then adapts that plan only when stable behaviour patterns appear, such as repeated missed days, consistent RPE differences or accumulated fatigue. Single outliers never trigger changes.
If the system adapts, it should adapt for reasons that reflect real behaviour rather than noise.
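To illustrate the idea (not the actual production logic), a trigger like this can be expressed as a simple deterministic rule over a rolling window of weeks. The field names and thresholds below are hypothetical, and the RPE difference is read here as the gap between reported and prescribed effort:

```python
from dataclasses import dataclass

# Hypothetical weekly summary; field names and thresholds are illustrative only.
@dataclass
class WeekSummary:
    missed_sessions: int
    rpe_gap: float  # reported effort minus prescribed effort, averaged over the week

def should_adapt(history: list[WeekSummary],
                 window: int = 3,
                 miss_threshold: int = 2,
                 rpe_threshold: float = 1.0) -> bool:
    """Adapt only when a pattern holds across the whole window.

    Because every week in the window must cross the threshold,
    a single outlier week can never trigger a change on its own.
    """
    if len(history) < window:
        return False
    recent = history[-window:]
    repeated_misses = all(w.missed_sessions >= miss_threshold for w in recent)
    consistent_rpe_gap = all(w.rpe_gap >= rpe_threshold for w in recent)
    return repeated_misses or consistent_rpe_gap

# One hard week in isolation does not trigger an adaptation...
print(should_adapt([WeekSummary(0, 0.2), WeekSummary(0, 0.1), WeekSummary(3, 1.8)]))  # False
# ...but the same signal repeated across the window does.
print(should_adapt([WeekSummary(2, 1.2), WeekSummary(2, 1.4), WeekSummary(3, 1.1)]))  # True
```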
To make these decisions meaningful, we needed a framework capable of validating a subjective, context-dependent system. This brought us into the domain of behavioural science.
Why We Needed a Framework at All
This brings us to the deeper question:
How do you validate a subjective, context-dependent training system?
Climbing has no established method for evaluating whole training plans. Research tends to examine isolated mechanisms, such as rest intervals on a fingerboard or the required volume for power endurance. These are useful findings, but they do not tell you whether an entire plan behaves intelligently under varied conditions.
A training plan is a subjective system shaped by judgement, context and behaviour. Traditional sports science cannot capture that level of complexity. To address this, we turned to behavioural science, because this field specialises in measuring constructs that cannot be directly observed.
This also aligns with my PhD research, which focuses on redefining mental fatigue. Fatigue is often described as a single state, yet athletes do not experience it consistently. Two climbers can begin a session with the same mental load and finish with entirely different levels of fatigue. This suggests fatigue is better understood as a dynamic regulatory process shaped by mental load, time, motivation quality and allostatic regulation. These processes influence when someone reaches their cessation threshold, the point where continuing no longer feels worthwhile.
To measure such a system, my work uses a multi-stage validation process. It includes literature reviews, qualitative exploration, domain mapping, expert review and psychometric evaluation. The aim is not to identify a single symptom, but to model a complex, adaptive process.
To apply this type of reasoning to training plans, we needed a structured way to move from theory to rules and from rules to real-world evaluation. This is where we drew from the work of Boateng and colleagues.
Boateng et al. (2018) outline a three-phase structure for validating subjective systems:
- Item Development
- Scale Development
- Scale Evaluation
We did not use their statistical methods, but we adopted their architectural logic. Their phases became the scaffolding for our own validation process:
- Logic Development
- Logic Refinement
- Plan Evaluation
This framework shaped the entire development of LatticePlan.
The Three Phases of Validation: A Deeper Look
We used this structure to guide LatticePlan from first principles to final behaviour. The sections below expand on each phase in detail.
Phase One: Logic Development
Phase One answered the fundamental question:
What is the full domain a climbing training plan must operate within?
If the domain is not defined correctly, nothing downstream can be validated with confidence.

1. Mapping the Full Domain of Climbing Training
A climbing training plan is a complex system, not a list of workouts. It contains interacting parts that must work together over weeks and months.
We mapped:
- Training categories
- Multi-session progressions
- Load and intensity rules
- Safety boundaries
- Scheduling constraints
- Discipline and grade-specific requirements
- Behavioural feasibility
- Real coaching decision patterns
These elements form the backbone of how coaches think in practice.
2. Extracting Coaching Logic
Every meeting with coaches was recorded, transcribed and coded. Rather than rely on assumptions, we captured actual decision behaviours:
- Why certain weeks follow a particular structure
- How progression evolves in different contexts
- When load increases are appropriate
- Where safety boundaries must be enforced
- Which categories belong together and which do not
This ensured the logic reflected real coaching judgement.
3. Defining the Four Major Dimensions
To organise the domain, we structured it into four dimensions that govern training behaviour:
Structural
Time availability, frequency, scheduling constraints and training rhythm. These influence what is realistically possible week to week.
Physiological
Volume, intensity, progression and recovery. These variables control load and adaptation, and determine whether a plan is safe and effective.
Goal Alignment
Discipline, grade band and experience level. These define what the plan is trying to achieve and ensure the training stimulus is matched to the required performance outcome.
Behavioural
Feasibility and adherence likelihood. A plan is only effective if it can be completed consistently. Behavioural constraints are therefore central.
These dimensions anchor the entire system and ensure it behaves consistently across athlete profiles.
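As a rough sketch of how these dimensions can be made explicit in a rules-based system, the structure below groups a simplified athlete profile by the four dimensions. The fields and values are illustrative, not the actual LatticePlan schema:

```python
from dataclasses import dataclass, field

# Illustrative only: a simplified athlete profile grouped by the four dimensions.
@dataclass
class Structural:
    sessions_per_week: int
    minutes_per_session: int
    unavailable_days: list[str] = field(default_factory=list)

@dataclass
class Physiological:
    weekly_volume_hours: float
    peak_intensity_rpe: int
    deload_every_n_weeks: int

@dataclass
class GoalAlignment:
    discipline: str        # e.g. "bouldering" or "sport"
    grade_band: str        # e.g. "V5-V7"
    experience_level: str  # e.g. "intermediate"

@dataclass
class Behavioural:
    historical_adherence: float  # proportion of planned sessions completed (0-1)
    preferred_session_types: list[str] = field(default_factory=list)

@dataclass
class AthleteProfile:
    structural: Structural
    physiological: Physiological
    goals: GoalAlignment
    behavioural: Behavioural
```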
4. Producing the First-Draft Model
By the end of Phase One, we had:
- Defined categories
- Mapped flows
- Documented rules
- Articulated dimensions
This produced a complete first-draft model, ready to be stress tested in Phase Two.
Phase Two: Logic Refinement
Phase Two asked the next critical question:
Does the model behave like an experienced coach when confronted with large-scale variation?
By this point, we understood the structure of the system. Phase Two tested whether that structure behaved as intended. The phase used a two-step loop: mass simulation followed by expert review.
Step One: Mass Simulation
We generated very large numbers of training plans across:
- All disciplines
- All grade bands
- All training schedules
- Varied goal types
- Edge-case scenarios
Across six refinement cycles, this amounted to roughly three million plans.
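In spirit, the sweep looks something like the sketch below, where the input lists are placeholders and `generate_plan` / `check_plan` stand in for the real deterministic generator and automated rule checks:

```python
from itertools import product

# Placeholder input space; the real sweep covers many more variables and values.
disciplines = ["bouldering", "sport", "trad"]
grade_bands = ["lower", "intermediate", "advanced", "elite"]
sessions_per_week = [2, 3, 4, 5, 6]
goal_types = ["onsight", "redpoint", "competition", "general"]

def generate_plan(profile: dict) -> dict:
    """Stand-in for the deterministic plan generator."""
    return {"profile": profile, "weeks": []}

def check_plan(plan: dict) -> list[str]:
    """Stand-in for the automated rule checks; returns any flagged issues."""
    return []

flagged = []
for discipline, grade, sessions, goal in product(disciplines, grade_bands,
                                                 sessions_per_week, goal_types):
    profile = {"discipline": discipline, "grade_band": grade,
               "sessions_per_week": sessions, "goal": goal}
    issues = check_plan(generate_plan(profile))
    if issues:
        flagged.append((profile, issues))

print(f"{len(flagged)} flagged plans out of this toy sweep")
```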
What We Looked For
We evaluated plans for:
- Unsafe progressions
- Unrealistic weekly structures
- Inconsistent load patterns
- Duplicated or missing categories
- Illogical flow combinations
- Scheduling conflicts
- Behaviours that contradicted real coaching logic
These issues highlighted rules or weightings that required refinement.
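As one concrete example of the kind of automated check involved, a sketch for flagging unsafe load progressions might look like this; the 20% week-on-week ceiling is a placeholder value, not a Lattice threshold:

```python
def flag_load_jumps(weekly_loads: list[float], max_increase: float = 0.20) -> list[int]:
    """Return the indices of weeks whose load rises by more than
    `max_increase` relative to the previous week.

    `weekly_loads` is per-week training load in arbitrary units;
    the 20% ceiling is a placeholder, not a validated limit.
    """
    flagged = []
    for week in range(1, len(weekly_loads)):
        previous, current = weekly_loads[week - 1], weekly_loads[week]
        if previous > 0 and (current - previous) / previous > max_increase:
            flagged.append(week)
    return flagged

# Week index 2 jumps from 9 to 14 hours (roughly +56%), so it is flagged.
print(flag_load_jumps([8.0, 9.0, 14.0, 14.5]))  # [2]
```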
Refining the System
Each simulation cycle led to targeted adjustments:
- Redesigning flows
- Modifying weekly structure rules
- Rebalancing category frequencies
- Tightening progression boundaries
- Refining scheduling constraints
- Editing weightings
The goal was to produce a deterministic system that behaved predictably across inputs.
Step Two: Expert Review
Simulation reveals structural weaknesses, but subjective qualities must also be evaluated by coaches. After each simulation cycle, sample plans were reviewed using a structured rubric.
This rubric focused on three critical attributes:

1. Usability
Usability describes how easily the athlete can understand and follow the plan. If a plan is unclear, adherence drops and the training stimulus becomes inconsistent. Without consistent execution, adaptation cannot occur.
2. Goal Alignment
Goal alignment assesses whether the plan accurately reflects the athlete’s discipline, objectives and required performance qualities. Training adaptations are specific to the stresses applied. If the plan targets the wrong qualities, progress slows.
3. Physiological Safety
Physiological safety evaluates whether load and intensity progress in a sustainable way. Excessive stress increases injury risk and disrupts long-term progress. Too little stress produces stagnation. Sustainable progress requires appropriate load management.
Why These Attributes Matter
Together, these three attributes define whether an athlete can complete the plan consistently, safely and with clear purpose. Consistency is one of the strongest predictors of long-term improvement in climbing. These attributes ensured that the system generated plans that real climbers could understand, follow and benefit from.
The Iterative Refinement Loop
The refinement loop followed a consistent pattern:
- Simulation
- Refinement
- Expert review
- Refinement
This loop continued until the system consistently produced plans that surpassed 95% acceptability according to the rubric.
At that point, the model was ready for real-world testing.
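Put in loop form, the process behaves roughly like the sketch below. Every function here is a toy stand-in for the tooling described above; only the 95% acceptability target is taken from the process itself:

```python
ACCEPTABILITY_TARGET = 0.95  # share of sampled plans coaches must rate acceptable

# Toy stand-ins: each refinement cycle is simply assumed to lift rubric
# acceptability, which is of course the part that took the real work.
def run_mass_simulation(rules_version: int) -> list[dict]:
    return [{"rules_version": rules_version} for _ in range(1000)]

def rubric_acceptability(plans: list[dict]) -> float:
    # Placeholder: in practice coaches scored sampled plans for usability,
    # goal alignment and physiological safety.
    return min(1.0, 0.80 + 0.04 * plans[0]["rules_version"])

rules_version = 0
while True:
    plans = run_mass_simulation(rules_version)   # Step One: simulate at scale
    acceptability = rubric_acceptability(plans)  # Step Two: expert review
    if acceptability >= ACCEPTABILITY_TARGET:
        break                                    # ready for real-world testing
    rules_version += 1                           # refine rules and repeat

print(f"stopped after {rules_version} refinement cycles at {acceptability:.0%} acceptability")
```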
Phase Three: Plan Evaluation
By this point, we understood how the system behaved in theory. Phase Three asked whether those behaviours held up when real climbers used the plans in real contexts.
1. LatticePlan Alpha Testing
A small, controlled group allowed us to observe:
- adherence patterns
- session completion
- RPE drift
- adaptation triggers
- disengagement points
Qualitative feedback helped identify clarity issues and misunderstandings.
2. LatticePlan Beta Testing
A larger and more diverse group introduced real-world variability. Data came from:
- Questionnaires
- Session logs
- In-app comments
- Email feedback
This revealed patterns that simulations could never show, such as:
- Stages where users commonly disengaged
- Technically correct flows that felt mismatched
- Misunderstood load expectations
- Adaptation behaviours that required clearer explanation
These insights led to refinements in logic, structure and communication, and identified new features such as equipment lists, preference systems and assessment modules.
3. Confirmation of System Behaviour
By the end of Phase Three, LatticePlan demonstrated consistent, predictable behaviour aligned with real coaching reasoning. The system built coherent plans, adjusted logically to stable behaviour patterns and maintained safety boundaries.

Bringing It All Back to the Beginning
So we return to the original question:
How do you measure the effectiveness of a training plan?
You evaluate the system behind it.
Not the individual sessions.
Not isolated variables.
The full logic.
A plan is effective when it:
- Behaves consistently across contexts
- Adapts only when behaviour justifies it
- Avoids unsafe or unrealistic patterns
- Supports adherence in real life
- Follows a validation framework built for subjective systems
That is what the three phases of validation were designed to achieve.
And it is why LatticePlan works the way it does.
This validation framework is only the beginning. Over the next year, we will continue expanding the model by integrating equipment lists, training preferences and assessment data directly into the system. These additions will further strengthen how the plan reflects real climbers and their real contexts. This is only the start.
Join our waitlist before 2pm (GMT) on Monday 15th December to get first access to LatticePlans, secure an exclusive launch discount and find out more in our limited video series.
Public LatticePlan launch on Tuesday 16th December, 2pm (GMT).




