John Salter’s Blog

Follow my ruminations


  • Capability Assessment Method – Overview

    Purpose

    This assessment provides a concise, evidence‑based view of how well the organisation is set up to run, grow, and handle shocks, not just how much documentation it has. It combines a practical business lens with proven public‑sector and private‑sector capability models.


    Framework structure

    The method assesses capability across 11 domains that together describe how the business works:

    1. Context, Scope, Stakeholders & Strategy
    2. Leadership, Governance, Culture & Accountability
    3. Integrated Risk & Opportunity Management
    4. Framework, Design & Integration into Operations
    5. Planning, Objectives, Strategies & Change
    6. People, Capability, Culture, Communication & Awareness
    7. Customers, Markets, Stakeholders & Supply Chain
    8. Operational Control, Design, BCM Plans & Emergency Response
    9. Information, Data, Documentation & Digital
    10. Performance Measurement, Monitoring, Exercising & Review
    11. Learning, Improvement, Innovation & Resilience Evolution

    Each domain is assessed using evidence questions tailored to the organisation’s size, context, and sector.


    Three evidence tests (E1–E3)

    For each domain we look for three levels of evidence:

    • E1 – Exists
      Do the core structures and processes exist?
      (Policies, frameworks, processes, roles, plans, registers, documented approaches.)
    • E2 – Enabled
      Are they supported and usable?
      (Clear owners, resources, training, tools, data, and regular review cycles.)
    • E3 – Executed
      Are they actually used and making a difference?
      (Real examples where they shape decisions, behaviours, investments, and outcomes.)

    In practical terms, E1/E2 tell you “have we built the system?” and E3 asks “does the system change what happens?”


    Maturity scale (N–P–L–F)

    Evidence across E1–E3 is then converted into a four‑step maturity rating for each domain:

    • N – Absent
      No meaningful evidence; capability does not meet current needs.
    • P – Ad hoc
      Informal, person‑dependent, inconsistent; pockets of good practice but not reliable.
    • L – Defined
      Documented and repeatable, but weakly enforced; often strong on design, weaker on routine use.
    • F – Operational
      Embedded, consistent, tested, and reviewed; good evidence that it works in practice.

    The assessment also considers how well each domain is positioned for future challenges, using concepts like Emerging, Developing, Embedded, and Leading as narrative descriptors, but N/P/L/F remains the formal rating scale.
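    The conversion from evidence to rating can be sketched as a simple rule. The mapping below is an illustrative assumption, not the formal conversion (which is tailored to each organisation); it captures the common pattern that E3 evidence is what separates Defined from Operational.

```python
def maturity_rating(e1: bool, e2: bool, e3: bool) -> str:
    """Illustrative E1/E2/E3-to-N/P/L/F conversion (an assumed rule;
    real assessments tailor this to the organisation's context)."""
    if e1 and e2 and e3:
        return "F"  # Operational: built, enabled, and demonstrably used
    if e1 and e2:
        return "L"  # Defined: built and enabled, but routine use unproven
    if e1:
        return "P"  # Ad hoc: structures exist but lack owners and resources
    return "N"      # Absent: no meaningful evidence

# A domain that is strong on design but weak on routine use rates "L":
print(maturity_rating(True, True, False))
```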


    Outputs

    A typical assessment delivers:

    • Overall maturity position (e.g. percentage at Defined vs Operational).
    • Domain‑by‑domain ratings and commentary, highlighting strengths, gaps, and underlying evidence.
    • A simple heat‑map or scorecard showing current and target maturity across the 11 domains.
    • A short list of priority gaps and practical actions, sequenced so effort is focused where it matters most.


  • Which risk metrics matter most?

    In short – the ones that “mean more”

    Metric: % of critical risks with treatments that reduce risk to target

    Used in our Universal Framework, this is “necessary but not sufficient”.

    Risks, opportunities, and continuity needs are systematically identified, analysed, and updated. This is a fundamental domain in our Consolidated Framework.

    Indeed, we only analyse risks in order to manage them.

    But again, not sufficient.

    Risk‑based objectives, plans, and continuity strategies guide and control material change. This fundamental domain (in our Consolidated Framework) completes the “necessary and sufficient to be adequate” requirement.

    For this example client, the “Integrated Risk & Opportunity Management” domain (line 3 in the heat-map below) needed to move from “Defined” to “Operational”, while “Planning, Objectives, Strategies & Change” is already fully operational.

  • Capability is more than potential

    Walking the Talk: Why Theory Must Meet Practice

    We’ve all met someone who can explain things perfectly — quote the books, name the principles, cite the research. But when it comes to putting those ideas to work, the magic fades. The truth is that theory without practice is like a plan without a journey; neat on paper, yet unrealized in life.

    We live in a world overflowing with advice, frameworks, and “how-tos.” You can learn the theory of entrepreneurship, design, leadership, or even kindness. Yet none of it matters until it shows up in how you act, decide, and create. The unity of theory and practice is where real capability lives — it’s the difference between knowing what and mastering how.

    The Power of Embodied Knowledge

    Think of a skilled chef. They’ve read recipes and studied technique, but their genius only shines through repeated action — through tasting, refining, and failing. The dance between theory and practice creates experience, and experience becomes wisdom. Walking the talk, in this sense, means letting ideas live through your hands, your habits, and your results.

    When you practice what you know, something shifts. Your understanding deepens. You start seeing nuances theory alone could never reveal. It’s discipline sharpened by doing, reflection built into motion.

    Enabled Capability

    Capability isn’t just the potential to perform; it’s the proven ability to execute — to make things happen well, consistently. It begins with knowledge but blossoms through the empowered act of applying that knowledge with intention. A team that executes well doesn’t just have skill; it has enabled capability — the conditions, openness, and trust that allow theory to become reality.

    In an organization, that looks like leaders who model the values they preach, cultures that reward experimentation rather than perfection, and practices that adapt ideas through feedback and lived experience.

    When Theory and Practice Walk Together

    So, how do we unite them? Simple — one deliberate step at a time. Take what you know, apply it, observe what happens, and iterate. Whether you’re designing a community project or mastering a sport, keep your theory humble and your actions bold.

    Ideas gain their truth only in motion. Walking it, not just talking it, is what turns intention into impact. It’s how we turn the abstract into the actual — and capability into excellence.

    You can select all, some, or one domain for us to focus on.


  • Capability Assessments and Context

    “Assessment Context” is fundamental.

    • What are your needs?
    • What do you want explored?
    • What data should be reviewed in that exploration?

    We encourage you to select which capability domains you want the assessment to focus on (all, or some, or one).


    Universal Framework

    Options are listed in the “requirements section” of the Fiverr sign-up.

    1. Strategy & Direction: % of strategic objectives with owners, measures, & proven impact on decisions
    2. Decision Quality: % of decisions made once that stick without reversal or major rework
    3. Risk & Resilience: % of critical risks with treatments that reduce exposure to target level
    4. Execution & Control: % of material initiatives delivered on time and within budget
    5. People Leadership: voluntary turnover rate in critical roles within target limits
    6. Learning & Improvement: reduction in repeat incidents or failures over time
    7. Governance & Accountability: % of material issues closed on time
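    Most of the metrics above reduce to the same simple coverage ratio. As a hypothetical sketch (the function name and figures are illustrative, not drawn from a real engagement):

```python
def coverage_pct(meeting_target: int, total: int) -> float:
    """Generic '% of X meeting Y' metric, e.g. critical risks whose
    treatments reduce exposure to the target level."""
    if total == 0:
        return 0.0  # empty population: report 0 rather than divide by zero
    return round(100 * meeting_target / total, 1)

# Hypothetical: 18 of 24 critical risks are treated to target exposure.
print(coverage_pct(18, 24))  # 75.0
```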

    ALL, SOME, or ONE


    Consolidated Framework

    Options are listed in the “requirements section” of the Fiverr sign-up.

    Two mandatory domains:

    1. Understands context, stakeholders, and disruption, and uses this to set scope and strategy.

    2. Leaders set direction and appetite; governance ensures oversight and accountability.

    Nine options:

    3. Risks, opportunities, and continuity needs are systematically identified, analysed, and updated.

    4. Integrated management framework is tailored to context and embedded in operations.

    5. Risk‑based objectives, plans, and continuity strategies guide and control material change.

    6. People have the competence, information, and culture to deliver Q/E/R/BCM duties.

    7. Customer, market, stakeholder, and supply‑chain needs and risks are actively managed.

    8. Operations and critical activities are controlled via processes, BCM plans, and emergency response.

    9. Information and digital tools for Q/E/R/BCM are controlled, reliable, and used for decisions.

    10. Q/E/R/BCM performance and exposure are monitored, tested, and reviewed for assurance.

    11. The organisation learns from experience and systematically improves and innovates resilience.

    ALL, SOME, or ONE


    NB1: Documentation requirements vary by context and are advised in package descriptions.

    NB2: Where confidentiality precludes these options, we oversee a self-assessment in step 2 – see the flow diagram below, which is tailored to the Operational Resilience package.


  • Consolidated Assessment

    Executive Summary Exemplar (Basic Package)

    The report shows a solid management-system foundation, with no domains rated Absent or Ad hoc, but the overall maturity is 59.1%, so the organisation is stronger in design than in consistent operational proof of use. Four domains are at Operational maturity and seven are at Defined maturity, which means the core opportunity is to convert documented processes into demonstrated decisions, behaviours, and improvement outcomes.

    Executive summary

    This is a credible, above-baseline capability position: strategy, governance, planning, and operational control are the strongest areas, and they are already operating at the target maturity level. The main weakness is not missing documentation; it is the lack of evidence that several frameworks are consistently shaping resource allocation, frontline behaviour, supplier resilience, analytics-led decisions, and continuous improvement.

    A useful way to read the report is that E1 and E2 are broadly in place across domains, while E3 is the recurring breakpoint in 7 of the 11 domains. In practical terms, the organisation has built the system, but it has not yet shown enough repeatable proof that the system changes outcomes.
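    As an illustration of the mechanics only, an overall maturity percentage can be computed by scoring each domain’s N/P/L/F rating and averaging. The weights below are an assumption; the report’s own formula is evidently stricter, since its four-Operational, seven-Defined profile scores 59.1% there.

```python
# Assumed scoring weights (an illustrative convention, not the report's).
SCORES = {"N": 0.0, "P": 1 / 3, "L": 2 / 3, "F": 1.0}

def overall_maturity(ratings: list[str]) -> float:
    """Average domain score, expressed as a percentage across all domains."""
    return round(100 * sum(SCORES[r] for r in ratings) / len(ratings), 1)

# The exemplar's profile: four Operational (F) and seven Defined (L) domains.
print(overall_maturity(["F"] * 4 + ["L"] * 7))  # 78.8 under these weights
```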

    Strengths and gaps

    • Strong strategic alignment: context, stakeholder analysis, scope, and objectives are documented, traceable, and used to steer initiatives and investment choices.
    • Strong governance base: policies, governance structures, leadership participation, issue tracking, and accountability mechanisms are in place.
    • Strong planning discipline: objectives, action plans, continuity strategies, and change assessments are defined and influence priorities and budgets.
    • Strong operational resilience controls: key processes, BCM plans, and emergency procedures are documented, accessible, and evidenced in incidents or exercises.
    • The biggest cross-cutting gap is operational evidence: in seven domains, the report records “YES” for process definition and deployment, but “NO” for proof that the capability is materially influencing decisions or outcomes.
    • Risk and BIA are not yet clearly driving prioritisation and resource allocation, which limits the value of otherwise sound risk methods and registers.
    • People capability, supplier resilience, information use, performance review, and improvement all show the same pattern: defined and repeatable, but not yet demonstrably embedded.

    Domain maturity

    Heat-map

    The top priorities are to close the four target gaps first, then strengthen the lagging domains by proving routine use, not by writing more process material. If that sequencing is followed, the organisation should improve maturity fastest because the report already shows most core structures are in place.

    Action roadmap

    1. In the next 30 days, create a single decision trail linking risk ratings, BIA outputs, treatment choices, continuity strategies, and funding or project approvals so that risk and continuity visibly drive priorities.
    2. In the next 30 days, redesign management review packs so every KPI, audit, exercise, and incident trend ends with a required decision, owner, due date, and follow-up check.
    3. In the next 60 days, run targeted exercises for the highest-impact services and use the results to update BIA assumptions, recovery strategies, supplier contingencies, and BCM plans.
    4. In the next 60 days, review critical suppliers and partners for single points of failure, then define diversification, fallback, contractual, or stockholding controls for the highest-risk dependencies.
    5. In the next 90 days, shift people capability from training completion to demonstrated competence by testing key roles in exercises, audits, and incident simulations, especially outside specialist teams.
    6. In the next 90 days, establish a benefits-tracked improvement portfolio that measures repeat incidents, nonconformities, near misses, and resilience gains so learning can be shown in hard outcomes.

    The practical message for executives is simple: do not invest first in more framework design; invest in evidence of use, decision traceability, supplier resilience, management-review discipline, and measurable learning loops.



    The above note is an example of an executive summary provided with the Basic Package.

    Check capability, not compliance.


  • Consolidated Capability Assessment (Health Check) Framework

    An integrated synthesis of quality, risk, environmental, business continuity, and universal frameworks, tailored to meet your needs.

    Heat-map (example)


    1. Context, Scope, Stakeholders & Strategy

    Capability criterion
    The organisation understands its context, stakeholders, and disruption factors, and uses this to define scope, strategy, and objectives for quality, environment, risk, and continuity.


    2. Leadership, Governance, Culture & Accountability

    Capability criterion
    Leadership sets direction, policy, and appetite, and governance structures provide oversight and accountability for quality, environment, risk, resilience, and continuity.


    3. Integrated Risk & Opportunity Management

    Capability criterion
    Risks, opportunities, impacts, and continuity requirements are systematically identified, analysed, evaluated, and kept current across domains.


    4. Framework, Design & Integration into Operations

    Capability criterion
    The organisation maintains an integrated management framework tailored to context and embedded in business processes.


    5. Planning, Objectives, Strategies & Change

    Capability criterion
    Objectives, plans, and continuity strategies are risk-based, aligned with policy and appetite, and material changes are controlled.


    6. People, Capability, Culture, Communication & Awareness

    Capability criterion
    People have the competence, information, and culture to deliver integrated Q/E/R/BCM responsibilities.


    7. Customers, Markets, Stakeholders & Supply Chain

    Capability criterion
    The organisation manages customer, market, and stakeholder requirements, and supply-chain dependencies, including quality, environmental, risk, and continuity expectations.


    8. Operational Control, Design, BCM Plans & Emergency Response

    Capability criterion
    Operations, products/services, and critical activities are controlled through processes, BCM plans, and emergency arrangements.


    9. Information, Data, Documentation & Digital

    Capability criterion
    Information, documented content, and digital tools supporting Q/E/R/BCM are controlled, reliable, and used for insight and decisions.


    10. Performance Measurement, Monitoring, Exercising & Review

    Capability criterion
    Performance, exposure, and capability across Q/E/R/BCM are monitored, tested, and reviewed for insight and assurance.


    11. Learning, Improvement, Innovation & Resilience Evolution

    Capability criterion
    The organisation learns from experience and systematically improves and innovates its integrated management and resilience capability.


    Understand whether your organisation is capable, not just compliant.


  • Alignment of the Universal Framework with the American Productivity & Quality Center’s Process Classification Framework®

    The Universal Framework draws selectively from the management and support services portion of the American Productivity & Quality Center’s Process Classification Framework® (PCF) and reframes it as a maturity assessment rather than a process taxonomy.

    What the PCF looks like

    The PCF divides all enterprise work into two layers:

    • Operating Processes: Develop Vision and Strategy (1.0), Develop/Manage Products and Services (2.0), Market and Sell (3.0), Deliver Products and Services (4.0), Manage Customer Service (5.0)
    • Management and Support Services: Develop and Manage Human Capital (6.0), Manage IT (7.0), Manage Financial Resources (8.0), Acquire/Construct/Manage Assets (9.0), Manage Enterprise Risk, Compliance, Remediation and Resiliency (10.0), Manage External Relationships (11.0), Develop and Manage Business Capabilities (12.0)

    The PCF organises operating and management processes into 12–13 enterprise-level categories covering everything from product development to customer service, while the Universal Framework deliberately narrows to seven domains that test management capability quality rather than process existence or classification.

    The PCF is a hierarchical process taxonomy – a classification tool intended for benchmarking what processes exist and how they are structured across organisations.

    The Universal Framework is a maturity and evidence model – it tests whether processes are defined, applied, and producing results through E1, E2, and E3 questions. This means the two are complementary in intent but different in design logic.

    Evidence standard – E1, E2, and E3 should follow a simple pattern:
    E1 requires proof that a defined process or framework exists, E2 requires proof that people are equipped and the process is actually being applied, and E3 requires proof that the process produces reliable results in practice. A practical test is:

    • E1: design evidence, the process exists and is defined.
    • E2: deployment evidence, people use it consistently and have the means to do so.
    • E3: effectiveness evidence, results show the process is working.

    The Universal Framework is PCF-aligned at enterprise-management level, especially across strategy, human capital, risk/resilience, governance, and business capability management, but it is not a full substitute for the PCF’s operating-process taxonomy.

    The Universal Framework asks, “Does the organisation have the capability and does it work?”, while APQC asks, “Which enterprise processes and process groups exist, and how are they classified for management and benchmarking?”

    Alignment

    Key structural differences

    Three practical gaps stand out:

    • The Universal Framework omits entire PCF operating categories — products, services, customers, IT, finance, and external relationships are not domains in the capability model, because these are operational functions rather than management capability dimensions.
    • Decision-Making Quality has no PCF equivalent at the category level; APQC treats decision authority and escalation as embedded activities within specific process groups rather than as a cross-cutting management capability.
    • Learning & Improvement is similarly distributed across multiple PCF categories (quality management, corrective action, knowledge management), whereas the Universal Framework treats it as a standalone domain with its own maturity ladder.

    Overall verdict

    The alignment is solid for four of the seven domains (Strategy, Risk, People, and partially Execution) and loose for the remaining three. The Universal Framework has been derived primarily from the management and support services half of the PCF, filtered down to the capabilities that most directly determine whether an organisation is well-led and controlled (rather than whether it has documented every process in a taxonomy). This is a purposeful narrowing, but it means the two frameworks serve different purposes and should not be treated as equivalent.


    Appendix

    A best-fit crosswalk from the seven-domain Universal Framework to APQC PCF Level 1 categories and the most relevant Level 2 process-group anchors. It is best read as an alignment map rather than a strict one-to-one translation, because the Universal Framework assesses management capability through E1/E2/E3 evidence, while the PCF is a hierarchical taxonomy for processes and process groups.

    Appendix note: The Universal Framework is aligned to APQC at the enterprise-management level, particularly in strategy, human capital, risk and resilience, governance, and capability management, but it is not intended to replicate the full APQC operating-process taxonomy.


    Understand whether your organisation is capable, not just compliant


  • Unveiling a range of Gigs to meet your needs

    We are pleased to launch a range of solutions which aim to meet various resilience needs by asking the right questions …

    A diagnostic to identify your biggest gaps, immediate risks, and the priority actions that matter.



    99 cents Apple App


    Note: All pricing is displayed in USD


  • Sound on paper but …

    Yes, a familiar situation. You have a sound framework on paper but resilience is not embedded in how the organisation runs day to day.

    1. Key weaknesses in current arrangements

    Illustration: today, if a major outage occurred, you would probably depend heavily on a few key individuals and ad hoc coordination rather than confidently lifting a set of tested, role‑based playbooks off the shelf.

    2. Where plans and accountability are unclear

    • Governance: policy and committee structures exist, but OR is not yet fully integrated into enterprise risk, strategy, change, and assurance processes. This makes accountability blurry at executive level (who “owns” resilience outcomes versus activities).
    • Critical services/BIA/risk: methodologies and outputs are defined, but they are not maintained as a “living” source of truth or consistently used across disciplines (IT DR, cyber, vendor management, etc.).
    • Strategies & solutions: there is a defined approach and documented strategies, but alignment with risk appetite, budgets, change programs and other resilience initiatives is weak, so decisions during projects and investments may not consider resilience sufficiently.
    • Roles & capability: there is no clear, organisation‑wide view of OR roles, competencies, and training/awareness expectations (e.g. crisis leaders, service owners, vendor resilience leads). People may not know what is expected of them in a disruption.
    • Performance & continuous improvement: there is no agreed set of resilience metrics, review cadence, or link into issues/corrective action processes, so no one is clearly accountable for closing gaps and demonstrating improvement over time.

    In practice, this means resilience is still seen more as a set of documents and projects than as a managed performance area with clear owners, measures, and consequences.

    3. Practical priorities (what to fix first)

    Focus first on the areas rated N (Absent) and P (Ad hoc), then strengthen linkages in the L (Defined) domains.

    Capability Heatmap

    1. Establish performance and oversight basics

    (Performance, Insights & Continuous Improvement – N).

    • Define a small, meaningful set of OR metrics/KRIs (e.g. critical service coverage, currency of BIAs and plans, test pass rates, incidents vs impact tolerances).
    • Embed a quarterly OR performance review rhythm in an existing executive or risk committee, with clear ownership for actions and tracking.
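    One such KRI, currency of BIAs and plans, might be computed as follows. This is a hypothetical sketch; the function name and the 12-month review window are assumptions, not part of any specific engagement.

```python
from datetime import date

def plan_currency_pct(last_reviewed: list[date], as_of: date,
                      max_age_days: int = 365) -> float:
    """Hypothetical KRI: % of BC plans/BIAs reviewed within the window."""
    if not last_reviewed:
        return 0.0
    current = sum(1 for d in last_reviewed if (as_of - d).days <= max_age_days)
    return round(100 * current / len(last_reviewed), 1)

# Three plans, one stale (last reviewed nearly two years ago):
reviews = [date(2024, 6, 1), date(2024, 9, 15), date(2023, 1, 10)]
print(plan_currency_pct(reviews, as_of=date(2025, 1, 1)))  # 66.7
```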

    2. Build roles, skills, and awareness

    (Resilience Culture, Capability & Awareness – N).

    • Clarify key OR roles and responsibilities (service owners, crisis managers, BC coordinators, IT DR leads, vendor resilience owners) and map them to people.
    • Create a simple training and awareness plan: short role‑based briefings for key roles, basic OR awareness for all staff, and targeted crisis/incident leadership training.

    3. Make scenario testing and learning real

    (Scenario Testing, Validation & Maintenance – P).

    • Use the existing testing framework to run a small number of high‑value, cross‑functional exercises focused on top critical services and impact tolerances.
    • Implement a central lessons‑learned log with owners and due dates, linked to change, risk, and improvement processes so fixes are actually delivered.

    4. Strengthen the usability of response arrangements

    (Crisis, Incident & Operational Response – P).

    • Prioritize 3–5 key crisis and incident playbooks and BC/DR plans for simplification into short, role‑based, action‑oriented guides.
    • Confirm how plans are accessed in a crisis (including offline) and test this in exercises to make sure people can find and use them quickly.

    5. Keep critical services, BIAs and strategies “alive”

    (Governance, Critical Services, Strategies – L).

    • Stand up a single source of truth for critical services, impact tolerances, BIAs and key risks, with defined review triggers (annual plus event‑based).
    • Embed explicit “resilience checks” into major change and investment processes so strategies and tolerances are considered and updated at design gates.

    4. Concrete next 90–180 day actions

    5. Use for board, client or internal discussions

    • Position the assessment as evidence that the organisation has a defined framework but is in the early stages of embedding resilience into operations (overall maturity 28.6%, no domains yet at “Operational Strength”).
    • Emphasize that immediate focus is on: clarifying roles and accountability, demonstrating performance through metrics and testing, and simplifying/operationalising response arrangements.
    • Show a simple roadmap (90–180 days) with clear milestones and owners rather than a long list of abstract improvements.
    • For clients and regulators, highlight strengths in having defined methodologies, policies, and governance structures, while being transparent about the improvement plan for culture, testing, and performance management.


We could … but should we?

In contexts characterized by uncertainty, I am not convinced that AI provides sound solutions. Equally, I’m pretty sure that those dusty plans on your shelf (or in your cupboard) won’t measure up either. A reflection in the International Crisis Management Standard notes that “crises through a combination of their novelty, inherent uncertainty and potential scale… Read More

Free Disaster Risk Assessor App

Use our free app to map and explore the risks you face. We also use it as a “pre-read” lead-in to our business continuity workshops. Hazards are not equally significant. Google Play https://play.google.com/store/apps/details?id=com.disaster.risk&hl=en_AU&gl=US Apple App Store https://apps.apple.com/au/app/disaster-risk-assessor/id6443818654



The move from traditional Business Continuity Planning to Operational Resilience is real and growing

How strong is the move from traditional Business Continuity and RTOs to a focus on Operational Resilience and Impact Tolerance? Read More

Business Continuity Review Document Checklist for Clients

Selecting the right paperwork to achieve an appropriate capability assessment review. Read More

A necessary and sufficient BCM/resilience “evidence set”

When commissioned to review a client’s continuity and resilience capabilities I am nearly always asked – “What documentation do you need to review?” You can reflect on my response below: “Minimum Viable” set to request client to consider BCM minimum evidence set: policy & framework, risk register with BCM risks and latest assessment, approved BIAs,… Read More

Necessary and Sufficient Evidence (Consolidated Framework)

For each domain, “necessary and sufficient” evidence means: the minimum concrete artefacts and observations that prove the criterion is in place (E1), enabled (E2), and working in practice (E3). Below is a concise, practical set of examples. 1. Context, Scope, Stakeholders & Strategy 2. Leadership, Governance, Culture & Accountability 3. Integrated Risk & Opportunity Management… Read More

“Necessary and Sufficient” Resilience Evidence

You can treat “necessary and sufficient” at each level as (a) existence of defined artefacts (E1), (b) evidence of use and quality (E2), and (c) evidence that use is routine, linked to other systems, and self‑reinforcing (E3) across all seven domains.[1] Below are concise criteria you can use as an assessment rubric. 1. BCM Program… Read More