John Salter’s Blog
Follow my ruminations
Get new content delivered directly to your inbox.
- Celebrating the Kon-Tiki expedition
I recall with fondness being absolutely absorbed by the story of the Kon-Tiki expedition. I could be found in the high school library with my face buried in its pages … when I probably ought to have been in class. For me, it stands as an example of how a “single exposure” can significantly shape the way you think.

I suggest treating the Kon-Tiki expedition as a compact story of how fragile capability, clear intent, and adaptive learning combine into resilience under prolonged stress.[1][2][3]
Capability and maturity metaphor
A small, minimally “engineered” raft and crew deliberately sail into extreme uncertainty for 101 days, relying on simple but coherent capabilities that strengthen under pressure rather than collapse.[3][4][1] You can link this directly to capability-based definitions of organizational resilience: anticipating the crossing, coping with storms, damage and drift, and adapting operating routines over the journey.[7]
Elements of resilience mapped to Kon-Tiki
- Purpose and narrative: Heyerdahl’s explicit aim was to demonstrate that a primitive balsa raft using period materials could cross from Peru to Polynesia, not to run a pleasure cruise. That hypothesis acts like a strategic intent or “design premise” for an organization: it aligns choices, justifies risk, and gives meaning to hardship.[2][4][5]
- Designing for environment, not comfort: The raft was built from balsa logs and natural-fibre lashings, with no metal hull, because Heyerdahl wanted it to behave like pre-Columbian craft and to work with Pacific currents and swells. Organizationally, this is resilience through fit-to-context: structures that flex with the operating environment instead of trying to overpower it.[4][1][2][3]
- Initial fragility, emergent strength: Early expert predictions were that the lashings would wear through, the raft would break up, or it would drift aimlessly for years. In practice, as the ropes swelled with seawater, they tightened and the raft became more coherent; a “barely viable” configuration matured into a robust platform under real conditions. That is a strong metaphor for capability maturity: crude, loosely coupled practices that, subjected to consistent load, tighten, integrate, and stabilise.[6][3][4]
- Improvisation and bricolage: The crew continually repaired lashings, managed a problematic steering oar, and adjusted sail and rigging as weather and sea state changed. This aligns with resilience literature around improvisation, bricolage, and “situation-specific responses” as core to coping and adaptation, not as signs of immaturity.[7][8][3][6]
- Tolerance of slow feedback and lag: It took 101 days to confirm whether the whole hypothesis was valid, when the crew finally reached the Tuamotus. That long lag between action and definitive feedback looks a lot like strategic transformation: you must maintain conviction and disciplined practice despite ambiguous, noisy short-term signals.[1][3][4]
- Safe-enough failure: The expedition ended by running aground on a reef at Raroia; the raft was damaged but the crew made safe landfall with no fatalities. In resilience terms, this is “successful failure”: the system can absorb a terminal event without catastrophic loss of life or core purpose.[3][4][1]
How you might use this in your workplace
- As a visual: a simple Kon Tiki route line with “maturity waypoints” (concept, prototype, early storms, mid-ocean repairs, first sight of land, reef impact/landing) annotated with resilience capabilities at each stage.[4][1][3]
- As a narrative case: contrasting “designing a steel ship for comfort and speed” (efficiency, optimization, low variance) with “building a balsa raft that can ride out whatever the Pacific throws at it” (robustness, adaptability, graceful failure).
If you tell this story to a leadership group, what’s the main capability you’d want them to see themselves in: the courage to launch, the willingness to learn mid-ocean, or the ability to land safely even if they hit the reef first?
Sources
[1] Kon-Tiki expedition – Wikipedia https://en.wikipedia.org/wiki/Kon-Tiki_expedition
[2] Kon-Tiki | Explorer, Pacific Ocean, Thor Heyerdahl https://www.britannica.com/topic/Kon-Tiki-raft
[3] Kon-Tiki Expedition: Thor Heyerdahl’s Epic Crossing of … https://www.worldhistory.org/Kon-Tiki_Expedition/
[4] Kon-Tiki Expedition – Kon-Tiki Museum https://www.kon-tiki.no/en/heyerdahls-expeditions/kon-tiki
[5] Thor Heyerdahl’s Kon-Tiki Expedition: Across the Pacific by Raft https://runawayjuno.com/runaway-tales/thor-heyerdahls-kon-tiki-expedition-across-the-pacific-by-raft/
[6] Voyage of the Kon-Tiki Part I: a most challenging hypothesis | TOTA https://www.tota.world/article/2317/
[7] [PDF] Organizational resilience: a capability-based conceptualization https://d-nb.info/1178203778/34
[8] Kon Tiki: The Epic Raft Journey Across the Pacific | Full Documentary https://www.youtube.com/watch?v=gvBYfba8nv8
[9] Kon-Tiki expedition – Wikipedia https://en.wikipedia.org/wiki/Kontiki
[10] UN Organizational Resilience Maturity Model https://unsceb.org/un-organizational-resilience-maturity-model
[11] Kon-Tiki https://remosince1988.com/blogs/stories/kon-tiki
[12] Kon Tiki — A Beautiful Voyage … Into Divine Madness | by Sydney … https://blogs.sydneysbuzz.com/kon-tiki-a-beautiful-voyage-into-divine-madness-ce9682c593e5
[13] Developing organisational resilience https://knowledge.aidr.org.au/resources/ajem-oct-2017-developing-organisational-resilience-organisational-mindfulness-and-mindful-organising/
[14] This is the story of a man behind a bold idea https://www.kon-tiki.com.au/the-kon-tiki-story/
[15] How the Voyage of the Kon-Tiki Misled the World About Navigating … https://www.smithsonianmag.com/smithsonian-institution/how-voyage-kon-tiki-misled-world-about-navigating-pacific-180952478/
- Shoring up the ship
Most organisations respond to a drop in performance the same way: tighten targets, add new dashboards, reshuffle roles, and ask people to try harder. For a quarter or two, the numbers might improve. Then the leaks return, often in exactly the same places.
Some leaders stand on the bridge of the ship, staring at the wake, assuming that if they just shout loud enough, the hull will somehow repair itself.

A capability is the repeatable combination of people, process, technology, data, and governance that enables a specific outcome. In this metaphor, capabilities are the hull, keel, crew, and navigation systems that create that wake, day after day, in all kinds of weather.
When leaders focus only on the wake, they’re managing the traces of what the organisation can do, not the organisation’s actual ability to do it. That’s why so many performance conversations feel like déjà vu: we keep revisiting the symptoms without ever examining the structures and processes that produce them.

Yes, a little management parable: if you load all your capability on one side of an organisation and leave the other side underpowered, you get motion but not progress.
This piece has been about shifting that focus. Instead of asking “How do we get better numbers next quarter?”, ask “How do we shore up the ship?”
Through the lens of capability assessment, look below the waterline: clarifying what capabilities really are, what a good assessment reveals, and how those insights can guide more deliberate, structural improvements.


A consolidated, universal framework distilled from the Quality, Risk, Environmental, and Business Continuity frameworks.
I leave one consideration with you – “choose 1 critical capability for your strategy and run a light-touch assessment in the next 3 weeks”.

Which slice(s) of the cake are currently crucial to you? 
- War, Interconnection, and the Hidden Weaknesses in Your Operating Model
War has a way of exposing vulnerabilities that arise from interconnection. Inside organisations, conflict acts like an extreme stress test: it doesn’t create fragility from nowhere, it reveals weaknesses already baked into operating models, supply chains, and decision‑making rhythms. The question for leaders is not whether those vulnerabilities exist, but how consciously they have been recognised, designed for, and governed.

Why distant wars show up in your performance indicators
War often arrives in the organisation as a data point, not a drama. A distant conflict appears first as a headline, then as stretched lead times, rising logistics costs, “pending allocation” notices, and a flood of cyber risk bulletins. What begins as “something happening over there” quickly shows up as noise in KPIs and discomfort in planning assumptions. These are not just external shocks; they are feedback on how capability has been built.
Decades of optimisation have pushed organisations toward lean inventories, concentrated suppliers, centralised data, and tightly coupled processes. In stable conditions, this looks like good management. Under war‑related disruption — physical, economic, cyber, or informational — the same characteristics show up as brittleness. Capabilities that looked robust turn out to be over‑specialised, over‑connected, and under‑resilient.

Interconnection: strength and shock channel
The systems we build to give us reach, efficiency, and speed also become the channels through which shocks propagate. A highly integrated global sourcing strategy is both a source of advantage and a liability when regions become contested. Centralised cloud platforms increase agility and standardisation, but concentrate cyber and dependency risk. Interconnection creates both capability and fragility; conflict strips away the illusion that we can have one without the other.
Language can make us feel more passive than we really are. We talk about “supply chain disruption” or “market volatility,” as if organisations simply receive forces from outside. But procurement choices, network architectures, partnership models, outsourcing decisions, and risk tolerances are all design moves. War reveals where those moves have created single points of failure, opaque dependencies, or untested assumptions about who carries which risks.

Three capability gaps war keeps exposing
From a capability perspective, three recurring gaps stand out:
- Structural resilience is underweighted. We define what we can do in terms of throughput, quality, and cost, but pay less attention to how performance degrades under stress, how quickly we can reconfigure, or how well we can operate in degraded modes.
- Interdependencies are not mapped. Many organisations cannot clearly answer: Which vendors, facilities, platforms, and people are on our critical path? Where are we single‑threaded? Which external infrastructures quietly underpin “normal” operations?
- Ecosystem risk is overlooked. Much of “our” capability lives across joint ventures, strategic suppliers, logistics partners, digital platforms, and regulators. An organisation may have strong internal continuity plans yet still be exposed through partners with much lower resilience maturity.
Designing for exposure, not illusory control
If interconnection is here to stay, the task for leaders is to design for exposure, not pretend it can be eliminated. That starts with defining capability in multi‑state terms: not just “Can we do X?”, but “How do we do X under normal, stressed, and degraded conditions — and how do we move between those states?” Resilience becomes a dynamic performance question, not a slogan.
It also means treating redundancy, diversity, and modularity as strategic investments rather than inefficiencies, and rehearsing disruption scenarios so people closer to the front line are empowered to act. Ultimately, war will continue to expose vulnerabilities that arise from interconnection, whether or not we are paying attention. We cannot choose when the next conflict erupts, but we can choose what kind of organisations we will be when it does.
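The multi-state framing above can be sketched as a simple data structure. This is an illustrative sketch only: the class, field names, and the example capability are my assumptions, not part of any published framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class State(Enum):
    NORMAL = "normal"
    STRESSED = "stressed"
    DEGRADED = "degraded"


@dataclass
class Capability:
    """A capability defined in multi-state terms: what 'doing X' looks like
    under each operating condition, and which state transitions have
    actually been rehearsed."""
    name: str
    # Expected service level per state, as a fraction of normal output.
    service_level: dict
    # Transitions the organisation has practised, as (from, to) pairs.
    rehearsed_transitions: set = field(default_factory=set)

    def can_move(self, src: State, dst: State) -> bool:
        # A transition is only credible if it has been rehearsed.
        return (src, dst) in self.rehearsed_transitions


# Hypothetical example: order fulfilment degrades gracefully, but only the
# normal -> stressed transition has ever been exercised.
fulfilment = Capability(
    name="order fulfilment",
    service_level={State.NORMAL: 1.0, State.STRESSED: 0.7, State.DEGRADED: 0.4},
    rehearsed_transitions={(State.NORMAL, State.STRESSED)},
)
```

A check such as `fulfilment.can_move(State.STRESSED, State.DEGRADED)` returning `False` is exactly the kind of gap a disruption rehearsal is meant to surface.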

Start the conversation Where in your operating model are you most single‑threaded today, and who in your organisation is actually accountable for changing that? I’d be interested to hear how you’re tackling this in practice.
- What’s for breakfast?

Reflecting over breakfast this morning, I checked in with AI. A significant, prolonged oil shortage would not usually mean these foods disappear, but it would very likely mean patchy availability, higher prices, and less choice, especially for imported or processed items.[1][2][3][4][5]
What an oil shortage affects
Oil underpins food via three main links: farm production (diesel for tractors, fertiliser), processing and refrigeration (electricity, packaging inputs), and transport from farm to factory to supermarket. When fuel is short or very expensive, governments and fuel companies typically prioritise essential uses like food freight, but logistics still slow down and costs rise, which then flow through to retail prices.[2][3][4][5][6][1]
Item‑by‑item implications
- Milk – Vulnerable to fuel constraints because it is collected daily from farms and must stay cold, so transport and refrigeration costs would rise quickly. You would probably still get milk, but see more frequent gaps on shelves, shorter “use by” windows, and higher prices.[7][8][1]
- Yoghurt – Adds another processing step plus chilled distribution, all fuel‑intensive. Expect more price pressure than on plain milk, fewer flavours/sizes, and occasional outages of specific brands rather than total absence.[8][9][1]
- Granola cereal – Depends heavily on bulk grain supply and long‑distance freight; oil shocks raise grain and freight costs, pushing up cereal prices. Availability is likely, but with higher prices, more “budget” formulations, and some imported or niche products disappearing if margins vanish.[3][5][9]
- Fruit – Very mixed: local, in‑season fruit fares better than imported or highly perishable lines. You might see reduced variety (especially out‑of‑season or imported fruit), more reliance on what’s grown regionally, and more rapid price swings.[10][8]
- Coffee – Beans are almost all imported and already exposed to climate and shipping risk; energy and freight shocks add further strain. Good coffee remains available but gets noticeably more expensive, and specialty origins or formats (pods, ready‑to‑drink) may be the first to vanish.[11][12][8]
- Sugar – As a globally traded commodity it is tightly linked to fuel, both via farm diesel and via competition with biofuel markets like ethanol. You would likely see higher prices and possible limits on bulk buying, but outright absence from supermarkets would be less likely than for more perishable products.[5][3][11]
Overall, the pattern is: perishable and highly processed foods become more expensive and erratic, while basic staples stay available but cost more and come in fewer variants.[9][1][8]
Sources
[1] Soaring Oil Prices Raise Questions Over Future Food And Fertiliser … https://www.esmmagazine.com/supply-chain/soaring-oil-prices-raise-questions-over-future-food-and-fertiliser-costs-307280
[2] Aussies face supermarket shortages as fuel crisis cripples … https://au.news.yahoo.com/aussies-face-supermarket-shortages-as-fuel-crisis-cripples-suppliers-we-cant-operate-212851314.html
[3] Commodity markets: shocks and spillovers https://www.bis.org/publ/qtrpdf/r_qt2209b.htm
[4] Fuel panic is spreading and now Australia is tapping into … https://www.abc.net.au/news/2026-03-13/fuel-panic-is-spreading-australia-emergency-supplies/106448176
[5] Exogenous oil supply shocks and global agricultural … https://www.sciencedirect.com/science/article/abs/pii/S1059056024000868
[6] Here’s what Australia’s fuel supply looks like in charts https://www.sbs.com.au/news/article/australia-fuel-shortage-2026/zl0grg7ey
[7] National Food Security Strategy: discussion paper https://haveyoursay.agriculture.gov.au/food-security-strategy/survey/view/300
[8] The cascading crisis of global food supply chains – Pursuit https://pursuit.unimelb.edu.au/articles/the-cascading-crisis-of-global-food-supply-chains
[9] How Supply Chain Disruptions Impact Food Ingredients https://www.globalresourcesdirect.com/blog/supply-chain-disruptions-impact-food-ingredients/
[10] Fork in the Road: Impacts of climate change on Australia’s food supply https://farmersforclimateaction.org.au/fork-in-the-road-impacts-of-climate-change-on-australias-food-supply/
[11] Cocoa, coffee, corn at risk – food shifts focus amid climate threat https://www.foodnavigator.com/Article/2025/03/14/manufacturers-rush-to-secure-supply-chains-of-vital-ingredients/
[12] From farm to fork – key challenges for global food systems https://www.investordaily.com.au/from-farm-to-fork-key-challenges-for-global-food-systems/
[14] Diesel shortage threatens food supply as Big Oil cuts off farmers https://www.onenation.org.au/diesel-shortage-threatens
[15] Aussies warned soaring fuel costs will soon drive up … https://www.youtube.com/watch?v=wxDUlbtuWkc
[16] Fueling our crises – Transport & Environment https://www.transportenvironment.org/uploads/files/Soy_Study_TE_2022_final_embargoed_Friday_4_Nov-1.pdf
- Operational Resilience and Continuity – without the drama
Our consulting offer for vulnerable times.

The gap most mid-sized firms face
Most firms have bits and pieces of business continuity, risk, and IT/DR work, but too often:
• Plans are outdated or don’t match how operations actually run today.
• Operations, IT, risk, and suppliers each see their own slice – no one owns the full picture of “how we keep running when X breaks.”
• It’s unclear which products, services or sites are truly critical, or how long they can be down before real damage occurs.
• It’s hard to make the investment case because risks and impacts aren’t expressed in simple business terms.



Rapid Resilience & Continuity Scan – A health check to surface your biggest risks and quick wins.


This offer is a strong fit if:
You’re a mid-sized global firm with multiple sites or regions.
A serious outage, cyber incident, supplier failure, or extreme weather event would materially affect revenue, customers, or compliance.
You have some continuity and risk work in place, but no integrated, up-to-date picture across the business.
You want a practical, operator-friendly approach rather than a theoretical exercise.

Next step
Start with a 20-minute conversation.
We’ll talk through your current situation, recent incidents or near-misses, and what you want your resilience and continuity capabilities to look like in 12-24 months. If it makes sense, we can then decide together whether the Rapid Scan or the full Readiness Sprint is the best starting point.

Five-Step Version




- Will ISO 9001 have “a place at the table” in 2026?
Given that ISO 9001:2026 will incorporate extra depth on leadership, culture, risk and opportunity, digital, and climate themes, how popular is the domain model (below) likely to be with the current USA leadership?

Below is a domain model one could turn into a capability assessment framework (maturity grid, heat map, etc.).[1][2][3][4]
1. Context, Stakeholders and Strategic Alignment
Assess whether the QMS is anchored in real context, stakeholders and strategy, including climate and resilience.[3][4][1]
Core capability domains:
- Organizational context and climate factors (including climate change and sustainability in context analysis).[4][1]
- Interested parties and requirements (customers, regulators, supply-chain, community, climate-related expectations).[2][3]
- QMS scope and architecture (process approach, boundaries, interfaces, digital/AI-enabled processes).[3][4]
- Strategic alignment and quality objectives integration into business planning.[5][1]
Example maturity question: “To what extent are quality objectives derived from and reviewed against business strategy and climate/sustainability drivers?”[1][4]
2. Leadership, Governance and Culture
The 2026 revision strengthens leadership accountability, ethical culture and governance, so this domain becomes more explicit.[6][7][2][5]
Core capability domains:
- Leadership commitment and role modelling of a quality and ethics culture (top management behaviour, decisions, priorities).[2][5]
- Quality policy and governance (policy aligned with context and strategy; oversight bodies, decision forums).[1][3]
- Roles, responsibilities and authorities (clear, practiced, and effective delegation for QMS performance).[8][3]
- Culture, values and awareness (awareness now includes values and culture, not just procedures).[2]
Example maturity question: “How consistently do leaders take decisions that prioritise long‑term customer value and ethical behaviour over short‑term local optimisation?”[5][2]
3. Risk, Opportunity and Change Planning
ISO 9001:2026 is expected to clarify risk vs opportunity and strengthen planning for change and resilience.[4][5][1][2]
Core capability domains:
- Integrated risk and opportunity management (systematic analysis and evaluation, not just lists; equal discipline for opportunities).[1][2]
- Resilience and continuity considerations (ability of processes and supply chain to withstand disruption).[4]
- Climate and sustainability risks/opportunities explicitly considered in planning activities.[4][1]
- Change planning (structured planning, communication and review of changes to processes, tech, structure).[3][2]
Example maturity question: “To what extent are risks, opportunities and planned changes linked to defined controls, owners, and measurable effects on QMS objectives?”[2][1]
4. Resources, People, Knowledge and Digital Enablement
This domain captures people, infrastructure, knowledge and the increasingly explicit digital/AI aspects.[9][2][4]
Core capability domains:
- People capability, competence and engagement (skills, training, empowerment, participation).[3][2]
- Infrastructure and work environment (physical, technological, and digital workplaces fit for purpose).[3][4]
- Organizational knowledge management and learning (retention, application, sharing, and learning beyond basic product conformity).[2][4]
- Digital and data capabilities (AI, automation, analytics, digital services, and data integrity in cloud environments).[10][9][4]
- Documented information lifecycle in digital environments (creation, control, security, usability, signatures, traceability).[4][3]
Example maturity question: “How deliberately is organizational knowledge (people + data + digital systems) captured, curated and reused to improve processes and products?”[2][4]
5. Customer, Market and Supply‑Chain Management
ISO 9001 has always been customer‑centric; 2026 strengthens communication, contingency, and supply‑chain alignment.[3][4][2]
Core capability domains:
- Customer insight and communication (requirements, expectations, feedback via multiple channels).[2][3]
- Offer and contract management (product/service definition, review, and agreement, including digital offerings).[4][3]
- Contingency and customer communication in disruptions (informing customers about contingency plans).[4][2]
- Supplier and external provider management (alignment with customers’ and interested parties’ requirements, supply‑chain risk).[2][4]
- Management of customer/external property and data (including digital assets and information security controls).[8][2]
Example maturity question: “How proactively are supply‑chain risks and contingencies communicated to customers and integrated into contracts and SLAs?”[4][2]
6. Operational Design, Control and Improvement
This is the classic “8.x operations” domain, but extended to digital products/services and automated monitoring.[9][3][4]
Core capability domains:
- Process architecture and control (end‑to‑end process design, interfaces, criteria, controls for both physical and digital services).[3][4]
- Design and development management (including software, digital platforms, AI‑enabled functions, and iterative/Agile approaches).[10][9][4]
- Production, service provision and change control (execution discipline, change impact assessment, configuration control).[8][3]
- Monitoring and measurement methods, including automated and real‑time data (sensors, dashboards, analytics).[9][4]
- Post‑delivery and lifecycle activities (after‑sales support, updates, digital releases, warranty, recalls).[8][3]
Example maturity question: “To what extent are operational controls and monitoring designed using real‑time data and feedback loops across the full product or service lifecycle?”[9][4]
7. Performance Insight, Review and Learning
The 2026 draft emphasises richer performance evaluation, explicit audit objectives, and stronger management review linkage to improvement.[1][3][2]
Core capability domains:
- Performance measurement and analytics (selection, analysis and evaluation of process, product, and customer metrics).[3][2]
- Customer satisfaction insight using diverse channels (surveys, complaints, returns, digital signals).[2][3]
- Internal audit programme effectiveness (risk‑based, with clear objectives and follow‑through).[3][2]
- Management review discipline (inputs reflect context, interested parties, changes; outputs drive action and strategy updates).[2][3]
Example maturity question: “How consistently do management reviews integrate changes in interested parties, context and performance data into concrete, tracked improvement decisions?”[3][2]
8. Improvement, Innovation and Corrective Action
ISO 9001:2026 clarifies that continual improvement must address suitability, adequacy and effectiveness, not only nonconformity.[1][2]
Core capability domains:
- Continual improvement system (structured pipeline of ideas, from opportunities and lessons learned through to implemented changes).[1][2]
- Corrective action and problem‑solving (root cause analysis, systemic fixes, verification of effectiveness).[8][3]
- Innovation and opportunity management (opportunities treated as strategic tools, not ad‑hoc suggestions).[4][2]
- Periodic challenge of QMS suitability and adequacy (does the QMS still fit the evolving business and digital model?).[1][2]
Example maturity question: “How often does the organization deliberately reassess whether its QMS is still suitable, adequate and effective for its evolving business model and technologies?”[1][2]
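The eight domains above can be turned into a maturity grid or heat map, as suggested earlier. The sketch below is one minimal way to do that; the 0–5 scale, the colour bands, and the example scores are all illustrative assumptions, not part of the ISO 9001:2026 draft.

```python
# Hypothetical maturity grid for the eight assessment domains described above.
DOMAINS = [
    "Context, Stakeholders and Strategic Alignment",
    "Leadership, Governance and Culture",
    "Risk, Opportunity and Change Planning",
    "Resources, People, Knowledge and Digital Enablement",
    "Customer, Market and Supply-Chain Management",
    "Operational Design, Control and Improvement",
    "Performance Insight, Review and Learning",
    "Improvement, Innovation and Corrective Action",
]


def heat(score: float) -> str:
    """Map a 0-5 maturity score to a heat-map band (illustrative thresholds)."""
    if score < 2:
        return "red"
    if score < 3.5:
        return "amber"
    return "green"


# Illustrative scores only, one per domain in order.
scores = dict(zip(DOMAINS, [3.0, 2.5, 1.5, 4.0, 3.8, 2.0, 3.2, 1.8]))

# The grid pairs each domain with its score and heat-map colour.
grid = {domain: (score, heat(score)) for domain, score in scores.items()}
```

Rendering `grid` as a coloured table gives the heat map; the same structure extends naturally to per-domain E1/E2/E3 evidence scores.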
E1 E2 E3 evidence questions by domain

Sources
[1] ISO 9001:2026 Revision: Key Changes, Timeline & Transition Guide https://www.9001simplified.com/learn/next-iso-9001-revision.php
[2] ISO 9001:2026 Revision – Key Changes and How to Prepare https://advisera.com/articles/iso-9001-2026-revision-key-changes/
[3] ISO 9001:2015 Requirements – Summary of Each Section https://the9000store.com/iso-9001-2015-requirements/
[4] ISO 9001:2026 Draft Update: What’s Changing & How to Prepare https://www.glocertinternational.com/resources/articles/iso-9001-2026-changes-and-transition/
[5] ISO 9001:2026 Is on the Horizon: Key Changes and How to Prepare … https://www.linkedin.com/pulse/iso-90012026-horizon-key-changes-how-prepare-early-sjmze
[6] ISO 9001:2026 – Preparing for the Next Generation of Quality … https://compliantltd.com/insights/updated-iso-management-system-2026/
[7] Prepare for ISO 9001:2026 | Governance Platform – Zebsoft https://zebsoft.co.uk/iso-9001-2026-preparation-platform/
[8] ISO 9001 2015 requirements, quality management systems – PQB https://www.pqbweb.eu/page-iso-9001-2015-requirements-quality-management-systems.php
[9] ISO 9001:2026 – Key Updates and Transition Guidance – SGS https://www.sgs.com/en-au/showcases/iso-9001-2026-key-updates-and-transition-guidance
[10] ISO 9001:2026 Update: Key Changes and Transition Timeline https://www.linkedin.com/posts/asimbaig_quality-professionals-get-ready-for-iso-9001-activity-7420533763798888448-kyIv
[11] ISO 9001 Quality Management Systems Revision – CQI | IRCA https://www.quality.org/article/iso-9001-quality-management-systems-revision-0
[12] Preparing for ISO 9001:2026. Here’s everything you need to know. https://citationgroup.com.au/resources/preparing-for-iso-90012026-heres-everything-you-need-to-know/
[13] [PDF] Clauses of the new ISO 9001:2015 standard – Qudos Management https://www.qudos-software.com/image/data/downloads/QudosArticle-NewISO9001clauses-Sept2014.pdf
[14] Understanding the 2026 Revision of ISO 9001 | Christopher Paris https://www.linkedin.com/posts/oxebridge_understanding-the-2026-revision-of-iso-9001-activity-7396177729001254912-1QQ9
[15] ISO 9001:2026 is coming! What’s new and how to prepare? https://www.woodwing.com/blog/iso-9001-2026-is-coming-how-to-prepare
- No cherry on top …

The idiom “cherry on top” is a 20th‑century English expression that grew out of the literal practice of finishing cakes and ice‑cream sundaes with a decorative red cherry, and then shifted metaphorically to mean a small extra that makes something already good even better.[1][2][3]
Literal dessert origin
In confectionery, cooks commonly placed a bright red cherry—often a glacé or maraschino cherry—on the very top of cakes and ice‑cream sundaes as a visual and flavorful finishing touch. This highly visible, non‑essential garnish provided the image that the idiom alludes to: something already complete, made a little more special by a final flourish.[2][3][4]
Shift to figurative meaning
From this culinary practice, “cherry on top” came to describe “an extra benefit or positive detail that makes something even better,” i.e., a pleasant bonus added to an already good situation. Modern dictionaries have it as “a thing that makes something good even better” and place it alongside similar expressions like “icing on the cake.”[3][4][5][6][1]
Before you even think about icing on the cake, it is worth checking your foundation layers (E1s) – especially in your crucial Domains (such as D3).
UNIVERSAL FRAMEWORK – Domain 3. Risk & Resilience
E1 – Exists (Gateway Evidence) uses the metric: % of critical risks with treatments that reduce the risk to achieving targets. The gateway question: does a defined organisational process exist for identifying, assessing, and managing risks?
An assessment against this metric would look for clear evidence that there is a defined, repeatable risk management process (E1). Below are examples of the kind of evidence and notes that would typically support that level.
E1 – Exists (Gateway Evidence)

Evidence that a defined organisational process exists for identifying, assessing, and managing risks.
You would expect to see:
- Documented risk management framework and policy approved by the governing body, referencing standards such as ISO 31000 and describing objectives, scope, roles, and responsibilities.
- A documented risk management process (e.g. procedure or toolkit) that clearly sets out steps: establish context, identify risks, analyse/assess, evaluate against criteria, treat, and monitor/review.
- Defined risk categories, rating scales, and risk evaluation criteria (likelihood, consequence, and clear definitions of “critical” risk).
- Evidence that the process is in use: completed enterprise or divisional risk registers showing risks identified, assessed, and evaluated using the standard method.
- Governance artefacts that show the process is embedded: terms of reference for risk committees, role descriptions for risk owners, and scheduled risk review cycles.
- Training or awareness material showing that staff are informed about how to identify and escalate risks (e.g. presentations, intranet guidance, learning modules).
Notes an assessor might make:
- “Risk management framework v2.1 approved by Board in May 2025; process aligned to ISO 31000 and applied across all business units.”
- “Enterprise risk register evidences consistent use of standard likelihood/consequence matrix and risk criteria for all critical risks.”
- “Quarterly risk committee pack includes standing agenda items on risk identification, assessment, and treatment planning.”

If E1 is sound, proceed to assess E2 and E3.
UNIVERSAL FRAMEWORK evidence notes
- Universal Management System – Capability Assessment Evidence

Methodology
When supporting clients to assess capabilities, we use the approach in the PDF below.


Screenshot – our Capability Assessment app
- UNIVERSAL FRAMEWORK – Domain 7. Governance & Accountability – evidence notes

Metric: % of material issues closed on time (audits, reviews, incidents)
E1 – Exists (Gateway Evidence)
Do defined governance roles, accountabilities, and issue management processes exist?
E2 – Enabled
Are issue owners supported with authority, tracking, and escalation mechanisms?
E3 – Executed
Are material issues reliably closed on time, with consequences for non-closure?

Capability Assessment app Screenshot
Evidence and notes for the Governance & Accountability metric (% of material issues closed on time for audits, reviews, and incidents) support a structured maturity assessment across E1 (Exists), E2 (Enabled), and E3 (Executed).
E1 – Exists (Gateway Evidence)
Defined governance roles, accountabilities, and issue management processes are evidenced by documented frameworks such as the three lines of defence model, role mappings, and policies for issue identification, escalation, and remediation.
Notes include charters for risk committees, RACI matrices for audits/reviews/incidents, and procedures setting timelines for material issues (e.g. high-rated audit findings). Supporting artefacts: governance manuals, process flowcharts, and board-approved standards confirming existence.[2][4]
E2 – Enabled
Issue owners are supported via authority assignments (e.g., decision rights and BEAR-like regimes), tracking dashboards for open issues with aging metrics, and escalation workflows like automated notifications or tiered support.
Evidence comprises tools such as incident tracking systems, executive scorecards linking to remediation status, and training on accountability cascades.
Notes highlight resource allocation for compliance functions and remuneration incentives tied to timely closure to ensure enablement.[6][1]
E3 – Executed
Reliable on-time closure of material issues is demonstrated by KPIs such as “% deviations/non-conformances closed on time”, “audit findings closure rate”, and low numbers of open/aged findings, with consequences for delays (e.g. non-remuneration accountability actions, even where inconsistently applied).
Evidence includes reports on remediation timelines, root-cause analyses, and back-tested outcomes showing >80-90% closure rates for material audits/reviews/incidents.
Notes emphasize consistent enforcement via clawbacks/malus and board oversight of ageing metrics to drive accountability.[7][5]
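To make the headline metric concrete, here is a minimal sketch of how “% of material issues closed on time” could be derived from an issue-tracking export. The field names (`material`, `due`, `closed_on`) are hypothetical; note that open issues still within their due date are excluded from the denominator so the rate is not inflated or deflated by work that is not yet due.

```python
# Hypothetical sketch: "% of material issues closed on time" from an
# issue-tracking export. Field names are illustrative only.
from datetime import date

def on_time_closure_rate(issues, as_of):
    """Rate over material issues that are either already closed or past
    their due date as at `as_of`; not-yet-due open issues are excluded."""
    material = [i for i in issues if i["material"]]
    due_or_closed = [i for i in material
                     if i.get("closed_on") is not None or i["due"] <= as_of]
    if not due_or_closed:
        return None  # nothing assessable yet
    on_time = [i for i in due_or_closed
               if i.get("closed_on") is not None and i["closed_on"] <= i["due"]]
    return 100.0 * len(on_time) / len(due_or_closed)

issues = [
    {"id": "A-1", "material": True,  "due": date(2025, 3, 1), "closed_on": date(2025, 2, 20)},
    {"id": "A-2", "material": True,  "due": date(2025, 3, 1), "closed_on": None},  # overdue
    {"id": "A-3", "material": True,  "due": date(2025, 9, 1), "closed_on": None},  # not yet due
    {"id": "A-4", "material": False, "due": date(2025, 3, 1), "closed_on": None},  # not material
]
print(on_time_closure_rate(issues, as_of=date(2025, 6, 30)))  # → 50.0
```

The same export can also drive the ageing metrics mentioned above (days overdue per open material issue).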
Sources
[1] [PDF] Self-assessments of governance, accountability and culture | APRA https://www.apra.gov.au/sites/default/files/information_paper_self-assessment_of_governance_accountability_and_culture.pdf
[2] What is a governance framework? Guide and best practices – Diligent https://www.diligent.com/en-au/resources/blog/what-is-governance-framework
[3] ASIC’s governance and accountability https://www.asic.gov.au/about-asic/what-we-do/how-we-operate/asic-s-governance-and-accountability/
[4] Quality Metrics: Definition, Types, Examples, How to Implement and … https://simplerqms.com/quality-metrics/
[5] 11 Governance Metrics for Stronger Corporate Governance – LinkedIn https://www.linkedin.com/posts/smiit-cyberai_elevating-governance-kpis-key-metrics-activity-7379440010342662144-M6zo
[6] Mechanisms exist to identify responsibility and ownership https://www.well-architected-guide.com/well-architected-pillars/mechanisms-exist-to-identify-responsibility-and-ownership/
[7] Top 10 GRC Metrics and KPIs Every Compliance Leader Should Track https://www.salusgrc.com/blog/top-10-grc-metrics-and-kpis-every-compliance-leader-should-track/
[8] Accountability beyond measurement. The role of meetings in … https://www.sciencedirect.com/science/article/pii/S0743016721002904
[9] Pillar 2: Evidence and accountability | Evaluation Strategy 2024–2028 https://www.industry.gov.au/publications/evaluation-strategy-2024-2028/pillar-2-evidence-and-accountability
[10] Reporting Meaningful Performance Information https://www.anao.gov.au/work/insights/reporting-meaningful-performance-information
[11] APRA Information Paper: Self-assessments of governance … https://www.governanceinstitute.com.au/news_media/apra-information-paper-self-assessments-of-governance-accountability-and-culture/
[12] Closed Jobs in Time Phase without material Issued Complete https://www.epiusers.help/t/closed-jobs-in-time-phase-without-material-issued-complete/115137
[13] Worldwide Governance Indicators – World Bank https://www.worldbank.org/en/publication/worldwide-governance-indicators
[14] Escalation Management https://support.supportbench.net/article/escalation-management
[15] Building Strong SES Accountabilities for Data https://www.finance.gov.au/sites/default/files/2024-06/ses-accountabilities-for-data.pdf

- UNIVERSAL FRAMEWORK – Domain 6. Learning & Continuous Improvement – evidence notes

Metric: Repeat incidents or repeat failures
Evidence
E1 – Exists (Gateway Evidence)
Does a defined organisational process exist for learning from incidents, failures, and reviews?
E2 – Enabled
Are root cause analysis and improvement mechanisms consistently applied?
E3 – Executed
Is there evidence that lessons learned have reduced repeat incidents or failures?
Capability Assessment app Screenshot
You can support each level (E1–E3) with a mix of documented processes, consistent practice, and trend data showing fewer repeats over time.[1][2]
E1 – Exists (Gateway Evidence)
Show that there is a defined, organisation‑wide process for learning from incidents, failures and reviews.[2][3]
Useful evidence:
- Policy or standard that describes your “learning from incidents” process, scope, and responsibilities (e.g. based on steps like collect, evaluate, decide, act, review).[4][2]
- Documented procedure or workflow for incident handling, including investigation, root cause analysis, corrective/preventive actions, and effectiveness review.[3][5]
- Defined triggers for formal reviews (e.g. incident severity thresholds, repeat failures, significant near misses).[2]
- Templates and tools: investigation/RCA templates, lessons learned forms, action‑tracking registers, after‑action review checklists.[5][6]
- Governance: reference in management system, risk framework, or ISO 27001/9001/Safety management documentation showing how learning from events is embedded.[1][2]
- Clear roles: named owners for incident management, investigation leads, learning coordinators or “local safety leaders.”[7][2]
Notes an assessor might write:
- “Documented ‘Learning from Incidents’ procedure v3.2 covers reporting, investigation, RCA, action management, effectiveness review.”
- “Standard templates used for investigations and lessons learned workshops are available in QMS.”
- “Policy A.5.27 references requirements to capture and act on lessons from information security incidents.”[1]
E2 – Enabled (RCA and improvement consistently applied)
Show that the process is not only written down but actively and consistently used for applicable incidents.[5][7]
Useful evidence:
- Incident records that consistently include: description, classification, root cause analysis, contributing factors, and recommended actions.[8][5]
- Use of a recognised RCA or analysis method (e.g. 5 Whys, fishbone, fault tree, discussion groups) applied to all significant incidents, not just a few.[9][7][5]
- Documented criteria for when RCA is required and evidence those criteria are followed (e.g. “all high/critical incidents have an RCA within 10 working days”).[5][1]
- Action logs showing improvement actions are:
- SMART (specific, measurable, achievable, realistic, timely).[2][5]
- Assigned to owners with due dates.
- Tracked through to completion in a unified system.
- Evidence of quality checks on RCAs (e.g. peer review, standard evaluation criteria, checks that causes are evidence‑based and logically linked).[6][10]
- Training material and attendance records for staff who conduct investigations and facilitate lessons learned.[7][2]
Notes an assessor might write:
- “Sample of 10 high‑severity incidents: all have documented RCA using standard template; actions assigned and tracked in central register.”
- “RCA quality checklist used; investigations reviewed by safety manager before closure.”[6]
- “>80% of significant incidents have a documented post‑incident review within 30 days (per KPI dashboard).”[1]
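The “assigned to owners, with due dates, tracked to completion” pattern above is easy to check mechanically. A minimal sketch, assuming a simple action register with hypothetical field names (`owner`, `due`, `completed_on`):

```python
# Hypothetical sketch: flagging improvement actions that are past due
# and not yet completed, from an action register. Fields are illustrative.
from datetime import date

def open_overdue_actions(actions, as_of):
    """Return actions that are past their due date and not completed."""
    return [a for a in actions
            if a["completed_on"] is None and a["due"] < as_of]

actions = [
    {"id": "ACT-1", "owner": "J. Smith", "due": date(2025, 4, 30), "completed_on": date(2025, 4, 12)},
    {"id": "ACT-2", "owner": "A. Lee",   "due": date(2025, 5, 15), "completed_on": None},
]
overdue = open_overdue_actions(actions, as_of=date(2025, 6, 30))
print([a["id"] for a in overdue])  # → ['ACT-2']
```

An assessor would expect this kind of query to come straight out of the central register, with every action carrying an owner and a due date.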
E3 – Executed (Lessons reduce repeat incidents/failures)
Show that learning and improvements are actually changing outcomes, particularly reducing repeat incidents and failures.[7][1]
Evidence types:
- Outcome metrics and trends
- Rate of repeat incidents with the same root cause over time (e.g. per quarter), by system/asset/service.[1]
- Trends in number or rate of high‑impact incidents, normalised (per site, per user, per system).[7][1]
- Percentage of improvement actions that are verified as effective (not just “closed”).[1]
- Studies or internal analyses showing reductions in near misses/adverse events after implementation of specific actions.[7]
- Effectiveness reviews
- Post‑implementation reviews that explicitly ask “Did this action prevent recurrence?” and document evidence (e.g. no similar incidents for X months, control performance data).[9][2]
- Cases where ineffective actions were revised or strengthened following an effectiveness check.[7][1]
- Specific case studies (“stories”)
- Before/after examples: a recurring failure mode, the investigation and system changes, followed by a documented drop in that failure type.[1][7]
- Behaviour change evidence (e.g. fewer “repeat clickers” in phishing tests, improved reporting and safer practices following targeted coaching).[11][12]
- Organisational learning and spread
- Evidence that lessons from one area are communicated and applied more widely (alerts, safety bulletins, toolbox talks, learning sessions).[4][2]
- Records showing controls or standards were updated and adopted across sites as a result of specific incidents.[3][2]
Notes an assessor might write:
- “Dashboard shows 60% reduction in incidents with root cause ‘incorrect configuration’ over 12 months after implementing new change control and training.”[7][1]
- “Effectiveness reviews carried out for 90% of high‑risk incident actions; two ineffective measures were redesigned after follow‑up incidents.”[7]
- “Lessons learned bulletins issued quarterly; evidence of procedure updates and toolbox talks referencing those bulletins.”[4][2]
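The repeat-incident trend that drives E3 can be sketched as follows. This assumes incidents are pre-sorted by date and carry a normalised root-cause label (both hypothetical: real root-cause coding is an analytical judgement, not a free-text match):

```python
# Hypothetical sketch: quarterly share of incidents whose root cause has
# already appeared in an earlier incident. Fields are illustrative only.
from collections import defaultdict

def repeat_rate_by_quarter(incidents):
    """For each quarter, the percentage of incidents whose root cause was
    seen earlier in the (date-ordered) sequence."""
    seen = set()
    counts = defaultdict(lambda: [0, 0])  # quarter -> [repeats, total]
    for inc in incidents:  # assumed sorted by date
        q = inc["quarter"]
        counts[q][1] += 1
        if inc["root_cause"] in seen:
            counts[q][0] += 1
        seen.add(inc["root_cause"])
    return {q: 100.0 * r / t for q, (r, t) in counts.items()}

incidents = [
    {"quarter": "2025Q1", "root_cause": "config error"},
    {"quarter": "2025Q1", "root_cause": "patch lag"},
    {"quarter": "2025Q2", "root_cause": "config error"},   # repeat
    {"quarter": "2025Q2", "root_cause": "vendor outage"},
]
print(repeat_rate_by_quarter(incidents))
# → {'2025Q1': 0.0, '2025Q2': 50.0}
```

A falling repeat rate over successive quarters, tied to specific corrective actions, is exactly the before/after evidence described above.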

Sources
[1] A.5.27 Learning From Information Security Incidents – MSP Lessons … https://www.isms.online/managed-service-providers/a-5-27-learning-from-information-security-incidents-msp-lessons-learned-loops/
[2] [PDF] Components of Organisational Learning From Events https://www.veiligheidvoorop.nu/wp-content/uploads/2024/02/Oct23-LVI-021-IOGP-552-Components-of-Organisational-Learning-from-Events.pdf
[3] [PDF] Guidance on Learning From Incidents, Accidents and Events – IChemE https://www.icheme.org/media/8444/xxv-paper-02.pdf
[4] Enhancing Learning from Incidents – Five Tried and Tested … https://www.icheme.org/media/16945/hazards-28-paper-39.pdf
[5] [PDF] Root cause analysis toolkit – Clinical Excellence Commission https://www.cec.health.nsw.gov.au/__data/assets/pdf_file/0009/606735/Root-cause-analysis-toolkit.pdf
[6] Evaluating the quality of a root cause analysis investigation https://www.bakerhughes.com/cordant/blog/evaluating-quality-root-cause-analysis-investigation
[7] Effectiveness and limitations of an incident-reporting system … – PMC https://pmc.ncbi.nlm.nih.gov/articles/PMC6160204/
[8] using learning potential in the process from reporting an … https://pubmed.ncbi.nlm.nih.gov/23498711/
[9] What Is Root Cause Analysis? The Complete RCA Guide – Splunk https://www.splunk.com/en_us/blog/learn/root-cause-analysis.html
[10] The Effectiveness of Root Cause Analysis: What Does the Literature Tell Us? https://www.sciencedirect.com/science/article/abs/pii/S1553725008340495
[11] Learning From Incidents: Key Indicators of Real Organizational Growth https://www.safetywise.com/post/how-to-know-when-you-ve-truly-learned-from-an-incident
[12] Security Awareness Metrics That Matter: Predicting Breach Reduction https://hoxhunt.com/blog/security-awareness-metrics
[13] [PDF] Learning from incidents November 2019 https://www.coalminesinquiry.qld.gov.au/__data/assets/pdf_file/0005/1621076/Anglo-American-SandSD-Group-Standard-Learning-from-Incidents-November-2019.pdf
[14] Using a survey of incident reporting and learning practices to … – PMC https://pmc.ncbi.nlm.nih.gov/articles/PMC2464979/
[15] Why Aren’t Organisations Learning from What Goes Wrong in Their … https://www.incidentanalytics.com.au/blog/how-can-we-learn-more-from-unwanted-events
