In the ever-evolving field of nuclear energy, effective nuclear training assessments are pivotal. At Accelerant Solutions, we’re pioneering advancements in accreditation self-evaluation reports (ASERs), focusing on evidence-based formats that demonstrate compliance with training objectives and criteria. Over the past several years, Accelerant has engaged with several stations to assist in developing their ASERs. Starting in 2016, the focus of the ASER shifted to an evidence-based format rather than a description of training processes. Indeed, the message from the Accrediting Board was “tell us how you know (you meet the objectives and criteria) rather than what you do.” INPO’s ASER quality checklist further reinforces the desire to provide facts and evidence and to keep process descriptions brief.
Strategic Insights for Effective Nuclear Training Assessments
Thus, writing an evidence-based ASER starts with a search for station evidence. The comprehensive assessment reports are usually a very good source. This article summarizes Accelerant’s observations and suggestions after digesting many of these comprehensive assessment reports, in detail, across numerous stations.
Innovative Practices for Effective Nuclear Training Assessments
Step 1 – Include Vertical Slices
The new ASER format requires a section for each training program. This section highlights key changes since the last renewal and usually includes a station’s best examples of training used to drive performance improvement.
The section is also expected to explain how we know the program’s analysis, design, and development (the ADD in ADDIE) are properly SAT-based. This evidence is often hard to come by, and many stations are reduced to providing a few anecdotes about their training processes.
However, one of the most compelling data sources on a training program’s ADD health comes from vertical slice reviews of training materials. Unfortunately, vertical slices are performed only sporadically.
The remedy is quite simple: stations should consider adding a vertical slice evaluation for each program being reviewed in each comprehensive assessment. This simple adjustment ensures that each program will have at least two vertical slice examples available for the ASER.
Step 2 – Provide Positive Commentary
The nuclear power culture often focuses assessments on identifying gaps: find and fix. Certainly, the accreditation objectives expect stations to use assessments to make ongoing improvements to their programs. But the ASER is also an advocacy document.
We are telling the story that, overall, the programs meet the objectives and criteria. Accordingly, positive factual evidence is required to buttress this position; “we looked and we found that XYZ was solid and here is a fact to support that conclusion.”
However, we often find when writing an ASER that the bulk of the comprehensive assessment commentary is negative and gap-based. The best assessment reports provide better balance and include much more detail on what the team looked at and found to be acceptable or perhaps even strong.
Assessment report writers and approvers should view the document from this point of view and make sure both gap-based and positive evidence are included. The lack of positive evidence becomes painfully clear at ASER writing time.
Step 3 – Discretely Review Past Problems and Operating Experience (OE)
The bulk of a comprehensive assessment will focus on current training program health.
But when writing an ASER, we like to be able to show that we circled back around to verify that previous issues remain closed or perhaps have now blossomed into strengths.
Assessment plans should tell the team to do these verification looks and perhaps list the previous issues that are in most need of being revisited. Many assessments also review programs against recent ATV findings and training-related IERs.
This is extremely valuable in writing an evidence-based ASER; however, many reports tend to whitewash these issues or do superficial reviews: “no problem here.”
A broader review of all the ASER data often makes that conclusion hard to justify and we are caught with having to explain why we didn’t learn soon enough from the industry. Lastly, and most startling, is that comprehensive assessments seldom review the key training OE from within their own fleet.
We have seen OJT/TPE issues, for example, percolate across a fleet one ATV at a time and wonder why a station didn’t pick up on the issue much sooner.
In conclusion, commission assessment teams to specifically review past training issues, industry findings and IERs, and recent fleet training issues. And challenge them to find learning opportunities, small incremental improvements, even if a finding-level issue may not be present.
Step 4 – Improve Finding Quality
In writing an ASER, a cold read of a comprehensive assessment report often reveals poorly written findings (these go by many levels and names at different utilities). Sometimes, the gap statements are overly broad based on the facts provided.
Others provide a collection of examples that don’t seem to relate to the problem. Few provide statistics like “X of Y lesson plans reviewed found shortfalls in Z.” (Is a single defect being used to support a broad conclusion?)
The best advice is to follow INPO’s guidance for construction of an area for improvement.
Collect your best three validated, similar examples, then work on a reasonably narrow problem statement that bounds them together. Most utilities have INPO-loaned employees or other INPO-qualified evaluators.
They can be a good resource to QC the quality of these key team products.
Step 5 – Avoid Super-SIFs
Avoid collapsing similar issues into a super-finding. Board feedback in recent years is a preference for narrower, understandable, and clearly solvable training program problems.
A super-SIF invites challenges to closure and sustainability and appears more threatening to accreditation. Use one only if that is truly the case.
Step 6 – Make The Early SIF Call
Many stations file their comprehensive assessment reports away and proceed to fixing the identified issues.
Then, in year five or six of the accreditation cycle, they start to decide on their ASER SIFs.
We have seen cases where a SIF condition was identified in an assessment in year two, but the corresponding deeper cause analysis was delayed until year six, only after it was declared a SIF. This makes problem resolution appear slow.
One utility addressed this issue by requiring that assessment report issues that could potentially be called a SIF be captured on a purpose-built form for senior oversight committee determination when the issue is first identified.
This simple solution makes for much more timely issue resolution and a cleaner effectiveness review. And you can always decide late in the renewal cycle to describe the issue in the ASER without writing it up as a SIF.
It’s much trickier to go the other way.
In conclusion, with a six-year accreditation cycle, your future ASER writers and training supervisors may not have been around when past assessments were conducted.
They can’t go by what they know from living the history.
They only have the assessment report.
The above lessons learned are aimed at making that job much easier and helping stations capture their evidence and facts along the journey.
An evidence-based ASER requires it.