By Paul Burden
Companies have turned, and continue to turn, to technology products to gain a competitive edge. With widespread adoption, however, the edge must come from how these tools are applied. A key to applying complex technology tools such as CAD software is maintaining awareness of skill levels within the user community and providing appropriate solutions when skills fall below an acceptable threshold. An absent or degraded skill set with respect to complex technology significantly reduces overall productivity.
That skill gaps occur as technology changes and evolves is not unexpected. With each new release of CAD software, hundreds of changes and enhancements are introduced. Suddenly, users trained on earlier releases are faced with a skill gap.
Even human nature leads to skill gaps. People of similar ability who receive the same training often show different levels of productivity afterwards. Why? Reasons include the following:
- Different learning styles
- Different levels of retention of training content
- Different amounts of time between training and use
Many assessment options on the market are designed to provide companies with an inventory of the skill levels within their user communities. They do this by focusing on specific areas of CAD/CAM use with skill-assessing questions and tasks that collectively identify skill gaps arising for any of the reasons described above. Results may be accompanied by recommendations to target those gaps with focused training solutions, ensuring the technology tools are applied to their maximum benefit.
Any assessment must have the attributes of reliability, validity, and fairness with respect to the purpose of the assessment and interpretation of the results.
Reliability refers to the consistency or reproducibility of assessment results. A user who takes the same assessment at different times should produce similar results. Likewise, a user who takes multiple versions of an assessment, which are intended to be equivalent in content and scope, should produce similar results each time.
Factors that can contribute to unreliable results include:
- Questions are ambiguous.
- Questions assess interpretation rather than application.
- Questions require selection of more than one answer.
- Responses from previous sessions are memorized.
- Response selection is the result of guessing.
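Test-retest consistency, the core of reliability as described above, can be quantified with a simple correlation between two administrations of the same assessment. The sketch below uses illustrative placeholder scores, not real assessment data; a Pearson correlation near 1.0 indicates consistent, reproducible results.

```python
# Sketch: quantifying test-retest reliability as a Pearson correlation.
# The score lists are illustrative placeholders, not real assessment data.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# The same five users assessed on two occasions (percent scores).
first_attempt  = [72, 85, 90, 60, 78]
second_attempt = [70, 88, 86, 65, 80]

r = pearson(first_attempt, second_attempt)
# r close to 1.0 suggests the assessment yields reproducible results;
# a low r would point to one of the unreliability factors listed above.
```

A full psychometric analysis would go further (for example, comparing equivalent assessment versions the same way), but even this simple check flags an unreliable instrument quickly.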
Validity refers to the soundness of the conclusions that can be drawn from assessment results. Customers are typically seeking a baseline of user proficiency with a CAD/CAM tool; therefore, the validity of any assessment in determining proficiency is important. Factors that would be used to determine the validity of any assessment include:
- The extent to which the assessment addresses an appropriate sample of the skills and knowledge required for proficiency.
- The extent to which the assessment finding is representative of a user’s actual proficiency with all of the required domains of the CAD/CAM tool, not just the specific sample topics presented on the assessment.
- The extent to which the assessment is verified on representative sample groups of known proficiency within the target population.
Fairness refers to the extent that assessment findings are comparable between individuals and groups.
Three common assessment formats available are hands-on, multiple-choice quiz, and hybrid.
Hands-on assessments consist of specific tasks that the user performs on actual data; the results are then analyzed. An example of a hands-on assessment task is shown below. For this type of question, the user accesses the provided CAD file, performs the prescribed task, and submits the modified file for evaluation.
This assessment format may employ an application expert to examine each submitted model, verify the accuracy and completeness of each task, and determine the user's ability and proficiency. A score is determined for each task using predefined evaluation criteria.
This format of assessment has a high level of reliability because it eliminates guessing and requires users to demonstrate performance and ability. Because end results are examined, this format also lends itself to measures of quality and end-result proficiency. Models can be screened against organization-specific methodologies and best practices, resulting in a thorough examination of a user's skills and knowledge.
Assessing an appropriate sample of skills and knowledge with this format, to ensure validity, requires the most time of the three formats discussed here. It also provides the truest results from which conclusions about skill and ability can be drawn.
Multiple-choice assessments consist of a series of questions that require the user to select the correct response from at least four possible responses. An example of a multiple-choice question is shown below. For this type of question, the user does not use any provided data; the user simply reads the question, refers to the images, and selects the correct response.
This type of assessment can be analyzed very quickly with results available almost immediately upon completion. Many online assessments of this format auto-evaluate the responses and return a report to the user.
This format of assessment has lower reliability than the hands-on format because the user is not required to interact with the software to complete it. With one correct response out of four per question, users have a 25% chance of guessing the correct response to each question and never demonstrate performance and ability. The lower reliability of this format also lowers its validity, since conclusions about a user's actual proficiency based on the results are limited.
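The 25% per-question guessing chance compounds across an assessment in a way that is easy to quantify: with independent four-option questions, the number of correct guesses follows a binomial distribution. The sketch below uses an illustrative 20-question length and 70% pass mark (assumptions, not fixed values) to show that guessing inflates every score but almost never produces a passing one.

```python
# Sketch: how far pure guessing gets a user on a multiple-choice assessment.
# The 20-question length and 70% pass mark are illustrative assumptions.
from math import comb

N_QUESTIONS = 20
P_GUESS = 0.25      # one correct option out of four
PASS_MARK = 14      # 70% of 20 questions

def binom_tail(n, p, k):
    """P(X >= k) for a binomial(n, p) count of correct guesses."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

expected_correct = N_QUESTIONS * P_GUESS          # 5 questions on average
p_pass_by_luck = binom_tail(N_QUESTIONS, P_GUESS, PASS_MARK)
# p_pass_by_luck is far below 0.1%: a high score still means something,
# even though guessing adds noise to every individual score.
```

This is why the format remains a useful indicator of proficiency despite its lower reliability: chance alone rarely clears a sensible pass mark.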
This format does permit a greater number of questions than the hands-on format, so a wider sample of skills and knowledge can be assessed. It meets the requirements of many customers as an indicator of proficiency for assessing training needs.
Hybrid assessments are a compromise between the hands-on and multiple-choice formats. They require the user to perform specific tasks on actual data and then select the one correct multiple-choice response, verifying that the tasks were completed as required. A sample hybrid question is shown below.
To balance the attributes of reliability, validity, and fairness, the assessments you use should ideally combine the three formats described here. This maximizes the number of topics covered within an amount of time your organization finds acceptable for users to spend completing the assessment. The results will provide valuable indicators of skill gaps within the user community that can be addressed with targeted training solutions, enabling you to leverage your resources, your people and your technology tools alike, to the fullest extent.
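One way to realize a combination of formats is a weighted composite score, where each format contributes in proportion to the confidence placed in it. The weights and scores below are illustrative assumptions, not prescribed values; an organization would tune the weights to its own priorities.

```python
# Sketch: combining the three assessment formats into one composite score.
# Weights and scores are illustrative assumptions, not prescribed values.

# A heavier weight on hands-on reflects its higher reliability.
WEIGHTS = {"hands_on": 0.5, "hybrid": 0.3, "multiple_choice": 0.2}

def composite_score(scores, weights=WEIGHTS):
    """Weighted average of per-format percent scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[fmt] * scores[fmt] for fmt in weights)

user_scores = {"hands_on": 80.0, "hybrid": 70.0, "multiple_choice": 90.0}
overall = composite_score(user_scores)   # 0.5*80 + 0.3*70 + 0.2*90
```

Comparing composite scores against a role-specific threshold is then a straightforward way to flag users for targeted training.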