Research & Development of New Occupational Analysis & Training Evaluation Technologies

Jimmy L. Mitchell & David Tucker
Institute for Job & Occupational Analysis
San Antonio, TX, U.S.A.

Jonathan Fast
Metrica, Inc.
San Antonio, TX, U.S.A.

Winston Bennett, Jr. & Walter G. Albert
Air Force Research Laboratory
Brooks AFB, TX, U.S.A.

Abstract

Today’s military services are beginning to take advantage of the network age by collecting and sharing data in a more collaborative way. This collaboration has taken the form of e-mail, posting information on the Internet, and some collaborative processing. Some mainframe applications have been developed to collect manpower, personnel, and training (MPT) data using electronic forms, automating what has traditionally been a paper-and-pencil process, and some moves are underway to introduce electronic form flow to the networked office. Recently, some agencies have begun collecting these data electronically on computer disks or through electronic forms transmitted over the Internet. Internet technology provides three key benefits: hardware independence, easy access to data, and improved software portability.

With the recent operational implementation of automated surveying technology (AUTHOR 3.0 & OASurv 3.1) in the Air Force, a number of other possibilities become feasible: intelligent authoring of survey items and task lists, use of electronic mail for survey administration, and, eventually, interactive, platform-independent Internet survey administration using Java capabilities. Such Internet data collection will support training requirements analysis, occupational surveys, collection of job and training history information, task performance and time-to-learn assessments, real-time consensus estimation of learning curves by topic or task cluster, and refinement of training resource requirements analysis by type of organization (mission) for representative units.

The developing interactivity of the Internet suggests that on-call reference files (including film clips and animated GIF files), consultant services (chat rooms, etc.), just-in-time training, and reinforcing feedback through reports to individuals can be economically developed and provided. Web posting of survey and analysis results, with e-mail notification to respondents, is also now both feasible and practical.

There is, however, much work to be accomplished in enhancing data relevance and quality, in developing and applying intelligent search engines for consolidating occupations and training requirements, and in building other interactive enhancements. Current Internet developments and usage suggest that significant reductions in time and cost can be achieved, with a strong probability of enhanced reliability and validity of survey responses as well as increased participant motivation and involvement. Two major developments involving such new technologies are now underway:

Research & Development in Integrated Training Technologies (INTECH)

INTECH is envisioned as a Windows-based platform that will integrate a number of training decision and evaluation technologies that have been developed in recent years, as well as new systems yet to be developed. In the initial prototype development, the focus is on the Training Efficiency and Effectiveness Methodology (TEEM) and the Training Impact Decision Support (TIDES) system. TEEM is a system used to enhance training requirements evaluation (Ford & Sego, 1990; Teachout, Sego, & Olea, 1993) by graphically comparing current training hours against a variety of training requirements indicators (Task Difficulty, Training Emphasis, Testing Importance, etc.) to highlight under- and over-trained areas. TIDES is a computer-based decision support system for assessing the impact of proposed and expected changes in specialty training programs (Chin, Lamb, Bennett, & Vaughan, 1992; Gosc, Mitchell, Knight, Stone, Reuter, Smith, Bennett, & Bennett, 1995; Stone, Weissmuller, & Mitchell, 1995). Since both systems share a common starting point (i.e., the tasks performed within the occupation), it is reasonable to integrate the databases to ensure transferability of selected subsets of information from one system to the other. There are, however, a number of possible problem areas where research is underway to ensure compatibility of such informational databases.
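To make the shared starting point concrete, the following minimal sketch shows how indicator data and course-hour data keyed by a common task identifier could be joined to flag under- and over-trained areas. All class, field, and task names are hypothetical, and the thresholding rule is purely illustrative, not the operational TEEM algorithm:

```java
import java.util.HashMap;
import java.util.Map;

public class TaskDataBridge {

    // Hypothetical TEEM-side record: requirement indicators for one task.
    record TaskIndicators(double taskDifficulty, double trainingEmphasis,
                          double percentPerforming) { }

    public static void main(String[] args) {
        // TEEM-style indicator data, keyed by task ID.
        Map<String, TaskIndicators> indicators = new HashMap<>();
        indicators.put("A0001", new TaskIndicators(6.2, 5.8, 74.0));
        indicators.put("A0002", new TaskIndicators(2.1, 1.4, 12.0));

        // TIDES-style course data: current training hours per task.
        Map<String, Double> trainingHours = new HashMap<>();
        trainingHours.put("A0001", 1.0);   // heavily required, lightly trained
        trainingHours.put("A0002", 8.0);   // lightly required, heavily trained

        // Compare hours against a composite requirement index
        // (an invented rule, for illustration only).
        for (var e : indicators.entrySet()) {
            TaskIndicators ti = e.getValue();
            double requirement = (ti.taskDifficulty() + ti.trainingEmphasis()) / 2.0;
            double hours = trainingHours.getOrDefault(e.getKey(), 0.0);
            if (hours < requirement - 2.0)
                System.out.println(e.getKey() + ": possibly under-trained");
            else if (hours > requirement + 2.0)
                System.out.println(e.getKey() + ": possibly over-trained");
        }
    }
}
```

Because both maps share the task ID as key, any subset of one database can be carried over to the other, which is the transferability property the integration research must preserve.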

Plan of Instruction (POI) and Specialty Training Standard (STS) Level Data Collection and Analysis: Over the past several years, Bennett and his colleagues (Bennett, 1997; Mitchell, Yadrick, & Bennett, 1993) have been engaged in a number of training needs assessment efforts examining key indicators of training requirements at a variety of levels of analysis. The goal of this research has not been to identify a single "best" approach to defining training requirements, but to examine several approaches that could be implemented to meet customer requirements. One of the most interesting outcomes to date has been a demonstration of the use of STS-level statements as the units of analysis for data collection from the field. Specifically, Bennett (1997) demonstrated that gathering estimates of time spent, percent of members performing, opportunity to perform trained tasks, and On-the-Job Training (OJT) time from supervisors provides information that facilitates training decision making and course redesign. Additional research remains to be conducted on the tradeoffs in data specificity associated with this approach, and on the extent to which specific information about the behavioral, psychomotor, and cognitive aspects of the content can be derived from STS-level statements to drive the development of training objectives.
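A minimal sketch of how the four supervisor estimates might be represented and aggregated follows; the field names, the OJT-time unit, and the averaging rule are assumptions for illustration, not the operational survey format:

```java
import java.util.List;

public class StsEstimates {

    // One supervisor's ratings for one STS-level statement (hypothetical fields).
    record Rating(String stsItem, double timeSpent, double pctPerforming,
                  double opportunityToPerform, double ojtMonths) { }

    // Mean OJT-time estimate across supervisors for one STS statement.
    static double meanOjtMonths(List<Rating> ratings) {
        return ratings.stream().mapToDouble(Rating::ojtMonths).average().orElse(0.0);
    }

    public static void main(String[] args) {
        List<Rating> r = List.of(
            new Rating("STS 5a", 3.0, 80.0, 4.0, 2.0),
            new Rating("STS 5a", 5.0, 90.0, 5.0, 3.0));
        System.out.printf("Mean OJT estimate: %.1f months%n", meanOjtMonths(r));
    }
}
```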

R&D and field testing are also required to clarify the relationship between STS/POI items and task modules. The fact that we are dealing with three different levels of information (tasks, modules, and STS items or POI subjects) has long been a disconnect for proper career field management and decision making. Work now underway will determine how tasks and task modules can be used to define and refine Career Field Education and Training Plan (CFETP) items, which should in turn drive improved Specialty Training Standards and Plans of Instruction. No one has yet attempted to integrate these levels, although work in this direction has been needed for years. With the advent of the CFETP as "the" primary control document for education and training in a specialty, there is a greater need to conduct the integrating research to identify the "best" process for such improvements.
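The three levels of information at issue can be sketched as a simple linkage structure. All identifiers below are hypothetical, and the one-to-one module-to-STS mapping is a simplifying assumption; whether the operational mapping is one-to-one or many-to-many is exactly the open question:

```java
import java.util.List;
import java.util.Map;

public class LevelLinkage {
    public static void main(String[] args) {
        // Level 1 -> Level 2: tasks grouped into task modules.
        Map<String, List<String>> moduleTasks = Map.of(
            "MOD-01", List.of("T0001", "T0002", "T0003"),
            "MOD-02", List.of("T0004", "T0005"));

        // Level 2 -> Level 3: modules mapped to STS/POI items.
        Map<String, String> moduleToSts = Map.of(
            "MOD-01", "STS 7b",
            "MOD-02", "STS 7c");

        // Trace each task up through its module to the STS item.
        moduleTasks.forEach((mod, tasks) ->
            tasks.forEach(t ->
                System.out.println(t + " -> " + mod + " -> " + moduleToSts.get(mod))));
    }
}
```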

Field Test, Validation, and Refinement of "Actual" Time Estimation Procedures: Recently, there have been presentations and discussions about implementing the Actual Time Scaling procedures in OMS as the best method for gathering time-spent estimates from field respondents (see Albert, Phalen, Selander, Dittmar, Tucker, & Weissmuller, 1994). Two major areas of study are relevant to this implementation. The first is the amount of time required of respondents to complete a survey, since the Actual Time Scaling methodology requires respondents to review their responses several times. This is not necessarily a problem for a fairly short set of items (fewer than 200), but it can be potentially damaging to a data collection effort of the magnitude of a typical occupational survey, where the task list can run 1,500 to 2,000 tasks. The second is empirical research to determine the extent to which Actual Time Scaling provides more construct-valid and reliable estimates of time spent than other scaling approaches. A study of OJT time currently being completed by Bennett and colleagues (see Bennett, Sego, Teachout, & Phalen, 1994) explored several methods for gathering time-spent ratings in OJT. Results indicate that there are a number of reasonable methods for gathering respondent data, including a paper-and-pencil version of the Actual Time Scale. Results further indicate a substantial difference in the time required of respondents to complete an Actual Time Scale compared to other methods. Actual Time Scaling may have a role in future training needs assessment and occupational survey data collection, but its potential contribution must be examined in systematic studies against measurable criteria, such as minimized variance within job groups. There are clearly places where relative time provides the needed information. Similarly, there are other places, such as determining the impact of training on workplace task performance and training time, where more precise estimates, in units of time, are needed for costing purposes.
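The contrast between the two kinds of estimates can be illustrated with a short sketch. The ratings and hours below are invented, and the normalization shown is the conventional relative-time computation used in occupational analysis, not the specific Actual Time Scaling procedure:

```java
import java.util.Arrays;

public class TimeScales {
    public static void main(String[] args) {
        // Relative ratings on a 9-point scale; only ratios are meaningful.
        double[] relativeRatings = {2, 5, 9, 4};
        double sum = Arrays.stream(relativeRatings).sum();

        // Relative time: percent of the job, summing to 100 across tasks.
        for (int i = 0; i < relativeRatings.length; i++)
            System.out.printf("Task %d: %.1f%% of job time%n",
                              i + 1, 100.0 * relativeRatings[i] / sum);

        // "Actual" time: absolute units, directly usable for costing.
        double[] actualHoursPerWeek = {1.5, 4.0, 7.0, 3.0};
        System.out.printf("Total reported: %.1f hours/week%n",
                          Arrays.stream(actualHoursPerWeek).sum());
    }
}
```

The relative computation loses the absolute scale, which is why costing applications need estimates in units of time even when relative ratings are cheaper to collect.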

Cognitive Task Analysis Methodologies: In a recent study to identify medical training needs, Bennett (1997) noted the need for methods to empirically link behavioral task-analytic methods and outcomes with cognitive task analysis methods, to provide specific information for developing learning objectives for training. Current cognitive task analysis methods are laborious, time consuming, and not readily amenable to automation. One potential line of research is to define the current Air Education and Training Command (HQ AETC) Specialty Training Standard knowledge and performance categories in ways that can be automated for use alongside more traditional behavioral approaches. The creation of automated methods for cognitive task analysis would greatly enhance the relevance and usability of task-level data for education and training development and evaluation, and would enable key cognitive aspects of work to be used in the selection and classification of likely job incumbents.
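As a purely illustrative sketch of what "automatable" categorization might look like, the fragment below tags task statements by their action verb. The verb-to-category table is invented; defining the real knowledge and performance categories in automatable form is precisely the research need identified here:

```java
import java.util.Map;

public class VerbTagger {
    // Hypothetical verb-to-category table, for illustration only.
    static final Map<String, String> VERB_CATEGORY = Map.of(
        "troubleshoot", "cognitive/diagnostic",
        "calibrate",    "psychomotor/procedural",
        "explain",      "knowledge/declarative");

    static String categorize(String taskStatement) {
        // Task statements conventionally begin with an action verb.
        String verb = taskStatement.trim().toLowerCase().split("\\s+")[0];
        return VERB_CATEGORY.getOrDefault(verb, "uncategorized");
    }

    public static void main(String[] args) {
        System.out.println(categorize("Troubleshoot radar display faults"));
        System.out.println(categorize("Calibrate test equipment"));
    }
}
```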

Principles Inventories: Starting with the Electronic Principles Inventory (EPI) approach, new methods need to be developed for identifying similar principles in other functional areas (for example, a medical principles inventory across all medical-related AFSs, and potentially across Services). Some of these methodological needs can be met using techniques developed for the USAF Job Structuring Technology (JST) from automated Office of Personnel Management (OPM) data (Driskill, Weissmuller, Moon, & Black, 1995). Such inventories need to be examined using various types of rating scales for different constructs, such as importance, training emphasis, learning difficulty, alternative time-to-train assessments, opportunities to perform in field settings, testing importance, and likelihood of proficiency/performance retention in the absence of practice. The Canadian Forces have been gathering KSAO (knowledge, skills, abilities, and other characteristics) data for a number of years (Hawrysh, 1985). R&D is needed on how best to develop the knowledge (or topic) and skills lists. The Canadian approach is to ask all respondents to indicate their "Level of Knowledge Required" as 1) basic, 2) detailed, or 3) comprehensive; "Level of Skill Required" is rated as 1) semiskilled, 2) skilled, or 3) highly skilled. Work exploring and developing quantitative linkages between knowledge and skills and traditional tasks needs to be accomplished.
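A minimal sketch encoding the Canadian rating scheme just described follows; the enum representation is an assumption about implementation, not Canadian Forces practice, and the example item is invented:

```java
public class KsaoRatings {
    enum KnowledgeLevel { BASIC, DETAILED, COMPREHENSIVE }   // rated 1, 2, 3
    enum SkillLevel { SEMISKILLED, SKILLED, HIGHLY_SKILLED } // rated 1, 2, 3

    // One respondent's rating of one knowledge/skill item.
    record KsaoRating(String item, KnowledgeLevel knowledge, SkillLevel skill) { }

    public static void main(String[] args) {
        KsaoRating r = new KsaoRating("Electronic principles: Ohm's law",
                KnowledgeLevel.DETAILED, SkillLevel.SKILLED);
        // Numeric form (1-3) as gathered on the survey instrument.
        System.out.println(r.item() + ": knowledge=" + (r.knowledge().ordinal() + 1)
                + ", skill=" + (r.skill().ordinal() + 1));
    }
}
```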

Job and Task-Based Clustering and Analysis: At present there is no exploratory R&D identifying appropriate module (and meta-module) values for other MPT uses. For example, what is the appropriate task-characteristic summary statistic for properly reflecting job content or manpower requirements (sum of task percent performing, mean, median, maximum value, etc.)? In describing job and training flows for aerospace propulsion, the average percent performing across the tasks within task modules was found to understate both the descriptive specificity and the training required (Perrin, Mitchell, Knight, Gosc, Thoreson, Rueter, & Hand, 1996). In that instance, the mean of the top five tasks provided a better characterization of the job and specialty characteristics. R&D is needed to explore these issues and recommend appropriate statistics and decision rules for various MPT applications. Future research should examine the implications of gathering several levels of information simultaneously so that linkages and interrelationships can be examined and demonstrated. Studies of the linkages among the various levels of data, and of the practical and research implications of those linkages for identifying selection and training requirements, must be conducted. Future analytic requirements underscore the need to tie different levels of data analysis to developing national occupational and labor databases, so that common job and training areas for training consolidation and privatization can be identified, as well as potential occupational pools for recruiting purposes (Borman, 1996).
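The candidate summary statistics can be illustrated with a short sketch. The percent-performing values are invented, but the computations (sum, mean, median, maximum, and the mean of the top five tasks) are those named above and in Perrin et al. (1996):

```java
import java.util.Arrays;

public class ModuleStatistics {
    public static void main(String[] args) {
        // Percent-performing values for the tasks in one module (toy data).
        double[] pctPerforming = {88, 81, 76, 70, 64, 22, 15, 9, 4};
        double[] sorted = pctPerforming.clone();
        Arrays.sort(sorted); // ascending

        double sum = Arrays.stream(pctPerforming).sum();
        double mean = sum / pctPerforming.length;
        double median = sorted[sorted.length / 2]; // shortcut for odd-length arrays
        double max = sorted[sorted.length - 1];

        // Mean of the top five tasks by percent performing.
        double top5 = Arrays.stream(sorted)
                .skip(Math.max(0, sorted.length - 5)).average().orElse(0);

        System.out.printf("sum=%.0f mean=%.1f median=%.0f max=%.0f top5mean=%.1f%n",
                          sum, mean, median, max, top5);
    }
}
```

With these toy values the overall mean is 47.7 while the top-five mean is 75.8, illustrating how averaging across all module tasks can understate what the module's core tasks actually demand.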

MPT Internet Development and Delivery Tools (GENSURV)

A related project for this coming year focuses on demonstrating how automated survey technology can be adapted to emerging Internet innovations. The objectives of this project are to show how data collection for a variety of purposes (training evaluation, occupational analysis, decision support systems, etc.) can be improved and expanded through the use of these new technologies. A number of development areas are of interest in this project.

Platform-Independent Survey Authoring and Administration: The preliminary technology for AF automated surveying presently executes in a DOS environment. This approach has been shown to be feasible and ensures maximum reliability of administrations in the field (Mitchell, Weissmuller, Bennett, Agee, & Albert, 1995). User acceptance of the current approach remains quite high. There were a number of prudent reasons for taking the DOS approach in the initial surveying efforts. Field data from over 6,000 administrations indicate a wide variety of platforms and varying levels of network software and installations operating in the operational AF, and the continued diversity of Windows operating systems (3.1, 95, 98, NT, etc.) precluded reliable survey administration in Windows-type environments in field settings. Work is now underway to adapt both the authoring tool and the executable survey engine to be platform-independent. Using Java or an equivalent approach avoids the pitfalls and software faults associated with a Windows-dependent environment. Initial investigations indicate that this new approach to occupational analysis and training evaluation is not only feasible but will probably yield quicker results and improved data quality.
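A minimal console sketch of the platform-independence argument follows: the same compiled bytecode runs on any platform with a JVM, which is the property motivating the move away from the DOS executable. This is an illustration of the Java approach, not the AUTHOR/OASurv engine itself, and the task statements and scale anchors are invented:

```java
import java.util.Scanner;

public class MiniSurvey {
    public static void main(String[] args) {
        String[] tasks = {"Inspect engine inlet", "Document maintenance actions"};
        int[] ratings = new int[tasks.length];
        Scanner in = new Scanner(System.in);

        for (int i = 0; i < tasks.length; i++) {
            System.out.print("Relative time spent (1-9, 0 = not performed) for \""
                             + tasks[i] + "\": ");
            ratings[i] = Math.max(0, Math.min(9, in.nextInt())); // clamp to scale
        }
        System.out.println("Responses recorded: " + java.util.Arrays.toString(ratings));
    }
}
```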

Interactive Data Collection and Feedback: Among the advantages of the Internet is greater interaction between a survey respondent and a central database, although some of this interactivity is also possible with hard-disk-based software systems (Albert et al., 1994). Such interaction permits respondents to look up definitions, specifications, course titles, official job titles, unit and base designations, and other information that might be needed to complete a survey. In addition, respondents can be queried selectively, based on their responses to earlier questions, or can be asked a series of questions based on a logic tree built into a particular survey instrument. Background information can also be selectively presented so that a survey taker sees only the questions relevant to his or her status (military, civilian, contractor, etc.). This type of interactivity is straightforward with Internet technologies, although many details remain to be worked out concerning the type of central database needed in each study and the degree of development needed for complex logic trees.
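The logic-tree behavior can be sketched briefly. The status categories come from the example above, while the question wording and routing code are an invented illustration:

```java
import java.util.Scanner;

public class BranchingSurvey {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Status (1=military, 2=civilian, 3=contractor): ");
        int status = in.nextInt();

        // Only the branch matching the earlier response is presented.
        switch (status) {
            case 1 -> System.out.print("Current grade/rank: ");
            case 2 -> System.out.print("Civilian series and grade: ");
            case 3 -> System.out.print("Contract organization: ");
            default -> System.out.println("Unrecognized status; skipping branch.");
        }
        if (status >= 1 && status <= 3)
            System.out.println("Recorded: " + in.next());
    }
}
```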

Another aspect of such interactivity that needs to be defined for operational practice is the amount of feedback to be given to survey respondents. For example, it should be possible to provide a respondent with a summary of the information he or she has provided. For an occupational survey, this might take the form of an individual job description with completed calculations of relative or "actual" time spent per task and/or per major grouping of tasks (duties, modules, etc.). This would permit the individual to do a final review of the information and make any necessary changes, corrections, or modifications. In the initial laboratory study of automated surveys (Albert et al., 1994), such feedback was used as a final quality control step to ensure realistic responses and assessment of time allocation, using a variety of time rating scales. Such a file could also be printed, downloaded to disk, or otherwise stored for the individual’s later use or disposition; this type of feedback has motivating potential, and the degree of quality improvement from such a procedure should be assessed. As experience with interactive surveys develops, a number of other ways such interactivity can help motivate individuals to provide realistic and reliable information will be identified.
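A short sketch of such a feedback summary follows; the duty titles and ratings are invented, and the roll-up of relative time to the duty level is one plausible form of the individual job description described above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FeedbackSummary {
    public static void main(String[] args) {
        // Raw relative ratings summed by duty (toy data).
        Map<String, Double> ratingByDuty = new LinkedHashMap<>();
        ratingByDuty.put("A. Organizing and planning", 6.0);
        ratingByDuty.put("B. Inspecting and evaluating", 14.0);
        ratingByDuty.put("C. Performing maintenance", 30.0);

        double total = ratingByDuty.values().stream()
                .mapToDouble(Double::doubleValue).sum();

        // Present computed relative time for the respondent's final review.
        System.out.println("Your job description (review before submitting):");
        ratingByDuty.forEach((duty, r) ->
            System.out.printf("  %-32s %5.1f%% of time%n", duty, 100.0 * r / total));
    }
}
```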

Summary

With the advent of the Internet and increased automation, the military services are in a unique position to contribute heavily to our understanding of the utility of occupational analysis data for a variety of purposes. In addition, occupational data can be made available to a wider variety of potential users than ever before. Additional research and development is needed in the key areas highlighted above. The quality of the underlying data that analysts provide to customers (training developers, decision makers, etc.) must rest on systematic and sound scientific practices. Only then can the real importance and practical usefulness of the data, and of these new technology innovations, be realized and fully exploited.

References

Albert, W.G., Phalen, W.J., Selander, D.M., Dittmar, M.J., Tucker, D.L., & Weissmuller, J.J. (1994). Large-scale laboratory test of occupational survey software and scaling procedures. Proceedings of the 36th Annual Meeting of the International Military Testing Association. Rotterdam, The Netherlands: European Members of the IMTA.

Bennett, W., Jr. (1997). Results from an assessment of readiness and sustainment training requirements for reserve, guard and active duty medical technicians. Presentation at the Winter Conference of the Reserve Officers' Association, Washington, D.C.

Bennett, W., Jr., Sego, D.J., Teachout, M.S., & Phalen, W.J. (1994). On-the-job training time as a criterion for training: A comparison of several task-based estimation approaches. Proceedings of the 36th Annual Conference of the International Military Testing Association (pp. 37-42). Rotterdam, The Netherlands: European Members of the IMTA.

Borman, W.C. (1996). The occupational information network: An updated dictionary of occupational titles. Military Psychology, 8, 263-265.

Chin, K.B.O., Lamb, T.A., Bennett, W., Jr., & Vaughan, D.S. (1992). Introduction to training decisions modeling technologies: The training decisions system. (AL-TP-1992-0014). Brooks AFB, TX: Armstrong Laboratory, Human Resources Directorate, Technical Training Research Division.

Driskill, W.E., Weissmuller, J.J., Moon, R.A.J., & Black, D.E. (1995). Specialty structuring based on task modules semantically linked to knowledge domain lexicons. Proceedings of the 37th Annual Conference of the International Military Testing Association (pp. 244-250). Toronto, Canada: International Military Testing Association.

Ford, J.K., & Sego, D. (1990). Linking training evaluation to training needs assessment: A conceptual model (AFHRL-TP-90-60). Brooks AFB, TX: Training Systems Division, Air Force Human Resources Laboratory.

Gosc, R.L., Mitchell, J.L., Knight, J.R., Stone, B.M., Reuter, F.H., Smith, A.M., Bennett, T.M., & Bennett, W., Jr. (1995). Training impact decision system for Air Force career fields: TIDES operational guide (AL/HR-TP-1995-0024). Brooks AFB, TX: Armstrong Laboratory, Human Resources Directorate, Technical Training Research Division.

Hawrysh, F.J. (1985). Getting more from occupational analysis surveys. Proceedings of the Fifth International Occupational Analysts Workshop (pp. 167-171). Randolph Air Force Base, TX: USAF Occupational Measurement Center.

Mitchell, J.L., Bennett, W., Jr., Wimpee, W.E., Grimes, G.R., Stone, B.M., & Reuter, F.H. (1995). Research and development of the training impact decision system (TIDES): A report and annotated bibliography (AL/HR-TP-1995-0038). Brooks AFB, TX: Armstrong Laboratory, Human Resources Directorate, Technical Training Research Division.

Mitchell, J.L., Weissmuller, J.J., Bennett, W., Jr., Agee, R.C., & Albert, W.G. (1995). Final results of a field study of the feasibility of computer-assisted occupational surveys: Stability of task and job information. Proceedings of the 37th Annual Conference of the International Military Testing Association (pp. 231-236). Toronto, Canada: International Military Testing Association.

Mitchell, J.L., Yadrick, R.M., & Bennett, W. Jr. (1993). Estimating training requirements from job and training pattern simulations. Military Psychology, 5, 1-20.

Perrin, B.M., Mitchell, J.L., Knight, J.R., Gosc, R.L., Thoreson, S.A., Rueter, F.H., & Hand, D. (1996). Contributive research and development for the training decision system: Final technical report (CDRL CLIN0001:13). Draft technical report prepared for Armstrong Laboratory, Human Resources Directorate, Technical Training Research Division, Brooks AFB, TX.

Stone, B., Weissmuller, J.J., & Mitchell, J.L. (1995, May). Manpower, personnel, & training (MPT) research and development overview. In A.M. Smith & W.R. Bennett (Chairs), Military occupational analysis: Applications for manpower, personnel, & training. Proceedings of the Ninth International Occupational Analysts Workshop. San Antonio, TX: Air Force Occupational Measurement Squadron.

Teachout, M.S., Sego, D.J., & Olea, M.M. (1993). Assessing training efficiency and effectiveness for aerospace ground equipment training. Proceedings of the 35th Annual Conference of the International Military Testing Association (pp. 445-455). Williamsburg, VA: U.S. Coast Guard.
 
