In the symposium, Military Occupational Analysis: Issues and Advances in Research and Application (H.W. Ruck, chair). Proceedings of the Eighth International Occupational Analysts Workshop, San Antonio, TX; USAF Occupational Measurement Squadron, June 1993.

Innovations in Occupational Measurement Technology for the US Military

William J. Phalen, Armstrong Laboratory, and

Jimmy L. Mitchell, McDonnell Douglas Aerospace

A suite of CODAP programs to assist analysts in interpreting job and task clusters has been completed and is now available on the UNISYS, IBM mainframe, and IBM RISC systems. R&D is in progress on computer-based surveying technology and possible scale transformations; an automated survey system will include data networking for inventory downloading and case file transfers. Computer-based surveying permits controlled presentations, branching, and automated quality control, and thus more detailed system-specific task inventories. Another line of R&D involves multidimensional task clustering utilizing comparable levels on dimensions such as grade, system maintained, training emphasis ratings, or similarity of skill and knowledge profiles. These innovations will facilitate multi-level analyses to support a variety of new applications for occupational data.

The principal occupational analysis technology in the United States Air Force is the Task Inventory/Comprehensive Occupational Data Analysis Programs (CODAP) approach. This system has supported a major occupational research program within the Air Force Human Resources Laboratory (AFHRL) since 1962 (Morsh, 1964; Christal, 1974), and an operational occupational analysis capability within Air Training Command's USAF Occupational Measurement Squadron since 1967 (Driskill, Mitchell, & Tartell, 1980; Weissmuller, Tartell, & Phalen, 1988). The CODAP system is now used by all the U.S. and many allied military services, as well as a number of other government agencies, academic institutions, and some private industries (Christal & Weissmuller, 1988; Mitchell, 1988).

The Task Inventory/CODAP approach to job analysis is generally recognized as particularly relevant for a number of Human Resource Management (HRM) uses including job descriptions, identification of training requirements, and modeling career paths (McCormick, 1976). In the early 1980s, a survey of 100 leading U.S. job analysts evaluated six major job analysis systems on a number of utility dimensions, such as cost and time to complete, and for a variety of HRM uses, such as evaluation, performance appraisal, and training (Levine, Ash, Hall, & Sistrunk, 1983). The TI/CODAP approach to job analysis, McCormick's Position Analysis Questionnaire (PAQ; McCormick, Jeanneret, & Mecham, 1972), and Sidney Fine's Functional Job Analysis (FJA; Fine & Wiley, 1971) were consistently rated among the top three for most HRM uses (Levine et al., 1983). Significantly, three of the six major job analysis systems in the U.S. (critical incidents, the PAQ, and TI/CODAP) were developed in military-funded job analysis research; such military-sponsored R&D has had a substantial and lasting impact on the study of jobs in this country (Mitchell, 1988:35).

In assessing the current state of job analysis technology in America, Harvey has recently noted that "Advances in computer technology have allowed the development of integrated personnel systems to manage the vast amounts of data generated during the job analysis process. Applications of artificial intelligence and expert-systems technology promise to further reduce the cost and labor-intensiveness of the job analysis process" (Harvey, 1991:71). Unbeknownst to Harvey, a good bit of this work has already been done, is in progress, or is planned for the near future by the military occupational analysis community.

In recent years, the CODAP system has been rewritten to make it more efficient and to expand and partially automate its capabilities (Phalen, Mitchell & Staley, 1987). In the process of developing this new ASCII CODAP system, several major innovative programs were created to extend the capabilities of the system for assisting analysts in identifying and interpreting potentially significant jobs (groups of similar cases) and task modules (groups of co-performed tasks).

Over the last several years, operational testing and evaluation of new ASCII CODAP interpretive software has continued and these programs have demonstrated their value in terms of enhanced analytic capabilities and their potential to accelerate completion of an occupational analysis.

A Suite of Advanced Interpretive Assistance Programs

A set of seven programs has been developed to assist analysts in interpreting job and task clusters. Some of these were completed in time to be released with the initial version of ASCII CODAP; others required further refinement before they were ready for operational use. An overview of the entire set is helpful for seeing how the programs relate to one another and to their ultimate objective. These programs are shown in Figure 1.

                                  Case Clusters     Task Clusters
                                   (Job Types)      (Task Modules)

Identify Appropriate Clusters         JOBTYP            MODTYP

Identify/Display Core Tasks           CORTAS            TASSET

Identify/Display Core Cases           CASSET            CORCAS

Relationship of Task Clusters
  to Job Clusters                             JOBMOD

Figure 1. The Set of Advanced Interpretive Assistance Programs

Program Descriptions

These programs are briefly described as follows:

JOBTYP automatically identifies stages in most branches of a hierarchical clustering diagram (DIAGRM) which represent the "best" candidates for job types. First, core task homogeneity, task discrimination, a group size weight, and the loss in "between" overlap for merging stages are calculated for all stages, and these values are used to compute an initial evaluation value (for the JOBTYP equations, see Haynes, 1989). This value is used to pick three sets of initial stages; these are then inserted into a super/subgroup matrix for additional pairwise evaluation, in order to further refine the selection of candidate job type groups. Three final sets of stages (primary, secondary, and tertiary groups) are then reported for the analyst to use as starting points for selecting final job types.
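As a rough sketch of the stage-evaluation idea, the four criteria can be combined into a single value and used to rank the stages of a clustering. The weights and combination rule below are invented for illustration only; the operational JOBTYP equations are those documented in Haynes (1989).

```python
import math

# Hypothetical combination of the four JOBTYP criteria into one
# evaluation value (illustrative only; see Haynes, 1989, for the
# operational equations).
def stage_score(homogeneity, discrimination, group_size, overlap_loss,
                size_weight=0.25):
    # homogeneity    -- within-group core task homogeneity (0..1, higher is better)
    # discrimination -- between-group core task discrimination (0..1, higher is better)
    # group_size     -- number of cases in the group formed at this stage
    # overlap_loss   -- loss in "between" overlap for the merge (0..1, lower is better)
    size_term = size_weight * math.log(group_size)  # damp the effect of sheer size
    return homogeneity + discrimination + size_term - overlap_loss

# Rank all stages of a hierarchical clustering; the top-ranked stages
# become candidate job types.
stages = [
    {"id": 12, "h": 0.80, "d": 0.60, "n": 25, "loss": 0.10},
    {"id": 30, "h": 0.55, "d": 0.40, "n": 90, "loss": 0.30},
    {"id": 7,  "h": 0.90, "d": 0.75, "n": 12, "loss": 0.05},
]
ranked = sorted(stages,
                key=lambda s: stage_score(s["h"], s["d"], s["n"], s["loss"]),
                reverse=True)
```

In the operational program, the top-ranked stages are then refined through the super/subgroup matrix before the primary, secondary, and tertiary sets are reported.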

CORTAS compares a set of group job descriptions ("contextual" groups) in terms of number of core tasks performed, percent members performing and time spent on each core task, and the ability of each core task to discriminate each group from all other groups in the set (Phalen & Weissmuller, 1981). It also computes for each group an overall measure of within-group overlap called the "core task homogeneity index", an overall measure of between-group difference called the "index of average core task discrimination per unit of core task homogeneity", and an asymmetric measure of the extent to which each group in the set qualifies as a subgroup or supergroup of every other group in the set.

CASSET generates displays of cases whose jobs are most representative of job types within a given set of job clusters. This approach permits an analyst to quickly characterize a job type by the salient features of its most representative and discriminating members. The CASSET report may contain any type of background variable information describing a case that will fit in the allocated space. "Base of assignment" and "job title" are often the most useful variables to aid analysts' interpretations.

MODTYP - Just as the JOBTYP program automatically selects from a hierarchical clustering of cases the "best" set of job types, based on similarity of time spent across tasks, the experimental MODTYP (module typing) program selects from a hierarchical clustering of tasks the "best" set of task module types, based on task co-performance across cases. The term "best" means that the evaluation algorithm initially optimizes on four criteria simultaneously (i.e., within-group homogeneity, between-group discrimination, group size, and drop in "between overlap" in consecutive stages of the hierarchical clustering). After all stages of the clustering have been evaluated on these criteria, primary groups are input to the TASSET and CORCAS programs to provide analytic and interpretive data for each task cluster (Phalen, Staley, & Mitchell, 1989). Task clustering using co-performance values as a basis for developing task modules (TMs) has been reported elsewhere and need not be detailed again here (Perrin, Knight, Mitchell, Vaughan, & Yadrick, 1988; Vaughan, Mitchell, Yadrick, Perrin, Knight, Eschenbrenner, Rueter, & Feldsott, 1989; Mitchell, Phalen, & Hand, 1991). Task co-performance is defined as a measure of the similarity of pairs of task profiles across all the people in an occupational survey sample. For details of the computation of measures of task co-performance, see Rue, Rogers, and Phalen (1992).
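Since co-performance is a similarity measure over pairs of task profiles, a simple stand-in is the Pearson correlation between two tasks' time-spent columns across respondents. This is illustration only; the operational computation is the one given in Rue, Rogers, and Phalen (1992), and the data below are invented.

```python
# Illustrative co-performance measure: correlation of two tasks'
# time-spent profiles across all respondents in the sample.
def coperformance(task_a, task_b):
    n = len(task_a)
    mean_a = sum(task_a) / n
    mean_b = sum(task_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(task_a, task_b))
    var_a = sum((a - mean_a) ** 2 for a in task_a)
    var_b = sum((b - mean_b) ** 2 for b in task_b)
    if var_a == 0 or var_b == 0:
        return 0.0
    return cov / (var_a * var_b) ** 0.5

# Time-spent ratings for three tasks across five respondents (invented):
t1 = [5, 4, 0, 0, 3]
t2 = [4, 5, 0, 1, 3]   # usually performed alongside t1 -- high co-performance
t3 = [0, 0, 5, 4, 0]   # performed by a different set of people -- low
```

Tasks t1 and t2 would tend to be merged into the same module by the hierarchical clustering, while t3 would not.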

TASSET is a program which compares clusters of tasks (modules) in terms of the degree to which each cluster of tasks is co-performed with every other task cluster (supergroup/subgroup matrix). TASSET computes the average co-performance of each task with every other task in each cluster (representativeness index) and the difference in average co-performance of the same tasks with all other task clusters (discrimination index). TASSET also identifies tasks which meet the co-performance criterion for inclusion in clusters in which they were not placed (potential core tasks), as well as tasks that are highly co-performed with clusters other than the target cluster (negatively unique tasks).
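The representativeness and discrimination indices can be sketched from a co-performance matrix as follows. The matrix values and the exact difference formula are illustrative assumptions, not the operational TASSET computations.

```python
# Illustrative TASSET-style indices.  Representativeness is a task's
# average co-performance with the other tasks in its own cluster;
# discrimination is that average minus the task's average co-performance
# with all tasks outside the cluster.
def tasset_indices(coperf, cluster, all_tasks, task):
    own = [coperf[task][t] for t in cluster if t != task]
    out = [coperf[task][t] for t in all_tasks if t not in cluster]
    representativeness = sum(own) / len(own)
    discrimination = representativeness - (sum(out) / len(out) if out else 0.0)
    return representativeness, discrimination

# Symmetric co-performance values for five tasks (invented);
# tasks A-C form one cluster, D-E fall outside it.
coperf = {
    "A": {"B": 0.9, "C": 0.8, "D": 0.2, "E": 0.1},
    "B": {"A": 0.9, "C": 0.7, "D": 0.3, "E": 0.2},
    "C": {"A": 0.8, "B": 0.7, "D": 0.1, "E": 0.2},
    "D": {"A": 0.2, "B": 0.3, "C": 0.1, "E": 0.8},
    "E": {"A": 0.1, "B": 0.2, "C": 0.2, "D": 0.8},
}
rep, disc = tasset_indices(coperf, ["A", "B", "C"], list(coperf), "A")
```

A task with high representativeness but low discrimination would be flagged for review, since it is co-performed nearly as much with other clusters as with its own.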

CORCAS - The CORCAS program characterizes a task cluster (module) in terms of the people who most perform the tasks in the cluster, and especially those principal performers whose jobs are concentrated in this task cluster to the exclusion of all or most other task clusters. The CORCAS report may contain any type of background variable information describing a case that will be useful to the analyst in interpreting the task cluster and which will fit in the allocated space, just as on a PRTVAR report; however, "base of assignment" and "job title" are often the most useful variables.

JOBMOD - The JOBMOD (Job Type versus Task Module mapping) program aggregates the case- and task-level indices computed by the four advanced analysis programs and uses these aggregate measures to relate task clusters to job types and vice versa. The description of job types by a handful of discriminant clusters of tasks, and the association of each task cluster with the types of jobs of which it is an important component, is a basic requirement for defining and integrating the manpower, personnel, and training (MPT) components of an existing or potential Air Force specialty (AFS) or weapon system. If AFSs are to be collapsed or shredded out, or new jobs are to be assigned to an occupational area, or old jobs are to be moved to another occupational area, such highly summarized, yet meaningfully discriminant hard data are essential (Phalen, Staley, & Mitchell, 1989:4-5).


The work on CODAP programs for selecting and interpreting task clusters has been highly successful, and we have only begun to tap the potential of this type of automated modular technology. Further, our work with TMs to date has led us to believe that the task module approach has real promise for simplifying and expanding the use of occupational information in helping executives and managers make more realistic decisions. It is, in fact, a critical technology in the current period of manpower and budget reductions and consolidations. It is more important than ever to be able to model proposed changes and assess the potential impact of such changes before final manpower, personnel, and training decisions are made.

We believe that the automated modular technology can be extremely useful in modeling occupations and the world of work (Weissmuller, 1978; Mitchell, Phalen, & Hand, 1992). With additional refinement, we should be able to use quite a variety of information to develop better TMs which will take into account the type of equipment operated or maintained as well as subject matter expert (SME) ratings such as training emphasis (TE) and task difficulty (TD). While not yet fully explored or validated, this emerging TM development methodology has great promise for significant improvement of military manpower, personnel, and training planning and decision making, and, indeed, for organizational analysis as well. These are, of course, the first steps toward a fully automated interpretive (artificial intelligence or AI) system.

We have detailed development and operational testing of the new CODAP analyst assistance programs at previous Occupational Analysts' Workshops (Phalen, Mitchell, & Staley, 1987; Phalen, Staley, & Mitchell, 1987; Mitchell, Hand, & Phalen, 1991) and at Military Testing Association annual meetings (Mitchell & Phalen, 1985; Phalen, Mitchell, & Staley, 1987; Phalen, Staley, & Mitchell, 1988; Mitchell, Phalen, Haynes, & Hand, 1988; Phalen, Mitchell, & Hand, 1990; Mitchell, Phalen, & Hand, 1991, 1992). Indeed, such conferences have become the primary forums for interaction among CODAP practitioners (Mitchell, 1988:34). All of the new task- and job-clustering and interpretation programs are now available on Air Force UNISYS, IBM mainframe, and IBM RISC systems (and will be available to other services on the next system update tape).

New Work In Progress

There are a number of projects currently underway to further improve the TI/CODAP technologies in terms of better data collection, improved scales, and the capability to link task data with equipment or systems operated or maintained, required skills and knowledges, and other relevant dimensions.

Computer-based Survey Technology

One of the most innovative current projects is the Automated Survey/Tailored Task List system, a computer-based occupational survey administration system being developed to meet a request from the USAF Occupational Measurement Squadron and Air Training Command. The capability to tailor task lists, needed for inventory development, also meets a requirement of the Base Training System project in HSC/Systems Program Office to be able to integrate task lists (as specialties are merged, as in RIVET WORKFORCE).

The objective of the project is to develop automated procedures which provide for electronic distribution of task-level surveys to respondents; automated self-administration of the surveys on PCs; and the flow of electronically captured response data to USAFOMS on communications lines such as the Military Data Network (MILNET). This paperless survey administration system will result in substantial reductions in cost and in the time required to collect survey data. The more timely collection, analysis, and reporting of data should benefit all users of occupational analysis data.

An important aspect of the R&D has been the development and testing of five scaling techniques for measuring time spent on a task:

(1) a criterion set of scales which provide absolute measures of frequency of task performance, time it takes for a single performance of the task, and total amount of task time spent per week, month, and year on the task;

(2) a three-stage scaling technique which begins with administration of the usual nine-point relative time spent scale (stage one), proceeds to a refinement phase in which tasks assigned the same rating are displayed together and moved to another rating category if appropriate (stage two), and ends with a phase that offers the opportunity to further subdivide the tasks within each rating category into two or three subcategories (stage three);

(3) an end-anchored graphical scale that displays a horizontal line, 80 characters long, with the individual's previously determined highest and lowest time spent tasks displayed at each end of the scale (as each task to be rated is presented, the rating is made by moving the cursor along the line to the desired distance between the highest and lowest time spent tasks);

(4) a direct-magnitude estimation scale which uses a previously identified "moderate" time spent task as the anchor with an assigned value of "100," and each task presented for rating is assigned a value relative to the anchor task (e.g., if the task to be rated is twice as time consuming as the anchor task, it should be assigned a value of "200;" if half as time consuming, "50"); and

(5) an indirect magnitude estimation scale, which consists of nine verbal expressions of amount of time spent, such as "very little," "fairly much," "a great amount," etc., which have been "weighted" by means of magnitude scaling as to the amount of time each expression represents compared to the term "some." The rater responds by highlighting the appropriate expression.

On each scale, multiple feedback loops have been provided to solicit respondent evaluation and refinement of responses. When all task responses have been compiled, a complete description of the job is presented to the incumbent for final review and editing. This description gives the estimated hours per week and per month spent on each task by the respondent. These estimates were made possible by linking several of the scale responses to the absolute measures provided by the absolute time estimation scale and averaging multiple linear fits to the experimental scale data. This technology promises to give us much better estimates of real time expended per task than has previously been possible in the pencil-and-paper mode.
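The scale-linking step can be sketched as follows: fit each experimental scale's ratings to the absolute time criterion with a least-squares line, then average the fitted lines to convert new ratings into estimated hours. The data and the simple averaging rule are invented for the sketch; the operational procedure for averaging the multiple linear fits may differ.

```python
# Hedged sketch of linking experimental scale ratings to absolute time.
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Ratings on two experimental scales vs. criterion hours per week
# (values invented for illustration):
scale1 = ([1, 3, 5, 7, 9], [0.5, 2.0, 4.0, 6.5, 9.0])
scale2 = ([1, 3, 5, 7, 9], [0.4, 1.8, 4.2, 6.0, 9.4])

fits = [linear_fit(x, y) for x, y in (scale1, scale2)]
avg_slope = sum(s for s, _ in fits) / len(fits)
avg_icept = sum(b for _, b in fits) / len(fits)

def estimated_hours(rating):
    """Convert a relative rating to estimated hours/week via the averaged fit."""
    return avg_slope * rating + avg_icept
```

Each respondent's final job description would then report these converted estimates for review and editing.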

Pilot testing of the automated survey system has been completed, and the software and procedures have been refined. The principal test of the automated survey administration procedure and the experimental scales will take place in a controlled laboratory environment at the Learning Abilities Measurement Program (LAMP) facility at Lackland AFB during August and September 1993. During that period, the scales will be administered twice (at two-week intervals) to a sizable sample of high- and low-aptitude airmen in a wide variety of technical and nontechnical specialties. Built-in validation procedures will include:

(a) asking the respondent to indicate which tasks in the job description he or she did on the day prior to taking the survey and comparing these responses to the frequency and time spent data on the rated tasks across all respondents; there should be a fairly high correlation if the survey procedure is valid; and

(b) asking the respondent to select the time spent value which he/she considers to be most accurate when there is a large discrepancy between the criterion and the experimental scale values. The source of each value will not be revealed so as not to bias the responses; the chosen value should most frequently be the one from the criterion scale, if it is truly a valid criterion.

The expected payoffs of the automated survey technologies are many:

(a) elimination at USAFOMS of printing, mailing, return, and data entry costs in excess of $1,000,000 per year;

(b) more rapid field surveying of respondents (reducing the process from 7 - 9 months to 1 - 2 months);

(c) potentially more valid and reliable data (due to tailored presentation and feedback loops);

(d) potentially more effective and efficient occupational analysis;

(e) ability to do spot surveys on an "as needed" basis or annual resurveys at a stated time, rather than blanket surveys every 2 - 8 years; and

(f) potentially invaluable tools for conducting management engineering, productive capacity, and performance measurement studies (which require measures of real time).

The Tailored Task List software is designed to guide the respondent through lengthy task lists in the most efficient manner, so that the respondent will not encounter tasks that have low probability of being performed by him or her, given his or her prior task responses. The Tailored Task List procedure can also be used in conjunction with the computer-administered survey system to rapidly develop job descriptions and associated real-time estimates for tasks in order to establish organizational manpower requirements. A potentially high-payoff opportunity for this PC-based software is its ability to rapidly develop (at the hands of local supervisors or OJT trainers) task training lists tailored to the task-level training requirements of an organization and its individual workers. It may similarly be used at Utilization and Training (U&T) conferences by functional managers to help develop task training lists tailored to meet Air Force Specialty (AFS), shredout, weapon system, or other general or specific task-level training requirements.
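The skip logic described above can be sketched as follows. The conditional probabilities and the cutoff value are invented for illustration; the operational Tailored Task List software may use a different branching rule entirely.

```python
# Hedged sketch of tailored task presentation: skip a task when the
# respondent's prior "yes" answers make it very unlikely to be performed.
# P(candidate performed | earlier task answered "yes"), which in practice
# would be estimated from historical survey data (values invented here):
COND_PROB = {
    ("remove engine", "install engine"): 0.95,
    ("remove engine", "type correspondence"): 0.02,
}

def present_task(candidate, yes_tasks, floor=0.05):
    """Present the candidate unless every prior 'yes' argues against it."""
    probs = [COND_PROB.get((y, candidate)) for y in yes_tasks]
    probs = [p for p in probs if p is not None]
    if not probs:
        return True          # no evidence either way -- ask the question
    return max(probs) >= floor
```

Under this rule, a respondent who reports removing engines is still asked about installing them, but never about clerical tasks that engine mechanics almost never perform.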

The Automated Survey/Tailored Task List system project has a very significant potential to positively impact the occupational analysis programs of all the military services, through better scaling and computer-based surveying. By using the electronic transfer of inventories and completed case data files, it should also encourage similar electronic transfer of completed reports and analysis files. This process is already underway, to a considerable degree, in the work the USAF Occupational Measurement Squadron is doing in support of the Base Training System (BTS), where specially sorted job descriptions are used as input files upon which the BTS develops generic position descriptions as well as the master task lists for each specialty.

Multilevel Analysis of Task Data

Extensive task-level information is now available on most military occupations and is used effectively for a variety of purposes. Organizing such task information into task modules, jobs, and higher-order categories allows the data to be applied to more global issues and problems, and the summarized data can be used to develop more realistic models or simulations of occupational structures and requirements (Perrin et al., 1988; Vaughan et al., 1989). Existing data already permit comprehensive organizational modeling. Some present analyses involve multiple specialties, multiple categories of personnel (enlisted, officer, civilian), or even multiple services (interservice or joint-service projects). Given the substantial value of task-based information and analyses, multilevel studies focused on task modules and other higher-order groupings have considerable potential for applications in modeling military organizations to assist military decision makers in evaluating proposed organizational restructuring, interventions, and/or other organizational changes (Mitchell, Phalen, & Hand, 1992).

Current experimental work is focusing on adjusting the task clustering algorithm or expanding the task co-performance similarity matrix to yield more interpretable groupings of tasks, so as to distinguish meaningful subgroups among the large numbers of commonly performed tasks. We need to be able to develop multilevel taxonomies of variables which can be linked, either directly or indirectly, with tasks or groups of tasks (TMs), skills, knowledges, and abilities (KSAs), potentially at many different levels.

These linkages could be estimated directly, by having subject-matter experts (SMEs) map the KSAs required for every task. This is a formidable undertaking in terms of the sheer volume of tasks and potential KSAs, but it can be done, and is being done, in some circumstances (see, for example, the Weissmuller, Dittmar, & Moon presentation on their IRS study, elsewhere at this workshop). The same type of linkage can be achieved indirectly by having job incumbents rate the tasks they perform in their jobs and subsequently identify and rate the KSAs they use. The tasks and KSAs can be clustered separately, and linkages between the task modules (TMs) and the KSA modules (KMs) can be established by determining the commonality of cases involved in both the TM and KSA clustering solutions, as reflected by their CODAP CORCAS indices. Other measures might include evidence generated directly from paired comparisons of task ratings with KSA ratings across all cases in the sample. For example, if a task rated high on time spent (a salient task) is associated with a zero-rated KSA by a significant number of cases, it can be assumed that the task and KSA are not linked; whereas, if there is a significantly high correlation between a task and a KSA when both have non-zero ratings, one can assume that there is a linkage between them.
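The two decision rules just described can be sketched directly; the thresholds below (the zero-rating fraction and the correlation cutoff) are illustrative assumptions, not operational values.

```python
# Sketch of the indirect task-KSA linkage rules: (1) a salient task paired
# with a zero-rated KSA by many cases argues against a linkage; (2) a high
# correlation over cases where both ratings are non-zero argues for one.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def linked(task_ratings, ksa_ratings, r_min=0.5, max_zero_frac=0.3):
    pairs = list(zip(task_ratings, ksa_ratings))
    # Rule 1: too many zero-rated KSAs among task performers -> not linked.
    salient = [(t, k) for t, k in pairs if t > 0]
    if salient:
        zero_frac = sum(1 for _, k in salient if k == 0) / len(salient)
        if zero_frac > max_zero_frac:
            return False
    # Rule 2: require enough jointly non-zero pairs and a high correlation.
    both = [(t, k) for t, k in pairs if t > 0 and k > 0]
    if len(both) < 3:
        return False
    return pearson([t for t, _ in both], [k for _, k in both]) >= r_min
```

Each case contributes one paired rating, so the judgment is made across the whole survey sample rather than by any single incumbent.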

Another indirect linkage technology being developed involves the use of graph theory to establish degrees of "connectedness" (e.g., similarity or dominance) between items for which only a small number of direct comparisons are available. For example, if task "A" is estimated by one rater to be similar for weapon systems "X" and "Y," and is also estimated to be similar for weapon systems "Y" and "Z" by a second rater and for systems "V" and "Z" by a third rater, we can say that "X" and "Z" are one-connected through "Y," and that "X" and "V" are two-connected through "Y" and "Z." In this manner, it is quite possible to build nonredundant connectedness networks which yield several times as many indirect linkages as the direct linkages furnished by the raters.
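The connectedness idea can be sketched with a breadth-first search: direct similarity judgments form the edges of a graph, and the degree of connectedness between two systems is the number of intermediate systems on the shortest chain of judgments joining them. This is an illustrative implementation of the general idea, not the operational algorithm.

```python
from collections import deque

def connectedness(edges, a, b):
    """Return the number of intermediate nodes linking a and b
    (0 for a direct comparison), or None if no chain connects them."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == b:
            return hops - 1  # intermediates = path length - 1
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None

# The example from the text: raters directly compared X~Y, Y~Z, and V~Z.
judgments = [("X", "Y"), ("Y", "Z"), ("V", "Z")]
```

With these three direct judgments, "X" and "Z" come out one-connected (through "Y") and "X" and "V" two-connected (through "Y" and "Z"), as in the example above.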

This technology will make it possible to perform complex occupational analyses requiring extremely large numbers of paired comparisons, while requiring only a limited number of raters who have each compared relatively small subsets of items (e.g., four to six weapon systems) per task.

Semantic Analysis

Another emergent technology falls under the heading of Semantic-Assisted Analysis Technology (SAAT). SAAT actually covers a set of analysis principles which have been converted into unique computer code for a variety of specific applications since its initial conception in 1984. The purpose of SAAT is to provide a linkage table (mapping) from items in one data set into matching items in another data set. This may be mapping tasks from one job inventory into another (such as Time 1 - Time 2 analysis or merging Air Force Specialties), mapping tasks into Testing Areas for promotion tests (Dittmar, Weissmuller, Haynes, & Phalen, 1989), mapping tasks into Maintenance Data Collection (MDC) records of work performed, or mapping tasks into Logistic Support Analysis Records (LSAR). Once the linkage table has been established, the task-level data can be summarized and used for the purpose at hand.

Currently, SAAT operates on well-structured lists of items, such as task statements or descriptions of individual data items. Because of the clear syntax used in these lists, determining parts of speech for the noun and verb phrases is straightforward. SAAT involves a noun and verb similarity assessment, with procedures for compensating for abbreviations, acronyms, misspelled words, plural and other word forms, and synonyms (if desired). Without further linguistic development, semantic-assisted clustering can be tailored to improve the definition of task co-performance modules by measuring the strength of semantic linkage between the component task statements of one task module and those of another, even though general performance levels differ (e.g., subset vs. superset modules).
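A minimal sketch of the matching step: normalize each statement's tokens (abbreviations, plurals, case) and score pairs by token overlap. The abbreviation table is a made-up example, and the operational SAAT also handles misspellings, acronyms, and synonyms, which this sketch omits.

```python
# Hypothetical abbreviation table; the operational SAAT dictionaries
# would be far larger.
ABBREV = {"insp": "inspect", "maint": "maintain", "equip": "equipment"}

def normalize(statement):
    """Reduce a task statement to a set of normalized tokens."""
    tokens = set()
    for word in statement.lower().replace("/", " ").split():
        word = ABBREV.get(word, word)            # expand known abbreviations
        if word.endswith("s") and len(word) > 3:  # crude plural stripping
            word = word[:-1]
        tokens.add(word)
    return tokens

def match_score(a, b):
    """Jaccard overlap of normalized token sets (0..1)."""
    ta, tb = normalize(a), normalize(b)
    return len(ta & tb) / len(ta | tb)
```

A linkage table is then built by pairing each item in one list with its highest-scoring counterpart in the other, with low-scoring pairs flagged for analyst review.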

Additional work is planned to expand the linkage methodology to link task statements into paragraphs, sections, and chapters of textual training materials (such as career development courses and technical manuals), which would require an enhanced linguistic analysis process to properly parse and relate sentence fragments. This requires using additional dictionaries and thesauruses in which words are labeled as parts of speech (adjective, adverb, etc.), synonymous terms are identified, and rules of grammar operate. Given this expanded capability, it would be possible to use logic rules (such as the law of transitivity), together with probabilistic measures (based on frequency of co-occurrence of words and word combinations in a well-defined context), to establish relations between tasks or task modules and weapon systems, other equipment, or procedures, which themselves are linked with required knowledges. An extended matrix of relationships between TMs or jobs and the weapon systems or other equipment operated or maintained can then be developed, which would be extremely useful in determining job requirements (KSAs, educational background, training, etc.) for optimal person-job matching. The key to optimizing such person-job matches (and thus to improved productivity) is the realistic determination of real job requirements (Ward, Vaughan, Mitchell, Driskill, & Ruck, 1992).


While much has been accomplished in recent years, and much is planned in the near future, much still remains to be done to improve and refine military occupational analysis technologies. We need future basic research into the nature of tasks and the construction of meaningful task modules and higher-level aggregations of work. We will need your help in developing a realistic statement of the requirement for this work, as well as in finding appropriate funding. We also need applied research to further enhance the automated analysis programs and bring them more into the realm of state-of-the-art Artificial Intelligence. Again, your help is needed in defining, justifying, and funding this advanced development work. Finally, we also need your support and money to enhance CODAP software and make it more user friendly and interactive on RISC and PC computer systems. Given recent advances in such hardware systems, we should be aiming for an analysis system where each occupational analyst can run CODAP interactively on his or her desk-top computer, thus ending our reliance on expensive mainframe systems and batch operations.

References Cited

Christal, R.E. (1974). The United States Air Force occupational research project (AFHRL-TR-73-75). Lackland AFB, TX: Occupational Research Division, Air Force Human Resources Laboratory.

Christal, R.E., & Weissmuller, J.J. (1988). Job-task inventory analysis. In S. Gael (Ed), Job analysis handbook for business, industry, and government. New York: John Wiley and Sons, Inc. (Chapter 9.3).

Dittmar, M.J., Weissmuller, J.J., Haynes, W.R., & Phalen, W.J. (1989, November). Development of Air Force specialty-specific predicted testing importance (PTI) equations. In the Symposium, Advances in a new technology for developing automated data-based specialty knowledge test outlines (P.P. Stanley, Chair). Proceedings of the 31st Annual Conference of the Military Testing Association (pp. 643-648). San Antonio, TX: Air Force Human Resources Laboratory and the USAF Occupational Measurement Center.

Driskill, W. E., Mitchell, J.L., & Tartell, J.E. (1980, October). The Air Force occupational analysis program - a changing technology. Proceedings of the 22nd Annual Conference of the Military Testing Association. Toronto, Ontario, Canada: Canadian Forces Personnel Applied Research Unit.

Fine, S.A., & Wiley, W.W. (1971). An introduction to functional job analysis. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research.

Harvey, R.J. (1991). Job analysis. In M.D. Dunnette & L.M. Hough (Eds), Handbook of industrial and organizational psychology, Second Edition, Volume 2 (71-163). Palo Alto, CA: Consulting Psychologists Press, Inc.

Haynes, W.R. (1989, January). JOB-TYPING, Job-typing programs. In: Comprehensive Occupational Data Analysis Programs. San Antonio, TX: Analytic Systems Group, The MAXIMA Corporation, Prepared for the Air Force Human Resources Laboratory [program documentation available on the AFHRL Unisys computer].

Levine, E.L., Ash, R.A., Hall, H., & Sistrunk, F. (1983). Evaluation of job analysis methods by experienced job analysts. Academy of Management Journal, 26: 339-348.

McCormick, E.J. (1976). Job and task analysis. In M.D. Dunnette (Ed.), Handbook of industrial and organizational psychology (651-696). Chicago: Rand McNally.

McCormick, E.J., Jeanneret, P.R., & Mecham, R.C. (1972). A study of job characteristics and job dimensions as based on the position analysis questionnaire (PAQ). Journal of Applied Psychology, 56: 347-368.

Mitchell, J.L. (1988). History of job analysis in military organizations. In S. Gael (Ed.), Job analysis handbook for business, industry, and government. New York: John Wiley and Sons, Inc. (Chapter 1.3).

Mitchell, J.L., Hand, D.K., & Phalen, W.J. (1991, May). ASCII CODAP: Modification of the CORCAS program to facilitate analysis of task clusters. Proceedings of the Seventh International Occupational Analysts' Workshop. San Antonio, TX: USAF Occupational Measurement Center.

Mitchell, J.L., & Phalen, W.J. (1985, October). Non-hierarchical clustering of Air Force jobs and tasks. Proceedings of the 27th Annual Conference of the Military Testing Association. San Diego, CA: Navy Personnel Research and Development Center.

Mitchell, J.L., Phalen, W.J., & Hand, D.K. (1991, October). ASCII CODAP: Use of CASSET to facilitate analysis of case clusters. Proceedings of the 33rd Annual Conference of the Military Testing Association. San Antonio, TX: Human Resources Directorate, Armstrong Laboratory and the USAF Occupational Measurement Squadron.

Mitchell, J.L., Phalen, W.J., & Hand, D.K. (1992, October). Multilevel occupational analysis: Hierarchies of tasks, modules, jobs, and specialties. In the symposium, Organizational analysis issues in the military (H.W. Ruck, chair). Proceedings of the 34th Annual Conference of the Military Testing Association. San Diego, CA: Navy Personnel Research & Development Center.

Mitchell, J.L., Phalen, W.J., Haynes, W.R., & Hand, D.K. (1988, December). Operational testing of ASCII CODAP job and task clustering methodologies. In the symposium, New ASCII CODAP Technology: Manpower, Personnel, & Training Applications. Proceedings of the 30th Annual Conference of the Military Testing Association. Arlington, VA: U.S. Army Research Institute.

Mitchell, J.L., Phalen, W.J., Haynes, W.R., & Hand, D.K. (1989, October). Operational testing of ASCII CODAP job and task clustering methodologies (AFHRL-TP-88-74). Brooks AFB, TX: Manpower and Personnel Division, Air Force Human Resources Laboratory.

Morsh, J.E. (1964). Job analysis in the United States Air Force. Personnel Psychology, 17: 7-17.

Perrin, B.M., Knight, J.R., Mitchell, J.L., Vaughan, D.S., & Yadrick, R.M. (1988, September). Training decisions system: Development of the task characteristics subsystem (AFHRL-TR-88-15, AD-A199 094). Brooks AFB, TX: Training Systems Division, Air Force Human Resources Laboratory.

Phalen, W.J., Mitchell, J.L., & Hand, D.K. (1990, November). ASCII CODAP: Progress report on applications of advanced occupational analysis software. Proceedings of the 32nd Annual Conference of the Military Testing Association. Orange Beach, AL: Naval Education & Training Program Management Support Activity.

Phalen, W.J., Mitchell, J.L., & Staley, M.R. (1987, May). Operational testing of ASCII CODAP job and task clustering refinement methodologies. Proceedings of the Sixth International Occupational Analysts' Workshop. San Antonio, TX: USAF Occupational Measurement Center.

Phalen, W.J., Staley, M.R., & Mitchell, J.L. (1987, May). New ASCII CODAP programs and products for interpreting hierarchical and nonhierarchical task clusters. Proceedings of the Sixth International Occupational Analysts' Workshop. San Antonio, Texas: USAF Occupational Measurement Center.

Phalen, W.J., Staley, M.R., & Mitchell, J.L. (1988, December). ASCII CODAP programs for developing job and task clusters. In the symposium, New ASCII CODAP technology: Manpower, Personnel, & Training Applications. Proceedings of the 30th Annual Conference of the Military Testing Association. Arlington, VA: Army Research Institute.

Phalen, W.J., Staley, M.R., & Mitchell, J.L. (1989, October). ASCII CODAP programs for developing job and task clusters (AFHRL-TP-88-73). Brooks AFB, TX: Manpower and Personnel Division, Air Force Human Resources Laboratory.

Phalen, W.J., & Weissmuller, J.J. (1981, October). CODAP: Some new techniques to improve job type identification and definition. Proceedings of the 23rd Annual Conference of the Military Testing Association. Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Rue, R.C., Rogers, S.G., & Phalen, W.J. (1992, May). Comprehensive occupational data analysis programs (CODAP): Computation of measures of task co-performance. Unpublished manuscript, SRA Corporation and AL/HRMJ; available from the authors. Brooks AFB, TX: Armstrong Laboratory, Human Resources Directorate, Manpower and Personnel Research Division.

Vaughan, D.S., Mitchell, J.L., Yadrick, R.M., Perrin, B.M., Knight, J.R., Eschenbrenner, A.J., Rueter, F.H., & Feldsott, S. (1989, June). Research and development of the Training Decisions System (AFHRL-TR-88-50). Brooks AFB, TX: Training Systems Division, Air Force Human Resources Laboratory.

Ward, J.H., Jr., Vaughan, D.S., Mitchell, J.L., Driskill, W.E., & Ruck, H.W. (1992, October). The ultimate person-job match: A key to future worker productivity. Proceedings of the 34th Annual Conference of the Military Testing Association. San Diego, CA: Navy Personnel Research & Development Center.

Weissmuller, J.J. (1978, May). CODAP: Creating and using modular job descriptions. In J.J. Weissmuller (Chair), CODAP: New directions in occupational analysis. Proceedings of the 1st Military Occupational Analysts' Workshop. San Antonio, TX: USAF Occupational Measurement Center.

Weissmuller, J.J., Tartell, J.E., & Phalen, W.J. (1988, December). Introduction to operational ASCII CODAP: An overview. Proceedings of the 30th Annual Conference of the Military Testing Association. Arlington, VA: U.S. Army Research Institute.