IJOA-TR-1994-01

TASK DIFFICULTY WORKSHOP

Conference Room
Air Force Occupational Measurement Squadron
1550 5th Street East
Randolph AFB, TX 78150-4449

20 June 1994

Hosted By:

Air Force Occupational Measurement Squadron

and the

Institute for Job and Occupational Analysis
8301 Broadway, Suite 211C
San Antonio, TX 78209-2066

November 1994

Copyright - 1994. All rights reserved under the copyright laws by the Institute for Job & Occupational Analysis. This material may be reproduced without restriction by and for any agency of the U.S. Government.


PREFACE

This volume is a report of a meeting held in June 1994 at which the construct of Task Difficulty was discussed and some new ideas and suggestions were formulated. The report is distributed to participants and other interested individuals, with the expectation that additional ideas and comments will be generated. The hope is that sufficient information and ideas can be developed over the next few months to permit the development of a White Paper on needed future research on Task Difficulty (TD) and on TD data development procedures and guidelines. If you have suggestions and ideas concerning TD, please forward them to AFOMS/OMY or IJOA.


ATTENDEES

TASK DIFFICULTY WORKSHOP

20 June 1994
AFOMSq Conference Room
Randolph AFB, Texas

Mr. J. S. Tartell, USAFOMS/OMYO
1550 5th Street East, Randolph AFB, TX 78150-4449
DSN 487-6623, Com 652-, FAX: 3773

LtCol Bill Wimpee, AL/HRT
7909 Lindbergh Drive, Brooks AFB, TX 78235-5352
DSN 240-2912, Com 536-, FAX: 2902

Dr. R. Bruce Gould, AL/HRT
7909 Lindbergh Drive, Brooks AFB, TX 78235-5352
DSN 240-2912, Com 536-, FAX: 2902

Major Randy Agee, USAFOMS/OMYO
1550 5th Street East, Randolph AFB, TX 78150-4449
DSN 487-6624, Com 652-, FAX: 3773

Mr. Winston Bennett, AL/HRTE
7909 Lindbergh Drive, Brooks AFB, TX 78235-5352
DSN 240-2932, Com 536-, FAX: 2902

SQNLDR Ian Rouse, RAAF - AL/HRMJ (Royal Australian Air Force)
7909 Lindbergh Drive, Brooks AFB, TX 78235-5352
DSN 240-3256, Com 536-, FAX: 2902

Dr. Walter Albert, AL/HRMJ
7909 Lindbergh Drive, Brooks AFB, TX 78235-5352
DSN 240-3256, Com 536-, FAX: 2902

Dr. Mary J. Skinner, AL/HRM
7909 Lindbergh Drive, Brooks AFB, TX 78235-5352
DSN 240-3222, Com 536-, FAX: 2902

Mr. Bill Phalen
2157 West Gramercy, San Antonio, TX 78201
Com 733-8858

Mr. Gabriel P. Itano, ARI - AL/HRMJ (Army Research Institute)
7909 Lindbergh Drive, Brooks AFB, TX 78235-5352
DSN 240-3256, Com 536-, FAX: 2902

Mr. Larry Looper, AL/HRMM
7909 Lindbergh Drive, Brooks AFB, TX 78235-5352
DSN 240-3648, Com 536-, FAX: 2902

Ms. Doris Black, AL/HRMJ
7909 Lindbergh Drive, Brooks AFB, TX 78235-5352
DSN 240-3256, Com 536-, FAX: 2902

Mr. Don Harville, AL/HRMM
7909 Lindbergh Drive, Brooks AFB, TX 78235-5352
DSN 240-3222, Com 536-, FAX: 2902

Dr. Walter E. Driskill, Metrica, Inc.
8301 Broadway, Suite 215, San Antonio, TX 78209-2066
Com 822-6600, FAX: 821-6308

Dr. Jimmy L. Mitchell, IJOA
8301 Broadway, Suite 211C, San Antonio, TX 78209-2066
Com 822-6600, FAX: 821-6308

Dr. Brice Stone, Metrica, Inc.
8301 Broadway, Suite 215, San Antonio, TX 78209-2066
Com 822-6600, FAX: 821-6308

Mr. Johnny J. Weissmuller, IJOA
8301 Broadway, Suite 211C, San Antonio, TX 78209-2066
Com 822-6600, FAX: 821-6308


TASK DIFFICULTY WORKSHOP

20 June 1994
1330-1530
AFOMSQ conference room

A group of interested individuals met on 20 June 1994 in the Air Force Occupational Measurement Squadron conference room, Randolph AFB, TX, to discuss Task Difficulty as a construct and as a useful task factor. This meeting came about as a result of recent conversations, first between Dr. Bruce Gould, AL/HRT, and Major Randy Agee, OMYO, and subsequently involving Mr. Winston Bennett, AL/HRTE, Dr. Jimmy Mitchell, Institute for Job & Occupational Analysis (IJOA), and others. As a result of these interactions, it was concluded that it would be worthwhile to hold a short get-together where we, as a group of interested individuals, could discuss the current status of Task Difficulty research and application and might identify issues that need to be examined or resolved in the future.

Dr. Mitchell, IJOA, opened the meeting (with the above introduction) and handed out a prepared agenda (see Attachment 1). He also provided copies of a draft Task Difficulty Bibliography (see Attachment 2), which he and Johnny Weissmuller had quickly put together the previous Friday, and suggested that it could be used as a foundation for developing a comprehensive TD reference list (which might be useful to everyone).

[Note the Madden reference, "What makes work difficult" (1962), to put Air Force thinking about this construct into a historical perspective.]

Major Agee, OMYO, discussed the recent Master's thesis by Lisa Boyce (December 1993, St. Mary's University, I/O Psychology; Dr. Gould, thesis advisor [Lisa Boyce is now the USAF Exchange Officer with the RAAF, Victoria Barracks, Melbourne]), which examined the definition of TD. In this work, she used Task Performance Difficulty (TPD) and Task Learning Difficulty (TLD). Apparently, most people are implicitly using TPD as the construct being rated rather than TLD, which is the official definition. She found no real differences in ratings of these versus just Task Difficulty. She recommended that if "difficult to learn" is the target construct, then perhaps the cover of the rating booklets should indicate "LEARNING DIFFICULTY" in some bold or emphasized way, to focus raters' attention on the definition. The group discussed the possibility that the current instructions should perhaps be modified; however, before any changes are made, the Boyce thesis should be studied in detail. [Dr. Gould indicated that while some people have seen the draft of this thesis, it is not completely ready for publication; the group suggested that as soon as it is published, Dr. Gould could provide copies to those in the group who are interested.]

Major Agee indicated that one of his concerns was the Job Difficulty Index (JDI), which was at one time routinely used to make comparisons among the job types within a specialty or occupation. In the last decade or so, the JDI has not been used, in part because additional research had been requested to determine whether certain anomalies, such as more senior jobs appearing less difficult, could be corrected. Dr. Gould responded that the then Australian Exchange Officer, SqLdr Phil Davis, had worked extensively on this problem. His conclusion was that TD results for supervisory levels of technicians are inconsistent; apparently the meaning of the measure disintegrates at the higher enlisted skill levels and grades. He found no usable remedy for the problem. Mr. Weissmuller pointed out that there is also a major problem with the JDI when an occupational survey covers several specialties, particularly if the programmer uses overall averages. He suggested that the JDI is only meaningful if VARSUM is rerun for each AFS, to calculate a separate JDI for each specialty in the study.

One of the components of the JDI formula is the Average Task Difficulty per Unit Time Spent (ATDPUTS); apparently, since the JDI is seldom used at OMS, the ATDPUTS is no longer part of the study EXTRACT. AL/HRTE recently sent a letter requesting that the ATDPUTS be computed for each study and saved, so that ATDPUTS would be available for research purposes and would not have to be recomputed. OMS has agreed to this request, and ATDPUTS is now being included in the EXTRACT for each study. In discussing the JDI, Dr. Walter Driskill, Metrica, Inc., suggested that in the absence of the JDI, analysts could use the ATDPUTS as a realistic measure of the relative difficulty of various job types.
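
[For illustration only: a minimal sketch, in Python, of an ATDPUTS computation of the kind described above, with a notional JDI combination. The variable names and the JDI weights are assumptions made for the example; they are not the operational CODAP/VARSUM code or the constant standard weights from the original research.]

    # Hedged sketch: ATDPUTS for one job or job type, from per-task
    # relative time spent and task difficulty (TD) ratings.
    def atdputs(rel_time_spent, task_difficulty):
        """Average Task Difficulty per Unit Time Spent: TD ratings
        weighted by each task's share of total time spent."""
        total_time = sum(rel_time_spent)
        return sum(t * d for t, d in zip(rel_time_spent, task_difficulty)) / total_time

    def job_difficulty_index(rel_time_spent, task_difficulty,
                             w_tasks=0.01, w_atdputs=1.0, intercept=0.0):
        """Notional JDI: a weighted combination of number of tasks
        performed and ATDPUTS. The weights are placeholders, not the
        constant standard weights developed in the original research."""
        n_tasks = sum(1 for t in rel_time_spent if t > 0)
        return (intercept + w_tasks * n_tasks
                + w_atdputs * atdputs(rel_time_spent, task_difficulty))

    # Example: five tasks, relative time spent and TD ratings (mean-5.0 scale)
    times = [10, 25, 5, 40, 20]
    tds = [4.2, 5.1, 6.3, 4.8, 5.5]
    print(round(atdputs(times, tds), 2))   # 5.03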

Dr. Gould noted that TD is used as a starting point for the development of Occupational Learning Difficulty (OLD), which is the common scale used to assess relative aptitude requirements. He stressed that it is the only measure with which comparisons can be made across occupational (and aptitude) areas. It is needed also in looking at Specialty Structuring questions. Dr. Gould suggested we probably should be looking at this kind of data for assessing requirements for 7-skill level courses, under the current AF Chief of Staff Year of Training initiatives.

Dr. Driskill reported that Metrica is currently doing some work with Dr. Skinner and Don Harville to bring the OLD data up to date; this is a difficult undertaking given the number of changes in AFS structures that have taken place over the last year or two. This will be more than just an updating of files; it involves technology for the development of a common benchmark scale, using the Ramadge (1987) approach and one or two others. Mr. Weissmuller noted that this is the Benchmark Occupational Learning Difficulty (or BOLD) system.

Dr. Driskill commented that changes in AFSC structure make the process of developing a common benchmark scale problematic. One of the major issues is being able even to match task lists from older AFSs to new specialties, since there have been so many changes.

[Metrica has developed some new software, the Historical AFS Information Locator System (HAILS), to facilitate tracking the history of AFS changes, and other software for use in comparing old and new task lists, as the first steps in determining which BOLD values are still valid and where new data must be collected.]

More AFS changes are underway. Mr. Tartell reported that there is a drop this week from about 188 AFSs to 187; the boatmaster specialty was deleted. Dr. Mitchell indicated that in recent Air Staff contacts there have been rumors that the Chief of Staff has mandated a further 20% reduction in the number of AFSs, paralleling the general "downsizing" taking place within DoD. Major Agee observed that some of the mergers/reductions in AFSs make for odd combinations (such as AFSs 116X0 and 118X0, those who operate and repair AWACS equipment).

Dr. Skinner reported that TD is used in selecting tasks for measurement in one research project she manages. Task characteristic information is used to predict learning curves. Under this contract, Dr. Brice Stone of Metrica has examined time to perform tasks based on supervisor reports versus actual performance times for six to eight tasks in four AFSs. This involves 3- and 5-skill level personnel. There is some need to further refine this methodology and to develop more realistic measures of time to perform. Dr. Driskill suggested use of Maintenance Data System records for generating average performance times, since the records can be tracked by the SSAN of the person doing the work. Don Harville noted that the codes differ from base to base and are difficult to get access to; Dr. Driskill indicated that F-15 and F-16 data were available through McDonnell Douglas for some bases, and that he had made use of such data for other purposes.

Mr. Winston Bennett reported on another effort, by Dr. Dave Woehr of Texas A&M University (Psychology Department), which used TD to select the tasks used in developing measures of performance in Aircraft Maintenance (Jet Engine Mechanics). This effort was to build criterion measures with which to assess training interventions. [See Wink if you want details.] This developing technology also may need access to maintenance data to obtain training times.

Dr. Skinner discussed another in-house project which used OS data; this work, with Brice Stone, was the development of an index of productive capacity. It looks at tasks being performed by groups and examines how the tasks are stratified by grade and skill level: which tasks are "characteristic" tasks for each level, and what other tasks are added later at higher grades and skill levels. The study has examined 26 AFSs, studying the percent-performing data stratified by skill level. It examines the core of common tasks by level and attempts to use job proficiency ratings to validate this characteristic core. The measures appear consistent within an AFS in defining productive people [hence "productive capacity"].
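
[For illustration only: a minimal sketch, under assumed data and an assumed 50-percent threshold, of the kind of stratification described above, flagging "characteristic" tasks at each skill level from percent-members-performing data. The threshold and task identifiers are illustrative, not the study's actual criteria.]

    # Percent of members performing each task, stratified by skill level.
    percent_performing = {
        "T01": {3: 72, 5: 80, 7: 55},
        "T02": {3: 10, 5: 62, 7: 70},
        "T03": {3:  2, 5:  8, 7: 58},
    }

    def characteristic_tasks(data, level, threshold=50):
        """Tasks performed by at least `threshold` percent of members
        at the given skill level."""
        return sorted(t for t, by_level in data.items()
                      if by_level.get(level, 0) >= threshold)

    for lvl in (3, 5, 7):
        print(lvl, characteristic_tasks(percent_performing, lvl))
    # T01 is core at every level; T02 enters at the 5-level; T03 only at the 7-level.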

Larry Looper has a draft of the final report on this project and could make it available to those who need it.

[Bennett comments: What is the relationship of productive capacity to Job Performance Measurement (JPM) data for the same career fields? We should explore the relationship of Qualified Man-Months (QMM) to productive capacity (PC). This would be an interesting area for research. What are the facilitators and impediments of PC, QMM, and utility estimation? What about the capacity of operational AF measures, such as maintenance data collection, LCOM data, etc., to be used to inform these estimates? We should also examine individual differences; Brice Stone is using OS data to get an index of PC, so we could look at the tasks performed and how they differ among individuals.]

Dr. Mitchell suggested that TD is possibly related to the concept of "learning decay" or "training decay"; a measure of such decay could be used in the TDS/TIDES technology. This led to a discussion of an AFOMS/OMD project, the "skills degradation study." The issue here is determining how quickly ANG/RES personnel called to active duty in some crisis could regain proficiency in required tasks. The plan is to use tasks to define tests of specialty knowledge to assess state of proficiency [there are no longer any Apprentice Knowledge Tests or AKTs; these were apparently discontinued since the Year of Training initiative mandates ABR training for all personnel, so there will be no bypass specialists]. SKT teams review EXTRACTs to match SKT and OSR tasks and examine percent performing, TD, TE, and Testing Importance (TI). The objective is to find tasks associated with War Skills: 20-30 tasks for testing to identify people who need training. OMD is collecting information for each AFS as SKTs are revised and turning the data over to MPC (DPXC - Col Sincenere??).

Mr. Tartell observed that with Benchmarking and other technologies, OMY has to be very conscious of how such new technology impacts the program. The OMY "customers" are increasingly the Career Field Managers; OMY must be concerned with the operational use of new or changed procedures. If there is a better way, and if it is fully validated, then OMY will use it. But changes will not be made until a proposed change is PROVED better. Some discussion of possible changes followed, including the changes in instructions suggested by the Boyce thesis. Some modifications, such as highlighting instructions or labeling the cover as Learning Difficulty, perhaps could and should be made. These kinds of modifications will be discussed internally [OMY] after the staff have had an opportunity to review the final version of Lisa Boyce's thesis.

Dr. Driskill observed that it had been a very long time since TD and the JDI were looked at, and we need to revisit what was done in the initial research (and since). One general trend observed through the years is that the most difficult tasks are usually not performed very often. He noted that there have been some efforts to use TD in very inappropriate ways. Christal's original definition was the time to learn to do a task successfully. They had looked at several operational definitions, and this was the only one where they could get agreement among raters.

Christal took this work and extended it into the aptitude area and several other appropriate applications. Weeks, Mumford, & Harding (1985) used difficulty versus courses and wanted to look at task difficulty versus actual training time. Several problems arose: (1) There was no way to get reliable estimates of task training time; there was no agreement among estimates, and they finally used composites. The bottom line was that there was a low relationship between TD and classroom training time. (2) They did find that more difficult tasks tend to be performed by more senior people. Further definition and refinement of the TD construct are needed. Fleishman's work in the 1960s involved work with Miller, Wheaton, and others. Wheaton's Task Characteristics (TC) approach is important; TC involves what the task itself requires for successful performance.

The Rose et al. work for the U.S. Army focused on task retention. They were looking at hands-on performance to 100% proficiency involving operating a 20 mm gun. Regression analysis predicted retention of information and proficiency at 3 to 6 months after training. Task characteristics seem to be related to a broader construct called complexity. [There is a possible relationship between task complexity and TD; the relationship between these two constructs needs to be established.]

Dr. Driskill indicated that we ought to know more before we make new uses of TD data. TD may, in some cases, function as a surrogate for some other task characteristic. He feels that it is sound to use TD in BOLD, Productive Capacity, and Testing Importance; but we need to know more before TD is applied to additional uses.

Dr. Gould indicated that he agreed with that point of view, but pointed out that today there are no resources which can be used for basic research. We need:

1. An exhaustive list of what we don't know.

2. A list of all the kinds of things we could do.

3. Perhaps a white paper which summarizes all this and defines priorities for R&D.

Then, there might be some possibility to do something. What are the critical questions which need to be answered now? It could be possible and useful to weave some of these questions into ongoing research (BOLD, Specialty Structuring). Dr. Driskill observed that if we knew that the work on learning retention (conversely stated as task decay) was valid, it could certainly be used in the Specialty Structuring System.

Mr. Tartell suggested that if we can adequately reformulate these issues involving TD, this would make an excellent theme for the 1995 International Occupational Analysts Workshop (IOAW), which he indicated he should begin planning fairly soon. This could be the keynote issue for the IOAW, and we could distribute the TD white paper and associated bibliography to attendees; this would be useful for everybody. The group approved this suggestion and expressed appreciation to Mr. Tartell for offering it. Dr. Mitchell volunteered to Mr. Tartell that IJOA would be happy to help with the IOAW in any way possible.

Considerable additional discussion ensued. Mr. Phalen indicated that the current TD measure is not as reliable a measure as it might appear to be, since the r11's (average correlations among pairs of raters) almost invariably fall within the range of .10 to .30. The rkk's reach .90 only because of the large number of raters and the standardization of raw task ratings (thus removing level and scatter differences). The final standardization of task difficulty means to a mean of 5.0 and a standard deviation of 1.0 hides the fact that the raw means define a much more compressed distribution, the S.D. usually falling in the range of .60 to .90. Part of the problem, in addition to expecting too much expertise from raters on too many tasks, is that there is not a clearly defined rating context; e.g., organizational setting, weapon system, environment, etc. Instead, raters are supposed to come up with "global" difficulty ratings, even though task difficulty may vary considerably across contexts. Suspected "policies" should be allowed for by asking raters to rate tasks according to their current work situation (as defined in the background section of the occupational survey they have recently completed) or according to a specified context (assigned to the rater on the basis of his or her responses to the background items). Suspected policies can often be derived by policy analysis of previous task difficulty studies, recommendations from previous occupational analyses, or consultation with subject-matter experts (SMEs) and functional managers. Of course, the plan for sampling raters would have to reflect the suspected policies, and raters would have to be drawn from the recently completed occupational survey of the career field in question. This approach would ensure better interrater agreement (within policy groups).
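
[A worked illustration of the reliability point above: under the Spearman-Brown formula, which relates the reliability of the mean of k raters (rkk) to the average inter-rater correlation (r11), r11 values of .10 to .30 can yield rkk near .90 once several dozen raters are pooled. The rater counts in the Python sketch below are illustrative assumptions only.]

    # Spearman-Brown: reliability of the mean of k raters from the
    # average correlation among pairs of raters (r11).
    def spearman_brown(r11, k):
        return (k * r11) / (1 + (k - 1) * r11)

    for r11 in (0.10, 0.20, 0.30):
        for k in (30, 60, 100):
            print(f"r11={r11:.2f}, k={k:3d} raters -> rkk={spearman_brown(r11, k):.2f}")
    # e.g., r11 = .10 with 100 raters gives rkk of about .92;
    # r11 = .30 with only 30 raters gives about .93.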

Mr. Phalen then discussed some surrogate measures for TD, such as average time in service, average time in career field, average paygrade, and average skill level, combined with average specific and general aptitudes of individuals who report performing each task in a job survey. This approach has several advantages over current TD measures: (1) These measures are objective (versus subjective TD ratings), since they represent task difficulty-related characteristics of the workers who are actually performing the tasks. (2) These measures have concrete (or absolute) meaning across all AFSs (versus the within-AFS relativity of current TD ratings). (3) These measures use existing archival data from the OSR and UAR [Uniform Airman Record] (versus special data-gathering processes required for TD). (4) These measures will permit continuous updating of difficulty benchmarking of tasks and AFSs with no additional data-gathering effort (versus periodic development and administration of several different benchmarking inventories to all AFSs, which become obsolete over time as AFSs are resurveyed with new job inventories and task difficulty surveys).
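
[A minimal sketch of the kind of performer-based surrogate measure described above, assuming one record per survey respondent with background fields and the set of tasks the respondent reports performing. The field names and values are hypothetical, not the actual OSR or UAR record layouts.]

    # Performer-based surrogate difficulty: for each task, average the
    # background characteristics of respondents who report performing it.
    from collections import defaultdict

    respondents = [
        {"months_service": 18,  "paygrade": 3, "aptitude": 60, "tasks": {"A1", "A2"}},
        {"months_service": 96,  "paygrade": 5, "aptitude": 75, "tasks": {"A2", "B7"}},
        {"months_service": 150, "paygrade": 6, "aptitude": 80, "tasks": {"B7"}},
    ]

    def surrogate_difficulty(respondents, field):
        totals, counts = defaultdict(float), defaultdict(int)
        for r in respondents:
            for task in r["tasks"]:
                totals[task] += r[field]
                counts[task] += 1
        return {task: totals[task] / counts[task] for task in totals}

    # Tasks performed mainly by senior, higher-aptitude people get higher values.
    print(surrogate_difficulty(respondents, "months_service"))
    print(surrogate_difficulty(respondents, "paygrade"))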

Mr. Phalen also described a new job difficulty index he has developed for use in the computer-administered survey (CAS) research. It was designed to yield job difficulty values in the range of 1 to 100 with a mean of 50. As applied in the CAS R&D to 640 jobs in 67 AFSs at Lackland AFB, it yielded a range of job difficulties that were normally distributed between 10 and 85 with an approximate mean of 50.0. This index is available to anyone who wishes to test its validity and applicability in their work. Another index of job difficulty which uses an iterative adaptive procedure to compute a more refined estimate of job difficulty has also been developed and is available for testing.

Dr. Mitchell made the point that with all the possible new indices, we need to be careful about labeling things TD or JDI; each new index needs a new label which better portrays its source or use.

Overall, the group felt that this was a very worthwhile meeting. While we did not discuss every issue proposed in the draft agenda, the group discussion was extensive and resulted in a very free interchange of information. Everyone seemed to learn something from the session, about the earlier work or about what others are currently doing. There is a need for continuing dialogue among those interested in the TD construct or area [or Task Factors in general]. Mr. Bennett suggested that it might be worthwhile for this kind of group to get together every 3 to 6 months; several people agreed, pointing out that it would be most worthwhile this year as a white paper is developed or keynote issues for the IOAW are defined.

As a first step, it would be useful to summarize the issues in this meeting, circulate them to everyone [for editing, modification, or expanded written comments/discussion], and then to propose a follow-on meeting whenever one can be scheduled.

The group adjourned at about 1610 hours.


Attachment 1

TASK DIFFICULTY WORKSHOP

20 June 1994
1330-1530
(AFOMSQ conference room)

AGENDA

1.  Where are we?
      a.  Definition of "difficulty" (to learn vs perform)
      b.  Reliability/stability of ratings
      c.  Uses of TD - Current & Projected (JDI, ATI, PTI, etc.)
      d.  Benchmarked Occupational Learning Difficulty (BOLD)
      e.  Productive Capacity
      f.  Training Decay

2.  Implications of Recent Changes in the AF (& other services)
      a.  Year of Training Initiatives (changing AFSs, requirements)
      b.  Mandatory 5- and 7-skill level courses
      c.  Who should rate tasks?
      d.  Subanalysis of OS data (systems, functional area, etc.)
      e.  Level of analysis (tasks, modules, meta modules, etc.)

3.  Other information/ratings
      a.  Criticality (relevance to mission; mission essential modules...)
      b.  Consequences of poor performance
      c.  Complexity (systems, equipment, functional area, etc.)
      d.  Training Emphasis

4.  Possible improvements
      a.  Instructions (Lisa Boyce thesis at St. Mary's)
      b.  Survey methodology (computer-assisted; built-in QC)
      c.  Scales
      d.  Sampling
      e.  Levels of Analysis

5.  Definition & Prioritization of TD R&D
      a.  Application work versus R&D
      b.  Generate definition of requirements
      c.  Prioritize requirements
      d.  Seek support (other AF organizations, other services)
      e.  Commitment to publish outcomes in professional literature


Attachment 2

TASK DIFFICULTY BIBLIOGRAPHY

Burtch, L.D., Lipscomb, M.J., & Wissman, D.J. (1982). Aptitude requirements based on task difficulty: Methodology for evaluation (AFHRL-TR-82-34). Brooks AFB, TX: Manpower and Personnel Division, Air Force Human Resources Laboratory.

Christal, R.E. (1974). The United States Air Force occupational research project (AFHRL-TR-73-75). Lackland AFB, TX: Occupational Research Division, Air Force Human Resources Laboratory.

Christal, R.E., & Weissmuller, J.J. (1976). New CODAP programs for analyzing task factor information (AFHRL-TR-76-3). Lackland AFB, TX: Air Force Human Resources Laboratory, Occupational and Manpower Research Division.

Christal, R.E., & Weissmuller, J.J. (1988). Job-task inventory analysis. In S. Gael (Ed.), Job analysis handbook for business, industry, and government. New York: John Wiley and Sons, Inc. (Chapter 9.3).

Davis, P.A. (1989). Affordable and creditable procedures for determining occupational learning difficulty (AFHRL-TP-88-72). Brooks AFB, TX: Manpower and Personnel Research Division, Air Force Human Resources Laboratory.

Dittmar, M.J., Weissmuller, J.J., Hand, D.K., & Tucker, D.L. (1991). Validation of survey-based estimates of job learning difficulty. Technical paper prepared under Air Force Contract F41689-88-D-0251. Brooks AFB, TX: Manpower and Personnel Research Division, Air Force Human Resources Laboratory.

Driskill, W.E., & Bower, F.B. (1978). The stability over time of Air Force enlisted career ladders as observed in occupational survey reports. Proceedings of the 20th Annual Conference of the Military Testing Association, 228-235. Oklahoma City, OK: U.S. Coast Guard Institute.

Fodale, M., & Aslett, L.S. (1987, May). Automated training indicators (ATI) (USAFOMC-TN-87-01). Randolph AFB, TX: USAF Occupational Measurement Center.

Fugill, J.W.K. (1972). Task difficulty and task aptitude benchmark scales for the mechanical and electronics career fields (AFHRL-TR-72-40). Lackland AFB, TX: Personnel Research Division, Air Force Human Resources Laboratory.

Fugill, J.W.K. (1973). Task difficulty and task aptitude benchmark scales for the administrative and general career fields (AFHRL-TR-73-13). Lackland AFB, TX: Personnel Research Division, Air Force Human Resources Laboratory.

Garcia, S.K., Ruck, H.W., & Weeks, J. (1985, November). Benchmark learning difficulty technology: Feasibility of operational implementation (AFHRL-TP-85-33). Brooks AFB, TX: Air Force Human Resources Laboratory, Manpower and Personnel Research Division.

Goody, K. (1976, October). Comprehensive occupational data analysis programs (CODAP): Use of REXALL to identify divergent raters (AFHRL-TR-76-82). Brooks AFB, TX: Occupational and Manpower Research Division.

Koym, K.G. (1977, June). Familiarity effects of task difficulty ratings (AFHRL-TR-77-25). Lackland AFB, TX: Air Force Human Resources Laboratory, Occupational and Manpower Research Division.

Koym, K.G. (1977, June). Predicting job difficulty in high aptitude career ladders with standard score regression equations (AFHRL-TR-77-26). Lackland AFB, TX: Air Force Human Resources Laboratory, Occupational and Manpower Research Division.

Lance, C.E., Hedge, J.W., & Alley, W.E. (1987, August). Ability, experience, and task difficulty predictors of task performance (AFHRL-TP-87-14). Brooks AFB, TX: Air Force Human Resources Laboratory, Training Systems Division and Manpower and Personnel Division.

Lecznar, W.B. (1971). Three methods for estimating difficulty of job tasks (AFHRL-TR-71-30). Brooks AFB, TX: Air Force Human Resources Laboratory, Personnel Division.

Madden, J.M. (1962). What makes work difficult? Personnel Journal, 41:341-344.

Mead, D.F. (1970). Development of an equation for evaluating job difficulty (AFHRL-TR-70-42). Brooks AFB, TX: Air Force Human Resources Laboratory, Personnel Division.

Mead, D.F. (1970). Continuation study on development of a method for evaluating job difficulty (AFHRL-TR-70-43). Brooks AFB, TX: Air Force Human Resources Laboratory, Personnel Division.

Mead, D.F. (1975, September). Determining training priorities for job tasks. Proceedings of the 17th Annual Conference of the Military Testing Association. Indianapolis, IN: U. S. Army.

Mead, D.F., & Christal, R.E. (1970). Development of a constant standard weight for evaluating job difficulty (AFHRL-TR-70-44). Brooks AFB, TX: Air Force Human Resources Laboratory, Personnel Division.

Mial, R.P., & Christal, R.E. (1978, April). The determination of training priority for vocational tasks. Proceedings of the Psychology in the Air Force Symposium (pp. 29-33). USAF Academy, CO: Department of Behavioral Sciences.

Mitchell, J.L., Ruck, H.W., & Driskill, W.E. (1988). Task-based training program development. In S. Gael (Ed.), Job analysis handbook for business, industry, and government. New York: John Wiley and Sons, Inc. (Chapter 3.2).

Morsh, J.E. (1965, December). Evolution of a task inventory and tryout of task rating factors (PR-L-TR-65-22). Lackland AFB, TX: Personnel Research Laboratory.

Morsh, J.E., & Archer, W.B. (1967, September). Procedural guide for conducting occupational surveys in United States Air Force (PRL-TR-67-11). Lackland AFB, TX: Personnel Research Laboratory.

Phillippo, L.M. (1991, May). Just how difficult is "difficult" and whom shall we ask? Proceedings of the Seventh International Occupational Analysts Workshop. Randolph AFB, TX: USAF Occupational Measurement Squadron.

Ramadge, J.D. (1987). Task learning difficulty: Interrelationships among aptitude-specific benchmarked rating scales (AFHRL-TP-86-56). Brooks AFB, TX: Manpower and Personnel Division, Air Force Human Resources Laboratory.

Ruck, H.W., Thompson, N.A., & Stacy, W.J. (1987). Task training emphasis for determining training priority (AFHRL-TP-86-65). Brooks AFB, TX: Manpower and Personnel Division, Air Force Human Resources Laboratory.

Staley, M.R., & Weissmuller, J.J. (1981, November). Interrater reliability: The development of an automated analysis tool (AFHRL-TP-81-12). Brooks AFB, TX: Air Force Human Resources Laboratory, Technical Services Division.

Stacy, W.J., Thompson, N.A., & Thomson, D.C. (1977, October). Occupational task factors for instructional systems development. Proceedings of the 19th Annual Conference of the Military Testing Association. San Antonio, TX: Air Force Human Resources Laboratory and the USAF Occupational Measurement Center.

Thomson, D.C., & Goody, K. (1979, May). Three sets of task factor benchmark scales for training priority analysis (AFHRL-TR-79-8). Brooks AFB, TX: Air Force Human Resources Laboratory, Occupational and Manpower Research Division.

Vaughan, D.S. (1978). Two applications of occupational survey data in making training decisions. Proceedings of the 20th Annual Conference of the Military Testing Association (pp 213-227). Oklahoma City, OK: U.S. Coast Guard Institute.

Weeks, J. (1984, November). Occupational learning difficulty: A standard for determining the order of aptitude requirements minimums (AFHRL-TR-84-26). Brooks AFB, TX: Air Force Human Resources Laboratory, Manpower & Personnel Research Division.

Weeks, J., Mumford, M.D., & Harding, F.D. (1985, October). Occupational learning difficulty: A construct validation against training criteria. Proceedings of the 27th Annual Conference of the Military Testing Association. San Diego, CA: Naval Personnel Research and Development Center.

Weissmuller, J.J., Dittmar, M.J., & Moon, R.A. (1993, June). A validation of automated training indicators (ATIs). Proceedings of the Eighth International Occupational Analysts Workshop (pp 306-310). San Antonio, TX: USAF Occupational Measurement Squadron.

(Revised - 11/23/94; Additions Welcome)
