Presented at the 38th Annual Conference of the International Military Testing Association (IMTA), 12 - 14 November 1996, Gunter Sheraton Hotel, San Antonio, Texas; co-hosted by the Air Force Personnel Center, Armstrong Laboratory/Human Resources Directorate, and the Air Force Occupational Measurement Squadron.

Development and Application of a Computer-Assisted Survey
Authoring Tool for Training Needs Assessment

J. L. Mitchell, J. J. Weissmuller, D.L. Tucker, Patricia Waldroop
Institute for Job & Occupational Analysis
San Antonio, Texas, U.S.A.

Winston Bennett, Jr.
Technical Training Research Division
Armstrong Laboratory, Human Resources Directorate
Brooks Air Force Base, Texas, U.S.A.

In previous conferences, we have reported on the field testing of computer-assisted occupational surveying and concluded that occupational data could be collected via diskettes more quickly and efficiently than by traditional paper-and-pencil surveys (Albert, et al., 1994, 1995; Mitchell, et al., 1994, 1995). One of the major recommendations made in the final report of the feasibility project (Mitchell, Weissmuller, Gosc, & Bennett, 1995) was the need for a Windows-based authoring tool capable of creating reproducible master diskettes for occupational and other types of surveys (training evaluation, job analysis, performance measurement, and other uses).

AUTHOR

A developmental version of the Air Force Survey Authoring System (AFSAS, commonly referred to as AUTHOR) was completed in August 1995 and underwent initial user testing in subsequent months. An AFSAS Operational Guide was completed in November 1995, as was the first formal version of the system (Version 1.0, 2 November 1995). This version was distributed to selected users as a beta test of the operational system, and feedback on problems and suggested improvements was solicited. Using the feedback received from Version 1.0 users, a second version of the system was completed in February 1996 (Version 2.0, 15 February 1996), and a formal training program was developed. A training session was presented for personnel of the Armstrong Laboratory and the Air Force Occupational Measurement Squadron on 16 February, and copies of both the software and the operational guide were distributed.

The AFSAS was then used to generate the Air Force Health Administrator survey (n = 2000); the Quality Air Force study (n = 200), which involved officers, enlisted personnel, and civilian employees; and the Behavioral Scientist study (n = 100). In the development of each of these surveys, occasional problems were encountered, either in how the survey operated or in the data uploading process. Each problem was addressed so that the surveys could progress, and ideas were developed for other needed capabilities. These included the ability to branch based on some item (such as status or component) and the logic needed to route respondents to only the appropriate items (e.g., officer grade, enlisted grade, or civilian grade). This type of logic and branching is particularly needed for Training Evaluation research, where multiple ratings or data elements are needed for tasks (or task modules) as well as for background variables. Such logic is not supported by the Computer Administered Occupational Survey (CAOS) software. Fortunately, software which met some of these needs was already operational in a series of collateral projects.

OASurv

OASurv is a survey administration engine developed by Metrica, Inc. to support automated survey requirements for US government data collection projects. OASurv is a program (OASURV.EXE) which reads a script file (a standard MS-DOS text file), formats questions, and collects, validates, and records data from survey respondents. OASurv is very much like an actor playing a part: what the respondent sees depends on the contents of the script. In today's terminology, OASurv is called a survey engine; the program is the engine and the flat text file (the script) is the fuel. Ongoing maintenance and some enhancement efforts have been undertaken by the Institute for Job and Occupational Analysis (IJOA) to meet ever-changing government needs in application areas such as training evaluation and occupational analysis.
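To illustrate the engine-and-script separation, a minimal sketch follows. The script directives (INSTRUCTION, QUESTION), the delimiter, and the ANSWERS.TXT output file are invented for illustration; OASurv's actual script syntax and data formats are not reproduced here.

    # Minimal sketch of a script-driven survey "engine." The directive names,
    # delimiter, and output file are hypothetical, not OASurv's real format.
    SCRIPT = """\
    INSTRUCTION|This survey is voluntary. Press Enter to continue.
    QUESTION|grade|What is your pay grade?
    QUESTION|months_in_job|How many months have you been in your present job?
    """

    def run_survey(script_text, answers_file="ANSWERS.TXT"):
        """Read the script line by line and act on each directive."""
        with open(answers_file, "w") as out:
            for line in script_text.splitlines():
                if not line.strip():
                    continue
                fields = line.strip().split("|")
                directive = fields[0]
                if directive == "INSTRUCTION":
                    print(fields[1])
                    input()                              # wait for Enter
                elif directive == "QUESTION":
                    name, prompt = fields[1], fields[2]
                    response = input(prompt + " ")
                    out.write(f"{name}={response}\n")    # record the response

    if __name__ == "__main__":
        run_survey(SCRIPT)

Changing the survey means editing only the script text; the engine itself stays untouched.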

The original funding for OASurv's development was partially drawn from AF Contract F33615-91-D-0010, DO 0013, a study of Manpower, Personnel & Training (MPT) Functional Relationships. The AL/HRM project required a fairly standard set of background items as well as special-purpose modules to achieve the goals of the study. In each of 21 different career areas, the project presented a pre-screened subset of approximately 60 tasks to each of 200 job incumbents and allowed each incumbent to rate "familiarity" on a nine-point scale. For those tasks rated 7, 8, or 9 (highly familiar), the job incumbent provided direct time estimates for the targeted tasks under various conditions. When the job incumbent finished with the survey disk, the disk was turned over to the incumbent's supervisor for ratings of incumbent proficiency and training investment.

The Air Force Occupational Analysis Program began to consider automated data collection in 1991. Because the new media presented new possibilities for data collection methodologies, research programs were initiated to evaluate possible alternatives. This initiative resulted in a package of approximately 20 programs (the Computer Administered Survey System [CASS]). The development of these programs drew on both the script-based design described above and a series of "data entry" programs previously developed in the military by the programmer. Data collection was initially envisioned as a mailout to eight career fields (three treatments each). Any one of the 24 experimental treatments could fit on a single 5 1/4" double-density floppy. The need for controlled conditions during this sensitive measurement phase, however, altered the initial plans for data collection. Data were instead collected on the 30+ computers in the Project LAMP computer lab. Because each Project LAMP computer had to be able to administer the proper treatment whenever a subject "logged in," all treatments had to be loaded on each machine. The entire set of research programs required three 5 1/4" floppies to install on a Project LAMP computer hard disk (Albert, et al., 1993).

While the CASS methodology research was underway, the question of logistical hurdles in using mailout disks arose again. Under the auspices of IJOA, a feasibility study was undertaken (Mitchell, et al., 1995). Because none of the CASS programs closely matched the standard paper-and-pencil (optical scan sheet) methods used by the OMS, alternate programs were developed for this study. These programs were called the Computer Administered Occupational Surveys (CAOS) to reflect their closer ties to the operational OA community rather than to the research community served by CASS.

The project from which OASurv sprang began in mid-1995. Initial review showed that at least eight "unique" data collection programs were required. A "unique" data collection program is one which can ask and record only a specific kind of question or data set. Such a program is required when data must be cross-compared or collected in a very specific format which must be validated and corrected. The opposite of a "unique" data collection program, then, is a "generic", "general", or "robust" data collection program. To meet deadlines, the most appropriate models from the CASS system were selected as starting platforms. As the first three reached completion, it became clear that a single engine would be possible. As the fourth and fifth "unique" engines were completed, the new unified OASurv began to take shape and eventually pre-empted all subsequent programming of "unique" engines.

General Features of OASurv

OAVirus - Virus Checking. Multi-person use of the survey diskette mandated the development of a non-proprietary virus checking procedure. This feature can be automatically added after a master survey diskette has been produced.

OASys - Inventory of Computer System Hardware/Software. The project also needed to capture system information to determine whether the job incumbent and supervisor data were collected on the same physical machine. There was some concern that job incumbents would simply complete the supervisor's survey rather than actually turn over the survey disk. To this end, the Volume Serial Number of Drive C is copied, along with a complete equipment inventory. It is strongly recommended that this program be used in all automated surveys so that a centralized database of the Air Force computer inventory can be developed.
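The underlying check (were the incumbent and supervisor data collected on the same physical machine?) can be sketched as follows. The fingerprint fields used here (hostname and network adapter address) are modern stand-ins for the DOS Volume Serial Number and equipment inventory that OASys actually records.

    # Conceptual sketch of the "same machine?" check made possible by OASys
    # data. Hostname and adapter address stand in for the DOS Volume Serial
    # Number and equipment inventory actually captured.
    import socket
    import uuid

    def machine_fingerprint():
        """Collect a few identifiers for the machine running the survey."""
        return {
            "hostname": socket.gethostname(),
            "adapter": hex(uuid.getnode()),   # network adapter address
        }

    def same_machine(incumbent_fp, supervisor_fp):
        """Flag cases where both surveys appear to come from one machine."""
        return incumbent_fp == supervisor_fp

    if __name__ == "__main__":
        fp = machine_fingerprint()
        print("Fingerprint:", fp)
        print("Same machine?", same_machine(fp, fp))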

LOGTIME - An event time-stamp program. To the same end of determining whether an incumbent is completing the supervisor's survey, an event log (LOGTIME) was developed and incorporated into the batch runs which control the survey.
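A minimal sketch of such an event log is shown below, assuming a simple append-only text file; the log file name and record layout are illustrative, not the actual LOGTIME format.

    # Sketch of an event time-stamp log in the spirit of LOGTIME.
    # The log file name and record layout are assumptions for illustration.
    from datetime import datetime

    def log_event(event, logfile="EVENTLOG.TXT"):
        """Append a time-stamped event record to the survey's log file."""
        stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        with open(logfile, "a") as log:
            log.write(f"{stamp}  {event}\n")

    log_event("INCUMBENT SURVEY STARTED")
    log_event("INCUMBENT SURVEY STOPPED")
    log_event("SUPERVISOR SURVEY STARTED")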

OAGate - A program to control access to the diskette. Both the job incumbent and the supervisor may start, stop, and restart the survey to fit their schedules. Once a person records data on the disk, the Privacy Act applies, and each restart requires re-entry of the Social Security Account Number.
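The restart gate can be sketched as a check of the re-entered identifier against the one recorded at the first session. Storing a hash of the SSAN is an assumption of this sketch, not necessarily how OAGate stores it.

    # Sketch of an access gate for restarting a partially completed survey.
    # Hashing the SSAN is an illustrative choice; OAGate's actual storage
    # scheme is not reproduced here.
    import hashlib

    def fingerprint(ssan):
        return hashlib.sha256(ssan.encode()).hexdigest()

    def may_resume(stored_fingerprint):
        """Require the respondent to re-enter the SSAN before resuming."""
        attempt = input("Re-enter your SSAN to continue the survey: ")
        return fingerprint(attempt) == stored_fingerprint

    stored = fingerprint("123456789")      # recorded at first data entry
    print("Access granted" if may_resume(stored) else "Access denied")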

Other OASurv Features

To use OASurv, a diskette is needed with a "batch" file (survey.bat) that sequences the required steps, such as virus checking, event logging, and actually starting the OASurv program.

OASurv can: 1) Display Instructions; 2) Ask Background Questions; 3) Collect List-based information (such as Base List, Equipment List, Task List, etc.); 4) Accept Open-Ended Input; and 5) Conditionally Branch to or around items.

Display instructions are classified into three types:

Type One Instructions are those instructions in which you fill the entire screen with whatever you want the survey respondent to read. This includes such things as the survey cover page with title, survey control number, the Privacy Act Statement, the Eligibility Statement, etc. These instructions may appear at any point in the survey. The survey creator may enter the text for these items while creating the survey and specifying the background questions and/or recording options to be used in rating assignments.

Type Two Instructions/Help are standard instructions that are accessed by respondents during the survey process by pressing the [F1] (HELP) key. The organization using OASurv may wish to standardize the HELP for routine applications. Text for these instructions is stored in the OASurv.HLP text file where they can be altered if necessary.

Type Three Instructions are those that you want to appear at the top of the screen at all times WITHIN a rating assignment (e.g., the definition of the "Relative Time Spent" scale). As with Type Two instructions, the organization using OASurv may wish to standardize the SCALE FORM-Flash for routine applications. The text for these form-flash instructions is stored in the OASurv.DFL text file.

Background questions are classified into three types:

Type One Background questions are "Fill in the Blank" questions such as NAME, SSAN, Job Title, etc. You restrict the type of characters or data you will accept for a Type One item. Data types include: Numeric, Alphabetic, Printable Type, Date (YYMMDD), or Duration (YYMM). Customized formats can be specified using "M" (for "mask"), which allows the survey creator to specify each position and the required length of a response. For example, one might ask for the Social Security Account Number (SSAN) and specify a length of "9" with a data type of "N" to ensure all numeric characters.

Type Two Background questions permit respondents to select "the best" response from a list of choices which fits on a single computer screen. This category includes items like Job Satisfaction and Gender. Again, inputs may be audited either as to type (numeric, etc.) or as to validity within an enumerated set (like <MF> for Male/Female or <123456789> to exclude 0).

Type Three Background questions are also a "Choose ONLY ONE" response, but for those questions in which the list is too long for a single screen. Type Three supports a "code" value associated with each choice (like Military Base code) so only the code for a single item is stored. If no codes are associated with a list, then the item number within the list is stored. Although equipment check lists are considered "background" questions in CODAP, the "Check all that apply" questions are collected by OASurv in the "List-Based" questions which follow.
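The three background-question types above can be sketched roughly as follows. The data-type codes (N, A, D) follow the descriptions above, but the validation rules, function names, and example list are simplified assumptions rather than OASurv's actual logic.

    # Simplified sketch of background-question auditing and storage.
    # N = numeric, A = alphabetic, D = date YYMMDD, per the text above;
    # the exact OASurv rules and storage formats are not reproduced here.
    import re

    def audit_type_one(response, data_type, length=None):
        """Type One 'fill in the blank': check data type and required length."""
        if length is not None and len(response) != length:
            return False
        if data_type == "N":
            return response.isdigit()
        if data_type == "A":
            return response.isalpha()
        if data_type == "D":                       # date as YYMMDD
            return bool(re.fullmatch(r"\d{6}", response))
        return response.isprintable()              # printable type

    def audit_type_two(response, allowed):
        """Type Two 'choose one': accept only members of an enumerated set."""
        return response in set(allowed)

    def store_type_three(choice_index, choices):
        """Type Three: store the code for the chosen entry, else its item number."""
        code, _label = choices[choice_index]
        return code if code else choice_index + 1

    print(audit_type_one("123456789", "N", length=9))        # SSAN example -> True
    print(audit_type_two("M", "MF"))                         # Male/Female set -> True
    print(store_type_three(0, [("SHEP", "Sheppard AFB")]))   # made-up code -> "SHEP"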

Collect List-based Information. OASurv treats all lists the same, so what follows can be applied to any type of list (e.g., equipment lists, knowledge lists, lists of courses taken). Only one list can be selected as the "Task List" for CODAP; all others will be coded as a SERIES of background items. Each list file produced by a survey creator (like TASKS.TXT) or extracted and converted from TIARA (like BASES.TXT) must be processed by the "MAKEFAST" program to produce a rapid-access version of those inputs. This program will take a file like "TASKS.TXT" and output a file called "TASKS.FAS". Only the "<name>.FAS" files should be copied to the OASurv master survey disk. In the IJOA AUTHOR system or the MAKEDISK batch procedure, this step is handled automatically.
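The role of MAKEFAST (turning a plain text list into a rapid-access file) can be sketched as below; the index-plus-records layout shown is an assumed stand-in, since the real .FAS format is internal to OASurv.

    # Sketch of a MAKEFAST-style conversion: build a rapid-access version of a
    # plain text list. The count/offset/records layout is an assumed stand-in
    # for the real .FAS format.
    def make_fast(txt_name, fas_name):
        with open(txt_name, "r") as src:
            lines = [line.rstrip("\n") for line in src]
        with open(fas_name, "w") as out:
            out.write(f"{len(lines)}\n")       # record count header
            offset = 0
            for line in lines:
                out.write(f"{offset}\n")       # offset of each record in the body
                offset += len(line) + 1
            out.write("\n".join(lines) + "\n") # the original records themselves

    # make_fast("TASKS.TXT", "TASKS.FAS")      # usage, assuming TASKS.TXT exists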

Type One: Check if done or Yes/No. Type One Inventory questions are similar to the task list "Check IF Done" and equipment usage (Use/Don't Use) check lists - ones in which you "Check all that apply." A "Yes/No" (or "no response=*") type response is stored for every item in the list.

Type Two: Rate each Item. Type Two Inventory questions are similar to Type One, except you rate some value on a scale like "Relative Time Spent" or "Frequency of Use." A rating response is stored for every item in the list. Note: If you ask a respondent to do a "Check all that apply" pass, you can instruct the Rating Pass to use ONLY those items checked. When using TE and TD ratings, the "Check" pass is not used and the "Rating" pass will automatically solicit a response for every item unless you specifically request a YESNO (i.e., Check first) pass.
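The relationship between a check pass and a rating pass can be sketched as follows; the task list, prompts, and nine-point scale wording are placeholders.

    # Sketch of the two-pass logic: a "check all that apply" pass followed by
    # a rating pass that solicits ratings only for the items checked.
    tasks = ["Inspect equipment", "Prepare reports", "Train new personnel"]

    def check_pass(items):
        """First pass: record Yes/No for every item in the list."""
        return {t: input(f"Do you perform: {t}? (Y/N) ").upper() == "Y" for t in items}

    def rating_pass(items, checked):
        """Second pass: rate only the items checked in the first pass."""
        ratings = {}
        for t in items:
            if checked[t]:
                ratings[t] = int(input(f"Relative time spent on '{t}' (1-9): "))
        return ratings

    checked = check_pass(tasks)
    print(rating_pass(tasks, checked))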

Accept Open-Ended Input for Comments/OTHER. If a survey creator wants to solicit text input in the case of an "OTHER" (or any other selected response), a mini text editor is available to accept the input. At least one line of prompt for the editor screen is required. If extensive instructions (or a FORM) are desired, the editor may be "pre-set" to the desired contents.

Conditional Branching. OASurv now supports assigning Externally Recognized Names to items in the survey. Any item with an External ID can be 1) queried by other items to check responses or 2) jumped to from somewhere else in the survey. The Condition Transfer command asks whether the Externally Named item had a response equal to, greater than or equal to, less than or equal to (etc.) a specified value. If the specified value is found at the Externally Named location, the transfer takes place to the Externally Recognized Name at the end of the Transfer statement.
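A Condition Transfer can be sketched as a comparison against a previously recorded, externally named response; the item names, operator codes, and branch targets below are illustrative only, not OASurv syntax.

    # Sketch of a Condition Transfer: compare the response stored under an
    # external name against a value and, if the test passes, jump to another
    # externally named item. All names and values here are made up.
    import operator

    responses = {"STATUS": 2}     # e.g., 1 = officer, 2 = enlisted, 3 = civilian

    OPS = {"EQ": operator.eq, "GE": operator.ge, "LE": operator.le}

    def condition_transfer(item_name, op, value, target, default_next):
        """Return the name of the next item to administer."""
        if OPS[op](responses[item_name], value):
            return target
        return default_next

    print(condition_transfer("STATUS", "EQ", 2, "ENLISTED_GRADE", "OFFICER_GRADE"))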

Operational Use of AUTHOR with OASurv

With the refinement of AUTHOR and OASurv, it has become possible to administer a wide variety of surveys. For AFOMS, a complex study of strategic analysts (STRATCOM) was undertaken which had a lengthy background section, a 1,467-item task list, plus Ballentine and Cunningham's General Work Inventory (GWI), itself a complex set of seven mini-surveys using a check pass and a rate pass within each section. A different rating scale was used for the last two sections of the GWI. This survey experienced a number of difficulties: for example, of 140 disks, only 50 were returned with readable responses. In some cases, respondents did not follow instructions to exit Windows to the DOS prompt; with Windows, networks, and other applications running in the background, their computers ran out of working memory and the survey failed to work properly. In other cases, it appears that respondents simply quit the survey about three-quarters of the way through the task list. This experience suggests the need for better memory management within OASurv (which is currently being implemented), and perhaps for a redesign to run from within Windows (and Windows 95, Windows NT, etc.).

A series of four Medical Technician training evaluation surveys (with multiple passes to capture training time estimates, among other things) was accomplished using AUTHOR (Version 2.4) and OASurv (Version 2.3) to collect data from active duty, AF Reserve (AFRes), and Air National Guard (ANG) personnel (n = 3000). Many disks were hand-delivered to selected AFRes and ANG sites around the country by development team members, who reported very few problems; administration times generally averaged about 1 to 1.5 hours. Data were collected from multiple respondents (as many as seven cases) on a single double-sided, double-density disk. Upon return to Brooks AFB, the data files were uploaded from the survey diskettes to a consolidated case file via the AUTHOR Collect function. Data were summarized, means and standard deviations calculated, and summary reports provided to Air Force decision makers at a Sheppard AFB, TX meeting last week (one-week turnaround). This use of the AUTHOR and OASurv systems was the most successful to date, and represents a new level of maturity for the software and the approach.
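A minimal sketch of that consolidation and summary step (merging uploaded case records and computing means and standard deviations) follows; the record layout and field name are placeholders, not the actual AUTHOR Collect file format.

    # Sketch of consolidating uploaded case records and summarizing a rating
    # field. The record layout and field name are placeholders.
    from statistics import mean, stdev

    cases = [
        {"case_id": 1, "training_hours": 40},
        {"case_id": 2, "training_hours": 55},
        {"case_id": 3, "training_hours": 32},
    ]

    hours = [c["training_hours"] for c in cases]
    print(f"n = {len(hours)}, mean = {mean(hours):.1f}, s.d. = {stdev(hours):.1f}")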

Conclusions

While much remains to be done (including updating Help information and revising the AFSAS Operational Guide), the AUTHOR and OASurv systems have now undergone rather strenuous testing, and have emerged as useful technology which the Air Force (and other services) can use in future data collection efforts. Further refinements should include additional work on the Task Editor, some standard templates (such as the now-agreed-upon standard background section) for operational occupational surveys, a within-Windows run capability, and more output formats (in addition to CODAP and ASCII). Collateral research and development is also needed to expand the uses of the systems for other possible data collection purposes (such as job and training history data for TIDES, performance assessment for training criteria for TEEM, team tasks, etc.).

References

Albert, W.G., Phalen, W.J., Selander, D.M., Dittmar, M.J., Tucker, D.L., Hand, D.K., & Weissmuller, J.J. (1995, May). Large-scale laboratory test of occupational survey software and scaling procedures. Proceedings of the Ninth International Occupational Analyst Workshop. San Antonio, TX: U.S. Air Force Occupational Measurement Squadron.

Albert, W.G., Phalen, W.J., Selander, D.M., Dittmar, M.J., Tucker, D.L., Hand, D.K., Weissmuller, J.J. & Rouse, I.F. (1994, October). Large-scale laboratory test of occupational survey software and scaling procedures. In the symposium, Bennett, W. Jr., Chair, Training needs assessment and occupational measurement: Advances from recent research. Proceedings of the 36th Annual Conference of the International Military Testing Association. Rotterdam, The Netherlands: European Members of the IMTA.

Albert, W.G., Phalen, W.J., Selander, D.M., Yadrick, R.M., Rouse, I.F., Weissmuller, J.J., Dittmar, M.J., & Tucker, D.L. (1993, November). Development and test of computer-administered survey software. Proceedings of the 35th Annual Conference of the International Military Testing Association. Williamsburg, VA: U. S. Coast Guard.

Mitchell, J.L., Weissmuller, J.J., Bennett, W. Jr., Agee, R.C., & Albert, W.G. (1995, October). Final results of a feasibility study of computer-assisted occupational surveys. In the symposium, Gould, R. B., Chair, Issues and advances in task-based occupational research and development for manpower, personnel, and training. Proceedings of the 37th Annual Conference of the International Military Testing Association. Toronto, Canada: Canadian Forces Applied Research Unit.

Mitchell, J.L., Weissmuller, J.J., Gosc, R.L., & Bennett, W., Jr. (1995, September). Feasibility study of the development, implementation & evaluation of computer-based job & occupational data collection methods. Draft final report, prepared for the Technical Training Research Division of the Armstrong Laboratory, Human Resources Directorate, Brooks AFB, TX.

Mitchell, J.L., Weissmuller, J.J., Bennett, W.R. Jr., Agee, R.C., Albert, W.G., & Selander, D.M. (1995, May). A field study of the feasibility of computer-assisted occupational surveys: Implications for MPT research & development. In the symposium (Smith, A.M., & Bennett, W.R., Chairs), Military Occupational Analysis: Applications for Manpower, Personnel, & Training. Proceedings of the Ninth International Occupational Analysts Workshop. San Antonio, TX: Air Force Occupational Measurement Squadron.

Mitchell, J.L., Weissmuller, J.J., Bennett, W.R. Jr., Agee, R.C., Albert, W.G., & Selander, D.M. (1994, October). A field study of automated occupational survey administration methods. In the symposium (Bennett, W.R., Chair), Training needs assessment and occupational measurement: Advances from recent research. Proceedings of the 36th Annual Conference of the International Military Testing Association. Rotterdam, The Netherlands: European Members of the IMTA.
