Instructional Design Process

Several instructional design models have been developed to provide instructional designers with a format to follow when analyzing a situation that needs instruction, developing the relevant content, and then evaluating the results of their effort (Brown & Green, 2016). One description of this process is the ADDIE model, an acronym that divides the process into the following components:

Analyze

Dick and Carey (1996) suggest that in addition to analyzing the content you will be teaching, you should also evaluate the learners and the environment in which the learning will take place. This emphasis on analysis marks a movement away from the lecture-based, sage-on-the-stage method, in which content is presented with little regard for the background or needs of the learner (Dick, Carey, & Carey, 2009).

Generally, during the analysis phase, a needs analysis, problem identification, and task analysis are completed, and the goals and objectives for the course are developed. David Merrill's Pebble-in-the-Pond model deviates from this: he suggests developing objectives after the content has been established, since objectives frequently change during the development phase (Brown & Green, 2016). In addition to goals and objectives, learner and environmental requirements are identified during this phase.

Design

There are several suggested formats for designing instruction. David Merrill (2002), in his first principles of instruction, argues that learners must be engaged in solving real-world problems. They must draw on previously learned knowledge and construct new knowledge, which they then apply and assimilate into their world. The design phase typically starts with the learning objectives, then moves to a flowchart or organizational structure for the content, then storyboarding, and finally designing the actual system or user interface (Brown & Green, 2016).

Develop

During the development phase, the instructional designer, along with the subject matter expert, writes the actual content. After the content is developed, there is a period of troubleshooting and seeking feedback.

Implement

During the implementation phase, the instructional designer teaches the trainers and develops methods for assessing the content.

Evaluate

The evaluation phase includes writing both formative and summative assessments to verify that learners met the learning goals for the course, as well as evaluating course delivery, content, and implementation.


Sample Instructional Design Process

The problem that will be explored and evaluated in this course is how to train new admissions committee members. Admissions committee members are expected to assess 3,000-4,000 applicants fairly, quickly, and equitably. This assessment must be done using a holistic approach that maintains the diversity of the class while at the same time achieving a level of consistency that is defensible when questioned by applicants.

In addition, a new software system will be utilized during the next admissions cycle. The volunteer admissions committee members need to use this software in the pre-admission screening process.

Needs Analysis

The first step in the design process, according to Brown and Green (2016), is to conduct a needs analysis or a systematic look at who is requesting a change, why a change is needed, and what variables are involved, including technological and environmental. The instructional designer can help determine what change needs to occur.

Robert F. Mager suggests a common approach: evaluate how the process should be done and how it is currently being done. If there is a discrepancy between the two, then instruction is needed (Brown & Green, 2016). To develop a comprehensive needs analysis, the instructional designer should interview or survey all relevant constituents.

Another method suggested by Brown and Green (2016) was developed by Allison Rossett (1995), who indicates that the instructional designer should determine the optimal performance of a task, the actual performance of the task, any feelings about the task that may be contributing factors, and any root causes of sub-optimal performance.

In this workshop case, the desired change is to train new admissions committee members to evaluate candidates for admission systematically and equitably by reviewing each application holistically and comparing applicant metrics to the mission of the school.

The need for an updated plan for evaluation, as well as an instructional manual, was determined after interviewing several members of the admissions committee. They felt that with many new committee members, there is a need for training on how to utilize a consistent, equitable method for applicant screening.

Currently, each member of the admissions committee uses their own interpretation of the mission statement to determine how well each candidate meets the criteria. This method seems to work well with a stable, experienced committee, but no statistical evaluation of consistency has taken place.

The plan is to evaluate the admissions committee's procedures and performance over the next several months, and to analyze the statistical correlation between various aspects of the application and the committee score. Opinions on the root causes of inconsistencies, as well as suggestions for improvement, will be sought from admissions committee members and from the administrators and IT professionals who were involved in the last admissions cycle.
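
The planned consistency check could be sketched in a few lines of Python. Everything in this example is hypothetical and illustrative only: the applicant IDs, the 1-10 score scale, and the spread threshold are placeholders, not part of the actual committee procedure.

```python
from statistics import mean, stdev

# Hypothetical screening scores (1-10 scale) given to the same
# applicants by three committee members; IDs and values are made up.
scores_by_applicant = {
    "A001": [7, 8, 7],
    "A002": [9, 4, 6],   # wide spread -> possible inconsistency
    "A003": [5, 5, 6],
}

def flag_inconsistent(scores, threshold=2.0):
    """Return applicant IDs whose scores vary more than the threshold."""
    return [aid for aid, s in scores.items() if stdev(s) > threshold]

for aid, s in scores_by_applicant.items():
    print(aid, "mean:", round(mean(s), 2), "stdev:", round(stdev(s), 2))

print("Flagged for review:", flag_inconsistent(scores_by_applicant))
```

A real analysis would use a formal inter-rater reliability statistic, but even a simple spread check like this can surface applicants whose ratings diverge enough to warrant discussion.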

Steps in a Needs Analysis

Step One: Determining the Desired Change

Problem List:
1. Each medical school has approximately 4,000 applications that need to be screened during the admission process, which extends from August through April each year.
2. For the most part, the screeners are volunteer faculty who do the screening after hours.
3. Each screener may have a different interpretation of the mission statement and, therefore, different priorities when determining what makes a good applicant.
4. Diversity in all aspects of the application is essential. As an institution of higher learning, diversity of culture, experiences, country of origin, socioeconomic factors, and educational history are all important and valued.
5. New software will be introduced for the next admissions season, and the pre-admission screening process will need to be completed in it.


Step Two: The Request

The request for the desired change is coming from many sources. Many medical schools are looking at innovative ways to screen applicants, using combinations of the following options: metrics only, holistic evaluations, candidate interviews, and multiple mini-interviews.

The next step is an informal interview of both the IT and administrative personnel, as well as the admissions committee members, followed by a survey of both IT and administrative personnel, as well as members of the admissions committee.

Step Three: Implementation

The desired change will take place in the admissions office.

Step Four: The Intervention

The next step is a formal observation. The output from each member of the committee is observed, and data are collected from each pre-admission screener, including how many applicants they screen, how they rate each applicant, and ultimately how many applicants are chosen to be interviewed. Correlation studies are also done between each component of the application and the ultimate acceptance decision to determine whether a correlation exists and whether it is positive or negative.

The purpose of the observation is to collect data to see if there are parts of the application that could use technology to score while still maintaining diversity.
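
As a rough sketch of the correlation studies described above, the following Python snippet computes a Pearson correlation between one application component and the committee score. The component name and all data values are hypothetical placeholders, not real admissions data.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: one entry per applicant, pairing a component score
# (e.g., GPA rescaled to 0-10) with the final committee score.
gpa_component = [6, 7, 8, 5, 9, 6, 7]
committee_score = [5, 7, 9, 4, 9, 6, 6]

r = pearson(gpa_component, committee_score)
print(f"GPA component vs. committee score: r = {r:.2f}")
```

A strongly positive r for a component would suggest it is a candidate for technology-assisted scoring, while a near-zero or negative r would argue for keeping that component in the holistic human review.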

Step Five: Evaluating the Success

As the project continues, a formal set of objectives is developed. The admissions committee members, as well as the IT and administrative personnel, are given a second survey to determine how well the scoring rubric meets the needs of the committee. They will consider whether it is aligned with the mission of the school, its ease of use, its utility, and the ease of inputting scores into the software system.


Task Analysis

After a needs analysis has been completed and the goals for training are more formalized, the next step is a task analysis, which, according to Brown and Green (2016), is the most critical step in the instructional design process.

Morrison et al. (2006) emphasize the need to tie together the goals from the needs analysis, the characteristics of the learners from the learner analysis, and the content of the task analysis. According to Brown and Green (2016), the final step in the task analysis process is to evaluate the success of the task analysis. They suggest asking a subject matter expert who was not involved in the process to assess the detailed task analysis and see if any steps require further elaboration or clarification.

The two tasks that need to be analyzed in this workshop are the following:

  • break down the steps to screen a medical school application
  • break down the steps required for a new committee member to learn how to input the content from the pre-admission screen of an applicant into the new WebAdmit software.

Brown and Green (2016) recommend evaluating the task in terms of both scope and sequence and then formalizing it in either outline or flowchart format. After the task analysis is complete, Brown and Green (2016) suggest that specific learning goals be developed for the learners.

For this process, a committee will be formed consisting of one member of the IT department, one instructional designer, and one member of the admissions department. They will teach the new committee member how to access and utilize the software.

The two tasks that will be analyzed are (1) the steps necessary to access and input the pre-screening review of a candidate into the WebAdmit software and (2) the steps required to formally review a medical school application.

Learner Analysis

Brown and Green (2016) state that the next step in the instructional design process is analyzing the learners. As a designer of instructional content, it is essential to know who your learners are as well as their strengths and weaknesses. No longer is teaching considered a process of imparting knowledge to learners who lack the information. Instead, it is acknowledged that learners have their own skill sets and experiences that they can draw upon to construct knowledge.

With this in mind, learners are an active component of the curriculum instead of mere recipients. Anyone who has taught for any length of time will probably agree that each class has its own personality and that this must be taken into account when planning the curriculum.

Unlike other information gathered in the instructional design process, the analysis of the learners is a more private document. Brown and Green (2016) suggest starting by analyzing your learners as humans to see if your instruction has a role in fulfilling any basic human needs or wants. Next, look at their motivation for participating in the education: are they required to attend, or are they willing participants (Brown & Green, 2016)?

The goal of all instruction should be to be as inclusive as possible. Analyzing learners for special needs as well as deficits in skills is vital to ensure that the instruction meets the needs of as many people as possible.

The instructional design task in this workshop is the following: to analyze the steps in evaluating a medical school application, determine how this task could be standardized and taught to all learners, and utilize the new software to input data from the pre-admission screening. The recommendations of Mager (1988, p. 60, as cited in Brown & Green, 2016, pp. 78-79) were followed in the learner analysis survey and a discussion of the demographics of the population.


Dick, Carey, and Carey (2011) suggest analyzing:
1. Entry skills
2. Prior knowledge of the topic area
3. Attitudes toward content and potential delivery systems
4. Academic motivation
5. Educational and ability levels
6. General learning preferences
7. Attitudes toward training organizations
8. Group characteristics

Organization of Content

Goals and Objectives

Goals determine the intention of the instruction, and objectives describe the intended outcome of the instructional activity, according to Brown and Green (2016). However, Jonassen (1991) contrasts objectivism and constructivism and posits that if instructors shift their philosophy to a more constructivist view, then learning objectives would not be necessary or desired.

Constructivists believe that each learner constructs their own knowledge and builds mental models based on their experiences. If that is the case, then the goals and objectives for each learner would be different. The traditional objectivist position puts the instructor in the role of determining all that is to be learned and how it is to be learned, and the learner is merely the receptacle for this learning (Jonassen, 1991).

Goals are overarching statements about why you need instruction and what it is supposed to accomplish. A goal describes the change that is expected to occur in the learner, whether it is a change in knowledge, skill, or attitude.  Goals need to be formally articulated before any instruction can be developed, according to Brown & Green (2016).

Objectives are much more specific and are usually written in a format that clearly defines what the learner is to learn, how the instructor will know that they learned it, the level of achievement they are expected to reach, and the time frame in which the learner will complete the learning (Brown & Green, 2016).

There are several formats learning objectives can take. According to Mager (1984), a learning objective should include an action, a condition under which this action will occur, and a criterion or level of competency that the learner will demonstrate. For example: given a completed application and the school's mission statement (condition), the committee member will assign a holistic screening score (action) that falls within one point of the committee consensus (criterion). Another well-established aid for writing effective objectives is Bloom's taxonomy. In every format, the purpose of goals and objectives is the same: to develop effective, organized instruction for learners.

Organizing Instruction

The next step for the instructional designer is to decide how the instructional content will be organized. According to Brown and Green (2016), the first step is determining the scope and sequence of the material. In other words, you need to determine precisely how much content you intend to cover and in what order you plan to cover it.

There are many methods of organization discussed by Brown & Green (2016), and frequently the setting will provide some restrictions. The content could be organized in terms of content, which is the concepts, skills, or attitudes that you intend for the students to incorporate based on your objectives or in terms of media, which is the methods that you will utilize to teach the content (Brown & Green, 2016).

It is important to remember that not all objectives, and therefore not all content learned, should come from the instructor; learners, while actively working with the content, will develop their own objectives. Brown and Green (2016) describe Edgar Dale's Cone of Experience (Dale, 1969) as a way of describing the continuum of available learning experiences, which range from enactive, or real-world, experiences; to iconic, or visual and sensory, experiences; to symbolic experiences, which use sounds or symbols unrelated to the experience itself (Brown & Green, 2016). The instructor considers the setting in which the instruction will occur when planning: a classroom setting, programmed instruction, and distance learning each have their benefits and drawbacks (Brown & Green, 2016).

Learning Environments

Bransford, Brown, and Cocking (2003), as cited in Brown and Green (2016), describe four types of learning environments:

Learner-centered: This environment focuses on the learner and utilizes their experiences, biases, and prior knowledge to uncover misconceptions and preconceived ideas and develop new mental models that are more cohesive and consistent with the known science at the time. An example of this would be a group engaged in problem-based learning. 

Knowledge-centered environments: In this environment, the instructional content takes center stage. Activities are designed to teach the material and develop understandings. An example of this would be an instructor teaching on a scientific principle.

Assessment-centered environments: The instructional setting is designed to provide the opportunity for continual testing, feedback, and then revision. An example of this would be an online testing session where a student self-tests, receives feedback on their answers, and has the opportunity to retest.

Community-centered environments: In this environment, learners not only learn from the perspectives and experiences of other learners but also extend this knowledge to real-world examples of problems. An example of this would be a class on training employees where theories on how best to train are explained first, and then companies provide real-world case studies as practice activities.

Teaching Styles

Directed learning or teaching: In this method, the instructor has developed clear learning objectives and goals that will be covered when teaching the content. The focus is on the information; the instructor has the central role, and activities are designed to allow learners to engage with the material and learn the content (Brown & Green, 2016). This type of instruction is typically used in the medical field to teach how to perform a medical procedure. There is only one correct method, and attention to detail is critical.

Open-ended learning or teaching: In this method, the learners are the focus, and they actively work on solving a complex problem. The instructor's role is more supportive. Goals for the instruction may or may not be present, but objectives are not. The learners determine the objectives as they work through the content. Open-ended learning provides a forum and promotes divergent thought (Brown & Green, 2016). This style of learning is commonly used in medicine to explore a clinical scenario. It is more important to consider the perspectives and conceptions of the group members than to quickly arrive at an answer.

Strategies of Teaching

Problem-Based Learning: In problem-based learning, a group of students is presented with a problem. They utilize their experiences, biases, and prior knowledge to develop a list of objectives of content or concepts that need further investigation or understanding. After researching these objectives, the group reconvenes to try to develop a solution to the problem (Brown & Green, 2016). This method is used in medical schools to examine a clinical scenario, develop a differential diagnosis, and then to try to create a plan of action or plan of care for the patient. 

Simulations and Games: Simulations and games allow a learner to practice a new skill in a realistic environment with little to no risk. It is much easier to learn how to do something if you have the opportunity to practice the skill in an environment that feels authentic (Brown & Green, 2016). In medical education, simulated patients and simulation labs are used to teach medical students and residents how to perform a procedure or learn a new skill in a way that presents little to no risk.

Instructional Games: Instructional games are a fun, interactive way of extending knowledge or identifying gaps in knowledge by allowing students the opportunity to compete against themselves or others. Games must be well designed, or the focus shifts from the content to be learned to the game itself (Brown & Green, 2016). In medical school, there are many instructional games to teach about the immune system. The immune system is complex, with terminology that is new to the learner. The opportunity to play an instructional game makes learning less of a struggle and keeps the learners engaged. 

Just-in-time teaching: A method of using direct instruction in a more open-ended environment. Here the instructor provides mini-lectures to teach complex topics or provide background information (Brown & Green, 2016). After a problem-based learning session, just-in-time teaching is used to allow learners to continue working on their problem by providing them with enough background knowledge to keep them from becoming frustrated without giving so much instruction that diversity of thought is stifled. 

Evaluating the Course

The next step in the process is to determine how successful the learners were at meeting the objectives for the course. A criterion-referenced evaluation assesses each learner's competence at meeting each criterion, while a norm-referenced assessment compares learners to their peers instead of evaluating mastery of content. To develop a successful evaluation, the instructional designer must make sure that the assessment fully aligns with and thoroughly measures mastery of the objectives.
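
The distinction between criterion-referenced and norm-referenced evaluation can be illustrated with a short Python sketch. The criterion names, cut-off values, and cohort scores below are hypothetical placeholders for this workshop's objectives, not an actual scoring scheme.

```python
def criterion_referenced(score, cutoffs):
    """Map a raw score to pass/fail mastery of each criterion via fixed cut-offs."""
    return {crit: score >= cut for crit, cut in cutoffs.items()}

def norm_referenced(score, cohort_scores):
    """Percentile rank: share of peers scoring at or below this score."""
    at_or_below = sum(1 for s in cohort_scores if s <= score)
    return 100 * at_or_below / len(cohort_scores)

# Hypothetical cut-offs and cohort for illustration only
cutoffs = {"uses WebAdmit": 70, "applies mission-based rubric": 80}
cohort = [55, 62, 70, 74, 74, 81, 90, 95]

print(criterion_referenced(74, cutoffs))
print(norm_referenced(74, cohort), "percentile")
```

Note that the same raw score of 74 yields a fixed mastery judgment against the cut-offs but a percentile that would shift if the cohort changed, which is exactly the difference the two evaluation types embody.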

The evaluation can be conducted in several ways: a pen and paper test, an assessment demonstrating a skill, a performance evaluation, using observation or anecdotal records, reviewing a portfolio, or using a rubric (Brown & Green, 2016). The timing of an assessment depends on the goal.

A formative evaluation will check on the progress of the learner and the success of the instructional designer at meeting the objectives. A summative assessment is given at the end of the instruction to evaluate how successful the process was at helping learners achieve the goals and objectives. 

In this workshop, it will be essential to assess whether the admissions committee members can effectively do the following:

  • use WebAdmit to screen applicants for medical school
  • demonstrate familiarity with the prerequisites and minimum requirements for a medical student application
  • clearly articulate the school’s mission and determine attributes that can be used to show the alignment between a candidate and the mission of the school
  • evaluate and then defend reasoning for classifying an applicant at a certain level. 

Evaluating the Instructional Design Process

In addition to evaluating the success of the learners at meeting the objectives, the success of the program or design must be evaluated. Formative evaluation checks progress and provides an opportunity to make changes or improvements, while summative evaluation assesses the overall success of the designed instruction at meeting its goal: having the learners achieve the objectives.

There are many stages for the assessment of an instructional design project.

  • The first is to evaluate the instruction in draft form and make sure that it meets the intent of the subject matter expert as well as the needs of the learners.
  • The next step would be to have a group of learners take a pre-quiz, work through the content, and then take a post-quiz to see how successfully they learn the content with or without an instructor.
  • Finally, the instructional material is field-tested by the instructor (Brown & Green, 2016). 
  • A summative assessment is essential to evaluate the overall success of the training session or instruction.

One well-described summative assessment is that by Kirkpatrick (1994), which has four levels of evaluation, each level more comprehensive than the previous.

Level 1 is to check for reactions or feedback to the training session.
Level 2 is to look at what was actually learned by the participants.
Level 3 is to test participants' ability to transfer this information to new scenarios.
Level 4 is to look at whether the outcomes seen are a direct result of the training.

References

Bransford, J., Brown, A. L., & Cocking, R. R. (2003). How people learn: Brain, mind, experience, and school (2nd ed.). Washington, DC: National Academy Press.

Brown, A., & Green, T. D. (2016). The essentials of instructional design: Connecting fundamental principles with process and practice (3rd edition). New York, NY: Routledge.

Dale, E. (1969). Audio-visual methods in teaching (3rd ed.). New York, NY: Holt, Rinehart, and Winston. 

Dick, W., & Carey, L. (1996). The systematic design of instruction. In D. P. Ely & T. Plomp (Eds.), Classic writings on instructional technology (Vol. II). Englewood, CO: Libraries Unlimited.

Dick, W., Carey, L., & Carey, J. O. (2009). The systematic design of instruction (7th ed.). Columbus, OH: Allyn & Bacon.

Dick, W., Carey, L., & Carey, J. O. (2011). The systematic design of instruction (7th ed.). New York, NY: Pearson.

Jonassen, D. H. (1991). Objectivism versus constructivism: Do we need a new philosophical paradigm? Educational Technology Research & Development, 39(3), 5-14.

Kirkpatrick, D. L. (1994). Evaluating training programs: The four levels. San Francisco, CA: Berrett-Koehler.

Mager, R. F. (1988). Making instruction work. Belmont, CA: Lake Publishing Company. 

Mager, R. (1984). Goal analysis. Belmont, CA: Lake Publishing Company. 

Merrill, M. D. (2002). First principles of instruction. Educational Technology Research and Development, 50(3), 43-59.

Morrison, G. R., Ross, S. M., & Kemp, J. E. (2006). Designing effective instruction (4th ed.). Hoboken, NJ: John Wiley & Sons.

Rossett, A. (1995). Needs assessment. In G. Anglin (Ed.), Instructional technology: Past, present, and future (2nd ed., pp. 183-196). Englewood, CO: Libraries Unlimited.