HR Management Interventions Discussions
INSTRUCTIONS
Scenario
You are an HR associate at the Large Technology Corporation (LTC). As LTC has recently experienced rapid growth, your division manager, John Leicke, has approached you to help create a performance evaluation system for LTC. In your meeting, he explains that the current performance evaluation process varies by department and does not have buy-in from employees. Managers feel that the existing process requires too much time to implement. Employees, for their part, feel the system is too generic and leaves too much room for subjectivity in managers’ evaluations.
To address these issues, John has asked for your assistance in creating a performance evaluation proposal. This proposal should address the perceived issues in the current system and should provide guidelines for creating an effective performance evaluation system. Additionally, John has asked you to develop job goals and performance standards for the customer service department at LTC and to provide strategies for intervention or remediation based on employee scenarios.
Directions
Create a performance evaluation proposal and a performance evaluation sheet for John Leicke, the division manager of the human resources department at LTC. John is asking for your help in pitching this new system to upper management at LTC. He has asked for your advice on how to effectively communicate the need for this plan and the elements of an effective performance evaluation system.
Specifically, you must address the following:
Performance Evaluation Proposal: Your proposal should cover the fundamental aspects of a performance evaluation system. In your proposal, address the following:
- Articulate the importance of a performance evaluation system to a corporation.
- Outline the specific components of a performance evaluation system.
- Determine the performance appraisal method for measuring employee performance. Be sure to provide a rationale for your choice. For example, will you choose a critical incident appraisal?
- Describe a method for getting buy-in from managers and employees, including how both can participate in the performance evaluation process.
- Outline best practices to guide managers in giving feedback to employees.
Performance Criteria: Based on your proposal, senior leadership at LTC has decided that the company will use a graphic rating scale to conduct performance appraisals.
- Using the provided job description for the customer service representative role, create five to six performance evaluation criteria that could be used to evaluate a customer service representative at LTC.
- Using your performance evaluation criteria, choose three employees from the provided scenarios and evaluate the employees using the provided Performance Evaluation Sheet.
- Determine whether remediation is appropriate for each employee and, if so, describe the remediation steps. Recommend termination if necessary.
What to Submit
Every project has a deliverable or deliverables, which are the files that must be submitted before your project can be assessed. For this project, you must submit the following:
- Performance Evaluation Proposal: Outline a performance evaluation proposal for LTC. In addition, provide best practices for giving feedback and encouraging participation in the system. Your proposal must be 500 to 1,000 words in length. Cite all references appropriately.
- Performance Evaluation Sheet: Evaluate three employees at LTC according to the criteria you have developed for the customer service representative job description. Follow best practices for providing employees with actionable feedback.
RESOURCES
EVALUATION: PURPOSE AND DEFINITION
While some rightly say that the fundamental purpose of evaluation is the determination of the worth or merit of a program or solution (Scriven, 1967), the ultimate purpose, and value, of determining this worth lies in providing the information for making data-driven decisions that lead to improved performance of programs and organizations (Guerra-López, 2007a). The notion that evaluation’s most important purpose is not to prove but to improve was originally put forward by Egon Guba when he served on the Phi Delta Kappa National Study Committee on Evaluation around 1971 (Stufflebeam, 2003). This should be the foundation for all evaluation efforts, now and in the future. Every component of an evaluation must be aligned with the organization’s objectives and expectations and with the decisions that will have to be made as a result of the evaluation findings. These decisions are essentially concerned with how to improve performance at all levels of the organization: internal deliverables, organizational gains, and public impact. At its core, evaluation is a simple concept:
- It compares results with expectations.
- It finds drivers and barriers to expected performance.
- It produces action plans for improving the programs and solutions being evaluated so that expected performance is achieved or maintained and organizational objectives and contributions can be realized (Guerra-López, 2007a).
Some approaches to evaluation do not focus on predetermined results or objectives, but the approach taken in this book is based on the premise of performance improvement. The underlying assumption is that organizations, whether they fully articulate this or not, expect specific results and contributions from programs and other solutions. As discussed in later chapters, this does not prevent the evaluator or performance improvement professional from employing means to help identify unanticipated results and consequences. The worth or merit of programs and solutions is then determined by whether they delivered the desired results, whether these results are worth having in the first place, and whether the benefits of these results outweigh their costs and unintended consequences.
An evaluation that asks and answers the right questions can be used not only to determine results but also to understand those results and to modify the program or solution so that it better meets the intended objectives within the required criteria. This is useful not only for identifying what went wrong or what could be better but also for identifying what should be maintained. Through appreciative inquiry (Cooperrider & Srivastva, 1987), evaluation can help organizations identify what is going right. Appreciative inquiry is a process that searches for the best in organizations in order to find opportunities for performance improvement. Here too the efforts are but a means to the end of improving performance. Although most evaluators intend to improve rather than to find fault, the language and approach they use are often charged with assumptions that things are going wrong. For instance, the term problem solving implies from the start that something is wrong. Even if this assumption is not explicit in the general evaluation questions, it makes its way into data collection efforts. Naturally, the parameters of what is asked will shape the information evaluators get back and, in turn, their findings and conclusions. If we ask what is wrong, the respondents will tell us. If we ask what went right, again they will tell us. The key point is that evaluation should be as unbiased as possible: evaluators should ask and answer the right questions so that the data they get are indeed representative of reality.
In specific terms, before evaluators start to plan, and certainly before they collect data, they must determine why they are conducting an evaluation. Is this their initiative, or were they directed to do this work? What is the motivation for the study? What are they looking to accomplish and contribute as a result of this evaluation? Here are some general reasons for conducting an evaluation:
- To see if a solution to a problem is working, that is, delivering valued ends
- To provide feedback as part of a continual monitoring, revision, and improvement process
- To provide feedback for future funding of initiatives
- To confirm compliance with a mandate
- To satisfy legal requirements
- To determine if value was added for all stakeholders
- To hold power over resources
- To justify decisions that have already been made
Although the last two reasons in this list are particularly driven by political agendas, in reality most reasons can be politicized; thus, it takes an insightful evaluator to recognize whether an honest evaluation is feasible. An experienced evaluator will recognize, most of the time, whether evaluation stakeholders are truly interested in using evaluation findings to improve performance or are more concerned with advancing their political interests. With careful attention to detailed planning, either goal can be made to fit a data-driven and results-oriented approach to evaluation. But taken too narrowly, in isolation and without proper context, each brings its own set of problems, blind spots, and special data generation and collection issues. Perception of the purpose of the evaluation can shape and limit the data that are observed (or not observed), collected (or not collected), and interpreted (or ignored). Thus, evaluators and stakeholders must begin the planning process with a clear articulation of the decisions that must be made with the findings, decisions that are linked to the overall purpose for conducting the evaluation.
PERFORMANCE IMPROVEMENT: A CONCEPTUAL FRAMEWORK
The field of performance improvement is one of continuous transition and development. It has evolved through the experience, reflection, and conceptualization of professional practitioners seeking to improve human performance in the workplace. Its immediate roots stem from instructional design and programmed instruction. Most fundamentally, it stems from B. F. Skinner and his colleagues, whose work centered on the behavior of individuals and their environment (Pershing, 2006).
The outgrowth of performance improvement (also called human performance technology) from programmed instruction and instructional systems design was illustrated in part by Thomas Gilbert’s behavioral engineering model, which presented various categories of factors that bear on human performance: clear performance expectations, feedback, incentives, instruments, knowledge, capabilities, and internal motives, for example. This landmark model was published in Gilbert’s 1978 book, Human Competence: Engineering Worthy Performance, and was based in large part on the work Gilbert conducted with Geary Rummler and Dale Brethower at the time. Pershing (2006) declares that Joe Harless’s 1970 book, An Ounce of Analysis Is Worth a Pound of Objectives, also had a significant impact on the field and was well complemented by Gilbert’s work. Together these works served as the basis for many researchers who have contributed to and continue to help develop the performance improvement field.
Currently, the International Society for Performance Improvement, the leading professional association in the field, defines performance improvement as a systematic approach to improving productivity and competence, using a set of methods and procedures, and a strategy for solving problems, for realizing opportunities related to the performance of people. More specifically, it is a process of selection, analysis, design, development, implementation, and evaluation of programs to most cost-effectively influence human behavior and accomplishment. The analysis-through-evaluation steps in this series, commonly known as the ADDIE model, form the basic model from which many proposed performance improvement evaluation models stem. Pershing (2006) summarized performance improvement as a systematic combination of three fundamental processes: performance analysis (or needs assessment), cause analysis (the process that identifies the root causes of gaps in performance), and intervention selection (selecting appropriate solutions based on the root causes of the performance gaps). These three processes can be applied to individuals, small groups, and large organizations. The proposition that evaluation of such interventions should also be at the core of these fundamental processes is presented in the final chapter of this book.
This is the context in which evaluation is seen and described in this book—not as an isolated process but rather as one of a series of processes and procedures that, when well aligned, can ensure that programs and organizations efficiently and effectively deliver valuable results.
BENEFITS OF EVALUATION
Conducting an evaluation requires resources, but the benefits outweigh those costs in most situations. Here are some of the many benefits to include in an evaluation proposal or business case:
- Evaluation can provide relevant, reliable, and valid data to help make justifiable decisions about how to improve programs and other solutions, what programs and solutions to continue or discontinue, how to get closer to organizational goals, and whether current goals are worth pursuing.
- Evaluation plans and frameworks provide the basis for design, development, and implementation project management plans.
- Evaluation can identify any adjustments that have to be made during and after development and implementation, so that resources are maximized.
- Evaluation provides the means to document successes so that the merit of decisions, departments, staff, and solutions is recognized by all; budget requirements and jobs are justified; the quality of this work is respected by organizational partners; the value of opinions and data is taken into account throughout the organization; and evaluators gain credibility and competence, are granted autonomy and power along with accountability, and are seen as true strategic partners in the organization.
- Evaluation reports can be used to disseminate and market the organization’s successes to internal and external partners, such as current and prospective customers.
https://scholar.flatworldknowledge.com/books/27617/portolesedias_1.0-ch11_s00/read
https://ebookcentral-proquest-com.ezproxy.snhu.edu/lib/snhu-ebooks/reader.action?docID=876641&ppg=16
https://scholar.flatworldknowledge.com/books/27617/portolesedias_1.0-ch11_s01/read
GENERAL EVALUATION ORIENTATIONS
A common distinction in evaluation is between formative and summative orientations. Formative evaluation typically occurs during the developmental stage of a program and can be used to improve the program before it is formally launched. The formative approach can also be used to improve all stages of performance improvement, from assessment to implementation, including the evaluation itself.
Summative evaluation occurs after the implementation of a program or solution and usually requires that an appropriate amount of time has passed so that the object of evaluation has had the opportunity to exert its full impact on performance at various levels of the organization. It is worth noting that summative evaluation can also be used to improve programs and solutions. Once the results that have been accomplished are determined, the evaluator is well advised to identify the causal factors contributing to those results. These data should provide insights into the drivers of and barriers to the program’s success, thereby providing the basis for recommendations for improving performance.
Another distinction often made among evaluation orientations is that of process evaluation versus results evaluation. These terms roughly parallel the formative and summative approaches, respectively, although, depending on how they are interpreted and implemented, they can differ somewhat from their counterparts described above. For instance, the Canadian Evaluation Society uses the term process evaluation (also referred to as efficiency evaluation) to describe the monitoring of the implementation of programs. Ideally, there is a well-planned logic model with specified results and processes, and modifications are made if a discrepancy between the program design and the actual implementation is found. For example, one might want to determine if the program is being delivered as intended, if it is being delivered to the targeted clients or participants, or if it is being delivered with the intended effort or in the intended quantity.
Process evaluation is critical in helping evaluators address variations in program delivery. The greater the variation in program delivery, the greater the need for useful data gathered through a process evaluation approach. For instance, there may be differences in staff, clients, environments, or time, to name a few variables.
Stufflebeam and Webster (1980) have argued that objectives-based program evaluation is the most prevalent type used in the name of educational evaluation. Scriven (1972) proposed goal-free evaluation to urge evaluators to also examine the process and context of the program in order to find unintended outcomes.
Results evaluation, also referred to as effectiveness evaluation, is used to determine whether the immediate outcomes of a program meet the predetermined objectives specified by program planners; impact evaluation tends to refer to an evaluation that examines not only the immediate outcomes of a program but also its long-term outcomes and their interdependency. A results evaluation approach is important because it allows us to ensure and document that we are on track by gathering data that show quality accomplishments. It also helps us stay accountable and keeps our programs cost-effective by making program benefits and costs tangible.
Other evaluation approaches are associated with effectiveness evaluation. Cost-benefit evaluation is the translation of costs and benefits into monetary terms, which is used to compare the relative net benefits of doing one thing versus another. However, monetary terms are not always applicable, and they are seldom sufficient to appreciate costs and benefits. Cost-effectiveness evaluation considers alternative forms of program delivery according to both their costs and their effects with regard to producing some result or set of results. Of course, a stable measure of result should be defined. The least costly program is not necessarily the best one. And in the context of technology solutions, an additional orientation to evaluation is usability testing, which focuses on whether people are using the product and how well they are using it to meet required objectives.
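To make these cost comparisons concrete, here is a minimal sketch in Python using invented figures for two hypothetical training programs. The program names, costs, benefit estimates, and the errors-prevented measure are all assumptions made for illustration, not data from any real evaluation:

```python
# Hypothetical cost-benefit and cost-effectiveness comparison.
# All figures below are invented for illustration only.
programs = {
    "Program A": {"cost": 40_000, "monetary_benefit": 65_000, "errors_prevented": 130},
    "Program B": {"cost": 25_000, "monetary_benefit": 35_000, "errors_prevented": 100},
}

for name, p in programs.items():
    # Cost-benefit: translate both sides into money and compare net benefit.
    net_benefit = p["monetary_benefit"] - p["cost"]
    # Cost-effectiveness: cost per unit of a stable result measure
    # (here, errors prevented), useful when benefits resist monetization.
    cost_per_result = p["cost"] / p["errors_prevented"]
    print(f"{name}: net benefit ${net_benefit:,}; "
          f"${cost_per_result:,.2f} per error prevented")
```

In this made-up comparison, Program A yields the larger net benefit ($25,000 versus $10,000) while Program B costs less per error prevented ($250.00 versus roughly $307.69), illustrating why the least costly program is not necessarily the best one.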