
Monday, April 07, 2008

We've been asked by a few prospective clients about the value of benchmarks, so I thought I would share our Benchmark Tutorial here with you.



1. Client Additive Benchmarks: This approach is based on the compilation of survey results from the clients of a given consulting firm. These benchmarks may contain responses from hundreds or even thousands of employees across hundreds of companies. The sheer number of responses can be impressive but also potentially misleading. Here’s why: Unless each client uses a core set of survey questions furnished by the vendor, the database likely will vary with respect to the number of companies contributing to each benchmark question. So, whereas your responses to one question may be compared with results from 13 companies and 34,000 respondents, responses to another question may be compared with results from only 3 companies and 800 respondents. If your results differ from benchmark findings, it is difficult to determine whether the discrepancy is attributable to a difference between your company and the external environment or to a difference between one benchmark and another. Additionally, there needs to be standardization and consistency in the order and method in which survey questions are presented, and in the scales that are used. Slight differences in these factors also can affect the integrity of the benchmark and the accuracy of the comparison (i.e., is the difference between your score and the benchmark a real difference, or does it reflect how the questions were asked?). Client additive benchmarks have an additional limitation: your results are being compared with those of a single consulting firm’s client list. This list excludes the many companies that conduct surveys on their own or that have hired a different firm.

2. Workforce Studies: This approach to survey benchmarks evolved as survey vendors recognized the quality drawbacks and high expense of developing and maintaining client additive benchmarks. Workforce studies are similar to public opinion polls conducted by newspapers (e.g., the USA Today Poll) and to research conducted by market research firms. A set of questions is administered to a sample of the workforce, typically segmented by industry and/or geography. Unlike the respondents in client additive benchmarks, these respondents are individuals who work in a variety of organizations. These benchmarks do not reflect the responses of groups of employees who work for a single organization, but rather those of individual employees scattered across many companies. In other words, a workforce study database could have 1,000 respondents from 1,000 different companies. Although claims of statistical representation can be made, there is less control over what types of companies go into the database. Moreover, there is the question of who is actually completing the survey: respondents usually are offered an incentive (such as a monetary reward, a gift certificate, or entry into a raffle) to encourage participation in multiple surveys. Furthermore, comparing the results of your survey, which is sponsored by an employer, with those of a workforce study, which is sponsored by a polling or market research firm, also may be questionable. Employees typically want to help their employers by sharing important feedback; respondents to workforce surveys are not participating under the same assumptions, and thus comparisons may be tenuous.

3. Consortium Benchmarks: The third type of benchmark is the consortium benchmark. In our opinion, this is the most valuable, accurate, and worthwhile external benchmark for comparing your survey results. These benchmarks are based on results from a collection of companies that share survey results and best practices in survey research. Membership is by application, and member companies must commit to the terms of membership, which include asking a minimum set of questions, attending meetings, and contributing best practices. The Mayflower Group (www.mayflowergroup.org) is one of the oldest survey consortia; it comprises large, multinational organizations with established survey programs. Other groups also have been formed, such as the Information Technology Survey Group (ITSG; www.itsg.org) and MIDAS, a similar group for financial services firms.

One Final Note: We hope this primer has helped you understand the different types of survey benchmarks available today. Of course, there’s one survey benchmark we didn’t discuss – and that is your own survey data. Looking at a specific group within your organization relative to other groups, as well as in relation to your previous survey results, often can provide the most useful and relevant comparisons.