
An interview with Dr. Palmer Morrel-Samuels

Interview by: James Nelson


Companies can benefit greatly from well-crafted workplace surveys and questionnaires. Good surveys accurately identify issues and problems the company needs information about, while ensuring a high level of unbiased response from employees.

Dr. Palmer Morrel-Samuels is the president of Chelsea, Michigan-based Employee Motivation & Performance Assessment, Inc. (EMPA). A social psychologist, Dr. Morrel-Samuels has over 20 years’ experience designing and analyzing customized employee assessments for Fortune 500 corporations, including IBM, General Motors and Disney. In conjunction with the University of Michigan, EMPA also conducts the National Benchmark Study®, which sets annual normative benchmarks and measures the linkage between employee motivation and objective performance metrics.

In this interview, Morrel-Samuels and his associate Diana Palmiere Hunt discuss the critical guidelines that will help companies get unbiased, representative and useful information from workplace surveys.


What makes the difference between a good workplace survey and a poor one?

Dr. Palmer Morrel-Samuels:

A good workplace survey has three important qualities that bad surveys lack: reliability, validity, and business utility. The easiest quality to secure is reliability – which in the testing domain simply means replicability. If your survey gets different responses from the same person before and after a coffee break, then reliability is so low that the results will be worthless. Validity, in the general sense, means accuracy, and is much harder to obtain. If your survey is truly valid, then (for example) its questions on Teamwork really do measure teamwork, rather than something related but different, such as politeness, desire to maintain the appearance of amicability, or satisfaction with pay. However, the most crucial element in a workplace survey is business utility. A useful survey will help you identify strengths and weaknesses; plan interventions; predict the impact of those interventions on objectively measured business performance; and document your progress over time. A good workplace survey provides measures that are reliable and valid and useful, so that simultaneous improvements in working conditions and business performance can be planned, predicted, and measured.
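As an editorial aside, the “replicability” notion of reliability described above can be made concrete with a small calculation. The sketch below is a hypothetical illustration only (the data, the 1-10 scale, and the function are invented for this example; it is not a description of EMPA’s methodology): it simply correlates two administrations of the same item to the same respondents.

```python
# Illustrative only: a toy test-retest reliability check, not EMPA's method.
# "Reliability as replicability": the same people answer the same item twice;
# a high correlation between the two administrations suggests stable measurement.

def pearson(xs, ys):
    """Plain Pearson correlation, written out to keep the sketch dependency-free."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical 1-10 ratings from the same five employees, before and after a break.
first_pass  = [7, 4, 9, 6, 8]
second_pass = [8, 4, 9, 5, 8]

print(f"test-retest correlation: {pearson(first_pass, second_pass):.2f}")
# A value near 1.0 indicates replicable (reliable) responses; a value near 0
# would mean the item is too unstable for its results to be trusted.
```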

What are the most important features of a good survey?

Dr. Palmer Morrel-Samuels:

Good workplace surveys look different from bad ones. Good surveys are brief, focused, clear, neutral, and free from jargon. The bad ones are wordy, sprawling, confusing and clogged with jargon that activates stereotypes based on race, gender, culture, or seniority. If the respondent gets the feeling that there is a “right” answer, or that the survey is designed to confirm a preconceived interpretation, then the survey is simply a waste of time and money. Thankfully, dreadful workplace surveys are rarely actually used for anything, so the damage is relatively limited.

Is it really important that workplace surveys be rigorous and precise? Aren’t they a bit like taking a pulse, where the doctor needs to know only general information?

Dr. Palmer Morrel-Samuels:

With many workplace surveys these days calling themselves “pulse surveys,” it is especially important to remember that taking a pulse does have some value...but only when one wants to distinguish the living from the dead. A good workplace survey needs to be precise, detailed, and informative. Without precision and detail, the results can be disastrously misleading; and in the workplace, misleading surveys can – and often do – lead to the loss of profits, jobs, and family livelihoods. So don’t let anyone convince you that precision is unimportant.

What are the pros and cons of web-based vs. paper surveys?

Dr. Palmer Morrel-Samuels:

Web-based surveys are substantially more complicated than most people believe. In general they are faster, less expensive, more flexible, and less error-prone than their printed counterparts. However, the research indicates, with startling consistency, that the results from web-based surveys differ dramatically from the results of the same survey administered on paper.

In our own work providing survey services for more than 3 million employees in over 45 countries, we find that web-based workplace surveys tend to get responses that are unrealistically favourable unless several dozen design steps are implemented in the interface. This doesn’t even address the problems concerning self-selection and response bias: If a survey is simultaneously administered on both paper and the Web, the electronic survey has a very different response rate, and more importantly, its respondents will be very different from the respondents in the paper-generated database.

The best rule is to know about the research on formats, so that these discrepancies can be minimized. Even a brief reading of that research will convince you that using one format alone is preferable to using both simultaneously. However, in those cases where both formats must be used simultaneously for one survey, it is imperative to use an interface and a print format that have a proven ability to generate identical response data.
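One simple way to see whether two administration formats are producing comparable data is to compare the response distributions they generate. The sketch below is a hypothetical illustration of that kind of check, not the equivalence work the interviewee alludes to: it reports the mean difference and a standardized effect size between invented web and paper responses to the same item.

```python
# Illustrative only: a rough check on whether web and paper administrations of
# the same item yield similar results. Data and scale are hypothetical.
from statistics import mean, stdev

web_scores   = [8, 9, 7, 8, 9, 8, 7, 9, 8, 9]   # hypothetical 1-10 ratings, web
paper_scores = [6, 7, 5, 8, 6, 7, 6, 5, 7, 6]   # hypothetical 1-10 ratings, paper

diff = mean(web_scores) - mean(paper_scores)

# Pooled standard deviation and Cohen's d, a common standardized effect size.
n1, n2 = len(web_scores), len(paper_scores)
pooled_sd = (((n1 - 1) * stdev(web_scores) ** 2 +
              (n2 - 1) * stdev(paper_scores) ** 2) / (n1 + n2 - 2)) ** 0.5
cohens_d = diff / pooled_sd

print(f"mean difference (web - paper): {diff:.2f}")
print(f"Cohen's d: {cohens_d:.2f}")
# A large d would suggest the two formats are not producing comparable data
# and should not be pooled without correction.
```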

Do the quality and accuracy of employee responses to a survey improve with focus, for example when the survey is limited to specific plant locations or departments?

Dr. Palmer Morrel-Samuels:

Not necessarily; a bad survey given to fewer people still produces unusable results. Quality and accuracy in surveys don’t depend on the range or size of the intended response group, but on the way questions are written, organized, presented, and analyzed. Limiting a survey to a particular group simply means that you ask only about issues specific to that group. This enables you to address more specific issues, or ask more detailed questions; but it doesn’t guarantee that you will ask the right questions, or that you will ask them in a way that everyone in the group understands equivalently, that is truly unbiased, and that elicits the whole range of opinions on each topic.

Does it make sense for employees to be required to respond to a survey, or should participation always be voluntary?

Dr. Palmer Morrel-Samuels:

Voluntary participation is better, for very practical reasons that concern confidentiality and anonymity. Requiring employees to take a survey means that someone must monitor participation.

“Without voluntary participation, the company doesn’t get the constructive criticism it needs to improve working conditions and its bottom line.”

When participation is monitored, employees worry that the content of their answers can also be traced back to them personally. They worry that giving honest answers that are critical of their company can threaten their careers, and so they tend to provide unrealistically favourable responses and few useful comments. Without voluntary participation, the company doesn’t get the constructive criticism it needs to improve working conditions and its bottom line.

If designing a good survey is a science, are many HR departments properly staffed to handle survey design internally, that is, without outside expert assistance? How can HR directors evaluate whether their internal staff has the required design expertise to handle a specific survey, or whether outside design help is advisable?

Dr. Palmer Morrel-Samuels:

This depends on the background and training of the HR staff. Designing a survey requires more than familiarity with the topic issues, good sense, and a basic ability to write. Consider the purpose of the survey and how its results will be used. Typically, surveys are used in four ways: to identify strengths and weaknesses in company conditions and policies; to plan interventions based on the results; to predict the impact of those interventions on objective business performance metrics; and to provide precise documentation of your progress over time.

Employee surveys have legal ramifications, so it’s important that they be done rigorously. This requires solid training in research methodology, survey design, and statistics. A background in social psychology is a real asset, especially if it includes research experience examining the impact of social-psychological variables in large, complex datasets. With these, plus a hefty dose of common sense, it is possible for an HR staff to create a reasonable survey. However, most HR training programs don’t include much information on the subtle factors that influence survey responses. Advice from an expert in survey design will almost certainly improve response rates, reliability, validity, and business utility of your survey. It will also enable HR to target its resources to interventions and areas most likely to produce positive business results.

Surveys can also be valuable in communicating with external stakeholders such as the community, suppliers, customers and shareholders. Apart from the obvious differences, how do these surveys differ from employee surveys?

Dr. Palmer Morrel-Samuels:

Each group has specific interests that tend to produce different response biases. For example, customers typically want to avoid “buyer’s remorse”: they want to save face and to contend that their purchase was necessary and sensible. Suppliers have a corresponding bias to put their services and products in the best possible light. Employees, on the other hand, typically just want to fill out the survey in a way that will let them keep their job and get more pay. In a poorly designed survey, these response biases will limit the range of responses, which in turn limits the survey’s ability to identify meaningful differences among respondents, topics, or locations (what is commonly called “discriminant validity”).

A number of general response biases can threaten the validity of surveys designed for any group of respondents. These include, among others, the acquiescence bias (a tendency to agree); the positivity bias (a tendency to view the world in overly positive terms); the social desirability bias (a tendency to provide answers that reflect well on the respondent); and the increasingly common “rusher’s bias” (our term for a tendency to just go as fast as possible through the survey regardless of the consequences). Considerable research shows that these tendencies are surprisingly international, though, of course, each culture varies in its own unique way.

So, in addition to targeting your questions to the interests and concerns of each group of respondents, your survey should be designed to minimize response bias as much as possible. For example, you can minimize the acquiescence bias by interjecting negatively worded questions so that high scores do not always indicate a positive state of affairs.
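The reverse-scoring that negatively worded questions require is purely mechanical. The sketch below is a hypothetical illustration (the item names, the 1-5 scale, and the data are invented): it flips the scores of reverse-keyed items before averaging, so that a high score always indicates a favourable response.

```python
# Illustrative only: reverse-score negatively worded items so that a high score
# consistently indicates a favourable response. Items and scale are hypothetical.
SCALE_MIN, SCALE_MAX = 1, 5

# True = item is negatively worded (e.g. "My team-mates rarely share information").
REVERSE_KEYED = {"teamwork_1": False, "teamwork_2": True, "teamwork_3": False}

def rescore(item, raw):
    """Flip reverse-keyed items: on a 1-5 scale, 5 becomes 1, 4 becomes 2, and so on."""
    return SCALE_MIN + SCALE_MAX - raw if REVERSE_KEYED[item] else raw

respondent = {"teamwork_1": 4, "teamwork_2": 2, "teamwork_3": 5}
adjusted = {item: rescore(item, raw) for item, raw in respondent.items()}
teamwork_score = sum(adjusted.values()) / len(adjusted)

print(adjusted)               # {'teamwork_1': 4, 'teamwork_2': 4, 'teamwork_3': 5}
print(f"teamwork mean: {teamwork_score:.2f}")
```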

For an employee survey, what do you consider a good response rate? For a stakeholder survey?

Dr. Palmer Morrel-Samuels:

Most researchers now believe that the typical response rate for an optional, confidential, anonymous survey is about 20 per cent. We find that, with proper design, a workplace survey can achieve response rates of roughly 50 per cent or higher. That is a good minimum in the workplace, because sceptical leaders often view rates below 50 per cent with excessive caution, mistakenly imagining that any survey with a lower response rate is necessarily unrepresentative.

Note, however, that a high response rate does not guarantee validity, just as a low response rate does not necessarily preclude it. For example, national surveys in the US often predict voting behaviour very well with only modest response rates and samples of fewer than two thousand likely voters. Similarly, there have been a number of famously disastrous election forecasts with respectable response rates and literally millions of respondents. Mapping response rates to accuracy is more complex than most people imagine. The best approach is to evaluate your survey data’s validity comprehensively: Given the observed response rate, how well does your survey concur with – and predict – relevant measures such as staff retention, customer satisfaction, and profit per employee?
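At its simplest, the “concur with and predict” check described here is a correlation between survey scores and objective outcomes. The sketch below is a hypothetical illustration of that kind of linkage analysis (the per-location figures are invented), not the National Benchmark Study methodology.

```python
# Illustrative only: correlate unit-level survey scores with an objective
# business metric to gauge the survey's predictive value. Data are hypothetical.
def pearson(xs, ys):
    """Plain Pearson correlation, written out to keep the sketch dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-location figures: average survey score and 12-month staff retention.
survey_score = [6.1, 7.4, 5.2, 8.0, 6.8]       # mean motivation rating per location
retention    = [0.81, 0.88, 0.74, 0.93, 0.85]  # share of staff retained

print(f"survey/retention correlation: {pearson(survey_score, retention):.2f}")
# A strong positive correlation is one piece of evidence that the survey measures
# something with real business utility; weak or absent linkage should prompt a
# closer look at the survey's validity, whatever the response rate.
```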

How long should a well-designed survey take for the participant to answer?

Dr. Palmer Morrel-Samuels:

A survey designed for the workplace should take no longer than about 20 minutes to complete. Longer surveys reduce overall response rates; minimize participation from respondents who are especially busy, cynical, or negative; and tend to produce unrealistically positive responses, in part because impatient employees often default to innocuous positive responses so they can quickly wrap up and return to more pressing work responsibilities. These factors, in turn, reduce the validity and therefore the business utility of the survey results.

You have written that superior survey design has five key elements: content, format, language, measurement and administration. Would you cite one key critical success factor for each of these five elements?

Dr. Palmer Morrel-Samuels:

There is no single key critical success factor for any of these, so it’s sensible to take the following list as mere examples of important factors in each area.

Good content is the primary means for securing validity. Content should be based exclusively on directly observable behaviour, not perceived attitudes, character traits, or imagined attributes.

The survey’s format can have a surprisingly strong impact on response rate, completion time, and validity. It is important to place related questions together in a block that is set off by a blank line or border. Nonetheless, despite common practice, it is wise not to label the sections of your survey (Teamwork, Ethics, Pay & Benefits, etc.) because doing so induces respondents to use a set rating throughout that section – a shortcut that will limit validity.

Language should be clear, jargon-free (this includes company acronyms), and appropriate to the reading level of the intended respondent group.  Avoiding both condescension and confusion is not easy, but it can be done, and it is well worth the effort.

Measurement needs to be planned ahead: what statistical analyses will enable you to obtain the clearest, most usable results? The most essential measurement element is reliance on ratio scales (where respondents generate answers by picking a number from a limited set where there is a known zero, and each number is equidistant from its neighbours). Nominal scales (where the respondent provides answers by selecting one word from a set of words on a continuum, such as “Very satisfied” “Somewhat satisfied” etc.) are to be avoided at all costs because they preclude comprehensive and precise statistical analysis.

Administration needs to be carefully controlled so that anonymity and confidentiality can be preserved throughout the entire process. Moreover, especially in the workplace, it is important to administer the survey in a manner that demonstrates its guarantee of confidentiality and anonymity. For example, in settings where we have to administer a survey simultaneously by paper and web, we intentionally make small piles of extra paper surveys available on unmonitored desks throughout the workplace, so that employees have irrefutable evidence that no one is placing secret identifiers on the survey forms. If you do administer your survey by web and paper simultaneously, it is imperative that you use a web interface that has a proven ability to match results from paper surveys. When this step is overlooked, results from the two administration methods diverge dramatically, and that divergence skews the findings profoundly.
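Returning to the measurement point above: the practical advantage of numeric, ratio-style response scales is that ordinary arithmetic applies to them directly. The sketch below is a hypothetical illustration (the question, scale, and data are invented): numeric responses support means, spreads, and comparisons that word-label responses only permit after debatable recoding.

```python
# Illustrative only: numeric responses on a scale with a true zero can be averaged,
# compared, and analysed directly; word labels such as "Somewhat satisfied" cannot.
from statistics import mean, stdev

# Hypothetical responses to "In a typical week, how many of your team meetings
# start on time?" -- a count with a known zero, answered by picking a number 0-10.
location_a = [8, 6, 7, 9, 5, 8]
location_b = [4, 5, 3, 6, 4, 5]

print(f"Location A: mean {mean(location_a):.1f}, sd {stdev(location_a):.1f}")
print(f"Location B: mean {mean(location_b):.1f}, sd {stdev(location_b):.1f}")
print(f"difference in means: {mean(location_a) - mean(location_b):.1f}")

# With word labels the analyst must first map each label to an arbitrary number,
# and the spacing of those numbers (is "Very" twice "Somewhat"?) is guesswork.
```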

You obviously don’t want to swamp people with surveys. Over a 12-month period, what is the limit you’d place on the number of surveys for a given survey group?

Dr. Palmer Morrel-Samuels:

You don’t want to swamp people with pointless surveys. But the answer depends on at least two factors. One factor concerns content. If the content is sufficiently critical, respondents will tolerate surveys that are quite detailed and frequent. For example, surveys on safety conditions in factories and day-care facilities are certainly taken seriously, despite their repetitiveness.

The second factor concerns remediation. When people see their feedback and ideas being used to improve working conditions, processes, and outcomes, they will typically maintain strong motivation to participate, regardless of the survey’s frequency. When employees see no changes from a workplace survey, even once a year is too often.

May 2007