As we noted in our analysis of the two questions that were the focus of our first article, several approaches can be taken in making sense of these means and variances. One approach, with regard to the means, is to take the mean (average) scores at face value. If a respondent indicates that her level of agreement with a specific statement is “Much” or “Very Much,” then we should accept this level of agreement and not attempt to manipulate the assessment in some manner. Therefore, as we discuss the results from these three questions, we will first consider the mean scores as accurate representations of the respondents’ self-perceptions regarding the challenges they face and the support they seek.
We can also make a legitimate claim that the mean scores should be interpreted in a comparative manner: it is not simply a matter of reporting the mean scores recorded for these questions. There are several reasons to be cautious in accepting the mean scores at face value. Specifically, as we noted in the first article, there are so-called “response set” factors that can legitimately be considered when seeking to make sense of the scores recorded for these questions.
Clearly, a strong judgmental factor (“social desirability”) attaches to these three questions, especially the first (“currently, how often do you feel”): it feels better to feel good about your work as a coach. In a long questionnaire such as this one, response fatigue is also likely to set in by the time the respondent reaches these questions, and fatigued respondents often simply click on one end of the response spectrum (usually the positive end). This acquiescence response set can be particularly prevalent when the survey requires no more than clicking the mouse on a specific response bullet.
Given these concerns, it is legitimate to provide a comparative analysis, looking at means not just in terms of their absolute value, but also in comparison with the mean scores of the other items listed within a specific question. We will approach the mean scores for each of the three questions from both the absolute and comparative perspectives.
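As a toy illustration of the comparative perspective (the item names and response data below are invented for this sketch and have no connection to the actual survey), one can rank the items within a question by their mean score rather than reading each mean as an absolute level of agreement:

```python
from statistics import mean

# Hypothetical per-item responses on a 1-5 Likert scale
# (1 = "Not at all", 5 = "Very much"); illustrative data only.
responses = {
    "Time pressure": [4, 5, 4, 4, 5],
    "Client resistance": [3, 3, 4, 2, 3],
    "Isolation": [2, 2, 3, 2, 1],
}

# Rank items within the question by mean score, so each item is read
# relative to its neighbors rather than against an absolute standard.
ranked = sorted(responses.items(), key=lambda kv: mean(kv[1]), reverse=True)
for item, scores in ranked:
    print(f"{item}: {mean(scores):.2f}")
```

Even if social desirability or fatigue inflates every item's mean by a similar amount, the ordering of items within the question remains informative.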
Making sense of the variance scores poses less of a problem. In many ways, variance is the most interesting descriptive statistic when considering the meaning of scores in a questionnaire such as this one, which was completed by a diverse set of respondents. The variance scores tell you the extent to which respondents tend to agree with one another: a low variance score indicates a high level of agreement, whereas a high variance score indicates a low level of agreement (and potential controversy). Some caution does have to be exercised when interpreting variance scores, for an item that pulls for social desirability or acquiescence tends to “squish” everyone at one end of the scale: there is no higher (or lower) point on the scale for respondents to choose, so the variance is artificially compressed.
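A brief sketch with invented data (again, no connection to the actual survey responses) makes the variance-as-agreement reading concrete: two items can have respectable means while telling very different stories about consensus.

```python
from statistics import mean, pvariance

# Hypothetical responses on a 1-5 Likert scale; illustrative data only.
# Item A: respondents largely agree with one another.
item_a = [4, 4, 5, 4, 4, 5, 4, 4]
# Item B: respondents are split, suggesting potential controversy.
item_b = [1, 5, 2, 5, 1, 4, 2, 5]

# Population variance of each item's responses: low variance signals
# consensus, high variance signals disagreement among respondents.
print(f"Item A: mean={mean(item_a):.2f}, variance={pvariance(item_a):.2f}")
print(f"Item B: mean={mean(item_b):.2f}, variance={pvariance(item_b):.2f}")
```

Item A's tight clustering yields a small variance; Item B's polarized responses yield a much larger one, even though a report of means alone might treat the two items similarly.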
Rey Carr
December 17, 2015 at 4:00 pm
The best part of this report on the results of these two surveys is the discussion of the concepts. Such discussion is valuable regardless of the reliability or validity of the results (or evidence).
Unfortunately, the methodology section is missing the most important aspect of methodology: how each of the surveys was distributed and what the rate of return was. If, as I suspect, this was an Internet-based survey, then the results have an exceptionally low chance of being either reliable or valid. That is, the likelihood that they reflect the “coaching industry” or “a typical coach” is incredibly small. Thus, conclusions based on the results are suspect.
But that’s the point. The discussion itself has its own reliability and validity independent of the survey. The points made are worthy of continuing discussion regardless of the surveys.