John C. Norcross is Professor of Psychology and Distinguished University Fellow at the University of Scranton, a clinical psychologist in part-time practice, and editor of the Journal of Clinical Psychology: In Session.
He also has a string of honours and awards, posts, editorships, etc. that would turn many academics a none too delicate shade of 'envious green'.
Yet Norcross has also headed up what are, in my opinion, two of the most cack-handed surveys I've ever come across - not least because they seem to have been designed to manipulate the results in a none too subtle manner.
Before going any further it is only fair to point out that both articles include a section of "Cautions and Caveats" (pages 519-520 (2006) and 12 (2010)) in which the authors themselves admit to several reasons why the results should be regarded with a great deal of suspicion (see Cautions and Caveats).
*** The Short Version ***
Nature of criticisms:
First and foremost, at no point in either article is there any indication that the pollsters provided their "experts" with clear definitions of what was meant by any of the treatment/test labels. Both polls therefore lack any standardization of what respondents are actually being asked to assess.
Second, Norcross et al claim to be seeking support for an EBP - evidence-based practice - approach to psychotherapy, yet their own polls/articles are opinion-based rather than evidence-based.
Indeed, it is rather ironic that these "researchers" should choose to attack the credibility of the FoNLP at all, because in doing so they are attacking a field of study and practice which, when its processes are carried out according to genuine NLP-related principles, is evidence-based in a far more practical sense than most academically-oriented, research-based psychological practices (see Conclusions, below).
These articles share several basic flaws, any one of which would be enough to call the poll results into question. Taken together they invalidate the polls, the results and even the underlying rationale in general, not just for the question about "NLP":
In practice, neither poll has been designed in a way likely to elicit accurate/reliable results. Nor is it possible, on the evidence supplied in the two articles, to derive any sensible idea as to what the results actually mean.
Just how "Neuro-Linguistic Programming" got dragged in - other than via unfocused searches for anything that someone, somewhere has criticised - is not made clear, and its presence is all the more inexplicable given that the various authors involved in the two polls seem to know little or nothing about the FoNLP. Both questionnaires appear to treat Neuro-Linguistic Programming (NLP) as a single instrument. Indeed, the authors of both articles seem to have shared the erroneous belief that there is something called "NLP" which is a form of psychotherapy - hence, for instance, the reference to finding out: "Which psychotherapies are effective?" (Norcross, Koocher and Garofalo, 2006, page 515).
*** End of Short Version ***
*** 'Director's Cut' ***
'Away From' is a Poor Way to Set Goals
Most experienced NLPers will know about "meta programs" - ways in which we consciously or unconsciously filter incoming information. One such meta program is known as "Towards/Away from", meaning that some people tend to take notice of information which will help them to achieve some goal, whilst others are more likely to look for ways to avoid what they don't want. For example, one office worker may be willing to do unpaid overtime because they are looking for ways to move on to a more rewarding position, whilst another shows the same willingness, but only because they see it as a way to avoid getting fired.
It is interesting, then, to note how the authors of these two articles have framed their goal:
The ... evidence-based practice (EBP) movement in mental health ... has provoked enormous controversy within organized psychology, and with the exception of the general conviction that psychological practice should rely on empirical research, little consensus currently exists among the various stakeholders on either the decision rules to determine effectiveness or the treatments designated as "evidence-based" ...
And in the second article:
The focus of EBP falls squarely on what works ...
Is there any link, I wonder, between the negativity embedded in the two polls and the fact that in 2006 only 29.3% of the potential participants completed both rounds of the survey? A figure which dropped to 22.8% in the second poll.
Two for the Price of One
I have combined the evaluations of the two articles by Norcross et al (both describing polls conducted using the Delphi method) because, from the NLP-related point of view, they share the same flaws and are equally lacking in credibility.
Starting as They Meant to Go On - 2006
The credibility of the 2006 article disappears in a puff of metaphorical smoke in the second sentence of the Abstract. Thus we are told that:
A panel of 101 experts participated in a 2-stage survey, reporting familiarity with 59 treatments and 30 testing techniques ...
But when we check the reported results, in Table 2 (pages 518-519), and Table 3 (page 520) we find that the figures aren't entirely accurate.
In October 2004, we mailed the five page questionnaire to 290 doctoral-level mental health professionals
Of the 290 recipients only 130 people (about 45%) returned the questionnaire, and 37 of the returns were unusable (26 were from retirees, the other 11 were rejected for other, unspecified reasons). So those taking part in Round 1 represent only about 35% of the original mailout - which in itself seems to call into serious question the usefulness of the poll in the eyes of those of the pollsters' peers who were initially approached.
The same instrument was then redistributed to the 101 panelists in February 2005 ... 85 of the original 101 (84%) panelists responded to the second round.
Hang on a moment. This means that only 85 of the alleged experts "participated in [the whole of the] 2-stage survey". And whilst that may be 84% of the people who participated in the first round, it's only 29.3% of those who received the initial questionnaire (though we don't know how many retirees were included in that initial mail out).
Moreover, whilst there were 59 treatments and 30 tests in the first questionnaire, 4 treatments and 5 tests were unfamiliar to over 75% of the Round 1 respondents and were therefore eliminated in Round 2. On this basis the statement quoted above would have been a lot more accurate if it had read:
A panel of 85 experts participated in a two-stage survey, each of whom reported some (unspecified) degree of familiarity with 55 treatments, and 25 tests.
And as far as the voting on the mythical "Neuro-Linguistic Programming (NLP) for treatment of mental/behavioral disorders" was concerned, just 74 people thought they were able to rate the non-existent treatment in Round 1, down to only 65 people in Round 2.
Starting as They Meant to Go On - 2010
And again, in 2010, we find exactly the same kind of inaccuracies. In the third sentence of the Abstract we are told that:
A panel of 75 experts participated in a two-stage survey, reporting familiarity with 65 treatments ...
And again, when we check the text and the reported results in Table 2 (pages 177-178), we find that the figures show a somewhat different picture.
In January 2007, we mailed a 5-page questionnaire and a 1 page personal information survey to the 250 potential participants.
Of the 250 recipients only 113 people returned the questionnaire, but 38 of these returned blank questionnaires, leaving just 75 Round 1 participants.
We then redistributed the same instrument to the 75 panelists in April 2007 ... 57 of the original 75 (76%) panelists responded to the second round.
Surely this means that only 57 of the alleged experts "participated in [the entire] 2-stage survey"? And whilst that may be 76% of the people who participated in the first round, it's only 22.8% of those who received the original mailing (though again we don't know how many retirees were included in that initial mail out).
Furthermore, whilst there were 65 treatments in the first questionnaire, 6 of them were unfamiliar to over 75% of the Round 1 respondents and were therefore eliminated in Round 2. On this basis the statement quoted above would have been a lot more accurate if it had read:
A panel of 57 experts participated in a two-stage survey, each of whom reported some (unspecified) degree of familiarity with 59 treatments.
And as far as the voting on the mythical "Neuro-Linguistic Programming for drug and alcohol dependence" was concerned, it seems that only 32 of the alleged "experts" thought they were able to rate the non-existent treatment in Round 1, down to only 27 "experts" in Round 2.
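For anyone who wants to check the arithmetic in these two sections, here is a minimal Python sketch covering both polls. The raw counts are simply those quoted above from the two articles; the percentages are my own derived figures, not numbers reported by the authors.

```python
# Back-of-envelope check of the participation figures quoted above.
# Raw counts are taken from the two articles as quoted in the text;
# the percentages are derived arithmetic, not the authors' own figures.

polls = {
    "2006 (Norcross, Koocher & Garofalo)": dict(
        mailed=290, round_1=101, round_2=85, rated_nlp_r1=74, rated_nlp_r2=65),
    "2010 (Norcross, Koocher, Fala & Wexler)": dict(
        mailed=250, round_1=75, round_2=57, rated_nlp_r1=32, rated_nlp_r2=27),
}

for name, n in polls.items():
    print(name)
    print(f"  Round 1 panel as % of mailout:  {100 * n['round_1'] / n['mailed']:.1f}%")
    print(f"  Round 2 panel as % of Round 1:  {100 * n['round_2'] / n['round_1']:.1f}%")
    print(f"  Round 2 panel as % of mailout:  {100 * n['round_2'] / n['mailed']:.1f}%")
    print(f"  Rated the 'NLP' item:           {n['rated_nlp_r1']} in Round 1, "
          f"{n['rated_nlp_r2']} in Round 2 "
          f"({100 * n['rated_nlp_r2'] / n['mailed']:.1f}% of the mailout)")
```

Running it reproduces the 29.3% and 22.8% figures mentioned above, and also shows that by the final round only around 22% (2006) and 11% (2010) of the professionals originally approached were both still participating and willing to rate the "NLP" item.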
Were these Articles Written by "Experts"?
Despite the heavy emphasis on the "expert" nature of the participants in the two polls, a rather critical error appears in the first sentence of the main body of the first article:
Which psychotherapies are effective?
The article allegedly covers "psychological treatments and tests" (page 515). But "NLP" is a label for a specific form of modelling, and the wider field of NLP (FoNLP) is primarily concerned with communication techniques - which can be used in a wide variety of contexts. It is certainly not a form of psychotherapy, nor a "treatment [for] mental/behavioral disorders" (see Table 2 on page 518, 2006), nor yet a treatment for drug and alcohol dependence (Table 2, page 177, 2010).
As the entry on the National Center for Biotechnology Information, U.S. National Library of Medicine website correctly states:
Neurolinguistic [sic] Programming
There is also evidence in both articles suggesting that the authors viewed Neuro-Linguistic Programming (NLP) as a single instrument. In the second article, for example, interventions such as "Past-life therapy", "Scared Straight" and "Synanon-style boot camps" each appear twice - once linked to substance abuse, once linked to alcohol dependence (Norcross et al, 2010, pages 19-20) - presumably to take account of two separate regimes. But "Neuro-Linguistic Programming" is listed only once, "for drug and alcohol dependence", presumably on the assumption that there is a single procedure for dealing with both situations.
Of course it is true that some NLP-related communication techniques can be used "in a psychotherapeutic context", as Bandler and Grinder put it in the Foreword to Neuro-Linguistic Programming, Volume 1 (1980, no page number), but that doesn't make NLP and/or the FoNLP a kind of therapy.
Priming the Pump, Stacking the Deck
But should we really be surprised by this information? Probably not. After all, the authors themselves indicate, in both articles, that they compiled the lists of "treatments and tests" from other sources:
We searched broadly and collected nominations for discredited mental health treatments and tests [sic] via literature searches, electronic mailing lists requests, and peer consultations
The 65 potentially discredited treatments were compiled through an extensive literature review that included electronic database searches, listserv requests, and peer consultations. We searched electronic databases (e.g., PubMed, PsychINFO, Cochrane Collaboration, Google Scholar) for published literature using the keywords "discredited," "quack," and "harmful" placed with the words "treatment" and "addiction". We examined journal articles and books discussing discredited, potentially harmful, and "crazy" therapies (e.g., Eisner, 2000; Lilienfeld et al, 2003; Singer & Lalich, 1996).
Which may seem to be commendably thorough, until we consider the implications of such a search.
In the first of the two articles the authors wrote:
A surprising finding to us was the large percentage of experts unfamiliar with the listed practices. For example, in the first round, 56% were not sufficiently familiar with Thought Field Therapy to render a rating and 37% were not familiar with Erhard Seminar Training. These numbers may help us to understand one important and perhaps unappreciated reason for the relative apathy of many mental health experts toward discredited practices: many experts simply do not know much about them.
Where, then, would the "experts" - being members of the same community as the authors of the two articles - look for information (and we know that at least some of them did look for information - see below), other than in the same places that the authors had looked?
Yet just as the authors seem to have been unaware of the potential effect of using such pejorative wording in their survey, they also seem to have been totally unaware that they were potentially "priming the pump". Far from being impressed by the negative results of the poll, one can only wonder that any of the items got off relatively lightly.
And on a similar note, weren't the authors somewhat ill-advised when they commented (Norcross et al, 2006, page 521) on the 37% of their colleagues who didn't know about "est" (Erhard Seminar Training), given that the organization ceased operating over 20 years ago? (It was bought out by the organisers of the Landmark Forum, now known as the Landmark Education Corporation (LEC).)
This leads us directly into yet another concern that is relevant to both surveys as far as any alleged connection with the authentic field of NLP is concerned.
What Makes an "Expert" Expert?
Norcross et al define the term "expert" as follows:
Expert in our study was defined by status as journal editor or association [i.e. APA] fellow, which predictably produced a disproportionate percentage of academics and cognitive-behavioral proponents.
Well that's clear enough, isn't it?
To be specific, the authors of the 2006 report initially contacted, via random selection (page 516):
The second "expert" panel (2010) potentially consisted of (page 8):
(In neither case are we given any indication as to how this information breaks down in Round 2.)
And here's the big question: On what basis can we safely assume that clinical psychologists, counseling psychologists, school psychologists or fellows of any other APA division or editorial board members are, per se, "experts" in any of the methodologies listed as "treatments" in either survey?
It is true that respondents could indicate that they were not familiar with any given methodology, but this is meaningless without a clear definition of either the methodologies or the word "familiar". And even if such definitions had been supplied, how many of the respondents are likely to be genuinely "familiar", to any significant degree, with so many non-conventional treatments and/or tests?
In the 2006 article we are told that, according to the given answers, only 26.7% of the Round 1 respondents, and 24.1% of the Round 2 respondents, were "not familiar" with the use of "Neuro-Linguistic Programming (NLP) for treatment of mental/behavioral disorders". But what about the other 75% (approx)? They may have thought that they were genuinely familiar with this "treatment", but what evidence, other than notoriously unreliable self-reporting, do we have on this score?
In the 2010 article the results are even more significant - in Round 1, the 71% of respondents who acknowledged that they knew nothing about "Neuro-Linguistic Programming for drug and alcohol dependence" were the largest such group, with the 67% who were not familiar with "Metronidazole for alcohol dependence" coming second. Likewise, in Round 2, though the figures had dropped to 59% and 51% respectively, the "NLP" and "Metronidazole" groups were still largest and second largest as far as "not familiar" was concerned.
To repeat an earlier observation: there is no definitive/standard "NLP treatment for mental and/or behavioral disorders" or for "drug and alcohol dependence". Nor, as far as I can see from the two articles, do the authors themselves know what they mean by these labels.
In practice, just to dot the "i's" and cross the "t's", we would do well to consider the contents of the FAQ #28 Project as they relate to academic psychologists. At the latest count (December 2011 - 18 entries) 100% of those critics whose work has been reviewed have demonstrated that even though they may present themselves as being knowledgeable about "NLP", it is entirely possible, or even probable, that they know nothing of any consequence about the subject. As we saw in the previous section, Norcross et al, who apparently imagine that "NLP" is a form of psychotherapy, etc., and who presumably (?) think they know what "NLP" is about, are themselves examples of this kind of seriously flawed "knowledge" in action.
Compliance Rules OK - 1
Way back in 1974, Professor Elizabeth Loftus co-authored an article which described an experiment designed to measure how far people's memory of a traffic accident might be influenced by the specific wording of questions about the event.
In that experiment subjects were shown short film clips of a two-car accident and asked a number of questions about what they had seen, including: "About how fast were the cars going when they hit each other?" The crucial verb varied from group to group: "contacted", "hit", "bumped", "collided" or "smashed".
When the estimates were averaged out, the subjects who had been asked what speed the cars were doing when they "contacted" each other gave a figure of 31.8 mph. The other four groups gave increasingly faster estimates (in the order shown above), with "smashed" producing an average estimate of 40.8 mph. In other words, the more "sensational" the description, the higher the estimated speed; the more neutral the description, the lower the perceived speed.
In a second, related experiment Loftus and Palmer showed that the variability of the answers is likely to be due to the question altering the subjects' actual memory of the event they had witnessed.
So what is the likely effect of the constant emphasis, in these two polls, on the highly judgmental word "discredited", rather than some neutral word or phrase? It appears in all of the rating definitions, and is the focus of the first two paragraphs of the instructions to the participants (according to the 2006 article).
And that is only part of the story.
Compliance Rules OK - 2
In the "Second Round" section of both articles we are told that:
the second round questionnaire presented the pooled responses from the first round, which is the standard procedure in Delphi polls.
This element of the procedure was supposedly included because:
In this manner, the panel of experts will exchange opinions and arrive at a greater consensus. Your individual responses will not be identified.
Maybe I'm missing something here, but how does any "exchange [of] opinions" take place without the parties involved being able to engage in any kind of discussion? In what sense, precisely, does being shown an anonymous set of statistics qualify as any kind of meaningful "exchange [of] opinions"?
To be honest, this seems more like a second form of subtly enforced compliance.
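Neither article spells out exactly what the "pooled responses" consisted of, but in a typical Delphi design the feedback is nothing more than an anonymous summary statistic for each item - usually the group mean, sometimes with the spread. The sketch below uses invented ratings purely to illustrate the mechanics; the item names, ratings and 1-to-5 scale are my own placeholders, not data from either poll:

```python
from statistics import mean, stdev

# Hypothetical Round 1 ratings on a 1-to-5 scale (higher = "more discredited").
# These numbers and item names are invented purely to illustrate what
# Delphi-style feedback amounts to - they are not data from either poll.
round_1_ratings = {
    "Treatment A": [4, 5, 3, 4, 5],
    "Treatment B": [2, 3, 2, 4, 3],
    "Treatment C": [5, 4, 5, 5, 4],
}

# This is essentially all a Round 2 panelist gets to see: an anonymous mean
# (and perhaps a standard deviation) per item - no arguments, no evidence,
# no identifiable opinions to engage with.
print("Feedback sheet for Round 2:")
for item, ratings in round_1_ratings.items():
    print(f"  {item}: mean = {mean(ratings):.2f}, sd = {stdev(ratings):.2f}")
```

Whether a printout of that kind really deserves to be called an "exchange [of] opinions" is, I would suggest, precisely the point at issue.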
Given the heavy emphasis on the word "discredited", as described above, it is surely relevant to note what actually happened to the allegedly "expert" opinions in the two rounds (2006):
In 2010 the results showed:
This is arguably evidence of the extent of the influence of using a strongly negative word in all of the rating descriptions and in the participant instructions. That is to say, respondents might be negatively influenced in Round 1 and then, on receiving the feedback from the first round, might unconsciously be further persuaded to comply with the implication that the treatments are less than fully credible (especially since not one treatment received a rating lower than 1.97), and thus make their own ratings more negative in Round 2.
But the argument has another dimension, which again reduces the poll's results to meaningless data.
It must further be noted that in neither case are we given information about Round 2 "dropouts" which would allow us to determine what difference their absence made to the results.
The "Homework" Factor
Another factor which differentiated the Round 1 sample from the Round 2 sample, at least in the 2006 poll, was a change in the participants' knowledge. No, let me be more accurate - a change in some participants' knowledge.
Thus in the 2006 article we are told that:
... a large number of expert panelists informed us that completing the initial questionnaire prompted them to secure and read critical reviews of the treatments and tests on the questionnaire.
So how many of the "experts" did this? And for how many of the treatments? Which sources did they use? And were those sources always accurate?
In the case of "NLP" we can pretty much guarantee that anyone searching academic sources would find the kind of material gathered by the FAQ #28 Project. In other words, poorly-researched, denigratory misinformation.
Is this really the way to go about running a survey of such probity that the authors can say, of the second poll:
As a field, we have made progress in differentiating science from pseudoscience, credible from discredible [sic] in addictions treatment.
Cautions and Caveats
If this assessment of the two polls seems unreasonably critical, it should be noted that the authors themselves listed a number of shortcomings they detected in their own studies.
This first tranche of shortcomings is made up entirely of direct quotes from the 2006 article (page 519):
And these are from the 2010 article (page 178):
Picking Your Friends
An interesting demonstration of the authors' lack of accurate knowledge (in both articles) appears when they present their readers with these statements:
Recently, several authors have attempted to identify pseudoscientific, invalidated, or "quack" psychotherapies ...
Most assuredly, select investigators have attempted to identify pseudoscientific or ineffective treatments applied to a variety of mental disorders and addictions ...
Both versions are followed by a list that includes:
Carroll, 2003; Della Sala, 1999; Eisner, 2000; Lilienfeld, Lynn, and Lohr, 2003; Singer and Lalich, 1996.
It is interesting to note, then, that:
In both articles the authors worry that these "pioneering efforts" have not "systematically relied on expert consensus to determine their contents", and "have provided little differentiation between credible and noncredible treatments" (2006, page 515; 2010, page 175).
Nevertheless, Eisner (2000), Lilienfeld et al, (2003) and Singer & Lalich (1996) are listed as being amongst the 'journal articles and books discussing discredited, potentially harmful, and "crazy" therapies' (page 6) the authors consulted when looking for candidates for their second (2010) poll.
It is almost incomprehensible, under the circumstances, that the authors of the first article actually closed by giving themselves a big pat on the back:
Our Delphi study systematically compiled clinical expertise on credibility, based perhaps on the best available research. ... The consensus emerging on this Delphi poll on potentially discredited treatments and tests leaves us feeling encouraged.
Likewise, at the close of the second article they wrote:
We believe that this study, as did its parallel on mental health treatments (Norcross et al, 2006) offers a cogent, positive first step in consensually identifying the "dark side" or "soft underbelly" of modern addiction treatments and in providing a more granular analysis of the continuum of discredited procedures.
As a famous tennis star was so fond of saying: "You cannot be serious!"
Carroll, R.T. (2003) neuro-linguistic programming (NLP). In The Skeptic's Dictionary [sic], R.T. Carroll (ed.). John Wiley & Sons, Inc. Pages 252-260.
Della Sala, S. and Beyerstein, B.L. (2007). Introduction: The myth of 10% and other Tall Tales about the mind and the brain. In Tall Tales about the Mind and Brain: Separating fact from fiction, Sergio Della Sala (ed.), Oxford University Press, Oxford. Pages xx-xxii.
Loftus, E.F. and Palmer, J.C. (1974) Reconstruction of automobile destruction. In Journal of Verbal Learning and Verbal Behavior, 13. Pages 585-589.
Norcross, J.C., Koocher, G.P. and Garofalo, A. (2006) Discredited Psychological Treatments and Tests: A Delphi Poll. In Professional Psychology: Research and Practice, Vol. 37, No. 5. Pages 515-522.
Norcross, J.C., Koocher, G.P., Fala, N.C. and Wexler, H.W. (2010) What Does Not Work? Expert Consensus on Discredited Treatments in the Addictions. In Journal of Addiction Medicine, Vol. 4, No. 3. Pages 174-180.
Singer, M.T. and Lalich, J. (1996) Crazy Therapies. Jossey-Bass Publishers, San Francisco. Pages 168-176.
Wilson, N. (2004) Commercializing Mental Health Issues: Entertaining, Advertising and Psychological Advice. In Lilienfeld, S.O., Lynn, S.J. and Lohr, J.M. (eds), Science and Pseudoscience in Clinical Psychology. Pages 446 and 455.
Andy Bradbury can be contacted at: firstname.lastname@example.org