If at First You Don't Succeed...

 

Introduction


The Peer Review Process

Dr Gerald Koocher, Norcross' colleague on both surveys, after seeing a draft copy of my evaluation of their work, has complained that:

"You appear to expect your readers to blindly substitute your personal review criteria (emanating from your own admittedly biased perspective as an NLP advocate) for the opinions of 6+ anonymous experts selected by two different journal editors to review the work blind prior to accepting the papers for publication.  As written, unsophisticated readers of your blog [sic] will remain ignorant of the rigorous peer review process to which our work was subjected."
(Personal e-mail communication)

Note the apparent implication that "NLP advocates" are "admittedly biased" - though this is not an admission I am aware of having made to Koocher or anyone else - whilst psychologists who characterise whatever it is they think of as "NLP" as a "pseudoscientific, unvalidated or 'quack' psychotherap[y] [sic]" (Norcross et al, 2006.  Page 515) are (presumably) to be seen as straight-shooting, clear-headed, well-informed and entirely bias-free.
Just like the Easter Bunny!

Now, I could contest that last suggestion by telling you about my understanding of the peer review system.  That is, the process whereby articles submitted for publication are (usually anonymously) passed to a panel of "experts" in the subject concerned, who advise the editor of the relevant learned journal what they think of each article.  It is then up to the editor to decide whether an article gets published, or not.  But then I came across what seemed, to me, to be a better option.  The statement below was written by Richard Horton, the editor-in-chief of the internationally renowned British medical publication The Lancet (a peer-reviewed journal):

"The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability — not the validity — of a new finding.  Editors and scientists alike insist on the pivotal importance of peer review.  We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller.  But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong."
(Horton, R. (2000), Genetically modified food: consternation, confusion, and crack-up.  In the Medical Journal of Australia, 172.  Pages 148-149.)

So there you are.  Now you have a genuinely expert opinion of the peer review process and hopefully will not be misled by my alleged bias.
 

John C. Norcross is Professor of Psychology and Distinguished University Fellow at the University of Scranton, a clinical psychologist in part-time practice, and editor of the Journal of Clinical Psychology: In Session.
He has authored over 300 publications and has co-written or edited 20 books, principally in the areas of psychotherapy, clinical training, and self-change.
(From Professor Norcross' home page: http://academic.scranton.edu/faculty/norcross/
accessed on June 1, 2010)

He also has a string of honours and awards, posts, editorships, etc. that would turn many academics a none too delicate shade of 'envious green'.

Yet Norcross has also headed up what are, in my opinion, two of the most cack-handed surveys I've ever come across - not least because they seem to be designed to manipulate the results in a none too subtle manner.
(This review includes what I believe to be concrete evidence, from the two articles, to support this contention.)

Before going any further it is only fair to point out that both articles include a section of "Cautions and Caveats" (pages 519-520 (2006) and 178 (2010)) in which the authors themselves admit to several reasons why the results should be regarded with a great deal of suspicion (see Cautions and Caveats, below).

*** The Short Version ***

Critic(s):
(academic role at the time of publication)
Norcross, J.C., PhD:   Distinguished University Fellow at the University of Scranton.
Koocher, G.P., PhD:   Dean, Graduate School for Health Studies, Simmons College, Boston.
with, in alphabetical order:
Fala, N.C., B.S. (2010):   Postgraduate student?
Garofalo, A. (2006):   Doctoral candidate in clinical psychology at Nova Southeastern University.
Wexler, H.K., PhD (2010):   National Development and Research Institutes Inc.

Critical Material:
Discredited Psychological Treatments and Tests: A Delphi Poll, Norcross, J.C., Koocher, G.P. and Garofalo, A. (2006).  Professional Psychology: Research and Practice, Vol. 37, No. 5.  Pages 515-522.
What Does Not Work?  Expert Consensus on Discredited Treatments in the Addictions, Norcross, J.C., Koocher, G.P., Fala, N.C. and Wexler, H.K. (2010).  Journal of Addiction Medicine, Vol. 4, No. 3.  Pages 174-180.

Nature of criticisms:
Based on two sets of poll results: "Neuro-Linguistic Programming (NLP) [is possibly discredited] for treatment of mental/behavioral disorder" (2006) and "certainly discredited ... [as a treatment] for drug and alcohol dependence" (2010).

Original/derivative:
The first poll allegedly follows the established pattern for Delphi polls, and methodologically speaking, the second poll seems to be little more than a re-run of the first, but with a different focus and different interviewees.

Flaw(s):

First and foremost, at no point in either article is there any indication that the pollsters provided their "experts" with clear definitions of what was meant by any of the treatment/test labels.  Thus both polls totally lack any standardization of what respondents were actually being asked to assess.
The profound weakness of this approach can be judged from the fact that, in regard to Neuro-Linguistic Programming, the "experts" were asked to rate two treatments which don't actually exist.

Second, Norcross et al claim to be seeking support for an EBP - evidence-based practice - approach to psychotherapy, yet their own polls/articles are opinion-based rather than evidence-based.

Indeed, it is rather ironic that these "researchers" should choose to attack the credibility of the FoNLP (the wider field of NLP) at all, because in so doing they are attacking a field of study and practice which, when the processes are carried out according to the genuine NLP-related principles, is evidence-based in a far more practical sense than most academically-oriented psychological research-based practices (see Conclusions, below).

These articles share several basic flaws, any one of which would be enough to call the poll results into question.  Taken together they invalidate the polls, the results and even the underlying rationale in general, not just for the question about "NLP":

  1. Neither NLP nor the FoNLP are forms of psychotherapy, which is why there is no standard procedure in the FoNLP relating to the "treatment of mental/behavioral disorder" (note, by the way, the lack of any indication as to what "mental/behavioral disorder(s)" is/are being referred to).
    Likewise there is no standard procedure for using any part of the FoNLP for the treatment of "drug and alcohol dependence".
     
    Likewise in numerous cases a variety of loosely related "treatments" were bundled together as though respondents were being asked to rate a specific procedure: "Treatments for mental disorder resulting from Satanic ritual abuse" (2006), "Hypnosis for alcohol dependence" (2010), etc.
  2. No effort was made to discover whether any of the participants were genuinely and sufficiently familiar with the "treatments" and "tests" they rated to offer accurate evaluations.
    In practice, any "expert" who rated the credibility of either of the allegedly NLP-related "treatments" automatically DIScredited themselves as an "expert" on the subject insofar as they were claiming to have knowledge of something that doesn't exist.
     
  3. Both polls are based on the highly dubious proposition that being an "expert" in one area of psychology automatically makes one an expert in all matters psychological, unless a respondent says otherwise.  No evidence is provided to substantiate this implied claim.
     
  4. In practice even the basic concept behind the two polls appears to rest on a logical error, an "appeal to authority".  That is to say, the underlying assumption is that because the poll respondents are allegedly "experts", if their averaged opinion holds that a particular test or treatment is "discredited", then it is.
     
  5. There is at least prima facie evidence that the results of both polls were affected by a form of manipulation likely to evoke psychologically-induced compliance in those taking part (see Priming the Pump, Stacking the Deck below).
     
  6. The argument offered in support of the negative emphasis in the two polls implies that it is more useful to know which therapeutic processes should be avoided than it is to know what process(es) should be used.

In practice, neither poll has been designed in a way likely to elicit accurate/reliable results.  Nor is it possible, on the evidence supplied in the two articles, to derive any sensible idea as to what the results actually mean.
On the contrary, both polls seem to be seriously lacking in the kind of safeguards needed to ensure that the results were based on something more than sheer guesswork and/or personal prejudice.  "Expert opinions may become widely held either because they are correct or because most experts simply share the same heuristic biases" (Norcross et al (2010, page 179)).

Conclusions:
Both polls seem to be intended to "push" what the first article refers to as "The burgeoning evidence-based practice (EBP) movement in mental health" (Norcross, Koocher and Garofalo, 2006.  Page 515).

Just how "Neuro-Linguistic Programming" got dragged in, other than the unfocused searches for anything that someone, somewhere has criticised, is not made clear, which makes its presence even more inexplicable since the various authors involved in the two polls seem to know little or nothing about the FoNLP.  On the contrary, both questionnaires appear to refer to Neuro-Linguistic Programming (NLP) as a single instrument.  Indeed, the authors of both articles seem to have shared the erroneous belief that there is something called "NLP" which is a form of psychotherapy hence, for instance, the reference to finding out: "Which psychotherapies are effective?" (Norcross, Koocher and Garofalo, 2006.  Page 515).

*** End of Short Version ***

*** 'Director's Cut' ***

'Away From' is a Poor Way to Set Goals

Most experienced NLPers will know about "meta programs" - ways in which we consciously or unconsciously filter incoming information.  One such meta program is known as "Towards/Away from", meaning that some people tend to take notice of information which will help them to achieve some goal, whilst others are more likely to look for ways to avoid what they don't want.  For example, one office worker may be willing to do unpaid overtime because they are looking for ways to move on to a more rewarding position, whilst another shows the same willingness, but only because they see it as a way to avoid getting fired.
In the NLP-related goal-setting process it is claimed that people are more likely to be successful if they set positively-framed goals rather than negatively-framed goals, because setting a goal based on what you don't want is like trying to drive a car in which the only view of the outside world is through the rear-view mirror.

It is interesting, then, to note how the authors of these two articles have framed their goal:

The ... evidence-based practice (EBP) movement in mental health ... has provoked enormous controversy within organized psychology, and with the exception of the general conviction that psychological practice should rely on empirical research, little consensus currently exists among the various stakeholders on either the decision rules to determine effectiveness or the treatments designated as "evidence-based" ...
    We believe that it might prove to be as useful and probably easier to establish what does not work - discredited psychological treatments and tests.  Far less research and clinical attention have been devoted to establishing a consensus on ineffective procedures as compared to effective procedures.
(Norcross et al, 2006.  Page 515.  Italics added for emphasis)

And in the second article:

The focus of EBP falls squarely on what works ...
    But EBP largely ignores what does not work.  Far less research and clinical attention has focused on establishing a consensus on ineffective methods as compared to effective methods ...
    We believe it will prove useful and perhaps easier to establish a professional consensus on discredited treatments for addictions.  Doing so may counter the widespread tendency for professionals to practice (or repeat) what they have been taught by their mentors or authorities.
(Norcross et al, 2010.  Page 174.  Italics added for emphasis)

Is there any link, I wonder, between the negativity embedded in the two polls and the fact that in 2006 only 29.3% of the potential participants completed both rounds of the survey - a figure which dropped to 22.8% in the second poll?

Two for the Price of One

I have combined the evaluations of the two articles by Norcross et al (describing polls conducted using the Delphi method) because, from the NLP-related point of view, they both have the same flaws and are both equally lacking in credibility.
Indeed, both polls are so badly designed (as even their authors admit) that there is really not much else of any substance to comment on except the errors.

Starting as They Meant to Go On - 2006

The credibility of the 2006 article disappears in a puff of metaphorical smoke in the second sentence of the Abstract.  Thus we are told that:

A panel of 101 experts participated in a 2-stage survey, reporting familiarity with 59 treatments and 30 testing techniques ...
(Norcross et al, 2006.  Page 515)

But when we check the reported results, in Table 2 (pages 518-519), and Table 3 (page 520) we find that the figures aren't entirely accurate.
In the section of the article headed "Expert Panel" we learn that:

In October 2004, we mailed the five page questionnaire to 290 doctoral-level mental health professionals
(Norcross et al, 2006.  Page 516)

Of the 290 recipients only 138 people (about 48%) returned the questionnaire, and 37 of those returns were unusable (26 came from retirees; the other 11 were rejected for other, unspecified reasons), leaving the 101 Round 1 panelists.  So those taking part in Round 1 represent only 35% of the original mailout - which in itself suggests that many of the pollsters' peers who were initially approached saw little value in the exercise.
Then came "Round 2" (of which more in just a moment):

The same instrument was then redistributed to the 101 panelists in February 2005 ... 85 of the original 101 (84%) panelists responded to the second round.
(Norcross et al, 2006.  Page 516)

Hang on a moment.  This means that only 85 of the alleged experts "participated in [the whole of the] 2-stage survey".  And whilst that may be 84% of the people who participated in the first round, it's only 29.3% of those who received the initial questionnaire (though we don't know how many retirees were included in that initial mailout).

Moreover, whilst there were 59 treatments and 30 tests in the first questionnaire, 4 treatments and 5 tests were unfamiliar to over 75% of the Round 1 respondents and were therefore eliminated in Round 2.  On this basis the statement quoted above would have been a lot more accurate if it had read:

A panel of 85 experts participated in a two-stage survey, each of whom reported some (unspecified) degree of familiarity with 55 treatments, and 25 tests.

And as far as the voting on the mythical "Neuro-Linguistic Programming (NLP) for treatment of mental/behavioral disorders" was concerned, just 74 people thought they were able to rate the non-existent treatment in Round 1, down to only 65 people in Round 2.
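
To make the arithmetic explicit, here is a minimal Python sketch (my own illustration, not anything from the article) which recomputes the 2006 participation figures; every number in it is taken from the article as cited above:

    # Participation arithmetic for the 2006 poll (figures as cited above).
    mailed = 290        # questionnaires mailed in October 2004
    unusable = 37       # 26 from retirees, 11 rejected for unspecified reasons
    round1_panel = 101  # usable Round 1 responses ("panelists")
    returned = round1_panel + unusable   # 138 questionnaires actually returned
    round2_panel = 85   # panelists who also completed Round 2

    print(f"Returned:           {returned / mailed:.1%}")            # 47.6% of the mailout
    print(f"Round 1 panel:      {round1_panel / mailed:.1%}")        # 34.8% - the 'only 35%'
    print(f"Round 2 vs Round 1: {round2_panel / round1_panel:.1%}")  # 84.2% - the quoted 84%
    print(f"Completed both:     {round2_panel / mailed:.1%}")        # 29.3% of the mailout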

Starting as They Meant to Go On - 2010

And again, in 2010, we find exactly the same kind of inaccuracies.  In the third sentence of the Abstract we are told that:

A panel of 75 experts participated in a two-stage survey, reporting familiarity with 65 treatments ...
(Norcross et al, 2010.  Page 174)

And again, when we check the text, and the reported results (in Table 2 (pages 177-178)), we find that the figures show a somewhat different picture.
In the section of the article headed "Expert Panel" we learn that:

In January 2007, we mailed a 5-page questionnaire and a 1 page personal information survey to the 250 potential participants.
(Norcross et al, 2010.  Page 176)

Of the 250 recipients only 113 people returned the questionnaire, but 38 of these were returned blank, leaving just 75 Round 1 participants.
Then came "Round 2":

We then redistributed the same instrument to the 75 panelists in April 2007 ... 57 of the original 75 (76%) panelists responded to the second round.
(Norcross et al, 2010.  Page 176)

Surely this means that only 57 of the alleged experts "participated in [the entire] 2-stage survey"?  And whilst that may be 76% of the people who participated in the first round, it's only 22.8% of those who received the original mailing (though again we don't know how many retirees were included in that initial mailout).

Furthermore, whilst there were 65 treatments in the first questionnaire, 6 of them were unfamiliar to over 75% of the Round 1 respondents and were therefore eliminated in Round 2.  On this basis the statement quoted above would have been a lot more accurate if it had read:

A panel of 57 experts participated in a two-stage survey, each of whom reported some (unspecified) degree of familiarity with 59 treatments.

And as far as the voting on the mythical "Neuro-Linguistic Programming for drug and alcohol dependence" was concerned, it seems that only 32 of the alleged "experts" thought they were able to rate the non-existent treatment in Round 1, down to only 27 "experts" in Round 2.
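
The corresponding sketch for the 2010 poll, again a purely illustrative recomputation using only the numbers cited above:

    # Participation arithmetic for the 2010 poll (figures as cited above).
    mailed = 250                  # questionnaires mailed in January 2007
    returned = 113                # 38 of these came back blank
    round1_panel = returned - 38  # 75 Round 1 panelists
    round2_panel = 57             # panelists who also completed Round 2

    print(f"Round 1 panel:      {round1_panel / mailed:.1%}")        # 30.0% of the mailout
    print(f"Round 2 vs Round 1: {round2_panel / round1_panel:.1%}")  # 76.0% - the quoted 76%
    print(f"Completed both:     {round2_panel / mailed:.1%}")        # 22.8% of the mailout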

Were these Articles Written by "Experts"?

Despite the heavy emphasis on the "expert" nature of the participants in the two polls, a rather critical error appears in the first sentence of the main body of the first article:

Which psychotherapies are effective?
(Norcross et al, 2006.  Page 515)

The article allegedly covers "psychological treatments and tests" (page 515).  But "NLP" is a label for a specific form of modelling, and the wider field of NLP (FoNLP) is primarily concerned with communication techniques - which can be used in a wide variety of contexts.  It is certainly not a form of psychotherapy, nor a "treatment [for] mental/behavioral disorders" (see Table 2 on page 518, 2006), nor yet a treatment for drug and alcohol dependence (Table 2, page 177, 2010).

As the entry on the National Center for Biotechnology Information, U.S. National Library of Medicine website correctly states:

Neurolinguistic [sic] Programming
A set of models of how communication impacts and is impacted by subjective experience. Techniques are generated from these models by sequencing of various aspects of the models in order to change someone's internal representations. Neurolinguistic programming is concerned with the patterns or programming created by the interactions among the brain, language, and the body, that produce both effective and ineffective behavior.
(Accessed at: http://www.ncbi.nlm.nih.gov/sites/entrez?db=mesh&term="Neurolinguistic Programming" on July 8, 2011)

There is also evidence in both articles to suggest that the authors viewed Neuro-Linguistic Programming (NLP) as a single instrument.  In the second article, for example, interventions such as "Past-life therapy", "Scared Straight" and "Synanon-style boot camps" each appear twice - once linked to substance abuse, once linked to alcohol dependence (Norcross et al, 2010.  Pages 177-178).  Presumably this is to take account of two separate regimes.  But "Neuro-Linguistic Programming" is listed only once, "for drug and alcohol dependence", presumably on the assumption that there is a single procedure for dealing with both situations.

Of course it is true that some NLP-related communication techniques can be used "in a psychotherapeutic context", as Bandler and Grinder put it in the Foreword to Neuro-Linguistic Programming, Volume 1 (1980, no page number), but that doesn't make NLP and/or the FoNLP a kind of therapy.

Priming the Pump, Stacking the Deck

But should we really be surprised by this information?  Probably not.  After all, the authors themselves indicate, in both articles, that they compiled the lists of "treatments and tests" from other sources:

We searched broadly and collected nominations for discredited mental health treatments and tests [sic] via literature searches, electronic mailing lists requests, and peer consultations
(Norcross et al, 2006.  Page 515)

The 65 potentially discredited treatments were compiled through an extensive literature review that included electronic database searches, listserv requests, and peer consultations.  We searched electronic databases (e.g., PubMed, PsychINFO, Cochrane Collaboration, Google Scholar) for published literature using the keywords "discredited," "quack," and "harmful" placed with the words "treatment" and "addiction".  We examined journal articles and books discussing discredited, potentially harmful, and "crazy" therapies (e.g., Eisner, 2000; Lilienfeld et al, 2003; Singer & Lalich, 1996).
(Norcross et al, 2010.  Page 175)

Which may seem to be commendably thorough, until we consider the implications of such a search.

In the first of the two articles the authors wrote:

A surprising finding to us was the large percentage of experts unfamiliar with the listed practices.  For example, in the first round, 56% were not sufficiently familiar with Thought Field Therapy to render a rating and 37% were not familiar with Erhard Seminar Training.  These numbers may help us to understand one important and perhaps unappreciated reason for the relative apathy of many mental health experts toward discredited practices: many experts simply do not know much about them.
(Norcross et al, 2006.  Page 521)

Where, then, would the "experts" - being members of the same community as the authors of the two articles - look for information (and we know that at least some of them did look for information - see below), other than in the same places that the authors had looked?
And what results would they be likely to get?  Presumably, the same results that the authors got.

Yet just as the authors seem to have been unaware of the potential effect of using such pejorative wording in their survey, they also seem to have been totally unaware that they were potentially "priming the pump".  Far from being impressed by the negative results of the polls, one can only wonder that some items got off relatively lightly.

And on a similar note, weren't the authors somewhat ill-advised when they commented (Norcross et al, 2006.  Page 521) on the 37% of their colleagues who didn't know about "est" (Erhard Seminar Training), given that the organization ceased operating over 20 years ago?  (It was bought out by the organisers of the Landmark Forum, now known as the Landmark Education Corporation (LEC).)

This leads us directly into yet another concern that is relevant to both surveys as far as any alleged connection with the authentic field of NLP is concerned.

What Makes an "Expert" Expert?

Norcross et al define the term "expert" as follows:

Expert in our study was defined by status as journal editor or association [i.e. APA] fellow, which predictably produced a disproportionate percentage of academics and cognitive-behavioral proponents.
(Norcross et al, 2006.  Page 520)

Well that's clear enough, isn't it?
No, actually it isn't.
On the contrary it appears to depend on a major, and highly dubious, assumption.

To be specific, the authors of the 2006 report initially contacted, via random selection (page 516):

  • 100 fellows of the APA's Division 12 (Clinical Psychology)
  • 45 fellows of the APA's Division 17 (Counseling Psychology)
  • 23 fellows of the APA's Division 16 (School Psychology)
  • 46 fellows of the APA "with a major field in clinical psychology"
  • 57 "current and former editors of scholarly journals in mental health
  • 14 "members of the APA Presidential Task Force on Evidence-Based Practice"
  • 5 chairs or editors of the Diagnostic and Statistical Manual of Mental Disorders

The second "expert" panel (2010) potentially consisted of (page 8):

  • 62 fellows of the American Society of Addiction Medicine
  • 63 fellows of the APA's Division of Addictions
  • 25 editorial board members of Addiction
  • 25 editorial board members of Psychology of Addictive Behaviors
  • 25 editorial board members of Journal of Studies on Alcohol
  • 25 editorial board members of Journal of Substance Abuse Treatment
  • 25 professional members of the advisory councils of NIAAA (National Institute on Alcohol Abuse and Alcoholism) and NIDA (National Institute on Drug Abuse)

(In neither case are we given any indication as to how this information breaks down in Round 2.)

And here's the big question: On what basis can we safely assume that clinical psychologists, counseling psychologists, school psychologists or fellows of any other APA division or editorial board members are, per se, "experts" in any of the methodologies listed as "treatments" in either survey?

It is true that respondents could indicate that they were not familiar with any given methodology, but this is meaningless without a clear definition of either the methodologies or the word "familiar".  And even if such definitions had been supplied, how many of the respondents are likely to be genuinely "familiar", to any significant degree, with so many non-conventional treatments and/or tests?

In the 2006 article we are told that, according to the given answers, only 26.7% of the Round 1 respondents, and 24.1% of the Round 2 respondents, were "not familiar" with the use of "Neuro-Linguistic Programming (NLP) for treatment of mental/behavioral disorders".  But what about the other 75% (approx)?  They may have thought that they were genuinely familiar with this "treatment", but what evidence, other than notoriously unreliable self-reporting, do we have on this score?

In the 2010 article the results are even more significant - in Round 1, the 71% of respondents who acknowledged that they knew nothing about "Neuro-Linguistic Programming for drug and alcohol dependence" were the largest such group, with the 67% who were not familiar with "Metronidazole for alcohol dependence" coming second.  Likewise, in Round 2, though the figures had dropped to 59% and 51% respectively, the "NLP" and "Metronidazole" groups were still largest and second largest as far as "not familiar" was concerned.

To repeat an earlier observation: there is no definitive/standard "NLP treatment for mental and/or behavioral disorders" or for "drug and alcohol dependence".  Nor, as far as I can see from the two articles, do the authors themselves know what they mean by these labels.
So how likely is it that any of the respondents would be "familiar" with these undefined, non-existent treatments?  And how can we make any sense of the ratings of "possibly discredited" (2006, page 518) and "probably discredited" (2010, page 177) evoked by the two polls?

In practice, just to dot the "i's" and cross the "t's", we would do well to consider the contents of the FAQ #28 Project as they relate to academic psychologists.  At the latest count (December 2011 - 18 entries), 100% of those critics whose work has been reviewed have demonstrated that, even though they may present themselves as being knowledgeable about "NLP", it is entirely possible, or even probable, that they know nothing of any consequence about the subject.  As we saw in the previous section, Norcross et al, who apparently imagine that "NLP" is a form of psychotherapy, and who presumably (?) think they know what "NLP" is about, are themselves examples of this kind of seriously flawed "knowledge" in action.

Compliance Rules OK - 1

Way back in 1974, Professor Elizabeth Loftus co-authored an article which described an experiment designed to measure how far people's memory of a traffic accident might be influenced by the specific wording of questions about the event.

In that experiment subjects were shown short film clips of a two-car accident and asked a number of questions about what they had seen, including: "About how fast were the cars going when they hit each other?"
What the subjects didn't know was that there were actually five different versions of the questionnaire, identified by the wording of this one question, which in its various versions described the meeting of the two cars as "contacted", "hit", "bumped", "collided" and "smashed".

When averaged out, the estimate given by the subjects who had been asked how fast the cars were going when they "contacted" each other was 31.8 mph.  The other four groups gave increasingly faster estimates (in the order shown above), with "smashed" producing an average estimate of 40.8 mph.  In other words, the more "sensational" the description, the higher the estimated speed; the more neutral the description, the slower the perceived speed.

In a second, related experiment Loftus and Palmer showed that the variability of the answers is likely to be due to the question altering the subjects' actual memory of the event they had witnessed.
(This result might be counted as evidence in support of the NLP-related claim that our actual neurology is affected by the language we hear.)

So what is the likely effect of the constant emphasis, in these two polls, on the highly judgmental word "discredited", rather than some neutral word or phrase?  It appears in all of the rating definitions, and is the focus of the first two paragraphs of the instructions to the participants (according to the 2006 article).
No matter whether the use of such language was intentional or just careless, the likelihood of it biasing the poll results seems irrefutable.

And that is only part of the story.

Compliance Rules OK - 2

In the "Second Round" section of both articles we are told that:

the second round questionnaire presented the pooled responses from the first round, which is the standard procedure in Delphi polls.
(For example, Norcross et al, 2006.  Page 517)

This element of the procedure was supposedly included because:

In this manner, the panel of experts will exchange opinions and arrive at a greater consensus.  Your individual responses will not be identified.
(For example, Norcross et al, 2006.  Page 516)
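
In concrete terms, the "pooled responses" fed back in a Delphi second round amount to nothing more than per-item summary statistics.  The minimal Python sketch below illustrates the general idea; the item names and ratings are hypothetical, invented purely for illustration, and are not data from either poll:

    from statistics import mean, stdev

    # Hypothetical Round 1 ratings (one number per rater, per item).
    round1_ratings = {
        "Treatment A for alcohol dependence": [4.2, 3.8, 4.5, 3.9, 4.1],
        "Treatment B for drug dependence": [2.9, 3.4, 3.1, 2.7, 3.3],
    }

    # Round 2 simply shows every panelist these pooled figures alongside
    # each item to be re-rated - no discussion, no attributable opinions.
    for item, ratings in round1_ratings.items():
        print(f"{item}: mean {mean(ratings):.2f}, SD {stdev(ratings):.2f}")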

Maybe I'm missing something here, but how does any "exchange [of] opinions" take place without the parties involved being able to engage in any kind of discussion?  In what sense, precisely, does being shown an anonymous set of statistics qualify as any kind of meaningful "exchange [of] opinions"?

To be honest, this seems more like a second form of subtly enforced compliance.

Given the heavy emphasis on the word "discredited", as described above, it is surely relevant to note what actually happened to the allegedly "expert" opinions between the two rounds.  In 2010, for example, the results showed:

  • "... the standard deviations decreased on 57 of the 59 ratings.  The mean difference was -.213."  (page 176)
     
  • "The mean ratings evidenced similar change: ratings on 56 of the 59 items increased (in the direction of more discredited).  Mean difference from round 1 to round 2 was .197."  (page 176)
     
  • Taking the average of the reported mean scores for the treatments (based on 2010, Table 2, pages 177-178), 56 results shifted by an average of 0.21 in a negative (upward - more discredited) direction, whilst 3 scores shifted by an average of 0.05 in a positive (downward - less discredited) direction.
     
  • The spread of negative shifts was between 0.42 (Chlordiazepoxide for alcohol dependence) and 0.01 (Group process psychotherapy for alcohol dependence).  The spread of positive shifts was between 0.08 (Providing transitory substitute gratifications for treatment of alcohol dependence) and 0.01 (Twelve-step facilitation for alcohol dependence).
     
  • 95% of the scores moved in a negative direction between Round 1 and Round 2, and by an average that was approximately three times the size of the average positive shift.
     

This is arguably evidence of the extent of the influence of using a strongly negative word in all of the rating descriptions and in the participant instructions.  That is to say, respondents might be negatively influenced in Round 1 and, on receiving the feedback from the first round, might unconsciously be further persuaded to comply with the implication that the treatments are less than fully credible (especially since not one treatment received a rating lower than 1.97), and thus make their own ratings more negative in Round 2.
Moreover, if Loftus and Palmer's deductions were correct, respondents might literally "remember" giving somewhat higher (more negative) ratings first time around, and imagine that their later ratings are similar to those earlier ratings.
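
The drift described above is straightforward to quantify wherever paired item means are available.  The sketch below shows one way of doing so; the four (item, Round 1 mean, Round 2 mean) rows are hypothetical stand-ins, not figures from either article's Table 2:

    # Quantifying Round 1 -> Round 2 drift from paired item means.
    paired_means = [
        ("Item A", 3.10, 3.42),  # shifted towards "more discredited"
        ("Item B", 2.80, 3.01),
        ("Item C", 4.05, 4.06),
        ("Item D", 3.60, 3.55),  # a rarer downward (less discredited) shift
    ]

    shifts = [r2 - r1 for _, r1, r2 in paired_means]
    up = [s for s in shifts if s > 0]
    down = [s for s in shifts if s < 0]

    print(f"{len(up)} of {len(shifts)} items moved towards 'more discredited'")
    print(f"Mean upward shift:   {sum(up) / len(up):.2f}")
    print(f"Mean downward shift: {sum(down) / len(down):.2f}")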

But the argument has another dimension, which again reduces the poll's results to meaningless data.
To be precise, in both polls the Round 1 and Round 2 results are treated as if they had come from the same sample.  But in both cases they obviously did not.
In 2006, Sample 1 (101 usable responses) includes Sample 2 (85 responses), but Sample 2 is 16 people (roughly 16%) smaller than Sample 1.
In 2010, Sample 1 consisted of 75 responses, whilst Round 2 consisted of only 57 responses, a drop of 24%.

It must further be noted that in neither case are we given information about the Round 2 "dropouts" which would allow us to determine what difference their absence made to the results.

The "Homework" Factor"

Another factor which differentiated Sample 1 from Sample 2, at least in the 2006 poll, was a change in participants' knowledge.  No, let me be more accurate - a change in some participants' knowledge.

Thus in the 2006 article we are told that:

... a large number of expert panelists informed us that completing the initial questionnaire prompted them to secure and read critical reviews of the treatments and tests on the questionnaire.
(Norcross et al, 2006.  Page 521.  Italics added for emphasis)

So how many of the "experts" did this?  And for how many of the treatments?  Which sources did they use?  And were their sources always accurate?

In the case of "NLP" we can pretty much guarantee that anyone searching academic sources would find the kind of material gathered by the FAQ #28 Project.  In other words, poorly-researched, denigratory misinformation.

Is this really the way to go about running a survey of such probity that the authors can say, of the second poll:

As a field, we have made progress in differentiating science from pseudoscience, credible from discredible [sic] in addictions treatment.
(Norcross et al, 2010.  Page 179)

Cautions and Caveats

If this assessment of the two polls seems unreasonably critical it should be noted that the authors themselves listed a number of weaknesses they detected in their own studies.

This first tranche of shortcomings is made up entirely of direct quotes from the 2006 article (page 519):

  • Firstly, many of our experts lacked familiarity with the many "fringe" therapies or "unusual" assessment techniques.
     
  • Second, some might challenge the relatively modest size of our panel of experts.
     
  • Third, our reliance on traditionally trained and academically vetted experts with a disproportionate number of cognitive-behavioral therapists might be too narrow.
     
  • Fourth, the robust rating differences due to theoretical orientation ... indicate that the epistemological commitments of the expert panel materially influence the results and thus the conclusions of what is discredited.
     
  • Fifth, several panel members noted that a single item assessing the credibility of an omnibus assessment for a given purpose was insufficient.

The 2010 article offers its own set of caveats (page 178).

Picking Your Friends

An interesting demonstration of the authors' lack of accurate knowledge (in both articles) appears when they present their readers with these statements:

Recently, several authors have attempted to identify pseudoscientific, invalidated, or "quack" psychotherapies ...
(Norcross et al, 2006.  Page 515)

Most assuredly, select investigators have attempted to identify pseudoscientific or ineffective treatments applied to a variety of mental disorders and addictions ...
(Norcross et al, 2010.  Page 175)

Both versions are followed by a list that includes:

Carroll, 2003; Della Sala, 1999; Eisner, 2000; Lilienfeld, Lynn, and Lohr, 2003; Singer and Lalich, 1996.

It is interesting to note, then, that:

  • Carroll (Robert Todd) is a retired teacher of philosophy, not a psychologist.  And his article on "NLP" in his website/book The Skeptic's Dictionary is notable for a string of errors from the first paragraph onwards.
    See HERE.
     
  • Della Sala's criticisms of "NLP", in both the cited work and in the Introduction to his 2007 book Tall Tales About the Mind and Brain, seem to be based on Sharpley's deeply flawed "review" of 1984 (via the Druckman and Swets report of 1988), and Heap's similarly inaccurate review of 1988, 1989, etc.
    See HERE.
     
  • Eisner is a bit of a two-edged sword in this context.  On the one hand he certainly claims to be weeding out the rubbish; on the other he basically states that all psychotherapy is pseudoscience.  That presumably isn't much comfort to Professor Norcross who is, amongst his various roles, a psychotherapist!
    On that basis, by Eisner's reckoning, the lead author of these two articles attacking "quackery" and "pseudoscience" is himself a pseudoscientist!
    (Eisner has little of any consequence to say about "NLP" since he imagines it to be a form of psychotherapy.)
     
  • Lilienfeld, Lynn & Lohr have very little to say about "NLP" and give little sign that they know much about it.  In the publication cited here the main commentary on "NLP" (a single paragraph on one page, plus a reprise of two or three sentences from that paragraph later on) actually appears in a chapter written by Associate Professor Nona Wilson.  Ms Wilson seems to have had little or no idea what "NLP" is about.
    See HERE.
     
  • Singer and Lalich's book Crazy Therapies contains some interesting material - mainly directed at deviant forms of psychotherapy.  As far as "NLP" is concerned, however, their seven-page treatment is little more than a rant, remarkable for its lack of authoritative quotes (just one), its preference for non-authoritative material, a pair of thoroughly underwhelming "case studies", and its assumption that the Druckman and Swets report was accurate.
    See HERE.
     

In both articles the authors worry that these "pioneering efforts" have not "systematically relied on expert consensus to determine their contents" and "have provided little differentiation between credible and noncredible treatments" (2006, page 515; 2010, page 175).

Nevertheless, Eisner (2000), Lilienfeld et al (2003) and Singer & Lalich (1996) are listed as being amongst the 'journal articles and books discussing discredited, potentially harmful, and "crazy" therapies' (2010, page 175) that the authors consulted when looking for candidates for their second (2010) poll.
From a FoNLP standpoint it would have been interesting if all of the authors concerned had bothered to do the research needed to reach an accurate understanding of the subject before making their pronouncements.

It is almost incomprehensible, under the circumstances, that the authors of the first article actually closed by giving themselves a big pat on the back:

Our Delphi study systematically compiled clinical expertise on credibility, based perhaps on the best available research. ... The consensus emerging on this Delphi poll on potentially discredited treatments and tests leaves us feeling encouraged.
(Norcross et al, 2006.  Page 522)

Likewise, at the close of the second article they wrote:

We believe that this study, as did its parallel on mental health treatments (Norcross et al, 2006) offers a cogent, positive first step in consensually identifying the "dark side" or "soft underbelly" of modern addiction treatments and in providing a more granular analysis of the continuum of discredited procedures.
(Norcross et al, 2010.  Page 179)

As a famous tennis star was so fond of saying: "You cannot be serious!"
But of course they were.

References

Carroll, R.T. (2003).  neuro-linguistic programming (NLP).  In The Skeptic's Dictionary [sic], R.T. Carroll (ed.).  John Wiley & Sons, Inc.  Pages 252-260.

Della Sala, S. and Beyerstein, B.L. (2007).  Introduction: The myth of 10% and other Tall Tales about the mind and the brain.  In Tall Tales about the Mind and Brain: Separating fact from fiction, Sergio Della Sala (ed.), Oxford University Press, Oxford.  Pages xx-xxii.

Loftus, E.F. and Palmer, J.C. (1974).  Reconstruction of automobile destruction: An example of the interaction between language and memory.  In Journal of Verbal Learning and Verbal Behavior, 13.  Pages 585-589.

Norcross, J.C., Koocher, G.P. and Garofalo, A. (2006).  Discredited Psychological Treatments and Tests: A Delphi Poll.  In Professional Psychology: Research and Practice, Vol. 37, No. 5.  Pages 515-522.

Norcross, J.C., Koocher, G.P., Fala, N.C. and Wexler, H.K. (2010).  What Does Not Work?  Expert Consensus on Discredited Treatments in the Addictions.  In Journal of Addiction Medicine, Vol. 4, No. 3.  Pages 174-180.

Singer, M.T. and Lalich, J. (1996).  Crazy Therapies.  Jossey-Bass Publishers, San Francisco.  Pages 168-176.

Wilson, N. (2003).  Commercializing Mental Health Issues: Entertainment, Advertising, and Psychological Advice.  In Lilienfeld, S.O., Lynn, S.J. and Lohr, J.M. (eds.), Science and Pseudoscience in Clinical Psychology.  Pages 446 and 455.

 

Andy Bradbury can be contacted at: bradburyac@hotmail.com