ATLAS - a year on


The main results of the ATLAS trial were published in the Annals of Internal Medicine on 3rd November 2015.[1] Although an entry was quickly placed in the ASO bibliography, ASO did not publish a news story on this important research at the time. A year later, it is time to make amends.

For an excellent summary of the research, refer to that published on the website of the Society of Teachers of the Alexander Technique [UK]. Below we report only the most important points, along with some additional aspects that may be of particular interest to the ASO audience.

Profile of the research

This was a large, well-designed, well-conducted, randomised, controlled trial involving 517 people with ‘chronic neck pain’ (defined as pain that had lasted at least three months; in fact the median duration of neck pain amongst participants was 6 years). 

Excluded from the trial were people suffering from ‘serious underlying pathology’: this included conditions such as ankylosing spondylitis, osteoporosis, rheumatoid arthritis, history of cervical surgery and progressive diseases such as cancer – but not osteoarthritis.

The scale of the trial was reflected in the £750,500.65[2] grant awarded by the Arthritis Research Council, making this easily the largest Alexander Technique trial since the ATEAM study that reported in 2008.[3]

The trial had three arms: Alexander Technique lessons with usual care, acupuncture treatment with usual care, and usual care alone.  ‘Usual care’ meant simply the care delivered ad hoc, through the individual's General Practitioner: this might for example involve prescribed pain killers or physiotherapy sessions.

The research team was managed from the University of York and included two members of the STAT Research Group: Julia Woodman and Kathleen Ballard – the latter a veteran of the ATEAM trial. They were both fully involved in all aspects of the research including the funding application, the design of the study, coordinating the delivery of the Alexander Technique intervention, interpretation of results and drafting of the research publications. An added benefit was that the description of the Alexander Technique in the research report was written by people well qualified to do so: it is a useful point of reference for anyone looking to describe the Technique to a medical research audience.

The interventions were delivered in York, Leeds, Sheffield and Manchester and participants were offered either 20 one-to-one 30-minute Alexander Technique lessons or 12 x 50-minute acupuncture sessions (designed to amount to 600 minutes in either case), or usual care alone.[4]  18 acupuncturists and 18 Alexander Technique teachers were involved, although the research report comments that most of the Alexander Technique lessons were delivered by half the teachers (due to uneven city-by-city recruitment of GP practices); all the Alexander Technique teachers were STAT-registered, with at least 3 years’ post-qualification experience, and were required to demonstrate commitment to Continuing Professional Development.[5]

Outcome data was collected at baseline and at 3, 6 and 12 months, with the 12 month point being the primary end point.


The primary outcome measure was the self-reported score from the Northwick Park Questionnaire, a well-validated measure for neck pain and associated disability.[6]  Secondary outcome measures included  current pain intensity (collected, in an innovative approach, via text message on a 0 to 8 scale), quality of life measures (SF12-v2 Health Survey scores)[7], and patient-reported self-efficacy using the pain self-management sub-scale of the Chronic Pain Self-Efficacy Scale[8].

The key message from an Alexander Technique point of view was that clinically significant benefits were recorded for Alexander Technique lessons, as well as for acupuncture (both combined with usual care) across the primary and secondary pain and self-efficacy measures, when compared with usual care alone. The abstract for the research concludes:

Acupuncture sessions and Alexander Technique lessons both led to significant reductions in neck pain and associated disability compared with usual care at 12 months. Enhanced self-efficacy may partially explain why longer-term benefits were sustained.

Importantly, too, the study concludes that ‘No reported serious adverse events were considered probably or definitely related to either intervention’. The extent of the pain-reduction effects did not match those achieved in the ATEAM lower back pain trial – at first sight, they might be described as somewhat disappointing – but this should be seen in the context of neck pain being perceived as harder to treat and of the longevity of the problems experienced by the participants.  The Arthritis Research Council, in a statement about its grant for the research, noted:

After back pain, neck pain is the most common physical problem reported by the general population and is a common reason for people visiting their GP. Chronic neck pain can lead to depression and anxiety, and impacts on other aspects of health and work. However it remains under-researched and we need to know more about interventions that could provide long term benefit. [2]

The clinically relevant and statistically significant long-term benefits reported in the ATLAS trial thus make an important contribution to the evidence base.

Further notes


Along with the outcomes, there were other interesting results to be found in the main research report.  

For example, the information recorded before participants were assigned to intervention groups included their preferences for, and perceptions of the effectiveness of, the three available options. Although Alexander teachers might be inclined to think that acupuncture has a higher profile than the Alexander Technique, the preferences expressed were 41.4% for Alexander Technique lessons, 36.6% for acupuncture, 20.8% no preference, and 1.2% for usual care.  There is no way of knowing to what extent positive views of one approach or negative views of another (e.g. fear of needles, fear of the unknown) affected these figures. And counter-intuitively, in the light of the expressed preferences, slightly more people (46.4%) thought that acupuncture would be ‘fairly effective’ or ‘very effective’ than thought the same about Alexander Technique lessons (44.4%).  The expectations for the effectiveness of usual care were much lower, with only 19.2% of participants having positive expectations.

Looking at those people expressing low expectations, the difference between usual care and the other interventions was even more marked: 53.7% of participants thought usual care would be ‘very ineffective’ or ‘fairly ineffective’, compared with only 8.3% for acupuncture and 4.1% for Alexander Technique lessons. One difference, of course, is that, by virtue of the study design, participants’ expectations for ‘usual care’ would necessarily – unlike the other two interventions – be based on past experience: by definition, this could not have been very positive if the participants met the threshold criteria for inclusion in the study.  But hope springs eternal – and, in the light of the results of the study, with some justification as regards acupuncture and Alexander Technique.  It is an encouraging message for teachers that, extrapolating from these figures, there is a significant cohort of the general public that is predisposed to having a positive regard for the potential of the Alexander Technique to reduce pain and disability, far exceeding any negative perceptions.

One aspect of these marked differences in expectation that remains to be considered is how they might have affected the self-report scores. The study protocol indicates that there will be further analyses to explore the potential impact of such ‘modifiers’.[9]

As regards the importance of perceptions, it turns out that ‘marketing’ is one thing, ‘sales’ another. Participants were asked to report private purchases of treatments (though not specifically treatments for neck pain) before and during the study. Despite the positive perceptions and expectations noted above, only a single one of the 345 participants enrolled in the acupuncture and usual care groups chose to purchase Alexander lessons privately during the 12 months of the study.[10] As for purchasing extra sessions of the intervention already being received under the study, in the final 6 months 20% of the acupuncture group purchased additional acupuncture sessions as against 9.3% of the Alexander Technique group purchasing additional Alexander lessons.

Teachers’ perceptions

Further material of particular interest to Alexander teachers can be found in Appendix 1 (‘Description of the Alexander Technique and Details of the Alexander Lessons Provided to Participants’). Teachers were asked to complete a one-page log form after each lesson: that’s over 2,000 log forms in total.  A further form was completed after the final lesson.  The teachers reported that 31% of trial participants attending for lessons had difficulty in ‘assimilating and remembering Alexander Technique concepts’ but that ‘78% learned to use the core skills to at least a reasonable degree’, and for 20% the skills were assessed as ‘very good’ or ‘excellent’. (The 20% presumably also fall within the 78%.)

Alexander Technique vs. acupuncture?

The authors point out that ‘no statistical comparisons were made between the acupuncture and Alexander groups because the trial was not powered for this.’ It is understood that the sample sizes would have needed to be doubled to achieve sufficient power for such comparisons to be made.[11] On the other hand, the protocol for the trial states that ‘we will also compare acupuncture to Alexander Technique lessons to estimate possible clinical differences’: the key words here are ‘estimate possible’, which is a different matter from achieving statistical significance.  And of course, simply by reporting the results, the study does invite comparisons to be made.  Generally, however, the results on the main outcome measures were so similar that any choice between interventions might reasonably be determined by patient preference.  One possibility for future research suggested by the study authors, reflecting the faster pain reduction reported in the acupuncture group and the greater emphasis on self-management associated with the Alexander Technique, was to evaluate a combined intervention in which acupuncture sessions would be followed by Alexander Technique lessons.


Public profile

It is a matter for some regret that the ATLAS research, unlike the ATEAM research, was not published open access: apparently there was no funding available for the article processing charges that would have been incurred and the over-riding objective was for the paper to be published in one of the big five medical journals.[12] It is difficult to gauge how this would affect awareness of the results of the study amongst the general public, as so little research seems to have been conducted around the connection between open access and public awareness.[13]

Casting the net wider, a Google search for ‘neck pain alexander technique’ [06 November 2016] did not bring up any articles in national newspapers within the first 100 results; there were two results in the mass media: an article on the Time website and one on the Canadian news channel CTV News. As a comparison, a search for ‘back pain alexander technique’ brought up within the first 20 results links to the original ATEAM research published by BMJ and to two articles from the Daily Telegraph, all from 2008; lower down the list are links to articles from 2008 in the Guardian and a New York Times blog.  The ATLAS study did, however, receive a good write-up in Talkback, the quarterly UK magazine of BackCare (‘serving those with a personal or professional interest in back pain’).

Google searches are not definitive as to the public reporting of the results but the rankings do provide some evidence as to the profile achieved.  For fuller reporting of the profile in the news media, refer to the altmetrics results.


A more ‘scientific’ approach to the wider impact of the research is measured via altmetrics.  Annals uses Altmetric for this purpose and the results for ATLAS at the time of writing state:

Altmetric has tracked 6,232,855 research outputs across all sources so far. Compared to these this one has done particularly well and is in the 99th percentile.

Given that there is an awful lot of low-grade research out there, the figures for Annals itself (which generally only publishes high-quality research) are perhaps of more interest: the ATLAS publication is ranked 10th out of 133 research outputs of a similar age.  The results for the trial protocol are also interesting: it is in the 91st percentile across all sources and is ranked 7th out of 65 outputs of the same age in Trials. In both cases, the scores are those at the time when the research was last mentioned.  Refer to the methodology used by Altmetric (and further linked pages) for details of how the figures are compiled.

Alexander Technique community

Within the Alexander Technique world, STAT covered the research carefully, as might have been expected given the strength and focus of the STAT Research Group and the involvement of two of its leading members in the research team. In addition to the public-facing article on the STAT website, materials have also been produced to assist STAT teachers (downloadable from here [STAT members only: login required]). AmSAT produced a press release but has yet to include the study in its list of research. The Alexander Technique International and AuSTAT websites do not seem to feature the research as yet.  On the other hand, many websites of individual Alexander teachers feature the study, judging by the results of the Google search mentioned above.


As for negative coverage, the most prominent instance seems to be a post by Edzard Ernst, the well-known critic of ‘alternative medicine’, on his blog.[14] While he accurately reports the results of the study, his main objection is that adding an intervention such as Alexander Technique lessons (or acupuncture) on top of usual care and then comparing it with usual care alone (‘A+B versus B’) gives too much scope for placebo-type effects in the results. This possibility was acknowledged by MacPherson et al. in the section of the study protocol that defends the ‘pragmatic’ approach adopted. The argument made there was that what mattered was whether or not the interventions produced results: it was not essential to try to identify the diverse possible factors that might be producing those results:

One resolution to these difficulties is to bypass the question regarding the relative impact of specific and nonspecific components, and to address a different research question, namely asking, ‘What is the overall benefit?’ For a pragmatic trial to be ‘positive’, the results must, as a minimum, meet four quantitative challenges: statistical significance, clinical relevance, adequate safety and worthwhile cost-effectiveness. To summarise, the pragmatic design offers best value by providing data that are immediately applicable to patients and providers with real-world comparisons that will assist policy and decision-makers.[9]

(The ATEAM trial authors took a somewhat different approach by including massage as one of the intervention arms: the assumption was that ‘touch and attention’ placebo-type effects of massage would be comparable with those of Alexander Technique lessons, thus allowing any specific effects of the Alexander Technique lessons to become apparent by comparison with the massage arm results.)

Ernst says in his blog post that ‘usual care is useless care’: his vision of effective treatment is one of a multi-disciplinary team able to recognise the multiplicity of causes of neck pain and to provide treatment tailored to the individual, based on the specific diagnosis of the case – in other words, to treat the disease, not the symptom.  Accordingly, if an ‘A+B versus B’ approach were to be used, then according to Ernst the ‘B’ should be ‘optimum care’, not ‘usual care’.  This contrasts rather starkly with the very first sentence of the ATLAS study protocol, which states ‘Optimal care for uncomplicated chronic neck pain has yet to be established.’  It is possible that Ernst, in his comments, has not given sufficient weight to the fact that the trial participants had non-specific neck pain, where, presumably, in many cases the diagnosis might be uncertain.

This is not to dismiss Ernst’s concerns about methodology, which certainly merit consideration; but the issues he identifies were not so compelling as to prevent the editors of a highly prestigious journal from choosing to publish the research.  Ultimately, it is possible to discern here the same conflict between ‘rational medicine’ and ‘empirical medicine’ that goes back over 2000 years.[15]  Each camp has legitimate points to make, but between an ‘Alexander Technique intervention’ that is demonstrated to work somewhat, and an ‘optimal care’ that doesn’t seem to exist, people suffering neck pain have rational grounds for choosing the former (never mind the reasonable possibility of benefiting from improved balance, co-ordination, control of reaction and self-awareness). 

The COMPare saga

By chance, the ATLAS trial was published during a six-week window when the COMPare project was systematically analysing research published in the ‘big five’ medical journals, to highlight the issue of ‘outcome switching’, i.e. non-reporting of pre-specified outcomes and inadequate explanations for outcomes that were reported but had not been pre-specified. During this window of activity, the project team analysed 67 research papers, of which only a handful escaped criticism. COMPare wrote to Annals drawing attention to what they considered to be deficiencies in the transparency of the reporting of some of the outcome measures in the ATLAS trial, including supposedly missing outcomes; similar letters were sent in connection with four other pieces of research published by Annals during COMPare’s research window. Hugh MacPherson responded on behalf of the ATLAS team, highlighting that further publications would be forthcoming to report additional outcomes from the trial and that it was simply not possible to report all the data from such an extensive trial in one paper.[16]  Subsequently, the Annals editors wrote a letter with a more general critique of COMPare’s approach, referencing all the research reports that COMPare had criticised; this in turn drew a lengthy riposte from COMPare; further correspondence followed but now seems to have petered out.[17]

Of some significance in all this was the status of the ATLAS trial protocol, published in Trials in July 2013, as compared with the less developed entry published in the trials registry in February 2012.[18]  The Trials version of the protocol is openly accessible and was clearly signposted by the main ATLAS research report.  The policy of Trials is that it is prepared to publish protocols up to the point when the trial participants have been recruited, but not afterwards.  This allows protocols to be adapted at an early stage, for example if there prove to be issues with the selection criteria, as happened to a minor extent with the ATLAS trial (and is fully documented in the protocol).  The ATLAS team fully complied with the Trials requirements.  However, the COMPare team’s methodology prevented them from taking account of any protocol published after a trial had commenced, even if the interventions had yet to commence – thus excluding the full ATLAS protocol in its entirety.  The dispute, such as it was, was thus largely about the relative status of two versions of the trial protocol.

COMPare’s endeavours to tighten up the reporting of medical research, which can be considered part of the ‘means whereby’ of good science, are valuable; but the criticisms levelled at the reporting of ATLAS trial outcomes do not seem to touch the robustness of the results, or of the methodology by which they were obtained. They are criticisms concerned with aspects of protocol and procedure around transparency, aimed primarily at journal editors.  Had Annals required more detail around the selection and reporting of outcome measures then there is every reason to suppose that the ATLAS team would have provided it.  As it is, the responses offered by Macpherson in his letter, along with methodological details in the protocol published in Trials and in the main research report, are likely to be sufficient for people whose focus is the substantive results of the trial.  

ASO made its own minor contribution to the COMPare debate by approaching Annals to make the MacPherson reply to COMPare open access, which they eventually did.  (Originally both letters sat behind the Annals paywall, but the COMPare letter was publicly available and unchallenged on the COMPare website.)  Since the ASO intervention, the situation seems to have changed yet again: the correspondence on the Annals site, though now open access, has since been re-organised in ways that make it harder to locate or follow.

In conclusion, then, standards of outcome reporting and editorial practices of medical journals including Annals may improve as a result of the COMPare intervention, but the intervention does not affect – and nor was it ever intended to challenge – the validity of the research.  COMPare’s target was not individual pieces of research such as the ATLAS trial but the editorial standards of medical journals. The correspondence on the COMPare website provides valuable insight into some fundamental issues of medical research publishing and if readers are sufficiently interested in that topic they are advised to explore it, but not otherwise.

Another ASO contribution!

A further ASO intervention came after the Editor noticed that the labels for the lines for ‘Alexander Technique’ and ‘usual care’ in the graph plotting pain scores submitted by text message (Appendix Figure 2) had been transposed: as a result, the graph suggested that the Alexander Technique was the least effective intervention on this measure compared with the other two. Considering that the authors had sent the correctly labelled graph to Annals as a jpeg file that could have been published ‘as is’, this seemed an entirely avoidable mistake.  The study authors were notified of the error and Annals in turn corrected the graph.

Anyone who downloaded the article in the weeks immediately following publication should be aware of the error and ensure they have the corrected version; the assumption has to be that the graph in the print version of Annals (which we have not seen) was still incorrect.  Annals did report the correction, but the notification was only published three months later.[19]

More to come

Follow-on analyses are in the pipeline, including the cost-effectiveness analysis, a ‘longitudinal qualitative sub-study’, further analysis of the text message pain scores, and more detailed descriptions of the interventions and potential mediators (the latter presumably to include the effect of the ‘expectation’ modifiers).  These follow-on studies will be critical to the longer-term impact of the research, in particular its influence on clinical guidance and commissioning decisions.


[1] Hugh MacPherson, Helen Tilbrook, Stewart Richmond, Julia Woodman, Kathleen Ballard, Karl Atkin, Martin Bland, Janet Eldred, Holly Essex, Catherine Hewitt, Ann Hopton, Ada Keding, Harriet Lansdown, Steve Parrott, David Torgerson, Aniela Wenham, and Ian Watt, ‘Alexander Technique lessons or acupuncture sessions for persons with chronic neck pain: A randomized trial’, Annals of Internal Medicine, 163/9 (2015), pp.653-662.

[2] See < > [accessed 03 November 2016].  According to the original Arthritis Research Council press release, the amount awarded was £719,000 <> [accessed 03 November 2016], indicating that additional funding was subsequently made available; three of the researchers reported receiving ‘in-trial’ grants from this source.

[3] Paul Little et al., ‘Randomised controlled trial of Alexander technique lessons, exercise, and massage (ATEAM) for chronic and recurrent back pain’, British Medical Journal 337/a884 (2008).

[4] ‘designed to’ because not all participants attended for all the planned sessions.

[5] In fact the mean duration of post-qualification experience was 14 years.

[6] The questionnaire includes 9 Likert items, each offering five options for measures of pain and disability. One of the nine covers driving and adjustments are needed for non-drivers. A tenth measure captures the perception of change since the last time the questionnaire was completed.

[7] ‘…a shorter version of the SF-36v2 Health Survey that uses just 12 questions to measure functional health and well-being from the patient’s point of view.’ See <> [accessed 03 November 2016].

[8] See K.O. Anderson et al., ‘Development and initial validation of a scale to measure self-efficacy beliefs in patients with chronic pain’, Pain, 63 (1995), pp.77–83.

[9] See Macpherson et al., ‘Alexander Technique Lessons, Acupuncture Sessions or usual care for patients with chronic neck pain (ATLAS): study protocol for a randomised controlled trial’, Trials, 14/209 (2013);  <> [accessed 04 November 2016]: ‘To estimate the potential impact of the nonspecific components of preference, expectation and belief, we will measure these at baseline and evaluate their impact on outcome, as we have done in previous trials for preference and belief’.

[10] The comparable figure for acupuncture is slightly higher but cannot be determined exactly because of the way the figures are reported; but looking at the last six months of the trial, the figure was one for Alexander Technique and five for acupuncture. It should be noted also that the figure is for participants not sessions.

[11] Verbal communication from Julia Woodman. Trials are normally sized on the basis of previous studies – including pilot studies – that provide evidence of the likely impacts of the interventions. The closer the expected outcomes of two interventions, the larger the sample size needed to produce statistically significant evidence of any difference.

[12] The ‘big five’ are BMJ, The Lancet, Journal of the American Medical Association, New England Journal of Medicine and Annals of Internal Medicine.  The research report notes that requests for individual reprints of the research should be directed to the lead author, Hugh MacPherson.

[13] Nearly all research on the impact of open access seems to be restricted to the position within the research community, such as the incidence of citations, rather than on the public profile of the research. See for example <>  [accessed 03 November 2016]. Whilst this has not been updated beyond June 2013 it references a considerable body of research published up to that point.

[14] <> [accessed 06 November 2016]. For further information about Edzard Ernst see <>  [accessed 06 November 2016].

[15] See Michael Frede, ‘Introduction’ to Galen, Three Treatises: On the Nature of Science (Indianapolis, IN: Hackett, 1985).  For a more easily accessible illustrative discussion, see Richard H. Shryock, ‘Empiricism vs. Rationalism in American Medicine 1650-1950’, Proceedings of the American Antiquarian Society, 79 (1969); <> [2.7MB PDF file; accessed 06 November 2016].  These same issues cropped up early in Alexander’s career, in his 1909-10 dispute with Dr. Scanes Spicer.

[17] See <> and <>  [both accessed 06 November 2016]. The COMPare blog is the best place to follow the history of the interaction between COMPare and Annals.

[18] See <> [accessed 19 November 2016].

[19] See <> [accessed 03 November 2016]. The publication date is given as 02 February 2016.