Two of the substantial challenges in research design involve data depiction and statistical analysis. Data depiction issues often begin with misunderstandings about reasonable precision. A number is only meaningful if it reflects the precision of the measure used to capture it. Additional decimal places may seem important, but they do nothing to increase precision; they can even cast authors in a poor light for misunderstanding their own measures. Exaggerated reporting can be as simple as stating height to the millimeter, an invalid proposition given both the general precision of the measure and the change in height seen in people over the course of a day. Every number should be evaluated for appropriateness before it is shared.
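As a minimal sketch of this point, the snippet below rounds a reported value to the resolution of the instrument before display. The function name and the 0.5 cm resolution are hypothetical choices for illustration, not values taken from the text.

```python
def report(value_cm: float, resolution_cm: float = 0.5) -> str:
    """Round a measurement to the instrument's resolution before reporting.

    A height measure read to the nearest 0.5 cm (an assumed resolution)
    cannot support millimeter-level reporting, so extra decimals that
    software happens to store are dropped rather than published.
    """
    rounded = round(value_cm / resolution_cm) * resolution_cm
    return f"{rounded:.1f} cm"

# Software may carry many digits, but only one is meaningful here.
print(report(172.4361))  # -> "172.5 cm"
```

The same habit applies to derived quantities such as means and percentages: the reported digits should never outrun the precision of the underlying measurement.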
The graphic depiction of data holds additional challenges. The validity of what the naked eye can discern is heavily influenced by the presentation. Depiction choices can overemphasize or underemphasize differences. Poor selection of an axis range, whether unintentional or intentional, can compromise the presentation; too narrow a range will exaggerate differences, and too wide a range will make differences disappear. Scale inconsistencies can also cloud relationships between related figures. Similar problems can be created through the depiction of statistics such as variability. Standard deviation serves as a good reference for normally distributed data, since its relationship to the sample distribution is intuitively clear: ±1 SD captures about 68% of the sample, ±2 SD captures about 95% of the sample, and so forth. Although there are situations in which standard error may be appropriate, choosing it to make the variability look smaller is not. The fact that standard error has no such graphic relationship to the sample distribution is a shortcoming, and it should be used carefully. Ultimately, the goal for any graphic presentation should be the willingness to argue for it from either side of the null hypothesis. No depiction should unfairly favor either position. Reviewers should be thinking of this when they evaluate manuscripts.
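The distinction between standard deviation and standard error can be sketched numerically. In this simulation (the sample size, seed, mean, and spread are arbitrary choices for illustration), ±1 SD and ±2 SD cover roughly 68% and 95% of a normal sample, while the standard error is far smaller and describes only the precision of the mean, not the spread of the data.

```python
import random
import statistics

random.seed(0)
n = 10_000
# Simulated normally distributed data (arbitrary mean 100, SD 15).
sample = [random.gauss(100, 15) for _ in range(n)]

mean = statistics.fmean(sample)
sd = statistics.stdev(sample)
se = sd / n ** 0.5  # standard error of the mean shrinks as n grows

within_1sd = sum(abs(x - mean) <= sd for x in sample) / n
within_2sd = sum(abs(x - mean) <= 2 * sd for x in sample) / n

print(f"±1 SD covers {within_1sd:.0%} of the sample")  # close to 68%
print(f"±2 SD covers {within_2sd:.0%} of the sample")  # close to 95%
print(f"SD = {sd:.2f}, SE = {se:.2f}")  # SE is ~1/100 of SD at this n
```

Because the standard error depends on sample size, error bars drawn from it can be made arbitrarily small by collecting more data, which is why it says nothing about where individual observations fall.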
Statistical analysis issues go far beyond the descriptive. Valid statistical approaches can range from remarkably simple to blindingly complex. The right choice is largely a function of the data in hand. Researchers can struggle with statistics, sometimes through a reluctance to go beyond the simplest methods, sometimes in response to recommendations by statisticians, and sometimes through an inappropriate desire to use specific tools. Evaluating the statistical elements of manuscripts can be particularly challenging. Subject matter experts may not always be familiar with the statistical tools used, and they will rarely see all the raw data in any case. Although statistical experts can be added to a review panel, they too will generally not see all the raw data and, in some cases, may lack helpful insight into other aspects of the study design or execution. It falls to authors to ensure that analyses are both reasonable and accurate. Considering analytical options before data collection is likely to strengthen the overall effort, making the product more compelling. It is not the outcome of any hypothesis test that is important, but that the right tests are conducted. The focus must be on the validity of the assessment, not a hunt for a positive finding in hopes of impressing reviewers. Getting a manuscript through peer review is an important hurdle, but not the final assessment of the value of a report. The right analyses will withstand the test of time.
There is a sense in scientific writing that every word should count. Although clarity is paramount, economy runs close behind. The results section is a frequent target for redundancy when material in tables is repeated in the text. The effort should not just be about word count, though. It is important to consider whether data can be provided in sufficient detail to allow future reanalysis. This can help to validate the work, to accommodate new analytical techniques, and to facilitate more direct comparison with the results in other reports. Tables can serve as an important repository of data. Careful consideration and clear justification incorporated into the text can help convince reviewers and editors, who are often looking more to cut excess than to add material. The effort is worthwhile for material that can benefit future scholarship.
There are many rules in scientific writing, but there are far more exceptions. Each manuscript should be built upon its own content for the most appropriate presentation. Thoughtful preparation does not guarantee success, but it creates a much stronger position from which to negotiate peer review.
Published online: August 16, 2018