Referencing in scientific writing has long been viewed as arcane to the uninitiated. In the pre-internet era, the confusion was largely due to highly specific format requirements that differed between article types and between publications. The new reality is much worse.
The ever-expanding array of online resources allows almost anyone to access a tremendous wealth of content with ease from nearly any location. Seemingly relevant material on any topic can be found with a remarkably small number of keystrokes. The speed with which information can be found promotes the sense of a limitless trove of knowledge, but therein lies a major hazard.
Ease of access to information does not confirm validity, let alone authority. Critical thinking remains essential to weigh the value of any piece of information. The established structure of the traditional literature aids this assessment. The greatest academic weight is generally given to well-designed primary research reports published in respected, peer-reviewed journals. Such work has typically been reviewed by subject matter experts who help authors overcome or acknowledge any perceived shortcomings.
Review papers are useful as syntheses, but they tend to carry lower academic weight because they rely on the selective interpretation of other original work. Brief reports can offer good insights, but their authority is generally limited by small sample sizes. Case reports provide the most extreme example of limited sample size, with a concomitant lack of authority. Gray literature, such as proceedings papers, is often published with little or no meaningful peer review. Although these works can be insightful, they are rarely accepted as authoritative. The final class of traditional literature is textbooks, which are effectively thirdhand summaries with selective content that can fall anywhere on the continuum between compelling and misleading.
The internet confounds the classic hierarchy of published material. It can, in some cases, be used to access top-ranked, formally published, peer-reviewed original research, but it can also place such items alongside unreviewed “preprints” or informally published commentaries that may or may not be well founded. There is no standard for internet content, generally no check on content quality, and almost never any promise of content stability or archival access. It is left to the reader to evaluate and use the material appropriately.
The first step in evaluating any content is consideration of validity. Finding something that agrees with a position or that could reinforce an argument may be attractive, but agreement alone does not ensure validity. If validity can be satisfactorily established, the next question is whether the material is appropriately referenceable.
Content stability and archival access are priority concerns in referencing in scientific articles. There is an expectation that references cited in a paper will be not only valid but accessible for the foreseeable future in the form used to generate the citation. This provides an important guide for reference selection.
The only content that can be considered unquestionably suitable for inclusion in the reference list is formally published material. The fact that something was written on a web page at some point, or that it was found as a portable document format (PDF) file, does not justify inclusion in a reference list. Formal publication requires, at a minimum, a publication or version date, a listed author and/or publisher, and a stable form that can reasonably be expected to be available in the future.
The other extreme is easier to describe. Items that are definitely disqualified from reference lists are general web pages (eg, “landing pages”) and dynamic pages. Landing pages generally do not contain the specific information relevant to the discussion. They have no value as references. Dynamic pages that are continually updated will, by their nature, be inconstant and inappropriate to reference: They could tell similar, stronger, or completely different stories at any point in the future. The descriptive term for the weakness associated with dynamic pages is “reference rot.” Effectively, the pages may appear to be the same, but the content could be substantially different.
The debate over using informal or unpublished reports available online is more challenging. The content may be compelling—there may be a version date, author, and even a publisher listed, and preservation may be assumed—but caution is required. Organizations can change hosts or reorganize, reduce, or replace content. The lack of formal publication makes any such material less likely to survive. The term “link rot” applies to addresses no longer accessing the expected content. The problem is huge, particularly given the dearth of rules regarding website archiving. The scientific literature is intended to stand as a record of scientific endeavor, and it is important to incorporate elements most likely to endure.
Those comfortable with internet content may be frustrated to learn that some of it is disqualified from reference lists, but this does not preclude presentation of relevant sites. There is a tiered approach to referencing. Formal publications are those most appropriate for reference lists. Material with lesser provenance can still be included, but as text citations, completely independent of the reference list. For example, the landing page of an organization could be listed parenthetically after the organization name. This is a generic text reference, with no promise of specific content. Similarly, text citation is possible for a dynamic web page collecting data relevant to the topic under discussion. More explanation is then needed in the text, with no reliance on a formal reference, but acknowledging the existence of the relevant site and making cautious use of its content can be reasonable.
Text citations are the equivalent of personal communications, in which material held in written form can be cited within the text. The caveat here is that steps should be taken to capture and preserve the content at the point of writing, keeping it available in a form that can be reviewed on demand.
Complicating the current discussion is the fact that referencing standards vary between journals. Some rely on author discretion, whereas others more actively promote the tiered approach and mandate preservation of the traditional standards for reference lists. Wilderness & Environmental Medicine operates in the latter manner, generally accepting only formally published material in the reference list. As with almost all journal guidelines, the argument that similar content was allowed elsewhere or previously holds no weight.
Published online: July 21, 2021
©2021 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.