More Than 50 Shades of Gray
2014 Spring Featured — March 20, 2014
The Uneven Quality of Peer Review in Open-Access Publishing
Enduring the peer-review process can be trying for authors as they await anonymous comments from reviewers or find their papers rejected because they may be judged, for example, as too “regional” and therefore lacking broad significance. But the credibility that accrues to peer-reviewed publication can reflect well on both the authors and the science they reveal. (Cartoon by Nick D. Kim)

The scientific community got a wake-up call in the October 4, 2013 issue of the journal Science. In an article titled “Who’s Afraid of Peer Review?,” writer John Bohannon describes a “sting operation” he conducted to test the credibility of the peer-review process in the burgeoning world of for-profit, open-access publishing. What he discovered is sobering for any of us who believe that rigorous peer review is the backbone of scientific publishing.

For the sting, Bohannon wrote a completely fictitious manuscript by a fake author from a nonexistent institution. The paper claimed to show that a particular lichen molecule inhibited growth of a specific cancer cell, a fabrication that Bohannon describes as having “such grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable” (Bohannon 2013). He submitted the manuscript to 304 peer-reviewed, open-access journals between January and August 2013. Though each copy of the paper had slight modifications to create unique submissions, the “hopelessly flawed” experiments and “meaningless” results remained the same.

Eye-Opening Results

In all, 157 journals — nearly 52 percent — accepted the paper (which Bohannon then withdrew to prevent publication). Ninety-eight journals (32 percent) rejected the paper, and 49 (16 percent) were still either considering it or had folded by the time the Science article went to press.
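As a quick arithmetic check of those tallies (a minimal sketch in Python; the counts are Bohannon’s, the percentage rounding is mine):

    # Back-of-the-envelope check of the acceptance figures reported above.
    submissions = 304
    accepted, rejected = 157, 98
    pending_or_folded = submissions - accepted - rejected  # 49

    for label, n in [("accepted", accepted), ("rejected", rejected),
                     ("still pending or folded", pending_or_folded)]:
        print(f"{label}: {n} ({100 * n / submissions:.0f}%)")
    # accepted: 157 (52%), rejected: 98 (32%), still pending or folded: 49 (16%)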

The 304 journals were not just randomly drawn: Bohannon got them from two sources. One was Beall’s List, published by Jeffrey Beall, a library scientist at the University of Colorado-Denver. Today it lists 477 of what Beall calls “questionable” open-access publishers, and includes criteria for determining “predatory” publishers. Bohannon’s other source was the comprehensive Directory of Open Access Journals (DOAJ), a list of more than 9,700 journals compiled by Lars Bjørnshauge, a library scientist at Lund University in Sweden.

Bohannon’s goal was to demonstrate that some publish-for-profit, open-access journals are “predatory” (i.e., more interested in money than science), lack credibility, and provide cursory or nonexistent peer review. He clearly achieved that goal, showing that even a deeply flawed manuscript submitted to predatory publish-for-profit journals stands a good chance of acceptance.

Value of Open Access

Despite the potential flaws in the pay-to-publish, open-access model, the concept behind open access remains valid. Providing free access to scientific research for anyone with a computer and Internet connection is a public service that seems like a no-brainer. And for authors, open access appeals because it often gives them a faster turnaround from submission to publication than traditional subscription journals and generally provides broader dissemination of their work, which can increase citations. For these reasons, a growing number of authors are choosing to publish open access, which has two basic forms:

• Green Open Access: Also referred to as self-archiving, this form of open-access publishing is free of charge to authors but often involves posting only the author-accepted manuscript (AAM) as opposed to the final published version of record (VoR). AAMs have been peer reviewed but have not been copy edited, formatted, or enhanced by the publisher. The author posts these articles on an institutional repository, a subject-based repository, or their own personal website (not on the publisher’s website), sometimes following an embargo period of six to 24 months (Laakso 2014).

Bookshelves at The Wildlife Society’s headquarters in Bethesda, Maryland display bound copies of the society’s peer-reviewed journals: The Journal of Wildlife Management (below), and the Wildlife Society Bulletin. To merit publication, papers in these journals undergo rigorous peer review. (Credit: Lisa Moore/TWS)

• Gold Open Access: This method of open-access publishing requires authors or their institutions to pay an article publication charge (APC), which buys the right to publish the final version of record online and make it freely available to all. Fees for gold open access averaged just over $900 in 2010, and went as high as $3,900 (Solomon and Björk 2012). Fees to publish in the not-for-profit Public Library of Science, or PLoS, vary from $1,350 to $2,900.

Some open-access journals may accept manuscripts — even questionable ones—to reap publishing fees, which industry-wide are substantial: The gold open access publishing business was worth an estimated $225 million in 2013 (Outsell 2013).

Trying to track all that money is tricky business. As part of his Science article, Bohannon created an interactive website labeled “Follow the Money,” showing that a journal’s publisher, editor, and bank account are “often continents apart.” About one-third of the journals in his sting were based in India—now the world’s largest base for open-access publishing. But even if journal editors or banks are based in the developing world, says Bohannon, the journal’s parent company is often in the U.S. or Europe, with some big-name academic publishers “at the top of the chain.”

Balancing Finances and Quality

Of course, money enters the picture for traditional, non-open-access journals as well, which sustain themselves on author fees for page and color charges and on subscription revenues. The Journal of Wildlife Management charges TWS authors $90 per page for the first eight pages and $150 per page thereafter, plus $650 per color plate. JWM’s revenue per article from page charges in 2013 averaged $1,000 — less than most open-access fees. The Wildlife Society and its publisher share revenues from page charges and from institutional and member subscriptions as well — a significant source of funds to help offset production costs, enable the Society to continue to publish its journals, and generate additional funds to support TWS programs.
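For a rough comparison with the open-access fees cited earlier, here is a minimal sketch of the page-charge schedule described above; the per-page and per-plate rates come from the text, but the example article lengths are hypothetical:

    # JWM page-charge schedule as described above: $90/page for the first eight
    # pages, $150/page thereafter, plus $650 per color plate.
    def jwm_page_charges(pages: int, color_plates: int = 0) -> int:
        return 90 * min(pages, 8) + 150 * max(pages - 8, 0) + 650 * color_plates

    # Hypothetical article lengths, for comparison with gold OA fees
    # (roughly $900 on average, up to $3,900):
    print(jwm_page_charges(8))                    # 720
    print(jwm_page_charges(12))                   # 1320
    print(jwm_page_charges(10, color_plates=1))   # 1670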

Given such economics, moving to an entirely open-access model is not always economically feasible for well-established scientific journals like TWS’s. Yet the Society is taking steps to move toward the open-access arena, which may ultimately become the future of scholarly publishing. For example, the Society’s journal authors can now opt to publish open-access papers through OnlineOpen for a fee of $3,000, and TWS is evaluating other possible options. But as it considers moving further into this open arena, the Society needs to be strategic about retaining revenue from its publications.

Ultimately, the move to open access should only happen if TWS also holds the line on quality, as we have done for 75 years. Our journals will never publish ‘junk science,’ and will continue to adhere to a rigorous peer-review system that results in the publication of credible science (see sidebar). Professional societies can only maintain high credibility by publishing trustworthy, reliable science. It’s therefore important that researchers and authors understand the distinction between publishing in a peer-reviewed journal produced by a professional society versus in a for-profit, open-access journal of questionable quality.

Perhaps more important, that distinction needs to be made clear to the media and the public. Conventional peer review relies on qualified reviewers and editors to decide whether a manuscript should be published, with little or no chance for others to offer different opinions—unless they are willing to publish another peer-reviewed article critiquing the original.

The organization Peerage of Science promotes another model that is quite different from conventional peer review. After an author submits a manuscript to Peerage of Science, any qualified “non-affiliated” peer can review it. The peer reviews are then themselves peer reviewed, “increasing and quantifying the quality of peer review.” Subscribing journals can then offer to publish the manuscript; the author may accept a direct publishing offer or export the peer reviews to any journal of their choice.
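As described, that workflow moves through a handful of distinct stages; the sketch below simply lists them in order (the stage names are my paraphrase of the description above, not Peerage of Science’s own terminology):

    # Stages of the Peerage of Science process as described above; the stage
    # names are a paraphrase, not the organization's official terminology.
    from enum import Enum, auto

    class ReviewStage(Enum):
        SUBMITTED = auto()           # author submits the manuscript
        OPEN_PEER_REVIEW = auto()    # any qualified, non-affiliated peer may review it
        REVIEW_OF_REVIEWS = auto()   # the peer reviews are themselves peer reviewed
        JOURNAL_OFFER = auto()       # subscribing journals may offer to publish
        AUTHOR_EXPORT = auto()       # or the author exports the reviews to another journal

    for stage in ReviewStage:
        print(stage.name)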

Aside from the proliferation of new models, a long-standing pressure on the peer-review process has been the need for journals and authors to generate citations of their published work. The more citations a journal has, the higher its “impact factor,” a number based on a complex formula involving a ratio of citations per number of published articles as calculated by the Thomson Reuters Science Citation Index (SCI). Though attitudes about the importance of impact factor are shifting, many researchers are still regularly evaluated on the number of times their work is cited in other articles, with greater prestige ascribed to appearing in high-impact journals.
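At its core, that formula is a ratio over a two-year window; the sketch below shows the standard calculation, with made-up numbers purely for illustration (the citation indexer applies its own rules about which items count as citable):

    # Two-year impact factor: citations received in year Y to items published in
    # years Y-1 and Y-2, divided by the number of citable items published in
    # those two years. The figures below are invented for illustration only.
    def two_year_impact_factor(citations_to_prev_two_years: int,
                               citable_items_prev_two_years: int) -> float:
        return citations_to_prev_two_years / citable_items_prev_two_years

    # e.g., 450 citations in 2013 to articles from 2011-2012, and 300 citable
    # articles published in 2011-2012:
    print(two_year_impact_factor(450, 300))   # 1.5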

A newer, related pressure involves the potential of a published article to “go viral”—meaning it’s picked up by the popular media and spread rapidly on the web. Articles that go viral can bring much attention to an author, institution, or journal, and this may pressure some editors to accept and publish articles with high viral potential. (Consider the flurry of articles about rediscovery of the ivory-billed woodpecker a few years ago.)

The proliferation of non-traditional ‘citations’ through social-networking tools like blogs, Twitter, and Facebook clouds the whole idea of “impact,” and has led to what some researchers are calling “altmetrics” — a method of screening science scholarship across the social web. According to one report, “Altmetrics expand our view of what impact looks like, but also of what’s making the impact.” The system does this by screening the online sharing not only of science articles but also of “raw science” like data sets, “nanopublication” of citable data instead of the entire article, and self-publishing via blogs (Priem et al. 2010). Such alternative sources of science are becoming significant. According to the authors, “As many as a third of scholars are on Twitter, and a growing number tend scholarly blogs.”
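There is no single agreed-upon altmetrics formula; as a toy illustration only, a score of this kind might be a weighted tally of mentions across different corners of the social web (the sources and weights below are entirely hypothetical):

    # Toy altmetrics-style score: a weighted sum of online mentions. Both the
    # source names and the weights are hypothetical; real altmetrics providers
    # use their own data sources and weighting schemes.
    HYPOTHETICAL_WEIGHTS = {"tweets": 0.25, "blog_posts": 3.0,
                            "facebook_shares": 0.5, "dataset_downloads": 1.0}

    def toy_altmetric_score(mentions: dict[str, int]) -> float:
        return sum(HYPOTHETICAL_WEIGHTS.get(source, 0.0) * count
                   for source, count in mentions.items())

    print(toy_altmetric_score({"tweets": 40, "blog_posts": 2, "facebook_shares": 10}))
    # 40*0.25 + 2*3.0 + 10*0.5 = 21.0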

Eric Hellgren, editor of The Wildlife Society’s peer-reviewed Wildlife Monographs, assigns two to four reviewers to each manuscript and also reviews them himself, a process that may take 20 hours or more. Hellgren looks for proper design, clear presentation, quality and quantity of data, and impact on the field. “The whole point is gaining reliable knowledge,” he says. (Credit: Eric Schauber)

Beware “The Nasty Effect”

Even if the science presented on a blog is highly credible, negative online comments from random readers can skew other readers’ perceptions of the research. This so-called “nasty effect” was recently documented in a study led by science communication professors Dominique Brossard and Dietram Scheufele at the University of Wisconsin-Madison (Anderson et al. 2013), which found that the nastier the comments, the worse readers’ perceptions became.

Where does all this leave the peer-review process and our ability to trust for-profit, open-access publishing, let alone ‘publishing’ through social tools on the Web? I’d argue that although the peer-review process is not perfect, it has survived the test of centuries of use and can help thwart phony science and misleading sensationalism. Furthermore, as the publishing landscape changes rapidly with new technologies and open models, the role of professional societies like TWS becomes increasingly important in maintaining high-quality peer review of science, particularly as the credibility of science comes under increasing scrutiny.

After all, part of the mission of TWS is “disseminating wildlife science” and providing research through the Society’s journals, technical reviews, and other publications. All of us have a role to play in ensuring that this mission is sustained.

Author Bio

Gary C. White, Ph.D., is Professor Emeritus in the Department of Fish, Wildlife, and Conservation Biology at Colorado State University. He is the Central Mountains and Plains Representative on The Wildlife Society’s Council, Chair of Council’s Publications Committee, and has served as an author, peer reviewer, and associate editor of peer-reviewed journals.

Additional Resources

Access the complete bibliography for this article.
