Friday, 29 November 2013

Peer review and a full stomach

Ok, this post may be stretching the topic of peer review a bit, but I want to talk about the connection between peer review and a full stomach. To explain this proposed connection, I will describe a study that one of my undergraduate professors, Mark Fenske, performed on the connection between judges' verdicts and when they last ate (I'm sorry, I tried to find this paper but couldn't, so you will have to take this description on faith). The study examined the correlation between when a judge last ate and the judge's decisions about whether convicts should be released on parole. Fenske discovered a positive correlation between having just eaten and releasing convicts on parole, and a corresponding correlation between not having eaten recently and denying parole. In general, when judges had full stomachs, they tended to review convicts more charitably. Ever since I read this paper I have been terrified that my professors will read my assignments on an empty stomach.

So what does this study have to do with peer review? Although, in many cases, multiple people review the same paper during the review process, what if the person who submitted the paper is plagued by bad luck and none of the reviewers has eaten in hours? This topic is intended to seem trivial and lighthearted, but it points to something deeper: systematic review is still subject to luck and probability because of the complexity of the human condition and of the universe as a whole. Following in the tradition of both Aristotle, who argues that moderation of the mind is not enough to succeed and that one also needs luck, and Spinoza, who argues that beings are specific possibilities within a set of possibilities that must be expressed for the universe to be, I argue that the peer review process requires a full stomach, along with many yet-to-be-determined factors that I operationalize as luck.

The moral of this convoluted post is that even if you write the most amazingly inventive paper and attach a cookie to it when you hand it in for review, it might still get rejected, so keep trying until you get lucky.

Clear and Dear? Peer Reviews...

Reading the articles this week, I certainly found more affinity with Fitzpatrick (2009). Maybe this is because I have no experience in the world of academic publishing, and thus a "freshening" of the publishing system sounds kind of thrilling to a newbie. And why shouldn't it be? Peer review, which is a form of evaluation (and, as such, inherently variable) by other members of a specific community (usually), can always be rebuilt, renovated, and remodeled.

However, although Sokal's hoax has generated numerous discussions about the process of peer review, I think the resulting debates were minimally constructive, and essentially became a series of misreadings about misreadings about misreadings ("Sokal Affair," n.d.). They didn't really move intellectual debates very far. Humanities vs. sciences? Not very original.

There are many issues in this field, including access, cost, and peer review (not to mention the complex process of navigating interdisciplinary publications, where a singular authority on a particular matter might not exist). Regarding access, I do think that publications should be accessible to a more varied audience (this relates mostly to issues of cost and funding). Access is also connected to peer review - who is included in this process? Some of the examples Fitzpatrick brings up, including very open methods of peer review in which there was no incentive to comment (and thus no commenting occurred), demonstrate that the issues of peer review are not simply an either/or kind of situation. Closed, structured peer review and open, public peer review are not the only options; there is a lot of wiggle room in between, so it will be interesting to see what publishing organizations do with this wide-open space.

I really think that the object of peer review should be a constructive one - namely, how can articles be improved and worked on in order to contribute a well-crafted, well-researched, well-argued piece? Of course "well," "good," and "bad" are all subjective terms...although are they? Does "subjectivity" find a place in all academic fields, or is it really just exclusive to the humanities and social sciences, which are somewhat vague categories too? And where does subjectivity fit into the information field, which is so interdisciplinary? A dilemma of sorts...suddenly peer review becomes more complicated than first anticipated!

References

Fitzpatrick, K. (2009). Planned Obsolescence: Publishing, Technology, and the Future 
     of the Academy. Retrieved from
     mediacommons.futureofthebook.org/mcpress/plannedobsolescence/

Sokal Affair. (n.d.). In Wikipedia. Retrieved November 28, 2013, from
       http://en.wikipedia.org/wiki/Sokal_affair

 

Peer Reviews By Cynthia Dempster

I think that the traditional approach espoused by Lovejoy et al. (2011) and Fitzpatrick's (2009) evolutionary approach have different merits. If I write an article that is innovative, I might prefer an open peer review. If I am writing an article that relates to an area where traditional expertise is important, I may prefer a more thorough traditional review process. It is unlikely that I would submit an innovative article to a journal that I perceived as stodgy or old-school; such a journal would probably reject my manuscript. It is very likely that the peer review process and the journal's attitude to peer review fit the mandates and interests of the publication itself. A journal that is interested in innovative approaches and ideas is not going to have articles reviewed by a blind retired professor, age 125, who hates change. The comments offered in the context of an open review are frequently offered off the cuff; they can relate to one aspect of the article without balancing particular issues against the whole. A traditional peer review will probably be more comprehensive, and the thoughts it contains may have matured with time.

I think it is important to view peer reviews as tools available to serve us as writers rather than obstacles to being published. If there is not a publishing option or a peer review process that suits us, then we can choose to self-publish or start our own journal with our own mandates. We all have an individual voice and an individual point of view. We may choose publishing methods and peer review processes that suit us. Publication is, in the end, just a means of communication. Peer review is a means of receiving valuable feedback and input from others. We are in charge of our own work and careers. If the options we want don't exist, we can create them.

On another note, I do not like practical jokes or hoaxes. Usually someone is embarrassed or ridiculed. I prefer direct, kind and courteous ways of making a point.

Peer Review - Ignored?

Earlier this year I was discussing John Bohannon's "experiment" with the peer review process of open access journals (CBC News, 2013) with a friend of mine in the sciences. The "sting operation" he performed has since been criticized as biased (Taylor, Wedel & Naish, 2013), and even at the time it was published largely as a news item, so it's clearly not an ideal examination of the peer review process. However, what we ended up discussing was her experience as a reviewer, as she had recently been asked to perform a review in place of her busy supervisor.

When my friend reviewed her first paper as a peer reviewer, she felt terribly guilty for being highly critical of it, but she found it fundamentally flawed and didn't recommend it for publication. As a participant in the experimental process herself, she knew how much work had gone into the paper and felt bad for the scientists involved, but she knew that her duty as a reviewer was to be critical and honest about adherence to the scientific process and the quality of research. After agonizing over the review, she was very surprised to find later on that the paper she thought she had condemned to rewriting had been published despite her input. She also told me that many of her colleagues had reviewed papers in the past that they deemed not fit for publication, and had seen them go on to be published in peer-reviewed journals. Further, all of the members of her lab review for non-open-access journals.

While it's true that the peer review process involves multiple reviewers, and the input of a single scientist can't be taken as the sole possible view of a paper's worth, this raises concerns about the validity of the peer review process even when it is working as originally designed in a reputable closed-access journal. While there is much debate about the potential harm that could be done by changing the peer review process, there are still many questions that can be asked about the quality of the traditional process as it is implemented today. How many of the reviewers are qualified to perform the review, and how many of them pass it on to a less experienced assistant or student working under them? How many of the reviewers approach the review process seriously and pay close attention to the details of the paper? How well do editors and selection committees adhere to the advice the reviewers give? And how well does the peer review process truly represent the opinions of the scientific community it is meant to embody?

CBC News. (2013, October 14). Bogus science paper reveals peer review's flaws. Retrieved from http://www.cbc.ca/news/technology/bogus-science-paper-reveals-peer-review-s-flaws-1.2054004
Taylor, M., Wedel, M., & Naish, D. (2013, Oct 7). Anti-tutorial: how to design and execute a really bad study. Retrieved from http://svpow.com/2013/10/07/anti-tutorial-how-to-design-and-execute-a-really-bad-study/

Thursday, 28 November 2013

Pear-reviewed journals



This question of peer review resonates with me because of my social psychology background.  Peer review is capable of judging whether given research meets disciplinary best practices.  However, sometimes the disciplinary methods themselves are inadequate.  These inadequacies are not the fault of peer review, per se, but of the discipline.  In the case of the Sokal affair, the problem was not the peer review; it was the academic discipline of postmodern cultural studies itself.  When satire slips past peer review, it is a sign of sickness in the discipline.

Peer review is important, but it is not sufficient.  It can test whether research follows best practice, but it cannot advance that established practice or offer more profound commentary.  In my experience in the experimental sciences, peer review is respected to such an extent that new or non-standard methodologies, which have a limited capacity for peer review because of their novelty, are viewed as suspect.  The rituals of publication may, in fact, be impeding creativity and innovation.

As an aside, I thought I’d include a picture of this spam “pear-reviewed” journal that is soliciting articles.  Your chance to be published!


Preserving StatCan data



I’d like to take some liberties in answering this question and respond instead with regard to the preservation of the StatCan data that I work with.  I’ve worked with data for the last three years, and in that time my attention to preservation has grown with experience.  Some of the early censuses (late 1800s and early 1900s) contain valuable information about Canada – but only some of these files are available to researchers, because most have not yet been digitized.  Manually entering this old data is a painstaking process.

Fast-forward 50 years and we encounter a similar problem: although the existing data is machine-readable, the machines (and software) that read it no longer exist.  It takes an expert to reformat the data and syntax so they can be used with contemporary technologies.
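To make this concrete, here is a minimal sketch in R (the statistical software I mention below) of the kind of reformatting involved: reading a hypothetical fixed-width legacy extract and re-saving it in a plain, modern format.  The file name and column layout are invented for illustration; the real positions come from each file’s record-layout documentation.

# Hypothetical record layout: year (4 characters), region code (2), population (8)
widths    <- c(4, 2, 8)
col_names <- c("year", "region", "population")

# read.fwf() parses fixed-width records into a data frame
legacy <- read.fwf("census_extract_1961.dat",   # invented file name
                   widths = widths,
                   col.names = col_names,
                   colClasses = c("integer", "character", "numeric"))

# Re-save as plain CSV, a format contemporary software reads directly
write.csv(legacy, "census_extract_1961.csv", row.names = FALSE)

In real files the hard part is usually not the reading itself but recovering the layout and coding schemes from the original documentation.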

Last year, I worked with Dataverse – an online platform for research data.  It integrates with R (an open-source statistical language) and claims to be able to automatically reformat data over time.  Only time will tell to what extent Dataverse is able to maintain usability.  In any case, its attention to preservation is progressive.

Peer Review

Unfortunately I don’t have any personal experience with peer review, as I have never submitted a paper for publication.  I also missed the class on peer review, as I was recovering from a bad bout of food poisoning, so I apologize if anything in this blog post repeats a discussion that took place in class.  But from what I’ve gathered from the Lovejoy et al. (2011) and Fitzpatrick (2009) readings, peer review strikes me as a double-edged sword that, like so many other institutions in academia (and in the world in general), has both positive and negative aspects inherent within it.

Clearly there exists a need to assess the validity of claims in academic journals.  This seems most relevant in fields like the medical sciences, where the veracity of claims can affect the very lives of patients.  I would certainly not be comfortable being prescribed a medicine if the research findings about it were not published in a journal with very rigorous standards.  But when it comes to other academic fields, I do agree with Fitzpatrick’s overall statement that at times the peer-review process can lead to the exclusion of new and interesting ideas.  I also agree with her argument about Wikipedia – that it would be more useful to teach students how to properly use it as a source, rather than banning it altogether – however, I can’t really get behind her notion that Wikipedia itself is basically a platform for ongoing peer review (p. 10).  While its pages are certainly undergoing a process of continual editing, I don’t know if you could in good conscience call all of Wikipedia’s editors “peers.”  If the logic of peer review is the critiquing or editing of a work by experts in the field (or a closely related field), I doubt that everyone who contributes to a Wikipedia page could be considered an expert.

I also agree with Fitzpatrick’s emphasis on the fact that the advent of digital publishing is changing the process of peer review, as well as other standards of legitimacy.  This seems to mirror the way the prevalence of blogging and tweeting is changing the process by which news stories are fact-checked and vetted before being released to the public.  This movement appears unstoppable, and it is clear that the standards by which all publications are validated need to adapt accordingly; however, I’m unqualified to venture a guess at how this could be accomplished.  At the risk of sounding cynical, the only solution I can think of is to instill in future generations a greater sense of the value of critical analysis, so that they will learn to consult a variety of sources in order to make informed decisions.

Fitzpatrick, K. (2009). Planned Obsolescence: Publishing, Technology, and the Future of the Academy. Retrieved from mediacommons.futureofthebook.org/mcpress/plannedobsolescence/

Lovejoy, T.I., et al. (2011). Reviewing manuscripts for peer-reviewed journals: A primer for novice and seasoned reviewers. Annals of Behavioral Medicine, 42(1), 1-13.