Saturday 30 November 2013

Peer Reviewing my Peers…

Well, I am quite late for this post.  To make up for my tardiness, I decided to take this as an opportunity to respond to all of my group members' previous posts on this topic.  Think of it as a sort of peer review (and, like peer review, it comes long after the original submissions!)

Akash has found some other interesting incidents involving controversy in peer review.  It is interesting to note that some incidents of peer review failure were cases where the research was perfectly valid, but completely plagiarized.  I think this serves as a good example of where peer review might not be able to uphold a standard.  Most peer reviewers would only be reviewing an article to see whether the ideas it espouses have any merit, so I would not be surprised if many were caught off guard by an idea that is perfectly sound but completely unoriginal.  I think that this is especially true if you consider the fact that most research does not occur in isolation; it would be very difficult to be certain that a good idea in one paper hasn't already been thought of in another.

I like Lauren's discussion of Wikipedia as a source of peer review.  And I agree with her assessment that, despite Fitzpatrick's assertion that Wikipedia is a platform of ongoing peer review, the reality does not live up to that standard.  It is good to point out that most Wikipedians are not experts, as I think that this very much does make a difference in the quality of the review.  I have seen and heard of some of the infamous "edit wars" that can occur in Wikipedia over the placement of a single word.  A controversy caused by the placement of a word is less the exception and more the rule.  I believe that this is for the precise reason that Lauren indicates: Wikipedians are mostly non-experts; but you do not need to be an expert to have an opinion on word placement.

I also think that this is why "Wikigroaning" exists.  "Wikigroaning" is a term coined for the practice of comparing the word count of a serious encyclopedia article with that of a pop-culture one.  The resulting groan you will experience is how the term got its name.  Or, do a Wikigroan with my favourite method: compare an article on a serious subject on Wikipedia: http://en.wikipedia.org/wiki/Spanish_Civil_War (word count: ~14000)
With an article on the Star Wars Wookieepedia: http://starwars.wikia.com/wiki/Wicket_Wystri_Warrick (word count: ~25000)

I don't know if I completely agree with Jess' point on the Sokal affair (sorry!): that if satire slips past peer review, it is a sign of sickness in the discipline itself.  I will go back to my first point, about a peer reviewer being caught off guard by a plagiarized concept, and say that the same is true here.  If you are not expecting to look for satire, it can be extremely hard to spot, especially if you are the one the satire is directed towards.  We can have a very large blind spot when it comes to our own ideas and beliefs, and satire can very easily slip right past us.  And oftentimes, the point of the satire is completely lost on those it is directed against.  In the Sokal affair, the editors of the journal said that they did in fact read Sokal's article but saw nothing wrong with it; they thought it just needed to be better written.  When something like this happens, you can start to lose track of who is "trolling" whom…

Vanessa's post about the peer review process was quite interesting.  Hearing about her friend's experience with peer review makes me wonder more about the process as well.  How is it that her friend could find a paper fundamentally flawed, yet others see it as being fit for publishing?  The questions that Vanessa raises at the end of her post are all good points, and I think that there are no easy answers to any of them.  Hearing her friend's experience with peer review does make me start to see the more troubling implications of peer review (I am just speculating here): What if her friend really wasn't qualified to read the paper, and completely misinterpreted it? Or, even worse, what if she was the only one who actually read the paper, and found it flawed?  Not something that you would want to think about too much…

I like Cynthia's point about the personal impact of peer review.  When you are choosing to publish a paper, it is really up to you where you want to publish it and what kind of feedback you are looking for.  While I am sure that many academics would like to see their work in the most prestigious journal, it is really more about seeking the feedback and recognition that will help your ideas to grow.  It is up to the individuals who publish papers to decide how they want their work recognized, and which peer review process does or doesn't work for them is really their choice to make.

I found myself laughing when reading Eva's post.  I liked her assessment of the Sokal affair, and found that it was in line with my own thinking: it only created a controversy, and did not really advance a new topic or any enlightened change in the system.  What made me laugh was that Eva's post made me imagine Sokal's hoax being assessed as if it were itself a submitted paper (a sort of meta-analysis).  Eva's summation of it as not being a very original idea is what gave me a chuckle.  Eva also has a good point about peer review being flexible in regards to open vs. closed review.  I agree that a combination of both would allow for more constructive reviews.  And I think this relates to Eva's point about authors being flexible in their choice of peer review.

Courtney's post about peer review and the full stomach reminded me of my legal work.  I think I had heard about her professor's research in my studies as well.  It is a well-accepted fact of legal work that judges are humans too, and are subject to the same weaknesses as everyone else (so it is never wise to do anything to annoy them, including presenting your case at lunch hour).  Though Courtney relates this to reviewers having an empty stomach, and thus giving a negative review of a paper, I think that the observation others have made about the possible bias of reviewers is a more probable concern.  An empty stomach is the least of your worries when you are dealing with humans who have all the same biases and weaknesses that you have.  (I still think that attaching a cookie to your paper is an excellent idea.)

Ghaddar makes a good point about nonsensical writing not being limited to single papers, but extending to entire disciplines.  Whenever I encounter the subject of pseudoscience, I always think about phrenology, and because I am a Simpsons fan, I think about Mr. Burns' response when Smithers says that "phrenology was dismissed as quackery 160 years ago": Mr. Burns says, "Of course you'd say that...you have the brainpan of a stagecoach tilter!" (Simpsons, 3F06).  I think that this quote actually teaches us something about peer review: that it is useless when an entire discipline is caught up in its own nonsense.  Ghaddar also makes a good point about the usefulness of allowing public review of academic papers.  This might help to shed some light in the darker corners of academia, where peer review makes less difference.

Well, this has been the longest post ever.  I hope that I was able to hit on some good points in response to each person, and that my peer review of their work helps them out.  Also, I have no idea what the correct format for citing Simpsons episodes is, so I hope that my attempt works.  And now, I hereby deem all of the previous posts to have passed my peer review and be suitable for publishing!




Friday 29 November 2013

Peer review and a full stomach

Ok, this post may be stretching the topic of peer review a bit, but I want to talk about the connection between peer review and a full stomach. To explain this proposed connection, I will describe a study that one of my undergraduate professors, Mark Fenske, performed on the connection between judges' verdicts and when they last ate (I'm sorry, I tried to find this paper but couldn't, so you will have to take this description on faith). The study examined the correlation between a judge's decision on whether convicts should be released and the time the judge last ate. Fenske found a positive correlation between having just eaten and granting parole, and between not having eaten recently and denying it. In general, when judges had full stomachs, they tended to review convicts more charitably. Ever since I read about this study I have been terrified that my professors will read my assignments on an empty stomach.

So what does this study have to do with peer review? Although, in many cases, multiple people review the same paper, what if the person who sent the paper for review is plagued with bad luck and none of the reviewers have eaten in hours? This topic is intended to seem trivial and lighthearted, but it points to something deeper: systematic review is still subject to luck and probability because of the complexity of the human condition and the universe as a whole. Following the tradition of Aristotle, who argues that moderation of the mind is not enough to succeed (one also needs luck), and Spinoza, who argues that beings are specific possibilities within a set of possibilities that must be expressed for the universe to be, I argue that the peer review process requires a full stomach, along with many yet-to-be-determined factors that I operationalize as luck.

The moral of this convoluted post is that even if you write the most amazingly inventive paper and attach a cookie to your paper when you hand in for review, it might still get rejected, so keep trying until you get lucky.

Clear and Dear? Peer Reviews...

Reading the articles this week, I certainly found more affinity with Fitzpatrick (2009). Maybe this is because I have no experience in the world of academic publishing, and thus a "freshening" of the publishing system sounds kind of thrilling to a newbie. And why shouldn't it be? Peer review, which is a form of evaluation (and as such, is inherently variable) by other members of a specific community (usually), can always be rebuilt, renovated, and remodeled.

However, although Sokal's hoax has generated numerous discussions about the process of peer review, I think the resulting debates were minimally constructive, and essentially became a series of misreadings about misreadings about misreadings... ("Sokal Affair", n.d.). They didn't really move intellectual debates very far. Humanities vs. sciences? Not very original.

There are many issues in this field, including access, cost, and peer review (not to mention the complex process of navigating interdisciplinary publications, where a singular authority on a particular matter might not exist). Regarding access, I do think that publications should be accessible to a more varied audience (this relates mostly to issues of cost and funding). Access is connected to peer review - who is included in this process? Some of the examples Fitzpatrick brings up, including very open methods of peer review in which there was no incentive to comment (and thus no commenting occurred), demonstrate that the issues of peer review are not simply an either/or kind of situation. Closed, structured peer review and open, public peer review are not the only options; there is a lot of wiggle room in between, so it will be interesting to see what publishing organizations do with this wide open space.

I really think that the object of peer review should be a constructive one - namely, how can articles be improved and worked on in order to contribute a well-crafted, well-researched, well-argued piece? Of course, "well," "good," and "bad" are all subjective terms...although are they? Does subjectivity find a place in all academic fields, or is it really just exclusive to the humanities and social sciences, which are somewhat vague categories too? And where does subjectivity fit into the information field, which is so interdisciplinary? A dilemma of sorts...Suddenly peer review becomes more complicated than first anticipated!

References

Fitzpatrick, K. (2009). Planned Obsolescence: Publishing, Technology, and the Future of the Academy. Retrieved from mediacommons.futureofthebook.org/mcpress/plannedobsolescence/

Sokal Affair. (n.d.). In Wikipedia. Retrieved November 28, 2013, from http://en.wikipedia.org/wiki/Sokal_affair

 

Peer Reviews By Cynthia Dempster

I think that the traditional approach espoused by Lovejoy (2011) and Fitzpatrick's (2009) evolutionary approach have different merits. If I write an article that is innovative, I might prefer an open peer review. If I am writing an article in an area where traditional expertise is important, I may prefer a more thorough traditional review process. It is unlikely that I would submit an innovative article to a journal that I perceived as being stodgy or old school; such a journal would probably reject my manuscript. It is very likely that the peer review process and the journal's attitude to peer review fit the mandates and interests of the publication itself. A journal that is interested in innovative approaches and ideas is not going to have articles reviewed by a blind retired professor, age 125, who hates change. The comments offered in the context of an open review are frequently offered off the cuff; they can relate to one aspect of the article without balancing particular issues against the whole. A traditional peer review will probably be more comprehensive, and the thoughts contained in it may have matured with time.

I think it is important to view peer reviews as tools available to serve us as writers rather than obstacles to being published. If there is not a publishing option or a peer review process that suits us, then we can choose to self-publish or start our own journal with our own mandates. We all have an individual voice and an individual point of view. We may choose publishing methods and peer review processes that suit us. Publication is, in the end, just a means of communication. Peer review is a means of receiving valuable feedback and input from others. We are in charge of our own work and careers. If the options we want don't exist, we can create them.

On another note, I do not like practical jokes or hoaxes. Usually someone is embarrassed or ridiculed. I prefer direct, kind and courteous ways of making a point.

Peer Review - Ignored?

Earlier this year I was discussing John Bohannon's "experiment" with the peer review process of open access journals (CBC News, 2013) with a friend of mine in the sciences. The "sting operation" he performed has been criticized as biased since then (Taylor, Wedel & Naish, 2013), and was published largely as a news item even then, so it's clearly not an ideal examination of the peer review process. However, what we ended up discussing was her experience as a reviewer, as she had recently been asked to perform a review in place of her busy supervisor.

When my friend was reviewing her first ever peer-reviewed paper she felt terribly guilty for being highly critical of it, but she found it fundamentally flawed and didn't recommend it for publishing. As a participant in the experimental process she knew how much work had gone into the paper, and felt bad for the scientists involved, but she knew that her duty as a reviewer was to be critical and honest about adherence to the scientific process and the quality of research. After agonizing over the review, she was very surprised later on to find that the paper she thought she had condemned to rewriting had been published despite her input. She also told me that many of her colleagues had reviewed papers in the past that they deemed not fit for publishing, and had seen them go on to be published in peer-reviewed journals. Further, all of the members of her lab review for non-open-access journals.

While it's true that the peer review process involves multiple reviewers and the input of a single scientist can't be taken as the sole possible view of a paper's worth, this raises concerns about the validity of the peer review process even when it is working as originally designed in a closed-access journal of repute. While there is much debate about the potential harm that can be done when changing the peer review process, there are still many questions that can be asked about the quality of the traditional process as it is implemented today. How many of the reviewers are qualified to perform the review - and how many of them pass it on to a less experienced assistant or student working under them? How many of the reviewers approach the review process seriously and pay close attention to the details of the paper? How well do editors and selection committees adhere to the advice the reviewers give? And how well does the peer review process truly represent the opinions of the scientific community they are meant to embody?

CBC News. (2013, Oct 14). Bogus science paper reveals peer review's flaws. Retrieved from http://www.cbc.ca/news/technology/bogus-science-paper-reveals-peer-review-s-flaws-1.2054004
Taylor, M., Wedel, M., & Naish, D. (2013, Oct 7). Anti-tutorial: how to design and execute a really bad study. Retrieved from http://svpow.com/2013/10/07/anti-tutorial-how-to-design-and-execute-a-really-bad-study/

Thursday 28 November 2013

Pear-reviewed journals



This question of peer-review resonates with me because of my social psychology background.  Peer review is capable of judging whether given research meets disciplinary best practices.  However, sometimes disciplinary methods themselves are inadequate.  These inadequacies are not the fault of peer review, per se, but of the discipline itself.  In the case of the Sokal affair, the problem was not the peer review; it was the academic discipline of postmodern cultural studies itself.  When satire slips past peer review it is a sign of sickness in the discipline itself.

Peer review is important; but it is not sufficient.  It can test whether research follows best practice but it cannot advance that established practice or offer more profound commentary.  In my experiences in the experimental sciences, peer review is respected to such an extent that new or non-standard methodologies that have a limited capacity for peer review (because of their novelty) are viewed as suspect.  The rituals of publication may, in fact, be impeding creativity and innovation.

As an aside, I thought I’d include a picture of this spam “pear-reviewed” journal that is soliciting articles.  Your chance to be published!


Preserving StatCan data



I’d like to take some liberties in answering this question to respond instead in regard to preservation of StatCan data (that I work with).  I’ve worked with data for the last three years, and in that time my attention to preservation has grown with experience.  Some of the early censuses (late 1800s and early 1900s) have valuable information about Canada – but only some of these files are available to researchers because most have not yet been digitized.  It is a painstaking process to manually enter this old data.

Fast-forward 50 years and we encounter a similar problem; although the existing data is machine-readable, those machines (and software) no longer exist.  It takes an expert to reformat the data and syntax to be used with contemporary technologies.
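To give a hypothetical flavour of that reformatting work (the file layout and numbers below are invented for illustration, not real StatCan data), old extracts are often fixed-width text whose column positions are known only from a codebook, and a modern tool like pandas can pull them apart:

```python
import io

import pandas as pd

# Invented legacy extract: per a (hypothetical) codebook, characters 0-3
# hold the year, 4-5 a province code, and 6-12 a population count.
raw = io.StringIO(
    "190135  51000\n"
    "190146  20000\n"
)

df = pd.read_fwf(
    raw,
    colspecs=[(0, 4), (4, 6), (6, 13)],  # (start, end) offsets per field
    names=["year", "province", "population"],
)

# Re-serialize in a contemporary, self-describing format.
csv_text = df.to_csv(index=False)
print(csv_text)
```

The hard part, of course, is not the parsing but recovering the codebook and semantics in the first place, which is exactly where the expert comes in.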

Last year, I worked with Dataverse – an online platform for research data.  It runs on R (an open source statistical software) and claims to be able to automatically reformat data over time.  Only time will tell to what extent Dataverse is able to maintain usability.  In any case, its attention to preservation is progressive.

Peer Review

Unfortunately I don’t have any personal experience with peer-review, as I have never submitted a paper for publishing.  I also missed the class on peer-review, as I was recovering from a bad bout of food poisoning, so I apologize if anything in this blog post repeats a discussion that took place in class.  But from what I’ve gathered from the Lovejoy (2011) and Fitzpatrick (2009) readings, peer-review strikes me as a double-edged sword that, like so many other institutions in academia (and in the world in general), has both positive and negative aspects inherent within it.

Clearly there exists a need to assess the validity of claims in academic journals.  This seems to be most relevant in fields like the medical sciences, where the veracity of claims can affect the very lives of patients.  I would certainly not be comfortable being prescribed a medicine if the research findings about it were not published in a journal with very rigorous standards.  But when it comes to other academic fields, I do agree with Fitzpatrick’s overall statement that at times the peer-review process can lead to the exclusion of new and interesting ideas.  I also agree with her argument about Wikipedia – that it would be more useful to teach students how to properly use it as a source, rather than banning it altogether – however I can’t really get behind her notion that Wikipedia itself is basically a platform for ongoing peer-review (p.10).  While its pages are certainly undergoing a process of continual editing, I don’t know if you could in good conscience call all of Wikipedia’s editors “peers.”  If the logic of peer-review is the critiquing or editing of a work by experts in the field  (or a closely related field), I doubt that everyone who contributes to a Wikipedia page could be considered an expert. 

I also agree with Fitzpatrick’s emphasis on the fact that the advent of digital publishing is changing the process of peer-review, as well as other standards of legitimacy.  This seems to mirror the reality that the prevalence of blogging and tweeting is changing the process by which news stories are fact-checked and vetted before being released to the public.  This movement appears unstoppable, and it is clear that the standards by which all publications are validated need to adapt accordingly; however, I’m unqualified to venture a guess at how this could be accomplished.  At the risk of sounding cynical, the only solution I can think of is to instill in future generations a greater sense of the value of critical analysis, so that they will learn to consult a variety of sources in order to make informed decisions.

Fitzpatrick, K. (2009). Planned Obsolescence: Publishing, Technology, and the Future of the Academy. Retrieved from mediacommons.futureofthebook.org/mcpress/plannedobsolescence/

Lovejoy, T.I., et al. (2011). Reviewing manuscripts for peer-reviewed journals: A primer for novice and seasoned reviewers. Annals of Behavioral Medicine, 42(1), 1-13.

Wednesday 27 November 2013

Peer Review Review

I think the necessity for and credibility of peer review is an interesting issue. On the one hand, it is a system of checks and balances to ensure that authors meet the accepted standards of their discipline and reduces the dissemination of irrelevant findings, unwarranted claims, unacceptable interpretations, and personal views. An article having gone through peer review automatically affords it a certain degree of credibility in the eyes of readers and fellow academics. Publications that have not undergone peer review are likely to be regarded with suspicion by scholars and professionals.

Whether this level of faith or trust in the system is merited can be argued; peer review may make the ability to publish susceptible to control by elites and, potentially, even to personal jealousy and other kinds of bias. Reviewers tend to be especially critical of conclusions that contradict their own views, and lenient towards those that accord with them. Ideas that harmonize with those of the established experts in a field (who are generally the ones chosen to be peer reviewers) are also more likely to see print and to appear in premier journals than are iconoclastic or revolutionary ones [1].

For the purposes of this blog post, and keeping with the controversy theme (the Sokal affair that Prof. Galey brought up), I thought it might be interesting to look up some incidents of 'peer review failure'. Failure, in this context, means publishing work containing obvious fundamental errors that undermine at least one of its main conclusions, publishing well-known information as a new discovery, or rejecting important valid work out of hand. Retractions and letters-to-the-editor that correct major errors in articles would generally constitute peer review failures [2].

The Wikipedia article on peer review failure put me on to the publication of 'Tai's method' [3] for calculating areas under curves (in this case glucose tolerance and other metabolic curves) as original research. The method republished in this paper, Riemann sums (specifically, the trapezoidal rule) for numerical integration, is a technique taught in high school calculus and something that I've used personally a fair amount. So naturally I found it particularly intriguing, and even amusing. Apparently it is a prominent example of a well-known idea being re-branded as a new discovery. I learnt that Edward Jenner's report of the first vaccination against smallpox was rejected too!
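For fun, here is a minimal sketch of what "Tai's method" boils down to: summing the areas of trapezoids between sample points (the glucose numbers below are made up for illustration):

```python
def area_under_curve(xs, ys):
    """Approximate the area under a sampled curve by summing trapezoids."""
    area = 0.0
    for i in range(1, len(xs)):
        # Each segment contributes its average height times its width.
        area += (ys[i - 1] + ys[i]) / 2.0 * (xs[i] - xs[i - 1])
    return area

# Hypothetical glucose readings (mmol/L) at times in minutes.
times = [0, 30, 60, 90, 120]
glucose = [5.0, 8.0, 7.0, 6.0, 5.5]
print(area_under_curve(times, glucose))  # 787.5
```

Hardly a new discovery, which is exactly why the reviewers should have caught it.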

I guess we have to concede that peer review in scientific journals assumes that the article under review has been honestly written; the process is not designed to detect fraud. The reviewers usually do not have full access to the data from which the paper has been written, and some elements have to be taken on trust. Therefore peer review is not considered a failure in cases of deliberate fraud by authors. It is not usually practical for the reviewer to reproduce the author's work, unless the paper deals with purely theoretical problems that the reviewer can follow in a step-by-step manner.

I think the system of open peer review, leveraging the power of the internet to obtain rapid and detailed feedback, is fascinating. Of course, filtering the quality feedback from the rest is tricky given the sheer volume of input, but I'm certain that with advances in natural language processing and machine learning, such systems can be considered a seriously viable alternative. I looked up the Shakespeare Quarterly experiment and thought it was a great first step towards such a movement [4].

I also think that disclosing the identities of reviewers to the authors should ideally facilitate healthy discussion between both parties that leads to better science. At the end of the day, it is people's biases and egos that stop this from happening, and it is important for scholars to rise above them and look at the bigger picture.

REFERENCES


[1] Higgs, Robert (May 7, 2007). "Peer Review, Publication in Top Journals, Scientific Consensus, and So Forth". Independent Institute. Retrieved April 9, 2012.

[2] Wikipedia article: Peer Review Failure - http://en.wikipedia.org/wiki/Peer_review_failure

[3] Tai, M. M. (1994). A mathematical model for the determination of total area under glucose tolerance and other metabolic curves. Diabetes Care, 17(2), 152-154.


[4] Cohen, P. (2010). Scholars test web alternative to peer review. The New York Times. Retrieved from http://www.nytimes.com/2010/08/24/arts/24peer.html?pagewanted=1&_r=2&ref=arts






Peer Review and Confirmation Bias


A few years ago one of my professors discussed the concepts of Peer Review, Gatekeeping and the Ivory Tower. Her lecture discussed how the culture and environment of a department and/or field can affect the literature produced by researchers. If the body of researchers in a peer review pool share the same stance on certain issues, a confirmation bias can occur: work that reflects their opinions passes the peer review process, while work contrary to their convictions may get thrown out the door. This can carry over into thesis committees and journals, limiting the diversity of work accepted; reviewers thus act as gatekeepers of what knowledge is deemed acceptable. This raises the question of how authentic certain research is, and we arrive at the Ivory Tower analogy.

A somewhat similar process occurs in the publishing industry where agents and imprints only select works which they know will sell and can make a profit off of. In the publishing industry a number of alternatives have been used to circumvent this issue such as small presses and publications. I know in academia researchers can take their work to other journals or students can try and find other advisors for their work, but I wonder what the consequences of this are. I am very much open to using and legitimizing alternative sources of information and publishing, however, to what degree (if at all) do we need a framework to operate within?

Daniel Kahneman’s book Thinking, Fast and Slow specifically discusses our own cognitive and personal biases. One interesting example he gives is a test he did on himself to see if he was biased in grading his students' papers. He confirmed that how well the first few papers he graded fared, combined with his own energy levels, affected the median grade of the course; students whose papers were marked towards the end, depending on a variety of factors, had less of a chance to do well. This makes me wonder about the peer review process and whether or not we should hold it up as our gold standard.

Friday 22 November 2013

Preservation and security…

I think that there are a few ways to preserve your data for the future.  One principle that I follow in many areas that require thorough record keeping – or attention to detail – is the principle of redundancy.  Redundancy is not a bad thing in those contexts, because keeping many copies of the same document can help ensure that preservation is successful.  Keeping many saved copies of documents, and using cloud-based services, will help keep your documents safe in case of machine failure (or other catastrophe).

In order to ensure the opposite effect – destroying sensitive material when necessary – I would use the reverse of the same principle: keep as few copies as possible.  And to add to the security and safety of the document, I would also say that it should be encrypted.  So, even if you lose track of the document or have it stolen, the information is still (relatively) safe.

Actually, both my points about document preservation have basis in the subject of security.  I mentioned before that redundancy is a principle that I follow in many areas, and security is one of those areas.  It is generally understood that there is no foolproof system for ensuring absolute security.  However, if you use multiple redundant systems then if one system is broken the next will continue to uphold security.  And the encryption of (sensitive) documents is generally good practice for anyone, even those without confidential research materials.
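As a small practical sketch of the redundancy principle (the file paths would be whatever your own copies are called), a checksum comparison can confirm that all of your redundant copies are still identical:

```python
import hashlib


def checksum(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def copies_match(paths):
    """True if every copy hashes to the same digest."""
    return len({checksum(p) for p in paths}) == 1
```

If copies_match ever returns False, one of your backups has been corrupted (or tampered with), and the redundancy has done its job by letting you notice before it's too late.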

Another idea to keep in mind with the preservation of research methods is what I mentioned in class in regards to references.  Using too many contemporary references can make your paper more difficult to understand for a future audience.  You should tailor your references and ideas to be understood by people in any context, not just those who you would assume have the same level of cultural understanding as you.  I remember in high school when we were studying Hamlet, and I encountered this quote:

Such dear concernings hide? Who would do so?
No, in despite of sense and secrecy,
Unpeg the basket on the house’s top.
Let the birds fly, and like the famous ape,
To try conclusions, in the basket creep 
(Hamlet, Act 3, Scene 4, Line 195-199)

I went to the footnotes of my text, and I found that this reference to the "famous ape" is completely lost to a modern audience.  No one has any idea what Shakespeare is referring to in this quote, because this parable has not survived.  This means that we cannot form any real conclusions about what Shakespeare is actually saying in this quote.  There might be some great sub-textual meaning, but without the original reference, we will never know.  (My theory is that, since Shakespeare also liked to create new words, he created this reference "out of thin air" just to confuse us).

Legacy matters



It seems that throughout history, matters of preservation and legacy grow in importance as one gets older. People go about preserving their legacy in different ways, including sponsoring artists to shout out their name in a song, getting a renowned scientist or researcher to cite them, drawing graffiti in a public place like the subway or a park, starting a foundation, or making a family-branded quality product year after year. Some methods prove more long-lasting than others, and animals emulate similar preservation patterns with their territory-marking tactics.


Wallpaper Converter. (n.d.). London graffiti wallpaper. Retrieved November 22, 2013, from http://www.wallconvert.com/converted/london-graffiti-park-stock1315-61234.html

In the context of research, it is obvious that some of the best ways to keep research relevant are discovering something revolutionary or field-advancing, as well as referrals by colleagues and peers from various disciplines. As for documenting and preserving the ideas, theories, and assumptions one works through, from embarking on a research project to completing it and reflecting on it, I think Kristin Luker's analogy of thinking through writing can be a good starting point for accumulating research artifacts and notes that can be released later to complement future introspection into the research methodologies.