31 points xqcgrek2 | 5 comments | source
1. mmooss ◴[] No.43560086[source]
What are the requirements of a review? And what is the marketplace for someone meeting those requirements?

What expertise is required - someone who researches the same questions? Same general domain? Adjacent domain?

And how long does it take? I imagine that depends on many details.

Finally, what are they reviewing for? Is it a once-over for errors in method? Something like grading a student paper?

replies(2): >>43560498 #>>43562794 #
2. tsumnia ◴[] No.43560498[source]
Speaking as a CS Education reviewer, sometimes the only criterion is "signing up to review", though solicitation is often sent to professionals in the domain (through personal requests or blanket email campaigns), as well as through the respective mailing lists. I review papers for, I think, 4-5 conferences, mostly because I have colleagues that serve/publish in those spaces (you declare conflicts of interest to avoid bias).

Each publisher/conference has its own reviewing guidelines to follow, but at least the conferences I've reviewed for ask for: a summary (2-5 sentences tops), the strengths and weaknesses of the research, and potentially your opinion on the piece. You are typically asked to state your familiarity with the research space, since you may be reviewing methodologies that you were not explicitly trained in. This all distills into a metric that effectively says "this paper should/should not be accepted", which is then handed to a 'senior' reviewer to summarize for the conference to decide. All of my conferences are double-blind, single submission, but I have colleagues at venues where authors are able to respond to reviewer critiques.

Most conferences recognize that things like grammatical issues can happen, so reviewers are asked to only point them out rather than use them as a basis for rejection; however, if the paper is riddled with mistakes, that can be grounds for rejection. Likewise, since CS Education is a combination of CS and cognitive psychology, some of the discussion can turn on "appropriateness for CS education research". For example, I once reviewed a paper that was clearly about theater-based education techniques but had CS shoehorned into one paragraph (that was it). A better fit would be something like measuring time delays in student responses to a tutoring system, which can help distinguish when students become distracted or take a break.

replies(1): >>43561280 #
3. mmooss ◴[] No.43561280[source]
Thanks. Someone told me that 'blind' review often doesn't work in practice, because reviewers already know who is doing what in their field.
replies(1): >>43561358 #
4. tsumnia ◴[] No.43561358{3}[source]
It can depend on the field and the methodologies that are used - there have been some papers I've reviewed where I could guess who the authors were from the contents. I can't really offer a counterpoint on non-blinded reviews, as I've only done blind ones. I have heard that some reviewers use the anonymity to be particularly rude, but I've only ever experienced that once, and I used our 'discussion' phase to express my concerns.
5. goosedragons ◴[] No.43562794[source]
Generally they want to know whether the paper is worth publishing and what needs fixing, clarification, etc. The reviewers should be people who understand the topics in the paper well enough to identify issues - usually people who have published articles on similar topics, or people those people recommend. It's more in-depth than grading a student paper.