How much deep research does $250 yield by comparison?
Knowledge market > Examples; Google Answers, Yahoo Answers, https://en.wikipedia.org/wiki/Knowledge_market#Examples
I'm also deeply suspicious of the confidentiality of anything sent to one of those sites.
However, this does suggest the idea that a high-powered university in a low-income country might be able to cut a deal to provide reviewing services...
That does not sound like a notable effect on either end. (I was once offered payment for a peer review, but declined it.)
What expertise is required - someone who researches the same questions? Same general domain? Adjacent domain?
And how long does it take? I imagine that depends on many details.
Finally, what are they reviewing for? Is it a once-over for errors in method? Something like grading a student paper?
So much so that when I did Chemistry at uni, I got asked a few times in labs if I was cheating, until I explained.
It's actually really hard to get any experiment perfect the first time.
Even with a year's practice of measuring and mixing and titration and all the other skills you need, I'd still get low yields, or bad results occasionally. Better than everyone else, but still not perfect.
I also noticed that the more you do a particular process, the better results you will get. Just like practicing a solo on an instrument lots, or a particular pool shot, or cooking a particular meal. There's a level of learning and experience needed for each process, not all chemistry in general.
You either publish the range of results, the average plus standard deviation, or the average plus standard deviation of a subset together with the exclusion criteria and excluded range. Picking a single result is a lie, plain and simple, and messiness is not an excuse.
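The reporting conventions described above can be sketched numerically. This is a minimal illustration with made-up yield numbers (the values and the "practice run" exclusion criterion are assumptions, not from the thread), showing the honest options versus the cherry-pick:

```python
import statistics

# Hypothetical yields (%) from repeated runs of the same synthesis.
yields = [62.0, 71.5, 83.2, 85.0, 86.1, 84.7, 89.0, 85.5]

# Honest option 1: the full range of results.
print(f"range: {min(yields)}-{max(yields)}")

# Honest option 2: average plus standard deviation over all runs.
print(f"mean: {statistics.mean(yields):.1f} +/- {statistics.stdev(yields):.1f}")

# Honest option 3: a subset, but only with a stated exclusion criterion,
# e.g. "first two runs excluded as operator practice runs".
subset = yields[2:]
print(f"subset mean: {statistics.mean(subset):.1f} +/- {statistics.stdev(subset):.1f}")

# What the comment calls a lie: quoting only the single best run.
print(f"cherry-picked 'result': {max(yields)}")
```

Note how the cherry-picked 89.0 sits well above even the excluded-subset mean; without the variance and exclusion information, a reader has no way to tell how representative it is.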
Each publisher/conference has its own reviewing guidelines to follow, but at least for the conferences I've reviewed for, they include: a summary (2-5 sentences tops), the strengths and weaknesses of the research, and potentially your opinion on the piece. You are typically asked to state your familiarity with the research space, since you may be reviewing methodologies you were not explicitly trained in. This all distills into a metric that effectively says "this paper should be accepted/not accepted", which is then handed to a 'senior' reviewer to summarize for the conference to decide. All of my conferences are double-blind single submission, but I have colleagues who are able to respond to reviewer critiques.
Most conferences recognize that things like grammatical issues can happen, so reviewers are asked to only point them out rather than use them as a basis for rejection; however, if the paper is riddled with mistakes, that can be grounds for rejection. Likewise, since CS education is a combination of CS and cognitive psychology, some of the discussion can be devoted to "appropriateness for CS education research". For example, I once reviewed a paper that was clearly about theater-based education techniques and had CS shoehorned into one paragraph (that was it). Alternatively, measuring time delays in student responses to a tutoring system can help distinguish when students become distracted or take a break.
Truck passing by on the nearby road? Oops, my physics experiment got shaken, results look messy. Lab animal caught a cold? Oops, genetics experiment now has messy data. Atmosphere is turbulent and some shitty Starlink satellite passed by at the wrong moment? Oops, my stellar spectra are messy now. Imperfection in my test ingot? Oops, now my tensile strength measurements have messy data because a few ripped too early...
It is the nature of experimental science to deal with messiness. And dealing with it means being honest about it. You write it up the way it happened, find the problems in the messy parts of your data, exclude them, and explain the why and how. Hand-picking results and simply omitting data you find inconvenient is not science, it's fraud.
If I'm allowed to just pick one result, I can show you a perpetual motion machine, cold fusion, superhuman intelligence in mice, and tons of other newsworthy things...
As an aside, I'm working at a QC chem lab now, with results that have a direct impact on revenue calculations for clients. The reports go to accountants, and therefore error bars don't exist. We recently had a case where we reported 41.7 when the client expected 42.0, on a method that's +/- 1.5... They insisted we remeasure because our result was "impossible". The repeat gave 42.1, and the client was happy to be charged twice.
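The arithmetic behind that anecdote is worth spelling out: with a method uncertainty of +/- 1.5, both the original 41.7 and the repeat 42.1 are entirely consistent with the client's expected 42.0, and with each other. A minimal sketch (the `consistent` helper is hypothetical, just illustrating the tolerance check):

```python
# Method uncertainty quoted in the comment: +/- 1.5 on the reported value.
TOLERANCE = 1.5

def consistent(measured: float, expected: float, tol: float = TOLERANCE) -> bool:
    """Two values agree if they differ by no more than the method tolerance."""
    return abs(measured - expected) <= tol

# First measurement vs. the client's expected value: |41.7 - 42.0| = 0.3 <= 1.5
print(consistent(41.7, 42.0))  # True

# The repeat vs. the first measurement: |42.1 - 41.7| = 0.4 <= 1.5
print(consistent(42.1, 41.7))  # True
```

In other words, the client paid for a second measurement that, statistically, told them nothing the first one hadn't.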
He's not going out of his way to reproduce papers; it's just on the way to turning peanut butter into toothpaste, or something of the sort.
From the way you're talking, I'm going to guess you're an armchair commentator.
One person performing an unfamiliar experiment once is going to get lower yields and occasional failures.
Do you mean to suggest that "commercial work" in science takes shortcuts and ignores the essentials of the scientific method? Do you mean to suggest that commercial scientists, or at least commercial chemists, writing science-like papers are all systematically misrepresenting their results? Do you think the standards for good scientific conduct do not apply to chemists or commercially working scientists? Because any of that would mean that "commercial work" in science is just fraud dressed up as science.
And yes, of course an experienced experimenter will get better, easier, more consistent results; everyone knows that. The issue is not about that at all. The issue is about suppressing results and data that you don't like. Those may result from initial inexperience or bad luck, normal variations in measurements, or whatever. You present all your data, with statistics, with an explanation, and if that explanation is "well, the initial 20 values are excluded from the reported average because of me being heavy-handed with the frobnicator", then that is fine. People can check your values and your reasoning and convince themselves that your reporting is right and your experiment works to the extent you reported. If you just say "the yield is 89%" without mentioning that all the other yields were worse, without mentioning any kind of variance, range, or exclusions, you are lying. That 89% was your single best yield; precisely because it was the best, you were never able to reproduce it, so it might as well have been leftover product from improperly cleaned glassware...
Are you really trying to convince me that all chemists are crooked like that? Or all commercial work in science is crooked?