PDA

View Full Version : What is "peer reviewed"?



sack316
11-03-2011, 12:16 PM
Guess it depends on what your definition of "is" is ;)

Some interesting stuff here. A renowned psychologist admits to faking dozens of scientific studies HERE (http://io9.com/5855733/psychologist-admits-to-faking-dozens-of-scientific-studies) .

Within that article is a link to another one questioning how effective our peer review process is scientifically. Almost sounds more like a grammar check to polish up the entry for journals than it is a check and confirmation of fact (?). Anyway, that article is HERE (http://blogs.scientificamerican.com/guest-blog/2011/11/02/what-is-peer-review-for/)

I post this because we quite often cling to "peer reviewed study" as a basis for insisting that our viewpoint, and the published article backing it, must be true because it is, after all, peer reviewed. I myself am included in that group.

But the above articles really got me thinking about a lot of that. I'm with Voytek, the author of the 2nd linked article above, in saying that we need to do better. Very eye-opening stuff on the process of publishing scientific data in supposedly reliable journals.

Sack

Soflasnapper
11-03-2011, 03:16 PM
There is little that peer review can do, even if done well, to determine that the data itself has been faked.

What it could do is perhaps find statistical manipulation of the data, and peer review sometimes does find that, whether a witting manipulation, or an accidental mistaken analysis.

But if the data is intentionally faked at the lab level or whatever, how COULD peer review find out? Only retrials that showed far different results could indicate the original data was faked, and re-doing the experiment is not what peer review entails.

cushioncrawler
11-03-2011, 03:39 PM
Einstein never ever published a peer-reviewed paper, not once.
And Alby woz mostly wrong, az we now know.

Peer-review in psychiatry iz like testing holy-water to see if it iz real holy-water or fake.

Science iz the search for a smarter question.
Allmost all (true) science iz wrong, or partly wrong.
mac.

cushioncrawler
11-03-2011, 04:39 PM
Why did the young tugboat take up smoking??
Pier pressure.
mac.

Soflasnapper
11-03-2011, 07:42 PM
<div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: cushioncrawler</div><div class="ubbcode-body">Einstein never ever published a peer-reviewed paper, not once.
And Alby woz mostly wrong, az we now know.

Peer-review in psychiatry iz like testing holy-water to see if it iz real holy-water or fake.

Science iz the search for a smarter question.
Allmost all (true) science iz wrong, or partly wrong.
mac. </div></div>

That's an interesting point that I hadn't ever known before.

Here's an interesting discussion. (http://physicstoday.org/journals/doc/PHTOAD-ft/vol_58/iss_9/43_1.shtml?bypassSSO=1)

Evidently, as Einstein had published all his output in German or European journals prior to coming to the US, he was never subject to what apparently became an American method of anonymous refereeing. This link tells of his becoming perturbed when a paper on the non-existence of gravitational waves was criticized by one such referee without his prior knowledge or approval. In a snit, he withdrew the paper from Physical Review and published it instead in another journal.

However, as the article points out, the reviewer was correct in identifying errors in Einstein's article, errors which AE himself soon discovered (independently, he claimed), after which he revised the paper.

Oddly, Einstein himself back in Europe had been called upon to referee OTHERS' papers, and his typical comment was whatever is the German word for 'worthless.' LOL!

Einstein's example is a reason FOR peer review, not a reason it is no good. His objection was basically, hey, I'm famous and you're not, and nobody in Europe ever caused my papers to be subject to anonymous review. His personal pique got in the way of accuracy.

sack316
11-04-2011, 07:37 AM
<div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: Soflasnapper</div><div class="ubbcode-body">There is little that peer review can do, even if done well, to determine that the data itself has been faked.

What it could do is perhaps find statistical manipulation of the data, and peer review sometimes does find that, whether a witting manipulation, or an accidental mistaken analysis.

But if the data is intentionally faked at the lab level or whatever, how COULD peer review find out? Only retrials that showed far different results could indicate the original data was faked, and re-doing the experiment is not what peer review entails. </div></div>

Certainly entire experiments and retrials cannot be done in most cases. But experts within a given field, reviewing without bias and studying the work objectively, could logically find flaws in a basic point-A-to-point-B argument and in the plausibility of its conclusions. Stapel has 30-odd published and "accepted" fraudulent works already at the very least... which is rather alarming!

And that's just one person in one field... it makes me wonder about other scientific studies, things that affect how we treat the Earth, how we treat people, how we invest funds, etc. etc. etc. How much is valid? How often do we become counterproductive following the "snipe hunt" of an 'accepted' yet invalid conclusion?

Perhaps I was just always a little naive myself on this topic, though.

Sack

eg8r
11-04-2011, 08:53 AM
<div class="ubbcode-block"><div class="ubbcode-header">Quote:</div><div class="ubbcode-body">But experts within a given field, without bias, and studies objectively could logically find flaws within a basic going from point A to point B equation and the plausibility of such conclusions. Stapel has 30-odd published and "accepted" fraudulent works already at the very least... which is rather alarming!
</div></div>Basically what is happening here is if you were talking about global warming (as a direct result of human intervention) sofla would tell you about how strong their proof was due to all the research and peer reviews by fellow scientists. Now we see that peer review really might not be as robust as expected he tells us they don't really matter.

<div class="ubbcode-block"><div class="ubbcode-header">Quote:</div><div class="ubbcode-body">And that's just one person in one field... it begs me to wonder about other scientific studies, things that affect how we treat the Earth,</div></div>Where there is smoke there is usually a fire.

eg8r

cushioncrawler
11-04-2011, 08:54 AM
Peer review iz useless in some areas. It shoodnt be -- it kood eezyly be usefull -- but it aint -- and probly never wont be.
Lemmesee, we hav....
1. Krappynomicysts.
2. Architekts.
3. Dieticians.
4. Psychiatrysts.
5. Priests.
6. Homeopathetiks.
7. Astrologysts.
8.
mac.

Soflasnapper
11-04-2011, 04:03 PM
<div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: eg8r</div><div class="ubbcode-body"><div class="ubbcode-block"><div class="ubbcode-header">Quote:</div><div class="ubbcode-body">But experts within a given field, without bias, and studies objectively could logically find flaws within a basic going from point A to point B equation and the plausibility of such conclusions. Stapel has 30-odd published and "accepted" fraudulent works already at the very least... which is rather alarming!
</div></div>Basically what is happening here is if you were talking about global warming (as a direct result of human intervention) sofla would tell you about how strong their proof was due to all the research and peer reviews by fellow scientists. Now we see that peer review really might not be as robust as expected he tells us they don't really matter.

<div class="ubbcode-block"><div class="ubbcode-header">Quote:</div><div class="ubbcode-body">And that's just one person in one field... it begs me to wonder about other scientific studies, things that affect how we treat the Earth,</div></div>Where there is smoke there is usually a fire.

eg8r </div></div>

Well, no, actually.

Peer review can aid, correct, improve, or make more credible a given paper being published.

Even if there isn't peer review, however, SCIENCE takes place, and THAT is what either eventually confirms or disconfirms the paper's findings, as the experiment(s) are REPEATED, or the analysis REDONE, by others, to see if their results are like those of the first publisher of the results.

So, for instance, a non-peer-reviewed paper may end up being replicated and confirmed by other scientists (as the temperature findings of the global warmist theorists were just reconfirmed), and on the other hand, a peer-reviewed paper may be refuted by future research.

cushioncrawler
11-04-2011, 04:11 PM
<div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: eg8r</div><div class="ubbcode-body">....Where there is smoke there is usually a fire. eg8r</div></div>Where There's Smoke, There's Climate Change March 1, 2004

To anyone who has ever seen a forest fire in action or the eerie, charred landscape left in its wake, the ground-level damage is devastatingly clear. More difficult to assess has been what transpires in the atmosphere as a result of biomass burning. New research suggests that the atmospheric effects of these blazes are profound, and may significantly impact climate on regional and continental scales.

Findings from two studies of smoke pollution from forest burning in the Amazon are detailed in the current issue of the journal Science. In the first paper, Meinrat O. Andreae of the Max Planck Institute for Chemistry in Mainz, Germany, and his colleagues report that in cases of heavy pollution, smoke suppresses rainfall, allowing the aerosols to penetrate the upper levels of the atmosphere. As a result, the clouds appear to smoke. Ultimately, the smoke aerosols can alter the amount of radiation reaching the earth and encourage long-distance transportation of the smoke. And when the aerosol- and water-laden clouds eventually release their precipitation, they generate intense thunderstorms and large hail instead of the usual moderate rainfall. "The invigorated storms release the latent heat higher in the atmosphere," the authors write. "This should substantially affect the regional and global circulation systems."

In the second study, a team led by Ilan Koren of the NASA Goddard Space Flight Center analyzed satellite data from the Amazon during the dry season and found that scattered cumulus-cloud coverage fell from 38 percent when the air was clean to zero in heavy smoke conditions. The incoming heat resulting from this reduction in cloud cover, they say, can swamp the cooling effects of the scattering of solar radiation by the smoke particles. This, the researchers conclude, may help explain "why Earth warmed substantially in the last century despite the expected aerosol cooling effect." --Kate Wong

cushioncrawler
11-04-2011, 04:16 PM
Why Prescribed Fires in Grasslands Don’t Contribute to Global Warming
Posted on March 21, 2011 by Chris Helzer

There are plenty of things to worry about when conducting a prescribed fire. Is the wind going to change? Is the smoke going where it’s supposed to? Will the fire leave sufficient unburned refuges for insects and other animals?

Fortunately, one thing we don’t have to worry about is whether or not the smoke from our fires is contributing to global warming. It’s true that smoke from prairie fires contains carbon, and that carbon is lifted right into the air. However, it’s important to step back and look at the bigger picture.

When all is said and done, the smoke from a prairie fire returns much less carbon to the atmosphere than was sequestered during the same time period. Even with annual burning, a prairie stores more carbon than it releases.

First, prairies pull more carbon from the ecosystem each year than they release - even if they’re burned annually. Prairie plants take carbon from their environment and store it beneath the ground as soil organic carbon. We’ve long known that prairies build organic soils – that’s why grasslands make such good farmland – but recently that ability has gotten more notice because of its contribution to carbon sequestration efforts.

Second, burning prairies stimulates stronger vegetative growth, which sequesters even more carbon in the soil than if the prairie was unburned. Spring fires warm the soil and allow prairie plants to start their growth earlier, and remove shade that would otherwise slow plant growth. In addition, it appears that fires also stimulate soil bacteria that make more nitrogen available to plants.

Third, the carbon that IS released through smoke is not the fossil carbon that is responsible for steeply climbing carbon dioxide levels in the atmosphere. Smoke from prairie fires contains carbon that was pulled out of the atmosphere within the last few years. Remembering that much of that carbon is sent down into the soil by prairie plants, whatever is re-released is simply returning carbon that was already in modern-day circulation. Today’s increasing atmospheric carbon levels are driven by the release of fossil carbon from millions of years ago. That carbon was stored away in coal and oil deposits until we pulled it out of the ground and released it through combustion.

A nicely succinct (if slightly ornery) synthesis of the reasons prairie fires don’t contribute to global warming was written by Gerould Wilhelm, a widely respected botanist and educator. You can read that in PDF form here.

So – stop worrying about carbon. Instead, make sure the forecast is still accurate, watch where your smoke is going, and be sure to leave some unburned areas for insects and other animals.

Most importantly, be safe.

cushioncrawler
11-04-2011, 04:28 PM
PEER REVIEW AZ PRAKTISED BY THE KATHLICK CHURCH.
MAC.

Theologians respond to NCReporter criticism

Jesuit journal submitted to “a higher authority,” but not before peer review
August 30, 2011 12:00 EST
By Catherine Harmon

Yesterday, the National Catholic Reporter posted an article on the alleged “pressuring” of the Jesuit theological journal Theological Studies by the Vatican’s Congregation for the Doctrine of the Faith. Citing “theologians not connected to the journal or to the Jesuit order” (the Jesuits publish Theological Studies), the NCR reports that the CDF forced the journal to publish an article defending Church teaching on the indissolubility of marriage in its June 2011 edition. That article, by Father Peter Ryan, S.J. and Dr. Germain Grisez, was a response to an article published in TS in September 2004 by Fathers James Coriden and Kenneth Himes, in which the authors argued that the Church should change its teaching on the indissolubility of marriage. According to the NCR, “The Vatican aim is to weed out dissenting voices and force the journal to stick more closely to official church teachings.”
Critical to the NCR report is the claim that the Vatican pressured TS to publish the Ryan/Grisez article “unedited and without undergoing normal peer review.” The assertion is apparently backed up by the allegations of the anonymous “theologians not connected to the journal” and by the unusual editorial note included at the top of the Ryan/Grisez article in TS, which states, “Except for minor stylistic changes, the article is published as it was received.”

In a statement responding to the NCR story, Ryan and Grisez indicate that their article did in fact go through a process of peer review, and was submitted to a group of TS-assigned readers, who offered criticisms that the authors took into account in a revised version of the article. These readers are thanked for their comments in the final note of the Ryan/Grisez article as it was published by TS, a fact unmentioned by the NCR.

Ryan and Grisez state that the editor of TS, Father David G. Schultenover, indicated that he was willing to publish the revised version of the article, but only “in a substantially reduced form.” The reduced version, according to Ryan and Grisez, “excised our arguments showing that much of Himes and Coriden’s case is unsound and that Piet Fransen’s interpretation of Trent on marriage, on which they rely, is based on false factual claims.”

While acknowledging that TS “submitt[ed] to a higher authority” in publishing the untrimmed version of their article, Ryan and Grisez object to the TS editorial statement that the article was “submitted as it was received,” leaving the impression – for its regular readers and for NCR reporters, apparently – that the article underwent no peer review or vetting process by the journal prior to publication.

Father Ryan and Dr. Grisez’s full statement can be read below. A PDF of their article as it appears in Theological Studies can be read here. This article is Copyright © Theological Studies, Inc. 2011, all rights reserved. A PDF of the Himes-Coriden article to which they were responding can be read here. This article is Copyright © Theological Studies, Inc. 2004, all rights reserved. Instructions for obtaining rights to either or both articles, including the right to download a single copy for one’s own use, may be found on the Theological Studies website http://www.ts.mu.edu/content/index.html



Ryan and Grisez Statement

When Theological Studies, submitting to higher authority, agreed to publish the complete and final version of our article making the case for the absolute indissolubility of covenantal marriage, the editor requested and we provided the abstract that usually appears just before the beginning of the text. The page proofs we received, however, replaced our abstract with an unusual editor’s note: “The article is a reply to one by Kenneth Himes and James Coriden published in our September 2004 issue. Except for minor stylistic changes, the article is published as it was received.”

In our next note to the editor, we said: “We’re concerned that the second sentence of what appears instead is misleading, for we did a great deal of work to respond to the criticisms proposed by the first group of readers assigned by TS, and we thank them in the final note of our article. If the reason for the change is to suggest that the article is being published under duress, we think it would be well to say that straightforwardly.”

The editor replied: “As to the abstract, I decided on this briefer form because what you said in your abstract is repeated at the beginning of article, and I wanted to save space. I don’t think the abstract as it stands is at all misleading.” What concerned us was that the editor’s rejection of the first draft of the article, in May 2009, was accompanied by his “lightly edited summary of the [three] referees’ reports.”

In August 2010, having received our final draft, the editor wrote: “I am pleased to report that my editorial consultants have recommended that we publish your manuscript, but in a substantially reduced form.” That letter included comments from two referees along with the editor’s proposed “trimmed version,” from which were excised our arguments showing that much of Himes and Coriden’s case is unsound and that Piet Fransen’s interpretation of Trent on marriage, on which they rely, is based on false factual claims.

Had Theological Studies published, without a mandate from higher authority, the unexpurgated final version of our reply to Himes and Coriden, its doing so would have contributed to its credibility as a forum for fair and thorough treatment of vital theological controversies. As for the quality of our scholarship, we ask only that readers of the two articles set aside the fact that higher authority had to mandate publication of the unexpurgated version of our article and judge for themselves.

cushioncrawler
11-04-2011, 04:41 PM
<div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: eg8r</div><div class="ubbcode-body">....Where there is smoke there is usually a fire. eg8r</div></div>Peer review in the The Holey Kathlick Church of Rome.

1. Non-canoninikal gospels -- peer review = burn.
2. Heretix ------------------ peer review = burn.

The Pope sayd that burning of gospels and heretix iz GW neutral.
mac.

cushioncrawler
11-04-2011, 04:45 PM
What Kind of Peer-Review Would Jesus Want?

For all those creationists out there wondering how to approach peer review in their brand new “journal,” Answers Research Journal, take heart: the latest edition has some friendly advice.

Despite the centrality of peer review to the development of a scholarly community, very little is known about the biblical basis and Christian conduct of peer review. We find that peer review is rooted in several Christian virtues, such as reflecting Christ, being honest, seeking wisdom, humbly submitting, showing Christian love, correcting error, and being accountable. Given these principles, we recommend that creationists use a double-blind peer review system, wherein the identities of the author and peer reviewers are confidential.

I think most scientists would agree that honesty, accountability, and the correction of errors are important aspects of the scientific review process. The “biblical basis” may seem a bit of a stretch, though, since last time we checked, peer review got going in the 17th century at the Royal Society of London. The ARJ paper does cite a lot of biblical passages, but I couldn’t find any describing how to get your new protein structure results thoroughly vetted and published. It’s also more than a bit disturbing that the authors don’t seem to view scientists whose “virtues” are rooted in any other tradition as qualified to take part in the process. Then again, what else would you expect from editors who have stated in their policy that they won’t publish anything that contradicts the biblical flood story or suggests that the earth is more than a few thousand years old?

As it turns out, the authors of the peer-review tract do realize that even insulating themselves from the real world may not be enough to keep publishing humming along. For instance, they point out that the tiny number of people who actually consider themselves creationists could pose a challenge. The authors wonder (and so do I)

if all the creationists with formal training in one field coauthored a paper together, what qualified peer is left to review it?

Now that would be a fine pickle, wouldn’t it?

But although the whole concept for the journal is a scientific joke (dictating in advance that your results must not contradict the Bible), I actually think the ARJ peer review recommendations got one thing right: double-blind review. That’s where the names of both the author(s) and reviewer(s) are concealed.

Most journals operate on a single-blind system, where reviewers know who the authors are, but not vice versa. Unfortunately, many studies have found that this system perpetuates a bias against female authors. When one journal introduced a double-blind system, they found a significant increase in the number of accepted papers written by women.

Single-blind reviewing also may benefit well-known, established authors with a long publication record, making it more difficult for promising work by new, relatively unknown researchers to make it into publication. Read more about why journals should double-blind here.

As for the ARJ, I applaud their double-blind review recommendation. It’s a pity there won’t be any actual science to evaluate.

cushioncrawler
11-04-2011, 04:51 PM
Peer Reviewed Science Journal Retracts Suggestion That Jesus Cured Woman of Flu

Posted by John Jubinsky on August 22, 2010 at 10:49am in Atheist News

The peer reviewed Virology Journal, in reaction to a wave of criticism, has embarrassingly retracted a July 21 article suggesting that Matthew 8:14-15, Mark 1:29-31 and Luke 4:38-39 are accounts of Jesus having cured a woman (Peter's mother-in-law) of influenza. The obvious question is how such an article could have been published in a peer reviewed scientific journal to begin with.

The retraction came after criticisms, including those made via blogs and a comment posted on the paper by Paul Gray of the Washington University School of Medicine, expressing the view that it was unclear how the paper met any of the normal standards of such a journal, other than that someone paid to have it published.

http://www.physorg.com/news201148749.html

The Abstract and Case of the Article began with the following sentence:

The Bible describes the case of a woman with high fever cured by our Lord Jesus Christ.

http://www.virologyj.com/content/7/1/169

cushioncrawler
11-04-2011, 05:04 PM
IT IZ HOPED THAT THE WORK OF THE IJNP WILL KOMPLEMENT THE WORK OF THE IPC.
MAC.

What follows is the process that was followed by IJNP for Panel Selection with the initial participating Authors, who included:

•Dr. John Baumgardner
•Dr. Walt Brown
•Dr. Larry Vardiman
•Dr. Carl Baugh
Overview

In Jesus’ Name Productions, Inc. is conducting a <span style='font-size: 14pt'>peer review </span>of the leading Flood and Pre-Flood world theories (the “Flood Models”). This will consist of critical evaluations and rebuttals from selected panelists and participating authors of the scientific support for the pre-eminent Flood Theories (the “Review”). The Review will include the following:

•The Local Flood
•Catastrophic Plate Tectonics
•Hydroplate Theory
•Vapor Canopy
•Solid Canopy

Additional models may be considered as part of the Review process if approved by the Panel.

Though each model is assumed to have Biblical support, this Review will be based on the merits of each theory, judged only on scientific evidence. It is IJNP’s basic assumption for this Review that the correct Biblical interpretation will be supported by a preponderance of scientific evidence and that any incorrect interpretations will likely be invalidated by the lack of supporting scientific evidence.

Roles and Responsibilities:

The following participants will contribute to the Review.

•IJNP Chairman – Will serve as Moderator for the Review. The Moderator will oversee both the Panel Selection process and the Review itself. The Moderator is responsible for the coordination of all participants’ input and for maintaining an atmosphere of mutual respect among the contributors.
•Authors – Have agreed to submit their Flood Models for critical review. The Authors are generally well-known for a specific model or position related to the Pre-Flood world or the Biblical Flood (defined either as a global or local flood).
•Panelists – Will review all materials submitted by the Authors and assist in asking a series of questions that challenge, validate, or further illuminate the Author’s position and models.
The Panelists are recognized peers of the Authors, and have specialized knowledge in an area of science or are well-versed on Creation models and teaching. Panelists will be selected based on:

1.The strongest scientific qualifications,
2.The greatest independence, and
3.The most thorough understanding of the Flood Models.
Panelists may also consult with others during the Review to bring additional scientific knowledge and expertise to the discussion. Part of the Panelist screening process will identify the Panelists' network of experts that they intend to consult during the Review.

Panel Selection Process:

Step 1 – Initial Call to Panelists

IJNP will release a “Call to Panelists” to Creation organizations around the country in order to notify interested parties of the Review and the process for becoming a Panelist. All potential candidates will be contacted by IJNP either by phone or by e-mail to help evaluate their suitability for the Review. IJNP will narrow the number of candidates down to the 25–30 who have the greatest competence, independence, and breadth of relevant experience. Each of the 25–30 candidates and the participating authors will then receive a complete set of reading material.

Step 2 – Narrowing the Field of Candidate Panelists

In order to narrow the number of candidates to the 15–20 individuals who best understand the theories, each candidate will be contacted by phone and/or e-mail to set up phone interviews. During the phone interview, the Chairman and other IJNP staff will allow the candidates to ask questions and express their agreement or any disagreement with the procedures and time schedule.

Step 2a – Proficiency Testing

The Chairman and IJNP staff will administer proficiency tests to the 15-20 selected candidates. These proficiency tests consist of sets of questions provided in advance by the Authors to test the potential candidates' knowledge and understanding of their Flood Models. Candidates who show a weak understanding of the material will be encouraged to restudy the theories and reschedule another phone interview.

Step 2b – Verification of Credentials

The Chairman and IJNP staff will verify the candidate’s stated credentials, and try to learn the extent of the candidate’s scientific contacts—people he or she might consult during the panel’s deliberations. These will include experts in such fields as orbital mechanics, heat transfer, medicine, geology, physics, fluid mechanics, and nuclear physics.

Step 3 – Interview by Authors

Once the panelists are narrowed down to 15-20, an Author may request a 5–10 minute conference call with the Chairman and each candidate, if they so choose. The call will allow each author to assess each candidate’s comprehension of their own theory. During this interview, Candidates should not express their opinions on the merits of the theories. Authors should not seek agreement, but only ask questions to assess the level of understanding. The authors may also inquire about a candidate’s independence. Authors cannot disqualify a potential candidate on quality or basis of understanding unless the author has conducted a personal interview with the Candidate.

Step 4 – Author Ranking of Candidates

After interviews are conducted, Authors will then rank order these 15-20 candidates according to the Author’s assessment of the Candidate’s understanding of the Author’s theory. This rank ordering will be a factor in IJNP’s final selection of panelists, along with a candidate’s qualifications and independence. Qualifications, independence, and understanding of competing ideas are the keys to <span style='font-size: 14pt'>“true peer review.”</span>

Step 4a – Challenges to Candidate Selection

All participating authors will receive the names of these 15-20 candidates. If a participating author feels that any candidate cannot serve on the panel in an <span style='font-size: 14pt'>unbiased </span>way, he should explain his concerns in a written letter to the Chairman. Participating authors may also <span style='font-size: 14pt'>invoke one peremptory challenge that would remove one candidate </span>from further consideration without having to explain what may be a complex, but justifiable, rationale. Candidates removed by peremptory challenges will be made known to all the Authors.

Step 5 – Final Review of Candidates

Step 5a – Proposed List of Candidates
IJNP will use all of the information gleaned from the previous three steps to propose a panel of 7-10 scientists to conduct the peer review. At this point, IJNP will release the names and qualifications of the proposed panel to each of the Authors for their review.

Step 5b – Agreed List of Candidates
Each author must agree, in writing, they believe the proposed panel is qualified to conduct a <span style='font-size: 14pt'>fair </span>peer review of their Flood Model. If any author disagrees with the proposed panel, they must state their reasons in writing to the Chairman, who will then distribute the Author’s written concerns to all of the other Authors. The Chairman may, at his discretion, adjust the proposed panel, conduct discussions of concerns or initiate changes in order to arrive at a panel that is acceptable to all authors. The Chairman will work with each of the authors until all have provided written agreement that the proposed panel can conduct a <span style='font-size: 14pt'>fair </span>peer review.

Step 6 Final Selections of Panelists
IJNP will notify the 7-10 panelists and mail each a $1,000 check. Alternates will also be selected in case a panelist needs to drop out of the process for any reason. All candidates will be notified whether they were selected or not. The Chairman will email his congratulations to the panelists and alternates and ask them to begin sending him their first question or criticism of each of the Flood Models.

cushioncrawler
11-04-2011, 05:23 PM
Unfair peer review or biased peer review iz not true peer review.

And the invokation of one pre-emptory challenge per author will allso help root out potentially unfair peers and biased peers and untrue peers.
mac.

cushioncrawler
11-04-2011, 05:33 PM
The concept of fair peer review duzznt receive az much attention az it shood. Unfair peer review iz a blight on true science.

One unfair peer kan potentially sink a worthy theory. There iz no defence against an unfair scientist who hits below the belt.
A scientifik fight must be a fair fight -- no gouging -- no biting.
Spektators havta get their moneys worth. Sponsers must be kept happy.
mac.

cushioncrawler
11-04-2011, 06:00 PM
cloudsoup (no soup, no clouds)
Lying for Jesus: The Discovery Institute

The Seattle-based creationist advocacy think-tank, the Discovery Institute, tries to respond to the criticism that proponents of creationism and its cousin ‘intelligent design’ (which has been called ‘creationism in a fancy suit’) do not publish in peer-reviewed scientific journals by publishing a page of short descriptions of ‘Peer-Reviewed and Peer-Edited’ publications. Things aren’t quite as they seem.

Let’s test the honesty of the Discovery Institute’s list by delving into their claim that a suitable example of a peer-reviewed paper is one entitled Genetic Analysis of Coordinate Flagellar and Type III Regulatory Circuits in which Scott Minnich and Stephen C. Meyer ‘argue explicitly that intelligent design is a better (sic) than the Neo-Darwinian mechanism for explaining the origin of the bacterial flagellum’.

Meyer is a theologian and a founder of the Discovery Institute who has a history of finding scientific support for his peculiar views where none in fact exists. He once presented an annotated bibliography of 44 peer-reviewed scientific articles to the Ohio State Board of Education that were said to significantly challenge ‘Darwinian evolution’. The authors of the papers were contacted, and twenty-six, representing thirty-four of the papers, responded, all stating that they disagreed with Meyer’s representation of their work.

Scott Minnich is a Fellow of the Discovery Institute’s Center for Science and Culture. Unfortunately for Scott and for the Discovery Institute’s claims for this particular paper, he provided testimony in the Dover Trial (Kitzmiller v. Dover Area School District), the Federal court case that ruled on teaching intelligent design in high schools.

The damaging part of the proceedings is online.

Q. And the paper that you published was only minimally peer reviewed, isn’t that true?

A (Scott Minnich). For any conference proceeding, yeah. You don’t go through the same rigor. I mentioned that yesterday. But it was reviewed by people in the Wessex Institute, and I don’t know who they were.
and then, slightly later in discussing a different paper:

Q. Unlike your paper, that is a peer reviewed scientific paper, correct?

A. In that — in that sense, yeah. Again, mine is a conference paper, so –

Q. This is a true peer reviewed paper, correct?

A. Correct.
This supposedly ‘peer-reviewed’ paper, then, was ‘conference reviewed’ and Scott Minnich doesn’t know who the Wessex Institute are.

By coincidence I came across the Wessex Institute a few weeks ago while reminding myself of that great hoax on the pretensions of the Social Sciences, the Sokal Affair.

The Wessex Institute of Technology (WIT) is associated with the University of Wales and organised the conference ‘Design & Nature 2004’ in Rhodes, at which Minnich and Meyer’s paper was presented. As Minnich says, it was the WIT that provided the conference peer review. So what are the WIT’s peer review standards and procedures?

Here’s an example:

A prior event which may also be compared to the Sokal affair involved the VIDEA 1995 conference, organized by the Wessex Institute of Technology. Professor Werner Purgathofer (Vienna University of Technology), a member of the VIDEA 1995 program committee, became suspicious of the conference’s peer review standards after not receiving any abstracts or papers for review. <span style='font-size: 14pt'>To confirm his suspicions, he wrote four absurd and/or nonsensical “abstracts” and submitted them to the conference. All were “reviewed and conditionally accepted.”[3] He subsequently resigned from the program committee.</span>
Wikipedia on The Sokal Affair and the Wessex Institute of Technology

You can read more of Professor Purgathofer’s trenchant views on the Wessex Institute here.

The upshot of this one brief investigation is that the paper, presented to an Engineering conference, was not, as the Discovery Institute wrongly claims, properly peer-reviewed. Anyone care to take a look at the rest of their claims on that page?

cushioncrawler
11-04-2011, 06:06 PM
Any phoney kan write a phony paper and get a phoney peer review from a phoney institute.
mac.

Soflasnapper
11-04-2011, 06:11 PM
Perhaps that is so, but Lord Monckton couldn't accomplish it, even though he continues to claim he did.

What he got published was a letter in the letter section, of a peer reviewed publication. But the letter section wasn't subject to peer review. And he lies his arse off that it was.

cushioncrawler
11-04-2011, 06:21 PM
But -- Lord Monckton iz a Peer.
mac.

cushioncrawler
11-04-2011, 08:09 PM
<span style='font-size: 14pt'>Peer review
Shameful
Women really do have to be at least twice as good as men to succeed </span>
May 22nd 1997 | WASHINGTON, DC | from the print edition

SEX and connections: these are not the criteria on which science should be judged, least of all by scientists. But in the first extensive analysis of the way that fellowships in science are awarded, which is published this week in Nature, Christine Wenneras and Agnes Wold, microbiologists at Gothenburg University, in Sweden, found what many graduate students and postdoctoral fellows have long suspected. Namely, that these factors matter as much as, if not more than, scientific merit.

Peer review, the evaluation (often anonymous) of a piece of scientific work by other scientists in the same field, is central to the way in which science proceeds. Journals use it to help decide whether to publish papers; funding agencies use it when deciding to whom to award grants. Anecdotal accounts of abuses abound. But considering how essential it is, there is surprisingly little information on how well it works.

This is in part because the raw data are difficult to obtain. To get the data for their study, Dr Wenneras and Dr Wold had to go to court. The Swedish Medical Research Council (MRC), a government body that funds biomedical research, did not want to release the records of who had said what about whom in the evaluation of fellowship applications. Fortunately, the court declared the records to be official documents—and, therefore, public under Sweden’s Freedom of the Press Act.

To start with, Dr Wenneras and Dr Wold analysed the reviews of the 114 applications that the MRC received for the 20 postdoctoral fellowships it offered in 1995. Of the applicants, 46% were women. Of the successful recipients of the awards, only 20% were women. This was not a freak year: in Sweden in the 1990s, women have received 44% of the doctorates awarded in the biomedical sciences, but have been less than half as successful as men at getting postdoctoral fellowships from the MRC. In principle, of course, that might reflect their abilities. In practice, however, other factors seem to be at work.

When the council gets a grant application, it is evaluated by five reviewers, on three measures: scientific competence, the proposed methodology and the relevance of the research. Each measure is given a score of between zero and four; each reviewer’s scores are multiplied together, giving a single score between zero and 64; and finally, the scores from the reviewers are averaged together, giving the total score.
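
To make the arithmetic concrete, here is a minimal Python sketch of that scoring scheme; the reviewer scores below are invented for illustration and are not figures from the study.

# Minimal sketch of the MRC scoring scheme described above.
# All scores here are hypothetical.

def reviewer_score(competence, methodology, relevance):
    # Each criterion is scored between zero and four; a reviewer's three
    # scores are multiplied together, giving a value between 0 and 64.
    for s in (competence, methodology, relevance):
        assert 0 <= s <= 4
    return competence * methodology * relevance

def total_score(reviewer_products):
    # The reviewers' products are averaged to give the total score.
    return sum(reviewer_products) / len(reviewer_products)

products = [
    reviewer_score(3, 4, 4),  # 48
    reviewer_score(2, 3, 4),  # 24
    reviewer_score(3, 3, 3),  # 27
    reviewer_score(4, 4, 2),  # 32
    reviewer_score(2, 4, 3),  # 24
]
print(total_score(products))  # 31.0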

Dr Wenneras and Dr Wold found that women received lower than average scores on all three criteria, but especially low scores for scientific competence. To see whether women really were bumbling scientists, the researchers devised three quantifiable measures of competence, and used these to assess the applicants’ abilities.

The first measure is crude: how many papers have you published? This measures productivity more than competence—you might publish a lot, but in trivial journals that no one reads. More refined is something known as an “impact” factor. Calculated by an independent body, the Institute for Scientific Information, the impact of a journal is the number of times an average paper in that journal is cited elsewhere in a given year. To calculate a scientist’s impact, just add up the impact factors of all of his or her papers. The third measure was the number of times that an individual’s papers had actually been cited in the previous year. Moreover, each of these measures can be calculated in two ways: total productivity and first-author productivity. In biomedical research, the first author listed on a paper is typically the one who contributed the most—so being one frequently is a good measure of individual competence that is independent of collaborations.
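
As a rough sketch of how those three measures (and their first-author variants) might be tallied for one applicant, assuming a made-up publication record and an invented data layout:

# Hedged sketch of the three competence measures described above,
# in both total and first-author form. The data are invented.

papers = [
    # (first_author, journal_impact_factor, citations_last_year)
    (True,  3.50, 5),
    (False, 8.25, 12),
    (True,  1.25, 0),
]

paper_count        = len(papers)                                      # 3
first_author_count = sum(first for first, _, _ in papers)             # 2

total_impact        = sum(jif for _, jif, _ in papers)                # 13.0
first_author_impact = sum(jif for first, jif, _ in papers if first)   # 4.75

total_citations        = sum(c for _, _, c in papers)                 # 17
first_author_citations = sum(c for first, _, c in papers if first)    # 5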

Having compiled this information for each applicant, Dr Wenneras and Dr Wold looked to see how well it matched the competence rating given by the MRC. They found an astonishing—and shocking—discrepancy. Women with the same impact and productivity as men were consistently given much lower competence scores. The women with the most impact—those with a total score of over 100 points—were deemed to be only as competent as those men whose total impact was less than 20.

Although these figures look like the result of sex bias, other kinds of bias could produce them too. Women might more often come from insignificant universities, or hold their PhDs in subjects—such as nursing—that might be perceived as inferior.

To identify such factors, the researchers analysed how much an applicant’s competence score was affected by nine different variables, including sex and whether or not the applicant knew a member of the reviewing committee. They found that just two factors improved the score significantly: being male and knowing a reviewer. In fact, the difference was so great that in order to get the same competence score as a man, a woman would need either to know someone on the committee, or to have published three more papers than the man in Nature or Science, the two journals with the highest impact—or 20 more papers in good specialist journals. It is often joked that a woman has to be twice as good as a man to do as well; Dr Wenneras and Dr Wold found that she would need to be, on average, 2.5 times as good on their measures to be rated as highly by reviewers.
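
For readers curious what that kind of multi-variable analysis looks like mechanically, here is a heavily simplified sketch on synthetic data; the variable names, the bias built into the fake scores, and the regression setup are all assumptions for illustration, not the study's actual data or model.

# Illustrative multiple-regression sketch on synthetic data (numpy only).
# Nothing below is taken from the Wenneras/Wold study itself.
import numpy as np

rng = np.random.default_rng(0)
n = 114                                  # the study reviewed 114 applications

productivity = rng.uniform(0, 100, n)    # e.g. total impact points
male         = rng.integers(0, 2, n)     # 1 = male applicant
knows_member = rng.integers(0, 2, n)     # 1 = knows someone on the committee

# Build synthetic competence scores with an assumed bias, just to have
# something to fit; the 0.2 bonuses are arbitrary.
score = (1.0 + 0.01 * productivity + 0.2 * male + 0.2 * knows_member
         + rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), productivity, male, knows_member])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(dict(zip(["intercept", "productivity", "male", "knows_member"],
               np.round(coef, 3))))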

This could partly explain why, although women receive almost half the PhDs in biomedical fields, more women than men leave at all later stages. This exodus is often explained as women not having the motivation or perseverance to work in a male-dominated scientific establishment, but Drs Wenneras and Wold reckon that their results could account entirely for the large numbers of women who have left biomedical research in Sweden. This, if true, is not only unfair, but a waste of public money.

Granted, theirs is only one study from one country. But it is the first study of its kind, and it comes from a country in which sexual equality is formally entrenched in public life. Other, similar research will have to be done, and if the same pattern is found, the peer review system will have to be overhauled. America’s National Science Foundation is currently assessing its system—but in the absence of similar data. In the meantime, ambitious women would do well to return to a time-honoured but supposedly obsolete tradition, and apply under a male name.

cushioncrawler
11-05-2011, 12:38 AM
<div class="ubbcode-block"><div class="ubbcode-header">Quote:</div><div class="ubbcode-body">Leading Economics Journals Drop ‘Double Blind’ Peer Review
May 31, 2011, 12:28 pm

Journals of the American Economic Association on July 1 will end “double blind” manuscript reviews, in which neither the author nor the reviewers know one another’s names. Now the reviewers—who will remain anonymous—will know the author’s identity; the association says the practice will make it easier for reviewers to spot conflicts of interest, and will reduce the administrative costs of the review process. Recently scientific articles have marshaled evidence that elaborate peer review does not improve journal quality, but have also noted that unblinded reviewers tend to favor papers from more prestigious institutions.

</div></div>"elaborate peer review does not improve journal quality,"

HHAAAAAAHHHHHHAAAAAAAAAAAHHHAAAAAAAAAAAAAAAAAHHHAAAAAAAAAAAAAAAA.

How duz u peer review KRAPPYNOMIX SHITE. I know, spray it with perfume -- praps spray it with color -- hmmmmmm, no, still stinx.

Apparently, it iz important whoze KRAPPYNOMIX SHITE it iz.
A Maggot.

Soflasnapper
11-05-2011, 06:30 PM
It's obvious that blinding the peer-reviewers is a bad practice!

Very hard for them to read the papers!

Gayle in MD
11-05-2011, 06:45 PM
<div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: cushioncrawler</div><div class="ubbcode-body"><span style='font-size: 14pt'>Peer review
Shameful
Women really do have to be at least twice as good as men to succeed </span>
May 22nd 1997 | WASHINGTON, DC | from the print edition

SEX and connections: these are not the criteria on which science should be judged, least of all by scientists. But in the first extensive analysis of the way that fellowships in science are awarded, which is published this week in Nature, Christine Wenneras and Agnes Wold, microbiologists at Gothenburg University, in Sweden, found what many graduate students and postdoctoral fellows have long suspected. Namely, that these factors matter as much as, if not more than, scientific merit.

Peer review, the evaluation (often anonymous) of a piece of scientific work by other scientists in the same field, is central to the way in which science proceeds. Journals use it to help decide whether to publish papers; funding agencies use it when deciding to whom to award grants. Anecdotal accounts of abuses abound. But considering how essential it is, there is surprisingly little information on how well it works.

This is in part because the raw data are difficult to obtain. To get the data for their study, Dr Wenneras and Dr Wold had to go to court. The Swedish Medical Research Council (MRC), a government body that funds biomedical research, did not want to release the records of who had said what about whom in the evaluation of fellowship applications. Fortunately, the court declared the records to be official documents—and, therefore, public under Sweden’s Freedom of the Press Act.

To start with, Dr Wenneras and Dr Wold analysed the reviews of the 114 applications that the MRC received for the 20 postdoctoral fellowships it offered in 1995. Of the applicants, 46% were women. Of the successful recipients of the awards, only 20% were women. This was not a freak year: in Sweden in the 1990s, women have received 44% of the doctorates awarded in the biomedical sciences, but have been less than half as successful as men at getting postdoctoral fellowships from the MRC. In principle, of course, that might reflect their abilities. In practice, however, other factors seem to be at work.

When the council gets a grant application, it is evaluated by five reviewers, on three measures: scientific competence, the proposed methodology and the relevance of the research. Each measure is given a score of between zero and four; each reviewer’s scores are multiplied together, giving a single score between zero and 64; and finally, the scores from the reviewers are averaged together, giving the total score.

Dr Wenneras and Dr Wold found that women received lower than average scores on all three criteria, but especially low scores for scientific competence. To see whether women really were bumbling scientists, the researchers devised three quantifiable measures of competence, and used these to assess the applicants’ abilities.

The first measure is crude: how many papers have you published? This measures productivity more than competence—you might publish a lot, but in trivial journals that no one reads. More refined is something known as an “impact” factor. Calculated by an independent body, the Institute for Scientific Information, the impact of a journal is the number of times an average paper in that journal is cited elsewhere in a given year. To calculate a scientist’s impact, just add up the impact factors of all of his or her papers. The third measure was the number of times that an individual’s papers had actually been cited in the previous year. Moreover, each of these measures can be calculated in two ways: total productivity and first-author productivity. In biomedical research, the first author listed on a paper is typically the one who contributed the most—so being one frequently is a good measure of individual competence that is independent of collaborations.

Having compiled this information for each applicant, Dr Wenneras and Dr Wold looked to see how well it matched the competence rating given by the MRC. They found an astonishing—and shocking—discrepancy. Women with the same impact and productivity as men were consistently given much lower competence scores. The women with the most impact—those with a total score of over 100 points—were deemed to be only as competent as those men whose total impact was less than 20.

Although these figures look like the result of sex bias, other kinds of bias could produce them too. Women might more often come from insignificant universities, or hold their PhDs in subjects—such as nursing—that might be perceived as inferior.

To identify such factors, the researchers analysed how much an applicant’s competence score was affected by nine different variables, including sex and whether or not the applicant knew a member of the reviewing committee. They found that just two factors improved the score significantly: being male and knowing a reviewer. In fact, the difference was so great that in order to get the same competence score as a man, a woman would need either to know someone on the committee, or to have published three more papers than the man in Nature or Science, the two journals with the highest impact—or 20 more papers in good specialist journals. It is often joked that a woman has to be twice as good as a man to do as well; Dr Wenneras and Dr Wold found that she would need to be, on average, 2.5 times as good on their measures to be rated as highly by reviewers.

This could partly explain why, although women receive almost half the PhDs in biomedical fields, more women than men leave at all later stages. This exodus is often explained as women not having the motivation or perseverance to work in a male-dominated scientific establishment, but Drs Wenneras and Wold reckon that their results could account entirely for the large numbers of women who have left biomedical research in Sweden. This, if true, is not only unfair, but a waste of public money.

Granted, theirs is only one study from one country. But it is the first study of its kind, and it comes from a country in which sexual equality is formally entrenched in public life. Other, similar research will have to be done, and if the same pattern is found, the peer review system will have to be overhauled. America’s National Science Foundation is currently assessing its system—but in the absence of similar data. In the meantime, ambitious women would do well to return to a time-honoured but supposedly obsolete tradition, and apply under a male name. </div></div>

Will it ever end. :(

Qtec
11-06-2011, 07:35 AM
<div class="ubbcode-block"><div class="ubbcode-header">Quote:</div><div class="ubbcode-body">Einstein <u>never ever</u> published a peer-reviewed paper,<u> not once. </u></div></div>

<div class="ubbcode-block"><div class="ubbcode-header">Quote:</div><div class="ubbcode-body">Einstein and Peer review | Abhishek Tiwari
"A Small story about Albert Einstein (adopted from Three myths about scientific peer review, an excellent article by Michael Nielsen about dark side of peer review)-

Albert Einstein wasn’t just an outstanding scientist; he was also a prolific one, publishing more than 300 journal articles between 1901 and 1955. Many of Einstein’s most ground-breaking papers appeared in his “miracle year” of 1905, when he introduced new ways of understanding space, time, energy, momentum, light, and the structure of matter. Not bad for someone unable to secure an academic position, and working as a patent clerk in the Swiss patent office.

<span style='font-size: 14pt'>How many of Einstein’s 300 plus papers were peer reviewed? According to the physicist and historian of science Daniel Kennefick, it may well be that only a single paper of Einstein’s was ever subject to peer review.</span>
That was a paper about gravitational waves, jointly authored with Nathan Rosen, and submitted to the journal Physical Review in 1936. The Physical Review had at that time recently introduced a peer review system. It wasn’t always used, but when the editor <u>wanted a second opinion</u> on a submission, he would send it out for review. The Einstein-Rosen paper was sent out for review, and came back with a (correct, as it turned out) negative report. Einstein’s indignant reply to the editor is amusing to modern scientific sensibilities, and suggests someone quite unfamiliar with peer review:

Dear Sir,

We (Mr. Rosen and I) had sent you our manuscript for publication and had not authorized you to show it to specialists before it is printed. I see no reason to address the in any case erroneous comments of your anonymous expert. On the basis of this incident I prefer to publish the paper elsewhere.

Respectfully,

P.S. Mr. Rosen, who has left for the Soviet Union, has authorized me to represent him in this matter." </div></div>

Q

cushioncrawler
11-06-2011, 07:51 AM
Einstein woz a fake.
Hiz best work woz aktually dunn by hiz wife.
Hiz other works were plagiarized.

Worse than that -- Einstein woz wrong -- and all of science knows it.
mac.

LWW
11-06-2011, 09:21 AM
"PEER REVIEWED" used to mean that numerous scientists would honestly examine, usually in the blind, the work of another and see if they could replicate the findings.

Leftist revisionist newspeak has reduced "PEER REVIEWED" to mean that a bunch of moonbat crazy leftists, who all believe the same boilerplate leftist nonsense, certify that party dogma is absolute and all who believe otherwise are "DENIERS" ... without regards to what the actual science shows.

LWW
11-06-2011, 09:22 AM
<div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: cushioncrawler</div><div class="ubbcode-body">Worse than that -- Einstein woz wrong -- and all of science knows it.
mac. </div></div>

About what?

cushioncrawler
11-06-2011, 02:51 PM
Einstein iz wrong about the speed of light being the same in all direktions (probly about c being a max -- probly about c being the same to all viewers).
Scientists hav proovd this many times -- in fakt this iz a strange one, koz it woz prooven wrong even before it woz written, now thats what i call prooven wrong.
All know that c depends on the earth's sidereal speed and alignment.
All know that a pure one-way measurement of c haz never been dunn.

Einstein iz wrong about gravity arizing from warpage of space-time.
All know that u dont havta make Einsteinian korrektions to get satellites to work together.

Einstein might be korrekt about some stuff re quantum theory and the photoelektrik effekt, he got a Nobel Prize for that. Or at least the plagiarized scientists that discovered that stuff were korrekt.

Its not just that Einstein haz never had peer-review -- Einstein haz never ever refered to the scientists he haz continuously plagiarized, or to their work that he haz continually plagiarized. Not the least of who-which iz Mrs Einstein.
mac.

Soflasnapper
11-06-2011, 07:04 PM
<div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: LWW</div><div class="ubbcode-body">"PEER REVIEWED" used to mean that numerous scientists would honestly examine, usually in the blind, the work of another and see if they could replicate the findings.

Leftist revisionist newspeak has reduced "PEER REVIEWED" to mean that a bunch of moonbat crazy leftists, who all believe the same boilerplate leftist nonsense, certify that party dogma is absolute and all who believe otherwise are "DENIERS" ... without regards to what the actual science shows. </div></div>

Your claim of the original meaning is likely absurd. Reviewers could check the math, check the logic, review the details of the experimental protocols, and the like, but what they almost certainly would never do is try to replicate the findings.

That would be a separate scientific investigation, which they might later do, as SCIENCE. But not as part of peer-review or refereeing papers submitted for publication.

If it were thought that the peer-reviewers or referees ought to themselves duplicate the research, who would ever agree to do that thankless job?

Louie
11-06-2011, 10:01 PM
The reply is right on. I've been a pro scientist for > 40 years (still connected to THE Los Alamos National Lab). I have done and still do peer reviews of lots of s**t. As the replier stated, nobody has time to replicate results, just ascertain if reasonable procedures were used, the 'scientific method' was followed, and if other works/citations were adequately explored.
Are you bored or what?
What f'ing leftists? Like those I went to grad school with in WI, whiule taking time away from Action Billiards & CueNique?
This is a pool/billiard blog. Get real!

Qtec
11-07-2011, 01:31 AM
<div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: Louie</div><div class="ubbcode-body">
Are you bored or what?
<span style="color: #990000">That would be what.</span>
What f'ing leftists?
<span style="color: #CC0000">What? That doesn't make sense. If its an answer to a question, what was the question? If it isn't, I think you needed an exclamation mark and maybe a comma.</span>
Like those I went to grad school with in WI,<span style='font-size: 14pt'> whiule </span>taking time away from Action Billiards & CueNique?
<span style="color: #990000">LOL</span>
This is a pool/billiard blog. <span style="color: #990000">No it f'ing isn't.</span> Get real! </div></div>



Q

Soflasnapper
11-07-2011, 07:42 PM
I think you've mistaken the intent of the poster.

He was supporting my reply, denying LWW's take, and challenging HIM as to which leftists HE was referring to.

<div class="ubbcode-block"><div class="ubbcode-header">Quote:</div><div class="ubbcode-body">As the replier [which would be me] stated, nobody has time to replicate results, just ascertain if reasonable procedures were used, the 'scientific method' was followed, and if other works/citations were adequately explored. </div></div>

Qtec
11-07-2011, 08:10 PM
<div class="ubbcode-block"><div class="ubbcode-header">Quote:</div><div class="ubbcode-body">I think you've mistaken the intent of the poster. </div></div>

No, no. <u>I totally agreed with his first paragraph </u>that was on topic. My post was addressed to the <u>second paragraph</u>.

<div class="ubbcode-block"><div class="ubbcode-header">Quote:</div><div class="ubbcode-body">Are you bored or what?
What f'ing leftists? Like those I went to grad school with in WI, whiule taking time away from Action Billiards & CueNique?
This is a pool/billiard blog. Get real! </div></div>

I thought I made my point.

Q

LWW
11-08-2011, 05:35 AM
<div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: Soflasnapper</div><div class="ubbcode-body">Your claim of the original meaning is likely absurd. Reviewers could check the math, check the logic, review the details of the experimental protocols, and the like, but what they almost certainly would never do is try to replicate the findings. </div></div>

Your ignorance in the field of science is astounding.

LWW
11-08-2011, 05:38 AM
<div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: Louie</div><div class="ubbcode-body">The reply is right on. I've been a pro scientist for &gt; 40 years (still connected to THE Los Alamos National Lab). I have and still do peer reviews of lots of s**t. As the replier stated, nobody has time to replicate results, just ascertain if reasonable procedures were used, the 'scientific method' was followed, and if other works/citations were adequately explored.
Are you bored or what?
What f'ing leftists? Like those I went to grad school with in WI, whiule taking time away from Action Billiards & CueNique?
This is a pool/billiard blog. Get real! </div></div>

You might start at the IPCC.

Welcome back woofie.

Soflasnapper
11-08-2011, 10:24 AM
<div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: LWW</div><div class="ubbcode-body"><div class="ubbcode-block"><div class="ubbcode-header">Originally Posted By: Soflasnapper</div><div class="ubbcode-body">Your claim of the original meaning is likely absurd. Reviewers could check the math, check the logic, review the details of the experimental protocols, and the like, but what they almost certainly would never do is try to replicate the findings. </div></div>

Your ignorance in the field of science is astounding. </div></div>

Are you claiming to be out standing in that field?

Well, if so, come inside, as you are all wet!

Here's Wiki on peer review. (http://en.wikipedia.org/wiki/Peer_review)

Somehow, in a large mass of words concerning this practice, they forgot to mention how the peer-reviewers/paper referees are replicating the experiment to decide whether to have it published.

What can explain such vast ignorance, and who in this drama is displaying it?

We've already seen how you cannot do math correctly. For some reason we should now take you as an authority on science, and peer-review?