BULLSHIT. Portland's full of it, and we're past the point where it's cute. When Portlanders voted against fluoride, we made a declaration: That in Portland, junk science was more valued than legitimate science. It was a low point for our city, but hopefully it taught us something: That unless we learn how to tell the difference between fact and fiction, we're setting ourselves up to fail—and in the process, failing to reap the physical and social benefits that modern medicine can provide.

To figure out how to tell the difference between legitimate science and junk science, I spoke with Amy Evans, MD, who works as a general practitioner in Portland. The list of Portlanders' dubious health practices that I threw at Evans was long—from Echinacea, to hypnotism, to juice cleanses, to those ear candles that idiots burn themselves with. Evans had tips about a few, like neti pots ("as long as you're using clean water, they are likely beneficial for particulate allergen removal and treatment of sinus infections") and natural supplements ("some have data, but most don't—or don't have good, consistent data derived from decent studies"). "Most of the other stuff on your list," she told me, "probably doesn't have good data at all, or has data showing that it doesn't work, and would almost certainly fall under the heading of junk science."

When it comes to legitimate science, good data is key. And good data is what's usually missing when Portlanders wring their hands over, say, vaccines. Admittedly, there's a reason most people don't dig deep enough to get to good data: Doing so requires delving into peer-reviewed journals, which are super boring. But while they're dry, it's in these journals that well-documented, primary-source articles show which therapies do and don't work. "Basically, if something doesn't have good data to support its use, I don't recommend it to patients," says Evans.

"There are several things to look for," Evans says about these studies. "One is how the research was conducted. When testing a drug or other medical intervention, the best type of study is a double-blind randomized control study with a large number of participants. We're getting kinda boring and technical here, but to break that down: [One group is] receiving an intervention, and their outcomes are compared to a group either receiving placebo or another comparable therapy. And no one knows which group they're in." It's this kind of study, says Evans, that "minimizes a number of biases and factors that could make the data unreliable."

"Another important factor," says Evans, "is the reproducibility of the study's outcomes. What you really want is a number of randomized control studies that come to similar conclusions." And there's one last big thing to keep an eye on: "Look at the conflict-of-interest section of a research paper," says Evans. "You might not trust the study's positive findings as much if the study is performed by someone who would benefit from a positive outcome—like a drug manufacturer showing that their newly patented, expensive medication is better than the now-outdated, and coincidentally now generic, prior standard of care."

For those of us who aren't doctors, Evans points toward a place to start researching: publicly available resources like pubmed.gov. A searchable database, PubMed doesn't always have the full text of research papers, but it does offer those papers' abstracts, along with links out to the full peer-reviewed studies where they're available—or, as Evans puts it, "good summaries of currently available data." True, they aren't quite as entertaining as Portlanders' polemics. But there's a lot less bullshit.
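One more thing for the scripting-inclined: the search box at pubmed.gov is backed by NCBI's public E-utilities API, so you can pull the same results programmatically. Here's a rough Python sketch using only the standard library; the search term is just an example, and while NCBI asks heavy users to register for an API key, a one-off lookup like this doesn't need one.

    import json
    import urllib.parse
    import urllib.request

    BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

    def pubmed_search(term, retmax=5):
        """Return PubMed IDs for the top hits on a search term."""
        query = urllib.parse.urlencode(
            {"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax}
        )
        with urllib.request.urlopen(f"{BASE}/esearch.fcgi?{query}") as resp:
            return json.load(resp)["esearchresult"]["idlist"]

    def pubmed_titles(ids):
        """Fetch article titles for a list of PubMed IDs."""
        query = urllib.parse.urlencode(
            {"db": "pubmed", "id": ",".join(ids), "retmode": "json"}
        )
        with urllib.request.urlopen(f"{BASE}/esummary.fcgi?{query}") as resp:
            records = json.load(resp)["result"]
        return [records[i]["title"] for i in ids]

    # Example: what does the actual literature say about echinacea?
    ids = pubmed_search("echinacea common cold randomized controlled trial")
    for title in pubmed_titles(ids):
        print(title)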

DEPT. OF CORRECTIONS: This article has been edited since original publication to more accurately reflect the terminology used in double-blind randomized controlled studies.