Experts Find Definition of ‘Expert’ to be Pretty Loose

BOSTON – In a breakthrough study conducted last month, experts discovered what many in the field have deemed a real blow to progress. After several weeks of observation and research, the scientific community has concluded that the term “expert” is much more loosely defined than previously thought.

“We studied millions of mentions across LinkedIn and other social media channels for claims of expertise and found nearly zero correlation between those claims and actual skill sets,” noted James Splitt, lead researcher. He further clarified that “the same was true across all industries, even after removing incidents where ‘expert’ was spelled wrong…which happened way more than you would think.”

This is not the first time that researchers have looked into the validity of expertise claims. Back in 2013, a team of students pored through thousands of hours of cable news footage, fact-checking the pundits who appeared as experts. “[The study] was really quite surprising,” said then-sophomore Will Pemberton. “After the first few hours we thought there might be a link between being an expert and being a pretty white woman, given the sheer number presenting ‘facts’ on the news.” Despite this original hypothesis, Pemberton and his team quickly discovered that expertise was being claimed by almost anyone with more than 1,000 Twitter followers, a self-published e-book, or basic physical proximity to an event.

The study last month built upon those initial findings, going beyond TV news to dig into the internet, where everyone goes for reliable information. As part of the initial discovery, Splitt and his team found that nearly 1 in 3 internet users claim some type of expertise online. Of those, only 3% had more than one year of professional experience in their industry. Even more troubling is that when the researchers brought this up to the experts, 88% responded with “wait…what do you mean?” The other 12% were spam bots.

Legitimate experts were furious at the initial results and clamored to join Splitt in the remaining rounds of field research. A random sample of experts were each given a quiz in their respective fields, with questions designed to test understanding beyond basic subjects and skills. Of those who took the test, 31% passed, 65% failed, and 4% decided to change their profile keyword to “guru.”

We asked Splitt to define the statistical significance of his research and provide further quantitative analysis of his findings. He declined, telling us that although he is definitely an expert in primary research and analysis, he “didn’t really feel like talking about that right now.”
