The Alzheimer’s retraction item was interesting—and timely, because I read this morning that John Wiley & Sons has retracted more than 11,000 faked papers over just the last two years, and that it’s closing 19 journals because they’re “infected by large-scale research fraud.” The article where I read that cited Marcia Angell’s 2009 comment: “It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines.” The article also quoted Richard Horton from 2015: “The case against science is straightforward: Much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.” John Ioannidis has said that most published research findings are false, and that this is especially the case “when there is greater financial and other interest and prejudice.” He has also said that “claimed research findings may often be simply accurate measures of the prevailing bias.”
Given all that, and admitting that I’m no expert, is it really a good idea to rely *more* on computers using AI? Is it really research if a computer just predicts stuff, regardless of whether it’s running traditional software or AI? Is it a good idea to keep modeling at all, since it’s so susceptible to the programmers’ biases? From the information in the item about virtual control groups (VCGs), I gather that Charles River Laboratories is limiting AI use to replacing control groups, which I guess means the AI is restricted to modeling known behaviors and statistically likely baseline physiological reactions of untreated animals. But in general, maybe the bias introduced by modeling is just as much to blame for all the errors and retractions across so many different disciplines as “large-scale research fraud” is. Somebody had to decide what to put in and what to leave out. Isn’t that the first place to look for bias? Isn’t it by definition programmed right into any model?

I always understood science as a process: prediction (hypothesis) first, then testing/observation, then developing a theory to explain the observations, or else going back to the drawing board to develop another hypothesis followed by more testing and observation. But adding AI and computer modeling into the mix means that people aren’t observing anything in reality, or at least are observing fewer things directly. Doesn’t that introduce an element of not knowing what you don’t know? Even when AI is restricted to modeling animal control groups, how can researchers trust that they’re seeing the same results they would have gotten with real animals, and isn’t that like eliminating the control group altogether? The accuracy of the results relies on the accuracy of the programming, whether it’s traditionally programmed software or whatever is fed into an AI to “learn” on. I know there can be a lot of variables to consider (maybe a mind-numbing number), and I know that computers make it easier to manage them, but if “most published research findings are false,” how well are the variables really being managed? If the bias is programmed in, how much harder is it to discover and eliminate? Do models really produce data, or do they just filter and rearrange biases and assumptions?

Eleven thousand faked papers sounds more like fraud than bias, but the problems Horton and Ioannidis are pointing out probably can’t be explained entirely by fraud. If the Charles River AI models are expanded beyond replacing animals in control groups, wouldn’t researchers potentially miss information or results? Aren’t the AI’s predictions of how a live animal would react to a certain treatment limited to what the humans who programmed it knew about its physiology? And while no single person knows everything the AI “knows,” isn’t the AI still drawing from what an aggregate of people know? Isn’t that an automatic ceiling on discovery, if the bias is toward assuming the AI is right?
I know I always come here and sound negative, but to a layman like me, modeling doesn’t look like research or science. Maybe the problem of mass retractions is a problem of people substituting modeling for science. Scientists observed, experimented, and researched for thousands of years without computers to handle otherwise-unmanageable variables. Maybe people need to say “we don’t know” more often instead of conflating modeling with observable data, even though that doesn’t pay as well. Maybe computers should more often be reserved for computation: number-crunching that would take years if done by humans. Then less of what does get published would have to be retracted, and science would regain respect as a process instead of a way to get hoped-for or paid-for results. Or maybe I don’t know WTF I’m talking about. But these articles make me at least try to think.
Also, I had questions (from last week’s article) about the drug that regenerates teeth by turning off the protein that suppresses tooth growth: Do they have to intervene at all to make sure the new growth is confined to the right places in the mouth and that the teeth are positioned correctly? Is that even possible? Is it a targeted application, or more big-picture, where they turn off the USAG-1 protein and stand back? Because the first thing I thought of when I saw that item was teeth growing in weird places, like in teratomas. But I think that’s a phenomenon with a completely different cause.
If they're willing to fabricate evidence for something as serious as Alzheimer's, I wonder what else they'll do.
Oh man, that retraction! Ouch. Hadn't heard about this, so thanks for sharing.
I saw there was a Nature piece the other day about AI animal models. Pretty wild that they might start being used in research.