Hunter Moseley says that good reproducibility practices are essential to fully harness the potential of big data. Credit: Hunter N.B. Moseley
We are in the middle of a data-driven science boom. Huge, complex data sets, often with large numbers of individually measured and annotated features, are fodder for voracious artificial intelligence (AI) and machine-learning systems, with details of new applications being published almost daily.
But publication in itself is not synonymous with factuality. Just because a paper, method or data set is published does not mean that it is correct and free from mistakes. Without checking for accuracy and validity before using these resources, scientists will surely encounter errors. In fact, they already have.
In the past few months, members of our bioinformatics and systems-biology laboratory have reviewed state-of-the-art machine-learning methods for predicting the metabolic pathways that metabolites belong to, on the basis of the molecules' chemical structures (ref. 1). We wanted to find, implement and potentially improve the best methods for identifying how metabolic pathways are perturbed under different conditions: for instance, in diseased versus normal tissues.
We found several papers, published between 2011 and 2022, that demonstrated the application of different machine-learning methods to a gold-standard metabolite data set derived from the Kyoto Encyclopedia of Genes and Genomes (KEGG), which is maintained at Kyoto University in Japan. We expected the algorithms to improve over time, and saw just that: newer methods performed better than older ones did. But were those improvements real?
Scientific reproducibility enables careful vetting of data and results by peer reviewers as well as by other research groups, especially when the data set is used in new applications. Fortunately, in keeping with best practices for computational reproducibility, two of the papers (refs 2,3) in our analysis included everything that is needed to put their observations to the test: the data set they used, the computer code they wrote to implement their methods and the results generated from that code. Three of the papers (refs 2-4) used the same data set, which allowed us to make direct comparisons. When we did so, we found something unexpected.
It is common practice in machine learning to split a data set in two and to use one subset to train a model and another to evaluate its performance. If there is no overlap between the training and testing subsets, performance in the testing phase will reflect how well the model learns and performs. But in the papers we analysed, we identified a catastrophic data leakage problem: the two subsets were cross-contaminated, muddying the ideal separation. More than 1,700 of 6,648 entries from the KEGG COMPOUND database (about one-quarter of the total data set) were represented more than once, corrupting the cross-validation steps.
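The kind of leakage described above, and the fix, can be sketched in a few lines. This is a toy example with made-up compound IDs, not the actual KEGG data or any of the published pipelines: each hypothetical compound appears three times, so a naive shuffle-and-cut split lets duplicates straddle the train/test boundary, whereas de-duplicating by identifier before splitting keeps the subsets disjoint.

```python
import random

random.seed(0)

# Toy data set: 33 hypothetical compound IDs, each duplicated three times
# (standing in for the repeated KEGG COMPOUND entries described above).
entries = [(f"C{i % 33:05d}", i) for i in range(99)]

# Naive split: shuffle and cut, ignoring duplicates.
random.shuffle(entries)
train, test = entries[:79], entries[79:]

train_ids = {cid for cid, _ in train}
leaked = [cid for cid, _ in test if cid in train_ids]
print(f"test entries also seen in training: {len(leaked)} of {len(test)}")

# Fix: de-duplicate by compound ID *before* splitting, so no compound
# can appear in both subsets.
unique = list({cid: (cid, feat) for cid, feat in entries}.values())
random.shuffle(unique)
train_u, test_u = unique[:26], unique[26:]
assert {cid for cid, _ in train_u}.isdisjoint({cid for cid, _ in test_u})
```

Because the test subset here holds 20 entries and every duplicate group has three members, the naive split is guaranteed to leak at least one compound into both subsets; the de-duplicated split cannot, by construction.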
When we removed the duplicates in the data set and applied the published methods again, the observed performance was less impressive than it had first seemed. There was a substantial drop in the F1 score (a machine-learning evaluation metric that is similar to accuracy but is calculated in terms of precision and recall), from 0.94 to 0.82. A score of 0.94 is reasonably high and indicates that the algorithm is usable in many scientific applications. A score of 0.82, however, suggests that it can be useful, but only for certain applications and only if handled appropriately.
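For readers unfamiliar with the metric, the F1 score is the harmonic mean of precision and recall. A minimal sketch follows; the precision/recall pairs are purely illustrative (one pair yielding 0.94, one yielding roughly 0.82), not the confusion-matrix values from the studies discussed.

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 = harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values only, not taken from the papers:
print(round(f1_score(0.94, 0.94), 2))  # 0.94
print(round(f1_score(0.85, 0.79), 2))  # 0.82
```

Because it is a harmonic mean, F1 punishes imbalance: a model with high precision but poor recall (or vice versa) scores much lower than the arithmetic mean of the two would suggest.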
It is, of course, unfortunate that these studies were published with flawed results stemming from the corrupted data set; our work calls their findings into question. But because the authors of two of the studies followed best practices in computational scientific reproducibility and made their data, code and results fully available, the scientific method worked as intended, and the flawed results were detected and (to the best of our knowledge) are being corrected.
The third team, as far as we can tell, included neither their data set nor their code, making it impossible for us to properly evaluate their results. If all of the groups had neglected to make their data and code available, this data-leakage problem would have been almost impossible to catch. That would be a problem not just for the studies that were already published, but also for every other scientist who might want to use that data set for their own work.
More insidiously, the erroneously high performance reported in these papers could dissuade others from attempting to improve on the published methods, because they would incorrectly find their own algorithms lacking by comparison. Equally troubling, it could also complicate journal publication, because demonstrating improvement is often a requirement for successful review, potentially holding back research for years.
So, what should we do with these erroneous studies? Some would argue that they should be retracted. We would caution against such a knee-jerk reaction, at least as a blanket policy. Because two of the three papers in our analysis included the data, code and full results, we could evaluate their findings and flag the problematic data set. On one hand, that behaviour should be encouraged, for instance by allowing the authors to publish corrections. On the other, retracting studies with both highly flawed results and little or no support for reproducible research would send the message that scientific reproducibility is not optional. Furthermore, demonstrating support for full scientific reproducibility provides a clear litmus test for journals to use when deciding between correction and retraction.
Now, scientific data are growing more complex every day. Data sets used in complex analyses, especially those involving AI, are part of the scientific record. They should be made available along with the code with which to analyse them, either as supplemental material or through open data repositories such as Figshare (Figshare has partnered with Springer Nature, which publishes Nature, to facilitate data sharing in published manuscripts) and Zenodo, which can ensure data persistence and provenance. But those steps will help only if researchers also learn to treat published data with some scepticism, if only to avoid repeating others' mistakes.
In the AI science boom, beware: your results are only as good as your data - Nature.com