
Sunday, April 30, 2023

Galactica: A Large Language Model for Science Research

Quite a useful idea for sharing organized science data with Large Language Models.

 https://www.youtube.com/watch?v=ZTs_mXwMCs8&t=1418s

https://arxiv.org/abs/2211.09085

Galactica: A Large Language Model for Science


[Submitted on 16 Nov 2022]

Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, Robert Stojnic

Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. In this paper we introduce Galactica: a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%. It also sets a new state-of-the-art on downstream tasks such as PubMedQA and MedMCQA dev of 77.6% and 52.9%. And despite not being trained on a general corpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. We believe these results demonstrate the potential for language models as a new interface for science. We open source the model for the benefit of the scientific community.
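Since the authors note that the model was open sourced, a minimal sketch of prompting a released checkpoint follows. This assumes the publicly hosted Hugging Face weights (the identifier facebook/galactica-1.3b is one of the released sizes) and the transformers library; it is an illustration of using a language model as a science interface, not the paper's own evaluation code.

```python
# Minimal sketch: generate text from a Galactica checkpoint.
# Assumes the Hugging Face `transformers` library and the released
# facebook/galactica-1.3b weights are available locally or downloadable.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "facebook/galactica-1.3b"  # assumed public checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A scientific prompt; the model continues it from its scientific corpus.
prompt = "The attention mechanism in the Transformer architecture is defined as"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Larger checkpoints were also released (up to 120B parameters); swapping the model name trades generation quality against memory and latency.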

Subjects: Computation and Language (cs.CL); Machine Learning (stat.ML)

Cite as: arXiv:2211.09085 [cs.CL]

  (or arXiv:2211.09085v1 [cs.CL] for this version)

https://doi.org/10.48550/arXiv.2211.09085


Submission history

From: Robert Stojnic

[v1] Wed, 16 Nov 2022 18:06:33 UTC (10,715 KB)
