Improve model card: Add abstract, pipeline tag, and library name
This PR improves the model card by adding the paper's abstract, setting the `pipeline_tag` to `text-generation` for discoverability (so people can find the model at https://huggingface.co/models?pipeline_tag=text-generation), and setting the `library_name` to `transformers` for proper library detection.
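After the change, the card's YAML front matter reads as follows (reproduced from the diff below):

```yaml
---
license: mit
library_name: transformers
pipeline_tag: text-generation
---
```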
README.md CHANGED

```diff
@@ -1,17 +1,22 @@
 ---
 license: mit
-
+library_name: transformers
+pipeline_tag: text-generation
 ---
 
 # Model Card for SciLitLLM1.5
 
 SciLitLLM1.5 adapts a general large language model for effective scientific literature understanding. Starting from the Qwen2.5-7B/14B model, SciLitLLM1.5-7B/14B goes through a hybrid strategy that integrates continual pre-training (CPT) and supervised fine-tuning (SFT), to simultaneously infuse scientific domain knowledge and enhance instruction-following capabilities for domain-specific tasks.
 
+## Paper Abstract
+
+Scientific literature understanding is crucial for extracting targeted information and garnering insights, thereby significantly advancing scientific discovery. Despite the remarkable success of Large Language Models (LLMs), they face challenges in scientific literature understanding, primarily due to (1) a lack of scientific knowledge and (2) unfamiliarity with specialized scientific tasks. To develop an LLM specialized in scientific literature understanding, we propose a hybrid strategy that integrates continual pre-training (CPT) and supervised fine-tuning (SFT), to simultaneously infuse scientific domain knowledge and enhance instruction-following capabilities for domain-specific tasks. In this process, we identify two key challenges: (1) constructing high-quality CPT corpora, and (2) generating diverse SFT instructions. We address these challenges through a meticulous pipeline, including PDF text extraction, parsing content error correction, quality filtering, and synthetic instruction creation. Applying this strategy, we present a suite of LLMs: SciLitLLM, specialized in scientific literature understanding. These models demonstrate promising performance on scientific literature understanding benchmarks. Our contributions are threefold: (1) We present an effective framework that integrates CPT and SFT to adapt LLMs to scientific literature understanding, which can also be easily adapted to other domains. (2) We propose an LLM-based synthesis method to generate diverse and high-quality scientific instructions, resulting in a new instruction set -- SciLitIns -- for supervised fine-tuning in less-represented scientific domains. (3) SciLitLLM achieves promising performance improvements on scientific literature understanding benchmarks.
+
 In this process, we identify two key challenges: (1) constructing high-quality CPT corpora, and (2) generating diverse SFT instructions. We address these challenges through a meticulous pipeline, including PDF text extraction, parsing content error correction, quality filtering, and synthetic instruction creation.
 
 Applying this strategy, we present SciLitLLM-7B and 14B, specialized in scientific literature understanding, which demonstrates promising performance on scientific literature understanding benchmarks.
 
-We observe promising performance enhancements, **with an average improvement of 4.0
+We observe promising performance enhancements, **with an average improvement of 4.0% on SciAssess and 10.1% on SciRIFF, compared to the leading LLMs under 10B parameters**. Notably, **SciLitLLM-7B even outperforms Llama3.1 and Qwen2.5 with 70B parameters on SciRIFF**. Additionally, SciLitLLM-14B achieves leading results on both benchmarks, surpassing other open-source LLMs. Further ablation studies demonstrate the effectiveness of each module in our pipeline.
 
 See the [paper](https://arxiv.org/abs/2408.15545) for more details and [github](https://github.com/dptech-corp/Uni-SMART) for data processing codes.
 
@@ -36,7 +41,8 @@ model = AutoModelForCausalLM.from_pretrained(
     device_map="auto"
 )
 tokenizer = AutoTokenizer.from_pretrained("Uni-SMART/SciLitLLM1.5-14B")
-prompt = "Can you summarize this article for me
+prompt = "Can you summarize this article for me?
+<ARTICLE>"
 messages = [
     {"role": "system", "content": "You are a helpful assistant."},
     {"role": "user", "content": prompt}
```