dassarthak18 committed · Commit 1a9693f · verified · 1 parent: ca24730

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED

```diff
@@ -17,7 +17,7 @@ library_name: sfttrainer
 
 # BitNet 1.58b 2B Fine-tuned on F*
 
-This model is a SFTTrainer fine-tuned version of Microsoft's Phi-2 trained to generate F* code and automated proofs in the F* formal verification language. In their paper [Towards Neural Synthesis for SMT-Assisted Proof-Oriented Programming](https://arxiv.org/pdf/2405.01787), Saikat Chakraborty et al. (2024) use Phi-2, among other LLMs, for evaluating their dataset. This LLM is relatively small and can be fine-tuned on consumer-grade GPUs using LoRA. We wanted to see what could be achieved with BitNet.
+This model is a SFTTrainer fine-tuned version of Microsoft's BitNet 1.58b 2B LLM trained to generate F* code and automated proofs in the F* formal verification language.
 
 ## Model Details
```