---
license: apache-2.0
dataset: sft
tags:
- finetuned
- multimodal
inference: false
---
|
|
|
|
|
These are weights for a version of `checkpoints/stage2/llava-moleculestm-vicuna-7b-v1.5-pretrain_rxn_nc` finetuned for multimodal applications. |
|
|
|
|
|
### Modalities |
|
|
|
|
|
* Molecule2DModality (use `<molecule_2d>` in the prompt text and provide the corresponding `molecules` input; see the sketch below)
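A minimal sketch of a prompt payload for this modality (the SMILES-string encoding of `molecules` is an assumption here; the PRESTO repo defines the exact input format):

```python
# Hypothetical prompt payload for the molecule modality.
# "<molecule_2d>" marks where the projected molecule embedding is spliced
# into the token stream; "molecules" carries the molecule inputs (assumed
# here to be SMILES strings -- check the PRESTO repo for the real format).
example = {
    "text": "What reaction class does <molecule_2d> belong to?",
    "molecules": ["CC(=O)Oc1ccccc1C(=O)O"],  # aspirin
}
```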
|
|
|
|
|
### Usage |
|
|
|
|
|
GitHub: https://github.com/IDEA-XL/PRESTO (includes training scripts and a basic inference server)
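A rough sketch of querying such an inference server over HTTP (the port, endpoint path, and JSON schema below are assumptions, not a confirmed interface; see the PRESTO README for the real one):

```python
import requests

# Hypothetical request to the inference server; the endpoint name, port,
# and payload schema are assumptions -- consult the PRESTO repo for the
# actual interface.
resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "text": "Describe the molecule <molecule_2d>.",
        "molecules": ["Cn1cnc2c1c(=O)n(C)c(=O)n2C"],  # caffeine
    },
    timeout=60,
)
print(resp.json())
```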
|
|
|
|
|
### Dataset |
|
|
|
|
|
sft (765,299 examples)
|
|
|
|
|
### Training Device(s) |
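The eight training GPUs below were likely enumerated with something like `nvidia-smi --query-gpu=name,pci.bus_id,vbios_version --format=csv` (the exact command is inferred from the column header):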
|
|
|
|
|
```
name, pci.bus_id, vbios_version
A100-SXM4-40GB, 00000000:07:00.0, 92.00.45.00.03
A100-SXM4-40GB, 00000000:0F:00.0, 92.00.45.00.03
A100-SXM4-40GB, 00000000:47:00.0, 92.00.45.00.03
A100-SXM4-40GB, 00000000:4E:00.0, 92.00.45.00.03
A100-SXM4-40GB, 00000000:87:00.0, 92.00.45.00.03
A100-SXM4-40GB, 00000000:90:00.0, 92.00.45.00.03
A100-SXM4-40GB, 00000000:B7:00.0, 92.00.45.00.03
A100-SXM4-40GB, 00000000:BD:00.0, 92.00.45.00.03
```
|
|
|
|
|
|
|
|
### Model |
|
|
|
|
|
```
LlamaLMMForCausalLM.model =

LlamaLMMForCausalLM(
  (model): LlamaLMMModel(
    (embed_tokens): Embedding(32000, 4096, padding_idx=0)
    (layers): ModuleList(
      (0-31): 32 x LlamaDecoderLayer(
        (self_attn): LlamaSdpaAttention(
          (q_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (k_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (v_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
          (up_proj): Linear(in_features=4096, out_features=11008, bias=False)
          (down_proj): Linear(in_features=11008, out_features=4096, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
    (molecule_2d_lmm_projector): _MLPVectorProjector(
      (mlp): Sequential(
        (0): Linear(in_features=300, out_features=4096, bias=True)
        (1): GELU(approximate='none')
        (2): Linear(in_features=4096, out_features=4096, bias=True)
      )
    )
  )
  (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
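The `molecule_2d_lmm_projector` above is the multimodal connector: a two-layer MLP that lifts 300-dim MoleculeSTM graph embeddings into the LLM's 4096-dim token space. A minimal PyTorch re-implementation derived from the module dump (the class name and forward signature here are illustrative, not the repo's code):

```python
import torch
import torch.nn as nn

class MLPVectorProjector(nn.Module):
    """Sketch of the printed molecule_2d_lmm_projector: 300 -> 4096 -> 4096."""

    def __init__(self, in_dim: int = 300, hidden_dim: int = 4096, out_dim: int = 4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim, bias=True),
            nn.GELU(),
            nn.Linear(hidden_dim, out_dim, bias=True),
        )

    def forward(self, mol_features: torch.Tensor) -> torch.Tensor:
        # (batch, n_mol_tokens, 300) -> (batch, n_mol_tokens, 4096),
        # ready to be spliced in at the <molecule_2d> positions.
        return self.mlp(mol_features)
```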
|
|
|
|
|
|