radinplaid committed
Commit 4367ab1 · verified · 1 Parent(s): f88d04b

Update README.md

Files changed (1)
  1. README.md +11 -8
README.md CHANGED
@@ -42,7 +42,7 @@ model-index:
 * 195M parameter transformer 'big' with 8 encoder layers and 2 decoder layers
 * 20k sentencepiece vocabularies
 * Exported for fast inference to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format
-* Training data: https://huggingface.co/datasets/quickmt/quickmt-train.fa-en/tree/main
+* Training data: https://huggingface.co/datasets/quickmt/quickmt-train.ur-en/tree/main
 
 See the `eole` model configuration in this repository for further details and the `eole-model` for the raw `eole` (pytorch) model.
 
@@ -51,22 +51,25 @@ See the `eole` model configuration in this repository for further details and th
 
 You must install the Nvidia cuda toolkit first, if you want to do GPU inference.
 
-Next, install the `quickmt` python library and download the model:
+Next, install the `quickmt` [python library](https://github.com/quickmt/quickmt).
 
 ```bash
 git clone https://github.com/quickmt/quickmt.git
 pip install ./quickmt/
-
-quickmt-model-download quickmt/quickmt-en-ur ./quickmt-en-ur
 ```
 
-Finally use the model in python:
+Finally, use the model in python:
 
 ```python
 from quickmt import Translator
-
-# Auto-detects GPU, set to "cpu" to force CPU inference
-t = Translator("./quickmt-en-ur/", device="auto")
+from huggingface_hub import snapshot_download
+
+# Download model (if not downloaded already) and return path to local model
+# Device is either 'auto', 'cpu' or 'cuda'
+t = Translator(
+    snapshot_download("quickmt/quickmt-en-ur", ignore_patterns="eole-model/*"),
+    device="cpu"
+)
 
 # Translate - set beam size to 5 for higher quality (but slower speed)
 sample_text = 'Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical and scientific division of the Canadian Diabetes Association cautioned that the research is still in its early days.'
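
A note on the `ignore_patterns="eole-model/*"` argument added in this commit: `snapshot_download` filters repository files with shell-style glob patterns, so the raw `eole` checkpoint directory is skipped and only the CTranslate2 files needed for inference are fetched. A minimal sketch of that matching behaviour, using Python's standard `fnmatch` (the same glob style); the file names below are hypothetical examples, not an actual repository listing:

```python
from fnmatch import fnmatch

# Shell-style glob pattern, as passed to snapshot_download's ignore_patterns.
pattern = "eole-model/*"

# Hypothetical paths: anything under eole-model/ matches and is skipped...
assert fnmatch("eole-model/model.pt", pattern)

# ...while top-level CTranslate2 and tokenizer files do not match,
# so they are downloaded.
assert not fnmatch("model.bin", pattern)
assert not fnmatch("src.spm.model", pattern)
```

This is why the download is smaller than the full repository: the pytorch `eole-model` copy exists only for retraining, not inference.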