Research-EAI committed
Commit c439c8d · verified · 1 parent: 56f262a

Update usage instructions

Files changed (1)
  1. README.md +20 -20
README.md CHANGED
@@ -1,20 +1,20 @@
- ---
- license: apache-2.0
- ---
- This is a GGUF-formatted checkpoint of
- [rnj-1-instruct](https://huggingface.co/EssentialAI/rnj-1-instruct) suitable
- for use in llama.cpp, Ollama, or others. This has been quantized with the
- Q4\_K\_M scheme, which results in model weights of size 4.8GB.
-
- For llama.cpp, install (after version 7328) and run either of these commands:
-
- ```bash
- llama-cli -hf EssentialAI/rnj-1-instruct-GGUF
- llama-server -hf EssentialAI/rnj-1-instruct-GGUF -c 0 # and open browser to localhost:8080
- ```
-
- For Ollama, install (after version v0.13.3) and run:
-
- ```bash
- ollama run rnj-1
- ```
 
+ ---
+ license: apache-2.0
+ ---
+ This is a GGUF-formatted checkpoint of
+ [rnj-1-instruct](https://huggingface.co/EssentialAI/rnj-1-instruct) suitable
+ for use in llama.cpp, Ollama, or others. This has been quantized with the
+ Q4\_K\_M scheme, which results in model weights of size 4.8GB.
+
+ For llama.cpp, install (after version 7328, e.g., on Mac OSX `brew install llama.cpp`) and run either of these commands:
+
+ ```bash
+ llama-cli -hf EssentialAI/rnj-1-instruct-GGUF
+ llama-server -hf EssentialAI/rnj-1-instruct-GGUF -c 0 # and open browser to localhost:8080
+ ```
+
+ For Ollama, install (after version v0.13.3 -- versions can be found [here](https://github.com/ollama/ollama/releases)) and run:
+
+ ```bash
+ ollama run rnj-1
+ ```
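
Beyond the built-in web UI, `llama-server` also exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it with `curl`, assuming the server was started with the command above on its default port 8080 (the prompt and sampling parameters below are placeholders, not part of the committed instructions):

```bash
# Assumes llama-server is already running locally on its default port 8080.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Explain what Q4_K_M quantization trades off."}
    ],
    "temperature": 0.7
  }'
```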
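For Ollama, `ollama run rnj-1` opens an interactive chat session. As a rough sketch, the same model can also be prompted non-interactively from the shell or through Ollama's local REST API on its default port 11434 (the prompt text here is a placeholder):

```bash
# One-shot prompt without entering the interactive chat
ollama run rnj-1 "Summarize what a GGUF checkpoint is in one sentence."

# Equivalent request against Ollama's local REST API (default port 11434)
curl http://localhost:11434/api/generate -d '{
  "model": "rnj-1",
  "prompt": "Summarize what a GGUF checkpoint is in one sentence.",
  "stream": false
}'
```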