author    | Georgi Gerganov <ggerganov@gmail.com> | 2023-03-21 18:10:32 +0200
committer | GitHub <noreply@github.com>           | 2023-03-21 18:10:32 +0200
commit    | 1daf4dd71235dbbf537738e7ad53daad8d97586f (patch)
tree      | 09cb46785e1af9fb43f91273d7fe63c2b010f3db /README.md
parent    | dc6a845b8573cd7d06c6b295241d26f311602a1f (diff)
Minor style changes
Diffstat (limited to 'README.md')
-rw-r--r-- | README.md | 4
1 file changed, 3 insertions(+), 1 deletion(-)
@@ -178,13 +178,15 @@ If you want a more ChatGPT-like experience, you can run in interactive mode by p
 In this mode, you can always interrupt generation by pressing Ctrl+C and enter one or more lines of text which will be converted into tokens and appended to the current context. You can also specify a *reverse prompt* with the parameter `-r "reverse prompt string"`. This will result in user input being prompted whenever the exact tokens of the reverse prompt string are encountered in the generation. A typical use is to use a prompt which makes LLaMa emulate a chat between multiple users, say Alice and Bob, and pass `-r "Alice:"`.
 
 Here is an example few-shot interaction, invoked with the command
-```
+
+```bash
 # default arguments using 7B model
 ./chat.sh
 
 # custom arguments using 13B model
 ./main -m ./models/13B/ggml-model-q4_0.bin -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt
 ```
+
 Note the use of `--color` to distinguish between user input and generated text.
 
 ![image](https://user-images.githubusercontent.com/1991296/224575029-2af3c7dc-5a65-4f64-a6bb-517a532aea38.png)