author    Gary Mulder <gjmulder@gmail.com>  2023-03-24 15:23:09 +0000
committer GitHub <noreply@github.com>       2023-03-24 15:23:09 +0000
commit    f4f5362edb01b05c383b23f36d7b3489c77061b5 (patch)
tree      2a481123baa51665c2517cb147f0cd4754ae3cac
parent    863f65e2e32dc1e6d23c96a4811bf382d6b2a548 (diff)
Update README.md (#444)
Added explicit **bolded** instructions clarifying that people need to request access to models from Facebook and never through this repo.
-rw-r--r--  README.md  26
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/README.md b/README.md
index 06799b5..0830074 100644
--- a/README.md
+++ b/README.md
@@ -219,9 +219,11 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.
### Obtaining and verifying the Facebook LLaMA original model and Stanford Alpaca model data
-* The LLaMA models are officially distributed by Facebook and will never be provided through this repository. See this [pull request in Facebook's LLaMA repository](https://github.com/facebookresearch/llama/pull/73/files) if you need to obtain access to the model data.
-* Please verify the sha256 checksums of all downloaded model files to confirm that you have the correct model data files before creating an issue relating to your model files.
-* The following command will verify if you have all possible latest files in your self-installed `./models` subdirectory:
+- **Under no circumstances share IPFS, magnet links, or any other links to model downloads anywhere in this repository, including in issues, discussions, or pull requests. They will be immediately deleted.**
+- The LLaMA models are officially distributed by Facebook and will **never** be provided through this repository.
+- Refer to [Facebook's LLaMA repository](https://github.com/facebookresearch/llama/pull/73/files) if you need to request access to the model data.
+- Please verify the sha256 checksums of all downloaded model files to confirm that you have the correct model data files before creating an issue relating to your model files.
+- The following command will verify that you have all the latest files in your self-installed `./models` subdirectory:
`sha256sum --ignore-missing -c SHA256SUMS` on Linux
@@ -229,15 +231,15 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.
`shasum -a 256 --ignore-missing -c SHA256SUMS` on macOS
-* If your issue is with model generation quality then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
- * LLaMA:
- * [Introducing LLaMA: A foundational, 65-billion-parameter large language model](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
- * [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
- * GPT-3
- * [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
- * GPT-3.5 / InstructGPT / ChatGPT:
- * [Aligning language models to follow instructions](https://openai.com/research/instruction-following)
- * [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)
+- If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
+ - LLaMA:
+ - [Introducing LLaMA: A foundational, 65-billion-parameter large language model](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
+ - [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
+  - GPT-3:
+ - [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
+ - GPT-3.5 / InstructGPT / ChatGPT:
+ - [Aligning language models to follow instructions](https://openai.com/research/instruction-following)
+ - [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)
### Perplexity (Measuring model quality)
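For readers without `sha256sum` or `shasum` at hand, the verification step described in the diff above can be reproduced in a few lines of Python. This is a minimal sketch, assuming the standard coreutils manifest format (`<hex digest>  <path>` per line); the `verify_checksums` helper is hypothetical and not part of llama.cpp.

```python
import hashlib
import sys
from pathlib import Path


def verify_checksums(sums_file: str = "SHA256SUMS") -> bool:
    """Sketch of what `sha256sum --ignore-missing -c SHA256SUMS` does."""
    all_ok = True
    for line in Path(sums_file).read_text().splitlines():
        if not line.strip():
            continue
        # Coreutils manifest format: "<64-hex-char digest>  <path>",
        # with an optional "*" before the path for binary mode.
        expected, path = line.split(maxsplit=1)
        path = path.lstrip("*")
        file = Path(path)
        if not file.is_file():
            continue  # --ignore-missing: skip files that are absent
        digest = hashlib.sha256()
        with file.open("rb") as f:
            # Hash in 1 MiB chunks so multi-GB model files fit in memory.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        ok = digest.hexdigest() == expected.lower()
        all_ok = all_ok and ok
        print(f"{path}: {'OK' if ok else 'FAILED'}")
    return all_ok


if __name__ == "__main__":
    # Run from the directory containing the SHA256SUMS manifest,
    # i.e. the repository root with the `./models` subdirectory.
    sys.exit(0 if verify_checksums() else 1)
```

Like the `--ignore-missing` flag, the sketch silently skips manifest entries whose files were not downloaded, so you can verify only the model sizes you actually obtained.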