
AI Series: Running Code Llama on a Local Mac

1. Core Steps

At a high level, there are a few steps: request download access from Meta, fetch the model weights with download.sh, set up a Python environment, install the repository's dependencies, and run an example. Sections 2 and 3 walk through each step.

The models available for download (as listed in the download script's prompt):

  1. CodeLlama-7b
  2. CodeLlama-13b
  3. CodeLlama-34b
  4. CodeLlama-7b-Python
  5. CodeLlama-13b-Python
  6. CodeLlama-34b-Python
  7. CodeLlama-7b-Instruct (recommended for this walkthrough)
  8. CodeLlama-13b-Instruct
  9. CodeLlama-34b-Instruct

2. Prerequisites

Before running the download.sh script, confirm that wget and md5sum are installed:

# On macOS, install both tools via Homebrew
brew install wget
brew install md5sha1sum

Install conda:
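
If conda is not installed yet, one option on macOS is Homebrew's Miniconda cask (a minimal sketch, assuming Homebrew is available; the official Miniconda installer works just as well):

# Install Miniconda via Homebrew, then hook conda into the default macOS shell (zsh)
brew install --cask miniconda
conda init zsh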

After installing conda, create a Python 3.8 environment:

# The -c flag points at a specific mirror (Tsinghua) to speed up package downloads
conda create -n codellama -c https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main python=3.8


3. First Code Llama Example

After cloning the Code Llama repository code (github.com/facebookresearch/codellama), run the following in a terminal window:

# Create a Python 3.8 environment with conda (same command as in section 2)
conda create -n codellama -c https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main python=3.8

# Switch to the newly created codellama environment
conda activate codellama

# Install PyTorch
conda install pytorch

# Enter the codellama source directory
cd codellama

# Install the repo as an editable package (pulls torch and other deps; may take a while)
pip install -e .
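
The model weights themselves are fetched in a separate step. Per the download email in Appendix A, this is done with the repo's download.sh script, which prompts for the unique URL from the email and for which models to fetch (a sketch, assuming the emailed link is still valid):

# Run from inside the codellama directory; paste the unique URL when prompted
bash download.sh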

With the above setup complete, you can move straight to the example below, taken from the repository README:

Different models require different model-parallel (MP) values:

Model   MP
7B      1
13B     2
34B     4

All models support sequence lengths up to 100,000 tokens, but we pre-allocate the cache according to max_seq_len and max_batch_size values. So set those according to your hardware and use-case.
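
The MP value is what gets passed to torchrun as --nproc_per_node. For instance, a sketch for the 13B Instruct model (assuming its weights have been downloaded), mirroring the 7B command shown further below:

torchrun --nproc_per_node 2 example_instructions.py \
    --ckpt_dir CodeLlama-13b-Instruct/ \
    --tokenizer_path CodeLlama-13b-Instruct/tokenizer.model \
    --max_seq_len 64 --max_batch_size 2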

Fine-tuned Instruction Models

Code Llama - Instruct models are fine-tuned to follow instructions. To get the expected features and performance for them, a specific formatting defined in chat_completion needs to be followed, including the INST and <<SYS>> tags, BOS and EOS tokens, and the whitespaces and linebreaks in between (we recommend calling strip() on inputs to avoid double-spaces). You can use chat_completion directly to generate answers with the instruct model.
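
As a concrete illustration, here is a minimal Python sketch of calling chat_completion, modeled on the repo's example_instructions.py. The file name is made up, it assumes the CodeLlama-7b-Instruct weights sit in the working directory, and it must be launched via torchrun because Llama.build initializes a distributed process group:

# minimal_chat.py -- launch with: torchrun --nproc_per_node 1 minimal_chat.py
from llama import Llama

# Build the generator; the KV cache is pre-allocated from max_seq_len / max_batch_size
generator = Llama.build(
    ckpt_dir="CodeLlama-7b-Instruct/",
    tokenizer_path="CodeLlama-7b-Instruct/tokenizer.model",
    max_seq_len=512,
    max_batch_size=1,
)

# Each dialog is a list of {"role", "content"} messages; chat_completion adds the
# [INST]/<<SYS>> tags and BOS/EOS tokens itself, so only plain text goes in here.
dialogs = [
    [{"role": "user", "content": "Write a Python function that reverses a string."}],
]
results = generator.chat_completion(dialogs, temperature=0.2, top_p=0.95)
print(results[0]["generation"]["content"])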

You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for an example of how to add a safety checker to the inputs and outputs of your inference code.

Examples using CodeLlama-7b-Instruct:

torchrun --nproc_per_node 1 example_instructions.py \
    --ckpt_dir CodeLlama-7b-Instruct/ \
    --tokenizer_path CodeLlama-7b-Instruct/tokenizer.model \
    --max_seq_len 64 --max_batch_size 2

The fine-tuned instruction-following models are the Code Llama - Instruct variants: CodeLlama-7b-Instruct, CodeLlama-13b-Instruct, and CodeLlama-34b-Instruct.

Code Llama is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. In order to help developers address these risks, we have created the Responsible Use Guide. More details can be found in our research papers as well.

Appendix

Appendix A. The Model Download Email

Code Llama commercial license

You’re all set to start building with Code Llama.
The models listed below are now available to you as a commercial license holder. By downloading a model, you are agreeing to the terms and conditions of the license, acceptable use policy and Meta’s privacy policy.

Model weights available:

With each model download, you’ll receive a copy of the License and Acceptable Use Policy, and can find all other information on the model and code on GitHub.

How to download the models:

  1. Visit the Code Llama repository in GitHub and follow the instructions in the README to run the download.sh script.
  2. When asked for your unique custom URL, please insert the following: [personal link URL - use your own]
  3. Select which model weights to download

The unique custom URL provided will remain valid for model downloads for 24 hours, and requests can be submitted multiple times.
Now you’re ready to start building with Code Llama.

Helpful tips:

Please read the instructions in the GitHub repo and use the provided code examples to understand how to best interact with the models. In particular, for the fine-tuned chat models you must use appropriate formatting and correct system/instruction tokens to get the best results from the model.
You can find additional information about how to responsibly deploy Llama models in our Responsible Use Guide.

If you need to report issues:

If you or any Code Llama user becomes aware of any violation of our license or acceptable use policies - or any bug or issues with Code Llama that could lead to any such violations - please report it through one of the following means:

  1. Reporting issues with the model: Code Llama GitHub
  2. Giving feedback about potentially problematic output generated by the model: Llama output feedback
  3. Reporting bugs and security concerns: Bug Bounty Program
  4. Reporting violations of the Acceptable Use Policy: LlamaUseReport@meta.com

Subscribe to get the latest updates on Llama and Meta AI.

Meta’s GenAI Team

Original post: https://ningg.top/ai-series-code-llama-setup-on-mac-local/