
Llama 2


The next generation of our open source large language model

This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters.
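As a minimal sketch of what using the released weights looks like (assuming the Hugging Face transformers library and the meta-llama/Llama-2-7b-chat-hf checkpoint, neither of which is referenced in this post), loading a model and generating text might be done as follows:

```python
# Minimal sketch: loading a Llama 2 chat checkpoint and generating text.
# Assumes the Hugging Face `transformers` library and access to the gated
# `meta-llama/Llama-2-7b-chat-hf` weights (requires accepting Meta's license).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # 7B chat variant; 13B/70B follow the same pattern

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a single modern GPU
    device_map="auto",          # let accelerate place layers across available devices
)

prompt = "Explain what a context window is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling parameters here are illustrative, not the release defaults.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```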

Llama 2 was trained on 40% more data than Llama 1, and has double the context length.

Training Llama-2-chat: Llama 2 is pretrained on publicly available online data. An initial version of Llama-2-chat is then created with supervised fine-tuning. Next, Llama-2-chat is iteratively refined using Reinforcement Learning from Human Feedback (RLHF), which includes rejection sampling and proximal policy optimization (PPO).
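To make the rejection-sampling step concrete, here is a rough sketch of the idea: sample several candidate responses per prompt and keep the one a reward model scores highest. The helpers generate_responses and score_with_reward_model are hypothetical stand-ins, not part of the Llama 2 release.

```python
# Illustrative sketch of rejection sampling as used in RLHF-style fine-tuning.
# `generate_responses` and `score_with_reward_model` are hypothetical stand-ins,
# not Meta's actual training code.
from typing import Callable, List, Tuple


def rejection_sample(
    prompt: str,
    generate_responses: Callable[[str, int], List[str]],
    score_with_reward_model: Callable[[str, str], float],
    k: int = 8,
) -> Tuple[str, float]:
    """Sample k candidate responses and keep the one the reward model rates highest.

    The retained (prompt, best_response) pairs can then feed further fine-tuning,
    alongside PPO updates, as described above.
    """
    candidates = generate_responses(prompt, k)
    scored = [(resp, score_with_reward_model(prompt, resp)) for resp in candidates]
    best_response, best_score = max(scored, key=lambda pair: pair[1])
    return best_response, best_score
```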

Tech Used:

Llama 2

Top Features:

  • Llama 2 models are trained on 2 trillion tokens and have double the context length of Llama 1. Llama-2-chat models have additionally been trained on over 1 million new human annotations.
  • Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests.
  • Llama-2-chat uses reinforcement learning from human feedback to ensure safety and helpfulness.