58 lines of code scale Llama 3 to a 1-million-token context, and it works with any fine-tuned version
Mengchen, reporting from Aofei Temple
QbitAI | WeChat official account QbitAI
Llama 3 may be the majestic new king of open source, but its original context window is only... 8k, enough to make you swallow the praise that was on the tip of your tongue.
Today, when 32k is the entry point and 100k is commonplace, was this a deliberate choice to leave room for the open source community to contribute?
The open source community certainly wasn't going to pass up the opportunity:
Now, with just 58 lines of code, any fine-tuned version of Llama 3 70B can be automatically scaled to a 1048k (roughly one million token) context.
Behind it is a LoRA extracted from a Llama 3 70B Instruct fine-tune with a well-extended context window, and the file is only 800MB.
Next, using Mergekit, you can run it alongside other models of the same architecture or merge it directly into their weights.
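As a quick illustration (a sketch only, not the 58-line script itself), here is how such an adapter can be stacked on a model at load time with Hugging Face's peft library. The adapter repo id is taken from the links at the end of this article; the base model id is an assumption and stands in for any Llama 3 70B fine-tune with the same architecture.

```python
# Sketch: load a Llama 3 70B model and stack the context-extension LoRA on top.
# Assumptions: peft + transformers installed, enough GPU memory, access to the repos below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Meta-Llama-3-70B-Instruct"  # or any fine-tune with the same architecture
ADAPTER = "cognitivecomputations/Llama-3-70B-Gradient-1048k-adapter"

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")

# PeftModel applies the low-rank deltas on top of the loaded weights at inference time.
model = PeftModel.from_pretrained(model, ADAPTER)
```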
The 1048k-context fine-tune from which the LoRA was extracted has just scored all green (100% accuracy) on the popular needle-in-a-haystack test.
It has to be said: open source is progressing at an exponential pace.
How the 1048k-context LoRA was made
The first 1048k-context fine-tune of Llama 3 comes from Gradient AI, an enterprise AI solutions startup.
The corresponding LoRA comes from developer Eric Hartford, who extracted the parameter changes by diffing the fine-tuned model against the original version.
He first produced a 524k-context version and then followed up with the 1048k version.
First, the Gradient team continued training from the original Llama 3 70B Instruct and obtained Llama-3-70B-Instruct-Gradient-1048k.
The specific method is as follows:
- Adjust the position encoding: use NTK-aware interpolation to initialize an optimal schedule for RoPE theta, then optimize it, so that high-frequency information is not lost after the length is extended (a sketch of this adjustment follows the note below).
- Progressive training: extend the model's context length with the Blockwise RingAttention method proposed by UC Berkeley professor Pieter Abbeel's team.
Notably, the team layered hierarchical parallelism on top of Ring Attention using a custom network topology, making better use of large GPU clusters and easing the network bottleneck caused by shuttling many KV blocks between devices.
The result was a 33x increase in model training speed.
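To make the first step concrete, below is a minimal sketch of the standard NTK-aware way to pick an initial RoPE theta for a longer context. It only illustrates the generic formula; the exact schedule and final value Gradient optimized are not reproduced here.

```python
# Sketch of NTK-aware RoPE theta initialization for context extension.
# Assumption: this is the generic formula, not Gradient's exact training schedule.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")
head_dim = config.hidden_size // config.num_attention_heads  # 128 for Llama 3 70B

old_len = config.max_position_embeddings  # 8192
new_len = 1_048_576
scale = new_len / old_len                 # context extension factor

# Raising the RoPE base stretches the low-frequency components (long-range positions)
# while roughly preserving the high-frequency ones, so fine positional detail is not lost.
config.rope_theta = config.rope_theta * scale ** (head_dim / (head_dim - 2))
config.max_position_embeddings = new_len
print(f"new rope_theta = {config.rope_theta:.3e}")
```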
In long-text retrieval evaluation, errors tend to appear only in the hardest version of the test, when the "needle" is hidden in the middle of the text.
With the extended-context fine-tune in hand, the open source tool Mergekit is used to compare the fine-tuned model against the base model and extract the parameter differences into a LoRA.
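The core idea behind that extraction step can be sketched in a few lines: take the weight delta between the fine-tuned and base model for each target matrix and compress it with a truncated SVD into the two low-rank LoRA factors. The helper below is illustrative only and is not Mergekit's actual implementation; the rank and the toy tensors are arbitrary.

```python
# Sketch: turn a weight delta into LoRA factors via truncated SVD.
# Assumption: illustrative math only, not Mergekit's actual code.
import torch

def extract_lora(w_finetuned: torch.Tensor, w_base: torch.Tensor, rank: int = 64):
    """Return (A, B) such that B @ A approximates w_finetuned - w_base."""
    delta = (w_finetuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    b = u[:, :rank] * s[:rank]   # (out_features, rank)
    a = vh[:rank, :]             # (rank, in_features)
    return a, b

# Toy check with random matrices standing in for one projection layer.
w_base = torch.randn(128, 256)
w_ft = w_base + 0.01 * torch.randn(128, 64) @ torch.randn(64, 256)  # low-rank delta
a, b = extract_lora(w_ft, w_base, rank=64)
print(torch.norm((w_ft - w_base) - b @ a))  # ~0: the delta is captured by the LoRA
```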
Also using Mergekit, the extracted LoRA can be merged into other models with the same architecture.
The merge code has also been open sourced on GitHub by Eric Hartford and is only 58 lines long.
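For reference, the same end result of baking the adapter into standalone weights can be sketched with peft's merge_and_unload. This is not the 58-line script from the gist linked below, just the general idea, with the same assumed repo ids as above and an arbitrary output path.

```python
# Sketch: permanently merge the context-extension LoRA into a model's weights.
# Assumptions: illustrative only; repo ids assumed, output path is arbitrary.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

BASE = "meta-llama/Meta-Llama-3-70B-Instruct"   # any same-architecture fine-tune
ADAPTER = "cognitivecomputations/Llama-3-70B-Gradient-1048k-adapter"
OUTPUT = "./llama-3-70b-1048k-merged"

model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, ADAPTER)

merged = model.merge_and_unload()  # fold the low-rank deltas into the original matrices
merged.save_pretrained(OUTPUT)
```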
It is not yet clear whether this LoRA merge works on Chinese fine-tuned versions of Llama 3.
However, the Chinese developer community has clearly taken notice of this development.
524k version LoRA:
https://huggingface.co/cognitivecomputations/Llama-3-70B-Gradient-524k-adapter
1048k version LoRA:
https://huggingface.co/cognitivecomputations/Llama-3-70B-Gradient-1048k-adapter
Merge code:
https://gist.github.com/ehartford/731e3f7079db234fa1b79a01e09859ac
Reference link:
[1]https://twitter.com/erhartford/status/1786887884211138784
- End -