
70B model produces 1,000 tokens per second and surpasses GPT-4o at code rewriting, built by the OpenAI-backed team behind the coding tool Cursor

2024-05-17
Cressy from Aofei Temple
Quantum Bit | WeChat official account QbitAI

A 70B model producing 1,000 tokens per second, which translates to nearly 4,000 characters!

The researchers fine-tuned Llama3 and introduced an acceleration algorithm, making it 13 times faster than the vanilla version.

Not only is it fast; its performance on code rewriting tasks even surpasses GPT-4o.

This achievement comes from Anysphere, the team behind the popular AI programming tool Cursor; OpenAI is among its investors.

Bear in mind that on Groq, the well-known fast-inference platform, 70B Llama3 manages only a bit over 300 tokens per second.

At Cursor's speed, a complete code file can be edited nearly instantly.

Someone quipped: "Hey guys, if you put Cursor's modified Llama3 on Groq, could you generate tens of thousands of tokens per second?"

Others went further, saying that in the large-model field we are eliminating the very concept of "latency".

Introducing a new inference acceleration algorithm

The acceleration method is designed for a task called "Fast Apply": quickly modifying code and applying the changes.

First, a note on the setup: although the end result of the task is a partial modification of the code, in practice the model does not output just the changed lines; it rewrites the entire file.

The reason is a choice the team made after preliminary testing: they found that, with the exception of Claude-3-Opus, most models performed poorly at true local modification.

There are three main reasons why this happens:

  • First, a direct rewrite outputs more tokens, giving the model more forward passes in which to converge on the correct solution.

  • Second, most of the models' training data consists of complete code files; local modifications in diff form are comparatively rare.

  • Third, large models are weak at arithmetic, so there is no guarantee they will handle line numbers correctly when outputting diffs.

(That said, the author still considers local modification a promising direction for future research.)
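To make the global-rewrite setup concrete, here is a minimal sketch of how a Fast Apply input and output could be framed. Cursor has not published its prompt format, so the tags and names below are pure assumptions for illustration:

```python
# Hypothetical prompt layout for Fast Apply: the model receives the whole
# file plus an edit suggestion and must emit the complete rewritten file.
FAST_APPLY_PROMPT = """\
<original_file>
{original}
</original_file>

<edit_suggestion>
{suggestion}
</edit_suggestion>

Output the full file with the suggestion applied:
"""

def build_prompt(original: str, suggestion: str) -> str:
    """Assemble a Fast Apply prompt (format is an assumption, not Cursor's)."""
    return FAST_APPLY_PROMPT.format(original=original, suggestion=suggestion)
```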

After deciding on the solution of global rewriting, the Cursor team used task-related data to fine-tune Llama3.

The data comes from two sources, real edit data and synthetic data, mixed at a ratio of 1:4.

Synthetic data refers to using GPT-4 to generate code editing suggestions, and then using other models to "apply" these suggestions to the original code.

To improve the quality of the dataset, the authors also downsampled small files, duplicate files, and samples without changes.
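As an illustration of those data-preparation steps, here is a minimal sketch. The field names, thresholds, and sampling rates are all invented; only the 1:4 mix, deduplication, and downsampling of small and unchanged samples come from the article:

```python
import random

def build_dataset(real_edits, synthetic_edits, seed=0):
    """Sketch: mix real and synthetic edit samples 1:4 and drop low-value ones.

    Each sample is assumed to be a dict with "original" and "rewritten" keys.
    """
    rng = random.Random(seed)

    def keep(sample):
        if sample["original"] == sample["rewritten"]:   # drop no-change samples
            return False
        if len(sample["original"].splitlines()) < 20:   # downsample small files (threshold assumed)
            return rng.random() < 0.25
        return True

    def dedupe(samples):
        seen, out = set(), []
        for s in samples:
            key = (s["original"], s["rewritten"])
            if key not in seen:
                seen.add(key)
                out.append(s)
        return out

    real = [s for s in dedupe(real_edits) if keep(s)]
    synth = [s for s in dedupe(synthetic_edits) if keep(s)]
    synth = rng.sample(synth, min(len(synth), 4 * len(real)))  # enforce the 1:4 ratio

    mixed = real + synth
    rng.shuffle(mixed)
    return mixed
```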

To evaluate the models, the authors ran them on 450 code editing tasks (each no longer than 400 lines) and had Claude3-Opus score the outputs.

In the end, the authors' fine-tuned 70B Llama3 almost matched Claude3-Opus-diff and outperformed GPT-4-Turbo and GPT-4o.

Fine-tuning solved the quality problem, but at this point Llama3 was still slow, outputting fewer than 300 characters per second (characters, not words or tokens).

What makes the rewriting work so fast is another secret weapon.

For the code rewriting task, the Cursor team introduced an algorithm called speculative edits.

This method uses a prior algorithm to predict multiple upcoming tokens, then has the large model itself verify them, which cuts the number of sequential forward passes the large model must run and thus the overall latency.

The prior comes from a property of coding tasks: compared with other text, code has a smaller vocabulary, and its grammatical structure, indentation rules, and so on are far more deterministic, so prior knowledge can predict upcoming tokens more accurately.
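Cursor has not published the implementation, but the idea can be sketched as follows. Assume a hypothetical `model.greedy_next(ids)` that returns the model's greedy prediction at every position in one forward pass; the deterministic draft simply copies the original file, since most of a rewrite is unchanged code:

```python
def speculative_edit(model, prompt_ids, source_ids, k=8):
    """Sketch of speculative edits (API and details are assumptions).

    The draft is deterministic: we speculate that the rewrite copies the
    next k tokens of the original file, and let the large model verify
    all k guesses in a single forward pass.
    """
    out = list(prompt_ids)
    cursor = 0  # position in the original file's tokens
    while cursor < len(source_ids):
        draft = source_ids[cursor:cursor + k]    # deterministic speculation
        preds = model.greedy_next(out + draft)   # one pass verifies all drafts
        n_ok = 0
        for i, tok in enumerate(draft):
            # preds[j] = model's prediction for the token following position j
            if preds[len(out) + i - 1] == tok:
                n_ok += 1
            else:
                break
        out += draft[:n_ok]
        cursor += n_ok
        if n_ok < len(draft):
            # The model disagrees here, i.e. this is an edited region:
            # take the model's own token and move past one source token
            # (a real implementation needs smarter realignment).
            out.append(preds[len(out) - 1])
            cursor += 1
    return out
```

When the file is mostly unchanged, almost every k-token draft is accepted, so the large model advances k tokens per forward pass instead of one, which is where the speedup comes from.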

This approach also has something in common with GPT-4 and Meta:

Traditional language-model inference is slow because next-token prediction is autoregressive: when generating each token, the model must attend to all previously generated tokens.

To reduce this cost, large models exemplified by GPT-4 use an acceleration algorithm called speculative decoding, in which a small approximate model drafts predictions ahead of time and the large model itself then verifies those drafts.

The difference between Cursor and GPT-4 is that the former's small "model" is a fully deterministic algorithm, while the latter merely shrinks the model; its drafts are still, in essence, probabilistic predictions.
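For contrast, a greedy-match sketch of the classic draft-model variant (again using the assumed `greedy_next` API; real speculative decoding uses rejection sampling rather than exact matching) looks almost identical; only the source of the draft changes:

```python
def speculative_decode(large, small, ids, n_new=256, k=4):
    """Greedy-match sketch of classic speculative decoding (assumed API)."""
    out = list(ids)
    while len(out) < len(ids) + n_new:
        # 1. A small, cheap model drafts k tokens autoregressively.
        draft = []
        for _ in range(k):
            draft.append(small.greedy_next(out + draft)[-1])
        # 2. The large model verifies all k drafts in one forward pass.
        preds = large.greedy_next(out + draft)
        n_ok = 0
        for i, tok in enumerate(draft):
            if preds[len(out) + i - 1] == tok:
                n_ok += 1
            else:
                break
        out += draft[:n_ok]
        # 3. The large model always contributes one token of its own,
        #    so the loop makes progress even if every draft is rejected.
        out.append(preds[len(out) - 1])
    return out
```

Swap step 1's small model for the deterministic copy-from-source rule and you get the speculative edits sketch above; that substitution is the whole difference.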

Meta, meanwhile, has introduced an algorithm that predicts multiple future tokens at once, using n independent output heads to predict n upcoming tokens in parallel. It turned out to perform especially well on programming tasks, because programming languages have a more rigorous logical structure and tighter internal connections between tokens.

Cursor makes full use of the same property, but instead of extra output heads it relies directly on its more deterministic algorithm for multi-token prediction.

The final result is that the prediction algorithm brings a nearly 13-fold speed increase to the 70B Llama3 without any loss in evaluation performance.

In addition, the author partnered with fireworks.ai, an enterprise AI-infrastructure platform, using its optimized inference engine and customized hardware environment to further improve the model's runtime efficiency.

Looking ahead, the team plans to use knowledge distillation to port the speculative edits algorithm to the smaller 8B Llama3, and to extend it to more programming languages and tasks.

The author also plans to improve the true local-modification (diff) algorithm that the Cursor team studied but did not adopt.

One More Thing

In the experiments, the author used the speculative algorithm to accelerate not only Llama3 but also GPT-4 Turbo.

However, the author does not explain how this was achieved with GPT, instead leaving it as a puzzle for readers and even running a "prize quiz".

Whoever answers correctly wins a one-month Cursor membership; implementing the speculative acceleration in vLLM or TensorRT-LLM earns a six-month or one-year membership, respectively.

If you think you have an idea, why not take up the challenge (tongue firmly in cheek).

Reference link:
https://cursor.sh/blog/instant-apply#user-content-fnref-feel-difference

-over-
