Stability AI’s first large model of the year: purpose-built for code, supporting 18 programming languages and a 100K context, and able to run offline on an Apple laptop

Latest update: 2024-01-17
West Wind, reporting from Aofei Temple
Qubit | Official account QbitAI

Stability AI’s first big model of the year is here!

It's called Stable Code 3B: 2.7 billion parameters, designed specifically for writing code.

Stable Code 3B can understand and process 18 different programming languages, with a context length of 100K tokens.

Its hardware requirements are modest, too: it can run offline on an ordinary laptop such as a MacBook Air.
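To see why an ordinary laptop suffices, here is a rough back-of-envelope memory estimate. This is an illustrative sketch only: the actual footprint also depends on the runtime, activations, and KV cache.

```python
# Rough memory footprint of the 2.7B-parameter weights alone.
# Illustrative only: real usage adds activations, the KV cache,
# and runtime overhead on top of this.
PARAMS = 2.7e9
GIB = 1024 ** 3

fp16_gib = PARAMS * 2 / GIB    # 16-bit weights: 2 bytes per parameter
int4_gib = PARAMS * 0.5 / GIB  # 4-bit quantized: 0.5 bytes per parameter

print(f"fp16: ~{fp16_gib:.1f} GiB, int4: ~{int4_gib:.1f} GiB")
```

At 16-bit precision the weights come to roughly 5 GiB, and a 4-bit quantization brings that down to well under 2 GiB, which is why even a base-configuration MacBook Air can hold the model in memory.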

In terms of performance, Stable Code 3B is 60% smaller than CodeLLaMA 7B, yet performs roughly on par with it.

In the BigCode evaluation, Stable Code 3B achieved SOTA performance among similar-sized models on the MultiPL-E benchmark, which covers multiple programming languages.

Stable Code 3B has just launched, and some netizens report that it is already usable in editor plug-ins:

The effect is very good! It is compact, compatible with a range of hardware, and runs very fast.

What does Stable Code 3B look like?

Prior to this, Stability AI had released an initial version, Stable Code Alpha 3B, in August last year; Stable Code 3B is its evolution. The development team describes Stable Code 3B as the first major release: it is built on the Stable LM 3B base model and adds a number of new capabilities.

These include support for new features such as fill-in-the-middle (FIM), and the use of RoPE (rotary position embedding) to extend the context length from 16,000 tokens to 100,000 tokens.
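FIM lets the model complete code given both the text before and after a gap, rather than only a left-to-right prefix. The sketch below shows how such a prompt is typically assembled, assuming the StarCoder-style control tokens (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`) that many FIM-trained code models use; check the stable-code-3b model card for the exact tokens.

```python
# Sketch of assembling a fill-in-the-middle (FIM) prompt.
# Token names are assumed StarCoder-style markers, not confirmed
# for stable-code-3b specifically.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Pack the code before and after the gap into one FIM prompt.

    The model is then asked to generate the missing middle after
    the <fim_middle> marker.
    """
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

prompt = build_fim_prompt(
    "def add(a, b):\n    return ",
    "\n\nprint(add(1, 2))",
)
```

The model's output after `<fim_middle>` is the text to splice between the prefix and suffix, which is what editor plug-ins rely on for in-place completion.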

The overall architecture of Stable Code 3B is similar to LLaMA: a decoder-only model that uses the Flash Attention 2 algorithm.

The following 18 programming languages are supported:

C, C++, Java, JavaScript, CSS, Go, HTML, Ruby, Rust, Markdown, Shell, PHP, SQL, R, TypeScript, Python, Jupyter-Clean, reStructuredText

Some more detail on the performance comparison:

Against CodeLLaMA 7B, which is more than twice its size, Stable Code 3B performs almost on par, and even slightly better on Python and C++.


Among models of similar size, Stable Code 3B stands out.

But there are also complaints

Although Stable Code 3B has drawn a wave of praise, some netizens feel there is "not much new here" and that "it isn't that good."

In addition, it supports only English and programming languages, which some find a bit disappointing.


What do you think of it?

Reference links:
[1]https://huggingface.co/stabilityai/stable-code-3b

[2]https://twitter.com/StabilityAI/status/1747348018884493623

- End -
