Give it a requirements document, and AI can help you develop an Android app
Fengse from Aofei Temple
QbitAI Report | WeChat official account QbitAI
Generating code from natural language is nothing new, but the range of tasks this technology covers keeps getting wider.
There is an "AI" called Text2App: "feed" it a string of text requirements, and it can "digest" them directly into an Android app for you!
If you don’t believe me, just look.
This is the text entered:
Create an app with a video, a button, a text-to-speech function, and a phone acceleration sensor. Click the button to play the video; shake the phone to read the text "happy Text2App".
The whole process takes only a few minutes to compile; with no further coding, it directly generates an Android application like the following:
I wonder what programmers, especially Android developers, think after reading this?
The "intermediate language" between text description and source code
The Text2App framework comes from UCLA and Bangladesh University of Engineering and Technology.
It does not generate source code directly from natural language; instead, it first generates an intermediate language, from which a compiler then generates the source code.
Why do we need to generate an intermediate language first?
Because most previous work on generating programs from text descriptions relies on end-to-end neural machine translation (NMT) models, similar to Google Translate, which translate natural language directly into source code.
While some of these work reasonably well, most cannot generate larger programs that run to hundreds of lines of code.
To overcome this limitation, the researchers designed a new formal language to serve as a "bridge" in the process.
It can "understand" complex source code: the natural language given by the user is converted into a small number of tokens, which then form a simple program-representation code.
Finally, a compiler developed by the researchers converts this intermediate language into source code.
After all, compilers understand programming languages best. Letting AI generate complex programs entirely on its own is not enough, so the compiler's strong support is indispensable.
Of course, the generation of intermediate language still relies on the neural machine translation model.
The following is the specific process of converting a text description into an app:
Text description:
Create an app with a textbox, a button named “Speak”, and a text2speech. When the button is clicked, speak the text in the text box.
The natural language above is first formatted (for example, "Speak" is converted to "'STRING0': 'Speak'"), then fed to a Seq2Seq neural network with an encoder and decoder, which translates it into a Simple App Representation (SAR), the intermediate language mentioned above:
<complist> <textbox> <button> string0 </button> <text2speech> </complist> <code> <button1clicked> <text2speech1> <textboxtext1> </text2speech1> </button1clicked> </code>
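To make the formatting step above concrete, here is a minimal sketch of how quoted string literals could be swapped for placeholder tokens before the text reaches the Seq2Seq model. The function name and the exact placeholder scheme are assumptions for illustration, not the paper's actual implementation:

```python
import re

def format_description(text):
    # Replace double-quoted string literals with placeholder tokens
    # (string0, string1, ...), in the spirit of the formatting step
    # described above. Illustrative only.
    mapping = {}

    def repl(match):
        token = f"string{len(mapping)}"
        mapping[token] = match.group(1)
        return token

    formatted = re.sub(r'"([^"]*)"', repl, text)
    return formatted, mapping

desc = 'Create an app with a textbox, a button named "Speak", and a text2speech.'
formatted, mapping = format_description(desc)
print(formatted)  # Create an app with a textbox, a button named string0, and a text2speech.
print(mapping)    # {'string0': 'Speak'}
```

Keeping the literal strings out of the model's vocabulary this way lets the network learn app structure rather than memorize arbitrary user text; the mapping is restored after translation.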
The SAR compiler then converts the intermediate language into MIT App Inventor source files (.scm/.bky), which MIT App Inventor packages into a final, usable Android application.
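The SAR snippet above is just a flat token stream, which makes it easy to parse mechanically. The toy parser below extracts the component list from such a stream; it is a simplified sketch of the idea, not the real SAR compiler, which emits full App Inventor .scm/.bky files:

```python
def parse_sar(sar):
    # Walk the SAR token stream and collect the components declared
    # between <complist> and </complist>. Illustrative sketch only.
    components = []
    in_complist = False
    for tok in sar.split():
        if tok == "<complist>":
            in_complist = True
        elif tok == "</complist>":
            in_complist = False
        elif in_complist and tok.startswith("<") and not tok.startswith("</"):
            components.append(tok.strip("<>"))
    return components

sar = ("<complist> <textbox> <button> string0 </button> <text2speech> "
       "</complist> <code> <button1clicked> <text2speech1> <textboxtext1> "
       "</text2speech1> </button1clicked> </code>")
print(parse_sar(sar))  # ['textbox', 'button', 'text2speech']
```

Because SAR is this regular, a deterministic compiler can take over where the neural model leaves off, which is exactly the division of labor the article describes.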
The following is a very intuitive schematic diagram of how natural language is automatically synthesized into the intermediate language (SAR):
The function is still relatively basic
As you might expect, the framework is still fairly rudimentary, and for now the description text must stay within a fixed range:
Only 11 components can be described: text box, button, label, player, time selector...
There are no clear restrictions on the events and operations that can be implemented. Those who are interested can test how much can be achieved.
For now the functionality is very simple, so Android developers need not worry about AI "stealing their jobs".
However, the researchers say the ultimate goal is to grow Text2App into a mature, natural-language-based app development platform.
How long will it take? It is still unknown.
Paper address: https://arxiv.org/abs/2104.08301
Full video and trial link: https://text2app.github.io/
Reference link:
https://techxplore.com/news/2021-06-text2app-framework-android-apps-text.html
-over-
This article is original content of QbitAI (量子位), a signed account under NetEase News and NetEase's special content incentive plan. Reproduction without the account's authorization is prohibited.