Deciding which programming language to learn is a big question for developers, given how time-consuming learning one is. It is a question that may seem futile in a future where artificial intelligence (AI) models do the heavy lifting by understanding a problem description and coding a solution themselves.
According to ZDNet, researchers at DeepMind, Google's artificial intelligence unit, claim that the project's AlphaCode system can solve coding problems and achieve average-level scores in programming competitions against human developers. In these contests, humans read a problem described in natural language and then code an efficient algorithm for it.
In a new paper, DeepMind researchers published details of how AlphaCode achieved an average ranking within the top 54.4% of participants across 10 recent programming competitions, each with more than 5,000 participants. The contests were held on the Codeforces competition platform.
DeepMind claims that AlphaCode is the first AI-based code generation system to compete at the level of human developers. The research could improve programmer productivity, and it may help people who do not specialize in the field to solve problems without needing to know how to write code.
Human and AlphaCode participants alike must analyze the description of a challenge or puzzle and quickly write a program that solves it. This is harder than training a model on GitHub data to solve a simple coding task.
AlphaCode, like the human competitors, has to understand the natural-language description, including the details of its background narrative, and work out the solution in terms of the specified inputs and outputs.
To solve such a problem, AlphaCode must devise an algorithm and then implement it efficiently. It may also need to choose a faster programming language, such as C++, to stay within the contest's time and memory limits.
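To make the input/output convention concrete, here is a hedged toy example, not taken from the AlphaCode paper: a contest-style task of the form "given n integers, print the largest sum of two distinct elements", where the solver must parse the raw text input and produce the exact expected output.

```python
def solve(input_text: str) -> str:
    """Toy contest task (hypothetical example): the first line holds n,
    the second line holds n integers; output the maximum sum of two
    distinct elements."""
    data = input_text.split()
    n = int(data[0])
    nums = [int(x) for x in data[1:1 + n]]
    # Sorting descending puts the two largest values first.
    top = sorted(nums, reverse=True)
    return str(top[0] + top[1])

# Sample test case, mirroring a contest's example input/output pair.
print(solve("4\n3 9 1 7"))  # prints 16
```

In a real contest the program would read from standard input and print to standard output; wrapping the logic in a function simply makes the example easy to check.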
DeepMind boosted AlphaCode's performance by building on large-scale transformer models, such as OpenAI's GPT-3 and Google's BERT language model. It used transformer-based language models to generate code and then filtered the output down to a small set of promising programs that were submitted for evaluation.
The DeepMind AlphaCode team explained in a blog post:
During evaluation, we create a large number of programs for each problem in C++ and Python. We then filter and cluster those solutions down to a small set of 10 programs for external evaluation. This automated system replaces the competitors' trial-and-error process of debugging, compiling, passing tests, and finally submitting solutions.
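The filter-and-cluster step described in the quote can be sketched roughly as follows. This is a simplified illustration, not DeepMind's implementation: `run` stands in for compiling and executing a generated program, clustering by behavior on probe inputs is a simplification of the paper's approach, and all names here are hypothetical.

```python
from collections import defaultdict

def select_submissions(candidates, example_tests, probe_inputs, run, limit=10):
    """Filter generated programs against the problem's example tests,
    then cluster survivors by their behavior on extra probe inputs and
    take one representative per cluster, up to `limit` submissions."""
    # 1. Filtering: discard any program that fails a provided example test.
    passing = [p for p in candidates
               if all(run(p, inp) == out for inp, out in example_tests)]

    # 2. Clustering: programs that behave identically on the probe inputs
    #    are likely semantically equivalent, so group them together.
    clusters = defaultdict(list)
    for p in passing:
        signature = tuple(run(p, inp) for inp in probe_inputs)
        clusters[signature].append(p)

    # 3. Submit one representative from each cluster, largest cluster first.
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [group[0] for group in ranked[:limit]]

# Toy demo: "programs" are Python expressions over an integer x, and
# run() evaluates them (a stand-in for compiling and executing code).
run = lambda prog, x: eval(prog, {"x": x})
cands = ["x + x", "2 * x", "x * x", "x - 1"]
picked = select_submissions(cands, example_tests=[(2, 4)],
                            probe_inputs=[0, 1, 3], run=run)
print(picked)  # "x - 1" fails the example test; "x + x" and "2 * x" cluster together
```

The point of clustering is to avoid wasting the 10 allowed submissions on near-duplicate programs that would all pass or fail together.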
DeepMind shows how AlphaCode arrives at a solution. The project also considers several potential downsides of what it is trying to achieve. For example, models can generate code with exploitable weaknesses, whether unintentional vulnerabilities inherited from older code or intentional ones injected into the training set by malicious actors.
There are also environmental costs. Training the model in Google's data centers required "hundreds of petaflop/s days" of compute. And in the long run, AI code generation could lead to systems that recreate themselves and quickly grow into more advanced and capable systems.
There is a risk that automation will reduce demand for developers, but DeepMind points to the limitations of today's code-completion tools: they greatly improve programmer productivity, yet they are currently limited to single-line suggestions, specific languages, or short snippets of code.
DeepMind emphasizes that the project's work is by no means a threat to human programmers; rather, its systems can develop problem-solving capabilities that help humans.
DeepMind researchers say:
Our exploration of code generation leaves vast room for improvement and points to even more exciting ideas that could help programmers improve their productivity and open up the field to people who do not currently write code.