
Study: AI coding assistants may lead to unsafe code, developers should use caution

WBOY
Release: 2023-04-19 10:37:02

Researchers say that relying on AI (artificial intelligence) assistants when writing code can make developers overconfident in their work, resulting in less secure code.


A recent study conducted by Stanford University found that AI-based coding assistants such as GitHub's Copilot can leave developers with a mistaken sense of the quality of their work, producing software that may contain vulnerabilities and be less secure. One AI expert says it is important for developers to manage their expectations when using AI assistants for such tasks.

More Security Vulnerabilities Introduced by AI Coding

The study ran a trial with 47 developers: 33 used an AI assistant while writing code, while a control group of 14 wrote code on their own. All participants had to complete five security-related programming tasks, including encrypting and decrypting strings with a symmetric key, and all were allowed to consult a web browser for help.
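For a concrete sense of that task, here is a minimal sketch of what a secure solution might look like in Python, assuming the third-party cryptography package. The function names are ours, and this is an illustration rather than the study's reference solution:

# Illustrative sketch of the symmetric-encryption task (not the study's
# reference answer). Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

def encrypt_string(message: str, key: bytes) -> bytes:
    # Fernet is an authenticated symmetric scheme (AES-128-CBC + HMAC-SHA256),
    # so ciphertexts are tamper-evident.
    return Fernet(key).encrypt(message.encode("utf-8"))

def decrypt_string(token: bytes, key: bytes) -> str:
    # Raises cryptography.fernet.InvalidToken if the key is wrong
    # or the ciphertext was modified.
    return Fernet(key).decrypt(token).decode("utf-8")

if __name__ == "__main__":
    key = Fernet.generate_key()  # a fresh random symmetric key
    token = encrypt_string("hello", key)
    assert decrypt_string(token, key) == "hello"  # round-trip check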

AI assistant tools for coding and other tasks are becoming increasingly popular; Microsoft-owned GitHub launched Copilot as a technical preview in 2021 with the goal of increasing developer productivity.

GitHub noted in a research report released in September 2022 that Copilot makes developers more productive: 88% of respondents said they were more productive when coding with Copilot, and 59% cited completing repetitive tasks faster and finishing coding sooner as its main benefits.

Researchers at Stanford University wanted to know whether users write more insecure code with AI assistants, and found that this is indeed the case. Developers who use AI assistants also tend to hold mistaken beliefs about the quality of their code, they say.

The team wrote in the paper: "We observed that developers who received help from an AI assistant were more likely to introduce security vulnerabilities in most programming tasks than developers in the control group, yet were also more likely to rate their insecure answers as secure. Additionally, we found that developers who invested more in crafting their queries to the AI assistant (such as refining prompts or adjusting parameters) were more likely to ultimately provide secure solutions."

The research project covered only three programming languages: Python, JavaScript, and C. It involved a relatively small number of participants with varying development experience, ranging from recent college graduates to seasoned professionals, working in a purpose-built application monitored by the study administrators.

The first task was written in Python, and solutions produced with the help of an AI assistant were more likely to be insecure or incorrect: 79% of developers in the control group, which worked without AI help, wrote code without quality problems, compared with only 67% of developers in the AI-assisted group.

Use AI Coding Assistants with Care

Things got worse when it came to the security of the code they created: developers who employed AI assistants were more likely to provide insecure solutions, for example encrypting and decrypting strings with trivial ciphers. They were also less likely to perform plausibility checks on the final values to ensure the process worked as expected.
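The point about plausibility checks can be made concrete with a small round-trip test. The sketch below assumes the hypothetical encrypt_string/decrypt_string pair from the earlier example:

def round_trip_ok(message: str, key: bytes) -> bool:
    # A basic plausibility check: decrypting the ciphertext must return the
    # original message, and the ciphertext must not simply echo the plaintext.
    token = encrypt_string(message, key)
    return token != message.encode("utf-8") and decrypt_string(token, key) == message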

The Stanford researchers said the findings suggest that less experienced developers may be inclined to trust AI assistants, at the risk of introducing new security vulnerabilities. They hope the research will help improve and guide the design of future AI coding assistants.

Peter van der Putten, director of the AI lab at software provider Pegasystems, said that although the study is small in scale, it is very interesting, and its results could inspire further research into the use of AI assistants in coding and other areas. "It's also consistent with what some of our broader research into dependence on AI assistants has concluded," he said. He cautioned that developers adopting AI assistants should build trust in the tools incrementally, avoid over-reliance, and understand their limitations. "Acceptance of a technology depends not only on expectations for quality and performance, but also on whether it saves time and effort. Overall, people have a positive attitude towards the use of AI assistants, as long as their expectations are managed. This means defining best practices for how to use these tools, as well as adopting potential additional features to test code quality," he said.


Source: 51cto.com