
Generative artificial intelligence is coming, how to protect minors? | Journal of Social Sciences


Artificial intelligence and the protection and development of minors

In response to the rapid development of powerful generative artificial intelligence, UNESCO convened its first global meeting of education ministers on the topic on May 25 of this year. A comprehensive and objective analysis of the unprecedented development opportunities and the risks and challenges of generative artificial intelligence is an urgent task of our times for ensuring the healthy growth of minors, and a key part of implementing their protection and development.


Original text: "Generative artificial intelligence is coming, how to protect minors"

Authors | Fang Zengquan (Researcher), Yuan Ying (Lecturer), Qi Xuejing (Assistant Researcher), School of Journalism and Communication, Beijing Normal University

Images | Internet

Generative artificial intelligence (AIGC) was named one of the top ten scientific breakthroughs of 2022 by the journal Science. It shows strong logical reasoning and re-learning capabilities, understands human needs better than earlier systems, and has broken through long-standing bottlenecks in natural language processing. It is reshaping society's systems of knowledge production, talent training, business models, and the division of labor, bringing enormous development opportunities to every sector while also posing serious risks. Minors already interact with artificial intelligence in many ways: the technology is often embedded in toys, applications, and video games, quietly shaping their development. Because minors' minds and cognitive abilities are not yet mature, they are easily influenced by an intelligent environment. A comprehensive and objective analysis of the unprecedented development opportunities and the risks and challenges of generative artificial intelligence is therefore an urgent task of our times for ensuring the healthy growth of minors, and a key part of implementing their protection and development.

Opportunities and challenges brought by generative artificial intelligence

In terms of development opportunities, artificial intelligence is a tool for guiding minors' learning and education. Education built on generative artificial intelligence can be more targeted and effective. Generative artificial intelligence can bridge individual and regional differences in education, promote equitable access to educational resources, and to some extent narrow the digital divide between individuals and between regions. It can also help improve minors' capacity for innovation and creativity: the emergence of large models may further lower the threshold for low-code and no-code development tools and may give rise to new intelligent development technologies.

At the same time, generative artificial intelligence poses challenges to minors' cognition, thinking, and behavior. The first is the proliferation of false information; continuously improving factual accuracy remains an urgent technological problem. The second is data leakage. Leaks have already exposed the conversation data and related information of some users, and generative artificial intelligence applications currently lack systematic age verification, which may expose minors to content "completely disproportionate to their age and consciousness" and harm their physical and mental health. The third is embedded algorithmic discrimination. Some answers given by generative artificial intelligence are sexist or racist; users may take such discriminatory answers to be "correct answers" and make wrong decisions, with negative effects on social cognition and ethics. Algorithmic inclusiveness is especially delicate: because Chinese and Western cultures have different origins and evolutionary paths, questions of how traditional culture and observations of reality are interpreted, evaluated, and disseminated may be ignored or deliberately amplified by generative artificial intelligence technology. The fourth is intellectual property infringement. Whether for image models or large text models, there are many copyright risks: training data may infringe the copyrights of others, such as news organizations and photo agencies, and there is currently no satisfactory way to compensate content creators whose works are used in AI-generated songs, articles, or other outputs. If original creators' copyrights cannot be protected, the sustainable and healthy development of the AI content ecosystem will suffer. The fifth is the digital divide. Large language models perform noticeably differently in English and Chinese, with writing, expression, and comprehension generally stronger in English; the differences between the two languages, compounded by the Matthew effect in data, may widen this gap. The sixth is network security risks, including but not limited to material that suggests or encourages self-harm, pornographic or violent content, and harassing, derogatory, and hateful content.


CI-STEP system for generative artificial intelligence

In September 2020, UNICEF released a draft of its Policy Guidance on AI for Children, which proposed three principles that child-friendly artificial intelligence should follow: protection, that is, "do no harm"; empowerment, that is, "do good"; and participation, that is, "include all". In November 2021, UNICEF released version 2.0 of the guidance on establishing AI policies and systems that uphold children's rights, which set out three foundations: artificial intelligence policies and systems should be designed to protect children; children's needs and rights should be met equitably; and children should be empowered to contribute to the development and use of AI. How to use AI to create an environment beneficial to the growth of minors, now and in the future, has become a topic of worldwide discussion. China, the United States, the European Union, and the United Kingdom have since introduced relevant laws and regulations to further standardize and improve the governance of artificial intelligence technology as it concerns minors.

Drawing on Piaget's theory of the stages of children's intellectual development and Kohlberg's theory of the stages of moral development, and combining China's relevant laws and policies (the Personal Information Protection Law of the People's Republic of China, the Cybersecurity Law of the People's Republic of China, the Data Security Law of the People's Republic of China, among others) with relevant policy documents from the United States, the European Union, the United Kingdom, and other countries and regions, the Minors' Internet Literacy Research Center at the School of Journalism and Communication of Beijing Normal University has released the country's first evaluation indicator system for the protection and development of minors in generative artificial intelligence (CI-STEP), providing an important reference and concrete evaluation indicators for protecting minors in this field (see the figure below).


Generative Artificial Intelligence Minors Protection and Development Evaluation (CI-STEP) Index System

This indicator model comprises six dimensions: comprehensive management, information cues, scientific popularization and publicity education, technical protection, an emergency complaint and reporting mechanism, and a privacy and personal information protection system. Each dimension contains two to three specific indicators, for a total of 15 evaluation indicators, providing a scientific and comprehensive assessment framework and practical guidance for safeguarding the rights and interests of minors.

The first dimension is comprehensive management, which includes time management, permission management, and consumption management. Time management means determining reasonable and appropriate online time based on minors' social interactions, usage scenarios, and purposes, and implementing anti-addiction requirements. Permission management means setting up an age-verification mechanism so that, under the security rules or policies set by the system, minors can access, and only access, the resources they are authorized to use. Consumption management means that generative artificial intelligence products or services should prohibit unfair and deceptive business practices, take necessary safeguards to limit bias and deception, and avoid serious risks to businesses, consumers, and public safety.
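
To make these sub-indicators more concrete, here is a minimal Python sketch of how a provider might combine an age-based permission check with a daily time quota. All names here (MinorSession, authorize, ADULT_ONLY_FEATURES, the 60-minute default quota) are hypothetical illustrations, not requirements drawn from the CI-STEP system itself.

```python
from datetime import datetime, timedelta
from typing import Optional


class MinorSession:
    """Per-user record combining an age check with a daily usage quota."""

    def __init__(self, birth_year: int, daily_limit_minutes: int = 60):
        self.birth_year = birth_year
        self.daily_limit = timedelta(minutes=daily_limit_minutes)
        self.used_today = timedelta()

    def is_minor(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.now()
        return now.year - self.birth_year < 18

    def may_continue(self, requested: timedelta) -> bool:
        # Time management: refuse further use once the daily quota is spent.
        return self.used_today + requested <= self.daily_limit

    def record_use(self, used: timedelta) -> None:
        self.used_today += used


ADULT_ONLY_FEATURES = {"in_app_purchase", "unrestricted_chat"}  # illustrative policy set


def authorize(session: MinorSession, feature: str) -> bool:
    """Permission management: minors reach only the resources they are authorized to use."""
    if session.is_minor() and feature in ADULT_ONLY_FEATURES:
        return False
    return True
```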

The second dimension is information cues, including private-information prompts, age-appropriate prompts, and risk prompts. If minors publish private information online, prompts should be issued promptly and necessary protective measures taken. Age-appropriate prompts mean that providers of generative artificial intelligence products or services should classify their products in accordance with relevant national regulations and standards, give age-appropriate reminders, and take technical measures to prevent minors from being exposed to unsuitable products or services. Some generative artificial intelligence providers require users to be over 18, or over 13 with parental consent, before they can use AI tools, but current verification options and enforcement mechanisms need improvement. Risk prompts mean that generative artificial intelligence products or services should comply with laws and regulations, respect social morality, public order, and good customs, and warn about potentially illegal content such as fraud, terrorism, bullying, violence, pornography, prejudice, discrimination, and inducement.
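
As a rough illustration of age-appropriate and risk prompts, the sketch below tags generated text with warning labels before it reaches a minor. The keyword lists and category names are invented for the example; a real provider would use trained classifiers and the risk categories required by regulation.

```python
# Illustrative keyword lists only; a production system would rely on trained
# classifiers and the risk categories required by local regulation.
RISK_CATEGORIES = {
    "fraud": ["guaranteed returns", "wire money first"],
    "violence": ["how to hurt", "make a weapon"],
    "bullying": ["nobody likes you", "you are worthless"],
}


def risk_labels(text: str) -> list[str]:
    """Return the names of all risk categories whose phrases appear in the text."""
    lowered = text.lower()
    return [name for name, phrases in RISK_CATEGORIES.items()
            if any(p in lowered for p in phrases)]


def attach_prompts(generated_text: str, user_is_minor: bool) -> str:
    """Prepend an age-appropriateness warning when risky content reaches a minor."""
    labels = risk_labels(generated_text)
    if user_is_minor and labels:
        return f"[Warning: possible {', '.join(labels)} content]\n" + generated_text
    return generated_text
```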

The third dimension is science popularization and education. Providers of generative artificial intelligence products or services hold a leading position in the technology and should support minors' thematic education, social practice, professional experience, and internships, and organize science popularization and innovative practical activities for minors based on the characteristics of the industry.

The fourth dimension is technical protection, including registration of real identity information, filtering of harmful information, and graded protection of network security. Under the Cybersecurity Law of the People's Republic of China and other relevant regulations, users are required to provide true identity information to the system. If a provider of a generative artificial intelligence product or service discovers that a user has published or disseminated information containing content harmful to the physical and mental health of minors, it should immediately stop dissemination of that information and take measures such as deletion, blocking, and disconnection of links. Providers should fulfill their security obligations under the graded network security protection system, protect the network from interference, destruction, and unauthorized access, and prevent network data from being leaked, stolen, or tampered with.
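
The filtering obligation described above can be pictured as a small moderation pipeline: detect harmful content, stop its dissemination, delete it, and block re-publication. The sketch below is a simplified illustration under stated assumptions; is_harmful_to_minors stands in for a real moderation model or service, and the in-memory stores stand in for a provider's content platform.

```python
import hashlib

BLOCKED_HASHES: set[str] = set()  # stand-in for a provider's blocklist


def is_harmful_to_minors(content: str) -> bool:
    # Placeholder check; a real provider would call a moderation model or service.
    banned_terms = ("explicit content", "self-harm instructions")
    return any(term in content.lower() for term in banned_terms)


def handle_content(content: str, content_store: dict[str, str]) -> str:
    """Stop dissemination, delete, and block content harmful to minors."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    if is_harmful_to_minors(content):
        content_store.pop(digest, None)   # deletion / disconnection of links
        BLOCKED_HASHES.add(digest)        # blocking re-publication of the same content
        return "removed"
    if digest in BLOCKED_HASHES:
        return "rejected"
    content_store[digest] = content
    return "published"
```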

The fifth dimension is the emergency complaint and reporting mechanism. Providers of generative artificial intelligence products or services should establish a mechanism for receiving and handling user complaints, publish complaint and reporting channels, promptly handle individuals' requests to correct, delete, or block their personal information, and formulate contingency plans for network security incidents so that risks such as system vulnerabilities, computer viruses, network attacks, network intrusions, data poisoning, and prompt-injection attacks are handled in a timely manner. When an incident endangering network security occurs, the emergency plan should be activated immediately and corresponding remedial measures taken.
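
A complaint and incident intake might look roughly like the following sketch, in which ordinary complaints are queued while security incidents (data poisoning, prompt injection, and so on) trigger a contingency plan. The incident types and the print-based "plan" are placeholders invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class IncidentType(Enum):
    COMPLAINT = "complaint"
    VULNERABILITY = "vulnerability"
    DATA_POISONING = "data_poisoning"
    PROMPT_INJECTION = "prompt_injection"


@dataclass
class Report:
    reporter: str
    incident: IncidentType
    details: str
    received_at: datetime = field(default_factory=datetime.now)
    resolved: bool = False


class ReportDesk:
    """Receive user complaints and escalate security incidents to an emergency plan."""

    def __init__(self) -> None:
        self.queue: list[Report] = []

    def receive(self, report: Report) -> None:
        self.queue.append(report)
        if report.incident is not IncidentType.COMPLAINT:
            self.activate_emergency_plan(report)

    def activate_emergency_plan(self, report: Report) -> None:
        # Placeholder for the provider's contingency plan (isolate, patch, notify).
        print(f"Emergency plan activated for {report.incident.value}: {report.details}")
```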

The sixth dimension is the privacy and personal information protection system. Providers of generative artificial intelligence products or services should make clear the rules for collecting and using personal information, how personal information and individual rights are protected, how minors' personal information is handled, how personal information is transferred across borders, and how privacy policies are updated. Providers should not use data to sell services, advertise, or build user profiles; they should remove personal information from training data sets where feasible, fine-tune models to reject requests for private information, prevent infringement of the rights of publicity, reputation, and personal privacy, and prohibit the illegal acquisition, disclosure, and use of personal information.
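
"Remove personal information from training data where feasible" and "fine-tune models to reject requests for private information" can be approximated, very crudely, by the sketch below. The regular expressions and trigger phrases are illustrative assumptions; real pipelines combine pattern matching with named-entity recognition, model-based filters, and human review.

```python
import re

# Illustrative patterns only; real pipelines combine regexes with NER and review.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b1\d{2}[- ]?\d{4}[- ]?\d{4}\b"),
    "id_number": re.compile(r"\b\d{17}[\dXx]\b"),
}


def scrub_training_record(text: str) -> str:
    """Remove personal information from a training example where feasible."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text


def refuses_private_request(prompt: str) -> bool:
    """Crude stand-in for a model fine-tuned to decline requests for private information."""
    triggers = ("home address of", "phone number of", "id number of")
    return any(t in prompt.lower() for t in triggers)
```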


We must pay equal attention to protection and development in the future

On the important issue of generative artificial intelligence and the protection of minors, we must insist on giving equal weight to protection and development. Guided by the concepts of being child-centered, protecting children's rights, taking responsibility, and multi-party governance, generative artificial intelligence can become a new driving force for the development of minors and help them achieve comprehensive and healthy growth in the era of artificial intelligence. Under the guidance of technology for good, the development of generative artificial intelligence should adhere to the principle of acting in the best interests of minors, balance minors' network security with their digital development, apply AI technology in line with scientific and technological ethics, and encourage Internet companies to participate actively in industry co-governance.

The future application of generative artificial intelligence needs to focus on four major issues. First, implement technical standards. The protection and development of minors in generative artificial intelligence will require supervising corporate R&D activities, supervising and intervening in generative artificial intelligence, and setting approval and testing requirements for the development and release of artificial intelligence models. Second, clarify assessment obligations. The security risk assessment obligations of R&D institutions and network platform operators for generative artificial intelligence products must be made explicit, so that such products undergo a security risk assessment before entering the market and their risks remain controllable. Third, focus on content risks. Regulatory agencies and network platform operators should further improve regulatory technology and encourage multiple parties to participate in developing and promoting relevant generative artificial intelligence products, thereby standardizing the content that the technology generates. Fourth, build an ecosystem for cultivating AI literacy. In line with the stage-by-stage development of minors' cognitive, emotional, and behavioral abilities, the content and teaching methods of courses such as information technology in compulsory education should be renewed, with primary and secondary basic education as the main arena. A government-led public service platform for artificial intelligence, jointly built with academia and industry, should promote AI literacy education and innovative practical activities, promote the balanced sharing of high-quality educational resources, avoid widening the digital divide and inequality, and create a more inclusive, fair, and open educational environment.

The article is an original product of Social Science Journal’s “Ideological Workshop” Fusion Media. It was originally published on Page 2 of Issue 1857 of Social Science Journal. Reprinting without permission is prohibited. The content in the article only represents the author’s opinion and does not represent the position of this newspaper.

Editor in charge of this issue: Wang Liyao

Extended reading

Talent Power | Artificial Intelligence Era: "Learner-centered Precision Education"

Establishing a trustworthy artificial intelligence development mechanism | Journal of Social Sciences
