Alert! AI may lead to the proliferation of false information
According to Science and Technology Daily, generative artificial intelligence (AI) does more than affect certain professional roles and pose new challenges to the existing copyright system: it can also be exploited on social media. AI-assisted disinformation campaigns create new dangers and amplify existing ones, threatening economic and social stability.
Based on reasonable extrapolation from the current situation, AI-assisted disinformation is likely to be used in the following four ways:
① Fabricating fake news.
The main goal here is to distort and tamper with official media reports, replacing the crude old production method of "start with one picture and make up the rest in editing" with a fully automated pipeline: set the requirements, feed in some material, and automatically generate images, text, or video.
② Spreading rumors.
Fake news is not only quick to fabricate; it can also be spread quickly in automated batches. Combine this with "professionals" who understand new media, the tonal mechanics of social platforms, and communication or journalism theory, and you can achieve the effect of "a lie repeated a thousand times becomes the truth". Rumors are also inherently sensational, so they naturally attract attention and discussion online. Debunking them, by contrast, requires running your legs off, and because the corrections are "too correct" to be interesting, hardly anyone watches.
③ Combining generative AI with social media features to carry out a "sea of spam" tactic.
Flooding the entire network with entertainment news, gossip, and other "tittytainment" content occupies users' whole field of vision, crowding out serious news and information and ultimately controlling what circulates on social media. Through deep cooperation with entertainment companies, those behind the scenes can reap additional profits while achieving their tactical goals.
④ Using deep synthesis technology to counterfeit accounts and disrupt the normal order of social media.
By crawling information from the official accounts of government agencies, enterprises, institutions, and social organizations, and from the personal accounts of celebrities, influencers, and opinion leaders, including but not limited to text, images, facial likenesses, and voices, and then using this corpus to generate false content, bad actors can not only cause trouble for those being impersonated but also stir up conflict and blur the line between truth and falsehood.
More importantly, whether acting for profit or to achieve ulterior motives, the producers and disseminators of false information are already breaking the law, so they will naturally ignore the constraints of laws, regulations, and platform rules. This creates three major difficulties: review is difficult, supervision is difficult, and punishment is difficult.
① Review is difficult.
Platforms' current requirements for generative AI content boil down to "uploader self-discipline": the user is expected to voluntarily add a conspicuous label that clearly tells the audience the content was produced by AI. Such a rule "guards against gentlemen but not against villains". Normal users will follow it, but bad-faith users can simply not label, delay labeling, or under-label.
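To make the "guards against gentlemen but not villains" point concrete, here is a minimal, hypothetical sketch of what platform-side enforcement of a self-declared label can amount to. The field names (ai_generated, label_text) are assumptions for illustration, not any real platform's API; the key observation is that the check can only act on what the uploader chooses to declare.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    """Hypothetical upload record; field names are illustrative only."""
    content: str
    ai_generated: bool      # self-declared by the uploader
    label_text: str = ""    # the conspicuous label the rules require

def passes_label_check(upload: Upload) -> bool:
    """Platform-side check under an 'uploader self-discipline' rule.

    If the uploader declares the content as AI-generated, require a
    visible label. If the uploader simply does not declare it, the
    check has nothing to verify against -- which is exactly why the
    rule guards against gentlemen but not villains.
    """
    if upload.ai_generated:
        return bool(upload.label_text.strip())
    return True  # undeclared content sails through unchecked

# A compliant user labels their content; a bad-faith user just omits the flag.
honest = Upload(content="...", ai_generated=True, label_text="AI-generated content")
dishonest = Upload(content="...", ai_generated=False)  # actually AI-made, but undeclared
print(passes_label_check(honest), passes_label_check(dishonest))  # True True
```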
② Supervision is difficult.
Even before AI entered content generation, it was well known that fake news on the Internet sprang up endlessly like weeds and was hard to eradicate. The traditional management approach combines big-data screening with user reports, cracking down on offending accounts and flagging the accounts associated with them. This is a legitimate and unobjectionable way of handling the problem. The trouble is that once unscrupulous users adopt AI, what used to be scattered, spot-like wildfires becomes a blaze across the whole landscape: the manpower and material resources consumed, and the difficulty of management, rise exponentially.
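As a rough illustration of the "flag associated accounts" step, here is a simplified sketch, not any platform's actual system: starting from accounts already confirmed as violators, it walks a hypothetical association graph (shared devices, registration details, and so on, all assumed for illustration) and marks connected accounts for review.

```python
from collections import deque

def flag_associated_accounts(confirmed_bad: set[str],
                             associations: dict[str, set[str]],
                             max_hops: int = 2) -> set[str]:
    """Breadth-first walk over a hypothetical account-association graph.

    `associations` maps an account ID to the accounts linked to it
    (e.g. by shared device fingerprints or registration details --
    criteria assumed for illustration). Accounts within `max_hops`
    of a confirmed violator are flagged for manual review, not
    automatically banned.
    """
    flagged: set[str] = set()
    seen = set(confirmed_bad)
    queue = deque((acct, 0) for acct in confirmed_bad)
    while queue:
        acct, hops = queue.popleft()
        if hops >= max_hops:
            continue
        for neighbor in associations.get(acct, set()):
            if neighbor not in seen:
                seen.add(neighbor)
                flagged.add(neighbor)
                queue.append((neighbor, hops + 1))
    return flagged

# Example: one confirmed violator and a small association graph.
graph = {"bot_01": {"bot_02", "bot_03"}, "bot_02": {"bot_04"}}
print(flag_associated_accounts({"bot_01"}, graph))  # {'bot_02', 'bot_03', 'bot_04'}
```

The sketch only underlines the scaling problem: when AI lets one operator spin up thousands of accounts, both the association graph and the review queue it produces grow far faster than human moderators can keep up with.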
③ Punishment is difficult.
Punishment ultimately faces two problems: the supporting laws and regulations are not yet complete, and the secrecy of these operations makes the perpetrators hard to catch. That said, draft provisions on the legal handling of cyber violence were recently opened for public comment, and detailed implementation is expected soon. As the governance of AI-generated false information advances, related legislation should follow.
As Azhi has noted many times in previous discussions of generative AI, AI is not fundamentally different from the new technologies that have appeared throughout history. It is neither right nor wrong, good nor evil; the key is who uses it and how. There is no doubt about its role as a new engine for the Internet industry and a new reservoir for the financial market. But blindly and one-sidedly touting its technological superiority is mostly the work of opportunists chasing profit. We should view the innovation of generative AI objectively and comprehensively, and take precautions in advance to avoid the risks its abuse would bring.