Explore EU agreement on artificial intelligence regulation

WBOY
Release: 2024-05-07 17:34:32

After three days of "marathon" negotiations, the Council presidency and the European Parliament's negotiators reached a provisional agreement on harmonised rules for artificial intelligence, expected to become the Artificial Intelligence Act. The draft regulation focuses on providing a compliance framework that ensures AI systems deployed in the EU are safe and respect fundamental rights. Beyond that, the regulation aims to stimulate investment and innovation in artificial intelligence across Europe.


The Artificial Intelligence Act is a landmark piece of legislation designed to create an environment in which the use of artificial intelligence becomes a tool for greater safety and trust, with the engagement of public and private stakeholders across the EU. Its central idea is a "risk-based" approach to regulating AI according to its capacity to cause harm to society: the greater the risk of harm, the stricter the rules that apply. The law sets a global precedent for the regulation of artificial intelligence in other jurisdictions, much as the GDPR did for the protection of personal data, promoting the EU's approach to technology regulation worldwide.

Main contents of the interim agreement

Compared with the European Commission's initial proposal, the main new elements of the provisional agreement on the regulation of artificial intelligence can be summarized as follows:

  • Rules on high-impact general-purpose AI models that could pose systemic risk in the future, as well as on high-risk AI systems.
  • A revised system of governance, with certain coordination and enforcement powers at EU level.
  • An expanded list of prohibited practices, though law enforcement may use remote biometric identification in public spaces, subject to safeguards against abuse.
  • Better protection of rights, through an obligation on actors deploying high-risk AI systems to assess the fundamental rights implications of those systems before putting them into use.

More specifically, the provisional agreement includes the following aspects:

  • Definition and scope

Under the compromise agreement, the definition of an AI system is aligned with the approach proposed by the OECD. In this way, the criteria for what counts as an AI system help distinguish it more clearly from simpler software systems.

Furthermore, the provisional agreement clarifies that the Regulation does not apply to areas outside the scope of EU law and should in no way affect Member States' competences in matters of national security, or those of any party entrusted with responsibilities in that field. Nor will the Artificial Intelligence Act extend to AI systems used exclusively for military or defence purposes. The agreement also specifies that the Regulation will not apply to AI systems used solely for research and innovation, or to people using AI for non-professional reasons.

  • Classification of AI systems as high-risk, and prohibited AI practices

The agreement provides a horizontal layer of protection, including a high-risk classification, to ensure that AI systems unlikely to cause serious violations of fundamental rights or other significant risks are not captured by the strictest rules. AI systems that pose only a limited risk of harm to users will be subject to minimal transparency requirements: for example, users should be informed that content was generated by artificial intelligence, so that they can decide whether to use it or take further action.

A wide range of high-risk AI systems will be authorised to operate on EU territory, subject to a set of requirements and obligations for access to the EU market. The co-legislators clarified and adjusted some of these provisions to make them more technically feasible and less burdensome for stakeholders, for example regarding data quality and the technical documentation that SMEs must prepare to demonstrate that their high-risk AI systems were built safely and comply with the rules.

Since AI systems are developed and distributed through complex value chains, the compromise agreement includes, among other things, changes clarifying the allocation of responsibilities among the various actors in those chains, in particular the providers and users of AI systems. It also clarifies how the obligations arising from the AI Act interact with obligations set out in other legislation, such as the EU's data legislation and sectoral legislation.

For certain uses of artificial intelligence, the risk is deemed unacceptable, and these systems will therefore be banned from the EU. Under the provisional agreement, prohibited practices include cognitive behavioural manipulation, the untargeted scraping of facial images from the internet, emotion recognition in the workplace and educational institutions, social scoring, biometric categorisation to infer sensitive data such as sexual orientation or religious beliefs, and certain cases of predictive policing.

  • Law enforcement exceptions

Given the specific nature of law enforcement authorities and their need to use AI systems in the performance of their duties, several important changes were made to the Commission's proposed rules on law enforcement use of AI. Subject to appropriate safeguards, these changes reflect the need to respect the confidentiality of sensitive operational data. For example, an emergency procedure was introduced allowing the deployment of a high-risk AI tool in urgent cases without first completing a conformity assessment. In addition, a specific mechanism has been established to protect fundamental rights and prevent the misuse of AI applications.

Furthermore, the text of the provisional agreement clearly sets out the objectives for which real-time remote biometric identification systems may be used in publicly accessible spaces, strictly for law enforcement purposes and only in exceptional circumstances. The compromise agreement provides additional safeguards and limits these exceptions to cases such as searches for victims of certain crimes, the prevention of genuine and present threats including terrorist attacks, and searches for persons suspected of the most serious crimes.

  • General-purpose AI systems and foundation models

New rules have been formulated for situations in which an AI system can be used for many different purposes (general-purpose AI), and for situations in which a general-purpose AI system is subsequently integrated into another high-risk system. The provisional agreement also addresses the specific case of general-purpose AI (GPAI) systems, whose supervision is a core part of the agreement.

Rules have also been concluded for foundation models, described as systems capable of performing a wide range of complex tasks competently, such as generating text, segmenting video, processing natural language, generating code and many other computing tasks. The provisional agreement requires foundation models to comply with specific transparency obligations before being placed on the market. A much stricter regime applies to "high-impact" foundation models: models trained on massive amounts of data, with advanced complexity and capabilities well above average, which can spread systemic risks along the value chain that are shared by all participating businesses.

  • New governance structure

In light of the new rules on GPAI models and the need for their standardized supervision at EU level, an AI Office has been established within the Commission to oversee these most advanced AI models, promote the development of standards and testing practices, and enforce the common rules in all Member States. A scientific panel of independent experts will advise the AI Office on GPAI models by contributing to the development of methodologies for evaluating the capabilities of foundation models, advising on the designation of high-impact foundation models ahead of their market launch, and monitoring possible material safety risks related to foundation models.

To this end, an AI Board composed of representatives of the Member States will serve as a coordination platform and advisory body to the Commission, giving Member States a prominent role in the implementation of the Regulation, including the design of codes of practice for foundation models. Last but not least, an advisory forum will be established for individual stakeholders, including industry players, SMEs, start-ups, civil society and academia, to provide technical expertise to the AI Board.

  • Penalties

Companies that violate the Artificial Intelligence Act will be sanctioned with fines set either as a fixed amount or as a percentage of global annual turnover in the previous financial year, whichever is higher: 35 million euros or 7% for violations involving the prohibited AI applications mentioned above, 15 million euros or 3% for violations of the Act's other obligations, and 7.5 million euros or 1.5% for supplying misleading information. Nonetheless, the provisional agreement provides for more proportionate caps on fines for small and medium-sized enterprises and start-ups that commit to complying with the provisions of the Artificial Intelligence Act.
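The "whichever is higher" fine rule can be sketched as a small calculation. The tier amounts come from the figures above; the function name and structure are purely illustrative, not part of the Act itself:

```python
def applicable_fine(tier_fixed_eur: float, tier_pct: float,
                    global_turnover_eur: float) -> float:
    """Return the higher of the fixed fine and the turnover-based fine."""
    return max(tier_fixed_eur, tier_pct * global_turnover_eur)

# Prohibited-practice tier (EUR 35M or 7%) for a hypothetical firm
# with EUR 1 billion in global annual turnover:
fine = applicable_fine(35_000_000, 0.07, 1_000_000_000)
# 7% of 1 billion is 70 million, which exceeds the 35 million floor,
# so the turnover-based amount applies.
```

For a smaller firm, say 100 million euros in turnover, 7% would be only 7 million, so the fixed 35 million floor would apply instead (before any SME-specific cap under the provisional agreement).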

The provisional agreement also provides that any natural or legal person has the right to lodge a formal complaint with the relevant market surveillance authority, which must handle such complaints in line with its dedicated procedures.

  • Transparency and Protection of Fundamental Rights

Notably, the provisional agreement requires deployers of high-risk AI systems to conduct a fundamental rights impact assessment before putting a system into use. It also provides for increased transparency around the use of high-risk AI systems and clarifies the scope of these obligations. Some provisions of the Commission's proposal have been amended so that certain public-sector users of high-risk AI systems must also register those systems in the EU database for high-risk AI systems. In addition, users operating emotion recognition systems will have to inform natural persons when they are exposed to such a system.

  • Measures to support innovation

The provisions on innovation support have been substantially revised compared with the Commission's proposal, with the aim of creating a more innovation-friendly legal framework and ensuring a future-proof regulatory environment across the EU.

AI regulatory sandboxes are designed to provide a controlled environment for the development, testing and validation of innovative AI systems, and the new provisions also allow AI systems to be tested in real-world conditions, subject to specific conditions and safeguards. To reduce the administrative burden on smaller companies, the provisional agreement sets out a list of support measures for SMEs and allows derogations where they are limited and strictly specified.

  • Effective Date

The provisional agreement stipulates that, subject to certain exceptions, the provisions of the Artificial Intelligence Act shall apply two years after its entry into force.

Source: 51cto.com