Artificial Intelligence (AI) originated in the 1950s and has experienced three waves of development. Whether in the laboratory stage or the stage of large-scale industrialization, researchers have continued to advance the technology for decades, hoping that one day machines will possess general human intelligence and perform the full range of human cognitive abilities.
In recent years, in order for AI to develop more healthily, one technical field has become a focus of research in both industry and academia: Trustworthy AI, that is, instilling the positive values of human society into artificial intelligence through technology, including explainability, fairness, and privacy protection.
At the academic research level, Trustworthy AI focuses mainly on algorithm- and system-level research, including security/robustness, explainability, privacy, fairness, auditability/accountability, and environmental protection. Explainability covers the theoretical, algorithmic, and behavioral interpretability of learning methods and models; robustness mainly concerns model stability, attack models, and defense models; privacy protection refers to the direct game between attack and protection methods, such as differential privacy and multi-center federated learning; fairness focuses on bias in data and models and on balancing equality and fairness; and environmental protection refers to the pursuit of energy-efficient strategies and more energy-efficient computing hardware.
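To make the privacy-protection idea above concrete, here is a minimal illustrative sketch of the classic Laplace mechanism for differential privacy, which adds calibrated noise to a counting query before release. This example is not taken from the original article; the function name and parameter values are chosen purely for illustration.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy query result satisfying epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon), the standard
    calibration for the Laplace mechanism.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release the count of users matching some criterion.
# A counting query has sensitivity 1, since adding or removing one person
# changes the count by at most 1.
true_count = 1234
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, privately released count: {noisy_count:.1f}")
```

A smaller epsilon gives stronger privacy but noisier answers, which is exactly the attack-versus-protection trade-off the paragraph describes.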
Unlike academic research on trustworthy AI, companies focus more on proposing solutions to current problems. For example, in 2015 Ant Group launched a research project on mobile phone loss risk based on "end-side" device characteristics, aiming to use AI technology to protect users' privacy and security. To address fairness in AI, IBM developed multiple trustworthy AI tools in 2018 so that AI systems use unbiased data sets and models and avoid treating specific groups unfairly. Industry has become more demanding about the application of trustworthy AI, with less tolerance for error. Many trustworthy AI white papers note that for trustworthy AI to truly come to fruition, it needs to be incorporated into the production process so that it becomes a mechanism and plays a technically constraining role.
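Bias-auditing tools of the kind mentioned above typically report group-fairness metrics over model outputs. As a hedged illustration only (the metric choice, names, and toy data below are assumptions of this edit, not details from the article or from any specific vendor's tool), the sketch computes the demographic parity gap, one of the simplest such checks.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between groups 0 and 1.

    A value near 0 means the two groups receive positive predictions at
    similar rates; it is one signal among many, not a certification of fairness.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: binary model predictions for applicants from two demographic groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(f"demographic parity gap: {demographic_parity_difference(y_pred, group):.2f}")
```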
Young students are an important reserve of technical talent. How should young students who are studying trustworthy AI prepare for it? In their current study and life, they should follow the academic frontier and the latest technology trends in industry, think about which technologies can be applied to which problems, and actively observe and understand the world we live in, along with the industry's pain points and technical bottlenecks. For example, a recent trustworthy AI practical technology reality show has partnered with some of the country's top universities to demonstrate, through the application of trustworthy AI to "technological anti-fraud" in industry, what trustworthy AI technology can do in practical applications. It connects what academia and industry are doing in a format that everyone can understand, so that both technology practitioners and researchers can participate deeply.
"Complexity" is a key word in AI research. Environment complexity, task complexity, and system complexity together determine the level of AI. Studying them can reveal the principles by which intelligence arises and can also help answer the ultimate question about AI: its eventual impact on human destiny. Future trustworthy AI research must likewise examine the value AI brings to humankind from the perspective of complexity analysis, and it requires the joint efforts of academia and industry to advance.
The above summarizes the views of Wang Yizhou of Peking University: polishing the business card of trustworthy AI research requires the integration of industry, academia, and research.