Report says Nvidia artificial intelligence software can be easily cracked, posing risk of leaking user privacy
According to a report in the Financial Times, Nvidia's artificial intelligence software "NeMo" can be made to bypass its security restrictions with little effort, potentially exposing private user data.
NeMo is a framework for enterprise customers that combines a company's own data with large language models to answer user questions; its applications include customer service and answering simple medical queries. Robust Intelligence, a California-based information security company, said in a report that malicious users can easily circumvent the security restrictions built into NeMo's AI system. Its researchers bypassed the language models' guardrails in just a few hours.
IT House noted that in one test, the researchers instructed NeMo to replace the letter "I" with the letter "J", which caused NeMo to release personal user information from its database. In addition, although NeMo was configured to provide only career advice, the researchers found that leading questions could steer it into discussing topics such as the health of Hollywood actors and the Franco-Prussian War. This shows that the system's restrictions preventing the AI from discussing certain topics were no longer in effect.
Nvidia Vice President Jonathan Cohen said the NeMo framework is intended only to help developers build chatbots that stay within topics the developers define, and that it is released to developers as open source software. The vulnerabilities identified have since been fixed. Cohen declined to say how many enterprises use the NeMo framework, but stressed that Nvidia had received no other reports of vulnerabilities.