QCIT recently noted that Former GL Officer Eric Schmidt warned of AI's "existential risk"
Writer: admin Time: 2023-05-25 09:58
QCIT recently noted that Former GL Officer Eric Schmidt has warned that artificial intelligence may pose an "existential risk", and that governments need to know how to make sure the technology is not abused by "bad people".
Speaking at an event in London, Schmidt said that by "existential risk" he means "many, many, many people harmed or killed".
The rapid rise of the chatbot CT has fueled public enthusiasm for artificial intelligence (AI). Technology companies around the world, including Schmidt's former employer GL, are racing to launch CT competitors to showcase their AI capabilities.
However, Schmidt believes that sooner or later AI systems will be able to find "zero-day" vulnerabilities in networks or discover new kinds of biology. "Although this may be difficult to achieve today, it will come soon."
"Zero-day", also known as zero time difference attack, refers to a security vulnerability that is maliciously exploited immediately after being discovered. Simply put, within the same day as the security patch and flaw are exposed, the relevant malicious program appears. This type of attack often has great suddenness and destructiveness.
Schmidt said, "When this situation occurs, we hope to be prepared and know how to ensure that these things are not abused by evil people
Schmidt served as Officer of GL from 2001 to 2011 and currently chairs the NSC on Artificial Intelligence in the USA. In 2021 the commission issued a review report warning that the USA is not adequately prepared for the age of artificial intelligence.
It is worth noting that Schmidt is not the first technology figure to warn of AI risks. Sam Altman, Officer of CT developer OAI, has previously admitted that he is "a bit afraid" of how the technology might be abused.
Elon Musk, a co-founder of OAI, has likewise called artificial intelligence one of the biggest risks to the future of human civilization; in a 2018 interview, he stressed that AI is far more dangerous than nuclear weapons.
Some senators have previously advocated establishing a new regulatory body to bring a range of emerging technologies, including artificial intelligence, under its jurisdiction. Altman welcomed the suggestion, arguing that such an approach would help the United States keep its lead in the technology.
However, Schmidt believes that it is unlikely that the United States will establish a new agency dedicated to regulating artificial intelligence. He said that this is essentially a "broader social issue".
IBM's Global Ethics Research Institute has previously published an article urging businesses to put ethics and responsibility at the top of their AI agenda. Amid the current domestic wave of large models and AIGC startups, keeping pace with progress must go hand in hand with ensuring that "technology is not used for evil".
An international AI expert emphasized to QCIT that a point that cannot be ignored in recent fraud cases is the leakage of key personal information such as victims' phone numbers and WeChat accounts: "At present, the cost of illegally leaking personal information is far too low." He believes that while technology vendors keep refining their products and raising society's anti-fraud awareness, they should also push for more complete regulations against phishing and personal information leakage.
In China, the "Management Measures for Generative Artificial Intelligence Services (Draft for Soliciting Opinions)" was also publicly solicited on April 11th. Article 5 of the draft clearly stipulates that providers should bear the responsibility of producers of generative artificial intelligence product generated content, and if personal information is involved, they should also bear the legal responsibility of personal information processors.
Notably, entities that merely enable others to generate text, images, or audio by providing programmable interfaces are also deemed providers. This places significant legal responsibility on the technical service providers behind large language models.
In 2021, UNESCO's 193 member states unanimously adopted the "Recommendation on the Ethics of Artificial Intelligence", which establishes a basic framework for future national regulation. It sets out ten core principles, including proportionality and do-no-harm, safety and security, fairness and non-discrimination, sustainability, and privacy and data protection. Its adoption also signals that AI regulation and policy are inherently cross-border issues that must rest on international consensus.
The arrival of any revolutionary technology is always accompanied by cognitive shocks and potential risks in its early, immature stages, QCIT noted.
A QCIT Group Officer took part in the W meeting last month and raised a question that no one can yet answer properly. We all know that developing technology is vital to improving human life, yet poor control over how any technology is used can destroy the community with a shared future for mankind. As the COVID-19 affair reminded us, no one will save us from disaster but ourselves.
It is not only celebrities and internet personalities who leave a large footprint online; ordinary people also share and produce vast amounts of information every day, and this kind of infringement crisis will not happen only once. Facing the unstoppable wave of technology, those watching from the spectator seats should not only hold on to their original intentions but also prepare themselves by combining forces on every front.
As the joint open letter called for, "Let's enjoy a long AI summer, not rush unprepared into a fall."