The potential risks associated with artificial intelligence as presented by OpenAI’s CEO Sam Altman at the recent Senate hearings just became a lot more complicated for corporate leaders.
An open letter released by the Center for AI Safety warned that developing AI technology could, in the future, present an existential threat to humanity similar to other societal extinction-level threats: "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."
The open letter was signed by Mr. Altman and more than 350 other executives, researchers and engineers involved in AI development. It adds further evidence to the growing public conversation on the potential risks presented by AI and the most effective means of regulating its implementation. It also complicates the challenges already confronting corporate leaders as they make decisions about how to incorporate AI technology into their organizations. In other words, it's a problem.
These leaders are already balancing the exciting opportunities presented by AI technology against its known risks, ranging from privacy breaches and clinical errors to wholesale misinformation, racial bias and manipulation. Among some of those leaders, there is also a recognition that the rapid development of the technology raises longer-term concerns, including its potential to surpass human-level capabilities in certain fields.
But injecting the threat of societal extinction into the conversation moves the corporate risk discussion to a new, and most likely unwanted, level.

No thoughtful corporate board is going to reject organizational investment in fundamental AI technology based on "existential risks" to society. Boards are generally aware of the "big picture" risks of AI as they relate to political, economic and national security issues, among others. They get it.