
Wednesday, May 31, 2023

Statement on AI Risk

A considerable statement of agreement, signed by many worldwide, including top academics in China.

https://www.youtube.com/watch?v=f20wXjWHh2o


Hassabis, Altman and AGI Labs Unite - AI Extinction Risk Statement [ft. Sutskever, Hinton + Voyager]

54,971 views  May 30, 2023

The leaders of almost all of the world's top AGI Labs have united to put out a statement on AI Extinction Risk, and how mitigating it should be a global priority. This video covers not just the statement and the signatories, including names as diverse as Geoffrey Hinton, Ilya Sutskever, Sam Harris and Lex Fridman, but also goes deeper into the 8 Examples of AI Risk outlined at the same time by the Center for AI Safety.

Top academics from China join in, while Meta demurs, claiming autoregressive LLMs will 'never be given agency'. I briefly cover the Voyager paper, in which GPT-4 is given agency to play Minecraft, and does so at SOTA levels.

Statement: https://www.safe.ai/statement-on-ai-risk

Further: https://www.safe.ai/ai-risk (8 risk types)

Natural Selection Paper: https://arxiv.org/pdf/2303.16200.pdf

Yann LeCun on 20VC w/ Harry Stebbings:   

 • Yann LeCun: Meta’...  

Voyager Agency Paper: https://arxiv.org/pdf/2305.16291.pdf

Karpathy Tweet: https://twitter.com/karpathy/status/1...

Hassabis Benefit Speech:   


 • Fei-Fei Li & Demi...  

Stanislav Petrov: https://en.wikipedia.org/wiki/Stanisl...

Bengio Blog: https://yoshuabengio.org/2023/05/07/a...

https://www.patreon.com/AIExplained
