FLI BLOG
Article: Research Challenges for Safe AI Systems
Jacob Steinhardt (Blogger) wrote on Jul. 27, 2015 @ 11:06 GMT
FLI has done a great job of kick-starting a discussion on the research challenges in ensuring the safety of AI systems, by putting forth a proposed program of research in their research priorities document.
To ensure that our efforts are directed effectively, it is important for other AI researchers to engage with the question of what topics are important for AI safety. Towards this end, I've put forward my own ideas about the potential risks from AI systems and what research questions they motivate. There is overlap with the FLI document but also some important differences. I would welcome commentary on the proposed research questions, as well as efforts by others to offer their own proposals. You can find the document here.
Max Tegmark (FLI Administrator) wrote on Aug. 3, 2015 @ 02:46 GMT
Thanks Jacob for doing this!