RECENT BLOG POSTS
Rob Bensinger: wrote MIRI News: October 2015
Max Tegmark: wrote AI safety at the United Nations
Max Tegmark: wrote Hawking Reddit AMA on AI
Viktoriya Krakovna: wrote Happy Petrov Day!
Max Tegmark: wrote We're hiring!
James Hogan: wrote Future of Life Institute Summer 2015 Newsletter
Rob Bensinger: wrote MIRI News: September 2015
RECENT COMMENTS
Stuart Armstrong: commented on Ackerman defends AI weapons: From the comment I posted there (note that I am an outlier in the FHI in ...
Max Tegmark: commented on Article: Research Challenges for Safe AI Systems: Thanks Jacob for doing this!

FLI BLOG
January 25, 2021
The FLI blog is a place for members of the existential risk community to post commentary and generate discussion on special topics. Only designated bloggers can start a new topic, but then anyone is welcome to join in the conversation.
If you have an idea that you think would make a good blog post, let us know at forums@futureoflife.org!
MIRI News: October 2015
1 post • created by Rob Bensinger • Oct. 15, 2015 @ 01:57 GMT
"MIRI's October Newsletter collects recent news and links related to the long-term impact of artificial intelligence. Highlights: — New introductory material on MIRI can be found on our information ..."
AI safety at the United Nations
1 post • created by Max Tegmark • Oct. 14, 2015 @ 02:29 GMT
"Nick Bostrom and I were invited to speak at the United Nations about how to avoid AI risk. I'd never been there before, and it was quite the adventure! Here's the video - I start talking at 1:54:40 ..."
Hawking Reddit AMA on AI
1 post • created by Max Tegmark • Oct. 10, 2015 @ 04:52 GMT
"Our Scientific Advisory Board member Stephen Hawking's long-awaited Reddit AMA answers on Articificial Intelligence just came out, and was all over today's world news, including MSNBC, Huffington ..."
Happy Petrov Day!
1 post • created by Viktoriya Krakovna • Sep. 26, 2015 @ 17:05 GMT
"32 years ago today, Soviet army officer Stanislav Petrov refused to follow protocol and averted a nuclear war. From 9/26 is Petrov Day: "On September 26th, 1983, Lieutenant Colonel Stanislav ..."
We're hiring!
1 post • created by Max Tegmark • Sep. 23, 2015 @ 15:25 GMT
"To bolster our ability to do good, we at FLI are looking to fill two job openings. Please consider applying and please pass this posting along to anyone you think would be a good fit! PROJECT ..."
Future of Life Institute Summer 2015 Newsletter
1 post • created by James Hogan • Sep. 22, 2015 @ 20:42 GMT
"$7M in AI research grants announced, open letter about economic impacts of AI, and more."
MIRI News: September 2015
1 post • created by Rob Bensinger • Sep. 21, 2015 @ 22:54 GMT
"[ Rob Bensinger is the Outreach Coordinator of the Machine Intelligence Research Institute (MIRI), a research nonprofit studying the technical questions raised by the prospect of smarter-than-human ..."
GCRI News Summaries
1 post • created by Seth Baum • Sep. 14, 2015 @ 21:35 GMT
"Here are the July and August global catastrophic risk news summaries, written by Robert de Neufville of the Global Catastrophic Risk Institute. The July summary covers the Iran deal, Russia's new ..."
Future of AI at SciFoo 2015
1 post • created by Meia Chita-Tegmark • Sep. 1, 2015 @ 14:08 GMT
"Here is a short summary of the Future of AI session organized at SciFoo by Nick Bostrom, Gary Marcus, Jaan Tallinn, Max Tegmark and Murray Shanahan."
Wallace defends AI weapons
1 post • created by Max Tegmark • Aug. 11, 2015 @ 04:31 GMT
"Sam Wallace, a former US army officer, has an interesting piece criticizing our open letter suggestion as "unrealistic and dangerous". I just wrote a response together with Stuart Russell and Toby ..."
Financial Times supports ban
1 post • created by Max Tegmark • Aug. 4, 2015 @ 21:55 GMT
"The open letter got its first big editorial endorsement today, with the Financial Times supporting a ban."
Ackerman defends AI weapons
2 posts • created by Max Tegmark • Aug. 3, 2015 @ 02:33 GMT
"Evan Ackerman, a contributing editor for IEEE-Spectrum, has an interesting piece panning our open letter, writing `who would seriously be for “killer robots?” I am.' I just wrote a response ..."
Russell pans AI weapons on NPR
1 post • created by Max Tegmark • Jul. 30, 2015 @ 04:37 GMT
"Stuart Russell just gave an interesting interview on NPR's "All Things Considered" about AI weapons and our open letter advocating a ban. He also did a TV interview on the topic for Al Jazeera (that's..."
Open letter on AI weapons
1 post • created by Max Tegmark • Jul. 29, 2015 @ 01:18 GMT
"At a press conference at the IJCAI AI-meeting in Buenos Aires today, Stuart Russell and Toby Walsh announced an open letter on autonomous weapons that we've helped organize. We're delighted that it's ..."
Article: Research Challenges for Safe AI Systems
2 posts • created by Jacob Steinhardt • Jul. 27, 2015 @ 10:56 GMT
"FLI has done a great job of kick-starting a discussion on the research challenges in ensuring the safety of AI systems, by putting forth a proposed program of research in their research priorities ..."
GCRI News Summary June 2015
1 post • created by Seth Baum • Jul. 21, 2015 @ 17:50 GMT
"Here is the June 2015 global catastrophic risk news summary, written by Robert de Neufville of the lobal Catastrophic Risk Institute. The news summaries provide overviews across the world of global ..."
AI safety research on NPR
1 post • created by Max Tegmark • Jul. 16, 2015 @ 18:12 GMT
"I just had the pleasure of discussing our new AI safety research program on National Public Radio. I was fortunate to be joined by two of the winners of our grants competition: CMU roboticist Manuela ..."
Are we heading into a second Cold War?
1 post • created by Janos Kramar • Jul. 6, 2015 @ 04:20 GMT
"US-Russia tensions are at their highest since the end of the Cold War, and some analysts are warning about the growing possibility of a nuclear war. Their estimates of risk are comparable to some ..."
ITIF panel on superintelligence with Russell and Soares
1 post • created by Viktoriya Krakovna • Jul. 2, 2015 @ 01:15 GMT
"The Information Technology and Innovation Foundation held a panel discussion on June 30, "Are Superintelligent Computers Really A Threat to Humanity?"."
And the winners are...
1 post • created by Max Tegmark • Jul. 1, 2015 @ 16:12 GMT
"After a grueling expert review of almost 300 grant proposals from around the world, we are delighted to announce the 37 research teams that have been recommended for funding to help keep AI ..."
AI Economics Open Letter
1 post • created by Max Tegmark • Jun. 19, 2015 @ 17:59 GMT
"Inspired by our Puerto Rico AI conference and open letter, a team of economists and business leaders have now launched their own open letter specifically on how to make AI's impact on the economy ..."
CBS takes on AI
1 post • created by Max Tegmark • Jun. 15, 2015 @ 02:22 GMT
"CBS News interviewed me for this morning's segment on the future of AI, which avoided the tired old "robots-will-turn-evil" message and reported on the latest DARPA challenge."
Wait But Why: 'The AI Revolution'
1 post • created by Melody Guan • Jun. 13, 2015 @ 00:40 GMT
"Tim Urban of Wait But Why has an engaging two-part series on the development of superintelligent AI and the dramatic consequences it would have on humanity. Equal parts exciting and sobering, this is ..."
AI Ethics in Nature
1 post • created by Max Tegmark • Jun. 6, 2015 @ 15:26 GMT
"Nature just published four interesting perspectives on AI Ethics, including an article and podcast on Lethal Autonomous Weapons by Stuart Russell."
Sam Altman Investing in 'AI Safety Research'
1 post • created by Jesse Galef • Jun. 6, 2015 @ 14:57 GMT
"Sam Altman, head of Y Combinator, gave an interview with Mike Curtis at Airbnb's Open Air 2015 conference and brought up (among other issues) his concerns about AI value alignment. He didn't pull any punches:"
Stuart Russell on the long-term future of AI
1 post • created by Viktoriya Krakovna • Jun. 1, 2015 @ 00:52 GMT
"Stuart Russell recently gave a public lecture on The Long-Term Future of (Artificial) Intelligence, hosted by the Center for the Study of Existential Risk. "
What happens when our computers get smarter than we are?
1 post • created by Peter Haas • May. 26, 2015 @ 01:21 GMT
"Nick Bostrom's talk on Artificial Super Intelligence is up at TED.com"
Happy Birthday, FLI!
1 post • created by Meia Chita-Tegmark • May. 25, 2015 @ 03:21 GMT
"Today we are celebrating one year since our launch event. It's been an amazing year, full of wonderful accomplishments, and we would like to express our gratitude to all those who supported us with ..."
What AI Researchers Say About Risks from AI
1 post • created by Viktoriya Krakovna • May. 23, 2015 @ 23:59 GMT
"Scott Alexander does a comprehensive review of the opinions of prominent AI researchers on the risks from AI."
Hawking AI speech
1 post • created by Max Tegmark • May. 12, 2015 @ 21:30 GMT
"Stephen Hawking, who serves on our FLI Scientific Advisory Board, just gave an inspiring and thought-provoking talk that I think of as "A Brief History of Intelligence". He spoke of the opportunities ..."
MIRI's New Executive Director
1 post • created by Viktoriya Krakovna • May. 11, 2015 @ 01:35 GMT
"Big news from our friends at MIRI: Nate Soares is stepping up as the new Executive Director, as Luke Muehlhauser has accepted a research position at GiveWell."
January 2015 Newsletter
1 post • created by Jesse Galef • May. 4, 2015 @ 22:01 GMT
"In the News * Top AI researchers from industry and academia have signed an FLI-organized open letter arguing for timely research to make AI more robust and beneficial. Check out our research ..."
Chinese Scientists Report Unsuccessful Attempt to Selectively Edit Disease Gene in Human Embryos
1 post • created by Grigory Khimulya • May. 4, 2015 @ 15:27 GMT
"Researchers from Sun Yat-sen University, Guangzhou failed to selectively modify a single gene in unicellular human embryos using the CRISPR/Cas9 technology, noting many off-target mutations. The study..."
November 2014 Newsletter
1 post • created by Jesse Galef • May. 3, 2015 @ 20:22 GMT
"In the News * The winners of the essay contest we ran in partnership with the Foundational Questions Institute have been announced! Check out the awesome winning essays on the FQXi website. * ..."
Dubai to Employ "Fully Intelligent" Robot Police
1 post • created by Jesse Galef • May. 2, 2015 @ 01:15 GMT
"I don't know how seriously to take this, but Dubai is developing Robo-cops to roam public areas like malls: "'The robots will interact directly with people and tourists,' [Colonel Khalid Nasser ..."
Jaan Tallinn on existential risks
1 post • created by Viktoriya Krakovna • Apr. 16, 2015 @ 15:05 GMT
"An excellent piece about existential risks by FLI co-founder Jaan Tallinn on Edge.org: "The reasons why I'm engaged in trying to lower the existential risks has to do with the fact that I'm a ..."
Recent AI discussions
1 post • created by Viktoriya Krakovna • Apr. 16, 2015 @ 14:54 GMT
"1. Brookings Institution post on Understanding Artificial Intelligence, discussing technological unemployment, regulation, and other issues. 2. A recap of the Science Friday episode with Stuart ..."
Assorted Sunday Links #3
1 post • created by Jacob Trefethen • Apr. 13, 2015 @ 06:45 GMT
"1. In the latest issue of Joint Force Quarterly, Randy Eshelman and Douglas Derrick call for the U.S. Department of Defense to conduct research on how "to temper goal-driven, autonomous agents with ..."
Russell, Horvitz, and Tegmark on Science Friday: Is AI Safety a Concern?
1 post • created by Jesse Galef • Apr. 11, 2015 @ 15:05 GMT
"To anyone only reading certain news articles, it might seem like the top minds in artificial intelligence disagree about whether AI safety is a concern worth studying. But on Science Friday yesterday, guests Stuart Russell, Eric Horvitz, and Max Tegmark a"
Gates & Musk discuss AI
1 post • created by Max Tegmark • Apr. 8, 2015 @ 19:00 GMT
"Bill Gates and Elon Musk recently discussed the future of AI, and Bill said he shared Elon's safety concerns. Regarding people dismissing AI concerns, he said "How can they not see what a huge ..."
CSER and FHI recruiting post-docs!
1 post • created by Jacob Trefethen • Apr. 6, 2015 @ 14:51 GMT
"Exciting news from the two major existential risk research hubs in the UK: The Centre for the Study of Existential Risk (University of Cambridge) and the Future of Humanity Institute (University of ..."
April 2015 Newsletter
1 post • created by Jesse Galef • Apr. 1, 2015 @ 20:01 GMT
"In the News * The MIT Technology Review recently published a compelling overview of the possibilities surrounding AI, featuring Nick Bostrom's Superintelligence and our open letter on AI research ..."
AI grant results
1 post • created by Max Tegmark • Mar. 30, 2015 @ 17:53 GMT
"We were quite curious to see how many applications we’d get for our Elon-funded grants program on keeping AI beneficial, given the short notice and unusual topic. I’m delighted to report that the ..."
Assorted Sunday Links #2
1 post • created by Jacob Trefethen • Mar. 30, 2015 @ 04:57 GMT
"Some links from the last few weeks—and some from the last few days—on what's been happening in the world of existential risk: 1. The Open Philanthropy Project, a growing philanthropic force in the..."
Wozniak concerned about AI
1 post • created by Max Tegmark • Mar. 24, 2015 @ 00:50 GMT
"Steve Wozniak, without whom I wouldn't be typing this on a Mac, has now joined the growing group of tech pioneers (most recently his erstwhile arch-rival Bill Gates) who feel that we shouldn't dismiss..."
MIRI's New Technical Research Agenda
1 post • created by Luke Muehlhauser • Mar. 18, 2015 @ 16:56 GMT
"Luke Muehlhauser is the Executive Director of the Machine Intelligence Research Institute (MIRI), a research institute devoted to studying the technical challenges of ensuring desirable behavior from ..."
Assorted Sunday Links #1
1 post • created by Jacob Trefethen • Feb. 21, 2015 @ 15:54 GMT
"1. Robert de Neufville of the Global Catastrophic Risk Institute summarizes news from January in the world of Global Catastrophic Risks. 2. The Union of Concerned Scientists posts their nuclear ..."
The Future of Artificial Intelligence
1 post • created by Seán Ó hÉigeartaigh • Jan. 31, 2015 @ 19:40 GMT
"Seán Ó hÉigeartaigh is the Executive Director of the Centre for the Study of Existential Risk, based at the University of Cambridge. Artificial intelligence leaders in academia and industry, and ..."
Feeding Everyone No Matter What
1 post • created by David Denkenberger • Jan. 31, 2015 @ 19:31 GMT
"Dr David Denkenberger is a research associate at the Global Catastrophic Risk Institute, and is the co-author of Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe, ..."
Elon Musk donates $10M to our research program
1 post • created by Jesse Galef • Jan. 22, 2015 @ 15:30 GMT
"We are delighted to report that Elon Musk has decided to donate $10M to FLI to run a global research program aimed at keeping AI beneficial to humanity. "
AI Conference
1 post • created by Jesse Galef • Jan. 11, 2015 @ 16:14 GMT
"We organized our first conference, The Future of AI: Opportunities and Challenges, Jan 2-5 in Puerto Rico. This conference brought together many of the world's leading AI builders from academia and ..."
AI Leaders Sign Open Letter
1 post • created by Jesse Galef • Jan. 11, 2015 @ 16:05 GMT
"Top AI researchers from industry and academia have signed an open letter arguing that rapid progress in AI is making it timely to research not only how to make AI more capable, but also how to make it robust and beneficial."