RECENT BLOG POSTS
Rob Bensinger: wrote MIRI News: October 2015
Max Tegmark: wrote AI safety at the United Nations
Max Tegmark: wrote Hawking Reddit AMA on AI
Viktoriya Krakovna: wrote Happy Petrov Day!
Max Tegmark: wrote We're hiring!
James Hogan: wrote Future of Life Institute Summer 2015 Newsletter
Rob Bensinger: wrote MIRI News: September 2015
RECENT COMMENTS
Stuart Armstrong: commented on Ackerman defends AI weapons: From the comment I posted there (note that I am an outlier in the FHI in ...
Max Tegmark: commented on Article: Research Challenges for Safe AI Systems: Thanks Jacob for doing this!

FLI BLOG
December 15, 2019
Ackerman defends AI weapons
Max Tegmark (FLI Administrator) wrote on Aug. 3, 2015 @ 02:42 GMT
Evan Ackerman, a contributing editor for IEEE Spectrum, has an interesting piece panning our open letter, writing: "Who would seriously be for 'killer robots'? I am." I just wrote a response together with Stuart Russell and Toby Walsh. Although I disagree with Ackerman's arguments, I'm grateful that he published them so that we can get them discussed and analyzed out in the open. Please join the discussion in the comment field here or at the IEEE Spectrum site!
Stuart Armstrong (Member) wrote on Aug. 10, 2015 @ 14:33 GMT
From the comment I posted there (note that I am an outlier in the FHI in not being as worried about autonomous weapons):
I do feel both articles are missing the other's strongest points. Ackerman is not arguing that bans never work, but that this specific ban will never work, because so many dual-use civilian products will be easily modifiable into weapons.
Conversely, Ackerman focuses on scenarios which are basically traditional states doing traditional warfare, but with robots instead of soldiers, missing the problem of these weapons becoming generally available and devastating in the hands of small groups and individuals.
So I think the real issue is to prevent small groups having the ability to cause great destruction through these tools. I can see a few ways of doing this - countermeasures (software or hardware), mass surveillance, banning at the local level, etc... The big question is whether an arms race by the big militaries is more or less likely to result in small groups with this power. This depends a lot on how effective we expect countermeasures to be. Will centralised anti-drone surveillance networks take down any rogue drones/quadcopters/robots, or will they proliferate uncontrollably, with attacks essentially unpreventable? It seems that there are at least some approaches - say, prioritising surveillance robots searching for other robots, over armed robots, and prioritising weapons that can destroy other electronic systems over those that can kill humans - that should help in any case.