Q: What’s the Future of Life Institute?
A: A 501(c)(3) non-profit that works to ensure the long-term future of life exists and is as positive as possible. We focus particularly on the benefits and risks of transformative technology.
Q: Who’s Vitalik Buterin?
A: A cryptocurrency pioneer and philanthropic supporter of effective altruism.
Q: What is existential risk?
A: Nick Bostrom has defined it as something that could cause human extinction or permanently and drastically curtail humanity's potential. For example, some AI researchers are concerned that misaligned superintelligence could cause human extinction, as explained in this video.
Q: What do you count as AI existential safety?
A: The opposite of AI existential risk. Before applying, please read how we define AI existential safety research here. In our opinion, there is far too little AI existential safety research, given its potential impact on the future of life. Because far more researchers work on conventional AI safety research (e.g. self-driving car safety), we instruct our reviewers to be vigilant against attempts to shoehorn conventional AI safety proposals into our funding programs. We wish to enable research that would otherwise not get done, so if your research proposal is a strong candidate for government or industry funding, it is probably not a good fit for us.
Q: Are the Buterin Fellowships only for Americans researching in the United States?
A: No, they are open to applicants of any nationality and may be held at universities in any country, as long as the applicant can find a faculty member there who is willing and able to support their AI existential safety research. For a non-US host institution, the fellowship amount will be adjusted to match local conditions.
Q: Can I submit an application in a language other than English?
A: All proposals must be in English, the standard language for AI research papers. To avoid penalizing applicants whose first language isn't English, we encourage reviewers to be accommodating of language differences when reviewing applications.
Q: How and when do I apply?
A: On this website, by the deadline listed for the program in question.
Q: Will FLI pay any overhead to universities who host fellows?
A: Our institutional policy is that FLI will pay up to 15% overhead.
Q: What if I am unable to submit my application electronically?
A: Only applications submitted through the form on our website are accepted. If you encounter problems, please contact FLI immediately.
Q: Why would humanity cause its own destruction?
A: By mistake or miscommunication, which have brought humanity to the brink of catastrophe many times in the past (example, more examples, comic relief), and biotech & AI arguably pose even greater threats.
Q: Isn’t this naïve to think that humanity would abstain from developing destructive technologies?
A: No. Several national bioweapon programs existed around 1970, yet bioweapons are now illegal under international law. Thanks in significant part to Future of Life Award winner Prof. Matthew Meselson, such weapons of mass destruction never entered into widespread use, and biology's main use today is saving lives.