Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety
The Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety is designed to support promising researchers for postdoctoral appointments, starting in the fall semester of 2022, to work on AI existential safety research. Funding is for three years, subject to annual renewal based on satisfactory progress reports. For host institutions in the US, UK, or Canada, the Fellowship includes an annual $80,000 stipend and a fund of up to $10,000 for research-related expenses such as travel and computing. At universities outside the US, UK, or Canada, the fellowship amount will be adjusted to match local conditions.
Applications close November 5th, 2021. Questions about the fellowship or application process not answered on this page should be directed to firstname.lastname@example.org.
Purpose and eligibility:
The purpose of the fellowship is to fund talented postdoctoral researchers to work on AI existential safety, as defined on our "AI Existential Safety Research" page. To be eligible, applicants must secure:
- A mentor (normally a professor) at the host institution (normally a university) who commits in writing to mentor and support the applicant in their AI existential safety research if a Fellowship is awarded. This includes ensuring that the applicant has access to office space and is welcomed and integrated into the local research community.
- Fellows are expected to participate in annual workshops and other activities that will be organized to help them interact and network with other researchers in the field.
FLI is assembling a global network of faculty interested in hosting Buterin Fellows for AI existential safety research, which we hope to post on this website in the coming month. Interested candidates are also welcome to reach out to email@example.com so that we can help with mentor-matching.
Applicants will submit:
- A detailed, up-to-date CV, including education and employment history and a full publication list.
- A research statement up to 3 pages long (not including references).
- A summary of previous and current research up to 2 pages long.
- The names and email addresses of three referees, who will be sent a link where they can submit letters of recommendation and fill out a brief questionnaire about the applicant.
- The proposed host institution, and the name and email address of the proposed mentor (whose agreement must have been secured beforehand).
The research statement should include the applicant's reason for interest in AI existential safety, a technical specification of the proposed research, and a discussion of why it would reduce the existential risk of advanced AI technologies or otherwise meet our eligibility criteria.
The proposed mentor will be asked to submit a letter confirming that they will supervise the applicant's work on AI existential safety research as described above, and that the applicant will be employed by the host institution if the Fellowship is offered.
There are no geographic limitations on applicants or host universities. We welcome applicants from a diverse range of backgrounds, and we particularly encourage applications from women and underrepresented minorities.
Our application portal at grants.futureoflife.org is now open. The application deadline is November 5th, 2021. After an initial round of deliberation, some applicants may be interviewed before decisions are finalized. Any offers will be made no later than the end of March 2022.
We are working to build a global community of AI safety researchers who are keen to ensure that AI remains safe and beneficial to humanity. You can see who is already part of the community on our website.