Vitalik Buterin PhD Fellowship in AI Existential Safety

The Vitalik Buterin PhD Fellowship in AI Existential Safety is for students starting PhD programs in 2022 who plan to work on AI existential safety research, or for existing PhD students who would not otherwise have funding to work on AI existential safety research. It will fund students for 5 years of their PhD, with extension funding possible. At universities in the US, UK, or Canada, annual funding will cover tuition, fees, and the stipend of the student's PhD program up to $40,000, as well as a fund of $10,000 that can be used for research-related expenses such as travel and computing. At universities not in the US, UK or Canada, the stipend amount will be adjusted to match local conditions. Fellows will also be invited to workshops where they will be able to interact with other researchers in the field.

In addition, applicants who are short-listed for the Fellowship will be reimbursed for application fees for up to 5 PhD programs, and will be invited to an information session about research groups that can serve as good homes for AI existential safety research.

Applications are now open. They close October 29th, 2021, at 11:59:59 pm Eastern Time, and letters of recommendation are due by November 5th, 2021, at 11:59:59 pm Eastern Time. Questions about the fellowship or application process that are not answered on this page should be directed to FLI.

Purpose and eligibility:

The purpose of the fellowship is to fund talented students throughout their PhDs to work on AI existential safety research, as defined on our "AI Existential Safety Research" page. To be eligible, applicants should either be current graduate students or be applying to PhD programs that start in 2022. Funding is conditional on being accepted to a PhD program, working on AI existential safety research, and having an advisor who can confirm to us that they will support the student's work on AI existential safety research. If a student has multiple advisors, this confirmation is required from all of them. There is an exception to this last requirement for first-year graduate students, for whom an "existence proof" suffices: for example, in departments requiring rotations during the first year of a PhD, funding is contingent on only one of the professors making this confirmation. If a student changes advisors, this confirmation is required from the new advisor for the fellowship to continue.

An application from a current graduate student must address in the Research Statement how this fellowship would enable their AI existential safety research, either by letting them continue such research when no other funding is currently available, or by allowing them to switch into this area.

FLI is assembling a global network of faculty interested in advising Buterin Fellows on AI existential safety research, which we hope to post on this website in the coming weeks. Interested candidates are also welcome to reach out so that we can offer help with advisor-matching.

Fellows are expected to participate in annual workshops and other activities that will be organized to help them interact and network with other researchers in the field.

Continued funding is contingent on continued eligibility, demonstrated by submitting a brief (~1 page) progress report by July 1st of each year.

There are no geographic limitations on applicants or host universities. We welcome applicants from a diverse range of backgrounds, and we particularly encourage applications from women and underrepresented minorities.

Application process:

Applicants will submit a curriculum vitae, a research statement, and the names and email addresses of up to three referees, who will be sent a link where they can submit letters of recommendation and answer a brief questionnaire about the applicant. Applicants are encouraged but not required to submit their GRE scores using our DI code: 3234.

The research statement, up to 3 pages long not including references, should outline the applicant's current plans for doing AI existential safety research during their PhD. It should include the applicant's reason for interest in AI existential safety, a technical specification of the proposed research, and a discussion of why it would reduce the existential risk of advanced AI technologies. For current PhD students, it should also detail why no existing funding arrangements allow work on AI existential safety research.

Short-listed applicants will then go through an interview process before fellows are finalized.


Applications close on October 29th, 2021, and letters of recommendation are due by November 5th, 2021. In November 2021, we will inform applicants whether or not they made the short-list, and final decisions will be communicated no later than the end of March 2022.

We are working to build a global community of AI safety researchers who are keen to ensure that AI remains safe and beneficial to humanity. You can see who is already part of the community on our website.