Partnership on AI

Safety-Critical AI (SCAI) Senior Advisor

Contract in San Francisco, CA - Remote OK

Remote, Contract, January 2022

The Partnership on AI (PAI) is seeking a Senior Advisor for a forthcoming workstream on norms for responsible deployment of AI products, to (1) inform the research and program agenda for the project and (2) create a roadmap outlining research directions and practical outputs. The successful candidate will work closely with members of the Safety-Critical AI (SCAI) program. The position is designed for well-established researchers or practitioners who have an extensive track record of leading research, policy, or strategy work in AI safety and a strong interest in developing governance strategies to prevent a “race to the bottom” on safety. This is a remote, contract position beginning in January 2022. We are not prescriptive about time commitments for this role (part-time vs. full-time) and instead compensate based on completion of tasks. We welcome applications from people who are primarily employed by other institutions.


The goal of the new workstream on deployment norms is to develop recommendations and resources that improve industry coordination on AI safety, i.e., reducing the risks posed by the deployment of powerful AI. This workstream will consider how to enable access to models, encourage incident sharing, and facilitate a better understanding of risks in order to enhance safety efforts.

As the Senior Advisor for Partnership on AI’s Deployment Norms workstream, you will support the Program Lead for SCAI in developing a research and program agenda and creating a roadmap outlining research directions and practical outputs. The project builds on findings from other ongoing PAI projects, including Publication Norms for Responsible AI, a project examining how novel research can be published in a way that maximizes benefits while mitigating potential harms, and the AI Incidents Database, a central repository of over 1,300 real-world incident reports aimed at improving safety efforts by increasing awareness.

You will be responsible for using your professional and topical expertise in AI safety to develop a research and program agenda and roadmap that leverages inputs and experiences from across PAI’s Partner Network. You will have the opportunity to shape a multistakeholder workstream and its collection of research projects, to develop best practices alongside individuals in civil society, academia, and industry from PAI’s Partner Network, and to inform key decision makers in government and industry.


Responsibilities

Inform Research and Program Agenda

  • Work closely with the Program Lead for SCAI to develop a research and program strategy that identifies practical and applied interdisciplinary research questions, which will provide the foundation for the workstream’s research, writings, convenings, and recommendations
  • Collaborate with the SCAI Program Lead to identify and engage relevant stakeholders (particularly those likely to be overlooked), leveraging PAI’s Partner Network and your own professional network, to inform the deployment norms workstream
  • Review insights, feedback, and other contributions from stakeholders involved in the workstream to help identify tractable research directions

Create a Roadmap for Execution of Research and Program Agenda

  • Apply your professional and topical expertise in AI safety to create a roadmap outlining appropriate outputs for the deployment norms workstream in service of the research questions identified, including multistakeholder convenings, papers, frameworks, and tools that can inform our Partners’ approaches
  • Support the SCAI Program Lead with project scoping, research and program design, methodology, grant writing, and talent recruitment
  • Recommend creative approaches for transforming research findings into practical applications, recommendations, tools, and resources for stakeholders involved in the deployment norms workstream


Timeline

  • Jan - Feb 2022: Project onboarding; inform the research and program agenda
  • March - May 2022: Identify stakeholders and collect insights
  • April - June 2022: Review insights from involved stakeholders to create a roadmap for execution of research and program agenda
  • June 2022: Explore possibility of work continuation


Qualifications

  • Deep familiarity with key stakeholders, tensions, and tradeoffs in conversations related to AI safety, including topical expertise in related public and closed-door debates taking place in government, industry, academia, and civil society
  • Familiarity with other scientific and dual-use fields that have developed norms or protocols on safety for research and products (e.g., synthetic biology, biosecurity, cybersecurity, nuclear security, national security, and the aviation and automobile industries)


  • Extensive professional experience leading successful research, policy, or strategy projects in AI safety. This includes senior researchers who lead or have led labs or research teams, and industry practitioners who lead or have led policy or research projects. We are open to considering senior professionals from different disciplinary backgrounds including, but not limited to, statistics, computer science, international relations, or law
  • Proven track record of leading and executing practical and applied sociotechnical projects
  • Ability to solve problems and manage input from multiple stakeholders, offering potential solutions amid uncertainty and facilitating decision-making
  • Comfort with ambiguity and technology, and enthusiasm for the challenge of working with a multistakeholder organization


To apply, please submit a package providing the following:

  • Resume/CV
  • Cover Letter explaining your interest and how your expertise aligns with what we are looking for
  • A sample project plan that you have developed in the past


Key Dates

  • Application due: December 23, 2021
  • Selection of finalists and interviews: January 7, 2022
  • Contractor selected: January 14, 2022
  • Contract start date: January 2022

We may review applications received after the deadline on a rolling basis until the role is filled.

Research has shown that some potential applicants apply only when they feel they meet close to all of the qualifications. We encourage you to take a leap of faith and submit your application as long as you are passionate about making a real impact on the AI industry. We are very interested in hearing from a diverse pool of candidates.