Deadline: 26 November 2024
Applications are now open for the Systemic AI Safety Grants, which fund researchers collaborating with the UK government to advance systemic approaches to AI safety.
Systemic AI safety focuses on safeguarding the societal systems and critical infrastructure into which AI is being deployed, with the aim of making the world more resilient to AI-related risks and enabling AI's benefits.
The AI Safety Institute (AISI), in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, both part of UK Research and Innovation (UKRI), is excited to announce support for impactful research that takes a systemic approach to AI safety. Successful applicants will receive ongoing support, computing resources where needed, and access to a community of AI and sector-specific domain experts.
They are seeking applications focused on a range of safety-related problems: this could involve monitoring and anticipating AI use and misuse in society, or the vulnerabilities such use exposes in particular sectors. They want to see applications that could enhance understanding of how government could intervene where needed, with new infrastructure and technical innovations, to make society more resilient to AI-related risks.
They conceive of systemic AI safety as a very broad field of research and interventions. Some examples of the kinds of research they are interested in are listed below:
- A systems-informed approach to improving trust in authentic digital media, protecting against AI-generated misinformation, and improving democratic deliberation.
- Targeted interventions that protect critical infrastructure, such as energy or healthcare systems, from AI-mediated cyber-attacks.
- Ideas about how to measure or mitigate the potentially destabilising effects of AI-driven transformations of the labour market.
- Ways to measure, model, and mitigate the secondary effects of AI systems that take autonomous actions on digital platforms.
They recognise that future risks from AI remain largely unknown. They are open to a range of plausible assumptions about how AI technologies will develop and be deployed over the next 2 to 5 years. They are excited about work that addresses both ongoing and anticipated risks, as long as it is credible and evidence-based.
Objectives
- The initial goals are:
- To crowdsource ideas from academics, civil society, and entrepreneurs about the challenges AI is likely to pose for societal systems and infrastructure, how to measure and monitor such risks, and how best to intervene to address these challenges and prioritise such interventions.
- To build a community of researchers with expertise in this area and establish a field of research focused on addressing these problems. This grant will support the wider AI safety ecosystem by funding groups whose work they are not yet aware of or have so far underestimated.
- To support closer ways of working between governments and the research community, ensuring that outputs from this research programme (e.g., new technical tools) can be rapidly taken up by governments and others.
Scope
- The scope of the call for the Systemic AI Safety Grants is deliberately broad, encompassing the diverse ways that AI might create new and significant risks to the economy, critical national infrastructure, and specific sectors including education, media, legal services, and health. Research within scope for a Phase 1 award will:
- Study and characterise ways in which societal systems are vulnerable to the challenges posed by frontier AI over the near term (a 2 to 5 year horizon).
- Surface new data or insights needed to understand how best to address these challenges.
- Propose and test interventions that may help build resilience in societal systems.
Benefits
- Access to technical experts in the field of AI safety, including researchers who have previously worked at OpenAI, Google DeepMind, and Cambridge University.
- Access to compute infrastructure to help turn projects and applications into tangible and innovative solutions.
- A supportive community across government and research organisations to promote system-wide interventions in AI safety.
Funding Information
- They are offering a round of seed grants of £200,000 for 12 months, and plan to follow with more substantial awards in future rounds.
Expected Outcomes
- The grant programme's aim is to create technical methods, tools, and knowledge that can support AISI and its partners to adapt and safeguard societal systems that will be affected by the adoption of frontier AI.
- They are looking for research that can complement AISI's work to date, including its efforts on:
- Testing advanced AI systems and informing policymakers about their risks;
- Fostering collaboration across companies, governments, and the wider research community to mitigate risks and advance publicly beneficial research; and
- Strengthening AI development practices and policy globally.
- They aim for the grant competition to:
- Improve their systemic understanding of AI hazards and potential interventions. They are interested both in cross-sectoral understanding of risks and interventions, and in system-specific understanding.
- Facilitate monitoring and anticipation of AI use and misuse in society, and the contextual vulnerabilities such use exposes.
- Allow them to better understand how to intervene where needed, with new infrastructure, regulations, and technical innovations, to make the world more resilient to AI-related hazards.
Eligibility Criteria
- They would like to invite proposals from a broad cross-section of researchers. To be a project lead or co-lead (co-applicant), you must be employed at:
- A UK-based organisation currently registered as eligible to apply for funding from UKRI, or
- Any non-academic organisation eligible to receive subsidies from the UK government that will provide an innovation or research environment of international standing.
- Investigators must be academic employees (lecturer or equivalent) of an eligible organisation and must be resident in the UK, except under the specific conditions set out in the co-investigator section. Fellows holding fellowships aimed primarily at the postdoctoral level are not eligible to apply.
- EPSRC fellows, Royal Society fellows, and Royal Academy of Engineering fellows holding fellowships aimed at later career stages are eligible to apply.
- Holders of fellowships under other schemes should contact the EPSRC to establish eligibility. Eligibility may be granted on a case-by-case basis rather than as blanket eligibility for all fellows under a given scheme.
- You can be an international project co-lead, but you will need to find a project lead who is based in the UK and eligible for EPSRC funding. An international project co-lead is an individual employed by a research organisation overseas who otherwise fits the normal UKRI definition of a project co-lead, collaborating with the grant holder in the management and leadership of the project. They are expected to make a major intellectual contribution to the design and conduct of the project.
- The UK project lead will need to be willing to attend their grantee workshops and to report to them on the progress of the award.
For more information, visit AISI.