
Principal GRC Engineer at AI Security Institute

AI Security Institute · London, United Kingdom · Onsite

£65,000 - £145,000

Apply now

About the AI Security Institute

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10, and we work with frontier developers and governments globally.  

We’re here because governments are critical for advanced AI going well, and AISI is uniquely positioned to mobilize them. With our resources and the UK government's unique agility and international influence, this is the best place to shape both AI development and government action.  

About the Team:

Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product: secure-by-design platforms, automated governance, and intelligence-led detection that protect our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping, proportionate controls, low ego, and high ownership.

What you might work on:

•    Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
•    Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility)
•    Support strengthened identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale
•    Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
•    Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
•    Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
•    Assess third party services and hardware/software supply chains; introduce lightweight controls that raise the bar
•    Contribute to open standards and open source, and share lessons with the broader community where appropriate

If you want to build security that accelerates frontier-scale AI safety research, and see your work land in production quickly, this is a good place to do it.


Role Summary
Own and operationalise AISI's governance, risk, and compliance (GRC) engineering practice. This role sits at the intersection of security engineering, assurance, and policy, turning paper-based requirements into actionable, testable, and automatable controls. You will lead the technical response to GovAssure and other regulatory requirements, and ensure compliance is continuous and evidence-driven. You will also extend GRC disciplines to frontier AI systems, integrating model lifecycle artefacts, evaluations, and release gates into the control and evidence pipeline.

Responsibilities:

  • Translate regulatory frameworks (e.g. GovAssure, CAF) into programmatic controls and technical artefacts
  • Build and maintain a continuous control validation and evidence pipeline
  • Develop and own a capability-based risk management approach aligned to AISI's delivery model
  • Maintain the AISI risk register and risk acceptance/exception handling process
  • Act as the key interface for DSIT governance, policy, and assurance stakeholders
  • Work cross-functionally to ensure risk and compliance are embedded into AISI delivery lifecycles
  • Extend controls and evidence to the frontier AI model lifecycle
  • Integrate AI safety evidence (e.g., model/dataset documentation, evaluations, red-team results, release gates) into automated compliance workflows
  • Define and implement controls for model weights handling, compute governance, third-party model/API usage, and model misuse/abuse monitoring
  • Support readiness for AI governance standards and regulations (e.g., NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894; EU AI Act exposure where relevant)
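To make the compliance-as-code idea behind several of these responsibilities concrete, here is a minimal sketch of a machine-checkable control that emits timestamped evidence. The control ID, field names, and resource inventory are all hypothetical, invented for illustration; they do not reflect AISI's actual schema or tooling.

```python
"""Sketch: a control expressed as data, validated automatically, emitting evidence."""
import datetime
import json

# A control as a structured artefact rather than prose: what to check,
# and which resource field a passing check depends on. (Hypothetical ID.)
CONTROL = {
    "id": "AC-HYP-001",
    "description": "All storage buckets must block public access",
    "required_field": "public_access_blocked",
}

def validate_control(control: dict, resources: list[dict]) -> dict:
    """Evaluate one control against resource data and return an evidence record."""
    failures = [r["name"] for r in resources
                if not r.get(control["required_field"], False)]
    return {
        "control_id": control["id"],
        "passed": not failures,
        "failing_resources": failures,
        "checked_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Fabricated inventory standing in for a cloud provider API response.
    resources = [
        {"name": "eval-results", "public_access_blocked": True},
        {"name": "scratch-data", "public_access_blocked": False},
    ]
    print(json.dumps(validate_control(CONTROL, resources), indent=2))
```

Run on a schedule or in CI, checks like this replace point-in-time audit screenshots with continuously generated, queryable evidence.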

Profile requirements:

  • Staff or Principal-level engineer or technical GRC specialist
  • Experience in compliance-as-code, control validation, or regulated cloud environments
  • Familiar with YAML, GitOps, structured artefacts, and automated policy checks
  • Equally confident in engineering meetings and policy/gov forums
  • Practical understanding of frontier AI system risks and artefacts (e.g., model evaluations, red-teaming, model/dataset documentation, release gating, weights handling) sufficient to translate AI policy into controls and machine-checkable evidence
  • Desirable: familiarity with MLOps tooling (e.g., experiment tracking, model registries) and integrating ML artefacts into CI/CD or evidence pipelines
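As an illustration of the "automated policy checks over structured artefacts" skill set mentioned above, here is a minimal sketch of a release-gate check such as might run in a GitOps/CI workflow. The artefact layout and required evidence keys are hypothetical, invented purely for this example.

```python
"""Sketch: an automated policy check gating a model release artefact."""
import json

# Evidence a (hypothetical) release artefact must carry before the gate passes.
REQUIRED_EVIDENCE = {"model_card", "eval_results", "red_team_signoff"}

def check_release_artifact(artifact: dict) -> list[str]:
    """Return a list of policy violations; an empty list means the gate passes."""
    missing = sorted(REQUIRED_EVIDENCE - artifact.keys())
    violations = [f"missing evidence: {name}" for name in missing]
    evals = artifact.get("eval_results")
    if evals is not None and evals.get("status") != "passed":
        violations.append("eval_results present but status is not 'passed'")
    return violations

if __name__ == "__main__":
    # Fabricated artefact standing in for a file committed alongside a release PR.
    artifact = json.loads('{"model_card": {}, "eval_results": {"status": "failed"}}')
    for problem in check_release_artifact(artifact):
        print(problem)
```

In a GitOps setup, a check like this would block the merge (or release pipeline stage) whenever it returns violations, making the policy itself reviewable code.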

Key Competencies

  • Translating policy into technical controls
  • Designing controls as code or machine-checkable evidence
  • Familiarity with frameworks (GovAssure, CAF, NIST) and AI governance standards (NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894)
  • Experience building risk management workflows, including for AI-specific risks (model misuse, capability escalation, data/weights security)
  • Stakeholder engagement with governance teams and AI/ML engineering teams


Salary & Benefits

We are hiring individuals across a range of seniority and experience levels within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; salaries comprise a base salary and a technical allowance, plus additional benefits as detailed on this page.

  • Level 3 - Total Package £65,000 - £75,000 inclusive of a base salary £35,720 plus additional technical talent allowance of between £29,280 - £39,280
  • Level 4 - Total Package £85,000 - £95,000 inclusive of a base salary £42,495 plus additional technical talent allowance of between £42,505 - £52,505
  • Level 5 - Total Package £105,000 - £115,000 inclusive of a base salary £55,805 plus additional technical talent allowance of between £49,195 - £59,195
  • Level 6 - Total Package £125,000 - £135,000 inclusive of a base salary £68,770 plus additional technical talent allowance of between £56,230 - £66,230
  • Level 7 - Total Package £145,000 inclusive of a base salary £68,770 plus additional technical talent allowance of £76,230

This role sits outside of the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.

Government Digital and Data Profession Capability Framework

There are a range of pension options available which can be found through the Civil Service website. 

 


Additional Information

Internal Fraud Database 

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. Participating government organisations provide these details to the Cabinet Office, and the civil servants concerned are banned from further employment in the Civil Service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the Civil Service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.

Security

Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.

 

Nationality requirements

We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).

Working for the Civil Service

The Civil Service Code (opens in a new window) sets out the standards of behaviour expected of civil servants. We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles (opens in a new window). The Civil Service embraces diversity and promotes equal opportunities. As such, we run a Disability Confident Scheme (DCS) for candidates with disabilities who meet the minimum selection criteria. The Civil Service also offers a Redeployment Interview Scheme to civil servants who are at risk of redundancy, and who meet the minimum requirements for the advertised vacancy.

Diversity and Inclusion

The Civil Service is committed to attracting, retaining and investing in talent wherever it is found. To learn more, please see the Civil Service People Plan (opens in a new window) and the Civil Service Diversity and Inclusion Strategy (opens in a new window).