Featured
AI Safety Researcher
£90,000 - £150,000 / year (£7,500/month) - Negotiable
Job Description
Research and develop methods to ensure AI systems are safe, reliable, and aligned with human values. This is a critical role in building trustworthy AI.
Requirements
- PhD in AI Safety, ML, Philosophy, or a related field
- Published research in AI safety/alignment
- Deep understanding of AI risks
- Experience with RLHF and Constitutional AI
- Strong analytical and communication skills
- Passion for responsible AI development
Responsibilities
- Research AI alignment techniques
- Develop safety evaluation frameworks
- Red-team AI systems
- Write safety documentation and policies
- Collaborate with product and policy teams
- Engage with AI safety community
Benefits
- Salary £90,000 - £150,000
- Meaningful impact work
- Research freedom
- Conference attendance
- Premium benefits package
- Sabbatical opportunities
Job Overview
Employment Type
Full Time
Experience Level
Senior
Location
London, England, United Kingdom
Vacancies
2