AI Safety Researcher
£90,000 - £150,000 per year (£7,500 - £12,500 per month) - Negotiable
Job Description
Research and develop methods to ensure AI systems are safe, reliable, and aligned with human values. This is a critical role in building trustworthy AI.
Requirements
- PhD in AI Safety, ML, Philosophy, or a related field
- Published research in AI safety/alignment
- Deep understanding of AI risks
- Experience with RLHF and constitutional AI
- Strong analytical and communication skills
- Passion for responsible AI development
Responsibilities
- Research AI alignment techniques
- Develop safety evaluation frameworks
- Red-team AI systems
- Write safety documentation and policies
- Collaborate with product and policy teams
- Engage with AI safety community
Benefits
- Salary £90,000 - £150,000
- Meaningful impact work
- Research freedom
- Conference attendance
- Premium benefits package
- Sabbatical opportunities
Job Overview
Employment Type
Full-time
Experience Level
Senior
Location
London, England, United Kingdom
Vacancies
2
Apply Now