AI Safety Researcher
£90,000 - £150,000 / year (£7,500 - £12,500 / month) - Negotiable
Job Description
Research and develop methods to ensure AI systems are safe, reliable, and aligned with human values. Critical role in building trustworthy AI.
Requirements
- PhD in AI Safety, ML, Philosophy, or a related field
- Published research in AI safety/alignment
- Deep understanding of AI risks
- Experience with RLHF, constitutional AI
- Strong analytical and communication skills
- Passion for responsible AI development
Responsibilities
- Research AI alignment techniques
- Develop safety evaluation frameworks
- Red-team AI systems
- Write safety documentation and policies
- Collaborate with product and policy teams
- Engage with AI safety community
Benefits
- Salary £90,000 - £150,000
- Meaningful impact work
- Research freedom
- Conference attendance
- Premium benefits package
- Sabbatical opportunities
Job Overview
Employment Type
Full-time
Experience Level
Senior
Location
London, England, United Kingdom
Openings
2