7,624 AI Safety Researcher jobs in the United States
AI Ethics and Safety Researcher
Posted today
Job Viewed
Job Description
- Conducting cutting-edge research on AI ethics, safety, and responsible AI development.
- Developing frameworks, guidelines, and best practices for ethical AI implementation.
- Analyzing AI systems for potential biases, fairness issues, and safety risks.
- Evaluating AI models and algorithms for transparency and explainability.
- Collaborating with engineering teams to integrate ethical considerations into AI development lifecycles.
- Staying abreast of emerging trends and challenges in AI ethics and safety.
- Publishing research findings in academic journals and presenting at conferences.
- Contributing to policy recommendations and industry standards for AI.
- Advising on the societal impact and potential risks of AI technologies.
- Engaging with stakeholders to foster dialogue on AI ethics.
Qualifications: PhD or Master's degree in Computer Science, Artificial Intelligence, Philosophy, Ethics, or a closely related field with a specialization in AI ethics/safety. Demonstrated research experience and publications in AI ethics, machine learning fairness, or AI safety. Strong understanding of AI and machine learning concepts. Familiarity with ethical theories and frameworks. Excellent analytical, critical thinking, and problem-solving skills. Strong written and verbal communication skills, with the ability to articulate complex technical and ethical concepts. Experience working with AI development teams is a significant plus.
This is a unique opportunity to shape the future of artificial intelligence and ensure its development benefits humanity, contributing to groundbreaking work for our client.
Lead AI Ethics and Safety Researcher
Posted today
Job Viewed
Job Description
Responsibilities will include:
- Developing and implementing comprehensive AI ethics frameworks and safety standards.
- Conducting in-depth research on potential risks and societal impacts of AI.
- Designing and overseeing rigorous testing procedures to ensure AI safety and alignment with human values.
- Collaborating with cross-functional teams, including AI engineers, product managers, and legal experts, to integrate ethical considerations into the AI development lifecycle.
- Publishing research findings, presenting at conferences, and contributing to the broader AI ethics community.
- Mentoring junior researchers and fostering a culture of responsible innovation.
- Staying abreast of the latest advancements in AI, machine learning, and ethical AI research.
- Identifying and mitigating biases in AI algorithms and datasets.
- Ensuring compliance with emerging AI regulations and standards.
The ideal candidate will possess a Ph.D. or Master's degree in Computer Science, Philosophy, Ethics, Cognitive Science, or a related field, with a strong specialization in AI ethics or safety. Proven experience in AI research, particularly in areas such as fairness, accountability, transparency, and explainability (FATE), is essential. Exceptional analytical and critical thinking skills are required, along with the ability to translate complex technical and ethical concepts into clear, actionable strategies. Experience in leading research projects and mentoring teams is highly preferred. Strong communication and collaboration skills are vital for success in this remote-first environment, where effective virtual teamwork is paramount. We are looking for someone deeply passionate about ensuring AI benefits humanity.
Lead AI Ethics and Safety Researcher
Posted today
Job Viewed
Job Description
Qualifications:
- Ph.D. in Computer Science, Artificial Intelligence, Philosophy, or a related field with a strong focus on AI ethics, fairness, transparency, or safety.
- Minimum of 6 years of research experience in AI ethics, safety, or responsible AI development.
- Demonstrated experience in leading research teams and projects.
- Strong publication record in reputable AI/ML conferences and journals.
- Deep understanding of machine learning algorithms, AI system design, and their societal implications.
- Expertise in developing and applying frameworks for AI fairness, bias detection, and interpretability.
- Excellent analytical, critical thinking, and problem-solving skills.
- Exceptional communication and interpersonal skills, with the ability to effectively present complex research to both technical and non-technical audiences.
- Experience in shaping AI policy and best practices.
Lead AI Ethics and Safety Researcher
Posted today
Job Viewed
Job Description
Key Responsibilities:
- Develop, implement, and refine ethical guidelines and safety standards for AI systems.
- Conduct rigorous research on AI risks, including bias, fairness, accountability, transparency, and robustness.
- Design and oversee the implementation of safety mechanisms and guardrails for AI models.
- Collaborate with AI/ML engineers and data scientists to integrate ethical considerations throughout the development lifecycle.
- Publish research findings in leading AI and ethics conferences and journals.
- Engage with external stakeholders, including policymakers and industry experts, to contribute to the broader discourse on AI governance.
- Lead and mentor a team of AI ethics and safety specialists.
- Develop training programs on AI ethics and safety for internal teams.
- Perform risk assessments and develop mitigation strategies for AI projects.
- Contribute to the company's vision for responsible AI innovation.
Qualifications:
- Ph.D. or Master's degree in Computer Science, AI, Machine Learning, Philosophy, Ethics, Law, or a related interdisciplinary field.
- 5+ years of experience in AI research with a specific focus on AI ethics, safety, fairness, or accountability.
- Demonstrated experience in leading research projects and teams.
- Deep understanding of various AI/ML algorithms and their societal implications.
- Familiarity with AI governance frameworks and relevant regulations.
- Exceptional analytical, critical thinking, and problem-solving skills.
- Strong publication record in top-tier venues is highly desirable.
- Excellent communication and interpersonal skills to effectively convey complex concepts to diverse audiences.
Senior AI Ethics and Safety Researcher
Posted today
Job Viewed
Job Description
The successful candidate will conduct in-depth research on a wide range of AI safety and ethics topics, including bias detection and mitigation in machine learning models, fairness, transparency, accountability, and the societal impact of advanced AI systems. You will collaborate closely with AI engineers, product managers, and legal teams to integrate ethical considerations into the entire AI development lifecycle. Responsibilities include authoring research papers, developing practical tools and frameworks for AI safety, and contributing to public discourse on AI ethics through presentations and publications. You will also play a key role in advising leadership on critical ethical dilemmas and policy development.
A Ph.D. or Master's degree in Computer Science, Artificial Intelligence, Philosophy, Ethics, or a related quantitative field is required, with a strong focus on AI ethics or safety. Demonstrated experience in conducting advanced research in AI safety, machine learning fairness, or algorithmic bias is essential. Proficiency in programming languages commonly used in AI development (e.g., Python) and familiarity with machine learning frameworks (e.g., TensorFlow, PyTorch) are highly desirable. Excellent analytical, problem-solving, and critical thinking skills are a must, as are strong written and verbal communication abilities. Experience with public policy or regulatory frameworks related to AI is a plus. This is an unparalleled opportunity to influence the responsible advancement of artificial intelligence within a pioneering organization based in San Jose, California, US, in the heart of Silicon Valley.
Senior AI Ethics and Safety Researcher
Posted today
Job Viewed
Job Description
Key Responsibilities:
- Conduct cutting-edge research in AI ethics, safety, fairness, and accountability.
- Develop and implement methodologies for evaluating and mitigating bias in AI models.
- Design and propose frameworks for AI safety and robustness across the development lifecycle.
- Collaborate with AI researchers, engineers, and product managers to integrate ethical considerations into AI systems.
- Publish research findings in top-tier AI conferences and journals.
- Contribute to policy recommendations and industry standards for responsible AI.
- Mentor junior researchers and contribute to the growth of the AI ethics community within the organization.
- Engage with external stakeholders, including academic institutions and regulatory bodies.
- Stay at the forefront of advancements in machine learning, deep learning, and AI ethics.
Qualifications:
- Ph.D. in Computer Science, Artificial Intelligence, Statistics, Philosophy, or a related field with a strong focus on AI ethics or safety.
- 5+ years of research experience in AI ethics, fairness, accountability, transparency, or safety.
- Proven track record of publications in leading AI conferences (e.g., NeurIPS, ICML, AAAI, FAT*) or relevant journals.
- Deep understanding of machine learning algorithms, deep learning architectures, and their potential societal impacts.
- Experience with AI fairness toolkits (e.g., AIF360, Fairlearn) and robustness testing methods (see the sketch after this list).
- Strong programming skills in Python and relevant ML libraries (e.g., TensorFlow, PyTorch).
- Excellent analytical, communication, and presentation skills.
- Ability to translate complex technical concepts into actionable recommendations.
- Passion for ensuring the safe and beneficial development of AI technologies.
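As a brief illustration of the fairness-toolkit experience listed above, the following sketch uses Fairlearn (one of the toolkits named in the qualifications) to disaggregate a metric by group and compute a demographic parity difference. The labels, predictions, and sensitive attribute are synthetic placeholders introduced for illustration only; a real bias audit would use a model's actual outputs and documented sensitive features.

```python
# Minimal, illustrative bias-audit sketch with Fairlearn.
# All data below is synthetic and hypothetical; it only shows the shape of the workflow.
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=0)
y_true = rng.integers(0, 2, size=1_000)      # ground-truth labels (synthetic)
y_pred = rng.integers(0, 2, size=1_000)      # model predictions (synthetic)
group = rng.choice(["A", "B"], size=1_000)   # a sensitive attribute (hypothetical)

# Accuracy disaggregated by group: surfaces performance gaps across the attribute.
acc_frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(acc_frame.by_group)

# Demographic parity difference: gap in positive-prediction rates between groups
# (0.0 means parity; larger values indicate greater disparity).
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```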
Lead AI Ethics and Safety Researcher
Posted today
Job Viewed
Job Description
The ideal candidate will possess a Ph.D. or Master's degree in Computer Science, Artificial Intelligence, Philosophy, Ethics, or a closely related field, with a significant emphasis on AI ethics or safety. A minimum of 8 years of experience in AI research or development, with at least 3 years focused specifically on AI ethics and safety, is required. You must have a proven track record of publishing in top-tier AI conferences and journals, and a deep understanding of current AI safety challenges and proposed solutions. Experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and data analysis tools is essential. Excellent analytical, problem-solving, and communication skills are necessary to articulate complex ethical concepts and collaborate with cross-functional teams to ensure responsible AI development. This role is based in Atlanta, Georgia, US.
Senior AI Ethics and Safety Researcher
Posted today
Job Viewed
Job Description
The ideal candidate will possess a strong academic background in fields such as Computer Science, Philosophy, Law, or a related discipline, coupled with extensive practical experience in AI research or policy. Your responsibilities will include conducting in-depth research on AI bias, fairness, transparency, and accountability. You will develop and implement frameworks for ethical AI deployment, hazard analysis, and risk mitigation strategies for complex AI models, including large language models and generative AI. Collaboration will be key, as you will work closely with AI engineers, product managers, and legal experts to integrate ethical considerations seamlessly into the AI development lifecycle. Furthermore, you will contribute to thought leadership by publishing research papers, presenting at industry conferences, and engaging with regulatory bodies.
We are looking for individuals who demonstrate exceptional analytical skills, a critical thinking approach, and the ability to articulate complex technical and philosophical concepts clearly. Proficiency in programming languages commonly used in AI (e.g., Python) is advantageous. The ability to translate theoretical ethical principles into practical engineering guidelines is essential. This role offers a unique opportunity to make a significant impact on the ethical trajectory of AI. If you are passionate about building AI that benefits humanity and are ready to tackle some of the most challenging questions in the field, we encourage you to apply.
Responsibilities:
- Conduct rigorous research on AI ethics, safety, and societal impact.
- Develop and refine ethical guidelines and safety frameworks for AI systems.
- Analyze AI models for bias, fairness, and potential risks.
- Collaborate with engineering teams to implement safety and ethical measures.
- Stay abreast of emerging trends and regulations in AI.
- Communicate research findings to technical and non-technical audiences.
- Contribute to the company's AI governance strategy.
Qualifications:
- Master's or Ph.D. in Computer Science, Philosophy, Law, Ethics, or a related field.
- 5+ years of experience in AI ethics, safety research, or AI policy.
- Proven track record of research and publication in relevant areas.
- Deep understanding of AI concepts and machine learning principles.
- Familiarity with ethical AI tools and methodologies.
- Excellent written and verbal communication skills.