1,804 Safeguards jobs in the United States

Software Engineer, Safeguards

94199 San Francisco, California Anthropic

Posted 2 days ago


Job Description

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role:

We are looking for software engineers to help build safety and oversight mechanisms for our AI systems. As a software engineer on the Safeguards team, you will work to monitor models, prevent misuse, and ensure user well-being. This role will focus on building systems to detect unwanted model behaviors and prevent disallowed use of models. You will apply your technical skills to uphold our principles of safety, transparency, and oversight while enforcing our terms of service and acceptable use policies.
Responsibilities:
  • Develop monitoring systems to detect unwanted behaviors from our API partners and potentially take automated enforcement actions; surface these in internal dashboards to analysts for manual review
  • Build abuse detection mechanisms and infrastructure
  • Surface abuse patterns to our research teams to harden models at the training stage
  • Build robust and reliable multi-layered defenses for real-time improvement of safety mechanisms that work at scale
  • Analyze user reports of inappropriate content or accounts
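
For illustration, the sketch below shows the kind of layered detect-and-review flow the responsibilities above describe: score an API event, take automated action only on high-confidence signals, and route borderline cases to analysts. Everything here (names, signals, thresholds) is hypothetical and invented for the example, not Anthropic's actual system.

```python
# Hypothetical sketch of a detect-then-review flow; all names, signals, and
# thresholds are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class ApiEvent:
    account_id: str
    requests_last_hour: int
    flagged_content_ratio: float  # fraction of requests flagged by upstream classifiers


@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def enqueue(self, event: ApiEvent, score: float) -> None:
        # Analysts would triage these entries in an internal dashboard.
        self.items.append((event.account_id, score))


def risk_score(event: ApiEvent) -> float:
    """Combine simple signals into a single abuse-risk score in [0, 1]."""
    volume_signal = min(event.requests_last_hour / 10_000, 1.0)
    return 0.4 * volume_signal + 0.6 * event.flagged_content_ratio


def process(event: ApiEvent, queue: ReviewQueue) -> str:
    score = risk_score(event)
    if score >= 0.9:
        return "automated_enforcement"  # e.g., rate-limit or suspend
    if score >= 0.5:
        queue.enqueue(event, score)     # surface for manual review
        return "manual_review"
    return "allow"


if __name__ == "__main__":
    queue = ReviewQueue()
    event = ApiEvent("acct_123", requests_last_hour=8_000, flagged_content_ratio=0.3)
    print(process(event, queue), queue.items)
```
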
You may be a good fit if you:
  • Have a Bachelor's degree in Computer Science or Software Engineering, or comparable experience
  • Have 3-10+ years of experience in a software engineering position, preferably with a focus on integrity, spam, fraud, or abuse detection
  • Are proficient in SQL, Python, and data analysis tools
  • Have strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders
Strong candidates may also:
  • Have experience building trust and safety mechanisms for AI/ML systems, such as fraud detection models or security monitoring tools or the infrastructure to support these systems at scale
  • Have experience with machine learning frameworks like Scikit-Learn, Tensorflow, or Pytorch, and experience building machine learning models
  • Have experience with prompt engineering, jailbreak attacks, and other adversarial inputs
  • Have worked closely with operational teams to build custom internal tooling

Deadline to apply: None. Applications will be reviewed on a rolling basis.

The expected salary range for this position is:

Annual Salary:

$300,000-$405,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact - advancing our long-term goals of steerable, trustworthy AI - rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process

Economic Prosperity Safeguards

Palos Park, Illinois beBeeSafeguarder

Posted today


Job Description

Enhance the nation's economic prosperity by safeguarding its borders.

As a Customs and Border Protection Officer, you will be part of a workforce of more than 60,000 employees that protects America's security and facilitates legitimate trade and travel.

Key Responsibilities:
  • Enforce laws and regulations related to customs, immigration, and agriculture.
  • Ensure the admissibility of individuals for entry into the United States.
  • Prevent the illegal entry of individuals and prohibited goods and the smuggling of contraband.
Eligibility Requirements:
  • Citizenship: You must be a U.S. citizen.
  • Residency: You must have lived in the U.S. for at least three of the last five years.
  • Age Restriction: You must be referred before your 40th birthday (some exceptions apply).
  • Veterans' Preference: Eligible veterans may qualify for an appointment.

Formal training includes a two-week orientation and a 101-day academy at the Federal Law Enforcement Training Centers (FLETC) in Glynco, GA. Spanish training may be required for certain locations.


Safeguards & Non-Proliferation Engineer

98009 North Bend, Washington Bee Talent Solutions

Posted 3 days ago


Job Description

Our client's Molten Chloride Fast Reactor (MCFR) project seeks a motivated engineer to develop tools for inventory calculations and uncertainty analysis for material accounting. Your job will include identifying and quantifying new or unanticipated sources of uncertainty; developing computational tools and processes for propagating uncertainty across complex, dynamic systems; and conducting sensitivity analyses to prioritize efforts to reduce uncertainty.
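
As one illustration of the uncertainty-propagation work described above, the sketch below pushes assumed measurement uncertainties through a toy inventory formula with Monte Carlo sampling and does a crude sensitivity check. The model, quantities, and uncertainty values are hypothetical placeholders, not the project's actual methodology.

```python
# Hypothetical Monte Carlo propagation of measurement uncertainty through a
# toy fissile-inventory calculation; all numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Assumed inputs: salt mass (kg) and fissile concentration (g/kg), each with a
# made-up 1-sigma relative measurement uncertainty.
salt_mass = rng.normal(loc=5_000.0, scale=5_000.0 * 0.01, size=n_samples)
concentration = rng.normal(loc=2.0, scale=2.0 * 0.03, size=n_samples)

# Toy inventory model: fissile inventory in kg.
inventory = salt_mass * concentration / 1_000.0

print(f"inventory = {inventory.mean():.2f} +/- {inventory.std(ddof=1):.2f} kg")

# Crude sensitivity check: which input dominates the output variance?
for name, samples in [("salt_mass", salt_mass), ("concentration", concentration)]:
    corr = np.corrcoef(samples, inventory)[0, 1]
    print(f"{name}: contributes ~{corr**2:.0%} of the variance")
```
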

Responsibilities:

  • Create methodologies and tools for calculating special nuclear material inventories and uncertainties in MCFR fuel salts and waste products under varied deployment, operational, and process conditions.
  • Develop approaches to identify, quantify, and propagate uncertainty from diverse sources of data and computational tools
  • Develop statistical methodologies for detection of inventory losses or anomalies
  • Conduct sensitivity analysis to prioritize sources of uncertainty and inform the selection, design, and testing of instrumentation used for material accounting across reactor systems
  • Collaborate with cross-functional, multi-disciplinary engineering teams to develop material accounting and safeguards approaches for MCFR plants and address technical challenges
  • Facilitate licensing activities for MCFR technology
  • Participate in training programs and workshops to enhance understanding of safeguards and material accounting in domestic and international contexts
  • Publish and present results for peer review to advance broad public acceptance
Key Qualifications and Skills:
  • A B.S. (plus 4-6 years of relevant experience), M.S. (plus 2-3 years), or Ph.D. (plus 1-2 years) in mathematics, data science, engineering, physical sciences, or a related technical field is required.
  • Good working knowledge of nuclear and/or general numerical methods, e.g., partial differential equations, dynamics, deterministic and stochastic particle transport, isotopic depletion, perturbation theory, nuclear data, kinetics.
  • Experience in uncertainty analysis and quantification for complex systems is required.
  • Experience with nuclear engineering software such as ARMI, MCNP, OpenMC, Serpent, SCALE, DIF3D, Attila, GRIFFIN, PROTEUS, MC2, PARTISN, PARCS, APA, CASMO, DRAGON, etc. is highly desired.
  • Strong programming skills in any language are required, with a strong preference for Python, C++, Git, and GitHub experience.
  • Ability to work in a rapidly evolving and iterative design environment.
  • Excellent analytical, problem-solving, and communication skills, with the ability to convey complex technical information effectively to diverse audiences.
  • Proven ability to work collaboratively in cross-functional teams, prioritize tasks, and manage multiple projects simultaneously.
  • The successful candidate will possess a high degree of trust and integrity, communicate openly and display respect and a desire to foster teamwork.
  • Actual position starting level and title will be determined based on assessment of qualifications.
Bonus Qualifications and Skills:
  • Familiarity with domestic and international safeguards, including prior experience working on material control and accounting (MC&A) programs for advanced reactors or bulk material handling fuel cycle facilities
  • Experience with advanced and/or molten salt reactor core design
  • Ability and professionalism to work within published regulatory guidelines for nuclear reactor design, including experience working under an NQA-1 or equivalent quality program

Technical Threat Investigator, Safeguards

94199 San Francisco, California Anthropic

Posted 3 days ago


Job Description

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About The Role:

As a Threat Investigator, you will be conducting investigations around adversarial actors, identifying vulnerabilities, and developing novel detection techniques to identify and mitigate abuse of our products and services. This role requires conducting thorough investigations, creating and implementing processes, tools, and strategies to proactively detect adversarial actors, managing sensitive incidents, and working cross-functionally to enhance our defenses against emerging risks in the rapidly evolving landscape of AI technology. Your work will be essential in maintaining Anthropic's commitment to safe and beneficial AI as we continue to expand our product capabilities.

IMPORTANT CONTEXT ON THIS ROLE: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.

Responsibilities:
  • Analyze the deployment of our products and services to identify how these systems are being misused or abused, with a particular focus on influence operations
  • Develop abuse signals and tracking strategies to proactively detect adversarial actors
  • Study trends internally and in the broader ecosystem to anticipate how systems could be misused or manipulated for harm in the future, generating and publishing reports
  • Create actionable intelligence reports on new attack vectors, vulnerabilities, and threat actor TTPs targeting LLM systems
  • Utilize the results of deep dive investigations to implement systematic changes to our safety approach to mitigate harm
  • Keep abreast of the latest industry risks, vulnerabilities, and issues related to the use of language models and generative AI; identify opportunities for improvement to our policies, controls, and enforcement mechanisms
  • Forecast how abuse actors will leverage new advances in AI technology and inform safety by design strategies
  • Build and maintain relationships with external threat intelligence partners and information sharing communities
  • Work with cross-functional team members to build out our threat intelligence program, establishing processes, tools, and best practices
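
As a purely illustrative example of the kind of SQL/Python signal development this role involves, the sketch below aggregates a made-up usage log into a simple "burstiness" signal and flags accounts above a threshold. The table, columns, and threshold are invented, not Anthropic's data model.

```python
# Hypothetical abuse-signal sketch over an invented usage log.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE usage_log (account_id TEXT, hour INTEGER, requests INTEGER);
    INSERT INTO usage_log VALUES
        ('acct_a', 0, 10), ('acct_a', 1, 12), ('acct_a', 2, 11),
        ('acct_b', 0, 5),  ('acct_b', 1, 900), ('acct_b', 2, 4);
""")

# Flag accounts whose peak hourly volume is far above their average volume.
signal = pd.read_sql_query(
    """
    SELECT account_id,
           MAX(requests) AS peak,
           AVG(requests) AS avg_requests
    FROM usage_log
    GROUP BY account_id
    """,
    conn,
)
signal["burstiness"] = signal["peak"] / signal["avg_requests"]
print(signal[signal["burstiness"] > 2.5])  # hypothetical review threshold
```
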
You may be a good fit if you:
  • Have experience in technical analysis and investigations, including skills in SQL and Python
  • Have experience with large language models and a deep understanding of AI technology
  • Have experience tracking bad actors in the deep and dark web.
  • Have subject matter expertise in abusive user behavior detection, for example influence operations, coordinated inauthentic behavior patterns, and/or cyber threat intelligence
  • Can derive insights from large amounts of data to make key decisions and recommendations
  • Have experience conducting threat actor profiling and utilizing threat intelligence frameworks
  • Have strong project management skills and the ability to build processes from the ground up
  • Possess excellent communication skills to collaborate with cross-functional teams

Deadline to apply: None. Applications will be reviewed on a rolling basis.

The expected salary range for this position is:

Annual Salary:

$230,000-$355,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact - advancing our long-term goals of steerable, trustworthy AI - rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process

ML Infrastructure Engineer, Safeguards

94199 San Francisco, California Anthropic

Posted 3 days ago


Job Description

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

We are seeking a Machine Learning Infrastructure Engineer to join our Safeguards organization, where you'll build and scale the critical infrastructure that powers our AI safety systems. You'll work at the intersection of machine learning, large-scale distributed systems, and AI safety, developing the platforms and tools that enable our safeguards to operate reliably at scale.

As part of the Safeguards team, you'll design and implement ML infrastructure that powers Claude safety. Your work will directly contribute to making AI systems more trustworthy and aligned with human values, ensuring our models operate safely as they become more capable.

Responsibilities:
  • Design and build scalable ML infrastructure to support real-time and batch classifier and safety evaluations across our model ecosystem
  • Build monitoring and observability tools to track model performance, data quality, and system health for safety-critical applications
  • Collaborate with research teams to productionize safety research, translating experimental safety techniques into robust, scalable systems
  • Optimize inference latency and throughput for real-time safety evaluations while maintaining high reliability standards
  • Implement automated testing, deployment, and rollback systems for ML models in production safety applications
  • Partner with Safeguards, Security, and Alignment teams to understand requirements and deliver infrastructure that meets safety and production needs
  • Contribute to the development of internal tools and frameworks that accelerate safety research and deployment
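
As a minimal, hypothetical sketch of the observability side of this work, the example below wraps a stand-in safety classifier with latency tracking; in a real system the metrics would flow to a monitoring stack rather than a Python list, and the classifier would be a served model rather than a toy function.

```python
# Hypothetical latency-monitored safety evaluation wrapper; the classifier and
# metric sink are stand-ins for illustration only.
import statistics
import time
from typing import Callable

latencies_ms: list[float] = []


def dummy_safety_classifier(text: str) -> float:
    """Stand-in classifier: returns a fake risk score."""
    return 0.9 if "bad" in text else 0.1


def evaluate_with_monitoring(text: str, classifier: Callable[[str], float]) -> float:
    start = time.perf_counter()
    try:
        return classifier(text)
    finally:
        # Record latency whether the call succeeds or raises.
        latencies_ms.append((time.perf_counter() - start) * 1_000)


if __name__ == "__main__":
    for prompt in ["hello", "something bad", "another prompt"]:
        print(prompt, "->", evaluate_with_monitoring(prompt, dummy_safety_classifier))
    print(f"p50 latency: {statistics.median(latencies_ms):.3f} ms")
```
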
You may be a good fit if you:
  • Have 5+ years of experience building production ML infrastructure, ideally in safety-critical domains like fraud detection, content moderation, or risk assessment
  • Are proficient in Python and have experience with ML frameworks like PyTorch, TensorFlow, or JAX
  • Have hands-on experience with cloud platforms (AWS, GCP) and container orchestration (Kubernetes)
  • Understand distributed systems principles and have built systems that handle high-throughput, low-latency workloads
  • Have experience with data engineering tools and building robust data pipelines (e.g., Spark, Airflow, streaming systems)
  • Are results-oriented, with a bias towards reliability and impact in safety-critical systems
  • Enjoy collaborating with researchers and translating cutting-edge research into production systems
  • Care deeply about AI safety and the societal impacts of your work
Strong candidates may have experience with:
  • Working with large language models and modern transformer architectures
  • Implementing A/B testing frameworks and experimentation infrastructure for ML systems
  • Developing monitoring and alerting systems for ML model performance and data drift
  • Building automated labeling systems and human-in-the-loop workflows
  • Experience in trust & safety, fraud prevention, or content moderation domains
  • Knowledge of privacy-preserving ML techniques and compliance requirements
  • Contributing to open-source ML infrastructure projects

Deadline to apply: None. Applications will be reviewed on a rolling basis.

The expected salary range for this position is:

Annual Salary:

$320,000-$405,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact - advancing our long-term goals of steerable, trustworthy AI - rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process

Machine Learning Engineer, Safeguards

94199 San Francisco, California Anthropic

Posted 3 days ago


Job Description

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

We are looking for ML engineers to help build safety and oversight mechanisms for our AI systems. As a Safeguards Machine Learning Engineer, you will work to train models which detect harmful behaviors and help ensure user well-being. You will apply your technical skills to uphold our principles of safety, transparency, and oversight while enforcing our terms of service and acceptable use policies.

Responsibilities:
  • Build machine learning models to detect unwanted or anomalous behaviors from users and API partners, and integrate them into our production system
  • Improve our automated detection and enforcement systems as needed
  • Analyze user reports of inappropriate accounts and build machine learning models to detect similar instances proactively
  • Surface abuse patterns to our research teams to harden models at the training stage
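
For illustration only, the sketch below trains a small behavioral classifier on invented account-level features with scikit-learn (one of the frameworks listed below); the features, labels, and model choice are hypothetical stand-ins for the detection models described above.

```python
# Hypothetical behavioral classifier on invented features; for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy per-account features: [requests_per_hour, flagged_content_ratio]
benign = np.column_stack([rng.normal(50, 15, 200), rng.beta(1, 20, 200)])
abusive = np.column_stack([rng.normal(400, 100, 40), rng.beta(5, 5, 40)])
X = np.vstack([benign, abusive])
y = np.array([0] * 200 + [1] * 40)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = LogisticRegression(class_weight="balanced").fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

# In production, scores above a tuned threshold would feed enforcement or
# human review rather than being acted on blindly.
print("risk scores:", clf.predict_proba(X_test[:3])[:, 1].round(2))
```
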
You may be a good fit if you:
  • Have 4+ years of experience in a research/ML engineering or an applied research scientist position, preferably with a focus on AI safety.
  • Have proficiency in Python, LLMs, SQL and data analysis/data mining tools.
  • Have proficiency in building safe AI/ML systems, such as behavioral classifiers or anomaly detection.
  • Have strong communication skills and ability to explain complex technical concepts to non-technical stakeholders.
  • Care about the societal impacts and long-term implications of your work.
Strong candidates may also have experience with:
  • Machine learning frameworks like Scikit-Learn, TensorFlow, or PyTorch
  • High-performance, large-scale ML systems
  • Language modeling with transformers
  • Reinforcement learning
  • Large-scale ETL


The expected salary range for this position is:

Annual Salary:

$340,000-$425,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact - advancing our long-term goals of steerable, trustworthy AI - rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process

Prin Safeguards & Sec Spec

94551 Livermore, California Mission Support and Test Services

Posted today


Job Description

**Job Description**
Mission Support and Test Services, LLC (MSTS) manages and operates the Nevada National Security Site (NNSS) for the U.S. National Nuclear Security Administration (NNSA). Our MISSION is to help ensure the security of the United States and its allies by providing high-hazard experimentation and incident response capabilities through operations, engineering, education, field, and integration services and by acting as environmental stewards to the Site's Cold War legacy. Our VISION is to be the user site of choice for large-scale, high-hazard, national security experimentation, with premier facilities and capabilities below ground, on the ground, and in the air. (See NNSS.gov for our unique capabilities.) Our 2,750+ professional, craft, and support employees are called upon to innovate, collaborate, and deliver on some of the more difficult nuclear security challenges facing the world today.
+ MSTS offers our full-time employees highly competitive salaries and benefits packages including medical, dental, and vision; both a pension and a 401k; paid time off and 96 hours of paid holidays; relocation (if located more than 75 miles from work location); tuition assistance and reimbursement; and more.
+ MSTS is a limited liability company consisting of Honeywell International Inc. (Honeywell), Jacobs Engineering Group Inc. (Jacobs), and HII Nuclear Inc.
**Responsibilities**
The qualified candidate will report to the Livermore Operations (LO) Facility Manager and will perform the duties of the Facility Security Officer (FSO) for the LO Facility located in Livermore, CA. Security areas of responsibility include (but are not limited to): physical security of facilities, protection of government property, lock and key control, controlled and prohibited articles, defining and maintaining facility security areas, access authorizations and security clearances, incoming and outgoing visitor control, security badges, creating, handling, storing, processing, and transporting classified matter, secure communications, security training, awareness and employee briefings, OPSEC, communications security (COMSEC), incidents of security concern, and assisting with the local implementation of the Insider Threat and Counterintelligence (CI) programs.
**Responsibilities**
+ Ensure that the security program at LO conforms and complies with the requirements and regulations in the applicable Department of Energy (DOE) Orders, Mission Support & Test Services (MSTS) Company Directives, and other relevant Nevada National Security Sites (NNSS) procedures.
+ Security areas of responsibility include, but are not limited to: physical security of facilities, protection of government property, lock and key control, controlled and prohibited articles, defining and maintaining facility security areas, access authorizations and security clearances, incoming and outgoing visitor control, security badges, creating, handling, storing, processing, and transporting classified matter, secure communications, security training, awareness and employee briefings, OPSEC, investigating incidents of security concern, and assisting with the local implementation of the Insider Threat and Counterintelligence (CI) programs.
+ Setting up subcontracts and providing oversight of subcontractors providing security services in the areas of alarms, cameras, card readers, locks, and other needed security services.
+ Implements and follows company policies, procedures and directives.
+ Create an environment where employees feel safe to raise issues, empowered to address issues, and supported to resolve issues.
+ Demonstrate environment, safety, health, and quality leadership and consistently enforce environment, safety, health, and quality policies and procedures. Implement applicable environment, safety, health, and quality requirements; emphasize the safety of each employee, and the protection of equipment and property in area of responsibility.
+ Take immediate action to correct reported or observed unacceptable environment, safety, health and quality conditions and/or behaviors.
+ Assure that appropriate procedures, training, equipment, warnings, and tools are provided to employees to permit work to be performed safely.
+ Promote and actively participate in the MSTS safety concept. Support and encourage employee participation in MSTS environment, safety, health, and quality initiatives.
**Qualifications**
+ Bachelor's degree or equivalent training and experience, plus a minimum of 8 years of related and progressively responsible experience.
+ Bachelor's degree in security, business management or field related to the position is preferred.
+ Knowledge of security policies, procedures, and technical terminology associated with security functions.
+ Demonstrates leadership qualities with emphasis on continuous improvement and team building.
+ Skill to develop and analyze information for studies and reports.
+ Able to work independently on safeguards and security program objectives and strategies and formulate strategies for improving operations/processes.
+ Must possess interpersonal communication skills of an influencing and motivating nature to interface effectively with all levels of management, DOE security personnel, and outside agencies.
+ Able to develop and maintain relationships with all levels of employees throughout the company, customers, outside agencies, and various levels of personnel within parent organizations, DOE/HQ, DOE/NNSA, and other contractors including LANL, LLNL, and Sandia as needed to facilitate meeting safeguards and security program objectives while screening and maintaining confidentiality.
+ Able to prioritize and schedule multiple activities in the most efficient manner and meet required deadlines.
+ Able to use software applications needed in the position, including word processing software, spreadsheet software, presentation software, and database software.
+ Working knowledge of LENEL and Milestone applications preferred (not required).
+ Attention to detail and accuracy are required to ensure that policy decisions, procedures, and operations are compliant with MSTS and DOE regulations, procedures, and federal and state laws.
+ Must possess planning/organizing skills and initiative; employ independent judgment; and apply knowledge and experience to ensure that requirements are completed efficiently and on time.
+ Current Q or TS clearance is preferred.
+ The primary work location will be at the Livermore Operations Facility located in Livermore, CA.
+ Flexible work schedule can be negotiated with the manager; employees can work 5/8, 9/80 or 4/10 workweeks.
+ Pre-placement physical examination, which includes a drug screen, is required. MSTS maintains a substance abuse policy that includes random drug testing.
+ Must possess a valid driver's license.
MSTS is required by DOE directive to conduct a pre-employment drug test and background review that includes checks of personal references, credit, law enforcement records, and employment/education verifications. Applicants offered employment with MSTS are also subject to a federal background investigation to meet the requirements for access to classified information or matter if the duties of the position require a DOE security clearance. Substance abuse or illegal drug use, falsification of information, criminal activity, serious misconduct or other indicators of untrustworthiness can cause a clearance to be denied or terminated by DOE, resulting in the inability to perform the duties assigned and subsequent termination of employment. In addition, applicants for employment must be able to obtain and maintain a DOE Q-level security clearance, which requires U.S. citizenship and a minimum age of 18. Reference DOE Order 472.2, "Personnel Security." If you hold more than one citizenship (i.e., of the U.S. and another country), your ability to obtain a security clearance may be impacted.
**Department of Energy Q Clearance** (position will be cleared to this level). Reviews and tests for the absence of any illegal drug as defined in 10 CFR Part 707.4, "Workplace Substance Abuse Programs at DOE Sites," will be conducted. Applicant selected will be subject to a Federal background investigation, required to participate in subsequent reinvestigations, and must meet the eligibility requirements for access to classified matter. Successful completion of a counterintelligence evaluation, which may include a counterintelligence-scope polygraph examination, may also be required. Reference 10 CFR Part 709, "Counterintelligence Evaluation Program."
MSTS is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, disability, veteran status or other characteristics protected by law. MSTS is a background screening, drug-free workplace.
Annual salary range for this position is: **$99,790.57 - $152,180.63.**
Starting salary is determined based on the position market value, the individual candidate education and experience and internal equity.

Sr Safeguards & Security Spec

94551 Livermore, California Mission Support and Test Services

Posted today


Job Description

**Job Description**
Mission Support and Test Services, LLC (MSTS) manages and operates the Nevada National Security Site (NNSS) for the U.S. National Nuclear Security Administration (NNSA). Our MISSION is to help ensure the security of the United States and its allies by providing high-hazard experimentation and incident response capabilities through operations, engineering, education, field, and integration services and by acting as environmental stewards to the Site's Cold War legacy. Our VISION is to be the user site of choice for large-scale, high-hazard, national security experimentation, with premier facilities and capabilities below ground, on the ground, and in the air. (See NNSS.gov for our unique capabilities.) Our 2,750+ professional, craft, and support employees are called upon to innovate, collaborate, and deliver on some of the more difficult nuclear security challenges facing the world today.
+ MSTS offers our full-time employees highly competitive salaries and benefits packages including medical, dental, and vision; both a pension and a 401k; paid time off and 96 hours of paid holidays; relocation (if located more than 75 miles from work location); tuition assistance and reimbursement; and more.
+ MSTS is a limited liability company consisting of Honeywell International Inc. (Honeywell), Jacobs Engineering Group Inc. (Jacobs), and HII Nuclear Inc.
**Responsibilities**
The qualified candidate will report to the Facility Manager and will perform the duties of the Facility Security Officer (FSO) for the Livermore Operations (LO) Facility located in Livermore, CA. Security areas of responsibility include (but are not limited to): physical security of facilities, protection of government property, lock and key control, controlled and prohibited articles, defining and maintaining facility security areas, access authorizations and security clearances, incoming and outgoing visitor control, security badges, creating, handling, storing, processing, and transporting classified matter, secure communications, security training, awareness and employee briefings, OPSEC, communications security (COMSEC), incidents of security concern, and assisting with the local implementation of the Insider Threat and Counterintelligence (CI) programs.
**Key Responsibilities**
+ Ensure that the security program at LO conforms and complies with the requirements and regulations in the applicable Department of Energy (DOE) Orders, Mission Support & Test Services (MSTS) Company Directives, and other relevant Nevada National Security Sites (NNSS) procedures.
+ Security areas of responsibility include, but are not limited to: physical security of facilities, protection of government property, lock and key control, controlled and prohibited articles, defining and maintaining facility security areas, access authorizations and security clearances, incoming and outgoing visitor control, security badges, creating, handling, storing, processing, and transporting classified matter, secure communications, security training, awareness and employee briefings, OPSEC, investigating incidents of security concern, and assisting with the local implementation of the Insider Threat and Counterintelligence (CI) programs.
+ Setting up subcontracts and providing oversight of subcontractors providing security services in the areas of alarms, cameras, card readers, locks, and other needed security services.
+ Implements and follows company policies, procedures and directives.
+ Create an environment where employees feel safe to raise issues, empowered to address issues, and supported to resolve issues.
+ Demonstrate environment, safety, health, and quality leadership and consistently enforce environment, safety, health, and quality policies and procedures. Implement applicable environment, safety, health, and quality requirements; emphasize the safety of each employee, and the protection of equipment and property in area of responsibility.
+ Take immediate action to correct reported or observed unacceptable environment, safety, health and quality conditions and/or behaviors.
+ Assure that appropriate procedures, training, equipment, warnings, and tools are provided to employees to permit work to be performed safely.
+ Promote and actively participate in the MSTS safety concept. Support and encourage employee participation in MSTS environment, safety, health, and quality initiatives.
**Qualifications**
**Due to the nature of our work, US Citizenship is required for all positions.**
+ Bachelor's degree in field related to the position or equivalent training and experience plus a minimum of 5 years of progressive related experience.
+ Bachelor's degree in security, business management or field related to the position is preferred.
+ Knowledge of security policies, procedures, and technical terminology associated with security functions.
+ Demonstrates leadership qualities with emphasis on continuous improvement and team building.
+ Skill to develop and analyze information for studies and reports.
+ Able to work independently on safeguards and security program objectives and strategies and formulate strategies for improving operations/processes.
+ Must possess interpersonal communication skills of an influencing and motivating nature to interface effectively with all levels of management, DOE security personnel, and outside agencies.
+ Able to develop and maintain relationships with all levels of employees throughout the company, customers, outside agencies, and various levels of personnel within parent organizations, DOE/HQ, DOE/NNSA, and other contractors including LANL, LLNL, and Sandia as needed to facilitate meeting safeguards and security program objectives while screening and maintaining confidentiality.
+ Able to prioritize and schedule multiple activities in the most efficient manner and meet required deadlines.
+ Able to use software applications needed in the position, including word processing software, spreadsheet software, presentation software, and database software.
+ Working knowledge of LENEL and Milestone applications preferred (not required).
+ Attention to detail and accuracy are required to ensure that policy decisions, procedures, and operations are compliant with MSTS and DOE regulations, procedures, and federal and state laws.
+ Must possess planning/organizing skills and initiative; employ independent judgment; and apply knowledge and experience to ensure that requirements are completed efficiently and on time.
+ Current Q or TS clearance is preferred.
+ The primary work location will be at the Livermore Operations Facility located in Livermore, CA.
+ Flexible work schedule can be negotiated with the manager; employees can work 5/8, 9/80 or 4/10 workweeks.
+ Pre-placement physical examination, which includes a drug screen, is required. MSTS maintains a substance abuse policy that includes random drug testing.
+ Must possess a valid driver's license.
MSTS is required by DOE directive to conduct a pre-employment drug test and background review that includes checks of personal references, credit, law enforcement records, and employment/education verifications. Applicants offered employment with MSTS are also subject to a federal background investigation to meet the requirements for access to classified information or matter if the duties of the position require a DOE security clearance. Substance abuse or illegal drug use, falsification of information, criminal activity, serious misconduct or other indicators of untrustworthiness can cause a clearance to be denied or terminated by DOE, resulting in the inability to perform the duties assigned and subsequent termination of employment. In addition, applicants for employment must be able to obtain and maintain a DOE Q-level security clearance, which requires U.S. citizenship and a minimum age of 18. Reference DOE Order 472.2, "Personnel Security." If you hold more than one citizenship (i.e., of the U.S. and another country), your ability to obtain a security clearance may be impacted.
**Department of Energy Q Clearance** (position will be cleared to this level). Reviews and tests for the absence of any illegal drug as defined in 10 CFR Part 707.4, "Workplace Substance Abuse Programs at DOE Sites," will be conducted. Applicant selected will be subject to a Federal background investigation, required to participate in subsequent reinvestigations, and must meet the eligibility requirements for access to classified matter. Successful completion of a counterintelligence evaluation, which may include a counterintelligence-scope polygraph examination, may also be required. Reference 10 CFR Part 709, "Counterintelligence Evaluation Program."
MSTS is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, disability, veteran status or other characteristics protected by law. MSTS is a background screening, drug-free workplace.
Annual salary range for this position is: **$78,832.00 - $118,248.00.**
Starting salary is determined based on the position market value, the individual candidate education and experience and internal equity.

ML Infrastructure Engineering Manager, Safeguards

10261 New York, New York Anthropic

Posted today


Job Description

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

Anthropic is seeking an ML Infrastructure Engineering Manager to lead a critical team within our Safeguards organization. You'll manage and grow a team of infrastructure engineers who build and scale the foundational systems that power our AI safety and trust mechanisms. This role combines deep technical leadership in ML infrastructure with people management, driving both the strategic vision and day-to-day execution of systems that ensure our AI models operate safely and reliably at scale.

Your team will be responsible for the infrastructure backbone that enables real-time safety evaluations and systems to make Claude safe. You'll work closely with research teams to translate cutting-edge safety research into production-ready systems, while partnering with Safeguards, Security, and Alignment teams to ensure our infrastructure meets the demanding requirements of safety-critical applications.

Responsibilities:
  • Set team vision and roadmap for ML infrastructure that powers Anthropic's safety and trust systems, ensuring scalability, reliability, and performance at production scale
  • Lead a team of ML infrastructure and software engineers to build robust platforms supporting real-time safety evaluations, feature stores, model serving, and data pipelines
  • Partner with Safeguards, Security, Research, and Product teams to identify infrastructure requirements and translate complex safety research into scalable production systems
  • Drive technical strategy for ML infrastructure architecture, making key decisions about technology choices, system design, and platform evolution
  • Maintain deep technical expertise in ML infrastructure, distributed systems, and safety-critical applications to provide technical leadership and guidance
  • Hire, support, and develop team members through continuous feedback, career coaching, and people management practices
  • Collaborate across teams to ensure infrastructure supports rapid experimentation while maintaining production reliability and safety standards
  • Champion engineering best practices including automated testing, deployment pipelines, monitoring, and incident response for safety-critical systems
You may be a good fit if you:
  • Have 4+ years of management experience leading technical teams focused on ML infrastructure, platform engineering, or distributed systems
  • Have 8+ years of hands-on experience building production ML infrastructure, ideally in safety-critical domains like fraud detection, content moderation, or risk assessment
  • Have demonstrated the ability to lead and manage high-performing technical teams through periods of rapid growth and scaling challenges
  • Possess deep technical knowledge of ML serving platforms, feature stores, data pipelines, and distributed systems architecture
  • Show excellent communication skills in translating complex technical concepts for various audiences, from individual contributors to executive leadership
  • Have strong project management skills with the ability to balance multiple priorities and coordinate across cross-functional teams
  • Have experience managing teams that bridge research and production, with a track record of productionizing experimental systems
Strong candidates may also:
  • Possess knowledge of modern ML frameworks, cloud platforms, and container orchestration in production environments
  • Excel at building strong relationships with research teams and translating their needs into infrastructure requirements
  • Have experience implementing automated testing, deployment, and monitoring systems for ML models in production
  • Demonstrate passion for ensuring the responsible development and deployment of AI systems
  • Have managed teams working on real-time, high-throughput systems with strict latency and reliability requirements
  • Have experience with compliance and security requirements for safety-critical applications

At Anthropic, we value diversity and are committed to creating an inclusive environment for all employees. We encourage applications from candidates of all backgrounds.

Deadline to apply: None. Applications will be reviewed on a rolling basis.

The expected salary range for this position is:

$405,000 - $485,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas. However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact - advancing our long-term goals of steerable, trustworthy AI - rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process

To apply, submit your resume and complete the application through our careers portal.


Technical Threat Investigator, Safeguards (CBRN)

94199 San Francisco, California Anthropic

Posted 2 days ago


Job Description

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About this role

As a Technical Threat Investigator focused on CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosives) risks, with particular emphasis on biodefense, you will join Anthropic's threat intelligence team and be responsible for detecting, investigating, and disrupting the misuse of Anthropic's AI systems for biological threats and harm. This role combines deep technical investigation skills with specialized domain expertise in biodefense to protect against sophisticated threat actors who may attempt to leverage our AI technology for malicious biological applications.

You will work at the intersection of AI safety and biosecurity, conducting thorough investigations into potential misuse cases, developing novel detection techniques, and building robust defenses against emerging biological threats in the rapidly evolving landscape of AI-enabled risks.

Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.
Responsibilities:
  • Detect and investigate CBRN threats: Identify and thoroughly investigate attempts to misuse Anthropic's AI systems for developing, enhancing, or disseminating CBRNE weapons, pathogens, toxins, or other CBRNE threats to harm people, critical infrastructure, or the environment. The primary focus will be on biological and chemical harms.
  • Cross-platform threat analysis: Ground investigations in real threat actor behavior, basing findings off of cross-internet and open-source research, as well as past publicly reported programs.
  • Conduct technical investigations: Utilize SQL, Python, and other technical tools to analyze large datasets, trace user behavior patterns, and uncover sophisticated CBRNE threat actors across our platform.
  • Create actionable intelligence: Develop detailed threat intelligence reports on biological attack vectors, vulnerabilities, and threat actor TTPs leveraging AI systems, with specific focus on biodefense implications.
  • Develop biological-specific detection capabilities: Create abuse signals, tracking strategies, and detection methodologies specifically tailored to identify users attempting dual-use biological misuse, including emerging biothreat vectors and novel attack patterns.
  • Collaborate with policy & enforcement teams: Work closely with policy & enforcement teams to make informed decisions about user violations related to biological threats and ensure appropriate mitigation actions.
  • External stakeholder engagement: Communicate findings with external partners including government agencies, regulatory bodies, scientific organizations, and biosecurity research communities.
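
As one hypothetical illustration of the investigative tooling involved, the sketch below enriches invented account records against a made-up indicator list from an external threat-intelligence partner; the indicator types, values, and account data are all invented for the example.

```python
# Hypothetical indicator-matching sketch; all data is invented for illustration.
shared_indicators = {
    "ip": {"203.0.113.7", "198.51.100.23"},  # documentation-range IPs
    "email_domain": {"example-malicious.test"},
}

accounts = [
    {"account_id": "acct_1", "ip": "203.0.113.7", "email_domain": "example.com"},
    {"account_id": "acct_2", "ip": "192.0.2.10", "email_domain": "example-malicious.test"},
    {"account_id": "acct_3", "ip": "192.0.2.11", "email_domain": "example.org"},
]


def match_indicators(account: dict) -> list[str]:
    """Return which indicator types this account matches."""
    return [
        kind
        for kind, values in shared_indicators.items()
        if account.get(kind) in values
    ]


for account in accounts:
    hits = match_indicators(account)
    if hits:
        # Matches would feed an investigation queue and an intelligence report.
        print(account["account_id"], "matched:", hits)
```
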
You may be a good fit if you:
  • Have biological and chemical domain expertise: deep knowledge of biosecurity, biological weapons non-proliferation, dual-use research of concern (DURC), biodefense, synthetic biology, or related biological threat domains
  • Have strong technical investigation skills: demonstrated experience in technical analysis and investigations, with proficiency in SQL, Python, and data analysis tools for threat detection and user behavior analysis
  • Have a threat intelligence or targeting background: experience in threat actor profiling, utilizing threat intelligence frameworks, and conducting adversarial analysis, particularly in biosecurity or related domains
  • Have experience with AI systems: hands-on experience with large language models and a deep understanding of how AI technology could potentially be misused for biological threats
  • Collaborate well cross-functionally: excellent stakeholder management skills and the ability to work effectively with diverse teams including researchers, policy experts, legal teams, and external partners
  • Communicate clearly: the ability to present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership
Preferred qualifications:
  • Advanced degree (MS or PhD) in biological sciences, biodefense, biosecurity, or related field, or equivalent professional experience
  • Real world experience countering weapons of mass destruction, CBRNE, or other high risk dangerous asymmetric threats
  • 3+ years of experience in biosecurity threat analysis, biological defense, or related investigative roles
  • Comfortable with SQL and Python
  • Experience working with government agencies or in regulated environments dealing with sensitive biological information
  • Background in AI safety, machine learning security, or technology abuse investigation
  • Familiarity with synthetic biology, biotechnology, or dual-use biological research
  • Experience building and scaling threat detection systems or abuse monitoring programs


The expected salary range for this position is:

Annual Salary:

$230,000-$355,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact - advancing our long-term goals of steerable, trustworthy AI - rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process