12 GPU jobs in the United States

Program Manager GPUs

94199 San Francisco, California FluidStack

Posted 1 day ago

Job Description

New Product Introduction Program Manager

Fluidstack is the AI Cloud Platform. We build GPU supercomputers for top AI labs, governments, and enterprises. Our customers include Mistral, Poolside, Black Forest Labs, Meta, and more.

Our team is small, highly motivated, and focused on providing a world class supercomputing experience. We put our customers first in everything we do, working hard to not just win the sale, but to win repeated business and customer referrals.

We hold ourselves and each other to high standards. We expect you to care deeply about the work you do, the products you build, and the experience our customers have in every interaction with us.

You must work hard, take ownership from inception to delivery, and approach every problem with an open mind and a positive attitude. We value effectiveness, competence, and a growth mindset.

About the Role

Fluidstack's data-center footprint is expanding fast, bringing ever-greater technical complexity and a steady stream of new hardware. We need a seasoned New Product Introduction (NPI) Program Manager to be the single point of ownership for all Supply-Chain activity, from definition through factory ramp, so every AI, compute, storage, and network platform launches on schedule and at quality. You will coordinate internal engineering, TPM, and operations teams alongside global manufacturing partners, untangling ambiguity, exposing critical paths, clearing blockers, and managing risk end-to-end.

Focus
  • Full-cycle NPI execution for AI compute, storage, and networking hardware: schedule, build/test readiness, factory and material readiness, risk tracking, and program communications.

  • Primary bridge between Supply Chain and every internal/external stakeholder on the program.

  • Daily build issue resolution and debug cadence across Fluidstack and partner engineering/manufacturing teams.

  • Manufacturing, test, and material readiness (infrastructure, processes, and tooling).

  • Risk identification → mitigation → communication across the entire Supply Chain org.

  • Program-level communications (status, metrics, and escalations) to leadership and partners.

  • Factory ramp preparedness with engineering and manufacturing allies.

  • Continuous improvement: surface operational pain points and drive cross-functional fixes.

About You
  • Bachelor's degree in a relevant field - or equivalent hands-on experience.

  • Deep NPI lifecycle expertise: kickoff through ramp; cross-functional influence; work-stream prioritization.

  • Technical foundation in engineering or manufacturing for AI, compute, or networking hardware.

  • 5+ years in computer-hardware and/or networking manufacturing - shop-floor control, supply-chain execution, test, PLM tools, and processes.

  • Thrive in ambiguity; proactive and adaptable.

  • Working knowledge of all NPI roles (hardware, software, product, process, test, reliability, QA).

  • Familiarity with supply-chain functions across NPI (procurement, planning, materials, demand/supply, logistics).

  • Understanding of manufacturing test processes/systems - quality inspections, automated server, storage, and rack tests.

  • Fluency with PLM concepts: BOMs, ECOs, deviations, part substitutions, effectivity-date control, QMS.

  • Excellent communicator and presenter.

Nice to Have
  • M.S. in hardware or manufacturing engineering (or equivalent).

  • Track record sourcing and managing global suppliers and CMs.

  • Experience with risk management and disaster-recovery practices for hardware manufacturing.

  • Familiarity with data-center hardware architecture and deployment.

Benefits
  • Competitive total compensation package (cash + equity).

  • Retirement or pension plan, in line with local norms.

  • Health, dental, and vision insurance.

  • Generous PTO policy, in line with local norms.

Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Program Manager GPUs

10261 New York, New York Fluidstack

Posted 2 days ago

Job Description

About FluidStack

Fluidstack is the AI Cloud Platform. We build GPU supercomputers for top AI labs, governments, and enterprises. Our customers include Mistral, Poolside, Black Forest Labs, Meta, and more.

Our team is small, highly motivated, and focused on providing a world class supercomputing experience. We put our customers first in everything we do, working hard to not just win the sale, but to win repeated business and customer referrals.

We hold ourselves and each other to high standards. We expect you to care deeply about the work you do, the products you build, and the experience our customers have in every interaction with us.

You must work hard, take ownership from inception to delivery, and approach every problem with an open mind and a positive attitude. We value effectiveness, competence, and a growth mindset.

About the Role

Fluidstack's data-center footprint is expanding fast, bringing ever-greater technical complexity and a steady stream of new hardware. We need a seasoned Program Manager, GPUs to be the single point of ownership for all Supply-Chain activity, from definition through factory ramp, so every AI, compute, storage, and network platform launches on schedule and at quality. You will coordinate internal engineering, TPM, and operations teams alongside global manufacturing partners, untangling ambiguity, exposing critical paths, clearing blockers, and managing risk end-to-end.

Focus
  • Full-cycle NPI execution for AI compute, storage, and networking hardware: schedule, build/test readiness, factory and material readiness, risk tracking, and program communications.

  • Primary bridge between Supply Chain and every internal/external stakeholder on the program.

  • Daily build issue resolution and debug cadence across Fluidstack and partner engineering/manufacturing teams.

  • Manufacturing, test, and material readiness (infrastructure, processes, and tooling).

  • Risk identification → mitigation → communication across the entire Supply Chain org.

  • Program-level communications (status, metrics, and escalations) to leadership and partners.

  • Factory ramp preparedness with engineering and manufacturing allies.

  • Continuous improvement: surface operational pain points and drive cross-functional fixes.

About You
  • Bachelor's degree in a relevant field - or equivalent hands-on experience.

  • Deep NPI lifecycle expertise: kickoff through ramp; cross-functional influence; work-stream prioritization.

  • Technical foundation in engineering or manufacturing for AI, compute, or networking hardware.

  • 5+ years in computer-hardware and/or networking manufacturing - shop-floor control, supply-chain execution, test, PLM tools, and processes.

  • Thrive in ambiguity; proactive and adaptable.

  • Working knowledge of all NPI roles (hardware, software, product, process, test, reliability, QA).

  • Familiarity with supply-chain functions across NPI (procurement, planning, materials, demand/supply, logistics).

  • Understanding of manufacturing test processes/systems - quality inspections, automated server, storage, and rack tests.

  • Fluency with PLM concepts: BOMs, ECOs, deviations, part substitutions, effectivity-date control, QMS.

  • Excellent communicator and presenter.

Nice to have
  • M.S. in hardware or manufacturing engineering (or equivalent).

  • Track record sourcing and managing global suppliers and CMs.

  • Experience with risk management and disaster-recovery practices for hardware manufacturing.

  • Familiarity with data-center hardware architecture and deployment.

Benefits
  • Competitive total compensation package (cash + equity).

  • Retirement or pension plan, in line with local norms.

  • Health, dental, and vision insurance.

  • Generous PTO policy, in line with local norms.

Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Program Manager, GPUs (Confidential)

94103, California Insight Global

Posted 4 days ago

Job Description

Insight Global is looking for a sharp, experienced Program Manager to join an AI cloud platform company and be the single point of contact for all GPU supply chain activities. This role is remote, based in NYC or San Francisco. This is a full-time, permanent role with excellent compensation, equity, and benefits.
This company is dedicated to accelerating their go-to-market time in a big way, so it is imperative for this person to be willing to work hard, take ownership from inception to delivery, and approach every problem with a strategic, open mind.
We need a seasoned Program Manager, GPUs to be the single point of ownership for all Supply-Chain activity, from definition through factory ramp, so every AI, compute, storage, and network platform launches on schedule and at quality. You will coordinate internal engineering, TPM, and operations teams alongside global manufacturing partners, untangling ambiguity, exposing critical paths, clearing blockers, and managing risk end-to-end.
Day-to-day responsibilities:
- Full-cycle NPI execution for AI compute, storage, and networking hardware: schedule, build/test readiness, factory and material readiness, risk tracking, and program communications
- Primary bridge between Supply Chain and every internal/external stakeholder on the program
- Daily build issue resolution and debug cadence across internal teams and partner engineering/manufacturing teams
- Manufacturing, test, and material readiness (infrastructure, processes, and tooling)
- Risk identification → mitigation → communication across the entire Supply Chain org
- Program-level communications (status, metrics, and escalations) to leadership and partners
- Factory ramp preparedness with engineering and manufacturing allies
- Continuous improvement: surface operational pain points and drive cross-functional fixes
We are a company committed to creating inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity employer that believes everyone matters. Qualified candidates will receive consideration for employment opportunities without regard to race, religion, sex, age, marital status, national origin, sexual orientation, citizenship status, disability, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to the Human Resources Request Form. The EEOC "Know Your Rights" Poster is available here.
To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy.
Skills and Requirements
- Bachelor's degree in a relevant field - or equivalent hands-on experience
- Deep NPI lifecycle expertise: kickoff through ramp; cross-functional influence; work-stream prioritization
- Technical foundation in engineering or manufacturing for AI, compute, or networking hardware
- 5+ years in computer-hardware and/or networking manufacturing - shop-floor control, supply-chain execution, test, PLM tools, and processes
- Thrive in ambiguity; proactive and adaptable
- Working knowledge of all NPI roles (hardware, software, product, process, test, reliability, QA)
- Familiarity with supply-chain functions across NPI (procurement, planning, materials, demand/supply, logistics)
- Understanding of manufacturing test processes/systems - quality inspections, automated server, storage, and rack tests
- Fluency with PLM concepts: BOMs, ECOs, deviations, part substitutions, effectivity-date control, QMS
- M.S. in hardware or manufacturing engineering (or equivalent)
- Track record sourcing and managing global suppliers and CMs
- Experience with risk management and disaster-recovery practices for hardware manufacturing
- Familiarity with data-center hardware architecture and deployment
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal employment opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment without regard to race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to

Staff Software Engineer, AI/ML Infrastructure, GCE, GPUs

98194 Seattle, Washington Google

Posted 8 days ago

Job Description

**Minimum qualifications:**
+ Bachelor's degree or equivalent practical experience.
+ 8 years of experience in software development.
+ 5 years of experience testing and launching software products.
+ 5 years of experience building and developing large-scale infrastructure, distributed systems or networks, or experience with compute technologies, storage, or hardware architecture.
+ 3 years of experience with software design and architecture.
+ Experience in GPU programming.
**Preferred qualifications:**
+ Master's degree or PhD in Engineering, Computer Science, or a related technical field.
+ 8 years of experience with data structures/algorithms.
+ 3 years of experience in a technical leadership role leading project teams and setting technical direction.
+ 3 years of experience working in a complex, matrixed organization involving cross-functional, or cross-business projects.
Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward.
With your technical expertise you will manage project priorities, deadlines, and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions.
In this role, you will help drive innovation in Machine Learning and Artificial Intelligence (AI) by enabling access to hardware accelerators such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) on Google Cloud. You will enable access to hardware accelerators that power workloads such as Large Language Models (LLMs), face recognition, and voice processing by ensuring their seamless integration into the software stack through the Google Compute Engine (GCE) accelerators team, making them easily accessible to customers via virtual machines and instances.
Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
The US base salary range for this full-time position is $197,000-$291,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
**Responsibilities:**
+ Collaborate with multiple teams and roles to understand the infrastructure requirements for the new product.
+ Lead efforts across coding, code reviews, quality, reliability, performance, billing and other areas that arise during New Product Introduction (NPI) execution.
+ Drive engineering decisions for the new instance/Virtual Machine (VM) family.
+ Partner with engineers to align and execute effectively.
+ Communicate status updates clearly and align technical decisions across cross-functional teams.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Graphics Processing Unit (GPU) Engineer

20811 Bethesda, Maryland Leidos

Posted 13 days ago

Job Description

Description

At Leidos, we deliver innovative solutions through the efforts of our diverse and talented people who are dedicated to our customers' success. We empower our teams, contribute to our communities, and operate sustainable practices. Everything we do is built on a commitment to do the right thing for our customers, our people, and our community. Our Mission, Vision, and Values guide the way we do business. Employees enjoy career enrichment opportunities available through mobility and development and experience rewarding relationships with supportive supervisors and talented colleagues and customers. Your most important work is ahead.

If this sounds like the kind of environment where you can thrive, keep reading!

Leidos is looking for a highly skilled Graphics Processing Unit (GPU) Engineer with a deep understanding of operating systems, hardware, and extensive knowledge of the GPU industry, particularly in the context of Linux-based systems. As a GPU Engineer, you will play a pivotal role in designing, developing, and optimizing GPUs for various applications, with a strong emphasis on seamless integration with operating systems and hardware. Your expertise will contribute to advancing GPU technology and its efficient utilization in diverse fields.

This is a 100% on-site position. All work must be performed at the customer site in Bethesda at the Intelligence Community Campus.

Responsibilities:

  1. GPU Architecture and Design: Collaborate with a multidisciplinary team to define, develop, and optimize GPU architectures, ensuring they meet stringent performance, power efficiency, and feature requirements. Leverage industry insights to drive design decisions. Ensure that GPU designs and integrations are not only optimized for Linux but are also adaptable to other operating systems.

  2. Operating System Integration: Work closely with operating system developers to ensure smooth GPU integration with Linux-based systems. Optimize GPU drivers for compatibility, performance, and reliability in a Linux environment. Provide regular maintenance and updates to ensure continued compatibility.

  3. Hardware Expertise: Contribute to the design and development of GPU hardware, providing insights into hardware architecture to ensure efficient interaction with software components. Maintain and update hardware designs as needed.

  4. CUDA (Compute Unified Device Architecture) /OpenCL (Open Computing Language) Programming: Develop and optimize applications using CUDA or OpenCL, harnessing the full potential of GPU hardware for parallel processing, high-performance computing, and machine learning on Linux platforms. Maintain and update software for optimal performance.

  5. Performance Analysis: Analyze GPU performance, identify bottlenecks, and develop strategies to enhance performance across various applications in Linux, addressing both hardware and software considerations. Regularly monitor and improve performance.

  6. GPU Tooling: Create and maintain debugging tools, profiling utilities, and performance analysis software tailored for Linux systems to facilitate efficient GPU development and troubleshooting. Keep tools up-to-date and functional.

  7. Power Efficiency: Work on power management techniques to optimize GPU power consumption, ensuring efficient operation on both mobile and desktop Linux platforms. Continuously assess and enhance power efficiency strategies.

  8. Testing and Validation: Design and execute tests to validate GPU performance and functionality on Linux, including stress testing, benchmarking, and debugging to ensure robust operation. Maintain and expand the testing suite.

  9. Documentation: Maintain comprehensive technical documentation, including architectural specifications, code documentation, and Linux-specific best practices for GPU development. Keep documentation up-to-date with changes and improvements.

  10. Industry Insight: Stay updated on the latest trends, innovations, and competitive landscapes within the GPU industry, contributing to research efforts and proposing Linux-specific approaches to GPU design and optimization. Share regular updates and insights with the team.

You Bring

  • Bachelor's or higher degree in Computer Science, Electrical Engineering, or a related field. Additional years of experience may be considered in lieu of a degree.

  • 10+ years of relevant systems engineering experience

  • Proven experience in GPU architecture design and GPU performance optimization.

  • Expertise in operating system integration for Linux.

  • Strong understanding of computer hardware architecture, particularly as it relates to Linux systems.

  • Knowledge of parallel computing, graphics algorithms, and real-time rendering in Linux environments.

  • Familiarity with GPU debugging tools and profiling software for Linux.

  • Excellent problem-solving skills and the ability to collaborate within a team.

  • Strong communication skills for conveying technical information in a Linux context.

  • Proficiency with scripting languages such as Python or BASH.

  • Proficiency with automation tools such as Ansible, Puppet, Salt, Terraform, etc.

  • Candidate must, at a minimum, meet DoD 8570.11- IAT Level II certification requirements (currently Security+ CE, CCNA-Security, GICSP, GSEC, or SSCP along with an appropriate computing environment (CE) certification). An IAT Level III certification would also be acceptable (CASP+, CCNP Security, CISA, CISSP, GCED, GCIH, CCSP).

Clearance

  • Active TS/SCI clearance with Polygraph required OR active TS/SCI and willingness to obtain and maintain a Poly.

  • US Citizenship is required due to the nature of the government contracts we support.

Preferred Qualifications

  • Published research or contributions in the GPU industry, especially related to Linux.

  • Experience with machine learning and neural network frameworks on GPUs in Linux.

  • Knowledge of GPU virtualization, cloud computing, and emerging Linux-based technologies in the field.

  • Proficiency in GPU-specific programming languages.

  • Experience with container technologies (Docker, Kubernetes)

  • Experience with Prometheus/Grafana for monitoring

  • Knowledge of distributed resource scheduling systems (Slurm (preferred), LSF, etc.)

  • Familiarity with CUDA and managing GPU-accelerated computing systems

  • Basic knowledge of deep learning frameworks and algorithms

What You'll Gain

At Leidos, your work directly supports some of the most critical missions in the intelligence community. You'll be part of a high-performing, collaborative environment where your contributions shape national security operations. Whether optimizing secure networks, refining system baselines, or integrating next-generation technologies, you'll find meaningful work, continuous technical challenges, and a clear path for advancement.

Original Posting:

June 12, 2025

For U.S. Positions: While subject to change based on business needs, Leidos reasonably anticipates that this job requisition will remain open for at least 3 days with an anticipated close date of no earlier than 3 days after the original posting date as listed above.

Pay Range:

Pay Range $126,100.00 - $227,950.00

The Leidos pay range for this job level is a general guideline only and not a guarantee of compensation or salary. Additional factors considered in extending an offer include (but are not limited to) responsibilities of the job, education, experience, knowledge, skills, and abilities, as well as internal equity, alignment with market data, applicable bargaining agreement (if any), or other law.

REQNUMBER: R-00160931

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status. Leidos will consider qualified applicants with criminal histories for employment in accordance with relevant Laws. Leidos is an equal opportunity employer/disability/vet.

Graphics Processing Unit (GPU) Engineer

20814 Bethesda, Maryland Leidos

Posted 8 days ago

Job Description

**Description**
At Leidos, we deliver innovative solutions through the efforts of our diverse and talented people who are dedicated to our customers' success. We empower our teams, contribute to our communities, and operate sustainable practices. Everything we do is built on a commitment to do the right thing for our customers, our people, and our community. Our Mission, Vision, and Values guide the way we do business. Employees enjoy career enrichment opportunities available through mobility and development and experience rewarding relationships with supportive supervisors and talented colleagues and customers. Your most important work is ahead.
**If this sounds like the kind of environment where you can thrive, keep reading!**
Leidos is looking for a highly skilled Graphics Processing Unit (GPU) Engineer with a deep understanding of operating systems, hardware, and extensive knowledge of the GPU industry, particularly in the context of Linux-based systems. As a GPU Engineer, you will play a pivotal role in designing, developing, and optimizing GPUs for various applications, with a strong emphasis on seamless integration with operating systems and hardware. Your expertise will contribute to advancing GPU technology and its efficient utilization in diverse fields.
This is a 100% on-site position. All work must be performed at the customer site in Bethesda at the Intelligence Community Campus.
Responsibilities:
1. **GPU Architecture and Design:** Collaborate with a multidisciplinary team to define, develop, and optimize GPU architectures, ensuring they meet stringent performance, power efficiency, and feature requirements. Leverage industry insights to drive design decisions. Ensure that GPU designs and integrations are not only optimized for Linux but are also adaptable to other operating systems.
2. **Operating System Integration:** Work closely with operating system developers to ensure smooth GPU integration with Linux-based systems. Optimize GPU drivers for compatibility, performance, and reliability in a Linux environment. Provide regular maintenance and updates to ensure continued compatibility.
3. **Hardware Expertise:** Contribute to the design and development of GPU hardware, providing insights into hardware architecture to ensure efficient interaction with software components. Maintain and update hardware designs as needed.
4. **CUDA (Compute Unified Device Architecture) /OpenCL (Open Computing Language) Programming:** Develop and optimize applications using CUDA or OpenCL, harnessing the full potential of GPU hardware for parallel processing, high-performance computing, and machine learning on Linux platforms. Maintain and update software for optimal performance.
5. **Performance Analysis:** Analyze GPU performance, identify bottlenecks, and develop strategies to enhance performance across various applications in Linux, addressing both hardware and software considerations. Regularly monitor and improve performance.
6. **GPU Tooling:** Create and maintain debugging tools, profiling utilities, and performance analysis software tailored for Linux systems to facilitate efficient GPU development and troubleshooting. Keep tools up-to-date and functional.
7. **Power Efficiency:** Work on power management techniques to optimize GPU power consumption, ensuring efficient operation on both mobile and desktop Linux platforms. Continuously assess and enhance power efficiency strategies.
8. **Testing and Validation:** Design and execute tests to validate GPU performance and functionality on Linux, including stress testing, benchmarking, and debugging to ensure robust operation. Maintain and expand the testing suite.
9. **Documentation:** Maintain comprehensive technical documentation, including architectural specifications, code documentation, and Linux-specific best practices for GPU development. Keep documentation up-to-date with changes and improvements.
10. **Industry Insight:** Stay updated on the latest trends, innovations, and competitive landscapes within the GPU industry, contributing to research efforts and proposing Linux-specific approaches to GPU design and optimization. Share regular updates and insights with the team.
**You Bring**
+ Bachelor's or higher degree in Computer Science, Electrical Engineering, or a related field. Additional years of experience may be considered in lieu of a degree.
+ 10+ years of relevant systems engineering experience
+ Proven experience in GPU architecture design and GPU performance optimization.
+ Expertise in operating system integration for Linux.
+ Strong understanding of computer hardware architecture, particularly as it relates to Linux systems.
+ Knowledge of parallel computing, graphics algorithms, and real-time rendering in Linux environments.
+ Familiarity with GPU debugging tools and profiling software for Linux.
+ Excellent problem-solving skills and the ability to collaborate within a team.
+ Strong communication skills for conveying technical information in a Linux context.
+ Proficiency with scripting languages such as Python or BASH.
+ Proficiency with automation tools such as Ansible, Puppet, Salt, Terraform, etc.
+ Candidate must, at a minimum, meet DoD 8570.11- IAT Level II certification requirements (currently Security+ CE, CCNA-Security, GICSP, GSEC, or SSCP along with an appropriate computing environment (CE) certification). An IAT Level III certification would also be acceptable (CASP+, CCNP Security, CISA, CISSP, GCED, GCIH, CCSP).
**Clearance**
+ Active TS/SCI clearance with Polygraph required OR active TS/SCI and willingness to obtain and maintain a Poly.
+ US Citizenship is required due to the nature of the government contracts we support.
**Preferred Qualifications**
+ Published research or contributions in the GPU industry, especially related to Linux.
+ Experience with machine learning and neural network frameworks on GPUs in Linux.
+ Knowledge of GPU virtualization, cloud computing, and emerging Linux-based technologies in the field.
+ Proficiency in GPU-specific programming languages.
+ Experience with container technologies (Docker, Kubernetes)
+ Experience with Prometheus/Grafana for monitoring
+ Knowledge of distributed resource scheduling systems (Slurm (preferred), LSF, etc.)
+ Familiarity with CUDA and managing GPU-accelerated computing systems
+ Basic knowledge of deep learning frameworks and algorithms
**What You'll Gain**
At Leidos, your work directly supports some of the most critical missions in the intelligence community. You'll be part of a high-performing, collaborative environment where your contributions shape national security operations. Whether optimizing secure networks, refining system baselines, or integrating next-generation technologies, you'll find meaningful work, continuous technical challenges, and a clear path for advancement.
**Original Posting:**
June 12, 2025
For U.S. Positions: While subject to change based on business needs, Leidos reasonably anticipates that this job requisition will remain open for at least 3 days with an anticipated close date of no earlier than 3 days after the original posting date as listed above.
**Pay Range:**
Pay Range $126,100.00 - $227,950.00
The Leidos pay range for this job level is a general guideline only and not a guarantee of compensation or salary. Additional factors considered in extending an offer include (but are not limited to) responsibilities of the job, education, experience, knowledge, skills, and abilities, as well as internal equity, alignment with market data, applicable bargaining agreement (if any), or other law.
REQNUMBER: R-00160931
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status. Leidos will consider qualified applicants with criminal histories for employment in accordance with relevant Laws. Leidos is an equal opportunity employer/disability/vet.

Graphics Processing Unit (GPU) Engineer - Top Secret/SCI

20811 Bethesda, Maryland SUNAYU

Posted 13 days ago

Job Description

Location: Bethesda, MD

Category: Graphics Processing Unit Engineer

Travel Required: No

Remote Type: Onsite

Clearance: Top Secret/SCI

Sunayu LLC is looking for a highly skilled Graphics Processing Unit (GPU) Engineer with a deep understanding of operating systems, hardware, and extensive knowledge of the GPU industry, particularly in the context of Linux-based systems. As a GPU Engineer, you will play a pivotal role in designing, developing, and optimizing GPUs for various applications, with a strong emphasis on seamless integration with operating systems and hardware. Your expertise will contribute to advancing GPU technology and its efficient utilization in diverse fields.

This is a 100% on-site position. All work must be performed at the customer site in Bethesda at the Intelligence Community Campus.

Primary Responsibilities

  1. GPU Architecture and Design: Collaborate with a multidisciplinary team to define, develop, and optimize GPU architectures, ensuring they meet stringent performance, power efficiency, and feature requirements. Leverage industry insights to drive design decisions. Ensure that GPU designs and integrations are not only optimized for Linux but are also adaptable to other operating systems.
  2. Operating System Integration: Work closely with operating system developers to ensure smooth GPU integration with Linux-based systems. Optimize GPU drivers for compatibility, performance, and reliability in a Linux environment. Provide regular maintenance and updates to ensure continued compatibility.
  3. Hardware Expertise: Contribute to the design and development of GPU hardware, providing insights into hardware architecture to ensure efficient interaction with software components. Maintain and update hardware designs as needed.
  4. CUDA (Compute Unified Device Architecture) /OpenCL (Open Computing Language) Programming: Develop and optimize applications using CUDA or OpenCL, harnessing the full potential of GPU hardware for parallel processing, high-performance computing, and machine learning on Linux platforms. Maintain and update software for optimal performance.
  5. Performance Analysis: Analyze GPU performance, identify bottlenecks, and develop strategies to enhance performance across various applications in Linux, addressing both hardware and software considerations. Regularly monitor and improve performance.
  6. GPU Tooling: Create and maintain debugging tools, profiling utilities, and performance analysis software tailored for Linux systems to facilitate efficient GPU development and troubleshooting. Keep tools up-to-date and functional.
  7. Power Efficiency: Work on power management techniques to optimize GPU power consumption, ensuring efficient operation on both mobile and desktop Linux platforms. Continuously assess and enhance power efficiency strategies.
  8. Testing and Validation: Design and execute tests to validate GPU performance and functionality on Linux, including stress testing, benchmarking, and debugging to ensure robust operation. Maintain and expand the testing suite.
  9. Documentation: Maintain comprehensive technical documentation, including architectural specifications, code documentation, and Linux-specific best practices for GPU development. Keep documentation up to date with changes and improvements.
  10. Industry Insight: Stay updated on the latest trends, innovations, and competitive landscapes within the GPU industry, contributing to research efforts and proposing Linux-specific approaches to GPU design and optimization. Share regular updates and insights with the team.


Basic Qualifications
  • Bachelor's or higher degree in Computer Science, Electrical Engineering, or a related field. Additional years of experience may be considered in lieu of a degree.
  • 10+ years of relevant systems engineering experience.
  • Proven experience in GPU architecture design and GPU performance optimization.
  • Expertise in operating system integration for Linux.
  • Strong understanding of computer hardware architecture, particularly as it relates to Linux systems.
  • Knowledge of parallel computing, graphics algorithms, and real-time rendering in Linux environments.
  • Familiarity with GPU debugging tools and profiling software for Linux.
  • Excellent problem-solving skills and the ability to collaborate within a team.
  • Strong communication skills for conveying technical information in a Linux context.
  • Proficiency with scripting languages such as Python or BASH.
  • Proficiency with automation tools such as Ansible, Puppet, Salt, Terraform, etc.
  • Candidate must, at a minimum, meet DoD 8570.11- IAT Level II certification requirements (currently Security+ CE, CCNA-Security, GICSP, GSEC, or SSCP along with an appropriate computing environment (CE) certification). An IAT Level III certification would also be acceptable (CASP+, CCNP Security, CISA, CISSP, GCED, GCIH, CCSP).
Clearance
  • Due to the nature of the government contracts we support, US Citizenship is required.
  • TS/SCI clearance with Polygraph required or a TS/SCI and willingness to get a Poly.
Preferred Qualifications
  • Published research or contributions in the GPU industry, especially related to Linux.
  • Experience with machine learning and neural network frameworks on GPUs in Linux.
  • Knowledge of GPU virtualization, cloud computing, and emerging Linux-based technologies in the field.
  • Proficiency in GPU-specific programming languages.
  • Experience with container technologies (Docker, Kubernetes)
  • Experience with Prometheus/Grafana for monitoring
  • Knowledge of distributed resource scheduling systems (Slurm (preferred), LSF, etc.)
  • Familiarity with CUDA and managing GPU-accelerated computing systems.
  • Basic knowledge of deep learning frameworks and algorithms

COURT PROCESSING UNIT CLERK

10261 New York, New York City of New York

Posted 4 days ago

Job Description

Company Description

Job Description

APPLICANTS MUST BE PERMANENT IN THE CLERICAL ASSOCIATE CIVIL SERVICE TITLE

The Office of Child Support Services (OCSS) puts children first by helping both parents provide for the economic and social well-being, health, and stability of their children. OCSS serves parents and guardians, regardless of income and immigration status. The program helps tens of thousands of children every year by bringing more income into the home. OCSS collects more than $700 million a year on their behalf.

The DSS Court Services (DSS CS) division is responsible for processing cases received from the OCSS Borough Offices and Enforcement division on behalf of families in receipt of Cash Assistance and Medical Assistance. DSS CS works in conjunction with the DSS Office of Legal Affairs (OLA) and NYS Office of Court Administration (OCA) to file child support petitions in pursuit of establishment/modification of support and establishment of paternity, calendar hearings with the NYC Family Courts and process post-hearing actions on those cases.

Under supervision of the Court Services Supervisor with latitude for independent judgment, the Clerical Associate III/Court Processing Unit Clerk is responsible for performing multiple complex clerical tasks associated with child support court orders, such as generating petitions, calendaring/preparing court cases, mailouts and various post hearing activities as specified by the Office of Legal Affairs. Staff will also perform case status updates, such as summons processing and pre-hearing or post-hearing processes.

The Office of Child Support Services (OCSS) is recruiting four (4) Clerical Associate III, to function as Court Processing Unit Clerk for DSS Court Services, who will:

- Perform the data-entry of court referrals received from the OCSS Borough Offices, Enforcement Services, and other Child Support areas. Utilize the Unified Court Management System (UCMS) to schedule Court hearings and generate summonses/petitions and notices for child support hearings.

- Utilize databases and applications including but not limited to the Electronic Case Folder System (ECFS), SharePoint, and the Automatic Mail System (AMS) to retrieve, process, and update assigned work.

- Prepare summonses/petitions packages for noncustodial parents for mail service and personal service. Prepare envelopes for mail-outs of court appearance notices to the custodial parents. Reconcile service certificates received from the Sheriff's Office and service affidavits received from the contracted process servers.

- Review Court Outcome Disposition sheets and case files after the hearing is completed to determine the outcome of the court hearings and process actions needed.

- Utilize case locate systems to identify alternate addresses for NCPs as needed.

- Update case information in appropriate child support applications (e.g., ASSETS, ECFS) and index required documents as needed.

- Complete daily and weekly logs and statistical reports.

- Assist other work units as needed and participate in special projects.

Hours: 8:30 AM - 4:30 PM

Work Location: 60 Lafayette Street, 7th Floor-Room 7C, New York, NY 10013

CLERICAL ASSOCIATE - 10251

Qualifications

Qualification Requirements
A four-year high school diploma or its educational equivalent approved by a State's department of education or a recognized accrediting organization and one year of satisfactory clerical experience.
Skills Requirement
Keyboard familiarity with the ability to type at a minimum of 100 key strokes (20 words) per minute.

Additional Information

The City of New York is an inclusive equal opportunity employer committed to recruiting and retaining a diverse workforce and providing a work environment that is free from discrimination and harassment based upon any legally protected status or protected characteristic, including but not limited to an individual's sex, race, color, ethnicity, national origin, age, religion, disability, sexual orientation, veteran status, gender identity, or pregnancy.

Principal Software Engineer - RDMA Azure Data Processing Unit

95053 Santa Clara, California Microsoft Corporation

Posted 5 days ago

Job Description

Microsoft Silicon, Cloud Hardware, and Infrastructure Engineering (SCHIE) is the team behind Microsoft's expanding Cloud Infrastructure and responsible for powering Microsoft's "Intelligent Cloud" mission. SCHIE delivers the core infrastructure and foundational technologies for Microsoft's over 200 online businesses including Bing, MSN, Office 365, Xbox Live, Teams, OneDrive, and the Microsoft Azure platform globally with our server and data center infrastructure, security and compliance, operations, globalization, and manageability solutions. Our focus is on smart growth, high efficiency, and delivering a trusted experience to customers and partners worldwide and we are looking for passionate, high-energy engineers to help achieve that mission.

The Azure Data Processing Unit (DPU) team brings together state-of-the-art software and hardware expertise to create a highly programmable and high-performance chip with the capability to efficiently handle large data volumes. Thanks to its integrated design, this solution empowers Azure to develop solutions for solving the next generation problems with increased agility and performance leveraging the DPU's compute, storage, and networking capabilities.

As a Principal Software Engineer in the DPU Networking software team, you will design, develop, deploy and support networking packet forwarding and control plane functions that enable high performance data processing within various network endpoints in Azure data centers. You will work as part of a dynamic, multi-talented team of engineers from across the world. You would collaborate with technical stakeholders in a cross functional team manner and contribute towards the success of multiple projects and initiatives across the organization. This opportunity will allow you to develop new solutions for the Azure fleet, participate in the design of cutting-edge networking solutions and hone your design and performance optimization skills.

As Microsoft's cloud business continues to grow, the ability to deploy new offerings and hardware infrastructure on time, in high volume with high quality and lowest cost is of paramount importance. To achieve this goal, the DPU Networking Software team is instrumental in defining and delivering operational measures of success for quality, delivery, scale and sustainability related to Microsoft cloud software. We are looking for seasoned engineers with a dedicated passion for customer-focused solutions, insight and industry knowledge to envision and implement future technical solutions that will manage and optimize the Cloud infrastructure.

Responsibilities

  • Collaborate with stakeholders to understand business needs and translate them into technical requirements and solutions.

  • Work across team and organizational boundaries to drive clarity and alignment.

  • Drives identification of dependencies and the development of design documents for a product, application, service, or platform.

  • Drives, creates, implements, optimizes, debugs, refactors, and reuses code to establish and improve performance and maintainability, effectiveness, and return on investment (ROI).

  • Conduct research, stay updated with the latest industry trends, and experiment with cutting-edge technologies to drive innovation.

  • Leverages subject-matter expertise of product features and partners with appropriate stakeholders (e.g., project managers) to drive a workgroup's project plans, release plans, and work items.

  • Acts as a Designated Responsible Individual (DRI) and guides other engineers by developing and following the playbook, working on call to monitor system/product/service for degradation, downtime, or interruptions, alerting stakeholders about status, and initiating actions to restore system/product/service for simple and complex problems when appropriate.

  • Proactively seeks new knowledge and adapts to new trends, technical solutions, and patterns that will improve the availability, reliability, efficiency, observability, and performance of products while also driving consistency in monitoring and operations at scale.

  • Coaching and mentorship of fellow team members.

  • Effective communication skills and a passion for delivering scalable solutions through a diverse team of engineers.

Qualifications

Required Qualifications

  • Bachelor's Degree in Computer Science or related technical field AND 6+ years of technical engineering experience with coding in C

  • OR equivalent experience.

  • 2+ years of experience in developing networking software stack for RDMA forwarding or control plane functions

  • 4+ years of experience in software design and coding of Layer 2/L3/L4 Ethernet/IP networking data plane packet forwarding and control plane processing functions within a programmable NIC or network switches and routers or an architecture with hardware offload

Other Qualifications

Ability to meet Microsoft, customer and/or government security screening requirements are required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.

Preferred Qualifications

  • Experience with RDMA (RoCE) packet forwarding development in data center switches and NICs

  • Experience in developing networking software on DPUs or programmable NICs or other hardware offload architectures.

  • Experience in developing technologies for reliable data transfer across networks with efficient fabric utilization and deterministic latency.

  • CI/CD Experience: Knowledge of Continuous Integration and Continuous Deployment (CI/CD) practices for streamlined software development and deployment processes.

  • Scripting for Developer Tools: Proficiency in scripting languages to build and enhance developer tools, automating repetitive tasks and improving workflow efficiency.

Software Engineering IC5 - The typical base pay range for this role across the U.S. is USD $139,900 - $74,800 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD 188,000 - 304,200 per year.

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: US corporate pay information | Microsoft Careers.

Microsoft will accept applications for the role until August 17th, 2025.

#DPU

#SCHIE

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Principal Software Engineer - RDMA Azure Data Processing Unit

95054 Santa Clara, California Microsoft Corporation

Posted 4 days ago

Job Description

Microsoft Silicon, Cloud Hardware, and Infrastructure Engineering (SCHIE) is the team behind Microsoft's expanding Cloud Infrastructure and responsible for powering Microsoft's "Intelligent Cloud" mission. SCHIE delivers the core infrastructure and foundational technologies for Microsoft's over 200 online businesses including Bing, MSN, Office 365, Xbox Live, Teams, OneDrive, and the Microsoft Azure platform globally with our server and data center infrastructure, security and compliance, operations, globalization, and manageability solutions. Our focus is on smart growth, high efficiency, and delivering a trusted experience to customers and partners worldwide and we are looking for passionate, high-energy engineers to help achieve that mission.
The Azure Data Processing Unit (DPU) team brings together state-of-the-art software and hardware expertise to create a highly programmable and high-performance chip with the capability to efficiently handle large data volumes. Thanks to its integrated design, this solution empowers Azure to develop solutions for solving the next generation problems with increased agility and performance leveraging the DPU's compute, storage, and networking capabilities.
As a Principal Software Engineer in the DPU Networking software team, you will design, develop, deploy and support networking packet forwarding and control plane functions that enable high performance data processing within various network endpoints in Azure data centers. You will work as part of a dynamic, multi-talented team of engineers from across the world. You would collaborate with technical stakeholders in a cross functional team manner and contribute towards the success of multiple projects and initiatives across the organization. This opportunity will allow you to develop new solutions for the Azure fleet, participate in the design of cutting-edge networking solutions and hone your design and performance optimization skills.
As Microsoft's cloud business continues to grow, the ability to deploy new offerings and hardware infrastructure on time, in high volume with high quality and lowest cost is of paramount importance. To achieve this goal, the DPU Networking Software team is instrumental in defining and delivering operational measures of success for quality, delivery, scale and sustainability related to Microsoft cloud software. We are looking for seasoned engineers with a dedicated passion for customer-focused solutions, insight and industry knowledge to envision and implement future technical solutions that will manage and optimize the Cloud infrastructure.
**Responsibilities**
+ Collaborate with stakeholders to understand business needs and translate them into technical requirements and solutions.
+ Work across team and organizational boundaries to drive clarity and alignment.
+ Drives identification of dependencies and the development of design documents for a product, application, service, or platform.
+ Drives, creates, implements, optimizes, debugs, refactors, and reuses code to establish and improve performance and maintainability, effectiveness, and return on investment (ROI).
+ Conduct research, stay updated with the latest industry trends, and experiment with cutting-edge technologies to drive innovation.
+ Leverages subject-matter expertise of product features and partners with appropriate stakeholders (e.g., project managers) to drive a workgroup's project plans, release plans, and work items.
+ Acts as a Designated Responsible Individual (DRI) and guides other engineers by developing and following the playbook, working on call to monitor system/product/service for degradation, downtime, or interruptions, alerting stakeholders about status, and initiating actions to restore system/product/service for simple and complex problems when appropriate.
+ Proactively seeks new knowledge and adapts to new trends, technical solutions, and patterns that will improve the availability, reliability, efficiency, observability, and performance of products while also driving consistency in monitoring and operations at scale.
+ Coaching and mentorship of fellow team members.
+ Effective communication skills and a passion for delivering scalable solutions through a diverse team of engineers.
**Qualifications**
**Required** **Qualifications**
+ Bachelor's Degree in Computer Science or related technical field AND 6+ years of technical engineering experience with coding in C
+ OR equivalent experience.
+ 2+ years of experience in developing networking software stack for RDMA forwarding or control plane functions
+ 4+ years of experience in software design and coding of Layer 2/L3/L4 Ethernet/IP networking data plane packet forwarding and control plane processing functions within a programmable NIC or network switches and routers or an architecture with hardware offload
**Other Qualifications**
Ability to meet Microsoft, customer and/or government security screening requirements are required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.
**Preferred** **Qualifications**
+ Experience with RDMA (RoCE) packet forwarding development in data center switches and NICs
+ Experience in developing networking software on DPUs or programmable NICs or other hardware offload architectures.
+ Experience in developing technologies for reliable data transfer across networks with efficient fabric utilization and deterministic latency.
+ CI/CD Experience: Knowledge of Continuous Integration and Continuous Deployment (CI/CD) practices for streamlined software development and deployment processes.
+ Scripting for Developer Tools: Proficiency in scripting languages to build and enhance developer tools, automating repetitive tasks and improving workflow efficiency.
Software Engineering IC5 - The typical base pay range for this role across the U.S. is USD $139,900 - $74,800 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD 188,000 - 304,200 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: US corporate pay information | Microsoft Careers. Microsoft will accept applications for the role until August 17th, 2025.
#DPU
#SCHIE
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.