4,314 AI Models jobs in the United States
Principal Applied Scientist - Security AI Models

Posted 1 day ago
Job Description
The Security Models Training team builds and operates the large-scale AI training and adaptation engines that power Microsoft Security products, turning cutting-edge research into dependable, production-ready capabilities. As a **Principal Applied Scientist - Security AI Models**, you will lead end-to-end model development for security scenarios, including privacy-aware data curation, continual pretraining, task-focused fine-tuning, reinforcement learning, and rigorous evaluation. You will drive training efficiency on distributed GPU systems, deepen model reasoning and tool-use skills, and embed responsible AI and compliance into every stage of the workflow. The role is hands-on and impact-focused, partnering closely with engineering and product to translate innovations into shipped experiences, designing objective benchmarks and quality gates, and mentoring scientists and engineers to scale results across globally distributed teams. You will combine strong coding and experimentation with a systems mindset to accelerate iteration cycles, improve throughput and reliability, and help shape the next generation of secure, trustworthy AI for our customers.
Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
**Responsibilities**
+ Execute the full modeling lifecycle for security scenarios from data ingestion and curation to training, evaluation, deployment, and monitoring
+ Design and operate privacy-preserving data workflows, including anonymization, templating, synthetic augmentation, and quantitative utility measurement
+ Develop and maintain fine-tuning and adaptation recipes for transformer models, including parameter-efficient methods and reinforcement learning from human or synthetic feedback
+ Contribute to objective benchmarks, metrics, and automated gates for accuracy, robustness, safety, and performance to enable repeatable model shipping
+ Collaborate with engineering and product teams to productionize models, harden pipelines, and meet service-level objectives for latency, throughput, and availability
+ Uphold high-quality documentation and experiment hygiene and foster a culture of rapid iteration grounded in responsible AI principles
+ Stay current with the latest AI advances and help translate promising techniques into practical, measurable impact
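As context for the parameter-efficient fine-tuning methods the responsibilities mention, a LoRA-style adapter can be sketched in plain PyTorch. The module, rank, and scaling values below are illustrative assumptions for the sketch, not part of any Microsoft codebase:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a low-rank trainable update (LoRA-style)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Low-rank factors: A is small random, B starts at zero so the
        # wrapped layer initially matches the base layer exactly.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

base = nn.Linear(64, 32)
lora = LoRALinear(base, rank=4)
x = torch.randn(2, 64)
# Because B is zero-initialized, outputs match the frozen base at init.
assert torch.allclose(lora(x), base(x))
```

Only the two small factor matrices train, which is what makes such adapters cheap to fit per task on top of a shared pretrained model.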
**Qualifications**
**Required Qualifications:**
+ Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience (e.g., statistics, predictive analytics, research)
+ OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research)
+ OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research)
+ OR equivalent experience.
+ 5+ years experience creating publications (e.g., patents, libraries, peer-reviewed academic papers).
+ Proficiency in Python and PyTorch, with hands-on experience building and debugging large-scale training jobs
**Other Requirements**
Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
Microsoft Cloud Background Check:
- This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.
**Preferred Qualifications:**
+ Security domain experience in one or more areas: security operations, threat intelligence, malware analysis, vulnerability and posture management, anomaly detection, phishing and fraud detection, or cloud identity and access.
+ Experience with distributed training and scaling techniques, for example DeepSpeed, FSDP, ZeRO, model and pipeline parallelism, mixed precision, and profiling.
+ Experience with privacy preserving ML including differential privacy concepts, privacy risk assessment, and utility measurement on privatized data.
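The differential-privacy and utility-measurement work listed above can be illustrated with the classic Laplace mechanism: noise calibrated to sensitivity and a privacy budget epsilon, followed by an error metric on the privatized output. The epsilon value and the choice of mean absolute error are arbitrary choices for this sketch:

```python
import numpy as np

def laplace_mechanism(counts: np.ndarray, epsilon: float,
                      rng: np.random.Generator) -> np.ndarray:
    """Add Laplace noise calibrated to sensitivity 1 (one user changes
    one count by at most 1), giving epsilon-differential privacy."""
    scale = 1.0 / epsilon
    return counts + rng.laplace(loc=0.0, scale=scale, size=counts.shape)

rng = np.random.default_rng(0)
true_counts = np.array([120.0, 45.0, 7.0, 300.0])
private = laplace_mechanism(true_counts, epsilon=0.5, rng=rng)
# Utility measurement: mean absolute error between raw and privatized counts.
mae = np.abs(private - true_counts).mean()
print(f"MAE at epsilon=0.5: {mae:.2f}")
```

Sweeping epsilon and plotting the resulting error is one simple way to quantify the privacy/utility trade-off the posting refers to.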
Applied Sciences IC5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year. A different range applies in certain work locations: in the San Francisco Bay Area and New York City metropolitan area, the base pay range for this role is USD $188,000 - $304,200 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: . Microsoft will accept applications for the role until October 23, 2025.
#MSFTSecurity #MSECAI #FoundationModels #ReinforcementLearning #DomainAdaptation #FineTuning #AgenticAI #MSCareerEvents25
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations ( .
Research Scientist, Graph Neural Networks, Omega

Posted 19 days ago
Job Description
Google - New York, NY, USA
**Mid**
Experience driving progress, solving problems, and mentoring more junior team members; deeper expertise and applied knowledge within relevant area.
**Minimum qualifications:**
+ PhD degree in Computer Science, a related field, or equivalent practical experience.
+ 2 years of experience with software development in one or more programming languages (e.g., Python, C, C++, Java, JavaScript), including application of data structures and algorithms.
+ Experience in Machine Learning (Graph Convolutional Networks, Deep Neural Networks, Transformers etc.) or related fields.
+ Contribution to research communities or efforts, including publishing papers at conferences (such as NeurIPS, ICML, CVPR, SIGGRAPH etc.).
**Preferred qualifications:**
+ Experience in large language models or graph mining research.
+ Experience in research leadership.
+ Proven track record of publication history at machine learning (ML)/data mining conferences with emphasis on our research area (such as GNNs, geometric deep learning, etc.).
**About the job**
As an organization, Google maintains a portfolio of research projects driven by fundamental research, new product innovation, product contribution and infrastructure goals, while providing individuals and teams the freedom to emphasize specific types of work. As a Research Scientist, you'll set up large-scale tests and deploy promising ideas quickly and broadly, managing deadlines and deliverables while applying the latest theories to develop new and improved products, processes, or technologies. From creating experiments and prototyping implementations to designing new architectures, our research scientists work on real-world problems that span the breadth of computer science, such as machine (and deep) learning, data mining, natural language processing, hardware and software performance analysis, improving compilers for mobile platforms, as well as core search and much more.
As a Research Scientist, you'll also actively contribute to the wider research community by sharing and publishing your findings, with ideas inspired by internal projects as well as from collaborations with research programs at partner universities and technical institutes all over the world.
Google Research addresses challenges that define the technology of today and tomorrow. From conducting fundamental research to influencing product development, our research teams have the opportunity to impact technology used by billions of people every day.
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field -- we publish regularly in academic journals, release projects as open source, and apply research to Google products.
The US base salary range for this full-time position is $166,000-$244,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google ( .
**Responsibilities**
+ Integrate graph-structured data with foundation models and generative AI.
+ Develop novel methods to improve model performance in challenging real-world problem settings, and work with product teams to get these models out to users.
+ Run experiments and document research results for academic conferences.
+ Educate Googlers about best practices for learning and reasoning over graph data.
+ Develop new data mining pipelines and make improvements to the Graph Mining library as needed.
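The graph-learning methods named in the minimum qualifications (Graph Convolutional Networks) can be sketched in a few lines of plain PyTorch, following the standard symmetric normalization of Kipf and Welling's GCN; the toy graph and feature sizes are invented for illustration:

```python
import torch

def gcn_layer(adj: torch.Tensor, features: torch.Tensor,
              weight: torch.Tensor) -> torch.Tensor:
    """One graph-convolution step: add self-loops, symmetrically
    normalize the adjacency, then aggregate neighbors and apply a
    linear transform with a ReLU nonlinearity."""
    a_hat = adj + torch.eye(adj.size(0))          # self-loops
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
    return torch.relu(norm @ features @ weight)

# Toy 4-node graph: a path 0-1-2-3.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
x = torch.randn(4, 8)    # 8 input features per node
w = torch.randn(8, 16)   # project to 16 hidden features
h = gcn_layer(adj, x, w)
print(h.shape)  # torch.Size([4, 16])
```

Stacking such layers lets information propagate over multi-hop neighborhoods, which is the basic mechanism behind the graph-mining work the role describes.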
Information collected and processed as part of your Google Careers profile, and any job applications you choose to submit, is subject to Google's Applicant and Candidate Privacy Policy (./privacy-policy) .
Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy ( , Know your rights: workplace discrimination is illegal ( , Belonging at Google ( , and How we hire ( .
If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form ( .
Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.
To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.
Principal AI Researcher - Generative Models
Posted 2 days ago
Lead AI Engineer, Generative Models
Posted 4 days ago
Job Description
Key Responsibilities:
- Lead the design, development, and implementation of advanced generative AI models, including LLMs and diffusion models.
- Conduct cutting-edge research in AI and machine learning, with a focus on novel architectures and algorithms.
- Oversee the training, fine-tuning, and evaluation of AI models using large datasets.
- Develop and implement strategies for deploying and scaling AI models into production environments.
- Collaborate with product managers and other engineering teams to integrate AI capabilities into new and existing products.
- Mentor and guide junior AI engineers and researchers, fostering technical growth and innovation.
- Stay abreast of the latest advancements in AI and machine learning research and industry trends.
- Contribute to technical documentation, patents, and publications.
- Ensure the ethical and responsible development and deployment of AI technologies.
Qualifications:
- Ph.D. or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- 7+ years of experience in AI/ML research and development, with a specialization in generative models.
- Proven track record of designing, building, and deploying sophisticated AI models.
- Expertise in deep learning frameworks such as TensorFlow, PyTorch, or JAX.
- Strong programming skills in Python and experience with relevant libraries (e.g., NumPy, SciPy, Hugging Face).
- Deep understanding of natural language processing (NLP), computer vision, and reinforcement learning.
- Excellent leadership, communication, and problem-solving skills.
- Experience with cloud platforms (AWS, Azure, GCP) and distributed computing is a plus.
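The diffusion models this listing asks for rest on a simple closed-form forward process: given a noise schedule, a training example can be jumped to any noise level t in one step. The linear schedule and array sizes below are common but arbitrary choices for the sketch:

```python
import numpy as np

def forward_diffuse(x0: np.ndarray, t: int, betas: np.ndarray,
                    rng: np.random.Generator) -> np.ndarray:
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise,
    where alpha_bar_t is the cumulative product of (1 - beta_i)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

betas = np.linspace(1e-4, 0.02, 1000)   # linear schedule, a common choice
rng = np.random.default_rng(0)
x0 = rng.standard_normal((3, 3))
xt = forward_diffuse(x0, t=500, betas=betas, rng=rng)
# Near t=999 alpha_bar is close to 0, so x_t is almost pure noise.
```

Training then amounts to teaching a network to predict the injected noise at random t; generation runs the process in reverse.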
Lead AI Researcher - Generative Models
Posted 5 days ago
Job Description
Key Responsibilities:
- Lead research initiatives in generative AI, including large language models, diffusion models, and GANs.
- Design, develop, and implement advanced AI algorithms and models.
- Oversee the research roadmap and strategy for generative AI projects.
- Mentor and guide a team of AI researchers and engineers, fostering a culture of innovation and academic rigor.
- Collaborate with cross-functional teams to integrate research findings into product development.
- Publish research findings in top-tier AI conferences and journals.
- Stay abreast of the latest advancements and trends in AI and machine learning.
- Evaluate and benchmark new AI models and techniques.
- Secure research grants and funding where applicable.
- Contribute to the intellectual property portfolio through patent applications.
- Drive the ethical considerations and responsible development of AI technologies.
- Present research findings to internal and external stakeholders.
The ideal candidate holds a Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field, with a strong publication record in reputable venues. A minimum of 7 years of research experience in AI, with a significant focus on generative models, is required. Expertise in deep learning frameworks (e.g., TensorFlow, PyTorch), programming proficiency in Python, and experience with large-scale data processing are essential. Exceptional leadership, communication, and strategic thinking skills are paramount. This role is a unique chance to make a significant impact on the future of AI from a remote setting.
Lead AI Researcher - Generative Models
Posted 6 days ago
Job Description
Responsibilities:
- Lead the research and development of novel generative AI models and algorithms.
- Design and execute research experiments, analyze results, and draw insightful conclusions.
- Publish research findings in top-tier AI conferences and journals.
- Mentor and guide a team of AI researchers and engineers, fostering a collaborative and innovative research environment.
- Collaborate with product and engineering teams to translate research breakthroughs into practical applications.
- Stay at the forefront of AI research, identifying emerging trends and opportunities.
- Contribute to the overall research strategy and roadmap of the organization.
- Develop and maintain high-quality code for research prototypes and experiments.
- Present research findings internally and externally to diverse audiences.
- Contribute to the intellectual property portfolio through patents and publications.
Qualifications:
- Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field.
- Minimum of 7 years of post-graduate research experience in AI, with a strong focus on generative models (e.g., GANs, VAEs, Transformers, Diffusion Models).
- Proven track record of impactful publications in leading AI conferences (e.g., NeurIPS, ICML, ICLR, CVPR, ACL).
- Deep understanding of deep learning architectures, optimization techniques, and mathematical foundations of AI.
- Experience leading research projects and mentoring junior researchers.
- Proficiency in programming languages commonly used in AI research, such as Python, and deep learning frameworks (e.g., TensorFlow, PyTorch).
- Exceptional analytical, problem-solving, and critical thinking skills.
- Strong communication and presentation skills, with the ability to articulate complex research concepts clearly.
- Experience working effectively in a distributed, remote research team.
- Passion for advancing the field of artificial intelligence and its ethical implications.
Principal AI Engineer, Generative Models
Posted 7 days ago