22 Data Scientist jobs in Albuquerque
Lead Engineer, Big Data

Posted 1 day ago
Job Description
Responsible for all aspects of the architecture, design, and implementation of data management solutions using a Big Data platform on Cloudera or Hortonworks, as well as other areas of enterprise application platforms.
**KNOWLEDGE, SKILLS & ABILITIES** (Generally, the occupational knowledge and specific technical and professional skills and abilities required to perform the essential duties of this job):
+ Provide leadership in choosing the ideal architecture, evaluating tools and frameworks, and defining standards and best practices for implementing scalable business solutions
+ Understand, articulate, interpret, and apply the principles of the defined data and analytics strategy to unique, complex business problems
+ Mentor development teams to build tools for data quality control and repeatable data tasks that will accelerate and automate data management duties.
+ Implement batch and real-time data ingestion/extraction processes through ETL, streaming, APIs, etc., between diverse source and target systems with structured and unstructured datasets (see the illustrative sketch after this list)
+ Design and build data solutions with an emphasis on performance, scalability, and high reliability
+ Design analytical data models for self-service BI
+ Contribute to leading and building a team of top-performing data technology professionals
+ Help with project planning and execution
+ Analyze current business practices, processes and procedures and identify opportunities for leveraging Microsoft Azure data & analytics PaaS services.
+ Expert-level experience with Azure Big Data services (e.g., Azure Data Factory, Azure DevOps, Azure Storage/Data Lake, Azure Databricks)
+ Expert-level experience with Hadoop cluster components and services (e.g., HDFS, YARN, ZooKeeper, Ambari/Cloudera Manager, Sentry/Ranger, Kerberos)
+ Design and implement BI solutions to meet business requirements using modern BI tools (e.g., Power BI, Tableau)
+ Ability to lead in solving technical issues while engaging with infrastructure and vendor support teams
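To give a concrete flavor of the batch/real-time ingestion work listed above, here is a minimal PySpark Structured Streaming sketch that reads JSON events from a Kafka topic and lands them in cloud storage. The broker address, topic name, event schema, and storage paths are illustrative assumptions only, running it also assumes the Spark Kafka connector package is available, and an actual Azure Data Factory/Databricks pipeline would look different.

```python
# Hedged sketch: minimal PySpark Structured Streaming ingestion from Kafka to storage.
# Broker, topic, schema, and paths are placeholders, not real values from this posting.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("ingest-events").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("source_system", StringType()),
    StructField("payload", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
       .option("subscribe", "member_events")               # placeholder topic
       .load())

# Kafka delivers bytes; cast to string and parse the JSON payload into columns.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "abfss://lake@account.dfs.core.windows.net/raw/events")          # placeholder path
         .option("checkpointLocation", "abfss://lake@account.dfs.core.windows.net/chk/events")
         .outputMode("append")
         .start())
```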
**JOB FUNCTION:**
Responsible for all aspects of the architecture, design, and implementation of data management solutions using a Big Data platform on Cloudera or Hortonworks, as well as other areas of enterprise application platforms.
**REQUIRED EDUCATION:**
Bachelor's Degree
**REQUIRED EXPERIENCE:**
+ 8+ years of data management experience
+ Previous experience leading projects or teams
+ Experience building stream-processing systems using solutions such as Kafka, Storm, or Spark Streaming
+ Proven experience with Big Data tools such as Spark, Hive, Impala, PolyBase, Phoenix, Presto, and Kylin
+ Experience integrating data from multiple data sources (using ETL tools such as Talend)
+ Experience manipulating large data sets with Big Data processing tools
+ Strong experience with Data Lake, Data Warehouse, Data Validation & Certification, Data Quality, Metadata Management, and Data Governance
+ Experience with programming languages such as PySpark, Scala, or SQL
+ Experience implementing web applications and web services APIs (REST/SOAP)
**PREFERRED EDUCATION:**
Master's Degree
**PREFERRED EXPERIENCE:**
Experience in the healthcare industry is preferred
To all current Molina employees: If you are interested in applying for this position, please apply through the intranet job listing.
Molina Healthcare offers a competitive benefits and compensation package. Molina Healthcare is an Equal Opportunity Employer (EOE) M/F/D/V.
Pay Range: $107,028 - $250,446 / ANNUAL
*Actual compensation may vary from posting based on geographic location, work experience, education and/or skill level.
Big Data Engineer - Medicare/Medicaid

Posted 1 day ago
Job Description
We are seeking a highly skilled and forward-thinking Big Data Engineer to join our healthcare data team. This role encompasses the end-to-end design, development, and management of large-scale data systems tailored for healthcare analytics. The ideal candidate will be responsible for architecting and maintaining robust, scalable, and secure data pipelines that support critical decision-making across the organization. This position requires deep technical expertise in modern Big Data tools, real-time and batch data integration, and a strong understanding of data governance and compliance in healthcare environments.
**Knowledge/Skills/Abilities:**
- Architect and implement scalable, high-performance Big Data solutions that support structured and unstructured data from diverse sources.
- Build and manage batch and real-time data ingestion/extraction pipelines using tools like Kafka, Spark Streaming, and Talend.
- Develop reusable and efficient ETL frameworks using Python/Scala for high-volume data transformation and movement (see the sketch after this list).
- Design and optimize data models to support analytical and operational use cases, including healthcare claims and utilization data.
- Collaborate with cross-functional teams, including data scientists, analysts, and business partners, to translate requirements into robust data products.
- Deploy, monitor, and troubleshoot Hadoop-based infrastructure using tools such as Cloudera Manager, Ambari, and Zookeeper.
- Enforce data quality, security, and compliance standards using tools such as Kerberos, Ranger, and Sentry.
- Implement web services and APIs (REST/SOAP) to enable seamless integration with applications and visualization platforms.
- Contribute to data governance initiatives, including metadata management, lineage tracking, and quality assurance.
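As a rough illustration of the reusable Python ETL frameworks and data-quality enforcement described above, the sketch below validates a hypothetical claims extract before writing it out. The column names, quality rules, and file paths are assumptions for illustration only, not an actual claims schema, and writing Parquet assumes pyarrow is installed.

```python
# Hedged sketch: a small, reusable ETL step with basic data-quality checks.
# Column names and rules are hypothetical placeholders.
import pandas as pd

REQUIRED_COLUMNS = ["claim_id", "member_id", "service_date", "paid_amount"]

def validate_claims(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows that fail simple quality rules and report how many were kept."""
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"Missing required columns: {missing}")

    before = len(df)
    df = df.dropna(subset=["claim_id", "member_id"])      # key fields must be present
    df = df[df["paid_amount"] >= 0]                       # no negative payments
    df["service_date"] = pd.to_datetime(df["service_date"], errors="coerce")
    df = df.dropna(subset=["service_date"])               # drop unparseable dates
    print(f"validate_claims: kept {len(df)} of {before} rows")
    return df

def run_etl(source_csv: str, target_parquet: str) -> None:
    claims = pd.read_csv(source_csv)
    claims = validate_claims(claims)
    claims.to_parquet(target_parquet, index=False)

if __name__ == "__main__":
    run_etl("claims_extract.csv", "claims_clean.parquet")  # placeholder paths
```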
**Job Qualifications**
**Required Qualifications**
- Minimum 3 years of hands-on experience in Big Data engineering, data integration, and pipeline development.
- Proficiency in Python, Java, or Scala for data transformation and system scripting.
- Expertise in Big Data tools: Spark, Hive, Impala, Presto, Phoenix, Kylin, and Hadoop (HDFS, YARN).
- Experience building real-time stream-processing systems using Kafka, Storm, or Spark Streaming.
- Strong knowledge of NoSQL databases like HBase and MemSQL, and traditional RDBMS including PostgreSQL, Oracle, and SQL Server.
- Skilled in ETL design and development using tools such as Talend or Informatica.
- Demonstrated experience in deploying and monitoring big data infrastructure with Ambari, Cloudera Manager, and Zookeeper.
- Solid understanding of data warehousing, data validation, data quality checks, metadata management, and governance.
**Preferred Qualifications**
- 5+ years of progressive experience in Big Data engineering or analytics; HEOR (Health Economics and Outcomes Research)
- Prior experience working in the healthcare industry with familiarity in clinical, claims, or care management data.
- Experience with cloud platforms (AWS, Azure) and containerization tools (Docker, Kubernetes).
**Technical Environment**
- Big Data Ecosystem: Hadoop, Spark, Hive, Kafka, Presto, Impala, Phoenix, Kylin, Zookeeper
- Streaming & Messaging: Kafka, Spark Streaming, Storm
- ETL & Integration: Talend, Informatica, Python/Scala-based ETL
- Programming Languages: Python, Java, Scala, SQL
- Databases: HBase, MemSQL, PostgreSQL, Oracle, SQL Server
- Cloud & DevOps: AWS, Azure, Docker, Kubernetes, Git
- Security & Governance: Kerberos, Ranger, Sentry, Metadata Management
- Monitoring Tools: Ambari, Cloudera Manager
- APIs: REST, SOAP
To all current Molina employees: If you are interested in applying for this position, please apply through the intranet job listing.
Molina Healthcare offers a competitive benefits and compensation package. Molina Healthcare is an Equal Opportunity Employer (EOE) M/F/D/V.
Pay Range: $77,969 - $171,058 / ANNUAL
*Actual compensation may vary from posting based on geographic location, work experience, education and/or skill level.
Senior Engineer, Big Data - Databricks/Healthcare/Payment Integrity - Remote -2033477

Posted 1 day ago
Job Description
**Job Summary**
Responsible for all aspects of the architecture, design, and implementation of data management solutions using a Big Data platform on Cloudera or Hortonworks, as well as other areas of enterprise application platforms.
**Knowledge/Skills/Abilities**
- Convert concepts to technical architecture, design and implementation
- Provide guidance on choosing the ideal architecture, evaluating tools and frameworks, and defining standards and best practices for implementing scalable business solutions
- Implement batch and real-time data ingestion/extraction processes through ETL, streaming, APIs, etc., between diverse source and target systems with structured and unstructured datasets
- Design and build data solutions with an emphasis on performance, scalability, and high reliability
- Code, test, and document new or modified data systems to create robust and scalable applications for data analytics
- Build data models for the analytics and application layers (see the sketch after this list)
- Contribute to leading and building a team of top-performing data technology professionals
- Help with project planning and scheduling
- Expert-level experience with Hadoop cluster components and services (e.g., HDFS, YARN, ZooKeeper, Ambari/Cloudera Manager, Sentry/Ranger, Kerberos)
- Ability to participate and lead in solving technical issues while engaging with infrastructure and vendor support teams.
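For illustration of building a data model for the analytics layer, here is a hedged Spark sketch that aggregates a hypothetical raw claims table into a monthly member summary table. The database, table, and column names are placeholders, and a Hive-enabled Spark session is assumed.

```python
# Hedged sketch: deriving a simple analytics-layer model from a raw claims table.
# Table and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("claims-analytics-model")
         .enableHiveSupport()
         .getOrCreate())

claims = spark.table("raw.claims")  # assumed raw-layer table

# Aggregate claims to one row per member per service month.
monthly_summary = (claims
    .withColumn("service_month", F.date_trunc("month", F.col("service_date")))
    .groupBy("member_id", "service_month")
    .agg(F.count("claim_id").alias("claim_count"),
         F.sum("paid_amount").alias("total_paid")))

(monthly_summary.write
    .mode("overwrite")
    .partitionBy("service_month")
    .saveAsTable("analytics.member_claims_monthly"))  # assumed analytics-layer table
```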
**Job Qualifications**
**Required Education**
Bachelor's Degree
**Required Experience**
- 5-7 years of data management experience.
- Experience building stream-processing systems using solutions such as Kafka, Storm, or Spark Streaming.
- Proven experience with Big Data tools such as Spark, Hive, Impala, PolyBase, Phoenix, Presto, and Kylin.
- Experience integrating data from multiple data sources (using ETL tools such as Talend).
- Experience building solutions with NoSQL databases such as HBase and MemSQL.
- Strong experience with database technologies, Data Warehouse, Data Validation & Certification, Data Quality, Metadata Management, and Data Governance.
- Experience with programming languages such as Java, Scala, or Python.
- Experience implementing Web application and Web Services APIs (REST/SOAP).
**Preferred Education**
Master's Degree
**Preferred Experience**
- 7-10 years of data management experience.
- Experience in the healthcare industry is preferred.
To all current Molina employees: If you are interested in applying for this position, please apply through the intranet job listing.
Molina Healthcare offers a competitive benefits and compensation package. Molina Healthcare is an Equal Opportunity Employer (EOE) M/F/D/V.
Pay Range: $88,453 - $206,981 / ANNUAL
*Actual compensation may vary from posting based on geographic location, work experience, education and/or skill level.
Principal Data Scientist - Generative AI, Machine Learning, Python, R - Remote

Posted 1 day ago
Job Description
**Job Summary**
Responsible for overseeing data science projects, managing and mentoring a team, and aligning data initiatives with business goals. Lead the development and implementation of data models, collaborate with cross-functional teams, and stay updated on industry trends. Ensure ethical data use and communicate complex technical concepts to non-technical stakeholders. Lead initiatives on model governance and model ops to align with regulatory and security requirements. This role requires technical expertise, strategic thinking, and leadership to drive data-driven decision-making within the organization and to pioneer generative AI healthcare solutions aimed at revolutionizing healthcare operations and enhancing the member experience.
**Job Duties**
- **Research and Development:** Stay current with the latest advancements in AI and machine learning and apply these insights to improve existing models and develop new methodologies.
- **AI Model Deployment, Monitoring & Model Governance:** Deploy AI models into production environments, monitor their performance, and adjust as necessary to maintain accuracy and effectiveness and meet all governance and regulatory requirements.
- **Innovation Projects:** Lead pilot projects to test and implement new AI technologies within the organization
- **Data Analysis and Interpretation:** Extract meaningful insights from complex datasets, identify patterns, and interpret data to inform strategic decision-making.
- **Machine Learning Model Development:** Design, develop, and train machine learning models using a variety of algorithms and techniques, including supervised and unsupervised learning, deep learning, and reinforcement learning.
- **Agentic Workflows Implementation:** Develop and implement agentic workflows that utilize AI agents for autonomous task execution, enhancing operational efficiency and decision-making capabilities.
- **RAG Pattern Utilization:** Employ retrieval-augmented generation patterns to improve the performance of language models, ensuring they can access and utilize external knowledge effectively to enhance their outputs (an illustrative sketch follows this list).
- **Model Fine-Tuning:** Fine-tune pre-trained models to adapt them to specific tasks or datasets, ensuring optimal performance and relevance in various applications.
- **Data Cleaning and Preprocessing:** Prepare data for analysis by performing data cleaning, handling missing values, and removing outliers to ensure high-quality inputs for modeling.
- **Collaboration:** Work closely with cross-functional teams, including software engineers, product managers, and business analysts, to integrate AI solutions into existing systems and processes.
- **Documentation and Reporting:** Create comprehensive documentation of models, methodologies, and results; communicate findings clearly to non-technical stakeholders.
- Mentors, coaches, and provides guidance to newer data scientists.
- Partner closely with business and other technology teams to build ML models that help improve Star ratings, close care gaps, and advance other business objectives.
- Present complex analytical information to audiences at all levels in a clear and concise manner; collaborate with the analytics team, assigning and managing delivery of analytical projects as appropriate.
- Perform other duties as business requirements change, looking out for data solutions and technology-enabled solution opportunities and making referrals to the appropriate team members in building out payment integrity solutions.
- Use a broad range of tools and techniques to extract insights from current industry or sector trends
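As a rough sketch of the RAG pattern described in the duties above, the snippet below retrieves the most relevant passages for a question and prepends them to the prompt sent to a language model. TF-IDF retrieval stands in for a production embedding index, the sample documents are invented, and `call_llm` is a placeholder for whichever model API is actually used.

```python
# Hedged sketch of the retrieve-then-augment shape of RAG.
# The knowledge base and the LLM call are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Prior authorization is required for imaging procedures over $500.",
    "Members may appeal a denied claim within 60 days of the decision.",
    "Telehealth visits are covered at the same rate as in-person visits.",
]  # invented example passages

vectorizer = TfidfVectorizer().fit(documents)
doc_matrix = vectorizer.transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(question: str) -> str:
    """Prepend retrieved context to the question before sending it to a model."""
    context = "\n".join(retrieve(question))
    return f"Use the context below to answer.\n\nContext:\n{context}\n\nQuestion: {question}"

# answer = call_llm(build_prompt("How long do members have to appeal a denial?"))  # placeholder LLM call
print(build_prompt("How long do members have to appeal a denial?"))
```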
**Job Qualifications**
**REQUIRED EDUCATION:**
Master's Degree in Computer Science, Data Science, Statistics, or a related field
**REQUIRED EXPERIENCE/KNOWLEDGE, SKILLS & ABILITIES:**
- 10+ years' work experience as a data scientist, preferably in a healthcare environment; candidates with suitable experience in other industries will be considered
- Knowledge of big data technologies (e.g., Hadoop, Spark)
- Familiar with relational database concepts, and SDLC concepts
- Demonstrate critical thinking and the ability to bring order to unstructured problems
- **Technical Proficiency:** Strong programming skills in languages such as Python and R, and experience with machine learning frameworks like TensorFlow, Keras, or PyTorch.
- **Statistical Analysis:** Excellent understanding of statistical methods and machine learning algorithms, including k-NN, Naive Bayes, SVM, and neural networks (see the sketch after this list).
- **Experience with Agentic Workflows:** Familiarity with designing and implementing agentic workflows that leverage AI agents for autonomous operations.
- **RAG Techniques:** Knowledge of retrieval-augmented generation techniques and their application in enhancing AI model outputs.
- **Model Fine-Tuning Expertise:** Proven experience in fine-tuning models for specific tasks, ensuring they meet the required performance metrics.
- **Data Visualization:** Proficiency in data visualization tools (e.g., Tableau, Power BI) to present complex data insights effectively.
- **Database Management:** Experience with SQL and NoSQL databases, data warehousing, and ETL processes.
- **Problem-Solving Skills:** Strong analytical and problem-solving abilities, with a focus on developing innovative solutions to complex challenges.
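As a small, self-contained example of the modeling skills named in the Statistical Analysis item above, the sketch below fits and compares k-NN and SVM classifiers with scikit-learn on synthetic data; no real member data or organization-specific tooling is assumed.

```python
# Hedged sketch: comparing k-NN and SVM classifiers on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in dataset; a real project would use governed, de-identified data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                    ("SVM", SVC(kernel="rbf", C=1.0))]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```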
**PREFERRED EDUCATION:**
PhD or additional experience
**PREFERRED EXPERIENCE:**
- Experience with cloud platforms (e.g., Databricks, Snowflake, Azure AI Studio) for working with AI workflows and deploying models.
- Familiarity with natural language processing (NLP) and computer vision techniques.
#PJCorp2
#LI-AC1
To all current Molina employees: If you are interested in applying for this position, please apply through the intranet job listing.
Molina Healthcare offers a competitive benefits and compensation package. Molina Healthcare is an Equal Opportunity Employer (EOE) M/F/D/V.
Pay Range: $117,731 - $275,491 / ANNUAL
*Actual compensation may vary from posting based on geographic location, work experience, education and/or skill level.
Junior Machine learning/AI engineer/Software Developer
Posted 9 days ago
Job Description
Entry-level job seekers struggle to get responses to their applications and are often ghosted after interviews. In this environment, job seekers need to differentiate themselves by acquiring exceptional skills and technologies so that they can fill multiple roles at a client, since clients now want to expand the roles and responsibilities assigned to a particular job to save costs.
Since 2010, SynergisticIT has helped job seekers differentiate themselves by providing candidates with the requisite skills and experience to outperform at interviews and with clients. Here at SynergisticIT, we don't just focus on getting you a job; we make careers.
All positions are open to all visas and US citizens.
We are matchmakers: we provide clients with candidates who can perform from day one. In this challenging economy, every client wants to save money and get the best value for it. Job seekers need to self-evaluate whether they have the requisite skills to meet client requirements and needs, especially since clients post-COVID can also hire remote workers, which further increases competition.
We at SynergisticIT understand the mismatch between employers' requirements and employees' skills, which is why since 2010 we have helped thousands of candidates get jobs at technology clients such as Apple, Google, PayPal, Western Union, Client, Visa, and Walmart Labs, to name a few.
We have an excellent reputation with our clients. Currently, we are looking for entry-level software programmers, Java full-stack developers, Python/Java developers, data analysts/data scientists, and machine learning engineers for full-time positions with clients.
Who should apply: recent Computer Science, Engineering, Mathematics, Statistics, or Science graduates, or people looking to switch careers or who have had gaps in employment and are looking to build their careers in the tech industry.
We assist candidates with STEM extension filing as well as H-1B and green card filing.
Please check the links below:
Why do Tech Companies not Hire recent Computer Science Graduates | SynergisticIT
Technical Skills or Experience? | Which one is important to get a Job? | SynergisticIT
For preparing for interviews please visit
We are looking for the right matching candidates for our clients
Please apply via the job posting
REQUIRED SKILLS for Java/Full Stack/Software Programmer positions
• Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, Information Systems, or IT
• Highly motivated, self-learner, and technically inquisitive
• Experience in the Java programming language and understanding of the software development life cycle
• Project work on the skills listed
• Knowledge of Core Java, JavaScript, C++, or software programming
• Experience with Spring Boot, microservices, Docker, Jenkins, and REST APIs
• Excellent written and verbal communication skills
For Data Science/Machine Learning positions
REQUIRED SKILLS
• Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, Information Systems, or IT
• Project work on the technologies needed
• Highly motivated, self-learner, and technically inquisitive
• Experience in the Java programming language and understanding of the software development life cycle
• Knowledge of statistics, SAS, Python, computer vision, and data visualization tools
• Excellent written and verbal communication skills
Preferred skills: NLP, text mining, Tableau, Power BI, SAS, TensorFlow
If you receive emails from our skill enhancement team, please reply to them or ask them to remove you from their distribution list and mark you as unavailable, as they share the same database as the client servicing team, who only connect with candidates who match client requirements.
No phone calls, please. Shortlisted candidates will be contacted. No third-party, agency, or C2C candidates.
Data Scientist - COBRA
Posted today
Job Description
Requisition Number: 24963
Required Travel: 0 - 10%
Employment Type: Full Time/Salaried/Exempt
Anticipated Salary Range: - $170,000.00
Security Clearance: TS/SCI
Level of Experience: Senior
This opportunity resides with Warfare Systems (WS), a business group within HII's Mission Technologies division. Warfare Systems comprises cyber and mission IT; electronic warfare; and C5ISR systems.
HII works within our nation's intelligence and cyber operations communities to defend our interests in cyberspace and anticipate emerging threats. Our capabilities in cybersecurity, network architecture, reverse engineering, software and hardware development uniquely enable us to support sensitive missions for the U.S. military and federal agency partners.
Meet HII's Mission Technologies Division
Our team of more than 7,000 professionals worldwide delivers all-domain expertise and advanced technologies in service of mission partners across the globe. Mission Technologies is leading the next evolution of national defense - the data evolution - by accelerating a breadth of national security solutions for government and commercial customers. Our capabilities range from C5ISR, AI and Big Data, cyber operations and synthetic training environments to fleet sustainment, environmental remediation and the largest family of unmanned underwater vehicles in every class. Find the role that's right for you. Apply today. We look forward to meeting you.
To learn more about Mission Technologies, click here for a short video:
Job Description
Huntington Ingalls Industries (HII) Mission Technologies Warfare Systems partners with the DoD and defense innovation ecosystem to rapidly acquire and field critical and emerging technologies, particularly integrated communications, networking, and Systems-of-Systems (SoS) technologies, to enhance national security and warfighter capabilities. Through a multiagency contracting approach, the Collaborative Operations for Battlespace Resilient Architecture (COBRA) initiative focuses on advancing these technologies to achieve multi-domain battlespace integration and resilient command and control. This includes a broad range of services such as systems engineering, cybersecurity, operational integration, and data analytics, all aimed at modernizing communications and ensuring seamless information exchange for the DoD and its allies.
This role is part of a pending contract proposal. We are actively interviewing to identify qualified candidates in anticipation of a successful award
Essential Job Responsibilities
Provides support in identifying, coordinating, harvesting, and exposing relevant data sources in support of Mission Partner data integration requirements. Leverages or develops quality and confidence methods to inform data processing and the effective downstream analytical utility of novel data sources. Supports the development of new analytic techniques and the revision of existing ones to ensure extraction of the most value from new and existing data. Contributes to data schema development and revision and ensures proper mapping of new data sources to the data schema. Conducts long-term data and trend analysis to identify systemic vulnerabilities and high-priority threats, and recommends strategies to mitigate threats and challenges to the supply chain, providing best practices for implementing Artificial Intelligence (AI) and Machine Learning (ML). Provides direct on-site Mission Partner support in identifying, coordinating, harvesting, and exposing relevant Mission Partner data sources across multiple domains and classification levels. Provides technical support to Mission Partners and serves as a subject matter expert for integrating capabilities into each partner's mission and as a specialist in harvesting and operationalizing multi-intelligence data. Performs duties within a laboratory or cloud-based environment.
Minimum Qualifications
9 years relevant experience with a Bachelor's degree in a related field; 7 years relevant experience with a Master's degree in a related field; 4 years relevant experience with a PhD or Juris Doctorate in a related field; or a High School Diploma or equivalent and 13 years relevant experience.
Must possess an active TS/SCI clearance.
- Demonstrated ability to communicate complex concepts clearly and effectively
- Capacity to manage competing priorities in a fast-paced environment
- Proven track record of contributing to successful team outcomes
Preferred Requirements
- Knowledge of DoD acquisition and contracting processes
- Working familiarity with federal compliance standards (e.g., DFARS, ITAR)
- PMP, CISSP, or other relevant certifications preferred
Physical Requirements
May require working in an office, industrial, shipboard, or laboratory environment. Capable of climbing ladders and tolerating confined spaces and extreme temperature variances.
The listed salary range for this role is intended as a good faith estimate based on the role's location, expectations, and responsibilities. When extending an offer, HII's Mission Technologies division takes a variety of factors into consideration which include, but are not limited to, the role's function and a candidate's education or training, work experience, and key skills.
Together we are working to ensure a future where everyone can be free and thrive.
All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.
Do You Need Assistance?
If you need a reasonable accommodation for any part of the employment process, please send an e-mail to and let us know the nature of your request and your contact information. Reasonable accommodations are considered on a case-by-case basis. Please note that only those inquiries concerning a request for reasonable accommodation will be responded to from this email address. Additionally, you may also call for assistance. Press #3 for HII Mission Technologies.