9,185 Data Engineer jobs in the United States
Data Engineer
Posted today
Job Description
Data Engineer
Arlington, VA
100% Remote
$95,000/yr - $140,000/yr + benefits
MUST:
Active Secret clearance required
5+ years overall professional experience in IT and Data management
3 years of experience as a Data Engineer with demonstrated experience in relational databases (creating staging, fact, and dimension tables).
3 years ETL expertise with Databricks, Python, Spark, Scala, JavaScript/JSON, and PL-SQL.
3 years Qlik and Tableau report/dashboard development.
Strong ability to interpret data requirements and produce actionable visualizations.
Experience with AWS S3, Delta tables, and external tables is preferred.
Skilled in developing/implementing pipelines using Databricks, Python, Spark, Scala, JavaScript/JSON, and PL-SQL.
Detail-oriented, delivering high-quality results with minimal oversight.
Bachelor's or Master's degree (B.A./B.S. or M.A./M.S.).
DUTIES:
Design and maintain enterprise databases, data warehouses, and multidimensional networks.
Set database standards for operations, programming, queries, and security.
Create and optimize data models, integrating new systems and improving performance.
Maintain the CMIS ETL pipeline: troubleshoot, update for new data, and manage ingestion/transformation (an illustrative pipeline sketch follows this list).
Develop data products: create Qlik/Tableau reports, refactor dashboard data ingest, document usage, and resolve automation/production issues.
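The duties above center on dimensional modeling and Databricks/Spark ETL. As a rough illustration only (not this employer's actual code), the PySpark sketch below shows a staging-to-dimension load of the kind described; the table names stg_customer and dim_customer are hypothetical.

```python
# Illustrative sketch only: a minimal Databricks/PySpark load from a staging
# table into a dimension table. Table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dim_load_example").getOrCreate()

staging = spark.table("stg_customer")

dim_customer = (
    staging
    .select("customer_id", "customer_name", "region")
    .dropDuplicates(["customer_id"])                  # one row per business key
    .withColumn("load_ts", F.current_timestamp())     # audit column
)

# Overwrite the dimension for simplicity; a production pipeline would more
# likely perform an incremental merge/upsert into a Delta table.
dim_customer.write.mode("overwrite").saveAsTable("dim_customer")
```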
Progression Inc. is an affirmative action/equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, status as a protected veteran, or status as an individual with a disability.
Data Engineer/Senior Data Engineer/Lead Data Engineer
Posted 20 days ago
Job Description
Job Title: Data Engineer/Senior Data Engineer/Lead Data Engineer
Location: San Diego, San Mateo, or Remote
FLSA Status: Exempt
ClearNote Health is a precision oncology company developing non-invasive diagnostic tests that detect cancer early—when it’s most treatable. We are seeking a Lead Data Engineer to play a pivotal role in shaping and operating the data infrastructure that powers both our scientific discovery and commercial execution.
In this high-impact role, you’ll be responsible for architecting and maintaining the data systems that unify our business, clinical, and scientific data into a trusted and accessible platform. You’ll collaborate cross-functionally with R&D, Lab Operations, G&A, Regulatory, and Software Engineering teams to ensure timely, accurate, and secure access to critical data.
This is an outstanding opportunity for an experienced data engineer who thrives in a fast-paced environment, enjoys working across diverse domains, and is passionate about using data to transform patient outcomes.
Key Responsibilities
- Partner with stakeholders across the company to understand their data needs and the systems generating that data.
- Design and implement a scalable, sustainable and reliable data operations strategy.
- Own the architecture and ongoing evolution of our enterprise data warehouse.
- Develop and maintain data contracts between systems of record and the warehouse to ensure data quality and transparency (see the illustrative sketch after this list).
- Continuously improve the architecture and pipelines for ingesting, transforming, and delivering data.
- Create and maintain high-quality documentation for systems, pipelines, and data models.
- Collaborate with QA and Regulatory teams to ensure data systems support clinical and regulatory requirements.
- Work closely with the R&D team to capture, organize and analyze NGS-based laboratory data from R&D and production environments.
- Collaborate with BI team to deliver data from scientific analysis to our business intelligence platform.
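The responsibilities above mention data contracts between systems of record and the warehouse. As a loose illustration (not ClearNote's actual implementation), the sketch below shows a minimal contract check in Python; the field names and sample record are hypothetical.

```python
# Illustrative sketch only: a minimal "data contract" check for incoming records.
# Contract fields and the sample row are hypothetical.
EXPECTED_SCHEMA = {
    "sample_id": str,
    "collection_date": str,   # ISO 8601 date string
    "assay_type": str,
    "result_value": float,
}

def validate_row(row: dict) -> list[str]:
    """Return a list of contract violations for one incoming record."""
    errors = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in row:
            errors.append(f"missing field: {field}")
        elif not isinstance(row[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(row[field]).__name__}"
            )
    return errors

print(validate_row({"sample_id": "S-001", "collection_date": "2024-01-15",
                    "assay_type": "cfDNA", "result_value": "not-a-number"}))
# -> ['result_value: expected float, got str']
```

In practice a check like this would run at the ingestion boundary, with violations routed to a quarantine table rather than printed.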
Qualifications
- Bachelor's degree in Computer Science, Information Systems, Laboratory Science, or a related field; Master's degree preferred.
- 3+ years of experience in data engineering or data operations roles.
- Strong proficiency in SQL and Python.
- Demonstrated experience with data warehousing and transforming raw data into useful models for business and scientific use.
- Excellent written and verbal communication skills, with the ability to collaborate effectively across technical and non-technical teams.
- Experience working with scientific data, particularly from next-generation sequencing (NGS), is strongly preferred.
- Familiarity with common bioinformatics data formats is a plus.
- Bonus: Familiarity with laboratory workflows and relevant regulatory standards (e.g., FDA, CLIA, ISO).
What We Value
At ClearNote Health, we are driven by our mission and guided by our values:
- Put Patients First
- Redefining the Possible
- Together We Win
Compensation
Title and compensation will be commensurate with skills, experience, and qualifications. The estimated base salary range for this position is $140,000 to $250,000, with potential variation based on depth of experience and location. This role includes stock options and generous benefits.
Come join us in addressing large healthcare needs through precision epigenomic medicine!
ClearNote Health is an exciting life science company that is reinventing non-invasive molecular diagnostic testing using next-generation epigenomic technologies. We are passionate and dedicated to discovering and developing medicines that will make a significant difference in cancer and other epigenomic-driven diseases. Our technologies provide novel insight into and quantitation of human health and disease, with our focus on precision medicine applications improving both clinical and health system outcomes. Our company was founded based on pioneering work in the Stanford laboratory of Stephen Quake, with advisors from Stanford and UCSF.
We look for extraordinary lifelong learners with a passion and growth mindset for these areas, and for combining biological ingenuity with AI and data analysis. Led by a team with decades of experience bringing products from concept to market, we are an equal opportunity employer and value diversity at our company.
We provide generous benefits to all employees including stock options. We are building a world-class company, based in San Diego and San Mateo.
Our commitment to Diversity, Equity, Inclusion, and Belonging:
We celebrate diversity in perspectives and backgrounds, and this is reflected in our innovation, our mission, and our values. Our differences make us unique, help us innovate, and allow us to persevere. We strive to achieve representation and inclusion, and to redefine the possible in patients living longer lives.
Data Engineer / Senior Data Engineer
Posted 23 days ago
Job Description
Arcadia is dedicated to happier, healthier days for all. We transform diverse data into a unified fabric for health. Our platform delivers actionable insights for our customers to advance care and research, drive strategic growth, and achieve financial success. For more information, visit arcadia.io.
Why This Role Is Important To Arcadia
The Arcadia Data Engineering team onboards and supports the data feed integrations between client Claim and Clinical data management platforms and our Healthcare Solution Platform. Our customers are top healthcare providers and payers, and we help them integrate their internal systems with our analytic platform. The Data Engineering team is responsible for the data architecture that drives partnerships with customers and other internal organizations, driving success through adoption of cutting-edge analytic solutions that leverage modern technologies and best practices. Our Data Engineers need both SQL database and design knowledge, along with proficiency in multiple programming languages.
As a Data Engineer, you will drive the successful development of solution architecture and the completion of data pipeline connectors that automate the flow of data between client Claim and Clinical data platforms and our analytic health solution platform. Your efforts will be critical to driving the long-term partnership between Arcadia and our customers.
What Success Looks Like
In 3 months
- Learn the different areas of the data connector life cycle, and develop a working knowledge of the technical stacks, storage platforms, data models, and Arcadia's development cycle
- Familiarize yourself with the existing data pipeline and associated product capabilities
- Begin working on new ingestion pipelines at full bandwidth as formal training concludes
In 6 months
- Work on higher level enhancement requests and ingestion pipelines
- Deliver data reviews to clients and other departments regarding code quality and test cases
- Set your own personal vision of development and career aspirations and set a working path forward with leadership to work on how we can help you attain those goals
- Contribute to scrum ceremonies within the dev cycles while keeping status and progress updated in Jira
In 12 months
- Develop/support a range of data pipelines with varying complexity
- Work with Product, Engineering or Implementation to build out tools for better data integration capabilities
- Work on standardized data connector development
- Pick a SME (subject matter expert) path that aligns with your career development goals and evolving needs of the business
What You'll Be Doing
- As a Senior Data Engineer, you will drive the successful development of solution architecture and the completion of data pipeline connectors that automate the flow of data between client data platforms and our analytic health solution platform. Your outcomes are critical to driving the long-term partnership between Arcadia and our customers.
- Design and documentation of connectors / ingestion pipelines
- Build and unit testing of delivery connectors / ingestion pipelines
- Support our team processes by taking part in peer code reviews, sprint planning, product grooming, Jira task maintenance, and peer test reviews
- You will be expected to contribute to multiple implementations simultaneously, which will include both new customer setup as well as support and enhancements for existing customers
- Responsible for delivery of work on expected timelines
- Able to identify risk to project success and communicate to leadership
- Works mostly independently on delivery with decreasing involvement from engineering and more senior team members
- Consistently deliver pipelines of increasing quality with "lessons learned" incorporated into next project
- Able to apply critical thinking and problem solving skills to propose solutions for complex problems within day to day work
- Build a working and growing knowledge of the new tech stack, with less focus on optimizing the technology and more focus on understanding how to use it
- Developing ability to understand technical issues and communicate potential solutions to team members or engineering team
- Developing working knowledge of the business of healthcare data and how it interacts within the Arcadia products
- Understanding of the shared-value contracts our customers participate in and how those contracts affect the data
- Developing knowledge of expected industry data values, such as PMPM (per member per month) by line of business and member-month (MM) trends
- Developing internal and external professional communication skills including presentation of issues using appropriate industry vocabulary
- Responsible for contributing to the advancement of team processes and internal
The day-to-day expectations of an engineer fall into the following areas: Delivery, Technical Domain Knowledge, Business Domain Knowledge, Communication Skills, and Team Projects.
- Senior Experience Level of 5-8 years post-grad with relevant industry experience or graduate level degree
- As a data engineer you will be expected to troubleshoot coding issues and build enhancements in Spark-based frameworks, while also leveraging your technical skills in idea sessions on process improvement and proof-of-concept (POC) design for carrying out a solution
- SQL: 2-4 years (Preferred)
- Spark: 1-2 years (Preferred)
- NoSQL Databases: 1-2 years (Preferred)
- Database Architecture: 2-3 years (Preferred)
- Cloud Architecture: 1-2 years (Preferred)
- As a data engineer you will be expected to solve basic data analysis issues and work with the data to create analytic enhancements
- Healthcare Data: 2-4 years (Preferred)
- Healthcare Analytics: 1-3 years (Preferred)
Tech and Data (a toy ingestion sketch follows this list):
- AWS (S3 and EC2): 2-3 years
- Python: 3-5 years
- NiFi: 3-5 years
- Kafka: 3-5 years
- Java: 3-5 years
- Healthcare Analytics: 3-5 years
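The role above revolves around building ingestion connectors for client claim and clinical data feeds. As a toy illustration only (not Arcadia's actual connectors), the PySpark sketch below reads a raw claims extract and standardizes a few columns; the file path, column names, and target table are hypothetical.

```python
# Illustrative sketch only: a toy "ingestion connector" that reads a raw claims
# extract and standardizes a few columns with PySpark. Path, columns, and
# target table are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_connector_example").getOrCreate()

raw = spark.read.option("header", True).csv("/landing/client_a/claims_2024.csv")

standardized = (
    raw
    .withColumn("service_date", F.to_date("service_date", "MM/dd/yyyy"))  # normalize dates
    .withColumn("paid_amount", F.col("paid_amount").cast("double"))       # enforce numeric type
    .withColumn("source_system", F.lit("client_a"))                       # track provenance
)

standardized.write.mode("append").saveAsTable("staging.claims")
```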
- Chance to be surrounded by a team of extremely talented and dedicated individuals driven to succeed
- Be a part of a mission driven company that is transforming the healthcare industry by changing the way patients receive care
- A flexible, remote friendly company with personality and heart
- Employee driven programs and initiatives for personal and professional development
- Be a member of the Arcadian and Barkadian Community
About Arcadia
Arcadia.io helps innovative providers and payers across the country transform healthcare to reduce cost while improving patient health. We do this by aggregating large amounts of disparate data, applying algorithms to identify opportunities to provide better patient care, and making those opportunities actionable by physicians at the point of care in near-real time. We are passionate about helping our customers drive meaningful outcomes. We are growing fast and have emerged as a market leader in the highly competitive population health management software market and have been recognized by industry analysts KLAS, IDC, Forrester, and Chilmark for our leadership. For a better sense of our brand and products, please explore our website.
Protect Yourself
If you have concerns about the authenticity of a job offer or recruitment-related communication claiming to be from Arcadia, we encourage you to verify by contacting us directly at and select option 3. For more information, visit our website.
This position is responsible for following all Security policies and procedures in order to protect all PHI under Arcadia's custodianship as well as Arcadia Intellectual Properties. For any security-specific roles, the responsibilities would be further defined by the hiring manager.
Data Engineer I/Data Engineer II
Posted 6 days ago
Job Description
Salary commensurate with experience and qualifications
About SMU
SMU's more than 12,000 diverse, high-achieving students come from all 50 states and over 80 countries to take advantage of the University's small classes, meaningful research opportunities, leadership development, community service, international study and innovative programs.
SMU serves approximately 7,000 undergraduates and 5,000 graduate students through eight degree-granting schools: Dedman College of Humanities and Sciences, Cox School of Business, Lyle School of Engineering, Meadows School of the Arts, Simmons School of Education and Human Development, Dedman School of Law, Perkins School of Theology, and Moody School of Graduate and Advanced Studies.
SMU is data driven, and its powerful supercomputing ecosystem - paired with entrepreneurial drive - creates an unrivaled environment for the University to deliver research excellence.
Now in its second century of achievement, SMU is recognized for the ways it supports students, faculty and alumni as they become ethical, enterprising leaders in their professions and communities. SMU's relationship with Dallas - the dynamic center of one of the nation's fastest-growing regions - offers unique learning, research, social and career opportunities that provide a launch pad for global impact.
SMU is nonsectarian in its teaching and committed to academic freedom and open inquiry.
About the Department:
SMU DataArts provides data-driven insights for the agencies, funders, and organizations shaping arts and culture across the country. We specialize in identifying trends affecting cultural organizations, assessing vibrancy across the arts ecosystem, and illuminating barriers to access and inclusion in all areas of culture.
As a research center at SMU that collaborates across multiple disciplines, we develop innovative approaches and curated datasets, and apply them to critical challenges facing the sector. SMU DataArts' mission is to equip the arts and culture ecosystem with data and research insights that help build strong, vibrant, and equitable communities. We partner with organizations whose strategies have wide influence across the sector to maximize impact.
About the Position:
The SMU DataArts team is distributed across the country. This position can work remotely or from SMU's Dallas campus.
SMU DataArts is seeking a dynamic individual to serve as a Data Engineer I or Data Engineer II, depending on level of experience.
The Data Engineer is responsible for the design, management, and optimization of the organization's Snowflake data warehouse. This role oversees the integration of public and private datasets into core research workflows, ensuring data consistency, security, and accessibility for analytical and operational use. The position requires expertise in data engineering, ETL processes, and API-based data integration to support research and data-driven decision-making.
The Data Engineer also pulls data insights from DataArts' databases and other datasets for research purposes. This can involve compiling, analyzing and visualizing data independently, or supporting research projects and responding to requests internally and externally.
Essential Functions:
- Data Warehouse Management: Oversee and maintain the Snowflake data warehouse, ensuring its architecture supports efficient storage, querying, and scalability; Implement best practices for performance optimization, cost efficiency, and security in Snowflake; Develop and manage role-based access controls (RBAC) and data governance policies to protect sensitive data (an illustrative RBAC sketch follows this list).
- Data Integration & Automation: Ingest and clean public and private datasets to build the core data infrastructure for research needs; Develop and maintain connections to external data sources via APIs to enable real-time or batch data ingestion; Design and implement ETL pipelines for structured and unstructured data; Automate data pipelines using Python, SQL, and dbt.
- Data Quality, Security, & Compliance: Establish data validation and quality control processes to maintain high-integrity datasets; Monitor data quality and resolve data discrepancies across integrated sources; Align with leading and/or required data privacy frameworks, particularly for sensitive or restricted data; Implement disaster recovery and backup solutions for critical datasets
- Data Analysis and Project Support: Conduct data analysis and data compilations to support projects; Develop analytical tools, dashboards, and pipelines for research projects; Use statistical tools and methodologies to generate insights for research or data projects; Visualize data with tables, charts, graphs and maps for internal and external use.
- Cross-Functional Collaboration: Collaborate with researchers and communication team to understand data requirements and optimize workflows; Work closely with the IT and security teams to enforce data privacy and compliance standards; Support the documentation of data, metadata, process and tools.
- Continuous Improvement & other duties as assigned: Identify and propose enhancements in data management processes and systems; Stay current with emerging tools, technologies, and best practices in research and data management.
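Several of the functions above involve Snowflake role-based access control and pipeline automation. The sketch below is illustrative only, not SMU DataArts' actual setup: it lists example Snowflake RBAC grants and a small helper that would apply them through an open connection cursor; the role, warehouse, database, and schema names are hypothetical.

```python
# Illustrative sketch only: example Snowflake RBAC grants of the kind the
# posting describes. Role, warehouse, database, and schema names are hypothetical.
RBAC_STATEMENTS = [
    "CREATE ROLE IF NOT EXISTS ANALYST_READONLY",
    "GRANT USAGE ON WAREHOUSE RESEARCH_WH TO ROLE ANALYST_READONLY",
    "GRANT USAGE ON DATABASE DATAARTS TO ROLE ANALYST_READONLY",
    "GRANT USAGE ON SCHEMA DATAARTS.CORE TO ROLE ANALYST_READONLY",
    "GRANT SELECT ON ALL TABLES IN SCHEMA DATAARTS.CORE TO ROLE ANALYST_READONLY",
    "GRANT SELECT ON FUTURE TABLES IN SCHEMA DATAARTS.CORE TO ROLE ANALYST_READONLY",
]

def apply_rbac(cursor) -> None:
    """Run each grant against an open Snowflake cursor
    (e.g. one from snowflake-connector-python)."""
    for stmt in RBAC_STATEMENTS:
        cursor.execute(stmt)
```

The FUTURE TABLES grant keeps new tables in the schema readable without re-running grants, which is the usual way to keep read-only analyst access low-maintenance.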
Bachelor's degree is required.
A minimum of two years of work experience in data wrangling, database management, and the design, engineering, and implementation of end-to-end software solutions, from prototype through production pipeline, is required for the Data Engineer I level; a minimum of three years of the same experience is required for the Data Engineer II level.
Candidate must have experience with multiple statistical languages (e.g., Python and R in Linux platforms) to query databases, manipulate data, and maintain large data sets. Strong relational database skills (e.g., MS SQL) and working knowledge of Cloud services (e.g., AWS) are also required.
Knowledge, Skills and Abilities:
Candidate with experiences in non-profit financing, economics, and project management are preferred. Candidate must demonstrate strong interpersonal and verbal communication skills, with the ability to communicate broadly across the University and develop and maintain effective relationships with a wide range of constituencies. Must also demonstrate strong written communication skills.
Candidate must possess strong problem-solving skills with the ability to identify and analyze problems, as well as devise solutions. Must also have strong organizational, planning and time management skills.
Python and R on Linux platforms and relational database skills, such as MS SQL, are required. Proficiency using a UNIX OS in a high-performance environment is preferred. Intermediate to advanced Microsoft Excel skills are desired. Familiarity with ArcGIS, Tableau, and other data analysis and visualization software (e.g., SAS, Stata, Python, R) and experience using Salesforce are preferred.
Physical and Environmental Demands:
- Sit for long periods of time
Deadline to Apply:
September 19, 2025
EEO Statement
SMU is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, age, disability, genetic information, veteran status, sexual orientation, or gender identity and expression.
Benefits:
SMU offers staff a broad, competitive array of health and related benefits. In addition to traditional benefits such as health, dental, and vision plans, SMU offers a wide range of wellness programs to help attract, support, and retain our employees, whose work continues to make SMU an outstanding education and research institution.
SMU is committed to providing an array of retirement programs that benefit and protect you and your family throughout your working years at SMU and, if you meet SMU's retirement eligibility criteria, during your retirement years after you leave SMU.
The value of learning at SMU isn't just about preparing our students for the future. Employees have access to a wide variety of professional and personal development opportunities, including tuition benefits.
Data Engineer II / Senior Data Engineer
Posted 9 days ago
Job Description
Circle is a financial technology company at the epicenter of the emerging internet of money, where value can finally travel like other digital data - globally, nearly instantly and less expensively than legacy settlement systems. This ground-breaking new internet layer opens up previously unimaginable possibilities for payments, commerce and markets that can help raise global economic prosperity and enhance inclusion. Our infrastructure - including USDC, a blockchain-based dollar - helps businesses, institutions and developers harness these breakthroughs and capitalize on this major turning point in the evolution of money and technology.
What you'll be part of:
Circle is committed to visibility and stability in everything we do. As we grow as an organization, we're expanding into some of the world's strongest jurisdictions. Speed and efficiency are motivators for our success and our employees live by our company values: High Integrity, Future Forward, Multistakeholder, Mindful, and Driven by Excellence. We have built a flexible and diverse work environment where new ideas are encouraged and everyone is a stakeholder.
What you'll be responsible for:
As a member of the Data Engineering team, you own the data warehouse and pipelines that are used for blockchain analytics and reporting. Your work powers Circle business functions, including product development for actionable insights and operational excellence to fuel business growth. In this role, you will work closely with other data engineers to build up a robust data warehouse. High integrity to data accuracy and quality is crucial. Your work will directly impact Circle's transparency, trust, and accountability needs.
What you'll work on:
- Collaborate with fellow data engineers, data analysts, and data scientists to produce useful data for crypto blockchain analytics to inform product decisions.
- Build and maintain ELT pipelines, parsing complex blockchain data, to source and aggregate the required data for analysis and reporting (a brief illustrative sketch follows this list).
- Design data models and maintain reliable metadata to give context for improved data discoverability.
- Continually improve data warehouse operations, monitoring, and performance.
- Serve as a domain expert in blockchain data modeling, pipelines, quality, and warehousing.
- Leverage AI to increase your work efficiency in code development and transform old processes.
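The bullets above describe ELT pipelines that parse raw blockchain data into analytics-ready aggregates. As a toy illustration only (not Circle's pipeline), the Python sketch below rolls raw transfer records up into daily volumes; the field names and sample data are hypothetical.

```python
# Illustrative sketch only: a tiny ELT-style transform that aggregates raw
# on-chain transfer records into daily volumes. Fields and sample data are hypothetical.
from collections import defaultdict
from datetime import datetime

raw_transfers = [
    {"block_time": "2024-05-01T09:30:00Z", "token": "USDC", "amount": 250.0},
    {"block_time": "2024-05-01T17:05:00Z", "token": "USDC", "amount": 100.0},
    {"block_time": "2024-05-02T03:12:00Z", "token": "USDC", "amount": 75.5},
]

daily_volume: dict[tuple[str, str], float] = defaultdict(float)
for t in raw_transfers:
    day = datetime.fromisoformat(t["block_time"].replace("Z", "+00:00")).date().isoformat()
    daily_volume[(day, t["token"])] += t["amount"]

# In a warehouse this aggregate would land in a fact table; here we just print it.
for (day, token), amount in sorted(daily_volume.items()):
    print(day, token, amount)
```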
For Data Engineer (II)
- 2+ years of data engineering experience with strong data fundamentals.
- Skilled in SQL for data warehouse platforms such as Snowflake, BigQuery, Databricks, and more.
- Proficient in one or more programming languages (Python, Java, Scala).
- Experience with workflow orchestration (Airflow, Dagster, DBT) and cloud platforms (AWS, GCP, Azure).
- Background in blockchain or financial data is a plus.
- Self-starter with a proven ability to learn, collaborate remotely, and communicate effectively in fast-paced environments.
For Senior Data Engineer
- 4+ years of data engineering experience with strong data fundamentals.
- Proficient in SQL within data warehouse systems like Snowflake, BigQuery, Databricks, etc.
- Skilled in building scalable infrastructure to support batch, micro-batch or stream processing for large volumes of data.
- Experience in data provenance, governance, and big data technologies (including open-source solutions).
Starting pay is determined by various factors, including but not limited to: relevant experience, skill set, qualifications, and other business and organizational needs. Please note that compensation ranges may differ for candidates in other locations.
Base Pay Range:
- Data Engineer II: $120,000 - $162,500
- Senior Data Engineer: $147,500 - $195,000
We are an equal opportunity employer and value diversity at Circle. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Additionally, Circle participates in the E-Verify Program in certain locations, as required by law.
Should you require accommodations or assistance in our interview process because of a disability, please reach out to for support. We respect your privacy and will connect with you separately from our interview process to accommodate your needs.
#LI-Remote
Data Engineer
Posted 1 day ago
Job Description
MANTECH seeks a motivated, career and customer-oriented Data Engineer to join our Data and AI Practice onsite in Chantilly and/or Herndon, VA. You will be a part of a dynamic, cross-functional team focused on transforming how Government clients extract insights from data to achieve critical mission objectives. You will be providing support in the analysis, design, implementation, and optimization of complex, enterprise-wide, cloud-based data management solutions that align with mission goals and requirements.
Responsibilities include but are not limited to:
- Designing, deploying, and maintaining cloud-based databases
- Providing support for database management tasks
- Working closely with cloud infrastructure teams to ensure seamless integration of cloud-based databases with applications and services
- Implementing robust security measures to protect sensitive data and ensure compliance with relevant data security policies and practices
Minimum Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field and at least 5 years of experience in data engineering, database administration, or related roles
- Experience with data architecture, engineering, and data modeling
- Experience with managing structured and unstructured data, and knowledge of binary file types like Parquet, Hudi, Iceberg, Delta Lake
- Experience with schema-on-read approaches
- Experience integrating Oracle databases with big data platforms such as Hadoop, Spark, or cloud-based data lakes
- Familiarity with Oracle SQL, MySQL, PostgreSQL, MongoDB, or Oracle Database Engineering.
Preferred Qualifications:
- Master’s degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience with PL/SQL and/or Exadata
- Exposure to cloud platforms such as Oracle Cloud Infrastructure (OCI), AWS, or Azure
- Experience with streaming data architectures such as Apache Kafka or Oracle Stream Analytics
- Experience with semantic layer/integrated data layer concepts
- Knowledge of data security best practices and regulatory compliance
- Experience with DevOps and CI/CD pipelines in a data engineering context
- Oracle certifications such as Oracle Certified Professional or Oracle Big Data certification are a plus
Clearance Requirements:
- Must possess a current and active TS/SCI with polygraph
Physical Requirements:
- Must be able to be in a stationary position more than 50% of the time
- Must be able to communicate, converse, and exchange information with peers and senior personnel
- Constantly operates a computer and other office productivity machinery
- The person in this position frequently communicates with co-workers, management, and customers, which may involve delivering presentations. Must be able to exchange accurate information in these situations
- The person in this position needs to occasionally move about inside the office to access file cabinets, office machinery, etc.
Data Engineer
Posted 2 days ago
Job Description
MANTECH seeks a motivated, career and customer-oriented Data Engineer to join our Data and AI Practice. This position requires full-time on-site presence in Chantilly and/or Herndon, VA.
The ideal candidate will play a critical role in designing, building, and optimizing data pipelines and architecture that support our enterprise-scale data initiatives in an IC customer space. This role demands technical expertise, strategic thinking, and hands-on experience with Oracle-based systems in high-volume environments. If you thrive in a fast-paced, data-driven organization and enjoy solving complex data engineering challenges, we want to hear from you.
Responsibilities include but are not limited to:
- Designing, deploying, maintaining and optimizing cloud-based databases and data pipelines
- Providing support for database management and data governance tasks
- Ensuring data security and integrity
- Working closely with cloud infrastructure teams to ensure seamless integration of data services with analytic and data science applications
Minimum Qualifications
- Must possess a Bachelor’s degree in Engineering, Computer Science, Business or a related field and at least 6 years of experience in data engineering, ETL development, or database management
- Strong expertise in Oracle Database technologies (e.g., Oracle 19c, Oracle Exadata, Oracle Data Integrator - ODI)
- Proven experience with PL/SQL, Oracle partitioning, and performance tuning (an illustrative partitioning sketch follows this list)
- Understanding of data modeling, data warehousing concepts, and star/snowflake schemas
- Knowledge of ETL pipeline design and implementation for large-scale data systems. Experience in shell scripting, Python, or other automation tools
- Familiarity with Oracle GoldenGate or similar CDC (Change Data Capture) tools
- Familiarity with data governance, metadata management, and data lineage practices
- Ability to work with cross-functional teams including data scientists, DBAs, and business analysts.
- Strong verbal and written communication skills for documenting and presenting solutions
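The minimum qualifications above call out Oracle partitioning and PL/SQL experience. As an illustrative sketch only (not MANTECH's or the customer's schema), the snippet below shows Oracle interval/range partitioning DDL wrapped in a small Python helper; the table and column names are hypothetical.

```python
# Illustrative sketch only: example Oracle range/interval partitioning DDL of the
# kind the posting references, run through an open DB-API cursor (e.g. python-oracledb).
# Table and column names are hypothetical.
PARTITIONED_TABLE_DDL = """
CREATE TABLE claims_fact (
    claim_id      NUMBER        NOT NULL,
    service_date  DATE          NOT NULL,
    paid_amount   NUMBER(12, 2)
)
PARTITION BY RANGE (service_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
    PARTITION p_initial VALUES LESS THAN (DATE '2024-01-01')
)
"""

def create_partitioned_table(cursor) -> None:
    """Execute the DDL; Oracle then creates monthly interval partitions automatically."""
    cursor.execute(PARTITIONED_TABLE_DDL)
```

Interval partitioning like this is a common way to keep large fact tables prunable by date without manually adding partitions each month.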
Preferred Qualifications:
- Experience integrating Oracle databases with big data platforms (e.g., Hadoop, Spark, or cloud-based data lakes)
- Familiarity with Oracle Big Data SQL or Oracle Autonomous Database. Exposure to cloud platforms such as Oracle Cloud Infrastructure (OCI), AWS, or Azure
- Experience with streaming data architectures (e.g., Apache Kafka, Oracle Stream Analytics)
- Knowledge of data security best practices and regulatory compliance.
- Experience with DevOps and CI/CD pipelines in a data engineering context
- Master’s degree is preferred. Oracle certifications (e.g., Oracle Certified Professional or Oracle Big Data certification) are a plus
Clearance requirements:
- Must possess a TS/SCI security clearance with Polygraph
Physical Requirements:
- The person in this position must be able to remain in a stationary position 50% of the time.
- Occasionally move about inside the office to access file cabinets, office machinery, or to communicate with co-workers, management, and customers, via email, phone, and or virtual communication, which may involve delivering presentations.