2,686 ETL Engineer Jobs in the United States
ETL Engineer
Posted 1 day ago
Job Description
Overview
In today's rapidly evolving technology landscape, an organization's data has never been a more important aspect in achieving mission and business goals. Our data exploitation experts work with our clients to support their mission and business goals by creating and executing a comprehensive data strategy using the best technology and techniques, given the challenge.
At Steampunk, our goal is to build and execute a data strategy for our clients to coordinate data collection and generation, to align the organization and its data assets in support of the mission, and ultimately to realize mission goals as effectively as possible.
For our clients, data is a strategic asset. They are looking to become facts-based, data-driven, customer-focused organizations. To help realize this goal, they are leveraging visual analytics platforms to analyze, visualize, and share information. At Steampunk you will design and develop solutions to high-impact, complex data problems, working with the best data practitioners around. Our data exploitation approach is tightly integrated with Human-Centered Design and DevSecOps.
Contributions
We are looking for a seasoned ETL Engineer to work with our team and our clients to develop enterprise-grade data pipelines. We are looking for more than just an "ETL Engineer": we want a technologist with excellent communication and customer service skills and a passion for data and problem solving.
- Assess and understand ETL jobs and workflows
- Create reusable data pipelines from source to target systems (a minimal sketch follows this list)
- Test, validate, and deploy ETL pipelines
- Support reporting, business intelligence, and data science end users through ETL and ELT operations
- Work with data architects to create data models and design schemas for RDBMS, warehouse, and data lake systems
- Key must-have skill sets: Python, SQL
- Work within an Agile software development lifecycle
- You will contribute to the growth of our Data Exploitation Practice!
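To make the "reusable source-to-target pipeline" idea concrete, here is a minimal sketch in Python with embedded SQL, the posting's two must-have skills. Every specific in it, the table names, columns, and transformation rule, is an illustrative assumption rather than something taken from the posting; sqlite3 is used only so the sketch runs self-contained, where a production pipeline would target Postgres, Redshift, or similar.

```python
import sqlite3

# Hypothetical file names; a real pipeline would point at systems such as
# Postgres or Redshift instead of local SQLite files.
SOURCE_DB = "source.db"
TARGET_DB = "warehouse.db"

def seed_source(conn):
    """Create a tiny demo source table so the sketch runs end to end."""
    conn.execute("CREATE TABLE IF NOT EXISTS raw_orders (id INTEGER, name TEXT, amount REAL)")
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                     [(1, " alice ", 10.0), (2, "BOB", -5.0), (3, "carol", 7.5)])

def extract(conn):
    """Pull raw rows from the assumed source table."""
    return conn.execute("SELECT id, name, amount FROM raw_orders").fetchall()

def transform(rows):
    """Illustrative transformation: normalize names, drop negative amounts."""
    return [(i, name.strip().title(), amt) for i, name, amt in rows if amt >= 0]

def load(conn, rows):
    """Idempotent load into the target table, keyed on id."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)

def run_pipeline():
    with sqlite3.connect(SOURCE_DB) as src, sqlite3.connect(TARGET_DB) as tgt:
        seed_source(src)
        clean = transform(extract(src))
        load(tgt, clean)
        # Basic validation: the target must hold at least the rows just loaded.
        count = tgt.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        assert count >= len(clean), "load validation failed"

if __name__ == "__main__":
    run_pipeline()
```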
Qualifications
- Ability to hold a position of public trust with the US government.
- 2-4 years of industry experience coding commercial software and a passion for solving complex problems.
- 2-4 years of direct experience in Data Engineering with experience in tools such as:
  - ETL tools: Python, Informatica, Pentaho, Talend
  - Big data tools: Hadoop, Spark, Kafka, etc.
  - Relational SQL and NoSQL databases, including Postgres and Cassandra
  - Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  - AWS cloud services: EC2, EMR, RDS, Redshift (or Azure equivalents)
  - Data streaming systems: Storm, Spark Streaming, etc.
  - Search tools: Solr, Lucene, Elasticsearch
  - Object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
- Advanced working SQL knowledge and experience with relational databases, query authoring and optimization, as well as working familiarity with a variety of databases.
- Experience with message queuing, stream processing, and highly scalable 'big data' data stores.
- Experience manipulating structured and unstructured data for analysis.
- Experience constructing complex queries to analyze results using databases or in a data processing development environment.
- Experience working in an Agile environment.
- Experience supporting project teams of developers and data scientists who build web-based interfaces, dashboards, reports, and analytics/machine learning models.
About Steampunk
Steampunk relies on several factors to determine salary, including but not limited to geographic location, contractual requirements, education, knowledge, skills, competencies, and experience. The projected compensation range for this position is $110,000 to $130,000. The estimate displayed represents a typical annual salary range for this position. Annual salary is just one aspect of Steampunk's total compensation package for employees. Learn more about additional Steampunk benefits here.
Identity Statement
As part of the application process, you are expected to be on camera during interviews and assessments. We reserve the right to take your picture to verify your identity and prevent fraud.
Steampunk is a Change Agent in the Federal contracting industry, bringing new thinking to clients in the Homeland, Federal Civilian, Health and DoD sectors. Through our Human-Centered delivery methodology , we are fundamentally changing the expectations our Federal clients have for true shared accountability in solving their toughest mission challenges. As an employee owned company , we focus on investing in our employees to enable them to do the greatest work of their careers - and rewarding them for outstanding contributions to our growth. If you want to learn more about our story, visit .
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law. Steampunk participates in the E-Verify program.
Job Location US-VA-McLean
Posted Date 3 months ago (5/14/2025 11:43 AM)
Job ID 6418
Clearance Requirement Public Trust
ETL Engineer
Posted 5 days ago
Job Description
ETL Engineer
Job Location: US-VA-Reston
Requisition ID: 2025-155256
Position Category: Information Technology
Clearance: Top Secret/SCI w/Poly
Responsibilities
As an ETL Engineer, you will be responsible for designing, developing, and maintaining the ETL processes that support our data infrastructure. You will work with various teams to gather data from multiple sources, transform it into usable formats, and load it into data warehouses and databases. The ideal candidate will have experience in ETL development, data integration, and a deep understanding of data systems and workflows.
Key Responsibilities:
- Design and develop ETL workflows to extract data from various data sources (databases, APIs, flat files, etc.), transform it as needed, and load it into data warehouses or other storage solutions.
- Optimize ETL processes for efficiency, scalability, and error-free execution.
- Develop data validation routines to ensure data accuracy and integrity during extraction, transformation, and loading (a minimal validation sketch follows this list).
- Work closely with data analysts, data scientists, and business stakeholders to understand data needs and integrate data from various internal and external sources.
- Create seamless and automated data pipelines for real-time or batch data processing.
- Develop and implement transformation logic to convert raw data into usable formats for analysis.
- Ensure data quality by implementing cleansing and validation processes to guarantee consistency and reliability.
- Collaborate with data engineers and database administrators to manage and optimize databases and data warehouses.
- Ensure the ETL process is tightly integrated with data warehouse architecture to maintain consistency and improve performance.
- Monitor ETL processes to ensure timely data loads and troubleshoot any issues or failures in the ETL pipeline.
- Proactively identify bottlenecks and recommend performance improvements to ETL processes.
- Maintain clear documentation on ETL processes, data flows, and transformation logic for future reference.
- Prepare regular reports and communicate the status of ETL workflows, including identifying potential risks and suggesting mitigation strategies.
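As an illustration of the kind of validation routine described in this list, here is a small, hedged sketch using pandas. The column names (record_id, loaded_at, amount) and the specific checks are assumptions chosen for the example, not requirements from the posting.

```python
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list:
    """Run illustrative accuracy/integrity checks on an extracted batch."""
    errors = []
    # Completeness: required columns must exist and contain no nulls.
    for col in ("record_id", "loaded_at"):
        if col not in df.columns:
            errors.append(f"missing column: {col}")
        elif df[col].isna().any():
            errors.append(f"nulls in required column: {col}")
    # Uniqueness: the assumed primary key must not repeat within the batch.
    if "record_id" in df.columns and df["record_id"].duplicated().any():
        errors.append("duplicate record_id values")
    # Range check: an assumed numeric measure must be non-negative.
    if "amount" in df.columns and (df["amount"] < 0).any():
        errors.append("negative amount values")
    return errors

# Tiny in-memory batch demonstrating two deliberate failures.
batch = pd.DataFrame({
    "record_id": [1, 2, 2],
    "loaded_at": ["2025-01-01", "2025-01-01", "2025-01-01"],
    "amount": [5.0, -1.0, 3.0],
})
for problem in validate_batch(batch):
    print("validation error:", problem)
```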
*Position contingent upon Spring 2025 contract award*
Qualifications
Required Skills & Qualifications:
- TS/SCI with Polygraph level clearance is required.
- BA/BS and 8+ years of experience, or a Master's and 6+ years of experience; an additional four years of experience may be considered in lieu of a degree.
- Proven experience as an ETL Engineer, Data Engineer, or in a similar data engineering role.
- Strong knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica, Microsoft SSIS, Apache Airflow, etc.).
- Proficiency in SQL and experience working with relational databases (e.g., MySQL, PostgreSQL, Oracle).
- Experience with cloud-based data platforms (AWS, GCP, Azure) and data warehouse solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake).
- Experience in scripting languages such as Python, Shell, or Bash.
- Strong understanding of data integration, data transformation, and data quality practices.
- Familiarity with version control systems (e.g., Git).
Preferred Skills & Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, AWS Kinesis, etc.).
- Familiarity with data governance and compliance requirements (e.g., GDPR, HIPAA).
- Knowledge of data visualization tools (e.g., Tableau, Power BI) and their integration with data systems.
- Experience with automation tools and CI/CD pipelines for ETL processes.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to work collaboratively in a team environment and communicate effectively with non-technical stakeholders.
Peraton Overview
Peraton is a next-generation national security company that drives missions of consequence spanning the globe and extending to the farthest reaches of the galaxy. As the world's leading mission capability integrator and transformative enterprise IT provider, we deliver trusted, highly differentiated solutions and technologies to protect our nation and allies. Peraton operates at the critical nexus between traditional and nontraditional threats across all domains: land, sea, space, air, and cyberspace. The company serves as a valued partner to essential government agencies and supports every branch of the U.S. armed forces. Each day, our employees do the can't be done by solving the most daunting challenges facing our customers. Visit peraton.com to learn how we're keeping people around the world safe and secure.
Target Salary Range: $135,000 - $216,000. This represents the typical salary range for this position based on experience and other factors.
EEO
Equal opportunity employer, including individuals with disabilities and protected veterans, without regard to any other characteristic protected by law.
ETL Engineer
Posted 5 days ago
Job Description
ETL Engineer
BCforward is currently seeking a highly motivated ETL Engineer for a remote role based out of Denver, CO!
Position Title: ETL Engineer
Location: Denver, CO - Remote!
Anticipated Start Date: 08/20/2025
Please note this is the target date and is subject to change. BCforward will send official notice ahead of a confirmed start date.
Expected Duration: 6+ Months
Pay Range: $63-$66/hr
Please note that actual compensation may vary within this range due to factors such as location, experience, and job responsibilities, and does not encompass additional non-standard compensation (e.g., benefits, paid time off, per diem, etc.).
Description:
Key Responsibilities:
* ETL Development & Innovation: Architect, develop, and optimize sophisticated ETL workflows using Informatica PowerCenter and IICS to manage data extraction, transformation, and loading from diverse sources into Amazon Redshift and other platforms, incorporating real-time and near-real-time processing capabilities.
* Cloud Data Integration & Orchestration: Lead the implementation of cloud-native data integration solutions using IICS, leveraging API-driven architectures to seamlessly connect on-premises, cloud, and hybrid ecosystems, ensuring scalability, resilience, and low-latency data flows.
* Advanced Data Modeling: Design and maintain enterprise-grade logical and physical data models, incorporating advanced techniques like Data Vault or graph-based modeling, to support high-performance data warehousing and analytics.
* Data Warehousing Leadership: Spearhead the development and optimization of Amazon Redshift data structures, utilizing advanced features like Redshift Spectrum, workload management, and materialized views to handle petabyte-scale datasets with optimal performance.
* Advanced Analytics: Conduct in-depth data profiling, cleansing, and analysis using Python and advanced analytics tools to uncover actionable insights and enable predictive and prescriptive analytics.
* Python-Driven Automation: Develop and maintain Python-based scripts and frameworks for data processing, ETL automation, and orchestration, leveraging libraries like pandas, PySpark, or Airflow to streamline workflows and enhance operational efficiency (see the orchestration sketch after this list).
* Performance Optimization & Cost Efficiency: Proactively monitor and optimize ETL processes, Redshift queries, Python scripts, and data pipelines using DevOps practices (e.g., CI/CD for data pipelines) to ensure high performance, cost efficiency, and reliability in cloud environments.
* Cross-Functional Leadership & Innovation: Collaborate with data scientists, AI engineers, business stakeholders, and DevOps teams to translate complex business requirements into innovative data solutions, driving digital transformation and business value.
* Data Governance & Ethics: Champion data governance, quality, and ethical data practices, ensuring compliance with regulations (e.g., GDPR, CCPA) and implementing advanced data lineage, auditing, and observability frameworks.
* Documentation & Thought Leadership: Maintain comprehensive documentation of ETL processes, data models, Python scripts, and configurations while contributing to thought leadership by sharing best practices, mentoring teams, and presenting at industry forums.
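As a concrete, hedged illustration of the Python-driven orchestration described in the list above, the sketch below defines a three-task Airflow DAG. The DAG id, task bodies, and daily schedule are assumptions made for the example (it assumes Airflow 2.4+ for the `schedule` argument); the posting does not prescribe this structure.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling rows from an assumed source system")

def transform():
    print("applying assumed transformation logic")

def load():
    print("loading into an assumed Redshift target")

with DAG(
    dag_id="example_etl_pipeline",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Enforce linear extract -> transform -> load ordering.
    t_extract >> t_transform >> t_load
```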
Experience:
* 10-15 years of hands-on experience with Informatica PowerCenter for designing and implementing complex ETL workflows.
* 5+ years of experience with Informatica Intelligent Cloud Services (IICS) for cloud-based data integration and orchestration.
* 5-7 years of hands-on experience with Amazon Redshift, including advanced schema design, query optimization, and large-scale data management.
* Extensive experience (8+ years) in data modeling (conceptual, logical, and physical) for data warehousing and analytics solutions.
* 7+ years of experience as a Data Analyst, performing advanced data profiling, analysis, and reporting to support strategic decision-making.
* 5+ years of hands-on experience with Python for data processing, automation, and integration with ETL, data warehousing, and analytics platforms.
Technical Skills:
* Expert-level proficiency in Informatica PowerCenter and IICS for ETL and cloud-native data integration.
* Advanced SQL skills for querying, optimizing, and managing Amazon Redshift environments, including expertise in Redshift-specific features.
* Strong expertise in data modeling tools (e.g., ER/Studio, Erwin, or Data Vault) and advanced modeling techniques (e.g., star/snowflake schemas, graph-based models).
* Proficiency in Python for data manipulation, automation, and analytics (e.g., pandas, NumPy, PySpark, Airflow).
* Experience with data visualization and analytics platforms (e.g., MSTR, Power BI) for delivering actionable insights.
* Familiarity with AWS cloud services (e.g., S3, Glue, Lambda, SageMaker) and DevOps tools (e.g., Jenkins, Git) for data pipeline automation.
Benefits:
BCforward offers all eligible employees a comprehensive benefits package including, but not limited to major medical, HSA, dental, vision, employer-provided group life, voluntary life insurance, short-term disability, long-term disability, and 401k.
About BCforward:
Founded in 1998 on the idea that industry leaders needed a professional services and workforce management expert to fuel the development and execution of core business and technology strategies, BCforward is a Black-owned firm providing unique solutions supporting value capture and digital product delivery needs for organizations around the world. Headquartered in Indianapolis, IN with an Offshore Development Center in Hyderabad, India, BCforward's 6,000 consultants support more than 225 clients globally.
BCforward champions the power of human potential to help companies transform, accelerate, and scale. Guided by our core values of People-Centric, Optimism, Excellence, Diversity, and Accountability, our professionals have helped our clients achieve their strategic goals for more than 25 years. Our strong culture and clear values have enabled BCforward to become a market leader and best in class place to work.
BCforward is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against based on disability.
To learn more about how BCforward collects and uses personal information as part of the recruiting process, view our Privacy Notice and CCPA Addendum. As part of the recruitment process, we may ask for you to disclose and provide us with various categories of personal information, including identifiers, professional information, commercial information, education information, and other related information. BCforward will only use this information to complete the recruitment process.
This posting is not an offer of employment. All applicants applying for positions in the United States must be legally authorized to work in the United States. The submission of intentionally false or fraudulent information in response to this posting may render the applicant ineligible for the position. Any subsequent offer of employment will be considered employment at-will regardless of the anticipated assignment duration.
Interested candidates, please send your resume in Word format and reference job code 242986 when responding to this ad.
ETL Engineer
Posted 24 days ago
Job Description
Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company.
ETL Engineer
Posted today
Job Description
We are looking for a seasoned **ETL Engineer** to work with our team and our clients to develop enterprise-grade data pipelines. We are looking for more than just an "ETL Engineer": we want a technologist with excellent communication and customer service skills and a passion for data and problem solving.
**Contributions**
+ Assess and understand ETL jobs and workflows
+ Create reusable data pipelines from source to target systems
+ Test, validate, and deploy ETL pipelines
+ Support reporting, business intelligence, and data science end users through ETL and ELT operations
+ Work with data architects to create data models and design schemas for RDBMS, warehouse, and data lake systems
+ Key must-have skill sets: Python, SQL
+ Work within an Agile software development lifecycle
+ You will contribute to the growth of our Data Exploitation Practice!
**Qualifications**
+ Ability to hold a position of public trust with the US government.
+ Bachelor's degree in computer science, information systems, engineering, business, or a scientific or technical discipline
+ 2-4 years industry experience coding commercial software and a passion for solving complex problems
+ 2-4 years direct experience in Data Engineering with experience in tools such as:
+ ETL Tools: Python, Informatica, Pentaho, Talend
+ Big data tools: Hadoop, Spark, Kafka, etc.
+ Relational SQL and NoSQL databases, including Postgres, CloudSQL, MongoDB
+ Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
+ AWS cloud services: EC2, EMR, RDS, Redshift (or Azure and GCP equivalents)
+ Data streaming systems: Storm, Spark-Streaming, etc.
+ Search tools: Solr, Lucene, Elasticsearch
+ Object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
+ Advanced working SQL knowledge and experience working with relational databases, query authoring and optimization (SQL) as well as working familiarity with a variety of databases
+ Experience with message queuing, stream processing, and highly scalable 'big data' data stores
+ Experience manipulating structured and unstructured data for analysis
+ Experience constructing complex queries to analyze results using databases or in a data processing development environment
+ Experience working in an Agile environment
+ Experience supporting project teams of developers and data scientists who build web-based interfaces, dashboards, reports, and analytics/machine learning models
+ Experience with the following is required: Postgres, Crunchy Data, CloudSQL, MongoDB, BigQuery, Looker, Datastream, Dataform, Composer.
**About Steampunk**
Steampunk relies on several factors to determine salary, including but not limited to geographic location, contractual requirements, education, knowledge, skills, competencies, and experience. The projected compensation range for this position is $110,000 to $130,000. The estimate displayed represents a typical annual salary range for this position. Annual salary is just one aspect of Steampunk's total compensation package for employees. Learn more about additional Steampunk benefits here.
**Identity Statement**
As part of the application process, you are expected to be on camera during interviews and assessments. We reserve the right to take your picture to verify your identity and prevent fraud.
Steampunk is a **Change Agent** in the Federal contracting industry, bringing new thinking to clients in the Homeland, Federal Civilian, Health and DoD sectors. Through our **Human-Centered delivery methodology** , we are fundamentally changing the expectations our Federal clients have for true shared accountability in solving their toughest mission challenges. As an **employee owned company** , we focus on investing in our employees to enable them to do the greatest work of their careers - and rewarding them for outstanding contributions to our growth. If you want to learn more about our story, visit .
_We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law. Steampunk participates in the E-Verify program._
**Job Location** _US-VA-McLean_
**Posted Date** _6 days ago_ _(8/18/2025 4:18 PM)_
**_Job ID_** _5870_
**_Clearance Requirement_** _Public Trust_
Teradata ETL Engineer
Posted 4 days ago
Job Description
- Design and develop Teradata ETL jobs and data pipelines.
- Design, develop, and support enterprise data warehouse solutions leveraging Teradata and Teradata tools and utilities.
- Extract data from various data sources, transform it, and load it into target systems.
- Ensure data quality, integrity, and consistency across different systems.
- Develop and maintain data warehouse, data lake, and data mart architecture.
- Optimize ETL processes for performance, scalability, and reliability.
- SQL proficiency (intermediate skill level minimum).
- Manage Teradata/SQL Server/Oracle database migrations, including code migration, data loads, and server retirements.
- Troubleshoot and resolve data integration issues.
- Solid skills in writing/translating SQL queries and SQL data validation.
- Performance tuning of SQL queries.
Base Salary Range: $110,000 - $130,000 per annum
TCS Employee Benefits Summary: Discretionary Annual Incentive. Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans. Family Support: Maternal & Parental Leaves. Insurance Options: Auto & Home Insurance, Identity Theft Protection. Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement. Time Off: Vacation, Time Off, Sick Leave & Holidays. Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Teradata ETL Engineer
Posted 4 days ago
Job Description
Location: Dallas, TX / St. Louis, MO / Seattle, WA (Day-1 onsite)
Duration: Full Time
Job Description
- Design and develop Teradata ETL jobs and data pipelines.
- Design, develop, and support enterprise data warehouse solutions leveraging Teradata and Teradata tools and utilities.
- Extracting data from various data sources, transforming, and loading it into target systems.
- Ensuring data quality, integrity, and consistency across different systems.
- Developing and maintaining data warehouses, data lakes and data mart architecture.
- Optimizing ETL processes for performance, scalability and reliability.
- SQL proficiency (intermediate skill level minimum)
- Ability to manage Teradata/SQL Server/ Oracle database migration including code migration, data loads and server retirements.
- Troubleshooting and resolving data integration issues
- Solid skills in writing/translating SQL queries and SQL data validation (see the sketch after this list).
- Performance tuning of SQL queries.
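To illustrate the SQL data validation work these Teradata postings describe, here is a small, hedged sketch that reconciles row counts and a numeric checksum between a source and a target table. The table names and the amount column are assumptions, and sqlite3 stands in for a real Teradata/SQL Server/Oracle connection (which would use a driver such as teradatasql) so the example stays self-contained.

```python
import sqlite3  # stand-in for a Teradata/SQL Server/Oracle connection

def reconcile(conn, source_table, target_table):
    """Compare row counts and a simple SUM checksum across two tables."""
    def stats(table):
        return conn.execute(
            f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table}"
        ).fetchone()

    src_count, src_sum = stats(source_table)
    tgt_count, tgt_sum = stats(target_table)
    if src_count != tgt_count:
        return f"row count mismatch: {src_count} vs {tgt_count}"
    if src_sum != tgt_sum:
        return f"checksum mismatch: {src_sum} vs {tgt_sum}"
    return "ok"

# Demo with two in-memory tables that deliberately disagree by one row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (amount REAL)")
conn.execute("CREATE TABLE tgt (amount REAL)")
conn.executemany("INSERT INTO src VALUES (?)", [(1.0,), (2.0,), (3.0,)])
conn.executemany("INSERT INTO tgt VALUES (?)", [(1.0,), (2.0,)])
print(reconcile(conn, "src", "tgt"))  # -> row count mismatch: 3 vs 2
```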
Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company.
Teradata ETL Engineer
Posted 10 days ago
Job Description
- Design and develop Teradata ETL jobs and data pipelines.
- Design, develop, and support enterprise data warehouse solutions leveraging Teradata and Teradata tools and utilities.
- Extract data from various data sources, transform it, and load it into target systems.
- Ensure data quality, integrity, and consistency across different systems.
- Develop and maintain data warehouse, data lake, and data mart architecture.
- Optimize ETL processes for performance, scalability, and reliability.
- SQL proficiency (intermediate skill level minimum).
- Manage Teradata/SQL Server/Oracle database migrations, including code migration, data loads, and server retirements.
- Troubleshoot and resolve data integration issues.
- Solid skills in writing/translating SQL queries and SQL data validation.
- Performance tuning of SQL queries.
Base Salary Range: $110,000 - $120,000 per annum
TCS Employee Benefits Summary: Discretionary Annual Incentive. Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans. Family Support: Maternal & Parental Leaves. Insurance Options: Auto & Home Insurance, Identity Theft Protection. Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement. Time Off: Vacation, Time Off, Sick Leave & Holidays. Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
OMOP ETL Engineer
Posted 10 days ago
Job Description
OMOP ETL Engineer
- Bethesda, MD
- Type: Full-time
- Min. Experience: Senior Level
Opportunity Overview
Northramp is seeking an experienced OMOP ETL Engineer to support the design, development, and sustainment of extract-transform-load (ETL) processes that convert clinical data into the Observational Medical Outcomes Partnership (OMOP) common data model within a federal government environment. As consultants, you will help solve complex data-harmonization challenges, strengthen client relationships, and expand work under existing contracts through insight, quality, and collaboration.
This role offers a hybrid work arrangement, with up to 50% of work performed remotely.
The ideal candidate will understand clinical terminologies, data-modeling standards, and have hands-on experience using Azure Databricks or similar platforms to transform large healthcare datasets. Engineers will create repeatable ETL pipelines, populate an OMOP v5.3 database in Azure Cosmos DB, and support FHIR API integration.
Key Responsibilities
- Design and implement ETL processes that transform CRISPI and other clinical datasets in Azure Data Lake into the OMOP v5.3 schema.
- Develop scalable transformation scripts (using Python, Spark, or SQL) and ensure flexibility to accommodate new data types and vocabularies (see the sketch after this list).
- Collaborate with external vendors to pilot cohort-discovery solutions and integrate their outputs into the OMOP model.
- Deploy and configure an OMOP database instance and FHIR server, and provide unit-test documentation for FHIR API queries.
- Ensure high data quality and mapping accuracy while documenting lineage and transformation rules.
- Work with program managers and stakeholders to refine requirements and deliverables.
- Assist with performance tuning, error handling, and operational monitoring of ETL workflows.
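To make the clinical-to-OMOP transformation step concrete, here is a minimal, hedged PySpark sketch that maps an assumed raw patient extract onto the OMOP v5.3 person table. The source column names (patient_id, sex, birth_date) are invented for the example; person_id, gender_concept_id, and year_of_birth are standard OMOP person-table columns, and 8532/8507 are the standard OMOP concept IDs for female/male. A real job would read from Azure Data Lake rather than an in-memory DataFrame.

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("omop-etl-sketch").getOrCreate()

# Hypothetical raw clinical extract standing in for a Data Lake source.
raw = spark.createDataFrame(
    [(101, "F", "1980-04-02"), (102, "M", "1975-11-19")],
    ["patient_id", "sex", "birth_date"],
)

# Map raw fields onto OMOP person columns; unknown sex falls back to
# concept 0 (the OMOP convention for "no matching concept").
person = raw.select(
    F.col("patient_id").alias("person_id"),
    F.when(F.col("sex") == "F", F.lit(8532))
     .when(F.col("sex") == "M", F.lit(8507))
     .otherwise(F.lit(0)).alias("gender_concept_id"),
    F.year(F.to_date("birth_date")).alias("year_of_birth"),
)
person.show()
```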
Required Qualifications
- US Citizenship
- Ability to pass a federal background check and maintain eligibility for continued access authorizations
- Public Trust / IT-I / IT-II eligibility
- Bachelor's degree
- Minimum of four (4) years of relevant experience in data warehousing or clinical data transformations
- Expertise with the OMOP common data model and related standards (HL7, FHIR)
- Hands-on experience with Azure Databricks, Cosmos DB, Spark, Python, and SQL
- Strong communication, leadership, and problem-solving skills
- Flexibility to adapt to changes in work scope to meet organizational goals
Clearance
Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information.
About Northramp
Northramp helps clients cut through complexity to achieve clarity and make a lasting impact. We streamline IT operations, improve technical services, and maximize technology investments—always with a focus on delivering measurable value. Learn more at northramp.com.
Our Commitment
At Northramp, we know diverse perspectives spark great ideas. We’re committed to equal opportunity and an inclusive workplace where everyone feels valued—because different backgrounds, identities, and experiences make us stronger as a team and sharper for our clients.
We provide reasonable accommodations for individuals with disabilities throughout the hiring process and employment. To request an accommodation, please contact us.
Northramp is an Equal Opportunity Employer, supports pay transparency, and participates in E-Verify as a federal contractor to confirm employment eligibility in the United States. Learn more about your rights: EEO is the Law, EEO is the Law Supplement, and Pay Transparency Nondiscrimination Provision.