5,289 Data Engineer jobs in the United States

Big Data Engineer

90407 Santa Monica, California Eliassen Group

Posted 1 day ago

Job Description

**Big Data Engineer**
**Santa Monica, CA**
**Type:** Contract
**Category:** Data
**Industry:** Communications
**Reference ID:** JN -
**Date Posted:** 10/12/2025
**Description:**
**Onsite | Santa Monica, CA**
Are you passionate about building scalable data platforms that deliver real-world impact? Our Ad Platforms Engineering team is at the forefront of advertising technology, creating cutting-edge solutions that enhance performance and value across all media channels. We're seeking a seasoned engineer to design and maintain robust data infrastructure, lead architectural decisions, and collaborate across disciplines to shape the future of ad tech.
_Due to client requirements, applicants must be willing and able to work on a W2 basis. For our W2 consultants, we offer a great benefits package that includes Medical, Dental, and Vision benefits, 401k with company matching, and life insurance._
Rate: $95 - $0 / hr. w2
**Responsibilities:**
+ Design, build, and maintain scalable data platform components for both real-time and batch processing. Own the full software development lifecycle - from requirements gathering and design to implementation, testing, and deployment.
+ Drive engineering best practices including code quality, performance optimization, automated testing, CI/CD, and system reliability.
+ Define and evaluate technical architecture, contribute to system-level design discussions, and lead decision-making on key engineering trade-offs.
+ Collaborate cross-functionally with product managers, program managers, SDETs, and data scientists to deliver impactful solutions aligned with business goals.
+ Lead by example to foster an inclusive, high-performing engineering culture; provide technical guidance and mentorship to junior and mid-level engineers.
+ Troubleshoot and resolve complex production issues, ensuring system performance, availability, and reliability.
+ Continuously evaluate emerging technologies and contribute to innovation efforts across the organization.
**Experience Requirements:**
+ 5+ years of professional programming experience in languages such as Scala and Python
+ 3+ years of big data development experience with technologies such as Spark, Flink, Airflow, SingleStore, Kafka, and AWS big data services
+ Deep understanding of data modeling, distributed systems, and performance optimization
+ Knowledge of system and application design and architecture
+ Experience building industry-grade, highly available, and scalable services
+ Passion for technology and openness to interdisciplinary work
+ Excellent communication and collaboration skills
**Education Requirements:**
+ Bachelor's Degree
_Skills, experience, and other compensable factors will be considered when determining pay rate. The pay range provided in this posting reflects a W2 hourly rate; other employment options may be available that may result in pay outside of the provided range._
_W2 employees of Eliassen Group who are regularly scheduled to work 30 or more hours per week are eligible for the following benefits: medical (choice of 3 plans), dental, vision, pre-tax accounts, other voluntary benefits including life and disability insurance, 401(k) with match, and sick time if required by law in the worked-in state/locality._
_Please be advised: if anyone reaches out to you about an open position connected with Eliassen Group, please confirm that they have an Eliassen.com email address, and never provide personal or financial information to anyone who is not clearly associated with Eliassen Group. If you have any indication of fraudulent activity, please contact_
_About Eliassen Group:_
_Eliassen Group is a leading strategic consulting company for human-powered solutions. For over 30 years, Eliassen has helped thousands of companies reach further and achieve more with their technology solutions, financial, risk & compliance, and advisory solutions, and clinical solutions. With offices from coast to coast and throughout Europe, Eliassen provides a local community presence, balanced with international reach. Eliassen Group strives to positively impact the lives of their employees, clients, consultants, and the communities in which they operate._
_Eliassen Group is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status._
_Don't miss out on our referral program! If we hire a candidate you refer to us, you can be eligible for a $1,000 referral check!_
View Now

Big Data Engineer

46074 Westfield, Indiana Robert Half

Posted 3 days ago

Job Description

Description We are looking for a skilled Big Data Engineer to join our team in Westfield, Indiana. This role involves leveraging advanced technologies to design, implement, and optimize big data solutions that drive our business objectives. The ideal candidate will have extensive experience in data engineering and a passion for building scalable systems.
Responsibilities:
- Design, develop, and implement scalable big data solutions using Python, Apache Spark, and other relevant technologies.
- Build and optimize ETL pipelines to efficiently handle large volumes of structured and unstructured data.
- Manage and process data using frameworks such as Apache Hadoop and Apache Kafka.
- Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
- Utilize cloud platforms like Amazon Web Services (AWS) to deploy and maintain data systems.
- Ensure the security, reliability, and performance of big data architectures.
- Troubleshoot and resolve issues related to data systems and pipelines.
- Monitor and analyze system performance to identify opportunities for improvement.
- Stay updated on emerging technologies and incorporate them into data engineering practices as appropriate.
Requirements
- A minimum of 10 years of experience in big data engineering or a related field.
- Proficiency in programming languages such as Python.
- Strong expertise in data processing frameworks, including Apache Spark, Hadoop, and Kafka.
- Solid experience with ETL processes and pipeline development.
- Familiarity with cloud platforms, particularly Amazon Web Services (AWS).
- Proven ability to design and implement scalable data architectures.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills to collaborate effectively with technical and non-technical teams.
Technology Doesn't Change the World, People Do.®
Robert Half is the world's first and largest specialized talent solutions firm that connects highly qualified job seekers to opportunities at great companies. We offer contract, temporary and permanent placement solutions for finance and accounting, technology, marketing and creative, legal, and administrative and customer support roles.
Robert Half works to put you in the best position to succeed. We provide access to top jobs, competitive compensation and benefits, and free online training. Stay on top of every opportunity - whenever you choose - even on the go. Download the Robert Half app and get 1-tap apply, notifications of AI-matched jobs, and much more.
All applicants applying for U.S. job openings must be legally authorized to work in the United States. Benefits are available to contract/temporary professionals, including medical, vision, dental, and life and disability insurance. Hired contract/temporary professionals are also eligible to enroll in our company 401(k) plan. Visit roberthalf.gobenefits.net for more information.
© 2025 Robert Half. An Equal Opportunity Employer. M/F/Disability/Veterans. By clicking "Apply Now," you're agreeing to Robert Half's Terms of Use.
View Now

Big Data Engineer

77007 Houston, Texas Robert Half

Posted 9 days ago

Job Description

Description
Are you obsessed with data and thrive on turning complexity into clarity? We're looking for a Lead Data Engineer who lives and breathes Python, knows Snowflake inside out, and can architect scalable solutions in AWS. If you love solving hard problems, building elegant data platforms, and communicating your ideas with clarity and impact, this is your role.
What You'll Do
+ Design and implement cloud-native data platforms using AWS and Snowflake
+ Build robust data pipelines and services using Python and modern engineering practices
+ Architect scalable solutions for data ingestion, transformation, and analytics
+ Collaborate with analysts, scientists, and business stakeholders to translate ideas into technical reality
+ Lead cross-functional teams to deliver high-impact data products
+ Migrate legacy systems to cloud-based platforms with minimal disruption
+ Define long-term data architecture strategies aligned with business goals
+ Mentor junior engineers and champion best practices in design, testing, and deployment
+ Communicate complex technical concepts in a way that resonates with non-technical audiences
Requirements
+ 7+ years of hands-on experience in data engineering and architecture
+ Mastery of Python and SQL for building scalable data solutions
+ Deep expertise in Snowflake, including performance tuning and advanced modeling
+ Strong experience with AWS data services (e.g., S3, Glue, Lambda, Redshift, Kinesis)
+ Passion for data modeling: dimensional, data vault, and modern techniques
+ Familiarity with tools like DBT, Airflow, and Terraform
+ Experience with streaming, CDC, and real-time data integration patterns
+ Excellent communication skills: you make the complex simple and the abstract tangible
+ A genuine love for data and a drive to innovate
© 2025 Robert Half. An Equal Opportunity Employer. M/F/Disability/Veterans. By clicking "Apply Now," you're agreeing to Robert Half's Terms of Use.
View Now

Big Data Engineer

20080 Washington, District Of Columbia SAIC

Posted 15 days ago

Job Description

**Description**
SAIC is seeking **Big Data Engineers** to join the Machine-assisted Analytic Rapid-repository System (MARS) Advanced Development Operations (DevOps) and Sustainment Support (ADOS) program and provide on-site technical support to facilitate operations of critical MARS infrastructure and services. This effort focuses on providing a comprehensive set of System/Software Engineering and IT Services to maintain, sustain, enhance, and improve/modernize MARS. This position will be located in the National Capital Region.
**Please note that this is contingent upon contract award, with an anticipated decision expected by Winter/Spring 2026.**
The Big Data Engineer responsibilities include, but are not limited to:
+ Design scalable data architectures, lead ETL processes, and oversee the implementation of data storage solutions
+ Support the development and integration of advanced analytics tools, ensuring efficient data access and insights generation
+ Optimize system performance and scalability, implementing continuous improvement initiatives
**Qualifications**
+ Active TS/SCI with Polygraph
+ Bachelor's degree in Information Technology, Cybersecurity, Computer Science, Information Systems, Data Science, or Software Engineering and 14 years or more relevant experience (will consider an additional 4+ years of relevant experience in lieu of degree)
+ One Active Certification: CCISO, CISM, CISSP, GSLC, SSCP or GSEC
+ Expertise in designing, implementing, and managing Big Data solutions using Hadoop, Spark, and data streaming technologies
+ Proven experience optimizing data pipelines, performing large-scale data processing, and ensuring data quality
+ Strong knowledge of data warehousing concepts, ETL processes, and distributed computing environments
Target salary range: $160,001 - $00,000. The estimate displayed represents the typical salary range for this position based on experience and other factors.
REQNUMBER:
SAIC is a premier technology integrator, solving our nation's most complex modernization and systems engineering challenges across the defense, space, federal civilian, and intelligence markets. Our robust portfolio of offerings includes high-end solutions in systems engineering and integration; enterprise IT, including cloud services; cyber; software; advanced analytics and simulation; and training. We are a team of 23,000 strong driven by mission, united purpose, and inspired by opportunity. Headquartered in Reston, Virginia, SAIC has annual revenues of approximately $6.5 billion. For more information, visit saic.com. For information on the benefits SAIC offers, see Working at SAIC. EOE AA M/F/Vet/Disability
View Now

ETL Big Data Engineer

85067 Phoenix, Arizona CTG

Posted 1 day ago

Job Description

**CTG is seeking to fill an ETL Big Data Engineer position for our client in Phoenix, AZ.**
**Duration:** 12 months
**Key Skills:** Genesys, Oracle, PL/SQL, Apache Kafka, API development, Denodo, Google Cloud Platform (GCP)
**Duties:**
+ Design, develop, and optimize ETL pipelines for large-scale data processing.
+ Build real-time data streaming solutions using Apache Kafka.
+ Integrate data across systems via APIs and Denodo data virtualization.
+ Deploy and maintain cloud-based data solutions on GCP.
+ Ensure data quality, security, and reliability across workflows.
**Experience & Education:**
+ Proven experience in ETL, Big Data, and data integration projects.
+ Hands-on experience with data pipelines, data warehousing, and analytics.
+ Bachelor's degree in Computer Science, IT, or related field preferred.
Excellent verbal and written English communication skills and the ability to interact professionally with a diverse group are required.
CTG does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services for this role.
**To Apply:**
To be considered, please apply directly to this requisition using the link provided. For additional information, please contact **JoAnn Abramo**. Kindly forward this to any other interested parties. Thank you!
**About CTG**
CTG, a Cegeka company, is at the forefront of digital transformation, providing IT and business solutions that accelerate project momentum and deliver desired value. Over nearly 60 years, we have earned a reputation as a faster and more reliable, results-driven partner. Our vision is to be an indispensable partner to our clients and the preferred career destination for digital and technology experts. CTG leverages the expertise of over 9,000 team members in 19 countries to provide innovative solutions. Together, we operate across the Americas, Europe, and India, working in close cooperation with over 3,000 clients in many of today's highest-growth industries. For more information, visit .
Our culture is a direct result of the people who work at CTG, the values we hold, and the actions we take. In other words, our people define our culture. It's a living, breathing thing that is renewed every day through the ways we engage with each other, our clients, and our communities. Part of our mission is to cultivate a workplace that attracts and develops the best people, reflected by our recognition as a Great Place to Work Certified company across many of our global operations.
CTG will consider for employment all qualified applicants including those with criminal histories in a manner consistent with the requirements of all applicable local, state, and federal laws.
CTG is an Equal Opportunity Employer. CTG will assure equal opportunity and consideration to all applicants and employees in recruitment, selection, placement, training, benefits, compensation, promotion, transfer, and release of individuals without regard to race, creed, religion, color, national origin, sex, sexual orientation, gender identity and gender expression, age, disability, marital or veteran status, citizenship status, or any other discriminatory factors as required by law. CTG is fully committed to promoting employment opportunities for members of protected classes.
View Now

Hadoop Big Data Engineer

07308 Jersey City, New Jersey Insight Global

Posted 14 days ago

Job Description

For $68.93/hr, we are seeking a skilled Big Data Engineer to support a leading financial institution in building and optimizing large-scale data processing systems. This role involves working with Hadoop and Spark to ingest, transform, and analyze high-volume market and trading data. The engineer will contribute to the development and maintenance of distributed data pipelines, ensuring performance, scalability, and reliability across the analytics infrastructure. You will use datasets, data lake architecture, etc. to help build a proprietary analytics platform. The ideal candidate will have a strong understanding of big data frameworks and a passion for enabling data-driven decision-making in a fast-paced financial environment.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy.
Skills and Requirements
- 7+ years of experience in a Data Engineer role utilizing Python scripting
- Experience with Hadoop and Spark
- Background in financial services or analytics platforms
- Familiarity with tools like Apache Hudi, Hive, and Airflow is a plus
View Now

Senior Big Data Engineer

15222 Pittsburgh, Pennsylvania System One

Posted 24 days ago

Job Description

Position Title: Senior Big Data Engineer
Position Location: Pittsburgh
Provide locations/flexible work by preference: Most preferred - Pittsburgh PA - Two PNC Plaza, 620 Liberty Ave, Pittsburgh, PA 15222; Second - Cleveland OH - Strongsville Technology Center, 8100 Mohawk Dr., Strongsville, OH 44136
Ability to work remote: In office, some flexibility could be available in the future
Travel: If in Ohio, could have to travel to Pittsburgh once a month, or once every two months
Target Start Date: 8/1
Intended length of Assignment: 6 months
Potential for Contract Extension: Possible, depending on performance; preference is to convert within the 6-month period.
Function of the group: Developing Data Requirements based on Regulation Requirements
Industry background: Nice to have - Banking experience is desirable; PNC would be a plus
Team dynamic: 12 members, all with 5+ years of experience, located in Pittsburgh, Dallas, and offshore
Roles and Responsibilities: Design and implement new applications, maintain existing applications, and support RTB functions
Technical Skills required: (4+ years)
+ Spark3
+ Python
+ Shell Scripting
+ Hadoop Cloudera (Hadoop admin, Hadoop tables, HDFS file processing)
+ Data and System Architectural experience
+ Data modeling experience using the Erwin tool
+ Experience with Alation or similar data mining/reporting tools
Flex Skills: Agile methodology, Jira, Git Bash, Jenkins, uDeploy, CA7
Soft Skills: Team player, analytical skills, banking industry experience, 3+ years of development experience
Required Degrees/Certifications: Graduate; relevant certifications, if any
Role Differentiator: Career growth, comfortable work environment, good team members
Interview Process:
1. Video interview with manager (1 hour max)
2. In-person interview - Technical Assessment - (1 hour)
3. In-person Panel interview - same day as round 2 - (1 hour)
Skills:
+ Data and System Architecture experience
+ Hadoop Cloudera
+ Knowledge of Alation or similar data mining/reporting tools
+ Data modeling using the Erwin tool
+ Python
+ Shell Scripting
+ Spark 3
Share your resume; also connect on LinkedIn: Ariz J. Khan | LinkedIn. #404-IT Pittsburgh
System One, and its subsidiaries including Joulé, ALTA IT Services, CM Access, TPGS, and MOUNTAIN, LTD., are leaders in delivering workforce solutions and integrated services across North America. We help clients get work done more efficiently and economically, without compromising quality. System One not only serves as a valued partner for our clients, but we offer eligible full-time employees health and welfare benefits coverage options including medical, dental, vision, spending accounts, life insurance, voluntary plans, as well as participation in a 401(k) plan.
System One is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, age, national origin, disability, family care or medical leave status, genetic information, veteran status, marital status, or any other characteristic protected by applicable federal, state, or local law.
View Now

Senior Big Data Engineer

15222 Pittsburgh, Pennsylvania System One

Posted 24 days ago

Job Description

Big Data Engineer
Reason for Open Position: Tenure backfill
Position Location:
+ Most preferred: Pittsburgh, PA - Two PNC Plaza, 620 Liberty Ave
+ Secondary: Cleveland, OH - Strongsville Technology Center, 8100 Mohawk Dr.
Work Arrangement: In-office (some flexibility may be available in the future)
Travel: If based in Ohio, may need to travel to Pittsburgh 1x per month (or every 2 months)
Schedule: Monday-Friday (some weekend flexibility depending on project urgency)
Working Hours: 8 AM - 5 PM EST, 40 hours/week
Time Zone: EST
Overtime: N/A
Target Start Date: 8/1
Assignment Length: 6 months
Contract Extension: Possible, depending on performance (preferred conversion to FTE within the 6-month period)
Function of the Group
+ Develop data requirements based on regulatory requirements
Current Initiatives / Projects
+ Regulatory-driven data engineering and support initiatives
Industry Background
+ Banking/financial industry experience is preferred (PNC experience is a strong plus)
Team Dynamic
+ Team of 12 (all 5+ years' experience)
+ Locations: Pittsburgh, Dallas, Offshore
Roles & Responsibilities
+ Design and implement new applications
+ Maintain existing applications
+ Support RTB (Run the Bank) functions
Required Technical Skills (4+ years)
+ Spark 3
+ Python
+ Shell Scripting
+ Hadoop Cloudera (admin, Hadoop tables, HDFS file processing)
+ Data & System Architecture experience
+ Data Modeling (Erwin Tool)
+ Alation or similar data mining/reporting tools
Flex Skills
+ Agile methodology
+ Jira, Git Bash
+ Jenkins, uDeploy
+ CA7
Soft Skills
+ Team player
+ Strong analytical skills
+ Banking industry experience preferred
+ 3+ years software development experience
Education / Certifications
+ Bachelor's degree required (relevant certifications a plus)
Role Differentiators
+ Career growth opportunities
+ Supportive and collaborative work environment
+ Strong team culture
Interview Process
+ Video interview with manager (up to 1 hour)
+ In-person technical assessment (1 hour)
+ In-person panel interview (same day as Round 2, 1 hour)
If interested, please send your updated resume to:

View Now

Senior Data Engineer - Big Data Platforms

43215 Columbus, Ohio $155,000 Annually WhatJobs

Posted today

Job Description

full-time
Our client, a rapidly expanding data analytics firm, is seeking a Senior Data Engineer to design, build, and maintain robust big data infrastructure. This impactful role is located at our modern offices in Columbus, Ohio, US, and requires a hands-on approach to solving complex data challenges. You will be instrumental in developing scalable data pipelines, optimizing data warehousing solutions, and ensuring the availability and integrity of large datasets.

Responsibilities include architecting and implementing ETL/ELT processes, working with distributed computing frameworks like Spark and Hadoop, and managing cloud-based data platforms (e.g., AWS, Azure, GCP). You will collaborate closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality data solutions that drive informed decision-making.

The ideal candidate possesses deep expertise in SQL, Python, and data modeling, along with extensive experience in building and managing data lakes and data warehouses. Proficiency in cloud data services and a strong understanding of data governance, security, and performance tuning are essential. A Bachelor's or Master's degree in Computer Science, Engineering, or a related quantitative field, coupled with a minimum of 7 years of experience in data engineering, is required. Experience with real-time data processing and streaming technologies is a significant plus. Excellent problem-solving skills and the ability to work effectively in a collaborative team environment are crucial. This is an outstanding opportunity to work with cutting-edge big data technologies and contribute to transformative data initiatives within a growing organization.

Key Responsibilities:
  • Design, build, and maintain scalable data pipelines and ETL/ELT processes.
  • Develop and optimize data warehouses and data lakes.
  • Implement data solutions using big data technologies (Spark, Hadoop).
  • Manage and leverage cloud data platforms (AWS, Azure, GCP).
  • Collaborate with data scientists and analysts to meet data needs.
  • Ensure data quality, integrity, and security.
Qualifications:
  • 7+ years of experience in data engineering, with a focus on big data.
  • Expertise in SQL, Python, and data modeling.
  • Proficiency with distributed computing frameworks (e.g., Spark).
  • Experience with cloud data services.
  • Strong understanding of data warehousing concepts and best practices.
  • Bachelor's or Master's degree in a quantitative field.
Apply Now

Senior Data Engineer - Big Data Analytics

68101 Omaha, Nebraska $145,000 Annually WhatJobs

Posted 13 days ago

Job Viewed

Tap Again To Close

Job Description

full-time
Our client, a leader in data-driven innovation, is seeking a highly skilled Senior Data Engineer to design, build, and maintain scalable big data infrastructure and pipelines. This is a fully remote position, offering the flexibility to work from anywhere in the US, with a strong emphasis on remote collaboration and asynchronous communication. You will be responsible for developing and optimizing data ingestion, transformation, and storage solutions to support advanced analytics, machine learning, and business intelligence initiatives.

The ideal candidate will have extensive experience with distributed data processing frameworks such as Apache Spark, Hadoop, and Kafka. Proficiency in SQL and NoSQL databases, as well as cloud platforms like AWS, Azure, or GCP, is essential. You will work closely with data scientists, analysts, and software engineers to understand data requirements and deliver robust data solutions. Responsibilities include designing efficient data models, building ETL/ELT processes, ensuring data quality and integrity, and automating data workflows. Strong programming skills in Python or Scala are required. You should also have experience with data warehousing concepts and performance tuning.

This role requires excellent problem-solving abilities, a proactive approach to identifying and resolving data challenges, and the capacity to work independently in a remote environment. If you are passionate about building resilient data systems and enabling data-driven decision-making, this is an exceptional opportunity to make a significant impact. Join a forward-thinking company that values innovation and empowers its employees to thrive in a remote setting. You will be an integral part of shaping our client's data capabilities.

Responsibilities:
  • Design, build, and maintain scalable data pipelines and ETL/ELT processes.
  • Develop and optimize big data solutions using Spark, Hadoop, and Kafka.
  • Implement and manage data warehousing solutions.
  • Ensure data quality, integrity, and security across all data platforms.
  • Collaborate with data scientists and analysts to meet their data needs.
  • Develop and maintain data models for various analytical purposes.
  • Automate data workflows and implement monitoring solutions.
  • Troubleshoot and resolve complex data engineering issues.
Qualifications:
  • Bachelor's degree in Computer Science, Engineering, or a related quantitative field; Master's preferred.
  • 5+ years of experience in data engineering or a similar role.
  • Proven experience with big data technologies (Spark, Hadoop, Kafka).
  • Strong proficiency in SQL and NoSQL databases.
  • Experience with cloud platforms (AWS, Azure, GCP).
  • Proficient in Python or Scala for data processing.
  • Understanding of data warehousing principles and best practices.
  • Excellent problem-solving and communication skills.
  • Ability to work independently and effectively in a remote team.
This fully remote role offers a fantastic opportunity for a skilled data engineer to contribute to cutting-edge big data analytics from anywhere in the US, supporting our client located near Omaha, Nebraska, US.
Apply Now
 
