6,577 Big Data Engineer jobs in the United States
Big Data Engineer
Posted 3 days ago
Job Description
Headquartered in the United States, TP-Link Systems Inc. is a global provider of reliable networking devices and smart home products, consistently ranked as the world’s top provider of Wi-Fi devices. The company is committed to delivering innovative products that enhance people’s lives through faster, more reliable connectivity. With a commitment to excellence, TP-Link serves customers in over 170 countries and continues to grow its global footprint.
We believe technology changes the world for the better! At TP-Link Systems Inc, we are committed to crafting dependable, high-performance products to connect users worldwide with the wonders of technology.
Embracing professionalism, innovation, excellence, and simplicity, we aim to assist our clients in achieving remarkable global performance and enable consumers to enjoy a seamless, effortless lifestyle.
KEY RESPONSIBILITIES:
- Design, develop, and maintain scalable big data infrastructure and pipelines, including data ingestion, cleansing, transformation, and data warehouse modeling for large-scale datasets.
- Design and maintain vector databases and embedding pipelines to support LLM applications, RAG (Retrieval-Augmented Generation) systems, semantic search, and agentic capabilities.
- Collaborate with cross-functional teams to deliver reliable, actionable data solutions that support business and product decisions.
- Implement and manage batch and streaming ETL/ELT workflows using distributed data processing frameworks such as Spark and orchestration tools.
- Participate in data integration and ETL pipeline development, ensuring secure and efficient data processing.
- Investigate system issues, perform troubleshooting, and assist in optimizing data processing workflows.
Requirements
REQUIRED QUALIFICATIONS
- Bachelor’s degree in Computer Science, Information Systems, or a related field.
- 3-5 years of hands-on experience in data engineering or big data infrastructure, working with large-scale datasets in a production environment.
- Proficiency in Python, with experience developing scalable ETL/ELT pipelines or significant contributions to an open-source Python library.
- Ability to work effectively in a team-oriented environment, with good communication and problem-solving skills.
PREFERRED QUALIFICATIONS
- Experience with LLM frameworks and libraries (e.g., LangChain, LlamaIndex) is strongly preferred.
Benefits
Salary Range: $100,000 - $150,000
- Free snacks and drinks, and provided lunch on Fridays
- Fully paid medical, dental, and vision insurance (partial coverage for dependents)
- Contributions to 401(k) funds
- Bi-annual reviews and annual pay increases
- Health and wellness benefits, including free gym membership
- Quarterly team-building events
At TP-Link Systems Inc., we are continually searching for ambitious individuals who are passionate about their work. We believe that diversity fuels innovation, collaboration, and drives our entrepreneurial spirit. As a global company, we highly value diverse perspectives and are committed to cultivating an environment where all voices are heard, respected, and valued. We are dedicated to providing equal employment opportunities to all employees and applicants, and we prohibit discrimination and harassment of any kind based on race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Beyond compliance, we strive to create a supportive and growth-oriented workplace for everyone. If you share our passion and connection to this mission, we welcome you to apply and join us in building a vibrant and inclusive team at TP-Link Systems Inc.
Please, no third-party agency inquiries, and we are unable to offer visa sponsorships at this time.
Big Data Engineer
Posted today
Job Description
Are you obsessed with data and thrive on turning complexity into clarity? We're looking for a Lead Data Engineer who lives and breathes Python, knows Snowflake inside out, and can architect scalable solutions in AWS. If you love solving hard problems, building elegant data platforms, and communicating your ideas with clarity and impact, this is your role.
What You'll Do
+ Design and implement cloud-native data platforms using AWS and Snowflake
+ Build robust data pipelines and services using Python and modern engineering practices
+ Architect scalable solutions for data ingestion, transformation, and analytics
+ Collaborate with analysts, scientists, and business stakeholders to translate ideas into technical reality
+ Lead cross-functional teams to deliver high-impact data products
+ Migrate legacy systems to cloud-based platforms with minimal disruption
+ Define long-term data architecture strategies aligned with business goals
+ Mentor junior engineers and champion best practices in design, testing, and deployment
+ Communicate complex technical concepts in a way that resonates with non-technical audiences
Requirements
+ 7+ years of hands-on experience in data engineering and architecture
+ Mastery of Python and SQL for building scalable data solutions
+ Deep expertise in Snowflake, including performance tuning and advanced modeling
+ Strong experience with AWS data services (e.g., S3, Glue, Lambda, Redshift, Kinesis)
+ Passion for data modeling: dimensional, data vault, and modern techniques
+ Familiarity with tools like dbt, Airflow, and Terraform
+ Experience with streaming, CDC, and real-time data integration patterns
+ Excellent communication skills: you make the complex simple and the abstract tangible
+ A genuine love for data and a drive to innovate
Technology Doesn't Change the World, People Do.®
Robert Half is the world's first and largest specialized talent solutions firm that connects highly qualified job seekers to opportunities at great companies. We offer contract, temporary and permanent placement solutions for finance and accounting, technology, marketing and creative, legal, and administrative and customer support roles.
Robert Half works to put you in the best position to succeed. We provide access to top jobs, competitive compensation and benefits, and free online training. Stay on top of every opportunity - whenever you choose - even on the go. Download the Robert Half app and get 1-tap apply, notifications of AI-matched jobs, and much more.
All applicants applying for U.S. job openings must be legally authorized to work in the United States. Benefits are available to contract/temporary professionals, including medical, vision, dental, and life and disability insurance. Hired contract/temporary professionals are also eligible to enroll in our company 401(k) plan. Visit roberthalf.gobenefits.net for more information.
© 2025 Robert Half. An Equal Opportunity Employer. M/F/Disability/Veterans. By clicking "Apply Now," you're agreeing to Robert Half's Terms of Use.
Big Data Engineer
Posted 6 days ago
Job Description
SAIC is seeking **Big Data Engineers** to join the Machine-assisted Analytic Rapid-repository System (MARS) Advanced Development Operations (DevOps) and Sustainment Support (ADOS) program and provide on-site technical support to facilitate operations of critical MARS infrastructure and services. This effort focuses on providing a comprehensive set of System/Software Engineering and IT Services to maintain, sustain, enhance, and improve/modernize MARS. This position will be located in the National Capital Region.
**Please note that this is contingent upon contract award, with an anticipated decision expected by Winter/Spring 2026.**
The Big Data Engineer responsibilities include, but are not limited to:
+ Design scalable data architectures, lead ETL processes, and oversee the implementation of data storage solutions
+ Support the development and integration of advanced analytics tools, ensuring efficient data access and insights generation
+ Optimize system performance and scalability, implementing continuous improvement initiatives
**Qualifications**
+ Active TS/SCI with Polygraph
+ Bachelor's degree in Information Technology, Cybersecurity, Computer Science, Information Systems, Data Science, or Software Engineering and 14 years or more relevant experience (will consider an additional 4+ years of relevant experience in lieu of degree)
+ One Active Certification: CCISO, CISM, CISSP, GSLC, SSCP or GSEC
+ Expertise in designing, implementing, and managing Big Data solutions using Hadoop, Spark, and data streaming technologies
+ Proven experience optimizing data pipelines, performing large-scale data processing, and ensuring data quality
+ Strong knowledge of data warehousing concepts, ETL processes, and distributed computing environments
Target salary range: $160,001 - $00,000. The estimate displayed represents the typical salary range for this position based on experience and other factors.
REQNUMBER:
SAIC is a premier technology integrator, solving our nation's most complex modernization and systems engineering challenges across the defense, space, federal civilian, and intelligence markets. Our robust portfolio of offerings includes high-end solutions in systems engineering and integration; enterprise IT, including cloud services; cyber; software; advanced analytics and simulation; and training. We are a team of 23,000 strong driven by mission, united purpose, and inspired by opportunity. Headquartered in Reston, Virginia, SAIC has annual revenues of approximately $6.5 billion. For more information, visit saic.com. For information on the benefits SAIC offers, see Working at SAIC. EOE AA M/F/Vet/Disability
Senior Big Data Engineer
Posted 3 days ago
Job Description
ABOUT US:
Headquartered in the United States, TP-Link Systems Inc. is a global provider of reliable networking devices and smart home products, consistently ranked as the world’s top provider of Wi-Fi devices. The company is committed to delivering innovative products that enhance people’s lives through faster, more reliable connectivity. With a commitment to excellence, TP-Link serves customers in over 170 countries and continues to grow its global footprint.
We believe technology changes the world for the better! At TP-Link Systems Inc, we are committed to crafting dependable, high-performance products to connect users worldwide with the wonders of technology.
Embracing professionalism, innovation, excellence, and simplicity, we aim to assist our clients in achieving remarkable global performance and enable consumers to enjoy a seamless, effortless lifestyle.
KEY RESPONSIBILITIES
- Develop and maintain the Big Data Platform by performing data cleansing, data warehouse modeling, and report development on large datasets. Collaborate with cross-functional teams to provide actionable insights for decision-making.
- Manage the operation and administration of the Big Data Platform, including system deployment, task scheduling, proactive monitoring, and alerting to ensure stability and security.
- Handle data collection and integration tasks, including ETL development, data de-identification, and managing data security.
- Provide support for other departments by processing data, writing queries, developing solutions, performing statistical analysis, and generating reports.
- Troubleshoot and resolve critical issues, conduct fault diagnosis, and optimize system performance.
Requirements
REQUIRED QUALIFICATIONS
- Bachelor’s degree or higher in Computer Science or a related field, with at least three years of experience maintaining a Big Data platform.
- Strong understanding of Big Data technologies such as Hadoop, Flink, Spark, Hive, HBase, and Airflow, with proven expertise in Big Data development and performance optimization.
- Familiarity with Big Data OLAP tools like Kylin, Impala, and ClickHouse, as well as experience in data warehouse design, data modeling, and report generation.
- Proficiency in Linux development environments and Python programming.
- Excellent communication, collaboration, and teamwork skills, with a proactive attitude and a strong sense of responsibility.
PREFERRED QUALIFICATIONS
- Experience with cloud-based deployments, particularly AWS EMR, with familiarity in other cloud platforms being a plus.
- Proficiency in additional languages such as Java or Scala is a plus.
Benefits
Salary Range: $150,000 - $180,000
- Free snacks and drinks, and provided lunch on Fridays
- Fully paid medical, dental, and vision insurance (partial coverage for dependents)
- Contributions to 401(k) funds
- Bi-annual reviews and annual pay increases
- Health and wellness benefits, including free gym membership
- Quarterly team-building events
At TP-Link Systems Inc., we are continually searching for ambitious individuals who are passionate about their work. We believe that diversity fuels innovation, collaboration, and drives our entrepreneurial spirit. As a global company, we highly value diverse perspectives and are committed to cultivating an environment where all voices are heard, respected, and valued. We are dedicated to providing equal employment opportunities to all employees and applicants, and we prohibit discrimination and harassment of any kind based on race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Beyond compliance, we strive to create a supportive and growth-oriented workplace for everyone. If you share our passion and connection to this mission, we welcome you to apply and join us in building a vibrant and inclusive team at TP-Link Systems Inc.
Please, no third-party agency inquiries, and we are unable to offer visa sponsorships at this time.
Hadoop Big Data Engineer
Posted 5 days ago
Job Description
For $68.93/hr, we are seeking a skilled Big Data Engineer to support a leading financial institution in building and optimizing large-scale data processing systems. This role involves working with Hadoop and Spark to ingest, transform, and analyze high-volume market and trading data. The engineer will contribute to the development and maintenance of distributed data pipelines, ensuring performance, scalability, and reliability across the analytics infrastructure. You will work with large datasets, data lake architecture, and related technologies to help build a proprietary analytics platform. The ideal candidate will have a strong understanding of big data frameworks and a passion for enabling data-driven decision-making in a fast-paced financial environment.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy.
Requirements
- 7+ years of experience in a Data Engineer role utilizing Python scripting
- Experience with Hadoop and Spark
- Background in financial services or analytics platforms
- Familiarity with tools like Apache Hudi, Hive, and Airflow is a plus
Senior Big Data Engineer

Posted 15 days ago
Job Description
Reason for Open Position: Tenure backfill
Position Location:
+ Most preferred: Pittsburgh, PA - Two PNC Plaza, 620 Liberty Ave
+ Secondary: Cleveland, OH - Strongsville Technology Center, 8100 Mohawk Dr.
Work Arrangement: In-office (some flexibility may be available in the future)
Travel: If based in Ohio, may need to travel to Pittsburgh 1x per month (or every 2 months)
Schedule: Monday-Friday (some weekend flexibility depending on project urgency)
Working Hours: 8 AM - 5 PM EST, 40 hours/week
Time Zone: EST
Overtime: N/A
Target Start Date: 8/1
Assignment Length: 6 months
Contract Extension: Possible, depending on performance (preferred conversion to FTE within 6 months)
Function of the Group
+ Develop data requirements based on regulatory requirements
Current Initiatives / Projects
+ Regulatory-driven data engineering and support initiatives
Industry Background
+ Banking/financial industry experience is preferred (PNC experience is a strong plus)
Team Dynamic
+ Team of 12 (all 5+ years' experience)
+ Locations: Pittsburgh, Dallas, Offshore
Roles & Responsibilities
+ Design and implement new applications
+ Maintain existing applications
+ Support RTB (Run the Bank) functions
Required Technical Skills (4+ years)
+ Spark 3
+ Python
+ Shell Scripting
+ Hadoop Cloudera (admin, Hadoop tables, HDFS file processing)
+ Data & System Architecture experience
+ Data Modeling (Erwin Tool)
+ Alation or similar data mining/reporting tools
Flex Skills
+ Agile methodology
+ Jira, Git Bash
+ Jenkins, uDeploy
+ CA7
Soft Skills
+ Team player
+ Strong analytical skills
+ Banking industry experience preferred
+ 3+ years software development experience
Education / Certifications
+ Bachelor's degree required (relevant certifications a plus)
Role Differentiators
+ Career growth opportunities
+ Supportive and collaborative work environment
+ Strong team culture
Interview Process
+ Video interview with manager (up to 1 hour)
+ In-person technical assessment (1 hour)
+ In-person panel interview (same day as Round 2, 1 hour)
If interested, please send your updated resume to:
System One, and its subsidiaries including Joulé, ALTA IT Services, CM Access, TPGS, and MOUNTAIN, LTD., are leaders in delivering workforce solutions and integrated services across North America. We help clients get work done more efficiently and economically, without compromising quality. System One not only serves as a valued partner for our clients, but we offer eligible full-time employees health and welfare benefits coverage options including medical, dental, vision, spending accounts, life insurance, voluntary plans, as well as participation in a 401(k) plan.
System One is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, age, national origin, disability, family care or medical leave status, genetic information, veteran status, marital status, or any other characteristic protected by applicable federal, state, or local law.
Sr. Big Data Engineer - Data Infrastructure
Posted 1 day ago
Job Description
ABOUT US:
Headquartered in the United States, TP-Link Systems Inc. is a global provider of reliable networking devices and smart home products, consistently ranked as the world’s top provider of Wi-Fi devices. The company is committed to delivering innovative products that enhance people’s lives through faster, more reliable connectivity. With a commitment to excellence, TP-Link serves customers in over 170 countries and continues to grow its global footprint.
We believe technology changes the world for the better! At TP-Link Systems Inc, we are committed to crafting dependable, high-performance products to connect users worldwide with the wonders of technology.
Embracing professionalism, innovation, excellence, and simplicity, we aim to assist our clients in achieving remarkable global performance and enable consumers to enjoy a seamless, effortless lifestyle.
KEY RESPONSIBILITIES
- Design and build scalable data pipelines: Develop and maintain high-performance, large-scale data ingestion and transformation, including ETL/ELT processes, data de-identification, and security management.
- Data orchestration and automation: Develop and manage automated data workflows using tools like Apache Airflow to schedule pipelines, manage dependencies, and ensure reliable, timely data processing and availability.
- AWS integration and cloud expertise: Build data pipelines integrated with AWS cloud storage and compute services, leveraging scalable cloud infrastructure for data processing.
- Monitoring and data quality: Implement comprehensive monitoring, logging, and alerting to ensure high availability, fault tolerance, and data quality through self-healing strategies and robust data validation processes.
- Technology innovation: Stay current with emerging big data technologies and industry trends, recommending and implementing new tools and approaches to continuously improve data infrastructure.
- Technical leadership: Provide technical leadership for data infrastructure teams, guide architecture decisions and system design best practices. Mentor junior engineers through code reviews and knowledge sharing, lead complex projects from concept to production, and help to foster a culture of operational excellence.
Requirements
REQUIRED QUALIFICATIONS
- Experience requirements: 5+ years in data engineering, software engineering, or data infrastructure, with proven experience building and operating large-scale data pipelines and distributed systems in production, including terabyte-scale big data environments.
- Programming proficiency: Strong Python skills for building data pipelines and processing jobs, with ability to write clean, maintainable, and efficient code. Experience with Git version control and collaborative development workflows required.
- Distributed systems expertise: Deep knowledge of distributed systems and parallel processing concepts. Proficient in debugging and performance tuning large scale data systems, with understanding of data partitioning, sharding, consistency, and fault tolerance in distributed data processing.
- Big data frameworks: Strong proficiency in big data processing frameworks such as Apache Spark for batch processing and other relevant batch processing technologies.
- Database and data warehouse expertise: Strong understanding of relational database concepts and data warehouse principles.
- Workflow Orchestration: Hands-on experience with data workflow orchestration tools like Apache Airflow or AWS Step Functions for scheduling, coordinating, and monitoring complex data pipelines.
- Problem solving and collaboration: Excellent problem solving skills with strong attention to detail and ability to work effectively in collaborative team environments.
PREFERRED QUALIFICATIONS
- Advanced degree: Master's degree in Computer Science or related field providing strong theoretical foundation in large scale distributed systems and data processing algorithms.
- Modern data technology: Exposure to agentic AI patterns, knowledge base systems, and expert systems is a plus. Experience with real-time streaming processing frameworks like Apache Kafka, Apache Flink, Apache Beam, or pub/sub real-time messaging systems is a plus.
- Advanced database and data warehouse expertise: Familiarity with diverse database technologies beyond relational, such as NoSQL, NewSQL, key-value, columnar, graph, document, and time-series databases. Ability to design and optimize schemas/data models for analytics use cases, with experience in modern data storage solutions such as data warehouses (Redshift, BigQuery, Databricks, Snowflake).
- Additional programming languages: Proficiency in additional languages such as Java or Scala is a plus.
- Cloud and infrastructure expertise: Experience with AWS cloud platforms and hands on skills in infrastructure as code (SDK, CDK, Terraform) and container orchestration (Docker/Kubernetes) for automated environment setup and scaling.
Benefits
Salary Range: $150,000 - $200,000
- Free snacks and drinks, and provided lunch on Fridays
- Fully paid medical, dental, and vision insurance (partial coverage for dependents)
- Contributions to 401k funds
- Bi-annual reviews and annual pay increases
- Health and wellness benefits, including free gym membership
- Quarterly team-building events
At TP-Link Systems Inc., we are continually searching for ambitious individuals who are passionate about their work. We believe that passion fuels innovation and collaboration, and drives our entrepreneurial spirit. As a global company, we highly value diverse perspectives and are committed to cultivating an environment where all voices are heard, respected, and valued. We are dedicated to providing equal employment opportunities to all employees and applicants, and we prohibit discrimination and harassment of any kind based on race, color, religion, sex, sexual orientation, national origin, disability status, genetics, protected veteran status, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Beyond compliance, we strive to create a supportive and growth-oriented workplace for everyone. If you share our passion and connection to this mission, we welcome you to apply and join us in building a vibrant and inclusive team at TP-Link Systems Inc.
Please, no third-party agency inquiries, and we are unable to offer visa sponsorships at this time.