19,579 Spark Developer jobs in the United States
Spark Developer

Posted today
Job Description
A career in IBM Software means you'll be part of a team that transforms our customers' challenges into solutions.
Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career.
We are seeking a skilled Spark Developer to join our IBM Software team. As part of our team, you will be responsible for developing and maintaining high-quality software products, working with a variety of technologies and programming languages.
IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.
**Your role and responsibilities**
· Design, implement, and maintain distributed data processing pipelines using Apache Spark (Core, SQL, Streaming) and Scala.
· Work closely with data architects and business teams to develop efficient, scalable, and high-performance data solutions.
· Write clean, testable, and well-documented code that meets enterprise-level standards.
· Perform Spark performance tuning and optimization across batch and streaming jobs.
· Integrate data from multiple sources, including relational databases, APIs, and real-time data streams (Kafka, Flume, etc.).
· Collaborate in Agile development environments, participating in sprint planning, reviews, and retrospectives.
· Troubleshoot production issues and provide timely fixes and improvements.
· Create unit and integration tests to ensure solution integrity and stability.
· Mentor junior developers and help enforce coding standards and best practices.
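In practice, the "clean, testable, well-documented code" and unit-testing responsibilities above often mean keeping transformation logic as pure Scala functions that can be unit-tested without a cluster, then plugged into Spark jobs. A minimal sketch (the names and data shapes are illustrative, not from the posting):

```scala
// Pure transformation logic: parse and aggregate raw event lines.
// Because these are plain Scala functions over collections, they can be
// unit-tested directly, then reused inside Spark (e.g. via Dataset.map).
final case class Event(userId: String, amount: Double)

def parseEvent(line: String): Option[Event] =
  line.split(",") match {
    case Array(id, amt) => amt.toDoubleOption.map(Event(id.trim, _))
    case _              => None // malformed rows are dropped, not thrown
  }

def totalByUser(events: Seq[Event]): Map[String, Double] =
  events.groupBy(_.userId).view.mapValues(_.map(_.amount).sum).toMap

val parsed = List("u1, 10.0", "u2, 5.5", "bad-line", "u1, 2.0").flatMap(parseEvent)
val totals = totalByUser(parsed)
```

Keeping parsing and aggregation free of Spark APIs is one common way to satisfy both the "well-documented code" and the "unit and integration tests" bullets at once.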
**Required technical and professional expertise**
* 9+ years of experience in software development with a strong background in Scala and functional programming.
* 4-5+ years of recent hands-on experience in Apache Spark development (RDD, DataFrames, Datasets, Spark SQL).
* Experience with data storage solutions such as HDFS, Hive, Parquet, ORC, or NoSQL databases.
* Solid understanding of data modeling, data wrangling, and data quality best practices.
* Strong understanding of distributed systems, big data architecture, and performance tuning.
* Hands-on experience with at least one cloud platform (AWS, Azure, or GCP).
* Familiarity with CI/CD tools, version control systems such as Git, and Kafka, Airflow, or other ETL/streaming tools.
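The RDD/DataFrame/Dataset experience asked for above centers on a transformation vocabulary (flatMap, map, reduce-by-key) that plain Scala collections also expose, so the classic word count can be sketched locally before porting it to an RDD. This is a local analogue for illustration, not Spark API code:

```scala
// Word count in the same transformation style used on Spark RDDs
// (flatMap -> map -> reduceByKey), expressed on a plain Scala collection.
val lines = List("spark scala spark", "scala streaming")

val counts: Map[String, Int] =
  lines
    .flatMap(_.split("\\s+"))          // tokenize, like rdd.flatMap
    .map(word => (word, 1))            // pair each word with a count of 1
    .groupMapReduce(_._1)(_._2)(_ + _) // local analogue of reduceByKey
```

On an actual RDD the last step would be `reduceByKey(_ + _)`, which additionally controls shuffle behavior and partitioning.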
**Preferred technical and professional experience**
* Experience with Databricks, Delta Lake, or AWS EMR.
* Knowledge of SQL and experience working with RDBMS like PostgreSQL or MySQL.
* Exposure to containerization and orchestration tools like Docker and Kubernetes.
* Experience with agile methodologies and tools such as JIRA and Confluence.
* Understanding of data security, encryption, and governance practices.
* Understanding of data lake and Lakehouse architectures.
* Knowledge of Python, Java, or other backend languages is a plus.
* Contributions to open-source big data projects are a plus.
IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
Scala & Spark Developer
Posted 10 days ago
Job Description
Scala & Spark Developer
Location: Sunnyvale, CA / Austin, TX
Number of positions: 3
Job Description:
5-7 years of overall experience.
4+ years of experience building and supporting scalable Big Data applications, working in a software product development organization that builds modern, scalable pipelines and delivers data promptly in a collaborative team environment.
Proficiency in Hadoop and Big Data processing technologies, e.g. Spark, YARN, HDFS, Oozie, Hive, Airflow; shell scripting.
Strong knowledge of the Spark engine, Spark, and Scala.
Hands-on experience with data processing technologies, ETL processes, and feature engineering.
Expertise in Data Analytics, PL/SQL, NoSQL.
Strong analytical and troubleshooting skills.
Strong interpersonal skills and ability to work effectively across multiple business and technical teams
Excellent oral and written communication skills
Ability to independently learn new technologies.
Passionate team player and fast learner.
Additional Nice to Have:
Experience in commonly used cloud services.
Expertise in columnar storage such as Parquet, Iceberg.
Knowledge of deep learning models.
Senior Spark Developer

Posted today
Job Description
A career in IBM Software means you'll be part of a team that transforms our customers' challenges into solutions.
Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career.
We are seeking a skilled Senior Spark Developer to join our IBM Software team. As part of our team, you will be responsible for developing and maintaining high-quality software products, working with a variety of technologies and programming languages.
IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.
**Your role and responsibilities**
· Design, develop, and optimize big data applications using Apache Spark and Scala.
· Architect and implement scalable data pipelines for both batch and real-time processing.
· Collaborate with data engineers, analysts, and architects to define data strategies.
· Optimize Spark jobs for performance and cost-effectiveness on distributed clusters.
· Build and maintain reusable code and libraries for future use.
· Work with various data storage systems like HDFS, Hive, HBase, Cassandra, Kafka, and Parquet.
· Implement data quality checks, logging, monitoring, and alerting for ETL jobs.
· Mentor junior developers and lead code reviews to ensure best practices.
· Ensure security, governance, and compliance standards are adhered to in all data processes.
· Troubleshoot and resolve performance issues and bugs in big data solutions.
**Required technical and professional expertise**
* 12+ years of total software development experience.
* 5+ years of hands-on experience with Apache Spark and Scala.
* Proficiency in Scala with deep knowledge of functional programming.
* Strong experience with distributed computing, parallel data processing, and cluster computing frameworks; strong problem-solving skills and the ability to work independently or as part of a team.
* Experience with cloud platforms such as AWS, Azure, or GCP (especially EMR, Databricks, or HDInsight).
* Solid understanding of Spark tuning, partitions, joins, broadcast variables, and performance optimization techniques.
* Hands-on experience with Kafka, Hive, HBase, NoSQL databases, and data lake architectures.
* Familiarity with CI/CD pipelines, Git, Jenkins, and automated testing.
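The broadcast variables listed above matter because broadcasting a small dimension table turns a shuffle join into a local hash lookup on each executor. The idea can be sketched without a cluster (a conceptual sketch with made-up data, not Spark API code):

```scala
// Conceptual sketch of a broadcast (map-side) join: the small lookup table
// is shipped to every worker once, and each fact row is joined by hash
// lookup instead of a shuffle. In Spark this is what broadcast(dimDf) or
// sparkContext.broadcast(...) achieves.
val countryByCode: Map[String, String] =          // small "dimension" table
  Map("US" -> "United States", "IN" -> "India")

val facts = List(("order-1", "US"), ("order-2", "IN"), ("order-3", "FR"))

val joined = facts.flatMap { case (orderId, code) =>
  countryByCode.get(code).map(country => (orderId, country)) // inner join
}
```

Spark's SQL optimizer applies this automatically below `spark.sql.autoBroadcastJoinThreshold`; knowing when to force it with a broadcast hint is a typical tuning question for a role like this.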
**Preferred technical and professional experience**
* Experience with Databricks, Delta Lake, or Apache Iceberg.
* Exposure to machine learning pipelines using Spark MLlib or integration with ML frameworks.
* Contributions to open-source big data projects are a plus.
* Excellent communication and leadership skills.
* Understanding of data lake and lakehouse architectures.
* Knowledge of Python, Java, or other backend languages is a plus.
Scala Spark Developer
Posted 1 day ago
Job Description
As a Spark Scala Developer, you will play a critical role in the design, development, deployment, and optimization of data processing applications.
Key Responsibilities:
- Develop and maintain data processing applications using Spark and Scala.
- Collaborate with cross-functional teams to understand data requirements and design efficient solutions.
- Implement test-driven development practices to enhance the reliability of applications.
- Deploy artifacts from lower to higher environments, ensuring smooth transitions.
- Troubleshoot and debug Spark performance issues to ensure optimal data processing.
- Work in an agile environment, contributing to sprint planning and delivering high-quality solutions on time.
- Provide essential support for production batches, addressing issues and providing fixes to meet critical business needs.
Skills/Competencies:
- Strong knowledge of the Scala programming language.
- Excellent problem-solving and analytical skills.
- Proficiency in Spark, including the development and optimization of Spark applications.
- Ability to troubleshoot and debug performance issues in Spark.
- Understanding of design patterns and data structures for efficient data processing.
- Familiarity with database concepts and SQL.
- Java and Snowflake (good to have).
- Experience with test-driven development practices (good to have).
- Familiarity with Python (good to have).
- Knowledge of Databricks (good to have).
- Understanding of DevOps practices (good to have).
Scala Spark Developer
Posted 1 day ago
Job Description
Note: Candidates must be able to work on a W2 basis.
Mandatory Skills: Big Data, Scala, Spark, Core Java
We are seeking a highly skilled and motivated Spark Scala Developer to join our dynamic team. As a Spark Scala Developer, you will play a critical role in the design, development, deployment, and optimization of data processing applications.
Key Responsibilities:
- Develop and maintain data processing applications using Spark and Scala.
- Collaborate with cross-functional teams to understand data requirements and design efficient solutions.
- Implement test-driven development practices to enhance the reliability of applications.
- Deploy artifacts from lower to higher environments, ensuring smooth transitions.
- Troubleshoot and debug Spark performance issues to ensure optimal data processing.
- Work in an agile environment, contributing to sprint planning and delivering high-quality solutions on time.
- Provide essential support for production batches, addressing issues and providing fixes to meet critical business needs.
Skills/Competencies:
- Strong knowledge of the Scala programming language.
- Excellent problem-solving and analytical skills.
- Proficiency in Spark, including the development and optimization of Spark applications.
- Ability to troubleshoot and debug performance issues in Spark.
- Understanding of design patterns and data structures for efficient data processing.
- Familiarity with database concepts and SQL.
- Java and Snowflake (good to have).
- Experience with test-driven development practices (good to have).
- Familiarity with Python (good to have).
- Knowledge of Databricks (good to have).
- Understanding of DevOps practices (good to have).
Scala/Spark Developer
Posted 1 day ago
Job Description
We are looking to hire a hands-on Scala developer responsible for delivering high-quality code on a big data platform, unit testing in an automated test framework, automated DevOps releases, and documentation of technology solutions for transaction monitoring, customer activity review, optimization of scenarios, and other associated transformation activities. The candidate will collaborate with senior stakeholders as well as technology teams to define and influence the strategy for our next-generation solutions, driving innovation while meeting regulatory expectations. The candidate will work with a global team of developers on an in-house solution for various data processing needs related to financial crime control applications.
Responsibilities:
- Understand and implement tactical or strategic solutions for a given business problem.
- Discuss business needs and technology requirements with stakeholders.
- Define and derive strategic solutions, and identify tactical solutions when necessary.
- Write technical design and other solution documents per Agile (SCRUM) standards.
- Perform data analysis to aid development work and other business needs.
- Perform unit testing of developed code, leveraging an automated BDD test framework.
- Participate in the testing effort to validate and approve technology solutions.
- Follow the MS standards for adoption of the automated release process across environments.
- Run the automated regression test suite and support UAT of the developed solution.
- Work collaboratively with other FCT & NFRT teams.
Required Skills:
- 5+ years of experience as a hands-on Scala/Spark developer.
- Hands-on SQL writing skills on RDBMS (DB2) databases.
- Ability to optimize code for performance, scalability, and configurability.
- At least 2 years on an HDFS platform development project.
- Proficiency in data analysis, data profiling, and data lineage.
- Strong oral and written communication skills.
- Experience working in Agile projects.
Desired Skills:
- Understanding of Actimize ACTONE platform features.
- Understanding of, or development on, Azure cloud-based implementations.
- Big data technology certification.
- Experience in machine learning and predictive analytics.
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: IT Services and IT Consulting
Referrals increase your chances of interviewing at Capgemini by 2x.
Scala/Spark Developer - Jersey City
Posted 9 days ago
Job Description
Key Responsibilities:
- Develop, test, and deploy data processing applications using Apache Spark and Scala.
- Optimize and tune Spark applications for better performance on large-scale data sets.
- Work with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Kafka) to build data pipelines and storage solutions.
- Collaborate with data scientists, business analysts, and other developers to understand data requirements and deliver solutions.
- Design and implement high-performance data processing and analytics solutions.
- Ensure data integrity, accuracy, and security across all processing tasks.
- Troubleshoot and resolve performance issues in Spark, Cloudera, and related technologies.
- Implement version control and CI/CD pipelines for Spark applications.
Required Skills & Experience:
- Minimum 8 years of experience in application development.
- Strong hands-on experience in Apache Spark, Scala, and Spark SQL for distributed data processing.
- Hands-on experience with Cloudera Hadoop (CDH) components such as HDFS, Hive, Impala, HBase, Kafka, and Sqoop.
- Familiarity with other Big Data technologies, including Apache Kafka, Flume, Oozie, and Nifi.
- Experience building and optimizing ETL pipelines using Spark and working with structured and unstructured data.
- Experience with SQL and NoSQL databases such as HBase, Hive, and PostgreSQL.
- Knowledge of data warehousing concepts, dimensional modeling, and data lakes.
- Ability to troubleshoot and optimize Spark and Cloudera platform performance.
- Familiarity with version control tools like Git and CI/CD tools (e.g., Jenkins, GitLab).
Compensation, Benefits and Duration
Minimum Compensation: USD 48,000
Maximum Compensation: USD 169,000
Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable, good-faith estimate for the role.
Medical, vision, and dental benefits, 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full time employees.
This position is not available for independent contractors.
No applications will be considered if received more than 120 days after the date of this post.
Hadoop/Spark/Java Developer
Posted 7 days ago
Job Description
Job Location: Charlotte, NC / Newark, DE / Dallas, TX (Fully Onsite)
Job Type: Full-Time
Job Summary:
- Seeking a talented and experienced Hadoop and Spark developer with strong Java expertise to join our data engineering team.
- The ideal candidate will have a solid understanding of big data technologies, hands-on experience with the Hadoop ecosystem, and the ability to build and optimize data pipelines and processing systems using Spark and Java.
- Develop, test, and deploy scalable big data solutions using Hadoop and Spark.
- Write efficient and optimized code in Java to process large datasets.
- Design and implement batch and real-time data processing pipelines using Spark.
- Monitor, troubleshoot, and enhance the performance of Spark jobs.
- Work closely with cross-functional teams to integrate big data solutions into existing systems.
- Debug and resolve complex technical issues related to distributed computing.
- Collaborate on system architecture and contribute to technical design discussions.
- Strong expertise in Java, with experience in writing optimized, high-performance code.
- Solid experience in the Hadoop ecosystem (HDFS, Hive) and Apache Spark (RDD, DataFrame, Dataset, Spark SQL, Spark Streaming).
- Proficiency in designing and building ETL pipelines for big data processing.
- Experience with query optimization and data manipulation using SQL-based technologies like Hive or Impala.
- Hands-on experience with Git or similar version control systems.
- Strong understanding of Linux/Unix based environments for development and deployment.
- Experience with Apache Kafka.
- Exposure to DevOps practices, including CI/CD pipelines.
- Knowledge of Python or Scala is a plus.
Senior Hadoop, Spark, Scala Developer
Posted 10 days ago
Job Description
Skill: Hadoop
- 5+ years of hands-on experience developing large-scale, high-volume enterprise and distributed applications using Big Data technologies.
- Experience in projects implemented on Spark (Spark with Java, or PySpark).
- Hands-on experience in Hive.
- Experience in Core Java.
- Experience in Unix shell scripting.
- Experience with the job scheduling tool Autosys.
- Experience in Banking domain projects.
- Experience with Git for code versioning.
- Experience with development models such as Agile and SDLC.
Hadoop and Spark Lead Developer
Posted 1 day ago
Job Description
Job details
Work Location
Plano, TX
State / Region / Province
Texas
Country
USA
Domain
Delivery
Interest Group
Infosys Limited
Skills
Technology|Analytics - Packages|Python - Big Data, Technology|Big Data - Data Processing|Spark, Technology|Big Data - Hadoop|Hadoop
Company
ITL USA
Requisition ID
129704BR
Job description
Infosys is seeking a Hadoop and Spark Lead Developer. In this role, you will enable digital transformation for our clients in a global delivery model, research technologies independently, recommend appropriate solutions, and contribute to technology-specific best practices and standards. You will be responsible for interfacing with key stakeholders and applying your technical proficiency across different stages of the Software Development Life Cycle. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.
Required Qualifications:
- Candidate must be located within commuting distance of Richardson, TX or Raleigh, NC or be willing to relocate to the area. This position may require travel to project locations.
- Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
- At least 4 years of Information Technology experience
- At least 3 years of experience in Hadoop, Spark, Scala/Python
- Good experience in end-to-end implementation of data warehouse and data marts
- Strong knowledge and hands-on experience in SQL, Unix shell scripting
- Good understanding of data integration, data quality and data architecture
- Experience in Relational Modeling, Dimensional Modeling and Modeling of Unstructured Data
- Good understanding of Agile software development frameworks
- Experience in Banking domain
- Strong communication and Analytical skills
- Ability to work in teams in a diverse, multi-stakeholder environment comprising of Business and Technology teams
- Experience and desire to work in a global delivery environment
About Us
Infosys is a global leader in next-generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem.
Infosys provides equal employment opportunities to applicants and employees without regard to race; color; sex; gender identity; sexual orientation; religious practices and observances; national origin; pregnancy, childbirth, or related medical conditions; status as a protected veteran or spouse/family member of a protected veteran; or disability.