438 Big Data jobs in Sunnyvale

Data Engineer III

Sunnyvale, California Walmart

Job Description

What you'll do

Position: Data Engineer III

Job Location: 1375 Crossman Avenue, Sunnyvale, CA 94089

Duties: Identifies possible options to address the business problems within one's discipline through analytics, big data analytics, and automation. Supports the development of business cases and recommendations. Owns delivery of project activity and tasks assigned by others. Supports process updates and changes. Solves business issues. Supports the documentation of data governance processes. Supports the implementation of data governance practices. Understands, articulates, and applies principles of the defined strategy to routine business problems that involve a single function. Extracts data from identified databases. Creates data pipelines and transforms data to a structure that is relevant to the problem by selecting appropriate techniques. Develops knowledge of current data science and analytics trends. Supports the understanding of the priority order of requirements and service level agreements. Helps identify the most suitable source for data that is fit for purpose. Performs initial data quality checks on extracted data. Analyzes complex data elements, systems, data flows, dependencies, and relationships to contribute to conceptual, physical, and logical data models. Develops the Logical Data Model and Physical Data Models including data warehouse and data mart designs. Defines relational tables, primary and foreign keys, and stored procedures to create a data model structure. Evaluates existing data models and physical databases for variances and discrepancies. Develops efficient data flows. Analyzes data-related system integration challenges and proposes appropriate solutions. Creates training documentation and trains end-users on data modeling. Oversees the tasks of less experienced programmers and provides system troubleshooting support. Writes code to develop the required solution and application features by determining the appropriate programming language and leveraging business, technical, and data requirements. Creates test cases to review and validate the proposed solution design. Creates proofs of concept. Tests the code using the appropriate testing approach. Deploys software to production servers. Contributes code documentation, maintains playbooks, and provides timely progress updates.
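The data-modeling duty above (relational tables with primary and foreign keys) can be illustrated with a minimal sketch. The star-schema tables below are hypothetical and use Python's built-in sqlite3 purely for demonstration; nothing here is taken from the posting.

```python
# Minimal sketch of a star-schema data model (hypothetical tables and columns),
# illustrating primary/foreign key definitions with Python's built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce FK constraints in SQLite

conn.executescript("""
CREATE TABLE dim_store (
    store_id   INTEGER PRIMARY KEY,
    store_name TEXT NOT NULL,
    region     TEXT
);

CREATE TABLE dim_product (
    product_id   INTEGER PRIMARY KEY,
    product_name TEXT NOT NULL,
    category     TEXT
);

CREATE TABLE fact_sales (
    sale_id    INTEGER PRIMARY KEY,
    store_id   INTEGER NOT NULL REFERENCES dim_store(store_id),
    product_id INTEGER NOT NULL REFERENCES dim_product(product_id),
    sale_date  TEXT NOT NULL,
    quantity   INTEGER NOT NULL,
    amount     REAL NOT NULL
);
""")

# Quick sanity check: fact rows must reference existing dimension rows.
conn.execute("INSERT INTO dim_store VALUES (1, 'Store 1375', 'CA')")
conn.execute("INSERT INTO dim_product VALUES (10, 'Widget', 'Hardware')")
conn.execute("INSERT INTO fact_sales VALUES (100, 1, 10, '2024-01-01', 2, 19.98)")
conn.commit()
```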

Minimum education and experience required: Master's degree or the equivalent in Computer Science, Information Technology, Engineering (any), or a related field; OR Bachelor's degree or the equivalent in Computer Science, Information Technology, Engineering (any), or a related field and 2 years of experience in software engineering or related experience.

Skills required: Experience coding robust scripts for data ingestion and transformation (Python, PySpark, and Scala). Experience writing complex queries to extract, manipulate, and analyze data efficiently (SQL, BigQuery, and MySQL). Experience working on big data technologies including Hadoop and Hive. Experience working on orchestration tools for ETL workflows (Apache Airflow). Experience working on cloud services and using them to build scalable ETL pipelines (Google Cloud Platform, Google Cloud Storage, and Google Dataproc). Experience working on data transformation tools to build sophisticated data modeling, transformation, and data accuracy pipelines (dbt). Experience designing and writing scalable data engineering applications and creating robust unit and end-to-end integration testing strategies to ensure data quality. Employer will accept any amount of experience with the required skills.
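The orchestration requirement (Apache Airflow) lends itself to a brief illustration. The following is a minimal sketch of a daily ETL DAG, assuming Airflow 2.x; the DAG id, task callables, and their contents are hypothetical placeholders, not taken from the posting.

```python
# Hypothetical sketch of an Airflow 2.x DAG orchestrating a daily ETL workflow.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull rows from a source system (e.g., an operational MySQL table).
    print("extracting source data")


def transform(**context):
    # Placeholder: reshape the extracted data (e.g., with pandas or PySpark).
    print("transforming data")


def load(**context):
    # Placeholder: write the transformed data to the warehouse (e.g., BigQuery).
    print("loading to warehouse")


with DAG(
    dag_id="daily_sales_etl",          # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```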

Salary Range: $117,000/year to $234,000/year. Additional compensation includes annual or quarterly performance incentives. Additional compensation for certain positions may also include: Regional Pay Zone (RPZ) (based on location) and Stock equity incentives.

Benefits: At Walmart, we offer competitive pay as well as performance-based incentive awards and other great benefits for a happier mind, body, and wallet. Health benefits include medical, vision and dental coverage. Financial benefits include 401(k), stock purchase and company-paid life insurance. Paid time off benefits include PTO (including sick leave), parental leave, family care leave, bereavement, jury duty and voting. Other benefits include short-term and long-term disability, education assistance with 100% company paid college degrees, company discounts, military service pay, adoption expense reimbursement, and more.

Eligibility requirements apply to some benefits and may depend on your job classification and length of employment. Benefits are subject to change and may be subject to a specific plan or program terms. For information about benefits and eligibility, see One.Walmart.com.

Wal-Mart is an Equal Opportunity Employer.

About Walmart

At Walmart, we help people save money so they can live better. This mission serves as the foundation for every decision we make, from responsible sourcing to sustainability, and everything in between. As a Walmart associate, you will play an integral role in shaping the future of retail, tech, merchandising, finance, and hundreds of other industries, all while affecting the lives of millions of customers all over the world. Here, your work makes an impact every day. What are you waiting for?

Walmart Inc. is an Equal Opportunity Employer - By Choice. We believe we are best equipped to help our associates, customers, and the communities we serve live better when we really know them. That means understanding, respecting, and valuing unique styles, experiences, identities, abilities, ideas, and opinions, while welcoming all people.

"At Walmart, we get the opportunity to grow professionally and personally-all while improving how we work and what we deliver to consumers." - Lola, Project Analyst

Job No Longer Available

This position is no longer listed on WhatJobs. The employer may be reviewing applications, may have filled the role, or may have removed the listing.

However, we have similar jobs available for you below.

Big Data Engineer

95053 Santa Clara, California TechDigital Group

Posted 2 days ago

Job Description

As a Principal Big Data Engineer, you will be an integral member of our data ingestion and processing platform team, responsible for architecture, design, and development. You will need the ability to adapt conventional big-data frameworks and tools to the use cases required by the project, communicate with research and development teams and data scientists, and find and resolve bottlenecks. You will design and implement architectural models for scalable data processing and scalable data storage, and build tools for proper data ingestion from multiple heterogeneous sources.

Skills
1) 6+ years of experience in design and implementation in an environment with hundreds of terabytes of data
2) 6+ years of experience with large data processing tools such as: Dataflow, GKE, BQ, Beam
3) 6+ years of experience with Java
4) Can-do attitude on problem solving, quality and ability to execute
5) Excellent interpersonal and teamwork skills
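The tooling named in item 2 (Dataflow, BQ, Beam) can be illustrated with a minimal Apache Beam pipeline in Python; note the role itself asks for Java, and the file paths and parsing logic below are hypothetical, a sketch rather than anything from the posting.

```python
# Minimal Apache Beam pipeline sketch (runs on the local runner by default;
# the same pipeline can be submitted to Google Cloud Dataflow via pipeline options).
# Input/output paths and the parsing logic are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(line):
    """Turn one JSON log line into an (event_type, 1) pair for counting."""
    event = json.loads(line)
    return event.get("type", "unknown"), 1


def run():
    options = PipelineOptions()  # add --runner=DataflowRunner etc. for Dataflow
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadLines" >> beam.io.ReadFromText("events.jsonl")
            | "ParseEvents" >> beam.Map(parse_event)
            | "CountPerType" >> beam.CombinePerKey(sum)
            | "Format" >> beam.MapTuple(lambda k, v: f"{k}\t{v}")
            | "Write" >> beam.io.WriteToText("event_counts")
        )


if __name__ == "__main__":
    run()
```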

Big Data Engineer

95053 Santa Clara, California Omni Inclusive

Posted 2 days ago

Job Description

As a Principal Big Data Engineer, you will be an integral member of our data ingestion and processing platform team, responsible for architecture, design, and development. You will need the ability to adapt conventional big-data frameworks and tools to the use cases required by the project, communicate with research and development teams and data scientists, and find and resolve bottlenecks. You will design and implement architectural models for scalable data processing and scalable data storage, and build tools for proper data ingestion from multiple heterogeneous sources.
Skills
1) 6+ years of experience in design and implementation in an environment with hundreds of terabytes of data
2) 6+ years of experience with large data processing tools such as: Dataflow, GKE, BQ, Beam
3) 6+ years of experience with Java
4) Can-do attitude on problem solving, quality and ability to execute
5) Excellent interpersonal and teamwork skills

Big Data Engineer

94306 Palo Alto, California Cygnus Professionals

Posted 2 days ago

Job Description

Headquartered in New Jersey (U.S.), Cygnus Professionals Inc. is a next-generation global information technology solutions and consulting company, powered by a strong management and leadership team with over 30 person-years of experience. Cygnus has a presence in more than 4 countries with over 25 satisfied customers. We aim to expand across industries and geographies with our industry-focused business excellence. Cygnus Professionals Inc. has been recognized by the US Pan Asian American Chamber of Commerce Education Foundation (USPAACC) as one of the “Fast 100 Asian American Businesses” for its rapid revenue growth over the past two years.

Job Description
Greetings from Cygnus Professionals. We are currently seeking candidates for the following position. If you are interested, please submit your updated profile, contact details, and rate.
Duration: Potential Contract to Hire
Mode of Interview: Skype after phone screening
Eligibility: Must be a Green Card holder, US Citizen, or hold an EAD. Candidates with other statuses cannot be considered at this time.

Position Responsibilities:
•Experience with Java, Spark, and Hadoop
•Minimum of 2 years of experience with distributed systems
•Knowledge in distributed system design, data pipelining, and implementation
•Knowledge of machine learning algorithms
•Experience building large-scale applications using software design patterns and OO principles
•Experience with distributed computing (Hadoop/Spark/Cloud) or parallel processing (CUDA/threads/MPI)
•Expertise in design patterns (UML diagrams) and data modeling for large-scale analytic systems
•Experience converting raw data into structured data suitable for productization
•Familiarity with data warehousing, distributed/parallel processing, Hadoop, Cloud technologies, HDFS, and Linux clusters
•Experience with Agile, Scrum, and SDLC methodologies
•Ability to work in a research-oriented, fast-paced, technical environment
•Strong communication and interpersonal skills, quick learner, collaborative attitude

Additional Information
**U.S. Citizens and authorized work candidates are encouraged to apply. We are unable to sponsor at this time.
**All information will be kept confidential according to EEO guidelines.
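The "converting raw data into structured data" responsibility above can be sketched with PySpark. The paths, schema, and column names below are hypothetical; this is a rough illustration of the step, not anything taken from the posting.

```python
# Hypothetical PySpark sketch: turn semi-structured JSON logs into a
# structured, queryable table. Paths, schema, and column names are made up.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw_to_structured").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/logs/")  # hypothetical location

structured = (
    raw
    .withColumn("event_date", F.to_date("timestamp"))
    .withColumn("user_id", F.col("user.id"))
    .select("event_date", "user_id", "event_type", "payload")
    .dropna(subset=["event_date", "user_id"])
)

# Persist in a columnar format partitioned by date for downstream analytics.
structured.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/logs/"
)
```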

Big Data Engineer (Lead)

95053 Santa Clara, California Tekfortune Inc

Posted 2 days ago

Job Description

Tekfortune is a fast-growing consulting firm specializing in permanent, contract, and project-based staffing services for the world's leading organizations in a broad range of industries. In this quickly changing economic landscape, virtual recruiting and remote work are critical for the future of work. To support active project demands and skills gaps, our staffing experts can help you find the best job for you.

Role:
Location:
Duration:
Required Skills:
Job Description:

For more information and other jobs available please contact our recruitment team at To view all the jobs available in the USA and Asia please visit our website at

Senior Big Data Engineer

95053 Santa Clara, California Nutanix

Posted 2 days ago

Job Description

Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
•Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field.
•2+ years of academic or work experience with programming languages such as C, C++, Java, Python, etc.

Preferred Qualifications:
•3+ years of experience as a Data Engineer or in a similar role
•Experience with data modeling, data warehousing, and building ETL pipelines
•Solid working experience with Python, AWS analytical technologies and related resources (Glue, Athena, QuickSight, SageMaker, etc.)
•Experience with Big Data tools, platforms, and architecture, with solid working experience with SQL
•Experience working in a very large data warehousing environment and with distributed systems
•Solid understanding of various data exchange formats and complexities
•Industry experience in software development, data engineering, business intelligence, data science, or a related field, with a track record of manipulating, processing, and extracting value from large datasets
•Strong data visualization skills
•Basic understanding of Machine Learning; prior experience in ML Engineering a plus
•Ability to manage on-premises data and make it inter-operate with AWS-based pipelines
•Ability to interface with Wireless Systems/SW engineers and understand the Wireless ML domain; prior experience in the Wireless (5G) domain a plus

Education: Bachelor's degree in computer science, engineering, mathematics, or a related technical discipline. Preferred: Master's in CS/ECE with a Data Science/ML specialization.

Principal Duties and Responsibilities: Completes assigned coding tasks to specifications on time without significant errors or bugs. Adapts to changes and setbacks in order to manage pressure and meet deadlines. Collaborates with others inside the project team to accomplish project objectives. Communicates with the project lead to provide status and information about impending obstacles. Quickly resolves complex software issues and bugs. Gathers, integrates, and interprets information specific to a module or sub-block of code from a variety of sources in order to troubleshoot issues and find solutions. Seeks others' opinions and shares own opinions with others about ways in which a problem can be addressed differently. Participates in technical conversations with tech leads/managers. Anticipates and communicates issues with the project team to maintain open communication. Makes decisions based on incomplete or changing specifications and obtains adequate resources needed to complete assigned tasks. Prioritizes project deadlines and deliverables with minimal supervision. Resolves straightforward technical issues and escalates more complex technical issues to an appropriate party (e.g., project lead, colleagues). Writes readable code for large features or significant bug fixes to support collaboration with other engineers. Determines which work tasks are most important for self and junior engineers, stays focused, and deals with setbacks in a timely manner. Unit tests own code to verify the stability and functionality of a feature.
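The AWS analytics tooling listed above (Glue, Athena, QuickSight, SageMaker) is typically driven programmatically; below is a minimal, hypothetical sketch of running an Athena query from Python with boto3. The database, query, and S3 output location are placeholders, and a real pipeline would add error handling.

```python
# Hypothetical sketch: run an Amazon Athena query from Python with boto3 and
# poll until it finishes. Database, table, and S3 locations are placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="us-west-2")

response = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll for completion (a production job would add timeouts and retries).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print("Athena query finished with state:", state)
```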

Big Data Hadoop Engineer

94566 Pleasanton, California Buxton Consulting

Posted today

Job Description

Must Haves
  1. Strong experience in Big Data, Cloudera Distribution 7.x, and RDBMS development
  2. 4-5 years of programming experience in Python, Java, Scala, and SQL is a must.
  3. Strong experience building data pipelines using Hadoop components: Sqoop, Hive, SOLR, MR, Impala, Spark, Spark SQL, HBase.
  4. Strong experience with REST API development using Python frameworks (Django, Flask, FastAPI, etc.) and the Java Spring Boot framework (a minimal Flask sketch follows this list)
  5. Project experience in AI/Machine Learning and NLP development.
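As referenced in item 4, here is a minimal Flask sketch of a REST endpoint; the routes, payloads, and in-memory store are hypothetical and stand in for a real backing service such as HBase or Impala.

```python
# Minimal Flask REST endpoint sketch (hypothetical routes and payloads).
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for a real backing store (e.g., HBase or Impala).
PIPELINE_RUNS = {"2024-01-01": {"status": "succeeded", "rows_loaded": 120000}}


@app.get("/api/pipeline-runs/<run_date>")
def get_run(run_date):
    run = PIPELINE_RUNS.get(run_date)
    if run is None:
        return jsonify(error="run not found"), 404
    return jsonify(run_date=run_date, **run)


@app.post("/api/pipeline-runs/<run_date>")
def record_run(run_date):
    PIPELINE_RUNS[run_date] = request.get_json(force=True)
    return jsonify(status="recorded"), 201


if __name__ == "__main__":
    app.run(debug=True)
```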

***Please rate candidates on a scale of 1-10 for the skills listed above when submitting resumes.

TECHNICAL KNOWLEDGE AND SKILLS:
•Strong experience in Big Data, Cloudera Distribution 7.x, RDBMS
•4-5 years of programming experience in Python, Java, Scala, and SQL is a must.
•Strong experience building data pipelines using Hadoop components: Sqoop, Hive, SOLR, MR, Impala, Spark, Spark SQL, HBase.
•Strong experience with REST API development using Python frameworks (Django, Flask, FastAPI, etc.) and the Java Spring Boot framework
•Microservices/web service development experience using the Spring framework
•Experience with Dask, NumPy, Pandas, Scikit-Learn
•Proficient in Machine Learning algorithms: Supervised Learning (Regression, Classification, SVM, Decision Trees, etc.), Unsupervised Learning (Clustering), and Reinforcement Learning
•Strong experience working in real-time analytics like Spark/Kafka/Storm (a streaming sketch follows this list)
•Experience with GitLab, Jenkins, JIRA
•Expertise in Unix/Linux environments, writing scripts and scheduling/executing jobs
•Strong experience with Data Science notebooks like Jupyter, Zeppelin, RStudio, PyCharm, etc.
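As referenced in the real-time analytics bullet, a minimal Spark Structured Streaming job reading from Kafka might look like the following; the brokers, topic, schema, and sink are placeholders, and the spark-sql-kafka connector package must be available on the cluster.

```python
# Hypothetical Spark Structured Streaming sketch: consume JSON events from
# Kafka and maintain a windowed count. Requires the spark-sql-kafka connector.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka_stream_demo").getOrCreate()

schema = StructType([
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")   # placeholder brokers
    .option("subscribe", "events")                          # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

counts = events.groupBy(F.window("event_time", "5 minutes"), "event_type").count()

query = (
    counts.writeStream.outputMode("complete")
    .format("console")   # swap for a real sink (e.g., Parquet or Kafka) in production
    .start()
)
query.awaitTermination()
```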
PROFESSIONAL SKILLS:
•Strong analytical skills with the ability to analyze information and identify and formulate solutions to problems.
•Provide more in-depth analysis with a high-level view of goals and end deliverables.
•Complete work within a reasonable time frame under the supervision of a manager or team lead.
•Plan and manage all aspects of the support function.
•Extensive knowledge of and proven experience with data processing systems, and methods of developing, testing and moving solutions to implementation.
•Strong knowledge in project management practices and ability to document processes and procedures as needed.
•Work collaboratively with other support team members and independently on assigned tasks and deliverables with minimum supervision
•Communicate effectively with users at all levels, from data entry technicians up to senior management, verbally and in writing.
•Self-motivated, working closely and actively communicating with team members to accomplish time critical tasks and deliverables
•Ask questions and share information gained with other support team members, recording and documenting this knowledge
•Elicit and gather user requirements and/or problem description information, and record this information accurately
•Listen carefully and act upon user requirements
•Convey and explain complex problems and solutions in an understandable language to both technical and non-technical persons
•Present technical solutions to management and decision makers
•Follow the lead of others on assigned projects as well as take the lead when deemed appropriate
•Think creatively and critically, analyzing complex problems, weighing multiple solutions, and carefully selecting solutions appropriate to the business needs, project scope, and available resources
•Take responsibility for the integrity of the solution

Thank you

Preethi Madhusudhanan

Director of Talent Acquisition

Cell: +1 (

Email:

Buxton Consulting, WRMSDC Certified MBE

Big Data Hadoop Engineer - Onsite

94566 Pleasanton, California Intelliswift

Posted 2 days ago

Job Description

Objective & Deliverables The Big Data Engineer shall lead the Big Data Engineering team. The consultant will play the role of technical lead and provide professional services to support the long term IT strategy and planning to include high level analysis, professional reports and presentations, and mentoring, support and training. Technical Knowledge and Skills: 4+ years of hands-on Development, Deployment and production Support experience in Big Data environment. 4-5 years of programming experience in Java, Scala, Python. Proficient in SQL and relational database design and methods for data retrieval. Knowledge of NoSQL systems like HBase or Cassandra Hands-on experience in Cloudera Distribution 6.x Hands-on experience in creating, indexing Solr collections in Solr Cloud environment. Hands-on experience building data pipelines using Hadoop components Sqoop, Hive, Solr, MR, Impala, Spark, Spark SQL. Must have experience with developing Hive QL, UDF's for analyzing semi structured/structured datasets. Must have experience with Spring framework, Web Services, REST API's and MicroServices. Hands-on experience ingesting and processing various file formats like Avro/Parquet/Sequence Files/Text Files etc. Must have working experience in the data warehousing and Business Intelligence systems. Expertise in Unix/Linux environment in writing scripts and schedule/execute jobs. Successful track record of building automation scripts/code using Java, Bash, Python etc. and experience in production support issue resolution process. Experience in building Client models using MLLib or any Client tools. Hands-on experience working in Real-Time analytics like Spark/Kafka/Storm Experience with Graph Databases like Neo4J, Tiger Graph, Orient DB Agile development methodologies. Top Must Haves: 5+ years of strong hands-on Design and Development experience in Hadoop/Big Data Environment. Strong Hands-on coding experience in Java, Scala and Python languages Strong Hands-on experience with Spring framework, Web Services, REST API's and MicroServices. 3+ years of Hands-on experience building data pipelines using Hadoop components Sqoop, Kafka, Hive, SOLR, Map Reduce, Spark, Spark SQL, HBase etc. #J-18808-Ljbffr

Senior Software Engineer, Big Data

94305 Stanford, California ZipRecruiter

Posted 4 days ago

Job Description

We offer a hybrid work environment. Most US-based positions can also be performed remotely (any exceptions will be noted in the Minimum Qualifications below).

Our Mission:

To actively connect people to their next great opportunity.

Who We Are:

ZipRecruiter is a leading online employment marketplace. Powered by AI-driven smart matching technology, the company actively connects millions of all-sized businesses and job seekers through innovative mobile, web, and email services, as well as through partnerships with the best job boards on the web. ZipRecruiter has the #1 rated job search app on iOS & Android.

Summary:

Our team has a unique opportunity to work on applications and data at scale, serving millions of jobseekers and tens of thousands of customers. We're building an efficient marketplace of jobseekers and employers, and we need generalist software engineers to build fast, scalable, and effective applications, stream and batch data processing, ML infrastructure, and a variety of other systems, all to help connect people to their next job. We provide an essential service and have a thriving business as a result.

Our stack is complex, and we're looking for engineers who know how to write evolvable, properly instrumented, and efficient code as part of a growing distributed system. We're working on data-driven systems and applications and need people who can build fast services and processes to power all of the intelligence we use to help change people's lives.

Responsibilities:

  • Build data processing and exploration pipelines along with ML infrastructure to power our intelligence
  • Deploy a range of cloud-based technologies for critical projects
  • Write, test, instrument, and deploy code to our Kubernetes environment
  • Help drive the innovation and evolution of ZipRecruiter
Minimum Qualifications:
  • 5+ years of professional software development experience with a focus on big data technologies
  • Experience with Hadoop, Spark, Hive, and/or other big data technologies
  • Comprehensive computer science fundamentals in coding, object-oriented programming, data structures, and algorithms
  • Experience with containerization technologies like Docker and/or Kubernetes
Preferred Qualifications:
  • 8+ years of professional software development experience, with a focus on big data technologies
  • BS/MS/PhD in Mathematics, Computer Science, Physics, related technical field or equivalent practical experience
  • Experience with data integration tools like Apache Kafka, Flume, and NiFi
  • Familiarity with data storage technologies such as Delta Lake, HBase, Cassandra, and MongoDB
  • Familiarity with Apache Hudi, Apache Beam, Apache Flink, Google Cloud Dataflow, Amazon Kinesis Data Analytics, and/or Azure Databricks
As part of our team you'll enjoy:
  • Competitive compensation
  • Exceptional benefits package
  • Flexible Vacation & Paid Time Off
  • Employer-matched 401(k) plan

The US base salary range for this full-time position is $140,000-$200,000. Our salary ranges are determined by role, level, and location, and the range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position across all US locations. Within the range, individual pay is determined by work location, role-related knowledge and skills, depth of experience, relevant education or training, and additional role-related considerations.

Depending on the position offered, equity, bonuses, commission, or other forms of compensation may also be provided as part of a total compensation package, in addition to a full range of medical, financial, and other benefits.

ZipRecruiter is proud to be an equal opportunity employer and provides equal employment opportunities (EEO) to all employees and applicants without regard to race, color, religion, sex, national origin, age, disability, veteran status, sexual orientation, gender identity or genetics.

Privacy Notice: For information about ZipRecruiter's collection and processing of job applicant personal data for this job, please see our Privacy Notice at:

Data Engineering & Big Data Architect

94087 Sunnyvale, California Infosys

Posted today

Job Description

Data Engineering & Big Data Architect Technology Architect - US Company ITL USA Requisition ID 135930BR Infosys is seeking a highly experienced Data Engineering & Big Data Architect to join our team and perform Technical Product management/Data analyst tasks and own the development of a Tiered data architecture for our data platform . In this role, they would be responsible for crafting and building a comprehensive data architecture that will enable seamless data integration and enable the delivery of high-quality insights to our leadership and business stakeholders. You will collaborate with some of the best talent in the industry to create and implement innovative high-quality solutions, lead, and participate in sales and pursuits focused on our clients' business needs. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued. Candidate must be located within commuting distance of Austin, TX/ Sunnyvale, CA or be willing to relocate to the area. This position may require travel to project locations Required Qualifications: Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education. At least 7 years of experience in Information Technology. Experience with Hadoop distributed frameworks while handling large amount of data using Spark or PYSpark and Hadoop Ecosystems. Proven experience in data engineering, data architecture, or a related field Experience with Spark or PySpark is required. Strong understanding of data modeling, data warehousing, and ETL concepts Proficiency in SQL and experience with at least one major data analytics platform, such as Hadoop or Spark Experience with Scala or Python is required. Preferred Qualifications: Experience in understanding of Design Patterns, ability to discuss tradeoffs between RDBMS vs Distributed Storage At least 5 years of Experience with Spark or PySpark, Python, Scala and Data Engineering Experience in design and implementing a tiered data architecture that integrates analytics data from multiple sources in an efficient and effective manner. Experience with data orchestration tools like Airflow is a nice to have. Excellent problem-solving and analytical skills, and the ability to work well under tight deadlines. Excellent interpersonal skills and the ability to collaborate effectively with cross-functional teams. Experience in developing data models and mapping rules to transform raw data into actionable insights and reports. Experience in collaborating with the analytics and business teams to understand their requirements, with cross-functional teams to define and implement data governance policies and standards. Experience in developing data validation and reconciliation processes to ensure data quality and accuracy is met. Experience with development and maintenance of user documentation, including data models, mapping rules, and data dictionaries. 
Estimated annual compensation range for candidate based in the below locations will be: Sunnyvale, CA- $104079 -$75310 Along with competitive pay, as a full-time Infosys employee you are also eligible for the following benefits :- Long-term/Short-term Disability Health and Dependent Care Reimbursement Accounts Insurance (Accident, Critical Illness , Hospital Indemnity, Legal) 401(k) plan and contributions dependent on salary level The job entails sitting as well as working at a computer for extended periods of time. Should be able to communicate by telephone, email or face to face. Travel may be required as per the job requirements. Job description Infosys is seeking a highly experienced Data Engineering & Big Data Architect to join our team and perform Technical Product management/Data analyst tasks and own the development of a Tiered data architecture for our data platform . In this role, they would be responsible for crafting and building a comprehensive data architecture that will enable seamless data integration and enable the delivery of high-quality insights to our leadership and business stakeholders. You will collaborate with some of the best talent in the industry to create and implement innovative high-quality solutions, lead, and participate in sales and pursuits focused on our clients' business needs. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued. Candidate must be located within commuting distance of Austin, TX/ Sunnyvale, CA or be willing to relocate to the area. This position may require travel to project locations Required Qualifications: Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education. At least 7 years of experience in Information Technology. Experience with Hadoop distributed frameworks while handling large amount of data using Spark or PYSpark and Hadoop Ecosystems. Proven experience in data engineering, data architecture, or a related field Experience with Spark or PySpark is required. Strong understanding of data modeling, data warehousing, and ETL concepts Proficiency in SQL and experience with at least one major data analytics platform, such as Hadoop or Spark Experience with Scala or Python is required. Preferred Qualifications: Experience in understanding of Design Patterns, ability to discuss tradeoffs between RDBMS vs Distributed Storage At least 5 years of Experience with Spark or PySpark, Python, Scala and Data Engineering Experience in design and implementing a tiered data architecture that integrates analytics data from multiple sources in an efficient and effective manner. Experience with data orchestration tools like Airflow is a nice to have. Excellent problem-solving and analytical skills, and the ability to work well under tight deadlines. Excellent interpersonal skills and the ability to collaborate effectively with cross-functional teams. Experience in developing data models and mapping rules to transform raw data into actionable insights and reports. Experience in collaborating with the analytics and business teams to understand their requirements, with cross-functional teams to define and implement data governance policies and standards. Experience in developing data validation and reconciliation processes to ensure data quality and accuracy is met. 
Experience with development and maintenance of user documentation, including data models, mapping rules, and data dictionaries. Estimated annual compensation range for candidate based in the below locations will be: Sunnyvale, CA- $1 4079 - 175310 Along with competitive pay, as a full-time Infosys employee you are also eligible for the following benefits :- Medical/Dental/Vision/Life Insurance Long-term/Short-term Disability Health and Dependent Care Reimbursement Accounts Insurance (Accident, Critical Illness , Hospital Indemnity, Legal) 401(k) plan and contributions dependent on salary level Paid holidays plus Paid Time Off The job entails sitting as well as working at a computer for extended periods of time. Should be able to communicate by telephone, email or face to face. Travel may be required as per the job requirements. About Us Infosys is a global leader in next-generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem. Infosys provides equal employment opportunities to applicants and employees without regard to race; color; sex; gender identity; sexual orientation; religious practices and observances; national origin; pregnancy, childbirth, or related medical conditions; status as a protected veteran or spouse/family member of a protected veteran; or disability. #J-18808-Ljbffr
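The data validation and reconciliation responsibility listed above can be made concrete with a small sketch; the PySpark example below compares row counts and key coverage between a hypothetical source extract and target table, and is only an illustration under those assumptions.

```python
# Hypothetical PySpark sketch of a source-vs-target reconciliation check:
# compare row counts and look for keys missing from the target. Table paths
# and key columns are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("reconciliation_check").getOrCreate()

source = spark.read.parquet("/data/raw/customers/")        # placeholder path
target = spark.read.parquet("/data/curated/customers/")    # placeholder path

source_count = source.count()
target_count = target.count()

# Keys present in the source extract but absent from the curated target table.
missing_keys = (
    source.select("customer_id")
    .distinct()
    .join(target.select("customer_id").distinct(), on="customer_id", how="left_anti")
)

print(f"source rows: {source_count}, target rows: {target_count}")
print(f"keys missing from target: {missing_keys.count()}")

# A production job would publish these metrics and fail the pipeline when
# thresholds are exceeded, rather than just printing them.
```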

Lead Big Data Software Engineer Threat Intelligence Cloud

95053 Santa Clara, California Palo Alto Networks

Posted 2 days ago

Job Description

Palo Alto Networks is the next-generation security company maintaining trust in the digital age by helping tens of thousands of organizations worldwide prevent cyber breaches. With our deep cybersecurity expertise, commitment to innovation, and game-changing Next-Generation Security Platform, customers can confidently pursue a digital-first strategy and embark on new technology initiatives, such as cloud and mobility. We have achieved this by having devices on the network, agents on endpoint devices, and services in our cloud to detect and prevent malicious activities.

The Threat Intelligence Cloud Team is looking for a senior-level Big Data Software Engineer responsible for developing and operating this big data distributed system platform. We are looking for an experienced, hands-on developer to be responsible for developing Palo Alto Networks' AutoFocus. This is our contextual threat intelligence service that accelerates analysis, correlation, and prevention workflows. Unique, targeted attacks are automatically prioritized with full context, allowing security teams to respond to critical attacks faster, without additional IT security resources. If your passion is Big Data and you want to join us in building our next-generation security platform, then we want to hear from you!

Keys to success for this position:
•Experience working on Big Data computing systems like Hadoop MapReduce, Spark, etc.
•Real-time streaming like Kafka, Spark Streaming.
•NoSQL databases like HBase, Cassandra, etc.; working with ElasticSearch is a big plus.

A successful candidate must have:
•8+ years of overall software development experience and 2+ years developing large-scale big data projects and/or distributed systems
•Hands-on experience with object-oriented languages like Java or C++
•Ability to contribute technically to design and development
•Experience partnering with DevOps for all operational aspects of the system
•Great communication skills and ability to work smoothly with cross-functional teams
•Hands-on experience working with large-scale data ingestion, processing, and storage
•Master's degree in computer science or related fields

Preferred skills:
•Hands-on experience with the Hadoop ecosystem, Spark, non-relational databases (NoSQL, MongoDB, Cassandra), messaging systems (Kafka, RabbitMQ).
•Comfortable building automation systems for quality control, builds, monitoring, and alerting.

Learn more about Palo Alto Networks HERE and check out our FAST FACTS
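The streaming-plus-search stack described above (Kafka, Spark Streaming, ElasticSearch, HBase) can be sketched minimally in Python; the example below consumes events from Kafka and indexes them into Elasticsearch using the kafka-python and elasticsearch client libraries. Topic, index, and host names are placeholders, not anything from the posting.

```python
# Hypothetical sketch: consume threat events from Kafka and index them into
# Elasticsearch. Topic, index, and host names are placeholders.
import json

from elasticsearch import Elasticsearch
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "threat-events",                              # placeholder topic
    bootstrap_servers=["localhost:9092"],         # placeholder brokers
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

es = Elasticsearch("http://localhost:9200")       # placeholder cluster

for message in consumer:
    event = message.value
    # Index each event; a real pipeline would batch with the bulk helper
    # and add retries, dead-lettering, and schema validation.
    es.index(index="threat-intel", document=event)
```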
