Big Data Engineer

95053 Santa Clara, California Omni Inclusive

Posted 5 days ago


Job Description

As a Principal Big Data Engineer, you will be an integral member of our data ingestion and processing platform team, responsible for architecture, design, and development. You will:
• Adapt conventional big-data frameworks and tools to the use cases required by the project
• Communicate with research and development teams and data scientists, finding bottlenecks and resolving them
• Design and implement architectural models for scalable data processing and scalable data storage
• Build tools for reliable data ingestion from multiple heterogeneous sources
Skills
1) 6+ years of experience in design and implementation in environments with hundreds of terabytes of data
2) 6+ years of experience with large-scale data processing tools such as Dataflow, GKE, BQ, and Beam
3) 6+ years of experience with Java
4) A can-do attitude toward problem solving, quality, and execution
5) Excellent interpersonal and teamwork skills
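For context on what tools like Beam and Dataflow do, here is a toy word count in plain Python that mirrors the map/reduce shape such pipelines distribute across workers; the function and input data are illustrative only, not part of the posting.

```python
from collections import Counter

def word_count(lines):
    # "Map" stage: split each line into individual words.
    words = (w for line in lines for w in line.split())
    # "Reduce" stage: count occurrences per word.
    return dict(Counter(words))

# Hypothetical input standing in for a terabyte-scale source.
counts = word_count(["big data", "data pipelines", "data"])
print(counts)  # {'big': 1, 'data': 3, 'pipelines': 1}
```

A Dataflow or Beam job applies the same two stages, but shards the map and reduce work across many machines.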


Big Data Hadoop Engineer

94566 Pleasanton, California Buxton Consulting

Posted 5 days ago


Job Description

Must Haves
  1. Strong experience in Big Data, Cloudera Distribution 7.x, and RDBMS development
  2. 4-5 years of programming experience in Python, Java, Scala, and SQL is a must.
  3. Strong experience building data pipelines using Hadoop components: Sqoop, Hive, SOLR, MapReduce, Impala, Spark, Spark SQL, and HBase.
  4. Strong experience with REST API development using Python frameworks (Django, Flask, FastAPI, etc.) and the Java Spring Boot framework
  5. Project experience in AI/machine learning and NLP development.

***Please rate candidates on a scale of 1-10 for each of the skills listed above when submitting resumes.

TECHNICAL KNOWLEDGE AND SKILLS:
• Strong experience in Big Data, Cloudera Distribution 7.x, RDBMS
• 4-5 years of programming experience in Python, Java, Scala, and SQL is a must.
• Strong experience building data pipelines using Hadoop components: Sqoop, Hive, SOLR, MapReduce, Impala, Spark, Spark SQL, and HBase.
• Strong experience with REST API development using Python frameworks (Django, Flask, FastAPI, etc.) and the Java Spring Boot framework
• Microservices/web service development experience using the Spring framework
• Experience with Dask, NumPy, Pandas, scikit-learn
• Proficient in machine learning algorithms: supervised learning (regression, classification, SVM, decision trees, etc.), unsupervised learning (clustering), and reinforcement learning
• Strong experience with real-time analytics tools such as Spark, Kafka, and Storm
• Experience with GitLab, Jenkins, JIRA
• Expertise in writing scripts and scheduling/executing jobs in Unix/Linux environments
• Strong experience with data science notebooks and IDEs such as Jupyter, Zeppelin, RStudio, and PyCharm
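As a rough flavor of the SQL-on-Hadoop work listed above (Hive, Impala, and Spark SQL all execute queries of this shape at scale), here is a minimal aggregation using Python's built-in sqlite3 as a stand-in engine; the table and rows are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, clicks INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("ana", 3), ("ben", 5), ("ana", 2)],
)
# Per-user aggregation: the same GROUP BY that Impala or Spark SQL
# would run over a distributed table.
rows = conn.execute(
    "SELECT user, SUM(clicks) FROM events GROUP BY user ORDER BY user"
).fetchall()
print(rows)  # [('ana', 5), ('ben', 5)]
```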
PROFESSIONAL SKILLS:
• Strong analytical skills with the ability to analyze information and identify and formulate solutions to problems.
• Provide more in-depth analysis with a high-level view of goals and end deliverables.
• Complete work within a reasonable time frame under the supervision of a manager or team lead.
• Plan and manage all aspects of the support function.
• Extensive knowledge of and proven experience with data processing systems, and methods of developing, testing and moving solutions to implementation.
• Strong knowledge in project management practices and ability to document processes and procedures as needed.
• Work collaboratively with other support team members and independently on assigned tasks and deliverables with minimum supervision
• Communicate effectively with users at all levels, from data entry technicians up to senior management, verbally and in writing.
• Self-motivated, working closely and actively communicating with team members to accomplish time critical tasks and deliverables
• Ask questions and share information gained with other support team members, recording and documenting this knowledge
• Elicit and gather user requirements and/or problem description information, and record this information accurately
• Listen carefully and act upon user requirements
• Convey and explain complex problems and solutions in an understandable language to both technical and non-technical persons
• Present technical solutions to management and decision makers
• Follow the lead of others on assigned projects as well as take the lead when deemed appropriate
• Think creatively and critically, analyzing complex problems, weighing multiple solutions, and carefully selecting solutions appropriate to the business needs, project scope, and available resources
• Take responsibility for the integrity of the solution

Thank you

Preethi Madhusudhanan

Director of Talent Acquisition

Cell:

Email:

Buxton Consulting, WRMSDC Certified MBE


Big Data Engineer (Lead)

95053 Santa Clara, California Tekfortune Inc

Posted 5 days ago


Job Description

Tekfortune is a fast-growing consulting firm specializing in permanent, contract, and project-based staffing services for the world's leading organizations across a broad range of industries. In this quickly changing economic landscape, virtual recruiting and remote work are critical to the future of work. To support active project demands and close skills gaps, our staffing experts can help you find the best job for you.

Role:
Location:
Duration:
Required Skills:
Job Description:

For more information and other available jobs, please contact our recruitment team at . To view all the jobs available in the USA and Asia, please visit our website at


Principal Engineer Software (Big Data)

95054 Santa Clara, California Palo Alto Networks

Posted 2 days ago


Job Description

**Our Mission**
At Palo Alto Networks® everything starts and ends with our mission:
Being the cybersecurity partner of choice, protecting our digital way of life.
Our vision is a world where each day is safer and more secure than the one before. We are a company built on the foundation of challenging and disrupting the way things are done, and we're looking for innovators who are as committed to shaping the future of cybersecurity as we are.
**Who We Are**
We take our mission of protecting the digital way of life seriously. We are relentless in protecting our customers, and we believe that the unique ideas of every member of our team contribute to our collective success. Our values were crowdsourced by employees and are brought to life through each of us every day - from disruptive innovation and collaboration to execution, and from showing up for each other with integrity to creating an environment where we all feel included.
As a member of our team, you will be shaping the future of cybersecurity. We work fast, value ongoing learning, and we respect each employee as a unique individual. Knowing we all have different needs, our development and personal wellbeing programs are designed to give you choice in how you are supported. This includes our FLEXBenefits wellbeing spending account with over 1,000 eligible items selected by employees, our mental and financial health resources, and our personalized learning opportunities - just to name a few!
At Palo Alto Networks, we believe in the power of collaboration and value in-person interactions. This is why our employees generally work full time from our office with flexibility offered where needed. This setup fosters casual conversations, problem-solving, and trusted relationships. Our goal is to create an environment where we all win with precision.
Prisma Access (formerly GlobalProtect Cloud Service) provides protection straight from the cloud to make access to the cloud secure. It combines the connectivity and security you need - and delivers it everywhere you need it. It uses cutting-edge public and private cloud technologies to extend next-generation security protection to all cloud services, customers' on-premises remote networks, and mobile users.
We are seeking an experienced Big Data Software Engineer to design, develop and deliver next-generation technologies within our Prisma Access team. We want passionate engineers who love to code and build great products. Engineers who bring new ideas in all facets of software development. Collaboration and teamwork are at the foundation of our culture and we need engineers who can communicate and work well with others towards achieving a common goal.
This role is located at our Santa Clara, CA headquarters.
**Your Impact**
+ Design, develop, and implement highly scalable software features on our next-generation security platform as part of Prisma Access
+ Work with different development and quality assurance groups to achieve the best quality
+ Suggest and implement improvements to the development process
+ Work with DevOps and the Technical Support teams to troubleshoot customer issues
**Your Experience**
+ 6+ years of development experience
+ Experience in developing services in the cloud/Kubernetes
+ Experience building data pipelines and analytics pipelines using tools like Dataflow, Pub/Sub, and GKE
+ Strong understanding of message queuing, stream processing, and highly scalable 'big data' data stores
+ Experience with RESTful interfaces and build management tools (Gradle, Maven)
+ Experience in continuous integration and design
+ Experience with Test-Driven Development
+ Experience with distributed computing and object-oriented design and analysis
+ Strong understanding of microservices-based deployments with the ability to design services
+ Command of the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, peer review, and operations
+ Familiarity with Agile (e.g., the Scrum process)
+ Familiarity with Big Data technologies like Hive, Kafka, Hadoop, and SQL, and with developing APIs
+ Familiarity with GCP or other cloud platforms such as AWS and Azure
+ High energy and the ability to work in a fast-paced environment with a can-do attitude
+ Enjoys working with many different teams with strong collaboration and communications skills
+ Fast learner and eager to absorb new emerging technologies
+ M.S./B.S. degree in Computer Science or Electrical Engineering
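The message-queuing and stream-processing bullet above boils down to decoupling producers from consumers. Here is a minimal in-process sketch using Python's standard queue module as a stand-in for a Pub/Sub topic or Kafka partition (it illustrates the pattern only, not those services' APIs):

```python
import queue
import threading

topic = queue.Queue()  # stands in for a Pub/Sub topic / Kafka partition
received = []

def producer():
    for i in range(3):
        topic.put(f"event-{i}")   # publish a message
    topic.put(None)               # sentinel marking end of stream

def consumer():
    # Pull messages until the sentinel arrives, processing in arrival order.
    while (msg := topic.get()) is not None:
        received.append(msg)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(received)  # ['event-0', 'event-1', 'event-2']
```

Real systems add durability, partitioning, and horizontal scaling on top of this same producer/consumer decoupling.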
**The Team**
Our engineering team is at the core of our products - connected directly to the mission of preventing cyberattacks. We are constantly innovating - challenging the way we, and the industry, think about cybersecurity. Our engineers don't shy away from building products to solve problems no one has pursued before.
We define the industry, instead of waiting for directions. We need individuals who feel comfortable in ambiguity, excited by the prospect of a challenge, and empowered by the unknown risks facing our everyday lives that are only enabled by a secure digital environment.
**Compensation Disclosure**
The compensation offered for this position will depend on qualifications, experience, and work location. For candidates who receive an offer at the posted level, the starting base salary (for non-sales roles) or base salary + commission target (for sales/commissioned roles) is expected to be between $200,000 - $225,000/YR. The offered compensation may also include restricted stock units and a bonus. A description of our employee benefits may be found here.
**Our Commitment**
We're problem solvers that take risks and challenge cybersecurity's status quo. It's simple: we can't accomplish our mission without diverse teams innovating, together.
We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at .
Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics.
All your information will be kept confidential according to EEO guidelines.
Is role eligible for Immigration Sponsorship?: Yes

Sr Principal Engineer Software (Big Data)

95053 Santa Clara, California Palo Alto Networks

Posted 5 days ago


Job Description

Our Mission

At Palo Alto Networks® everything starts and ends with our mission:

Being the cybersecurity partner of choice, protecting our digital way of life.

Our vision is a world where each day is safer and more secure than the one before. We are a company built on the foundation of challenging and disrupting the way things are done, and we’re looking for innovators who are as committed to shaping the future of cybersecurity as we are.

Who We Are

We take our mission of protecting the digital way of life seriously. We are relentless in protecting our customers, and we believe that the unique ideas of every member of our team contribute to our collective success. Our values were crowdsourced by employees and are brought to life through each of us every day - from disruptive innovation and collaboration to execution, and from showing up for each other with integrity to creating an environment where we all feel included.

As a member of our team, you will be shaping the future of cybersecurity. We work fast, value ongoing learning, and we respect each employee as a unique individual. Knowing we all have different needs, our development and personal wellbeing programs are designed to give you choice in how you are supported. This includes our FLEXBenefits wellbeing spending account with over 1,000 eligible items selected by employees, our mental and financial health resources, and our personalized learning opportunities - just to name a few!

At Palo Alto Networks, we believe in the power of collaboration and value in-person interactions. This is why our employees generally work full time from our office with flexibility offered where needed. This setup fosters casual conversations, problem-solving, and trusted relationships. Our goal is to create an environment where we all win with precision.

Your Career

Prisma Access™ (formerly GlobalProtect Cloud Service) provides protection straight from the cloud to make access to the cloud secure. It combines the connectivity and security you need - and delivers it everywhere you need it. It uses cutting-edge public and private cloud technologies to extend next-generation security protection to all cloud services, customers' on-premises remote networks, and mobile users.

We are seeking an experienced Big Data Software Engineer to design, develop and deliver next-generation technologies within our Prisma Access team. We want passionate engineers who love to code and build great products. Engineers who bring new ideas in all facets of software development. Collaboration and teamwork are at the foundation of our culture and we need engineers who can communicate and work well with others towards achieving a common goal.

This role is located at our Santa Clara, CA headquarters.

Your Impact

  • Design, develop, and implement highly scalable software features on our next-generation security platform as part of Prisma Access

  • Work with different development and quality assurance groups to achieve the best quality

  • Suggest and implement improvements to the development process

  • Work with DevOps and the Technical Support teams to troubleshoot customer issues

Your Experience

  • 6+ years of development experience

  • Experience in developing services in the cloud/Kubernetes

  • Experience building data pipelines and analytics pipelines using tools like Dataflow, Pub/Sub, and GKE

  • Strong understanding of message queuing, stream processing, and highly scalable ‘big data’ data stores

  • Experience with RESTful interfaces and build management tools (Gradle, Maven)

  • Experience in continuous integration and design

  • Experience with Test-Driven Development

  • Experience with distributed computing and object-oriented design and analysis

  • Strong understanding of microservices-based deployments with the ability to design services

  • Command of the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, peer review, and operations

  • Familiarity with Agile (e.g., the Scrum process)

  • Familiarity with Big Data technologies like Hive, Kafka, Hadoop, and SQL, and with developing APIs

  • Familiarity with GCP or other cloud platforms such as AWS and Azure

  • High energy and the ability to work in a fast-paced environment with a can-do attitude

  • Enjoys working with many different teams with strong collaboration and communications skills

  • Fast learner and eager to absorb new emerging technologies

  • M.S./B.S. degree in Computer Science or Electrical Engineering

The Team

Our engineering team is at the core of our products – connected directly to the mission of preventing cyberattacks. We are constantly innovating – challenging the way we, and the industry, think about cybersecurity. Our engineers don’t shy away from building products to solve problems no one has pursued before.

We define the industry, instead of waiting for directions. We need individuals who feel comfortable in ambiguity, excited by the prospect of a challenge, and empowered by the unknown risks facing our everyday lives that are only enabled by a secure digital environment.

Compensation Disclosure

The compensation offered for this position will depend on qualifications, experience, and work location. For candidates who receive an offer at the posted level, the starting base salary (for non-sales roles) or base salary + commission target (for sales/commissioned roles) is expected to be between $0 - $0/YR. The offered compensation may also include restricted stock units and a bonus. A description of our employee benefits may be found here.

Our Commitment

We’re problem solvers that take risks and challenge cybersecurity’s status quo. It’s simple: we can’t accomplish our mission without diverse teams innovating, together.

We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at .

Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics.

All your information will be kept confidential according to EEO guidelines.

Is role eligible for Immigration Sponsorship?: Yes


Sr Principal Engineer Software (Big Data)

95053 Santa Clara, California ZipRecruiter

Posted 5 days ago


Job Description

Company Description

Our Mission

At Palo Alto Networks® everything starts and ends with our mission:

Being the cybersecurity partner of choice, protecting our digital way of life.
Our vision is a world where each day is safer and more secure than the one before. We are a company built on the foundation of challenging and disrupting the way things are done, and we’re looking for innovators who are as committed to shaping the future of cybersecurity as we are.

Who We Are

We take our mission of protecting the digital way of life seriously. We are relentless in protecting our customers, and we believe that the unique ideas of every member of our team contribute to our collective success. Our values were crowdsourced by employees and are brought to life through each of us every day - from disruptive innovation and collaboration to execution, and from showing up for each other with integrity to creating an environment where we all feel included.

As a member of our team, you will be shaping the future of cybersecurity. We work fast, value ongoing learning, and we respect each employee as a unique individual. Knowing we all have different needs, our development and personal wellbeing programs are designed to give you choice in how you are supported. This includes our FLEXBenefits wellbeing spending account with over 1,000 eligible items selected by employees, our mental and financial health resources, and our personalized learning opportunities - just to name a few!

At Palo Alto Networks, we believe in the power of collaboration and value in-person interactions. This is why our employees generally work full time from our office with flexibility offered where needed. This setup fosters casual conversations, problem-solving, and trusted relationships. Our goal is to create an environment where we all win with precision.

Job Description

Your Career

Prisma Access™ (formerly GlobalProtect Cloud Service) provides protection straight from the cloud to make access to the cloud secure. It combines the connectivity and security you need - and delivers it everywhere you need it. It uses cutting-edge public and private cloud technologies to extend next-generation security protection to all cloud services, customers' on-premises remote networks, and mobile users.

We are seeking an experienced Big Data Software Engineer to design, develop and deliver next-generation technologies within our Prisma Access team. We want passionate engineers who love to code and build great products. Engineers who bring new ideas in all facets of software development. Collaboration and teamwork are at the foundation of our culture and we need engineers who can communicate and work well with others towards achieving a common goal.

This role is located at our Santa Clara, CA headquarters.

Your Impact

  • Design, develop, and implement highly scalable software features on our next-generation security platform as part of Prisma Access

  • Work with different development and quality assurance groups to achieve the best quality

  • Suggest and implement improvements to the development process

  • Work with DevOps and the Technical Support teams to troubleshoot customer issues

Qualifications

Your Experience 

  • 6+ years of development experience

  • Experience in developing services in the cloud/Kubernetes

  • Experience building data pipelines and analytics pipelines using tools like Dataflow, Pub/Sub, and GKE

  • Strong understanding of message queuing, stream processing, and highly scalable ‘big data’ data stores

  • Experience with RESTful interfaces and build management tools (Gradle, Maven)

  • Experience in continuous integration and design

  • Experience with Test-Driven Development

  • Experience with distributed computing and object-oriented design and analysis

  • Strong understanding of microservices-based deployments with the ability to design services

  • Command of the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, peer review, and operations

  • Familiarity with Agile (e.g., the Scrum process)

  • Familiarity with Big Data technologies like Hive, Kafka, Hadoop, and SQL, and with developing APIs

  • Familiarity with GCP or other cloud platforms such as AWS and Azure

  • High energy and the ability to work in a fast-paced environment with a can-do attitude

  • Enjoys working with many different teams with strong collaboration and communications skills

  • Fast learner and eager to absorb new emerging technologies

  • M.S./B.S. degree in Computer Science or Electrical Engineering



Additional Information

The Team

Our engineering team is at the core of our products – connected directly to the mission of preventing cyberattacks. We are constantly innovating – challenging the way we, and the industry, think about cybersecurity. Our engineers don’t shy away from building products to solve problems no one has pursued before.

We define the industry, instead of waiting for directions. We need individuals who feel comfortable in ambiguity, excited by the prospect of a challenge, and empowered by the unknown risks facing our everyday lives that are only enabled by a secure digital environment.

Compensation Disclosure

The compensation offered for this position will depend on qualifications, experience, and work location. For candidates who receive an offer at the posted level, the starting base salary (for non-sales roles) or base salary + commission target (for sales/commissioned roles) is expected to be between $0 - $0/YR. The offered compensation may also include restricted stock units and a bonus. A description of our employee benefits may be found here.

Our Commitment

We’re problem solvers that take risks and challenge cybersecurity’s status quo. It’s simple: we can’t accomplish our mission without diverse teams innovating, together.

We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at

Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics.

All your information will be kept confidential according to EEO guidelines.

Is role eligible for Immigration Sponsorship?: Yes


Sr Principal Engineer Software Big Data

95053 Santa Clara, California Palo Alto Networks

Posted 5 days ago


Job Description

Your Career

Prisma Access™ (formerly GlobalProtect Cloud Service) provides protection straight from the cloud to make access to the cloud secure. It combines the connectivity and security you need - and delivers it everywhere you need it. It uses cutting-edge public and private cloud technologies to extend next-generation security protection to all cloud services, customers' on-premises remote networks, and mobile users.

We are seeking an experienced Big Data Software Engineer to design, develop and deliver next-generation technologies within our Prisma Access team. We want passionate engineers who love to code and build great products. Engineers who bring new ideas in all facets of software development. Collaboration and teamwork are at the foundation of our culture and we need engineers who can communicate and work well with others towards achieving a common goal.

This role is located at our Santa Clara, CA headquarters.

Your Impact

  • Design, develop, and implement highly scalable software features on our next-generation security platform as part of Prisma Access

  • Work with different development and quality assurance groups to achieve the best quality

  • Suggest and implement improvements to the development process

  • Work with DevOps and the Technical Support teams to troubleshoot customer issues

Qualifications

Your Experience

  • 6+ years of development experience

  • Experience in developing services in the cloud/Kubernetes

  • Experience building data pipelines and analytics pipelines using tools like Dataflow, Pub/Sub, and GKE

  • Strong understanding of message queuing, stream processing, and highly scalable ‘big data’ data stores

  • Experience with RESTful interfaces and build management tools (Gradle, Maven)

  • Experience in continuous integration and design

  • Experience with Test-Driven Development

  • Experience with distributed computing and object-oriented design and analysis

  • Strong understanding of microservices-based deployments with the ability to design services

  • Command of the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, peer review, and operations

  • Familiarity with Agile (e.g., the Scrum process)

  • Familiarity with Big Data technologies like Hive, Kafka, Hadoop, and SQL, and with developing APIs

  • Familiarity with GCP or other cloud platforms such as AWS and Azure

  • High energy and the ability to work in a fast-paced environment with a can-do attitude

  • Enjoys working with many different teams with strong collaboration and communications skills

  • Fast learner and eager to absorb new emerging technologies

  • M.S./B.S. degree in Computer Science or Electrical Engineering

Additional Information

The Team

Our engineering team is at the core of our products – connected directly to the mission of preventing cyberattacks. We are constantly innovating – challenging the way we, and the industry, think about cybersecurity. Our engineers don’t shy away from building products to solve problems no one has pursued before.

We define the industry, instead of waiting for directions. We need individuals who feel comfortable in ambiguity, excited by the prospect of a challenge, and empowered by the unknown risks facing our everyday lives that are only enabled by a secure digital environment.

Compensation Disclosure

The compensation offered for this position will depend on qualifications, experience, and work location. For candidates who receive an offer at the posted level, the starting base salary (for non-sales roles) or base salary + commission target (for sales/commissioned roles) is expected to be between $0 - $0/YR. The offered compensation may also include restricted stock units and a bonus. A description of our employee benefits may be found here.

Our Commitment

We’re problem solvers that take risks and challenge cybersecurity’s status quo. It’s simple: we can’t accomplish our mission without diverse teams innovating, together.

We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at

Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics.

All your information will be kept confidential according to EEO guidelines.

Is role eligible for Immigration Sponsorship?: Yes


Sr Principal Engineer Software (Big Data)

95054 Santa Clara, California Palo Alto Networks

Posted 17 days ago


Job Description

**Our Mission**
At Palo Alto Networks® everything starts and ends with our mission:
Being the cybersecurity partner of choice, protecting our digital way of life.
Our vision is a world where each day is safer and more secure than the one before. We are a company built on the foundation of challenging and disrupting the way things are done, and we're looking for innovators who are as committed to shaping the future of cybersecurity as we are.
**Who We Are**
We take our mission of protecting the digital way of life seriously. We are relentless in protecting our customers and we believe that the unique ideas of every member of our team contribute to our collective success. Our values were crowdsourced by employees and are brought to life through each of us every day - from disruptive innovation and collaboration, to execution. From showing up for each other with integrity to creating an environment where we all feel included.
As a member of our team, you will be shaping the future of cybersecurity. We work fast, value ongoing learning, and we respect each employee as a unique individual. Knowing we all have different needs, our development and personal wellbeing programs are designed to give you choice in how you are supported. This includes our FLEXBenefits wellbeing spending account with over 1,000 eligible items selected by employees, our mental and financial health resources, and our personalized learning opportunities - just to name a few!
At Palo Alto Networks, we believe in the power of collaboration and value in-person interactions. This is why our employees generally work full time from our office with flexibility offered where needed. This setup fosters casual conversations, problem-solving, and trusted relationships. Our goal is to create an environment where we all win with precision.
**Your Career**
Prisma Access (formerly GlobalProtect Cloud Service) provides protection straight from the cloud to make access to the cloud secure. It combines the connectivity and security you need - and delivers it everywhere you need it - using cutting-edge public and private cloud technologies to extend next-generation security protection to all cloud services, customers' on-premises remote networks, and mobile users.
We are seeking an experienced Big Data Software Engineer to design, develop and deliver next-generation technologies within our Prisma Access team. We want passionate engineers who love to code and build great products. Engineers who bring new ideas in all facets of software development. Collaboration and teamwork are at the foundation of our culture and we need engineers who can communicate and work well with others towards achieving a common goal.
This role is located at our Santa Clara, CA headquarters.
**Your Impact**
+ Design, develop, and implement highly scalable software features on our next-generation security platform as part of Prisma Access
+ Work with different development and quality assurances groups to achieve the best quality
+ Suggest and implement improvements to the development process
+ Work with DevOps and the Technical Support teams to troubleshoot customer issues
**Your Experience**
+ 6+ years of development experience
+ Experience in developing services in the cloud/Kubernetes
+ Experience building data pipelines and analytics pipelines using tools like Dataflow, Pub/Sub, and GKE
+ Strong understanding of message queuing, stream processing, and highly scalable 'big data' data stores
+ Experience with RESTful interfaces and build management tools (Gradle, Maven)
+ Experience with continuous integration and delivery
+ Experience with Test-Driven Development
+ Experience with distributed computing and object-oriented design and analysis
+ Strong understanding of microservices-based deployments with the ability to design services
+ Demonstrated command of the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, peer review, and operations
+ Familiarity with Agile (e.g., Scrum Process)
+ Familiarity with Big Data technologies like Hive, Kafka, Hadoop, and SQL, and with developing APIs
+ Familiarity working with GCP or other Cloud platforms such as AWS and Azure
+ High energy and the ability to work in a fast-paced environment with a can-do attitude
+ Enjoys working with many different teams with strong collaboration and communications skills
+ Fast learner and eager to absorb new emerging technologies
+ M.S./B.S. degree in Computer Science or Electrical Engineering
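As a small illustration of the message-queuing and stream-processing fundamentals this role calls for, here is a hedged, stdlib-only Python sketch (no Pub/Sub or Dataflow dependencies; the message shape and "firewall event" keys are invented for illustration) that drains a queue of timestamped messages and computes counts per tumbling window:

```python
from collections import defaultdict
from queue import Queue

def windowed_counts(queue, window_secs=60):
    """Drain a queue of (timestamp, key) messages and return counts
    keyed by (window_start, key), using tumbling windows."""
    counts = defaultdict(int)
    while not queue.empty():
        ts, key = queue.get()
        # Align each event to the start of its tumbling window
        window_start = ts - (ts % window_secs)
        counts[(window_start, key)] += 1
    return dict(counts)

if __name__ == "__main__":
    q = Queue()
    # Simulated event stream: (epoch seconds, event type)
    for msg in [(100, "allow"), (130, "deny"), (185, "allow")]:
        q.put(msg)
    print(windowed_counts(q))
    # {(60, 'allow'): 1, (120, 'deny'): 1, (180, 'allow'): 1}
```

In a real Dataflow/Beam pipeline the windowing and grouping would be declared on an unbounded PCollection rather than hand-rolled, but the aggregation logic is the same idea.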
**The Team**
Our engineering team is at the core of our products - connected directly to the mission of preventing cyberattacks. We are constantly innovating - challenging the way we, and the industry, think about cybersecurity. Our engineers don't shy away from building products to solve problems no one has pursued before.
We define the industry, instead of waiting for directions. We need individuals who feel comfortable in ambiguity, excited by the prospect of a challenge, and empowered by the unknown risks facing our everyday lives that are only enabled by a secure digital environment.
**Compensation Disclosure**
The compensation offered for this position will depend on qualifications, experience, and work location. For candidates who receive an offer at the posted level, the starting base salary (for non-sales roles) or base salary + commission target (for sales/commissioned roles) is expected to be between $0 - $0/YR. The offered compensation may also include restricted stock units and a bonus. A description of our employee benefits may be found here.
**Our Commitment**
We're problem solvers that take risks and challenge cybersecurity's status quo. It's simple: we can't accomplish our mission without diverse teams innovating, together.
We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at .
Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics.
All your information will be kept confidential according to EEO guidelines.
Is role eligible for Immigration Sponsorship?: Yes

Staff Hadoop Admin & Tableau Admin - Big Data - Federal

95054 Santa Clara, California ServiceNow, Inc.

Posted 2 days ago


Job Description

It all started in sunny San Diego, California in 2004 when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Fast forward to today - ServiceNow stands as a global market leader, bringing innovative AI-enhanced technology to over 8,100 customers, including 85% of the Fortune 500®. Our intelligent cloud-based platform seamlessly connects people, systems, and processes to empower organizations to find smarter, faster, and better ways to work. But this is just the beginning of our journey. Join us as we pursue our purpose to make the world work better for everyone.
**Please Note:** This position will include supporting our US Federal Government Cloud Infrastructure.
_This position requires passing a ServiceNow background screening, USFedPASS (US Federal Personnel Authorization Screening Standards). This includes a credit check, criminal/misdemeanor check and taking a drug test. Any employment is contingent upon passing the screening._ **_Due to Federal requirements, only US citizens, US naturalized citizens or US Permanent Residents, holding a green card, will be considered._**
As a **Staff DevOps Engineer** on our **Big Data Federal Team,** you will help deliver 24x7 support for our Cloud infrastructure.
The Big Data team plays a critical and strategic role in ensuring that ServiceNow can exceed the availability and performance SLAs of the ServiceNow Platform-powered Customer instances - deployed across the ServiceNow cloud and Azure cloud. Our mission is to:
Deliver _state-of-the-art Monitoring, Analytics and Actionable Business Insights_ by employing _new tools, Big Data systems, Enterprise Data Lake, AI, and Machine Learning methodologies_ that _improve efficiencies across a variety of functions in the company: Cloud Operations, Customer Support, Product Usage Analytics, Product Upsell Opportunities,_ enabling a significant impact on both the top-line and bottom-line growth.
The Big Data team is responsible for:
+ Collecting, storing, and providing real-time access to a large amount of data
+ Providing real-time analytic tools and reporting capabilities for various functions, including:
+ Monitoring, alerting, and troubleshooting
+ Machine Learning, Anomaly Detection, and Prediction of P1s
+ Capacity planning
+ Data analytics and deriving Actionable Business Insights
**What you get to do in this role:**
+ Responsible for deploying, monitoring, maintaining and supporting Big Data infrastructure and applications on ServiceNow Cloud and Azure environments.
+ Deploy, scale, and manage containerized Big Data applications using Kubernetes, Docker, and other related tools.
+ Proactively identify and resolve issues within Kubernetes clusters, containerized applications, and data pipelines. Provide expert-level support for incidents and perform root cause analysis.
+ Triage network-related issues in a containerized environment.
+ Provide production support to resolve critical Big Data pipelines, application issues, and mitigating or minimizing any impact on Big Data applications. Collaborate closely with Site Reliability Engineers (SRE), Customer Support (CS), Developers, QA and System engineering teams in replicating complex issues leveraging broad experience with UI, SQL, Full-stack and Big Data technologies.
+ Responsible for enforcing data governance policies and the Definition of Done (DoD) in all Big Data environments.
+ Install, configure, and upgrade Tableau Server in a clustered environment. Manage user licensing, site administration, and content permissions.
+ Monitor Tableau Server performance, health, and usage; perform tuning to ensure optimal performance. Automate server monitoring and maintenance tasks using scripting (e.g., PowerShell, Python) and the Tableau Services Manager (TSM) CLI.
+ Manage Tableau data source connections, extracts, and refresh schedules.
+ Implement and manage security best practices for the Tableau environment, including user authentication (SAML, Active Directory) and content-level security.
**To be successful in this role you have:**
+ Experience in leveraging or critically thinking about how to integrate AI into work processes, decision-making, or problem-solving. This may include using AI-powered tools like Copilot, Windsurf or similar to automate workflows, analyzing AI-driven insights, or exploring AI's potential impact on the function or industry
+ 6+ years of experience working with systems such as HDFS, Yarn, Hive, HBase, Kafka, RabbitMQ, Impala, Kudu, Redis, MariaDB, and PostgreSQL
+ Deep understanding of Hadoop / Big Data Ecosystem.
+ Hands-on experience with Kubernetes in a production environment
+ Deep understanding of Kubernetes architecture, concepts, and operations
+ Strong knowledge in querying and analyzing large-scale data using VictoriaMetrics, Prometheus, Spark, Flink, and Grafana
+ Experience supporting CI/CD pipelines for automated applications deployment to Kubernetes
+ Strong Linux Systems Administration skills
+ Strong scripting skills in Bash and Python for automating routine tasks
+ Proficient with Git and version control systems
+ Familiarity with Cloudera Data Platform (CDP) and its ecosystem
+ Experience as a Tableau administrator, configuring and administering Tableau Server with knowledge of Tableau Server best practices for scalability, performance, content management and governance.
+ Familiarity with Tableau Services Manager (TSM).
+ Ability to learn quickly in a fast-paced, dynamic team environment and have a mindset to adapt to technology changes.
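The Prometheus/VictoriaMetrics querying skills listed above can be sketched with a short, hedged Python example (stdlib only; the metric labels, sample payload values, and threshold are invented for illustration, though the JSON shape follows Prometheus's documented instant-query response): evaluate a query result against an alert threshold and report which series breached it.

```python
import json

def breached_series(response_json, threshold):
    """Given a Prometheus-style instant-query response (JSON string),
    return the label sets whose current value exceeds `threshold`."""
    payload = json.loads(response_json)
    if payload.get("status") != "success":
        raise ValueError("query failed")
    breached = []
    for series in payload["data"]["result"]:
        # Instant vectors carry a [timestamp, "value-as-string"] pair
        _, value = series["value"]
        if float(value) > threshold:
            breached.append(series["metric"])
    return breached

if __name__ == "__main__":
    # Illustrative payload mimicking a /api/v1/query response
    sample = json.dumps({
        "status": "success",
        "data": {
            "resultType": "vector",
            "result": [
                {"metric": {"instance": "node-a"}, "value": [1700000000, "0.93"]},
                {"metric": {"instance": "node-b"}, "value": [1700000000, "0.41"]},
            ],
        },
    })
    print(breached_series(sample, threshold=0.9))  # [{'instance': 'node-a'}]
```

In practice the payload would come from an HTTP call to the Prometheus or VictoriaMetrics query endpoint, and breaches would feed an alerting or ticketing hook rather than a print.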
GCS-23
For positions in the Bay Area, we offer a base pay of $163,600 - $286,300, plus equity (when applicable), variable/incentive compensation and benefits. Sales positions generally offer a competitive On Target Earnings (OTE) incentive compensation structure. Please note that the base pay shown is a guideline, and individual total compensation will vary based on factors such as qualifications, skill level, competencies and work location. We also offer health plans, including flexible spending accounts, a 401(k) Plan with company match, ESPP, matching donations, a flexible time away plan and family leave programs (subject to eligibility requirements). Compensation is based on the geographic location in which the role is located, and is subject to change based on work location.
_Not sure if you meet every qualification? We still encourage you to apply! We value inclusivity, welcoming candidates from diverse backgrounds, including non-traditional paths. Unique experiences enrich our team, and the willingness to dream big makes you an exceptional candidate!_
**Work Personas**
We approach our distributed world of work with flexibility and trust. Work personas (flexible, remote, or required in office) are categories that are assigned to ServiceNow employees depending on the nature of their work. Learn more here.
**Equal Opportunity Employer**
ServiceNow is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status, or any other category protected by law. In addition, all qualified applicants with arrest or conviction records will be considered for employment in accordance with legal requirements.
**Accommodations**
We strive to create an accessible and inclusive experience for all candidates. If you require a reasonable accommodation to complete any part of the application process, or are unable to use this online application and need an alternative method to apply, please contact for assistance.
**Export Control Regulations**
For positions requiring access to controlled technology subject to export control regulations, including the U.S. Export Administration Regulations (EAR), ServiceNow may be required to obtain export control approval from government authorities for certain individuals. All employment is contingent upon ServiceNow obtaining any export license or other approval that may be required by relevant export control authorities.
From Fortune. ©2024 Fortune Media IP Limited. All rights reserved. Used under license.

Senior Hadoop Admin - Big Data - Federal - 2nd Shift

95054 Santa Clara, California ServiceNow, Inc.

Posted 3 days ago


Job Description

It all started in sunny San Diego, California in 2004 when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Fast forward to today - ServiceNow stands as a global market leader, bringing innovative AI-enhanced technology to over 8,100 customers, including 85% of the Fortune 500®. Our intelligent cloud-based platform seamlessly connects people, systems, and processes to empower organizations to find smarter, faster, and better ways to work. But this is just the beginning of our journey. Join us as we pursue our purpose to make the world work better for everyone.
**Please Note:** This position will include supporting our US Federal Government Cloud Infrastructure.
_This position requires passing a ServiceNow background screening, USFedPASS (US Federal Personnel Authorization Screening Standards). This includes a credit check, criminal/misdemeanor check and taking a drug test. Any employment is contingent upon passing the screening._ **_Due to Federal requirements, only US citizens, US naturalized citizens or US Permanent Residents, holding a green card, will be considered._**
As a **Staff DevOps Engineer** on our **Big Data Federal Team,** you will help deliver 24x7 support for our Government Cloud infrastructure.
**_The Federal Big Data Team has 3 shifts that provide 24x7 production support for our Big Data Government cloud infrastructure._**
**_This is a 2nd Shift Position - Sunday to Wednesday with work hours from 3 pm Pacific Time to 2 am Pacific Time_**
_Below are some highlights._
+ **4 Day work week** (Sunday to Wednesday)
+ No on-call rotation
+ Shift Bonuses for 2nd and 3rd shifts
The Big Data team plays a critical and strategic role in ensuring that ServiceNow can exceed the availability and performance SLAs of the ServiceNow Platform powered Customer instances - deployed across the ServiceNow cloud and Azure cloud. Our mission is to:
Deliver _state-of-the-art Monitoring, Analytics and Actionable Business Insights_ by employing _new tools, Big Data systems, Enterprise Data Lake, AI, and Machine Learning methodologies_ that _improve efficiencies across a variety of functions in the company: Cloud Operations, Customer Support, Product Usage Analytics, Product Upsell Opportunities_, enabling a significant impact on both top-line and bottom-line growth.
The Big Data team is responsible for:
+ Collecting, storing, and providing real-time access to a large amount of data
+ Providing real-time analytic tools and reporting capabilities for various functions, including:
+ Monitoring, alerting, and troubleshooting
+ Machine Learning, Anomaly detection and Prediction of P1s
+ Capacity planning
+ Data analytics and deriving Actionable Business Insights
**What you get to do in this role**
+ Responsible for deploying, production monitoring, maintaining, and supporting Big Data infrastructure and applications on ServiceNow Cloud and Azure environments.
+ Deploy, scale, and manage containerized applications using Kubernetes, Docker, and other related tools.
+ Automate Continuous Integration / Continuous Deployment (CI/CD) data pipelines for applications leveraging tools such as Jenkins, Ansible, and Docker.
+ Proactively identify and resolve issues within Kubernetes clusters, containerized applications, and CI/CD pipelines. Provide expert-level support for incidents and perform root cause analysis.
+ Understanding of networking concepts related to containerized environments
+ Provide production support to resolve critical Big Data pipelines and application issues and mitigating or minimizing any impact on Big Data applications. Collaborate closely with Site Reliability Engineers (SRE), Customer Support (CS), Developers, QA and System engineering teams in replicating complex issues leveraging broad experience with UI, SQL, Full-stack and Big Data technologies.
+ Responsible for enforcing data governance policies in Commercial and Regulated Big Data environments.
**To be successful in this role you have:**
+ Experience in leveraging or critically thinking about how to integrate AI into work processes, decision-making, or problem-solving. This may include using AI-powered tools, automating workflows, analyzing AI-driven insights, or exploring AI's potential impact on the function or industry
+ Deep understanding of Hadoop / Big Data Ecosystem.
+ 6+ years of experience working with systems such as HDFS, Yarn, Hive, HBase, Kafka, RabbitMQ, Impala, Kudu, Redis, MariaDB, and PostgreSQL
+ Hands-on experience with Kubernetes in a production environment
+ Deep understanding of Kubernetes architecture, concepts, and operations
+ Strong knowledge in querying and analyzing large-scale data using VictoriaMetrics, Prometheus, Spark, Flink, and Grafana
+ Experience supporting CI/CD pipelines for automated applications deployment to Kubernetes
+ Strong Linux Systems Administration skills
+ Strong scripting skills in Bash, Python for automation and task management
+ Proficient with Git and version control systems
+ Familiarity with Cloudera Data Platform (CDP) and its ecosystem
+ Ability to learn quickly in a fast-paced, dynamic team environment
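The Bash/Python automation expected in this role can be illustrated with a minimal, hedged sketch (stdlib only; the log line format and component names are invented for illustration): scan service logs for ERROR entries and summarize counts per component - the kind of routine triage script a production-support shift automates.

```python
import re
from collections import Counter

# Assumed log format: "<date> <time> <LEVEL> [<component>] <message>"
LINE_RE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) \[(?P<component>[^\]]+)\]")

def error_summary(lines):
    """Count ERROR lines per component in an iterable of log lines."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("component")] += 1
    return dict(counts)

if __name__ == "__main__":
    logs = [
        "2024-01-01 12:00:01 INFO [hive] session opened",
        "2024-01-01 12:00:02 ERROR [hbase] region server timeout",
        "2024-01-01 12:00:03 ERROR [hbase] compaction failed",
        "2024-01-01 12:00:04 ERROR [kafka] broker unreachable",
    ]
    print(error_summary(logs))  # {'hbase': 2, 'kafka': 1}
```

A real version would tail rolling log files (or query a log aggregator) and push the summary into the monitoring stack instead of printing it.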
GCS-23
For positions in the Bay Area, we offer a base pay of $158,500 - $277,500, plus equity (when applicable), variable/incentive compensation and benefits. Sales positions generally offer a competitive On Target Earnings (OTE) incentive compensation structure. Please note that the base pay shown is a guideline, and individual total compensation will vary based on factors such as qualifications, skill level, competencies and work location. We also offer health plans, including flexible spending accounts, a 401(k) Plan with company match, ESPP, matching donations, a flexible time away plan and family leave programs (subject to eligibility requirements). Compensation is based on the geographic location in which the role is located, and is subject to change based on work location.
_Not sure if you meet every qualification? We still encourage you to apply! We value inclusivity, welcoming candidates from diverse backgrounds, including non-traditional paths. Unique experiences enrich our team, and the willingness to dream big makes you an exceptional candidate!_
**Work Personas**
We approach our distributed world of work with flexibility and trust. Work personas (flexible, remote, or required in office) are categories that are assigned to ServiceNow employees depending on the nature of their work. Learn more here.
**Equal Opportunity Employer**
ServiceNow is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status, or any other category protected by law. In addition, all qualified applicants with arrest or conviction records will be considered for employment in accordance with legal requirements.
**Accommodations**
We strive to create an accessible and inclusive experience for all candidates. If you require a reasonable accommodation to complete any part of the application process, or are unable to use this online application and need an alternative method to apply, please contact for assistance.
**Export Control Regulations**
For positions requiring access to controlled technology subject to export control regulations, including the U.S. Export Administration Regulations (EAR), ServiceNow may be required to obtain export control approval from government authorities for certain individuals. All employment is contingent upon ServiceNow obtaining any export license or other approval that may be required by relevant export control authorities.
From Fortune. ©2024 Fortune Media IP Limited. All rights reserved. Used under license.