Job Summary

Hadoop Engineer job in Pleasanton, CA

  • Location:
    Pleasanton, California
Are you interested in working for a company that is a leader in the healthcare industry?
Are you authorized to work in the U.S. without sponsorship? (We are unable to work C2C; all employees must work on a W2 basis.) This is a 6-month W2 contract opportunity.
The Hadoop Engineer provides technical consulting, design, and coding/prototyping for Hadoop Platform activities:
• Rapidly architect, design, prototype, and implement architectures to tackle Big Data and Data Science needs
• Work in cross-disciplinary teams to understand client needs and ingest rich data sources such as social media, news, internal/external documents, emails, financial data, and operational data
• Research, experiment with, and utilize leading Big Data technologies such as Hadoop, Spark, Redshift, and Microsoft Azure
• Implement and test data processing pipelines and data mining/data science algorithms in a variety of hosted settings, such as AWS, Azure, and on-premise clusters
• Translate advanced business analytics problems into technical approaches that yield actionable recommendations across multiple, diverse domains; communicate results and educate others through the design and build of insightful visualizations, reports, and presentations
• Develop skills in business requirement capture and translation, hypothesis-driven consulting, work stream and project management, and client relationship development
• Bachelor’s degree from an accredited college/university in Computer Science, Computer Engineering, or related field and minimum four years of big data experience with multiple programming languages and technologies
• 3 years of development experience in Java
• 3 years of professional experience working with the Hadoop stack, preferably Cloudera CDH, particularly Hive, Impala, and HUE
• Good understanding of version control tools such as Git and automation tools such as Jenkins
• Knowledge of best practices related to security, particularly Hadoop Security using Kerberos and Active Directory
• Hands-on experience using relational databases such as Oracle, SQL Server, PostgreSQL, MySQL, etc.
• Ability to work efficiently in a Unix/Linux environment, including Unix shell scripting
• Ability to work with team members and clients to assess needs, provide assistance, and resolve problems, using excellent problem-solving skills, verbal/written communication, and the ability to explain technical concepts to business people
• 3-5 years of experience developing Java/J2EE applications
• 3 years of development experience in MapReduce and Spark
• Hands-on experience using Cassandra, Hive, and NoSQL databases (e.g., HBase, MongoDB)
• Fluency in other programming languages such as Python, Scala, and R, with the ability to pick up new languages and technologies quickly; understanding of cloud and distributed systems principles, including load balancing, networks, scaling, in-memory vs. disk, etc.
• Exposure to Data Science/Machine Learning
If you feel that you are a great match for this opportunity, please apply directly or feel free to contact Josh at 303-222-2461.
"In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire."

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.

Equal employment opportunity information:
EEO is the Law (poster) | EEO is the Law (poster supplement) | Reaffirmation of Affirmative Action Policy Statement