Blackhawk Network - Senior Data Engineer - Hadoop/Java (5-8 yrs)
About Blackhawk Network :
Blackhawk Network delivers branded payment programs to meet our partners' business objectives. We collaborate with our partners to innovate, translating market trends in branded payments to increase reach, loyalty and revenue. With a presence in over 26 countries, we reliably execute branded payment programs in over 100 countries worldwide.
Blackhawk Network is setting up a Global Center of Excellence in India for R&D to lead product delivery and innovation on Blackhawk's next generation SaaS based payments platform. Blackhawk is headquartered in Pleasanton, California. For more information, please visit blackhawknetwork.com.
Join us as we shape the future of global branded payments.
Responsibilities :
- Implement and test complex data architectures (databases and large-scale processing systems)
- Discover, construct and test data acquisition pipelines
- Employ a variety of data acquisition and processing tools to build data pipelines
- Leverage and process large volumes of data from internal and external sources to answer key business questions
- Identify ways to improve data reliability, efficiency, and quality
- Build systems that will analyze and detect data patterns
- Explore and examine data to find hidden patterns
- Detect, monitor, and alert on data patterns in real time
Requirements :
- 3+ years of experience with data processing tools such as Hadoop, Spark, and Scala
- 5 years of experience in data engineering and modelling in large-scale data processing environments
- 2 years of experience with Amazon (AWS) or Google Cloud data processing technologies
- Experience with reporting and analytical tools
- Knowledge of Java 8 or higher
- Strong knowledge and experience in most of the following AWS services: EC2, Lambda, SQS, Kinesis, S3, CloudFormation, CLI, CloudWatch
- Basic experience working with Amazon EMR & Spark
- Strong knowledge of SQL, experience in query performance optimization
- Experience working with column-oriented databases such as Redshift
- Passion for proactively learning new technologies and sharing them with the team
- Prior work experience in implementing a large-scale data lake
- Build self-learning systems
- Implement and optimize machine learning models recommended by data scientists
- Prepare data sets for use in modelling