18/03 Sandesh
Talent Evangelist at Staffio HR


Software Development Engineer - Big Data (6-12 yrs)

Location: Bangalore | Job Code: 308616

Experience: 6 - 12 years

CTC: 35 - 50 LPA

- Candidates based out of Bangalore only

- Excellent Java coding skills

- Experience in high-level design

- At least 1.5-2 years of experience in Big Data technologies such as HDFS, YARN, Sqoop, Hive, and Spark.

- At least 4 out of 7 years of total experience in a product-based/e-commerce company

- Experience with data structures and algorithms

- Strong computer science fundamentals

Skills:

- 6+ years of experience in designing and developing software.

- 2+ years of experience in building data pipelines and ETL processes.

- 3+ years of experience managing a team of engineers.

- Ability to architect, design and develop complex systems.

- Good communication skills.

- Ability to work with multiple languages: Java, JavaScript, Python, Scala, etc.

- Expertise in building pipelines using big data technologies, databases, and tools such as HDFS, YARN, Sqoop, Hive, and Spark (a minimal sketch follows this list).

- Backend experience with RESTful APIs and RDBMS (MySQL, PostgreSQL, SQLite, etc.)

- Knowledge of other industry ETL tools (including NoSQL) such as Cassandra/Drill/Impala is a plus.

- Creative problem-solving, debugging, and troubleshooting skills.

- Be a role model for engineers on the team, providing timely coaching and mentoring to all.

- Passion for ensuring high quality architecture and customer experience.

- A data science background is a big plus.
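To make the pipeline expectation above concrete, here is a minimal, illustrative Spark ETL sketch in Scala. It is a sketch only, not part of the role description: the paths, database, table, and column names (hdfs:///data/raw/orders, analytics.orders_cleaned, order_id, created_at) are hypothetical.

```scala
// Illustrative only: a minimal Spark ETL step of the kind this role involves.
// All paths, tables, and columns below are hypothetical.
import org.apache.spark.sql.{SparkSession, functions => F}

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .enableHiveSupport()          // allows reading/writing Hive tables
      .getOrCreate()

    // Read raw events previously landed on HDFS (e.g. by a Sqoop import).
    val raw = spark.read.parquet("hdfs:///data/raw/orders")

    // Basic cleansing and enrichment.
    val cleaned = raw
      .filter(F.col("order_id").isNotNull)
      .withColumn("order_date", F.to_date(F.col("created_at")))

    // Write to a Hive table partitioned by date so downstream queries
    // can benefit from partition pruning.
    spark.sql("CREATE DATABASE IF NOT EXISTS analytics")
    cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .format("parquet")
      .saveAsTable("analytics.orders_cleaned")

    spark.stop()
  }
}
```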

Roles and Responsibilities:

- The position will be in support of data creation, ingestion, management, and client consumption.

- This individual must be well versed in Big Data fundamentals such as HDFS and YARN.

- More than a working knowledge of Sqoop and Hive is required, with an understanding of partitioning, data formats, compression, performance tuning, etc. (see the sketch after this list).

- Strong knowledge of Spark with either Python or Scala is preferred; experience with Spark is a must.

- SQL for Teradata/Hive queries is required. Knowledge of other industry ETL tools (including NoSQL) such as Cassandra/Drill/Impala is a plus.

- A data science background to work closely with the DS team.
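As an illustration of the Hive/Spark responsibilities above (partitioning, columnar formats, compression, partition pruning), here is a second hedged Scala sketch. It reuses the hypothetical analytics.orders_cleaned table from the earlier sketch; none of the names are part of the actual role.

```scala
// Illustrative only: partitioned, compressed Hive table plus a pruned query.
// Database, table, and column names are hypothetical.
import org.apache.spark.sql.SparkSession

object HivePartitionDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-partition-demo")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("CREATE DATABASE IF NOT EXISTS analytics")

    // Columnar storage plus compression keeps scans and storage cheap.
    spark.sql(
      """CREATE TABLE IF NOT EXISTS analytics.orders_cleaned (
        |  order_id BIGINT,
        |  amount   DOUBLE
        |)
        |PARTITIONED BY (order_date DATE)
        |STORED AS PARQUET
        |TBLPROPERTIES ('parquet.compression' = 'SNAPPY')""".stripMargin)

    // Filtering on the partition column lets Hive/Spark prune partitions
    // instead of scanning the whole table.
    val daily = spark.sql(
      """SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
        |FROM analytics.orders_cleaned
        |WHERE order_date >= DATE '2024-01-01'
        |GROUP BY order_date""".stripMargin)

    daily.show()
    spark.stop()
  }
}
```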
