14/05 Swati Verma
HR Specialist at TransOrg Analytics


TransOrg Analytics - Big Data Engineer - Cloudera/Hortonworks (2-4 yrs)

Gurgaon/Gurugram/Mumbai | Job Code: 443873

Why you would like to join us:

- TransOrg Analytics, an award-winning Big Data and Predictive Analytics company, offers advanced analytics solutions and services to industry leaders and Fortune 500 companies across India, the US, Singapore and the Middle East. Our products, such as 'Clonizo', yield significant business benefits to our clients. We were recognized by CIO Review magazine as the "Predictive Analytics Company of the Year" and by TiE for excellence in entrepreneurship.

Overview: This position is for a Big Data Engineer specializing in Hadoop and Spark technologies.

Location: Gurgaon and Mumbai

Responsibilities:

- Design and implement new components using emerging technologies in the Hadoop ecosystem, and ensure successful execution of projects.

- Integrate external data sources and create data lakes/data marts.

- Integrate machine learning models with real-time input data streams.

- Collaborate with cross-functional teams, including infrastructure, network, and database.

- Work with various teams to set up new Hadoop users, security, and platform governance, which should be PCI-DSS compliant.

- Create and execute a capacity planning strategy for the Hadoop platform.

- Monitor job performance, file system/disk-space usage, cluster and database connectivity, and log files; manage backups and security; and troubleshoot user issues.

- Design, implement, test, and document a performance benchmarking strategy for the platform as well as for each use case.

- Drive customer communication during critical events and participate in or lead operational improvement initiatives.

- Responsible for the setup, administration, monitoring, tuning, optimization, and governance of large-scale Hadoop clusters and Hadoop components (on-premise/cloud) to meet high-availability/uptime requirements.

Education & Skills Summary:

- 2-4 years of relevant experience in big data.

- Exposure to Cloudera/Hortonworks production implementations.

- Knowledge of Linux and shell scripting is a must.

- Sound knowledge of Python or Scala.

- Sound knowledge of Spark and HDFS/Hive/HBase.

- Thorough understanding of Hadoop, Spark, and ecosystem components.

- Must be proficient with data ingestion tools such as Sqoop, Flume, Talend, and Kafka.

- Candidates with knowledge of machine learning using Spark will be given preference.

- Knowledge of Spark & Hadoop is a must.

- Knowledge of AWS and Google Cloud Platform and their various components is preferred.
