Develop big data solutions on the Hadoop platform, leveraging Cloudera or MapR ecosystem tools.
Application development experience in Java/J2EE.
Big data development experience on the Hadoop platform, including Hive, Impala, Sqoop, Flume, and Spark.
Deploy Cloudera Enterprise to create central data hubs combining large volumes of diverse and detailed data.
Experience with data modeling, complex data structures, data processing, data quality, and data lifecycle.
Experience in Unix shell scripting, batch scheduling, and version control tools.
Experience with analytical programming and the ability to work with the EDW Architect to bridge the gap between a traditional DB architecture and a Hadoop-centric architecture.
Coach and mentor less experienced team members.
Design architectural processes and procedures.
Interface with vendors; manage POCs and RFPs.
Bachelor’s or Master’s degree in Engineering or a related field.
Experience with Hadoop cluster components and services (e.g., HDFS, YARN, ZooKeeper, Ambari/Cloudera Manager, Sentry/Ranger, Kerberos).
Prior experience with Cloudera is required.
System integration: 3+ years of experience.