Technical Analyst (Hadoop Cloudera Administrator)

Reporting directly to the Director of Big Data Strategy, the Hadoop Cloudera Administrator will expand our technology platform's capacity to generate new and previously unavailable insights and to enable data-driven actions for our clients and our business. As a member of the Big Data team, you will deploy the Hadoop platform's core infrastructure and implement and upgrade its software and associated services. Our Big Data Platform supports the direct (Sonnet) and Broker channels for Finance, Product and Pricing and Analytics, Marketing, Claims, and Communications.

Who You Are

You are a Big Data expert skilled in enterprise systems administration. You will provide hands-on subject-matter expertise to build and implement Cloudera Hadoop-based Big Data infrastructure and solutions, encourage cross-functional collaboration, and drive delivery through coordination with development teams. Responsibilities include:

  • Recommend settings and performance tuning for the Cloudera ecosystem (see the sketch after this list).
  • Coordinate with development teams, the DevOps group, and architecture teams to align infrastructure configuration with project deliveries from DEV to PROD.
  • Support the expansion of our Hadoop ecosystem, both on-premises and in the cloud, and drive infrastructure best practices across the Big Data Strategy team.
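
For a flavour of the day-to-day work, here is a minimal sketch of the kind of health check an administrator might script against the Cloudera Manager REST API. The host, credentials, cluster name, and API version shown are hypothetical placeholders, not details of our environment.

```python
"""Minimal sketch: list service health via the Cloudera Manager REST API.

All names below (host, credentials, cluster, API version) are hypothetical
placeholders, not details of this environment.
"""
import requests

CM_HOST = "cm.example.com"   # hypothetical Cloudera Manager host
CLUSTER = "cluster1"         # hypothetical cluster name
BASE = f"http://{CM_HOST}:7180/api/v19"  # 7180 is the default CM port

session = requests.Session()
session.auth = ("admin", "admin")  # placeholder credentials

# GET /clusters/{cluster}/services returns each service's state and health.
resp = session.get(f"{BASE}/clusters/{CLUSTER}/services", timeout=10)
resp.raise_for_status()
for svc in resp.json()["items"]:
    print(svc["name"], svc.get("serviceState"), svc.get("healthSummary"))
```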

What Skills You Should Have

  • Post-secondary education in Computer Science or a related discipline, or equivalent experience
  • Demonstrated experience with Unix (AIX/Linux) and Windows administration
  • 5+ years’ experience in IT infrastructure and operations administration
  • Demonstrated experience supporting Big Data systems such as Cloudera CDH and Hortonworks
  • Good understanding of and experience with the Hadoop stack, including HDFS, Hive, HBase, YARN, Flume, Sqoop, Oozie, Impala, Spark, etc.
  • Experience with Hadoop cluster installation, configuration, monitoring, and troubleshooting (see the monitoring sketch after this list)
  • Experience with Hadoop security, cluster resource management, and cluster maintenance
  • Demonstrated experience with system performance management techniques and performance-related system-level tuning
  • Excellent shell scripting experience, e.g. bash, awk, sed
  • Good knowledge of Python and Perl
  • Knowledge of Active Directory, LDAP, Kerberos, Group Policy administration, and role-based access security management
  • Experience with virtualized environments, including VMware
  • Experience with public cloud environments, including AWS and Azure
  • Experience with build and deployment automation using Ant, shell scripts, etc.
  • Extensive experience with version control systems, e.g. Git, CVS
  • Knowledge of DevOps, Continuous Integration (CI) / Continuous Deployment (CD) pipelines, and tools such as Jenkins, Git, and Artifactory
  • Knowledge of basic storage and networking topologies
  • Knowledge of application servers such as Tomcat, JBoss, and IBM WebSphere
  • Knowledge of ETL tools such as Pentaho
  • Experience working within defined incident, problem, and change management processes to resolve and document service incidents and to plan and schedule changes
  • Strong problem-solving skills, including converting known issues into planned changes
  • Strong interpersonal, communication, and documentation skills
  • Ability and motivation to learn new technologies quickly
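
Related to the cluster monitoring and Python points above, the following is a minimal, hypothetical sketch of polling the HDFS NameNode's JMX servlet for capacity and DataNode liveness. The host and port are assumptions for illustration (50070 is the CDH 5-era NameNode web port; Hadoop 3 clusters use 9870).

```python
"""Minimal sketch: poll the HDFS NameNode JMX servlet for basic health.

The NameNode host and port below are hypothetical placeholders.
"""
import requests

NAMENODE = "http://namenode.example.com:50070"  # hypothetical host

# The FSNamesystemState MBean exposes capacity and DataNode liveness counters.
resp = requests.get(
    f"{NAMENODE}/jmx",
    params={"qry": "Hadoop:service=NameNode,name=FSNamesystemState"},
    timeout=10,
)
resp.raise_for_status()
bean = resp.json()["beans"][0]

used_pct = 100.0 * bean["CapacityUsed"] / bean["CapacityTotal"]
print(f"HDFS capacity used: {used_pct:.1f}%")
print(f"Live DataNodes: {bean['NumLiveDataNodes']}, "
      f"dead: {bean['NumDeadDataNodes']}")
```

Cloudera Manager surfaces the same metrics in its UI; a small script like this is mainly useful for wiring the checks into external monitoring or alerting.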