Are you looking for unlimited opportunities to develop and succeed? Do you want work that challenges you and helps our customers make decisions easier and their lives better? Does a flexible, fun and supportive work environment work for you?
We have an open role for a Data Engineer to join an organization where data is recognized as a key component of success. Simply put: data is a big deal here!
- Deliver strategic & tactical Master Data Management initiatives and projects as part of an Agile squad
- Design, build, and integrate data from various sources
- Translate functional and technical requirements into detailed design and high performing capabilities
Responsibilities for this role include:
- Design and build data patterns and services - batch, real-time, and complex event handling - leveraging open technologies
- Provide design solutions to technical issues and ensure the optimal solution is recommended to Technology leadership and business partners
- Capture, maintain and integrate technical metadata in a Big Data ecosystem and external metadata repositories
- Assist and enable the integration of business metadata in a Big Data ecosystem and external metadata repositories
- Participate in PoC/PoT efforts to integrate new Big Data management technologies, software engineering tools, and new patterns into existing structures
- Ensure timely delivery against project timelines by automating development and deployment tasks
- Research opportunities for data acquisition and new uses for existing data
- Create Big Data warehouses that can be used for reporting or analysis by data scientists
- Influence/recommend ways to improve data reliability, efficiency and quality
- Collaborate with other data management and IT team members on project goals
- Document detailed Big Data design solutions conformant to enterprise standards, architecture and technologies
- Develop application support documentation as required by the application support teams for acceptance of system changes into production
- Maintain involvement in continuous improvement of Big Data solution processes, tools and templates
- Participate in the creation and publishing of design documents, usage patterns, and cookbooks for technical community
Experience required for this role is as follows:
- 3 – 5 years of demonstrated experience in big data/data management, plus a university degree in Engineering, Computer Science, or an equivalent program
- Experience with the Hortonworks Data Platform (version 2.5) is a plus
- Experience with the Hadoop ecosystem and toolset – Sqoop, NiFi, Pig, Spark, HDFS, Hive, HBase, etc.
- Experience with Big Data streaming frameworks and tools (Spark Streaming, Storm, Kafka, etc.)
- Experience developing Hadoop integrations (batch or streaming) for data ingestion, data mapping and data processing capabilities
- Experience with IBM InfoSphere MDM 11.x or other MDM solutions preferred
- Experience programming in both compiled languages (Java, Scala) and scripting languages (Python or R)
- Experience with SDLC methodologies (e.g., Agile, iterative, waterfall)
- Experience with DevOps or Continuous Delivery tools and processes a plus
- Experience working in a cloud IaaS/PaaS environment a plus
- Experience in Big Data performance analysis, tuning and capacity planning
- Experience using a source code management/version control system
- Experience in data profiling and analysis is a plus
- Experience in designing business intelligence systems, dashboard reporting, and analytical reporting is a plus