Hadoop Architect

Location: Oklahoma City, Oklahoma
Date Posted: 09-08-2017
Direct-Hire - $120K
BASIC PURPOSE:

The Enterprise Data & Analytics group at our client is looking for a Big Data Architect to lead the team in designing and developing Big Data solutions that meet business objectives. This is an exciting opportunity to work for a family-owned company that continues to grow, and to get in on the ground floor helping build the company’s Big Data practice. The ideal candidate has deep technical knowledge of the Hadoop stack and a passion for teaching others. This role requires close partnership with the Data Science/Analyst community, as well as various IT teams, to ensure requirements are met and solutions are supportable and scalable.

MAJOR RESPONSIBILITIES:
  • Design and implement data ingestion techniques for real-time and batch processes from a variety of sources into the Hadoop ecosystem and HDFS clusters (a minimal ingestion sketch follows this list)
  • Visualize and report data findings in a variety of visual formats that provide insight to the organization
  • Apply knowledge of data, master data, and metadata-related standards, processes, and technology
  • Define and document architecture roadmaps and standards
  • Drive use-case analysis and architectural design to determine how best to meet customer requirements with the tools of the ecosystem
  • Ensure scalability, high availability, fault tolerance, and elasticity within the big data ecosystem
  • Provide technical leadership and coaching to junior team members
  • Architect and develop ELT and ETL solutions focused on moving data from a highly diverse data landscape into a centralized data lake; also architect solutions to acquire semi-structured and unstructured data sources, such as sensors, machine logs, and clickstreams
  • Manage all activities centered on obtaining data and loading it into an Enterprise Data Lake
  • Serve as an expert in efficient ETL, data quality, and data consolidation
  • Stay current with vendor/product roadmaps and make recommendations for adoption
  • Maintain a customer-focused attitude
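
For flavor, the batch-ingestion work described above can be as simple as landing an extracted file in HDFS through the Hadoop FileSystem API. The Java sketch below is a minimal, hypothetical illustration; the NameNode URI, class name, and file paths are placeholder assumptions, not details from this posting.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Minimal sketch: copy a local export into an HDFS data lake path.
    public class HdfsIngestSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode URI; in practice this comes from core-site.xml.
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

            try (FileSystem fs = FileSystem.get(conf)) {
                Path src = new Path("file:///data/exports/orders.csv"); // hypothetical local export
                Path dst = new Path("/datalake/raw/orders/orders.csv"); // hypothetical lake path
                fs.copyFromLocalFile(src, dst); // simple batch-style load into the cluster
            }
        }
    }

In a production pipeline this step would typically be handled by Sqoop, Flume, or a scheduled Spark job rather than hand-rolled code, but the underlying HDFS write is the same.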
 
EDUCATION AND REQUIREMENTS:
  • Education:
    • Bachelor’s Degree or equivalent in Information Technology, Computer Science, or Computer Engineering
  • Experience:
    • 8 years of IT experience
    • 3+ years of experience building large-scale data solutions involving data architecture, data integration, business intelligence and data analytics
    • 1+ year of experience working on large scale Big Data projects
    • Deep technical knowledge of most components of the Hadoop ecosystem (MapReduce, HDFS, YARN, Hive, HBase, Sqoop, etc.), preferably with the Hortonworks distribution
    • Understanding of statistical and predictive modeling concepts a plus
    • Project Management experience with Agile methodologies a plus
    • Strong Java/J2EE experience
    • Experience with visualization tools
    • Experience with RDBMS platforms, such as SQL Server and in-memory columnar storage, such as HANA