Job Description

Big Data Architect

The Big Data Software Architect position is a combined architectural design, technical leadership, and hands-on development role that contributes to the Client's success through expertise in large-scale data and distributed systems. You will leverage the Hadoop ecosystem and mature existing systems to help design and create the next-generation service architecture. Qualified individuals will have a solid background in the fundamentals of computer science, software system architecture and design, development process and best practices, big data, distributed computing, large-scale data processing, and high availability.

Because you will be part of a small team, your ability to communicate technical ideas effectively, both orally and in writing, and to solve complex problems in a team environment will also be considered.

Responsibilities:

  • Help define the vision of next generation high-scale data architecture.
  • Architect and deliver complex, evolving, high-performance, scalable, and reliable systems based on business and technical requirements.
  • Lead and drive cross-team platform-level initiatives and projects.
  • Identify opportunities for performance optimization across our technology stack. Challenges come in the form of scale, computational efficiency, and data fragmentation.
  • Establish best practices in applications that access data.
  • Own the health and correctness of data systems software running in production. Work with DevOps and other teams to troubleshoot production issues.

Required Qualifications:

  • Must be passionate, team oriented, creative, cooperative, and an exceptional problem solver.
  • 5-8+ years of relevant experience in system architecture, software design, optimization, etc.
  • Experience with technical leadership, defining visions/solutions and collaborating to see them through to completion.
  • Strong analytical problem solving and decision-making skills.
  • Good written and verbal communication skills.
  • Degree in Computer Science or a related engineering field; MS/PhD preferred.
  • Experience with Java or Scala is a must.
  • Proven hands-on experience with Hadoop, YARN, MapReduce, Spark, Kafka, and HBase.
  • Experience with data systems at large scale is a must.
  • Solid skills in performance tuning, monitoring and measuring.
  • Strong knowledge of common algorithms and data structures.
  • Understanding of analytics, statistics, and data science algorithms a plus.
  • Proficiency in relational and NoSQL databases is preferred.
  • Experience with AWS and/or Google Cloud a plus.
  • Experience with related open source technologies such as Elasticsearch, OpenTSDB, Grafana, Kibana, Jenkins, ZooKeeper, Docker, Kubernetes, etc. a plus.
  • Solid understanding and working knowledge of Unix operating systems, networking, and scaling techniques.

Application Instructions

Please click on the link below to apply for this position. A new window will open and direct you to apply at our corporate careers page. We look forward to hearing from you!

Apply Online