
Data Architect

We are a digitally native company where innovation, design and engineering meet scale. We use the latest digital and cognitive technologies to empower organizations in every aspect of their business.
We want you to join us to work for the biggest clients in tech, retail, travel, banking, ecommerce and media, revolutionizing and growing their core businesses while helping them (and you!) stay ahead of the curve. Be part of a company with the most cutting-edge practices and technologies, plus a unique team.

WHAT ARE WE LOOKING FOR?

We are seeking an experienced Data Architect to join our team and build tools for a distributed data processing platform at a challenging high-tech company. You will develop new algorithms to process large-scale data efficiently, working with a large team of interdisciplinary engineers.

Responsibilities:
 
  • Define and execute ETL processes using Apache Spark on Hadoop, among other Big Data technologies
  • Design and implement data pipelines for processing and aggregating data
  • Define message-oriented architectures using Kafka
  • Orchestrate data ingestion into data stores such as HBase
  • Work with stakeholders and a cross-functional team to understand requirements, evaluate design alternatives and architect complex solutions
  • Build collaborative partnerships with software architects, technical leads and key individuals within other functional organizations
  • Ensure code quality by actively participating in code reviews; test solutions and ensure they meet specifications and performance requirements
  • Build and foster a high-performance engineering culture, mentor team members and provide your team with the tools and motivation to make things happen
  • Lead the analysis and design of quality technical solutions

Requirements:

  • BS or MS in Computer Science or related technical field or equivalent combination of education/experience
  • A minimum of 8 years of experience in global software development and deployment
  • Experience with Java or Scala
  • Attention to detail and a strong grounding in computer science: algorithms, data structures, and distributed algorithms
  • Significant experience with Hadoop ecosystem (Spark, Hive, HBase)
  • Knowledge of data streaming and message-queue middleware such as Kafka
  • Deep knowledge of Extract, Transform, Load (ETL) and distributed processing techniques such as MapReduce
  • Experience working in large-scale enterprise organizations with cross-functional teams
  • Excellent verbal and written communication skills
  • Great problem solving and analytical skills
  • Experience building solutions on any major cloud provider (AWS, Azure, GCP) is a plus
  • Experience with graph, data classification, and clustering algorithms in distributed environments is a plus
  • Experience defining REST services and platform integrations is a plus

We are interested in hard-working, fast-learning talent, and we have the know-how and scale to help you shape your own career path. If you seek an entrepreneurial, flexible and team-oriented culture, come join us.

We are ready.