
Data Platform Engineer

You
You are an innovative technologist who enjoys building software products and quickly seeing them work in the real world. You like developing deeply collaborative teams and guiding passionate, cross-functional technologists to solve new problems. You drive results and hold yourself and your teammates to high standards of software craft and professional quality while keeping up with new web application tools and technologies. You can agree to disagree with a smile and still deliver.

Us
We deliver valuable digital experiences for established enterprises, emerging startups, and other companies through our Data Engineering, Analytics, and Application Development services. Our customized, enterprise-grade solutions enable our partners to improve operational efficiency and achieve better business outcomes.

Egen's tech stack on the Data Engineering team is based on Java, Scala, Python, Spark, and AWS. The applications we build are typically deployed as microservices and integrate with technologies such as Kafka, Storm, and Elasticsearch. We are working on a continuous deployment pipeline that leverages Mesos, Docker, and Marathon to provide rapid on-demand releases. Our developers work in an agile process to efficiently deliver high value applications and product packages.

Your Day
As a Data Platform Engineer at Egen, you will architect and implement cloud-native data pipelines and infrastructure to enable analytics and machine learning on Egen's rich datasets.

Why we’re looking for you:

  • You know what it takes to build and run resilient data pipelines in production and have experience implementing ETL/ELT to load a multi-terabyte enterprise data warehouse.
  • You have implemented analytics applications using multiple database technologies, such as relational, multidimensional (OLAP), key-value, document, or graph.
  • You understand the importance of defining data contracts and have experience writing specifications, including REST APIs.
  • You write code to transform data between data models and formats, preferably in Python or PySpark.
  • You've worked in agile environments and are comfortable iterating quickly.

Bonus points for:

  • Experience moving trained machine learning models into production data pipelines.
  • Expert knowledge of relational database modeling concepts, SQL skills, proficiency in query performance tuning, and desire to share knowledge with others.
  • Experience building cloud-native applications and supporting technologies / patterns / practices including: AWS, Docker, CI/CD, DevOps, and microservices.