BI Data Engineer
About The Position
Coralogix is rebuilding the path to observability using a real-time streaming analytics pipeline that provides monitoring, visualization, and alerting capabilities without the burden of indexing.
By enabling users to define different data pipelines per use case, we provide deep Observability and Security insights, at an infinite scale, for less than half the cost.
Coralogix is a fast-growing company, and you will have a rare opportunity to join our BI team at the very beginning, working with a cutting-edge modern data stack. The team owns all internal analytics work: building data pipelines, data modeling, analysis, visualization, and reporting. We enable every area of the business to make data-driven decisions. The team will grow, and you could be one of its first members.
About the Role
As a Business Intelligence Data Engineer, you will be responsible for defining, developing, and managing curated datasets, key business metrics, and reporting across functional units at Coralogix. You will architect, implement, and manage data models and ELT/ETL pipelines that enable product, engineering, and business teams to access consistent data, in near real time, across the Coralogix ecosystem and services. You are a self-starter who is comfortable working cross-functionally with teams across Coralogix.
What You’ll Do
- Partner with business stakeholders, upstream infrastructure and platform engineering teams, and downstream data consumers to understand the data and translate business requirements into technical designs for scalable data pipelines.
- Develop and take ownership of BI data pipelines and a centralized data warehouse, with trustworthy curated datasets, standardized metrics, and shared business definitions that empower data access and self-service.
- Set the direction of our data engineering and architecture, determining the right tools for each job.
- Institute data engineering best practices (e.g. dimensional modeling, large-scale distributed ETL/ELT pipelines) to enable large-scale machine learning.
- Create and maintain analytics data pipelines that generate the data and insights powering business decision-making.
- Provide feedback to product and engineering on new and upcoming features.
Requirements
Must Have
- 4+ years of experience as a software engineer or data engineer
- Extensive knowledge of BI concepts (e.g. ETL, dimensional modeling, data warehouse design, data quality monitoring, dashboarding)
- Extensive knowledge of database query languages (e.g. SQL or its MPP equivalents), database design, query optimization, and the internals of query planning
- Extensive experience with distributed computing/MapReduce
- Extensive experience with data streaming solutions
Nice to Have
- Extensive experience with event tracking or marketing lead tracking
- Extensive experience in distributed computing cluster management (e.g. Hadoop, Spark, Flink)
- Experience running cloud native workloads on Kubernetes
- Familiarity with dbt
- Experience productionizing machine learning pipelines
- Technical paper publications or conference speaking engagements