
Data Engineer - FABORY Czech s.r.o.

Fabory, a technical trading company, offers a wide and growing range of fasteners, tools, industrial items, and safety products from reputable, well-known brands. Our main customers are, for example, companies that carry out repairs and maintenance, and machine builders who use these materials in the production of their machines. Our customers therefore operate across many markets and sectors, and we support both small businesses and multinational companies in their daily production, maintenance, and repair work.

We are FABORY, and we take care to keep things together!

We specialize in fasteners and related products, and we are looking for a Data Engineer to join our team.

As a Data Engineer at Fabory, you will play a key role in driving the migration to our new data platform, designing and building modern data environments, and translating complex data streams into scalable, future-proof solutions. In addition, you will contribute to the application of AI solutions within data engineering projects.

The tech stack you will be working with:

SAP ECC, SAP BW, SAP S/4HANA, SAP Datasphere, Azure cloud (ADLS, Synapse, Data Factory, Event Hub, Key Vault, etc.), Azure Databricks, Azure DevOps, SQL Server, Power BI, MS Fabric, Visual Studio Code, SQL, Python, PySpark

Key Responsibilities

  • Collaborate with business stakeholders to identify and capture business requirements for data products, translate them into clear technical specifications, and shape a clear definition of done for each data product
  • Design, develop and optimize data models (logical and physical) across different layers of the data architecture (e.g., Raw, Curated, Data Mart) to support analytics, reporting, and machine learning use cases
  • Design, develop and implement solid, scalable and fault-tolerant data pipelines that enable data-driven decision-making. You build it, you own it: you are responsible for ensuring that data pipelines and data products are high-performing, automated, and monitored, in line with CI/CD standards and the DevOps approach
  • Ensure data quality and integrity through the implementation of validation, reconciliation, and monitoring mechanisms
  • Optimize data workflows for performance, cost-efficiency, and reliability
  • Participate in the planning of our Data Engineering backlog

Required skills and experience

  • Business acumen with proven experience in capturing business requirements, translating them into clear technical specifications, and shaping the definition of done to deliver data products that create real business value
  • Strong understanding of ETL/ELT processes and design patterns, data warehousing, data modeling, and the medallion architecture (bronze/raw, silver, and gold layers)
  • Experience implementing incremental data loads, slowly changing dimensions (SCD), change data capture (CDC), and data synchronization strategies
  • Knowledge of data governance, data quality, and data security principles and practices
  • Understanding of the differences between relational databases, data warehouses, and data lakehouse architectures
  • Understanding of cloud data platform principles (Azure, AWS, or GCP)
  • Solid hands-on experience with SQL and Python
  • Knowledge of DevOps practices and tools (Azure DevOps, Git, etc.) and CI/CD principles
  • Familiarity with BI/reporting integration (Power BI, MS Fabric, or similar). You will not be responsible for building BI reports, but an understanding of integration, data synchronization, and data modelling patterns for BI would help a lot
  • Last but not least, an ownership mindset with a strong passion for results, proactivity in tackling challenges even without an immediate solution, and a constant eagerness to learn and grow to stay at the forefront of the data engineering field. At Fabory, our Data Engineering team is always here to guide, teach, and support you, and we encourage you to take the initiative to explore, learn, and grow your skills to unlock your full potential

Would be an advantage

  • Hands-on experience with cloud platforms, preferably Azure (ADLS, Synapse, Data Factory, Event Hub, Key Vault, etc.)
  • Hands-on experience building data pipelines in a data lakehouse (Databricks or similar)
  • Hands-on experience with SAP data extraction (ODP/2LIS, SLT or third-party connectors)
  • Advanced knowledge of and hands-on experience with Python, PySpark, Java, or Scala
  • Certifications in Data Engineering (e.g., Databricks Data Engineer Associate/Professional, Azure Data Engineer Associate, or similar)

What We Offer

  • 5 weeks of vacation and 3 sick days
  • Hybrid work model (3+2)
  • Meal allowance and contributions to pension/life insurance and leisure activities
  • Annual salary review and annual performance-based bonus
  • Opportunities for training and professional certifications
  • Company and team events
  • Dog-friendly office, free coffee, tea, and refreshments

Benefits

Meal vouchers/meal allowance, Company events, Bonuses, 5 weeks of vacation, Sick days, Occasional work from home, Laptop, Refreshments in the workplace, Pension/life insurance contribution, Contribution to sport/culture/leisure activities, Discount on company products/services, Dog-friendly office

About the position

Workplace location:
K Letišti 1825/1a, Šlapanice
Employment type:
Full-time
Contract duration:
Fixed-term
Employment relationship:
Employment contract
Recommended education:
Secondary school or vocational training with school-leaving exam (maturita)
Recommended languages:
English (advanced)