Professional Data Engineer

Property Finder

Data Science
Dubai - United Arab Emirates
Posted on Nov 25, 2025

Property Finder is the leading property portal in the Middle East and North Africa (MENA) region, dedicated to shaping an inclusive future for real estate while spearheading the region’s growing tech ecosystem. At its core is a clear and powerful purpose: To change living for good in the region.

Founded on the value of great ambitions, Property Finder connects millions of property seekers with thousands of real estate professionals every day. The platform offers a seamless and enriching experience, empowering both buyers and renters to make informed decisions. Since its inception in 2007, Property Finder has evolved into a trusted partner for developers, brokers, and home seekers. As a lighthouse tech company, it continues to create an environment where people can thrive and contribute meaningfully to the transformation of real estate in MENA.

Position Summary:

We are looking for a Data Engineer to build reliable, scalable data pipelines and contribute to the core data ecosystem that powers analytics, AI/ML, and emerging Generative AI use cases. You will work closely with senior engineers and data scientists to deliver high-quality pipelines, models, and integrations that support business growth and internal AI initiatives.

Key Responsibilities

Core Engineering

  • Build and maintain batch and streaming data pipelines with strong emphasis on reliability, performance, and efficient cost usage.
  • Develop SQL, Python, and Spark/PySpark transformations to support analytics, reporting, and ML workloads.
  • Contribute to data model design and ensure datasets adhere to high standards of quality, structure, and governance.
  • Support integrations with internal and external systems, ensuring accuracy and resilience of data flows.

GenAI & Advanced Data Use Cases

  • Build and maintain data flows that support GenAI workloads (e.g., embedding generation, vector pipelines, data preparation for LLM training and inference).
  • Collaborate with ML/GenAI teams to enable high-quality training and inference datasets.
  • Contribute to the development of retrieval pipelines, enrichment workflows, or AI-powered data quality checks.

Collaboration & Delivery

  • Work with Data Science, Analytics, Product, and Engineering teams to translate data requirements into reliable solutions.
  • Participate in design reviews and provide input toward scalable and maintainable engineering practices.
  • Uphold strong data quality, testing, and documentation standards.
  • Support deployments, troubleshooting, and operational stability of the pipelines you own.

Professional Growth & Team Contribution

  • Demonstrate ownership of well-scoped components of the data platform.
  • Share knowledge with peers and contribute to team learning through code reviews, documentation, and pairing.
  • Show strong execution skills — delivering high-quality work, on time, with clarity and reliability.

Impact of the Role

In this role, you will help extend and strengthen the data foundation that powers analytics, AI/ML, and GenAI initiatives across the company. Your contributions will improve data availability, tooling, and performance, enabling teams to build intelligent, data-driven experiences.

Tech Stack

  • Languages: Python, SQL, Java/Scala
  • Streaming: Kafka, Kinesis
  • Data Stores: Redshift, Snowflake, ClickHouse, S3
  • Orchestration: Dagster (Airflow legacy)
  • Platforms: Docker, Kubernetes
  • AWS: DMS, Glue, Athena, ECS/EKS, S3, Kinesis
  • ETL/ELT: Fivetran, dbt
  • IaC: Terraform + Terragrunt

Desired Qualifications:

  • 5+ years of experience as a Data Engineer.
  • Strong SQL and Python skills; good understanding of Spark/PySpark.
  • Experience building and maintaining production data pipelines.
  • Practical experience working with cloud-based data warehouses and data lake architectures.
  • Experience with AWS services for data processing (Glue, Athena, Kinesis, Lambda, S3, etc.).
  • Familiarity with orchestration tools (Dagster, Airflow, Step Functions).
  • Solid understanding of data modeling and data quality best practices.
  • Experience working with CI/CD pipelines or basic automation for data workflows.
  • Exposure to Generative AI workflows or willingness to learn: embeddings, vector stores, enrichment pipelines, LLM-based data improvements, retrieval workflows.

Preferred Experience:

  • Experience in the real estate domain.
  • Familiarity with ETL tools: Fivetran, dbt, Airbyte.
  • Experience with real-time analytics solutions (ClickHouse, Pinot, Rockset, Druid).
  • Familiarity with BI tools (QuickSight, Looker, Power BI, Tableau).
  • Exposure to tagging/tracking tools (Snowplow, Tealium).
  • Experience with Terraform & Terragrunt.
  • Knowledge of GCP or Google Analytics is a plus.

Our promise to talent

At Property Finder, we believe talent thrives in an environment where you can be your best self. Where you are empowered to create, elevate, grow, and care. Our team is made up of the best and brightest, united by a shared ambition to change living for good in the region. We attract top talent who want to make an impact. We firmly believe that when our people grow, we all succeed.

Property Finder Guiding Principles

  1. Think Future First
  2. Data Beats Opinions, Speed Beats Perfection
  3. Optimise for Impact
  4. No Ostriches Allowed
  5. Our People, Our Power
  6. The Biggest Risk is Taking no Risk at All
