
Data Engineer
- Kuala Lumpur
- Permanent
- Full-time
Locations: Malaysia
Teams: Data Platform & Solution

🔍 About the Role

We are looking for a skilled and motivated Data Engineer (Mid-Level) to join our Data COE, contributing to both the Data Platform & Solution teams.

In this role, you’ll help build and maintain modern, scalable data infrastructure to meet PropertyGuru’s evolving needs for data-driven insights and innovation. Our team is advancing toward a Medallion architecture and adopting a real-time-first mindset, with batch serving as a secondary option.

You’ll also contribute to a “shift-left” data processing philosophy, where cleansing, validation, and transformation are done as early as possible, close to the source, to improve data trust, reduce rework, and simplify downstream logic.
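To give candidates a concrete sense of what we mean by shift-left processing, here is a minimal illustrative sketch in Python (the event shape, field names, and validation rules are hypothetical and not drawn from our actual pipelines): records are cleansed and validated right at ingestion, so downstream layers can trust what they receive.

```python
# A minimal, illustrative "shift-left" sketch: raw events are cleansed and
# validated immediately at ingestion, so downstream (silver/gold) consumers
# never see malformed records. All field names and rules here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any, Dict, Optional


@dataclass
class ListingEvent:
    listing_id: str
    price: float
    event_time: datetime  # always stored as UTC


def validate_at_ingestion(raw: Dict[str, Any]) -> Optional[ListingEvent]:
    """Return a cleaned event, or None so the caller can dead-letter the record."""
    try:
        listing_id = str(raw["listing_id"]).strip()
        price = float(raw["price"])
        event_time = datetime.fromisoformat(str(raw["event_time"]))
    except (KeyError, TypeError, ValueError):
        return None  # structurally malformed: reject as early as possible

    if not listing_id or price <= 0:
        return None  # basic business-rule checks also happen near the source

    # Normalise timestamps to UTC so downstream logic never has to.
    if event_time.tzinfo is None:
        event_time = event_time.replace(tzinfo=timezone.utc)
    return ListingEvent(listing_id, price, event_time.astimezone(timezone.utc))


if __name__ == "__main__":
    good = {"listing_id": "MY-123", "price": "450000", "event_time": "2024-05-01T08:30:00"}
    bad = {"listing_id": "", "price": "n/a", "event_time": "2024-05-01T08:30:00"}
    print(validate_at_ingestion(good))   # cleaned ListingEvent
    print(validate_at_ingestion(bad))    # None -> would go to a dead-letter queue
```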
🛠️ Key Responsibilities
- Design, develop, and maintain real-time (e.g., Kafka, Debezium, Apache Flink, Apache Beam, Kinesis) and batch (e.g., Cloud Composer/Airflow, Apache Spark, AWS Glue) data pipelines
- Implement and maintain a Medallion architecture to support scalable and well-governed data layers.
- Build and optimize data models, data marts, and schemas for reporting and ML use cases.
- Apply shift-left practices by performing early-stage data cleansing, validation, and transformation close to ingestion
- Ensure data quality, integrity, and availability, with proactive monitoring and alerting (e.g., Telm.ai or similar tooling).
- Handle large structured and semi-structured datasets using GCP (BigQuery) and AWS.
- Optimize storage and queries for performance and cost-efficiency.
- Contribute to data architecture, design discussions, and evolving platform standards.
- Translate business requirements into technical implementation.
- Troubleshoot and resolve pipeline and data issues, perform root cause analysis, and continuously improve reliability.
- Work closely with marketplace, analytics, product, and engineering teams to deliver end-to-end data solutions.
- Take ownership of assigned tasks and communicate progress, risks, and blockers in a timely manner.
- Ensure adherence to data governance, security, and compliance requirements.
- Contribute to technical documentation and participate in peer code reviews.
- Stay informed about modern data engineering trends and help introduce new technologies or practices (e.g., data contracts, Iceberg/Delta Lake, dbt, event-driven architecture)
- Collaborate with peers to foster technical growth and knowledge sharing.
✅ Requirements
- 2–4 years of hands-on experience in data engineering or similar roles.
- Proficient in Python and SQL; knowledge of Java/Scala is a plus
- Experience with modern data processing frameworks (e.g., Kafka, Spark, Hadoop).
- Experience with cloud data platforms (GCP and AWS).
- Familiarity with a variety of database types, including relational (e.g., PostgreSQL), key-value stores (e.g., Redis), and document databases (e.g., MongoDB).
- Experience working with search or analytics engines like Elasticsearch.
- Familiar with CI/CD, infrastructure as code, and DevOps tools.
- Strong grasp of data warehousing and data modeling principles.
- Excellent problem-solving, analytical, and communication skills
- Self-driven with the ability to work independently and collaboratively
- Experience with containerization (e.g., Docker, Kubernetes)
- Familiarity with metadata management, data cataloging, or observability platforms
- Exposure to data visualization tools (e.g., Looker, Looker Studio, Tableau, Power BI)
- Understanding of how data engineering supports ML and data science workflows
🌟 Why Join Us
- Be part of a modern data engineering team driving data platform capabilities
- Focus on building, not just implementing; your input will shape the future architecture
- Work across platform and data product domains in a high-impact role
- Join a bottom-up engineering culture that values innovation, ownership, and learning
- Flexible working environment with career growth opportunities along senior or individual contributor (specialist) tracks