About the company
Founded in 2020, Durianpay is a next-generation B2B payment platform for receiving and sending money anywhere in Indonesia. With a mission to modernize and democratize payments for businesses, we focus on making transactions cheaper, faster, and more efficient.
Our services are organized into three product pillars: Unified Payment Stack (covering pay-in and pay-out rails, fluidity between money in and money out, dynamic routing to increase success rates, and automatic reconciliation); B2B Checkout (smart invoicing, payment terms on invoices, maker-checker function, sub-accounts); and Cash Flow Solutions (faster settlements).
Role overview
As a Data Engineer at Durianpay, you’ll lead our data transformation efforts, designing and implementing high-performance data systems to scale with our growth. You’ll optimize our PostgreSQL database, build reliable ETL pipelines, and architect a robust Data Lake to centralize data, supporting analytics, business intelligence, and machine learning initiatives that drive key business decisions. Your work will directly support our product pillars, ensuring our data infrastructure is scalable, reliable, and efficient.
Job Description
- Optimize PostgreSQL database performance, enhancing indexing, replication, and query efficiency to reduce latency and support high-volume transactions.
- Design, build, and maintain scalable ETL/ELT workflows to ensure accurate, consistent data flow from real-time databases (e.g., PostgreSQL) to data warehouses (e.g., BigQuery) and central data lakes (a minimal pipeline sketch follows this list).
- Set up and manage monitoring tools and dashboards to proactively address performance issues, ensuring a stable, reliable data infrastructure.
- Architect and manage a scalable data lake for structured and unstructured data, transforming data into actionable reports and dashboards to drive business insights.
- Automate report generation and develop data visualizations to streamline insights, supporting faster data-driven decisions across teams.
- Collaborate with product, operations, and business teams, translating requirements into effective data models, schemas, and reports to support analytical and operational needs.
- Contribute to the development of foundational data models, establish data engineering best practices, and stay updated on advancements in big data and cloud platforms.
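To make the ETL/ELT responsibility above concrete, here is a minimal sketch of the kind of daily PostgreSQL-to-BigQuery pipeline this role would own, assuming a recent Apache Airflow release with the Google provider package installed. The connection IDs, table names, GCS bucket, and BigQuery dataset are hypothetical placeholders, not a description of Durianpay's actual stack.

```python
# Minimal sketch: daily export of transactions from an operational PostgreSQL
# database to a data-lake bucket, then load into a BigQuery staging table.
# All identifiers below (connections, bucket, tables) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.postgres_to_gcs import PostgresToGCSOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="payments_daily_load",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Export the previous day's transactions from PostgreSQL to the data lake (GCS).
    extract = PostgresToGCSOperator(
        task_id="extract_transactions",
        postgres_conn_id="payments_postgres",            # hypothetical connection
        sql="SELECT * FROM transactions WHERE created_at::date = '{{ ds }}'",
        bucket="example-data-lake",                      # hypothetical bucket
        filename="transactions/{{ ds }}/part-{}.json",
        export_format="json",
    )

    # Load the exported files into a BigQuery staging table for analytics and BI.
    load = GCSToBigQueryOperator(
        task_id="load_transactions",
        bucket="example-data-lake",
        source_objects=["transactions/{{ ds }}/*.json"],
        destination_project_dataset_table="analytics.transactions_raw",  # hypothetical
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_APPEND",
        autodetect=True,
    )

    extract >> load
```

In practice the extract and load steps would be parameterized per table and paired with reconciliation and data-quality checks, but the two-task shape above captures the flow from the operational database into the warehouse and data lake described in this role.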
Requirements
- 3+ years of experience in a Data Engineering or similar role, with proven expertise in database optimization and ETL design.
- Strong experience with PostgreSQL, including indexing, query optimization, and replication management.
- Proficiency in ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for automated workflows and pipeline management.
- Strong experience with cloud platforms (e.g., AWS, GCP), including their managed database services (e.g., AWS RDS, GCP Cloud SQL) and distributed data processing frameworks (e.g., Spark, Hadoop).
- Programming experience in Python or Scala.
- Skilled in designing data lake and data warehouse schemas optimized for analytics and reporting.
- Excellent analytical and problem-solving skills with a proactive approach to performance tuning and troubleshooting.
- Strong communication skills, with the ability to collaborate effectively with technical and non-technical stakeholders to align technical solutions with business requirements.