Data Engineer
Holo.ae
About Us:
At Holo, we’re on a mission to simplify homeownership through technology, expert guidance, and transparency. We operate across three core entities:
- Holo Mortgage – providing digital mortgage solutions tailored to each client
- Holo Concierge – our concierge-style service that partners with clients and real estate agents to help approved buyers find their dream home
- Holo KSA – expanding our footprint and innovation into the Saudi market
With over 25 years of expertise and 24,000+ happy homeowners, we’ve mastered the art of simplifying the homebuying journey. We’re building a smarter, faster, and more accessible way to own a home.
Job Summary:
We are seeking an experienced and highly motivated Data Engineer to join our data team. The ideal candidate will play a critical role in designing, building, and maintaining large-scale data infrastructure and pipelines. The Data Engineer will work closely with data scientists, analysts, and software engineers to ensure the availability, reliability, and scalability of data systems. You will be responsible for developing and managing ETL processes, optimizing data storage, and supporting real-time data analytics efforts.
Key Responsibilities:
- Design and Develop Scalable ETL Pipelines: Create robust, scalable ETL (Extract, Transform, Load) pipelines that move data between various systems, ensuring data integrity and efficiency.
- Data Warehousing & Data Lake Management: Build and maintain data lakes and data warehouses to store structured and unstructured data, ensuring these systems meet business and analytics needs.
- Real-Time Data Processing: Implement real-time processing systems to handle streaming data from various sources.
- Cloud Data Infrastructure: Design and maintain cloud-based data infrastructure using AWS or Google Cloud, focusing on cost-effectiveness, scalability, and performance.
- Data Security and Compliance: Implement data security protocols, ensuring that all data systems comply with relevant regulations such as GDPR.
- Automation of Data Processes: Automate repetitive tasks in data ingestion, processing, and validation to optimize workflows.
- Monitoring and Optimization: Monitor ETL pipelines and optimize for speed, scalability, and reliability, handling any bottlenecks or failures that arise.
- Collaboration with Cross-functional Teams: Work with data scientists, data analysts, and product teams to understand data requirements and provide appropriate data solutions.
- Documentation and Best Practices: Create and maintain detailed documentation on data pipelines, data architecture, and processes to ensure knowledge sharing and compliance with industry best practices.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
- 3-5 years of experience in data engineering, data architecture, or software engineering.
- Strong expertise in ETL tools (e.g., Apache Airflow, Informatica, Talend, AWS Glue).
- Proficiency in SQL and NoSQL databases such as PostgreSQL or MongoDB.
- Experience working with data modeling tools and business intelligence solutions (e.g., Tableau, Looker, Power BI).
- Experience with big data processing frameworks like Apache Spark, Hadoop, or Flink.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), particularly with services like AWS Redshift, BigQuery, or Azure Synapse Analytics.
- Proficiency in programming languages such as Python, Node.js, or Go for data processing and automation.
What We Offer:
- Competitive salary
- Health insurance and other benefits
- 25 days annual leave plus National Holidays
- Enhanced Maternity and Paternity Leave
- Opportunities for career growth and development in a dynamic environment
- A supportive and collaborative team environment
- Half-day Fridays, finishing at 1pm – giving you a well-deserved early start to the weekend!