You will lead a high-performing technical team focused on architecting and deploying sophisticated data orchestration and transformation systems. This role bridges the gap between high-level data architecture and practical execution, ensuring our data strategies align with the complex operational demands of global energy projects and large-scale EPCI (Engineering, Procurement, Construction, and Installation) lifecycles. You will be a catalyst for technical excellence, driving innovation and fostering the professional growth of your engineers.
II. Core Responsibilities
Strategic Technical Leadership
- Workflow Orchestration: Direct the end-to-end lifecycle of data engineering projects, from initial scoping and impact analysis to resource allocation and budgetary oversight.
- Governance & Integrity: Oversee version control and collaborative development processes, resolving complex merge conflicts and maintaining the highest standards of codebase reliability.
- System Optimization: Proactively audit legacy systems for technical debt, identifying bottlenecks and implementing targeted refactoring strategies to enhance performance.
- Agile Delivery: Champion iterative development cycles, ensuring rapid deployment of data solutions supported by comprehensive technical documentation.
Operations & Quality Assurance
- DevOps Integration: Implement automated CI/CD pipelines to streamline development, reduce human error, and accelerate time-to-market for data products.
- Data Observability: Monitor and diagnose pipeline health, establishing robust protocols for identifying and rectifying data quality issues to maintain stakeholder trust (see the sketch after this list).
- Cross-Functional Facilitation: Act as the primary technical liaison between diverse business units, ensuring transparent communication and alignment on project milestones.
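The data observability responsibility above implies concrete, automatable quality gates rather than manual inspection. The following is a minimal sketch of one such check, assuming a PySpark environment; the dataset, paths, and column names (orders, order_id, amount) are illustrative, not drawn from this posting.

```python
# A minimal data-quality gate: validate a batch before publishing it downstream.
# All paths and column names here are hypothetical examples.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

df = spark.read.parquet("/data/landing/orders")  # hypothetical landing zone

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dupe_keys = total - df.dropDuplicates(["order_id"]).count()
negative_amounts = df.filter(F.col("amount") < 0).count()

# Fail fast so bad batches never reach consumers; in production this branch
# would trigger an alert to the on-call engineer rather than simply raise.
if null_keys or dupe_keys or negative_amounts:
    raise ValueError(
        f"DQ gate failed: {null_keys} null keys, {dupe_keys} duplicate keys, "
        f"{negative_amounts} negative amounts"
    )

df.write.mode("overwrite").parquet("/data/curated/orders")  # hypothetical target
```

In practice such checks would live in a shared expectation framework rather than inline in each job; the point is that "pipeline health" is enforced in code.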
Culture & Mentorship
- Center of Excellence: Cultivate a culture of continuous learning, promoting industry best practices in data engineering and stewardship.
- Knowledge Transfer: Lead internal workshops and change management programs to elevate the team's collective technical maturity.
- Strategic Representation: Serve as the data engineering subject matter expert (SME) on organizational committees and in high-level strategic planning.
III. Candidate Profile & Technical Proficiency
Academic & Professional Credentials
- Education: University degree in a quantitative field such as Computer Science, Software Engineering, or Information Systems.
- Preferred Certifications: Advanced credentials in cloud ecosystems, specifically Azure (DP-203, AZ-305, or AZ-400), or equivalent mastery of AWS/GCP.
Professional Experience
- Industry Tenure: Minimum of 8 years in IT, including at least 5 years in a formal leadership or supervisory engineering capacity.
- Platform Mastery: 5+ years of hands-on experience managing enterprise-grade data environments (e.g., Azure Databricks, Synapse Analytics, or modern data lakehouses).
- Big Data Engineering: 3+ years architecting scalable data processing engines using Spark (Python/Scala) and designing complex ETL/ELT workflows (see the sketch at the end of this section).
- Infrastructure & Automation:
  - Extensive experience with containerization (Docker/Kubernetes) for data workloads.
  - Proven track record in Infrastructure as Code (Terraform/Bicep) and automated deployment frameworks.
- Programming & Scripting: Expert-level command of Python, SQL, and PySpark; proficient in system automation via Bash or similar scripting languages.
- Governance & Reliability:
  - Experience establishing SLAs, alerting frameworks, and incident response protocols.
  - Familiarity with data privacy compliance, metadata management, and lineage tracking.
- Business Impact: Demonstrated ability to optimize cloud spend, increase system uptime, and reduce data ingestion latency.
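To make the Big Data Engineering expectation above concrete, here is a minimal PySpark sketch of the extract-transform-load pattern the role involves: ingest raw events, normalize and aggregate them, and publish a partitioned, analytics-ready table. Every name in it (paths, project_id, supplier_id, event_ts, amount) is hypothetical, not taken from this posting.

```python
# A minimal batch ETL job in PySpark: raw events in, curated aggregate out.
# All source/target names and the schema are illustrative only.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("daily-spend-etl").getOrCreate()

# Extract: raw procurement events landed as JSON (hypothetical source).
raw = spark.read.json("/data/raw/procurement_events")

# Transform: enforce types, derive a business date, aggregate per project/day.
daily_spend = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("project_id", "event_date")
       .agg(
           F.sum("amount").alias("total_spend"),
           F.countDistinct("supplier_id").alias("supplier_count"),
       )
)

# Load: partitioned Parquet so downstream readers can prune by date.
(daily_spend.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("/data/curated/daily_spend"))
```

At the scale this role describes, the same pattern would typically run on Databricks or Synapse with incremental rather than full-overwrite loads, but the read-transform-write structure is unchanged.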