Senior Data Engineer

Data#3


Date: 3 weeks ago
City: Brisbane, Queensland
Contract type: Contractor

We’re currently seeking a Senior Data Engineer to join a supportive and high-performing team within a large, forward-thinking QLD organisation. This is a fantastic opportunity to contribute to a cutting-edge data platform project.


Contract Details:

  • Initial Term: 6 months (with strong potential for extensions)
  • Location: Brisbane, with flexible working options available (3 days onsite, 2 days WFH)
  • Start Date: ASAP


About the Role

As a Senior Data Engineer, you’ll play a key role in the ongoing development and enhancement of a newly implemented Retail Data Platform, built on Databricks within the AWS cloud environment. You’ll collaborate closely with an established team to deliver high-quality data engineering solutions that drive integration, reporting, and analytics outcomes.


You’ll be involved in the full development lifecycle, provide technical leadership, and contribute to building automation frameworks, metadata-driven pipelines, and efficient CI/CD processes. The role also offers a chance to work with the latest Databricks features and integrate with a wide range of enterprise applications.


Key Responsibilities

  • Lead the development and enhancement of the Retail Data Platform.
  • Collaborate with stakeholders to gather and refine requirements.
  • Build metadata-driven pipelines using Python/PySpark within Databricks.
  • Configure and maintain Databricks resources such as clusters, catalogues, schemas, and security models.
  • Conduct performance tuning and optimisation for Spark SQL and data warehouse workloads.
  • Develop CI/CD pipelines for automated deployments.
  • Participate in Agile ceremonies, sprint planning, and peer code reviews.
  • Provide guidance to platform users on best practices and effective platform utilisation.
  • Prepare and maintain high-quality technical documentation.


About You

We’re looking for someone with:

  • At least 8 years in data-related roles, with 5 years of hands-on Databricks experience, including within the past 2 years.
  • Strong skills in Python and PySpark programming.
  • Expertise in metadata-driven development and automation frameworks.
  • Solid experience in Spark SQL performance tuning.
  • Hands-on experience working with APIs for data ingestion and sharing.
  • Experience in developing data quality frameworks and implementing CI/CD for deployments.
  • Proven ability to work in Agile delivery environments.
  • Knowledge of Databricks Unity Catalog, Masking Functions, Bundles, CLI, and Auto-Loader features.


How to Apply:

Please submit your resume via the Apply button.

To find out more, please email Abbie at [email protected], or for a confidential chat, call 0413 444 157.
