- 12-month engagement with possible extensions
- Must be based in Canberra
- Ability to obtain a security clearance
- Candidate must be an Australian citizen
The Role
As a Data Engineer, you will be required to:
- Identify, transform and implement data sources using data processing concepts and methods;
- Contribute to the selection and development of engineering methods, tools and techniques for designing on-premises and hybrid data engineering solutions;
- Design, develop and implement solutions for advanced data structures and data storage for use in analytics, machine learning, data mining and sharing with applications;
- Initiate the design, development and implementation of structured and unstructured data pipelines, ensuring data is fit for purpose;
- Plan and contribute to the integration, consolidation, curation, migration, and transformation of data, aligning to relevant guidelines;
- Collaborate with other data professionals and business stakeholders to determine the required data sets and analysis tools, identify process improvements and apply data quality frameworks;
- Identify and apply appropriate design patterns and practices;
- Work with vendors, engineers, data scientists and UX designers to integrate algorithm implementations with data pipelines and production systems;
- Identify opportunities to digitise and streamline operational data handling and optimise business intelligence capabilities;
- Plan, establish and manage processes for regular and consistent access to external information from multiple sources and for independent validation of that information;
- Perform additional duties or assume responsibility for functions as directed by the supervisor from time to time;
- Participate in performance discussions as required by the supervisor.
Essential criteria
- Enterprise-class experience creating, updating and maintaining data pipelines in PySpark/SQL, including debugging and updating existing pipelines. This includes building pipeline automation, linked services and other supporting processes in Azure Data Factory as required.
- Experience in any or all of the following:
  - Using Azure resources such as key vaults, contributor groups, storage containers and Access Control Lists.
  - Using Databricks: coding in PySpark and SQL, integration with ADF and storage containers, SQL endpoint usage with BI applications, DBFS usage, and machine learning capabilities.
  - Generating supporting documentation for data pipelines and other relevant processes.
  - Administering Qlik on-premises and/or AWS servers, including software upgrades, maintenance of PostgreSQL databases, and monitoring server performance and taking corrective action as necessary.
  - Azure data engineering and related features, with an interest in learning and implementing new features available in the Azure platform over time as required.
- Enterprise-class experience in CI/CD, including deployment via Azure DevOps and use of Git repositories, branching, merging and conflict resolution, as required.
- In-depth understanding of data analysis and data science needs, such as data types, data cleansing, delta loads and Azure Cognitive Services, as required.
- Ability to:
  - Work collaboratively in a team environment and contribute to continuous improvement of processes by understanding existing processes and providing innovative solutions as required.
  - Work with business areas and architecture teams to define the technical design and approach for new projects and capabilities.
  - Assist with preparation of documentation to support Authority to Operate and other cyber-security requirements.
How to apply:
Please hit the apply button or, for more information, contact Anne from Randstad Digital on 02 6243 6404.
At Randstad Digital, we are passionate about providing equal employment opportunities and embracing diversity to the benefit of all. We actively encourage applications from any background.
...