
Data Engineer - Big Data

  • Location: London
  • Salary: £350 - £480 per day
  • Job type: Temporary / Contract
  • Ref: JTDE0001
  • Recruiter: James Thompson

Data Engineer

A chance to join a global energy provider at their South West London headquarters as a Data Engineer. The role is an initial nine-month contract with the potential to go permanent afterwards.
We are looking for a Data Engineer to work with our Big Data platform team. The role is responsible for supporting all aspects of data engineering delivery: scoping, design, development and test support, solution acceptance, production implementation and support. Sitting within the platform team and working closely with use-case delivery teams, it combines traditional SDLC execution with hands-on data engineering, liaising with business stakeholders and other IT teams and supporting the platform DevOps team. Responsibilities include:

* Develop data engineering pipelines and apply best practice in big data engineering (see the pipeline sketch after this list).
* Maintain and enhance the data engineering SDLC and framework.
* Produce design artefacts describing the pipelines, build and deployment processes.
* Support, maintain and enhance the build-and-deploy framework for the SQL SDLC.
* Support development and test activity of use-case delivery teams.
* Support performance testing and tuning exercises.
* Support, maintain and enhance reference implementations for key design patterns.
* Document evidence demonstrating that acceptance criteria have been met.
* Participate in the daily scrum to update progress on tasks.
* Participate in sprint planning to decompose stories/PBIs into tasks and provide estimates.
* Support end-to-end testing with external systems and components.
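
As a rough illustration of the pipeline work above, here is a minimal PySpark sketch of the kind of ingest-clean-write job involved. It assumes PySpark; the bucket paths, table and column names are hypothetical, not specifics of this role.

    # Minimal PySpark pipeline sketch; paths and column names are
    # hypothetical illustrations, not specifics of this role.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("meter-readings-pipeline").getOrCreate()

    # Ingest raw CSV, apply simple quality rules, write partitioned Parquet.
    raw = (spark.read
           .option("header", "true")
           .option("inferSchema", "true")
           .csv("s3://example-bucket/raw/meter_readings/"))

    clean = (raw
             .dropDuplicates(["meter_id", "reading_ts"])
             .filter(F.col("reading_kwh") >= 0)
             .withColumn("reading_date", F.to_date("reading_ts")))

    (clean.write
     .mode("overwrite")
     .partitionBy("reading_date")
     .parquet("s3://example-bucket/curated/meter_readings/"))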

Essential
* Experience in a big data engineering environment, delivering data pipelines using Python, Scala or Spark SQL on Spark.
* Experience of support and diagnosis in a data pipeline environment on AWS, Cloudera, Hortonworks or Microsoft Hadoop implementations (such as EMR).
* In-depth understanding of MapReduce and Spark patterns for delivering data pipelines, including evidence of best-practice implementation (see the broadcast-join sketch after this list).
* Strong data analysis skills, including entity-relationship diagrams and object-graph mapping.
* Strong SQL skills in Oracle and SQL Server for data analysis.
* Strong data visualisation experience in Power BI and Tableau.
* Strong Python development skills, or analysis experience in Jupyter with R or Python.
* Exposure to Scrum/Agile delivery methodologies.
* Experience of continuous integration/deployment and development techniques.
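
By way of illustration of the Spark patterns mentioned above, here is a minimal sketch of one common pattern: a broadcast join, which ships a small dimension table to every executor rather than shuffling the large fact table. The table and column names are hypothetical.

    # Broadcast-join pattern sketch; table and column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("pattern-demo").getOrCreate()

    readings = spark.table("curated.meter_readings")   # large fact table
    meters = spark.table("reference.meters")           # small dimension table

    # Broadcasting the small table avoids a shuffle of the large one.
    daily_by_region = (readings
                       .join(F.broadcast(meters), "meter_id")
                       .groupBy("region", "reading_date")
                       .agg(F.sum("reading_kwh").alias("total_kwh")))

    daily_by_region.show()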

Desirable
* Any experience of the Palantir Foundry platform.
* Solid grounding in Hadoop big data technologies: HDFS, YARN, Sqoop, Hive, Impala, Spark and general cluster management.
* Exposure to streaming data applications such as Kafka or Kinesis (see the streaming sketch after this list).
* Data migration delivery experience.
* Exposure to Python development and BDD frameworks.
* Demonstrable experience of requirements elicitation and documentation, carried through to design and implementation.
* Excel and Access as data analysis, cataloguing and specification tools.
* Familiarity with ETL concepts and solutions, particularly for big data engineering.
* Familiarity with scheduling and orchestration tools.
* Certified Scrum Master qualification.
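
As a rough sketch of the streaming exposure mentioned above, here is a minimal Spark Structured Streaming job reading from Kafka. The broker address and topic name are hypothetical, and the job assumes the Spark Kafka connector package is available.

    # Minimal Structured Streaming read from Kafka; the broker address and
    # topic are hypothetical placeholders. Requires the spark-sql-kafka
    # connector package on the classpath.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream-demo").getOrCreate()

    stream = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "meter-readings")
              .load())

    # Decode the Kafka value bytes to text and write to the console sink.
    query = (stream.selectExpr("CAST(value AS STRING) AS value")
             .writeStream
             .format("console")
             .start())

    query.awaitTermination()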
