
Job Description

Role: Big Data Developer

Company: Analytics & IT Solutions Tech Giant 

Company Brief: Global Research & Analytics (GR&A) is the largest and top-ranked provider of high-end research and analytics services to the world's leading commercial and investment banks, insurance companies, corporations, consulting firms, private equity firms and asset management firms.

Mode: Permanent or Contract

Domain: Market Risk or Banking/Finance

Salary: Negotiable

Location: London (Hybrid, 3 Days Office & 2 Days Remote)

Job Description: As below

Qualification

Master's in Computer Science or Software Engineering; an additional qualification in Finance is desirable

Skills Required

  • Strong programming skills in one of the mainstream programming languages; Python preferred
  • Excellent hands-on experience with Impala and Hadoop (highly desirable), as well as Spark Core and Spark SQL using Scala
  • Strong knowledge of Big Data querying tools such as Hive, Impala, and Kudu
  • Proven understanding of YARN, Sqoop, and at least one orchestration tool like Oozie
  • Experience with large-scale distributed data analytics platforms and compute environments (Spark, Impala, etc.)
  • Experience with distributed file systems and storage technologies (HDFS, HBase, Hive)
  • Hands-on with Git, Jenkins, SQL, and Linux/Unix shell scripting
  • Ability to assist in evaluating new solutions for integration into the Hadoop roadmap/strategy
  • Experience in data modeling and ML pipelines in a Big Data landscape
  • Proven understanding of Control-M (an application and data workflow orchestration solution)
  • Understanding of visualization tools such as QlikView/Qlik Sense, and of CI/CD tooling (Jenkins, TeamCity, Git, Docker, etc.)

Highly Desirable

  • Knowledge of and exposure to Microsoft Azure
  • Good understanding of the software development lifecycle (SDLC) and agile methodologies

Job Duties

The client is looking for a talented and experienced Big Data developer with exposure to the Market Risk domain for one of our global investment banking clients.

  • Understanding and exploring the constantly evolving tools within the Hadoop ecosystem and applying them appropriately to the problems at hand
  • Implementing software components in Python, Java, and Scala
  • Consulting on various data warehousing engagements, handling large data volumes, and architecting Big Data environments
  • Designing, developing, testing, tuning, and building large-scale data processing systems for data ingestion and data products that allow the client to improve quality, velocity, and monetization
  • Architecting and crafting new features and improvements
  • Taking full ownership of, and delivery accountability for, assigned epics/stories/tasks


Compensation

  • Permanent: base salary + bonus + benefits
  • Contract: competitive per-day rates