About Us:
GE is the world's Digital Industrial Company, transforming industry with software-defined machines and solutions that are connected, responsive and predictive.
Through our people, leadership development, services, technology and scale, GE delivers better outcomes for global customers by speaking the language of industry.
Role Summary:
You will be a member of an integrated team of data engineers, software engineers, data scientists and a product owner, delivering successful outcomes that drive efficiency and create new revenue streams.
Essential Responsibilities:
Qualifications / Requirements:
Experience with relational databases (preferably Oracle, Postgres, Greenplum)
Significant experience writing complex SQL queries; strong PL/SQL skills
Experience with at least one programming language (preferably Scala, Java or Python)
Experience in Unix/Linux environments
Good English skills (written and spoken)
Positive attitude and team player
Applications from job seekers who require sponsorship to work in the UK are welcome and will be considered alongside all other applications.
However, non-EU/EEA candidates may not be appointed to a post if a suitably qualified, experienced and skilled EU/EEA candidate is available to take up the post, as the employing body is unlikely, in these circumstances, to satisfy the Resident Labour Market Test.
For further information, please visit the UK Border Agency website:
http://www.ukba.homeoffice.gov.uk/visas-immigration/working
Desired Characteristics:
Experience with a variety of languages and tools (e.g. scripting languages) to integrate systems
Integration of libraries into existing algorithm and Spark jobs, including JUnit and Spark unit-testing frameworks in Spark code
Good knowledge of Big Data querying tools, such as Pig, Hive, Impala and Phoenix
Programming skills in MapReduce and Spark; heavy-lifting programming skills will be required for custom or specialized implementations and use cases based on algorithms and machine-learning models built or configured by the data science / algorithm teams
Development of MapReduce and Spark jobs/pipelines on a distributed Hadoop environment in Python and Java (PySpark and Java Spark jobs)
Locations: Cramlington, United Kingdom; Stutensee, Germany