Data Engineer - Data Services (Remote - Europe) at GameAnalytics

About GameAnalytics

From indie developers and games studios to established publishers, GameAnalytics is currently the #1 analytics tool for anyone building a mobile game. Our network is approaching 100,000 games, played by more than 1 billion people each month.

What's our mission? To help game developers make the right decisions based on data. And by joining our team, you'll be working on new and innovative products to help tens of thousands of people in the industry do just that.

About the Data Services team

We're a highly collaborative and creative team developing the different products in Data Services. As a team, we enjoy learning and solving problems. We are flexible in evolving our product and can scale quickly when required.

We are currently looking for a skilled Data Engineer as our third member to design and develop data structures to support the team's needs and act as a bridge between the Data Analyst and Backend teams. The mission of Data Services is to turn complex data into usable data to impact the bottom line.

This is an excellent opportunity for data engineers interested in Data Visualization and Data as a Service in an exciting industry that is quickly evolving.

We strive to deliver game data in a format that allows customers to analyze it. We offer four flavours of delivery:

  • Pushing raw data to the customer's own systems
  • Loading data into a data warehouse
  • Delivering dashboards
  • Delivering key metrics

As the most prominent games analytics firm globally, we collect ~100,000 events per second, resulting in ~700 GB of new (compressed) data per day. This role requires your Big Data skills to make analysis, product adoption, and flow processes simpler and more efficient.

Responsibilities

  • Design and develop data pipelines, data ingestion, and ETL processes that are scalable, repeatable, and secure.
  • Define company data assets (data models) and the PySpark jobs that populate them.
  • Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • Collaborate closely with the Backend team throughout the product life-cycle, from idea generation and prototyping to writing production code and providing product support.
  • Take care of software provisioning, configuration management, and application deployment.

Requirements

Essential

  • At least two years of experience with Python, SQL, and Data visualization/exploration tools
  • Understanding of algorithms, data structures, and data modelling
  • Highly skilled - including practical use - in Big Data technologies (Apache Hadoop, Pig, Spark, PySpark)
  • Quick learner with an eagerness to work with new technologies
  • Ability to communicate with a wide range of people
  • Ability to transfer knowledge to other team members

Desirable

  • Experience with orchestrators such as Airflow, Dagster
  • Master's degree in Computer Science, Math, Statistics
  • Experience with production environments of high complexity and high volume
  • Professional certifications
  • Good understanding of practical Machine Learning capabilities (Spark MLlib, Amazon SageMaker or Google Cloud AI, etc.)
  • Exposure to Druid
  • Passionate about the meaning of data
  • Experience with AWS systems

What we offer

  • Remote working flexibility – or part-time remote
  • (When in office) Food, snacks and drink
  • Entertainment Area
  • 25 days paid holiday (excluding bank holidays)
  • Company sickness leave
  • Parental and guardian leave
  • Additional compassionate leave
  • "Work-from-Anywhere" Scheme
  • Learning budgets
  • Monthly social nights
  • Expensed phone bill
  • Cycle to work scheme

Please note that you will be hired under a PEO arrangement for remote roles located outside of the UK and Denmark. This is to ensure that our benefits are not in violation of local employment laws.
