Changing the future of payments takes strong personalities
Do you see the power in data, and do you have plenty of hands-on experience analysing data and abstracting insights from it? Have you worked with data technologies? Are you curious and innovative, and do you take pleasure in picking up new data tools and methods? Do you want to be part of running Machine Learning services in production?

Skills, ambition and that little personal twist will make you succeed
We are looking for a Data Scientist / Machine Learning Data Engineer for an exciting position in the Data Analytics and Services department in the Risk Management Service Area at Nets, working in one of these countries: Denmark, Norway, Finland, Germany or Croatia.
Based on a self-developed XGBoost model and continuously updated rules, we detect and prevent payment card fraud for our customers. Payment fraud is a constant threat to consumers, merchants and banks, and is therefore an area in which we continuously need to improve our mitigating actions. Beyond our existing fraud prevention systems, we have also built a Big Data environment around Hadoop, which further strengthens our position in the market. Based on our systems and our understanding of data patterns, we deliver fraud-mitigation services and produce internal and external reports on a frequent basis.
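To give a flavour of the modelling approach, here is a minimal, purely illustrative sketch of a gradient-boosted fraud classifier. It uses scikit-learn's GradientBoostingClassifier as a stand-in for the XGBoost model mentioned above, and the features and labels are synthetic; none of this reflects Nets' actual model or data.

```python
# Illustrative sketch only: a gradient-boosted fraud classifier in the
# spirit of the XGBoost model described above. GradientBoostingClassifier
# stands in for XGBoost; features and labels are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic transaction features (e.g. amount, hour of day, merchant risk).
X = rng.random((1000, 3))
y = (rng.random(1000) < 0.05).astype(int)  # ~5% positive (fraud) labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier(n_estimators=50, max_depth=3)
model.fit(X_train, y_train)

# Fraud probabilities in [0, 1]; a rule layer can then threshold
# or override these scores before a transaction is blocked.
scores = model.predict_proba(X_test)[:, 1]
```

In production, such model scores are typically combined with the continuously updated rules mentioned above, so that rules can catch known fraud patterns the model has not yet learned.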
Areas of responsibility:
- Maintain and enhance performance and scalability of our pan-European ML (Machine Learning) services
- Deploy ML models in production and be part of building the real-time monitoring for ML models
- Implement effective methods for monitoring, evaluation and analytics for ML
- Collaborate with domain experts to discover new ML features that strengthen model performance
- Continuously optimize our data pipelines
- Continuously develop backend services for feature engineering, training and predictions (we use Python and Flask)
- Research potential new ML products in innovation sprints
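As a rough illustration of the backend work described above, here is a minimal Flask prediction endpoint. The route name, payload shape and scoring function are hypothetical assumptions for the sketch, not Nets' actual API.

```python
# Hypothetical sketch of a Flask prediction service; route name, payload
# shape and scoring logic are illustrative assumptions only.
from flask import Flask, jsonify, request

app = Flask(__name__)

def score_transaction(features):
    """Placeholder for the trained model's scoring call,
    e.g. model.predict_proba([features])[0, 1]."""
    return min(1.0, sum(features) / (len(features) or 1))

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = payload.get("features", [])
    return jsonify({"fraud_score": score_transaction(features)})

# app.run(port=5000)  # start the development server locally
```

A request such as `POST /predict` with body `{"features": [0.2, 0.3]}` would return a JSON object containing a `fraud_score` between 0 and 1.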
Qualifications:
- Minimum a Master’s degree in Computer Science, Mathematics, Machine Learning, Physics, Statistics or an equivalent technical discipline
- Experience with ETL tools and methods, data blending, data cleaning, data transformation, and data analysis
- Familiar with data science methods and tools (e.g., clustering, classification, optimization, data mining, predictive modelling, and machine learning)
- Proficiency in a scripting or programming language (e.g. Python, Scala, C++)
- Experience in handling large data sets using SQL/Hive
- Experience with engineering best practices (code quality, repo hygiene, code reviews, unit testing, design documentation, deployment)
- Proficient skills in English both orally and in writing
Experience in any of the following areas would be a plus:
- Experience with big data tools like Apache Hadoop, Apache Spark
- Experience building and optimizing ‘big data’ pipeline architectures and data sets
- Fair knowledge of the card issuing and acquiring domain
- Experience with agile ways of working
We emphasise solid business acumen, good interpersonal skills and the ability to communicate and interact effectively with colleagues as well as internal and external stakeholders. You are not afraid of challenging the status quo; you can reach independent conclusions while taking the needs of an integrated team into consideration, and you are able to guide, negotiate, influence and decide in complex, ambiguous and contradictory situations.

Apply now to power your career!