We are building a cutting-edge Data Science team in our bank, focusing on integrating Data Science/Machine Learning technologies into the Bank's services.
Our mission is to revolutionize the banking experience by leveraging Data Science/Machine Learning for enhanced security, personalized customer interactions, and streamlined operations.
We utilize a range of modern technologies and methodologies to ensure our systems are robust, scalable, and secure.
We follow DevOps principles to maintain a seamless integration and deployment process, ensuring continuous integration and continuous delivery (CI/CD).
Our development practices are based on Agile methodology, specifically Scrum, to foster a collaborative and iterative approach to problem-solving.
As part of this innovative project, you will be at the forefront of integrating Data Science/Machine Learning with financial services, addressing challenges in information security, fraud detection, customer service automation, and more.
This role offers the opportunity to work with a diverse range of technologies and contribute to a transformative initiative within the banking sector.
Required skills:
- 3+ years of experience as a DevOps engineer.
- Strong experience with Ansible and Terraform.
- 1+ years of experience with AWS (including maintenance of RDS).
- 1+ years of experience with Docker and Kubernetes.
- 1+ years of experience with RabbitMQ, PostgreSQL, MongoDB, ELK, Redis, and nginx.
- Strong knowledge of Linux (especially CentOS), the OSI model, load balancing, clustering, virtualization, and VMware.
- Experience building CI/CD processes (Git and Jenkins).
- Higher education in a relevant field.
- Enthusiasm for exploring and implementing new technologies.
- Self-learning capabilities and a proactive attitude.
- Good communication and collaboration skills.
- Experience with AI/ML model deployment and monitoring.
- Knowledge of AI frameworks such as TensorFlow, PyTorch, or similar.
- Familiarity with data pipelines and tools like Apache Kafka and Spark.
- Understanding of MLOps practices and tools.
Desired skills:
- Experience with Vault, Consul, Keycloak, Apiman, and Prometheus.
- Experience with MLOps tools: Feast, DVC, MLflow.
Scope of work:
- Creating and maintaining a high-load platform.
- Implementing, configuring, and deploying components using Ansible.
- Working closely with the development team to deploy new platform components and improve the existing system.
- Responding reliably to on-call issues.
- Managing non-production integration infrastructure and providing it as a service.
- Driving the automation of infrastructure and deployment systems, improving and shortening processes so that engineering and operations teams can work smarter, faster, and with high quality.
- Setting up and maintaining a shared DEV environment for the Data Science team.
- Developing and setting up CI/CD pipelines for machine learning models.
- Participating in the operationalization and integration of machine learning models into the Bank's systems.
Why ПУМБ?
- Remote work/Comfortable work environment (your choice);
- Continuous development of professional competencies and professional growth opportunities;
- Annual paid vacations;
- Medical insurance;
- A friendly team of young and talented professionals.