We invite you to join our team as a Senior/Middle DevOps Engineer.
Responsibilities:
- design, deploy, and support on-prem infrastructure for a 100+ TB data lake / DWH platform;
- build and administer clusters (Kubernetes or similar container orchestration, as well as standalone servers);
- configure horizontal and vertical scaling, high availability, and load balancing;
- automate infrastructure with IaC (Terraform, Ansible, Helm, etc.);
- create and support CI/CD pipelines (ETL, analytical services, APIs);
- monitor performance, collect metrics, and plan capacity for clusters holding large volumes of data (see the sketch after this list);
- secure the infrastructure: network policies, secrets, TLS, access control;
- provide operational support for the platform: updates, backups, disaster recovery;
- prepare documentation, runbooks, and automated deployment procedures, and handle solution-support tasks.
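To give a concrete flavor of the monitoring and capacity-planning work above, here is a minimal sketch in Python; the Prometheus endpoint, metric expression, and 80% threshold are illustrative assumptions, not details of our platform.

```python
# A minimal capacity-check sketch, assuming a Prometheus server that
# scrapes node exporters. The URL, mount point, and threshold are
# illustrative assumptions, not details of the actual platform.
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint
USAGE_QUERY = (
    "1 - node_filesystem_avail_bytes{mountpoint='/data'}"
    " / node_filesystem_size_bytes{mountpoint='/data'}"
)
ALERT_THRESHOLD = 0.80  # flag nodes using more than 80% of /data

def nodes_over_threshold() -> list[tuple[str, float]]:
    # Prometheus instant-query API: GET /api/v1/query?query=<expr>
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": USAGE_QUERY})
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    # Each result carries labels plus a [timestamp, value] sample.
    return [
        (r["metric"].get("instance", "unknown"), float(r["value"][1]))
        for r in results
        if float(r["value"][1]) > ALERT_THRESHOLD
    ]

if __name__ == "__main__":
    for instance, usage in nodes_over_threshold():
        print(f"{instance}: {usage:.0%} of /data used, plan expansion")
```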
Mandatory requirements:
- 3+ years of experience as a DevOps / SRE / Platform Engineer;
- experience with on-prem infrastructure: networks, virtualization, hypervisor tuning;
- knowledge of Linux (network, security, performance tuning);
- experience working with Kubernetes or other container orchestrators in production environments;
- experience with Infrastructure as Code (Terraform, Ansible, Helm);
- practical CI/CD setup (GitLab CI / GitHub Actions / Jenkins);
- understanding of the principles of building clusters and distributed systems (replication, HA, sharding, rebalancing);
- experience with network infrastructure;
- ability to write automation scripts (see the sample after this list);
- experience with Git and Git-flow.
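To illustrate the scripting requirement above, a small sketch of a routine operational check, assuming kubectl is installed and configured against the target cluster:

```python
# An everyday operational script of the kind this role expects: parse
# `kubectl get nodes -o json` and report nodes that are not Ready.
# Assumes kubectl is on PATH and pointed at the target cluster.
import json
import subprocess

def not_ready_nodes() -> list[str]:
    out = subprocess.run(
        ["kubectl", "get", "nodes", "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    bad = []
    for node in json.loads(out)["items"]:
        name = node["metadata"]["name"]
        # The "Ready" condition reports status "True" on a healthy node.
        ready = next(
            (c["status"] for c in node["status"]["conditions"] if c["type"] == "Ready"),
            "Unknown",
        )
        if ready != "True":
            bad.append(name)
    return bad

if __name__ == "__main__":
    for name in not_ready_nodes():
        print(f"node {name} is not Ready")
```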
Desirable (will be a plus):
- experience with object storage (MinIO, S3-compatible systems) or HDFS (see the sketch after this list);
- experience with columnar and analytical data stores;
- knowledge of data processing stacks: Spark, Airflow, Kafka, dbt;
- development or maintenance of large clusters with 10-100+ TB of data;
- experience building backup and disaster recovery strategies.
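For the object-storage item above, a hedged sketch of verifying daily backups in an S3-compatible (MinIO) bucket with boto3; the endpoint, credentials, bucket name, and key layout are hypothetical:

```python
# Hedged sketch: confirm that yesterday's backup landed in an
# S3-compatible (MinIO) bucket. The endpoint, credentials, bucket,
# and key layout are assumptions for the example, not a description
# of the actual platform.
import datetime
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.example.internal:9000",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",        # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

def backup_exists(bucket: str = "dwh-backups") -> bool:
    # Assume daily backups are written under backups/YYYY-MM-DD/ prefixes.
    yesterday = (datetime.date.today() - datetime.timedelta(days=1)).isoformat()
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=f"backups/{yesterday}/", MaxKeys=1)
    return resp.get("KeyCount", 0) > 0

if __name__ == "__main__":
    print("backup present" if backup_exists() else "backup MISSING")
```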
Personal qualities:
- orientation towards automation and the desire to minimize manual operations;
- attention to detail; ability to work with large volumes of data and complex systems;
- ability to work within the constraints of on-prem infrastructure;
- ability to prioritize critical tasks under pressure;
- responsibility, a structured approach, and transparent communication;
- willingness to document solutions and share knowledge with the team;
- initiative in improving processes and platform stability.
The company offers:
- remote or hybrid work format;
- employment under a gig contract or as a staff employee (with the possibility of reservation);
- paid annual leave of 24 calendar days, paid sick leave;
- timely salary payments in full and without delays, with regular salary reviews;
- the possibility of professional and career growth;
- training courses.
Contact person: Kateryna, tel. 0984567857 (t.me/KaterynaB_HR)