Tasks
· Design, build, extend, and maintain data pipelines and unified integration ETL
· Scale data pipeline throughput using the appropriate tools
· Introduce and adopt new tools, technologies, and combinations of them that let the company scale data processing speed and performance in a cost-efficient way
Requirements
· Proven experience in building data processing pipelines
· Experience with workflow automation tools (Airflow, Prefect, Luigi, etc.); a minimal Airflow sketch follows this list
· RDBMS and column-oriented DBMS (writing queries, building database architecture, administration, etc.)
· Apache Spark
· Strong core Python knowledge
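
To give a flavor of the day-to-day pipeline work these requirements describe, here is a minimal sketch of an Airflow ETL DAG. The DAG name, task logic, and schedule are illustrative placeholders, not Competera's actual pipelines:

```python
# A minimal ETL DAG sketch; all names and logic here are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw records from a source system (placeholder data).
    return [{"sku": "A-1", "price": 9.99}]


def transform(**context):
    # Fetch the extract task's output via XCom and normalize it.
    rows = context["ti"].xcom_pull(task_ids="extract")
    return [{**row, "price_cents": int(row["price"] * 100)} for row in rows]


def load(**context):
    # Write the transformed rows to a warehouse (stubbed out here).
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"loading {len(rows)} rows")


with DAG(
    dag_id="example_etl",             # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```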
Nice to have:
· ClickHouse
· Apache Kafka
· CDC (e.g. WAL, Debezium, Kafka)
· ksqlDB
· Spark SQL (combined with Kafka in the sketch after this list)
· Argo Workflows
· GCP: BigQuery, Dataproc, Data Fusion, DataFlow
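
For the streaming side, here is a minimal PySpark sketch combining two of the extras above: reading change events from Apache Kafka and aggregating them with Spark SQL. The broker address, topic name, and payload handling are assumptions for illustration, and the snippet needs the spark-sql-kafka connector package on the classpath:

```python
# A minimal structured-streaming sketch; broker and topic names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdc-sketch").getOrCreate()

# Read raw change events from a hypothetical Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "product_price_changes")
    .load()
)

# Kafka delivers the payload as binary; cast it to a string for downstream parsing.
decoded = events.select(F.col("value").cast("string").alias("payload"))
decoded.createOrReplaceTempView("changes")

# Spark SQL over the stream: a running count of change events.
counts = spark.sql("SELECT count(*) AS n FROM changes")

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```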
Benefits
· Meaningful work in an agile team of engineers who turn business ideas into software solutions
· Freedom to choose the most suitable technologies to deliver the solution (even if they're not in the product yet)
· Want to learn? Competera loves that and is eager to cover 60% of your training/course fees
· Fair pay with regular performance-based reviews and a stock options plan for top performers
· Remote-first culture, even after the pandemic and the war
· Working hours that adapt to your biorhythm
· Paid vacation & sick leave (20 business days each) + 15 days off
· Partial medical insurance coverage