Reply.io
16 August 2024
Remote
We are a product company working in the global market. We are a team of 80+ amazing Ukrainians who have worked 100% remotely from day one, and 4,000+ customers all over the world use our platform every day. Now we are looking for a great Python Data Engineer to join our team :)
As a Python Data Engineer transitioning to DataOps, you’ll start by focusing on Python scripting and developing your data engineering skills. Your responsibilities will include automating tasks, developing and maintaining pipeline jobs in Databricks, and troubleshooting infrastructure and data processing issues. You’ll also manage and optimize AI and data infrastructure, support analytics workflows, and ensure seamless data operations. As you collaborate with other team members, you’ll monitor system performance, ensure data integrity and security, and participate in the deployment of new features and updates. This role is designed for someone eager to grow into a DataOps Engineer, with a passion for continuous learning and improvement in data engineering and automation processes.
Requirements
• 1+ years of experience in data engineering, data analysis, or a related field
• Intermediate-level proficiency in core Python
• Technical Skills (nice to have):
— Experience with GitHub and Git for version control
— Familiarity with data engineering tools and libraries, such as Pandas and PySpark
— Basic understanding of databases, including querying and data management
— Some exposure to Apache Airflow for workflow management
— Willingness to learn and use Databricks for managing data pipelines and jobs
— Basic knowledge of Elasticsearch and Kibana for data visualization and search
• Developing strong troubleshooting skills to diagnose and resolve infrastructure and data-related issues
• Understanding of maintaining existing infrastructure and internal analytics systems
• Ability to work effectively both independently and within a collaborative team environment
• Strong communication skills, both written and verbal, to convey technical information clearly
• Proactive, detail-oriented, and eager to learn and grow within the role
• Passion for continuous learning and keeping up-to-date with the latest developments in data engineering and DataOps.
Responsibilities
• Writing scripts in Python to automate various tasks and processes
• Developing and maintaining pipeline jobs in Databricks
• Troubleshooting and fixing issues related to infrastructure and data processing
• Maintaining data and sales-AI infrastructure and processes
• Maintaining existing infrastructure to ensure smooth operations
• Managing internal analytics workflows orchestrated by Apache Airflow
• Supporting and maintaining jobs in Databricks to ensure efficient data processing
• Collaborating with other team members to optimize data workflows
• Monitoring system performance and implementing necessary improvements
• Ensuring data integrity and security across all systems and platforms
• Documenting processes and maintaining comprehensive records of changes and updates
• Participating in code reviews and providing constructive feedback
• Assisting with the deployment of new features and updates
• Continuously improving automation scripts and processes for better efficiency and reliability
• Staying updated with the latest developments in DataOps and data engineering fields
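To give a flavor of the day-to-day Python automation this role involves, here is a minimal, hypothetical sketch of a data-integrity check of the kind mentioned above; the helper and field names are illustrative only, not part of the company's actual codebase:

```python
import csv
import io

def count_valid_rows(csv_text, required_fields):
    """Count rows where every required field is non-empty.

    Hypothetical helper for a data-integrity check; in practice a
    similar validation might run as an Airflow task or Databricks job.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(
        1 for row in reader
        if all(row.get(field, "").strip() for field in required_fields)
    )

# Example: the second row is missing an email, so only 2 of 3 rows pass.
sample = "id,email\n1,a@example.com\n2,\n3,c@example.com\n"
print(count_valid_rows(sample, ["id", "email"]))  # → 2
```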
We offer
• Working on a leading product in the US and global markets
• Real opportunity for professional growth and self-improvement
• Work from home or co-working space (we cover the co-working costs)
• The latest equipment provided so you can work comfortably
• Access to internal business literature, training, and seminars
• Unlimited sick leave
• Awesome colleagues and a pleasant working atmosphere