IBM’s greatest invention is the IBMer. We believe that through the application of intelligence, reason and science, we can improve business, society and the human condition, bringing the power of an open hybrid cloud and AI strategy to life for our clients and partners around the world.
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
As a Data Engineer, you will develop, maintain, evaluate, and test data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines and workflows and implementing solutions that address the client’s needs.
Your Primary Responsibilities Include
- Design, build, optimize and support new and existing data models and ETL processes based on our client’s business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Coordinate data access and security so that data scientists and analysts can easily access data whenever they need it.
Required Technical And Professional Expertise
- Strong proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation.
- Experience with Apache Spark & PySpark.
- Experience with Databricks on AWS/Azure services.
- Experience building complex distributed batch and real-time systems.
- Hands-on experience with DevOps and Infrastructure as Code (IaC).
The monthly salary for this position ranges from 3,000 EUR gross to 5,000 EUR gross.
The final offer will depend on qualifications, professional experience and competencies.