At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
As a Data Engineer, you will design, build, and maintain large-scale data systems, focusing on data warehousing, ETL, and data modeling. You will work closely with cross-functional teams to develop and optimize data pipelines, ensuring data quality and integrity. Your responsibilities will include building and maintaining data architectures, collaborating with stakeholders to identify data requirements, and developing data models using Data Vault 2.0 in DBT on Snowflake.
- Proficiency in the Python programming language
- Experience with DBT for data transformation and modeling
- Strong understanding of SQL and PL/SQL
- Experience with Snowflake or similar cloud-based data warehousing platforms
- Knowledge of Data Vault 2.0 data modeling principles
- Familiarity with Git version control system
- Experience with Azure cloud services
- Knowledge of data engineering best practices and design patterns
- Familiarity with data quality and data governance principles
- Experience with data pipeline automation and optimization
- Understanding of cloud-based data security and access control principles
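For candidates wondering what the Data Vault 2.0 modeling mentioned above looks like in practice: a hub entity is typically identified by a hash key derived from its normalized business key(s). A minimal Python sketch of that idea follows; the `||` delimiter and the trim/upper-case normalization are common conventions, not a prescribed IBM standard, and real projects usually generate these keys inside DBT macros rather than in application code:

```python
import hashlib

def hub_hash_key(*business_keys: str, delimiter: str = "||") -> str:
    """Data Vault 2.0-style hash key: trim and upper-case each
    business key, join with a delimiter, then hash the result.
    (Normalization rules and delimiter vary by team convention.)"""
    normalized = delimiter.join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# Keys differing only in case or whitespace resolve to the same hub record.
assert hub_hash_key("cust-001 ") == hub_hash_key("CUST-001")
```

Deterministic hash keys like this let hubs, links, and satellites be loaded independently and in parallel, which is one reason the pattern suits cloud warehouses such as Snowflake.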