A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe.
You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.
Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions that result in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
Who you are:
You are a Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments. You have expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks.
You are proficient in Apache Spark, PySpark, Kafka, and Apache Iceberg tables, and you understand how to design and implement scalable, high-performance data processing solutions.
What you'll do:
As a Data Engineer – Data Platform Services, your responsibilities include:
Data Ingestion & Processing
• Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake.
• Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark); a minimal ingestion sketch follows this list.
• Working with IBM CDC and Universal Data Mover to manage data replication and movement.
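A minimal sketch of the kind of streaming ingestion this involves, assuming the spark-sql-kafka connector is available on the cluster; the broker address, topic name, schema, and lake paths below are illustrative placeholders, not project specifics:

```python
# Illustrative Kafka-to-data-lake ingestion with Spark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-ingestion-sketch").getOrCreate()

# Placeholder event schema for the incoming JSON payloads.
schema = StructType([
    StructField("account_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
       .option("subscribe", "transactions")                # placeholder topic
       .option("startingOffsets", "latest")
       .load())

# Parse the Kafka value bytes into typed columns.
events = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "/data/lake/raw/transactions")            # placeholder lake path
         .option("checkpointLocation", "/data/lake/_chk/transactions")
         .outputMode("append")
         .start())
```

Batch ingestion follows the same shape with spark.read in place of readStream, landing files into the same lake zone.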
Big Data & Data Lakehouse Management
• Implementing Apache Iceberg tables for efficient data storage and retrieval (see the sketch after this list).
• Managing distributed data processing with Cloudera Data Platform (CDP).
• Ensuring data lineage, cataloging, and governance for compliance with banking and regulatory policies.
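The Iceberg work typically looks like the following sketch, assuming an Iceberg-enabled Spark session on CDP; the catalog name ("lake"), the table schema, and the staged_transactions view are hypothetical placeholders:

```python
from pyspark.sql import SparkSession

# Assumes an Iceberg catalog named "lake" is configured for this session
# (e.g. via spark.sql.catalog.lake settings in spark-defaults.conf).
spark = SparkSession.builder.appName("iceberg-sketch").getOrCreate()

# Create a partitioned Iceberg table; the daily partition transform keeps
# time-range scans cheap.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.finance.transactions (
        account_id STRING,
        event_type STRING,
        amount     DECIMAL(18, 2),
        event_ts   TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(event_ts))
""")

# MERGE keeps reloads idempotent; "staged_transactions" is a placeholder view
# holding the latest batch to fold into the table.
spark.sql("""
    MERGE INTO lake.finance.transactions t
    USING staged_transactions s
    ON t.account_id = s.account_id AND t.event_ts = s.event_ts
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```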
Optimization & Performance Tuning
• Optimizing Spark and PySpark jobs for performance and scalability.
• Implementing data partitioning, indexing, and caching to enhance query performance; a tuning sketch follows this list.
• Monitoring and troubleshooting pipeline failures and performance bottlenecks.
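A hedged sketch of common tuning steps in PySpark; the table names, partition count, and filter predicate are illustrative only and would be driven by actual data volumes and skew:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Prune rows and columns as early as possible so less data is shuffled.
txns = (spark.read.table("lake.finance.transactions")
        .where("event_ts >= date'2024-01-01'")
        .select("account_id", "amount", "event_ts"))

accounts = spark.read.table("lake.finance.accounts")

# Repartition on the join key to spread the shuffle evenly, and cache the
# joined result only because several downstream aggregations reuse it.
joined = (txns.repartition(200, "account_id")
          .join(accounts, "account_id", "left")
          .cache())

daily_totals = joined.groupBy("event_ts").sum("amount")
daily_totals.explain()  # inspect the physical plan for unexpected shuffles or full scans
```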
Security & Compliance
• Ensuring secure data access, encryption, and masking using Thales CipherTrust (an illustrative masking sketch follows this list).
• Implementing role-based access controls (RBAC) and data governance policies.
• Supporting metadata management and data quality initiatives.
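Thales CipherTrust encryption and masking are configured through its own policies rather than application code; purely as an illustration of column-level masking inside a pipeline, a PySpark stand-in using hashing might look like this (the account_id column and sample rows are hypothetical):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import sha2, col

spark = SparkSession.builder.appName("masking-sketch").getOrCreate()

# Hypothetical sample data standing in for a sensitive source table.
customers = spark.createDataFrame(
    [("C001", "Alice"), ("C002", "Bob")],
    ["account_id", "name"],
)

# Replace the raw identifier with a one-way hash before the data is exposed
# downstream; sha2 here stands in for proper tokenization/encryption.
masked = (customers
          .withColumn("account_id_masked", sha2(col("account_id"), 256))
          .drop("account_id"))
masked.show()
```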
Collaboration & Automation
• Working closely with Data Scientists, Analysts, and DevOps teams to integrate data solutions.
• Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus; a minimal DAG sketch follows this list.
• Supporting Denodo-based data virtualization for seamless data access.
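A minimal Airflow DAG sketch for this kind of orchestration; the DAG id, schedule, and task callables are placeholders, and in practice the tasks would trigger the CDC transfer and the Spark jobs described above:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_source():
    """Placeholder: trigger the CDC / Universal Data Mover transfer."""
    ...

def load_to_iceberg():
    """Placeholder: submit the PySpark job that writes the Iceberg table."""
    ...

with DAG(
    dag_id="iias_to_cdp_migration",          # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",              # placeholder schedule
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_source)
    load = PythonOperator(task_id="load", python_callable=load_to_iceberg)
    extract >> load                          # load runs only after extract succeeds
```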
Required technical and professional expertise:
• 4-7 years of experience in big data engineering, data integration, and distributed computing.
• Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
• Proficiency in Python or Scala for data processing.
• Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
• Understanding of data security, encryption, and compliance frameworks.
Preferred technical and professional expertise:
• Experience in banking or financial services data platforms.
• Exposure to Denodo for data virtualization and DGraph for graph-based insights.
• Familiarity with cloud data platforms (AWS, Azure, GCP).
• Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.