A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe.
You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.
Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions that result in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
As a Data Engineer specializing in enterprise data platforms, you are experienced in building, managing, and optimizing data pipelines for large-scale environments, with expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks.
You are proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, and you understand how to design and implement scalable, high-performance data processing solutions.
What you’ll do: As a Data Engineer – Data Platform Services, your responsibilities include:
Data Ingestion & Processing
• Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake.
• Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark), as sketched after this list.
• Working with IBM CDC and Universal Data Mover to manage data replication and movement.
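A minimal sketch of the kind of Kafka-to-lake ingestion pipeline this covers, assuming a PySpark environment with the spark-sql-kafka package available; the broker address, topic, schema, and paths below are illustrative assumptions, not project specifics:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# Assumed broker, topic, schema, and paths; requires the spark-sql-kafka package on the cluster.
spark = SparkSession.builder.appName("kafka_to_lake_ingest").getOrCreate()

event_schema = StructType([
    StructField("account_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")   # assumed broker address
       .option("subscribe", "transactions")                  # assumed topic name
       .load())

# Kafka delivers bytes; cast the value to string and parse the JSON payload.
events = (raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
             .select("e.*"))

# Land micro-batches in the lake as Parquet; an Iceberg sink could be used instead.
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/lake/transactions")          # assumed landing path
         .option("checkpointLocation", "/data/chk/transactions")
         .start())
```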
Big Data & Data Lakehouse Management
• Implementing Apache Iceberg tables for efficient data storage and retrieval (see the sketch after this list).
• Managing distributed data processing with Cloudera Data Platform (CDP).
• Ensuring data lineage, cataloging, and governance for compliance with Bank/regulatory policies.
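A minimal sketch of creating and loading a partitioned Apache Iceberg table from Spark, assuming an Iceberg catalog is registered on the session; the catalog name, database, table, warehouse path, and staged view are illustrative assumptions:

```python
from pyspark.sql import SparkSession

# Assumes the iceberg-spark-runtime jar is on the classpath; names and paths are illustrative.
spark = (SparkSession.builder
         .appName("iceberg_table_demo")
         .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
         .config("spark.sql.catalog.lake.type", "hadoop")
         .config("spark.sql.catalog.lake.warehouse", "/data/warehouse")
         .getOrCreate())

# Create a day-partitioned Iceberg table; Iceberg tracks snapshots, which supports time travel
# and helps with lineage and audit requirements.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.core.transactions (
        account_id STRING,
        amount     DECIMAL(18, 2),
        event_ts   TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(event_ts))
""")

# Append a previously staged DataFrame (registered here as a temp view) into the table.
spark.table("staging_transactions").writeTo("lake.core.transactions").append()
```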
Optimization & Performance Tuning
• Optimizing Spark and PySpark jobs for performance and scalability (illustrated after this list).
• Implementing data partitioning, indexing, and caching to enhance query performance.
• Monitoring and troubleshooting pipeline failures and performance bottlenecks.
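An illustrative tuning sketch showing repartitioning on a join key, caching a reused DataFrame, broadcasting a small dimension table, and writing output partitioned for pruning; the table names and columns are assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast, to_date

spark = SparkSession.builder.appName("tuning_demo").getOrCreate()

txns = spark.read.parquet("/data/lake/transactions")      # assumed large fact data
accounts = spark.read.parquet("/data/lake/accounts")      # assumed small dimension data

# Repartition on the join key to limit shuffle skew, and cache the DataFrame reused downstream.
txns = txns.repartition(200, "account_id").cache()

# Broadcast the small side so the join avoids a full shuffle.
enriched = (txns.join(broadcast(accounts), "account_id")
                .withColumn("event_date", to_date("event_ts")))

# Partition the output by date so later queries can prune partitions instead of scanning everything.
(enriched.write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("/data/lake/enriched_transactions"))
```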
Security & Compliance
• Ensuring secure data access, encryption, and masking using Thales CipherTrust (a simplified masking sketch follows this list).
• Implementing role-based access controls (RBAC) and data governance policies.
• Supporting metadata management and data quality initiatives.
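A simplified masking sketch using built-in Spark functions; it stands in for, and is not, the Thales CipherTrust integration itself, and the column names are assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, sha2, regexp_replace

spark = SparkSession.builder.appName("masking_demo").getOrCreate()

customers = spark.read.parquet("/data/lake/customers")    # assumed source dataset

masked = (customers
          # Replace the raw identifier with an irreversible hash.
          .withColumn("account_id_hash", sha2(col("account_id"), 256))
          # Mask all but the last four digits of the card number.
          .withColumn("card_number", regexp_replace("card_number", r"\d(?=\d{4})", "*"))
          .drop("account_id"))

# Expose only the masked view to analyst roles; raw columns remain restricted at the source.
masked.createOrReplaceTempView("customers_masked")
```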
Collaboration & Automation
• Working closely with Data Scientists, Analysts, and DevOps teams to integrate data solutions.
• Automating data workflows using Airflow (see the example after this list) and implementing CI/CD pipelines with GitLab and Sonatype Nexus.
• Supporting Denodo-based data virtualization for seamless data access.
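A minimal Airflow sketch (assuming Airflow 2.x) that schedules a nightly spark-submit of an ingestion job; the DAG id, schedule, and script path are illustrative assumptions:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical nightly ingestion DAG; in practice the job, schedule, and alerting would follow
# the platform's conventions and be promoted through the GitLab CI/CD pipeline.
with DAG(
    dag_id="nightly_ingest_to_lakehouse",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # run daily at 02:00
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="spark_submit_ingest",
        bash_command="spark-submit /opt/jobs/ingest_to_iceberg.py",  # assumed script path
    )
```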
• 6-10 years of experience in big data engineering, data processing, and distributed computing.
• Proficiency in Apache Spark, PySpark, Kafka, Iceberg, and Cloudera Data Platform (CDP).
• Strong programming skills in Python, Scala, and SQL.
• Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
• Knowledge of data security, encryption, and compliance frameworks.
• Experience working with metadata management and data quality solutions.
• Experience with data migration projects in the banking/financial sector.
• Knowledge of graph databases (DGraph Enterprise) and data virtualization (Denodo).
• Exposure to cloud-based data platforms (AWS, Azure, GCP).
• Familiarity with MLOps integration for AI-driven data processing.
• Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
• Experience providing architectural reviews and recommendations for migration/transformation solutions.
• Experience working with banking data models.
• Knowledge of the “Meghdoot” cloud platform.