A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe.
You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.
Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
Who you are:
A seasoned Data Engineer with a passion for building and managing data pipelines in large-scale environments. You have solid experience working with big data technologies, data integration frameworks, and cloud-based data platforms.
You have a strong foundation in Apache Spark, PySpark, Kafka, and SQL.
What you'll do:
As a Data Engineer – Data Platform Services, your responsibilities include:
Data Ingestion & Processing
• Assisting in building and optimizing data pipelines for structured and unstructured data.
• Working with Kafka and Apache Spark to manage real-time and batch data ingestion.
• Supporting data integration using IBM CDC and Universal Data Mover (UDM).
Big Data & Data Lakehouse Management
• Managing and processing large datasets using PySpark and Iceberg tables.
• Assisting in migrating data workloads from IIAS to Cloudera Data Lake.
• Supporting data lineage tracking and metadata management for compliance.
Optimization & Performance Tuning
• Helping to optimize PySpark jobs for efficiency and scalability.
• Supporting data partitioning, indexing, and caching strategies.
• Monitoring and troubleshooting pipeline issues and performance bottlenecks.
Security & Compliance
• Implementing role-based access controls (RBAC) and encryption policies.
• Supporting data security and compliance efforts using Thales CipherTrust.
• Ensuring data governance best practices are followed.
Collaboration & Automation
• Working with Data Scientists, Analysts, and DevOps teams to enable seamless data access.
• Assisting in automation of data workflows using Apache Airflow.
• Supporting Denodo-based data virtualization for efficient data access.
Required technical skills:
• 4-7 years of experience in big data engineering, data integration, and distributed computing.
• Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
• Proficiency in Python or Scala for data processing.
• Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
• Understanding of data security, encryption, and compliance frameworks.
Preferred skills and experience:
• Experience in banking or financial services data platforms.
• Exposure to Denodo for data virtualization and DGraph for graph-based insights.
• Familiarity with cloud data platforms (AWS, Azure, GCP).
• Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.