A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.

Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside your role, and come up with creative solutions that result in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
Key Responsibilities
- Lead the architecture and design of Databricks-based data lakehouse platforms.
- Implement Delta Lake lakehouse patterns such as time travel, schema evolution, and change data capture (CDC).
- Drive adoption of Delta Live Tables, Auto Loader, and Unity Catalog for enterprise-scale governance and automation.
- Optimize Spark workloads and Photon-powered queries for performance and cost efficiency.
- Oversee data governance, RBAC, lineage, and compliance via Unity Catalog.
- Enable real-time pipelines using Structured Streaming (Kafka, Event Hub, Kinesis).
- Partner with business teams to define data strategy, standards, and best practices.
- Mentor junior engineers, perform code reviews, and drive engineering excellence.
- Integrate Databricks with DevOps (CI/CD, GitOps) and automate provisioning via Terraform/ARM/CloudFormation.
- Collaborate with AI/ML teams to enable ML model deployment and monitoring with MLflow and Feature Store.
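The CDC pattern named in the responsibilities above reduces to upsert semantics: change records are merged into a target table by key, with updates overwriting matching rows and deletes removing them. A minimal plain-Python sketch of that merge logic follows (illustrative only; the field names `id` and `op` are assumptions, and on Databricks this is expressed with Delta Lake's `MERGE INTO` or `DeltaTable.merge` rather than hand-rolled code):

```python
def apply_cdc(target: dict, changes: list[dict]) -> dict:
    """Merge CDC change records into a target table (a dict keyed by row id).

    Each change record carries an 'op' field: 'upsert' inserts or updates
    the keyed row; 'delete' removes it. Field names are illustrative.
    """
    merged = dict(target)
    for change in changes:
        key = change["id"]
        if change["op"] == "delete":
            merged.pop(key, None)  # delete is a no-op if the key is absent
        else:  # 'upsert': insert new row or overwrite the existing one
            merged[key] = {k: v for k, v in change.items() if k != "op"}
    return merged


# Example: one update, one insert, and one delete against a two-row target.
target = {1: {"id": 1, "qty": 5}, 2: {"id": 2, "qty": 3}}
changes = [
    {"op": "upsert", "id": 1, "qty": 9},  # update existing row 1
    {"op": "upsert", "id": 3, "qty": 1},  # insert new row 3
    {"op": "delete", "id": 2},            # delete existing row 2
]
result = apply_cdc(target, changes)
print(result)  # {1: {'id': 1, 'qty': 9}, 3: {'id': 3, 'qty': 1}}
```

In Delta Lake the same outcome comes from a single `MERGE` statement, which additionally gives ACID guarantees and, combined with table versioning, enables the time-travel audits mentioned above.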
Mandatory Skills
- 7+ years of data engineering experience, with at least 3 years on Databricks.
- Deep knowledge of Databricks ecosystem: PySpark, Delta Lake, Delta Live Tables, Auto Loader, Unity Catalog, MLflow, Photon.
- Strong experience with Azure (ADF, ADLS, Synapse, Event Hub) or AWS (Glue, S3, Redshift, Lake Formation, Kinesis).
- Expertise in SQL, Python, and performance optimization.
- Strong knowledge of data architecture, governance, and security frameworks.
- Proven track record of leading teams and delivering enterprise-grade solutions.
Good to Have
- Databricks Certified Data Engineer Professional certification.
- Experience with Databricks REST APIs and advanced automation.
- Exposure to multi-cloud Databricks deployments.