At IBM, work is more than a job: To design. To build. To code. To consult. To think along with clients. To deliver and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.
You will work on data engineering and data management initiatives, designing and implementing traditional and modern data management streams, whether cloud-based or on-premises. Among other responsibilities, you will handle the optimal extraction, transformation, and ingestion of data from a wide variety of data sources using multiple technology stacks.
- At least 5 years of hands-on experience in data management and data integration.
- Excellent written and verbal communication skills.
- Analytical thinking and a “can-do” attitude.
- Ability to work on enterprise-level solutions in agile environments, interface with end users, and document functional and technical requirements.
- Ability to prepare clear, effective technical design documents.
- Understanding of business and IT concepts, best practices, and functions, in order to support technical solutions and solve technical challenges.
- Solid experience with ETL processes, data modeling, data processing, data ingestion in cloud environments, and data lake architectures.
- Solid understanding of services, database schemas, and integration processes between systems built on different technologies.
- Advanced SQL query-authoring experience and hands-on expertise with relational databases.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Familiarity with cloud platforms, particularly Azure, for deploying and managing data infrastructure.
- Experience in the banking industry.
- Proficiency in PySpark/Spark SQL/SQL for big data processing and optimization.
- Experience building and optimizing big data pipelines, architectures, and data sets.
- Experience with unstructured datasets.
- Experience using one or more of the following: Python, Spark, Kafka, MongoDB.