A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe.
You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.
Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions that result in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
As a Data Scientist at IBM, you will be at the center of strategic projects that directly impact our clients and operations. Your role will be to develop predictive and prescriptive models, perform exploratory analysis, and lead data-driven initiatives to solve complex business challenges.
Develop and tune supervised and unsupervised Machine Learning models.
Perform exploratory data analysis (EDA) to identify patterns, trends, and insights.
Collaborate with data engineers to ensure data quality and preparation.
Build PoCs (Proofs of Concept) to validate solutions.
Monitor and analyze performance metrics (precision, recall, F1-score).
Support the transition of models to production, with a focus on reproducibility and scalability.
Use cloud platforms (Azure, AWS, or GCP), with emphasis on Azure Machine Learning, for model development, deployment, and monitoring.
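Purely as an illustration of the evaluation metrics named in the responsibilities above (this sketch is not part of the role description, and the labels are invented sample data), precision, recall, and F1-score can be computed with scikit-learn:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Invented ground-truth and predicted labels for a binary classifier
# (illustration only; not from any real project)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall = recall_score(y_true, y_pred)        # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)                # harmonic mean of precision and recall
```

Here there are 3 true positives, 1 false positive, and 1 false negative, so all three metrics come out to 0.75.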
Higher education in areas such as Computer Science, Engineering, Statistics, Mathematics, or a related field.
Programming in Python, with mastery of libraries such as Pandas, Scikit-Learn, StatsModels, and PySpark, and knowledge of Keras, PyTorch, or TensorFlow.
SQL for data manipulation and extraction.
Solid knowledge of statistics: descriptive statistics, diagnostics, probability distributions, and hypothesis testing.
Code versioning with Git.
Knowledge of data visualization tools such as Power BI, Tableau or Looker.
Experience with Cloud platforms: Azure, AWS, GCP or IBM Cloud.
Knowledge of time series analysis, text mining (NLP) and interpretable machine learning.
Familiarity with deploying and monitoring models in a cloud environment.
Experience with Big Data frameworks such as Databricks, Hadoop, Spark, and Kafka, and with languages such as Scala, Java, and/or C++.
Knowledge of statistical tools such as SPSS, SAS or Knime.
Experience with Visual Recognition and Computer Vision techniques.
Experience with Mathematical Programming/Optimization: CPLEX, Gurobi, MATLAB.
Portfolio of projects on platforms such as GitHub or personal technical blog.
Experience with Azure Cognitive Services, Azure ML AutoML, Azure Container Instances and Azure Synapse Analytics.
Previous experience with GenAI (Generative Artificial Intelligence).
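As a minimal sketch of the hypothesis-testing knowledge listed in the qualifications above (SciPy is not named in this posting, and the measurements below are invented), a two-sample t-test might look like:

```python
from scipy import stats

# Invented sample measurements for two groups (illustration only)
group_a = [2.1, 2.5, 2.3, 2.2, 2.4]
group_b = [3.1, 3.5, 3.3, 3.2, 3.4]

# Two-sample t-test: is the difference between the group means significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
significant = p_value < 0.05  # reject the null hypothesis at the 5% level
```

With these values the group means differ by a full unit relative to a small within-group spread, so the test rejects the null hypothesis.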