A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe.
You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.
Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
A Data Engineer with Advanced Analytics expertise specializes in formulating mathematical approaches to solve complex business problems, gathering and preparing data, and using predictive analytics tools such as SPSS to draw conclusions and present findings. Their skills encompass mathematical optimization, discrete-event simulation, rules programming, and predictive analytics. They design, build, and manage solutions involving data preparation, statistical analysis, data collection, data mining, text mining, and deployment of analysis results. This role requires a strong understanding of data engineering principles and the ability to apply them to advanced analytics projects.
* Experience with CI/CD pipelines and tools such as Jenkins and GitLab CI/CD
* Proficiency in scripting languages such as Python and Shell
* Linux OS and Linux operations
* Experience with a monitoring tool such as Splunk or Datadog
* Knowledge of Google Cloud Platform (GCP)
* Skills: Terraform, AWS, Google Cloud Platform (GCP), Infrastructure as Code (IaC), Linux, Python
* JLPT N1-N3 certification is preferred
* Experience with Python and Big Data tools such as Hadoop, Hive, or Spark
* Proven background in SQL, Unix/Linux, and ETL processes