Senior Big Data Engineer – Clustering
Kafka Clusters, Hadoop Clusters, Hadoop Ecosystem Tools, DevOps
Would you like to work in a challenging environment, using leading technologies and doing innovative, state-of-the-art work in the Big Data area? Are you interested in working in a creative, fun, international environment where you can leave your mark? Then this could be your next contract.
At this international bank, you will work in the Analytics area on models and algorithms with a focus on clustering. You will design and operate clusters, addressing security concerns and hardening the Hadoop clusters you manage. In this context you will collaborate closely with the development team.
Technically, you are an experienced Engineer who is comfortable translating user requirements into Big Data technologies, and you enjoy designing Big Data architectures. You have at least 5 years' experience with both Kafka and Hadoop clusters, working with tools from the Hadoop ecosystem (e.g. Hive, Impala, Spark, Kafka, Solr, Flume) and with DevOps automation using Ansible and Terraform. ETL, DB2 and some security experience would be advantageous. Creativity, flexibility and a go-getter attitude will be among your personality traits.
Does this feel like the challenge you would like to wake up to every day over the coming months?
If so, please send your CV to email@example.com or alternatively you can call me on 043 588 10 34.