- Work for one of the leading operators
- Awesome benefits
- Be part of one global family
Our client is one of the world’s leading gaming operators, with millions of players and 1500+ employees. They believe passionately in what they do. Quite simply, they craft entertainment with care, building trusted brands and creating great experiences that always put the player first.
Their award-winning portfolio includes some of the best-known brands in the industry. You’ll be joining a big, international group with great brands and an exciting future. You’ll feel part of one global family, working with smart people and delivering a great experience for their players.
There’s one thing they expect from you, over and above everything else. Be yourself!
Key Responsibilities:
- Be a valuable member of the Data Systems Engineering division by working closely with Developers, Architects, and DevOps Engineers to build state-of-the-art systems that will improve reliability, automation, quality and cycle time of code releases for the various Data Platform Products.
- Refine communication and collaboration processes amongst the various consumers and sources of the Data Platform, paving the way for more aligned and successful product deployments.
- Ensure that robust automation and monitoring solutions are in place to guarantee a highly available and resilient Data Platform.
Skills & Experience:
- You have a solid understanding of, and relevant experience with, infrastructure-as-code solutions such as AWS CloudFormation and Terraform.
- You are passionate about automation, with hands-on experience in architecting CI/CD pipelines using platforms such as Jenkins, GitHub Actions, GoCD, CircleCI and TeamCity.
- Experience with major cloud computing providers (AWS, GCP, Azure), together with a solid understanding of cloud computing best practices such as the Shared Responsibility Model, Cost Optimization, Scalability and Availability across different geographic locations.
- Experience with AWS services such as IAM, S3, Kinesis, EC2, Lambda, Redshift, EMR, CloudFormation and CloudWatch will be considered an asset.
- Monitoring, alerting and escalation tools such as Grafana, AWS CloudWatch, Icinga, Zabbix and PagerDuty
- Observability tools like Dynatrace, Honeycomb, Datadog and New Relic
- Metric aggregation and/or scraping tools like Graphite, Metrictank and Prometheus
- Log aggregation tools like Elasticsearch, Loki and Splunk
- Working in Linux environments
- Scripting languages such as Python and Bash
- You have excellent communication and teamwork skills, with the ability to work harmoniously with a diverse workforce
- You are passionate about your area of expertise and have a keen interest in keeping up with the latest industry best practices
You are familiar with and would like to be exposed more to:
- Relational, NoSQL and Columnar data stores such as PostgreSQL/Redshift, MySQL, Cassandra
- Data Pipeline tools such as Airflow, NiFi, Talend and Luigi
- Data streaming concepts and deployment methodologies using technologies such as Flink, Kinesis Streams, Kafka, Pulsar and CDC solutions
- Containerization technologies and managed services such as Kubernetes, Docker, OpenShift, AWS ECS, AWS EKS, GCP GKE and Azure AKS
The above duties should give an overall picture of your day-to-day responsibilities, but should in no way be deemed an exhaustive list; additional related duties may be assigned by your manager in line with business exigencies and continuity.