Must Have:
● Experience in architecting and delivering highly scalable, distributed, cloud-based enterprise data solutions
● Strong expertise in end-to-end implementation of cloud data engineering solutions such as enterprise data lakes and data hubs on AWS
● Proficient in Lambda or Kappa Architectures
● Solid understanding of data management concepts and data modelling
● Strong hands-on AWS expertise with a programming background, preferably in Python or Scala
● Good knowledge of Big Data frameworks and related technologies - experience with Hadoop and Spark is mandatory
● Strong experience with AWS compute services like EMR and Glue, and storage services like S3, Redshift and DynamoDB
● Good experience with at least one AWS streaming or messaging service, such as Kinesis, SQS or MSK
● Troubleshooting and performance-tuning experience with the Spark framework - Spark Core, Spark SQL and Spark Streaming
● Good knowledge of application DevOps tools (Git, CI/CD frameworks) - experience with Jenkins or GitLab, plus rich experience with AWS developer tools such as CodePipeline, CodeBuild and CodeCommit
● Experience with AWS CloudWatch, AWS CloudTrail and AWS Config, including Config Rules
● Good knowledge of AWS security and AWS Key Management Service (KMS)
● Strong understanding of cloud data migration processes, methods and the project lifecycle
● Good analytical & problem-solving skills
● Good communication and presentation skills