Requirements:
- Proficient in SQL, with experience working with relational databases (PostgreSQL, MySQL, SQL Server) and NoSQL stores (Redis, Cassandra).
- Strong passion for and experience in programming with Python, Java, or Go.
- Hands-on experience with message queueing systems: Kafka, Pub/Sub, MQTT, etc.
- Experience in building and optimizing Big Data pipelines and architecture.
- CI/CD pipelines: experience building automated CI/CD workflows using tools like Jenkins, GitLab, or similar.
- Experience with GCP Big Data services (Composer, BigQuery, Dataflow, Pub/Sub) is preferred.
- Experience with Big Data tools such as Hadoop and Spark is a plus.
- Familiarity with UNIX environments.
- Passion for Big Data.
- Independent, self-motivated learner.
- Minimum of 3 years' working experience in a Data Engineer role.