Description :
|
- Bachelor's or Master's degree in Computer Science, Engineering, Physics, Math, or related work experience
- 5+ years of experience writing SQL queries and stored procedures, and performing query optimization and performance tuning (Microsoft SQL Server, PostgreSQL, or Oracle)
- 4+ years of experience in Python
- 3+ years of hands-on data architecture and data modeling experience
- 2+ years of Core Java experience
- 2+ years of experience building highly scalable data solutions using Snowflake, Hadoop, Spark, and Databricks
- 2+ years of experience working in cloud environments (AWS and/or Azure)
- Create and maintain conceptual, logical, and physical data models
- Experience architecting, designing, and integrating data-intensive systems, with a solid understanding of their characteristics and specific requirements
- Identify opportunities to reuse data and reduce redundancy across the enterprise
- Experience with Git/GitHub
- Good working knowledge of DevOps tools such as Jira, Confluence, and CI/CD pipelines (Jenkins)
Skills:
- Strong analytical skills
- Candidate must be willing to take full ownership of projects, from discovery and analysis through technical design, implementation, testing, and deployment
- Must demonstrate good communication skills and be comfortable working closely with senior business partners, product owners, and cross-functional teams
- A strong desire to document and share completed work to aid long-term support
- Candidate must be a self-starter, a dependable partner, and a team player
|