Tech Lead / Senior Architect - Big Data
Salary undisclosed
NorthBay Solutions, a premier US-based AWS partner with over a decade of experience serving clients globally, is seeking a passionate and experienced Tech Lead/Senior Architect to join our growing Data Engineering team.
About the Role: You will lead and architect innovative solutions built on AWS Big Data technologies (e.g. S3, Redshift, EMR, Glue), collaborating closely with a global, multicultural team of Data Engineers while guiding project execution, ensuring technical excellence, fostering a culture of continuous learning, and supporting sales growth.
Key Qualifications & Required Knowledge:
• 10+ years of experience in technology, with a minimum of 6 years specifically focused on Data Engineering
• Proven experience developing and implementing ETL pipelines on AWS in support of strategic business goals
• Expertise in ETL Data Engineering using Python, Java, and Big Data technologies (Hadoop, PySpark, Spark SQL, Hive)
• Effective time management skills with a proven ability to manage and prioritize multiple project tasks
• Experience creating and driving large-scale ETL pipelines in an AWS environment
• Experience integrating data from diverse data sources
• Strong software development and programming skills with a focus on data using Python/PySpark or Scala (a minimal PySpark sketch follows this list)
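The qualifications above center on PySpark-based ETL against S3. As a minimal, hedged sketch only (the bucket, prefix, and column names are hypothetical placeholders, not details from this posting), a job of that kind might look like:

```
# Minimal PySpark ETL sketch: read raw CSV from S3, clean it, write Parquet back.
# Bucket, prefix, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw CSV from an S3 prefix (hypothetical location).
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: deduplicate, cast types, and drop invalid rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet to a curated S3 prefix.
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))

spark.stop()
```

On EMR or Glue the s3:// paths resolve through the platform's built-in connectors; running this locally would additionally require the hadoop-aws dependency to be configured.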
Preferred Qualifications (Nice to have):
• Experience with DevOps and CI/CD tools (Bitbucket, Bamboo, Ansible, Sonar, Atlassian suite)
• In-depth knowledge of core AWS services (IAM, CloudFormation, EC2, S3, EMR/Spark, Glue, Lambda, Athena, Redshift)
• Familiarity with popular data management platforms and patterns (data lakes, Databricks, Snowflake)
• Understanding of core data science concepts (statistics, modelling, algorithms)
• Experience with AWS AI/ML Services (SageMaker, Ground Truth) for building, training, and deploying machine learning models
• Working knowledge of AWS infrastructure services (ECS, Cognito, CloudWatch, RDS for PostgreSQL, API Gateway, Lambda) to support data engineering workflows and integrations
• Acquainted with the value and evolution of Generative AI (GenAI) and its potential applications within the Big Data landscape
• Basic familiarity with Amazon Q, AWS's generative AI-powered assistant, and Amazon Bedrock, a fully managed service that provides API access to foundation models (a boto3 sketch follows this list)
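As an illustration of the Bedrock familiarity mentioned above, and only as a sketch (the model ID and request schema below are assumptions that depend on which foundation model is enabled in a given account), a boto3 invocation looks roughly like this:

```
# Sketch: invoking a foundation model through Amazon Bedrock with boto3.
# The model ID and the request/response JSON schema are model-specific assumptions;
# consult the Bedrock documentation for the model actually enabled in your account.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical Anthropic-style request body (schemas differ per model family).
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarize yesterday's ETL job failures."}
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example/assumed model ID
    body=json.dumps(body),
    contentType="application/json",
    accept="application/json",
)

# The response body is a streaming blob; parse it as JSON.
print(json.loads(response["body"].read()))
```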
Technical Responsibilities:
• Design and develop innovative solutions leveraging Big Data technologies (Hadoop, Spark, etc.) to address complex data challenges
• Architect and build highly scalable and distributed data systems on the AWS cloud platform
• Design, develop, and manage robust ETL jobs and pipelines to ensure efficient data movement and transformation (a Glue job skeleton is sketched after this list)
• Deploy production-ready data models, including data ingestion, configuration, resource management, and monitoring
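Because Glue appears in both the responsibilities and the core AWS services above, here is a hedged skeleton of a Glue PySpark job (the job parameter, catalog database/table, and output path are hypothetical):

```
# Skeleton of an AWS Glue PySpark job: standard Glue job boilerplate plus a
# Data Catalog read and an S3 write. Database, table, and path names are hypothetical.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (hypothetical names).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_events"
)

# Transform with Spark DataFrame semantics, then write curated Parquet to S3.
df = dyf.toDF().filter("event_type IS NOT NULL")
df.write.mode("overwrite").parquet("s3://example-curated-bucket/events/")

job.commit()
```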
Leadership and Collaboration:
• Partner with sales to qualify opportunities, develop compelling technical solutions, and respond to RFPs
• Serve as a thought leader, providing strategic guidance and mentoring junior team members as they leverage new data-driven solutions
• Foster a culture of continuous learning and improvement by mentoring and training colleagues
• Collaborate effectively to troubleshoot complex issues and provide timely guidance to team members
Business Impact:
• Drive impactful business improvements by analysing data, identifying trends, and recommending process optimizations
Additional qualities:
• Exceptional communication and English skills
• Proven leadership
• Client-facing expertise
• Technological agility
• Thrives in a fast-paced environment
• Analytical and detail-oriented
• Self-driven leader
• Team player