Senior Data Engineer
DISQO is a next-generation consumer insights platform. We provide the highest-quality consumer data to the world's largest market research agencies, analytics companies, and brands. We operate one of the world's largest true consumer insights panels. This data helps our clients understand user behavior, build better experiences, and make better decisions. We use cutting-edge technology and innovative, out-of-the-box strategies to collect and analyze insights that help shape the products and services of tomorrow.
This is a great opportunity to join a fun, exciting & highly motivated team and upgrade your skills while creating real impact. We use a modern tech stack and cloud infrastructure. We are looking not only for work experience but also for the willingness to step up to challenges and the ability to learn quickly.
We believe the best software is written and managed by small teams that know how to make the impossible possible. We use agile software development techniques and modern tools to focus our efforts on achieving our business goals. We use OKRs to track everything we do. We deliver early and often. We obsess over our code, architecture, and infrastructure. And we believe that these practices lead to higher-quality products.
Responsibilities
- Leverage your software development and data engineering skills to impact our business by taking ownership of key projects involving coding and data pipelines
- Collaborate with product managers, software engineers and data engineers to design, implement, and deliver successful data solutions
- Define technical requirements and implementation details for data solutions
- Design, build and optimize performant databases, data models, integrations and ETL pipelines in RDBMS and NoSQL environments
- Maintain detailed documentation of your work and changes to support data quality and governance
- Ensure high operational efficiency and quality of your solutions to meet SLAs and support our commitments to customers
- Be an active participant in and advocate of agile/scrum practices to drive team health and process improvements
Requirements
- 5+ years of experience designing and delivering large-scale, 24/7, mission-critical data pipelines and features using modern big data architectures
- 3+ years of advanced SQL (preferably PostgreSQL / Amazon Redshift)
- 3+ years of Scala
- 3+ years of Python
- 3+ years of Spark
- 2+ years of experience with the AWS data ecosystem (Redshift, EMR, Glue, Athena, etc.)
- Deep knowledge of ETL/ELT tools and concepts, data modeling, SQL, and query performance optimization
- Experience building stream-processing applications using Kinesis or Kafka
- Experience with workflow management tools (Airflow, Oozie, Azkaban, Luigi, etc.)
- Comfortable working in a Linux environment
- Ability to thrive in an agile, entrepreneurial start-up environment