Founded in 2019, Level AI is a Series C startup headquartered in Mountain View, California. Level AI revolutionises customer engagement by transforming contact centres into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organisations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as one of the most adaptive and forward-thinking solutions in the industry.
Competencies:
Data Modelling: Skilled in designing data warehouse schemas (e.g., star and snowflake schemas), with experience in fact and dimension tables, as well as normalization and denormalization techniques.
Data Warehousing & Storage Solutions: Proficient with platforms such as Snowflake, Amazon Redshift, Google BigQuery, and Azure Synapse Analytics.
ETL/ELT Processes: Expertise in ETL/ELT tools (e.g., Apache NiFi, Apache Airflow, Informatica, Talend, dbt) to facilitate data movement from source systems to the data warehouse.
SQL Proficiency: Advanced SQL skills for complex queries, indexing, and performance tuning.
Programming Skills: Strong in Python or Java for building custom data pipelines and handling advanced data transformations.
Data Integration: Experience with real-time data integration tools like Apache Kafka, Apache Spark, AWS Glue, Fivetran, and Stitch.
Data Pipeline Management: Familiar with workflow automation tools (e.g., Apache Airflow, Luigi) for orchestrating and monitoring data pipelines (see the sketch after this list).
APIs and Data Feeds: Knowledgeable in API-based integrations, especially for aggregating data from distributed sources.
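For illustration, here is a minimal sketch of the kind of pipeline orchestration described above, written as an Apache Airflow DAG. The DAG name, task names, and extract/load callables are hypothetical placeholders, and the `schedule` argument assumes Airflow 2.4 or later (older versions use `schedule_interval`):

```python
# Minimal Airflow DAG sketch: extract from a source system, then load into
# the warehouse. All names (dag_id, tasks, callables) are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder: pull rows from a source database or API.
    print("extracting orders from source system")


def load_orders():
    # Placeholder: write transformed rows into the warehouse.
    print("loading orders into warehouse")


with DAG(
    dag_id="orders_to_warehouse",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # Airflow 2.4+; earlier versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)

    extract >> load                 # load runs only after extract succeeds
```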
Responsibilities -
Design and implement analytical platforms that provide insightful dashboards to customers.
Develop and maintain data warehouse schemas, such as star schemas, fact tables, and dimensions, to support efficient querying and data access (see the sketch after this list).
Oversee data propagation processes from source databases to warehouse-specific databases/tools, ensuring data accuracy, reliability, and timeliness.
Ensure the architectural design is extensible and scalable to adapt to future needs.
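As a concrete illustration of the star-schema work above, here is a small sketch using Python's built-in sqlite3 for portability; a production warehouse would be Snowflake, Redshift, BigQuery, or Synapse, and all table and column names here are hypothetical:

```python
# Star schema sketch: one fact table referencing two dimension tables,
# queried with the kind of aggregate join a dashboard would issue.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,
        name         TEXT,
        region       TEXT
    );
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,
        date     TEXT,
        month    TEXT
    );
    CREATE TABLE fact_sales (
        sale_id      INTEGER PRIMARY KEY,
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        date_key     INTEGER REFERENCES dim_date(date_key),
        amount       REAL
    );
    -- Index the foreign keys that dashboard queries join on.
    CREATE INDEX idx_sales_customer ON fact_sales(customer_key);
    CREATE INDEX idx_sales_date ON fact_sales(date_key);
""")

conn.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                 [(1, "Acme", "NA"), (2, "Globex", "EU")])
conn.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                 [(20240101, "2024-01-01", "2024-01")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                 [(1, 1, 20240101, 99.0), (2, 2, 20240101, 150.0)])

# Typical dashboard query: revenue by region and month.
for row in conn.execute("""
    SELECT c.region, d.month, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_key = f.customer_key
    JOIN dim_date d     ON d.date_key = f.date_key
    GROUP BY c.region, d.month
"""):
    print(row)
```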
Requirements -
Qualification: B.E/B.Tech/M.E/M.Tech/PhD from a Tier 1 engineering institute, with relevant work experience at a top technology company.
3+ years of backend and infrastructure experience, with a strong track record in development, architecture, and design.
Hands-on experience with large-scale databases, high-scale messaging systems, and real-time job queues.
Experience navigating and understanding large-scale systems, complex codebases, and architectural patterns.
Proven experience in building high-scale data platforms.
Strong expertise in data warehouse schema design (star schema, fact tables, dimensions).
Experience with data movement, transformation, and integration tools for data propagation across systems.
Ability to evaluate and implement best practices in data architecture for scalable solutions.
Nice to have:
Experience with Google Cloud, Django, Postgres, Celery, Redis (a minimal Celery/Redis sketch follows this list).
Some experience with AI Infrastructure and Operations.
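For the Celery/Redis combination above, a minimal sketch; the app name, broker URL, and task are hypothetical placeholders, and a Redis server is assumed to be running on localhost:

```python
# Minimal Celery app using Redis as both broker and result backend.
# All names here are illustrative, not part of any real deployment.
from celery import Celery

app = Celery(
    "level_tasks",                       # hypothetical app name
    broker="redis://localhost:6379/0",   # assumes a local Redis server
    backend="redis://localhost:6379/1",
)


@app.task
def score_transcript(transcript_id: int) -> str:
    # Placeholder unit of work, e.g. scoring a call transcript.
    return f"scored transcript {transcript_id}"


# Usage (assuming this file is saved as tasks.py): start a worker with
# `celery -A tasks worker`, then from application code:
#   result = score_transcript.delay(42)
#   result.get(timeout=10)  # -> "scored transcript 42"
```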