

Hybrid AI Systems Engineer
Auburn Hills, United States of America · Hybrid
- Professional
- Office in Auburn Hills
AI Systems Engineer
Position Summary
The AI Systems Engineer will support Transcend’s Chief Data and Artificial Intelligence Officer and play a critical role in building and optimizing data infrastructure, machine learning systems, and AI capabilities. This role bridges data engineering, MLOps, and AI deployment by ensuring scalable, reliable, and secure data and model pipelines for analytics, business intelligence, and advanced AI applications.
Key Responsibilities:
Data Pipeline & Infrastructure Engineering
- Design, implement, and optimize robust, scalable data pipelines (ETL/ELT) to process structured and unstructured data.
- Build API integrations and automation workflows to support increasing data complexity and volume.
- Leverage cloud-native tools (e.g., AWS S3, Redshift, Glue, Lambda) for data ingestion and processing.
- Develop Spark, Spark SQL, Hive SQL, and streaming jobs to populate analytical models and support real-time data flows.
Data Modeling & Architecture
- Create and manage data models, schemas, and storage structures to support analytical and operational use cases.
- Implement scalable data storage and retrieval using relational, NoSQL, and cloud data lake technologies.
- Optimize data systems for performance, availability, and cost efficiency.
AI & Machine Learning Engineering
- Collaborate with AI/ML teams to operationalize ML workflows using MLOps tools (e.g., MLflow, Airflow).
- Contribute to the design and deployment of AI solutions using OpenAI models, LLMs, and Retrieval-Augmented Generation (RAG).
- Integrate NLP and ML models into data pipelines and applications.
DevOps, Cloud, and Platform Engineering
- Deploy and manage containerized applications using Docker and orchestrate them with Kubernetes.
- Automate infrastructure and model deployment using CI/CD workflows and GitHub Actions.
- Implement monitoring, versioning, and reproducibility for ML pipelines.
Collaboration & Documentation
- Work with cross-functional teams to improve data accessibility, quality, and transparency.
- Document technical designs, data workflows, and platform standards.
- Provide support and technical guidance to analysts, engineers, and business teams.
Required Skills & Experience
- 3+ years of experience in data engineering, AI/ML engineering, or a related field.
- Proficiency in Python, Java, or Scala for data and ML development.
- Hands-on experience with AWS tools: S3, Redshift, SageMaker, Lambda.
- Strong skills in Apache Spark, Kafka, and streaming data processing.
- Experience with MLOps tools: MLflow, Apache Airflow, and model versioning.
- Containerization and orchestration expertise: Docker and Kubernetes.
- Familiarity with LLMs, OpenAI APIs, and RAG frameworks.
- Experience building and deploying NLP or machine learning models in production.
- Experience with advertising platform data from Meta (Facebook/Instagram), Google Ads, and TikTok Ads.
- Integration and analysis experience with Salesforce CRM and marketing data.
- Proficient with GitHub and CI/CD pipelines for code and model lifecycle management.
- Knowledge of data security, compliance, and governance best practices.
- Experience with BI tools like Tableau, Power BI, or Looker is a plus.
Education
- Bachelor’s degree in Computer Science, Data Science, or a related field required.
- Master’s degree preferred.