Speech Research Intern-1 at Centific
Centific · Redmond, United States of America · Hybrid
- Junior
- Optional office in Redmond
About Centific
Centific is a frontier AI data foundry that curates diverse, high-quality data, using our purpose-built technology platforms to empower the Magnificent Seven and our enterprise clients with safe, scalable AI deployment. Our team includes more than 150 PhDs and data scientists, along with more than 4,000 AI practitioners and engineers. We harness the power of an integrated solution ecosystem—comprising industry-leading partnerships and 1.8 million vertical domain experts in more than 230 markets—to create contextual, multilingual, pre-trained datasets; fine-tuned, industry-specific LLMs; and RAG pipelines supported by vector databases. Our zero-distance innovation™ solutions for GenAI can reduce GenAI costs by up to 80% and bring solutions to market 50% faster.
Our mission is to bridge the gap between AI creators and industry leaders by bringing best practices in GenAI to unicorn innovators and enterprise customers. We aim to help these organizations unlock significant business value by deploying GenAI at scale, helping to ensure they stay at the forefront of technological advancement and maintain a competitive edge in their respective markets.
About the Job
PhD Research Intern — Speech AI
Centific AI Research
Full-time, 40 hours per week
Role Summary
Centific AI Research seeks a PhD Research Intern to design and evaluate speech‑first models, with a focus on Spoken Language Models (SLMs) that reason over audio and interact conversationally. You’ll move ideas from prototype to practical demos, working with scientists and engineers to deliver measurable impact.
Scope of Work
- End‑to‑end speech dialogue systems (speech‑in/speech‑out) and speech‑aware LLMs.
- Alignment between speech encoders and text backbones via lightweight adapters (a minimal sketch follows this list).
- Efficient speech tokenization and temporal compression suitable for long‑form audio.
- Reliable evaluation across recognition, understanding, and generation tasks—including robustness and safety.
- Latency‑aware inference for streaming and real‑time user experiences.
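For orientation, here is a minimal sketch of the kind of lightweight adapter referenced above: it compresses frames from a frozen SSL speech encoder in time and projects them into the embedding space of a text LLM backbone. The class name, dimensions, and stride are illustrative assumptions rather than a prescribed design.

```python
import torch
import torch.nn as nn


class SpeechAdapter(nn.Module):
    """Toy adapter: downsample speech-encoder frames in time and project
    them into the embedding space of a text LLM backbone (illustrative only)."""

    def __init__(self, speech_dim: int = 768, llm_dim: int = 4096, stride: int = 4):
        super().__init__()
        # Temporal compression: strided 1D convolution over the frame axis.
        self.compress = nn.Conv1d(speech_dim, speech_dim, kernel_size=stride, stride=stride)
        # Small MLP mapping compressed frames into the LLM embedding space.
        self.project = nn.Sequential(
            nn.Linear(speech_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, speech_feats: torch.Tensor) -> torch.Tensor:
        # speech_feats: (batch, frames, speech_dim) from a frozen SSL encoder.
        x = self.compress(speech_feats.transpose(1, 2)).transpose(1, 2)
        return self.project(x)  # (batch, frames // stride, llm_dim)


if __name__ == "__main__":
    adapter = SpeechAdapter()
    frames = torch.randn(2, 200, 768)   # ~4 s of 50 Hz encoder output (assumed rate)
    prefix = adapter(frames)            # soft prompt to prepend to text embeddings
    print(prefix.shape)                 # torch.Size([2, 50, 4096])
```

In a prototype along these lines, the adapter output would typically be prepended as a soft prompt to the LLM's text embeddings before decoding.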
Example Projects
- Prototype a conversational SLM using an SSL speech encoder and a compact adapter on an existing LLM; compare against strong baselines.
- Create a data recipe that blends conversational speech with instruction‑following corpora; run targeted ablations and report findings.
- Build an evaluation harness that covers ASR/ST/SLU and speech QA, including streaming metrics (latency, stability, endpointing); a small timing sketch follows this list.
- Ship a minimal demo with streaming inference and logging; document setup, metrics, and reliability checks.
- Author a crisp internal write‑up: goals, design choices, results, and next steps for productionization.
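As a reference point for the streaming metrics mentioned in the evaluation-harness bullet, the sketch below times first-token latency and real-time factor for a speech-in, token-out streaming generator. The `generate` interface and field names are hypothetical stand-ins, not an existing API.

```python
import time
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator


@dataclass
class StreamingStats:
    first_token_latency_s: float  # wall-clock time until the first output token
    total_time_s: float           # wall-clock time for the full response
    real_time_factor: float       # total_time_s divided by input audio duration


def measure_streaming(
    generate: Callable[[Iterable[bytes]], Iterator[str]],
    audio_chunks: Iterable[bytes],
    audio_seconds: float,
) -> StreamingStats:
    """Time a streaming speech model: consume its token iterator and record latencies."""
    start = time.perf_counter()
    first = None
    for _token in generate(audio_chunks):
        if first is None:
            first = time.perf_counter() - start
    total = time.perf_counter() - start
    first = first if first is not None else total
    return StreamingStats(first, total, total / audio_seconds)
```

A harness like this could be run per utterance and aggregated alongside accuracy metrics such as WER or QA exact match.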
Minimum Qualifications
- PhD candidate in CS/EE (or related) with research in speech, audio ML, or multimodal LMs.
- Fluency in Python and PyTorch, with hands‑on GPU training; familiarity with torchaudio or librosa.
- Working knowledge of modern sequence models (Transformers or SSMs) and training best practices.
- Depth in at least one area: (a) discrete speech tokens/temporal compression, (b) modality alignment to LLMs via adapters, or (c) post‑training/instruction tuning for speech tasks.
- Strong experimentation habits: clean code, ablations, reproducibility, and clear reporting.
Preferred Qualifications
- Experience with speech generation (neural codecs/vocoders) or hybrid text+speech decoding.
- Background in multilingual or code‑switching speech and domain adaptation.
- Hands‑on work evaluating safety, bias, hallucination, or spoofing risks in speech systems.
- Distributed training/serving (FSDP/DeepSpeed), and experience with ESPnet, SpeechBrain, or NVIDIA NeMo.
Tech Stack
- PyTorch, CUDA, torchaudio/librosa; experiment tracking (e.g., Weights & Biases).
- LLM backbones with lightweight adapters; neural audio codecs and vocoders as needed.
- FastAPI/gRPC for services; ONNX/TensorRT and quantization for efficient inference.
Logistics
- Location: Redmond (preferred) or remote
- Duration: 3–6 months
- Pay: $30–$50 per hour
What We Offer
- Competitive stipend and hands-on projects with measurable real-world impact.
- Mentorship from applied scientists and engineers; opportunities to publish and present.
- Access to modern GPU infrastructure and a supportive environment for fast, responsible experimentation.
- Flexible location and schedule options, subject to team needs.
Centific AI Research is an Equal Opportunity Employer. We celebrate diversity and are committed to an inclusive environment for all employees and interns.