Vision Research Intern-1 at Centific
Centific · Redmond, United States of America · Hybrid
- Optional office in Redmond
About Centific
Centific is a frontier AI data foundry that curates diverse, high-quality data, using our purpose-built technology platforms to empower the Magnificent Seven and our enterprise clients with safe, scalable AI deployment. Our team includes more than 150 PhDs and data scientists, along with more than 4,000 AI practitioners and engineers. We harness the power of an integrated solution ecosystem—comprising industry-leading partnerships and 1.8 million vertical domain experts in more than 230 markets—to create contextual, multilingual, pre-trained datasets; fine-tuned, industry-specific LLMs; and RAG pipelines supported by vector databases. Our zero-distance innovation™ solutions for GenAI can reduce GenAI costs by up to 80% and bring solutions to market 50% faster.
Our mission is to bridge the gap between AI creators and industry leaders by bringing best practices in GenAI to unicorn innovators and enterprise customers. We aim to help these organizations unlock significant business value by deploying GenAI at scale, helping to ensure they stay at the forefront of technological advancement and maintain a competitive edge in their respective markets.
About the Job
Internship: Vision AI / VLM / Physical AI (Ph.D. Research Intern)
Company: Centific
Location: Seattle, WA (or Remote)
Type: Full‑time Internship
Hours: 40
Build the Future of Perception & Embodied Intelligence
Are you pushing the frontier of computer vision, multimodal large models, and embodied/physical AI—and have the publications to show it? Join us to translate cutting‑edge research into production systems that perceive, reason, and act in the real world.
The Mission
We are building state‑of‑the‑art Vision AI across 2D/3D perception, egocentric/360° understanding, and multimodal reasoning. As a Ph.D. Research Intern, you will own high‑leverage experiments from paper → prototype → deployable module in our platform.
What You’ll Do
- Advance Visual Perception: Build and fine‑tune models for detection, tracking, segmentation (2D/3D), pose & activity recognition, and scene understanding (incl. 360° and multi‑view).
- Multimodal Reasoning with VLMs: Train/evaluate vision–language models (VLMs) for grounding, dense captioning, temporal QA, and tool‑use; design retrieval‑augmented and agentic loops for perception‑action tasks.
- Physical AI & Embodiment: Prototype perception‑in‑the‑loop policies that close the gap from pixels to actions (simulation + real data). Integrate with planners and task graphs for manipulation, navigation, or safety workflows.
- Data & Evaluation at Scale: Curate datasets, author high‑signal evaluation protocols/KPIs, and run ablations rigorous enough that results are fully reproducible.
- Systems & Deployment: Package research into reliable services on a modern stack (Kubernetes, Docker, Ray, FastAPI), with profiling, telemetry, and CI for reproducible science (see the service sketch after this list).
- Agentic Workflows: Orchestrate multi‑agent pipelines (e.g., LangGraph‑style graphs) that combine perception, reasoning, simulation, and code‑generation to self‑check and self‑correct.
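To make the "Systems & Deployment" bullet concrete, here is a minimal sketch, assuming a pretrained torchvision detector and placeholder endpoint/threshold choices; it illustrates the kind of packaging involved and is not Centific's production service code.

```python
# Illustrative sketch only (assumed checkpoint, endpoint name, and threshold):
# serving a single-image detector behind a FastAPI endpoint.
import io

import torch
import torchvision
from fastapi import FastAPI, File, UploadFile
from PIL import Image
from torchvision.transforms.functional import to_tensor

app = FastAPI()

# Pretrained COCO detector as a stand-in; real work would load a fine-tuned checkpoint.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()


@app.post("/detect")
async def detect(image: UploadFile = File(...)):
    """Run object detection on one uploaded image; return boxes, labels, and scores."""
    data = await image.read()
    img = Image.open(io.BytesIO(data)).convert("RGB")
    with torch.inference_mode():
        preds = model([to_tensor(img)])[0]
    keep = preds["scores"] > 0.5  # simple confidence threshold (placeholder)
    return {
        "boxes": preds["boxes"][keep].tolist(),
        "labels": preds["labels"][keep].tolist(),
        "scores": preds["scores"][keep].tolist(),
    }
```

In practice a service like this would be containerized (Docker), deployed via Kubernetes, and instrumented with profiling and telemetry around the inference call, as the bullet above describes.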
Example Problems You Might Tackle
- Long‑horizon video understanding (events, activities, causality) from egocentric or 360° video.
- 3D scene grounding: linking language queries to objects, affordances, and trajectories.
- Fast, privacy‑preserving perception for on‑device or edge inference.
- Robust multi‑modal evaluation: temporal consistency, open‑set detection, uncertainty (a toy metric sketch follows this list).
- Vision‑conditioned policy evaluation in sim (Isaac/MuJoCo) with sim2real stress tests.
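As a toy illustration of the evaluation theme above (temporal consistency in particular), here is a minimal sketch of one possible metric; the IoU-based definition and function names are assumptions made for illustration, not an established benchmark.

```python
# Illustrative metric sketch (assumed definition): average best-match IoU between
# detections in consecutive frames as a crude temporal-consistency score.
import numpy as np


def iou(box_a, box_b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def temporal_consistency(frames):
    """`frames` is a list of per-frame box lists; higher means steadier detections."""
    scores = []
    for prev, curr in zip(frames, frames[1:]):
        for box in curr:
            if prev:
                scores.append(max(iou(box, p) for p in prev))
    return float(np.mean(scores)) if scores else 0.0
```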
Minimum Qualifications
- Ph.D. student in CS/EE/Robotics (or related), actively publishing in CV/ML/Robotics (e.g., CVPR/ICCV/ECCV, NeurIPS/ICML/ICLR, CoRL/RSS).
- Strong PyTorch (or JAX) and Python; comfort with CUDA profiling and mixed‑precision training.
- Demonstrated research in computer vision and at least one of: VLMs (e.g., LLaVA‑style, video‑language models), embodied/physical AI, 3D perception.
- Proven ability to move from paper → code → ablation → result with rigorous experiment tracking.
Preferred Qualifications
- Experience with video models (e.g., TimeSFormer/MViT/VideoMAE), diffusion or 3D GS/NeRF pipelines, or SLAM/scene reconstruction.
- Prior work on multimodal grounding (referring expressions, spatial language, affordances) or temporal reasoning.
- Familiarity with ROS2, DeepStream/TAO, or edge inference optimizations (TensorRT, ONNX).
- Scalable training: Ray, distributed data loaders, sharded checkpoints.
- Strong software craft: testing, linting, profiling, containers, and reproducibility.
- Public code artifacts (GitHub) and first‑author publications or strong open‑source impact.
Our Stack (you’ll touch a subset)
- Modeling: PyTorch, torchvision/lightning, Hugging Face, OpenMMLab, xFormers
- Perception: YOLO/Detectron/MMDet, SAM/Mask2Former, CLIP‑style backbones (see the sketch after this list), optical flow
- VLM / LMM: Vision encoders + LLMs, RAG for video, tool‑former/agent loops
- 3D / Sim: Open3D, PyTorch3D, Isaac/MuJoCo, COLMAP/SLAM, NeRF/3DGS
- Systems: Python, FastAPI, Ray, Kubernetes, Docker, Triton/TensorRT, Weights & Biases
- Pipelines: LangGraph‑like orchestration, data versioning, artifact stores
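As a small sketch of how a CLIP-style backbone from this stack can be used for zero-shot image–text matching; the checkpoint, image path, and prompts below are placeholder assumptions.

```python
# Illustrative sketch (assumed checkpoint, image path, and prompts): zero-shot
# image-text matching with a CLIP-style backbone from Hugging Face.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-base-patch32"  # example checkpoint, not a prescribed choice
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("frame.jpg")  # hypothetical video frame
prompts = ["a person picking up a box", "an empty hallway", "a forklift turning"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: similarity of the image to each text prompt
probs = outputs.logits_per_image.softmax(dim=-1)
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{p:.3f}  {prompt}")
```

The same pattern (a vision encoder paired with a text encoder or LLM) underlies the VLM/LMM work listed above.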
What Success Looks Like
- A publishable or open‑sourced outcome (with company approval) or a production‑ready module that measurably moves a product KPI (latency, accuracy, robustness).
- Clean, reproducible code with documented ablations and an evaluation report that a teammate can rerun end‑to‑end.
- A demo that clearly communicates capabilities, limits, and next steps.
Why Centific
- Real impact: Your research ships—powering core features in our MVPs and products.
- Mentorship: Work closely with our Principal Architect and senior engineers/researchers.
- Velocity + Rigor: We balance top‑tier research practices with pragmatic product focus.
Rate: $30-$50 per hour
How to Apply
Email your CV, publication list/Google Scholar, and GitHub (or artifacts/videos) to [email protected] with the subject line:
“Vision AI / VLM / Physical AI – Ph.D. Research Intern”.