
Software Engineer at G-Research

G-Research · London, United Kingdom · Onsite

Apply now

We tackle the most complex problems in quantitative finance by bringing scientific clarity to financial complexity.

From our London HQ, we unite world-class researchers and engineers in an environment that values deep exploration and methodical execution - because the best ideas take time to evolve. Together we’re building a world-class platform to amplify our teams’ most powerful ideas.

As part of our engineering team, you’ll shape the platforms and tools that drive high-impact research - designing systems that scale, accelerate discovery and support innovation across the firm.

Take the next step in your career.

The role

We are looking for Software Engineers to join our Core AI subteam within the AI Engineering Group.

The team’s mission is to build, operate and continuously evolve the core platforms that power every GenAI initiative across G-Research, from RAG services used by the entire company to new tools, commercial or custom-built, that improve the developer experience for our quants and engineers.

As a member of the Core AI team, you will design and deliver scalable, reliable and secure services and tooling that enable researchers, data scientists and application teams to develop, deploy and monitor AI solutions quickly and safely.

As a Software Engineer, your work will span distributed systems, LLM orchestration and inference, LLM integration with internal systems and the internal deployment of the latest third-party AI technologies.

Key responsibilities of the role include:

  • Designing, building and operating platform services in C# and Python that provide common capabilities such as feature stores, vector search, prompt management and model hosting

  • Implementing orchestration workflows with tools such as LangGraph and Pydantic‑based data models to ensure type‑safe, auditable pipelines

  • Integrating and scaling RAG technologies to support large-scale embedding workloads

  • Collaborating with product and research teams to turn cutting‑edge prototypes into robust, production‑grade services

  • Championing engineering best practices, including version control, automated testing, CI/CD and observability, and embedding them into every platform component

  • Benchmarking and optimising latency, throughput and cost across on‑prem GPU clusters and cloud environments

  • Influencing G‑Research’s AI strategy by evaluating vendor products, open‑source projects and industry trends, and advising on build‑vs‑buy decisions

  • Coaching and upskilling engineers across the firm in using platform APIs, SDKs and self‑service tooling effectively

Who are we looking for?

We value engineers who thrive on solving hard problems, enjoy working in polyglot codebases and care deeply about developer experience.

You should be comfortable owning a service end‑to‑end, from design docs to production dashboards, and excited by the prospect of shaping the foundation on which every AI workload at G‑Research runs.

The ideal candidate will have the following skills and experience:

  • Degree in Computer Science, Engineering or a related field, or equivalent professional experience

  • Strong, production‑grade programming skills in C# and Python or similar languages

  • Solid understanding of distributed systems concepts, such as networking, storage, concurrency and fault tolerance

  • Familiarity with modern AI engineering tooling and patterns, such as LangGraph/LangChain, Pydantic, FastAPI, MCP, RAG pipelines and agentic workflows

  • Proven track record of delivering high‑availability services and automating their testing and deployment, including Git, Docker, Kubernetes and CI/CD

  • Ability to translate abstract requirements into secure, scalable technical designs and to communicate those designs clearly

Desirable:

  • Exposure to GPU scheduling, model‑parallel inference frameworks such as vLLM or TensorRT‑LLM, or serving LLMs in production

  • Experience operating hybrid on‑prem and cloud (AWS, Azure, GCP) environments at scale

  • Knowledge of performance‑critical programming, low‑latency networking or high‑frequency data processing

  • Contributions to open‑source AI infrastructure projects

Why should you apply?

  • Highly competitive compensation plus annual discretionary bonus

  • Lunch provided (via Just Eat for Business) and dedicated barista bar

  • 30 days’ annual leave

  • 9% company pension contributions

  • Informal dress code and excellent work/life balance

  • Comprehensive healthcare and life assurance

  • Cycle-to-work scheme

  • Monthly company events

G-Research is committed to cultivating and preserving an inclusive work environment. We are an ideas-driven business and we place great value on diversity of experience and opinions.

We want to ensure that applicants receive a recruitment experience that enables them to perform at their best. If you have a disability or special need that requires accommodation, please let us know in the relevant section.

Apply now
