At Datadog, we are building an internal AI platform that empowers our teams to train, evaluate, and deploy models at scale. The Annotation & Evaluation team plays a foundational role in ensuring our models are reliable, safe, and production-ready. We design the infrastructure and tooling for dataset labeling, model benchmarking, trust & safety evaluation, and performance diagnostics across a range of ML and LLM applications. From interactive labeling pipelines to automated evaluation environments, our systems provide the core feedback loop that allows engineers and scientists to measure, compare, and continuously improve AI models. We work at the intersection of applied ML, data engineering, and platform infrastructure.

We’re looking for a Senior Software Engineer to help us scale our evaluation systems, develop benchmarking tools, and drive trust & safety observability across Datadog's AI product offerings.

At Datadog, we place value in our office culture - the relationships that it builds, the creativity it brings to the table, and the collaboration of being together. We operate as a hybrid workplace to ensure our employees can create a work-life harmony that best fits them.

What You’ll Do:
- Design and build systems that support automated evaluation of AI models, including LLMs and agents, using production-like telemetry and scenarios.
- Lead efforts to develop benchmark suites, evaluation pipelines, and model comparison tooling with built-in trust & safety metrics.
- Build and maintain integrations with labeling systems (e.g., Label Studio) and coordinate internal and external annotation workflows.
- Collaborate closely with Applied AI and Bits AI teams to enable fast iteration, reproducible experimentation, and interpretable evaluations.
- Develop data pipelines that feed metrics, results, and alerts into our observability stack to track model behavior at scale.
- Drive best practices around safe model deployment by building systems for bias checks, hallucination detection, and human-in-the-loop review.
Who You Are:
- You have 6+ years of software engineering experience, including 2+ years working with ML/AI products or platforms.
- You have experience building backend or data infrastructure systems, especially for model evaluation, benchmarking, or labeling workflows.
- You’re comfortable working across the stack: from orchestrating large-scale data processing pipelines to building UI integrations for labeling or eval dashboards.
- You have a solid understanding of ML concepts, including model performance metrics, prompt evaluation, and trust & safety concerns.
- You’re fluent in Python or Go and familiar with cloud-native development patterns.
- Bonus points: experience with open-source evaluation frameworks (e.g., lm-eval-harness), vector DBs, or human feedback loops.
Datadog is the essential monitoring platform for cloud applications. It brings together data from servers, containers, databases, and third-party services to make the stack entirely observable. These capabilities help DevOps teams avoid downtime, resolve performance issues, and ensure customers get the best user experience.