Machine Learning Engineer, Distributed vLLM
Remote type: Hybrid | Location: Boston | Time type: Full time | Job requisition ID: R-053073
Job Summary
At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat AI Inference Engineering team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM and llm-d projects, and inventors of state-of-the-art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

As a Machine Learning Engineer focused on distributed vLLM infrastructure in the llm-d project, you will be at the forefront of innovation, collaborating with our team to tackle the most pressing challenges in scalable inference systems and Kubernetes-native deployments. Your work in machine learning, distributed systems, high-performance computing, and cloud infrastructure will directly shape our software platform and the future of AI deployment. If you want to solve hard problems at the intersection of deep learning, distributed systems, and cloud-native infrastructure the open-source way, this is the role for you. Join us in shaping the future of AI!
What you will do
- Contribute to the design, development, and testing of new features and solutions for Red Hat AI Inference
- Innovate in the inference domain by participating in upstream communities
- Develop and maintain distributed inference infrastructure leveraging Kubernetes APIs, operators, and the Gateway API Inference Extension for scalable LLM deployments
- Develop and maintain system components in Go and/or Rust to integrate with the vLLM project and manage distributed inference workloads
- Develop and maintain KV cache-aware routing and scoring algorithms to optimize memory utilization and request distribution in large-scale inference deployments (a simplified sketch follows this list)
- Enhance the resource utilization, fault tolerance, and stability of the inference stack
- Develop and test inference optimization algorithms
- Actively participate in technical design discussions
- Contribute to a culture of continuous improvement by sharing recommendations and technical knowledge with team members
- Collaborate with engineering and cross-functional teams to deliver on shared commitments
- Communicate effectively with team members to ensure proper visibility of development efforts
- Learn from senior members of the team through teaching, coaching, and mentorship
- Provide timely and constructive code reviews
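
To make the KV cache-aware routing bullet above concrete, here is a minimal sketch in Go (one of the role's core languages). This is not llm-d's actual algorithm: the `Replica` type, the block-hash representation, and the load-penalty weight are all illustrative assumptions. The idea is to prefer the replica whose KV cache already holds the largest share of the prompt's prefix blocks, lightly penalized by its current load.

```go
// Hypothetical KV cache-aware scorer for routing inference requests.
// Replicas that already cache more of the prompt's prefix blocks score
// higher; queue depth acts as a simple load penalty.
package main

import "fmt"

// Replica models one vLLM instance and the prefix-block hashes it caches.
type Replica struct {
	Name         string
	CachedBlocks map[uint64]bool // hashes of KV cache blocks held locally
	QueueDepth   int             // outstanding requests (load signal)
}

// score rewards cache overlap and penalizes queue depth; the 0.1 weight
// is an illustrative choice, not a tuned value from any real system.
func score(r Replica, promptBlocks []uint64) float64 {
	if len(promptBlocks) == 0 {
		return -0.1 * float64(r.QueueDepth)
	}
	hits := 0
	for _, b := range promptBlocks {
		if r.CachedBlocks[b] {
			hits++
		}
	}
	overlap := float64(hits) / float64(len(promptBlocks))
	return overlap - 0.1*float64(r.QueueDepth)
}

// pick returns the highest-scoring replica for the given prompt blocks.
func pick(replicas []Replica, promptBlocks []uint64) Replica {
	best := replicas[0]
	bestScore := score(best, promptBlocks)
	for _, r := range replicas[1:] {
		if s := score(r, promptBlocks); s > bestScore {
			best, bestScore = r, s
		}
	}
	return best
}

func main() {
	prompt := []uint64{1, 2, 3, 4} // block hashes of an incoming prompt
	replicas := []Replica{
		{Name: "pod-a", CachedBlocks: map[uint64]bool{1: true, 2: true, 3: true}, QueueDepth: 4},
		{Name: "pod-b", CachedBlocks: map[uint64]bool{1: true}, QueueDepth: 0},
	}
	fmt.Println("routing to:", pick(replicas, prompt).Name)
}
```

A production scorer would weigh more signals (memory headroom, prefill/decode balance, session affinity), but the trade-off between cache overlap and load is the core of the technique.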
What you will bring
- Strong proficiency in Python and/or Go, or a similar language
- Experience with cloud-native Kubernetes networking and service mesh stacks such as Istio, Cilium, Envoy (WASM filters), and CNI
- Working understanding of Layer 7 networking, HTTP/2, gRPC, and the fundamentals of API gateways and reverse proxies (see the proxy sketch after this list)
- Knowledge of serving runtime technologies for hosting LLMs, such as vLLM, SGLang, or TensorRT-LLM
- Excellent written and verbal communication skills; able to interact effectively with both technical and non-technical team members
- Ability to work independently in a dynamic, fast-paced environment
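
As a reference point for the Layer 7 fundamentals listed above, here is a minimal reverse proxy built only on Go's standard library. The backend address and route prefix are placeholders, not any real deployment; in a real inference gateway the target would be chosen per request by a scheduler rather than fixed.

```go
// Minimal Layer 7 reverse proxy: forwards OpenAI-style API calls to a
// single backend. Purely illustrative; the address below is a placeholder.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical vLLM backend; a production gateway (e.g., Envoy or a
	// Gateway API implementation) would pick this target dynamically.
	backend, err := url.Parse("http://localhost:8000")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Route /v1/... (completions, chat) to the backend; all else 404s.
	http.Handle("/v1/", proxy)
	log.Println("proxy listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```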
The following are considered a plus:
- Proficiency in C, C++, or Rust
- Experience with the Kubernetes ecosystem, including core concepts, custom APIs, operators, and the Gateway API Inference Extension for GenAI workloads
- Working knowledge of high-performance networking protocols and technologies, including UCX, RoCE, InfiniBand, and RDMA
- Experience with GPU performance benchmarking and profiling tools like NVIDIA Nsight, or with distributed tracing libraries/techniques like OpenTelemetry (a minimal tracing sketch follows this list)
- Experience writing high-performance code for GPUs and deep knowledge of GPU hardware
- Strong understanding of computer architecture, parallel processing, and distributed computing concepts
- Bachelor's degree in computer science or related field is an advantage, though we prioritize hands-on experience
- Active engagement in the ML research community (publications, ...)
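
For the distributed tracing item above, the sketch below shows a minimal OpenTelemetry setup in Go that exports spans to stdout. The tracer and span names are hypothetical; a real deployment would export to a collector instead.

```go
// Minimal OpenTelemetry tracing setup: one provider, one span, stdout
// export. Names ("inference-gateway", "route-request") are hypothetical.
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	exporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		log.Fatal(err)
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	// Wrap one unit of work in a span, as a request router might.
	_, span := otel.Tracer("inference-gateway").Start(context.Background(), "route-request")
	span.End()
}
```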
Red Hat is the leading provider of enterprise open source software solutions. Enterprises around the world trust our broad portfolio of hybrid cloud infrastructure, application services, cloud-native application development, automation, and artificial intelligence solutions to deliver IT services on any infrastructure quickly and cost effectively. More than 90% of companies in the U.S. Fortune 500 continue to rely on Red Hat.

Building enterprise-ready solutions with open source: We believe using an open development model helps create more secure, stable, and innovative technologies. By collaborating with open source communities, we’re developing software that pushes the boundaries of technological ability.

Red Hat is an open hybrid cloud technology leader, delivering a consistent, comprehensive foundation for transformative IT and artificial intelligence (AI) applications in the enterprise. As a trusted adviser to the Fortune 500, Red Hat offers cloud, developer, Linux, automation, and application platform technologies, as well as award-winning services.