Senior Product Security Engineer (AI/ML)
Ontario
Full Time
Senior Level · Engineering · Worldwide

Job Description

Senior Product Security Engineer (AI/ML)

Location: Ontario
Remote Work: Not explicitly stated
Employment Type: Not explicitly stated
Experience Level: Senior (implied by job title)
Salary: The national pay range for this role is $155,600 - $233,400 CAD; individual compensation is commensurate with experience and qualifications. Additional compensation may include stock option awards, bonuses, merit increases, and sales commissions where applicable.

Our mission at Greenhouse is to make every company great at hiring, so we go to great lengths to hire great people, because we believe they're the foundation of our success. At Greenhouse, you'll join a team that collaborates purposefully, fosters inclusivity, and communicates with transparency and accountability so we can help companies measurably improve the way they hire. Join us to do the best work of your career, solving meaningful problems with remarkable teams.

Greenhouse is looking for a Senior Product Security Engineer (AI/ML) to join our team! Security at Greenhouse is foundational to our success and critical to building and maintaining the trust of our users and customers. From how we write our software and deploy our infrastructure to how we make architectural decisions, security is a major focus at Greenhouse. We are excited to make our program more robust with the addition of a Product Security Engineer with AI security expertise. You will serve as the team's subject matter expert (SME) on AI security-focused engagements while contributing to our broader security engineering goals. You will partner with our engineers to improve security best practices across our agile SDLC, with a specific focus on securing our emerging AI and Machine Learning features.

Who will love this job

  • An Entrepreneurial Problem-Solver - You don’t wait for a ticket to fix a gap. You proactively stay current on AI/ML trends and identify ways to harden our systems before risks manifest
  • A Pragmatic Partner - You understand the "need for business speed." You thrive in environments where security enables innovation rather than hindering it, finding creative ways to support development velocity
  • An Independent Driver - You demonstrate a high bias for action. You are comfortable completing tasks independently and asynchronously
  • A Generous Collaborator - You listen well and work effectively with diverse audiences, from legal counsel during AI Ethics committee reviews to infrastructure engineers during feature development, incorporating feedback to build better solutions
  • A Clear Communicator - You can translate complex technical concepts into real-world business impact, whether through technical writing, documentation, or internal presentations

What you’ll do

  • Act as the primary advisor for securing AI/ML workflows, conducting threat modeling for AI product features, and defining guardrails for Large Language Model (LLM) usage
  • Advise and review on agentic AI usage across the R&D department
  • Perform security testing and source code review of the application and underlying platform for both AI and non-AI systems
  • Help upskill the wider security and engineering teams on AI security fundamentals and common threats/vulnerabilities
  • Partner with our compliance and legal teams on AI governance decisions and processes
  • Act as a security partner, building and maintaining relationships with product and engineering teams to integrate security into the development process
  • Embed security principles and controls to achieve a ‘secure by default’ posture
  • Secure modern technology stacks that include Kubernetes, Docker, AWS, and CI/CD tooling
  • Participate in the security engineering on-call rotation to triage and respond to urgent security alerts and incidents outside of standard business hours when necessary

You should have

  • Practical experience securing model training and inference pipelines (specifically ARC and MLflow) and securing AI Gateways
  • Professional experience as a developer releasing production code. Proficient with modern workflows like Agile, GitOps, and CI/CD
  • Hands-on experience using modern AI development tools (e.g., Cursor, GitHub Copilot, Gemini, or Claude) and interacting with OpenAI/Gemini APIs
  • Strong foundation in AWS core services, Kubernetes (K8s), Linux systems, and networking principles
  • Expert-level knowledge of web and AI/ML application security topics (e.g., OWASP Web / LLM / Agent)
  • Exposure to AI compliance frameworks (e.g., ISO42001)
  • Experience with architecture reviews and authentication protocol flows (SAML, OAuth2, and OIDC)
  • Deep understanding of the AI ecosystem including design principles, threat models, and appropriate tools
  • Ability to perform both structured and ad hoc threat modeling, providing practical, code-level recommendations that balance security with development speed
  • Experience working with Ruby on Rails is a plus
  • Unique talents - Greenhouse values candidates from diverse backgrounds, even if their experience doesn't align 100% with the listed qualifications

Applicants must be legally eligible to work in Canada as of the start date chosen by the Company. Sponsorship is not supported. For purposes of processing or administering your employment relationship, personal information may be transferred/accessed by affiliates or agents in the United States or elsewhere. The anticipated closing date for this role is February 2nd, 2026. #LI-MM1

Extracted Skills

  • Securing model training/inference pipelines (ARC & MLflow)
  • Securing AI Gateways
  • Software development & production code release experience
  • Agile methodologies
  • GitOps workflows
  • CI/CD tooling & processes
  • Modern AI development tools: Cursor, GitHub Copilot, Gemini, Claude
  • OpenAI/Gemini APIs interaction
  • AWS core services expertise
  • Kubernetes (K8s)
  • Linux systems administration/knowledge
  • Networking principles knowledge
  • Web application security (OWASP)
  • AI/ML application security including LLMs & agentic AI threats
  • Architecture reviews & authentication protocols: SAML/OAuth2/OIDC
  • Threat modeling (structured & ad-hoc)
  • Compliance frameworks: ISO42001 (AI compliance)
  • Ruby on Rails (optional/plus)
About Greenhouse

Greenhouse provides a hiring software platform designed to help companies improve their recruiting processes with AI tools and scalable workflows.
