Senior Security Researcher (AI)
Prelude
Location
Israel
Employment Type
Full time
Location Type
Remote
Department
Security
About Origin by Prelude
Origin is building the next generation of endpoint security for the AI era.
Across the enterprise, AI agents are quickly becoming part of everyday workflows. They’re driving meaningful productivity gains, but they’re also creating a new and largely uncharted risk surface: agents operating on endpoints with powerful permissions, opaque behavior, and limited visibility or control.
Origin is developing endpoint capabilities purpose-built for this shift, giving organizations the tools to safely grant AI agents the permissions they need while maintaining control, oversight, and trust.
We’re backed by Sequoia Capital, Brightmind Partners, IA Ventures, and others, and built by a deeply technical team of Windows internals researchers, product engineers, and offensive operations specialists.
Role
Origin is seeking an AI Security Researcher to investigate the evolving intersection of adversarial tradecraft and modern generative AI systems. This role focuses on understanding how attackers misuse or weaponize LLMs, on-device assistants (such as Computer Use Agents), autonomous agents, code-generation models, and multi-modal AI systems, and on translating this research into impactful defensive capabilities within Origin’s products.
Success in this role requires deep curiosity, strong technical intuition, hands-on experimentation, and the ability to convert ambiguous research signals into clear, actionable engineering outcomes.
Responsibilities
Conduct in-depth research into how modern adversaries may evolve tradecraft to exploit or abuse generative AI tools, including LLMs, autonomous agents, and on-device assistants
Conduct hands-on research into adversarial prompting, jailbreak methods, tradecraft leveraging computer use agents and local models, and other AI-enabled attack vectors
Translate and implement research findings into actionable improvements for Origin’s products
Produce high-quality, public-facing security research content, including blog posts and conference talks
Stay abreast of cutting-edge offensive and defensive security techniques through continuous self-study and research
Serve as the subject matter expert in adversary tradecraft and security operations, supporting other business units on their projects as needed
Support other Researchers on the team with their research and actively engage in team-driven initiatives
Skills and Experience
5+ years of experience in one or both of the following areas:
Offensive security research, such as red team operations or purple teaming
Defensive security research, such as research for publication or for security feature development
Ability to write code for development of research tooling
Ability to explain complex technical concepts and research outputs to both executive-level and highly technical consumers
Aptitude for working in a fast-paced, adaptive startup environment
Nice to Haves
Demonstrated experience investigating or exploiting generative AI systems, prompt engineering, jailbreaks, model exploitation, or agentic misuse
Familiarity with reinforcement learning, model interpretability, or safety research
Contributions to open-source AI or security tooling
Prior publications or conference presentations
Proficiency in at least one systems language (Rust, C/C++)
Knowledge of operating system internals and reverse engineering
Working at Origin
Origin is a distributed team across the US, Canada, Australia, and Israel. We have a culture built on trust, autonomy, and excellence. We empower our team to take ownership, move with purpose, and continuously improve. Our culture values top performers who align with our mission and embrace high standards. We offer generous healthcare, flexible PTO, and home-office support, ensuring our team has the freedom and resources to thrive. While we move fast, we prioritize quality, collaboration, and remain committed to building impactful security solutions with precision.
