Required AI Security Engineer
The Role
As AI capabilities accelerate across the bank, we need an engineer to design and enforce safe AI usage-protecting customer data, preserving model integrity, and meeting our regulatory obligations. You'll be the architect of guardrails, tooling, and policies that make AI both secure and useful for product and internal teams. This isn't about slowing things down; it's about building the trust layer that lets innovation move fast without breaking things.
Who You Are:
You're a security engineer who's excited about the AI wave-someone who sees GenAI and LLMs as fascinating puzzles to secure, not just threats to mitigate. You've spent 5+ years in Security Engineering, AppSec, or Cloud Security, and at least 1-2 of those years have been spent getting your hands dirty with AI/ML or data-intensive systems.
You're equally comfortable dissecting a prompt injection attack as you are writing a Terraform module or shipping a Python library. You know your way around AWS and/or Azure, modern app stacks (Python/TypeScript, REST/gRPC, containers/Kubernetes), and can translate security requirements into developer-friendly tooling-not just PDF policies that gather dust.
You communicate clearly in English and Hebrew, thrive in regulated environments, and understand that security in financial services means mapping controls to frameworks like FFIEC, SOC 2, and PCI DSS-and actually having the evidence to prove it.
What Youll Actually Be Doing:
Design enterprise AI guardrails across Azure and AWS (e.g., Azure AI Studio/Azure OpenAI, Amazon Bedrock/SageMaker): content filtering, PII redaction, prompt/response validation, and policy enforcement services.
Implement data minimization controls for GenAI/RAG workloads: context filtering, least‐privileged retrieval, document-level ACL enforcement, vector store hardening, and secure token/secret handling.
Threat model AI systems (apps, agents, RAG, fine-tuning pipelines) using frameworks like STRIDE and the OWASP Top 10 for LLM Apps; define misuse scenarios (prompt injection/jailbreaks/data exfiltration) and build mitigations.
Build monitoring and telemetry: privacy-preserving prompt/response logging, sensitive-data detection, safety/eval dashboards, drift/abuse signals, and incident hooks into our SIEM.
Integrate AI security into the SDLC: reusable libraries, pre-commit checks, CI/CD gates, policy-as-code, and secure-by-default reference architectures for product teams.
Evaluate third‑party AI vendors and internal apps: security reviews, data residency and retention requirements, SSO/SCIM integrations, DPA/TPRM inputs, and continuous control testing.
Partner across Security, Data, Privacy, and Engineering to map AI controls to FFIEC, SOC 2, and PCI DSS; document control evidence for audits.
Lead/participate in AI red‑teaming: automated jailbreak/prompt‑injection tests, safety benchmarks, purple‑team exercises, and response playbooks for AI incidents.
דרישות:
5+ years in Security Engineering/AppSec/Cloud Security (or similar), including 1-2+ years securing AI/ML or data‑intensive systems (GenAI preferred).
Hands‑on experience with AWS and/or Azure and modern app stacks (Python/TypeScript, REST/gRPC, containers/Kubernetes, IaC such as Terraform).
Practical understanding of LLM attack surfaces (prompt injection, data leakage via tools, training/fine‑tune poisoning, model supply chain) and mitigation patterns.
Familiarity with identity and access for AI workloads (OAuth2/OIDC, service principals, role tokens, PIM), and secure secret management/KMS.
Experience implementing observability/telemetry and routing findings to SIEM; comfort balancing privacy with traceability.
Ability to translate controls into developer-friendly libraries, docs, and CI/CD checks; strong written communication in English and Hebrew.
Comfort working in a regulated environment and mapping controls to frameworks (FFIEC, SOC 2, PCI D המשרה מיועדת לנשים ולגברים כאחד.