6 to 8 Years of Relevant Experience
About the Role
We’re looking for an AI Security Engineer with hands-on expertise in AI/ML, cloud deployments, and security testing of Generative AI and LLM-powered solutions. The role is highly technical and research-driven, and it will directly shape how we evaluate, validate, and secure emerging AI-driven cybersecurity tools.
Key Responsibilities
- Identify, analyze, and benchmark Generative AI–augmented and LLM-based security solutions available on the market.
- Conduct proof-of-concept (PoC) assessments of cybersecurity capabilities to validate real-world effectiveness.
- Define security control baselines and evaluation criteria for solutions focused on emerging risks.
- Evaluate vendor claims, solution architectures, and scalability.
- Perform security testing of GenAI-powered cybersecurity tools.
- Publish detailed reports on security, compliance, and product efficacy.
- Build and integrate AI robustness, vulnerability, and stress testing capabilities within MLOps ecosystems.
- Assess and adapt open-source AI security libraries for enterprise-grade stress testing and audits.
- Implement secure model development lifecycle practices, including automated white-box and black-box assessments.
- Collaborate with application teams to ensure strong developer and customer experiences.
- Uphold Blue Box values in all interactions with internal and external teams.
Essential Skills
Must Have:
- Strong background in statistics, machine learning (DNN, NLP), big data, and data science toolkits.
- Proven experience designing and operationalizing high-performance AI/ML pipelines and production code.
- Experience deploying AI/ML models in public cloud environments.
- Familiarity with open-source ML tools, frameworks, and libraries.
- Proficiency in data science programming languages and toolkits.
- Experience with MLOps, DevOps, DataOps, and API integrations.
- Hands-on experience with AI workload management.
- Strong cloud architecture, design, and operational knowledge.
Good to Have:
- Knowledge of adversarial robustness techniques and AI risk management frameworks (e.g., Trustworthy AI).
- Familiarity with application security controls (Web, API, Mobile, AI).
- Understanding of security frameworks (NIST SP 800-53, NIST CSF, OWASP ASVS).
- Knowledge of Secure SDLC, DevSecOps, and application security design.
- Experience with full-stack application architectures (SPAs, REST APIs, SOAP APIs, mobile).
- Background in Java, JavaScript, and mobile application development.
- Database expertise (Oracle, SQL, DB2, NoSQL).
- Cloud security design and operations knowledge.
- Exposure to IAM controls (OAuth 2.0, OIDC, JWT).
- Strong familiarity with cryptography (protecting data at rest and in transit).
- Certifications: CISSP, CISM, CSSLP, CISA, CRISC.
- Peer-reviewed publications, conference presentations, or open-source contributions.