You will enhance our core evaluation framework with a focus on AI security: building features for prompt evaluation and automated red teaming of LLM applications, collaborating with our open-source community, and contributing to cutting-edge AI security practices. Along the way, you will gain hands-on experience with a wide range of LLMs, identifying vulnerabilities and improving their robustness.

We are looking for proficiency in TypeScript, React, Node.js, and Python; the ability to ship features quickly and prioritize effectively; and an interest in AI security, developer tools, and open source. Experience with LLM evaluations is a plus.