Cynet Systems - Dallas, TX
posted 2 months ago
The Generative AI Automation Tester will play a crucial role in evaluating and testing Generative AI proof-of-concept (POC) models built on OpenAI and Vertex AI LLMs. The position is fully remote, offering flexibility while working with cutting-edge AI technologies. The tester will design, develop, and execute detailed test plans that validate model performance against real customer data, which requires a thorough understanding of how data flows through the models and verification that input and output data are accurate and complete.

Collaboration is key in this role. The tester will work closely with data scientists and engineers to confirm that model outputs align with expected results, creating comprehensive test cases that cover accuracy, bias, performance, and edge cases. The tester will also identify weaknesses, inaccuracies, and areas needing optimization, and will report bugs, issues, and potential improvements with detailed feedback to the development teams.

Additionally, the tester will use automated testing tools and frameworks designed specifically for model testing, and will maintain comprehensive documentation of the quality assurance (QA) process and results to ensure transparency and support future testing efforts.
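As a rough illustration of the kind of automated test case described above, the sketch below checks a model's output for correctness and handles an edge case. The `generate` function here is a stand-in stub for a real OpenAI or Vertex AI call, and all names and test data are hypothetical, not part of the job description itself.

```python
# Hypothetical sketch of an automated test case for a generative model.
# `generate` is a stub standing in for a real OpenAI / Vertex AI call.

def generate(prompt: str) -> str:
    """Stub model: returns a canned answer keyed by prompt."""
    canned = {
        "What is the capital of France?": "The capital of France is Paris.",
        "": "",  # edge case: empty prompt yields empty output
    }
    return canned.get(prompt, "I don't know.")

def check_non_empty(prompt: str, output: str) -> bool:
    """Output must be non-empty unless the prompt itself was empty."""
    return prompt == "" or len(output.strip()) > 0

def check_contains_expected(output: str, expected: str) -> bool:
    """Case-insensitive check that the expected answer appears in the output."""
    return expected.lower() in output.lower()

# (prompt, expected substring) pairs forming a tiny test suite
test_cases = [
    ("What is the capital of France?", "Paris"),
    ("", ""),
]

results = []
for prompt, expected in test_cases:
    output = generate(prompt)
    passed = check_non_empty(prompt, output) and check_contains_expected(output, expected)
    results.append((prompt, passed))

all_passed = all(passed for _, passed in results)
print(all_passed)
```

In practice such checks would be wrapped in a test framework (e.g. pytest) and run against live model endpoints, with additional assertions for bias and latency.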