Liz O’Sullivan, co-founder and CEO of Vera, a startup focused on mitigating generative AI risks, is on a mission to ensure that the benefits of AI are not outweighed by its potential harms. In an interview with TechCrunch, O’Sullivan emphasized the importance of actively controlling AI tools to prevent harm to humans.
The Risks of Generative AI
O’Sullivan pointed out that the current hype cycle surrounding AI obscures the very real dangers of generative AI models. These include models behaving unpredictably, or even being misused for criminal purposes, with serious consequences for individuals and society as a whole. "Today’s AI hype cycle obscures the very serious, very present risks that affect humans alive today," O’Sullivan said.
Vera’s Approach to Generative AI Moderation
Vera’s approach to generative AI moderation is centered on providing a comprehensive solution that tackles a range of threats at once. The company’s technology aims to prevent text-generating models from retaining or regurgitating sensitive data, such as customer purchase orders and phone numbers. Vera’s model-moderating tech also attempts to defend against prompt engineering attacks.
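The article does not describe Vera’s implementation, but the general idea of a moderation layer can be illustrated. The sketch below is a minimal, hypothetical example: it redacts phone-number-like strings before they reach or leave a model, and flags one naive prompt-injection pattern. The regexes and function names are assumptions for illustration only; a production system would rely on far more sophisticated detection than pattern matching.

```python
import re

# Illustrative patterns only -- real moderation systems use trained
# classifiers and broader PII coverage, not a pair of regexes.
PHONE_RE = re.compile(
    r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"
)
INJECTION_RE = re.compile(
    r"ignore (all )?(previous|prior) instructions", re.IGNORECASE
)

def moderate(text: str) -> tuple[str, bool]:
    """Redact phone numbers and flag a naive prompt-injection attempt.

    Returns the redacted text and whether the input should be blocked.
    """
    blocked = bool(INJECTION_RE.search(text))
    redacted = PHONE_RE.sub("[REDACTED PHONE]", text)
    return redacted, blocked

clean, blocked = moderate("Call me at 555-123-4567 to confirm the order.")
```

The same `moderate` step could sit on either side of the model: scrubbing sensitive data out of prompts before they are sent, and out of completions before they are shown to users.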
Competition in the Market
Vera is not alone in the market for model-moderating tech. Companies like Nvidia, Salesforce, and Microsoft are also working on similar solutions. However, O’Sullivan believes that Vera’s unique value proposition lies in its ability to tackle a wide range of generative AI threats simultaneously.
The Need for Policy Enforcement
O’Sullivan emphasized the importance of policy enforcement in mitigating generative AI risks. She argued that CTOs, CISOs, and CIOs around the world are struggling to strike a balance between AI-enhanced productivity and the potential risks associated with these models. Vera’s technology aims to unlock generative AI capabilities while enforcing policies that can be transferred not just to today’s models but also to future models.
The Future of Generative AI
As the use of generative AI continues to grow, O’Sullivan believes that it is essential to address the risks associated with these models. "AI is a powerful tool and like any powerful tool, should be actively controlled so that its benefits outweigh these risks," she said.
Conclusion
Liz O’Sullivan’s mission at Vera is to ensure that generative AI is used in a way that minimizes harm to humans. With the rapid growth of AI adoption, it is more important than ever to address the potential risks associated with these models. By providing comprehensive policy enforcement and moderation solutions, Vera aims to unlock the full potential of generative AI while mitigating its risks.
Tags
- #GenerativeAI
- #AIModeration
- #PolicyEnforcement
- #Vera
Sources
- O’Sullivan, L. (2024). The Risks of Generative AI. TechCrunch.
- Vera. (n.d.). About Us. Retrieved from https://www.vera.ai/
- Nvidia. (n.d.). NeMo Guardrails. Retrieved from https://www.nvidia.com/en-us/ai/neural-mo-guardrails/