CSO Online featured Peter Herzog, Head of Security at Urvin AI, an O3 World partner, in a recent article about minimizing risk in AI and ML development. As emerging technologies, AI and ML lack an industry-wide set of security best practices, and enterprises rely on a hodgepodge of open source and proprietary code. In the rush to extract value from data, companies must remember to put security, privacy, and ethics first.
“There’s no such thing as an AI model that is free of security problems because people decide how to train them, people decide what data to include, people decide what they want to predict and forecast, and people decide how much of that information to expose.”
– Peter Herzog, Head of Security at Urvin AI