Mar 4, 2019
Admiral Michael S. Rogers of the U.S. Navy, Director of the NSA, recently said that data tampering could become the greatest cybersecurity threat organizations face -- whether a simple act of revenge by a disgruntled employee, corporate espionage, or even a nation-state attack.
I speak with Dirk Kanngiesser, CEO and co-founder of Cryptowerk, a Silicon Valley start-up for enterprise blockchain applications, on my daily tech podcast. We explore how data integrity and deepfakes could be AI's Achilles' heel. Dirk is a technology startup entrepreneur and investor living in Silicon Valley.
Dirk has more than 25 years of startup, operational and investing experience in Europe and the US. He is also an active angel investor and a board member of technology companies both in the U.S. and in Europe.
In an age of manipulated data, deepfake techniques and unending data breaches, companies need to know that they are using pristine data in their AI systems. Toxic data will cripple an organization's faith in AI and wreak havoc on its business systems.
Even before understanding how and why decisions were made, organizations must be able to stand by the integrity of the data their AI consumes. This might be called verifiable AI: an organization can provide immutable proof that the data used by its AI systems is unaltered.
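A minimal sketch of the underlying idea (not Cryptowerk's actual API, and the function names here are illustrative): hash a dataset before it enters the AI pipeline, anchor that digest somewhere tamper-evident such as a blockchain, and re-hash the data later to prove it has not been altered.

```python
import hashlib


def seal(data: bytes) -> str:
    """Compute a SHA-256 digest of the dataset. In practice the digest
    would be anchored in a blockchain or other immutable store."""
    return hashlib.sha256(data).hexdigest()


def verify(data: bytes, sealed_digest: str) -> bool:
    """Re-hash the data and compare it against the anchored digest."""
    return hashlib.sha256(data).hexdigest() == sealed_digest


training_data = b"customer_id,score\n1001,0.87\n1002,0.42\n"
digest = seal(training_data)

assert verify(training_data, digest)             # pristine data passes
assert not verify(training_data + b"x", digest)  # any tampering is detected
```

Only a fixed-size digest needs to be stored immutably, so the data itself can remain private while its integrity stays provable.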
Dirk talks about the emerging threats and explains how AI vendors like SAS, FICO, IBM, and others implement verifiable AI in their products. He shares how organizations can build safety measures into data before it enters the AI algorithm.