Think AI tools aren’t harvesting your data? Guess again

The meteoric rise of generative artificial intelligence has created a genuine technological sensation thanks to user-centric products such as OpenAI’s ChatGPT, DALL-E and Lensa. But the user-friendly AI boom has arrived at the same time that users seem unaware of — or are being kept unaware of — the privacy risks these projects pose.

Amid all the hype, however, international governments and big tech figures are starting to sound the alarm. Citing privacy and security concerns, Italy just temporarily banned ChatGPT, potentially inspiring a similar block in Germany. In the private sector, hundreds of AI researchers and technology leaders, including Elon Musk and Steve Wozniak, have signed an open letter calling for a six-month moratorium on the development of AI beyond the scope of GPT-4.

The relatively quick action to try to curb the irresponsible development of AI is commendable, but the broader landscape of threats that AI poses to privacy and data security goes beyond any one model or developer. While no one wants to rain on the parade of AI’s paradigm-shifting capabilities, its shortcomings need to be tackled head-on now to prevent the consequences from becoming catastrophic.

The AI Data Privacy Storm

While it would be easy to say that OpenAI and other Big Tech-powered AI projects are solely responsible for the AI data privacy problem, the subject had been discussed long before it entered the mainstream. Scandals surrounding data privacy in AI happened before the ChatGPT crackdown — they just happened out of the public eye.

Last year, Clearview AI, an AI-based facial recognition company believed to have been used by thousands of governments and law enforcement agencies with limited public knowledge, was banned from selling facial recognition technology to private companies in the United States. Clearview was also fined $9.4 million in the United Kingdom for its illegal facial recognition database. Who’s to say that consumer-focused visual AI projects like Midjourney or others can’t be used for similar purposes?

The problem is that they already have been. A series of recent scandals involving pornography and fake news created with consumer AI products has only increased the urgency of protecting users from harmful uses of AI. These scandals take a once-hypothetical concept of digital mimicry and turn it into a very real threat to ordinary people and influential public figures alike.

Related: Elizabeth Warren wants the police on your doorstep in 2024

Generative AI models fundamentally rely on new and existing data to develop and strengthen their capabilities and usability. This is part of the reason why ChatGPT is so impressive. That being said, a model that relies on new data inputs needs somewhere to get that data, and part of that will inevitably include the personal data of the people who use it. And this amount of data can easily be misused if centralized entities, governments or hackers get hold of it.

So, with limited scope for comprehensive regulation and conflicting views on AI development, what can companies and users working with these products do now?

What companies and users can do

The fact that governments and other developers are raising flags around AI now actually indicates progress from the glacial pace of regulation of Web2 apps and crypto. But raising flags is not the same as policing, so maintaining a sense of urgency without being alarmist is key to creating effective regulations before it’s too late.

Italy’s ChatGPT ban isn’t the first strike governments have taken against AI. The EU and Brazil are both passing laws to sanction certain types of AI use and development. Similarly, generative AI’s potential to lead to data breaches has sparked early legislative action by the Canadian government.

The problem of AI data breaches is serious enough that OpenAI even had to intervene. If you opened ChatGPT a few weeks ago, you might have noticed that the chat history feature was disabled. OpenAI temporarily shut down the feature after a serious privacy issue exposed strangers’ prompts and revealed payment information.

Related: Don’t be surprised if the AI tries to sabotage your crypto

While OpenAI has effectively put out that fire, it can be hard to trust programs run by Web2 giants that are slashing their AI ethics teams to do the right thing preemptively.

At the industry level, an AI development strategy that focuses more on federated machine learning would also strengthen data privacy. Federated learning is a collaborative AI technique in which no single party has access to all of the raw data: multiple independent sources train the shared model on their own local data sets, and only the resulting model updates are combined.
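To make the idea concrete, here is a minimal sketch of federated averaging in Python. The clients, data and tiny linear model are hypothetical, purely for illustration; real deployments rely on frameworks such as TensorFlow Federated or Flower and add safeguards like secure aggregation and differential privacy.

```python
import numpy as np

# Hypothetical federated-averaging sketch: each "client" trains a tiny
# linear model locally, and only the resulting weights -- never the raw
# data -- are shared with the server for averaging.

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One client's local training on its own private data set."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side aggregation, weighted by each client's data set size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three independent clients holding hypothetical private data sets.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

global_weights = np.zeros(3)
for _ in range(10):  # federated training rounds
    updates = [local_update(global_weights, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    global_weights = federated_average(updates, sizes)

print("Global model weights after federated training:", global_weights)
```

The design choice that matters for privacy is that the raw training examples never leave each client; only aggregated parameters move across the network.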

On the user side, becoming an AI luddite and completely forgoing the use of any of these programs is unnecessary and will likely be impossible very soon. But there are ways to be smarter about the generative AI you grant access to in everyday life. For enterprises and small businesses integrating AI products into their operations, it is even more vital to be vigilant about the data you feed into the algorithm.

The evergreen saying that when you use a free product, your personal data is the product still applies to AI. Keeping this in mind may cause you to reconsider which AI projects you spend your time on and what you actually use them for. If you’ve participated in any social media trends that involve posting pictures of yourself on some sketchy AI-powered website, consider skipping it.

ChatGPT reached 100 million users just two months after its launch, a staggering number that clearly indicates that our digital future will use AI. But despite these numbers, AI is not yet ubiquitous. Regulators and businesses should use this to their advantage to create frameworks for responsible and secure AI development proactively instead of chasing after projects once they become too big to control. As it stands, the development of generative AI is not balanced between protection and progress, but there is still time to find the right path to ensure that user information and privacy remain at the forefront.

Ryan Paterson is the president of Unplugged. Prior to taking the reins of Unplugged, he was founder, president and CEO of IST Research from 2008 to 2020, leaving with the sale of the business in September 2020. He completed two tours at the Defense Advanced Research Projects Agency and served 12 years in the United States Marine Corps.

Erik Prince is an entrepreneur, philanthropist and Navy SEAL veteran with business interests in Europe, Africa, the Middle East and North America. He was founder and chairman of Frontier Resource Group and founder of Blackwater USA — a provider of global security, training and logistics solutions for the U.S. government and other entities — before selling the company in 2010.

This article is for general informational purposes and is not intended to be and should not be considered legal or investment advice. The views, thoughts and opinions expressed herein are those of the author alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
