Artificial Intelligence

Alternate Model of Protecting Data Privacy

Data trusts offer an alternative model for the collection and use of personal data, one in which companies pay for the data they use. (Image credit: Getty Images)

 

To meet the challenge of supplying the large volumes of data that AI applications require, a challenge compounded by regulatory and data protection hurdles, innovative companies are turning to "data trusts" or "data cooperatives."

A data trust is a structure in which data is placed under the control of a board of trustees that looks after the interests of the beneficiaries, giving them a greater say in how their data is collected, accessed, and used by others.

"It means one party authorizing another party to make decisions about data on its behalf, for the benefit of a wider group of stakeholders," explains the blog of the Open Data Institute, a nonprofit founded in 2012 by Tim Berners-Lee and Nigel Shadbolt to encourage people to innovate with data. "Data trusts are a fairly new concept, and a global community of practice is still growing around them," the blog states, citing several examples.

Reasons for sharing data include detecting fraud in financial services, increasing speed and transparency across supply chains, and combining genetic, insurance, and patient data to develop new digital health solutions, according to a recent account in the Harvard Business Review. The article cited research showing that 66% of companies are willing to share data, including customers' personal data. However, certain private data is subject to strict regulatory oversight, with violations carrying significant financial and reputational costs.

 

George Zarkadakis, Digital Director, Willis Towers Watson

HBR article author George Zarkadakis recently piloted a data trust at his company, Willis Towers Watson, a provider of advisory and technology services to insurance companies, working with several of its clients. Zarkadakis is a digital director at Willis Towers Watson, a senior fellow at the Atlantic Council, and the author of several books.

If a data trust uses cutting-edge technologies such as federated machine learning, homomorphic encryption (which allows computation on data without decrypting it), and distributed ledger technology, it can ensure transparency in data exchange and an audit trail showing who is using the data, at any time and for any purpose. "This removes the significant legal and technological friction that currently exists in exchanging data," Zarkadakis noted.
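To make the federated machine learning idea concrete, the following is a minimal sketch, not the pilot's actual stack: each data holder computes a model update locally, and only the parameters leave the premises, never the raw records. The parties, data, and one-feature linear model are all illustrative assumptions.

```python
# Sketch of federated averaging for a trivial linear model y ≈ w*x.
# Raw (x, y) records never leave each party; only weights are shared.

def local_update(w, data, lr=0.1):
    """One gradient step on the data holder's own records."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, parties):
    """A coordinator averages the locally updated weights."""
    updates = [local_update(global_w, data) for data in parties]
    return sum(updates) / len(updates)

# Two hypothetical data holders whose records both follow y = 2x.
party_a = [(1.0, 2.0), (2.0, 4.0)]
party_b = [(3.0, 6.0), (4.0, 8.0)]

w = 0.0
for _ in range(50):
    w = federated_round(w, [party_a, party_b])
# w converges toward 2.0, the slope shared by both datasets
```

The design point this illustrates is the one Zarkadakis makes: the parties jointly train a model without any of them ever transmitting the underlying sensitive data.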

The goals of the Towers Watson Data Trust pilot were to: identify a business case; create a successful Minimal Viable Consortia (MVC) in which data providers and consumers agree to share resources and talent to focus on a specific business case; agree on a legal and ethical governance framework to enable data sharing; and understand what technologies are required to drive transparency and trust in the MVC.

The insights gained included:

The importance of developing an ethical and legal framework for exchanging data.

The team found it important to lay that foundation at the outset. They worked to ensure compliance with the European Union's General Data Protection Regulation (GDPR), which contains a number of data protection provisions. For the MVC to move beyond the pilot into a commercial phase, it must be reviewed by an independent "ethics council" that examines the ethical and other implications of the use of the data and the related AI algorithms.

Use a federated architecture.

With a federated approach, the data stays where it is, and algorithms are distributed to the data, allaying fears about sending sensitive data to an external environment. The team examined privacy-preserving technologies, including differential privacy (which describes patterns in a dataset while withholding information about individuals) and homomorphic encryption. The team also examined distributed ledger technology, including blockchain, as part of the technology stack.
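The differential privacy idea mentioned above can be sketched in a few lines. This is a toy illustration of the standard Laplace mechanism, not the team's implementation; the patient records and query are invented for the example.

```python
import random

def noisy_count(records, predicate, epsilon=1.0):
    """Release a count with Laplace noise of scale 1/epsilon, so no
    single individual's presence changes the answer by much
    (a counting query has sensitivity 1)."""
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical example: report how many patients are over 60
# without exposing whether any particular person is in the set.
patients = [{"age": a} for a in (34, 67, 71, 45, 62)]
released = noisy_count(patients, lambda r: r["age"] > 60, epsilon=0.5)
```

Smaller `epsilon` means more noise and stronger privacy; the released statistic stays useful in aggregate while individual records are withheld, which is the pattern described in the paragraph above.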

"We designed the data trust as a cloud-native peer-to-peer application that achieves data interoperability, shares computing resources, and provides data scientists with a common space to train and test AI algorithms," said Zarkadakis.

Savvy Cooperative Aims to Compensate Patients for Use of Medical Data

 

Jen Horonjeff, founder and CEO of the Savvy Cooperative

One entrepreneur saw the opportunity to establish a data trust for personal medical information that would get companies using the data to pay the cooperative's participating members. Jen Horonjeff, founder and CEO of Savvy Cooperative, uses puppets in a video posted on the company's website to explain the model. The company uses surveys, interviews, and focus groups to collect data that is made available to healthcare companies and other providers.

Savvy raised an undisclosed amount from Indie.vc over the past year, according to an account in TechCrunch. “The funding will allow us to expand our offering, support more companies and improve the lives of countless more patients,” said Horonjeff.

Indie.vc takes a nontraditional approach to venture capital aimed at startups. "Savvy represents everything we want for the future of impact business: shared responsibility, diverse perspectives, and aligned incentives, tackling one of the largest industries in the world," said Bryce Roberts, founder of Indie.vc.

At the other end of the spectrum of data trust examples, Facebook in 2018 announced an oversight board with a promise to "uphold the principle of the human voice while recognizing the reality of people's safety," according to a recent report in Slate.

The board was formed six months later as a 20-person body of experts drawn from around the world and from a wide variety of fields, including journalists and judges. Early critics feared it would be nothing more than a PR stunt. Last December, six cases were selected from more than 150,000 submitted, covering topics including content moderation, censorship of hate speech, and Covid-19 misinformation. The board announced its first five decisions at the end of January.

The cases were reviewed by five-member panels, each including a representative from the region where the post in question originated. The panels sometimes requested public comments and incorporated them into their decisions. A majority of the board had to approve each decision before it was issued.

"The real decisions about what people around the world can say, and how they can say it, are no longer being made by Supreme Court rulings," explained Michael McConnell, a former federal judge who is now director of the Constitutional Law Center at Stanford Law School and a member of the Facebook board. The board seeks to uphold freedom of expression while recognizing the tension with "the harm that social media activity can do," said McConnell.

Read the source articles on the blog of the Open Data Institute, in the Harvard Business Review, in TechCrunch, and in Slate.
