April 30, 2024

Unveiling The Privacy Paradox Of Predictive Analysis Technology

By Ishita Nayak (4th Year Student, Institute of Law, Nirma University, Ahmedabad)

Image Credits: https://incubator.ucf.edu/what-is-artificial-intelligence-ai-and-why-people-should-learn-about-it/

Introduction

Given the rapid rise in technological advancement and the constant surveillance and monitoring that accompanies it, everyday activities, from simply browsing the web to completing complex online transactions, feed data into one of the most widely used applications of AI: predictive analytics.[1] Predictive analytics typically uses data mining to identify trends or patterns that are not readily visible. For instance, for a user who habitually shops on a particular online platform, relevant search results are sequenced according to the behaviour that user has displayed on the platform in the past. This happens because the platform's algorithm is designed to predict user behaviour from previous usage, such as what the user generally purchases, their preferred price range, colours, sizes, and so on.
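To make the idea concrete, the sketch below shows, in a few lines of Python, how a handful of past purchases can be turned into a behavioural profile and used to rank candidate products. It is purely illustrative: the products, attributes, and weights are hypothetical, and it does not reproduce any real platform's ranking algorithm.

```python
from collections import Counter

# Hypothetical purchase history for one user (illustrative data only).
history = [
    {"category": "shoes", "colour": "black", "price": 1200},
    {"category": "shoes", "colour": "white", "price": 1500},
    {"category": "t-shirt", "colour": "black", "price": 600},
]

# Candidate products the platform could surface next (also hypothetical).
candidates = [
    {"name": "running shoes", "category": "shoes", "colour": "black", "price": 1400},
    {"name": "leather wallet", "category": "wallet", "colour": "brown", "price": 900},
    {"name": "graphic t-shirt", "category": "t-shirt", "colour": "black", "price": 550},
]

def preference_profile(history):
    """Summarise past behaviour: favoured categories, colours, and typical spend."""
    categories = Counter(item["category"] for item in history)
    colours = Counter(item["colour"] for item in history)
    avg_price = sum(item["price"] for item in history) / len(history)
    return categories, colours, avg_price

def score(product, categories, colours, avg_price):
    """Score a candidate by how closely it matches the inferred preferences."""
    s = 2.0 * categories[product["category"]]            # boost repeat categories
    s += 1.0 * colours[product["colour"]]                # boost preferred colours
    s -= abs(product["price"] - avg_price) / avg_price   # penalise price mismatch
    return s

categories, colours, avg_price = preference_profile(history)
ranked = sorted(candidates,
                key=lambda p: score(p, categories, colours, avg_price),
                reverse=True)
for p in ranked:
    print(f'{p["name"]}: {score(p, categories, colours, avg_price):.2f}')
```

Even this toy example shows how quickly a behavioural profile emerges from a few transactions, and it is precisely this kind of inferred profile that gives rise to the privacy concerns discussed below.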

The increasing popularity of predictive analytics has prompted its use in other areas such as healthcare, where it is used to detect the early onset of life-threatening diseases and illnesses. Its use is not restricted to these areas: even the legal field deploys predictive analytics to understand criminal behaviour and offenders' tendencies to re-offend, based on their past criminal records, social circumstances, and so on. As increasingly sophisticated statistical models come onto the market, businesses across sectors are combining them with big data for predictive analytics, and this has led to serious privacy dilemmas.

Technology v. Privacy Dilemma

The problem arises when businesses collect vast amounts of customer data without the user's consent in order to profile consumer behaviour and track preferences for personalised marketing. Privacy concerns over the increasing use of predictive analysis technology arise essentially because its outcomes have the potential to reveal the most sensitive or crucial aspects of a user's life without their consent, much less their consultation. The technology infers information from the hidden layers that lie beneath surface data: for example, data about furniture, clothing, and accessory preferences can be used to unveil sensitive personal information such as account details, sexual orientation, or political affiliation. One such instance was observed at Cambridge University, where researchers drew accurate inferences about users' gender, sexuality, age, and race from their Facebook activity, such as the kinds of posts they liked or shared.[2]

While predictive analytics can provide customised recommendations catering to particular users, it can also have a perilous impact on a user's private sphere. So even as it is imperative to talk about privacy violations by predictive analysis technology, it is equally important to ask what privacy actually is and what it ascribes to. Does it mean "control over personal information"? If privacy is defined as control over personal information, then it must be remembered that predictive analytics produces customised responses based on data that is already available or already in the public domain.[3] Does this imply that the user has no interest in controlling information that has been let out into the public domain?

It is here that the idea of presumed consent enters: it is presumed that users must necessarily have consented to their information being used if they choose to use a particular platform. This can be better understood by comparing the preference for WhatsApp over paid alternatives. Consumers are usually not enthusiastic about shifting to paid platforms, even though these offer better privacy protections, and continue using WhatsApp. Does this mean that the user has consented to the unauthorised use of their personal information simply because they opt to continue using WhatsApp's services? And does it mean that the user has no interest in protecting such information? These are some of the important questions that any legislator will face while formulating a legal framework. At this juncture, therefore, it becomes important for any comprehensive regulatory regime to define the contours of privacy.

How to Solve the Privacy Trade-off in Predictive Analytics

  1. A significant approach could be the noted scholar Jack Balkin's idea of information fiduciaries, under which online service providers and platforms that collect and process user information are subject to duties of care and loyalty.[4] These duties should apply to data fiduciaries even while they process users' information for predictive analysis. They are substantial obligations that go beyond simply notifying the user and include a duty to act diligently so as not to prejudice the user. This would place a broad limitation not just on privacy harms but also on bias, manipulation, and unfairness. Abandoning the technology would not be prudent given the benefits it can potentially offer, but its use also cannot remain unfettered.
  2. A second solution would be to frame clear guidelines on how data must be stored, processed, or analysed so as to ensure its restricted use.
  3. Third, the parameters on which an AI system bases its predictive decisions can be fixed in advance. For example, a commonly used AI application in the USA is the Public Safety Assessment tool, which makes pre-trial risk assessments based on nine factors, including prior misdemeanour or felony convictions, prior violent crime convictions, failure to appear at a pre-trial hearing within the past two years, failure to appear at a hearing more than two years ago, and a prior prison sentence. Because the methodology of the risk assessment is known beforehand, the outcome is not the creation of a black box, as the sketch following this list illustrates.
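As an illustration of the kind of fixed-parameter, auditable scoring described in point 3, here is a minimal Python sketch of a rule-based risk assessment. The factor names mirror those listed above, but the weights and cut-offs are hypothetical and do not reproduce the actual Public Safety Assessment scales.

```python
from dataclasses import dataclass

@dataclass
class Defendant:
    # The factors mirror those named in point 3 above.
    prior_misdemeanour_or_felony: bool
    prior_violent_conviction: bool
    failed_to_appear_within_two_years: bool
    failed_to_appear_over_two_years_ago: bool
    prior_prison_sentence: bool

# Hypothetical weights; the real PSA uses its own published point scales.
WEIGHTS = {
    "prior_misdemeanour_or_felony": 1,
    "prior_violent_conviction": 2,
    "failed_to_appear_within_two_years": 2,
    "failed_to_appear_over_two_years_ago": 1,
    "prior_prison_sentence": 2,
}

def risk_score(d: Defendant) -> int:
    """Add up the weights of the factors that apply; every step is inspectable."""
    return sum(w for factor, w in WEIGHTS.items() if getattr(d, factor))

def risk_band(score: int) -> str:
    """Map the score to a band using hypothetical cut-offs."""
    if score <= 2:
        return "low"
    if score <= 5:
        return "moderate"
    return "high"

d = Defendant(True, False, True, False, True)
s = risk_score(d)
print(s, risk_band(s))  # prints: 5 moderate
```

Because the factors, weights, and cut-offs are all visible, the basis of every assessment can be inspected and audited, which is what distinguishes this approach from a black-box model.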

Conclusion

In the contemporary era, technology has become an indispensable part of human life, and it is therefore not possible for humans to isolate themselves from the after-effects of its use. So, while predictive analysis technology has potential benefits, its widespread usage has led to a trade-off with privacy. To address these concerns, it is important to take ethical considerations into account while deploying algorithms based on predictive analysis, so as to ensure transparency and accountability. The Council of Europe has taken an important initiative in this regard with its European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems, which identifies principles such as transparency, impartiality, and fairness, requires that data processing be made accessible and understandable to users, and authorises external audits.[5] The Charter establishes guidelines that must be complied with when algorithms process judicial decisions. A similar approach can be adopted for privacy violations by predictive analysis technology; however, one must not lose sight of the fact that not all ethical considerations are programmable into an AI system.


[1] Robert Sprague, Welcome to the Machine: Privacy and Workplace Implications of Predictive Analytics, 21 Rich. J.L. & Tech. 1, 2-3 (2015).

[2] Stratis Ioannidis, Privacy Trade-offs in Predictive Analytics, ACM Digital Library (Jun. 16, 2014, 8:45 PM), https://dl.acm.org/doi/abs/10.1145/2637364.2592011.

[3] Tomislav Bracanovic, Predictive Analytics, Personalized Marketing and Privacy (2019), https://tomislav.bracanovic.ifzg.hr/wp-content/uploads/2020/03/Bracanovic_Predictive-analytics.pdf.

[4] Dennis D. Hirsch, From Individual Control to Social Protection: New Paradigms for Privacy Law in the Age of Predictive Analytics, SSRN (Apr. 15, 2024, 9:45 PM), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3449112.

[5] Council of Europe Portal, https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment (last visited Apr. 19, 2024).
