A growing amount of analysis of data collected from individuals and from sensors is shaping new products/services and affecting our economic and social lives in novel and unpredictable ways. Consequently, there is rising concern about how such data, particularly personal data, is collected by private and public/government agencies. Against this backdrop, the Personal Data Protection (PDP) Bill, 2019 proposed a framework for protecting personal data and giving individuals control over how entities manage their data.

The PDP Bill, 2019 was largely based on the European Union's General Data Protection Regulation (GDPR). Personal data, as per the PDP Bill, 2019, is defined as "data about or relating to a natural person who is directly or indirectly identifiable, and shall include any inference drawn from such data for the purpose of profiling". The PDP Bill, 2019, like the GDPR, excluded "anonymized" data from the definition of personal data, since, as per its specifications, such data could not personally identify an individual.
A growing number of new products/services rely not only on personal data, but also on data regarding natural phenomena, national resources, infrastructure, etc. Coupled with these developments is a rising concern within governments about the ownership of various types of data by private companies, especially Big Tech, exemplified by the various legal and regulatory initiatives against Google, Facebook, Amazon and others globally.
Similar concerns led the Indian government to set up a Committee of Experts (CoE) to develop a framework for the regulation of Non-Personal Data (NPD). Recently, the CoE came out with its second report, in which it sought to revise the scope of the recommendations of its first report along several dimensions. The premise of the CoE is that NPD covers all data that is not within the scope of the PDP Bill, 2019 and also includes data regarding "machines and natural phenomena".
In this article, we consider the scope of NPD regulation proposed by the CoE in its second report.
The first question is: since the PDP Bill does not consider anonymized data under the definition of personal data, does it become "non-personal" data to be regulated by the proposed NPD Regulator (as per the CoE recommendations)? Anonymized data refers to aggregated personal attributes which should no longer enable identification of the person to whom they relate. Anonymized data is, therefore, de-personalized.
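As a simple illustration of this kind of de-personalization, consider the following sketch in Python (hypothetical, with made-up field names and records): individual-level records are aggregated into group counts so that no row refers to a single person.

```python
import pandas as pd

# Hypothetical individual-level records collected by a service provider.
subscribers = pd.DataFrame({
    "name": ["A. Sharma", "R. Gupta", "S. Iyer", "P. Rao"],
    "city": ["Bengaluru", "Bengaluru", "Delhi", "Delhi"],
    "plan": ["prepaid", "postpaid", "prepaid", "prepaid"],
})

# Aggregation drops the direct identifier and reports only group-level counts,
# so the result is intended to be "anonymized" (de-personalized) data.
aggregated = (
    subscribers.drop(columns=["name"])
               .groupby(["city", "plan"])
               .size()
               .reset_index(name="subscriber_count")
)
print(aggregated)
# Note: groups with a count of 1 can still single out an individual,
# which is exactly the re-identification concern discussed next.
```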
With the proliferation of sources and of data collected from individuals, there are a variety of data sets, comprising different attributes, with varying degrees of personal identifiability. Given the increasing sophistication of algorithms and advances in hardware, the possibility of inferring personal data from anonymized data is growing. Advances in Big Data, and the availability of data sets that span a variety of domains, make it increasingly possible to single out or identify a specific person even from strongly anonymized data. Anonymization as a data protection mechanism is thus a legacy of the era when large amounts of data were "siloed", and it is clearly found wanting today. A substantial body of recent, well-publicized research shows that data protection through anonymization is a crude and inadequate mechanism.
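To make the re-identification risk concrete, here is a minimal sketch (in Python, using pandas) of a classic linkage attack: an "anonymized" dataset that retains quasi-identifiers such as postcode, birth year and gender is joined against a publicly available auxiliary dataset that carries names. All column names and records are hypothetical.

```python
import pandas as pd

# Hypothetical "anonymized" health records: direct identifiers removed,
# but quasi-identifiers (postcode, birth year, gender) retained.
anonymized = pd.DataFrame({
    "postcode":   ["560001", "560001", "110002"],
    "birth_year": [1985, 1990, 1978],
    "gender":     ["F", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],
})

# Hypothetical public auxiliary data (e.g. a voter roll) carrying names
# alongside the same quasi-identifiers.
voter_roll = pd.DataFrame({
    "name":       ["A. Sharma", "R. Gupta", "S. Iyer"],
    "postcode":   ["560001", "560001", "110002"],
    "birth_year": [1985, 1990, 1978],
    "gender":     ["F", "M", "F"],
})

# A simple join on the quasi-identifiers re-attaches identities to the
# "anonymized" records: anonymization is reversed without any direct identifier.
reidentified = anonymized.merge(voter_roll, on=["postcode", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```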
Thus, given that there is a chance of re-identification, the definition of "anonymized" data must account for reversibility. The GDPR recognizes this and acknowledges the risk of identification from anonymous data. However, several other bodies, such as the European Data Protection Board and some national regulators, do not concur that such a risk is acceptable. In the Indian case, the PDP Bill, 2019 refers to anonymization as: "such irreversible process of transforming or converting personal data to a form in which a data principal cannot be identified, which meets the standards of irreversibility specified by the Authority".
For data to be truly anonymized, irreversibly, as stated in the PDP Bill, 2019, it must not be capable of being cross-referenced with other data to reveal identity. Since irreversible anonymization is almost impossible to achieve in practice, the EU GDPR also incorporates pseudonymization, which is "… the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person."
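As an illustration of the distinction the GDPR draws, the following sketch (hypothetical, in Python) pseudonymizes records by replacing the direct identifier with a random token, keeping the token-to-identity mapping separately. Because the mapping allows re-attribution, this is pseudonymization rather than irreversible anonymization, which is why such data stays within the GDPR's scope.

```python
import secrets

def pseudonymize(records, id_field="name"):
    """Replace the direct identifier in each record with a random token.

    Returns the pseudonymized records and the token->identity mapping,
    which must be stored separately under technical and organisational controls.
    """
    mapping = {}
    pseudonymized = []
    for record in records:
        token = secrets.token_hex(8)
        mapping[token] = record[id_field]
        new_record = {k: v for k, v in record.items() if k != id_field}
        new_record["token"] = token
        pseudonymized.append(new_record)
    return pseudonymized, mapping

# Hypothetical usage: with the mapping, identities can be re-attached,
# so the output remains personal data under the GDPR.
records = [{"name": "A. Sharma", "city": "Bengaluru", "plan": "prepaid"}]
data, key_map = pseudonymize(records)
```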
Pseudonymized data comes under the purview of the GDPR. The PDP Bill, 2019 does not refer to pseudonymization, and neither does the CoE. Thus, given the regulatory flux with respect to anonymization, it would not be prudent to come up with overly prescriptive mechanisms in India, or to bring anonymized data under the purview of the proposed NPD Regulator.
The proposed scope of NPD thus overlaps with the PDP Bill, 2019 and is hence a source of confusion. Interestingly, this overlap is recognized by the CoE, which has excluded mixed data sets (those that have "inextricably linked personal and non-personal data") from its scope and recommends that these be governed by the PDP Bill, 2019. But the most meaningful analysis comes from mixed data sets, and hence these are the most used data sets in the digital economy. This exclusion further reduces the scope of activities envisaged by the CoE. In a new area of regulation, a clear-cut scope helps to give a stable regulatory regime.
The GDPR broadly labelled anonymized data as non-personal while incorporating a risk-based assessment of whether such anonymized data would come under its purview. Going forward, the GDPR's scope of what constitutes personal data is only likely to expand to include greater amounts of anonymized data. Given these developments, we find that the scope set out by the CoE is not nuanced, and should not be accepted without first firmly establishing the contours of the PDP Bill, 2019 and the current state of technology.
Accepting the GDPR's treatment of anonymized data as non-personal is not appropriate in our situation, as the scope of regulation, the regulatory environment, the state of technology and other contexts are not similar. The CoE should consider that even if it were possible to achieve irreversible anonymization, the underlying data is aggregated, de-personalized personal data, not non-personal data.
As far as the inclusion of data from machines/devices/sensors under the scope of NPD regulation is concerned, it must be borne in mind that these belong to individuals or enterprises. Such data would be their intellectual property, as it would be used to improve, design, or provide services, and it would be covered under the Copyright Act. Moreover, any data gathered from sensors on individuals could be used for profiling and hence would again come under 'mixed data sets', which would then be governed under the PDP Bill, 2019.
Regarding data about national infrastructure and resources, and data collected by public agencies for providing public services, such data is already mandated to be shared under the government's Open Data Policy (National Data Sharing and Accessibility Policy), with appropriate safeguards regarding individual privacy. That this policy is poorly implemented, and thus does not make data available to citizens in a systematic, usable way, needs to be addressed by bringing in greater accountability and better enforcement.
There is a need for an enabling, incentive-based mechanism for sharing data collected by private agencies on national infrastructure. The CoE makes a valid case for making large datasets in agriculture, healthcare and education available to help in developing public policy. However, sharing for knowledge enhancement cannot be mandated; it has to be incentive-based. In the EU and the UK, the discussion on sharing such data sets is based on creating an enabling environment, in which various sectoral agencies, either working independently or in collaboration with others, develop a framework for sharing. Such initiatives, when mandated through an overarching regulatory regime as suggested by the CoE, are likely to be unsuccessful.
Yes, there is a need to support start-ups and innovation, as identified by the CoE, but a poorly designed framework for sharing data is not going to help them. In fact, it may actually harm them. Besides, it is not just access to data that start-ups need: they also require knowledge and skills in developing sophisticated algorithms, a deep understanding of customer domains, and cross-domain knowledge. None of these is currently addressed by the proposed NPD framework.
In summary, as shown above, significant discussion, deliberation and research are required regarding the scope of the proposed NPD regulation. The fact that the use of data for economic purposes is expanding is not, by itself, a reason to regulate it prescriptively. Before proceeding with any proposed regulation on NPD, we need to review the final contours of the PDP Bill, 2019 as it actually gets enacted into an Act, since it is inextricably linked to the proposed NPD regulation.
Rushing into regulation as proposed by the CoE would cause confusion between the scope of the PDP Bill, 2019 and the NPD regulation. We should also take into account several other existing regulations, such as Competition Law and IPR, which in fact can and do address issues of economic redistribution and allocation. A detailed assessment of the lacunae in existing regulation, in the context of dealing with the economic aspects of data and their linkage with ownership, may need to be worked out. Strengthening or extending the scope of existing legal and regulatory institutions may be a better approach than creating yet another regulator de novo, especially in an uncertain and technologically fast-evolving environment.
","blog_img":"","posted_date":"2021-03-23 12:44:00","modified_date":"2021-03-24 11:03:04","featured":"0","status":"Y","seo_title":"An existentialist dilemma for the Non-Personal Data regulation?","seo_url":"an-existentialist-dilemma-for-the-non-personal-data-regulation","url":"\/\/www.iser-br.com\/tele-talk\/an-existentialist-dilemma-for-the-non-personal-data-regulation\/4861","url_seo":"an-existentialist-dilemma-for-the-non-personal-data-regulation"}">