Tina Gallico

2018 Review: technology & society

Updated: Aug 2, 2023


The exposure of corporate data usage practices and the implications of social media's free rein evoked global debate about the negative impacts the largest internet-based companies may be having on their individual users and on society as a whole.

 



Data privacy and use


During 2018 the unveiling of data mining practices and numerous data leaks raised significant public discontent about how online platforms obtain and use personal information and protect user privacy. Meanwhile, new regulations for data privacy made a timely debut in the European Union and California.


The Cambridge Analytica scandal involved up to 87 million Facebook user profiles being obtained by the now-defunct political consulting firm via a survey taken by only 270,000 Facebook users. It highlighted Facebook's lack of oversight and its complacency in sharing user profile information with third-party applications. Later in the year another Facebook data security breach exposed information from approximately 50 million users. Similarly, Google+, which will be shut down in 2019, unintentionally disclosed millions of users' personal information on more than one occasion.


To provide greater protection for individuals in the use and storage of their personal information, the General Data Protection Regulation (GDPR) came into effect in the European Union. The regulation imposes new obligations on businesses regarding how they obtain information from customers and what they do with it: explicit consent is needed for the use of personal information, and citizens are afforded the right to know what data is held about them as well as the right to be forgotten.


The European GDPR requirements are now the default data privacy standard for internationally operating businesses. Any company worldwide dealing with the data of EU citizens must comply with GDPR or risk substantial fines. By the end of the year the UK's Information Commissioner's Office (ICO) had issued a number of fines to companies including Facebook, Equifax and Uber.


Echoing the aims of GDPR, in the United States, California passed the California Consumer Privacy Act to ensure greater rights for citizens regarding the use and management of personal data. It represents the strictest data protection regime in the US.



Social media deception and misinformation


Fake news, or misinformation and propaganda, influenced democratic processes and evoked civil unrest internationally. It has since been uncovered that misinformation spread on social media, alongside the use of targeted political advertising, impacted the outcomes of the US election and the Brexit vote.


Fake news has also had broader social and cultural impacts around the world. For example, it has been linked to civil unrest in Africa, and Facebook was implicated in the ethnic cleansing in Myanmar.


Fake news can be spread broadly and quickly on social media through the use of fake accounts and bots (false accounts run by algorithms). Tens of millions of Twitter accounts were suspected of being bots spreading content for particular interests or agendas. Worse still, human Twitter users find it difficult to tell the difference between human and non-human accounts.


In response to the growing issue of fake news locally and globally, in July 2018 the UK parliament published an interim report from its inquiry into disinformation and fake news. It warned of social media's effects on both the information and advertising ecosystems and called for greater governance of these spheres.



How to govern data practices of the .com giants?


Last year Amazon, Apple, Facebook, Google, Twitter and others were called to appear in US congressional hearings to explain their activities. The July hearings focused on the content filtering practices of the major social media players. The September hearings concerned foreign influence operations and their use of social media platforms (i.e. in response to accusations of Russian interference in the US presidential primaries and election), and soon after, another hearing examined safeguards for consumer data privacy.


Exchanges on the testimonial days highlighted a core problem: those responsible for protecting individual rights and the broader public interest do not adequately understand the technologies and business models involved. Since the hearings, they have also not committed to definitive action to suitably regulate these companies' activities.


Data use and information management are key to the business models of the most successful technology and internet companies. The .com giants are able to monetize information about users through the personalization of their own offerings (e.g. product design and recommendations), through data brokerage with others, and most significantly through direct third-party advertising. Through the masses of data collected by the companies and websites we interact with, advertising can be targeted to reach extremely narrow audiences filtered by postcode, occupation, age, ethnicity, education, interests, marital status, dependents, past purchases and more.


Marketing agencies argue that a majority of consumers have little issue with the use of their information for personalised products and services under certain conditions, such as trusting the provider and the data not being passed on to others. In reality, however, these conditions are rarely transparent or guaranteed.


An unintended byproduct of targeted advertising is that it can exclude certain social groups from access to potentially useful products and services. Such was the case with the US Department of Housing and Urban Development's (HUD) complaint against Facebook for violation of the Fair Housing Act. HUD accused Facebook of enabling discriminatory activity by allowing landlords' advertisements to target specific groups and exclude others, impeding some social groups' access to housing opportunities.


On the other hand, last year saw a case of ethically aligned self-governance unfold at Google. Google had been secretly undertaking work with the US Department of Defense. Known as Project Maven, the work involved the development of AI for use in military activities, such as improving the capabilities of autonomous drones in warfare. When the project was made public, Google staff openly objected to the organization's involvement. After about 4,000 employees signed a petition, Google discontinued its involvement, citing AI ethics reasons.

