
Privacy paradoxes, digital divides and secure societies


More and more, we are living our lives in the online space. The development of wearable technology, automated vehicles, and the Internet of Things means that our societies are becoming increasingly digitized. Technological advances are helping monitor city life, target resources efficiently, and engage with citizens more effectively in so-called smart cities. But as with all technological developments, these substantial benefits are accompanied by multiple risks and challenges. 

The WannaCry attack. The TalkTalk data breach. The Cambridge Analytica scandal. Phishing emails. Online scams. The list of digital threats reported by the media is seemingly endless. To tackle these growing threats, the National Cyber Security Centre (NCSC) was established in the UK in 2016 with the aim of making ‘the UK the safest place to live and do business online’. But with the increasing complexity of online life, connected appliances, and incessant data collection, how do people navigate these challenges in their day-to-day lives? As a psychologist, I am interested in how people consider and make decisions regarding these digital risks, and how we can empower people to make more informed choices going forward.

The privacy paradox 

People often claim that privacy is important to them. However, research shows that they are often willing to trade that privacy for short-term benefits. This incongruence between people’s self-reported attitudes and their behaviour has been termed the ‘privacy paradox’. The precise reasons for this are uncertain, but are likely to be a combination of lack of knowledge, competing goals and priorities, and the fact that maintaining privacy can be, well, difficult. 

Security is often not an individual’s primary goal; it is secondary to other tasks that they are trying to complete, such as accessing a particular app, sharing location data to find directions, or communicating on the move with friends and colleagues. Using these online services, however, often requires a trade-off with regards to privacy. This trade-off may be unclear, communicated through incomprehensible terms and conditions, or simply unavoidable for the user. Understanding what drives people to make these privacy trade-offs, and under what conditions, is a growing research area.

The digital divide 

As in other areas of life, access to technology across society is not equal. Wearable technology and smart phones can be expensive. People may not be familiar with computers or may have low levels of digital literacy. There are also substantial ethical questions, still being debated, about how such data may be used. For instance, how much will the information captured and analysed about citizens differ across socio-economic groups?

Research has also shown that people are differentially susceptible to cyber crime, with generational differences apparent (although, not always in the direction that you would expect). Trust in the institutions that handle digital data may vary across communities. Existing theories of societal differences, such as the Cultural Theory of Risk, are increasingly being applied to information security behaviour. Understanding how different groups within society perceive, consider, and are differentially exposed to, digital risks is vital if the potential benefits of such technologies are to be maximised in the future. 

Secure societies – now and in the future 

Regulation: The General Data Protection Regulation (GDPR) comes into force on 25 May 2018. Like me, you may have been receiving multiple emails from companies informing you how they use your data, or asking your permission to keep it. This regulation is designed to help people manage their privacy and understand who has access to their data, and why. It also allows for substantial fines to be imposed if personal data is not managed adequately or if data breaches are not reported to the authorities in a timely manner.

Secure by default: There is a growing recognition that products should have security built in. Rather than relying on us, the human user, to understand and manage security settings on the various devices that we own, such devices should be ‘secure by default’. Previous considerations of humans as the ‘weakest link’ in cyber security are being replaced with an understanding that people have limited time, expertise and ability to manage security. The simplified password guidance provided by the NCSC is a good example of this. Devices, applications and policies should take the onus off the user as much as possible.

Education and communication: People need to be educated about online risks in an engaging, relevant and targeted way. Such risks can be perceived as abstract and distant from the individual, and can be difficult to understand at the technical level. I was recently paired with an artist as part of Creative Reactions 2018 (an art exhibition running in Hamilton House, 11 - 22 May 2018) to portray my research in this area to members of the public in a different way. Understanding how best to communicate digital risks to diverse audiences, who engage with the online world in a range of different contexts, is crucial. In this regard, there is much to be learned from risk communication approaches used in the climate change, public health, and energy sectors.

Overall, there is much to be optimistic about: a renewed focus on empowering people to understand digital risks and make informed decisions, supported by regulation, secure design, and consideration of ethical issues. Only by understanding how people make decisions regarding online activities and emerging technologies, and by providing them with the tools to manage their privacy and security effectively, can the opportunities provided by a digital society be fully realised in the cities of the future.

--------------------------------
This blog has been written by Cabot Institute member Dr Emma Williams, a Vice-Chancellor's Fellow in Digital Innovation and Well-being in the School of Experimental Psychology at the University of Bristol.
Emma Williams
