Big tech companies hold troves of customer data that they use to obtain insights and monetise users’ preferences through targeted advertising. Artificial intelligence (AI) algorithms are becoming part and parcel of our daily lives.

In the past, experts worried about a digital divide caused by unequal access to technology. As it turns out, however, the divide is no longer just about access, but also about how people handle information overload and the overwhelming number of algorithmic decisions that have become ubiquitous. Savvier consumers are aware of how algorithms are affecting their lives, while less-informed consumers are unknowingly relying on them even more.

This raises the question: why is relying on algorithms a bad thing? Algorithms remain a mystery to the masses, as we do not fully understand how they work. They may contain biases and unfairness because of the nature of the data fed to them as input. Transparency in how algorithms work is not a panacea either: even with the process sketched out, few would be able to grasp what it means.

The problem, the commentator argues, lies in how few people can fully understand the way algorithms affect their lives, even though algorithms touch almost everyone’s daily life. In her opinion, digital literacy should focus on understanding and evaluating the consequences of an “always-plugged-in lifestyle”. This matters because such a lifestyle affects how we interact with others, our ability to pay attention to new information, and the complexity of our decision-making.

Read the full article on Channel NewsAsia: Commentary: A new digital divide, between those who opt out of algorithms and those who don’t

Analysis:

At this point, it may not be feasible to help the masses understand how algorithms work in a simplified, transparent manner. However, what digital literacy can teach is how to assess the impact of algorithms on our lives. This way, we can make meaningful judgment calls about how much we are willing to entrust our decisions to machines.

The impact on our lives may be negligible, or even helpful, to the extent that the decisions machines make are low-stakes and low-risk. Targeted advertisements and recommendations of sites or products based on our data may not be much to worry about. Perhaps the real risk of such algorithms is that we start to see only content we agree with, and are less exposed to alternative views or people. That may leave us trapped in an echo chamber of homogeneous views.

It is when those decisions are made on behalf of users and involve higher stakes, such as managing investment portfolios or deciding which candidate to interview for a job, that we need to be more cognisant of how the algorithms are designed. It will still be wise to have humans complement the work done by AI in such important cases.

Companies that have the ability to build products serving customers with AI can also engage and educate those customers. Yet this is a social responsibility they may not prioritise if it does not affect their profits and bottom line. Trends show, however, that millennial consumers are more drawn to companies that are socially responsible and sustainable in their practices. Perhaps being socially responsible should also include ensuring that customers are well informed about the products they purchase and use.

Questions for further personal evaluation:

  1. Do you agree that this new digital divide is no longer just about access?
  2. What would incentivise companies to be more transparent?

Useful vocabulary:

  1. ‘gargantuan’: enormous
  2. ‘plethora’: a large or excessive amount of something
  3. ‘panacea’: a solution or remedy for all difficulties or diseases

Picture credits: https://unsplash.com/photos/Ndxgo2Bn3WY