While social media can be a useful tool for gauging the consensus of the general public, it is open to manipulation and can highlight and facilitate the polarisation of online discourse – especially when discourse is limited to 280 characters.
Rules around data privacy and the way information is presented and disseminated are key to the integrity of social media platforms; they are equally important considerations for participants in the financial services industry.
While the rapid evolution of technology and innovation has accelerated the sector’s development and digital transformation, it has also extended the reach and impact of misinformation and nefarious online activity.
This summer, cybersecurity concerns raised by China’s central internet regulator, the Cyberspace Administration of China (CAC), prompted an investigation into ride-hailing app Didi. In Europe earlier this month, WhatsApp was fined for breaching the EU’s General Data Protection Regulation (GDPR) by failing to alert users to how it was sharing their data with parent company Facebook.
Last Friday (September 10), FinanceAsia participated in a panel, ‘Social Media: Managing Risk and Crisis Management’, part of Compliance Week, organised by the Asia Securities Industry and Financial Markets Association (ASIFMA).
During the discussion, experts shared their insights on how to approach data integrity in an increasingly connected world, where misinformation can be accessed in real time, transcending jurisdictions and time zones.
What are the key differences between past and current modes of dissemination, and what concerns and considerations do they present?
Puneet Kukreja, who leads Deloitte’s Cyber Cloud Centre for Excellence and is the firm’s global cyber cloud leader, commented that the key differences between past and present modes of dissemination are connectivity, veracity and speed.
“It used to be that if you didn’t know something, you could Google it. But now WhatsApp has all the answers. What does this tell us? That none of the authenticated, authorised information dissemination sources carry weight these days.”
“With freedom of speech, it’s possible to say anything and then deal with the pullback of whoever might monitor, or govern it afterwards.”
Fai Hung Cheung, a partner at Allen & Overy, whose caseload in litigation and dispute resolution gravitates towards Greater China, agreed that the expanding scope of modes of online communication presents a challenge for regulators, who are often focussed on specific industries or sectors.
He noted that the tools regulators are equipped with to clamp down on misleading information are slow to adapt, with many not having evolved since their establishment decades ago: “The law is trying to catch up. It isn’t stagnant, but it is in some way shaped by policy, which in turn is shaped by politics, and this generates debate.”
Cheung said that he has yet to see an overarching regulatory framework proposed in Hong Kong with regard to monitoring online content, but that some progress is being made elsewhere.
“The Online Safety Bill was published earlier this year in the UK after years of debate. It is now formally on the parliamentary agenda and gives the regulator, Ofcom, some powers of enforcement. It is the first developed country’s response to the online environment, as far as I am aware.”
What are some of the risks posed by disinformation on social media, and how can businesses and regulators respond to these threats?
Helen Chan, regulatory compliance expert, lawyer, writer and commentator at Thomson Reuters, shared that the speed at which information can be disseminated on social media – and at volume – “creates a huge potential for posts to reach very large audiences, influence their decision making and move markets.”
She cited a recent example in which the shares of French construction company Vinci dropped by 19% after a fraudulent news release suggested the company planned to revise its financial statements to address a number of accounting issues.
“More recently, the volatility in US meme-stocks such as GameStop and AMC has been attributed to the spread of misinformation on social media. The SEC in the US has tried to look into the role social media plays, but it’s hard to attribute blame. We can see that misinformation presents significant risk, but there is no easy solution.”
“Some companies are turning to third parties to monitor social media. There is also some discussion about using blockchain to authenticate information and to track it – similar to non-fungible tokens (NFTs).”
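The blockchain approach Chan mentions rests on a simple idea: hash each piece of content and chain each record to its predecessor, so any later tampering is detectable. The sketch below is a hypothetical, simplified illustration of that principle only – not any platform’s actual system.

```python
import hashlib
import json

def chain_records(records):
    """Build a minimal hash chain: each entry stores the hash of the
    previous entry, so altering any record breaks every later link."""
    chain = []
    prev_hash = "0" * 64  # genesis placeholder
    for content in records:
        entry = {"content": content, "prev_hash": prev_hash}
        # Hash a deterministic serialisation of the entry.
        prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = prev_hash
        chain.append(entry)
    return chain

def verify(chain):
    """Recompute every hash; any edited record invalidates the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        expected = hashlib.sha256(
            json.dumps({"content": entry["content"],
                        "prev_hash": prev_hash},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected or entry["prev_hash"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True
```

Because each record’s hash depends on all earlier records, retroactively editing a post without detection would require rewriting the entire chain – the property that makes the technique attractive for tracking provenance.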
Oliver Leonardo, Officer-in-Charge of the Enforcement and Investor Protection Department at the Philippines Securities and Exchange Commission, shared his thoughts on local developments in what he described as the ‘social media capital of the world’.
“The Philippines is the number one user of the internet and social media, globally. Here, on average a person spends nearly 11 hours online per day – compared to the international average of seven hours. Filipinos are active on social media for two hours longer than their international peers.”
He explained that it is important not only for regulators to exercise the necessary due diligence online, but for individuals to do so as well: “scammers are taking advantage of the fact that social media remains something fairly new”.
“Perpetrators are using social media as the format through which to conduct their activity, exposing users to phishing, malware, ransomware, identity mining, using it to solicit for Ponzi schemes… it is challenging. Our hands are full.”
How are firms protecting their data from being disseminated via social media, and how are they enabling remote-office communication platforms while ensuring compliance with data and confidentiality policies?
Kukreja offered his thoughts. “If the largest banks of the world, with hundreds of thousands of people employed across cyber and risk are continuously fighting this, you can imagine the challenges lower down the pyramid.”
“Take the event in Bangladesh. Millions of dollars were stolen via the SWIFT system. But it wasn’t the bank that recognised the issue, it was the receiving entity in the US that did. This illustrates the complexity of supply chains and how trust works. In this case, the perpetrators that orchestrated the activity sat internally within the organisation, learning how the business processes worked. This puts a lens on the size of the issue.”
He explained that organisations need to have controls at home, in the office and in the cloud.
“Conscious investment is required from a controls perspective – organisations need to stay on top of updates when it comes to the platform controls that exist within an organisation, but this also extends to the Corporate Social Responsibility (CSR) protections of social platforms.”
But this brings up a larger issue.
“Facebook’s decision to stop all activity on Trump’s social media account is a great example of a platform taking accountability on behalf of public sentiment and questioning the integrity of the data provided. But on the other side of this, it took away an individual’s right to freedom of expression. At what point does a social media platform decide that what Trump says is unsafe, but what I say is not?”
Cheung noted that generally it is best to discourage employees from using their private social media profiles for business use. He cited a recent, real-life example where it was difficult to ensure strict compliance.
“Employees had to hand in their devices so that they could be passed on to regulators. In doing so, the employer had to seek employee consent as of course, this required review of all messages, including those that are private.”
“What would have happened if the employees hadn't given consent? Would disciplinary action have been necessary?”
What are the challenges posed and faced by instant messaging platforms such as WhatsApp and WeChat in terms of privacy, misinformation, unlawful content and its takedown?
Chan offered her perspective. “Some companies such as Tencent have capabilities to monitor content, which might assist them with removing it quickly if necessary.”
She cited how the company recently ran a debunker campaign which tracked the demographics of users who shared misinformation.
“While it is difficult to say how effective these methods are at quashing such activity, the data gathered can be useful for businesses or regulators seeking to understand the patterns of misinformation.”
However, when dealing with mass circulation, apps that use end-to-end encryption, such as Signal and Telegram, can present challenges compared with more open channels such as Facebook.
“WhatsApp recently piloted new features such as displaying ‘forwarded’ or ‘forwarded many times’ on some messages. These measures are somewhat informative and interesting, but they don’t give companies the power to stop the spread of misinformation.”
She said that there is growing pressure from regulators to assign more accountability to social media platforms.
“In the US, there are calls to impose consumer protection regulations on social media companies, much as was done with tobacco companies.”
Cheung added that some regulators may not have the tools necessary to instruct a takedown.
He said that while regulators are not powerless if criminal behaviour is suspected, they do not necessarily have overarching authority and may need to persuade a judge or another adjudicating authority if they require action to be taken immediately.
He added that another challenge is presented in terms of jurisdiction. “Law applies territorially,” he said, adding that cooperation is generally required if extra-territorial application of local laws is sought.
He said that in some contexts this is changing: “take Hong Kong’s anti-doxxing law. The amendments proposed will give the privacy commissioner here the power to execute extra-territorial takedowns, with penalties.”
Kukreja ended the discussion with advice in the form of three points.
“We are dealing with a very complex issue, but three things ring true for all custodians of trust and finance."
"1) Common sense should be exercised. Ransomware attacks happen when an employee makes a bad decision because of poor training, or stress. Continuous staff awareness is key and CSR should be enforced around cyber activity. 2) Invest in tech to assist when problems arise. 3) Review the third parties you associate with and have an understanding of those that are connecting to your network – what threat do they pose? Review access controls around the individuals who have access to your media channels.”

