Coronavirus, Cloud Computing and Cybersecurity: A Conversation with Dr. Arun Vishwanath

by David Ben Melech, November 19th, 2020

As Chief Technologist for Avant Research Group in Buffalo, New York, and formerly as a professor at the University at Buffalo, Arun Vishwanath has spent his professional and academic career studying the “people problem” of cybersecurity.

His current research focuses on improving individual, organizational, and national resilience to cyber attacks by focusing on the weakest links in cybersecurity — Internet users.


He has written and published more than two dozen articles on technology users and cybersecurity issues, and his research has been presented to principals at national security and law enforcement agencies around the world. His research has also been featured on CNN, USA Today, Bloomberg Businessweek, Consumer Reports, and hundreds of other national and international news outlets.

In a recent interview, I had the chance to sit down with Dr. Vishwanath to discuss recent trends that impact cybersecurity, where some of the vulnerabilities are, and how businesses can take proactive steps to mitigate these risks.

How has the shift to remote working and remote learning impacted cybersecurity? 

It has made us more vulnerable for a whole lot of reasons. There are several challenges for a remote workforce, including the availability and reliability of high-speed internet; remote work on legacy software and systems; data protection regulations and compliance; and cybersecurity risks. I discussed many of these challenges and possible solutions in a recent article for IPSwitch.

How has the movement to cloud-based storage and computing services affected cybersecurity?

I talked about this at the Digital Government Institute’s (DGI) conference in the Reagan Building. Cloud computing, at least as it is presently implemented, increases the surface area of vulnerability. Among the reasons: cloud services routinize the sharing of links, so we share more of them with less scrutiny; the current storage services have very poorly designed interfaces, making them easy to mimic (as in spoof) and hard to detect issues in; we depend on browsers for access, and browsers are notoriously easy to infect and attack because they are also used for many, arguably most, online activities; and finally, more files and information are stored on the external cloud service’s or platform’s servers, so we have to depend on some unknown entity for our data’s protection and integrity.
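
To make the spoofing point concrete, consider how a sharing link can be vetted before it is clicked. The Python sketch below checks a link’s host against an allow-list of storage-provider domains; the list itself is hypothetical and would need to be curated and kept current in any real deployment.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of legitimate cloud-storage domains;
# a real deployment would curate and update this centrally.
TRUSTED_STORAGE_DOMAINS = {
    "drive.google.com",
    "dropbox.com",
    "onedrive.live.com",
    "box.com",
}

def looks_legitimate(share_url: str) -> bool:
    """Return True only if the link's host is a trusted storage domain."""
    host = urlparse(share_url).hostname or ""
    # Require an exact match or a true subdomain; substring checks would
    # let look-alike hosts such as 'dropbox.com.evil.example' slip through.
    return any(host == d or host.endswith("." + d)
               for d in TRUSTED_STORAGE_DOMAINS)

print(looks_legitimate("https://www.dropbox.com/s/abc123/report.pdf"))  # True
print(looks_legitimate("https://dropbox.com.evil.example/s/abc123"))    # False
```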

When using the cloud, files can be compromised even if our devices are secure: it is enough for the browser to be hacked or, worse yet, for the service providing the cloud storage platform itself to be breached. I have written about this in a number of different forums, most recently in an article on Medium, where I have also presented a number of solutions.
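
One widely used way to reduce that dependence on the provider, offered here as a minimal sketch rather than as any of the specific solutions from that article, is to encrypt files client-side before they are ever uploaded, so that a breach of the storage platform exposes only ciphertext. This example uses Python’s third-party cryptography package; the file name and the upload step are placeholders.

```python
from cryptography.fernet import Fernet

# Generate and keep the key outside the cloud (e.g., in a local key
# store); whoever holds the key, not the storage provider, controls
# access to the data.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("report.pdf", "rb") as f:          # placeholder file name
    ciphertext = cipher.encrypt(f.read())

with open("report.pdf.enc", "wb") as f:
    f.write(ciphertext)                      # upload this, not the original

# After downloading the ciphertext back, the key recovers the plaintext.
plaintext = cipher.decrypt(ciphertext)
```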

How will AI impact the IT security space in the future?

I have been interested in the impact of AI on information technology for a long time. The disruptive potential of new technology on employment is a subject I wrote about at length in articles for CNN in January and February 2018. During his candidacy for the Democratic presidential nomination, Andrew Yang borrowed many of these ideas in an editorial he wrote for the New York Times recognizing the potential impact of AI on America’s workforce.

What is the biggest mistake you see businesses make that can make them vulnerable to information security risks?

The biggest mistake is assuming they understand their users. Consequently, policies are created, practices put into place, and technologies implemented that take little account of what users want, how they think, and what they do.

It needs to be People first, then Policy, then Technology. In the past, policy dictated what people did; today, it is increasingly technology that dictates. Some of this is implicit: for example, everyone has to use PowerPoint and create slide decks and bullet points. From the Pentagon to academia, too often the move is to create compelling presentations with many talking points, with technology driving us toward ever less content and just quick takeaways (as in tweets and short lightning talks).

Such moves are attempts to accommodate short attention spans and rise above all the other noise, but they replace content and nuance with short talking points and images that appeal to the audience’s emotions. New technologies such as AI are further exacerbating this move toward using technology not just for presenting ideas but for making decisions about them.

Much of this appears objective or efficient at first glance, but in reality, these technology-driven decisions often obscure bias and poor decision making, and even make it easier to argue them away by blaming the technology.

We have already seen this in the use of AI for legal and judicial decision-making and for hiring decisions, where AI-based algorithmic decision-making appears to either replicate human bias or, in some cases, heighten it.
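
One concrete way auditors surface this kind of bias is the “four-fifths rule”: if one group’s selection rate falls below 80% of another’s, the screening process, human or algorithmic, warrants scrutiny. A toy Python check with entirely hypothetical numbers:

```python
def selection_rate(hired: int, applicants: int) -> float:
    return hired / applicants

# Hypothetical outcomes from an automated resume screen.
rate_a = selection_rate(hired=50, applicants=100)  # group A: 0.50
rate_b = selection_rate(hired=15, applicants=100)  # group B: 0.15

impact_ratio = rate_b / rate_a                     # 0.30
if impact_ratio < 0.8:                             # four-fifths threshold
    print(f"Possible adverse impact: ratio = {impact_ratio:.2f}")
```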

Technology is never going to be a panacea for biases in human decision-making, nor will it ever replace humans. Nor do we want it to, because what humans can do, as in think outside the box, technology can only be programmed to do after a human has thought of it or actually done it.

Take the 2009 incident in which "Sully" Sullenberger landed a US Airways airplane on the Hudson River, saving 155 lives. No AI program in 2009 would have done this; none would likely do it even now, because this single data point, set against millions of other flights, would likely be treated as an error by AI algorithms and discarded.
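
A toy sketch of that dynamic: a naive statistical filter applied to hypothetical flight-outcome scores throws away the single extraordinary event precisely because it is rare.

```python
from statistics import mean, stdev

# Hypothetical outcome scores: 999 routine landings plus one
# extraordinary event (the values themselves are illustrative).
routine = [1.0] * 999
all_landings = routine + [25.0]

mu, sigma = mean(all_landings), stdev(all_landings)

# A common three-sigma outlier filter discards the rare event,
# and with it whatever the event could have taught the model.
kept = [x for x in all_landings if abs(x - mu) <= 3 * sigma]
print(len(all_landings) - len(kept), "event(s) discarded as noise")  # 1
```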

We need people because of how they can think, and we must therefore prioritize them: people first, followed by policy, and then technology. We need to think of the people in an organization as a source of potential innovation and efficiency, and use computing technology to help them achieve it.

What recommendations do you have for businesses to help protect against cybersecurity-related risks?

There are many, but among the most pivotal is to develop policies that are aligned with their assets — their users. Begin by assessing the cyber risk among your users.

This is more than just blindly training using phishing simulations or implementing technological security measures. Understand who is at risk, how, and why. Then go about developing programs, policies, practices, and perimeters around them.
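
As a purely hypothetical illustration of what such an assessment could look like in practice, here is a Python sketch that scores users from phishing-simulation behavior so that interventions can be targeted rather than blanket; the fields and weights are made up for illustration and are not the methods presented at Black Hat mentioned below.

```python
from dataclasses import dataclass

@dataclass
class UserRiskProfile:
    clicked_simulated_phish: int  # clicks across recent simulations
    reported_phish: int           # suspicious emails the user reported
    handles_sensitive_data: bool

def risk_score(u: UserRiskProfile) -> float:
    # Illustrative weights only; real weights would come from your data.
    score = 2.0 * u.clicked_simulated_phish - 1.0 * u.reported_phish
    if u.handles_sensitive_data:
        score *= 1.5              # higher stakes, higher priority
    return max(score, 0.0)

users = {
    "analyst": UserRiskProfile(3, 0, True),    # score 9.0: prioritize
    "engineer": UserRiskProfile(0, 4, False),  # score 0.0: low risk
}
for name, profile in users.items():
    print(name, risk_score(profile))
```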

This is the only way to achieve cyber resilience. I think every organization must endeavor to assess user risk and then build defenses based on the data. This is the evidence-based, need-based approach that medical science follows, and organizations today must do likewise. I have written and presented the methods for doing this at Black Hat in 2016 and again in 2017.


Your academic work at the University at Buffalo brought you international recognition as a technologist and futurist. Why did you decide to leave, and what are you doing now?

I still live and work in Buffalo and love the city! I was a tenured professor at the University at Buffalo (also known as the State University of New York at Buffalo) for close to two decades. I was beginning to advise Silicon Valley security companies such as Iconix Inc. and government agencies, and I had started a security consultancy.

Around 2016, I was invited to be a Faculty Associate at Harvard University, and this exposure made me reconsider how I could make a difference in the world of cybersecurity by influencing policymakers and security professionals. This progression led to my interest in serving as a technologist in the public interest and focusing on solving pressing problems in cybersecurity rather than just identifying them, which further enhanced the quality, value, and relevance of my work.

It also made me recognize the limitations of today’s academic research process. Academia has become highly reactive. Things in the real world are moving faster than today’s academic research cycles can support. An average research project in the social sciences takes anywhere from six months to a year to complete, between getting institutional and human-subjects approvals and actually getting the support, time, and help to do the study. Then comes the publication process, which in top journals can take anywhere from a few months to years.

This is why many academics pick topics that appear dated, or are at least irrelevant, by the time they are done. Adding to this, academics are still writing for other academics, and most publications sit behind paywalls that most readers cannot access. For such reasons, much of the core research that used to be done in universities is now done at Google, Microsoft, and Amazon.

Being inside academia can almost cut you off from cutting-edge research. Furthermore, with little real-world insight, especially in cybersecurity, it is hard to sit in an office and say why security patches aren’t applied by people working in national security. To answer that, or even be aware of it, you need real-world experience: working, talking, and being there.

After two decades, I had learned a lot being inside academia; I needed perspective from outside it.

Lead image via Adi Goldstein on Unsplash