Since the beginning of rapid human evolution and expansion, our security, as individuals and as a species, has been the key driver behind our landmark inventions. We discovered and tamed fire to protect ourselves from wild beasts and from the germs lurking in raw food. We invented the wheel to make our journeys safer, so that we could travel longer distances with less fear and a more adventurous spirit. The need for shelter grew out of a primal desire for security in the form of a permanent location. We could quote many more examples, but it is clear that the security of self and/or community is what motivates us to achieve progress.
In this day and age, however, security has taken on a subtler, more nuanced meaning for all of us. Sure, we still install expensive security measures in our homes to protect our valuables and property (unless, of course, you’re Kevin McCallister from the Home Alone series).
But nowadays, security includes emotional and psychological dimensions, as well as legal and ethical standards. Eminent figures like M. K. Gandhi and Martin Luther King Jr. fought for the freedom and security of their respective oppressed communities. Even today, pride parades around the world defend and advance equality for the LGBTQ community.
In light of this, we are also witnessing the emergence of rapidly expanding technologies like Artificial Intelligence, Machine Learning and Blockchain. At the same time, we are witnessing the destructive effects of technology intruding into our lives, as with the Facebook data leaks and the Cambridge Analytica scandal.
In this scenario, we must look at how these technologies can help or harm our notions of security, how they could possibly broaden our very definition of security, and a few personal ideas of mine about how they could be of huge benefit to us in the long run (or even in the short run).
Fair warning though, I am NOT an expert at any of these technologies. I’m just a normal guy, like you, trying to make sense of a rapidly changing world, using my own limited knowledge to navigate through this labyrinth.
Nowadays, artificial intelligence is progressing rapidly, whether it be in increased interactions with the user based on past experiences (cognitive intelligence), ability to understand what the user is feeling based on stimuli cues (emotional intelligence), or in grasping the external environment to tailor the response it gives even further (social intelligence).
However, how far can this go?
As AI systems approach human standards of complexity, a new form of security will start to develop: security for programs. And I’m not even talking about firewalls and antivirus; it’ll soon involve security guidelines for robots as well. A basic example is the fundamental Three Laws of Robotics devised by the famous science fiction writer Isaac Asimov and made popular by the dystopian representation in “I, Robot”.
Indeed, many papers and surveys have already been written on the topic of ‘Robot Ethics’. For instance, a 2011 survey by Pawel Lichocki and his team from the Learning Algorithms and Systems Laboratory (LASA) at EPFL identifies the main problem in ethics (when a robot causes harm) as figuring out who is responsible when such an incident happens: the humans operating the robot, or the robot itself. More recently, the International Conference on Robotic Ethics and Standards (ICRES) 2018 featured keynote speeches on creating a modern standard for AI systems, on autonomous weapons, on the future of war, and more.
Therefore, we can see that Artificial Intelligence could broaden the definition of security to include human–robot and robot–robot interactions, which must happen as a safeguard against dystopian futures and against exploitation by antisocial third parties.
AI is finding a wide variety of applications in terms of personal security as well as detecting potential threats.
The main application for AI in this sphere is identifying possible threats (ones already uploaded into its database): scanning the external environment (in malls, parks and other public places) to identify, analyse and pinpoint a suspect, thereby enabling a sort of preemptive strike or counter-attack in case of suspicious activity. Some technologies have already been developed to implement this.
The first is Evolv Edge, developed by Evolv Technologies, which uses AI to scan and profile every person passing through, say, a particular security checkpoint, using accurate visual sensors and face recognition technology. The AI program then cross-verifies each individual against its database of suspects and criminals, and currently it can screen potential threats at a rate of 1–2 persons per second (which is quite a speedy process!).
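Evolv’s actual matching pipeline isn’t public, but the core idea of this kind of screening, comparing a face embedding against a watchlist of known embeddings, can be sketched in a few lines. (The names, vectors and threshold below are all made up for illustration; real systems use embeddings with hundreds of dimensions.)

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(embedding, watchlist, threshold=0.9):
    """Return the best watchlist match above the threshold, or None."""
    best_name, best_score = None, threshold
    for name, known in watchlist.items():
        score = cosine_similarity(embedding, known)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical 3-dimensional embeddings of two known suspects.
watchlist = {"suspect_A": [0.9, 0.1, 0.4], "suspect_B": [0.2, 0.8, 0.5]}
print(match_against_watchlist([0.88, 0.12, 0.41], watchlist))  # suspect_A
```

The speed the article quotes would come from running exactly this kind of comparison against a pre-indexed database, typically accelerated with approximate nearest-neighbour search rather than the linear scan shown here.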
The second prominent technology is Deep Sentinel, which offers a complete home security package that scans in real time for suspicious activity, recognizes it and alerts the police, in addition to alerting the home owner via a customized mobile app.
Even for military purposes, drones equipped with AI can scan an environment for threats and traverse terrain difficult for humans to reach, thereby extending armies’ reach for surveillance, reconnaissance and counter-terrorism. A prime example is Hivemind Nova, a quadcopter-type drone developed by Shield AI and powered by Hivemind, a machine learning application I will talk more about in the section on Machine Learning.
One of the key challenges with AI is defining its upper and lower limits, i.e. what basic functions an AI should be able to perform (which depends on the application it is to be used for) and to what extent the AI should assimilate and interpret data (which depends on the amount of control a user wants to exert over it). This should be the first question answered, in my opinion, as valid answers specific to the application will help mitigate risks and errors arising from either the software or the method of operation.
Another critical issue is the privacy levels that can be defined. In other words, we should know to what extent the AI can predict or observe information, especially when dealing with sensitive issues involving people. For example, even in a home security system, we should be sure that the data the system gathers when it identifies ‘friends’ or ‘relatives’ is stored locally and isn’t shared with any external server, so as to prevent misuse by anybody.
Nowadays, Machine Learning is discussed in tandem with Artificial Intelligence when it comes to real-life operation or discussions of security. And rightly so, as AI algorithms need huge data sets to respond effectively in the field, which is facilitated by Machine Learning and Big Data.
However, Machine Learning has already transformed how we view security as a concept, in the wake of the data leaks and the Cambridge Analytica scandal. This happened partly because it tapped into our intuitive understanding and interpretation of security. In simpler terms, it made us realise that we aren’t comfortable when some random program knows more about us than we do, and that we feel insecure when said program uses said data to steer strategic decisions in a particular direction.
This expanded understanding is also critically important when creating algorithms for medical purposes, as a person’s medical records are supposed to be confidential between patient and doctor. It requires thorough analysis of what goes into the program, and also defined limits on how far a patient’s medical data can be probed for the sake of greater accuracy.
In a broader sense, Machine Learning has now made us understand that there is a concept of ‘personal space’ even on the Internet domain, and that we should be mindful of our activity in that domain. Personally, I am afraid as to how my data is being used by some third-party program, but from a neutral perspective, it would be interesting (to say the least) to see how this redefines how we view Machine Learning as a concept, and indeed, how we could/would tweak programs to suit our purposes.
Most Machine Learning applications in security are done in sync with AI for better performance of the algorithm. AI algorithms typically use a combination of supervised learning (classification) and unsupervised learning (clustering) to collect and interpret their data sets. However, classification is more predominant when starting off with an AI algorithm in its initial stages. When it comes to classification, the supervised part, there are 4 key phases:

1. Training, where the model learns patterns from a labelled data set.
2. Validation, where the model is tuned against held-out data.
3. Testing, where the final model is evaluated on data it has never seen.
4. Deployment, where the model is put to work in the field.
Across these phases, Machine Learning helps immensely in training and deployment, for the express purpose of collecting data and thereby improving the accuracy of the model; this in turn shortens the validation and testing phases, leading to a shorter execution period overall.
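To make those phases concrete, here is a toy, hand-rolled nearest-centroid classifier. It is not any real security product’s algorithm, and the ‘benign’/‘threat’ labels and numbers are invented, but it shows training, validation/testing, and deployment as distinct steps:

```python
def train(samples):
    """Training: compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(model, features):
    """Deployment: assign the label of the nearest centroid."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

def accuracy(model, samples):
    """Validation/testing: fraction of held-out samples classified correctly."""
    hits = sum(predict(model, f) == label for f, label in samples)
    return hits / len(samples)

training_set = [([1.0, 1.0], "benign"), ([1.2, 0.8], "benign"),
                ([8.0, 9.0], "threat"), ([9.0, 8.5], "threat")]
validation_set = [([1.1, 0.9], "benign"), ([8.5, 8.8], "threat")]

model = train(training_set)
print(accuracy(model, validation_set))  # 1.0 on this toy data
```

In a real pipeline the training set is vastly larger, the validation score drives hyperparameter tuning, and only the final tested model is deployed; the structure, though, is the same.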
Also, as I mentioned earlier, Machine Learning is being used in military drone applications, my example being Hivemind.
In this case, Hivemind uses visual sensors to accurately map environments and feed them as virtual data to the algorithm, which stores that data and uses it to understand different environments. The next time the drone is sent to survey an unknown terrain, it scans and analyzes based on the information it has already collected and learned from.
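Shield AI hasn’t published Hivemind’s internals, so as a purely illustrative sketch, the “map once, reuse on later sorties” idea can be reduced to a minimal occupancy grid in which obstacle cells accumulate across surveys:

```python
# Toy occupancy-grid sketch: the drone marks cells its sensors report as
# blocked, and the accumulated map persists across later surveys.
def update_map(grid, sensed_obstacles):
    """Merge newly sensed obstacle cells into the persistent map."""
    grid |= set(sensed_obstacles)
    return grid

def is_passable(grid, cell):
    """A cell is passable if no prior survey marked it as an obstacle."""
    return cell not in grid

world_map = set()
update_map(world_map, [(2, 3), (2, 4)])   # first survey
update_map(world_map, [(5, 1)])           # a later survey adds knowledge
print(is_passable(world_map, (2, 3)), is_passable(world_map, (0, 0)))  # False True
```

Real systems use probabilistic grids and 3D representations rather than a plain set, but the principle of carrying learned terrain forward is the same.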
Although Machine Learning is already used extensively, and in conjunction with AI as well, I think it can go further. There is a possibility it already exists, but we could use Big Data and Machine Learning to predict the next terrorist strike, or the next worldwide health epidemic.
However, I think the best way Machine Learning can help us is in forensic profiling. Law enforcement is one of the few areas where comprehensive data can be obtained legitimately, so we could condense hundreds of thousands of archives of forensic evidence into a single algorithm that tracks and provides clues, helping us stay two or even three steps ahead of the perpetrators of crimes.
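As a hedged sketch of what such forensic profiling might look like at its very simplest, here is a toy that groups case records by a modus-operandi signature to surface repeat patterns across an archive. (The fields and cases are entirely made up.)

```python
from collections import defaultdict

def profile_cases(cases):
    """Cluster cases on a (method, tool) signature; flag signatures seen 2+ times."""
    clusters = defaultdict(list)
    for case in cases:
        signature = (case["method"], case["tool"])
        clusters[signature].append(case["id"])
    return {sig: ids for sig, ids in clusters.items() if len(ids) >= 2}

cases = [
    {"id": "C1", "method": "lock-pick", "tool": "tension wrench"},
    {"id": "C2", "method": "window",    "tool": "crowbar"},
    {"id": "C3", "method": "lock-pick", "tool": "tension wrench"},
]
print(profile_cases(cases))  # {('lock-pick', 'tension wrench'): ['C1', 'C3']}
```

A production system would use fuzzy clustering over many more features rather than exact signature matching, but the payoff is the same: linked cases surface automatically instead of relying on an analyst’s memory.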
As of now, the most important task is resolving the issue of personal data privacy in other fields like medicine and public safety, because that is a crucial concern that needs to be addressed. Once that is done, laws must be enacted enforcing strict privacy rules for the public and corporations alike.
IoT has had its own share of security problems, with some recent leaks putting serious doubts in consumers’ minds about whether IoT devices can remain secure. Although there have been unspecified reports of Alexa on the Amazon Echo laughing randomly (?), one such IoT-based attack happened quite recently, when an unsecured product demo from LocationSmart, a geolocation data firm, allowed anyone to look up the location of any mobile phone without providing any security credentials. This was especially dangerous as all the major mobile carriers in the US (Verizon, AT&T, Sprint, T-Mobile etc.) were affected, along with a couple of Canadian carriers.
So, IoT can potentially break down security if in the hands of the wrong people, good to know. But, is prevention and safety enough? We need to look into the basic concept of allowing toasters and vacuum cleaners to have microchips inside which can connect to the Internet and can be controlled with the touch of a button.
Although I agree that our lives will be easier at the touch of a button, we should be concerned about whether our button is the only one that can control our devices. Tech going out of control could lead to household accidents as well, which is all the more alarming considering that household accidents already result in the deaths of more than 18,000 people every year in America alone.
Sure, we don’t need to be this pessimistic, but think about your own home appliances being used to do covert surveillance, which can be used for blackmail or worse. Hence, we need to be very careful, as IoT, if misused, has the potential to create huge (yet hidden) security breaches, whose damage can be irreparable.
Although I mentioned Deep Sentinel and how it is touted as an effective home security system, IoT as a means of security is still in its nascent stages; right now, IoT requires its own security rather than providing it.
There are many methods for securing IoT devices, the simplest being IoT authentication by means of a simple password, two-step verification or biometric identification. Methods like IoT encryption and IoT network security are also being implemented, but then again, we are creating security for IoT instead of it securing us. It will be a long time before IoT can act as a viable standalone security measure.
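To illustrate the two authentication factors just mentioned, here is a minimal sketch using only Python’s standard library: a salted password hash plus a simplified time-based one-time code in the spirit of TOTP (RFC 6238). The salt and secret below are placeholders, and a real device would provision them securely:

```python
import hashlib, hmac, struct, time

def hash_password(password, salt):
    """Factor 1: derive a salted hash so the raw password is never stored."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def totp(secret, timestep=30, now=None):
    """Factor 2: a six-digit code derived from a shared secret and the clock."""
    counter = int((now if now is not None else time.time()) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return code % 1_000_000

salt, secret = b"per-device-salt", b"shared-device-secret"
stored = hash_password("hunter2", salt)
assert hmac.compare_digest(stored, hash_password("hunter2", salt))  # factor 1 checks out
print(f"{totp(secret, now=59):06d}")  # factor 2: deterministic for a fixed time
```

Using `hmac.compare_digest` instead of `==` avoids timing side channels, which matters even on a toaster.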
To be honest, I think Deep Sentinel is going in the right direction here, but my idea is that if we can use AI to monitor and defend IoT devices against cyber threats, it would be a better countermeasure than a simple password or an antivirus. Moreover, instead of connecting all devices to the cloud, we could have a local intranet for each home: the requisite devices connect to this intranet and then to the phone, and can only be accessed via a separate authentication system, with AI again being my preferred security option. Once this happens, IoT could be used safely and effectively to provide home security.
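The “AI monitoring” idea doesn’t have to start fancy. Even a simple statistical baseline per device can flag wildly abnormal traffic on that home intranet. Here is a minimal sketch (the traffic numbers are invented) using a z-score-style threshold:

```python
import statistics

def build_baseline(samples):
    """Learn a device's normal behaviour from past traffic readings."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, k=3.0):
    """Flag readings more than k standard deviations from the learned mean."""
    mean, stdev = baseline
    return abs(value - mean) > k * stdev

# Bytes-per-minute a smart toaster normally sends to the local hub.
normal_traffic = [110, 95, 102, 98, 105, 101, 97, 103]
baseline = build_baseline(normal_traffic)
print(is_anomalous(100, baseline), is_anomalous(5000, baseline))  # False True
```

A compromised device exfiltrating data or joining a botnet shows up as exactly this kind of traffic spike; production systems layer learned models on top of the same principle.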
A second idea is that IoT could be implemented for military weaponry (or maybe it already is), but again, much care must be taken, with AI safeguarding against external cyber threats, so that wars aren’t triggered by accident.
A third, more outlandish idea of mine is using IoT to store malware in a physical form and then disposing of it physically: for example, using AI to isolate a particular malware or virus, rerouting it to a particular device (by device I mean a household appliance like a toaster) and then disposing of the toaster. I’m sure it’s not as easy as I make it sound, but it would be very helpful if we could find the source node of any cyber-attack, isolate and store that node inside even an IoT toothbrush, and then throw the toothbrush away, so that the malware has no point of re-entry. (Don’t quote me on this, it’s pure speculation!)
These new technologies are here to stay, and even as we speak, they are increasingly being utilized in our daily lives, to make us more comfortable, or at the very least, more secure. We do have a long way to go, and as the times change, our definition of ‘security’ is sure to undergo some major upheavals. However, we hope that with the right track of progress, and with the right set of laws passed and research done, we could be setting foot into a safer, more secure, yet free world. And that is a future I’m keen to experience.
Did you like this article? If so, please do 👏 (you can do this more than just once if you really enjoyed it) to show your 💓 and also consider responding to this article down in the comments section below if you feel like sharing your thoughts on this!! Thanks for reading! :)
Have a great day!! :)