Africa: AI and Security

by Zaira Rasool, August 30th, 2023


“AI will wipe out our history”: Securing AI in the scramble for Africa

Words by Alieu Saidy, Zaira Rasool, Amadou A Bah


Artificial intelligence is a rapidly evolving technology with the power to change many aspects of individual lives. However, AI also brings unique security challenges, and these pose a particular threat within Africa. Africa, on the whole, is more of a consumer than a producer of AI technologies. This is a fundamental issue with long-term consequences, once again leaving the continent on the losing end of the power balance over property and resources (in this case, the intellectual kind).


With technology developing faster than many African countries can keep up with, the continent is particularly vulnerable to some of the key risks posed by the growth of AI technologies. For example, the Malabo Convention, a framework designed to deal with cybersecurity measures across Africa, has been adopted by only 15 countries.


The Africa Cybersecurity Alliance found that, during its Summit, discussion of AI tools like ChatGPT raised serious concerns about the risk to traditional values and beliefs, as well as the discriminatory nature of some of the data used by these technologies. For example, when asked questions about Islam and the Prophet, the responses would certainly offend most Muslims in Africa and beyond. There is growing concern, with good reason, that the social contamination that comes from technology shaping intellectual thought will poison the behaviour of future generations.


Bringing Africa into AI Leadership

We can all agree that the bulk of AI development, and much of the data being fed to it, comes from the Western world or a Western perspective, with secondary effects and ongoing influence on the written content, education, and emerging-technology access of post-colonial countries. In this way, the West is leading the development of these technologies and the data mining that comes with them.


However, one thing that may go unnoticed is that Africa is leading the way in something considerably more advanced. We have seen Western policymakers and companies spend years and huge amounts of resources retrospectively designing regulations, hiring staff to police their content, and fighting legal battles over data. That is because progress has been driven by capitalism and, to an extent, by publicity and media, which create excitement as well as quick financial wins. Left without structure, however, these developments quickly start creating unimaginable risks to both the minds of our youth and the social fabric of our communities.


So when African countries, which are often more deeply rooted in tradition and religion, raise concerns, many in the Western world may see this as going backwards. Yet when applied to creating policy and regulation, it is actually one of the most forward-thinking things you can do. One example is the Conformity Assessment Framework within The Gambia, which ensures that thorough checks are made on each piece of technology and equipment that comes into the country, inspecting the spyware and algorithms embedded within them to protect the country from undesirable influences.

Ways that we can secure Artificial Intelligence (AI)

Use robust training data that is free of malicious content


This is one of the most influential things that can be done to secure AI systems. Data poisoning is a type of attack in which the attacker introduces malicious data into the AI system's training data, causing the system to learn incorrect patterns and make wrong decisions. A number of techniques can be used to detect and mitigate data poisoning attacks, such as data validation and anomaly detection, as in the sketch below.
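
As a rough illustration of how anomaly detection can flag suspicious training records before they reach a model, the sketch below uses scikit-learn's IsolationForest. The column names, contamination rate, and synthetic data are assumptions for illustration only.

```python
# A minimal sketch of pre-training data validation, assuming tabular training
# data in a pandas DataFrame. Column names and thresholds are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

def filter_suspect_rows(df: pd.DataFrame, feature_cols: list,
                        contamination: float = 0.01) -> pd.DataFrame:
    """Drop rows that an IsolationForest flags as statistical outliers.

    Outliers are not proof of poisoning, but they are worth quarantining
    and reviewing before the data is used to train a model.
    """
    detector = IsolationForest(contamination=contamination, random_state=0)
    labels = detector.fit_predict(df[feature_cols])  # -1 = outlier, 1 = inlier
    suspect = df[labels == -1]
    print(f"Quarantined {len(suspect)} of {len(df)} rows for manual review")
    return df[labels == 1]

# Example usage with synthetic data standing in for a real training set.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "transaction_amount": rng.normal(100, 15, 1000),
    "session_length_s": rng.normal(300, 60, 1000),
})
clean = filter_suspect_rows(data, ["transaction_amount", "session_length_s"])
```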


Use techniques to detect and mitigate evasion attacks


Evasion attacks are a type of attack where the attacker tries to fool the AI system into making an incorrect decision. This can be done by using adversarial examples, which are carefully created inputs that the AI system misclassifies. There are a number of techniques that can be used to detect and mitigate evasion attacks, such as using input validation and adversarial training.
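
To make the idea of adversarial examples concrete, here is a minimal PyTorch sketch of the fast gradient sign method (FGSM), which perturbs an input in the direction that most increases the model's loss; folding such perturbed inputs back into training is the essence of adversarial training. The model and epsilon value are placeholders, not any specific system's configuration.

```python
# A minimal FGSM sketch in PyTorch, assuming a classifier `model` and a
# labelled input batch. Epsilon controls how large the perturbation may be.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the gradient to maximally increase the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Adversarial training then mixes these perturbed inputs into each batch:
#   x_adv = fgsm_perturb(model, x, y)
#   loss = criterion(model(x), y) + criterion(model(x_adv), y)
```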


Make AI systems more explainable so that they can be audited and debugged


AI systems are often black boxes, meaning that it is difficult to understand how they reach their decisions. That makes it hard to detect and mitigate security vulnerabilities. To tackle this, it is important to make AI systems more explainable so that they can be audited and debugged, using techniques such as interpretability methods, transparency reporting, and explainability tooling.
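
One lightweight way to open up a black box for auditing is to measure how much each input feature actually drives the model's predictions. The sketch below uses scikit-learn's permutation importance on a synthetic dataset; the model and data are stand-ins, not a real deployment.

```python
# A minimal sketch of auditing a trained model with permutation importance,
# using a synthetic dataset as a stand-in for real data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# large drops indicate features the model genuinely relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
```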


Design AI systems to be fair and equitable


AI systems can be biased, meaning that they make different decisions for different people or groups of people. As we discussed earlier, this can lead to discrimination and other forms of harm. To address this, it is important to design AI systems that are fair and equitable, using techniques such as fairness testing and bias mitigation; a basic fairness test is sketched below.
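
A basic fairness test is to compare how often a model gives a favourable outcome to each group it serves. The sketch below computes a demographic-parity gap by hand with pandas; the group labels and predictions are entirely hypothetical.

```python
# A minimal fairness-testing sketch: compare positive-prediction rates
# across groups. Group names and predictions here are made up.
import pandas as pd

results = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [ 1,   0,   1,   0,   0,   1,   0,   1 ],  # 1 = favourable
})

rates = results.groupby("group")["predicted"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Demographic parity gap: {gap:.2f}")  # near 0 suggests similar treatment
```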


Protect the privacy of data used by AI systems


AI systems often collect and process large amounts of data. This data can be used to track people's activities, identify them, and make inferences about their personal lives. This raises privacy concerns. To tackle this, it is important to protect the privacy of data used by AI systems. This can be done by using techniques such as anonymization, pseudonymization, and encryption.
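
As a small illustration of pseudonymization and encryption, the sketch below replaces a direct identifier with a salted hash and encrypts a record using the `cryptography` library's Fernet recipe. Key and salt handling are deliberately simplified assumptions; a real deployment would use a proper key store.

```python
# A minimal sketch of pseudonymizing an identifier and encrypting a record.
# Assumes the `cryptography` package is installed; key handling is simplified.
import hashlib
import json
from cryptography.fernet import Fernet

SALT = b"replace-with-a-secret-salt"   # keep this out of source control

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

key = Fernet.generate_key()            # in practice, load from a key store
fernet = Fernet(key)

record = {"user": pseudonymize("amadou@example.com"), "balance": 1500}
ciphertext = fernet.encrypt(json.dumps(record).encode())

# Only holders of the key can recover the record.
print(json.loads(fernet.decrypt(ciphertext)))
```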


Use security-by-design principles to build security into AI systems from the start


This means considering security at every stage of the development process, from the design of the system to the way it is deployed and used; a small example of validating inputs at the system's boundary follows below. It also means raising awareness of the security challenges posed by AI among developers, users, and decision-makers, which helps ensure that everyone is taking steps to mitigate these risks.
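
One practical expression of security by design is validating every input at the system's boundary before it ever reaches the model. The field names and limits in this sketch are purely illustrative assumptions.

```python
# A minimal sketch of input validation at an AI system's boundary.
# Field names and limits are illustrative, not from any real deployment.
def validate_request(payload: dict) -> dict:
    """Reject malformed or out-of-range inputs before they reach the model."""
    if not isinstance(payload.get("text"), str):
        raise ValueError("'text' must be a string")
    if len(payload["text"]) > 2000:
        raise ValueError("'text' exceeds the maximum allowed length")
    amount = payload.get("amount", 0)
    if not isinstance(amount, (int, float)) or not (0 <= amount <= 1_000_000):
        raise ValueError("'amount' is out of the accepted range")
    return payload  # safe to hand to the model

validate_request({"text": "transfer request", "amount": 250})
```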


Further research into how blockchain can be used to protect the integrity and security of data


We have seen in Africa how money-transfer products like Wave are being used to scam people en masse with the power of AI. In other examples, we have seen how data can become malformed over time if even the smallest inconsistencies exist, or worse, how values can dissolve through small inaccuracies in otherwise age-old testaments. Blockchain is a powerful tool for protecting against this because of its immutability: so long as we can create an accurate data system the first time around, it gives us the best opportunity to keep it that way.
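
The property being relied on here is immutability: each block commits to the hash of the previous one, so tampering with any historical record breaks the chain. The toy hash chain below illustrates that principle without assuming any particular blockchain platform; the transaction data is invented.

```python
# A toy hash chain illustrating why tampering with stored data is detectable.
# This is a sketch of the principle, not a production blockchain.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": previous})

def verify(chain: list) -> bool:
    """Return False if any block's link to its predecessor has been broken."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, {"tx": "send 500", "to": "account-1"})
append_block(chain, {"tx": "send 200", "to": "account-2"})
print(verify(chain))                      # True
chain[0]["record"]["tx"] = "send 5000"    # tamper with history
print(verify(chain))                      # False: the chain no longer verifies
```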

Global collaboration and standardized regulation


If we can create a global standard of regulation, it can pave a pathway for many countries to match up and catch up. That is to say, Western countries can catch up with some of the detailed regulatory measures being developed in other parts of the world, while those countries can match some of their key technological developments against the global best. Creating best practices and regulations, regularly updating them, and making sure they are informed from a global perspective on culture, tradition, and values: this is the most important part.

Authors

Amadou A. Bah is the Founder of the Gambia Cybersecurity Alliance and the elected President of the Africa Cybersecurity Alliance. Hailing from rural Gambia, he has made a huge impact across the continent advocating for cybersecurity, one of the most pressing concerns within Africa. He has over a decade's experience in IT systems and cybersecurity networks and works tirelessly as a youth activist. You can follow these links to his LinkedIn and Twitter.


Zaira Rasool is the Founder of Coderoots, on a mission to provide digital access and technology education to those who need it most. The majority of Coderoots' work is focused on providing access to resources that allow people to create their own pathways within technology, succeed in their careers, and provide opportunities for their country. She is a Software Engineer with a decade's experience in community development. Find out more on Instagram and via this video.


Alieu Saidy is one of the young and gifted members at Coderoots, with a keen interest in networking and cybersecurity. At 23, he has studied Network A+, Network N+, HTML, CSS, and Python, and is seeking opportunities in software or networking (or ideally both!). You can find him on Instagram and on LinkedIn.