
Technological Advancement vs Morality

by Krithiga Murugavel, April 9th, 2019

The biggest challenge the engineering world will face — or rather, is facing — is to incorporate morality and ethical values both while designing an engineered product and while engineering a product from scratch.

Are ethics lost in the process of winning the competition?

Today’s market is growing very fast, and the number of players has increased exponentially over the years. With so many companies and engineers trying to design the same end product, everybody wants to outsmart one another. In the process, moral values and safety aspects are sometimes ignored, and the main focus shifts to producing the best goods in the market. A good example of this is Volkswagen’s Dieselgate emissions scandal, in which Volkswagen shipped cars with “defeat devices” — software that could detect test conditions and cut emissions accordingly to improve results — while on the road the cars actually emitted more pollution than the allowed limits.

Another notable example is the failure of Samsung’s Galaxy Note 7, whose battery fires damaged Samsung’s name and its market. In the end, it comes down to a tradeoff between beating the competitors with the best product and sticking to the safety measures, making the moral decision even if it means the product is not the best it can be. This is a hard task: the engineering specifications and the safety measures have to be combined in the right proportions. A proper disclaimer of the safety issues that come with the product also has to be formulated. This disclaimer should be unbiased and contain all relevant information, even if that information might prevent some consumers from buying or using the product.

Moreover, what feels wrong to one person might not be a big deal to another. Sometimes the engineers working directly on the design of a product have a clearer understanding of its safety standards and intended use than the management does. The challenge is to incorporate all these ideas and opinions and arrive at a standard everyone agrees to, without any conflict of interest. Decisions made at the top level should be discussed with the lower-tier technicians as well; everyone should be aware of the safety issues and the compromises being made in the product they are manufacturing, and everyone should hold equal responsibility for designing products that meet society’s safety standards. Maintaining this level of transparency, especially in a technology company, is the real challenge.

Are our “cars” fair and ethical?

This challenge gets even harder when engineers are not just manufacturing a product for consumers but designing a revolutionary idea that will shape the future for everyone. One specific example is self-driving cars. Many engineers are working on making self-driving cars a thing of the future, and in the near future manual cars may well be replaced by autonomous self-driving cars. The safety of not only the people in the car but of everyone on the road, including other drivers, bikers, and pedestrians, rests on the efficient design of these cars.

Self-driving cars raise ethical and moral issues similar to the famous trolley problem, and there are no right or wrong answers to these problems. Most consumers would not buy a car that would sacrifice its driver and passengers in order to save pedestrians on the road; at the same time, it is not ethical to design a car that would run over a group of pedestrians to save the driver and passengers. Here the engineers and designers face a dilemma: they must decide between making cars that will sell more and cars that make the roads safer. The ultimate goal behind the invention and development of self-driving cars is to reduce the number of road accidents each year and improve the quality of life, so designing cars that merely sell more is not the right decision if it makes the roads unsafe for everyone else. At the same time, manufacturing cars that nobody will buy means people keep using the old manual cars, and the accident rate stays the same.

Coming up with a solution or guidelines for this problem is in itself a very difficult challenge; incorporating it into the algorithm of a self-driving car is an even bigger one. Arriving at a universal specification for these cars is a tedious process: the ideas, suggestions, and moral values of every designer, of politicians, and of the customers who will use these cars have to be taken into consideration. And the algorithm has to do justice to both the processing and the efficiency of these cars without compromising on the safety and ethical problems discussed above.

Moreover, the constraints are not straightforward; many hypothetical situations arise. Should the car run over a dog or over pedestrians? Over a dozen people or over a child? Is this a designated crossing? How much risk would the passengers face if the car crashes instead? Is there a child among the passengers? To check all these constraints and make a decision in a fraction of a second, the car’s algorithm has to run many different processes simultaneously, such as image processing, mathematical calculations, and millions of conditional statements, and coordinate various tasks to take the appropriate action.
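For a sense of what folding such constraints into a single split-second choice might look like, here is a deliberately simplified Python sketch of constraint-based action selection. It is not how any real autonomous-driving stack works; the Scenario fields, the candidate actions, and every weight in expected_harm are hypothetical, chosen only to illustrate the idea.

```python
from dataclasses import dataclass


@dataclass
class Scenario:
    """Hypothetical snapshot of what the car's perception system reports."""
    pedestrians_ahead: int        # people in the car's current path
    child_among_pedestrians: bool
    passengers: int               # occupants of the car
    child_on_board: bool


def expected_harm(action: str, s: Scenario) -> float:
    """Toy harm score for a candidate action; all weights are invented."""
    if action == "brake_and_stay_in_lane":
        # If braking is not enough, harm falls mostly on whoever is ahead.
        harm = 1.0 * s.pedestrians_ahead
        if s.child_among_pedestrians:
            harm += 2.0
    elif action == "swerve_off_road":
        # Harm shifts toward the occupants of the car.
        harm = 0.8 * s.passengers
        if s.child_on_board:
            harm += 2.0
    else:
        harm = float("inf")       # unknown actions are never chosen
    return harm


def choose_action(s: Scenario) -> str:
    """Pick the candidate action with the lowest estimated harm."""
    candidates = ["brake_and_stay_in_lane", "swerve_off_road"]
    return min(candidates, key=lambda a: expected_harm(a, s))


# Example: three pedestrians (one of them a child) ahead, one adult passenger.
print(choose_action(Scenario(pedestrians_ahead=3,
                             child_among_pedestrians=True,
                             passengers=1,
                             child_on_board=False)))
# -> "swerve_off_road" under these invented weights (harm 5.0 vs 0.8)
```

Even in this toy version, the hard part is not the code but the weights: every number in expected_harm encodes a moral judgment that someone has to defend.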

The AI “Revolution” or “Catastrophe”?

Another area of research where ethical issues are a big challenge is Artificial Intelligence. AI is making its mark in almost every field now; future technologies may include robots in the military, robotic pilots, and many more, and robotic surgery is already doing well in the industry. Software developers are now aiming at Artificial General Intelligence (AGI), which can be used for general tasks. In such a system, the challenge is to build in algorithms and techniques that let the AI make decisions about moral problems: whether to save the mother or the child during a cesarean, or whether to sacrifice one life to save many or save one at the cost of many, depending on the situation. Making a bot think like a human being and teaching it moral and ethical values will be the hardest challenge engineers developing an AI face.

In binary logic there are only ones and zeros: good and bad, right and wrong, yes or no. Conditions are evaluated and a decision is made, but the real world does not run on yes-or-no questions; there is a lot of grey area. An action generally considered wrong might be the right thing to do in a given scenario. In a self-driving car, for example, killing a person is generally considered wrong, but if it saves a hundred thousand people, it might be the correct decision. Many situations of this kind can arise in a military or war setup, where autonomous robots are expected to be used. Human beings generally make split-second decisions based on intuition; since AIs don’t have intuition, engineers need to think of all the possible conflicts and code them into the AI beforehand to avoid havoc.
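As a toy illustration of the gap between a binary rule and the grey area of real decisions, the sketch below contrasts a hard yes/no check with a graded score. Both functions and all the numbers in them are invented for this example and are not taken from any real system.

```python
def binary_rule(harms_a_person: bool) -> str:
    """A strict yes/no rule: any action that harms a person is forbidden,
    no matter what it might prevent."""
    return "forbidden" if harms_a_person else "allowed"


def graded_score(people_harmed: int, people_saved: int) -> float:
    """A continuous score can express the grey area: the same action scores
    very differently depending on its consequences. The weight of 10 per
    person harmed is an arbitrary choice for illustration."""
    return people_saved - 10.0 * people_harmed


print(binary_rule(True))           # forbidden, regardless of context
print(graded_score(1, 100_000))    # 99990.0: strongly favoured in this toy model
print(graded_score(1, 0))          # -10.0: strongly disfavoured
```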

Also, many AIs use deep learning, which aims at replicating how the brain learns. But before the brain picks up a habit from experience or teaching, we usually analyze it and decide whether it is a good habit or a bad one and whether it benefits us. Designing AIs that can do that kind of analysis is a challenging task. To find solutions to such complicated problems, the ideas, experience, and values of a few engineers are not enough; collective, group work is needed. But here again, different people have different opinions and different notions of what the right thing to do is. In fact, even different governments around the world have different policies, so for a technology like AI, which is supposed to bring a worldwide industrial revolution, agreeing on global rules is a complicated and lengthy process.

What happens next?

Assume we manage to create perfect Artificial Intelligence machines in the near future. What happens next? What are the consequences? Will we live in a perfect world where nothing goes wrong, or will we be in a more miserable state? The answers to these questions are vague and subjective. Yes, having machines do intricate jobs will increase the quality of life manyfold. These machines achieve a level of accuracy that humans would need years of practice and experience to attain. But what does this mean for jobs? Certain jobs cannot be automated, and even those that can still need a human touch. AI developers claim that as automation advances, intellectual job opportunities will increase, but what happens when we develop an AGI with the same intellectual level as a human being?

These ethical issues may not seem like a big deal because they feel like a far-fetched future, but automation and the fourth industrial revolution are accelerating quickly, so we may have to start thinking about them now. Moreover, with globalization, much of the population of developing countries depends on multinational companies for job opportunities. If these MNCs go ahead and automate everything, billions of people will lose their jobs, and many developing countries whose economies and employment depend on these MNCs will face a serious crisis.

There are two types of company: service companies and product companies. Service companies receive their contracts from product companies, and the main consumers of the product companies are employed in these service companies. If automation takes over, its effect will be more prominent in the service companies. If the employees of these service companies lose their source of income, they will eventually stop buying from the product companies, which hurts the product companies and, through lost contracts, the service companies in turn. Making everything automated is therefore not as simple as it sounds.

What should we do?

In order to avoid such a global crisis, companies that implement and use automation or AI technology should keep their worker population in mind. If they replace a certain number of jobs with AI, they should create an equal number of job opportunities, for example in social services and other creative work that cannot be automated.

Technological advancement is definitely here to make the quality of life better. Engineers involved in these designs have a great opportunity, as well as an obligation, to come up with designs that can save millions of lives, make billions of lives better, and grow the global economy. At the same time, they also have the responsibility to build moral and ethical considerations into their designs to keep a calamity from happening. These moral decisions are not easy to arrive at; they require engineers, economists, philosophers, and entrepreneurs to collaborate and work in coordination to come up with optimal solutions.