theclever

15 Legitimate Fears About Artificial Intelligence

If you ask about the legitimate fears surrounding artificial intelligence, or AI, there’s so much to learn. In an era where artificial intelligence is becoming involved in our major decisions, a small group of organizations is trying to convince the world of the positive impact of artificial intelligence.

People are simply listening to what these companies are saying instead of talking openly about why they are scared, or why they don’t want technology interfering in their intimate decisions.

There is a saying that the majority rules, but those working on artificial intelligence are a minority in this case. They will set the rules, and their rules will apply to the machines they design.

They do not have any rule book. They don’t even have a set of ethics or an industry standard. They do not know how much power they are going to give machines, and they have no realistic answer for how they will stop machines if they get out of control.

If you think that humans will keep all the control, think again. It is not just about today’s artificial intelligence: tech companies are working on super-intelligent machines which will be greater than humans and will be able to think on their own and make decisions.

If you tell AI advocates that machines will end the world and the human race, they will simply laugh and advise you to stop watching Terminator, Transformers and other AI-inspired movies. But in reality, no one knows the point at which we should stop building more advanced machines. When machines replace humans, there is a very high chance that we will misjudge their power and sign our own death warrant.

So, if you are scared and feel like artificial intelligence will destroy mankind, you’re not alone. People like Stephen Hawking and Elon Musk are also warning everyone of the same. Here are 15 legitimate fears about artificial intelligence you should know about.

15. There Are No Ethics, Industry Standards or Testing Rules

Artificial intelligence is touching our lives in one way or another, and as the demand for such technology grows, ethical issues are rising with it. The authorities and civil society want technology to be produced in an ethical and fair manner, but if you look at the concepts presented by those working on artificial intelligence, the concepts are fuzzy.

Tech companies are beating the drum and the media is constantly telling us to forget our fears about artificial intelligence. These companies are following incentives, and although that’s not necessarily bad, can we really rely on big corporations to choose ethics over profits?

Companies like Google say they have established an ethics board for the safe practice of artificial intelligence. The idea sounds great, but why are the details of who manages and regulates such a board so scarce? Can these companies really certify that a machine will be safe to use? At present, there are no common approaches, and AI is being developed in the absence of concrete industry standards and testing methods.

14. Relations between man and machine are changing

We are already dealing with machine and software addiction. You may find people sharing Facebook addiction memes on social media, and even after realizing they are tech addicts, they find it hard to leave their smartphones alone. But artificial intelligence will not work the same way. It will influence users at a level you will find impossible to ignore. Artificial intelligence will make big life decisions on your behalf; it is already being implemented in healthcare, education, careers and finance.

Siri, Cortana, Google Assistant, Alexa and Bixby don’t make life-changing decisions for you, and you really don’t care how they arrive at their answers. When they can’t understand you or give you the right results, you simply call them dumb and do things manually.

One of the major fears about artificial intelligence is the changing dynamic between man and machine. You will be bound to trust machine-generated results. Will you be able to trust the process by which a machine decides about your health problems? The interaction will require trust between people and machines, because letting artificial intelligence make your decisions is meaningless if you don’t trust the machines. And trusting something is hard, because we humans still struggle to know how to trust and whom to trust.

13. Genetic diversity will be wiped out

Genetic diversity is what makes us different from each other and at the same time, it keeps us all united. Due to this diversity, we are linked socially and culturally, and our reproductive decisions affect everyone in the group.

We feel excited when we see that technology has improved crops, but we neglect the damage caused to natural resources. Gene-tweaking looks good when it helps reduce the risks of disease, but when it is misused to alter existing micro-organisms at the genetic level, we can’t expect a better future.

We have never fully accepted cloning, but we know that it exists. When it comes to altering genetics, many things go unacknowledged. Even the technology companies don’t know whether a change may cause a loss of strength or a new weakness, depending on our environment and heredity. We can’t foresee this, and such changes could wipe out our diversity.

12. AI is goal-oriented to a fault

The fact is, artificial intelligence cannot complete the simplest tasks without context provided by humans. So calling the technology intelligent is unfair, because today’s AI is not even able to figure out solutions by itself. If any company or journalist claims that we are in the age of super-intelligent machines, they are misleading you. Its nonsensical responses and its struggle to find solutions without human help make the technology dangerous.

A human worker performing a task draws on his own experience and can solve problems in many safe ways. You can’t expect the same from machines. If you program a robot to perform certain jobs, it will only be goal-oriented. If something comes between the robot and its goal, the machine will not care about it, because it is designed to focus on the goal itself.

You can imagine situations where the goal-orientation would put humans at risk.
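To make the point concrete, here is a minimal, hypothetical sketch (not any real robot’s code) of an agent whose objective mentions only its goal. The `hazards` set stands in for anything a human worker would route around; because the objective never references it, the agent walks straight through.

```python
def greedy_path(start, goal, hazards):
    """Return the grid cells visited by an agent that only minimizes
    its distance to the goal, one axis at a time.

    Note: `hazards` is deliberately never consulted -- nothing in the
    agent's objective tells it to care, which is exactly the problem.
    """
    x, y = start
    path = [start]
    while (x, y) != goal:
        if x != goal[0]:
            x += 1 if goal[0] > x else -1
        else:
            y += 1 if goal[1] > y else -1
        path.append((x, y))
    return path

# The shortest route runs straight through the hazard at (2, 0),
# and the goal-only agent takes it without hesitation.
path = greedy_path((0, 0), (3, 0), hazards={(2, 0)})
print(path)              # [(0, 0), (1, 0), (2, 0), (3, 0)]
print((2, 0) in path)    # True: the hazard is ignored entirely
```

A human in the same situation would weigh the detour against the danger; the sketch’s agent has no term in its objective for danger at all.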

11. We will have no control over the machines

Fears about artificial intelligence are many. No matter how much comfort tech companies try to give us through sugarcoated words, the reality is that humans are bound to lose control once artificial intelligence rules us.

When we speak about artificial super-intelligence, or super-intelligent machines, we will have to shift our focus from human-provided context to self-inspired actions. To reach the level where machines can make decisions on their own, we will have to give them abilities greater than those of humans.

The human brain is still a mystery to scientists, but artificial intelligence systems always work on instructions, whether self-made or predetermined. The preferences, values and mechanisms by which artificial intelligence decides its actions will require human-like capabilities, and that will slowly make humans lose control over the machines.

“Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” said Prof Stephen Hawking about the rise of artificial intelligence.

10. Value of labor and skills will fall

A computer that lets a company accomplish tasks currently done by skilled employees is soon to become a reality. The deep learning techniques that are part of artificial intelligence have made it possible.

The idea that skilled work can be automated puts even highly trained white-collar workers in danger of being replaced by machines. But if you look closely at the work, a human worker does many different things you don’t see a machine being capable of doing anytime soon.

Many AI advocates are trying to convince the world that there’s no need to fear because artificial intelligence will in fact create more jobs. But do you remember how many jobs technology has already killed, and the new problems it has created for mankind? The answers given by AI advocates sound amazing, but unemployed people will find no one accountable for the losses caused by shrinking job opportunities.

9. The criminal potential of artificial intelligence is bigger than you think

We are still fighting cyber crime, and we have to admit that we have failed to stop it. No matter how good our security agencies and their IT cells are, we are still chasing cyber criminals and remain far behind them. Now, with the advancement of artificial intelligence, we are going to see even more sophisticated cyber crimes.

Let’s say you get a call from your wife asking about some confidential information that’s between you two. You tell her things on the phone, but later you realize that you revealed the details to a computer-generated voice that mimicked hers. Although this is still an imaginary situation, voice-masking software is already being used by criminals around the world, and there will be more.

8. Reality is different from how the media represents AI

They ask you to be fearless. They are not ready to admit that artificial intelligence can fail. They don’t tell you that complex human-written programs do not always perform the way they are expected to, or that the creator can be baffled by a machine once it starts writing programs itself.

Businesses and their PR campaigns will not tell you any of this. When Mark Zuckerberg says “you should not fear artificial intelligence,” he is not completely right. The systems we are going to get in the future are not guaranteed to be developed with safety precautions. A good example is the Stuxnet virus, reportedly developed by the Israeli and US militaries to target nuclear facilities in Iran, which ended up infecting a power plant in Russia as well.

Of course, computer viruses are not artificial intelligence, but AI technology is still built on computer programs and instructions, so injecting a virus into it won’t be a big deal.

7. We are handing over the keys

A highly intelligent system can do what you want it to do. It may solve certain problems, find solutions and give you results, but you can’t expect it to do anything outside its specialized realm. The system can easily put you in trouble if something goes wrong, yet it won’t be capable of damage control. We can only imagine how far such problems can go.

For example, Google DeepMind’s AlphaGo system plays Go brilliantly, but you will find it ignorant when it comes to doing anything outside its domain.

Assuming we create artificially intelligent machines better than humans, we will face control issues. AI advocates and theorists have failed to give a strong, realistic, fact-based account of how they will make sure humans won’t lose control over the machines, or how the machines will be kept friendly towards humans.

Everyone is just speculating on the fact that companies will try to make these machines learn human values. Don’t you feel that it will be more complicated than that?

6. It will lead the human race toward a dead end

Stephen Hawking says the early stages of artificial intelligence look promising. The technology developed so far has already proved that it is very powerful and useful. But he warns that there is reason to fear artificial intelligence, because the day it matches or surpasses humans, things will probably go wrong.

“It would take off on its own, and redesign itself at an ever increasing rate,” he said.

According to Hawking, “We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it.”

Tech entrepreneur Elon Musk has likewise called artificial intelligence “our biggest existential threat.”

The problems and fears about artificial intelligence are going to emerge in every area where machines work with little to zero human intervention. You can dismiss the theories of well-known scientists, but the question of how much freedom humans should give AI is in our hands, and we must think before handing over the keys.

5. AI creators will never know all the solutions

At this point in time, machines are not smarter than humans. But companies are consistently working on building them to be better and superior. When that day comes, we will need a plan for monitoring AI systems; but where’s the plan?

Not even its creators expected the cognitive revolution to make the computer an essential piece of equipment in psychology labs. Doctors are now being replaced by computers that run on code. However, code is complex, can be hacked and altered, and is never perfect.

As the world becomes increasingly influenced by computers, there will be more code to deal with. When machines gain the ability to write code on their own, how will their creators come up with solutions against these complex machines? How will they find the cause of a problem and fix it?

4. Laws and fundamental rights will lose their impact

Laws govern the actions of humans, and sometimes they also govern the machines humans use, such as computers, smartphones and vehicles. But machines are predicted to become so capable that they will be able to drop you off at your office and at home while also operating other machines. So who will be responsible when machines violate laws? Even if in some cases you can sue the owner, what about actions caused by faults in the machines themselves? Who will govern machines capable of thinking on their own, without the consent of their owners?

Ethics, industry standards and laws will simply lose their impact. There will be so many problems that the courts and the police will be kept busy. Aren’t we going to see AI cops? And when there is no one left to take responsibility, who will respect the laws, and who will take care of the fundamental rights of humans?

3. Our education system needs to better prepare the next generation

Our education system teaches students how to build AI. It teaches kids to live in an environment surrounded by simple machines and complex robots. But we will also have to make sure we teach the next generation how to survive in an environment where everything is destined to be replaced by robots.

The fields of science, technology, law, math and engineering do not promote such things. Education could help build a wall between creation and destruction, but our education system is not teaching students to build that wall and prevent bad artificial intelligence practices. We need to redesign the education model, because we don’t know how much power we are going to give artificial intelligence.

2. There’s no cure for ignorance

Do you really think that after successfully developing ‘greater than human’ machines, the organizations building them will take good care of people and eliminate potential threats? Do you really believe that governments, military organizations and companies throughout the world will follow ethics, rules and AI standards that don’t even exist yet?

If you look at world history, no one has ever followed the same rules. Organizations are prone to bending and breaking the rules and later successfully justifying their actions. So even if you don’t break the rules, someone else will.

Don’t you think that we are currently living under the threat of World War 3? Remember when they told us not to fear nuclear power because it would empower our generation? Of course, now we know better.

1. AI is not a problem, but human hubris is

Many companies are telling the world to forget the fears about artificial intelligence. The major problem is that we don’t see the actual threat. Of course, a technology which is still under development is not killing anyone or putting the human race in danger.

When Hawking or Musk say that we should fear artificial intelligence, they don’t mean that we should stop building solutions that are human-friendly. Their fear stems from the hubris of human nature and the way things work in organizations around the world.

Artificial intelligence is like our children. The robots that will be replacing humans are good as long as they are under control and work on the basis of human instructions. Super-intelligence, on the other hand, will enable machines to surpass human abilities, and we are all aware of what that could do. If human beings are capable of so much good and evil, just imagine what an emotionless machine can do.

Source: businessinsider.com 
