The dark side of technology

We all love new technology because it makes us more productive and happier, but very few of us think about its disadvantages. One of them is called the “Technological Singularity”.

The Singularity happens when machines become capable of processing instructions faster than the human brain. Ray Kurzweil predicts it will occur around 2045: the fastest PC runs at about 177,730 MIPS (million instructions per second), while the fastest supercomputer reaches around 10^10 MIPS, approaching the human brain’s estimated speed of about 10^14 MIPS.

After this point, a machine could easily predict what we are going to do before we even think about it. How? By simply building a decision tree.

A decision tree is a list of actions and their consequences: something like “if X, do Y; else if Z, do L”, and so on. The deeper the tree, the more possibilities it contains. When a machine is faster than the human brain, it can get beyond our control, because it can predict our reaction to any situation.
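The “if X, do Y” structure above can be sketched in a few lines of Python. The situations and actions here are hypothetical, chosen only to illustrate how following observations down a deeper tree narrows the machine’s prediction:

```python
# Each node maps an observed situation either to a final action (a leaf
# string) or to a deeper sub-tree of follow-up situations.
decision_tree = {
    "opponent attacks": {
        "we are stronger": "counter-attack",
        "we are weaker": "defend",
    },
    "opponent retreats": "advance",
}

def decide(tree, observations):
    """Walk the tree one observation at a time until a leaf action is reached."""
    node = tree
    for obs in observations:
        if isinstance(node, str):      # already reached an action
            break
        node = node.get(obs, "wait")   # unknown situation -> default action
    return node if isinstance(node, str) else "wait"

print(decide(decision_tree, ["opponent attacks", "we are weaker"]))  # defend
```

The deeper the nesting, the more chains of situations the tree covers, which is exactly why raw speed matters: a faster machine can afford a much deeper tree.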

Let’s take playing chess against a computer that is faster than your brain as an example. The computer can build a tree of every move you could make and every reply to it, going deeper and deeper into that tree. You won’t be able to defeat it by any means: you can barely predict the next two or three moves, while the computer can look far further ahead, and with the goal of defeating you, it will definitely win!

But how can that be dangerous? That’s what the movie I, Robot was about. With more intelligence, machines could conclude that humans, by their nature, create conflicts when dealing with each other every day, and that the only way to end those conflicts is to prevent humans from dealing with each other at all. After all, robots are supposed to SAVE us, according to Isaac Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
This would create a conflict between us and the machines, which want to control our actions to prevent harm. Every plan we might devise to end this conflict, even shutting the machines down, would be predicted by them, so the conflict might not end easily, or might not end at all!

The solution in I, Robot was to make the machines feel, not just think and calculate, and to let their feelings guide their actions. That could change the outcome of the conflict.

But don’t count on that: we don’t yet fully understand how feelings work inside us, so we can’t implement them in machines in the near future.
