World Futures: Artificial Intelligence – Part Four

on December 3, 2018 - 11:52am
By ANDY ANDREWS
Los Alamos World Futures Institute
 
Previously we looked at presenting a consumer with a list of video program selections based on “preferences” of the individual consumer. The service being provided is essentially a “cloud library” of video programming with the “librarian software” helping in the selection process.
 
Concurrently, the “cloud librarian” is collecting data about the collective consumers to help in the selection or creation of new media products.
 
The “cloud librarian” makes decisions in providing individual customer service but is restricted from making decisions about new acquisitions.
It serves the consumer directly while collecting data and presenting it to the manager.

Viewed another way, the machine does not act on or make decisions about the company itself. Instead, its goal is to assist the user, reprogramming itself for better user satisfaction.
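To make the idea concrete, here is a minimal sketch in Python of what such a self-adjusting “librarian” might look like. Everything in it, the titles, the genre tags and the preference weights, is invented for illustration; it is not any real service’s code.

```python
# A minimal sketch of a self-adjusting "cloud librarian."
# Titles, tags, and weights are hypothetical.

catalog = {
    "Space Docs":  {"documentary": 1.0, "science": 1.0},
    "Cop Drama":   {"drama": 1.0, "crime": 1.0},
    "Baking Show": {"reality": 1.0, "food": 1.0},
}

# The user's current preference weight for each genre tag.
preferences = {"documentary": 0.6, "science": 0.8, "drama": 0.3,
               "crime": 0.1, "reality": 0.2, "food": 0.4}

def score(title):
    """Preference-weighted match between a program and the user."""
    return sum(preferences.get(tag, 0.0) * w
               for tag, w in catalog[title].items())

def recommend():
    """Present the catalog ranked by the user's current preferences."""
    return sorted(catalog, key=score, reverse=True)

def watched(title, rate=0.1):
    """'Reprogram' the librarian: nudge weights toward what was watched."""
    for tag, w in catalog[title].items():
        preferences[tag] += rate * (w - preferences[tag])

print(recommend())       # ['Space Docs', 'Baking Show', 'Cop Drama']
watched("Baking Show")   # each viewing shifts future rankings
```

Notice that nothing here touches the company’s acquisition decisions; the code only reranks what already exists, which is exactly the restriction described above.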

If the machine’s decisions are in error, only a single user is affected, maybe. And from a business perspective, the goal of customer satisfaction is pretty straightforward.

Now change the goal. Assume it is important to the community to reduce the consumption of natural gas, and that the distribution system is controlled by a computer program with this goal.

The computer “sees” that the temperature is falling because winter is approaching. How can it control consumption? Could it send out a directive to all consumers to reduce the desired temperature of heated space by two degrees Fahrenheit?
Assume it did so and consumption dropped. The computer would move notifications to the top of its list of preferred solutions.

Now assume that winter is getting even closer, temperatures are falling faster, and consumption is rising again.

The computer’s goal is to reduce consumption, so it reduces the pressure in the distribution system. Lower pressure reduces the delivery rate, forcing a reduction in consumption.
The computer’s goal is to control the use of natural gas in the system, not to ensure that every individual structure is sufficiently warm. In fact, the computer sees that reducing availability easily satisfies the program goal. So why not just turn off the distribution system?
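A few lines of hypothetical Python make the trap concrete. The candidate actions and consumption estimates below are invented; the point is that a controller whose only objective is “minimize consumption” will rank shutting off the system as the best action unless a human writes the missing constraint into the goal.

```python
# Sketch of a goal-misspecified controller. Actions and numbers are invented.

# Predicted daily consumption (arbitrary units) under each candidate action.
actions = {
    "do nothing":                    100.0,
    "ask users to lower by 2 deg F":  92.0,
    "reduce line pressure":           80.0,
    "shut off distribution":           0.0,  # satisfies the goal perfectly
}

# The program's entire "judgment": pick whatever best meets the stated goal.
best = min(actions, key=actions.get)
print(best)  # -> 'shut off distribution'

# The fix must come from the humans who wrote the objective, for example:
MIN_SERVICE = 60.0  # constraint: keep enough gas flowing to heat homes
safe = {a: c for a, c in actions.items() if c >= MIN_SERVICE}
print(min(safe, key=safe.get))  # -> 'reduce line pressure'
```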

Clearly this is a foolish example. We would never allow a machine independence in making decisions that affect large numbers of people, especially in life-supporting services.

Yet there are many examples of artificial intelligence systems that do make mistakes or even cheat.
In its August 2018 edition, Wired magazine published some examples of algorithms or computer programs reprogramming themselves to “cheat.” One example was a four-legged virtual robot. Note the word virtual. The robot had the goal of walking smoothly with a ball balanced on its back.

The robot took the ball, trapped it in a leg joint, and continued walking. As humans, we would have reasoned from the start that trying to crawl along with a ball on our backs is foolish and would have held it in our hands instead. So hooray for the AI robot; it learned and took action. But back up briefly.

The robot was programmed by a human with a task – walk and keep the ball on its back. In reprogramming itself, the robot essentially said to the human, “I will defy you, at least in part.”

Was the programmed goal to simply move forward, or to move forward and keep the ball on its back, or to move forward and not lose the ball?
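The ambiguity is easy to state in code. The sketch below is hypothetical, with state fields and reward functions invented rather than taken from the Wired example, but it shows how two of the three plausible goal statements give the ball-trapping trick a perfect score.

```python
# Three plausible readings of "walk with the ball on your back."
# State fields and reward functions are invented for illustration.

def reward_move_forward(state):
    return state["forward_velocity"]

def reward_keep_ball(state):
    # "Don't lose the ball" -- a ball trapped in a joint counts as kept.
    return state["forward_velocity"] if not state["ball_dropped"] else 0.0

def reward_ball_on_back(state):
    # The human's actual intent: the ball must rest on the back.
    return state["forward_velocity"] if state["ball_on_back"] else 0.0

# The "cheat": walking with the ball wedged in a leg joint.
cheat = {"forward_velocity": 1.0, "ball_dropped": False, "ball_on_back": False}

for r in (reward_move_forward, reward_keep_ball, reward_ball_on_back):
    print(r.__name__, r(cheat))
# reward_move_forward 1.0   <- cheating scores perfectly
# reward_keep_ball 1.0      <- cheating scores perfectly
# reward_ball_on_back 0.0   <- only this one captures the intent
```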
Return to the media program selection example. The company has humans making the decisions about media programming.
 
These are complicated decisions based on large quantities of use data, cost factors, operations, production schedules and on and on. Assume a management model could be built and the decision process “automated.”
 
Why not write the software to fully automate media programming changes? Things like the Pareto principle (the 80/20 rule) could be incorporated to help ensure accuracy, and based on consumer choices the software could reprogram itself and “control” the consumer through its media programming selections.
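As a rough illustration of how the 80/20 rule might be wired in, the hypothetical sketch below finds the smallest set of titles accounting for 80 percent of viewing; an automated programmer could then favor that head of the catalog. The view counts are invented.

```python
# Hypothetical view counts; the distribution is invented for illustration.
views = {"Title A": 5000, "Title B": 2500, "Title C": 1200,
         "Title D": 600, "Title E": 400, "Title F": 300}

def pareto_head(views, share=0.80):
    """Smallest set of titles covering `share` of total views."""
    total = sum(views.values())
    head, covered = [], 0
    for title, count in sorted(views.items(), key=lambda kv: -kv[1]):
        head.append(title)
        covered += count
        if covered / total >= share:
            break
    return head

print(pareto_head(views))  # -> ['Title A', 'Title B', 'Title C']
```

The weakness the column points to is visible here as well: the ranking is built entirely from historical counts, so the head of the catalog can only ever reflect what consumers have already watched.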
 
But the consumer is influenced by other factors, the media programming decisions would be based on historical data, and the company might go broke before it recognized the weaknesses of the automated reprogramming.

Now shift gears to self-driving cars. A great concept if all cars were self-driving, assuming the road, weather, and pedestrians are always consistent.
 
Yet traffic today is full of human drivers who respond “slowly,” break the rules by driving too fast, and apply judgment when “new” situations arise. The key word is new.
 
Can a prewritten piece of computer code based on historical data and problem perception adapt as quickly as humans, even though the machines programmed by humans seem to “run” much faster?
 
As we move into the future, will the human creation of artificial intelligence make a world where machines can control humanity or where humans use machines to augment the intelligence of humanity?

Till next time...
The Los Alamos World Futures Institute website is LAWorldFutures.org. Feedback, volunteers and donations (501(c)(3)) are welcome. Email andy.andrews@laworldfutures.org or bob.nolen@laworldfutures.org. Previously published columns can be found at www.ladailypost.com or www.laworldfutures.org.
