Under Construction

When computers came into general use in the 1960s, there was a lot of speculation that they would become smarter than humans within 10 years.
It soon became apparent that the human brain was a lot more complex than expected.

The example I like to give (I doubt it is real, but it is representative of some of the problems) involves a language translation program written to go between English and Russian. A test of success was to translate a phrase from English to Russian and then translate it back to see if you got the same thing.
The phrase "The spirit is willing but the flesh is weak." was translated into Russian.
It came back as "The vodka is good but the meat is rotten."
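
That round-trip test can be sketched in a few lines of Python. The translate() function here is only an assumed stand-in, not a real library call; an actual test would route the text through a machine translation system.

    def round_trip(phrase, translate):
        # 'translate' is an assumed helper: translate(text, source=..., target=...)
        russian = translate(phrase, source="en", target="ru")
        back = translate(russian, source="ru", target="en")
        return back.strip().lower() == phrase.strip().lower(), back

    # With the (probably apocryphal) example above, the check would fail:
    # round_trip("The spirit is willing but the flesh is weak.", translate)
    # -> (False, "The vodka is good but the meat is rotten.")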

In the 1968 movie 2001: A Space Odyssey, a computer named "HAL" (shift each letter of "IBM" back by one and you get "HAL") takes over the spaceship after the crew concludes it has made a mistake and tries to turn it off. HAL controls most of the functions on the ship and uses natural language to communicate with the crew.

In the 1980s there was a lot of work on expert systems: computer systems that emulate the decision-making ability of a real human expert.

See: Rule Based Expert Systems
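
The rule-based idea can be illustrated with a toy forward-chaining engine: a set of facts plus "if these facts hold, then conclude this" rules, applied repeatedly until nothing new can be inferred. This is a minimal Python sketch of the concept, not a reconstruction of any actual expert system, and the facts and rules are made up for illustration.

    # Minimal forward-chaining rule engine.
    # Each rule is (set of required facts, fact to conclude).
    RULES = [
        ({"has_fever", "has_rash"}, "suspect_measles"),
        ({"suspect_measles"}, "recommend_doctor_visit"),
    ]

    def infer(facts, rules=RULES):
        facts = set(facts)
        changed = True
        while changed:                      # keep applying rules until no new fact appears
            changed = False
            for required, conclusion in rules:
                if required <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"has_fever", "has_rash"}))
    # -> includes 'suspect_measles' and 'recommend_doctor_visit'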

In the 2010s there were advances in Deep Learning: neural networks (software that works somewhat like the brain, with artificial neurons) which learn, that is, progressively improve their performance, by considering examples. These techniques are becoming much more practical.
This started the speculation about computers taking over humankind again.
See Deep Learning | MIT Technology Review
Deep learning is being used, among other things, to develop self-driving cars.
See Deep Learning for Self-driving Cars | princeton.edu
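
The "learning by considering examples" idea can be shown with a single artificial neuron (a perceptron), the building block that deep networks stack in many layers. This is a minimal sketch in plain Python, learning the logical OR function from four labeled examples; it is only an illustration of the principle, not how a real deep learning system is trained.

    # A single artificial neuron (perceptron) learns the OR function from examples.
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w = [0.0, 0.0]   # weights, one per input
    b = 0.0          # bias
    lr = 0.1         # learning rate

    def predict(x):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    for _ in range(20):                     # repeatedly "consider the examples"
        for x, target in examples:
            error = target - predict(x)     # how wrong was the neuron?
            w[0] += lr * error * x[0]       # nudge the weights to reduce the error
            w[1] += lr * error * x[1]
            b += lr * error

    print([predict(x) for x, _ in examples])   # -> [0, 1, 1, 1]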

A problem with deep learning systems is that you can't tell how they reached their conclusions. The neural networks cannot be reverse-engineered. Trusting these systems requires a leap of faith.
Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach.
See The Dark Secret at the Heart of AI - MIT Technology Review.

But in 2017 we are still a long way from a computer program like HAL.

History:

  • 1950 - Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence.
  • 1951 - The first working AI programs were written to run on the Ferranti Mark 1 machine at the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.
  • 1951 - The UNIVAC (1951) and the IBM 701 (1952) were the first commercial computers.
  • 1954 - Natural Language Processing (NLP). The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English.
  • 1956 - The first Dartmouth College summer AI conference is organized by John McCarthy (Dartmouth), Marvin Minsky (MIT), Nathan Rochester (IBM) and Claude Shannon (Bell Labs).
  • 1956 - The first demonstration of the Logic Theorist (LT), written by Allen Newell, J.C. Shaw and Herbert A. Simon (Carnegie Institute of Technology, now Carnegie Mellon University, or CMU). It would eventually prove 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica.
  • 1958 - John McCarthy (MIT) invented the Lisp programming language. It quickly became the favored programming language for artificial intelligence (AI) research.
  • 1965 - Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It was a popular toy at AI centers.
  • 1965 - First expert system: software that could emulate the thinking of an expert in some field of expertise. Edward Feigenbaum (Stanford) initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds using scientific instrument data.
  • 1966 - The ALPAC (Automatic Language Processing Advisory Committee) report found that ten-year-long research into language translation had failed to fulfill expectations.
  • 1969 - Marvin Minsky and Seymour Papert publish Perceptrons, demonstrating previously unrecognized limits of simple neural networks. This marked the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI.
  • 1974 - Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated a very practical rule-based approach to medical diagnosis.
  • 1980s - Expert systems, computer systems that emulate the decision-making ability of a real human expert, come into wide use.
    They had limited success, because experts in many fields have an innate ability to analyze situations that they cannot explain well enough to automate.
  • Mid-1980s - Neural networks (software that works somewhat like the brain, with artificial neurons, and learns, i.e. progressively improves performance, by considering examples) are becoming more practical.
  • 1986 - The team of Ernst Dickmanns at Bundeswehr University of Munich builds the first robot cars, driving up to 55 mph on empty streets.
  • 1990s - Advances in machine learning, natural language understanding, translation, data mining, virtual reality games and other topics.
  • 2010s - Deep Learning - AI and neural networks had never lived up to their promise, but better algorithms and more powerful computers are allowing a jump in AI functionality. Language processing and object recognition are among the applications.
    See Deep Learning - MIT Technology Review
    and The Difference Between AI, Machine Learning, and Deep Learning? | NVIDIA Blog

Links:
Timeline of artificial intelligence - Wikipedia
European Union regulations on algorithmic decision-making and a "right to explanation" | 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016)
Deep Learning | MIT Technology Review, Apr., 2017
The Dark Secret at the Heart of AI - MIT Technology Review
The Great A.I. Awakening - The New York Times, Dec, 2016


Return to computers

last updated 30 Aug 2017