
The History of Artificial Intelligence

In 1950, Alan Turing published a paper proposing a way to test a “thinking” machine. He asserted that a machine could be considered to be thinking if it could hold a conversation over a teleprinter and be indistinguishable from a human while doing so. In 1952, Hodgkin and Huxley presented their model of the brain as an electrical network of neurons, in which individual neurons fire in all-or-nothing (on/off) pulses. These developments, discussed at a 1956 workshop at Dartmouth College, helped spark the idea of artificial intelligence.
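
As a rough sketch of that all-or-nothing idea (a simple threshold unit in Python, not the full Hodgkin-Huxley equations; the weights and threshold are invented for illustration), a neuron can be modelled as firing only when its combined input crosses a threshold:

def fires(inputs: list[float], weights: list[float], threshold: float) -> int:
    """Return 1 (fire) if the weighted input reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

print(fires([1.0, 1.0], [0.6, 0.6], 1.0))  # 1: combined input crosses the threshold
print(fires([1.0, 0.0], [0.6, 0.6], 1.0))  # 0: below threshold, so no pulse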

The development of AI has been neither quick nor easy. When it first appeared in 1956, it was a fascinating, original idea. “Neural networks,” which attempted to replicate the functioning of the human brain, were tried and largely abandoned. After numerous publications decried the lack of progress, funding for artificial intelligence research dried up in the 1970s.

The unimpressed dismissed them as toys, since even the most sophisticated programmes could only solve straightforward problems. AI researchers had overconfidently defined their goals (a common mistake) and had made erroneous assumptions about the problems they would encounter. It should come as no surprise that their funding was cut when the promised results failed to materialise.

First Winter of AI


“The First AI Winter” lasted from 1974 until 1980. Researchers working on AI faced two basic constraints: insufficient memory and processor speeds that would be considered dreadful by modern standards. Government funding for artificial intelligence research was largely cut off during this period. In the 1980s, however, the United States and the United Kingdom resumed funding AI research in an effort to rival Japan’s ambitious “Fifth Generation” computer programme and take the lead in global information technology.

The First AI Winter ended with the optimistic arrival of “Expert Systems,” which were quickly developed and deployed by fiercely competitive enterprises around the world. AI research then focused mostly on gathering knowledge from many specialists and sharing that knowledge with end users. The revival of Connectionism in the 1980s benefited AI as well.

Expert Systems


Expert systems became a common strategy in artificial intelligence research during the 1970s. An expert system is a computer program built from the knowledge of human experts. It receives a user’s question and returns an answer that may or may not be helpful. Within a well-defined field of knowledge, the system applies logical “rules” to respond to queries and resolve issues.
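
As a toy illustration of how such rules work (a minimal sketch in Python, not any particular historical system; the medical facts and rules are invented for the example), an expert system can be reduced to a forward-chaining loop that applies if-then rules to known facts until nothing new can be derived:

rules = [
    # (premises, conclusion): if every premise is a known fact,
    # the conclusion is added as a new fact.
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts: set[str]) -> set[str]:
    """Repeatedly apply the rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_cough", "short_of_breath"}))
# Derives both "possible_flu" and "refer_to_doctor".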

Thanks to this straightforward design, the software is easy to create, build, and modify. Bank loan screening programs were a notable example of an expert system in the early 1980s, but expert systems were also applied in sales and medical settings. These straightforward programs generally proved very helpful, and they began to save businesses large amounts of money.

The Second AI Winter


The AI industry went through another severe winter from 1987 to 1993. This second plateau in AI progress coincided with XCON and other early Expert System machines coming to be seen as slow and unwieldy. Desktop computers were gaining popularity and replacing the earlier, bigger, less user-friendly computer banks.

Expert Systems ultimately proved too expensive to maintain in comparison with desktop computers. They were difficult to maintain and could not “learn” new information. Around the same time, DARPA (the Defense Advanced Research Projects Agency) concluded that AI “would not be” the next big thing and diverted its funding to initiatives with a better chance of producing quick results. The resulting sharp reduction in funding for AI research brought on the Second AI Winter of the late 1980s.

It’s Now Possible to Have a Conversation with a Computer


Natural language processing (NLP), a field of artificial intelligence, enables machines and computers to comprehend human language. Interest in NLP was sparked by early 1960s attempts to use computers to translate between English and Russian, which led to theories about machines that might comprehend human speech. Most attempts to make those ideas a reality failed, and by 1966 many had given up on the idea entirely.

Thanks to ongoing increases in CPU power and the development of new machine learning techniques, natural language processing made significant progress in the late 1980s. The new algorithms relied primarily on statistical models rather than models like decision trees. In the 1990s, statistical models for NLP grew significantly.
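
To make the contrast concrete, here is a minimal sketch in Python (with an invented toy corpus) of the statistical approach: word probabilities are estimated from counts in real text rather than written down as hand-crafted rules:

from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
unigrams = Counter(corpus)                   # counts of single words

def bigram_prob(w1: str, w2: str) -> float:
    """Estimate P(w2 | w1) by maximum likelihood from the corpus counts."""
    return bigrams[(w1, w2)] / unigrams[w1]

print(bigram_prob("the", "cat"))  # 0.25: "the" is followed by "cat" in 1 of 4 cases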

Intelligent Agents


In the early 1990s, intelligent agents became the main topic of artificial intelligence research. These agents, also known as bots, can be used for tasks such as news retrieval, online shopping, and web browsing. With the use of Big Data programmes, they have since developed into digital virtual assistants and chatbots.

Machine Learning


Machine learning, a subfield of artificial intelligence, is being used to advance NLP, and it remains a core building block of AI. Although it has grown into a field of its own, it is still applied to narrow tasks such as answering phone calls with a small number of relevant responses. Artificial intelligence now encompasses both machine learning and deep learning.

Digital virtual assistants and chatbots


Digital virtual assistants can comprehend spoken commands and carry out the tasks they request.

In 2011, Siri, Apple’s digital personal assistant, established itself as one of the most popular and effective applications of natural language processing. Although virtual assistants such as Alexa, Siri, and Google Assistant may initially have offered little more than weather, news, and traffic updates, advances in NLP and access to vast amounts of data have turned them into useful customer support tools. They can now perform many of the same tasks as a human assistant; they can even tell jokes.

Digital virtual assistants can now manage schedules, take dictation, make phone calls, read emails aloud, and manage email. There are many on the market today, including Apple’s Siri, Amazon’s Alexa, Google Assistant, and Microsoft’s Cortana. Because they follow voice commands, these assistants can free up their users’ hands, letting someone make coffee or change a baby’s diaper while the assistant works on the assigned task.
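
As a toy sketch of the command-handling step (the intent names and keyword matching are invented for illustration; real assistants use trained NLP models rather than keyword lists), the dispatch from a transcribed utterance to a task might look like this in Python:

def set_alarm(utterance: str) -> str:
    return "Alarm set."

def get_weather(utterance: str) -> str:
    return "Today looks sunny."

INTENTS = {
    "alarm": set_alarm,      # keyword -> handler
    "weather": get_weather,
}

def handle(utterance: str) -> str:
    """Run the handler for the first intent whose keyword appears."""
    for keyword, handler in INTENTS.items():
        if keyword in utterance.lower():
            return handler(utterance)
    return "Sorry, I didn't understand that."

print(handle("What's the weather like?"))  # Today looks sunny.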

Future AI research will revolve around these virtual helpers. They are already driving vehicles, taking robotic form to offer physical assistance, and performing research to support business decisions. The use of artificial intelligence is still expanding.

Alan Turing’s Test is Passed


In my honest opinion, chatbots and digital virtual assistants have passed Alan Turing’s test and achieved true artificial intelligence. Because of its capacity for judgement, today’s artificial intelligence can be described as thinking. If these entities were conversing with a user via teletype, the user would believe there was a live person on the other end of the line. And once these systems could communicate verbally and recognise faces and other images, they far exceeded Turing’s expectations.
