
Pandatube
Overview
- Founding date: August 25, 1935
- Sectors: Energy
- Published challenges: 0
About the Entity
The idea of "a machine that thinks" goes back to ancient Greece. But since the advent of electronic computing (and relevant to some of the topics discussed in this article), important events and milestones in the development of AI include the following:

1950. Alan Turing publishes Computing Machinery and Intelligence. In this paper, Turing, famous for breaking the German Enigma code during WWII and often described as the "father of computer science," asks the following question: "Can machines think?"

From there, he proposes a test, now famously known as the "Turing Test," in which a human interrogator tries to distinguish between a computer's and a human's text responses. While this test has undergone much scrutiny since it was published, it remains an important part of the history of AI and an ongoing concept within philosophy, as it draws on ideas from linguistics.
1956. John McCarthy coins the term "artificial intelligence" at the first-ever AI conference, held at Dartmouth College. (McCarthy went on to develop the Lisp language.) Later that year, Allen Newell, J.C. Shaw and Herbert Simon create the Logic Theorist, the first-ever running AI computer program.
1958. Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that "learned" through trial and error. A decade later, in 1969, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both a landmark work on neural networks and, at least for a while, an argument against further neural network research efforts.
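The perceptron's trial-and-error learning can be illustrated with a simple update rule: whenever the model misclassifies an example, its weights are nudged toward the correct answer. Below is a minimal, illustrative Python sketch of that idea (the AND-gate data, learning rate and epoch count are assumptions for demonstration, not a reconstruction of Rosenblatt's hardware):

```python
# Minimal perceptron: learns a linearly separable function (logical AND)
# by trial and error. Weights change only when a prediction is wrong.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), label in samples:
            activation = w[0] * x1 + w[1] * x2 + b
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # -1, 0, or +1
            w[0] += lr * error * x1     # nudge weights toward
            w[1] += lr * error * x2     # the correct answer
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Logical AND is linearly separable, so a single perceptron can learn it.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
results = [predict(w, b, x1, x2) for (x1, x2), _ in and_data]
# results matches the AND labels: [0, 0, 0, 1]
```

Minsky and Papert's critique in Perceptrons rested on exactly this limitation: a single-layer perceptron can only learn linearly separable functions, so it would fail on a target such as XOR.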

1980s. Neural networks, which use a backpropagation algorithm to train themselves, become widely used in AI applications.
1995. Stuart Russell and Peter Norvig publish Artificial Intelligence: A Modern Approach, which becomes one of the leading textbooks in the study of AI. In it, they explore four potential goals or definitions of AI, which differentiate computer systems based on rationality and on thinking versus acting.
1997. IBM's Deep Blue beats then-world chess champion Garry Kasparov in a chess match (and rematch).
2004. John McCarthy writes a paper, What Is Artificial Intelligence?, and proposes an often-cited definition of AI. By this time, the era of big data and cloud computing is underway, enabling organizations to manage ever-larger data estates, which will one day be used to train AI models.
2011. IBM Watson® beats champions Ken Jennings and Brad Rutter at Jeopardy! Around this time, data science also begins to emerge as a popular discipline.
2015. Baidu's Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher accuracy rate than the average human.
2016. DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves). Google had acquired DeepMind in 2014 for a reported USD 400 million.

2022. A rise in large language models (LLMs), such as OpenAI's ChatGPT, creates an enormous change in the capabilities of AI and its potential to drive business value. With these new AI practices, deep-learning models can be pretrained on large amounts of data.
2024. The latest AI trends point to a continuing AI renaissance. Multimodal models that can take multiple types of data as input are providing richer, more robust experiences. These models bring together computer vision image-recognition and NLP speech-recognition capabilities. Smaller models are also making strides in an era of diminishing returns for massive models with large parameter counts.