The history of AI is a long one, with roots that go back hundreds of years. You might be surprised to learn that there were computers as early as the 17th century.
No, not PCs or laptops, of course, but mechanical devices that could add and subtract using gears and levers. They were called mechanical calculators, and one of the first was the Pascaline, built by Blaise Pascal in 1642. It looked nothing like a PC, but it was just the beginning.
The first real AI efforts began in the 1940s, when scientists started looking into how machines could learn from experience rather than follow fixed rules. One of the pioneers credited with shaping the history of artificial intelligence (AI) is Alan Turing, a mathematician who famously asked whether machines could think for themselves.
That question led to some exciting discoveries but also some dead ends. This is just a sneak peek to get you excited about today’s article, but rest assured, we will take a full trip down memory lane through the history of artificial intelligence!
The first half of the 20th century was a time of great technological advancement and intellectual curiosity.
The advent of computers gave rise to many new fields, including computer science and AI. In 1956, John McCarthy organized the first conference on the subject at Dartmouth College, and it was in the proposal for this meeting that he coined the term "artificial intelligence" and its abbreviation "AI", making him the first person to use the term.
But that’s not all! McCarthy promoted the idea that machines can learn from data and can even adapt, respond, and evolve to become smarter over time. He also invented Lisp, the first programming language designed for AI work. The language spawned many dialects over the years and is still in use today.
However, in this early period, there were many different ideas about what AI should look like or do. Some researchers thought it should be able to solve problems on its own, while others believed it should assist humans when needed. Sound familiar? Today, we have both!
Between the end of the 50s and the beginning of the 80s, AI made great strides toward its goal of thinking for itself
During this period, machine learning improved, and researchers learned how to apply algorithms to real problems. Early milestones, such as Newell and Simon’s General Problem Solver and ELIZA, an early natural language processing program, demonstrated that the field had a lot of potential for problem-solving and perhaps even for genuine artificial intelligence.
In the 60s, researchers witnessed the first computer programs that could play a full game of chess, and many believed that once computers could beat humans at games like chess and checkers, they would be able to do everything else, including solving problems and making decisions in ways indistinguishable from human intelligence. As it turned out, this was not true. Still, it was an important milestone in the history of AI, one that eventually led to the virtual assistants, voice assistants, and AI chatbots we use daily.
It took a few more decades, however, before a chess computer beat a reigning world champion: IBM’s Deep Blue won its first game against Garry Kasparov in 1996 and went on to win a full match in 1997.
However, a setback came with Shakey, a robot developed at the Stanford Research Institute that was supposed to navigate around obstacles on its own using sensors mounted on its body.
But when Shakey attempted this task in real life, it struggled. The robot froze up whenever it was confronted with an unexpected situation or object in its path. Rather than "thinking" through these problems as a human would (or even just learning from experience), Shakey required constant intervention from human programmers to have any chance at success. Still, Shakey was just the beginning of AI robots, and the concept behind it showed huge potential!
One of the computer scientists who truly believed that was Marvin Minsky. Quoted in Life magazine in 1970, Minsky said, “From three to eight years we will have a machine with the general intelligence of an average human being.”
But this optimistic outlook wasn’t shared by the government agencies funding the research, which were slowly losing patience.
The 1980s and 1990s were a time of resurgence for AI
After Shakey, things slowed down a little for the artificial intelligence industry. But fortunately, or unfortunately, depending on how you see it, the field experienced renewed interest after the dormancy of the 1970s. New developments brought about by the advent of Machine Learning and Neural Networks helped usher in an era known as "The AI Spring." Poetic, right?
In addition to these developments within computer science, other fields were also seeing advances that would later shape how we perceive artificial intelligence today. Natural Language Processing (NLP), which enables machines to understand written language and respond appropriately, paved the way for the next step in the history of artificial intelligence: virtual assistants, voice assistants, and AI chatbots. Computer Vision and Robotics also saw rapid progress during this period.
In the 2000s, AI experienced rapid growth and development.
The rise of Big Data and cloud computing made it possible for companies to collect large amounts of data, which then had to be analyzed by algorithms before businesses could put it to use. This led to the development of new methods such as Deep Learning and Reinforcement Learning.
And since 2010, we have been living in the era of the AI revolution! This period is defined by the widespread adoption of AI across industries and its emergence as a mainstream technology.