Generative AI became a buzzword in 2023 with the explosive proliferation of ChatGPT and large language models (LLMs). The hype sparked debate over which model was trained on the most parameters, and it raised awareness of the broader practice of training models for specific applications. It is therefore unsurprising that the term "Generative AI" has become closely associated with trained models. Yet the term's origins and meaning trace back to a time before such models were part of the industry conversation. What does "Generative AI" actually mean? And what foundational concepts laid the groundwork for this revolutionary technology?
The Early Days of Generative AI
Before the emergence of trained AI models, the concept of generative AI revolved around building intelligent systems that could generate new and original content. One of the earliest examples came from the field of evolutionary algorithms. Inspired by natural selection, these algorithms generate new solutions by iteratively mutating existing candidates and keeping the improvements.
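The mutate-and-select loop described above can be sketched in a few lines of Python. This is a minimal illustrative example, not any specific historical algorithm: each candidate is perturbed with Gaussian noise, and the fitter of the original and the mutant survives to the next generation.

```python
import random

def evolve(fitness, population, generations=100, mutation_scale=0.1):
    """Toy evolutionary loop: mutate each candidate, keep the fitter one."""
    for _ in range(generations):
        next_gen = []
        for candidate in population:
            # Variation: perturb the candidate with Gaussian noise.
            mutant = candidate + random.gauss(0, mutation_scale)
            # Selection: the higher-fitness solution survives.
            next_gen.append(mutant if fitness(mutant) > fitness(candidate) else candidate)
        population = next_gen
    return max(population, key=fitness)

# Hypothetical task: maximize f(x) = -(x - 3)^2, whose optimum is x = 3.
random.seed(0)
f = lambda x: -(x - 3) ** 2
best = evolve(f, [random.uniform(-10, 10) for _ in range(20)])
```

Real evolutionary algorithms add crossover between candidates and more sophisticated selection schemes, but the generate-evaluate-select cycle shown here is the core idea.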
Another significant development in the pre-model era of generative AI was the field of expert systems. These systems aimed to capture the knowledge and expertise of human experts in a specific domain as explicit rules and heuristics, then use them to generate intelligent outputs. While these rules and heuristics could be considered a kind of model, they were not trained in the way today's LLMs are.
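The rule-based approach can be illustrated with a tiny forward-chaining inference loop, a common pattern in classic expert systems. The rules and domain below are hypothetical, invented for illustration rather than taken from any specific historical system:

```python
# Each rule pairs a set of conditions (facts that must all hold)
# with a conclusion to assert when they do.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "recommend_doctor_visit"),
]

def infer(facts):
    """Forward chaining: fire rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

conclusions = infer({"has_fever", "has_cough", "short_of_breath"})
```

Note how conclusions chain: one rule's output ("possible_flu") becomes another rule's input. Production systems of the era worked on this principle at far larger scale, with hand-written knowledge bases rather than learned parameters.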