The phrase “artificial intelligence” is changing its meaning quickly. Around 2012, I would have said it meant a sci-fi concept or a computer-controlled opponent in a game. In 2017 it was much the same, but AI also seemed to encompass gradient descent and fancy heuristics. Everyone was creating “chatbots” that I never found anywhere close to useful, even for their dedicated purpose. (E.g. our school made big PR news about our awesome chatbot about COVID-19; it didn’t have reasonable answers even for the most basic questions.)

In November 2022, about ten months ago (huh, only?), the ChatGPT large language model showed the world that training a simple language model on huge quantities of data gives reasonable-looking results. Every big company is now looking into this tech, while ethical questions about obtaining the huge amounts of necessary data were pushed aside. So was the exploitation of people in disadvantaged regions who are forced by their situation to label data through microtasks. So was the contribution to climate change, since training is energy-intensive.

Now, in October 2023, it seems that models processing images, sound, text, and even video are kind of becoming unified. The next iteration of GPT is said to be multimodal (able to process more than text).

I like the use of these models for interfacing: querying one model to create subtasks, prompting other models and systems for results, putting the results together (also via models), and finally checking the answer. The difficulty of handling simple tasks is, in my opinion, one of the failures of modern technology. It will probably be solved by LLMs, making us more dependent on the corporations that own them. Oh well.
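The interfacing idea above can be sketched roughly like this. Note that `call_model` is a hypothetical placeholder, not any real API; in a real system it would be a request to an actual LLM service.

```python
# A minimal sketch of the "models interfacing with models" pattern:
# plan -> solve subtasks -> combine -> check. All names are hypothetical.

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call."""
    return f"response to: {prompt}"

def solve(task: str) -> str:
    # 1. Ask one model to split the task into subtasks (one per line).
    plan = call_model(f"Split into subtasks, one per line: {task}")
    subtasks = [line for line in plan.splitlines() if line.strip()]

    # 2. Prompt models (or other systems) to handle each subtask.
    results = [call_model(f"Solve: {sub}") for sub in subtasks]

    # 3. Put the partial results together, again via a model.
    combined = call_model("Combine these results: " + "; ".join(results))

    # 4. Finally, have a model check the combined answer.
    _check = call_model(f"Check this answer to '{task}': {combined}")
    return combined
```

This is just the control flow; a usable version would need error handling, a real parser for the plan, and some policy for what to do when the checking step rejects an answer.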