Stuart Mills in Singularity Hub:
AI is developing rapidly. ChatGPT has become the fastest-growing online service in history. Google and Microsoft are integrating generative AI into their products. And world leaders are excitedly embracing AI as a tool for economic growth.

As we move beyond ChatGPT and Bard, we’re likely to see AI chatbots become less generic and more specialized. AI systems are limited by the data they are trained on to make them better at what they do—in this case, mimicking human speech and providing users with useful answers. Training often casts the net wide, with AI systems absorbing thousands of books and web pages. But a more select, focused set of training data could make AI chatbots even more useful for people working in particular industries or living in certain areas.
An important factor in this evolution will be the growing costs of amassing training data for advanced large language models (LLMs), the type of AI that powers ChatGPT. Companies know data is valuable: Meta and Google make billions from selling advertisements targeted with user data. But the value of data is now changing. Meta and Google sell data “insights”; they invest in analytics to transform many data points into predictions about users.
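The “select, focused set of training data” Mills describes maps onto what practitioners call fine-tuning: take a generic pretrained model and continue training it on a narrow, domain-specific corpus. Below is a minimal sketch of that idea using the Hugging Face transformers and datasets libraries; the base model, corpus file name, and hyperparameters are placeholder assumptions for illustration, not anything specified in the article.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"                  # assumed small, generic base model
CORPUS_FILE = "industry_corpus.txt"  # hypothetical domain-specific text file

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Load the narrow corpus and tokenize it into model inputs.
dataset = load_dataset("text", data_files={"train": CORPUS_FILE})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Continue training the generic model on the focused corpus
# with the standard causal language-modeling objective.
trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="specialized-chatbot",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("specialized-chatbot")
```

The same pattern applies whether the corpus is legal filings, medical notes, or regional news; the question Mills raises is who owns such corpora and what access to them will cost.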
More here.