Apr 6 - I believe AI in the form of LLMs poses little existential threat and will provide significant value to society at large, assuming it is properly utilized. It's a decent step forward in allowing systems to interface more naturally with humans and to generate things in a humanlike manner. While I think other methods that don't try to emulate human activity would be more efficient/better, as an interim where humans need to be interacted with, LLMs prove useful. With the amount of competition and open-source development in the market, the fear/chance of a monopoly or single power having control of the knowledge of the world is luckily quite small. I look forward to the day when a UBI is implemented and automating jobs away is seen as a good thing rather than a bad thing. I don't think it's the end of the world, and at the end of the day it's probably just another tool for people to do things. As for environmental consequences, OpenAI and Anthropic at least claim to be moving to closed-loop cooling systems, and even if not, it's a government failure to not properly manage local water resources. Power usage is pretty bad, but efficiency is ever improving in the interest of cost cutting and from assorted research at various places.
Either way it's gonna run its course, so there's not too much point in worrying about it, because there's not much to do about it. There's a good way to use it: as long as you aren't a bum who chooses to stunt your learning with it, it really can be quite useful in many cases. Obviously LLMs could be used with malicious intent too, but is that not also the case with most new technologies? Responsible usage is something people just have to adapt to and be educated on. I fear 'safety measures' intended to curb malicious intent that just end up making the model closed access/source instead.