The extent of my use of LLMs is asking our company's specific instance to write emails. If there's one (1) thing it's good for, it's writing instructions based on what I've already provided; and because part of its knowledge is based on Microsoft's KBs, I can have it write out the steps for performing basic tasks. I have to zhuzh it up and review it afterwards, but having a basic draft to work from that I can fix, versus the hurdle of starting from nothing, is nice.
I still have many reservations about how much energy these LLMs use; the time saved versus the energy consumed is probably very inefficient. When the bubble bursts, I think some versions of GPT will survive to spit out things that are NOT meant to drive important decisions: emailing people, setting up appointments, and hell, combined with transcription services, AI is probably appropriate for taking minutes at meetings. AI as a personal assistant seems most appropriate, because it can attempt to provide options for a user or save them time, but it's always ultimately the user's choice to take action.
AI could also be an effective tool in creative fields: for instance, an author using it to keep track of descriptions or other similar features of the text, the kind of questions that don't require broader context, since I believe LLMs are incapable of answering ones that do. For art, being able to bash out landscapes, even in broad terms, doesn't seem to me like an affront to the craft, but as said above, it wouldn't be very useful for people who don't know what they're doing. I've seen enough artists base their own work on prompts and then fail to fix the issues, such as perspective problems, color/shading, etc. The big asterisk here is that the AI obviously needs to be trained on an ethically sourced dataset, and as far as I understand, only Adobe has really done the legwork on that.