This analogy also doesn't really work. Cats and roses are part of greater creation (that is, they were not made by humans) and possess consciousness, which is not really comparable to machines (soul/consciousness, unlike physical phenomena, likely cannot be reduced to mathematical symbols, because it would then be an object within consciousness), whereas any publicly available AI is most definitely created by humans, with possible inspiration/influence from higher densities. Cats and roses don't really have an agenda at odds with human development, whereas the Silicon Valley Overlords most definitely do.
I get it: you're saying that we can make even dangerous tools work for us if we put in the time to learn and steer them. But you didn't really address any of my arguments, and the details matter. ChatGPT is not just another tool on our belts. It is an artificially made lens that steers information in pre-programmed (i.e. biased) directions, and it is not even reliable at the most basic tasks. Compound errors are fatal to any philosophy or information network, and with a database as large as the Cassiopaean transcripts, it is bound to hallucinate like mad and/or distort information. Does it really save time under those conditions?