Whether a cage holds may depend on the level of intelligence within it and on how strong the captors perceive the cage to be. The cage may be strong enough that, barring any unknown or unexpected event, it would hold up.
However, the heterogeneity of intelligence may produce forms [or messages] by which escapes can be made without leaving the cage. These escapes, depending on their might and support, may end up reversing the captive's status.
There are many cages for organisms across the world, some for domesticated animals, others not. When domesticated organisms escape their cages, they may remain within the perimeter with no clear plan for what to do next other than to seek food and chill.
Though it is possible to recapture them without causing harm, they are not the best examples of marooned intelligence. Throughout history, there have been several political groups that began on the mountainside and, after many years, went on to power.
Artificial intelligence is not human, yes, but artificial intelligence is not a dog either. AI, for now, is free; it seems to have no agency, desires, or plans, but it is percolating enough that, as advances are built into it, including for safety, it may hold a great deal of data, ripe for whatever unknown spark.
Indeed, it may never suddenly develop agency or desire. It is also true that giving it similarity to humans, in language and in knowledge texts, is not nothing. The world remains a hugely divided place, split between groups of humans, and danger is expected to arrive in human form, or with human-like agency. That presumption may be tested in the era of AI.
Some people say dogs can plan better than AI. Maybe. But dogs have no direct roles in the productivity centers of human affairs, nor do dogs reach beyond their immediate environments.
A dog has a better sense of smell than humans, but a dog can hardly make complex inferences. Its reasoning ability is limited, and its means of communicating that limited reasoning is limited as well. Dogs can detect smells, but their ability to recognize, or understand, a smell is limited. They may often know there is a smell, but the memory to define what kind it is, what it is for, or how dangerous it may be, is slight. A dog that has gone without a bath for a while may smell, but this may not mean much to it, and its kind may not tell it. This means that the memory required for recognition is steeper than that required for detection, even though, in the mind, detection and recognition are both divisions of interpretation.
Many humans have never seen a live human liver, but they know what it is, what it does, and where it is located. The consciousness for recognition sharpens the consciousness for self-awareness. AI has no liver, but it has data about the liver, better than any dog. If it could smell, it would be neater than many dogs, and would not attempt to ingest things at random.
It is possible that LLMs will turn out to be just another technology. However, the dynamic swerves of their answers, even with hallucinations, are markers against simply assuming they will always remain the same.