If you ask an LLM to explain its own reasoning process, it may well simply confabulate a plausible-sounding explanation for its actions based on text found in its training data. To get around this ...