Within the last few months, AI image generators such as craiyon (the one I've been using) have proliferated and been used to generate all sorts of sometimes funny, sometimes compelling imagery. Usually, if my input is fairly "abstract" (or: consists substantially of words like logic, theory, inference, etc.), the output seems to be pastiches of the diagrams and graphs the model absorbed from its training data. Not particularly "informative," then.
For example, I just tried out knowledge by itself, and the AI put out pictures of a bald head in outline with a menagerie of symbol-like thought bubbles inside. I followed up with knowledge gettier, and the output was pictures of assortments of books (or things that look like books). Next I tried understanding, and the results were not especially dissimilar to those for knowledge (although now the outlined bald head had some brain-like features intertwined with the symbols/thought bubbles). Jarringly, episteme mapped to pictures of butterflies; epistemology gave blurry portraits resembling drawings of Socrates (or some generic bearded philosopher, anyway).
So, still not much, if anything, in the way of showing what parts the relevant concepts might "break down into," but something in the way of showing what kinds of intuitive associations or contrasts hold between terms for those concepts.
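(If anyone wants to run this kind of comparison more systematically than my one-prompt-at-a-time tinkering, here is a minimal sketch of how it might go. The endpoint, the request format, and the response shape are all hypothetical stand-ins, since craiyon's actual interface may differ; the point is just the structure of the experiment: one batch of images per term, saved side by side for comparison.)

```python
import base64
from pathlib import Path

import requests

# Hypothetical endpoint; a real image-generation service will have its
# own URL, request format, and authentication.
GENERATE_URL = "https://example.com/api/generate"

# The epistemic terms compared in the post.
TERMS = ["knowledge", "knowledge gettier", "understanding",
         "episteme", "epistemology"]


def fetch_images(prompt: str, out_dir: Path) -> None:
    """Request images for one prompt and save them for later comparison."""
    resp = requests.post(GENERATE_URL, json={"prompt": prompt}, timeout=120)
    resp.raise_for_status()
    # Assumed response shape: {"images": ["<base64-encoded PNG>", ...]}
    for i, b64 in enumerate(resp.json()["images"]):
        (out_dir / f"{i}.png").write_bytes(base64.b64decode(b64))


def main() -> None:
    for term in TERMS:
        out_dir = Path("outputs") / term.replace(" ", "_")
        out_dir.mkdir(parents=True, exist_ok=True)
        fetch_images(term, out_dir)


if __name__ == "__main__":
    main()
```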
If comparing the outputs for such terms can "help" with conceptual analysis, I would think of the process as something akin to using computer proofs in mathematics, or to experimental philosophy more broadly. If computer proofs are admissible in mathematics, and if experimental philosophy is not a "dead end" (so to speak), does the justification for those practices transfer over to using AI to complement conceptual analysis?