The inherent motive for questioning is to fill gaps or holes in our mental web or model. The existence of missing pieces, especially in areas important to us, brings anxiety and cognitive dissonance.
Two hurdles in asking a good question are:
1. Finding a link between our mental gap and the recipient's mental model. If our respective models, or starting contexts, are too different, we may need to paint the recipient a new model before pointing out the hole in said model.
2. Leading the recipient to the hole in question. Once we share enough internal context, we need to point out the right hole for them to recognize our concern. If they have already filled this hole in the past, they can lend us the missing information. If we both have a gap of understanding in this area, we can work together to bridge it.
Put another way, step one is to find or create compatible context, or model overlap, in both minds; step two is to bring both of our attentions to the same deficit, the same hole. If the other person already has the answer, this will usually become clear to them once the shared context is in place. If both minds are incomplete, then the key is focusing on the same gap. For elusive problems, a back-and-forth may be needed to build the same peripheral context in both minds -- to reify and illuminate the cavity.
A question need not be an interrogative. Any context-building or focus-directing behaviour may suffice, as long as both minds see the same hole at the same time. For example, literally drawing or writing out a partial model or thought process can shine a light on the gap, sometimes so clearly that no sharing with another mind is necessary.
The overarching theme is context and focus. Once these are aligned, two minds become one.
On a related tangent, language transformers, or machine learning models capable of intelligent text prediction, operate on principles similar to those of the recipient described above. They carry user-provided context forward, using probabilistic inference to retrieve and create new textual output. As with walking up to a stranger, an untuned language transformer knows only the context you provide. To get the answer you want, you should provide the key background, along with hints of the focus you seek, or the gaps you want filled. The art of asking questions is thus analogous to the art of prompt-crafting, as they call it: the process of giving a language transformer the right context to get the desired output.

If interested, I recommend playing with some. One option worth mentioning is GPT-NeoX, which can be found in the Model dropdown at the TextSynth Playground. A powerful paid option is GPT-3. Or you could search YouTube for videos of others playing with these. The exercise should give you a picture of working from nearly zero context to build a "question" that leads to a desirable answer.
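To make the analogy concrete, here is a minimal sketch of prompt-crafting in code, assuming the Hugging Face transformers library and a small GPT-Neo checkpoint as a lightweight stand-in for the larger models named above. The model name, prompt text, and sampling settings are illustrative assumptions, not a recipe.

```python
# A minimal sketch of prompt-crafting: supply background context plus a
# focused lead-in, and let the model fill the gap we point at.
# Assumes the Hugging Face `transformers` library and a small GPT-Neo
# checkpoint (an assumption; any text-generation model would do).
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")

# Context first: paint the model a partial picture of the situation we care about.
context = (
    "The hikers reached the ridge at noon. Dark clouds were gathering to the "
    "west, and the trail back to camp took three hours."
)

# Focus second: point at the hole we want filled.
focus = " The safest decision for the hikers was to"

prompt = context + focus
result = generator(prompt, max_new_tokens=30, do_sample=True, temperature=0.7)

# The completion is the model's attempt to fill the gap we pointed at.
print(result[0]["generated_text"])
```

The particular model matters less than the shape of the prompt: the context paragraph paints the partial model, and the trailing clause directs attention to the hole we want filled, mirroring the two hurdles above.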