Since you are immediately talking about consistency and completeness, let's start from computability theory rather than from ethics or propositional logic.
Systems with well-defined axioms that permit integer arithmetic are notoriously incomplete, by some version of Gödel's reasoning: any consistent, recursively axiomatized system strong enough to express arithmetic leaves some statements undecidable. It does not matter what the corresponding semantics would be. So if completeness is your aim at all, you would need to pursue a very odd sort of reasoning, one in which counting things can never affect the ethical outcome.
But then can any such system be precise enough that anyone would consider it 'complete'? Huge chunks of finance would need to be excluded immediately. Most of us think that the moral value of gain and risk really does depend rather finely on arithmetical details.
Kantianism (as it gets naively employed, not necessarily as rigidly imagined by Kant himself) tends to rule out arithmetic, or the complexity that would require it, as part of the intuitive definition of a maxim. But then, by focusing on autonomy, it often gives the answer 'that depends on an arbitrary negotiation between those involved.' Does that count as being complete?
One basic problem here is that moral (and, more concretely, legal) systems are not consistent by nature; they are generally overdetermined. So there are numerous equally good right answers, none of them perfect, and arbitrary combinations of those multiple right answers cannot be reasoned about consistently without paradox.
This suggests that any real ethical system is not axiomatic but algorithmic: it consists of negotiation processes that govern the acceptable exchange of power and seek a consistent balance. (The point I generally come back to: the central social process is a language-game.) But, by some version of Turing's halting argument, sufficiently powerful algorithmic systems admit questions that are undecidable, computations that never converge. Instead, the world is full of 'Julia sets' -- algorithms get hung up in infinite regress near the fractal boundaries between basins of attraction. So those are not going to be complete, either.
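The 'Julia set' image can be made concrete with a small sketch. Assuming the standard escape-time iteration for the quadratic map z ← z² + c (the usual way Julia sets are drawn; the particular constants below are my own illustrative choices, not anything from the discussion), the number of steps needed to decide whether a point escapes grows without bound as the starting point approaches the boundary between basins -- which is exactly the sense in which the deciding algorithm 'hangs up':

```python
# Escape-time iteration for z -> z^2 + c. Points that leave the disk of
# radius 2 are decided ("escapes after n steps"); points still inside
# after the budget is spent remain undecided -- the halting-problem worry.

def escape_time(z, c, max_iter=10_000, radius=2.0):
    """Return the step at which |z| exceeds `radius`, or None if undecided."""
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        z = z * z + c
    return None  # budget exhausted without a verdict

# For c = -1, the orbit of 0 cycles 0 -> -1 -> 0 forever: never decided.
print(escape_time(0.0, -1))   # None: trapped in a basin, no verdict

# The repelling fixed point beta = (1 + sqrt(5)) / 2 of z^2 - 1 lies on the
# Julia set boundary on the real line. Starting ever closer to it, the
# decision takes ever more steps before the iterate finally escapes.
beta = (1 + 5 ** 0.5) / 2
for eps in (1e-2, 1e-6, 1e-12):
    steps = escape_time(beta + eps, -1)
    print(f"distance {eps:g} from the boundary -> decided after {steps} steps")
```

The growth here is only logarithmic in the distance, but the moral is the structural one from the paragraph above: the closer a case sits to the boundary between 'basins' of judgment, the longer any procedure must deliberate, with no uniform bound.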
The trend in something like predictive consequentialism is to assume that, if you want something complete, the best you can hope for is a system of ad-hoc compromises: inconsistent by design, but trending toward consistency over time. "Rule utilitarianism" with some kind of complexity cutoff would be an example -- it seems to be what judges like to imagine they do.