On a certain level, the argument that all ethical theories can be construed in consequentialist terms is true, but only because the particular set of consequences we want to maximize is left unstated in making this assessment; that elasticity is also the claim's weakness.
For instance, if we want to represent the Kantian account in "consequentialist" (or perhaps more broadly calculative) terms, then I don't think it's sufficient to say that we want to maximize the "use of acceptable means." Instead (working just from one formula, the formula of humanity), it seems like we want to
1. Maximize the consideration of rationality wherever we encounter it as an end
2. Assign a value of negative infinity in our system to any treatment of a person as a mere means
3. Calculate in terms of maxims of actions in relation to 1 rather than merely extrinsically (i.e., positives only occur when we have a maxim motivated in this way, not just when we have an action that would correspond to this maxim) (see the sketch after this list).
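To make the pretzeling concrete, here is a minimal sketch in Python of what such a "value function" would have to look like under conditions 1-3. Every name in it (Maxim, treats_as_mere_means, kantian_value, the numeric scale) is an invented illustration of the scheme above, not anything drawn from Kant.

```python
# A hypothetical, highly simplified "Kantian value function" under the three
# conditions above. All names and values here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Maxim:
    description: str
    motivated_by_respect: bool       # condition 3: the maxim itself, not the outward act
    treats_as_mere_means: bool       # condition 2
    rational_beings_considered: int  # condition 1: how much rationality-as-an-end it honors

def kantian_value(maxim: Maxim) -> float:
    # Condition 2: any treatment of a person as a mere means is assigned
    # negative infinity, i.e., excluded from moral consideration entirely.
    if maxim.treats_as_mere_means:
        return float("-inf")
    # Condition 3: credit accrues only when the maxim is motivated the right
    # way, not when an outward action merely happens to conform.
    if not maxim.motivated_by_respect:
        return 0.0
    # Condition 1: otherwise, "maximize" consideration of rationality as an end.
    return float(maxim.rational_beings_considered)
```

Even as a toy, the decisive input (motivated_by_respect) is exactly the thing Kant says we cannot inspect in ourselves, so the "maximization" could never actually be run; that is the problem spelled out in the next two paragraphs.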
But now this doesn't look very much like a consequentialist matrix, because actions under 2 are completely excluded from the realm of conceivable moral action, and because we are evaluating not what happens as a result of our actions but whether our maxim bears the right relation to this principle, which on Kant's theory is inaccessible to us.
So then yes, each agent would be maximizing the value of their maxims (and absolutely not engaging in actions contrary to the CI or treating others as mere means), but they would have no ability to calculate whether they have achieved anything positive vis-a-vis this maximization. (Since the Kantian moral will, in its classical version, does not fall within the manifold of sensibility or the categories of the understanding, we can never know when we have acted morally; we only have access to our past actions in the world, not to our prior moral will. Conversely, we can identify some patently bad actions directly, like lying.)
Similar conundrums arise if we try to look at virtue theory as a consequentialist theory. Looking just at Aristotle's account, Nicomachean Ethics Book I, Chapter 7 identifies what we are trying to maximize as our human function (i.e., our humanity). As the subsequent parts demonstrate, the way we do this is by applying practical wisdom to certain emotional and rational states and finding the balance that matches (a) our species, (b) our specific abilities (if, say, I'm genetically predisposed to being strong or weepy), (c) the precise circumstance, and (d) our growth vis-a-vis (a) and (b) through our choices of action. And then we have to treat the natural unit of human life as not the individual but the polis.
The root problem here is similar but not identical to the Kantian case. The similarity is that this is difficult to calculate. But Aristotle is going to give different reasons: (1) those of us who lack phronesis won't know the optimal conditions and behaviors; (2) phronesis can include situational factors in a way that makes it not always clear there is one action that should be taken; (3) since virtue is about seeking the mean, that mean will shift such that an action that was virtuous (or at least virtue-forming) for one individual is now vicious for the same individual.
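As a purely illustrative sketch (the names AgentState and virtue_score and all the numbers are invented, not Aristotle's), point (3) can be put this way: the "value" of one and the same action depends on the agent's current dispositions and circumstances, so the very same act can go from hitting the mean to missing it for the same individual over time.

```python
# Toy illustration of the Aristotelian "mean": the same action scores
# differently for the same agent as their dispositions change.

from dataclasses import dataclass

@dataclass
class AgentState:
    disposition: float   # current tendency toward rashness (+) or cowardice (-)
    circumstance: float  # how much boldness this situation actually calls for

def virtue_score(action_boldness: float, agent: AgentState) -> float:
    # The "mean" is relative to this agent in this circumstance, not a fixed target:
    # someone prone to cowardice should aim bolder, someone prone to rashness less bold.
    mean_for_agent = agent.circumstance - agent.disposition
    # Score is higher the closer the action is to that agent-relative mean.
    return -abs(action_boldness - mean_for_agent)

novice = AgentState(disposition=-0.5, circumstance=1.0)   # tends toward cowardice
veteran = AgentState(disposition=0.8, circumstance=1.0)   # same person later, now tends toward rashness

print(virtue_score(1.5, novice))   # the bold act lands on this agent's mean
print(virtue_score(1.5, veteran))  # the same act now overshoots the mean
```

Nothing hangs on the particular formula; the point is just that the target moves with the agent and the situation, which is why the "consequence" being maximized resists any fixed specification.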
On the flip side, consequentialism, depending on what we are optimizing, can be articulated in other terms as well. For instance, you could identify the end as a teleology and make mean-seeking towards that end the implementation method.
From all of this, I personally would say: yes, any theory can be represented as a form of consequentialism, but for the key competing views the results are going to be so pretzeled as to make that a highly inefficient way of describing the view.
Does that undermine Friedman's quote? I don't know, but I don't know if Friedman's quote is trying to claim that all ethical theories are consequentialist. If so, I'd say he's wrong. If instead what he's saying is that "the ends don't justify the means" is actually a different objection masquerading as a truism, then yes, I'd agree with that.
E.g.,
A: This plan saves the most people on the planet. We just need to bomb every orphanage.
B: "the ends don't justify the means"
vs.
A: This plan saves the most people on the planet. We just need to bomb every orphanage.
B: Murdering orphans is wrong enough that it doesn't matter whether this saves more people than other plans.
Version 1 just hides the objection; version 2 states it. And as we carve out more and more specific modifications to the values we use in our equation, adding asymptotes and the like, it becomes no more efficient or useful to think in utilitarian terms than to think in the terms more natural to the other theories.