If you fiddle with logical pluralism and the set-theoretic multiverse enough, you can find a way to infer infinitism from foundationalism. Now, foundationalism usually comes equipped with an exclusivist aspect: "Only foundationalism is true, i.e. the only way to justify beliefs is by tracing them to foundations." Historically, perhaps for theological kinds of reasons, foundationalism was also equipped with a subtle variant of the well-ordering principle: not only is there a well-founded regress of reasons, but there is, ultimately, only one such regress. (Note that the foundation and choice axioms are quite independent of each other, and that in an ambiguated intuitionistic/type-theoretic context, choice can be "described" in terms of phrases like "propositions-as-types" and "formulae-as-monotypes" (pg. 43).)
But so the following derivation either follows even from the outright exclusivist and singular model of the regress as well-founded, rendering that position (hopefully) self-defeating, or it at least shows how you can combine an inclusivist foundationalism with infinitism.
- Every well-founded set A of all elements with property a is such that ¬Aa (i.e., A itself lacks the property a); a minimal formalization appears after this list. Otherwise, A would be an element of itself and therefore not well-founded.
- On the flip side, every hyperfounded set X (hyperfounded sets are infinite descending elementhood sequences) has a similar meta-property, except in the reverse direction: if X is the hyperfounded set of all sets with property x, and if by definition X isn't cofounded (circular, i.e. an element of itself or caught in a membership loop with its own other elements besides), then X has to lack the property x.
- Now, imagine a sequence of axioms for introducing larger and larger sets in such terms. Here, we assume that the purpose of the axiomatic method is to implement foundationalism mathematically: axioms are the base nodes in the epistemic graph of a foundationalism-friendly regress. Since we obtain the larger and larger sets by their excession of their elements' parameters, we can define such a sequence of axioms in terms of "how far from absolute infinity" they are. The highest axiom is the fewest steps from being overgeneralized when taken as not admitting of excession: if the axiom introduces a set with property x, and there is no set above this one that well-foundationally holds all x-sets, then the axiom becomes the assertion that all further sets, on to absolute infinity, have that property. So let us say that, whatever x is, it admits of the fastest overgeneralization as such; we'll say that x is one step from being overgeneralized.
- Below x, then, there is some axiom for introducing sets with a property x - 1. This will be two steps from overgeneralization.
- Then there are axioms below for properties that take three, four, five, ... steps to be overgeneralized. Assuming some sufficient replacement scheme, we can then get to an axiom that is omega-many steps away, omega_1-many steps away, etc.
- But then not all of these axioms can be combined in a single well-founded world. Suppose there is a large cardinal on the omega-level. Using powerset (if this has not been exceeded), replacement, etc., we can unfold some more cardinals from this base. But where do we situate the first cardinal introduced by the "next" axiom? Well, there isn't a directly next axiom. To go from the omega-level to any of the n-levels, we'd have to leap to them, not necessarily arbitrarily per se (perhaps some function on the omega-level could latch on to a specific n by reduction from omega, although such a function sounds like it would be tortuous to identify at best), but still in a way that prevents the overall combination of all the cardinals given by the in-play sequence of axioms into a single universe of sets/cardinals.
- So either there must really be only a finite sequence of such excession axioms, or if there are infinitely many of them in descending order, this is tantamount to the set of all those axioms being internally well-foundational in purpose but externally infinitistic in character. Ergo, this kind of foundationalism implies some kind of infinitism in the limit. QED
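As a minimal sketch of the first bullet's reasoning (assuming the naive reading on which A = {x : a(x)} exists, and taking "well-founded" to mean there is no infinite descending membership chain), the argument can be laid out like so:

```latex
% Sketch only: assumes A = {x : a(x)} exists and that well-foundedness
% rules out any infinite descending membership chain.
\begin{align*}
&\text{Let } A = \{\, x \mid a(x) \,\}.\\
&\text{Suppose, for contradiction, that } a(A).\\
&\text{Then } A \in A \text{ by the definition of } A,\\
&\text{so } A \ni A \ni A \ni \cdots \text{ is an infinite descending } \in\text{-chain,}\\
&\text{contradicting the well-foundedness of } A.\\
&\text{Hence } \neg a(A).
\end{align*}
```

The second bullet runs in parallel: if X had property x, then X would be an element of itself, which would make X cofounded, contrary to hypothesis.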
To more directly answer your question, though: consider the possibility of a hypergraph with no nodes but two edges:
> Alternately, edges can be allowed to point at other edges, irrespective of the requirement that the edges be ordered as directed, acyclic graphs. This allows graphs with edge-loops, which need not contain vertices at all. For example, consider the generalized hypergraph consisting of two edges e1 and e2, and zero vertices, so that e1 = {e2} and e2 = {e1}. As this loop is infinitely recursive, sets that are the edges violate the axiom of foundation. [emphasis added]
A word of caution: take the above quote with a good-sized grain of salt, seeing as it's been flagged in the given Wikipedia article as unsourced information. Still, it was cited on the MathSE without detraction, I think (I'll check up on that again later), so I will assume it goes through. Now, in an epistemic graph, the edge relation is more or less an inferential/discursive relation (when two nodes, taken for propositions, are given, their edges are discursive relations between those propositions), so a loop of two such edges is effectively a representation of inference relations unto themselves. (Cf. Lewis Carroll's dialogue about modus ponens, "What the Tortoise Said to Achilles.")
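To make the quoted structure concrete, here is a minimal sketch in plain Python (no graph library assumed; the names e1 and e2 come from the quote, everything else is illustrative), representing the two edges and checking that following the containment relation never bottoms out and never touches a vertex:

```python
# A toy model of the quoted generalized hypergraph: zero vertices and two
# edges e1, e2 with e1 = {e2} and e2 = {e1}.  (Names are illustrative only.)

vertices: set[str] = set()                      # no nodes at all
edges: dict[str, set[str]] = {
    "e1": {"e2"},                               # e1 "contains" only e2
    "e2": {"e1"},                               # e2 "contains" only e1
}

def in_membership_loop(start: str) -> bool:
    """Follow the containment relation from `start`; report True if we
    revisit an edge, i.e. if the descending chain never bottoms out."""
    seen = set()
    current = start
    while current in edges and edges[current]:
        if current in seen:
            return True
        seen.add(current)
        # follow an arbitrary member (here each edge has exactly one)
        current = next(iter(edges[current]))
    return False

print(in_membership_loop("e1"))   # True: e1 -> e2 -> e1 -> ...
print(len(vertices))              # 0: the loop involves no vertices
```

Read as an epistemic graph, each edge is an inference "justified" only by the other, which is the Carroll-style regress in miniature.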
So, freestanding inference relations can actually be repurposed as logical imperatives of inference: some logicians will say things like, "From A, infer B," or, "Infer B from A." Then the e1/e2 cycle would be two imperatives directed at each other somehow. Oftentimes, when introducing axioms/premises, we do use imperatives like, "Let x = y," or, "Assume that S," and so on. Accordingly, if there are genuine foundationalist solutions to at least some regress problems, it's possible that we could trace their origins down to such an e-cycle, although only if we also had nodes in the same rough logical space, with these nodes having some kind of (epistemic) property that holds the cycle as nodeless on one level while nodeful on another (so: lower- and higher-order sets of nodes and edges, perchance). So in fact, there are ways to take sets of edges in one domain and reintroduce them in another domain (or on another level of the same domain) as single nodes; a sketch of this move follows.
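Continuing the toy model from above (again, every name here is illustrative rather than standard), one way to picture that move is to reify the edge-cycle as a single node in a higher-order graph, from which an ordinary well-founded chain can then hang:

```python
# Sketch: the e1/e2 edge-cycle from the previous snippet, reintroduced on a
# higher level as one node ("hypernode"), with a well-founded chain of
# further beliefs hanging off it.  All names are made up for illustration.

lower_edges = {"e1": {"e2"}, "e2": {"e1"}}        # the nodeless edge-cycle

def collapse_cycle(cycle: set[str]) -> str:
    """Reify a set of mutually pointing edges as one named higher-order node."""
    return "hypernode(" + ",".join(sorted(cycle)) + ")"

base = collapse_cycle(set(lower_edges))            # "hypernode(e1,e2)"

# Higher-order epistemic graph: the reified cycle now behaves like an
# ordinary base node, and everything downstream of it is well-founded.
higher_graph = {
    base: {"belief_1"},
    "belief_1": {"belief_2"},
    "belief_2": set(),                             # the chain bottoms out
}

print(base)                                        # hypernode(e1,e2)
print(sorted(higher_graph))                        # the two levels coexist
```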
What we are claiming now is that the question of "foundationally justifying" logic might be taken for a cogent display of a manifold of nodeless edges, except that if, e.g., the 2-cycle is the primitive (or maybe if there's a possible self-cycle of this nature...), then in another dimension of that discourse the 2-cycle counts for a hypernode, and so we can also represent what's going on in a more normal manner, emphasizing only that logic's justification occurs, at its lowest order, in a relatively node-free context, such that the rules of logic don't themselves occur as axioms/premises in more substantive epistemic contexts as such.
One upshot of all this is that, if we try to read restrictions on "realistic" logics off the mathematical properties of nodeless-but-edgeful hypergraphs, we will be using mathematical understanding to interpret logical understanding, even though we might think that logic precedes mathematics somehow. Then, if we did think there was some partial priority for logic, here, we would end up with a coherentist system. But methodological coherentism actually is an aspect of much conventional mathematical practice: some theorists might prefer proof theory to model theory, despite these theory-types' reciprocal characters, but that reciprocity is better taken as a coherentistic indicator. Then the even more complicated practice of translating different logics into each other can be seen as an even stronger coherentist moment in the metasystem (you improve your coherentist justification for a pair of mathematical and logical theories by translating the pair into more and more other systems).
For technical reasons, it is possible to take an elementary cofounded set A = {A} and extract an infinitary set from it. However, it is not necessary that you do this. In this essay about the axiom of foundation's place in modern set theory, the authors forge a set world by starting with a set At of Quine atoms (which are self-singletons), except that they then apply the operation WF to At, so that their world is WF(At) (the sets well-founded over those atoms). In other words, with an inclusive foundationalism, you can have a circular set of basic beliefs which nevertheless, in light of the epistemic graphs it can be further equipped with/embedded into, happens to ground a very well-founded sequence of further beliefs. For a hopefully intuitive example (see the sketch below), suppose that the initial cycle is a Quine triangle, but there are also individual edges off from each node that convergently land on another node besides those in the Quine trinity: a fourth node which does not represent itself as an element of itself and connects onward with other such well-foundational nodes.
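A minimal rendering of that example (node names and the cycle-detection helper are illustrative only):

```python
# Sketch of the "Quine triangle" example: three nodes in a membership/
# inference cycle, each also pointing to a fourth node d that sits outside
# any cycle and continues well-foundedly.

graph = {
    "q1": {"q2", "d"},      # the triangle: q1 -> q2 -> q3 -> q1
    "q2": {"q3", "d"},
    "q3": {"q1", "d"},
    "d":  {"e"},            # the fourth node, outside the cycle
    "e":  set(),            # ...continuing a well-founded tail
}

def reaches_cycle(node: str, seen=None) -> bool:
    """True if some path from `node` revisits a node (i.e. hits a cycle)."""
    seen = set() if seen is None else seen
    if node in seen:
        return True
    return any(reaches_cycle(nxt, seen | {node}) for nxt in graph[node])

print(reaches_cycle("q1"))   # True: the triangle is circular
print(reaches_cycle("d"))    # False: everything past d is well-founded
```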
Note that, from what has been said so far, nothing prevents us from imagining an epistemic graph that is initiated by a cyclical subgraph, proceeds for a while along a well-founded pathway, gets interrupted by another cycle, which then branches off well-foundationally in other directions, etc., as if we had a circle with a line coming off it from at least one point; that line zig-zags or smoothly sails along until its last edge connects to a point on another circle, and then opposite this point on that circle there is yet another line going along its merry way, and so on and on. (And nothing prevents us from imagining an epistemic graph that resembles a crossword puzzle, or maybe even a set of circles that intersect each other at various points, or which are nested and connected by other lines, etc.)
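For what it's worth, here is a tiny instance of that alternating circle-line-circle-line picture in the same toy representation (all node names invented for illustration):

```python
# Sketch: cycles and well-founded stretches alternating in one epistemic
# graph, as in the "circle, line, another circle, another line" picture.
graph = {
    # first circle
    "c1": {"c2"}, "c2": {"c3"}, "c3": {"c1", "p1"},
    # a line running off the first circle
    "p1": {"p2"}, "p2": {"d1"},
    # second circle
    "d1": {"d2"}, "d2": {"d3"}, "d3": {"d1", "q1"},
    # another line going along its merry way
    "q1": {"q2"}, "q2": set(),
}
print(len(graph))   # 10 nodes mixing two cycles with two well-founded stretches
```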