I am going to begin by assuming that by utilitarianism we mean taking our goal to be "pleasure," or perhaps "happiness," for the most people.
Time travel is actually relatively unimportant to answering your question, but we need to settle two preliminary questions about the metaphysics of time travel before we can address inventing future products before their original time:
- Can we fully grasp the effects of our time travel? (Here I mean specifically: can we know what our actions will do in terms of pleasure in our present and in our past/future?)
- Do we weight future happiness as equivalent to present happiness?
The reason is that among contemporary utilitarian views, there are a few debates that interface with these questions:
First, are we maximizing expected happiness or realized happiness? In other words, is what matters the intention of our actions toward producing happiness, or their net effects? To give an example, if I walk into a room with a bag of poisoned candy and plan to hand it to children to kill them, then I have a goal that is evil in terms of intent. If it turns out by accident that the candy saves all of their lives from some other poison (say, poisoned water they had already drunk), then I have had a greatly positive effect, and if only effects matter, my action was good. Conversely, if only intent matters, then the actual effects are irrelevant and my action was wrong.
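To make this contrast concrete, here is one minimal way to put it formally (my own illustrative notation, not anything standard): for an action $a$ with possible outcomes $o$,

$$
U_{\text{expected}}(a) = \sum_{o} P(o \mid a)\, u(o)
\qquad \text{vs.} \qquad
U_{\text{realized}}(a) = u(o^{*}),
$$

where $P(o \mid a)$ is the agent's credence that outcome $o$ follows from $a$, $u(o)$ is the happiness that $o$ produces, and $o^{*}$ is the outcome that actually occurs. In the candy case, $U_{\text{expected}}$ is strongly negative while $U_{\text{realized}}$ is strongly positive; the debate is over which quantity fixes the moral status of the act.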
Thus, if we assume that we can have full knowledge of our effects, then the right/wrong calculation shifts accordingly, and everything trivially follows. If we cannot know but we need to know, then our hands are tied with respect to these sorts of actions. If we do not need to know, then the answer to the first question is irrelevant.
Second, the answer to the second question matters in that we need to know whether our calculus cares about future happiness, present happiness, and past happiness equally. To borrow an idea from another debate, are we A-theorists or B-theorists about human happiness (cf. McTaggart on time)? In other words, is "now" privileged, or is "now" equally valuable with all other moments? If we are A-theorists, then we have quite a few calculations to make, and we will need to figure out which timeframe's utility concerns us most. If we are B-theorists, then we simply need knowledge as per our response to the first question.
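Put schematically (again in my own illustrative notation), the difference is just a choice of how to weight the happiness $h(t)$ realized at each time $t$:

$$
U = \sum_{t} w(t)\, h(t), \qquad
\text{A-theory-style: } w(\text{now}) > w(t) \text{ for } t \neq \text{now}, \qquad
\text{B-theory-style: } w(t) = 1 \text{ for all } t.
$$

A time traveler's actions ripple across many values of $t$, so the shape of $w$ determines how much each of those ripples counts.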
Third, there is a further issue about whether we should calculate the right/wrong of each action de novo at every moment, or whether what we want to do is establish rules that are happiness-maximizing and follow those (this is the act vs. rule distinction, as discussed by R. M. Hare).
If we are rule utilitarians, then the rules apply equally well whether we can time travel or not. What might need adjustment is our knowledge condition: if we can time travel, then our knowledge condition would probably be quite strong, because we could evaluate the happiness outcomes of the things we attempt.
Looking at your example, there is by and large nothing special about this case unless we have eccentric answers to the first two questions. In other words, it seems no different from stealing in the present if we are rule utilitarians, and no less in need of a happiness calculation if we are act utilitarians (with whatever conditions we have settled on regarding the knowledge-intention divide).
An issue that may gum up all of this is the idea that time travel puts us on parallel world-lines rather than an identical one. Then we must resolve how much we care about happiness in each world-line, and whether we privilege our own. But again, the time travel aspect in a sense works itself out rather than being a major concern for the morality of our actions.
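To see why it mostly works itself out, note that on the branching picture the bookkeeping only gains one more index (illustrative notation as before): a weight $\alpha_{b}$ for each world-line $b$,

$$
U = \sum_{b} \alpha_{b} \sum_{t} w(t)\, h_{b}(t),
$$

where $h_{b}(t)$ is the happiness realized at time $t$ in world-line $b$. Whether $\alpha$ privileges our own world-line is a further normative choice, but the structure of the calculation is unchanged.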