I'm going to tackle this more directly than I feel the other answers have done.
Consider the following: broadly construed, harm in the ethical sense is something that can only happen to an agent - an entity with certain cognitive capacities (the ability to feel pleasure and pain, to make decisions, and so on; incapacitated persons are a legal corner case I'm leery of wading into). You can't, in an ethical sense, "hurt" a rock unless you believe the rock would experience something negative as a result of your actions. You can't declare it "wrong" to break a rock by virtue of its being a rock simpliciter. If it's wrong to break a rock, it's wrong because doing so infringes on somebody's property rights, contributes to the destruction of a living habitat, et cetera.
In the same way, we restrict the sorts of violence that are permitted with the intent of preventing harm; we want to minimize negative experiences, to ensure that unpleasant qualia are not inflicted on thinking, feeling things. Thus there's nothing wrong with acting violently toward a rock, excepting the aforementioned cases. And thus there's nothing wrong with shooting Nazi zombies in Call of Duty - we have no reason to believe that the zombies "feel" anything, or that they possess any sort of agency.
But this is not a claim which finds its philosophical origins in the mere fact that the zombies are "simulated". That, I will make explicit, is irrelevant. I should note that what follows is grounded in my studies in philosophy of mind - I'm not pulling this out of a magic hat.
The Mind
What are the kinds of things that "feel"? How does it happen that I, for example, am able to experience emotions and sensations? If I see a red apple, there's more happening than just the optical processing of light reflected off the apple's skin; there is additionally something it is like to see red, something it is like to be me seeing the apple. That's my "first-person subjective experience", and it belongs to me alone. It's the existence of this first-person subjective experience, I'd say, that motivates us to treat ourselves as agents. We want to minimize harm amongst humans because each human possesses an intricate, highly complex first-person experience, and we consider these experiences valuable. Similarly, we want to minimize harm to "sufficiently cognitive" non-humans (where exactly the boundary lies is debated), because cats and dogs certainly feel pain and have experiences of their own.
But then, where do these varied experiences come from? What do we have in common with other animals, and potentially with other non-biological structures which might have experiences - which might be conscious? I'm not going to dive into the colossal philosophy of mind debate here. I'll just summarize the consensus position: unless you believe in souls as distinctly separate from and unrelated to physical bodies (and not many philosophers these days do), or are an idealist, you'd be rational to believe that consciousness is, one way or another (and there are many proposed ways), tied to or derived from our physical forms. That is, I am conscious because my body, and particularly my nervous system, behaves in such a way that it "generates" my conscious experience (the details are debated). Because my neurons are connected to one another in such-and-such a way, and because the electromagnetic configuration across the whole matrix is arranged in such-and-such a way, I am conscious.
The puzzle then becomes figuring out what these "magical", consciousness-generating configurations are - and the best ideas we have right now suggest that a sufficiently complex organization of flowing information will generate some level of consciousness. Given how inherently subjective consciousness is, it's in practice more or less impossible to be sure of which things might give rise to it. Here theory and practice collide: how can we, for example, know whether a robot could ever be conscious if any hypothetical experience the robot has is totally private to it?
The answer, of course, is that we do the same thing we do for each other. I in fact have no way of knowing for certain that I'm not the only subjective experience in the world - you might all be philosophical zombies. But I trust that you're not, because my ethics works out far more sensibly that way. I treat you as if you feel, and you treat me the same - because we all demonstrate sufficiently complex behaviour that, as far as we can tell, we're probably conscious agents.
The Simulation
Now suppose we're all immersed in some higher-order simulation. Nothing changes. We still exhibit incredibly complex behaviour, we still possess (or appear to possess) our first-person subjective experiences, and we still feel and act as agents. Being in a simulation has no bearing on our moral worth. We warrant treatment according to our established rules because of how we behave - because we demonstrate a capacity for agency.
Return next to the Nazi zombies I've been shooting up. Watch how they scramble haplessly around the building, showing minimal behavioural complexity. The most complicated decisions they make amount to "take this route or take that route?" They don't even have anything like a pain subroutine - they just charge at me until one of us is down. I'm allowed to be violent toward them not because they're simulated, but because they're simple. They give me no reason to believe that they feel, so I'm under no pragmatic obligation to be nice to them.
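To make "simple" concrete, an enemy like that could be driven by logic on the order of the following sketch. This is illustrative Python, not actual Call of Duty code - the class, the method names, and the numbers are all my own invention:

    import random

    class Zombie:
        """A hypothetical enemy with the bare decision logic described
        above: pick a route, charge, keep going until destroyed. There is
        no pain state, no preference, nothing that could plausibly
        'feel'."""

        def __init__(self, health: int = 100):
            self.health = health

        def choose_route(self, routes: list[str]) -> str:
            # The entire "decision-making" capacity: a coin flip between paths.
            return random.choice(routes)

        def take_damage(self, amount: int) -> None:
            # No pain subroutine - damage just decrements a counter.
            self.health -= amount

        def update(self, routes: list[str]) -> str:
            if self.health <= 0:
                return "down"
            # Otherwise, charge the player along whichever route was picked.
            return f"charging via {self.choose_route(routes)}"

    zombie = Zombie()
    print(zombie.update(["left corridor", "right corridor"]))
    zombie.take_damage(100)
    print(zombie.update(["left corridor", "right corridor"]))

Nothing in there could be harmed: no state even represents suffering, let alone instantiates it.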
In short, we sanction violence in our video games because there's no reason to believe we're causing any pain, given how simple we make our characters. But if we're in a simulation ourselves, that changes nothing about our own behavioural, functional, and neurological complexity, all of which indicates that we each possess a subjective experience worth protecting.
Sorry if that was a bit of a ramble. I get a bit excited about mind stuff.