I'm going to give a critique of the methods mentioned.
Tobias has a nice approach, but it's restrictive in that it forces the bound to be a power of two (i.e. (L=2^p, p\in\mathbb{N}), or perhaps (2^p-1)). That isn't such a problem if you want the (upper) bound to be 1000; you say, what the hell, let's make it 1024. But if you want the bound to be (2^{1000}+1), going up to (2^{1001}) would be a big issue. It can also be just plain wrong. Suppose your upper bound is 8 (so anything less than or equal to 8 is fine, that is 4 bits) and you take (x=1, y=8): x has 1 bit, y has 4 bits, together 5 bits, so you would say: wow, that's out of range (even though (1\cdot 8=8) is fine). Another simple example, upper bound still 8 (4 bits), (x=3, y=3): x has 2 bits, y has 2 bits, together 4 bits, so you would say: you're not out of range (even though (3\cdot 3=9>8)).
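To make those two counterexamples concrete, here is a small Python sketch; the bit-counting test is my reading of Tobias's approach, not his exact code:

```python
def bits(n):
    """Number of bits needed to represent a positive integer n."""
    return n.bit_length()

def bit_sum_says_out_of_range(x, y, bound_bits):
    """Heuristic as I understand it: flag the product as out of range
    when bits(x) + bits(y) exceeds the number of bits of the bound."""
    return bits(x) + bits(y) > bound_bits

UPPER = 8                 # anything <= 8 is allowed, i.e. 4 bits
BOUND_BITS = bits(UPPER)

# False positive: 1 * 8 = 8 is in range, but 1 + 4 = 5 bits > 4 bits.
print(bit_sum_says_out_of_range(1, 8, BOUND_BITS), 1 * 8 <= UPPER)   # True True

# False negative: 3 * 3 = 9 is out of range, but 2 + 2 = 4 bits <= 4 bits.
print(bit_sum_says_out_of_range(3, 3, BOUND_BITS), 3 * 3 <= UPPER)   # False False
```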
Dearapppu and iddqd assume system/language specifics, namely that if the product is out of range, the number actually stored will be unrelated to the true value. Again assume the upper bound is 8, but my system can hold numbers up to (2^{32}). I take (x=5) and (y=2); their product is 10 (out of range), yet dividing 10 by either x or y gives back the correct value, so a check based on that assumption won't notice anything wrong.
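A quick sketch of why that matters, assuming the proposed test is the usual "multiply, then divide back" overflow check (my paraphrase, not their exact code):

```python
UPPER = 8            # the logical bound we actually care about
MACHINE_MAX = 2**32  # what the system can really represent

x, y = 5, 2
p = x * y            # 10: exceeds UPPER, but fits comfortably in the machine range

# Divide-back test: only detects corruption caused by a *machine* overflow.
overflow_detected = (p // x != y) or (p // y != x)

print(overflow_detected)   # False -- nothing looks wrong
print(p <= UPPER)          # False -- yet the product is out of the range we care about
```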
Amaan has a very good idea, but it's not correct. Assume upper bound (L^{up}), lower bound (L^{down}), and take x and y really small but positive ((x,y\approx 0)), so their product is in fact below the upper bound. Taking the logarithm of x and y, you can break the lower bound, because (\ln x) and (\ln y) go negative, so the logarithm of either one of them alone can break the lower bound and you would be reported out of range. Even if they alone don't break the lower bound, their sum can, whereas in fact you are only concerned with whether their product breaks the upper bound, since both numbers are positive.
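To illustrate, here is a small numeric sketch. The bounds are made up for the example, and I'm assuming the log-domain check compares the individual logarithms, and their sum, against (\ln L^{down}) and (\ln L^{up}):

```python
import math

L_DOWN, L_UP = 1.0, 1000.0           # hypothetical bounds, just for illustration
log_down, log_up = math.log(L_DOWN), math.log(L_UP)

# Case 1: the product is inside [L_DOWN, L_UP], yet a per-term check misfires.
x, y = 0.25, 40.0                    # x * y = 10, perfectly in range
print(math.log(x) < log_down)        # True: ln(0.25) < ln(1), x alone "breaks" the lower bound
print(log_down <= math.log(x) + math.log(y) <= log_up)  # True: the sum is actually fine

# Case 2: the summed logs fall below ln(L_DOWN), so the method reports "out of range",
# even though the only real concern (the upper bound) is never threatened.
x, y = 0.25, 2.0                     # x * y = 0.5, well below the upper bound
print(math.log(x) + math.log(y) < log_down)  # True: reported out of range
print(x * y <= L_UP)                         # True: the upper bound was never at risk
```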