So I was recently hired as a QA/firmware developer at a printer company, and a lot of my work involves writing small widgets/test apps to run on the printers themselves to verify that everything functions correctly. Before you move this to Careers or Programmers Stack Exchange: I'm actually asking about the source code here...
Since I just graduated college with my B.S. in CS, I'm pretty new to the professional world, and to QA especially.
Anyway, I'm having a hard time grasping a really good method for producing a good set of test cases.
For example, say you're inputting numbers from a keypad (0-9). To test, you'd cover the accepted range (say 1-100), but then I ask myself how many values in that range I should test (keep in mind some of these are impossible to automate, since we occasionally have to press the keys by hand).
Then you'd obviously test outside the range too (but how many times?).
And then there's inputting ASCII characters that don't belong, like * $ %, or letters. I'm a bit confused about how to write good test cases for bounded/unbounded input.
Any ideas?
If it helps, you're hitting a genuinely difficult problem here: anyone who tells you it's easy or trivial to pick the right test cases out of an almost infinite number is either ignorant or trying to sell you a very expensive tool!
Grouping your inputs into families (aka equivalence partitions), as another answer suggests, can be helpful. I'd suggest reading up on test design. I like Lee Copeland's book "A Practitioner's Guide to Software Test Design", but you might also find Torbjörn Ryber's "Essential Test Design" useful; it's available as a free PDF download here: http://www.ryber.se/?p=213 - take a look at chapter 10 to start with.
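To make the partition idea concrete for your keypad example: a minimal sketch, where `validate_input` is a hypothetical stand-in for whatever your firmware's real validation routine does, and the 1-100 range is just the example from your question.

```python
def validate_input(raw: str) -> bool:
    """Hypothetical validator: accept only integers in [1, 100]."""
    if not raw.isdigit():
        return False
    return 1 <= int(raw) <= 100

# One representative per partition, plus the values at and just past each
# boundary. That is usually enough: extra values inside the same partition
# rarely find new bugs, which answers the "how many in range?" question.
ACCEPT = ["1", "2", "50", "99", "100"]               # valid partition + its boundaries
REJECT = ["0", "101", "-1", "", "abc", "$", "1.5"]   # one per invalid partition

for value in ACCEPT:
    assert validate_input(value), f"expected accept: {value!r}"
for value in REJECT:
    assert not validate_input(value), f"expected reject: {value!r}"
print("all partition/boundary checks passed")
```

The point is that each test value earns its place by representing a whole partition or sitting on a boundary, so the list stays short enough to key in by hand.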
Glowcoder's suggestions to look at bug reports for clues and at the code for further ideas are both well worth following up. Also be aware of the possibility of "invisible" boundaries, i.e. limits that aren't obvious just from looking at the code or requirements: for instance, numbers below a certain value work just fine, and then at some apparently totally arbitrary value they suddenly start to fail. Take a look at this for an example: Strangest language feature (and yes, I have met that one in the wild).
This is one good reason why it's worth sprinkling in the odd high value and varying your test data as much as possible (within reason): you increase your chances of coming across something you just would not have predicted. It's also a big argument against running exactly the same test cases over and over. If they're automated, the cost is lower and they act as change indicators; but if you're having to key in values manually, you might as well switch them up a bit and cover a bit more of the search space each time.
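One way to vary the data without losing the boundary coverage is to pin the boundary values and randomize the rest. A rough sketch, assuming the same example 1-100 range (the function name and partition bounds are illustrative, not from any real spec):

```python
import random

def pick_manual_test_values(seed=None):
    """Draw fresh in-partition representatives each run, but always
    keep the boundary values themselves in the list."""
    rng = random.Random(seed)
    return {
        "valid":    [1, 100, rng.randint(2, 99)],         # boundaries pinned, interior varies
        "too_low":  [0, rng.randint(-1000, -1)],
        "too_high": [101, rng.randint(102, 10_000)],      # sprinkle in the odd high value
    }

print(pick_manual_test_values(seed=42))
```

Run it once per manual test pass (with a logged seed so a failure is reproducible) and over many passes you cover more of the search space than a fixed list ever would.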
Here's a video on boundary testing, and some pointers to further resources: http://www.testingreflections.com/node/view/4292