JavaScript Testing
The first set of conditions that you definitely need to test are those where the script performs the function it is there to perform.
You need to run tests to ensure that an animation script correctly animates, that a validation script permits valid data to pass through, and so on for whatever type of script you have. These are the paths through the script that should be followed most of the time, so they are the ones you need to be completely certain will work.
The second set of conditions that you need to test are those which deal with invalid input from your visitor. Validating a numeric field into which your visitor has typed 'one' is not an error as far as your script is concerned, since the actual purpose of the script is to detect invalid input and report it back to the person who entered it. What is an error to your visitor is just a regular part of your script's processing, and these paths through the script should be tested just the same as those where the script receives valid input. While these paths should be followed far less frequently than the first set once the script runs live, they deserve just as much testing, since they form part of the script's main functionality.
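To make these first two sets concrete, here is a minimal sketch, assuming a hypothetical validateAge routine and a throwaway assert helper (neither comes from any particular library), with one test for the valid-input path and two for the invalid-input path:

```javascript
// Hypothetical validation routine: accepts whole numbers from 1 to 120.
function validateAge(value) {
  var text = String(value);
  if (!/^\d+$/.test(text) || Number(text) < 1 || Number(text) > 120) {
    return { valid: false, message: 'Please enter a whole number from 1 to 120.' };
  }
  return { valid: true, message: '' };
}

// A tiny assert helper so the sketch runs without any test framework.
function assert(condition, label) {
  console.log((condition ? 'PASS: ' : 'FAIL: ') + label);
}

// First set of conditions: the script does what it is there to do.
assert(validateAge('42').valid, 'accepts a valid age');

// Second set of conditions: invalid input from the visitor is detected
// and reported, which is normal processing for the script, not a fault in it.
assert(!validateAge('one').valid, "rejects the word 'one'");
assert(validateAge('one').message.length > 0, 'reports a message the visitor can act on');
```

Note that the tests for 'one' belong in the plan with the same priority as the test for '42': rejecting bad input is what the script is for.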
There's a third set of conditions that may exist in your script and which, if present, you should probably test as well, since there is little point in including code if you don't know that it will work when it is needed. This group of conditions is commonly known as exception handling. In JavaScript, exception-handling code can generally be recognised because it includes references to reserved words such as try, throw, catch, and finally, or to predefined objects such as Error, EvalError, RangeError, ReferenceError, SyntaxError, TypeError, and URIError. In some instances these constructs are used to distinguish between the code supported by different browsers in situations where feature sensing doesn't work; if that is the reason for their use, then at least the main paths through the code need to be tested. In other situations, exception handling deals with conditions that are possible but never expected to occur. There you may decide not to test, since if the exception code doesn't work the result will be no different than if you hadn't added it in the first place, and you don't expect it to ever need to run anyway.
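As an illustration, here is a minimal sketch, assuming a hypothetical readStoredSettings routine, of exception handling whose paths can only be shown to work by deliberately forcing them in a test:

```javascript
// Hypothetical routine: read settings saved earlier in localStorage.
// Malformed stored data is possible but never expected, so the catch
// branch is exactly the kind of code only a deliberate test can prove works.
function readStoredSettings(key) {
  var raw = localStorage.getItem(key);
  if (raw === null) {
    return {}; // nothing stored yet: a normal, expected path
  }
  try {
    return JSON.parse(raw); // throws a SyntaxError on malformed JSON
  } catch (e) {
    if (e instanceof SyntaxError) {
      return {}; // corrupt data: fall back to defaults rather than fail
    }
    throw e; // anything else is genuinely unexpected, so pass it on
  }
}

// Forcing the exception path in a test (run in a browser):
localStorage.setItem('prefs', '{not valid json');
console.log(JSON.stringify(readStoredSettings('prefs'))); // logs "{}"
```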
There is one more set of conditions that you may or may not decide to test: those that are also never expected to occur, but where special processing would be required to handle things if they ever did. For example, consider an animation script which moves an object around the web page. What should the script do if the viewport is resized to be smaller than the object being animated? When writing out your test plan it is well worthwhile at least listing these sorts of conditions, but it might not be worthwhile to actually test them. It may not even be worth adding the extra code that would be needed in your script to allow for the situation ever happening.

If you work for a large company that has a separate testing team, then that team will come up with all sorts of tests like this and run them in order to show that the script as written fails, since it is a testing team's job to find as many tests that the code will fail as possible. That doesn't mean these failed tests will necessarily result in the script being updated so that the tests can pass; the purpose of such testing is to find all the situations where the script fails so that someone else can determine which of those failures must be fixed and which don't matter. If you work for a smaller company or by yourself, the resources for such exhaustive testing are not available, and you will skip running tests that you have decided in advance are not economical to handle in your code, because the possibility of the condition ever occurring in the live script is not worth the impact that allowing for it would have on the script the rest of the time. Recording all such tests, along with your decision not to run them, on your test plan means that you have a record showing you considered those conditions and deliberately decided not to test them. You can revisit that decision if a change to the script makes the condition far more likely, or if it starts to show up with the live script.
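For the animation example above, the extra code might look something like this sketch (the stepAnimation function and its details are invented for illustration); whether that guard, and a test for it, is worth the cost is exactly the judgement your test plan should record:

```javascript
// Hypothetical animation step that moves an absolutely positioned element.
function stepAnimation(el, x) {
  var maxX = document.documentElement.clientWidth - el.offsetWidth;
  if (maxX <= 0) {
    // The viewport is now narrower than the object being animated: a
    // condition you might list on the test plan and then deliberately
    // decide not to test, recording that decision for later review.
    return;
  }
  el.style.left = Math.min(x, maxX) + 'px';
}
```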
By breaking up your proposed tests into these different sets of conditions, you make it far easier to determine which tests are the most important to run and which may be delayed or skipped altogether if there is insufficient time to run them all before your script needs to go live. It also makes it easier to decide, when a test fails, whether it is essential that the script be fixed and retested, or whether it is sufficient to simply note the failure on the test plan in the knowledge that the conditions of that test are unlikely to occur in normal use of the script.