
4 Tips to write test cases


 
It is important for a tester to keep improving how they write test cases, so I have summarized the key things I normally consider when writing them. In this article, I start with the meaning of a test case and the information it should contain, and then explain the tips for writing good test cases.

What is a test case?

A test case is a sequence of steps to test the correct behaviour of software by comparing the actual result with the expected result or outcome.

Required information in the test case

- Test case ID
- Component
- Test case description
- Steps to be executed
- Test data
- Expected result
- Pass/Fail
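
For illustration, here is a minimal sketch of how one such test case record might look if captured in code; the login feature, field names and values are hypothetical, not taken from any particular project.

# A hypothetical test case record using the fields listed above.
test_case = {
    "test_case_id": "TC-LOGIN-001",
    "component": "Login",
    "description": "Valid credentials log the user in and open the dashboard",
    "steps": [
        "Open the login page",
        "Enter the username and password from the test data",
        "Click the Login button",
    ],
    "test_data": {"username": "alice", "password": "correct-password"},
    "expected_result": "User is redirected to the dashboard page",
    "status": None,  # set to "Pass" or "Fail" after execution
}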

4 Tips to write test cases

1. Focus on requirement analysis
First of all, we should analyze what we are going to test. This activity happens during the development phase, so we should focus on requirement analysis. Keep in mind that we cannot write good test cases from poor requirements.

2. Make the test case for unfamiliar tester
We should write test cases that are easy to understand, so that a tester who is unfamiliar with the application can still execute them. Some may wonder why they should spend extra time writing test cases for this purpose. Consider what happens if we need to bring in a tester from another team to help with testing, but they cannot help because our test cases are unreadable.

3. Apply black box testing technique
We should always apply black box testing techniques (for example, equivalence partitioning and boundary value analysis) to derive effective test cases.
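
As a hedged illustration of one such technique, the sketch below applies boundary value analysis to a hypothetical is_valid_age function that is supposed to accept ages 18 to 60; the function name and limits are assumptions made up for the example, and pytest is used as the test runner.

import pytest

def is_valid_age(age):
    # Hypothetical system under test: accepts ages 18 to 60 inclusive.
    return 18 <= age <= 60

# Boundary value analysis: test just below, on, and just above each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected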

4. Always cover negative flow
We should cover negative flows in our test cases, because developers often make mistakes there. In my experience, defects in negative flows often cause big problems in the software.
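
As a small, hedged sketch of what a negative-flow check can look like, the tests below exercise a hypothetical withdraw function with invalid inputs; the function and its error behaviour are stand-ins invented for the example.

import pytest

def withdraw(balance, amount):
    # Hypothetical system under test: rejects invalid withdrawal amounts.
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdraw_negative_amount_is_rejected():
    # Negative flow: the application must refuse a negative amount.
    with pytest.raises(ValueError):
        withdraw(balance=100, amount=-5)

def test_withdraw_more_than_balance_is_rejected():
    # Negative flow: overdrawing must fail rather than silently succeed.
    with pytest.raises(ValueError):
        withdraw(balance=100, amount=500)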



---
    Best Regards
    Venu Naik Bhukya
    Blog: www.nuve.info

Development Model: Agile Software Development



What is Agile Software Development?

Agile software development is a methodology based on incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams.

Agile Manifesto

The Agile Manifesto emphasizes that:
1) Individuals and interactions over processes and tools.

2) Working software over comprehensive documentation.

3) Customer collaboration over contract negotiation.

4) Responding to change over following a plan.

That means we value the items on the left more than those on the right.

Supplementing the Manifesto, the Twelve Principles explain what it means to be Agile.

1) Our highest priority is to satisfy the customer through early and continuous delivery of valuable software. This is achieved through incremental development: the team splits a big feature into many small working features and releases them continuously.

2) Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage. Because we release small working features, we get feedback from the customer sooner, and changes can flow back into development more easily.

3) Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

4) Business people and developers must work together daily throughout the project. With such short timescales, both parties need quick answers from each other.

5) Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

6) The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

7) Working software is the primary measure of progress. If the team cannot deliver working software, even a small release to the customer is unacceptable.

8) Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

9) Continuous attention to technical excellence and good design enhances agility.

10) Simplicity is essential.

11) The best architectures, requirements, and designs emerge from self-organizing teams.

12) At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly. One such activity is the retrospective meeting, which is used to continuously improve the team's process.

Test-Driven Development (TDD) in Agile


Test-Driven Development (TDD) is a developer practice that focuses on building the test first. The developer first writes an automated unit test that defines the expected behaviour of a new function, then writes just enough code to make that test pass. After that, the developer refactors to optimize the code and re-runs the tests. Passing tests confirm the expected behaviour as developers build and refactor the code.
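
Here is a minimal red-green sketch of that cycle using Python's built-in unittest; the add function is a deliberately trivial, hypothetical example, and in real TDD the test would be written (and seen to fail) before the function exists.

import unittest

# Green step: the simplest code that makes the tests below pass.
def add(a, b):
    return a + b

# Red step: the tests are written first and define the expected behaviour.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()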

The TDD process helps developers save effort on unit testing when they have to retest often, so it supports the Agile method, where developers have to deliver working software frequently.

10 Tips for Becoming an Effective Agile Tester



1. Focus on Risk/Change Based Testing – In an Agile environment, the goal is always to get the highest quality product to market in the shortest amount of time. Therefore an Agile tester must be able to assess which areas of the application are changing, how they all fit together and prioritize tests based on areas that pose the greatest risk of failure.



2. Understanding Product Architecture – Put simply, this means understanding exactly how data flows through an application. This allows us to test the impact of sub-system component failures, as well as address potential security vulnerabilities. As a result, when defects are found an Agile tester will be able to help developers fix the issues quickly and thoroughly early on.



3. Understanding Business Objects and Context – Since testing should always align with customer context, an Agile tester must understand how the end user will interact with the product. If you focus on the applications and scenarios that provide value to the end user you will be able to dimensionalize your testing strategy. Meaning, if you can divide a product up based on the product architecture and customer context you will be better equipped to test for potential issues that would negatively impact end users.

The added benefit of dimensionalizing the product from the customer view point is that the tester will be able to tag defects with the affected business objects. That way when you begin to review your test sets, you will be able to recognize areas with higher instances of defects. This will allow you to address the issues in a more focused and effective way.



4. Application Logs – While reporting 'when feature X breaks we do Y' provides value, it is not enough in an Agile environment. Agile testers should be leveraging the wealth of information contained in application logs, for a number of reasons. First, logs provide insight into what is happening at the system level, which gives you a better understanding of what the defect actually is. Second, application logs allow you to catch 'silent errors' – i.e. errors that occur without the end user ever even knowing. Lastly, using them allows testers to gain credibility and work more closely with the development team, which is an integral component of being Agile.
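
As a hedged illustration, the small script below scans an application log for errors that may never have surfaced in the UI ('silent errors'); the log file name and line format are assumptions made up for the example.

import re

# Assumed log format: "2024-01-01 12:00:00 ERROR payment-service: timeout"
LOG_PATTERN = re.compile(r"^\S+ \S+ (?P<level>\w+) (?P<component>[\w\-]+): (?P<message>.*)$")

def find_silent_errors(log_path):
    # Return ERROR/WARN entries so the tester can compare them with what the UI showed.
    findings = []
    with open(log_path) as log_file:
        for line in log_file:
            match = LOG_PATTERN.match(line.strip())
            if match and match.group("level") in ("ERROR", "WARN"):
                findings.append((match.group("component"), match.group("message")))
    return findings

if __name__ == "__main__":
    for component, message in find_silent_errors("app.log"):
        print(component + ": " + message)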



5. Browser-Based Tools – In addition to application logs, browser-based tools are incredibly important for Agile testers to be able to quickly troubleshoot defects. Two examples of easy-to-use browser tools are Developer Tools in Google Chrome and Firebug in Firefox. Tools such as these provide value in the form of metrics, errors, analysis, JavaScript consoles and debuggers.




6. Test Document Requirements – Many Agile organizations utilize some variant of SCRUM or Kanban, in which an element of work is tackled by a team to ensure that all aspects function as intended. In order to do this, the team will need to create test scenarios. The acceptance criteria of these test scenarios often become a requirement repository over the long term. While some people view the written test scenario as a waste, Brian Rock contends that these test scenarios become very valuable to the Agile tester for future automation, regression testing and product analysis.

7. Automation – Automation is an excellent tool but it goes without saying that it is not the 'silver bullet' to fix the testing problem. Frameworks can be expensive to create and even more expensive to maintain. However, Agile testers need to be able to leverage automation for quick, easily repeatable tasks. If the delta of change is low, chances are you should automate it.
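
As an example of the kind of quick, repeatable task worth automating, here is a hedged Selenium sketch of a login smoke test; the URL, element locators and credentials are hypothetical, and it assumes the Python selenium package and a Chrome driver are installed.

from selenium import webdriver
from selenium.webdriver.common.by import By

def run_login_smoke_test():
    # Hypothetical application URL and element names, purely for illustration.
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.NAME, "username").send_keys("test-user")
        driver.find_element(By.NAME, "password").send_keys("test-password")
        driver.find_element(By.ID, "login-button").click()
        # The smoke check: after login the dashboard title should be shown.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()

if __name__ == "__main__":
    run_login_smoke_test()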

8. Exploratory Testing – Exploratory testing (ET) is common sense testing. ET focuses on instantaneous testing and learning and thus should be a primary tool of all Agile testing. Agile testers can leverage knowledge gained through ET on every future product iteration. Additionally, it allows the Agile tester to make variations in testing to quickly ferret out bugs.



9. Testing From the Customer Perspective – In many ways, this goes back to understanding the business objects and how testing should focus on the end user – despite the fact that the end user is going to use the product in ways that you never thought feasible, wise or even sane. Testing from the customer perspective goes a step further to state that the fitness of the product for use by the end user is the primary standard of quality. After all, without customers there is no product. Agile testers are in a unique position to face QA challenges from this context.



10. Know That Change is Constant – Perhaps this is a bit cliché, but a static product is a dead product. As an Agile tester you should be able to give a quality assessment of the product at any stage in its life cycle. If you are able to do this, you will be better equipped to handle the ever changing landscape that is Agile.


We would like to thank Brian Rock for these great tips, tools and strategies on how to test successfully in an Agile environment. We would also like to invite you to keep the discussion going on the uTest forums and in the comments. Let us know which of these tips is most applicable within your organization or if you have any of your own suggestions.

BASICS of Automation



 
1. What is Automation?

Automation means executing your existing regression test cases without any human intervention, with the scripts keeping stakeholders updated about the current status.

For good automation, the manager and the team have to plan a lot of things in advance, so that the automation suite is robust, maintainable and error free.

Following are basic points you should think of:

a. Automation framework installation in new machine
b. Proper documentation for each step a QA professional needs to start the suite
c. Easy to learn and easy to execute
d. Easy to add New Test Cases
e. Easy to Add new product
f. Easy to maintain Input data for different products
g. Good Logging for each test case, so that it can be used later if automation finds an error (a minimal logging/reporting sketch follows this list).
h. Reporting to Stakeholder at each stage
i. Scaling of your automation suite
  • Increase in number of cases
  • Increase in number of systems
j. Script to check performance stats and Accuracy Stats
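
For point (g) above, here is a minimal sketch of per-test-case logging using Python's standard logging module; the test case ID, log file name and message format are assumptions made up for the example.

import logging

# One log file per run; every entry carries the test case ID so that errors
# found by automation can be traced back later.
logging.basicConfig(
    filename="automation_run.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def log_result(test_case_id, passed, details=""):
    if passed:
        logging.info("%s PASSED %s", test_case_id, details)
    else:
        logging.error("%s FAILED %s", test_case_id, details)

# Hypothetical usage after executing a test case:
log_result("TC-LOGIN-001", passed=True, details="login succeeded in 1.2s")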

2. Automation Effort

There is an easy way to estimate the automation effort needed to implement the system in use. All the different steps can be divided into 4 high-level verticals (Plan, Design, Test, and Maintain). The effort will also depend on how many automation engineers and how many machines you have available to achieve your goal. Think of PDTM when you want to automate any application.

PDTM stands for:


P: Plan 
D: Design 
T: Test
M: Maintenance 


The 4 phases of automation are described below:



2.1 Task Layout

Plan:


1. Estimate Time
2. Define Scope
3. Create Test Plan
4. Form a team with different levels of scripting knowledge on the different tools needed for automation
5. Train the team on the different tools and on integrating the different scripts
6. Finalize all features and test cases.
7. Create Final Schedule
8. Finalize the different environments your test cases will run on
9. Identify Test Data set.

Design:


1. Read Documents
2. Run Manual cases
3. Design Framework
4. Different features and test cases can be selected at will by the user.
5. Review Framework
6. Design Reporting
7. Design Error Handling
8. Verify that the different features and test cases are picked up properly

Test:


1. Verify each test case
2. Verify test cases in batch 
3. Run test cases with different dataset
4. Introduce defects to verify that your script catches those defects.

Maintenance:


1. Put scripts under version control
2. Make changes as and when features change
3. Maintain scripts
4. Store scripts in a secure location
5. Keep scripts updated with the latest release

Automation Effort Graph


Overall effort calculation may have the following components:

1. Test Requirement gathering & Analysis 
2. Framework design and development 
3. Test Case development (in case the available manual test cases are not compatible)
4. Script Development 
5. Integration Testing and Baseline. 
6. Test Management. 

2.2 Roles and Responsibilities

Manager


1. Estimate Time of automation
2. Understand Scope
3. Create a Plan
4. Form Team
5. Train Team with the product
6. Finalize features to automate
7. Create a Schedule
8. Check Environment Availability
9. Identify Test Data Set to be used for automation

QA Professional

Understand Phase:

1. Go through documents of application
2. Install the application and execute a few cases manually

Design Phase:

1. Understand / Design Framework

Development Phase:

1. Write Code for Framework 
2. Write code for Reporting
3. Write code for error handling
4. Write code for important functions in library 

Coding Phase:


1. Code each test case
2. Do thorough testing on each script
3. Run your scripts with several test data sets
4. Introduce defects and check whether your scripts catch them

Maintenance Phase:


1. Put your scripts under version control
2. Make changes as and when features change
3. Maintain scripts for new environments and data sets
4. Store scripts in a secure location
5. Keep scripts in sync with the application release

Important Tips on Android Usability Testing



Usability testing is one of the most important tasks of a tester before launching a mobile application to the market. Testers as well as developers should focus on this type of testing right from the time they start developing the application.

Usability for mobile applications is different from web applications. Jakob Nielsen explains how mobile testing firms can conduct usability testing on applications without financially affecting the firm. "The main thing I recommend is to study your actual users: invite a handful of representative customers to your location and run them through simple usability studies of your software. One day in the lab is worth a year in university lecture halls, in terms of actionable lessons learned. (And remember that your "usability lab" can be a regular office or conference room — as long as you shut the door.)," as reported by Mike Brown on mobileapptesting.com.

InfoGraphic Design Team designed an infographic to highlight some important tips on Android Usability Testing.  



3 Tips to Deal with the Challenges of Mobile Device Testing




Mobile phones are now part and parcel of our lives. They have gobbled up every other device to become one of the most important things we carry, now as important as our wallets and purses. Developments in the mobile space are happening at a very fast pace, and the end result is explosive growth in the number of smartphones on the market.

However, this growth in the mobile space has posed many challenges to testing professionals. Matt Johnston of uTest provided testers with some tips to deal with the problems of mobile device testing, as mentioned by Jennifer Lent on searchsoftwarequality.techtarget.com. Let us have a look at some of them.

1.    Place a portion of your testers around the globe
According to Johnston, testers should be distributed around the globe, just as applications and mobile phone users are. Testing companies should analyze usage statistics to get an idea of where to locate their testers. From these analyses, testers will also know what needs to be tested and what is merely nice to test.

2.    Ongoing Testing
It must be remembered that testing is an ongoing process. It is a challenging task for testers to predict a situation in the future with regards to mobile devices. Johnston advised them to focus on testing the important processes the application is interacting with.

3.    Educate Top Management
Testers should educate the top management about where mobile device testing is heading. They must know that it is no longer an afterthought but the main thing, according to Johnston.

Overcoming the Challenge of Mobile App Testing



Smartphones are becoming more prevalent by the day, and the day is not far off when they will be our primary access to the Internet. Most software products or web applications now come with mobile versions; the remaining ones (if any) will soon follow suit. As of April 2012, there were more than 1 million apps on the 2 leading app stores combined (Apple and Google). Businesses are realizing that mobile is the best way to reach the maximum number of customers, so e-commerce apps are burgeoning. As apps rise up the value chain, the need for quality testing of these apps is also increasing. There are several challenges in mobile application testing, though. Let us look at some of these challenges and the ways in which a tester can tackle them.

1. Multiple OS Platforms and Fragmentation

This is the biggest cost factor in any mobile application. Ideally, businesses would like to reach the maximum number of customers, so they'd like their app to work on all OS platforms. But testing the complete functionality on multiple platforms is expensive. Hence the challenge is to find a sweet spot between too much and too little testing. So, let us see some strategies to find this sweet spot.

a. Create the following test suites:
i. Smoke Test Suite - The most critical test cases that need to pass on each platform.
ii. Regression Test Suite - Test cases that cover all the critical features that need to be tested with every build, but can be executed on any 1 platform.
iii. Integration Test Suite - Create a Test suite for testing Native feature integrations like Camera, GPS, accelerometer, sharing, etc. These test cases need to be tested on every platform that your app supports.
iv. Current Release Functional Test Suite - Test cases for features that are going into the current build/release.

b. The Smoke Test Suite needs to be executed on all platforms whenever there is a new build or release.

c. Distribute all your Regression test cases across all platforms. This way, executing them only once gives you coverage across platforms. For the next release or execution, change the distribution so that after a few executions you have covered the regression test cases on all platforms (a minimal sketch of this rotation follows this list).

d. The Current Release Functional Test suite needs to be executed on all platforms (emulator or hardware), but need not be executed on all fragments of the OS. Instead, a distribution strategy like the one above can be used.
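
Here is a hedged sketch of the rotation idea in (c): regression test cases are dealt out across platforms in a round-robin that shifts with each run, so after a few runs every case has been executed on every platform. The platform names and test case IDs are hypothetical.

# Hypothetical platforms and regression test case IDs, for illustration only.
PLATFORMS = ["Android", "iOS", "Windows Phone"]
REGRESSION_CASES = ["REG-%03d" % n for n in range(1, 10)]

def distribute(cases, platforms, run_number):
    # Round-robin assignment that rotates with each run or release.
    assignment = {platform: [] for platform in platforms}
    for index, case in enumerate(cases):
        platform = platforms[(index + run_number) % len(platforms)]
        assignment[platform].append(case)
    return assignment

# Each release, pass the next run_number so the mapping shifts and every
# case eventually gets executed on every platform.
for run in range(3):
    print("Run", run, distribute(REGRESSION_CASES, PLATFORMS, run))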

2. Emulator or Actual Device


A mobile application depends heavily on Native hardware integrations. E.g. integration with the Camera, GPS, accelerometer, Mobile data, Bluetooth, NFC, etc. So, let us see some strategies to efficiently control the cost of buying multiple devices.

a. If these integrations are used for critical features of your application, having the hardware to test those is essential. You cannot rely on the emulator to mock the hardware interfaces. But, there are a few things to consider when buying hardware.
i. Do not buy hardware for each version of the operating system, since the OS interface with the hardware does not change with every OS version; buying a device for every version is a sure-shot way to bankruptcy. Find out whether a device is really required before buying it.
ii. Try to use a Virtualization solution which provides virtual access to actual hardware. One such service is http://www.perfectomobile.com/ and another is http://www.keynotedeviceanywhere.com

b. But if your application does not depend heavily on these integrations, use emulators as much as possible. They are faster to set up and test on. There are multiple emulations available for sensors like GPS, accelerometer, etc., specific to the platform. Some of the features that can be easily tested on an emulator are:
i. Different Screen sizes and resolutions.
ii. Mocking GPS Coordinates (for testing applications offshore); a minimal sketch of this follows the list.
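
As a hedged sketch of point (ii), the Android emulator console accepts a 'geo fix' command that can be driven from a small script via adb; the coordinates below, and the assumption that 'adb emu' reaches your running emulator, should be verified against your own setup.

import subprocess

def mock_gps(longitude, latitude):
    # Send a fake GPS fix to a running Android emulator through the adb console bridge.
    # 'geo fix' takes longitude first, then latitude.
    subprocess.run(["adb", "emu", "geo", "fix", str(longitude), str(latitude)], check=True)

if __name__ == "__main__":
    # Hypothetical coordinates used to exercise location-dependent behaviour.
    mock_gps(-122.084, 37.422)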

3. To Automate or not

Mobile applications are typically small, but because of the multiple platform possibilities, the amount of retesting and regression testing is huge. This is why automation is critical for longer-duration mobile projects. Let us see how we can leverage automation for the best ROI:

a. Automate the smoke test suite as much as possible.
b. UI automation is time consuming and fragile, hence try to use a good automation framework such as a keyword-driven or hybrid framework.
c. Automate most of the functionality using web services or APIs for faster and more robust scripts (a minimal sketch follows this list).
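
Here is a minimal sketch of point (c): exercising the same functionality the mobile UI uses, but through the backend API, with the requests library; the endpoint, payload and response fields are hypothetical.

import requests

BASE_URL = "https://api.example.com"  # hypothetical backend of the mobile app

def test_create_order_via_api():
    # API-level checks are faster and less fragile than driving the UI.
    payload = {"item_id": 42, "quantity": 2}
    response = requests.post(BASE_URL + "/orders", json=payload, timeout=10)
    assert response.status_code == 201
    body = response.json()
    assert body["quantity"] == 2

if __name__ == "__main__":
    test_create_order_via_api()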

4. How to test Usability

Usability can make or break a mobile app. Testers are in the best position to experience the usability constraints since they are aware of the end to end flows and spend most time with the application. Even if it is not an explicit requirement, a good tester should look for usability issues and report them. This is where the tester can provide a value add to the team. To get better at finding usability issues, do the following:

a. Research the market (Application Stores) for similar apps and study them. See how they are doing things. You may learn a thing or two from them.
b. Compare the app with the PC version. Users usually try to find the functionality available on PC software in a mobile app. See if the user can find the critical features easily.
c. Review the results of any analytics done on the PC app. This will give you a hint about the most sought-after areas of the app; test those areas for user experience.
d. If testing a native app, keep updated with the latest OS recommendations for native feature integrations like accessing the phonebook, menu styles, breadcrumbs, etc. Report issues where your app does not meet the standards.
e. You may have a separate performance test effort running, but a good tester should report any performance slowness or battery consumption issues that are apparent during the functional testing.
f. Plan for some time to perform just exploratory testing.

Mobile testing seems easy, but a tester needs to be extra aware when testing on a mobile device. It is easy to ignore minor issues and use workarounds in the test environment while the app is under development, but this can become a habit, and those issues may still be ignored when the final app is ready. Hence, even when minor deviations are found, it is critical to document them (this need not be a detailed bug report, but a simple note to check later). Equally important is to have a list of planned test cases, categorized into suites, that are executed before a release.