How to specialize in performance testing

Q-I have been learning performance testing for the past eight months. However, I was not given any opportunities in performance testing in my present company even though I performed well in the internal interviews. I really want to become a good performance tester -- it is my dream to become a performance tester. Please guide me as to how I can make this happen.

Expert’s Response: I think this is a great question. It's specific, which makes it easy to start answering, but it's also general enough that anyone who is interested in specializing in something within software testing should be able to pull something from the answer. I think there are three overarching dynamics to your question:

· How can you best structure your future learning to support your goals?

· How can you best market your abilities to get the opportunities you want?

· How can you best structure the work you're currently doing to support your learning and career objectives?

Continue learning about performance testing
The activity that you have the most control over is your own learning. I've been studying and doing performance testing for eight years, and I honestly still learn something new about performance testing almost every week. It's a deep and rich specialization in software testing. There's a lot to performance testing that still needs to be formalized and written down. It's still a growing body of knowledge.

If your dream is performance testing, then you need to continue to learn. Reading articles, blogs, books and tool documentation is a good place to start. Attending conferences, training, workshops and local groups is a great place to meet others who have similar passions. If you don't have opportunities like those, then join one of the many online communities where performance testers have a presence. Depending on your learning style, dialog and debate can be as great a teacher as reading, if not greater.

Finally, no learning is complete without practice. I'm so passionate about the topic of practice that I wrote an entire article on it. Many of the materials you read will include exercises. Work through them. Many of the conferences, training, and workshops you attend will show examples. Repeat them. Going through the work on your own, even if you already know the outcome, provides a different kind of learning. Some people learn best when the experience is hands-on.

For performance testing, I think a great place to start practicing is in the open source community. Given the nature of performance testing, most tool knowledge is transferable to other performance testing tools. Learning multiple open source tools will also give you different ideas for how you can solve a performance testing problem. Many times, our available tools anchor our thinking about how to approach the problem. If you've practiced with multiple tools, you're more likely to have variety in your test approaches and solutions.
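
For instance, here is a minimal sketch of the kind of practice exercise you might set up on your own: a tiny load generator built only on Python's standard library. The target URL, the number of virtual users, and the request counts below are hypothetical placeholders, and a real practice session would more likely use an open source tool such as JMeter or Gatling -- but writing something this small yourself is a good way to internalize what those tools are doing when they report ramp-up, think time, or percentile response times.

```python
# Minimal load-generator sketch (Python standard library only).
# The URL and the workload numbers are hypothetical placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint
VIRTUAL_USERS = 10
REQUESTS_PER_USER = 20

def one_request(url):
    """Issue a single GET and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def one_user(url):
    """Simulate one virtual user issuing sequential requests."""
    return [one_request(url) for _ in range(REQUESTS_PER_USER)]

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        per_user = list(pool.map(one_user, [TARGET_URL] * VIRTUAL_USERS))
    timings = sorted(t for user_timings in per_user for t in user_timings)
    print(f"requests issued:     {len(timings)}")
    print(f"mean response time:  {mean(timings):.3f}s")
    print(f"95th percentile:     {timings[int(len(timings) * 0.95)]:.3f}s")
```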

Once you know how to use a couple of performance testing tools, if you can't seem to get the project work you need at your current employer, and you're unwilling or unable to leave for another opportunity, then I recommend volunteering your time. There are a lot of online communities that help connect people who want to volunteer their technical talents to nonprofits or other community-minded organizations. Finding project work outside of your day job can be just as valuable as formal project work.

Start marketing your skills and abilities
If you're serious about performance testing as a career, I recommend you start pulling together some marketing material. A resume is where most people focus their limited marketing effort. That could be a good place for you to start as well. What story does your resume tell a potential employer? Is it that you're a performance tester? How has each of your past experiences helped you develop a specific aspect of performance testing? Remember, one of the great challenges performance testing presents to practitioners is its variety. That makes it easy to relate a variety of experiences to the skills a performance tester needs.

Don't forget to include your training on your resume. I've had to remind several people to list classes they've attended or workshops they've participated in, and others have been active members of an online community for years without ever mentioning it on their resume. If it helps you tell the story of your expertise, get it on there. Include anything that shows an employer that you're passionate about performance testing and you're continuously learning more about it.

Depending on the types of companies you want to work for, or the types of projects you might want, a certification might be appropriate. Certifications relevant to performance testing aren't just performance testing tool certifications. Appropriate certifications may also come in the form of programming languages (e.g., Java certification), networking (e.g., CCNA), application servers (e.g., WebSphere administrator certification), databases (e.g., Oracle certification), or even a certification in the context you want to work in (e.g., CPCU certification if you want to work in the Insurance industry). I'm not normally a big fan of certifications, but they are clear marketing products.

Finally, I think the best way to market yourself is to write. Start by being active in an online community. Answer questions on forums or debate ideas on mailing lists. As you learn, catalog your learning in a blog so others can benefit from your hard work. If you feel you're really starting to understand a specific aspect of performance testing, try writing an article or paper on it (for example, email your idea to an editor at SearchSoftwareQuality.com -- they'll point you in the right direction for help if you need it). Present your idea at a conference or workshop. The more of a public face you develop by writing, the more you learn. My experience has been that people are very vocal in their feedback on what you write, so you'll learn a lot from it. Even if you don't become the next Scott Barber, when a potential employer Googles your name, they'll quickly see that you know something about performance testing and have a passion for it.

Align your project work with performance testing activities
Even if you can't get performance testing projects at your current employer, you can still get project work that relates to performance testing. Does your team test Web services? See if you can get involved; it will get you experience with XML, various protocols and, often, specialized tools. Does your team test databases? See if you can get involved; it will get you experience with SQL and managing large datasets. Does your team write automated tests? See if you can get involved; it will get you experience programming and dealing with the problems of scheduled and distributed tests. Does your team do risk-based testing? See if you can get involved; it will get you experience modeling the risk of an application or feature and teach you how to make difficult choices about which tests to run. I could go on with more examples. Take your current opportunities and make them relevant for learning more about performance testing.

If you can't get your own performance testing project, ask if you can work with someone else. What if you volunteer some of your time? What if you work under someone else's supervision for a while? Work with your current manager to understand what factors are preventing them from giving you the opportunity. Perhaps they can't give you the opportunity for a number of reasons outside their direct control. Perhaps they can, but they just haven't given it enough attention. After a conversation where you try to figure it out with them, you should have an idea of what opportunities are available at that company. Just recognize that sometimes you have to leave for different opportunities. If you do that, make sure you're clear with your new employer about what your expectations are.

I hope that's helpful. Your question is a great one, and I feel like it covers a general concern software testers have. The general form of the answer is the same for people who might want to specialize in security testing, test automation, Web service testing, test management, or any other aspect of testing where there can be specialization. Stay focused on your learning and development, actively market your knowledge and abilities, and work to align your work with your goals -- even if that means taking projects outside of the specialization to help you develop a specific skill.



How to do integration testing

Q- How do testers do integration testing? What are top-down and bottom-up approaches in integration testing?

Expert’s response: Ironically, integration testing means completely different things to completely different companies. At Microsoft, we typically referred to integration testing as the testing that occurs at the end of a milestone and that "stabilizes" a product. Features from the new milestone are integration-tested with features from previous milestones. At Circuit City, however, we referred to integration testing as the testing done just after a developer checks in -- it's the stabilization testing that occurs when two developers check in code. I would call this feature testing, frankly…

But to answer your question, top-down vs. bottom-up testing is simply the way you look at things. Bottom-up testing is the testing of code that could almost be considered an extension of unit testing. It's very much focused on the feature being implemented and that feature's outbound dependencies, meaning how that feature impacts other areas of the product/project.

Top-down, on the other hand, is testing from a more systemic point of view. It's testing an overall product after a new feature is introduced and verifying that the features it interacts with are stable and that it "plays well" with other features.

The key to testing here is that you are in the process of moving beyond the component level and testing as a system. Frankly, neither approach alone is sufficient. You need to test the parts with the perspective of the whole. One part of this testing is seeing how the system as a whole responds to the data (or states) generated by the new component. You want to verify that data being pushed out by the component are not only well-formatted (what you tested during component testing) but that other components are expecting and can handle that well-formatted data. You also need to validate that the data originating within the existing system are handled properly by the new component.

Real-world examples? Well, let's assume you are developing a large retail management system, and an inventory control component is ready for integration. Bottom-up testing would imply that you set up a fair amount of equivalence-classed data in the new component and introduced that new data into the system as a whole. How does the system respond? Are the inventory amounts updated correctly? If you have inventory-level triggers (e.g., if the total count of pink iPod Nanos falls below a certain threshold, generate an electronic order for more), does the order management system respond accordingly? This is bottom-up testing.
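
To make that concrete, here is a minimal sketch of what one bottom-up check might look like when automated. Every name in it -- the inventory component, the order queue, the reorder threshold -- is a hypothetical stand-in, and a real integration test would exercise the actual order management system rather than a stub. The sketch only shows the shape of the check: push data out of the new component and verify the downstream reaction.

```python
import unittest

REORDER_THRESHOLD = 5  # hypothetical trigger level

class OrderQueue:
    """Stand-in for the order management system; records reorder requests."""
    def __init__(self):
        self.orders = []

    def place_order(self, sku, quantity):
        self.orders.append((sku, quantity))

class InventoryService:
    """Stand-in for the new inventory control component."""
    def __init__(self, order_queue):
        self.levels = {}
        self.order_queue = order_queue

    def record_sale(self, sku, quantity):
        self.levels[sku] = self.levels.get(sku, 0) - quantity
        if self.levels[sku] < REORDER_THRESHOLD:
            self.order_queue.place_order(sku, quantity=50)

class BottomUpIntegrationTest(unittest.TestCase):
    def test_low_stock_triggers_reorder(self):
        # Data pushed out by the new component should reach order management.
        orders = OrderQueue()
        inventory = InventoryService(orders)
        inventory.levels["pink-ipod-nano"] = 6
        inventory.record_sale("pink-ipod-nano", 3)  # level drops to 3, below 5
        self.assertEqual(orders.orders, [("pink-ipod-nano", 50)])

if __name__ == "__main__":
    unittest.main()
```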

At the same time, you want to track how well the component consumes data from the rest of the system. Is it handling inventory changes coming in from the Web site? Does it integrate properly with the returns system? When an item's status is updated by the warehouse system, is it reflected in the new component?
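
The same style of check can be sketched in the other direction: an event that originates elsewhere in the system (here, a hypothetical warehouse status update) is fed into the new component, and the test verifies the component reflects it correctly. Again, the class and event names are invented for illustration.

```python
import unittest

class InventoryService:
    """Stand-in for the new component, consuming an upstream event."""
    def __init__(self):
        self.item_status = {}

    def handle_warehouse_update(self, event):
        # Event originates in the (hypothetical) warehouse system.
        self.item_status[event["sku"]] = event["status"]

class InboundDataTest(unittest.TestCase):
    def test_warehouse_status_is_reflected(self):
        inventory = InventoryService()
        inventory.handle_warehouse_update({"sku": "pink-ipod-nano",
                                           "status": "damaged"})
        self.assertEqual(inventory.item_status["pink-ipod-nano"], "damaged")

if __name__ == "__main__":
    unittest.main()
```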

We see constant change in the testing profession, with new methodologies being proposed all the time. This is good -- it's all part of moving from art to craft to science. But just as with anything else, we can't commit all of our testing to one methodology, because one size doesn't fit all. Bottom-up and top-down testing are both critical components of an integration testing plan, and both need considerable focus if the QA organization wants to maximize software quality.




Test coverage: Finding all the defects in your application

Q- If a trace matrix does not ensure adequate test coverage, what would you suggest instead? And as a team leader, how can I assure that a team member's testing covers all of the application's functionality?

Expert's response: The trace matrix is a well-established test coverage tool. Let me offer a quick definition -- the purpose of the trace matrix is to map one or more test cases to each system requirement, and it is usually formatted as a table. The fundamental premise is that if one or more test cases have been mapped to each requirement, then every requirement of the system must have been tested, and therefore the trace matrix proves testing is complete.
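
As a concrete illustration of that premise, here is a minimal sketch of a trace matrix expressed as data, together with the coverage question it answers. The requirement and test case IDs are invented for illustration; the point is only to show the mapping the matrix formalizes.

```python
# Hypothetical trace matrix: each requirement maps to the test cases
# that are claimed to cover it.
trace_matrix = {
    "REQ-001 User can log in":               ["TC-101", "TC-102"],
    "REQ-002 Password reset via email":      ["TC-110"],
    "REQ-003 Account lockout after 5 tries": [],  # not yet covered
}

for requirement, test_cases in trace_matrix.items():
    coverage = ", ".join(test_cases) if test_cases else "NO TEST CASES"
    print(f"{requirement}: {coverage}")

uncovered = [req for req, tests in trace_matrix.items() if not tests]
print(f"\n{len(uncovered)} of {len(trace_matrix)} requirements have no mapped test case.")
```

A report like this can tell you whether every requirement has at least one mapped test case -- which, as the rest of this answer argues, is not the same as knowing that the requirements or the test cases themselves are any good.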

I see flaws in this line of reasoning, and here are my primary reservations about over-reliance on the trace matrix:

  1. A completed trace matrix is only as valuable as its contents. If the requirements are not complete or clear, then the test cases designed and executed might fulfill the requirements, but the testing won't have provided what was needed. Conversely, if the requirements are clear but the test cases are insufficient, then a completed trace matrix still doesn't deliver the test coverage and confidence that a checked-off table seems to promise.
  2. The trace matrix relies too stringently on system requirements -- ensuring that all system requirements have been tested is, after all, its primary design. But all sorts of defects can be found outside of the system requirements that are still relevant to whether the application provides a solution for the customer. By looking only at the system requirements, and potentially not considering the customers' needs and real-life product usage, essential testing could be overlooked. Testing only according to specified requirements may be too narrowly focused to be effective for real-life usage -- unless the requirements are exceptionally robust.

Overall, I feel the trace matrix might provide a clean, high-level view of testing, but a checked-off list doesn't prove an application is ready to ship. The reason some people value the trace matrix is that it attempts to offer an orderly view of testing; in my experience, testing is rarely such a tidy task.

So how do you call the end of testing? And how can you assure test coverage?

  1. To be able to assure coverage at the end, I'd start by reviewing the beginning -- look at the test planning. Did your test planning include a risk analysis? A risk analysis at the start of a project can provide solid information for your test plan. Hold a risk analysis session, formally or informally, and gather ideas by talking with multiple people. Get different points of view -- talk to your project stakeholders, your DBAs, your developers, your network staff, and your business analysts. Plan testing based on your risk analysis.
  2. As the project continues, shift testing based on the defects found and on the product and project as they evolve. Focus on high-risk areas. Adapt testing based on your and your testing team's experience with the product. Be willing to adjust your test plan throughout the project.
  3. Throughout testing, watch the defects reported. Keep having conversations and debriefs with hands-on testers to understand not just what they've tested but how they feel about the application. Do they have defects they've seen but haven't been able to reproduce? What is their perception of the current state of the application?

In my view, there is no one tool, including the trace matrix, that signals testing is complete. But the combination of knowing how testing was planned and adapted throughout the project, a thorough review of the defects reported and remaining, and the current state of the application according to your and your team's experience should provide you with an objective assessment of the product and the test coverage.