
ETL / Data Warehouse Testing – Tips, Techniques, Process and Challenges


Today let me take a moment to explain to my testing fraternity one of the most in-demand and upcoming skills for testers: ETL testing (Extract, Transform, and Load). This article will give you a complete idea of ETL testing and what we do to test the ETL process.

It has been observed that Independent Verification and Validation is gaining huge market potential, and many companies now see it as a prospective business opportunity. Customers are offered a range of service offerings distributed across many areas based on technology, process, and solutions. ETL or data warehouse testing is one of the offerings that is developing rapidly and successfully.

ETL testing

Why do organizations need Data Warehouse?
Organizations with mature IT practices are looking forward to the next level of technology transformation. They are now trying to make themselves much more operational with easy-to-interoperate data. Data, whether everyday data or historical data, is the most important asset of any organization. Data is the backbone of any report, and reports are the baseline on which all vital management decisions are taken.

Most companies are taking a step forward by constructing a data warehouse to store and monitor real-time as well as historical data. Crafting an efficient data warehouse is not an easy job. Many organizations have distributed departments with different applications running on distributed technologies. An ETL tool is employed to achieve flawless integration between data sources from different departments. The ETL tool works as an integrator: extracting data from different sources, transforming it into the preferred format based on the business transformation rules, and loading it into a cohesive database known as the Data Warehouse.

A well-planned, well-defined, and effective testing scope guarantees a smooth conversion of the project to production. A business gains real confidence once the ETL processes are verified and validated by an independent group of experts who make sure the data warehouse is concrete and robust.

ETL or data warehouse testing is categorized into four different engagements, irrespective of the technology or ETL tools used:

  • New Data Warehouse Testing – A new DW is built and verified from scratch. Data input is taken from customer requirements and different data sources, and the new data warehouse is built and verified with the help of ETL tools.
  • Migration Testing – The customer has an existing DW and ETL performing the job, but is looking to adopt a new tool in order to improve efficiency.
  • Change Request – New data from different sources is added to an existing DW. The customer may also need to change an existing business rule or integrate a new one.
  • Report Testing – Reports are the end result of any data warehouse and the basic purpose for which the DW is built. A report must be tested by validating its layout, the data in the report, and the calculations.

ETL Testing Techniques:

1) Verify that data is transformed correctly according to various business requirements and rules.
2) Make sure that all projected data is loaded into the data warehouse without any data loss or truncation.
3) Make sure that ETL application appropriately rejects, replaces with default values and reports invalid data.
4) Make sure that data is loaded in data warehouse within prescribed and expected time frames to confirm improved performance and scalability.

Apart from these four main ETL testing methods, other methods like integration testing and user acceptance testing are also carried out to make sure everything is smooth and reliable.

ETL Testing Process:

Similar to any other testing that falls under Independent Verification and Validation, ETL testing goes through the same phases:

  • Business and requirement understanding
  • Validating
  • Test Estimation
  • Test planning based on the inputs from test estimation and business requirement
  • Designing test cases and test scenarios from all the available inputs
  • Once all the test cases are ready and approved, the testing team proceeds to pre-execution checks and test data preparation
  • Lastly, execution is performed till the exit criteria are met
  • Upon successful completion, a summary report is prepared and the closure process is done.

It is necessary to define a test strategy, mutually accepted by the stakeholders, before starting actual testing. A well-defined test strategy ensures that the correct approach is followed to meet the testing goals. ETL testing might require the testing team to write SQL statements extensively, or to tailor the SQL provided by the development team. In either case, the testing team must understand the results they are trying to obtain with those SQL statements.

Difference between Database and Data Warehouse Testing
There is a popular misconception that database testing and data warehouse testing are similar, while the fact is that they take different directions.

  •  Database testing is done on a smaller scale of data, normally with OLTP (Online Transaction Processing) type databases, while data warehouse testing is done with large volumes of data involving OLAP (Online Analytical Processing) databases.
  •  In database testing, data is normally injected consistently from uniform sources, while in data warehouse testing most of the data comes from different kinds of data sources that are inconsistent with one another.
  • We generally perform CRUD (Create, Read, Update, and Delete) operations in database testing, while in data warehouse testing we use read-only (Select) operations.
  • Normalized databases are used in DB testing, while denormalized DBs are used in data warehouse testing.

There are a number of universal verifications that have to be carried out for any kind of data warehouse testing. Below is a list of the items treated as essential for validation in ETL testing:
- Verify that data transformation from source to destination works as expected
- Verify that expected data is added in target system
- Verify that all DB fields and field data is loaded without any truncation
- Verify data checksum for record count match
- Verify that for rejected data proper error logs are generated with all details
- Verify NULL value fields
- Verify that duplicate data is not loaded
- Verify data integrity
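The universal checks above can be automated with a handful of SQL queries. Below is a minimal, self-contained sketch using SQLite, with hypothetical `src_orders`/`dw_orders` tables standing in for a staging source and the warehouse target:

```python
import sqlite3

# Hypothetical source (staging) and target (warehouse) tables for illustration
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src_orders (order_id INTEGER, amount REAL)")
cur.execute("CREATE TABLE dw_orders  (order_id INTEGER, amount REAL)")
rows = [(1, 10.0), (2, 25.5), (3, 7.25)]
cur.executemany("INSERT INTO src_orders VALUES (?, ?)", rows)
cur.executemany("INSERT INTO dw_orders VALUES (?, ?)", rows)

# 1) Record count match between source and target
src_count = cur.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
dw_count = cur.execute("SELECT COUNT(*) FROM dw_orders").fetchone()[0]
assert src_count == dw_count, f"count mismatch: {src_count} vs {dw_count}"

# 2) No duplicate business keys loaded into the warehouse
dupes = cur.execute(
    "SELECT order_id, COUNT(*) FROM dw_orders "
    "GROUP BY order_id HAVING COUNT(*) > 1"
).fetchall()
assert not dupes, f"duplicate keys: {dupes}"

# 3) No unexpected NULLs in mandatory fields
nulls = cur.execute(
    "SELECT COUNT(*) FROM dw_orders WHERE order_id IS NULL OR amount IS NULL"
).fetchone()[0]
assert nulls == 0

# 4) Simple checksum: column totals should survive the load unchanged
src_sum = cur.execute("SELECT SUM(amount) FROM src_orders").fetchone()[0]
dw_sum = cur.execute("SELECT SUM(amount) FROM dw_orders").fetchone()[0]
assert src_sum == dw_sum
print("all ETL validations passed")
```

In a real engagement the same queries would run against the actual source and warehouse databases, typically scheduled after each load.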

ETL Testing Challenges:

ETL testing is quite different from conventional testing. There are many challenges to face while performing data warehouse testing. Here is a list of a few ETL testing challenges I experienced on my project:
- Incompatible and duplicate data.
- Loss of data during ETL process.
- Unavailability of inclusive test bed.
- Testers have no privileges to execute ETL jobs on their own.
- The volume and complexity of the data are very high.
- Fault in business process and procedures.
- Trouble acquiring and building test data.
- Missing business flow information.

Data is important for businesses to make critical business decisions. ETL testing plays a significant role in validating and ensuring that business information is exact, consistent, and reliable. It also minimizes the risk of data loss in production.

Hope these tips help ensure your ETL process is accurate and that the data warehouse built from it is a competitive advantage for your business.

This is a guest post by Vishal Chhaperia, who works at an MNC in a test management role. He has extensive experience in managing multi-technology QA projects, processes, and teams.

Have you worked on ETL testing? Please share your ETL/DW testing tips and challenges below.

-----------------------------------------------

20 Top Practical Testing Tips A Tester Should Know




Testing doesn't stop with debugging. It is very rare to come across every kind of scenario at a single instant while testing. After all, testers learn these practices by experience, and here are the top 20 practical software testing tips a tester should read before testing any application.
1) Analyze your test results. Troubleshooting the root cause of a failure will lead you to the solution of the problem, so proper analysis is essential and can steer you away from many possible mistakes. Bugs in software are introduced by both man and machine; other practical reasons for the occurrence of bugs are miscommunication, software complexity, programming errors, changing requirements, time pressures, and reluctance.
2) Maximize test coverage. Make use of every available tool for testing the application. Trial and error can improve results, but it is practically impossible to include all testing methods, so it is advisable to use the methods that gave the best results earlier. Selecting a testing tool from a QA perspective helps produce verified deliverables, release scenarios, and the decision to release the product.
3) Ensure maximum test coverage. Breaking your Application Under Test (AUT) into smaller functional modules will help you cover the maximum of the application; where possible, break those modules into still smaller parts. Here is an example:
E.g.: Let's assume you have divided your website application into modules, and accepting user information is one of them. You can break this user-information screen into smaller parts for writing test cases: UI testing, security testing, functional testing of the user-information form, and so on. Apply all form-field type and size tests, and negative and validation tests on input fields, and write all such test cases for maximum coverage.
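As a sketch of the field-level cases described above, here is a hypothetical validator for a required, alpha-only, 50-character Name field, exercised with one positive and several negative cases (the rules and the limit are assumptions for illustration):

```python
# Hypothetical validator for a "Name" field on the user-information form:
# required, letters and spaces only, at most 50 characters (assumed rules).
def validate_name(value):
    if not value or not value.strip():
        return "required"
    if len(value) > 50:
        return "too long"
    if not all(ch.isalpha() or ch.isspace() for ch in value):
        return "invalid characters"
    return "ok"

# Intended functionality first, then the negative/validation cases
cases = [
    ("John Smith", "ok"),             # positive: valid name accepted
    ("", "required"),                 # empty required field
    ("a" * 51, "too long"),           # more characters than the limit
    ("J0hn!", "invalid characters"),  # digits/symbols in an alpha field
]
for value, expected in cases:
    assert validate_name(value) == expected
print("all field test cases passed")
```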
4) While writing test cases, give first preference to the intended functionality, and then to invalid conditions. This will cover the expected as well as unexpected behavior of the application under test.
The following areas should be considered while testing web applications:
  • Functionality Testing
  • Performance Testing
  • Usability Testing
  • Server Side Interface
  • Client Side Compatibility
  • Security
5) Error-finding attitude. As a software tester or QA engineer you must stay curious about finding bugs in an application; even a subtle bug may crash the entire system. Finding such a subtle bug is the most challenging work, and it gives you satisfaction in your work and helps you remain positive.
6) Test cases in requirement analysis. Designing test cases during requirement analysis can help you ensure that all requirements are testable.
7) Make test cases available to developers. Let developers analyze your test cases thoroughly to develop a quality application; it will help them stay vigilant while coding. Though time-consuming, this helps you release a quality product.
8) To ensure quick testing If possible identify and group your test cases for regression testing. This will ensure quick and effective manual regression testing.
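One way to sketch this grouping (the check functions and tag names below are illustrative placeholders, not from the article):

```python
# A minimal sketch of tagging test cases so the regression subset can be
# re-run quickly on every build.
def check_login():
    return True  # stand-in for a real end-to-end login check

def check_report_layout():
    return True  # slower layout check, only run in the full suite

def check_checkout_total():
    return True

# Each case is paired with the groups it belongs to
SUITE = [
    (check_login,          {"smoke", "regression"}),
    (check_report_layout,  {"full"}),
    (check_checkout_total, {"regression"}),
]

def run(tag):
    """Run only the cases tagged with the given group."""
    return {fn.__name__: fn() for fn, tags in SUITE if tag in tags}

print(run("regression"))
```

With pytest the same idea is expressed with markers: decorate cases with `@pytest.mark.regression` and select them with `pytest -m regression`.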
9) Performance testing. Applications with critical response times must be given the highest priority through performance testing. Yet performance testing is often avoided because it requires large data volumes.
10) Avoid testing your own code. Developers are not good testers of their own work: no developer likes to be blamed for their work, they remain optimistic about their product, and they tend to skip their own bugs, because the person who develops the code generally sees only the happy paths of the product and doesn't want to go into much detail.
11) Testing has no limits. The sky is the only limit when testing an application; use all available means to improve its quality.
12) Take advantage of the previous bug graph. A graph of past bugs over time and modules is an aid for finding new bugs, especially during regression testing. This module-wise bug graph can be used to predict the most bug-prone parts of the application.
13) Review your test process. Keep track of your test results; they can teach you a lot. Keep a text file open while testing an application and use those notes while preparing the final test release report. This good habit will help you provide a complete, unambiguous test report and release details.
14) Note down code changes. Projects such as banking applications require many changes in the development or testing environment to avoid executing live transaction processing. So note down all the changes made for testing purposes whenever testers or developers change the code base of the application under test.
15) Keep developers away from the test environment. If developers don't have access to the testing environment, they cannot make accidental changes to it, and missing pieces are caught in the right place.
16) Role of a tester in design. When you bring testers in right from the software requirement and design phase, they naturally become part of development, so ask your lead or manager to involve the testing team in all decision-making processes and meetings. In this way testers gain knowledge of application dependencies, resulting in detailed test coverage.
17) Rapport with other testing teams. A good relationship with co-testers from other teams helps both parties share the best of their testing experience.
18) Testers and developers together. Do not keep anything verbal. To learn more about the product, testers should interact with the developers; maintaining this kind of relationship resolves many product issues at an early stage. Make sure to confirm everything over written channels such as email.
19) Prioritize by time. Analyzing all risks helps you prioritize your work and is the first stage of saving time; it lets you avoid wasting effort on low-value tasks.
20) Importance of the final report. Testing is a creative and challenging task, so do not fail to create a clear report about the bugs found and their possible solutions. It will remain a record of do's and don'ts in testing for future generations.

Automation Testing Vs Manual Testing


Common sense has always been the general rule of thumb for deciding when to automate and when to test manually, especially when one has to come up with a deterministic set of guidelines on how and when to automate. If a test has to run only once or twice and is very costly to automate, it is most likely a manual test.
In manual testing, testers test the software in order to find defects, playing the role of end users. The tester performs all testing operations, in all phases of the STLC, by human effort.
Some important aspects to be considered in Manual Testing: 

Manual testing costs less than automation testing. If a test case runs only twice per coding milestone, it should most likely be a manual test. Manual testing also allows the tester to perform more random testing, and the more time a tester spends playing with the feature, the greater the chance of finding real bugs. On the other hand, manual testing is very time-consuming, and with each new build the tester must rerun all the required tests.
In automation testing, testers use automated tools to test the software: user operations on the application are performed automatically by running scripts with those tools. When testing large software or executing a huge number of test cases, automation testing is preferable.
While performing Automation testing some important aspects to be considered: 

To run a set of repeated tests, automation is preferable. Automation provides the ability to run tests against code that changes frequently and to catch regressions in mainstream scenarios in a timely manner. It also aids in testing a large test matrix: automated tests can run on different machines at the same time, whereas manual tests have to be run sequentially.
On the other hand, automation testing is more expensive than running the tests manually, and visual references cannot be automated. For example, it is not possible to verify a font colour via code or the automation tool alone, so that remains a manual test.
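A small sketch of that parallelism, using Python threads as a stand-in for separate machines (the `smoke_test` function is a hypothetical placeholder):

```python
# The same automated check executed concurrently, mirroring how automated
# suites can run on several machines at once while manual tests run serially.
from concurrent.futures import ThreadPoolExecutor
import time

def smoke_test(machine_id):
    time.sleep(0.01)  # stand-in for launching the app and checking a screen
    return (machine_id, "pass")

# Eight "machines" run the check concurrently
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(smoke_test, range(8)))

assert all(status == "pass" for _, status in results)
print(f"{len(results)} runs finished")
```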
Some of the top challenges in testing are: 

Testing each and every combination of an application is impossible, in both automation and manual testing. A tester requires good communication and analytical skills to handle the relationship with developers carefully and still complete the work in a tester's way.
Regression testing becomes complex as the project expands. Testing is always under time constraints: writing, executing, automating, and reviewing test cases all have to be completed faster, so it is very important to understand which tests have to be executed first.
Testers are responsible for finding out the customer's requirements, and for this they have to communicate properly to understand them. The question of when to stop testing is difficult to answer, since it requires sound judgement of the testing processes as well as the importance of each of them.
As application methods change rapidly, reusing test scripts and managing the test tools and scripts become very difficult. And if testers concentrate only on finding easy bugs, hard or subtle bugs will remain unnoticed.

Testing in Agile



Traditional Style Quality Assurance

Agile approaches are changing the conversation about software development

An Agile Tester
A professional tester who embraces change, collaborates well with both technical and business people, and understands the concept of using tests to document requirements and drive development.

Agile testers tend to have good technical skills, know how to collaborate with others to automate tests, and are also experienced exploratory testers.
They're willing to learn what customers do so that they can better understand the customers' software requirements.

Traditional vs. Agile Testing



Traditional: In the phased approach diagram (see previous image), it is clear that testing happens at the end, right before release. The diagram is idealistic, because it gives the impression there is as much time for testing as there is for coding. In many projects, this is not the case. The testing gets "squished" because coding takes longer than expected, and because teams get into a code-and-fix cycle at the end. Tests are usually created from a requirements document.

Agile: Agile is iterative and incremental (see previous image). This means that the testers test each increment of coding as soon as it is finished. An iteration might be as short as one week, or as long as a month. The team builds and tests a little bit of code, making sure it works correctly, and then moves on to the next piece that needs to be built. Programmers never get ahead of the testers, because a story is not "done" until it has been tested. Rather than creating tests from a requirements document that was created by business analysts before anyone ever thought of writing a line of code, someone will need to write tests that illustrate the requirements for each story days or hours before coding begins.

Role Testers Play
The role of the tester in agile methods is an area that has received increasing attention. With the initial focus on unit testing and 'acceptance' testing, it appeared the system tester did not have a role in agile.


As Cem Kaner put it:
'The nature of the tester's role changes in iterative projects. We are no longer the high-profile victims, we are no longer the lonely advocates of quality, we are merely (!) competent service providers, collaborating with a group that wants to achieve high quality.'


Testing and Testers on Agile Projects
1. It comes as no surprise to testers that working software is not the same as code – the tester clearly needs to be involved in not only assessing the product, but in deciding how the product is to be assessed. However, with automated unit tests in the hands of the coders, and confirmation-focused acceptance testing driven by the customer, testers should be aware that they will not be the sole – or even the primary – owner of deciding what works, and what doesn't.

2. Testers need to be able to interact directly with designers and coders to understand the technological imperatives and restrictions that affect the software and its unit tests.


3. Testing will be driven by what is important to a user, rather than to fulfill a procedural requirement. It is better to have communication between tester, customer and designer than to maintain independence of the test team. In practice, it is common to find large-scale automated unit testing on agile projects, to confirm that code works as expected. The product will be judged by the customer typically by manual, confirmatory tests, with close observation for undesirable behaviors. Testing by testers is often driven by the need to measure the system's performance and to find surprises – tools are very much in evidence, but rigid test scripts and procedures do not give the requisite opportunity for discovery, diagnosis and exploitation.

4. Testers are key collaborators with the customer, and on some agile projects will take on much of the role of the customer in designing and executing confirmation-driven acceptance tests. However, although testers traditionally make good customer advocates, working closely with a customer is preferable to becoming a proxy. Test strategies which lean heavily on an unchanging set of requirements (for example: designing and coding tests to be brought together with code late in the project; prioritizing tests based on a fixed risk assessment; testing only what has been agreed in the contract; reporting bugs only against fixed requirements) may be considered fatally flawed in the light of this value. Iterative collaboration is favored over a negotiated bill of work.

While a developer is coding a task, it is impossible for a tester to test it (it doesn't exist yet). What, then, is the role of a tester at this point?
1. Testers can prepare their test plans, test cases, and automated tests for the user stories before (or while) they are implemented. This helps the team discover any inconsistency or ambiguity in the user stories even before the developers write any code.

2. The tester could be working with the customer to fine tune the stories in the sprint.

3. They can often be involved in designing the tests that the coder will write to perform TDD.

4. If the agile team is fairly advanced then the tester would normally be writing the ATDD (Acceptance Test Driven Development) tests. These could be in a tool such as Fitnesse or Robot Framework or they could be more advanced ruby tests or even some other programming language. Or in some cases, simple record and playback can often be beneficial for a small number of tests.

5. They would obviously be writing/planning some exploratory testing scenarios or ideas.

6. The tricky thing for the team to comprehend sometimes is that a story does not have to be complete in order to drop it into the test stack for testing. For example, the coders could drop a screen with half of the planned fields on it. The tester can test this half while the other half is being coded, and thereby feed back early test results. Testing doesn't have to take place only on "finished" stories.
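A sketch of this early feedback loop, assuming a hypothetical registration story whose first code drop implements only the name field; the acceptance tests are written up front, so the unfinished half is flagged immediately:

```python
# ATDD-style acceptance tests written before the story is fully coded.
# Story (hypothetical): "A user can register with a name and an email."
def register(name, email=None):
    # First increment: only the name field is implemented so far
    if not name:
        raise ValueError("name is required")
    return {"name": name}

def acceptance_name_required():
    try:
        register("")
    except ValueError:
        return "pass"
    return "fail"

def acceptance_email_stored():
    user = register("Ada", email="ada@example.com")
    return "pass" if user.get("email") == "ada@example.com" else "fail"

print(acceptance_name_required())  # the finished half is already testable
print(acceptance_email_stored())   # fails, flagging the unfinished half early
```

On a real team these acceptance checks might live in a tool such as Fitnesse or Robot Framework, as the article notes, rather than plain Python.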

Is the tester now involved in unit testing? Is this done parallel to black box testing?

Testers only test code that passes all of the automated unit, integration and acceptance tests, which are all written by the developers. This split may be different elsewhere, though; for example your testers could be writing automated acceptance tests.


What does the tester do during a sprint where primarily infrastructural changes have been made, that may only be testable in unit testing?
1. The tester's workload will vary between sprints, but regression tests still need to be run on any changes that are made.

2. You may also find it helps to have the testers spend the first couple of days of each sprint testing the tasks from the previous sprint; however, it's better to get them to nail down the things the developers are going to work on by writing their test plans.

3. Ensure that project or sprint requirements are clear, measurable and testable. In an ideal world each requirement will have a fit criterion written down at this stage. Determine what information needs to be automatically logged to troubleshoot any defects.

4. Prepare a project specific test strategy and determine which QA steps are going to be required and at which project stages: integration, stress, compatibility, usability, performance, beta testing etc. Determine acceptable defect thresholds and work out classification system for defect severity, specify guidelines for defect reporting.

5. Specify, arrange and prepare test environment: test infrastructure and mock services as necessary; prepare test data; write scripts to quickly refresh test environment when necessary; establish processes for defect tracking, communication and resolution; prepare for recruitment or recruit users for beta, usability or acceptance testing. Write test scripts.

6. Ideally the tester would be working with the team and the customer (who by the way, is part of the team!) to define the planned stories and build in some good, detailed acceptance criteria. This is invaluable and can save loads of time later down the line. The tester could also be learning new automation techniques, planning test environments, helping to document the outcome of the planning.

Specific Technical Skills For Agile Tester's Toolkit
1. Automation Skills

Learn how to evaluate and choose the right tools, so you can help your team create maintainable automated regression tests. You can free up time for essential testing activities such as exploratory testing.

2. Acceptance Test-driven Development
Communication skills and good domain understanding enable testers to help business experts give good examples of both desired and undesired system behavior. We can turn these examples into tests that help the programmers understand what code to write. This is called acceptance test-driven development, and it is a major step toward building quality into the code and preventing defects.

3. Learning Styles

We all have blind spots that may prevent us from learning or triggers where we shut down and don't hear the message anymore. Keep your emotional "hot buttons" in mind and focus on what you can learn from instructors, material, or teammates to enhance your abilities. Mentors with different backgrounds or from other industries besides testing and software development might work best with your learning style. Don't limit yourself to coaches, mentors, and instructors who work specifically in software testing.

4. Learning Resources Examples

Negative Testing Examples


7 Examples of Negative Test Cases

The process of negative testing is intended to show that a system does not do what it is not supposed to do. While positive testing verifies that your application works as expected, negative testing makes sure that your application can gracefully handle invalid input or unexpected/unpredicted user behavior.

Below are some of the examples of negative test cases:
1. I worked on an application in which a few modules had an Attachment tab containing buttons for adding an attachment (DOCX, PDF, JPEG, etc.), viewing an attachment, editing an attachment, and deleting an attachment. Here, one of the negative test cases was uploading an empty DOCX file. It uploaded fine, but viewing it threw exceptions.

2. Suppose there is a Date field on a page. Negative testing requires you to enter invalid dates. Some fields, like Name, are required: try leaving those required fields empty and examine the application's response. For a numeric field, try entering alphabets (or vice versa) and observe the behavior. For a field accepting a limited number of characters, enter more characters than the limit, enter negative numbers where only positive values are accepted, and so on.
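A sketch of such date-field cases, assuming an ISO `YYYY-MM-DD` input format and using Python's standard `datetime` parser as the validator:

```python
from datetime import datetime

def is_valid_date(text):
    """Accept only well-formed YYYY-MM-DD calendar dates (assumed format)."""
    try:
        datetime.strptime(text, "%Y-%m-%d")
        return True
    except ValueError:
        return False

# Negative cases: impossible day, impossible month, wrong format, junk, empty
negative_dates = ["2023-02-30", "2023-13-01", "31/12/2023", "abc", ""]
assert all(not is_valid_date(d) for d in negative_dates)

# Positive case for contrast
assert is_valid_date("2023-12-31")
print("date-field cases passed")
```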

3. If a feature implements authentication/verification functionality, a positive test would consist of trying the legitimate username and legitimate password. Everything else would be negative testing, including incorrect username, incorrect password, and someone else's password, other special characters (not allowed for username/password) and so on.

4. Negative test cases for an installer include: trying to install on a drive with inadequate disk space, trying to install into a read-only folder/directory, and consuming extra RAM so that the machine starts paging, then observing the installer's behavior.

5. Negative test cases for an ATM (Automated Teller Machine) include inserting a wrong ATM card, entering a wrong PIN, or responding to an ATM prompt only after a long pause.

6. A compact disk player can be in one of three states: Standby, On or Playing.

When in standby mode, the CD player can be turned On by pressing the standby button once (an indicator light turns from red to green to show the CD player is On). When the CD player is On, it can return to standby mode by pressing the standby button once (an indicator light turns from green to red to show the CD player is in standby mode). When the CD player is On, pressing the play button causes the currently loaded CD to play. Pressing the stop button when the CD player is playing a CD causes the CD player to stop playing the disk.

Examples of positive tests could include:
Verifying that with the CD player in Standby mode, pressing the standby button causes the CD player to turn On and the indicator light changes from red to green.

Verifying that with the CD player in the On state, pressing the Standby button causes the state of the CD player to change to Standby and the indicator light changes from green to red.

Examples of negative tests could include:
Investigating what happens if the CD player is playing a CD and the Standby button is pressed.

Investigating what happens if the CD player is On and the Play button is pressed with no CD in the player.
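The CD player above can be sketched as a small state machine so that both the positive and the negative cases can be exercised in code. How the player reacts to the undocumented button presses is an assumption, marked in the comments:

```python
# A state machine modelling the CD player described above.
class CDPlayer:
    def __init__(self):
        self.state = "Standby"
        self.light = "red"

    def press_standby(self):
        if self.state == "Standby":
            self.state, self.light = "On", "green"
        elif self.state == "On":
            self.state, self.light = "Standby", "red"

    def press_play(self):
        if self.state == "On":
            self.state = "Playing"

    def press_stop(self):
        if self.state == "Playing":
            self.state = "On"

# Positive tests: the documented transitions
p = CDPlayer()
p.press_standby()
assert (p.state, p.light) == ("On", "green")
p.press_standby()
assert (p.state, p.light) == ("Standby", "red")

# Negative test: Play pressed while in Standby (not specified above)
p.press_play()
assert p.state == "Standby"  # assumed: the press is simply ignored

# Negative test: Standby pressed while Playing (also unspecified)
q = CDPlayer()
q.press_standby()
q.press_play()               # now Playing
q.press_standby()
assert q.state == "Playing"  # assumed: no effect until Stop is pressed
print("state machine checks passed")
```

A tester running these negative cases against the real device would compare its actual behavior with whatever the specification eventually decides, rather than with the assumptions coded here.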