What is Performance Testing?
Performance testing is a non-functional testing method performed to determine system parameters such as responsiveness and stability under various workloads. Performance testing measures the quality characteristics of a system, such as scalability, reliability, and resource use.
Types of Testing Test Cases
1. Functional testing test cases
Twelve functional test case scenario questions:
- Does the application work as intended when starting and stopping?
- Does the app work accordingly on different mobile and operating system versions?
- Does the app behave accordingly in the event of external interruptions (i.e. receiving an SMS, being minimized during an incoming phone call, etc.)?
- Can the user download and install the app with no problem?
- Can the device multitask as expected when the app is in use or running in the background?
- Does the application work satisfactorily after installation?
- Do social networking options like sharing, publishing, etc. work as needed?
- Do mandatory fields work as required? Does the app support payment gateway transactions?
- Are page scrolling scenarios working as expected?
- Can the user navigate between different modules as expected?
- Are appropriate error messages received if necessary?
There are two ways to run functional testing: scripted and exploratory.
Scripted: Running scripted tests is just that: a structured, scripted activity in which testers follow predetermined steps. This allows QA testers to compare actual results with expected ones. These types of tests are usually confirmatory in nature, meaning that you are confirming that the application can perform the desired function. Testers generally uncover more problems when they have more flexibility in test design.
Exploratory: Exploratory testing investigates and finds bugs and errors on the fly. It allows testers to manually discover software problems that are often unforeseen, with the QA team testing the way most users actually use the app. It treats learning, test design, test execution, and interpretation of test results as complementary activities that run in parallel throughout the project. Related: Scripted Testing Vs Exploratory Testing: Is One Better Than The Other?
2. Performance testing test cases
Seven Performance test case scenarios ensure:
- Can the app handle the expected load volumes?
- What are the various mobile app and infrastructure bottlenecks preventing the app from performing as expected?
- Is the response time as expected? Are battery drain, memory leaks, GPS and camera performance within the required guidelines?
- Is the current network coverage able to support the app at peak, medium, and minimum user levels?
- Are there any performance issues if the network changes from / to Wi-Fi and 2G / 3G / 4G?
- How does the app behave during the intermittent phases of connectivity?
- Do existing client-server configurations provide the optimum performance level?
3. Battery usage test cases
Seven battery usage test case scenarios to pay special attention to:
- Mobile app power consumption
- User interface design that uses intense graphics or results in unnecessarily high database queries
- Can battery life allow the app to operate at expected load volumes?
- Battery low and high performance requirements
- Application operation when the battery is low or removed
- Battery usage and data leaks
- New features and updates do not introduce new battery usage or data leak issues
- Related: The secret art of battery testing on Android
4. Usability Testing Test Cases
Nine usability test case scenarios ensure:
- The buttons are of a user-friendly size.
- The position, style, etc. of the buttons are consistent within the app
- Icons are consistent within the application
- The zoom in and out functions work as expected
- The keyboard can be minimized and maximized easily.
- The action or touching the wrong item can be easily undone.
- Context menus are not overloaded.
- Verbiage is simple, clear and easily visible.
- The end user can easily find the help menu or user manual in case of need.
- Related: High impact usability testing that is actually doable
5. Compatibility testing test cases
Six compatibility test case scenario questions:
- Have you tested on the best test devices and operating systems for mobile apps?
- How does the app work with different parameters such as bandwidth, operating speed, capacity, etc.?
- Will the app work properly with different mobile browsers such as Chrome, Safari, Firefox, Microsoft Edge, etc.?
- Does the app's user interface remain consistent, visible and accessible across different screen sizes?
- Is the text readable for all users?
- Does the app work seamlessly in different configurations?
6. Security testing test cases
Twenty-four security testing scenarios for mobile applications:
- Can the mobile app resist any brute force attack to guess a person's username, password, or credit card number?
- Does the app allow an attacker to access sensitive content or functionality without proper authentication? This includes making sure communications with the backend are properly secured.
- Is there an effective password protection system within the mobile app?
- Verify dynamic dependencies.
- What measures are taken to prevent attackers from exploiting known vulnerabilities?
- What steps have been taken to prevent SQL injection-related attacks?
- Identify and repair any unmanaged code scenarios
- Make sure certificates are validated and whether the app implements certificate pinning
- Protect your application and network from denial of service attacks
- Analyze data storage and validation requirements
- Create session management to prevent unauthorized users from accessing unsolicited information
- Check if the encryption code is damaged and repair what was found.
- Are the business logic implementations secure and not vulnerable to any external attack?
- Analyze file system interactions, determine any vulnerabilities and correct these problems.
- What protocols are in place should hackers attempt to reconfigure the default landing page?
- Protect from client-side harmful injections.
- Protect against malicious runtime injections.
- Investigate and prevent any malicious possibilities from file caching.
- Protect from insecure data storage in app keyboard cache.
- Investigate and prevent malicious actions by cookies.
- Provide regular checks for data protection analysis
- Investigate and prevent malicious actions from custom-made files
- Preventing memory corruption cases
- Analyze and prevent vulnerabilities from different data streams
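The SQL-injection point above is usually addressed with parameterized queries. Below is a minimal sketch using Python's built-in `sqlite3`; the `users` table and its columns are hypothetical demo data, not part of any real app.

```python
import sqlite3

def find_user(conn, username):
    # The '?' placeholder lets the driver escape the value, so input like
    # "x' OR '1'='1" is treated as a literal string, not as SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

# In-memory demo database with a hypothetical schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

print(find_user(conn, "alice"))         # (1, 'alice')
print(find_user(conn, "x' OR '1'='1"))  # None - the injection attempt fails
```

A security test case for this scenario would feed classic injection strings into every input field and assert that they are never interpreted as SQL.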
7. Localization testing test cases
Eleven localization testing scenarios for mobile applications:
- The translated content must be checked for accuracy. This should also include all verification or error messages that may appear.
- The language should be formatted correctly (e.g. the Arabic right-to-left format, the Japanese writing style of Last Name, First Name, etc.)
- The terminology is consistent across the user interface.
- The time and date are correctly formatted.
- The currency is the local equivalent.
- The colors are appropriate and convey the right message.
- Ensure the license and rules comply with the laws and regulations of the destination region.
- The layout of the text content is error free.
- Hyperlinks and hotkey functions work as expected.
- Entry fields support special characters and are validated as necessary (i.e. postal codes)
- Ensure that the localized UI has the same type of elements and numbers as the source product.
8. Recoverability testing test cases
Five recoverability testing scenario questions:
- Will the app continue from the last operation in the event of a hard restart or system crash?
- What conditions, if any, cause crash recovery and transaction interruptions?
- How effective is it at restoring the application after an unexpected interruption or crash?
- How does the application handle a transaction during a power outage?
- What is the expected process when the app needs to recover data directly affected by a failed connection?
9. Regression testing test cases
Four regression testing scenarios for mobile applications:
- Check the changes to existing features
- Check the new changes implemented
- Check the new features added
- Check for potential side effects after changes are implemented
That's it. If you want a good application, take these tips and follow these test cases for Android application testing. They will help you standardize your applications and improve their quality.
As I mentioned earlier, there is some confusion in the usage of the terms Bug and Defect. People widely say that a bug is an informal name for a defect.
Defect: The variation between the actual results and expected results is known as a defect. If a developer finds an issue and corrects it himself in the development phase, then it's called a defect.
Failure: Once the product is deployed and customers find any issues, they call the product a failure. After release, if an end user finds an issue, then that particular issue is called a failure.
Error: A program cannot be compiled or run due to a coding mistake. If a developer is unable to successfully compile or run a program, they call it an error.
Defect severity levels:
Critical: This defect indicates a complete shutdown of the process; nothing can proceed further
Major: A highly severe defect that collapses the system. However, certain parts of the system remain functional
Medium: It causes some undesirable behavior, but the system is still functional
Low: It won't cause any major breakdown of the system
Defect priority levels:
Low: The defect is an irritant, but the repair can wait until the more serious defects have been fixed
Medium: The defect should be resolved during the normal course of development activities. It can wait until a new version is created
High: The defect must be resolved as soon as possible, as it severely affects the system, which cannot be used until it is fixed
New: When a defect is logged and posted for the first time, it is assigned the status NEW.
Assigned: Once the bug is posted by the tester, the tester's lead approves the bug and assigns it to the developer team
Open: The developer starts analyzing and works on the defect fix
Fixed: When a developer makes a necessary code change and verifies the change, he or she can make bug status as "Fixed."
Pending retest: Once the defect is fixed, the developer hands the code over to the tester for retesting. Since the retesting remains pending on the tester's end, the status assigned is "pending retest."
Retest: The tester retests the code at this stage to check whether the defect has been fixed by the developer and changes the status to "Re-test."
Verified: The tester re-tests the bug after it got fixed by the developer. If there is no bug detected in the software, then the bug is fixed and the status assigned is "verified."
Reopen: If the bug persists even after the developer has fixed the bug, the tester changes the status to "reopened". Once again the bug goes through the life cycle.
Closed: If the bug no longer exists, the tester assigns the status "Closed."
Duplicate: If the defect is repeated or corresponds to the same concept as another bug, the status is changed to "duplicate."
Rejected: If the developer feels the defect is not a genuine defect, they change the status to "rejected."
Deferred: If the present bug is not of a prime priority and if it is expected to get fixed in the next release, then status "Deferred" is assigned to such bugs
Not a bug:If it does not affect the functionality of the application then the status assigned to a bug is "Not a bug".
1. The tester finds and logs a new defect.
2. The defect is forwarded to the project manager for analysis.
3. The project manager decides whether a defect is valid.
4. If the defect is invalid, the status is "Rejected".
5. The project manager assigns a rejected status.
6. If the bug is not rejected, the next step is to check whether it is in scope.
7. Next, the manager checks to see if a similar error has occurred earlier. If so, a duplicate status is assigned to the error.
8. If not, the bug is assigned to the developer, who starts correcting the code.
9. During this phase, the defect is assigned the status "In Progress".
10. Once the code is fixed, the defect is assigned the status "Fixed".
11. Next, the tester tests the code again. If the test case passes, the defect is closed. If the test case fails again, the bug is reopened and assigned to the developer.
12. Consider a situation where, during the first release of a flight reservation product, an error was detected in the Fax Order feature, which was fixed and assigned a status of closed. The same error occurred again in the second upgrade version. In such cases, the closed defect is opened again.
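The status flow above can be sketched as a transition table. This is a minimal illustration: the allowed transitions are taken from the status descriptions above, and the helper names are made up for the example.

```python
# Hypothetical sketch of the defect life cycle as a transition table.
ALLOWED = {
    "New":            {"Assigned", "Rejected", "Duplicate", "Deferred", "Not a bug"},
    "Assigned":       {"Open"},
    "Open":           {"Fixed"},
    "Fixed":          {"Pending retest"},
    "Pending retest": {"Retest"},
    "Retest":         {"Verified", "Reopen"},
    "Reopen":         {"Assigned"},
    "Verified":       {"Closed"},
    "Closed":         {"Reopen"},  # a closed defect can resurface in a later release
}

def move(status, new_status):
    """Return the new status if the transition is allowed, else raise."""
    if new_status not in ALLOWED.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

# Walk one defect through the happy path.
s = "New"
for nxt in ["Assigned", "Open", "Fixed", "Pending retest", "Retest", "Verified", "Closed"]:
    s = move(s, nxt)
print(s)  # Closed
```

Encoding the life cycle this way makes illegal status jumps (e.g. New straight to Fixed) fail loudly, which is how most bug-tracking tools enforce their workflows.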
The above diagram clearly states that Modules 1, 2, and 3 are available for integration, whereas the modules below are still under development and cannot be integrated at this point in time. Hence, stubs are used to test the modules in the planned order of integration.
- Reduces errors in newly developed functions and when changing existing functionality.
- Reduces test costs as errors are detected at a very early stage.
- Improves the design and allows for better code refactoring.
- Unit tests also show the quality of the code when it is integrated into a build.
- Black Box Testing - used to test the user interface, inputs, and outputs.
- White Box Testing - used to test the behavior of each function.
- Gray Box Testing - used to execute tests and perform risk assessment.
- Identify the business-critical functions that a product must perform.
- Design and run the basic functions of the application.
- Make sure the smoke test passes on each and every build before continuing testing.
- Smoke testing enables obvious errors to be revealed, saving time and effort.
- Smoke testing can be manual or automated.
- Check that communication between systems is done correctly
- Check if all supported hardware / software has been tested
- Check if all related documents are supported / open on all platforms
- Check security requirements and encryption when communicating between the application and server systems
- Final Regression Tests: - A "final regression testing" is performed to validate the build that hasn't changed for a period of time. This build is deployed or shipped to customers.
- Regression Tests: - A normal regression testing is performed to verify if the build has NOT broken any other parts of the application by the recent code changes for defect fixing or for enhancement.
- Requires knowledge of the system and its effects on the existing functions.
- Tests are selected based on the area of common failure.
- Tests are chosen to include the area where code changes have been made multiple times.
- Tests are selected based on the criticality of the features.
- Regression tests are the ideal cases of automation which results in better Return On Investment (ROI).
- Select the regression tests.
- Choose an apt tool and automate the regression tests.
- Verify applications with checkpoints.
- Manage regression testing and update as needed.
- Schedule tests.
- Integrate with builds.
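The selection step above is usually done by tagging tests and picking a subset per run; real suites typically use a framework's markers (e.g. pytest's), but the idea can be sketched in plain Python. All test names and tags here are illustrative.

```python
# Minimal sketch of regression test selection by tag (names are made up).
TESTS = [
    {"name": "test_login",       "tags": {"critical", "regression"}},
    {"name": "test_export_csv",  "tags": {"regression"}},
    {"name": "test_beta_banner", "tags": {"experimental"}},
]

def select(tests, tag):
    """Pick only the tests carrying the given tag, e.g. for a regression run."""
    return [t["name"] for t in tests if tag in t["tags"]]

print(select(TESTS, "regression"))  # ['test_login', 'test_export_csv']
print(select(TESTS, "critical"))    # ['test_login']
```

Keeping the selection criteria in data like this makes it easy to schedule different subsets (critical-only for smoke, full regression nightly) from the same suite.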
- Evaluate performance acceptance criteria.
- Identify critical scenarios.
- Design the workload model.
- Identify target load levels.
- Design the tests.
- Run the tests.
- Analyze the results
- Response time.
- Resource usage rate.
- Maximum user load.
- Work-related metrics
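The response-time metric above can be sampled with a simple timing harness. This is a rough sketch: `fake_request` is a stand-in for a real call to the system under test, and the load level (50 samples) is arbitrary.

```python
import statistics
import time

def timed(fn, *args):
    """Run fn and return (elapsed_seconds, result) - a crude response-time probe."""
    start = time.perf_counter()
    result = fn(*args)
    return time.perf_counter() - start, result

def fake_request(n):
    # Stand-in for a real request; sleeps briefly to simulate server work.
    time.sleep(0.001)
    return n * 2

# Collect response-time samples and summarize them.
samples = [timed(fake_request, i)[0] for i in range(50)]
avg = statistics.mean(samples)
p95 = sorted(samples)[int(0.95 * len(samples)) - 1]
print(f"avg={avg * 1000:.2f} ms, p95={p95 * 1000:.2f} ms")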
- This allows the test team to monitor the performance of the system during failures.
- To check whether the system saved data before crashing or not.
- To check whether the system prints significant error messages during a failure or prints random exceptions.
- To check that unexpected failures do not lead to safety issues.
- Monitor the behavior of the system when the maximum number of users are logged in at the same time
- All users performing critical operations at the same time
- All users accessing the same file
- Hardware issues, such as a database server going down or some servers in a server farm failing.
- Operating system Compatibility Testing - Linux, macOS, Windows
- Database Compatibility Testing - Oracle, SQL Server
- Browser Compatibility Testing - IE, Chrome, Firefox
- Other System Software - Web server, networking/ messaging tool, etc.
- White Box Testing
- Black Box Testing
- Grey Box Testing
- Unit Testing
- Integration Testing
- System Testing
- Software malfunction.
- Error in interface.
- Errors in concepts.
- Errors related to the database.
- Performance or behavior errors.
- Errors in product startup or termination
- Integration Testing
- System Testing
- Acceptance Testing
Software Testing Life Cycle (STLC) identifies the test activities to perform and when to perform those test activities. While testing differs between organizations, there is a test lifecycle.
There are mainly eight phases of STLC:
1. Test Planning And Control
2. Requirement Analysis
3. Test Analysis
4. Test Case Development
5. Test Environment Setup
6. Test Execution
7. Exit Criteria Evaluation And Reporting
8. Test Closure
This phase helps to identify whether the requirements are testable or not. If any requirement is not verifiable, the test team can communicate with various stakeholders (customer, business analyst, technical leads, system architects, etc.) during this phase so that a mitigation strategy can be planned.
Entry criteria: BRS (Business Requirement Specification) Results
Deliverables: list of all verifiable requirements, automation feasibility report (if applicable)
The deliverables of this phase are Test Plan & Effort estimation documents.
Entry Criteria: Requirements Documents
Deliverables: Test Strategy, Test Plan, and Test Effort estimation document.
- CRS (Customer Requirement Specification)
- SRS (Software Requirement Specification)
- BRS (Business Requirement Specification)
- Functional Design Documents
Following are the activities that are carried out in the Test Case Development phase:
1. Read the requirement documents of the system under test.
2. Enumerate potential users, their actions, and their goals.
3. Evaluate users with a hacker mindset and list possible scenarios for abuse of the system.
4. List system events and how the system handles these requests.
5. List benefits and create comprehensive tasks to verify.
6. Read about similar systems and their behavior.
7. Study complaints about competitors' products and their predecessors.
A test case serves as a starting point for running the test. After a set of input values is applied, the application reaches a final state and leaves the system at an end point, also known as the post-execution condition.
Test data is used to run the tests on the testware. Test data must be precise and complete in order to detect defects. To achieve this, follow the step-by-step approach given below:
1. Identify resources or test requirements
2. Identify conditions / functionality to be tested
3. Set priority test conditions
4. Select conditions to test
5. Determine expected result of test case processing
6. Create Test cases
7. Document test conditions
8. Conduct test
9. Verify and correct test cases based on modifications
The following diagram shows the different activities that form part of Test Case Development.
The following people are involved in test environment setup:
- System Admins,
- Sometimes users or techies with an affinity for testing.
Not every test can be executed on a local machine; a test server that can support the applications may be needed. For example, a Fedora setup for PHP or Java-based applications with or without a mail server, cron configuration, and so on. Network setup is done as per the test requirements. It includes:
- Internet setup
- LAN Wi-Fi setup
- Private network setup
This ensures that the congestion that occurs during testing doesn't affect other members (developers, designers, content writers, etc.).
For example, Windows Phone app testing may require:
- Visual Studio installation
- Windows phone emulator
- Alternatively, assigning a Windows phone to the tester.
Bug reporting tools should be provided to testers.
Creating Test Data for the Test Environment
Many companies use a separate test environment to test the software product. The common approach is to copy production data to test. This helps the tester to detect the same issues as on a live production server, without corrupting the production data.
The approach for copying production data to test data includes,
- Set up production jobs to copy the data to a common test environment
- All PII (Personally Identifiable Information) is modified along with other sensitive data. The PII is replaced with logically correct, but non-personal data.
- Remove data that is irrelevant to your test.
Testers or developers can copy this to their individual test environment. They can modify it as per their requirement.
Privacy is the main issue in copying production data. To overcome privacy issues, you should look into obfuscated and anonymized test data.
For anonymization of data, two approaches can be used:
- Blacklist: In this approach, all data fields are left unchanged, except those fields specified by the user.
- Whitelist: By default, this approach anonymizes all data fields, except for a list of fields that are allowed to be copied. A whitelisted field implies that it is okay to copy the data as-is and anonymization is not required.
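The whitelist approach can be sketched in a few lines. This is an illustrative example only: the record fields are made up, and a stable hash is used as the replacement so that joins across tables still line up.

```python
import hashlib

def anonymize(record, whitelist):
    """Whitelist approach: every field is anonymized unless explicitly allowed.
    The replacement is a stable hash, so the same input always maps to the
    same token across tables."""
    out = {}
    for field, value in record.items():
        if field in whitelist:
            out[field] = value  # safe to copy as-is
        else:
            out[field] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
    return out

row = {"user_id": 42, "country": "DE", "email": "jane@example.com"}
safe = anonymize(row, whitelist={"user_id", "country"})
print(safe["country"])  # unchanged: 'DE'
print(safe["email"])    # replaced by a 12-character hash token
```

A blacklist version would simply invert the condition: copy everything except the fields named by the user.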
Also, if you are using production data, you need to be smart about how to source data. Querying the database using SQL script is an effective approach.
In this phase, the manual tests / automation scripts are executed. If any defect is detected during the execution of a test case, it is reported to the developer through the bug tracking system.
If a test case's result does not match the expected result, that test case is marked as Fail.
If a test case's result matches the expected result, that test case is marked as Pass.
If a test case depends on other modules and a fault is detected in them, that test case is marked as Blocked: the fault in the main module is corrected first, and then the associated module's test case is run. For example, module B depends on module A. If any fault is found in module A, the test case for module B is not executed. First correct the fault in module A and rerun module A's test case; if module A's test case passes, then execute module B's test case. Blocked test cases are executed after the fault is corrected by the developer.
In this phase, the test results are checked against predefined exit criteria. At this stage, the test summary report is generated. A document containing a summary of testing activities and final test results is called the Test Summary Report.
The final stage where we prepare Test Closure Report, Test Metrics.
The testing team will meet to evaluate cycle completion criteria based on test coverage, quality, time, cost, software, and business objectives.
The test team analyses the test artifacts (such as Test cases, defect reports, etc.,) to identify strategies that have to be implemented in the future, which will help to remove process bottlenecks in the upcoming projects.
Test metrics and Test closure report will be prepared based on the above criteria.
Entry Criteria: Test Case Execution report (make sure there are no high severity defects opened), Defect report
Deliverables: Test Closure report, Test metrics
- Boundary Value Analysis (BVA)
- Equivalence Class Partitioning
- Decision Table based testing.
- State Transition
- Error Guessing
- Boundary Value Analysis (BVA)
- Boundary value analysis is based on testing the boundaries between partitions. It includes maximum, minimum, inside and outside boundaries, typical values, and error values. It is generally seen that numerous errors occur at the boundaries of the defined input values rather than at the center.
- Also known as BVA, it offers a selection of test cases that exercise boundary values. This black box testing method complements equivalence partitioning. It is based on the principle that if a system works well for these particular values, it will work error-free for all values that lie between the two boundaries.
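The principle above can be sketched as a small test. The rule being tested (accept ages 18 to 60 inclusive) is a hypothetical example, chosen only to show which values BVA selects.

```python
def validate_age(age):
    """Hypothetical rule: accept ages 18..60 inclusive, reject everything else."""
    return 18 <= age <= 60

# Boundary value analysis: test at, just below, and just above each boundary.
boundary_cases = {
    17: False,  # just below the minimum
    18: True,   # the minimum itself
    19: True,   # just above the minimum
    59: True,   # just below the maximum
    60: True,   # the maximum itself
    61: False,  # just above the maximum
}
for value, expected in boundary_cases.items():
    assert validate_age(value) == expected, f"failed at {value}"
print("all boundary cases pass")
```

Note how no "center" value like 40 is needed: per the principle above, if the boundaries behave correctly, the values between them are assumed to as well.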
| CONDITIONS | CASE 1 | CASE 2 | CASE 3 | CASE 4 |
| --- | --- | --- | --- | --- |
| EMAIL | F | T | F | T |
| PASSWORD | F | F | T | T |
| OUTPUT | ERROR | ERROR | ERROR | HOME SCREEN |
CASE 1: Email And Password Wrong: Error Message Displayed.
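The decision table maps directly to one test per column. A minimal sketch of the login rule it describes (the function name is illustrative):

```python
def login(email_ok, password_ok):
    """Decision table rule: the home screen appears only when both
    conditions are true; every other combination yields an error."""
    return "HOME SCREEN" if email_ok and password_ok else "ERROR"

# One test per column of the decision table.
assert login(False, False) == "ERROR"        # CASE 1: both wrong
assert login(True, False) == "ERROR"         # CASE 2: password wrong
assert login(False, True) == "ERROR"         # CASE 3: email wrong
assert login(True, True) == "HOME SCREEN"    # CASE 4: both correct
print("all four cases pass")
```

With two Boolean conditions there are exactly 2^2 = 4 columns, which is why decision table testing guarantees that every combination of conditions is covered.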
- First, enter the right number in the text box and click the RESET PASSWORD button. An OTP is sent to the mobile number.
- To reset the password, the user must go through the OTP check. The first time the user enters the correct OTP, they are allowed to go to the password change page.
- If the user enters an incorrect OTP the first and second time, the system will ask for the OTP a third time.
- If the OTP is valid, the user is allowed to go to the password change page; however, if the OTP is incorrect the third time, an error message is displayed, such as "Your OTP has expired!"
| ATTEMPT | CORRECT PIN | INCORRECT PIN |
| --- | --- | --- |
| [B1] Start | B5 | B2 |
| [B2] First attempt | B5 | B3 |
| [B3] Second attempt | B5 | B4 |
| [B4] Third attempt | B5 | B6 |
| [B5] Access granted | - | - |
| [B6] Account blocked | - | - |
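The table above is a state machine, and state transition tests simply walk it with different input sequences. A minimal sketch (the PIN value is made up for the demo):

```python
# Sketch of the PIN state table: repeated failures end in a blocked account.
TRANSITIONS = {
    # state: (next state on correct PIN, next state on incorrect PIN)
    "B1": ("B5", "B2"),  # start
    "B2": ("B5", "B3"),  # after first failed attempt
    "B3": ("B5", "B4"),  # after second failed attempt
    "B4": ("B5", "B6"),  # after third failed attempt -> blocked
}

def run(attempts, correct_pin="1234"):
    state = "B1"
    for pin in attempts:
        if state in ("B5", "B6"):  # terminal states: granted or blocked
            break
        on_correct, on_incorrect = TRANSITIONS[state]
        state = on_correct if pin == correct_pin else on_incorrect
    return state

print(run(["1234"]))                  # B5 - access granted immediately
print(run(["0", "0", "0", "0"]))      # B6 - blocked after repeated failures
```

Test cases derived from this technique cover every valid transition plus the terminal states, rather than just one happy path.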
- Divide by zero
- Inserting blanks in text fields
- Pressing the enter button without entering values
- Uploading files that exceed the maximum limits
- Null pointer exceptions
- Invalid parameters
- What will be the result if the cellphone number is left blank?
- What is the result if a character other than a digit is entered?
- What is the result if fewer than 10 digits are entered?
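The three questions above translate into an error-guessing test set. Below is a sketch against a hypothetical validation rule (exactly 10 digits); the rule and messages are assumptions for illustration.

```python
def validate_mobile(number):
    """Hypothetical rule for the questions above: non-blank, digits only,
    exactly 10 digits."""
    if not number:
        return "Mobile number is required"
    if not number.isdigit():
        return "Only digits are allowed"
    if len(number) != 10:
        return "Number must be exactly 10 digits"
    return "OK"

# Error-guessing inputs: blank field, non-digit characters, too few digits.
print(validate_mobile(""))            # Mobile number is required
print(validate_mobile("98765abcde"))  # Only digits are allowed
print(validate_mobile("12345"))       # Number must be exactly 10 digits
print(validate_mobile("9876543210"))  # OK
```

Each "guess" targets a failure mode developers commonly forget, which is the whole point of the technique: experience suggests the inputs, not a formal model.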
- Before you begin testing any app/website, you should know the concept of the product, what problems the product solves, and how users are going to use it.
- Here are the steps to clarify the concept of a product:
- Identifying customer needs.
- Defining the problem and objectives.
- Drafting and analysis.
- Ask for detailed design and drawings.
- Final successful delivery.
- ‘’First, to understand the requirements, follow the steps mentioned below’’
- There are mainly two types of requirements: 1. Functional 2. Non-functional
- What are Functional Requirements?
- Functional requirements define the basic system behavior. Essentially, they are what the system does or must not do, and can be thought of in terms of how the system responds to inputs. Functional requirements usually define if/then behaviors and include calculations, data input, and business processes.
- What are the Non-Functional Requirements?
- While functional requirements define what the system does or must not do, non-functional requirements specify how the system should do it. Non-functional requirements do not affect the basic functionality of the system: even if the non-functional requirements are not met, the system will still perform its basic purpose.
- This will help you trace a bug back to the task it affects and adjust the task accordingly
- This will make it easier to understand the bug
- It will give an overview of the task and the flow in which it works
- This will help you prioritize where to look for potential bugs
- Understand the Learners: To write concrete and effective scenarios you must understand your learners and know their needs and expectations.
- Create Real Life and Relevant Situations: Make your scenarios as real as possible.
- Motivate the Learner: A well-written scenario should motivate the learner to action.
- ‘’How to write the test cases’’
- Title Must be strong
- Include a Strong Description with Assumptions & Preconditions
- Keep the Test Steps Clear and Concise
- State the expected result
- Also, make it reusable
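The elements listed above can be captured in a simple template. This is an illustrative sketch; the field names and the sample test case are made up.

```python
# Illustrative test case template showing the elements listed above.
test_case = {
    "id": "TC-001",
    "title": "Verify login with valid credentials",  # strong title
    "preconditions": ["User account exists", "App is installed"],
    "steps": [                                       # clear, concise steps
        "Open the app",
        "Enter a valid email and password",
        "Tap the Login button",
    ],
    "expected_result": "Home screen is displayed",   # expected result
    "reusable": True,  # parameterize credentials to reuse across test runs
}

def is_well_formed(tc):
    """A test case is usable only if every required element is filled in."""
    required = ["id", "title", "preconditions", "steps", "expected_result"]
    return all(tc.get(key) for key in required)

print(is_well_formed(test_case))  # True
```

A review checklist like `is_well_formed` can be run over a whole suite to catch cases missing preconditions or expected results before execution begins.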
- This will ensure that, in case something goes wrong in production, it won't be in any business-critical flows
- Reason to do this: code changes can introduce regressions; if we don't check before deploying to the server, problems end up in the product. To avoid this, check the critical flows twice.
- If you test with the developer, you may miss out on edge cases due to the developer's bias or perspective. So make sure you test the app/website once while the developer is not with you.
- It always helps to communicate any doubt that you have
- Perspectives and methods vary, so in case of any doubt, ask the developer and clarify it.
- Also communicate with the product lead in case of doubt, so that a better output can be generated with minimal bugs and a well-defined task.