Monday, 20 January 2014

Shipping quickly may ship bugs: do you agree with this?

It is rightly said that a quick response is appreciated far more than a delayed one. It is somewhat true that deliverables shipped well before the deadline may contain bugs; however, the likelihood of a bug surfacing later depends largely on the skill of the testers and on how effective the QA plan and policies were, including the internal reviewers and QA engineers responsible for reviewing and signing off completed deliverables. If QA tasks are completed on or ahead of schedule and cover the entire product as expected, shipping early cannot be considered a negative.
The objective of QA is to ensure that the requirements are met and that the project matches what the client desires. When the QA team is involved from the start of a project through maintenance, they know which areas to focus on and which scenarios to apply. Signing off a product quickly can even leave a positive impression on the client and customers, especially when the client is waiting on QA sign-off so that succeeding milestones can begin.

Shipping quickly may ship bugs, because it is easy to make mistakes when there is too little time for thorough testing. Much also depends on the criticality of the product and how the testing effort is managed. A product is always more vulnerable to bugs if there is not enough time to plan a testing strategy. Delivery deadlines often leave a tester little time to test the application efficiently, so it is the tester's responsibility either to deliver a quality product within the given time or to ask the client to extend the delivery date. If the tester is not confident about the quality of the testing performed, bugs have almost certainly been left in the application. In that case it is better to complete the testing, by extending working hours or by asking the client to extend the delivery date, so that the tester gains confidence in the testing done and a quality product reaches the customer.


Sometimes, due to a shortage of time, testers hurry through testing and end up leaving bugs in the deliverable. With too little time, it is hard for testers to think of out-of-the-box or negative scenarios, and in some cases they may even miss positive ones. A tester should have enough time to understand the requirements and test the product in order to deliver a quality end product.


More on this at http://www.qainfotech.com/

Tuesday, 13 August 2013

Software Testing

A good QA team needs a proper software staging environment for testing. If you agree, please share your thoughts.

Yes, I completely agree with this statement, because the test environment and conditions we get during the testing phase can never truly match the production environment. In particular, the test data we feed into the application during testing is quite constrained. For end-to-end testing, QA needs an environment that is a faithful imitation of production, and that is why the concept of a staging environment came into the limelight.

There are numerous issues in the QA environment, caused by its limited data set, that resolve themselves in production after deployment. But what if an issue does not resolve itself? What if it was a functional defect in the application and not a data limitation? A tester cannot rely on assumptions, so testing the application in a proper staging environment before it is deployed to production is a must.

A good QA team always tries to provide the best experience to the customer by performing quality testing with what it has, and what it has rarely matches production. A staging environment, by contrast, is an exact replica of the production environment, kept separate from the development server. It lets the team exercise the application under production-like conditions without exposing it to customers. QA can therefore find bugs, performance problems, platform-related issues, and other critical defects on staging before the final product is pushed to customers, and can be confident that no critical differences remain between the development server and the final product.

A staging environment is a replica of the production environment, i.e. its configuration is the same as production's. Once a build passes testing on the QA tier, it needs to be tested on a configuration similar to production. When the QA team has tested on staging and considers the build a production candidate, it asks users (or business analysts) to perform UAT on the build to verify that everything works. Without a staging environment, many issues can reach production unnoticed and cost far more to fix in later stages.

A QA or software staging environment also allows testing without halting development work. If development is in progress on the same environment, it is difficult for both teams (dev and QA) to keep track of the current state: one person may break software that has already been tested while another is trying to test it. Without the two environments being separate, there is no way to do proper testing, so a good QA team needs a proper staging environment in order to follow an organized approach.

"Testing on a staging environment provides a more precise measure of performance capacity and functional accuracy. As Web applications become more mission-critical for end-users, it becomes more and more important to test on environments that exactly imitate production because it's production where customers use your application. Any defect found in production is a fail to notice or an escape and it is not acceptable also from business point of view. Any defects experienced by customers in production negatively impact your application's and company's reputation.
Customers prefer not to be amazed. No one wants their system to go down or to go really slow or crash abruptly. As workers, we don't want to be negatively impacted at all by any kind of software up gradations. As a professionals, we want software upgrades to be flawless, barely discernible and a non-event. The only way to make sure that your software doesn't stop or obstruct with your professional users is to test on a staging environment.
As a company, it's attractive to go around creating a staging environment for per testing of production. However, when producing mission-critical software of any kind, the staging environment is essential to ongoing success.
"
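The parity argument above can be sketched as a simple check. This is only an illustration, not from any real project: the configuration keys, hostnames, and the `config_matches` helper are all invented, and a real parity check would cover far more settings.

```python
# Sketch: verify that a staging configuration mirrors production in every
# setting except environment-specific ones. All keys/values are hypothetical.

STAGING = {"base_url": "https://staging.example.com", "db": "staging_db"}
PRODUCTION = {"base_url": "https://www.example.com", "db": "prod_db"}

def config_matches(staging, production, keys_that_may_differ=("base_url", "db")):
    """Staging should match production in everything except hostnames,
    database names, and similar environment-specific settings."""
    staging_rest = {k: v for k, v in staging.items() if k not in keys_that_may_differ}
    prod_rest = {k: v for k, v in production.items() if k not in keys_that_may_differ}
    return staging_rest == prod_rest

# Example: identical feature flags and cache settings, different hosts.
staging = dict(STAGING, cache_ttl=300, feature_x=True)
production = dict(PRODUCTION, cache_ttl=300, feature_x=True)
print(config_matches(staging, production))  # True: staging mirrors production
```

A check like this, run before each deployment, is one cheap way to catch configuration drift between staging and production before it invalidates test results.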

Tuesday, 18 June 2013

Software Testing


What do you think are the 3 most important software testing skills, and why?

In today’s world the software testing industry is growing at a very rapid pace, and the number of people pursuing software testing as a career has grown immensely over the years. Delivering quality products has become the foremost goal of software testing companies, so the need for good software testers is tremendous.
The 3 most important skills a good software tester must possess are:

1. Out-of-the-box thinking: A good software tester should be able to create multiple what-if scenarios, put him/herself in the customer’s shoes, and apply all of those scenarios to the workflow of the application.

2. Excellent communication skills: Communicating an issue to the development team is one of the most important parts of a tester's job. Every tester must therefore possess excellent communication skills, both oral and written, in order to convey the issues s/he has found in the most effective and efficient manner.

3. Quick learning: The last, and most important, quality a tester must have is the ability to adapt and learn quickly. Two applications given to the same tester may come from completely different domains, for example a banking website and a learning management system, so the tester needs to be able to switch between them quickly.

4. Analytical skills: An important goal of testing is to identify hidden errors. To be effective, a tester must be able to analyze the given business situation, judge all the possible scenarios, and identify and test unfamiliar ones. Creating logical scenarios and validating the application under test before release can be done effectively only by someone with strong analytical skills.

5. Creativity: A tester should think out of the box, exercising the system in ways that require non-intuitive approaches to accomplishing tasks. People who are purely task-oriented, receiving a set of instructions and following them the same way every time, rarely make good testers.

6. Communication skills: Excellent communication skills are essential for reporting bugs. A tester must be able to communicate his or her thoughts, and the issues encountered in the application, effectively. Arguments should be supported by facts, and the language should be pragmatic rather than philosophical.

7. Ability to think out of the box: if a tester can devise and apply scenarios that cover a wide logical area, the chances of finding defects increase.

8. Passion and enthusiasm: testing is a repetitive activity, so it can become boring and a tester can lose interest in finding bugs.

9. The most important skill is communication: it is vital to communicate your findings so that the developer can easily understand them and fix the issues quickly.

Friday, 23 November 2012

SOFTWARE TESTING

   What are the pros and cons of requirements-based software testing?

Requirements-based software testing is testing a product against the requirements provided in requirements documents, such as use cases or 3Cs, to verify whether every feature has been developed as specified. Test cases are created and executed from the requirements to make sure each and every requirement is covered correctly.
However, requirements-based testing is not effective if the requirements are not detailed or properly defined. Sometimes requirements are missed by the business analyst and so are neither developed nor tested, and real-world business scenarios are often absent from the requirements documents, leaving them undeveloped and untested.
In short, requirements-based testing is effective only if the requirements are documented properly.
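One common way to keep this requirement-to-test traceability explicit can be sketched as follows. This is purely illustrative: the requirement IDs, the `add` function, and the coverage check are all invented for the example.

```python
# Sketch: requirement-based test cases keyed by requirement ID, so coverage
# can be checked against the requirements document. IDs are hypothetical.

def add(a, b):  # stand-in for the feature under test
    return a + b

# Each test case records which documented requirement it exercises.
TEST_CASES = [
    {"req": "REQ-001", "run": lambda: add(2, 3) == 5},
    {"req": "REQ-002", "run": lambda: add(-1, 1) == 0},
]

REQUIREMENTS = {"REQ-001", "REQ-002", "REQ-003"}

def untested_requirements():
    """Requirements with no test case pointing at them: exactly the gaps
    that requirements-based testing is meant to expose."""
    covered = {tc["req"] for tc in TEST_CASES}
    return REQUIREMENTS - covered

results = {tc["req"]: tc["run"]() for tc in TEST_CASES}
print(results)                  # {'REQ-001': True, 'REQ-002': True}
print(untested_requirements())  # {'REQ-003'} -> a coverage gap
```

A traceability matrix like this makes the section's point concrete: the tests can only be as good as the requirement list they are derived from, and anything missing from that list (here REQ-003) simply never gets tested.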

At the beginning of the testing cycle, it is important to first validate the requirements and check their correctness, unambiguity, and logical and practical consistency. Requirements-based testing both validates the requirements and verifies the built product against them. To achieve this, testers write sets of test cases covering all functionality, based on the requirements outlined and provided by stakeholders.
As every coin has two sides, requirements-based testing has its own pros and cons, outlined below:

Pros:

1) Firstly, requirements-based testing demonstrates whether or not the software meets the requirements. When it does, this adds value to the product and leads to an accurate, reliable piece of software.

2) Secondly, a variety of tests are involved in requirements-based testing, such as black-box, integration, system, and coverage testing, which helps ensure quality.

3) All the requirements are validated, confirming the product is built as specified.

4) Logical consistency of the application is checked.

5) Acceptance of the application is assessed from the stakeholders' perspective.

Cons:

1) Requirements are often poorly defined, or not defined at all, and requirements-based tests can never be better than the requirements they are based on. With poorly defined requirements, it is never exact what to develop or what to test; everything runs on guesswork.

2) Requirements-based testing is black-box testing: as long as the application produces the expected results, the test cases pass. It is not concerned with how the results are produced, or with the effects they may have on other parts of the application.

3) Requirements-based testing does not consider how valid a requirement is; no feasibility analysis is performed.

4) The pass/fail verdict for the application should not depend entirely on requirements-based testing.


Sunday, 23 September 2012

Based on your expertise, please share the top 5 trends that will drive the market for leading software testing companies in 2012-13.

I see some major trends in business and information technology that will drive the market for software testing in 2012-13. Some of these shaping trends are:
1) Mobile application testing: Now that smartphones have become more accessible, the convenience of browsing on the move has led many people to prefer this platform over the PC for accessing the internet, so businesses demand applications that are portable to mobile. Smartphones are dominating the market, and mobile apps for the various operating systems are popping up everywhere, so there is huge scope for mobile app testing. Mobility is no longer limited to making and receiving calls on the go; it has become an essential part of daily life. Mobile testing assures the quality of mobile devices such as phones and PDAs.
2) Testing-as-a-Service (TaaS) / cloud testing: With TaaS, or cloud-based testing, organizations benefit from a huge cutback in infrastructure costs, immediate access to a range of testing scenarios and capabilities, and reduced resolution time for critical and non-critical defects. TaaS is a platform provided on cloud infrastructure: a form of software testing in which organizations use applications over the internet without investing in new infrastructure or licensing new software.

3) Business intelligence testing: Data volumes are growing enormously, so filtering out meaningful data is a huge challenge. The efficiency of a business intelligence tool lies in its ability to crunch huge loads of data and present meaningful results to business users. Some secure web applications contain classified data, and exposing it to test engineers may be harmful to the enterprise. There is therefore a need to generate and manage test data so that test engineers can work without leaking vital information.

4) Crowdsourced testing: Crowdsourced testing is gaining pace across the industry as a way to uncover bugs and gather new ideas, and enterprises now prefer a crowdsourced marketplace for access to global talent and a different set of skills. It is the process of outsourcing tasks to a group of teams or individuals; in testing, a group of professionals is invited to find loopholes in the application and provide concrete suggestions to improve the user experience.
5) Test data generation and management: Creating and managing test data ensures the best use of filtered test data. Here, the efficiency of the tool itself is put to the test, and performance test engineers check for system slowness when huge volumes of data are processed.
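A minimal sketch of generated test data is shown below. The field names, value ranges, and record counts are invented for illustration; real test data management would draw these from the application's data model.

```python
import random

# Sketch: deterministic test-data generation, so large, repeatable data sets
# can be produced without exposing real customer records. Fields are invented.

def make_users(n, seed=42):
    rng = random.Random(seed)  # fixed seed -> the same data on every run
    names = ["Asha", "Ben", "Chen", "Dina"]
    return [
        {"id": i, "name": rng.choice(names), "age": rng.randint(18, 80)}
        for i in range(n)
    ]

users = make_users(1000)
print(len(users))                                 # 1000
print(all(18 <= u["age"] <= 80 for u in users))   # True
```

Seeding the generator is the key design choice here: it makes a performance run against a million generated rows reproducible, so a slowdown found on one run can be reproduced on the next.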
6) Agile testing: This evolving methodology helps release the product to market as soon as possible through continuous releases of working software (the rapid agile sprint process).
7) Web accessibility testing: A branch of usability testing in which the targeted users have disabilities that affect how they use the application.
8) Independent outsourced testing: It provides the neutral, fresh approach that is essential to the market success of a software product.

Wednesday, 19 September 2012

What Happens when a QA Testing Company Introduces a New process within Ongoing Project? Share your Experience with us.


Introducing a new process within an ongoing project is a sensitive decision. The main points to keep in mind when introducing new processes alongside the current ones in a software organization are summarized as follows:

1: The size of the organization plays a vital role, and the associated risk is a major factor: the larger the organization, the greater the risk involved in adding a new process to the ongoing ones.

2: A balance should be maintained between productivity and the organization's output. Management should be smart enough to decide the project's goals, for instance whether test requirement specification documents will be used to create the project's design documents.

3: Sometimes an ad-hoc process is good enough; it all depends on the size of the project the organization is dealing with. There should be a clear channel of communication among team leads, managers, developers, the QA team, and the end users, so that the scope and progress of the project are well defined and there is no discrepancy between the various phases of the project.


Injecting new software QA processes into an existing organization depends upon:

a) Organization size: For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary.

b) For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communication among customers, managers, developers, and testers.

For more on this, please log on to:- http://www.qainfotech.com/

Wednesday, 22 August 2012

Test Automation Framework

Can you differentiate between Test Plan and Test Strategy? Do we really need Test Plan documents?

A test strategy is a higher-level document defining how to test; a test plan, by contrast, defines what to test.

A test strategy defines how the different functionalities will be tested, which testing tools will be used, and which test case design methodologies will be applied. A test plan, on the other hand, defines the features to be tested (in scope), the features not to be tested (out of scope), risk analysis, entry and exit criteria, the roles and responsibilities of each team member, and the project schedule. The strategy is a set of guidelines describing test design: how testing is going to be performed, what the test architecture will be, and which testing approach the test engineers will follow.

Below are some more points that can be included in Test Strategy Document:

a) Defect Report and Tracking
b) How to communicate between Team and Status reports

A test plan is the set of ideas that guides or represents the intended test process; it is a subset of the test strategy. It covers what needs to be tested, how testing is going to be performed, the resources needed, the timelines, and the associated risks. A test plan acts as a guide for the project and helps ensure that the project meets all of its specifications and other requirements. It is good to have a test plan in a project, but more important than having one is following it. A test plan is usually prepared by the manager or team lead and describes the objectives, scope, approach, and focus of the software testing effort.

Different types of test plans:
a. Unit
b. Integration
c. System
d. UAT

The way tests will be designed and executed to support an effective quality assessment is called Test Strategy.


Mentioned below are some points that a good Test Strategy should follow:

A. It focuses most effort on areas of potential technical risk, while still putting some effort into low-risk areas in case the risk analysis is wrong.

B. It addresses test platform configuration, how the product will be operated, how the product will be observed, and how those observations will be used to evaluate the product.

C. It is diversified in terms of test techniques and perspectives. Methods of evaluating test coverage should take into account multiple dimensions of coverage, including structural, functional, data, platform, operations, and requirements.

D. It specifies how test data will be designed and generated.
Different test techniques (ad hoc, exploratory, specification-based, code-based, fault injection, etc.) and test methods (black-box, white-box, and incremental) are used while designing and executing test cases.
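Point A above, weighting test effort by risk, can be sketched as a simple scoring exercise. The feature areas and the likelihood/impact scores are invented for illustration; in practice these come from the project's own risk analysis.

```python
# Sketch: rank feature areas by risk = likelihood x impact, so most test
# effort lands on high-risk areas while low-risk areas still get a pass.
# Areas and scores are hypothetical.

AREAS = {
    "payment processing": {"likelihood": 4, "impact": 5},
    "report export":      {"likelihood": 2, "impact": 2},
    "user login":         {"likelihood": 3, "impact": 5},
}

def prioritized(areas):
    """Return area names ordered from highest risk score to lowest."""
    return sorted(
        areas,
        key=lambda a: areas[a]["likelihood"] * areas[a]["impact"],
        reverse=True,
    )

order = prioritized(AREAS)
print(order)  # ['payment processing', 'user login', 'report export']
```

Even a crude ranking like this makes the strategy's effort allocation explicit and reviewable, rather than leaving it to each tester's intuition.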

Some say that a test plan document is required because it contains genuinely useful information: what needs to be tested, how testing is going to be performed, the resources needed, the timelines, and the associated risks.

Do we really need Test Plan?

Test Plan covers:

a) What needs to be tested
b) When to start testing
c) Resources needed for testing
d) Risks involved, etc.

But many people say that for efficient test planning we do not need a test plan document: just identify your test strategy and carry on with your testing. It is true that a formal document is not required in order to formulate a plan, and planning can be done without one. From a systematic point of view, however, having a test plan document is preferable.

For more details, please log on to: http://www.qainfotech.com/eLearning.html