Thursday, 22 June 2017

Oniyosys Advertising Testing: For Highest ROI Generation



Advertising testing is one of the staples of market research, as it directly addresses the measurement and improvement of marketing effectiveness. Ad testing comes in a variety of forms depending on the platform for which the advert is being developed and deployed.
The purpose of an advert is to create sales, but good advertising does more than raise sales: it makes consumers aware of the brand and imparts meaning to that brand.

Advertising testing therefore mostly starts at the creative end of the scale, looking at concept testing using qualitative research. Various concepts are drawn up and respondents, often in focus groups but also in in-depth individual interviews, describe what they take out of the advert, what they like or don't like about it, and how they think it would affect their behavior. Naturally it's very difficult for someone to say exactly how they would respond to advertising or which advert they would find most appealing, so researchers take care to introduce the advertising carefully: for instance, hiding the test ad among others, changing the order in which the adverts are shown, giving respondents dials to indicate interest, or running a post-test after the respondents think the testing has finished.

At an initial level, these concept tests can screen out poor adverts that are difficult to understand, but adverts are often tested before they are fully finished, and it can be difficult for respondents to imagine the final version. An extension of this type of qualitative testing is qualitative concept development, where the research is used iteratively with the creative team to define and refine the ideas. It might start very open; the design team then works up concepts to test, placing them in front of respondents to see how individuals respond neurologically or psychologically, then slowly refining and picking winners. This type of iterative development is rare, but is being used more often. With online research it can also be compressed into fast-turnaround testing, with small-sample quantitative tests used to confirm that the qualitative findings hold.

Pre-testing
The formal testing of advertising that is practically finished is known as pre-testing. This is typically a more quantitative process to evaluate the potential reach and success the advertising can generate. For broadcast advertising, much of the cost is in buying media space, so in an advanced form of pre-testing the advertising is tested in a smaller region or area before being rolled out fully. In this way, the advertising is only executed if it meets certain goals.

Pre- Post- Test and Control testing
The main testing of advertising is done through a traditional statistical test. Recollection of advertising can be quite poor, yet the advertising itself can still have an effect on brand recognition, consideration, and other market metrics, almost at a subconscious level; secondly, there is usually an amount of false recognition (around 3-4% in the UK, and up to 5-6% in the US). So to formally measure effectiveness it's not correct to rely blindly on post-advertising recollection as reported by respondents. Instead, measurement is done with pre- and post- measurements using matched samples. The pre- measurement takes place before the advertising goes live and sets a benchmark. It's normally constructed carefully to ensure that a range of different awareness and consideration measures are captured, first without the respondent knowing which company is sponsoring the research, then with prompting to capture additional recollection. The post- measurement then re-measures these details among a sample matched to the pre- sample (matched samples) to ensure statistical comparability. Changes can then be attributed directly to the advertising campaign and any other news or information that the advertising generates.

In practice this still might not be sufficient to measure the real effect. Changes to the market, a recent economic or political event, or even simple seasonality can cause the post- measurement to change even without any advertising effect. To control for this, a full pre- post- test and control trial can be run. In this design the pre- and post- measures are divided into two areas (typically geographic, such as different locations): one larger area, the test area, where people get to see or hear the advertising, and a smaller area, the control, where the advertising is not shown. From this it becomes possible to isolate the advertising effectiveness from other factors by looking at how measurements changed in the control area compared to how they changed in the test area.
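
As a simple illustration of the arithmetic behind test-and-control measurement, the sketch below computes the advertising effect as the change in the test area minus the change in the control area (a difference-in-differences). All figures here are hypothetical, purely for illustration:

    # Difference-in-differences: isolate the advertising effect from
    # background changes (seasonality, news, economic events).
    def advertising_effect(test_pre, test_post, control_pre, control_post):
        test_change = test_post - test_pre             # change where ads ran
        control_change = control_post - control_pre    # background change
        return test_change - control_change

    # Hypothetical awareness scores (% of respondents aware of the brand).
    effect = advertising_effect(test_pre=30.0, test_post=42.0,
                                control_pre=31.0, control_post=34.0)
    print(f"Estimated advertising effect: {effect:.1f} percentage points")  # 9.0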

To make this even more effective you can look at test and control areas for different platforms (e.g. some with radio, some with radio plus posters, and so on), so you can start to isolate media effects. Generally media have a cumulative effect: the combination has a bigger effect than either medium on its own. Even where there is no formal demarcation, it can be possible to infer effectiveness by comparing groups that listened to the radio with those who didn't.

Ad Testing allows you to:


  • Effectively target key market segments with content that resonates.
  • Get iterative feedback to ensure core messaging sticks, and to share those insights with ad creators and/or stakeholders.
  • Achieve data-driven confidence when promoting a campaign
  • Make an informed go or no-go decision when deploying an ad
  • Evaluate the performance of an ad agency
  • Get the highest possible ROI out of your ad spend
  • Predict advertising influence on purchase intent 


The following are eight commonly performed ad tests:


RECALL
Companies need to be memorable if customers are going to consider their products or services. In a recall test, participants see an ad and then wait a specified amount of time before being asked whether they can recall the ad or the product.

PERSUASION
A test for persuasion measures the effectiveness of an ad in changing attitudes and intentions. This test assesses brand attitudes before and after ad exposure. Participants answer a series of questions before seeing the proposed advertisement. Then they take a second test to assess how the advertisement changed their attitudes and intentions.

RESPONSE
All ads are designed to drive an action or a conversion. This is especially true for online businesses that rely on click-throughs and conversions to generate revenue. In a response test, participants receive an ad with a unique identifier (URL string, promo code, phone number, etc.) to track how well the advertisement performs in converting interest to action.
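
As a minimal sketch of how response tracking might be tallied, the snippet below counts exposures and conversions per promo code and reports a conversion rate. The event data and code names are hypothetical:

    from collections import Counter

    # Hypothetical tracking events: (promo_code, action)
    events = [
        ("SPRING10", "view"), ("SPRING10", "convert"),
        ("SPRING10", "view"), ("RADIO5", "view"),
        ("RADIO5", "view"), ("RADIO5", "convert"),
    ]

    views = Counter(code for code, action in events if action == "view")
    conversions = Counter(code for code, action in events if action == "convert")

    for code in views:
        rate = conversions[code] / views[code]
        print(f"{code}: {views[code]} views, {conversions[code]} conversions "
              f"({rate:.0%} conversion rate)")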

SERVICE ATTRIBUTES
This type of ad test determines which attributes and features the ad successfully communicates. For instance, a service attributes test might ask whether the ad communicates that a certain computer is reliable, or whether it says more about the highlighted features.

COMMUNICATING BENEFITS
Effective ads communicate the right product or feature benefits to the target market. Benefits might include aspects like comfort, quality, or luxury.

PERSONAL VALUES
Personal values are a large factor in driving consumer purchase decisions. If a customer is purchasing a car, they may value customer service, vehicle reliability, or the affordability of dealership services. When testing ads it’s important to determine how well an advertisement communicates the personal values of the target market.

HIGHER ORDER VALUES
Advertisements often communicate higher order values, such as accomplishment, peace of mind, or personal satisfaction, that resonate deeply with audience psychology. These higher order values can have great influence on purchase decisions, brand awareness, and market positioning.

AD EFFECTIVENESS
This type of ad testing measures how effective an ad is, based on behavioral and attitudinal goals. These goals vary by ad and include factors such as whether the ad is entertaining to watch, whether it is informative, and whether it drives consumers to purchase a specific product or service.

Oniyosys provides Advertisement Quality Testing services for various types of ads, including banner ads, text ads, inline ads, pop-up ads, in-text ads, and video ads. We report bad-quality ads with screenshots and HTML code, and we capture the latest Fiddler session, which helps clients remove bad-quality ads quickly. We also test for bad-quality ads on the Chrome and Firefox browsers. Our team is staffed with experienced digital experts who can root out every error and possible fault for better conversion.
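
As a rough sketch of how browser-based ad checks might be automated, the snippet below uses Selenium to load a page in Chrome, collect candidate ad iframes, and save a screenshot as evidence. The page URL and the iframe selector are hypothetical assumptions, and the same flow could be run with webdriver.Firefox() for Firefox coverage:

    from selenium import webdriver

    driver = webdriver.Chrome()  # or webdriver.Firefox()
    try:
        driver.get("https://example.com/page-under-test")  # hypothetical page
        # Collect candidate ad slots; using iframes as a stand-in selector.
        ad_frames = driver.find_elements_by_tag_name("iframe")
        print(f"Found {len(ad_frames)} iframe(s) that may contain ads")
        # Save a screenshot to attach to the bad-ad report.
        driver.save_screenshot("ad_evidence.png")
    finally:
        driver.quit()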

Friday, 26 May 2017

Oniyosys Mobile Application Testing: for optimum and seamless mobile applications



Mobile applications are at the center of the digital revolution across sectors today. Customers now have plenty of options to switch effortlessly to alternative mobile applications, and are increasingly intolerant of poor user experience, functional defects, below-par performance, or device compatibility issues. Mobile application testing is therefore now a critical step for businesses looking to launch new applications and communicate with consumers. Keeping pace with the latest developments and changing requirements, Oniyosys provides comprehensive mobile application testing services with assured quality of output. To cope with the emerging challenges of complex mobile devices, we provide extensive training and monitor the latest trends and developments in testing.

Mobile Application Testing:

Mobile application testing checks applications that work on mobile devices and their functionality, for a better user interface and for errors. Within mobile applications, there are a few basic types that are important to understand:

a) Native apps: A native application is built for a specific platform and installed directly on devices such as phones and tablets.
b) Mobile web apps: These are server-side apps accessed on a mobile device through browsers like Chrome or Firefox, over a mobile network or a wireless network such as Wi-Fi.
c) Hybrid apps: These are combinations of native apps and web apps. They run on devices, can work offline, and are written using web technologies like HTML5 and CSS.


There are a few basic differences that set these apart:

  • Native apps are tied to a single platform, while mobile web apps work across platforms.
  • Native apps are written using platform SDKs, while mobile web apps are written with web technologies like HTML, CSS, ASP.NET, Java, and PHP.
  • A native app requires installation; a mobile web app does not.
  • Native apps are updated through the Play Store or App Store, while mobile web apps are updated centrally on the server.
  • Many native apps don't require an Internet connection, but for mobile web apps it's a must.
  • Native apps work faster than comparable mobile web apps.
  • Native apps are installed from app stores such as the Google Play Store or the App Store, whereas mobile web apps are websites and are only accessible through the Internet.

Significance of Mobile Application Testing


Testing applications on mobile devices is more challenging than testing web apps on the desktop due to:

  • A wide range of mobile devices with different screen sizes and hardware configurations, such as hard keypads, virtual keypads (touch screens), and trackballs.
  • A wide variety of device manufacturers, such as HTC, Samsung, Apple, and Nokia.
  • Different mobile operating systems, such as Android, Symbian, Windows, BlackBerry, and iOS.
  • Different versions of each operating system, such as iOS 5.x and 6.x, or BB 5.x and 6.x.
  • Different mobile network technologies, such as GSM and CDMA.
  • Frequent updates (e.g. Android 4.2, 4.3, 4.4; iOS 5.x, 6.x) - with each update a new testing cycle is recommended to make sure no application functionality is impacted. One way to cover the resulting device/OS matrix systematically is sketched below.
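
A parametrized test suite can enumerate device/OS combinations so that each pair runs as its own test case. Here is a minimal, hypothetical pytest sketch; the device names, OS versions, and the check_login helper are all assumptions for illustration:

    import pytest

    DEVICES = ["Samsung Galaxy S5", "HTC One", "Apple iPhone 6"]  # hypothetical
    OS_VERSIONS = ["Android 4.4", "iOS 6.1"]                      # hypothetical

    def check_login(device, os_version):
        # Placeholder for a real driver-based check against the app.
        return True

    @pytest.mark.parametrize("device", DEVICES)
    @pytest.mark.parametrize("os_version", OS_VERSIONS)
    def test_login_across_matrix(device, os_version):
        # Each (device, OS) pair becomes its own test case.
        assert check_login(device, os_version)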



Types of Mobile App Testing:

To address all the technical aspects above, the following types of testing are performed on mobile applications.

Usability testing – To make sure that the mobile app is easy to use and delivers a satisfactory user experience to customers.

Compatibility testing – Testing the application on various mobile devices, browsers, screen sizes, and OS versions, according to the requirements.

Interface testing – Testing menu options, buttons, bookmarks, history, settings, and the navigation flow of the application.

Services testing – Testing the services of the application online and offline.

Low-level resource testing – Testing memory usage, auto-deletion of temporary files, and local database growth issues.

Performance testing – Testing the performance of the application when changing the connection from 2G or 3G to Wi-Fi, while sharing documents, checking battery consumption, etc.

Operational testing – Testing backups and the recovery plan in case the battery goes down, or data is lost while upgrading the application from the store.

Installation tests – Validating the application by installing and uninstalling it on devices.

Security testing – Testing the application to validate that it protects data as intended.
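
Many of these checks can be automated with a driver such as Appium. Below is a minimal sketch using the Appium Python client against a locally running Appium server; the capabilities, app path, and element ID are hypothetical:

    from appium import webdriver

    # Hypothetical capabilities for an Android emulator and app build.
    caps = {
        "platformName": "Android",
        "deviceName": "emulator-5554",
        "app": "/path/to/app-under-test.apk",
    }

    # Assumes an Appium server is running locally on the default port.
    driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)
    try:
        login_button = driver.find_element_by_accessibility_id("login")
        login_button.click()  # a simple interface/usability check
    finally:
        driver.quit()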


Test Cases for Testing a Mobile App

In addition to functionality-based test cases, mobile application testing requires special test cases that cover the following scenarios.

Battery usage – It's necessary to keep track of battery consumption while running the application on mobile devices (see the adb sketch after these scenarios).

Speed of the application – The response time on different devices, with different memory parameters, with different network types, etc.

Data requirements – For installation, and to verify that a user with a limited data plan will be able to download it.

Memory requirements – Again, to download, install, and run the application.

Functionality of the application – Making sure the application does not crash due to network failure or anything else.
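
On Android, per-app battery statistics can be pulled from a connected device with adb, assuming adb is on the PATH and a device or emulator is attached. The sketch below shells out to adb; the package name is a hypothetical placeholder:

    import subprocess

    PACKAGE = "com.example.appundertest"  # hypothetical package name

    # 'dumpsys batterystats <package>' reports battery usage for one app.
    result = subprocess.run(
        ["adb", "shell", "dumpsys", "batterystats", PACKAGE],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout[:500])  # inspect the first part of the report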

The Oniyosys Mobile Testing Practice comprises a unique combination of skilled software engineering and testing teams with proven expertise in testing tools and methodologies, offering a wide range of testing solutions. We offer our services across all major mobile devices, platforms, domains, and operating systems.


Monday, 15 May 2017

Oniyosys Localization Testing: For Better-Optimized, Market-Specific Software



At Oniyosys, we are dedicated to our commitment to performing all the testing needed to improve software lifecycles. Localization testing requires professional knowledge and careful control of the IT environment: clean machines, workstations, and servers with local operating systems, local default code pages, and regional settings within a controlled system configuration are only a few of the requirements. Moreover, the knowledge and experience gathered from testing one localized version can provide ready solutions for other versions and locales as well.


What is Localization Testing?


Localization testing is a software testing technique in which the product is checked to determine whether it behaves according to the local culture, conventions, and settings. In other words, it is the process of verifying the customization of a software application for the targeted language and country.

The major areas affected by localization testing are content and UI. It is the process of testing a globalized application whose UI, default language, currency, date and time formats, and documentation have been designed with the targeted country or region in mind. It ensures that the application is sufficiently optimized for use in that particular country.

Example:

1. If the project is designed for the state of Karnataka in India, the project should be in the Kannada language, a Kannada or relevant regional virtual keyboard should be present, etc.

2. If the project is designed for the UK, the time format should follow UK standard time, and the language and currency formats should follow UK standards.
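
To illustrate locale-aware formatting, here is a small sketch using the third-party Babel library (which must be installed separately); the locale codes and values are just examples:

    from datetime import datetime
    from babel.dates import format_datetime
    from babel.numbers import format_currency

    now = datetime(2017, 5, 15, 14, 30)

    # UK conventions: day-first dates and GBP currency.
    print(format_datetime(now, locale="en_GB"))              # e.g. 15 May 2017, 14:30:00
    print(format_currency(1099.99, "GBP", locale="en_GB"))   # £1,099.99

    # Kannada (Karnataka, India) conventions.
    print(format_datetime(now, locale="kn_IN"))
    print(format_currency(1099.99, "INR", locale="kn_IN"))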

Why to do Localization Testing?


The purpose of localization testing is to check the appropriate linguistic and cultural aspects for a particular locale. It includes changes in the user interface, or even the initial settings, according to the requirements. In this type of testing, many different testers will repeat the same functions, verifying things like typographical errors, cultural appropriateness of the UI, linguistic errors, etc. It is also called "L10N", because there are 10 characters between the L and the N in the word "localization".


Best practices for Localization testing:



  • Hire a localization firm with expertise in i18n (internationalization) engineering.
  • Make sure your localization testing strategy allows more time for double-byte languages.
  • Ensure that you properly internationalize your code for the DBCS (double-byte character set) before extracting any text to send for translation.


Sample Test Cases for Localization Testing


S.No. Test Case Description
1. Glossaries are available for reference and checking.
2. Time and date are properly formatted for the target region.
3. Phone number formats are proper for the target region.
4. Currency is correct for the target region.
5. The license and rules comply with the current website (region).
6. Text content layout on the pages is error-free, font-independent, and properly line-aligned.
7. Special characters, hyperlinks, and hot-key functionality work correctly.
8. Validation messages for input fields display correctly.
9. The generated build includes all the necessary files.
10. The localized screen has the same type of elements and numbers as the source product.
11. The localized user interface of the software or web application matches the source user interface in the target operating systems and user environments.


Benefits of Localization Testing


Following are the benefits of localization testing:


  • Overall testing cost reduction
  • Overall support cost reduction
  • Reduced testing time
  • Greater flexibility and scalability



Localization Testing Challenges:


Following are the challenges of localization testing:


  • It requires a domain expert
  • Hiring a local translator often makes the process expensive
  • Storage of DBCS characters differs across countries
  • Testers may face scheduling challenges


At Oniyosys, we conduct localization testing to ensure that your interactive project is grammatically correct in a variety of languages and technically well adapted to the target market where it will be used and sold. This requires paying attention to the correct versions of the operating system, language, and regional settings.

Wednesday, 3 May 2017

Oniyosys Agile Testing: efficient software testing services that deliver high-quality, stable software



In the world of software development, the term agile typically refers to any approach to project management that strives to unite teams around the principles of collaboration, flexibility, simplicity, transparency, and responsiveness to feedback throughout the entire process of developing a new program or product. And agile testing generally means the practice of testing software for bugs or performance issues within the context of an agile workflow.


Testing using the Agile methodology is the buzzword in today's industry, as it yields quick and reliable testing results. Unlike the waterfall method, Agile testing can start at the beginning of the project, with continuous integration between development and testing. Agile testing is not sequential (in the sense of being executed only after the coding phase) but continuous.

The Oniyosys Agile team works as a single team towards the common objective of achieving quality. Agile testing uses shorter time frames called iterations (say, from one to four weeks). The methodology is also called a release- or delivery-driven approach, since it gives a better prediction of workable product in a short period of time.

Test Plan for Agile


Unlike the waterfall model, in an Agile model the test plan is written and updated for every release. The Agile test plan covers the types of testing done in that iteration, along with test data requirements, infrastructure, test environments, and test results.


A typical Agile test plan includes:


1) Testing Scope

2) New functionalities which are being tested

3) Level or Types of testing based on the features complexity

4) Load and Performance Testing

5) Infrastructure Consideration

6) Mitigation or Risks Plan

7) Resourcing

8) Deliverables and Milestones


Agile Testing Strategies


The Agile testing life cycle spans four stages:

(a) Iteration 0

During the first stage, iteration 0, you perform initial setup tasks. These include selecting people for testing, installing testing tools, scheduling resources (e.g. a usability testing lab), etc. The following steps are to be achieved in iteration 0:

a) Establish a business case for the project

b) Establish the boundary conditions and the project scope

c) Outline the key requirements and use cases that will drive the design trade-offs

d) Outline one or more candidate architectures

e) Assess the risks

f) Estimate costs and prepare a preliminary project plan

(b) Construction Iterations

The second phase of testing is construction iterations, and the majority of testing occurs during this phase. This phase is observed as a set of iterations to build an increment of the solution. In order to do that, within each iteration the team implements a hybrid of practices from XP, Scrum, Agile modelling, Agile data, and so on.

In a construction iteration, the Agile team follows the prioritized requirements practice: with each iteration, they take the most essential requirements remaining from the work item stack and implement them.

Construction iteration testing is divided into two parts: confirmatory testing and investigative testing. Confirmatory testing concentrates on verifying that the system fulfils the intent of the stakeholders as described to the team to date, and is performed by the team. Investigative testing finds the problems that confirmatory testing has skipped or ignored; the tester records potential problems in the form of defect stories. Investigative testing covers common concerns such as integration testing, load/stress testing, and security testing.

Confirmatory testing, in turn, has two aspects: developer testing and Agile acceptance testing. Both are automated to enable continuous regression testing throughout the lifecycle. Confirmatory testing is the Agile equivalent of testing to the specification.

Agile acceptance testing is a mixture of traditional functional testing and traditional acceptance testing, since the development team and the stakeholders do it together. Developer testing is a mix of traditional unit testing and traditional service integration testing; it verifies both the application code and the database schema.

(c) Release End Game or Transition Phase

The goal of the “release end game”, or transition phase, is to deploy your system successfully into production. The activities included in this phase are training the end users, support people, and operational people. This phase also includes marketing the product release, backup and restoration, and finalizing the system and user documentation.

The final testing stage includes full system testing and acceptance testing. To finish your final testing stage without any obstacles, you have to test the product all the more rigorously while it is in construction iterations. During the end game, testers work on the remaining defect stories.

(d) Production

After the release stage, the product moves to the production stage.


 The Agile Testing Quadrants


The Agile testing quadrants separate the whole process into four quadrants and help explain how Agile testing is performed.

a) Agile Quadrant I – Internal code quality is the main concern in this quadrant. It consists of test cases that are technology-driven and implemented to support the team (a minimal example follows the list). It includes:

1. Unit Tests

2. Component Tests
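
As a minimal illustration of a Quadrant I test, here is a hypothetical pytest-style unit test for a simple cart function; the function and its behaviour are invented for the example:

    # cart.py - hypothetical production code
    def add_item(cart, item, price):
        if price < 0:
            raise ValueError("price must be non-negative")
        cart[item] = cart.get(item, 0) + price
        return cart

    # test_cart.py - a technology-facing, team-supporting unit test
    import pytest

    def test_add_item_accumulates_price():
        cart = add_item({}, "book", 10)
        assert add_item(cart, "book", 5) == {"book": 15}

    def test_negative_price_rejected():
        with pytest.raises(ValueError):
            add_item({}, "book", -1)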

b) Agile Quadrant II – This quadrant consists of test cases that are business-driven and implemented to support the team. It focuses on the requirements. The kinds of test performed in this phase are:

1. Testing of examples of possible scenarios and workflows

2. Testing of User experience such as prototypes

3. Pair testing

c) Agile Quadrant III – This quadrant provides feedback to quadrants one and two. The test cases can be used as the basis for performing automation testing. In this quadrant, many rounds of iteration reviews are carried out, which builds confidence in the product. The kinds of testing done in this quadrant are:

1. Usability Testing

2. Exploratory Testing

3. Pair testing with customers

4. Collaborative testing

5. User acceptance testing


d) Agile Quadrant IV – This quadrant focuses on the non-functional requirements such as performance, security, and stability. With the help of this quadrant, the application is made to deliver the non-functional qualities and expected value. Tests include:

1. Non-functional tests such as stress and performance testing

2. Security testing with respect to authentication and hacking

3. Infrastructure testing

4. Data migration testing

5. Scalability testing

6. Load testing



Oniyosys Agile Testing Methodology and Approach



We understand the QA challenges that can arise when implementing testing in an Agile environment: communication on larger-scale Agile projects with globally distributed teams; incorporating risk planning and avoidance; accounting for management's reduced control over time and budget; balancing flexibility against planning; and not getting side-tracked by speed of delivery over quality software.


Using a collaborative, network-based approach, Oniyosys defines clear, shared goals and objectives across all teams, both internally and client-side, for improved velocity, quality software, and customer satisfaction, resulting in stakeholder buy-in for metrics that matter.

Fully transparent updates and reports are shared with a strong focus on immediate feedback, analysis and action.


Our metrics provide:

  • Information used to target improvements, minimizing mistakes and rework
  • Purposeful evaluation for actionable takeaways, helping our clients utilize resources effectively
  • Insights for process optimization, predicting possible problems and enabling clients to fix defects immediately rather than later, reducing overall costs




Oniyosys DevOps Methodology Testing: helping developers and small teams work smarter



DevOps is the offspring of agile software development – born from the need to keep up with the increased software velocity and throughput agile methods have achieved. Advancements in agile culture and methods over the last decade exposed the need for a more holistic approach to the end-to-end software delivery lifecycle.

What is DevOps?

DevOps – a combination of Development and Operations – is a software development methodology that seeks to integrate all the software development functions, from development to operations, within the same cycle.

This calls for a higher level of coordination among the various stakeholders in the software development process (namely development, QA, and operations).

So an ideal DevOps cycle would run from:

  • The developer writing code
  • Building and deploying binaries in a QA environment
  • Executing test cases, and finally
  • Deploying to production in one smooth, integrated flow

Obviously, this approach places great emphasis on automation of the build, deployment, and testing. The use of continuous integration (CI) tools and automation testing tools becomes the norm in a DevOps cycle; a minimal sketch of such a gated flow follows.
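
To make the flow concrete, here is a small, hypothetical sketch of a gated pipeline script; the build, test, and deploy commands are placeholders for whatever a real project actually uses:

    import subprocess
    import sys

    # Hypothetical pipeline stages: each command is a placeholder.
    STAGES = [
        ("build", ["make", "build"]),
        ("test", ["pytest", "-q"]),
        ("deploy", ["./deploy.sh", "qa"]),
    ]

    for name, cmd in STAGES:
        print(f"--- running stage: {name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # A failing stage gates everything after it.
            sys.exit(f"stage '{name}' failed; stopping the pipeline")
    print("all stages passed")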


What Is the Goal of DevOps?

Improve collaboration between all stakeholders from planning through delivery, and automate the delivery process, in order to:

  • Improve deployment frequency
  • Achieve faster time to market
  • Lower the failure rate of new releases
  • Shorten lead time between fixes
  • Improve mean time to recovery

According to the 2015 State of DevOps Report, “high-performing IT organizations deploy 30x more frequently with 200x shorter lead times; they have 60x fewer failures and recover 168x faster.”

A Common Pre-DevOps Scenario

The software team meets prior to starting a new software project. The team includes developers, testers, operations, and support professionals. This team plans how to create working software that is ready for deployment.


Each day, new code is deployed as the developers complete it. Automated testing ensures the code is ready to be deployed. After the code passes all the automated tests, it is deployed to a small number of users. The new code is monitored for a short period to ensure there are no unforeseen problems and that it is stable. Once the monitoring shows that it is stable, the new code is rolled out to the remaining users. Many, if not all, of the steps after planning and development happen with no human intervention.





What Are the Phases of DevOps Maturity?

There are several phases to DevOps maturity; here are a few of the key phases you need to know.

Waterfall Development
Before continuous integration, development teams would write code for three to four months and then merge it in order to release it. The different versions of the code would diverge so much, with so many changes, that the actual integration step could take months. This process was very unproductive.

Continuous Integration
Continuous integration is the practice of quickly integrating newly developed code with the main body of code that is to be released. Continuous integration saves a lot of time when the team is ready to release the code.

DevOps didn't coin this term. Continuous integration is an agile engineering practice originating from the Extreme Programming methodology. The term has been around for a while, but DevOps has adopted it because automation is required to execute continuous integration successfully. Continuous integration is often the first step down the path toward DevOps maturity.

From a DevOps perspective, the continuous integration process involves checking your code in, compiling it into usable (often binary executable) form, and running some basic validation testing.

Continuous Delivery
Continuous delivery is an extension of continuous integration, and sits on top of it. When executing continuous delivery, you add additional automation and testing so that you don't just merge the code with the main code line frequently, but get the code nearly ready to deploy with almost no human intervention. It's the practice of keeping the code base continuously in a ready-to-deploy state.

Continuous Deployment
Continuous deployment, not to be confused with continuous delivery, is the most advanced evolution of continuous delivery, sometimes described as DevOps nirvana: the practice of deploying all the way into production without any human intervention.

At Oniyosys, teams utilizing continuous delivery don't deploy untested code; instead, newly created code runs through automated testing before it gets pushed out to production. The code release typically goes to only a small percentage of users at first, and an automated feedback loop monitors quality and usage before the code is propagated further.



Monday, 24 April 2017

Oniyosys Cloud Testing: providing testing and quality assurance services for projects


Cloud computing is an Internet-based platform that renders various computing services, such as hardware, software, and other computer-related services, remotely. Cloud computing is opening up new vistas of opportunity for testing. Cloud testing is the process of testing the performance, scalability, and reliability of web applications in a cloud computing environment.

Type of Testing in Cloud

The whole of cloud testing is segmented into four main categories:

1. Testing of the whole cloud: The cloud is viewed as a single entity and testing is carried out based on its features. Cloud and SaaS vendors, as well as end users, are interested in carrying out this type of testing.

2. Testing within a cloud: Testing is carried out by checking each of the cloud's internal features. Only cloud vendors can perform this type of testing.

3. Testing across clouds: Testing is carried out on different types of clouds, such as private, public, and hybrid clouds.

4. SaaS testing in a cloud: Functional and non-functional testing is carried out on the basis of application requirements.

Cloud testing focuses on core components such as:

Application: It covers testing of functions, end-to-end business workflows, data security, browser compatibility, etc.

Network: It includes testing various network bandwidths, protocols and successful transfer of data through networks.

Infrastructure: It covers disaster recovery testing, backups, secure connections, and storage policies. The infrastructure also needs to be validated for regulatory compliance.


Other Testing types in Cloud includes


  • Performance
  • Availability
  • Compliance
  • Security
  • Scalability
  • Multi-tenancy
  • Live upgrade testing





Tasks Performed in Cloud Testing:

SaaS or cloud-oriented testing: This type of testing is usually performed by cloud or SaaS vendors. The primary objective is to assure the quality of the service functions offered in a cloud or SaaS program. Testing performed in this environment includes integration, functional, security, unit, system function validation, and regression testing, as well as performance and scalability evaluation.

Online application testing on a cloud: Online application vendors perform this testing, which checks the performance and functionality of cloud-based services. When applications are connected to legacy systems, the quality of the connectivity between the legacy system and the application under test on the cloud is validated.

Cloud-based application testing across clouds: This type of testing is performed to check the quality of a cloud-based application across different clouds.

Test cases for Cloud Testing



Test Scenarios and Test Cases

Performance Testing
  • Failure due to one user's action on the cloud should not affect other users' performance.
  • Manual or automatic scaling should not cause any disruption.
  • The performance of the application should remain the same on all types of devices.
  • Overbooking at the supplier's end should not hamper the application's performance.

Security Testing
  • Only authorized customers should get access to data.
  • Data must be well encrypted.
  • Data must be deleted completely if it is no longer in use by a client.
  • Data should not be accessible where encryption is insufficient.
  • Administrators on the supplier's end should not be able to access customers' data.
  • Check various security settings such as firewall, VPN, anti-virus, etc.

Functional Testing
  • Valid input should give the expected results.
  • The service should integrate properly with other applications.
  • The system should display the customer's account type when they successfully log in to the cloud.
  • When the customer chooses to switch to another service, the running service should close automatically.

Interoperability & Compatibility Testing
  • Validate the compatibility requirements of the application and system under test.
  • Check browser compatibility in the cloud environment.
  • Identify defects that might arise while connecting to the cloud.
  • Any incomplete data should not be transferred to the cloud.
  • Verify that the application works across different cloud platforms.
  • Test the application in an in-house environment and then deploy it to the cloud environment.

Network Testing
  • Test the protocols responsible for cloud connectivity.
  • Check for data integrity while transferring data.
  • Check for proper network connectivity.
  • Check whether packets are being dropped by a firewall on either side.

Load and Stress Testing (see the sketch below)
  • Check the services when multiple users access the cloud services.
  • Identify the defects responsible for hardware or environment failure.
  • Check whether the system fails under a specific increasing load.
  • Check how the system changes over time under a certain load.
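
As a minimal illustration of a load test, the sketch below fires concurrent requests at a hypothetical endpoint using only the Python standard library and reports simple latency statistics:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "https://example.com/health"  # hypothetical endpoint
    CONCURRENT_USERS = 20

    def one_request(_):
        start = time.time()
        with urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.time() - start

    # Simulate CONCURRENT_USERS users hitting the service at once.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        latencies = list(pool.map(one_request, range(CONCURRENT_USERS)))

    print(f"avg latency: {sum(latencies) / len(latencies):.3f}s, "
          f"max latency: {max(latencies):.3f}s")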




           
Best Practices:


1. Testing is a periodic activity and requires new environments to be set up for each project. Test labs in companies typically sit idle for long periods, consuming capital, power, and space. Approximately 50% to 70% of the technology infrastructure earmarked for testing is underutilized, according to both anecdotal and published reports.

2. Testing is considered an important but non business-critical activity. Moving testing to the cloud is seen as a safe bet because it doesn’t include sensitive corporate data and has minimal impact on the organization’s business-as-usual activities.

3. Applications are increasingly becoming dynamic, complex, distributed, and component-based, creating a multiplicity of new challenges for testing teams. For instance, mobile and web applications must be tested on multiple operating systems and updates, multiple browser platforms and versions, different types of hardware, and with a large number of concurrent users to understand their performance in real time. The conventional approach of manually creating in-house testing environments that fully mirror these complexities and multiplicities consumes huge capital and resources.


At Oniyosys, we provide an end-to-end solution that transforms the way cloud testing is done and can help an organization boost its competitiveness by reducing the cost of testing without negatively impacting mission-critical production applications.