Wednesday, 5 April 2017

Oniyosys Big Data Testing: Serving Perfect Data Analytics Solutions


Big data is a collection of large datasets that cannot be processed using traditional computing techniques. Testing these datasets requires various tools, techniques, and frameworks. Big data relates to data creation, storage, retrieval, and analysis that is remarkable in terms of volume, variety, and velocity. The Oniyosys Big Data Testing Services Solution offers end-to-end testing, from data acquisition testing to data analytics testing.


Big Data Testing Strategy

Testing a Big Data application is more a verification of its data processing than a test of the individual features of the software product. When it comes to Big Data testing, performance and functional testing are key.

In Big Data testing, QA engineers verify the successful processing of terabytes of data using a commodity cluster and other supporting components. It demands a high level of testing skill because the processing is very fast. Processing may be of three types:

1. Batch
2. Real-time
3. Interactive

Along with this, data quality is also an important factor in Big Data testing. Before testing the application, it is necessary to check the quality of the data, and this should be considered part of database testing. It involves checking various characteristics such as conformity, accuracy, duplication, consistency, validity, and data completeness; a small data-quality sketch follows.
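
As an illustration of these checks, here is a minimal, hypothetical sketch using pandas (an assumption on our part; the article does not prescribe a tool) that inspects a sample of records for duplication, completeness, and validity before the data enters the pipeline. Column names and rules are placeholders.

```python
# A minimal, hypothetical data-quality sketch using pandas.
# The column names and validation rules are illustrative assumptions.
import pandas as pd

records = pd.DataFrame({
    "order_id": [101, 102, 102, 104],
    "amount":   [25.0, None, 13.5, 7.2],
    "country":  ["US", "IN", "IN", "XX"],
})

# Duplication: no repeated primary keys.
duplicates = records[records["order_id"].duplicated()]

# Completeness: no missing values in mandatory columns.
incomplete = records[records["amount"].isna()]

# Validity / conformity: values must come from an agreed domain.
valid_countries = {"US", "IN", "GB"}
invalid = records[~records["country"].isin(valid_countries)]

print(f"duplicates={len(duplicates)}, incomplete={len(incomplete)}, invalid={len(invalid)}")
```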


Testing Steps in verifying Big Data Applications

The following steps give a high-level overview of the phases in testing Big Data applications.


Step 1: Data Staging Validation

  • The first step of Big Data testing, also referred to as the pre-Hadoop stage, involves process validation. 
  • Data from various sources such as RDBMS, weblogs, and social media should be validated to make sure that correct data is pulled into the system.
  • Compare the source data with the data pushed into the Hadoop system to make sure they match. 
  • Verify that the right data is extracted and loaded into the correct HDFS location. 
  • Tools such as Talend and Datameer can be used for data staging validation; a minimal count-and-checksum reconciliation is sketched below.
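
The sketch below shows one simple form of source-to-HDFS reconciliation: comparing record counts and a per-column checksum between the source extract and the data staged in Hadoop. It assumes PySpark and hypothetical connection strings, table names, and columns; tools such as Talend or Datameer provide equivalent checks out of the box.

```python
# A minimal source-vs-HDFS reconciliation sketch using PySpark.
# Connection URL, paths, table and column names are hypothetical assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("staging-validation").getOrCreate()

source = spark.read.jdbc(url="jdbc:mysql://source-db/sales",      # assumed source RDBMS
                         table="orders",
                         properties={"user": "qa", "password": "***"})
staged = spark.read.parquet("hdfs:///staging/orders")              # assumed HDFS location

# 1. Record counts must match.
assert source.count() == staged.count(), "row counts differ"

# 2. A simple column checksum (sum of a numeric column) must match.
#    Exact equality is assumed here; a tolerance may be needed for floats.
src_sum = source.agg(F.sum("amount")).collect()[0][0]
stg_sum = staged.agg(F.sum("amount")).collect()[0][0]
assert src_sum == stg_sum, "amount checksums differ"
```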


Step 2: "MapReduce" Validation

The second step is validation of "MapReduce". In this stage, the tester verifies the business logic on a single node and then validates it after running against multiple nodes, ensuring that the -

  • MapReduce process works correctly 
  • Data aggregation or segregation rules are implemented on the data
  • Key-value pairs are generated
  • Data is validated after the MapReduce process (a small validation sketch follows)
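
As a concrete illustration, the sketch below validates a pure-Python word-count map and reduce against a hand-computed expected result; the same kind of assertion would be run against the single-node and multi-node outputs of the real job. The word-count logic and test data are illustrative assumptions, not the production job.

```python
# A minimal map/reduce validation sketch in pure Python.
# The word-count logic and expected output are illustrative assumptions.
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    """Emit one (key, value) pair per word, as the mapper would."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Sort by key (shuffle stage) and aggregate values per key, as the reducer would."""
    pairs = sorted(pairs, key=itemgetter(0))
    return {key: sum(v for _, v in group)
            for key, group in groupby(pairs, key=itemgetter(0))}

sample_input = ["big data testing", "Big Data"]
expected = {"big": 2, "data": 2, "testing": 1}

actual = reduce_phase(map_phase(sample_input))
assert actual == expected, f"MapReduce output mismatch: {actual}"
```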


Step 3: Output Validation Phase

The third and final stage of Big Data testing is the output validation process. The output data files are generated and ready to be moved to an EDW (Enterprise Data Warehouse) or any other system, based on the requirement.

Activities in the third stage include:

  • Checking that the transformation rules are correctly applied
  • Checking data integrity and successful data load into the target system
  • Checking that there is no data corruption by comparing the target data with the HDFS file system data (a comparison sketch follows)
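
A minimal sketch of such a target-versus-HDFS comparison is shown below. It assumes the HDFS export and the warehouse table can both be read into pandas, and that a row count plus a key-level anti-join are enough to surface corruption; file names, the connection string, and column names are hypothetical.

```python
# A minimal output-validation sketch: compare the HDFS export with the
# loaded EDW table on a business key. Names and locations are hypothetical.
import pandas as pd
import sqlalchemy

hdfs_export = pd.read_parquet("/mnt/hdfs/output/orders_agg.parquet")        # assumed export
engine = sqlalchemy.create_engine("postgresql://qa@edw-host/warehouse")     # assumed EDW
edw_table = pd.read_sql_table("orders_agg", engine)

# Row counts should match after the load.
assert len(hdfs_export) == len(edw_table), "row counts differ after load"

# Rows present in the export but missing (or altered) in the EDW indicate corruption.
merged = hdfs_export.merge(edw_table, on=["order_id", "total_amount"],
                           how="left", indicator=True)
missing = merged[merged["_merge"] == "left_only"]
assert missing.empty, f"{len(missing)} rows failed to load correctly"
```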

Architecture Testing

Hadoop processes very large volumes of data and is highly resource intensive. Hence, architectural testing is crucial to the success of your Big Data project. A poorly or improperly designed system may lead to performance degradation, and the system could fail to meet requirements. At a minimum, performance and failover testing should be done in a Hadoop environment.

Tools used in Big Data Scenarios

NoSQL databases: CouchDB, MongoDB, Cassandra, Redis, ZooKeeper, HBase

MapReduce: Hadoop, Hive, Pig, Cascading, Oozie, Kafka, S4, MapR, Flume

Storage: S3, HDFS (Hadoop Distributed File System)

Servers: Elastic, Heroku, Google App Engine, EC2

Processing: R, Yahoo! Pipes, Mechanical Turk, BigSheets, Datameer


Challenges In Big Data Testing:

1. Huge Volume and Heterogeneity

Testing a huge volume of data is the biggest challenge in itself. A decade ago, a data pool of 10 million records was considered massive. Today, businesses work with a few petabytes or exabytes of data, extracted from various online and offline sources, to conduct their daily business. Testers are required to audit such voluminous data to ensure that it is fit for business purposes. It is difficult to store and prepare test cases for such a large, inconsistent dataset, and full-volume testing is impossible at this scale.

2. Understanding the Data

For the Big Data testing strategy to be effective, testers need to continuously monitor and validate the four basic characteristics (4Vs) of data – volume, variety, velocity, and value. Understanding the data and its impact on the business is the real challenge faced by any Big Data tester. It is not easy to scope the testing effort and strategy without proper knowledge of the nature of the available data.

3. Dealing with Sentiments and Emotions

In a Big Data system, unstructured data drawn from sources such as tweets, text documents, and social media posts supplements the data feed. The biggest challenge testers face while dealing with unstructured data is the sentiment attached to it. For example, consumers tweet and discuss a new product launched in the market; testers need to capture their sentiments and transform them into insights for decision making and further business analysis.

4. Lack of Technical Expertise and Coordination

Technology is evolving, and everyone is struggling to understand the algorithms for processing Big Data. Big Data testers need to understand the components of the Big Data ecosystem thoroughly. Today, testers understand that they have to think beyond the regular parameters of automated and manual testing. Big Data, with its unexpected formats, can cause problems that automated test cases fail to catch. Creating automated test cases for such a Big Data pool requires expertise and coordination between team members. The testing team should coordinate with the development and marketing teams to understand data extraction from different sources, data filtering, and pre- and post-processing algorithms. Because fully automated testing tools for Big Data validation are still limited, the tester inevitably has to possess the required skill set and leverage Big Data technologies such as Hadoop. It calls for a remarkable mindset shift for testing teams within organizations as well as individual testers. Organizations also need to be ready to invest in Big Data-specific training programs and in developing Big Data test automation solutions.

At Oniyosys, we conduct a detailed study of current and new data requirements and apply appropriate data acquisition, data migration, and data integration testing strategies to ensure seamless integration for your Big Data testing.

Wednesday, 29 March 2017

IoT Testing At Oniyosys: Strengthening Multiple Dimensions Of Services


Kevin Ashton, co-founder of the Auto-ID Center at MIT, which created a global standard system for RFID and other sensors, coined the phrase “Internet of Things” in 1999. IoT encompasses a world where living and inanimate things are connected wirelessly and serve the purpose of machine-to-machine communication.

In the development of applications involving the Internet of Things (IoT), the IoT gadget, the device application, and the communication module play a vital role in the performance and behavior of the IoT service. Poor design may hamper the working of the application and affect the end-user experience. Oniyosys has developed a comprehensive QA strategy to handle the unique requirements and challenges associated with validating IoT applications.

In today’s article we will discuss why IoT matters, important domains of IoT, sample IoT test cases, challenges that the QA team can face during IoT testing, and solutions and best practices.


Why IoT?


• Efficient Machine to Machine (M2M) Communication
• Development of multiple Protocols (IPv6, MQTT, XMPP (D2S), DDS (D2D) etc.)
• Development and Integration of Enabling Technologies (Nano-electronics, embedded systems, software and cloud computing, etc.)
• Supports Smart Living concept

Important Domains of IoT:


• Smart Cities
• Smart Environment
• Smart Water
• Smart Metering
• Smart Safety measures
• Smart Retail

Sample IoT Test Cases:


• Verify that the IoT gadget is able to register to the network and that a data connection is made successfully.

• Set a proper time delay after the connection for the first gadget is established. Verify that another IoT gadget is able to register to the network and that a data connection is made successfully.

• Verify that all the gadgets involved in the IoT testing are able to register to the network.

• Verify that all the gadgets involved in the IoT testing are able to send SMS messages over the network.

• Verify that only gadgets with proper authentication are able to connect to the network.

• Verify that the gadget disconnects quickly from the network when the user removes the (U)SIM.

• Verify that the gadget is able to enable or disable the network friendly mode (NFM) feature.

• Verify that the gadgets involved in the IoT are able to transmit large chunks of user data if required.

• Verify that the gadget transmits a keep-alive message once every half hour.

• Verify that if the SIM subscription is terminated, the gadget does not retry the service request, as per the NFM requirements.

• Verify that if the SIM subscription does not allow roaming, the gadget does not retry the service request, as per the NFM requirements.

• Verify that if the SIM subscription has GPRS service barred, the gadget does not retry the service request, as per the NFM requirements.

• Verify that if the maximum number of connections (as per the requirement) is reached, the IoT gadget stops attempting to connect to the network for a predefined duration.

• Verify that if the data volume exceeds the defined limit, the IoT gadget does not initiate any further data transfer for a predefined duration.

• Verify that the IoT gadget informs the network about its power status.

• Verify that the IoT gadget is able to transfer data in low-power mode.

• Verify that the IoT gadget transmits data to the IoT device application in encrypted form (see the sketch after this list).
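
Several of the cases above (network registration, keep-alive transmission, encrypted transfer) can be automated against a test broker. The sketch below uses the paho-mqtt Python client against a hypothetical TLS-enabled broker; the broker address, port, topic, credentials, and interval are assumptions made for illustration only.

```python
# A minimal IoT connectivity / keep-alive test sketch using paho-mqtt.
# Broker address, port, topic and credentials are hypothetical assumptions.
import time
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="qa-gadget-01")   # paho-mqtt 1.x style constructor
client.tls_set()                                 # encrypted transport (TLS)
client.username_pw_set("qa-device", "***")

client.connect("test-broker.example.com", 8883, keepalive=1800)  # keep-alive every 30 min
client.loop_start()

time.sleep(2)  # allow the network connection to come up
assert client.is_connected(), "gadget failed to register with the network"

# Publish a heartbeat and confirm the broker acknowledged it (QoS 1).
info = client.publish("devices/qa-gadget-01/heartbeat", payload="alive", qos=1)
info.wait_for_publish(timeout=10)
assert info.is_published(), "keep-alive message was not delivered"

client.loop_stop()
client.disconnect()
```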

Challenges That QA Team Can Face During IoT Testing:


• It is expensive to replicate the environment required for IoT testing, and doing so demands considerable effort.

The interrelated subsystems, sub-components, and services are owned by various groups and third-party units. If the tester is unable to access a single dependent sub-component, it can affect testing of the whole system.

• Obtaining the right test data across different systems requires substantial effort and coordination among multiple teams.

• The gadget available for testing might have inadequate capacity or might not be available at the right time.

• Sensor quality and accuracy – the device under test may not be of good quality or have the precision needed for testing.

• Compatibility Issues

• Complexity

• Connectivity issues

• Power problems

• Security/Privacy issues

• Safety Concerns
 

IoT Testing – Solutions And Best Practices:


• IoT services call for robust testing competencies to guarantee that the performance of the services meets the requirements and SLAs. By adopting effective best practices, users can successfully execute IoT testing.

• The QA team needs to concentrate on good testing approaches and practices to carry out the testing efficiently. Well-defined requirements, a comprehensive test plan, unit testing, integration testing, and effective communication form the basis of IoT testing. Impeccable programming tactics and practices ensure that the end result is a quality product.

• New platforms enable effective communication and efficient extraction of valid information from huge amounts of raw data, and provide the timing and systems framework needed to support real-time applications. The QA testing team can also make use of cutting-edge tools, consoles, viewers, and simulators to ensure successful execution of the project.

• The QA testing team also needs a sound understanding of the architecture, operating system, hardware, applications, protocols, and shortcomings of the hardware gadgets in order to design good test cases.

• Robust backend – if the mainstream functionality is embedded into a robust backend, the backend functionality can be tested using the usual testing methods, tools, and approaches.

At Oniyosys, our team expertise and efforts serve to make testing and validating IoT applications a simple and productive experience. The Oniyosys Test solution includes a combination of testing with actual devices, tools, and frameworks.

Friday, 24 March 2017

Performance & Stress Testing: For Delivering Responsive Future-Proof Systems


At Oniyosys, we get involved in performance testing right from the pre-deployment stage itself, which helps in the early resolution of issues, even for systems that are already live.

What is Performance Testing?


Performance Testing is the general name for tests that check how the system behaves and performs. Performance testing examines responsiveness, stability, scalability, reliability, speed and resource usage of your software and infrastructure. Different types of performance tests provide you with different data, as we will further detail.

Before Performance Testing, it’s important to determine your system’s business goals so you can tell if your system behaves satisfactorily or not according to your customers’ needs.

After running performance tests, you can analyze different KPIs, such as the number of virtual users, hits per second, errors per second, response time, latency and bytes per second (throughput), as well as the correlations between them. Through the reports, you can identify bottlenecks, bugs and errors, and decide what needs to be done.

When should you use Performance Testing?


Use performance testing when you want to check your website and app performance, as well as servers, databases, networks, and so on. If you work with the waterfall methodology, test at least each time you release a version. If you’re shifting left and going agile, you should test continuously.


What is Stress Testing?



Stress Testing checks the upper limits of your system by testing it under extreme loads. It examines how the system behaves under intense load and how it recovers when going back to normal usage, i.e., whether KPIs like throughput and response time return to their previous levels. In addition to the load testing KPIs, stress testing also examines memory leaks, slowness, security issues, and data corruption.

Stress Testing can be conducted through load testing tools, by defining a test case with a very high number of concurrent virtual users. If your stress test includes a sudden ramp-up in the number of virtual users, it is called a Spike Test. If you stress test for a long period of time to check the system’s sustainability over time with a slow ramp-up, it’s called a Soak Test.

When Should You Use Stress Testing?

Website stress tests and app stress tests are important before major events, such as Black Friday, ticket sales for a high-demand concert, or an election. But we recommend you stress test every once in a while so you know your system’s endurance capabilities. This ensures you’re always prepared for unexpected traffic spikes, and gives you more time and resources to fix your bottlenecks.

Here is an example of what a Spike Test configuration might look like in JMeter: the test adds 7,000 users at once and then adds 500 users every 30 seconds until reaching 10,000 users. After reaching 10,000 threads, all of them continue running and hitting the server together for 5 minutes.
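
For teams that prefer a code-driven load tool, roughly the same spike profile can be sketched with Locust (an illustrative alternative on our part; the original example uses a JMeter thread group). The target URL is a placeholder.

```python
# A sketch of the spike profile described above, written for Locust.
# (The original example uses JMeter; Locust is an illustrative alternative.)
from locust import HttpUser, task, constant, LoadTestShape

class SpikeUser(HttpUser):
    host = "https://app-under-test.example.com"   # placeholder target
    wait_time = constant(1)

    @task
    def home(self):
        self.client.get("/")

class SpikeShape(LoadTestShape):
    """7,000 users at once, +500 every 30 s up to 10,000, then hold for 5 minutes."""
    def tick(self):
        t = self.get_run_time()
        users = min(10_000, 7_000 + 500 * int(t // 30))
        if t < 180 + 300:          # 3-minute ramp plus 5-minute hold
            return (users, 7_000)  # spawn rate high enough for the initial spike
        return None                # stop the test
```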


We offer the following performance tests:


Load Test – where we test applications at the optimal level of their specifications.

Stress Test – here we test the system or application under extreme operating conditions by removing the resources that support it and observing how it behaves.

Ageing Test – this test gauges how an application performs after extended usage over a long period of time.

Throttle Test – here the application is tested across different bandwidths and within constraints such as CPU usage, memory, web traffic, and web processes.


At Oniyosys, our performance testing team will offer suggestions on how to improve existing applications and help identify the segments of software that need fine-tuning and fixing under both normal and extraordinary conditions.


Wednesday, 15 March 2017

Automation Testing: The Best Alternative To Manual Testing


Website test automation uses software and tools and is an ideal alternative to time-consuming manual testing. In order to successfully plan and execute test automation, you need an effective framework, a tested methodology, and suitable tools to reduce time and boost the quality of testing. Oniyosys has the expertise to create a software test automation process for applications across a number of domains. Our proficiency in developing test automation scripts for customized websites allows us to manage product complexities.



What is Automation Testing?



Manual testing is performed by a human sitting in front of a computer, carefully executing the test steps. Automation testing means using an automation tool to execute your test case suite.

The automation software can also enter test data into the system under test, compare expected and actual results, and generate detailed test reports. Successive development cycles require the same test suite to be executed repeatedly. Using a test automation tool, it is possible to record this test suite and replay it as required. Once the test suite is automated, no human intervention is required, which improves the ROI of test automation. The goal of automation is to reduce the number of test cases that must be run manually, not to eliminate manual testing altogether. A minimal sketch of such an automated check follows.
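
As a small illustration, here is a hedged sketch of an automated UI check using Selenium WebDriver with the Python bindings; the URL, locators, credentials, and expected page title are placeholder assumptions, not a real application.

```python
# A minimal automated UI test sketch with Selenium WebDriver (Python bindings).
# URL, locators and expected values are placeholder assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # requires a local Chrome / chromedriver setup
try:
    driver.get("https://app-under-test.example.com/login")  # system under test

    # Enter test data into the system under test.
    driver.find_element(By.NAME, "username").send_keys("qa_user")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Compare expected and actual results.
    expected_title = "Dashboard"
    assert expected_title in driver.title, f"unexpected page title: {driver.title}"
finally:
    driver.quit()
```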



Why Automation Testing?


  • Manual testing of all workflows, all fields, and all negative scenarios is time- and cost-consuming
  • It is difficult to test multilingual sites manually
  • Automation does not require human intervention; you can run automated tests unattended (overnight)
  • Automation increases the speed of test execution
  • Automation helps increase test coverage
  • Manual testing can become boring and hence error-prone



Which Test Cases to Automate?

Test cases to be automated can be selected using the following criteria to increase the automation ROI:

  • High Risk - Business Critical test cases
  • Test cases that are executed repeatedly
  • Test Cases that are very tedious or difficult to perform manually
  • Test Cases which are time consuming



The following category of test cases are not suitable for automation:

  • Test cases that are newly designed and not yet executed manually at least once
  • Test cases for which the requirements change frequently
  • Test cases that are executed on an ad-hoc basis



Simple Steps to follow in Automation Testing:


There are many helpful tools for writing automation scripts; before using them, it is better to identify the process that will be followed to automate the testing.

  • Identify areas within software to automate
  • Choose the appropriate tool for test automation
  • Write test scripts
  • Develop test suites
  • Execute test scripts
  • Build result reports
  • Find possible bugs or performance issues


Oniyosys is adept at building change-resistant, economical functional test automation cycles and effective regression testing capabilities, pre- and post-deployment, to suit all budgets and technical needs. Explore our test automation benefits: a maximum level of accuracy, fast turnaround and cost savings, team skill improvement, an interactive approach, and the use of advanced test automation tools.


Wednesday, 8 March 2017

Regression Testing - A crucial step for the success of application development and upgrades



Oniyosys conducts regression testing, where previously run tests are re-conducted to avoid the emergence of old/new software bugs or regressions that typically come back when major code or program modifications/maintenance are done.

Let’s talk about our regression testing for better understanding:



When any modification is made to the application, even a small change to the code, it can introduce unexpected issues. Along with the new changes, it becomes very important to test whether the existing functionality remains intact. This is achieved through regression testing.



Types of Regression testing techniques:


We have four types of regression testing techniques. They are as follows:


1) Corrective Regression Testing: Corrective regression testing can be used when there is no change in the specifications and test cases can be reused.


2) Progressive Regression Testing: Progressive regression testing is used when the modifications are done in the specifications and new test cases are designed.


3) Retest-All Strategy: The retest-all strategy is very tedious and time-consuming because it reuses all tests, which results in the execution of unnecessary test cases. When only a small modification or change has been made to the application, this strategy is not practical.


4) Selective Strategy: In the selective strategy, we use a subset of the existing test cases to cut down the retesting effort and cost. If any changes are made to program entities, e.g. functions or variables, then the test units that cover them must be rerun. The difficult part is finding the dependencies between a test case and the program entities it covers; a minimal selection sketch follows.
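
The sketch below illustrates the selection idea with a hand-written coverage map (in practice the map would come from a coverage tool or traceability matrix); the test names, entities, and change set are hypothetical.

```python
# A minimal selective regression sketch: rerun only the tests whose covered
# program entities overlap the entities changed in the latest commit.
# The coverage map and change set below are hypothetical.
coverage_map = {
    "test_login":          {"authenticate", "hash_password"},
    "test_checkout":       {"calculate_total", "apply_discount"},
    "test_profile_update": {"authenticate", "update_profile"},
}

changed_entities = {"authenticate"}   # e.g. functions touched by the latest fix

selected = sorted(test for test, entities in coverage_map.items()
                  if entities & changed_entities)
print(selected)   # ['test_login', 'test_profile_update']
```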



When to use it:


Regression testing is used when:


  • Any new feature is added
  • Any enhancement is done
  • Any bug is fixed
  • Any performance related issue is fixed


Advantages of Regression testing:

  • It helps us to make sure that any changes like bug fixes or any enhancements to the module or application have not impacted the existing tested code.
  • It ensures that bugs found and fixed earlier do not reappear.
  • Regression testing can be done by using the automation tools
  • It helps in improving the quality of the product.



Regression Testing Techniques

Software maintenance is an activity that includes enhancements, error corrections, optimization, and deletion of existing features. These modifications may cause the system to work incorrectly; therefore, regression testing becomes necessary. Regression testing can be carried out using the following techniques:

Retest All
This is one of the methods for regression testing in which all the tests in the existing test bucket or suite should be re-executed. This is very expensive as it requires huge time and resources.

Regression Test Selection
Instead of re-executing the entire test suite, it is better to select a part of the test suite to be run.

Test cases selected can be categorized as 1) Reusable Test Cases 2) Obsolete Test Cases.


Reusable Test cases can be used in succeeding regression cycles.

Obsolete Test Cases can't be used in succeeding cycles.


Prioritization of Test Cases

Prioritize the test cases depending on business impact and on critical and frequently used functionalities. Selecting test cases based on priority will greatly reduce the regression test suite; a small prioritization sketch follows.
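
Here is a small, hypothetical sketch of priority-based selection: each test case carries a business-impact score and a usage frequency, and only the highest-ranked cases make it into the regression suite. The scores, weights, and cut-off are illustrative assumptions.

```python
# A minimal test-case prioritization sketch. Scores, weights and the cut-off
# are hypothetical; in practice they come from business and usage analysis.
test_cases = [
    {"name": "test_payment_flow",  "business_impact": 5, "usage": 5},
    {"name": "test_profile_photo", "business_impact": 1, "usage": 2},
    {"name": "test_search",        "business_impact": 3, "usage": 5},
    {"name": "test_admin_export",  "business_impact": 4, "usage": 1},
]

# Rank by a simple weighted score, then keep the top N for the regression suite.
for tc in test_cases:
    tc["score"] = 2 * tc["business_impact"] + tc["usage"]

regression_suite = sorted(test_cases, key=lambda tc: tc["score"], reverse=True)[:2]
print([tc["name"] for tc in regression_suite])
# ['test_payment_flow', 'test_search']
```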


If your software undergoes frequent changes, regression testing costs will escalate. In such cases, manual execution of test cases increases test execution time as well as costs, and automation of regression test cases becomes the smart choice. The extent of automation depends on the number of test cases that remain reusable across successive regression cycles.


Regression Testing Tools


Following are most important tools used for both functional and regression testing:


Selenium: This is an open source tool used for automating web applications. Selenium can be used for browser-based regression testing.


Quick Test Professional (QTP): HP Quick Test Professional is automated software designed to automate functional and regression test cases. It uses the VBScript language for automation and is a data-driven, keyword-based tool.


Rational Functional Tester (RFT): IBM's Rational Functional Tester is a Java-based tool used to automate the test cases of software applications. It is primarily used for automating regression test cases, and it also integrates with Rational Test Manager.



Oniyosys runs regression tests using a suitable combination of automated and manual testing. They are conducted not only during the operational software development stage but also before release into a live environment. Regression testing helps detect major variances that could have serious implications for revenue, schedule, and company reputation. This type of testing is crucial to the success of application development and upgrades.

Thursday, 23 February 2017

Functional Testing: The Most In-Demand Service Of Oniyosys



Functional testing services at Oniyosys cover end-to-end software testing, right up to user acceptance testing with complete system integration and acceptance. During our functional testing process, we check the programs thoroughly for any bugs which may not be visible during the normal testing process. Our functional testing module is applicable to both new applications and existing applications with added features. Let's discuss our functional testing -

Functional testing is a testing technique used to test the features and functionality of the system or software; it should cover all scenarios, including failure paths and boundary cases. This testing mainly involves black-box testing and is not concerned with the source code of the application.

Each and every functionality of the system is tested by providing appropriate input, verifying the output, and comparing the actual results with the expected results. This testing involves checking the user interface, APIs, database, security, client/server applications, and the functionality of the application under test. The testing can be done either manually or using automation.


What do you test in Functional Testing?

The prime objective of Functional testing is checking the functionalities of the software system. It mainly concentrates on -

Mainline functions:  Testing the main functions of an application

Basic Usability: It involves basic usability testing of the system. It checks whether a user can freely navigate through the screens without any difficulties.

Accessibility:  Checks the accessibility of the system for the user

Error Conditions: Usage of testing techniques to check for error conditions.  It checks whether suitable error messages are displayed.



Functional Testing Process:

In order to functionally test an application, the following steps must be observed (a small pytest sketch follows the list). 

1. Identify test input (test data)

2. Compute the expected outcomes with the selected test input values

3. Execute the test cases

4. Compare the actual results with the computed expected results
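
The four steps above can be illustrated with a small pytest sketch; the function under test and its test data are hypothetical assumptions used only to show the process.

```python
# A minimal sketch of the four-step functional testing process, using pytest.
# The discount function and its test data are hypothetical.
import pytest

def apply_discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

# Step 1: identify test input; Step 2: compute expected outcomes by hand.
CASES = [
    (100.00, 10, 90.00),
    (59.99, 0, 59.99),
    (20.00, 100, 0.00),   # boundary case
]

# Steps 3 and 4: execute the test cases and compare actual with expected.
@pytest.mark.parametrize("price, percent, expected", CASES)
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected
```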



Types of Functional Testing :

The major functional testing techniques are of two types - 
1. Black Box Testing and 2. White Box Testing

The other major Functional Testing techniques include:

1. Unit Testing

2. Integration Testing

3. Smoke Testing

4. User Acceptance Testing

5. Localization Testing

6. Interface Testing

7. Usability Testing

8. System Testing

9. Regression Testing

10. Globalization Testing



Functional testing tools:

There are several tools available in the market to perform functional testing. They are explained as follows: 

Selenium - Popular Open Source Functional Testing Tool.

QTP - Very user-friendly Functional Test tool by HP.

JUnit - Used mainly for Java applications; it can be used in unit and system testing.

soapUI - This is an open source functional testing tool, mainly used for web service testing. It supports multiple protocols such as HTTP, SOAP, and JDBC.

Watir - This is a functional testing tool for web applications. It supports tests executed in the web browser and uses the Ruby scripting language.

Functional testing is more effective when the test conditions are created directly from user/business requirements. When test conditions are created only from system documentation (system requirements/design documents), defects in that documentation will not be detected through testing, and this may be the cause of end-users’ wrath when they finally use the software.

At Oniyosys, functional testing is the most in-demand service of the company, with scores of successfully completed projects spanning a wide spectrum of activities, from integration testing to user acceptance testing to production release support.

Thursday, 18 September 2014

Software Application Testing Services

The main purpose of application testing is to find defects or failures in the product or application. During test planning it is also decided what constitutes an "important defect"; usually an important defect is one that affects the usability and functionality of an application and makes it hard for the customer to use.

Oniyosys offers a truly effective and streamlined solution for application testing of your software and web applications. Our application testing services enable you to deploy your application with assurance that it will endure contemporary and near-future levels of load. Our testing services have helped organizations achieve predictable and improved quality levels.

The application must successfully pass all test conditions before it is ready for the general customer. However, testing cannot establish that the product will function properly under all conditions; rather, it highlights the specific conditions under which the product will not function properly.

There are various methods of conducting application testing, namely static and dynamic testing. Dynamic testing is used more often than static testing; it is conducted when the application is run or executed, and it is primarily used to test specific sections of the code.

Application testing can be carried out at any point during the development process. However, most of the testing takes place after each requirement is fulfilled and the coding process is completed.

Testing a web application is an important part of preparing it for release. From usability to loading performance, there are several areas of a program that the testing phase evaluates, and the key to proper evaluation is proper test design. Although different applications can require different test procedures, some procedures are ubiquitous.

Application Testing offers an independent viewpoint to the business to understand and evaluate the risks associated with the product or the software. The test procedure basically includes executing the application to find out software bugs.

A good web application testing plan ensures that a web application is functional and user-friendly. By empowering the testing phase to evaluate critical areas of user experience, companies can develop applications that are instantly user friendly – an important aspect of sales momentum during an application’s release period.

Most web applications require several types of testing, but perhaps none of them is as important as testing for user acceptance. If a program contains problems that significantly affect how it performs for end users, it can fail to generate enough sales to justify the cost of developing it.