Product Engineering Outsourcing, Tech Talk

A QUIC Introduction

QUIC is a UDP-based network protocol designed by Jim Roskind at Google. It was initially implemented by Google in 2012 and announced as an experimental project in 2013.

TCP and UDP are the widely used transport-layer protocols, and each has its own advantages and disadvantages. To combine the strengths of both, Google designed a new internet transport called QUIC (Quick UDP Internet Connections).

QUIC Features

QUIC reshaped the key mechanisms of TCP (connection establishment, stream multiplexing, congestion control and loss recovery), as outlined below:

QUIC Negotiation & Connection Establishment

QUIC offers "Zero-RTT" connection establishment with security equivalent to TLS over TCP, but with lower latency.

In TCP, connection establishment is a three-way handshake, which means an additional round-trip time (RTT) before a connection can start, adding significant delay to every new connection. On top of that, if TCP also needs to negotiate TLS for a secure connection, more network packets must be exchanged, delaying connection establishment even further.
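As a rough back-of-the-envelope illustration, the difference can be sketched with a toy latency model (the RTT counts are simplified assumptions: one round trip for the TCP handshake, two more for a classic TLS 1.2 negotiation, zero for a repeat QUIC connection):

```python
# Rough connection-setup latency comparison (simplified model).
# Assumes: TCP handshake = 1 RTT, classic TLS 1.2 = 2 more RTTs,
# QUIC repeat connection = 0 RTTs before application data flows.

def setup_delay_ms(rtt_ms: float, handshake_rtts: int) -> float:
    """Delay before the first byte of application data can be sent."""
    return rtt_ms * handshake_rtts

rtt = 100.0  # a typical mobile round-trip time, in milliseconds
tcp_tls = setup_delay_ms(rtt, 1 + 2)   # TCP + TLS 1.2
quic_0rtt = setup_delay_ms(rtt, 0)     # QUIC repeat connection

print(tcp_tls)    # 300.0 ms spent before any request is sent
print(quic_0rtt)  # 0.0 ms
```

Even in this crude model, a secure TCP connection spends three round trips before the first request, while a resumed QUIC connection spends none.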


QUIC Congestion Control & Loss Recovery

QUIC offers pluggable congestion control, with a reimplementation of TCP Cubic as the default algorithm, and it provides richer information to congestion control algorithms than TCP can. QUIC uses a monotonically increasing packet number for every packet, original or retransmitted, and a number is never repeated within the lifetime of a connection. This makes it possible to distinguish ACKs for retransmissions from ACKs for original transmissions, eliminating TCP's retransmission ambiguity problem.

QUIC calculates round-trip time (RTT) precisely: the receiver reports the delay between receiving a packet and sending its acknowledgment, and combined with monotonically increasing packet numbers this yields unambiguous RTT samples.
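A minimal sketch of this calculation (the function and field names are illustrative, not QUIC wire-format names): the receiver reports how long it held the packet before acknowledging it, and the sender subtracts that delay from the raw round-trip measurement.

```python
def estimate_rtt(send_time: float, ack_receive_time: float,
                 ack_delay: float) -> float:
    """RTT sample with the receiver's reported ack delay removed.

    send_time        -- when the sender transmitted the packet
    ack_receive_time -- when the sender received the ACK
    ack_delay        -- delay the receiver reported between receiving
                        the packet and sending the ACK
    """
    raw_rtt = ack_receive_time - send_time
    return max(raw_rtt - ack_delay, 0.0)

# Packet sent at t=0.0 s, ACK arrives at t=0.120 s, receiver says it
# waited 0.025 s before acknowledging: the network RTT is close to 95 ms,
# not the naive 120 ms.
print(estimate_rtt(0.0, 0.120, 0.025))
```

Subtracting the reported ack delay keeps deliberate delayed-ACK behavior at the receiver from inflating the sender's RTT estimate.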

QUIC Stream & Connection Flow Control

QUIC implements flow control at both stream level and connection level. In stream-level flow control, the receiver advertises an offset limit up to which it is willing to receive data. After data on a stream is successfully received and consumed, the receiver sends a WINDOW_UPDATE to raise the advertised offset limit for that stream, which allows the sender to send more data on it. Connection-level flow control works the same way as stream-level flow control, except that the delivered and received offsets are aggregates across all streams.

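The offset-and-WINDOW_UPDATE mechanism can be sketched as a toy model (the class and method names are illustrative, not the actual QUIC frame layout):

```python
class StreamFlowControl:
    """Receiver-advertised offset limit for a single stream (toy model)."""

    def __init__(self, initial_limit: int):
        self.sent_offset = 0             # bytes the sender has already sent
        self.max_offset = initial_limit  # receiver-advertised limit

    def can_send(self, nbytes: int) -> bool:
        return self.sent_offset + nbytes <= self.max_offset

    def send(self, nbytes: int) -> None:
        if not self.can_send(nbytes):
            raise RuntimeError("blocked: would exceed advertised offset")
        self.sent_offset += nbytes

    def window_update(self, new_limit: int) -> None:
        # The receiver raises the advertised offset after consuming data.
        self.max_offset = max(self.max_offset, new_limit)

stream = StreamFlowControl(initial_limit=1000)
stream.send(1000)             # fills the advertised window
print(stream.can_send(1))     # False: the sender is blocked
stream.window_update(2000)    # WINDOW_UPDATE raises the limit
print(stream.can_send(500))   # True: sending can resume
```

Connection-level flow control would track the same pair of offsets, but summed across every stream on the connection.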

Multiplexing without Head-of-Line blocking

Head-of-line (HOL) blocking is the performance-limiting phenomenon that occurs when HTTP/2 multiplexes many streams on top of TCP's single byte-stream abstraction: a lost TCP segment blocks all subsequent segments until a retransmission covers the loss.


QUIC is designed from scratch to support multiplexing, so a lost packet in an individual stream generally impacts only that stream. Streams without loss can continue to be reassembled and make forward progress in the application.
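A toy comparison of the two reassembly models (illustrative only, not protocol code): with one shared in-order byte stream, a single missing segment stalls everything behind it, while per-stream reassembly lets unaffected streams keep delivering.

```python
def deliverable(next_expected: int, received: set) -> int:
    """Count consecutive segments deliverable to the application."""
    n = 0
    while next_expected + n in received:
        n += 1
    return n

# TCP-like: streams A and B share one segment sequence; segment 1 is lost.
shared = {0, 2, 3, 4}          # segment 1 (belonging to stream A) was lost
print(deliverable(0, shared))  # 1 -> everything behind the hole is stuck

# QUIC-like: each stream reassembles independently; only A is affected.
stream_a = {0}                 # stream A lost its next segment
stream_b = {0, 1}              # stream B lost nothing
print(deliverable(0, stream_a))  # 1
print(deliverable(0, stream_b))  # 2 -> B makes full progress
```

In the shared-sequence case, segments 2-4 are already in the buffer but cannot be delivered; with per-stream sequences, stream B is completely unaffected by A's loss.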


Authenticated and Encrypted Header & Payload

TCP headers transit in plaintext and are not authenticated, which opens the possibility of active attacks such as packet injection and header manipulation (for example, receive-window or sequence-number manipulation).

QUIC packets are always authenticated, and the payload is typically fully encrypted. The parts of the packet header that are not encrypted are still authenticated by the receiver, so the opportunity for active attacks is greatly reduced.


QUIC Forward Error Correction

Forward Error Correction (FEC) is a proactive loss-recovery scheme. QUIC's FEC is XOR-based and built around the concept of a 'group' of packets. The group number is indicated on each packet in the group, as well as on the XOR packet sent to protect the group. If any single packet in the group is lost, its contents can be recovered from the FEC packet and the remaining packets in the group.
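The XOR scheme above is simple enough to sketch directly in code (a toy illustration of the general idea, not QUIC's wire format):

```python
def xor_fec_packet(group: list) -> bytes:
    """XOR all packets of a group into one FEC packet (equal lengths assumed)."""
    fec = bytearray(len(group[0]))
    for pkt in group:
        for i, b in enumerate(pkt):
            fec[i] ^= b
    return bytes(fec)

def recover_lost(survivors: list, fec: bytes) -> bytes:
    """Recover the single lost packet: XOR of the FEC packet and the survivors."""
    return xor_fec_packet(survivors + [fec])

group = [b"AAAA", b"BBBB", b"CCCC"]
fec = xor_fec_packet(group)

# Suppose packet 1 (b"BBBB") is lost in transit.
recovered = recover_lost([group[0], group[2]], fec)
print(recovered)  # b'BBBB'
```

Because XOR is its own inverse, one FEC packet can repair exactly one loss per group; two losses in the same group still require retransmission.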


QUIC Connection Migration

A TCP connection is uniquely identified by a four-tuple (source address, source port, destination address and destination port).  If any of these attributes changes (for example, by switching from Wi-Fi to cellular), the TCP connection does not survive.

QUIC connections are identified by a 64-bit Connection ID, randomly generated by the client.  QUIC can survive IP address changes and NAT rebindings because the Connection ID remains the same across these migrations.
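A toy sketch of why migration works (a hypothetical server-side lookup, not real QUIC code): the server demultiplexes incoming packets by Connection ID rather than by the four-tuple, so a changed source address still maps to the same connection state.

```python
# Toy demultiplexing: QUIC-style lookup by Connection ID instead of the
# TCP-style (src addr, src port, dst addr, dst port) four-tuple.

connections_by_cid = {}  # Connection ID -> connection state

def handle_packet(src_addr: str, cid: int) -> str:
    """Return the connection state for this packet, creating it if new.

    src_addr is deliberately ignored for lookup: the connection is keyed
    on the Connection ID alone, which is what makes migration possible.
    """
    if cid not in connections_by_cid:
        connections_by_cid[cid] = f"conn-{cid:x}"
    return connections_by_cid[cid]

# The client starts on Wi-Fi, then migrates to a cellular address.
c1 = handle_packet("192.0.2.10:5000", cid=0xABCD)
c2 = handle_packet("198.51.100.7:6000", cid=0xABCD)  # new source address
print(c1 == c2)  # True: the same connection survives the address change
```

A TCP-style table keyed on the four-tuple would treat the second packet as belonging to an unknown connection and drop it.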

QUIC in Action

Chrome has included experimental QUIC support since 2013, and QUIC is enabled by default in Chrome.


After verifying QUIC support, see QUIC in action by opening chrome://net-internals/#quic:


Verify the Connection ID (CID) in traces using Wireshark:


QUIC Implementation

Wireless networks suffer from congestion, interference and poor connectivity, so QUIC is immensely suitable for mobile applications where TCP faces shortcomings. Because QUIC carries HTTP traffic, it improves server-to-server and server-to-client communications built on top of REST or other protocols that use HTTP today.

This noteworthy surge of interest has led the open source community to produce a variety of QUIC implementations:

QUIC in GO –

QUIC in Caddy –

QUIC in Docker –

QUIC in Node.js –


QUIC benchmark tool –

QUIC for .Net –

To explore QUIC, Google provides a step-by-step guide to the Chromium code base's client and server implementations –

QUIC Summary

A glimpse of the experimental improvements:

  • 50-80% reduction in overall latency.
  • 25% fewer retransmissions.
  • 5% faster page load times on average.
  • 1 second faster web search at the 99th percentile.
  • 30% less rebuffering on YouTube videos.

Internet video is predicted to account for 80% of all traffic by 2019. QUIC and other new network improvements are important for handling this emerging video, gaming and virtual-reality traffic, as well as cloud-based communication and collaboration. An IETF effort to standardize QUIC is underway.

QUIC References

Technical Lead


Querying data from physical disks was the traditional approach. In-memory database computing (IMDBC) replaces this approach: data is queried from the computer's random access memory (RAM). Querying data from RAM results in shorter query response times and allows analytics applications to support faster business decisions, performing in minutes scenarios that previously took hours. [1]

The figure below, borrowed from the article referenced as [3], compares the traditional computing approach with the in-memory computing approach. As the figure clearly shows, in the traditional approach data is stored on an external device, whereas in in-memory computing data is loaded into main memory.

In-Memory Database Computing - A way to get faster and smarter analysis

Figure 1, Traditional vs. In-Memory Computing Approach [3].

Importance of In-memory database computing (IMDBC)

Until recently, only a few select business intelligence users in an organisation received a weekly report. In contrast, today every company is trying to implement a fact-based decision-making model across the organisation. However, every company knows that doing so means buying analytical tools, which are not only expensive but also difficult to use, since these tools require extensive IT expertise, knowledge and hands-on experience. Nowadays, business experts want speedy access to information and easy analysis; if the tools are not fast and easy, business experts have no choice but to wait for slow answers to their queries.

IMDBC not only gives business users self-service analysis capabilities, but also provides greater agility: with an in-memory tool, the team spends far less time on query analysis, model building, data preparation, data joining and performance-tuning tasks. It has also been claimed that in-memory technology can eliminate the need for a data warehouse, along with the cost and complexity associated with one [2]. IMDBC also provides a full picture of business scenarios, as there is no limit to the granularity of the data view available to the user. This drill-down capability enables users to view data from different perspectives and take real-time business decisions, which can eventually lead to higher business value and revenue growth. SAP, for example, had very poor end-user application performance; pushing IMDBC solutions helped SAP quickly overcome these performance issues at very low cost, without refactoring the design of the whole application. It is very difficult for an organisation to justify poor end-user performance after it has already spent millions of dollars on software and hardware implementations. [5]

To better understand IMDBC, consider a business problem faced by a food retailer. The retailer's BI users initially stumbled while trying to track and analyse data about online traffic to discover user behaviour and patterns. They started using big data for analysis, but it was not working well. The BI leadership decided to implement QlikView, an IMDBC tool. QlikView gave the retailer's business users far more flexibility than the old tools for creating queries on the fly and joining information from dissimilar data sources to answer business questions. Before the IMDBC solution was implemented, building reports was very tedious; afterwards, reporting became much faster, and business users started spending more time acting on the data rather than just analysing it. [6]

Types of In-memory database computing (IMDBC) tools

IMDBC comes in different flavours, and when comparing these tools we need to consider not only the speed of querying, reporting and analysis but also factors such as flexibility, agility and rapid prototyping. The in-memory analytics approaches are all quite distinct. The options available to buyers in the market today are given below.

Proprietary/Commercial/Paid types of IMDBC tools

This section lists the proprietary/commercial/paid types of IMDBC tools.

1.     In-memory OLAP

In this approach, a classic MOLAP cube is loaded entirely into memory. Tools in this category include IBM Cognos TM1 and Actuate BIRT. The main advantages of these tools are:

  • Faster reporting, querying and analysis, as the entire model and data are loaded into memory
  • Users can write back
  • Access via 3rd-party tools

On the other hand, this type of tool also has a few disadvantages:

  • Traditional multidimensional data modelling is required to implement in-memory OLAP
  • The biggest drawback is the limit of a single physical memory space, i.e. 3 TB in theory, of which only about 300 GB has been achieved in practice
2.     In-memory ROLAP

In this tool type, the ROLAP metadata is loaded entirely into memory. This product is available only from MicroStrategy. The main advantages of in-memory ROLAP are:

  • Faster reporting, querying and analysis, as the metadata is fully loaded into memory
  • These tools do not have the physical-memory limitation of the OLAP type above

Apart from the advantages, this type too has a few disadvantages:

  • Only the metadata, not the entire data model, is loaded into memory, although the vendor, MicroStrategy, can build complete cubes from the subset of data available in memory
  • Traditional multidimensional data modelling is required
3.     In-memory inverted index

In this kind of in-memory tool, the index, including the data, is loaded into memory. Inverted-index in-memory tools are available from SAP BusinessObjects (BI Accelerator) and Endeca. The advantages of inverted-index products are:

  • Faster reporting, querying and analysis, as the entire index is loaded into memory
  • Less effort is needed to create the data model compared with OLAP-based solutions

This category also has the following disadvantages:

  • Physical memory limitations
  • Some index modelling is still required
  • Reporting and analysis are restricted to the entity relationships built into the index
4.     In-memory associative index

In this type, an index correlates every attribute with every other attribute. In-memory associative-index products are available from QlikView, TIBCO Spotfire, SAS JMP and Advizor Solutions. These products have the following advantages:

  • Faster reporting, querying and analysis, as the entire index is loaded into memory
  • Less modelling effort is required than with OLAP-based products
  • Reporting, querying and analysis are unconstrained by a model; for example, any attribute can instantly be reused as a fact or as a dimension

Despite these advantages, such products do have a few disadvantages:

  • Physical memory limitations
  • Data modelling is still required for loading
5.     In-memory Spreadsheet

In this category, spreadsheet-like arrays are loaded entirely into memory. This product is available only from Microsoft (PowerPivot). This form of in-memory product has the following advantages:

  • Faster reporting, querying and analysis, as the entire spreadsheet is loaded into memory
  • No modelling is required at all

This form has only one disadvantage:

  • Physical memory limitations
Open Source/Freeware types of IMDBC tools

This section includes the list of Open Source/Freeware types of IMDBC tools.

1.     In-memory Distributed

Pivotal has released GemFire, a distributed IMDBC tool that can hold large amounts of data in the working memory of multiple server nodes. GemFire can also balance data across hundreds of nodes and can manage terabytes of data. Apache has developed Apache Ignite In-Memory Data Fabric, an integrated, high-performance, distributed in-memory platform for computing and transacting on large-scale data sets in real time.

The advantages of having GemFire or Apache Ignite In-Memory Data Fabric in place are:

  • They give enterprise applications low-latency access to datasets that would otherwise be too large.
  • They provide fail-over capabilities: in the event of single or multiple node failures, the system remains responsive.
2.     In-memory Hybrid

Altibase has developed HDB, an in-memory database with a hybrid architecture.  HDB offers high-performance data processing in main memory while also supporting storage on physical disk.

Advantages of Hybrid In-memory

  • HDB supports NVRAM (non-volatile memory) with battery backup, which takes over in case of a power failure and helps preserve data in that event.
IMDBC Challenges

In-memory technology enhances application performance to a great extent. However, many underlying challenges need to be addressed before implementing it. IMDBC has made database querying inexpensive and feasible, and its adoption rate is increasing. However, because these systems are fast at reads but slower at writes, adoption has not been swift everywhere. [4]

IMDBC definitely offers significant performance gains over disk-based systems. However, disk transfer speeds and memory capacities are still major concerns for in-memory solutions. For this reason, not every application can run successfully in-memory and deliver the same benefits.

A major challenge for IMDBC is slow recovery times: to recover an in-memory database, all of that memory has to be refreshed, and the data has to come from disk. However, this may soon change. [5]


The IMDBC approach has been around for 30 years, but the reason it has made headlines only recently is that large amounts of RAM used to be infeasible: when the maximum available was around 4 GB, there was not enough memory to run high-end analytical solutions or multi-user BI solutions. IMDBC has become feasible for many businesses as costs have gradually declined over the years. Nowadays, with 64-bit operating systems that can address terabytes of memory (potentially more in the future), it is possible to cache large volumes of data in a computer's RAM, perhaps even an entire data warehouse or data mart.  The IMDBC approach provides far better performance and speeds up reporting, querying and analysis compared with the traditional approach, letting business users take better decisions with speed and precision. Taken together, the improved access and response capabilities offered by in-memory technologies can help organisations deliver the right information to the right business decision makers at the right time. [3]


[1] Cindi Howson, "Take Advantage of In-Memory", April 2009. [Online]. Available: [Accessed: Aug. 31, 2016].

[2] Margaret Rouse, "In-memory analytics", Jan 2015. [Online]. Available: [Accessed: Oct. 20, 2016].

[3] Anytersys, "System Solutions: High-performance in-memory DBMS". [Online]. Available: [Accessed: Oct. 24, 2016].

[4] Robert L. Mitchell, "8 big trends in big data analytics", Oct 2014. [Online]. Available: [Accessed: Jan. 20, 2017].

[5] Sharon D'Souza, "In-memory database technology gains ground, but challenges remain", Jan 2012. [Online]. Available: [Accessed: Jan. 20, 2017].

[6] Beth Stackpole, "In-memory analytics tools pack potential big data punch", May 2013. [Online]. Available: [Accessed: Jan. 21, 2017].


Vishal Prajapati

Senior Business Analyst


Automated outbound call testing has emerged as a new practice in organizations, mostly focused on marketing, sales, support departments and alerting systems. To successfully provide automated calls and share resources, the outbound call process should be tested before end users rely on it. Automated outbound call testing is a form of testing in which testers verify the quality of calls, voicemail and the callback feature against standard quality parameters for voice-based interactions. This blog covers the main coverage areas for automated outbound call testing and the challenges providers face. It also covers automation of call-based testing using UFT's Insight object technology and Google Hangouts.

Where is it needed?
  • Marketing: Marketing organizations use automated outbound calls to generate, track, qualify, filter, route, and report on sales calls. They also use the call-recording component to review sales and support calls.
  • Sales: Sales departments make outbound calls to capture and respond to phone leads. They often set up a virtual call center to manage, route, and record inbound sales calls.
  • Support departments: Most support departments use automated outbound calls to provide phone-based customer service through a virtual call center that handles support calls. Support teams often use the IVR technology component to answer incoming calls.
  • Alerting systems: Voice-based alerts are delivered to users from the automated outbound calling system. If a user does not answer, a voicemail with the message or callback information is sent instead.
Approach to Outbound Call Testing
  1. Voice Quality Testing:

While testing a call, it is very important to check the quality of the voice. The voice should be clear and properly audible. The following three parameters should be taken into consideration:

  • Pitch – the degree of highness or lowness of the message.
  • Volume – the loudness of the message.
  • Speed – how fast or slow the message is delivered, so that the user can understand it properly.

Voice quality can be checked in the following areas:

Voicemail quality:

When a recipient is unable to pick up the call, or if the call gets disconnected mid-conversation, a voicemail is delivered to the user. The challenge here is the quality of the delivered voice message: it should be clear, with the complete, detailed message, and apart from this, its pitch, volume, speed and language should all be verified.

Callback feature:

Sometimes the call is protected with a PIN. When the user receives the call they have to provide the PIN; if the user is not able to pick up, they should get a voicemail with only the callback information, not the detailed message. Testing the entire callback flow is quite lengthy because multiple aspects must be covered:

  • The voicemail should contain only the callback information.
  • The voicemail should contain the PIN, which is needed if the user dials the callback number from a phone other than the one where the voicemail was delivered.
  • The callback number should connect successfully.
  • The message should be clear and complete.
  2. Voice Detection Testing:

When the call is dialed automatically and someone answers, the biggest challenge is to determine whether the call has been answered by a human or by an answering machine or voicemail system. If an answering machine picks up, the message is not delivered as a conversation; instead, a voicemail or callback information is left on the user's phone. Voice detection techniques are covered in detail in the section below.

  3. Language Support Testing:

Despite advancements in many other alerting areas, multi-language alerts remain a struggle. So we need to test that the alert is delivered, with its full message content, in all required languages with the same quality as in English.
The most commonly used locales are en-US, en-GB, es-ES, fr-FR, fr-CA, es-MX, nl-NL and it-IT.

  4. Hosted IVR Testing:

IVR is a cloud-based technology that allows a system to interact with humans through voice, and humans can respond to the call. Orders can be processed without a live operator. Outbound IVRs are also used to conduct customer surveys and to solicit and process orders.

  5. Text-to-speech Testing:

A text-to-speech (TTS) system converts normal language text into speech; it can be configured per the requirements of the organization. In automated outbound calls, a text message is usually delivered from the application to multiple users in one go, so it is important that the message is delivered with the same quality and content to all users.  TTS is used to convert the application's text message into voice.

Outbound call flow and Voice Detection Techniques:
  • Each call/line is managed by a dedicated thread.
  • Call flow is passed in from plug-in in XML along with all wav file paths.
  • Voice Detection process runs in parallel with call flow to handle extension dialing and voice mail.
  • Performs real time analysis on incoming audio stream to detect voice, tone and Pauses on the fly.
  • FFT (Fast Fourier Transform) based spectral analysis, FFT is provided by Intel IPP library.
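The spectral-analysis step can be sketched with a plain discrete Fourier transform standing in for the Intel IPP FFT (a simplified, pure-Python illustration of checking whether a known tone frequency dominates a frame of samples):

```python
import math

def dft_magnitude(samples: list, freq_hz: float, rate_hz: float) -> float:
    """Magnitude of one DFT bin: the energy of freq_hz in this frame."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / rate_hz)
             for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / rate_hz)
             for i, s in enumerate(samples))
    return math.hypot(re, im) / n

RATE = 8000.0  # a common telephony sample rate, in Hz

# A 100 ms frame containing a pure 440 Hz tone.
frame = [math.sin(2 * math.pi * 440.0 * i / RATE) for i in range(800)]

print(dft_magnitude(frame, 440.0, RATE) > 0.4)   # True: tone present
print(dft_magnitude(frame, 1000.0, RATE) > 0.4)  # False: no 1 kHz energy
```

A real voice-detection pipeline would apply this kind of analysis frame by frame, combining tone energy with pause lengths to distinguish a live human greeting from an answering-machine prompt.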

Fig 1: FFT based spectral analysis(Ref from TAS document)


Fig 2: Voice Detection Techniques (Ref from TAS document)

Challenges in Outbound Call Testing

  • Cisco TSP is unstable and crashes occasionally while scaling up.
  • Audio quality decreases while scaling up.
  • Voicemail/extension handling (a great variety of phone systems).
  • Monitoring the server and troubleshooting application and DB server issues.
  • High-scale load-test challenges.
  • Complexity of UCM, SIP trunks and data-center infrastructure.
  • Performance: measuring the transmission of real-time audio streams is a big challenge due to the diversity of audio types.
  • Scalability: it is difficult to maintain a huge number of calls on one server, so it is hard to determine the right cluster size to expand to.
How to Approach Automation of Outbound Calls?

It is tough to give a clear answer on whether call testing can be fully automated. There are two ways to approach it:

  • Using UFT and Google Hangouts, we can automate outbound call testing with maximum coverage. The main automation challenge is that we need to receive the call and respond to it per the instructions in the message. Using the following process, we can automate most of the call flow: 1. receive the call, 2. respond, 3. disconnect. We can configure the number in Google Hangouts and then use the Insight object concept to identify objects such as the Receive, End Call and number-pad buttons. Once the alert is delivered, we can receive the call and respond to it using UFT and Hangouts.
  • We can also automate phone calls using the device-Connect add-in available in UFT. Testers can use device-Connect's AirStream app in conjunction with the AiDisplay remote viewer to access device functions. With Mobile Labs' device-Connect, the device's apps and functions can be accessed remotely.

Automated outbound call testing is used in many organizations. To test these calls, we need the right approach to achieve maximum test coverage. This blog has covered that approach, the areas to focus on, and the points to take care of while doing outbound call testing. At Xoriant, we have successfully implemented automated outbound call testing for our clients and achieved high quality. For one client, we tested automated outbound calls for an alerting system where alerts published via the IWS application had to be tested in multiple languages.

Senior Test Engineer


The IoT (Internet of Things) is a concept that encompasses much more than the technology associated with smart homes. It holds amazingly powerful applications for the future business world, with a built-in capability to perform data analysis efficiently, letting companies function more cost-efficiently and productively. The IoT can also enable digital transformation and drive new business models and value in companies of all sizes, across almost all industries. By connecting people, systems, processes and assets, business leaders can make better-informed decisions that improve customer experience and competence, reduce costs and generate more revenue.

Internet of Things

Figure 1 Internet of Things (IoT)

Figure 1 above, borrowed from the article referenced as [5], illustrates the term IoT: as the figure clearly shows, devices are connected to the internet. These devices transmit critical data back to the cloud for further analysis, since data generated by IoT devices could be a game changer for organisations. According to Gartner's market research, spending on IoT technology increased by 30% from 2015 to 2016. According to the McKinsey Global Institute, IoT could create an economic impact of 2.7 to 6.2 trillion dollars annually by 2025. This not only opens up great business opportunities for big businesses to create new value in a highly digital, data-driven future, but also gives small businesses the same advantage. [1]

IoT Today

Experts affirm that IoT implementation is still in its early stages. Among IoT devices, consumers are mostly aware of Nest's Learning Thermostat as an IoT-compatible smart device. This device learns and adapts to consumers' patterns of behaviour and the change of seasons, programming itself for ideal efficiency and ease. The feature that makes IoT such a powerful tool for individuals and businesses is its ability to learn and make decisions without any human involvement. Many organisations are already using networked sensors and products for a variety of purposes, including modernising the manufacturing process, better understanding consumer needs, tracking shipments and making better decisions.

Ryan Lester, director of IoT Strategy for an IoT platform, states that he sees three main use cases for IoT in the organisations he works with:

  1. The first is connectivity to enable new features. This connectivity allows capturing telemetry data; telemetry is the process of automatically measuring and wirelessly transmitting data from remote sources. The following use cases are also achieved using telemetry data.
  2. The second is better service: identifying when a product will fail or when it requires new parts.
  3. The third is periodic replacement. For instance, an air-filter company can automatically send a replacement based on the customer's usage.

In recent years, retailers have started using IoT to gain a complete understanding of how consumers interact with products in the retail environment. Manufacturers have also started using IoT to develop better manufacturing practices and processes, networking essential machines and using robotics throughout the process. Companies have installed sensors on manufacturing machines, allowing them to process the data to identify trends of poor quality. The IoT can be used to monitor the whole product lifecycle, from creation to end point. For instance, networked manufacturing monitors the creation of the product to ensure quality and production efficiency. Next, IoT can help track and coordinate shipping logistics, ensuring efficiency, speed and accuracy. Once products are in a distribution center, information about inventory and organisation, along with the interactions between automated systems such as stock-picking robots, can also be captured by an IoT system. The IoT can also deliver a more personal, customized customer experience by providing data about maintenance and user interaction. Thus, IoT can greatly help businesses boost loyalty and create lifetime customers.

By understanding more about a person, their behaviour and their life, and by moving from a one-time transaction to selling them a product as a service, a unique customer experience can be delivered that gives them power and control. As you understand customers' challenges, the possibility of delivering a better product increases.

The Future of IoT

According to a Business News Daily survey of industry experts on how IoT technology may grow and how businesses will incorporate these systems in the future, below are some predictions about the future of IoT.

Predictions about consumer behaviours and needs

Justin Davis states that as IoT devices start storing data about our daily activities, they will develop a complete understanding of our lives. The information collected from all the devices will be merged by a software platform, and humans will interact with devices through virtual assistants. For instance, the virtual assistant of a coffee machine might remind you that you are about to run out of coffee and, since it knows your coffee brand and the amount you pay, recommend a different brand at the same or lower cost.

A famous clothing brand was facing a major issue with manual inventory counts: accuracy levels were 60 to 70 percent, which led to missed sales and disappointed customers due to out-of-stock situations both in stores and online. The brand also lacked clear visibility into what customers would buy from the shelf, such as how often items were considered or tried in a fitting room before being sold (the conversion ratio). By implementing IoT solutions such as Smart Cosmos, RAIN RFID tags and network-connected RFID sensors, METRICS gave the clothing giant end-to-end supply-chain visibility, and real-time analytics provided consolidated reports through which store and shelf availability of merchandise could be identified and customer satisfaction improved. The benefits do not end there: in the long run, deploying METRICS in stores can deliver major improvements in sales, cost savings and gross margin.

Personalized one-to-one marketing

Businesses using interactive displays that answer consumer needs in real time will be successful. Interactive displays can help an organisation create its own model sets of products and walk you through a variety of products and solutions; one of the best examples is Nike, the footwear giant. Your phone can also provide detailed product information, including pricing, simply by pointing it at any product in the store and using the store's interface.

Continued refinement of business operations

IoT, in conjunction with big data analytics, will not only revolutionize traditionally managed businesses but also result in more effective and efficient use of resources. Service companies in particular can make the best use of IoT-based solutions by sending their technicians to monitor and identify issues at the customer's location. Small and medium-sized enterprises will benefit even more, as IoT can bridge the demand-supply gap by integrating inventory management and customer relationship management systems. Thus, in a world where everything is connected and devices communicate intelligently with each other, IoT may well become the internet of everything instead of the internet of things.

New Business Opportunities

IoT not only offers greater efficiency but also opens many doors to new business opportunities. It has great potential to change the way companies and customers approach the world, though both will have to adapt to new devices and services in this changing, ultra-connected space. The current wave of IoT embraces billions of devices and will touch every business domain, including retail, manufacturing, healthcare and other sectors. The Cisco Internet group has forecast that approximately 50 billion IoT devices will be in use within the next 20 years. [2]


Figure 2 Estimated Number of Installed IoT devices by Sector

Figure 2 above, borrowed from the article referenced as [4], depicts the estimated number of installed IoT devices by sector. The key findings from the estimate are given below.

  • It has been predicted that by 2021 the IoT market will be the largest device market in the world, double the size of the smartphone, tablet, wearable (fitness trackers, smart watches) and computer markets combined.
  • The IoT business will add almost 1.7 trillion dollars to the global economy by 2021, including hardware and software installation costs and management services.
  • It has also been predicted that by 2021 the government sector will lead IoT device shipments, as the government and home sectors gain momentum.
  • The topmost benefits offered by IoT will be increased efficiency and reduced costs, as IoT promises to raise efficiency within the home, the city and the workplace by giving full control to the device user.

These IoT-connected devices include wearables such as fitness trackers and smart watches; smart homes and offices, where connected devices can be lights, thermostats, TVs, refrigerators, weather sensors, pollution sensors and security systems; and cars, where connectivity links the engine and parking sensors.  Umbrellium's Thingful, the world's first search engine designed specifically for public IoT devices, provides a geographical index of exactly where things are, who owns them, and how and why they are used.

Globally, Google and Apple are two major companies in the current IoT market. Google Glass, Google's smart eyewear device, provides fast access to information through commands spoken into its built-in microphone. Apple has developed a smart framework called HomeKit, which can be used to control connected devices inside the home. Giri Krishna affirms that IoT can surely increase business efficiency: it is among the best recent innovations in science and technology, and it has benefited and will continue to benefit companies, not only supporting business but also offering a new angle on a comfortable lifestyle.

IoT Challenges

Although IoT offers huge and wide-ranging possibilities, there are a few challenges associated with its deployment.

IoT Security Challenges

Security and privacy remain the major concerns associated with IoT.  IoT means more data and more connected devices, and therefore more opportunities for hackers and cybercriminals to steal or misuse private information. When security systems are fully automated, hackers who break in can lock the entire system. The security risks associated with IoT must therefore be taken into account by all businesses, and the industry will have to keep developing strong and creative remedies as IoT expands. Currently, only a few standards (regulations) govern IoT devices, but groups comprising electronics, global industrial and tech companies are working to standardize IoT and solve its major concern, security. [4]

IoT Scalability

Companies that were early adopters of IoT products and technologies are facing real scalability challenges. The necessity of highly specialized and customized solutions makes IoT even more difficult to scale. As a result, IoT deployment is moving at a much slower pace than anticipated, and many organisations are reportedly still at the proof-of-concept (POC) stage despite working on IoT for several years.

To better understand the scalability challenge, assume a manufacturing company would like to deploy IoT technology to gather better insights about its overall operations, improve its manufacturing efficiency and modernize its operations.  Assume also that the manufacturer has multiple plants with different kinds of equipment and multiple workflows. In the computing world, devices and software are considered legacy once they are five years old; in the manufacturing world, even 35-year-old equipment may be fully functional. Dealing with manufacturing equipment of different ages at one site is hard enough, and it becomes far more challenging across a number of dissimilar sites. Hence, it is extremely challenging to find ways to reliably gather a consistent set of data to analyse across all types of devices.

Modern equipment does offer a wide range of data, but upgrades are typically done by outside specialists. The simple solution would be to replace all the old equipment with new, but this requires heavy capital and is not a realistic option.

IoT Adoption

One challenge is getting customers to trust and adapt to the new technologies. IoT is something of a buzzword, and some people do not want their personal information to be shared, which can slow adoption. This applies less to businesses: companies have already started to implement IoT on a mass basis and will adopt it with less hesitation.

IoT Maintenance

Another challenge associated with IoT is building and maintaining IoT systems. According to Lester, until IoT systems are properly built and open enough to share and analyze data, most of the information will not be very useful to organisations looking to profit from their own networks and sensors. Lester also states that organisations are so busy understanding the technical part of IoT that they are missing business opportunities. The majority of companies say it is very important to connect to production data, yet only 51 percent actually gather that data, and less than one-third use it for decision making or are able to analyse it. There is a clear gap, and to bridge it companies will have to bring already-organised data into their business systems rather than use a separate, dedicated IoT system. This will allow easy access for the people who need the data daily to analyse it.


It is common for many businesses, especially smaller ones, to adopt technology late. However, IoT can add value to businesses of every size, in areas such as customer satisfaction, the bottom line and other important KPIs. Companies will have to be proactive in building a concrete plan to deploy IoT in practice, and it is highly recommended to invest in IoT technologies such as sensors, data intelligence and the infrastructure to support connectivity and data. There will inevitably be unexpected challenges arising in real time that are hard to prepare for. The conclusion is to proceed with caution and with strategies for adopting a data-driven business model, which can create new and improved insights into customer behaviour, resulting in innovation in product design, customer management and product delivery.


[1] Samiksha Jain, “Make your business thrive with Internet of Things”, Oct. 2016. [Accessed: 24 Jan. 2017]

[2] Adam C. Uzialko, Business News Daily, “How the Internet of Things Will Make Your Business Better at Customer Service”, Aug. 2016. [Accessed: 24 Jan. 2017]

[3] Nicole Fallon, Business News Daily, “Internet of Things: How Businesses Can Prepare and Adapt”, Jul. 2014. [Accessed: 24 Jan. 2017]

[4] John Greenough, “The ‘Internet of Things’ Will Be The World’s Most Massive Device Market And Save Companies Billions Of Dollars”, Oct. 2014. [Accessed: 24 Jan. 2017]

[5] Waypost, “Can Your Business Benefit from the Internet of Things?”, May 2016. [Accessed: 24 Jan. 2017]

Vishal Prajapati

Senior Business Analyst


Nowadays, web applications are an integral part of day-to-day life, thanks to their 24×7 availability and the huge amounts of data they put at our fingertips. As more and more vital data is stored in web applications and the number of transactions on the web increases, proper security testing of web applications is becoming very important.

The prime objective of security testing is to identify vulnerabilities in the system and to ensure that data is protected from hackers and invaders.

Most Common Types of Attacks causing Web Vulnerabilities

Injection Flaws [A1]: Injection flaws result from a failure to filter untrusted input. Injection attacks take various forms: passing unfiltered data to the database (SQL injection), to the browser (XSS) or to the LDAP server (LDAP injection). This allows an attacker to submit malicious DB queries and pass commands directly to a database or server. To prevent such injections, application input fields should filter incoming data, preferably against a whitelist, and should reject blacklisted data.
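To make the filtering advice concrete, here is a minimal sketch of the parameterized-query defence against SQL injection; the table and data are hypothetical, and only the “?” placeholder mechanism is the point:

```python
import sqlite3

# Hypothetical users table; the point is the "?" placeholder below.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# A classic injection payload. Because it is bound as a parameter,
# the driver treats it as plain data, never as SQL syntax.
malicious = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (malicious,)
).fetchall()
print(rows)  # [] -- no user has that literal name, so nothing leaks
```

Had the query been built by string concatenation, the same payload would have matched every row.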

Broken Authentication & Session Management [A2]: Broken authentication and session management attacks try to retrieve passwords, user IDs and account details. Some of the common causes are:

  • The URL may contain the session ID, which will leak in the referrer header
  • Passwords may be unencrypted or hard-coded
  • Session IDs may be predictable
  • Session timeouts may not be implemented over HTTP or SSL

There are numerous steps developers can take to prevent these attacks, including session expiration and login expiration, as well as strategies such as two-factor authentication and forcing users to change their password after a certain period.
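As an illustration of the session-expiration step, a server-side idle check might look like the following sketch (the 15-minute limit is an assumed value, not from the article):

```python
import time

SESSION_TIMEOUT = 15 * 60  # idle limit in seconds (illustrative value)

def is_session_valid(last_seen, now=None):
    # Reject any session that has been idle longer than the limit.
    if now is None:
        now = time.time()
    return (now - last_seen) <= SESSION_TIMEOUT

print(is_session_valid(time.time()))   # True: fresh session
print(is_session_valid(0, now=10_000)) # False: idle far too long
```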

Cross Site Scripting (XSS) [A3]: Cross-site scripting (XSS) is a vulnerability that lets an attacker inject client-side script into web pages viewed by other users, for example by tricking a user into clicking a crafted URL. When the victim's browser executes the injected code, it can change the website's behaviour, interrupt transactions with services such as banks or online stores, and steal personal data.

Developers should make use of existing security control libraries, such as OWASP's Enterprise Security API or Microsoft's Anti-Cross Site Scripting Library. They should also ensure that any client input is checked, filtered and encoded before being passed back to the user.
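For the encoding step, Python's standard library offers `html.escape`; a minimal sketch of encoding untrusted input before echoing it back:

```python
import html

untrusted = '<script>alert("xss")</script>'
# Encode before rendering: the markup becomes inert text in the browser.
safe = html.escape(untrusted)
print(safe)
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```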

Insecure Direct Object Reference [A4]: This arises from poor application design in which authorization is not sufficiently checked and users can gain administrative access to system data. For example, if a user's account ID is shown in the page URL and is a predictable value, an attacker may guess another user's ID and resubmit the request to access their data.

The best ways to prevent this vulnerability are to generate user IDs randomly using UUIDs, and to authenticate the user each time they try to access sensitive files or content.
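A sketch of the UUID approach with Python's `uuid` module: the identifiers carry 122 bits of randomness, so neighbouring accounts cannot be guessed the way sequential integers can:

```python
import uuid

# Random, non-sequential object references instead of guessable IDs.
account_id = uuid.uuid4()
another_id = uuid.uuid4()
print(account_id)  # e.g. '2f6e0c6a-...' (different on every run)
```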

Security Misconfiguration [A5]: The primary cause of this vulnerability is misconfiguration of the infrastructure that supports a web application. Common issues include default usernames such as “admin” and passwords such as “password” or “123”. Unattended web pages and services left running on the server can also cause such flaws.

This can be prevented by educating staff about security and privacy through adequate training, and by making secure configuration a priority at work.

Sensitive Data Exposure: This vulnerability occurs when sensitive data such as user IDs, passwords, session IDs and cookies are not encrypted and appear in browser URLs.

The following preventive measures help avoid this vulnerability:

  • Sensitive data should be encrypted at all times, both in transit (using “HTTPS”) and at rest
  • Payment transactions should be processed using a payment processor such as “Stripe” or “Braintree”
  • All passwords should be hashed and stored using a utility such as “Bcrypt”
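The article names bcrypt; as a standard-library illustration of the same pattern (a slow, salted hash, never storing the plain password), PBKDF2 can be sketched as:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Store the salt alongside the derived key, never the password itself.
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, key

def verify_password(password, salt, key):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("s3cret")
print(verify_password("s3cret", salt, key))  # True
print(verify_password("wrong", salt, key))   # False
```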

Missing Function Level Access Control [A6]: An authorization failure causes this vulnerability. It exists when a website has hierarchical, tier-level user accounts and, depending on the account’s privileges, a user can access a certain level of the application.

Whenever a valid user sends a request, the application verifies their access and privileges and sends them an approval token. For untrusted, anonymous users, however, administrative functions become targets, as they are prone to unauthorized use.

To prevent this, authorization must be performed for every server-side call.
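A minimal sketch of authorizing every server-side call; the role and action names here are made up for illustration:

```python
# Hypothetical role -> permitted actions map.
PERMISSIONS = {
    "admin": {"view_reports", "delete_user"},
    "viewer": {"view_reports"},
}

def authorize(role, action):
    # Enforced on every server-side call, not just hidden in the UI.
    if action not in PERMISSIONS.get(role, set()):
        raise PermissionError("access denied: %s -> %s" % (role, action))
    return True

print(authorize("admin", "delete_user"))  # True
# authorize("viewer", "delete_user") raises PermissionError
```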

Cross Site Request Forgery (CSRF or XSRF) [A7]: This is one of the most prevalent attacks by online scammers and spammers, in which users are manipulated into providing sensitive information through a forged website. Attackers typically warn the user that their “account has been suspended” or their “password has changed”, pressuring them into submitting their information through the forged site.

Anti-CSRF tokens tied to the session, validated on every HTTP request, prevent this vulnerability.
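A sketch of the token mechanism using the standard library; the `session` dict here stands in for real server-side session storage:

```python
import hmac
import secrets

# On login, store a random token in the server-side session and embed
# it in every form the server renders.
session = {"csrf_token": secrets.token_hex(32)}

def is_valid_request(submitted_token):
    # Constant-time comparison of the form's token with the session's.
    return hmac.compare_digest(session["csrf_token"], submitted_token)

print(is_valid_request(session["csrf_token"]))  # True: genuine form post
print(is_valid_request("forged-token"))         # False: request rejected
```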

Denial of Service (DoS) or Distributed Denial of Service (DDoS) [A8]: These are attempts to flood a site with external requests, making the site unavailable to users. “DoS” attacks usually target specific ports, IP ranges or entire networks, but can be aimed at any connected device or service.

A “Denial of Service” attack is one computer with an internet connection attempting to flood a server with packets; a “DDoS” attack uses many widely distributed devices to flood the target with hundreds, often thousands, of requests.

Main DDoS attacks are:

  • Volume Attacks where the attack attempts to overwhelm bandwidth on a targeted site.
  • Protocol Attacks where packets attempt to consume server or network resources.
  • Application Layer Attacks where requests are made with the intention of crashing the web server by overwhelming the application layer.

Unvalidated Redirects & Forwards [A9]: This is again an input-filtering issue, in which a web application accepts unverified input that affects URL redirection and sends users to malicious websites. In addition, hackers can alter automatic forwarding routines to gain access to sensitive information.
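A sketch of whitelisting redirect targets; the allowed host is a placeholder:

```python
from urllib.parse import urlparse

# Hosts we are willing to redirect to; "" covers relative URLs that
# stay on our own site.
ALLOWED_HOSTS = {"example.com", ""}

def safe_redirect_target(url):
    if urlparse(url).netloc in ALLOWED_HOSTS:
        return url
    return "/"  # fall back to a safe local page

print(safe_redirect_target("/account"))                    # /account
print(safe_redirect_target("https://evil.example/phish"))  # /
```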

Summing up:

Top-N vulnerability lists may initially appear to be independent data sets, but all of these flaws are interwoven, and one can lead to another. It is therefore vital to understand the application security landscape before deciding on an approach to security testing, in order to reduce risk. This is best achieved by combining multiple assessment approaches rather than depending on a single traditional one: code review/static analysis, threat modelling, and application-specific assessment methodologies (for example for mobile or embedded software) together give a more comprehensive picture of your software security threats.

Sr.Software Engineer


Burn-down charts are commonly used for sprint tracking by agile practitioners. The most effective and widely used method is to plot remaining effort against the time remaining to complete it; by doing so, teams can manage their progress.

At any point in a sprint, the remaining effort in the sprint backlog can be summed. The team tracks this remaining effort at every Daily Scrum to gauge progress toward the sprint goal.

The Product Owner calculates the total work remaining at least at every sprint review and compares it with the amount remaining at previous reviews to assess progress toward finishing the projected work by the desired time for the goal.

How To Create Burn-Down Chart

The very first step is to break down tasks into sub-tasks, which is done during the sprint planning meeting. Each task should have working hours associated with it (ideally not more than 12, roughly two days' work at six hours per day), which the team agrees on during the planning meeting.
Once the task breakdown is done, the ideal burn-down chart is plotted. This chart reflects progress assuming that all tasks and their sub-tasks are accomplished within the sprint at a uniform rate (refer to the red line in the figure below).

Many agile tools (JIRA, Rally, Mingle, etc.) have a built-in burn-down chart feature. In its simplest form, however, a burn-down chart can be plotted and maintained in a spreadsheet: the sprint cycle (dates) is plotted on the X axis, while remaining effort is plotted on the Y axis.

Refer below example:
Duration of Sprint – 2 weeks
Size of Team – 7
Time (Hours/Day) – 6
Total Capacity to complete work  – 420 hours
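For this example sprint, the capacity and the ideal line can be computed directly: 7 people × 6 hours × 10 working days gives 420 hours, burned at a uniform rate. A small sketch:

```python
team_size = 7
hours_per_day = 6
working_days = 10  # working days in a 2-week sprint

capacity = team_size * hours_per_day * working_days  # 420 hours
# Ideal remaining effort at the end of each day, falling uniformly.
ideal_remaining = [capacity - capacity * day // working_days
                   for day in range(working_days + 1)]
print(ideal_remaining)
# [420, 378, 336, 294, 252, 210, 168, 126, 84, 42, 0]
```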

On Day 1 of the sprint, once the task breakdown is in place, the ideal burn-down will be plotted as below:


The Y axis depicts the total hours in the sprint (420), which should be burned down by the end of the sprint. Ideal progress is shown by the blue line, which assumes all tasks will be completed by the end of the sprint.

How To Update Burn-Down Chart

Each member picks up tasks from the task breakdown and works on them. At the end of the day, they update the remaining effort for the task, along with its status.

In the example below, the total estimated effort for Task 1 is 10 hours. After spending 6 hours on the task, if the developer thinks another 4 hours are needed to complete it, the “Effort Pending” column should be updated to 4. The requirements team has completed its task, so its status is “Closed” and “Effort Spent” is 6. The QA team has not yet started its task, so the full estimate of 12 remains in “Effort Pending”.
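The day's burn-down point is then simply the sum of “Effort Pending” across the task breakdown; a sketch with figures mirroring the example above:

```python
# Remaining effort per task after the day's updates (illustrative data).
task_breakdown = {
    "Task 1":       {"status": "In-progress", "effort_pending": 4},
    "Requirements": {"status": "Closed",      "effort_pending": 0},
    "QA":           {"status": "Open",        "effort_pending": 12},
}

remaining_today = sum(t["effort_pending"] for t in task_breakdown.values())
print(remaining_today)  # 16 hours left -- today's point on the chart
```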


As we progress during the sprint, the burn-down will look like this:


Sometimes, scrum teams are not able to predict effort accurately for a sprint. The important aspect of the chart is that, at the end of each day, it should accurately reflect the work remaining against the effort required to accomplish it, with a single team member updating it. This can also be discussed in the daily scrum.

For example, if the team initially broke a task into 3 sub-tasks without understanding its complexity and dependencies, they have created a potential bottleneck.

To tackle such instances, at the end of the day, if the sprint has just started or is still in its initial phase, the team can revisit the effort required to complete the task, recalculate it and update the burn-down chart.

Here the team can consider adding a “spike task” of 2-3 days to understand the complexity (through KT sessions, revisiting references, and walkthroughs with the product owner or BA) and remove uncertainty around the task. They can then add “follow-on tasks” to the original estimate and recalculate the effort required for the sprint.

Understanding Burn-Down Chart

There are only two lines drawn on a burn-down chart, but the situations they describe can have different reasons and meanings. If remaining effort is above the ideal line, the team is going at a slower pace and may not finish all the sprint commitments decided during sprint planning. If remaining effort is below the ideal line, the team is progressing at a better rate and may finish early in the sprint.

Below are different situations scrum teams find themselves in during a sprint, and how to interpret them.

Sprint commitment MET


The progress above is observed on the charts of experienced agile teams. It indicates the team is able to organise itself: it completed the work on time and achieved the sprint goal.

Most importantly, such teams have a great product owner who understands the reason for a locked sprint backlog, and a great scrum master able to help the team.

The team did not take on more work than its capacity and velocity allowed, finished the sprint backlog on time, and estimated its capacity correctly.

Sprint commitment NOT met


This burn-down chart says: “You have not completed your commitment.” Such progress is mostly observed in inexperienced agile teams. The team was behind for the entire sprint and did not adapt the sprint scope to an appropriate level: it did not complete stories that should have been split or moved to the next sprint.

In such a situation, the amount of work taken into the next sprint should be reduced. If it happens again, corrective action should be taken after the first few days of slow progress; typically, lower-priority stories should be pushed to the next sprint or back to the product backlog.

Team stretched towards end to meet the commitment


This chart says the team started well in the first half of the sprint, lost focus in the middle and worked at a slower pace, and in the end completed the sprint on time, meeting the sprint goals, by stretching its working hours.

In the retrospective, the team should discuss the reasons for the slow progress in the middle of the sprint and solve those issues so they are better positioned in the coming sprints. The team should also reconsider the amount of work it can complete in one sprint.

Team is not consistent


A chart like this shows that stories or tasks were not estimated correctly during the sprint planning meeting; though the commitment was met in the end, the team's performance was not consistent.

Teams come across this state when work is not estimated correctly and problems are not identified before the start of the sprint.

In the retrospective, the team should focus on estimating stories correctly. They should refine their planning method by correctly calculating the team's load and velocity for coming sprints. The scrum master should pitch in here, help the team identify its estimation problems, and guide them out of this situation.

Sprint commitment met early before time


Such a situation arises when a team overestimates stories without understanding the difficulty of the tasks, or commits to too little during sprint planning, and hence finishes ahead of time. Team velocity has not been estimated correctly.

The team implemented all committed stories but did not work on additional backlog stories even though it had time to do so. To fix this situation, the team should immediately arrange a planning meeting, re-estimate the remaining user stories, include them in the sprint according to its velocity, and continue the sprint.

In the retrospective, the scrum master must be proactive in getting the team to fix its estimation, identifying problem areas and providing training. He can also have a word with the product owner about which backlog stories to include in the sprint.

Avoiding Mistakes While Using Burn-Down Charts
Multiple stories have a common task

There are occasions when different stories involve a common task. If we include this effort under each story, the total hours will be incorrect and tracking will suffer. For example, “data set-up” can be a common task applicable to all stories.

Tasks are too detailed or huge

Tracking becomes difficult for teams when too many tasks are created. At the same time, tasks should not be huge (ideally not more than 12 hours), or daily tracking becomes painful: when tasks exceed 12 hours, it is difficult for teams to measure the remaining effort.

Misreading between effort remaining and effort spent

One of the common mistakes new scrum teams make in their first few sprints is to misread “effort remaining” as “effort spent”. When updating the effort column every day, the team should re-estimate each task and record the effort remaining to complete it.

Update chart on daily basis without fail

Every scrum team member is required to update “effort remaining” at the end of the day. This helps teams produce burn-down charts that depict the true position of the team in the ongoing sprint, and eventually in the release.

Benefits Of Using Burn-Down Charts

Below are the benefits scrum teams can achieve if burn-down charts are plotted and used effectively on a daily basis.

Risk mitigation

Burn-down charts provide daily updates on effort and schedule, mitigating risk by raising an alarm as soon as something goes wrong in the sprint, and providing daily visibility to scrum teams as well as the customers and stakeholders involved.

If the red line shown in the figures above, which is the actual progress line, goes flat and hovers above the blue line, which represents the ideal line, then the scrum team knows it is in trouble. Risk mitigation can then be planned immediately rather than waiting for the end of the sprint.

Single communication tool for scrum teams, customers & stakeholders

Burn-down charts can be printed and placed in agile rooms, or shared in a common place with the audiences involved in the sprint at the end of every day's work. This gives high visibility of the scrum team's progress on the sprint, which ultimately helps in completing the release.

Common planning & tracking tool for scrum team

Scrum teams create the task breakdown and update estimated and remaining effort themselves, which enables them to own the plan made for the sprint. The biggest advantage is that the entire scrum team is involved in planning and tracking using the burn-down chart.

Placeholder to track retrospective action items

It’s a good practice to add retrospective action items from the previous completed sprint as “non-functional requirements” in the task breakdown for the current sprint. This way, the team targets those action items, and they are also tracked as the sprint progresses.


Sprint burn-downs are usually monitored using effort remaining; it is a common practice to use story points to monitor the release burn-down.

Since its introduction, many variations of the burn-down chart have been derived. The Cumulative Flow Diagram (CFD) is another favorite tool among agile practitioners, providing a greater level of detail and insight into the various stages of a story.

A few practitioners find “burn-up” charts useful at sprint and release level, but in the end it all comes down to the result and how efficiently the team uses the chart to track its daily activities.

However, recent studies show that burn-down charts remain the most favored tracking tool for agile practitioners, due to their effectiveness and simplicity.

Ameya Tawde

Sr. Test Engineer


Python: A tester’s choice

Python is a general-purpose, dynamic and flexible language, so a lot of applications are developed in Python. From a tester's perspective, it has readily available modules and libraries which make script creation a lot easier. Tests can be written as xUnit-style classes or as functions. Python provides a full-fledged automated testing solution for any sort of project, and is capable of unit, functional, system and BDD testing.

Pytest: Best among all python testing frameworks

Pytest is a Python testing framework which provides a single solution for unit, functional and acceptance testing.
It is more popular than the other available frameworks because of its attractive features. Below are some of the features of pytest:

  • It offers test design with no boilerplate
  • It does not require separate assertion methods like assertEquals, assertTrue, assertContains
  • Tests can be parametrized, which reduces code duplication
  • Pytest can run tests written for unittest, doctest and nose
  • 150+ external plugins are available to support all sorts of functional testing
  • Plugins such as pytest-bdd and pytest-konira are available for writing behaviour-driven tests
  • It works wonders for GUI automation testing when used along with tools like Selenium WebDriver or Splinter

In short, pytest is a one-stop solution for all sorts of Python testing, be it unit, functional, highly complex functional or acceptance (BDD) tests.

Top Most Pytest Features and properties

Pytest Fixtures: Nervous system of pytest
Fixtures are the key concept in pytest and essentially provide the baseline for test creation and execution. To declare any method as a fixture, just put the “@pytest.fixture” annotation on the method and place it in “conftest.py”.

Fixture example:

import pytest
from selenium import webdriver

@pytest.fixture
def open_browser():
    driver = webdriver.Firefox()
    # e.g. load a page before asserting on its title
    driver.get("http://www.python.org")
    assert "Python" in driver.title
    return driver

The fixture designed above will be available for the whole project, provided it is specified in the project directory's conftest.py file.

  • The conftest.py file contains all configuration settings, all defined fixtures and hook implementations, and it applies at the directory level. It is loaded by default whenever the tool is invoked.

Some key points on fixtures:

  • Fixtures have names and can be called from anywhere in the project: modules, classes and tests
  • Fixtures may return a value, or simply execute the steps specified in them.
  • Fixtures can be passed as function arguments, in which case the fixture's return value is available in that method.
  • A fixture can be specified in the directory-level conftest.py file. It can be called from any method and will execute the steps specified in it.
  • A fixture can take multiple fixtures, and each fixture can trigger other fixtures, which makes for a modular approach.
  • Fixtures can be scoped as per need. This is good practice, keeping time-expensiveness in mind. Scope can be “session”, “module”, “class” or “function”.
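To make these points concrete, here is a minimal sketch (the fixture name, the helper and the returned settings are all invented for illustration):

```python
import pytest

def build_config():
    # Plain helper so the setup logic can be reused and tested directly.
    return {"base_url": "https://example.test", "timeout": 5}

@pytest.fixture(scope="session")
def config():
    # scope="session": created once per test run and shared by every
    # test that lists "config" as an argument.
    return build_config()

def test_timeout_is_reasonable(config):
    # pytest injects the fixture's return value here.
    assert config["timeout"] <= 30
```

Placed in conftest.py, the `config` fixture would be available to every test below that directory.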

Request object: Introspect agent
This is one of the useful features, used to introspect the “requesting item”. It can introspect from the test function, class, module or session. Items specified in the config file or returned by other fixtures can be used efficiently by the calling fixture via the “getattr” function. Just check the fixture below:

  • Example:

@pytest.fixture
def driver(request, browser_type):
    """Return a webdriver instance for testing."""
    try:
        _driver = getattr(webdriver, browser_type)()
    except AttributeError:
        logging.error("Browser type %s unavailable!", browser_type)
        raise
    return _driver
Finalization code: Setting up teardown
Fixtures can be used for teardown as well. This is achieved using the “yield” keyword: annotate the fixture with “@pytest.yield_fixture” and put the teardown steps after the “yield” keyword. Whenever the fixture goes out of scope, the steps after “yield” serve as the teardown process. Have a look at the modified steps of the “driver” fixture below.

  • Example:

@pytest.yield_fixture
def driver(request, browser_type):
    """Return a webdriver instance for testing."""
    try:
        _driver = getattr(webdriver, browser_type)()
    except AttributeError:
        logging.error("Browser type %s unavailable!", browser_type)
        raise
    yield _driver
    _driver.quit()
    logging.info("Finishing test")

In the fixture above, the teardown steps after “yield” run when the fixture goes out of scope, quitting the driver instance once the test finishes.

Fixture parametrization: Enhances reusability
Fixtures can be parametrized and executed multiple times with different sets of data, the same way a normal function is executed.
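A minimal sketch of a parametrized fixture (the browser names are invented for illustration): the fixture is instantiated once per entry in `params`, so the dependent test runs three times.

```python
import pytest

BROWSERS = ["chrome", "firefox", "safari"]

@pytest.fixture(params=BROWSERS)
def browser_name(request):
    # request.param holds the current value from "params";
    # every test that uses this fixture runs once per value.
    return request.param

def test_browser_is_supported(browser_name):
    assert browser_name in BROWSERS
```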

Usefixtures: Call for fixture from anywhere
One way fixtures can be made available anywhere in the project is by calling them using the decorator “@pytest.mark.usefixtures(“fixture_1”, “fixture_2”)”.
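As a hedged sketch (the fixture and class names are invented for illustration), usefixtures applies a fixture to every test in a class without it appearing in any test signature:

```python
import pytest

@pytest.fixture
def clean_cart():
    # Hypothetical setup fixture; only its side effects matter,
    # so usefixtures is a natural fit.
    pass

@pytest.mark.usefixtures("clean_cart")
class TestCheckout:
    # clean_cart runs before every test in this class, even though
    # no test declares it as an argument.
    def test_empty_cart_total(self):
        assert sum([]) == 0
```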

Autouse fixture: Mark the fixture for all
Autouse fixtures are fixture methods which get invoked without the “usefixtures” decorator or “funcargs”.
Any fixture can be registered for autouse; just add the autouse keyword with a “True” flag: “pytest.fixture(scope="module", autouse=True)”. The fixture will run for a class, test or module as mentioned in the scope. If defined in conftest.py, autouse fixtures will be invoked by all tests below that directory. These fixtures are particularly useful for setting global settings applicable to the test session.
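A short sketch of an autouse fixture (the event log and helper are invented for illustration):

```python
import pytest

EVENTS = []

def record(event_log, name):
    # Plain helper so the bookkeeping is testable on its own.
    event_log.append(name)
    return event_log

@pytest.fixture(autouse=True)
def log_test_start():
    # autouse=True: runs before every test in this module without
    # being requested by name in any test signature.
    record(EVENTS, "test started")

def test_event_was_logged():
    assert "test started" in EVENTS
```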

Auto Test Discovery in Pytest: Ease of execution
One of the very useful features is auto test discovery in pytest: once the execution command is invoked, pytest detects all the tests; the user only needs to name test modules and tests with the prefix “test_*” when designing them. Command line arguments can specify test names, directories, node IDs and file names. In the absence of any command line arguments, collection starts from the configured ‘testpaths’ (provided they have been configured). This feature helps in running all the tests, multiple tests in groups, a single test, or tests belonging to specific directories. Alternatively, tests can be organized in a folder structure matching the modules and thus executed as needed.

Test parametrization
Tests can be parametrized using the built-in decorator “pytest.mark.parametrize”.

  • Example:

@pytest.mark.parametrize("input, expected_result", [
    ("2+5", 7),
    ("2+3", 5)])
def test_calc(input, expected_result):
    assert eval(input) == expected_result

Another way parametrization can be done in pytest is the “pytest_generate_tests” hook, which is automatically called at test collection time. The “metafunc” object can be used to inspect the requesting context, and the “metafunc.parametrize()” method can be called to parametrize the items.
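As a hedged sketch (the fixture name “sample_input” and its values are invented for illustration), the hook could live in conftest.py like this:

```python
# conftest.py (sketch): parametrize any test that declares a
# "sample_input" argument, without a @pytest.mark.parametrize marker.

def pytest_generate_tests(metafunc):
    # Called once per collected test function during collection.
    if "sample_input" in metafunc.fixturenames:
        metafunc.parametrize("sample_input", [1, 2, 3])

# test_sample.py (sketch): this test then runs three times,
# once per generated value.
def test_is_positive(sample_input):
    assert sample_input > 0
```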

Pytest Markers (Custom and inbuilt)
Tests can be marked with custom metadata. This provides flexibility in selecting tests for execution.
Markers can be applied only to tests, not to fixtures. They can be implemented at the class, module and test level.

  • Example:

@pytest.mark.mytest  # mytest marker
def test_1():
    assert True
Command to run only the “mytest” marked tests: $ pytest -v -m mytest

Some useful builtin markers

skip – This is used when a test needs to be skipped. An optional reason can be specified for the test.

@pytest.mark.skip(reason="different test data required")

skipif – This is used when a test needs to be skipped if a certain condition is met. An optional reason can be specified for the test.

@pytest.mark.skipif("condition")

xfail – At times tests are expected to fail and thus need to be marked as “expected failure”. The xfail marker can be used for such tests.
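A hedged sketch of the three markers together (the reasons and condition are invented; the xfail example uses a genuine float-rounding surprise):

```python
import sys
import pytest

@pytest.mark.skip(reason="different test data required")
def test_needs_new_data():
    assert False  # never executes; the test is reported as skipped

@pytest.mark.skipif(sys.platform == "win32", reason="POSIX-only check")
def test_path_separator():
    assert "/".join(["a", "b"]) == "a/b"

@pytest.mark.xfail(reason="binary floats make round(2.675, 2) give 2.67")
def test_rounding():
    # Reported as "expected failure" (xfail) rather than a failure.
    assert round(2.675, 2) == 2.68
```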


Command line markers/flags: A way to control selection and execution
Pytest provides command line flags which come in handy for test collection, execution and generating results in the required format.

Selection: Command options for selecting tests:

pytest – Collect and execute all the tests present in the current directory

pytest testpath – Collect all the tests from the specified path

Execution: Command options for executing tests

pytest -x: This flag stops the execution after the first failure

pytest --maxfail=2: This flag stops the execution after two failures

pytest --durations=10: Collect the list of the slowest 10 test durations

pytest --ff: Collect all tests but execute the previously failed ones first

pytest --lf: Collect only the failed tests and re-execute them

pytest -q -s -v: Flags that control how results are displayed on the console

Plugins: Rewarded association
Pytest has a rich plugin infrastructure. There are many builtin plugins available, which makes this tool a hit among the available ones. One can use builtin plugins, external ones, or write new plugins. Plugins contain well-specified hook functions which are ultimately responsible for configuring, running, reporting and gathering the tests. Whenever the tool is started, all builtin plugins get loaded first, followed by external ones registered through setuptools entry points, and at last the hook functions specified in conftest.py, in other words the user-created ones. Many external plugins with excellent additional features are available and work wonders along with pytest. User-created plugins can be specified in conftest.py and will be available to the whole project, or remain specific to a directory's tests.

Most popular external plugins

pytest-sugar: It generates prettier output and shows failures instantly.

pytest-cache: It allows running only the tests that failed in the previous run, with --lf.

pytest-xdist: If tests need to run in parallel, this plugin can be used. It distributes the tests among the specified nodes.

pytest-cov: This plugin measures code coverage and generates a report.

pytest-ordering: If tests need to run in a certain order due to output dependencies, this plugin can be used. Just add a decorator indicating the sequence.

pytest-expect: Pytest tests with asserts, and with multiple asserts in a single test case, execution stops at the first failed assert; this can be overcome by using this plugin, which causes the whole test to execute irrespective of assert failures in between.

Pytest-Selenium: Great association for Functional Testing
Pytest, with its simplicity, and Selenium WebDriver, the top UI testing tool, when combined provide a robust solution for UI automation testing. Selenium WebDriver supports nearly all web browsers and can work across many platforms. Pytest's test design, assertion approach and test result reporting are magnificent from a testing perspective. Pytest's support for external plugins provides a stable background for complex browser interaction through scripts. All these factors are congenial for high-quality GUI automation testing.

Pytest-bdd and pytest-konira: Behaviour-driven testing made easy for automated acceptance testing

These days more and more software projects are going for the BDD approach because of its simplicity and the clear understanding of software features it gives all the people involved.

The pytest-bdd plugin is an extension of pytest which supports BDD testing. It implements the Gherkin language, which makes behaviour-driven testing easier. Where other BDD tools require separate runners, it uses all the flexibility and power of pytest. Tests written in GIVEN – WHEN – THEN format are easy to understand and communicate their purpose clearly, and sets of examples are a bonus in clarifying application behaviour. Prerequisites, actions and expected output are conveyed effortlessly. This helps in designing anything from simple unit-level tests to highly complex end-to-end testing scenarios.

Pytest: Getting Started

Installation: Just run the below command and it's done…

pip install -U pytest

Sample test example: for testing assertion

def func(x):
    return x + 1

def test1_should_pass():
    assert func(4) == 5

def test2_should_pass():
    assert func(3) == 5-1

def test1_should_fail():
    assert func(3) == 5

def test2_should_fail():
    assert func(3) == 5

Execution Result of the example:

============================= test session starts =============================

platform win32 — Python 3.5.0, pytest-2.8.7, py-1.4.31, pluggy-0.3.1

rootdir: D:\UI-Automationfff_test, inifile:

plugins: expect-0.1, bdd-2.17.1, html-1.9.0, rerunfailures-2.0.0, xdist-1.14

collected 4 items

tests\creatives\concept\ ..FF

================================== FAILURES ===================================

______________________________ test1_should_fail ______________________________

def test1_should_fail():

>       assert func(3) == 5

E       assert 4 == 5

E        +  where 4 = func(3)

tests\creatives\concept\ AssertionError

______________________________ test2_should_fail ______________________________

def test2_should_fail():

>       assert func(3) == 5

E       assert 4 == 5

E        +  where 4 = func(3)

tests\creatives\concept\ AssertionError

===================== 2 failed, 2 passed in 0.17 seconds ======================


The above result clearly shows the way pytest generates the test result and conveys the failure reason, which is very easy to interpret.


Installation: Just run the below command and it's done…

pip install pytest-bdd

  • Example:

Feature: Verification of gmail login page
Scenario Outline: Verify user can login with valid username and password

Given I navigates to login page
And I enter valid username
And I enter valid password
When I clicked on submit button
Then I should get login successful message

Examples:
|user_name |password      |
|xyz       |12345         |




Automation Analyst



Companies are searching for ways to decrease expenses and increase revenue, as global competition continues to shrink profit margins. At the same time, companies are overwhelmed with data generated by their operations and actions. While the rapid surge in information is creating new challenges for some companies, others are using the same information to drive higher profits. These smart companies are using predictive analytics to gain a competitive advantage by turning data into knowledge.

Predictive intelligence is achieved by business users using statistics and text analytics along with data mining, and this is accomplished by unveiling relationships and patterns from unstructured and structured data. Structured data generally has a relational data model and describes real-world objects, whereas unstructured data is generally the opposite, as it has no pre-defined data model and is usually text. Dealing with unstructured data usually involves text analysis and sentiment analysis. [1]

Why Use Predictive Analytics?

Predictive Analytics (PA) can be used in any industry, including marketing, financial services, workforce, healthcare and manufacturing. It is mainly used for customer management (customer acquisition and customer retention) and fraud and risk management, to increase revenue, improve current operations and reduce associated risks. Almost every industry makes profits through selling goods and services. The credit card industry has been using models that predict the response to a low-rate offer for decades. Nowadays, due to the sudden growth in e-commerce, companies are also using online behavior and customer profile information to promote offers to customers.


Figure – 1. Response/Purchase PA model

The figure given above, borrowed from the article referenced as [5], depicts the response/purchase PA model. This model represents the customer lifecycle, starting from the old/former customer, through the established customer, to the new/prospect customer. The scores derived using these models can be used to expand the customer acquisition ratio, lower expenses, or both. Below are real-world instances where the response/purchase PA model is currently used in day-to-day business decision making.


Many banks are using PA to foresee the probability of a fraudulent transaction before it gets authorised, and PA provides an answer within 40 milliseconds of the transaction's commencement.


One leading office supply retailer uses PA to determine which products to stock, when to run promotional events and which offers are most suitable for consumers; in doing so, a 137% surge in ROI was observed.


One top-notch computer manufacturer has used PA to predict the warranty claims associated with computers and their peripherals, and in doing so has been able to bring the company's warranty cost down by 10% to 15%.

Talent Acquisition & Resource Management

According to a survey conducted by Radius, start-up companies such as Gilds, Entelos, and many others are using PA to find the right candidates for a job. Candidate selection uses keywords from job descriptions, and the search is not restricted to LinkedIn: they also target blog posts and forums that include candidates' skills. In some instances, finding a candidate with a particular skill is hard, such as mastery of a new programming language, and in such cases a PA approach can help discover candidates whose skills are closely related to the requirement. There are PA algorithms that can even predict when a hopeful candidate (one who is already employed) is likely to change jobs and become available. [3]

Predictive Analytics Process

Predictive Analysis Process

Figure – 2. Predictive Analytics Process

A typical predictive analytics process can be depicted as shown in the figure given above, borrowed from the article referenced as [6]; only the main stages of the process are briefly outlined here:

  1. Define Project: In this step the project outcomes, deliverables, scope and business objectives are defined, and the data sets to be used for analysis are identified.
  2. Data Collection: The data for PA is generally collected from multiple sources. This provides the user with a complete view of customer interactions.
  3. Data Analysis: This is the vital and critical step of the PA process: data is analysed to identify trends, perform imputation and outlier detection, and identify meaningful variables, in order to discover information that can help business users take the right decisions and arrive at conclusions.
  4. Statistics: In the PA process, statistical analysis helps validate assumptions and test them using standard statistical models, which include analysis of variance, chi-squared tests, correlation, factor analysis, regression analysis, time series analysis, multivariate and covariate analysis and many more techniques.
  5. Modelling: Predictive modelling gives the user the ability to automatically create accurate predictive models about the future. Machine learning, artificial intelligence and statistics are the main tools used. The model is chosen based on testing, validation and evaluation, using detection theory to estimate the probability of an outcome for a given set of input data.
  6. Deployment: This is the phase where the user deploys the analytical results into the everyday decision-making process and automates decision making. Depending on the requirements, this phase can be as simple as generating a report or as complex as implementing a data mining process. A PA model can be deployed in offline/online mode depending on data availability and decision-making requirements. It generally assists a user in making informed decisions.

Predictive Model Implementation

In this blog, we will target a business problem associated with the retail industry to learn more about how exactly PA works. SportsDirect (a fictitious company) is an online sports retailer and wants to come up with a strategy for selling more sports equipment to existing customers to increase total revenue. To achieve this, the company tried several different marketing campaign programs; however, they wasted time and money, and the store didn't see any outcome from them. The store has now become very keen to identify which customers are eager to buy more sports equipment, what products they are most likely to buy and what effort would be required to make them purchase sports equipment and products. Based on these insights, the marketing team needs to plan their next customer offer. The store has stored several years of data online, including sales and customer data, which will play a vital role. The store has decided to put an IBM SPSS predictive-analytics solution into action.

To improve the accuracy of analysis and prediction, the store needs to build and deploy a predictive model. This model will provide suggestions for offers on specific products for particular sets of clients. Building and deploying this model requires thorough participation from an administrator, a data architect and an analyst: the administrator will configure, manage and control access to the analytic environment, the data architect will provide the data, and the analyst will use the data to create the model itself.

For the first step, the team of analyst, administrator and architect would discover and locate all the required information. A significant subset of the store's historical sales and customer information will be used to build the model. However, building the model from historical data alone doesn't give the store a comprehensive view of its existing customers. Thus, the business analyst suggests surveying existing customers' preferences and opinions regarding sports equipment. The store would use IBM SPSS Data Collection to pull together the additional data by creating a customer survey, gathering information from completed surveys and managing the resulting data. To determine customer buying habits, patterns and preferences related to sports equipment, the survey data will be fed into the model and associated with the historical data.

SportsDirect would use IBM SPSS Text Analytics software to analyse and identify the valuable customer sentiment and product feedback that could lie within the text of thousands of blog entries and emails customers have sent to its service center. This information can be used to gain insights into customer buying patterns, habits and opinions about products, and hence can be fed into the model. The figure given below, borrowed from the article referenced as [1], demonstrates the steps required to build a predictive model.

Steps for building predictive model

Figure – 3. Steps for building Predictive Model

To generate a model, an algorithm and a complete set of data are required, wherein the algorithm is used to mine the data and identify the trends and patterns that lead to predicted outcomes. The analyst will perform market-basket analysis using an association algorithm, which will automatically discover the combinations of products that sell well together and will suggest specific offers for distinct clients.
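As an illustrative sketch of what an association algorithm computes (the transactions and product names are invented; real tools like IBM SPSS use far more sophisticated mining), a rule such as “racquet grips → tennis balls” is scored by its support and confidence:

```python
def support(transactions, items):
    # Fraction of all transactions containing every item in "items".
    items = set(items)
    hits = sum(1 for t in transactions if items <= set(t))
    return hits / len(transactions)

def confidence(transactions, antecedent, consequent):
    # P(consequent | antecedent): how often the consequent appears
    # in transactions that already contain the antecedent.
    return (support(transactions, set(antecedent) | set(consequent))
            / support(transactions, antecedent))

# Invented purchase histories for the SportsDirect example.
baskets = [
    {"racquet grips", "tennis balls"},
    {"racquet grips", "tennis balls", "towel"},
    {"racquet grips"},
    {"running shoes"},
]

print(support(baskets, {"racquet grips"}))                       # 0.75
print(confidence(baskets, {"racquet grips"}, {"tennis balls"}))  # ~0.67
```

A rule with high support and confidence is a candidate for a targeted offer, like the discount scenario described below.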

The next step is building, training and testing the model based on the collected data and the chosen algorithm; the IBM SPSS Modeler workbench is used for this. Now PA includes information that can be applied to real-world customers to determine buying behaviours, predict future buying patterns and identify the best marketing offer for each customer, resulting in increased sales. This modelling process provides an output called ‘scoring’. The sales and marketing team managers use this score as an input to their respective marketing campaigns and decision-making processes. The scoring output generally contains the list of clients who are most likely to purchase a certain type of product. In special cases, a special discount is also offered to persuade a classified set of customers to act swiftly.

To better understand the scoring output of the PA modeller, let's consider tennis players as the customers for these conjectural findings and recommended actions. Tennis players here are customers who have purchased a tennis racquet from SportsDirect in the past. Players living in a hot region purchase, due to the hot weather, three times as many racquet grips in a given time period as players from other regions; however, the same customers buy fewer cans of tennis balls than those residing in other regions. Based upon this discovery, hot-weather customers will be sent an email offer of a 25% discount on their next order if they purchase racquet grips and tennis balls together. The model also provides many other recommendations targeting different types of customers and sports, and it can help with pricing policies, such as reducing the price of a particular product line at the end of the buying season, when demand is generally quite low. [2]


PA focuses on finding and identifying hidden patterns in data using predictive models, and these models can be used to predict future outcomes. It has been acknowledged that predictive models can be built automatically; however, overall business success also requires exceptional marketing strategies and a powerful team, as James Taylor states in [4]: “Value comes only when insights gained from analysis are used to drive improvements in the decision making process.” PA can make a real difference by optimising resources to make better decisions and take actions for the future.

Predictive analytics is currently used in retail, insurance, banking, marketing, financial services, oil & gas, healthcare, travel, pharmaceuticals and other industries. If applied correctly and successfully, predictive analytics can definitely take your company to the next level, for many reasons, including:

  • It can help your organization play to its own strengths while taking full advantage of areas where competitors are falling short.
  • It can help your company limit the distribution of offers and discount codes to only the audience who are about to leave.
  • It can help your company grow beyond increasing sales, providing insights through which the company can improve its core offerings.
  • It can help your company grow its existing base and acquire new customers by enabling a positive customer experience.


[1] Imanuel, “What is deployment of predictive models?” [Online]. Available: [Accessed: Nov. 16, 2016].
[2] Beth L. Hoffman, “Predictive analytics turns insight into action”, Nov. 2011. [Online]. Available: [Accessed: Dec. 8, 2016].
[3] Gareth Jarman, “Future of the Global Workplace: The Changing World of Recruiting”, Sep. 2015. [Online]. Available: [Accessed: Dec. 12, 2016].
[4] Kaitlin Noe, “7 reasons why you need predictive analytics today”, Jul. 2015. [Online]. Available: [Accessed: Dec. 14, 2016].
[5] Olivia Parr-Rud, “Drive Your Business with Predictive Analytics” [Online]. Available: [Accessed: Dec. 14, 2016].
[6] Imanuel, “What is Predictive Analytics?”, Sep. 2014. [Online]. Available: [Accessed: Oct. 14, 2016].

Vishal Prajapati

Senior Business Analyst


When was the last time you spent more than 2 seconds waiting for a page to load? The average user has no patience to wait long for a page to load; users lose interest in a site if they don't get a response quickly. People like fast-responding websites.

The Riverbed Global Application Performance Survey has revealed a major performance gap between the needs of business and its current ability to deliver.

  • According to the survey 98% of executives agree that optimal enterprise application performance is essential to achieving optimal business performance.
  • 89% of executives say poor performance of enterprise applications has negatively impacted their work.
  • 58% of executives specified a weekly impact on their work

Poor app performance impacts every area of the business.

  • 41% cited dissatisfied clients or customers
  • 40% experienced contract delays
  • 35% missed a critical deadline
  • 33% lost clients or customers
  • 32% suffered negative impact on brand
  • 29% faced decreased employee morale

Application performance should be a top priority; every millisecond matters, and a few milliseconds' difference is enough to send users away. Performance optimization saves time, money and valuable resources.

Our team was assigned a critical mission: bring a core business API's execution time down from 21 seconds to 4 seconds. I am sharing my experience from this mission; hopefully it will help you understand the performance monitoring process and optimization techniques. Performance improvement is an ongoing, iterative process. This blog focuses on server-side application tuning. On reading this article you will learn:

  • How to initiate application performance tuning
  • Performance monitoring
  • Identifying optimization areas and optimization techniques

Below is the sequence of steps in performance optimization.

  • Benchmarking
  • Running performance test
  • Establish a Baseline
  • Identify bottlenecks
  • Optimization

The diagram below shows the typical process of running a performance initiative.

Process of Running Performance Initiative


Benchmarking

Benchmarking can be simply defined as “setting expectation”.

An athlete sets a benchmark of running the 200-meter distance in 20 seconds (the athlete is setting an expectation here); similarly, a product owner sets a benchmark for a login API: it must execute in no more than 1000 ms for 15 parallel users. The API will run on 3 application servers under a load balancer, each having 1TB external storage, 12GB RAM and an Intel i7 core processor. The application server connecting to the DB server will have the same hardware configuration as an application server. These are examples of benchmarking. It's very important that the system hardware configuration (HDD, RAM, number of servers), application configuration and acceptable payload be fixed during benchmarking, and that they remain unchanged during all subsequent performance tests. Performance environments are generally replicas of production environments.

Running Performance Test 

The purpose of performance testing is to measure the response time of an application/API with an expected number of users under moderate load. It's generally done to establish a baseline for future testing and/or to measure the savings over that baseline from performance-related code changes. Performance tests can be carried out using tools like SoapUI, LoadRunner, etc. Ensure you have the same configuration and payload as fixed at benchmarking before running the test. You also need performance monitoring tools like AppDynamics, ANTS Profiler, etc. configured to capture the call graph for the executed application/API. These tools help in analyzing and identifying bottlenecks.

Establish a Baseline

A baseline is a system assessment that tells how far we are from the benchmark figures; the current state of the system is the baseline. It is an iterative process; the baseline keeps evolving with code changes.

The athlete currently running 200 meters in 25 seconds is an example of a baseline (the athlete is still 5 seconds behind the benchmark in the example above); a test performed on the login API with the same criteria used for benchmarking takes 1500 ms (still 500 ms behind the benchmark figures mentioned above). This gap between benchmark and baseline needs to be closed by increasing performance so that baseline figures are equal to or less than benchmark figures.

Identify bottlenecks

Increasing performance first requires identifying the bottlenecks. This is a very important part of performance tuning and needs keen observation. The performance monitoring tool gives you a report with a detailed call graph and statement-wise time taken. The call graph needs to be further analyzed and narrowed down to the root cause of performance issues, ensuring no single opportunity goes unnoticed.

Hardware bottlenecks – the objective is to monitor hardware resources such as CPU utilization, memory utilization, I/O usage, load balancer, etc. to see if there is a bottleneck.

Software bottlenecks – monitor the web server (IIS), DB server, etc. to see if there is any bottleneck.

Code bottlenecks – No matter how careful and attentive your developer team is, things are going to happen. Identifying code bottlenecks is a technique: find the code areas that take the most resources or execution time. Finding such code areas opens up more performance opportunities. Below are a few common code bottlenecks to look for.

Identify methods/blocks/statements in the call graph that take a long time to execute.

Find duplicate DB/IO calls in the call graph.

Identify long-running SQL queries/IO operations.
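As a lightweight illustration of hunting for slow methods (the function name and sleep stand-in are invented; a profiler such as AppDynamics or ANTS gives the full call graph), a timing decorator can flag suspects:

```python
import time
import functools

def timed(fn):
    # Wrap a suspect method and report its wall-clock time -- a
    # lightweight stand-in for a profiler's statement-wise timings.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{fn.__name__} took {elapsed_ms:.1f} ms")
    return wrapper

@timed
def fetch_report():
    time.sleep(0.05)  # stand-in for a slow DB/IO call
    return "report"
```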


Optimization

After finding bottlenecks, the next step is to find solutions for the identified bottlenecks.

Hardware optimization – if you see high memory or CPU utilization during a performance test run, analyze the code and find the root cause of the issue. There are many possible reasons behind such issues, e.g. memory leaks, excessive threading, etc.

If you find that multithreading is behind high system consumption, try combining those threads' statements into the main thread to run synchronously. Obviously, this will increase overall execution time. If you can afford that, go ahead and implement the change; otherwise, execute those threads asynchronously on another server to keep application server health under control without compromising overall execution time.

Address any other hardware bottlenecks if found.

Software optimization – analyze the bottleneck and find the root cause. You may sometimes need to involve the respective experts (IIS, database, etc.).

Code optimization –

  • If possible, use an object cache for heavy, non-changing objects.
  • Check whether time-taking statements/methods can be executed asynchronously without violating fair system usage.
  • Use proper indexing on tables to increase query performance.
  • Check the call graph to see if the same method gets called multiple times; if so, apply an appropriate cache mechanism to avoid duplicate DB calls.
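A minimal sketch of the caching point above (the lookup function and call counter are invented for illustration): memoizing a repeated lookup turns duplicate DB calls into memory hits.

```python
import functools

CALL_COUNT = {"db": 0}

@functools.lru_cache(maxsize=None)
def get_customer(customer_id):
    # Stand-in for a DB call; the cache makes repeated lookups for
    # the same id within a request hit memory instead of the database.
    CALL_COUNT["db"] += 1
    return {"id": customer_id, "name": f"customer-{customer_id}"}

# Three lookups, but only one real "DB" call:
for _ in range(3):
    get_customer(42)
```

In a web application the same idea applies per request or via a shared cache layer; `lru_cache` simply keeps the example self-contained.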


Good design and coding practices lead to high-performance applications. Irrespective of the power of the hardware, an application can be inefficient when it is not designed well and not optimized. Many performance problems are related to application design rather than specific code problems. It's very important to have high-performance applications to grow and sustain in this highly competitive market. We see many applications fail when data grows significantly; as data grows, performance becomes crucial, and it's important to keep application performance consistent even as data grows. At Xoriant, we have a specialized team working on performance tuning and monitoring that helps clients tune their critical enterprise applications' performance even with large data sets.

Technical Lead


The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.

-Bill Gates

Mobile apps are a new window to user solutions in IT. With user needs shifting to mobile, the number of mobile apps is increasing, and with it the competition to deliver quality apps. Testing mobile apps is thus becoming a key process before rolling out app releases to users. Mobile test automation is therefore the need of the hour, facilitating thorough testing of mobile apps efficiently and in less time.

Robot Framework is an open-source test automation framework for acceptance test-driven development (ATDD), implemented in Python. It has an ecosystem of test libraries and tools that adhere to Robot Framework's keyword-driven approach. One of the external test libraries for mobile test automation is Appium Library, which uses Appium to communicate with Android and iOS applications. This blog is a walkthrough of how Robot Framework communicates with Appium to bring out the best of both for mobile test automation, with the help of a demo that runs a test suite against a basic Android application.

Robot framework

Robot Framework is a generic test automation framework released under Apache License 2.0. Robot has standard test libraries and can be extended by test libraries implemented either with Python or Java.

Key Features of Robot Framework
  • Business Keyword driven, tabular and easy to understand syntax for test case development
  • Allows creation of reusable higher-level keywords from the existing keywords
  • Allows creation of custom keywords
  • Platform and application independence
  • Support for standard and external libraries for test automation
  • Tagging to categorize and select test cases to be executed
  • Easy-to-read reports and logs in HTML format

Robot framework requires installation of the following on the system:

  • Java (JRE and JDK)
  • Python
  • Robot framework package (pip install)
  • Python IDE (PyCharm)
Appium Library

Appium Library is one of the external libraries of Robot Framework for mobile application testing; it supports only Python 2.x. It uses Appium (version 1.x) to communicate with Android and iOS applications. Most of the capabilities of Appium are framed as keywords, which are easy to understand and convey the purpose of a test case just by reading the script.

Key Features of Appium
  • No recompilation or modification of the app to be tested is required
  • App source code is not needed
  • Tests can be written in any language using any framework
  • Standard automation specification and API

Using Appium Library with Robot Framework for mobile app test automation requires installation of the following on the system:

  • Node.js
  • Robot framework appium library package (pip install)
  • Appium Desktop Client (Appium Server)
  • Android SDK (For Android apps)
  • Xcode (For iOS apps)
Robot – Appium Interaction

A basic flow of robot framework’s interaction with the application under test is illustrated in the following diagram.

Fig 1: Interaction of robot framework with the application under test

Test Suites consisting of test cases written using robot’s keyword-driven approach are used to test the mobile application (Android/iOS). Appium server, robot’s Pybot and Appium-Python Client play a significant role in this interaction.

Appium Server – Appium is an open-source engine running on Node.js. It is mainly responsible for the interaction between the app’s UI and the Appium Library commands, and it needs to be up and running to facilitate this interaction.

Pybot – This is a Robot Framework module used to trigger test scripts written in the Robot Framework format. Pybot reads the framework files from the code base and executes the tests by interacting with Appium Library. On completion of a test case/suite execution, Pybot generates report and log files with complete details of the test run.

Appium-Python Client – This client facilitates the interaction between Appium Library and the Appium server using the JSON Wire Protocol. It initiates a session with the Appium server, resulting in a POST /session request carrying a JSON object that describes the desired capabilities. The Appium server then starts an automation session and responds with a session ID, which is used in all further commands sent to the server.
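As a sketch of that handshake, the snippet below builds the JSON object that goes into the POST /session body; the capability values (device name, app package/activity) are illustrative assumptions, and the actual connection via the Appium-Python Client is only shown in a comment since it needs a running Appium server:

```python
import json

# Hypothetical desired capabilities for an Android calculator session;
# the device and app identifiers here are illustrative.
desired_caps = {
    "platformName": "Android",
    "deviceName": "emulator-5554",
    "appPackage": "com.android.calculator2",
    "appActivity": "com.android.calculator2.Calculator",
}

# The JSON object sent in the body of the POST /session request.
session_request = json.dumps({"desiredCapabilities": desired_caps})
print(session_request)

# With an Appium server running, the Appium-Python Client would open
# the session like this (not executed here):
#   from appium import webdriver
#   driver = webdriver.Remote("http://localhost:4723/wd/hub", desired_caps)
```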

This is illustrated in the below flow diagram.

Fig 2: Flow Diagram of Robot – Appium Interaction



The following example tests the Calculator app on an Android device using Robot Framework’s Appium Library.

Test Suite

A test suite is a .robot file which can be written and executed using a Python IDE. The basic skeleton of a test suite written in Robot Framework syntax consists of the following sections.

Fig 3: Basic skeleton of Test Suite

  • Settings – This section consists of the test suite documentation, imports of libraries and resource files, suite and test level setup and teardown. (Fig 4)
  • Variables – This section consists of all the variable declarations for the variables used in the test suite. (Fig 4)
  • Keywords – This section consists of higher-level keywords composed from built-in keywords of Robot Framework’s standard libraries and Appium Library. (Fig 5)
  • Test Cases – This section consists of all the test cases that belong to the test suite. (Fig 5)
Fig 4: Settings and Variables section of test suite

Fig 5: Keywords and Test Cases section of test suite
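Since the figures are not reproduced here, a minimal sketch of such a suite may help; the locator XPaths, keyword names, and capability values below are illustrative assumptions, not taken from the original figures:

```robotframework
*** Settings ***
Documentation     Test suite for the Android Calculator app
Library           AppiumLibrary
Suite Setup       Open Calculator App
Suite Teardown    Close Application

*** Variables ***
${REMOTE_URL}     http://localhost:4723/wd/hub

*** Keywords ***
Open Calculator App
    Open Application    ${REMOTE_URL}    platformName=Android
    ...    deviceName=emulator-5554
    ...    appPackage=com.android.calculator2
    ...    appActivity=com.android.calculator2.Calculator

Add Two Numbers
    Click Element    xpath=//*[@text='1']
    Click Element    xpath=//*[@text='+']
    Click Element    xpath=//*[@text='2']
    Click Element    xpath=//*[@text='=']

*** Test Cases ***
Addition Should Work
    Add Two Numbers
```

The higher-level keyword `Add Two Numbers` is built from Appium Library’s `Click Element`, which is the reusability that the Keywords section provides.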


UIAutomator is a tool used to obtain the locators of all the elements on a particular Android application screen; it is part of the Android SDK. The locators for the calculator app were obtained as XPaths using UIAutomator. (Fig 6)

Fig 6: UIAutomator screenshot for Calculator App for Android

Test Reports and Logs

The above test suite can be executed using the following command from the terminal:

pybot -d Results\TestSuite  TestSuite.robot

On execution of the test suite, report and log files are created as HTML documents. These files contain a detailed summary of the test case execution and all the related statistics. (Fig 7, 8)

Fig 7: Report of the execution of Test Suite for Calculator App

Fig 8: Log of the execution of Test Suite for Calculator App

In conclusion, the Appium Library of Robot Framework facilitates automation of test cases for mobile applications with a simple tabular syntax that is easy to read and platform independent, without altering the source code of the application under test. The keyword-driven approach of Robot Framework ensures reusability and readability of test cases, making the automation framework robust and tester friendly.

Sayalee Pote

Software Engineer