Product Engineering Outsourcing, Tech Talk


The Internet of Things (IoT) encompasses much more than the technology associated with smart homes. It holds remarkably powerful applications for the future business world, with built-in capability for data analysis that lets companies function more cost-efficiently and productively. IoT can also enable digital transformation and drive new business models and value in companies of all sizes, across almost all industries. By connecting people, systems, processes and assets, business leaders can make better-informed decisions that improve customer experience and competence, reduce costs and generate more revenue.


Figure 1 Internet of Things (IoT)

Figure 1 above, borrowed from the article referenced as [5], illustrates the term IoT: devices are connected to the internet and transmit critical data back into the cloud for further analysis, because the data generated by IoT devices can be a game changer for organisations. According to Gartner's market research, spending on IoT technology increased by 30% from 2015 to 2016. According to the McKinsey Global Institute, IoT can potentially create an economic impact of 2.7 trillion to 6.2 trillion dollars annually by 2025. This will not only open up a great number of business opportunities for big businesses to create new value in a highly digital, data-driven future, but will also give small businesses the same advantage. [1]

IoT Today

Experts affirm that IoT implementation is still in its early stages. Among IoT devices, consumers are mainly aware of Nest's Learning Thermostat, an IoT-compatible smart device that learns and adapts to consumer behaviour patterns and the change of seasons to program itself for ideal efficiency and ease. The feature that makes IoT such a powerful tool for individuals and businesses is its ability to learn and make decisions without any human involvement. Many organisations are already using networked sensors and products for a variety of purposes, including modernising the manufacturing process, better understanding consumer needs, tracking shipments and making better decisions.

Ryan Lester, director of IoT strategy at an IoT platform company, says he sees three main use cases for IoT in the organisations he works with.

  1. The first is connectivity to enable new features. This connectivity allows the capture of telemetry data: telemetry is the process of automatically measuring and wirelessly transmitting data from remote sources. The following use cases are also achieved using telemetry data.
  2. The second is better service, such as identifying when a product will fail or require new parts.
  3. The third is periodic replacement. For instance, an air filter company can automatically send a replacement based on the customer's usage.
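As a rough sketch of the telemetry use case, a device might package each reading as a small JSON message before transmitting it to the cloud. The field names and device ID below are purely illustrative, not from any particular IoT platform:

```python
import json
import time

def make_telemetry_payload(device_id, temperature_c, filter_usage_pct):
    """Package one sensor reading as a JSON telemetry message.

    Field names and units here are illustrative assumptions; real IoT
    platforms define their own message schemas.
    """
    return json.dumps({
        "device_id": device_id,
        "timestamp": int(time.time()),
        "temperature_c": temperature_c,
        # Usage data like this is what enables the periodic-replacement case.
        "filter_usage_pct": filter_usage_pct,
    })

payload = make_telemetry_payload("air-filter-0042", 21.5, 87.0)
record = json.loads(payload)
```

A real device would transmit such a payload to the platform's ingestion endpoint, typically over MQTT or HTTPS.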

In recent years, retailers have started using IoT to gain a complete understanding of how consumers interact with products in the retail environment. Manufacturers have also started using IoT to develop better manufacturing practices and processes by networking essential machines and using robotics throughout the process. Companies have installed sensors on manufacturing machines, allowing them to process the data and identify trends of poor quality. IoT can be used to monitor the whole product lifecycle, from creation to end point. For instance, networked manufacturing monitors the creation of the product to ensure quality and production efficiency. Next, IoT can help track and coordinate shipping logistics, ensuring efficiency, speed and accuracy. Once products are in the distribution centre, information about inventory and organisation, along with the interactions between automatic systems such as stock-picking robots, can also be captured by an IoT system. IoT can also deliver a more personal and customised customer experience by providing data about maintenance and user interaction. Thus, IoT will help businesses to a great extent to boost loyalty and create lifetime customers.

By understanding more about a person, their behaviour and their life, and by moving from a one-time transaction to selling them a product as a service, a unique customer experience can be delivered that gives customers power and control. As you understand a customer's challenges, the possibility of delivering a better product increases.

The Future of IoT

Business News Daily surveyed industry experts about how IoT technology may grow and how businesses will incorporate these systems in the future. Below are some of their predictions about the future of IoT.

Predictions about consumer behaviours and needs

Justin Davis states that as IoT devices start storing data about our daily activities, they will build a complete understanding of our lives. The information collected from all the devices will be merged by a software platform, and humans will interact with devices through virtual assistants. For instance, the virtual assistant of a coffee machine can remind you that you are about to run out of coffee and, because it knows your coffee brand and the amount you pay for it, may recommend a different brand at the same or lower cost.

A famous clothing brand was facing a major issue with manual inventory counts: accuracy levels were 60 to 70 percent, which led to missed sales and disappointed customers due to out-of-stock situations both in store and online. The brand also lacked clear visibility into what customers would buy from the shelf, i.e. how often items were considered or tried in a fitting room before being sold (the conversion ratio). By implementing IoT solutions such as Smart Cosmos, RAIN RFID tags and network-connected RFID sensors, METRICS gave the clothing giant end-to-end supply-chain visibility, while real-time analytics provided consolidated reports through which store and shelf availability of merchandise could be identified and greater customer satisfaction delivered. The benefits don't end there: in the long run, deploying METRICS in stores can deliver major improvements in sales, cost savings and gross margin.

Personalized one-to-one marketing

Businesses using interactive displays that answer consumer needs in real time will be successful. Interactive displays can help an organisation create its own model sets of products and walk customers through a variety of products and solutions. One of the best examples is Nike, the footwear giant. Your phone can also provide detailed product information, including pricing, simply by pointing it at any product in the store and using the store's interface.

Continued refinement of business operations

IoT in conjunction with big data analytics will not only revolutionise traditionally managed businesses but also result in more effective and efficient use of resources. Service companies in particular can make the best use of IoT-based solutions when sending technicians to monitor and identify issues at the customer's location. Small and medium-sized enterprises will benefit greatly from IoT, as it bridges the demand-supply gap by integrating inventory management and customer relationship management systems. Thus, in a world where everything is connected and devices communicate intelligently with each other, we can say that IoT may become the internet of everything instead of the internet of things.

New Business Opportunities

IoT not only offers greater efficiency but also opens many doors to new business opportunities. It has great potential to change the way companies and customers approach the world, though both will have to adapt to new devices and services in this changing, ultra-connected space. The current wave of IoT embraces billions of devices and will reach every business domain, including retail, manufacturing, healthcare and other sectors. Cisco's Internet group forecasts that approximately 50 billion IoT devices will be in use within the next 20 years. [2]


Figure 2 Estimated Number of Installed IoT devices by Sector

Figure 2 above, borrowed from the article referenced as [4], depicts the estimated number of installed IoT devices by sector. The key findings from the estimate are given below:

  • It has been predicted that by 2021, the IoT market will be the largest device market in the world: double the size of the smartphone, tablet, wearable (fitness trackers, smart watches) and computer markets combined.
  • The IoT business will add almost 1.7 trillion dollars to the global economy by 2021, including hardware and software installation costs and management services.
  • It has also been predicted that by 2021, government will be the leading sector for IoT device shipments, as the government and home sectors gain momentum.
  • The topmost benefits offered by IoT will be increased efficiency and reduced costs, as IoT promises to improve efficiency within the home, city and workplace by giving the utmost control to the IoT device user.

These IoT-connected devices include wearables such as fitness trackers; smart homes and offices, where connected devices can be lights, thermostats, TVs, refrigerators, weather sensors, pollution sensors and security systems; and cars, where connectivity links the engine and parking sensors. Umbrellium's Thingful, the world's first search engine designed specifically for public IoT devices, can provide a geographical index of exactly where things are, who owns them and how and why they are used.

In the current global market, Google and Apple are two of the major companies in IoT. Google's 'Google Glass' provides fast access to information through voice commands to the microphone built into the smart eyewear device. Apple has developed a smart framework called HomeKit, which can be used to control devices connected inside the home. Giri Krishna affirms that IoT can surely increase business efficiency: it is among the best recent innovations in science and technology, and it has benefited and will continue to benefit companies, as it not only supports but also offers a new angle on a comfortable lifestyle.

IoT Challenges

Although IoT offers huge, wide-ranging possibilities, there are a few challenges associated with IoT deployment.

IoT Security Challenges

Security and privacy remain the major concerns associated with IoT. IoT means more data and more connected devices, which means more opportunities for hackers and cybercriminals to steal or misuse private information. When security systems are fully automated, hackers who break in can lock the entire system. Therefore, the security risks associated with IoT must be taken into account by every business, and the industry as a whole will have to keep developing strong, creative remedies as IoT expands. Currently, only a few standards (regulations) govern how IoT devices run. However, groups comprising electronics, global industrial and tech companies are working to standardise IoT and solve its major concern: security. [4]

IoT Scalability

Companies that were early adopters of IoT products and technologies in their business environments are facing real scalability challenges. The necessity of highly specialised, customised solutions makes IoT even more difficult to scale. Because of this, IoT deployment is moving at a much slower pace than anticipated. Reports also state that many organisations are still at the proof-of-concept (POC) stage for IoT, despite having worked on it for several years.

To better understand the scalability challenge, assume that a manufacturing company would like to deploy IoT technology to gather better insights about its overall operations, improve its manufacturing efficiency and modernise its operations. Also assume that the manufacturer already has multiple manufacturing plants with different kinds of equipment and multiple workflows. In the computer world, devices and software are referred to as legacy when they turn five years old; in the manufacturing world, even 35-year-old equipment may be fully functional. It is very difficult to deal with manufacturing equipment of different ages at one site, and it gets even more challenging across a number of dissimilar sites. Hence, it becomes extremely challenging to find ways to reliably gather a consistent set of data to analyse across all types of devices.

Modern equipment does offer a wide range of data, but upgrades are typically done by outside specialists. A simple solution would be to replace all the old equipment with new, but this requires high capital and is not a realistic option.

IoT Adoption

One challenge is getting customers to trust and adapt to the new technologies. IoT is something of a buzzword, and some people don't want their personal information shared, which can create an adoption challenge. This applies less to businesses: companies have already started to implement IoT on a mass basis, and businesses will adopt IoT with far less hesitation.

IoT Maintenance

Another challenge associated with IoT is building and maintaining IoT systems. According to Lester, until IoT systems are properly built and open enough to share and analyse data, most of the information will not be very useful to organisations looking to profit from their own networks and sensors. Lester also states that organisations are so busy working out the technical part of IoT that they are missing business opportunities. A majority of companies say it is very important to connect to production data; however, only 51 percent of companies actually gather the data, and less than one-third are able to analyse it and use it for decision making. There is a clear gap, and to bridge it, companies will have to bring already organised data into their business systems rather than use a separate, dedicated IoT system. This will give easy access to the people who need to use and analyse the data daily.


It is usual for many businesses, especially smaller ones, to adopt technology very late. However, IoT can add value to businesses of all sizes in areas such as customer satisfaction, the bottom line and other important KPIs. Companies will have to remain proactive in building a concrete plan to practically deploy IoT. It is highly recommended to invest in IoT technologies such as sensors, data intelligence and the infrastructure to support the connectivity and data. There will be unexpected challenges that occur in real time and are hard to prepare for. Nevertheless, the conclusion is to proceed with caution and strategy in adopting a data-driven business model, which can create new and improved insights into customer behaviour, resulting in innovation in product design, customer management and product delivery.


[1] Samiksha Jain, “Make your business thrive with Internet of Things”, Oct. 2016, Available at: [Accessed: 24 Jan. 17].

[2] Adam C. Uzialko, Business News Daily, “How the Internet of Things Will Make Your Business Better at Customer Service”, Aug. 2016, Available at: [Accessed: 24 Jan. 17].

[3] Nicole Fallon, Business News Daily, “Internet of Things: How Businesses Can Prepare and Adapt”, Jul. 2014, Available at: [Accessed: 24 Jan. 17].

[4] John Greenough, “The ‘Internet of Things’ Will Be The World’s Most Massive Device Market And Save Companies Billions Of Dollars”, Oct. 2014, Available at: [Accessed: 24 Jan. 17].

[5] Waypost, “Can Your Business Benefit from the Internet of Things?”, May 2016, Available at: [Accessed: 24 Jan. 17].

Vishal Prajapati

Senior Business Analyst


Nowadays, web applications are an integral part of our day-to-day life due to their 24x7 availability and the huge amounts of data they put at our fingertips. As more vital data is stored in web applications and the number of transactions on the web increases, proper security testing of web applications is becoming very important.

The prime objective of security testing is to identify vulnerabilities in the system and to ensure that data is protected from hackers and intruders.

Most Common Types of Attacks causing Web Vulnerabilities

Injection Flaws [A1]: Injection flaws result from a failure to filter untrusted input. Injection attacks take various forms, such as passing unfiltered data to the database (SQL injection), to the browser (XSS) or to the LDAP server (LDAP injection). They allow an attacker to submit malicious DB queries and pass commands directly to a database or server. To prevent such injections, application input fields should filter what they accept, preferably validating against a whitelist rather than relying on a blacklist.
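As a minimal illustration of keeping untrusted input out of the SQL statement itself, parameterized queries with Python's built-in sqlite3 module treat the input strictly as data (the table and rows here are invented for the demo):

```python
import sqlite3

# In-memory database for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name):
    # Parameterized query: the driver treats `name` strictly as data,
    # so input like "' OR '1'='1" cannot alter the SQL statement.
    cur = conn.execute("SELECT name, role FROM users WHERE name = ?", (name,))
    return cur.fetchall()

safe = find_user(conn, "alice")          # matches the one real row
attack = find_user(conn, "' OR '1'='1")  # no rows: the payload is just a literal
```

Had the query been built by string concatenation, the second call would have returned every row in the table.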

Broken Authentication & Session Management [A2]: Broken authentication and session management attacks attempt to retrieve passwords, user IDs and account details. Some common causes are:

  • The URL may contain the session ID, which will leak in the referrer header
  • Passwords may be unencrypted or hard-coded
  • Session IDs may be predictable
  • Session timeouts are not implemented over HTTP or SSL

There are numerous steps developers can take to prevent these attacks, including session expiration, login expiration and other strategies such as two-factor authentication and forcing users to change their password after a certain duration.
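One of the steps above, session expiration, can be sketched in a few lines; the 15-minute inactivity window is an assumed policy, not a standard value:

```python
import time

SESSION_TIMEOUT = 15 * 60  # 15 minutes of inactivity; an assumed policy

class Session:
    def __init__(self):
        self.last_seen = time.time()

    def is_expired(self, now=None):
        # A session idle longer than the timeout must be rejected server-side.
        now = time.time() if now is None else now
        return (now - self.last_seen) > SESSION_TIMEOUT

    def touch(self):
        # Refresh the inactivity window on each authenticated request.
        self.last_seen = time.time()

s = Session()
fresh = s.is_expired()                           # just created, still valid
stale = s.is_expired(now=s.last_seen + 16 * 60)  # idle 16 minutes, expired
```

Real frameworks implement this for you; the point is that expiry is enforced on the server, not trusted to the client.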

Cross Site Scripting (XSS) [A3]: Cross-site scripting (XSS) is a type of vulnerability in which, when information is sent to web service providers such as banks or online stores, an attacker can interrupt the transaction and extract valuable information. It works by letting attackers inject client-side script into web pages viewed by other users, then tricking a user into clicking a crafted URL. Once executed by the other user's browser, the code can change the website's behaviour and steal personal data.

Developers should use existing security control libraries, such as OWASP's Enterprise Security API or Microsoft's Anti-Cross Site Scripting Library. They should also ensure that any client input is checked, filtered and encoded before being passed back to the user.
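The encode-before-output step can be illustrated with Python's standard html.escape; real applications would normally rely on their template engine's auto-escaping instead:

```python
from html import escape

def render_comment(user_input):
    # Encode user-supplied text before echoing it into HTML so that an
    # injected <script> tag is rendered as inert text, not executed.
    return "<p>" + escape(user_input) + "</p>"

out = render_comment('<script>alert("xss")</script>')
# The angle brackets arrive in the page as &lt; and &gt;, so the browser
# displays the payload instead of running it.
```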

Insecure Direct Object Reference [A4]: Poor application design in which authorization levels are not sufficiently checked, so users can gain administrative access to system data. For example, if a user's account ID is shown in the page URL, an attacker may be able to guess another user's ID and resubmit the request to access their data, provided the ID is a predictable value.

The best ways to prevent this vulnerability are to create user IDs randomly using UUIDs and to authenticate the user each time they try to access sensitive files or content.
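A sketch of the random-ID idea using Python's standard uuid module:

```python
import uuid

def new_account_id():
    # uuid4 IDs are random (122 bits of entropy), so an attacker cannot
    # enumerate neighbouring accounts the way they could with 1001, 1002, ...
    return str(uuid.uuid4())

a = new_account_id()
b = new_account_id()
```

Note that random IDs alone are not enough; the server must still authorize every access to the referenced object.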

Security Misconfiguration [A5]: The primary cause of this vulnerability is misconfiguration of the infrastructure that supports a web application. Common issues include default usernames such as “admin” and passwords such as “password” or “123”. Unattended web pages and services left running on the server can also cause such flaws.

This can be prevented by educating staff about security and privacy, implementing both as a priority at work, and providing adequate training.

Sensitive Data Exposure: This vulnerability occurs when sensitive data such as user IDs, passwords, session IDs and cookies are not encrypted and appear in browser URLs.

The following are preventive measures against this vulnerability:

  • Sensitive data should be encrypted at all times, both in transit and at rest, by using HTTPS
  • Payment transactions should be processed by a payment processor such as Stripe or Braintree
  • All passwords should be hashed before storage using a slow hashing utility such as bcrypt
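As a sketch of the password-hashing point, the standard library's PBKDF2 is used here in place of bcrypt (which is a third-party package); both are deliberately slow hashes, and the iteration count below is illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    # PBKDF2-HMAC-SHA256 from the standard library; a unique random salt
    # per password defeats precomputed rainbow tables.
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=200_000):
    _, digest = hash_password(password, salt, iterations)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("s3cret")
ok = verify_password("s3cret", salt, stored)
bad = verify_password("guess", salt, stored)
```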

Missing Function Level Access Control [A6]: An authorization failure causes this vulnerability. It exists when a website has hierarchical or tiered user accounts and, depending on the account's privileges, each user may access only a certain level of the application.

Whenever a valid user sends a request, the application verifies their access and privileges and sends them an approval token. However, for untrusted, anonymous users, administrative functions become targets, as they are prone to unauthorized use.

To prevent it, authorization must be performed for every server-side call.
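A sketch of server-side, per-call authorization as a Python decorator; the role names and functions are hypothetical:

```python
from functools import wraps

class Forbidden(Exception):
    pass

def require_role(role):
    """Check authorization on EVERY server-side call, never trusting
    the client or hidden UI elements alone."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", []):
                raise Forbidden(fn.__name__)
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_account(user, account_id):
    return f"deleted {account_id}"

admin = {"name": "alice", "roles": ["admin"]}
guest = {"name": "bob", "roles": []}

result = delete_account(admin, "42")  # permitted
try:
    delete_account(guest, "42")       # rejected server-side
    blocked = False
except Forbidden:
    blocked = True
```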

Cross Site Request Forgery (CSRF or XSRF) [A7]: This is one of the most prevalent attacks from online scammers and spammers, in which users are manipulated into submitting sensitive requests or information through a forged website. Attackers typically warn the user that their “account has been suspended” or their “password has changed” to push them into submitting their information through the forged site.

Attaching CSRF/XSRF tokens to the session and validating them on every HTTP request prevents this vulnerability.
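A minimal sketch of the token approach: issue an unpredictable per-session token, embed it in each rendered form, and compare it on every state-changing request (the plain dict below stands in for real session storage):

```python
import hmac
import secrets

def issue_csrf_token(session):
    # Generate an unpredictable per-session token and store it server-side;
    # the same value is embedded in each rendered form.
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def check_csrf(session, submitted):
    # Reject any state-changing request whose token does not match.
    expected = session.get("csrf_token", "")
    return hmac.compare_digest(expected, submitted)

session = {}
token = issue_csrf_token(session)
valid = check_csrf(session, token)          # genuine form submission
forged = check_csrf(session, "attacker-guess")  # cross-site forgery fails
```

A forged cross-site request fails because the attacker cannot read the victim's token.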

Denial of Service (DoS) or Distributed Denial of Service (DDoS) [A8]: These are attempts to flood a site with external requests, making the site unavailable to users. DoS attacks usually target specific ports, IP ranges or entire networks, but can be aimed at any connected device or service.

A denial-of-service attack is one computer with an internet connection attempting to flood a server with packets. A DDoS attack uses many widely distributed devices attempting to flood the target with hundreds, often thousands, of requests.

Main DDoS attacks are:

  • Volume Attacks where the attack attempts to overwhelm bandwidth on a targeted site.
  • Protocol Attacks where packets attempt to consume server or network resources.
  • Application Layer Attacks where requests are made with the intention of crashing the web server by overwhelming the application layer.

Unvalidated Redirects & Forwards [A9]: This is again an input-filtering issue, in which a web application accepts unverified input that affects URL redirection and redirects users to malicious websites. In addition, hackers can alter automatic forwarding routines to gain access to sensitive information.

Summing up:

Top-N vulnerability lists may initially appear to be independent data sets, but all of these vulnerabilities are interwoven, and one can lead to another. Hence it is vital to understand the application security landscape when deciding on a security testing approach to reduce risk. This can be achieved by combining multiple assessment approaches rather than depending on a single traditional one: code review/static analysis, threat modelling, and application-specific assessment methodologies for mobile or embedded software, to get a more comprehensive picture of your software's security threats.

Sr. Software Engineer


Burn-down charts are commonly used for sprint tracking by agile practitioners. The most effective and widely used method is to plot remaining effort against remaining time; by doing so, teams can manage their progress.

At any point in a sprint, the remaining effort in the Sprint Backlog can be summed. The team tracks this remaining effort at every Daily Scrum to show progress toward the Sprint Goal.

The Product Owner calculates the total work remaining at least at every Sprint Review, compares it with the amount at previous reviews, and checks progress toward finishing the projected work by the desired time for the goal.

How To Create Burn-Down Chart

The very first step is to break each task down into sub-tasks. This is done during the sprint planning meeting. Each task should have working hours associated with it (ideally not more than 12, roughly two days' work at six hours per day), which the team agrees on during the planning meeting.
Once the task breakdown is done, the ideal burn-down chart is plotted. This chart reflects progress assuming that all tasks and their sub-tasks are accomplished within the sprint at a uniform rate (refer to the red line in the figure below).

Many agile tools (JIRA, Rally, Mingle, etc.) have built-in features for burn-down charts. However, in its simplest form a burn-down chart can be plotted and maintained in a spreadsheet. The sprint cycle (dates) is plotted on the X axis, while remaining effort is plotted on the Y axis.

Refer to the example below:
Duration of sprint – 2 weeks (10 working days)
Size of team – 7
Time (hours/day) – 6
Total capacity – 10 days × 7 members × 6 hours = 420 hours

On Day 1 of the sprint, once the task breakdown is in place, the ideal burn-down will be plotted as below:


The Y axis depicts the total hours in the sprint (420 hours), which should burn down to zero by the end of the sprint. Ideal progress is shown by the blue line, which assumes all tasks will be completed at a uniform rate by the end of the sprint.
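The ideal line can be computed directly from the example's numbers (10 working days, 7 members, 6 hours/day); this sketch returns the remaining hours at the start of each day:

```python
def ideal_burndown(days, members, hours_per_day):
    """Remaining effort at the start of each day, assuming a uniform burn rate."""
    capacity = days * members * hours_per_day
    burn_per_day = capacity / days
    # Point 0 is the start of the sprint; point `days` is the end (zero left).
    return [capacity - burn_per_day * d for d in range(days + 1)]

line = ideal_burndown(days=10, members=7, hours_per_day=6)
# Starts at the full 420-hour capacity and drops 42 hours per day to zero.
```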

How To Update Burn-Down Chart

Each member picks up tasks from the task breakdown and works on them. At the end of the day, they update effort remaining for the task, along with its status.

Refer to the example below: the total estimated effort for Task 1 is 10 hours. After spending 6 hours on the task, if the developer thinks they require another 4 hours to complete it, the “Effort Pending” field should be updated to 4. The requirements team has completed its task, hence its status is “Closed” and “Effort Spent” is 6. The QA team has not yet finished its task, hence its status is “In Progress” and “Effort Pending” is 12.
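The daily update can be sketched with a small data structure; the task records below are hypothetical, mirroring the example, and the key point is that “remaining” is re-estimated rather than derived from estimate minus spent:

```python
# Hypothetical task records mirroring the example above.
tasks = [
    {"name": "Task 1 (dev)", "estimate": 10, "spent": 0, "remaining": 10, "status": "Open"},
    {"name": "Requirements", "estimate": 6, "spent": 0, "remaining": 6, "status": "Open"},
    {"name": "QA", "estimate": 12, "spent": 0, "remaining": 12, "status": "Open"},
]

def update(task, spent_today, remaining_now):
    # Remaining effort is RE-ESTIMATED by the owner, not computed
    # as estimate minus spent.
    task["spent"] += spent_today
    task["remaining"] = remaining_now
    task["status"] = "Closed" if remaining_now == 0 else "In Progress"

update(tasks[0], spent_today=6, remaining_now=4)  # dev thinks 4h are left
update(tasks[1], spent_today=6, remaining_now=0)  # requirements done

# The sum of remaining effort is the day's point on the burn-down chart.
total_remaining = sum(t["remaining"] for t in tasks)
```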


As we progress during the sprint, the burn-down will look like this:


Sometimes scrum teams are not able to predict sprint effort accurately. The important aspect of the chart is that, at the end of each day, it accurately reflects the work remaining against the effort required to accomplish it, with a single team member updating it. This can also be discussed in the daily scrum.

For example, if the team initially broke a task into 3 sub-tasks without understanding its complexity and dependencies, they have created a potential bottleneck.

To tackle such an instance, at the end of the day on which the sprint started, or while still in the initial phase of the sprint, the team can revisit the effort required, recalculate what it will take to complete the task, and update the burn-down chart.

The team can also consider adding a “spike task” of 2-3 days to understand the complexity (through KT sessions, revisiting references, and walkthroughs with the product owner or BA) and remove uncertainty around the task. They can then add “follow-on tasks” to the original estimate and recalculate the effort required for the sprint.

Understanding Burn-Down Chart

There are only two lines drawn in a burn-down chart, but the situations they describe can have different reasons and meanings. If effort remaining is above the ideal line, the team is going at a slower pace and may not be able to finish all the sprint commitments decided during sprint planning. If effort remaining is below the ideal line, the team is progressing at a better rate and may finish early in the sprint.

Below are different stages of scrum teams in a sprint and way to interpret it.

Sprint commitment MET


The above progress is observed on the charts of experienced agile teams. It indicates the team is able to organise itself: the team completed the work on time and achieved the sprint goal.

Most importantly, they have a great product owner who understands the reason for a locked sprint backlog, and a great scrum master able to help the team.

The team did not take on more work than its capacity and velocity allow, finished the sprint backlog on time, and was able to estimate its capacity correctly.

 Sprint commitment NOT met


This burn-down chart says: “You have not completed your commitment.” This progress is mostly observed in inexperienced agile teams. The team has been behind for the entire sprint and did not adapt the sprint scope appropriately; stories that should have been split or moved to the next sprint were not.

In such a situation, the amount of work allocated in the next sprint should be reduced. If it happens again, corrective action should be taken after the first few days of slow progress. Typically, a lower-priority story should be pushed to the next sprint or back to the product backlog.

 Team stretched towards end to meet the commitment


This chart says the team started well in the first half of the sprint but lost focus in the middle and worked at a slower pace. In the end, the team completed the sprint on time and met the sprint goals, but only by stretching working hours.

In the retrospective, the team should discuss the reasons for the slow progress and solve those issues so they are better positioned in the coming sprints. The team should also reconsider the amount of work it can complete in one sprint.

Team is not consistent


A chart like this depicts that stories or tasks were not estimated correctly during the sprint planning meeting; though the commitment is met at the end, the team's performance has not been consistent.

Teams reach this state when work is not estimated correctly and problems are not identified before the start of the sprint.

In the retrospective, the team should focus on estimating stories correctly. They should rework their planning method by correctly calculating the team's load and velocity for coming sprints. The Scrum Master should pitch in here, help the team identify its estimation problems, and guide them out of this situation.

Sprint commitment met early before time


Such a situation arises when the team probably overestimated stories without understanding the difficulty of the tasks, or committed to too little during sprint planning, and hence finished ahead of time. The team's velocity was also not estimated correctly.

The team implemented all committed stories but did not work on additional backlog stories even though they had time to do so. To fix this situation, the team should immediately arrange a planning meeting, re-estimate the remaining user stories, include them in the sprint according to their velocity and resume the sprint.

In the retrospective, the scrum master must be proactive in getting the team to fix its estimation, providing training after identifying problem areas. He can also work with the product owner on backlog stories to be included in the sprint.

Avoiding Mistakes While Using Burn-Down Charts
Multiple stories have a common task

There are occasions when different stories involve similar effort. In such cases, if we include this effort under each story, it will inflate the total hours and hamper tracking. For example, consider “data set-up”: this can be a common task applicable to all stories.

Tasks are too detailed or huge

Tracking becomes difficult for teams if too many tasks are created. At the same time, tasks should not be huge (ideally not more than 12 hours), or daily tracking will be painful. If tasks are larger than 12 hours, it becomes difficult for teams to measure remaining effort.

Misreading between effort remaining and effort spent

One of the common mistakes new scrum teams make in their first few sprints is to misread “effort remaining” as “effort spent”. When updating the effort column each day, the team should re-estimate the task and record the effort remaining to complete it.

Update chart on daily basis without fail

Every scrum team member is required to update "effort remaining" at the end of the day. This helps teams produce burn-down charts that depict the correct position of the scrum team in the ongoing sprint, and eventually in the release.

Benefits Of Using Burn-Down Charts

Below are the benefits scrum teams can achieve if burn-down charts are plotted and used effectively on a daily basis.

Risk mitigation

Burn-down charts provide daily updates on effort and schedule, thereby mitigating risks and raising alarms as soon as something goes wrong in the sprint, and providing daily visibility to scrum teams, customers and stakeholders.

If the red line shown in the figures above (the actual progress line) goes flat and hovers above the blue line (the ideal line), the scrum team knows they are in trouble. Risk mitigation can then be planned immediately rather than waiting for the end of the sprint.
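The ideal line is just a linear interpolation from the committed effort down to zero. The short sketch below, using entirely hypothetical sprint numbers rather than data from any real team, shows how the two lines can be compared programmatically:

```python
# Hypothetical sprint: 5 working days, 40 hours of committed task effort.
total_effort = 40
days = 5

# Ideal line: effort burns down linearly to zero by the last day.
ideal = [total_effort - total_effort * d / days for d in range(days + 1)]

# Actual "effort remaining" the team reports at the end of each day.
actual = [40, 38, 36, 35, 34, 30]

# Days where the actual line hovers above the ideal line signal trouble.
at_risk_days = [d for d in range(days + 1) if actual[d] > ideal[d]]
```

Here every day after the first would be flagged, mirroring the flat red line scenario described above.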

Single communication tool for scrum teams, customers & stakeholders

Burn-down charts can be printed and placed in agile rooms, or shared at a common place with everyone involved in the sprint at the end of every day's work. This provides high visibility of the scrum team's progress on the sprint, which ultimately helps in release completion.

Common planning & tracking tool for scrum team

Scrum teams come up with the task breakdown and update the estimated and remaining effort. This enables teams to own the plan made for their sprints. The biggest advantage is that the entire scrum team is involved in planning and tracking using the burn-down chart.

Placeholder to track retrospective action items

It’s a good practice to add retrospective action items from the previous completed sprint as “non-functional requirements” in the task breakdown for the current sprint. This way, the team targets those action items, and they are also tracked as the sprint progresses.


Sprint burn-downs are usually monitored using effort remaining; it is common practice to use story points to monitor the release burn-down.

After its introduction, many variations of the burn-down chart have been derived. The Cumulative Flow Diagram (CFD) is another favorite tool among agile practitioners, providing a greater level of detail and insight into the various stages of a story.

A few practitioners find "burn-up" charts useful at the sprint and release level, but in the end it all comes down to the result and how efficiently the team uses the chart to track its daily activities.

However, recent studies show that burn-down charts remain the most favored tracking tool among agile practitioners due to their effectiveness and simplicity.

Ameya Tawde

Sr. Test Engineer


Python: A tester’s choice

Python is a general-purpose, dynamic and flexible language, so a lot of applications are being developed in it. From a tester's perspective, it has readily available modules and libraries which make script creation a lot easier. Tests can be written as xUnit-style classes or as functions. Python provides a full-fledged automation testing solution for any sort of project and is capable of performing unit, functional, system and BDD testing.

Pytest: Best among all python testing frameworks

Pytest is a Python testing framework which provides a single solution for unit, functional and acceptance testing.
It is more popular than the other available frameworks because of its attractive features. Below are some of the features of Pytest:

  • It offers test design with no boilerplate
  • It does not require separate assertion methods like assertEquals, assertTrue or assertContains
  • Tests can be parametrized, which reduces code duplication
  • Pytest can run tests written for unittest, doctest and nose
  • 150+ external plugins are available to support all sorts of functional testing
  • Plugins like pytest-bdd and pytest-konira are available for writing Behaviour Driven tests
  • It works wonders for GUI automation testing when used along with testing tools like Selenium WebDriver or Splinter

In short, pytest is a one-stop solution for all sorts of Python testing, be it unit, functional, highly complex functional or acceptance tests (BDD).

Top Pytest Features and Properties

Pytest Fixtures: Nervous system of pytest
Fixtures are the key concept in pytest, essentially providing the baseline for test creation and execution. To declare a method as a fixture, simply annotate it with "@pytest.fixture" and place it in "conftest.py".

Fixture example:

@pytest.fixture
def open_browser():
    # launch Firefox and open a page (example URL) before handing the
    # driver to tests
    driver = webdriver.Firefox()
    driver.get("http://www.python.org")
    assert "Python" in driver.title
    return driver

The fixture defined above will be available to the whole project, provided it is specified in the project's conftest.py file.

  • conftest.py contains all configuration settings, defined fixtures and hook implementations, and applies at the directory level. It is loaded by default whenever the tool is invoked.

Some key points on fixtures:

  • Fixtures have names and can be called from anywhere in the project: modules, classes and tests
  • Fixtures may return a value, or simply execute the steps specified in them
  • Fixtures can be passed as function arguments, in which case the fixture's return value is available inside that function
  • A fixture can be specified in the directory-level conftest.py file; it can then be called from any method and will execute the steps specified in it
  • A fixture can take multiple fixtures, and each fixture can trigger other fixtures, which serves a modular approach
  • Fixtures can be scoped as needed; this is good practice keeping time-expensiveness in mind. The scope can be "session", "module", "class" or "function"

Request object: Introspect agent
This is one of the useful features for introspecting the "requesting item". It can introspect the requesting test function, class, module or session. Items specified in the config file, or returned by other fixtures, can be used efficiently by the calling fixture via the "getattr" function. Just check the fixture below:

  • Example:

@pytest.fixture
def driver(request, browser_type):
    """Return a webdriver instance for testing."""
    try:
        _driver = getattr(webdriver, browser_type)()
    except AttributeError:
        logging.error("Browser type %s unavailable!", browser_type)
        raise
    return _driver
Finalization code: Setting up of teardown
A fixture can be used for teardown as well. This is achieved using the "yield" keyword: annotate the fixture with "@pytest.yield_fixture" and put the teardown steps after the "yield" keyword. Whenever the fixture goes out of scope, the steps after "yield" serve as the teardown process. Have a look at the modified version of the "driver" fixture below.

  • Example:

@pytest.yield_fixture
def driver(request, browser_type):
    """Return a webdriver instance for testing."""
    try:
        _driver = getattr(webdriver, browser_type)()
    except AttributeError:
        logging.error("Browser type %s unavailable!", browser_type)
        raise
    yield _driver
    # teardown: runs once the fixture goes out of scope
    logging.info("Finishing test")
    logging.info("*************************")
    _driver.quit()

In the fixture above, whenever the fixture goes out of scope the steps after "yield" run as teardown and the driver instance is quit.

Fixture parametrization: Enhances reusability
Fixtures can be parametrized and executed multiple times with different sets of data, the same way a normal function is executed.
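A minimal sketch of a parametrized fixture (the fixture name and values are hypothetical): every test that requests the fixture runs once per parameter value, which request.param exposes:

```python
import pytest

# params: pytest re-runs every dependent test once per value in the list.
@pytest.fixture(params=["smtp1.example.com", "smtp2.example.com"])
def smtp_server(request):
    # request.param holds the current parameter value.
    return request.param

def test_server_name(smtp_server):
    # Executed twice, once per server value.
    assert smtp_server.startswith("smtp")
```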

Usefixtures: Call for fixture from anywhere
One way to make fixtures available anywhere in the project is to call them using the decorator "@pytest.mark.usefixtures("fixture_1", "fixture_2")".

Autouse fixtures: Mark the fixture for all
Autouse fixtures are fixture methods which get invoked without the "usefixtures" decorator or "funcargs".
Any fixture can be registered for autouse; just pass the keyword autouse with a "True" flag, e.g. "@pytest.fixture(scope="module", autouse=True)". The fixture will then run for the class, test or module as mentioned in its scope. If defined in conftest.py, such fixtures are invoked by all tests below that directory. They are particularly useful for applying global settings to the test session.

Auto Test Discovery in Pytest: Ease of execution
One of the very useful features is auto test discovery in pytest: it detects all tests once the execution command is invoked; the user only needs to name test modules and tests with the prefix "test_*" when designing them. Command line arguments can specify test names, directories, node ids and file names. In the absence of command line arguments, collection starts from the configured 'testpaths', if set. This feature helps in running all tests, multiple tests in groups, a single test, or tests belonging to specific directories. Alternatively, tests can be organised in a module-specific folder structure and executed as needed.

Test parametrization
Tests can be parametrized using the built-in decorator "pytest.mark.parametrize".

  • Example:

@pytest.mark.parametrize("input, expected_result", [
    ("2+5", 7),
    ("2+3", 5)])
def test_calc(input, expected_result):
    assert eval(input) == expected_result

Another way to parametrize in pytest is the "pytest_generate_tests" hook, which is called automatically at test collection time. The "metafunc" object can be used to get the requesting context, and calling the "metafunc.parametrize()" method parametrizes the items.
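A sketch of hook-based parametrization, with hypothetical fixture names, as it might appear in conftest.py:

```python
# pytest calls this hook once per collected test; if the test asks for a
# "browser" fixture, we inject two parameter values (hypothetical names).
def pytest_generate_tests(metafunc):
    if "browser" in metafunc.fixturenames:
        metafunc.parametrize("browser", ["Firefox", "Chrome"])

def test_launch(browser):
    # Collected twice, once per browser value.
    assert browser in ("Firefox", "Chrome")
```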

Pytest Markers (Custom and inbuilt)
Tests can be marked with custom metadata, which provides flexibility in selecting tests for execution.
Markers can be applied only to tests, not to fixtures. They can be implemented at class level, module level and test level.

  • Example:

@pytest.mark.mytest  # mytest marker
def test_1():
    pass

Command to run only the “mytest” marked tests: $ pytest -v -m mytest

Some useful built-in markers

skip – This is used when a test needs to be skipped. An optional reason can be specified for the test.

@pytest.mark.skip(reason="different test data required")

skipif – This is used when a test needs to be skipped if a certain condition is met. An optional reason can be specified for the test.

@pytest.mark.skipif("condition")

xfail – At times tests are expected to fail and thus need to be marked as "expected failure". The xfail marker can be used for such tests.
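For instance, a test pinned to a known bug can be marked so its failure is reported as "expected" rather than breaking the run (hypothetical example):

```python
import pytest

# round() works on the binary float representation, so 2.675 rounds down
# to 2.67; until the (hypothetical) fix lands, this failure is expected.
@pytest.mark.xfail(reason="known rounding issue, fix pending")
def test_known_bug():
    assert round(2.675, 2) == 2.68
```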


Command line markers/flags: A way to control selection and execution
Pytest provides command line flags which come in handy for test collection, execution, and generating results in the required format.

Selection: command options for selecting tests:

pytest – collect and execute all the tests present in the current module

pytest testpath – collect all the tests from the specified path

Execution: command options for executing tests:

pytest -x : stops the execution after the first failure

pytest --maxfail=2 : stops the execution after two failures

pytest --durations=10 : lists the slowest 10 test durations

pytest --ff : runs all tests, but executes the previously failed ones first

pytest --lf : collects only the tests that failed last time and re-executes them

pytest -q, -s, -v : control how results are displayed on the console (quiet, no output capture, and verbose, respectively)

Plugins: Rewarded association
Pytest has a rich plugin infrastructure. Many built-in plugins are available, which makes this tool a hit among the alternatives. One can use built-in plugins, external ones, or write new plugins. Plugins contain well-specified hook functions which are ultimately responsible for implementing the configuration, running, reporting and collection of tests. Whenever the tool starts, all built-in plugins get loaded, followed by external ones registered through setuptools entry points, and finally the user-created hook functions specified in conftest.py. Many external plugins with excellent additional features are available which work wonders along with pytest. User-created plugins can be specified in conftest.py and will be available to the whole project, or remain specific to a directory's tests.
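Writing a plugin mostly means implementing hook functions. The sketch below is a minimal, hypothetical user plugin that could live in conftest.py and append a line to the terminal summary:

```python
# pytest discovers hook implementations by name; this one runs after the
# test session and writes an extra line to the summary (hypothetical text).
def pytest_terminal_summary(terminalreporter):
    terminalreporter.write_line("==== custom summary: run finished ====")
```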

Most popular external plugins

pytest-sugar: generates prettier output and shows failures instantly.

pytest-cache: allows re-running only the tests that failed in the previous run, with --lf.

pytest-xdist: distributes tests among the specified nodes when they need to run in parallel.

pytest-cov: measures code coverage and produces a report.

pytest-ordering: runs tests in a given order when outputs are interdependent; just add a decorator indicating the sequence.

pytest-expect: pytest tests with asserts, and with multiple asserts in a single test case, execution normally stops at the first failed assert; this plugin lets the whole test execute irrespective of assert failures in between.

Pytest-Selenium: Great association for Functional Testing
Pytest, with its simplicity, and Selenium WebDriver, the leading UI testing tool, combine to provide a robust solution for UI automation testing. Selenium WebDriver supports nearly all web browsers and works across many platforms. Pytest's test design, assertion approach and test result reporting are magnificent from a testing perspective. Pytest's support for external plugins provides a stable background for complex browser interaction through scripts. All these factors are congenial to high-quality GUI automation testing.

Pytest-bdd and pytest-konira: Behaviour-driven testing made easy for Automated Acceptance Testing

These days more and more software projects are adopting the BDD approach because of its simplicity and the clear understanding of software features it gives everyone involved.

The pytest-bdd plugin is an extension of pytest which supports BDD testing. It implements the Gherkin language, which makes Behaviour Driven Testing easier. While other BDD tools require separate runners, it uses all the flexibility and power of pytest. Tests written in GIVEN-WHEN-THEN format are easy to understand and communicate their purpose clearly. Sets of examples are a bonus in clarifying application behaviour. Prerequisites, actions and expected output are conveyed effortlessly. This helps in designing everything from simple unit-level tests to highly complex end-to-end testing scenarios.

Pytest: Getting Started

Installation: just run the command below and it's done…

pip install -U pytest

Sample test example: testing assertions

def func(x):
    return x + 1

def test1_should_pass():
    assert func(4) == 5

def test2_should_pass():
    assert func(3) == 5 - 1

def test1_should_fail():
    assert func(3) == 5

def test2_should_fail():
    assert func(3) == 5

Execution Result of the example:

============================= test session starts =============================

platform win32 — Python 3.5.0, pytest-2.8.7, py-1.4.31, pluggy-0.3.1

rootdir: D:\UI-Automationfff_test, inifile:

plugins: expect-0.1, bdd-2.17.1, html-1.9.0, rerunfailures-2.0.0, xdist-1.14

collected 4 items

tests\creatives\concept\ ..FF

================================== FAILURES ===================================

______________________________ test1_should_fail ______________________________

def test1_should_fail():

>       assert func(3) == 5

E       assert 4 == 5

E        +  where 4 = func(3)

tests\creatives\concept\ AssertionError

______________________________ test2_should_fail ______________________________

def test2_should_fail():

>       assert func(3) == 5

E       assert 4 == 5

E        +  where 4 = func(3)

tests\creatives\concept\ AssertionError

===================== 2 failed, 2 passed in 0.17 seconds ======================


The result above clearly shows how pytest generates test results and conveys the failure reason, which is very easy to interpret.


Installation: just run the command below and it's done…

pip install pytest-bdd

  • Example:

Feature: Verification of gmail login page

Scenario Outline: Verify user can login with valid username and password

Given I navigate to the login page
And I enter valid username "<user_name>"
And I enter valid password "<password>"
When I click on the submit button
Then I should get a login successful message

Examples:
|user_name |password      |
|xyz       |12345         |




Automation Analyst



Companies are searching for ways to decrease expenses and increase revenue, as global competition continues to shrink profit margins. At the same time, companies are overwhelmed with data generated through their operations and actions. While this rapid surge in information is creating new challenges for some companies, others are using the same information to drive higher profits. These smart companies are using predictive analytics to gain a competitive advantage by turning data into knowledge.

Predictive intelligence is achieved by business users through statistics, text analytics and data mining, by unveiling relationships and patterns in structured and unstructured data. Structured data generally has a relational data model and describes real-world objects, whereas unstructured data is generally the opposite: it has no pre-defined data model and is usually text. Dealing with unstructured data usually involves text analysis and sentiment analysis. [1]

Why Use Predictive Analytics?

Predictive Analytics (PA) can be used in any industry, including marketing, finance, workforce, healthcare and manufacturing. It is mainly used for customer management (customer acquisition and retention) and fraud and risk management, to increase revenue, improve current operations and reduce associated risks. Almost every industry makes profit by selling goods and services. The credit card industry has for decades been using models that predict the response to a low-rate offer. Nowadays, due to the sudden growth of e-commerce, companies are also using online behavior and customer profile information to promote offers to customers.


Figure – 1. Response/Purchase PA model

The figure given above, borrowed from the article referenced as [5], depicts a response/purchase PA model. This model represents the customer lifecycle, covering former customers, established customers, and new/prospective customers. The scores derived using these models can be used to expand the customer acquisition ratio, to lower expenses, or both. Below are real-world instances where the response/purchase PA model is currently used in day-to-day business decision making.


Many banks are using PA to foresee the probability of a fraudulent transaction before it gets authorised, and PA provides an answer within 40 milliseconds of the transaction commencing.


One of the leading office supply retailers uses PA to determine which products to stock, when to execute promotional events and which offers are most suitable for consumers; in doing so, a 137% surge in ROI was observed.


One top-notch computer manufacturer has used PA to predict the warranty claims associated with computers and their peripherals, and in doing so has been able to bring the company's warranty cost down by 10% to 15%.

Talent Acquisition & Resource Management

According to a survey conducted by Radius, start-up companies such as Gilds, Entelos and many others are using PA to find the right candidates for a job. Candidate selection uses keywords from job descriptions, and the search is not restricted to LinkedIn: they also target blog posts and forums that feature candidates' skills. In some instances, finding a candidate with a particular skill is hard, such as mastery of a new programming language, and in such cases a PA approach can help discover candidates whose skills closely relate to the requirement. There are PA algorithms that can even predict when a hopeful candidate (one who is already employed) is likely to change jobs and become available. [3]

Predictive Analytics Process

Predictive Analysis Process

Figure – 2. Predictive Analytics Process

A typical predictive analytics process is depicted in the figure given above, borrowed from the article referenced as [6]; only the main stages of the process are briefly outlined here:

  1. Define Project: in this step the project outcomes, deliverables, scope and business objectives are defined, and the data sets to be used for analysis are identified.
  2. Data Collection: data for PA is generally collected from multiple sources, giving the user a complete view of customer interactions.
  3. Data Analysis: this is the vital and critical step of the PA process, in which the data is analysed to identify trends, perform imputation and outlier detection, and identify meaningful variables, in order to discover information that helps business users take the right decisions and arrive at conclusions.
  4. Statistics: in the PA process, statistical analysis validates the assumptions and tests them using standard statistical models, including analysis of variance, chi-squared tests, correlation, factor analysis, regression analysis, time series analysis, multivariate and covariate techniques, and many more.
  5. Modelling: predictive modelling gives the user the ability to automatically create accurate predictive models of the future, drawing mainly on machine learning, artificial intelligence and statistics. The model is chosen based on testing, validation and evaluation, using detection theory to estimate the probability of an outcome for a given set of input data.
  6. Deployment: the phase where the user deploys the analytical results into the everyday decision-making process, and automates that process. Depending on requirements, this phase can be as simple as generating a report or as complex as implementing a data mining process. A PA model can be deployed in offline/online mode depending on data availability and decision-making requirements; it generally assists the user in making informed decisions.
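The article's examples rely on IBM SPSS, but the modelling-and-scoring idea behind steps 5 and 6 can be sketched in a few lines of plain Python. The toy model below (entirely hypothetical data, not from any real retailer) scores a customer's propensity to repurchase by looking at the three most similar historical customers, a crude nearest-neighbour predictive model:

```python
# Historical records: (visits_per_month, past_purchases) -> repurchased?
history = [
    ((10, 5), 1),
    ((2, 0), 0),
    ((8, 4), 1),
    ((1, 1), 0),
]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def score(customer, k=3):
    """Score = fraction of the k nearest historical customers who
    repurchased; a higher score means a better target for an offer."""
    nearest = sorted(history, key=lambda rec: squared_distance(rec[0], customer))[:k]
    return sum(label for _, label in nearest) / k

high = score((9, 5))  # resembles repeat buyers
low = score((1, 0))   # resembles one-off customers
```

Scoring every customer this way yields the ranked offer list that the deployment phase then feeds into everyday decisions.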

Predictive Model Implementation

In this blog, we will target a business problem in the retail industry to learn more about how exactly PA works. SportsDirect (a fictitious company) is an online sports retailer which wants a strategy for selling more sports equipment to existing customers in order to increase total revenue. To achieve this, the company tried several different marketing campaign programs. However, these wasted time and money, and the store didn't see any results from them. The store is now keen to identify which customers are eager to buy more sports equipment, what products they are most likely to buy, and what effort would be required to make them purchase. Based on these insights, the marketing team needs to design its next customer offer. The store has stored several years of data online, including sales and customer data, which will play a vital role, and has decided to put into action an IBM SPSS predictive analytics solution.

To improve the accuracy of analysis and prediction, the store needs to build and deploy a predictive model. This model will suggest offers on specific products for particular sets of clients. Building and deploying it requires thorough participation from an administrator, a data architect and an analyst: the administrator will configure, manage and control access to the analytic environment, the data architect will provide the data, and the analyst will use the data to create the model itself.

As a first step, the team of analyst, administrator and architect will discover and locate all the required information. A significant subset of the store's historical sales and customer information will be used to build the model. However, building the model from historical data alone doesn't give the store a comprehensive view of its existing customers. The business analyst therefore suggests surveying existing customers' preferences and opinions regarding sports equipment. The store will use IBM SPSS Data Collection to gather this additional data by creating a customer survey, collecting information from completed surveys and managing the resulting data. To determine customer buying habits, patterns and preferences related to sports equipment, the survey data will be fed into the model and associated with the historical data.

SportsDirect will use IBM SPSS Text Analytics software to analyse and identify the valuable customer sentiment and product feedback that may lie within the text of thousands of blog entries and emails customers have sent to its service center. This information can be used to gain insight into customer buying patterns, habits and opinions about products, and is then fed into the model. The figure given below, borrowed from the article referenced as [1], demonstrates the steps required to build a predictive model.

Steps for building predictive model

Figure – 3. Steps for building Predictive Model

To generate a model, an algorithm and a complete data set are required: the algorithm mines the data and identifies the trends and patterns that lead to predicted outcomes. The analyst will perform market-basket analysis using an association algorithm, which will automatically discover the combinations of products that sell well together and suggest specific offers for distinct clients.
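The essence of market-basket analysis is counting which items co-occur in the same order. The SPSS association algorithms are far richer, but a toy sketch (with hypothetical order data) conveys the idea:

```python
from collections import Counter
from itertools import combinations

# Hypothetical orders from the fictitious store.
orders = [
    {"racquet", "grips", "balls"},
    {"grips", "balls"},
    {"shoes", "balls"},
    {"racquet", "grips"},
]

# Count every product pair that appears together in an order.
pair_counts = Counter(
    pair for order in orders for pair in combinations(sorted(order), 2)
)

best_pair, support = pair_counts.most_common(1)[0]
```

Pairs with high support become candidates for combined offers, like the grips-and-balls discount discussed in this article.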

The next step is building, training and testing the model based on the collected data and the algorithm; the IBM SPSS Modeler workbench is used for this. PA now contains information that can be applied to real-world customers to determine buying behaviour, predict future buying patterns and identify the best marketing offer for each customer, resulting in increased sales. The output of this modelling process is called 'scoring'. Sales and marketing managers use these scores as input to their marketing campaigns and decision-making processes. The scoring output generally contains the list of clients who are most likely to purchase a certain type of product. In special cases, a special discount is also offered to get a classified set of customers to act swiftly.

To better understand the scoring output of the PA modeller, let's consider tennis players as the customers for these conjectural findings and recommended actions. Tennis players here are customers who have purchased a tennis racquet from SportsDirect in the past. Players living in hot regions purchase, due to the hot weather, three times as many racquet grips in a given period as players from other regions, yet buy fewer cans of tennis balls than those residing elsewhere. Based on this discovery, hot-weather customers will be emailed an offer of a 25% discount on their next order if they purchase racquet grips and tennis balls together. The model also provides many other recommendations targeting different types of customers and sports, and can help with pricing policies, such as reducing the price of a product line at the end of the buying season, when demand is low. [2]


PA focuses on finding and identifying hidden patterns in data using predictive models, and these models can be used to predict future outcomes. It is acknowledged that predictive models can be built automatically; however, overall business success still requires exceptional marketing strategies and a powerful team, as James Taylor states in [4]: "Value comes only when insights gained from analysis are used to drive to improve decision making process." PA can make a real difference by optimising resources to make better decisions and take actions for the future.

Predictive analytics is currently used in retail, insurance, banking, marketing, financial services, oil & gas, healthcare, travel, pharmaceuticals and other industries. Applied correctly and successfully, predictive analytics can definitely take your company to the next level, for many reasons, including:

  • It can help your organization play to its own strengths while taking full advantage of areas where competitors are falling short.
  • It can help your company limit the distribution of offers and discount codes to only the audience that is about to leave.
  • It can help your company grow beyond increasing sales, providing insights through which it can improve its core offerings.
  • It can help your company grow its existing base and acquire new customers by enabling a positive customer experience.


[1] Imanuel, What is deployment of predictive models?   [Online]. Available: [Accessed: Nov. 16, 2016].
[2] Beth L. Hoffman, “Predictive analytics turns insight into action”, Nov. 2011, [Online]. Available:  [Accessed: Dec. 8, 2016].
[3] Gareth Jarman, “Future of the Global Workplace: The Changing World of Recruiting”, Sep. 2015 [Online]. Available: [Accessed: Dec. 12, 2016].
[4] Kaitlin Noe, “7 reasons why you need predictive analytics today”, Jul. 2015[Online]. Available: [Accessed: Dec. 14, 2016].
[5] Olivia Parr-Rud, “Drive Your Business with Predictive Analytics” [Online]. Available: [Accessed: Dec. 14, 2016].
[6] Imanuel, “What is Predictive Analytics?”, Sep 2014, [Online]. Available: [Accessed: Oct. 14, 2016].

Vishal Prajapati

Senior Business Analyst


When was the last time you spent more than 2 seconds waiting for a page to load? The average user has no patience to wait long for a page to load. Users lose interest in a site if they don't get a response quickly; people like fast-responding websites.

The Riverbed Global Application Performance Survey has revealed a major gap between the application performance businesses need and their current ability to deliver it.

  • According to the survey, 98% of executives agree that optimal enterprise application performance is essential to achieving optimal business performance.
  • 89% of executives say poor performance of enterprise applications has negatively impacted their work.
  • 58% of executives reported a weekly impact on their work.

Poor app performance impacts every area of the business.

  • 41% cited dissatisfied clients or customers
  • 40% experienced contract delays
  • 35% missed a critical deadline
  • 33% lost clients or customers
  • 32% suffered negative impact on brand
  • 29% faced decreased employee morale

Application performance should be a top priority; every millisecond matters, and a few milliseconds' difference is enough to send users away. Performance optimization saves time, money and valuable resources.

Our team was assigned a critical mission: bring a core business API's execution time down from 21 seconds to 4 seconds. I am sharing my experience of this mission; hopefully it will help you understand the performance monitoring process and optimization techniques. Performance improvement is an ongoing, iterative process. This blog focuses on server-side application tuning. By reading this article you will learn:

  • How to initiate application performance tuning
  • Performance monitoring
  • Identifying optimization areas and optimization techniques

Below is the sequence of steps in performance optimization:

  • Benchmarking
  • Running performance test
  • Establish a Baseline
  • Identify bottlenecks
  • Optimization

The diagram below shows the typical process of running a performance initiative.

Process of Running Performance Initiative


Benchmarking

Benchmarking can be simply defined as "setting expectations".

An athlete sets a benchmark of running 200mtrs distance in 20 seconds (athlete is setting expectation here), similarly a product owner sets a benchmark for a Login API that it needs to be executed not more than 1000ms for 15 parallel users. API will be running on 3 application servers under load balancer having 1TB External memory, 12GB RAM, running on Intel i7 core processor each. Application server connecting to DB server will have same hardware configuration as an application server. These are examples of benchmarking. It’s very important that system Hardware Configuration (HDD, RAM, and no. of server), application configuration and acceptable payload must be fixed during benchmarking and ensure it remains unchanged during all subsequent performance tests. Performance environment generally are replica of production environments.
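A benchmark is most useful when it is recorded in a machine-checkable form. The following sketch (the names and figures are illustrative, taken from the Login API example above) shows one way to capture a benchmark and compare a measured result against it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Benchmark:
    """Expected performance under a fixed hardware/payload configuration."""
    name: str
    max_response_ms: int   # acceptable response time
    parallel_users: int    # load the figure applies to

def meets_benchmark(benchmark: Benchmark, measured_ms: float) -> bool:
    """True when the measured response time is within the benchmark."""
    return measured_ms <= benchmark.max_response_ms

# The Login API benchmark from the example: 1000 ms for 15 parallel users.
login_benchmark = Benchmark("Login API", max_response_ms=1000, parallel_users=15)

print(meets_benchmark(login_benchmark, 1500))  # baseline of 1500 ms -> False
print(meets_benchmark(login_benchmark, 900))   # after tuning -> True
```

Keeping the benchmark in one place like this makes it easy to re-check after every optimization pass.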

Running Performance Test 

The purpose of performance testing is to measure the response time of an application/API with an expected number of users under moderate load. It is generally done to establish a baseline for future testing and/or to measure the savings over that baseline from performance-related code changes. Performance tests can be carried out using tools like SoapUI, LoadRunner, etc. Ensure you have the same configuration and payload that were fixed during benchmarking before running a test. You also need a performance monitoring tool like AppDynamics or ANTS Profiler configured to capture a call graph for the executed application/API. These tools help in analyzing and identifying bottlenecks.
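Tools like SoapUI and LoadRunner do this at scale, but the core idea of a performance test can be sketched in a few lines: fire one request per simulated user in parallel and record each response time. Here the API call is simulated by a local function; in a real test it would be an HTTP request to the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_api():
    """Stand-in for a real API call; replace with an HTTP request."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server-side work
    return (time.perf_counter() - start) * 1000  # response time in ms

def run_performance_test(parallel_users: int) -> list[float]:
    """Execute one request per simulated user, all in parallel."""
    with ThreadPoolExecutor(max_workers=parallel_users) as pool:
        return list(pool.map(lambda _: call_api(), range(parallel_users)))

timings = run_performance_test(parallel_users=15)
avg_ms = sum(timings) / len(timings)
worst_ms = max(timings)
print(f"avg={avg_ms:.1f} ms, worst={worst_ms:.1f} ms")
```

The average and worst-case numbers from such a run are what get compared against the benchmark fixed earlier.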

Establish a Baseline

A baseline is a system assessment that tells how far we are from the benchmark figures; the current state of the system is the baseline. It is iterative and keeps evolving as the code changes.

An athlete currently running 200 m in 25 seconds is an example of a baseline (the athlete is still 5 seconds behind the benchmark in the example above); likewise, a test performed on the Login API with the same criteria as the benchmark takes 1500 ms (still 500 ms behind the benchmark figures mentioned above). This gap between benchmark and baseline needs to be closed by improving performance until the baseline figures are equal to or less than the benchmark figures.

 Identify bottlenecks

Improving performance first requires identifying the bottlenecks. This is a very important part of performance tuning, and it needs keen observation. A performance monitoring tool gives you a report with a detailed call graph and statement-wise timings. The call graph needs to be analyzed further and narrowed down to the root cause of the performance issues, ensuring no single opportunity goes unnoticed.

Hardware bottlenecks – the objective is to monitor hardware resources such as CPU utilization, memory utilization, I/O usage, the load balancer, etc., to see whether any of them is a bottleneck.

Software bottlenecks – monitor the web server (IIS), DB server, etc., to see whether any of them has a bottleneck.

Code bottlenecks – no matter how careful and attentive your development team is, things will happen. Identifying code bottlenecks means finding the code areas that take the most resources or execution time; finding such areas opens up more performance opportunities. Below are a few common code bottlenecks to look for:

  • Identify methods/blocks/statements on the call graph that take a long time to execute.
  • Find duplicate DB/IO calls on the call graph.
  • Identify long-running SQL queries/IO operations.
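Monitoring tools like AppDynamics produce the call graph for you, but even Python's built-in cProfile can surface the functions that dominate execution time. A small illustrative sketch (slow_query and fast_path are made-up functions for the demo):

```python
import cProfile
import io
import pstats
import time

def slow_query():
    time.sleep(0.05)  # pretend this is a long-running SQL query

def fast_path():
    sum(range(1000))  # cheap in-memory work

def handle_request():
    fast_path()
    slow_query()
    slow_query()      # duplicate call: a typical code bottleneck

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Sort by cumulative time: slow_query should dominate the listing.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print(report)
```

Reading such a report top-down is exactly the "narrow the call graph down to the root cause" step described above.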


Optimization

After finding the bottlenecks, the next step is to find solutions for them.

Hardware optimization – if you see high memory or CPU utilization during a performance test run, analyze the code and find the root cause of the issue. There are many possible reasons, e.g. memory leaks, excessive threads, etc.

If you find that multithreading in particular is behind the high system consumption, try folding those threads' statements into the main thread so they run synchronously. Obviously this will increase overall execution time; if you can afford that, implement the change. Otherwise, execute those threads asynchronously on another server to keep the application server's health under control without compromising overall execution time.

Address any other hardware bottlenecks if found.

Software optimization – analyze the bottleneck and find the root cause. You may sometimes need to involve the respective experts (IIS, database, etc.).

Code optimization –

  • If possible, use an object cache for heavy, non-changing objects.
  • Check whether time-consuming statements/methods can be executed asynchronously without violating fair system usage.
  • Use proper indexing on tables to improve query performance.
  • Check the call graph to see whether the same method is called multiple times; if so, apply an appropriate caching mechanism to avoid the duplicate DB calls.
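The duplicate-DB-call case is often the cheapest win. Here is a sketch of an object cache using Python's functools.lru_cache; fetch_user is a hypothetical stand-in for a real database call on heavy, non-changing data:

```python
from functools import lru_cache

db_calls = 0  # counts how often the "database" is actually hit

@lru_cache(maxsize=128)
def fetch_user(user_id: int) -> dict:
    """Cached lookup: repeated calls with the same id skip the DB."""
    global db_calls
    db_calls += 1
    return {"id": user_id, "name": f"user-{user_id}"}  # pretend DB row

# The call graph showed the same method being called repeatedly...
for _ in range(5):
    fetch_user(42)

print(db_calls)  # -> 1: four of the five calls were served from the cache
```

The caveat in the bullet above still applies: cache only objects that do not change, or pair the cache with an invalidation strategy.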


Good design and coding practices lead to high-performance applications. Irrespective of the power of the hardware, an application can be inefficient if it is not designed well and not optimized. Many performance problems are related to application design rather than to specific code problems. In today's highly competitive market, high-performance applications are essential to grow and sustain. We see many applications fail when data grows significantly; as data grows, performance becomes crucial, and it is important to keep application performance consistent even as data grows. At Xoriant, we have a specialized team working on performance tuning and monitoring that helps clients tune their critical enterprise applications even with large data sets.

Technical Lead


The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.

-Bill Gates

Mobile apps are a new window to user solutions in IT. With user needs shifting to mobile, the number of mobile apps is increasing, and with it the competition to deliver quality apps. Testing mobile apps is thus becoming a key process before rolling out releases to users. Mobile test automation is therefore the need of the hour, facilitating thorough testing of mobile apps efficiently and in less time.

Robot Framework is an open source test automation framework for Acceptance-Test-Driven Development (ATDD), implemented in Python. It has an ecosystem of test libraries and tools that adhere to its keyword-driven approach. One of the external test libraries for mobile test automation is AppiumLibrary, which uses Appium to communicate with Android and iOS applications. This blog is a walkthrough of how Robot Framework communicates with Appium to bring out the best of both for mobile test automation, with the help of a demo running a test suite against a basic Android application.

Robot framework

Robot Framework is a generic test automation framework released under Apache License 2.0. Robot has standard test libraries and can be extended by test libraries implemented either with Python or Java.

Key Features of Robot Framework
  • Business Keyword driven, tabular and easy to understand syntax for test case development
  • Allows creation of reusable higher-level keywords from the existing keywords
  • Allows creation of custom keywords
  • Platform and application independence
  • Support for standard and external libraries for test automation
  • Tagging to categorize and select test cases to be executed
  • Easy-to-read reports and logs in HTML format

Robot framework requires installation of the following on the system:

  • Java (JRE and JDK)
  • Python
  • Robot framework package (pip install)
  • Python IDE (PyCharm)
Appium Library

AppiumLibrary is one of the external libraries of Robot Framework for mobile application testing; it supports only Python 2.x. It uses Appium (version 1.x) to communicate with Android and iOS applications. Most of Appium's capabilities are framed as keywords, which are easy to understand and help the reader grasp the purpose of a test case just by reading the script.

 Key Features of Appium
  •  No recompilation or modification of app to be tested is required
  • App source code is not needed
  • Tests can be written in any language using any framework
  • Standard automation specification and API

Using AppiumLibrary with Robot Framework for mobile app test automation requires installing the following on the system:

  • Node js
  • Robot framework appium library package (pip install)
  • Appium Desktop Client (Appium Server)
  • Android SDK (For Android apps)
  • Xcode (For iOS apps)
Robot – Appium Interaction

A basic flow of robot framework’s interaction with the application under test is illustrated in the following diagram.

Fig 1: Interaction of robot framework with the application under test

Test Suites consisting of test cases written using robot’s keyword-driven approach are used to test the mobile application (Android/iOS). Appium server, robot’s Pybot and Appium-Python Client play a significant role in this interaction.

Appium Server – Appium is an open source engine running on Node.js. It is mainly responsible for the interaction between the app’s UI and robot’s appium library commands. It needs to be up and running to facilitate this interaction.

Pybot – This is a robot framework module used to trigger the test scripts written in Robot framework format. Pybot reads the different framework files from framework’s code base and executes the tests by interacting with Appium Library. On completion of test case/suite execution, pybot generates report and log files with complete details of the test run.

Appium-Python Client – this client facilitates the interaction between AppiumLibrary and the Appium server using the JSON Wire Protocol. It initiates a session with the Appium server in ways specific to AppiumLibrary, resulting in a POST /session request to the Appium server with a JSON object. The Appium server then starts an automation session and responds with a session ID, which is used when sending further commands to the server.

This is illustrated in the below flow diagram.

Fig 2: Flow Diagram of Robot – Appium Interaction


The following example tests the Calculator app on an Android device using Robot Framework's AppiumLibrary.

Test Suite

A test suite is a .robot file which can be written and executed using a python IDE. The basic skeleton of a test suite written using robot framework’s syntax consists of the following sections.

Fig 3: Basic skeleton of Test Suite

  • Settings – This section consists of the test suite documentation, imports of libraries and resource files, suite and test level setup and teardown. (Fig 4)
  • Variables – This section consists of all the variable declarations for the variables used in the test suite. (Fig 4)
  • Keywords – This section consists of the higher level keywords formed using in built keywords from robot’s standard libraries and appium library. (Fig 5)
  • Test Cases – This section consists of all the test cases that belong to the test suite. (Fig 5)
Fig 4: Settings and Variables section of test suite

Fig 5: Keywords and Test Cases section of test suite
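To make the skeleton concrete, here is a minimal sketch of such a .robot suite for the Calculator app. The AppiumLibrary keywords (Open Application, Click Element, Element Text Should Be, Close Application) are real; the capability values and locator ids are assumptions that vary by device and OS version:

```robotframework
*** Settings ***
Documentation     Suite testing the Android Calculator app
Library           AppiumLibrary
Suite Setup       Open Calculator App
Suite Teardown    Close Application

*** Variables ***
${REMOTE_URL}     http://localhost:4723/wd/hub
${PACKAGE}        com.android.calculator2

*** Keywords ***
Open Calculator App
    Open Application    ${REMOTE_URL}    platformName=Android
    ...    deviceName=emulator-5554    appPackage=${PACKAGE}
    ...    appActivity=${PACKAGE}.Calculator

Add Two And Three
    Click Element    id=${PACKAGE}:id/digit_2
    Click Element    id=${PACKAGE}:id/op_add
    Click Element    id=${PACKAGE}:id/digit_3
    Click Element    id=${PACKAGE}:id/eq

*** Test Cases ***
Addition Of Two Digits
    Add Two And Three
    Element Text Should Be    id=${PACKAGE}:id/result    5
```

Note how the higher-level keyword Add Two And Three is composed from AppiumLibrary's built-in keywords and then reused from the test case.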


UIAutomator is a tool used to obtain the locators of all the elements in a particular Android application; it is part of the Android SDK. The locators, in the form of XPaths, were obtained using UIAutomator for the Calculator app (Fig 6).

Fig 6: UIAutomator screenshot for Calculator App for Android

Test Reports and Logs

The above test suite can be executed with the following command from the command line:

pybot -d Results\TestSuite  TestSuite.robot

On execution of the test suite, report and log files are created as HTML documents. These files contain a detailed summary of the test case execution and all the related statistics. (Fig 7, 8)

Fig 7: Report of the execution of Test Suite for Calculator App

Fig 8: Log of the execution of Test Suite for Calculator App

In conclusion, AppiumLibrary for Robot Framework facilitates automation of mobile application test cases with a simple tabular syntax that is easy to read and platform independent, without altering the source code of the application under test. The keyword-driven approach of Robot Framework ensures reusability and readability of the test cases, making the automation framework robust and tester friendly.

Sayalee Pote

Software Engineer


Behavior Driven Development (BDD) has been the talk of the town for some time now, and here we are going to look into why it is becoming more popular by the day. This blog will cover the following points:

  • Why use BDD?
  • What is BDD?
  • How does it benefit?
  • How is it implemented?
  • Challenges & the takeaway!

Why use BDD? A little background on the problem we have…

With the software development world working with more agile and CI/CD processes, it is important to reduce cost, time, and effort in each phase of the SDLC. The main goal is to understand the business goals of the product and deliver quality, absorbing incoming change requests with minimal turnaround time. With that said, let us look at a typical approach to requirements flow and development:

Approach to requirement flow and development

  • In the above scenario there is a lot of room for miscommunication or misunderstanding of the client's requirements.
  • Moreover, if a scenario is missed from the developer or QA perspective, the requirements have to be revisited with the BAs.
  • The turnaround time for feedback and change requests needs to be minimized.

BDD addresses all three of these points in a very neat way.

What is BDD? The approach that addresses the concern above…

The Behavior Driven Development approach emphasizes writing requirements in the Gherkin language, which is business/domain driven, putting the customer's requirements at the center of the whole approach. A product can be divided into a set of features, and these requirements (aka features) are written collaboratively by product owners, BAs, QAs, and developers in an English-like language as 'Given-When-Then' (GWT) scenarios. These scenarios are then implemented by both developers and testers. The following picture describes this collaborative approach:

Behaviour-Driven Development Collaborative approach

Typically, Gherkin is written as Given-When-Then scenarios, which look something like this:

  • Scenario: a description of a use case with one precise goal
  • Steps: a sequence of steps in the form of
    • Given – preconditions are defined, or contextual steps are taken to set up the test case
    • When – one or more events or actions are taken
    • Then – the final outcome expected from the scenario
  • Examples: a set of concrete data over which the scenario is run iteratively

How does it benefit? Reasons to go for it…

Using BDD gives an edge over other frameworks due to following reasons:

  1. Acceptance criteria are written in a feature-driven, easily understandable plain language during the planning phase, with inputs from all SDLC stakeholders. No technical jargon, yet precise, with features and concrete examples.
  2. Development is based on clear use cases, so there is no miscommunication, as concerns and queries are cleared up front.
  3. The feature files directly serve as test scenarios, and their steps can be implemented directly in step definition files in automation scripts.
  4. The time and effort developers and testers lose analyzing requirements is cut short.
  5. Test cases may be derived directly from the feature file steps and the set of examples given; implementation is easy and does not require an extra .csv or Excel file for test data.
  6. Steps defined once are reusable throughout the framework, by importing scenarios and relevant feature files, as steps act as fixtures.
  7. Automated test suites validate the software on each build (or as required) on one hand and provide updated technical and functional documentation on the other, so turnaround time and maintenance cost go down.
  8. Unlike TDD and ATDD, development is not test driven but business-logic driven.

How is it implemented? An Example would be the best way to explain…

There are many frameworks through which this is implemented, to name a few:

  • Cucumber
  • Lettuce
  • JBehave
  • Aloe
  • Pytest-BDD

Let us take the most common feature, a simple login page, as the example. A typical set of acceptance criteria would be:

Acceptance Criteria:

Feature: My Articles Login Page
Given I am on My Articles Login Page
When I enter admin username and password
And I click on Login
Then I am logged in successfully
And I am able to see message “Welcome, you have all administrative rights”


Given I am on My Articles Login Page
When I enter reader username and password
And I click on Login
Then I am logged in successfully
And I am able to see message “Welcome reader! Happy reading”


Given I am on My Articles Login Page
When I enter blank username or password
And I click on Login
Then I am not logged in
And I am able to see message “Sorry, username and/or password cannot be blank”


Given I am on My Articles Login Page
When I enter invalid username and password
And I click on Login
Then I am not logged in
And I am able to see message “Sorry, you have entered wrong username and/or password”

The implemented feature file (my_login.feature) might look like this:

Feature: My Login Page

Scenario Outline: User is able to login with valid credentials
  Given I am on my web login page
  When I enter <username> and <password>
  And I click on Login
  Then I am logged in successfully
  And I am able to see <message>

  Examples:
  | username | password    | message                                     |
  | admin    | ad_p@ss1234 | Welcome, you have all administrative rights |
  | reader   | re@der123#  | Welcome reader! Happy reading               |


Scenario Outline: User is not able to login with invalid or blank credentials
  Given I am on my web login page
  When I enter <username> and <password>
  And I click on Login
  Then I am not logged in
  And I am able to see <message>

  Examples:
  | username | password   | message                                                |
  | admin    | re@der123# | Sorry, you have entered wrong username and/or password |
  | rader    | re@der123# | Sorry, you have entered wrong username and/or password |
  | reader   |            | Sorry, username and/or password cannot be blank        |
  |          |            | Sorry, username and/or password cannot be blank        |
  |          | re@der123# | Sorry, username and/or password cannot be blank        |

Presently we are using the Pytest-BDD framework, and a typical step definition file for this feature might look like this:


from pytest_bdd import scenarios, given, when, then

# Bind every scenario in the feature file to a pytest test
scenarios("my_login.feature")

@given("I am on my web login page")
def i_am_on_my_web_login_page():
    pass  # code for any prerequisite, like opening the browser

@when("I enter <username> and <password>")
def i_enter_username_password(username, password):
    pass  # code for entering username and password

@when("I click on Login")
def i_click_on_login():
    pass  # code for clicking on Login

@then("I am not logged in")
def i_am_not_logged_in():
    pass  # code for verifying the user is still on the login page

@then("I am logged in successfully")
def i_am_logged_in_successfully():
    pass  # code for verifying the user is logged in and moved to the home page

@then("I am able to see <message>")
def i_am_able_to_see_message(message):
    pass  # code to verify the message

*Note: we don't have to write full functions again; the same steps are reusable across multiple scenarios and feature files, and data is passed from the feature files themselves in the form of 'Examples', with the scenario iterating once over each row of the 'Examples' data set.

Some Key Findings…

Our findings suggest that major defects were reduced by around 30%, as edge cases and application-wide standard scenarios were discussed in the planning phase with the business itself, so time and effort were reduced at both the developer and QA end. This results in faster delivery: for example, a feature that would normally take 3 sprints to complete can be deployed to production in 2 sprints.

Challenges & The take away…

The BDD framework derives its strength from feature-driven, plain English-like scenarios and a collaborative approach, and this brings its own challenges.

  • As discussed above, BDD is business-logic driven and hence needs more engagement from people on the business side, which can be a real challenge at times.
  • On the plain-language side, if the scenarios are poorly written, maintenance becomes tedious and time-consuming.
  • Most importantly, this framework cannot be used where the business, development, and testing teams work in a loosely coupled manner and have minimal interaction about their progress and updates.

To conclude, BDD is a collaborative way of developing that can be used in agile and iterative development cycles, with business groups actively taking part and creating requirements collaboratively. At Xoriant, we are already using this framework with some of our clients, and it has proved to be efficient. We recommend BDD because it helps us ensure on-time, quality delivery and adds value:

  • By reducing cost and wastage of time and effort through avoiding miscommunication.
  • By focusing on business features and ensuring all use cases are covered.
  • By enabling faster changes and releases as testing effort moves toward automation.

Hope this blog helps you understand the BDD framework better. We will soon come up with a comparison of the Pytest-BDD and Aloe frameworks. Keep watching. Happy learning!



Test Lead


At the Google I/O event held in May 2016, Google announced a new version of its operating system, Android N (7.0), officially named Android Nougat. It introduces new features such as multi-window, drag and drop, and picture-in-picture, which many users had been eagerly awaiting. Some Samsung devices, such as the Note 3 and Galaxy S5, already supported multi-window, but the feature was not available in the pure vanilla OS prior to Android N.
Android N is still in the development stage, but developers can try the N Developer Preview to test these new features.
Let us see how to get started with these new features. First, we need to configure the Android N Preview environment.

Steps to Configure Android N Preview:

  • Download and install Android Studio 2.1
  • Install Android N Preview SDK in Android Studio
  • Update to Java 8 JDK
  • Create a new project and start developing

Please note that the API level for Android N is 24, so in the build.gradle file we need to set the value of compileSdkVersion to 24.
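In the module-level build.gradle this might look like the following sketch (the buildToolsVersion value is illustrative):

```groovy
android {
    // Android N corresponds to API level 24
    compileSdkVersion 24
    buildToolsVersion "24.0.0"

    defaultConfig {
        targetSdkVersion 24
    }
}
```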

For debugging and testing, we also need to either

  1. Setup emulator running Android N.
  2. Install Android N on supported devices namely Nexus 5X, 6, 6P, 9 by enrolling the device to Android Beta Program.

We will focus on two major features namely Multi-window and Drag and Drop.


Android N provides the much-awaited multi-window feature. Users can now actually multitask between two different applications. Please note that only two applications can share the screen at a time. Users can open two applications side by side in split-screen mode: for example, a user can split the screen, chatting with friends on one side while checking a location on the map on the other. The applications can be resized by dragging the divider line that separates them. This feature can be very effective on tablets and phones with large screens.

How to achieve Multi-window support?

To enable multi-window support, we set the android:resizeableActivity attribute to true in the Android manifest file. If this attribute is set to true, the activity can be launched in split-screen mode. The default value of android:resizeableActivity is true, which means we need to specify the attribute only when we do not want an activity to support multi-window mode; if it is set to false, the activity always launches in full-screen mode. An application can thus have some activities that are viewed only in full-screen mode while others support multi-window mode.
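As a sketch, the manifest entries might look like this (the activity names are hypothetical):

```xml
<application android:label="Sample">
    <!-- Supports multi-window (true is already the default on Android N) -->
    <activity android:name=".MainActivity"
              android:resizeableActivity="true" />
    <!-- Opts out: this activity always launches full-screen -->
    <activity android:name=".VideoActivity"
              android:resizeableActivity="false" />
</application>
```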

That’s it! Simple! One setting in Manifest and you have your application as being part of a multi-window support in the OS.


Drag and Drop:

In Android N, the multi-window feature is taken further and enhanced to support drag-and-drop functionality. In previous versions of the OS, users could drag and drop views or data within a single activity. In Android N this goes one step further: once in multi-window mode, users can drag and drop views, and thereby pass data, from one application to another.

How to achieve Drag and Drop?

Drag and Drop functionality can be achieved with the help of Android Drag / Drop framework. This framework includes drag event class, drag listeners and some other helper classes. In Android N, View class supports drag and drop across applications.

To understand this feature, let's take an example. Suppose we have two applications: SampleSource and SampleDestination. Our objective is to drag a view from SampleSource and send the data associated with it to SampleDestination. SampleSource is the application that sends the data (starts the drag event) and SampleDestination is the application that receives it (handles the drop event). Since our applications need to receive drag events, we must implement drag listeners and register our views to listen for those events. Please note that the startDrag() method of View is deprecated in API level 24.

Hence we need to use the startDragAndDrop() method, which takes the following parameters:

  • ClipData clipData: holds the data to be transferred from SampleSource to SampleDestination.
  • View.DragShadowBuilder builder: builds the drag shadow.
  • int flags: a very important parameter that defines the type of operation (read/write) the recipient application (SampleDestination in our case) may perform. These flags can be any of the following newly added fields of the View class.

Newly added fields in View class:

  • DRAG_FLAG_GLOBAL: this flag has significant importance. Setting it enables a view to be dragged across the application's window boundaries, making cross-app interaction possible, provided both applications are built with targetSdkVersion >= 24.
  • DRAG_FLAG_GLOBAL_URI_READ: if this flag is used with DRAG_FLAG_GLOBAL, the target application (SampleDestination in our case) has read access to the URI present in the ClipData object.
  • DRAG_FLAG_GLOBAL_URI_WRITE: if this flag is used with DRAG_FLAG_GLOBAL, SampleDestination has write access to the URI present in the ClipData object.
  • DRAG_FLAG_OPAQUE: if this flag is set, the drag shadow is opaque; otherwise it is semitransparent.

We implement drag listeners in both of our applications. In SampleSource, we initiate the drag event by calling startDragAndDrop() as explained earlier, and in SampleDestination, we override onDrag() method and handle the drop event.

In this way, a recipient application can define a protocol for the type of data it accepts, and any sender application interested in sharing data can send it in the accepted format and perform the desired operation.

This has paved way for developers to build more exciting and interactive applications!!!

Please check out sample projects from this link to understand Multi-window and Drag and Drop features in a better way.


Software Engineer


Mobile devices are an integral part of our lives, and we use multiple applications on them not just to entertain ourselves but to communicate, complete tasks, and more. Often the only way to interact with these devices is to unlock them, tap an application, perform the task, and move on. Voice commands have made significant inroads in popular mobile operating systems like Android and iOS: Google Now and Apple Siri understand and can perform commands that are completely voice driven.

Amazon took the voice-enabled application experience a step further by focusing its research on what would make an optimum voice experience that feels almost natural when interacting with a device. That research culminated in the Amazon Echo, a natural-language, hands-free, voice-enabled wireless device. The Amazon Echo has been a huge success, and consumers have largely embraced an experience that is completely driven by voice and feels natural to most of us.

What does an Amazon Echo device do?

The process of interacting with an Echo device is very simple. It goes something along these lines:

  • You activate it via a wake word; for example, "Alexa" is one of the wake (activation) words.
  • It accepts your voice command and translates it to text. It parses out the command and maps it to what Amazon calls Skills: software programs running in the cloud, written, run, and managed either by Amazon itself or by other developers.
  • The Skill application interprets the request and sends back a response.
  • The Amazon Echo device then speaks out the response.
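The round trip above can be illustrated with a simplified sketch of what a cloud-hosted skill does: receive a JSON request describing the parsed intent, and answer with JSON containing the speech to be read back. The structure is modeled on the Alexa Skills Kit request/response format, trimmed to the essentials; GreetingIntent is a made-up custom intent:

```python
def handle_alexa_request(request: dict) -> dict:
    """Map an incoming intent to a spoken response (simplified sketch)."""
    intent = request.get("request", {}).get("intent", {}).get("name")

    if intent == "GreetingIntent":  # hypothetical custom intent
        text = "Hello from your first skill!"
    else:
        text = "Sorry, I don't know how to help with that."

    # Shape mirrors the Alexa Skills Kit response format (essentials only).
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

# The device's spoken command, already translated to text and parsed:
incoming = {"request": {"type": "IntentRequest",
                        "intent": {"name": "GreetingIntent"}}}
reply = handle_alexa_request(incoming)
print(reply["response"]["outputSpeech"]["text"])
```

The Alexa Voice Service handles the speech-to-text and text-to-speech ends of this exchange; the skill only ever sees structured JSON.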

The magic behind this is the Amazon Alexa Voice Service, which is the intelligent engine that drives the whole experience. This Voice Service is hosted and managed by Amazon.

Amazon Voice Application Ecosystem

Amazon has created not just the Echo range of devices but an entire ecosystem around voice applications, catering to all categories of users: you could be a consumer of the service, a developer adding a new capability to the device, a hardware manufacturer who wants to integrate a voice service into a product, and more. The ecosystem consists of the following:

  • Amazon Echo range of devices are consumer devices
  • Alexa Voice Service to allow manufacturers to integrate Voice into their products
  • Alexa Skills Kit to help developers create custom skills for the platform
  • Alexa Fund is a $100 million fund to fuel innovation on the platform
Alexa Skills Development Kit

One of the key reasons for wide spread mobile penetration was not just the availability of devices but an open ecosystem where the popular mobile OS vendors hosted a Marketplace for applications. These Marketplaces (Google Play and Apple iTunes) allowed independent developers to develop and publish their applications for everyone to install and use. It resulted in millions of applications thereby providing a choice to the consumer and applications in multiple categories.

Amazon has taken a similar approach: voice skills for the Alexa platform can be developed and published in its marketplace. To help developers jumpstart development of new Alexa skills, Amazon has also released the Alexa Skills Kit, which provides boilerplate code in multiple languages like Node.js and Java and allows a developer to quickly understand and develop new skills. Developing skills is straightforward, and any experienced web developer will be able to pick up the process quickly. What is most important is to spend time designing the skill and its voice interaction; Amazon provides documentation on designing an optimum experience.

At present there are about 1,500 skills in the Alexa marketplace, and while several of them are not very useful, these are early days; the marketplace is likely to reward early adopters and movers as consumers push toward voice-enabled applications.


Amazon definitely has a winner on its hands with the Amazon Echo range of devices. We expect these devices to change in form over time, but the fundamental premise of letting users experience purely voice-driven applications is likely to grow. The move to set up a whole ecosystem that allows everyone to participate is a good one and is likely to pay dividends in the future.

For developers, it is important to consider what kind of voice experiences they would like to create. Not every mobile application can be converted to a voice experience, and a good amount of time should be spent designing the voice interaction for the particular skill you want to develop. If you get the experience right, your users will appreciate the completely hands-free way to get the data or complete the task they wanted.


Romin Irani

Principal Architect