Introduction:
Service virtualization is a method of emulating the behavior of specific components in heterogeneous, component-based applications such as service-oriented architectures. It gives software development and QA/testing teams access to dependent system components that are needed to exercise an application under test (AUT) but are unavailable or difficult to access for development and testing purposes. With the behavior of the dependent components "virtualized," testing and development can proceed without accessing the actual live components.
The practice involves creating and deploying a "virtual asset" that simulates the behavior of a real component required to exercise the application under test. It emulates only the behavior of the specific dependent components that developers or testers need to exercise in order to complete their end-to-end transactions.
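A virtual asset can be as simple as a lightweight HTTP endpoint that returns canned responses in place of the real component. The following is a minimal sketch in Python; the endpoint path and payload are illustrative, not taken from any particular tool:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
import urllib.request

# Canned behavior for the dependent service the AUT would call.
# The path and payload below are hypothetical examples.
CANNED = {"/accounts/42": {"id": 42, "status": "ACTIVE"}}

class VirtualAsset(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the stub quiet
        pass

# Serve on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), VirtualAsset)
Thread(target=server.serve_forever, daemon=True).start()

# The AUT (here, a plain HTTP client) exercises the virtual asset
# exactly as it would the real component.
url = f"http://127.0.0.1:{server.server_port}/accounts/42"
with urllib.request.urlopen(url) as resp:
    reply = json.loads(resp.read())
server.shutdown()
print(reply["status"])  # ACTIVE
```

The AUT needs no code changes; pointing its endpoint configuration at the stub's address is enough.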
Available tools in the market:
- CA LISA Service Virtualization
- IBM GreenHat
- HP Service Virtualization
- Parasoft Service Virtualization
- Atos SimulateBox
There is a lot of potential value to be derived from virtualization, but the key to maximizing that value is understanding the expectations and business drivers. Each client planning virtualization adopts one or more goals based upon product claims, press coverage, and input from industry peers. One common virtualization goal is to improve the way IT manages its resources. This improvement may take the form of increased peak capacity, improved resilience, reduced configuration costs, or fewer systems management errors.
Although there may be some debate as to
where to start and which goal saves the most, there is little doubt that
virtualization has value and is here to stay.
Why use Virtualization:
There are both advantages and disadvantages
to using virtualization in your environment. It is critical that you understand
what virtualization can offer in conjunction with your required level of
skills/commitment. It is up to you to reconcile these factors with your
expectations of how virtualization can be used in your environment. Finally, it
is critical to understand that:
Applications aren’t suddenly going to require fewer resources just because they are virtualized.
On the contrary, virtualization adds
overhead and a virtualized application will use more resources than before. A
virtualized application will not run faster unless it is hosted on faster
hardware than it was run on originally. Thus, attempting to virtualize using
your existing hardware is typically a “bad” idea.
It is important to make sure you have
enough storage space, memory, CPU, network bandwidth and other resources to
handle the applications plus the virtualization overhead. If the applications
are business critical, you should plan for worst-case scenarios; however, avoid
dedicating more resources than necessary since this will negatively impact
other virtual machines on this host.
When to Avoid Virtualization:
Regardless of whether it is possible to
virtualize servers and applications in your environment, there are certain
situations in which the potential risks far outweigh any advantages that might
be gained.
Applications that make frequent and unpredictable demands on large parts of the system’s available resources are not ideal candidates for virtualization. A couple of examples are:
- Large database servers. Virtualization of database servers is rarely beneficial. Database server utilization is better improved by employing multiple database instances.
- Application Virtualization type servers, such as Citrix, and other types of servers that already include their own techniques for virtualization.
Additional examples of more common disadvantages are listed later in this guide. In these situations, projects should be analysed on a case-by-case basis to carefully weigh the risks connected to the application or system.
Implementation Strategies:
Before starting, clarify the following:
- Reason for the project (i.e., what are the business drivers?)
- What you are trying to virtualize (e.g., specific functions or applications?)
- How much the project is expected to cost (and to save)
- What risks – both functional and financial – are expected and, more importantly, acceptable.
- Scope of the implementation (i.e., is it a single, focused project, or will there be multiple phases and milestones)
- Other changes that are anticipated in the environment and how they might impact or be impacted by virtualization
Taking Advantage of the Advantages:
As you can see, there are many advantages to virtualization, but you need to understand the realities of those advantages, as well as how to counter the potential, and often related, disadvantages. The following sections help you capitalize on those advantages by:
- Deciding where to start
- Deciding on the appropriate virtualization engine
- Identifying Management Tools and Requirements for that selection
- Identifying change control methods
- Identifying data storage resources and limitations
- Defining pertinent maintenance tasks
- Determining costs as well as return on investment
- Allocating Dedicated Resources
Identify Service Virtualization Candidates:
Service virtualization can be implemented to test end-to-end business functionality and avoid defects surfacing later, during the integration testing phase. The points below help identify good candidates for service virtualization.
- To identify which services to virtualize, first verify that your SV tool supports the protocol involved (for example, the HP SV tool does not support the SWIFT protocol)
- Third-party services outside your control, which may charge for each call
- Services/interfaces that are inaccessible or offer only limited access
- Services/interfaces that are not yet developed, or whose interfaces will take a long time to develop
- Data that is too difficult to source
- Services where security and compliance restrictions limit access
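Several of these constraints (inaccessible services, per-call charges) can be worked around by replaying previously captured traffic. A minimal sketch, assuming hypothetical request/response shapes that would normally be loaded from capture files:

```python
# Given only captured request/response pairs (e.g. when the real
# service is unreachable or charges per call), a stub can match
# incoming requests against the recorded ones.
# The operation names and fields below are hypothetical.
recorded = [  # in practice, loaded from capture files
    {"request": {"op": "getQuote", "symbol": "ABC"},
     "response": {"symbol": "ABC", "price": 101.5}},
    {"request": {"op": "getQuote", "symbol": "XYZ"},
     "response": {"symbol": "XYZ", "price": 7.25}},
]

def replay(request):
    """Return the recorded response whose request matches exactly."""
    for pair in recorded:
        if pair["request"] == request:
            return pair["response"]
    # No match: surface a clear error rather than guessing behavior.
    return {"error": "no recorded response for request"}

print(replay({"op": "getQuote", "symbol": "XYZ"}))
```

Real SV tools match far more flexibly (on protocol fields, with wildcards and data-driven rules), but the record-and-replay principle is the same.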
Service Virtualization Checklists:
Any virtualization project requires thorough planning and careful consideration of which applications to virtualize, in what order and at what pace to virtualize them, what hardware to use, how to configure the environment, and who should be responsible for the various parts of the project. Below is a questionnaire:
- What is the exact requirement/expectation from management or the client?
- Do you know which tool you are going to use to virtualize services?
- Do you know which protocol is used by the service to be virtualized?
- Is the service accessible? If not, do you at least have request and response files?
- Are there any SMEs (subject matter experts) available to help you access the details?
- What is the frequency of access to the virtual service?
- How many services need to be virtualized?
- Is the existing application running without any known defects?
- Are you using any test management tools to store the artefacts generated by the SV tools?
- What is the hardware configuration of the system where the SV tool is installed?
- Where is the application hosted?
- Is the application hosted on an intranet, the internet, or the cloud?
Creating a robust virtual service:
- Always deploy the virtual service on a server, so that everybody can access it
- Parameterize the stub/virtual service with various sets of data (external files such as CSV/TXT/Excel)
- Add strong verification points
- Add pass-through logic so that traffic is routed through the virtual service immediately if the actual service is down
- Always add filters to extract the output generated by the virtual service (this can be fed as input to another service)
- After creating the stub/virtual service, test it with all possible options via automated or manual test cases
- It is good to set benchmarks for the virtual service, such as response time and latency
- It is also worthwhile to execute a performance test on the created virtual service
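The parameterization and fallback points above can be sketched as follows; the CSV layout, field names, and service functions are assumptions for illustration:

```python
# Sketch of a data-driven stub: responses are parameterized from an
# external CSV table, and calls fall back to the virtual service
# when the real service is unavailable.
import csv
import io

# In practice this would be a .csv file maintained by the test team.
CSV_DATA = """customer_id,credit_limit
1001,5000
1002,12000
"""

# Load the parameterization table once at startup.
table = {row["customer_id"]: row
         for row in csv.DictReader(io.StringIO(CSV_DATA))}

def real_service(customer_id):
    # Stand-in for the actual dependent service; here it is down.
    raise ConnectionError("actual service is down")

def virtual_service(customer_id):
    row = table.get(customer_id)
    if row is None:
        # Verification point: unknown inputs fail loudly, not silently.
        return {"error": "unknown customer"}
    return {"customer_id": customer_id,
            "credit_limit": int(row["credit_limit"])}

def call(customer_id):
    """Try the real service; route to the virtual service if it is down."""
    try:
        return real_service(customer_id)
    except ConnectionError:
        return virtual_service(customer_id)

print(call("1002"))
```

Adding a new data row to the CSV extends the stub's behavior without any code change, which is what makes external-file parameterization worth the setup cost.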
Do’s with Service Virtualization:
- Given a predefined scope of transactions, ensure that human resources are scheduled for the service virtualization implementation and remain "in the loop" throughout the complete implementation process.
- Service virtualization assets are created or versioned and validated as part of the software deliverable.
- The entire dev/test team is trained on the system dependencies associated with the application being developed and tested.
- Train the team on the value of a simulated test environment vs. a brittle stub. It's important to stress how service virtualization enables all team members to access the same simulated asset, which has been confirmed to represent the expected behavior. Be sure to communicate how this reuse and consistency increases velocity while reducing the risk of defects slipping into the final product.
- Base your implementation strategy on frequency and severity of constraints associated with particular dependencies. For example, dependencies that are frequently offline or difficult to configure should be prioritized over broadly-accessible internal web services.
- Clearly define the drivers for implementing service virtualization and use that to prioritize access.
- Define the core reason for adoption (access restriction, privacy, security, risk based, etc.), define a structured pilot project, and ensure that the pilot project achieves the primary objective for acquiring the technology.
- Don’t permit "service virtualization issues" to be used as an excuse for ignoring failing test cases. If a test failure is indeed caused by an issue with a service virtualization asset, updating the asset's behavior should be non-negotiable.
- Consider how much of the dependent application you really need to access. Understand how service virtualization relates to other simulation technologies (server virtualization, virtual or cloud-based labs, etc.) and determine what is the most appropriate, given the complexity of the dependency and your level of access to it.
- Don't ignore training. Leveraging simulation technology requires both technical and cultural changes. Organizations need to be aware of the benefits as well as the risks.
Don’ts with Service Virtualization:
- Don’t under-allocate resources for adoption
- Don’t treat updating service virtualization assets as an ad hoc task
- Don’t let a limited understanding of dependencies reduce the value that can be derived from service virtualization
- Don’t fail to educate developers on why a reusable artifact is more valuable than isolated stubbing efforts
- Don’t start too simple: eliminate a true constraint
- Don’t over-restrict access through a pure CoE (Center of Excellence) approach
- Don’t scope the pilot project too broadly
- Don’t allow developers and testers to use "service virtualization asset shortcomings" as an excuse to ignore test failures
- Don’t let management set unrealistic expectations
- Don’t provide inadequate training on using a simulated test environment