Frequently Asked Questions


What is test automation?

Test automation means that no human input is required to run a test. It is typically associated with automating identified test cases – whether for regression, release, or integration testing – which helps the team execute them faster and reduces overall test execution time.

What is the difference between automated testing and test automation?

Automated testing is the act of conducting specific tests via automation (e.g. a set of regression tests) as opposed to conducting them manually, while test automation refers to automating the process of tracking and managing the different tests.

What is a web application?

A web application is a software application that runs on a remote server and is accessed through a web browser. It is created using a combination of programming languages and web application frameworks, allows user interactivity, and is designed for many users. Its main goal is to interact with users and respond to their various requests.

What is the difference between a web application and a website?

Websites are informative in nature. Their primary purpose is to convey information to the end user, whether in the form of news, like CNN, or recipes, as you’ll find on recipe.com. There is little to no interaction on the part of the visitor, other than possibly submitting an email address to receive a monthly newsletter or performing a search. In contrast, web applications are usually responsible for some form of interaction with your visitors: visitors can request information or manipulate data.

What is difference testing?

While assertion-based test automation, for example, aims at verifying individual “rules” (called assertions) upon test execution, difference testing aims at finding differences between individual test executions.
These can be between different browser versions (cross-browser testing), different devices and screen resolutions (cross-device testing) or between different versions of the software (regression testing). Difference testing can be implemented by using Golden Masters.

What is Golden Master-based software testing?

A Golden Master is a (partial) copy of a previous output of the software, against which the current output of the software can be compared (implementing difference testing, see above).
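
To make this more concrete, here is a minimal, tool-agnostic sketch in Java of how such a check can work: on the first run the current output is recorded as the Golden Master, and on later runs the current output is compared against it. The file name and output value are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class GoldenMasterCheck {

    // Compares the current output against a stored Golden Master file.
    // On the very first run, the Golden Master does not exist yet and is simply recorded.
    static void checkAgainstGoldenMaster(String currentOutput, Path goldenMaster) throws IOException {
        if (!Files.exists(goldenMaster)) {
            Files.writeString(goldenMaster, currentOutput); // record the Golden Master
            System.out.println("Recorded new Golden Master: " + goldenMaster);
            return;
        }
        String expected = Files.readString(goldenMaster);
        if (!expected.equals(currentOutput)) {
            // A difference was found – a human now decides whether it is a bug or an intended change.
            throw new AssertionError("Difference to Golden Master detected:\nexpected: "
                    + expected + "\nactual:   " + currentOutput);
        }
    }

    public static void main(String[] args) throws IOException {
        // "currentOutput" stands in for whatever output your software produces (e.g. a rendered page).
        String currentOutput = "<h1>Hello World</h1>";
        checkAgainstGoldenMaster(currentOutput, Path.of("index.gm.txt"));
    }
}
```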

How does retest use AI for test automation?

retest uses two types of AI: genetic algorithms and neural networks (see below). AI can only check for obvious errors like crashes – it doesn’t know what the software should do or how it should behave; if it did, it could generate the software instead of testing it. That is why we use AI to do Golden Master-based difference testing: see what the result value is in a previous version (e.g. 1.0), then check whether it’s the same result in the current version (e.g. 1.1).

How does retest use neural networks during test generation?

When generating tests, it is often vital to generate realistic tests: tests with actions that a typical user would perform. Since training neural networks requires a certain amount of data and is computationally expensive, it is a good thing that we don’t need to do this per customer or per application.

Instead, we train our neural networks on how humans typically use typical software (i.e. which button a human would probably press next). These neural networks essentially codify typical user interface and user experience (UI/UX) guidelines. In addition, we can train our neural networks for specific customer needs, e.g. based on existing tests. We combine this approach with genetic algorithms (see below).
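
As a simple illustration of the action-selection idea (not our actual model), the following sketch picks the next action with the highest predicted probability; scoreAction is a hypothetical stand-in for the trained neural network.

```java
import java.util.Comparator;
import java.util.List;

public class ActionSelection {

    // Hypothetical stand-in for the trained neural network: scores how likely
    // a typical user would perform the given action in the current state.
    static double scoreAction(String state, String action) {
        // e.g. on a login page, a user most likely fills the form before clicking anything else
        if (state.equals("login-page") && action.startsWith("type")) {
            return 0.8;
        }
        return 0.1;
    }

    // Select the candidate action with the highest predicted score.
    static String selectNextAction(String state, List<String> candidateActions) {
        return candidateActions.stream()
                .max(Comparator.comparingDouble(action -> scoreAction(state, action)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<String> candidates = List.of("clickLogin", "typeUsername", "clickLogo");
        System.out.println(selectNextAction("login-page", candidates)); // typeUsername
    }
}
```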

How does retest use genetic algorithms during test generation?

While we use neural networks for action selection as described above, we employ a genetic algorithm to control the overall flow of the test generation process. Here we use a so-called multi-objective genetic algorithm, which optimizes for three distinct objectives:

  • Bugs: any form of found bug (e.g. HTTP/backend errors).
  • Coverage: any form of coverage (e.g. visited URLs).
  • Costs: any form of cost (e.g. test suite length).

Bugs and coverage are maximized, whereas costs are minimized. To do so, the genetic algorithm applies different operators to mutate and cross over test suites – for example, mixing test cases from two test suites. The resulting test suites are Pareto-optimal.
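
As a small illustration of the multi-objective idea (a simplification, not our actual implementation), the following sketch checks whether one generated test suite Pareto-dominates another, maximizing bugs and coverage while minimizing costs.

```java
public class ParetoComparison {

    // Simplified objective values of a generated test suite.
    record Objectives(int bugsFound, int coverage, int cost) {}

    // Suite a dominates suite b if it is at least as good in every objective
    // and strictly better in at least one (bugs/coverage maximized, cost minimized).
    static boolean dominates(Objectives a, Objectives b) {
        boolean atLeastAsGood = a.bugsFound() >= b.bugsFound()
                && a.coverage() >= b.coverage()
                && a.cost() <= b.cost();
        boolean strictlyBetter = a.bugsFound() > b.bugsFound()
                || a.coverage() > b.coverage()
                || a.cost() < b.cost();
        return atLeastAsGood && strictlyBetter;
    }

    public static void main(String[] args) {
        Objectives suiteA = new Objectives(3, 120, 40);
        Objectives suiteB = new Objectives(2, 120, 55);
        System.out.println(dominates(suiteA, suiteB)); // true: more bugs, same coverage, lower cost
    }
}
```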

On top of this, we create a graph-based representation of states and actions of the tested software. This model can be extended with pre-defined tests, which allows the test generator to pass login masks or non-trivial forms.
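
A minimal sketch of such a graph-based model (class and action names are hypothetical): states are nodes, user actions are labeled edges, and a pre-defined login sequence can be seeded into the model.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class StateActionModel {

    // state -> (action -> resulting state)
    private final Map<String, Map<String, String>> transitions = new HashMap<>();

    void addTransition(String fromState, String action, String toState) {
        transitions.computeIfAbsent(fromState, s -> new LinkedHashMap<>()).put(action, toState);
    }

    Set<String> knownActionsIn(String state) {
        return transitions.getOrDefault(state, Map.of()).keySet();
    }

    public static void main(String[] args) {
        StateActionModel model = new StateActionModel();
        // Seed the model with a pre-defined test so generated tests can get past the login mask.
        model.addTransition("login-page", "type username", "login-page");
        model.addTransition("login-page", "type password", "login-page");
        model.addTransition("login-page", "click 'Login'", "dashboard");
        model.addTransition("dashboard", "click 'Logout'", "login-page");
        System.out.println(model.knownActionsIn("login-page"));
    }
}
```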

What about using AI for unit testing?

Using AI for unit testing is still an area of ongoing research. When generating unit tests, there are many more challenges than when generating tests against an interface (e.g. the GUI):

  • A failure may be ok (e.g. receiving a NullPointerException when passing null; see the example after this list).
  • Creating a valid initial state may be challenging (e.g. creating a database connection, loading a configuration, etc.).
  • What is a sensible “next action”? Many methods are available, but often an implicit “call protocol” exists (e.g. open before close).
  • Parameters may be of complex types (e.g. a “user” object, a “database connection”, etc.).
  • These tests are much more prone to becoming invalid when changes occur, so they do not serve well as regression tests.
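
To illustrate the first point, a JUnit 5 test can explicitly state that a failure is the expected behaviour; whether that is the right specification is a judgement an AI cannot make on its own.

```java
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.util.Objects;
import org.junit.jupiter.api.Test;

class ExpectedFailureTest {

    @Test
    void nullInputIsRejected() {
        // Here the NullPointerException is not a bug but the specified behaviour.
        assertThrows(NullPointerException.class, () -> Objects.requireNonNull(null));
    }
}
```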

If you still want to know more about that topic, we recommend checking out Evosuite.

What is cross-browser testing?

Cross-browser testing is the process of testing web applications across multiple browsers. It involves checking the compatibility of your application across different web browsers and ensuring that your web application works correctly in each of them.
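
As a simple illustration using Selenium (assuming the respective browser drivers are installed), the same check can be run against several browsers; the URL and the title check are placeholders.

```java
import java.util.List;
import java.util.function.Supplier;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CrossBrowserCheck {

    public static void main(String[] args) {
        // Run the same check in every browser we care about.
        List<Supplier<WebDriver>> browsers = List.of(ChromeDriver::new, FirefoxDriver::new);

        for (Supplier<WebDriver> browser : browsers) {
            WebDriver driver = browser.get();
            try {
                driver.get("https://example.com"); // placeholder URL
                if (!driver.getTitle().contains("Example")) {
                    throw new AssertionError("Unexpected title in " + driver.getClass().getSimpleName());
                }
            } finally {
                driver.quit();
            }
        }
    }
}
```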

What is review? Can it run on-premise or is it a web-based tool?

review is our efficient and intuitive GUI that lets users accept or ignore changes easily and seamlessly. With review, developers and testers have a reduced learning curve because of its patented 1-click mechanism.

Unlike its competitors, review is not a SaaS tool; it is a fully functional stand-alone product that works offline. This takes the detested chore of test maintenance off the user’s workload. A web-based version is currently in development.

How does review execute tests?

review doesn’t execute tests by itself. It can be combined with standard build tools (like Maven or Gradle) to execute the tests, and integrated with CI/CD tools (like Jenkins or Travis) to execute the tests in the cloud. Upon execution of those tests, test report files are created that can be loaded into review or the recheck.CLI to easily maintain the Golden Masters.

What does recheck-web test by default?

recheck-web can easily be configured (via its recheck.ignore file) to serve as a cross-browser or cross-device testing tool, or it can be configured to only detect changes in content. This depends on how you want to use recheck-web.

Can I integrate review with my CI/CD pipeline?

review is a GUI and as such cannot be integrated directly. But recheck-web is a transparent wrapper for Selenium that can easily be integrated into your CI/CD pipeline and existing tool stack.
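
For example, a recheck-web check can live inside an ordinary JUnit/Selenium test, which your pipeline then runs like any other test. The sketch below follows the publicly documented recheck-web usage; exact class names and setup may differ between versions, and the URL is a placeholder.

```java
import de.retest.recheck.Recheck;
import de.retest.recheck.RecheckImpl;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class MyRecheckWebTest {

    private WebDriver driver;
    private Recheck re;

    @BeforeEach
    void setUp() {
        driver = new ChromeDriver();
        re = new RecheckImpl(); // creates/compares against the Golden Master
    }

    @Test
    void index() {
        re.startTest();
        driver.get("https://example.com"); // placeholder URL
        re.check(driver, "index");         // compare the page against the Golden Master
        re.capTest();                      // fail the test if differences were found
    }

    @AfterEach
    void tearDown() {
        driver.quit();
        re.cap(); // produce the report file for review or the recheck.CLI
    }
}
```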

Does review produce a lot of false positives?

review comes with a powerful ignore mechanism that easily lets you ignore irrelevant changes, either at the attribute or element level or via JavaScript rules. Correctly configured, review results in very robust tests.

Can I provide retest with feedback?

We live and thrive on feedback. Please provide as much as you can, via chat (see below), email, customer support, our GitHub repositories, or any other form that is convenient for you.

What type of support do you have?

You can get support through email, phone, and in-app chat. You can also read our documentation.

More Questions?

We’re happy to answer any questions you may have about retest products. Send us a message and we’ll get back to you shortly.