Question

I am trying to automate integration testing in our team, and I am wondering whether parameterizing tests is good or bad design.

My problem is that the integration tests have to run some Perl scripts from our codebase that work with a database, and compare the data in the database before and after the test. I would like to set a flag for all automated tests that reverts the database changes, so that different tests do not interact with each other. But I also want to be able to disable the flag on purpose, so I can manually inspect the data without the test overwriting them immediately after it finishes.

Is there a cleaner or more common solution for this? I know about database and data mocking, but I cannot use that here.


Solution

When we speak of running tests with parameters, what we commonly mean is running a bunch of tests with A=5, immediately followed by running the same bunch of tests with A=6, and so on, all of this together constituting a single test run.
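For contrast, a parameterized run might look roughly like this; the values and the `run_test_suite` name are only for illustration, not part of your setup:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The same suite of tests executed once per parameter value;
# all of these executions together form one test run.
sub run_test_suite {
    my ($a) = @_;
    print "Running all tests with A=$a\n";
    # ... invoke the individual tests here ...
}

run_test_suite($_) for (5, 6, 7);
```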

What you seem to need instead is to run all your tests just once, but with a specific configuration, which may change from run to run.

So, all you need to do is have each of your tests read a configuration file at startup that tells it whether it should revert, erase, and so on.
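A minimal sketch of that idea, assuming a simple key=value config file; the file name `test_config.ini`, the `revert_db_changes` key, and the `revert_database_changes` subroutine are all placeholders for whatever your harness already provides:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Read a simple key=value config file to decide whether this test
# should revert its database changes after it finishes.
my %config;
open my $fh, '<', 'test_config.ini' or die "Cannot open test_config.ini: $!";
while (my $line = <$fh>) {
    chomp $line;
    next if $line =~ /^\s*(#|$)/;              # skip comments and blank lines
    my ($key, $value) = split /\s*=\s*/, $line, 2;
    $config{$key} = $value;
}
close $fh;

my $revert = $config{revert_db_changes} // 1;  # default: revert, keep tests independent

# ... run the Perl script under test, compare before/after data ...

if ($revert) {
    revert_database_changes();
}
else {
    print "Leaving database changes in place for manual inspection\n";
}

# Placeholder for whatever cleanup mechanism your scripts already use.
sub revert_database_changes {
    print "Reverting database changes\n";
}
```

The automated runs would ship a config with `revert_db_changes=1`, and you would flip it to `0` only when you want to inspect the data by hand afterwards.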
