Consecutive Testing Pros
Consecutive testing is very easy to set up and track. If you conduct a consecutive test, all of your Site Catalyst reports become A/B reports based only on the calendar range you select.

Take, for instance, the landing page example mentioned previously. All of the reports for week “A” in Site Catalyst relate to landing page A. Comparing landing page A to B is as simple as selecting the appropriate week in the Site Catalyst reports.

This option also requires fewer resources, because you will not have to set anything up to send different versions to different groups of people.

Consecutive Testing Cons
The problem with consecutive testing is that it produces test results that are not as accurate as those returned by synchronous testing. There are a number of external variables that cannot be controlled and that might change your data. For example, changes in the stock market, politics, news, competitor behavior and even weather could all affect your data and vary from one week to the next. The problem is that when you do your analysis, it’s impossible to be sure whether the modification you made caused the conversion changes (and their degree) or whether the result was due to one of the variables that can’t be controlled. This issue varies from client to client, because some Web sites are more sensitive to outside influences than others.

Again, take our landing page example. The data might say that week B, which used landing page B, resulted in a lower number of conversions. But suppose that this was also a week when our competitors launched a major sale campaign. There would be no way to properly determine whether the lower conversions for that week were because of the landing page or the competition’s sale.

Another big issue with consecutive testing is that you are betting the farm on it. You are risking 100% of your traffic, and therefore 100% of your conversions, on the new version of the page. If it is a bad change, you just gave the bad version to EVERYONE.

Consecutive Testing Pointers
If you choose to do consecutive A/B testing, be sure to watch your results very closely. If the new version has negative effects on the site, you want to be able to change it back before it does much damage.

Be sure to understand the time frames for both (all) versions. This seems obvious, but if you have been running version A for a month and then try version B for a week, you will obviously have to measure by conversion percentages, not total conversions.
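The adjustment can be sketched in a few lines of Python; the visitor and conversion counts below are hypothetical, chosen only to show why raw totals mislead when test periods differ in length:

```python
# Normalize by conversion rate when test periods differ in length.
# All counts here are hypothetical.
def conversion_rate(conversions, visitors):
    """Return conversions as a fraction of visitors."""
    return conversions / visitors

# Version A ran for a month, version B for only a week.
rate_a = conversion_rate(conversions=1200, visitors=40000)  # 3.0%
rate_b = conversion_rate(conversions=350, visitors=10000)   # 3.5%

# B converts better despite far fewer total conversions.
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}")
```

Here version B produced fewer total conversions (350 vs. 1,200) but a higher rate, which is the comparison that actually matters.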

Also make sure that your test period is long enough to yield a valid sample. If your traffic rises and falls from day to day, for example, you will want to test for at least a week, provided that week-to-week traffic is representative of your usual group of visitors.
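A quick sketch shows why a full week matters; the daily visitor counts below are hypothetical:

```python
# Day-to-day traffic swings argue for testing at least a full week.
# The daily visitor counts below are hypothetical.
daily_visitors = [5200, 4800, 4600, 4900, 5100, 2100, 1900]  # Mon-Sun

weekday_avg = sum(daily_visitors[:5]) / 5   # average Mon-Fri traffic
weekend_avg = sum(daily_visitors[5:]) / 2   # average Sat-Sun traffic
```

Weekday and weekend traffic differ sharply here (4,920 vs. 2,000 visitors per day), so a test shorter than a week would sample an unrepresentative mix of visitors; a full week, or a multiple of one, captures the whole cycle.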

Synchronous Testing
Synchronous testing is the process of testing both versions of your page or Web element (A and B) at the same time by splitting your traffic, directing visitors to one version or the other, and then tracking them based on the version they interacted with.

For example, if we used two different landing pages in our A/B test, we would send some of our visitors to page A and some to page B. We would then use the Site Catalyst tool to track the conversion rate of both groups to determine which was more effective.
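A minimal sketch of this split, assuming simple random server-side assignment (the function and variable names are illustrative, not part of Site Catalyst):

```python
import random

def assign_version(split_b=0.5, rng=random):
    """Send a visitor to version B with probability split_b, else to A."""
    return "B" if rng.random() < split_b else "A"

# Tally visits and conversions per version, so each group is tracked
# against the page it actually interacted with.
counts = {"A": {"visitors": 0, "conversions": 0},
          "B": {"visitors": 0, "conversions": 0}}

def record_visit(version, converted):
    counts[version]["visitors"] += 1
    if converted:
        counts[version]["conversions"] += 1
```

Because both groups are measured over the same calendar period, external events (a competitor's sale, a news cycle) hit version A and version B alike, which is what removes the uncontrolled-variable problem of consecutive testing.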

Synchronous Testing Pros
When properly deployed, synchronous tests are more accurate than their consecutive counterparts because they test both variations at the same time, eliminating possible influence from external variables. This also means that you can decide how much traffic you want to risk on the new version(s). Some people choose to split the traffic 50/50 between the old and new versions. This is certainly the easiest way to do synchronous testing, but again, you are risking 50% of your traffic on the new version (maybe that belongs in the “Cons” section). In any case, you can also choose to risk a very small percentage on the new version and track success with conversion metrics. As noted, this is the most accurate way to do A/B testing.
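An uneven split can be sketched the same way; the 10% share and the tallies below are hypothetical, chosen to show that comparing by rate keeps the groups comparable even when one is much smaller:

```python
import random

def choose_page(share_b=0.10, rng=random):
    """Route a visitor to page B with probability share_b, else page A."""
    return "B" if rng.random() < share_b else "A"

def conversion_rates(stats):
    """stats maps version -> (visitors, conversions); returns rates."""
    return {v: conv / vis for v, (vis, conv) in stats.items()}

# Hypothetical tallies after a week of a 90/10 split:
rates = conversion_rates({"A": (90000, 2700), "B": (10000, 350)})
```

Only 10% of traffic was risked on B, yet the rate comparison (3.0% vs. 3.5% here) is still apples-to-apples, because both groups were measured under identical conditions.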