Debugging, right? I'm not sure if I love it or hate it. The part of me that loves it enjoys the learning that comes with it. The part that hates it usually feels like debugging is wasted time. In this article I'd like to share a couple of tips on how I usually debug tests in Cypress, in the hope of helping you with your own debugging.
When using Cypress in GUI mode, you can use the `.pause()` command to stop your test at a problematic spot. I usually do this to look at the test right before the problematic assertion or action that caused it to fail. After pausing your test, you can interact with your page, examine its state and then click the play button to continue with the test.
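A minimal sketch of how I use it (the selectors here are made up for illustration):

```js
it('adds a new item', () => {
  cy.visit('/');
  cy.get('[data-cy=new-item]').type('new item{enter}');
  cy.pause(); // test stops here in GUI mode, click play to continue
  cy.get('[data-cy=item]').should('contain.text', 'new item');
});
```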
You don't need to worry about leaving this command in your tests, since it is ignored when you run them in headless mode. Read more about `.pause()` in the Cypress docs.
Cypress is all JavaScript and it runs inside the browser, where you can make use of all the powers of DevTools. If you are not yet comfortable with using the debugger, or don't feel like using it, a simple `console.log()` is your friend.
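A sketch of what that can look like with `cy.intercept()` (the route and alias here are illustrative):

```js
// intercept the request triggered by the action below
cy.intercept('POST', '/api/items').as('createItem');
cy.get('[data-cy=new-item]').type('new item{enter}');
// wait for the request to finish, then log its response body
cy.wait('@createItem').then((interception) => {
  console.log(interception.response.body);
});
```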
This code will output the response body of an intercepted request. You can see the output in your browser console. I've seen this confuse a few people, as they would look for the output in the terminal. But while you start your Cypress runner using the `npx cypress open` command, the Cypress script itself runs inside the browser. And that's where your `console.log()` output will be.
Sometimes you need to debug a test because it's flaky. In my experience, the biggest source of flakiness is the speed at which a test is executed. There's a really good section on the Cypress blog on the whole topic of how to stabilize a flaky test.
But knowing a test is flaky is only part of the story. To stabilize a test, you need to find the source of the problem. When fighting different race conditions (a click fires too fast, an assertion runs too fast, the network is unstable), I tend to run my tests multiple times. This is because I often get into situations where a test fails on the pipeline but passes locally. Running a test multiple times usually surfaces the problem. You can use a standard `for` loop, but I've enjoyed wrapping a single test with Lodash's `times` function.
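A sketch of that pattern (Cypress bundles Lodash and exposes it as `Cypress._`; the test body is just a placeholder):

```js
// run the same test 10 times to surface flakiness
Cypress._.times(10, (attempt) => {
  it(`adds a new item, attempt #${attempt + 1}`, () => {
    cy.visit('/');
    cy.get('[data-cy=new-item]').type('new item{enter}');
    cy.get('[data-cy=item]').should('contain.text', 'new item');
  });
});
```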
This might seem like an obvious one, but many times I tend to forget that the first thing I should look at is not the error itself, but the context in which the error happens. Cypress records a video of every spec automatically in headless mode, but recording can be disabled if screenshots are good enough. I have written an article on how you can improve your screenshots to make them more useful for debugging purposes.
Most often, people disable video recording for speed purposes, but you can actually make a pretty good compromise. Setting `videoUploadOnPasses` to `false` in your `cypress.json` will upload the video only if there is a failed test in your spec. This can shave off minutes from your test run.
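In `cypress.json`, that looks like this (video recording itself is on by default in headless mode):

```json
{
  "videoUploadOnPasses": false
}
```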
The timeline in the GUI is a great debugging tool. You can look at the state of your app at each stage of your test and examine what might have caused the failure. I see a common error happening with tests like the one below.
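Here's a sketch of such a test (the selectors and the expected position in the list are illustrative):

```js
it('adds an item to the end of the list', () => {
  cy.visit('/');
  cy.get('[data-cy=new-item]').type('new item{enter}');
  cy.get('[data-cy=item]')
    .eq(2) // expecting the new item to be the third one in the list
    .should('contain.text', 'new item');
});
```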
I've seen a test like this fail a lot. There is a lot happening between typing in the new item and the item actually appearing on the page. There might be an HTTP request, a websocket message, a reorder, a re-rendering of the list. Any of these processes might have caused the test to fail.
This is because Cypress will automatically retry an assertion together with the previous command. But it will only retry the previous command, not the entire command chain. If you hover over the `.eq()` command in our test, you will see that we are actually not getting the right element. If an item renders with a delay, you get stuck with the state of your app as it was when the `.get()` command ran.
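Here's a sketch of what might be happening (the item list and timing are made up):

```js
// state of the app when the .get() command resolved,
// only two items had rendered so far:
// 1. 'pay bills'
// 2. 'walk the dog'
cy.get('[data-cy=item]') // resolves with the two existing elements
  .eq(2) // retried, but only against those two elements
  .should('contain.text', 'new item'); // third item renders too late, test fails
```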
I explore this topic a little more in one of my other articles.
Cypress Dashboard is free to use for up to 500 monthly recordings, or unlimited if you are working on an open source project. If you record your test results to the Dashboard, you and your team can look into screenshots and examine the test failures. But not only that. I especially enjoy the analytics overview, where I can look into the most common failures or the flakiest tests. This gives me some great pointers to what may be the greatest weak points in my tests. I still need to roll up my sleeves and use the previously mentioned tools for debugging, but the analytics provide a great toolset for finding issues proactively.
If you have enjoyed this, feel free to let me know. I write posts like this every week, so if you'd like to get notified, put your email down below this article or follow me on Twitter and LinkedIn, where I usually let the world know that a new article is out.
From time to time I send some useful tips to your inbox and let you know about upcoming events. Sign up if you want to stay in the loop.