115: Testing Issues and BigTest Solutions

50:07

In this internal episode, Charles and Wil talk about testing issues and BigTest solutions. Pieces of the testing story are discussed, such as the start and launch
application, component setup and teardown, interacting with the application and component, convergent assertions, and network. Then they talk about testing issues: the fact that cross browser and device-simulated browsers are not good enough, maintainability and when and when not to DRY (RYE), slowness and why (acceptance) testing is slow, portability and why tests are coupled to the framework, and reliability. Finally, they talk about BigTest solutions:

  • @bigtest/cli to start / launch (Karma recommended for now)
  • @bigtest/react, @bigtest/vue, etc. for setup & teardown
  • @bigtest/interactor for interactions
  • @bigtest/convergence for assertions
  • @bigtest/network in the future (Mirage recommended for now)

Resources:

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello, everybody and welcome to The Frontside Podcast, Episode 115. My name is Charles Lowell, this episode's host and a developer here at the Frontside. With me today to talk some shop is Mr Wil Wilsman.

WIL: Hello.

CHARLES: Hello, Wil.

WIL: How's it going?

CHARLES: It's going good. I'm actually pretty excited to jump into this topic because we're going to be talking about some of the big things that are happening at Frontside and some of the things that we've been developing for almost the last year.

WIL: Yeah. It's been about a year now.

CHARLES: It's been about a year and we've talked about it on various podcasts, but we're going to be talking about it again because there's just been so much progress that we've made, and I think a lot of clarity in what we're going for here when we talk about BigTest and testing big, and how we want to roll out the BigTest framework. We just have a lot more experience using it on a number of different projects, so we get to talk about that today.

Before we get started, I just wanted to talk a little bit about what BigTest is, both in terms of the framework and also the philosophy. Wil, you're the one who works the most on BigTest. When you think about it philosophically, what does BigTest mean to you?

WIL: It's the size of your test, not a physical size like size in storage, but how much your test actually does. The test itself can be very small, as our tests are, but it tests the whole application, from the user interacting with it down to the network requests. That's the definition of the philosophy of BigTest to me. It's to test your application from the biggest point of view.

CHARLES: Actually, achieving that can be surprisingly difficult, especially in a frontend JavaScript application and there are a lot of solutions out there for testing and we've talked about them. One of the questions that arises is when we talk about BigTest, what exactly are we talking about? Are we talking about a product that you can download and install? Are we talking about the philosophy that you just outlined? Or are we talking about the individual pieces of software that make that philosophy real? I think the answer is we're kind of talking about all three but we want to take this episode to talk about where we're going with the product.

What we've identified is the subcomponent pieces of that product. In other words, in order to get started testing big, what are the things that you need to think about? What are the things that you need to do? And then what are the component pieces? Because one of the things that I think is very important to us is that you be able to arrive at wherever you are in your project, whatever framework you are using, whatever current testing solution and be able to begin using BigTest. That means, you might be using some of it or you might be using a lot of it but we want to meet you exactly where you are, so that you can then, get onboarded and start testing big.

WIL: Yeah. Definitely an important distinction that we get confusion about is what BigTest is. People just assume their whole test suite is BigTest, but we use other parts ourselves: we use Mocha, which is not part of BigTest. We use Chai, which is not part of BigTest. We use Mirage, which is kind of part of BigTest but definitely didn't originate there, and Karma and things like that. BigTest isn't your testing suite. It's not one thing you go grab to start writing tests. It's small pieces that you can use in conjunction with other small pieces, just to make it really easy and flexible to test your application.

CHARLES: Exactly. Because it turns out that there's a lot going on in the application. Maybe we should talk about what some of those pieces are that you might want to start using BigTest with, or that you might need in order to test big, I guess I should say. What's a good place to start? Let's start by talking about some of the issues that you run into when you're testing big. Then we can talk about what pieces of the testing story fit in to solve those issues.

One of them is you need to test that your application works, like actually works. That means you need to be able to test on a multiplicity of browsers, for example. Even limiting ourselves to the domain of web applications, there are actually a shockingly large number of browsers. It's not just Chrome. It's not just Safari. There's Mobile Chrome and Mobile Safari, which are subtly different. There's Edge, and I'm sure Mobile Edge is slightly different too, so you want to be able to test cross-browser, right?

WIL: Yeah, absolutely, and things like Nightmare and JSDOM and other simulated browsers -- we don't necessarily think those are the best tools for writing BigTests, because we want to ensure that those browser quirks are caught and tested as well.

CHARLES: This is not theoretical. Sometimes the parser is slightly different and you have something that throws a syntax error in Safari or in Internet Explorer and your whole app is completely busted. If you had just taken the time to even try loading the app in that browser, you would have caught that. That's where I've been bitten many times.

WIL: Yeah, and one that I just saw come up yesterday, which comes up frequently, is not closing your CSS selector. Chrome doesn't really care -- most browsers don't care too much -- but that will fail in Edge, and how it fails depends on what you're missing. But mostly, Firefox and Chrome don't care about that kind of thing.

CHARLES: Right. It seems like the majority of testing solutions are kind of focused around Headless Chrome or some variation of Electron, so that entire class of really dumb errors never gets caught. Like I said, actually catching them takes less than a millisecond of CPU time: just load the app in the browser and see that the thing doesn't work. Unfortunately, they can be catastrophic errors, but the problem is how you actually do it.

We want to test cross-browser. This is something that we want to do. For me, I just can't imagine shipping an application without having some form of cross-browser testing, some capability of being able to say, "We want this to work on these eight browsers, so we're going to test it on these eight browsers." But how do you actually go about doing that?

WIL: Right now, we are working on the BigTest CLI, which will help us launch browsers, but that's not complete yet. It still has some bugs. For the meantime, we've been using Karma, which is great. Basically, you just have this service that's able to find the browser binaries on the system and launch them pointing at localhost with your app loaded up, and your normal development server takes care of loading up the tests and running them. Karma and the BigTest CLI are just there to capture output and launch those separate browsers.
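For a concrete picture, a minimal Karma setup along these lines might look like the sketch below; the file globs and browser list are illustrative, and the usual plugins (karma-mocha plus the browser launchers) are assumed to be installed:

    // karma.conf.js -- a minimal sketch, not a complete config
    module.exports = function(config) {
      config.set({
        frameworks: ['mocha'],                   // injects the Mocha adapter into each browser
        files: ['tests/**/*-test.js'],           // the bundled tests Karma serves up
        browsers: ['ChromeHeadless', 'Firefox'], // binaries Karma locates and launches
        reporters: ['progress'],                 // output captured back from each browser
        singleRun: true                          // launch, run, collect results, exit
      });
    };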

CHARLES: Yeah. I remember when I was first working with Karma, and I think Testem is another tool in this space. There's Testem, Karma, and BigTest actually, since we're developing a launcher, because launching is something that you're going to need, but it's such a weird problem. I feel like with the browser launchers, there are three levels of inversion of control, because you're starting a server that then starts another process, which then calls back to your server, which then loads the app resources, which then loads the tests and then runs the tests. There's a lot of sleight of hand that has to happen and –

WIL: Including injecting the adapter that you use, like the Mocha adapter or the Jasmine adapter, that ends up reporting back to the CLI. That's something that Karma and Testem and BigTest will handle for you.

CHARLES: Right, so you're fanning out the test suite to a suite of browsers and then collecting the results, but basically, you need some sort of agent living inside the browser that's going to act on behalf of the test suite to collect the results. I remember when I first came into contact with Karma and Testem, I was like, "This is so unnecessarily complex," but then I used it for a while, and while I think there is some complexity that can be removed, if you want to do cross-browser testing, a certain amount of that ping-ponging is just necessary. It's something that's actually quite complex that you need to have in your stack, in your toolbox, if you want to truly test big.

WIL: Yeah, and all these solutions have mechanisms for detecting when the browser has launched, restarting the browser based on health checks, etcetera -- things that you wouldn't think about when just loading up a browser, but that you need to think about when you're doing automated testing.

CHARLES: What is it that sets apart, for example, the launcher solutions? We kind of call this class of solutions launchers: Testem, Karma, the BigTest CLI. What is it that sets BigTest CLI apart from, say, Karma and Testem?

WIL: We're trying to be as minimal-config as possible and just really easy to get started with. Karma has a lot of plugins that you need to make sure you have installed and loaded, with options set for those plugins. Testem has some stuff bundled, but it still requires this big chunk of config at the beginning that you need to pass in for whatever you're doing. We're trying to avoid that with BigTest CLI, and one of the ways that we're able to avoid that is by just letting your bundler handle bundling the tests. In Karma, you need karma-webpack or something. Testem has some stuff that it needs, and really, we just want an in-testing mode. When you're in the testing environment, you just change your index to point at your tests instead of your application, your bundler does all the work, and we just serve that file and collect the results.
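The "change your index to point at your tests" idea might look like this with webpack (require.context is webpack-specific, and the paths here are hypothetical):

    // tests/index.js -- a hypothetical test entry point; in testing mode, the
    // bundler builds this file instead of the application's usual index
    const requireTest = require.context('.', true, /-test\.js$/);
    requireTest.keys().forEach(requireTest); // pull every *-test.js into the bundle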

CHARLES: Right, so it doesn't matter if you're using Parcel or you're using webpack or you're using Ember CLI.

WIL: Yeah, Rollup even.

CHARLES: Or even just low-level Broccoli or Gulp or whatever. There's a preponderance of bundling solutions, and that was always something that was just a huge pain in the butt with Karma -- just getting to the point where my tests were loaded. And with Testem, most of my experience comes through how it's used in Ember CLI, like the histrionics that are undertaken just to bundle all your test assets and your application assets and your vendor assets and kind of bootstrap that thing. It's a lot of work.

WIL: Another thing is that BigTest CLI doesn't include a concept of a watcher, which Karma and Testem do, because all these bundlers have HMR -- hot module reloading. Rollup and things like that come with plenty of plugins, and Parcel has it out of the box, so if you're using your existing bundler to bundle your tests, you get that watch feature for free. It's another complexity that the BigTest CLI kind of eliminates.

CHARLES: What it means is we've hidden most of that complexity. Just let the Bundler handle it, right? The Bundler is the part of your project that bundles.

WIL: Yeah.

CHARLES: You shouldn't have your launcher actually doing that for you. But we still do need to have some way to do setup and teardown. When we have that testing endpoint, we need some way to say, "We're starting a test, not the application. We're ending the test, tear it down." So how do you abstract that away?

WIL: That's kind of something that we can't really avoid. There is just some sort of dependency on the framework itself, your application framework. You need to mount a React app. You need to mount an Ember app, etcetera, and there are different ways to mount those things. This is one of the things that can't really be decoupled as much as everything else can, but BigTest has BigTest React and BigTest Vue, and we want to eventually get BigTest Ember. Really, the main export of all these packages is just a simple mount helper that will mount and clean up your application for you in your testing hooks, whether you're using beforeEach from Mocha or before from something else like Jasmine. No matter what you're doing, you just have a hook that mounts your application and then cleans it up on the next mount.
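As a sketch, wiring such a mount helper into a Mocha hook might look like this; the `mount` signature is an assumption based on the description above, and `App` is a placeholder for your root component:

    // a hedged sketch of using a mount helper from @bigtest/react
    import { mount } from '@bigtest/react'; // assumed export name
    import React from 'react';
    import App from '../src/app';           // placeholder path

    describe('my application', () => {
      // mounts a fresh app before each test; cleanup happens on the next mount
      beforeEach(() => mount(() => <App />));

      it('renders something', async () => {
        // interact with the real DOM here
      });
    });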

CHARLES: It's worth pointing out here that this is kind of a core concern of testing, and of testing big: being able to mount your application and tear it down with regularity, and having hooks into that process. Whether you're using BigTest or not, can you still use BigTest React and BigTest Vue, even if you aren't using anything else?

WIL: Yeah, absolutely. Like I said, they just export simple mount helpers. I don't even think they have any other internal BigTest dependencies. They just have peer dependencies on their frameworks.

CHARLES: Right, and so you could use them even if you wanted to roll everything else by hand, or if you were getting started somehow and needed to do setup and teardown. Again, this is something that's key to being able to test big, so you should be able to use it independently, whether you use the CLI or not, whether you're using any of the other tools or not. All of the tools can be used independently.

WIL: Then another feature of BigTest React and BigTest Vue is that the teardown happens before setup, rather than there being a separate teardown that happens after your test runs. This means that whether your test passes or fails, you can look at the app, play with it, inspect it, and debug it much more easily than if teardown ran afterwards, where you'd have to disable the teardown or throw a pause in there to keep the app around or something.

CHARLES: Yeah, I love that. When something goes wrong, you can just let the test suite run, and the last test that runs just leaves its setup in place. It does the teardown right before the next setup.

WIL: Exactly, yeah. At the very end of the whole test run, there's an app there waiting for you to play with.
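Stripped of any framework, the teardown-before-setup trick is simple. In this sketch, `mountApp` is a stand-in for whatever mounts your application and returns an unmount function:

    let teardown = () => {};

    beforeEach(async () => {
      teardown();                 // tear down whatever the previous test left mounted
      teardown = await mountApp();
    });
    // deliberately no afterEach: after the final test runs,
    // the app is still mounted and ready to inspect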

CHARLES: If you focus in on a single test -- we most commonly use Mocha, so you say a '.only' to run that single focused test -- then you have the state of the application at that test case, set up and ready to go. You can play with it, you can inspect it, you can use it as a starting-off point and interact with the app normally, as you would.
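With Mocha, focusing looks like this, and thanks to the pattern above, the focused test's fully set-up app stays on screen afterwards:

    // `.only` runs just this block and skips everything else
    describe.only('modal dialog', () => {
      it('opens when the trigger is clicked', async () => {
        // ...
      });
    });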

WIL: I want to say Cypress does this too. They do their teardown before their setup as well. That's how you're able to play with Cypress tests.

CHARLES: Yeah, I like that trick. Now, we've talked about launching and about setup and teardown, but we haven't actually talked much about what happens in the test cases themselves. We've talked about how to start and launch your test suite, how to do that across a bunch of different browsers, and how inside of that you have a separate concern of application setup and teardown, where you want to lean on how your app is actually bundled, because that fits the philosophy of testing big. You don't want to use an external bundler for your test suite. You want to use your real bundler, so the assets look the way they're actually going to look.

But when it comes down to actually writing the tests, you need to be able to interact with the application at the highest level that you possibly can. When I say highest level, I mean we want to verify that when users take certain actions, we'll see certain outcomes, and we want those outcomes -- we already talked about this -- to be reflected in a real DOM, in a real browser. But at the same time, we want the interactions to be as high-fidelity as possible, so you want to be sending real events to the browser: real mouse events, real key events, real interactions.

WIL: Yeah, interacting with the application. That's another core philosophy that we kind of talked about earlier that defines a BigTest: it's the user interacting with your application. We're not calling methods and expecting callbacks to fire or certain arguments to be passed. We're clicking on a button and expecting a message to pop up that says, "Form submitted successfully." These are user-facing things we're asserting on and acting on.

CHARLES: Yeah, and that can be really tricky, because these things don't happen synchronously. They're happening inside of your browser's event loop. I click that button and then it goes off and there's some loading state, and then I might get an error message that pops up, this thing that animates out and then goes away. The state of the browser is in constant flux. It's constantly changing, so it can be very difficult to put your finger down and say, "I want to be in this state," if you are limiting yourself to only reading from the DOM.

In some frameworks -- Ember, for example -- you have kind of a white box where you can actually inspect the state of the Ember run loop and use that to do some synchronization, but it can be very, very hard to coordinate these interactions.

WIL: Yeah. To get to the solution, it's the BigTest interactor, which is basically modern page components, or page objects. If you've ever heard of page objects, it's just a way to encapsulate interacting with big pieces of your pages. It's not a new concept. It's been around for a while, but BigTest interactors put a new twist on it: they're immutable, composable interactions that are also convergent, which we'll get into later, and which basically means that if your button's not there, it won't click the button until it is there. They're really powerful and they make it really easy and fun to write these tests.
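As an illustration, defining a small interactor might look like this; the helper names (`interactor`, `clickable`, `hasClass`, `isPresent`) follow the early @bigtest/interactor API, so treat the exact shapes as assumptions rather than a definitive reference:

    import { interactor, clickable, hasClass, isPresent } from '@bigtest/interactor';

    // an interactor encapsulating everything you can do to, and read from, a button
    const ButtonInteractor = interactor(class {
      static defaultScope = 'button';
      click = clickable();             // fires a real click event
      isPrimary = hasClass('primary'); // an assertion-friendly property
      isVisible = isPresent();
    });

    // usage: waits for '.submit' to exist in the DOM, then clicks it
    // await new ButtonInteractor('.submit').click();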

CHARLES: Yeah, they're super powerful. I remember we talked about convergences last time we talked about BigTest, but interactors, I think, are definitely a new development. I think we should spend a little bit of time there talking about not just the power but also the ergonomics of interactors, because they are like page components or page objects, except they're scoped to the component. Not only do they have all this wonderful stuff where they'll make sure that the component exists before they start to interact with it and things like that, but they're composable. If I have a button, then there are certain operations that are valid for that button. I can click it. I can hover over it. I can do all these things. They're the operations that are unique to the button. Now, those might actually map to real events.

WIL: Similarly, there are assertions about that button as well, like whether it's primary or secondary. If this button is repeated throughout your application, you might want to make sure that your form has a primary and a secondary button.

CHARLES: Exactly. It really encapsulates all the knowledge of how you can interact with that button, both in terms of taking action and reading state from it. It almost feels like an accessibility API. It would be easy to write a screen reader if you had these interactors for every single component on the page.

WIL: That's kind of what it is. You're defining an API around how your user would interact with your application and what your user would expect of the application. That's the point of page objects and interactors: you're defining this user API, essentially.

CHARLES: Yeah, and so really the step that interactors take is that they take the classic page object and make it composable. As you kind of touched on before, I can have a modal dialog interactor which is composed out of two button interactors, one for the primary action and one for the secondary action, and maybe it's aware of its own title text, so you can assert on the title text, but I didn't actually have to write the individual button interactors for that modal dialog interactor. Then I might have a second modal dialog interactor, or a form that's in a modal dialog, just composed of the modal dialog interactor and the individual form components which appear in that particular modal dialog.
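Composition, in the same illustrative API, might look like this, reusing the ButtonInteractor sketched above (the data-test selectors are hypothetical):

    import { interactor, text, scoped } from '@bigtest/interactor';

    // a modal dialog interactor composed from two button interactors
    const ModalInteractor = interactor(class {
      title = text('[data-test-title]');
      primary = scoped('[data-test-primary]', ButtonInteractor);
      secondary = scoped('[data-test-secondary]', ButtonInteractor);
    });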

WIL: It's essentially how we've been building applications lately, with components, and this brings that to the page objects in your tests, if you want to mirror it. You don't have to have a one-to-one mapping of interactors to components, but if you do, it's really powerful.

CHARLES: Yeah. I found that when we have one-to-one interactors, that's when it just feels the best.

WIL: Yeah, and on top of this, if you have a component library and your component library exports the interactor that it uses for the component's tests -- like we said, these BigTest technologies can be sprinkled in anywhere. We don't have to use interactors only in big acceptance tests. We can use them for smaller component tests too. So if you ship these component interactors with the component library, an application that's consuming this component library can now test those components for free, without having to write its own interactors. It can just compose the interactors exported by the library.

CHARLES: Man, I almost want you to repeat that word for word again, just so it can sink in. It's so awesome. Because when you actually go to write your tests, you're not starting from ground zero, like, "How do I do this?" It's, "I'm writing some tests for this thing and I'm using these components, so I've already got the prepackaged interactions for those components." You just start writing your tests. If your tests are a 10-story building, it's like you're starting on floor 7 and you only have to walk up to floor 10, instead of slogging up all 10 stories.

WIL: One really helpful interactor that we've built in the open source stuff we've been working on is a date-picker interactor, because date-pickers can be really complex. Having that common interactor for a date-picker that appears on multiple forms means we can just use that one interactor. We don't have to tell every single test how to interact with that date-picker. We just say, pick a date, and pass the date.
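In a test, that reads at the level of user intent; `DatePickerInteractor` and its `pickDate` method are hypothetical names for the kind of interactor being described:

    const startDate = new DatePickerInteractor('[data-test-start-date]');

    // one line of intent hides all the clicking, focusing, and keyboard events
    await startDate.pickDate(new Date(2018, 10, 15));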

CHARLES: Yeah, it's so awesome. That is actually a great example. It doesn't feel scary to write a test for a page that has a date-picker on it, or two if you're doing a date range or something like that. You're not going, "Oh, my God. I don't want to write the selectors to test this." You just import your date-picker interactor, you set the date, it worries about all the low-level events, and there you go. It feels like you're operating at a much higher level.

WIL: Yeah. With the interactor API, essentially you're telling the test what the user would be doing and what the user would be seeing.

CHARLES: Yeah. It's worth pointing out again: we've identified starting and launching, we've identified setup and teardown, but interaction is a core concern of testing big, no matter what tool you're using. One of the things we've found is that interactors are something you can sprinkle on literally any test suite if you're testing an interface, and it makes it better. We've used them inside big acceptance tests. We've used them inside Jest, doing just little component tests. There are people in the BigTest community who have used them to basically write component tests against JSDOM, and while philosophically you want to make those tests as big as you possibly can, you can use that piece in your test suite.

If you are using a simulated DOM, running in Node instead of a browser, these interactors will still work, and you're going to get high-fidelity test cases that are resilient to this asynchrony and are composable. And when you do have a full-fledged test suite, you can reuse these interactors. They are a really awesome power-up that you can bring into your test suite.

WIL: And they're not tied to the framework at all. We use them in React for our stuff, but we've also written some in Ember. Robert's written some in Vue and ported some tests, and one of the beautiful things we've seen from this is that one interactor goes everywhere. You just write the interactor once and you can use it in Ember, in React, in Vue, in those test suites. If the rest of your test suite is framework-agnostic, you can jump frameworks and your test suite still works and can test your application with high fidelity.

CHARLES: Yeah, it's fantastic. I remember when we first tried using interactors inside an Ember test suite, because Ember comes with a big kitchen sink of a testing setup, but interactors just slotted right in and there was absolutely no issue.

WIL: Yeah, and there's actually a speed boost, even, because most of the Ember test helpers hook into the Ember run loop and interactors do not. There's actually a good speed boost just from using interactors.

CHARLES: Yeah. This is a good point. It's a good segue, because typically we think of acceptance tests as being really slow, and one of the reasons even people [inaudible] acceptance tests or testing big is they think it's going to take a long time. We've found that actually we've been able to maintain a happy medium of testing big but also having those tests be really, really fast. When you say you got a speed boost from using interactors with Ember, where does that speed boost actually come from?

WIL: I mentioned that the Ember test helpers hook into the Ember run loop and interactors don't, and the reason for this is that interactors are convergent: they wait for things in the DOM to exist before interacting with them. Instead of waiting for the framework to settle, an interactor just waits for the thing to appear and then interacts with it immediately. If you're asserting something about a button toward the top of the page, you don't really care whether another button at the bottom of the page has rendered yet, unless of course you have an assertion about that. Because they're convergent, you don't need to hook into the run loop and wait for the entire page to load just to interact with one piece of it.

CHARLES: Right. You're just waiting and you say, "I'm expecting something to happen, and the moment I detect it, no matter what else is going on -- the page could be taking 30 seconds to load -- if that button appears and I can interact with it, I can take my action then, or I can make my assertion then." It's about removing gates -- artificial gates.

WIL: Yeah. Another common thing that helps with is animations. In most tests that are hooked into the run loop, you kind of have to wait for these animations to finish before you can even interact with the element, and that means if a modal has a half-second animation where it flies in and you have 30 tests around this modal, those tests are extremely slow now, because you have to wait for that modal to come in, whereas --

CHARLES: -- Straight up flaky.

WIL: Yeah, straight up flaky. Whereas in the actual DOM, that modal is inserted pretty much immediately and can be interacted with pretty much immediately. With interactors, you don't need to wait for the animation to finish. They can just immediately interact with that modal. And of course, if you do need to wait for the animation to finish, there are options for that as well.

CHARLES: Yeah. If there's some fade-in that needs to happen, you can assert on any state, and as long as it's achieved at some point, the interactor will recognize it, and at the soonest possible time that it could. I remember getting bitten on one project where the modal animations in particular were so brutal. Not only were the tests flaky, they were just slow, because there were all these manual timeouts. It wasn't even a paper cut. It was more like a knife cut, like someone sitting there slashing you with a pocket knife. It was just a constant source of pain in your side.

WIL: Yeah, and that's how you end up with things like waits and sleeps in your test suite. When you need to wait for an animation to happen or something, you just see a sleep for four seconds with a comment, because "we have to wait for the components to load in." That's kind of a code smell.

CHARLES: Yeah, that's just asking for trouble, both in terms of slowness and in terms of it getting flaky again. That has been one of the most freeing things about working with interactors, and with the convergent assertions on which they're based: you just don't ever have to worry about asynchrony. Really, truly, most of the time you're writing your tests as if everything were synchronous, and that kind of makes sense, because from the user's perspective, their consciousness is synchronous and they don't care about the internal run loop. They're making observations in serial, and at some point they're going to observe something, so the interactor sits at that point and really observes the application the way that your user would.

WIL: Yeah. We've mentioned a few times now the convergent assertions, which interactors are based on. A little caveat there: if you're using interactors and you're making non-convergent assertions, they might fail or be flaky. That's because interactors wait for the thing to be there to interact with, so as soon as the button's there, it clicks it, but it doesn't wait around for your application to react to that event after it has fired. We need something like our convergent assertions that can converge on that state and wait for that state to be true before the test considers itself passing, or it times out.
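In other words, the assertion itself has to retry until the app catches up. As a hedged sketch, with `converge` standing in for a convergent-assertion helper (a minimal implementation is sketched a little further down) and reusing the illustrative ButtonInteractor from earlier:

    await new ButtonInteractor('[data-test-submit]').click();

    // retried until it stops throwing, or until a timeout elapses
    await converge(() => {
      if (!document.querySelector('[data-test-success]')) {
        throw new Error('success message has not appeared yet');
      }
    });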

CHARLES: Maybe we should dig a little bit into convergent assertions. The last time we had a public conversation on the podcast about this, this is kind of where we were: we hadn't built the interactors, we hadn't built these other component pieces of the testing story. We were really focused on the convergent assertion. We've talked a little bit about this, but I think it's worth rehashing, because it's a unique way of approaching the system, and it's also kind of horrifying when you see how it works under the covers.

It's basically polling underneath the covers. The timeout is configurable, but it's basically polling every 10 milliseconds to observe a state. I remember the first time being confronted with this idea. I was horrified, and the programmer hackles on the back of my neck raised up, and I was like, "Wait a minute. This is going to be slow. It's going to be computationally intensive."

WIL: Yeah. That was my exact thought too: this is going to be slow. If acceptance tests are slow and we're running an assertion every 10 milliseconds, it's going to be really slow. And that's actually not the case at all. It's the opposite. They're extremely fast.
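The polling idea itself fits in a few lines. Here is a minimal stand-alone implementation of it, not the actual BigTest source:

    // run `assertion` every 10ms until it stops throwing, or reject on timeout
    function converge(assertion, timeout = 2000) {
      return new Promise((resolve, reject) => {
        const start = Date.now();
        (function poll() {
          try {
            resolve(assertion());          // passed: converge immediately
          } catch (error) {
            if (Date.now() - start >= timeout) {
              reject(error);               // never converged: surface the last failure
            } else {
              setTimeout(poll, 10);        // try again shortly
            }
          }
        })();
      });
    }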

CHARLES: It is shockingly fast. You've got to try it to believe how fast it is, how fast you can run acceptance tests.

WIL: Yeah, talking like 100 tests in just tens of seconds.

CHARLES: Right. You're basically gated by how fast your framework can render. Your tests are not part of the slowness. Your test --

WIL: And also, memory leaks can be costly too. We experienced that recently, where we had memory leaks that were slowing down our tests, but we fixed those up and the tests were back up to speed.

CHARLES: Yeah, because basically, running the assertion, running the convergence, is very fast. It's just a very light ping. I think of it as being as light as the brush of a photon bouncing off of a surface so that you can observe it. It's extremely light, and most of the time it's just waiting, so the test and the convergence really just get out of the way. Even though they can run a hundred or a thousand times in a second, it doesn't gum things up. What it means is that your tests run as fast as your application will run. It gets back to that point... Was it in React where the key insight is that JavaScript is not the bottleneck? Well, your tests are not the bottleneck either.

WIL: Yeah.

CHARLES: I guess this is what it is. I don't know if there's anything else that you want to say about convergences.

WIL: No. We pretty much summed it up there and that's what interactors are based on. That's how they're able to wait for things in a DOM. It basically polls the DOM until it exists and then it moves on and actually does the interaction.

CHARLES: Once again, this is actually a very low-level thing on which BigTest is based, but it's also something that you can use independently. You can write your own convergent assertions. You can write your own convergences that honestly have nothing to do with testing or assertions. It's a freestanding library that you can use in your test suite or elsewhere, should you choose.

WIL: There doesn't need to be a DOM for BigTest Convergence. I use BigTest Convergence in BigTest CLI to converge on the browser being launched. Instead of waiting for the browser to report that, I can just poll and see how that process is doing, and the convergence waits for that process to start before moving on.
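Using the same `converge` helper sketched above, that browser-launch wait might look like this; `browserProcess` is a stand-in for however the CLI tracks the child process:

    // nothing DOM-specific here: converge on a process observation instead
    await converge(() => {
      if (!browserProcess.connected) {
        throw new Error('browser has not called back yet');
      }
    }, 30000); // browsers can take a while to boot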

CHARLES: Right. I guess the best way I've thought about it is that it's a way to synchronize on observations and not on callbacks. It's a synchronization mechanism, and 99% of the synchronization mechanisms that we're used to involve some sort of callback, a promise, an event listener, things like that, or even a generator, where control is handed back explicitly to a piece of code when something happens. Whereas this is a fundamentally different synchronization primitive, where you are writing synchronous code based on observations: when I observe this, do this. When I observe that, do that. It's extremely robust.

WIL: Yeah, very.

CHARLES: It is a core piece, a fundamental thing on which interactors are based, on which the CLI is based. I don't know if it's core to writing tests but --

WIL: It definitely helps.

CHARLES: It does help. We couldn't have the BigTest interactor without it.

WIL: No, definitely not.

CHARLES: Because that's what makes it fast, that's what makes it not flaky at all, and having those things, I think, makes it easy to maintain, because you can work at the interactor level, at the level of user interaction, and you don't have to worry about synchronization, so the flow of your tests is very natural.

WIL: Yeah. We don't have to explicitly wait for requests to be done before making an assertion about your app. That just comes with convergences: waiting for that state in the application to be true.

CHARLES: Let's talk about one more piece of the testing issue, because when you're testing big, when you're testing in the browser, there's always the question of what you're going to do about your API. You've got to have your API running. It's just always an issue, and this is kind of interesting because it sits at the crossroads of testing big and also getting the most utility out of your test, because in an ideal world, if you're testing really big, you're going to be using a real API. You're not going to poke holes in reality.

WIL: Yeah. One of the things that we avoid in BigTest is poking holes. We're not shallow-mounting the components and testing the methods and the results. We're fully mounting these things and fully interacting with them through the full DOM API.

CHARLES: Yeah, exactly, using real browsers. It just occurred to me the irony of us talking about "reality" being things that are still running inside of a computer processor. I think we've inherited this term from that talk that Justin Searls gave at AssertJS in 2017. It's a really, really excellent talk. I think he also gave it at RubyConf. It's the 'Don't mock me' talk.

WIL: Yeah, it's one of my favorite talks.

CHARLES: Yeah, it's a great talk. In it, he talks about how the value of a test is a balance of how many holes you poke in reality, and sometimes you encounter a test that is all holes in reality: you're mocking this, you're mocking that, you're mocking the DOM, you're mocking the browser, you're mocking your network layer, you're mocking this external API, and the more holes you poke, the less useful the test is going to be. The network is one of those places where it can be very difficult not to poke holes in that reality, because it's a huge part of your application. Your frontend application is defined by how it interacts with the server, but at the same time, servers are gigantic pieces of software themselves, each with their own dependencies, each with their own setup and teardown --

WIL: Have their own concerns.

CHARLES: Yeah, exactly. They might be in a different language. They've got their own runtimes. They might need external C libraries and crazy stuff like that. They're their own beast. To get a true, big end-to-end test, you're going to have to stand up your server, but the problem that presents is that you want your tests to also be isolatable. As a developer, I should be able to go to a repo, install the dependencies, and run the tests without any external dependencies other than the repository and the language in which I'm working.

This is one where we've tried to walk the line of not wanting to poke holes in reality but also having the test be containable to the actual application. In order to do that, you need something that presents a high-fidelity version of the network. That way you can kind of have your cake and eat it too. You want something that acts like a server, really acts like a server, but is actually not a server.

WIL: And still poke as few holes as possible in the application and how it's all set up. We don't want to be intercepting methods and responding with fake data. That's not a good way to mock the network.

CHARLES: Right. We want to be making actual fetches, actual XMLHttpRequests and, ideally, if you've got service workers, actual service worker requests.

WIL: Basically, as far as the application is concerned, it's talking to a real server.

CHARLES: Yeah and that's kind of the litmus test for is it a hole in reality or is it just a really great illusion?

WIL: Yeah and that's a good name for Mirage, right? It's a really great illusion.

CHARLES: Yeah. It is a simulation of reality, so we use Mirage, which is something from the Ember testing world but something that we have extracted and made available as BigTest Mirage.

WIL: Yeah. The main difference is just that we've taken away the Ember dependencies and the run loop stuff. It's plain JavaScript Mirage. It works exactly the same as you'd use it in Ember, minus the auto-imports... Oh, man. I can't think of the word. Aside from automatically importing your files for your server config, which you have to do manually because Ember is what provides that, it's full Mirage. You define models and serializers and factories and all the good stuff.
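A server definition in that style might look something like this sketch; the import path and option names are assumptions modeled on ember-cli-mirage's API, not a definitive reference for BigTest Mirage:

    import { Server, Model, Factory } from '@bigtest/mirage'; // assumed imports

    const server = new Server({
      models: { user: Model },
      factories: {
        user: Factory.extend({
          name: i => `User ${i}`  // each created user gets a generated name
        })
      },
      baseConfig() {
        this.get('/users');       // respond to GET /users from the user model
      }
    });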

CHARLES: Right, and then you can use those factories and those models to present a really high-fidelity server. If you are building something in whatever framework, you can use BigTest Mirage to simulate that network layer. Again, we've used it in a number of different scenarios, but having that in place means that you're going to be able to have those high-fidelity tests where your application is actually making XMLHttpRequests, but it's all isolatable, so it can be run in the repo. This isn't really related to testing, but it has a fantastic capability where you can use the factories to prepopulate your server with data, so that you can use the application without the actual server being implemented.

WIL: Yeah. That's extremely powerful. That's what we were talking about earlier and getting at with scenarios, which set up specific fixtures, essentially, except you're generating these fixtures. Factories are essentially higher-level fixtures -- network fixtures.

CHARLES: Yeah, higher order of fixtures.

WIL: Yeah, so the scenarios are just setting up these fixtures for a particular state of your application, like the backend is down, or the list only responds with two items as opposed to 5,000 items, something like that. You want to be able to not only test these things but also develop against them, and Mirage makes that really easy, because you can just start your app with Mirage enabled, point it at that scenario, and you're there. You have that exact scenario to develop in.
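Building on the server sketched above, a scenario is just a different seeding of that same server; the handler shorthand follows ember-cli-mirage's conventions:

    // "the list only responds with two items"
    server.createList('user', 2);

    // "the backend is down": every GET /users now returns a 500
    server.get('/users', {}, 500);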

CHARLES: If you've never used Mirage, it is really hard to understand just how incredibly powerful it can be. We've used it now on at least four projects where we developed the entire first version of the product without any backend whatsoever. It's an incredible product development tool, even apart from testing, that then informs the shape of what the API is going to be. I know we've talked about this on the podcast before, but it's really an incredible technology, and it is available to you no matter what framework you're using. I think it's one of the best-kept secrets in JavaScript development.

WIL: Yeah. That's definitely great. That said, it does have some shortcomings. It's great, but it can be a little slow sometimes, so we are eventually working on a BigTest network piece -- another piece of the BigTest pie that you'll be able to sprinkle into your application -- but in the meantime, praise Mirage.

CHARLES: Yeah. We are going to be offering an alternative, or maybe collaborating on another version of Mirage. Hopefully we can make Mirage faster -- we will be able to make this thing faster -- so that it can use service workers and be used in a bunch of different scenarios.

Just to recap, we've talked about a lot of different components, but over the past couple of years, these are the things that we've identified as key components of your acceptance testing, and really your whole testing stack: How are you going to start and launch these things? How are you going to set them up and tear them down? How are you going to interact with the application as a user, both in terms of making assertions and in terms of taking action on behalf of the user, and still have it be maintainable, resistant to flakiness, and performant?

BigTest is the answer to that for those particular areas of the testing story, and so in some places we're using existing components -- we use Karma, we use Mirage to date. Those we did not develop, but where we see key pieces of the puzzle missing is where we started writing the BigTest solutions, things like the interactor. Eventually, we are going to make BigTest into a product that you'll be able to use out of the box, just like you might install Cypress, where it's a very quick setup and we make all of the decisions about the components for you.

But in the meantime, we're really trying to take our time, identify those pieces of the puzzle, and build the software component that fits each piece of the puzzle the absolute best, so that when they're polished, we can use them in a more comprehensive product. Things like convergence, things like interactor, things like BigTest React, BigTest Vue, and very soon BigTest Ember -- these are things that you can use today to make your tests that much bigger and that much better, especially interactor. It's been an incredible journey this past year as we've developed these individual pieces, and there's just going to be more goodness to come.

WIL: Absolutely. Right now, I'm working on some validation-type APIs for interactor that I'm hoping to land soon. That'll open up the possibility of hiding away those convergent assertions a bit more in your tests and just handling them automatically. It'll be pretty good.

CHARLES: It's really exciting. Writing tests has gotten easier and more fun for us over the last year, and I think we're starting from a pretty good place. If you have any questions about BigTest, how should folks get in touch with us?

WIL: We have a BigTest Gitter channel. You can find a link to that on the BigTest website: BigTestJS.io. Just ask us questions on Gitter and we'll try to answer them.

CHARLES: And as always, you can ask us directly. You can send email to Contact@Frontside.io, reach out to us on Twitter at @TheFrontside, or reach out to the BigTestJS Twitter account directly at @BigTestJS. Thank you very much, Wil.

WIL: Thank you, Charles.

  continue reading

133 episoade

Artwork
iconDistribuie
 
Manage episode 222263646 series 1402166
Content provided by Mandy Moore, Charles Lowell, and The Frontside Team. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Mandy Moore, Charles Lowell, and The Frontside Team or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ro.player.fm/legal.

In this internal episode, Charles and Wil talk about testing issues and BigTest solutions. Pieces of the testing story are discussed, such as the start and launch
application, component setup and teardown, interacting with the application and component, convergent assertions, and network. Then they talk about testing issues: the fact that cross browser and device-simulated browsers are not good enough, maintainability and when and when not to DRY (RYE), slowness and why (acceptance) testing is slow, portability and why tests are coupled to the framework, and reliability. Finally, they talk about BigTest solutions:

  • @bigtest/cli to start / launch (Karma
    recommended for now)
  • @bigtest/react, @bigtest/vue, etc for setup & teardown
  • @bigtest/interactor for interactions
  • @bigtest/convergence for assertions
  • @bigtest/network in the future (Mirage
    recommended for now)

Resources:

This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.

Transcript:

CHARLES: Hello, everybody and welcome to The Frontside Podcast, Episode 115. My name is Charles Lowell, this episode's host and a developer here at the Frontside. With me today to talk some shop is Mr Wil Wilsman.

WIL: Hello.

CHARLES: Hello, Wil.

WIL: How's it going?

CHARLES: It's going good. I'm actually pretty excited to get to jump into this topic because we're going to be talking about some of the big things that are happening at Frontside and some of the things that we've been developing in almost for the last year.

WIL: Yeah. It's been about a year now.

CHARLES: It's been about a year and we've talked about it in various podcast but we're going to be talking about it again because there's just been so much progress that we've made, I think in a lot of clarity in kind of what we're going for here when we talk about BigTest and testing big and how we want to roll out the BigTest framework. We just have a lot more experience using it on a number of different projects, so we get to talk about that today.

Before we get started, I just wanted to talk a little bit about what BigTest is, both in terms of the framework and also the philosophy. Wil, you're the one who works the most on BigTest. When you think about philosophically, what does BigTest mean to you?

WIL: It's the size of your test, not a physical size like size and storage but how much your task actually does. The test itself can be very small as our test are but it tests the whole application from the user interacting with it down to the network requests. That's the definition of the philosophy of a BigTest to me. It's to tests your application from the biggest point of view.

CHARLES: Actually, achieving that can be surprisingly difficult, especially in a frontend JavaScript application and there are a lot of solutions out there for testing and we've talked about them. One of the questions that arises is when we talk about BigTest, what exactly are we talking about? Are we talking about a product that you can download and install? Are we talking about the philosophy that you just outlined? Or are we talking about the individual pieces of software that make that philosophy real? I think the answer is we're kind of talking about all three but we want to take this episode to talk about where we're going with the product.

What we've identified is the subcomponent pieces of that product. In other words, in order to get started testing big, what are the things that you need to think about? What are the things that you need to do? And then what are the component pieces? Because one of the things that I think is very important to us is that you be able to arrive at wherever you are in your project, whatever framework you are using, whatever current testing solution and be able to begin using BigTest. That means, you might be using some of it or you might be using a lot of it but we want to meet you exactly where you are, so that you can then, get onboarded and start testing big.

WIL: Yeah. Definitely an important distinction that we get confusion about is what is BigTests and people just assume like this whole test suite is BigTest but we used the parts of it ourselves like we use Mocha, which is not part of BigTest. We use Chai, which is not part of BigTest. We use Mirage which is kind of part of BigTest but definitely it originate in BigTest and Karma and things like that. BigTest isn't your testing suite. It's not one thing to go-to to grab, to start writing tests. It is a small pieces that you can use in conjunction with other small pieces, just to make it really easy and flexible to test your application.

CHARLES: Exactly. Because it turns out that there's a lot going on in the application. Maybe we should talk about what some of those pieces are that you might want to start using BigTest with or that you might need to test big, I guess I should say. What's a good place to start? Let's start with talking about some of the issues that you want to do when your testing big. Then we can talk about what pieces of the testing story that fit in to solve those issues.

One of them is you need to test that your application works, like actually works. That means you need to be able to test on a multiplicity of browsers, for example. We're limiting to the domain of web applications. There are actually a shockingly large number of browsers. It's not just Chrome. It's not just Safari. There's Mobile Chrome, Mobile Safari, which are subtly different. There's Edge and I'm sure the Mobile Edge is slightly different too, so you want to be able to test cross browser, right?

WIL: Yeah, absolutely and things like Nightmare and JS DOM and things that simulated browsers, we don't necessarily think those are the best tools for writing BigTest because we want to ensure that those browser quirks are caught and tested as well.

CHARLES: This is not theoretical like sometimes you'll have a syntax, like the parser is slightly different and you have something that throws a syntax error in Safari or in the Internet Explorer and your whole app is completely busted. If you just take in the time, just even trying to load the app in that browser, you would have caught that. That's what I've been on many times.

WIL: Yeah and what I just saw came up yesterday, which comes up frequently is not closing your CSS Selector and Chrome doesn't really care like web to browsers don't care too much but that will fail in Edge and depending on what you're missing, the failing is part of that too but mostly, Firefox and Chrome don't care about that kind of thing.

CHARLES: Right. It seems like the majority of testing solutions are kind of focused around Headless Chrome or some variation of Electron. That entire class of really dumb errors has already been caught. Like I said, to actually catch it, it takes less than a millisecond of CPU time just to load it onto the browser and see that thing doesn't work. Unfortunately, they can be catastrophic errors but the problem is how do you actually do.

We want to test like cross browser. This is something that we want to do. For me, I just can't imagine shipping an application without having some form of cross browser testing, some capability of being able to say, "I want to test it," like, "We want to work on these eight browsers and so we're going to test it on these eight browsers," but how do you actually go about doing that?

WIL: Right now, we are working on the BigTest CLI which will help us launch browsers but that's not complete yet. It has some bugs on. For the meantime we've been using Karma, which is great. Basically, you just have this service that's able to find the browser binary on the system and just launch them pointing to local hosts with your app loaded up and your normal development server take care of loading the test up and running the test. Karma and the BigTest CLI is just there to capture output and launch those separate browsers.

CHARLES: Yeah. I remember when I was first using working with Karma and I think Testim is another tool that's in this space. There's Testim, Karma and BigTest actually is we're developing a launcher because launching is something that you're going to need but it's such a weird problem. I feel like with the browser launchers, there's three levels of inversion of control because you're starting a server that then starts another process, which then calls back to your server, which then loads the app resources, which then loads the tests and then runs the test. There's a lot of sleight of hand that has to happen and –

WIL: Including injecting the adapter that you use, like the Mocha adapter, the Jasmine adapter that ends up reporting back to the CLI. That's something that Karma and Testim and BigTest will handle for you.

CHARLES: Right, so you're fanning out the test suite to a suite of browsers then collecting the results but basically, you need some sort of agent living inside the browser that's going to act on behalf of the test suite, to collect the results. I remember when I first came into contact with Karma and Testim, I was like, "This is so unnecessarily complex," but then, having used it for a while and I think there are some complexity that can be removed but if you want to do cross browser testing, that kind of level of ping-ponging is there's a certain amount of it that just necessary. It's something that's actually quite complex that you need to have in your stack, in your toolbox, if you want to truly test big.

WIL: Yeah and all the solutions is mechanisms for detecting when the browser has launched and restarting the browser based on its health check, etcetera and things like that that you wouldn't think of actually loading up a browser but you need to think of when you're doing automated testing.

CHARLES: What is it that sets apart, for example the launcher solution? We kind of call this class of solutions launchers, so Testim, Karma, the BigTest CLI. What is it that sets BigTest CLI apart from say, Karma and Testim?

WIL: We're trying to be as minimal-config as possible and just really easy to get started with. Karma has a lot of plugins that you need to make sure are installed and loaded, with options set for each of those plugins. Testem has some stuff bundled, but it still requires this big chunk of config at the beginning that you need to pass in to tell it what you're doing. We're trying to avoid that with BigTest CLI, and one of the ways we're able to avoid it is by just letting your Bundler handle bundling the tests. In Karma, you need karma-webpack or something. Testem has some stuff that it needs, and really, we just want an in-testing mode: when you're in the testing environment, just change your index to point at your tests instead of your application, and your Bundler will do all the work; we just serve that file and collect the results.

CHARLES: Right, so it doesn't matter if you're using Parcel or you're using webpack or you're using Ember CLI.

WIL: Yeah, Rollup even.

CHARLES: Or even just low-level Broccoli or Gulp or whatever. There's a preponderance of bundling solutions, and that was always a huge pain in the butt with Karma, just getting to the point where my tests are loaded. With Testem, most of my experience comes through how it's used in Ember CLI, and the histrionics undertaken just to bundle all your test assets and your application assets and your vendor assets and bootstrap that thing. It's a lot of work.

WIL: Another thing Karma and Testem include that BigTest CLI doesn't need is the concept of a watcher, because all these Bundlers have HMR -- hot module reloading. Rollup and things like that come with plenty of plugins, and Parcel has it set up out of the box, so if you're using your existing Bundler to bundle your tests, you get that watch feature for free. It's another complexity that the BigTest CLI eliminates.

CHARLES: What it means is we've hidden most of that complexity. Just let the Bundler handle it, right? The Bundler is the part of your project that bundles.

WIL: Yeah.
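As a sketch of that idea, a bundler config can swap its entry point in a testing environment, so the same pipeline that builds the app builds the tests. The paths here are hypothetical and the same trick works with Parcel, Rollup, or whatever Bundler you already have:

```js
// webpack.config.js -- hypothetical paths, shown only to illustrate the idea
module.exports = (env = {}) => ({
  mode: 'development',

  // in testing mode, the bundle's entry is the test index instead of the
  // application index, so all the same loaders and plugins apply
  entry: env.test ? './tests/index.js' : './src/index.js',

  output: {
    filename: env.test ? 'tests.js' : 'app.js'
  }
});
```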

CHARLES: You shouldn't have your launcher doing that for you, but we still do need some way to do that setup and teardown. When we have that testing endpoint, we need some way to say, "We're starting a test, not the application. We're ending the test, tear it down." So how do you abstract that away?

WIL: That's kind of something that we can't really avoid. There is just some sort of dependency on the framework itself, your application framework. You need to mount a React app, you need to mount an Ember app, etcetera, and there are different ways to mount those things. This is one of the things that can't really be decoupled as much as everything else can, but BigTest has BigTest React and BigTest Vue, and we want to eventually get BigTest Ember, but really, the main export of all these packages is just a simple mount helper that will mount and clean up your application for you in your testing hooks, whether you're using beforeEach from Mocha or before from something else like Jasmine. No matter what you're doing, you just have a hook that mounts your application and then cleans it up on the next mount.
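In a Mocha suite, using one of these helpers looks roughly like this. This is a sketch assuming the `mount` export from BigTest React that Wil describes, with the `App` import standing in for your own root component:

```js
import React from 'react';
import { mount } from '@bigtest/react';
import App from '../src/app'; // your application's root component

describe('my application', () => {
  // mounts the full application; the previous test's app is cleaned up
  // here, on the next mount, rather than in an after hook
  beforeEach(async () => {
    await mount(() => <App />);
  });

  it('renders the dashboard', () => {
    // interact with and assert against the real DOM here
  });
});
```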

CHARLES: It's worth pointing out here that this is kind of a core concern of testing, and of testing big: being able to mount your application and tear it down with regularity, and having hooks into that process. Whether you're using BigTest or not, can you still use BigTest React and BigTest Vue, even if you weren't using anything else?

WIL: Yeah, absolutely. Like I said, they just export simple mount helpers. I don't even think they have any other internal BigTest dependencies. They just have peer dependencies on their frameworks.

CHARLES: Right and so, you could use it, even if you wanted to roll everything else by hand or you wanted to get started somehow and you needed to do set up and tear down, again this is something that's key to being able to test big, so you should be able to use it independently, whether you use the CLI or not, whether you're using any of the other tools or not. All of the tools can be used independently.

WIL: Then another feature of BigTest React and BigTest Vue is that the teardown happens before the setup, rather than happening after your test runs with a separate teardown. What this allows is that whether your test passes or fails, you can look at it, play with it, inspect it and debug it much more easily than if you had an after-test teardown, where you'd have to disable the teardown or throw a pause in there to keep the app around or something.

CHARLES: Yeah, I love that. When something goes wrong, you can just let the test suite run, and the last test that it runs just leaves everything set up. It does the teardown right before the setup.

WIL: Exactly, yeah. At the very end of the whole test run, there's an app there waiting for you to play with.

CHARLES: If you focus in on a single test -- we most commonly use Mocha, so you say a '.only' to run that single focused test -- then you have the state of the application for that test case set up and ready to go. You can just play with it, you can inspect it, you can use it as a starting-off point and interact with the app normally as you would.

WIL: I want to say Cypress does this too. They do their teardown before their setup as well. That's how you're able to play with Cypress tests.
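The pattern itself is simple enough to hand-roll. A rough framework-agnostic sketch, with every name here hypothetical:

```js
// teardown-before-setup: clean up the previous app at the start of the
// next mount, so the last app mounted stays up for inspection
let teardown = null;

export async function mountApp(render) {
  // tear down whatever the previous test left behind...
  if (teardown) {
    teardown();
  }

  let container = document.createElement('div');
  document.body.appendChild(container);
  teardown = () => container.remove();

  // ...then mount the new app and deliberately leave it standing
  await render(container);
}
```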

CHARLES: Yeah, I like that trick. Now, we've talked about launching and about setup and teardown, but we haven't actually talked much about what happens in the test cases themselves. We talked about how to start and launch your test suite, how to do that across a bunch of different browsers, how inside of that you have a separate concern of application setup and teardown, and how you want to lean on how your actual app is bundled, because that fits the philosophy of testing big. You don't want to use an external Bundler for your test suite. You want to use your real Bundler, so the assets look the way they're actually going to look.

But when it comes down to actually writing the tests, you need to be able to interact with the application at the highest level that you possibly can. When I say highest level, I mean we want to verify that when users take certain actions, they'll see certain outcomes, and we want those outcomes -- we already talked about this -- to be reflected in a real DOM, in a real browser. But at the same time, we want the interactions to be as high fidelity as possible, so you want to be sending events to the browser. You want real mouse events, real key events, real interactions.

WIL: Yeah, interacting with the application. That's another core philosophy that we kind of talked about earlier that defines a BigTest: it's the user interacting with your application. We're not calling methods and expecting certain callbacks or arguments to be passed; we're clicking on a button and expecting a message to pop up that says, "Form submitted successfully." These are user-facing things we're asserting on and acting on.

CHARLES: Yeah, and then it can be really tricky because these things don't happen synchronously. They're happening inside of your browser's event loop. I click that button and then it goes off, and there's some loading state, and then I might get an error message that pops up, this thing that animates out and then goes away. The state of the browser is in constant flux. It's constantly changing, so it can be very difficult to put your finger on a point and say, "I want to be in this state," if you are limiting yourself to only reading from the DOM.

In some frameworks, Ember for example, you have kind of a white box where you can actually inspect the state of the Ember run loop and use that to do some synchronization, but it can be very, very hard to coordinate these interactions.

WIL: Yeah. To get to the solution, it's BigTest interactor, which is basically modern page components or page objects. If you've ever heard of page objects, it's just a way to encapsulate interacting with big pieces of your pages. It's not a new concept; it's been around for a while, but BigTest interactor has a new twist on it where they're immutable, composable interactions that are also convergent -- which we'll get into later -- which basically means if your button's not there, it won't click the button until it is there. They're really powerful, and they make it really easy and fun to write these tests.

CHARLES: Yeah, they're super powerful. I remember we talked about convergences last time when we talked about BigTest, but interactors, I think, are definitely a new development. I think we should spend a little bit of time talking about not just the power but also the ergonomics of interactors, because they are like page components or page objects, except they're scoped to the component. Not only do they have all this wonderful stuff where they'll make sure that the component exists before starting to interact with it and things like that, but they're composable. If I have a button, then there are certain operations that are valid for that button: I can click it, I can hover over it, I can do all these things. They're the operations that are unique to the button, and those might actually map to real events.

WIL: Similarly, there are assertions about that button as well, like whether it's primary or secondary. If this button is repeated throughout your application, you might want to make sure that your form has a primary and a secondary button.

CHARLES: Exactly. It really encapsulates all the knowledge of how you can interact with that button, both in terms of taking action and reading state from it. It almost feels like an accessibility API. It would be easy to write a screen reader if you had these interactors for every single component on the page.

WIL: That's kind of what it is. You're defining an API around how your user would interact with your application and what your user would expect in the application. That's the point of page objects and interactors: you're defining this user API, essentially.
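As a sketch of what defining one looks like, assuming the `@bigtest/interactor` API as it existed around this episode (the `interactor` decorator applied as a plain function, with hypothetical class names and selectors):

```js
import { interactor, clickable, text, hasClass } from '@bigtest/interactor';

// everything a user can do with, and observe about, a button
export const ButtonInteractor = interactor(class {
  click = clickable();
  label = text();
  isPrimary = hasClass('primary'); // hypothetical class name
});

// usage in a test: click waits (converges) until the button exists
// await new ButtonInteractor('#submit').click();
```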

CHARLES: Yeah, and so really, the step that interactors take is that they take the classic page object and make it composable. You kind of touched on this before: I can have a modal dialog interactor, which is composed out of two button interactors, one for the primary action, one for the secondary action, and maybe it's aware of its own title text, so you can assert on the title text, but I didn't actually have to write the individual button interactors for that modal dialog interactor. Then I might have a second modal dialog interactor, or a form that's on a modal dialog, composed of the modal dialog interactor and the individual form components which appear on that particular modal dialog.

WIL: It's essentially how we've been building applications lately with components, but this is for page objects in your tests, if you want to mirror that. You don't have to have one-to-one mappings of an interactor to a component, but if you do, it's really powerful.
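Composition then looks roughly like this, reusing the button interactor sketched above via `scoped`, again with hypothetical selectors:

```js
import { interactor, scoped, text } from '@bigtest/interactor';
import { ButtonInteractor } from './button-interactor'; // from the earlier sketch

// a modal dialog interactor composed from two button interactors
export const ModalInteractor = interactor(class {
  title = text('[data-test-modal-title]');
  primary = scoped('[data-test-modal-ok]', ButtonInteractor);
  secondary = scoped('[data-test-modal-cancel]', ButtonInteractor);
});

// usage: await new ModalInteractor('#confirm-dialog').primary.click();
```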

CHARLES: Yeah. I found that when we have one-to-one interactors, that's when it just feels the best.

WIL: Yeah, and on top of this, if you have a component library and your component library exports the interactor that it uses for its component tests -- like we said, these BigTest technologies can be sprinkled in anywhere. We don't have to use interactors only in big acceptance tests; we can use them for smaller component tests too. So if we ship these component interactors with the component library, an application that's consuming this component library can now test those components for free, without having to write its own interactors. It can just compose the interactors exported by the library.

CHARLES: Man, I almost want you to repeat that word for word again, just so it can sink in. It's so awesome, because when you actually go to write your tests, you're not starting from ground zero, like, "How do I do this?" It's, "I'm writing some tests for this thing and I'm using these components, so I've already got the prepackaged interactions for those components." If your tests are a 10-story building, it's like you're starting on floor 7 and you only have to walk up to floor 10, instead of slogging up all 10 stories.

WIL: One really helpful interactor that we've written in the open source stuff we've been working on is a date-picker interactor, because date-pickers can be really complex. Having that common interactor, when we have a date-picker on multiple forms, means we can use that one interactor everywhere; we don't have to tell every single test how to interact with that date-picker. We just say pick date and pass the date.

CHARLES: Yeah, it's so awesome. That is actually a great example. It doesn't feel scary to write a test for a page that has a date-picker on it, or two, if you're doing a date range or something like that. You're not left going, "Oh, my God, I don't want to write the selectors to test this." You just import your date-picker interactor, you set the date, it worries about all the low-level events and there you go. It feels like you're operating at a much higher level.
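A date-picker interactor along those lines might look like this sketch. The markup, selectors, and date format are all hypothetical, and it assumes custom methods can compose the built-in interactions and stay chainable, as the interactor API of the time allowed:

```js
import { interactor, clickable } from '@bigtest/interactor';

export const DatePickerInteractor = interactor(class {
  open = clickable('[data-test-datepicker-trigger]');

  pickDate(date) {
    // open the calendar, then click the matching day cell
    return this
      .open()
      .click(`[data-test-day="${date}"]`);
  }
});

// in a test: await new DatePickerInteractor('#start-date').pickDate('2018-11-01');
```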

WIL: Yeah. With the interactor API, essentially, you're telling the test what the user would be doing and what the user would be seeing.

CHARLES: Yeah. It's worth pointing out again: we've identified starting and launching, we've identified setup and teardown, but interaction is a core concern of BigTesting, no matter what tool you're using. One of the things that we found is that interactors are something you can sprinkle on literally any test suite if you're testing an interface, and it makes it better. We've used them inside big acceptance tests. We've used them inside Jest, doing just little component tests. There are people in the BigTest community who have used them to write component tests against JSDOM, and while theoretically, philosophically, you want to make those tests as big as you possibly can, you can use that piece anywhere in your test suite.

Even if you're using a simulated DOM, whether you're running in Node or in a browser, these interactors will still work, and you're going to get high fidelity test cases that are resilient to this asynchrony and are composable. And if you later have a full-fledged test suite, you can reuse these interactors. They are a really awesome power-up that you can bring into your test suite.

WIL: And they are not tied to the framework at all. We use them in React for our stuff, but we've also written some in Ember, and Robert's written some in Vue and ported some tests, and one of the beautiful things we've seen from this is that one interactor goes everywhere. You just write the interactor once and you can use it in Ember, in React, in Vue, in those test suites. If the rest of your test suite is framework agnostic, you can jump frameworks and your test suite still works and can test your application with high fidelity.

CHARLES: Yeah, it's fantastic. I remember when we first tried using interactors inside an Ember test suite, because Ember comes with a big kitchen-sink testing setup, but interactors just slotted right in and there was absolutely no issue.

WIL: Yeah, and there's actually even a speed boost, because most of the Ember test helpers hook into the Ember run loop and interactors do not. There's actually a good speed boost just from using interactors.

CHARLES: Yeah. This is a good point and a good segue, because typically we think of acceptance tests as being really slow, and that's one of the reasons people [inaudible] acceptance tests or testing big: they think it's going to take a long time. We've found that we've actually been able to maintain a happy medium of testing big but also having those tests be really, really fast. When you say you see a speed boost from using interactors with Ember, where does that speed boost actually come from?

WIL: I mentioned that the Ember test helpers hook into the Ember run loop and interactors don't. The reason for this is that interactors are convergent: they wait for things in the DOM to exist before interacting with them. Instead of waiting for the framework to settle, an interactor just waits for the thing to appear and then interacts with it immediately. If you're asserting something about a button toward the top of the page, you don't really care that another button at the bottom of the page hasn't rendered yet, unless of course you have an assertion about that, but because they're convergent, you don't need to hook into the run loop and wait for the entire page to load just to interact with one piece of it.

CHARLES: Right. You're just waiting and you say, "I'm expecting something to happen and the moment I detect it, no matter what else is going on, the page could be taking 30 seconds to load but if that button appears and I can interact with it, I can take my action then or I can make my assertion then." It's about kind of removing gates -- artificial gates.

WIL: Yeah. Another common thing this helps with is animations. With most tests that are hooked into the run loop, you have to wait for some of these animations to finish before you can even interact with the element, and that means if a modal has a half-second animation where it flies in and you have 30 tests around this modal, those tests are extremely slow now because you have to wait for that modal to come in, whereas --

CHARLES: -- Straight up flaky.

WIL: Yeah, straight up flaky. Whereas in the actual DOM, that modal is inserted pretty immediately and can be interacted with pretty immediately. With interactors, they don't need to wait for the animation to finish. They can just immediately interact with that modal but of course, if you need to wait for the animation to finish, there are options for that as well.

CHARLES: Yeah. If there's some fade-in that needs to happen, you can assert on any state, and as long as it's achieved at some point, the interactor will recognize it, and recognize it at the soonest possible time it could. I remember getting bitten on one project where the modal animations in particular were so brutal. Not only were they flaky, they were just slow, because there were all these manual timeouts. It wasn't even a paper cut; it was more like a knife cut, like someone sitting there slashing you with a pocket knife. It was just a constant source of pain in your side.

WIL: Yeah, and that's how you end up with things like waits and sleeps in your test suite. When you need to wait for an animation to happen or something, you just see a sleep for four seconds with a comment, because we have to wait for the components to load in. That's kind of a code smell now.

CHARLES: Yeah, that's just asking for trouble, both in terms of slowness and in terms of it getting flaky again. That has been one of the most freeing things about working with interactors, and with the convergent assertions on which they're based: you just don't ever have to worry about asynchrony. Really, truly, most of the time you're writing your tests like it's all synchronous, and that kind of makes sense, because from the user's perspective, their consciousness is synchronous and they don't care about the internal run loop. They're just making observations in serial, and at some point they're going to observe something, so the interactor sits at that point and really observes the application the way that your user would.

WIL: Yeah. We've mentioned a few times now the convergent assertions, which interactors are based on. A little caveat there: if you're using interactors and you're making non-convergent assertions, they might fail or be flaky. That's because interactors wait for the thing to be there to interact with, so as soon as the button's there, it clicks it, but it doesn't wait for your application to react after that event has fired. We need something there, like our convergent assertions, that can converge on that state and wait for it to be true before it considers itself passing, or it times out.
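A convergent assertion following an interaction might look like this sketch, assuming the `when` helper exported by `@bigtest/convergence`; the interactor and selector carry over from the earlier hypothetical examples:

```js
import { when } from '@bigtest/convergence';
import { expect } from 'chai';
import { ModalInteractor } from './modal-interactor';

it('confirms the action', async () => {
  // the click converges on the button existing, then fires a real event
  await new ModalInteractor('#confirm-dialog').primary.click();

  // `when` repeatedly runs the assertion (a throw means "not yet")
  // until it passes or the timeout elapses
  await when(() => {
    expect(document.querySelector('[data-test-flash]').textContent)
      .to.equal('Form submitted successfully');
  });
});
```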

CHARLES: Maybe we should dig a little bit into convergent assertions. I think the last time we had a public conversation on the podcast about this, this is kind of where we were, like we hadn't built the interactors, we hadn't built these other component pieces of the testing story. We were really focused on the convergent assertion. We've talked a little bit about this but I think it's worth rehashing a little bit because it's a unique way of approaching the system but it's also kind of horrifying when you see how it works under the covers.

I think the horror comes when we tell people about the fact that it's basically polling underneath the covers. The timeout is configurable, but it's basically polling every 10 milliseconds to observe a state. I remember the first time being confronted with this idea: I was horrified, the programmer hackles on the back of my neck raised up, and I was like, "Wait a minute. This is going to be slow. It's going to be computationally intensive."

WIL: Yeah. That was my exact thought too: this is going to be slow. If acceptance tests are slow and we're running an assertion every 10 milliseconds, it's going to be really slow. But that's actually not the case at all. It's the opposite: they're extremely fast.

CHARLES: It is shockingly fast. You've got to try it to believe how fast it is, how fast you can run acceptance tests.

WIL: Yeah, talking like 100 tests in just tens of seconds.

CHARLES: Right. You're basically gated by how fast your framework can render. Your tests are not part of the slowness. Your test --

WIL: And also, memory leaks can be costly too. We experienced that recently, where we had memory leaks that were slowing down our tests, but we fixed those up and the tests sped right back up.

CHARLES: Yeah, because basically, running the assertion or running the convergence is very fast. It's just a very light ping. I think of it as being as light as the brush of a photon bouncing off of a surface so that you can observe it. It's extremely light, and most of the time it's just waiting, so the test and the convergence really just get out of the way. They can run a hundred or a thousand times in a second and it doesn't gum things up. But what it means is that your tests run as fast as your application will run. It gets back to that point -- was it React where the key insight is that JavaScript is not the bottleneck? Well, your tests are not the bottleneck either.

WIL: Yeah.

CHARLES: I guess this is what it is. I don't know if there's anything else that you want to say about convergences.

WIL: No. We pretty much summed it up there, and that's what interactors are based on. That's how they're able to wait for things in the DOM: it basically polls the DOM until the element exists and then moves on and actually does the interaction.

CHARLES: Once again, this is actually a very low-level thing on which BigTest is based, but it's also something that you can use independently. You can write your own convergent assertions. You can write your own convergences that honestly have nothing to do with testing or assertions. It's a freestanding library that you can use in your test suite or elsewhere, should you choose.

WIL: There doesn't need to be a DOM for BigTest convergence, either. I use BigTest convergence in BigTest CLI to converge on the browser being launched. Instead of waiting for the browser to report that, I can just poll and see how that process is doing, and the convergence waits for that process to start before moving on.

CHARLES: Right. I guess the best way I've thought about it is that it's a way to synchronize on observations and not on callbacks. It's a synchronization mechanism, and 99% of the synchronization mechanisms that we're used to involve some sort of callback, a promise, an event listener, things like that, or even a generator, where control is handed back explicitly to a piece of code when something happens. Whereas this is a fundamentally different synchronization primitive, where you are writing synchronous code that's based on observations: when I observe this, do this. It's extremely robust.

WIL: Yeah, very.

CHARLES: It is a core piece, a fundamental thing on which interactors are based and on which the CLI is based. I don't know if it's core to writing tests but --

WIL: It definitely helps.

CHARLES: It does help. We couldn't have BigTest interactor without that.

WIL: No, definitely not.

CHARLES: Because that's what makes it fast and that's what makes it not flaky at all, and having those things, I think, makes it easy to maintain, because you can work at the interactor level, the level of user interaction, and you don't have to worry about synchronization, so the flow of your tests is very natural.

WIL: Yeah. We don't have to explicitly wait for requests to be done before making an assertion about your app. That just comes with convergences: we just wait for the state in the application to be true.

CHARLES: Let's talk about one more piece of the testing issue, because when you're testing big, when you're testing in the browser, there's always the question of what you're going to do about your API. You've got to have your API running. It's just always an issue, and this is kind of interesting because it sits at the crossroads of testing big and also getting the most utility out of your tests, because in an ideal world, if you're testing really big, you're going to be using a real API. You're not going to poke holes in reality.

WIL: Yeah. One of the things that we avoid in BigTest is poking holes. We're not shallow mounting the components and testing the methods and the results. We're fully mounting these things and fully interacting with them through the full DOM API.

CHARLES: Yeah, exactly, using real browsers. It just occurred to me, the irony of us talking about reality when these are things that are still running inside of a computer processor. I think we've inherited this term from that talk Justin Searls gave at AssertJS in 2017. It's a really, really excellent talk -- I think he also gave it at RubyConf. It's called 'Don't Mock Me.'

WIL: Yeah, it's one of my favorite talks.

CHARLES: Yeah, it's a great talk. In it, he talks about how the value of a test is balanced against how many holes you poke in reality, and sometimes you encounter a test that's all holes in reality. You're mocking this, you're mocking that, you're mocking the DOM, you're mocking the browser, you're mocking your network layer, you're mocking this external API, and the more holes you poke, the less useful it's going to be. The network is one of those places where it can be very difficult not to poke holes in that reality, because it's a huge part of your application; it's how your frontend application interacts with the server. But at the same time, servers are gigantic pieces of software themselves, each with their own dependencies, each with their own setup and teardown --

WIL: Have their own concerns.

CHARLES: Yeah, exactly. They might be in a different language. They've got runtimes; they might need external C libraries and crazy stuff like that. They're their own beast. To get a true big end-to-end test, you're going to have to stand up your server, but the problem that presents is that you want your tests to also be isolatable. If you're a developer, you should be able to go to a repo, install its dependencies, and run the tests without any external dependencies other than the repository and the language in which you're working.

This is one place where we've tried to walk the line of not wanting to poke holes in reality, but also having the tests be containable to the actual application. In order to do that, you need something that presents a high fidelity version of the network. You can kind of have your cake and eat it too: you want something that acts like a server, really acts like a server, but is actually not a server.

WIL: And still pokes as few holes as possible in the application and how that's all set up. We don't want to be intercepting methods and responding with fake data. That's not a good way to mock the network.

CHARLES: Right. We want to be making actual fetch calls, actual XMLHttpRequests and, ideally, if you've got service workers, actual service worker requests.

WIL: Basically, as far as the application is concerned, it's talking to a real server.

CHARLES: Yeah and that's kind of the litmus test for is it a hole in reality or is it just a really great illusion?

WIL: Yeah and that's a good name for Mirage, right? It's a really great illusion.

CHARLES: Yeah. It is a simulation of reality, so we use Mirage, which is something from the Ember testing world but something that we have extracted and made available as BigTest Mirage.

WIL: Yeah. The main difference being that we've taken away the Ember dependencies and the run loop stuff. It's just plain JavaScript Mirage. It works exactly the same as when you use it in Ember, minus the auto-imports and the file... Oh, man, I can't think of the word. Aside from automatically importing your files for your server config -- you have to do that manually, because Ember is what provides that -- it's full Mirage. You define models and serializers and factories and all the good stuff.
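A BigTest Mirage server definition might look like this sketch, assuming the options mirror ember-cli-mirage's `Server` as Wil describes; the `startMirage` name, the model, the factory, and the route are all hypothetical:

```js
import { Server, Model, Factory } from '@bigtest/mirage';

// an in-browser server: the app's real fetch/XHR calls get answered here
export function startMirage() {
  return new Server({
    models: {
      user: Model
    },
    factories: {
      user: Factory.extend({
        // factories generate fixtures instead of hand-writing them
        name: (i) => `User ${i}`
      })
    },
    baseConfig() {
      // respond to GET /users with whatever is in the in-memory database
      this.get('/users');
    }
  });
}

// in a test: server.createList('user', 5) seeds five generated users
```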

CHARLES: Right, and then you can use those factories and those models to really give you a high fidelity server. If you're building something in whatever framework, you can use BigTest Mirage to simulate that network layer. Again, we've used it in a number of different scenarios, but having that in place means you're going to be able to have those high fidelity tests where your application is actually making XMLHttpRequests, but it's all isolatable, so it can be run in the repo. This isn't really related to testing, but it has a fantastic capability where you can use the factories to prepopulate your server with data, so that you can use the application without the actual server being implemented.

WIL: Yeah. That's extremely powerful. That's what we were talking about earlier and getting at with scenarios, which set up specific fixtures, essentially, except you're generating these fixtures. Factories are essentially high-level fixtures, network fixtures.

CHARLES: Yeah, higher-order fixtures.

WIL: Yeah, so the scenarios are just setting up these fixtures for a scenario in your application, like the backend is down, or the list only responds with two items as opposed to 5,000 items, something like that. You want to be able to not only test these things but also develop against them, and Mirage makes that really easy, because you can just start your app with Mirage enabled, pointed at that scenario, and you're there. You have that exact scenario to develop in.
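Scenarios end up being small seeding functions like these hypothetical ones, matching the situations Wil mentions:

```js
// the list only has two items
export function shortList(server) {
  server.createList('item', 2);
}

// the backend is down: every request to the endpoint fails
export function backendDown(server) {
  server.get('/items', {}, 500);
}

// during development, boot with Mirage enabled and point it at a scenario:
//   const server = startMirage();
//   backendDown(server);
```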

CHARLES: If you've never used Mirage, it is really hard to understand just how incredibly powerful it can be. We've used it now on at least four projects, where we did develop the entire first version of the product without any backend whatsoever. It's an incredible product development tool, even apart from testing, that then informs the shape of what the API was going to be. I know we've talked about this on the podcast before but it's really an incredible technology and it is available to you no matter what framework you're using. I think it's one of the best kept secrets in JavaScript development.

WIL: Yeah. That's definitely great. That said, it does have some shortcomings. It's great, but it can be a little slow sometimes, so we're eventually working on BigTest network, another piece of the BigTest pie that you'll be able to sprinkle into your application, but in the meantime, praise Mirage.

CHARLES: Yeah. We are going to be offering an alternative, or maybe collaborating on another version of Mirage, but hopefully we can make this thing faster, so that it can use service workers and be used in a bunch of different scenarios.

Just to recap: we've talked about a lot of different components, but over the past year or two, these are the things that we've identified as key components of your acceptance testing, and really your whole testing stack. How are you going to start and launch these things? How are you going to set them up and tear them down? How are you going to interact with the application as a user, both in terms of making assertions and taking action on behalf of the user, and still have it be maintainable, resistant to flakiness, and performant?

BigTest is the answer to that for those particular areas of the testing story, and in some areas we're using existing components: we use Karma, we use Mirage to date. Those we did not develop, but where we see key pieces of the puzzle missing is where we started writing the BigTest solutions, things like the interactor. Eventually, we are going to make BigTest into a product that you'll be able to use out of the box, just like you might install Cypress, where it's a very quick setup and we make all of the decisions about the components for you.

But in the meantime, we're really trying to take our time, identify those pieces of the puzzle, and build the software component that fits each piece of the puzzle the absolute best, so that when they're polished, we can use them in a more comprehensive product. Things like convergence, things like interactor, things like BigTest React, BigTest Vue and, very soon, BigTest Ember: these are things that you can use today to make your tests that much bigger and that much better, especially interactor. It's been an incredible journey this past year as we've developed these individual pieces, and there's just going to be more goodness to come.

WIL: Absolutely. Right now, I'm working on a validation-type API for interactor that I'm hoping to land soon. That'll open up the possibility of hiding away those convergent assertions a bit more in your tests and just handling them automatically. It'll be pretty good.

CHARLES: It's really exciting. Writing tests has gotten easier and more fun for us over the last year, and I think we're already starting from a pretty good place. If you have any questions about BigTest, how would folks get in touch with us?

WIL: We have a BigTest Gitter channel. You can find a link to that on the BigTest website: BigTestJS.io. Just ask us questions on Gitter and we'll try to answer them.

CHARLES: And as always, you can ask us directly. You can send email to Contact@Frontside.io or reach out to us on Twitter at @TheFrontside or you can actually reach out to the BigTestJS Twitter account directly and just call us on Twitter at @BigTestJS. Thank you very much, Wil.

WIL: Thank you, Charles.
