Identifying a technology need is usually pretty easy: your team will complain at every opportunity, however tangential, that some application is too complicated, or not powerful enough, or missing a major feature, or doesn't integrate with other applications, or can't be searched, or is too slow, or uses different conventions from the other tools, or that there was something better at their last job, or that they just plain don't like it.
You'll usually agree. And you'll usually want to wait for a (non-existent, and you know it) better time to think about it because introducing a new technology can be time-consuming, hard work and risky.
Eventually events will overtake you. When that happens, I start by drawing up a list of application-specific requirements, prioritised of course, and then add this basic set of parameters that I want to compare across any candidate tools:
- user community: is it active? how is the tool viewed?
- support: forums, bug database, blogs etc.
- developer community: are people building the tool, and building on it?
- maturity: will the tool be changing under your feet?
- regular releases/fixes: is the tool being maintained?
- dependencies/requirements: what else needs to be installed?
- deployment: does it use standard packages? is it easy to update?
- integration: does it offer any APIs or ability to customise?
- price: include maintenance, per-user licence fees, your own costs etc.
In a short initial phase, identify as many tools as you can - be inclusive at this stage, so bring in anything that looks remotely possible - and quickly grade them against your requirements. Don't spend long on this, and don't be afraid to put "don't know" entries in the table to start with. Sometimes you'll find that a tool does something you hadn't thought of that you might like; don't be afraid to add it to your comparison table as you go. What you're trying to do here is discover (a) classes of tool, (b) obvious non-starters and (c) obvious candidates for a deeper review.
Once you've done that you can rank and cluster the tools based on your criteria and choose a selection (e.g. one from each class you've identified) to take to the next round. The next round has to be more specific to your intended usage: it might be another review, based on deeper reading about the tools, or it might be trial installations, or you might already have identified one outstanding candidate, in which case you're done.
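To make the grading concrete, here's a minimal sketch of the kind of weighted scoring I mean. The criteria, weights and grades below are illustrative only, not our real numbers: each tool gets a rough 0-3 grade per criterion (or no grade at all for a "don't know"), each criterion is weighted by priority, and the totals give a first-cut ranking for deciding what goes through to the next round.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// First-pass grading sketch: grade each tool 0-3 against weighted criteria
// (P1 = weight 3, P2 = 2, P3 = 1) and rank by weighted total. Missing grades
// stand in for "don't know" entries and score zero until you learn more.
public class ToolComparison {

    private static final Map<String, Integer> CRITERIA = new LinkedHashMap<>();
    static {
        CRITERIA.put("programmatic access to GUI components", 3); // P1
        CRITERIA.put("supports testing Swing", 3);                // P1
        CRITERIA.put("allows versioned source control", 3);       // P1
        CRITERIA.put("easy to run alongside unit tests", 2);      // P2
        CRITERIA.put("ability to drive other products", 1);       // P3
    }

    static int weightedScore(Map<String, Integer> grades) {
        int total = 0;
        for (Map.Entry<String, Integer> criterion : CRITERIA.entrySet()) {
            Integer grade = grades.get(criterion.getKey());
            total += (grade == null ? 0 : grade) * criterion.getValue();
        }
        return total;
    }

    public static void main(String[] args) {
        // Entirely made-up grades for two candidates, for illustration only.
        Map<String, Integer> programmaticTool = new LinkedHashMap<>();
        programmaticTool.put("programmatic access to GUI components", 3);
        programmaticTool.put("supports testing Swing", 3);
        programmaticTool.put("allows versioned source control", 3);
        programmaticTool.put("easy to run alongside unit tests", 3);
        // "ability to drive other products" left out: a "don't know" entry.

        Map<String, Integer> recordPlaybackTool = new LinkedHashMap<>();
        recordPlaybackTool.put("supports testing Swing", 2);
        recordPlaybackTool.put("allows versioned source control", 1);

        System.out.println("Programmatic tool: " + weightedScore(programmaticTool));
        System.out.println("Record/playback tool: " + weightedScore(recordPlaybackTool));
    }
}
```

The spreadsheet version is just as good; the point is that the weights force you to be explicit about which requirements actually matter when two tools look similar.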
As an example, when we were looking for GUI automation tools recently we had 20 or so requirements including these, with their priorities:
- P1 programmatic access to GUI components
- P1 supports testing Swing
- P1 allows versioned source control
- P2 easy for Dev to run alongside unit tests
- P3 ability to drive other products
- P3 works with applications and applets
Our initial list of around 30 tools included pyWinAuto, Win32::GuiTest, Abbot, AutoHotkey, SIKULI, FEST, SilkTest and Squish, and we identified three classes of tool:
- purely record/playback
- purely programmatic
- hybrid
We trialled at least one tool from each class, attempting to create a small set of tests we'd identified as interesting for our product, and ultimately chose FEST, not least because we can share skills and test cases with the Dev team: they'll use the library in their unit tests and we'll drive it from JUnit to run application-level tests too.
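For a flavour of what that looks like, here's a minimal sketch of a FEST-Swing test driven from JUnit. The LoginFrame class and the component names ("username", "login", "greeting") are invented for the example and included only so the sketch is self-contained; the FrameFixture look-up-by-name-and-assert style is what FEST adds on top of plain JUnit.

```java
import java.awt.FlowLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JTextField;

import org.fest.swing.edt.GuiActionRunner;
import org.fest.swing.edt.GuiQuery;
import org.fest.swing.fixture.FrameFixture;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

// Application-level test driven through FEST-Swing from JUnit. LoginFrame and
// its component names are invented for this sketch, not taken from our product.
public class LoginFrameTest {

    private FrameFixture window;

    @Before
    public void setUp() {
        // Swing components must be created on the Event Dispatch Thread.
        LoginFrame frame = GuiActionRunner.execute(new GuiQuery<LoginFrame>() {
            @Override
            protected LoginFrame executeInEDT() {
                return new LoginFrame();
            }
        });
        window = new FrameFixture(frame);
        window.show();
    }

    @Test
    public void greetsTheUserAfterLogin() {
        window.textBox("username").enterText("tester");
        window.button("login").click();
        window.label("greeting").requireText("Hello, tester");
    }

    @After
    public void tearDown() {
        // Releases the frame and any resources FEST grabbed for the test.
        window.cleanUp();
    }

    // Minimal stand-in application frame so the sketch compiles on its own.
    static class LoginFrame extends JFrame {
        LoginFrame() {
            super("Login");
            final JTextField username = new JTextField(15);
            username.setName("username");
            final JLabel greeting = new JLabel(" ");
            greeting.setName("greeting");
            JButton login = new JButton("Log in");
            login.setName("login");
            login.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    greeting.setText("Hello, " + username.getText());
                }
            });
            setLayout(new FlowLayout());
            add(username);
            add(login);
            add(greeting);
            pack();
        }
    }
}
```

Because the tests are just JUnit classes, they version alongside the rest of the source and run from the same tooling the Dev team already uses, which is exactly the skills-and-test-case sharing we were after.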
We invested effort into choosing this technology to give ourselves the best chance of making the right choice first time but, as so often, we won't know whether it really does everything that we want until we're much further down the road. It'd be so much easier if we could just ask the 8-Ball.