But there is one thing we are missing completely: we aren't testing user-facing performance. When we push all that processing to the browser, that is where the user actually experiences performance. I've seen cases where the network transfer for all the data on an SPA view took less than 500 milliseconds, but the page took 16 seconds to render. With traditional testing this would go unnoticed unless someone flagged it through anecdotal evidence, and we can't rely on that happening; the typical response is "that isn't a production environment," harking back to the days when nearly everything ran server side.
So what do we do? Most performance testing tools still focus on server response times. Some have add-ons that measure rendering time, but generally only in a single browser. ATDD tools typically don't measure performance, though they can be adapted to do so, as the sketch below illustrates. But is this the best approach?
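As one illustration of that kind of adaptation, here is a minimal sketch (not a production implementation) of a browser-automation check that reads the W3C Navigation Timing API to compare network transfer time with overall load/render time. The URL and the timing budget are placeholder assumptions, and a real SPA view would usually need an extra wait for the framework's own "view rendered" signal, since the load event can complete before client-side templating finishes.

```java
// Hypothetical example: capturing client-side timings from a Selenium test.
// The URL and the 5-second budget below are illustrative assumptions.
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ClientRenderTiming {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/spa-view"); // placeholder URL

            JavascriptExecutor js = (JavascriptExecutor) driver;

            // Time spent fetching the document over the network.
            long networkMs = (Long) js.executeScript(
                "var t = window.performance.timing;" +
                "return t.responseEnd - t.navigationStart;");

            // Time until the load event completes: a rough proxy for how long
            // the browser spent parsing, executing scripts, and rendering.
            // (For heavy SPAs this still understates rendering work done after load.)
            long renderMs = (Long) js.executeScript(
                "var t = window.performance.timing;" +
                "return t.loadEventEnd - t.navigationStart;");

            System.out.println("Network transfer: " + networkMs + " ms");
            System.out.println("Load/render:      " + renderMs + " ms");

            // Fail the check if browser-side work dominates (threshold is arbitrary).
            if (renderMs - networkMs > 5000) {
                throw new AssertionError("Client-side rendering exceeded budget");
            }
        } finally {
            driver.quit();
        }
    }
}
```

Even a crude check like this surfaces the gap described above: a view whose data arrives in half a second but takes many seconds to become usable would fail here, while a server-response-only test would pass it.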
When dealing with any paradigm shift, there are three key components that need to be addressed: people, process, and tools. For this shift, people need to recognize that server response times are only part of the equation. The process needs to acknowledge that a significant portion of the processing is moving to the browser, and that processing must be performance tested. Finally, tools need to be improved or adapted to meet this need.
The point is, performance testing just got a lot harder!
This article was previously published on the SogetiLabs Blog.
Matthew Elmore (@mhelmore), with more than 20 years of experience in the development of distributed systems, joined Sogeti Des Moines in June 2013 as a Manager Consultant specializing in Java technologies, with a focus on Web Service Architecture and Design. More on Matthew.