12-24-2009, 02:33 PM   #34
Kolenka
<Insert Wit Here>
Posts: 1,017
Karma: 1275899
Join Date: Jan 2008
Location: Puget Sound
Device: Kindle Oasis, Kobo Forma
If I were running a project, I'd make sure the full test suite ran against a patched device. 30MB sounds like they issued a partial patch rather than a full firmware image, which is riskier and can introduce regressions.
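
In rough terms, the flow I mean is something like the Python sketch below. The tool name ('devtool'), its flash/test subcommands, and the suite name are all placeholders for whatever vendor tooling you actually have, not a real CLI:

Code:
import subprocess
import sys

def flash(image_path: str) -> None:
    # 'devtool' is a stand-in for the vendor's flashing tool.
    subprocess.run(["devtool", "flash", image_path], check=True)

def run_full_suite() -> bool:
    # Run everything, not just the tests near the patched code.
    result = subprocess.run(["devtool", "test", "--suite", "full"])
    return result.returncode == 0

if __name__ == "__main__":
    flash(sys.argv[1])  # e.g. the 30MB partial patch image
    sys.exit(0 if run_full_suite() else 1)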

And if you have automation in the mix, the suite can be quite extensive, depending on how long it's been built up and what metrics you capture: battery life, for example, or stress testing to find the mean time it takes for the device to crash. To be clear, I'm talking about testing an actual mobile device where you 'own' the whole firmware.
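
The mean-time-to-crash part might look something like this rough sketch. device_alive() and reboot_device() are stand-ins for whatever heartbeat and power-cycle hooks a real test rig exposes (simulated here so the loop actually terminates):

Code:
import random
import time

def device_alive() -> bool:
    # Placeholder: a real rig would poll a heartbeat or a serial log.
    # Simulated here as a 1-in-200 chance of a crash per poll.
    return random.randrange(200) != 0

def reboot_device() -> None:
    # Placeholder for the rig's power-cycle hook.
    pass

def mean_time_to_crash(runs: int = 5, poll_s: float = 0.01) -> float:
    """Run the device until it dies, 'runs' times, and average the uptimes."""
    uptimes = []
    for _ in range(runs):
        start = time.monotonic()
        while device_alive():
            time.sleep(poll_s)
        uptimes.append(time.monotonic() - start)
        reboot_device()  # recover for the next run
    return sum(uptimes) / len(uptimes)

if __name__ == "__main__":
    print(f"mean time to crash: {mean_time_to_crash():.2f}s")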

The larger the project and the larger the team, the more important it becomes to do full passes on final builds (with partial passes on interim builds to verify fixes quickly), especially when your patches aren't full copies of the firmware. I've seen bugs slip through many 'quality gates' only to be caught on the final full pass... especially when you have software running that can drastically affect the performance of the whole device.
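
As a toy illustration of that policy (the build types and suite names are invented, not from any real pipeline):

Code:
def suites_for(build: str, touched_areas: set[str]) -> set[str]:
    """Full pass on final builds, partial pass on interim builds."""
    full = {"functional", "stress", "battery", "regression"}
    if build == "final":
        return full  # everything, every time
    # Interim build: only the areas the fixes touched, for fast turnaround.
    return touched_areas & full

print(suites_for("final", {"battery"}))    # full pass
print(suites_for("interim", {"battery"}))  # partial pass

The point being: the interim pass optimizes for turnaround, and the final full pass optimizes for coverage.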