Hello everyone, this is a project we are currently implementing.
Scope: Create a WBT (web-based training) solution that will train analysts to view images, assess image quality, and assign a rating.
Background: The aerial imaging community utilizes the National Imagery Interpretability Rating Scale (NIIRS) to define and measure the quality of images and the performance of imaging systems. Through a process referred to as "rating" an image, imagery analysts use the NIIRS to assign a number that indicates the interpretability of a given image. The NIIRS concept provides a means to directly relate the quality of an image to the interpretation tasks for which it may be used. Although the NIIRS has been applied primarily to the evaluation of aerial imagery, it provides a systematic approach to measuring the quality of photographic or digital imagery, the performance of image capture devices, and the effects of image processing algorithms.
What contributed to the project’s success or failure?
The project is going to be implemented, but not as we would like. Some of the images provided for the practical exercises and quizzes were doctored; the SMEs did not go out and obtain actual images at the ratings presented. This caused problems for the students taking the course during the pilot phase: each analyst may have a different monitor with different settings, so the images do not display at the appropriate NIIRS rating.
Which parts of the PM process, if included, would have made the project more successful?
I don't think there is anything more we could have done. Before using the images, we advised the SMEs to find real images, but they stated that the images would work as provided. We therefore loaded the images into the module, and during testing it was discovered that they were not appropriate. This caused roughly a three-week delay, because the SMEs had to go back and obtain new images.