This study (Jacobs, Martin, & Otieno, 2008) has implications for continued research on teacher content planning and subsequent instruction. By using the SLPAI to examine teacher participants' lesson plans, the program's science education instructors gain an opportunity to discuss content planning critically and explicitly in the context of the actual classrooms in which teacher participants work. By engaging teachers explicitly in conversations around SLPAI findings, teacher participants would be better supported in thinking about and planning activities that develop students' understanding of the nature of science or that increase student discourse in science. The instrument may also help teachers identify areas in which they could improve, as well as areas in which they already excel and can consciously choose to keep strengthening best practices. In addition, the SLPAI could serve as a professional development tool for program instructors, providing a means of evaluating their intended and enacted curriculum plans in the MSP courses. Conversations around data generated from faculty members' SLPAI and RTOP scores, in relation to program goals for teacher participants' development, could strengthen course instruction and better align instructor classroom practices with program goals.
With these caveats in mind, the SLPAI is a unique and powerful tool for measuring practice over time, especially when used in concert with other measurement tools. It serves dual purposes: as a formative tool that informs program development and promotes effective instruction of the teacher participants, and as a summative measure that allows evaluators to provide a richer, more complete picture of program effectiveness. As MSP and other teacher professional development programs expand through nationwide efforts to improve teacher quality, especially in STEM fields, evaluation researchers should continue to develop and implement methods for triangulating the various qualitative and quantitative measures used to learn from these large, diverse programs. In doing so, the education community stands to gain not only new knowledge about science teaching and learning but also improved evaluation methodologies.
References:
Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academies Press.
Brickhouse, N. W. (1990). Teachers' beliefs about the nature of science and their relationship to classroom practice. Journal of Teacher Education, 41(3), 53–62.
Brown, A. L., & Campione, J. C. (1996). Psychological theory and the design of innovative learning environments: On procedures, principles, and systems. In L. Schauble & R. Glaser (Eds.), Innovations in learning: New environments for education (pp. 289–325). Hillsdale, NJ: Erlbaum.
Chinn, C. A., & Malhotra, B. A. (2002). Epistemologically authentic inquiry in schools: A theoretical framework for evaluating inquiry tasks. Science Education, 86, 175–218.
Crowther, D. T., Lederman, N. G., & Lederman, J. S. (2005). Understanding the true meaning of nature of science. Science and Children, 43(2), 50–52.
Jacobs, C., Martin, S., & Otieno, T. (2008). A science lesson plan analysis instrument for formative and summative program evaluation. Science Education, 92(6), 1096–1126. http://www3.interscience.wiley.com/journal/118824306/abstract
Kahle, J. B., & Scantlebury, K. C. (2006). Evaluation of the University of Pennsylvania Science Teacher Institute, 2005–06. Oxford, OH: Miami University, Ohio's Evaluation & Assessment Center for Mathematics and Science Education.
National Research Council. (1996). National Science Education Standards. Washington, DC: National Academies Press.
National Research Council. (2006). Rising above the gathering storm: Energizing and employing America for a brighter economic future. Washington, DC: National Academies Press.
Sawada, D., Piburn, M., Turley, J., Falconer, K., Benford, R., Bloom, I., et al. (2002). Development of evaluation strategies and methods. In D. Sawada (Ed.), Reformed teacher education in science and mathematics: An evaluation of the Arizona Collaborative for Excellence in the Preparation of Teachers (ACEPT) (pp. 13–46). Retrieved November 29, 2007, from Arizona State University, ACEPT Web site: http://acept.asu.edu
Scantlebury, K., Kahle, J. B., & Yue, L. (2008). Evaluation of the University of Pennsylvania Science Teacher Institute, 2007–2008. Oxford, OH: Miami University, Evaluation & Assessment Center for Mathematics and Science Education.
Trigwell, K., & Prosser, M. (2004). Development and use of the Approaches to Teaching Inventory. Educational Psychology Review, 16(4), 409–424.
Wiggins, G., & McTighe, J. (2001). Understanding by design. Upper Saddle River, NJ: Merrill/Prentice Hall.