One of the continuing barriers to successful STEM education across all levels of learning is the lack of connection among the three constituencies that most directly shape secondary students' STEM educational experiences: 1) the university faculty who prepare the teachers, 2) the secondary STEM teachers themselves, and 3) the school district administrators who employ them. We use multiple methodologies to examine and quantify three related questions that are central to the continuing development and sustainability of STEM education improvement efforts. In short: are math and science teacher preparation programs giving their graduates what they need to promote STEM learning successfully as in-service teachers, and if not, where are the gaps and what can be done to close them?
The National Science Foundation has invested considerable resources in mathematics and science educational reform at the secondary level. These efforts have spanned curricular reform, professional development for secondary teachers, and recruitment initiatives aimed at increasing both the quality and the quantity of math and science teachers. This study focuses on an area that has received little attention in the peer-reviewed literature but may be critically important to systemic reform in math and science education: building data-driven communication among in-service teachers, the university faculty who teach them, and the school district administrators who employ and evaluate them.
Pre-service teachers are, by definition, supervised during their field experiences by a university faculty supervisor and by school district personnel (a cooperating teacher, an administrator, and so on). Largely unexplored, however, are the university faculty who are not field supervisors but who teach math and science education courses, the in-service teachers who are no longer part of induction processes, and the school district administrators who must contend with reform efforts in their schools. It is the lack of quantitative research involving these constituencies and, most importantly, the relationships between and among them that speaks to the utility of our study.
All teacher preparation institutions must be accredited, at the very least, by their respective state agencies. But if STEM education is to develop partnerships with school districts that are truly beneficial to all parties, there must be systematic communication among the educators who prepare future STEM teachers, the teachers themselves, and the people who employ them. What mechanisms exist for linking the needs of local and regional schools, which are likely to employ teacher graduates from local and regional universities, with the universities that prepare those teachers? Until this question is answered, there is no real way to test the efficacy of any school improvement effort: university faculty will continue to meet accreditation requirements by doing what they believe is best in terms of disciplinary and pedagogical knowledge, and this may or may not (more likely the latter) be congruent with the improvement efforts of secondary schools. The potential clash of beliefs here has less to do with who is right or wrong than with the absence of a reliable and meaningful feedback loop. Until these constituencies are aligned, improvement efforts cannot be evaluated holistically. Our study seeks to create a context that provides stakeholders with the information they need to hold data-driven discussions about STEM teacher preparation programs, so that all three constituencies (university STEM and education faculty, school district administrators, and in-service teachers) can work toward enhancing STEM teaching and learning by revising teacher preparation programs based on feedback from local and regional schools.